| path (string, 7–265 chars) | concatenated_notebook (string, 46–17M chars) |
|---|---|
02 Supervised Learning - Regression/Week 6 - Data Preprocessing/02 Additional Case Study - Laptop Prices.ipynb | ###Markdown
Laptop Configuration and Price Analysis Case Study

Context

Laptopia101 is an online laptop retailer with a wide range of products. Different types of customers have different requirements, and Laptopia101 wants to improve its website by including informative visuals regarding laptop configuration and prices to improve customer experience. The original dataset can be viewed [here](https://www.kaggle.com/muhammetvarl/laptop-price).

Objective

To answer some of the questions which will help us understand the kind of information and visuals Laptopia101 can put on their website to improve customer experience.

Data Description

The data contains information about the model, manufacturer, price, and configuration of various laptops in the inventory of Laptopia101. The detailed data dictionary is given below.

**Data Dictionary**
- Company: Laptop Manufacturer
- Product: Brand and Model
- TypeName: Type (Notebook, Ultrabook, Gaming, etc.)
- Inches: Screen Size
- ScreenResolution: Screen Resolution
- Cpu: Central Processing Unit
- Ram: Laptop RAM
- Memory: Hard Disk / SSD Memory
- GPU: Graphics Processing Unit
- OpSys: Operating System
- Weight: Laptop Weight
- Price_euros: Price in euros
###Code
# this will help in making the Python code more structured automatically (good coding practice)
%load_ext nb_black
# Libraries to help with reading and manipulating data
import numpy as np
import pandas as pd
# Libraries to help with data visualization
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# Removes the limit for the number of displayed columns
pd.set_option("display.max_columns", None)
# Sets the limit for the number of displayed rows
pd.set_option("display.max_rows", 200)
# loading the dataset
df = pd.read_csv("./datasets/laptop_price.csv", engine="python")
# checking the shape of the data
print(f"There are {df.shape[0]} rows and {df.shape[1]} columns.") # f-string
# let's view a sample of the data
df.sample(n=10, random_state=1)
# checking column datatypes and number of non-null values
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1303 entries, 0 to 1302
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 laptop_ID 1303 non-null int64
1 Company 1303 non-null object
2 Product 1303 non-null object
3 TypeName 1303 non-null object
4 Inches 1303 non-null float64
5 ScreenResolution 1303 non-null object
6 Cpu 1303 non-null object
7 Ram 1303 non-null object
8 Memory 1303 non-null object
9 Gpu 1303 non-null object
10 OpSys 1303 non-null object
11 Weight 1303 non-null object
12 Price_euros 1303 non-null float64
dtypes: float64(2), int64(1), object(10)
memory usage: 132.5+ KB
###Markdown
* *laptop_ID*, *Inches*, and *Price_euros* are numerical columns.
* All other columns are of *object* type.
###Code
# checking for missing values
df.isnull().sum()
###Output
_____no_output_____
###Markdown
* There are no missing values in the data.
###Code
# Let's look at the statistical summary of the data
df.describe(include="all").T
###Output
_____no_output_____
###Markdown
**Observations**
* There are 19 different laptop manufacturing companies in the data.
* There are over 600 different laptop models in the data.
* The screen size varies from 10.1 to 18.1 inches.
* The laptop prices vary from 174 to ~6100 euros.
###Code
# function to create labeled barplots
def labeled_barplot(data, feature, perc=False, n=None):
"""
Barplot with percentage at the top
data: dataframe
feature: dataframe column
perc: whether to display percentages instead of count (default is False)
n: displays the top n category levels (default is None, i.e., display all levels)
"""
total = len(data[feature]) # length of the column
count = data[feature].nunique()
if n is None:
plt.figure(figsize=(count + 1, 5))
else:
plt.figure(figsize=(n + 1, 5))
plt.xticks(rotation=90, fontsize=15)
ax = sns.countplot(
data=data,
x=feature,
palette="Paired",
order=data[feature].value_counts().index[:n].sort_values(),
)
for p in ax.patches:
if perc == True:
label = "{:.1f}%".format(
100 * p.get_height() / total
) # percentage of each class of the category
else:
label = p.get_height() # count of each level of the category
x = p.get_x() + p.get_width() / 2 # width of the plot
y = p.get_height() # height of the plot
ax.annotate(
label,
(x, y),
ha="center",
va="center",
size=12,
xytext=(0, 5),
textcoords="offset points",
) # annotate the percentage
plt.show() # show the plot
###Output
_____no_output_____
###Markdown
Q. How many laptops are available across the different companies manufacturing laptops?
###Code
df.Company.value_counts()
labeled_barplot(df, "Company")
###Output
_____no_output_____
###Markdown
HP, Dell, and Lenovo have the highest number of available laptops. Q. How does the price vary across the different companies manufacturing laptops?
###Code
df.groupby("Company")["Price_euros"].mean()
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
sns.barplot(data=df, y="Price_euros", x="Company")
plt.xticks(rotation=90)
plt.subplot(1, 2, 2)
sns.boxplot(data=df, y="Price_euros", x="Company")
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Asus, Dell, Lenovo, and HP offer laptops at competitive prices. Apple and MSI laptops are slightly higher priced, while Acer laptops are cheaper. Laptops manufactured by Razer are the most expensive in general. Q. How does the price vary across the different types of laptops?
###Code
df.groupby("TypeName")["Price_euros"].mean()
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
sns.barplot(data=df, y="Price_euros", x="TypeName")
plt.xticks(rotation=45)
plt.subplot(1, 2, 2)
sns.boxplot(data=df, y="Price_euros", x="TypeName")
plt.xticks(rotation=45)
plt.show()
###Output
_____no_output_____
###Markdown
Gaming laptops and workstations are the most expensive types of laptops on average, while Notebooks and Netbooks are the cheapest.
###Code
# let's create a copy of our data
df1 = df.copy()
###Output
_____no_output_____
###Markdown
Q. The amount of RAM available is a key factor in gaming performance. How does the amount of RAM vary by the company for Gaming laptops?
###Code
df1.head()
# defining a function to extract the amount of RAM
def ram_to_num(ram_val):
"""
This function takes in a string representing the amount of RAM
and converts it to a number. For example, '8GB' becomes 8.
If the input is already numeric, which probably means it's NaN,
this function just returns np.nan.
"""
if isinstance(ram_val, str): # checks if 'ram_val' is a string
if ram_val.endswith("GB"):
return float(ram_val.replace("GB", ""))
elif ram_val.endswith("MB"):
return (
float(ram_val.replace("MB", "")) / 1024
) # converting MB to GB by dividing by 1024
else: # this happens when the current ram is np.nan
return np.nan
# extract the amount of RAM
df1["RAM_GB"] = df1["Ram"].apply(ram_to_num)
df1[["RAM_GB", "Ram"]].head()
df1.drop("Ram", axis=1, inplace=True)
df1["RAM_GB"].describe()
df_gaming = df1[df1.TypeName == "Gaming"]
df_gaming.groupby("Company")["RAM_GB"].mean()
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
sns.barplot(data=df_gaming, y="RAM_GB", x="Company")
plt.xticks(rotation=45)
plt.subplot(1, 2, 2)
sns.boxplot(data=df_gaming, y="RAM_GB", x="Company")
plt.xticks(rotation=45)
plt.show()
###Output
_____no_output_____
###Markdown
Razer provides the highest amount of RAM on average for Gaming laptops. Q. GPUs are a key component for users interested in gaming, and Nvidia is one of the leading manufacturers of GPUs. How does the price vary by the company for Gaming laptops with an Nvidia GeForce GTX GPU?
###Code
# we create a new column to indicate if a laptop has the NVIDIA Geforce GTX GPU
df1["GPU_Nvidia_GTX"] = [
1 if "Nvidia GeForce GTX" in item else 0 for item in df1["Gpu"].values
]
df1["GPU_Nvidia_GTX"].value_counts()
df_gaming_nvidia = df1[(df1.TypeName == "Gaming") & (df1.GPU_Nvidia_GTX == 1)]
df_gaming_nvidia.groupby("Company")["Price_euros"].mean()
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
sns.barplot(data=df_gaming_nvidia, y="Price_euros", x="Company")
plt.xticks(rotation=45)
plt.subplot(1, 2, 2)
sns.boxplot(data=df_gaming_nvidia, y="Price_euros", x="Company")
plt.xticks(rotation=45)
plt.show()
###Output
_____no_output_____
###Markdown
Lenovo and Acer provide gaming laptops with an Nvidia GeForce GTX GPU at comparatively cheaper prices. Q. It has been found from sales history that executives prefer fast and lightweight laptops, and Ultrabooks are one of the best choices for them. How does the weight of laptops of type Ultrabook differ by the company?
###Code
# checking the units of weight
weight_units = list(set([item[-2:] for item in df1.Weight]))
weight_units
###Output
_____no_output_____
###Markdown
All laptops in the data have weight in kilograms.
###Code
# removing the units and converting to float
df1["Weight_kg"] = df1["Weight"].str.replace("kg", "").astype(float)
df1.drop("Weight", axis=1, inplace=True)
df1["Weight_kg"].describe()
df_ultrabook = df1[df1.TypeName == "Ultrabook"]
df_ultrabook.groupby("Company")["Weight_kg"].mean()
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
sns.barplot(data=df_ultrabook, y="Weight_kg", x="Company")
plt.xticks(rotation=90)
plt.subplot(1, 2, 2)
sns.boxplot(data=df_ultrabook, y="Weight_kg", x="Company")
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Apple, Samsung, Huawei, and LG provide the lightest Ultrabooks. A few Samsung and Apple laptops weigh less than a kilogram. Q. The sales history also shows that executives have a preference for small laptops running on Windows OS for ease of use. How many laptops running on Windows OS with a screen size of no more than 14 inches are available across different companies?
###Code
df1.OpSys.value_counts()
df1["OS"] = df1["OpSys"].str.split(" ").str[0]
df1["OS"].value_counts()
df1["OS"] = [
"NA" if item == "No" else "MacOS" if item.lower().startswith("mac") else item
for item in df1.OS.values
]
df1["OS"].value_counts()
df_win_small = df1[(df1.OS == "Windows") & (df1.Inches <= 14)]
df_win_small.Company.value_counts()
labeled_barplot(df_win_small, "Company", perc=True)
###Output
_____no_output_____
###Markdown
Lenovo and HP manufacture around 52% of the laptops running on Windows OS with a screen size of no more than 14 inches. Q. Operating systems like Linux and ChromeOS are not so common, and the sales history shows that the number of customers buying laptops running on these operating systems is limited. How many laptops across different companies run on a Linux or Chrome operating system?
###Code
df_linux_chrome = df1[(df1.OS == "Linux") | (df1.OS == "Chrome")]
df_linux_chrome.shape[0]
df_linux_chrome.groupby(["OS", "Company"]).Product.count()
plt.figure(figsize=(12, 6))
sns.countplot(data=df_linux_chrome, x="OS", hue="Company")
plt.xticks(rotation=45)
###Output
_____no_output_____
###Markdown
Dell has the highest number of laptops running on Linux, while Acer has the highest number of laptops running on ChromeOS. Q. High-resolution screens are good to have in laptops for entertainment purposes. How many laptops are available for different companies with screen resolutions better than 1600x900?
###Code
# extract the screen resolution
df1["ScrRes"] = df1["ScreenResolution"].str.split(" ").str[-1]
df1[["ScrRes", "ScreenResolution"]].head()
df1.drop("ScreenResolution", axis=1, inplace=True)
df1.ScrRes.value_counts()
df1["ScrRes_C1"] = df1.ScrRes.str[:4].astype(float)
df1["ScrRes_C1"].describe()
df_highres = df1[df1.ScrRes_C1 > 1600]
df_highres.Company.value_counts()
labeled_barplot(df_highres, "Company")
###Output
_____no_output_____
###Markdown
HP, Dell, and Lenovo provide more options in terms of laptops having higher screen resolutions. Q. What percentage of laptops in each company have high-resolution screens?
###Code
df_highres.Company.unique()
df.Company.unique()
# let us compute the percentage of laptops in each company having high resolution screens
df_highres.Company.value_counts() / df[df.Company != "Fujitsu"].Company.value_counts()
###Output
_____no_output_____
###Markdown
Many companies manufacture laptops with high-resolution screens only. Fujitsu does not manufacture laptops with high-resolution screens. Q. Intel and AMD are primary manufacturers of processors. How does the speed of processing vary between these two processor manufacturers for laptops of type Notebook?
###Code
df1.head()
df1["CPU_mnfc"] = df1.Cpu.str.split().str[0]
df1["CPU_speed"] = df1.Cpu.str.split().str[-1]
df1.CPU_mnfc.value_counts()
# checking the units of CPU speed
cpu_units = list(set([item[-3:] for item in df1.CPU_speed]))
cpu_units
# extract the CPU speed in GHz (remove the 'GHz' suffix and convert to float)
df1["CPU_speed"] = df1["CPU_speed"].str.replace("GHz", "").astype(float)
df1[["CPU_speed", "Cpu"]].head()
df_notebook = df1[df1.TypeName == "Notebook"]
df_notebook.groupby("CPU_mnfc")["CPU_speed"].mean()
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
sns.barplot(data=df_notebook, y="CPU_speed", x="CPU_mnfc")
plt.xticks(rotation=45)
plt.subplot(1, 2, 2)
sns.boxplot(data=df_notebook, y="CPU_speed", x="CPU_mnfc")
plt.xticks(rotation=45)
plt.show()
###Output
_____no_output_____
###Markdown
AMD processors tend to offer more processing speed than Intel processors. Q. Many recent laptops have started to provide multiple storage options (like an SSD with an HDD). What are the different kinds of storage available for laptops manufactured by Apple? (Refer to the '*Memory*' column)
###Code
np.random.seed(2)
df1.sample(10)
df1["Memory"] = [
item + " + NaN" if "+" not in item else item for item in df1["Memory"].values
]
df1.head()
df1["Storage1"] = df1["Memory"].str.split("+").str[0].str.strip()
df1["Storage2"] = df1["Memory"].str.split("+").str[1].str.strip()
df1.head()
np.random.seed(2)
df1.sample(10)
df1["Storage1_Type"] = df1["Storage1"].str.split(" ").str[1]
df1["Storage1_Volume"] = df1["Storage1"].str.split(" ").str[0]
df1["Storage2_Type"] = df1["Storage2"].str.split(" ").str[1]
df1["Storage2_Volume"] = df1["Storage2"].str.split(" ").str[0]
df1.head()
np.random.seed(2)
df1.sample(10)
def storage_volume_to_num(str_vol_val):
"""This function takes in a string representing the volume of a storage device
and converts it to a number.
For example, '256GB' becomes 256.
If the input is already numeric, which probably means it's NaN,
this function just returns np.nan."""
if isinstance(str_vol_val, str): # checks if `str_vol_val` is a string
multiplier = 1 # handles GB vs TB
if str_vol_val.endswith("TB"):
multiplier = 1024
return float(str_vol_val.replace("GB", "").replace("TB", "")) * multiplier
else: # this happens when the str_vol is np.nan
return np.nan
df1["Storage1_Volume"] = df1["Storage1_Volume"].apply(storage_volume_to_num)
df1["Storage2_Volume"] = df1["Storage2_Volume"].apply(storage_volume_to_num)
df1.head()
np.random.seed(2)
df1.sample(10)
df_apple = df1[df1.Company == "Apple"]
df_apple[["Storage1_Volume", "Storage2_Volume"]].describe()
df_apple["Storage1_Type"].value_counts()
df_apple["Storage2_Type"].value_counts()
###Output
_____no_output_____ |
Notebooks/Powerusage_sourcecode.ipynb | ###Markdown
Prediction of the maximum power usage (max kWh) of a retail store on any given day.

Data & Task Description

The given data depicts the power usage (max kWh) of a retail store against various features (mostly climatic) across a period of 2 years. The patterns of the data against the various parameters need to be understood and used to build a model to predict the maximum usage (max kWh) on unseen data, i.e., the test data.

Train-Test Split: 75-25
###Code
#Importing the required libraries
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
%matplotlib inline
#Importing the data to a data frame
energy_dt = pd.read_csv("C:/Users/IBM_ADMIN/Desktop/Accenture_assign/ModelData1.csv",header=0)
energy_dt.shape
energy_dt.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8215 entries, 0 to 8214
Data columns (total 23 columns):
Row Labels 8215 non-null object
Hour 8215 non-null int64
Date-Hour 8215 non-null object
Month 8215 non-null int64
Year 8215 non-null int64
Sum/Win 8215 non-null int64
WeekNumber 8215 non-null int64
Weeday 8215 non-null int64
Prior Period Temp C 8215 non-null int64
Avg Temp C 8215 non-null int64
Winter HDD (51 Base) 8215 non-null float64
Summer CDD (51 Base) 8215 non-null float64
Max Temp C 8215 non-null int64
Dew C 8215 non-null int64
Humidity 8213 non-null float64
Visibility (km) 8213 non-null float64
Wind Dir 8215 non-null object
Wind Speed (km/h) 8214 non-null object
Gust Speed (km/h) 1547 non-null float64
Precip (mm) 149 non-null float64
Events 449 non-null object
Conditions 8215 non-null object
Max kW 8215 non-null int64
dtypes: float64(6), int64(11), object(6)
memory usage: 1.4+ MB
###Markdown
1. Data Preprocessing

1) Filling the missing values, if any
2) Converting the datatypes of the features, if needed
3) Elimination or creation of new features
###Code
energy_dt.isnull().sum()
energy_dt[energy_dt["Visibility (km)"].isnull()]
energy_dt[energy_dt["Conditions"]=="Unknown"]['Visibility (km)'].mean()
energy_dt['Visibility (km)'].fillna(energy_dt[energy_dt["Conditions"]=="Unknown"]['Visibility (km)'].mean(),inplace=True)
energy_dt['Wind Speed (km/h)'].fillna(energy_dt['Wind Speed (km/h)'].mode(),inplace=True)
energy_dt[energy_dt["Humidity"].isnull()]
energy_dt['Humidity'] = energy_dt.groupby(['Conditions'])['Humidity'].apply(lambda x: x.fillna(x.mean()))
###Output
_____no_output_____
###Markdown
___Addressing the missing values of Precipitation___

The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail. __Precipitation occurs when a portion of the atmosphere becomes saturated with water vapor, so that the water condenses and "precipitates". Thus, fog and mist are not precipitation but suspensions__, because the water vapor does not condense sufficiently to precipitate. Source: https://en.wikipedia.org/wiki/Precipitation

The precipitation depends on the atmospheric conditions, which means "Conditions" and "Precipitation" convey the same information. From the above description, the precipitation can be made '0' for Fog and Mist.
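The zeroing step described above is not shown in the cells that follow (they fill the missing values with group means instead), so here is a minimal sketch of what it might look like; the exact labels in the 'Conditions' column are an assumption.

```python
# Hypothetical sketch: set precipitation to 0 where the condition is a suspension (fog/mist)
suspension_mask = energy_dt["Conditions"].str.contains("Fog|Mist", case=False, na=False)
energy_dt.loc[suspension_mask, "Precip (mm)"] = energy_dt.loc[suspension_mask, "Precip (mm)"].fillna(0)
```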
###Code
energy_dt['Precip (mm)'] = energy_dt.groupby(['Conditions'])['Precip (mm)'].apply(lambda x: x.fillna(x.mean()))
energy_dt.groupby('Sum/Win')['Precip (mm)'].mean().reset_index()
###Output
_____no_output_____
###Markdown
There are still some missing values, so they were filled based on the "Sum/Win" (Summer or Winter) column.
###Code
energy_dt['Precip (mm)'] = energy_dt.groupby(['Sum/Win'])['Precip (mm)'].apply(lambda x: x.fillna(x.mean()))
del energy_dt["Events"]
###Output
_____no_output_____
###Markdown
"Conditions" and "Events" convey the same information. So,deleted "Events"
###Code
energy_dt.isnull().sum()
energy_dt[energy_dt["Wind Speed (km/h)"].isnull()]
###Output
_____no_output_____
###Markdown
Many observations have a wind speed of "Calm". Some research turned up the following: as per the Beaufort wind scale, a wind speed classified as "Calm" ranges between 0 and 1, so we can randomize the values for "Calm" between 0 and 1. Source: http://www.spc.noaa.gov/faq/tornado/beaufort.html

Hence, the speed values for "Calm" were randomized between 0 and 1.
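Note that the cell below implements this with the absolute value of a standard-normal draw, which is not strictly bounded by 1; a literal "uniform between 0 and 1" version might look like the sketch below (an alternative, not the code used in this notebook):

```python
# Alternative sketch: draw the "Calm" wind speeds uniformly from [0, 1)
calm_mask = energy_dt["Wind Speed (km/h)"] == "Calm"
energy_dt.loc[calm_mask, "Wind Speed (km/h)"] = np.random.uniform(0, 1, size=calm_mask.sum())
```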
###Code
energy_dt['Wind Speed (km/h)']= np.where(energy_dt['Wind Speed (km/h)']=='Calm',abs(energy_dt['Wind Speed (km/h)'].apply(lambda v: np.random.normal(0,1))),energy_dt['Wind Speed (km/h)'])
#Convert the Wind speed to float
energy_dt['Wind Speed (km/h)']=pd.to_numeric(energy_dt['Wind Speed (km/h)'])
energy_dt['Wind Speed (km/h)'] = energy_dt.groupby(['Conditions'])['Wind Speed (km/h)'].apply(lambda x: x.fillna(x.mean()))
###Output
_____no_output_____
###Markdown
___Addressing the missing values of Gust speed:___

A gust and wind both refer to the movement of different gases in the earth's atmosphere around the earth. Wind is created by the difference in atmospheric pressure caused by lighter hot air and denser cold air. On the other hand, gusts are brief increases in the wind's speed, mainly caused by the wind passing through the terrain. Wind blows at varying speeds throughout the entire day. Gusts only occur for extremely short periods of time, usually lasting no more than 20 seconds, occurring at 2-minute intervals. Source: http://www.differencebetween.net/science/nature/difference-between-gust-and-wind/

An interesting inference can be drawn from the above: __Gust speed is always > Wind speed__
###Code
energy_dt.groupby('Conditions')['Wind Speed (km/h)','Gust Speed (km/h)'].mean().reset_index()
energy_dt[energy_dt["Gust Speed (km/h)"].isnull()]["Conditions"].value_counts()
energy_dt['Gust Speed (km/h)'] = energy_dt.groupby(['Conditions'])['Gust Speed (km/h)'].apply(lambda x: x.fillna(x.mean()))
energy_dt.groupby('Conditions')['Wind Speed (km/h)','Gust Speed (km/h)'].mean().reset_index()
###Output
_____no_output_____
###Markdown
For "Drizzle", The mean Gust speed was also missing. Found the below in google about it.Wind blows in varying speeds throughout the entire day. Gusts only occur for extremely short periods of time, usually lasting no more than just 20 seconds, occurring at 2-minute intervals.Source: http://www.differencebetween.net/science/nature/difference-between-gust-and-wind/An interesting inference can be drawn from the above:__Gust speed is always > Wind speed__The mean value of the differences between gust speed and wind speed is 21.9.Hence,Computed the gust speed of the "Drizzle" in the below manner.
###Code
energy_dt['Gust Speed (km/h)']=energy_dt['Gust Speed (km/h)'].fillna(energy_dt['Wind Speed (km/h)']+21.9)
dayhour=energy_dt.groupby(by=['Weeday','Hour']).mean()['Max kW'].unstack()
dayhour
sns.heatmap(dayhour,cmap='coolwarm')
energy_dt.isnull().sum()
###Output
_____no_output_____
###Markdown
"Hour" and "Sum" are categorical values with integer values
###Code
trainDfDummies = pd.get_dummies(energy_dt, columns=['Hour','Sum/Win'])
trainDfDummies.columns
ener = pd.get_dummies( trainDfDummies[['WeekNumber', 'Weeday', 'Winter HDD (51 Base)',
'Summer CDD (51 Base)', 'Max Temp C', 'Dew C', 'Humidity',
'Visibility (km)', 'Wind Dir', 'Wind Speed (km/h)', 'Gust Speed (km/h)',
'Precip (mm)', 'Conditions','Hour_0', 'Hour_1', 'Hour_2',
'Hour_3', 'Hour_4', 'Hour_5', 'Hour_6', 'Hour_7', 'Hour_8', 'Hour_9',
'Hour_10', 'Hour_11', 'Hour_12', 'Hour_13', 'Hour_14', 'Hour_15',
'Hour_16', 'Hour_17', 'Hour_18', 'Hour_19', 'Hour_20', 'Hour_21',
'Hour_22', 'Hour_23', 'Sum/Win_0', 'Sum/Win_1']], drop_first = True )
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.model_selection import train_test_split  # 'cross_validation' was renamed to 'model_selection' in newer scikit-learn
# One-hot encode the categorical column at index 1 ('Weeday') and build the feature matrix
onehotenc = OneHotEncoder(categorical_features=[1])
x = onehotenc.fit_transform(ener).toarray()
# Target variable: the maximum power usage
y = energy_dt["Max kW"].values
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)
###Output
_____no_output_____
###Markdown
Model1: Linear Regression
###Code
#with all the features
import statsmodels.api as sm
# Note the difference in argument order
model = sm.OLS(y_train, x_train).fit()
predictions = model.predict(x_test) # make the predictions by the model
# Print out the statistics
model.summary()
#Predicting the test results
predictions
sns.pairplot(energy_dt, x_vars=['Prior Period Temp C', 'Avg Temp C','Max Temp C','Hour','WeekNumber', 'Weeday'], y_vars='Max kW',size=4, aspect=0.7)
###Output
_____no_output_____
###Markdown
All the temperature columns are highly correlated as the shapes of the curves look to be the same. Quantifying them with a heatmap would help.
###Code
sns.set_context('notebook')
sns.heatmap(energy_dt[["Prior Period Temp C","Avg Temp C","Max Temp C"]].corr())
###Output
_____no_output_____
###Markdown
Hence, any one of these features would suffice. We can eliminate two of them and see whether R2 and Adjusted R2 change, and then repeat the same exercise for the other features.
###Code
#After removing Avg Temp and Prior Temp
import statsmodels.api as sm
# Note the difference in argument order
model = sm.OLS(y_train, x_train).fit()
predictions = model.predict(x_test) # make the predictions by the model
# Print out the statistics
model.summary()
###Output
_____no_output_____
###Markdown
As expected, there is no change in the metrics.
###Code
sns.pairplot(energy_dt, x_vars=['Winter HDD (51 Base)', 'Summer CDD (51 Base)', 'Dew C',
'Humidity','Visibility (km)','Precip (mm)'], y_vars='Max kW', size=4, aspect=0.7)
plt.figure(figsize=(18,4))
sns.heatmap(ener[['Winter HDD (51 Base)', 'Summer CDD (51 Base)', 'Dew C',
'Humidity','Visibility (km)','Precip (mm)','Wind Speed (km/h)',
'Gust Speed (km/h)']].corr())
###Output
_____no_output_____
###Markdown
There are no suspiciously high correlations (>90%). Hence, no conclusions can be drawn from this plot.
###Code
sns.pairplot(energy_dt, x_vars=['Wind Speed (km/h)',
'Gust Speed (km/h)'], y_vars='Max kW', size=4, aspect=0.7)
sns.heatmap(ener[['Wind Speed (km/h)',
'Gust Speed (km/h)']].corr())
#After removing "Weeday" feature
#with all the features
import statsmodels.api as sm
# Note the difference in argument order
model = sm.OLS(y_train, x_train).fit()
predictions = model.predict(x_test) # make the predictions by the model
# Print out the statistics
model.summary()
###Output
_____no_output_____
###Markdown
Adjusted R2 decreased by 0.001. Hence, it's not advisable to eliminate this feature. Rather, let's encode it and see if the R2 improves (a sketch of one possible encoding is shown below).
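Since the cell that follows repeats the model fit without showing the encoding step itself, here is a minimal sketch, assuming the dummified frame `ener`, the target `y`, and the imports from the earlier cells, of how 'Weeday' might be one-hot encoded before refitting:

```python
# Hypothetical sketch: one-hot encode 'Weeday', then rebuild the split and refit
ener_weekday = pd.get_dummies(ener, columns=["Weeday"], drop_first=True)
x_train, x_test, y_train, y_test = train_test_split(
    ener_weekday.values, y, test_size=0.25, random_state=0
)
model = sm.OLS(y_train, x_train).fit()
model.summary()
```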
###Code
#After adding "Weeday" feature again and encoding it
#with all the features
import statsmodels.api as sm
# Note the difference in argument order
model = sm.OLS(y_train, x_train).fit()
predictions = model.predict(x_test) # make the predictions by the model
# Print out the statistics
model.summary()
###Output
_____no_output_____
###Markdown
Model2: Random Forest
###Code
# Import the model we are using
from sklearn.ensemble import RandomForestRegressor
# Instantiate model with 1200 decision trees
rf = RandomForestRegressor(n_estimators = 1200, random_state = 42)
# Train the model on training data
rf.fit(x_train, y_train);
y_pred_rf=rf.predict(x_test)
import numpy as np
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
y_pred_linear=predictions
mean_absolute_percentage_error(y_test,y_pred_linear)
mean_absolute_percentage_error(y_test,y_pred_rf)
from sklearn.metrics import r2_score
coefficient_of_determination_rf = r2_score(y_test, y_pred_rf)
coefficient_of_determination_rf
from sklearn.metrics import r2_score
coefficient_of_determination_lin = r2_score(y_test, predictions)
coefficient_of_determination_lin
sns.distplot(predictions)
sns.distplot(y_pred_rf)
sns.distplot(y_test)
###Output
_____no_output_____
###Markdown
The distribution plots of the predicted values from the two regressors and of the actual values almost overlap.
###Code
# Get numerical feature importances
importances = list(rf.feature_importances_)
# List of tuples with variable and importance
feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(['WeekNumber', 'Weeday','Winter HDD (51 Base)',
'Summer CDD (51 Base)', 'Max Temp C', 'Dew C', 'Humidity',
'Visibility (km)', 'Wind Dir', 'Wind Speed (km/h)', 'Gust Speed (km/h)',
'Precip (mm)', 'Conditions','Hour_0', 'Hour_1', 'Hour_2',
'Hour_3', 'Hour_4', 'Hour_5', 'Hour_6', 'Hour_7', 'Hour_8', 'Hour_9',
'Hour_10', 'Hour_11', 'Hour_12', 'Hour_13', 'Hour_14', 'Hour_15',
'Hour_16', 'Hour_17', 'Hour_18', 'Hour_19', 'Hour_20', 'Hour_21',
'Hour_22', 'Hour_23', 'Sum/Win_0', 'Sum/Win_1'], importances)]
# Sort the feature importances by most important first
feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True)
# Print out the feature and importances
[print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances];
###Output
Variable: Gust Speed (km/h) Importance: 0.3
Variable: Conditions Importance: 0.08
Variable: Wind Speed (km/h) Importance: 0.07
Variable: Visibility (km) Importance: 0.05
Variable: Hour_4 Importance: 0.05
Variable: Hour_5 Importance: 0.04
Variable: Hour_6 Importance: 0.04
Variable: Hour_7 Importance: 0.04
Variable: Hour_8 Importance: 0.04
Variable: Sum/Win_1 Importance: 0.04
Variable: Hour_9 Importance: 0.03
Variable: Hour_10 Importance: 0.03
Variable: Hour_11 Importance: 0.02
Variable: Precip (mm) Importance: 0.01
Variable: Hour_1 Importance: 0.01
Variable: WeekNumber Importance: 0.0
Variable: Weeday Importance: 0.0
Variable: Winter HDD (51 Base) Importance: 0.0
Variable: Summer CDD (51 Base) Importance: 0.0
Variable: Max Temp C Importance: 0.0
Variable: Dew C Importance: 0.0
Variable: Humidity Importance: 0.0
Variable: Wind Dir Importance: 0.0
Variable: Hour_0 Importance: 0.0
Variable: Hour_2 Importance: 0.0
Variable: Hour_3 Importance: 0.0
Variable: Hour_12 Importance: 0.0
Variable: Hour_13 Importance: 0.0
Variable: Hour_14 Importance: 0.0
Variable: Hour_15 Importance: 0.0
Variable: Hour_16 Importance: 0.0
Variable: Hour_17 Importance: 0.0
Variable: Hour_18 Importance: 0.0
Variable: Hour_19 Importance: 0.0
Variable: Hour_20 Importance: 0.0
Variable: Hour_21 Importance: 0.0
Variable: Hour_22 Importance: 0.0
Variable: Hour_23 Importance: 0.0
Variable: Sum/Win_0 Importance: 0.0
|
Perceptron/1.3 The Hello World of machine learning.ipynb | ###Markdown
1. Introduction

In this codelab you'll learn the basic "Hello World" of machine learning where, instead of programming explicit rules in a language such as Java or C++, you'll build a system that is trained on data to infer the rules that determine a relationship between numbers.

Consider the following problem: you're building a system that performs `activity recognition` of a human for `fitness tracking`. You might have access to the speed at which a person is moving, and attempt to infer their activity based on this speed using a conditional:

* Input: Speed
* Output: Status (Walking, Running, Playing)
###Code
if(speed<4){
status=WALKING;
}
###Output
_____no_output_____
###Markdown
You could extend this to running with another condition:
###Code
if(speed<4){
status=WALKING;
} else {
status=RUNNING;
}
###Output
_____no_output_____
###Markdown
In a final condition you could similarly detect cycling:
###Code
if(speed<4){
status=WALKING;
} else if(speed<12){
status=RUNNING;
} else {
status=BIKING;
}
###Output
_____no_output_____
###Markdown
Now consider what happens when you want to include an activity like golf? Suddenly it's less obvious how to create a rule to determine the activity.
###Code
// Now what?
###Output
_____no_output_____
###Markdown
It's extremely difficult to write a program (expressed in code) that will give us the golfing activity. So what do you do? That's where machine learning can be used to solve the problem!

2. What is machine learning?

In the previous section you saw a problem where, when trying to determine the fitness activity of a user, you hit limitations in what you could write code to achieve.

Consider building applications in the traditional manner as represented in the following diagram: you express `rules` in a programming language. These act on `data` and your program provides `answers`. In the case of the activity detection, the rules (the code you wrote to define types of activities) acted upon the data (the person's movement speed) in order to find an answer -- the return value from the function for determining the activity status of the user (whether they were walking, running, biking, etc.).

The process for detecting this activity status via Machine Learning is very similar -- only the axes are different: instead of trying to define the rules and express them in a programming language, you provide the answers (typically called `labels`) along with the data, and the machine will `infer` the rules that determine the relationship between the answers and the data. For example, our activity detection scenario might look like this in a machine learning context: we gather lots of data, and label it to effectively say "This is what walking looks like", "This is what running looks like" etc. Then, the computer can infer the rules that determine, from the data, what the distinct patterns that denote a particular activity are.

Beyond being an alternative method to programming this scenario, this also gives you the ability to open up new scenarios, such as the golfing one that may not have been possible under the rules-based traditional programming approach.

In traditional programming your code compiles into a binary that is typically called a program. In machine learning, the item you create from the data and labels is called a **model**. So if we go back to this diagram: consider the result of this to be a model, which at runtime is used like this: you will pass the model some data, and the model will use the rules it inferred from the training to come up with a prediction -- i.e. "That data looks like walking", "That data looks like biking" etc.

In the next section you'll start coding, building a very simple "Hello World" model which will have most of the building blocks that can be used in any Machine Learning scenario!

3. Before you start...

In the next section you'll create a very simple machine-learned model that determines patterns in a set of data using machine learning techniques and a neural network. If you've never created a Machine Learning model using TensorFlow, I'd strongly recommend you use Google Colaboratory, a browser-based environment that contains all the required dependencies. You can find the code for the rest of this lab running in a Colab. Otherwise, the main language you will use for training models is Python, so you will need to have that installed. In addition to that you'll also need TensorFlow. Details on installing it are [here](https://www.tensorflow.org/install). You'll also need the [numpy](https://numpy.org/) library.

4. Create your first machine-learned model

Consider the following sets of numbers. 
Can you see the relationship between them?

|X:| -1 | 0 | 1 | 2 | 3 | 4 |
|--|--|--|--|--|--|--|
|Y:| -2 | 1 | 4 | 7 | 10 | 13 |

Like every first app you should start with something super simple that shows the overall scaffolding for how your code works. In the case of creating neural networks, the sample I like to use is one where it learns the relationship between two numbers. So, for example, if you were writing code for a function like this, you already know the 'rules' --

```
float my_function(float x){
    float y = (3 * x) + 1;
    return y;
}
```

So how would you train a neural network to do the equivalent task? Using data! By feeding it with a set of Xs, and a set of Ys, it should be able to figure out the relationship between them. This is obviously a very different paradigm than what you might be used to, so let's step through it piece by piece.

Imports

Let's start with our imports. Here we are importing TensorFlow and calling it tf for ease of use. We then import a library called numpy, which helps us to represent our data as lists easily and quickly. The framework for defining a neural network as a set of Sequential layers is called keras, so we import that too.
###Code
!pip install tensorflow==2.0
import tensorflow as tf
import numpy as np
from tensorflow import keras
keras.__version__
tf.__version__
###Output
_____no_output_____
###Markdown
Define and Compile the Neural Network

Next we will create the simplest possible neural network. It has 1 layer, that layer has 1 neuron, and the input shape to it is just 1 value.
###Code
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model
###Output
_____no_output_____
###Markdown
Now we compile our Neural Network. When we do so, we have to specify 2 functions, a loss and an optimizer. If you've seen lots of math for machine learning, here's where it's usually used, but in this case it's nicely encapsulated in functions for you. But what happens here -- let's explain...

$y = mx + c$ or $Y = b_1 X + b$

We know that in our function, the relationship between the numbers is y=3x+1. When the computer is trying to 'learn' that, it makes a guess... maybe y=10x+10. The `LOSS` function measures the guessed answers against the known correct answers and measures how well or how badly it did. It then uses the `OPTIMIZER` function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with something like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower). It will repeat this for the number of `EPOCHS`, which you will see shortly. But first, here's how we tell it to use `MEAN SQUARED ERROR` for the loss and `STOCHASTIC GRADIENT DESCENT` for the optimizer. You don't need to understand the math for these yet, but you can see that they work! :)

Over time you will learn the different and appropriate loss and optimizer functions for different scenarios.
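As an aside (not from the original codelab), here is a tiny hand computation of the mean squared error for the initial guess y=10x+10 against the true rule y=3x+1, using the same six points that are fed to the model below:

```python
# Illustrative sketch: MSE of the guess y = 10x + 10 on the (x, y) pairs used later
import numpy as np

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = 3 * xs + 1                 # the true rule: [-2, 1, 4, 7, 10, 13]
guess = 10 * xs + 10            # the initial guess: [0, 10, 20, 30, 40, 50]
mse = np.mean((guess - ys) ** 2)
print(mse)  # a large loss, which the optimizer will try to drive down
```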
###Code
model.compile(optimizer='sgd', loss='mean_squared_error',metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Providing the Data

Next up we'll feed in some data. In this case we are taking 6 Xs and 6 Ys. You can see that the relationship between these is y=3x+1, so where x = -1, y = -2, etc. A Python library called 'Numpy' provides lots of array-type data structures that are a de facto standard way of doing this. We declare that we want to use these by specifying the values as an np.array[].
###Code
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-2.0, 1.0, 4.0, 7.0, 10.0, 13.0], dtype=float)
###Output
_____no_output_____
###Markdown
Training the Neural Network

The process of training the neural network, where it 'learns' the relationship between the Xs and Ys, is in the **model.fit** call. This is where it will go through the loop we spoke about above: making a guess, measuring how good or bad it is (aka the loss), using the optimizer to make another guess, etc. It will do this for the number of epochs you specify. When you run this code, you'll see the loss on the right-hand side.
###Code
model.fit(xs, ys, epochs=70)
###Output
Train on 6 samples
Epoch 1/70
6/6 [==============================] - 0s 1ms/sample - loss: 1.3046e-05 - accuracy: 0.1667
Epoch 2/70
6/6 [==============================] - 0s 498us/sample - loss: 1.2777e-05 - accuracy: 0.1667
Epoch 3/70
6/6 [==============================] - 0s 665us/sample - loss: 1.2515e-05 - accuracy: 0.1667
Epoch 4/70
6/6 [==============================] - 0s 333us/sample - loss: 1.2258e-05 - accuracy: 0.1667
Epoch 5/70
6/6 [==============================] - 0s 665us/sample - loss: 1.2006e-05 - accuracy: 0.1667
Epoch 6/70
6/6 [==============================] - 0s 332us/sample - loss: 1.1760e-05 - accuracy: 0.1667
Epoch 7/70
6/6 [==============================] - 0s 830us/sample - loss: 1.1518e-05 - accuracy: 0.1667
Epoch 8/70
6/6 [==============================] - 0s 835us/sample - loss: 1.1282e-05 - accuracy: 0.1667
Epoch 9/70
6/6 [==============================] - 0s 333us/sample - loss: 1.1050e-05 - accuracy: 0.1667
Epoch 10/70
6/6 [==============================] - 0s 333us/sample - loss: 1.0823e-05 - accuracy: 0.1667
Epoch 11/70
6/6 [==============================] - 0s 326us/sample - loss: 1.0601e-05 - accuracy: 0.1667
Epoch 12/70
6/6 [==============================] - 0s 166us/sample - loss: 1.0384e-05 - accuracy: 0.1667
Epoch 13/70
6/6 [==============================] - 0s 332us/sample - loss: 1.0170e-05 - accuracy: 0.1667
Epoch 14/70
6/6 [==============================] - 0s 333us/sample - loss: 9.9605e-06 - accuracy: 0.1667
Epoch 15/70
6/6 [==============================] - 0s 166us/sample - loss: 9.7562e-06 - accuracy: 0.1667
Epoch 16/70
6/6 [==============================] - 0s 499us/sample - loss: 9.5559e-06 - accuracy: 0.1667
Epoch 17/70
6/6 [==============================] - 0s 664us/sample - loss: 9.3594e-06 - accuracy: 0.1667
Epoch 18/70
6/6 [==============================] - 0s 665us/sample - loss: 9.1667e-06 - accuracy: 0.1667
Epoch 19/70
6/6 [==============================] - 0s 499us/sample - loss: 8.9786e-06 - accuracy: 0.1667
Epoch 20/70
6/6 [==============================] - 0s 327us/sample - loss: 8.7944e-06 - accuracy: 0.1667
Epoch 21/70
6/6 [==============================] - 0s 664us/sample - loss: 8.6135e-06 - accuracy: 0.1667
Epoch 22/70
6/6 [==============================] - 0s 499us/sample - loss: 8.4369e-06 - accuracy: 0.1667
Epoch 23/70
6/6 [==============================] - 0s 498us/sample - loss: 8.2636e-06 - accuracy: 0.1667
Epoch 24/70
6/6 [==============================] - 0s 332us/sample - loss: 8.0936e-06 - accuracy: 0.1667
Epoch 25/70
6/6 [==============================] - 0s 665us/sample - loss: 7.9276e-06 - accuracy: 0.1667
Epoch 26/70
6/6 [==============================] - 0s 492us/sample - loss: 7.7648e-06 - accuracy: 0.1667
Epoch 27/70
6/6 [==============================] - 0s 665us/sample - loss: 7.6051e-06 - accuracy: 0.1667
Epoch 28/70
6/6 [==============================] - 0s 332us/sample - loss: 7.4487e-06 - accuracy: 0.1667
Epoch 29/70
6/6 [==============================] - 0s 666us/sample - loss: 7.2960e-06 - accuracy: 0.1667
Epoch 30/70
6/6 [==============================] - 0s 499us/sample - loss: 7.1462e-06 - accuracy: 0.1667
Epoch 31/70
6/6 [==============================] - 0s 831us/sample - loss: 6.9992e-06 - accuracy: 0.1667
Epoch 32/70
6/6 [==============================] - 0s 332us/sample - loss: 6.8557e-06 - accuracy: 0.1667
Epoch 33/70
6/6 [==============================] - 0s 665us/sample - loss: 6.7152e-06 - accuracy: 0.1667
Epoch 34/70
6/6 [==============================] - 0s 499us/sample - loss: 6.5769e-06 - accuracy: 0.1667
Epoch 35/70
6/6 [==============================] - 0s 503us/sample - loss: 6.4422e-06 - accuracy: 0.1667
Epoch 36/70
6/6 [==============================] - 0s 333us/sample - loss: 6.3098e-06 - accuracy: 0.1667
Epoch 37/70
6/6 [==============================] - 0s 332us/sample - loss: 6.1803e-06 - accuracy: 0.1667
Epoch 38/70
6/6 [==============================] - 0s 665us/sample - loss: 6.0533e-06 - accuracy: 0.1667
Epoch 39/70
6/6 [==============================] - 0s 499us/sample - loss: 5.9292e-06 - accuracy: 0.1667
Epoch 40/70
6/6 [==============================] - 0s 332us/sample - loss: 5.8076e-06 - accuracy: 0.1667
Epoch 41/70
6/6 [==============================] - 0s 665us/sample - loss: 5.6876e-06 - accuracy: 0.1667
Epoch 42/70
6/6 [==============================] - 0s 664us/sample - loss: 5.5705e-06 - accuracy: 0.1667
Epoch 43/70
6/6 [==============================] - 0s 665us/sample - loss: 5.4561e-06 - accuracy: 0.1667
Epoch 44/70
6/6 [==============================] - 0s 500us/sample - loss: 5.3441e-06 - accuracy: 0.1667
Epoch 45/70
6/6 [==============================] - 0s 333us/sample - loss: 5.2343e-06 - accuracy: 0.1667
Epoch 46/70
6/6 [==============================] - 0s 665us/sample - loss: 5.1268e-06 - accuracy: 0.1667
Epoch 47/70
6/6 [==============================] - 0s 332us/sample - loss: 5.0213e-06 - accuracy: 0.1667
Epoch 48/70
6/6 [==============================] - 0s 500us/sample - loss: 4.9185e-06 - accuracy: 0.1667
Epoch 49/70
6/6 [==============================] - 0s 499us/sample - loss: 4.8173e-06 - accuracy: 0.1667
Epoch 50/70
6/6 [==============================] - 0s 664us/sample - loss: 4.7182e-06 - accuracy: 0.1667
Epoch 51/70
6/6 [==============================] - 0s 332us/sample - loss: 4.6214e-06 - accuracy: 0.1667
Epoch 52/70
6/6 [==============================] - 0s 333us/sample - loss: 4.5263e-06 - accuracy: 0.1667
Epoch 53/70
6/6 [==============================] - 0s 499us/sample - loss: 4.4334e-06 - accuracy: 0.1667
Epoch 54/70
6/6 [==============================] - 0s 831us/sample - loss: 4.3424e-06 - accuracy: 0.1667
Epoch 55/70
6/6 [==============================] - 0s 499us/sample - loss: 4.2530e-06 - accuracy: 0.1667
Epoch 56/70
6/6 [==============================] - 0s 832us/sample - loss: 4.1657e-06 - accuracy: 0.1667
Epoch 57/70
6/6 [==============================] - 0s 662us/sample - loss: 4.0799e-06 - accuracy: 0.1667
Epoch 58/70
6/6 [==============================] - 0s 333us/sample - loss: 3.9961e-06 - accuracy: 0.1667
Epoch 59/70
6/6 [==============================] - 0s 332us/sample - loss: 3.9143e-06 - accuracy: 0.1667
Epoch 60/70
6/6 [==============================] - 0s 665us/sample - loss: 3.8339e-06 - accuracy: 0.1667
Epoch 61/70
6/6 [==============================] - 0s 500us/sample - loss: 3.7547e-06 - accuracy: 0.1667
Epoch 62/70
6/6 [==============================] - 0s 499us/sample - loss: 3.6779e-06 - accuracy: 0.1667
Epoch 63/70
6/6 [==============================] - 0s 498us/sample - loss: 3.6024e-06 - accuracy: 0.1667
Epoch 64/70
6/6 [==============================] - 0s 831us/sample - loss: 3.5283e-06 - accuracy: 0.1667
Epoch 65/70
6/6 [==============================] - 0s 499us/sample - loss: 3.4561e-06 - accuracy: 0.1667
Epoch 66/70
6/6 [==============================] - 0s 333us/sample - loss: 3.3851e-06 - accuracy: 0.1667
Epoch 67/70
6/6 [==============================] - 0s 666us/sample - loss: 3.3157e-06 - accuracy: 0.1667
Epoch 68/70
6/6 [==============================] - 0s 333us/sample - loss: 3.2475e-06 - accuracy: 0.1667
Epoch 69/70
6/6 [==============================] - 0s 498us/sample - loss: 3.1805e-06 - accuracy: 0.1667
Epoch 70/70
6/6 [==============================] - 0s 166us/sample - loss: 3.1152e-06 - accuracy: 0.1667
###Markdown
Ok, now you have a model that has been trained to learn the relationship between X and Y. You can use the **model.predict** method to have it figure out the Y for a previously unknown X. So, for example, if X = 10, what do you think Y will be? Take a guess before you run this code:
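For reference, the underlying rule gives y = 3 × 10 + 1 = 31; with only six training points and a squared-error loss, the model will typically predict a value very close to, but not exactly, 31.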
###Code
print(model.predict([5]))
print(model.predict([10.0]))
%load_ext tensorboard
from tensorboardcolab import *
tbc = TensorBoardColab() # To create a tensorboardcolab object it will automatically creat a link
writer = tbc.get_writer() # To create a FileWriter
writer.add_graph(tf.get_default_graph()) # add the graph
writer.flush()
###Output
Using TensorFlow backend.
|
Machine_Learning/mp/mountain_project.ipynb | ###Markdown
Transfer Learning with DistilBERT IntroductionThis notebook walks through an example of using DistilBERT and transfer learning for sentiment analysis. I start by setting a goal, laying out a plan, and scraping the data before moving on to model training, and finally cover some analysis of the results. The idea is to follow the project from beginning to end, so that the whole data science process is illustrated. As many data scientists know, machine learning is about 10% actual machine learning, and 90% other. I hope that this in-depth description of my project illustrates that point. https://www.mountainproject.com/ The Goal of This ProjectWhen I’m not cleaning data, or analyzing data, or learning about data, or daydreaming about data, I like to spend my time rock climbing. Luckily for me, there is a great website called MountainProject.com, which is an online resource for climbers. The main purpose of Mountain Project is to serve as an online guidebook, where each climbing route has a description, as well as information about the quality, difficulty, and type of climb. There are also forums where climbers can ask questions, learn new techniques, find partners, brag about recent climbing adventures, and review climbing gear.The climbing gear reviews are really helpful for me as a climber when I am trying to decide what kind of gear I want to buy next. It occurred to me that climbing gear companies may want to know what climbers think of their brand, or of a particular piece of gear that they make. Thus, this project was born.The goal of this project is to label the sentiment of the gear review forums as positive, negative, or neutral. The question is: “How do climbers feel about different kinds of climbing gear?” More broadly, the goal of this project is to create a model that can label the sentiment of an online forum in a niche community using limited training data. Although this project is focused on the rock climbing community, the methods described here could easily be used in other domains as well. This could be useful for the participants in a given community who want to know the best techniques and buy the best gear. It would also be useful for companies that supply products for that industry; it would be useful for these companies to know how users feel about their products, and which keywords participants are using when they make a positive or negative comment about the product. The PlanUnfortunately for me, the Mountain Project gear review forums are not labeled. The forums are a collection of thoughts and opinions, but there is no numerical value associated with them. By that I mean, I can write what I think about a piece of gear, but I don’t give it a star rating. This eliminates the possibility of direct supervised learning. (Yeah, sure, I could go and label the 100k+ forums by hand, but where is the fun in that? Nowhere, that sounds awful.)Enter transfer learning. Transfer learning is when a model trained on one task is used for a similar task. In this case, I have an unlabeled dataset and I want to assign labels to it. So, I need to create a model that is trained to predict labels on a labeled dataset, then use that model to create labels for my unlabeled forum dataset.Because this model will need to analyze natural language, I need my model to first understand language. This is why I am use a DistilBERT model*. The details about how DistilBERT works are beyond the scope of this article, but can be found in this description of BERT and this description of DistilBERT. 
Put simply, DistilBERT is a pretrained LSTM model that understands English. After loading the DistilBERT model, it can be fine-tuned on a more specific dataset. In this case, I want to tune DistilBERT so that it can accurately label climbing gear forums.Thus, there will be two transfers of learning; first, knowledge contained in DistilBERT will be transferred to my labeled dataset. Then, I will train this model to label the sentiment of data. Second, I will transfer this model to my unlabeled forum dataset. This will allow my model to label that dataset. After the forum dataset is labeled with positive, negative, or neutral sentiment, I can run an analysis on what climbers think about different types of climbing gear. The DataThe closer your initial task is to your final task, the more effective transfer learning will be. So, I need to find a labeled dataset related to climbing and climbing gear. My search led me to two places. First, to Trailspace.com. Trailspace is a website where outdoor enthusiasts can write reviews about their gear, and (most importantly) leave a star rating. This seemed perfect, but sadly there were only ~1000 reviews related to climbing gear.This lack of data led me to my second labeled dataset: the routes on Mountain Project. Each route has a description and a star rating, and there are ~116,000 routes on the website. That is plenty of data, but the data isn’t exactly what I need because the way climbers talk about routes is different from the way climbers talk about gear. For example, I wouldn’t describe gear as “fun”, and I wouldn’t describe a route as “useful”.  Still, I think it will be better to train on route data than nothing because there is some overlap in the way climbers talk about anything, and the vernacular is quite unique with a lot of slang. For example, if I describe a route as “a sick climb with bomber gear” then the climb is high-quality. The hope is that my model will learn this unique climbing vocabulary and apply it to the gear review forums when it comes time to label them. Step One: Scrape the DataMy plan is laid out, and now it is time to actually gather some data. To do this, I needed to create a web scraper for Mountain Project and Trailspace. Prior to starting this project, I had no idea how to do any kind of web scraping. So, I watched this extremely helpful YouTube video about web scraping in Python using BeautifulSoup. Luckily for me, both Trailspace and the Mountain Project forums were quite easy to scrape. The code for the Trailspace scraping is below, as an example:
###Code
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
import os
import time
%%capture
from tqdm import tqdm_notebook as tqdm
tqdm().pandas()

# Manually gather list of main page URLs
all_urls = ["https://www.trailspace.com/gear/mountaineering-boots/",
"https://www.trailspace.com/gear/mountaineering-boots/?page=2",
"https://www.trailspace.com/gear/mountaineering-boots/?page=3",
"https://www.trailspace.com/gear/approach-shoes/",
"https://www.trailspace.com/gear/approach-shoes/?page=2",
"https://www.trailspace.com/gear/climbing-shoes/",
"https://www.trailspace.com/gear/climbing-shoes/?page=2",
"https://www.trailspace.com/gear/climbing-protection/",
"https://www.trailspace.com/gear/ropes/",
"https://www.trailspace.com/gear/carabiners-and-quickdraws/",
"https://www.trailspace.com/gear/belay-rappel/",
"https://www.trailspace.com/gear/ice-and-snow-gear/",
"https://www.trailspace.com/gear/big-wall-aid-gear/",
"https://www.trailspace.com/gear/harnesses/",
"https://www.trailspace.com/gear/climbing-helmets/",
"https://www.trailspace.com/gear/climbing-accessories/"]
# Define a function to get URLs
def get_gear_subpages(main_url):
'''Function to grab all sub-URLs from main URL'''
# Get HTML info
uClient = uReq(main_url) # request the URL
page_html = uClient.read() # Read the html
uClient.close() # close the connection
gear_soup = soup(page_html, "html.parser")
item_urls = []
items = gear_soup.findAll("a", {"class":"plProductSummaryGrid"})
for a_tag in items:
href = a_tag.attrs.get("href")
if href == "" or href is None:
continue
else:
item_urls.append("https://www.trailspace.com"+href)
return item_urls
# Get a list of all sub-URLs
all_sub_urls = []
for main_url in tqdm(all_urls):
all_sub_urls += get_gear_subpages(main_url)
# Define function to scrape data
def get_gear_comments(gear_url):
'''Function to extract all comments from each sub-URL'''
# Get HTML info
uClient = uReq(gear_url) # request the URL
page_html = uClient.read() # Read the html
uClient.close() # close the connection
review_soup = soup(page_html, "html.parser")
all_reviews = review_soup.find("div", {"id":"reviews"})
review_dict = dict()
try:
for this_review in all_reviews.findAll("div", {"class": "reviewOuterContainer"}):
# Get review rating
try:
rating = float(str(this_review.find_next('img').find_next("img")).split("rated ")[1].split(" of")[0])
except:
rating = float(str(this_review.find("img").find_next("img").find_next("img")).split("rated ")[1].split(" of")[0])
# Get review text
review_summary = this_review.find("div",{"class":"review summary"}).findAll("p")
review_text = ""
for blurb in review_summary:
review_text += " " + blurb.text.replace("\n", " ").replace("\r", " ")
review_dict[review_text] = rating
except:
pass
return review_dict
# Extract information from all URLs and save to file:
t0 = time.time()
filename = "trailspace_gear_reviews.csv"
f = open(filename, "w")
headers = "brand, model, rating, rating_text\n"
f.write(headers)
for url in tqdm(all_sub_urls):
brand = url.split("/")[4]
model = url.split("/")[5]
info = get_gear_comments(url)
for review in info.keys():
rating_text = review.replace(",", "~")
rating = info[review]
f.write(brand +","+
model +","+
str(rating) +","+
rating_text + "\n")
f.close()
t1 = time.time()
t1-t0
###Output
_____no_output_____
###Markdown
The routes proved to be much more challenging. On Mountain Project, routes are sorted by “area > subarea > route”. But sometimes, there are multiple subareas, so it looks like “area > big-subarea > middle-subarea > small-subarea > route”. My main problem was iterating over all of the routes to ensure I gathered data about all of them, even though they are not uniformly organized. Thankfully, Mountain Project had another way around this. Within Mountain Project you can search for routes and sort them by difficulty, then by name. It will then output a lovely csv file that includes the URL for each route in your search results. Unfortunately, the search maxes out at 1000 routes, so you can’t get them all in one go. Not to be deterred by such a small inconvenience, I painstakingly went through each area and subarea, grabbed 1000 routes at a time, and saved the files to my computer until I had all 116,000 routes saved in separate csv files on my computer. Once I had all the csv files, I combined them with this code:
###Code
import os
import glob
import pandas as pd
import time
from tqdm import tqdm  # needed for the tqdm(...) progress bars used below
# Combine CSVs that I got directly from Mountain Project
extension = 'csv'
all_filenames = [i for i in glob.glob('*.{}'.format(extension))]
#combine all files in the list
combined_csv = pd.concat([pd.read_csv(f) for f in all_filenames ])
#export to csv
combined_csv.to_csv( "all_routes.csv", index=False, encoding='utf-8-sig')
routes = pd.read_csv("all_routes.csv")
routes.drop_duplicates(subset = "URL", inplace = True)
# remove routes with no rating
routes = routes[routes["Avg Stars"]!= -1]
###Output
_____no_output_____
###Markdown
At this point, I have a large csv file that has the URLs of all the routes on Mountain Project. Now, I need to iterate over each URL and scrape the information I need, then add it back to this csv. Routes with empty descriptions, non-English descriptions, or fewer than 10 votes were removed. This cut my number of route examples down to about 31,000.
###Code
def description_scrape(url_to_scrape, write = True):
"""Get description from route URL"""
# Get HTML info
uClient = uReq(url_to_scrape) # request the URL
page_html = uClient.read() # Read the html
uClient.close() # close the connection
route_soup = soup(page_html, "html.parser")
# Get route description headers
heading_container = route_soup.findAll("h2", {"class":"mt-2"})
heading_container[0].text.strip()
headers = ""
for h in range(len(heading_container)):
headers += "&&&" + heading_container[h].text.strip()
headers = headers.split("&&&")[1:]
# Get route description text
route_soup = soup(page_html, "html.parser")
desc_container = route_soup.findAll("div", {"class":"fr-view"})
words = ""
for l in range(len(desc_container)):
words += "&&&" + desc_container[l].text
words = words.split("&&&")[1:]
# Combine into dictionary
route_dict = dict(zip(headers, words))
# Add URL to dictionary
route_dict["URL"] = url_to_scrape
# Get number of votes on star rating and add to dictionary
star_container = route_soup.find("span", id="route-star-avg")
num_votes = int(star_container.span.span.text.strip().split("from")[1].split("\n")[0].replace(",", ""))
route_dict["star_votes"] = num_votes
if write == True:
# Write to file:
f.write(route_dict["URL"] +","+
route_dict.setdefault("Description", "none listed").replace(",", "~") +","+
route_dict.setdefault("Protection", "none listed").replace(",", "~") +","+
str(route_dict["star_votes"]) + "\n")
else:
        return route_dict

# Get URLs from large route.csv file
all_route_urls = list(routes["URL"]) # where routes is the name of the dataframe
# Open a new file
filename = "route_desc.csv"
f = open(filename, "w")
headers = "URL, desc, protection, num_votes\n"
f.write(headers)

# Scrape all the routes
for route_url in tqdm(all_route_urls):
description_scrape(route_url)
time.sleep(.05)
t1 = time.time()
t1-t0
f.close()
# Read the scraped descriptions back in and merge the dataframes:
route_desc = pd.read_csv("route_desc.csv")
merged = routes.merge(route_desc, on='URL')
merged.to_csv("all_routes_and_desc.csv", index=False)
df = pd.read_csv("all_routes_and_desc.csv")
##### CLEANING STEPS #####
# Drop column that shows my personal vote
df.drop(["Your Stars"], axis = 1, inplace=True)
# Removes whitespace around column names
df_whole = df.rename(columns=lambda x: x.strip())
# Combine text columns and select needed columns
df_whole["words"] = df_whole["desc"] + " " + df_whole["protection"]
df = df_whole[["words", "num_votes", "Avg Stars"]]
# Remove rows with no description
bad_df = df[df.words.apply(lambda x: len(str(x))<=5)]
new_df = df[~df.words.isin(bad_df.words)]
print(len(df), len(bad_df), len(new_df), len(df)-len(bad_df)==len(new_df))
df = new_df
# Remove non-english entries... takes a few minutes...
from langdetect import detect
def is_english(x):
try:
return detect(x)
except:
return None
df["english"] = df['words'].apply(lambda x: is_english(x) == 'en')
df = df[df.english]
df = df[["words", "num_votes", "Avg Stars"]]
# Now remove rows with fewer than 10 votes
df = df[df.num_votes >= 10]
# Save it
df.to_csv('data/words_and_stars_no_ninevotes.csv', index=False, header=True)
###Output
_____no_output_____
###Markdown
Now, I have three datasets to work with: Trailspace.com gear reviews, Mountain Project routes, and Mountain Project gear review forums.One problem with the gear review forums is that there are often multiple sentiments about multiple pieces of gear in the same forum posting. So, in a naive attempt to split the forum posts into sentences, I split each posting every time there was a period. There are many reasons why this is not the best way to split text into sentences, but I assumed that it would be good enough for this project. This increased the number of samples to about 200,000. Step Two: Build a ModelIf you have made it this far, rejoice, because it is time for some actual machine learning! Well, almost time. Before I can begin building a model, I need to have some kind of metric to measure the quality of my model. This means I had to take on the dreaded task of labelling some of the forums by hand. I manually labeled 4000 samples of the forums as positive (2), negative (0), or neutral (1), so that I could evaluate my model. This took about four hours.Of course, I want to build the best possible model for my task. I have two datasets to help me create this model. The Trailspace data is small, but more relevant. The route data is large, but less relevant. Which one will help my model more? Or, should I use a combination of both? And, importantly, does the additional data provide better performance than a simple DistilBERT model?I decided to do a comparison of four models: 1. A model with DistilBERT only 2. A model with DistilBERT and route information 3. A model with DistilBERT and Trailspace information 4. A model with DistilBERT and both datasetsOn top of each DistilBERT is a small, identical neural network. This network is trained on 4000 labeled forum examples with a random seed set to 42 to prevent variation in the way the data is split. The lower DistilBERT layers were locked, meaning DistilBERT was not re-trained by the forum data. By keeping the networks identical, the only variation between the models is the dataset (or lack thereof) on which DistilBERT was tuned. This will allow me to conclude which dataset did the best at tuning DistilBERT for predicting forum post labels, without introducing noise from different types of models or variation in data split. Because there are three categories (positive, negative, and neutral), categorical cross-entropy was used as a loss function.After a great deal of experimenting and tweaking parameters, I found that the best way to train the DistilBERT only model is the method below:
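(Before the training code, a brief aside: a minimal sketch of the naive period-splitting described above. The file name `raw_forum_posts.csv` and the `text` column are assumptions for illustration, not the original code; the training method itself follows below.)

```python
import pandas as pd

# Hypothetical input: one raw forum post per row, in a column named "text"
posts = pd.read_csv("raw_forum_posts.csv")

# Naive sentence split: break each post at every period
sentences = (posts["text"]
             .str.split(".")     # split on periods
             .explode()          # one row per resulting fragment
             .str.strip())       # trim surrounding whitespace
sentences = sentences[sentences.str.len() > 0].reset_index(drop=True)
print(len(posts), "posts ->", len(sentences), "sentence-level samples")
```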
###Code
from transformers import pipeline
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import DistilBertTokenizer, DistilBertModel, DistilBertConfig, TFAutoModelWithLMHead, TFAutoModel, AutoModel
from sklearn.model_selection import train_test_split
import tensorflow as tf
import pandas as pd
import numpy as np
classifier = pipeline('sentiment-analysis')
import random
random.seed(42)
##### SET UP THE MODEL #####
save_directory = "distilbert-base-uncased"
config = DistilBertConfig(dropout=0.2, attention_dropout=0.2)
config.output_hidden_states = False
transformer_model = TFAutoModel.from_pretrained(save_directory, from_pt=True, config = config)
input_ids_in = tf.keras.layers.Input(shape=(128,), name='input_token', dtype='int32')
input_masks_in = tf.keras.layers.Input(shape=(128,), name='masked_token', dtype='int32') # Build model that will go on top of DistilBERT
embedding_layer = transformer_model(input_ids_in, attention_mask=input_masks_in)[0]
X = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(embedding_layer)
X = tf.keras.layers.GlobalMaxPool1D()(X)
X = tf.keras.layers.Dense(50, activation='relu')(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.Dense(3, activation='sigmoid')(X)
tf.keras.layers.Softmax(axis=-1)  # note: this layer is constructed but never attached to the model graph
model = tf.keras.Model(inputs=[input_ids_in, input_masks_in], outputs = X)
for layer in model.layers[:3]:
layer.trainable = False
model.compile(optimizer="Adam", loss=tf.keras.losses.CategoricalCrossentropy(), metrics=["acc"])
##### LOAD THE TEST DATA #####
df = pd.read_csv('data/labeled_forum_test.csv')
X_train, X_test, y_train, y_test = train_test_split(df["text"], df["sentiment"], test_size=0.20, random_state=42)

# Create X values
tokenizer = AutoTokenizer.from_pretrained(save_directory)
X_train = tokenizer(
list(X_train),
padding=True,
truncation=True,
return_tensors="tf",
max_length = 128
)
X_test = tokenizer(
list(X_test),
padding=True,
truncation=True,
return_tensors="tf",
max_length = 128
)
# Create Y values
y_train = pd.get_dummies(y_train)
y_test = pd.get_dummies(y_test)
#### TRAIN THE MODEL ####
history = model.fit([X_train["input_ids"], X_train["attention_mask"]],
y_train,
batch_size=128,
epochs=8,
verbose=1,
validation_split=0.2)
#### SAVE WEIGHTS FOR LATER ####
model.save_weights('models/final_models/bert_only2/bert_only2')
###Output
_____no_output_____
###Markdown
The code above creates the baseline model, DistilBERT only. Now, I will tune DistilBERT with the data I saved to ‘data/words_and_stars_no_ninevotes.csv’.
###Code
from transformers import pipeline
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import DistilBertTokenizer, DistilBertModel, DistilBertConfig, TFAutoModelWithLMHead, TFAutoModel, AutoModel
import tensorflow as tf
import pandas as pd  # needed for pd.read_csv below
import numpy as np
classifier = pipeline('sentiment-analysis')
##### LOAD DATA THAT WILL TUNE DISTILBERT #####
df = pd.read_csv('data/words_and_stars_no_ninevotes.csv')
df = df.replace(4, 3.9999999)  # prevents errors (replace() returns a new frame, so assign it back)
#### TUNE DISTILBERT #####
# normalize star values
df["norm_star"] = df["Avg Stars"]/2
df.head()
# drop null entries
print(len(np.where(pd.isnull(df["words"]))[0])) # 288 null entries
df.dropna(inplace = True)
model_name = "distilbert-base-uncased"
tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
tf_batch = tokenizer(
list(df["words"]),
padding=True,
truncation=True,
return_tensors="tf"
)
tf_outputs = tf_model(tf_batch, labels = tf.constant(list(df["norm_star"]), dtype=tf.float64))
loss = [list(df["norm_star"])[i]-float(tf_outputs[0][i]) for i in range(len(df))]
star_diff = (sum(loss)/1000)*4
star_diff
# Save the tuned DistilBERT so you can use it later
save_directory = "models/route_model"
tokenizer.save_pretrained(save_directory)
model.save_pretrained(save_directory)
###Output
_____no_output_____
###Markdown
From here, the code to create a model that has a tuned DistilBERT at its base is the same as the code used to create the DistilBERT only model, except instead of `save_directory = "distilbert-base-uncased"`, use `save_directory = "models/route_model"`. After experimentation and parameter tweaking, the results for the four models looks like this: The DistilBERT model with both route and gear data provided the best test accuracy at 81.6% for three way classification, and will be used to label the Mountain Project forums. Step Three: Model AnalysisThe route and gear model provided a test accuracy of 81.6%. This begs the question: what is happening in the other 18.4%? Could it be the length of the post that is causing inaccuracy? An initial look at string lengths does not show that the lengths of the mismatched strings are terribly different from the lengths of the whole dataset, so this is unlikely to be the culprit.Next, I looked at the counts of words that were mislabeled compared to the counts of words correctly labeled, both with and without stop words. In each, the counts looked similar except for two words: “cam” and “hex”. Posts with these words tended to be mislabeled. These each refer to a type of climbing gear, and I think they are mislabeled for different reasons.Hexes are “old-school” gear that work, but are outdated. Therefore, people don’t really buy them anymore and there are a lot of mixed feelings in the forums about whether or not they are still useful. This may have confused the model when it comes to classifying sentiment when the subject is hexes.When there is a big sale on gear, people often post about it in the forums. As I was labeling data, if the post was simply “25% off on cams at website.com”, I labeled it as neutral. Cams are expensive, they go on sale frequently, and climbers need a lot of them, so sales on cams are posted frequently; all of these postings were labeled as neutral. There is also a lot of debate about the best kind of cams, which can lead to sentences with multiple sentiments, causing the label to come out as neutral. Additionally, people talk about cams in a neutral way when the recommended gear for a specific climb. I think that these things led my model to believe that cams are almost always neutral.Sentiment about hexes is quite controversial. Sentiment about cams are more often listed as neutral. Notice the difference in the number of examples; cams are far more popular than hexes.In sum, my model confuses sentiment in the following cases: 1. When there is a sale mentioned: “25% off Black Diamond cams, the best on the market” true: 2, label: 1 2. When the post is not directly related to climbing: “The republican party does not support our use of public land” true: 0, label: 1 (I suspect this is because my model is not trained for it) 3. When parallel cracks are mentioned: “Cams are good for parallel cracks” true: 2, label: 0 (I suspect this is because cams are the only kind of gear that work well in parallel cracks. Most posts say things like “tricams are good unless it is a parallel crack.”) 4. When hexes are mentioned: “Hexes are fine for your first rack.” true: 2, label: 0 With this project, I had hoped to determine if a large, less relevant dataset is better than a smaller, more relevant dataset for analyzing sentiment on a niche online forum. My model that had the most training examples performed the best. 
However, it is unclear whether this is due to the additional examples being more relevant, or simply due to there being more examples. I have additional route data (the routes with 9 or fewer votes). Although I suspect that this data may be less reliable, it could be useful in follow-up experiments. I could train models on route data alone at different sizes, up to the 116,700 examples that I scraped, then compare. This would tell me whether the additional accuracy was solely due to more data, or whether the specificity of the small gear dataset helped. Although it cannot be concluded that inclusion of a smaller but more relevant labeled dataset improves the model, it can be concluded that more data is indeed better than less, even if the larger dataset is a little bit less relevant. This is evidenced by the comparison of the gear-only and route-only models. However, the relevance of the gear dataset may or may not have improved the final model; further experimentation is needed to make that conclusion. Step Four: Analyze Results Finally, it is time to label the forums so that I can run some analysis on them and see how climbers really feel about gear.
###Code
from transformers import pipeline
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import DistilBertTokenizer, DistilBertModel, DistilBertConfig, TFAutoModelWithLMHead, TFAutoModel, AutoModel
from transformers import PreTrainedModel
from sklearn.model_selection import train_test_split
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
classifier = pipeline('sentiment-analysis')
import random
random.seed(42)
%matplotlib inline
#### RERUN YOUR MODEL #####
save_directory = "models/route_model"
config = DistilBertConfig(dropout=0.2, attention_dropout=0.2)
config.output_hidden_states = False
transformer_model = TFAutoModel.from_pretrained(save_directory, from_pt=True, config = config)
input_ids_in = tf.keras.layers.Input(shape=(128,), name='input_token', dtype='int32')
input_masks_in = tf.keras.layers.Input(shape=(128,), name='masked_token', dtype='int32')
# Build model that will go on top of DistilBERT
embedding_layer = transformer_model(input_ids_in, attention_mask=input_masks_in)[0]
X = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(embedding_layer)
X = tf.keras.layers.GlobalMaxPool1D()(X)
X = tf.keras.layers.Dense(50, activation='relu')(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.Dense(3, activation='sigmoid')(X)
tf.keras.layers.Softmax(axis=-1)  # note: this layer is constructed but never attached to the model graph
model = tf.keras.Model(inputs=[input_ids_in, input_masks_in], outputs = X)
for layer in model.layers[:3]:
layer.trainable = False
model.compile(optimizer="Adam", loss=tf.keras.losses.CategoricalCrossentropy(), metrics=["acc"])
#### LOAD THE WEIGHTS THAT YOU TRAINED BEFORE AND PREP DATA #####
model.load_weights('models/final_models/route_only2/route_only2')

# Read in data
df = pd.read_csv('data/all_forums.csv')

# Create X values
tokenizer = AutoTokenizer.from_pretrained(save_directory)
X = tokenizer(
list(df["text"]),
padding=True,
truncation=True,
return_tensors="tf",
max_length = 128
)
preds = model.predict([X["input_ids"], X["attention_mask"]])
#### ADD PREDICTIONS TO THE DATAFRAME #####
# Start with the first 5000, then replace the first n rows of the df
# For some reason, the merge works better this way.
# Add predicted labels to df
pred_labels = [np.argmax(preds[i], axis = 0) for i in range(len(preds))]
df_small = df.copy()
df_small = df_small[:5000] # remove in full set
df_small["pred_label"] = pred_labels[:5000] # add predicted labels
df_small["text"] = df_small["text"].str.strip().str.lower() # lower and strip whitespace
# remove empty rows
df_small['text'].replace('', np.nan, inplace=True)
df_small.dropna(subset=['text'], inplace=True)
#clean index mess
df_small.reset_index(inplace = True)
df_small.drop(["index"], axis = 1, inplace = True)
# Get labeled dataframe
labeled_df = pd.read_csv("data/labeled_forum_test.csv")
labeled_df["text"] = labeled_df["text"].str.strip().str.lower()
# Now merge
new_df = df_small.merge(labeled_df, how = 'left', on = "text")
print(len(new_df))
print(len(new_df)-len(df_small))
# Now get big DF and replace the first n rows
# Add predicted labels to df
pred_labels = [np.argmax(preds[i], axis = 0) for i in range(len(preds))]
full_df = df.copy()
full_df["pred_label"] = pred_labels # add predicted labels
full_df["text"] = full_df["text"].str.strip().str.lower() # lower and strip whitespace
# remove empty rows
full_df['text'].replace('', np.nan, inplace=True)
full_df.dropna(subset=['text'], inplace=True)
#clean index mess
full_df.reset_index(inplace = True)
full_df.drop(["index"], axis = 1, inplace = True)
##### COMBINE THE DATAFRAMES AND SAVE #####
# Combine df_small and full_df[len(new_df):]
df_full = new_df.append(full_df[len(new_df):])
df_full = df_full.rename(columns={"sentiment": "true_label"})
df_full.reset_index(inplace = True)
df_full.drop(["index"], axis = 1, inplace = True)
df_full.to_csv('data/full_forum_labeled.csv', index = False)
###Output
_____no_output_____
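As a quick sanity check on the merge (a sketch, not part of the original notebook), one could compare the predicted labels against the hand labels wherever both are present in `df_full`. Note that some of these rows were used to train the classification head, so this is only a consistency check, not a test-set accuracy.

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Keep only rows that received a hand label during the merge
labeled_rows = df_full.dropna(subset=["true_label"])
y_true = labeled_rows["true_label"].astype(int)
y_pred = labeled_rows["pred_label"].astype(int)
print("Agreement on hand-labeled rows:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))
```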
###Markdown
From here, further analysis can be done, depending on what exactly you want to know. Below are some examples of what you could do next. I am picking on Mammut a little bit because I just bought a Mammut backpack based on a Mountain Project recommendation. Do climbers write mostly positive, negative, or neutral reviews of gear? It appears that most posts are neutral. This makes sense because climbers are often talking about sales, or recommending gear for a specific climb, both of which I labeled as neutral.
###Code
df = pd.read_csv('/Users/patriciadegner/Documents/MIDS/DL/final_project/data/full_forum_labeled.csv')
plt.title("Overall Sentiment")
plt.ylabel('Count')
plt.xlabel('Sentiment')
plt.xticks([0,1,2])
plt.hist(df.pred_label)
###Output
_____no_output_____
###Markdown
Has sentiment about Mammut changed over time? It appears that sentiment about Mammut has not changed much over time.
###Code
# Generate dataframe
mammut_df = df[df.text.str.contains("mammut").fillna(False)]
mammut_df["post_year"] = [mammut_df.post_date[i][-4:] for i in mammut_df.index]
mammut_grouped = mammut_df.groupby(["post_year"]).mean()
# Create plot
plt.title("Mammut Sentiment Over Time")
plt.ylabel('Average Sentiment')
plt.xlabel('Year')
plt.xticks(rotation=45)
plt.bar(mammut_grouped.index, mammut_grouped.pred_label)
###Output
/Users/patriciadegner/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Do climbers who joined Mountain Project more recently have different feelings about Mammut than those who joined a long time ago? (Account age is being used as a proxy for the number of years spent as a climber; do more experienced climbers have different preferences?) This graph compares the age of the account to the average sentiment towards Mammut. There does not appear to be a trend, but there is more variance in older accounts, so it is hard to say for sure.
###Code
# Generate dataframe
mammut_df = df[df.text.str.contains("mammut").fillna(False)]
mammut_df["post_year"] = [int(mammut_df.post_date[i][-4:]) for i in mammut_df.index]
# Get join dates if available
join_year_list = []
for i in mammut_df.index:
try:
join_year_list.append(int(mammut_df.join_date[i][-4:]))
except:
join_year_list.append(-1000)
# Add join year and years-as-member-before-posting columns, remove missing info
mammut_df["join_year"] = join_year_list
mammut_df["years_as_mem_before_posting"] = mammut_df["post_year"] - mammut_df["join_year"]
mammut_df = mammut_df[mammut_df['years_as_mem_before_posting'] < 900]
# groupby
mammut_grouped = mammut_df.groupby(["years_as_mem_before_posting"]).mean()
# Create plot
plt.title("Mammut Sentiment of Newer vs. Older Accounts")
plt.ylabel('Average Sentiment')
plt.xlabel('Num Years as Member')
plt.xticks(rotation=45)
plt.bar(mammut_grouped.index, mammut_grouped.pred_label)
###Output
/Users/patriciadegner/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
This is separate from the ipykernel package so we can avoid doing imports until
/Users/patriciadegner/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:10: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
# Remove the CWD from sys.path while we load stuff.
/Users/patriciadegner/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
# This is added back by InteractiveShellApp.init_path()
###Markdown
Is the variance in the previous graph due to a smaller sample size of older accounts? There are far fewer older accounts, which is why there is more variance at the right end of the graph above.
###Code
# Groupby
mammut_grouby_count = mammut_df.groupby(["years_as_mem_before_posting"]).count()
# Create plot
plt.title("Count of Account Age that Mentions Mammut")
plt.ylabel('Count')
plt.xlabel('Num Years as Member')
plt.bar(mammut_grouby_count.index, mammut_grouby_count.join_year)
###Output
_____no_output_____ |
in-class-activities/08_Dask/8W_Accelerating_Dask/8W_Dask_Numba.ipynb | ###Markdown
Accelerating Dask with Numba In this notebook, we'll explore how you can accelerate parallel Dask DataFrame workflows by using `numba` to precompile code ahead of time (and by reorganizing our code to take advantage of vectorization). We've already seen how we can use `numba` in concert with `mpi4py` to accelerate parallel simulation programs earlier in the class. Here, we'll focus on using `numba` in a common analytical workflow -- applying some function to a column (or several columns) in your DataFrame and creating a new, derived column for further study. For this demonstration, we'll be working with a small sample of [AirBnB's listing data](http://insideairbnb.com/get-the-data.html), a large dataset that contains information on AirBnBs from around the world on a month-by-month basis. The methods described in this notebook are fully scalable, though: if you increase the number of workers (and memory) in your Dask cluster, they can handle the full archive of AirBnB data. To begin, let's load in our packages and request resources to start up our Dask cluster (note that this notebook is meant to be run on the Midway Cluster):
###Code
import dask
from dask.distributed import Client
from dask_jobqueue import SLURMCluster
import dask.dataframe as dd
from numba.pycc import CC
import numpy as np
import time
# Compose SLURM script
cluster = SLURMCluster(queue='broadwl', cores=4, memory='2GB',
processes=4, walltime='00:15:00', interface='ib0',
job_extra=['--account=macs30123']
)
# Request resources
cluster.scale(jobs=1)
! squeue -u jclindaniel
client = Client(cluster)
client
###Output
_____no_output_____
###Markdown
Then, we can load in our AirBnB data (included in this directory) and see what it looks like (note that this data is from three cities: Chicago, Boston, and San Francisco, compiled by AirBnB in April 2021):
###Code
df = dd.read_csv('listings*.csv')
df.head()
###Output
_____no_output_____
###Markdown
You'll notice that two of the columns in the DataFrame are "latitude" and "longitude" -- spatial coordinates corresponding to AirBnB locations. Let's say that we're interested in creating a derived column from these coordinates, measuring how far each AirBnB is from the MACSS building at the University of Chicago (so that we can compute some further summary statistics about this column). To measure this distance, we can write a Python function to calculate the distance between two sets of (longitude, latitude) coordinates using [great-circle distance](https://en.wikipedia.org/wiki/Great-circle_distance). We'll write another version of this function that uses `numba` to compile it ahead of time. Finally, we can write an additional function to make use of these distance formulas and assess the distance of any coordinates from the MACSS building:
###Code
def distance(lon1, lat1, lon2, lat2):
'''
Calculate the circle distance between two points
on the earth (specified in decimal degrees)
'''
# convert decimal degrees to radians
lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
c = 2 * np.arcsin(np.sqrt(a))
# 6367 km is the radius of the Earth
km = 6367 * c
m = km * 1000
return m
# Use Numba to compile this same function in a module named `aot`
cc = CC('aot')
@cc.export('distance', 'f8(f8,f8,f8,f8)')
def distance_numba(lon1, lat1, lon2, lat2):
'''
Calculate the circle distance between two points
on the earth (specified in decimal degrees)
(Numba-accelerated version)
'''
# convert decimal degrees to radians
lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
c = 2 * np.arcsin(np.sqrt(a))
# 6367 km is the radius of the Earth
km = 6367 * c
m = km * 1000
return m
cc.compile()
# Import aot and incorporate both distance functions into single function that
# we'll apply to our dataframe
import aot
def distance_from_macss(lon, lat, numba=False):
'''
Compute distance to MACSS building (1155 E. 60th Street, Chicago, IL)
from a given coordinate (longitude, latitude). Can accelerate with
Numba if specify `numba=True` when calling function.
'''
macss_lon, macss_lat = -87.5970978, 41.7856443
if numba:
return aot.distance(lon, lat, macss_lon, macss_lat)
return distance(lon, lat, macss_lon, macss_lat)
###Output
_____no_output_____
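For reference, the quantity computed in both functions above is the haversine form of the great-circle distance,

$$a = \sin^2\!\left(\tfrac{\Delta\phi}{2}\right) + \cos\phi_1 \cos\phi_2 \sin^2\!\left(\tfrac{\Delta\lambda}{2}\right), \qquad d = 2r\,\arcsin\!\left(\sqrt{a}\right),$$

where $\phi$ is latitude, $\lambda$ is longitude (both in radians), and $r \approx 6367$ km is the Earth radius used in the code.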
###Markdown
Then, we can "apply" this `distance_from_macss` function to our DataFrame, which will run our function in parallel on the different DataFrame partitions spread across our Dask workers (using the `map_partitions` method). We'll also produce some summary statistics using the `describe` method to get a sense of how our data is shaped. Note that we use both a plain-Python version of our code as well as our `numba`-accelerated one (setting `numba=True`) to see if we observe a performance boost by using our compiled distance function:
###Code
print("Dask alone:")
%timeit summary = df.apply(lambda x: distance_from_macss(x.longitude, x.latitude), axis=1, meta=(None, 'float64')) \
.describe() \
.compute()
print("Dask + Numba:")
%timeit summary = df.apply(lambda x: distance_from_macss(x.longitude, x.latitude, numba=True), axis=1, meta=(None, 'float64')) \
.describe() \
.compute()
###Output
Dask alone:
573 ms ± 184 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Dask + Numba:
326 ms ± 4.83 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
We do in fact see a performance boost by using Numba to compile our code in addition to parallelizing it with Dask. For larger data sizes (and more computationally intensive function applications), this could make a major difference in our run time. Note that we can gain additional speedups by vectorizing our code (`apply` just loops over the rows in our dataframe), like so (just as we can with vanilla Pandas and NumPy on our local machines):
###Code
# Use Numba to compile vectorized function in new module named `aot_vec`
cc = CC('aot_vec')
@cc.export('distance_from_point', 'f8[:](f8[:],f8[:],f8,f8)')
def distance_from_point(lon1, lat1, lon2, lat2):
'''
Calculate the circle distance between each longitude
and latitude value in an array (lon1, lat1) and an
arbitrary point (lon2, lat2)
Returns an array of distances (specified in decimal degrees)
(Vectorized, Numba-accelerated version)
'''
# convert decimal degrees to radians
lon1, lat1 = map(np.radians, [lon1, lat1])
lon2, lat2 = map(np.radians, [lon2, lat2])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
c = 2 * np.arcsin(np.sqrt(a))
# 6367 km is the radius of the Earth
km = 6367 * c
m = km * 1000
return m
cc.compile()
import aot_vec
print("Dask Alone (Vectorized):")
%timeit vec = distance_from_macss(df.longitude, df.latitude).describe() \
.compute()
print("Dask + Numba (Vectorized):")
%timeit vec = dd.from_dask_array(df.map_partitions( \
lambda d: aot_vec.distance_from_point(d.longitude.values, d.latitude.values, -87.5970978, 41.7856443))) \
.describe() \
.compute()
###Output
Dask Alone (Vectorized):
241 ms ± 100.72 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Dask + Numba (Vectorized):
154 ms ± 2.34 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Then, with the time savings in computation, we have more time to analyze our data! For instance, we can see that the closest AirBnB to the MACSS building is only 150 meters away:
###Code
summary = distance_from_macss(df.longitude, df.latitude).describe() \
.compute()
summary
###Output
_____no_output_____
###Markdown
And, using more complex Pandas-like queries, we can find those nearby AirBnBs that match other relevant criteria (such as being located in the Hyde Park neighborhood in Chicago and having more than one review associated with them):
###Code
df['distance_from_macss'] = distance_from_macss(df.longitude, df.latitude)
df[(df.distance_from_macss < 800) & (df.number_of_reviews > 1) & (df.neighbourhood == 'Hyde Park')].compute()
###Output
_____no_output_____ |
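When the analysis is finished, it's good practice to release the SLURM resources requested at the top of the notebook. A minimal sketch, using the `client` and `cluster` objects created earlier:

```python
# Release the Dask workers and the SLURM allocation when done
client.close()
cluster.close()
```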
submodules/resource/d2l-zh/mxnet/chapter_computer-vision/neural-style.ipynb | ###Markdown
Style Transfer If you are a photography enthusiast, you may have come across filters. They can change the color style of a photo, making a landscape shot sharper or a portrait fairer. But a filter usually changes only one aspect of a photo. To get a photo to the style you have in mind, you may need to try many different combinations, a process no less complex than tuning model hyperparameters. In this section, we will introduce how to use a convolutional neural network to automatically apply the style of one image to another image, i.e., *style transfer* :cite:`Gatys.Ecker.Bethge.2016`. Here we need two input images: one is the *content image* and the other is the *style image*. We will use a neural network to modify the content image so that it becomes close to the style image in style. For example, the content image in :numref:`fig_style_transfer` is a landscape photo taken by the book's authors in Mount Rainier National Park near Seattle, while the style image is an oil painting with the theme of autumn oak trees. The final synthesized image applies the oil brushstrokes of the style image, making the overall colors more vivid, while preserving the main shapes of the objects in the content image.:label:`fig_style_transfer` Method :numref:`fig_style_transfer_model` illustrates the CNN-based style transfer method with a simple example. First, we initialize the synthesized image, for example as the content image. This synthesized image is the only variable that needs to be updated during style transfer, i.e., the model parameters to be iterated. Then, we choose a pretrained convolutional neural network to extract image features; its model parameters do not need to be updated during training. This deep CNN extracts image features level by level through multiple layers, and we can select the outputs of some of its layers as content features or style features. Taking :numref:`fig_style_transfer_model` as an example, the pretrained network chosen here contains 3 convolutional layers, where the second layer outputs the content features and the first and third layers output the style features.:label:`fig_style_transfer_model` Next, we compute the loss function of style transfer through forward propagation (direction of solid arrows) and update the model parameters, i.e., the synthesized image, through backpropagation (direction of dashed arrows). The loss function commonly used in style transfer consists of three parts: (i) the *content loss* makes the synthesized image close to the content image in content features; (ii) the *style loss* makes the synthesized image close to the style image in style features; and (iii) the *total variation loss* helps to reduce the noise in the synthesized image. Finally, when model training is over, we output the model parameters of style transfer, i.e., the final synthesized image. Below, we will go through the technical details of style transfer with code. [**Reading the Content and Style Images**] First, we read the content and style images. From the printed image coordinate axes we can see that their sizes are not the same.
###Code
%matplotlib inline
from mxnet import autograd, gluon, image, init, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l
npx.set_np()
d2l.set_figsize()
content_img = image.imread('../img/rainier.jpg')
d2l.plt.imshow(content_img.asnumpy());
style_img = image.imread('../img/autumn-oak.jpg')
d2l.plt.imshow(style_img.asnumpy());
###Output
_____no_output_____
###Markdown
[**Preprocessing and Postprocessing**] Below, we define functions for image preprocessing and postprocessing. The preprocessing function `preprocess` standardizes each of the three RGB channels of the input image and transforms the result into the input format accepted by the CNN. The postprocessing function `postprocess` restores the pixel values in the output image to their values before standardization. Since the image printing function requires that the floating-point value of each pixel be between 0 and 1, we clip values smaller than 0 to 0 and values greater than 1 to 1.
###Code
rgb_mean = np.array([0.485, 0.456, 0.406])
rgb_std = np.array([0.229, 0.224, 0.225])
def preprocess(img, image_shape):
img = image.imresize(img, *image_shape)
img = (img.astype('float32') / 255 - rgb_mean) / rgb_std
return np.expand_dims(img.transpose(2, 0, 1), axis=0)
def postprocess(img):
img = img[0].as_in_ctx(rgb_std.ctx)
return (img.transpose(1, 2, 0) * rgb_std + rgb_mean).clip(0, 1)
###Output
_____no_output_____
###Markdown
[**Extracting Image Features**] We use the VGG-19 model pretrained on the ImageNet dataset to extract image features :cite:`Gatys.Ecker.Bethge.2016`.
###Code
pretrained_net = gluon.model_zoo.vision.vgg19(pretrained=True)
###Output
_____no_output_____
###Markdown
To extract the content features and style features of an image, we can select the outputs of certain layers in the VGG network. In general, the closer a layer is to the input, the easier it is to extract detail information of the image; conversely, the easier it is to extract global information. To avoid the synthesized image retaining too many details of the content image, we select a VGG layer closer to the output, called the *content layer*, to output the content features of the image. We also select the outputs of different layers from VGG to match local and global styles; these layers are called *style layers*. As introduced in :numref:`sec_vgg`, the VGG network uses 5 convolutional blocks. In the experiment, we select the last convolutional layer of the fourth block as the content layer, and the first convolutional layer of each block as a style layer. The indices of these layers can be obtained by printing the `pretrained_net` instance.
###Code
style_layers, content_layers = [0, 5, 10, 19, 28], [25]
###Output
_____no_output_____
###Markdown
When extracting features with VGG layers, we only need the layers from the input layer up to the content or style layer closest to the output. Below we construct a new network `net` that retains only the VGG layers we need.
###Code
net = nn.Sequential()
for i in range(max(content_layers + style_layers) + 1):
net.add(pretrained_net.features[i])
###Output
_____no_output_____
###Markdown
Given an input `X`, simply calling the forward propagation `net(X)` only gives us the output of the last layer. Since we also need the outputs of intermediate layers, we compute layer by layer here and keep the outputs of the content and style layers.
###Code
def extract_features(X, content_layers, style_layers):
contents = []
styles = []
for i in range(len(net)):
X = net[i](X)
if i in style_layers:
styles.append(X)
if i in content_layers:
contents.append(X)
return contents, styles
###Output
_____no_output_____
###Markdown
Next we define two functions: `get_contents` extracts the content features from the content image, and `get_styles` extracts the style features from the style image. Because there is no need to change the model parameters of the pretrained VGG during training, we can extract the content and style features before training starts. Since the synthesized image is the set of model parameters iterated during style transfer, we can only extract its content and style features by calling `extract_features` during training.
###Code
def get_contents(image_shape, device):
content_X = preprocess(content_img, image_shape).copyto(device)
contents_Y, _ = extract_features(content_X, content_layers, style_layers)
return content_X, contents_Y
def get_styles(image_shape, device):
style_X = preprocess(style_img, image_shape).copyto(device)
_, styles_Y = extract_features(style_X, content_layers, style_layers)
return style_X, styles_Y
###Output
_____no_output_____
###Markdown
[**Defining the Loss Function**] Below we describe the loss function for style transfer. It consists of three parts: content loss, style loss, and total variation loss. Content Loss Similar to the loss function in linear regression, the content loss measures the difference in content features between the synthesized image and the content image via a squared error function. Both inputs of the squared error function are outputs of the content layer computed by the `extract_features` function.
###Code
def content_loss(Y_hat, Y):
return np.square(Y_hat - Y).mean()
###Output
_____no_output_____
###Markdown
Style Loss The style loss, similar to the content loss, also uses a squared error function to measure the difference in style between the synthesized image and the style image. To express the style of a style layer's output, we first compute the style layer output via the `extract_features` function. Suppose that this output has 1 example, $c$ channels, height $h$, and width $w$; we can transform it into matrix $\mathbf{X}$ with $c$ rows and $hw$ columns. This matrix can be thought of as the concatenation of $c$ vectors $\mathbf{x}_1, \ldots, \mathbf{x}_c$, each of length $hw$, where vector $\mathbf{x}_i$ represents the style feature of channel $i$. In the *Gram matrix* of these vectors, $\mathbf{X}\mathbf{X}^\top \in \mathbb{R}^{c \times c}$, the element $x_{ij}$ in row $i$ and column $j$ is the inner product of vectors $\mathbf{x}_i$ and $\mathbf{x}_j$; it represents the correlation of the style features of channels $i$ and $j$. We use such a Gram matrix to express the style of a style layer's output. Note that when the value of $hw$ is large, elements of the Gram matrix can easily become large. Moreover, the height and width of the Gram matrix are both the number of channels $c$. To keep the style loss unaffected by the magnitude of these values, the `gram` function defined below divides the Gram matrix by the number of its elements, i.e., $chw$.
###Code
def gram(X):
num_channels, n = X.shape[1], d2l.size(X) // X.shape[1]
X = X.reshape((num_channels, n))
return np.dot(X, X.T) / (num_channels * n)
###Output
_____no_output_____
###Markdown
Naturally, the two Gram matrix inputs of the squared error function for the style loss are based on the style layer outputs of the synthesized image and the style image, respectively. Here we assume that the Gram matrix `gram_Y`, based on the style image, has already been precomputed.
###Code
def style_loss(Y_hat, gram_Y):
return np.square(gram(Y_hat) - gram_Y).mean()
###Output
_____no_output_____
###Markdown
Total Variation Loss Sometimes, the learned synthesized image contains a lot of high-frequency noise, i.e., particularly bright or dark individual pixels. A common denoising method is *total variation denoising*: suppose $x_{i, j}$ denotes the pixel value at coordinate $(i, j)$; reducing the total variation loss $$\sum_{i, j} \left|x_{i, j} - x_{i+1, j}\right| + \left|x_{i, j} - x_{i, j+1}\right|$$ makes neighboring pixel values as similar as possible.
###Code
def tv_loss(Y_hat):
return 0.5 * (np.abs(Y_hat[:, :, 1:, :] - Y_hat[:, :, :-1, :]).mean() +
np.abs(Y_hat[:, :, :, 1:] - Y_hat[:, :, :, :-1]).mean())
###Output
_____no_output_____
###Markdown
Loss Function [**The loss function of style transfer is the weighted sum of the content loss, style loss, and total variation loss**]. By adjusting these weight hyperparameters, we can balance the relative importance of content retention, style transfer, and noise reduction in the synthesized image.
###Code
content_weight, style_weight, tv_weight = 1, 1e3, 10
def compute_loss(X, contents_Y_hat, styles_Y_hat, contents_Y, styles_Y_gram):
# 分别计算内容损失、风格损失和全变分损失
contents_l = [content_loss(Y_hat, Y) * content_weight for Y_hat, Y in zip(
contents_Y_hat, contents_Y)]
styles_l = [style_loss(Y_hat, Y) * style_weight for Y_hat, Y in zip(
styles_Y_hat, styles_Y_gram)]
tv_l = tv_loss(X) * tv_weight
# 对所有损失求和
l = sum(10 * styles_l + contents_l + [tv_l])
return contents_l, styles_l, tv_l, l
###Output
_____no_output_____
###Markdown
[**Initializing the Synthesized Image**] In style transfer, the synthesized image is the only variable that needs to be updated during training. Therefore, we can define a simple model, `SynthesizedImage`, and treat the synthesized image as its model parameters. The forward propagation of this model simply returns the model parameters.
###Code
class SynthesizedImage(nn.Block):
def __init__(self, img_shape, **kwargs):
super(SynthesizedImage, self).__init__(**kwargs)
self.weight = self.params.get('weight', shape=img_shape)
def forward(self):
return self.weight.data()
###Output
_____no_output_____
###Markdown
Next, we define the `get_inits` function. This function creates a model instance for the synthesized image and initializes it to the image `X`. The Gram matrices `styles_Y_gram` of the style image at the various style layers are precomputed before training.
###Code
def get_inits(X, device, lr, styles_Y):
gen_img = SynthesizedImage(X.shape)
gen_img.initialize(init.Constant(X), ctx=device, force_reinit=True)
trainer = gluon.Trainer(gen_img.collect_params(), 'adam',
{'learning_rate': lr})
styles_Y_gram = [gram(Y) for Y in styles_Y]
return gen_img(), styles_Y_gram, trainer
###Output
_____no_output_____
###Markdown
[**Training the Model**] When training the model for style transfer, we continuously extract the content and style features of the synthesized image and then compute the loss function. The training loop is defined below.
###Code
def train(X, contents_Y, styles_Y, device, lr, num_epochs, lr_decay_epoch):
X, styles_Y_gram, trainer = get_inits(X, device, lr, styles_Y)
animator = d2l.Animator(xlabel='epoch', ylabel='loss',
xlim=[10, num_epochs], ylim=[0, 20],
legend=['content', 'style', 'TV'],
ncols=2, figsize=(7, 2.5))
for epoch in range(num_epochs):
with autograd.record():
contents_Y_hat, styles_Y_hat = extract_features(
X, content_layers, style_layers)
contents_l, styles_l, tv_l, l = compute_loss(
X, contents_Y_hat, styles_Y_hat, contents_Y, styles_Y_gram)
l.backward()
trainer.step(1)
if (epoch + 1) % lr_decay_epoch == 0:
trainer.set_learning_rate(trainer.learning_rate * 0.8)
if (epoch + 1) % 10 == 0:
animator.axes[1].imshow(postprocess(X).asnumpy())
animator.add(epoch + 1, [float(sum(contents_l)),
float(sum(styles_l)), float(tv_l)])
return X
###Output
_____no_output_____
###Markdown
Now we [**train the model**]: we first resize the height and width of the content and style images to 300 and 450 pixels, respectively, and use the content image to initialize the synthesized image.
###Code
device, image_shape = d2l.try_gpu(), (450, 300)
net.collect_params().reset_ctx(device)
content_X, contents_Y = get_contents(image_shape, device)
_, styles_Y = get_styles(image_shape, device)
output = train(content_X, contents_Y, styles_Y, device, 0.9, 500, 50)
###Output
_____no_output_____ |
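Finally, we can save the synthesized image to disk for later use. A small sketch using the `postprocess` function defined above (the output filename is arbitrary):

```python
# Save the final synthesized image (pixel values are already clipped to [0, 1])
d2l.plt.imsave('neural-style.jpg', postprocess(output).asnumpy())
```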
Intermediate_ML/5] Cross Validation/cross-validation.ipynb | ###Markdown
In this tutorial, you will learn how to use **cross-validation** for better measures of model performance. IntroductionMachine learning is an iterative process. You will face choices about what predictive variables to use, what types of models to use, what arguments to supply to those models, etc. So far, you have made these choices in a data-driven way by measuring model quality with a validation (or holdout) set. But there are some drawbacks to this approach. To see this, imagine you have a dataset with 5000 rows. You will typically keep about 20% of the data as a validation dataset, or 1000 rows. But this leaves some random chance in determining model scores. That is, a model might do well on one set of 1000 rows, even if it would be inaccurate on a different 1000 rows. At an extreme, you could imagine having only 1 row of data in the validation set. If you compare alternative models, which one makes the best predictions on a single data point will be mostly a matter of luck!In general, the larger the validation set, the less randomness (aka "noise") there is in our measure of model quality, and the more reliable it will be. Unfortunately, we can only get a large validation set by removing rows from our training data, and smaller training datasets mean worse models! What is cross-validation?In **cross-validation**, we run our modeling process on different subsets of the data to get multiple measures of model quality. For example, we could begin by dividing the data into 5 pieces, each 20% of the full dataset. In this case, we say that we have broken the data into 5 "**folds**". Then, we run one experiment for each fold:- In **Experiment 1**, we use the first fold as a validation (or holdout) set and everything else as training data. This gives us a measure of model quality based on a 20% holdout set. - In **Experiment 2**, we hold out data from the second fold (and use everything except the second fold for training the model). The holdout set is then used to get a second estimate of model quality.- We repeat this process, using every fold once as the holdout set. Putting this together, 100% of the data is used as holdout at some point, and we end up with a measure of model quality that is based on all of the rows in the dataset (even if we don't use all rows simultaneously). When should you use cross-validation?Cross-validation gives a more accurate measure of model quality, which is especially important if you are making a lot of modeling decisions. However, it can take longer to run, because it estimates multiple models (one for each fold). So, given these tradeoffs, when should you use each approach?- _For small datasets_, where extra computational burden isn't a big deal, you should run cross-validation.- _For larger datasets_, a single validation set is sufficient. Your code will run faster, and you may have enough data that there's little need to re-use some of it for holdout.There's no simple threshold for what constitutes a large vs. small dataset. But if your model takes a couple minutes or less to run, it's probably worth switching to cross-validation. Alternatively, you can run cross-validation and see if the scores for each experiment seem close. If each experiment yields the same results, a single validation set is probably sufficient. ExampleWe'll work with the same data as in the previous tutorial. We load the input data in `X` and the output data in `y`.
###Code
import pandas as pd
# Read the data
data = pd.read_csv('../input/melbourne-housing-snapshot/melb_data.csv')
# Select subset of predictors
cols_to_use = ['Rooms', 'Distance', 'Landsize', 'BuildingArea', 'YearBuilt']
X = data[cols_to_use]
# Select target
y = data.Price
###Output
_____no_output_____
###Markdown
Then, we define a pipeline that uses an imputer to fill in missing values and a random forest model to make predictions. While it's _possible_ to do cross-validation without pipelines, it is quite difficult! Using a pipeline will make the code remarkably straightforward.
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
my_pipeline = Pipeline(steps=[('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators=50,
random_state=0))
])
###Output
_____no_output_____
###Markdown
We obtain the cross-validation scores with the [`cross_val_score()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function from scikit-learn. We set the number of folds with the `cv` parameter.
###Code
from sklearn.model_selection import cross_val_score
# Multiply by -1 since sklearn calculates *negative* MAE
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=5,
scoring='neg_mean_absolute_error')
print("MAE scores:\n", scores)
###Output
_____no_output_____
###Markdown
The `scoring` parameter chooses a measure of model quality to report: in this case, we chose negative mean absolute error (MAE). The docs for scikit-learn show a [list of options](http://scikit-learn.org/stable/modules/model_evaluation.html). It is a little surprising that we specify *negative* MAE. Scikit-learn has a convention where all metrics are defined so a high number is better. Using negatives here allows them to be consistent with that convention, though negative MAE is almost unheard of elsewhere. We typically want a single measure of model quality to compare alternative models. So we take the average across experiments.
###Code
print("Average MAE score (across experiments):")
print(scores.mean())
###Output
_____no_output_____ |
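As a closing note, the same experiment can be written out with an explicit `KFold` loop, which makes the "one experiment per fold" idea from the introduction concrete. This is a sketch that reuses the `my_pipeline`, `X`, and `y` objects defined above; the numbers will differ slightly from `cross_val_score` because of the shuffled fold assignment.

```python
from sklearn.base import clone
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for train_idx, valid_idx in kf.split(X):
    # Fit a fresh copy of the pipeline on this fold's training rows
    fold_model = clone(my_pipeline)
    fold_model.fit(X.iloc[train_idx], y.iloc[train_idx])
    preds = fold_model.predict(X.iloc[valid_idx])
    fold_scores.append(mean_absolute_error(y.iloc[valid_idx], preds))

print("MAE per fold:", fold_scores)
print("Average MAE:", sum(fold_scores) / len(fold_scores))
```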
Arrow Classification.ipynb | ###Markdown
Classification with Plain SVM
###Code
from skimage.transform import resize
from skimage.io import imread

# Additional imports needed by the cells below
# (paths.list_images is assumed to come from imutils)
import os
import cv2
import numpy as np
import pandas as pd
from imutils import paths
from sklearn.metrics import accuracy_score
# Path to folder containing the datasets
inputPaths = "C://Users//Yash Umale//Documents//7th Sem//IRC//IRC-Rover-Files//Datasets//Creating Datasets//Downloaded Datasets//Final Datasets for Training"
# All possible labels/ directions
labels = []
# List to store the paths of all images in the dataset
imagePaths = list(paths.list_images(inputPaths))
# This list will be used to store all the images in Bitmap format from OpenCV's imread()
images = []
i = 0
for imagePath in imagePaths:
label = imagePath.split(os.path.sep)[-2]
labels.append(label)
image = cv2.imread(imagePath)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (64, 64))
print("Added image: ", i)
i += 1
images.append(image.flatten())
print("Number of images: ", len(images))
print("Number of labels: ", len(labels))
images = np.array(images)
labels = np.array(labels)
df = pd.DataFrame(images)
df['Labels'] = labels
x = df.iloc[:, : -1]
y = df.iloc[:, -1]
# SVM Model Construction
from sklearn import svm
from sklearn.model_selection import GridSearchCV, train_test_split
# Initializing model
svc = svm.SVC(probability = True)
params = {'C' : [0.1, 1, 10, 100], 'gamma' : [0.0001, 0.01, 0.1, 1], 'kernel' : ['rbf', 'poly']}
model = GridSearchCV(svc, params)
# Split the train and test datasets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, stratify = y)
model.fit(x_train, y_train)
# Test the trained model
y_pred = model.predict(x_test)
print("Actual:\n", y_test)
print("Predicted:\n", y_pred)
print("Accuracy: ", (accuracy_score(y_pred, y_test) * 100), "%\n\n")
# Saving the model
import pickle
fileName = "arrowClassifier.sav"
modelPath = "C://Users//Yash Umale//Documents//7th Sem//IRC//IRC-Rover-Files//Saved Models//Arrow Classifier//SVM Model"
os.chdir(modelPath)
pickle.dump(model, open(fileName, 'wb'))
###Output
_____no_output_____ |
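A possible follow-up (not part of the original notebook): reload the pickled model and classify a single image, using the same preprocessing as during training. The image path below is a placeholder.

```python
# Reload the saved classifier and predict the direction of one arrow image
loaded_model = pickle.load(open(os.path.join(modelPath, fileName), 'rb'))

test_image = cv2.imread("path/to/some_arrow.jpg")        # placeholder path
test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2RGB)
test_image = cv2.resize(test_image, (64, 64)).flatten()  # same 64x64 flattening as training
print("Predicted direction:", loaded_model.predict([test_image])[0])
```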
notebooks/CovidData.ipynb | ###Markdown
Importing libraries and loading data files.
###Code
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
us_data = pd.read_csv(r'C:\Users\winra\OneDrive\Documents\GitHub\EECS-731-Project-1-Jimmy-Wrangler-Data-Explorer\data\external\datasets_us.csv')
global_data = pd.read_csv(r'C:\Users\winra\OneDrive\Documents\GitHub\EECS-731-Project-1-Jimmy-Wrangler-Data-Explorer\data\external\datasets_global.csv')
us_data
###Output
_____no_output_____
###Markdown
Joining tables on the basis of date
###Code
all_data = pd.merge(global_data,us_data, on=["Date"])
all_data
###Output
_____no_output_____
###Markdown
Computing the percentage of worldwide cases and deaths that occurred in the US
###Code
all_data['US cases %'] = all_data['US_Cases']/(all_data['Confirmed'])*100
all_data['US Deaths %'] = all_data['US_Deaths']/(all_data['Deaths'])*100
all_data.round(2)
###Output
_____no_output_____
###Markdown
Saving the resulting data to a CSV file.
###Code
all_data.to_csv(r"C:\Users\winra\OneDrive\Documents\GitHub\EECS-731-Project-1-Jimmy-Wrangler-Data-Explorer\data\processed\Result.csv",index=False)
###Output
_____no_output_____
###Markdown
Plotting the percentage graph.
###Code
result_fig = all_data.plot(x ='Date', y= ['US cases %','US Deaths %'], kind = 'line')
result_fig.get_figure().savefig(r"C:\Users\winra\OneDrive\Documents\GitHub\EECS-731-Project-1-Jimmy-Wrangler-Data-Explorer\reports\figures\Cases percentage report.pdf")
###Output
_____no_output_____ |
permafrost/Howto Guide Permafrost Temperatures.ipynb | ###Markdown
Howto Guide Permafrost Temperature Profile Data Dr. Klaus G. Paul for Arctic Basecamp This notebook demonstrates how to work with the [GTN-P Arctic Database permafrost borehole temperature datasets](http://gtnpdatabase.org/boreholes) and perform data cleansing, such that it becomes possible to use these data to compute the thickness of the so-called active layer, the region of permafrost that thaws, and, if the data covers it, the depth of the isothermal permafrost layer. The [Global Terrestrial Network for Permafrost (GTN-P)](http://gtnpdatabase.org/) is a service provided by the [Arctic Portal](https://arcticportal.org/); it allows researchers to upload their observation datasets and interested parties to use these data. The data uploaded are generally "raw" data; this notebook describes some approaches required to turn these data into information. There is an excellent description of how these data are acquired in section 2.5 of the [GTN-P Strategy and Implementation Plan](http://library.arcticportal.org/1938/1/GTNP_-_Implementation_Plan.pdf).
###Code
import pandas as pd
import zipfile
import requests
import io
import numpy as np
from scipy.interpolate import interp1d
from scipy.ndimage import zoom
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
###Output
_____no_output_____
###Markdown
An example with a lot of data We chose the [Larsbreen](http://www.gtnpdatabase.org/boreholes/view/60/) [datasets](http://www.gtnpdatabase.org/datasets/view/1491) and take one that has a lot of data, in particular `1491`. [Larsbreen is a glacier site on Svalbard](https://www.openstreetmap.org/way/673494772map=6/78.175/15.552), the islands north of Norway, also known as Spitsbergen. We do a live download of the zip file with the data and process it directly, without storing it.
###Code
r = requests.get("http://gtnpdatabase.org/rest/boreholes/dlpackage/60/true")
if r.ok:
zf = zipfile.ZipFile(io.BytesIO(r.content))
for f in zf.filelist:
if "1491" in f.filename:
dfData = pd.read_csv(io.StringIO(zf.read(f).decode("ascii")))
dfData.head()
###Output
_____no_output_____
###Markdown
The data columns are* a datetime column named `Date/Depth`, which can be a bit misleading: the column itself is a datetime, while `Depth` refers to the remaining columns* a number of columns labelled with numbers; these are generally floating point numbers indicating the depth of the measurement, in meters. Positive numbers are depths, negative numbers are above the zero level. Note the difference to [borehole active layer/annual thaw depth measurements](http://www.gtnpdatabase.org/activelayers), which are given in centimeters. We want time series dataframes, so we convert the `Date/Depth` column to a datetime index.
###Code
dfData.index = pd.to_datetime(dfData["Date/Depth"])
dfData.index.name = None
del dfData["Date/Depth"]
dfData.head()
###Output
_____no_output_____
###Markdown
Data Quality The data are raw and have been uploaded by multiple individuals. The example chosen specifically shows some of the challenges in data cleansing. Let's have a look at the time series data.
###Code
sns.set()
_ =dfData.plot(figsize=(10,5))
###Output
_____no_output_____
###Markdown
This looks like a data quality issue, or rather two:-* a set of lonesome datapoints from around 1900, then a big gap* lots of spikes out towards what seems to be a missing value indicator (which also seems to be used for the 1900-ish dataset). Let's check:-
###Code
dfData.describe(percentiles=[0.01,0.5,0.99])
###Output
_____no_output_____
###Markdown
Let's remove anything below -900. Well, actually, as this is degrees Celsius, let's remove any temperature below -273.15 °C, i.e. below absolute zero.
###Code
dfData = dfData.apply(lambda x: np.where(x < -273.15,np.nan,x))
_ = dfData.plot(figsize=(10,5))
###Output
_____no_output_____
###Markdown
This looks like borehole temperatures with seasonal variation. While we are at it, let's check for NaN datasets and remove blank lines.
###Code
print("number of recordings {}, {} of which are completely out of range".format(len(dfData),
len(dfData)-len(dfData.dropna(axis=0,how="all"))))
dfData.dropna(axis=0,how="all",inplace=True)
###Output
number of recordings 60581, 31 of which are completely out of range
###Markdown
Permafrost Thawed / Active Layer Thickness This is a cross section of expected measurements, taken from Wikipedia. Let's look at our dataset as a heatmap, well, coldmap 8-)
###Code
fig, ax = plt.subplots(figsize=(30,5))
plt.minorticks_off()
cm = LinearSegmentedColormap.from_list("permafrost", ["#C0BFFF","#E2E2FF","#FFFFFF","#904323","#531910"], N=250)
a = ax.contourf(dfData.index,dfData.columns,dfData.transpose(),cmap=cm,vmin=-3,vmax=3,levels=250)
fig.colorbar(a,ax=ax)
ax.invert_yaxis()
ax.set_xlabel("datetime")
_ = ax.set_ylabel("depth [m]")
###Output
_____no_output_____
###Markdown
So, to find the thickness of what is called the active layer (this is the thawed layer, if present), we need to look at zero crossings of the temperature, which is given in °C. This needs to be done for every recording, i.e. every row in the dataset. Let's pick a random datapoint that has both positive and negative temperatures first.
###Code
while True:
dfSample = dfData.sample(1)
if dfSample.max().max()>0 and dfSample.min().min()<0:
break
dfSample
###Output
_____no_output_____
###Markdown
Let's plot the data and have a look at it.
###Code
dfSample = dfSample.transpose()
dfSample.columns = ["temperature"]
dfSample["depth"] = pd.to_numeric(dfSample.index)
dfSample.plot(y="depth",x="temperature",figsize=(5,8)).invert_yaxis()
dfSample
###Output
_____no_output_____
###Markdown
We need to find the zero crossing of the temperature. This can readily be done by using [Scipy's](https://scipy.org/) [ `interp1d` ](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html) one dimensional interpolation routine. The x values are the temperature values observed, and the y values are the depth values.
###Code
f = interp1d(dfSample.temperature.values,dfSample.depth.values)
zerocrossing = f(0.)[()]
zerocrossing
ax = dfSample.plot(y="depth", x="temperature", figsize=(5, 8))
ax.invert_yaxis()
sns.lineplot(x=[dfSample.temperature.min(), dfSample.temperature.max()], y=[zerocrossing, zerocrossing], ax=ax)
###Output
_____no_output_____
###Markdown
This seems to produce the correct results. Let's apply this to the complete dataset.
###Code
datetime_dates = []
zerocrossings = []
for i,r in dfData.iterrows():
values = r.dropna()
if values.max() < 0 or values.min() > 0:
zerocrossing = np.nan
else:
f = interp1d(values,pd.to_numeric(values.index))
zerocrossing = f(0.)[()]
datetime_dates.append(i)
zerocrossings.append(zerocrossing)
ax = pd.DataFrame(zerocrossings,index=datetime_dates).rename(columns={0:"depth of 0°C temperature [m]"}).plot(figsize=(20,5),title="Active Layer Thickness")
ax.invert_yaxis()
ax.set_xlabel("datetime")
_ = ax.set_ylabel("depth [m]")
fig, ax = plt.subplots(figsize=(30,5))
plt.minorticks_off()
cm = LinearSegmentedColormap.from_list("permafrost", ["#C0BFFF","#E2E2FF","#FFFFFF","#904323","#531910"], N=250)
a = ax.contourf(dfData.index,dfData.columns,dfData.transpose(),cmap=cm,vmin=-3,vmax=3,levels=250)
fig.colorbar(a,ax=ax)
pd.DataFrame(zerocrossings,index=datetime_dates).rename(columns={0:"depth of 0°C temperature [m]"}).plot(figsize=(20,5),title="Active Layer Thickness",ax=ax,color="tomato")
ax.invert_yaxis()
ax.set_xlabel("datetime")
_ = ax.set_ylabel("depth [m]")
###Output
_____no_output_____
###Markdown
Isothermal Permafrost

In the previous section we approximated the depth of the active layer by computing the zero crossing of the measured soil temperature. Permafrost also exposes a zone called the isothermal permafrost, which is defined as the region unaffected by seasonal temperature variation. Following [Smith, S.L. et al. Thermal state of permafrost in North America: a contribution to the international polar year](https://onlinelibrary.wiley.com/doi/10.1002/ppp.690), we take -3 °C as the isothermal temperature threshold. Let's look at the temperature map again:
###Code
fig, ax = plt.subplots(figsize=(30,5))
plt.minorticks_off()
cm = LinearSegmentedColormap.from_list("permafrost", ["#C0BFFF","#E2E2FF","#FFFFFF","#904323","#531910"], N=250)
a = ax.contourf(dfData.index,dfData.columns,dfData.transpose(),cmap=cm,vmin=-3,vmax=3,levels=250)
fig.colorbar(a,ax=ax)
ax.invert_yaxis()
ax.set_xlabel("datetime")
_ = ax.set_ylabel("depth [m]")
###Output
_____no_output_____
###Markdown
Side excursion --- Data Quality

There appears to be another data quality issue around April 2014. Temperatures are suddenly around room temperature, roughly 20 °C. The reason for this behaviour is not clear. The data still seem to be a measured quantity; it very much looks like daily temperature variations.
###Code
_ = dfData[(pd.to_datetime("2014-04-01") <= dfData.index)&(dfData.index <= pd.to_datetime("2014-05-01"))].plot()
###Output
_____no_output_____
###Markdown
There appear to be two regimes of compromised records: one from 2014-04-18 02:00 until 2014-04-24 14:02, and a second one consisting of two records at 2014-09-30 07:39:00 and 2014-09-30 08:00:00. One recommended way to treat this would be to remove these data from the dataset by changing them to NaN (Not a Number), as sketched below.
###Code
dfData[dfData.min(axis=1)>19]
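# A hedged sketch of the treatment recommended above (applied to a copy so the
# analysis below stays unchanged); the ">19 °C" criterion and the name dfClean are
# illustrative assumptions based on the room-temperature readings seen above.
dfClean = dfData.copy()
dfClean.loc[dfClean.min(axis=1) > 19] = np.nan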
###Output
_____no_output_____
###Markdown
Isothermal Layer

To compute the depth of the isothermal layer, we compute the difference between consecutive measurements in time. On its own, however, that would leave a large inaccuracy between the active layer depths and the isothermal zone, so we also apply a temperature criterion: according to [Smith, S.L. et al. Thermal state of permafrost in North America: a contribution to the international polar year](https://onlinelibrary.wiley.com/doi/10.1002/ppp.690), the isothermal zone is a zone with a temperature of at most -3 °C. According to [Biskaborn et al., Permafrost is warming at a global scale](https://doi.org/10.1038/s41467-018-08240-4), the isothermal layer exposes a temperature variation of less than 0.1 °C.
###Code
# filter for below -3 deg C
dfDelta = dfData[dfData < -3.].diff(axis=0).dropna(axis=0,how="all")
dfDelta.columns = pd.to_numeric(dfDelta.columns)
# filter for 0.1 deg C between measurements
dfDeltaHaveIso = dfDelta[(-0.1 <= dfDelta)&(dfDelta <= 0.1)].dropna(axis=0,how="all")
# slightly tricky, we take the lowest index, which is equivalent to the minimum depth, as the value
dfIsoLayerDepth = pd.DataFrame(dfDeltaHaveIso.apply(lambda x : x.dropna().index.min(),axis=1)).rename(columns={0:"depth of near const temperature [m]"})
dfIsoLayerDepth
_ = dfIsoLayerDepth.plot(figsize=(30,5)).invert_yaxis()
###Output
_____no_output_____
###Markdown
The above dataset exposes some jaggedness due to the nature of the depth values (1 meter increments). We can do a sanity check by comparing the isothermal layer depth with the active layer/thaw depth values computed earlier, to confirm that the active layer depth is less than the isothermal layer depth, as can be seen below.
###Code
ax = dfIsoLayerDepth.plot(figsize=(30,5))
ax.invert_yaxis()
_ = pd.DataFrame(zerocrossings,index=datetime_dates).rename(columns={0:"depth of 0°C temperature [m]"}).plot(ax=ax)
###Output
_____no_output_____
###Markdown
Handling Datasets with Sparse Data

The data format returned is a tabular csv with depth values in meters as columns and datetimes as rows. Some datasets have varying depth values over time, which unfortunately makes the tabular csv format a bit difficult to use.
###Code
r = requests.get("http://gtnpdatabase.org/rest/boreholes/dlpackage/88/true")
if r.ok:
zf = zipfile.ZipFile(io.BytesIO(r.content))
for f in zf.filelist:
if "656" in f.filename:
dfData = pd.read_csv(io.StringIO(zf.read(f).decode("ascii")))
dfData.head()
###Output
_____no_output_____
###Markdown
Let's convert the datetime column to a datetime index to get a time series dataframe, as before.
###Code
dfData.index = pd.to_datetime(dfData["Date/Depth"])
dfData.index.name = None
del dfData["Date/Depth"]
dfData.plot()
###Output
_____no_output_____
###Markdown
This plot is not very useful, as it contains way too many columns, and most of them are filled with NaN values. Remove those first, as before, by declaring values below the absolute minimum temperature as invalid.
###Code
dfData = dfData.apply(lambda x: np.where(x < -273.15,np.nan,x))
dfData.plot()
###Output
_____no_output_____
###Markdown
Resampling/Remapping

Now the data seem good, but there are still too many columns, i.e. depths.
###Code
fig, ax = plt.subplots(figsize=(30,5))
dfDataTransposed = dfData.transpose()
dfDataTransposed.columns = ["{:%Y-%m-%d}".format(d) for d in dfDataTransposed.columns]
cm = LinearSegmentedColormap.from_list("permafrost", ["#C0BFFF","#E2E2FF","#FFFFFF","#904323","#531910"], N=250)
ax = sns.heatmap(dfDataTransposed,ax=ax,cmap=cm,vmin=-3,vmax=3)
###Output
_____no_output_____
###Markdown
Many of the entries are empty in this dataset, which makes it hard to understand.
###Code
print("{} values in {} cells ({:.1f}%).".format(dfData.count().sum(),len(dfData)*len(dfData.columns),dfData.count().sum()/(len(dfData)*len(dfData.columns))*100))
dfData.head()
###Output
24763 values in 96951 cells (25.5%).
###Markdown
In order to look at multi-year changes in the thermal properties of these borehole observations, we need to remap them onto a stable vertical measurement grid. We use linear interpolation of the measured values to map them onto the grid defined.

There is no clear standard for such measurements. The [GTN-P report](http://library.arcticportal.org/1938/1/GTNP_-_Implementation_Plan.pdf) quotes [Harris et al., Permafrost monitoring in the high mountains of Europe: the PACE Project in its global context](https://onlinelibrary.wiley.com/doi/abs/10.1002/ppp.377) as suggesting 0.2, 0.4, 0.8, 1.2, 1.6, 2, 2.5, 3, 3.5, 4, 5, 7, 9, 10, 11, 13, 15, 20, 25, 30, 40, 50, 60, 70, 80, 85, 90, 95, 97.5 and 100 m. A [google search for _standard for the thermistor spacing for borehole_](https://www.google.com/search?q=standard+for+the+thermistor+spacing+for+borehole) reveals more sources such as [MANUAL FOR MONITORING AND REPORTING PERMAFROST MEASUREMENTS](https://permafrost.gi.alaska.edu/sites/default/files/TSP_manual.pdf).

So, rather than trying to adhere to multiple standards, we re-map the value ranges to the depths most commonly present in the datasets. As a rather arbitrary selection, we chose the depths whose observation counts are at or above the 95th percentile. This resulted in the following depths: -2.00, -0.50, 0.00, 0.01, 0.02, 0.10, 0.20, 0.25, 0.40, 0.50, 0.75, 0.80, 1.00, 1.20, 1.50, 1.60, 2.00, 2.50, 3.00, 3.20, 3.50, 4.00, 4.50, 5.00, 5.50, 6.00, 7.00, 7.50, 8.00, 9.00, 9.85, 10.00, 11.00, 12.00, 15.00, 20.00, 25.00 m. Visual inspection of the data below also suggests 30, 40, 50, 60, 70, 80, 100, 120, and 140 m depths.

We interpolate inside the above regime, i.e. the standard depths as defined here will be clipped to match the range of the source dataset. The main purpose of this is to be able to automate this step without sacrificing the quality of the results.
###Code
dfBoreholeStats = pd.read_csv("./sun/temperature_depth_stats.tsv",delimiter="\t")
dfBoreholeStats.plot.line(x="depth",y="count",figsize=(15,5), title="total number of datapoints sampled at depth")
dfBoreholeStats[dfBoreholeStats.depth<=25.].plot.line(x="depth",y="count",figsize=(15,5), title="total number of datapoints sampled at depths <= 25 m")
_ = dfBoreholeStats[dfBoreholeStats.depth>25.].plot.line(x="depth",y="count",figsize=(15,5), title="total number of datapoints sampled at depths > 25 m")
print(", ".join(["{:.2f}".format(pd.to_numeric(d)) for d in dfBoreholeStats[dfBoreholeStats["count"] >= dfBoreholeStats["count"].quantile(0.95)].depth.values]))
standard_depths = np.array([-2.00, -0.50, 0.00, 0.01, 0.02, 0.10, 0.20, 0.25, 0.40, 0.50, 0.75, 0.80, 1.00,
1.20, 1.50, 1.60, 2.00, 2.50, 3.00, 3.20, 3.50, 4.00, 4.50, 5.00, 5.50, 6.00,
7.00, 7.50, 8.00, 9.00, 9.85, 10.00, 11.00, 12.00, 15.00, 20.00, 25.00, 30.0,
40.0, 50.0, 60.0, 70.0, 80.0, 100.0, 120.0, 140.0])
depths = pd.to_numeric(dfData.columns)
model = standard_depths[(depths.min() <= standard_depths)&(standard_depths<=depths.max())]
alldata = {}
for i,r in dfData.iterrows():
measurements = pd.Series(r.values,index=depths).dropna()
f = interp1d(measurements.index.values,measurements.values,fill_value="extrapolate")
interpolated_values = f(model)[()]
alldata[i] = interpolated_values
ddf = pd.DataFrame(alldata).transpose()
ddf.columns = model
ddf
fig, ax = plt.subplots(figsize=(30,5))
plt.minorticks_off()
cm = LinearSegmentedColormap.from_list("permafrost", ["#C0BFFF","#E2E2FF","#FFFFFF","#904323","#531910"], N=250)
a = ax.contourf(ddf.index,ddf.columns,ddf.transpose(),cmap=cm,vmin=-3,vmax=3,levels=250)
fig.colorbar(a,ax=ax)
ax.invert_yaxis()
ax.set_xlabel("datetime")
_ = ax.set_ylabel("depth [m]")
###Output
_____no_output_____
###Markdown
Storing Data

Let's look back at the original data, where only about 25% of all cells were populated with measured values. It is not practical to store the data in columns representing depths. As a data store, we use a key-value pattern, in fact multiple keys and one value. The keys are

* datetime
* depth
* dataset ID

and the value is the temperature. This can readily be stored in a SQL table, which is a good choice, as the data we deal with are highly structured. One could also use a NoSQL store, but ultimately the complexities of dealing with varying depths stay in the business logic.

To transform the data from tabular form to keys/value, use [`pandas.stack`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html). For reading the data back into a tabular data frame, use [`pandas.pivot_table`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html).
###Code
dfStacked = dfData.stack().reset_index().rename(columns={"level_0":"datetime_date","level_1":"depth",0:"temperature"})
dfStacked["dataset"] = "656"
dfStacked.head()
# We are not doing a SQL store in this example
# dfStacked.to_sql("table",con=conn)
# dfStacked = pd.read_sql("SELECT * FROM table WHERE dataset='656'")
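# A minimal, hedged sketch of the SQL round trip described above, using an
# in-memory SQLite database (the table name "temperature" is our own choice):
import sqlite3
conn = sqlite3.connect(":memory:")
dfStacked.to_sql("temperature", con=conn, index=False)
dfRoundTrip = pd.read_sql("SELECT * FROM temperature WHERE dataset='656'", con=conn)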
dfStacked.pivot_table(index="datetime_date",values="temperature",columns="depth").head()
###Output
_____no_output_____
###Markdown
Dealing with Resolution Loss

A few datasets, unfortunately, were found to have duplicate timestamps. The cause is not clear; what has been observed in the past, though, is an unfortunate combination of formatting cells as dates (i.e. YYYY-mm-dd) instead of timestamps (YYYY-mm-dd HH:MM) and then storing the data as csv. Some spreadsheet tools (MS Excel being one of them) then store the csv at reduced resolution, i.e. a timestamp (date plus time) may be clipped to a date. The cell below includes a small illustration of this effect.
###Code
r = requests.get("http://gtnpdatabase.org/rest/boreholes/dlpackage/118/true") #("http://gtnpdatabase.org/rest/boreholes/dlpackage/1191/true")
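# A hedged illustration of the resolution loss described above: two distinct
# timestamps collapse onto the same value once the time component is dropped.
ts = pd.to_datetime(["2014-09-30 07:39:00", "2014-09-30 08:00:00"])
print(ts.normalize().duplicated().any())  # True: the readings become indistinguishable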
###Output
_____no_output_____
###Markdown
The data initially look inconspicuous, and the `Date/Depth` column appears to contain proper datetime timestamps.
###Code
if r.ok:
zf = zipfile.ZipFile(io.BytesIO(r.content))
for f in zf.filelist:
if "618" in f.filename:
dfData = pd.read_csv(io.StringIO(zf.read(f).decode("ascii")))
dfData.head()
###Output
_____no_output_____
###Markdown
One way of checking for duplication is to call `pandas.DataFrame.duplicated`; another is to check whether `len(DataFrame) > DataFrame["Date/Depth"].nunique()`. It turns out there is a considerable number of duplicates.
###Code
print("{} records, {} of which are duplicate Date/Depth fields".format(len(dfData),dfData.duplicated(subset=["Date/Depth"],keep=False).sum()))
dfData[dfData.duplicated(subset=["Date/Depth"],keep=False)].sort_values("Date/Depth")
###Output
1511 records, 320 of which are duplicate Date/Depth fields
###Markdown
To deal with this, we loop through the duplicate entries and take the mean of the duplicate values. For some data files it might seem logical to pad the data with a synthetic hour entry, as it appears that one reading was taken 12 hrs before the other, but that would amount to speculation. In addition, we are interested in longer-term drifts of permafrost temperatures, so the sub-daily detail matters little.
###Code
dfData.index = pd.to_datetime(dfData["Date/Depth"])
dfData.index.name = None
del dfData["Date/Depth"]
dfData = dfData.apply(lambda x: np.where(x < -273.15,np.nan,x))
dfData = dfData.dropna(axis=0,how="all")
alldata = []
for dt in dfData.index.unique():
    ddf = dfData[dfData.index == dt]
    # average duplicate recordings that share the same timestamp (column-wise mean)
    newvalue = dict(zip(list(ddf.columns), ddf.mean(axis=0).values))
    newvalue["datetime_date"] = dt
    alldata.append(newvalue)
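# Hedged aside: an equivalent, more idiomatic aggregation at this point would be
#   dfDedup = dfData.groupby(level=0).mean()
# which averages all rows sharing a timestamp in a single call (dfDedup is an
# illustrative name, not used below).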
dfData = pd.DataFrame(alldata)#.set_index("datetime_date")
#dfData.index.name = None
dfData
print("{} records, {} of which are duplicate Date/Depth fields".format(len(dfData),dfData.duplicated(subset="datetime_date",keep=False).sum()))
###Output
1273 records, 0 of which are duplicate Date/Depth fields
2_UNet.ipynb | ###Markdown
Model training hyperparameters
###Code
# Imports needed by the cells below (missing from this excerpt)
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.utils import shuffle
BEST_PATH = './models/best_Unet.h5'
DISP_STEPS = 100
TRAINING_EPOCHS = 500
BATCH_SIZE = 32
LEARNING_RATE = 0.001
###Output
_____no_output_____
###Markdown
Data loading
###Code
l = np.load('./data/pap_dataset.npz')
raw_input = l['raw_input']
raw_label = l['raw_label']
test_input = l['test_input']
test_label = l['test_label']
MAXS = l['MAXS']
MINS = l['MINS']
SCREEN_SIZE = l['SCREEN_SIZE']
print(raw_input.shape)
print(raw_label.shape)
print(test_input.shape)
print(test_label.shape)
raw_input = raw_input.astype(np.float32)
raw_label = raw_label.astype(np.float32)
test_input = test_input.astype(np.float32)
test_label = test_label.astype(np.float32)
num_train = int(raw_input.shape[0]*.7)
raw_input, raw_label = shuffle(raw_input, raw_label, random_state=4574)
train_input, train_label = raw_input[:num_train, ...], raw_label[:num_train, ...]
val_input, val_label = raw_input[num_train:, ...], raw_label[num_train:, ...]
train_dataset = tf.data.Dataset.from_tensor_slices((train_input, train_label))
train_dataset = train_dataset.cache().shuffle(BATCH_SIZE*50).batch(BATCH_SIZE)
val_dataset = tf.data.Dataset.from_tensor_slices((val_input, val_label))
val_dataset = val_dataset.cache().shuffle(BATCH_SIZE*50).batch(BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices((test_input, test_label))
test_dataset = test_dataset.batch(BATCH_SIZE)
class ConvBlock(layers.Layer):
def __init__(self, filters, kernel_size, dropout_rate):
super(ConvBlock, self).__init__()
self.filters = filters
self.kernel_size = kernel_size
self.dropout_rate = dropout_rate
self.conv1 = layers.Conv2D(self.filters, self.kernel_size,
activation='relu', kernel_initializer='he_normal', padding='same')
self.batch1 = layers.BatchNormalization()
self.drop = layers.Dropout(self.dropout_rate)
self.conv2 = layers.Conv2D(self.filters, self.kernel_size,
activation='relu', kernel_initializer='he_normal', padding='same')
self.batch2 = layers.BatchNormalization()
def call(self, inp):
inp = self.batch1(self.conv1(inp))
inp = self.drop(inp)
inp = self.batch2(self.conv2(inp))
return inp
class DeconvBlock(layers.Layer):
def __init__(self, filters, kernel_size, strides):
super(DeconvBlock, self).__init__()
self.filters = filters
self.kernel_size = kernel_size
self.strides = strides
self.deconv1 = layers.Conv2DTranspose(self.filters, self.kernel_size, strides=self.strides, padding='same')
def call(self, inp):
inp = self.deconv1(inp)
return inp
class UNet(Model):
def __init__(self):
super(UNet, self).__init__()
self.conv_block1 = ConvBlock(32, (2, 2), 0.1)
self.pool1 = layers.MaxPooling2D()
self.conv_block2 = ConvBlock(64, (2, 2), 0.2)
self.pool2 = layers.MaxPooling2D()
self.conv_block3 = ConvBlock(128, (2, 2), 0.2)
self.deconv_block1 = DeconvBlock(64, (2, 2), (2, 2))
self.padding = layers.ZeroPadding2D(((1, 0), (0, 1)))
self.conv_block4 = ConvBlock(64, (2, 2), 0.2)
self.deconv_block2 = DeconvBlock(32, (2, 2), (2, 2))
self.conv_block5 = ConvBlock(32, (2, 2), 0.1)
self.output_conv = layers.Conv2D(1, (1, 1), activation='sigmoid')
def call(self, inp):
conv1 = self.conv_block1(inp)
pooled1 = self.pool1(conv1)
conv2 = self.conv_block2(pooled1)
pooled2 = self.pool2(conv2)
bottom = self.conv_block3(pooled2)
deconv1 = self.padding(self.deconv_block1(bottom))
deconv1 = layers.concatenate([deconv1, conv2])
deconv1 = self.conv_block4(deconv1)
deconv2 = self.deconv_block2(deconv1)
deconv2 = layers.concatenate([deconv2, conv1])
deconv2 = self.conv_block5(deconv2)
return self.output_conv(deconv2)
# loss inputs should be masked
loss_object = tf.keras.losses.MeanSquaredError()
def loss_function(model, inp, tar):
    # Channel 1 of the input appears to act as an observation mask; multiplying by
    # (1 - mask) restricts the MSE to the region the model has to reconstruct.
    masked_real = tar * (1 - inp[..., 1:2])
    masked_pred = model(inp) * (1 - inp[..., 1:2])
    return loss_object(masked_real, masked_pred)
unet_model = UNet()
opt = tf.optimizers.Adam(learning_rate=LEARNING_RATE)
@tf.function
def train(loss_function, model, opt, inp, tar):
    with tf.GradientTape() as tape:
        loss = loss_function(model, inp, tar)
    # compute gradients outside the tape context and apply them
    gradients = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
checkpoint_path = "./checkpoints/UNet_prototype"
ckpt = tf.train.Checkpoint(unet_model=unet_model,
opt=opt)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=10)
writer = tf.summary.create_file_writer('tmp')
prev_test_loss = 100.0
with writer.as_default():
with tf.summary.record_if(True):
for epoch in range(TRAINING_EPOCHS):
for step, (inp, tar) in enumerate(train_dataset):
train(loss_function, unet_model, opt, inp, tar)
loss_values = loss_function(unet_model, inp, tar)
tf.summary.scalar('loss', loss_values, step=step)
if step % DISP_STEPS == 0:
test_loss = 0
for step_, (inp_, tar_) in enumerate(test_dataset):
test_loss += loss_function(unet_model, inp_, tar_)
if step_ > DISP_STEPS:
test_loss /= DISP_STEPS
break
if test_loss.numpy() < prev_test_loss:
ckpt_save_path = ckpt_manager.save()
prev_test_loss = test_loss.numpy()
print('Saving checkpoint at {}'.format(ckpt_save_path))
print('Epoch {} batch {} train loss: {:.4f} test loss: {:.4f}'
.format(epoch, step, loss_values.numpy(), test_loss.numpy()))
###Output
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-1
Epoch 0 batch 0 train loss: 0.0394 test loss: 0.0344
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-2
Epoch 0 batch 100 train loss: 0.0069 test loss: 0.0060
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-3
Epoch 0 batch 200 train loss: 0.0030 test loss: 0.0033
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-4
Epoch 0 batch 300 train loss: 0.0025 test loss: 0.0025
Epoch 0 batch 400 train loss: 0.0016 test loss: 0.0025
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-5
Epoch 0 batch 500 train loss: 0.0024 test loss: 0.0020
Epoch 0 batch 600 train loss: 0.0022 test loss: 0.0021
Epoch 0 batch 700 train loss: 0.0019 test loss: 0.0020
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-6
Epoch 0 batch 800 train loss: 0.0013 test loss: 0.0016
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-7
Epoch 0 batch 900 train loss: 0.0020 test loss: 0.0016
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-8
Epoch 0 batch 1000 train loss: 0.0022 test loss: 0.0015
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-9
Epoch 0 batch 1100 train loss: 0.0014 test loss: 0.0014
Epoch 0 batch 1200 train loss: 0.0016 test loss: 0.0014
Epoch 1 batch 0 train loss: 0.0019 test loss: 0.0014
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-10
Epoch 1 batch 100 train loss: 0.0012 test loss: 0.0014
Epoch 1 batch 200 train loss: 0.0017 test loss: 0.0015
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-11
Epoch 1 batch 300 train loss: 0.0010 test loss: 0.0013
Epoch 1 batch 400 train loss: 0.0019 test loss: 0.0013
Epoch 1 batch 500 train loss: 0.0008 test loss: 0.0013
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-12
Epoch 1 batch 600 train loss: 0.0016 test loss: 0.0012
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-13
Epoch 1 batch 700 train loss: 0.0010 test loss: 0.0011
Epoch 1 batch 800 train loss: 0.0011 test loss: 0.0013
Epoch 1 batch 900 train loss: 0.0015 test loss: 0.0012
Epoch 1 batch 1000 train loss: 0.0010 test loss: 0.0012
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-14
Epoch 1 batch 1100 train loss: 0.0009 test loss: 0.0011
Epoch 1 batch 1200 train loss: 0.0012 test loss: 0.0011
Epoch 2 batch 0 train loss: 0.0016 test loss: 0.0012
Epoch 2 batch 100 train loss: 0.0008 test loss: 0.0011
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-15
Epoch 2 batch 200 train loss: 0.0018 test loss: 0.0011
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-16
Epoch 2 batch 300 train loss: 0.0010 test loss: 0.0010
Epoch 2 batch 400 train loss: 0.0013 test loss: 0.0011
Epoch 2 batch 500 train loss: 0.0016 test loss: 0.0012
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-17
Epoch 2 batch 600 train loss: 0.0014 test loss: 0.0010
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-18
Epoch 2 batch 700 train loss: 0.0013 test loss: 0.0010
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-19
Epoch 2 batch 800 train loss: 0.0011 test loss: 0.0010
Epoch 2 batch 900 train loss: 0.0007 test loss: 0.0010
Epoch 2 batch 1000 train loss: 0.0007 test loss: 0.0010
Epoch 2 batch 1100 train loss: 0.0008 test loss: 0.0010
Epoch 2 batch 1200 train loss: 0.0007 test loss: 0.0010
Epoch 3 batch 0 train loss: 0.0007 test loss: 0.0011
Epoch 3 batch 100 train loss: 0.0013 test loss: 0.0010
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-20
Epoch 3 batch 200 train loss: 0.0008 test loss: 0.0010
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-21
Epoch 3 batch 300 train loss: 0.0008 test loss: 0.0010
Epoch 3 batch 400 train loss: 0.0009 test loss: 0.0010
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-22
Epoch 3 batch 500 train loss: 0.0007 test loss: 0.0009
Epoch 3 batch 600 train loss: 0.0009 test loss: 0.0011
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-23
Epoch 3 batch 700 train loss: 0.0006 test loss: 0.0009
Epoch 3 batch 800 train loss: 0.0006 test loss: 0.0009
Epoch 3 batch 900 train loss: 0.0008 test loss: 0.0010
Epoch 3 batch 1000 train loss: 0.0012 test loss: 0.0010
Epoch 3 batch 1100 train loss: 0.0007 test loss: 0.0009
Epoch 3 batch 1200 train loss: 0.0011 test loss: 0.0009
Epoch 4 batch 0 train loss: 0.0008 test loss: 0.0009
Epoch 4 batch 100 train loss: 0.0005 test loss: 0.0010
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-24
Epoch 4 batch 200 train loss: 0.0006 test loss: 0.0009
Epoch 4 batch 300 train loss: 0.0008 test loss: 0.0009
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-25
Epoch 4 batch 400 train loss: 0.0009 test loss: 0.0009
Epoch 4 batch 500 train loss: 0.0007 test loss: 0.0009
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-26
Epoch 4 batch 600 train loss: 0.0007 test loss: 0.0009
Epoch 4 batch 700 train loss: 0.0006 test loss: 0.0009
Epoch 4 batch 800 train loss: 0.0010 test loss: 0.0010
Epoch 4 batch 900 train loss: 0.0009 test loss: 0.0009
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-27
Epoch 4 batch 1000 train loss: 0.0010 test loss: 0.0009
Epoch 4 batch 1100 train loss: 0.0008 test loss: 0.0009
Epoch 4 batch 1200 train loss: 0.0009 test loss: 0.0009
Epoch 5 batch 0 train loss: 0.0008 test loss: 0.0009
Epoch 5 batch 100 train loss: 0.0009 test loss: 0.0009
Epoch 5 batch 200 train loss: 0.0006 test loss: 0.0009
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-28
Epoch 5 batch 300 train loss: 0.0006 test loss: 0.0009
Epoch 5 batch 400 train loss: 0.0005 test loss: 0.0009
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-29
Epoch 5 batch 500 train loss: 0.0008 test loss: 0.0008
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-30
Epoch 5 batch 600 train loss: 0.0009 test loss: 0.0008
Epoch 5 batch 700 train loss: 0.0007 test loss: 0.0009
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-31
Epoch 5 batch 800 train loss: 0.0011 test loss: 0.0008
Epoch 5 batch 900 train loss: 0.0010 test loss: 0.0009
Epoch 5 batch 1000 train loss: 0.0013 test loss: 0.0009
Epoch 5 batch 1100 train loss: 0.0007 test loss: 0.0009
Epoch 5 batch 1200 train loss: 0.0005 test loss: 0.0008
Epoch 6 batch 0 train loss: 0.0007 test loss: 0.0009
Epoch 6 batch 100 train loss: 0.0007 test loss: 0.0009
Epoch 6 batch 200 train loss: 0.0007 test loss: 0.0008
Epoch 6 batch 300 train loss: 0.0008 test loss: 0.0008
Epoch 6 batch 400 train loss: 0.0006 test loss: 0.0008
Epoch 6 batch 500 train loss: 0.0011 test loss: 0.0008
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-32
Epoch 6 batch 600 train loss: 0.0014 test loss: 0.0008
Epoch 6 batch 700 train loss: 0.0014 test loss: 0.0009
Epoch 6 batch 800 train loss: 0.0009 test loss: 0.0011
Epoch 6 batch 900 train loss: 0.0008 test loss: 0.0009
Epoch 6 batch 1000 train loss: 0.0009 test loss: 0.0008
Epoch 6 batch 1100 train loss: 0.0006 test loss: 0.0008
Epoch 6 batch 1200 train loss: 0.0010 test loss: 0.0009
Epoch 7 batch 0 train loss: 0.0010 test loss: 0.0008
Epoch 7 batch 100 train loss: 0.0007 test loss: 0.0008
Epoch 7 batch 200 train loss: 0.0005 test loss: 0.0008
Epoch 7 batch 300 train loss: 0.0008 test loss: 0.0009
Epoch 7 batch 400 train loss: 0.0006 test loss: 0.0008
Epoch 7 batch 500 train loss: 0.0008 test loss: 0.0010
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-33
Epoch 7 batch 600 train loss: 0.0006 test loss: 0.0008
Epoch 7 batch 700 train loss: 0.0007 test loss: 0.0008
Epoch 7 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 7 batch 900 train loss: 0.0007 test loss: 0.0008
Epoch 7 batch 1000 train loss: 0.0009 test loss: 0.0008
Epoch 7 batch 1100 train loss: 0.0007 test loss: 0.0008
Epoch 7 batch 1200 train loss: 0.0005 test loss: 0.0009
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-34
Epoch 8 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 8 batch 100 train loss: 0.0008 test loss: 0.0008
Epoch 8 batch 200 train loss: 0.0006 test loss: 0.0008
Epoch 8 batch 300 train loss: 0.0008 test loss: 0.0009
Epoch 8 batch 400 train loss: 0.0007 test loss: 0.0008
Epoch 8 batch 500 train loss: 0.0006 test loss: 0.0008
Epoch 8 batch 600 train loss: 0.0011 test loss: 0.0008
Epoch 8 batch 700 train loss: 0.0006 test loss: 0.0008
Epoch 8 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 8 batch 900 train loss: 0.0007 test loss: 0.0008
Epoch 8 batch 1000 train loss: 0.0007 test loss: 0.0008
Epoch 8 batch 1100 train loss: 0.0006 test loss: 0.0008
Epoch 8 batch 1200 train loss: 0.0008 test loss: 0.0008
Epoch 9 batch 0 train loss: 0.0006 test loss: 0.0008
Epoch 9 batch 100 train loss: 0.0005 test loss: 0.0009
Epoch 9 batch 200 train loss: 0.0007 test loss: 0.0008
Epoch 9 batch 300 train loss: 0.0005 test loss: 0.0008
Epoch 9 batch 400 train loss: 0.0007 test loss: 0.0008
Epoch 9 batch 500 train loss: 0.0008 test loss: 0.0008
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-35
Epoch 9 batch 600 train loss: 0.0010 test loss: 0.0008
Epoch 9 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 9 batch 800 train loss: 0.0008 test loss: 0.0008
Epoch 9 batch 900 train loss: 0.0004 test loss: 0.0008
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-36
Epoch 9 batch 1000 train loss: 0.0006 test loss: 0.0008
Epoch 9 batch 1100 train loss: 0.0009 test loss: 0.0008
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-37
Epoch 9 batch 1200 train loss: 0.0006 test loss: 0.0008
Epoch 10 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 10 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 10 batch 200 train loss: 0.0007 test loss: 0.0008
Epoch 10 batch 300 train loss: 0.0007 test loss: 0.0008
Epoch 10 batch 400 train loss: 0.0014 test loss: 0.0008
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-38
Epoch 10 batch 500 train loss: 0.0005 test loss: 0.0007
Epoch 10 batch 600 train loss: 0.0013 test loss: 0.0008
Epoch 10 batch 700 train loss: 0.0007 test loss: 0.0008
Epoch 10 batch 800 train loss: 0.0008 test loss: 0.0008
Epoch 10 batch 900 train loss: 0.0005 test loss: 0.0008
Epoch 10 batch 1000 train loss: 0.0007 test loss: 0.0008
Epoch 10 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 10 batch 1200 train loss: 0.0009 test loss: 0.0008
Epoch 11 batch 0 train loss: 0.0007 test loss: 0.0008
Epoch 11 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 11 batch 200 train loss: 0.0007 test loss: 0.0008
Epoch 11 batch 300 train loss: 0.0007 test loss: 0.0008
Epoch 11 batch 400 train loss: 0.0008 test loss: 0.0008
Epoch 11 batch 500 train loss: 0.0008 test loss: 0.0008
Epoch 11 batch 600 train loss: 0.0008 test loss: 0.0008
Epoch 11 batch 700 train loss: 0.0006 test loss: 0.0008
Epoch 11 batch 800 train loss: 0.0008 test loss: 0.0008
Epoch 11 batch 900 train loss: 0.0009 test loss: 0.0008
Epoch 11 batch 1000 train loss: 0.0006 test loss: 0.0008
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-39
Epoch 11 batch 1100 train loss: 0.0008 test loss: 0.0007
Epoch 11 batch 1200 train loss: 0.0008 test loss: 0.0008
Epoch 12 batch 0 train loss: 0.0006 test loss: 0.0008
Epoch 12 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 12 batch 200 train loss: 0.0005 test loss: 0.0008
Epoch 12 batch 300 train loss: 0.0007 test loss: 0.0007
Epoch 12 batch 400 train loss: 0.0007 test loss: 0.0008
Epoch 12 batch 500 train loss: 0.0006 test loss: 0.0008
Epoch 12 batch 600 train loss: 0.0009 test loss: 0.0007
Epoch 12 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 12 batch 800 train loss: 0.0007 test loss: 0.0008
Epoch 12 batch 900 train loss: 0.0004 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-40
Epoch 12 batch 1000 train loss: 0.0008 test loss: 0.0007
Epoch 12 batch 1100 train loss: 0.0008 test loss: 0.0008
Epoch 12 batch 1200 train loss: 0.0005 test loss: 0.0008
Epoch 13 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 13 batch 100 train loss: 0.0007 test loss: 0.0008
Epoch 13 batch 200 train loss: 0.0010 test loss: 0.0008
Epoch 13 batch 300 train loss: 0.0004 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-41
Epoch 13 batch 400 train loss: 0.0007 test loss: 0.0007
Epoch 13 batch 500 train loss: 0.0008 test loss: 0.0008
Epoch 13 batch 600 train loss: 0.0007 test loss: 0.0008
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-42
Epoch 13 batch 700 train loss: 0.0007 test loss: 0.0007
Epoch 13 batch 800 train loss: 0.0006 test loss: 0.0008
Epoch 13 batch 900 train loss: 0.0005 test loss: 0.0008
Epoch 13 batch 1000 train loss: 0.0005 test loss: 0.0007
Epoch 13 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 13 batch 1200 train loss: 0.0006 test loss: 0.0008
Epoch 14 batch 0 train loss: 0.0004 test loss: 0.0007
Epoch 14 batch 100 train loss: 0.0006 test loss: 0.0008
Epoch 14 batch 200 train loss: 0.0007 test loss: 0.0008
Epoch 14 batch 300 train loss: 0.0005 test loss: 0.0008
Epoch 14 batch 400 train loss: 0.0006 test loss: 0.0008
Epoch 14 batch 500 train loss: 0.0008 test loss: 0.0007
Epoch 14 batch 600 train loss: 0.0008 test loss: 0.0008
Epoch 14 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 14 batch 800 train loss: 0.0007 test loss: 0.0008
Epoch 14 batch 900 train loss: 0.0006 test loss: 0.0008
Epoch 14 batch 1000 train loss: 0.0007 test loss: 0.0008
Epoch 14 batch 1100 train loss: 0.0011 test loss: 0.0007
Epoch 14 batch 1200 train loss: 0.0008 test loss: 0.0008
Epoch 15 batch 0 train loss: 0.0005 test loss: 0.0007
Epoch 15 batch 100 train loss: 0.0007 test loss: 0.0007
Epoch 15 batch 200 train loss: 0.0010 test loss: 0.0007
Epoch 15 batch 300 train loss: 0.0006 test loss: 0.0008
Epoch 15 batch 400 train loss: 0.0007 test loss: 0.0008
Epoch 15 batch 500 train loss: 0.0005 test loss: 0.0007
Epoch 15 batch 600 train loss: 0.0006 test loss: 0.0007
Epoch 15 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 15 batch 800 train loss: 0.0007 test loss: 0.0008
Epoch 15 batch 900 train loss: 0.0004 test loss: 0.0007
Epoch 15 batch 1000 train loss: 0.0009 test loss: 0.0008
Epoch 15 batch 1100 train loss: 0.0007 test loss: 0.0007
Epoch 15 batch 1200 train loss: 0.0008 test loss: 0.0008
Epoch 16 batch 0 train loss: 0.0008 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-43
Epoch 16 batch 100 train loss: 0.0007 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-44
Epoch 16 batch 200 train loss: 0.0004 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-45
Epoch 16 batch 300 train loss: 0.0005 test loss: 0.0007
Epoch 16 batch 400 train loss: 0.0007 test loss: 0.0008
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-46
Epoch 16 batch 500 train loss: 0.0004 test loss: 0.0007
Epoch 16 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 16 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 16 batch 800 train loss: 0.0006 test loss: 0.0008
Epoch 16 batch 900 train loss: 0.0004 test loss: 0.0007
Epoch 16 batch 1000 train loss: 0.0006 test loss: 0.0007
Epoch 16 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 16 batch 1200 train loss: 0.0007 test loss: 0.0007
Epoch 17 batch 0 train loss: 0.0008 test loss: 0.0008
Epoch 17 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 17 batch 200 train loss: 0.0006 test loss: 0.0008
Epoch 17 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 17 batch 400 train loss: 0.0007 test loss: 0.0007
Epoch 17 batch 500 train loss: 0.0007 test loss: 0.0007
Epoch 17 batch 600 train loss: 0.0005 test loss: 0.0007
Epoch 17 batch 700 train loss: 0.0006 test loss: 0.0008
Epoch 17 batch 800 train loss: 0.0004 test loss: 0.0007
Epoch 17 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 17 batch 1000 train loss: 0.0007 test loss: 0.0007
Epoch 17 batch 1100 train loss: 0.0005 test loss: 0.0007
Epoch 17 batch 1200 train loss: 0.0005 test loss: 0.0008
Epoch 18 batch 0 train loss: 0.0008 test loss: 0.0007
Epoch 18 batch 100 train loss: 0.0004 test loss: 0.0007
Epoch 18 batch 200 train loss: 0.0006 test loss: 0.0007
Epoch 18 batch 300 train loss: 0.0005 test loss: 0.0007
Epoch 18 batch 400 train loss: 0.0006 test loss: 0.0008
Epoch 18 batch 500 train loss: 0.0005 test loss: 0.0007
Epoch 18 batch 600 train loss: 0.0005 test loss: 0.0007
Epoch 18 batch 700 train loss: 0.0010 test loss: 0.0007
Epoch 18 batch 800 train loss: 0.0008 test loss: 0.0007
Epoch 18 batch 900 train loss: 0.0006 test loss: 0.0007
Epoch 18 batch 1000 train loss: 0.0005 test loss: 0.0008
Epoch 18 batch 1100 train loss: 0.0006 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-47
Epoch 18 batch 1200 train loss: 0.0005 test loss: 0.0007
Epoch 19 batch 0 train loss: 0.0004 test loss: 0.0007
Epoch 19 batch 100 train loss: 0.0004 test loss: 0.0007
Epoch 19 batch 200 train loss: 0.0004 test loss: 0.0007
Epoch 19 batch 300 train loss: 0.0005 test loss: 0.0007
Epoch 19 batch 400 train loss: 0.0007 test loss: 0.0007
Epoch 19 batch 500 train loss: 0.0007 test loss: 0.0007
Epoch 19 batch 600 train loss: 0.0007 test loss: 0.0007
Epoch 19 batch 700 train loss: 0.0004 test loss: 0.0007
Epoch 19 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 19 batch 900 train loss: 0.0007 test loss: 0.0007
Epoch 19 batch 1000 train loss: 0.0006 test loss: 0.0008
Epoch 19 batch 1100 train loss: 0.0006 test loss: 0.0007
Epoch 19 batch 1200 train loss: 0.0005 test loss: 0.0007
Epoch 20 batch 0 train loss: 0.0004 test loss: 0.0007
Epoch 20 batch 100 train loss: 0.0003 test loss: 0.0007
Epoch 20 batch 200 train loss: 0.0006 test loss: 0.0007
Epoch 20 batch 300 train loss: 0.0005 test loss: 0.0007
Epoch 20 batch 400 train loss: 0.0004 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-48
Epoch 20 batch 500 train loss: 0.0013 test loss: 0.0007
Epoch 20 batch 600 train loss: 0.0008 test loss: 0.0007
Epoch 20 batch 700 train loss: 0.0004 test loss: 0.0007
Epoch 20 batch 800 train loss: 0.0008 test loss: 0.0007
Epoch 20 batch 900 train loss: 0.0008 test loss: 0.0007
Epoch 20 batch 1000 train loss: 0.0005 test loss: 0.0007
Epoch 20 batch 1100 train loss: 0.0007 test loss: 0.0007
Epoch 20 batch 1200 train loss: 0.0008 test loss: 0.0008
Epoch 21 batch 0 train loss: 0.0004 test loss: 0.0007
Epoch 21 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 21 batch 200 train loss: 0.0006 test loss: 0.0007
Epoch 21 batch 300 train loss: 0.0003 test loss: 0.0007
Epoch 21 batch 400 train loss: 0.0007 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-49
Epoch 21 batch 500 train loss: 0.0008 test loss: 0.0007
Epoch 21 batch 600 train loss: 0.0005 test loss: 0.0007
Epoch 21 batch 700 train loss: 0.0005 test loss: 0.0007
Epoch 21 batch 800 train loss: 0.0007 test loss: 0.0008
Epoch 21 batch 900 train loss: 0.0010 test loss: 0.0008
Epoch 21 batch 1000 train loss: 0.0008 test loss: 0.0007
Epoch 21 batch 1100 train loss: 0.0005 test loss: 0.0007
Epoch 21 batch 1200 train loss: 0.0006 test loss: 0.0007
Epoch 22 batch 0 train loss: 0.0003 test loss: 0.0007
Epoch 22 batch 100 train loss: 0.0004 test loss: 0.0007
Epoch 22 batch 200 train loss: 0.0007 test loss: 0.0007
Epoch 22 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 22 batch 400 train loss: 0.0004 test loss: 0.0007
Epoch 22 batch 500 train loss: 0.0006 test loss: 0.0007
Epoch 22 batch 600 train loss: 0.0007 test loss: 0.0007
Epoch 22 batch 700 train loss: 0.0005 test loss: 0.0007
Epoch 22 batch 800 train loss: 0.0007 test loss: 0.0008
Epoch 22 batch 900 train loss: 0.0005 test loss: 0.0007
Epoch 22 batch 1000 train loss: 0.0005 test loss: 0.0007
Epoch 22 batch 1100 train loss: 0.0004 test loss: 0.0007
Epoch 22 batch 1200 train loss: 0.0006 test loss: 0.0007
Epoch 23 batch 0 train loss: 0.0004 test loss: 0.0007
Epoch 23 batch 100 train loss: 0.0007 test loss: 0.0007
Epoch 23 batch 200 train loss: 0.0006 test loss: 0.0007
Epoch 23 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 23 batch 400 train loss: 0.0005 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-50
Epoch 23 batch 500 train loss: 0.0005 test loss: 0.0007
Epoch 23 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 23 batch 700 train loss: 0.0006 test loss: 0.0007
Epoch 23 batch 800 train loss: 0.0008 test loss: 0.0007
Epoch 23 batch 900 train loss: 0.0009 test loss: 0.0007
Epoch 23 batch 1000 train loss: 0.0004 test loss: 0.0007
Epoch 23 batch 1100 train loss: 0.0006 test loss: 0.0007
Epoch 23 batch 1200 train loss: 0.0007 test loss: 0.0007
Epoch 24 batch 0 train loss: 0.0005 test loss: 0.0007
Epoch 24 batch 100 train loss: 0.0006 test loss: 0.0007
Epoch 24 batch 200 train loss: 0.0005 test loss: 0.0007
Epoch 24 batch 300 train loss: 0.0006 test loss: 0.0007
Epoch 24 batch 400 train loss: 0.0005 test loss: 0.0007
Epoch 24 batch 500 train loss: 0.0004 test loss: 0.0007
Epoch 24 batch 600 train loss: 0.0006 test loss: 0.0007
Epoch 24 batch 700 train loss: 0.0006 test loss: 0.0007
Epoch 24 batch 800 train loss: 0.0003 test loss: 0.0007
Epoch 24 batch 900 train loss: 0.0005 test loss: 0.0007
Epoch 24 batch 1000 train loss: 0.0003 test loss: 0.0007
Epoch 24 batch 1100 train loss: 0.0004 test loss: 0.0007
Epoch 24 batch 1200 train loss: 0.0006 test loss: 0.0007
Epoch 25 batch 0 train loss: 0.0007 test loss: 0.0007
Epoch 25 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 25 batch 200 train loss: 0.0008 test loss: 0.0007
Epoch 25 batch 300 train loss: 0.0013 test loss: 0.0007
Epoch 25 batch 400 train loss: 0.0006 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-51
Epoch 25 batch 500 train loss: 0.0007 test loss: 0.0007
Epoch 25 batch 600 train loss: 0.0006 test loss: 0.0007
Epoch 25 batch 700 train loss: 0.0005 test loss: 0.0007
Epoch 25 batch 800 train loss: 0.0007 test loss: 0.0007
Epoch 25 batch 900 train loss: 0.0004 test loss: 0.0007
Epoch 25 batch 1000 train loss: 0.0006 test loss: 0.0007
Epoch 25 batch 1100 train loss: 0.0006 test loss: 0.0007
Epoch 25 batch 1200 train loss: 0.0004 test loss: 0.0007
Epoch 26 batch 0 train loss: 0.0004 test loss: 0.0007
Epoch 26 batch 100 train loss: 0.0006 test loss: 0.0007
Epoch 26 batch 200 train loss: 0.0005 test loss: 0.0007
Epoch 26 batch 300 train loss: 0.0008 test loss: 0.0007
Epoch 26 batch 400 train loss: 0.0005 test loss: 0.0007
Epoch 26 batch 500 train loss: 0.0003 test loss: 0.0007
Epoch 26 batch 600 train loss: 0.0010 test loss: 0.0007
Epoch 26 batch 700 train loss: 0.0005 test loss: 0.0007
Epoch 26 batch 800 train loss: 0.0005 test loss: 0.0007
Epoch 26 batch 900 train loss: 0.0004 test loss: 0.0007
Epoch 26 batch 1000 train loss: 0.0003 test loss: 0.0007
Epoch 26 batch 1100 train loss: 0.0007 test loss: 0.0007
Epoch 26 batch 1200 train loss: 0.0012 test loss: 0.0007
Epoch 27 batch 0 train loss: 0.0005 test loss: 0.0007
Epoch 27 batch 100 train loss: 0.0004 test loss: 0.0007
Epoch 27 batch 200 train loss: 0.0005 test loss: 0.0007
Epoch 27 batch 300 train loss: 0.0005 test loss: 0.0007
Epoch 27 batch 400 train loss: 0.0005 test loss: 0.0007
Epoch 27 batch 500 train loss: 0.0004 test loss: 0.0007
Epoch 27 batch 600 train loss: 0.0005 test loss: 0.0007
Epoch 27 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 27 batch 800 train loss: 0.0005 test loss: 0.0007
Epoch 27 batch 900 train loss: 0.0008 test loss: 0.0007
Epoch 27 batch 1000 train loss: 0.0003 test loss: 0.0007
Epoch 27 batch 1100 train loss: 0.0011 test loss: 0.0007
Epoch 27 batch 1200 train loss: 0.0009 test loss: 0.0007
Epoch 28 batch 0 train loss: 0.0004 test loss: 0.0007
Epoch 28 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 28 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 28 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 28 batch 400 train loss: 0.0006 test loss: 0.0007
Epoch 28 batch 500 train loss: 0.0005 test loss: 0.0007
Epoch 28 batch 600 train loss: 0.0006 test loss: 0.0007
Epoch 28 batch 700 train loss: 0.0007 test loss: 0.0007
Epoch 28 batch 800 train loss: 0.0007 test loss: 0.0007
Epoch 28 batch 900 train loss: 0.0005 test loss: 0.0007
Epoch 28 batch 1000 train loss: 0.0004 test loss: 0.0007
Epoch 28 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 28 batch 1200 train loss: 0.0003 test loss: 0.0007
Epoch 29 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 29 batch 100 train loss: 0.0004 test loss: 0.0007
Epoch 29 batch 200 train loss: 0.0005 test loss: 0.0007
Epoch 29 batch 300 train loss: 0.0006 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-52
Epoch 29 batch 400 train loss: 0.0004 test loss: 0.0007
Epoch 29 batch 500 train loss: 0.0006 test loss: 0.0007
Epoch 29 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 29 batch 700 train loss: 0.0005 test loss: 0.0007
Epoch 29 batch 800 train loss: 0.0005 test loss: 0.0007
Epoch 29 batch 900 train loss: 0.0006 test loss: 0.0007
Epoch 29 batch 1000 train loss: 0.0008 test loss: 0.0007
Epoch 29 batch 1100 train loss: 0.0005 test loss: 0.0007
Epoch 29 batch 1200 train loss: 0.0004 test loss: 0.0007
Epoch 30 batch 0 train loss: 0.0005 test loss: 0.0007
Epoch 30 batch 100 train loss: 0.0006 test loss: 0.0007
Epoch 30 batch 200 train loss: 0.0006 test loss: 0.0007
Epoch 30 batch 300 train loss: 0.0009 test loss: 0.0007
Epoch 30 batch 400 train loss: 0.0006 test loss: 0.0007
Epoch 30 batch 500 train loss: 0.0004 test loss: 0.0007
Epoch 30 batch 600 train loss: 0.0008 test loss: 0.0007
Epoch 30 batch 700 train loss: 0.0003 test loss: 0.0007
Epoch 30 batch 800 train loss: 0.0004 test loss: 0.0007
Epoch 30 batch 900 train loss: 0.0005 test loss: 0.0007
Epoch 30 batch 1000 train loss: 0.0003 test loss: 0.0007
Epoch 30 batch 1100 train loss: 0.0007 test loss: 0.0007
Epoch 30 batch 1200 train loss: 0.0006 test loss: 0.0007
Epoch 31 batch 0 train loss: 0.0003 test loss: 0.0007
Epoch 31 batch 100 train loss: 0.0004 test loss: 0.0007
Epoch 31 batch 200 train loss: 0.0007 test loss: 0.0007
Epoch 31 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 31 batch 400 train loss: 0.0003 test loss: 0.0007
Epoch 31 batch 500 train loss: 0.0005 test loss: 0.0007
Epoch 31 batch 600 train loss: 0.0006 test loss: 0.0007
Epoch 31 batch 700 train loss: 0.0010 test loss: 0.0007
Epoch 31 batch 800 train loss: 0.0005 test loss: 0.0007
Epoch 31 batch 900 train loss: 0.0006 test loss: 0.0007
Epoch 31 batch 1000 train loss: 0.0007 test loss: 0.0007
Epoch 31 batch 1100 train loss: 0.0007 test loss: 0.0007
Epoch 31 batch 1200 train loss: 0.0005 test loss: 0.0007
Epoch 32 batch 0 train loss: 0.0005 test loss: 0.0007
Epoch 32 batch 100 train loss: 0.0006 test loss: 0.0007
Epoch 32 batch 200 train loss: 0.0006 test loss: 0.0007
Epoch 32 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 32 batch 400 train loss: 0.0005 test loss: 0.0007
Epoch 32 batch 500 train loss: 0.0006 test loss: 0.0007
Epoch 32 batch 600 train loss: 0.0005 test loss: 0.0007
Epoch 32 batch 700 train loss: 0.0006 test loss: 0.0007
Epoch 32 batch 800 train loss: 0.0004 test loss: 0.0007
Epoch 32 batch 900 train loss: 0.0005 test loss: 0.0007
Epoch 32 batch 1000 train loss: 0.0004 test loss: 0.0007
Epoch 32 batch 1100 train loss: 0.0004 test loss: 0.0007
Epoch 32 batch 1200 train loss: 0.0005 test loss: 0.0007
Epoch 33 batch 0 train loss: 0.0006 test loss: 0.0007
Epoch 33 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 33 batch 200 train loss: 0.0004 test loss: 0.0007
Epoch 33 batch 300 train loss: 0.0006 test loss: 0.0007
Epoch 33 batch 400 train loss: 0.0003 test loss: 0.0007
Epoch 33 batch 500 train loss: 0.0005 test loss: 0.0007
Epoch 33 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 33 batch 700 train loss: 0.0003 test loss: 0.0007
Epoch 33 batch 800 train loss: 0.0003 test loss: 0.0007
Epoch 33 batch 900 train loss: 0.0003 test loss: 0.0007
Epoch 33 batch 1000 train loss: 0.0003 test loss: 0.0007
Epoch 33 batch 1100 train loss: 0.0010 test loss: 0.0007
Epoch 33 batch 1200 train loss: 0.0004 test loss: 0.0007
Epoch 34 batch 0 train loss: 0.0004 test loss: 0.0007
Epoch 34 batch 100 train loss: 0.0006 test loss: 0.0007
Epoch 34 batch 200 train loss: 0.0005 test loss: 0.0008
Epoch 34 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 34 batch 400 train loss: 0.0006 test loss: 0.0007
Epoch 34 batch 500 train loss: 0.0006 test loss: 0.0007
Epoch 34 batch 600 train loss: 0.0005 test loss: 0.0007
Epoch 34 batch 700 train loss: 0.0006 test loss: 0.0007
Epoch 34 batch 800 train loss: 0.0007 test loss: 0.0007
Epoch 34 batch 900 train loss: 0.0005 test loss: 0.0007
Epoch 34 batch 1000 train loss: 0.0004 test loss: 0.0007
Epoch 34 batch 1100 train loss: 0.0005 test loss: 0.0007
Epoch 34 batch 1200 train loss: 0.0005 test loss: 0.0007
Epoch 35 batch 0 train loss: 0.0006 test loss: 0.0007
Epoch 35 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 35 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 35 batch 300 train loss: 0.0003 test loss: 0.0007
Epoch 35 batch 400 train loss: 0.0004 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-53
Epoch 35 batch 500 train loss: 0.0004 test loss: 0.0007
Epoch 35 batch 600 train loss: 0.0005 test loss: 0.0007
Epoch 35 batch 700 train loss: 0.0005 test loss: 0.0007
Epoch 35 batch 800 train loss: 0.0004 test loss: 0.0007
Epoch 35 batch 900 train loss: 0.0006 test loss: 0.0007
Epoch 35 batch 1000 train loss: 0.0007 test loss: 0.0007
Epoch 35 batch 1100 train loss: 0.0006 test loss: 0.0007
Epoch 35 batch 1200 train loss: 0.0006 test loss: 0.0007
Epoch 36 batch 0 train loss: 0.0005 test loss: 0.0007
Epoch 36 batch 100 train loss: 0.0007 test loss: 0.0007
Epoch 36 batch 200 train loss: 0.0005 test loss: 0.0007
Epoch 36 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 36 batch 400 train loss: 0.0008 test loss: 0.0007
Epoch 36 batch 500 train loss: 0.0006 test loss: 0.0007
Epoch 36 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 36 batch 700 train loss: 0.0004 test loss: 0.0007
Epoch 36 batch 800 train loss: 0.0005 test loss: 0.0007
Epoch 36 batch 900 train loss: 0.0005 test loss: 0.0007
Epoch 36 batch 1000 train loss: 0.0005 test loss: 0.0007
Epoch 36 batch 1100 train loss: 0.0005 test loss: 0.0007
Epoch 36 batch 1200 train loss: 0.0003 test loss: 0.0007
Epoch 37 batch 0 train loss: 0.0003 test loss: 0.0007
Epoch 37 batch 100 train loss: 0.0008 test loss: 0.0007
Epoch 37 batch 200 train loss: 0.0008 test loss: 0.0007
Epoch 37 batch 300 train loss: 0.0005 test loss: 0.0007
Epoch 37 batch 400 train loss: 0.0005 test loss: 0.0007
Epoch 37 batch 500 train loss: 0.0003 test loss: 0.0007
Epoch 37 batch 600 train loss: 0.0007 test loss: 0.0007
Epoch 37 batch 700 train loss: 0.0004 test loss: 0.0007
Epoch 37 batch 800 train loss: 0.0005 test loss: 0.0007
Epoch 37 batch 900 train loss: 0.0004 test loss: 0.0007
Epoch 37 batch 1000 train loss: 0.0003 test loss: 0.0007
Epoch 37 batch 1100 train loss: 0.0006 test loss: 0.0007
Epoch 37 batch 1200 train loss: 0.0006 test loss: 0.0007
Epoch 38 batch 0 train loss: 0.0006 test loss: 0.0007
Epoch 38 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 38 batch 200 train loss: 0.0004 test loss: 0.0007
Saving checkpoint at ./checkpoints/UNet_prototype/ckpt-54
Epoch 38 batch 300 train loss: 0.0003 test loss: 0.0007
Epoch 38 batch 400 train loss: 0.0008 test loss: 0.0007
Epoch 38 batch 500 train loss: 0.0006 test loss: 0.0007
Epoch 38 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 38 batch 700 train loss: 0.0007 test loss: 0.0007
Epoch 38 batch 800 train loss: 0.0003 test loss: 0.0007
Epoch 38 batch 900 train loss: 0.0004 test loss: 0.0007
Epoch 38 batch 1000 train loss: 0.0006 test loss: 0.0007
Epoch 38 batch 1100 train loss: 0.0005 test loss: 0.0007
Epoch 38 batch 1200 train loss: 0.0004 test loss: 0.0007
Epoch 39 batch 0 train loss: 0.0010 test loss: 0.0008
Epoch 39 batch 100 train loss: 0.0006 test loss: 0.0007
Epoch 39 batch 200 train loss: 0.0004 test loss: 0.0007
Epoch 39 batch 300 train loss: 0.0005 test loss: 0.0007
Epoch 39 batch 400 train loss: 0.0006 test loss: 0.0007
Epoch 39 batch 500 train loss: 0.0004 test loss: 0.0007
Epoch 39 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 39 batch 700 train loss: 0.0007 test loss: 0.0007
Epoch 39 batch 800 train loss: 0.0004 test loss: 0.0007
Epoch 39 batch 900 train loss: 0.0006 test loss: 0.0007
Epoch 39 batch 1000 train loss: 0.0004 test loss: 0.0007
Epoch 39 batch 1100 train loss: 0.0007 test loss: 0.0007
Epoch 39 batch 1200 train loss: 0.0008 test loss: 0.0007
Epoch 40 batch 0 train loss: 0.0003 test loss: 0.0007
Epoch 40 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 40 batch 200 train loss: 0.0007 test loss: 0.0007
Epoch 40 batch 300 train loss: 0.0005 test loss: 0.0007
Epoch 40 batch 400 train loss: 0.0008 test loss: 0.0007
Epoch 40 batch 500 train loss: 0.0006 test loss: 0.0007
Epoch 40 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 40 batch 700 train loss: 0.0003 test loss: 0.0007
Epoch 40 batch 800 train loss: 0.0003 test loss: 0.0007
Epoch 40 batch 900 train loss: 0.0006 test loss: 0.0007
Epoch 40 batch 1000 train loss: 0.0005 test loss: 0.0007
Epoch 40 batch 1100 train loss: 0.0007 test loss: 0.0007
Epoch 40 batch 1200 train loss: 0.0004 test loss: 0.0007
Epoch 41 batch 0 train loss: 0.0003 test loss: 0.0007
Epoch 41 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 41 batch 200 train loss: 0.0002 test loss: 0.0007
Epoch 41 batch 300 train loss: 0.0006 test loss: 0.0007
Epoch 41 batch 400 train loss: 0.0004 test loss: 0.0007
Epoch 41 batch 500 train loss: 0.0007 test loss: 0.0007
Epoch 41 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 41 batch 700 train loss: 0.0005 test loss: 0.0007
Epoch 41 batch 800 train loss: 0.0004 test loss: 0.0007
[Training log condensed: epochs 41–128, losses reported every 100 batches (0–1200 per epoch)]
Per-batch train loss fluctuates between roughly 0.0001 and 0.0014 (typically 0.0002–0.0007) with no further downward trend over this span; test loss stays flat at 0.0007, occasionally ticking up to 0.0008 in the later epochs.
Checkpoints saved during this span: ./checkpoints/UNet_prototype/ckpt-55 (epoch 42), ckpt-56 and ckpt-57 (epoch 43).
Epoch 128 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 128 batch 1200 train loss: 0.0003 test loss: 0.0007
Epoch 129 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 129 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 129 batch 200 train loss: 0.0004 test loss: 0.0007
Epoch 129 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 129 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 129 batch 500 train loss: 0.0004 test loss: 0.0007
Epoch 129 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 129 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 129 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 129 batch 900 train loss: 0.0005 test loss: 0.0008
Epoch 129 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 129 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 129 batch 1200 train loss: 0.0006 test loss: 0.0007
Epoch 130 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 130 batch 100 train loss: 0.0002 test loss: 0.0007
Epoch 130 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 130 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 130 batch 400 train loss: 0.0002 test loss: 0.0007
Epoch 130 batch 500 train loss: 0.0002 test loss: 0.0007
Epoch 130 batch 600 train loss: 0.0001 test loss: 0.0007
Epoch 130 batch 700 train loss: 0.0004 test loss: 0.0007
Epoch 130 batch 800 train loss: 0.0005 test loss: 0.0007
Epoch 130 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 130 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 130 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 130 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 131 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 131 batch 100 train loss: 0.0004 test loss: 0.0007
Epoch 131 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 131 batch 300 train loss: 0.0003 test loss: 0.0007
Epoch 131 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 131 batch 500 train loss: 0.0003 test loss: 0.0007
Epoch 131 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 131 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 131 batch 800 train loss: 0.0003 test loss: 0.0007
Epoch 131 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 131 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 131 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 131 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 132 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 132 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 132 batch 200 train loss: 0.0005 test loss: 0.0008
Epoch 132 batch 300 train loss: 0.0007 test loss: 0.0007
Epoch 132 batch 400 train loss: 0.0003 test loss: 0.0007
Epoch 132 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 132 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 132 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 132 batch 800 train loss: 0.0007 test loss: 0.0008
Epoch 132 batch 900 train loss: 0.0006 test loss: 0.0008
Epoch 132 batch 1000 train loss: 0.0004 test loss: 0.0007
Epoch 132 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 132 batch 1200 train loss: 0.0004 test loss: 0.0007
Epoch 133 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 133 batch 100 train loss: 0.0003 test loss: 0.0007
Epoch 133 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 133 batch 300 train loss: 0.0003 test loss: 0.0007
Epoch 133 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 133 batch 500 train loss: 0.0005 test loss: 0.0007
Epoch 133 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 133 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 133 batch 800 train loss: 0.0004 test loss: 0.0007
Epoch 133 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 133 batch 1000 train loss: 0.0002 test loss: 0.0007
Epoch 133 batch 1100 train loss: 0.0003 test loss: 0.0007
Epoch 133 batch 1200 train loss: 0.0003 test loss: 0.0007
Epoch 134 batch 0 train loss: 0.0006 test loss: 0.0008
Epoch 134 batch 100 train loss: 0.0003 test loss: 0.0007
Epoch 134 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 134 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 134 batch 400 train loss: 0.0006 test loss: 0.0007
Epoch 134 batch 500 train loss: 0.0006 test loss: 0.0008
Epoch 134 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 134 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 134 batch 800 train loss: 0.0002 test loss: 0.0007
Epoch 134 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 134 batch 1000 train loss: 0.0002 test loss: 0.0007
Epoch 134 batch 1100 train loss: 0.0007 test loss: 0.0008
Epoch 134 batch 1200 train loss: 0.0005 test loss: 0.0008
Epoch 135 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 135 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 135 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 135 batch 300 train loss: 0.0002 test loss: 0.0007
Epoch 135 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 135 batch 500 train loss: 0.0003 test loss: 0.0007
Epoch 135 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 135 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 135 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 135 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 135 batch 1000 train loss: 0.0006 test loss: 0.0008
Epoch 135 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 135 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 136 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 136 batch 100 train loss: 0.0004 test loss: 0.0007
Epoch 136 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 136 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 136 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 136 batch 500 train loss: 0.0003 test loss: 0.0007
Epoch 136 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 136 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 136 batch 800 train loss: 0.0004 test loss: 0.0007
Epoch 136 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 136 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 136 batch 1100 train loss: 0.0004 test loss: 0.0007
Epoch 136 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 137 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 137 batch 100 train loss: 0.0002 test loss: 0.0007
Epoch 137 batch 200 train loss: 0.0005 test loss: 0.0007
Epoch 137 batch 300 train loss: 0.0009 test loss: 0.0007
Epoch 137 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 137 batch 500 train loss: 0.0007 test loss: 0.0007
Epoch 137 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 137 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 137 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 137 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 137 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 137 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 137 batch 1200 train loss: 0.0003 test loss: 0.0007
Epoch 138 batch 0 train loss: 0.0002 test loss: 0.0007
Epoch 138 batch 100 train loss: 0.0003 test loss: 0.0007
Epoch 138 batch 200 train loss: 0.0002 test loss: 0.0007
Epoch 138 batch 300 train loss: 0.0003 test loss: 0.0007
Epoch 138 batch 400 train loss: 0.0005 test loss: 0.0008
Epoch 138 batch 500 train loss: 0.0005 test loss: 0.0007
Epoch 138 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 138 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 138 batch 800 train loss: 0.0004 test loss: 0.0007
Epoch 138 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 138 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 138 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 138 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 139 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 139 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 139 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 139 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 139 batch 400 train loss: 0.0006 test loss: 0.0008
Epoch 139 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 139 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 139 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 139 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 139 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 139 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 139 batch 1100 train loss: 0.0002 test loss: 0.0007
Epoch 139 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 140 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 140 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 140 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 140 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 140 batch 400 train loss: 0.0003 test loss: 0.0007
Epoch 140 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 140 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 140 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 140 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 140 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 140 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 140 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 140 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 141 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 141 batch 100 train loss: 0.0005 test loss: 0.0007
Epoch 141 batch 200 train loss: 0.0002 test loss: 0.0007
Epoch 141 batch 300 train loss: 0.0006 test loss: 0.0007
Epoch 141 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 141 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 141 batch 600 train loss: 0.0002 test loss: 0.0007
Epoch 141 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 141 batch 800 train loss: 0.0003 test loss: 0.0007
Epoch 141 batch 900 train loss: 0.0005 test loss: 0.0008
Epoch 141 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 141 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 141 batch 1200 train loss: 0.0004 test loss: 0.0007
Epoch 142 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 142 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 142 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 142 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 142 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 142 batch 500 train loss: 0.0005 test loss: 0.0008
Epoch 142 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 142 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 142 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 142 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 142 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 142 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 142 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 143 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 143 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 143 batch 200 train loss: 0.0006 test loss: 0.0008
Epoch 143 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 143 batch 400 train loss: 0.0005 test loss: 0.0007
Epoch 143 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 143 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 143 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 143 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 143 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 143 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 143 batch 1100 train loss: 0.0002 test loss: 0.0007
Epoch 143 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 144 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 144 batch 100 train loss: 0.0003 test loss: 0.0007
Epoch 144 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 144 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 144 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 144 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 144 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 144 batch 700 train loss: 0.0006 test loss: 0.0008
Epoch 144 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 144 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 144 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 144 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 144 batch 1200 train loss: 0.0003 test loss: 0.0007
Epoch 145 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 145 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 145 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 145 batch 300 train loss: 0.0003 test loss: 0.0007
Epoch 145 batch 400 train loss: 0.0005 test loss: 0.0008
Epoch 145 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 145 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 145 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 145 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 145 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 145 batch 1000 train loss: 0.0002 test loss: 0.0007
Epoch 145 batch 1100 train loss: 0.0002 test loss: 0.0007
Epoch 145 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 146 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 146 batch 100 train loss: 0.0002 test loss: 0.0007
Epoch 146 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 146 batch 300 train loss: 0.0004 test loss: 0.0007
Epoch 146 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 146 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 146 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 146 batch 700 train loss: 0.0005 test loss: 0.0007
Epoch 146 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 146 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 146 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 146 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 146 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 147 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 147 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 147 batch 200 train loss: 0.0005 test loss: 0.0007
Epoch 147 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 147 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 147 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 147 batch 600 train loss: 0.0002 test loss: 0.0007
Epoch 147 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 147 batch 800 train loss: 0.0003 test loss: 0.0007
Epoch 147 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 147 batch 1000 train loss: 0.0003 test loss: 0.0007
Epoch 147 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 147 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 148 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 148 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 148 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 148 batch 300 train loss: 0.0005 test loss: 0.0008
Epoch 148 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 148 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 148 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 148 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 148 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 148 batch 900 train loss: 0.0006 test loss: 0.0008
Epoch 148 batch 1000 train loss: 0.0006 test loss: 0.0008
Epoch 148 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 148 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 149 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 149 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 149 batch 200 train loss: 0.0005 test loss: 0.0008
Epoch 149 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 149 batch 400 train loss: 0.0002 test loss: 0.0007
Epoch 149 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 149 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 149 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 149 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 149 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 149 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 149 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 149 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 150 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 150 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 150 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 150 batch 300 train loss: 0.0002 test loss: 0.0007
Epoch 150 batch 400 train loss: 0.0005 test loss: 0.0007
Epoch 150 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 150 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 150 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 150 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 150 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 150 batch 1000 train loss: 0.0003 test loss: 0.0007
Epoch 150 batch 1100 train loss: 0.0007 test loss: 0.0008
Epoch 150 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 151 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 151 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 151 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 151 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 151 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 151 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 151 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 151 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 151 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 151 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 151 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 151 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 151 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 152 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 152 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 152 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 152 batch 300 train loss: 0.0006 test loss: 0.0007
Epoch 152 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 152 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 152 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 152 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 152 batch 800 train loss: 0.0002 test loss: 0.0007
Epoch 152 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 152 batch 1000 train loss: 0.0006 test loss: 0.0008
Epoch 152 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 152 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 153 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 153 batch 100 train loss: 0.0007 test loss: 0.0007
Epoch 153 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 153 batch 300 train loss: 0.0005 test loss: 0.0007
Epoch 153 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 153 batch 500 train loss: 0.0005 test loss: 0.0008
Epoch 153 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 153 batch 700 train loss: 0.0002 test loss: 0.0007
Epoch 153 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 153 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 153 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 153 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 153 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 154 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 154 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 154 batch 200 train loss: 0.0005 test loss: 0.0008
Epoch 154 batch 300 train loss: 0.0002 test loss: 0.0007
Epoch 154 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 154 batch 500 train loss: 0.0006 test loss: 0.0007
Epoch 154 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 154 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 154 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 154 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 154 batch 1000 train loss: 0.0005 test loss: 0.0008
Epoch 154 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 154 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 155 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 155 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 155 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 155 batch 300 train loss: 0.0002 test loss: 0.0007
Epoch 155 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 155 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 155 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 155 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 155 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 155 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 155 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 155 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 155 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 156 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 156 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 156 batch 200 train loss: 0.0006 test loss: 0.0007
Epoch 156 batch 300 train loss: 0.0003 test loss: 0.0007
Epoch 156 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 156 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 156 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 156 batch 700 train loss: 0.0003 test loss: 0.0007
Epoch 156 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 156 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 156 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 156 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 156 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 157 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 157 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 157 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 157 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 157 batch 400 train loss: 0.0005 test loss: 0.0008
Epoch 157 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 157 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 157 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 157 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 157 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 157 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 157 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 157 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 158 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 158 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 158 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 158 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 158 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 158 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 158 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 158 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 158 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 158 batch 900 train loss: 0.0007 test loss: 0.0008
Epoch 158 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 158 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 158 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 159 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 159 batch 100 train loss: 0.0003 test loss: 0.0007
Epoch 159 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 159 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 159 batch 400 train loss: 0.0008 test loss: 0.0008
Epoch 159 batch 500 train loss: 0.0005 test loss: 0.0008
Epoch 159 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 159 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 159 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 159 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 159 batch 1000 train loss: 0.0002 test loss: 0.0007
Epoch 159 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 159 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 160 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 160 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 160 batch 200 train loss: 0.0002 test loss: 0.0007
Epoch 160 batch 300 train loss: 0.0006 test loss: 0.0008
Epoch 160 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 160 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 160 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 160 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 160 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 160 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 160 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 160 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 160 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 161 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 161 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 161 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 161 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 161 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 161 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 161 batch 600 train loss: 0.0004 test loss: 0.0007
Epoch 161 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 161 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 161 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 161 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 161 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 161 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 162 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 162 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 162 batch 200 train loss: 0.0004 test loss: 0.0007
Epoch 162 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 162 batch 400 train loss: 0.0005 test loss: 0.0008
Epoch 162 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 162 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 162 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 162 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 162 batch 900 train loss: 0.0006 test loss: 0.0008
Epoch 162 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 162 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 162 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 163 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 163 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 163 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 163 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 163 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 163 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 163 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 163 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 163 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 163 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 163 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 163 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 163 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 164 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 164 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 164 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 164 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 164 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 164 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 164 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 164 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 164 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 164 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 164 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 164 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 164 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 165 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 165 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 165 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 165 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 165 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 165 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 165 batch 600 train loss: 0.0001 test loss: 0.0007
Epoch 165 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 165 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 165 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 165 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 165 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 165 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 166 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 166 batch 100 train loss: 0.0002 test loss: 0.0007
Epoch 166 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 166 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 166 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 166 batch 500 train loss: 0.0005 test loss: 0.0008
Epoch 166 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 166 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 166 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 166 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 166 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 166 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 166 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 167 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 167 batch 100 train loss: 0.0003 test loss: 0.0007
Epoch 167 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 167 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 167 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 167 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 167 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 167 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 167 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 167 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 167 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 167 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 167 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 168 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 168 batch 100 train loss: 0.0006 test loss: 0.0008
Epoch 168 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 168 batch 300 train loss: 0.0003 test loss: 0.0007
Epoch 168 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 168 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 168 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 168 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 168 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 168 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 168 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 168 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 168 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 169 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 169 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 169 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 169 batch 300 train loss: 0.0003 test loss: 0.0007
Epoch 169 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 169 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 169 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 169 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 169 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 169 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 169 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 169 batch 1100 train loss: 0.0002 test loss: 0.0007
Epoch 169 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 170 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 170 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 170 batch 200 train loss: 0.0005 test loss: 0.0007
Epoch 170 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 170 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 170 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 170 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 170 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 170 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 170 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 170 batch 1000 train loss: 0.0005 test loss: 0.0008
Epoch 170 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 170 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 171 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 171 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 171 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 171 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 171 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 171 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 171 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 171 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 171 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 171 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 171 batch 1000 train loss: 0.0006 test loss: 0.0008
Epoch 171 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 171 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 172 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 172 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 172 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 172 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 172 batch 400 train loss: 0.0006 test loss: 0.0008
Epoch 172 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 172 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 172 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 172 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 172 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 172 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 172 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 172 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 173 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 173 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 173 batch 200 train loss: 0.0005 test loss: 0.0008
Epoch 173 batch 300 train loss: 0.0005 test loss: 0.0008
Epoch 173 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 173 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 173 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 173 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 173 batch 800 train loss: 0.0003 test loss: 0.0007
Epoch 173 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 173 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 173 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 173 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 174 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 174 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 174 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 174 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 174 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 174 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 174 batch 600 train loss: 0.0006 test loss: 0.0008
Epoch 174 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 174 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 174 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 174 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 174 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 174 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 175 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 175 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 175 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 175 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 175 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 175 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 175 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 175 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 175 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 175 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 175 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 175 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 175 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 176 batch 0 train loss: 0.0006 test loss: 0.0008
Epoch 176 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 176 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 176 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 176 batch 400 train loss: 0.0005 test loss: 0.0008
Epoch 176 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 176 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 176 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 176 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 176 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 176 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 176 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 176 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 177 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 177 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 177 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 177 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 177 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 177 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 177 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 177 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 177 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 177 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 177 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 177 batch 1100 train loss: 0.0003 test loss: 0.0007
Epoch 177 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 178 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 178 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 178 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 178 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 178 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 178 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 178 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 178 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 178 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 178 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 178 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 178 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 178 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 179 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 179 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 179 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 179 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 179 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 179 batch 500 train loss: 0.0003 test loss: 0.0007
Epoch 179 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 179 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 179 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 179 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 179 batch 1000 train loss: 0.0005 test loss: 0.0008
Epoch 179 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 179 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 180 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 180 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 180 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 180 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 180 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 180 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 180 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 180 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 180 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 180 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 180 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 180 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 180 batch 1200 train loss: 0.0002 test loss: 0.0007
Epoch 181 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 181 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 181 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 181 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 181 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 181 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 181 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 181 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 181 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 181 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 181 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 181 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 181 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 182 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 182 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 182 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 182 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 182 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 182 batch 500 train loss: 0.0005 test loss: 0.0008
Epoch 182 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 182 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 182 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 182 batch 900 train loss: 0.0006 test loss: 0.0008
Epoch 182 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 182 batch 1100 train loss: 0.0008 test loss: 0.0008
Epoch 182 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 183 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 183 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 183 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 183 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 183 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 183 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 183 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 183 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 183 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 183 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 183 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 183 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 183 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 184 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 184 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 184 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 184 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 184 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 184 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 184 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 184 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 184 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 184 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 184 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 184 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 184 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 185 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 185 batch 100 train loss: 0.0006 test loss: 0.0008
Epoch 185 batch 200 train loss: 0.0003 test loss: 0.0007
Epoch 185 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 185 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 185 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 185 batch 600 train loss: 0.0005 test loss: 0.0007
Epoch 185 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 185 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 185 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 185 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 185 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 185 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 186 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 186 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 186 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 186 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 186 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 186 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 186 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 186 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 186 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 186 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 186 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 186 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 186 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 187 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 187 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 187 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 187 batch 300 train loss: 0.0002 test loss: 0.0007
Epoch 187 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 187 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 187 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 187 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 187 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 187 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 187 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 187 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 187 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 188 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 188 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 188 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 188 batch 300 train loss: 0.0007 test loss: 0.0008
Epoch 188 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 188 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 188 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 188 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 188 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 188 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 188 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 188 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 188 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 189 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 189 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 189 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 189 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 189 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 189 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 189 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 189 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 189 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 189 batch 900 train loss: 0.0008 test loss: 0.0008
Epoch 189 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 189 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 189 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 190 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 190 batch 100 train loss: 0.0004 test loss: 0.0007
Epoch 190 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 190 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 190 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 190 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 190 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 190 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 190 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 190 batch 900 train loss: 0.0005 test loss: 0.0008
Epoch 190 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 190 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 190 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 191 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 191 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 191 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 191 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 191 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 191 batch 500 train loss: 0.0005 test loss: 0.0008
Epoch 191 batch 600 train loss: 0.0003 test loss: 0.0007
Epoch 191 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 191 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 191 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 191 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 191 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 191 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 192 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 192 batch 100 train loss: 0.0006 test loss: 0.0008
Epoch 192 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 192 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 192 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 192 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 192 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 192 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 192 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 192 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 192 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 192 batch 1100 train loss: 0.0006 test loss: 0.0008
Epoch 192 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 193 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 193 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 193 batch 200 train loss: 0.0005 test loss: 0.0008
Epoch 193 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 193 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 193 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 193 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 193 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 193 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 193 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 193 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 193 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 193 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 194 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 194 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 194 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 194 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 194 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 194 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 194 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 194 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 194 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 194 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 194 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 194 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 194 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 195 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 195 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 195 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 195 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 195 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 195 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 195 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 195 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 195 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 195 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 195 batch 1000 train loss: 0.0002 test loss: 0.0008
[... per-batch training log truncated (epoch 195 batch 1100 through epoch 282 batch 200, losses printed every 100 batches from batch 0 to 1200 per epoch): train loss fluctuates between 0.0001 and 0.0009, while test loss holds constant at 0.0008 throughout ...]
Epoch 282 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 282 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 282 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 282 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 282 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 282 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 282 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 282 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 282 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 282 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 283 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 283 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 283 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 283 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 283 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 283 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 283 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 283 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 283 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 283 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 283 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 283 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 283 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 284 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 284 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 284 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 284 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 284 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 284 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 284 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 284 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 284 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 284 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 284 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 284 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 284 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 285 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 285 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 285 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 285 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 285 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 285 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 285 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 285 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 285 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 285 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 285 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 285 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 285 batch 1200 train loss: 0.0006 test loss: 0.0008
Epoch 286 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 286 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 286 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 286 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 286 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 286 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 286 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 286 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 286 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 286 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 286 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 286 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 286 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 287 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 287 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 287 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 287 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 287 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 287 batch 500 train loss: 0.0005 test loss: 0.0008
Epoch 287 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 287 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 287 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 287 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 287 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 287 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 287 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 288 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 288 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 288 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 288 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 288 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 288 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 288 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 288 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 288 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 288 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 288 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 288 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 288 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 289 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 289 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 289 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 289 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 289 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 289 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 289 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 289 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 289 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 289 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 289 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 289 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 289 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 290 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 290 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 290 batch 200 train loss: 0.0005 test loss: 0.0008
Epoch 290 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 290 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 290 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 290 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 290 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 290 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 290 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 290 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 290 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 290 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 291 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 291 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 291 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 291 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 291 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 291 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 291 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 291 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 291 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 291 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 291 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 291 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 291 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 292 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 292 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 292 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 292 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 292 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 292 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 292 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 292 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 292 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 292 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 292 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 292 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 292 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 293 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 293 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 293 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 293 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 293 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 293 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 293 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 293 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 293 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 293 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 293 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 293 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 293 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 294 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 294 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 294 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 294 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 294 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 294 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 294 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 294 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 294 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 294 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 294 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 294 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 294 batch 1200 train loss: 0.0005 test loss: 0.0008
Epoch 295 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 295 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 295 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 295 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 295 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 295 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 295 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 295 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 295 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 295 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 295 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 295 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 295 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 296 batch 0 train loss: 0.0006 test loss: 0.0008
Epoch 296 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 296 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 296 batch 300 train loss: 0.0005 test loss: 0.0008
Epoch 296 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 296 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 296 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 296 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 296 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 296 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 296 batch 1000 train loss: 0.0005 test loss: 0.0008
Epoch 296 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 296 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 297 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 297 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 297 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 297 batch 300 train loss: 0.0005 test loss: 0.0008
Epoch 297 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 297 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 297 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 297 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 297 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 297 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 297 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 297 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 297 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 298 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 298 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 298 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 298 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 298 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 298 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 298 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 298 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 298 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 298 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 298 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 298 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 298 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 299 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 299 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 299 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 299 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 299 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 299 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 299 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 299 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 299 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 299 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 299 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 299 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 299 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 300 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 300 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 300 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 300 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 300 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 300 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 300 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 300 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 300 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 300 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 300 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 300 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 300 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 301 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 301 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 301 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 301 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 301 batch 400 train loss: 0.0006 test loss: 0.0008
Epoch 301 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 301 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 301 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 301 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 301 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 301 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 301 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 301 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 302 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 302 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 302 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 302 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 302 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 302 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 302 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 302 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 302 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 302 batch 900 train loss: 0.0005 test loss: 0.0008
Epoch 302 batch 1000 train loss: 0.0005 test loss: 0.0008
Epoch 302 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 302 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 303 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 303 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 303 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 303 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 303 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 303 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 303 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 303 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 303 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 303 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 303 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 303 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 303 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 304 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 304 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 304 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 304 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 304 batch 400 train loss: 0.0008 test loss: 0.0008
Epoch 304 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 304 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 304 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 304 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 304 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 304 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 304 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 304 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 305 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 305 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 305 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 305 batch 300 train loss: 0.0005 test loss: 0.0008
Epoch 305 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 305 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 305 batch 600 train loss: 0.0001 test loss: 0.0008
Epoch 305 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 305 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 305 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 305 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 305 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 305 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 306 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 306 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 306 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 306 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 306 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 306 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 306 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 306 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 306 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 306 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 306 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 306 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 306 batch 1200 train loss: 0.0001 test loss: 0.0008
Epoch 307 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 307 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 307 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 307 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 307 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 307 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 307 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 307 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 307 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 307 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 307 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 307 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 307 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 308 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 308 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 308 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 308 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 308 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 308 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 308 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 308 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 308 batch 800 train loss: 0.0001 test loss: 0.0008
Epoch 308 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 308 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 308 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 308 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 309 batch 0 train loss: 0.0006 test loss: 0.0008
Epoch 309 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 309 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 309 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 309 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 309 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 309 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 309 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 309 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 309 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 309 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 309 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 309 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 310 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 310 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 310 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 310 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 310 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 310 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 310 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 310 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 310 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 310 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 310 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 310 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 310 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 311 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 311 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 311 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 311 batch 300 train loss: 0.0001 test loss: 0.0008
Epoch 311 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 311 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 311 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 311 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 311 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 311 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 311 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 311 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 311 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 312 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 312 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 312 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 312 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 312 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 312 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 312 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 312 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 312 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 312 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 312 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 312 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 312 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 313 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 313 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 313 batch 200 train loss: 0.0006 test loss: 0.0008
Epoch 313 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 313 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 313 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 313 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 313 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 313 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 313 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 313 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 313 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 313 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 314 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 314 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 314 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 314 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 314 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 314 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 314 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 314 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 314 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 314 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 314 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 314 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 314 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 315 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 315 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 315 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 315 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 315 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 315 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 315 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 315 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 315 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 315 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 315 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 315 batch 1100 train loss: 0.0001 test loss: 0.0008
Epoch 315 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 316 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 316 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 316 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 316 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 316 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 316 batch 500 train loss: 0.0006 test loss: 0.0008
Epoch 316 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 316 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 316 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 316 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 316 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 316 batch 1100 train loss: 0.0001 test loss: 0.0008
Epoch 316 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 317 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 317 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 317 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 317 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 317 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 317 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 317 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 317 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 317 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 317 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 317 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 317 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 317 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 318 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 318 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 318 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 318 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 318 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 318 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 318 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 318 batch 700 train loss: 0.0001 test loss: 0.0008
Epoch 318 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 318 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 318 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 318 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 318 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 319 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 319 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 319 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 319 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 319 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 319 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 319 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 319 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 319 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 319 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 319 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 319 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 319 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 320 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 320 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 320 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 320 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 320 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 320 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 320 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 320 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 320 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 320 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 320 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 320 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 320 batch 1200 train loss: 0.0005 test loss: 0.0008
Epoch 321 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 321 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 321 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 321 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 321 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 321 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 321 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 321 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 321 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 321 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 321 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 321 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 321 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 322 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 322 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 322 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 322 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 322 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 322 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 322 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 322 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 322 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 322 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 322 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 322 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 322 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 323 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 323 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 323 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 323 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 323 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 323 batch 500 train loss: 0.0006 test loss: 0.0008
Epoch 323 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 323 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 323 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 323 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 323 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 323 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 323 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 324 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 324 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 324 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 324 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 324 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 324 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 324 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 324 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 324 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 324 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 324 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 324 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 324 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 325 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 325 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 325 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 325 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 325 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 325 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 325 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 325 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 325 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 325 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 325 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 325 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 325 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 326 batch 0 train loss: 0.0001 test loss: 0.0008
Epoch 326 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 326 batch 200 train loss: 0.0001 test loss: 0.0008
Epoch 326 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 326 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 326 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 326 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 326 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 326 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 326 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 326 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 326 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 326 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 327 batch 0 train loss: 0.0006 test loss: 0.0008
Epoch 327 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 327 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 327 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 327 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 327 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 327 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 327 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 327 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 327 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 327 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 327 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 327 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 328 batch 0 train loss: 0.0001 test loss: 0.0008
Epoch 328 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 328 batch 200 train loss: 0.0001 test loss: 0.0008
Epoch 328 batch 300 train loss: 0.0001 test loss: 0.0008
Epoch 328 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 328 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 328 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 328 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 328 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 328 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 328 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 328 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 328 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 329 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 329 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 329 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 329 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 329 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 329 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 329 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 329 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 329 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 329 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 329 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 329 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 329 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 330 batch 0 train loss: 0.0006 test loss: 0.0008
Epoch 330 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 330 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 330 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 330 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 330 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 330 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 330 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 330 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 330 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 330 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 330 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 330 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 331 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 331 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 331 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 331 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 331 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 331 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 331 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 331 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 331 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 331 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 331 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 331 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 331 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 332 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 332 batch 100 train loss: 0.0001 test loss: 0.0008
Epoch 332 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 332 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 332 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 332 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 332 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 332 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 332 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 332 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 332 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 332 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 332 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 333 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 333 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 333 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 333 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 333 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 333 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 333 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 333 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 333 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 333 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 333 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 333 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 333 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 334 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 334 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 334 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 334 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 334 batch 400 train loss: 0.0001 test loss: 0.0008
Epoch 334 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 334 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 334 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 334 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 334 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 334 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 334 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 334 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 335 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 335 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 335 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 335 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 335 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 335 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 335 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 335 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 335 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 335 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 335 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 335 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 335 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 336 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 336 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 336 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 336 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 336 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 336 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 336 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 336 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 336 batch 800 train loss: 0.0001 test loss: 0.0008
Epoch 336 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 336 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 336 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 336 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 337 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 337 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 337 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 338 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 338 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 338 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 338 batch 300 train loss: 0.0005 test loss: 0.0008
Epoch 338 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 338 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 338 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 338 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 338 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 338 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 338 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 338 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 338 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 339 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 339 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 339 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 339 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 339 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 339 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 339 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 339 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 339 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 339 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 339 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 339 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 339 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 340 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 340 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 340 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 340 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 340 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 340 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 340 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 340 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 340 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 340 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 340 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 340 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 340 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 341 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 341 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 341 batch 200 train loss: 0.0006 test loss: 0.0008
Epoch 341 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 341 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 341 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 341 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 341 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 341 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 341 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 341 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 341 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 341 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 342 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 342 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 342 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 342 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 342 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 342 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 342 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 342 batch 700 train loss: 0.0006 test loss: 0.0008
Epoch 342 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 342 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 342 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 342 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 342 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 343 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 343 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 343 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 343 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 343 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 343 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 343 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 343 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 343 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 343 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 343 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 343 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 343 batch 1200 train loss: 0.0001 test loss: 0.0008
Epoch 344 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 344 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 344 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 344 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 344 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 344 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 344 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 344 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 344 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 344 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 344 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 344 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 344 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 345 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 345 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 345 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 345 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 345 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 345 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 345 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 345 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 345 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 345 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 345 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 345 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 345 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 346 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 346 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 346 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 346 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 346 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 346 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 346 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 346 batch 700 train loss: 0.0006 test loss: 0.0008
Epoch 346 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 346 batch 900 train loss: 0.0005 test loss: 0.0008
Epoch 346 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 346 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 346 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 347 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 347 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 347 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 347 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 347 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 347 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 347 batch 600 train loss: 0.0006 test loss: 0.0008
Epoch 347 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 347 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 347 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 347 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 347 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 347 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 348 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 348 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 348 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 348 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 348 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 348 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 348 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 348 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 348 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 348 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 348 batch 1000 train loss: 0.0001 test loss: 0.0008
Epoch 348 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 348 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 349 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 349 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 349 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 349 through epoch 435 (losses printed every 100 batches): per-batch train loss fluctuates between roughly 0.0001 and 0.0007, while the test loss stays flat at about 0.0008 (occasionally 0.0009) throughout, i.e. the loss curves have plateaued and further epochs bring no measurable improvement on the test set.
Epoch 435 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 435 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 435 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 435 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 435 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 436 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 436 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 436 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 436 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 436 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 436 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 436 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 436 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 436 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 436 batch 900 train loss: 0.0002 test loss: 0.0009
Epoch 436 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 436 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 436 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 437 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 437 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 437 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 437 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 437 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 437 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 437 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 437 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 437 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 437 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 437 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 437 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 437 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 438 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 438 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 438 batch 200 train loss: 0.0001 test loss: 0.0008
Epoch 438 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 438 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 438 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 438 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 438 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 438 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 438 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 438 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 438 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 438 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 439 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 439 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 439 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 439 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 439 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 439 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 439 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 439 batch 700 train loss: 0.0002 test loss: 0.0009
Epoch 439 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 439 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 439 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 439 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 439 batch 1200 train loss: 0.0001 test loss: 0.0008
Epoch 440 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 440 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 440 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 440 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 440 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 440 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 440 batch 600 train loss: 0.0001 test loss: 0.0008
Epoch 440 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 440 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 440 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 440 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 440 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 440 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 441 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 441 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 441 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 441 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 441 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 441 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 441 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 441 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 441 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 441 batch 900 train loss: 0.0005 test loss: 0.0008
Epoch 441 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 441 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 441 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 442 batch 0 train loss: 0.0003 test loss: 0.0009
Epoch 442 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 442 batch 200 train loss: 0.0006 test loss: 0.0008
Epoch 442 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 442 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 442 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 442 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 442 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 442 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 442 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 442 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 442 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 442 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 443 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 443 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 443 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 443 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 443 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 443 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 443 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 443 batch 700 train loss: 0.0003 test loss: 0.0009
Epoch 443 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 443 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 443 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 443 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 443 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 444 batch 0 train loss: 0.0001 test loss: 0.0008
Epoch 444 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 444 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 444 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 444 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 444 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 444 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 444 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 444 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 444 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 444 batch 1000 train loss: 0.0001 test loss: 0.0008
Epoch 444 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 444 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 445 batch 0 train loss: 0.0002 test loss: 0.0009
Epoch 445 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 445 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 445 batch 300 train loss: 0.0004 test loss: 0.0008
Epoch 445 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 445 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 445 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 445 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 445 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 445 batch 900 train loss: 0.0003 test loss: 0.0009
Epoch 445 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 445 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 445 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 446 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 446 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 446 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 446 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 446 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 446 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 446 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 446 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 446 batch 800 train loss: 0.0001 test loss: 0.0008
Epoch 446 batch 900 train loss: 0.0003 test loss: 0.0009
Epoch 446 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 446 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 446 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 447 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 447 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 447 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 447 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 447 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 447 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 447 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 447 batch 700 train loss: 0.0002 test loss: 0.0009
Epoch 447 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 447 batch 900 train loss: 0.0002 test loss: 0.0009
Epoch 447 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 447 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 447 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 448 batch 0 train loss: 0.0002 test loss: 0.0009
Epoch 448 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 448 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 448 batch 300 train loss: 0.0001 test loss: 0.0008
Epoch 448 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 448 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 448 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 448 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 448 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 448 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 448 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 448 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 448 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 449 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 449 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 449 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 449 batch 300 train loss: 0.0001 test loss: 0.0008
Epoch 449 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 449 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 449 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 449 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 449 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 449 batch 900 train loss: 0.0002 test loss: 0.0009
Epoch 449 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 449 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 449 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 450 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 450 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 450 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 450 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 450 batch 400 train loss: 0.0001 test loss: 0.0008
Epoch 450 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 450 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 450 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 450 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 450 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 450 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 450 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 450 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 451 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 451 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 451 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 451 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 451 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 451 batch 500 train loss: 0.0005 test loss: 0.0008
Epoch 451 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 451 batch 700 train loss: 0.0003 test loss: 0.0009
Epoch 451 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 451 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 451 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 451 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 451 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 452 batch 0 train loss: 0.0005 test loss: 0.0008
Epoch 452 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 452 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 452 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 452 batch 400 train loss: 0.0007 test loss: 0.0008
Epoch 452 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 452 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 452 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 452 batch 800 train loss: 0.0004 test loss: 0.0009
Epoch 452 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 452 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 452 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 452 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 453 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 453 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 453 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 453 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 453 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 453 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 453 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 453 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 453 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 453 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 453 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 453 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 453 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 454 batch 0 train loss: 0.0003 test loss: 0.0009
Epoch 454 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 454 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 454 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 454 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 454 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 454 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 454 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 454 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 454 batch 900 train loss: 0.0006 test loss: 0.0009
Epoch 454 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 454 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 454 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 455 batch 0 train loss: 0.0003 test loss: 0.0009
Epoch 455 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 455 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 455 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 455 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 455 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 455 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 455 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 455 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 455 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 455 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 455 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 455 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 456 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 456 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 456 batch 200 train loss: 0.0001 test loss: 0.0008
Epoch 456 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 456 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 456 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 456 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 456 batch 700 train loss: 0.0001 test loss: 0.0008
Epoch 456 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 456 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 456 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 456 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 456 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 457 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 457 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 457 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 457 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 457 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 457 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 457 batch 600 train loss: 0.0001 test loss: 0.0008
Epoch 457 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 457 batch 800 train loss: 0.0003 test loss: 0.0009
Epoch 457 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 457 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 457 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 457 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 458 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 458 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 458 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 458 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 458 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 458 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 458 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 458 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 458 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 458 batch 900 train loss: 0.0003 test loss: 0.0009
Epoch 458 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 458 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 458 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 459 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 459 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 459 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 459 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 459 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 459 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 459 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 459 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 459 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 459 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 459 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 459 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 459 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 460 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 460 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 460 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 460 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 460 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 460 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 460 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 460 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 460 batch 800 train loss: 0.0003 test loss: 0.0009
Epoch 460 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 460 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 460 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 460 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 461 batch 0 train loss: 0.0003 test loss: 0.0009
Epoch 461 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 461 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 461 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 461 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 461 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 461 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 461 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 461 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 461 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 461 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 461 batch 1100 train loss: 0.0001 test loss: 0.0008
Epoch 461 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 462 batch 0 train loss: 0.0002 test loss: 0.0009
Epoch 462 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 462 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 462 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 462 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 462 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 462 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 462 batch 700 train loss: 0.0004 test loss: 0.0008
Epoch 462 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 462 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 462 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 462 batch 1100 train loss: 0.0003 test loss: 0.0009
Epoch 462 batch 1200 train loss: 0.0001 test loss: 0.0008
Epoch 463 batch 0 train loss: 0.0003 test loss: 0.0009
Epoch 463 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 463 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 463 batch 300 train loss: 0.0001 test loss: 0.0008
Epoch 463 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 463 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 463 batch 600 train loss: 0.0002 test loss: 0.0009
Epoch 463 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 463 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 463 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 463 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 463 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 463 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 464 batch 0 train loss: 0.0003 test loss: 0.0009
Epoch 464 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 464 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 464 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 464 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 464 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 464 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 464 batch 700 train loss: 0.0002 test loss: 0.0009
Epoch 464 batch 800 train loss: 0.0001 test loss: 0.0009
Epoch 464 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 464 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 464 batch 1100 train loss: 0.0005 test loss: 0.0008
Epoch 464 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 465 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 465 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 465 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 465 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 465 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 465 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 465 batch 600 train loss: 0.0006 test loss: 0.0008
Epoch 465 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 465 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 465 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 465 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 465 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 465 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 466 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 466 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 466 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 466 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 466 batch 400 train loss: 0.0001 test loss: 0.0008
Epoch 466 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 466 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 466 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 466 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 466 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 466 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 466 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 466 batch 1200 train loss: 0.0005 test loss: 0.0008
Epoch 467 batch 0 train loss: 0.0003 test loss: 0.0009
Epoch 467 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 467 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 467 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 467 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 467 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 467 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 467 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 467 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 467 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 467 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 467 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 467 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 468 batch 0 train loss: 0.0004 test loss: 0.0009
Epoch 468 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 468 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 468 batch 300 train loss: 0.0001 test loss: 0.0008
Epoch 468 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 468 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 468 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 468 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 468 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 468 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 468 batch 1000 train loss: 0.0004 test loss: 0.0008
Epoch 468 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 468 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 469 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 469 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 469 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 469 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 469 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 469 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 469 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 469 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 469 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 469 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 469 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 469 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 469 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 470 batch 0 train loss: 0.0003 test loss: 0.0009
Epoch 470 batch 100 train loss: 0.0003 test loss: 0.0009
Epoch 470 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 470 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 470 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 470 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 470 batch 600 train loss: 0.0006 test loss: 0.0008
Epoch 470 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 470 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 470 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 470 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 470 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 470 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 471 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 471 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 471 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 471 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 471 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 471 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 471 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 471 batch 700 train loss: 0.0002 test loss: 0.0009
Epoch 471 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 471 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 471 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 471 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 471 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 472 batch 0 train loss: 0.0002 test loss: 0.0009
Epoch 472 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 472 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 472 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 472 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 472 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 472 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 472 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 472 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 472 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 472 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 472 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 472 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 473 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 473 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 473 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 473 batch 300 train loss: 0.0005 test loss: 0.0008
Epoch 473 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 473 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 473 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 473 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 473 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 473 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 473 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 473 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 473 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 474 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 474 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 474 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 474 batch 300 train loss: 0.0001 test loss: 0.0008
Epoch 474 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 474 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 474 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 474 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 474 batch 800 train loss: 0.0001 test loss: 0.0008
Epoch 474 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 474 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 474 batch 1100 train loss: 0.0004 test loss: 0.0008
Epoch 474 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 475 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 475 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 475 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 475 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 475 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 475 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 475 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 475 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 475 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 475 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 475 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 475 batch 1100 train loss: 0.0001 test loss: 0.0008
Epoch 475 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 476 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 476 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 476 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 476 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 476 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 476 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 476 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 476 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 476 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 476 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 476 batch 1000 train loss: 0.0005 test loss: 0.0008
Epoch 476 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 476 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 477 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 477 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 477 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 477 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 477 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 477 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 477 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 477 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 477 batch 800 train loss: 0.0003 test loss: 0.0009
Epoch 477 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 477 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 477 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 477 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 478 batch 0 train loss: 0.0003 test loss: 0.0009
Epoch 478 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 478 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 478 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 478 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 478 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 478 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 478 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 478 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 478 batch 900 train loss: 0.0005 test loss: 0.0009
Epoch 478 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 478 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 478 batch 1200 train loss: 0.0004 test loss: 0.0008
Epoch 479 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 479 batch 100 train loss: 0.0001 test loss: 0.0008
Epoch 479 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 479 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 479 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 479 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 479 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 479 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 479 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 479 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 479 batch 1000 train loss: 0.0001 test loss: 0.0008
Epoch 479 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 479 batch 1200 train loss: 0.0005 test loss: 0.0008
Epoch 480 batch 0 train loss: 0.0004 test loss: 0.0008
Epoch 480 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 480 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 480 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 480 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 480 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 480 batch 600 train loss: 0.0005 test loss: 0.0008
Epoch 480 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 480 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 480 batch 900 train loss: 0.0002 test loss: 0.0009
Epoch 480 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 480 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 480 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 481 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 481 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 481 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 481 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 481 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 481 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 481 batch 600 train loss: 0.0007 test loss: 0.0008
Epoch 481 batch 700 train loss: 0.0002 test loss: 0.0009
Epoch 481 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 481 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 481 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 481 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 481 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 482 batch 0 train loss: 0.0003 test loss: 0.0008
Epoch 482 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 482 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 482 batch 300 train loss: 0.0002 test loss: 0.0009
Epoch 482 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 482 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 482 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 482 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 482 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 482 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 482 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 482 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 482 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 483 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 483 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 483 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 483 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 483 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 483 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 483 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 483 batch 700 train loss: 0.0002 test loss: 0.0009
Epoch 483 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 483 batch 900 train loss: 0.0003 test loss: 0.0009
Epoch 483 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 483 batch 1100 train loss: 0.0001 test loss: 0.0008
Epoch 483 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 484 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 484 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 484 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 484 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 484 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 484 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 484 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 484 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 484 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 484 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 484 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 484 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 484 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 485 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 485 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 485 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 485 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 485 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 485 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 485 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 485 batch 700 train loss: 0.0002 test loss: 0.0009
Epoch 485 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 485 batch 900 train loss: 0.0003 test loss: 0.0009
Epoch 485 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 485 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 485 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 486 batch 0 train loss: 0.0002 test loss: 0.0009
Epoch 486 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 486 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 486 batch 300 train loss: 0.0001 test loss: 0.0008
Epoch 486 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 486 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 486 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 486 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 486 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 486 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 486 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 486 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 486 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 487 batch 0 train loss: 0.0002 test loss: 0.0009
Epoch 487 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 487 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 487 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 487 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 487 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 487 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 487 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 487 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 487 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 487 batch 1000 train loss: 0.0001 test loss: 0.0008
Epoch 487 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 487 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 488 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 488 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 488 batch 200 train loss: 0.0004 test loss: 0.0008
Epoch 488 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 488 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 488 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 488 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 488 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 488 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 488 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 488 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 488 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 488 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 489 batch 0 train loss: 0.0003 test loss: 0.0009
Epoch 489 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 489 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 489 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 489 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 489 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 489 batch 600 train loss: 0.0004 test loss: 0.0008
Epoch 489 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 489 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 489 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 489 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 489 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 489 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 490 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 490 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 490 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 490 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 490 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 490 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 490 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 490 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 490 batch 800 train loss: 0.0005 test loss: 0.0008
Epoch 490 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 490 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 490 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 490 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 491 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 491 batch 100 train loss: 0.0004 test loss: 0.0008
Epoch 491 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 491 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 491 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 491 batch 500 train loss: 0.0003 test loss: 0.0008
Epoch 491 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 491 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 491 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 491 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 491 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 491 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 491 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 492 batch 0 train loss: 0.0002 test loss: 0.0009
Epoch 492 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 492 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 492 batch 300 train loss: 0.0001 test loss: 0.0008
Epoch 492 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 492 batch 500 train loss: 0.0001 test loss: 0.0008
Epoch 492 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 492 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 492 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 492 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 492 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 492 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 492 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 493 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 493 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 493 batch 200 train loss: 0.0003 test loss: 0.0008
Epoch 493 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 493 batch 400 train loss: 0.0004 test loss: 0.0008
Epoch 493 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 493 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 493 batch 700 train loss: 0.0005 test loss: 0.0008
Epoch 493 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 493 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 493 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 493 batch 1100 train loss: 0.0001 test loss: 0.0008
Epoch 493 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 494 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 494 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 494 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 494 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 494 batch 400 train loss: 0.0007 test loss: 0.0008
Epoch 494 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 494 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 494 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 494 batch 800 train loss: 0.0002 test loss: 0.0009
Epoch 494 batch 900 train loss: 0.0003 test loss: 0.0009
Epoch 494 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 494 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 494 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 495 batch 0 train loss: 0.0002 test loss: 0.0009
Epoch 495 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 495 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 495 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 495 batch 400 train loss: 0.0006 test loss: 0.0008
Epoch 495 batch 500 train loss: 0.0004 test loss: 0.0008
Epoch 495 batch 600 train loss: 0.0003 test loss: 0.0008
Epoch 495 batch 700 train loss: 0.0003 test loss: 0.0008
Epoch 495 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 495 batch 900 train loss: 0.0002 test loss: 0.0009
Epoch 495 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 495 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 495 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 496 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 496 batch 100 train loss: 0.0003 test loss: 0.0008
Epoch 496 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 496 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 496 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 496 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 496 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 496 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 496 batch 800 train loss: 0.0004 test loss: 0.0008
Epoch 496 batch 900 train loss: 0.0004 test loss: 0.0008
Epoch 496 batch 1000 train loss: 0.0003 test loss: 0.0008
Epoch 496 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 496 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 497 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 497 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 497 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 497 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 497 batch 400 train loss: 0.0003 test loss: 0.0008
Epoch 497 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 497 batch 600 train loss: 0.0002 test loss: 0.0009
Epoch 497 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 497 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 497 batch 900 train loss: 0.0003 test loss: 0.0008
Epoch 497 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 497 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 497 batch 1200 train loss: 0.0003 test loss: 0.0008
Epoch 498 batch 0 train loss: 0.0002 test loss: 0.0008
Epoch 498 batch 100 train loss: 0.0002 test loss: 0.0008
Epoch 498 batch 200 train loss: 0.0005 test loss: 0.0008
Epoch 498 batch 300 train loss: 0.0003 test loss: 0.0008
Epoch 498 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 498 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 498 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 498 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 498 batch 800 train loss: 0.0003 test loss: 0.0008
Epoch 498 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 498 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 498 batch 1100 train loss: 0.0002 test loss: 0.0008
Epoch 498 batch 1200 train loss: 0.0002 test loss: 0.0008
Epoch 499 batch 0 train loss: 0.0002 test loss: 0.0009
Epoch 499 batch 100 train loss: 0.0005 test loss: 0.0008
Epoch 499 batch 200 train loss: 0.0002 test loss: 0.0008
Epoch 499 batch 300 train loss: 0.0002 test loss: 0.0008
Epoch 499 batch 400 train loss: 0.0002 test loss: 0.0008
Epoch 499 batch 500 train loss: 0.0002 test loss: 0.0008
Epoch 499 batch 600 train loss: 0.0002 test loss: 0.0008
Epoch 499 batch 700 train loss: 0.0002 test loss: 0.0008
Epoch 499 batch 800 train loss: 0.0002 test loss: 0.0008
Epoch 499 batch 900 train loss: 0.0002 test loss: 0.0008
Epoch 499 batch 1000 train loss: 0.0002 test loss: 0.0008
Epoch 499 batch 1100 train loss: 0.0003 test loss: 0.0008
Epoch 499 batch 1200 train loss: 0.0003 test loss: 0.0008
###Markdown
Model evaluation
###Code
# restore the most recent checkpoint, if one exists
i = -1
if ckpt_manager.checkpoints:
    ckpt.restore(ckpt_manager.checkpoints[i])
    print('Checkpoint ' + ckpt_manager.checkpoints[i][-2:] + ' restored!!')
unet_model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
loss = tf.keras.losses.MeanSquaredError())
test_loss = unet_model.evaluate(test_dataset)
pred_result = unet_model.predict(test_dataset)
print(pred_result.shape)
print(test_label.shape)
# zero out the entries flagged by the second input channel before computing the metrics
masked_pred = pred_result[..., 0] * (1 - test_input[..., 1])
masked_label = test_label[..., 0] * (1 - test_input[..., 1])
# RMSE for each output component of the masked arrays
for ch in range(test_label.shape[-2]):
    print(np.sqrt(mean_squared_error(masked_label[..., ch].reshape(-1), masked_pred[..., ch].reshape(-1))))
# R-squared for each output component, resetting the metric state before each computation
r2 = RSquare()
for ch in range(test_label.shape[-2]):
    r2.reset_states()
    print(r2(masked_label[..., ch].reshape(-1), masked_pred[..., ch].reshape(-1)))
fig = plt.figure(figsize=((8/2.54)*4, (6/2.54)*4))
plt.scatter(tf.cast(tf.reshape(((MAXS-MINS)*masked_label + MINS), (-1, 1)), tf.float32),
tf.cast(tf.reshape(((MAXS-MINS)*masked_pred + MINS), (-1, 1)), tf.float32),
c=cmap[0], s=2)
masked_label.shape
x_t = np.arange(0, test_label.shape[1])
# draw 6 figures, each showing 6 randomly chosen test samples
for _ in range(6):
    NUMBERS = np.arange(1, pred_result.shape[0])
    np.random.shuffle(NUMBERS)
    NUMBERS = NUMBERS[:6]          # pick 6 random sample indices
    position = 331                 # 3x3 subplot grid, first 6 positions used
    fig = plt.figure(figsize=((8.5/2.54)*8, (6/2.54)*8))
    i = 0                          # output component to plot
    for NUMBER in NUMBERS:
        ax = plt.subplot(position)
        plt.plot(x_t, test_label[NUMBER, :, i], c='k', alpha=0.8)            # measured
        plt.plot(x_t, masked_pred[NUMBER, :, i], 'o', c=cmap[5], alpha=0.4)  # estimated (masked)
        plt.plot(x_t, pred_result[NUMBER, :, i], c=cmap[2], alpha=0.4)       # estimated (full prediction)
        ax.axis('off')
        position += 1
    plt.show()
###Output
_____no_output_____
12-DA_Pandas_realworld.ipynb
###Markdown
12. A complete example
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
We have now seen most of the basic features of Pandas, including importing data, combining dataframes, aggregating information and plotting it. In this chapter, we are going to re-use these concepts with the real data seen in the [introduction chapter](06-DA_Pandas_introduction.ipynb). We are also going to explore some more advanced plotting libraries that take full advantage of dataframe structures.
12.1 Importing data
We import here two tables provided openly by the Swiss National Science Foundation. One contains a list of all *projects* to which funds have been allocated since 1975. The other contains a list of all *people* to whom funds have been awarded during the same period:
###Code
# local import
projects = pd.read_csv('Data/P3_GrantExport.csv',sep = ';')
persons = pd.read_csv('Data/P3_PersonExport.csv',sep = ';')
# import from url
#projects = pd.read_csv('http://p3.snf.ch/P3Export/P3_GrantExport.csv',sep = ';')
#persons = pd.read_csv('http://p3.snf.ch/P3Export/P3_PersonExport.csv',sep = ';')
###Output
_____no_output_____
###Markdown
We can have a brief look at both tables:
###Code
projects.head(5)
persons.head(5)
###Output
_____no_output_____
###Markdown
We see that the ```persons``` table gives information such as the role of a person in various projects (applicant, employee, etc.), her/his gender, etc. The *projects* table, on the other side, gives information such as the period of a grant and how much money was awarded. What if we now wish to know, for example:
- How much money is awarded on average depending on gender?
- How long does it typically take for a researcher to go from employee to applicant status on a grant?
We need a way to *link* the two tables, i.e. create a large table where *each row* corresponds to a single *observation* containing information from the two tables such as applicant, gender, awarded funds, dates, etc. We will now go through all necessary steps to achieve that goal.
12.2 Merging tables
If each row of the persons table contained a single observation with a single person and a single project (the same person would of course appear multiple times), we could just *join* the two tables based e.g. on the project ID. Unfortunately, in the persons table, each line corresponds to a *single researcher* with all project IDs lumped together in a list. For example:
###Code
persons.iloc[10041]
persons.iloc[10041]['Projects as responsible Applicant']
###Output
_____no_output_____
###Markdown
Therefore the first thing we need to do is to split those strings into actual lists. We can do that by using classic Python string splitting. We simply ```apply``` that function to the relevant columns. We need to take care of rows containing NaNs on which we cannot use ```split()```. We create two series, one for applicants, one for employees:
###Code
projID_a = persons['Projects as responsible Applicant'].apply(lambda x: x.split(';') if not pd.isna(x) else np.nan)
projID_e = persons['Projects as Employee'].apply(lambda x: x.split(';') if not pd.isna(x) else np.nan)
projID_a
projID_a[10041]
###Output
_____no_output_____
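###Markdown
As a side note, the same splitting can be done with Pandas' vectorized string methods, which leave NaN entries untouched without any explicit check. This is only an equivalent sketch of the cell above, not part of the processing pipeline:
###Code
# vectorized alternative: Series.str.split leaves NaN values as NaN automatically
projID_a_alt = persons['Projects as responsible Applicant'].str.split(';')
projID_e_alt = persons['Projects as Employee'].str.split(';')
###Output
_____no_output_____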
###Markdown
Now, to avoid problems later we'll only keep rows that are not NaNs. We first add the two series to the dataframe and then remove NaNs:
###Code
pd.isna(projID_a)
applicants = persons.copy()
applicants['projID'] = projID_a
applicants = applicants[~pd.isna(projID_a)]
employees = persons.copy()
employees['projID'] = projID_e
employees = employees[~pd.isna(projID_e)]
###Output
_____no_output_____
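###Markdown
The boolean indexing with ```~pd.isna()``` used above is equivalent to calling ```dropna``` restricted to the new column; purely as an illustration of the alternative (kept commented out, like the URL import earlier):
###Code
# equivalent alternative to the boolean indexing above
# applicants = applicants.dropna(subset=['projID'])
# employees = employees.dropna(subset=['projID'])
###Output
_____no_output_____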
###Markdown
Now we want each of these projects to become a single line in the dataframe. Here we use a function that we haven't used before called ```explode``` which turns every element in a list into a row (a good illustration of the variety of available functions in Pandas):
###Code
applicants = applicants.explode('projID')
employees = employees.explode('projID')
applicants.head(5)
###Output
_____no_output_____
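###Markdown
As a quick aside, here is a minimal sketch of what ```explode``` does on a toy dataframe (the column names are invented for illustration): every element of the list becomes its own row, with the other columns repeated.
###Code
toy = pd.DataFrame({'person': ['A', 'B'], 'projID': [['1', '2'], ['3']]})
# 'A' will appear twice (once per project), 'B' once
toy.explode('projID')
###Output
_____no_output_____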
###Markdown
So now we have one large table, where each row corresponds to a *single* applicant and a *single* project. We can finally do our merging operation where we combine information on persons and projects. We will do two such operations: one for the applicants table and one for the employees table, in both cases merging on the ```projID``` column. We have one last problem to fix:
###Code
applicants.loc[1].projID
projects.loc[1]['Project Number']
###Output
_____no_output_____
###Markdown
We need the project ID in the persons table to be a *number* and not a *string*. We can try to convert but get an error:
###Code
applicants.projID = applicants.projID.astype(int)
employees.projID = employees.projID.astype(int)
###Output
_____no_output_____
###Markdown
It looks like we have a row that doesn't conform to expectation and only contains ''. Let's try to figure out what happened. First we find the location with the issue:
###Code
applicants[applicants.projID=='']
###Output
_____no_output_____
###Markdown
Then we look in the original table:
###Code
persons.loc[50947]
###Output
_____no_output_____
###Markdown
Unfortunately, as is often the case, there is a formatting problem in the original table. This particular 'project as applicant' entry contains a single number but still ends with the ```;``` sign. Therefore when we split the text, we end up with ```['8','']```. Can we fix this? We can for example filter the table and remove rows where ```projID``` has length 0:
###Code
applicants = applicants[applicants.projID.apply(lambda x: len(x) > 0)]
employees = employees[employees.projID.apply(lambda x: len(x) > 0)]
###Output
_____no_output_____
###Markdown
Now we can convert the ```projID``` column to integer:
###Code
applicants.projID = applicants.projID.astype(int)
employees.projID = employees.projID.astype(int)
###Output
_____no_output_____
###Markdown
Finally we can use ```merge``` to combine both tables. We will combine the projects (on 'Project Number') and persons table (on 'projID_a' and 'projID_e'):
###Code
merged_appl = pd.merge(applicants, projects, left_on='projID', right_on='Project Number')
merged_empl = pd.merge(employees, projects, left_on='projID', right_on='Project Number')
applicants.head(5)
###Output
_____no_output_____
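###Markdown
A quick sanity check (a sketch, not part of the original analysis): ```merge``` uses an inner join by default, so rows whose project ID has no match in the other table are silently dropped. Comparing the lengths before and after the merge tells us how many rows were lost:
###Code
print(len(applicants), len(merged_appl))
print(len(employees), len(merged_empl))
###Output
_____no_output_____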
###Markdown
12.3 Reformatting columns: time We now have in those tables information on both scientists and projects. Among other things, we know when each project of each scientist started via the ```Start Date``` column:
###Code
merged_empl['Start Date']
###Output
_____no_output_____
###Markdown
If we want to do computations with dates (e.g. measuring time spans) we have to change the type of the column. Currently it is indeed just a string. We could parse that string, but Pandas already offers tools to handle dates. For example we can use ```pd.to_datetime``` to transform the string into a Python ```datetime``` format. Let's create a new ```date``` column:
###Code
merged_empl['date'] = pd.to_datetime(merged_empl['Start Date'])
merged_appl['date'] = pd.to_datetime(merged_appl['Start Date'])
merged_empl.iloc[0]['date']
merged_empl.iloc[0]['date'].year
###Output
_____no_output_____
###Markdown
Let's add a year column to our dataframe:
###Code
merged_empl['year'] = merged_empl.date.apply(lambda x: x.year)
merged_appl['year'] = merged_appl.date.apply(lambda x: x.year)
###Output
_____no_output_____
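###Markdown
As a side note, the same result can be obtained without ```apply``` by using the vectorized ```.dt``` accessor; this sketch should produce identical columns:
###Code
# vectorized alternative to the apply() calls above
merged_empl['year'] = merged_empl.date.dt.year
merged_appl['year'] = merged_appl.date.dt.year
###Output
_____no_output_____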
###Markdown
12.4 Completing information As we did in the introduction, we want to be able to broadly classify projects into three categories. We therefore search for a specific string ('Humanities', 'Mathematics', 'Biology') within the 'Discipline Name Hierarchy' column to create a new column called 'Field':
###Code
science_types = ['Humanities', 'Mathematics','Biology']
merged_appl['Field'] = merged_appl['Discipline Name Hierarchy'].apply(
lambda el: next((y for y in [x for x in science_types if x in el] if y is not None),None) if not pd.isna(el) else el)
###Output
_____no_output_____
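###Markdown
The nested comprehension above is compact but hard to read. Here is a more explicit sketch of the same idea using ```str.contains```; it writes to a separate, hypothetical ```Field2``` column and assumes each project matches at most one of the three strings (as in the original logic):
###Code
merged_appl['Field2'] = None
for science in science_types:
    # mark all projects whose discipline hierarchy mentions this field
    matches = merged_appl['Discipline Name Hierarchy'].str.contains(science, na=False)
    merged_appl.loc[matches, 'Field2'] = science
###Output
_____no_output_____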
###Markdown
We will use the amounts awarded in our analysis. Let's look at that column:
###Code
merged_appl['Approved Amount']
###Output
_____no_output_____
###Markdown
Problem: some rows are not numerical. Let's coerce that column to a numeric type:
###Code
merged_appl['Approved Amount'] = pd.to_numeric(merged_appl['Approved Amount'], errors='coerce')
merged_appl['Approved Amount']
###Output
_____no_output_____
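###Markdown
To make the effect of ```errors='coerce'``` explicit, here is a tiny self-contained example (the values are invented): anything that cannot be parsed as a number becomes NaN.
###Code
pd.to_numeric(pd.Series(['10000', 'not available', '2500.5']), errors='coerce')
###Output
_____no_output_____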
###Markdown
12.5 Data analysis We are finally done tidying up our tables, so we can do proper data analysis. We can *aggregate* data to answer some questions. 12.5.1 Amounts by gender Let's see for example what the average amount awarded every year is, split by gender. We keep only the 'Project funding' category to avoid obscuring the results with large funds awarded for specific projects (PNR etc.):
###Code
merged_projects = merged_appl[merged_appl['Funding Instrument Hierarchy'] == 'Project funding']
grouped_gender = merged_projects.groupby(['Gender','year'])['Approved Amount'].mean().reset_index()
grouped_gender
###Output
_____no_output_____
###Markdown
To generate a plot, we use Seaborn here, which uses some elements of a grammar of graphics. For example, we can assign variables to each "aspect" of our plot. Here the x and y axes are year and amount, while the color ('hue') is the gender. In one line, we can generate a plot that compiles all the information:
###Code
sns.lineplot(data = grouped_gender, x='year', y='Approved Amount', hue='Gender')
###Output
_____no_output_____
###Markdown
There seems to be a small but systematic difference in the average amount awarded. We can now use a plotting library that is essentially a Python port of ggplot to add even more complexity to this plot. For example, let's split the data also by Field:
###Code
import plotnine as p9
grouped_gender_field = merged_projects.groupby(['Gender','year','Field'])['Approved Amount'].mean().reset_index()
grouped_gender_field
(p9.ggplot(grouped_gender_field, p9.aes('year', 'Approved Amount', color='Gender'))
+ p9.geom_point()
+ p9.geom_line()
+ p9.facet_wrap('~Field'))
###Output
_____no_output_____
###Markdown
12.5.2 From employee to applicant One of the questions we wanted to answer above was how much time goes by between the first time a scientist is mentioned as "employee" on an application and the first time they apply as main applicant. We therefore have to: 1. find all rows corresponding to a specific scientist; 2. find the earliest project date. For (1) we can use ```groupby``` and use the ```Person ID SNSF``` ID which is a unique ID assigned to each researcher. Once this *aggregation* is done, we can summarize each group by looking for the "minimal" date:
###Code
first_empl = merged_empl.groupby('Person ID SNSF').date.min().reset_index()
first_appl = merged_appl.groupby('Person ID SNSF').date.min().reset_index()
###Output
_____no_output_____
###Markdown
We now have two dataframes indexed by the ```Person ID```:
###Code
first_empl.head(5)
###Output
_____no_output_____
###Markdown
Now we can again merge the two dataframes to be able to compare applicant/employee start dates for individual researchers:
###Code
merge_first = pd.merge(first_appl, first_empl, on = 'Person ID SNSF', suffixes=('_appl', '_empl'))
merge_first
###Output
_____no_output_____
###Markdown
Finally we merge with the full table, based on the person ID, to recover the other parameters:
###Code
full_table = pd.merge(merge_first, merged_appl,on = 'Person ID SNSF')
###Output
_____no_output_____
###Markdown
Finally we can add a column to that dataframe as a "difference in dates":
###Code
full_table['time_diff'] = full_table.date_appl-full_table.date_empl
full_table.time_diff = full_table.time_diff.apply(lambda x: x.days/365)
full_table.hist(column='time_diff',bins = 50)
###Output
_____no_output_____
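###Markdown
The same conversion can also be done without ```apply```, using the ```.dt``` accessor of the timedelta column. A sketch (the ```time_diff_years``` column name is made up for illustration):
###Code
# vectorized alternative: convert the timedelta to (approximate) years
full_table['time_diff_years'] = (full_table.date_appl - full_table.date_empl).dt.days / 365
###Output
_____no_output_____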
###Markdown
We see that we have one strong peak at $\Delta T=0$ which corresponds to people who were paid for the first time through an SNSF grant when they applied themselves. The remaining cases have a peak around $\Delta T=5$ which typically corresponds to the case where a PhD student was paid on a grant and then applied for a postdoc grant ~4-5 years later. We can go further and ask how dependent this waiting time is on the Field of research. Obviously, the Humanities are structured very differently:
###Code
sns.boxplot(data=full_table, y='time_diff', x='Field');
sns.violinplot(data=full_table, y='time_diff', x='Field', );
###Output
_____no_output_____ |
examples/IPython Kernel/nbpackage/mynotebook.ipynb | ###Markdown
My Notebook
###Code
def foo():
return "foo"
def has_ip_syntax():
listing = !ls
return listing
def whatsmyname():
return __name__
###Output
_____no_output_____ |
Chapter03/CH3_kmeans.ipynb | ###Markdown
Importing libraries First we will import the libraries we need. In order to improve the understanding of the algorithms, we will use the numpy library. Then we will use the well-known matplotlib for the graphical representation of the algorithms.
###Code
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now we will generate the samples, which will be a set of 2D elements, and then we will generate the candidate centers, which will be four 2D elements. [infobox] In order to generate a sample set, a random number generator is normally used, but in this case we want to set the samples to predetermined values, so you will be able to write your own algorithms and test the class assignment based on them.
###Code
samples=np.array([[1,2],[12,2],[0,1],[10,0],[9,1],[8,2],[0,10],[1,8],[2,9],[9,9],[10,8],[8,9] ], dtype=np.float)
centers=np.array([[3,2], [2,6], [9,3], [7,6]], dtype=np.float)
N=len(samples)
###Output
_____no_output_____
###Markdown
Let's represent the samples and centers. First we will initialize a new matplotlib figure with the corresponding axes. The fig object will allow us to change the whole figure. The plt and ax variable names are an almost standardized way to refer to these objects. So let's try to get an idea of what the samples look like. This will be done through matplotlib's scatter drawing type. It takes as parameters the x coordinates, the y coordinates, the size (in points squared), the marker type, the color, etc. [infobox] There is a variety of markers to choose from, like point (.), circle (o), square (s). To see the full list see: https://matplotlib.org/api/markers_api.html
###Code
fig, ax = plt.subplots()
ax.scatter(samples.transpose()[0], samples.transpose()[1], marker = 'o', s = 100 )
ax.scatter(centers.transpose()[0], centers.transpose()[1], marker = 's', s = 100, color='black')
plt.plot()
###Output
_____no_output_____
###Markdown
Let's define a function that, given a new sample, will return a list with the distances to all the current centroids, in order to assign this new sample to one of them and, afterwards, recalculate the centroids.
###Code
def distance (sample, centroids):
distances=np.zeros(len(centroids))
for i in range(0,len(centroids)):
dist=np.sqrt(sum(pow(np.subtract(sample,centroids[i]),2)))
distances[i]=dist
return distances
###Output
_____no_output_____
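###Markdown
For reference, the same distances can be computed without an explicit loop by using NumPy broadcasting. This sketch should return the same array as the distance function above (up to floating point error):
###Code
def distance_vectorized(sample, centroids):
    # subtract the sample from every centroid at once and take the row-wise Euclidean norm
    return np.sqrt(np.sum((centroids - sample)**2, axis=1))
distance_vectorized(samples[0], centers)
###Output
_____no_output_____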
###Markdown
Let's define a function which will build, one by one, the step-by-step plots of our application. It expects a maximum of 12 subplots, and the plotnumber parameter will determine the position in the 6x2 grid (the plot number is added to 620, so 621 is the top-left subplot, and so on in reading order). Then for each picture we will do a scatter plot of the clustered samples, and then of the current centroid positions.
###Code
def showcurrentstatus (samples, centers, clusters, plotnumber):
plt.subplot(620+plotnumber)
plt.scatter(samples.transpose()[0], samples.transpose()[1], marker = 'o', s = 150 , c=clusters)
plt.scatter(centers.transpose()[0], centers.transpose()[1], marker = 's', s = 100, color='black')
plt.plot()
###Output
_____no_output_____
###Markdown
The following function will use the previous distance function, and we will have an auxiliary clusters array, in which we will store the centroid to which each sample is assigned (a number from 0 to K-1). The main loop will go from sample 0 to N, and for each one it will look for the closest centroid, assign the centroid number to the corresponding index of the clusters array, and add the sample coordinates to its currently assigned centroid. Then we use the bincount method to count the number of samples for each centroid, which gives us the divisor array; dividing the summed coordinates of each class by the previous divisor array gives us the new centroids.
###Code
def kmeans_old(centroids, samples, K):
distances=np.zeros((N,K))
new_centroids=np.zeros((K, 2))
clusters=np.zeros(len(samples), np.int)
for i in range(0,len(samples)):
distances[i] = distance(samples[i], centroids)
clusters[i] = np.argmin(distances[i])
new_centroids[clusters[i]] += samples[i]
divisor = np.bincount(clusters).astype(np.float)
for i in range(0,K):
new_centroids[i] = np.nan_to_num(np.divide(new_centroids[i] ,divisor[i]))
showcurrentstatus(samples, new_centroids, clusters)
return new_centroids
def kmeans(centroids, samples, K, plotresults):
plt.figure(figsize=(20,20))
distances=np.zeros((N,K))
new_centroids=np.zeros((K, 2))
final_centroids=np.zeros((K, 2))
clusters=np.zeros(len(samples), np.int)
for i in range(0,len(samples)):
distances[i] = distance(samples[i], centroids)
clusters[i] = np.argmin(distances[i])
new_centroids[clusters[i]] += samples[i]
divisor = np.bincount(clusters).astype(np.float)
divisor.resize([K])
for j in range(0,K):
final_centroids[j] = np.nan_to_num(np.divide(new_centroids[j] ,divisor[j]))
if (i>3 and plotresults==True):
showcurrentstatus(samples[:i], final_centroids, clusters[:i], i-3)
return final_centroids
###Output
_____no_output_____
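###Markdown
Since np.bincount does a lot of the work here, a tiny standalone example may help: it counts how many times each integer label appears, which is exactly the number of samples assigned to each centroid.
###Code
np.bincount(np.array([0, 1, 1, 3, 3, 3]))
# expected result: array([1, 2, 0, 3]) -> one sample in cluster 0, two in cluster 1, none in cluster 2, three in cluster 3
###Output
_____no_output_____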
###Markdown
Now it's time to kickstart the kmeans algorithm, using the initial samples and centers we set up at first. The current algorithm will show how the clusters are evolving, starting from a few elements, up to the final state.
###Code
finalcenters=kmeans (centers, samples, 4, True)
###Output
/usr/local/lib/python2.7/dist-packages/ipykernel/__main__.py:15: RuntimeWarning: invalid value encountered in divide
|
WMcarracing_test.ipynb | ###Markdown
###Code
!add-apt-repository ppa:graphics-drivers/ppa
!apt update
!apt install nvidia-430 nvidia-430-dev
!apt-get install g++ freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libglu1-mesa libglu1-mesa-dev
# Check the environment
!echo -- OS --
!cat /etc/os-release | grep -e ^VERSION= -e ^NAME=
!echo
!echo -- python --
!echo python version: `python --version`
!echo
!echo -- GPU --
!nvidia-smi
# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
import numpy as np
import os
import json
import tensorflow as tf
import random
from vae.vae import ConvVAE, reset_graph
# Create directories
import os
WM_BASE_DIR = "/content/drive/My Drive/WM"
WM_DIR = WM_BASE_DIR + "/worldmodel-test"
!if [ ! -d "{WM_BASE_DIR}" ]; then mkdir "{WM_BASE_DIR}"; fi
print(WM_BASE_DIR)
!echo "{WM_BASE_DIR}"
# git clone
!if [ ! -d "{WM_DIR}" ]; then git clone https://github.com/Syunkolee9891/worldmodel-test.git "{WM_DIR}" ; else echo "Already cloned" ; fi
# Verify the clone
!echo -- worldmodel-test --
!ls "{WM_DIR}"
!pip install python-xlib
!Xvfb :99 -screen 0 1024x768x24 -listen tcp -ac
!apt install x11-apps
!DISPLAY=localhost:99 xeyes
!x11vnc -display :99 -listen 0.0.0.0 -forever -xkb -shared -nopw
!DISPLAY=:0 gvncviewer localhost::5900
#!DISPLAY=localhost:99 ipython
from Xlib.display import Display
from Xlib.X import MotionNotify
from Xlib.X import ButtonPress
from Xlib.X import ButtonRelease
from Xlib.ext.xtest import fake_input
display = Display()
fake_input(display, MotionNotify, x=0, y=0)
fake_input(display, ButtonPress, 1)
display.sync()
import pyglet
window = pyglet.window.Window(width=400,height=300)
label = pyglet.text.Label('Hello, world',
font_name='Times New Roman',
font_size=36,
x=window.width//2, y=window.height//2,
anchor_x='center', anchor_y='center')
@window.event
def on_draw():
window.clear()
label.draw()
pyglet.app.run()
# Install the Python modules (required every time the runtime changes)
!python -m pip install -r "{WM_DIR}/requirements-colab.txt"
# Check the tensorflow installation
!pip freeze
# Check that tensorflow-gpu works
#from tensorflow.python.client import device_lib
#device_lib.list_local_devices()
# Move to the working directory (carracing)
%cd drive/MyDrive/WM/worldmodel-test/carracing
!ls
# (due to the imresize import error)
!pip install scipy==1.2.0
!pip install box2d-py
!pip install gym[Box_2D]
#test
#!pip uninstall --yes tensorflow
#!pip install tensorflow==1.13.1
import matplotlib.pyplot as plt
%matplotlib inline
np.set_printoptions(precision=4, edgeitems=6, linewidth=100, suppress=True)
from IPython import display
import numpy as np
import time
import PIL.Image
import io
!python model.py render log/carracing.cma.16.64.best.json
###Output
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
filename log/carracing.cma.16.64.best.json
WARNING:tensorflow:From /content/drive/My Drive/WM/worldmodel-test/carracing/vae/vae.py:37: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /content/drive/My Drive/WM/worldmodel-test/carracing/vae/vae.py:44: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
WARNING:tensorflow:From /content/drive/My Drive/WM/worldmodel-test/carracing/vae/vae.py:53: conv2d_transpose (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d_transpose instead.
2022-01-03 15:46:45.280084: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2022-01-03 15:46:45.283564: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2199995000 Hz
2022-01-03 15:46:45.283803: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5588053db080 executing computations on platform Host. Devices:
2022-01-03 15:46:45.283839: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
model using cpu
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
input dropout mode = False
output dropout mode = False
recurrent dropout mode = False
WARNING:tensorflow:From /content/drive/My Drive/WM/worldmodel-test/carracing/rnn/rnn.py:127: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.RNN(cell)`, which is equivalent to this API
model size 867
/usr/local/lib/python3.7/dist-packages/gym/logger.py:30: UserWarning: [33mWARN: Box bound precision lowered by casting to float32[0m
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
loading file log/carracing.cma.16.64.best.json
Track generation: 1241..1555 -> 314-tiles track
Traceback (most recent call last):
File "model.py", line 290, in <module>
main()
File "model.py", line 280, in main
train_mode=False, render_mode=render_mode, num_episode=1)
File "model.py", line 175, in simulate
obs = model.env.reset()
File "/usr/local/lib/python3.7/dist-packages/gym/envs/box2d/car_racing.py", line 364, in reset
return self.step(None)[0]
File "/usr/local/lib/python3.7/dist-packages/gym/envs/box2d/car_racing.py", line 376, in step
self.state = self.render("state_pixels")
File "/usr/local/lib/python3.7/dist-packages/gym/envs/box2d/car_racing.py", line 399, in render
from gym.envs.classic_control import rendering
File "/usr/local/lib/python3.7/dist-packages/gym/envs/classic_control/rendering.py", line 25, in <module>
from pyglet.gl import *
File "/usr/local/lib/python3.7/dist-packages/pyglet/gl/__init__.py", line 244, in <module>
import pyglet.window
File "/usr/local/lib/python3.7/dist-packages/pyglet/window/__init__.py", line 1880, in <module>
gl._create_shadow_window()
File "/usr/local/lib/python3.7/dist-packages/pyglet/gl/__init__.py", line 220, in _create_shadow_window
_shadow_window = Window(width=1, height=1, visible=False)
File "/usr/local/lib/python3.7/dist-packages/pyglet/window/xlib/__init__.py", line 165, in __init__
super(XlibWindow, self).__init__(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/pyglet/window/__init__.py", line 570, in __init__
display = pyglet.canvas.get_display()
File "/usr/local/lib/python3.7/dist-packages/pyglet/canvas/__init__.py", line 94, in get_display
return Display()
File "/usr/local/lib/python3.7/dist-packages/pyglet/canvas/xlib.py", line 123, in __init__
raise NoSuchDisplayException('Cannot connect to "%s"' % name)
pyglet.canvas.xlib.NoSuchDisplayException: Cannot connect to "None"
|
examples/_debug/do_boundaries.ipynb | ###Markdown
Spain provinces boundaries map
###Code
es_boundaries = do.boundaries(region='Spain')
es_boundaries.dataframe[['geom_name', 'geom_id']]
es_provinces = do.boundaries(boundary='es.cnig.prov')
es_provinces.dataframe.info()
Layer(es_provinces.dataframe)
###Output
_____no_output_____
###Markdown
Exploring Australia boundaries
###Code
au_boundaries = do.boundaries(region='Australia')
au_boundaries.dataframe[['geom_name', 'geom_id']]
au_postal_areas = do.boundaries(boundary='au.geo.POA')
# Be careful with the dataframe size here before try to render a Layer with it:
au_postal_areas.dataframe.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2513 entries, 0 to 2512
Data columns (total 2 columns):
the_geom 2513 non-null object
geom_refs 2513 non-null object
dtypes: object(2)
memory usage: 39.4+ KB
###Markdown
Looking for "median income" data in US area
###Code
tracts = do.boundaries(
boundary='us.census.tiger.census_tract',
region=[-112.096642,43.429932,-111.974213,43.553539]
)
tracts.upload(table_name='idaho_falls_tracts', credentials=credentials, if_exists='replace')
median_income_meta = do.discovery(
'idaho_falls_tracts',
keywords='median income')
median_income_meta
# Warning: right now the median_income_meta may have duplicates that break the following `augment` call
median_income_unique = median_income_meta.loc[:2]
idaho_falls_income = do.augment(
'idaho_falls_tracts',
median_income_unique,
how='geom_refs')
idaho_falls_income.dataframe.head(5)
Layer(idaho_falls_income.dataframe)
###Output
_____no_output_____ |
soluciones/Entrega_8_sol.ipynb | ###Markdown
Loading the Data
###Code
#-- Unzip the dataset
# !rm -r mnist
# !unzip /content/drive/MyDrive/Colab/IntroDeepLearning_202102/fashion-mnist.zip
#--- Find the path of each image file
from glob import glob
train_files = glob('./fashion-mnist/train/*/*.png')
valid_files = glob('./fashion-mnist/valid/*/*.png')
test_files = glob('./fashion-mnist/test/*/*.png')
#--- Shuffle the data randomly to avoid bias
import numpy as np
np.random.shuffle(train_files)
np.random.shuffle(valid_files)
np.random.shuffle(test_files)
import torchvision.transforms as transforms
#--- Transform the data to fit the 224x224 px ResNet input
data_transform = transforms.Compose([
transforms.Resize((224, 224)),
    transforms.Grayscale(3), # MNIST has a single channel, so we convert it to 3 channels to avoid modifying more layers of the model
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225])
])
#--- Load the training data into lists
from PIL import Image
N_train = len(train_files[:4000])
X_train = []
Y_train = []
for i, train_file in enumerate(train_files[:4000]):
Y_train.append( int(train_file.split('/')[3]) )
X_train.append( np.array(data_transform(Image.open(train_file) )))
#--- Load the test data into lists
N_test = len(test_files[:500])
X_test = []
Y_test = []
for i, test_file in enumerate(test_files[:500]):
Y_test.append( int(test_file.split('/')[3]) )
X_test.append( np.array(data_transform(Image.open(test_file)) ))
#-- Visualize the data
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8,8))
for i in range(4):
plt.subplot(2,2,i+1)
plt.imshow(X_test[i*15].reshape(224,224,3))
plt.title(Y_test[i*15])
plt.axis(False)
plt.show()
#--- Convert the lists of data into torch tensors
import torch
from torch.autograd import Variable
X_train = Variable(torch.from_numpy(np.array(X_train))).float()
Y_train = Variable(torch.from_numpy(np.array(Y_train))).long()
X_test = Variable(torch.from_numpy(np.array(X_test))).float()
Y_test = Variable(torch.from_numpy(np.array(Y_test))).long()
X_train.data.size()
#-- Create the DataLoader
batch_size = 32
train_ds = torch.utils.data.TensorDataset(X_train, Y_train)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=batch_size, shuffle=True)
del train_ds
###Output
_____no_output_____
###Markdown
Training the model
###Code
#--- Select and load the model
import torch
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model
#--- Freeze the weights of the model's layers so they are not updated
for p in model.parameters():
p.requires_grad = False
#--- Define the number of classes
out_dim = 10
#--- Rewrite the output layer for the new dataset
model.fc = torch.nn.Sequential(
torch.nn.Linear(model.fc.in_features, 100),
torch.nn.ReLU(),
torch.nn.Linear(100, out_dim)
)
model.load_state_dict(model.state_dict())
model
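# Sanity check (a small sketch, not part of the original notebook): count frozen vs. trainable
# parameters to confirm that only the new classification head will be updated during training.
n_frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('frozen parameters: {}, trainable parameters: {}'.format(n_frozen, n_trainable))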
!pip install hiddenlayer
import hiddenlayer as hl
#--- Create variables to store the scores at each epoch
from sklearn.metrics import f1_score
model = model.cuda()
model.train()
#--- Define our evaluation criterion and the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, weight_decay=0.1)
criterion = torch.nn.CrossEntropyLoss()
#--- Train the model for 20 epochs
n_epochs = 20
history = hl.History()
canvas = hl.Canvas()
iter = 0
for epoch in range(n_epochs):
for batch_idx, (X_train_batch, Y_train_batch) in enumerate(train_dl):
        # Move the data to the GPU ('cuda')
X_train_batch = X_train_batch.cuda()
Y_train_batch = Y_train_batch.cuda()
        # Make a prediction
Y_pred = model(X_train_batch)
        # Compute the loss
loss = criterion(Y_pred, Y_train_batch)
Y_pred = torch.argmax(Y_pred, 1)
        # Compute the accuracy
acc = sum(Y_train_batch == Y_pred)/len(Y_pred)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if iter%10 == 0:
            #-- Visualize the evolution of the loss and accuracy scores
history.log((epoch+1, iter), loss=loss, accuracy=acc)
with canvas:
canvas.draw_plot(history["loss"])
canvas.draw_plot(history["accuracy"])
iter += 1
del X_train_batch, Y_train_batch, Y_pred
#-- Validate the model
from sklearn.metrics import f1_score
model.cpu()
model.eval()
Y_pred = model(X_test)
loss = criterion(Y_pred,Y_test)
Y_pred = torch.argmax(Y_pred, 1)
f1 = f1_score(Y_test, Y_pred, average='macro')
acc = sum(Y_test == Y_pred)/len(Y_pred)
print( 'Loss:{:.2f}, F1:{:.2f}, Acc:{:.2f}'.format(loss.item(), f1, acc ) )
#--- Save the new model
torch.save(model,open('./ResNet_MNIST.pt','wb'))
from sklearn.metrics import confusion_matrix
def CM(Y_true, Y_pred, classes, lclasses=None):
fig = plt.figure(figsize=(10, 10))
cm = confusion_matrix(Y_true, Y_pred)
if lclasses == None:
lclasses = np.arange(0,classes)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
cmap=plt.cm.Blues
ax = fig.add_subplot(1,1,1)
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax, pad=0.01, shrink=0.86)
ax.set(xticks=np.arange(cm.shape[1]), yticks=np.arange(cm.shape[0]),xticklabels=lclasses, yticklabels=lclasses)
ax.set_xlabel("Predicted",size=20)
ax.set_ylabel("True",size=20)
ax.set_ylim(classes-0.5, -0.5)
plt.setp(ax.get_xticklabels(), size=12)
plt.setp(ax.get_yticklabels(), size=12)
fmt = '.2f'
thresh = cm.max()/2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),ha="center", va="center",size=15 , color="white" if cm[i, j] > thresh else "black")
plt.show()
CM(Y_test, Y_pred, 10)
###Output
_____no_output_____ |
B_Submissions_Kopuru_competition/2021-05-26_submit/Batch_OLS/workerbee05_HEXmonths.ipynb | ###Markdown
HEX algorithm **Kopuru Vespa Velutina Competition** **Linear Regression model** Purpose: Predict the number of Nests in each of Biscay's 112 municipalities for the year 2020. Output: *(WaspBusters_20210512_batch_OLSmonths.csv)* @authors: * [email protected] * [email protected] * [email protected] * [email protected] Libraries
###Code
# Base packages -----------------------------------
import numpy as np
import pandas as pd
# Visualization -----------------------------------
from matplotlib import pyplot
# Scaling data ------------------------------------
from sklearn import preprocessing
# Linear Regression -------------------------------
from statsmodels.formula.api import ols
#from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Functions
###Code
# Function that checks if final Output is ready for submission or needs revision
def check_data(HEX):
if HEX.shape == (112, 3):
print(HEX.shape,": Shape is correct.")
else:
print(HEX.shape,": Shape is **INCORRECT!**")
if HEX["CODIGO MUNICIPIO"].nunique() == 112:
print(HEX["CODIGO MUNICIPIO"].nunique(),": Number of unique municipalities is correct.")
else:
print(HEX["CODIGO MUNICIPIO"].nunique(),": Number of unique municipalities is **INCORRECT!**")
if any(HEX["NIDOS 2020"] < 0):
print("**INCORRECT!** At least one municipality has NESTS <= 0.")
else:
print("Great! All municipalities have NESTS >= 0.")
print("The Total 2020 Nests' Prediction is", int(HEX["NIDOS 2020"].sum()))
###Output
_____no_output_____
###Markdown
Get the data
###Code
QUEEN_train = pd.read_csv('../Feeder_months/WBds03_QUEENtrainMONTHS.csv', sep=',')
QUEEN_predict = pd.read_csv('../Feeder_months/WBds03_QUEENpredictMONTHS.csv', sep=',')
clustersMario = pd.read_csv("../Feeder_years/WBds_CLUSTERSnests.csv")
#QUEEN_predict.isnull().sum()
QUEEN_train.shape
QUEEN_predict.shape
###Output
_____no_output_____
###Markdown
Add in more Clusters (nest amount + commercial density clusters)
###Code
QUEEN_train = pd.merge(QUEEN_train, clustersMario, how = 'left', on = ['municip_code', 'municip_name'])
QUEEN_predict = pd.merge(QUEEN_predict, clustersMario, how = 'left', on = ['municip_code', 'municip_name'])
QUEEN_train.fillna(4, inplace=True)
QUEEN_predict.fillna(4, inplace=True)
QUEEN_train.shape
QUEEN_predict.shape
#QUEEN_train.isnull().sum()
#QUEEN_predict.isnull().sum()
QUEEN_train.Cluster.value_counts()
###Output
_____no_output_____
###Markdown
Determine feature importance
###Code
X = QUEEN_train.drop(columns = ['municip_name', 'municip_code', 'NESTS', 'station_code', 'station_name', 'year'])
y = QUEEN_train['NESTS']
# Scale the datasets using MinMaxScaler
scalators = X.columns
X[scalators] = preprocessing.minmax_scale(X[scalators])
# define the model
model_fi = LinearRegression()
# fit the model
model_fi.fit(X, y)
# get importance
importance = model_fi.coef_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0s, Score: %.5f' % (X.columns[i],v))
# plot feature importance
pyplot.bar([x for x in range(len(importance))], importance)
pyplot.show()
for i,v in enumerate(importance):
if v > 1:
print('Feature: %0s, Score: %.2f' % (X.columns[i],v))
###Output
Feature: month, Score: 1.55
Feature: colonies_amount, Score: 1.64
Feature: food_fruit, Score: 2.30
Feature: food_txakoli, Score: 1.19
Feature: food_blueberry, Score: 1.24
Feature: food_raspberry, Score: 1.06
Feature: weath_humidity, Score: 1.43
Feature: weath_maxLevel, Score: 1.84
Feature: weath_accuRainfall, Score: 1.65
Feature: weath_maxMeanTemp, Score: 3.66
Feature: weath_minTemp, Score: 9.50
Feature: weath_meanWindM, Score: 1.84
Feature: cluster_size, Score: 2.93
Feature: cluster_cosmo, Score: 4.91
###Markdown
Train the model With the variables suggested by the Feature Importance method
###Code
#model = ols('NESTS ~ month + colonies_amount + food_fruit + food_txakoli + food_blueberry + food_raspberry + weath_humidity + weath_maxLevel + weath_accuRainfall + weath_maxMeanTemp + weath_minTemp + weath_meanWindM + C(cluster_cosmo) + C(cluster_size)',\
# data=QUEEN_train).fit()
#print(model.summary())
#model = ols('NESTS ~ weath_meanTemp + weath_maxMeanTemp + weath_minTemp + weath_maxWindM + C(cluster_cosmo) + C(cluster_size)',\
# data=QUEEN_train).fit()
#print(model.summary())
###Output
_____no_output_____
###Markdown
Backward elimination
###Code
#model = ols('NESTS ~ month + food_txakoli + food_blueberry + food_raspberry + weath_humidity + weath_accuRainfall + weath_maxMeanTemp + weath_minTemp + weath_meanWindM + C(cluster_cosmo) + C(cluster_size)',\
# data=QUEEN_train).fit()
#print(model.summary())
#model = ols('NESTS ~ weath_meanTemp + weath_maxMeanTemp + weath_minTemp + C(cluster_cosmo) + C(cluster_size)',\
# data=QUEEN_train).fit()
#print(model.summary())
###Output
_____no_output_____
###Markdown
With the additional Cluster Categorical for nest amounts
###Code
model = ols('NESTS ~ month + food_txakoli + food_blueberry + food_raspberry + weath_humidity + weath_accuRainfall + weath_maxMeanTemp + weath_minTemp + weath_meanWindM + C(cluster_cosmo) + C(cluster_size) + C(Cluster)',\
data=QUEEN_train).fit()
print(model.summary())
#model = ols('NESTS ~ weath_meanTemp + weath_maxMeanTemp + weath_minTemp + C(cluster_cosmo) + C(cluster_size) + C(Cluster)',\
# data=QUEEN_train).fit()
#print(model.summary())
###Output
_____no_output_____
###Markdown
Predict 2020's nests
###Code
y_2020 = model.predict(QUEEN_predict)
y_2020
# Any municipality/month resulting in NESTS<0 is equivalent to = 0
y_2020[y_2020 < 0] = 0
y_2020
QUEEN_predict['NESTS'] = y_2020
HEX = QUEEN_predict.loc[:,['municip_code','municip_name','NESTS']].groupby(by=['municip_code','municip_name'], as_index=False).sum()
###Output
_____no_output_____
###Markdown
Manual adjustments
###Code
HEX.loc[HEX.municip_code.isin([48022, 48071, 48088, 48074, 48051, 48020]), :]
HEX.loc[HEX.municip_code.isin([48022, 48071, 48088, 48074, 48051, 48020]), 'NESTS'] = 0
HEX.loc[HEX.municip_code.isin([48022, 48071, 48088, 48074, 48051, 48020]), :]
HEX.columns = ["CODIGO MUNICIPIO", "NOMBRE MUNICIPIO", "NIDOS 2020"] # change column names to Spanish (Competition template)
###Output
_____no_output_____
###Markdown
Verify dataset format
###Code
check_data(HEX)
###Output
(112, 3) : Shape is correct.
112 : Number of unique municipalities is correct.
Great! All municipalities have NESTS >= 0.
The Total 2020 Nests' Prediction is 2980
###Markdown
Export dataset for submission
###Code
HEX.to_csv('WaspBusters_20210526_OLSmonthsClustersGalore.csv', index=False)
###Output
_____no_output_____ |
python-a-7.ipynb | ###Markdown
The Jupyter Notebook programming environment
###Code
print('hello world!')
for i in range(10):
print(i)
###Output
0
1
2
3
4
5
6
7
8
9
###Markdown
Flowchart: Xiaoyu's family's electricity bill https://www.luogu.org/problem/P1422
###Code
n = eval(input())
if n<=150:
    f = 0.4463 * n # billing for 150 kWh and below
elif n<=400:
    f = 0.4463 * 150 + 0.4663 * (n - 150) # billing for 151-400 kWh
else:
    f = 0.4463 * 150 + 0.4663 * 250 + 0.5663 * (n - 400) # billing above 400 kWh
print('%.1f'%f) # %.1f keeps one decimal place
# Chickens and rabbits in the same cage
head, foot = eval(input()) # head: number of heads, foot: number of feet
flag = False # flag variable, records whether an answer has been found
for i in range(1,head+1): # i: number of chickens
    j = head - i # j: number of rabbits
if i*2+j*4==foot:
print(i,j)
        flag = True # answer found, set the flag variable
if flag==False:
print('你数错了')
# Hundred coins, hundred chickens: roosters cost 5 yuan each, hens 3 yuan each, and chicks are 3 for 1 yuan. Buy exactly 100 chickens with 100 yuan; how many roosters, hens and chicks are there?
for i in range(20): # number of roosters
    for j in range(33): # number of hens
k = 100 - i - j
if i*5+j*3+k/3==100:
print(i,j,k)
###Output
0 25 75
4 18 78
8 11 81
12 4 84
|
ML n DL for Programmers - Session IV.ipynb | ###Markdown
ML n DL for Programmers------------------------------- Session IV Improving the input tensor - Embeddings * One-hot encoding is not an ideal way to represent words/characters. * It doesn't capture semantic relationships. First method * Use the embedding layer provided by Keras. * This will be **version 4.**
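To get a feel for what an embedding layer does, here is a minimal, self-contained sketch (the vocabulary size, embedding dimension and input indices are arbitrary): it maps each integer index to a trainable dense vector.
###Code
from keras.models import Sequential
from keras.layers import Embedding
import numpy as np
# toy example: vocabulary of 10 symbols, 4-dimensional embeddings, sequences of length 3
toy_model = Sequential([Embedding(input_dim=10, output_dim=4, input_length=3)])
# one batch containing a single sequence of 3 indices -> output shape (1, 3, 4)
print(toy_model.predict(np.array([[1, 5, 9]])).shape)
###Output
_____no_output_____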
###Code
# Load training data
import gensim.downloader as api
from smart_open import smart_open
text8_path = api.load("text8", return_path=True)
text8_data = ""
with smart_open(text8_path, 'rb') as file:
for line in file:
line = line.decode('utf8')
text8_data += line
text8_data = text8_data.strip()
text8_data = text8_data[:1000000]
print(f'Length of Corpus: {len(text8_data)}')
# Prepare dictionaries
chars = sorted(list(set(text8_data)))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
print(f'unique chars: {len(chars)}')
import numpy as np
from tqdm import tqdm_notebook as tqdm
# prepare integer label for our text8_data
text8_data = [char_indices[char] for char in tqdm(text8_data)]
# Prepare training data
SEQUENCE_LENGTH = 30
STEP = 3
sentences = []
next_chars = []
for i in tqdm(range(0, len(text8_data)-SEQUENCE_LENGTH, STEP)):
sentences.append(text8_data[i:i+SEQUENCE_LENGTH])
next_chars.append(text8_data[i+SEQUENCE_LENGTH])
sentences = np.array(sentences)
next_chars = np.array(next_chars)
print(f'number of training sentences: {len(sentences)}')
print(f'2nd sentence: {sentences[2]}')
print(f'char after 2nd sentence: {next_chars[2]}')
print(f'3rd sentence: {sentences[3]}')
print(f'shape of sentences: {sentences.shape}')
print(f'shape of next_chars: {next_chars.shape}')
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, BatchNormalization, Dropout
model = Sequential()
model.add(Embedding(len(chars), 5, input_length=SEQUENCE_LENGTH, name='input_layer'))
model.add(LSTM(150, return_sequences=True))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(LSTM(100))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(40, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(len(chars), activation='softmax', name='output_layer'))
model.summary()
from keras.callbacks import EarlyStopping, ReduceLROnPlateau
from keras.utils import to_categorical
import pickle
early_stop = EarlyStopping(patience=5)
reduce_lr = ReduceLROnPlateau(factor=0.2, patience=3, verbose=1)
callbacks = [early_stop, reduce_lr]
# Convert labels (integers to categorical data), basically one-hot encode labels
next_chars = to_categorical(next_chars, len(chars))
# Train
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(sentences, next_chars, validation_split=0.1, batch_size=64, epochs=50, callbacks=callbacks, shuffle=True)
#save model and its history
model.save('models/predictive_keyboard_v4.h5')
pickle.dump(history.history, open('models/history_pk_v4.p', 'wb'))
# load model back again
from keras.models import load_model
model = load_model('models/predictive_keyboard_v4.h5')
history = pickle.load(open("models/history_pk_v4.p", "rb"))
import matplotlib.pyplot as plt
%matplotlib inline
# plot accuracy
plt.plot(history['acc'])
plt.plot(history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left');
# Prepare test data
test_data = text8_data[-50000:]
test_sentences = []
test_chars = []
for i in range(0, len(test_data)-SEQUENCE_LENGTH, STEP):
test_sentences.append(test_data[i:i+SEQUENCE_LENGTH])
test_chars.append(test_data[i+SEQUENCE_LENGTH])
print(f'number of test sentences: {len(test_sentences)}')
print(f'2nd sentence: {test_sentences[2]}')
print(f'char after 2nd sentence: {test_chars[2]}')
print(f'3rd sentence: {test_sentences[3]}')
test_sentences = np.array(test_sentences)
test_chars = np.array(test_chars)
test_chars = to_categorical(test_chars, len(chars))
model.evaluate(test_sentences, test_chars)
# Post processing
import heapq
def prepare_input(text):
text = [[char_indices[char] for char in text]]
x = np.array(text)
return x
def sample(preds, top_n=3):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds)
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
return heapq.nlargest(top_n, range(len(preds)), preds.take)
def predict_completion(text):
original_text = text
generated = text
completion = ''
while True:
x = prepare_input(text)
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, top_n=1)[0]
next_char = indices_char[next_index]
text = text[1:] + next_char
completion += next_char
if len(original_text + completion) + 2 > len(original_text) and next_char == ' ':
return completion
def predict_completions(text, n=3):
x = prepare_input(text)
preds = model.predict(x, verbose=0)[0]
next_indices = sample(preds, n)
return [indices_char[idx] + predict_completion(text[1:] + indices_char[idx]) for idx in next_indices]
# Test model
test_sent = ["He told us a very exciting adventure story",
"She wrote him a long letter but he did not read it",
"The sky is clear black with shining stars",
"I am counting my calories yet I really want dessert",
"We need to rent a room for our party"
]
for sent in test_sent:
sent_4_NN = sent[:30].lower()
print(sent_4_NN)
print(predict_completions(sent_4_NN, 5))
print()
###Output
_____no_output_____ |
code/tests/completed_tests/pb_issues_guy.ipynb | ###Markdown
We're going to try fitting a full asymptotic relation to some simulated data. We'll do Gaussian noise cos it makes my life easier
###Code
import numpy as np
import matplotlib.pyplot as plt
import theano.tensor as tt
import lightkurve as lk
from astropy.units import cds
from astropy import units as u
import seaborn as sns
import corner
import pystan
import pandas as pd
import pickle
import glob
from astropy.io import ascii
import os
import pymc3 as pm
import arviz
###Output
_____no_output_____
###Markdown
Build the model
###Code
class model():
def __init__(self, f, n0_, n1_, n2_):
self.f = f
self.n0 = n0_
self.n1 = n1_
self.n2 = n2_
self.npts = len(f)
self.M = [len(n0_), len(n1_), len(n2_)]
def epsilon(self, i, theano=True):
eps = tt.zeros((3,3))
eps0 = tt.set_subtensor(eps[0][0], 1.)
eps1 = tt.set_subtensor(eps[1][0], tt.cos(i)**2)
eps1 = tt.set_subtensor(eps1[1], 0.5 * tt.sin(i)**2)
eps2 = tt.set_subtensor(eps[2][0], 0.25 * (3. * tt.cos(i)**2 - 1.)**2)
eps2 = tt.set_subtensor(eps2[1], (3./8.)*tt.sin(2*i)**2)
eps2 = tt.set_subtensor(eps2[2], (3./8.) * tt.sin(i)**4)
eps = tt.set_subtensor(eps[0], eps0)
eps = tt.set_subtensor(eps[1], eps1)
eps = tt.set_subtensor(eps[2], eps2)
if not theano:
return eps.eval()
return eps
def lor(self, freq, h, w):
return h / (1.0 + 4.0/w**2*(self.f - freq)**2)
def mode(self, l, freqs, hs, ws, eps, split=0):
for idx in range(self.M[l]):
for m in range(-l, l+1, 1):
self.modes += self.lor(freqs[idx] + (m*split),
hs[idx] * eps[l,abs(m)],
ws[idx])
def model(self, p, theano=True):
f0, f1, f2, g0, g1, g2, h0, h1, h2, split, i, b = p
# Calculate the modes
eps = self.epsilon(i, theano)
self.modes = np.zeros(self.npts)
self.mode(0, f0, h0, g0, eps)
self.mode(1, f1, h1, g1, eps, split)
self.mode(2, f2, h2, g2, eps, split)
#Create the model
self.mod = self.modes + b
return self.mod
def asymptotic(self, n, numax, deltanu, alpha, epsilon):
nmax = (numax / deltanu) - epsilon
over = (n + epsilon + ((alpha/2)*(nmax - n)**2))
return over * deltanu
def f0(self, p):
numax, deltanu, alpha, epsilon, d01, d02 = p
return self.asymptotic(self.n0, numax, deltanu, alpha, epsilon)
def f1(self, p):
numax, deltanu, alpha, epsilon, d01, d02 = p
f0 = self.asymptotic(self.n1, numax, deltanu, alpha, epsilon)
return f0 + d01
def f2(self, p):
numax, deltanu, alpha, epsilon, d01, d02 = p
f0 = self.asymptotic(self.n2+1, numax, deltanu, alpha, epsilon)
return f0 - d02
nmodes = 2 # Two overtones
nbase = 18 # Starting at n = 18
n0_ = np.arange(nmodes)+nbase
n1_ = np.copy(n0_)
n2_ = np.copy(n0_) - 1.
fs = .05 # Data has a frequency spacing of 0.05 microhertz
nyq = (0.5 * (1./58.6) * u.hertz).to(u.microhertz).value # Setting a sensible nyquist value
ff = np.arange(fs, nyq, fs) # Generating the full frequency range
###Output
_____no_output_____
###Markdown
Set the asymptotic parameters
###Code
deltanu_ = 60.
numax_= 1150.
alpha_ = 0.
epsilon_ = 0.
d01_ = deltanu_/2.
d02_ = 6.
###Output
_____no_output_____
###Markdown
Generate the model for the full range
###Code
mod = model(ff, n0_, n1_, n2_)
###Output
_____no_output_____
###Markdown
Calculate the predicted mode frequencies
###Code
init_f = [numax_, deltanu_, alpha_, epsilon_, d01_, d02_]
f0_ = mod.f0(init_f)
f1_ = mod.f1(init_f)
f2_ = mod.f2(init_f)
###Output
_____no_output_____
###Markdown
Slice up the data to just be around the mode frequencies
###Code
lo = f2_.min() - .25*deltanu_
hi = f1_.max() + .25*deltanu_
sel = (ff > lo) & (ff < hi)
f = ff[sel]
###Output
_____no_output_____
###Markdown
And now let's reset the model for the new frequency range...
###Code
mod = model(f, n0_, n1_, n2_)
###Output
_____no_output_____
###Markdown
Set up initial guesses for the model parameters
###Code
def gaussian(locs, l, numax, Hmax0):
fwhm = 0.25 * numax
std = fwhm/2.355
Vl = [1.0, 1.22, 0.71, 0.14]
return Hmax0 * Vl[l] * np.exp(-0.5 * (locs - numax)**2 / std**2)
init_m =[f0_, # l0 modes
f1_, # l1 modes
f2_, # l2 modes
np.ones(len(f0_)) * 2.0, # l0 widths
np.ones(len(f1_)) * 2.0, # l1 widths
np.ones(len(f2_)) * 2.0, # l2 widths
np.sqrt(gaussian(f0_, 0, numax_, 1000.) * 2.0 * np.pi / 2.0) ,# l0 heights
np.sqrt(gaussian(f1_, 1, numax_, 1000.) * 2.0 * np.pi / 2.0) ,# l1 heights
np.sqrt(gaussian(f2_, 2, numax_, 1000.) * 2.0 * np.pi / 2.0) ,# l2 heights
1., # splitting
np.pi/4., # inclination angle
1. # background parameters
]
# Add on the chisquare 2 dof noise
p = mod.model(init_m, theano=False)*np.random.chisquare(2., size=len(f))/2
###Output
_____no_output_____
###Markdown
Plot what our data looks like
###Code
plt.plot(f, p)
plt.plot(f, mod.model(init_m, theano=False), lw=3)
plt.show()
plt.show()
###Output
_____no_output_____
###Markdown
Fitting the model:
###Code
pm_model = pm.Model()
with pm_model:
epsilon = pm.Normal('epsilon', 0, 1)
dnu = pm.Normal('dnu', 60, 0.01)
d02 = pm.Normal('d02', 0.1, 0.01)
d01 = pm.Normal('d01', 0.5, 0.01)
f0 = pm.Normal('f0', (n0_ + epsilon) * dnu, 1.0, shape=2)
f1 = pm.Normal('f1', (n0_ + epsilon + d01) * dnu, 1.0, shape=2)
f2 = pm.Normal('f2', (n0_ + epsilon - d02) * dnu, 1.0, shape=2)
split = pm.Lognormal('split', np.log(0.4), 0.1)
i = 70.0
g = pm.Lognormal('g', np.log(1.0), 0.1, shape=2)
h = pm.Lognormal('h', np.log(50.0), 0.1, shape=2)
b = 1.0
fit = mod.model([f0, f1, f2, g, g, g, h, h, h, split, i, b])
like = pm.Gamma('like', alpha=1., beta=1./fit, observed=p)
'''
# I've left these below for you to use if you want because I trust these parameterizations
xsplit = pm.HalfNormal('xsplit', sigma=2.0, testval=init_m[9] * np.sin(init_m[10]))
cosi = pm.Uniform('cosi', 0., 1., testval=np.cos(init_m[10]))
i = pm.Deterministic('i', tt.arccos(cosi))
split = pm.Deterministic('split', xsplit/tt.sin(i))
b = pm.Bound(pm.Normal, lower=0.)('b', mu=1., sigma=.1, testval=1.)
fit = mod.model([f0, f1, f2, g0, g1, g2, h0, h1, h2, split, i, b])
like = pm.Gamma('like', alpha=1., beta=1./fit, observed=p)
'''
with pm_model:
inference = pm.ADVI()
approx = pm.fit(n=90000, method=inference)
start = dict(pm.summary(approx.sample(draws=1000)).mean(axis=1))
with pm_model:
trace = pm.sample(1000, start=start)
pm.summary(trace)
pm.summary(approx.sample(draws=1000))
###Output
_____no_output_____ |
notebooks/Kernel Trick.ipynb | ###Markdown
To demonstrate how the different kernel functions work in an SVM, let's take a non-trivial dataset and try to classify it with our model. * For that, we chose NASA's dataset: "Kepler Exoplanet Search Results". * In this dataset we have information about several planets detected by the Kepler telescope, whose mission is to find exoplanets (planets that orbit stars other than the Sun) across the universe. In the *kResult* column, we have the status of each planet ID: * It is an exoplanet, therefore CONFIRMED. * It is not an exoplanet, therefore FALSE POSITIVE. * It might be an exoplanet, therefore CANDIDATE. We are going to build a model that classifies whether a planet is / might be an exoplanet or not.
###Code
planetas.head(20)
###Output
_____no_output_____
###Markdown
We can clearly see, in the pair plots below, that the samples in our dataset are not at all trivial, nor linearly separable. Therefore, if we want to use the SVM to classify the data, we will have to test different kernel functions and tune them for better accuracy.
###Code
sns.pairplot(planetas.head(1000), vars = planetas.columns[8:14],hue = 'kResult')
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn import svm
yPlanetas = planetas['kResult'].copy()
xPlanetas = planetas.drop(['kResult','kName','kID','Not Transit-Like Flag','Stellar Eclipse Flag','Centroid Offset Flag','Ephemeris Match Indicates Contamination Flag'],axis = 1)
xTreino, xTeste, yTreino, yTeste = train_test_split(xPlanetas, yPlanetas, test_size=0.80, random_state=3)
###Output
_____no_output_____
###Markdown
Here we will create 4 models with different kernel functions (the *kernel* parameter) and train each of them with our training data, in order to finally choose the one with the most satisfactory accuracy.
###Code
modeloLinear = svm.SVC(kernel = 'linear')
modeloPoly = svm.SVC(kernel = 'poly')
modeloRBF = svm.SVC(kernel = 'rbf')
modeloSigmoid = svm.SVC(kernel = 'sigmoid')
###Output
_____no_output_____
###Markdown

###Code
modeloLinear.fit(xTreino,yTreino)
modeloPoly.fit(xTreino,yTreino)
modeloRBF.fit(xTreino,yTreino)
modeloSigmoid.fit(xTreino,yTreino)
###Output
_____no_output_____
###Markdown
Here we will show the "score" of each of our models, that is, how accurate the model was with respect to the real labels, and the coefficients of the decision function. * Note that the Linear, Polynomial and RBF models had very similar performance, which indicates the complexity of the dataset. Therefore, to improve the accuracy, we will have to manually test the parameters (or *coefficients*) of each of the models. * Also note that an average score around 60% indicates that our model is reasonably effective, already double the performance expected from a random model (a score around 30%). This demonstrates that the kernel trick is effective for handling extremely complex data such as the dataset used here.
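One standard way to carry out that parameter search (a sketch, not part of the original notebook; the parameter grid below is just an arbitrary starting point) is scikit-learn's GridSearchCV:
###Code
from sklearn.model_selection import GridSearchCV
# small illustrative grid over the regularization strength C and the RBF kernel width gamma
param_grid = {'C': [0.1, 1, 10], 'gamma': [1, 0.1, 0.01]}
grid_search = GridSearchCV(svm.SVC(kernel='rbf'), param_grid, cv=3)
grid_search.fit(xTreino, yTreino)
print(grid_search.best_params_, grid_search.best_score_)
###Output
_____no_output_____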
###Code
print(" Score = ",modeloLinear.score(xTeste,yTeste), "\n")
print(" Coeficientes da função de decisão: \n\n",modeloLinear.decision_function(xTeste))
print(" Score = ",modeloPoly.score(xTeste,yTeste), "\n")
print(" Coeficientes da função de decisão: \n\n",modeloPoly.decision_function(xTeste))
print(" Score = ",modeloRBF.score(xTeste,yTeste), "\n")
print(" Coeficientes da função de decisão: \n\n",modeloRBF.decision_function(xTeste))
print(" Score = ",modeloSigmoid.score(xTeste,yTeste), "\n")
print(" Coeficientes da função de decisão: \n\n",modeloSigmoid.decision_function(xTeste))
###Output
Score = 0.5010460251046025
Coeficientes da função de decisão:
[[ 0.81933557 -0.24431296 2.26569038]
[-0.19743269 2.22217393 0.88228737]
[ 2.06122758 -0.17444461 1.15535531]
...
[-0.21185122 2.16059007 1.14960189]
[-0.18608687 0.98948277 2.18817593]
[ 0.79995624 -0.28362208 2.29271369]]
|
casing3D/SimulationViewer.ipynb | ###Markdown
Load Results from a simulation and view them
###Code
import discretize
from discretize import utils
import numpy as np
import scipy.sparse as sp
from scipy.constants import mu_0
from SimPEG.EM import FDEM
from SimPEG import Utils, Maps
import CasingSimulations
from pymatsolver import Pardiso
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____ |
Summer_Training_Linear_Regression.ipynb | ###Markdown
###Code
#import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#import dataset
dataset = pd.read_csv("/content/salaryData.csv")
print(dataset)
dataset.head()
print(dataset.tail())
dataset.shape
dataset.info()
dataset.mean()
dataset.describe()
dataset.iloc[2:6]
dataset.nunique()
###Output
_____no_output_____
###Markdown
Visualising using Scatter Plot
###Code
x=dataset['YearsExperience']
y=dataset['Salary']
plt.xlabel('YearsExperience')
plt.ylabel('Salary')
plt.scatter(x,y,color='red',marker='+')
plt.show()
###Output
_____no_output_____
###Markdown
Dividing data into Test Set and Train Set
###Code
x=dataset.iloc[:,:-1].values
y=dataset.iloc[:,1].values
import sklearn
from sklearn.model_selection import train_test_split
xtrain,xtest,ytrain,ytest=train_test_split(x,y,test_size=1/3,random_state=1)
###Output
_____no_output_____
###Markdown
creating linear model
###Code
from sklearn.linear_model import LinearRegression
model = LinearRegression() # simple linear regression: y = m*x + c
model.fit(xtrain,ytrain)
###Output
_____no_output_____
###Markdown
Prediction
###Code
y_predict=model.predict(xtest)
###Output
_____no_output_____
###Markdown
y_predict holds the values predicted by the model; compare them with ytest to see the difference
###Code
ytest
y_predict
model.predict([[9]])
model.predict([[2]])
model.coef_
model.intercept_
#y = mx+c
9158.1391*2+26137.2400
plt.scatter(xtrain,ytrain,color='red')
plt.plot(xtrain,model.predict(xtrain))
plt.show()
###Output
_____no_output_____ |
0.9/_downloads/755e9d2519246ba8f353758537e0e3bd/visualizing-results.ipynb | ###Markdown
Visualizing optimization results Tim Head, August 2016. Reformatted by Holger Nahrstaedt 2020. .. currentmodule:: skopt Bayesian optimization or sequential model-based optimization uses a surrogate model to model the expensive-to-evaluate objective function `func`. It is this model that is used to determine at which points to evaluate the expensive objective next. To help understand why the optimization process is proceeding the way it is, it is useful to plot the location and order of the points at which the objective is evaluated. If everything is working as expected, early samples will be spread over the whole parameter space and later samples should cluster around the minimum. The :class:`plots.plot_evaluations` function helps with visualizing the location and order in which samples are evaluated for objectives with an arbitrary number of dimensions. The :class:`plots.plot_objective` function plots the partial dependence of the objective, as represented by the surrogate model, for each dimension and as pairs of the input dimensions. All of the minimizers implemented in `skopt` return an [`OptimizeResult`]() instance that can be inspected. Both :class:`plots.plot_evaluations` and :class:`plots.plot_objective` are helpers that do just that
###Code
print(__doc__)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Toy models We will use two different toy models to demonstrate how :class:`plots.plot_evaluations` works. The first model is the :class:`benchmarks.branin` function which has two dimensions and three minima. The second model is the `hart6` function which has six dimensions, which makes it hard to visualize. This will show off the utility of :class:`plots.plot_evaluations`.
###Code
from skopt.benchmarks import branin as branin
from skopt.benchmarks import hart6 as hart6_
# redefined `hart6` to allow adding arbitrary "noise" dimensions
def hart6(x):
return hart6_(x[:6])
###Output
_____no_output_____
###Markdown
Starting with `branin` To start, let's take advantage of the fact that :class:`benchmarks.branin` is a simple function which can be visualised in two dimensions.
###Code
from matplotlib.colors import LogNorm
def plot_branin():
fig, ax = plt.subplots()
x1_values = np.linspace(-5, 10, 100)
x2_values = np.linspace(0, 15, 100)
x_ax, y_ax = np.meshgrid(x1_values, x2_values)
vals = np.c_[x_ax.ravel(), y_ax.ravel()]
fx = np.reshape([branin(val) for val in vals], (100, 100))
cm = ax.pcolormesh(x_ax, y_ax, fx,
norm=LogNorm(vmin=fx.min(),
vmax=fx.max()),
cmap='viridis_r')
minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]])
ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14,
lw=0, label="Minima")
cb = fig.colorbar(cm)
cb.set_label("f(x)")
ax.legend(loc="best", numpoints=1)
ax.set_xlabel("$X_0$")
ax.set_xlim([-5, 10])
ax.set_ylabel("$X_1$")
ax.set_ylim([0, 15])
plot_branin()
###Output
_____no_output_____
###Markdown
Evaluating the objective function Next we use an extra-trees based minimizer to find one of the minima of the :class:`benchmarks.branin` function. Then we visualize at which points the objective is being evaluated using :class:`plots.plot_evaluations`.
###Code
from functools import partial
from skopt.plots import plot_evaluations
from skopt import gp_minimize, forest_minimize, dummy_minimize
bounds = [(-5.0, 10.0), (0.0, 15.0)]
n_calls = 160
forest_res = forest_minimize(branin, bounds, n_calls=n_calls,
base_estimator="ET", random_state=4)
_ = plot_evaluations(forest_res, bins=10)
###Output
_____no_output_____
###Markdown
:class:`plots.plot_evaluations` creates a grid of size `n_dims` by `n_dims`. The diagonal shows histograms for each of the dimensions. In the lower triangle (just one plot in this case) a two dimensional scatter plot of all points is shown. The order in which points were evaluated is encoded in the color of each point. Darker/purple colors correspond to earlier samples and lighter/yellow colors correspond to later samples. A red point shows the location of the minimum found by the optimization process. You should be able to see that points start clustering around the location of the true minimum. The histograms show that the objective is evaluated more often at locations near to one of the three minima. Using :class:`plots.plot_objective` we can visualise the one dimensional partial dependence of the surrogate model for each dimension. The contour plot in the bottom left corner shows the two dimensional partial dependence. In this case this is the same as simply plotting the objective as it only has two dimensions. Partial dependence plots Partial dependence plots were proposed by [Friedman (2001)] as a method for interpreting the importance of input features used in gradient boosting machines. Given a function of $k$ variables $y=f\left(x_1, x_2, ..., x_k\right)$, the partial dependence of $f$ on the $i$-th variable $x_i$ is calculated as: $\phi\left( x_i \right) = \frac{1}{N} \sum^N_{j=0}f\left(x_{1,j}, x_{2,j}, ..., x_i, ..., x_{k,j}\right)$, with the sum running over a set of $N$ points drawn at random from the search space. The idea is to visualize how the value of $x_j$ influences the function $f$ after averaging out the influence of all other variables.
###Code
from skopt.plots import plot_objective
_ = plot_objective(forest_res)
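# --- Added sketch (not part of the original example): partial dependence "by hand" ---
# To make the formula above concrete, this roughly estimates the partial dependence of
# the final surrogate model on dimension 0 by averaging its predictions over random
# values of the other dimension. The use of forest_res.models[-1], forest_res.space.rvs
# and forest_res.space.transform reflects my understanding of the skopt result object
# and should be treated as an assumption rather than canonical API usage.
surrogate = forest_res.models[-1]                    # last fitted surrogate model
random_points = forest_res.space.rvs(n_samples=50)   # N random points from the search space
x0_grid = np.linspace(-5.0, 10.0, 25)                # grid over branin's first dimension
partial_dependence_x0 = []
for x0 in x0_grid:
    pts = [[x0, p[1]] for p in random_points]        # fix x_0, keep the random x_1 values
    preds = surrogate.predict(forest_res.space.transform(pts))
    partial_dependence_x0.append(np.mean(preds))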
###Output
_____no_output_____
###Markdown
The two dimensional partial dependence plot can look like the true objective but it does not have to. As points at which the objective function is being evaluated are concentrated around the suspected minimum, the surrogate model sometimes is not a good representation of the objective far away from the minima. Random sampling Compare this to a minimizer which picks points at random. There is no structure visible in the order in which it evaluates the objective. Because there is no model involved in the process of picking sample points at random, we can not plot the partial dependence of the model.
###Code
dummy_res = dummy_minimize(branin, bounds, n_calls=n_calls, random_state=4)
_ = plot_evaluations(dummy_res, bins=10)
###Output
_____no_output_____
###Markdown
Working in six dimensions Visualising what happens in two dimensions is easy; where :class:`plots.plot_evaluations` and :class:`plots.plot_objective` start to be useful is when the number of dimensions grows. They take care of many of the more mundane things needed to make good plots of all combinations of the dimensions. The next example uses :class:`benchmarks.hart6` which has six dimensions and shows both :class:`plots.plot_evaluations` and :class:`plots.plot_objective`.
###Code
bounds = [(0., 1.),] * 6
forest_res = forest_minimize(hart6, bounds, n_calls=n_calls,
base_estimator="ET", random_state=4)
_ = plot_evaluations(forest_res)
_ = plot_objective(forest_res, n_samples=40)
###Output
_____no_output_____
###Markdown
Going from 6 to 6+2 dimensions To make things more interesting let's add two dimensions to the problem. As :class:`benchmarks.hart6` only depends on six dimensions, we know that for this problem the new dimensions will be "flat" or uninformative. This is clearly visible in both the placement of samples and the partial dependence plots.
###Code
bounds = [(0., 1.),] * 8
n_calls = 200
forest_res = forest_minimize(hart6, bounds, n_calls=n_calls,
base_estimator="ET", random_state=4)
_ = plot_evaluations(forest_res)
_ = plot_objective(forest_res, n_samples=40)
# .. [Friedman (2001)] `doi:10.1214/aos/1013203451 section 8.2 <http://projecteuclid.org/euclid.aos/1013203451>`
###Output
_____no_output_____ |
theme_3/mission_activity/national_mission_activity.ipynb | ###Markdown
Indicator calculations Each of these functions is assumed to take the form

```python
def _an_indicator_calculation(data, year=None, _max=1):
    """
    A function calculating an indicator.

    Args:
        data (list): Rows of data
        year (int): A year to consider, if applicable.
        _max (int): Divide by this to normalise your results.
                    This is automatically applied in :obj:`make_activity_plot`
    Returns:
        result (list) A list of indicators to plot. The length of the list
        is assumed to be equal to the number of countries.
    """
    # Calculate something
```
###Code
def _total_activity_by_country(data, year=None, _max=1):
"""
Indicator: Sum of relevance scores, by year (if specified) or in total.
"""
if year is None:
scores = [sum(row.values())/_max for row in data]
else:
scores = [row[year]/_max for row in data]
return scores
def _average_activity_by_country(data, year=None, _max=1):
"""
Indicator: Mean relevance score. This function is basically a lambda, since it assumes the average has already been calculated.
"""
return [row/_max for row in data]
def _corrected_average_activity_by_country(data, year=None, _max=1):
"""
    Indicator: Mean relevance score minus its (very) approximate Poisson error.
"""
return [(row - np.sqrt(row))/_max for row in data]
def _linear_coeffs(years, scores, _max):
"""Calculates linear coefficients for scores wrt years"""
return [np.polyfit(_scores, _years, 1)[0]/_max
if all(v > 0 for v in _scores) else 0
for _years, _scores in zip(years, scores)]
def _trajectory(data, year=None, _max=1):
"""
Indicator: Linear coefficient of total relevance score wrt year
"""
years = [list(row.keys()) for row in data]
scores = [list(row.values()) for row in data]
return _linear_coeffs(years, scores, _max)
def _corrected_trajectory(data, year=None, _max=1):
"""
Indicator: Linear coefficient of upper and lower limits of relevance score wrt year
"""
# Reformulate the data in terms of upper and lower bounds
years, scores = [], []
for row in data:
_years, _scores = [], []
for k, v in row.items():
_years += [k,k]
_scores += [v - np.sqrt(v), v + np.sqrt(v)] # Estimate upper and lower limits with very approximate Poisson errors
years.append(_years)
scores.append(_scores)
return _linear_coeffs(years, scores, _max)
###Output
_____no_output_____
###Markdown
Plotting functionality
###Code
class _Sorter:
def __init__(self, values, topn=None):
if topn is None:
topn = len(values)
self.indices = list(np.argsort(values))[-topn:] # Argsort is ascending, so -ve indexing to pick up topn
def sort(self, x):
"""Sort list x by indices"""
return [x[i] for i in self.indices]
def _s3_savefig(query, fig_name, extension='png'):
"""Save the figure to s3. The figure is grabbed from the global scope."""
if not SAVE_RESULTS:
return
outname = (f'figures/{SAVE_PATH}/'
f'{query.replace(" ","_").lower()}'
f'/{fig_name.replace(" ","_").lower()}'
f'.{extension}')
with io.BytesIO() as f:
plt.savefig(f, bbox_inches='tight', format=extension, pad_inches=0)
obj = S3.Object(BUCKET, outname)
f.seek(0)
obj.put(Body=f)
def _s3_savetable(data, key, index, object_path, transformer=lambda x: x):
"""Upload the table to s3"""
if not SAVE_RESULTS:
return
df = pd.DataFrame(transformer(data[key]), index=index)
if len(df.columns) == 1:
df.columns = ['value']
df = df / df.max().max()
table_data = df.to_csv().encode()
obj = S3.Object(BUCKET, os.path.join(f'tables/{SAVE_PATH}', object_path))
obj.put(Body=table_data)
def make_activity_plot(f, data, countries, max_query_terms, query,
year=None, label=None, x_padding=0.5, y_padding=0.05, xlabel_fontsize=14):
"""
    Generate an indicator by country with the supplied indicator function and plot it,
    saving the figure to S3.
Args:
f: An indicator function, as described in the 'Indicator calculations' section.
data (dict): {max_query_terms --> [{year --> sum_score} for each country]}
countries (list): A list of EU ISO-2 codes
max_query_terms (list): Triple of max_query_terms for clio, corresponding to low, middle and high values of
max_query_terms to test robustness of the query.
query (str): query used to generate this data.
year (int): Year to generate the indicator for (if applicable).
label (str): label for annotating the plot.
{x,y}_padding (float): Aesthetic padding around the extreme limits of the {x,y} axis.
xlabel_fontsize (int): Fontsize of the x labels (country ISO-2 codes).
"""
# Calculate the indicator for each value of n, then recalculate the normalised indicator
_, middle, _ = (f(data[n], year=year) for n in max_query_terms)
low, middle, high = (f(data[n], year=year, _max=max(middle)) for n in max_query_terms)
indicator = [np.median([a, b, c]) for a, b, c in zip(low, middle, high)]
# Sort all data by indicator value
s = _Sorter(indicator)
countries = s.sort(countries)
low = s.sort(low)
middle = s.sort(middle)
high = s.sort(high)
indicator = s.sort(indicator)
# Make the scatter plot
fig, ax = plt.subplots(figsize=(15, 6))
make_error_boxes(ax, low, middle, high) # Draw the bounding box
ax.scatter(countries, indicator, s=0, marker='o', color='black') # Draw the centre mark
ax.set_title(f'{label}\nQuery: "{query}"')
ax.set_ylabel(label)
# Set limits and formulate
y0 = min(low+middle+high)
y1 = max(low+middle+high)
if -y1*y_padding < y0:
y0 = -y1*y_padding
else: # In case of negative values
y0 = y0 - np.abs(y0*y_padding)
ax.set_ylim(y0, y1*(1+y_padding))
ax.set_xlim(-x_padding, len(countries)-x_padding)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(xlabel_fontsize)
# Save to s3 & return
_s3_savefig(query, label)
return ax
def make_error_boxes(ax, low, middle, high, facecolor='r',
edgecolor='None', alpha=0.5):
"""
Generate outer rectangles based on three values, and draw a horizontal line through the middle of the rectangle.
No assumption is made on the order of values, so don't worry if they're not properly ordered.
Args:
ax (matplotlib.axis): An axis to add patches to.
{low, middle, high} (list): Three concurrent lists of values from which to calculate the rectangle limits.
{facecolor, edgecolor} (str): The {face,edge} colour of the rectangles.
alpha (float): The alpha of the rectangles.
"""
# Generate the rectangle
errorboxes = []
middlelines = []
for x, ys in enumerate(zip(low, middle, high)):
rect = Rectangle((x - 0.45, min(ys)), 0.9, max(ys) - min(ys))
line = Rectangle((x - 0.45, np.median(ys)), 0.9, 0)
errorboxes.append(rect)
middlelines.append(line)
# Create patch collection with specified colour/alpha
pc = PatchCollection(errorboxes, facecolor=facecolor, alpha=alpha, edgecolor=edgecolor, hatch='/')
lc = PatchCollection(middlelines, facecolor='black', alpha=0.9, edgecolor='black')
# Add collection to axes
ax.add_collection(pc)
ax.add_collection(lc)
def stacked_scores(all_scores, query, topn=8,
low_bins=[10**i for i in np.arange(0, 1.1, 0.025)],
high_bins=[10**i for i in np.arange(1.1, 2.5, 0.05)],
x_scale='log', label='Relevance score breakdown',
xlabel='Relevance score', ylabel='Number of relevant documents',
legend_fontsize='small', legend_cols=2):
"""
Create stacked histogram of document scores by country. Two sets of bins are used,
in order to have a more legible binning scale.
Args:
all_scores (dict): {max_query_terms --> {country --> [score for doc in docs] } }
query (str): query used to generate this data.
low_bins (list): List of initial bin edges.
high_bins (list): List of supplementary bin edges. These could have a different spacing scheme to the lower bin edges.
x_scale (str): Argument for `ax.set_xscale`.
label (str): label for annotating the plot.
{x,y}_label (str): Argument for `ax.set_{x,y}label`.
legend_fontsize (str): Argument for legend fontsize.
legend_cols (str): Argument for legend ncol.
"""
# Sort countries and scores by the sum of scores by country
countries = list(all_scores.keys())
scores = list(all_scores.values())
s = _Sorter([sum(v) for v in scores], topn=topn)
scores = s.sort(scores)
countries = s.sort(countries)
# Plot the stacked scores
fig, ax = plt.subplots(figsize=(10, 6))
plt.set_cmap(COLOR_MAP)
ax.hist(scores, bins=low_bins+high_bins, stacked=True,
label=countries, color=COLORS[:len(scores)])
# Prettify the plot
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.legend(fontsize=legend_fontsize, ncol=legend_cols)
ax.set_xlim(low_bins[0], None)
ax.set_xscale(x_scale)
ax.set_title(f'{label}\nQuery: "{query}"')
# Save to s3
_s3_savefig(query, label)
return ax
###Output
_____no_output_____
###Markdown
Bringing it all together
###Code
def generate_indicator(q, max_query_terms=[7, 10, 13], countries=EU_COUNTRIES, *args, **kwargs):
"""
Make a query and generate indicators by country, saving the plots to S3 and saving the rawest data
to tables on S3.
Args:
q (str): The query to Elasticsearch
max_query_terms (list): Triple of max_query_terms for clio, corresponding to low, middle and high values of
max_query_terms to test robustness of the query.
countries (list): A list of EU ISO-2 codes
Returns:
top_doc (dict): The highest ranking document from the search.
data (dict): {max_query_terms --> [{year --> sum_score} for each country]}
all_scores (dict): {max_query_terms --> {country --> [score for doc in docs] } }
"""
# Make the search and retrieve scores by country, and the highest ranking doc
example_doc, data, all_scores = make_search(q, max_query_terms=max_query_terms, countries=countries, *args, **kwargs)
# Reformat the scores to calculate the average
avg_scores = defaultdict(list)
for ctry in countries:
for n, _scores in all_scores.items():
mean = np.mean(_scores[ctry]) if len(_scores[ctry]) > 0 else 0
avg_scores[n].append(mean)
plot_kwargs = dict(countries=countries, max_query_terms=max_query_terms, query=q)
# Calculate loads of indicators and save the plots
_ = make_activity_plot(_total_activity_by_country, data, label='Total relevance score', **plot_kwargs)
_ = make_activity_plot(_average_activity_by_country, avg_scores, label='Average relevance', **plot_kwargs)
_ = make_activity_plot(_corrected_average_activity_by_country, avg_scores, label='Corrected average relevance', **plot_kwargs)
_ = make_activity_plot(_trajectory, data, label='Trajectory', **plot_kwargs)
_ = make_activity_plot(_corrected_trajectory, data, label='Corrected trajectory', **plot_kwargs)
_ = stacked_scores(all_scores[max_query_terms[1]], query=q)
# Save the basic raw data as tables. Note: not as rich as the plotted data.
_q = q.replace(" ","_").lower()
_s3_savetable(data, max_query_terms[1], index=countries, object_path=f'{_q}/LMA.csv')
_s3_savetable(avg_scores, max_query_terms[1], index=countries, object_path=f'{_q}/avg_LMA.csv')
plt.close('all') # Clean up the memory cache (unbelievable that matplotlib doesn't do this)
return example_doc, data, all_scores
###Output
_____no_output_____
###Markdown
Iterate over queries
###Code
for term in ["Adaptation to climate change, including societal transformation",
"Cancer",
"Climate-neutral and smart cities",
"Soil health and food"]:
print(term)
print("-"*len(term))
top_doc, data, all_scores = generate_indicator(term)
print(top_doc['title_of_article'], ",", top_doc['year_of_article'])
print(top_doc['terms_countries_article'])
print(top_doc['textBody_abstract_article'])
print("\n==============================\n")
###Output
Adaptation to climate change, including societal transformation
---------------------------------------------------------------
Validity of altmetrics data for measuring societal impact: A study using
data from Altmetric and F1000Prime , 2014
['DE']
Can altmetric data be validly used for the measurement of societal impact?
The current study seeks to answer this question with a comprehensive dataset
(about 100,000 records) from very disparate sources (F1000, Altmetric, and an
in-house database based on Web of Science). In the F1000 peer review system,
experts attach particular tags to scientific papers which indicate whether a
paper could be of interest for science or rather for other segments of society.
The results show that papers with the tag "good for teaching" do achieve higher
altmetric counts than papers without this tag - if the quality of the papers is
controlled. At the same time, a higher citation count is shown especially by
papers with a tag that is specifically scientifically oriented ("new finding").
The findings indicate that papers tailored for a readership outside the area of
research should lead to societal impact. If altmetric data is to be used for
the measurement of societal impact, the question arises of its normalization.
In bibliometrics, citations are normalized for the papers' subject area and
publication year. This study has taken a second analytic step involving a
possible normalization of altmetric data. As the results show there are
particular scientific topics which are of especial interest for a wide
audience. Since these more or less interesting topics are not completely
reflected in Thomson Reuters' journal sets, a normalization of altmetric data
should not be based on the level of subject categories, but on the level of
topics.
==============================
Cancer
------
A Statistical Approach to Identifying Significant Transgenerational
Methylation Changes , 2014
['GB', 'US', 'CH', 'CA', 'IE']
Epigenetic aberrations have profound effects on phenotypic output. Genome
wide methylation alterations are inheritable to pass down the aberrations
through multiple generations. We developed a statistical method, Genome-wide
Identification of Significant Methylation Alteration, GISAIM, to study the
significant transgenerational methylation changes. GISAIM finds the significant
methylation aberrations that are inherited through multiple generations. In a
concrete biological study, we investigated whether exposing pregnant rats (F0)
to a high fat (HF) diet throughout pregnancy or ethinyl estradiol
(EE2)-supplemented diet during gestation days 14 20 affects carcinogen-induced
mammary cancer risk in daughters (F1), granddaughters (F2) and
great-granddaughters (F3). Mammary tumorigenesis was higher in daughters and
granddaughters of HF rat dams, and in daughters, granddaughters and
great-granddaughters of EE2 rat dams. Outcross experiments showed that
increased mammary cancer risk was transmitted to HF granddaughters equally
through the female or male germlines, but is only transmitted to EE2
granddaughters through the female germline. Transgenerational effect on mammary
cancer risk was associated with increased expression of DNA methyltransferases,
and across all three EE2 generations hypo or hyper methylation of the same 375
gene promoter regions in their mammary glands. Our study shows that maternal
dietary estrogenic exposures during pregnancy can increase breast cancer risk
in multiple generations of offspring, and the increase in risk may be inherited
through non-genetic means, possibly involving DNA methylation.
==============================
Climate-neutral and smart cities
--------------------------------
Software-Defined and Virtualized Future Mobile and Wireless Networks: A
Survey , 2014
['CN', 'CH', 'GR']
With the proliferation of mobile demands and increasingly multifarious
services and applications, mobile Internet has been an irreversible trend.
Unfortunately, the current mobile and wireless network (MWN) faces a series of
pressing challenges caused by the inherent design. In this paper, we extend two
latest and promising innovations of Internet, software-defined networking and
network virtualization, to mobile and wireless scenarios. We first describe the
challenges and expectations of MWN, and analyze the opportunities provided by
the software-defined wireless network (SDWN) and wireless network
virtualization (WNV). Then, this paper focuses on SDWN and WNV by presenting
the main ideas, advantages, ongoing researches and key technologies, and open
issues respectively. Moreover, we interpret that these two technologies highly
complement each other, and further investigate efficient joint design between
them. This paper confirms that SDWN and WNV may efficiently address the crucial
challenges of MWN and significantly benefit the future mobile and wireless
network.
==============================
Soil health and food
--------------------
GREEND: An Energy Consumption Dataset of Households in Italy and Austria , 2014
['AT']
Home energy management systems can be used to monitor and optimize
consumption and local production from renewable energy. To assess solutions
before their deployment, researchers and designers of those systems demand for
energy consumption datasets. In this paper, we present the GREEND dataset,
containing detailed power usage information obtained through a measurement
campaign in households in Austria and Italy. We provide a description of
consumption scenarios and discuss design choices for the sensing
infrastructure. Finally, we benchmark the dataset with state-of-the-art
techniques in load disaggregation, occupancy detection and appliance usage
mining.
==============================
|
LAB4/071_04_02.ipynb | ###Markdown
Task 2: Apply the algorithm on the breast cancer Wisconsin dataset, with one-hot encoding of the features and a 60%-40% train-test split
###Code
#Import scikit-learn dataset library
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
import pandas as pd
import numpy as np
#Load dataset
data = datasets.load_breast_cancer()
# print the names of the features
print("Features: ", data.feature_names)
# print the label type of breast cancer
print("\n class: \n",data.target_names)
# print data(feature)shape
print( "\n",data.data.shape)
#import the necessary module
from sklearn.model_selection import train_test_split
#split data set into train and test sets
data_train, data_test, target_train, target_test = train_test_split(data.data,data.target, test_size = 0.40, random_state = 71)
#Create a Decision Tree Classifier (using Gini)
cli = DecisionTreeClassifier(criterion='gini', max_leaf_nodes=100)
#Train the model using the training set
cli.fit(data_train, target_train)
# Predict the classes of test data
prediction=cli.predict(data_test)
#print(test_pred.dtype)
prediction.dtype
from sklearn import metrics
# Model Accuracy, how often is the classifier correct?
print("Accuracy :",metrics.accuracy_score(target_test,prediction))
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
precision = precision_score(target_test, prediction,average=None)
recall = recall_score(target_test, prediction,average=None)
print('precision: \n {}'.format(precision))
print('\n')
print('recall: \n {}'.format(recall))
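# --- Added sketch (not part of the original lab): a fuller evaluation summary ---
# A hedged example of reporting a confusion matrix and a per-class report for the same
# predictions; these extra metrics are an addition, not something Task 2 asks for.
from sklearn.metrics import confusion_matrix, classification_report
print('confusion matrix:\n', confusion_matrix(target_test, prediction))
print(classification_report(target_test, prediction, target_names=data.target_names))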
###Output
precision:
[0.85555556 0.95652174]
recall:
[0.92771084 0.91034483]
|
notebooks/mp-artificial-neuron.ipynb | ###Markdown
The McCulloch-Pitts Artificial Neuron Learning objectives 1. Understand the rationality and principles behind the creation of the McCulloch-Pitts model2. Identify the main elements of the McCulloch-Pitts model architecture3. Gain an intuitive understanding of the mathematics behind the McCulloch-Pitts model4. Develop a basic code implementation of the McCulloch-Pitts model5. Explore problems that can be solved with the McCulloch-Pitts model6. Identify limitations of the McCulloch-Pitts model Historical and theoretical background Alan Turing's formalization of computation as [Turing Machines](https://www.youtube.com/watch?v=dNRDvLACg5Q) provided the theoretical and mathematical foundations for modern computer science[1](fn1). Turing Machines are an abstraction of a general computation device. Turing (1937) described these machines as composed by an "infinite tape" made of "cells" (divided into squares), a "tape head", and a table with a finite set of instructions. Each cell contained a symbol, either a 0 or a 1, serving as information storage. The tape head can move along the tape, one cell at the time, to read the cell information. Then, according to a table of instructions at that state and the cell information, the tape head can erase information, write information, or do nothing, to then move to the next cell at the left or the right. In the next cell, the tape head would again do something according to the table of instructions and the cell information, to then repeat the process until the last instruction in the table of instructions. **Figure 1** shows a representation of a Turing machine. Figure 1 The particularities of Turing's description of Turing Machines are not relevant. You can envision a different way to implement the same general computing device. Indeed, alternative models of computation exist, such as "lambda calculus" and "cellular automata" (Fernández, 2009). The crucial part of Turing's proposal was the articulation of a machine capable to implement *any computable program*. The *computable* part of the last phrase is important because as Turing demonstrated, there are functions that can not be computed, like the [*Entscheidungsproblem*](https://en.wikipedia.org/wiki/Entscheidungsproblem) (a famous problem in mathematics formulated by [David Hilbert](https://en.wikipedia.org/wiki/David_Hilbert) in 1928), one of the problems that motivated Turing's work in the first place. The advances in computability theory inspired a new generation of researchers to develop computational models in the study of cognition. Among the pioneers were [Warren McCulloch](https://en.wikipedia.org/wiki/Warren_McCulloch) and [Walter Pitts](https://en.wikipedia.org/wiki/Walter_Pitts), who in 1943 proposed that biological neurons can be described as computational devices (McCulloch & Pitts, 1943). In a way, this is another iteration to the problem of describing a general-purpose computation device. This time, inspired by how biological neurons work, and with an architecture significantly different than Turing Machines. The heart of the McCulloch and Pitts idea is that given the *all-or-none* character of neural activity, the behavior of the nervous system can be described by means of *propositional logic*. To understand this, let's examine that main components of biological neurons (Purves et al., 2008):1. the **cell body** containing the nucleus and metabolic machinery2. an **axon** that transmit information via its synaptic terminals, and 1. 
the **dendrites** that receive inputs from other neurons via synapses. **Figure 2** shows an scheme of the main components of two neurons and their synapses. *Footnote* 1: a detalied examination of the Turin Machine is beyond the scope of this tutorial. For an extended explanation of Turing Machines see [here](https://plato.stanford.edu/entries/turing-machine/DefiTuriMach) Figure 2 Neurons communicate with each other by passing *electro-chemical signals* from the axon terminals in the pre-synaptic neuron to the dendrites in the post-synaptic neuron. Usually, each neuron connects to hundreds or thousands of neurons. For a neuron to "*fire*", certain voltage *threshold* must be passed. The *combined excitatory and inhibitory input* received by the post-synaptic neuron from the pre-synaptic neurons determines whether the neuron passes the threshold and fires. Is this *firing* or *spiking* behavior that McCulloch and Pitts modeled computationally. Furthermore, by carefully calibrating the combination of inhibitory and excitatory signals passed to a neuron, McCulloch and Pitts were able to emulate the behavior of a few *boolean functions* or *logical gates*, like the *AND* gate and the *OR* gate. Thinking in this process abstractly, neurons can be seen as biological computational devices, in the sense that they can receive inputs, apply calculations over those inputs algorithmically, and then produce outputs.The main elements of the McCulloch-Pitts model can be summarized as follow:1. Neuron activation is binary. A neuron either fire or not-fire2. For a neuron to fire, the weighted sum of inputs has to be equal or larger than a predefined threshold3. If one or more inputs are inhibitory the neuron will not fire4. It takes a fixed one time step for the signal to pass through a link 5. Neither the structure nor the weights change over timeMcCulloch and Pitts decided on this architecture based on what it was known at the time about the function of biological neurons. Naturally, they also wanted to abstract away most details and keep what they thought were the *fundamentals elements* to represent computation in biological neurons. Next, we will examine the formal definition of this model. Mathematical formalization McCulloch and Pitts developed a mathematical formulation know as *linear threshold gate*, which describes the activity of a single neuron with two states, *firing* or *not-firing*. In its simplest form, the mathematical formulation is as follows: $$Sum = \sum_{i=1}^NI_iW_i$$ $$y(Sum)=\begin{cases}1, & \text{if } Sum \geq T \\0, & \text{otherwise}\end{cases}$$ Where $I_1, I_2,..., I_N$ are binary input values $\in\{0,1\}$ ; $w_1, w_2,..., w_n$ are weights associated with each input $\in\{-1,1\}$ ; $Sum$ is the weighted sum of inputs; and $T$ is a predefined threshold value for the neuron activation (i.e., *firing*). **Figure 3** shows a graphical representation of the McCulloch-Pitts artificial neuron. Figure 3 An input is considered *excitatory* when its contribution to the weighted sum is positive, for instance $I_1*w_1 = 1 * 1 = 1$; whereas an input is considered *inhibitory* when its contribution to the weighted sum is negative, for instance $I_1*w_1 = 1 * -1 = -1$. If the value of $Sum$ is $\geq$ $T$, the neuron fires, otherwise, it does not. **Figure 4** shows a graphical representation of the threshold function. 
Figure 4 This is known as a *step-function*, where the $y$-axis encodes the activation-state of the neuron, and the $Sum$-axis encodes the output of the weighted sum of inputs.**Note**: It is important to highlight that the only role of the "weights" in the McCulloch-Pitts model, as presented here, is to determine whether the input is excitatory or inhibitory. If you are familiar with modern neural networks, this is a different role. In modern neural networks, weights have the additional role of *increasing* and *decreasing* the input values. From that perspective, the McCulloch-Pitts model is actually *unweighted*. Code implementation Implementing the McCulloch-Pitts artificial neuron in code is very simple thanks to the features offered by libraries of high-level programming languages that are available today. We can do this in four steps using `python` and `numpy`: Step 1: generate a vector of inputs and a vector of weights
###Code
import numpy as np
np.random.seed(seed=0)
I = np.random.choice([0,1], 3)# generate random vector I, sampling from {0,1}
W = np.random.choice([-1,1], 3) # generate random vector W, sampling from {-1,1}
print(f'Input vector:{I}, Weight vector:{W}')
###Output
Input vector:[0 1 1], Weight vector:[-1 1 1]
###Markdown
Step 2: compute the dot product between the vector of inputs and weights
###Code
dot = I @ W
print(f'Dot product: {dot}')
###Output
Dot product: 2
###Markdown
Step 3: define the threshold activation function
###Code
def linear_threshold_gate(dot: int, T: float) -> int:
'''Returns the binary threshold output'''
if dot >= T:
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Step 4: compute the output based on the threshold value
###Code
T = 1
activation = linear_threshold_gate(dot, T)
print(f'Activation: {activation}')
###Output
Activation: 1
###Markdown
In the previous example, the threshold was set to $T=1$. Since $Sum=2$, the neuron fires. If we increase the threshold for firing to $T=3$, the neuron will not fire.
###Code
T = 3
activation = linear_threshold_gate(dot, T)
print(f'Activation: {activation}')
###Output
Activation: 0
###Markdown
Application: boolean algebra using the McCulloch-Pitts artificial neuron Understanding how logical thinking works has been one of the main goals of cognitive scientists since the creation of the field. One way to approach the study of logical thinking is by building an artificial system able to perform logical operations. [*Truth tables*](https://en.wikipedia.org/wiki/Truth_table) are a schematic way to express the behavior of *boolean functions*, which are essentially logical operations. Here, we will use the McCulloch-Pitts model to replicate the behavior of a few boolean functions, as expressed in their respective truth tables. Notice that I'm using the term "function" to describe boolean logic, but you may find that the term "logic gate" is also widely used, particularly in the electronic circuits literature where this kind of function is fundamental. The AND Function The *AND* function is "activated" only when all the incoming inputs are "on", this is, it outputs a 1 only when all inputs are 1. In "neural" terms, the neuron *fires* when all the incoming signals are *excitatory*. On a more abstract level, think in a situation where you would decide that something is "true" or you would say "yes", depending on the value of some "conditions" or "variables". This relationship is expressed in **Table 1**. **Table 1**: Truth Table For AND Function| A | B | Output ||---|---|--------|| 0 | 0 | 0 || 0 | 1 | 0 || 1 | 0 | 0 || 1 | 1 | 1 | Now, imagine that you are deciding whether to watch a movie or not. In this simplified scenario, you would watch the movie *only if* the movie features Samuel L. Jackson AND the director is Quentin Tarantino. Now the truth table looks like this: **Table 2**: Movie Decision Table| Samuel L. Jackson | Quentin Tarantino | Watch the movie ||-------------------|-------------------|-----------------|| No | No | No || No | Yes | No || Yes | No | No || Yes | Yes | Yes | As we mentioned, the AND function can be implemented with the McCulloch-Pitts model. Each neuron has four parts: *inputs*, *weights*, *threshold*, and *output*. The *inputs* are given in the **Movie Decision Table**, and the *output* is completely determined by other elements, therefore, to create an AND function, we need to manipulate the *weights* and the *threshold*. Since we want the neuron to fire only when both inputs are excitatory, the threshold for activation must be 2. To obtain an output of 2, we need both inputs to be excitatory, therefore, the weights must be positive (i.e., 1). Summarizing, we need: - weights: all positive- threshold: 2Now, let's repeat the same four steps. Step 1: generate a vector of inputs and a vector of weights
###Code
# matrix of inputs
input_table = np.array([
[0,0], # both no
[0,1], # one no, one yes
[1,0], # one yes, one no
    [1,1] # both yes
])
print(f'input table:\n{input_table}')
# array of weights
weights = np.array([1,1])
print(f'weights: {weights}')
###Output
weights: [1 1]
###Markdown
Step 2: compute the dot product between the matrix of inputs and weights
###Code
# dot product matrix of inputs and weights
dot_products = input_table @ weights
print(f'Dot products: {dot_products}')
###Output
Dot products: [0 1 1 2]
###Markdown
**Note**: in case you are wondering why multiplying a 4x2 matrix by a 1x2 vector works, the answer is that `numpy` internally "broadcast" the smaller array to match the shape of the larger array. This means that the 1x2 is transformed into a 4x2 array where each new row replicates the values of the original 1x2 array. More on broadcasting [here](https://numpy.org/doc/1.18/user/basics.broadcasting.html). Step 3: define the threshold activation function We defined this already, so we will reuse our `linear_threshold_gate` function Step 4: compute the output based on the threshold value
###Code
T = 2
for i in range(0,4):
activation = linear_threshold_gate(dot_products[i], T)
print(f'Activation: {activation}')
###Output
Activation: 0
Activation: 0
Activation: 0
Activation: 1
###Markdown
As expected, only the last movie, with Samuel L. Jackson as an actor and Quentin Tarantino as director, resulted in the neuron firing. The OR Function The *OR* function is "activated" when *at least one* of the incoming inputs is "on". In "neural" terms, the neuron *fires* when at least one of the incoming signals is *excitatory*. This relationship is expressed in **Table 3**. **Table 3**: Truth Table For OR Function| A | B | Output ||---|---|--------|| 0 | 0 | 0 || 0 | 1 | 1 || 1 | 0 | 1 || 1 | 1 | 1 | Imagine that you decide to be flexible about your decision criteria. Now, you will watch the movie *if at least one* of your favorite stars, Samuel L. Jackson or Quentin Tarantino, is involved in the movie. Now, the truth table looks like this: **Table 4**: Movie Decision Table| Samuel L. Jackson | Quentin Tarantino | Watch the movie ||-------------------|-------------------|-----------------|| No | No | No || No | Yes | Yes || Yes | No | Yes || Yes | Yes | Yes | Since we want the neuron to fire when at least one of the inputs is excitatory, the threshold for activation must be 1. To obtain an output of at least 1, we need both inputs to be excitatory, therefore, the weights must be positive (i.e., 1). Summarizing, we need: - weights: all positive- threshold: 1Now, let's repeat the same four steps. Step 1: generate a vector of inputs and a vector of weights Neither the matrix of inputs nor the array of weights changes, so we can reuse our `input_table` and `weights` vector. Step 2: compute the dot product between the matrix of inputs and weights Since neither the matrix of inputs nor the vector of weights changes, the dot product of those stays the same. Step 3: define the threshold activation function We can use the `linear_threshold_gate` function again. Step 4: compute the output based on the threshold value
###Code
T = 1
for i in range(0,4):
activation = linear_threshold_gate(dot_products[i], T)
print(f'Activation: {activation}')
###Output
Activation: 0
Activation: 1
Activation: 1
Activation: 1
###Markdown
As you can probably appreciate by now, the only thing we needed to change was the `threshold` value, and the expected behavior is obtained. The NOR function The *OR* function is "activated" when *all* the incoming inputs are "off". In this sense, it is the inverse of the OR function. In "neural" terms, the neuron *fires* when all the signals are *inhibitory*. This relationship is expressed in **Table 5**. **Table 5**: Truth Table For OR Function| A | B | Output ||---|---|--------|| 0 | 0 | 1 || 0 | 1 | 0 || 1 | 0 | 0 || 1 | 1 | 0 | This time, imagine that you got saturated of watching Samuel L. Jackson and/or Quentin Tarantino movies, and you decide you only watch movies where both are absent. The presence of even one of them is unacceptable for you. The new truth table looks like this: Table 6: Movie Decision Table| Samuel L. Jackson | Quentin Tarantino | Watch the movie ||-------------------|-------------------|-----------------|| No | No | Yes || No | Yes | No || Yes | No | No || Yes | Yes | No | Since we want the neuron to fire only when both inputs are inhibitory, the threshold for activation must be 0. To obtain an output of 0, we need both inputs to be inhibitory, therefore, the weights must be negative (i.e., -1). Summarizing, we need: - weights: all negative- threshold: 0Now, let's repeat the steps. Step 1: generate a vector of inputs and a vector of weights The matrix of inputs remain the same, but we need a new vector of weights
###Code
# array of weights
weights = np.array([-1,-1])
print(f'weights: {weights}')
###Output
weights: [-1 -1]
###Markdown
Step 2: compute the dot product between the matrix of inputs and weights
###Code
# dot product matrix of inputs and weights
dot_products = input_table @ weights
print(f'Dot products: {dot_products}')
###Output
Dot products: [ 0 -1 -1 -2]
###Markdown
Step 3: define the threshold activation function The function remains the same. Step 4: compute the output based on the threshold value
###Code
T = 0
for i in range(0,4):
activation = linear_threshold_gate(dot_products[i], T)
print(f'Activation: {activation}')
###Output
Activation: 1
Activation: 0
Activation: 0
Activation: 0
|
3_Mod_WordEmbedding/Mod_Word_Embedding.ipynb | ###Markdown
PROJECT: SPEECH RECOGNITION TRANSLATION SYSTEM ADAPTED TO SMART GLASSES *** In the previous iteration, we performed a word-for-word translation using a set of sentences translated into both English and French, relying on the position of the words in the sentence. We found that this method was not convincing. In this step, we instead trained a deep learning algorithm based on the correspondence between the vector space of the source words and that of the target words. Thanks to the transparent words shared by the two languages, the algorithm can generate a correspondence matrix that allows all the words to be translated. Part II: Modeling - Iteration 2 This part reuses the reference dictionary as built previously.
###Code
import numpy as np
import pandas as pd
# Dataset used: DS2
data = pd.read_csv("en-fr.txt", sep = " ", names = ["Anglais", "Français"])
data.head()
uniq_eng = list(data.Anglais.unique())
print("Nbre de mots anglais uniques dans le 2ème dataset :", len(uniq_eng), "mots.\n")
uniq_fr = list(data.Français.unique())
print("Nbre de mots français uniques dans le 2ème dataset :", len(uniq_fr), "mots.")
# Build the reference dictionary (DS2): associate every possible French translation with each English
# word used as key
data_ord = data.groupby(['Anglais']).agg(lambda x : list(x)).reset_index()
dico_ref = {data_ord['Anglais'][i] : data_ord['Français'][i] for i in range(data_ord.shape[0])}
###Output
_____no_output_____
###Markdown
1. Creating the vectorization dictionaries for the English and French words This part of the code creates the vectorization dictionaries for the French and English words contained in our reference dataset (DS2), from pre-trained embedding matrices. Because of the execution time, the following code is run only once: the rest of the analysis relies on the csv files created in this part.
###Code
# Download the pre-trained embedding matrices
"""
from pathlib import Path
from urllib.request import urlretrieve
PATH_TO_DATA = Path()
fr_embeddings_path = PATH_TO_DATA / 'cc.fr.300.vec.gz'
if not fr_embeddings_path.exists():
urlretrieve('https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.fr.300.vec.gz', fr_embeddings_path)
en_embeddings_path = PATH_TO_DATA / 'cc.en.300.vec.gz'
if not en_embeddings_path.exists():
urlretrieve('https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.en.300.vec.gz', en_embeddings_path)
"""
###Output
_____no_output_____
###Markdown
The next cell contains the code that creates the Word2Vec() class: + The class initialization and loading methods use the matrices downloaded above to build an array of 2 million rows (the number of words included in the pre-trained matrices) and 301 columns, representing each word and its corresponding vector of size 300, from which the following elements are derived: + The word/index and index/word mappings (word2id and id2word methods) + The words and embeddings methods: each row of the matrix is split after the first element, giving the list of words in the matrix (words method) and an array of 2 million rows and 300 columns holding each vector (embeddings method) + The encode method returns, for the word passed as argument, the array of the corresponding vector if that word belongs to the pre-trained matrix; otherwise it returns a zero array of length 300.
###Code
import gzip
class Word2Vec():
def __init__(self, filepath):
self.words, self.embeddings = self.load_wordvec(filepath)
# Mappings for O(1) retrieval:
self.word2id = {word: idx for idx, word in enumerate(self.words)}
self.id2word = {idx: word for idx, word in enumerate(self.words)}
def load_wordvec(self, filepath):
assert str(filepath).endswith('.gz')
words = []
embeddings = []
with gzip.open(filepath, 'rt', encoding = 'utf-8') as f:
next(f) # Skip header
for i, line in enumerate(f):
word, vec = line.split(',', 1)
words.append(word)
embeddings.append(np.fromstring(vec, sep=','))
print('Loaded %s pretrained word vectors' % (len(words)))
return words, np.vstack(embeddings)
    def encode(self, word):
        '''
        Inputs:
            - word: a word (string)
        Output:
            - the embedding of the corresponding word
        '''
        if word in self.words:
            row = self.embeddings[self.word2id[word], :]
            return row
        else:
            return np.zeros(self.embeddings.shape[1])  # zero vector (same shape as a row) if the word is not in the dictionary
# Load the data and create the word/vector mappings
#fr_word2vec = Word2Vec(filepath=fr_embeddings_path)
#en_word2vec = Word2Vec(filepath=en_embeddings_path)
# Create the vectorization dictionaries for the words of dataset 2
#dico_eng = {}
#for i in uniq_eng:
#if i in en_word2vec.words:
#dico_eng.update({i : en_word2vec.encode(i)})
#dico_fr = {}
#for i in uniq_fr:
#if i in fr_word2vec.words:
#dico_fr.update({i : fr_word2vec.encode(i)})
# Create a file containing the dictionaries with the vectorizations of the English and French words of DS2
#pd.DataFrame.from_dict(data = dico_eng, orient = 'index').to_csv("dico_eng.csv", header = False)
#pd.DataFrame.from_dict(data = dico_fr, orient = 'index').to_csv("dico_fr.csv", header = False)
###Output
_____no_output_____
###Markdown
2. Correspondence matrix W and translation of the English words Loading the data of the vectorization dictionaries
###Code
# Read the vectorization dictionaries of the English and French words
df_eng = pd.read_csv("dico_eng.csv", header= None, index_col = 0)
dico_eng = {df_eng.index[i] : np.array(df_eng.iloc[i, :]) for i in range(df_eng.shape[0])}
df_fr = pd.read_csv("dico_fr.csv", header= None, index_col = 0)
dico_fr = {df_fr.index[i] : np.array(df_fr.iloc[i, :]) for i in range(df_fr.shape[0])}
# Create the list of vectorized English words
dico_eng_words = list(dico_eng.keys())
# Create the list of the corresponding vectors
dico_eng_embeds = []
for i in dico_eng.keys():
dico_eng_embeds.append(dico_eng[i])
# Create the list of vectorized French words
dico_fr_words = list(dico_fr.keys())
# Create the list of the corresponding vectors
dico_fr_embeds = []
for i in dico_fr.keys():
dico_fr_embeds.append(dico_fr[i])
###Output
_____no_output_____
###Markdown
Correspondence matrix W Once we have the vectorizations of all the words common to the two datasets, we create two embedding matrices X and Y containing the vectorizations of all the transparent words appearing in both languages. The goal is then to find the matrix W that projects the vector space of the source words (English in our case) onto the vector space of the target words (French in our case), so that words with the same meaning in the two languages have coordinates that are as close as possible. To find the matrix W, we use a formula that relies on orthogonality and matrix properties: $W^* = UV^T$ where $U\Sigma V^T = \mathrm{SVD}(YX^T)$, SVD being the singular value decomposition of a matrix.
###Code
# Create the list of transparent words (identical in the French and English dictionaries)
mots_transparents = [word for word in dico_eng if word in dico_fr]
print("Nbre de mots transparents :", len(mots_transparents), "mots.")
# Encode the transparent words
X, Y = np.empty([300,len(mots_transparents)]),np.empty([300,len(mots_transparents)])
for i, word in enumerate(mots_transparents):
X[:,i] = dico_eng[word]
Y[:,i] = dico_fr[word]
assert X.shape[0] == 300 and Y.shape[0] == 300
# Compute W, the correspondence matrix between English and French
U, sigma, Vtranspose = np.linalg.svd(Y.dot(X.T))
W = U.dot(Vtranspose)
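# --- Added check (not in the original notebook): the Procrustes-style solution W = U V^T
# should be (near-)orthogonal, i.e. W @ W.T close to the identity. This verification
# line is an illustrative addition.
print("W is orthogonal:", np.allclose(W.dot(W.T), np.eye(W.shape[0]), atol=1e-6))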
###Output
_____no_output_____
###Markdown
Translation function We now create a function that associates each English word with the k most similar French words (k being the number of translations proposed by the reference dictionary). To do so, we compute the cosine similarity metric, which we seek to maximize in order to propose the best possible translation.
###Code
# Create the English-to-French translation function
def get_closest_french_words(eng_word):
    '''
    Inputs:
        - eng_word: an English word
    Output:
        - returns the k closest French words for the translation
    '''
    # k is the number of translations proposed for the English word in the reference dictionary
    k = len(dico_ref[eng_word])
    # Get the vector of the selected English word
    eng_obj = dico_eng[eng_word]
    # Project the English word into the French vector space using the correspondence matrix W
    aligne_eng = W.dot(eng_obj.T)
    # Build the array containing all the French word vectors
    fr_embeds = np.array(dico_fr_embeds)
    # Compute the similarity between the English word and the French words with the cosine similarity metric
    norm_prod = np.linalg.norm(aligne_eng)*np.linalg.norm(fr_embeds, axis=1)
    scores = fr_embeds.dot(aligne_eng) / norm_prod
    # Retrieve the k best similarity scores
    best_k = np.flip(np.argsort(scores))[:k]
    # List containing the k closest French words for the English word and the list of their similarity scores
    return [[dico_fr_words[idx] for idx in best_k], [scores[idx] for idx in best_k]]
# Translation example
get_closest_french_words('car')[0]
# Create the translation dictionary that assigns to each English word its k possible translations
#from tqdm import tqdm
#dico_trad = {}
#for word in tqdm(dico_eng_words):
#dico_trad.update({word : get_closest_french_words(word)})
# Create a csv file so the resulting dictionary can be reloaded
#pd.DataFrame.from_dict(data = dico_trad, orient = 'index').to_csv("dico_trad.csv", header = False)
###Output
_____no_output_____
###Markdown
3. Scoring Once our translation dictionary has been created with word embeddings, we compare it with the reference translation dictionary. The score represents the percentage of similarity between the translations of the two dictionaries.
###Code
# Import the dataframe containing the translation dictionary
df_trad_imp = pd.read_csv("dico_trad.csv", header = None, index_col = 0)
# Reprocess the previous dataframe
carac = [",", "'", "[", "]"]
for i in range(df_trad_imp.shape[0]):
for c in df_trad_imp.iloc[i, 0]:
if c in carac:
df_trad_imp.iloc[i, 0] = df_trad_imp.iloc[i, 0].replace(c, " ")
df_trad_imp.iloc[i, 0] = df_trad_imp.iloc[i, 0].split()
# Create the scoring dataframe
df_trad = df_trad_imp.reset_index().iloc[:, 0:2]
df_trad.columns = ['Mot_anglais', 'Trad_func']
df_ref = pd.DataFrame(list(dico_ref.items()), columns = ['Mot_anglais', 'Trad_ref'])
df_score = pd.merge(df_ref, df_trad)
# Score computation: ratio between the number of translations shared by the reference dictionary and
#the created translation dictionary, and the number of reference translations
score = []
tot = 0
res = 0
for i in range(df_score.shape[0]):
for j in df_score.Trad_func[i]:
if j in df_score.Trad_ref[i]:
tot +=1
res = round(tot / len(df_score.Trad_ref[i]) * 100, 2)
tot = 0
score.append(res)
df_score['Score'] = score
df_score.head()
score_model = df_score.Score.mean()
print("Le sore est de :", score_model)
###Output
The score is: 46.90854424855996
###Markdown
Using a method with a fixed k = 5
###Code
#Create a dictionary containing the French words and their associated vectors
#for idx, i in enumerate(tqdm(uniq_fr)):
# if i in fr_word2vec.words:
# dico_fr.update({i : fr_word2vec.encode(i)})
##Create a dictionary containing the English words and their associated vectors
#for idx, i in enumerate(tqdm(uniq_eng)):
# if i in en_word2vec.words:
# dico_eng.update({i : en_word2vec.encode(i)})
#Export both dictionaries to csv files so they can be reused, since building them took quite some time
#pd.DataFrame.from_dict(data = dico_eng, orient = 'index').to_csv("dico_eng.csv", header = False)
#pd.DataFrame.from_dict(data = dico_fr, orient = 'index').to_csv("dico_fr.csv", header = False)
#From the two gzip-compressed csv files, we create two objects of the Word2Vec class
fr_word2vec2 = Word2Vec(filepath='dico_fr.csv.gz')
en_word2vec2 = Word2Vec(filepath='dico_eng.csv.gz')
# Get the words that appear in both vocabularies (words with identical character strings)
#mots_transparents = [word for word in fr_word2vec2.words if word in en_word2vec2.words]
# Encode our words: get the embeddings of each word
#X, Y = np.empty([300,len(mots_transparents)]),np.empty([300,len(mots_transparents)])
#for i, word in enumerate(mots_transparents) :
# X[:,i] = en_word2vec2.encode(word)
# Y[:,i] = fr_word2vec2.encode(word)
#
#assert X.shape[0] == 300 and Y.shape[0] == 300
#Compute the matrix W
U, sigma, Vtranspose = np.linalg.svd(Y.dot(X.T))
W = U.dot(Vtranspose)
W
#define a translation function
"""def get_closest_french_words(en_word, k):
en_obj = en_word2vec2.encode(en_word)
aligne_en = W.dot(en_obj.T)
fr_embeds = fr_word2vec2.embeddings
norm_prod = np.linalg.norm(aligne_en)*np.linalg.norm(fr_embeds, axis=1)
scores = fr_embeds.dot(aligne_en) / norm_prod
best_k = np.flip(np.argsort(scores))[:k]
return ([fr_word2vec2.words[idx] for idx in best_k], [scores[idx] for idx in best_k])
get_closest_french_words('cat', 5)"""
#create a translation dictionary mapping each English word to its 5 most probable French words
#from tqdm import tqdm
#dico_trad = {}
#for word in tqdm(en_word2vec2.words):
# dico_trad.update({word : get_closest_french_words(word, 5)})
#retrieve the source and target words, ignoring the probabilities
"""dico_trad2 = {}
for word in dico_trad.keys():
dico_trad2.update({word : dico_trad[word][0]})"""
#create a dataframe from the dico_trad2 dictionary
#df_trad = pd.DataFrame(list(dico_trad2.items()), columns = ['Mot_anglais', 'Trad_func'])
#df_trad.head()
#data_ord = data.groupby(['Anglais']).agg(lambda x : list(x)).reset_index()
#dico_ref = {data_ord['Anglais'][i] : data_ord['Français'][i] for i in range(data_ord.shape[0])}
#df_trad_csv = df_trad.to_csv('df_trad.csv')
df_trad2 = pd.read_csv('df_trad.csv', index_col = 0)
carac = [",", "'", "[", "]"]
for i in range(df_trad2.shape[0]):
for c in df_trad2.iloc[i, 1]:
if c in carac:
df_trad2.iloc[i, 1] = df_trad2.iloc[i, 1].replace(c, " ")
df_trad2.iloc[i, 1] = df_trad2.iloc[i, 1].split()
df_ref = pd.DataFrame(list(dico_ref.items()), columns = ['Mot_anglais', 'Trad_ref'])
df_score = pd.merge(df_ref, df_trad2)
score = []
tot = 0
res = 0
for i in range(df_score.shape[0]):
for j in df_score.Trad_func[i]:
if j in df_score.Trad_ref[i]:
tot +=1
res = round(tot / len(df_score.Trad_ref[i]) * 100, 2)
tot = 0
score.append(res)
df_score['Score'] = score
df_score.head()
score_model = df_score.Score.mean()
score_model
###Output
_____no_output_____ |
CNN/CNN-1.ipynb | ###Markdown
2020 Fall Final Project code1 (CNN Model 1st Try) Data Preprocessing
###Code
import os
import random
from shutil import copy2
import keras
keras.__version__
print('number of A:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/A')))
print('number of B:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/B')))
print('number of E:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/E')))
print('number of G:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/G')))
base_dir = '/Users/yixuanwang/Desktop/2020 F Final_Project'
train_dir = os.path.join(base_dir, 'train')
if not os.path.exists(train_dir):
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
if not os.path.exists(validation_dir):
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
if not os.path.exists(test_dir):
os.mkdir(test_dir)
train_a_dir = os.path.join(train_dir, 'A')
if not os.path.exists(train_a_dir):
os.mkdir(train_a_dir)
train_b_dir = os.path.join(train_dir, 'B')
if not os.path.exists(train_b_dir):
os.mkdir(train_b_dir)
train_e_dir = os.path.join(train_dir, 'E')
if not os.path.exists(train_e_dir):
os.mkdir(train_e_dir)
train_g_dir = os.path.join(train_dir, 'G')
if not os.path.exists(train_g_dir):
os.mkdir(train_g_dir)
test_a_dir = os.path.join(test_dir, 'A')
if not os.path.exists(test_a_dir):
os.mkdir(test_a_dir)
test_b_dir = os.path.join(test_dir, 'B')
if not os.path.exists(test_b_dir):
os.mkdir(test_b_dir)
test_e_dir = os.path.join(test_dir, 'E')
if not os.path.exists(test_e_dir):
os.mkdir(test_e_dir)
test_g_dir = os.path.join(test_dir, 'G')
if not os.path.exists(test_g_dir):
os.mkdir(test_g_dir)
validation_a_dir = os.path.join(validation_dir, 'A')
if not os.path.exists(validation_a_dir):
os.mkdir(validation_a_dir)
validation_b_dir = os.path.join(validation_dir, 'B')
if not os.path.exists(validation_b_dir):
os.mkdir(validation_b_dir)
validation_e_dir = os.path.join(validation_dir, 'E')
if not os.path.exists(validation_e_dir):
os.mkdir(validation_e_dir)
validation_g_dir = os.path.join(validation_dir, 'G')
if not os.path.exists(validation_g_dir):
os.mkdir(validation_g_dir)
A_dir = '/Users/yixuanwang/Desktop/2020 F Final_Project/A'
B_dir = '/Users/yixuanwang/Desktop/2020 F Final_Project/B'
E_dir = '/Users/yixuanwang/Desktop/2020 F Final_Project/E'
G_dir = '/Users/yixuanwang/Desktop/2020 F Final_Project/G'
num_A = len(os.listdir(A_dir))
num_B = len(os.listdir(B_dir))
num_E = len(os.listdir(E_dir))
num_G = len(os.listdir(G_dir))
A_all = os.listdir(A_dir)
B_all = os.listdir(B_dir)
E_all = os.listdir(E_dir)
G_all = os.listdir(G_dir)
index_list_a = list(range(num_A))
index_list_b = list(range(num_B))
index_list_e = list(range(num_E))
index_list_g = list(range(num_G))
random.shuffle(index_list_a)
random.shuffle(index_list_b)
random.shuffle(index_list_e)
random.shuffle(index_list_g)
num = 0
for i in index_list_a:
fileName = os.path.join(A_dir, A_all[i])
if num < num_A*0.6:
print(str(fileName))
copy2(fileName, train_a_dir)
elif num > num_A *0.6 and num < num_A*0.8:
copy2(fileName, test_a_dir)
else:
copy2(fileName, validation_a_dir)
num += 1
num = 0  # reset the counter so the 60/20/20 split is applied within this class
for i in index_list_b:
fileName = os.path.join(B_dir, B_all[i])
if num < num_B*0.6:
print(str(fileName))
copy2(fileName, train_b_dir)
elif num > num_B *0.6 and num < num_B*0.8:
copy2(fileName, test_b_dir)
else:
copy2(fileName, validation_b_dir)
num += 1
num = 0  # reset the counter so the 60/20/20 split is applied within this class
for i in index_list_e:
fileName = os.path.join(E_dir, E_all[i])
if num < num_E*0.6:
print(str(fileName))
copy2(fileName, train_e_dir)
elif num > num_E *0.6 and num < num_E*0.8:
copy2(fileName, test_e_dir)
else:
copy2(fileName, validation_e_dir)
num += 1
num = 0  # reset the counter so the 60/20/20 split is applied within this class
for i in index_list_g:
fileName = os.path.join(G_dir, G_all[i])
if num < num_G*0.6:
print(str(fileName))
copy2(fileName, train_g_dir)
elif num > num_G *0.6 and num < num_G*0.8:
copy2(fileName, test_g_dir)
else:
copy2(fileName, validation_g_dir)
num += 1
print('number of training a:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/train/A')))
print('number of training b:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/train/B')))
print('number of training e:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/train/E')))
print('number of training g:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/train/G')))
print('number of testing a:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/test/A')))
print('number of testing b:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/test/B')))
print('number of testing e:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/test/E')))
print('number of testing g:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/test/G')))
print('number of validation a:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/validation/A')))
print('number of validation b:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/validation/B')))
print('number of validation e:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/validation/E')))
print('number of validation g:',len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/validation/G')))
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
'/Users/yixuanwang/Desktop/2020 F Final_Project/train',
batch_size=32,
class_mode='categorical',
color_mode = 'grayscale')
test_generator = train_datagen.flow_from_directory(
'/Users/yixuanwang/Desktop/2020 F Final_Project/test',
batch_size=32,
class_mode='categorical',
color_mode = 'grayscale')
validation_generator = train_datagen.flow_from_directory(
'/Users/yixuanwang/Desktop/2020 F Final_Project/validation',
batch_size=32,
class_mode='categorical',
color_mode = 'grayscale')
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
import matplotlib.pyplot as plt
plt.figure(figsize=(10,10))
plt.subplot(3,3,1)
plt.imshow(data_batch[5].reshape(256,256),cmap='gray')
plt.subplot(3,3,2)
plt.imshow(data_batch[10].reshape(256,256),cmap='gray')
plt.subplot(3,3,3)
plt.imshow(data_batch[15].reshape(256,256),cmap='gray')
plt.subplot(3,3,4)
plt.imshow(data_batch[20].reshape(256,256),cmap='gray')
plt.subplot(3,3,5)
plt.imshow(data_batch[1].reshape(256,256),cmap='gray')
plt.subplot(3,3,6)
plt.imshow(data_batch[6].reshape(256,256),cmap='gray')
plt.subplot(3,3,7)
plt.imshow(data_batch[9].reshape(256,256),cmap='gray')
plt.subplot(3,3,8)
plt.imshow(data_batch[14].reshape(256,256),cmap='gray')
plt.subplot(3,3,9)
plt.imshow(data_batch[21].reshape(256,256),cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
CNN
###Code
from keras import layers
from keras import models
from keras.layers.core import Dropout
from IPython.display import Image
from keras.utils.vis_utils import model_to_dot
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(256, 256, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(Dropout(0.25))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(4, activation='softmax'))
model.summary()
from keras import optimizers
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
G = model_to_dot (model)
Image (G.create (prog = "dot", format = "jpg"))
history = model.fit_generator(
train_generator,
steps_per_epoch=50,
epochs=20,
validation_data=validation_generator,
validation_steps=50)
model.save('lung.h5')
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'b-', label='Training acc')
plt.plot(epochs, val_acc, 'r-', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'b-', label='Training loss')
plt.plot(epochs, val_loss, 'r-', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
model.evaluate(test_generator)
from keras.models import load_model
model_trained = load_model('/Users/yixuanwang/Desktop/2020 F Final_Project/lung.h5')
model_trained
aaa=len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/A'))
bbb=len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/B'))
eee=len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/E'))
ggg=len(os.listdir('/Users/yixuanwang/Desktop/2020 F Final_Project/G'))
aaa
bbb
eee
ggg
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams['font.sans-serif'] = ['SimHei']
matplotlib.rcParams['axes.unicode_minus'] = False
price = [aaa,bbb,eee,ggg]
"""
绘制水平条形图方法barh
参数一:y轴
参数二:x轴
"""
plt.barh(range(4), price, height=0.7, color='steelblue', alpha=0.8) # 从下往上画
plt.yticks(range(5), ['A', 'B', 'E', 'G'])
# plt.xlim(30,47)
plt.xlabel("Number")
plt.title("Number of Each Class Lung Cancer")
for x, y in enumerate(price):
plt.text(y + 0.2, x - 0.1, '%s' % y)
plt.show()
price
###Output
_____no_output_____ |
backup/P2-Copy1.ipynb | ###Markdown
Advanced Lane Finding Project

The goals / steps of this project are the following:

* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
* Apply a distortion correction to raw images.
* Use color transforms, gradients, etc., to create a thresholded binary image.
* Apply a perspective transform to rectify binary image ("birds-eye view").
* Detect lane pixels and fit to find the lane boundary.
* Determine the curvature of the lane and vehicle position with respect to center.
* Warp the detected lane boundaries back onto the original image.
* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

---

First, I'll compute the camera calibration using chessboard images
###Code
#importing some useful packages
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import pickle
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
#%matplotlib qt
%matplotlib inline
def undistort_image(img,mtx,dist):
return cv2.undistort(img, mtx, dist, None, mtx)
# Define a function that thresholds the S-channel of HLS
def hls_select(img, thresh=(0, 255)):
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
s_channel = hls[:,:,2]
binary_output = np.zeros_like(s_channel)
binary_output[(s_channel > thresh[0]) & (s_channel <= thresh[1])] = 1
return binary_output
#gradx = abs_sobel_thresh(image, orient='x', sobel_kernel=ksize, thresh=(0, 255))
def abs_sobel_thresh(img, orient='x', sobel_kernel=3, sobel_thresh=(0, 255)):
# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Apply x or y gradient with the OpenCV Sobel() function
# and take the absolute value
if orient == 'x':
abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0,ksize=sobel_kernel))
if orient == 'y':
abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 0, 1,ksize=sobel_kernel))
# Rescale back to 8 bit integer
scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))
# Create a copy and apply the threshold
binary_output = np.zeros_like(scaled_sobel)
# Here I'm using inclusive (>=, <=) thresholds, but exclusive is ok too
binary_output[(scaled_sobel >= sobel_thresh[0]) & (scaled_sobel <= sobel_thresh[1])] = 1
# Return the result
return binary_output
# Define a function to return the magnitude of the gradient
# for a given sobel kernel size and threshold values
def mag_thresh(img, sobel_kernel=3, mag_thresh=(0, 255)):
# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Take both Sobel x and y gradients
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
# Calculate the gradient magnitude
gradmag = np.sqrt(sobelx**2 + sobely**2)
# Rescale to 8 bit
scale_factor = np.max(gradmag)/255
gradmag = (gradmag/scale_factor).astype(np.uint8)
# Create a binary image of ones where threshold is met, zeros otherwise
binary_output = np.zeros_like(gradmag)
binary_output[(gradmag >= mag_thresh[0]) & (gradmag <= mag_thresh[1])] = 1
# Return the binary image
return binary_output
# Define a function to threshold an image for a given range and Sobel kernel
def dir_threshold(img, sobel_kernel=3, thresh=(0, np.pi/2)):
# Grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Calculate the x and y gradients
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
# Take the absolute value of the gradient direction,
# apply a threshold, and create a binary image result
absgraddir = np.arctan2(np.absolute(sobely), np.absolute(sobelx))
binary_output = np.zeros_like(absgraddir)
binary_output[(absgraddir >= thresh[0]) & (absgraddir <= thresh[1])] = 1
# Return the binary image
return binary_output
def combined_binary(img, s_thresh=(170, 255), sx_thresh=(20, 100)):
img = np.copy(img)
# Convert to HLS color space and separate the V channel
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
l_channel = hls[:,:,1]
s_channel = hls[:,:,2]
# Sobel x
sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
# Threshold color channel
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
# Stack each channel
combined_binary_img = np.zeros_like(s_binary)
combined_binary_img[(s_binary == 1) | (sxbinary == 1)] = 1
#combined_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
return combined_binary_img, s_binary, sxbinary
def combined_binary2(img, ksize=3, sx_thresh=(20, 100)):
#abs_sobel_thresh(img, orient='x', thresh_min=0, thresh_max=255):
gradx = abs_sobel_thresh(img, orient='x', sobel_kernel=ksize, sobel_thresh=(30, 100))
grady = abs_sobel_thresh(img, orient='y', sobel_kernel=ksize, sobel_thresh=(30, 100))
mag_binary = mag_thresh(img, sobel_kernel=ksize, mag_thresh=(30, 100))
dir_binary = dir_threshold(img, sobel_kernel=ksize, thresh=(0.2, np.pi/2))
sxbinary = np.zeros_like(img)
sxbinary[((gradx == 1) & (grady == 1)) | ((mag_binary == 1) & (dir_binary == 1))] = 1
return sxbinary
def warp_image(img,src, dst):
M = cv2.getPerspectiveTransform(src, dst)
Minv= cv2.getPerspectiveTransform(dst,src)
img_size = (img.shape[1], img.shape[0])
warped = cv2.warpPerspective(img, M, img_size)
return warped,M,Minv
def mask_image(img, vertices):
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, [vertices], ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
# Define a function that takes an image, number of x and y points,
# camera matrix and distortion coefficients
def corners_unwarp(img, nx, ny, mtx, dist):
# Use the OpenCV undistort() function to remove distortion
undist = cv2.undistort(img, mtx, dist, None, mtx)
# Convert undistorted image to grayscale
gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY)
# Search for corners in the grayscaled image
ret, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
if ret == True:
# If we found corners, draw them! (just for fun)
cv2.drawChessboardCorners(undist, (nx, ny), corners, ret)
# Choose offset from image corners to plot detected corners
# This should be chosen to present the result at the proper aspect ratio
# My choice of 100 pixels is not exact, but close enough for our purpose here
offset = 100 # offset for dst points
# Grab the image shape
img_size = (gray.shape[1], gray.shape[0])
# For source points I'm grabbing the outer four detected corners
src = np.float32([corners[0], corners[nx-1], corners[-1], corners[-nx]])
# For destination points, I'm arbitrarily choosing some points to be
# a nice fit for displaying our warped result
# again, not exact, but close enough for our purposes
dst = np.float32([[offset, offset], [img_size[0]-offset, offset],
[img_size[0]-offset, img_size[1]-offset],
[offset, img_size[1]-offset]])
# Given src and dst points, calculate the perspective transform matrix
M = cv2.getPerspectiveTransform(src, dst)
# Warp the image using OpenCV warpPerspective()
warped = cv2.warpPerspective(undist, M, img_size)
# Return the resulting image and matrix
return warped, M
def hist(img):
# TO-DO: Grab only the bottom half of the image
# Lane lines are likely to be mostly vertical nearest to the car
bottom_half = img[img.shape[0]//2:,:]
# TO-DO: Sum across image pixels vertically - make sure to set `axis`
# i.e. the highest areas of vertical lines should be larger values
histogram = np.sum(bottom_half, axis=0)
# or just use the single line below
#histogram = np.sum(img[img.shape[0]//2:,:], axis=0)
return histogram
def find_lane_pixels(binary_warped, nwindows = 9, margin = 100, minpix = 50):
# Take a histogram of the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = np.int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# Set height of windows - based on nwindows above and image shape
window_height = np.int(binary_warped.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
#print(win_xleft_low,win_y_low,win_xleft_high,win_y_high)
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low), (win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low), (win_xright_high,win_y_high),(0,255,0), 2)
# Identify the nonzero pixels in x and y within the window #
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on their mean position
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty, out_img
def fit_polynomial(binary_warped):
# Find our lane pixels first
leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)
# Fit a second order polynomial to each using `np.polyfit`
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
# Avoids an error if `left` and `right_fit` are still none or incorrect
print('The function failed to fit a line!')
left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
# Plots the left and right polynomials on the lane lines
#plt.plot(left_fitx, ploty, color='yellow')
#plt.plot(right_fitx, ploty, color='yellow')
return left_fit, right_fit,out_img
def fit_poly(img_shape, leftx, lefty, rightx, righty):
### TO-DO: Fit a second order polynomial to each with np.polyfit() ###
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, img_shape[0]-1, img_shape[0])
### TO-DO: Calc both polynomials using ploty, left_fit and right_fit ###
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
return left_fitx, right_fitx, ploty
def search_around_poly(binary_warped):
# HYPERPARAMETER
# Choose the width of the margin around the previous polynomial to search
# The quiz grader expects 100 here, but feel free to tune on your own!
margin = 100
# Grab activated pixels
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
### TO-DO: Set the area of search based on activated x-values ###
### within the +/- margin of our polynomial function ###
### Hint: consider the window areas for the similarly named variables ###
### in the previous quiz, but change the windows to our new search area ###
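# Added note: this function assumes `left_fit` and `right_fit` are available at module
# level from a previous frame's polynomial fit; it only searches a +/- margin band
# around those earlier polynomials instead of repeating the sliding-window search.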
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit new polynomials
left_fitx, right_fitx, ploty = fit_poly(binary_warped.shape, leftx, lefty, rightx, righty)
## Visualization ##
# Create an image to draw on and an image to show the selection window
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
window_img = np.zeros_like(out_img)
# Color in left and right line pixels
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
# Generate a polygon to illustrate the search window area
# And recast the x and y points into usable format for cv2.fillPoly()
left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])
left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin,
ploty])))])
left_line_pts = np.hstack((left_line_window1, left_line_window2))
right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])
right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin,
ploty])))])
right_line_pts = np.hstack((right_line_window1, right_line_window2))
# Draw the lane onto the warped blank image
cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0))
cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0))
result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)
# Plot the polynomial lines onto the image
plt.plot(left_fitx, ploty, color='yellow')
plt.plot(right_fitx, ploty, color='yellow')
## End visualization steps ##
return result
# Define a class to receive the characteristics of each line detection
def fill_lane(undist_img,warped_img,Minv,ploty,left_fitx,right_fitx):
# Create an image to draw the lines on
warp_zero = np.zeros_like(warped_img).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv, (undist_img.shape[1], undist_img.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(undist_img, 1, newwarp, 0.3, 0)
return result
def measure_curvature_pixels(left_fit, right_fit,ploty,ym_per_pix,xm_per_pix):
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
left_fit_cr = np.polyfit(ploty*ym_per_pix, left_fitx*xm_per_pix, 2)
right_fit_cr = np.polyfit(ploty*ym_per_pix, right_fitx*xm_per_pix, 2)
y_eval=np.max(ploty)
# Calculation of R_curve (radius of curvature)
left_curverad = ((1 + (2*left_fit_cr[0]*y_eval + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
right_curverad = ((1 + (2*right_fit_cr[0]*y_eval + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
# Calculate vehicle center
#left_lane and right lane bottom in pixels
left_lane_bottom = left_fit[0]*y_eval**2 + left_fit[1]*y_eval + left_fit[2]
right_lane_bottom = right_fit[0]*y_eval**2 + right_fit[1]*y_eval + right_fit[2]
lane_center = (left_lane_bottom + right_lane_bottom)/2.
center_image = 640
center = (lane_center - center_image)*xm_per_pix #Convert to meters
return left_curverad, right_curverad,center
def cal_camera():
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
nx=9
ny=6
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
images_ret=[]
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for fname in images:
img = mpimg.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_RGB2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (nx,ny),None)
# If found, add object points, image points
if ret == True:
images_ret.append(fname)
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
img = cv2.drawChessboardCorners(img, (nx,ny), corners, ret)
image_name=fname.split("/",1)[1]
plt.imsave("camera_cal_output/draw_"+image_name,img)
#cv2.waitKey(500)
print("Done - draw on cal image")
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
print("Done - got parameters")
for fname in images_ret:
img = mpimg.imread(fname)
dst = undistort_image(img,mtx,dist)
image_name=fname.split("/",1)[1]
plt.imsave("camera_cal_output/undistort_"+image_name,dst)
print("Done - undistort cal image")
print("Finish Camera Calibration!")
return mtx, dist
class Line():
def __init__(self):
# was the line detected in the last iteration?
self.detected = False
# x values of the last n fits of the line
self.recent_xfitted = []
#average x values of the fitted line over the last n iterations
self.bestx = None
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#polynomial coefficients for the most recent fit
self.current_fit = [np.array([False])]
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
#difference in fit coefficients between last and new fits
self.diffs = np.array([0,0,0], dtype='float')
#x values for detected line pixels
self.allx = None
#y values for detected line pixels
self.ally = None
def process_image(image):
global mtx,dist,vertices,src,dst,i_frame,ym_per_pix,xm_per_pix
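# Added note: per-frame pipeline — undistort -> colour/gradient threshold -> region
# mask -> perspective warp -> sliding-window lane fit; the lane is drawn back onto the
# undistorted frame with curvature/offset text (filled_img), although the function
# currently returns the sliding-window diagnostic image (out_img).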
undist_img=undistort_image(image,mtx,dist)
binary_img,s_binary, sxbinary=combined_binary(undist_img, s_thresh=(170, 255), sx_thresh=(20, 100))
masked_img=mask_image(binary_img,vertices)
warped_img,M,Minv=warp_image(masked_img,src, dst)
leftx, lefty, rightx, righty, out_img=find_lane_pixels(warped_img,nwindows = 9, margin = 100, minpix = 50)
left_fit,right_fit,out_img=fit_polynomial(warped_img)
left_fitx, right_fitx, ploty=fit_poly(warped_img.shape, leftx, lefty, rightx, righty)
filled_img=fill_lane(undist_img,warped_img,Minv,ploty,left_fitx,right_fitx)
left_curverad, right_curverad,center=measure_curvature_pixels(left_fit, right_fit,ploty,ym_per_pix,xm_per_pix)
text1_on_img="Radius of Curvature: {:.2f}m".format(left_curverad)
LorR = "left" if center < 0 else "right"
text2_on_img="Car is {:.2f}m {} of the center.".format(abs(center),LorR)
cv2.putText(filled_img,text1_on_img,(300,40), cv2.FONT_HERSHEY_SIMPLEX,1, (255,255,255), 1, cv2.LINE_AA)
cv2.putText(filled_img,text2_on_img,(300,80), cv2.FONT_HERSHEY_SIMPLEX,1, (255,255,255), 1, cv2.LINE_AA)
process_img=out_img
#cv2.line(process_img, (140,720), (560,440), [0, 0, 255], 2)
#cv2.line(process_img, (560,440), (710,440), [0, 0, 255], 2)
#cv2.line(process_img, (710,440), (1180,720), [0, 0, 255], 2)
return process_img
def process_image_2(image):
global mtx,dist,vertices,src,dst,i_frame,ym_per_pix,xm_per_pix
undist_img=undistort_image(image,mtx,dist)
binary_img,s_binary, sxbinary=combined_binary(undist_img, s_thresh=(170, 255), sx_thresh=(20, 100))
masked_img=mask_image(binary_img,vertices)
warped_img,M,Minv=warp_image(masked_img,src, dst)
if i_frame==0 or left_line.detected==False or right_line.detected==False :
leftx, lefty, rightx, righty, out_img=find_lane_pixels(warped_img,nwindows = 9, margin = 100, minpix = 50)
left_fit,right_fit,out_img=fit_polynomial(warped_img)
left_fitx, right_fitx, ploty=fit_poly(warped_img.shape, leftx, lefty, rightx, righty)
left_line.detected=True
right_line.detected=True
if i_frame>0 and left_line.detected and right_line.detected:
search_around_poly(warped_img)  # note: relies on left_fit/right_fit from an earlier fit
filled_img=fill_lane(undist_img,warped_img,Minv,ploty,left_fitx,right_fitx)
left_curverad, right_curverad,center=measure_curvature_pixels(left_fit, right_fit,ploty,ym_per_pix,xm_per_pix)
text1_on_img="Radius of Curvature: {:.2f}m".format(left_curverad)
LorR = "left" if center < 0 else "right"
text2_on_img="Car is {:.2f}m {} of the center.".format(abs(center),LorR)
cv2.putText(filled_img,text1_on_img,(300,40), cv2.FONT_HERSHEY_SIMPLEX,1, (255,255,255), 1, cv2.LINE_AA)
cv2.putText(filled_img,text2_on_img,(300,80), cv2.FONT_HERSHEY_SIMPLEX,1, (255,255,255), 1, cv2.LINE_AA)
process_img=out_img
i_frame+=1
#cv2.line(process_img, (140,720), (560,440), [0, 0, 255], 2)
#cv2.line(process_img, (560,440), (710,440), [0, 0, 255], 2)
#cv2.line(process_img, (710,440), (1180,720), [0, 0, 255], 2)
return process_img
line1 = Line()
###Output
_____no_output_____
###Markdown
Camera Calibration
###Code
print("starting calibration camera....please wait")
mtx, dist=cal_camera()
print("pickle to data file")
output = open('camera_cal_data.pkl', 'wb')
pickle.dump(mtx, output)
pickle.dump(dist, output)
output.close()
###Output
starting calibration camera....please wait
Done - draw on cal image
Done - got parameters
Done - undistort cal image
Finish Camera Calibration!
pickle to data file
###Markdown
Initialization
###Code
# set up public variables
src = np.float32([[600,450], [215,720],[680,450],[1100,720]])
dst = np.float32([[400,0], [400,720],[800,0], [800,720]])
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/400 # meters per pixel in x dimension
mask_list=[[140,720], [560,440],[710,440], [1180,720]]
vertices=np.array(mask_list)
left_fit_pre = np.array([0,0,0])
right_fit_pre = np.array([0,0,0])
offset = 100
img_size = (1280,720) # (shape[1] width 1280,shape[0] height 720)
#unpickle mtx and dist
pkl_file = open('camera_cal_data.pkl', 'rb')
mtx = pickle.load(pkl_file)
dist = pickle.load(pkl_file)
pkl_file.close()
###Output
_____no_output_____
###Markdown
pipeline for test images
###Code
print("Start processing test images \n")
images = glob.glob('test_images/*.jpg')
i_frame=0
left_line = Line()  #[ Line() for i in range (10)]
right_line = Line()  #[ Line() for i in range (10)]
for fname in images:
img = mpimg.imread(fname)
process_img = process_image(img)
image_name=fname.split("/",1)[1]
plt.imsave("test_images_output/processed_"+image_name,process_img)
print(fname, "----finished and saved ")
print("\nFinished processing test image")
#src = np.float32([[560,440], [650,440], [1180,720],[240,720]]) src for video
#src = np.float32([[610,440], [215,720],[670,440],[1100,720]]) # src for straight_line1, 1st try
#dst = np.float32([[offset, offset], [img_size[0]-offset, offset],
# [img_size[0]-offset, img_size[1]],
# [offset, img_size[1]]])
#dst = np.float32([[215,0], [215,720],[1100,0], [1100,720]])
project_video_output = 'test_videos_output/project_video_output.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
i_frame=0
left_line = Line()  #[ Line() for i in range (10)]
right_line = Line()  #[ Line() for i in range (10)]
clip1 = VideoFileClip("test_videos/project_video.mp4")
project_video_clip = clip1.fl_image(process_image)  #NOTE: this function expects color images!!
%time project_video_clip.write_videofile(project_video_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(project_video_output))
###Output
_____no_output_____ |
docs/_sources/spatial_unit/chunk_uk.ipynb | ###Markdown
Geo-chunking Great Britain

To efficiently analyse the whole of Great Britain, we need to subdivide it into reasonable chunks. For that, we will use local authority polygons combined into contiguous groups of ~100k buildings. Let's start with the retrieval of our building layer from the database.
###Code
import os
import numpy as np
import geopandas as gpd
from sqlalchemy import create_engine
user = os.environ.get('DB_USER')
pwd = os.environ.get('DB_PWD')
host = os.environ.get('DB_HOST')
port = os.environ.get('DB_PORT')
db_connection_url = f"postgres+psycopg2://{user}:{pwd}@{host}:{port}/built_env"
engine = create_engine(db_connection_url)
sql = f'SELECT * FROM openmap_buildings_200814'
df = gpd.read_postgis(sql, engine, geom_col='geometry')
df.shape
###Output
_____no_output_____
###Markdown
Local authority districts are available as `geojson` from the ArcGIS open data portal:
###Code
auth = gpd.read_file('https://opendata.arcgis.com/datasets/7f83b82ef6ce46d3a5635d371e8a3e7c_0.geojson')
###Output
_____no_output_____
###Markdown
We want to make sure that both layers use the same CRS.
###Code
auth = auth.to_crs(df.crs)
###Output
_____no_output_____
###Markdown
To speed up the spatial query counting the number of buildings within each district, we use only centroids instead of building polygons. The count of buildings per polygon is then a simple spatial-index query followed by counting the occurrences of each unique district index.
###Code
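# Added note: query_bulk returns two aligned index arrays — `inp` indexes the district
# geometries and `res` the building centroids that intersect them — so counting how
# often each district index appears gives the number of buildings per district.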
inp, res = df.centroid.sindex.query_bulk(auth.geometry, predicate='intersects')
u, c = np.unique(inp, return_counts=True)
auth.loc[u, 'counts'] = c
auth.plot('counts', figsize=(18, 18), legend=True)
auth.to_parquet('../../urbangrammar_samba/local_authorities.pq')
auth['counts'].describe()
###Output
_____no_output_____
###Markdown
Geo-chunking Local Authority Districts

In this section we will create a partition of the UK that groups spatially contiguous local authorities (LADs) into *regions* with a similar number of buildings. For that, we will employ a regionalisation algorithm that, instead of requiring the number of regions, uses a floor threshold for the minimum building count and tries to generate groups _around_ that number (but above it).
###Code
import pandas
import geopandas
import numpy as np
from mapclassify import greedy
from copy import deepcopy
from libpysal.weights import Queen, KNN, W
auth = geopandas.read_parquet("local_authorities.pq")
###Output
_____no_output_____
###Markdown
Note that we need to install [`region`](https://github.com/pysal/region) for this operation. The library is not part of `gds_env:5.0`, as it has been phased out in favour of [`spopt`](https://github.com/pysal/spopt), which will eventually be part of PySAL.
###Code
#! pip install region
import region
###Output
_____no_output_____
###Markdown
Data

We currently only have building counts for GB:
###Code
ax = auth.plot(color="k")
auth.loc[auth["counts"].isna(), :].plot(color="red", ax=ax)
###Output
_____no_output_____
###Markdown
So, for now, we will remove Northern Ireland for this regionalisation:
###Code
auth = auth.dropna().reset_index()
###Output
_____no_output_____
###Markdown
Topology: `W`

To be able to build spatially constrained clusters, we need a way to capture topological relationships between local authorities. Spatial weights matrices come to the rescue here. Ideally, we need one that gives us contiguity relationships but also connects _every_ observation. In a geography like ours, this must come from a combination of more than one criterion.

Our starting point is based on queen contiguity:
###Code
%time w_queen = Queen.from_dataframe(auth)
###Output
CPU times: user 1min 23s, sys: 2.7 s, total: 1min 26s
Wall time: 1min 25s
###Markdown
This produces a matrix with six islands (observations with no neighbors). To connect these to the rest of the graph, we generate a nearest-neighbour matrix and use it for the islands.
###Code
%time w_k1 = KNN.from_dataframe(auth, k=1)
###Output
CPU times: user 851 ms, sys: 4.04 ms, total: 855 ms
Wall time: 857 ms
###Markdown
Our resulting matrix will be a queen contiguity one with islands connected to their nearest neighbor (and vice versa, for symmetry).
###Code
neighbors = deepcopy(w_queen.neighbors)
for i in w_queen.islands:
j = w_k1.neighbors[i][0]
neighbors[i] = [j]
neighbors[j].append(i)
w = W(neighbors)
###Output
_____no_output_____
###Markdown
And we are ready to regionalise!

Regionalisation

The Max-P algorithm requires four hyper-parameters:

1. Topology: we'll use `w`
1. Non-spatial features: to help the compactness of the regions, we will use the coordinates of each polygon's centroid
1. A variable to guide the number of regions (`spatially_extensive_attr`): we will use the building count (`counts`) in `auth` with a flipped sign
1. A floor `threshold` to ensure every region has at least that value of the `spatially_extensive_attr`: following our estimates for Dask performance, we will use a maximum number of 200,000 (-200,000)

First, let's pull out centroid coords:
###Code
cents = auth.geometry.centroid
xys = pandas.DataFrame({"X": cents.x,
"Y": cents.y
}, index=auth.index
)
###Output
_____no_output_____
###Markdown
And we can set up and run the optimisation. We are specifying that areas have a minimum of 100,000 buildings. By the nature of Max-P, it will try to create clusters of _roughly_ that size, but the only guarantee is that each region stays above the floor threshold. This is a relatively fast run:
###Code
%%time
from region.max_p_regions.heuristics import MaxPRegionsHeu
model = MaxPRegionsHeu(random_state=123445)
model.fit_from_w(w,
xys.values,
spatially_extensive_attr=auth["counts"].values,
threshold=100000
)
###Output
CPU times: user 57.5 s, sys: 84.6 ms, total: 57.6 s
Wall time: 57.4 s
###Markdown
---**NOTE**: I tried the following config, which I _interpret_ as imposing two restrictions, a floor and a ceiling, but discarded it: the results are not significantly better, the running time is considerably higher, and the upper limit does not seem to apply.
###Code
%%time
from region.max_p_regions.heuristics import MaxPRegionsHeu
model = MaxPRegionsHeu(random_state=123445)
model.fit_from_w(w,
xys.values,
spatially_extensive_attr=auth[["counts"]].assign(neg_counts=-auth["counts"]).values,
threshold=np.array([150000, -200000])
)
###Output
CPU times: user 57min 51s, sys: 841 ms, total: 57min 52s
Wall time: 57min 51s
###Markdown
--- The resulting labels can be further explored:
###Code
sizes = auth.groupby(model.labels_)["counts"].sum()
print((f"There are {pandas.Series(model.labels_).unique().shape[0]} regions"\
f" and {sizes[sizes>200000].shape[0]} are above 200,000 buildings"))
###Output
There are 103 regions and 11 are above 200,000 buildings
###Markdown
Number of LADs per region and the number of buildings they include:
###Code
g = auth.groupby(model.labels_)
region_stats = pandas.DataFrame({"n_lads": g.size(),
"n_buildings": g["counts"].sum()
})
region_stats.describe().T
###Output
_____no_output_____
###Markdown
Distribution of number of buildings by region:
###Code
ax = region_stats["n_buildings"].plot.hist(bins=100)
ax.axvline(200000, color="red")
###Output
_____no_output_____
###Markdown
And the spatial layout of the regions:
###Code
auth.assign(lbls=model.labels_)\
.plot(column="lbls",
categorical=True,
figsize=(12, 12)
)
###Output
_____no_output_____
###Markdown
Writing labels out
###Code
auth.assign(lbls=model.labels_)[["lad20cd", "lbls"]].to_csv("chunking_labels.csv")
###Output
_____no_output_____
###Markdown
Chunking data to parquet files

Now we use the generated chunks to split the building and enclosure data into chunked parquet files.
###Code
chunks = pandas.read_csv('../../urbangrammar_samba/spatial_signatures/local_auth_chunking_labels.csv', index_col=0)
auth = gpd.read_parquet('../../urbangrammar_samba/spatial_signatures/local_authorities.pq')
###Output
_____no_output_____
###Markdown
The first step is to dissolve each chunk to a single geometry.
###Code
districts = auth.merge(chunks, on='lad20cd', how='inner')
chunks = districts.dissolve('lbls')
chunks['chunkID'] = range(len(chunks))
chunks[['geometry', 'chunkID']].to_parquet('../../urbangrammar_samba/spatial_signatures/local_auth_chunks.pq')
df['uID'] = range(len(df))
###Output
_____no_output_____
###Markdown
Above, we have also assigned unique IDs to both chunks and buildings; below we do the same for enclosures. This allows easy attribute-based merging of data in the future instead of costly spatial joins.
###Code
enclosures = gpd.read_parquet('../../urbangrammar_samba/spatial_signatures/enclosures.pq')
enclosures['enclosureID'] = range(len(enclosures))
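# Added illustration only: with `chunkID`, `uID` and `enclosureID` in place, later
# results can be attached with plain attribute joins instead of spatial ones, e.g.
# df.merge(per_building_stats, on='uID', how='left') — here `per_building_stats` is a
# hypothetical table of computed attributes, not something created in this notebook.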
###Output
_____no_output_____
###Markdown
We first use the spatial index to chunk enclosures as our top-level geometry and then chunk buildings based on enclosures. That ensures that both layers align on the edges of district chunks. The resulting chunked GeoDataFrames are then saved to parquet files.
###Code
inp, res = enclosures.centroid.sindex.query_bulk(chunks.geometry, predicate='intersects')
blg_six = df.centroid.sindex
from tqdm.notebook import tqdm
for chunk_id in tqdm(range(len(chunks)), total=len(chunks)):
wihtin_chunk = enclosures.iloc[res[inp == chunk_id]]
wihtin_chunk.to_parquet(f'../../urbangrammar_samba/spatial_signatures/enclosures/encl_{chunk_id}.pq')
i, r = blg_six.query_bulk(wihtin_chunk.geometry, predicate='intersects')
df.iloc[np.unique(r)][['uID', 'geometry']].to_parquet(f'../../urbangrammar_samba/spatial_signatures/buildings/blg_{chunk_id}.pq')
###Output
_____no_output_____ |
Data Analysis with Python Peer Graded Assignment.ipynb | ###Markdown
Data Analysis with Python

House Sales in King County, USA

This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.

- id: a notation for a house
- date: Date house was sold
- price: Price is prediction target
- bedrooms: Number of Bedrooms/House
- bathrooms: Number of bathrooms/bedrooms
- sqft_living: square footage of the home
- sqft_lot: square footage of the lot
- floors: Total floors (levels) in house
- waterfront: House which has a view to a waterfront
- view: Has been viewed
- condition: How good the condition is overall
- grade: overall grade given to the housing unit, based on King County grading system
- sqft_above: square footage of house apart from basement
- sqft_basement: square footage of the basement
- yr_built: Built Year
- yr_renovated: Year when house was renovated
- zipcode: zip code
- lat: Latitude coordinate
- long: Longitude coordinate
- sqft_living15: Living room area in 2015 (implies some renovations). This might or might not have affected the lot size area
- sqft_lot15: Lot size area in 2015 (implies some renovations)

You will require the following libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler,PolynomialFeatures
%matplotlib inline
###Output
_____no_output_____
###Markdown
1.0 Importing the Data Load the csv:
###Code
file_name='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/kc_house_data_NaN.csv'
df=pd.read_csv(file_name)
###Output
_____no_output_____
###Markdown
We use the method head to display the first 5 rows of the dataframe.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Question 1 Display the data types of each column using the attribute dtype, then take a screenshot and submit it, include your code in the image.
###Code
print(df.dtypes)
###Output
Unnamed: 0 int64
id int64
date object
price float64
bedrooms float64
bathrooms float64
sqft_living int64
sqft_lot int64
floors float64
waterfront int64
view int64
condition int64
grade int64
sqft_above int64
sqft_basement int64
yr_built int64
yr_renovated int64
zipcode int64
lat float64
long float64
sqft_living15 int64
sqft_lot15 int64
dtype: object
###Markdown
We use the method describe to obtain a statistical summary of the dataframe.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
2.0 Data Wrangling Question 2 Drop the columns "id" and "Unnamed: 0" from axis 1 using the method drop(), then use the method describe() to obtain a statistical summary of the data. Take a screenshot and submit it, make sure the inplace parameter is set to True
###Code
df.drop(['id', 'Unnamed: 0'], axis=1, inplace=True)
df.describe()
###Output
_____no_output_____
###Markdown
we can see we have missing values for the columns bedrooms and bathrooms
###Code
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
number of NaN values for the column bedrooms : 13
number of NaN values for the column bathrooms : 10
###Markdown
We can replace the missing values of the column 'bedrooms' with the mean of the column 'bedrooms' using the method replace. Don't forget to set the inplace parameter to True.
###Code
mean=df['bedrooms'].mean()
df['bedrooms'].replace(np.nan,mean, inplace=True)
###Output
_____no_output_____
###Markdown
We also replace the missing values of the column 'bathrooms' with the mean of the column 'bathrooms' using the method replace. Don't forget to set the inplace parameter to True.
###Code
mean=df['bathrooms'].mean()
df['bathrooms'].replace(np.nan,mean, inplace=True)
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
number of NaN values for the column bedrooms : 0
number of NaN values for the column bathrooms : 0
###Markdown
3.0 Exploratory data analysis Question 3Use the method value_counts to count the number of houses with unique floor values, use the method .to_frame() to convert it to a dataframe.
###Code
df['floors'].value_counts().to_frame()
###Output
_____no_output_____
###Markdown
Question 4

Use the function boxplot in the seaborn library to determine whether houses with a waterfront view or without a waterfront view have more price outliers.
###Code
sns.boxplot(x='waterfront', y='price', data=df)
###Output
/opt/conda/envs/DSX-Python35/lib/python3.5/site-packages/seaborn/categorical.py:462: FutureWarning: remove_na is deprecated and is a private function. Do not use.
box_data = remove_na(group_data)
###Markdown
Question 5

Use the function regplot in the seaborn library to determine if the feature sqft_above is negatively or positively correlated with price.
###Code
sns.regplot(x='sqft_above', y='price', data=df)
###Output
_____no_output_____
###Markdown
We can use the Pandas method corr() to find the feature other than price that is most correlated with price.
###Code
df.corr()['price'].sort_values()
###Output
_____no_output_____
###Markdown
Module 4: Model Development

Import libraries
###Code
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
We can fit a linear regression model using the longitude feature 'long' and calculate the R^2.
###Code
X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm
lm.fit(X,Y)
lm.score(X, Y)
###Output
_____no_output_____
###Markdown
Question 6

Fit a linear regression model to predict the 'price' using the feature 'sqft_living' then calculate the R^2. Take a screenshot of your code and the value of the R^2.
###Code
X = df[['sqft_living']]
Y = df['price']
lm = LinearRegression()
lm.fit(X, Y)
lm.score(X, Y)
###Output
_____no_output_____
###Markdown
Question 7

Fit a linear regression model to predict the 'price' using the list of features:
###Code
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
###Output
_____no_output_____
###Markdown
Then calculate the R^2. Take a screenshot of your code.
###Code
X = df[features]
Y= df['price']
lm = LinearRegression()
lm.fit(X, Y)
lm.score(X, Y)
###Output
_____no_output_____
###Markdown
This will help with Question 8.

Create a list of tuples. The first element in each tuple contains the name of the estimator:

'scale'
'polynomial'
'model'

The second element in each tuple contains the model constructor:

StandardScaler()
PolynomialFeatures(include_bias=False)
LinearRegression()
###Code
Input=[('scale',StandardScaler()),('polynomial', PolynomialFeatures(include_bias=False)),('model',LinearRegression())]
###Output
_____no_output_____
###Markdown
Question 8

Use the list to create a pipeline object, predict the 'price', fit the object using the features in the list features, then fit the model and calculate the R^2.
###Code
pipe=Pipeline(Input)
pipe
pipe.fit(X,Y)
pipe.score(X,Y)
###Output
_____no_output_____
###Markdown
Module 5: MODEL EVALUATION AND REFINEMENT

Import the necessary modules
###Code
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
print("done")
###Output
done
###Markdown
We will split the data into training and testing sets.
###Code
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
X = df[features ]
Y = df['price']
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.15, random_state=1)
print("number of test samples :", x_test.shape[0])
print("number of training samples:",x_train.shape[0])
###Output
number of test samples : 3242
number of training samples: 18371
###Markdown
Question 9

Create and fit a Ridge regression object using the training data, setting the regularization parameter to 0.1 and calculate the R^2 using the test data.
###Code
from sklearn.linear_model import Ridge
RidgeModel = Ridge(alpha = 0.1)
RidgeModel.fit(x_train, y_train)
RidgeModel.score(x_test, y_test)
###Output
_____no_output_____
###Markdown
Question 10

Perform a second order polynomial transform on both the training data and testing data. Create and fit a Ridge regression object using the training data, setting the regularisation parameter to 0.1. Calculate the R^2 utilising the test data provided. Take a screenshot of your code and the R^2.
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
pr = PolynomialFeatures(degree=2)
x_train_pr = pr.fit_transform(x_train)
x_test_pr = pr.fit_transform(x_test)
poly = Ridge(alpha=0.1)
poly.fit(x_train_pr, y_train)
poly.score(x_test_pr, y_test)
###Output
_____no_output_____ |
Job Data Scraper.ipynb | ###Markdown
Job Data Scraper

This file will serve to scrape the job details so we can perform our analysis.

Steps

1. Pull up page with Selenium
2. Create list from search results list
3. Click on first job
4. Grab data
   - Company Name: id= vjs-cn
   - Location: id= vjs-loc
   - Title: id= vjs-jobtitle
   - Description: id= vjs-desc
   - Salary (if available): //*[@id="vjs-jobinfo"]/div[3]/span
5. Loop over remaining jobs and repeat steps 3 and 4
6. Store data or set up for analysis (write to file?)
###Code
import pandas as pd
import time
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
browser = webdriver.Firefox()
url = "https://www.indeed.com/jobs?q=%22software+engineer%22+OR+%22software+developer%22&l=Salt+Lake+City%2C+UT&radius=100&sort=date&limit=50"
browser.get(url)
columns = ["job_title", "company_name", "location", "description", "salary"]
job_df = pd.DataFrame(columns = columns)
job_results = browser.find_elements_by_class_name("clickcard")
for job in job_results:
# assign record number for df
num = (len(job_df) + 1)
# select the job
job.click()
# grab job info
# force wait for page change
WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, "#vjs-cn, #vjs-loc, #vjs-jobtitle,#vjs-desc")))
company_name = browser.find_element_by_id('vjs-cn').text
location = browser.find_element_by_id('vjs-loc').text
title = browser.find_element_by_id('vjs-jobtitle').text
description = browser.find_element_by_id('vjs-desc').text
ifSalary = browser.find_elements_by_xpath("//*[@id=\"vjs-jobinfo\"]/div[3]/span")
if len(ifSalary)>0:
salary = ifSalary[0].text
else:
salary = "Not_Listed"
# do something with the data
# append data to df
job_post = [title, company_name, location, description, salary]
job_df.loc[num] = job_post
job_df.head(10)
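# Step 6 (added sketch, assuming plain CSV output is acceptable; the filename is
# illustrative): persist the scraped postings for later analysis and close the browser.
job_df.to_csv("indeed_jobs.csv", index=False)
browser.quit()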
###Output
_____no_output_____ |
src/models/single_SVM.ipynb | ###Markdown
data preprocessing
###Code
def detectNaN(a):
for i in range(len(a[0])):
e = True
for j in range(len(a) - 1):
if np.isnan(a[j][i]):
e = False
break
if (not e):
print(i)
def replace(a):
for i in range(len(a[0])):
e = True
for j in range(len(a)):
if np.isnan(a[j][i]):
a[j][i] = a[j - 1][i]
return a
# preprocessing training set
trainDataArray = np.array(trainData)
trainDataArrayProcessed = np.delete(trainDataArray, [1, 2], 1)
trainDataProcessed = replace(trainDataArrayProcessed)
detectNaN(trainDataProcessed)
scaler = StandardScaler()
scaler.fit(trainDataProcessed)
trainDataNormalized = scaler.transform(trainDataProcessed)
detectNaN(trainDataNormalized)
# preprocessing validation set
testDataArray = np.array(testData)
testDataArrayProcessed = np.delete(testDataArray, [1, 2], 1)
testDataProcessed = replace(testDataArrayProcessed)
detectNaN(testDataProcessed)
scaler = StandardScaler()
scaler.fit(testDataProcessed)
testDataNormalized = scaler.transform(testDataProcessed)
detectNaN(testDataNormalized)
###Output
_____no_output_____
###Markdown
parameter selection
###Code
cList = [1, 2, 5, 10, 20]
gammaList = [100, 50, 10, 5, 1]
epsilonList = [0.1, 0.05, 0.01, 0.005, 0.001]
preds_svm = []
eval_svm = []
recordNum = len(validLabel)
for c in cList:
for gamma in gammaList:
for epsilon in epsilonList:
rgs_svm = SVR(C = c, epsilon = epsilon, gamma = gamma)
rgs_svm.fit(trainDataNormalized, trainLabel)
pred_svm = rgs_svm.predict(testDataNormalized)
preds_svm.append(pred_svm)
evaluation = abs((validLabel - pred_svm)/(validLabel + pred_svm)).sum()/recordNum
eval_svm.append(evaluation)
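# Sketch of how the best setting could be recovered from the grid above (added for illustration;
# the index arithmetic assumes eval_svm was filled in the same nested c/gamma/epsilon order).
best = int(np.argmin(eval_svm))
best_c = cList[best // (len(gammaList) * len(epsilonList))]
best_gamma = gammaList[(best // len(epsilonList)) % len(gammaList)]
best_epsilon = epsilonList[best % len(epsilonList)]
print(best_c, best_gamma, best_epsilon, eval_svm[best])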
###Output
_____no_output_____ |
Project_1_RBM_and_Tomography/.ipynb_checkpoints/CDL_DWaveSet-checkpoint.ipynb | ###Markdown
Cohort Project - DWave Data Set
###Code
import numpy as np
import helper as dw
import csv
###Output
_____no_output_____
###Markdown
Load the Training and Validation data from the DWave npz file
###Code
train, val = dw.load_dataset("dataset_x1a46w3557od23s750k9.npz")
print (train.shape)
print (val.shape)
###Output
_____no_output_____
###Markdown
Load the correlated features csv data
###Code
corr_feat = []
with open('correlated_features.csv', newline='') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
for row in reader:
corr_feat.append(row)
print (len(corr_feat))
#for r in corr_feat:
#    print (r[0],r[1],r[2])
###Output
_____no_output_____
###Markdown
Add Code below to process the data
###Code
#BASIC Template code - TO BE MODIFIED
from dwave.system import DWaveSampler, EmbeddingComposite
#Converts the QUBO into a BinaryQuadraticModel and then calls sample().
sampler = EmbeddingComposite(DWaveSampler())
#Example encoding for x0,x1,x2,x4: needs to be generalized based on dataset
Q = {('x0', 'x0'):1, ('x1', 'x1'):1, ('x2', 'x2'):1, ('x3', 'x3'):1, ('x4', 'x4'):1,}
results = sampler.sample_qubo(Q, num_reads=10000)
# print the results
for smpl, energy in results.data(['sample', 'energy']):
print(smpl, energy)
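# Added illustration: pull out the lowest-energy sample using only the fields iterated above.
best_sample, best_energy = min(results.data(['sample', 'energy']), key=lambda d: d[1])
print("lowest-energy sample:", best_sample, best_energy)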
###Output
_____no_output_____ |
notebooks/extraction/check_MLclassification.ipynb | ###Markdown
To check the automatic classification
From the Dataiku classification (ML), the script goes through the OCs with a specific classification (c, u, n).
###Code
import os
from IPython.display import Image, display
import pandas as pd
import numpy as np
rootdir = os.environ.get("GAIA_ROOT")
ocplot = "/home/stephane/Science/GAIA/e2e_products.edr3/res.alma.edr3/e2e_products.edr3/plotSelect"
scoredfile= "/home/stephane/Downloads/ocres_testsc_joined_scored.csv"
df= pd.read_csv(scoredfile, sep=";")
def shuffle(df, n=1, axis=0):
df = df.copy()
for _ in range(n):
df.apply(np.random.shuffle, axis=axis)
return df
check = "c"
dfs = df.sample(frac=1)  # shuffle the rows so the review order is random
for index, row in dfs.iterrows():
clfile= "%s.%d.cluster.png"%(row['votname'], row['cycle'])
clim= os.path.join(ocplot,clfile)
rawfile= "%s.%d.raw.png"%(row['votname'], row['cycle'])
rawim= os.path.join(ocplot,rawfile)
# display(Image(filename=rawim))
if row['prediction'] == check:
# print(row['votname'], row['cycle'], row['prediction'])
display(Image(filename=rawim))
display(Image(filename=clim))
value= input("#RET")
print("##################### \n")
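# Small added summary (not in the original loop): how many OCs carry the class under review.
print("OCs predicted as '%s': %d" % (check, (dfs['prediction'] == check).sum()))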
###Output
_____no_output_____ |
Quiz_HandsOn_1.ipynb | ###Markdown
Quiz and Hands On 1
Quiz
Please answer the value and type of each object. If the object is a *Function*, answer "Function". If an error is expected, answer why.
###Code
def show_val_type(obj):
if callable(obj):
print('Function')
return
print(obj, type(obj))
return
def dummy_func():
pass
def x2(val):
return val * 2
def add2(a, b):
val = a + b
return val
# Exercise
show_val_type(1)
show_val_type(3.14)
show_val_type('123'*2)
# Please think before execution 1
show_val_type(4/2)
show_val_type(5//2)
show_val_type(dummy_func)
show_val_type(dummy_func())
print()
show_val_type(x2(10))
show_val_type(x2('abc'))
show_val_type(add2(10, 20))
show_val_type(add2('abc', 'xyz'))
show_val_type(add2(0.5, 1))
show_val_type(add2('1', 1))
# Please think before execution 2
show_val_type(show_val_type)
show_val_type(int(3.14))
show_val_type(str(2**10))
show_val_type(str(2**10)[3])
show_val_type(str(2**10)[1:3])
show_val_type(str(2**10)[4])
# Please think before execution 3
s = 'abcdefghij'
show_val_type(s.capitalize())
show_val_type(s.capitalize)
f = s.capitalize
f()
show_val_type(s.upper())
show_val_type(s[:5])
show_val_type(s[5:])
show_val_type(s[-5:-1])
# Please think before execution 4
s = '123 456 78'
show_val_type(s)
show_val_type(s[2])
show_val_type(int(s[5:8])) # tricky, verify the behavior of int()
x = s.split()
show_val_type(x)
show_val_type(x[1])
show_val_type(int(x[0]*3))
show_val_type(x[-1] + 1)
###Output
_____no_output_____
###Markdown
Hands on
Please create some objects and check their value and type. For the methods of String, ```help(str)``` in the interactive shell will show a summary.
###Code
help(str)
# Define variables and objects for the next cell
t = (1,2,3)
u = ['a', 45, 1.234, t]
w = {'tupple': t, 'list': u, 1: 3.14, 3.14: 2**8, 256: '2**8'}
# operators for SET are & | - ^ < <= > >=
set1 = {2, 3, 5, 7}
set2 = {5, 7, 11,13, 17, 19}
# Execute after above cell is evaluated
# IPythn is good at auto-completion, type "show" and Tab key will expand to "show_val_type"
show_val_type(w[3.14])
show_val_type(set1 ^ set2)
show_val_type(set1 | set2 > set2)
u[2] = 'replaced'
show_val_type (w['list'])
###Output
_____no_output_____ |
Aprendizaje Supervisado/Investigaciones/Investigacion1_DeepLearning.ipynb | ###Markdown
Research Question
The use of robust loss functions, such as the Huber function, makes it possible to improve the performance of a network on regression problems where the output contains outliers.
INF-395 Redes Neuronales y Deep Learning
Authors: Francisco Andrades | Lucas Díaz
Language: Python
Topics:
- *Feed-forward* neural network architecture
- Training of neural networks.
- Basics of convolutional networks.
- Special problems.
Explanatory videos:
- https://youtu.be/DbpbfwR61Ws
- https://youtu.be/WL5InIBNLos
###Code
# Librerías
!pip install scipy==1.6
import pandas as pd
import numpy as np
import seaborn as sns; sns.set()
import matplotlib.pyplot as plt
import scipy
from scipy import stats
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import mean_squared_error as mse
from sklearn.metrics import mean_absolute_error as mae
from sklearn.compose import ColumnTransformer
from sklearn.datasets import fetch_openml
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import LogisticRegression
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import boston_housing
from tensorflow.keras import optimizers
# LOSSES
from tensorflow.keras.losses import Huber
from tensorflow.keras.losses import mean_squared_error as mse_loss
from tensorflow.keras.losses import mean_absolute_error as mae_loss
from tensorflow.keras.losses import LogCosh
from tensorflow.keras.callbacks import EarlyStopping
#P2
from tensorflow.keras import backend as K
def root_mean_squared_error(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true)))
path = ''
###Output
_____no_output_____
###Markdown
Research Question

**Hypothesis:** Using robust loss functions, such as the Huber function, **makes it possible** to improve the performance of a network on regression problems where the output contains outliers.
**Objective:** Prove the hypothesis.
**Proposal:** The hypothesis will be demonstrated by showing at least **1** case where the Huber function improves the performance of an architecture compared with the *Mean Squared Error* loss. The demonstration will then be extended to other robust loss functions.

**Methodology**
1. Experiments will be run on 2 synthetic and 2 real datasets, using a feed-forward architecture that is **fixed for each dataset**.
2. The architecture will be trained independently with each loss function and evaluated using MAE and MSE as metrics.
3. Step 2 will be repeated N times, yielding a distribution of results for each loss function.
4. Through hypothesis testing, the following question will be answered for each metric: can we state, with a given confidence, that the mean of the distribution produced by Huber is strictly lower than the mean of the distribution produced by MSE?
5. If the answer is positive for both metrics in at least **1** case, the hypothesis will be considered proven.

**Outline**
1. Recap of MSE, MAE and Huber.
   - Why do we choose MSE as the loss function to compare against?
   - Why do we believe the hypothesis should be true?
2. Description of the datasets.
   - Why did we choose them?
3. Evaluation of the models.
   - Why use MAE and MSE as metrics?
4. Experiments.
   - Generating the distributions.
   - Hypothesis testing.
5. Extension to other robust loss functions.
   - LogCosh()

****
1. Recap of MSE, MAE and Huber

MSE
- Mean of the squared errors (L2 norm)
- Sensitive to outliers
$$MSE = \frac{\sum_{i=1}^n(y_i-y_i^p)^2}{n}$$

MAE
- Mean of the absolute errors (L1 norm)
- Piecewise differentiable.
$$MAE = \frac{\sum_{i=1}^n|y_i-y_i^p|}{n}$$

Huber
- Combines MSE with MAE.
- Behaves like MAE, but turns into MSE for small errors.
- Robust to outliers (like MAE), yet differentiable at 0 (like MSE).

In 1964, Peter Huber, in his paper *Robust estimation of a location parameter*, defined the function (with tuning constant $k(\epsilon)$, written here as $\delta$):
\begin{equation*} \rho(y - f(x)) = \begin{cases} \frac{1}{2}(y - f(x))^2 & \text{ for } \quad |y - f(x)| < \delta \\ \delta|y - f(x)| - \frac{1}{2}\delta^2 & \text{ for } \quad |y - f(x)| \geq \delta \end{cases}\end{equation*}

For convenience, the Huber function can be rewritten with the parameters used in this work:
\begin{align*} t &= y - f(x)\\ k &= \delta\end{align*}
\begin{equation*} \rho(t) = \begin{cases} \frac{1}{2}t^2 & \text{ for } \quad |t| < k \\ k|t| - \frac{1}{2}k^2 & \text{ for } \quad |t| \geq k \end{cases}\end{equation*}
where:
\begin{align*}y&: \text{ true value.}\\f(x)&: \text{ model prediction.}\end{align*}

- Tuning the $\delta$ parameter in Huber($\delta$): $\delta$ can be interpreted as the value at which the model starts to consider the difference between the true value and the prediction "large".

****
2. Description of the datasets

Two synthetic and two real datasets will be used.
* **Synthetic dataset 1**: contains no outliers.
* **Synthetic dataset 2**: contains outliers.
* **Real dataset, Boston housing price**: contains moderate outliers.
* **Real dataset, Challenge**: contains extreme outliers.
All datasets will be preprocessed as follows:
###Code
from sklearn.compose import make_column_selector as selector
numeric_transformer = StandardScaler()
categorical_transformer = OneHotEncoder(handle_unknown='ignore')
preprocessor = ColumnTransformer(transformers=[
('num', numeric_transformer, selector(dtype_include="number")),
('cat', categorical_transformer, selector(dtype_include="category"))
])
def procesar_data(X, y):
x_train, x_test, y_train, y_test = [pd.DataFrame(elem) for elem in train_test_split(X,
y,
shuffle=True,
random_state=1)]
x_train, x_test = preprocessor.fit_transform(x_train), preprocessor.transform(x_test)
y_train, y_test = preprocessor.fit_transform(y_train), preprocessor.transform(y_test)
return x_train, x_test, y_train, y_test
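# Added illustration (not part of the original notebook): compare how MSE, MAE and Huber
# penalise small vs. large residuals, to make the robustness argument above concrete.
residuals = np.array([0.5, 1.0, 5.0, 50.0])
delta = 1.0
huber_terms = np.where(
    np.abs(residuals) < delta,
    0.5 * residuals ** 2,
    delta * np.abs(residuals) - 0.5 * delta ** 2,
)
print("MSE terms  :", residuals ** 2)
print("MAE terms  :", np.abs(residuals))
print("Huber terms:", huber_terms)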
###Output
_____no_output_____
###Markdown
Synthetic datasets
The synthetic datasets are built to show, as clearly as possible, the advantage of using a model with an outlier-robust loss function in regression problems, compared with one that is not robust.
In this case, the models used are:
* Linear regression using **MSE** as the loss function.
* Linear regression using the **Huber** function as the loss function.

The datasets are generated with the function ```gendata(n_samples, n_outliers)``` defined in the next cell. Here ```n_samples``` is the total number of data points, while ```n_outliers``` is the number of those points that are turned into outliers.
The synthetic datasets are generated below:
* On the left, the synthetic dataset **without outliers**, where both models should obtain similar results.
* On the right, the synthetic dataset **with outliers**, where the model with a robust loss function should perform better.
###Code
def gendata(n_samples=1000, n_outliers=50):
X = np.linspace(0, 100, n_samples)
y = np.linspace(0, 100, n_samples) + np.random.normal(0, 5, n_samples)
arreglo_pos = np.random.randint(0, n_samples, n_outliers)
y[arreglo_pos] = y[arreglo_pos] + (np.random.choice((-1,1),size=n_outliers)
*np.random.randint(50, 210, n_outliers))
return X, y
# Datasets sintéticos
X, y = gendata(400, 0)
X_outliers, y_outliers = gendata(400, 10)
# Preprocesar datasets sintéticos
#sin outliers
x_train_sint, x_test_sint, y_train_sint, y_test_sint = procesar_data(X, y)
#con outliers
x_train_sintout, x_test_sintout, y_train_sintout, y_test_sintout = procesar_data(X_outliers, y_outliers)
#PLOTEANDO
fig, ax = plt.subplots(1, 2, figsize=(24,12))
ax[0].scatter(np.concatenate((x_train_sint,
x_test_sint)),
np.concatenate((y_train_sint,
y_test_sint)))
ax[0].set_title('data sin outliers')
ax[1].scatter(np.concatenate((x_train_sintout,
x_test_sintout)),
np.concatenate((y_train_sintout,
y_test_sintout)))
ax[1].set_title('data con outliers')
plt.show()
###Output
_____no_output_____
###Markdown
Comparing the two plots, the presence of outliers in the right-hand one is evident. This "exaggeration" of the atypical points will make the difference in performance between the two methods easier to see.
Next, ```pyplot.boxplot()``` is used to inspect the distribution of the data through its quartiles and to display the atypical values.
###Code
fig, ax = plt.subplots(2, 2, figsize=(20,14))
ax[0, 0].boxplot(np.concatenate((y_train_sint, y_test_sint)))
ax[0, 0].set_title('y values: sin outliers')
ax[0, 1].boxplot(np.concatenate((x_train_sint, x_test_sint)), vert=False)
ax[0, 1].set_title('X values')
fig.tight_layout(pad=2.0)#just to separate
ax[1, 0].boxplot(np.concatenate((y_train_sintout, y_test_sintout)))
ax[1, 0].set_title('y values: con outliers')
ax[1, 1].boxplot(np.concatenate((x_train_sintout, x_test_sintout)), vert=False)
ax[1, 1].set_title('X values')
plt.show()
###Output
_____no_output_____
###Markdown
The boxplots confirm that the atypical values of the second dataset are indeed far from the rest of the data, while the first dataset shows no atypical values. Note also that the *X* values appear to be uniformly distributed.

Real datasets
The real datasets used are:
* ```boston_housing```: imported from ```tensorflow.keras.datasets```, a dataset with moderate outliers.
* **metadata_casas_train.csv**: used in the second part of the assignment, a dataset with extreme outliers in the **'precio'** column.

For the Boston dataset, a neural network with 3 dense layers (2 hidden + 1 output) will be used. For the Challenge dataset, a neural network with 2 dense layers (1 hidden + 1 output) will be used.
Loss functions used:
* MSE loss function.
* Huber loss function.

The goal is to check whether the nature of the outliers makes any difference when solving a problem with a given loss function.
###Code
from tensorflow.keras.datasets import boston_housing
#BOSTON
(x_train, y_boston), (x_test, y_test) = boston_housing.load_data()
X = np.concatenate((x_train, x_test))
y = np.concatenate((y_boston, y_test))
X_train_boston, X_test_boston, y_train_boston, y_test_boston = procesar_data(X, y)
#CASAS CHALLENGE
df_train = pd.read_csv(path+'metadata_casas_train.csv',index_col=0)
df_houses = pd.read_csv(path+'metadata_casas_train.csv',index_col=0)
df_houses.zipcode = df_houses.zipcode.astype('category')
y_houses = pd.DataFrame(np.array(df_houses.pop('precio')).reshape(-1,1))
X_train_challenge, X_test_challenge, y_train_challenge, y_test_challenge = procesar_data(df_houses, y_houses)
fig, ax = plt.subplots(1, 2, figsize=(24,12))
ax[0].boxplot(np.concatenate((y_train_boston, y_test_boston)))
ax[0].set_title('y boston: outliers moderados')
ax[1].boxplot(np.concatenate((y_train_challenge, y_test_challenge)))
ax[1].set_title('y values: outliers extremos')
plt.show()
###Output
_____no_output_____
###Markdown
****
3. Evaluation of the models

MAE and MSE were chosen as evaluation metrics. Is it valid to evaluate the models with these metrics? A small experiment is presented below to show that the metrics are indeed valid, and that the loss function used to train a model **does not guarantee better performance when it coincides with one of the metrics**.
###Code
def pequeño_experimento(n_samples=1000, n_outliers=50):
X = np.linspace(0, 100, n_samples)
y = np.linspace(0, 100, n_samples) + np.random.normal(0, 5, n_samples)
np.random.seed(5)
arreglo_pos = np.random.randint(0, n_samples, n_outliers)
y[arreglo_pos] = y[arreglo_pos] + np.random.randint(50, 210, n_outliers)
return X, y
X_p, y_p = pequeño_experimento(400, 10)
x_train_sint_p, x_test_sint_p, y_train_sint_p, y_test_sint_p = procesar_data(X_p, y_p)
#PLOTEANDO
fig = plt.figure(figsize=(24,12))
plt.scatter(np.concatenate((x_train_sint_p,
x_test_sint_p)),
np.concatenate((y_train_sint_p,
y_test_sint_p)))
plt.scatter(x_train_sint_p,
y_train_sint_p)
plt.title('data')
plt.show()
def linearmodels(n_features, loss):
inputs = keras.Input(shape=(n_features, ), name='input_data')
outputs = layers.Dense(1, name='output')(inputs)
model = keras.Model(inputs=inputs, outputs = outputs, name='modelo')
model.compile(loss = loss,
optimizer=optimizers.Adam(lr=0.01),
metrics=['mean_absolute_error','mean_squared_error'],
)
return model
modelo_huber = linearmodels(x_train_sint_p.shape[1],mae_loss)
modelo_mse = linearmodels(x_train_sint_p.shape[1],mse_loss)
history = modelo_huber.fit(x_train_sint_p,y_train_sint_p,epochs=50,
verbose=0,)
history2 = modelo_mse.fit(x_train_sint_p,y_train_sint_p,epochs=50,
verbose=0,)
preds_huber = modelo_huber.predict(x_test_sint_p)
preds_mse = modelo_mse.predict(x_test_sint_p)
print('MSE con loss MAE: ', mse(preds_huber,y_test_sint_p))
print('MSE con loss MSE: ', mse(preds_mse,y_test_sint_p))
print('\nMAE con loss MAE: ', mae(preds_huber,y_test_sint_p))
print('MAE con loss MSE: ', mae(preds_mse,y_test_sint_p))
# hide_toggle()  # notebook display helper defined in an earlier cell, not shown in this extract
###Output
MSE con loss MAE: 0.041196428311516174
MSE con loss MSE: 0.04725872877278875
MAE con loss MAE: 0.11049072496476306
MAE con loss MSE: 0.13411814509041042
###Markdown
****
4. Experiments

Experiment pipeline
###Code
def one_iteration(loss, model_func, verbose, *data_args):
x_train,y_train,x_test,y_test = data_args
n_features = x_train.shape[1]
model = model_func(n_features, loss)
history = model.fit(
x = x_train,
y = y_train,
batch_size=32,
epochs=100,
verbose=0,
)
test_results = model.evaluate(x_test, y_test, verbose=0)
if not(verbose):
return test_results[1], test_results[2]
###Output
_____no_output_____
###Markdown
Experiment pipeline
###Code
def experiment(loss, model_func, plot, *data_args):
x_train,y_train,x_test,y_test = data_args
maes = np.array([one_iteration(loss, model_func, False, x_train, y_train, x_test, y_test) for i in range(100)])
if plot:
fig = plt.figure(figsize=(20, 5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.set_title('MAE')
ax2.set_title('MSE')
ax1.hist(maes[:,0])
ax2.hist(maes[:,1])
plt.show()
return maes
###Output
_____no_output_____
###Markdown
Synthetic Dataset
###Code
def linearmodels(n_features, loss):
    inputs = keras.Input(shape=(n_features, ), name='input_data')
outputs = layers.Dense(1, name='output')(inputs)
model = keras.Model(inputs=inputs, outputs = outputs, name='modelo')
model.compile(loss = loss,
optimizer=optimizers.Adam(lr=0.01),
metrics=['mean_absolute_error','mean_squared_error'],
)
return model
###Output
_____no_output_____
###Markdown
> Without outliers
###Code
sintetic_huber = experiment(Huber(1),
linearmodels,
True,
x_train_sint,
y_train_sint,
x_test_sint,
y_test_sint
)
sintetic_mse = experiment(mse_loss,
linearmodels,
True,
x_train_sint,
y_train_sint,
x_test_sint,
y_test_sint
)
# H0: mean_mse <= mean_huber
# H1: mean_mse > mean_huber
print("Resultado para MAE: ", stats.ttest_ind(sintetic_mse[:, 0], sintetic_huber[:, 0], alternative='greater'))
print("Resultado para MSE: ", stats.ttest_ind(sintetic_mse[:, 1], sintetic_huber[:, 1], alternative='greater'))
###Output
Resultado para MAE: Ttest_indResult(statistic=-0.8737023794784703, pvalue=0.8083306087635856)
Resultado para MSE: Ttest_indResult(statistic=-1.1105159126041166, pvalue=0.8659382187911061)
###Markdown
> With outliers
###Code
sintetic_outliers_huber = experiment(Huber(1), linearmodels, True,
x_train_sintout, y_train_sintout,
x_test_sintout, y_test_sintout)
sintetic_outliers_mse = experiment(mse_loss, linearmodels, True,
x_train_sintout, y_train_sintout,
x_test_sintout, y_test_sintout)
print("Resultados MAE: ", stats.ttest_ind(sintetic_outliers_mse[:, 0], sintetic_outliers_huber[:, 0], alternative='greater'))
print("Resultados MSE: ", stats.ttest_ind(sintetic_outliers_mse[:, 1], sintetic_outliers_huber[:, 1], alternative='greater'))
###Output
Resultados MAE: Ttest_indResult(statistic=17.488092543084342, pvalue=2.5262852393005206e-42)
Resultados MSE: Ttest_indResult(statistic=22.99484427063339, pvalue=4.095052397409369e-58)
###Markdown
Boston Housing Dataset
###Code
def boston_model(n_features,loss):
inputs = keras.Input(shape=(n_features, ), name='input_boston')
x = layers.Dense(16, activation='relu')(inputs)
x = layers.Dense(8, activation='relu')(x)
outputs = layers.Dense(1, name='Output')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='Modelo_boston')
model.compile(loss=loss,optimizer='adam',metrics=['mean_absolute_error','mean_squared_error'])
return model
###Output
_____no_output_____
###Markdown
Boston Housing Dataset
###Code
boston_huber = experiment(Huber(0.1), boston_model, True,
X_train_boston, y_train_boston,
X_test_boston, y_test_boston)
boston_mse = experiment(mse_loss, boston_model, True,
X_train_boston, y_train_boston,
X_test_boston, y_test_boston)
print("Resultados MAE: ", stats.ttest_ind(boston_mse[:, 0], boston_huber[:, 0], alternative='greater'))
print("Resultados MSE: ", stats.ttest_ind(boston_mse[:, 1], boston_huber[:, 1], alternative='greater'))
###Output
Resultados MAE: Ttest_indResult(statistic=3.944972662648769, pvalue=5.5377310138874965e-05)
Resultados MSE: Ttest_indResult(statistic=1.7419278573077739, pvalue=0.041536981277773985)
###Markdown
Challenge Dataset
###Code
def challenge_model(n_features,loss):
inputs = keras.Input(shape=(n_features, ), name='input_challenge')
x = layers.Dense(4, activation='relu')(inputs)
outputs = layers.Dense(1, name='Output')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='Modelo_challenge')
model.compile(loss=loss,optimizer='adam',metrics=['mean_absolute_error','mean_squared_error'])
return model
###Output
_____no_output_____
###Markdown
Challenge Dataset
###Code
challenge_huber = experiment(Huber(0.1), challenge_model, True,
X_train_challenge, y_train_challenge,
X_test_challenge, y_test_challenge)
challenge_mse = experiment(mse_loss, challenge_model, True,
X_train_challenge, y_train_challenge,
X_test_challenge, y_test_challenge)
print("Resultados MAE: ", stats.ttest_ind(challenge_mse[:, 0], challenge_huber[:, 0], alternative='greater'))
print("Resultados MSE: ", stats.ttest_ind(challenge_mse[:, 1], challenge_huber[:, 1], alternative='greater'))
###Output
Resultados MAE: Ttest_indResult(statistic=14.16762999543144, pvalue=3.180405821688636e-32)
Resultados MSE: Ttest_indResult(statistic=-4.246195086577404, pvalue=0.9999832903073754)
###Markdown
***
5. Extension to other robust loss functions

LogCosh Loss
\begin{equation}L(y, y^p) = \sum_{i=1}^n \log(\cosh(y_{i}^p - y_i))\end{equation}

Challenge Dataset
###Code
challenge_logcosh = experiment(LogCosh(), challenge_model, True,
X_train_challenge, y_train_challenge,
X_test_challenge, y_test_challenge)
challenge_mse = experiment(mse_loss, challenge_model, True,
X_train_challenge, y_train_challenge,
X_test_challenge, y_test_challenge)
print("Resultados MAE: ", stats.ttest_ind(challenge_mse[:, 0], challenge_logcosh[:, 0], alternative='greater'))
print("Resultados MSE: ", stats.ttest_ind(challenge_mse[:, 1], challenge_logcosh[:, 1], alternative='greater'))
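# Added numeric check: log-cosh behaves like 0.5*x^2 for small residuals and like
# |x| - log(2) for large ones, which is what makes it robust.
r = np.array([0.1, 1.0, 10.0])
print("log-cosh :", np.log(np.cosh(r)))
print("0.5*x^2  :", 0.5 * r ** 2)
print("|x|-log2 :", np.abs(r) - np.log(2.0))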
###Output
Resultados MAE: Ttest_indResult(statistic=12.267830600650663, pvalue=2.1055720093883013e-26)
Resultados MSE: Ttest_indResult(statistic=0.05287432502408132, pvalue=0.4789426744847888)
|
Project_2_SQL Route.ipynb | ###Markdown
ETL & Visualization Project
UV Exposure & Melanoma Rates Correlation in the United States

Extract: UV Exposure and Melanoma Data (csv)
###Code
# Dependencies and Setup
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import json
import pandas as pd
from sqlalchemy import create_engine
# from scipy.stats import linregress
# from scipy import stats
# import pingouin as pg # Install pingouin stats package (pip install pingouin)
# import seaborn as sns # Install seaborn data visualization library (pip install seaborn)
# from scipy.stats import pearsonr
yr_list= [2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013,
2014, 2015]
# Hide warning messages in notebook
import warnings
warnings.filterwarnings('ignore')
# File to Load
CDI_data_to_load = "CDI_data.csv"
# Read the Population Health Data
CDI_data_pd = pd.read_csv(CDI_data_to_load)
# Display the data table for preview
CDI_data_pd
# Extracting cancer data
topic_sorted_df = CDI_data_pd.groupby('Topic')
topic_sorted_df
cancer_df = topic_sorted_df.get_group('Cancer')
cancer_df
cancer_df = cancer_df.sort_values('LocationDesc')
cancer_df[[]]
new_cancer_df = cancer_df[['LocationAbbr','LocationDesc','Topic',
'Question','DataValueType','DataValue']].copy()
new_cancer_df
incidence_df = new_cancer_df.loc[new_cancer_df['Question'] == 'Invasive melanoma, incidence']
incidence_df
incidence_df = incidence_df.loc[incidence_df['DataValueType'] == 'Average Annual Number']
incidence_df.set_index('LocationAbbr', inplace=True)
# incidence_df.to_csv('incidence.csv')
incidence_df.head()
mortality_df = new_cancer_df.loc[new_cancer_df['Question'] == 'Melanoma, mortality']
mortality_df
mortality_df = mortality_df.loc[mortality_df['DataValueType'] == 'Average Annual Number']
mortality_df.set_index('LocationAbbr', inplace=True)
#mortality_df.to_csv('mortality.csv')
mortality_df.head()
# 2nd File to Load
UV_data_to_load = "UV_data.csv"
# Read the Population Health Data
UV_data_df = pd.read_csv(UV_data_to_load)
# # Display the data table for preview
UV_data_df = UV_data_df.groupby("STATENAME", as_index=False)["UV_ Wh/m²"].mean()
UV_data_df.set_index('STATENAME', inplace=True)
UV_data_df.to_csv('UV_data_post.csv')
UV_data_df.tail()
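# Rough sketch (an addition, not part of the original extract step): join the state-level UV
# averages with the melanoma incidence table on state name and look at their correlation.
merged = incidence_df.reset_index().merge(
    UV_data_df.reset_index(), left_on="LocationDesc", right_on="STATENAME", how="inner")
merged["DataValue"] = pd.to_numeric(merged["DataValue"], errors="coerce")
print(merged[["UV_ Wh/m²", "DataValue"]].corr())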
###Output
_____no_output_____
###Markdown
Load: Database (MongoDB)
###Code
# Dependencies
import pymongo
import pandas as pd
# Initialize PyMongo to work with MongoDBs
conn = 'mongodb://localhost:27017'
client = pymongo.MongoClient(conn)
###Output
_____no_output_____
###Markdown
Upload Clean Data to Database
1. Melanoma Incidence Data
###Code
# Define database and collection
db = client.uv_melanoma_db
collection = db.melanoma_incidence
# Convert the data frame of melanoma incidence data to dictionary
incidence_dict = incidence_df.to_dict("records")
incidence_dict
# Upload melanoma incidence data to MongoDB
for incidence_data in range(len(incidence_dict)):
collection.insert_one(incidence_dict[incidence_data])
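# Note: PyMongo can also bulk-load the whole list in a single call, e.g.
# collection.insert_many(incidence_dict)
# (left commented out so the records are not inserted twice)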
# Display the MongoDB records created above
melanoma_incidence_records = db.melanoma_incidence.find()
for melanoma_incidence_record in melanoma_incidence_records:
print(melanoma_incidence_record)
###Output
{'_id': ObjectId('5e811abec80bd52203159fd8'), 'LocationAbbr': 'AL', 'LocationDesc': 'Alabama', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1128'}
{'_id': ObjectId('5e811abec80bd52203159fd9'), 'LocationAbbr': 'AK', 'LocationDesc': 'Alaska', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '81'}
{'_id': ObjectId('5e811abec80bd52203159fda'), 'LocationAbbr': 'AZ', 'LocationDesc': 'Arizona', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1135'}
{'_id': ObjectId('5e811abec80bd52203159fdb'), 'LocationAbbr': 'AR', 'LocationDesc': 'Arkansas', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '534'}
{'_id': ObjectId('5e811abec80bd52203159fdc'), 'LocationAbbr': 'CA', 'LocationDesc': 'California', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '7740'}
{'_id': ObjectId('5e811abec80bd52203159fdd'), 'LocationAbbr': 'CO', 'LocationDesc': 'Colorado', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1113'}
{'_id': ObjectId('5e811abec80bd52203159fde'), 'LocationAbbr': 'CT', 'LocationDesc': 'Connecticut', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '902'}
{'_id': ObjectId('5e811abec80bd52203159fdf'), 'LocationAbbr': 'DE', 'LocationDesc': 'Delaware', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '293'}
{'_id': ObjectId('5e811abec80bd52203159fe0'), 'LocationAbbr': 'DC', 'LocationDesc': 'District of Columbia', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '49'}
{'_id': ObjectId('5e811abec80bd52203159fe1'), 'LocationAbbr': 'FL', 'LocationDesc': 'Florida', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '4692'}
{'_id': ObjectId('5e811abec80bd52203159fe2'), 'LocationAbbr': 'GA', 'LocationDesc': 'Georgia', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '2194'}
{'_id': ObjectId('5e811abec80bd52203159fe3'), 'LocationAbbr': 'HI', 'LocationDesc': 'Hawaii', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '316'}
{'_id': ObjectId('5e811abec80bd52203159fe4'), 'LocationAbbr': 'ID', 'LocationDesc': 'Idaho', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '405'}
{'_id': ObjectId('5e811abec80bd52203159fe5'), 'LocationAbbr': 'IL', 'LocationDesc': 'Illinois', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '2399'}
{'_id': ObjectId('5e811abec80bd52203159fe6'), 'LocationAbbr': 'IN', 'LocationDesc': 'Indiana', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1223'}
{'_id': ObjectId('5e811abec80bd52203159fe7'), 'LocationAbbr': 'IA', 'LocationDesc': 'Iowa', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '798'}
{'_id': ObjectId('5e811abec80bd52203159fe8'), 'LocationAbbr': 'KS', 'LocationDesc': 'Kansas', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '712'}
{'_id': ObjectId('5e811abec80bd52203159fe9'), 'LocationAbbr': 'KY', 'LocationDesc': 'Kentucky', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1130'}
{'_id': ObjectId('5e811abec80bd52203159fea'), 'LocationAbbr': 'LA', 'LocationDesc': 'Louisiana', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '714'}
{'_id': ObjectId('5e811abec80bd52203159feb'), 'LocationAbbr': 'ME', 'LocationDesc': 'Maine', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '370'}
{'_id': ObjectId('5e811abec80bd52203159fec'), 'LocationAbbr': 'MD', 'LocationDesc': 'Maryland', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1272'}
{'_id': ObjectId('5e811abec80bd52203159fed'), 'LocationAbbr': 'MA', 'LocationDesc': 'Massachusetts', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1590'}
{'_id': ObjectId('5e811abec80bd52203159fee'), 'LocationAbbr': 'MI', 'LocationDesc': 'Michigan', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '2053'}
{'_id': ObjectId('5e811abec80bd52203159fef'), 'LocationAbbr': 'MN', 'LocationDesc': 'Minnesota', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1530'}
{'_id': ObjectId('5e811abec80bd52203159ff0'), 'LocationAbbr': 'MS', 'LocationDesc': 'Mississippi', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '538'}
{'_id': ObjectId('5e811abec80bd52203159ff1'), 'LocationAbbr': 'MO', 'LocationDesc': 'Missouri', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1177'}
{'_id': ObjectId('5e811abec80bd52203159ff2'), 'LocationAbbr': 'MT', 'LocationDesc': 'Montana', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '268'}
{'_id': ObjectId('5e811abec80bd52203159ff3'), 'LocationAbbr': 'NE', 'LocationDesc': 'Nebraska', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '356'}
{'_id': ObjectId('5e811abec80bd52203159ff4'), 'LocationAbbr': 'NV', 'LocationDesc': 'Nevada', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': nan}
{'_id': ObjectId('5e811abec80bd52203159ff5'), 'LocationAbbr': 'NH', 'LocationDesc': 'New Hampshire', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '400'}
{'_id': ObjectId('5e811abec80bd52203159ff6'), 'LocationAbbr': 'NJ', 'LocationDesc': 'New Jersey', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '2093'}
{'_id': ObjectId('5e811abec80bd52203159ff7'), 'LocationAbbr': 'NM', 'LocationDesc': 'New Mexico', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '406'}
{'_id': ObjectId('5e811abec80bd52203159ff8'), 'LocationAbbr': 'NY', 'LocationDesc': 'New York', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '3717'}
{'_id': ObjectId('5e811abec80bd52203159ff9'), 'LocationAbbr': 'NC', 'LocationDesc': 'North Carolina', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '2302'}
{'_id': ObjectId('5e811abec80bd52203159ffa'), 'LocationAbbr': 'ND', 'LocationDesc': 'North Dakota', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '158'}
{'_id': ObjectId('5e811abec80bd52203159ffb'), 'LocationAbbr': 'OH', 'LocationDesc': 'Ohio', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '2491'}
{'_id': ObjectId('5e811abec80bd52203159ffc'), 'LocationAbbr': 'OK', 'LocationDesc': 'Oklahoma', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '654'}
{'_id': ObjectId('5e811abec80bd52203159ffd'), 'LocationAbbr': 'OR', 'LocationDesc': 'Oregon', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1140'}
{'_id': ObjectId('5e811abec80bd52203159ffe'), 'LocationAbbr': 'PA', 'LocationDesc': 'Pennsylvania', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '3045'}
{'_id': ObjectId('5e811abec80bd52203159fff'), 'LocationAbbr': 'RI', 'LocationDesc': 'Rhode Island', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '252'}
{'_id': ObjectId('5e811abec80bd5220315a000'), 'LocationAbbr': 'SC', 'LocationDesc': 'South Carolina', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1144'}
{'_id': ObjectId('5e811abec80bd5220315a001'), 'LocationAbbr': 'SD', 'LocationDesc': 'South Dakota', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '167'}
{'_id': ObjectId('5e811abec80bd5220315a002'), 'LocationAbbr': 'TN', 'LocationDesc': 'Tennessee', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1431'}
{'_id': ObjectId('5e811abec80bd5220315a003'), 'LocationAbbr': 'TX', 'LocationDesc': 'Texas', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '3025'}
{'_id': ObjectId('5e811abec80bd5220315a004'), 'LocationAbbr': 'US', 'LocationDesc': 'United States', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '65719'}
{'_id': ObjectId('5e811abec80bd5220315a005'), 'LocationAbbr': 'UT', 'LocationDesc': 'Utah', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '742'}
{'_id': ObjectId('5e811abec80bd5220315a006'), 'LocationAbbr': 'VT', 'LocationDesc': 'Vermont', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '215'}
{'_id': ObjectId('5e811abec80bd5220315a007'), 'LocationAbbr': 'VA', 'LocationDesc': 'Virginia', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1535'}
{'_id': ObjectId('5e811abec80bd5220315a008'), 'LocationAbbr': 'WA', 'LocationDesc': 'Washington', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1821'}
{'_id': ObjectId('5e811abec80bd5220315a009'), 'LocationAbbr': 'WV', 'LocationDesc': 'West Virginia', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '464'}
{'_id': ObjectId('5e811abec80bd5220315a00a'), 'LocationAbbr': 'WI', 'LocationDesc': 'Wisconsin', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '1284'}
{'_id': ObjectId('5e811abec80bd5220315a00b'), 'LocationAbbr': 'WY', 'LocationDesc': 'Wyoming', 'Topic': 'Cancer', 'Question': 'Invasive melanoma, incidence', 'DataValueType': 'Average Annual Number', 'DataValue': '135'}
###Markdown
2. Melanoma Mortality Data
###Code
# Define database and collection
db = client.uv_melanoma_db
collection = db.melanoma_mortality
# Convert the data frame of melanoma mortality data to dictionary
mortality_dict = mortality_df.to_dict("records")
mortality_dict
# Upload melanoma mortality data to MongoDB
for mortality_data in range(len(mortality_dict)):
collection.insert_one(mortality_dict[mortality_data])
# Display the MongoDB records created above
melanoma_mortality_records = db.melanoma_mortality.find()
for melanoma_mortality_record in melanoma_mortality_records:
print(melanoma_mortality_record)
###Output
{'_id': ObjectId('5e811accc80bd5220315a00c'), 'LocationAbbr': 'AL', 'LocationDesc': 'Alabama', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '151'}
{'_id': ObjectId('5e811accc80bd5220315a00d'), 'LocationAbbr': 'AK', 'LocationDesc': 'Alaska', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '12'}
{'_id': ObjectId('5e811accc80bd5220315a00e'), 'LocationAbbr': 'AZ', 'LocationDesc': 'Arizona', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '203'}
{'_id': ObjectId('5e811accc80bd5220315a00f'), 'LocationAbbr': 'AR', 'LocationDesc': 'Arkansas', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '93'}
{'_id': ObjectId('5e811accc80bd5220315a010'), 'LocationAbbr': 'CA', 'LocationDesc': 'California', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '943'}
{'_id': ObjectId('5e811accc80bd5220315a011'), 'LocationAbbr': 'CO', 'LocationDesc': 'Colorado', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '161'}
{'_id': ObjectId('5e811accc80bd5220315a012'), 'LocationAbbr': 'CT', 'LocationDesc': 'Connecticut', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '106'}
{'_id': ObjectId('5e811accc80bd5220315a013'), 'LocationAbbr': 'DE', 'LocationDesc': 'Delaware', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '29'}
{'_id': ObjectId('5e811accc80bd5220315a014'), 'LocationAbbr': 'DC', 'LocationDesc': 'District of Columbia', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '7'}
{'_id': ObjectId('5e811accc80bd5220315a015'), 'LocationAbbr': 'FL', 'LocationDesc': 'Florida', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '728'}
{'_id': ObjectId('5e811accc80bd5220315a016'), 'LocationAbbr': 'GA', 'LocationDesc': 'Georgia', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '217'}
{'_id': ObjectId('5e811accc80bd5220315a017'), 'LocationAbbr': 'HI', 'LocationDesc': 'Hawaii', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '24'}
{'_id': ObjectId('5e811accc80bd5220315a018'), 'LocationAbbr': 'ID', 'LocationDesc': 'Idaho', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '55'}
{'_id': ObjectId('5e811accc80bd5220315a019'), 'LocationAbbr': 'IL', 'LocationDesc': 'Illinois', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '328'}
{'_id': ObjectId('5e811accc80bd5220315a01a'), 'LocationAbbr': 'IN', 'LocationDesc': 'Indiana', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '215'}
{'_id': ObjectId('5e811accc80bd5220315a01b'), 'LocationAbbr': 'IA', 'LocationDesc': 'Iowa', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '106'}
{'_id': ObjectId('5e811accc80bd5220315a01c'), 'LocationAbbr': 'KS', 'LocationDesc': 'Kansas', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '93'}
{'_id': ObjectId('5e811accc80bd5220315a01d'), 'LocationAbbr': 'KY', 'LocationDesc': 'Kentucky', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '151'}
{'_id': ObjectId('5e811accc80bd5220315a01e'), 'LocationAbbr': 'LA', 'LocationDesc': 'Louisiana', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '105'}
{'_id': ObjectId('5e811accc80bd5220315a01f'), 'LocationAbbr': 'ME', 'LocationDesc': 'Maine', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '51'}
{'_id': ObjectId('5e811accc80bd5220315a020'), 'LocationAbbr': 'MD', 'LocationDesc': 'Maryland', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '160'}
{'_id': ObjectId('5e811accc80bd5220315a021'), 'LocationAbbr': 'MA', 'LocationDesc': 'Massachusetts', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '219'}
{'_id': ObjectId('5e811accc80bd5220315a022'), 'LocationAbbr': 'MI', 'LocationDesc': 'Michigan', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '275'}
{'_id': ObjectId('5e811accc80bd5220315a023'), 'LocationAbbr': 'MN', 'LocationDesc': 'Minnesota', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '144'}
{'_id': ObjectId('5e811accc80bd5220315a024'), 'LocationAbbr': 'MS', 'LocationDesc': 'Mississippi', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '71'}
{'_id': ObjectId('5e811accc80bd5220315a025'), 'LocationAbbr': 'MO', 'LocationDesc': 'Missouri', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '208'}
{'_id': ObjectId('5e811accc80bd5220315a026'), 'LocationAbbr': 'MT', 'LocationDesc': 'Montana', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '36'}
{'_id': ObjectId('5e811accc80bd5220315a027'), 'LocationAbbr': 'NE', 'LocationDesc': 'Nebraska', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '63'}
{'_id': ObjectId('5e811accc80bd5220315a028'), 'LocationAbbr': 'NV', 'LocationDesc': 'Nevada', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '80'}
{'_id': ObjectId('5e811accc80bd5220315a029'), 'LocationAbbr': 'NH', 'LocationDesc': 'New Hampshire', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '42'}
{'_id': ObjectId('5e811accc80bd5220315a02a'), 'LocationAbbr': 'NJ', 'LocationDesc': 'New Jersey', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '245'}
{'_id': ObjectId('5e811accc80bd5220315a02b'), 'LocationAbbr': 'NM', 'LocationDesc': 'New Mexico', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '60'}
{'_id': ObjectId('5e811accc80bd5220315a02c'), 'LocationAbbr': 'NY', 'LocationDesc': 'New York', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '484'}
{'_id': ObjectId('5e811accc80bd5220315a02d'), 'LocationAbbr': 'NC', 'LocationDesc': 'North Carolina', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '296'}
{'_id': ObjectId('5e811accc80bd5220315a02e'), 'LocationAbbr': 'ND', 'LocationDesc': 'North Dakota', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '17'}
{'_id': ObjectId('5e811accc80bd5220315a02f'), 'LocationAbbr': 'OH', 'LocationDesc': 'Ohio', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '391'}
{'_id': ObjectId('5e811accc80bd5220315a030'), 'LocationAbbr': 'OK', 'LocationDesc': 'Oklahoma', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '130'}
{'_id': ObjectId('5e811accc80bd5220315a031'), 'LocationAbbr': 'OR', 'LocationDesc': 'Oregon', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '147'}
{'_id': ObjectId('5e811accc80bd5220315a032'), 'LocationAbbr': 'PA', 'LocationDesc': 'Pennsylvania', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '459'}
{'_id': ObjectId('5e811accc80bd5220315a033'), 'LocationAbbr': 'RI', 'LocationDesc': 'Rhode Island', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '32'}
{'_id': ObjectId('5e811accc80bd5220315a034'), 'LocationAbbr': 'SC', 'LocationDesc': 'South Carolina', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '135'}
{'_id': ObjectId('5e811accc80bd5220315a035'), 'LocationAbbr': 'SD', 'LocationDesc': 'South Dakota', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '22'}
{'_id': ObjectId('5e811accc80bd5220315a036'), 'LocationAbbr': 'TN', 'LocationDesc': 'Tennessee', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '213'}
{'_id': ObjectId('5e811accc80bd5220315a037'), 'LocationAbbr': 'TX', 'LocationDesc': 'Texas', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '544'}
{'_id': ObjectId('5e811accc80bd5220315a038'), 'LocationAbbr': 'US', 'LocationDesc': 'United States', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '9071'}
{'_id': ObjectId('5e811accc80bd5220315a039'), 'LocationAbbr': 'UT', 'LocationDesc': 'Utah', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '79'}
{'_id': ObjectId('5e811accc80bd5220315a03a'), 'LocationAbbr': 'VT', 'LocationDesc': 'Vermont', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '23'}
{'_id': ObjectId('5e811accc80bd5220315a03b'), 'LocationAbbr': 'VA', 'LocationDesc': 'Virginia', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '239'}
{'_id': ObjectId('5e811accc80bd5220315a03c'), 'LocationAbbr': 'WA', 'LocationDesc': 'Washington', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '209'}
{'_id': ObjectId('5e811accc80bd5220315a03d'), 'LocationAbbr': 'WV', 'LocationDesc': 'West Virginia', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '76'}
{'_id': ObjectId('5e811accc80bd5220315a03e'), 'LocationAbbr': 'WI', 'LocationDesc': 'Wisconsin', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '176'}
{'_id': ObjectId('5e811accc80bd5220315a03f'), 'LocationAbbr': 'WY', 'LocationDesc': 'Wyoming', 'Topic': 'Cancer', 'Question': 'Melanoma, mortality', 'DataValueType': 'Average Annual Number', 'DataValue': '18'}
###Markdown
3. UV Exposure Data
###Code
# Define database and collection
db = client.uv_melanoma_db
collection = db.uv
# Convert the data frame of UV exposure data to dictionary
UV_dict = UV_data_df.to_dict("records")
UV_dict
# Upload UV exposure data to MongoDB
for UV_data in range(len(UV_dict)):
collection.insert_one(UV_dict[UV_data])
# Display the MongoDB records created above
UV_records = db.uv.find()
for UV_record in UV_records:
print(UV_record)
###Output
{'_id': ObjectId('5e811ad7c80bd5220315a040'), 'STATENAME': 'Alabama', 'UV_ Wh/m²': 4505.164179104478}
{'_id': ObjectId('5e811ad7c80bd5220315a041'), 'STATENAME': 'Arizona', 'UV_ Wh/m²': 5528.466666666666}
{'_id': ObjectId('5e811ad7c80bd5220315a042'), 'STATENAME': 'Arkansas', 'UV_ Wh/m²': 4515.346666666666}
{'_id': ObjectId('5e811ad7c80bd5220315a043'), 'STATENAME': 'California', 'UV_ Wh/m²': 4871.413793103448}
{'_id': ObjectId('5e811ad7c80bd5220315a044'), 'STATENAME': 'Colorado', 'UV_ Wh/m²': 4802.730158730159}
{'_id': ObjectId('5e811ad7c80bd5220315a045'), 'STATENAME': 'Connecticut', 'UV_ Wh/m²': 3832.5}
{'_id': ObjectId('5e811ad7c80bd5220315a046'), 'STATENAME': 'Delaware', 'UV_ Wh/m²': 4074.0}
{'_id': ObjectId('5e811ad7c80bd5220315a047'), 'STATENAME': 'District of Columbia', 'UV_ Wh/m²': 4100.0}
{'_id': ObjectId('5e811ad7c80bd5220315a048'), 'STATENAME': 'Florida', 'UV_ Wh/m²': 4743.671641791045}
{'_id': ObjectId('5e811ad7c80bd5220315a049'), 'STATENAME': 'Georgia', 'UV_ Wh/m²': 4563.974842767296}
{'_id': ObjectId('5e811ad7c80bd5220315a04a'), 'STATENAME': 'Idaho', 'UV_ Wh/m²': 4170.545454545455}
{'_id': ObjectId('5e811ad7c80bd5220315a04b'), 'STATENAME': 'Illinois', 'UV_ Wh/m²': 4117.4607843137255}
{'_id': ObjectId('5e811ad7c80bd5220315a04c'), 'STATENAME': 'Indiana', 'UV_ Wh/m²': 4019.3804347826085}
{'_id': ObjectId('5e811ad7c80bd5220315a04d'), 'STATENAME': 'Iowa', 'UV_ Wh/m²': 4053.5454545454545}
{'_id': ObjectId('5e811ad7c80bd5220315a04e'), 'STATENAME': 'Kansas', 'UV_ Wh/m²': 4572.047619047619}
{'_id': ObjectId('5e811ad7c80bd5220315a04f'), 'STATENAME': 'Kentucky', 'UV_ Wh/m²': 4113.825}
{'_id': ObjectId('5e811ad7c80bd5220315a050'), 'STATENAME': 'Louisiana', 'UV_ Wh/m²': 4557.875}
{'_id': ObjectId('5e811ad7c80bd5220315a051'), 'STATENAME': 'Maine', 'UV_ Wh/m²': 3779.375}
{'_id': ObjectId('5e811ad7c80bd5220315a052'), 'STATENAME': 'Maryland', 'UV_ Wh/m²': 4057.1666666666665}
{'_id': ObjectId('5e811ad7c80bd5220315a053'), 'STATENAME': 'Massachusetts', 'UV_ Wh/m²': 3874.785714285714}
{'_id': ObjectId('5e811ad7c80bd5220315a054'), 'STATENAME': 'Michigan', 'UV_ Wh/m²': 3715.867469879518}
{'_id': ObjectId('5e811ad7c80bd5220315a055'), 'STATENAME': 'Minnesota', 'UV_ Wh/m²': 3841.022988505747}
{'_id': ObjectId('5e811ad7c80bd5220315a056'), 'STATENAME': 'Mississippi', 'UV_ Wh/m²': 4518.817073170731}
{'_id': ObjectId('5e811ad7c80bd5220315a057'), 'STATENAME': 'Missouri', 'UV_ Wh/m²': 4308.339130434782}
{'_id': ObjectId('5e811ad7c80bd5220315a058'), 'STATENAME': 'Montana', 'UV_ Wh/m²': 3927.964285714286}
{'_id': ObjectId('5e811ad7c80bd5220315a059'), 'STATENAME': 'Nebraska', 'UV_ Wh/m²': 4350.763440860215}
{'_id': ObjectId('5e811ad7c80bd5220315a05a'), 'STATENAME': 'Nevada', 'UV_ Wh/m²': 4977.0}
{'_id': ObjectId('5e811ad7c80bd5220315a05b'), 'STATENAME': 'New Hampshire', 'UV_ Wh/m²': 3841.5}
{'_id': ObjectId('5e811ad7c80bd5220315a05c'), 'STATENAME': 'New Jersey', 'UV_ Wh/m²': 3951.095238095238}
{'_id': ObjectId('5e811ad7c80bd5220315a05d'), 'STATENAME': 'New Mexico', 'UV_ Wh/m²': 5439.69696969697}
{'_id': ObjectId('5e811ad7c80bd5220315a05e'), 'STATENAME': 'New York', 'UV_ Wh/m²': 3745.1290322580644}
{'_id': ObjectId('5e811ad7c80bd5220315a05f'), 'STATENAME': 'North Carolina', 'UV_ Wh/m²': 4352.17}
{'_id': ObjectId('5e811ad7c80bd5220315a060'), 'STATENAME': 'North Dakota', 'UV_ Wh/m²': 3902.1132075471696}
{'_id': ObjectId('5e811ad7c80bd5220315a061'), 'STATENAME': 'Ohio', 'UV_ Wh/m²': 3842.056818181818}
{'_id': ObjectId('5e811ad7c80bd5220315a062'), 'STATENAME': 'Oklahoma', 'UV_ Wh/m²': 4714.7532467532465}
{'_id': ObjectId('5e811ad7c80bd5220315a063'), 'STATENAME': 'Oregon', 'UV_ Wh/m²': 3977.777777777778}
{'_id': ObjectId('5e811ad7c80bd5220315a064'), 'STATENAME': 'Pennsylvania', 'UV_ Wh/m²': 3809.208955223881}
{'_id': ObjectId('5e811ad7c80bd5220315a065'), 'STATENAME': 'Rhode Island', 'UV_ Wh/m²': 3881.4}
{'_id': ObjectId('5e811ad7c80bd5220315a066'), 'STATENAME': 'South Carolina', 'UV_ Wh/m²': 4525.021739130435}
{'_id': ObjectId('5e811ad7c80bd5220315a067'), 'STATENAME': 'South Dakota', 'UV_ Wh/m²': 4111.742424242424}
{'_id': ObjectId('5e811ad7c80bd5220315a068'), 'STATENAME': 'Tennessee', 'UV_ Wh/m²': 4278.578947368421}
{'_id': ObjectId('5e811ad7c80bd5220315a069'), 'STATENAME': 'Texas', 'UV_ Wh/m²': 4917.952755905511}
{'_id': ObjectId('5e811ad7c80bd5220315a06a'), 'STATENAME': 'Utah', 'UV_ Wh/m²': 4737.551724137931}
{'_id': ObjectId('5e811ad7c80bd5220315a06b'), 'STATENAME': 'Vermont', 'UV_ Wh/m²': 3744.0714285714284}
{'_id': ObjectId('5e811ad7c80bd5220315a06c'), 'STATENAME': 'Virginia', 'UV_ Wh/m²': 4181.985074626866}
{'_id': ObjectId('5e811ad7c80bd5220315a06d'), 'STATENAME': 'Washington', 'UV_ Wh/m²': 3594.102564102564}
{'_id': ObjectId('5e811ad7c80bd5220315a06e'), 'STATENAME': 'West Virginia', 'UV_ Wh/m²': 3892.3636363636365}
{'_id': ObjectId('5e811ad7c80bd5220315a06f'), 'STATENAME': 'Wisconsin', 'UV_ Wh/m²': 3810.7083333333335}
{'_id': ObjectId('5e811ad7c80bd5220315a070'), 'STATENAME': 'Wyoming', 'UV_ Wh/m²': 4350.521739130435}
###Markdown
PLAN: - Dropdown for each state - Map showing the UV exposure and layers for incidence and mortality
###Code
# CLEANING WITH PANDAS - DONE
# MONGODB - DONE
# FLASK APP
# VISUALIZATIONS (JS)
# WEB DEPLOYMENT
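# Minimal sketch of the planned Flask piece (the endpoint path and the collection name
# "uv_state_avg" are assumptions for illustration, not the project's actual code):
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
mongo = MongoClient("mongodb://localhost:27017")

@app.route("/api/uv")
def uv_by_state():
    # Serve the per-state UV averages (printed above) as JSON for the dropdown/map layers
    return jsonify(list(mongo["melanoma_db"]["uv_state_avg"].find({}, {"_id": 0})))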
%load_ext sql
DB_ENDPOINT = "localhost"
DB = 'melanoma_db'
DB_USER = 'postgres'
DB_PASSWORD = [REDACTED]
DB_PORT = '5432'
# postgresql://username:password@host:port/database
conn_string = "postgresql://{}:{}@{}:{}/{}" \
.format(DB_USER, DB_PASSWORD, DB_ENDPOINT, DB_PORT, DB)
print(conn_string)
%sql $conn_string
from sqlalchemy import create_engine  # needed for create_engine below
rds_connection_string = "postgres:password@localhost:5432/melanoma_db"
engine = create_engine(f'postgresql://{rds_connection_string}')
engine.table_names()
pd.read_sql_query('select * from uv', con=engine)
###Output
_____no_output_____ |
hpds-03-Code.ipynb | ###Markdown
 High Performance Data Science (HPDS) https://taudata.blogspot.com HPDS-03: Introduction to Parallel Programming in Python https://taudata.blogspot.com/2022/04/hpds-03.html (C) Taufik Sutanto
###Code
# -*- coding: utf-8 -*-
import os
import time
import threading
import multiprocessing
NUM_WORKERS = 4
def only_sleep():
""" Do nothing, wait for a timer to expire """
print("PID: %s, Process Name: %s, Thread Name: %s" % (
os.getpid(),
multiprocessing.current_process().name,
threading.current_thread().name)
)
time.sleep(1)
def crunch_numbers():
""" Do some computations """
print("PID: %s, Process Name: %s, Thread Name: %s" % (
os.getpid(),
multiprocessing.current_process().name,
threading.current_thread().name)
)
x = 0
while x < 10000000:
x += 1
if __name__ == '__main__':
## Run tasks serially
start_time = time.time()
for _ in range(NUM_WORKERS):
only_sleep()
end_time = time.time()
print("Serial time=", end_time - start_time)
# Run tasks using threads
start_time = time.time()
threads = [threading.Thread(target=only_sleep) for _ in range(NUM_WORKERS)]
[thread.start() for thread in threads]
[thread.join() for thread in threads]
end_time = time.time()
print("Threads time=", end_time - start_time)
# Run tasks using processes
start_time = time.time()
    # Pass the function itself (only_sleep), not its return value; calling it here would run
    # the task in the parent process before the worker processes are even created.
    processes = [multiprocessing.Process(target=only_sleep) for _ in range(NUM_WORKERS)]
[process.start() for process in processes]
[process.join() for process in processes]
end_time = time.time()
print("Parallel time=", end_time - start_time)
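# Expected pattern on CPython: for the sleep-based task used here, both threads and processes
# finish in roughly 1 second instead of 4, because sleeping releases the GIL. If the CPU-bound
# crunch_numbers task were used instead, threads would give little or no speed-up (GIL), while
# separate processes would still run truly in parallel.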
import multiprocessing as mp
def f(x):
return x*x
if __name__ == '__main__':
print('Number of currently available processor = ', mp.cpu_count())
input_ = [1, 2, 3, 4, 5, 7, 9, 10]
print('input = ', input_)
with mp.Pool(5) as p:
print(p.map(f, input_))
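# Note: the __main__ guard above matters on platforms that spawn a fresh interpreter for each
# pool worker (e.g. Windows); without it, every worker would re-import and re-run this module.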
###Output
_____no_output_____
###Markdown
 An example where Pool is a better fit.
###Code
import multiprocessing
import numpy as np
import time
def pungsi(N):
s = 0.0
for i in range(1,N):
s += np.log(i)
return s
if __name__ == '__main__':
inputs = [10**6] * 20
    print('Plain serial/sequential programming:')
    mulai = time.time()
    outputs = [pungsi(x) for x in inputs]
    akhir = time.time()
    print("Average output: {}".format(np.mean(outputs)))
    print("Serial time: {}".format(akhir-mulai))
print('Parallel Programming:')
mulai = time.time()
pool = multiprocessing.Pool(processes=8)
outputs = pool.map(pungsi, inputs)
akhir = time.time()
#print("Input: {}".format(inputs))
    print("Average output: {}".format(np.mean(outputs)))
    print("Parallel time: {}".format(akhir-mulai))
import multiprocessing as mp
def square(x):
return x * x
if __name__ == '__main__':
inputs = [0,1,2,3,4,5,6,7,8]
print('Sync Parallel Processing')
pool = mp.Pool()
outputs = pool.map(square, inputs)
print("Input: {}".format(inputs))
print("Output: {} \n".format(outputs))
pool.close(); del pool
print('Async Parallel Processing')
pool = mp.Pool()
outputs_async = pool.map_async(square, inputs)
outputs = outputs_async.get()
print("Input: {}".format(inputs))
print("Output: {}".format(outputs))
###Output
_____no_output_____
###Markdown
 Remember that we can manually assign any function to each process if needed.
###Code
import multiprocessing
import os
import time
import threading
class ProsesA(multiprocessing.Process):
def __init__(self, id):
super(ProsesA, self).__init__()
self.id = id
def run(self):
time.sleep(1)
print("PID: %s, Process ID: %s, Process Name: %s, Thread Name: %s" % (
os.getpid(), self.id,
multiprocessing.current_process().name,
threading.current_thread().name))
class ProsesB(multiprocessing.Process):
def __init__(self, id):
super(ProsesB, self).__init__()
self.id = id
def run(self):
time.sleep(1)
print("PID: %s, Process ID: %s, Process Name: %s, Thread Name: %s" % (
os.getpid(), self.id,
multiprocessing.current_process().name,
threading.current_thread().name))
if __name__ == '__main__':
p1 = ProsesA(0)
p1.start()
p2 = ProsesB(1)
p2.start()
p1.join(); p2.join()
###Output
_____no_output_____
###Markdown
 Parallel Programming with Multivariate Functions: StarMap
###Code
import multiprocessing as mp
def f_sum(a, b):
return a + b
if __name__ == '__main__':
process_pool = mp.Pool(4)
data = [(1, 1), (2, 1), (3, 1), (6, 9)]
output = process_pool.starmap(f_sum, data)
print("input = ", data)
print("output = ", output)
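    # Async variant (sketch): starmap_async returns immediately and the result is collected
    # later with .get(), which is handy when the parent process has other work to do meanwhile.
    async_result = process_pool.starmap_async(f_sum, data)
    print("async output = ", async_result.get())
    process_pool.close()
    process_pool.join()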
###Output
_____no_output_____ |
materials/_build/html/_sources/materials/lectures/07_lecture-pypi-cran-and-pkg-review.ipynb | ###Markdown
Lecture 7: Peer review of packages, and the package repositories/indices CRAN and PyPI Learning objectives:By the end of this lecture, students should be able to:- Explain the advantage of using of packages that have undergone peer review- List the rOpenSci and PyOpenSci organizations aims and goals- Describe the peer review process used by the rOpenSci and PyOpenSci organizations- Describe the requirements for publishing packages on CRAN and PyPI- Explain the philosophical difference between how CRAN and PyPI gatekeep pacakges, and how this impacts the packages that are found on each repository/index [rOpenSci](https://ropensci.org/) aims and goals:rOpenSci fosters a culture that values open and reproducible research using shared data and reusable software. We do this by:- Creating technical infrastructure in the form of carefully vetted, staff- and community-contributed R software tools that lower barriers to working with scientific data sources on the web - Creating social infrastructure through a welcoming and diverse community - Making the right data, tools and best practices more discoverable - Building capacity of software users and developers and fostering a sense of pride in their work - Promoting advocacy for a culture of data sharing and reusable software.*Source: * rOpenSci's open peer review process - Authors submit complete R packages to rOpenSci. - Editors check that packages fit into rOpenSci's scope, run a series of automated tests to ensure a baseline of code quality and completeness, and then assign two independent reviewers. - Reviewers comment on usability, quality, and style of software code as well as documentation. - Authors make changes in response. - Once reviewers are satisfied with the updates, the package receives a badge of approval and joins rOpenSci's suite of approved pacakges. - Happens openly, and publicly on GitHub in issues. - Process is quite iterative and fast. After reviewers post a first round of extensive reviews, authors and reviewers chat in an informal back-and-forth, only lightly moderated by an editor. *Source: * rOpenSci's Guidance and StandardsWhat aspects of a package are reviewed? - high-level best practices: - is the code reusable (e.g. follow the DRY principle)? - are sufficient edge cases tested? - etc - low-level standards: - are naming conventions for functions followed? - did they make the best choices of dependencies for the package's intended tasks? - etc *Source: * rOpenSci's Review Guidebook- rOpenSci-reviewed packages:- Let's look at an rOpenSci review!All packages currently under review: - [Review of tidypmc](https://github.com/ropensci/software-review/issues/290) What do you get for having your package reviewed by rOpenSci?- valuable feedback from the knowledgeable editors and reviewers- help with package maintenance and submission of your package to CRAN- promotion of your package on their website, blog and social media- packages that have a short accompanying paper can be automatically submitted to [JOSS](https://joss.theoj.org/) and fast-tracked for publication. [pyOpenSci](https://www.pyopensci.org/)- A new organization, modelled after rOpenSci- scope is Python packages- First package submitted to pyOpenSci was in May 2019 pyOpenSci's Review Guidebook- Practice peer review:- MDS Open peer review: - Last year's cohort: - Your cohort: If you really enjoyed this course and the peer review...You may want to consider getting involved with one of these organizations! 
Ways to get involved:- Join the community forum [rOpensci](https://discuss.ropensci.org/) [pyOpensci](https://pyopensci.discourse.group/)- Virtually attend community calls! [rOpensci](https://ropensci.org/commcalls/) [pyOpensci](https://www.pyopensci.org/community-meetings)- Volunteer to review packages! [rOpensci](https://ropensci.org/onboarding/) [pyOpenSci](https://forms.gle/wvwLaLQre58YLHpD6) - Submit your package for review! [rOpensci](https://github.com/ropensci/software-review/why-and-how-submit-your-package-to-ropensci) [pyOpensci](https://www.pyopensci.org/dev_guide/peer_review/aims_scope.html) CRAN- CRAN (founded in 1997) stands for the "Comprehensive R Archive Network"- it is a collection of sites which host identical copies of: - R distribution(s) - the contributed extensions (*i.e.,* packages) - documentation for R - binaries (i.e., packages)- as of 2012, there were 85 official ‘daily’ mirrors *Source: Hornik, K (2012). The Comprehensive R Archive Network. Wiley interdisciplinary reviews. Computational statistics. 4(4): 394-398. [doi:10.1002/wics.1212](https://onlinelibrary-wiley-com.ezproxy.library.ubc.ca/doi/full/10.1002/wics.1212)* > Binary vs source distributions, what's the difference?> > Binary distributions are pre-compiled (computer readable), whereas source distributions have to be compiled before they are installed.> > Precompiled binaries are often different for each operating system (e.g., Windows vs Mac) Number of packages hosted by CRAN over history*Source: ["Reproducibility and Replicability in a Fast-Paced Methodological World"](https://journals.sagepub.com/doi/10.1177/2515245919847421) by Sacha Epskamp* What does it mean to be a CRAN package:**A stamp of authenticity:**- passed quality control of the `check` utility **Ease of installation:**- can be installed by users via `install.packages` (it's actually the default!)- binaries available for Windows & Mac OS's **Discoverability:**- listed as a package on CRAN **HOWEVER** - CRAN makes no assertions about the package's usability, or the efficiency and correctness of the computations it performs How to submit a package to CRAN1. Pick a version number.2. Run and document `R CMD check`.3. Check that you’re aligned with CRAN policies.4. Update README.md and NEWS.md.5. Submit the package to CRAN.6. Prepare for the next version by updating version numbers.7. Publicise the new version.*Source: [Chapter 18 Releasing a package](https://r-pkgs.org/release.html) - R packages book by Hadley Wickham & Jenny Bryan* Notes on submitting to CRAN- CRAN is staffed by volunteers, all of whom have other full-time jobs- A typical week has over 100 submissions and only three volunteers to process them all. - The less work you make for them the more likely you are to have a pleasant submission experience... Notes on submitting to CRAN (cont'd)Technical things:- Your package must pass `R CMD check` with the current development version of R (R-devel)- it must work on at least two platforms (CRAN uses the following 4 platforms: Windows, Mac OS X, Linux and Solaris) - use GitHub Actions to ensure this before submitting to CRAN!*If you decide to submit a package to CRAN follow the detailed instructions in [Chapter 18 Releasing a package](https://r-pkgs.org/release.html) fromt the R packages book by Hadley Wickham & Jenny Bryan to do so. 
If you submit your package to rOpenSci, they will help you get everything in order for submission to CRAN as well!* Notes on submitting to CRAN (cont'd)CRAN policies: Most common problems (from the R packages book):- The maintainer’s e-mail address must be stable, if they can’t get in touch with you they will remove your package from CRAN. - You must have clearly identified the copyright holders in DESCRIPTION: if you have included external source code, you must ensure that the license is compatible.- Do not make external changes without explicit user permission. Don’t write to the file system, change options, install packages, quit R, send information over the internet, open external software, etc.- Do not submit updates too frequently. The policy suggests a new version once every 1-2 months at most. If your submission fails:Read section 18.6.1 "On failure" from [Chapter 18 Releasing a package](https://r-pkgs.org/release.html) - R packages book by Hadley Wickham & Jenny Bryan*TL;DR - Breathe, don't argue, fix what is needed and re-submit. PyPI- should be pronounced like "pie pea eye"- also known as the Cheese Shop (a reference to the Monty Python's Flying Circus sketch "Cheese Shop")
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('zB8pbUW5n1g')
###Output
_____no_output_____ |
multiple_polynomial_regression_from_scratch.ipynb | ###Markdown
 Multiple and Polynomial Regression from scratch Objective: The goal is to create a median house price estimator for Boston city, using a Multiple Linear Regression Model built from scratch, applied to the "Boston Housing Dataset". Then I will try to improve this estimator using a Multiple Polynomial Regression Model. I won't explain theory here, only the basic background equations needed to understand the code.  --- Table of contents1. [Dataset](data)2. [Multiple Linear model definition](model)3. [Loss function definition](loss)4. [Gradient definition](gradient)5. [Gradient descent algorithm](descent)6. [Training the model on data](training)7. [Model evaluation](evaluation)8. [Multiple Polynomial Regression](polynomial)9. [Median houses prices Estimations](estimation)10. [Conclusion](conclusion) Importing libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import dataframe_image as dfi
import csv
%matplotlib inline
###Output
_____no_output_____
###Markdown
 1.Dataset Importing dataset First we need to load the "Boston Housing Dataset" using the sklearn datasets library:
###Code
from sklearn.datasets import load_boston
boston = load_boston()
boston.keys()
###Output
_____no_output_____
###Markdown
 From these keys, we can get more information about the data and construct a pandas dataframe !
###Code
print(boston.DESCR)
###Output
.. _boston_dataset:
Boston house prices dataset
---------------------------
**Data Set Characteristics:**
:Number of Instances: 506
:Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.
:Attribute Information (in order):
- CRIM per capita crime rate by town
- ZN proportion of residential land zoned for lots over 25,000 sq.ft.
- INDUS proportion of non-retail business acres per town
- CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
- NOX nitric oxides concentration (parts per 10 million)
- RM average number of rooms per dwelling
- AGE proportion of owner-occupied units built prior to 1940
- DIS weighted distances to five Boston employment centres
- RAD index of accessibility to radial highways
- TAX full-value property-tax rate per $10,000
- PTRATIO pupil-teacher ratio by town
- B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
- LSTAT % lower status of the population
- MEDV Median value of owner-occupied homes in $1000's
:Missing Attribute Values: None
:Creator: Harrison, D. and Rubinfeld, D.L.
This is a copy of UCI ML housing dataset.
https://archive.ics.uci.edu/ml/machine-learning-databases/housing/
This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.
The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic
prices and the demand for clean air', J. Environ. Economics & Management,
vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics
...', Wiley, 1980. N.B. Various transformations are used in the table on
pages 244-261 of the latter.
The Boston house-price data has been used in many machine learning papers that address regression
problems.
.. topic:: References
- Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
- Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
###Markdown
**The "13 numeric/categorical predictive" are the feature variables based on which we will predict the median value of houses "MEDV" ... our target !** Dataframe construction
###Code
df = pd.DataFrame(boston.data, columns=boston.feature_names)
df['MEDV'] = boston.target # set the target as MEDV
###Output
_____no_output_____
###Markdown
Ok now let's visualize our dataframe:
###Code
pd.options.display.float_format = '{:.2f}'.format
dfi.export(df.head(7), "img/boston_dataframe.png")
###Output
_____no_output_____
###Markdown
 Basic information about the dataset:
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 506 entries, 0 to 505
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CRIM 506 non-null float64
1 ZN 506 non-null float64
2 INDUS 506 non-null float64
3 CHAS 506 non-null float64
4 NOX 506 non-null float64
5 RM 506 non-null float64
6 AGE 506 non-null float64
7 DIS 506 non-null float64
8 RAD 506 non-null float64
9 TAX 506 non-null float64
10 PTRATIO 506 non-null float64
11 B 506 non-null float64
12 LSTAT 506 non-null float64
13 MEDV 506 non-null float64
dtypes: float64(14)
memory usage: 55.5 KB
###Markdown
 One can see that our dataframe is composed of 14 columns and 506 entries (rows). All values are of float type and there are no null values. Even more interesting statistical information about our data:
###Code
dfi.export(df.describe(), "img/boston_dataframe_describe.png")
###Output
_____no_output_____
###Markdown
 Outliers **An interesting fact about this dataset concerns the max value of MEDV! From the original dataset description, we can read:***Variable 14 seems to be censored at 50.00 (corresponding to a median price of 50,000 Dollars); Censoring is suggested by the fact that the highest median price of exactly 50,000 Dollars is reported in 16 cases, while 15 cases have prices between 40,000 and 50,000 Dollars, with prices rounded to the nearest hundred. Harrison and Rubinfeld do not mention any censoring.* So let's remove these problematic values where MEDV = 50.0 :
###Code
df = df[~(df['MEDV'] >= 50.0)]
###Output
_____no_output_____
###Markdown
Correlation matrix Now we want to explore our data and see how our target is correlated to a feature or another.
###Code
correlation = df.corr()
fig = plt.figure(figsize=(13, 9))
mask = np.triu(np.ones(correlation.shape)).astype(bool)
sns.heatmap(correlation.round(2), mask=mask, annot=True, cmap='BrBG',
vmin=-1, vmax=1, center= 0, linewidths=2, linecolor='white')
plt.show()
###Output
_____no_output_____
###Markdown
 Remember, our target is the median price **MEDV** ! So we need to read the last line of our table. We can observe that **the target is highly and positively correlated to the RM** feature (the average number of rooms per dwelling), which makes sense: the more rooms a dwelling has, the higher its price. On the other hand, we see that **the target is highly and negatively correlated to the LSTAT** feature (% lower status of the population) ... house prices are lower where the share of lower status people is higher. **Given these observations, I will keep RM and LSTAT features for analysis as they are the two most correlated to our target.** Data visualization
###Code
# setting plots background color
plt.rcParams.update({'axes.facecolor':'#f8fafc'})
fig = plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.scatter(df['RM'], df['MEDV'], c= 'yellowgreen', edgecolor='k', s=50)
plt.xlabel('Avg Nb Rooms', fontsize = 12), plt.ylabel('Median Price (1000$)', fontsize = 12)
plt.title('Median Price v/s Avg Nb Rooms', fontsize = 15)
plt.subplot(122)
plt.scatter(df['LSTAT'], df['MEDV'], c= 'darkorange', edgecolor='k', s=50)
plt.xlabel('% Low Status', fontsize = 12), plt.ylabel('Median Price (1000$)', fontsize = 12)
plt.title('Median Price v/s % Lower Status People', fontsize = 15)
plt.show()
###Output
_____no_output_____
###Markdown
 Except for a few points, the median price looks approximately linearly correlated to our features ! As a first approach, let's adopt the hypothesis of a linear correlation between the target MEDV and the features RM and LSTAT. 3D data visualization
###Code
# interactive plot: %matplotlib notebook
# normal mode plot: %matplotlib inline
plt.rcParams.update({'axes.facecolor':'white'})
fig = plt.figure(figsize=(14, 14))
fig.suptitle('Dataset 3D Visualization', fontsize=20)
fig.subplots_adjust(top=1.2)
ax = fig.add_subplot(projection='3d')
ax.set_xlabel('Avg Nb Rooms', fontsize = 15)
ax.set_ylabel('% Low Status', fontsize = 15)
ax.set_zlabel('Median Price (1000$)', fontsize = 15)
u1 = np.array(df['RM'])
u2 = np.array(df['LSTAT'])
v = np.array(df['MEDV'])
ax.scatter(u1, u2, v, color='lightcoral', edgecolor='k', s=50)
ax.elev, ax.azim = 5, -130
plt.show()
###Output
_____no_output_____
###Markdown
2.Multiple-Linear model definition In statistics, multiple linear regression is a mathematical regression method extending simple Linear Regression to describe the variations of an endogenous variable associated with the variations of several exogenous variables. For a given sequence of **m** observations, as we consider two features from our data ${(x_{1 i}, x_{2 i}, y_i) \hspace{4mm}i=1 ..., m}$ $ y_i=\beta_{0} +\beta_{1}.x_{1 i} +\beta_{2}.x_{2 i} + \epsilon_i $ $ Using \hspace{3mm}matrix \hspace{3mm}notations \hspace{3mm}x_1 = [x_{1 1}, x_{1 2}, ..., x_{1 m}]^T \hspace{3mm}x_2 = [x_{2 1}, x_{2 2}, ..., x_{2 m}]^T \hspace{3mm}$ $and \hspace{3mm} y = [y_1, y_2, ..., y_m]^T$ $The \hspace{3mm}model \hspace{3mm}is:\hspace{3mm} y = X.\beta + \epsilon$ $Where\hspace{3mm} X = [1 \hspace{3mm} x_{1}\hspace{3mm} x_{2}] \hspace{3mm}and\hspace{3mm} \beta= [\beta_{0} \hspace{3mm}\beta_{1}\hspace{3mm}\beta_{2}]^T$ Converting data into arrays
###Code
x1 = df['RM'].values.reshape(-1, 1)
x2 = df['LSTAT'].values.reshape(-1, 1)
y = df['MEDV'].values.reshape(-1, 1)
print(x1.shape, x2.shape, y.shape)
###Output
(490, 1) (490, 1) (490, 1)
###Markdown
Adding identity column vector to features:
###Code
X = np.hstack((x1, x2))
X = np.hstack((np.ones(x1.shape), X))
print(X[0:5])
###Output
[[1. 6.575 4.98 ]
[1. 6.421 9.14 ]
[1. 7.185 4.03 ]
[1. 6.998 2.94 ]
[1. 7.147 5.33 ]]
###Markdown
Defining model
###Code
def model(X, Beta):
return X.dot(Beta)
###Output
_____no_output_____
###Markdown
Parameters initialization
###Code
np.random.seed(50)
Beta = np.random.randn(3, 1)
print(Beta)
###Output
[[-1.56035211]
[-0.0309776 ]
[-0.62092842]]
###Markdown
Checking model's initial state by visualizing the regression plane
###Code
%matplotlib inline
plt.rcParams.update({'axes.facecolor':'white'})
fig = plt.figure(figsize=(13, 13))
fig.suptitle('Initial State Regression Plane Visualization', fontsize=20)
fig.subplots_adjust(top=1.2)
ax = fig.add_subplot(projection='3d')
ax.set_xlabel('Avg Nb Rooms', fontsize = 15)
ax.set_ylabel('% Low Status', fontsize = 15)
ax.set_zlabel('Median Price (1000$)', fontsize = 15)
# Dataset plot
u1 = np.array(df['RM'])
u2 = np.array(df['LSTAT'])
v = np.array(df['MEDV'])
ax.scatter(u1, u2, v, color='lightcoral', edgecolor='k', s=50)
# Hyperplane regression plot
u1 = np.linspace(0,9,10)
u2 = np.linspace(0,40,10)
u1, u2 = np.meshgrid(u1, u2)
v = Beta[0] + Beta[1]*u1 + Beta[2]*u2
ax.plot_surface(u1, u2, v, alpha=0.5, cmap='plasma', edgecolor='plum')
ax.elev, ax.azim = 5, -130
plt.show()
###Output
_____no_output_____
###Markdown
One clearly see on plot that the model is not fitted yet to our data ! 3.Loss function : Mean Squared Error (MSE) To measure the error of our model we define the Loss function as the mean of the squares of the residuals: $ J(\beta) = \frac{1}{m} \sum (y - X.\beta)^2 $
###Code
def lossFunction(X, y, Beta):
m = y.shape[0]
return 1/(m) * np.sum((model(X, Beta) - y)**2)
###Output
_____no_output_____
###Markdown
Let's check the initial Loss value:
###Code
lossFunction(X, y, Beta)
###Output
_____no_output_____
###Markdown
 4.Gradient definition $\frac{\partial J(\beta) }{\partial \beta} = - \frac{2}{m} X^T.(y - X.\beta)$ For the implementation we compute the equivalent form $\frac{2}{m} X^T.(X.\beta - y)$, i.e. the same gradient with the minus sign folded into the residual:
###Code
def gradient(X, y, Beta):
m = y.shape[0]
return 2/m * X.T.dot(model(X, Beta) - y)
###Output
_____no_output_____
###Markdown
5.Gradient descent algorithm $\beta' = \beta - \alpha \frac{\partial J(\beta) }{\partial \beta}$
###Code
def gradientDescent(X, y, Beta, learning_rate, n_iterations):
loss_history = np.zeros(n_iterations)
for i in range(n_iterations):
Beta = Beta - learning_rate * gradient(X, y, Beta)
loss_history[i] = lossFunction(X, y, Beta)
return Beta, loss_history
###Output
_____no_output_____
###Markdown
6.Training the model on data First I define the hypermarameters:
###Code
N_ITERATIONS = 500
LEARNING_RATE = 0.001
###Output
_____no_output_____
###Markdown
Then I train the model on the data...
###Code
Final_Beta, loss_history = gradientDescent(X, y, Beta, LEARNING_RATE, N_ITERATIONS)
###Output
_____no_output_____
###Markdown
...so I can get the regression coefficients.
###Code
print(f"Intercept = {Final_Beta[0][0]:.2f}")
print(f"coefficient 1 = {Final_Beta[1][0]:.2f}")
print(f"coefficient 2 = {Final_Beta[2][0]:.2f}")
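# Cross-check (sketch): the closed-form least-squares solution gives the exact optimum,
# which the gradient-descent coefficients above should approach as iterations increase.
Beta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
print("Closed-form Beta =", Beta_lstsq.ravel())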
###Output
Intercept = -0.87
coefficient 1 = 4.78
coefficient 2 = -0.57
###Markdown
Let's have a look on the Learning Curve:
###Code
plt.rcParams.update({'axes.facecolor':'#f8fafc'})
plt.figure(figsize=(10,6))
plt.plot(range(N_ITERATIONS), loss_history, c= 'purple')
plt.title('Learning curve', fontsize = 15)
plt.xlabel('Iterations', fontsize = 12)
plt.ylabel('Loss value', fontsize = 12)
plt.show()
###Output
_____no_output_____
###Markdown
7.Model evaluation Regression plane visualization Let's create a 3D plot animation for better visualization:
###Code
from matplotlib import animation
plt.rcParams.update({'axes.facecolor':'white'})
fig = plt.figure(figsize=(13, 13))
ax = fig.add_subplot(projection='3d')
fig.suptitle('Final Regression Plane Visualization', fontsize=20)
fig.subplots_adjust(top=1.1)
ax.set_xlabel('Avg Nb Rooms', fontsize = 15)
ax.set_ylabel('% Low Status', fontsize = 15)
ax.set_zlabel('Median Price (1000$)', fontsize = 15)
plt.close()
def init():
# Dataset plot
u1 = np.array(df['RM'])
u2 = np.array(df['LSTAT'])
v = np.array(df['MEDV'])
ax.scatter(u1, u2, v, color='lightcoral', edgecolor='k', s=50)
# Regression plane plot
u1 = np.linspace(0,9,20)
u2 = np.linspace(0,40,20)
u1, u2 = np.meshgrid(u1, u2)
v = Final_Beta[0] + Final_Beta[1]*u1 + Final_Beta[2]*u2
ax.plot_surface(u1, u2, v, alpha=0.3, cmap='plasma', edgecolor='plum')
return fig,
def animate(i):
ax.view_init(elev=10., azim=i)
return fig,
# Creating the animation
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=360, interval=40, blit=True)
# Saving the animation
anim.save('igm/animation1.gif', fps=25)
###Output
_____no_output_____
###Markdown
 The regression plane seems to fit pretty well for median prices lower than 40000 Dollars, but not so well for higher values ! It also fits poorly for median prices where the percentage of low status people is high. **The model may be too simple to model the data well.** Coefficient of determination Coefficient values lie between [0, 1] and indicate the quality of the prediction. $ R^2 = 1 - \frac{\sum (y_i - \hat{y_i})^2}{\sum (y_i - \bar{y})^2} $
###Code
def determinationCoef(y, y_pred):
    SSR = ((y - y_pred)**2).sum() # Residual sum of squares SSR
    SST = ((y - y.mean())**2).sum() # Total sum of squares SST
    # Note: np.sqrt(1 - SSR/SST) is the multiple correlation coefficient R;
    # drop the square root to report the coefficient of determination R² itself.
    return np.sqrt(1 - SSR/SST)
y_prediction = model(X, Final_Beta)
print(f"R2 = {determinationCoef(y, y_prediction):0.2f}")
###Output
R2 = 0.81
###Markdown
 The R2 coefficient is quite high, but far from the maximum value. **This confirms what we can see in the graphs: the model must be too simple... So let's try to improve it !** 8.Multiple Polynomial Regression **This time we will consider a non-linear (polynomial) correlation between the median price value and the percentage of lower status people feature.**  Polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modelled as an nth degree polynomial in x. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y. Although polynomial regression fits a nonlinear function of the data, as a statistical estimation problem it is linear, in the sense that the regression model is linear in the unknown parameters that are estimated from the data. For this reason, polynomial regression is considered to be a special case of multiple linear regression. **The regression model remains linear in the unknown parameters:** $ y = X.\beta + \epsilon$ **... but polynomial regression fits a nonlinear function of the features:** $ y_i=\beta_{0} +\beta_{1}.x_{1 i} +\beta_{2}.x_{2 i} +\beta_{3}.x_{1 i}.x_{2 i} +\beta_{4}.x_{2 i}^2 + \epsilon_i $ Converting data into arrays
###Code
x1 = df['RM'].values.reshape(-1, 1)
x2 = df['LSTAT'].values.reshape(-1, 1)
y = df['MEDV'].values.reshape(-1, 1)
print(x1.shape, x2.shape, y.shape)
###Output
(490, 1) (490, 1) (490, 1)
###Markdown
 Here we need to scale our data in order to avoid numerical overflow !! Without scaling, the calculations produce values that are too large ...  I used the sklearn MinMaxScaler preprocessing to scale the data: $X_{scaled} = X_{std} * (max - min) + min$ where $X_{std} = (X - X_{min}) / (X_{max} - X_{min})$
###Code
from sklearn.preprocessing import MinMaxScaler
RM_scaler = MinMaxScaler([-1,1])
LSTAT_scaler = MinMaxScaler([-1,1])
MEDV_scaler = MinMaxScaler([-1,1])
x1 = RM_scaler.fit_transform(x1)
x2 = LSTAT_scaler.fit_transform(x2)
y = MEDV_scaler.fit_transform(y)
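# Sanity check (sketch): inverse_transform should recover the original target values
assert np.allclose(MEDV_scaler.inverse_transform(y), df['MEDV'].values.reshape(-1, 1))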
###Output
_____no_output_____
###Markdown
Now we need to add identity and non-linear terms column vector features: $X = [1 \hspace{3mm} x_1 \hspace{3mm} x_2 \hspace{3mm} x_1.x_2 \hspace{3mm} x_2^2]$
###Code
X = np.hstack((x1, x2))
X = np.hstack((X, x1*x2))
X = np.hstack((X, x2**2))
X = np.hstack((np.ones(x1.shape), X))
print(X[0:5])
###Output
[[ 1. 0.15501054 -0.83328702 -0.12916827 0.69436726]
[ 1. 0.0959954 -0.6021117 -0.05779995 0.3625385 ]
[ 1. 0.3887718 -0.88607947 -0.34448271 0.78513682]
[ 1. 0.31711056 -0.94665185 -0.3001933 0.89614972]
[ 1. 0.37420962 -0.81383718 -0.3045457 0.66233095]]
###Markdown
Time for hyperparameters initialization:
###Code
np.random.seed(50)
Beta = np.random.randn(5, 1)
print(Beta)
###Output
[[-1.56035211]
[-0.0309776 ]
[-0.62092842]
[-1.46458049]
[ 1.41194612]]
###Markdown
 Ok, now we can check the model's initial state by visualizing the 3D regression surface
###Code
%matplotlib inline
plt.rcParams.update({'axes.facecolor':'white'})
fig = plt.figure(figsize=(13, 13))
fig.suptitle('Initial State Regression surface Visualization', fontsize=20)
fig.subplots_adjust(top=1.2)
ax = fig.add_subplot(projection='3d')
ax.set_xlabel('Scaled Avg Nb Rooms', fontsize = 15)
ax.set_ylabel('Scaled % Low Status', fontsize = 15)
ax.set_zlabel('Scaled Median Price', fontsize = 15)
# Dataset plot
ax.scatter(x1, x2, y, color='lightcoral', edgecolor='k', s=50)
# Regression surface plot
u1 = np.linspace(-1,1,20)
u2 = np.linspace(-1,1,20)
u1, u2 = np.meshgrid(u1, u2)
v = Beta[0] + Beta[1]*u1 + Beta[2]*u2 + Beta[3]* u1*u2 + Beta[4]* u2**2
ax.plot_surface(u1, u2, v, alpha=0.3, cmap='plasma', edgecolor='plum')
ax.elev, ax.azim = 10, -130
plt.show()
###Output
_____no_output_____
###Markdown
 Perfect ! We can see that the regression surface is far from the dataset points. It's time to train our polynomial regression model. Let's launch the training ...
###Code
N_ITERATIONS = 50000
LEARNING_RATE = 0.01
Final_Beta, loss_history = gradientDescent(X, y, Beta, LEARNING_RATE, N_ITERATIONS)
print(Final_Beta)
###Output
[[-0.54169521]
[ 0.00731156]
[-0.62805316]
[-1.00386475]
[-0.05360365]]
###Markdown
... and visualize the updated regression surface !
###Code
%matplotlib inline
plt.rcParams.update({'axes.facecolor':'white'})
fig = plt.figure(figsize=(13, 13))
ax = fig.add_subplot(projection='3d')
fig.suptitle('Final Regression surface Visualization', fontsize=20)
fig.subplots_adjust(top=1.1)
ax.set_xlabel('Scaled Avg Nb Rooms', fontsize = 15)
ax.set_ylabel('Scaled % Low Status', fontsize = 15)
ax.set_zlabel('Scaled Median Price', fontsize = 15)
plt.close()
def init():
# Dataset plot
ax.scatter(x1, x2, y, color='lightcoral', edgecolor='k', s=50)
# Regression surface plot
u1 = np.linspace(-1,1,20).reshape(-1,1)
u2 = np.linspace(-1,1,20).reshape(-1,1)
u1, u2 = np.meshgrid(u1, u2)
v = Final_Beta[0] + Final_Beta[1]*u1 + Final_Beta[2]*u2 + Final_Beta[3]* u1*u2 + Final_Beta[4]* u2**2
ax.plot_surface(u1, u2, v, alpha=0.3, cmap='plasma', edgecolor ='plum')
return fig,
def animate(i):
ax.view_init(elev=10., azim=i)
return fig,
# Creating the animation
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=360, interval=40, blit=True)
# Saving the animation
anim.save('igm/animation2.gif', fps=25)
###Output
_____no_output_____
###Markdown
 Great !! This regression surface fits our data much better than previously with the multiple linear regression model. Coefficient of determination
###Code
y_prediction = model(X, Final_Beta)
print(f"R2 = {determinationCoef(y, y_prediction):0.2f}")
###Output
R2 = 0.87
###Markdown
 We see that for this model the coefficient of determination is clearly better, in addition to the fact that the regression surface adapts quite well to our data! 9. Median houses prices Estimations **Ok, now that we have a quite good model, let's try to do what we are here for : make some estimations.** Imagine that new data comes in ... we get new lines in our dataframe, and it reveals that:
###Code
import tabulate
data_new = {'RM': [4, 8], 'LSTAT': [35, 6], 'MEDV': ['estimation ?', 'estimation ?']}
df_new = pd.DataFrame(data=data_new)
print(df_new.to_markdown())
###Output
| | RM | LSTAT | MEDV |
|---:|-----:|--------:|:-------------|
| 0 | 4 | 35 | estimation ? |
| 1 | 8 | 6 | estimation ? |
###Markdown
**Now, as it is our goal, we want to estimate the median price value (MEDV) for these new data!**So let's convert our data into arrays:
###Code
x1_new_data = np.array([4, 8]).reshape(-1,1)
x2_new_data = np.array([35, 6]).reshape(-1,1)  # LSTAT values from the new data table above (35 and 6)
###Output
_____no_output_____
###Markdown
To do the prediction calculation, we need to go in the scaled space !
###Code
x1_new = RM_scaler.transform(x1_new_data)
x2_new = LSTAT_scaler.transform(x2_new_data)
###Output
_____no_output_____
###Markdown
Then add identity and non-linear terms column vector to features as usual ...
###Code
X = np.hstack((x1_new, x2_new))
X = np.hstack((X, x1_new*x2_new))
X = np.hstack((X, x2_new**2))
X = np.hstack((np.ones(x1_new.shape), X))
print(X)
###Output
[[ 1. -0.83176854 0.55709919 -0.46337758 0.31035951]
[ 1. 0.70109216 -0.77660461 -0.54447141 0.60311472]]
###Markdown
So we can estimate a scaled median price:
###Code
y_prediciton_scaled = model(X, Final_Beta)
###Output
_____no_output_____
###Markdown
... and go back to real values using the **inverse scaling transformation** !
###Code
y_prediciton = MEDV_scaler.inverse_transform(y_prediciton_scaled)
print(f"The estimated median price for (RM=4,LSTAT=35) is {y_prediciton[0][0]*1000:.0f} Dollars !")
print(f"The estimated median price for (RM=8,LSTAT=6) is {y_prediciton[1][0]*1000:.0f} Dollars !")
###Output
The estimated median price for (RM=4,LSTAT=35) is 17064 Dollars !
The estimated median price for (RM=8,LSTAT=6) is 37093 Dollars !
###Markdown
To finally report the estimated median price values on plots:
###Code
%matplotlib inline
plt.rcParams.update({'axes.facecolor':'white'})
fig = plt.figure(figsize=(15, 15))
fig.suptitle('Median Price Estimations', fontsize=20)
fig.subplots_adjust(top=1.4)
# VIEW 1
ax = fig.add_subplot(121, projection='3d')
ax.set_xlabel('Avg Nb Rooms', fontsize = 15)
ax.set_ylabel('% Low Status', fontsize = 15)
ax.set_zlabel('Median Price (1000$)', fontsize = 15)
ax.elev, ax.azim = 8, -130
# Dataset plot
u1 = np.array(df['RM'])
u2 = np.array(df['LSTAT'])
v = np.array(df['MEDV'])
ax.scatter(u1, u2, v, color='lightcoral', edgecolor='k', s=50)
# Prediction plot
ax.scatter(x1_new_data[0], x2_new_data[0], y_prediciton[0][0], color='aqua', edgecolor='k', s=150)
ax.scatter(x1_new_data[1], x2_new_data[1], y_prediciton[1][0], color='yellow', edgecolor='k', s=150)
# VIEW 2
ax = fig.add_subplot(122, projection='3d')
ax.set_xlabel('Avg Nb Rooms', fontsize = 15)
ax.set_ylabel('% Low Status', fontsize = 15)
ax.set_zlabel('Median Price (1000$)', fontsize = 15)
ax.elev, ax.azim = 25, -160
# Dataset plot
u1 = np.array(df['RM'])
u2 = np.array(df['LSTAT'])
v = np.array(df['MEDV'])
ax.scatter(u1, u2, v, color='lightcoral', edgecolor='k', s=50)
# Prediction plot
ax.scatter(x1_new_data[0], x2_new_data[0], y_prediciton[0][0], color='aqua', edgecolor='k', s=150)
ax.scatter(x1_new_data[1], x2_new_data[1], y_prediciton[1][0], color='yellow', edgecolor='k', s=150)
plt.show()
###Output
_____no_output_____ |
Final Result 1.ipynb | ###Markdown
Importing Libraries
###Code
import shutil
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm_notebook
import joblib
from sklearn.metrics import confusion_matrix, recall_score, precision_score, f1_score, accuracy_score
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Read Data
###Code
x_test_1 = pd.read_csv("Data/x_test_1.csv")
x_test_2 = pd.read_csv("Data/x_test_2.csv")
x_test_3 = pd.read_csv("Data/x_test_3.csv")
rem = np.load("Data/test_removed.npy")
x_test_1.drop(index = rem, axis = 0, inplace = True)
x_test_2.drop(index = rem, axis = 0, inplace = True)
x_test_3.drop(index = rem, axis = 0, inplace = True)
x_test_1 = np.array(x_test_1)
x_test_2 = np.array(x_test_2)
x_test_3 = np.array(x_test_3)
test_embedding = np.load("Data/test_embedding.npy")
y_test = pd.read_csv("Data/org_test.csv", usecols = ["label"])
y_test.drop(index = rem, axis = 0, inplace = True)
x_test = np.hstack((x_test_1, x_test_2, x_test_3, test_embedding))
print("Number of Rows in X_Test =", x_test.shape[0])
print("Number of Columns in X_Test =", x_test.shape[1])
# Loading the ML Model
model = joblib.load('model/model.joblib')
y_pred = model.predict(x_test)
###Output
_____no_output_____
###Markdown
Model Result
###Code
print("Accuracy on Test Data =", accuracy_score(y_test, y_pred) * 100, "%")
print("Precision on Test Data =", precision_score(y_test, y_pred) * 100, "%")
print("Recall on Test Data =", recall_score(y_test, y_pred) * 100, "%")
print("F1 Score on Test Data =", f1_score(y_test, y_pred))
# Getting Confusion matrix for Test Data
plt.figure(figsize = (10, 8))
sns.heatmap(confusion_matrix(y_test, y_pred), annot = True, fmt = "d", cmap = "Blues")
plt.title("Confusion Matrix For Test Data")
plt.show()
# precision percentage confusion matrix for Test data for class 1 and class 0
CM = confusion_matrix(y_test, y_pred)
CM = CM / CM.sum(axis = 0)
plt.figure(figsize = (10, 8))
sns.heatmap(CM, annot = True, cmap = "Blues")
plt.title("Precsion Percentage Confusion Matrix For Test Data")
plt.show()
# recall percentage confusion matrix for Test data for class 1 and class 0
CM = confusion_matrix(y_test, y_pred)
CM = ((CM.T) / CM.sum(axis = 1)).T
plt.figure(figsize = (10, 8))
sns.heatmap(CM, annot = True, cmap = "Blues")
plt.title("Recall Percentage Confusion Matrix For Test Data")
plt.show()
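# Compact summary (sketch): sklearn's classification_report bundles per-class precision,
# recall and F1 in one table.
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))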
###Output
_____no_output_____ |
Python Pandas Smart Tricks/5_Filter_Dataframe_using_in_notin_like_SQL_Working-Class.ipynb | ###Markdown
Method 1
###Code
#Select 2 SalesReps based on SQL's "in" - Filter for ['Amy','Bob']
sales[sales.SalesRep.isin(['Amy','Bob'])]
#Select 2 SalesReps based on SQL's "not in"
sales[~sales.SalesRep.isin(['Amy','Bob'])]
###Output
_____no_output_____
###Markdown
Method 2
###Code
#isin
#Create a list of SalesRep's you want to select. To select then use 'isin'
filter_SalesRep = ['Amy','Bob']
sales[sales.SalesRep.isin(filter_SalesRep)]
#not in
#Create a list of SalesRep's you want to select. To select then use 'isin' along with the '~'
filter_SalesRep = ['Amy','Bob']
sales[~sales.SalesRep.isin(filter_SalesRep)]
###Output
_____no_output_____
###Markdown
Method 3 - Multiple Criteria
###Code
#in
#You can use 2 filters with the & operator and provide a separate list for each column - use '&'
filter_SalesRep = ['Amy','Bob']
filter_Region = ['North', 'West']
sales[sales.SalesRep.isin(filter_SalesRep) & sales.Region.isin(filter_Region)]
#in & not in
#Just use the '~' in front of column you don't want values from
filter_SalesRep = ['Amy','Bob']
filter_Region = ['North', 'West']
sales[sales.SalesRep.isin(filter_SalesRep) & ~sales.Region.isin(filter_Region)]
###Output
_____no_output_____
###Markdown
Method 4 - Using Numpy (Faster)
###Code
#in
filter_SalesRep = ['Amy','Bob']
sales[np.isin(sales.SalesRep, filter_SalesRep)]
#not in
filter_SalesRep = ['Amy','Bob']
sales[np.isin(sales.SalesRep, filter_SalesRep, invert = True)]
###Output
_____no_output_____
###Markdown
Method 5 - Using List Comprehensions (Much Faster)
###Code
#in
filter_SalesRep = ['Amy','Bob']
sales[[x in filter_SalesRep for x in sales.SalesRep]]
#not in
filter_SalesRep = ['Amy','Bob']
sales[[x not in filter_SalesRep for x in sales.SalesRep]]
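# Rough timing comparison (sketch; actual numbers depend on the size and dtypes of `sales`):
%timeit sales[sales.SalesRep.isin(filter_SalesRep)]
%timeit sales[np.isin(sales.SalesRep, filter_SalesRep)]
%timeit sales[[x in filter_SalesRep for x in sales.SalesRep]]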
###Output
_____no_output_____
###Markdown
 Method 6 - Select Rows based on whether the value is present in any Column
###Code
games = pd.read_csv('games.csv')
games
games[['Game2','Game4']]
criteria = ['Yes']
games[games[['Game2','Game4']].isin(criteria).any(axis = 1)]
###Output
_____no_output_____ |
13Ene2021.ipynb | ###Markdown
###Code
class NodoArbol:
def __init__( self , dato , hijo_izq=None , hijo_der=None):
self.dato = dato
self.left = hijo_izq
self.right = hijo_der
class BinarySearchTree:
    def __init__(self):
        self.__root = None

    def insert(self, value):
        if self.__root == None:
            self.__root = NodoArbol(value, None, None)
        else:
            self.__insert_nodo__(self.__root, value)

    def __insert_nodo__(self, nodo, value):
        if nodo.dato == value:
            pass  # value already present: ignore duplicates
        elif value < nodo.dato:
            if nodo.left == None:
                nodo.left = NodoArbol(value, None, None)
            else:
                self.__insert_nodo__(nodo.left, value)
        else:
            if nodo.right == None:
                nodo.right = NodoArbol(value, None, None)
            else:
                self.__insert_nodo__(nodo.right, value)

    def search(self, value):
        # Iterative lookup: returns the node holding `value`, or None if it is absent
        nodo = self.__root
        while nodo is not None:
            if nodo.dato == value:
                return nodo
            if value < nodo.dato:
                nodo = nodo.left
            else:
                nodo = nodo.right
        return None
bst= BinarySearchTree()
bst.insert(50)
bst.insert(30)
bst.insert(20)
bst.search(30)
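# Quick check (sketch): a stored value should be found, a missing one should return None
print(bst.search(30) is not None)   # expected: True
print(bst.search(99) is None)       # expected: True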
###Output
_____no_output_____ |
phqVariationAnalysis.ipynb | ###Markdown
Analysis of PHQ-9 and PHQ-A responses for early adolescents aged 10 to 14: mixed-methods exploratory studyEric A. Miller, Giselle Sanchez, Alexandra Pryor, Janis H. Jenkins ---- Index ---- [Data preprocessing](Preprocess)* Import libraries* Figure plotting settings* Load data* Clean data [Figures](FiguresHeading)* [Figure 1](Fig1)* [Figure 2](Fig2)* [Figure 3](Fig3) [Supplemental Figures](Supplementals):* [Supplemental Figure 1](FigS1)* [Supplemental Figure 2](FigS2)* [Supplemental Figure 3](FigS3) [Other calculations](Various)* Cronbach's alpha coefficient* Intraclass correlation coefficient* Uncertainty in internal state given total score* GAD results not dependent on modified items* Confirm PCA results using Scikit Learn ---- Data preprocessing ---- Import libraries
###Code
%matplotlib inline
# Python core
import os
import math
import datetime
# Scipy core
import numpy as np
from numpy import linalg as LA
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
# Other libraries
import dateutil
import seaborn as sns
import pingouin as pg
from sklearn.decomposition import PCA
###Output
_____no_output_____
###Markdown
Figure plotting settings
###Code
savefigs = True
# If saving figs, define your main output directory and inside that directory put empty
# subdirectories titled 'Fig1', 'Fig2', ..., 'FigS3' for each figure's panels to be saved
main_output_directory = '/Users/ericmiller/Documents/UCSD/Jenkins Lab/Oceanside Project/PHQ_Paper/Panels/'
outdir = os.path.expanduser(main_output_directory)
# Variables for saving PDFs with matplotlib
plt.rcParams['pdf.fonttype'] = 42 # embed fonts so we can edit in Illustrator
plt.rcParams['font.sans-serif'] = ['Helvetica', 'Tahoma', 'Verdana']
resolution = 1000 # DPI
# Box plots
boxprops = dict(linewidth=1, color='k')
medianprops = dict(linewidth=0.9, color='k')
###Output
_____no_output_____
###Markdown
Load data
###Code
#Data available upon request
phq = pd.read_csv('./data/phqA.csv')
phq_adult = pd.read_csv('./data/phq9.csv')
gad = pd.read_csv('./data/gad.csv')
phq_dates = pd.read_csv('./data/phq_dates.csv')
###Output
_____no_output_____
###Markdown
Clean data
###Code
# Reindex the dataframes to start at 1 instead of 0, to match the student ID numbers
phq.index = np.arange(1, len(phq) + 1)
phq_adult.index = np.arange(1, len(phq_adult) + 1)
gad.index = np.arange(1, len(gad) + 1)
phq_dates.index = np.arange(1, len(phq_dates) + 1)
# Clear empty rows for students with no data for a given questionnaire
# -- 5, 42, 44, and 47 did not complete either PHQ-9 or PHQ-A
# -- 50, 51, 52, 53 completed only PHQ-A. 29 and 36 completed only the PHQ-9
no_data_adult = [5, 42, 44, 47, 50, 51, 52, 53]
no_data_child = [5, 42, 44, 47, 29, 36]
phq = phq.drop(no_data_child)
phq_adult = phq_adult.drop(no_data_adult)
gad = gad.drop(no_data_child)
# Only keep PHQ dates for participants who completed both questionnaires, as these
# will be used only for evaluating change between the questionnaires
phq_dates.drop(np.union1d(no_data_adult, no_data_child))
phq_dates = phq_dates.dropna()
# Cast numerical columns of the PHQ and GAD data to float
phq[phq.columns[1:]] = phq[phq.columns[1:]].astype('float64')
phq_adult[phq_adult.columns[1:]] = phq_adult[phq_adult.columns[1:]].astype('float64')
gad[gad.columns[1:]] = gad[gad.columns[1:]].astype('float64')
# Replace user-defined '99' code with NaN
phq = phq.replace(99, np.nan)
phq_adult = phq_adult.replace(99, np.nan)
gad = gad.replace(99, np.nan)
# Cast date strings to date objects (allows operations like subtraction of dates)
phq_dates['Adult PHQ Date'] = phq_dates['Adult PHQ Date'].apply(lambda d: dateutil.parser.parse(d))
phq_dates['Child PHQ Date'] = phq_dates['Child PHQ Date'].apply(lambda d: dateutil.parser.parse(d))
###Output
_____no_output_____
###Markdown
---- Figures ----[return to top](title) Fig 1. PHQ total scores not normally distributed and correlated with GAD[return to top](title) a. Total score distributions and tests for normality
###Code
# Histogram bins
phqbins = np.arange(0, 22, 2) # use same bins for both PHQ scales
gadbins = np.arange(0, 33, 3)
# Weights for converting histograms into frequency distributions
phqweights = np.ones_like(phq['PHQTOTAL'])/len(phq['PHQTOTAL'])
phq_adult_weights = np.ones_like(phq_adult['PHQTOTAL_Adult'])/len(phq_adult['PHQTOTAL_Adult'])
gadweights = np.ones_like(gad['GADRAW'])/len(gad['GADRAW'])
# PHQ-A ("Child & Adolescent" form)
plt.figure()
ax = plt.subplot(111)
plt.hist(phq['PHQTOTAL'], bins=phqbins, weights=phqweights, color='white', edgecolor='black')
plt.xlabel('PHQ-A Total Score', fontsize=14)
plt.ylabel('Fraction of students', fontsize=14)
plt.xticks([0, 4, 8, 12, 16, 20])
plt.yticks([0, 0.1, 0.2, 0.3])
plt.ylim([0, 0.35])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
W, p = stats.shapiro(phq['PHQTOTAL'])
print("Shapiro-Wilk Test for PHQ-A (Adolescent PHQ-9) Total: %f" % p)
if savefigs:
plt.savefig(fname=outdir+'Fig1/phqA_hist.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
# PHQ-9 ("Adult" form)
plt.figure()
ax = plt.subplot(111)
plt.hist(phq_adult['PHQTOTAL_Adult'], bins=phqbins, weights=phq_adult_weights, color='white', edgecolor='black')
plt.xlabel('PHQ-9 Total Score', fontsize=14)
plt.ylabel('Fraction of students', fontsize=14)
plt.xticks([0, 4, 8, 12, 16, 20])
plt.yticks([0, 0.1, 0.2, 0.3])
plt.ylim([0, 0.35])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
W, p = stats.shapiro(phq_adult['PHQTOTAL_Adult'])
print("Shapiro-Wilk Test for PHQ-9 (Adult) Total: %f" % p)
if savefigs:
plt.savefig(fname=outdir+'Fig1/phq9_hist.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
# GAD
plt.figure()
ax = plt.subplot(111)
plt.hist(gad['GADRAW'], bins=gadbins, weights=gadweights, color='white', edgecolor='black')
plt.xlabel('GAD Total Score', fontsize=14)
plt.ylabel('Fraction of students', fontsize=14)
# plt.xticks([0, 0.4, 0.8, 1.2, 1.6, 2.0])
plt.yticks([0, 0.1, 0.2, 0.3, 0.4])
plt.ylim([0, 0.4])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
W, p = stats.shapiro(gad['GADRAW'])
print("Shapiro-Wilk Test for GAD Total (Average): %f" % p)
if savefigs:
plt.savefig(fname=outdir+'Fig1/gad_hist.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
b. Correlation of PHQ and GAD
###Code
df = pd.merge(phq, gad, on='ID')
plt.figure()
ax = plt.subplot(111)
plt.scatter(df['GADRAW'], df['PHQTOTAL'], color='k', alpha=0.6)
plt.xlabel('GAD-A Total Score', fontsize=14)
plt.ylabel('PHQ-A Total Score', fontsize=14)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.yticks([0, 4, 8, 12, 16, 20])
r, p = stats.pearsonr(df['GADRAW'], df['PHQTOTAL'])
print("PHQ-A R-squared = %f" % r**2)
if savefigs:
plt.savefig(fname=outdir+'Fig1/phqA_vs_gad.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
df = pd.merge(phq_adult, gad, on='ID')
plt.figure()
ax = plt.subplot(111)
plt.scatter(df['GADRAW'], df['PHQTOTAL_Adult'], color='k', alpha=0.6)
plt.xlabel('GAD-A Total Score', fontsize=14)
plt.ylabel('PHQ-9 Total Score', fontsize=14)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.yticks([0, 4, 8, 12, 16, 20])
r, p = stats.pearsonr(df['GADRAW'], df['PHQTOTAL_Adult'])
print("PHQ-9 R-squared = %f" % r**2)
if savefigs:
plt.savefig(fname=outdir+'Fig1/phq9_vs_gad.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
1c-e. Changes between the PHQ-9 and PHQ-A for individual subjects.[return to top](title) 1c. Show vectors of PHQ change for each individual student
###Code
df = pd.merge(phq, phq_adult, on='ID')
df['diffs'] = np.array(df['PHQTOTAL'] - df['PHQTOTAL_Adult'])
df['avg'] = np.array((df['PHQTOTAL'] + df['PHQTOTAL_Adult']) / 2)
df_sort = df.sort_values(by='avg', ascending=True)
plt.figure(figsize=(15,5))
ax = plt.subplot(111)
plt.scatter(range(df_sort.shape[0]), df_sort['PHQTOTAL_Adult'], marker='o', color='k')
plt.scatter(range(df_sort.shape[0]), df_sort['PHQTOTAL'], marker='o', color='k')
ax.axhline(y=4, linestyle='--', color='grey')
ax.axhline(y=9, linestyle='--', color='grey')
ax.axhline(y=14, linestyle='--', color='grey')
plt.ylabel('PHQ Total', fontsize=12)
plt.yticks([0, 4, 9, 14, 19])
plt.xlabel('Participants', fontsize=12)
plt.xticks([])
for i in range(df_sort.shape[0]):
row = df_sort.iloc[i, :]
plt.arrow(x=i, y=row['PHQTOTAL_Adult'], dx=0, dy=row['diffs'],
length_includes_head=True, color='grey', facecolor='grey', head_width=0.8, head_length=0.7)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
if savefigs:
plt.savefig(fname=outdir+'Fig1/all_diffs.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
1d Distribution of PHQ changes across studentsCheck for normality and for a consistent bias in direction of change.
###Code
df = pd.merge(phq, phq_adult, on='ID')
df['diffs'] = np.array(df['PHQTOTAL'] - df['PHQTOTAL_Adult'])
weights = np.ones_like(df['diffs'])/len(df['diffs'])
plt.figure(figsize=(8, 5))
ax = plt.subplot(111)
plt.hist(df['diffs'], weights=weights, bins=np.arange(-10, 12, 2), color='white', edgecolor='k')
plt.xlabel('Difference in PHQ', fontsize=12)
plt.ylabel('Fraction of students', fontsize=12)
plt.xticks([-10, -5, 0, 5, 10])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
print("One sample t-test vs 0:")
print(stats.ttest_1samp(df['diffs'], 0))
s, p = stats.shapiro(df['diffs'])
print('\nShapiro-Wilk test for normality: p = %f' % p)
if savefigs:
plt.savefig(fname=outdir+'Fig1/diff_histogram.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Supplemental: Differences in PHQ-9 and PHQ-A total score vs. the time elapsed between measurements.12 / 43 of the subjects who completed both the PHQ-9 and the PHQ-A had missing data for the date they completed the PHQ-A, so we were unable to calculate the elapsed time for these subjects. For the remaining 31 subjects, we were able to test whether the absolute difference depended on the elapsed time between the tests.
###Code
df = pd.merge(phq, phq_adult, on='ID')
df['diffs'] = np.array(df['PHQTOTAL'] - df['PHQTOTAL_Adult'])
diffs = list(phq_dates['Child PHQ Date'] - phq_dates['Adult PHQ Date'])
phq_dates['date_diffs'] = [d.days for d in diffs]
df2 = pd.merge(df, phq_dates, on='ID')
plt.figure(figsize=(8, 5))
ax = plt.subplot(111)
plt.scatter(df2['date_diffs'], np.abs(df2['diffs']), color='k', alpha=0.6)
plt.xlabel('Time elapsed (days)', fontsize=12)
plt.ylabel('Absolute value difference', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
r, p = stats.pearsonr(df2['date_diffs'], np.abs(df2['diffs']))
print("\nAbsolute value difference:")
print("p = %f, R = %f" % (p, r))
print("R-squared = %f" % r**2)
if savefigs:
plt.savefig(fname=outdir+'FigS1/diff_histogram.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
1e Bland-Altman plot95% limits of agreement derived from this paper:https://journals.sagepub.com/doi/pdf/10.1177/096228029900800204
###Code
df = pd.merge(phq, phq_adult, on='ID')
df['diffs'] = np.array(df['PHQTOTAL'] - df['PHQTOTAL_Adult'])
df['avg'] = np.array((df['PHQTOTAL'] + df['PHQTOTAL_Adult']) / 2)
plt.figure()
ax = plt.subplot(111)
plt.scatter(df['avg'], df['diffs'], color='k', alpha=0.6)
plt.xticks([0, 5, 10, 15])
plt.yticks([-15, -10, -5, 0, 5, 10, 15])
plt.ylabel('Difference in PHQ total', fontsize=12)
plt.xlabel('Average PHQ total', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
overall_mu = np.mean(df['diffs'])
overall_std = np.std(df['diffs'])
print("Overall mean: %f +/- %f" % (overall_mu, overall_std))
# Compute 95% limits of agreement
# --------
# 1. Regress the differences vs the averages to estimate D_hat
# D_hat = b1 + m1 * A
m1, b1, r_value, p_value, std_err = stats.linregress(df['avg'], df['diffs'])
xs = np.linspace(0, np.max(df['avg']), 100)
ys = [m1 * x + b1 for x in xs]
plt.plot(xs, ys, linestyle='--', color='grey', alpha=0.7)
print("Overall slope: %f +/- %f" % (m1, std_err))
# 2. Regress the residuals of D_hat vs. the average to estimate R_hat
# R_hat = b2 + m2*A
res = []
for i in range(df.shape[0]):
a = df['avg'][i]
d_hat = m1 * a + b1
d = df['diffs'][i]
res.append(np.abs(d - d_hat))
m2, b2, r_value, p_value, std_err = stats.linregress(df['avg'], res)
# 3. Estimate the 95% limits of agreement D_hat +/- 2.46 * R_hat
ys2_upper = [(m1 * x + b1) + 2.46 * (m2 * x + b2) for x in xs]
ys2_lower = [(m1 * x + b1) - 2.46 * (m2 * x + b2) for x in xs]
plt.plot(xs, ys2_upper, linestyle='--', color='grey')
plt.plot(xs, ys2_lower, linestyle='--', color='grey')
if savefigs:
plt.savefig(fname=outdir+'Fig1/bland_altman.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Fig 2. Variance in PHQ-9 and PHQ-A item responses. [return to top](title) Graphical analysis of item response structure. This is a graphical approach to observing patterns in item responses across students. On the y-axis, we order students from highest to lowest total score, and on the x-axis we rank questions from highest to lowest total score. Thus the top left is where we should find the most high scores, and the bottom right is where we should see the most low scores. Another thing to check is whether the ordering of students and questions is maintained between the two forms. Here, for example, we can see that far fewer high scores were reported for the "Movement" question in the child form compared to the adult form.
###Code
# Adult form
phq_sort = phq_adult.iloc[:, :-2].sort_values(by='PHQTOTAL_Adult', ascending=False)
question_order = phq_sort.iloc[:, 1:-1].sum().sort_values(ascending=False).index.values
plt.figure(figsize=(7, 12))
ax = plt.subplot(111)
hm = sns.heatmap(phq_sort[question_order], cmap='Greys', cbar=False)
xlabels = [q.split("_")[0] for q in question_order]
hm.set_xticklabels(xlabels, rotation=30)
plt.title('PHQ-9', fontsize=12)
plt.xlabel('Item', fontsize=12)
plt.ylabel('Participant', fontsize=12)
plt.yticks([])
if savefigs:
plt.savefig(fname=outdir+'Fig2/heatplot_phq9.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
# Child form
phq_sort = phq.iloc[:, :-1].sort_values(by='PHQTOTAL', ascending=False)
question_order = phq_sort.iloc[:, 1:-1].sum().sort_values(ascending=False).index.values
plt.figure(figsize=(7, 12))
ax = plt.subplot(111)
hm = sns.heatmap(phq_sort[question_order], cmap="Greys", cbar=False)
hm.set_xticklabels(question_order, rotation=30)
plt.title('PHQ-A', fontsize=12)
plt.xlabel('Item', fontsize=12)
plt.ylabel('Participant', fontsize=12)
plt.yticks([])
if savefigs:
plt.savefig(fname=outdir+'Fig2/heatplot_phqA.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Principal components analysis (PCA) on the PHQ items. Standard PCA on the mean-subtracted item responses. Two students were dropped for PHQ-A and one student for PHQ-9 due to missing values. These analyses were double-checked using Scikit Learn (see bottom of this notebook, in "Other" analyses).
###Code
def pca(A):
cov = np.cov(A.T)
w, v = LA.eig(cov.astype(float))
vs = [] # eigenvectors
for i in range(len(w)):
vs.append(v[:, i])
sorted_pairs = sorted(zip(w, vs), key=lambda x: x[0], reverse=True)
w = [pair[0] for pair in sorted_pairs]
v = [pair[1] for pair in sorted_pairs]
return w, v
# Create PHQ matrices that just have the item scores, with no metadata
# and ensure that the columns are ordered identically for the two tests
phq_clean = phq.iloc[:, 1:-2]
phq_adult_clean = phq_adult.iloc[:, 1:-3]
cols1 = list(phq_clean.columns)
cols2 = [c + "_Adult" for c in cols1]
phq_adult_clean = phq_adult_clean.loc[:, cols2]
# PCA on PHQ-A
A = phq_clean.copy()
A = A.dropna() # 2 students dropped for missing val
# Subtract the mean of each column
for i, p in enumerate(A.columns):
A.loc[:, p] = A[p] - A[p].mean()
A = np.array(A)
w1, v1 = pca(A)
sum1 = np.sum(w1)
varexp1 = [w/sum1 for w in w1]
# PCA on PHQ-9
A = phq_adult_clean.copy()
A = A.dropna() # 1 student dropped for missing val
# Subtract the mean of each column
for i, p in enumerate(A.columns):
A.loc[:, p] = A[p] - A[p].mean()
A = np.array(A)
w2, v2 = pca(A)
sum2 = np.sum(w2)
varexp2 = [w/sum2 for w in w2]
# Save PCs to excel tables
pcsA = np.array(v1)
pcs9 = np.array(v2)
df_A = pd.DataFrame(pcsA, index=np.arange(1, 10), columns=cols1).round(3)
df_9 = pd.DataFrame(pcs9, index=np.arange(1, 10), columns=cols1).round(3)
varexp = np.array([varexp1, varexp2]).T.round(2)
df_pctvar = pd.DataFrame(varexp, index=np.arange(1, 10), columns=['PHQ-A', 'PHQ-9'])
if savefigs:
df_A.to_excel("./phqA_pcs.xlsx")
df_9.to_excel("./phq9_pcs.xlsx")
df_pctvar.to_excel("./pctvar.xlsx")
###Output
_____no_output_____
###Markdown
Parallel method. To evaluate the "significance" of the PCs, we compare the actual eigenvalues to the 95th percentile of eigenvalues from 100,000 randomly generated item response distributions. Here we use Scikit Learn to compute PCA.
###Code
n = 100000
percentile = 0.95
all_eigs = []
matrix_shape = phq_clean.dropna().shape
for i in range(n):
rand = np.random.choice(4, matrix_shape) # random item response
pca_rand = PCA().fit(rand)
all_eigs.append(pca_rand.explained_variance_)
all_eigs = np.array(all_eigs)
all_eigs.sort(axis=0)
print("95th percentile of randomly generated eigenvalues:\n")
print(all_eigs[int(percentile * n), :])
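# Added sketch: compare the actual eigenvalues against the random-data threshold above.
# Under the parallel criterion, components whose eigenvalue exceeds the corresponding
# 95th-percentile random eigenvalue are retained.
threshold = all_eigs[int(percentile * n), :]
actual_eigs = PCA().fit(phq_clean.dropna()).explained_variance_
print("\nActual PHQ-A eigenvalues:")
print(actual_eigs)
print("Components exceeding the random threshold:", np.where(actual_eigs > threshold)[0] + 1)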
###Output
_____no_output_____
###Markdown
Analyze the scree plot and the coefficients of the first two PCs. Another way of evaluating the relative "significance" of the components.
###Code
print("PHQ-A")
print("PC1 explains %f percent of variance" % varexp1[0])
print("PC2 explains %f percent of variance" % varexp1[1])
print("\nPHQ-9")
print("PC1 explains %f percent of variance" % varexp2[0])
print("PC2 explains %f percent of variance" % varexp2[1])
# Scree plot
plt.figure()
ax = plt.subplot(111)
plt.plot(range(len(varexp1)), varexp1, marker='o', color='k', label='PHQ-A')
plt.plot(range(len(varexp2)), varexp2, marker='^', color='k', linestyle='--', label='PHQ-9')
plt.ylabel("Fraction of variance explained", fontsize=12)
plt.xlabel('Principal component', fontsize=12)
plt.xticks(range(0, 9), range(1, 10))
plt.legend()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
if savefigs:
plt.savefig(fname=outdir+'Fig2/scree.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
# PC1
# figsize=(7, 7*0.55)
cols = phq_clean.columns
plt.figure()
ax = plt.subplot(111)
plt.plot(v1[0], color='k', marker='o', label='PHQ-A')
plt.plot(v2[0], color='k', marker='^', linestyle='--', label='PHQ-9')
ax.axhline(y=0, linestyle='--', color='grey')
plt.xticks(range(len(v1[0])), cols, rotation=25)
plt.xlabel('Item', fontsize=12)
plt.ylabel('Coefficient', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.yticks(np.arange(-0.2, 0.6, 0.2))
# plt.ylim([-0.8, 0.8])
if savefigs:
plt.savefig(fname=outdir+'Fig2/pc1_coef.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
# PC2
# plt.figure()
# ax = plt.subplot(111)
# plt.plot(-v1[1], color='k', marker='o', label='PHQ-A')
# plt.plot(v2[1], color='k', marker='^', linestyle='--', label='PHQ-9')
# ax.axhline(y=0, linestyle='--', color='grey')
# plt.xticks(range(len(v1[0])), cols, rotation=25)
# plt.xlabel('Item', fontsize=12)
# plt.ylabel('Coefficient', fontsize=12)
# plt.yticks(np.arange(-0.8, 1.0, 0.4))
# plt.ylim([-0.8, 0.8])
# ax.spines['right'].set_visible(False)
# ax.spines['top'].set_visible(False)
# ax.yaxis.set_ticks_position('left')
# ax.xaxis.set_ticks_position('bottom')
# if savefigs:
# plt.savefig(fname=outdir+'Fig2/pc2_coef.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Fig 3. Inconsistencies in item responses within individuals. [return to top](title) Compute measures of internal inconsistency for each student. See Materials and Methods for definitions of these quantities:
* points of disagreement
* separate items disagreed on
* total internal change (in either direction)
* percent disagreement

Create a new dataframe (df_disagree) that contains the inconsistency metrics for each participant.
###Code
colnames = np.array(phq.columns[1:-2]) # ensure we compare same items from both questionnaires
df_disagree = pd.DataFrame(columns=['ID', 'net_diff', 'pts_disagree', 'items_disagree', 'total_change', 'pct_disagree'])
df = pd.merge(phq, phq_adult, on='ID')
df['diffs'] = np.array(df['PHQTOTAL'] - df['PHQTOTAL_Adult'])
idx = 0
for sid in df['ID']:
row = df[df['ID']==sid]
net_diff = np.round(row['diffs'].values[0]) # net diff in total score
    # Now we compute the new quantities
pts_disagree = 0
items_disagree = 0
total_change = 0
for col in colnames:
col_adult = col + "_Adult"
item_diff = (row[col] - row[col_adult]).values[0] # difference just on this item
if np.isnan(item_diff) or item_diff == 0:
continue
else:
total_change += np.abs(item_diff) # update the total internal change
if (((net_diff > 0) and (item_diff < 0)) or ((net_diff < 0) and (item_diff > 0))):
# PD is defined as the item change in opposite direction from net change
pts_disagree += np.abs(item_diff)
items_disagree += 1
elif (net_diff == 0):
# If 0 net change, PD is defined as 1/2 total item changes
pts_disagree += 0.5 * np.abs(item_diff)
items_disagree += 1
# Compute percent disagreement
if total_change != 0:
pct_disagree = 100 * pts_disagree / total_change
else:
pct_disagree = 0
df_disagree.loc[idx] = [sid, net_diff, pts_disagree, items_disagree, total_change, pct_disagree]
idx += 1
###Output
_____no_output_____
###Markdown
Helper function for plotting individual participants' data
###Code
# Running this function requires the df_disagree dataframe to have already been computed (preceding cell)
# it also requires the base dataframes (phq and phq_adult) to have been computed.
def plot_subject_items(sid):
colnames = np.array(phq.columns[1:-2])
df = pd.merge(phq, phq_adult, on='ID')
df['diffs'] = np.array(df['PHQTOTAL'] - df['PHQTOTAL_Adult'])
net_change = df[df['ID']==sid]['diffs']
pts_inconsistent = df_disagree[df_disagree['ID']==sid]['pts_disagree']
# Get the item responses to the PHQ-9 (adult form) and PHQ-A (child form) for this participant
adult_answers = []
child_answers = []
for col in colnames:
col_adult = col + "_Adult"
adult_answers.append(df[df['ID']==sid][col_adult].values[0])
child_answers.append(df[df['ID']==sid][col].values[0])
plt.figure(figsize=(7, 7*0.55))
ax = plt.subplot(111)
plt.title("Net change: %d, Disagreement: %d" % (net_change, pts_inconsistent), fontsize=12)
plt.ylabel("Item response", fontsize=12)
plt.xlabel("Item", fontsize=12)
plt.xticks(range(len(colnames)), colnames, rotation=30)
plt.yticks([0, 1, 2, 3])
plt.ylim([-0.5, 3.5])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.scatter(range(len(colnames)), adult_answers, marker='o', linestyle="None", color='k')
plt.scatter(range(len(colnames)), child_answers, marker='o', linestyle="None", color='k')
# Draw the arrows from the PHQ-9 (adult) to the PHQ-A (child) response
for i in range(len(colnames)):
diff = child_answers[i] - adult_answers[i]
plt.arrow(x=i, y=adult_answers[i], dx=0, dy=diff,
length_includes_head=True, color='grey', facecolor='grey', head_width=0.25, head_length=0.2)
if savefigs:
plt.savefig(fname=outdir+'Fig3/individuals/%s.pdf' % sid, format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Fig. 3 a&b. Two individual participants
###Code
plot_subject_items('ST35')
plot_subject_items('ST30')
###Output
_____no_output_____
###Markdown
Fig. 3 c & d. Distributions of disagreement metricsSee Materials and Methods for definitions of these quantities:* Total internal points of change* Points of disagreement* Percent disagreement
###Code
total_change = df_disagree['total_change'].values
weights = np.ones_like(total_change)/len(total_change)
plt.figure(figsize=(8, 6))
ax = plt.subplot(111)
plt.hist(total_change, bins=np.arange(-0.5, 11.5, 1), weights=weights, rwidth=0.8, color='w', edgecolor='k')
plt.yticks([0, 0.05, 0.10, 0.15, 0.20])
plt.ylim([0, 0.23])
plt.xlabel('Total internal points of change', fontsize=12)
plt.ylabel('Fraction of participants', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
if savefigs:
plt.savefig(fname=outdir+'Fig3/hist_internal_change.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
print("Total points of change (P_T)")
print("Mean: %f, Standard dev: %f" % (np.mean(total_change), np.std(total_change)))
all_disagree = df_disagree['pts_disagree'].values
weights = np.ones_like(all_disagree)/len(all_disagree)
plt.figure(figsize=(8, 6))
ax = plt.subplot(111)
plt.hist(all_disagree, weights=weights, bins=[0, 1, 2, 3, 4, 5], rwidth=0.8, color='w', edgecolor='k')
plt.xticks([0.5, 1.5, 2.5, 3.5, 4.5], [0, 1, 2, 3, 4])
plt.yticks([0, 0.1, 0.2, 0.3, 0.4])
plt.ylim([0, 0.45])
plt.xlabel('Points of disagreement', fontsize=12)
plt.ylabel('Fraction of participants', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
if savefigs:
plt.savefig(fname=outdir+'Fig3/hist_pts_disagree.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
print("\nPoints of disagreement (P_D)")
print("Mean: %f, Standard dev: %f" % (np.mean(all_disagree), np.std(all_disagree)))
pct_disagree = df_disagree['pct_disagree']
pct_disagree = pct_disagree.dropna().values  # drop NAs, if any (e.g., participants whose percent disagreement could not be computed)
weights = np.ones_like(pct_disagree)/len(pct_disagree)
plt.figure(figsize=(8, 6))
ax = plt.subplot(111)
plt.hist(pct_disagree, weights=weights, bins=[0, 10, 20, 30, 40, 50], color='w', edgecolor='k')
plt.yticks(np.arange(0, 0.5, 0.1))
plt.ylim([0, 0.45])
plt.xlabel('Percent disagreement', fontsize=12)
plt.ylabel('Fraction of participants', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
if savefigs:
plt.savefig(fname=outdir+'Fig3/hist_percent_disagree.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
print("\nPercent disagreement (100 x P_D / P_T)")
print("Mean: %f, Standard dev: %f" %
(np.mean(pct_disagree), np.std(pct_disagree)))
delta = df_disagree[df_disagree['items_disagree'] > 0] # subjects with at least 1 item of inconsistency
print('\n%d / %d subjects had at least 1 item of inconsistency' % (delta.shape[0], df.shape[0]))
item_pts = delta['pts_disagree'] - delta['items_disagree']
multi_delta = len(np.where(item_pts > 0)[0])
print('%d / %d subjects had an inconsistent item with >1 pt of change' % (multi_delta, delta.shape[0]))
###Output
_____no_output_____
###Markdown
3e. Probability of change and disagreement
###Code
colnames = np.array(phq.columns[1:-2])
df = pd.merge(phq, phq_adult, on='ID')
df['diffs'] = np.array(df['PHQTOTAL'] - df['PHQTOTAL_Adult'])
# calculate all changes for every item and participant
dfi = pd.DataFrame(columns=['item', 'v1', 'v2', 'disagree', 'abs_change'])
idx = 0
for sid in df['ID']:
row = df[df['ID']==sid]
net_diff = np.round(row['diffs'].values[0])
for col in colnames:
col_adult = col + "_Adult"
v1 = row[col].values[0]
v2 = row[col_adult].values[0]
item_diff = v1 - v2
if np.isnan(item_diff):
continue # only compare items that were answered on both questionnaires
if item_diff == 0:
disagreed = -1 # no change at all in this item
elif net_diff == 0:
disagreed = 1 # change in this item but net changes canceled out
elif (((net_diff > 0) and (item_diff < 0)) or ((net_diff < 0) and (item_diff > 0))):
disagreed = 1 # change in this item opposed net change
elif (((net_diff > 0) and (item_diff > 0)) or ((net_diff < 0) and (item_diff < 0))):
disagreed = 0 # change in this item aligned with net change
else:
            raise RuntimeError("Unexpected combination of net and item-level change")
dfi.loc[idx] = [col, v1, v2, disagreed, np.abs(item_diff)]
idx += 1
# compute the probability of any change, and the probability of disagreement, for each item
# (both are per-item proportions over all participants who answered the item on both forms)
df_prob = pd.DataFrame(columns=['item', 'prob_change', 'prob_disagree'])
idx = 0
for col in colnames:
d = dfi[dfi['item']==col]
num_total = d.shape[0]
num_change = d[d['abs_change'] > 0].shape[0]
num_disagree = d[d['disagree'] == 1].shape[0]
prob_change = num_change / num_total
prob_disagree = num_disagree / num_total
df_prob.loc[idx] = [col, prob_change, prob_disagree]
idx += 1
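# Added sketch: the probabilities above are unconditional per-item proportions. The
# conditional probability of disagreement given that the item changed at all may also
# be of interest; items that never changed are left as NaN.
df_prob['prob_disagree_given_change'] = df_prob['prob_disagree'] / df_prob['prob_change'].replace(0, np.nan)
print(df_prob.round(3))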
df_prob = df_prob.sort_values(by='prob_change', ascending=False)
plt.figure(figsize=(9, 6))
ax = plt.subplot(111)
plt.scatter(np.arange(df_prob.shape[0]), df_prob['prob_change'].values, color='k', marker='.', s=150, label='P(change)')
plt.scatter(np.arange(df_prob.shape[0]), df_prob['prob_disagree'].values, color='k', marker='+', s=150, label='P(disagree)')
plt.xticks(np.arange(df_prob.shape[0]), df_prob['item'].values, rotation=45)
plt.ylabel('Probability', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.legend(frameon=False)
if savefigs:
plt.savefig(fname=outdir+'Fig3/prob_change.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Supplemental. Number of ways of arranging items
###Code
from itertools import product
# Enumerate all possible arrangements of item responses and compute their sums
item_response_levels = [0, 1, 2, 3]
omega = product(item_response_levels, repeat=9)
sums = [np.sum(outcome) for outcome in omega]
# Count how many ways there are to produce each possible sum score
num_ways = {}
for s in sums:
if s not in num_ways.keys():
num_ways[s] = 1
else:
num_ways[s] += 1
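# Added sanity check: with 9 items and 4 response levels there are 4**9 = 262,144
# possible response patterns, so the per-sum counts should total exactly that.
assert sum(num_ways.values()) == 4**9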
# Count how many ways of choosing a pair of scores (start and end)
num_change_ways = {}
for score1 in num_ways.keys():
num_change_ways[score1] = {}
for score2 in num_ways.keys():
# We take the product of the number of ways for each score
num_change_ways[score1][score2] = np.log10(num_ways[score1] * num_ways[score2])
xs = np.arange(0, 28, 1)
plt.figure(figsize=(9, 6))
ax = plt.subplot(111)
plt.xlabel('End score', fontsize=12)
plt.ylabel('Log ways of change', fontsize=12)
i = 0
plt.plot(xs, list(num_change_ways[i].values()), marker='.', markersize=8,
markeredgecolor='k', markerfacecolor='k', color='grey', linestyle='dashed', label=i)
i = 4
plt.plot(xs, list(num_change_ways[i].values()), marker='*', markeredgecolor='k', markersize=8,
markerfacecolor='k', color='grey', linestyle='dashed', label=i)
i = 9
plt.plot(xs, list(num_change_ways[i].values()), marker='+', markeredgecolor='k', markersize=8,
markerfacecolor='k', color='grey', linestyle='dashed', label=i)
plt.legend(frameon=False, loc='upper right')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
if savefigs:
plt.savefig(fname=outdir+'Fig3/number_ways.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
---- Supplemental Figures ---- [return to top](title) Supplemental Fig 1. Total scores after square root transformation. [return to top](title) a&b. Total score distributions and tests for normality
###Code
# Weights for converting to frequency histograms
phqweights = np.ones_like(phq['PHQTOTAL'])/len(phq['PHQTOTAL'])
phq_adult_weights = np.ones_like(phq_adult['PHQTOTAL_Adult'])/len(phq_adult['PHQTOTAL_Adult'])
gadweights = np.ones_like(gad['GADRAW'])/len(gad['GADRAW'])
# Child & Adolescent PHQ -- Square root transformation
bins = np.arange(0, 6, 0.5)
plt.figure()
ax = plt.subplot(111)
sqrt_phq = phq['PHQTOTAL'].apply(lambda x: np.sqrt(x))
plt.hist(sqrt_phq, color='white', weights=phqweights, edgecolor='black')
plt.xlabel('Square root of PHQ-A total score', fontsize=14)
plt.ylabel('Fraction of students', fontsize=14)
plt.yticks([0, 0.05, 0.1, 0.15, 0.2, 0.25])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
W, p = stats.shapiro(sqrt_phq)
print("Shapiro-Wilk Test for SQUARE ROOT of PHQ-A Total: %f" % p)
if savefigs:
plt.savefig(fname=outdir+'FigS1/sqrt_phqA_hist.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
# PHQ-9 (Adult) -- Square root transformation
bins = np.arange(0, 6, 0.5)
plt.figure()
ax = plt.subplot(111)
sqrt_phq = phq_adult['PHQTOTAL_Adult'].apply(lambda x: np.sqrt(x))
plt.hist(sqrt_phq, color='white', weights=phq_adult_weights, edgecolor='black')
plt.xlabel('Square root of PHQ-9 total score', fontsize=14)
plt.ylabel('Fraction of students', fontsize=14)
plt.yticks([0, 0.05, 0.1, 0.15, 0.2, 0.25])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
W, p = stats.shapiro(sqrt_phq)
print("Shapiro-Wilk Test for SQUARE ROOT of PHQ-9 Total: %f" % p)
if savefigs:
plt.savefig(fname=outdir+'FigS1/sqrt_phq9_hist.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
# GAD -- Square root transformation
plt.figure()
ax = plt.subplot(111)
sqrt_gad = gad['GADRAW'].apply(lambda x: np.sqrt(x))
plt.hist(sqrt_gad, weights=gadweights, color='white', edgecolor='black')
plt.xlabel('Square root of GAD-A total score', fontsize=14)
plt.ylabel('Fraction of students', fontsize=14)
plt.yticks([0, 0.05, 0.1, 0.15, 0.2, 0.25])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
W, p = stats.shapiro(sqrt_gad)
print("Shapiro-Wilk Test for SQUARE ROOT of GAD Total (Average): %f" % p)
if savefigs:
plt.savefig(fname=outdir+'FigS1/sqrt_gad_hist.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
c. PHQ vs GAD linear correlation after square root transform
###Code
df = pd.merge(phq, gad, on='ID')
df['PHQTOTAL'] = df['PHQTOTAL'].apply(lambda x: np.sqrt(x))
df['GADRAW'] = df['GADRAW'].apply(lambda x: np.sqrt(x))
plt.figure()
ax = plt.subplot(111)
plt.scatter(df['GADRAW'], df['PHQTOTAL'], color='k', alpha=0.7)
plt.xlabel('Square root of\nGAD-A Total Score', fontsize=14)
plt.ylabel('Square root of\nPHQ-A Total Score', fontsize=14)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
r, p = stats.pearsonr(df['GADRAW'], df['PHQTOTAL'])
print("PHQ-A: p = %f, R-squared = %f" % (p, r**2))
if savefigs:
plt.savefig(fname=outdir+'FigS1/sqrt_phqA_vs_gad.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
df = pd.merge(phq_adult, gad, on='ID')
df['PHQTOTAL_Adult'] = df['PHQTOTAL_Adult'].apply(lambda x: np.sqrt(x))
df['GADRAW'] = df['GADRAW'].apply(lambda x: np.sqrt(x))
plt.figure()
ax = plt.subplot(111)
plt.scatter(df['GADRAW'], df['PHQTOTAL_Adult'], color='k', alpha=0.7)
plt.xlabel('Square root of\nGAD-A Total Score', fontsize=14)
plt.ylabel('Square root of\nPHQ-9 Total Score', fontsize=14)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
r, p = stats.pearsonr(df['GADRAW'], df['PHQTOTAL_Adult'])
print("PHQ-9: p = %f, R-squared = %f" % (p, r**2))
if savefigs:
plt.savefig(fname=outdir+'FigS1/sqrt_phq9_vs_gad.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Supplemental Fig 2. Frequency distribution of items and response categories. [return to top](title)
###Code
from textwrap import wrap
phqweights = np.ones_like(phq['PHQTOTAL'])/len(phq['PHQTOTAL'])
phq_adult_weights = np.ones_like(phq_adult['PHQTOTAL_Adult'])/len(phq_adult['PHQTOTAL_Adult'])
colnames = phq.columns[1:-2]
colors = plt.cm.jet(np.linspace(0, 1, len(colnames)))
df = phq.copy()
plt.figure()
ax = plt.subplot(111)
for i, colname in enumerate(colnames):
col = phq[colname]
valid_vals = col.dropna()
weights = np.ones_like(valid_vals)/len(valid_vals)
responses, bins = np.histogram(valid_vals, bins=[0,1,2,3,4], weights=weights)
plt.plot(range(3), responses[1:], marker='o', label=colname, color=colors[i])
plt.legend(frameon=False, bbox_to_anchor=(1, 1.02))
plt.title('PHQ-A', fontsize=12)
plt.xticks([0, 1, 2], ['Several', 'More than\nHalf', 'Nearly every'])
plt.ylabel('Fraction of students', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.yticks([0, 0.1, 0.2, 0.3, 0.4])
if savefigs:
plt.savefig(fname=outdir+'FigS2/item_freq_phqA.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
df = phq_adult.copy()
plt.figure()
ax = plt.subplot(111)
for i, colname in enumerate(colnames):
colname_adult = colname + "_Adult"
col = phq_adult[colname_adult]
valid_vals = col.dropna()
weights = np.ones_like(valid_vals)/len(valid_vals)
responses, bins = np.histogram(valid_vals, bins=[0,1,2,3,4], weights=weights)
plt.plot(range(3), responses[1:], marker='o', label=colname, color=colors[i])
plt.title('PHQ-9', fontsize=12)
plt.xticks([0, 1, 2], ['Several', 'More than\nHalf', 'Nearly every'])
plt.ylabel('Fraction of subjects', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.yticks([0, 0.1, 0.2, 0.3, 0.4])
if savefigs:
plt.savefig(fname=outdir+'FigS2/item_freq_phq9.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Supplemental Fig 3. PHQ total scores by rater and response format. (a) Box plots of PHQ-9 and PHQ-A total scores, broken out by the three researchers. (b) Box plots of PHQ-9 total scores, broken out by self-report vs. out-loud administration. [return to top](title)
###Code
rater1 = ['ST1', 'ST3', 'ST14', 'ST15', 'ST16', 'ST17', 'ST20', 'ST21',
'ST24', 'ST25', 'ST26', 'ST27', 'ST28', 'ST35', 'ST37', 'ST29', 'ST36']
rater2 = ['ST12', 'ST13', 'ST19', 'ST22', 'ST31', 'ST32', 'ST38',
'ST41', 'ST48', 'ST49', 'ST50', 'ST51', 'ST52', 'ST53']
rater3 = ['ST2', 'ST4', 'ST6', 'ST7', 'ST8', 'ST9', 'ST10', 'ST11', 'ST18',
'ST23', 'ST30', 'ST33', 'ST34', 'ST39', 'ST40', 'ST43', 'ST45', 'ST46']
rater3_idx = [2, 4, 6, 7, 8, 9, 10, 11, 18, 23, 30, 33, 34, 39, 40, 43, 45, 46]
###Output
_____no_output_____
###Markdown
(a) PHQ total score by different raters
###Code
df = phq.copy()
r1 = df[df['ID'].isin(rater1)]
r2 = df[df['ID'].isin(rater2)]
r3 = df[df['ID'].isin(rater3)]
plt.figure(figsize=(7, 6))
ax = plt.subplot(111)
bplot = plt.boxplot([r1['PHQTOTAL'], r2['PHQTOTAL'], r3['PHQTOTAL']],
patch_artist=True, boxprops=boxprops, medianprops=medianprops)
bplot['boxes'][0].set_facecolor('w')
bplot['boxes'][1].set_facecolor('w')
bplot['boxes'][2].set_facecolor('w')
plt.xticks([1, 2, 3], ['Rater 1', 'Rater 2', 'Rater 3'], fontsize=12)
plt.ylabel('PHQ-A Total Score', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.yticks([0, 4, 8, 12, 16, 20])
if savefigs:
plt.savefig(fname=outdir+'FigS3/box_raters_phqA.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
df = phq_adult.copy()
r1 = df[df['ID'].isin(rater1)]
r2 = df[df['ID'].isin(rater2)]
r3 = df[df['ID'].isin(rater3)]
plt.figure(figsize=(7, 6))
ax = plt.subplot(111)
bplot = plt.boxplot([r1['PHQTOTAL_Adult'], r2['PHQTOTAL_Adult'], r3['PHQTOTAL_Adult']],
patch_artist=True, boxprops=boxprops, medianprops=medianprops)
bplot['boxes'][0].set_facecolor('w')
bplot['boxes'][1].set_facecolor('w')
bplot['boxes'][2].set_facecolor('w')
plt.xticks([1, 2, 3], ['Rater 1', 'Rater 2', 'Rater 3'], fontsize=12)
plt.ylabel('PHQ-9 Total Score', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.yticks([0, 4, 8, 12, 16, 20])
if savefigs:
plt.savefig(fname=outdir+'FigS3/box_raters_phq9.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
(b) PHQ total scores by response format. Several students completed the PHQ-9 out loud instead of by self-report.
###Code
outloud_adult_ids = ['ST2', 'ST6', 'ST7', 'ST9', 'ST13', 'ST18', 'ST30',
'ST32', 'ST34', 'ST41', 'ST43', 'ST45', 'ST46']
df = phq_adult.copy()
outloud = df[df['ID'].isin(outloud_adult_ids)]
selfreport = df[~df['ID'].isin(outloud_adult_ids)]
plt.figure(figsize=(7, 6))
ax = plt.subplot(111)
bplot = plt.boxplot([outloud['PHQTOTAL_Adult'], selfreport['PHQTOTAL_Adult']],
patch_artist=True, boxprops=boxprops, medianprops=medianprops)
bplot['boxes'][0].set_facecolor('w')
bplot['boxes'][1].set_facecolor('w')
plt.xticks([1, 2], ['Out loud', 'Self report'], fontsize=12)
plt.ylabel('PHQ-9 Total Score', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.yticks([0, 4, 8, 12, 16, 20])
if savefigs:
plt.savefig(fname=outdir+'FigS3/box_response_phq9.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
print('\nPHQ-9:')
print("Out loud median (n=%d): %f" % (outloud.shape[0], np.median(outloud['PHQTOTAL_Adult'])))
print("Self report median (n=%d): %f" % (selfreport.shape[0], np.median(selfreport['PHQTOTAL_Adult'])))
U, p = stats.mannwhitneyu(outloud['PHQTOTAL_Adult'], selfreport['PHQTOTAL_Adult'], alternative='greater')
cles = U / (outloud.shape[0] * selfreport.shape[0]) # common language effect size
print("Mann-Whitney U = %f, p = %f" % (U, p))
print("Common Language Effect Size: %f" % cles)
###Output
_____no_output_____
###Markdown
Re-compute percent disagreement after removing any items with even a single word of difference between the two questionnaires
###Code
words_changed = ['DEPRSD', 'FOOD', 'FOCUS']
remove_changed = True
colnames = np.array(phq.columns[1:-2]) # ensure we compare same items from both questionnaires
df_disagree = pd.DataFrame(columns=['ID', 'net_diff', 'pts_disagree', 'items_disagree', 'total_change', 'pct_disagree'])
df = pd.merge(phq, phq_adult, on='ID')
df['diffs'] = np.array(df['PHQTOTAL'] - df['PHQTOTAL_Adult'])
idx = 0
for sid in df['ID']:
row = df[df['ID']==sid]
net_diff = np.round(row['diffs'].values[0]) # net diff in total score
    # Now we compute the new quantities
pts_disagree = 0
items_disagree = 0
total_change = 0
for col in colnames:
if remove_changed and col in words_changed:
continue
col_adult = col + "_Adult"
item_diff = (row[col] - row[col_adult]).values[0] # difference just on this item
if np.isnan(item_diff) or item_diff == 0:
continue
else:
total_change += np.abs(item_diff) # update the total internal change
if (((net_diff > 0) and (item_diff < 0)) or ((net_diff < 0) and (item_diff > 0))):
# PD is defined as the item change in opposite direction from net change
pts_disagree += np.abs(item_diff)
items_disagree += 1
elif (net_diff == 0):
# If 0 net change, PD is defined as 1/2 total item changes
pts_disagree += 0.5 * np.abs(item_diff)
items_disagree += 1
# Compute percent disagreement
if total_change != 0:
pct_disagree = 100 * pts_disagree / total_change
else:
pct_disagree = 0
df_disagree.loc[idx] = [sid, net_diff, pts_disagree, items_disagree, total_change, pct_disagree]
idx += 1
total_change = df_disagree['total_change'].values
weights = np.ones_like(total_change)/len(total_change)
plt.figure(figsize=(8, 6))
ax = plt.subplot(111)
plt.hist(total_change, bins=np.arange(-0.5, 9.5, 1), weights=weights, rwidth=0.8, color='w', edgecolor='k')
plt.yticks([0, 0.05, 0.10, 0.15, 0.20])
plt.ylim([0, 0.23])
plt.xlabel('Total internal points of change', fontsize=12)
plt.ylabel('Fraction of participants', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
if savefigs:
plt.savefig(fname=outdir+'FigS4/hist_internal_change_pure.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
print("Total points of change (P_T)")
print("Mean: %f, Standard dev: %f" % (np.mean(total_change), np.std(total_change)))
pct_disagree = df_disagree['pct_disagree']
pct_disagree = pct_disagree.dropna().values  # drop NAs, if any (e.g., participants whose percent disagreement could not be computed)
weights = np.ones_like(pct_disagree)/len(pct_disagree)
plt.figure(figsize=(8, 6))
ax = plt.subplot(111)
plt.hist(pct_disagree, weights=weights, bins=[0, 10, 20, 30, 40, 50], color='w', edgecolor='k')
plt.xlabel('Percent disagreement', fontsize=12)
plt.ylabel('Fraction of participants', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
if savefigs:
plt.savefig(fname=outdir+'FigS4/hist_percent_disagree_pure.pdf', format='pdf', dpi=resolution, bbox_inches="tight")
print("\nPercent disagreement (100 x P_D / P_T)")
print("Mean: %f, Standard dev: %f" %
(np.mean(pct_disagree), np.std(pct_disagree)))
delta = df_disagree[df_disagree['items_disagree'] > 0] # subjects with at least 1 item of inconsistency
print('\n%d / %d subjects had at least 1 item of inconsistency' % (delta.shape[0], df.shape[0]))
item_pts = delta['pts_disagree'] - delta['items_disagree']
multi_delta = len(np.where(item_pts > 0)[0])
print('%d / %d subjects had an inconsistent item with >1 pt of change' % (multi_delta, delta.shape[0]))
###Output
_____no_output_____
###Markdown
Other metrics. [return to top](title) Cronbach's Alpha as a measure of item interrelatedness. Cronbach's alpha represents a measure of the interrelatedness of items, often described as "internal consistency", and is a lower bound on "reliability" as it is classically defined. Note that Cronbach's alpha does not imply unidimensionality, nor does it necessarily imply high covariances among individual items of the scale. To address the latter, we also compute the mean inter-item correlation for each item. These results were confirmed with SPSS.
###Code
# Cronbach's Alpha
# -----------------
phq_clean = phq.iloc[:, 1:-2] # just the item response cols
phq_adult_clean = phq_adult.iloc[:, 1:-3]
alpha_child = pg.cronbach_alpha(phq_clean, nan_policy='pairwise')
alpha_adult = pg.cronbach_alpha(phq_adult_clean, nan_policy='pairwise')
print("Cronbach's Alpha: PHQ-A")
print(alpha_child)
print("\nCronbach's Alpha: PHQ-9")
print(alpha_adult)
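# Added cross-check (sketch): Cronbach's alpha from its standard formula,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
# computed on complete cases only, so it may differ slightly from the
# pairwise-deletion estimates above.
def cronbach_manual(items):
    items = items.dropna()
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))
print("\nManual alpha (PHQ-A, complete cases): %f" % cronbach_manual(phq_clean))
print("Manual alpha (PHQ-9, complete cases): %f" % cronbach_manual(phq_adult_clean))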
# PHQ-A: Compute average inter-item correlations (not including self-correlations)
corr_mat = phq_clean.corr(method='pearson')
all_corrs = []
for colname in corr_mat.columns:
nonself = [c for c in corr_mat.columns if c != colname]
corrs = corr_mat.loc[nonself, colname].values
all_corrs.extend(corrs)
print("\nPHQ-A:")
print("Average (stdev) iteritem correlation = %f +/- %f" % (np.mean(all_corrs), np.std(all_corrs)))
# PHQ-9: Compute average inter-item correlations (not including self-correlations)
corr_mat = phq_adult_clean.corr(method='pearson')
all_corrs = []
for colname in corr_mat.columns:
nonself = [c for c in corr_mat.columns if c != colname]
corrs = corr_mat.loc[nonself, colname].values
all_corrs.extend(corrs)
print("\nPHQ-9:")
print("Average (stdev) iteritem correlation = %f +/- %f" % (np.mean(all_corrs), np.std(all_corrs)))
###Output
_____no_output_____
###Markdown
Intraclass Correlation Coefficient (ICC) as a measure of test-retest reliability. We report ICC2, which measures absolute agreement with random raters. Commenting out the square root transformation will show the results prior to transformation. These results were confirmed with the output of SPSS.
###Code
df = pd.merge(phq, phq_adult, on='ID')
df['PHQTOTAL'] = df['PHQTOTAL'].apply(lambda x: np.sqrt(x))
df['PHQTOTAL_Adult'] = df['PHQTOTAL_Adult'].apply(lambda x: np.sqrt(x))
ids = np.array(df['ID'])
ids_repeat = np.concatenate((ids, ids))
child = np.array(df['PHQTOTAL'])
adult = np.array(df['PHQTOTAL_Adult'])
both = np.concatenate((child, adult))
tests = ['child']*len(child) + ['adult']*len(adult)
df2 = pd.DataFrame(columns=['ID', 'SCORE', 'TEST'])
df2['ID'] = ids_repeat
df2['SCORE'] = both
df2['TEST'] = tests
print("ICC with absolute agreement (prior to square root transform)")
pg.intraclass_corr(data=df2, targets='ID', raters='TEST', ratings='SCORE')
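# Added sketch: pingouin returns all six ICC forms; the row labelled 'ICC2'
# (single random raters, absolute agreement) is the estimate referred to above.
icc = pg.intraclass_corr(data=df2, targets='ID', raters='TEST', ratings='SCORE')
icc[icc['Type'] == 'ICC2']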
###Output
_____no_output_____
###Markdown
GAD results not dependent on modified items
###Code
# GAD without the questions derived from #10
gad_sub = gad.drop(['ALC', 'WEED', 'MEDS', 'OBJCT', 'HELP', 'ACTVTY', 'RLIGON'], axis=1)
gsc = gad_sub.iloc[:, 1:-3]
gsc['gadraw'] = gsc.sum(axis=1)
gsc['ID'] = gad['ID']
plt.figure()
ax = plt.subplot(111)
plt.hist(gsc['gadraw'], color='white', edgecolor='black')
plt.xlabel('GAD-A total score\nnot including modified items', fontsize=14)
plt.ylabel('Number of students', fontsize=14)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
W, p = stats.shapiro(gsc['gadraw'])
print("Shapiro-Wilk test: %f" % p)
df = pd.merge(phq, gsc, on='ID')
plt.figure()
ax = plt.subplot(111)
plt.scatter(df['gadraw'], df['PHQTOTAL'], color='k')
plt.xlabel('GAD-A total score\nnot including modified items', fontsize=14)
plt.ylabel('PHQ-A total score', fontsize=14)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.yticks([0, 4, 8, 12, 16, 20])
r, p = stats.pearsonr(df['gadraw'], df['PHQTOTAL'])
print("PHQ-A r-squared = %f" % r**2)
df = pd.merge(phq_adult, gsc, on='ID')
plt.figure()
ax = plt.subplot(111)
plt.scatter(df['gadraw'], df['PHQTOTAL_Adult'], color='k')
plt.xlabel('GAD-A total score\nnot including modified items', fontsize=14)
plt.ylabel('PHQ-9 Total Score', fontsize=14)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.yticks([0, 4, 8, 12, 16, 20])
r, p = stats.pearsonr(df['gadraw'], df['PHQTOTAL_Adult'])
print("PHQ-9 r-squared = %f" % r**2)
###Output
_____no_output_____
###Markdown
Bland-Altman plot on square root PHQ total. Note that the "fan out" pattern could be largely or fully due to the fact that the majority of measurements were very small, which would necessarily produce a "squeezing" effect on the Bland-Altman plot.
###Code
df = pd.merge(phq, phq_adult, on='ID')
df['PHQTOTAL'] = df['PHQTOTAL'].apply(lambda x: np.sqrt(x))
df['PHQTOTAL_Adult'] = df['PHQTOTAL_Adult'].apply(lambda x: np.sqrt(x))
df['diffs'] = np.array(df['PHQTOTAL'] - df['PHQTOTAL_Adult'])
df['avg'] = np.array((df['PHQTOTAL'] + df['PHQTOTAL_Adult']) / 2)
plt.figure()
ax = plt.subplot(111)
plt.scatter(df['avg'], df['diffs'], color='k', alpha=0.6)
plt.xticks([0, 1, 2, 3, 4])
plt.yticks([-2, -1, 0, 1, 2])
plt.ylabel(r'Difference in $\sqrt{\mathrm{PHQ}}$', fontsize=12)
plt.xlabel(r'Average $\sqrt{\mathrm{PHQ}}$', fontsize=12)
mu = np.mean(df['diffs'])
sdev = np.std(df['diffs'])
ax.axhline(y=mu, linestyle='--', color='grey')
ax.axhline(y=mu+2*sdev, linestyle='--', color='grey')
ax.axhline(y=mu-2*sdev, linestyle='--', color='grey')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
###Output
_____no_output_____
###Markdown
Confirm PCA results using scikit-learn. We computed PCA by hand in the Fig 2 analyses above. Here we confirm those results using the Scikit Learn library.
###Code
phq_clean = phq.iloc[:, 1:-2]
pca_child = PCA().fit(phq_clean.dropna())
print("Percent variance explained by PCs of PHQ-A responses:")
print(pca_child.explained_variance_ratio_)
print("\nAll PCs of PHQ-A responses:")
print(pd.DataFrame(pca_child.components_, index=np.arange(1,10,1), columns=phq_clean.columns))
phq_adult_clean = phq_adult.iloc[:, 1:-3]
pca_adult = PCA().fit(phq_adult_clean.dropna())
print("\nPercent variance explained by PCs of PHQ-9 responses:")
print(pca_adult.explained_variance_ratio_)
print("\nAll PCs of PHQ-9 responses:")
print(pd.DataFrame(pca_adult.components_, index=np.arange(1,10,1), columns=phq_adult_clean.columns))
###Output
_____no_output_____ |
practice/week-08/W08_1_Linear_regression.ipynb | ###Markdown
BIG DATA ANALYSIS: Linear Regression --- 1. Generating synthetic data
###Code
import numpy as np
X = np.array([[1], [2], [3], [4]])
#y= 3x+3
y = np.dot(X, np.array([3])) + 3
# Inspect the data
import matplotlib.pyplot as plt
plt.scatter(X[:,0], y[:])
plt.show()
###Output
_____no_output_____
###Markdown
2. Training the Linear Regression model
###Code
from sklearn.linear_model import LinearRegression
reg = LinearRegression().fit(X, y)
# score() returns R^2: close to 1 means x explains y well; close to 0 means little relationship between x and y
print(reg.score(X, y))
# Estimated weight (coefficient) vector
print(reg.coef_)
# Estimated intercept (constant term)
print(reg.intercept_)
# Predicted value when x = 5
print(reg.predict(np.array([[5]])))
###Output
1.0
[3.]
2.9999999999999982
[18.]
###Markdown
3. Experimenting with a larger dataset
###Code
from sklearn.datasets import make_regression
X, y, coef = make_regression(n_samples=100, n_features=1,
bias=100, noise=10, coef=True, random_state=1)
X
print(y)
import matplotlib.pyplot as plt
plt.scatter(X[:,0], y[:])
plt.show()
reg = LinearRegression().fit(X, y)
# Estimated weight (coefficient) vector
print(reg.coef_)
# Estimated intercept (constant term)
print(reg.intercept_)
x = np.linspace(-3,3)
fn = reg.coef_*x + reg.intercept_
plt.scatter(X[:,0], y[:])
plt.plot(x, fn,c="red")
plt.show()
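# Quick check (added example): compare the recovered parameters with the true coefficient
# returned by make_regression, and report R^2 on the training data.
print("True coefficient:", coef)
print("Estimated coefficient:", reg.coef_)
print("R^2 on the training data:", reg.score(X, y))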
###Output
_____no_output_____ |
ModelFormulation.ipynb | ###Markdown
###Code
from __future__ import division
from gekko import GEKKO
import numpy as np
#Initial conditions
c = np.array([0.03,0.015,0.06,0])
areas = np.array([13.4, 12, 384.5, 4400])
V0 = np.array([0.26, 0.18, 0.68, 22])
h0 = 1000 * V0 / areas
Vout0 = c * np.sqrt(h0)
vin = [0.13,0.13,0.13,0.21,0.21,0.21,0.13,\
0.13,0.13,0.13,0.13,0.13,0.13]
Vin = [0,0,0,0]
#Initialize model
m = GEKKO()
#time array
m.time = np.linspace(0,1,13)
#define constants
c = m.Array(m.Const,4,value=0)
c[0].value = 0.03
c[1].value = c[0] / 2
c[2].value = c[0] * 2
c[3].value = 0
Vuse = [0.03,0.05,0.02,0.00]
#Parameters
evap_c = m.Array(m.Param,4,value=1e-5)
evap_c[-1].value = 0.5e-5
A = [m.Param(value=i) for i in areas]
Vin[0] = m.Param(value=vin)
#Variables
V = [m.Var(value=i) for i in V0]
h = [m.Var(value=i) for i in h0]
Vout = [m.Var(value=i) for i in Vout0]
#Intermediates
Vin[1:4] = [m.Intermediate(Vout[i]) for i in range(3)]
Vevap = [m.Intermediate(evap_c[i] * A[i]) for i in range(4)]
#Equations
m.Equations([V[i].dt() == \
Vin[i] - Vout[i] - Vevap[i] - Vuse[i] \
for i in range(4)])
m.Equations([1000*V[i] == h[i]*A[i] for i in range(4)])
m.Equations([Vout[i]**2 == c[i]**2 * h[i] for i in range(4)])
#Set to simulation mode
m.options.imode = 4
#Solve
m.solve()
#%% Plot results
time = [x * 12 for x in m.time]
# plot results
import matplotlib.pyplot as plt
plt.figure(1)
plt.subplot(311)
plt.plot(time,h[0].value,'r-')
plt.plot(time,h[1].value,'b--')
plt.ylabel('Level (m)')
plt.legend(['Jordanelle Reservoir','Deer Creek Reservoir'])
plt.subplot(312)
plt.plot(time,h[3].value,'g-')
plt.plot(time,h[2].value,'k:')
plt.ylabel('Level (m)')
plt.legend(['Great Salt Lake','Utah Lake'])
plt.subplot(313)
plt.plot(time,Vin[0].value,'k-')
plt.plot(time,Vout[0].value,'r-')
plt.plot(time,Vout[1].value,'b--')
plt.plot(time,Vout[2].value,'g-')
plt.xlabel('Time (month)')
plt.ylabel('Flow (km3/yr)')
plt.legend(['Supply Flow','Upper Provo River', \
'Lower Provo River','Jordan River'])
plt.show()
###Output
_____no_output_____ |
Notebooks/Duplicated_Checking.ipynb | ###Markdown
To load the entire dataset
###Code
root_folder = Path('../Data/')
file_name = 'TRACE2014_jinming.csv'
file_path = root_folder / file_name
index_col = ['REC_CT_NB']
dtype={'TRC_ST': str, 'BOND_SYM_ID': str, 'CUSIP_ID': str, 'ENTRD_VOL_QT': np.int64, 'RPTD_PR': np.float64 \
,'YLD_SIGN_CD': str, 'YLD_PT': np.float64,'ASOF_CD': str, 'Report_Dealer_Index': str \
,'Contra_Party_Index': str, 'ISSUE_ID': str, 'OFFERING_AMT':np.int64}
data = pd.read_csv(file_path)
###Output
C:\Users\raymo\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3018: DtypeWarning: Columns (5,8,17,20,23,25) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
To load only the fields of interest
###Code
root_folder = Path('../Data/')
#file_name = 'TRACE2014_jinming_5000.csv'
file_name = 'TRACE2014_jinming.csv'
file_path = root_folder / file_name
#field_of_interest_dateOnly = ['BOND_SYM_ID','CUSIP_ID','SCRTY_TYPE_CD','ENTRD_VOL_QT','RPTD_PR','RPT_SIDE_CD','TRD_EXCTN_DT','TRD_RPT_DT','RPT_SIDE_CD','Report_Dealer_Index','Contra_Party_Index']
field_of_interest_datetime = ['BOND_SYM_ID','CUSIP_ID','SCRTY_TYPE_CD','ENTRD_VOL_QT','RPTD_PR','RPT_SIDE_CD' \
,'TRD_EXCTN_DT_D','EXCTN_TM_D','TRD_RPT_DT','TRD_RPT_TM', 'Report_Dealer_Index'\
,'Contra_Party_Index','TRC_ST']
dtype={'BOND_SYM_ID': str, 'CUSIP_ID': str,'SCRTY_TYPE_CD':str, 'ENTRD_VOL_QT': np.float64, 'RPTD_PR': np.float64 \
,'RPT_SIDE_CD':str, 'Report_Dealer_Index': str,'Contra_Party_Index': str, 'TRC_ST':str}
parse_dates = {'TRD_RPT_DTTM':['TRD_RPT_DT','TRD_RPT_TM'],'TRD_EXCTN_DTTM':['TRD_EXCTN_DT_D','EXCTN_TM_D']}
data = pd.read_csv(file_path,usecols=field_of_interest_datetime,parse_dates=parse_dates\
,infer_datetime_format=True,converters={'TRD_RPT_TM':lambda x : pd.to_datetime(x,format='%H%M%S')})
###Output
_____no_output_____
###Markdown
Data Validation
###Code
shape = data.shape
print('We have {} rows and {} columns.'.format(shape[0], shape[1]))
n_duplicated = data.duplicated().sum()
percentage = n_duplicated/shape[0]*100
print('{} rows are exact duplicates of another row, which is {:.2f}% of the data set.'.format(n_duplicated, percentage))
print('Number of duplications based on grouping keys:')
test_duplication = [['BOND_SYM_ID'],['CUSIP_ID'],['BOND_SYM_ID','CUSIP_ID'],['Report_Dealer_Index','TRD_EXCTN_DTTM'],
['Report_Dealer_Index','TRD_EXCTN_DTTM','BOND_SYM_ID']]
for test in test_duplication:
print('{} : {}'.format(test, data.duplicated(subset=test,keep='first').sum()))
subset = ['BOND_SYM_ID','CUSIP_ID','SCRTY_TYPE_CD','ENTRD_VOL_QT','RPTD_PR','RPT_SIDE_CD','TRD_EXCTN_DT','TRD_RPT_DT','Report_Dealer_Index','Contra_Party_Index']
duplicate = data.loc[data.duplicated(keep=False,subset=subset)].sort_values(by=['BOND_SYM_ID','TRD_EXCTN_DT','RPTD_PR'])
duplicate.to_csv('Duplicated_Acedemic_TRACE.csv')
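# Possible next step (added sketch): if these records are treated as true duplicates,
# keep only the first occurrence within each group defined by the same subset of fields.
deduped = data.drop_duplicates(subset=subset, keep='first')
print('Rows before: {}, after dropping duplicates: {}'.format(data.shape[0], deduped.shape[0]))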
###Output
_____no_output_____ |
KrigingExample/.ipynb_checkpoints/1. ParquetToFrost-checkpoint.ipynb | ###Markdown
Parquet to SensorThings
###Code
import csv
import re
import math
import time
import random
import numpy as np
import sys
import json
import requests
from pyspark.sql import SparkSession
from pyspark.conf import SparkConf
#spark conf
conf = ( SparkConf()
.setMaster("local[*]")
.setAppName('pyspark')
)
ss = SparkSession.builder.config(conf=conf).getOrCreate()
sc = ss.sparkContext
inputDir = "./out/outAllSortByTimeStampAndIDBig/TimeStamp=20150128/"
dataFileDF = ss.read.option("basepath",inputDir).parquet(inputDir)#+"TimeStamp=20160504/ID=I72406BI1")
dataFileDF = dataFileDF.withColumnRenamed("AitTemperature","AirTemperature")
#Date| Time|Latitude|Longitude|AitTemperature|Humidity|AirPressure| ID
dataFileDF.show(10)
idDF = dataFileDF["ID","Latitude","Longitude"]
from pyspark.sql import functions as F
idDF = idDF.withColumn("LatLon",F.array("Latitude","Longitude"))
idDF = idDF.groupBy(idDF.ID).agg(F.first(idDF.LatLon).alias("LatLon"))
idDF = idDF.withColumn("DSUID",F.format_string("Datastream-uid-%s",idDF.ID))
idDF = idDF.withColumn("FoIUID",F.format_string("FoI-uid-%s",idDF.ID))
idDF = idDF.withColumn("SensorUID",F.format_string("Sensor-uid-%s",idDF.ID))
idDF.show()
###Output
+---------+----------------+--------------------+-----------------+--------------------+
| ID| LatLon| DSUID| FoIUID| SensorUID|
+---------+----------------+--------------------+-----------------+--------------------+
|IKNIGSBA9| [48.961, 8.577]|Datastream-uid-IK...|FoI-uid-IKNIGSBA9|Sensor-uid-IKNIGSBA9|
|IKNIGSBA6| [48.964, 8.616]|Datastream-uid-IK...|FoI-uid-IKNIGSBA6|Sensor-uid-IKNIGSBA6|
|IKNIGSEG2| [47.928, 9.419]|Datastream-uid-IK...|FoI-uid-IKNIGSEG2|Sensor-uid-IKNIGSEG2|
|IKNIGSBA8| [48.965, 8.646]|Datastream-uid-IK...|FoI-uid-IKNIGSBA8|Sensor-uid-IKNIGSBA8|
|IKNIGSBR2|[48.256, 10.887]|Datastream-uid-IK...|FoI-uid-IKNIGSBR2|Sensor-uid-IKNIGSBR2|
|IKNIGSBO2|[52.136, 11.770]|Datastream-uid-IK...|FoI-uid-IKNIGSBO2|Sensor-uid-IKNIGSBO2|
+---------+----------------+--------------------+-----------------+--------------------+
###Markdown
Thing
###Code
idDF = dataFileDF["ID","Latitude","Longitude"]
from pyspark.sql import functions as F
idDF = idDF.withColumn("LatLon",F.array("Latitude","Longitude"))
idDF = idDF.groupBy(idDF.ID).agg(F.first(idDF.LatLon).alias("LatLon"))
urlHome = 'http://smartaqnet-dev.teco.edu:8080/FROST-Server/v1.0'
#urlHome = 'http://smartaqnet.teco.edu:8080/FROST-Server/v1.0'
urlThings = urlHome + '/Things'
urlThings
row = idDF.collect()[0]
data = {
"name": "DWD-Sensor-" + row.ID,
"description": "This is DWD-Sensor-" + row.ID,
"@iot.id": "002" + row.ID,
"Locations": [
{
"name":"Location-DWD-Sensor-"+row.ID,
"description": "This is the location of DWD-Sensor-" + row.ID,
"encodingType": "application/vnd.geo+json",
"location": {
"type": "Point",
"coordinates": [float(row.LatLon[0]), float(row.LatLon[1])]
},
"@iot.id": "GGLocation-DWD-Sensor/" + row.ID
}
]
}
p = requests.post(urlThings, json.dumps(data))
if (p.status_code == 201):
print(201)
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
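# Optional check (added sketch): fetch the Thing back by the @iot.id used above
# to confirm it now exists on the server.
check = requests.get(urlThings + "('002" + row.ID + "')")
print("GET status:", check.status_code)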
###Output
201
###Markdown
Sensor
###Code
idDF = dataFileDF["ID","Latitude","Longitude"]
from pyspark.sql import functions as F
idDF = idDF.withColumn("LatLon",F.array("Latitude","Longitude"))
idDF = idDF.groupBy(idDF.ID).agg(F.first(idDF.LatLon).alias("LatLon"))
#urlHome = 'http://smartaqnet-dev.teco.edu:8080/FROST-Server/v1.0'
urlThings = urlHome + '/Sensors'
row = idDF.collect()[1]
mytypes = ["Temperature","Humidity","Pressure"]
deviceAddr = "https://www.wunderground.com/personal-weather-station/dashboard?ID="+row.ID
for mytype in mytypes:
data = {
"name": "GG-DWD-Sensor/" + row.ID,
"description": "This is a Sensor from Netatmo Weather Station",
"encodingType": "application/pdf",
"metadata": deviceAddr,
"@iot.id": "GG-DWD-Sensor-"+ mytype +"/" + row.ID
}
p = requests.post(urlThings, json.dumps(data))
if (p.status_code == 201):
print(201)
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
###Output
('Error:', 500)
Failed to store data.
('Error:', 500)
Failed to store data.
('Error:', 500)
Failed to store data.
###Markdown
ObservedProperty
###Code
idDF = dataFileDF["ID","Latitude","Longitude"]
from pyspark.sql import functions as F
idDF = idDF.withColumn("LatLon",F.array("Latitude","Longitude"))
idDF = idDF.groupBy(idDF.ID).agg(F.first(idDF.LatLon).alias("LatLon"))
#urlHome = 'http://smartaqnet-dev.teco.edu:8080/FROST-Server/v1.0'
urlObservedProperty = urlHome + '/ObservedProperties'
data ={
"name": "Area Temperature",
"description": "The degree or intensity of heat present in the area",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/quantity/Instances.html#AreaTemperature",
"@iot.id": "Area Temperature"
}
p = requests.post(urlObservedProperty, json.dumps(data))
if (p.status_code == 201):
print(201)
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
###Output
201
###Markdown
Datastream
###Code
idDF = dataFileDF["ID","Latitude","Longitude"]
from pyspark.sql import functions as F
idDF = idDF.withColumn("LatLon",F.array("Latitude","Longitude"))
idDF = idDF.groupBy(idDF.ID).agg(F.first(idDF.LatLon).alias("LatLon"))
row = idDF.collect()[1]
#urlHome = 'http://smartaqnet.teco.edu:8080/FROST-Server/v1.0'
#urlHome = 'http://smartaqnet-dev.teco.edu:8080/FROST-Server/v1.0'
urlDataStream = urlHome + '/Datastreams'
data = {
"name": "Air Temperature DS",
"description": "Datastream for recording temperature",
"observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Measurement",
"@iot.id":"DS-AT-"+row.ID,
"unitOfMeasurement": {
"name": "Degree Celsius",
"symbol": "degC",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/unit/Instances.html#DegreeCelsius"
},
"Thing":{"@iot.id":"GGDWD-Sensor/IKNIGSBA9"},
"ObservedProperty":{"@iot.id":"Area Temperature"},
"Sensor":{"@iot.id":"GG-DWD-Sensor-Temperature/" + row.ID}
}
p = requests.post(urlDataStream, json.dumps(data))
if (p.status_code == 201):
print(201)
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
###Output
201
###Markdown
Feature of interest
###Code
idDF = dataFileDF["ID","Latitude","Longitude"]
from pyspark.sql import functions as F
idDF = idDF.withColumn("LatLon",F.array("Latitude","Longitude"))
idDF = idDF.groupBy(idDF.ID).agg(F.first(idDF.LatLon).alias("LatLon"))
row = idDF.collect()[1]
#urlHome = 'http://smartaqnet.teco.edu:8080/FROST-Server/v1.0'
#urlHome = 'http://smartaqnet-dev.teco.edu:8080/FROST-Server/v1.0'
urlFoI = urlHome + '/FeaturesOfInterest'
data = {
"name": "Weather Station-" + row.ID,
"description": "Weather Station-" + row.ID,
"encodingType": "application/vnd.geo+json",
"@iot.id":"FoIxx-"+row.ID,
"feature": {
"type": "Point",
"coordinates": [float(row.LatLon[0]), float(row.LatLon[1])]
}
}
p = requests.post(urlFoI, json.dumps(data))
if (p.status_code == 201):
print(201)
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
p.headers["location"]
###Output
_____no_output_____
###Markdown
Observation
###Code
urlObs = urlHome + '/Observations'
for row in dataFileDF.collect():
#
data = {
"phenomenonTime": "2017-02-07T18:02:00.000Z",
"resultTime" : "2017-02-07T18:02:05.000Z",
"result" : 21.6,
"Datastream":{"@iot.id":8}
}
    p = requests.post(urlObs, json.dumps(data))
if (p.status_code == 201):
print(201)
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
#dataFileDF.show(60)
from datetime import tzinfo, timedelta, datetime
row = dataFileDF.first()
row.Date.split("-")[0]
class TZ(tzinfo):
def utcoffset(self, dt): return timedelta(minutes=120)
datetime(int(row.Date.split('-')[0]),
int(row.Date.split("-")[1]),
int(row.Date.split("-")[2]),
int(row.Time.split(':')[0]),
int(row.Time.split(':')[1]),
int(row.Time.split(':')[2]),tzinfo=TZ()).isoformat()
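# Added sketch (assumes the renamed AirTemperature column and a Datastream with
# @iot.id "DS-AT-<station ID>"; only one such Datastream was created in the cells above):
# one way to build a per-row Observation payload from the timestamp construction above,
# instead of the fixed placeholder payload used in the loop.
def row_to_observation(r):
    ts = datetime(int(r.Date.split('-')[0]), int(r.Date.split('-')[1]), int(r.Date.split('-')[2]),
                  int(r.Time.split(':')[0]), int(r.Time.split(':')[1]), int(r.Time.split(':')[2]),
                  tzinfo=TZ()).isoformat()
    return {
        "phenomenonTime": ts,
        "resultTime": ts,
        "result": float(r.AirTemperature),
        "Datastream": {"@iot.id": "DS-AT-" + r.ID}
    }
# Example: requests.post(urlObs, json.dumps(row_to_observation(dataFileDF.first())))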
###Output
_____no_output_____
###Markdown
Tryouts
###Code
row = idDF.collect()[1]
urlHome = 'http://smartaqnet.teco.edu:8080/FROST-Server/v1.0'
urlThings = urlHome + '/Things'
data = {
"name": "DWD-Sensor-noSens" + row[0],
"description": "DWD_Sensor-" + row[0],
"@iot.id": "DWD-Sensor/noSensblblb" + row[0]
}
p = requests.post(urlThings, json.dumps(data))
if (p.status_code == 201):
print(201)
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
idDF = idDF.groupBy(idDF.ID).agg(F.first(idDF.LatLon).alias("LatLon"))#.show(3)#.agg(F.first(idDF.Longitude))
urlHome = 'http://smartaqnet-dev01.teco.edu:8080/FROST-Server/v1.0'
#urlHome = 'http://smartaqnet-dev.teco.edu:8080/FROST-Server/v1.0'
urlThings = urlHome + '/Things'
#idDF2 = idDF.select("ID","LatLon").rdd.map(DfToSensorthings)
row = idDF.collect()[1]
import requests
urlHome = 'http://smartaqnet.teco.edu:8080/FROST-Server/v1.0'
urlThings = urlHome + '/Things'
sensorAddr = "https://www.wunderground.com/personal-weather-station/dashboard?ID="+row[0]
data = {
"name": "DWD-Sensor-" + row[0],
"description": "DWD_Sensor-" + row[0],
"@iot.id": "DWD-Sensor/" + row[0]
}
p = requests.get(urlThings+"('DWD-Sensor/noSensblbl" + row[0]+"')")
if (p.status_code == 200):
print(200)
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
row
p = requests.post(urlThings, json.dumps(data))
if (p.status_code == 201):
print("Creation successful")
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
#Delete a thing
deleteID = urlThings + "('DWD-Sensors/IBREUNA2')"
print(deleteID)
p = requests.delete(deleteID)
if (p.status_code in (200, 204)):
print("Deletion successful")
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
#Query a thing
getID = urlThings + "('teco.edu/Test-2')"
print(getID)
p = requests.get(getID)
if (p.status_code == 200):
print("Get successful")
else:
print("Error:", p.status_code)
for chunk in p.iter_content(chunk_size=128):
print(chunk)
p.content
sensors = idDF.collect()
for sensor in sensors:
print(DfToSensorthingsPost(sensor))
#self link
import requests
dataFileDF = ss.read.parquet(inputDir)
outDir = "./outDir/outAllAll5/"
print(csv.__file__)
import os
validFileNames = [inputDir+f for f in os.listdir(inputDir) if ('.' not in f) and ("part-" in f)]
validFileNames
for aFile in validFileNames:
with open(aFile) as csvfile:
for row in csvfile:
k = row[1:9]
v = row[13:-3]+'\n'
vNew = re.sub(';','\n',v)
outFile = open(outDir+k,"w")
outFile.write(vNew)
outFile.close()
row = csvfile.readline()
k = row[1:9]
v = row[13:-3].split(";")
k,v
with open(inputFile) as csvfile:
for row in csvfile:
k = row[1:9]
v = row[13:-3]+'\n'
vNew = re.sub(';','\n',v)
outFile = open(outDir+k,"w")
outFile.write(vNew)
outFile.close()
import re
print(vNew)
outFile = open(outDir+k,"w")
outFile.write(vNew)
outFile.close()
with open(inputFile) as csvfile:
    reader = csv.reader(csvfile)
    row = next(reader)
row
row
import string
# string.maketrans('', '') with two-argument translate is a Python 2 idiom; the Del
# mapping below achieves the same digit-only filtering with Python 3's str.translate.
class Del:
    def __init__(self, keep=string.digits):
        self.comp = dict((ord(c), c) for c in keep)
    def __getitem__(self, k):
        return self.comp.get(k)
with open(inputPath) as csvfile:
reader = csv.DictReader(csvfile)
for i in range(10):
row = reader.__next__()
lineID = row['id']
timeStamp = row['time'].translate(Del())
print(lineID, timeStamp)
###Output
I17SAINT2 2016081912000002
I17SAINT2 2016081812000002
I17SAINT2 2016081712000002
I17SAINT2 2016081612000002
I17SAINT2 2016081512000002
I17SAINT2 2016081412000002
I17SAINT2 2016081312000002
I17SAINT2 2016081212000002
I17SAINT2 2016081112000002
I17SAINT2 2016081012000002
|
01 python/lecture 14 materials pandas/Кирилл Сетдеков Homework8.ipynb | ###Markdown
Python for Data Visualization*Tatyana Rogovich, HSE* ExercisesFor the first three tasks we work with a dataset that contains all newborns and their names in the USA. For the last two tasks we use the already familiar dataset on Pima Indian women and diabetes.
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
babies = pd.read_csv('https://raw.githubusercontent.com/pileyan/Data/master/data/babies%20names/babies_all.txt')
pima = pd.read_csv('https://raw.githubusercontent.com/pileyan/Data/master/data/pima-indians-diabetes.csv')
babies = babies.set_index('Unnamed: 0')
babies.tail()
pima.head()
###Output
_____no_output_____
###Markdown
Task 1.Explore the babies dataset. Answer the questions.1) Which years does the dataset cover?
###Code
babies.groupby('year').year.count()
2010-1880+1
###Output
_____no_output_____
###Markdown
All years from 1880 to 2010 are present, without gaps 2) Which name in the dataset is at index 121?
###Code
babies.name.iloc[121]
###Output
_____no_output_____
###Markdown
3) How many children named 'Aaron' were born in total over the whole period?
###Code
babies[babies.name == 'Aaron'].number.sum()
###Output
_____no_output_____
###Markdown
4) How many more boys than girls were born over the whole period?
###Code
babies.groupby('sex').number.sum()
f"Мальчиков родилось на {babies[babies.sex == 'M'].number.sum() - babies[babies.sex == 'F'].number.sum()} больше"
###Output
_____no_output_____
###Markdown
5) How many boys were born in 2010?
###Code
babies[(babies.sex == 'M') & (babies.year == 2010)].number.sum()
###Output
_____no_output_____
###Markdown
6) How many girls named John are in the dataset?
###Code
babies[(babies.sex == 'F') & (babies.name == 'John')].number.sum()
###Output
_____no_output_____
###Markdown
Task 2 1. Group the babies dataset by year and sex and save the results into two new dataframes: babies_girls and babies_boys. 2. Create a matplotlib figure with 3 plots one below the other. 3. Build line plots. The first plot should show the birth trend for girls, the second for boys, and the third should combine them (with the same colors as in the individual plots). Years on x, number of children on y. 4. The top and right borders of the plots must be invisible, each plot must have a title, the third plot must contain a legend, and the plot axes must be labeled. 5. Limits must be set on the count axis so that it is the same across the plots. 6. Briefly describe the trends in a markdown cell below the plots. If you made the year the index when grouping, you can access the values of that variable through the .index attribute
###Code
babies_girls = babies[(babies.sex == 'F')].groupby(['year']).agg('sum')
babies_boys = babies[(babies.sex != 'F')].groupby(['year']).agg('sum')
fig, axs = plt.subplots(3, 1, figsize=(9, 12))
axs[0].plot(babies_girls.index, babies_girls.number, color="tab:blue")
axs[0].title.set_text('Динамика числа рождений девочек по годам')
axs[1].plot(babies_boys.index, babies_boys.number, color="tab:orange")
axs[1].title.set_text('Динамика числа рождений мальчиков по годам')
axs[2].plot(babies_girls.index, babies_girls.number, label='girls')
axs[2].plot(babies_boys.index, babies_boys.number, label='boys')
axs[2].legend()
axs[2].title.set_text('Динамика числа рождений детей двух полов по годам')
axs[0].set_ylabel('Количество девочек, млн.')
axs[1].set_ylabel('Количество мальчиков, млн.')
axs[2].set_ylabel('Количество детей, млн.')
axs[2].set_xlabel('Годы')
for i in range(3): # apply spine and limit settings to all subplots
right_side = axs[i].spines["right"]
right_side.set_visible(False)
topside = axs[i].spines["top"]
topside.set_visible(False)
    axs[i].set_ylim(0, max(max(babies_boys.number), max(babies_girls.number))) # y-limit: the maximum yearly number of children
###Output
_____no_output_____
###Markdown
* in the first three years (up to 1883) more boys were born* from 1883 to 1917 the number of boys born was lower than that of girls.* from 1917 the gap was shrinking* from 1927 the number of boys born exceeded the number of girls and stayed higher until 2010 Task 3 1. Group the babies dataframe in a suitable way and find the 4 most popular names of all time (2 female and 2 male). 2. For each of those names create a new dataframe (of the form babies_alisa) and store in it how many children with that name were born each year. 3. Create a matplotlib figure with 4 horizontal plots one below the other. 4. Build 4 line plots - the trend for each name over the whole period. 5. The top and right borders of the plots must be invisible, each plot must contain a legend, there must be one common title, and the plot axes must be labeled. 6. Limits must be set on the count axis so that it is the same across the plots. 7. Describe the trends in a markdown cell below the plots.
###Code
df_female = pd.DataFrame(babies[(babies.sex == 'F')].groupby("name").number.agg("sum"))
top2w = df_female.sort_values(by=['number'], ascending=False).iloc[:2]
print(top2w)
girls_top = top2w.index # the names are the index of this df; take them as a separate list
df_male = pd.DataFrame(babies[(babies.sex != 'F')].groupby("name").number.agg("sum"))
top2m = df_male.sort_values(by=['number'], ascending=False).iloc[:2]
print(top2m)
boys_top = top2m.index # similarly take the boys' names
# build a df with only the yearly dynamics for a single name
# boys
babies_m1 = babies[(babies.name == boys_top[0])].groupby(['year']).agg('sum')
babies_m2 = babies[(babies.name == boys_top[1])].groupby(['year']).agg('sum')
# girls
babies_w1 = babies[(babies.name == girls_top[0])].groupby(['year']).agg('sum')
babies_w2 = babies[(babies.name == girls_top[1])].groupby(['year']).agg('sum')
# count limit for y
ylimit_num = max(max(babies_m1.number), max(babies_m2.number), max(babies_w1.number), max(babies_w2.number))
fig, axs = plt.subplots(4, 1, figsize=(6, 15))
axs[0].plot(babies_w1.index, babies_w1.number, color="xkcd:purple", label = girls_top[0])
axs[1].plot(babies_w2.index, babies_w2.number, color="xkcd:green", label = girls_top[1])
axs[2].plot(babies_m1.index, babies_m1.number, color="xkcd:blue", label = boys_top[0])
axs[3].plot(babies_m2.index, babies_m2.number, color="xkcd:red", label = boys_top[1])
fig.suptitle('Динамика двух женских и двух мужских имен по годам') # common title for the figure
for i in range(4): # apply spine and limit settings to all subplots
right_side = axs[i].spines["right"]
right_side.set_visible(False)
topside = axs[i].spines["top"]
topside.set_visible(False)
    axs[i].set_ylim(0, ylimit_num) # y-limit: the maximum yearly number of children
axs[i].set_ylabel('Количество детей')
axs[i].set_xlabel('Годы')
axs[i].legend()
###Output
number
name
Mary 4103935
Patricia 1568742
number
name
James 5049727
John 5040319
###Markdown
* an interesting observation, unexpected before looking at the dynamics - the most popular names are not the ones with the highest peak, but the ones that stayed popular over a long interval* among the top 4 names we see that their popularity peaked in the post-war years (World War II) * the exception is Mary, whose popularity peaked in the 1920s* these 4 names share a rise in popularity in the 1910s and a decline in the 1980s Task 4 1. In the original babies dataframe create a new column - the first letter of the name. 2. Pick a year from the dataset. Group the dataset so that the rows are the first letters and the columns are the numbers of children with such names. Save three new dataframes with this grouping for any three years from the sample. 3. Create a matplotlib figure with 3 horizontal plots one below the other. 4. The top and right borders of the plots must be invisible, each plot must have a title, and the plot axes must be labeled. 5. Build a bar chart for each year. 6. Draw a conclusion - which first letters of names were the most popular in each year.
###Code
def one_letter(short_str):
return short_str.lower()[:1]
babies['one_letter'] = babies.name.map(one_letter) # make new column
# select and create counts
df1911 = pd.DataFrame(babies[babies.year == 1911].groupby("one_letter").number.agg("sum"))
df1942 = pd.DataFrame(babies[babies.year == 1942].groupby("one_letter").number.agg("sum"))
df2001 = pd.DataFrame(babies[babies.year == 2001].groupby("one_letter").number.agg("sum"))
total_count = pd.DataFrame(babies.groupby("one_letter").number.agg("sum")) # total counts - to get all combination
def edit_df(pandas_df):
"""function to leftjoin columns to total and return a fixed number of rows"""
merged = total_count.join(pandas_df, how='left',lsuffix='_left')
merged.drop("number_left", axis='columns', inplace=True)
merged.replace(np.nan, 0, inplace=True)
return merged
df1911 = edit_df(df1911)
df1942 = edit_df(df1942)
df2001 = edit_df(df2001)
# width = 0.35 # the width of the bars
fig, axs = plt.subplots(3, 1, figsize=(6, 15)) # start plotting
axs[0].bar(df1911.index, df1911.number)
axs[0].title.set_text('Популярность первых букв в 1911 году')
axs[1].bar(df1942.index, df1942.number)
axs[1].title.set_text('Популярность первых букв в 1942 году')
axs[2].bar(df2001.index, df2001.number)
axs[2].title.set_text('Популярность первых букв в 2001 году')
for i in range(3): # apply spine settings to all subplots
right_side = axs[i].spines["right"]
right_side.set_visible(False)
topside = axs[i].spines["top"]
topside.set_visible(False)
axs[i].set_ylabel('Количество детей')
axs[i].set_xlabel('буквы')
print("Две первых буквы по популярности в именах в 1911 году:")
print(*df1911.sort_values(by=['number'], ascending=False).iloc[:2].index, "\n")
print("Две первых буквы по популярности в именах в 1942 году:")
print(*df1942.sort_values(by=['number'], ascending=False).iloc[:2].index, "\n")
print("Две первых буквы по популярности в именах в 2001 году:")
print(*df2001.sort_values(by=['number'], ascending=False).iloc[:2].index)
###Output
Две первых буквы по популярности в именах в 1911 году:
m e
Две первых буквы по популярности в именах в 1942 году:
j r
Две первых буквы по популярности в именах в 2001 году:
j a
###Markdown
Task 5 1. Create a matplotlib figure with two axes (1 row, two columns) 2. In the first axes, build a multivariate scatter plot for the pima dataset: glucose level on the x axis, blood pressure on the y axis, age as the point size, and presence of diabetes (Class) as the color. 3. In the second axes, build a multivariate plot with the number of pregnancies on x, BMI on y, and presence of diabetes as the color. For this plot force the x-axis values to be discrete (using the axes method we saw for the forest example). 4. The top and right borders of the plots must be invisible, each plot must have a title, and the plot axes must be labeled. 5. Based on the plots, draw a conclusion about how these variables may be related to the dependent variable (class).
###Code
from matplotlib.ticker import MaxNLocator
fig, axs = plt.subplots(1, 2, figsize=(12, 6)) # start plotting
scatter = axs[0].scatter(x=pima.Glucose, y=pima.BloodPressure, c=pima.Class, s=pima.Age*2,
alpha=0.4, edgecolors='none')
scalebar = axs[1].scatter(x=pima.Pregnancies.map(int), y=pima.BMI, c=pima.Class,
alpha=0.4, edgecolors='none')
axs[1].xaxis.set_major_locator(MaxNLocator(integer=True)) # set integer ticks
axs[0].title.set_text('Распределение пациентов по уровню \n глюкозы и давлению (размер - возраст)')
axs[0].set_ylabel('Давление')
axs[0].set_xlabel('Уровень глюкозы')
axs[1].title.set_text('Распределение пациентов по числу беременностей и BMI')
axs[1].set_xlabel('Число беременностей')
axs[1].set_ylabel('BMI')
legend1 = axs[0].legend(*scatter.legend_elements(),
loc="lower left", title="Classes")
axs[0].add_artist(legend1);
for i in range(2): # apply spine settings to all subplots
right_side = axs[i].spines["right"]
right_side.set_visible(False)
topside = axs[i].spines["top"]
topside.set_visible(False)
axs[0].legend(*scatter.legend_elements("sizes", num=3));
###Output
_____no_output_____
###Markdown
preliminary conclusions from the visual analysis:* the relationship of diabetes with age, blood pressure and the number of pregnancies is not obvious* there appears to be a positive relationship: * between the glucose level and the presence of diabetes * between BMI and the presence of diabetes Additional task 1. Based on the pima dataset create a new dataset: rows - the number of pregnancies, columns: mean_glucose (the mean glucose level for each number of pregnancies) and mean_bmi (the same for BMI). 2. Create a matplotlib figure with a single axes object. 3. For this dataset build a grouped bar chart (for each value of the Pregnancies variable there should be two bars - mean_glucose and mean_bmi). 4. The top and right borders of the plot must be invisible, the plot must have a title, and the plot axes must be labeled. 5. Draw a conclusion about the relationship between the number of pregnancies and the mean levels of glucose and body mass index.
###Code
new_pima = pima.groupby("Pregnancies")[["Glucose", "BMI"]].mean()
new_pima.columns = ['mean_glucose', 'mean_bmi']
import numpy as np
labels = new_pima.index
glucose = new_pima.mean_glucose
bmi = new_pima.mean_bmi
x = np.arange(len(labels)) # the label locations
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
right_side = ax.spines["right"]
right_side.set_visible(False)
topside = ax.spines["top"]
topside.set_visible(False)
rects1 = ax.bar(x - width/2, glucose, width, label='mean_glucose')
rects2 = ax.bar(x + width/2, bmi, width, label='mean_BMI')
ax.set_ylabel('Values')
ax.set_xlabel('Pregnancies')
ax.set_title('BMI and glucose levels by pregnancy counts')
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()
###Output
_____no_output_____ |
Jupyter Tutorial/Exercise_FPGA_and_the_DevCloud.ipynb | ###Markdown
Exercise: FPGA and the DevCloudNow that we've walked through the process of requesting an edge node with a CPU and Intel® Arria 10 FPGA on Intel's DevCloud and loading a model on the Intel® Arria 10 FPGA, you will have the opportunity to do this yourself with the addition of running inference on an image.In this exercise, you will do the following:1. Write a Python script to load a model and run inference 10 times on a device on Intel's DevCloud. * Calculate the time it takes to load the model. * Calculate the time it takes to run inference 10 times.2. Write a shell script to submit a job to Intel's DevCloud.3. Submit a job using `qsub` on an **IEI Tank-870** edge node with an **Intel® Arria 10 FPGA**.4. Run `liveQStat` to view the status of your submitted jobs.5. Retrieve the results from your job.6. View the results.Click the **Exercise Overview** button below for a demonstration. Exercise Overview IMPORTANT: Set up paths so we can run Dev Cloud utilitiesYou *must* run this every time you enter a Workspace session.
###Code
%env PATH=/opt/conda/bin:/opt/spark-2.4.3-bin-hadoop2.7/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/intel_devcloud_support
import os
import sys
sys.path.insert(0, os.path.abspath('/opt/intel_devcloud_support'))
sys.path.insert(0, os.path.abspath('/opt/intel'))
###Output
_____no_output_____
###Markdown
The ModelWe will be using the `vehicle-license-plate-detection-barrier-0106` model for this exercise. Remember that to run a model on the FPGA, we need to use `FP16` as the model precision.The model has already been downloaded for you in the `/data/models/intel` directory on Intel's DevCloud.We will be running inference on an image of a car. The path to the image is `/data/resources/car.png` Step 1: Creating a Python ScriptThe first step is to create a Python script that you can use to load the model and perform inference. We'll use the `%%writefile` magic to create a Python file called `inference_on_device.py`. In the next cell, you will need to complete the `TODO` items for this Python script.`TODO` items:1. Load the model2. Get the name of the input node3. Prepare the model for inference (create an input dictionary)4. Run inference 10 times in a loopIf you get stuck, you can click on the **Show Solution** button below for a walkthrough with the solution code.
###Code
%%writefile inference_on_device.py
import time
import numpy as np
import cv2
from openvino.inference_engine import IENetwork
from openvino.inference_engine import IECore
import argparse
def main(args):
model=args.model_path
model_weights=model+'.bin'
model_structure=model+'.xml'
start=time.time()
# TODO: Load the model
print(f"Time taken to load model = {time.time()-start} seconds")
# Reading and Preprocessing Image
input_img=cv2.imread('/data/resources/car.png')
input_img=cv2.resize(input_img, (300,300), interpolation = cv2.INTER_AREA)
input_img=np.moveaxis(input_img, -1, 0)
# TODO: Prepare the model for inference (create input dict etc.)
start=time.time()
for _ in range(10):
# TODO: Run Inference in a Loop
print(f"Time Taken to run 10 Inference on FPGA is = {time.time()-start} seconds")
if __name__=='__main__':
parser=argparse.ArgumentParser()
parser.add_argument('--model_path', required=True)
parser.add_argument('--device', default=None)
args=parser.parse_args()
main(args)
###Output
_____no_output_____
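###Markdown
Before looking at the solution, the following is a minimal reference sketch of how the `TODO` items above might be filled in. It is only an illustration written against the OpenVINO 2020.x Python API (an assumption on our part, since the exercise uses the course environment), so the official solution may differ slightly. Try the exercise yourself first.
###Code
# Hedged sketch only: one possible way to complete the TODOs above (OpenVINO 2020.x API assumed).
from openvino.inference_engine import IECore

def load_and_infer(model_xml, model_bin, device, input_img, runs=10):
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)     # load the model
    exec_net = ie.load_network(network=net, device_name=device)   # prepare it for the target device
    input_name = next(iter(net.inputs))                           # name of the input node
    input_dict = {input_name: input_img}                          # input dictionary for inference
    for _ in range(runs):                                         # run inference in a loop
        exec_net.infer(input_dict)
###Output
_____no_output_____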
###Markdown
Show Solution Step 2: Creating a Job Submission ScriptTo submit a job to the DevCloud, you'll need to create a shell script. Similar to the Python script above, we'll use the `%%writefile` magic command to create a shell script called `inference_fpga_model_job.sh`. In the next cell, you will need to complete the `TODO` items for this shell script.`TODO` items:1. Create two variables: * `DEVICE` - Assign the value as the first argument passed into the shell script. * `MODELPATH` - Assign the value as the second argument passed into the shell script.2. Call the Python script using the two variable values as the command line argumentsIf you get stuck, you can click on the **Show Solution** button below for a walkthrough with the solution code.
###Code
%%writefile inference_fpga_model_job.sh
#!/bin/bash
exec 1>/output/stdout.log 2>/output/stderr.log
mkdir -p /output
# TODO: Create DEVICE variable
# TODO: Create MODELPATH variable
export AOCL_BOARD_PACKAGE_ROOT=/opt/intel/openvino/bitstreams/a10_vision_design_sg2_bitstreams/BSP/a10_1150_sg2
source /opt/altera/aocl-pro-rte/aclrte-linux64/init_opencl.sh
aocl program acl0 /opt/intel/openvino/bitstreams/a10_vision_design_sg2_bitstreams/2020-2_PL2_FP16_MobileNet_Clamp.aocx
export CL_CONTEXT_COMPILER_MODE_INTELFPGA=3
# TODO: Call the Python script
cd /output
tar zcvf output.tgz * # compresses all files in the current directory (output)
###Output
_____no_output_____
###Markdown
Show Solution Step 3: Submitting a Job to Intel's DevCloudIn the next cell, you will write your `!qsub` command to load your model and run inference on the **IEI Tank-870** edge node with an **Intel Core i5** CPU and an **Intel® Arria 10 FPGA**.Your `!qsub` command should take the following flags and arguments:1. The first argument should be the shell script filename2. `-d` flag - This argument should be `.`3. `-l` flag - This argument should request an edge node with an **IEI Tank-870**. The default quantity is 1, so the **1** after `nodes` is optional. * **Intel Core i5 6500TE** for your `CPU`. * **Intel® Arria 10** for your `FPGA`.To get the queue labels for these devices, you can go to [this link](https://devcloud.intel.com/edge/get_started/devcloud/)4. `-F` flag - This argument should contain the two values to assign to the variables of the shell script: * **DEVICE** - Device type for the job: `FPGA`. Remember that we need to use the **Heterogenous plugin** (HETERO) to run inference on the FPGA. * **MODELPATH** - Full path to the model for the job. As a reminder, the model is located in `/data/models/intel`.**Note**: There is an optional flag, `-N`, you may see in a few exercises. This is an argument that only works on Intel's DevCloud that allows you to name your job submission. This argument doesn't work in Udacity's workspace integration with Intel's DevCloud.
###Code
job_id_core = # TODO: Write qsub command
print(job_id_core[0])
###Output
_____no_output_____
###Markdown
Show Solution Step 4: Running liveQStatRunning the `liveQStat` function, we can see the live status of our job. Running the this function will lock the cell and poll the job status 10 times. The cell is locked until this finishes polling 10 times or you can interrupt the kernel to stop it by pressing the stop button at the top: * `Q` status means our job is currently awaiting an available node* `R` status means our job is currently running on the requested node**Note**: In the demonstration, it is pointed out that `W` status means your job is done. This is no longer accurate. Once a job has finished running, it will no longer show in the list when running the `liveQStat` function.Click the **Running liveQStat** button below for a demonstration. Running liveQStat
###Code
import liveQStat
liveQStat.liveQStat()
###Output
_____no_output_____
###Markdown
Step 5: Retrieving Output FilesIn this step, we'll be using the `getResults` function to retrieve our job's results. This function takes a few arguments.1. `job id` - This value is stored in the `job_id_core` variable we created during **Step 3**. Remember that this value is an array with a single string, so we access the string value using `job_id_core[0]`.2. `filename` - This value should match the filename of the compressed file we have in our `inference_fpga_model_job.sh` shell script.3. `blocking` - This is an optional argument and is set to `False` by default. If this is set to `True`, the cell is locked while waiting for the results to come back. There is a status indicator showing the cell is waiting on results.**Note**: The `getResults` function is unique to Udacity's workspace integration with Intel's DevCloud. When working on Intel's DevCloud environment, your job's results are automatically retrieved and placed in your working directory.Click the **Retrieving Output Files** button below for a demonstration. Retrieving Output Files
###Code
import get_results
get_results.getResults(job_id_core[0], filename="output.tgz", blocking=True)
!tar zxf output.tgz
!cat stdout.log
!cat stderr.log
###Output
_____no_output_____ |
spring1819_assignment1/assignment1/python_numpy_tutorial.ipynb | ###Markdown
CS 231n Python & NumPy Tutorial Python 3 and NumPy will be used extensively throughout this course, so it's important to be familiar with them. A good amount of the material in this notebook comes from Justin Johnson's Python & NumPy Tutorial:http://cs231n.github.io/python-numpy-tutorial/. At this moment, not everything from that tutorial is in this notebook and not everything from this notebook is in the tutorial. Python 3 If you're unfamiliar with Python 3, here are some of the most common changes from Python 2 to look out for. Print is a function
###Code
print("Hello!")
###Output
Hello!
###Markdown
Without parentheses, printing will not work.
###Code
print "Hello!"
###Output
_____no_output_____
###Markdown
Floating point division by default
###Code
5 / 2
###Output
_____no_output_____
###Markdown
To do integer division, we use two forward slashes:
###Code
5 // 2
###Output
_____no_output_____
###Markdown
No xrange The xrange from Python 2 is now merged into "range" for Python 3 and there is no xrange in Python 3. In Python 3, range(3) does not create a list of 3 elements as it would in Python 2; rather, it just creates a more memory efficient object that can be iterated over.Hence, xrange in Python 3 does not exist, while range in Python 3 has very similar behavior to Python 2's xrange.
###Code
for i in range(3):
print(i)
range(3)
# If need be, can use the following to get a similar behavior to Python 2's range:
print(list(range(3)))
###Output
_____no_output_____
###Markdown
NumPy "NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more" -https://docs.scipy.org/doc/numpy-1.10.1/user/whatisnumpy.html.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Let's run through an example showing how powerful NumPy is. Suppose we have two lists a and b, consisting of the first 100,000 non-negative numbers, and we want to create a new list c whose *i*th element is a[i] + 2 * b[i]. Without NumPy:
###Code
%%time
a = list(range(100000))
b = list(range(100000))
%%time
for _ in range(10):
c = []
for i in range(len(a)):
c.append(a[i] + 2 * b[i])
###Output
_____no_output_____
###Markdown
With NumPy:
###Code
%%time
a = np.arange(100000)
b = np.arange(100000)
%%time
for _ in range(10):
c = a + 2 * b
###Output
_____no_output_____
###Markdown
The result is 10 to 15 times (sometimes more) faster, and we could do it in fewer lines of code (and the code itself is more intuitive)! Regular Python is much slower due to type checking and other overhead of needing to interpret code and support Python's abstractions.For example, if we are doing some addition in a loop, constantly type checking in a loop will lead to many more instructions than just performing a regular addition operation. NumPy, using optimized pre-compiled C code, is able to avoid a lot of the overhead introduced.The process we used above is **vectorization**. Vectorization refers to applying operations to arrays instead of just individual elements (i.e. no loops). Why vectorize?1. Much faster2. Easier to read and fewer lines of code3. More closely resembles mathematical notationVectorization is one of the main reasons why NumPy is so powerful. ndarray ndarrays, n-dimensional arrays of homogeneous data type, are the fundamental datatype used in NumPy. As these arrays are of the same type and are fixed size at creation, they offer less flexibility than Python lists, but can be substantially more efficient runtime and memory-wise. (Python lists are arrays of pointers to objects, adding a layer of indirection.)The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
###Code
# Can initialize ndarrays with Python lists, for example:
a = np.array([1, 2, 3]) # Create a rank 1 array
print('type:', type(a)) # Prints "<class 'numpy.ndarray'>"
print('shape:', a.shape) # Prints "(3,)"
print('a:', a) # Prints "1 2 3"
a_cpy= a.copy()
a[0] = 5 # Change an element of the array
print('a modeified:', a) # Prints "[5, 2, 3]"
print('a copy:', a_cpy)
b = np.array([[1, 2, 3],
[4, 5, 6]]) # Create a rank 2 array
print('shape:', b.shape) # Prints "(2, 3)"
print(b[0, 0], b[0, 1], b[1, 0]) # Prints "1 2 4"
###Output
_____no_output_____
###Markdown
There are many other initializations that NumPy provides:
###Code
a = np.zeros((2, 2)) # Create an array of all zeros
print(a) # Prints "[[ 0. 0.]
# [ 0. 0.]]"
b = np.full((2, 2), 7) # Create a constant array
print(b) # Prints "[[ 7. 7.]
# [ 7. 7.]]"
c = np.eye(2) # Create a 2 x 2 identity matrix
print(c) # Prints "[[ 1. 0.]
# [ 0. 1.]]"
d = np.random.random((2, 2)) # Create an array filled with random values
print(d) # Might print "[[ 0.91940167 0.08143941]
# [ 0.68744134 0.87236687]]"
###Output
_____no_output_____
###Markdown
How do we create a 2 by 2 matrix of ones?
###Code
a = np.ones((2, 2)) # Create an array of all ones
print(a) # Prints "[[ 1. 1.]
# [ 1. 1.]]"
###Output
_____no_output_____
###Markdown
Useful to keep track of shape; helpful for debugging and knowing dimensions will be very useful when computing gradients, among other reasons.
###Code
nums = np.arange(8)
print(nums)
print(nums.shape)
nums = nums.reshape((2, 4))
print('Reshaped:\n', nums)
print(nums.shape)
# The -1 in reshape corresponds to an unknown dimension that numpy will figure out,
# based on all other dimensions and the array size.
# Can only specify one unknown dimension.
# For example, sometimes we might have an unknown number of data points, and
# so we can use -1 instead without worrying about the true number.
nums = nums.reshape((4, -1))
print('Reshaped with -1:\n', nums, '\nshape:\n', nums.shape)
# You can also flatten the array by using -1 reshape
print('Flatten:\n', nums.reshape(-1), '\nshape:\n', nums.reshape(-1).shape)
###Output
_____no_output_____
###Markdown
NumPy supports an object-oriented paradigm, such that ndarray has a number of methods and attributes, with functions similar to ones in the outermost NumPy namespace. For example, we can do both:
###Code
nums = np.arange(8)
print(nums.min()) # Prints 0
print(np.min(nums)) # Prints 0
print(np.reshape(nums, (4, 2)))
###Output
_____no_output_____
###Markdown
Array Operations/Math NumPy supports many elementwise operations:
###Code
x = np.array([[1, 2],
[3, 4]], dtype=np.float64)
y = np.array([[5, 6],
[7, 8]], dtype=np.float64)
# Elementwise sum; both produce the array
# [[ 6.0 8.0]
# [10.0 12.0]]
print(np.array_equal(x + y, np.add(x, y)))
# Elementwise difference; both produce the array
# [[-4.0 -4.0]
# [-4.0 -4.0]]
print(np.array_equal(x - y, np.subtract(x, y)))
# Elementwise product; both produce the array
# [[ 5.0 12.0]
# [21.0 32.0]]
print(np.array_equal(x * y, np.multiply(x, y)))
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print(np.sqrt(x))
###Output
_____no_output_____
###Markdown
How do we elementwise divide between two arrays?
###Code
x = np.array([[1, 2], [3, 4]], dtype=np.float64)
y = np.array([[5, 6], [7, 8]], dtype=np.float64)
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print(x / y)
print(np.divide(x, y))
###Output
_____no_output_____
###Markdown
Note * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:
###Code
x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])
v = np.array([9, 10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print(v.dot(w))
print(np.dot(v, w))
# Matrix / vector product; both produce the rank 1 array [29 67]
print(x.dot(v))
print(np.dot(x, v))
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print(x.dot(y))
print(np.dot(x, y))
###Output
_____no_output_____
###Markdown
There are many useful functions built into NumPy, and often we're able to express them across specific axes of the ndarray:
###Code
x = np.array([[1, 2, 3],
[4, 5, 6]])
print(np.sum(x)) # Compute sum of all elements; prints "21"
print(np.sum(x, axis=0)) # Compute sum of each column; prints "[5 7 9]"
print(np.sum(x, axis=1)) # Compute sum of each row; prints "[6 15]"
print(np.max(x, axis=1)) # Compute max of each row; prints "[3 6]"
###Output
_____no_output_____
###Markdown
How can we compute the index of the max value of each row? Useful, to say, find the class that corresponds to the maximum score for an input image.
###Code
x = np.array([[1, 2, 3],
[4, 5, 6]])
print(np.argmax(x, axis=1)) # Compute index of max of each row; prints "[2 2]"
###Output
_____no_output_____
###Markdown
We can find indices of elements that satisfy some conditions by using `np.where`
###Code
print(np.where(nums > 5))
print(nums[np.where(nums > 5)])
###Output
_____no_output_____
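###Markdown
`np.where` also has a three-argument form, `np.where(condition, a, b)`, which builds a new array by taking elements from `a` where the condition is true and from `b` elsewhere. A small illustration:
###Code
# keep values greater than 5, replace the rest with 0
print(np.where(nums > 5, nums, 0))  # Prints [0 0 0 0 0 0 6 7]
###Output
_____no_output_____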
###Markdown
Note the axis you apply the operation along will have its dimension removed from the shape. This is useful to keep in mind when you're trying to figure out what axis corresponds to what. For example:
###Code
x = np.array([[1, 2, 3],
[4, 5, 6]])
print('x ndim:', x.ndim)
print((x.max(axis=0)).ndim) # Taking the max over axis 0 has shape (3,)
# corresponding to the 3 columns.
# An array with rank 3
x = np.array([[[1, 2, 3],
[4, 5, 6]],
[[10, 23, 33],
[43, 52, 16]]
])
print('x ndim:', x.ndim) # Has shape (2, 2, 3)
print((x.max(axis=1)).ndim) # Taking the max over axis 1 has shape (2, 3)
print((x.max(axis=(1, 2))).ndim) # Can take max over multiple axes; prints [6 52]
###Output
_____no_output_____
###Markdown
Indexing NumPy also provides powerful indexing schemes.
###Code
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]])
print('Original:\n', a)
# Can select an element as you would in a 2 dimensional Python list
print('Element (0, 0) (a[0][0]):\n', a[0][0]) # Prints 1
# or as follows
print('Element (0, 0) (a[0, 0]) :\n', a[0, 0]) # Prints 1
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
print('Sliced (a[:2, 1:3]):\n', a[:2, 1:3])
# Steps are also supported in indexing. The following reverses the first row:
print('Reversing the first row (a[0, ::-1]) :\n', a[0, ::-1]) # Prints [4 3 2 1]
# slice by the first dimension, works for n-dimensional array where n >= 1
print('slice the first row by the [...] operator: \n', a[0, ...])
###Output
_____no_output_____
###Markdown
Often, it's useful to select or modify one element from each row of a matrix. The following example employs **fancy indexing**, where we index into our array using an array of indices (say an array of integers or booleans):
###Code
# Create a new array from which we will select elements
a = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12]])
print(a) # prints "array([[ 1, 2, 3],
# [ 4, 5, 6],
# [ 7, 8, 9],
# [10, 11, 12]])"
# Create an array of indices
b = np.array([0, 2, 0, 1])
# Select one element from each row of a using the indices in b
print(a[np.arange(4), b]) # Prints "[ 1 6 7 11]"
# same as
for x, y in zip(np.arange(4), b):
print(a[x, y])
c = a[0]
c[0] = 100
print(a)
# Mutate one element from each row of a using the indices in b
a[np.arange(4), b] += 10
print(a) # prints "array([[11, 2, 3],
# [ 4, 5, 16],
# [17, 8, 9],
# [10, 21, 12]])
###Output
_____no_output_____
###Markdown
We can also use boolean indexing/masks. Suppose we want to set all elements greater than MAX to MAX:
###Code
MAX = 5
nums = np.array([1, 4, 10, -1, 15, 0, 5])
print(nums > MAX) # Prints [False, False, True, False, True, False, False]
nums[nums > MAX] = MAX
print(nums) # Prints [1, 4, 5, -1, 5, 0, 5]
nums = np.array([1, 4, 10, -1, 15, 0, 5])
nums > 5
###Output
_____no_output_____
###Markdown
Note that the indices in fancy indexing can appear in any order and even multiple times:
###Code
nums = np.array([1, 4, 10, -1, 15, 0, 5])
print(nums[[1, 2, 3, 1, 0]]) # Prints [4 10 -1 4 1]
###Output
_____no_output_____
###Markdown
Broadcasting Many of the operations we've looked at above involved arrays of the same rank. However, many times we might have a smaller array and use that multiple times to update an array of larger dimensions. For example, consider the example below of subtracting the mean of each column from the elements of the corresponding column:
###Code
x = np.array([[1, 2, 3],
[3, 5, 7]])
print(x.shape) # Prints (2, 3)
col_means = x.mean(axis=0)
print(col_means) # Prints [2. 3.5 5.]
print(col_means.shape) # Prints (3,)
# Has a smaller rank than x!
mean_shifted = x - col_means
print('\n', mean_shifted)
print(mean_shifted.shape) # Prints (2, 3)
###Output
_____no_output_____
###Markdown
Or even just multiplying a matrix by 2:
###Code
x = np.array([[1, 2, 3],
[3, 5, 7]])
print(x * 2) # Prints [[ 2 4 6]
# [ 6 10 14]]
###Output
_____no_output_____
###Markdown
Broadcasting two arrays together follows these rules:1. If the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.2. The two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.3. The arrays can be broadcast together if they are compatible in all dimensions.4. After broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.5. In any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension. For example, when subtracting the column means above, we had arrays of shape (2, 3) and (3,).1. These arrays do not have the same rank, so we prepend the shape of the lower rank one to make it (1, 3).2. (2, 3) and (1, 3) are compatible (they have the same size in each dimension, or one of the arrays has size 1 in that dimension).3. Can be broadcast together!4. After broadcasting, each array behaves as if it had shape equal to (2, 3).5. The smaller array will behave as if it were copied along dimension 0. Let's try to subtract the mean of each row!
###Code
x = np.array([[1, 2, 3],
[3, 5, 7]])
row_means = x.mean(axis=1)
print(row_means) # Prints [2. 5.]
mean_shifted = x - row_means
###Output
_____no_output_____
###Markdown
To figure out what's wrong, we print some shapes:
###Code
x = np.array([[1, 2, 3],
[3, 5, 7]])
print(x.shape) # Prints (2, 3)
row_means = x.mean(axis=1)
print(row_means) # Prints [2. 5.]
print(row_means.shape) # Prints (2,)
###Output
_____no_output_____
###Markdown
What happened? Answer: If we follow broadcasting rule 1, then we'd prepend a 1 to the smaller rank array to get (1, 2). However, the last dimensions don't match now between (2, 3) and (1, 2), and so we can't broadcast. Take 2, reshaping the row means to get the desired behavior:
###Code
x = np.array([[1, 2, 3],
[3, 5, 7]])
print(x.shape) # Prints (2, 3)
row_means = x.mean(axis=1)
print('row_means shape:', row_means.shape)
print('expanded row_means shape: ', np.expand_dims(row_means, axis=1).shape)
mean_shifted = x - np.expand_dims(row_means, axis=1)
print(mean_shifted)
print(mean_shifted.shape) # Prints (2, 3)
###Output
_____no_output_____
###Markdown
More broadcasting examples!
###Code
# Compute outer product of vectors
v = np.array([1, 2, 3]) # v has shape (3,)
w = np.array([4, 5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
# [[ 4 5]
# [ 8 10]
# [12 15]]
print(np.reshape(v, (3, 1)) * w)
# Add a vector to each row of a matrix
x = np.array([[1, 2, 3], [4, 5, 6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
# [[2 4 6]
# [5 7 9]]
print(x + v)
# Add a vector to each column of a matrix
# x has shape (2, 3) and w has shape (2,).
# If we transpose x then it has shape (3, 2) and can be broadcast
# against w to yield a result of shape (3, 2); transposing this result
# yields the final result of shape (2, 3) which is the matrix x with
# the vector w added to each column. Gives the following matrix:
# [[ 5 6 7]
# [ 9 10 11]]
print((x.T + w).T)
# Another solution is to reshape w to be a column vector of shape (2, 1);
# we can then broadcast it directly against x to produce the same
# output.
print(x + np.reshape(w, (2, 1)))
###Output
_____no_output_____
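###Markdown
If you are ever unsure whether two shapes are compatible, you can ask NumPy directly: `np.broadcast` reports the resulting broadcast shape (or raises an error if the shapes are incompatible), and `np.broadcast_to` materializes a read-only broadcast view. A small illustration:
###Code
a = np.zeros((2, 3))
b = np.arange(3)
print(np.broadcast(a, b).shape)    # Prints (2, 3); the shapes are compatible
print(np.broadcast_to(b, (2, 3)))  # b behaves as if copied along the first dimension
###Output
_____no_output_____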
###Markdown
Views vs. Copies Unlike a copy, in a **view** of an array, the data is shared between the view and the array. Sometimes, our results are copies of arrays, but other times they can be views. Understanding when each is generated is important to avoid any unforeseen issues.Views can be created from a slice of an array, changing the dtype of the same data area (using arr.view(dtype), not the result of arr.astype(dtype)), or even both.
###Code
x = np.arange(5)
print('Original:\n', x) # Prints [0 1 2 3 4]
# Modifying the view will modify the array
view = x[1:3]
view[1] = -1
print('Array After Modified View:\n', x) # Prints [0 1 -1 3 4]
x = np.arange(5)
view = x[1:3]
view[1] = -1
# Modifying the array will modify the view
print('View Before Array Modification:\n', view) # Prints [1 -1]
x[2] = 10
print('Array After Modifications:\n', x) # Prints [0 1 10 3 4]
print('View After Array Modification:\n', view) # Prints [1 10]
###Output
_____no_output_____
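###Markdown
When it is not obvious whether an operation gave you a view or a copy, `np.shares_memory` (or the array's `.base` attribute) can tell you whether two arrays refer to the same underlying data:
###Code
x = np.arange(5)
view = x[1:3]                      # slicing produces a view
print(np.shares_memory(x, view))   # Prints True; they share the same data buffer
print(view.base is x)              # Prints True; the view keeps a reference to the original array
###Output
_____no_output_____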
###Markdown
However, if we use fancy indexing, the result will actually be a copy and not a view:
###Code
x = np.arange(5)
print('Original:\n', x) # Prints [0 1 2 3 4]
# Modifying the result of the selection due to fancy indexing
# will not modify the original array.
copy = x[[1, 2]]
copy[1] = -1
print('Copy:\n', copy) # Prints [1 -1]
print('Array After Modified Copy:\n', x) # Prints [0 1 2 3 4]
# Another example involving fancy indexing
x = np.arange(5)
print('Original:\n', x) # Prints [0 1 2 3 4]
copy = x[x >= 2]
print('Copy:\n', copy) # Prints [2 3 4]
x[3] = 10
print('Modified Array:\n', x) # Prints [0 1 2 10 4]
print('Copy After Modified Array:\n', copy) # Prints [2 3 4]
###Output
_____no_output_____ |
Powerproduction_ML.ipynb | ###Markdown
Power Production - Machine Learning project___The assignment project for Machine Learning and Statistics, GMIT 2020-2021Lecturer: Dr. Ian McLoughlin>Author: **Andrzej Kocielski** >Github: [andkoc001](https://github.com/andkoc001/) >Email: [email protected], [email protected] Content1. Introduction2. Wind power3. Data set - exploratory analysis4. Model 1 - simple linear regression5. Model 2 - polynomial regression6. Model 3 - random forest7. Summary8. References ___ Introduction___This notebook consists of the project development, research summary, code, etc. It should be read in conjunction with the corresponding README.md file at the project [repository](https://github.com/andkoc001/Machine-Learning-and-Statistics-Project.git) at GitHub. Project objectivesThe objective of the project is to develop a web service that makes predictions using the Machine Learning (ML) paradigm. The goal of the project is to produce a model or models that, based on the provided dataset `powerproduction`, and through applying the appropriate ML techniques, predict the power output generated by a wind turbine from the wind speed. The power output predictions should be generated in response to wind speed values obtained as HTTP requests.Further details can be found in the [project brief](https://github.com/andkoc001/Machine-Learning-and-Statistics/blob/main/assessment.pdf). Web AppAlong with this notebook, the project includes the development of a web application. The web app provides an interactive and user-friendly tool that allows for testing selected machine learning models. The app returns predicted values of power output for a user-supplied wind speed. Please refer to the README.md file in this repository for more information. ___ Wind power ___Wind power (or wind energy) is a general term describing energy generated from wind, where the wind's kinetic energy is converted into electrical power. Typically, the power is generated by wind turbines.There are many factors influencing the generated output, but wind speed is a fundamental contributor. The Wikipedia article states that the power is proportional to the third power of the wind speed ([Wikipedia - Wind Power](https://en.wikipedia.org/wiki/Wind_power)):$$P = \frac{1}{2} A \rho v^3$$where $P$ is the power output, $A$ corresponds to the size (swept area) of the turbine, $\rho$ is the air density and $v$ is the wind speed.However, in practical terms, the observed power output follows a more complex pattern ([Wind education - Wind power](https://energyeducation.ca/encyclopedia/Wind_power)). Image source: ([Quora - What is a power curve](https://www.quora.com/What-is-a-power-curve-and-how-do-we-draw-one))The wind turbine activates at a certain threshold wind speed - referred to as the cut-in speed. Below that speed the turbine operation is not economically viable. The maximum power output is achieved at the rated wind speed, which is turbine specific. In the zone between the cut-in and rated speeds, the power increases exponentially with the wind speed. Beyond the rated speed, the produced output remains approximately flat (or may decline gently), until it reaches the cut-off speed. At that speed, the turbine is shut down in order to prevent it from taking damage. It is also worth noting that the above power curve is only a crude approximation of the observed amount of energy produced in reality. ___ Data set - exploratory analysis___ Importing required librariesFor this project external packages and modules are used. All of them are imported in the cell(s) below. 
It is required to import the packages and modules before running the subsequent cells.
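Before loading the libraries and the data, the cubic formula quoted in the wind power section above can be illustrated with a few lines of Python. This is only a side sketch: the rotor swept area and air density values used below are arbitrary assumptions, and the snippet is not part of the project pipeline.
###Code
# illustrative sketch of the theoretical relationship P = 0.5 * A * rho * v^3
# (assumed values: A = 5000 m^2 swept area, rho = 1.225 kg/m^3 air density)
import numpy as np
v = np.arange(0, 30, 5)          # wind speeds
P = 0.5 * 5000 * 1.225 * v ** 3  # idealised power output in watts
print(list(zip(v, P.round())))
###Output
_____no_output_____
###Markdown
In practice, as described above, the observed curve flattens at the rated speed and drops to zero beyond the cut-off speed, so the cubic law only describes the region between the cut-in and rated speeds.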
###Code
# import required libraries and packages - see description above
# ignore deprecated warnings
import warnings
warnings.filterwarnings("ignore")
# calculating square root
from math import sqrt
# numerical operations on arrays
import numpy as np
# data manipulation (dataframe)
import pandas as pd
# fitting linear regression
from sklearn.linear_model import LinearRegression
# polynomial coefficients
from sklearn.preprocessing import PolynomialFeatures
# split data set into random train and test subsets
from sklearn.model_selection import train_test_split
# random forest regressor
from sklearn.ensemble import RandomForestRegressor
# for accuracy evaluation
from sklearn import metrics
from sklearn.metrics import mean_squared_error, r2_score
# plotting
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
General plotting settings are defined in the following cell.
###Code
# plotting settings
# set plotting style
plt.style.use('ggplot')
# set default figure size
plt.figure(figsize=(14,8))
plt.rcParams["figure.figsize"] = (14,8)
# plot matplotlib graphs next to the code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading the data set from a fileThe data set provided for the project is loaded from the file powerproduction.txt (in the repository). It is stored as a DataFrame and assigned under the name `df_raw`.
###Code
# load the data set from file
df_raw = pd.read_csv(r"powerproduction.txt")
###Output
_____no_output_____
###Markdown
A glance into the data setThe dataset loaded from the provided file is assigned to the variable `df_raw`. Let us take a sneak peek at how this dataset looks. We will attempt to evaluate its size, basic statistical properties, distributions, etc., as well as produce some plots for a better understanding of its properties.
###Code
# show a few first rows of the data set
df_raw.head(8)
# rudimentary statistical insight into the data set
df_raw.describe().T
# plot the data points
sns.relplot(data=df_raw, x="speed", y="power", s=10, height=6, aspect=2)
###Output
_____no_output_____
###Markdown
The analysis of the wind speed distribution shows that the wind speed appears to be uniformly distributed, with no particular wind speed dominating.
###Code
# Histogram of power outputs - frequency of occurance - 'zero' values seem to distort the plot
plt.rcParams["figure.figsize"] = (14,4)
sns.distplot(df_raw.power, bins=100, kde=False)
plt.show()
# what wind speeds dominate - it appears to be more or less uniformely distributed
plt.rcParams["figure.figsize"] = (14,2)
sns.distplot(df_raw.speed, bins=100, kde=False)
plt.show()
###Output
_____no_output_____
###Markdown
Exploratory data analysisFrom the above dataset description and data points plot, the following conclusions can be drawn.The data set consists of 500 observations (rows). Each observation consists of 2 attributes (columns): wind speed (`speed`) and corresponding power output (`power`). The units of the values are not explicitly given. Possibly the wind speed is shown in m/s, whereas the power values represent the turbine efficiency.The wind speed values vary from 0 to 25 and are shown in ascending order. Every speed value is unique. The power output values vary from 0 to 113.556. There are 49 instances (approximately 10% of all observations) where the power output equals zero.From the plot one can observe four distinct areas with different behaviour of the data points. I will refer to them as zones A, B, C and D.A - The wind speed ranging from 0 to about 8. B - The wind speed range from about 8 to about 17. C - The wind speed between approximately 17 and 24.5. D - Above the wind speed level of 24.5.Such a behaviour agrees closely with the wind power curve shown above ([Wind education - Wind power](https://energyeducation.ca/encyclopedia/Wind_power)). It can be explained by the fact that at low wind speed - in zone A - the wind does not carry sufficient energy to turn the turbines, or doing so is not economically justified. The generated power output readings are not affected by the wind speed in this region. These readings may be caused by other factors (noise). For this analysis, however, it is assumed these are valid output readings. In zone B there is a nearly linear (even close to exponential) correlation between the wind speed and the power output. When the wind speed exceeds approximately 17, the wind turbines work with a high performance, close to their 100% efficiency. However, the power output declines slightly with increasing wind speed, up to approximately 24.5 - zone C. In zone D, where the winds are very strong, no power output is produced. The power generation abruptly ceases, possibly because the turbines are shut off for safety reasons.
###Code
sns.relplot(data=df_raw, x="speed", y="power", s=10, height=3, aspect=2)
plt.plot([8,8],[0,120], "k:")
plt.plot([17,17],[0,120], "k:")
plt.plot([24.5,24.5],[0,120], "k:")
plt.text(3.5, 85, "A", size='20', color='black')
plt.text(12, 85, "B", size='20', color='black')
plt.text(20.5, 45, "C", size='20', color='black')
plt.text(24.8, 45, "D", size='20', color='black')
plt.xlim(0,26)
plt.ylim(-5,120)
###Output
_____no_output_____
###Markdown
Noise and erroneous readingsBased on the research and my own professional expertise, the data set is clearly affected by some random noise, as the readings show a consistently erratic behavior. It is assumed there are other factors in play which are not identified, and yet influence the readings. The research shows that wind turbines do not normally generate any output until the wind reaches the 'cut-in' threshold. However, in the provided data set some low speeds resulted in production of electricity. Similarly, power outputs going significantly above the full capacity should not be considered accurate. These readings could be the results of measurement errors ([Statistics How To - Measurement Error](https://www.statisticshowto.com/measurement-error/)).Also, there are occasional observations where the power output is zero, even though the wind speeds are in zones allowing for producing power. In total, there are 49 observations with power output equal to zero. It is assumed these data points represent observations during, for example, maintenance works, when the turbine was shut down. Clean the datasetThe observations where the power output is zero are spread randomly along the wind speeds. These data points seem to distort the data set. They are therefore assumed to be data anomalies and are excluded from further analysis.The data set is now cleaned by removal of these observations. A new dataframe `df` is created.
###Code
# clean the dataset by removing distorting observations
# remove the observations where wind speed is less than 6 and the power output greated than 5 - these readings are considered affected by noise
df = df_raw.drop(df_raw.loc[(df_raw.power > 5) & (df_raw.speed < 4)].index)
# remove the observations where wind speed greater than 10 and power output is zero - these are considered errous readings (e.g. due to maintenance)
df = df.drop(df.loc[(df.power == 0) & (df.speed > 10)].index)
# remove the observations where wind power output is greater than 110 - these are considered errous readings (noise)
df = df.drop(df.loc[(df.power > 110)].index)
###Output
_____no_output_____
###Markdown
Through the cleaning, the data set shape has been updated to the following:
###Code
# number of removed observations after cleaning
print(f"Number of removed observations due to cleaning: {df_raw.shape[0] - df.shape[0]}")
print(f"Number of remaining observations in the data set after cleaning: {df.shape[0]}")
# plot the data points
# sns.relplot(data=df, x="speed", y="power", s=10, height=6, aspect=2) # commented out not to clutter the notebook
###Output
_____no_output_____
###Markdown
The problemIn terms of machine learning, the wind speed to power output relationship constitutes a regression problem. The objective of this project is to apply machine learning techniques in order to forecast the response of the power output (the dependent variable) based on a given wind speed (the independent variable). The provided data set is composed of two variables (features). As the target variable is continuous in nature (in contrast to categorical) and is provided alongside the wind speed, the task requires application of supervised learning ([Real Python - Linear Regression in Python](https://realpython.com/linear-regression-in-python/)). ___ Simple linear regression___Simple linear regression is one of the simplest supervised learning algorithms in machine learning and is widely used for forecasting. The aim of linear regression is to fit a straight line to the data ([https://en.wikipedia.org/wiki/Linear_regression](https://en.wikipedia.org/wiki/Linear_regression)). In this section, the sklearn library is used. This section of the project is based on [Alaettin Serhan Mete - Linear regression - exercise project](https://amete.github.io/DataSciencePortfolio/Udemy/Python-DS-and-ML-Bootcamp/Linear_Regression_Project.html).The entire data set is first divided into a variable `X` that holds the wind speed values and a variable `y` equal to the power outputs. Next, the data is split into training and testing sets. The training set is formed out of 70% random instances from the whole (cleaned) data set and the test data - out of the remaining 30%. A linear regression model, called `lin_reg_model`, is built and trained on the training sets X and y.
###Code
# assign "speed" and "power" sets to variables X and y
X, y = df["speed"], df["power"]
# random_state (seed) is set for consistency
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=2020)
# convert the array shape and unify the lengths
X_train = X_train.values.reshape(-1,1)
y_train = y_train.values.reshape(-1,1)
# create an instance of a LinearRegression() model named lin_reg_model.
lin_reg_model = LinearRegression()
#Train/fit lin_reg_model on the training data.
lin_reg_model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Model evaluationLet's see the model parameters: the coefficient and the intercept. The meaning of the model coefficient is that for each increase of the x-value by 1, the predicted response increases by the value of the coefficient. The intercept is the value where the regression line crosses the y-axis.
###Code
# coefficient of the model (slope)
print(f"The model coefficient (slope) is {float(lin_reg_model.coef_):.2f}")
# intercept value
print(f"Intercept: {float(lin_reg_model.intercept_):.1f}")
###Output
The model coefficient (slope) is 5.74
Intercept: -22.2
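###Markdown
In other words, this model predicts `power = coefficient * speed + intercept`. As a quick sanity check (the wind speed of 10.0 below is just an arbitrary example value), the manual calculation should match the model's own prediction:
###Code
speed_value = 10.0
manual_prediction = float(lin_reg_model.coef_) * speed_value + float(lin_reg_model.intercept_)
print(f"Manual prediction: {manual_prediction:.3f}")
print(f"Model prediction: {float(lin_reg_model.predict([[speed_value]])):.3f}")
###Output
_____no_output_____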
###Markdown
PredictionPredicting the test values will allow us to evaluate the model's performance. The function `predict()` is used to predict the `X_test` set of the data. Then, the predictions of the test values will be plotted against the real test values.
###Code
# reschape X_test
X_test = X_test.values.reshape(-1,1)
predictions = lin_reg_model.predict(X_test)
# plot the results
plt.scatter(predictions, y_test, s=4, color='red', alpha=.8)
plt.xlabel('Y Predicted')
plt.ylabel('Y Test')
plt.rcParams["figure.figsize"] = (8,8)
plt.xlim(-20,120)
plt.ylim(-20,120)
# take a random wind speed value from the provided data set (cleaned)
wind_test = df["speed"].sample()
# link corresponding actual power output
for i in range(df.shape[0]):
if df.iloc[i]["speed"] == wind_test.iloc[0]:
actual_output = df.iloc[i]["power"]
power_predict = lin_reg_model.predict([wind_test])
print(f"The predicted power output for wind speed {float(wind_test):.3f} is: \t {float(power_predict):.3f}")
print(f"The actual power output for wind speed {float(wind_test):.3f} is: \t {float(actual_output):.3f}")
print(f"The prediction accuracy for the data point is: \t\t {float(abs(1.-(abs(power_predict-actual_output)/actual_output))*100):.1f}%")
# accuracy of the test set
predictions = predictions.flatten()
print(f"Root mean square error (RMSE): {mean_squared_error(y_test, predictions, squared=False):.2f}") # (by hand: {sqrt((1./len(y_test))*(sum((y_test-predictions)**2))):.2f})")
print(f"Coefficient of determination (R square): {r2_score(y_test, predictions):.2f}") # true value, predicted value
###Output
Root mean square error (RMSE): 13.14
Coefficient of determination (R square): 0.90
###Markdown
ConclusionSimple linear regression (first polynomial order) offers only a crude approximation. The accuracy for the given data set is reasonable only in some limited ranges of wind speed. Simple linear regression should therefore be used with much care, as the results may be grossly wrong; the model can even yield a negative power output for certain wind speeds! ___ Polynomial regression___Better results, compared to simple linear regression, can be produced with higher-order polynomial regressions. I have tried regressions of various polynomial orders; the plot below compares the 1st, 3rd, 5th and 20th orders against the data. Even though lower-order polynomial regressions are still susceptible to underfitting, they appear significantly better than simple linear regression. At the same time, regressions of higher polynomial order tend towards excessive complexity and carry the risk of overfitting. Of the orders tested, the 7th appears to provide good accuracy over the wind speed range 0 to 24.5 and is applied in the next section.
###Code
# reshape the array
X = df.iloc[:, 0].values.reshape(-1,1)
# y = df.iloc[:, 1].values.reshape(-1,1)
y = df.iloc[:, 1]
# adapted from https://stackoverflow.com/q/51732577
plt.scatter(X, y, s=4, color='blue', alpha=0.8, label="data")
# Fitting Polynomial Regression to the dataset
poly_reg = PolynomialFeatures(degree = 1)
X_poly = poly_reg.fit_transform(X)
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
plt.plot(X, lin_reg.predict(X_poly), ls="--", color = 'green', alpha=0.5, label='1st polynomial order')
poly_reg = PolynomialFeatures(degree = 3)
X_poly = poly_reg.fit_transform(X)
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
plt.plot(X, lin_reg.predict(X_poly), ls="--", color='cyan', alpha=0.7, label='3rd polynomial order')
poly_reg = PolynomialFeatures(degree = 5)
X_poly = poly_reg.fit_transform(X)
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
plt.plot(X, lin_reg.predict(X_poly), ls="-", color='k', alpha=1.0, label='5th polynomial order')
poly_reg = PolynomialFeatures(degree = 20)
X_poly = poly_reg.fit_transform(X)
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
plt.plot(X, lin_reg.predict(X_poly), ls="--", color='m', alpha=0.8, label='20th polynomial order')
# Visualising the Polynomial Regression results
plt.legend(loc='best')
plt.rcParams["figure.figsize"] = (14,8)
plt.show()
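# Quick quantitative check (a sketch) to accompany the visual comparison above:
# in-sample R^2 for a few polynomial orders, collected in a dict for inspection.
r2_by_degree = {}
for degree in [1, 3, 5, 7, 20]:
    feats = PolynomialFeatures(degree=degree).fit_transform(X)
    fit = LinearRegression().fit(feats, y)
    r2_by_degree[degree] = r2_score(y, fit.predict(feats))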
###Output
_____no_output_____
###Markdown
Apply regression modelThis section is based on the tutorial at [Miroslaw Mamczur - Jak działa regresja liniowa i czy warto ją stosować (in Polish)](https://miroslawmamczur.pl/jak-dziala-regresja-liniowa-i-czy-warto-ja-stosowac/). Again, the sklearn package is used to build the regression model. This time, the regression model is developed on the entire data set, without splitting into training and test sets.The 7th-order polynomial appears to closely follow the pattern of the data points over the wind speed domain (range 0-24.5). It also appears to be free from overfitting in this range. Therefore I consider it a good candidate for further analysis.
###Code
# develop a regression model
poly = PolynomialFeatures(degree = 7) # 7th polynomial order
X_poly = poly.fit_transform(X)
# lin_reg = LinearRegression()
# ask our model to fit the data.
# lin_reg.fit(X_poly, y)
poly_reg = LinearRegression().fit(X_poly, y)
# perform regression to predict the power output out of wind speed
y_pred = poly_reg.predict(X_poly)
print('Coefficients: ', poly_reg.coef_)
print('Intercept: ', poly_reg.intercept_)
###Output
Coefficients: [ 0.00000000e+00 -9.25381441e+00 6.48184708e+00 -1.73542743e+00
2.19520315e-01 -1.35529643e-02 4.01514498e-04 -4.59439083e-06]
Intercept: 4.265712712579351
###Markdown
Out of curiosity, I have also compared the polynomial coefficients obtained with NumPy's `polyfit()`. This function finds the coefficient values that minimise the squared error of the polynomial fit. The coefficients and the intercept are the same as those obtained with the sklearn library above.
###Code
coeff = np.polyfit(df['speed'], df['power'], 7)
#coeff
yp = np.poly1d(coeff)
print("y = ")
print(yp)
###Output
y =
7 6 5 4 3 2
-4.594e-06 x + 0.0004015 x - 0.01355 x + 0.2195 x - 1.735 x + 6.482 x - 9.254 x + 4.266
###Markdown
PredictionsIn order to predict the power output as a function of the wind speed, the polynomial equation above is used. In the cell below, manually enter a wind speed (in the range 0-25) to have the model calculate the power output.
###Code
# enter arbitrary wind speed value in range between 0 and 25
wind_speed = 20
x = wind_speed
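# note: the coefficients below are hard-coded and differ slightly from the
# coefficients printed by the fit above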
power_output = (-5.11800967e-06*pow(x,7)) + 4.48301902e-04*pow(x,6) - 1.52309426e-02*pow(x,5) + 2.50368085e-01*pow(x,4) - 2.04365136e+00*pow(x,3) + 8.13376871e+00*pow(x,2) - 1.38470256e+01*pow(x,1) + 10.91407191*pow(x,0)
print(f"Predicted power output from wind speed = {wind_speed} is: {power_output:.2f}")
###Output
Predicted power output from wind speed = 20 is: 98.42
###Markdown
Accuracy
###Code
# plot the results
plt.scatter(y_pred, y, s=4, color='red', alpha=.8)
plt.xlabel('Y Predicted')
plt.ylabel('Y Test')
plt.rcParams["figure.figsize"] = (6,6)
plt.xlim(-20,120)
plt.ylim(-20,120)
print(f"Root Mean Squared Error (RMSE): {np.sqrt(mean_squared_error(y, y_pred)):.2f}")
print(f"Coefficient of determination (R square): {r2_score(y, y_pred):.2f}") # true value, predicted value
###Output
Root Mean Squared Error (RMSE): 3.91
Coefficient of determination (R square): 0.99
###Markdown
Of the above metrics, the most important are the root mean squared error, which measures the error of a model in predicting quantitative data, and the coefficient of determination. The achieved value of $RMSE=3.91$ is low, suggesting a good model. Also, $R^2 = 0.99$ is very close to 1, meaning the fit describes the data points well. ConclusionPolynomial regressions provide a distinctly better approximation than simple linear regression. The 7th-order polynomial shown above gives accurate results, and the model does not show overfitting in the considered wind speed range. ___ Random Forest___Random forest belongs to the ensemble methods, which combine multiple algorithms in order to improve performance and robustness. Random forest in particular applies a number of decision tree estimators and averages their predictions; this averaging over randomized trees decreases the variance and makes the predictions more robust ([Sklearn](https://scikit-learn.org/stable/modules/ensemble.htmlforest)).The `RandomForestRegressor` class from the sklearn library is used in the analysis below. It is applied to the already cleaned data set `df`, split into training and test subsets as previously. The subsets have already been reshaped as needed.
###Code
# reshape the subset arrays
# X_train = X_train.values.reshape(-1, 1) # already reshaped for previous model
# y_train = y_train.values
# X_test = X_test.values.reshape(-1, 1)
###Output
_____no_output_____
###Markdown
Creating the modelThe `RandomForestRegressor` from sklearn is instantiated below with a chosen number of estimators. It is subsequently used to find the best curve fitting the training data points ([Jake VanderPlas - Python data science handbook - decision trees and random forests](https://jakevdp.github.io/PythonDataScienceHandbook/05.08-random-forests.html)).
###Code
# https://www.geeksforgeeks.org/random-forest-regression-in-python/
# create an instance of the random forest model
rand_forest_model = RandomForestRegressor(n_estimators = 100, random_state = 2020)
# fit the regressor with the train data subset
rand_forest_model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Making prediction
###Code
test_result = rand_forest_model.predict(X_test)
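# Illustrative sketch of the ensemble idea described above: the forest's
# prediction is the average of the individual decision-tree predictions.
per_tree_predictions = np.stack([tree.predict(X_test) for tree in rand_forest_model.estimators_])
assert np.allclose(per_tree_predictions.mean(axis=0), test_result)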
###Output
_____no_output_____
###Markdown
The predicted results are shown in the plot below against the test data points.
###Code
# Scatter plot for original data
plt.scatter(X_test, y_test, color = 'green', alpha=.3, label='Measured')
# plot predicted data
plt.scatter(X_test, test_result, color = 'red', marker='x', alpha=.7, label='Predicted')
plt.title('Random Forest Regression')
plt.xlabel('Wind speed')
plt.ylabel('Power output')
plt.legend(loc='best')
plt.rcParams["figure.figsize"] = (14,8)
plt.show()
###Output
_____no_output_____
###Markdown
Model accuracyIn order to evaluate the accuracy of the predictions, both RMSE and R-square metrics are calculated.
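For reference, the two metrics are defined as $RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2}$ and $R^2 = 1 - \frac{\sum_{i}(y_i-\hat{y}_i)^2}{\sum_{i}(y_i-\bar{y})^2}$, where $y_i$ are the measured and $\hat{y}_i$ the predicted power outputs.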
###Code
print(f"Root Mean Squared Error (RMSE): {np.sqrt(mean_squared_error(y_test, test_result)):.2f}")
print(f"Coefficient of determination (R square): {r2_score(y_test, test_result):.2f}") # true value, predicted value
###Output
Root Mean Squared Error (RMSE): 4.76
Coefficient of determination (R square): 0.99
|
notebooks/05_best_model.ipynb | ###Markdown
> AIDA with Transfer Learning Imports
###Code
import pandas as pd
import numpy as np
import os
import math
import seaborn as sns
import urllib.request
from urllib.parse import urlparse
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow_addons as tfa
#https://machinelearningmastery.com/how-to-use-transfer-learning-when-developing-convolutional-neural-network-models/
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.applications.xception import Xception
from keras.models import Model
from keras import metrics
from keras.callbacks import ModelCheckpoint, TensorBoard
from numba import cuda
import sklearn.model_selection as skms
from sklearn.utils import class_weight
#from wcs.google import google_drive_share
#from google.colab import drive
import src.helper.helper as hlp
import src.helper.const as const
import datetime as dt
import time
import warnings
warnings.simplefilter(action='ignore')
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
###Output
_____no_output_____
###Markdown
Configuration
###Code
# Model
MODEL_2_LOAD = "InceptionResNetV2_corrected_20210323T223046Z.hd5"
MODEL_NAME = "InceptionResNetV2_Xceptionv4_Customized"
USE_MOST_POPULAR = False
NUM_MOST_POPULAR = 50000
DO_TRAIN_VALID_SPLIT = False
BATCH_SIZE = 64*4
EPOCHS = 10
AUTOTUNE = tf.data.experimental.AUTOTUNE # Adapt preprocessing and prefetching dynamically to reduce GPU and CPU idle time
DO_SHUFFLE = False
SHUFFLE_BUFFER_SIZE = 1024 # Shuffle the training data by a chunck of 1024 observations
IMG_DIMS = [299, 299]
IMG_CHANNELS = 3 # Keep RGB color channels to match the input format of the model
LABEL_COLS = ['Action', 'Adventure', 'Animation', 'Comedy', 'Crime',
'Documentary', 'Drama', 'Family', 'Fantasy', 'History', 'Horror',
'Music', 'Mystery', 'Romance', 'Science Fiction', 'TV Movie',
'Thriller', 'War', 'Western']
# Data pipeline
DP_TFDATA = "Data pipeline using tf.data"
DP_IMGGEN = "Data pipeline using tf.keras.ImageGenerator"
DP = DP_TFDATA
# Directories
DIR = './'
DATA_DIR_POSTER = DIR + '../data/raw/posters_v3/'
DATA_DIR_INTERIM = DIR + "../data/interim/"
DATA_DIR_RAW = DIR + "../data/raw/"
MODEL_DIR = DIR + "../models/"
BASE_DIR = DIR
IMAGES_DIR = DATA_DIR_POSTER
SEED = const.SEED
TENSORBOARD_LOGDIR = DIR + "tensorboard_logs/scalars/"
# Display of dataframes
pd.set_option('display.max_colwidth', None)
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Create virtual GPUs
try:
tf.config.experimental.set_virtual_device_configuration(
#OK, but solwer:
#gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024)],
gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5*1024),
tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5*1024)],
#Error: gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=20*1024)],
)
tf.config.experimental.set_virtual_device_configuration(
#OK, but solwer:
#gpus[1], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024)],
gpus[1], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5*1024),
tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5*1024)],
#Error: gpus[1], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=20*1024)],
)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e)
###Output
2 Physical GPU, 4 Logical GPUs
###Markdown
Datapipeline based on tf.data
###Code
def parse_function(filename, label):
"""Function that returns a tuple of normalized image array and labels array.
Args:
filename: string representing path to image
label: 0/1 one-dimensional array of size N_LABELS
"""
# Read an image from a file
image_string = tf.io.read_file(DATA_DIR_POSTER + filename)
# Decode it into a dense vector
image_decoded = tf.image.decode_jpeg(image_string, channels=IMG_CHANNELS)
# Resize it to fixed shape
image_resized = tf.image.resize(image_decoded, [IMG_DIMS[0], IMG_DIMS[1]])
# Normalize it from [0, 255] to [0.0, 1.0]
image_normalized = image_resized / 255.0
return image_normalized, label
def create_dataset(filenames, labels, cache=True):
"""Load and parse dataset.
Args:
filenames: list of image paths
labels: numpy array of shape (BATCH_SIZE, N_LABELS)
is_training: boolean to indicate training mode
"""
# Create a first dataset of file paths and labels
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
# Parse and preprocess observations in parallel
dataset = dataset.map(parse_function, num_parallel_calls=AUTOTUNE)
if cache == True:
# This is a small dataset, only load it once, and keep it in memory.
dataset = dataset.cache()
# Shuffle the data each buffer size
if DO_SHUFFLE:
dataset = dataset.shuffle(buffer_size=SHUFFLE_BUFFER_SIZE)
# Batch the data for multiple steps
dataset = dataset.batch(BATCH_SIZE)
# Fetch batches in the background while the model is training.
dataset = dataset.prefetch(buffer_size=AUTOTUNE)
return dataset
###Output
_____no_output_____
###Markdown
Preproc
###Code
# Read train/eval datasets from file
df = pd.read_parquet(DATA_DIR_INTERIM + "df_train_unbalanced_v3.gzip")
df.shape
# Read test datasets from file
df_test = pd.read_csv(DATA_DIR_INTERIM + "df_test_v3.csv")
df_test.shape
# Get 50000 most popular movies
if USE_MOST_POPULAR:
df = df.sort_values(by='popularity', ascending=False).iloc[:NUM_MOST_POPULAR] # first NUM_MOST_POPULAR most release_date
print(df.shape)
# Add genre names
map_genre = {
28:"Action", 12:"Adventure", 16:"Animation", 35:"Comedy", 80:"Crime", 99:"Documentary", 18:"Drama", 10751:"Family",
14:"Fantasy", 36:"History", 27:"Horror", 10402:"Music", 9648:"Mystery", 10749:"Romance", 878:"Science Fiction", 10770:"TV Movie",
53:"Thriller", 10752:"War", 37:"Western"}
def create_genre_names(in_str):
if isinstance(in_str, list):
return [map_genre[id] for id in in_str]
else:
# it must be string
if in_str is None or len(in_str) == 0:
return []
else:
ret = eval(in_str)
return [map_genre[id] for id in ret]
df['genre_names'] = df['genre_id'].map(create_genre_names)
df[LABEL_COLS+['genre_names']].head()
df.info()
df_test.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 27 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 1000 non-null int64
1 original_title 1000 non-null object
2 poster_path 1000 non-null object
3 genre_names 1000 non-null object
4 Action 1000 non-null int64
5 Adventure 1000 non-null int64
6 Animation 1000 non-null int64
7 Comedy 1000 non-null int64
8 Crime 1000 non-null int64
9 Documentary 1000 non-null int64
10 Drama 1000 non-null int64
11 Family 1000 non-null int64
12 Fantasy 1000 non-null int64
13 History 1000 non-null int64
14 Horror 1000 non-null int64
15 Music 1000 non-null int64
16 Mystery 1000 non-null int64
17 Romance 1000 non-null int64
18 Science Fiction 1000 non-null int64
19 TV Movie 1000 non-null int64
20 Thriller 1000 non-null int64
21 War 1000 non-null int64
22 Western 1000 non-null int64
23 poster_exists 1000 non-null bool
24 filename 1000 non-null object
25 genre_ids2 1000 non-null object
26 genre_ids2_list 1000 non-null object
dtypes: bool(1), int64(20), object(6)
memory usage: 204.2+ KB
###Markdown
Create ImageGenerators
###Code
print(DP)
if DO_TRAIN_VALID_SPLIT:
df_train, df_valid = skms.train_test_split(df, test_size=0.2, random_state=SEED)
else:
df_train = df
df_valid = df_test
df_train.shape, df_valid.shape
#tf.autograph.set_verbosity(3, True)
if DP == DP_IMGGEN:
datagen = ImageDataGenerator(rescale=1 / 255.)#, validation_split=0.1)
train_generator = datagen.flow_from_dataframe(
dataframe=df_train,
directory=IMAGES_DIR,
x_col="filename",
y_col="genre_ids2_list",
batch_size=BATCH_SIZE,
seed=SEED,
shuffle=True,
class_mode="categorical",
target_size=(299, 299),
subset='training',
validate_filenames=True
)
valid_generator = datagen.flow_from_dataframe(
dataframe=df_valid,
directory=IMAGES_DIR,
x_col="filename",
y_col="genre_ids2_list",
batch_size=BATCH_SIZE,
seed=SEED,
shuffle=False,
class_mode="categorical",
target_size=(299, 299),
subset='training',
validate_filenames=True
)
test_generator = datagen.flow_from_dataframe(
dataframe=df_test,
directory=IMAGES_DIR,
x_col="filename",
y_col="genre_ids2_list",
batch_size=BATCH_SIZE,
seed=SEED,
shuffle=False,
class_mode="categorical",
target_size=(299, 299),
subset='training',
validate_filenames=True
)
else:
X_train = df_train.filename.to_numpy()
y_train = df_train[LABEL_COLS].to_numpy()
X_valid = df_valid.filename.to_numpy()
y_valid = df_valid[LABEL_COLS].to_numpy()
X_test = df_test.filename.to_numpy()
y_test = df_test[LABEL_COLS].to_numpy()
train_generator = create_dataset(X_train, y_train, cache=True)
valid_generator = create_dataset(X_valid, y_valid, cache=True)
test_generator = create_dataset(X_test, y_test, cache=True)
print(f"{len(X_train)} training datasets, using {y_train.shape[1]} classes")
print(f"{len(X_valid)} validation datasets, unsing {y_valid.shape[1]} classes")
print(f"{len(X_test)} training datasets, using {y_test.shape[1]} classes")
# Show label distribution
df_tmp = pd.DataFrame(  # avoid rebinding df, which is still needed further below
{
'train': df_train[LABEL_COLS].sum()/len(df_train),
'valid': df_valid[LABEL_COLS].sum()/len(df_valid),
'test': df_test[LABEL_COLS].sum()/len(df_test)
},
index=LABEL_COLS
)
df_tmp.sort_values('train', ascending=False).plot.bar(figsize=(14,6), title='Label distributions')
df_train.info()
from sklearn.utils import class_weight
#In order to calculate the class weight do the following
class_weights = class_weight.compute_class_weight('balanced',
LABEL_COLS, # np.array(list(train_generator.class_indices.keys()),dtype="int"),
np.array(df_train.genre_names.explode()))
class_weights = dict(zip(list(range(len(class_weights))), class_weights))
number_of_classes = len(LABEL_COLS)
pd.DataFrame({'weight': [i[1] for i in class_weights.items()]}, index=[LABEL_COLS[i[0]] for i in class_weights.items()])
###Output
_____no_output_____
###Markdown
Create model
###Code
def model_load(model_dir:str, model_fname: str):
print(f"Loading model from file {model_fname}...")
tic = time.perf_counter()
# Load model
model = keras.models.load_model(model_dir + model_fname)
# Check and set name of model
if model.name == None:
model_name = MODEL_NAME
else:
model_name = model.name
toc = time.perf_counter()
secs_all = toc - tic
mins = int(secs_all / 60)
secs = int((secs_all - mins*60))
print(f"model {model_name} loaded in {mins}m {secs}s!")
return model, model_name
###Output
_____no_output_____
###Markdown
Finally, we implemented a standard DenseNet-169 architecture with similar modifications. The final fully-connected layer of 1000 units was once again replaced by 3 sequential fully-connected layers of 1024, 128, and 7 units with ReLU, ReLU, and sigmoid activations respectively. The entire model consists of 14,479,943 parameters, out of which 14,321,543 were trainable.
###Code
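# Minimal sketch of the kind of replacement classification head described in the
# note above (layer sizes 1024/128/7 and the 224x224 input are taken from that
# text as assumptions, not from the model that is actually loaded below);
# guarded so it is not built during the real training run.
BUILD_HEAD_SKETCH = False
if BUILD_HEAD_SKETCH:
    backbone_sketch = keras.applications.DenseNet169(include_top=False, pooling="avg", input_shape=(224, 224, 3))
    backbone_sketch.trainable = False  # freeze the pre-trained backbone
    head_sketch = keras.Sequential([
        backbone_sketch,
        layers.Dense(1024, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(7, activation="sigmoid"),  # multi-label output
    ])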
#!mkdir model_checkpoints
#tf.debugging.set_log_device_placement(True)
l_rtc_names = [
"2-GPU_MirroredStrategy",
"2-GPU_CentralStorageStrategy",
"1-GPU",
"56_CPU",
"2-GPU_MirroredStrategy_NCCL-All-Reduced",
]
l_rtc = [
tf.distribute.MirroredStrategy().scope(),
tf.distribute.experimental.CentralStorageStrategy().scope(),
tf.device("/GPU:0"),
tf.device("/CPU:0"),
tf.distribute.MirroredStrategy(cross_device_ops=tf.distribute.NcclAllReduce()).scope(),
]
# Load Model
i = 0
runtime_context = l_rtc[i]
######for i, runtime_context in enumerate(l_rtc):
print(f"Runtime Context: {l_rtc_names[i]}")
# Create and train model
with runtime_context:
model, model_name = model_load(MODEL_DIR, MODEL_2_LOAD)
# Start time measurement
tic = time.perf_counter()
# Define Tensorflow callback log-entry
model_name_full = f"{model.name}_{l_rtc_names[i]}_{dt.datetime.now().strftime('%Y%m%d-%H%M%S')}"
tb_logdir = f"{TENSORBOARD_LOGDIR}{model_name_full}"
#checkpoint_path = "model_checkpoints/saved-model-06-0.46.hdf5"
#model.load_weights(checkpoint_path)
# mark loaded layers as not trainable
# except last layer
leng = len(model.layers)
print(leng)
for i,layer in enumerate(model.layers):
if leng-i == 5:
print("stopping at",i)
break
layer.trainable = False
# Def metrics
threshold = 0.5
f1_micro = tfa.metrics.F1Score(num_classes=19, average='micro', name='f1_micro',threshold=threshold),
f1_macro = tfa.metrics.F1Score(num_classes=19, average='macro', name='f1_macro',threshold=threshold)
f1_weighted = tfa.metrics.F1Score(num_classes=19, average='weighted', name='f1_score_weighted',threshold=threshold)
# Compile model
model.compile(
optimizer="adam",
loss="binary_crossentropy",
metrics=[
"accuracy",
"categorical_accuracy",
tf.keras.metrics.AUC(multi_label = True),#,label_weights=class_weights),
f1_micro,
f1_macro,
f1_weighted]
)
print("create callbacks")
#filepath = "model_checkpoints/{model_name}_saved-model-{epoch:02d}-{val_f1_score_weighted:.2f}.hdf5"
#cb_checkpoint = ModelCheckpoint(filepath, monitor='val_f1_score_weighted', verbose=1, save_best_only=True, mode='max')
cb_tensorboard = TensorBoard(
log_dir = tb_logdir,
histogram_freq=0,
update_freq='epoch',
write_graph=True,
write_images=False)
#callbacks_list = [cb_checkpoint, cb_tensorboard]
#callbacks_list = [cb_checkpoint]
callbacks_list = [cb_tensorboard]
# Model summary
print(model.summary())
# Train model
print("model fit")
history = model.fit(
train_generator,
validation_data=valid_generator,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
# reduce steps per epochs for faster epochs
#steps_per_epoch = math.ceil(266957 / BATCH_SIZE /8),
class_weight = class_weights,
callbacks=callbacks_list,
use_multiprocessing=False
)
# Measure time of loop
toc = time.perf_counter()
secs_all = toc - tic
mins = int(secs_all / 60)
secs = int((secs_all - mins*60))
print(f"Time spend for current run: {secs_all:0.4f} seconds => {mins}m {secs}s")
# Predict testset
y_pred_test = model.predict(test_generator)
# Store resulting model
try:
fpath = MODEL_DIR + model_name_full
print(f"Saving final model to file {fpath}")
model.save(fpath)
except Exception as e:
print("-------------------------------------------")
print(f"Error during saving of final model\n{e}")
print("-------------------------------------------\n")
try:
fpath = MODEL_DIR + model_name_full + ".ckpt"
print(f"Saving final model weights to file {fpath}]")
model.save_weights(fpath)
except Exception as e:
print("-------------------------------------------")
print(f"Error during saving of final model weights\n{e}")
print("-------------------------------------------\n")
###Output
WARNING:tensorflow:NCCL is not supported when using virtual GPUs, fallingback to reduction to one device
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
INFO:tensorflow:ParameterServerStrategy (CentralStorageStrategy if you are using a single machine) with compute_devices = ['/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3'], variable_device = '/device:CPU:0'
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
Runtime Context: 2-GPU_MirroredStrategy
Loading model from file InceptionResNetV2_corrected_20210323T223046Z.hd5...
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually.
model InceptionResNetV2_Customized loaded in 3m 24s!
15
stopping at 10
create callbacks
Model: "InceptionResNetV2_Customized"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_12 (InputLayer) [(None, 299, 299, 3) 0
__________________________________________________________________________________________________
tf.math.truediv_6 (TFOpLambda) (None, 299, 299, 3) 0 input_12[0][0]
__________________________________________________________________________________________________
tf.math.truediv_7 (TFOpLambda) (None, 299, 299, 3) 0 input_12[0][0]
__________________________________________________________________________________________________
tf.math.subtract_6 (TFOpLambda) (None, 299, 299, 3) 0 tf.math.truediv_6[0][0]
__________________________________________________________________________________________________
tf.math.subtract_7 (TFOpLambda) (None, 299, 299, 3) 0 tf.math.truediv_7[0][0]
__________________________________________________________________________________________________
inception_resnet_v2 (Functional (None, 1536) 54336736 tf.math.subtract_6[0][0]
__________________________________________________________________________________________________
xception (Functional) (None, 2048) 20861480 tf.math.subtract_7[0][0]
__________________________________________________________________________________________________
batch_normalization_834 (BatchN (None, 1536) 6144 inception_resnet_v2[0][0]
__________________________________________________________________________________________________
batch_normalization_835 (BatchN (None, 2048) 8192 xception[0][0]
__________________________________________________________________________________________________
tf.concat_3 (TFOpLambda) (None, 3584) 0 batch_normalization_834[0][0]
batch_normalization_835[0][0]
__________________________________________________________________________________________________
dense_9 (Dense) (None, 1024) 3671040 tf.concat_3[0][0]
__________________________________________________________________________________________________
dropout_6 (Dropout) (None, 1024) 0 dense_9[0][0]
__________________________________________________________________________________________________
dense_10 (Dense) (None, 512) 524800 dropout_6[0][0]
__________________________________________________________________________________________________
dropout_7 (Dropout) (None, 512) 0 dense_10[0][0]
__________________________________________________________________________________________________
dense_11 (Dense) (None, 19) 9747 dropout_7[0][0]
==================================================================================================
Total params: 79,418,139
Trainable params: 4,205,587
Non-trainable params: 75,212,552
__________________________________________________________________________________________________
None
model fit
Epoch 1/10
WARNING:tensorflow:From C:\Users\A291127E01\.conda\envs\aida\lib\site-packages\tensorflow\python\data\ops\multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Iterator.get_next_as_optional()` instead.
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
1/1070 [..............................] - ETA: 0s - loss: 0.4261 - accuracy: 0.0664 - categorical_accuracy: 0.0664 - auc: 0.4568 - f1_micro: 0.1387 - f1_macro: 0.0913 - f1_score_weighted: 0.2409WARNING:tensorflow:From C:\Users\A291127E01\.conda\envs\aida\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py:1277: stop (from tensorflow.python.eager.profiler) is deprecated and will be removed after 2020-07-01.
Instructions for updating:
use `tf.profiler.experimental.stop` instead.
61/1070 [>.............................] - ETA: 1:28:53 - loss: 0.2441 - accuracy: 0.1795 - categorical_accuracy: 0.1795 - auc: 0.5134 - f1_micro: 0.0228 - f1_macro: 0.0146 - f1_score_weighted: 0.0228
###Markdown
Threshold optimization
###Code
from keras import metrics
threshold = 0.35
f1_micro = tfa.metrics.F1Score(num_classes=19, average='micro', name='f1_micro',threshold=threshold),
f1_macro = tfa.metrics.F1Score(num_classes=19, average='macro', name='f1_macro',threshold=threshold)
f1_weighted = tfa.metrics.F1Score(num_classes=19, average='weighted', name='f1_score_weighted',threshold=threshold)
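# Note (assumption about the pipeline): `.labels` below exists only on the
# ImageDataGenerator iterators (DP_IMGGEN); with the tf.data pipeline used when
# DP == DP_TFDATA, the one-hot ground truth is already available as y_test above.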
y_true_test = [ [1 if i in e else 0 for i in range(19)] for e in test_generator.labels]
y_true_test = np.array(y_true_test)
from sklearn.metrics import f1_score
ths = np.linspace(0.1, 0.5, 10)
pd.DataFrame({
'threshold': ths,
'f1-micro': [f1_score(y_true_test, (y_pred_test > th)*1., average="micro") for th in ths],
'f1-weighted': [f1_score(y_true_test, (y_pred_test > th)*1., average="weighted") for th in ths],
'class' : "all"
}
)
from sklearn.metrics import f1_score
ths = np.linspace(0.1, 0.5, 9)
df_ths = pd.DataFrame({'threshold' : ths}
)
for cl in range(19):
col = pd.DataFrame({f'f1-class_{cl}': [f1_score(y_true_test[:,cl], (y_pred_test[:,cl] > th)*1.) for th in ths]
})
df_ths=pd.concat([df_ths,col],axis="columns")
df_ths.style.highlight_max(color = 'lightgreen', axis = 0)
df_ths
argmax_index=df_ths.iloc[:,1:].idxmax(axis=0)
class_thresholds = df_ths.threshold[argmax_index].values
class_thresholds
f1_micro_opt_th = f1_score(y_true_test, (y_pred_test > class_thresholds)*1., average="micro")
f1_weighted_opt_th = f1_score(y_true_test, (y_pred_test > class_thresholds)*1., average="weighted")
print("Class thresholds optimized on test set:",
f"f1_micro_opt_th: {f1_micro_opt_th:.3f}, f1_weighted_opt_th: {f1_weighted_opt_th:.3f}",
sep="\n")
#datagen = ImageDataGenerator(rescale=1 / 255.)#, validation_split=0.1)
BATCH_SIZE = 64
train2_generator = datagen.flow_from_dataframe(
dataframe=df.loc[~df.is_holdout].sample(20000),
directory=IMAGES_DIR,
x_col="filename",
y_col="genre_id",
batch_size=BATCH_SIZE,
seed=42,
shuffle=False,
class_mode="categorical",
target_size=(299, 299),
subset='training',
validate_filenames=False
)
y_pred_train = model.predict(train2_generator)
y_true_train = [ [1 if i in e else 0 for i in range(19)] for e in train2_generator.labels]
y_true_train = np.array(y_true_train)
from sklearn.metrics import f1_score
ths = np.linspace(0.1, 0.5, 9)
df_ths = pd.DataFrame({'threshold' : ths}
)
for cl in range(19):
col = pd.DataFrame({f'f1-class_{cl}': [f1_score(y_true_train[:,cl], (y_pred_train[:,cl] > th)*1.) for th in ths]
})
df_ths=pd.concat([df_ths,col],axis="columns")
df_ths.style.highlight_max(color = 'lightgreen', axis = 0)
df_ths
argmax_index=df_ths.iloc[:,1:].idxmax(axis=0)
class_thresholds = df_ths.threshold[argmax_index].values
class_thresholds
# use the training-set labels/predictions computed above (y_true/y_pred were undefined here)
f1_micro_opt_th = f1_score(y_true_train, (y_pred_train > class_thresholds)*1., average="micro")
f1_weighted_opt_th = f1_score(y_true_train, (y_pred_train > class_thresholds)*1., average="weighted")
print("Class thresholds optimized on training set:",
f"f1_micro_opt_th: {f1_micro_opt_th:.3f}, f1_weighted_opt_th: {f1_weighted_opt_th:.3f}",
sep="\n")
df_train
df[df.original_title.str.contains("brian")==True]
###Output
_____no_output_____ |
kg_demo/kg_demo.ipynb | ###Markdown
Building a knowledge base with information extraction techniquesIn this notebook, some template code has already been provided for you, but you will still need to implement additional functionality to complete the project. Unless explicitly required, you do not need to modify any of the code that is already given. Headings starting with **'[Exercise]'** indicate that the following code section contains functionality you need to implement. These parts come with detailed guidance, and the pieces to implement are also marked with 'TODO' in the comments. Please read all the hints carefully.>**Hint:** Code and Markdown cells can be run with the **Shift + Enter** shortcut. In addition, a Markdown cell can be switched to edit mode by double-clicking it.--- Let's get startedThe goal of this project is to combine named entity recognition, dependency parsing, entity disambiguation and entity unification to build a small knowledge graph from data crawled from openly available web corpora. In the real world you need to piece together a series of models to complete different tasks; for example, an algorithm for predicting dog breeds differs from one for predicting people. While working on the project you may run into quite a few failed predictions, because no perfect algorithm or model exists. The imperfect solution you finally submit will still be an interesting learning experience!--- Step 1: Entity unification Entity unification handles the case where the same entity appears under multiple names: the different designations are unified onto a single entity and reflected in the entity's attributes (an "alias" attribute can be added to the entity). For example, "河北银行股份有限公司", "河北银行公司" and "河北银行" can all be regarded as one entity, and by extracting the main content of the first two designations we obtain the key entity name "河北银行". Company names have their own characteristics, e.g. the suffix can be omitted, the place name of a listed company can be omitted, and so on. Several dictionaries that can be used for entity unification are provided in the data/dict directory.- company_suffix.txt is a dictionary of common company suffixes- company_business_scope.txt is a dictionary of common company business-scope terms- co_Province_Dim.txt is a province dictionary- co_City_Dim.txt is a city dictionary- stopwords.txt is a reference stop-word list Exercise 1: Write the main_extract function that extracts the "main name" from an entity name.
###Code
import jieba
import jieba.posseg as pseg
import re
import datetime
dict_entity_name_unify = {}
# 从输入的“公司名”中提取主体
def main_extract(input_str, stop_word, d_4_delete, d_city_province):
# 开始分词并处理
seg = pseg.cut(input_str)
seg_lst = remove_word(seg, stop_word, d_4_delete)
seg_lst = city_prov_ahead(seg_lst, d_city_province)
result = ''.join(seg_lst)
if result != input_str:
if result not in dict_entity_name_unify:
dict_entity_name_unify[result] = ""
dict_entity_name_unify[result] = dict_entity_name_unify[result] + "|" + input_str
return result
#TODO:实现公司名称中地名提前
def city_prov_ahead(seg, d_city_province):
city_prov_lst = []
# TODO ...
for word in seg:
if word in d_city_province:
city_prov_lst.append(word)
seg_lst = [word for word in seg if word not in city_prov_lst]
return city_prov_lst + seg_lst
#TODO:替换特殊符号
def remove_word(seg, stop_word, d_4_delete):
# TODO ...
seg_lst = [word for word, flag in seg if word not in stop_word and word not in d_4_delete]
return seg_lst
# 初始化,加载词典
def my_initial():
fr1 = open(r"./data/dict/co_City_Dim.txt", encoding='utf-8')
fr2 = open(r"./data/dict/co_Province_Dim.txt", encoding='utf-8')
fr3 = open(r"./data/dict/company_business_scope.txt", encoding='utf-8')
fr4 = open(r"./data/dict/company_suffix.txt", encoding='utf-8')
#城市名
lines1 = fr1.readlines()
d_4_delete = []
d_city_province = [re.sub(r'(\r|\n)*','',line) for line in lines1]
#省份名
lines2 = fr2.readlines()
l2_tmp = [re.sub(r'(\r|\n)*','',line) for line in lines2]
d_city_province.extend(l2_tmp)
#公司后缀
lines3 = fr3.readlines()
l3_tmp = [re.sub(r'(\r|\n)*','',line) for line in lines3]
lines4 = fr4.readlines()
l4_tmp = [re.sub(r'(\r|\n)*','',line) for line in lines4]
d_4_delete.extend(l4_tmp)
#get stop_word
fr = open(r'./data/dict/stopwords.txt', encoding='utf-8')
stop_word = fr.readlines()
stop_word_after = [re.sub(r'(\r|\n)*','',stop_word[i]) for i in range(len(stop_word))]
# stop_word_after[-1] = stop_word[-1]
stop_word = stop_word_after
return d_4_delete, stop_word, d_city_province
# TODO:测试实体统一用例
d_4_delete, stop_word, d_city_province = my_initial()
company_name = "河北银行股份有限公司"
company_name = main_extract(company_name, stop_word, d_4_delete, d_city_province)
print(company_name)
###Output
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model cost 0.799 seconds.
Prefix dict has been built successfully.
###Markdown
Step 2: Entity recognitionThere are many open-source tools that can help us recognize entities; common ones include LTP, StanfordNLP, FoolNLTK and so on. Here FoolNLTK is used for entity recognition. fool is an open-source deep-learning NLP tool built on a bi-LSTM + CRF algorithm that provides word segmentation, entity recognition and other functions; working with fool gives a good feel for the strengths and weaknesses of deep learning on this task. 'data/train_data.csv' and 'data/test_data.csv' contain listed-company announcements crawled from the web; a data sample is shown below:
###Code
import pandas as pd
train_data = pd.read_csv('./data/info_extract/train_data.csv', encoding = 'gb2312', header=0)
train_data.head()
test_data = pd.read_csv('./data/info_extract/test_data.csv', encoding = 'gb2312', header=0)
test_data.head()
###Output
_____no_output_____
###Markdown
A portion of the samples has been labelled, namely train_data, which consists of 5 columns. The id column is the index of the original sample; the sentence column is an extracted key passage; if the key passage contains an equity transaction relation between two entities, the tag column is 1, otherwise 0; if tag is 1, the member1 and member2 columns record the names of the two entities as they appear in the sentence. The remaining samples are unlabelled, namely test_data, which only has the id and sentence columns. The goal is to train a model that recognizes the entities in test_data and decides whether each entity pair has an equity transaction relation. Exercise 2: Recognize the entities in every sentence, store them in an entity dictionary, and replace them in the sentence with special tokens.
###Code
import fool
words, ners = fool.analysis('多氟多化工股份有限公司与李云峰先生签署了《附条件生效的股份认购合同》')
ners
# 处理test数据,利用开源工具进行实体识别和并使用实体统一函数存储实体
import fool
import pandas as pd
from copy import copy
test_data = pd.read_csv('./data/info_extract/test_data.csv', encoding = 'gb2312', header=0)
test_data['ner'] = None
ner_id = 1001
ner_dict_new = {} # 存储所有实体
ner_dict_reverse_new = {} # 存储所有实体
for i in range(len(test_data)):
sentence = copy(test_data.iloc[i, 1])
# TODO:调用fool进行实体识别,得到words和ners结果
words, ners = fool.analysis(sentence)
ners[0].sort(key=lambda x:x[0], reverse=True)
for start, end, ner_type, ner_name in ners[0]:
if ner_type=='company' or ner_type=='person':
# TODO:调用实体统一函数,存储统一后的实体
# 并自增ner_id
company_main_name = main_extract(ner_name, stop_word, d_4_delete, d_city_province)
if company_main_name not in ner_dict_new:
# ner_id 从 1001开始
ner_dict_new[company_main_name] = ner_id
ner_id += 1
# 在句子中用编号替换实体名
sentence = sentence[:start] + ' ner_' + str(ner_dict_new[company_main_name]) + '_ ' + sentence[end:]
test_data.iloc[i, -1] = sentence
X_test = test_data[['ner']]
X_test
# 处理train数据,利用开源工具进行实体识别和并使用实体统一函数存储实体
train_data = pd.read_csv('./data/info_extract/train_data.csv', encoding = 'gb2312', header=0)
train_data['ner'] = None
for i in range(len(train_data)):
# 判断正负样本
if train_data.iloc[i,:]['member1']=='0' and train_data.iloc[i,:]['member2']=='0':
sentence = copy(train_data.iloc[i, 1])
# TODO:调用fool进行实体识别,得到words和ners结果
words, ners = fool.analysis(sentence)
ners[0].sort(key=lambda x:x[0], reverse=True)
for start, end, ner_type, ner_name in ners[0]:
if ner_type=='company' or ner_type=='person':
# TODO:调用实体统一函数,存储统一后的实体
# 并自增ner_id
company_main_name = main_extract(ner_name, stop_word, d_4_delete, d_city_province)
if company_main_name not in ner_dict_new:
ner_dict_new[company_main_name] = ner_id
ner_id += 1
# 在句子中用编号替换实体名
sentence = sentence[:start] + ' ner_' + str(ner_dict_new[company_main_name]) + '_ ' + sentence[end:]
train_data.iloc[i, -1] = sentence
else:
# 将训练集中正样本已经标注的实体也使用编码替换
sentence = copy(train_data.iloc[i,:]['sentence'])
for company_main_name in [train_data.iloc[i,:]['member1'], train_data.iloc[i,:]['member2']]:
# TODO:调用实体统一函数,存储统一后的实体
# 并自增ner_id
company_main_name_new = main_extract(company_main_name, stop_word, d_4_delete, d_city_province)
if company_main_name_new not in ner_dict_new:
ner_dict_new[company_main_name_new] = ner_id
ner_id += 1
# 在句子中用编号替换实体名
sentence = re.sub(company_main_name, ' ner_%s_ '%(str(ner_dict_new[company_main_name_new])), sentence)
train_data.iloc[i, -1] = sentence
ner_dict_reverse_new = {id:name for name, id in ner_dict_new.items()}
y = train_data.loc[:,['tag']]
train_num = len(train_data)
X_train = train_data[['ner']]
# 将train和test放在一起提取特征
X = pd.concat([X_train, X_test], axis=0)
len(X_train), len(X_test), len(X)
X.iloc[0].tolist()
###Output
_____no_output_____
###Markdown
Step 3: Relation extractionGoal: using a syntactic parser, the entity recognition results and text features, extract relations from the training data and store them in a graph database. The task is to extract equity transaction relations; the relation is an undirected edge, so there is no need to decide which party is the investor and which the investee, only whether a transaction relation exists between the two. Templates can be built with "regular expressions", "distance between entities", "entity context", "dependency syntax", and so on. The answers are submitted in the submit directory as info_extract_submit.csv and info_extract_entity.csv.- info_extract_entity.csv format: the first column is the entity id, the second column is the entity name (multiple names unified to one entity are separated by "|")- info_extract_submit.csv format: the first column is the id of entity 1 in the relation, the second column is the id of entity 2.Example:- info_extract_entity.csv| Entity id | Entity name || ------ | ------ || 1001 | 小王 || 1002 | A化工厂 |- info_extract_submit.csv| Entity 1 | Entity 2 || ------ | ------ || 1001 | 1003 || 1002 | 1001 | Exercise 3: Extract tf-idf text featuresRemove stop words and convert the text into tf-idf vectors.
###Code
# code
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from pyltp import Segmentor
# 实体符号加入分词词典
with open('./data/user_dict.txt', 'w') as fw:
for v in ['一', '二', '三', '四', '五', '六', '七', '八', '九', '十']:
fw.write( v + '号企业 ni\n')
# 初始化实例
segmentor = Segmentor()
# 加载模型,加载自定义词典
segmentor.load_with_lexicon('./ltp_data_v3.4.0/cws.model', './data/user_dict.txt')
# 加载停用词
fr = open(r'./data/dict/stopwords.txt', encoding='utf-8')
stop_word = fr.readlines()
stop_word = [re.sub(r'(\r|\n)*', '', stop_word[i]) for i in range(len(stop_word))]
# 分词
# f = lambda x: ' '.join([word for word in segmentor.segment(re.sub(r'ner\_\d\d\d\d\_','',x)) if word not in stop_word])
f = lambda x: ' '.join([word for word in segmentor.segment(x) if word not in stop_word and not re.findall(r'ner\_\d\d\d\d\_', word)])
corpus = X['ner'].map(f).tolist()
from sklearn.feature_extraction.text import TfidfVectorizer
# TODO:提取tfidf特征
vectorizer = TfidfVectorizer() # 定一个tf-idf的vectorizer
X_tfidf = vectorizer.fit_transform(corpus).toarray() # 结果存放在X矩阵
print(X_tfidf)
X_tfidf.shape
###Output
_____no_output_____
###Markdown
Exercise 4: Extract syntactic featuresBesides the word-level sentence-vector features, we can also start from the syntax and extract some features from the dependency parse. Reference features: 1. distance between the company entities; 2. dependency-path distance between the company entities; 3. distance of each company entity to key trigger words; 4. dependency relation type of the entities.
###Code
s = '我喜欢你'
words = segmentor.segment(s)
tags = postagger.postag(words)
parser = Parser() # 初始化实例
parser.load('./ltp_data_v3.4.0/parser.model') # 加载模型
arcs = parser.parse(words, tags) # 句法分析
arcs_lst = list(map(list, zip(*[[arc.head, arc.relation] for arc in arcs])))
print(arcs_lst)
# 实体的依存关系类别
rely_id = [arc.head for arc in arcs] # 提取依存父节点id
relation = [arc.relation for arc in arcs] # 提取依存关系
heads = ['Root' if id == 0 else words[id - 1] for id in rely_id] # 匹配依存父节点词语
for i in range(len(words)):
print(relation[i] + '(' + words[i] + ', ' + heads[i] + ')')
s = '我喜欢你'
words = segmentor.segment(s)
tags = postagger.postag(words)
parser = Parser() # 初始化实例
parser.load('./ltp_data_v3.4.0/parser.model') # 加载模型
arcs = parser.parse(words, tags) # 句法分析
arcs_lst = list(map(list, zip(*[[arc.head, arc.relation] for arc in arcs])))
# 句法分析结果输出
parse_result = pd.DataFrame([[a,b,c,d] for a,b,c,d in zip(list(words), list(tags), arcs_lst[0], arcs_lst[1])], index=range(1, len(words)+1))
parser.release() # 释放模型
parse_result
# -*- coding: utf-8 -*-
from pyltp import Parser
from pyltp import Segmentor
from pyltp import Postagger
import networkx as nx
import pylab
import re
import numpy as np
postagger = Postagger() # 初始化实例
postagger.load_with_lexicon('./ltp_data_v3.4.0/pos.model', './data/user_dict.txt') # 加载模型
segmentor = Segmentor() # 初始化实例
segmentor.load_with_lexicon('./ltp_data_v3.4.0/cws.model', './data/user_dict.txt') # 加载模型
SEN_TAGS = ["SBV","VOB","IOB","FOB","DBL","ATT","ADV","CMP","COO","POB","LAD","RAD","IS","HED"]
def parse(s, isGraph = False):
"""
对语句进行句法分析,并返回句法结果
"""
tmp_ner_dict = {}
num_lst = ['一', '二', '三', '四', '五', '六', '七', '八', '九', '十']
# 将公司代码替换为特殊称谓,保证分词词性正确
for i, ner in enumerate(list(set(re.findall(r'(ner\_\d\d\d\d\_)', s)))):
try:
tmp_ner_dict[num_lst[i]+'号企业'] = ner
except IndexError:
# TODO:定义错误情况的输出
num_lst.append(str(i))
tmp_ner_dict[num_lst[i] + '号企业'] = ner
s = s.replace(ner, num_lst[i]+'号企业')
words = segmentor.segment(s)
tags = postagger.postag(words)
parser = Parser() # 初始化实例
parser.load('./ltp_data_v3.4.0/parser.model') # 加载模型
arcs = parser.parse(words, tags) # 句法分析
arcs_lst = list(map(list, zip(*[[arc.head, arc.relation] for arc in arcs])))
# 句法分析结果输出
parse_result = pd.DataFrame([[a,b,c,d] for a,b,c,d in zip(list(words), list(tags), arcs_lst[0], arcs_lst[1])], index=range(1, len(words)+1))
parser.release() # 释放模型
# TODO:提取企业实体依存句法类型
result = []
# 实体的依存关系类别
rely_id = [arc.head for arc in arcs] # 提取依存父节点id
relation = [arc.relation for arc in arcs] # 提取依存关系
heads = ['Root' if id == 0 else words[id - 1] for id in rely_id] # 匹配依存父节点词语
company_list = list(tmp_ner_dict.keys())
str_enti_1 = "一号企业"
str_enti_2 = "二号企业"
l_w = list(words)
is_two_company = str_enti_1 in l_w and str_enti_2 in l_w
if is_two_company:
second_entity_index = l_w.index(str_enti_2)
entity_sentence_type = parse_result.iloc[second_entity_index, -1]
if entity_sentence_type in SEN_TAGS:
result.append(SEN_TAGS.index(entity_sentence_type))
else:
result.append(-1)
else:
result.append(-1)
if isGraph:
g = Digraph('测试图片')
g.node(name='Root')
for word in words:
g.node(name=word, fontname="SimHei")
for i in range(len(words)):
if relation[i] not in ['HED']:
g.edge(words[i], heads[i], label=relation[i], fontname="SimHei")
else:
if heads[i] == 'Root':
g.edge(words[i], 'Root', label=relation[i], fontname="SimHei")
else:
g.edge(heads[i], 'Root', label=relation[i], fontname="SimHei")
g.view()
# 企业实体间句法距离
distance_e_jufa = 0
if is_two_company:
distance_e_jufa = shortest_path(parse_result, list(words), str_enti_1, str_enti_2, isGraph=False)
result.append(distance_e_jufa)
# 企业实体间距离
distance_entity = 0
if is_two_company:
distance_entity = np.abs(l_w.index(str_enti_1) - l_w.index(str_enti_2))
result.append(distance_entity)
# 投资关系关键词
key_words = ["收购","竞拍","转让","扩张","并购","注资","整合","并入","竞购","竞买","支付","收购价","收购价格","承购","购得","购进",
"购入","买进","买入","赎买","购销","议购","函购","函售","抛售","售卖","销售","转售"]
# TODO:*根据关键词和对应句法关系提取特征(如没有思路可以不完成)
k_w = None
for w in words:
if w in key_words:
k_w = w
break
dis_key_e_1 = -1
dis_key_e_2 = -1
if k_w != None and is_two_company:
k_w = str(k_w)
l_w = list(words)
dis_key_e_1 = np.abs(l_w.index(str_enti_1) - l_w.index(k_w))
dis_key_e_2 = np.abs(l_w.index(str_enti_2) - l_w.index(k_w))
result.append(dis_key_e_1)
result.append(dis_key_e_2)
return result
def shortest_path(arcs_ret, words, source, target, isGraph = False):
"""
求出两个词最短依存句法路径,不存在路径返回-1
arcs_ret:句法分析结果
source:实体1
target:实体2
"""
G = nx.DiGraph()
# 为这个网络添加节点...
for i in list(arcs_ret.index):
G.add_node(i)
# TODO:在网络中添加带权中的边...(注意,我们需要的是无向边)
for i in range(len(arcs_ret)):
head = arcs_ret.iloc[i, -2]
index = i + 1 # 从1开始
G.add_edge(index, head)
if isGraph:
nx.draw(G, with_labels=True)
# plt.savefig("undirected_graph_2.png")
plt.close()
try:
# TODO:利用nx包中shortest_path_length方法实现最短距离提取
source_index = words.index(source) + 1 #从1开始
target_index = words.index(target) + 1 #从1开始
distance = nx.shortest_path_length(G, source=source_index, target=target_index)
# print("'%s'与'%s'在依存句法分析图中的最短距离为: %s" % (source, target, distance))
return distance
except:
return -1
def get_feature(s):
"""
汇总上述函数汇总句法分析特征与TFIDF特征
"""
# TODO:汇总上述函数汇总句法分析特征与TFIDF特征
sen_feature = []
len_s = len(s)
for i in range(len_s):
f_e = parse(s[i], isGraph = False)
sen_feature.append(f_e)
sen_feature = np.array(sen_feature)
features = np.concatenate((X_tfidf, sen_feature), axis=1)
return features
import os
f_v_s_path = "./data/feature_vector.npy"
is_exist_f_v = os.path.exists(f_v_s_path)
corpus_1 = X['ner'].tolist()
len_train_data = len(train_data)
features = []
if not is_exist_f_v:
features = get_feature(corpus_1)
np.save(f_v_s_path, features)
else:
features = np.load(f_v_s_path)
features_train = features[:len_train_data, :]
features_train.shape
###Output
_____no_output_____
###Markdown
Exercise 5: Build a classifierUsing the already extracted tf-idf features and the parse features, build a classifier for the classification task.
###Code
# 建立分类器进行分类
from sklearn.ensemble import RandomForestClassifier
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import KFold
from sklearn.metrics import classification_report
from sklearn.naive_bayes import BernoulliNB
seed = 2019
y = train_data.loc[:, ['tag']]
y = np.array(y.values)
y = y.reshape(-1)
Xtrain, Xtest, ytrain, ytest = train_test_split(features_train, y, test_size=0.2, random_state=seed)
def logistic_class(Xtrain, Xtest, ytrain, ytest):
cross_validator = KFold(n_splits=5, shuffle=True, random_state=seed)
lr = LogisticRegression(penalty = "l1", solver='liblinear')
# params = {"penalty":["l1","l2"], "C":[0.1,1.0,10.0,20.0,30.0,100.0]}
params = {"C":[0.1,1.0,10.0,15.0,20.0,30.0,40.0,50.0]}
grid = GridSearchCV(estimator=lr, param_grid=params, cv=cross_validator)
grid.fit(Xtrain, ytrain)
print("最优参数为:",grid.best_params_)
model = grid.best_estimator_
y_pred = model.predict(Xtest)
y_test = [str(value) for value in ytest]
y_pred = [str(value) for value in y_pred]
# train_score = model.score(X_train, y_train)
# print("train_score", train_score)
# test_score = model.score(X_test, y_test)
# print("test_score", test_score)
# f1_score_value = f1_score(y_test, y_pred)
# print("F1-Score: {}".format(f1_score_value))
proba_value = model.predict_proba(Xtest)
p = proba_value[:, 1]
print("Logistic=========== ROC-AUC score: %.3f" % roc_auc_score(y_test, p))
report = classification_report(y_pred=y_pred,y_true=y_test)
print(report)
return model
# TODO:保存Test_data分类结果
# 答案提交在submit目录中,命名为info_extract_submit.csv和info_extract_entity.csv。
# info_extract_entity.csv格式为:第一列是实体编号,第二列是实体名(实体统一的多个实体名用“|”分隔)
# info_extract_submit.csv格式为:第一列是关系中实体1的编号,第二列为关系中实体2的编号。
s_model = logistic_class(Xtrain, Xtest, ytrain, ytest)
features_test = features[len_train_data:, :]
y_pred_test = s_model.predict(features_test)
l_X_test_ner = X_test.values.tolist()
entity_dict = {}
relation_list = []
for i, label in enumerate(y_pred_test):
if label == 1:
cur_ner_content = str(l_X_test_ner[i])
ner_list = list(set(re.findall(r'(ner\_\d\d\d\d\_)', cur_ner_content)))
if len(ner_list) == 2:
r_e_l = []
for i, ner in enumerate(ner_list):
split_list = str.split(ner, "_")
if len(split_list) == 3:
ner_id = int(split_list[1])
if ner_id in ner_dict_reverse_new:
if ner_id not in entity_dict:
company_main_name = ner_dict_reverse_new[ner_id]
if company_main_name in dict_entity_name_unify:
entity_dict[ner_id] = company_main_name + dict_entity_name_unify[company_main_name]
else:
entity_dict[ner_id] = company_main_name
r_e_l.append(ner_id)
if len(r_e_l) == 2:
relation_list.append(r_e_l)
entity_list = [[item[0], item[1]] for item in entity_dict.items()]
pd_enti = pd.DataFrame(np.array(entity_list), columns=['实体编号','实体名'])
pd_enti.to_csv("./data/info_extract_entity.csv",index=0, encoding='utf_8_sig')
pd_re = pd.DataFrame(np.array(relation_list), columns=['实体1','实体2'])
pd_re.to_csv("./data/info_extract_submit.csv",index=0,encoding='utf_8_sig')
entity = pd.read_csv('./data/info_extract_entity.csv', encoding='utf_8_sig', header=0)
entity.head()
relation = pd.read_csv('./data/info_extract_submit.csv', encoding='utf_8_sig', header=0)
relation.head()
###Output
_____no_output_____
###Markdown
Exercise 6: Working with the graph databaseRelations are best described with a graph, so a graph database is needed here; the most commonly used graph database at the moment is neo4j, and create/read/update/delete operations on it can be performed with Cypher statements (see "https://cuiqingcai.com/4778.html" for reference). In this assignment we use neo4j as the graph database; neo4j needs a Java environment, so please set that up first. Insert the extracted entity relations into the graph database and query the 3-level investment relations of some node, i.e. a path made up of three nodes (if one exists). If no 3-level investment relation can be found, query the investment path of an arbitrarily chosen node.
###Code
from py2neo import Node, Relationship, Graph
graph = Graph(
"http://localhost:7474",
username="neo4j",
password="666666"
)
for v in relation_list:
a = Node('Company', name=v[0])
b = Node('Company', name=v[1])
# 本次不区分投资方和被投资方,无向图
r = Relationship(a, 'INVEST', b)
s = a | b | r
graph.create(s)
r = Relationship(b, 'INVEST', a)
s = a | b | r
graph.create(s)
# TODO:查询某节点的3层投资关系
import random
result_2 = []
result_3 = []
for value in entity_list:
ner_id = value[0]
str_sql_3 = "match data=(na:Company{{name:'{0}'}})-[:INVEST]->(nb:Company)-[:INVEST]->(nc:Company) where na.name <> nc.name return data".format(str(ner_id))
result_3 = graph.run(str_sql_3).data()
if len(result_3) > 0:
break
if len(result_3) > 0:
print("step1")
print(result_3)
else:
print("step2")
random_index = random.randint(0, len(entity_list) - 1)
random_ner_id = entity_list[random_index][0]
str_sql_2 = "match data=(na:Company{{name:'{0}'}})-[*2]->(nb:Company) return data".format(str(random_ner_id))
result_2 = graph.run(str_sql_2).data()
print(result_2)
###Output
step2
[]
###Markdown
Step 4: Entity disambiguationHaving solved entity recognition and relation extraction, a large part of the work is done, but which entity in the knowledge base does each extracted entity actually correspond to? The name "苹果" (Apple) alone, for instance, corresponds to 13 entities with the same name. Entity disambiguation aims to resolve the name ambiguity that is widespread in text by matching the entities recognized in a sentence against the entities in the knowledge base. Exercise 7: Match the person entities in the first 25 samples of test_data.csv to their Baidu Baike URLs (all person names in these samples can be linked on Baidu Baike). Crawl Baidu Baike with Python packages such as scrapy, beautifulsoup or requests, determine whether a name has multiple senses, and if so choose the best-matching entity. The URL 'https://baike.baidu.com/item/' + person name opens the Baidu Baike entry for that name; from the crawled page you need to recognize whether the entry corresponds to multiple entities. If it does, return the URL of the correctly matched entity, e.g. 'https://baike.baidu.com/item/陆永/20793929' on the example page.- Submission file: entity_disambiguation_submit.csv- Submission format: the first column is the entity id (consistent with the ids in info_extract_submit.csv), the second column is the corresponding URL.- Example:| Entity id | URL || ------ | ------ || 1001 | https://baike.baidu.com/item/陆永/20793929 || 1002 | https://baike.baidu.com/item/王芳/567232 |
###Code
import jieba
import pandas as pd
# Find all person names in the first 25 samples of test_data.csv, along with the context around each person in its document
test_data = pd.read_csv('./data/info_extract/test_data.csv', encoding = 'gb2312', header=0)
# Store person and context information (key: person ID, value: TF-IDF vector of the person's surrounding context)
list_person_content = {}
# Window size of the context to look at on each side of an entity
window = 20
f = lambda x: ' '.join([word for word in segmentor.segment(x)])
corpus= test_data['sentence'].map(f).tolist()
vectorizer = TfidfVectorizer() # Define a TF-IDF vectorizer
X_tfidf = vectorizer.fit_transform(corpus).toarray() # Store the result in the matrix X_tfidf
# Iterate over the first 25 samples
for i in range(25):
sentence = str(copy(test_data.iloc[i, 1]))
len_sen = len(sentence)
words, ners = fool.analysis(sentence)
ners[0].sort(key=lambda x: x[0], reverse=True)
for start, end, ner_type, ner_name in ners[0]:
if ner_type == 'person':
            # TODO: extract the context around the entity
start_index = max(0, start - window)
end_index = min(len_sen - 1, end - 1 + window)
left_str = sentence[start_index:start]
right_str = sentence[end - 1:end_index]
left_str = ' '.join([word for word in segmentor.segment(left_str)])
right_str = ' '.join([word for word in segmentor.segment(right_str)])
new_str = left_str + " " +right_str
content_vec = vectorizer.transform([new_str])
ner_id = ner_dict_new[ner_name]
if ner_id not in list_person_content:
list_person_content[ner_id] = content_vec
# Use a crawler to get the URL corresponding to each person name
# TODO: fetch the encyclopedia entry content for each person entity.
from requests_html import HTMLSession
from requests_html import HTML
from sklearn.metrics.pairwise import cosine_similarity
from scipy.sparse import csr_matrix
import jieba
list_company_names = [company for value in entity_list for company in str.split(value[1], "|")]
list_person_url = []
url_prefix = "https://baike.baidu.com/item/"
url_error = "https://baike.baidu.com/error.html"
l_p_items = list(list_person_content.items())
len_items = len(l_p_items)
def get_para_vector(para_elems):
str_res = ""
for p_e in para_elems:
str_res += re.sub(r'(\r|\n)*', '', p_e.text)
str_res = ' '.join([word for word in jieba.cut(str_res)])
content_vec = vectorizer.transform([str_res])
content_vec = content_vec.toarray()[0]
return content_vec
for index in range(len_items):
value = l_p_items[index]
person_id = value[0]
vector_entity = csr_matrix(value[1])
person_name = ner_dict_reverse_new[person_id]
session = HTMLSession()
url = url_prefix + person_name
response = session.get(url)
url_list = []
if response.url != url_error:
para_elems = response.html.find('.para')
content_vec = get_para_vector(para_elems)
url_list.append([response.url, content_vec])
banks = response.html.find('.polysemantList-wrapper')
if len(banks) > 0:
banks_child = banks[0]
persion_links = list(banks_child.absolute_links)
for link in persion_links:
r_link = session.get(link)
if r_link.url == url_error:
continue
para_elems = r_link.html.find('.para')
content_vec = get_para_vector(para_elems)
url_list.append([r_link.url, content_vec])
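    # Pick the candidate page whose paragraph text is most similar
    # (cosine similarity) to the TF-IDF vector of this person's sentence context.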
vectorizer_list = [item[1] for item in url_list]
vectorizer_list = csr_matrix(vectorizer_list)
result = list(cosine_similarity(value[1], vectorizer_list)[0])
max_index = result.index(max(result))
list_person_url.append([person_id, person_name, url_list[max_index][0]])
print(list_person_url)
pd_re = pd.DataFrame(np.array(list_person_url), columns=['实体编号','名字','url'])
pd_re.to_csv("./data/entity_disambiguation_submit.csv",index=0,encoding='utf_8_sig')
###Output
[[1001, '李云峰', 'https://baike.baidu.com/item/%E6%9D%8E%E4%BA%91%E5%B3%B0/22102428#viewPageContent'], [1003, '侯毅', 'https://baike.baidu.com/item/%E4%BE%AF%E6%AF%85/12795458#viewPageContent'], [1005, '张德华', 'https://baike.baidu.com/item/%E5%BC%A0%E5%BE%B7%E5%8D%8E/7002694#viewPageContent'], [1007, '肖文革', 'https://baike.baidu.com/item/%E8%82%96%E6%96%87%E9%9D%A9/22791038#viewPageContent'], [1009, '熊海涛', 'https://baike.baidu.com/item/%E7%86%8A%E6%B5%B7%E6%B6%9B/10849366'], [1011, '宋琳', 'https://baike.baidu.com/item/%E5%AE%8B%E7%90%B3/16173836#viewPageContent'], [1014, '王友林', 'https://baike.baidu.com/item/%E7%8E%8B%E5%8F%8B%E6%9E%97/71412'], [1016, '彭聪', 'https://baike.baidu.com/item/%E5%BD%AD%E8%81%AA/19890127'], [1017, '曹飞', 'https://baike.baidu.com/item/%E6%9B%B9%E9%A3%9E/16542190#viewPageContent'], [1019, '颜军', 'https://baike.baidu.com/item/%E9%A2%9C%E5%86%9B/3476040'], [1021, '宋睿', 'https://baike.baidu.com/item/%E5%AE%8B%E7%9D%BF/2629451'], [1025, '邓冠华', 'https://baike.baidu.com/item/%E9%82%93%E5%86%A0%E5%8D%8E'], [1028, '孙锋峰', 'https://baike.baidu.com/item/%E5%AD%99%E9%94%8B%E5%B3%B0'], [1029, '林奇', 'https://baike.baidu.com/item/%E6%9E%97%E5%A5%87/53180'], [1031, '江斌', 'https://baike.baidu.com/item/%E6%B1%9F%E6%96%8C/24697329#viewPageContent'], [1033, '林海峰', 'https://baike.baidu.com/item/%E6%9E%97%E6%B5%B7%E5%B3%B0/10781910#viewPageContent'], [1036, '郭为', 'https://baike.baidu.com/item/%E9%83%AD%E4%B8%BA/77150'], [1038, '吴宏亮', 'https://baike.baidu.com/item/%E5%90%B4%E5%AE%8F%E4%BA%AE/1488540'], [1041, '王利平', 'https://baike.baidu.com/item/%E7%8E%8B%E5%88%A9%E5%B9%B3/3273251'], [1043, '周旭辉', 'https://baike.baidu.com/item/%E5%91%A8%E6%97%AD%E8%BE%89/9981261'], [1049, '吴艳', 'https://baike.baidu.com/item/%E5%90%B4%E8%89%B3/5951149']]
notebooks/Week3_variables_municipalities.ipynb | ###Markdown
Variables of households and population of Mexican Municipalities in 2020This Notebook uses the households and population dataframe of Mexican Municipalities (admin2) derived from the 2020 Mexican Census: [INEGI](https://inegi.org.mx/programas/ccpv/2020/Datos_abiertos).
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from string import ascii_letters
import numpy as np
%matplotlib inline
%reload_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Read Variables of households and population of Mexico in 2020
###Code
df = pd.read_parquet('../data/conjunto_de_datos_iter_00CSV20.parquet')
###Output
_____no_output_____
###Markdown
This query keeps only the rows holding the totals of each variable for each municipality, while the rest of the dataframe is ignored.
###Code
df.query("NOM_LOC == 'Total del Municipio'", inplace = True)
df
###Output
_____no_output_____
###Markdown
Using the data dictionary that comes with the dataset, the columns of interest are selected.
###Code
df=df[['ENTIDAD','NOM_ENT','MUN','NOM_MUN','PCON_DISC','PCON_LIMI','PCLIM_PMEN','PSIND_LIM','GRAPROES','PSINDER','PDER_SS','PROM_OCUP',
'TVIVPARHAB','VPH_SINTIC','POB0_14','POB15_64','POB65_MAS']].copy()
###Output
_____no_output_____
###Markdown
Based on the data dictionary, the columns are renamed with clearer names.
###Code
df.rename(columns = {'MUN':'municipality_number', 'NOM_MUN': 'municipalities','PCON_DISC': 'population_disability','PCON_LIMI': 'population_limitation',
'PCLIM_PMEN': 'population_mental_problem','PSIND_LIM':'population_no_problems','GRAPROES': 'average_years_finish', 'PSINDER': 'no_med_insurance',
'PDER_SS': 'med_insurance', 'PROM_OCUP': 'average_household_size','TVIVPARHAB': 'total_households','VPH_SINTIC': 'household_no_tics',
'POB0_14':'population_0_14_years_old','POB15_64':'population_15_64_years_old','POB65_MAS':'population_65_more_years_old'}, inplace=True)
###Output
_____no_output_____
###Markdown
To properly merge the dataframe from the week 1 analysis with the dataframe currently being analyzed, it is necessary to build the code that identifies each municipality together with its state of origin.
###Code
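# Build cve_ent: the state code followed by the 3-digit, zero-padded municipality
# number (e.g. ENTIDAD '7' and municipality 5 give '7005'); it is used as the merge key below.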
df['mun_num'] = df['municipality_number'].apply(lambda i: f'{i:03d}')
df['ENTIDAD'] = df['ENTIDAD'].astype(str)
df['cve_ent'] = df['ENTIDAD'] + df['mun_num']
###Output
_____no_output_____
###Markdown
It is also necessary to change the data types of the columns of interest to int and float, since these values will be normalized for further study.
###Code
df['population_disability'] = df['population_disability'].astype(int)
df['population_limitation'] = df['population_limitation'].astype(int)
df['population_mental_problem'] = df['population_mental_problem'].astype(int)
df['population_no_problems'] = df['population_no_problems'].astype(int)
df['average_years_finish'] = df['average_years_finish'].astype(float)
df['no_med_insurance'] = df['no_med_insurance'].astype(int)
df['med_insurance'] = df['med_insurance'].astype(int)
df['average_household_size'] = df['average_household_size'].astype(float)
df['total_households'] = df['total_households'].astype(int)
df['household_no_tics'] = df['household_no_tics'].astype(int)
df['population_0_14_years_old'] = df['population_0_14_years_old'].astype(int)
df['population_15_64_years_old'] = df['population_15_64_years_old'].astype(int)
df['population_65_more_years_old'] = df['population_65_more_years_old'].astype(int)
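# Note: an equivalent, more compact alternative would be a single astype call with a
# column-to-dtype mapping, e.g. df = df.astype({'population_disability': int, ..., 'average_years_finish': float});
# the explicit per-column form above is kept for readability.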
###Output
_____no_output_____
###Markdown
To obtain the total number of households that have TICs (information and communication technologies), the households without TICs are subtracted from the total number of households.
###Code
df['household_tics'] = df['total_households']-df['household_no_tics']
###Output
_____no_output_____
###Markdown
The week 1 analysis is read in.
###Code
dfWeek1 = pd.read_csv('../data/week1analyzesMunicipalities.csv')
###Output
_____no_output_____
###Markdown
The cve_ent column of the week 1 analysis is converted to a string for compatibility with the upcoming merge.
###Code
dfWeek1['cve_ent'] = dfWeek1['cve_ent'].astype('str')
dfWeek1.head()
###Output
_____no_output_____
###Markdown
The week 1 analysis and the latest dataframe are merged on the combined state and municipality code (cve_ent).
###Code
dfAll = pd.merge(df,dfWeek1,on=['cve_ent'])
dfAll.head()
###Output
_____no_output_____
###Markdown
Once the dataframes are merged, only the columns that can be normalized are kept. The selected counts are then normalized by the total population or total households of each municipality, i.e. converted to the percentage of people or households with each variable of interest.
###Code
dfAll = dfAll[['cve_ent','municipality','population','population_disability', 'population_limitation',
'population_mental_problem','population_no_problems', 'average_years_finish', 'no_med_insurance',
'med_insurance', 'average_household_size', 'case_rate',
'case_rate_last_60_days', 'death_rate',
'death_rate_last_60_days','total_households','household_tics','household_no_tics',
'population_0_14_years_old','population_15_64_years_old','population_65_more_years_old']].copy()
dfAll['pct_disability']=dfAll['population_disability']/dfAll['population']*100
dfAll['pct_limitation']=dfAll['population_limitation']/dfAll['population']*100
dfAll['pct_mental_problem']=dfAll['population_mental_problem']/dfAll['population']*100
dfAll['pct_no_problems']=dfAll['population_no_problems']/dfAll['population']*100
dfAll['pct_no_med_insurance']=dfAll['no_med_insurance']/dfAll['population']*100
dfAll['pct_med_insurance']=dfAll['med_insurance']/dfAll['population']*100
dfAll['pct_household_tics']=dfAll['household_tics']/dfAll['total_households']*100
dfAll['pct_household_no_tics']=dfAll['household_no_tics']/dfAll['total_households']*100
dfAll['pct_pop_0_14_years_old']=dfAll['population_0_14_years_old']/dfAll['population']*100
dfAll['pct_pop_15_64_years_old']=dfAll['population_15_64_years_old']/dfAll['population']*100
dfAll['pct_pop_65_more_years_old']=dfAll['population_65_more_years_old']/dfAll['population']*100
###Output
_____no_output_____
###Markdown
Finally, the variables and the region codes are selected from the dataframe for storage.
###Code
dfFinal = dfAll[['cve_ent','municipality','case_rate','case_rate_last_60_days', 'death_rate',
'death_rate_last_60_days','population','pct_disability',
'pct_limitation','pct_mental_problem', 'pct_no_problems' ,'average_years_finish',
'pct_no_med_insurance','pct_med_insurance', 'average_household_size',
'pct_household_tics','pct_household_no_tics','pct_pop_0_14_years_old',
'pct_pop_15_64_years_old','pct_pop_65_more_years_old']].copy()
dfFinal.head()
###Output
_____no_output_____
###Markdown
The dataframe is stored
###Code
dfFinal.to_csv('../data/week3_variables_municipalities.csv',index=False)
###Output
_____no_output_____ |
notebooks/Math-appendix/Numerical Optimization/Zero Order Optimization Methods/Coordinate Search.ipynb | ###Markdown
Coordinate SearchThe coordinate search and descent algorithms are additional zero order local methods that get around the inherent scaling issues of random local search by restricting the set of search directions to the coordinate axes of the input space. With coordinate wise algorithms we attempt to minimize such a function with respect to one coordinate or weight at a time - or more generally one subset of coordinates or weights at a time - keeping all others fixed.
###Code
import numpy as np
def coordinate_search(g, alpha_choice, max_iter, w):
"Coordinate Search function."
# Construct set of all coordinate directions
directions_positive = np.eye(np.size(w), np.size(w))
directions_negative = -np.eye(np.size(w), np.size(w))
directions = np.concatenate((directions_positive, directions_negative), axis=0)
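    # For example, with a 2-D w this direction set is the four axis directions:
    # [[ 1,  0], [ 0,  1], [-1,  0], [ 0, -1]]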
# Run coordinate search
weight_history = []
cost_history = []
alpha = 0
for k in range(1, max_iter + 1):
# check if diminishing steplength rule is used
alpha = 1 / float(k) if (alpha_choice == 'diminishing') else alpha_choice
# Record weights and cost evaluation
weight_history.append(w)
cost_history.append(g(w))
# --- Pick the best descent direction ---
# Compute all new candidate points
w_candidates = w + alpha * directions
        # Evaluate all new candidates
evals = np.array([g(w_val) for w_val in w_candidates])
# If we find a real descent direction, take step in its direction
ind = np.argmin(evals)
if evals[ind] < g(w):
d = directions[ind, :] # Grab best descent direction
w = w + alpha * d # Take step
# Record weights and cost evaluation
weight_history.append(w)
cost_history.append(g(w))
return weight_history, cost_history
###Output
_____no_output_____
###Markdown
Run coordinate search
###Code
# Define function
g = lambda w: np.dot(w.T,w) + 2
# Run coordinate search algorithm
alpha_choice = 1
w = np.array([3,4])
max_its = 7
weight_history, cost_history = coordinate_search(g,alpha_choice,max_its,w)
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# Seaborn Plot Styling
sns.set(style="white", palette="husl")
sns.set_context("poster")
sns.set_style("ticks")
w0 = list(map(lambda weights: weights[0], weight_history))
w1 = list(map(lambda weights: weights[1], weight_history))
plt.plot(w0)
plt.plot(w1)
plt.plot(cost_history)
###Output
_____no_output_____
###Markdown
--- Zero Order Coordinate DescentA slight twist on the coordinate search produces a much more effective algorithm at precisely the same computational cost. Instead of collecting each coordinate direction (along with its negative), and then choosing a single best direction from this entire set, we can simply examine one coordinate direction (and its negative) at a time and step in this direction if it produces descent.
###Code
def coordinate_descent_zero_order(g, alpha_choice, max_iter, w):
"Zero order coordinate descent function"
N = np.size(w)
weight_history = []
cost_history = []
alpha = 0
for k in range(1, max_iter + 1):
# check if diminishing steplength rule is used
alpha = 1 / float(k) if (alpha_choice == 'diminishing') else alpha_choice
# Random shuffle of coordinates
c = np.random.permutation(N)
# Form the direction matrix out of the loop
cost = g(w)
# Loop over each coordinate direction
for n in range(N):
direction = np.zeros((N, 1)).flatten()
direction[c[n]] = 1
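            # 'direction' is now the unit vector along the randomly chosen coordinate c[n]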
# Record weights and cost evaluation
weight_history.append(w)
cost_history.append(cost)
# Evaluate all candidates
evals = [g(w + alpha * direction)]
evals.append(g(w - alpha * direction))
evals = np.array(evals)
# If we find a real descent direction, take step in its direction
ind = np.argmin(evals)
if evals[ind] < cost_history[-1]:
# Take step
w = w + ((-1) ** (ind)) * alpha * direction
cost = evals[ind]
# record weights and cost evaluation
weight_history.append(w)
cost_history.append(g(w))
return weight_history,cost_history
# define function
g = lambda w: 0.26*(w[0]**2 + w[1]**2) - 0.48*w[0]*w[1]
# run coordinate descent algorithm
alpha_choice = 'diminishing'
w = np.array([3,4])
max_its = 40
weight_history, cost_history = coordinate_descent_zero_order(g,alpha_choice,max_its,w)
w0 = list(map(lambda weights: weights[0], weight_history))
w1 = list(map(lambda weights: weights[1], weight_history))
plt.plot(w0)
plt.plot(w1)
plt.plot(cost_history)
###Output
_____no_output_____ |
2019-01-10-ESRF/notebooks/01.2.d3.ipynb | ###Markdown
Example from http://bl.ocks.org/mbostock/4060366
###Code
%%javascript
var s = document.createElement("style");
s.innerHTML = `
path {
stroke: #fff;
}
path:first-child {
fill: yellow !important;
}
circle {
fill: #000;
pointer-events: none;
}
.q0-9 { fill: rgb(197,27,125); }
.q1-9 { fill: rgb(222,119,174); }
.q2-9 { fill: rgb(241,182,218); }
.q3-9 { fill: rgb(253,224,239); }
.q4-9 { fill: rgb(247,247,247); }
.q5-9 { fill: rgb(230,245,208); }
.q6-9 { fill: rgb(184,225,134); }
.q7-9 { fill: rgb(127,188,65); }
.q8-9 { fill: rgb(77,146,33); }`;
document.getElementsByTagName("head")[0].appendChild(s);
import ipywidgets as widgets
from traitlets import Unicode, Int, List

class MyD3(widgets.DOMWidget):
_view_name = Unicode('HelloView').tag(sync=True)
_view_module = Unicode('myd3').tag(sync=True)
width = Int().tag(sync=True)
height = Int().tag(sync=True)
vertices = List().tag(sync=True)
%%javascript
require.undef('myd3');
define('myd3', ["@jupyter-widgets/base",
"https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.17/d3.js"], function(widgets, d3) {
var HelloView = widgets.DOMWidgetView.extend({
render: function() {
var that = this;
this.width = this.model.get('width');
this.height = this.model.get('height');
that.vertices = this.model.get('vertices');
that.voronoi = d3.geom.voronoi()
.clipExtent([[0, 0], [that.width, that.height]]);
this.svg = d3.select(this.el).append("svg")
.attr("width", that.width)
.attr("height", that.height)
.on("mousemove", function() {
that.vertices[0] = d3.mouse(this);
that.redraw();
});
var g1 = this.svg.append("g");
this.path = g1.selectAll("path");
var g2 = this.svg.append("g");
this.circle = g2.selectAll("circle");
this.model.on('change:vertices', this.update_vertices, this);
this.redraw();
},
update_vertices: function() {
this.redraw();
},
redraw: function () {
this.vertices = this.model.get('vertices');
this.path = this.path
.data(this.voronoi(this.vertices), this.polygon);
this.path.exit().remove();
this.path.enter().append("path")
.attr("class", function(d, i) { return "q" + (i % 9) + "-9"; })
.attr("d", this.polygon);
this.path.order();
this.circle = this.circle
.data([]);
this.circle.exit().remove();
this.circle = this.circle
.data(this.vertices.slice(1));
this.circle.enter().append("circle")
.attr("transform", function(d) {
return "translate(" + d + ")";
})
.attr("r", 1.5);
},
polygon: function (d) {
return "M" + d.join("L") + "Z";
}
});
return {
HelloView : HelloView
};
});
import numpy as np
sample_size = 100
width = 750
height = 300
m = MyD3(vertices=(np.random.rand(sample_size, 2) * np.array([width, height])).tolist(),
height=height, width=width)
m
m.vertices = (np.random.rand(sample_size, 2) * np.array([width, height])).tolist()
###Output
_____no_output_____ |
my-submission/first-neural-network/Your_first_neural_network.ipynb | ###Markdown
Your first neural networkIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load and prepare the dataA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
###Code
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
###Output
_____no_output_____
###Markdown
Checking out the dataThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
###Code
rides[:24*10].plot(x='dteday', y='cnt')
###Output
_____no_output_____
###Markdown
Dummy variablesHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
###Code
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
###Output
_____no_output_____
###Markdown
Scaling target variablesTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.The scaling factors are saved so we can go backwards when we use the network for predictions.
###Code
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
###Output
_____no_output_____
###Markdown
Splitting the data into training, testing, and validation setsWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
###Code
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
###Output
_____no_output_____
###Markdown
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
###Code
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
###Output
_____no_output_____
###Markdown
Time to build the networkBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.Below, you have these tasks:1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.2. Implement the forward pass in the `train` method.3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.4. Implement the forward pass in the `run` method.
###Code
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y-Y)**2)
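# A minimal sketch of the activations described above (illustrative only; the
# graded implementation belongs in my_answers.py): the hidden layer uses a
# sigmoid, while the output activation f(x) = x has derivative 1.
sigmoid = lambda x: 1 / (1 + np.exp(-x))                 # hidden-layer activation
sigmoid_prime = lambda x: sigmoid(x) * (1 - sigmoid(x))  # its derivative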
###Output
_____no_output_____
###Markdown
Unit testsRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
###Code
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
###Output
.....
----------------------------------------------------------------------
Ran 5 tests in 0.017s
OK
###Markdown
Training the networkHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterationsThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing. Choose the learning rateThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodesIn a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
###Code
import sys
####################
### Set the hyperparameters in you myanswers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
#print("N_i is ", N_i)
#iterations = 6000
#hidden_nodes = 40
#learning_rate = 0.3
#print("iterations is ", iterations)
#print("hidden_nodes is ", hidden_nodes)
#print("learning_rate is ", learning_rate)
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
Check out your predictionsHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
###Code
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
###Output
C:\Users\ikc\Miniconda3\envs\dlnd\lib\site-packages\ipykernel_launcher.py:10: DeprecationWarning:
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing
See the documentation here:
http://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated
# Remove the CWD from sys.path while we load stuff.
ART (1).ipynb | ###Markdown
ART 1
###Code
import math
import sys
VIGILANCE = 0.4
PATTERNS = 7
N = 4
M = 3
TRAINING_PATTERNS = 4
PATTERN_ARRAY = [[1, 1, 0, 0],
[0, 0, 0, 1],
[1, 0, 0, 0],
[0, 0, 1, 1],
[0, 1, 0, 0],
[0, 0, 1, 0],
[1, 0, 1, 0]]
class ART:
def __init__(self, inputSize, numClusters, vigilance, numPatterns, numTraining, patternArray):
self.mInputSize = inputSize
self.mNumClusters = numClusters
self.mVigilance = vigilance
self.mNumPatterns = numPatterns
self.mNumTraining = numTraining
self.mPatterns = patternArray
self.bw = [] # Bottom-up weights.
self.tw = [] # Top-down weights.
self.f1a = [] # Input layer.
self.f1b = [] # Interface layer.
self.f2 = []
return
def initialize_arrays(self):
# Initialize bottom-up weight matrix.
sys.stdout.write("Weights initialized to:")
for i in range(self.mNumClusters):
self.bw.append([0.0] * self.mInputSize)
for j in range(self.mInputSize):
self.bw[i][j] = 1.0 / (1.0 + self.mInputSize)
sys.stdout.write(str(self.bw[i][j]) + ", ")
sys.stdout.write("\n")
sys.stdout.write("\n")
# Initialize top-down weight matrix.
for i in range(self.mNumClusters):
self.tw.append([0.0] * self.mInputSize)
for j in range(self.mInputSize):
self.tw[i][j] = 1.0
sys.stdout.write(str(self.tw[i][j]) + ", ")
sys.stdout.write("\n")
sys.stdout.write("\n")
self.f1a = [0.0] * self.mInputSize
self.f1b = [0.0] * self.mInputSize
self.f2 = [0.0] * self.mNumClusters
return
def get_vector_sum(self, nodeArray):
total = 0
length = len(nodeArray)
for i in range(length):
total += nodeArray[i]
return total
def get_maximum(self, nodeArray):
        maximum = 0
        foundNewMaximum = False
length = len(nodeArray)
done = False
while not done:
foundNewMaximum = False
for i in range(length):
if i != maximum:
if nodeArray[i] > nodeArray[maximum]:
maximum = i
foundNewMaximum = True
if foundNewMaximum == False:
done = True
return maximum
def test_for_reset(self, activationSum, inputSum, f2Max):
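        # Accept the winning cluster only if the match ratio
        # (activationSum / inputSum) meets the vigilance threshold.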
doReset = False
if(float(activationSum) / float(inputSum) >= self.mVigilance):
doReset = False # Candidate is accepted.
else:
self.f2[f2Max] = -1.0 # Inhibit.
doReset = True # Candidate is rejected.
return doReset
def update_weights(self, activationSum, f2Max):
# Update bw(f2Max)
for i in range(self.mInputSize):
self.bw[f2Max][i] = (2.0 * float(self.f1b[i])) / (1.0 + float(activationSum))
for i in range(self.mNumClusters):
for j in range(self.mInputSize):
sys.stdout.write(str(self.bw[i][j]) + ", ")
sys.stdout.write("\n")
sys.stdout.write("\n")
# Update tw(f2Max)
for i in range(self.mInputSize):
self.tw[f2Max][i] = self.f1b[i]
for i in range(self.mNumClusters):
for j in range(self.mInputSize):
sys.stdout.write(str(self.tw[i][j]) + ", ")
sys.stdout.write("\n")
sys.stdout.write("\n")
return
def ART(self):
inputSum = 0
activationSum = 0
f2Max = 0
reset = True
sys.stdout.write("Begin ART:\n")
for k in range(self.mNumPatterns):
sys.stdout.write("Vector: " + str(k) + "\n\n")
# Initialize f2 layer activations to 0.0
for i in range(self.mNumClusters):
self.f2[i] = 0.0
# Input pattern() to f1 layer.
for i in range(self.mInputSize):
self.f1a[i] = self.mPatterns[k][i]
# Compute sum of input pattern.
inputSum = self.get_vector_sum(self.f1a)
sys.stdout.write("InputSum (si) = " + str(inputSum) + "\n\n")
# Compute activations for each node in the f1 layer.
            # Send input signal from f1a to the f1b layer.
for i in range(self.mInputSize):
self.f1b[i] = self.f1a[i]
# Compute net input for each node in the f2 layer.
for i in range(self.mNumClusters):
for j in range(self.mInputSize):
self.f2[i] += self.bw[i][j] * float(self.f1a[j])
sys.stdout.write(str(self.f2[i]) + ", ")
sys.stdout.write("\n")
sys.stdout.write("\n")
reset = True
while reset == True:
# Determine the largest value of the f2 nodes.
f2Max = self.get_maximum(self.f2)
# Recompute the f1a to f1b activations (perform AND function)
for i in range(self.mInputSize):
sys.stdout.write(str(self.f1b[i]) + " * " + str(self.tw[f2Max][i]) + " = " + str(self.f1b[i] * self.tw[f2Max][i]) + "\n")
self.f1b[i] = self.f1a[i] * math.floor(self.tw[f2Max][i])
# Compute sum of input pattern.
activationSum = self.get_vector_sum(self.f1b)
sys.stdout.write("ActivationSum (x(i)) = " + str(activationSum) + "\n\n")
reset = self.test_for_reset(activationSum, inputSum, f2Max)
# Only use number of TRAINING_PATTERNS for training, the rest are tests.
if k < self.mNumTraining:
self.update_weights(activationSum, f2Max)
sys.stdout.write("Vector #" + str(k) + " belongs to cluster #" + str(f2Max) + "\n\n")
return
def print_results(self):
sys.stdout.write("Final weight values:\n")
for i in range(self.mNumClusters):
for j in range(self.mInputSize):
sys.stdout.write(str(self.bw[i][j]) + ", ")
sys.stdout.write("\n")
sys.stdout.write("\n")
for i in range(self.mNumClusters):
for j in range(self.mInputSize):
sys.stdout.write(str(self.tw[i][j]) + ", ")
sys.stdout.write("\n")
sys.stdout.write("\n")
return
if __name__ == '__main__':
art = ART(N, M, VIGILANCE, PATTERNS, TRAINING_PATTERNS, PATTERN_ARRAY)
art.initialize_arrays()
art.ART()
art.print_results()
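# ----------------------------------------------------------------------
# ART2: a variant of ART for continuous-valued input patterns, defined below.
# ----------------------------------------------------------------------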
import numpy as np
from numpy.linalg import norm
# Logging conf
import logging
import sys
class ART2(object):
def __init__(self, n=5, m=3, rho=0.9, theta=None):
"""
Create ART2 network with specified shape
For Input array I of size n, we need n input nodes in F1.
Parameters:
-----------
n : int
feature dimension of input; number of nodes in F1
m : int
Number of neurons in F2 competition layer
max number of categories
compare to n_class
rho : float
Vigilance parameter
larger rho: less inclusive prototypes
smaller rho: more generalization
theta :
            Suppression parameter
L : float
Learning parameter: # TODO
internal parameters
----------
Bij: array of shape (m x n)
Feed-Forward weights
Tji: array of shape (n x m)
Feed-back weights
"""
self.input_size = n
self.output_size = m
"""init layers
F0 --> F1 --> F2
S --> X --> Y
"""
# F2
self.yj = np.zeros(self.output_size)
self.active_cluster_units = []
# F1
self.xi = np.zeros(self.input_size)
# F0
self.si = np.zeros(self.input_size)
"""init parameters"""
self.params = {}
# a,b fixed weights in F1 layer; should not be zero
self.params['a'] = 10
self.params['b'] = 10
# c fixed weight used in testing for reset
self.params['c'] = 0.1
# d activation of winning F2 unit
self.params['d'] = 0.9
# c*d / (1-d) must be less than or equal to one
# as ratio --> 1 for greater vigilance
self.params['e'] = 0.00001
# small param to prevent division by zero
# self.L = 2
# rho : vigilance parameter
self.rho = rho
# theta: noise suppression parameter
# e.g. theta = 1 / sqrt(n)
if theta is None:
self.theta = 1 / np.sqrt(self.input_size)
else:
self.theta = theta
# alpha: learning rate. Small value : slower learning,
# but also more likely to reach equilibrium in slow
# learning mode
self.alpha = 0.6
"""init weights"""
# Bij initially (7.0, 7.0) for each cluster unit
self.Bij = np.ones((n, m)) * 5.0
# Tji initially 0
self.Tji = np.zeros((m, n))
"""init other activations"""
self.ui = np.zeros(self.input_size)
self.vi = None
"""Other helpers"""
self.log = None
    def compute(self, all_data, n_epochs=1):
"""Process and learn from all data
Step 1
fast learning: repeat this step until placement of
patterns on cluster units does not change
from one epoch to the next
"""
        for iepoch in range(n_epochs):
self._training_epoch(all_data)
# test stopping condition for n_epochs
return True
def learning_trial(self, idata):
"""
Step 3-11
idata is a single row of input
A learning trial consists of one presentation of one input pattern.
V and P will reach equilibrium after two updates of F1
"""
self.log.info("Starting Learning Trial.")
self.log.debug("input pattern: {}".format(idata))
self.log.debug("theta: {}".format(self.theta))
# at beginning of learning trial, set all activations to zero
self._zero_activations()
self.si = idata
# TODO: Should this be here?
# Update F1 activations, no candidate cluster unit
self._update_F1_activation()
# Update F1 activations again
self._update_F1_activation()
"""
After F1 activations achieve equilibrium
TMP: Assume only two F1 updates needed for now
Then proceed feed-forward to F2
"""
# TODO: instead check if ui or pi will change significantly
# now P units send signals to F2 layer
self.yj = np.dot(self.Bij.T, self.pi)
J = self._select_candidate_cluster_unit()
"""step 8 (resonance)
reset cannot occur during resonance
new winning unit (J) cannot be chosen during resonance
"""
if len(self.active_cluster_units) == 0:
self._update_weights_first_pattern(J)
else:
self._resonance_learning(J)
# add J to active list
if J not in self.active_cluster_units:
self.active_cluster_units.append(J)
return True
def _training_epoch(self, all_data):
# initialize parameters and weights
pass # done in __init__
for idata in all_data:
self.si = idata # input vector F0
            self.learning_trial(idata)
return True
def _select_candidate_cluster_unit(self):
""" RESET LOOP
        This loop selects an appropriate candidate cluster unit for learning
- Each iteration selects a candidate unit.
- Iterations continue until reset condition is met (reset is False)
- if a candidate unit does not satisfy, it is inhibited and can not be
selected again in this presentation of the input pattern.
No learning occurs in this phase.
returns:
J, the index of the selected cluster unit
"""
self.reset = True
while self.reset:
self.log.info("candidate selection loop iter start")
# check reset
# Select best active candidate
# ... largest element of Y that is not inhibited
J = np.argmax(self.yj) # J current candidate, not same as index jj
self.log.debug("\tyj: {}".format(self.yj))
self.log.debug("\tpicking J = {}".format(J))
# Test stopping condition here
# (check reset)
e = self.params['e']
# confirm candidate: inhibit or proceed
if (self.vi == 0).all():
self.ui = np.zeros(self.input_size)
else:
self.ui = self.vi / (e + norm(self.vi))
# pi =
# calculate ri (reset node)
c = self.params['c']
term1 = norm(self.ui + c*self.ui)
term2 = norm(self.ui) + c*norm(self.ui)
self.ri = term1 / term2
if self.ri >= (self.rho - e):
self.log.info("\tReset is False: Candidate is good.")
# Reset condition satisfied: cluster unit may learn
self.reset = False
# finish updating F1 activations
self._update_F1_activation()
# TODO: this will update ui twice. Confirm ok
elif self.ri < (self.rho - e):
self.reset = True
self.log.info("\treset is True")
self.yj[J] = -1.0
# break inf loop manually
# self.log.warn("EXIT RESET LOOP MANUALLY")
# self.reset = False
return J
def _resonance_learning(self, J, n_iter=20):
"""
Learn on confirmed candidate
In slow learning, only one update of weights in this trial
n_learning_iterations = 1
we then present the next input pattern
In fast learning, present input again (same learning trial)
- until weights reach equilibrium for this trial
- presentation is: "weight-update-F1-update"
"""
self.log.info("Entering Resonance phase with J = {}".format(J))
for ilearn in range(n_iter):
self.log.info("learning iter start")
self._update_weights(J)
# in slow learning, this step not required?
D = np.ones(self.output_size)
self._update_F1_activation(J, D)
# test stopping condition for weight updates
# if change in weights was below some tolerance
return True
def _update_weights_first_pattern(self, J):
"""Equilibrium weights for the first pattern presented
converge to these values. This shortcut can save many
iterations.
"""
self.log.info("Weight update using first-pattern shortcut")
# Fast learning first pattern simplification
d = self.params['d']
self.Tji[J, :] = self.ui / (1 - d)
self.Bij[:, J] = self.ui / (1 - d)
# log
self.log.debug("Tji[J]: {}".format(self.Tji[J, :]))
self.log.debug("Bij[J]: {}".format(self.Bij[:, J]))
return
def _update_weights(self, J):
"""update weights
for Tji and Bij
"""
self.log.info("Updating Weights")
# get useful terms
alpha = self.alpha
d = self.params['d']
term1 = alpha*d*self.ui
term2 = (1 + alpha*d*(d - 1))
self.Tji[J, :] = term1 + term2*self.Tji[J, :]
self.Bij[:, J] = term1 + term2*self.Bij[:, J]
# log
self.log.debug("Tji[J]: {}".format(self.Tji[J, :]))
self.log.debug("Bij[J]: {}".format(self.Bij[:, J]))
return
def _update_F1_activation(self, J=None, D=None):
"""
if winning unit has been selected
J is winning cluster unit
D is F2 activation
else if no winning unit selected
J is None
D is zero vector
"""
# Checks
# self.log.warn("Warning: Skipping J xor D check!")
# if (J is None) ^ (D is None):
# raise Exception("Must provide both J and D, or neither.")
msg = "Updating F1 activations"
if J is not None:
msg = msg + " with J = {}".format(J)
self.log.info(msg)
a = self.params['a']
b = self.params['b']
d = self.params['d']
e = self.params['e']
# compute activation of Unit Ui
# - activation of Vi normalized to unit length
if self.vi is None:
self.ui = np.zeros(self.input_size)
else:
self.ui = self.vi / (e + norm(self.vi))
# signal sent from each unit Ui to associated Wi and Pi
# compute activation of Wi
self.wi = self.si + a * self.ui
# compute activation of pi
# WRONG: self.pi = self.ui + np.dot(self.yj, self.Tji)
if J is not None:
self.pi = self.ui + d * self.Tji[J, :]
else:
self.pi = self.ui
# TODO: consider RESET here
# compute activation of Xi
# self.xi = self._thresh(self.wi / norm(self.wi))
self.xi = self.wi / (e + norm(self.wi))
# compute activation of Qi
# self.qi = self._thresh(self.pi / (e + norm(self.pi)))
self.qi = self.pi / (e + norm(self.pi))
# send signal to Vi
self.vi = self._thresh(self.xi) + b * self._thresh(self.qi)
self._log_values()
return True
"""Helper methods"""
def _zero_activations(self):
"""Set activations to zero
common operation, e.g. beginning of a learning trial
"""
self.log.debug("zero'ing activations")
self.si = np.zeros(self.input_size)
self.ui = np.zeros(self.input_size)
self.vi = np.zeros(self.input_size)
return
def _thresh(self, vec):
"""
This function treats any signal that is less than theta
as noise and suppresses it (sets it to zero). The value
of the parameter theta is specified by the user.
"""
assert isinstance(vec, np.ndarray), "type check"
cpy = vec.copy()
cpy[cpy < self.theta] = 0
return cpy
def _clean_input_pattern(self, idata):
assert len(idata) == self.input_size, "size check"
assert isinstance(idata, np.ndarray), "type check"
return idata
"""Logging Functions"""
def stop_logging(self):
"""Logging stuff
closes filehandlers and stuff
"""
self.log.info('Stop Logging.')
handlers = self.log.handlers[:]
for handler in handlers:
handler.close()
self.log.removeHandler(handler)
self.log = None
def start_logging(self, to_file=True, to_console=True):
"""Logging!
init logging handlers and stuff
to_file and to_console are booleans
# TODO: accept logging level
"""
# remove any existing logger
if self.log is not None:
self.stop_logging()
self.log = None
# Create logger and configure
self.log = logging.getLogger('ann.art.art2')
self.log.setLevel(logging.DEBUG)
self.log.propagate = False
formatter = logging.Formatter(
fmt='%(levelname)8s:%(message)s'
)
# add file logging
if to_file:
fh = logging.FileHandler(
filename='ART_LOG.log',
mode='w',
)
fh.setFormatter(formatter)
fh.setLevel(logging.WARN)
self.log.addHandler(fh)
# create console handler with a lower log level for debugging
if to_console:
ch = logging.StreamHandler(sys.stdout)
ch.setFormatter(formatter)
ch.setLevel(logging.DEBUG)
self.log.addHandler(ch)
self.log.info('Start Logging')
def getlogger(self):
"""Logging stuff
"""
return self.log
def _log_values(self, J=None):
"""Logging stuff
convenience function
"""
self.log.debug("\t--- debug values --- ")
self.log.debug("\tui : {}".format(self.ui))
self.log.debug("\twi : {}".format(self.wi))
self.log.debug("\tpi : {}".format(self.pi))
self.log.debug("\txi : {}".format(self.xi))
self.log.debug("\tqi : {}".format(self.qi))
self.log.debug("\tvi : {}".format(self.vi))
if J is not None:
self.log.debug("\tWeights with J = {}".format(J))
self.log.debug("\tBij: {}".format(self.bij[:, J]))
self.log.debug("\tTji: {}".format(self.tji[J, :]))
###Output
_____no_output_____ |
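###Markdown
The ART2 class above is defined but never instantiated in this notebook. The cell below is a minimal, hypothetical usage sketch: the two input vectors are made up purely for illustration, and the parameter values simply reuse the class defaults.
###Code
# Illustrative only: present two made-up 5-dimensional patterns to ART2.
net = ART2(n=5, m=3, rho=0.9)
net.start_logging(to_file=False, to_console=False)  # keep the trial runs quiet
for pattern in [np.array([0.2, 0.7, 0.1, 0.5, 0.4]),
                np.array([0.8, 0.1, 0.6, 0.0, 0.3])]:
    net.learning_trial(pattern)
# net.active_cluster_units now lists the F2 units that have learned a pattern
net.stop_logging()
###Output
_____no_output_____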
docs/demos/cem.ipynb | ###Markdown
Coastline Evolution Model* Link to this notebook: https://github.com/csdms/pymt/blob/master/docs/demos/cem.ipynb* Install command: `$ conda install notebook pymt_cem`* Download local copy of notebook: `$ curl -O https://raw.githubusercontent.com/csdms/pymt/master/docs/demos/cem.ipynb`This example explores how to use a BMI implementation using the CEM model as an example. Links* [CEM source code](https://github.com/csdms/cem-old/tree/mcflugen/add-function-pointers): Look at the files that have *deltas* in their name.* [CEM description on CSDMS](http://csdms.colorado.edu/wiki/Model_help:CEM): Detailed information on the CEM model. Interacting with the Coastline Evolution Model BMI using Python Some magic that allows us to view images within the notebook.
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Import the `Cem` class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
###Code
import pymt.models
cem = pymt.models.Cem()
###Output
[33;01m➡ models: Cem, Waves[39;49;00m
###Markdown
Even though we can't run our model yet, we can still get some information about it. *Just don't try to run it.* One thing we can do with our model is get the names of its input and output variables.
###Code
cem.output_var_names
cem.input_var_names
###Output
_____no_output_____
###Markdown
We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use [CSDMS standard names](http://csdms.colorado.edu/wiki/CSDMS_Standard_Names). The CSDMS Standard Name for wave angle is "sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity"Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually just a scalar, not a true grid).
###Code
angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'
print("Data type: %s" % cem.get_var_type(angle_name))
print("Units: %s" % cem.get_var_units(angle_name))
print("Grid id: %d" % cem.get_var_grid(angle_name))
print("Number of elements in grid: %d" % cem.get_grid_number_of_nodes(0))
print("Type of grid: %s" % cem.get_grid_type(0))
###Output
Data type: float64
Units: radians
Grid id: 0
Number of elements in grid: 1
Type of grid: scalar
###Markdown
OK. We're finally ready to run the model. Well, not quite. First we initialize the model with the BMI **initialize** method. Normally we would pass it a string that represents the name of an input file. For this example we'll use the **setup** method to generate a set of default input files (with the grid dimensions given below) and pass those on to **initialize**.
###Code
args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.)
cem.initialize(*args)
###Output
_____no_output_____
###Markdown
Before running the model, let's set a couple input parameters. These two parameters represent the wave height and wave period of the incoming waves to the coastline.
###Code
import numpy as np
cem.set_value("sea_surface_water_wave__height", 2.)
cem.set_value("sea_surface_water_wave__period", 7.)
cem.set_value("sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity", 0. * np.pi / 180.)
###Output
_____no_output_____
###Markdown
The main output variable for this model is *water depth*. In this case, the CSDMS Standard Name is much shorter: "sea_water__depth"First we find out which of Cem's grids contains water depth.
###Code
grid_id = cem.get_var_grid('sea_water__depth')
###Output
_____no_output_____
###Markdown
With the *grid_id*, we can now get information about the grid. For instance, the number of dimensions and the type of grid (structured, unstructured, etc.). This grid happens to be *uniform rectilinear*. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but instead are scalars.
###Code
grid_type = cem.get_grid_type(grid_id)
grid_rank = cem.get_grid_ndim(grid_id)
print('Type of grid: %s (%dD)' % (grid_type, grid_rank))
###Output
Type of grid: uniform_rectilinear (2D)
###Markdown
Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include:* get_grid_shape* get_grid_spacing* get_grid_origin
###Code
spacing = np.empty((grid_rank, ), dtype=float)
shape = cem.get_grid_shape(grid_id)
cem.get_grid_spacing(grid_id, out=spacing)
print('The grid has %d rows and %d columns' % (shape[0], shape[1]))
print('The spacing between rows is %f and between columns is %f' % (spacing[0], spacing[1]))
###Output
The grid has 100 rows and 200 columns
The spacing between rows is 200.000000 and between columns is 200.000000
###Markdown
Allocate memory for the water depth grid and get the current values from `cem`.
###Code
z = np.empty(shape, dtype=float)
cem.get_value('sea_water__depth', out=z)
###Output
_____no_output_____
###Markdown
Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about its internals for this tutorial. It just saves us some typing later on.
###Code
def plot_coast(spacing, z):
import matplotlib.pyplot as plt
xmin, xmax = 0., z.shape[1] * spacing[0] * 1e-3
ymin, ymax = 0., z.shape[0] * spacing[1] * 1e-3
plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')
plt.colorbar().ax.set_ylabel('Water Depth (m)')
plt.xlabel('Along shore (km)')
plt.ylabel('Cross shore (km)')
###Output
_____no_output_____
###Markdown
It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain.
###Code
plot_coast(spacing, z)
###Output
_____no_output_____
###Markdown
Right now we have waves coming in but no sediment entering the ocean. To add some discharge, we need to figure out where to put it. For now we'll put it on a cell that's next to the ocean. Allocate memory for the sediment discharge array and set the discharge at the coastal cell to some value.
###Code
qs = np.zeros_like(z)
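# add the sediment discharge on a cell next to the ocean, mid-way along the shore (column 100)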
qs[0, 100] = 1250
###Output
_____no_output_____
###Markdown
The CSDMS Standard Name for this variable is: "land_surface_water_sediment~bedload__mass_flow_rate"You can get an idea of the units based on the quantity part of the name. "mass_flow_rate" indicates mass per time. You can double-check this with the BMI method function **get_var_units**.
###Code
cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')
cem.time_step, cem.time_units, cem.time
###Output
_____no_output_____
###Markdown
Set the bedload flux and run the model.
###Code
for time in range(3000):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
cem.time
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
val = np.empty((5, ), dtype=float)
cem.get_value("basin_outlet~coastal_center__x_coordinate", val)
val / 100.
###Output
_____no_output_____
###Markdown
Let's add another sediment source with a different flux and update the model.
###Code
qs[0, 150] = 1500
for time in range(3750):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
###Output
_____no_output_____
###Markdown
Here we shut off the sediment supply completely.
###Code
qs.fill(0.)
for time in range(4000):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
###Output
_____no_output_____ |
notebooks/Constructing_Operators_States.ipynb | ###Markdown
Constructing Operators and States
Part 1
Load necessary modules
Import statements like these need to be at the top of every notebook you create.
###Code
import numpy as np # Needed to create arrays (vectors and matrices).
import qutip as qt # the QuTiP library.
import matplotlib.pyplot as plt # Plotting tools.
%matplotlib inline
###Output
_____no_output_____
###Markdown
Constructing Operators and States for Two-Level Systems
Start by generating the Pauli spin operators and identity operator for a two-level system. From your favorite QM textbook:
$$\begin{equation}\sigma_{x} = \begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix},\ \ \sigma_{y} = \begin{bmatrix}0 & -i \\ i & 0 \end{bmatrix},\ \ \sigma_{z} = \begin{bmatrix}1 & 0 \\ 0 & -1 \end{bmatrix},\ \ I = \begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}\end{equation}$$
###Code
qt.sigmax()
qt.sigmay()
qt.sigmaz()
qt.identity(2) # Identity works for any dimension, not just 2.
###Output
_____no_output_____
###Markdown
What implicit assumption did we make when writing down these operators? We have implicitly chosen the z-axis as our preferred basis (representation). What do the corresponding state vectors look like?
###Code
up = qt.basis(2,0) # First number is Hilbert dimension, second labels the state
up
###Output
_____no_output_____
###Markdown
If I don't know what the inputs are, then it is always possible to ask for help.
###Code
qt.basis?
down = qt.basis(2,1)
down
###Output
_____no_output_____
###Markdown
Check the action of $\sigma_{z}$ on the states:
###Code
qt.sigmaz() * up
qt.sigmaz() * down
###Output
_____no_output_____
###Markdown
Of course, given a basis for our Hilbert space, we can form any other state via a linear combination of basis vectors:$$\begin{equation}|\Psi\rangle = a|\uparrow\rangle+b|\downarrow\rangle\end{equation}$$
###Code
psi = 5*up - 1j*down # Complex numbers in Python written as e.g. 1+3j.
psi
###Output
_____no_output_____
###Markdown
State is not properly normalized. Can be easily fixed.
###Code
psi.unit()
###Output
_____no_output_____
###Markdown
What's going on here? See lecture slides for a description of the Quantum Object class.

Part 2
Constructing Operators and States for Oscillators
Unlike two-level systems, oscillators live in a formally infinite Hilbert space. Therefore we must be careful as to where we truncate the state space for simulation. The truncation of state space can lead to misunderstandings and errors, if not accounted for.
A good choice of basis, and the one used by QuTiP, is the usual number state $|n\rangle$ (Fock) basis.
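One concrete way to see the truncation artifact is to check the commutator $[a, a^{\dagger}]$, which equals the identity in the untruncated space but not in the truncated one. A minimal sketch, assuming the same `N = 5` used below:

```python
import qutip as qt

N = 5
a = qt.destroy(N)
comm = a * a.dag() - a.dag() * a   # [a, a†]; the identity for an untruncated oscillator
print(comm.diag())                 # last diagonal entry is 1 - N, an artifact of truncation
```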
###Code
N = 5 # Must always specify how large you want the Hilbert space to be.
psi = qt.basis(N,0)
psi
###Output
_____no_output_____
###Markdown
This is the ground state $|0\rangle$ of the number state basis. To get the $|1\rangle$ state simply do:
###Code
qt.basis(N,1)
###Output
_____no_output_____
###Markdown
What this shows is that for a given Hilbert size $N$, the corresponding basis vectors go from $n=0 \rightarrow N-1$. We know that to move up and down the number state basis we need creation and annihilation operators.
###Code
a = qt.destroy(N)
a
###Output
_____no_output_____
###Markdown
To build the creation operator we could do `create(N)`, but let's use a `Qobj` method instead:
###Code
a.dag() # The 'dag' method creates the dagger (adjoint) of an operator.
a.dag()*psi
a.dag()**2 * psi
###Output
_____no_output_____
###Markdown
We can also build operators such as the number operator:
###Code
a.dag()*a
###Output
_____no_output_____
###Markdown
or we could have just used the builtin number operator function:
###Code
qt.num(N)
###Output
_____no_output_____
###Markdown
Building Density Matrices
QuTiP focuses on open quantum systems, i.e. systems that interact with an environment. This demands the use of density matrices, rather than state vectors, as the interaction with the environment, in general, produces a mixed state. Typically, we prepare our system in a given pure state. The corresponding density matrix would then just be formed by the outer product $|\Psi\rangle\langle \Psi|$.
###Code
psi = qt.basis(N,2)
psi
psi*psi.dag()
###Output
_____no_output_____
###Markdown
We could also use the built in function `ket2dm()` to do the product for us:
###Code
qt.ket2dm(psi)
###Output
_____no_output_____
###Markdown
QuTiP has a collection of built in density operators that are commonly used:
###Code
qt.fock_dm(N,2)
qt.coherent_dm(N,alpha=1) #alpha specifies the coherent state amplitude
###Output
_____no_output_____
###Markdown
The coherent states represent one example of a situation where the state vector or density matrix is non-sparse in the Fock basis. This is because they are generated by applying a displacement operator to the vacuum, computed via the matrix exponential (`expm`).
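As a sanity check of that statement, a coherent state can be built by hand by exponentiating the displacement operator and applying it to the vacuum. A minimal sketch, using an arbitrary (real) amplitude `alpha`:

```python
import qutip as qt

N, alpha = 5, 1.0
a = qt.destroy(N)
D = (alpha * a.dag() - alpha * a).expm()             # displacement operator (alpha is real here)
psi_manual = D * qt.basis(N, 0)                      # displaced vacuum = coherent state
print((psi_manual - qt.coherent(N, alpha)).norm())   # ~0, up to numerical/truncation effects
```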
###Code
qt.thermal_dm(N,1) #`1` indicates the average number of particles in the thermal state
###Output
_____no_output_____ |
py_functions/2-NLP.ipynb | ###Markdown
Data Import
###Code
df = pd.read_csv("../Data/data_NLP_round1.csv")
df.head()
###Output
_____no_output_____
###Markdown
Since each news article can contain slightly different unicode formatting, it's best to convert everything to ascii format to make it easier to work with the data. All incompatible characters will be converted or dropped. Since we are working with English, the hope is that a majority of the data is retained. **But we can come back to this later to see how much data is being dropped; a quick check is sketched below.**
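A simple way to quantify the loss is to compare character counts before and after the conversion. A sketch (it assumes the `text_ascii` column created in the next cell):

```python
# Rough measure of how much text the ASCII conversion drops
chars_dropped = df['text'].str.len() - df['text_ascii'].str.len()
print(chars_dropped.describe())
print('Overall share of characters dropped: {:.4%}'.format(
    chars_dropped.sum() / df['text'].str.len().sum()))
```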
###Code
# Ensuring everything is in ascii format and removing any weird formatting.
df['text_ascii'] = df.text.map(lambda x: unicodedata.normalize('NFKD', x).encode('ascii', 'ignore').decode('ascii'))
df[['text','text_ascii']].sample()
###Output
_____no_output_____
###Markdown
Pre-processing to work on
1. Better cleaning process - post-lemma and pre-lemma? What else?
1. Compound term extraction - incl. punctuation separated & space separated
1. Named entity extraction & linkage (eg: hong_kong vs hong kong)
1. Find a way to check if a token is an actual word or not and filter out all non-words

Breaking Into Paras
Let's break out each news article into paragraphs and expand this into a new dataframe. These paragraphs will be treated as individual documents that will be used to vectorize & topic model. After that, for a given overall news headline, each paragraph from the left-bias article will be compared against each paragraph from the right-bias article to pair up paragraphs (a sketch of this pairing step follows below).
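A minimal sketch of that pairing step (an assumption about how it could be done; it is not implemented in this notebook yet): treat each paragraph's topic distribution as a vector and match left- and right-bias paragraphs by cosine similarity, e.g. using rows of a document-topic matrix such as `topic_matrix_tfidf` produced later in this notebook.

```python
from sklearn.metrics.pairwise import cosine_similarity

def pair_paragraphs(left_topics, right_topics):
    """Match each left-bias paragraph to its most similar right-bias paragraph."""
    sims = cosine_similarity(left_topics, right_topics)  # shape: (n_left, n_right)
    best_idx = sims.argmax(axis=1)                       # closest right paragraph per left paragraph
    best_score = sims.max(axis=1)
    return list(zip(range(len(left_topics)), best_idx, best_score))
```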
###Code
df_expanded = df[['number','global_bias','title','news_source','text_ascii']].copy(deep=True)
# Splitting each para into a list of paras
df_expanded['text_paras_list'] = df_expanded.text_ascii.str.split('\n\n')
# Exploding the paragraphs into a dataframe, where each row has a paragraph
df_expanded_col = pd.DataFrame(df_expanded.text_paras_list.explode())
df_expanded_col.rename(columns={'text_paras_list':'text_paras'}, inplace=True)
# Joining the exploded dataframe back, so that other metadata can be associated with it
df_expanded = df_expanded.join(df_expanded_col,).reset_index()
df_expanded.rename(columns={'index':'article'}, inplace=True)
df_expanded.drop(columns='text_paras_list', inplace=True)
# getting paragraph numbering
df_expanded['para_count'] = df_expanded.groupby('article').cumcount()
df_expanded[df_expanded.text_paras == '']
###Output
_____no_output_____
###Markdown
Pre-processing
Lemmatization
Lemmatizing first helps preserve as much meaning of the word as possible, while separating out punctuation as needed. It also preserves entity names. **Only need to link compound words somehow**
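`spacy_lemmatization` comes from the project's helper module; a rough sketch of what such a helper might look like is below (an assumption — the actual implementation may differ):

```python
import spacy

sp_nlp = spacy.load('en_core_web_sm')   # a spaCy pipeline; the notebook already has `sp_nlp` loaded

def spacy_lemmatization_sketch(text):
    # Replace each token with its lemma, keeping punctuation and entity names as separate tokens
    return ' '.join(token.lemma_ for token in sp_nlp(text))
```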
###Code
%%time
df_expanded['text_paras_lemma'] = df_expanded.text_paras.map(spacy_lemmatization)
df_expanded[['text_paras', 'text_paras_lemma']].sample(2)
pd.set_option('display.max_colwidth', None)
print(df_expanded.sample()[['text_paras','text_paras_lemma']])
pd.reset_option('display.max_colwidth')
###Output
text_paras \
7124 Google didn't immediately respond to a request for comment, but the company has said its competitive edge comes from offering a product that billions of people choose to use each day. Alphabet's shares opened Tuesday up roughly 1%, ahead of the broader market, after The Wall Street Journal first reported news of the impending suit.
text_paras_lemma
7124 Google do not immediately respond to a request for comment , but the company have say competitive edge come from offer a product that billion of people choose to use each day . Alphabet 's share open Tuesday up roughly 1 % , ahead of the broad market , after the Wall Street Journal first report news of the impending suit .
###Markdown
Misc Cleaning
Misc. cleaning of the documents. Currently this involves just removing email addresses, website links & any non-alphanumeric characters.
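`cleaning` is likewise defined in the project's helper module; a rough regex-based sketch of the behaviour described above (an assumption, not the actual implementation):

```python
import re

def cleaning_sketch(text):
    text = re.sub(r'\S+@\S+', ' ', text)            # remove email addresses
    text = re.sub(r'http\S+|www\.\S+', ' ', text)   # remove website links
    text = re.sub(r'[^A-Za-z0-9\s]', ' ', text)     # remove non-alphanumeric characters
    return re.sub(r'\s+', ' ', text).strip()        # collapse whitespace
```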
###Code
df_expanded['text_paras_misc_clean'] = df_expanded.text_paras_lemma.map(cleaning)
df_expanded[['text_paras_lemma','text_paras_misc_clean']].sample(2)
pd.set_option('display.max_colwidth', None)
print(df_expanded.loc[18300,['text_paras','text_paras_misc_clean']])
pd.reset_option('display.max_colwidth')
pd.set_option('display.max_colwidth', None)
print(df_expanded.sample()[['text_paras','text_paras_misc_clean']])
pd.reset_option('display.max_colwidth')
%%time
custom_stop_words = ['ad', 'advertisement', '000', 'mr', 'ms', 'said', 'going', 'dont', 'think', 'know', 'want', 'like', 'im', 'thats', 'told', \
'lot', 'hes', 'really', 'say', 'added', 'come', 'great','newsletter','daily','sign','app',\
'click','app','inbox', 'latest', 'jr','everybody','`']
df_expanded['text_paras_stopwords'] = df_expanded.text_paras_misc_clean.map(lambda x: remove_stopwords(x, custom_words=custom_stop_words))
# df_expanded['text_paras_stopwords'] = df_expanded.text_paras_stopwords.map(lambda x: remove_stopwords(x, remove_words_list = [], \
# custom_words = custom_stop_words))
df_expanded[['text_paras_lemma','text_paras_stopwords']].sample(2)
# spacy_text = sp_nlp(df_expanded.loc[18300,'text_paras_stopwords'])
# [[token.text, token.ent_type_] for token in spacy_text]
corpus = ' '.join(df_expanded.text_paras_misc_clean.tolist())
two = corpus.split()
two_list = [word for word in two if (len(word) <= 2) and (not word.isdigit())]
two_list
# df_nlp_round1['text_final'] = df_nlp_round1['text_stopwords']
df_expanded['text_final'] = df_expanded['text_paras_stopwords']
df_expanded['text_paras_stopwords'].str.contains('mr').sum()
%%time
params = {'stop_words':'english','min_df': 10, 'max_df': 0.5, 'ngram_range':(1, 1),}
tfidf = TfidfVectorizer(**params)
review_word_matrix_tfidf = tfidf.fit_transform(df_expanded['text_final'])
review_vocab_tfidf = tfidf.get_feature_names()
lda_tfidf, score_tfidf, topic_matrix_tfidf, word_matrix_tfidf = lda_topic_modeling(review_word_matrix_tfidf, vocab = review_vocab_tfidf, n = 20)
###Output
iteration: 1 of max_iter: 100
    ...
iteration: 100 of max_iter: 100
Wall time: 9min 6s
###Markdown
Exploring The Topic Models
Let's take a look at the topic model to see what we've got.
###Code
top_words_for_all_topics(word_matrix_tfidf, 20, 20)
###Output
Topic 0
king, register, professor, nbc, water, barrett, university, harvard, clip, grassley, trudeau, todd, picture, lake, patrick, spillway, plenty, feeling, writer, positive,
Topic 1
trump, investigation, mr, committee, say, house, president, fbi, mueller, comey, attorney, counsel, flynn, impeachment, general, special, russia, report, probe, official,
Topic 2
percent, say, tax, health, year, job, care, pay, plan, billion, million, government, cut, rate, budget, worker, 000, insurance, economy, federal,
Topic 3
newsletter, daily, manage, sign, uranium, cumming, brunson, turkish, dowd, audience, opinion, conway, participation, hammer, yang, erdogan, liar, stuff, rosatom, quarter,
Topic 4
llc, copyright, 2020, times, click, permission, reprint, washington, buzz, nooyi, charles, word, post, 2006, warfare, pepsico, typically, drink, examiner, ag,
Topic 5
police, say, officer, protester, black, protest, city, people, video, mr, man, shooting, trump, walker, arrest, church, violence, death, group, shoot,
Topic 6
graham, reid, lindsey, harry, plane, space, mccain, rush, apartment, smart, mercer, mail, oath, vehicle, christopher, sen, guest, rich, say, car,
Topic 7
book, christie, appointee, brown, pugh, spicer, baltimore, loyalty, ginsburg, sean, fierce, wildstein, quiet, farr, audit, weather, belong, lane, carter, listen,
Topic 8
united, states, say, iran, mr, trade, american, china, war, deal, president, military, iraq, obama, sanction, nuclear, trump, world, foreign, force,
Topic 9
campaign, clinton, mr, democratic, candidate, trump, state, win, sander, republican, say, voter, election, party, vote, presidential, new, race, biden, president,
Topic 10
facebook, medium, social, company, northam, news, mr, twitter, update, tech, post, google, sandberg, say, video, hunt, registration, platform, celebrate, page,
Topic 11
say, people, bush, know, think, like, tell, thing, mr, family, good, great, life, want, president, work, time, look, year, add,
Topic 12
say, vote, senate, house, republican, democrats, republicans, mr, president, trump, think, congress, senator, leader, mcconnell, make, legislation, majority, come, party,
Topic 13
north, korea, kim, korean, summit, sen, rubio, tip, marco, independent, schultz, south, alaska, jong, direction, denuclearization, maine, paul, trump, contender,
Topic 14
border, say, immigrant, trump, immigration, wall, people, security, 000, administration, official, family, health, country, child, emergency, coronavirus, virus, illegal, president,
Topic 15
trump, president, house, white, say, mr, read, comment, official, news, russian, meeting, speak, tweet, adviser, putin, secretary, respond, continue, story,
Topic 16
court, say, law, justice, case, supreme, state, judge, department, federal, mr, decision, rule, email, use, president, clinton, information, legal, ruling,
Topic 17
contribute, fox, report, news, associated, press, app, click, hong, kong, los, bloomberg, angeles, david, elizabeth, business, mike, warren, ben, et,
Topic 18
say, schumer, mr, trump, israel, leader, president, pelosi, reporter, tell, minority, mnuchin, saudi, secretary, hamas, meeting, talk, treasury, negotiation, house,
Topic 19
weapon, attack, say, syrian, chemical, drone, military, assad, strike, iran, opinion, official, analysis, bomb, syria, terrorist, use, islamic, al, missile,
###Markdown
Looking at the top words for each topic, there are a number of filler words which we could remove to make the topics a lot more meaningful. Additionally, all numbers except for years can be removed too. Lastly, a way needs to be identified for detecting compound words, especially names of places like Hong Kong, North America, etc. (one option is sketched below).
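For the compound-word problem, one option is a collocation model such as gensim's `Phrases`, which joins frequently co-occurring tokens (e.g. `hong kong` → `hong_kong`). A hedged sketch, assuming gensim is available and that `df_expanded['text_final']` holds the cleaned paragraphs (the `min_count`/`threshold` values are guesses to tune):

```python
from gensim.models.phrases import Phrases, Phraser

token_lists = [doc.split() for doc in df_expanded['text_final']]
bigram = Phraser(Phrases(token_lists, min_count=10, threshold=15))
df_expanded['text_phrased'] = [' '.join(bigram[tokens]) for tokens in token_lists]
```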
###Code
custom_stop_words = ['000', 'mr', 'said', 'going', 'dont', 'think', 'know', 'want', 'like', 'im', 'thats', 'told', \
'lot', 'hes', 'really', 'say', 'added', 'come', 'great','newsletter','daily','sign','app',\
'click','app','inbox', 'latest', 'jr','everybody']
###Output
_____no_output_____ |
Program's_Contributed_By_Contributors/AI-Summer-Course/py-master/ML/7_logistic_reg/7_logistic_regression.ipynb | ###Markdown
Predicting if a person would buy life insurance based on their age using logistic regression
Above is a binary logistic regression problem as there are only two possible outcomes (i.e. whether the person buys insurance or not).
###Code
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
df = pd.read_csv("insurance_data.csv")
df.head()
plt.scatter(df.age,df.bought_insurance,marker='+',color='red')
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df[['age']],df.bought_insurance,train_size=0.8)
X_test
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
X_test
y_predicted = model.predict(X_test)
model.predict_proba(X_test)
model.score(X_test,y_test)
y_predicted
X_test
###Output
_____no_output_____
###Markdown
**model.coef_ indicates the value of m in the z = m*x + b equation (the linear part that is passed to the sigmoid below)**
###Code
model.coef_
###Output
_____no_output_____
###Markdown
**model.intercept_ indicates the value of b in the z = m*x + b equation**
###Code
model.intercept_
###Output
_____no_output_____
###Markdown
**Let's define the sigmoid function now and do the math by hand**
###Code
import math
def sigmoid(x):
return 1 / (1 + math.exp(-x))
def prediction_function(age):
z = 0.042 * age - 1.53 # 0.04150133 ~ 0.042 and -1.52726963 ~ -1.53
y = sigmoid(z)
return y
age = 35
prediction_function(age)
###Output
_____no_output_____
###Markdown
**0.485 is less than 0.5, which means a person aged 35 will *not* buy insurance**
###Code
age = 43
prediction_function(age)
###Output
_____no_output_____ |
tutorials/01_least_squares_optimization.ipynb | ###Markdown
Least-squares Optimization with Theseus
This tutorial demonstrates how to solve a curve-fitting problem with Theseus. The examples in this tutorial are inspired by the [Ceres](https://ceres-solver.org/) [tutorial](http://ceres-solver.org/nnls_tutorial.html), and structured like the [curve-fitting example](http://ceres-solver.org/nnls_tutorial.html#curve-fitting) and [robust curve-fitting example](http://ceres-solver.org/nnls_tutorial.html#robust-curve-fitting) in Ceres.

Quadratic curve-fitting
In this tutorial, we will show how we can fit a quadratic function: y = ax² + b

Step 0: Generating Data
We first generate data by sampling points from the quadratic function x² + 0.5. To this, we add Gaussian noise with σ = 0.01.
###Code
import torch
torch.manual_seed(0)
def generate_data(num_points=100, a=1, b=0.5, noise_factor=0.01):
# Generate data: 100 points sampled from the quadratic curve listed above
data_x = torch.rand((1, num_points))
noise = torch.randn((1, num_points)) * noise_factor
data_y = a * data_x.square() + b + noise
return data_x, data_y
data_x, data_y = generate_data()
# Plot the data
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.scatter(data_x, data_y);
ax.set_xlabel('x');
ax.set_ylabel('y');
###Output
_____no_output_____
###Markdown
We demonstrate how to use Theseus to solve this curve-fitting problem in 3 steps:
- Step 1: Represent data and variables
- Step 2: Set up optimization
- Step 3: Run optimization

Step 1: Represent data and variables in Theseus
As we described in Tutorial 0, Theseus Variables are semantically divided into two main classes:
- optimization variables: those that will be modified by our non-linear least-squares optimizers to minimize the total cost function
- auxiliary variables: other variables required by the cost functions to carry out the optimization, but which will not be optimized by the non-linear least-squares optimizers, e.g., application data in this example (we will see more examples of these later)

Our first step is to represent the data (x, y) and the optimization variables (a and b) in Theseus data structures.
The optimization variables must be of type `Manifold`. For this example, we choose its `Vector` sub-class to represent a and b. Because they are one-dimensional quantities, we require only 1 degree-of-freedom in initializing these `Vector` objects. (Alternately, we could also represent both variables as a single 2-dimensional `Vector` object; however, this would change how the error functions are written; see the sketch just below.) The (auxiliary) data variables may be an instance of any `Variable` type. For this example, the type `Variable` itself suffices.
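For completeness, here is a minimal sketch of that single 2-dof `Vector` alternative (an assumption about how the batched tensor would be unpacked column-wise; the tutorial itself keeps two 1-dof `Vector`s and the error function defined in Step 2):

```python
ab = th.Vector(2, name="ab")  # one optimization variable holding both a and b

def quad_error_fn_2dof(optim_vars, aux_vars):
    (ab,) = optim_vars
    x, y = aux_vars
    a_col, b_col = ab.data[:, 0:1], ab.data[:, 1:2]  # split the 2-dof vector into a and b
    return y.data - (a_col * x.data.square() + b_col)
```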
###Code
import theseus as th
# data is of type Variable
x = th.Variable(data_x, name="x")
y = th.Variable(data_y, name="y")
# optimization variables are of type Vector with 1 degree of freedom (dof)
a = th.Vector(1, name="a")
b = th.Vector(1, name="b")
###Output
_____no_output_____
###Markdown
Step 2: Set up optimization
The residual errors of the least-squares fit are captured in a `CostFunction`. In this example, we will use the `AutoDiffCostFunction` provided by Theseus, which offers an easy-to-use way to capture arbitrary cost functions. The `AutoDiffCostFunction` only requires that we define the optimization variables and the auxiliary variables, and provide an error function that computes the residual errors. From there, it uses PyTorch autograd to compute the Jacobians for the optimization variables via automatic differentiation. In the example below, the `quad_error_fn` captures the least-squares error of the quadratic function fitted with the two 1-dimensional `Vector` objects `a`, `b`. The total least-squares error can be captured by either one 100-dimensional `AutoDiffCostFunction` (where each dimension represents the error of one data point), or a set of 100 one-dimensional `AutoDiffCostFunction` objects (where each cost function instead captures the error of one data point). We use the former (i.e., one 100-dimensional `AutoDiffCostFunction`) in this example, but we will see examples of the latter in Tutorials 4 & 5.

Finally, we combine the cost functions into a Theseus optimization problem:
- The optimization criterion is represented by the `Objective`. This is constructed by adding all the cost functions to it.
- We can then choose an optimizer and set some of its default configuration (e.g., `GaussNewton` with `max_iterations=15` in the example below).
- The objective and its associated optimizer are then used to construct the `TheseusLayer`, which represents one layer of optimization.
###Code
def quad_error_fn(optim_vars, aux_vars):
a, b = optim_vars
x, y = aux_vars
est = a.data * x.data.square() + b.data
err = y.data - est
return err
optim_vars = a, b
aux_vars = x, y
cost_function = th.AutoDiffCostFunction(
optim_vars, quad_error_fn, 100, aux_vars=aux_vars, name="quadratic_cost_fn"
)
objective = th.Objective()
objective.add(cost_function)
optimizer = th.GaussNewton(
objective,
max_iterations=15,
step_size=0.5,
)
theseus_optim = th.TheseusLayer(optimizer)
###Output
_____no_output_____
###Markdown
Step 3: Run optimization
Running the optimization problem now only requires that we provide the input data and initial values, and call the forward function on the `TheseusLayer`.

The input is provided as a dictionary, where the keys represent either the optimization variables (which are paired with their initial values), or the auxiliary variables (which are paired with their data). The dictionary `theseus_inputs` shows an example of this.

With this input, we can now run the least squares optimization in Theseus. We do this by calling the `forward` function on the `TheseusLayer`. Two quantities are returned after each call to the `forward` function:
1. The `updated_inputs` object, which holds the final values for the optimized variables, along with unchanged auxiliary variable values. This allows us to use the `updated_inputs` as input to downstream functions or Theseus layers (e.g., for problems that require multiple forward passes, as we will see in Tutorial 2).
2. The `info` object, which can track the best solution if necessary, and holds other useful information about the optimization. The best solution is useful to track because the optimization algorithm does not stop if the error increases from an earlier iteration. (The best solution is not as useful when backpropagation is carried out, because backpropagation uses the entire optimization sequence; see Tutorial 2.)
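For instance, based on the description above, the returned dictionary could seed a further pass or a downstream layer (a sketch; this tutorial only runs a single pass):

```python
updated_inputs, info = theseus_optim.forward(theseus_inputs)
# Reuse the outputs (optimized values plus unchanged auxiliary data) as the next call's inputs
refined_inputs, refined_info = theseus_optim.forward(updated_inputs)
```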
###Code
theseus_inputs = {
"x": data_x,
"y": data_y,
"a": 2 * torch.ones((1, 1)),
"b": torch.ones((1, 1))
}
with torch.no_grad():
updated_inputs, info = theseus_optim.forward(
theseus_inputs, optimizer_kwargs={"track_best_solution": True, "verbose":True})
print("Best solution:", info.best_solution)
# Plot the optimized function
fig, ax = plt.subplots()
ax.scatter(data_x, data_y);
a = info.best_solution['a'].squeeze()
b = info.best_solution['b'].squeeze()
x = torch.linspace(0., 1., steps=100)
y = a*x*x + b
ax.plot(x, y, color='k', lw=4, linestyle='--',
label='Optimized quadratic')
ax.legend()
ax.set_xlabel('x');
ax.set_ylabel('y');
###Output
Nonlinear optimizer. Iteration: 0. Error: 38.42743682861328
Nonlinear optimizer. Iteration: 1. Error: 9.609884262084961
Nonlinear optimizer. Iteration: 2. Error: 2.405491828918457
Nonlinear optimizer. Iteration: 3. Error: 0.6043925285339355
Nonlinear optimizer. Iteration: 4. Error: 0.15411755442619324
Nonlinear optimizer. Iteration: 5. Error: 0.04154873266816139
Nonlinear optimizer. Iteration: 6. Error: 0.013406438753008842
Nonlinear optimizer. Iteration: 7. Error: 0.006370890885591507
Nonlinear optimizer. Iteration: 8. Error: 0.0046120136976242065
Nonlinear optimizer. Iteration: 9. Error: 0.0041722883470356464
Nonlinear optimizer. Iteration: 10. Error: 0.004062363877892494
Nonlinear optimizer. Iteration: 11. Error: 0.004034876357764006
Nonlinear optimizer. Iteration: 12. Error: 0.004028005059808493
Nonlinear optimizer. Iteration: 13. Error: 0.004026289563626051
Nonlinear optimizer. Iteration: 14. Error: 0.004025860223919153
Nonlinear optimizer. Iteration: 15. Error: 0.00402575358748436
Best solution: {'a': tensor([[0.9945]]), 'b': tensor([[0.5018]])}
###Markdown
We observe that we have recovered almost exactly the original a, b values used in the quadratic function we sampled from.

Robust Quadratic curve-fitting
This example can also be adapted for a problem where the errors are weighted, e.g., with a Cauchy loss that reduces the weight of data points with extremely high errors. This is similar to the [robust curve-fitting example](http://ceres-solver.org/nnls_tutorial.html#robust-curve-fitting) in the Ceres solver.

In this tutorial, we make a simple modification to add a Cauchy-loss weighting to the error function: we replace the `quad_error_fn` above in the `AutoDiffCostFunction` by creating the following `cauchy_loss_quad_error_fn` that weights it.
###Code
def cauchy_fn(x):
return torch.sqrt(0.5 * torch.log(1 + x ** 2))
def cauchy_loss_quad_error_fn(optim_vars, aux_vars):
err = quad_error_fn(optim_vars, aux_vars)
return cauchy_fn(err)
wt_cost_function = th.AutoDiffCostFunction(
optim_vars, cauchy_loss_quad_error_fn, 100, aux_vars=aux_vars, name="cauchy_quad_cost_fn"
)
###Output
_____no_output_____
###Markdown
Similar to the example above, we can now construct the Theseus optimization problem with this weighted cost function: create `Objective`, an optimizer, and a `TheseusLayer`, and run the optimization.
###Code
objective = th.Objective()
objective.add(wt_cost_function)
optimizer = th.GaussNewton(
objective,
max_iterations=20,
step_size=0.3,
)
theseus_optim = th.TheseusLayer(optimizer)
theseus_inputs = {
"x": data_x,
"y": data_y,
"a": 2 * torch.ones((1, 1)),
"b": torch.ones((1, 1))
}
# We suppress warnings in this optimization call, because we observed that with this data, Cauchy
# loss often results in singular systems with numerical computations as it approaches optimality.
# Please note: getting a singular system during the forward optimization will throw
# an error if torch's gradient tracking is enabled.
import warnings
warnings.simplefilter("ignore")
with torch.no_grad():
_, info = theseus_optim.forward(
theseus_inputs, optimizer_kwargs={"track_best_solution": True, "verbose":True})
print("Best solution:", info.best_solution)
# Plot the optimized function
fig, ax = plt.subplots()
ax.scatter(data_x, data_y);
a = info.best_solution['a'].squeeze()
b = info.best_solution['b'].squeeze()
x = torch.linspace(0., 1., steps=100)
y = a*x*x + b
ax.plot(x, y, color='k', lw=4, linestyle='--',
label='Optimized quadratic')
ax.legend()
ax.set_xlabel('x');
ax.set_ylabel('y');
###Output
Nonlinear optimizer. Iteration: 0. Error: 13.170896530151367
Nonlinear optimizer. Iteration: 1. Error: 5.595989227294922
Nonlinear optimizer. Iteration: 2. Error: 2.6045525074005127
Nonlinear optimizer. Iteration: 3. Error: 1.248531460762024
Nonlinear optimizer. Iteration: 4. Error: 0.6063637733459473
Nonlinear optimizer. Iteration: 5. Error: 0.2966576814651489
Nonlinear optimizer. Iteration: 6. Error: 0.14604422450065613
Nonlinear optimizer. Iteration: 7. Error: 0.0725097507238388
Nonlinear optimizer. Iteration: 8. Error: 0.03653936833143234
Nonlinear optimizer. Iteration: 9. Error: 0.018927372992038727
Nonlinear optimizer. Iteration: 10. Error: 0.01030135527253151
Nonlinear optimizer. Iteration: 11. Error: 0.0060737887397408485
Best solution: {'a': tensor([[1.0039]]), 'b': tensor([[0.5112]])}
|
notes/docs.sympy.org/tutorial/tutorial.ipynb | ###Markdown
SymPy Tutorial
*Arthur Ryman*
*Last Updated: 2020-04-13*

This notebook contains the examples from the [SymPy Tutorial](https://docs.sympy.org/latest/tutorial/index.html).

Preliminaries
Installation
The following example confirms that SymPy is installed correctly.
###Code
from sympy import *
x = symbols('x')
a = Integral(cos(x)*exp(x), x)
Eq(a, a.doit())
###Output
_____no_output_____
###Markdown
The above example shows that SymPy is very nicely integrated with Jupyter. The expression is printed as a properly typeset mathematical equation. I assume that Jupyter sets the default pretty printer to LaTeX.

Introduction
What is Symbolic Computation?
###Code
import math
math.sqrt(9)
math.sqrt(8)
import sympy
sympy.sqrt(3)
sympy.sqrt(8)
from sympy import symbols
x, y = symbols('x y')
expr = x + 2*y
expr
expr + 1
expr - x
x*expr
from sympy import expand, factor
expanded_expr = expand(x*expr)
expanded_expr
factor(expanded_expr)
###Output
_____no_output_____
###Markdown
The Power of Symbolic Computation
###Code
from sympy import *
x, t, z, nu = symbols('x t z nu')
init_printing(use_unicode=True)
diff(sin(x)*exp(x), x)
integrate(exp(x)*sin(x) + exp(x)*cos(x), x)
integrate(sin(x**2), (x, -oo, oo))
limit(sin(x)/x, x, 0)
solve(x**2 -2, x)
y = Function('y')
dsolve(Eq(y(t).diff(t, t) - y(t), exp(t)), y(t))
Matrix([[1, 2], [2, 2]]).eigenvals()
besselj(nu, z).rewrite(jn)
latex(Integral(cos(x)**2, (x, 0, pi)))
Integral(cos(x)**2, (x, 0, pi))
latex_1 = latex(Integral(cos(x)**2, (x, 0, pi)))
print(latex_1)
###Output
\int\limits_{0}^{\pi} \cos^{2}{\left(x \right)}\, dx
###Markdown
Why SymPy? Gotchas Symbols
###Code
from sympy import *
x + 1
x = symbols('x')
x + 1
x, y, z = symbols('x y z')
a, b = symbols('b a')
a
b
crazy = symbols('unrelated')
crazy + 1
print(latex(crazy))
x = symbols('x')
expr = x + 1
x = 2
print(expr)
x = 'abc'
expr = x + 'def'
expr
x = 'ABC'
expr
x = symbols('x')
expr = x + 1
expr.subs(x, 2)
###Output
_____no_output_____
###Markdown
Equal signs
###Code
x + 1 == 4
Eq(x + 1, 4)
(x + 1)**2 == x**2 + 2*x + 1
a = (x + 1)**2
b = x**2 + 2*x + 1
simplify(a - b)
c = x**2 - 2*x + 1
simplify(a - c)
a = cos(x)**2 - sin(x)**2
b = cos(2*x)
a.equals(b)
simplify(a - b)
###Output
_____no_output_____
###Markdown
Two Final Notes: ^ and /
###Code
True ^ False
True ^ True
Xor(x, y)
type(Integer(1) + 1)
type(1 + 1)
Integer(1)/Integer(3)
type(Integer(1)/Integer(3))
1/3
from __future__ import division
1/2
1//2
Rational(1, 2)
x + 1/2
x + Rational(1,2)
###Output
_____no_output_____
###Markdown
Further Reading Basic Operations
###Code
from sympy import *
x, y, z = symbols('x y z')
###Output
_____no_output_____
###Markdown
Substitution
###Code
expr = cos(x) + 1
expr.subs(x, y)
expr.subs(x, 0)
expr = x**y
expr
expr = expr.subs(y, x**y)
expr
expr = expr.subs(y, x**x)
expr
expr = sin(2*x) + cos(2*x)
expand_trig(expr)
expr.subs(sin(2*x), 2*sin(x)*cos(x))
expr = cos(x)
expr.subs(x, 0)
expr
x
expr = x**3 + 4*x*y -z
expr.subs([(x, 2), (y, 4), (z, 0)])
expr = x**4 - 4*x**3 + 4*x**2 - 2*x + 3
replacements = [(x**i, y**i) for i in range(5) if i % 2 == 0]
expr.subs(replacements)
###Output
_____no_output_____
###Markdown
Converting Strings to SymPy Expression
###Code
str_expr = 'x**2 + 3*x - 1/2'
expr = sympify(str_expr)
expr
expr.subs(x, 2)
###Output
_____no_output_____
###Markdown
evalf
###Code
expr = sqrt(8)
expr.evalf()
pi.evalf(100)
expr = cos(2*x)
expr.evalf(subs={x: 2.4})
one = cos(1)**2 + sin(1)**2
(one -1).evalf()
(one -1).evalf(chop=True)
###Output
_____no_output_____
###Markdown
lambdify
###Code
import numpy
a = numpy.arange(10)
expr = sin(x)
f = lambdify(x, expr, 'numpy')
f(a)
f = lambdify(x, expr, 'math')
f(0.1)
def mysin(x):
"""
My sine. Note that this is only accurate for small x.
"""
return x
f = lambdify(x, expr, {'sin': mysin})
f(0.1)
###Output
_____no_output_____
###Markdown
Printing Printers Setting up Pretty Printing
###Code
from sympy import init_printing
init_printing()
from sympy import init_session
init_session()
from sympy import *
x, y, z = symbols('x y z')
init_printing()
Integral(sqrt(1/x), x)
###Output
_____no_output_____
###Markdown
Printing Functions str
###Code
from sympy import *
x, y, z = symbols('x y z')
str(Integral(sqrt(1/x), x))
print(Integral(sqrt(1/x), x))
###Output
Integral(sqrt(1/x), x)
###Markdown
srepr
###Code
srepr(Integral(sqrt(1/x), x))
###Output
_____no_output_____
###Markdown
ASCII Pretty Printer
###Code
pprint(Integral(sqrt(1/x), x), use_unicode=False)
pretty(Integral(sqrt(1/x), x), use_unicode=False)
print(pretty(Integral(sqrt(1/x), x), use_unicode=False))
###Output
/
|
| ___
| / 1
| / - dx
| \/ x
|
/
###Markdown
Unicode Pretty Printer
###Code
pprint(Integral(sqrt(1/x), x), use_unicode=True)
###Output
⌠
⎮ ___
⎮ ╱ 1
⎮ ╱ ─ dx
⎮ ╲╱ x
⌡
###Markdown
Latex
###Code
print(latex(Integral(sqrt(1/x), x)))
###Output
\int \sqrt{\frac{1}{x}}\, dx
###Markdown
MathML
###Code
from sympy.printing.mathml import print_mathml
print_mathml(Integral(sqrt(1/x), x))
###Output
<apply>
<int/>
<bvar>
<ci>x</ci>
</bvar>
<apply>
<root/>
<apply>
<power/>
<ci>x</ci>
<cn>-1</cn>
</apply>
</apply>
</apply>
###Markdown
Dot
###Code
from sympy.printing.dot import dotprint
from sympy.abc import x
print(dotprint(x+2))
###Output
digraph{
# Graph style
"ordering"="out"
"rankdir"="TD"
#########
# Nodes #
#########
"Add(Integer(2), Symbol('x'))_()" ["color"="black", "label"="Add", "shape"="ellipse"];
"Integer(2)_(0,)" ["color"="black", "label"="2", "shape"="ellipse"];
"Symbol('x')_(1,)" ["color"="black", "label"="x", "shape"="ellipse"];
#########
# Edges #
#########
"Add(Integer(2), Symbol('x'))_()" -> "Integer(2)_(0,)";
"Add(Integer(2), Symbol('x'))_()" -> "Symbol('x')_(1,)";
}
|
functions_class.ipynb | ###Markdown
`NUMBER 1`
###Code
def shut_down(comp):
if comp == "yes":
return "shutting down"
elif comp == "no":
return "shut down aborted"
else:
return "sorry, such argument is not welcome here"
print(shut_down(input("Enter an option: ")))
###Output
Enter an option: YES
sorry, such argument is not welcome here
###Markdown
`NUMBER 2`
###Code
def showEmployee():
    # NOTE: the original cell was left empty (a bare `def` is a SyntaxError).
    # Minimal assumed body so the cell runs; replace with the intended logic.
    pass
###Output
_____no_output_____
###Markdown
`NUMBER 3`
###Code
def by_three(number):
    if number % 3 == 0:
        return cube(number)
    else:
        return False

def cube(number):
    # ** is exponentiation; the original used ^ (bitwise XOR) and returned nothing,
    # which is why the recorded output below shows None.
    return number ** 3

d = int(input("Enter a value: "))
print(by_three(d))
###Output
Enter a value: 6
None
###Markdown
`NUMBER 4`
###Code
string = input("Enter a word: ")
digit1 = 0
digit2 = 0
for i in string:
if (i.islower()):
digit1=digit1+1
elif (i.isupper()):
digit2=digit2+1
print ("The number of lowercase is: ", digit1)
print ("The number of uppercase is: ", digit2)
###Output
Enter a word: APPLE
The number of lowercase is: 0
The number of uppercase is: 5
|
notebooks/retired/verify_1Dqpf_figures.ipynb | ###Markdown
Run in Google Colab! [Open in a new window or tab](https://colab.research.google.com/github/m-wessler/nbm-verify/blob/master/notebooks/verify_1Dqpf_dev.ipynb)
###Code
import sys
sys.path.insert(1, '../scripts/')
import os
import csv
import nbm_funcs
import numpy as np
import pandas as pd
import xarray as xr
import seaborn as sns
import scipy.stats as scipy
import urllib.request as req
import matplotlib.pyplot as plt
from datetime import datetime, timedelta
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
******
Configuration
Select 'site' to evaluate, modify 'vsite' if an alternate verification site is preferred.
Fixed 'date0' at the start of the NBM v3.2 period (2/20/2020).
Full lead time is 263 hours - Note: if date1 is within this period, there will be missing verification data as it does not exist yet!
###Code
# NBM 1D Viewer Site to use
# site = sys.argv[1]
site = nbm_funcs._site = 'KSEA'
vsite = site
# Data Range
lead_time_end = 263
init_hours = [13]#[1, 7, 13, 19]
date0 = nbm_funcs._date0 = datetime(2020, 3, 1)
date1 = nbm_funcs._date1 = datetime(2020, 7, 15)
sitepath = site if site == vsite else '_'.join([site, vsite])
datadir = nbm_funcs._datadir = '../archive/%s/data/'%sitepath
os.makedirs(datadir, exist_ok=True)
figdir = nbm_funcs._figdir = '../archive//%s/figures/'%sitepath
os.makedirs(figdir, exist_ok=True)
dates = pd.date_range(date0, date1, freq='1D')
date2 = nbm_funcs._date2 = date1 + timedelta(hours=lead_time_end)
print(('\nForecast Site: {}\nVerif Site: {}\nInit Hours: '+
'{}\nFirst Init: {}\nLast Init: {}\nLast Verif: {}').format(
site, vsite, init_hours, date0, date1, date2))
###Output
_____no_output_____
###Markdown
******
Obtain observation data from SynopticLabs (MesoWest) API
These are quality-controlled precipitation observations with adjustable accumulation periods.
See more at: https://developers.synopticdata.com/mesonet/v2/stations/precipitation/
If no observation file exists, will download and save for future use.
###Code
obfile = datadir + '%s_obs_%s_%s.pd'%(site, date0.strftime('%Y%m%d'), date1.strftime('%Y%m%d'))
if os.path.isfile(obfile):
# Load file
obs = pd.read_pickle(obfile)
print('\nLoaded obs from file %s\n'%obfile)
else:
# Get and save file
obs = nbm_funcs.get_precip_obs(vsite, date0, date2)
obs = obs[0].merge(obs[1], how='inner', on='ValidTime').merge(obs[2], how='inner', on='ValidTime')
obs = obs[[k for k in obs.keys() if 'precip' in k]].sort_index()
obs.to_pickle(obfile)
print('\nSaved obs to file %s\n'%obfile)
mm_in = 1/25.4
obs *= mm_in
[obs.rename(columns={k:k.replace('mm', 'in')}, inplace=True) for k in obs.keys()]
obs.describe().T
###Output
_____no_output_____
###Markdown
******
Obtain NBM forecast data from NBM 1D Viewer (csv file API)
These are the NBM 1D output files extracted from the viewer with 3 set accumulation periods.
See more at: https://hwp-viz.gsd.esrl.noaa.gov/wave1d/?location=KSLC&col=2&hgt=1&obs=true&fontsize=1&selectedgroup=Default
If no forecast file exists, will download and save for future use. This can take some time.
###Code
nbmfile = datadir + '%s_nbm_%s_%s.pd'%(site, date0.strftime('%Y%m%d'), date1.strftime('%Y%m%d'))
if os.path.isfile(nbmfile):
# Load file
nbm = pd.read_pickle(nbmfile)
print('Loaded NBM from file %s'%nbmfile)
else:
url_list = []
for date in dates:
for init_hour in init_hours:
# For now pull from the csv generator
# Best to get API access or store locally later
base = 'https://hwp-viz.gsd.esrl.noaa.gov/wave1d/data/archive/'
datestr = '{:04d}/{:02d}/{:02d}'.format(date.year, date.month, date.day)
sitestr = '/NBM/{:02d}/{:s}.csv'.format(init_hour, site)
url_list.append([date, init_hour, base + datestr + sitestr])
# Try multiprocessing this for speed?
nbm = np.array([nbm_funcs.get_1d_csv(url, this=i+1, total=len(url_list)) for i, url in enumerate(url_list)])
nbm = np.array([line for line in nbm if line is not None])
header = nbm[0, 0]
# This drops days with incomplete collections. There may be some use
# to keeping this data, can fix in the future if need be
# May also want to make the 100 value flexible!
nbm = np.array([np.array(line[1]) for line in nbm if len(line[1]) == 100])
nbm = nbm.reshape(-1, nbm.shape[-1])
nbm[np.where(nbm == '')] = np.nan
# Aggregate to a clean dataframe
nbm = pd.DataFrame(nbm, columns=header).set_index(
['InitTime', 'ValidTime']).sort_index()
# Drop last column (misc metadata?)
nbm = nbm.iloc[:, :-2].astype(float)
header = nbm.columns
# variables = np.unique([k.split('_')[0] for k in header])
# levels = np.unique([k.split('_')[1] for k in header])
init = nbm.index.get_level_values(0)
valid = nbm.index.get_level_values(1)
# Note the 1h 'fudge factor' in the lead time here
lead = pd.DataFrame(
np.transpose([init, valid, ((valid - init).values/3600/1e9).astype(int)+1]),
columns=['InitTime', 'ValidTime', 'LeadTime']).set_index(['InitTime', 'ValidTime'])
nbm.insert(0, 'LeadTime', lead)
klist = np.array([k for k in np.unique([k for k in list(nbm.keys())]) if ('APCP' in k)&('1hr' not in k)])
klist = klist[np.argsort(klist)]
klist = np.append('LeadTime', klist)
nbm = nbm.loc[:, klist]
# Nix values where lead time shorter than acc interval
for k in nbm.keys():
if 'APCP24hr' in k:
nbm[k][nbm['LeadTime'] < 24] = np.nan
elif 'APCP12hr' in k:
nbm[k][nbm['LeadTime'] < 12] = np.nan
elif 'APCP6hr' in k:
nbm[k][nbm['LeadTime'] < 6] = np.nan
else:
pass
nbm.to_pickle(nbmfile)
print('\nSaved NBM to file %s'%obfile)
# Convert mm to in
nbm = pd.DataFrame([nbm['LeadTime']] + [nbm[k] * mm_in for k in nbm.keys() if 'LeadTime' not in k]).T
# Display some basic stats
nbm.loc[:, ['APCP6hr_surface', 'APCP6hr_surface_70% level', 'APCP6hr_surface_50% level',
'APCP12hr_surface', 'APCP12hr_surface_70% level', 'APCP12hr_surface_50% level',
'APCP24hr_surface', 'APCP24hr_surface_70% level', 'APCP24hr_surface_50% level'
]].describe().T
###Output
_____no_output_____
###Markdown
Plot the distribution of precipitation observations vs forecasts for assessment of representativeness
###Code
thresh_id = nbm_funcs._thresh_id = {'Small':[0, 1], 'Medium':[1, 2], 'Large':[2, 3], 'All':[0, 3]}
# 33rd, 67th percentile determined above
thresholds = nbm_funcs._thresholds = {interval:nbm_funcs.apcp_dist_plot(obs, nbm, interval)
for interval in [6, 12, 24]}
# Use fixed override if desired
# thresholds = {
# 6:[1, 2],
# 12:[1, 2],
# 24:[1, 2]}
thresholds
###Output
_____no_output_____
###Markdown
****** Reorganize the data for analysis: Isolate the forecasts by accumulation interval and lead time
###Code
plist = np.arange(1, 100)
data = []
for interval in [6, 12, 24]:
pkeys = np.array([k for k in nbm.keys() if '%dhr_'%interval in k])
pkeys = np.array([k for k in pkeys if '%' in k])
pkeys = pkeys[np.argsort([int(k.split('_')[-1].split('%')[0]) for k in pkeys])]
for lead_time in np.arange(interval, lead_time_end, 6):
for esize in ['Small', 'Medium', 'Large']:
thresh = [thresholds[interval][thresh_id[esize][0]],
thresholds[interval][thresh_id[esize][1]]]
print('\rProcessing interval %d lead %dh'%(interval, lead_time), end='')
# We need to break out the verification to each lead time,
# but within each lead time we have a number of valid times.
# At each lead time, valid time, isolate the forecast verification
# Combine the datasets to make it easier to work with
idata = nbm[nbm['LeadTime'] == lead_time].merge(obs, on='ValidTime').drop(columns='LeadTime')
# Subset for event size
iobs = idata['%dh_precip_in'%interval]
idata = idata[((iobs >= thresh[0]) & (iobs < thresh[1]))]
for itime in idata.index:
try:
prob_fx = idata.loc[itime, pkeys].values
mean_fx = np.nanmean(prob_fx)
std_fx = np.nanstd(prob_fx)
med_fx = idata.loc[itime, 'APCP%dhr_surface_50%% level'%interval]
det_fx = idata.loc[itime, 'APCP%dhr_surface'%interval]
# Optional - leave as nan?
det_fx = det_fx if ~np.isnan(det_fx) else 0.
verif_ob = idata.loc[itime, '%dh_precip_in'%interval]
verif_rank = np.searchsorted(prob_fx, verif_ob, 'right')
verif_rank_val = prob_fx[verif_rank-1]
verif_rank_error = verif_rank_val - verif_ob
verif_rank = 101 if ((verif_rank >= 99) & (verif_ob > verif_rank_val)) else verif_rank
verif_rank = -1 if ((verif_rank <= 1) & (verif_ob < verif_rank_val)) else verif_rank
det_rank = np.searchsorted(prob_fx, det_fx, 'right')
det_error = det_fx - verif_ob
except:
raise
# pass
# print('failed', itime)
else:
if ((verif_ob > 0.) & ~np.isnan(verif_rank_val)):
data.append([
# Indexers
interval, lead_time, itime, esize,
# Verification and deterministic
verif_ob, det_fx, det_rank, det_error,
# Probabilistic
verif_rank, verif_rank_val, verif_rank_error,
med_fx, mean_fx, std_fx])
data = pd.DataFrame(data, columns=['Interval', 'LeadTime', 'ValidTime', 'EventSize',
'verif_ob', 'det_fx', 'det_rank', 'det_error',
'verif_rank', 'verif_rank_val', 'verif_rank_error',
'med_fx', 'mean_fx', 'std_fx'])
print('\n\nAvailable keys:\n\t\t{}\nn rows: {}'.format('\n\t\t'.join(data.keys()), len(data)))
###Output
_____no_output_____
###Markdown
******
Create Bulk Temporal Stats Plots
Reliability diagrams, bias over time, rank over time, etc.

Plot histograms of percentile rank
###Code
short, long = 0, 120
plot_type = 'Verification'
plot_var = 'verif_rank'
esize = 'All'
for interval in [6, 12, 24]:
kwargs = {'_interval':interval, '_esize':esize,
'_short':short, '_long':long,
'_plot_type':plot_type, '_plot_var':plot_var}
nbm_funcs.histograms_verif_rank(data, **kwargs)
###Output
_____no_output_____
###Markdown
Plot a reliability diagram style CDF to evaluate percentile rankings
###Code
short, long = 0, 120
plot_type = 'Verification'
plot_var = 'verif_rank'
esize = 'All'
for interval in [6, 12, 24]:
kwargs = {'_interval':interval, '_esize':esize,
'_short':short, '_long':long,
'_plot_type':plot_type, '_plot_var':plot_var}
nbm_funcs.reliability_verif_cdf(data, **kwargs)
###Output
_____no_output_____
###Markdown
Produce bias, ME, MAE, and percentile rank plots as they evolve over time.
This helps illustrate at what leads a dry/wet bias may exist and how severe it may be.
Adds value in interpreting the CDF reliability diagrams.
###Code
short, long = 0, 120
esize = 'All'
for interval in [6, 12, 24]:
kwargs = {'_interval':interval, '_esize':esize,
'_short':short, '_long':long}
nbm_funcs.rank_over_leadtime(data, **kwargs)
###Output
_____no_output_____ |
Fig02d - Sleep stages by hours of night.ipynb | ###Markdown
Setup
###Code
%matplotlib inline
import numpy as np
import scipy.signal as sig
import scipy.stats as stat
import matplotlib.pyplot as plt
import seaborn as sns
import os
import h5py
import datetime
import pandas as pd
from pandas import DataFrame,Series,read_table
###Output
_____no_output_____
###Markdown
General info
###Code
savePlots = False # whether or not to save plots
saveData = False # whether or not to save csv files
saveAsPath = './Fig 02/'
if not os.path.exists(saveAsPath):
os.mkdir(saveAsPath)
saveAsName = 'Fig2d_'
#path = '/Users/svcanavan/Dropbox/Coding in progress/00_BudgieSleep/Data_copies/'
birdPaths = ['../data_copies/01_PreprocessedData/01_BudgieFemale_green1/00_Baseline_night/',
'../data_copies/01_PreprocessedData/02_BudgieMale_yellow1/00_Baseline_night/',
'../data_copies/01_PreprocessedData/03_BudgieFemale_white1/00_Baseline_night/',
'../data_copies/01_PreprocessedData/04_BudgieMale_yellow2/00_Baseline_night/',
'../data_copies/01_PreprocessedData/05_BudgieFemale_green2/00_Baseline_night/']
arfFilePaths = ['EEG 2 scored/',
'EEG 3 scored/',
'EEG 3 scored/',
'EEG 4 scored/',
'EEG 4 scored/']
### load BEST EEG channels - as determined during manual scoring ####
channelsToLoadEEG_best = [['6 LEEGm-LEEGp', '5 LEEGf-LEEGp'], #, '9 REEGp-LEEGp'], # extra channel to represent R hemisphere
['5 LEEGf-LEEGm', '4 LEEGf-Fgr'], #, '9 REEGf-REEGm'], # extra channel to represent R hemisphere
['9REEGm-REEGp', '4LEEGf-LEEGp'],
['6LEEGm-LEEGf', '9REEGf-REEGp'],
['7REEGf-REEGp', '4LEEGf-LEEGp']]
### load ALL of EEG channels ####
channelsToLoadEEG = [['4 LEEGf-Fgr', '5 LEEGf-LEEGp', '6 LEEGm-LEEGp', '7 LEEGp-Fgr', '8 REEGp-Fgr','9 REEGp-LEEGp'],
['4 LEEGf-Fgr','5 LEEGf-LEEGm', '6 LEEGm-LEEGp', '7 REEGf-Fgr', '8 REEGm-Fgr', '9 REEGf-REEGm'],
['4LEEGf-LEEGp', '5LEEGf-LEEGm', '6LEEGm-LEEGp', '7REEGf-REEGp', '8REEGf-REEGm', '9REEGm-REEGp'],
['4LEEGf-LEEGp', '5LEEGm-LEEGp', '6LEEGm-LEEGf', '7REEGf-Fgr', '8REEGf-REEGm','9REEGf-REEGp',],
['4LEEGf-LEEGp', '5LEEGf-LEEGm', '6LEEGm-LEEGp', '7REEGf-REEGp', '8REEGf-REEGm', '9REEGm-REEGp']]
channelsToLoadEOG = [['1 LEOG-Fgr', '2 REOG-Fgr'],
['2 LEOG-Fgr', '3 REOG-Fgr'],
['2LEOG-Fgr', '3REOG-Fgr'],
['2LEOG-Fgr', '3REOG-Fgr'],
['2LEOG-Fgr', '3REOG-Fgr']]
birds_LL = [1,2,3]
nBirds_LL = len(birds_LL)
birdPaths_LL = ['../data_copies/01_PreprocessedData/02_BudgieMale_yellow1/01_Constant_light/',
'../data_copies/01_PreprocessedData/03_BudgieFemale_white1/01_Constant_light/',
'../data_copies/01_PreprocessedData/04_BudgieMale_yellow2/01_Constant_light/',]
arfFilePaths_LL = ['EEG 2 preprocessed/',
'EEG 2 preprocessed/',
'EEG 2 preprocessed/']
lightsOffSec = np.array([7947, 9675, 9861 + 8*3600, 9873, 13467]) # lights off times in seconds from beginning of file
lightsOnSec = np.array([46449, 48168, 48375+ 8*3600, 48381, 52005]) # Bird 3 gets 8 hours added b/c file starts at 8:00 instead of 16:00
epochLength = 3
sr = 200
scalingFactor = (2**15)*0.195 # scaling/conversion factor from amplitude to uV (when recording arf from jrecord)
stages = ['w','d','u','i','s','r'] # wake, drowsy, unihem sleep, intermediate sleep, SWS, REM
stagesSleep = ['u','i','s','r']
stagesVideo = ['m','q','d','s','u'] # moving wake, quiet wake, drowsy, sleep, unclear
## Path to scores formatted as CSVs
formatted_scores_path = '../formatted_scores/'
## Path to detect SW ands EM events: use folder w/ EMs and EM artifacts detected during non-sleep
events_path = '../data_copies/SWs_EMs_and_EMartifacts/'
colors = sns.color_palette(np.array([[234,103,99],
[218,142,60],
[174,174,62],
[97,188,101],
[140,133,232],
[225,113,190]])
/255)
sns.palplot(colors)
# colorpalette from iWantHue
###Output
_____no_output_____
###Markdown
Plot-specific info
###Code
sns.set_context("notebook", font_scale=1.5)
sns.set_style("white")
axis_label_fontsize = 24
# Markers for legends of EEG scoring colors
legendMarkersEEG = []
for stage in range(len(stages)):
legendMarkersEEG.append(plt.Line2D([0],[0], color=colors[stage], marker='o', linestyle='', alpha=0.7))
###Output
_____no_output_____
###Markdown
Calculate general variables
###Code
lightsOffEp = lightsOffSec / epochLength
lightsOnEp = lightsOnSec / epochLength
nBirds = len(birdPaths)
epochLengthPts = epochLength*sr
nStages = len(stagesSleep)
###Output
_____no_output_____
###Markdown
Load formatted scores
###Code
AllScores = {}
for b in range(nBirds):
bird_name = 'Bird ' + str(b+1)
file = formatted_scores_path + 'All_scores_' + bird_name + '.csv'
data = pd.read_csv(file, index_col=0)
AllScores[bird_name] = data
###Output
_____no_output_____
###Markdown
Calculate lights off in Zeitgeber time (s and hrs)
Lights on is 0
###Code
lightsOffDatetime = np.array([], dtype='datetime64')
lightsOnDatetime = np.array([], dtype='datetime64')
for b_num in range(nBirds):
b_name = 'Bird ' + str(b_num+1)
Scores = AllScores[b_name]
startDatetime = np.datetime64(Scores.index.values[0])
# Calc lights off & on using datetime formats
lightsOffTimedelta = lightsOffSec[b_num].astype('timedelta64[s]')
lightsOffDatetime = np.append(lightsOffDatetime, startDatetime + lightsOffTimedelta)
lightsOnTimedelta = lightsOnSec[b_num].astype('timedelta64[s]')
lightsOnDatetime = np.append(lightsOnDatetime, startDatetime + lightsOnTimedelta)
lightsOffZeit_s = lightsOffSec - lightsOnSec
lightsOffZeit_hr = lightsOffZeit_s / 3600
###Output
_____no_output_____
###Markdown
Make table of % of each stage per bin
###Code
binSize_min = 60
binSize_s = int(binSize_min*60)
binSize_ep = int(binSize_s/epochLength)
stageProportions_whole_night_all = {}
for b in range(nBirds):
nBins = int(np.ceil(np.min(lightsOnSec - lightsOffSec)/(60*binSize_min)))
stageProportions = DataFrame([], columns=range(len(stages)))
b_name = 'Bird ' + str(b+1)
Scores = AllScores[b_name]
for bn in range(nBins):
start = str(lightsOffDatetime[b] + bn *np.timedelta64(binSize_s,'s')).replace('T', ' ')
end = str(lightsOffDatetime[b] + (bn+1)*np.timedelta64(binSize_s,'s')).replace('T', ' ')
bn_scores = Scores[str(start):str(end)]
bn_stage_frequencies = bn_scores['Label (#)'].value_counts(normalize=True,sort=False)
stageProportions = stageProportions.append(bn_stage_frequencies, ignore_index=True)
# Replace NaNs with 0
stageProportions = stageProportions.fillna(0)
# Calc TST and sleep stages as % TST
stageProportions['TST'] = stageProportions[[2,3,4,5]].sum(axis=1)
stageProportions['U (% TST)'] = stageProportions[2]/stageProportions['TST']
stageProportions['I (% TST)'] = stageProportions[3]/stageProportions['TST']
stageProportions['S (% TST)'] = stageProportions[4]/stageProportions['TST']
stageProportions['R (% TST)'] = stageProportions[5]/stageProportions['TST']
# Add to dictionary
stageProportions_whole_night_all[b] = stageProportions
###Output
_____no_output_____
###Markdown
Plot by individual bird
###Code
plt.figure(figsize=(10,5*nBirds))
for b in range(nBirds):
stageProportions = stageProportions_whole_night_all[b]
# Plot
with sns.color_palette(colors):
plt.subplot(nBirds,1,b+1)
plt.plot(stageProportions[[0,1,2,3,4,5]], 'o-')
# Labels etc
plt.ylabel('Bird ' + str(b+1))
plt.xlim((-0.5, len(stageProportions)+.5))
# Legend just on first graph
if b == 0:
plt.legend(legendMarkersEEG, stages, loc=1)
# X-axis labels just on last graph
if b < nBirds-1:
plt.xticks([])
else:
plt.xlabel('Hour of night')
#if savePlots:
# plt.savefig(saveAsPath + "Fig2_All_birds_by_hour_of_night.pdf")
###Output
_____no_output_____
###Markdown
By hour of night: sleep only
###Code
plt.figure(figsize=(10,5*nBirds))
for b in range(nBirds):
stageProportions = stageProportions_whole_night_all[b]
# Plot
with sns.color_palette(colors[2:6]):
plt.subplot(nBirds,1,b+1)
plt.plot(stageProportions[['U (% TST)', 'I (% TST)', 'S (% TST)', 'R (% TST)']], 'o-')
# Labels etc
plt.ylabel('Bird ' + str(b+1))
plt.xlim((-0.5, len(stageProportions)+.5))
# Legend just on first graph
if b == 0:
plt.legend(legendMarkersEEG[2:6], stages[2:6], loc=1)
# X-axis labels just on last graph
if b < nBirds-1:
plt.xticks([])
else:
plt.xlabel('Hour of night')
#if savePlots:
# plt.savefig(saveAsPath + "Fig2_All_birds_by_percent_of_TST.pdf")
###Output
_____no_output_____
###Markdown
By hour of sleep
###Code
stageProportions_sleep_only = {}
for b in range(nBirds):
b_name = 'Bird ' + str(b+1)
Scores = AllScores[b_name]
Scores_Nighttime = Scores[int(lightsOffEp[b]):int(lightsOnEp[b])]
Scores_Nighttime_Sleep = Scores_Nighttime[Scores_Nighttime['Label (#)']>=2]
# Re-index to consecutive numbers starting at 0
Scores_Nighttime_Sleep = Scores_Nighttime_Sleep.reset_index(drop=True)
nBins_sleep = int(np.ceil(len(Scores_Nighttime_Sleep)/(binSize_ep)))
stageProportions = DataFrame([], columns=np.arange(2,6))
for bn in range(nBins_sleep):
start_ep = int(bn*binSize_ep)
end_ep = int((bn+1)*binSize_ep)
bn_scores = Scores_Nighttime_Sleep[start_ep:end_ep]
bn_stage_frequencies = bn_scores['Label (#)'].value_counts(normalize=True,sort=False)
stageProportions = stageProportions.append(bn_stage_frequencies, ignore_index=True)
# Replace NaNs with 0
stageProportions = stageProportions.fillna(0)
# Add to dictionary
stageProportions_sleep_only[b] = stageProportions
###Output
_____no_output_____
###Markdown
Plot
###Code
plt.figure(figsize=(10,5*nBirds))
for b in range(nBirds):
stageProportions = stageProportions_sleep_only[b]
# Plot
with sns.color_palette(colors[2:6]):
plt.subplot(nBirds,1,b+1)
plt.plot(stageProportions[[2,3,4,5]], 'o-')
# Labels etc
plt.ylabel('Bird ' + str(b+1))
plt.xlim((-0.5, len(stageProportions)))
# Legend just on first graph
if b == 0:
plt.legend(legendMarkersEEG[2:6], stages[2:6], loc=1)
# X-axis labels just on last graph
if b == nBirds-1:
plt.xlabel('Hour of nighttime sleep')
#if savePlots:
# plt.savefig(saveAsPath + "Fig2_All_birds_by_hour_of_sleep.pdf")
###Output
_____no_output_____
###Markdown
Plot summary figures Organize proportions by stage (instead of by bird)
###Code
stageProportions_by_stage = {}
stage_labels_by_hour = stageProportions_whole_night_all[0].columns.values
for st in stage_labels_by_hour:
stageProportions_stage = DataFrame([])
for b in range(nBirds):
stageProportions_bird = stageProportions_whole_night_all[b]
stageProportions_stage['Bird ' + str(b+1)] = stageProportions_bird[st]
stageProportions_by_stage[st] = stageProportions_stage
stage_labels_by_sleep = stageProportions_sleep_only[0].columns.values
for st in stage_labels_by_sleep:
stageProportions_stage = DataFrame([])
for b in range(nBirds):
stageProportions_bird = stageProportions_sleep_only[b]
stageProportions_stage['Bird ' + str(b+1)] = stageProportions_bird[st]
stageProportions_by_stage[str(st) + ' by hr of sleep'] = stageProportions_stage
###Output
_____no_output_____
###Markdown
Find means and SDs over time
###Code
Means = DataFrame([])
SDs = DataFrame([])
SEMs = DataFrame([])
for st in stage_labels_by_hour:
tmp_mean = stageProportions_by_stage[st].mean(axis=1)
tmp_sd = stageProportions_by_stage[st].std(axis=1)
nObservations = np.sum((np.isnan(stageProportions_by_stage[st]))==0, axis=1)
tmp_sem = tmp_sd/np.sqrt(nObservations)
Means[st] = tmp_mean
SDs[st] = tmp_sd
SEMs[st] = tmp_sem
for st in stage_labels_by_sleep:
tmp_mean = stageProportions_by_stage[str(st) + ' by hr of sleep'].mean(axis=1,skipna=True)
tmp_sd = stageProportions_by_stage[str(st) + ' by hr of sleep'].std(axis=1,skipna=True)
nObservations = np.sum((np.isnan(stageProportions_by_stage[str(st) + ' by hr of sleep']))==0, axis=1)
tmp_sem = tmp_sd/np.sqrt(nObservations)
Means[str(st) + ' by hr of sleep'] = tmp_mean
SDs[str(st) + ' by hr of sleep'] = tmp_sd
SEMs[str(st) + ' by hr of sleep'] = tmp_sem
###Output
_____no_output_____
###Markdown
Plot by hour of night: all stages Just the legend
###Code
stage_names = ['Wake','Drowsy','Unihem','IS','SWS','REM']
# Markers for legends of EEG scoring colors
legendMarkersEEG = []
for stage in range(len(stages)):
legendMarkersEEG.append(plt.Line2D([0],[0], color=colors[stage], marker='o',
markersize=10, lw=5, alpha=0.7))
axis_color = [.8,.8,.8]
with plt.rc_context({'axes.edgecolor': axis_color}): # set color of plot outline
plt.figure(figsize=(3,4))
leg = plt.legend(legendMarkersEEG, stage_names, loc=5)
for text,st in zip(leg.get_texts(),range(len(stages))):
plt.setp(text, color = colors[st], fontsize=20, fontweight='bold')
plt.xticks([])
plt.yticks([])
if savePlots:
plt.savefig(saveAsPath + "Fig02_Legend.pdf")
(100*Means[[0,1,2,3,4,5]]).plot(yerr=100*SEMs,
color=colors, figsize=(7,5),
marker='o', markersize=10,
linewidth=5, alpha=0.7,
capsize=3, capthick=3,
elinewidth=3, legend='')
plt.xlabel('Hour of night',fontsize=axis_label_fontsize )
plt.ylabel('% recording time',fontsize=axis_label_fontsize )
plt.xlim((-.5, nBins - .5))
plt.xticks(np.arange(0,11,2), np.arange(1,12,2))
plt.ylim((-3,70))
sns.despine()
if savePlots:
plt.savefig(saveAsPath + "Fig2d_Summary_by_hour_of_night_allstages.pdf")
###Output
_____no_output_____
###Markdown
FIGURE 2D: Plot by hour of night: sleep only (%TST)
###Code
(100*Means[['U (% TST)', 'I (% TST)', 'S (% TST)', 'R (% TST)',]]).plot(yerr=100*SEMs,
color=colors[2:6], figsize=(7,5),
marker='o', markersize=10,
linewidth=5, alpha=0.7,
capsize=3, capthick=3,
elinewidth=3, legend='')
#plt.legend(legendMarkersEEG[2:6], stage_names[2:6], loc=1)
plt.xlabel('Hour of night', fontsize=axis_label_fontsize)
plt.ylabel('% of total sleep time', fontsize=axis_label_fontsize)
plt.xlim((-.5, nBins - .5))
plt.xticks(np.arange(0,11,2), np.arange(1,12,2))
plt.ylim((-3,80))
sns.despine()
if savePlots:
plt.savefig(saveAsPath + "Fig02_Summary_by_percent_of_TST.pdf")
###Output
_____no_output_____
###Markdown
Plot by hour of sleep
###Code
(100*Means[['3 by hr of sleep', '4 by hr of sleep', '5 by hr of sleep']]).plot(
yerr=100*SEMs,
color=colors[3:6], figsize=(7,5),
marker='o', markersize=10,
linewidth=5, alpha=0.7,
capsize=3, capthick=3,
elinewidth=3, legend='')
#plt.legend(legendMarkersEEG[2:6], stages[2:6], loc=1)
plt.xlabel('Hour of sleep', fontsize=axis_label_fontsize)
plt.ylabel('% of hour', fontsize=axis_label_fontsize)
plt.xlim((-.5, nBins - .5))
plt.xticks(np.arange(0,9,2), np.arange(1,10,2))
plt.ylim((0,80))
sns.despine()
if savePlots:
plt.savefig(saveAsPath + "Fig02_Summary_by_hour_of_sleep.pdf")
###Output
/Users/svcanavan/anaconda3/lib/python3.7/site-packages/numpy/core/_asarray.py:85: UserWarning: Warning: converting a masked element to nan.
return array(a, dtype, copy=False, order=order)
###Markdown
Save means and SDs to csv
###Code
if saveData:
Means.to_csv(saveAsPath + 'Fig2d_hour_of_night_Means.csv')
SDs.to_csv(saveAsPath + 'Fig2d_hour_of_night_SDs.csv')
###Output
_____no_output_____
###Markdown
FIGURE 2D STATS: Correlation/regression testing All hours of night, % TST
###Code
test=100*Means['I (% TST)'].dropna()
slope, intercept, r_value, p_value, std_err = stat.linregress(test.index.values, test.values)
print('slope =', slope, ', r2 =', r_value**2, ', p =', p_value)
test=100*Means['S (% TST)'].dropna()
slope, intercept, r_value, p_value, std_err = stat.linregress(test.index.values, test.values)
print('slope =', slope, ', r2 =', r_value**2, ', p =', p_value)
test=100*Means['R (% TST)'].dropna()
slope, intercept, r_value, p_value, std_err = stat.linregress(test.index.values, test.values)
print('slope =', slope, ', r2 =', r_value**2, ', p =', p_value)
###Output
slope = 2.2650817518617794 , r2 = 0.5885500901126326 , p = 0.005856901849077558
###Markdown
By hour of sleep, % of hour TST
###Code
test=100*Means['3 by hr of sleep'].dropna() # IS
slope, intercept, r_value, p_value, std_err = stat.linregress(test.index.values, test.values)
print('slope =', slope, ', r2 =', r_value**2, ', p =', p_value)
test=100*Means['4 by hr of sleep'].dropna() # SWS
slope, intercept, r_value, p_value, std_err = stat.linregress(test.index.values, test.values)
print('slope =', slope, ', r2 =', r_value**2, ', p =', p_value)
test=100*Means['5 by hr of sleep'].dropna() # REM
slope, intercept, r_value, p_value, std_err = stat.linregress(test.index.values, test.values)
print('slope =', slope, ', r2 =', r_value**2, ', p =', p_value)
###Output
slope = 2.7141319604113243 , r2 = 0.6731497859805853 , p = 0.0067418643565080455
|
5_Cleaned_Tweet_Keras_CNN.ipynb | ###Markdown
Model Building Fitting a 1-D CNN with an Embedding layer
###Code
vocab_size = 300000
tokenizer = Tokenizer(num_words = vocab_size)
tokenizer.fit_on_texts(X_train)
list_tokenized_train = tokenizer.texts_to_sequences(X_train)
max_review_len = 40
X_train = pad_sequences(list_tokenized_train, maxlen = max_review_len, padding = 'post')
print(X_train.shape, y_train.shape)
X_train[:2]
y_train[:2]
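# Illustrative mini-example of the tokenize-and-pad pipeline used above, on a hypothetical
# two-tweet corpus (the word ids shown in the comments are indicative, not guaranteed):
demo_tok = Tokenizer(num_words = 50)
demo_tok.fit_on_texts(["good day", "bad day today"])
demo_seqs = demo_tok.texts_to_sequences(["good day", "bad day today"])
print(demo_seqs)                                               # e.g. [[2, 1], [3, 1, 4]]
print(pad_sequences(demo_seqs, maxlen = 5, padding = 'post'))  # right-padded to a fixed length of 5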
model = Sequential()
model.add(Embedding(vocab_size, 32, input_length = 40))
model.add(Conv1D(filters=128, kernel_size=5, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.2))
model.add(Conv1D(filters=64, kernel_size=6, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.2))
model.add(Conv1D(filters=32, kernel_size=7, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.2))
model.add(Conv1D(filters=32, kernel_size=8, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(1,activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
history = model.fit(X_train, y_train, batch_size = 32, verbose = 1, validation_split = 0.2, epochs = 2 )
list_tokenized_test = tokenizer.texts_to_sequences(X_test)
X_test = pad_sequences(list_tokenized_test, maxlen = max_review_len, padding = 'post')  # use the same 'post' padding as the training sequences
# Prediction
prediction = model.predict(X_test)
y_pred = (prediction > 0.5)
y_pred[:2]
# Evaluation
print('Model Accuracy : ', accuracy_score(y_test, y_pred))
print("F1 Score : ", f1_score(y_test, y_pred))
print("Confusion_Matrix" , '\n', confusion_matrix(y_test, y_pred))
###Output
Model Accuracy : 0.4461833333333333
F1 Score : 0.6079867869993512
Confusion_Matrix
[[ 1003 29146]
[ 4083 25768]]
|
Applying AI to 2D Medical Imaging Data/Models for Classification of 2D Medical Images/Exercise - Fine-tuning CNNs for Classification/Exercise.ipynb | ###Markdown
Setting up the image augmentation from last Lesson: Note that this section of the code has been pre-written for you and does not need to be changed, just run. If you would like to change the ImageDataGenerator parameters, feel free.
###Code
## This is the image size that VGG16 takes as input
IMG_SIZE = (224, 224)
train_idg = ImageDataGenerator(rescale=1. / 255.0,
horizontal_flip = True,
vertical_flip = False,
height_shift_range= 0.1,
width_shift_range=0.1,
rotation_range=20,
shear_range = 0.1,
zoom_range=0.1)
train_gen = train_idg.flow_from_dataframe(dataframe=train_df,
directory=None,
x_col = 'img_path',
y_col = 'class',
class_mode = 'binary',
target_size = IMG_SIZE,
batch_size = 9
)
# Note that the validation data should not be augmented! We only want to do some basic intensity rescaling here
val_idg = ImageDataGenerator(rescale=1. / 255.0
)
val_gen = val_idg.flow_from_dataframe(dataframe=valid_df,
directory=None,
x_col = 'img_path',
y_col = 'class',
class_mode = 'binary',
target_size = IMG_SIZE,
batch_size = 6) ## We've only been provided with 6 validation images
## Pull a single large batch of random validation data for testing after each epoch
testX, testY = val_gen.next()
###Output
_____no_output_____
###Markdown
Now we'll load in VGG16 with pre-trained ImageNet weights:
###Code
model = VGG16(include_top=True, weights='imagenet')
model.summary()
transfer_layer = model.get_layer('block5_pool')
vgg_model = Model(inputs=model.input,
outputs=transfer_layer.output)
for indx, layer in enumerate(vgg_model.layers):
print(indx, layer.name, layer.output_shape)
## Now, choose which layers of VGG16 we actually want to fine-tune
## Here, we'll freeze all but the last convolutional layer
## Add some code here to freeze all but the last convolutional layer:
##### Your code here ######
for layer in vgg_model.layers[0:17]:
layer.trainable = False
## Check to make sure you froze the right ones:
for layer in vgg_model.layers:
print(layer.name, layer.trainable)
###Output
input_2 False
block1_conv1 False
block1_conv2 False
block1_pool False
block2_conv1 False
block2_conv2 False
block2_pool False
block3_conv1 False
block3_conv2 False
block3_conv3 False
block3_pool False
block4_conv1 False
block4_conv2 False
block4_conv3 False
block4_pool False
block5_conv1 False
block5_conv2 False
block5_conv3 True
block5_pool True
###Markdown
Build a simple sequential model using only the VGG16 architectureNote the code in the cell below has been pre-written for you, you only need to run it
###Code
## Build your model using the mostly-frozen VGG16 architecture:
new_model = Sequential()
# Add the convolutional part of the VGG16 model from above.
new_model.add(vgg_model)
# Flatten the output of the VGG16 model because it is from a
# convolutional layer.
new_model.add(Flatten())
# Add a dense (aka. fully-connected) layer.
# This is for combining features that the VGG16 model has
# recognized in the image.
new_model.add(Dense(1, activation='relu'))
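# Note: for a single-unit binary output trained with binary_crossentropy, a sigmoid activation
# is the usual choice; a relu output can get stuck at 0, which matches the stalled val_loss in
# the training log below (the same applies to the final model further down).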
## Set our optimizer, loss function, and learning rate (you can change the learning rate here if you'd like)
## but otherwise this cell can be run as is
optimizer = Adam(lr=1e-4)
loss = 'binary_crossentropy'
metrics = ['binary_accuracy']
new_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
## Run a few epochs to see how it does:
new_model.fit_generator(train_gen,
validation_data = (testX, testY),
epochs = 5)
###Output
Epoch 1/5
3/3 [==============================] - 16s 5s/step - loss: 3.0454 - binary_accuracy: 0.4000 - val_loss: 5.3106 - val_binary_accuracy: 0.5000
Epoch 2/5
3/3 [==============================] - 15s 5s/step - loss: 4.3945 - binary_accuracy: 0.4500 - val_loss: 7.6246 - val_binary_accuracy: 0.1667
Epoch 3/5
3/3 [==============================] - 15s 5s/step - loss: 5.1895 - binary_accuracy: 0.1500 - val_loss: 7.6246 - val_binary_accuracy: 0.0000e+00
Epoch 4/5
3/3 [==============================] - 15s 5s/step - loss: 7.2017 - binary_accuracy: 0.0000e+00 - val_loss: 7.6246 - val_binary_accuracy: 0.0000e+00
Epoch 5/5
3/3 [==============================] - 15s 5s/step - loss: 7.6666 - binary_accuracy: 0.0000e+00 - val_loss: 7.6246 - val_binary_accuracy: 0.0000e+00
###Markdown
Let's try another experiment where we add a few more dense layers:
###Code
new_model = Sequential()
# Add the convolutional part of the VGG16 model from above.
new_model.add(vgg_model)
# Flatten the output of the VGG16 model because it is from a
# convolutional layer.
new_model.add(Flatten())
# Add a couple of dense (aka. fully-connected) layers.
# This is for combining features that the VGG16 model has
# recognized in the image.
##### Your code here ######
new_model.add(Dense(1024, activation='relu'))
new_model.add(Dense(512, activation='relu'))
new_model.add(Dense(64, activation='relu'))
# Final output layer:
new_model.add(Dense(1, activation='sigmoid'))
new_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
## Run a few epochs to see how it does:
new_model.fit_generator(train_gen,
validation_data = (testX, testY),
epochs = 5)
###Output
Epoch 1/5
3/3 [==============================] - 19s 6s/step - loss: 0.9345 - binary_accuracy: 0.5000 - val_loss: 0.7996 - val_binary_accuracy: 0.5000
Epoch 2/5
3/3 [==============================] - 18s 6s/step - loss: 0.6509 - binary_accuracy: 0.5000 - val_loss: 0.6392 - val_binary_accuracy: 0.6667
Epoch 3/5
3/3 [==============================] - 18s 6s/step - loss: 0.4577 - binary_accuracy: 0.8500 - val_loss: 0.6269 - val_binary_accuracy: 0.6667
Epoch 4/5
3/3 [==============================] - 18s 6s/step - loss: 0.3862 - binary_accuracy: 0.9000 - val_loss: 0.7931 - val_binary_accuracy: 0.6667
Epoch 5/5
3/3 [==============================] - 18s 6s/step - loss: 0.3203 - binary_accuracy: 0.9000 - val_loss: 0.7870 - val_binary_accuracy: 0.6667
###Markdown
Now let's add dropout and another fully connected layer:
###Code
new_model = Sequential()
# Add the convolutional part of the VGG16 model from above.
new_model.add(vgg_model)
# Flatten the output of the VGG16 model because it is from a
# convolutional layer.
new_model.add(Flatten())
# Add several fully-connected layers with dropout
##### Your code here ######
new_model.add(Dropout(0.25))
new_model.add(Dense(1024, activation='relu'))
new_model.add(Dropout(0.25))
new_model.add(Dense(512, activation='relu'))
new_model.add(Dropout(0.25))
new_model.add(Dense(128, activation='relu'))
new_model.add(Dropout(0.25))
new_model.add(Dense(32, activation='relu'))
# Final output layer
new_model.add(Dense(1, activation='relu'))
new_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
## Run a few epochs to see how it does:
new_model.fit_generator(train_gen,
validation_data = (testX, testY),
epochs = 5)
###Output
Epoch 1/5
3/3 [==============================] - 19s 6s/step - loss: 2.4093 - binary_accuracy: 0.5500 - val_loss: 0.7217 - val_binary_accuracy: 0.5000
Epoch 2/5
3/3 [==============================] - 18s 6s/step - loss: 2.8452 - binary_accuracy: 0.4500 - val_loss: 0.6085 - val_binary_accuracy: 0.5000
Epoch 3/5
3/3 [==============================] - 18s 6s/step - loss: 3.7402 - binary_accuracy: 0.2000 - val_loss: 3.1789 - val_binary_accuracy: 0.5000
Epoch 4/5
3/3 [==============================] - 18s 6s/step - loss: 5.3982 - binary_accuracy: 0.3500 - val_loss: 2.9891 - val_binary_accuracy: 0.5000
Epoch 5/5
3/3 [==============================] - 18s 6s/step - loss: 6.9496 - binary_accuracy: 0.4000 - val_loss: 2.9131 - val_binary_accuracy: 0.6667
|
exercice-04-NoSQL-Data-Models/Lesson 3 Exercise 2 Primary Key.ipynb | ###Markdown
Lesson 3 Exercise 2: Focus on Primary Key Walk through the basics of creating a table with a good Primary Key in Apache Cassandra, inserting rows of data, and doing a simple CQL query to validate the information. Replace with your own answers. We will use a python wrapper/ python driver called cassandra to run the Apache Cassandra queries. This library should be preinstalled but in the future to install this library you can run this command in a notebook to install locally: ! pip install cassandra-driver More documentation can be found here: https://datastax.github.io/python-driver/ Import Apache Cassandra python package
###Code
import cassandra
###Output
_____no_output_____
###Markdown
Create a connection to the database
###Code
from cassandra.cluster import Cluster
try:
cluster = Cluster(['127.0.0.1']) #If you have a locally installed Apache Cassandra instance
session = cluster.connect()
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Create a keyspace to work in
###Code
try:
session.execute("""
CREATE KEYSPACE IF NOT EXISTS udacity
WITH REPLICATION =
{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"""
)
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Connect to the Keyspace. Compare this to how we had to create a new session in PostgreSQL.
###Code
try:
session.set_keyspace('udacity')
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Imagine you need to create a new Music Library of albums Here is the information asked of the data: 1. Give every album in the music library that was created by a given artist`select * from music_library WHERE artist_name="The Beatles"` Here is the collection of data Practice by making the PRIMARY KEY only 1 Column (not 2 or more)
###Code
query = "CREATE TABLE IF NOT EXISTS ##### "
query = query + "(##### PRIMARY KEY (#####))"
try:
session.execute(query)
except Exception as e:
print(e)
###Output
_____no_output_____
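###Markdown
One possible way to fill in the blanks above (the table and column names below are just an example, not the only valid answer): with a single-column PRIMARY KEY, rows that share that key value overwrite each other, which is what the validation step further down demonstrates.
###Code
# Hypothetical completion of the exercise cell above
query = "CREATE TABLE IF NOT EXISTS music_library "
query = query + "(year int, artist_name text, album_name text, city text, PRIMARY KEY (artist_name))"
try:
    session.execute(query)
except Exception as e:
    print(e)
###Output
_____no_output_____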
###Markdown
Let's insert the data into the table
###Code
query = "INSERT INTO ##### (year, artist_name, album_name, city)"
query = query + " VALUES (%s, %s, %s, %s)"
try:
session.execute(query, (1970, "The Beatles", "Let it Be", "Liverpool"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Beatles", "Rubber Soul", "Oxford"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Who", "My Generation", "London"))
except Exception as e:
print(e)
try:
session.execute(query, (1966, "The Monkees", "The Monkees", "Los Angeles"))
except Exception as e:
print(e)
try:
session.execute(query, (1970, "The Carpenters", "Close To You", "San Diego"))
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Validate the Data Model -- Does it give you two rows?
###Code
query = "select * from ##### WHERE #####"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name, row.city)
###Output
_____no_output_____
###Markdown
If you used just one column as your PRIMARY KEY, your output should be: 1965 The Beatles Rubber Soul Oxford. That didn't work out as planned! Why is that? Did you create a unique primary key? Try again - Create a new table with a composite key this time
###Code
query = "CREATE TABLE IF NOT EXISTS ##### "
query = query + "(#####)"
try:
session.execute(query)
except Exception as e:
print(e)
## You can opt to change the sequence of columns to match your composite key. \
## Make sure to match the values in the INSERT statement
query = "INSERT INTO ##### (year, artist_name, album_name, city)"
query = query + " VALUES (%s, %s, %s, %s)"
try:
session.execute(query, (1970, "The Beatles", "Let it Be", "Liverpool"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Beatles", "Rubber Soul", "Oxford"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Who", "My Generation", "London"))
except Exception as e:
print(e)
try:
session.execute(query, (1966, "The Monkees", "The Monkees", "Los Angeles"))
except Exception as e:
print(e)
try:
session.execute(query, (1970, "The Carpenters", "Close To You", "San Diego"))
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Validate the Data Model -- Did it work?
###Code
query = "#####"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name, row.city)
###Output
_____no_output_____
###Markdown
Your output should be: 1970 The Beatles Let it Be Liverpool and 1965 The Beatles Rubber Soul Oxford. Drop the tables
###Code
query = "#####"
try:
rows = session.execute(query)
except Exception as e:
print(e)
query = "#####"
try:
rows = session.execute(query)
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Close the session and cluster connection
###Code
session.shutdown()
cluster.shutdown()
###Output
_____no_output_____ |
ch03/Snippets_Importing_libraries.ipynb | ###Markdown
Importing a library that is not in Colaboratory. To import a library that's not in Colaboratory by default, you can use `!pip install` or `!apt-get install`.
###Code
!pip install matplotlib-venn
!apt-get -qq install -y libfluidsynth1
###Output
_____no_output_____
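###Markdown
As a quick check that the installation worked, the package can be imported and used directly (a minimal sketch):
###Code
from matplotlib_venn import venn2
venn2(subsets=(3, 2, 1))  # draws a two-set Venn diagram with the given region sizes
###Output
_____no_output_____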
###Markdown
Install 7zip reader [libarchive](https://pypi.python.org/pypi/libarchive)
###Code
# https://pypi.python.org/pypi/libarchive
!apt-get -qq install -y libarchive-dev && pip install -U libarchive
import libarchive
###Output
_____no_output_____
###Markdown
Install GraphViz & [PyDot](https://pypi.python.org/pypi/pydot)
###Code
# https://pypi.python.org/pypi/pydot
!apt-get -qq install -y graphviz && pip install pydot
import pydot
###Output
_____no_output_____
###Markdown
Install [cartopy](http://scitools.org.uk/cartopy/docs/latest/)
###Code
!pip install cartopy
import cartopy
###Output
_____no_output_____ |
colab/chap17.ipynb | ###Markdown
Chapter 17 *Modeling and Simulation in Python*Copyright 2021 Allen DowneyLicense: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# check if the libraries we need are installed
try:
import pint
except ImportError:
!pip install pint
import pint
try:
from modsim import *
except ImportError:
!pip install modsimpy
from modsim import *
###Output
_____no_output_____
###Markdown
Data We have data from Pacini and Bergman (1986), "MINMOD: a computer program to calculate insulin sensitivity and pancreatic responsivity from the frequently sampled intravenous glucose tolerance test", *Computer Methods and Programs in Biomedicine*, 23: 113-122.
###Code
import os
filename = 'glucose_insulin.csv'
if not os.path.exists(filename):
!wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/data/glucose_insulin.csv
data = pd.read_csv(filename, index_col='time')
###Output
_____no_output_____
###Markdown
Here's what the glucose time series looks like.
###Code
plot(data.glucose, 'bo', label='glucose')
decorate(xlabel='Time (min)',
ylabel='Concentration (mg/dL)')
###Output
_____no_output_____
###Markdown
And the insulin time series.
###Code
plot(data.insulin, 'go', label='insulin')
decorate(xlabel='Time (min)',
ylabel='Concentration ($\mu$U/mL)')
###Output
_____no_output_____
###Markdown
For the book, I put them in a single figure, using `subplot`
###Code
subplot(2, 1, 1)
plot(data.glucose, 'bo', label='glucose')
decorate(ylabel='Concentration (mg/dL)')
subplot(2, 1, 2)
plot(data.insulin, 'go', label='insulin')
decorate(xlabel='Time (min)',
ylabel='Concentration ($\mu$U/mL)')
###Output
_____no_output_____
###Markdown
Interpolation We have measurements of insulin concentration at discrete points in time, but we need to estimate it at intervening points. We'll use `interpolate`, which takes a `Series` and returns a function.
###Code
I = interpolate(data.insulin)
###Output
_____no_output_____
###Markdown
We can use the result, `I`, to estimate the insulin level at any point in time.
###Code
I(7)
###Output
_____no_output_____
###Markdown
`I` can also take an array of time and return an array of estimates:
###Code
t_0 = get_first_label(data)
t_end = get_last_label(data)
ts = linrange(t_0, t_end, endpoint=True)
I(ts)
type(ts)
###Output
_____no_output_____
###Markdown
Here's what the interpolated values look like.
###Code
plot(data.insulin, 'go', label='insulin data')
plot(ts, I(ts), color='green', label='interpolated')
decorate(xlabel='Time (min)',
ylabel='Concentration ($\mu$U/mL)')
###Output
_____no_output_____
###Markdown
**Exercise:** [Read the documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html) of `scipy.interpolate.interp1d`. Pass a keyword argument to `interpolate` to specify one of the other kinds of interpolation, and run the code again to see what it looks like.
###Code
# Solution goes here
###Output
_____no_output_____
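###Markdown
A possible solution sketch (this assumes `interpolate` forwards keyword arguments such as `kind` to `scipy.interpolate.interp1d`, which is what the exercise suggests):
###Code
# cubic interpolation instead of the default linear interpolation
I_cubic = interpolate(data.insulin, kind='cubic')
plot(data.insulin, 'go', label='insulin data')
plot(ts, I_cubic(ts), color='green', label='cubic interpolation')
decorate(xlabel='Time (min)',
         ylabel='Concentration ($\mu$U/mL)')
###Output
_____no_output_____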
###Markdown
**Exercise:** Interpolate the glucose data and generate a plot, similar to the previous one, that shows the data points and the interpolated curve evaluated at the time values in `ts`.
###Code
# Solution goes here
###Output
_____no_output_____
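###Markdown
A possible solution sketch for the glucose data, mirroring the insulin plot above:
###Code
G = interpolate(data.glucose)
plot(data.glucose, 'bo', label='glucose data')
plot(ts, G(ts), color='blue', label='interpolated')
decorate(xlabel='Time (min)',
         ylabel='Concentration (mg/dL)')
###Output
_____no_output_____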
###Markdown
Under the hood
###Code
source_code(interpolate)
###Output
_____no_output_____ |
Sensor_Fusion_New_Fault_Codes_WD/Sensor_Fusion_New_Fault_Codes_WD.ipynb | ###Markdown
Fault Classes | Fault | Binary | Decimal || --- | --- | --- || A | 00 | 0 || B | 01 | 1 || C | 10 | 2 | Fault Code = (variable << 2) + fault_class | Fault | Binary | Decimal || --- | --- | --- || A1 | 100 | 4 || B1 | 101 | 5 || C1 | 110 | 6 | Sample (C1)
###Code
fault = 2
variable = 1
code = (variable << 2) + fault
code
###Output
_____no_output_____
###Markdown
Reverse
###Code
fault = code & 3
fault
variable = code >> 2
variable
###Output
_____no_output_____
###Markdown
Current Fault Codes | Fault | Decimal || --- | --- || A1 | 4 || B1 | 5 || C1 | 6 || A1B1 | 4-5 || A1C1 | 4-6 || B1C1 | 5-6 | Flow Chart Run
###Code
command = "D:/code/C++/RT-Cadmium-FDD-New-Code/top_model/mainwd.exe"
completed_process = subprocess.run(command, shell=False, capture_output=True, text=True)
#print(completed_process.stdout)
###Output
_____no_output_____
###Markdown
Read from file
###Code
fileName = "SensorFusion.txt"
fault_codes = {}
with open(fileName, "r") as f:
lines = f.readlines()
with open(fileName, "r") as f:
output = f.read()
for line in lines:
if (re.search("supervisor", line) != None):
res = re.findall("\{\d+[, ]*\d*[, ]*\d*\}", line)
if len(res) > 0:
str_interest = res[0].replace('}', '').replace('{', '')
faults = str_interest.split(', ')
key = '-' + '-'.join(faults) + '-'
fault_codes[key] = fault_codes.get(key, 0) + 1
generators = {'A': 0, 'B': 0, 'C': 0}
for key in generators.keys():
generators[key] = len(re.findall("faultGen" + key, output))
fault_codes
def sumFromSupervisor(code):
'''
Returns the number of times faults associated with a particular pure fault (the parameter) were output by the supervisor
@param code: int
@return int
'''
sum = 0
for key, value in fault_codes.items():
if '-' + str(code) + '-' in key:
sum += value;
return sum;
a_discarded = generators['A'] - sumFromSupervisor(4)
a_discarded
b_discarded = generators['B'] - sumFromSupervisor(5)
b_discarded
c_discarded = generators['C'] - sumFromSupervisor(6)
c_discarded
total_discarded = a_discarded + b_discarded + c_discarded  # only generators A, B, and C exist in this model
total_discarded
total_generated = generators['A'] + generators['B'] + generators['C']
total_generated
discarded = {'A': a_discarded, 'B': b_discarded, 'C': c_discarded}
discarded_percentage = {'A': a_discarded * 100 / total_generated, 'B': b_discarded * 100 / total_generated, 'C': c_discarded * 100 / total_generated}
discarded
fault_codes
a_increment = generators['A'] - fault_codes['-4-5-'] - fault_codes['-4-6-'] - a_discarded
a_increment
b_increment = generators['B'] - fault_codes['-4-5-'] - fault_codes['-5-6-'] - b_discarded
b_increment
c_increment = generators['C'] - fault_codes['-4-6-'] - fault_codes['-5-6-'] - c_discarded
c_increment
###Output
_____no_output_____
###Markdown
Discard Charts
###Code
#plt.title('Discarded Bar')
plt.bar(discarded.keys(), discarded.values())
plt.show()
#plt.savefig('discarded bar.png', format='png')
keys, values = list(discarded.keys()), list(discarded.values())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
legend_keys[i] = str(legend_keys[i]) + " = " + str(values[i])
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Discarded Pie")
plt.show()
#plt.savefig('discard pie.png', format='png')
###Output
<ipython-input-44-a55430dbb174>:8: MatplotlibDeprecationWarning: normalize=None does not normalize if the sum is less than 1 but this behavior is deprecated since 3.3 until two minor releases later. After the deprecation period the default value will be normalize=True. To prevent normalization pass normalize=False
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
###Markdown
Discard Percentage Charts
###Code
#plt.title('Discard Percentage')
plt.bar(discarded_percentage.keys(), discarded_percentage.values())
plt.show()
#plt.savefig('sensorfusion.png', format='png')
keys, values = list(discarded_percentage.keys()), list(discarded_percentage.values())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
legend_keys[i] = str(legend_keys[i]) + " (%) = " + str(values[i])
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Discard Percentage")
plt.show()
#plt.savefig('discard percntage pie.png')
###Output
<ipython-input-46-0f993063da11>:8: MatplotlibDeprecationWarning: normalize=None does not normalize if the sum is less than 1 but this behavior is deprecated since 3.3 until two minor releases later. After the deprecation period the default value will be normalize=True. To prevent normalization pass normalize=False
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
###Markdown
Toggle Time vs Frequency of Generators
###Code
toggle_times = {'A': 620, 'B': 180, 'C': 490}
###Output
_____no_output_____
###Markdown
Premise $faults\,generated \propto \frac{1}{toggle\,time}$$\therefore B > C > A$ Generator Output Charts (Possibilities of Faults)
###Code
generators['A']
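# Sanity check of the premise above: generators with shorter toggle times are expected
# to produce more fault possibilities, so the expected ordering is B > C > A.
expected_order = sorted(toggle_times, key=toggle_times.get)
print('Expected ordering (most to fewest faults):', expected_order)
print('Observed counts:', generators)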
#plt.title('Generator Output (Possibilities of Faults)')
plt.bar(generators.keys(), generators.values())
plt.show()
#plt.savefig('generator output bar.png')
keys, values = list(generators.keys()), list(generators.values())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
legend_keys[i] = "n (" + str(legend_keys[i]) + ") = " + str(values[i])
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Generator Output Charts (Possibilities of Fault)")
#plt.show()
plt.savefig('generator output pie.png')
###Output
_____no_output_____
###Markdown
Single-Run Fault Charts
###Code
chart_data = copy.copy(fault_codes)
values = list(chart_data.values())
keys = list(chart_data.keys())
plt.bar(keys, values)
#plt.title('Single-Run')
plt.show()
#plt.savefig('single-run bar.png')
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values,
textprops=dict(color="w"),
wedgeprops=dict(width=0.5))
legend_keys = copy.copy(keys)
for i in range(len(keys)):
legend_keys[i] = str(legend_keys[i]) + " " + str(values[i]) + " " + "times"
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
plt.title("Single-Run")
plt.show()
plt.savefig('single-run pie.png')
###Output
_____no_output_____ |
jupyter/Complex.ipynb | ###Markdown
In this exercise we will learn how to visualize complex numbers and interpret how different operations change them. We know that complex numbers can be written as a sum of their real and imaginary parts, i.e. in standard form z = a + b*i, where a = Re(z) and b = Im(z). We can represent this number in the complex plane with the coordinates (a, b). For example, 3 + 4*i:
###Code
import matplotlib.pyplot as plt
plt.plot(3,4, 'bo')
plt.xlim(-5,5)
plt.ylim(-5,5)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
a = 1.0
b = 4.5
root = np.pi
def theta(a,b):
if a==0:
if b<0:
theta = -np.pi/2
elif b>0:
theta = np.pi/2
else:
theta = np.arctan(float(b)/float(a))
if a < 0 and b > 0:
theta = np.pi + theta
elif a <0 and b <0:
theta = theta-np.pi
return theta
def r(a,b):
return np.sqrt(float(a)**2+float(b)**2)
angle = theta(a,b)
length = r(a,b)
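# Quick sanity check of the polar-form helpers on the earlier example 3 + 4*i:
# r(3, 4) should be 5.0 and theta(3, 4) should be arctan(4/3), roughly 0.927 radians.
print(r(3, 4), theta(3, 4))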
print(angle)
fig = plt.figure(figsize=(10, 10), dpi= 80, facecolor='w', edgecolor='k')
plt.xlim(-length-0.1,length+0.1)
plt.ylim(-length-0.1,length+0.1)
plt.plot(a,b, 'ro')
for i in range(100):
    # magnitude of the root-th root is length**(1/root); parentheses are needed for correct precedence
    x = length**(1/root) * np.cos(angle/root + 2*np.pi/root*i)
    y = length**(1/root) * np.sin(angle/root + 2*np.pi/root*i)
plt.plot(x,y, 'bo')
plt.show()
#def init():
# fig = plt.figure(figsize=(10, 20), dpi= 80, facecolor='w', edgecolor='k')
# ax = add_subplot()
#def animate(i):
#anim = animation.FuncAnimation(fig, animate,init_func=init, frames=20, interval=180, blit=True)
###Output
1.35212738092
|
Solving QC/Quantum Least Squares Fitting.ipynb | ###Markdown
**Quantum Least Squares Fitting**
###Code
import numpy as nump
from scipy import linalg as lina
from scipy.linalg import lstsq
def get_least_squares_fit(data, basis_matrix, weights=None, PSD=False, trace=None):
c_mat = basis_matrix
d_mat = nump.array(data)
if weights is not None:
w = nump.array(weights)
c_mat = w[:, None] * c_mat
d_mat = w * d_mat
rho_fit_mat, _, _, _ = lstsq(c_mat.T, d_mat)
print(rho_fit_mat)
size = len(rho_fit_mat)
dim = int(nump.sqrt(size))
if dim * dim != size:
raise ValueError("fitted vector needs to be a sqaure matrix")
rho_fit_mat = rho_fit_mat.reshape(dim, dim, order='F')
if PSD is True:
        rho_fit_mat = convert_positive_semidefinite_matrix(rho_fit_mat)  # project the fit onto the PSD cone
if trace is not None:
rho_fit_mat *= trace / nump.trace(rho_fit_mat)
return rho_fit_mat
def convert_positive_semidefinite_matrix(mat, epsilon=0):
if epsilon < 0:
        raise ValueError('epsilon needs to be positive')
dim = len(mat)
v, w = lina.eigh(mat)
for j in range(dim):
if v[j] < epsilon:
tmp = v[j]
v[j] = 0.
x = 0.
for k in range(j + 1, dim):
x += tmp / (dim - (j + 1))
v[k] = v[k] + tmp / (dim - (j + 1))
matrix_psd = nump.zeros([dim, dim], dtype=complex)
for j in range(dim):
matrix_psd += v[j] * nump.outer(w[:, j], nump.conj(w[:, j]))
return matrix_psd
data = [12, 21, 23.5, 24.5, 25, 33, 23, 15.5, 28,19]
u_matrix = nump.arange(0, 10)
basis_matrix = nump.array([u_matrix, nump.ones(10)])
rho_fit_val = get_least_squares_fit(data,basis_matrix)
###Output
[ 0.45757576 20.39090909]
Traceback [1;36m(most recent call last)[0m:
Input [0;32mIn [6][0m in [0;35m<module>[0m
rho_fit_val = get_least_squares_fit(data,basis_matrix)
[1;36m Input [1;32mIn [2][1;36m in [1;35mget_least_squares_fit[1;36m[0m
[1;33m raise ValueError("fitted vector needs to be a sqaure matrix")[0m
[1;31mValueError[0m[1;31m:[0m fitted vector needs to be a sqaure matrix
Use %tb to get the full traceback.
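###Markdown
A minimal sketch of a call that does satisfy the square-length requirement: with four basis vectors, the fitted coefficient vector has length 4 and can be reshaped into a 2x2 matrix. The polynomial basis below is purely illustrative and not part of the original analysis.
###Code
basis4 = nump.vstack([u_matrix**k for k in range(4)])  # 4 illustrative basis vectors of length 10
rho_2x2 = get_least_squares_fit(data, basis4, trace=1.0)
print(rho_2x2)
###Output
_____no_output_____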
|
RedWineAlcoholLevels.ipynb | ###Markdown
Neil Sano 991541147 Red Wine Alcohol Levels
###Code
#If you are using Google Colab, run this command
#!pip install pycaret
import pandas as pd
import numpy as np
import sklearn as sk
wineData = pd.read_csv("winequality-red.csv", sep= ";")
print(wineData.shape)
wineData
dataSample = wineData.sample(frac=0.9, random_state = 786)
data_unseen = wineData.drop(dataSample.index)
dataSample.reset_index(drop=True, inplace=True)
data_unseen.reset_index(drop=True, inplace=True)
print("Data for Modeling: " + str(dataSample.shape))
print("Unseen Data for Predictions: " + str(data_unseen.shape))
from pycaret.regression import *
wine_regression = setup(data=dataSample, target = 'alcohol', session_id=1)
best = compare_models(exclude = ['ransac'], sort='RMSE')
###Output
_____no_output_____
###Markdown
The top 3 models are the CatBoost Regressor, the Light Gradient Boosting Machine (LightGBM), and Extreme Gradient Boosting (XGBoost).
###Code
#Create a CatBoost Regressor as it currently has the lowest RMSE Score out of all the regression models
cat = create_model('catboost')
print(cat)
# Create a LGBM model as it was the second lowest.
lgbm = create_model('lightgbm')
print(lgbm)
# Create an extreme gradient boosting model
xgb = create_model("xgboost")
print(xgb)
###Output
_____no_output_____
###Markdown
Tune the Models
###Code
tuned_cat = tune_model(cat, optimize='RMSE')
print(tuned_cat.get_params)
tuned_lgbm = tune_model(lgbm, optimize='RMSE')
print(tuned_lgbm)
tuned_xgb = tune_model(xgb, optimize='RMSE')
print(tuned_xgb)
###Output
_____no_output_____
###Markdown
Plot the models CatBoost (Plot)
###Code
plot_model(tuned_cat)
plot_model(tuned_cat, plot = 'error')
plot_model(tuned_cat, plot='feature')
###Output
_____no_output_____
###Markdown
LGBM (Plot)
###Code
plot_model(tuned_lgbm)
plot_model(tuned_lgbm, plot="error")
plot_model(tuned_lgbm, plot='feature')
###Output
_____no_output_____
###Markdown
XGB (Plot)
###Code
plot_model(tuned_xgb)
plot_model(tuned_xgb, plot="error")
plot_model(tuned_xgb, plot="feature")
###Output
_____no_output_____
###Markdown
Evaluate the Models Tuned Cat
###Code
evaluate_model(tuned_cat)
predict_model(tuned_cat)
###Output
_____no_output_____
###Markdown
Tuned LGBM
###Code
evaluate_model(tuned_lgbm)
predict_model(tuned_lgbm)
###Output
_____no_output_____
###Markdown
Tuned XGB
###Code
evaluate_model(tuned_xgb)
predict_model(tuned_xgb)
###Output
_____no_output_____
###Markdown
Finalize Models Cat Boost
###Code
final_cat = finalize_model(tuned_cat)
print(final_cat)
predict_model(final_cat)
unseen_predictions = predict_model(final_cat, data= data_unseen)
unseen_predictions.head()
from pycaret.utils import check_metric
check_metric(unseen_predictions.alcohol, unseen_predictions.Label, 'RMSE')
###Output
_____no_output_____
###Markdown
LGBM
###Code
final_lgbm = finalize_model(tuned_lgbm)
print(final_lgbm)
predict_model(final_lgbm)
unseen_predictions = predict_model(final_lgbm, data= data_unseen)
unseen_predictions.head()
check_metric(unseen_predictions.alcohol, unseen_predictions.Label, 'RMSE')
###Output
_____no_output_____
###Markdown
XGB
###Code
final_xgb = finalize_model(tuned_xgb)
print(final_xgb)
predict_model(final_xgb)
unseen_predictions = predict_model(final_xgb, data= data_unseen)
unseen_predictions.head()
check_metric(unseen_predictions.alcohol, unseen_predictions.Label, 'RMSE')
###Output
_____no_output_____
###Markdown
Results The final results leave the CatBoost regressor as the most accurate on the unseen test data, with an RMSE of 0.4432. The LGBM regressor has the highest (worst) RMSE at 0.5289, and the XGBoost regressor is second best with an RMSE of 0.4939. As the CatBoost regressor has the lowest RMSE, I want to tune it further. According to the CatBoost feature importance plot, the following attributes matter the least: - Volatile Acidity - Total Sulphur Dioxide - Chlorides Let's see the results when I drop all three.
###Code
wineData = pd.read_csv("winequality-red.csv", sep= ";")
print(wineData.shape)
wineData.columns = wineData.columns.str.replace(" ", "_")
wineData = wineData.drop(columns=["volatile_acidity","chlorides","total_sulfur_dioxide"])
wineData
dataSample = wineData.sample(frac=0.9, random_state = 786)
data_unseen = wineData.drop(dataSample.index)
dataSample.reset_index(drop=True, inplace=True)
data_unseen.reset_index(drop=True, inplace=True)
print("Data for Modeling: " + str(dataSample.shape))
print("Unseen Data for Predictions: " + str(data_unseen.shape))
wine_regression = setup(data=dataSample, target = 'alcohol', session_id=1)
best = compare_models(exclude = ['ransac'], sort='RMSE')
###Output
_____no_output_____
###Markdown
It appears that the cross-validated RMSE scores from the model comparison are higher (worse) after dropping these three columns.
###Code
new_cat = create_model('catboost')
print(new_cat)
tuned_new_cat = tune_model(new_cat, optimize='RMSE')
evaluate_model(tuned_new_cat)
final_new_cat = finalize_model(tuned_new_cat)
predict_model(final_new_cat)
unseen_predictions = predict_model(final_new_cat, data= data_unseen)
unseen_predictions.head()
check_metric(unseen_predictions.alcohol, unseen_predictions.Label, 'RMSE')
###Output
_____no_output_____
###Markdown
Removing the bottom three features from the feature importance plot resulted in a 0.0230 increase in the RMSE score on the unseen data, a noticeable loss of accuracy. However, a number of extra columns were also added to the table because the quality column was split into multiple categories, and many of them did not even appear on the feature plot. Let's investigate that by dropping quality instead.
###Code
wineData = pd.read_csv("winequality-red.csv", sep= ";")
print(wineData.shape)
wineData.columns = wineData.columns.str.replace(" ", "_")
wineData = wineData.drop(columns=["quality"])
wineData
dataSample = wineData.sample(frac=0.9, random_state = 786)
data_unseen = wineData.drop(dataSample.index)
dataSample.reset_index(drop=True, inplace=True)
data_unseen.reset_index(drop=True, inplace=True)
print("Data for Modeling: " + str(dataSample.shape))
print("Unseen Data for Predictions: " + str(data_unseen.shape))
wine_regression = setup(data=dataSample, target = 'alcohol', session_id=1)
best = compare_models(exclude = ['ransac'], sort='RMSE')
###Output
_____no_output_____ |
Mathematics_for_Machine_Learning/PCA/utf-8''week1.ipynb | ###Markdown
Mean/Covariance of a data set and effect of a linear transformation We are going to investigate how the mean and (co)variance of a dataset change when we apply an affine transformation to the dataset. Learning objectives 1. Get familiar with basic programming using Python and Numpy/Scipy. 2. Learn to appreciate implementing functions to compute statistics of a dataset in a vectorized way. 3. Understand the effects of affine transformations on a dataset. 4. Understand the importance of testing in programming for machine learning. First, let's import the packages that we will use for the week
###Code
# PACKAGE: DO NOT EDIT THIS CELL
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
matplotlib.style.use('fivethirtyeight')
from sklearn.datasets import fetch_lfw_people, fetch_olivetti_faces
import time
import timeit
%matplotlib inline
from ipywidgets import interact
###Output
_____no_output_____
###Markdown
Next, we are going to retrieve the Olivetti faces dataset. When working with some datasets, before digging into further analysis, it is almost always useful to do a few things to understand your dataset. First of all, answer the following set of questions: 1. What is the size of your dataset? 2. What is the dimensionality of your data? The datasets we have are usually stored as 2D matrices, so it is really important to know which dimension represents the dimensionality of the data, and which represents the data points in the dataset. __When you implement the functions for your assignment, make sure you read the docstring for which dimension of your inputs represents the data points, and which represents the dimensions of the dataset!__
###Code
image_shape = (64, 64)
# Load faces data
dataset = fetch_olivetti_faces('./')
faces = dataset.data.T
print('Shape of the faces dataset: {}'.format(faces.shape))
print('{} data points'.format(faces.shape[1]))
###Output
Shape of the faces dataset: (4096, 400)
400 data points
###Markdown
When your dataset consists of images, it's a really good idea to see what they look like. One very convenient tool in Jupyter is the `interact` widget, which we use to visualize the images (faces). For more information on how to use interact, have a look at the documentation [here](http://ipywidgets.readthedocs.io/en/stable/examples/Using%20Interact.html). We have created two functions which help you visualize the faces dataset. You do not need to modify them.
###Code
def show_face(face):
plt.figure()
plt.imshow(face.reshape((64, 64)), cmap='gray')
plt.show()
@interact(n=(0, faces.shape[1]-1))
def display_faces(n=0):
plt.figure()
plt.imshow(faces[:,n].reshape((64, 64)), cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
1. Mean and Covariance of a Dataset In this week, you will need to implement functions in the cell below which compute the mean and covariance of a dataset.
###Code
# GRADED FUNCTION: DO NOT EDIT THIS LINE
def mean_naive(X):
"Compute the mean for a dataset X nby iterating over the data points"
# X is of size (D,N) where D is the dimensionality and N the number of data points
D, N = X.shape
mean = np.zeros((D,1))
for n in range(N): # iterate over the dataset
mean[:, 0] = mean[:, 0] + (1/N * X[:, n])# <-- EDIT THIS
return mean
def cov_naive(X):
"""Compute the covariance for a dataset of size (D,N)
where D is the dimension and N is the number of data points"""
# 1/N * \sum (x_i - m)(x_i - m)^T (where m is the mean)
D, N = X.shape
covariance = np.zeros((D, D))
m = mean(X)
Y = X - m
for n in range(N):
covariance += 1/N * (Y[:, n].reshape(D, 1) * Y[:, n].reshape(1, D)) # <-- EDIT THIS
return covariance
def mean(X):
"Compute the mean for a dataset of size (D,N) where D is the dimension and N is the number of data points"
# given a dataset of size (D, N), the mean should be an array of size (D,)
mean = np.mean(X, axis=1, keepdims=True) # <-- EDIT THIS
return mean
def cov(X):
"Compute the covariance for a dataset"
# X is of size (D,N)
# https://stackoverflow.com/questions/16062804/numpy-cov-covariance-function-what-exactly-does-it-compute
# It is possible to vectorize our code for computing the covariance, i.e., we do not need to explicitly
# iterate over the entire dataset as looping in Python tends to be slow
# We challenge you to give a vectorized implementation without using np.cov.
D, N = X.shape
Y = X - mean(X)
covariance_matrix = 1/N * (Y @ Y.T) # <-- EDIT THIS
return covariance_matrix
###Output
_____no_output_____
###Markdown
Now, let's see whether our implementations are consistent
###Code
np.testing.assert_almost_equal(mean(faces), mean_naive(faces), decimal=6)
np.testing.assert_almost_equal(cov(faces), cov_naive(faces))
###Output
_____no_output_____
###Markdown
With the `mean` function implemented, let's take a look at the _mean_ face of our dataset!
###Code
def mean_face(faces):
return faces.mean(axis=1).reshape((64, 64))
plt.imshow(mean_face(faces), cmap='gray');
###Output
_____no_output_____
###Markdown
We can also visualize the covariance. Since the faces dataset is very high dimensional, let's instead take a look at the covariance matrix for a smaller dataset: the MNIST digits dataset. One of the advantages of writing vectorized code is the speedup gained when working on larger datasets. Loops in Python are slow, and most of the time you want to utilise the fast native code provided by Numpy without explicitly using for loops. To put things into perspective, we can benchmark the two different implementations with the `%time` function in the following way:
###Code
# We have some HUUUGE data matrix which we want to compute its mean
X = np.random.randn(20, 1000)
# Benchmarking time for computing mean
%time mean_naive(X)
%time mean(X)
pass
# Benchmarking time for computing covariance
%time cov_naive(X)
%time cov(X)
pass
###Output
CPU times: user 13.5 ms, sys: 268 µs, total: 13.8 ms
Wall time: 102 ms
CPU times: user 1.35 ms, sys: 409 µs, total: 1.75 ms
Wall time: 1.24 ms
###Markdown
Alternatively, we can also see how running time increases as we increase the size of our dataset. In the following cells, we run `mean`, `mean_naive` and `cov`, `cov_naive` many times on different sizes of the dataset and collect their running times. If you are less familiar with Python, you may want to spend some time understanding what the code does. The next cell includes a function that records the time taken for executing a function `f` by repeating it `repeat` number of times. You do not need to modify the function, but you can use it to compare the running time of any functions you are interested in.
###Code
def time(f, repeat=10):
"""Helper function to compute the time taken for running a function f
"""
# you don't need to edit this function
times = []
for _ in range(repeat):
start = timeit.default_timer()
f()
stop = timeit.default_timer()
times.append(stop-start)
return np.mean(times), np.std(times)
###Output
_____no_output_____
###Markdown
Let's first benchmark the running time for `mean` and `mean_naive`. Note that it may take a long time for the code to run if you repeat it too many times. If you do not see the next cell terminate within a reasonable amount of time, try reducing the number of times you `repeat` running the function.
###Code
fast_time = []
slow_time = []
# we iterate over datasets of different sizes, and compute the time taken to run mean, mean_naive on the dataset
for size in np.arange(100, 501, step=100):
X = np.random.randn(size, 20)
f = lambda : mean(X) # we create an "anonymous" function for running mean on dataset X
mu, sigma = time(f, repeat=10) # the `time` function computes the mean and standard deviation of running
fast_time.append((size, mu, sigma)) # keep the results of the runtime in a list
# we repeat the same steps for `mean_naive`
f = lambda : mean_naive(X)
mu, sigma = time(f, repeat=10)
slow_time.append((size, mu, sigma))
fast_time = np.array(fast_time)
slow_time = np.array(slow_time)
###Output
_____no_output_____
###Markdown
Let's visualize the running time for `mean` and `mean_naive`.
###Code
fig, ax = plt.subplots()
ax.errorbar(fast_time[:,0], fast_time[:,1], fast_time[:,2], label='fast mean', linewidth=2)
ax.errorbar(slow_time[:,0], slow_time[:,1], slow_time[:,2], label='naive mean', linewidth=2)
ax.set_xlabel('size of dataset')
ax.set_ylabel('running time')
plt.legend();
###Output
_____no_output_____
###Markdown
We can create a similar benchmark for `cov` and `cov_naive`. Follow the pattern for how we created the benchmark for `mean` and `mean_naive` and update the code below.
###Code
fast_time_cov = []
slow_time_cov = []
for size in np.arange(100, 501, step=100):
X = np.random.randn(size, 20)
# You should follow how we create the running time benchmarks for mean and mean_naive above to
# create some benchmarks for the running time of cov_naive and cov
f = lambda : cov(X) # <-- EDIT THIS
mu, sigma = time(f, repeat=10) # <-- EDIT THIS
fast_time_cov.append((size, mu, sigma))
f = lambda : cov_naive(X) # <-- EDIT THIS
mu, sigma = time(f, repeat=10) # <-- EDIT THIS
slow_time_cov.append((size, mu, sigma))
fast_time_cov = np.array(fast_time_cov)
slow_time_cov = np.array(slow_time_cov)
fig, ax = plt.subplots()
ax.errorbar(fast_time_cov[:,0], fast_time_cov[:,1], fast_time_cov[:,2], label='fast covariance', linewidth=2)
ax.errorbar(slow_time_cov[:,0], slow_time_cov[:,1], slow_time_cov[:,2], label='naive covariance', linewidth=2)
ax.set_xlabel('size of dataset')
ax.set_ylabel('running time')
plt.legend();
###Output
_____no_output_____
###Markdown
2. Affine Transformation of Datasets In this week we are also going to verify a few properties of the mean and covariance under affine transformations of random variables. Consider a data matrix $\boldsymbol X$ of size $(D, N)$. We would like to know what the covariance is when we apply the affine transformation $\boldsymbol A\boldsymbol x_i + \boldsymbol b$ to each data point $\boldsymbol x_i$ in $\boldsymbol X$, i.e., we would like to know what happens to the mean and covariance of the new dataset if we apply an affine transformation. For this assignment, you will need to implement the `affine_mean` and `affine_covariance` in the cell below.
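For reference, these are the identities that the two functions below implement: if $\boldsymbol m$ and $\boldsymbol S$ are the mean and covariance of the original dataset, then by linearity of expectation$$E[\boldsymbol A\boldsymbol x + \boldsymbol b] = \boldsymbol A\boldsymbol m + \boldsymbol b, \qquad \text{Cov}[\boldsymbol A\boldsymbol x + \boldsymbol b] = \boldsymbol A\boldsymbol S\boldsymbol A^T,$$since the constant offset $\boldsymbol b$ shifts every data point equally and therefore cancels out in the covariance.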
###Code
# GRADED FUNCTION: DO NOT EDIT THIS LINE
def affine_mean(mean, A, b):
"""Compute the mean after affine transformation
Args:
x: ndarray, the mean vector
A, b: affine transformation applied to x
Returns:
mean vector after affine transformation
"""
affine_m = A@mean + b # <-- EDIT THIS
return affine_m
def affine_covariance(S, A, b):
"""Compute the covariance matrix after affine transformation
Args:
S: ndarray, the covariance matrix
A, b: affine transformation applied to each element in X
Returns:
covariance matrix after the transformation
"""
affine_cov = A@[email protected] # <-- EDIT THIS
return affine_cov
###Output
_____no_output_____
###Markdown
Once the two functions above are implemented, we can verify the correctness of our implementation, assuming that we have some $\boldsymbol A$ and $\boldsymbol b$.
###Code
random = np.random.RandomState(42)
A = random.randn(4,4)
b = random.randn(4,1)
###Output
_____no_output_____
###Markdown
Next we can generate some random dataset $\boldsymbol X$
###Code
X = random.randn(4,100)
###Output
_____no_output_____
###Markdown
Assuming that for some dataset $\boldsymbol X$, the mean and covariance are $\boldsymbol m$, $\boldsymbol S$, and for the new dataset after affine transformation $\boldsymbol X'$, the mean and covariance are $\boldsymbol m'$ and $\boldsymbol S'$, then we would have the following identity:$$\boldsymbol m' = \text{affine_mean}(\boldsymbol m, \boldsymbol A, \boldsymbol b)$$$$\boldsymbol S' = \text{affine_covariance}(\boldsymbol S, \boldsymbol A, \boldsymbol b)$$
###Code
X1 = (A @ X) + b # applying affine transformation once
X2 = (A @ X1) + b # twice
###Output
_____no_output_____
###Markdown
One very useful way to compare whether arrays are equal/similar is to use the helper functions in `numpy.testing`. Check the Numpy [documentation](https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.testing.html) for details. The most commonly used function is `np.testing.assert_almost_equal`, which raises an AssertionError if the two arrays are not almost equal. If you are interested in learning more about floating point arithmetic, here is a good [paper](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.6768).
###Code
np.testing.assert_almost_equal(mean(X1), affine_mean(mean(X), A, b))
np.testing.assert_almost_equal(cov(X1), affine_covariance(cov(X), A, b))
np.testing.assert_almost_equal(mean(X2), affine_mean(mean(X1), A, b))
np.testing.assert_almost_equal(cov(X2), affine_covariance(cov(X1), A, b))
###Output
_____no_output_____ |
3. Convolutional Neural Networks/week1/Convolution model - Step by Step - v2.ipynb | ###Markdown
Convolutional Neural Networks: Step by StepWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. **Notation**:- Superscript $[l]$ denotes an object of the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object from the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input. - Lowerscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. - $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. - $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! 1 - PackagesLet's first import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
###Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2 - Outline of the AssignmentYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:- Convolution functions, including: - Zero Padding - Convolve window - Convolution forward - Convolution backward (optional)- Pooling functions, including: - Pooling forward - Create mask - Distribute value - Pooling backward (optional) This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:**Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. 3 - Convolutional Neural NetworksAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 3.1 - Zero-PaddingZero-padding adds zeros around the border of an image: **Figure 1** : **Zero-Padding** Image (3 channels, RGB) with a padding of 2. The main benefits of padding are the following:- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. - It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.**Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:```pythona = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))```
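For intuition, here is a tiny standalone `np.pad` example on a 2-D array (made-up values; the exercise itself pads a 4-D batch along the height and width axes only):
```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
print(np.pad(a, ((1, 1), (1, 1)), 'constant', constant_values=(0, 0)))
# [[0 0 0 0]
#  [0 1 2 0]
#  [0 3 4 0]
#  [0 0 0 0]]
```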
###Code
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
"""
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0,0), (pad,pad), (pad,pad), (0,0)), 'constant', constant_values = (0,0))
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
###Output
x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1,1] = [[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
x_pad[1,1] = [[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
###Markdown
**Expected Output**: **x.shape**: (4, 3, 3, 2) **x_pad.shape**: (4, 7, 7, 2) **x[1,1]**: [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]] **x_pad[1,1]**: [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]] 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: - Takes an input volume - Applies a filter at every position of the input- Outputs another volume (usually of different size) **Figure 2** : **Convolution operation** with a filter of 3x3 and a stride of 1 (stride = amount you move the window each time you slide) In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. **Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html).
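As a tiny numeric sketch of this multiply, sum, and add-bias step (made-up values, just to show the arithmetic; the graded function below works on a full (f, f, n_C_prev) slice):
```python
import numpy as np

a_slice = np.array([[1.0, 0.0],
                    [2.0, -1.0]])
W = np.array([[0.5, 1.0],
              [1.0, 2.0]])
b = 0.1

s = a_slice * W    # element-wise product
Z = np.sum(s) + b  # sum over the window, then add the bias
print(Z)           # the products sum to 0.5, plus the bias 0.1
```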
###Code
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
"""
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice_prev and W. Do not add the bias yet.
s = np.multiply(a_slice_prev, W)
# Sum over all entries of the volume s.
Z = np.sum(s)
# Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
Z = Z + float(b)
### END CODE HERE ###
return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
###Output
Z = -6.99908945068
###Markdown
**Expected Output**: **Z** -6.99908945068 3.3 - Convolutional Neural Networks - Forward passIn the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: **Exercise**: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. **Hint**: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:```pythona_slice_prev = a_prev[0:2,0:2,:]```This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find how each of the corners can be defined using h, w, f and s in the code below. **Figure 3** : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** This figure shows only a single channel. **Reminder**:The formulas relating the output shape of the convolution to the input shape are:$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_C = \text{number of filters used in the convolution}$$For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
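As a quick check of the shape formulas above, here is a standalone sketch (the helper `conv_output_dim` is illustrative only, not part of the graded code):
```python
def conv_output_dim(n_prev, f, pad, stride):
    # floor((n_prev - f + 2*pad) / stride) + 1
    return int((n_prev - f + 2 * pad) / stride) + 1

# e.g. the test below: a 4x4 input with f = 2, pad = 2, stride = 2 gives a 4x4 output
print(conv_output_dim(4, f=2, pad=2, stride=2))  # 4
```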
###Code
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
"""
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters['stride']
pad = hparameters['pad']
# Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
n_W = int((n_W_prev - f + 2 * pad) / stride) + 1
# Initialize the output volume Z with zeros. (≈1 line)
Z = np.zeros((m, n_H, n_W, n_C))
# Create A_prev_pad by padding A_prev
A_prev_pad = zero_pad(A_prev, pad)
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
Z[i, h, w, c] = conv_single_step(a_slice_prev, W[...,c], b[...,c])
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
###Output
Z's mean = 0.0489952035289
Z[3,2,1] = [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437
5.18531798 8.75898442]
cache_conv[0][1][2][3] = [-0.20075807 0.18656139 0.41005165]
###Markdown
**Expected Output**: **Z's mean** 0.0489952035289 **Z[3,2,1]** [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437 5.18531798 8.75898442] **cache_conv[0][1][2][3]** [-0.20075807 0.18656139 0.41005165] Finally, a CONV layer should also contain an activation, in which case we would add the following line of code:```python Convolve the window to get back one output neuronZ[i, h, w, c] = ... Apply activationA[i, h, w, c] = activation(Z[i, h, w, c])```You don't need to do it here. 4 - Pooling layer The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input. The two types of pooling layers are: - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the fxf window you would compute a max or average over. 4.1 - Forward PoolingNow, you are going to implement MAX-POOL and AVG-POOL, in the same function. **Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.**Reminder**:As there's no padding, the formulas binding the output shape of the pooling to the input shape are:$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$$$ n_C = n_{C_{prev}}$$
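As a tiny illustration of the two pooling modes on a single window (made-up values):
```python
import numpy as np

window = np.array([[1.0, 3.0],
                   [4.0, 2.0]])
print(np.max(window))   # 4.0, what max pooling keeps for this window
print(np.mean(window))  # 2.5, what average pooling keeps for this window
```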
###Code
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
"""
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
for w in range(n_W): # loop on the horizontal axis of the output volume
for c in range (n_C): # loop over the channels of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
# Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.mean(a_prev_slice)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
###Output
mode = max
A = [[[[ 1.74481176 0.86540763 1.13376944]]]
[[[ 1.13162939 1.51981682 2.18557541]]]]
mode = average
A = [[[[ 0.02105773 -0.20328806 -0.40389855]]]
[[[-0.22154621 0.51716526 0.48155844]]]]
###Markdown
**Expected Output:** A = [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] A = [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]] Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. The remainder of this notebook is optional, and will not be graded. 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you need to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly present them below. 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. 5.1.1 - Computing dA:This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into:```pythonda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]``` 5.1.2 - Computing dW:This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:$$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. In code, inside the appropriate for-loops, this formula translates into:```pythondW[:,:,:,c] += a_slice * dZ[i, h, w, c]``` 5.1.3 - Computing db:This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into:```pythondb[:,:,:,c] += dZ[i, h, w, c]```**Exercise**: Implement the `conv_backward` function below.
You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
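As a sketch of what a single (i, h, w, c) accumulation step looks like for formulas 1, 2 and 3 (illustrative shapes and values only, not part of the graded code):
```python
import numpy as np

f, n_C_prev = 2, 3
a_slice = np.random.randn(f, f, n_C_prev)  # the input slice that produced Z[i, h, w, c]
W_c = np.random.randn(f, f, n_C_prev)      # the filter for output channel c
dZ_ihwc = 0.7                              # upstream gradient for this single output value

da_contrib = W_c * dZ_ihwc      # accumulated into da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
dW_contrib = a_slice * dZ_ihwc  # accumulated into dW[:, :, :, c]
db_contrib = dZ_ihwc            # accumulated into db[:, :, :, c]
```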
###Code
def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
"""
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = cache
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters"
stride = hparameters["stride"]
pad = hparameters["pad"]
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = dZ.shape
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
dW = np.zeros((f, f, n_C_prev, n_C))
db = np.zeros((1, 1, 1, n_C))
# Pad A_prev and dA_prev
A_prev_pad = zero_pad(A_prev, pad)
dA_prev_pad = zero_pad(dA_prev, pad)
for i in range(m): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = A_prev_pad[i]
da_prev_pad = dA_prev_pad[i]
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the slice from a_prev_pad
a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
db[:,:,:,c] += dZ[i, h, w, c]
# Set the ith training example's dA_prev to the unpaded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
###Output
dA_mean = 1.45243777754
dW_mean = 1.72699145831
db_mean = 7.83923256462
###Markdown
** Expected Output: ** **dA_mean** 1.45243777754 **dW_mean** 1.72699145831 **db_mean** 7.83923256462 5.2 Pooling layer - backward passNext, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. 5.2.1 Max pooling - backward pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: $$ X = \begin{bmatrix}1 && 3 \\4 && 2\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}0 && 0 \\1 && 0\end{bmatrix}\tag{4}$$As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. **Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward. Hints:- `np.max()` may be helpful. It computes the maximum of an array.- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:```A[i,j] = True if X[i,j] = xA[i,j] = False if X[i,j] != x```- Here, you don't need to consider cases where there are several maxima in a matrix.
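A one-cell illustration of the `(X == x)` hint applied to the matrix from equation (4):
```python
import numpy as np

X = np.array([[1.0, 3.0],
              [4.0, 2.0]])
print(X == np.max(X))
# [[False False]
#  [ True False]]
```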
###Code
def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
### START CODE HERE ### (≈1 line)
mask = x == np.max(x)
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
###Output
x = [[ 1.62434536 -0.61175641 -0.52817175]
[-1.07296862 0.86540763 -2.3015387 ]]
mask = [[ True False False]
[False False False]]
###Markdown
**Expected Output:** **x =**[[ 1.62434536 -0.61175641 -0.52817175] [-1.07296862 0.86540763 -2.3015387 ]] **mask =**[[ True False False] [False False False]] Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}1/4 && 1/4 \\1/4 && 1/4\end{bmatrix}\tag{5}$$This implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. **Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)
###Code
def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
"""
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = shape
# Compute the value to distribute on the matrix (≈1 line)
average = dz / (n_H * n_W)
# Create a matrix where every entry is the "average" value (≈1 line)
a = np.ones(shape) * average
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
###Output
_____no_output_____
###Markdown
**Expected Output**: distributed_value =[[ 0.5 0.5] [ 0.5 0.5]] 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer.**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dA.
###Code
def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
"""
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = cache
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = hparameters["stride"]
f = hparameters["f"]
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
m, n_H, n_W, n_C = dA.shape
# Initialize dA_prev with zeros (≈1 line)
dA_prev = np.zeros(A_prev.shape)
for i in range(m): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = A_prev[i]
for h in range(n_H): # loop on the vertical axis
for w in range(n_W): # loop on the horizontal axis
for c in range(n_C): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
# Create the mask from a_prev_slice (≈1 line)
mask = create_mask_from_window(a_prev_slice)
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask * dA[i, h, w, c]
elif mode == "average":
# Get the value a from dA (≈1 line)
da = dA[i, h, w, c]
# Define the shape of the filter as fxf (≈1 line)
shape = (f, f)
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
###Output
_____no_output_____ |
models/4.1.1 Model Manipulation/.ipynb_checkpoints/Workforce_regular-checkpoint.ipynb | ###Markdown
Workforce Optimization Tool - Notebook Demonstration
###Code
# these are all open source components we have re-used
import os
import pandas as pd
from pulp import *
import numpy as np
import sys
from numpy import dot
from cvxopt import matrix, solvers
import matplotlib.pyplot as plt
# we wrote these modules over the last three weeks!
import workforce_pandas as wfpd # a pandas representation of the Input Data
from allocation_notebook import * # the workforce optimizer itself
sut_target = 0.8 # default target
collapse_group = False # default collapse
FTE_time = 60*2080 # default FTE mins per annum
# load the input data tables from the pandas representation
pop_chronic_trend = wfpd.dataframes['pop_chronic_trend']
pop_chronic_prev = wfpd.dataframes['pop_chronic_prev']
chron_care_freq = wfpd.dataframes['chron_care_freq']
geo_area = wfpd.dataframes['geo_area_list']
service_characteristics = wfpd.dataframes['service_characteristics']
pop_acute_need = wfpd.dataframes['pop_acute_need']
population = wfpd.dataframes['population']
provider_supply = wfpd.dataframes['provider_supply']
pop_prev_need = wfpd.dataframes['pop_prev_need']
provider_list = wfpd.dataframes['provider_list']
encounter_detail = wfpd.dataframes['encounter_detail']
overhead_work = wfpd.dataframes['overhead_work']
#################################################################
# user inputs here - please change these
# option: 'ideal_staffing', 'ideal_staffing_current', 'service_allocation'
# sub_option:
# for 'ideal_staffing' and 'ideal_staffing_current': "all_combination", "wage_max", "wage_weight"
# for 'service_allocation': sub_option = None
# for sub_option = "wage_max", sub_option_value = maximum wage
# for sub_option = "wage_weight", sub_option_value = wage weight
year = '2020'; current_year = '2018'; geo = 'State of Utah'
option1 = 'ideal_staffing' ; sub_option1 = "all_combination"; sub_option_value1 = None
###################################################################
#then run main function
out1, supply1 = main(geo, year, current_year, option1, sub_option1, sub_option_value1,
sut_target, collapse_group, FTE_time, pop_chronic_trend,
pop_chronic_prev, chron_care_freq, geo_area, service_characteristics,
pop_acute_need, population, provider_supply , pop_prev_need ,
provider_list , encounter_detail, overhead_work)
out1.keys()
# it shows total wage, total suitability & FTE and
# plot - you can change 0.1 to another value in [0, 1]
if( isinstance(out1, dict) ):
plotall(0.1, out1, supply1, option1, sub_option1, provider_list)
print( summaryout(out1,sub_option1 ) )
out1['detail_f2f_mini']
year = '2020'; current_year = '2018'
option2 = 'ideal_staffing_current' ; sub_option2 = "wage_max"; sub_option_value2 = 10000; #s_weight = 0.1
geo = 'State of Utah'
out2, supply2 = main(geo, year, current_year, option2, sub_option2, sub_option_value2, sut_target, collapse_group, FTE_time,
pop_chronic_trend, pop_chronic_prev, chron_care_freq, geo_area, service_characteristics,
pop_acute_need, population, provider_supply , pop_prev_need , provider_list , encounter_detail, overhead_work)
if( isinstance(out2, dict) ):
summaryout(out2,sub_option2)
print( plotall(0.1, out2, supply2, option2, sub_option2, provider_list) )
out2
#################################################################
# user inputs here - please change these
year = '2018'; current_year = '2018'; geo = 'State of Utah'
option3 = 'service_allocation' ; sub_option3 = None; sub_option_value3 = None
###################################################################
#then run main function
out3, supply3 = main(geo, year, current_year, option3, sub_option3, sub_option_value3,
sut_target, collapse_group, FTE_time, pop_chronic_trend,
pop_chronic_prev, chron_care_freq, geo_area, service_characteristics,
pop_acute_need, population, provider_supply , pop_prev_need ,
provider_list , encounter_detail, overhead_work)
if( isinstance(out3, dict) ):
plotall(sub_option_value3, out3, supply3, option3, sub_option3, provider_list)
###Output
_____no_output_____ |
examples/vision/ipynb/transformer_in_transformer.ipynb | ###Markdown
Image classification with TNT(Transformer in Transformer)**Author:** [ZhiYong Chang](https://github.com/czy00000)**Date created:** 2021/10/25**Last modified:** 2021/11/29**Description:** Implementing the Transformer in Transformer (TNT) model for image classification. IntroductionThis example implements the [TNT](https://arxiv.org/abs/2103.00112)model for image classification, and demonstrates its performance on the CIFAR-100dataset.To keep training time reasonable, we will train and test a smaller model than is in thepaper(0.66M params vs 23.8M params).TNT is a novel model for modeling both patch-level and pixel-levelrepresentation. In each TNT block, an ***outer*** transformer block is utilized to processpatch embeddings, and an ***inner*** transformer block extracts local features from pixel embeddings. The pixel-levelfeature is projected to the space of patch embedding by a linear transformation layerand then added into the patch.This example requires TensorFlow 2.5 or higher, as well as[TensorFlow Addons](https://www.tensorflow.org/addons/overview) package for theAdamW optimizer.Tensorflow Addons can be installed using the following command:```pythonpip install -U tensorflow-addons```
###Code
import matplotlib.pyplot as plt
import numpy as np
import math
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras
from tensorflow.keras import layers
from itertools import repeat
###Output
_____no_output_____
###Markdown
Prepare the data
###Code
num_classes = 100
input_shape = (32, 32, 3)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}")
print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}")
###Output
_____no_output_____
###Markdown
Configure the hyperparameters
###Code
weight_decay = 0.0002
learning_rate = 0.001
label_smoothing = 0.1
validation_split = 0.2
batch_size = 128
image_size = (96, 96) # resize images to this size
patch_size = (8, 8)
num_epochs = 50
outer_block_embedding_dim = 64
inner_block_embedding_dim = 32
num_transformer_layer = 5
outer_block_num_heads = 4
inner_block_num_heads = 2
mlp_ratio = 4
attention_dropout = 0.5
projection_dropout = 0.5
first_stride = 4
###Output
_____no_output_____
###Markdown
Use data augmentation
###Code
def data_augmentation(inputs):
x = layers.Rescaling(scale=1.0 / 255)(inputs)
x = layers.Resizing(image_size[0], image_size[1])(x)
x = layers.RandomFlip("horizontal")(x)
x = layers.RandomRotation(factor=0.1)(x)
x = layers.RandomContrast(factor=0.1)(x)
x = layers.RandomZoom(height_factor=0.2, width_factor=0.2)(x)
return x
###Output
_____no_output_____
###Markdown
Implement the pixel embedding and patch embedding layer
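Before the implementation, here is a small standalone sketch of how `tf.image.extract_patches` behaves (illustrative shapes only; each patch is flattened into the last axis):
```python
import tensorflow as tf

x = tf.random.normal((1, 4, 4, 3))
patches = tf.image.extract_patches(
    images=x,
    sizes=(1, 2, 2, 1),
    strides=(1, 2, 2, 1),
    rates=(1, 1, 1, 1),
    padding="VALID",
)
print(patches.shape)  # (1, 2, 2, 12): each 2x2x3 patch is flattened along the channel axis
```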
###Code
class PatchEncoder(layers.Layer):
def __init__(self, num_patches, projection_dim):
super(PatchEncoder, self).__init__()
self.num_patches = num_patches
self.projection = layers.Dense(units=projection_dim)
self.position_embedding = layers.Embedding(
input_dim=num_patches, output_dim=projection_dim
)
def call(self, patch):
positions = tf.range(start=0, limit=self.num_patches)
encoded = self.projection(patch) + self.position_embedding(positions)
return encoded
def pixel_embed(x, image_size=image_size, patch_size=patch_size, in_dim=48, stride=4):
_, height, width, channel = x.shape  # channels-last layout: (batch, height, width, channel)
num_patches = (image_size[0] // patch_size[0]) * (image_size[1] // patch_size[1])
inner_patch_size = [math.ceil(ps / stride) for ps in patch_size]
x = layers.Conv2D(in_dim, kernel_size=7, strides=stride, padding="same")(x)
# pixel extraction
x = tf.image.extract_patches(
images=x,
sizes=(1, inner_patch_size[0], inner_patch_size[1], 1),
strides=(1, inner_patch_size[0], inner_patch_size[1], 1),
rates=(1, 1, 1, 1),
padding="VALID",
)
x = tf.reshape(x, shape=(-1, inner_patch_size[0] * inner_patch_size[1], in_dim))
x = PatchEncoder(inner_patch_size[0] * inner_patch_size[1], in_dim)(x)
return x, num_patches, inner_patch_size
def patch_embed(
pixel_embedding,
num_patches,
outer_block_embedding_dim,
inner_block_embedding_dim,
num_pixels,
):
patch_embedding = tf.reshape(
pixel_embedding, shape=(-1, num_patches, inner_block_embedding_dim * num_pixels)
)
patch_embedding = layers.LayerNormalization(epsilon=1e-5)(patch_embedding)
patch_embedding = layers.Dense(outer_block_embedding_dim)(patch_embedding)
patch_embedding = layers.LayerNormalization(epsilon=1e-5)(patch_embedding)
patch_embedding = PatchEncoder(num_patches, outer_block_embedding_dim)(
patch_embedding
)
patch_embedding = layers.Dropout(projection_dropout)(patch_embedding)
return patch_embedding
###Output
_____no_output_____
###Markdown
Implement the MLP block
###Code
def mlp(x, hidden_dim, output_dim, drop_rate=0.2):
x = layers.Dense(hidden_dim, activation=tf.nn.gelu)(x)
x = layers.Dropout(drop_rate)(x)
x = layers.Dense(output_dim)(x)
x = layers.Dropout(drop_rate)(x)
return x
###Output
_____no_output_____
###Markdown
Implement the TNT block
###Code
def transformer_in_transformer_block(
pixel_embedding,
patch_embedding,
out_embedding_dim,
in_embedding_dim,
num_pixels,
out_num_heads,
in_num_heads,
mlp_ratio,
attention_dropout,
projection_dropout,
):
# inner transformer block
residual_in_1 = pixel_embedding
pixel_embedding = layers.LayerNormalization(epsilon=1e-5)(pixel_embedding)
pixel_embedding = layers.MultiHeadAttention(
num_heads=in_num_heads, key_dim=in_embedding_dim, dropout=attention_dropout
)(pixel_embedding, pixel_embedding)
pixel_embedding = layers.add([pixel_embedding, residual_in_1])
residual_in_2 = pixel_embedding
pixel_embedding = layers.LayerNormalization(epsilon=1e-5)(pixel_embedding)
pixel_embedding = mlp(
pixel_embedding, in_embedding_dim * mlp_ratio, in_embedding_dim
)
pixel_embedding = layers.add([pixel_embedding, residual_in_2])
# outer transformer block
_, num_patches, channel = patch_embedding.shape
# fuse local and global information
fused_embedding = tf.reshape(
pixel_embedding, shape=(-1, num_patches, in_embedding_dim * num_pixels)
)
fused_embedding = layers.LayerNormalization(epsilon=1e-5)(fused_embedding)
fused_embedding = layers.Dense(out_embedding_dim)(fused_embedding)
patch_embedding = layers.add([patch_embedding, fused_embedding])
residual_out_1 = patch_embedding
patch_embedding = layers.LayerNormalization(epsilon=1e-5)(patch_embedding)
patch_embedding = layers.MultiHeadAttention(
num_heads=out_num_heads, key_dim=out_embedding_dim, dropout=attention_dropout
)(patch_embedding, patch_embedding)
patch_embedding = layers.add([patch_embedding, residual_out_1])
residual_out_2 = patch_embedding
patch_embedding = layers.LayerNormalization(epsilon=1e-5)(patch_embedding)
patch_embedding = mlp(
patch_embedding, out_embedding_dim * mlp_ratio, out_embedding_dim
)
patch_embedding = layers.add([patch_embedding, residual_out_2])
return pixel_embedding, patch_embedding
###Output
_____no_output_____
###Markdown
Implement the TNT modelThe TNT model consists of multiple TNT blocks.In the TNT block, there are two transformer blocks wherethe outer transformer block models the global relation among patch embeddings,and the inner one extracts local structure information of pixel embeddings.The local information is added on the patchembedding by linearly projecting the pixel embeddings into the space of patch embedding.Patch-level and pixel-level position embeddings are introduced in order toretain spatial information. In the original paper, the authors use the class token forclassification.We use the `layers.GlobalAvgPool1D` to fuse patch information.
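A small sketch of this patch-fusion step (shapes chosen to match the configuration above: a 96x96 image with 8x8 patches gives 12 x 12 = 144 patches and a 64-dimensional outer embedding):
```python
import tensorflow as tf
from tensorflow.keras import layers

patch_embedding = tf.random.normal((2, 144, 64))   # (batch, num_patches, embedding_dim)
fused = layers.GlobalAvgPool1D()(patch_embedding)  # average over the patch axis
print(fused.shape)                                 # (2, 64)
```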
###Code
def get_model(
image_size=image_size,
patch_size=patch_size,
outer_block_embedding_dim=outer_block_embedding_dim,
inner_block_embedding_dim=inner_block_embedding_dim,
num_transformer_layer=num_transformer_layer,
outer_block_num_heads=outer_block_num_heads,
inner_block_num_heads=inner_block_num_heads,
mlp_ratio=mlp_ratio,
attention_dropout=attention_dropout,
projection_dropout=projection_dropout,
first_stride=first_stride,
):
inputs = layers.Input(shape=input_shape)
# Image augment
x = data_augmentation(inputs)
# extract pixel embedding
pixel_embedding, num_patches, inner_patch_size = pixel_embed(
x, image_size, patch_size, inner_block_embedding_dim, first_stride
)
num_pixels = inner_patch_size[0] * inner_patch_size[1]
# extract patch embedding
patch_embedding = patch_embed(
pixel_embedding,
num_patches,
outer_block_embedding_dim,
inner_block_embedding_dim,
num_pixels,
)
# create multiple layers of the TNT block.
for _ in range(num_transformer_layer):
pixel_embedding, patch_embedding = transformer_in_transformer_block(
pixel_embedding,
patch_embedding,
outer_block_embedding_dim,
inner_block_embedding_dim,
num_pixels,
outer_block_num_heads,
inner_block_num_heads,
mlp_ratio,
attention_dropout,
projection_dropout,
)
patch_embedding = layers.LayerNormalization(epsilon=1e-5)(patch_embedding)
x = layers.GlobalAvgPool1D()(patch_embedding)
outputs = layers.Dense(num_classes, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
###Output
_____no_output_____
###Markdown
Train on CIFAR-100
###Code
model = get_model()
model.summary()
model.compile(
loss=keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing),
optimizer=tfa.optimizers.AdamW(
learning_rate=learning_rate, weight_decay=weight_decay
),
metrics=[
keras.metrics.CategoricalAccuracy(name="accuracy"),
keras.metrics.TopKCategoricalAccuracy(5, name="top-5-accuracy"),
],
)
history = model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=num_epochs,
validation_split=validation_split,
)
###Output
_____no_output_____
###Markdown
Visualize the training progress of the model.
###Code
plt.plot(history.history["loss"], label="train_loss")
plt.plot(history.history["val_loss"], label="val_loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.title("Train and Validation Losses Over Epochs", fontsize=14)
plt.legend()
plt.grid()
plt.show()
plt.plot(history.history["accuracy"], label="train_accuracy")
plt.plot(history.history["val_accuracy"], label="val_accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.title("Train and Validation Accuracies Over Epochs", fontsize=14)
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Let's display the final results of the test on CIFAR-100.
###Code
loss, accuracy, top_5_accuracy = model.evaluate(x_test, y_test)
print(f"Test loss: {round(loss, 2)}")
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
###Output
_____no_output_____ |
15-1 Notebook interaction examples.ipynb | ###Markdown
via nbrun
###Code
nb = 'analyseSubStructure.ipynb'
args = {'mol': 'OCCc1c(C)[n+](cs1)Cc2cnc(C)nc2N'}
run_notebook(nb, out=f'convertSMILES/executed_notebook_{args["mol"]}', nb_kwargs=args)
###Output
_____no_output_____
###Markdown
via nbparameterise
###Code
# Example: analyseSubStructure.ipynb
mol = 'OCCc1c(C)[n+](cs1)Cc2cnc(C)nc2N'
import nbformat
from nbparameterise import (extract_parameters, replace_definitions, values)
with open("analyseSubStructure.ipynb") as f:
nb = nbformat.read(f, as_version=4)
# Get a list of Parameter objects
orig = extract_parameters(nb)
# Update one or more parameters,
# replaces ‘OCCc1c(C)[n+](cs1)Cc2cnc(C)nc2N’ (Thiamin)
params = values(orig, mol='CC(=O)OCCC(/C)=C\C[C@H](C(C)=C)CCC=C')
# Make a notebook object with these definitions
new_nb = replace_definitions(nb, params)
###Output
_____no_output_____ |
.ipynb_checkpoints/Basic_classification_stats-checkpoint.ipynb | ###Markdown
Basic classification stats These script is heavily inspired from https://github.com/zooniverse/Data-digging/blob/master/scripts_GeneralPython/basic_classification_processing.py
###Code
import numpy as np
import pandas as pd
import json
from datetime import date
def gini(list_of_values):
sorted_list = sorted(list_of_values)
height, area = 0, 0
for value in sorted_list:
height += value
area += height - value / 2.
fair_area = height * len(list_of_values) / 2
return (fair_area - area) / fair_area
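# Quick illustrative check (made-up values, not from the project data):
# a perfectly even distribution gives a Gini coefficient of 0,
# while a highly skewed one approaches 1, e.g.
# gini([5, 5, 5, 5]) -> 0.0 and gini([0, 0, 0, 20]) -> 0.75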
###Output
_____no_output_____
###Markdown
Space Fluff, at the time of Beta, has 3 workflows: - 'Classify!' - 'Classify on the go!' - 'Hardcore version!'These are all based on simple multiple choice questions.
###Code
workflow_classify = '~/Desktop/SUNDIAL/images/beta_classify-classifications.csv'
workflow_on_the_go = '~/Desktop/SUNDIAL/images/beta_classify-on-the-go-classifications.csv'
workflow_hardcore = '~/Desktop/SUNDIAL/images/beta_classify-hardcore-edition-classifications.csv'
workflow_all = '~/Desktop/SUNDIAL/images/beta_space-fluff-classifications.csv'
classifications_classify = pd.read_csv(workflow_classify)
classifications_on_the_go = pd.read_csv(workflow_on_the_go)
classifications_hardcore = pd.read_csv(workflow_hardcore)
classifications_all = pd.read_csv(workflow_all)
###Output
_____no_output_____
###Markdown
If you want to select a certain period of time, you can use these snippets:
###Code
## To select between two dates
#classifications['created_at'] = pd.to_datetime(classifications['created_at'])
#classifications[(classifications['created_at'] > pd.Timestamp(date(2020,10,20)))& (classifications['created_at'] < pd.Timestamp(date(2020,10,21)))]
## To select after a certain date
#classifications['created_at'] = pd.to_datetime(classifications['created_at'])
#classifications = classifications[classifications['created_at'] > pd.Timestamp(date(2020,10,20))]
## Remember to turn the data type back to strings
#classifications['created_at'] = str(classifications['created_at'])
###Output
_____no_output_____
###Markdown
In our case, we want to select the classifications made during the Beta Phase.
###Code
#for all classifications: 32229
classifications_all = classifications_all[32239:]
#for 'Classify!': 6295
classifications_classify = classifications_classify[6295:]
#for 'Classify on the go!': 19989
classifications_on_the_go = classifications_on_the_go[19989:]
#for 'Classify: hardcore edition!': 5945
classifications_hardcore = classifications_hardcore[5945:]
# grab the subject counts
n_subj_tot_all = len(classifications_all.subject_data.unique())
by_subject_all = classifications_all.groupby('subject_data')
subj_class_all = by_subject_all.created_at.aggregate('count')
n_subj_tot_classify = len(classifications_classify.subject_data.unique())
by_subject_classify = classifications_classify.groupby('subject_data')
subj_class_classify = by_subject_classify.created_at.aggregate('count')
n_subj_tot_on_the_go = len(classifications_on_the_go.subject_data.unique())
by_subject_on_the_go = classifications_on_the_go.groupby('subject_data')
subj_class_on_the_go = by_subject_on_the_go.created_at.aggregate('count')
n_subj_tot_hardcore = len(classifications_hardcore.subject_data.unique())
by_subject_hardcore = classifications_hardcore.groupby('subject_data')
subj_class_hardcore = by_subject_hardcore.created_at.aggregate('count')
# basic stats on how classified the subjects are
subj_class_mean_all = np.mean(subj_class_all)
subj_class_med_all = np.median(subj_class_all)
subj_class_min_all = np.min(subj_class_all)
subj_class_max_all = np.max(subj_class_all)
subj_class_mean_classify = np.mean(subj_class_classify)
subj_class_med_classify = np.median(subj_class_classify)
subj_class_min_classify = np.min(subj_class_classify)
subj_class_max_classify = np.max(subj_class_classify)
subj_class_mean_on_the_go = np.mean(subj_class_on_the_go)
subj_class_med_on_the_go = np.median(subj_class_on_the_go)
subj_class_min_on_the_go = np.min(subj_class_on_the_go)
subj_class_max_on_the_go = np.max(subj_class_on_the_go)
subj_class_mean_hardcore = np.mean(subj_class_hardcore)
subj_class_med_hardcore = np.median(subj_class_hardcore)
subj_class_min_hardcore = np.min(subj_class_hardcore)
subj_class_max_hardcore = np.max(subj_class_hardcore)
all_users_all = classifications_all.user_name.unique()
by_user_all = classifications_all.groupby('user_name')
all_users_classify = classifications_classify.user_name.unique()
by_user_classify = classifications_classify.groupby('user_name')
all_users_on_the_go = classifications_on_the_go.user_name.unique()
by_user_on_the_go = classifications_on_the_go.groupby('user_name')
all_users_hardcore = classifications_hardcore.user_name.unique()
by_user_hardcore = classifications_hardcore.groupby('user_name')
# get total classification and user counts for all classifications
n_class_tot_all = len(classifications_all)
n_users_tot_all = len(all_users_all)
unregistered_all = [q.startswith("not-logged-in") for q in all_users_all]
n_unreg_all = sum(unregistered_all)
n_reg_all = n_users_tot_all - n_unreg_all
# get total classification and user counts for Classify
n_class_tot_classify = len(classifications_classify)
n_users_tot_classify = len(all_users_classify)
unregistered_classify = [q.startswith("not-logged-in") for q in all_users_classify]
n_unreg_classify = sum(unregistered_classify)
n_reg_classify = n_users_tot_classify - n_unreg_classify
# get total classification and user counts for on the go
n_class_tot_on_the_go = len(classifications_on_the_go)
n_users_tot_on_the_go = len(all_users_on_the_go)
unregistered_on_the_go = [q.startswith("not-logged-in") for q in all_users_on_the_go]
n_unreg_on_the_go = sum(unregistered_on_the_go)
n_reg_on_the_go = n_users_tot_on_the_go - n_unreg_on_the_go
# get total classification and user counts for hardcore
n_class_tot_hardcore = len(classifications_hardcore)
n_users_tot_hardcore = len(all_users_hardcore)
unregistered_hardcore = [q.startswith("not-logged-in") for q in all_users_hardcore]
n_unreg_hardcore = sum(unregistered_hardcore)
n_reg_hardcore = n_users_tot_hardcore - n_unreg_hardcore
nclass_byuser_all = by_user_all.created_at.aggregate('count')
nclass_byuser_ranked_all = nclass_byuser_all.copy()
nclass_byuser_ranked_all.sort_values(ascending=False)
nclass_byuser_classify = by_user_classify.created_at.aggregate('count')
nclass_byuser_ranked_classify = nclass_byuser_classify.copy()
nclass_byuser_ranked_classify.sort_values(ascending=False)
nclass_byuser_hardcore = by_user_hardcore.created_at.aggregate('count')
nclass_byuser_ranked_hardcore = nclass_byuser_hardcore.copy()
nclass_byuser_ranked_hardcore.sort_values(ascending=False)
nclass_byuser_on_the_go = by_user_on_the_go.created_at.aggregate('count')
nclass_byuser_ranked_on_the_go = nclass_byuser_on_the_go.copy()
nclass_byuser_ranked_on_the_go.sort_values(ascending=False)
# very basic stats
nclass_med_all = np.median(nclass_byuser_all)
nclass_mean_all = np.mean(nclass_byuser_all)
nclass_med_classify = np.median(nclass_byuser_classify)
nclass_mean_classify = np.mean(nclass_byuser_classify)
nclass_med_on_the_go = np.median(nclass_byuser_on_the_go)
nclass_mean_on_the_go = np.mean(nclass_byuser_on_the_go)
nclass_med_hardcore = np.median(nclass_byuser_hardcore)
nclass_mean_hardcore = np.mean(nclass_byuser_hardcore)
# Gini coefficient - see the comments above the gini() function for more notes
nclass_gini_all = gini(nclass_byuser_all)
nclass_gini_classify = gini(nclass_byuser_classify)
nclass_gini_on_the_go = gini(nclass_byuser_on_the_go)
nclass_gini_hardcore = gini(nclass_byuser_hardcore)
print("\nOverall:\n\n",n_class_tot_all,"classifications of",n_subj_tot_all,"subjects by",n_users_tot_all,"classifiers,")
print(n_reg_all,"registered and",n_unreg_all,"unregistered.\n")
print("That's %.2f classifications per subject on average (median = %.1f)." % (subj_class_mean_all, subj_class_med_all))
print("The most classified subject has ",subj_class_max_all,"classifications; the least-classified subject has",subj_class_min_all,".\n")
print("Median number of classifications per user:",nclass_med_all)
print("Mean number of classifications per user: %.2f" % nclass_mean_all)
print("\nTop 10 most prolific classifiers:\n",nclass_byuser_ranked_all.head(10))
print("\n\nGini coefficient for classifications by user: %.2f\n" % nclass_gini_all)
print("\nOverall:\n\n",n_class_tot_classify,"classifications of",n_subj_tot_classify,"subjects by",n_users_tot_classify,"classifiers,")
print(n_reg_classify,"registered and",n_unreg_classify,"unregistered.\n")
print("That's %.2f classifications per subject on average (median = %.1f)." % (subj_class_mean_classify, subj_class_med_classify))
print("The most classified subject has ",subj_class_max_classify,"classifications; the least-classified subject has",subj_class_min_classify,".\n")
print("Median number of classifications per user:",nclass_med_classify)
print("Mean number of classifications per user: %.2f" % nclass_mean_classify)
print("\nTop 10 most prolific classifiers:\n",nclass_byuser_ranked_classify.head(10))
print("\n\nGini coefficient for classifications by user: %.2f\n" % nclass_gini_classify)
print("\nOverall:\n\n",n_class_tot_on_the_go,"classifications of",n_subj_tot_on_the_go,"subjects by",n_users_tot_on_the_go,"classifiers,")
print(n_reg_on_the_go,"registered and",n_unreg_on_the_go,"unregistered.\n")
print("That's %.2f classifications per subject on average (median = %.1f)." % (subj_class_mean_on_the_go, subj_class_med_on_the_go))
print("The most classified subject has ",subj_class_max_on_the_go,"classifications; the least-classified subject has",subj_class_min_on_the_go,".\n")
print("Median number of classifications per user:",nclass_med_on_the_go)
print("Mean number of classifications per user: %.2f" % nclass_mean_on_the_go)
print("\nTop 10 most prolific classifiers:\n",nclass_byuser_ranked_on_the_go.head(10))
print("\n\nGini coefficient for classifications by user: %.2f\n" % nclass_gini_on_the_go)
print("\nOverall:\n\n",n_class_tot_hardcore,"classifications of",n_subj_tot_hardcore,"subjects by",n_users_tot_hardcore,"classifiers,")
print(n_reg_hardcore,"registered and",n_unreg_hardcore,"unregistered.\n")
print("That's %.2f classifications per subject on average (median = %.1f)." % (subj_class_mean_hardcore, subj_class_med_hardcore))
print("The most classified subject has ",subj_class_max_hardcore,"classifications; the least-classified subject has",subj_class_min_hardcore,".\n")
print("Median number of classifications per user:",nclass_med_hardcore)
print("Mean number of classifications per user: %.2f" % nclass_mean_hardcore)
print("\nTop 10 most prolific classifiers:\n",nclass_byuser_ranked_hardcore.head(10))
print("\n\nGini coefficient for classifications by user: %.2f\n" % nclass_gini_hardcore)
###Output
Overall:
1806 classifications of 336 subjects by 49 classifiers,
27 registered and 22 unregistered.
That's 5.38 classifications per subject on average (median = 5.0).
The most classified subject has 15 classifications; the least-classified subject has 1 .
Median number of classifications per user: 4.0
Mean number of classifications per user: 36.86
Top 10 most prolific classifiers:
user_name
Bbllee75 65
Budgieye 4
Davinelulinvega 3
KJDL80 21
KLIMCAK-62 14
Liava 756
MonkeyDragonCat 16
Mtfd2222 2
Nelllythetardigrade 4
Omniua 12
Name: created_at, dtype: int64
Gini coefficient for classifications by user: 0.84
|
notebooks/inference_client.ipynb | ###Markdown
Send Requests to Triton Inference Server with FastNN Client Examples: **"distilbert-squad" Model**
###Code
from fastnn.processors.nlp.question_answering import TransformersQAProcessor
context = ["""Albert Einstein was born at Ulm, in Württemberg, Germany, on March 14, 1879. Six weeks later the family moved to Munich, where he later on began his schooling at the Luitpold Gymnasium.
Later, they moved to Italy and Albert continued his education at Aarau, Switzerland and in 1896 he entered the Swiss Federal Polytechnic School in Zurich to be trained as a teacher in physics and mathematics.
In 1901, the year he gained his diploma, he acquired Swiss citizenship and, as he was unable to find a teaching post, he accepted a position as technical assistant in the Swiss Patent Office. In 1905 he obtained his doctor’s degree.
During his stay at the Patent Office, and in his spare time, he produced much of his remarkable work and in 1908 he was appointed Privatdozent in Berne. In 1909 he became Professor Extraordinary at Zurich, in 1911 Professor of
Theoretical Physics at Prague, returning to Zurich in the following year to fill a similar post. In 1914 he was appointed Director of the Kaiser Wilhelm Physical Institute and Professor in the University of Berlin. He became a
German citizen in 1914 and remained in Berlin until 1933 when he renounced his citizenship for political reasons and emigrated to America to take the position of Professor of Theoretical Physics at Princeton. He became a United
States citizen in 1940 and retired from his post in 1945. After World War II, Einstein was a leading figure in the World Government Movement, he was offered the Presidency of the State of Israel, which he declined, and he
collaborated with Dr. Chaim Weizmann in establishing the Hebrew University of Jerusalem. Einstein always appeared to have a clear view of the problems of physics and the determination to solve them. He had a strategy of
his own and was able to visualize the main stages on the way to his goal. He regarded his major achievements as mere stepping-stones for the next advance. At the start of his scientific work, Einstein realized the
inadequacies of Newtonian mechanics and his special theory of relativity stemmed from an attempt to reconcile the laws of mechanics with the laws of the electromagnetic field. He dealt with classical problems of
statistical mechanics and problems in which they were merged with quantum theory: this led to an explanation of the Brownian movement of molecules. He investigated the thermal properties of light with a low radiation
density and his observations laid the foundation of the photon theory of light. In his early days in Berlin, Einstein postulated that the correct interpretation of the special theory of relativity must also furnish a
theory of gravitation and in 1916 he published his paper on the general theory of relativity. During this time he also contributed to the problems of the theory of radiation and statistical mechanics."""]
query = ["When was Einstein born?"]
# Specify tokenizer for encoding
model_name_or_path = "distilbert-base-cased-distilled-squad"
processor = TransformersQAProcessor(model_name_or_path=model_name_or_path)
examples, features, dataloader = processor.process_batch(query=query*8, context=context*8, mini_batch_size=8, use_gpu=True)
import torch
import numpy as np
from fastnn.client import FastNNClient
client = FastNNClient(url="127.0.0.1:8001", model_name="distilbert-squad", model_version="1", client_type="grpc")
#client = FastNNClient(url="127.0.0.1:8000", model_name="distilbert-squad", model_version="1", client_type="http")
#%%timeit
all_outputs = []
with torch.no_grad():
for batch in dataloader:
response = client.request(batch)
start_logits = response.as_numpy('output__0')
start_logits = np.asarray(start_logits, dtype=np.float32)
end_logits = response.as_numpy('output__1')
end_logits = np.asarray(end_logits, dtype=np.float32)
example_indices = response.as_numpy('output__2')
example_indices = np.asarray(example_indices, dtype=np.int64)
output = (torch.from_numpy(start_logits), torch.from_numpy(end_logits), torch.from_numpy(example_indices))
all_outputs.append(output)
all_outputs
from fastnn.processors.cv.object_detection import ObjectDetectionProcessor
# COCO dataset category names
label_strings = [
'__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',
'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',
'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',
'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
]
processor = ObjectDetectionProcessor(label_strings=label_strings)
# Replace "img_dir_path" with root directory of .png or .jpeg images
dataloader = processor.process_batch(dir_path="./img_dir_path", mini_batch_size=2, use_gpu=False)
###Output
_____no_output_____
###Markdown
"dbmdz/bert-large-cased-finetuned-conll03-english"
###Code
from fastnn.nn.token_tagging import NERModule
from fastnn.processors.nlp.token_tagging import TransformersTokenTaggingProcessor
from fastnn.exporting import TorchScriptExporter
from fastnn.client import FastNNClient
context = ["""Albert Einstein was born at Ulm, in Württemberg, Germany, on March 14, 1879. Six weeks later the family moved to Munich, where he later on began his schooling at the Luitpold Gymnasium. Later, they moved to Italy and Albert continued his education at Aarau, Switzerland and in 1896 he entered the Swiss Federal Polytechnic School in Zurich to be trained as a teacher in physics and mathematics. In 1901, the year he gained his diploma, he acquired Swiss citizenship and, as he was unable to find a teaching post, he accepted a position as technical assistant in the Swiss Patent Office. In 1905 he obtained his doctor’s degree.
During his stay at the Patent Office, and in his spare time, he produced much of his remarkable work and in 1908 he was appointed Privatdozent in Berne. In 1909 he became Professor Extraordinary at Zurich, in 1911 Professor of Theoretical Physics at Prague, returning to Zurich in the following year to fill a similar post. In 1914 he was appointed Director of the Kaiser Wilhelm Physical Institute and Professor in the University of Berlin. He became a German citizen in 1914 and remained in Berlin until 1933 when he renounced his citizenship for political reasons and emigrated to America to take the position of Professor of Theoretical Physics at Princeton*. He became a United States citizen in 1940 and retired from his post in 1945.
After World War II, Einstein was a leading figure in the World Government Movement, he was offered the Presidency of the State of Israel, which he declined, and he collaborated with Dr. Chaim Weizmann in establishing the Hebrew University of Jerusalem.
Einstein always appeared to have a clear view of the problems of physics and the determination to solve them. He had a strategy of his own and was able to visualize the main stages on the way to his goal. He regarded his major achievements as mere stepping-stones for the next advance.
At the start of his scientific work, Einstein realized the inadequacies of Newtonian mechanics and his special theory of relativity stemmed from an attempt to reconcile the laws of mechanics with the laws of the electromagnetic field. He dealt with classical problems of statistical mechanics and problems in which they were merged with quantum theory: this led to an explanation of the Brownian movement of molecules. He investigated the thermal properties of light with a low radiation density and his observations laid the foundation of the photon theory of light.
In his early days in Berlin, Einstein postulated that the correct interpretation of the special theory of relativity must also furnish a theory of gravitation and in 1916 he published his paper on the general theory of relativity. During this time he also contributed to the problems of the theory of radiation and statistical mechanics.""",]
model_name_or_path = "dbmdz/bert-large-cased-finetuned-conll03-english"
label_strings = [
"O", # Outside of a named entity
"B-MISC", # Beginning of a miscellaneous entity right after another miscellaneous entity
"I-MISC", # Miscellaneous entity
"B-PER", # Beginning of a person's name right after another person's name
"I-PER", # Person's name
"B-ORG", # Beginning of an organisation right after another organisation
"I-ORG", # Organisation
"B-LOC", # Beginning of a location right after another location
"I-LOC" # Location
]
processor = TransformersTokenTaggingProcessor(model_name_or_path, label_strings=label_strings)
dataloader = processor.process_batch(context*2, mini_batch_size=2, use_gpu=False)
client = FastNNClient(url="127.0.0.1:8001", model_name="dbmdz.bert-large-cased-finetuned-conll03-english", model_version="1", client_type="grpc")
#client = FastNNClient(url="127.0.0.1:8000", model_name="dbmdz.bert-large-cased-finetuned-conll03-english", model_version="1", client_type="http")
import time
import torch
import numpy as np
start = time.time()
all_outputs = []
with torch.no_grad():
for batch in dataloader:
response = client.request(batch)
logits = response.as_numpy('output__0')
logits = np.asarray(logits, dtype=np.float32)
input_ids = response.as_numpy('output__1')
input_ids = np.asarray(input_ids, dtype=np.int64)
output = (torch.from_numpy(logits), torch.from_numpy(input_ids))
all_outputs.append(output)
end = time.time()
print(end-start)
results = processor.process_output_batch(all_outputs)
results
###Output
_____no_output_____
###Markdown
**"fasterrcnn-resnet50" Model**
###Code
import torch
import numpy as np
from fastnn.client import FastNNClient
client = FastNNClient(url="127.0.0.1:8000", model_name="fasterrcnn-resnet50-cpu", model_version="1", client_type="grpc")
client = FastNNClient(url="127.0.0.1:8001", model_name="fasterrcnn-resnet50-cpu", model_version="1", client_type="http")
#%%timeit
all_outputs = []
with torch.no_grad():
for batch in dataloader:
response = client.request(*batch)
boxes = response.as_numpy('output__0')
boxes = np.asarray(boxes, dtype=np.float32)
labels = response.as_numpy('output__1')
labels = np.asarray(labels, dtype=np.int64)
scores = response.as_numpy('output__2')
scores = np.asarray(scores, dtype=np.float32)
output = (torch.from_numpy(boxes), torch.from_numpy(labels), torch.from_numpy(scores))
all_outputs.append(output)
all_outputs
###Output
_____no_output_____ |
doc/3. Verifying causal fairness.ipynb | ###Markdown
Verifying causal fairness using JusticiaIn this tutorial, we apply Justicia to verify causal fairness, more specifically path-specific causal fairness (PSCF)\[1,2\]. Path-specific causal fairness states that the outcome of a classifier should not depend directly on a sensitive attribute, but may depend on it indirectly through mediator covariates that are relevant for prediction. For example, the outcome of college admission should not rely on the sensitive attribute "sex", but may depend on it indirectly through a mediator covariate such as "years of experience". To satisfy path-specific causal fairness, the mediator covariate of the minority group (e.g., female) takes values as if the individual belonged to the majority group (e.g., male). Hence, in this hypothetical world, all attributes of the minority group remain the same except the mediator covariates.
###Code
from graphviz import Digraph
dot = Digraph()
dot.edge('sex', 'years of experience')
dot.edge('years of experience', 'outcome')
dot
###Output
_____no_output_____
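###Markdown
To make the intervention concrete before running Justicia, the following toy sketch shows the kind of hypothetical-world construction that path-specific causal fairness reasons about: rows of the minority group keep all of their attributes except the mediator, which is set as if they belonged to the majority group. The column names here ('sex', 'years_of_experience', 'gpa') are purely illustrative assumptions, and this is not how Justicia implements the verification internally.
###Code
import pandas as pd

# toy data with hypothetical columns, only to illustrate the intervention
toy = pd.DataFrame({
    "sex": ["male", "male", "female", "female"],
    "years_of_experience": [6.0, 8.0, 3.0, 4.0],  # mediator covariate
    "gpa": [3.1, 3.4, 3.6, 3.8],                  # other covariate, left untouched
})

# hypothetical world: the mediator of the minority group takes the value it would
# have under the majority group (here, the majority-group mean); everything else unchanged
majority_mediator = toy.loc[toy["sex"] == "male", "years_of_experience"].mean()
toy_pscf = toy.copy()
toy_pscf.loc[toy_pscf["sex"] == "female", "years_of_experience"] = majority_mediator
toy_pscf
###Output
_____no_output_____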
###Markdown
Outline of the tutorial
1. Learn a classifier on a dataset
2. Learn the majority (most favored) sensitive group using Justicia
3. Define mediator attributes and input them along with the majority group information to Justicia
4. Verify path-specific causal fairness
###Code
# standard library
import sklearn.metrics
from sklearn.model_selection import train_test_split
from pyrulelearn.imli import imli
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn import tree
import seaborn as sns
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Markdown, display
import sys
import pandas as pd
sys.path.append("..")
# From this framework
import justicia.utils
from justicia.metrics import Metric
sys.path.append("..")
from data.objects.adult import Adult
from data.objects.titanic import Titanic
###Output
_____no_output_____
###Markdown
Prepare a dataset
###Code
verbose = False
dataset = Adult(verbose=verbose, config=0) # config defines configuration for sensitive groups
df = dataset.get_df()
# get X,y
X = df.drop(['target'], axis=1)
y = df['target']
display(Markdown("#### Sensitive attributes"))
print(dataset.known_sensitive_attributes)
display(Markdown("#### Feature matrix"))
print("Before one hot encoding")
X.head()
# one-hot encoding for categorical features (this takes care of Label encoding automatically)
X = justicia.utils.get_one_hot_encoded_df(X,dataset.categorical_attributes)
print("After->")
X
###Output
After->
###Markdown
Train a classifier
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle = True, random_state=2) # 70% training and 30% test
clf = LogisticRegression(class_weight='balanced', solver='liblinear', random_state=0)
# clf = tree.DecisionTreeClassifier(max_depth=5)
# clf = SVC(kernel="linear")
clf.fit(X_train.values, y_train.values)
print("\nTrain Accuracy:", sklearn.metrics.accuracy_score(clf.predict(X_train.values),y_train.values))
print("Test Accuracy:", sklearn.metrics.accuracy_score(clf.predict(X_test.values),y_test.values))
###Output
Train Accuracy: 0.764946764946765
Test Accuracy: 0.7675775253300583
###Markdown
Learn most favored groupWe now learn the most favored group based on disparate impact and the sensitive attributes. This group information is required to verify path-specific causal fairness. More details can be found in tutorial [2](./3\.\ Verifying\ causal\ fairness.ipynb).
###Code
metric = Metric(model=clf, data=X_test, sensitive_attributes=dataset.known_sensitive_attributes, verbose=False, encoding="Enum-dependency")
metric.compute()
print("Sensitive attributes", metric.given_sensitive_attributes)
print("Disparate Impact:", metric.disparate_impact_ratio)
print("Statistical Parity:", metric.statistical_parity_difference)
print("Time taken", metric.time_taken, "seconds")
display(Markdown("#### Most Favored group"))
print((", ").join([" ".join([each_sensitive_attribute[0], each_sensitive_attribute[1][0], str(each_sensitive_attribute[1][1])]) for each_sensitive_attribute in list(metric.most_favored_group.items())]))
###Output
Sensitive attributes ['race', 'sex']
Disparate Impact: 0.6438788473156194
Statistical Parity: 0.1392954
Time taken 1.9022130966186523 seconds
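###Markdown
As a reminder of what the two reported metrics measure, the sketch below computes disparate impact and statistical parity directly from a set of predictions for a single binary sensitive attribute. These are only the textbook definitions with made-up numbers; Justicia estimates the group-wise probabilities formally from the model rather than empirically from one sample, and it handles compound groups over several sensitive attributes.
###Code
import numpy as np

def selection_rates(y_pred, sensitive):
    """Pr[Y_hat = 1 | A = a] for every value a of the sensitive attribute."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    return {a: y_pred[sensitive == a].mean() for a in np.unique(sensitive)}

# illustrative predictions and a hypothetical binary sensitive attribute
y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sex = np.array(["male"] * 4 + ["female"] * 4)

rates = selection_rates(y_hat, sex)
disparate_impact = min(rates.values()) / max(rates.values())    # ratio of selection rates
statistical_parity = max(rates.values()) - min(rates.values())  # difference of selection rates
print(rates, disparate_impact, statistical_parity)
###Output
_____no_output_____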
###Markdown
Verify path-specific causal fairnessWe now pass metric.most_favored_group as the most favored group when computing path-specific causal fairness. Additionally, we define the mediator attributes.
###Code
display(Markdown("#### Mediator attributes"))
mediator_attributes = ['education-num', 'capital-gain']
# mediator_attributes = dataset.mediator_attributes
print(mediator_attributes)
metric_pscf = Metric(model=clf, data=X_test, sensitive_attributes=dataset.known_sensitive_attributes, mediator_attributes=mediator_attributes, major_group=metric.most_favored_group, verbose=False, encoding="Enum-dependency")
metric_pscf.compute()
display(Markdown("#### Path-specific causal results"))
print("Disparate Impact:", metric_pscf.disparate_impact_ratio)
print("Statistical Parity:", metric_pscf.statistical_parity_difference)
display(Markdown("#### Do metrics change in path-specific causal fairness?"))
print("Disparate impact", "increases" if metric.disparate_impact_ratio < metric_pscf.disparate_impact_ratio else ("decreases" if metric.disparate_impact_ratio > metric_pscf.disparate_impact_ratio else 'is equal'))
print("Statistical parity", "increases" if metric.statistical_parity_difference < metric_pscf.statistical_parity_difference else ("decreases" if metric.statistical_parity_difference > metric_pscf.statistical_parity_difference else 'is equal'))
###Output
_____no_output_____
###Markdown
Below we show detailed results on positive predictive value (PPV) of the classifier for different sensitive groups. We observe that PPV usually differs when some attributes are considered mediator attributes.
###Code
groups = [(", ").join([each_sensitive_attribute[0] if each_sensitive_attribute[1] == 1 else "not " + each_sensitive_attribute[0] for each_sensitive_attribute in group_info[0]]) for group_info in metric.sensitive_group_statistics]
PPVs = [group_info[1] for group_info in metric.sensitive_group_statistics]
groups_pscf = [(", ").join([each_sensitive_attribute[0] if each_sensitive_attribute[1] == 1 else "not " + each_sensitive_attribute[0] for each_sensitive_attribute in group_info[0]]) for group_info in metric_pscf.sensitive_group_statistics]
PPVs_pscf = [group_info[1] for group_info in metric_pscf.sensitive_group_statistics]
df = pd.DataFrame({
"Group" : groups + groups_pscf,
"PPV" : PPVs + PPVs_pscf,
'causal' : ["Causal" for _ in range(len(PPVs))] + ["" for _ in range(len(PPVs))]
})
display(Markdown("#### Most Favored group"))
print((", ").join([each_sensitive_attribute[0] if each_sensitive_attribute[1] == 1 else "not " + each_sensitive_attribute[0] for each_sensitive_attribute in list(metric.most_favored_group.items())]))
display(Markdown("#### Least Favored group"))
print((", ").join([each_sensitive_attribute[0] if each_sensitive_attribute[1] == 1 else "not " + each_sensitive_attribute[0] for each_sensitive_attribute in list(metric.least_favored_group.items())]))
display(Markdown("#### Most Favored group (Causal)"))
print((", ").join([each_sensitive_attribute[0] if each_sensitive_attribute[1] == 1 else "not " + each_sensitive_attribute[0] for each_sensitive_attribute in list(metric_pscf.most_favored_group.items())]))
display(Markdown("#### Least Favored group (Causal)"))
print((", ").join([each_sensitive_attribute[0] if each_sensitive_attribute[1] == 1 else "not " + each_sensitive_attribute[0] for each_sensitive_attribute in list(metric_pscf.least_favored_group.items())]))
fontsize = 22
labelsize = 18
sns.set_style("whitegrid", {'axes.grid' : True})
sns.barplot(x='Group', y='PPV', hue='causal', data=df, palette='colorblind')
plt.xticks(fontsize=labelsize-2, rotation=90)
plt.yticks(fontsize=labelsize-2)
plt.ylabel(r"$\Pr[\hat{Y} = 1]$", fontsize=labelsize)
plt.xlabel("Sensitive groups", fontsize=labelsize)
plt.title(r"$\Pr[\hat{Y} = 1 | A=\mathbf{a}]$", fontsize=labelsize)
plt.legend(loc='upper center', fontsize=labelsize-4, bbox_to_anchor=(1.2, 1.05), fancybox=True, shadow=True)
plt.show()
plt.clf()
###Output
_____no_output_____ |
assignment3_problem1.ipynb | ###Markdown
Problem A Imputing missing value
###Code
# imports for downloading and preparing the data
import numpy as np
import pandas as pd
import requests
from io import StringIO

url_1 = 'https://archive.ics.uci.edu/ml/machine-learning-databases' \
        '/credit-screening/crx.data'
attributes_1 = (
'A1', 'A2', 'A3',
'A4', 'A5', 'A6',
'A7', 'A8', 'A9',
'A10', 'A11', 'A12',
'A13', 'A14', 'A15',
'class')
df_1 = pd.read_csv(
StringIO(requests.get(url_1).content.decode('utf-8')), names = attributes_1)
df_1
df_1_1 = df_1.replace('?', np.nan)
df_1_1[['A2','A14']] = df_1_1[["A2",'A14']].astype(float)
df_1_1.mean()
df_1_1.isnull().sum()
values = {'A1':df_1_1["A1"].mode()[0], 'A2':df_1_1["A2"].mean(), 'A3':df_1_1["A3"].mean(),
'A4':df_1_1["A4"].mode()[0], 'A5':df_1_1["A5"].mode()[0], 'A6':df_1_1["A6"].mode()[0],
'A7':df_1_1["A7"].mode()[0], 'A8':df_1_1["A8"].mean(), 'A9':df_1_1["A9"].mode()[0],
'A10':df_1_1["A10"].mode()[0], 'A11':df_1_1["A11"].mean(), 'A12':df_1_1["A12"].mode()[0],
'A13':df_1_1["A13"].mode()[0], 'A14':df_1_1["A14"].mean(), 'A15':df_1_1["A15"].mean()}
df_1_1 = df_1_1.fillna(values)
df_1_1.isnull().sum()
df_1_final = pd.get_dummies(df_1_1, prefix=['A1', 'A4', 'A5', 'A6', 'A7', 'A9', 'A10', 'A12',
'A13','class'])
df_1_final[['A2','A3','A8','A11','A14','A15']] = df_1_final[['A2','A3','A8','A11','A14','A15']] / df_1_final[['A2','A3','A8','A11','A14','A15']].max()
df_1_final = df_1_final.sample(frac = 1, random_state = 7021).reset_index(drop = True)
df_1_final = df_1_final.drop(columns = ['class_-','A1_a','A4_u','A5_g','A6_d','A7_bb','A9_t','A10_f','A12_t','A13_g'])
###Output
_____no_output_____
###Markdown
Split set
###Code
N_train_1 = round(0.75 * df_1_final.shape[0])
N_train_1
X_1 = df_1_final.iloc[:,:-1]
y_1 = df_1_final.iloc[:, -1]
X_train_1 = X_1.iloc[0:N_train_1,:]
X_test_1 = X_1.iloc[N_train_1:,:]
y_train_1 = y_1.iloc[:N_train_1]
y_test_1 = y_1.iloc[N_train_1:]
N_1, P_1 = X_train_1.shape
###Output
_____no_output_____
###Markdown
Cross validation
###Code
%matplotlib inline
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, plot_tree, export_text
import matplotlib.pyplot as plt
score_list = []
depths = []
for dep in range(1,21):
depths.append(dep)
clf_cross = DecisionTreeClassifier(
max_depth = dep, max_leaf_nodes = 2**dep, random_state = 7021)
scores = cross_val_score(clf_cross, X_1, y_1, cv=4)
score_list.append(scores.mean())
print(depths,score_list)
plt.plot(depths,score_list)
plt.show()
print('The best accuracy of CART is', max(score_list),'and the corresponding depth is',score_list.index(max(score_list))+1)
###Output
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] [0.8551132544696867, 0.8551132544696867, 0.8304543621454497, 0.8406455840838823, 0.8536597660975938, 0.834806425594838, 0.8203303535421428, 0.8101727382712731, 0.8174317784648475, 0.7985532329614196, 0.7971333512568894, 0.8087276515660707, 0.7970997445893265, 0.7985532329614196, 0.7956462562172335, 0.7956462562172335, 0.7956462562172335, 0.7956462562172335, 0.7956462562172335, 0.7956462562172335]
###Markdown
CART
###Code
depth = 1
clf_1 = DecisionTreeClassifier(
max_depth = depth, max_leaf_nodes = 2**depth, random_state = 7021)
clf_1.fit(X_train_1, y_train_1)
fig_1 = plt.figure(figsize = (8, 6))
_ = plot_tree(
clf_1, filled = False, fontsize = 8, rounded = True, feature_names = tuple(df_1_final.columns[0:-1]))
fig_1.savefig('CART_1.png')
print(export_text(clf_1, feature_names = tuple(df_1_final.columns[0:-1])))
def find_path(root, path, x, children_left, children_right):
path.append(root)
if root == x:
return True
left = False
right = False
if (children_left[root] != -1):
left = find_path(children_left[root], path, x, children_left, children_right)
if (children_right[root] != -1):
right = find_path(children_right[root], path, x, children_left, children_right)
if left or right:
return True
path.remove(root)
return False
def get_rule(path, children_left, attributes, feature, threshold):
mask = ''
for idx, node in enumerate(path):
# filter out the leaf node
if idx != len(path) - 1:
# left or right branch node
if (children_left[node] == path[idx + 1]):
mask += "('{}' <= {:.2f}) \t ".format(
attributes[feature[node]], threshold[node])
else:
mask += "('{}' > {:.2f}) \t ".format(
attributes[feature[node]], threshold[node])
mask = mask.replace("\t", "&", mask.count("\t") - 1)
mask = mask.replace("\t", "").strip()
return mask
children_left_1 = clf_1.tree_.children_left
children_right_1 = clf_1.tree_.children_right
feature_1 = clf_1.tree_.feature
threshold_1 = clf_1.tree_.threshold
leaf_id_1 = np.unique(clf_1.apply(X_train_1))
paths_1 = {}
for leaf in leaf_id_1:
path_leaf = []
find_path(0, path_leaf, leaf, children_left_1, children_right_1)
paths_1[leaf] = path_leaf
CART_rules_1 = {}
for leaf in paths_1:
CART_rules_1[leaf] = get_rule(paths_1[leaf], children_left_1, tuple(df_1_final.columns[0:-1]), feature_1, threshold_1)
leaf_id_1
CART_rules_1
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
total_nodes_1 = clf_1.tree_.node_count
leaf_nodes_1 = round(total_nodes_1 / 2)
branch_nodes_1 = total_nodes_1 // 2
initial_a_1 = np.array([i for i in clf_1.tree_.feature if i != -2])
initial_a_1
initial_b_1 = np.array([i for i in clf_1.tree_.threshold if i != -2])
initial_b_1
clf_1.score(X_train_1, y_train_1)
clf_1.score(X_test_1, y_test_1)
###Output
_____no_output_____
###Markdown
Rules and performance of CART
###Code
print('The decision rules in CART is ', CART_rules_1)
print('\nThe in-sample performance in CART is', clf_1.score(X_train_1, y_train_1))
print('\nThe out-of-sample performance in CART is', clf_1.score(X_test_1, y_test_1))
###Output
The decision rules in CART is {1: "('A9_f' <= 0.50)", 2: "('A9_f' > 0.50)"}
The in-sample performance in CART is 0.8571428571428571
The out-of-sample performance in CART is 0.8488372093023255
###Markdown
OCT Based on the structure of CART, the depth of OCT is also 1. After testing alpha = 0.3, 0.5, 0.6 and epsilon = 0.01, 0.001, 0.0001, 10e-7, 10e-8, I find that the best accuracy is obtained with alpha = 0.6 and epsilon = 0.01.
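For reference, the objective minimized by the Gurobi model below is the usual OCT trade-off between normalized misclassification loss and tree complexity, with $\alpha$ penalizing the number of splits and $\epsilon$ acting as a small margin in the left-branch split constraints: $$\min \; \frac{1}{\hat{L}} \sum_{t \in \mathcal{T}_L} L_t + \alpha \sum_{t \in \mathcal{T}_B} d_t$$ which corresponds directly to `L.sum()/L_hat + alpha * d.sum()` in the code.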
###Code
alpha = 0.6
K = 2
Y_1 = np.zeros([N_1, K], dtype = int) - 1
Y_1[X_train_1.index, y_train_1.astype(int)] = 1
Y_1.shape
import os
os.add_dll_directory(os.path.join(os.getenv('GUROBI_HOME'), 'bin'))
from gurobipy import *
model = Model('mip1')
# declare decision variables
# d: [# branch nodes], whether a branch node applies a split or not
d = model.addVars(branch_nodes_1 ,vtype = GRB.BINARY, name = "d") # ∈ {0, 1}
# split criterion: ax < b / ax >== b
# a: [# branch nodes, p], b: [# branch nodes]
a = model.addVars(branch_nodes_1, P_1, vtype = GRB.BINARY, name = 'a') # ∈ {0, 1}
b = model.addVars(branch_nodes_1 ,vtype = GRB.CONTINUOUS, name = "b")
# l: [# leaf nodes], whether a leaf node contains any points or not
l = model.addVars(leaf_nodes_1, vtype = GRB.BINARY, name = "l") # ∈ {0, 1}
# z: [# points, # branch nodes], a point is assigned to which leaf node
z = model.addVars(N_1, leaf_nodes_1, vtype = GRB.BINARY, name = "z") # ∈ {0, 1}
# N_kt: [# labels, # leaf nodes], number of points labelled for class k in a leaf node
N_kt = model.addVars(K, leaf_nodes_1, vtype = GRB.INTEGER, name = "N_kt")
# N_t: [# leaf nodes], number of points in the leaf
N_t = model.addVars(leaf_nodes_1, vtype = GRB.INTEGER, name = "N_t")
# c_kt: [# labels, # leaf nodes], whether predicted label is label k for a leaf node
c_kt = model.addVars(K, leaf_nodes_1, vtype = GRB.BINARY, name = "c") # ∈ {0, 1}
# L: [# leaf nodes], misclassification loss for a leaf node
L = model.addVars(leaf_nodes_1, vtype = GRB.INTEGER, name = "L")
# warm start using the results of CART algorithm
for t in range(branch_nodes_1):
a[t, initial_a_1[t]].start = 1
b[t].start = initial_b_1[t]
model.update()
# baseline accuracy by predicting the dominant label for the whole dataset
L_hat = y_train_1.value_counts().max()/y_train_1.shape[0]
L_hat
# declare the objective
model.setObjective(L.sum()/L_hat + alpha * d.sum(), GRB.MINIMIZE)
def get_parent(i, depth = 1):
assert i > 0, "No parent for Root"
assert i <= 2 ** (depth + 1) - 1, "Error! Total: {0}; i: {1}".format(
2 ** (depth + 1) - 1, i)
return int((i - 1)/2)
# constraint set 1
for t in range(branch_nodes_1):
model.addConstr(a.sum(t, '*') == d[t]) # sum(a_tj for j in P) = d_t
b[t].setAttr(GRB.Attr.LB, 0) # b_t >= 0
model.addConstr(b[t] <= d[t]) # b_t <= d_t
model.addConstr(d[t] == 1) # d_t = 1 (assume all the branch applies a split)
# constraint set 2
for t in range(1, branch_nodes_1): # exception: root
model.addConstr(d[t] <= d[get_parent(t)]) # d_t <= d_p(t)
# constraint set 3
for i in range(N_1):
model.addConstr(z.sum(i, '*') == 1) # sum(z_it for t in T_L)
# constraint set 4
N_min = 1
for t in range(leaf_nodes_1):
model.addConstr(l[t] == 1) # l_t == 1 (assume leaf contains points)
for i in range(N_1):
model.addConstr(z[i, t] <= l[t]) # z_it <= l_t
model.addConstr(z.sum('*', t) >= N_min * l[t]) # sum(z_it for i in N) >= N_min * l_t
depth = 1
all_branch_nodes = list(reversed(range(branch_nodes_1)))
depth_dict = {}
for i in range(depth):
depth_dict[i] = sorted(all_branch_nodes[-2**i:])
for j in range(2**i):
all_branch_nodes.pop()
depth_dict
all_leaf_nodes = list(range(leaf_nodes_1))
branch_dict = {}
for i in range(branch_nodes_1):
for k in range(depth):
if i in depth_dict[k]:
floor_len = len(depth_dict[k])
step = 2**depth // floor_len
sliced_leaf = [all_leaf_nodes[i:i+step] for i in range(0, 2**depth, step)]
idx = depth_dict[k].index(i)
branch_dict[i] = sliced_leaf[idx]
else:
continue
branch_dict
epsilon = 0.01
# constraint set 5
for i in range(N_1):
for tl in range(leaf_nodes_1):
for tb in range(branch_nodes_1):
if tl in branch_dict[tb]:
length = len(branch_dict[tb])
idx = branch_dict[tb].index(tl)
# left-branch ancestors:
# np.dot(a_m.T, (x_i + mu))<= b_m + (1 + mu)(1- z_it)
if idx+1 <= length//2:
model.addConstr(
sum(a.select(tb, '*') * X_train_1.iloc[i, :]) + epsilon
<= b[tb] + (1 + epsilon) * (1 - z[i, tl]))
# right-branch ancestors:
# np.dot(a_m.T, x_i) >= b_m - (1- z_it)
elif idx+1 > length//2:
model.addConstr(
sum(a.select(tb, '*') * X_train_1.iloc[i, :])
>= b[tb] - (1 - z[i, tl]))
else:
continue
# constraint set 6, 7 & 8
for t in range(leaf_nodes_1):
# constraint set 8
model.addConstr(L[t] >= 0) # L_t >= 0
for k in range(K):
# L_t >= N_t - N_kt - n(1 - c_kt)
model.addConstr(L[t] >= N_t[t] - N_kt[k, t] - N_1 * (1 - c_kt[k, t]))
# L_t <= N_t - N_kt + n * c_kt
model.addConstr(L[t] <= N_t[t] - N_kt[k, t] + N_1 * c_kt[k, t])
# constraint set 6
# N_kt = 1/2 sum((1 + Y_ik)z_it for i in N)
model.addConstr(N_kt[k, t] == 1/2 * sum(z.select('*', t) * (Y_1[:, k] + 1)))
model.addConstr(N_t[t] == z.sum('*', t)) # N_t = sum(z_it for i in n)
# constraint set 7
model.addConstr(c_kt.sum('*', t) == l[t]) # l_t = sum(c_kt for k in K)
model.Params.timelimit = 60*10
model.optimize()
print('Obj:', model.objVal)
coef_a = np.zeros([branch_nodes_1, P_1], dtype = int)
coef_b = np.zeros(branch_nodes_1)
for i in range(branch_nodes_1):
b = model.getVarByName('b' + '[' + str(i) + ']')
coef_b[i] = b.x
for j in range(P_1):
a = model.getVarByName('a' + '[' + str(i) + ',' + str (j) + ']')
coef_a[i, j] = int(a.x)
coef_a
coef_b
_ , a_idx = np.where(coef_a == 1)
a_idx = a_idx.tolist()
OCT_a = []
for i in range(len(feature_1)):
if i in np.where(feature_1 == -2)[0]:
OCT_a.append(-2)
else:
OCT_a.append(a_idx[0])
a_idx.pop(0)
OCT_b = []
tmp_b = coef_b.tolist()
for i in range(len(threshold_1)):
if i in np.where(threshold_1 == -2)[0]:
OCT_b.append(-2)
else:
OCT_b.append(round(tmp_b[0], 2))
tmp_b.pop(0)
OCT_a
OCT_b
OCT_rules = {}
for leaf in paths_1:
OCT_rules[leaf] = get_rule(
paths_1[leaf], children_left_1, tuple(df_1_final.columns[0:-1]), OCT_a, OCT_b)
OCT_rules
coef_c = np.zeros([K, leaf_nodes_1], dtype = int)
for i in range(K):
for j in range(leaf_nodes_1):
c = model.getVarByName('c' + '[' + str(i) + ',' + str (j) + ']')
coef_c[i,j] = int(c.x)
coef_c
k_idx, t_idx = np.where(coef_c == 1)
labels = np.zeros(leaf_nodes_1, dtype = int) - 1
for i in range(len(k_idx)):
labels[t_idx[i]] = k_idx[i]
labels
y_hat = np.hstack([
np.reshape(y_train_1.values, (N_1, 1)),
np.zeros([N_1, 1], dtype = int)])
num_nodes = 0
for i in range(branch_nodes_1):
d = model.getVarByName('d' + '[' + str(i) + ']')
num_nodes += int(d.x)
num_nodes
# initialize
init = np.array([], dtype = int).reshape(0, P_1)
nodes = {}
for i in range(num_nodes * 2):
nodes[i] = init
# split
for i in range(N_1):
if np.dot(coef_a[0,:], np.transpose(X_train_1.iloc[i,:])) <= coef_b[0]:
nodes[0] = np.vstack([X_train_1.iloc[i,:], nodes[0]])
# if np.dot(coef_a[1,:], np.transpose(X_train_1.iloc[i,:])) <= coef_b[1]:
# nodes[2] = np.vstack([X_train_1.iloc[i,:], nodes[2]])
y_hat[i,1] = labels[0]
# elif np.dot(coef_a[1,:], np.transpose(X_train_1.iloc[i,:])) > coef_b[1]:
# nodes[3] = np.vstack([X_train_1.iloc[i,:], nodes[3]])
# y_hat[i,1] = labels[1]
elif np.dot(coef_a[0,:], np.transpose(X_train_1.iloc[i,:])) > coef_b[0]:
nodes[1] = np.vstack([X_train_1.iloc[i,:], nodes[1]])
# if np.dot(coef_a[2,:], np.transpose(X_train_1.iloc[i,:])) <= coef_b[2]:
# nodes[4] = np.vstack([X_train_1.iloc[i,:], nodes[4]])
y_hat[i,1] = labels[1]
# elif np.dot(coef_a[2,:], np.transpose(X_train_1.iloc[i,:])) > coef_b[2]:
# nodes[5] = np.vstack([X_train_1.iloc[i,:], nodes[5]])
# y_hat[i,1] = labels[3]
performance_in = 1 - sum(np.abs(y_hat[:,1] - y_hat[:,0])) / N_1
for i in range(len(labels)):
print('\nNode {}'.format(str(i+7)))
print('Predicted label: {}'.format(str(labels[i])))
print('No. of obs.: {}'.format(nodes[i].shape[0]))
N_prime, P = X_test_1.shape
y_predict = np.hstack([
np.reshape(y_test_1.values, (N_prime, 1)),
np.zeros([N_prime, 1], dtype = int)])
# initialize
init = np.array([], dtype = int).reshape(0, P)
nodes = {}
for i in range(num_nodes * 2):
nodes[i] = init
# split
for i in range(N_prime):
if np.dot(coef_a[0,:], np.transpose(X_test_1.iloc[i,:])) <= coef_b[0]:
nodes[0] = np.vstack([X_test_1.iloc[i,:], nodes[0]])
# if np.dot(coef_a[1,:], np.transpose(X_test_1.iloc[i,:])) <= coef_b[1]:
# nodes[2] = np.vstack([X_test_1.iloc[i,:], nodes[2]])
# y_predict[i,1] = labels[0]
# elif np.dot(coef_a[1,:], np.transpose(X_test_1.iloc[i,:])) > coef_b[1]:
# nodes[3] = np.vstack([X_test_1.iloc[i,:], nodes[3]])
y_predict[i,1] = labels[0]
elif np.dot(coef_a[0,:], np.transpose(X_test_1.iloc[i,:])) > coef_b[0]:
nodes[1] = np.vstack([X_test_1.iloc[i,:], nodes[1]])
# if np.dot(coef_a[2,:], np.transpose(X_test_1.iloc[i,:])) <= coef_b[2]:
# nodes[4] = np.vstack([X_test_1.iloc[i,:], nodes[4]])
# y_predict[i,1] = labels[2]
# elif np.dot(coef_a[2,:], np.transpose(X_test_1.iloc[i,:])) > coef_b[2]:
# nodes[5] = np.vstack([X_test_1.iloc[i,:], nodes[5]])
y_predict[i,1] = labels[1]
performance_out = 1 - sum(np.abs(y_predict[:,1] - y_predict[:,0])) / N_prime
for i in range(len(labels)):
print('\nNode {}'.format(str(i+7)))
print('Predicted label: {}'.format(str(labels[i])))
print('No. of obs.: {}'.format(nodes[i+0].shape[0]))
###Output
Node 7
Predicted label: 1
No. of obs.: 97
Node 8
Predicted label: 0
No. of obs.: 75
###Markdown
Rules and performance of OCT
###Code
print('The decision rules in OCT is ', OCT_rules)
print('\nThe in-sample performance in OCT is', performance_in)
print('\nThe out-of-sample performance in OCT is', performance_out)
###Output
The decision rules in OCT is {1: "('A9_f' <= 0.50)", 2: "('A9_f' > 0.50)"}
The in-sample performance in OCT is 0.8571428571428572
The out-of-sample performance in OCT is 0.8488372093023255
|
notebook/version.ipynb | ###Markdown
---
###Code
import platform
print(platform.python_version())
print(type(platform.python_version()))
print(platform.python_version_tuple())
print(type(platform.python_version_tuple()))
print(platform.python_version_tuple()[0])
print(type(platform.python_version_tuple()[0]))
###Output
<class 'str'>
|
QWorld's Global Quantum Programming Workshop/Basics Of Python/3.Drawing In Python.ipynb | ###Markdown
Python: Drawing Here we list certain tools from the python library "matplotlib.pyplot" that we will use throughout the tutorial. Importing some useful tools for drawing figures in python:
###Code
from matplotlib.pyplot import plot, figure, arrow, Circle, gca, text, bar
###Output
_____no_output_____
###Markdown
Drawing a figure with a specified size and dpi value:
###Code
figure(figsize=(6,6), dpi=60)
#The higher dpi value makes the figure bigger.
###Output
_____no_output_____
###Markdown
Drawing a blue point at (x,y):
###Code
plot(1,5,'bo')
###Output
_____no_output_____
###Markdown
For red or green points, 'ro' or 'go' can be used, respectively. Drawing a line from (x,y) to (x+dx,y+dy): arrow(x,y,dx,dy) Additional parameters: color='red' linewidth=1.5 linestyle='dotted' ('dashed', 'dash-dot', 'solid') Drawing a blue arrow from (x,y) to (x+dx,y+dy) with a specified size head:
###Code
arrow(0.5,0.5,0.1,0.1,head_width=0.04,head_length=0.08,color="blue")
###Output
_____no_output_____
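###Markdown
The additional parameters listed above (color, linewidth, linestyle) were not demonstrated in a drawing; here is a small sketch combining them in a single arrow call.
###Code
from matplotlib.pyplot import figure, arrow

figure(figsize=(6,6), dpi=60)
# a red dotted arrow from (0,0) to (0.5,0.5) using the optional parameters listed above
arrow(0,0,0.5,0.5,head_width=0.04,head_length=0.08,color='red',linewidth=1.5,linestyle='dotted')
###Output
_____no_output_____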
###Markdown
Drawing the axes on 2-dimensional plane:
###Code
arrow(0,0,1.1,0,head_width=0.04,head_length=0.08)
arrow(0,0,-1.1,0,head_width=0.04,head_length=0.08)
arrow(0,0,0,-1.1,head_width=0.04,head_length=0.08)
arrow(0,0,0,1.1,head_width=0.04,head_length=0.08)
###Output
_____no_output_____
###Markdown
Drawing a circle centered at (x,y) with radius r on 2-dimensional plane:
###Code
gca().add_patch( Circle((0.5,0.5),0.2,color='black',fill=False) )
###Output
_____no_output_____
###Markdown
Placing a text at (x,y): text(x,y,string)Additional parameters: rotation=90 (numeric degree values) fontsize=12 Drawing a bar: bar(list_of_labels,list_of_data) Our pre-defined functions We include our predefined functions by using the following line of code: %run qlatvia.pyThe file "/include/drawing.py" contains our predefined functions for drawing.
###Code
%run qlatvia.py
###Output
_____no_output_____
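###Markdown
Before using the predefined functions, a short sketch of the text and bar tools listed above (the labels and values here are arbitrary examples).
###Code
from matplotlib.pyplot import figure, bar, text

figure(figsize=(6,6), dpi=60)
labels = ['|0>', '|1>']
counts = [60, 40]
bar(labels, counts)
# annotate the first bar; rotation and fontsize are the optional parameters mentioned above
text(0, 61, 'most frequent', rotation=0, fontsize=12)
###Output
_____no_output_____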
###Markdown
Drawing the axes on 2-dimensional plane:
###Code
import matplotlib
def draw_axes():
# dummy points for zooming out
points = [ [1.3,0], [0,1.3], [-1.3,0], [0,-1.3] ]
# coordinates for the axes
arrows = [ [1.1,0], [0,1.1], [-1.1,0], [0,-1.1] ]
# drawing dummy points
for p in points: matplotlib.pyplot.plot(p[0],p[1]+0.2)
# drawing the axes
for a in arrows: matplotlib.pyplot.arrow(0,0,a[0],a[1],head_width=0.04, head_length=0.08)
draw_axes()
###Output
_____no_output_____
###Markdown
Drawing the unit circle on 2-dimensional plane:
###Code
import matplotlib
def draw_unit_circle():
    unit_circle = matplotlib.pyplot.Circle((0,0),1,color='black',fill=False)
matplotlib.pyplot.gca().add_patch(unit_circle)
draw_unit_circle()
###Output
_____no_output_____
###Markdown
Drawing a quantum state on 2-dimensional plane:
###Code
import matplotlib
def draw_quantum_state(x,y,name=""):
# shorten the line length to 0.92
# line_length + head_length should be 1
x1 = 0.92 * x
y1 = 0.92 * y
matplotlib.pyplot.arrow(0,0,x1,y1,head_width=0.04,head_length=0.08,color="blue")
x2 = 1.15 * x
y2 = 1.15 * y
    matplotlib.pyplot.text(x2,y2,name)
draw_quantum_state(1,1)
###Output
_____no_output_____
###Markdown
Drawing a qubit on 2-dimensional plane:
###Code
import matplotlib
def draw_qubit():
# draw a figure
matplotlib.pyplot.figure(figsize=(6,6), dpi=60)
# draw the origin
matplotlib.pyplot.plot(0,0,'ro') # a point in red color
# drawing the axes by using one of our predefined functions
draw_axes()
# drawing the unit circle by using one of our predefined functions
draw_unit_circle()
# drawing |0>
matplotlib.pyplot.plot(1,0,"o")
matplotlib.pyplot.text(1.05,0.05,"|0>")
# drawing |1>
matplotlib.pyplot.plot(0,1,"o")
matplotlib.pyplot.text(0.05,1.05,"|1>")
# drawing -|0>
matplotlib.pyplot.plot(-1,0,"o")
matplotlib.pyplot.text(-1.2,-0.1,"-|0>")
# drawing -|1>
matplotlib.pyplot.plot(0,-1,"o")
matplotlib.pyplot.text(-0.2,-1.1,"-|1>")
draw_qubit()
###Output
_____no_output_____ |
Dive-into-DL-paddlepaddle/docs/7_convolutional-modern/7.2_VGG.ipynb | ###Markdown
Networks Using Blocks (VGG):label:`sec_vgg` Although AlexNet showed that deep neural networks can be very effective, it did not provide a general template to guide subsequent researchers in designing new networks. In the following sections we introduce some heuristic concepts that are commonly used to design deep neural networks. Similar to how engineers in chip design went from placing transistors to logic elements to logic blocks, the design of neural network architectures has gradually become more abstract: researchers started by thinking in terms of individual neurons, moved on to whole layers, and now work with blocks of repeated layer patterns. The idea of using blocks first appeared in the *VGG network* from the [Visual Geometry Group](http://www.robots.ox.ac.uk/~vgg/) (VGG) at Oxford University. Using loops and subroutines, these repeated structures are easy to implement in the code of any modern deep learning framework. (**VGG blocks**) The basic building block of a classic convolutional neural network is the following sequence: 1. a convolutional layer with padding to keep the resolution; 2. a nonlinear activation function such as ReLU; 3. a pooling layer such as max pooling. A VGG block is similar: it consists of a sequence of convolutional layers followed by a max pooling layer for spatial downsampling. In the original VGG paper :cite:`Simonyan.Zisserman.2014`, the authors used convolutions with $3\times3$ kernels and padding of 1 (keeping height and width) and max pooling with a $2 \times 2$ window and stride of 2 (halving the resolution after each block). In the code below we define a function called `vgg_block` to implement one VGG block. The function takes three arguments, corresponding to the number of convolutional layers `num_convs`, the number of input channels `in_channels`, and the number of output channels `out_channels`.
###Code
import paddle
import paddle.nn as nn
def vgg_block(num_convs, in_channels, out_channels):
layers = []
for _ in range(num_convs):
layers.append(
nn.Conv2D(in_channels, out_channels, kernel_size=3, padding=1))
layers.append(nn.ReLU())
in_channels = out_channels
layers.append(nn.MaxPool2D(kernel_size=2, stride=2))
return nn.Sequential(*layers)
###Output
_____no_output_____
###Markdown
[**VGG Network**] Like AlexNet and LeNet, the VGG network can be divided into two parts: the first consists mainly of convolutional and pooling layers, and the second consists of fully connected layers, as shown in :numref:`fig_vgg`. :width:`400px`:label:`fig_vgg` The VGG network connects several VGG blocks from :numref:`fig_vgg` (defined in the `vgg_block` function) in sequence. It has a hyperparameter variable `conv_arch`, which specifies the number of convolutional layers and the number of output channels in each VGG block. The fully connected part is the same as in AlexNet. The original VGG network has 5 convolutional blocks, of which the first two contain one convolutional layer each and the last three contain two convolutional layers each. The first block has 64 output channels, and each subsequent block doubles the number of output channels until it reaches 512. Since the network uses 8 convolutional layers and 3 fully connected layers, it is commonly called VGG-11.
###Code
conv_arch = ((1, 64), (1, 128), (2, 256), (2, 512), (2, 512))
###Output
_____no_output_____
###Markdown
The code below implements VGG-11. This is a simple matter of executing a for loop over `conv_arch`.
###Code
def vgg(conv_arch):
conv_blks = []
in_channels = 1
    # convolutional part
for (num_convs, out_channels) in conv_arch:
conv_blks.append(vgg_block(num_convs, in_channels, out_channels))
in_channels = out_channels
return nn.Sequential(*conv_blks, nn.Flatten(),
                         # fully connected part
nn.Linear(out_channels * 7 * 7, 4096), nn.ReLU(),
nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
nn.Dropout(0.5), nn.Linear(4096, 10))
VGG = vgg(conv_arch)
###Output
_____no_output_____
###Markdown
Next, we will build a single-channel data sample with a height and width of 224 to [**observe the output shape of each layer**].
###Code
print(paddle.summary(VGG, (1, 1, 224, 224)))
###Output
---------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
===========================================================================
Conv2D-1 [[1, 1, 224, 224]] [1, 64, 224, 224] 640
ReLU-1 [[1, 64, 224, 224]] [1, 64, 224, 224] 0
MaxPool2D-1 [[1, 64, 224, 224]] [1, 64, 112, 112] 0
Conv2D-2 [[1, 64, 112, 112]] [1, 128, 112, 112] 73,856
ReLU-2 [[1, 128, 112, 112]] [1, 128, 112, 112] 0
MaxPool2D-2 [[1, 128, 112, 112]] [1, 128, 56, 56] 0
Conv2D-3 [[1, 128, 56, 56]] [1, 256, 56, 56] 295,168
ReLU-3 [[1, 256, 56, 56]] [1, 256, 56, 56] 0
Conv2D-4 [[1, 256, 56, 56]] [1, 256, 56, 56] 590,080
ReLU-4 [[1, 256, 56, 56]] [1, 256, 56, 56] 0
MaxPool2D-3 [[1, 256, 56, 56]] [1, 256, 28, 28] 0
Conv2D-5 [[1, 256, 28, 28]] [1, 512, 28, 28] 1,180,160
ReLU-5 [[1, 512, 28, 28]] [1, 512, 28, 28] 0
Conv2D-6 [[1, 512, 28, 28]] [1, 512, 28, 28] 2,359,808
ReLU-6 [[1, 512, 28, 28]] [1, 512, 28, 28] 0
MaxPool2D-4 [[1, 512, 28, 28]] [1, 512, 14, 14] 0
Conv2D-7 [[1, 512, 14, 14]] [1, 512, 14, 14] 2,359,808
ReLU-7 [[1, 512, 14, 14]] [1, 512, 14, 14] 0
Conv2D-8 [[1, 512, 14, 14]] [1, 512, 14, 14] 2,359,808
ReLU-8 [[1, 512, 14, 14]] [1, 512, 14, 14] 0
MaxPool2D-5 [[1, 512, 14, 14]] [1, 512, 7, 7] 0
Flatten-1 [[1, 512, 7, 7]] [1, 25088] 0
Linear-1 [[1, 25088]] [1, 4096] 102,764,544
ReLU-9 [[1, 4096]] [1, 4096] 0
Dropout-1 [[1, 4096]] [1, 4096] 0
Linear-2 [[1, 4096]] [1, 4096] 16,781,312
ReLU-10 [[1, 4096]] [1, 4096] 0
Dropout-2 [[1, 4096]] [1, 4096] 0
Linear-3 [[1, 4096]] [1, 10] 40,970
===========================================================================
Total params: 128,806,154
Trainable params: 128,806,154
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.19
Forward/backward pass size (MB): 125.37
Params size (MB): 491.36
Estimated Total Size (MB): 616.92
---------------------------------------------------------------------------
{'total_params': 128806154, 'trainable_params': 128806154}
###Markdown
As you can see, we halve the height and width in each block, finally reaching a height and width of 7, and then flatten the representation for processing by the fully connected layers. Training the model [**Since VGG-11 requires more computation than AlexNet, we build a network with a smaller number of channels**], which is sufficient for training on the Fashion-MNIST dataset.
###Code
ratio = 4
small_conv_arch = [(pair[0], pair[1] // ratio) for pair in conv_arch]
VGG = vgg(small_conv_arch)
###Output
_____no_output_____
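###Markdown
For reference, printing `small_conv_arch` gives a quick check of the reduced architecture; the values follow directly from dividing each channel count in `conv_arch` by `ratio`.
###Code
print(small_conv_arch)
###Output
[(1, 16), (1, 32), (2, 64), (2, 128), (2, 128)]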
###Markdown
Apart from using a slightly higher learning rate, the [**model training**] process is similar to that of AlexNet in :numref:`sec_alexnet`.
###Code
import paddle.vision.transforms as T
from paddle.vision.datasets import FashionMNIST
lr, num_epochs, batch_size = 0.005, 10, 128
# dataset preprocessing
transform = T.Compose([
T.Resize(224),
T.Transpose(),
T.Normalize([127.5], [127.5]),
])
# dataset definition
train_dataset = FashionMNIST(mode='train', transform=transform)
val_dataset = FashionMNIST(mode='test', transform=transform)
# model setup
model = paddle.Model(VGG)
model.prepare(
paddle.optimizer.Adam(learning_rate=lr, parameters=model.parameters()),
paddle.nn.CrossEntropyLoss(),
paddle.metric.Accuracy(topk=(1, 5)))
# model training
model.fit(train_dataset, val_dataset, epochs=num_epochs, batch_size=batch_size, log_freq=200)
###Output
The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/10
|
reports/201016_WIP4.ipynb | ###Markdown
Compilation of figures for the 4th WIP talk. These figures may also be found in other scripts or Jupyter notebooks.
###Code
import itertools as itt
import pathlib as pl
from configparser import ConfigParser
from textwrap import fill
import joblib as jl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.stats as sst
import seaborn as sns
from cycler import cycler
import src.data.LDA as cLDA
import src.data.dPCA as cdPCA
import src.metrics.dprime as cDP
import src.visualization.fancy_plots as fplt
from src.data.load import load
from src.data.cache import set_name
from src.visualization.fancy_plots import savefig
from src.metrics.reliability import signal_reliability
# general plotting formatting
plt.style.use('dark_background')
# modify figure color cycler back to the default one
color_cycler = cycler(color=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd',
'#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf'])
trans_color_map = {'silence': '#377eb8', # blue
'continuous': '#ff7f00', # orange
'similar': '#4daf4a', # green
'sharp': '#a65628'} # brown
params = {'axes.labelsize': 15,
'axes.titlesize': 20,
'axes.spines.top': False,
'axes.spines.right': False,
'axes.prop_cycle': color_cycler,
'xtick.labelsize': 11,
'ytick.labelsize': 11,
'lines.markersize': 8,
'figure.titlesize': 30,
'figure.figsize': [4,4],
'figure.autolayout':True,
'svg.fonttype': 'none',
'font.sans-serif': 'Arial',
'legend.loc': 'upper right',
'legend.frameon': False,
'legend.fontsize': 15,
'legend.markerscale': 3,
}
widescreen = [13.3, 7.5]
plt.rcParams.update(params)
# path to caches and meta parameters
config = ConfigParser()
config.read_file(open(pl.Path().cwd().parent / 'config' / 'settings.ini'))
meta = {'reliability': 0.1, # r value
'smoothing_window': 0, # ms
'raster_fs': 30,
'transitions': ['silence', 'continuous', 'similar', 'sharp'],
'montecarlo': 1000,
'zscore': True,
'dprime_absolute': None}
# loads the summary metrics
summary_DF_file = pl.Path(config['paths']['analysis_cache']) / 'DF_summary' / set_name(meta)
DF = jl.load(summary_DF_file)
# create the id_probe pair for each unit (cell or site) and probe
DF['id_probe'] = DF['cellid'].fillna(value=DF['siteid'])
DF['id_probe'] = DF[['id_probe', 'probe']].agg('_'.join, axis=1)
# load digested data
rec_recache = False
all_probes = [2, 3, 5, 6]
# load the calculated dprimes and montecarlo shuffling/simulations
# the loaded dictionary has 3 layers: analysis, value type, and cell/site
batch_dprimes_file = pl.Path(config['paths']['analysis_cache']) / 'batch_dprimes' / set_name(meta)
batch_dprimes = jl.load(batch_dprimes_file)
sites = set(batch_dprimes['dPCA']['dprime'].keys())
all_cells = set(batch_dprimes['SC']['dprime'].keys())
# some small preprocessing of the digested data.
# defines a significance threshold and transforms the p-values into booleans (significant vs non-significant)
threshold = 0.01
for analysis_name, mid_dict in batch_dprimes.items():
mid_dict['shuffled_significance'] = {key: (val <= threshold) for key, val in mid_dict['shuffled_pvalue'].items()}
if analysis_name != 'SC':
mid_dict['simulated_significance'] = {key: (val <= threshold) for key, val in
mid_dict['simulated_pvalue'].items()}
# set up the time bin labels in milliseconds, this is critical for plotting and calculating the tau
nbin = np.max([value.shape[-1] for value in batch_dprimes['SC']['dprime'].values()])
fs = meta['raster_fs']
times = np.linspace(0, nbin / fs, nbin, endpoint=False) * 1000
bar_width = 1 / fs * 1000
fig_root = 'single_cell_context_dprime'
###Output
_____no_output_____
###Markdown
plots related to steps in data processing and examples
###Code
# functions taken/modified from 200221_exp_fit_SC_dPCA_LDA_examples.py
def analysis_steps_plot(id, probe, source):
site = id[:7] if source == 'SC' else id
# loads the raw data
recs = load(site, rasterfs=meta['raster_fs'], recache=False)
sig = recs['trip0']['resp']
    # calculates response reliability and selects only good cells to improve the analysis
r_vals, goodcells = signal_reliability(sig, r'\ASTIM_*', threshold=meta['reliability'])
goodcells = goodcells.tolist()
# get the full data raster Context x Probe x Rep x Neuron x Time
raster = cdPCA.raster_from_sig(sig, probe, channels=goodcells, transitions=meta['transitions'],
smooth_window=meta['smoothing_window'], raster_fs=meta['raster_fs'],
zscore=meta['zscore'], part='probe')
# trialR shape: Trial x Cell x Context x Probe x Time; R shape: Cell x Context x Probe x Time
trialR, R, _ = cdPCA.format_raster(raster)
trialR, R = trialR.squeeze(axis=3), R.squeeze(axis=2) # squeezes out probe
if source == 'dPCA':
projection, _ = cdPCA.fit_transform(R, trialR)
elif source == 'LDA':
projection, _ = cLDA.fit_transform_over_time(trialR)
projection = projection.squeeze(axis=1)
if meta['zscore'] is False:
trialR = trialR * meta['raster_fs']
if source == 'dPCA':
projection = projection * meta['raster_fs']
# flips signs of dprimes and montecarlos as needed
dprimes, shuffleds = cDP.flip_dprimes(batch_dprimes[source]['dprime'][id],
batch_dprimes[source]['shuffled_dprime'][id], flip='max')
if source in ['dPCA', 'LDA']:
_, simulations = cDP.flip_dprimes(batch_dprimes[source]['dprime'][id],
batch_dprimes[source]['simulated_dprime'][id], flip='max')
t = times[:trialR.shape[-1]]
# nrows = 2 if source == 'SC' else 3
nrows = 2
fig, axes = plt.subplots(nrows, 6, sharex='all', sharey='row')
# PSTH
for tt, trans in enumerate(itt.combinations(meta['transitions'], 2)):
t0_idx = meta['transitions'].index(trans[0])
t1_idx = meta['transitions'].index(trans[1])
if source == 'SC':
cell_idx = goodcells.index(id)
axes[0, tt].plot(t, trialR[:, cell_idx, t0_idx, :].mean(axis=0), color=trans_color_map[trans[0]],
linewidth=3)
axes[0, tt].plot(t, trialR[:, cell_idx, t1_idx, :].mean(axis=0), color=trans_color_map[trans[1]],
linewidth=3)
else:
axes[0, tt].plot(t, projection[:, t0_idx, :].mean(axis=0), color=trans_color_map[trans[0]], linewidth=3)
axes[0, tt].plot(t, projection[:, t1_idx, :].mean(axis=0), color=trans_color_map[trans[1]], linewidth=3)
# Raster, dprime, CI
bottom, top = axes[0, 0].get_ylim()
half = ((top - bottom) / 2) + bottom
for tt, trans in enumerate(itt.combinations(meta['transitions'], 2)):
prb_idx = all_probes.index(probe)
pair_idx = tt
if source == 'SC':
# raster
cell_idx = goodcells.index(id)
t0_idx = meta['transitions'].index(trans[0])
t1_idx = meta['transitions'].index(trans[1])
_ = fplt._raster(t, trialR[:, cell_idx, t0_idx, :], y_offset=0, y_range=(bottom, half), ax=axes[0, tt],
scatter_kws={'color': trans_color_map[trans[0]], 'alpha': 0.4, 's': 10})
_ = fplt._raster(t, trialR[:, cell_idx, t1_idx, :], y_offset=0, y_range=(half, top), ax=axes[0, tt],
scatter_kws={'color': trans_color_map[trans[1]], 'alpha': 0.4, 's': 10})
# plots the real dprime and the shuffled dprime ci
axes[1, tt].plot(t, dprimes[prb_idx, pair_idx, :], color='white')
_ = fplt._cint(t, shuffleds[:, prb_idx, pair_idx, :], confidence=0.95, ax=axes[1, tt],
fillkwargs={'color': 'white', 'alpha': 0.5})
# if source in ['dPCA', 'LDA']:
# # plots the real dprime and simulated dprime ci
# axes[2, tt].plot(t, dprimes[prb_idx, pair_idx, :], color='white')
# _ = fplt._cint(t, simulations[:, prb_idx, pair_idx, :], confidence=0.95, ax=axes[2, tt],
# fillkwargs={'color': 'white', 'alpha': 0.5})
# significance bars
ax1_bottom = axes[1, 0].get_ylim()[0]
# if source == 'dPCA':
# ax2_bottom = axes[2, 0].get_ylim()[0]
for tt, trans in enumerate(itt.combinations(meta['transitions'], 2)):
prb_idx = all_probes.index(probe)
pair_idx = tt
# histogram of context discrimination
axes[1, tt].bar(t, batch_dprimes[source]['shuffled_significance'][id][prb_idx, pair_idx, :],
width=bar_width, align='center', color='gray', edgecolor='white', bottom=ax1_bottom)
# if source in ['dPCA', 'LDA']:
# # histogram of population effects
# axes[2, tt].bar(t, batch_dprimes[source]['simulated_significance'][id][prb_idx, pair_idx, :],
# width=bar_width, align='center', edgecolor='white', bottom=ax2_bottom)
# formats legend
if tt == 0:
if source == 'SC':
axes[0, tt].set_ylabel(f'z-score')
elif source == 'dPCA':
axes[0, tt].set_ylabel(f'dPC')
axes[1, tt].set_ylabel(f'dprime')
# if source in ['dPCA', 'LDA']:
# axes[2, tt].set_ylabel(f'dprime')
axes[-1, tt].set_xlabel('time (ms)')
axes[0, tt].set_title(f'{trans[0][:3]}_{trans[1][:3]}')
for ax in np.ravel(axes):
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
return fig, axes
def non_param_example_plot(id, probe, trans_pair, source):
"""
    Plots dprime and significant bins. Displays the corresponding significant area under the curve (green) and its
center of mass (dashed vertical white line)
:param id: str. cell or site id
:param source: str. 'SC', 'dPCA', or 'LDA'
:return: fig, axes.
"""
    # flips signs of dprimes and montecarlos as needed
dprimes, shuffleds = cDP.flip_dprimes(batch_dprimes[source]['dprime'][id],
batch_dprimes[source]['shuffled_dprime'][id], flip='max')
signif_bars = batch_dprimes[source]['shuffled_significance'][id]
probe_idx = all_probes.index(probe)
trans_pair_idx = trans_pair
mean_dprime = dprimes[probe_idx, trans_pair_idx, :]
mean_signif = signif_bars[probe_idx, trans_pair_idx, :]
signif_mask = mean_signif>0
t = times[:dprimes.shape[-1]]
# calculates center of mass and integral
significant_abs_mass_center = np.sum(np.abs(mean_dprime[signif_mask]) * t[signif_mask]) / np.sum(np.abs(mean_dprime[signif_mask]))
significant_abs_sum = np.sum(np.abs(mean_dprime[signif_mask])) * np.mean(np.diff(t))
# fig, axes = plt.subplots(2, 1, sharex='all', sharey='all')
fig, axes = plt.subplots()
# plots dprime plus fit
axes.plot(t, mean_dprime, color='white')
axes.axhline(0, color='gray', linestyle='--')
axes.fill_between(t, mean_dprime, 0, where=signif_mask, color='green', label=f"integral\n{significant_abs_sum:.2f} ms*d'")
# _ = fplt.exp_decay(t, mean_dprime, ax=axes[0], linestyle='--', color='white')
    # plots confidence bins plus fit
# ax1_bottom = axes.get_ylim()[0]
ax1_bottom = -1
axes.bar(t, mean_signif*0.5, width=bar_width, align='center',
color='gray', edgecolor='white', bottom= ax1_bottom, alpha=0.8)
# _ = fplt.exp_decay(times, mean_signif, ax=axes[1], linestyle='--', color='white')
axes.axvline(significant_abs_mass_center, color='white', linewidth=3, linestyle='--',
label=f'center of mass\n{significant_abs_mass_center:.2f} ms')
axes.legend()
# axes[1].legend()
# formats axis, legend and so on.
axes.set_ylabel(f'dprime')
axes.set_xlabel('time (ms)')
return fig, axes
def category_summary_plot(id, source):
"""
    Plots calculated dprime, confidence interval of shuffled dprime, and histogram of significant bins, for all contexts
and probes.
    Subplots are a grid of all combinations of probe (rows) and context pairs (columns), plus the means of each category,
and the grand mean
:param id: str. cell or site id
:param source: str. 'SC', 'dPCA', or 'LDA'
:return: fig, axes.
"""
    # flips signs of dprimes and montecarlos as needed
dprimes, shuffleds = cDP.flip_dprimes(batch_dprimes[source]['dprime'][id],
batch_dprimes[source]['shuffled_dprime'][id], flip='max')
signif_bars = batch_dprimes[source]['shuffled_significance'][id]
t = times[:dprimes.shape[-1]]
fig, axes = plt.subplots(5, 7, sharex='all', sharey='all')
# dprime and confidence interval for each probe-transition combinations
for (pp, probe), (tt, trans) in itt.product(enumerate(all_probes),
enumerate(itt.combinations(meta['transitions'], 2))):
prb_idx = all_probes.index(probe)
# plots the real dprime and the shuffled dprime
axes[pp, tt].plot(t, dprimes[prb_idx, tt, :], color='white')
_ = fplt._cint(t, shuffleds[:, prb_idx, tt, :], confidence=0.95, ax=axes[pp, tt],
fillkwargs={'color': 'white', 'alpha': 0.5})
# dprime and ci for the mean across context pairs
for pp, probe in enumerate(all_probes):
prb_idx = all_probes.index(probe)
axes[pp, -1].plot(t, np.mean(dprimes[prb_idx, :, :], axis=0), color='white')
axes[pp, -1].axhline(0, color='gray', linestyle='--')
# dprime and ci for the mean across probes
for tt, trans in enumerate(itt.combinations(meta['transitions'], 2)):
axes[-1, tt].plot(t, np.mean(dprimes[:, tt, :], axis=0), color='white')
axes[-1, tt].axhline(0, color='gray', linestyle='--')
# significance bars for each probe-transition combinations
bar_bottom = axes[0, 0].get_ylim()[0]
for (pp, probe), (tt, trans) in itt.product(enumerate(all_probes),
enumerate(itt.combinations(meta['transitions'], 2))):
prb_idx = all_probes.index(probe)
axes[pp, tt].bar(t, signif_bars[prb_idx, tt, :], width=bar_width, align='center', color='gray',
edgecolor='white', bottom=bar_bottom)
# _ = fplt.exp_decay(t, signif_bars[prb_idx, tt, :], ax=axes[2, tt])
# significance bars for the mean across context pairs
for pp, probe in enumerate(all_probes):
prb_idx = all_probes.index(probe)
axes[pp, -1].bar(t, np.mean(signif_bars[prb_idx, :, :], axis=0), width=bar_width, align='center', color='gray',
edgecolor='white', bottom=bar_bottom)
# _ = fplt.exp_decay(t, np.mean(signif_bars[prb_idx, :, :], axis=0), ax=axes[pp, -1], yoffset=bar_bottom,
# linestyle=':', color='gray')
# axes[pp, -1].legend(loc='upper right', fontsize='small', markerscale=3, frameon=False)
# significance bars for the mean across probes
for tt, trans in enumerate(itt.combinations(meta['transitions'], 2)):
axes[-1, tt].bar(t, np.mean(signif_bars[:, tt, :], axis=0), width=bar_width, align='center', color='gray',
edgecolor='white', bottom=bar_bottom)
# _ = fplt.exp_decay(t, np.mean(signif_bars[:, tt, :], axis=0), axes[-1, tt], yoffset=bar_bottom,
# linestyle=':', color='gray')
# axes[-1, tt].legend(loc='upper right', fontsize='small', markerscale=3, frameon=False)
# cell summary mean: dprime, confidence interval
axes[-1, -1].plot(t, np.mean(dprimes[:, :, :], axis=(0, 1)), color='white')
axes[-1, -1].axhline(0, color='gray', linestyle='--')
axes[-1, -1].bar(t, np.mean(signif_bars[:, :, :], axis=(0, 1)), width=bar_width, align='center', color='gray',
edgecolor='white', bottom=bar_bottom)
# _ = fplt.exp_decay(t, np.mean(signif_bars[:, :, :], axis=(0, 1)), ax=axes[-1, -1], yoffset=bar_bottom,
# linestyle=':', color='gray')
# axes[-1, -1].legend(loc='upper right', fontsize='small', markerscale=3, frameon=False)
# formats axis, legend and so on.
for pp, probe in enumerate(all_probes):
axes[pp, 0].set_ylabel(f'probe {probe}')
axes[-1, 0].set_ylabel(f'probe\nmean')
for tt, trans in enumerate(itt.combinations(meta['transitions'], 2)):
axes[0, tt].set_title(f'{trans[0][:3]}_{trans[1][:3]}')
axes[-1, tt].set_xlabel('time (ms)')
axes[0, -1].set_title(f'pair\nmean')
axes[-1, -1].set_xlabel('time (ms)')
for ax in np.ravel(axes):
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
return fig, axes
def site_cell_summary(id):
"""
plots a grid of subplots, each one showing the real dprime, histogram of significant bins and fitted exponential
decay to the significant bins. Both the dprime and significant bins are the cell grand mean across probes and
context pairs
:param id: str. site id
:return: fig, axes
"""
site_cells = set([cell for cell in batch_dprimes['SC']['dprime'].keys() if cell[:7] == id])
fig, axes = fplt.subplots_sqr(len(site_cells), sharex=True, sharey=True)
for ax, cell in zip(axes, site_cells):
grand_mean, _ = cDP.flip_dprimes(batch_dprimes['SC']['dprime'][cell], flip='max')
line = np.mean(grand_mean, axis=(0, 1))
hist = np.mean(batch_dprimes['SC']['shuffled_significance'][cell], axis=(0, 1))
ax.plot(times[:len(line)], line, color='white')
ax.axhline(0, color='gray', linestyle='--' )
ax.bar(times[:len(hist)], hist, width=bar_width, align='center', color='gray', edgecolor='white',bottom=-0.5)
# _ = fplt.exp_decay(times[:len(hist)], hist, ax=ax, linestyle='--', color='gray')
# ax.set_title(cell)
# ax.legend(loc='upper right', fontsize='small', markerscale=3)
return fig, axes
def dPCA_site_summary(site, probe):
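"""
Plots the projection of a site's population response to one probe onto its first 3 context-dependent
dPCs, one line per context transition, as a function of time.
:param site: str. site id
:param probe: int. probe number
:return: fig, ax, dpca
"""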
# loads the raw data
recs = load(site, rasterfs=meta['raster_fs'], recache=rec_recache)
sig = recs['trip0']['resp']
# calculates response reliability and selects only good cells to improve the analysis
r_vals, goodcells = signal_reliability(sig, r'\ASTIM_*', threshold=meta['reliability'])
goodcells = goodcells.tolist()
# get the full data raster Context x Probe x Rep x Neuron x Time
raster = cdPCA.raster_from_sig(sig, probe, channels=goodcells, transitions=meta['transitions'],
smooth_window=meta['smoothing_window'], raster_fs=meta['raster_fs'],
zscore=meta['zscore'], part='probe')
# trialR shape: Trial x Cell x Context x Probe x Time; R shape: Cell x Context x Probe x Time
trialR, R, _ = cdPCA.format_raster(raster)
trialR, R = trialR.squeeze(axis=3), R.squeeze(axis=2) # squeezes out probe
Z, trialZ, dpca = cdPCA._cpp_dPCA(R, trialR)
fig, axes = plt.subplots(1, 3, sharex='all', sharey='row', squeeze=True)
# for vv, (marginalization, arr) in enumerate(Z.items()):
marginalization = 'ct'
means = Z[marginalization]
trials = trialZ[marginalization]
if marginalization == 't':
marginalization = 'probe'
elif marginalization == 'ct':
marginalization = 'context'
for pc in range(3): # first 3 principal components
ax = axes[pc]
for tt, trans in enumerate(meta['transitions']): # for each context
ax.plot(times, means[pc, tt, :], label=trans, color=trans_color_map[trans], linewidth=2)
# _ = fplt._cint(times, trials[:,pc,tt,:], confidence=0.95, ax=ax,
# fillkwargs={'color': trans_color_map[trans], 'alpha': 0.5})
# formats axes labels and ticks
if pc == 0 :
ax.set_ylabel(f'{marginalization} dependent\nfiring rate (z-score)')
ax.set_title(f'dPC #{pc + 1}')
ax.set_xlabel('time (ms)')
# legend in last axis
# axes[-1, -1].legend(fontsize='x-large', markerscale=10,)
axes[-1].legend(fontsize='x-large', markerscale=10,)
return fig, ax, dpca
# finds the unit-probe combination and site with the highest absolute integral and, hopefully, center of mass
ff_parameter = DF.parameter.isin(['significant_abs_sum', 'significant_abs_mass_center'])
ff_trans = DF.transition_pair != 'mean'
ff_probe = DF.probe != 'mean'
# single cell
ff_analysis = DF.analysis=='SC'
filtered = DF.loc[ff_parameter & ff_analysis & ff_trans & ff_probe, :]
pivoted = filtered.pivot_table(index=['cellid', 'probe'], columns='parameter', values='value')
single_cell_top = pivoted.sort_values(['significant_abs_sum', 'significant_abs_mass_center'], ascending=False).head(5)
# population
ff_analysis = DF.analysis=='dPCA'
filtered = DF.loc[ff_parameter & ff_analysis & ff_trans & ff_probe, :]
pivoted = filtered.pivot_table(index=['siteid', 'probe'], columns='parameter', values='value')
population_top = pivoted.sort_values(['significant_abs_sum', 'significant_abs_mass_center'], ascending=False).head(5)
print(single_cell_top)
print(population_top)
# # old example sites
# ['AMT028b', 'DRX008b']
# # old example units
# ['AMT028b-20-1', 'DRX008b-04-1']
# # old example probe
# probe = 2
# one of the short experiments
# cell = 'DRX021a-10-2'
# site = 'DRX021a'
# probe = 2
cell = 'AMT029a-51-1'
site = 'AMT029a'
probe = 5
###Output
_____no_output_____
###Markdown
analysis steps
###Code
# SC examples
fig, axes = analysis_steps_plot(cell, probe, 'SC')
fig.set_size_inches(13.3, 3.81)
title = f'SC, {cell} probe {probe} calc steps'
print(title)
savefig(fig, 'WIP4_figures', title, type='png')
plt.show()
# dPCA examples
fig, axes = analysis_steps_plot(site, probe, 'dPCA')
fig.set_size_inches(13.3, 3.81)
title = f'dPCA, {site} probe {probe}, calc steps'
print(title)
savefig(fig, 'WIP4_figures', title, type='png')
plt.show()
###Output
loading recording from box
SC, AMT029a-51-1 probe 5 calc steps
loading recording from box
You chose to determine the regularization parameter automatically. This can
take substantial time and grows linearly with the number of crossvalidation
folds. The latter can be set by changing self.n_trials (default = 3). Similarly,
use self.protect to set the list of axes that are not supposed to get to get shuffled
(e.g. upon splitting the data into test- and training, time-points should always
be drawn from the same trial, i.e. self.protect = ['t']). This can significantly
speed up the code.
Start optimizing regularization.
Starting trial 1 / 3
Starting trial 2 / 3
Starting trial 3 / 3
Optimized regularization, optimal lambda = 0.025511577864105985
Regularization will be fixed; to compute the optimal parameter again on the next fit, please set opt_regularizer_flag to True.
dPCA, AMT029a probe 5, calc steps
###Markdown
example fit, to be modified for area under the curve
###Code
# SC examples
fig, axes = non_param_example_plot(cell, probe, 4, 'SC')
title = f'SC, {cell} probe_{probe} param calc summary'
fig.set_size_inches([5,5])
fplt.savefig(fig, 'WIP4_figures', title)
plt.show()
# # dPCA site examples
# fig, axes = fit_example_plot(site, probe, 4, 'dPCA')
# title = f'dPCA, {site} fit summary'
# # fplt.savefig(fig, 'WIP4_figures', title)
# plt.show()
###Output
_____no_output_____
###Markdown
all categories of transition pairs and probes
###Code
# SC example
fig, axes = category_summary_plot(cell, 'SC')
# fig.set_size_inches(np.asarray([16, 9])*0.7)
fig.set_size_inches(np.asarray([13.33, 6.7]))
title = f'SC, {cell} probe context_pair summary'
print(title)
# fig.suptitle(title)
savefig(fig, 'WIP4_figures', title)
plt.show()
# # dpca site example
# fig, axes = category_summary_plot(site, 'dPCA')
# fig.set_size_inches(np.asarray([16, 9])*0.7)
# title = f'dPCA, {site} probe context_pair summary'
# print(title)
# # fig.suptitle(title)
# savefig(fig, 'WIP4_figures', title)
# plt.show()
###Output
SC, AMT029a-51-1 probe context_pair summary
###Markdown
summary of all cells in site
###Code
big_site = 'DRX008b'
fig, axes = site_cell_summary(big_site) # site with most cells
fig.set_size_inches(np.asarray([13.33, 7.5]))
title = f'{big_site} all cells summary'
# fig.suptitle(title)
# fig.tight_layout(rect=(0, 0, 1, 0.95))
savefig(fig, 'WIP4_figures', title)
plt.show()
###Output
_____no_output_____
###Markdown
dPCA projection and variance explained
###Code
fig1, axes, dpca = dPCA_site_summary(site, probe)
fig1.set_size_inches((13.33, 3))
title = f'{site} probe-{probe} dPCA projection'
# fig1.suptitle(title)
# fig1.tight_layout(rect=(0, 0, 1, 0.95))
savefig(fig1, 'WIP4_figures', title, type='png')
plt.show()
fig2, ax, inset = cdPCA.variance_explained(dpca, ax=None, names=['probe', 'context'], colors=['gray', 'green'],
inset=False)
fig2.set_size_inches((6, 3.6))
title = f'{site} probe-{probe} dPCA variance explained'
# _, labels, autotexts = inset
# plt.setp(autotexts, size=15, weight='normal')
# plt.setp(labels, size=15, weight='normal')
# var_ax.set_title('marginalized variance')
savefig(fig2, 'WIP4_figures', title, type='png')
plt.show()
###Output
loading recording from box
You chose to determine the regularization parameter automatically. This can
take substantial time and grows linearly with the number of crossvalidation
folds. The latter can be set by changing self.n_trials (default = 3). Similarly,
use self.protect to set the list of axes that are not supposed to get to get shuffled
(e.g. upon splitting the data into test- and training, time-points should always
be drawn from the same trial, i.e. self.protect = ['t']). This can significantly
speed up the code.
Start optimizing regularization.
Starting trial 1 / 3
Starting trial 2 / 3
Starting trial 3 / 3
Optimized regularization, optimal lambda = 0.025511577864105985
Regularization will be fixed; to compute the optimal parameter again on the next fit, please set opt_regularizer_flag to True.
###Markdown
plots related to summary metrics
###Code
def parameter_space_scatter(x, y, analysis='SC', source='dprime', ax=None):
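"""
Scatter (with linear regression) of two summary parameters against each other, one point per cell
(SC) or site (dPCA), using values averaged across probes and transition pairs and filtering out
low-goodness fits.
:param x: str. parameter name in DF
:param y: str. parameter name in DF
:param analysis: str. 'SC' or 'dPCA'
:param source: str. e.g. 'dprime'
:return: fig, ax
"""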
ff_parameter = DF.parameter.isin([x, y])
ff_analysis = DF.analysis == analysis
ff_source = DF.source == source
ff_probe = DF.probe == 'mean'
ff_transpair= DF.transition_pair == 'mean'
ff_good = np.logical_or(DF.goodness > 0.1, DF.goodness.isna()) # empirical good value to filter out garbage cells
filtered = DF.loc[ff_parameter & ff_analysis & ff_source & ff_probe & ff_transpair & ff_good, :]
pivoted = filtered.pivot_table(index=['id_probe', 'region', 'siteid'],
columns='parameter', values='value').dropna().reset_index()
_, _, r_val, _, _ = sst.linregress(pivoted[x], pivoted[y])  # linregress returns r, not r squared
ax = sns.regplot(x=x, y=y, data=pivoted, ax=ax, color='white', label=f'n={len(pivoted.index)}\nr2={r_val**2:.2f}')
ax.axhline(0, linestyle='--')
ax.axvline(0, linestyle='--')
fig = ax.get_figure()
return fig, ax
def condition_effect_on_parameter(parameter='significant_abs_mass_center', compare='transition_pair', analysis='SC',
source='dprime', nan2zero=False, nozero=True):
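"""
Box plots of a summary parameter compared across probes or across transition pairs (the `compare`
argument), one observation per cell (SC) or site (dPCA, LDA), with paired Wilcoxon tests
(Bonferroni corrected) between categories.
:param parameter: str. parameter name in DF
:param compare: str. 'probe' or 'transition_pair'
:param analysis: str. 'SC', 'dPCA' or 'LDA'
:param source: str. e.g. 'dprime'
:return: fig, ax
"""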
if compare == 'probe':
ff_probe = DF.probe != 'mean'
ff_trans = DF.transition_pair == 'mean'
elif compare == 'transition_pair':
ff_probe = DF.probe == 'mean'
ff_trans = DF.transition_pair != 'mean'
else:
raise ValueError(f'unknown value compare: {compare}')
if analysis == 'SC':
index = 'cellid'
elif analysis in ('dPCA', 'LDA'):
index = 'siteid'
else:
raise ValueError(f'unknown analysis value:{analysis}')
ff_analisis = DF.analysis == analysis
ff_parameter = DF.parameter == parameter
ff_source = DF.source == source
if nozero:
ff_value = DF.value > 0
elif nozero is False:
ff_value = DF.value >= 0
else:
raise ValueError('nozero not bool')
filtered = DF.loc[ff_analisis & ff_probe & ff_trans & ff_parameter & ff_source & ff_value,
[index, compare, 'goodness', 'value']]
pivoted = filtered.pivot(index=index, columns=compare, values='value')
if nan2zero:
pivoted = pivoted.fillna(value=0).reset_index()
elif nan2zero is False:
pivoted = pivoted.dropna().reset_index()
else:
raise ValueError('nan2zero not bool')
molten = pivoted.melt(id_vars=index, var_name=compare)
fig, ax = plt.subplots()
# _ = fplt.paired_comparisons(ax, data=molten,x=compare, y='value', color='gray', alpha=0.3)
ax = sns.boxplot(x=compare, y='value', data=molten, ax=ax, color='white', width=0.5)
# no significant comparisons
box_pairs = list(itt.combinations(filtered[compare].unique(), 2))
stat_results = fplt.add_stat_annotation(ax, data=molten, x=compare, y='value', test='Wilcoxon',
box_pairs=box_pairs, width=0.5, comparisons_correction='bonferroni')
if parameter == 'significant_abs_sum':
ylabel = "integral (ms*d')"
title = f'integral\nn={len(pivoted.index)}'
elif parameter == 'significant_abs_mass_center':
ylabel = 'center of mass (ms)'
title = f'center of mass\nn={len(pivoted.index)}'
else:
ylabel = 'value'
title = f'{parameter}\nn={len(pivoted.index)}'
ax.set_ylabel(ylabel)
ax.set_title(fill(title,35))
ax.set_xticklabels(ax.get_xticklabels(), rotation=45, horizontalalignment='right')
return fig, ax
def condition_effect_on_parameter_cell_probe(parameter='significant_abs_mass_center', analysis='SC', source='dprime',
nan2zero=False, nozero=True):
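"""
Like condition_effect_on_parameter, but keeps every cell/site-probe combination (id_probe) as an
observation and compares the parameter across transition pairs only.
:return: fig, ax
"""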
ff_probe = DF.probe != 'mean'
ff_trans = DF.transition_pair != 'mean'
ff_analisis = DF.analysis == analysis
ff_parameter = DF.parameter == parameter
ff_source = DF.source == source
if nozero:
ff_value = DF.value > 0
elif nozero is False:
ff_value = DF.value >= 0
else:
raise ValueError('nozero not bool')
ff_good = np.logical_or(DF.goodness > 0.1, DF.goodness.isnull())
filtered = DF.loc[ff_analisis & ff_probe & ff_trans & ff_parameter & ff_source & ff_value & ff_good, :].copy()
pivoted = filtered.pivot_table(index='id_probe', columns='transition_pair', values='value')
if nan2zero:
pivoted = pivoted.fillna(value=0).reset_index()
elif nan2zero is False:
pivoted = pivoted.dropna().reset_index()
else:
raise ValueError('nan2zero not bool')
molten = pivoted.melt(id_vars='id_probe', var_name='transition_pair')
# _ = fplt.paired_comparisons(ax, data=molten,x=compare, y='value', color='gray', alpha=0.3)
ax = sns.boxplot(x='transition_pair', y='value', data=molten, color='white', width=0.5)
fig = ax.get_figure()
# no significant comparisons
box_pairs = list(itt.combinations(filtered['transition_pair'].unique(), 2))
stat_results = fplt.add_stat_annotation(ax, data=molten, x='transition_pair', y='value', test='Wilcoxon',
box_pairs=box_pairs, width=0.5, comparisons_correction='bonferroni')
if parameter == 'significant_abs_sum':
ylabel = "integral (ms*d')"
title = f'integral\nn={len(pivoted.index)}'
elif parameter == 'significant_abs_mass_center':
ylabel = 'center of mass (ms)'
title = f'center of mass\nn={len(pivoted.index)}'
else:
ylabel = 'value'
title = f'{parameter}\nn={len(pivoted.index)}'
ax.set_ylabel(ylabel)
ax.set_title(fill(title,35))
ax.set_xticklabels(ax.get_xticklabels(), rotation=45, horizontalalignment='right')
return fig, ax
def dPCA_SC_param_comparison(parameter, nan2zero=False, nozero=True, id_probe=False):
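"""
For each site (or site-probe combination if id_probe=True), compares the population (dPCA) value of
a summary parameter against the maximum value across that site's single cells, as a scatter with a
regression line and the identity line for reference.
:return: fig, ax
"""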
if id_probe:
ff_probe = DF.probe != 'mean'
else:
ff_probe = DF.probe == 'mean'
ff_trans = DF.transition_pair == 'mean'
ff_param = DF.parameter == parameter
ff_source = DF.source == 'dprime'
if nozero:
ff_value = DF.value > 0
elif nozero is False:
ff_value = DF.value >= 0
else:
raise ValueError('nozero not bool')
ff_anal = DF.analysis == 'SC'
sing = DF.loc[ff_anal & ff_probe & ff_trans & ff_param & ff_source & ff_value,
['id_probe', 'region', 'siteid', 'cellid', 'parameter', 'value']]
sing['site_probe'] = DF[['siteid', 'probe']].agg('_'.join, axis=1)
sing_pivot = sing.pivot(index='site_probe', columns='cellid', values='value')
if nan2zero:
sing_pivot = sing_pivot.fillna(0)
sing_pivot['agg'] = sing_pivot.max(axis=1)
ff_anal = DF.analysis == 'dPCA'
pops = DF.loc[ff_anal & ff_probe & ff_trans & ff_param & ff_source & ff_value,
['id_probe', 'region', 'siteid', 'cellid', 'parameter', 'value']]
if nan2zero:
pops = pops.fillna(0)
pops['site_probe'] = DF['id_probe']
pops = pops.set_index('site_probe')
toplot = pd.concat((pops.loc[:, ['region', 'value']], sing_pivot.loc[:, 'agg']), axis=1)
_, _, r_val, _, _ = sst.linregress(toplot['value'], toplot['agg'])  # linregress returns r, not r squared
ax = sns.regplot(x='value', y='agg', data=toplot, color='white', label=f'n={len(toplot.index)}\nr2={r_val**2:.2f}')
sns.despine(ax=ax)
_ = fplt.unit_line(ax, square_shape=False)
if parameter == 'significant_abs_mass_center':
parameter = 'center of mass'
xlabel = f"population (ms)"
ylabel = f'single cell max (ms)'
elif parameter == 'significant_abs_sum':
parameter = 'integral'
xlabel = f"population (ms*d')"
ylabel = f"single cell max (ms*d')"
else:
xlabel='value'
ylabel='value'
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
fig = ax.figure
title = f'{parameter}'
ax.set_title(fill(title, 35))
plt.show()
return fig,ax
###Output
_____no_output_____
###Markdown
parameter space
###Code
# parameter space
x = 'significant_abs_sum'
y = 'significant_abs_mass_center'
# single cell
analysis = 'SC'
fig, ax = parameter_space_scatter(x, y, analysis=analysis)
title = f'single cells in parameter space'
ax.set_title(fill(title, 35))
ax.set_xlabel("integral (ms*d')")
ax.set_ylabel("center of mass (ms)")
ax.legend(markerscale=1.5)
file = f'{analysis} summary {x} vs {y}'
savefig(fig, 'WIP4_figures', file, type='png')
plt.show()
# population
analysis = 'dPCA'
fig, ax = parameter_space_scatter(x, y, analysis=analysis)
title = f'populations in parameter space '
ax.set_title(fill(title, 35))
ax.set_xlabel("integral (ms*d')")
ax.set_ylabel("center of mass (ms)")
ax.legend(loc='best', markerscale=1.5)
file = f'{analysis} summary {x} vs {y}'
savefig(fig, 'WIP4_figures', file, type='png')
plt.show()
###Output
_____no_output_____
###Markdown
transition_pair effect
###Code
# single cell
analysis = 'SC'
source = 'dprime'
compare='transition_pair'
# center of mass
parameter = 'significant_abs_mass_center'
fig, ax = condition_effect_on_parameter(parameter, compare, analysis, source, nan2zero=False, nozero=True)
file = f'{analysis} {source}-{parameter} between {compare}'
savefig(fig, 'WIP4_figures', file, type='png')
plt.show()
# integral
parameter = 'significant_abs_sum'
fig, ax = condition_effect_on_parameter(parameter, compare, analysis, source, nan2zero=False, nozero=True)
file = f'{analysis} {source}-{parameter} between {compare}'
savefig(fig, 'WIP4_figures', file, type='png')
plt.show()
# population
analysis = 'dPCA'
source = 'dprime'
compare='transition_pair'
# center of mass
parameter = 'significant_abs_mass_center'
fig, ax = condition_effect_on_parameter(parameter, compare, analysis, source, nan2zero=False, nozero=True)
file = f'{analysis} {source}-{parameter} between {compare}'
savefig(fig, 'WIP4_figures', file, type='png')
plt.show()
# integral
parameter = 'significant_abs_sum'
fig, ax = condition_effect_on_parameter(parameter, compare, analysis, source, nan2zero=False, nozero=True)
file = f'{analysis} {source}-{parameter} between {compare}'
savefig(fig, 'WIP4_figures', file, type='png')
plt.show()
###Output
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
Using zero_method pratt
p-value annotation legend:
ns: 5.00e-02 < p <= 1.00e+00
*: 1.00e-02 < p <= 5.00e-02
**: 1.00e-03 < p <= 1.00e-02
***: 1.00e-04 < p <= 1.00e-03
****: p <= 1.00e-04
Using zero_method pratt
continuous_sharp v.s. continuous_similar: Wilcoxon test (paired samples) with Bonferroni correction, P_val=6.653e-03 stat=0.000e+00
Using zero_method pratt
continuous_similar v.s. silence_sharp: Wilcoxon test (paired samples) with Bonferroni correction, P_val=6.653e-03 stat=0.000e+00
Using zero_method pratt
continuous_similar v.s. similar_sharp: Wilcoxon test (paired samples) with Bonferroni correction, P_val=6.653e-03 stat=0.000e+00
###Markdown
population by site-probe
###Code
# population
analysis = 'dPCA'
source = 'dprime'
compare='transition_pair'
# center of mass
parameter = 'significant_abs_mass_center'
fig, ax = condition_effect_on_parameter_cell_probe(parameter, analysis, source,
nan2zero=False, nozero=True)
file = f'{analysis} id_probe {source}-{parameter} between {compare}'
savefig(fig, 'WIP4_figures', file, type='png')
plt.show()
# integral
parameter = 'significant_abs_sum'
fig, ax = condition_effect_on_parameter_cell_probe(parameter, analysis, source,
nan2zero=False, nozero=True)
file = f'{analysis} id_probe {source}-{parameter} between {compare}'
savefig(fig, 'WIP4_figures', file, type='png')
plt.show()
###Output
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
Using zero_method wilcox
p-value annotation legend:
ns: 5.00e-02 < p <= 1.00e+00
*: 1.00e-02 < p <= 5.00e-02
**: 1.00e-03 < p <= 1.00e-02
***: 1.00e-04 < p <= 1.00e-03
****: p <= 1.00e-04
Using zero_method wilcox
continuous_similar v.s. silence_continuous: Wilcoxon test (paired samples) with Bonferroni correction, P_val=1.161e-04 stat=5.000e+01
Using zero_method wilcox
continuous_sharp v.s. continuous_similar: Wilcoxon test (paired samples) with Bonferroni correction, P_val=2.635e-04 stat=6.100e+01
Using zero_method wilcox
continuous_similar v.s. silence_sharp: Wilcoxon test (paired samples) with Bonferroni correction, P_val=9.969e-05 stat=4.800e+01
Using zero_method wilcox
continuous_similar v.s. silence_similar: Wilcoxon test (paired samples) with Bonferroni correction, P_val=9.969e-05 stat=4.800e+01
Using zero_method wilcox
continuous_similar v.s. similar_sharp: Wilcoxon test (paired samples) with Bonferroni correction, P_val=7.628e-04 stat=7.600e+01
###Markdown
population vs single cell comparison
###Code
# integral
parameter = 'significant_abs_sum'
fig, ax = dPCA_SC_param_comparison(parameter, nan2zero=False, nozero=True, id_probe=False)
ax.legend(markerscale=1.5, loc='best')
file = f'SC DPCA {parameter} comparison'
savefig(fig, 'WIP4_figures', file, type='png')
plt.show()
# center of mass
parameter = 'significant_abs_mass_center'
fig, ax = dPCA_SC_param_comparison(parameter, nan2zero=False, nozero=True, id_probe=False)
ax.legend(markerscale=1.5, loc='best')
file = f'SC DPCA {parameter} comparison'
savefig(fig, 'WIP4_figures', file, type='png')
plt.show()
###Output
C:\Users\Mateo\Miniconda3\envs\nems\lib\site-packages\ipykernel_launcher.py:186: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.
To accept the future behavior, pass 'sort=False'.
To retain the current behavior and silence the warning, pass 'sort=True'.
C:\Users\Mateo\Miniconda3\envs\nems\lib\site-packages\ipykernel_launcher.py:186: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.
To accept the future behavior, pass 'sort=False'.
To retain the current behavior and silence the warning, pass 'sort=True'.
|
chapters/02_regression/prep-work.ipynb | ###Markdown
Simplex
Simple simplex implementation for teaching about regression.
###Code
import numpy as np
import matplotlib.pylab as plt
%matplotlib inline
class Simplex:
"""
Simplex minimizer for arbitrary 2D objective functions.
"""
def __init__(self,objective_function,guess_min=-10,guess_max=10):
"""
Create simplex instance.
arguments:
----------
objective_function: objective function for the minimizer. must take x and
y as first arguments.
guess_min: minimum initial guess (for both x and y)
guess_max: maximum initial guess (for both x and y)
"""
# Record objective function
self._objective_function = objective_function
# Create initial points as random draws from a uniform distribution
self.points = np.random.uniform(guess_min,guess_max,size=(3,2))
# Record guesses
self._guesses = np.copy(self.points)
# Create values for points
self._update_values()
self._num_moves = 0
self._plot = False
def _update_values(self):
"""
Calculate values and rank points from min to max.
"""
self.values = [0,0,0]
for i in range(3):
self.values[i] = self._objective_function(*self.points[i,:])
tmp = [(v,i) for i, v in enumerate(self.values)]
tmp.sort()
# points, from minimum to maximum
self.min_pt = tmp[0][1]
self.mid_pt = tmp[1][1]
self.max_pt = tmp[2][1]
def _flip(self):
"""
Reflect maximum point across the midpoint between the other two points.
"""
# midpoint for flipping
mid_flip = self.points[self.min_pt,:] + (self.points[self.mid_pt,:] - self.points[self.min_pt,:])/2
# move max point so its origin is the mid_flip point
to_flip = self.points[self.max_pt,:] - mid_flip
# Flip using some basic trig
L = np.sqrt(np.sum(to_flip**2))
thetax = np.arccos(to_flip[0]/L) + np.pi
thetay = np.arcsin(to_flip[1]/L) + np.pi
flipped = L*np.array((np.cos(thetax),np.sin(thetay)))
# Move the flipped point back to the original coordinates
new_point = flipped + mid_flip
# Calculate value. If it is no longer the max point, this is a successful move.
new_value = self._objective_function(*new_point)
if new_value < self.values[self.min_pt] or new_value < self.values[self.mid_pt]:
# Plot if requested
if self._plot:
plt.plot((self.points[self.max_pt,0],new_point[0]),
(self.points[self.max_pt,1],new_point[1]),'y-')
plt.plot((new_point[0]),(new_point[1]),"yo",ms=9)
self.points[self.max_pt,:] = new_point
return True
return False
def _small_step(self):
"""
Move the max point to midway between it and the midpoint between the other
two points.
"""
# midpoint for flipping
mid_flip = self.points[self.min_pt,:] + (self.points[self.mid_pt,:] - self.points[self.min_pt,:])/2
# move max point so its origin is the mid_flip point
to_flip = self.points[self.max_pt,:] - mid_flip
# Move using some basic trig
L = np.sqrt(np.sum(to_flip**2))
thetax = np.arccos(to_flip[0]/L) + np.pi
thetay = np.arcsin(to_flip[1]/L) + np.pi
flipped = -0.5*L*np.array((np.cos(thetax),np.sin(thetay)))
# Move the flipped point back to the original coordinates
new_point = flipped + mid_flip
# Calculate value. If it is no longer the max point, this is a successful move
new_value = self._objective_function(*new_point)
if new_value < self.values[self.min_pt] or new_value < self.values[self.mid_pt]:
# Plot if requested
if self._plot:
plt.plot((self.points[self.max_pt,0],new_point[0]),
(self.points[self.max_pt,1],new_point[1]),'y-',lw=2)
plt.plot((new_point[0]),(new_point[1]),"yo",ms=9)
self.points[self.max_pt,:] = new_point
return True
return False
def _contract(self):
"""
Contract the simplex towards the two better points.
"""
# Plot starting triangle
if self._plot:
plt.plot(self.points[:,0],self.points[:,1],"ko",ms=6)
plt.plot((self.points[self.min_pt,0],self.points[self.max_pt,0]),
(self.points[self.min_pt,1],self.points[self.max_pt,1]),"k-")
plt.plot((self.points[self.mid_pt,0],self.points[self.max_pt,0]),
(self.points[self.mid_pt,1],self.points[self.max_pt,1]),"k-")
plt.plot((self.points[self.min_pt,0],self.points[self.mid_pt,0]),
(self.points[self.min_pt,1],self.points[self.mid_pt,1]),"r--")
# Midpoint between better two points
mid_flip = self.points[self.min_pt,:] + (self.points[self.mid_pt,:] - self.points[self.min_pt,:])/2
# Midpoint between the max and min points
new_mid = self.points[self.max_pt,:] + (self.points[self.min_pt,:] - self.points[self.max_pt,:])/2
# Perform the contraction.
self.points[self.max_pt,:] = new_mid
self.points[self.mid_pt,:] = mid_flip
# Plot final triangle
if self._plot:
plt.plot(self.points[:,0],self.points[:,1],"yo",ms=6)
plt.plot((self.points[self.min_pt,0],self.points[self.max_pt,0]),
(self.points[self.min_pt,1],self.points[self.max_pt,1]),"y-")
plt.plot((self.points[self.mid_pt,0],self.points[self.max_pt,0]),
(self.points[self.mid_pt,1],self.points[self.max_pt,1]),"y-")
plt.plot((self.points[self.min_pt,0],self.points[self.mid_pt,0]),
(self.points[self.min_pt,1],self.points[self.mid_pt,1]),"y-")
return True
def make_move(self,plot=False):
"""
Make a simplex move. Try flip, then small step, then contract.
"""
self._plot = plot
# Update to current values
self._update_values()
# Plot initial triangle
if self._plot:
plt.plot(self.points[:,0],self.points[:,1],"ko",ms=6)
plt.plot((self.points[self.min_pt,0],self.points[self.max_pt,0]),
(self.points[self.min_pt,1],self.points[self.max_pt,1]),"k-")
plt.plot((self.points[self.mid_pt,0],self.points[self.max_pt,0]),
(self.points[self.mid_pt,1],self.points[self.max_pt,1]),"k-")
plt.plot((self.points[self.min_pt,0],self.points[self.mid_pt,0]),
(self.points[self.min_pt,1],self.points[self.mid_pt,1]),"r--")
# If flip fails, small_step. If small_step fails, contract.
if self._flip():
print("flip")
elif self._small_step():
print("simple")
else:
print("large")
self._contract()
self._num_moves += 1
def restore_guesses(self):
"""
Restore the fitter to the initial guesses.
"""
self.points = np.copy(self._guesses)
@property
def estimate(self):
"""
Current best estimate of the minimum.
"""
self._update_values()
return self.points[self.min_pt], self.values[self.min_pt]
def objective_simple(x,y):
"""
Single-peaked 2D polynomial in x and y.
"""
return 20*(x + 2)**2 + 15*(y - 0.5)**2
def objective_doomed(x,y):
"""
Saddle-shaped 2D polynomial in x and y.
"""
return -20*(x + 2)**2 + 15*(y - 0.5)**2
def objective_multi(x,y):
return -(20*(x + 5))**2 - (20*(y + 5))**2 + (20*(x - 5))**2 + (20*(y - 5))**2
def run_simplex(objective_function,prefix="z",simplex=None,figsize=(10,10)):
if simplex is None:
simplex = Simplex(objective_function)
xlist = np.linspace(-10.0, 10.0, 100)
ylist = np.linspace(-10.0, 10.0, 100)
X, Y = np.meshgrid(xlist, ylist)
Z = objective_function(X,Y)
for i in range(20):
plt.figure(figsize=figsize)
cp = plt.contourf(X, Y, Z,20,cmap="terrain")
#plt.plot((-2),(0.5),"b+",ms=10)
plt.axis("equal")
plt.axis("off")
plt.xlim((-10,10))
plt.ylim((-10,10))
simplex.make_move(plot=True)
#simplex.make_move(plot=False)
name = "{}".format(i)
name = name.zfill(5)
plt.savefig("{}{}.png".format(prefix,name),bbox_inches="tight")
plt.show()
return simplex
#s.restore_guesses()
x = run_simplex(objective_simple)
###Output
_____no_output_____
###Markdown
Prep: Create an m/b SSR heat map
###Code
m_list = np.linspace(-0.5, 0.5, 100)
b_list = np.linspace(-0.5, 0.5, 100)
M, B = np.meshgrid(m_list,b_list)
def ssr(x_obs,y_obs,m,b):
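"""Sum of squared residuals of the line y = m*x + b against (x_obs, y_obs), evaluated for every (m, b) pair on the grid."""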
out = np.zeros(m.shape)
for i in range(m.shape[0]):
for j in range(m.shape[1]):
out[i,j] = np.sum(((m[i,j]*x_obs + b[i,j]) - y_obs)**2)
return out
Z = ssr(d.time,d.obs,M,B)
plt.figure(figsize=(8,8))
cp = plt.contourf(M, B, Z,20,cmap="terrain")
plt.axis("equal")
plt.xlabel("m")
plt.ylabel("b")
plt.plot((0.0696240601504),(0.186085714286),"y+",ms=20)
plt.savefig("param-space.png")
###Output
_____no_output_____
###Markdown
Regression engine
###Code
## Models
def lin(x,a=1,b=1):
return a + b*x
def hb(x,a=1,b=1):
return a*(b*x)/(1 + b*x)
def hbc(x,a=1,b=1,c=1):
return a*(b*(x**c))/(1 + b*(x**c))
def expt(x,a=1,b=1):
return a*(1 - np.exp(-b*x))
def second(x,a=1,b=1,c=1):
return a + b*x + c*(x**2)
def third(x,a=1,b=1,c=1,d=1):
return a + b*x + c*(x**2) + d*(x**3)
def trig(x,a=1,b=1,c=1):
return a*np.sin(b*x + c)
def trig2(x,a=1,b=1,c=1,d=1):
return a*np.sin(x*b) + c*np.sin(x*d)
def logd(x,a=1,b=1):
return a*np.log(x + b)
def logdc(x,a=1,b=1,c=1):
return a*np.log(x*b + c)
### TEST FITTING
import inspect
import scipy.optimize
import pandas as pd
def residuals(param,x,y,f):
"""A generalized residuals function."""
return y - f(x,*param)
def fitter(x,y,f):
"""
A generalized fitter. Find parameters of `f` that minimize the
residual between `f` and `y` for values of `x`. This function
assumes that `f` has the form:
f(x,param1,param2,param3...)
x and y should be numpy arrays of the same length.
"""
# Create a list of parameter names and guesses using `inspect`
names = []
guesses = []
s = inspect.signature(f)
for i, p in enumerate(s.parameters):
names.append(s.parameters[p].name)
guesses.append(s.parameters[p].default)
# Fit the model to the data.
x0 = np.array(guesses[1:])
fit = scipy.optimize.least_squares(residuals,x0,
args=(x,y,f))
# Plot the fit
x_range = np.linspace(np.min(x),np.max(x),100)
plt.plot(x_range,f(x_range,*fit.x),"-")
# Calculate R^2
ss_err = np.sum(residuals(fit.x,x,y,f)**2)
ss_tot = np.sum((y - np.mean(y))**2)
R_sq = 1 - (ss_err/ss_tot)
print(len(fit.fun))
return len(fit.x), fit.cost #np.sum(residuals(fit.x,x,y,f)**2)
# Load in dataset
d = pd.read_csv("data/dataset_0.csv")
plt.plot(d.x,d.y,"ko")
# Fit all of those functions to the data
func_list = [lin,hb,hbc,expt,second,third,trig,trig2,logd,logdc]
results = []
for f in func_list:
results.append((str(f).split()[1],fitter(d.x,d.y,f)))
print(results[-1])
###Output
_____no_output_____
###Markdown
Generate data sets to fit
###Code
# Generate data
x = np.linspace(0,10,41)
y = expt(x,4,0.3) + np.random.normal(0,0.3,len(x))
d = pd.DataFrame({"x":x,"y":y})
plt.plot(x,y,"o"); plt.show()
d.to_csv("dataset_0.csv")
x = np.linspace(0,10,41)
y = hbc(x,4,0.005,4) + np.random.normal(0,0.3,len(x))
d = pd.DataFrame({"x":x,"y":y})
plt.plot(x,y,"o"); plt.show()
d.to_csv("dataset_1.csv")
###Output
_____no_output_____ |
notebooks/MICCAI_plots.ipynb | ###Markdown
Indexing: `n_frames, n_frames0, n_rois, N_reflections, max_rot, tracker` `T_update[n_frame0, N_refl, rot, tracker]`
###Code
qJ_t = np.nanquantile(JACC, [.25, .5, .75], axis=(1,2,3,4))
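# quantiles taken over every axis except frame (axis 0) and tracker (last axis),
# i.e. pooling over starting frame, ROI, number of reflections and rotation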
fig, ax = plt.subplots(4,1, figsize=(10,8))
ax2 = []
for ii,a in enumerate(ax):
a.plot(qJ_t[1,:,ii])
a.set_title((fit_flow_funs+cv2trackers_to_use[:-slow_trackers_last_n])[ii])
# a.set_ylabel('Median Jaccard Index')
a.axis(xmin=0, xmax=50, ymin=.7, ymax=1)
if ii>1:
a2 = a.twinx()
ax2.append(a2)
a2.plot( 1-np.count_nonzero(np.isnan(JACC[...,ii]), axis=(1,2,3,4))/np.prod(JACC.shape[1:-1]), '--' )
# a2.set_ylabel('Percentage of ROIs successfully tracked')
a2.axis(xmin=0, xmax=50, ymin=.1, ymax=1)
fig.add_subplot(111, frameon=False)
# hide tick and tick label of the big axis
plt.tick_params(labelcolor='none', which='both', top=False, bottom=False, left=False, right=False)
plt.xlabel("frame number")
plt.ylabel('Median Jaccard Index')
# ax3 = fig.add_subplot(111, frameon=False, label='3')
# # hide tick and tick label of the big axis
# plt.tick_params(labelcolor='none', which='both', top=False, bottom=False, left=False, right=False)
# ax3.yaxis.set_label_position('right')
# ax3.set_ylabel('Percentage of ROIs successfully tracked')
plt.tight_layout()
qJ_trkr = np.nanquantile(JACC, [.1, .25, .5, .75, .9], axis=range(5)).T
qJ_trkr_slow = np.nanquantile(JACC_slow, [.1, .25, .5, .75, .9], axis=range(5)).T
fig, ax = plt.subplots()
ax.bxp([
{'whislo' : J[0], 'q1':J[1], 'med':J[2], 'q3':J[3], 'whishi':J[4]}
for J in np.vstack((qJ_trkr, qJ_trkr_slow))
], showfliers=False)
ax.grid(True, axis='x')
ax.set_ylabel('Jaccard Index')
n_frames_counted = np.array( [JACC.shape[0]]*JACC.shape[-1] + [JACC_slow.shape[0]]*JACC_slow.shape[-1])
avg_T_update = np.sum(T_update, axis=(0,1,2))/(n_frames_counted*np.prod(T_update.shape[:-1]))
ax2 = ax.twinx()
ax2.plot(1+np.arange(avg_T_update.shape[-1]),1/avg_T_update, 'v:', markersize=12)
ax2.set_ylabel('Average FPS')
ax.set_xticklabels(fit_flow_funs+cv2trackers_to_use)
# plt.savefig('boxplot_all.PDF', pad_inches=0, bbox_inches='tight')
qJ_trkr_rot = np.nanquantile(JACC, [.1, .25, .5, .75, .9], axis=range(4)).T
qJ_trkr_slow_rot = np.nanquantile(JACC_slow, [.1, .25, .5, .75, .9], axis=range(4)).T
fig, ax = plt.subplots()
boxwidth = .12
ax.bxp(sum([[
{'whislo' : J[0], 'q1':J[1], 'med':J[2], 'q3':J[3], 'whishi':J[4]}
for J in JJ]
for JJ in np.vstack((qJ_trkr_rot, qJ_trkr_slow_rot))
], []), showfliers=False,
positions= (np.array([[-2*boxwidth,0,2*boxwidth]]).T+np.arange(8)).flatten(order='F'), widths=boxwidth
)
ax.grid(True, axis='x')
ax.set_ylabel('Jaccard Index')
xticklabels = sum([ [f'{max_rots_deg[0]:.0f}', f'{max_rots_deg[1]:.0f}'+'\n'+f, f'{max_rots_deg[2]:.0f}'] for f in fit_flow_funs+cv2trackers_to_use], [])
_ = ax.set_xticklabels(xticklabels, fontsize=16)
# plt.savefig('boxplot_rot.PDF', pad_inches=0, bbox_inches='tight')
qJ_trkr_ref = np.nanquantile(JACC, [.1, .25, .5, .75, .9], axis=(0,1,2,4)).T
qJ_trkr_slow_ref = np.nanquantile(JACC_slow, [.1, .25, .5, .75, .9], axis=(0,1,2,4)).T
fig, ax = plt.subplots()
boxwidth = .12
ax.bxp(sum([[
{'whislo' : J[0], 'q1':J[1], 'med':J[2], 'q3':J[3], 'whishi':J[4]}
for J in JJ]
for JJ in np.vstack((qJ_trkr_ref, qJ_trkr_slow_ref))
], []), showfliers=False,
positions= (np.array([[-2*boxwidth,0,2*boxwidth]]).T+np.arange(8)).flatten(order='F'), widths=boxwidth
)
ax.grid(True, axis='x')
ax.set_ylabel('Jaccard Index')
xticklabels = sum([ [f'{N_reflections[0]:.0f}', f'{N_reflections[1]:.0f}'+'\n'+f, f'{N_reflections[2]:.0f}'] for f in fit_flow_funs+cv2trackers_to_use], [])
_ = ax.set_xticklabels(xticklabels, fontsize=16)
# plt.savefig('boxplot_ref.PDF', pad_inches=0, bbox_inches='tight')
###Output
_____no_output_____ |
notebooks/pyannote.metrics.diarization.ipynb | ###Markdown
Diarization evaluation metrics
###Code
from pyannote.core import Annotation, Segment
reference = Annotation()
reference[Segment(0, 10)] = 'A'
reference[Segment(12, 20)] = 'B'
reference[Segment(24, 27)] = 'A'
reference[Segment(30, 40)] = 'C'
reference
hypothesis = Annotation()
hypothesis[Segment(2, 13)] = 'a'
hypothesis[Segment(13, 14)] = 'd'
hypothesis[Segment(14, 20)] = 'b'
hypothesis[Segment(22, 38)] = 'c'
hypothesis[Segment(38, 40)] = 'd'
hypothesis
###Output
_____no_output_____
###Markdown
Diarization error rate
###Code
from pyannote.metrics.diarization import DiarizationErrorRate
diarizationErrorRate = DiarizationErrorRate()
print("DER = {0:.3f}".format(diarizationErrorRate(reference, hypothesis, uem=Segment(0, 40))))
###Output
DER = 0.516
###Markdown
Optimal mapping
###Code
reference
hypothesis
diarizationErrorRate.optimal_mapping(reference, hypothesis)
###Output
_____no_output_____
###Markdown
Details
###Code
diarizationErrorRate(reference, hypothesis, detailed=True, uem=Segment(0, 40))
###Output
_____no_output_____
###Markdown
Clusters purity and coverage
###Code
from pyannote.metrics.diarization import DiarizationPurity
purity = DiarizationPurity()
print("Purity = {0:.3f}".format(purity(reference, hypothesis, uem=Segment(0, 40))))
from pyannote.metrics.diarization import DiarizationCoverage
coverage = DiarizationCoverage()
print("Coverage = {0:.3f}".format(coverage(reference, hypothesis, uem=Segment(0, 40))))
###Output
Coverage = 0.759
|
Explore_Data.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score, accuracy_score, mean_absolute_error
%matplotlib inline
df = pd.read_csv('/content/listings.csv')
df.head()
df.columns
# drop unneeded columns in place (drop(..., inplace=True) returns None, so its result should not be assigned)
df.drop(columns=['listing_url', 'scrape_id', 'last_scraped', 'name', 'summary', 'space', 'experiences_offered',
'neighborhood_overview','notes', 'transit', 'thumbnail_url', 'medium_url', 'picture_url','xl_picture_url',
'host_id', 'host_url', 'host_name', 'host_since','host_location', 'host_about', 'host_response_time',
'host_response_rate', 'host_acceptance_rate', 'host_is_superhost','host_thumbnail_url', 'host_picture_url',
'host_neighbourhood','host_listings_count', 'host_total_listings_count','host_verifications',
'host_has_profile_pic', 'host_identity_verified','street', 'neighbourhood_cleansed','neighbourhood_group_cleansed',
'state', 'zipcode', 'market','smart_location', 'country_code', 'country', 'latitude', 'longitude',
'is_location_exact', 'property_type', 'room_type', 'bed_type', 'square_feet', 'weekly_price', 'monthly_price',
'security_deposit','cleaning_fee', 'guests_included', 'extra_people', 'minimum_nights','maximum_nights',
'calendar_updated', 'has_availability','availability_30', 'availability_60', 'availability_90','availability_365',
'calendar_last_scraped', 'requires_license','license', 'jurisdiction_names', 'instant_bookable','cancellation_policy',
'require_guest_profile_picture','require_guest_phone_verification', 'calculated_host_listings_count', 'amenities', 'accommodates', 'neighbourhood', 'city', ], inplace=True)
df.head()
###Output
_____no_output_____ |
concepts/Advanced Algorithms/03 Dynamic programming/01 knapsack_problem.ipynb | ###Markdown
Knapsack Problem
Now that you have seen the dynamic programming solution for the knapsack problem, it's time to implement it. Implement the function `max_value` to return the maximum value given the items (`items`) and the maximum weight of the knapsack (`knapsack_max_weight`). The `items` variable is a list of `Item` objects, where `Item` is a [named tuple](https://docs.python.org/3/library/collections.html#collections.namedtuple).
###Code
import collections
Item = collections.namedtuple('Item', ['weight', 'value'])
def max_value(knapsack_max_weight, items):
"""
Get the maximum value of the knapsack.
"""
pass
tests = [
{
'correct_output': 14,
'input':
{
'knapsack_max_weight': 15,
'items': [Item(10, 7), Item(9, 8), Item(5, 6)]}},
{
'correct_output': 13,
'input':
{
'knapsack_max_weight': 25,
'items': [Item(10, 2), Item(29, 10), Item(5, 7), Item(5, 3), Item(5, 1), Item(24, 12)]}}]
for test in tests:
assert test['correct_output'] == max_value(**test['input'])
###Output
_____no_output_____
###Markdown
Hide Solution
###Code
def max_value(knapsack_max_weight, items):
lookup_table = [0] * (knapsack_max_weight + 1)
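# lookup_table[c] holds the best value achievable with capacity c;
# iterating capacities from high to low ensures each item is used at most once (0/1 knapsack)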
for item in items:
for capacity in reversed(range(knapsack_max_weight + 1)):
if item.weight <= capacity:
lookup_table[capacity] = max(lookup_table[capacity], lookup_table[capacity - item.weight] + item.value)
return lookup_table[-1]
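# Example (mirrors the first test case above):
# max_value(15, [Item(10, 7), Item(9, 8), Item(5, 6)]) == 14,
# i.e. the 9- and 5-weight items (values 8 + 6) beat any combination that includes the 10-weight item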
###Output
_____no_output_____ |
Play Store App Analysis.ipynb | ###Markdown
Data Cleaning
###Code
#using mean to fill the missing values in the 'Rating' attribute
mean=df['Rating'].mean()
df['Rating'].fillna(mean, inplace=True)
#using 'inplace' parameter to make the changes well in place.
df.fillna(df.mean(),inplace = True)
df.info()
#re-checking count of the duplicate data
sum(df.duplicated())
df.shape
df['Type'].fillna(mean,inplace=True)
df['Content Rating'].fillna(mean,inplace=True)
df['Current Ver'].fillna(mean,inplace=True)
df['Android Ver'].fillna(mean,inplace=True)
#final checking of missing values by using info() method
df.info()
#checking the count of missing or null values
df.isnull().sum()
#checking the final shape of the data set
df.shape
#checking the first five rows of the data set
df.head()
###Output
_____no_output_____ |
ETL Pipelines/10_imputation_exercise/10_imputations_exercise-solution.ipynb | ###Markdown
Imputing Data
When a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results.
Exercise - Part 1
Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:
* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html
* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html
* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
df_melt['GDP_filled'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.mean()))
df_melt['GDP_filled']
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values.
Exercise - Part 2
Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.
The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code:
1. You want to first make sure the data is sorted by year.
2. You need to group the data by country name so that the forward fill stays within each country.
Write code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill')
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next.
Exercise - Part 3
This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='bfill')
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion
In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.
To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on whether you run forward fill or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.
Run this next code cell to see whether running both forward fill and back fill ends up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____ |