path (stringlengths 7–265) | concatenated_notebook (stringlengths 46–17M)
---|---|
stream_ex0_stream_random_tweets.ipynb | ###Markdown
This example pulls random tweets from Twitter's streaming API, meaning it will keep gathering more data the longer you let it run. Hit the stop button in the header to stop it from gathering more data.
###Code
import tweepy

# NOTE: auth is assumed to be a tweepy.OAuthHandler already configured with your
# Twitter API credentials (consumer key/secret and access token/secret).
api = tweepy.API(auth)

class StreamListener(tweepy.StreamListener):
    def on_status(self, tweet):
        print(tweet.author.screen_name + "\t" + tweet.text)

    def on_error(self, status_code):
        print('Error: ' + repr(status_code))
        return False

l = StreamListener()
streamer = tweepy.Stream(auth=auth, listener=l)
streamer.sample()
###Output
_____no_output_____ |
training-data-analyst/summary/02-02_structed_summary.ipynb | ###Markdown
Get Google Cloud project id
###Code
!gcloud config list project
###Output
_____no_output_____
###Markdown
Set Google Cloud config
###Code
%%bash
# REGION is assumed to be set (e.g. exported via os.environ) before running this cell
gcloud config set compute/region $REGION
gcloud config set ai_platform/region global
###Output
_____no_output_____
###Markdown
Create bucket
###Code
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Usage of argparse
###Code
# https://docs.python.org/ko/3/library/argparse.html
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
    "--job-dir",      # name or flags, e.g. "--job-dir"
    nargs="+",        # "+" collects every supplied command-line value into a list;
                      # an error is raised if at least one value is not provided
    default=None,
    type=str,         # e.g. int, float
    required=True,    # True or False
    help="example argument",
)
# example values for illustration only ("gs://my-bucket/model" is a placeholder)
args = parser.parse_args(["--job-dir", "gs://my-bucket/model"])
arguments = args.__dict__
###Output
_____no_output_____
###Markdown
train_and_evaluate

1. build_wide_deep_model
   - create input layers: { name: keras.Input(name, shape, dtype) }
   - create feature columns of wide and deep features
     - create numeric_column(name)
     - create categorical column: categorical_column_with_vocabulary_list(name, values), indicator_column(category)
     - create new deep features from numeric columns
       - bucketized_column(fc, boundaries=list)
       - indicator_column(bucketized): one-hot encoding
       - crossed_column([bucketized1, bucketized2], hash_bucket_size)
       - embedding_column(crossed, dimension=nembeds)
   - stack DenseFeatures layers of wide and deep features
   - add Dense layers (activation=relu)
   - add Concatenate layer of wide and deep layers
   - add output Dense layer (units=1, activation=linear)
   - create model with inputs and outputs
   - compile model (loss=mse, optimizer=adam, metrics=[rmse, mse])
2. load_dataset: trainds, evalds
   - get dataset from csv: make_csv_dataset(pattern, batch_size, COLUMNS, DEFAULTS)
   - append map function to split features and labels
   - if train, then shuffle and repeat
     - repeat(None): infinite repeat
     - shuffle(buffer_size=1000): choose an appropriate value
   - prefetch: buffer_size=1 is AUTOTUNE
3. model.fit
   - trainds
   - validation_data=evalds
   - epochs = NUM_EVALS or NUM_EPOCHS
   - steps_per_epoch = train_examples // (batch_size * epochs)
   - callbacks=[callbacks]
4. set hypertune
   - report_hyperparameter_tuning_metric
     - hyperparameter_metric_tag='rmse': name of the metric to monitor
     - metric_value=history.history['val_rmse'][-1]: value of the monitored metric
     - global_step=args['num_epochs']
5. save model

Meaning of steps_per_epoch

steps_per_epoch = train_examples // (batch_size * epochs)
- virtual epoch: a single epoch is split into several steps, and each step runs eval and callbacks just like an epoch

Deleting and deploying model
###Code
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION='gs://qwiklabs-gcp-00-0db9b1bc58c6/babyweight/trained_model/20210602015319'
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION"
# All versions must be deleted before the model itself can be deleted
# The default version can only be deleted after all other versions have been removed
gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} -q
gcloud ai-platform models delete ${MODEL_NAME} -q
# Create the model, then create a version
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model=${MODEL_NAME} \
--origin=${MODEL_LOCATION} \
--runtime-version=2.1 \
--python-version=3.7
###Output
_____no_output_____ |
Convolutional Neural Networks in Tensorflow/Week 1/Graded Exercise_1_Cats_vs_Dogs_Question-FINAL.ipynb | ###Markdown
NOTE: In the cell below you **MUST** use a batch size of 10 (`batch_size=10`) for the `train_generator` and the `validation_generator`. Using a batch size greater than 10 will exceed memory limits on the Coursera platform.
###Code
TRAINING_DIR = "/tmp/cats-v-dogs/training"
train_datagen = ImageDataGenerator(rescale=1.0/255)
# NOTE: YOU MUST USE A BATCH SIZE OF 10 (batch_size=10) FOR THE
# TRAIN GENERATOR.
train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
batch_size=10,
class_mode='binary',
target_size=(150, 150))
VALIDATION_DIR = "/tmp/cats-v-dogs/testing"
validation_datagen = ImageDataGenerator(rescale=1.0/255)
# NOTE: YOU MUST USE A BATCH SIZE OF 10 (batch_size=10) FOR THE
# VALIDATION GENERATOR.
validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
batch_size=10,
class_mode='binary',
target_size=(150, 150))
# Expected Output:
# Found 2700 images belonging to 2 classes.
# Found 300 images belonging to 2 classes.
history = model.fit_generator(train_generator,
epochs=2,
verbose=1,
validation_data=validation_generator)
# PLOT LOSS AND ACCURACY
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['acc']
val_acc=history.history['val_acc']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r', "Training Accuracy")
plt.plot(epochs, val_acc, 'b', "Validation Accuracy")
plt.title('Training and validation accuracy')
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r', "Training Loss")
plt.plot(epochs, val_loss, 'b', "Validation Loss")
plt.title('Training and validation loss')
# Desired output. Charts with training and validation metrics. No crash :)
###Output
_____no_output_____
###Markdown
Submission Instructions
###Code
# Now click the 'Submit Assignment' button above.
###Output
_____no_output_____
###Markdown
When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners.
###Code
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
###Output
_____no_output_____ |
Worksheets/DataTypes1.ipynb | ###Markdown
Data types and Variables
---
**Recap**: Variables can store data of different types:
* **int** (whole numbers, e.g. 4, 523, 1984)
* **float** (decimal numbers, e.g. 4.3)
* **str** (strings of characters)
* **bool** (True or False)

and can be stored in groups (**lists, tuples, dictionaries,** etc).

You can '**assign**' a value to a variable using the **=** sign. Once a variable has been assigned a value it will decide what type it is from that value. For example:

firstname = "Monty"
age = 20

*firstname* is now a *str* variable (a string of characters) and *age* is now an *int* variable (a whole number).

Once a variable knows its type you will only be able to use it for processes that are relevant to that type. For example, you won't be able to add firstname and age together because firstname is a word and age is a number. You would, however, be able to add 1 to the age:

age = age + 1

age is now 1 bigger than it was before.

Using variables of different types and functions
----

Exercise 1

The cell below contains a function. Functions are named sets of instructions that do one particular thing, often creating a new set of data but sometimes just setting something up. A function starts with the keyword def (short for define or definition). All instructions below the definition are indented and this indicates that they are part of that function. A function runs when its name is used outside the function (here it is not indented). The indentation is important, so note where the code is and isn't indented.

* create a variable called **name** and assign it a value (any name)
* print the message "Hello" name
* change the value of `name` and run the code again to get a new message
###Code
def print_welcome():
# create the variable called name below here (indented like this line) and add the instruction print("Hello",name)
print_welcome()
###Output
_____no_output_____
###Markdown
---- Exercise 2* create two variables **num1** and **num2** and assign them each a whole number * create a third variable **total** which will store the sum of num1 + num2 * run the code. Change the value of one of the numbers and run the code again to get new messages and a new total.
###Code
def print_total():
# add your code below here
print(num1, "+", num2, "=", total)
print_total()
###Output
_____no_output_____
###Markdown
--- Exercise 3 - variables of different types* create a variable called **name** and assign it the value "Billy" * create a variable called **age** and assign it the value 18 * print a message "Hello `name` you are `age` years old" Test input: Billy 18 Expected output: Hello Billy you are 18 years old
###Code
def print_info():
# add your code below here
print_info()
###Output
_____no_output_____
###Markdown
--- Exercise 4 - float variables (and writing your own function)Write a function called **print_price()** which will: * create a variable called **product** and assign the value "Chocolate Bar" * create a variable called **cost** and assign the value 1.39 * print the message `product`, "costs", "£", `cost` Expected output: Chocolate Bar costs £ 1.39
###Code
###Output
_____no_output_____
###Markdown
--- Exercise 5 - concatenating stringsWrite a function called **print_full_name()** which will: * create variable called **name** and assign it the value "Monty" * create a variable called **surname** and assign it the value "Python" * create a variable called **full_name** and assign it the value `name` + " " + `surname` * print the `full_name` Expected output: Monty Python
###Code
###Output
_____no_output_____ |
AV_WeatherPy.ipynb | ###Markdown
WeatherPy
----
Note: Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
temp_units = "imperial"
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# Define url
query_url = f"http://api.openweathermap.org/data/2.5/weather?appid={weather_api_key}&units={temp_units}&q="
###Output
_____no_output_____
###Markdown
Generate Cities List
###Code
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the city is unique, then add it to a our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
###Output
_____no_output_____
###Markdown
Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
###Code
# Create the lists for the dataframe
city_name = []
cloudiness = []
country = []
date = []
humidity = []
lat = []
lng = []
max_temp = []
wind_speed = []
city_two = []
# Set the count
first_count = 0
first_set = 1
# Print the introduction to the output
print("Beginning Data Retrieval")
print("-----------------------------")
# Create a loop for all the cities
for city in cities:
try:
response = requests.get(query_url + city.replace(" ","&")).json()
city_name.append(response["name"])
cloudiness.append(response['clouds']['all'])
country.append(response['sys']['country'])
date.append(response['dt'])
humidity.append(response['main']['humidity'])
lat.append(response['coord']['lat'])
lng.append(response['coord']['lon'])
max_temp.append(response['main']['temp_max'])
wind_speed.append(response['wind']['speed'])
# Create sets every 50 city
if first_count > 49:
first_count = 1
first_set += 1
city_two.append(city)
# Create city numbers
else:
first_count += 1
city_two.append(city)
print(f"Processing Record {first_count} of Set {first_set} | {city}")
# If it doesn't work, continue looping to the next city
except Exception:
print("City not found. Skipping...")
# Print the ending of the output
print("-----------------------------")
print("Data Retrieval Complete")
print("-----------------------------")
###Output
Beginning Data Retrieval
-----------------------------
City not found. Skipping...
Processing Record 1 of Set 1 | sioux lookout
Processing Record 2 of Set 1 | hithadhoo
Processing Record 3 of Set 1 | kaeo
Processing Record 4 of Set 1 | tiksi
Processing Record 5 of Set 1 | gorokhovets
City not found. Skipping...
Processing Record 6 of Set 1 | narsaq
City not found. Skipping...
Processing Record 7 of Set 1 | qaanaaq
Processing Record 8 of Set 1 | mataura
City not found. Skipping...
Processing Record 9 of Set 1 | guerrero negro
Processing Record 10 of Set 1 | cabo san lucas
Processing Record 11 of Set 1 | pisco
Processing Record 12 of Set 1 | hobart
Processing Record 13 of Set 1 | jamestown
City not found. Skipping...
Processing Record 14 of Set 1 | jumla
Processing Record 15 of Set 1 | kavieng
Processing Record 16 of Set 1 | pevek
Processing Record 17 of Set 1 | upernavik
Processing Record 18 of Set 1 | ushuaia
City not found. Skipping...
Processing Record 19 of Set 1 | menongue
Processing Record 20 of Set 1 | vaini
Processing Record 21 of Set 1 | vestmannaeyjar
Processing Record 22 of Set 1 | cody
Processing Record 23 of Set 1 | mar del plata
Processing Record 24 of Set 1 | new norfolk
Processing Record 25 of Set 1 | saskylakh
Processing Record 26 of Set 1 | mys shmidta
Processing Record 27 of Set 1 | puerto ayora
Processing Record 28 of Set 1 | chokurdakh
Processing Record 29 of Set 1 | punta arenas
Processing Record 30 of Set 1 | hermanus
Processing Record 31 of Set 1 | vanimo
Processing Record 32 of Set 1 | ewo
Processing Record 33 of Set 1 | port lincoln
Processing Record 34 of Set 1 | phonhong
Processing Record 35 of Set 1 | kodiak
Processing Record 36 of Set 1 | butaritari
Processing Record 37 of Set 1 | rikitea
Processing Record 38 of Set 1 | iquique
Processing Record 39 of Set 1 | norwich
Processing Record 40 of Set 1 | hay river
Processing Record 41 of Set 1 | port alfred
Processing Record 42 of Set 1 | cidreira
Processing Record 43 of Set 1 | jalu
City not found. Skipping...
Processing Record 44 of Set 1 | bambous virieux
Processing Record 45 of Set 1 | tessalit
Processing Record 46 of Set 1 | college
Processing Record 47 of Set 1 | kapaa
Processing Record 48 of Set 1 | kaseda
Processing Record 49 of Set 1 | faya
Processing Record 50 of Set 1 | bluff
Processing Record 1 of Set 2 | isangel
Processing Record 2 of Set 2 | sechura
Processing Record 3 of Set 2 | shimoda
City not found. Skipping...
Processing Record 4 of Set 2 | boyolangu
City not found. Skipping...
Processing Record 5 of Set 2 | feijo
Processing Record 6 of Set 2 | yantarnyy
Processing Record 7 of Set 2 | avarua
Processing Record 8 of Set 2 | macau
Processing Record 9 of Set 2 | tuatapere
City not found. Skipping...
Processing Record 10 of Set 2 | arraial do cabo
Processing Record 11 of Set 2 | havoysund
Processing Record 12 of Set 2 | avera
Processing Record 13 of Set 2 | noumea
Processing Record 14 of Set 2 | albany
Processing Record 15 of Set 2 | birao
Processing Record 16 of Set 2 | peterhead
Processing Record 17 of Set 2 | flinders
Processing Record 18 of Set 2 | kharp
Processing Record 19 of Set 2 | swan river
Processing Record 20 of Set 2 | poum
Processing Record 21 of Set 2 | cayenne
Processing Record 22 of Set 2 | yellowknife
Processing Record 23 of Set 2 | longyearbyen
Processing Record 24 of Set 2 | chapais
Processing Record 25 of Set 2 | nambucca heads
Processing Record 26 of Set 2 | airai
Processing Record 27 of Set 2 | arawa
Processing Record 28 of Set 2 | castro
Processing Record 29 of Set 2 | oranjemund
Processing Record 30 of Set 2 | busselton
Processing Record 31 of Set 2 | qasigiannguit
City not found. Skipping...
City not found. Skipping...
Processing Record 32 of Set 2 | trabzon
Processing Record 33 of Set 2 | tura
Processing Record 34 of Set 2 | lazaro cardenas
Processing Record 35 of Set 2 | barrow
Processing Record 36 of Set 2 | atuona
Processing Record 37 of Set 2 | yulara
Processing Record 38 of Set 2 | bathsheba
Processing Record 39 of Set 2 | muli
Processing Record 40 of Set 2 | carnarvon
Processing Record 41 of Set 2 | nantucket
Processing Record 42 of Set 2 | port hedland
Processing Record 43 of Set 2 | moose factory
Processing Record 44 of Set 2 | salalah
Processing Record 45 of Set 2 | hilo
Processing Record 46 of Set 2 | neftekamsk
Processing Record 47 of Set 2 | yetkul
Processing Record 48 of Set 2 | georgetown
Processing Record 49 of Set 2 | sao joao da barra
City not found. Skipping...
Processing Record 50 of Set 2 | ust-kuyga
Processing Record 1 of Set 3 | atambua
Processing Record 2 of Set 3 | fairbanks
Processing Record 3 of Set 3 | cherskiy
Processing Record 4 of Set 3 | oblivskaya
Processing Record 5 of Set 3 | chuy
Processing Record 6 of Set 3 | cape town
City not found. Skipping...
Processing Record 7 of Set 3 | tasiilaq
City not found. Skipping...
Processing Record 8 of Set 3 | araouane
Processing Record 9 of Set 3 | provideniya
City not found. Skipping...
City not found. Skipping...
Processing Record 10 of Set 3 | bocanda
Processing Record 11 of Set 3 | hofn
Processing Record 12 of Set 3 | kot samaba
Processing Record 13 of Set 3 | kaitangata
Processing Record 14 of Set 3 | puerto narino
City not found. Skipping...
City not found. Skipping...
Processing Record 15 of Set 3 | codrington
Processing Record 16 of Set 3 | lavrentiya
Processing Record 17 of Set 3 | waingapu
Processing Record 18 of Set 3 | east london
Processing Record 19 of Set 3 | grindavik
Processing Record 20 of Set 3 | severo-kurilsk
Processing Record 21 of Set 3 | pontes e lacerda
Processing Record 22 of Set 3 | luderitz
Processing Record 23 of Set 3 | kungurtug
Processing Record 24 of Set 3 | raudeberg
Processing Record 25 of Set 3 | thalassery
Processing Record 26 of Set 3 | fayetteville
Processing Record 27 of Set 3 | raseiniai
City not found. Skipping...
Processing Record 28 of Set 3 | namatanai
Processing Record 29 of Set 3 | sur
Processing Record 30 of Set 3 | augustow
City not found. Skipping...
Processing Record 31 of Set 3 | savalou
Processing Record 32 of Set 3 | zabid
Processing Record 33 of Set 3 | grand river south east
Processing Record 34 of Set 3 | coquimbo
Processing Record 35 of Set 3 | tasgaon
Processing Record 36 of Set 3 | brownsville
City not found. Skipping...
Processing Record 37 of Set 3 | peace river
City not found. Skipping...
Processing Record 38 of Set 3 | pervomayskoye
City not found. Skipping...
Processing Record 39 of Set 3 | beruwala
Processing Record 40 of Set 3 | hamilton
Processing Record 41 of Set 3 | nikolskoye
Processing Record 42 of Set 3 | touros
City not found. Skipping...
Processing Record 43 of Set 3 | okhotsk
Processing Record 44 of Set 3 | beohari
Processing Record 45 of Set 3 | havre-saint-pierre
Processing Record 46 of Set 3 | talnakh
Processing Record 47 of Set 3 | marathon
City not found. Skipping...
Processing Record 48 of Set 3 | ribeira grande
Processing Record 49 of Set 3 | egvekinot
Processing Record 50 of Set 3 | bredasdorp
Processing Record 1 of Set 4 | sao jose da coroa grande
Processing Record 2 of Set 4 | burns lake
Processing Record 3 of Set 4 | srednekolymsk
Processing Record 4 of Set 4 | puerto leguizamo
Processing Record 5 of Set 4 | lorengau
Processing Record 6 of Set 4 | ilulissat
Processing Record 7 of Set 4 | constitucion
Processing Record 8 of Set 4 | sistranda
Processing Record 9 of Set 4 | vardo
Processing Record 10 of Set 4 | nizwa
Processing Record 11 of Set 4 | libreville
Processing Record 12 of Set 4 | nanakuli
Processing Record 13 of Set 4 | kupang
Processing Record 14 of Set 4 | ginir
Processing Record 15 of Set 4 | ugoofaaru
Processing Record 16 of Set 4 | esna
Processing Record 17 of Set 4 | dezful
Processing Record 18 of Set 4 | deputatskiy
Processing Record 19 of Set 4 | sorong
Processing Record 20 of Set 4 | kampot
Processing Record 21 of Set 4 | alwar
Processing Record 22 of Set 4 | huarmey
Processing Record 23 of Set 4 | san cayetano
Processing Record 24 of Set 4 | monopoli
Processing Record 25 of Set 4 | meulaboh
Processing Record 26 of Set 4 | alyangula
Processing Record 27 of Set 4 | kruisfontein
Processing Record 28 of Set 4 | saldanha
Processing Record 29 of Set 4 | parrita
###Markdown
Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
###Code
# Create a dictonary with the lists generated
weatherpy_dict = {
"City": city_name,
"Cloudiness":cloudiness,
"Country":country,
"Date":date,
"Humidity": humidity,
"Lat":lat,
"Lng":lng,
"Max Temp": max_temp,
"Wind Speed":wind_speed
}
weather_df = pd.DataFrame(weatherpy_dict)
# Export to CSV
weather_df.to_csv(output_data_file)
weather_df.count()
weather_df.head()
###Output
_____no_output_____
###Markdown
Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.

Latitude vs. Temperature Plot
###Code
# Create a scatter plot for latitude vs temperature
plt.scatter(weather_dataframe["Lat"],
weather_dataframe["Max Temp"],
edgecolors="black",
facecolors="tab:blue")
# Add title, labels, grid
plt.title("City Latitude vs. Max Temperature (10/08/19)")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
plt.grid()
# Save figure
plt.savefig("output_data/lat_temp_plot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Latitude vs. Humidity Plot
###Code
# Create a scatter plot for latitude vs humidity
plt.scatter(weather_dataframe["Lat"],
weather_dataframe["Humidity"],
edgecolors="black",
facecolors="tab:blue")
# Add title, labels, grid
plt.title("City Latitude vs. Humidity (10/08/19)")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.grid()
# Save figure
plt.savefig("output_data/lat_humidity_plot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Latitude vs. Cloudiness Plot
###Code
# Create a scatter plot for latitude vs cloudiness
plt.scatter(weather_dataframe["Lat"],
weather_dataframe["Cloudiness"],
edgecolors="black",
facecolors="tab:blue")
# Add title, labels, grid
plt.title("City Latitude vs. Cloudiness (10/08/19)")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.grid()
# Save figure
plt.savefig("output_data/lat_cloud_plot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Latitude vs. Wind Speed Plot
###Code
# Create a scatter plot for latitude vs wind speed
plt.scatter(weather_dataframe["Lat"],
weather_dataframe["Wind Speed"],
edgecolors="black",
facecolors="tab:blue")
# Add title, labels, grid
plt.title("City Latitude vs. Wind Speed (10/08/19)")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.grid()
# Save figure
plt.savefig("output_data/lat_cloud_plot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
# OPTIONAL: Create a function to create Linear Regression plots
# Create Northern and Southern Hemisphere DataFrames
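# One possible way to fill in the two TODOs above (a sketch only -- it assumes the
# weather_df built earlier in this notebook, with "Lat", "Max Temp", etc. columns):
def plot_linear_regression(x_values, y_values, x_label, y_label, annotate_xy):
    # Fit a linear regression with scipy and overlay the fitted line on a scatter plot
    slope, intercept, rvalue, pvalue, stderr = linregress(x_values, y_values)
    regress_values = x_values * slope + intercept
    line_eq = f"y = {round(slope, 2)}x + {round(intercept, 2)}"
    plt.scatter(x_values, y_values)
    plt.plot(x_values, regress_values, "r-")
    plt.annotate(line_eq, annotate_xy, fontsize=15, color="red")
    plt.xlabel(x_label)
    plt.ylabel(y_label)
    print(f"The r-squared is: {rvalue**2}")
    plt.show()

# Northern Hemisphere: latitude >= 0, Southern Hemisphere: latitude < 0
northern_hemi_df = weather_df.loc[weather_df["Lat"] >= 0]
southern_hemi_df = weather_df.loc[weather_df["Lat"] < 0]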
###Output
_____no_output_____ |
Kaggle competitions/commonlit-readablity-price/submit.ipynb | ###Markdown
[CommonLit Readability Prize](https://www.kaggle.com/c/commonlitreadabilityprize/overview), hosted by Kaggle.

Notebook 2 - inference model: uses a RoBERTa model from SimpleTransformers to predict the ease of readability.

Notebook 3 - Submission: train the model without internet requirements for submission on Kaggle.
###Code
import pandas
train = pandas.read_csv('../input/commonlitreadabilityprize/train.csv')
test = pandas.read_csv('../input/commonlitreadabilityprize/test.csv')
train_df = train[['excerpt','target']]
submit = test[['id']]
train_df.columns = ['text','labels']
test_df = list(test[['excerpt']].values.ravel())
train_data = train_df[:2126]
eval_data = train_df[2126:]
test_data = test_df
import os
import shutil
if os.path.exists('/kaggle/temp/wheels'):
shutil.rmtree('/kaggle/temp/wheels')
if os.path.exists('/kaggle/working/wheels'):
shutil.rmtree('/kaggle/working/wheels')
!unzip -q ../input/commonlit-readability-data/wheels.zip -d /kaggle/working/wheels
!pip install \
--requirement ../input/commonlit-readability-data/requirements.txt \
--no-index \
--find-links /kaggle/working/wheels
# !conda install ../input/commonlit-readability-data/fsspec-2021.6.0-pyhd8ed1ab_0.tar.bz2 --offline --force-reinstall
import fsspec
fsspec.__version__
import pickle
# pickle.dump(model,open('roberta_model.pickle','wb'))
model = pickle.load(open('../input/commonlit-readability2-model/roberta_model.pickle','rb'))
predictions, raw_outputs = model.predict(test_data)
predictions
submit['target'] = predictions
submit.to_csv('submission.csv',index = False)
submit
###Output
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
|
assunto-2_condicionais/.ipynb_checkpoints/exercicios_assunto_2-checkpoint.ipynb | ###Markdown
Control flow and conditionals

Use the conditionals "if", "else", and "elif" and the flow-control statements "continue", "break", and "pass" to solve the exercises. Not all of them may be necessary.
###Code
# Remember how we can iterate over the values of a list using "for"?
lst = [1, 2, 3, 4]
for i in lst:
if isinstance(i, int):
print(i)
###Output
1
2
3
4
###Markdown
Exercise 1

Adapt the code above to assign the value True to a variable x if any of the values is an int, and False if none of the values is of type int. Use the isinstance function to check which type each value in the list belongs to, as in the example above.
###Code
# Exercise 1
###Output
_____no_output_____
###Markdown
Exercise 2

Adapt the code above to assign the value True to a variable x if all of the values are int.
###Code
# Exercise 2
###Output
_____no_output_____
###Markdown
Exercise 3

Adapt the code above to assign the value True to a variable x if none of the values is an int.
###Code
# Exercise 3
###Output
_____no_output_____ |
Chapter7/concept_drift_examples/cd_ks_cifar10.ipynb | ###Markdown
Kolmogorov-Smirnov data drift detector on CIFAR-10

Method

The drift detector applies feature-wise two-sample [Kolmogorov-Smirnov](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) (K-S) tests. For multivariate data, the obtained p-values for each feature are aggregated either via the [Bonferroni](https://mathworld.wolfram.com/BonferroniCorrection.html) or the [False Discovery Rate](http://www.math.tau.ac.il/~ybenja/MyPapers/benjamini_hochberg1995.pdf) (FDR) correction. The Bonferroni correction is more conservative and controls for the probability of at least one false positive. The FDR correction on the other hand allows for an expected fraction of false positives to occur.

For high-dimensional data, we typically want to reduce the dimensionality before computing the feature-wise univariate K-S tests and aggregating those via the chosen correction method. Following suggestions in [Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift](https://arxiv.org/abs/1810.11953), we incorporate Untrained AutoEncoders (UAE) and black-box shift detection using the classifier's softmax outputs ([BBSDs](https://arxiv.org/abs/1802.03916)) as out-of-the-box preprocessing methods and note that [PCA](https://en.wikipedia.org/wiki/Principal_component_analysis) can also be easily implemented using `scikit-learn`. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift. The [adversarial detector](https://arxiv.org/abs/2002.09364) which is part of the library can also be transformed into a drift detector picking up drift that reduces the performance of the classification model. We can therefore combine different preprocessing techniques to figure out if there is drift which hurts the model performance, and whether this drift can be classified as input drift or label shift.

Backend

The method works with both the **PyTorch** and **TensorFlow** frameworks for the optional preprocessing step. Alibi Detect does however not install PyTorch for you. Check the [PyTorch docs](https://pytorch.org/) for how to do this.

Dataset

[CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) consists of 60,000 32 by 32 RGB images equally distributed over 10 classes. We evaluate the drift detector on the CIFAR-10-C dataset ([Hendrycks & Dietterich, 2019](https://arxiv.org/abs/1903.12261)). The instances in CIFAR-10-C have been corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in the classification model performance. We also check for drift against the original test set with class imbalances.
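For intuition, the feature-wise K-S + Bonferroni aggregation can be sketched in a few lines with `numpy` and `scipy` alone (an illustrative toy example, not the `alibi_detect` implementation used in the rest of this notebook):
###Code
import numpy as np
from scipy.stats import ks_2samp

def ks_drift_bonferroni(x_ref, x, p_val=0.05):
    # one univariate two-sample K-S test per feature
    p_vals = np.array([ks_2samp(x_ref[:, f], x[:, f]).pvalue for f in range(x_ref.shape[1])])
    # Bonferroni correction: flag drift if any p-value falls below p_val / n_features
    return bool((p_vals < p_val / x_ref.shape[1]).any()), p_vals

# toy usage: reference data vs. a shifted batch
rng = np.random.default_rng(0)
is_drift, p_vals = ks_drift_bonferroni(rng.normal(size=(500, 32)),
                                       rng.normal(loc=1., size=(500, 32)))
###Output
_____no_output_____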
###Code
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
from alibi_detect.cd import KSDrift
from alibi_detect.models.tensorflow.resnet import scale_by_instance
from alibi_detect.utils.fetching import fetch_tf_model, fetch_detector
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.datasets import fetch_cifar10c, corruption_types_cifar10c
###Output
_____no_output_____
###Markdown
Load data

Original CIFAR-10 data:
###Code
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
y_train = y_train.astype('int64').reshape(-1,)
y_test = y_test.astype('int64').reshape(-1,)
###Output
_____no_output_____
###Markdown
For CIFAR-10-C, we can select from the following corruption types at 5 severity levels:
###Code
corruptions = corruption_types_cifar10c()
print(corruptions)
###Output
['brightness', 'contrast', 'defocus_blur', 'elastic_transform', 'fog', 'frost', 'gaussian_blur', 'gaussian_noise', 'glass_blur', 'impulse_noise', 'jpeg_compression', 'motion_blur', 'pixelate', 'saturate', 'shot_noise', 'snow', 'spatter', 'speckle_noise', 'zoom_blur']
###Markdown
Let's pick a subset of the corruptions at corruption level 5. Each corruption type consists of perturbations on all of the original test set images.
###Code
corruption = ['gaussian_noise', 'motion_blur', 'brightness', 'pixelate']
X_corr, y_corr = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
X_corr = X_corr.astype('float32') / 255
###Output
_____no_output_____
###Markdown
We split the original test set into a reference dataset and a dataset which should not be rejected under the *H0* of the K-S test. We also split the corrupted data by corruption type:
###Code
np.random.seed(0)
n_test = X_test.shape[0]
idx = np.random.choice(n_test, size=n_test // 2, replace=False)
idx_h0 = np.delete(np.arange(n_test), idx, axis=0)
X_ref,y_ref = X_test[idx], y_test[idx]
X_h0, y_h0 = X_test[idx_h0], y_test[idx_h0]
print(X_ref.shape, X_h0.shape)
# check that the classes are more or less balanced
classes, counts_ref = np.unique(y_ref, return_counts=True)
counts_h0 = np.unique(y_h0, return_counts=True)[1]
print('Class Ref H0')
for cl, cref, ch0 in zip(classes, counts_ref, counts_h0):
assert cref + ch0 == n_test // 10
print('{} {} {}'.format(cl, cref, ch0))
n_corr = len(corruption)
X_c = [X_corr[i * n_test:(i + 1) * n_test] for i in range(n_corr)]
###Output
_____no_output_____
###Markdown
We can visualise the same instance for each corruption type:
###Code
i = 1
n_test = X_test.shape[0]
plt.title('Original')
plt.axis('off')
plt.imshow(X_test[i])
plt.show()
for _ in range(len(corruption)):
plt.title(corruption[_])
plt.axis('off')
plt.imshow(X_corr[n_test * _+ i])
plt.show()
###Output
_____no_output_____
###Markdown
We can also verify that the performance of a classification model on CIFAR-10 drops significantly on this perturbed dataset:
###Code
dataset = 'cifar10'
model = 'resnet32'
clf = fetch_tf_model(dataset, model)
acc = clf.evaluate(scale_by_instance(X_test), y_test, batch_size=128, verbose=0)[1]
print('Test set accuracy:')
print('Original {:.4f}'.format(acc))
clf_accuracy = {'original': acc}
for _ in range(len(corruption)):
acc = clf.evaluate(scale_by_instance(X_c[_]), y_test, batch_size=128, verbose=0)[1]
clf_accuracy[corruption[_]] = acc
print('{} {:.4f}'.format(corruption[_], acc))
###Output
Test set accuracy:
Original 0.9278
gaussian_noise 0.2208
motion_blur 0.6339
brightness 0.8913
pixelate 0.3666
###Markdown
Given the drop in performance, it is important that we detect the harmful data drift!

Detect drift

First we try a drift detector using the **TensorFlow** framework for the preprocessing step. We are trying to detect data drift on high-dimensional (*32x32x3*) data using feature-wise univariate tests. It therefore makes sense to apply dimensionality reduction first. Some dimensionality reduction methods also used in [Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift](https://arxiv.org/pdf/1810.11953.pdf) are readily available: a randomly initialized encoder (**UAE** or Untrained AutoEncoder in the paper), **BBSDs** (black-box shift detection using the classifier's softmax outputs) and **PCA**.

Random encoder

First we try the randomly initialized encoder:
###Code
from functools import partial
from tensorflow.keras.layers import Conv2D, Dense, Flatten, InputLayer, Reshape
from alibi_detect.cd.tensorflow import preprocess_drift
tf.random.set_seed(0)
# define encoder
encoding_dim = 32
encoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(32, 32, 3)),
Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu),
Flatten(),
Dense(encoding_dim,)
]
)
# define preprocessing function
preprocess_fn = partial(preprocess_drift, model=encoder_net, batch_size=512)
# initialise drift detector
p_val = .05
cd = KSDrift(X_ref, p_val=p_val, preprocess_fn=preprocess_fn)
# we can also save/load an initialised detector
filepath = 'my_path' # change to directory where detector is saved
save_detector(cd, filepath)
cd = load_detector(filepath)
###Output
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
WARNING:tensorflow:No training configuration found in the save file, so the model was *not* compiled. Compile it manually.
###Markdown
The p-value used by the detector for the multivariate data with *encoding_dim* features is equal to *p_val / encoding_dim* because of the [Bonferroni correction](https://mathworld.wolfram.com/BonferroniCorrection.html).
###Code
assert cd.p_val / cd.n_features == p_val / encoding_dim
###Output
_____no_output_____
###Markdown
Let's check whether the detector thinks drift occurred on the different test sets and time the prediction calls:
###Code
from timeit import default_timer as timer
labels = ['No!', 'Yes!']
def make_predictions(cd, x_h0, x_corr, corruption):
t = timer()
preds = cd.predict(x_h0)
dt = timer() - t
print('No corruption')
print('Drift? {}'.format(labels[preds['data']['is_drift']]))
print('Feature-wise p-values:')
print(preds['data']['p_val'])
print(f'Time (s) {dt:.3f}')
if isinstance(x_corr, list):
for x, c in zip(x_corr, corruption):
t = timer()
preds = cd.predict(x)
dt = timer() - t
print('')
print(f'Corruption type: {c}')
print('Drift? {}'.format(labels[preds['data']['is_drift']]))
print('Feature-wise p-values:')
print(preds['data']['p_val'])
print(f'Time (s) {dt:.3f}')
make_predictions(cd, X_h0, X_c, corruption)
###Output
No corruption
Drift? No!
Feature-wise p-values:
[0.9386024 0.13979132 0.6384489 0.05413922 0.37460664 0.25598603
0.87304014 0.47553554 0.11587767 0.67217577 0.47553554 0.7388285
0.08215971 0.14635575 0.3114053 0.3114053 0.60482025 0.36134896
0.8023182 0.21715216 0.24582714 0.46030036 0.11587767 0.44532147
0.25598603 0.58811766 0.5550683 0.95480835 0.8598946 0.23597081
0.8975547 0.68899393]
Time (s) 1.960
Corruption type: gaussian_noise
Drift? Yes!
Feature-wise p-values:
[4.85834153e-03 7.20506581e-03 5.44517934e-02 9.87569049e-09
3.35018486e-01 8.05620551e-02 6.66609779e-03 2.68237293e-01
1.52247362e-02 1.01558706e-02 1.78680534e-03 1.04267694e-01
4.93385670e-08 1.35106135e-10 1.04696119e-04 1.35730659e-06
2.87180692e-01 3.79266362e-06 3.45018925e-04 1.96636513e-01
1.86571106e-03 5.92635339e-03 4.70917694e-10 5.92635339e-03
5.07743537e-01 5.31427140e-05 3.80059540e-01 1.13354892e-01
2.75738519e-02 7.75579622e-07 3.23252240e-03 2.02312917e-02]
Time (s) 3.461
Corruption type: motion_blur
Drift? Yes!
Feature-wise p-values:
[3.39037769e-07 1.16525307e-01 5.04726835e-04 3.81079665e-03
6.31192625e-01 5.28989534e-04 3.61990853e-04 1.57829020e-02
1.94784126e-03 1.26909809e-02 4.46249526e-09 6.99155149e-04
3.79746925e-04 5.88651128e-21 1.35596551e-07 2.00218983e-05
7.15865940e-02 7.28750820e-05 1.04267694e-01 1.10198918e-04
2.22608112e-04 1.52403876e-01 6.41064299e-03 3.15323919e-02
3.04985344e-02 8.97102946e-05 6.54255822e-02 2.03331537e-03
1.15137536e-03 8.04463718e-10 9.62164486e-04 3.45018925e-04]
Time (s) 3.666
Corruption type: brightness
Drift? Yes!
Feature-wise p-values:
[0.0000000e+00 0.0000000e+00 4.0479114e-29 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 2.3823024e-21 2.9582986e-38
0.0000000e+00 0.0000000e+00 0.0000000e+00 7.1651735e-34 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
3.8567345e-05 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00]
Time (s) 3.545
Corruption type: pixelate
Drift? Yes!
Feature-wise p-values:
[3.35018486e-01 2.06576437e-01 1.96636513e-01 8.40378553e-03
5.92454255e-01 9.30991620e-02 1.64748102e-01 5.16919196e-01
2.40608733e-02 4.54285979e-01 1.91894709e-04 1.33487985e-01
1.63820793e-03 1.78680534e-03 1.13354892e-01 9.04688612e-02
8.27570856e-01 6.15745559e-02 6.54255822e-02 9.06871445e-03
2.38713458e-01 6.89552963e-01 1.07227206e-01 8.29487666e-02
3.42268527e-01 1.37110472e-01 3.64637136e-01 3.00327957e-01
3.72297794e-01 9.06871445e-03 4.98639137e-01 9.78103094e-03]
Time (s) 4.167
###Markdown
As expected, drift was only detected on the corrupted datasets. The feature-wise p-values for each univariate K-S test per (encoded) feature before multivariate correction show that most of them are well above the $0.05$ threshold for *H0* and below for the corrupted datasets.

BBSDs

For **BBSDs**, we use the classifier's softmax outputs for black-box shift detection. This method is based on [Detecting and Correcting for Label Shift with Black Box Predictors](https://arxiv.org/abs/1802.03916). The ResNet classifier is trained on data standardised by instance so we need to rescale the data.
###Code
X_train = scale_by_instance(X_train)
X_test = scale_by_instance(X_test)
X_ref = scale_by_instance(X_ref)
X_h0 = scale_by_instance(X_h0)
X_c = [scale_by_instance(X_c[i]) for i in range(n_corr)]
###Output
_____no_output_____
###Markdown
Now we initialize the detector. Here we use the output of the softmax layer to detect the drift, but other hidden layers can be extracted as well by setting *'layer'* to the index of the desired hidden layer in the model:
###Code
from alibi_detect.cd.tensorflow import HiddenOutput
# define preprocessing function, we use the
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(clf, layer=-1), batch_size=128)
cd = KSDrift(X_ref, p_val=p_val, preprocess_fn=preprocess_fn)
###Output
_____no_output_____
###Markdown
Again we can see that the p-value used by the detector for the multivariate data with 10 features (number of CIFAR-10 classes) is equal to *p_val / 10* because of the [Bonferroni correction](https://mathworld.wolfram.com/BonferroniCorrection.html).
###Code
assert cd.p_val / cd.n_features == p_val / 10
###Output
_____no_output_____
###Markdown
There is no drift on the original held out test set:
###Code
make_predictions(cd, X_h0, X_c, corruption)
###Output
No corruption
Drift? No!
Feature-wise p-values:
[0.11587767 0.5226477 0.19109942 0.19949944 0.49101472 0.722359
0.12151605 0.41617486 0.8320209 0.75510186]
Time (s) 10.897
Corruption type: gaussian_noise
Drift? Yes!
Feature-wise p-values:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Time (s) 27.559
Corruption type: motion_blur
Drift? Yes!
Feature-wise p-values:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Time (s) 24.664
Corruption type: brightness
Drift? Yes!
Feature-wise p-values:
[0.0000000e+00 3.8790170e-15 2.2549014e-33 4.6733894e-07 2.1857751e-15
1.2091652e-05 2.3977423e-30 1.0099583e-09 4.3286997e-12 3.8117909e-17]
Time (s) 25.344
Corruption type: pixelate
Drift? Yes!
Feature-wise p-values:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Time (s) 24.257
###Markdown
Label drift

We can also check what happens when we introduce class imbalances between the reference data *X_ref* and the tested data *X_imb*. The reference data will use $75$% of the instances of the first 5 classes and only $25$% of the last 5. The data used for drift testing then uses respectively $25$% and $75$% of the test instances for the first and last 5 classes.
###Code
np.random.seed(0)
# get index for each class in the test set
num_classes = len(np.unique(y_test))
idx_by_class = [np.where(y_test == c)[0] for c in range(num_classes)]
# sample imbalanced data for different classes for X_ref and X_imb
perc_ref = .75
perc_ref_by_class = [perc_ref if c < 5 else 1 - perc_ref for c in range(num_classes)]
n_by_class = n_test // num_classes
X_ref = []
X_imb, y_imb = [], []
for _ in range(num_classes):
idx_class_ref = np.random.choice(n_by_class, size=int(perc_ref_by_class[_] * n_by_class), replace=False)
idx_ref = idx_by_class[_][idx_class_ref]
idx_class_imb = np.delete(np.arange(n_by_class), idx_class_ref, axis=0)
idx_imb = idx_by_class[_][idx_class_imb]
assert not np.array_equal(idx_ref, idx_imb)
X_ref.append(X_test[idx_ref])
X_imb.append(X_test[idx_imb])
y_imb.append(y_test[idx_imb])
X_ref = np.concatenate(X_ref)
X_imb = np.concatenate(X_imb)
y_imb = np.concatenate(y_imb)
print(X_ref.shape, X_imb.shape, y_imb.shape)
###Output
(5000, 32, 32, 3) (5000, 32, 32, 3) (5000,)
###Markdown
Update reference dataset for the detector and make predictions. Note that we store the preprocessed reference data since the `preprocess_x_ref` kwarg is by default True:
###Code
cd.x_ref = cd.preprocess_fn(X_ref)
preds_imb = cd.predict(X_imb)
print('Drift? {}'.format(labels[preds_imb['data']['is_drift']]))
print(preds_imb['data']['p_val'])
###Output
Drift? Yes!
[5.2598997e-20 1.1312397e-20 5.8646589e-29 9.2977640e-18 6.4071548e-23
1.4155961e-15 5.0236095e-19 1.6651963e-20 4.2726706e-21 3.5123729e-21]
###Markdown
Update reference data

So far we have kept the reference data the same throughout the experiments. It is possible however that we want to test a new batch against the last *N* instances or against a batch of instances of fixed size where we give each instance we have seen up until now the same chance of being in the reference batch ([reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling)). The `update_x_ref` argument allows you to change the reference data update rule. It is a Dict which takes as key the update rule (*'last'* for last *N* instances or *'reservoir_sampling'*) and as value the batch size *N* of the reference data. You can also save the detector after the prediction calls to save the updated reference data.
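For reference, plain reservoir sampling over a stream of instances can be sketched as follows (illustrative only -- the detector handles this internally via the `update_x_ref` setting):
###Code
def reservoir_sample(stream, N, seed=0):
    # Algorithm R: after seeing i items, each of them is kept with probability N / i
    rng = np.random.default_rng(seed)
    reservoir = []
    for i, x in enumerate(stream):
        if i < N:
            reservoir.append(x)
        else:
            j = rng.integers(0, i + 1)
            if j < N:
                reservoir[j] = x
    return reservoir
###Output
_____no_output_____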
###Code
N = 7500
cd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn, update_x_ref={'reservoir_sampling': N})
###Output
_____no_output_____
###Markdown
The reference data is now updated with each `predict` call. Say we start with our imbalanced reference set and make a prediction on the remaining test set data *X_imb*, then the drift detector will figure out data drift has occurred.
###Code
preds_imb = cd.predict(X_imb)
print('Drift? {}'.format(labels[preds_imb['data']['is_drift']]))
###Output
Drift? Yes!
###Markdown
We can now see that the reference data consists of *N* instances, obtained through reservoir sampling.
###Code
assert cd.x_ref.shape[0] == N
###Output
_____no_output_____
###Markdown
We then draw a random sample from the training set and compare it with the updated reference data. This still highlights that there is data drift but will update the reference data again:
###Code
np.random.seed(0)
perc_train = .5
n_train = X_train.shape[0]
idx_train = np.random.choice(n_train, size=int(perc_train * n_train), replace=False)
preds_train = cd.predict(X_train[idx_train])
print('Drift? {}'.format(labels[preds_train['data']['is_drift']]))
###Output
Drift? Yes!
###Markdown
When we draw a new sample from the training set, it highlights that it is not drifting anymore against the reservoir in *X_ref*.
###Code
np.random.seed(1)
perc_train = .1
idx_train = np.random.choice(n_train, size=int(perc_train * n_train), replace=False)
preds_train = cd.predict(X_train[idx_train])
print('Drift? {}'.format(labels[preds_train['data']['is_drift']]))
###Output
Drift? No!
###Markdown
Multivariate correction mechanism

Instead of the Bonferroni correction for multivariate data, we can also use the less conservative [False Discovery Rate](http://www.math.tau.ac.il/~ybenja/MyPapers/benjamini_hochberg1995.pdf) (FDR) correction. See [here](https://riffyn.com/riffyn-blog/2017/10/29/false-discovery-rate) or [here](https://matthew-brett.github.io/teaching/fdr.html) for nice explanations. While the Bonferroni correction controls the probability of at least one false positive, the FDR correction controls for an expected amount of false positives. The `p_val` argument at initialisation time can be interpreted as the acceptable q-value when the FDR correction is applied.
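The Benjamini-Hochberg decision rule behind the FDR correction is simple to sketch (an illustration of the idea, not the `alibi_detect` internals):
###Code
def fdr_drift(p_vals, q=.05):
    # Benjamini-Hochberg: compare the sorted p-values with the increasing thresholds i / n * q;
    # drift is flagged if at least one ordered p-value falls below its threshold
    n = len(p_vals)
    thresholds = q * np.arange(1, n + 1) / n
    return bool((np.sort(p_vals) <= thresholds).any())
###Output
_____no_output_____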
###Code
cd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn, correction='fdr')
preds_imb = cd.predict(X_imb)
print('Drift? {}'.format(labels[preds_imb['data']['is_drift']]))
###Output
Drift? Yes!
###Markdown
Adversarial autoencoder as a malicious drift detector

We can leverage the adversarial scores obtained from an [adversarial autoencoder](https://arxiv.org/abs/2002.09364) trained on normal data and transform it into a data drift detector. The score function of the adversarial autoencoder becomes the preprocessing function for the drift detector. The K-S test is then a simple univariate test on the adversarial scores. Importantly, an adversarial drift detector flags **malicious data drift**. We can fetch the pretrained adversarial detector from a [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect/ad/cifar10/resnet32) or train one from scratch:
###Code
load_pretrained = True
from tensorflow.keras.regularizers import l1
from tensorflow.keras.layers import Conv2DTranspose
from alibi_detect.ad import AdversarialAE
# change filepath to (absolute) directory where model is downloaded
filepath = os.path.join(os.getcwd(), 'my_path')
detector_type = 'adversarial'
detector_name = 'base'
filepath = os.path.join(filepath, detector_name)
if load_pretrained:
ad = fetch_detector(filepath, detector_type, dataset, detector_name, model=model)
else: # train detector from scratch
# define encoder and decoder networks
encoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(32, 32, 3)),
Conv2D(32, 4, strides=2, padding='same',
activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
Conv2D(64, 4, strides=2, padding='same',
activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
Conv2D(256, 4, strides=2, padding='same',
activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
Flatten(),
Dense(40)
]
)
decoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(40,)),
Dense(4 * 4 * 128, activation=tf.nn.relu),
Reshape(target_shape=(4, 4, 128)),
Conv2DTranspose(256, 4, strides=2, padding='same',
activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
Conv2DTranspose(64, 4, strides=2, padding='same',
activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
Conv2DTranspose(3, 4, strides=2, padding='same',
activation=None, kernel_regularizer=l1(1e-5))
]
)
# initialise and train detector
ad = AdversarialAE(encoder_net=encoder_net, decoder_net=decoder_net, model=clf)
ad.fit(X_train, epochs=50, batch_size=128, verbose=True)
# save the trained adversarial detector
save_detector(ad, filepath)
###Output
Directory /Users/shachatt1/Desktop/sharmi/books/My_book_responsible_ai/python_code/Chapter 7/alibi-detect-master/examples/my_path/base does not exist and is now created.
###Markdown
Initialise the drift detector:
###Code
np.random.seed(0)
idx = np.random.choice(n_test, size=n_test // 2, replace=False)
X_ref = scale_by_instance(X_test[idx])
# adversarial score fn = preprocess step
preprocess_fn = partial(ad.score, batch_size=128)
cd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn)
###Output
_____no_output_____
###Markdown
Make drift predictions on the original test set and corrupted data:
###Code
clf_accuracy['h0'] = clf.evaluate(X_h0, y_h0, batch_size=128, verbose=0)[1]
preds_h0 = cd.predict(X_h0)
print('H0: Accuracy {:.4f} -- Drift? {}'.format(
clf_accuracy['h0'], labels[preds_h0['data']['is_drift']]))
clf_accuracy['imb'] = clf.evaluate(X_imb, y_imb, batch_size=128, verbose=0)[1]
preds_imb = cd.predict(X_imb)
print('imbalance: Accuracy {:.4f} -- Drift? {}'.format(
clf_accuracy['imb'], labels[preds_imb['data']['is_drift']]))
for x, c in zip(X_c, corruption):
preds = cd.predict(x)
print('{}: Accuracy {:.4f} -- Drift? {}'.format(
c, clf_accuracy[c],labels[preds['data']['is_drift']]))
###Output
H0: Accuracy 0.9286 -- Drift? No!
imbalance: Accuracy 0.9282 -- Drift? No!
gaussian_noise: Accuracy 0.2208 -- Drift? Yes!
motion_blur: Accuracy 0.6339 -- Drift? Yes!
brightness: Accuracy 0.8913 -- Drift? Yes!
pixelate: Accuracy 0.3666 -- Drift? Yes!
###Markdown
While *X_imb* clearly exhibits input data drift due to the introduced class imbalances, it is not flagged by the adversarial drift detector since the performance of the classifier is not affected and the drift is not malicious. We can visualise this by plotting the adversarial scores together with the harmfulness of the data corruption as reflected by the drop in classifier accuracy:
###Code
adv_scores = {}
score = ad.score(X_ref, batch_size=128)
adv_scores['original'] = {'mean': score.mean(), 'std': score.std()}
score = ad.score(X_h0, batch_size=128)
adv_scores['h0'] = {'mean': score.mean(), 'std': score.std()}
score = ad.score(X_imb, batch_size=128)
adv_scores['imb'] = {'mean': score.mean(), 'std': score.std()}
for x, c in zip(X_c, corruption):
score_x = ad.score(x, batch_size=128)
adv_scores[c] = {'mean': score_x.mean(), 'std': score_x.std()}
mu = [v['mean'] for _, v in adv_scores.items()]
stdev = [v['std'] for _, v in adv_scores.items()]
xlabels = list(adv_scores.keys())
acc = [clf_accuracy[label] for label in xlabels]
xticks = np.arange(len(mu))
width = .35
fig, ax = plt.subplots()
ax2 = ax.twinx()
p1 = ax.bar(xticks, mu, width, yerr=stdev, capsize=2)
color = 'tab:red'
p2 = ax2.bar(xticks + width, acc, width, color=color)
ax.set_title('Adversarial Scores and Accuracy by Corruption Type')
ax.set_xticks(xticks + width / 2)
ax.set_xticklabels(xlabels, rotation=45)
ax.legend((p1[0], p2[0]), ('Score', 'Accuracy'), loc='upper right', ncol=2)
ax.set_ylabel('Adversarial Score')
color = 'tab:red'
ax2.set_ylabel('Accuracy')
ax2.set_ylim((-.26,1.2))
ax.set_ylim((-2,9))
plt.show()
###Output
_____no_output_____
###Markdown
We can therefore **use the scores of the detector itself to quantify the harmfulness of the drift**! We can generalise this to all the corruptions at each severity level in CIFAR-10-C:
###Code
def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
return (y_true == y_pred).astype(int).sum() / y_true.shape[0]
from alibi_detect.utils.tensorflow.prediction import predict_batch
severities = [1, 2, 3, 4, 5]
score_drift = {
1: {'all': [], 'harm': [], 'noharm': [], 'acc': 0},
2: {'all': [], 'harm': [], 'noharm': [], 'acc': 0},
3: {'all': [], 'harm': [], 'noharm': [], 'acc': 0},
4: {'all': [], 'harm': [], 'noharm': [], 'acc': 0},
5: {'all': [], 'harm': [], 'noharm': [], 'acc': 0},
}
y_pred = predict_batch(X_test, clf, batch_size=256).argmax(axis=1)
score_x = ad.score(X_test, batch_size=256)
for s in severities:
print('\nSeverity: {} of {}'.format(s, len(severities)))
print('Loading corrupted dataset...')
X_corr, y_corr = fetch_cifar10c(corruption=corruptions, severity=s, return_X_y=True)
X_corr = X_corr.astype('float32')
print('Preprocess data...')
X_corr = scale_by_instance(X_corr)
print('Make predictions on corrupted dataset...')
y_pred_corr = predict_batch(X_corr, clf, batch_size=256).argmax(axis=1)
print('Compute adversarial scores on corrupted dataset...')
score_corr = ad.score(X_corr, batch_size=256)
print('Get labels for malicious corruptions...')
labels_corr = np.zeros(score_corr.shape[0])
repeat = y_corr.shape[0] // y_test.shape[0]
y_pred_repeat = np.tile(y_pred, (repeat,))
# malicious/harmful corruption: original prediction correct but
# prediction on corrupted data incorrect
idx_orig_right = np.where(y_pred_repeat == y_corr)[0]
idx_corr_wrong = np.where(y_pred_corr != y_corr)[0]
idx_harmful = np.intersect1d(idx_orig_right, idx_corr_wrong)
labels_corr[idx_harmful] = 1
labels = np.concatenate([np.zeros(X_test.shape[0]), labels_corr]).astype(int)
# harmless corruption: original prediction correct and prediction
# on corrupted data correct
idx_corr_right = np.where(y_pred_corr == y_corr)[0]
idx_harmless = np.intersect1d(idx_orig_right, idx_corr_right)
score_drift[s]['all'] = score_corr
score_drift[s]['harm'] = score_corr[idx_harmful]
score_drift[s]['noharm'] = score_corr[idx_harmless]
score_drift[s]['acc'] = accuracy(y_corr, y_pred_corr)
###Output
Severity: 1 of 5
Loading corrupted dataset...
Preprocess data...
Make predictions on corrupted dataset...
Compute adversarial scores on corrupted dataset...
Get labels for malicious corruptions...
Severity: 2 of 5
Loading corrupted dataset...
Preprocess data...
Make predictions on corrupted dataset...
Compute adversarial scores on corrupted dataset...
Get labels for malicious corruptions...
Severity: 3 of 5
Loading corrupted dataset...
Preprocess data...
Make predictions on corrupted dataset...
Compute adversarial scores on corrupted dataset...
Get labels for malicious corruptions...
Severity: 4 of 5
Loading corrupted dataset...
Preprocess data...
Make predictions on corrupted dataset...
Compute adversarial scores on corrupted dataset...
Get labels for malicious corruptions...
Severity: 5 of 5
Loading corrupted dataset...
Preprocess data...
Make predictions on corrupted dataset...
Compute adversarial scores on corrupted dataset...
Get labels for malicious corruptions...
###Markdown
We now compute mean scores and standard deviations per severity level and plot the results. The plot shows the mean adversarial scores (lhs) and ResNet-32 accuracies (rhs) for increasing data corruption severity levels. Level 0 corresponds to the original test set. Harmful scores are scores from instances which have been flipped from the correct to an incorrect prediction because of the corruption. Not harmful means that the prediction was unchanged after the corruption.
###Code
mu_noharm, std_noharm = [], []
mu_harm, std_harm = [], []
acc = [clf_accuracy['original']]
for k, v in score_drift.items():
mu_noharm.append(v['noharm'].mean())
std_noharm.append(v['noharm'].std())
mu_harm.append(v['harm'].mean())
std_harm.append(v['harm'].std())
acc.append(v['acc'])
plot_labels = ['0', '1', '2', '3', '4', '5']
N = 6
ind = np.arange(N)
width = .35
fig_bar_cd, ax = plt.subplots()
ax2 = ax.twinx()
p0 = ax.bar(ind[0], score_x.mean(), yerr=score_x.std(), capsize=2)
p1 = ax.bar(ind[1:], mu_noharm, width, yerr=std_noharm, capsize=2)
p2 = ax.bar(ind[1:] + width, mu_harm, width, yerr=std_harm, capsize=2)
ax.set_title('Adversarial Scores and Accuracy by Corruption Severity')
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(plot_labels)
ax.set_ylim((-1,6))
ax.legend((p1[0], p2[0]), ('Not Harmful', 'Harmful'), loc='upper right', ncol=2)
ax.set_ylabel('Score')
ax.set_xlabel('Corruption Severity')
color = 'tab:red'
ax2.set_ylabel('Accuracy', color=color)
ax2.plot(acc, color=color)
ax2.tick_params(axis='y', labelcolor=color)
plt.show()
###Output
_____no_output_____ |
examples/projecteuler/problem_0011.ipynb | ###Markdown
ProjectEuler.net [problem 11](https://projecteuler.net/problem=11): what is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20×20 grid below?
###Code
import gridthings
text = """
08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95
78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48
"""
grid = gridthings.IntGrid(text, sep=" ")
grid
# gridthings can help solve this problem by extracting rows
# of cells using the grid.line() method.
#
# To cover all combinations
# of numbers, we'll start in the top left (0, 0) and "fan" our
# way across the entire grid, grabbing lines that extend right,
# lines that extend down, and lines that extend down diagonally
# going both right and left
# Here's a horizontal line of four cells starting at (0, 0)
#
# the arguments are: y, x start point; y_step, x_step slope; and distance
collection = grid.line(y=0, x=0, x_step=1, distance=4)
collection
# Thanks to how the Cell and Collection classes are built,
# functions like math.prod can be used out of the box
# (as opposed to having to extract the Cell.value's yourself)
import math
math.prod(collection)
# A diagonal line sloping down and right
grid.line(y=0, x=0, y_step=1, x_step=1, distance=4)
# A vertical line
grid.line(y=0, x=0, y_step=1, x_step=0, distance=4)
# A diagonal line sloping down and left
#
# Notice in this output there are OutOfBoundsCells, meaning
# our line extends outside the grid. In some other use-cases
# there might be something to do with those, but for this problem
# it means we shouldn't evaluate the product
grid.line(y=0, x=0, y_step=1, x_step=-1, distance=4)
# We can check if a Collection contains out of bounds cells
collection = grid.line(y=0, x=0, y_step=1, x_step=-1, distance=4)
collection.extends_out_of_bounds()
# Here's a brute force approach to the solution.
# Iterate through every cell, and calculate the product
# for horizontal, vertical, and diagonal lines from that point
#
# Save off the Collection with the highest product
top_product = None
for cell in grid.flatten():
line_right = grid.line(y=cell.y, x=cell.x, x_step=1, distance=4)
line_down = grid.line(y=cell.y, x=cell.x, y_step=1, x_step=0, distance=4)
line_diag_right = grid.line(y=cell.y, x=cell.x, y_step=1, x_step=1, distance=4)
line_diag_left = grid.line(y=cell.y, x=cell.x, y_step=1, x_step=-1, distance=4)
for collection in [line_right, line_down, line_diag_right, line_diag_left]:
if collection.extends_out_of_bounds():
# skip any line that extends outside the grid
continue
if not top_product:
top_product = collection
else:
if math.prod(collection) > math.prod(top_product):
top_product = collection
top_product
math.prod(top_product)
# We could also try to optimize this a little bit by guessing that the
# top product must have a start cell that's somewhat high, say >80?
# Additionally, we could store the product separate from the collection
top_product = None
top_collection = None
for cell in grid.flatten():
if cell.value < 80:
continue
line_right = grid.line(y=cell.y, x=cell.x, x_step=1, distance=4)
line_down = grid.line(y=cell.y, x=cell.x, y_step=1, x_step=0, distance=4)
line_diag_right = grid.line(y=cell.y, x=cell.x, y_step=1, x_step=1, distance=4)
line_diag_left = grid.line(y=cell.y, x=cell.x, y_step=1, x_step=-1, distance=4)
for collection in [line_right, line_down, line_diag_right, line_diag_left]:
if collection.extends_out_of_bounds():
# skip any line that extends outside the grid
continue
product = math.prod(collection)
if not top_product or product > top_product:
top_product = product
top_collection = collection
collection, top_product
###Output
_____no_output_____ |
DeepLearningAI/Convolutional Neural Networks/Autonomous+driving+application+-+Car+detection+-+v3.ipynb | ###Markdown
Autonomous driving - Car detection

Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242).

**You will learn to**:
- Use object detection on a car detection dataset
- Deal with bounding boxes

Run the following cell to load the packages and dependencies that are going to be useful for your journey!
###Code
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`. 1 - Problem StatementYou are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around. Pictures taken from a car-mounted camera while driving around Silicon Valley. We would like to especially thank [drive.ai](https://www.drive.ai/) for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like. **Figure 1** : **Definition of a box** If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step. In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use. 2 - YOLO YOLO ("you only look once") is a popular algoritm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes. 2.1 - Model detailsFirst things to know:- The **input** is a batch of images of shape (m, 608, 608, 3)- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).Lets look in greater detail at what this encoding represents. **Figure 2** : **Encoding architecture for YOLO** If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object. Since we are using 5 anchor boxes, each of the 19 x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.For simplicity, we will flatten the last two last dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425). **Figure 3** : **Flattening the last two last dimensions** Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class. **Figure 4** : **Find the class detected by each box** Here's one way to visualize what YOLO is predicting on an image:- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes). 
- Color that grid cell according to what object that grid cell considers the most likely.Doing this results in this picture: **Figure 5** : Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell. Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: **Figure 6** : Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps: - Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)- Select only one box when several boxes overlap with each other and detect the same object. 2.2 - Filtering with a threshold on class scoresYou are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold. The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables: - `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.**Exercise**: Implement `yolo_filter_boxes()`.1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator: ```pythona = np.random.randn(19*19, 5, 1)b = np.random.randn(19*19, 5, 80)c = a * b shape of c will be (19*19, 5, 80)```2. For each box, find: - the index of the class with the maximum box score ([Hint](https://keras.io/backend/argmax)) (Be careful with what axis you choose; consider using axis=-1) - the corresponding box score ([Hint](https://keras.io/backend/max)) (Be careful with what axis you choose; consider using axis=-1)3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep. 4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. ([Hint](https://www.tensorflow.org/api_docs/python/tf/boolean_mask))Reminder: to call a Keras function, you should use `K.function(...)`.
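For a quick shape sanity check, here is a minimal NumPy-only sketch of the score-thresholding idea on toy arrays (an editor's illustration, not part of the graded exercise, which must use the Keras/TensorFlow ops described above):

```python
import numpy as np

np.random.seed(0)
box_confidence = np.random.rand(19 * 19, 5, 1)    # p_c for each of the 5 boxes per cell
box_class_probs = np.random.rand(19 * 19, 5, 80)  # c_1..c_80 for each box

box_scores = box_confidence * box_class_probs     # broadcasting -> shape (19*19, 5, 80)
box_classes = box_scores.argmax(axis=-1)          # index of the best class per box
box_class_scores = box_scores.max(axis=-1)        # score of the best class per box

mask = box_class_scores >= 0.6                    # boolean mask: keep confident boxes only
print(box_scores.shape, box_class_scores.shape, int(mask.sum()))
```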
###Code
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence*box_class_probs
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores,axis=-1)
box_class_scores = K.max(box_scores,axis=-1)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores>=threshold
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores,filtering_mask)
boxes = tf.boolean_mask(boxes,filtering_mask)
classes = tf.boolean_mask(box_classes,filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
###Output
scores[2] = 10.7506
boxes[2] = [ 8.42653275 3.27136683 -0.5313437 -4.94137383]
classes[2] = 7
scores.shape = (?,)
boxes.shape = (?, 4)
classes.shape = (?,)
###Markdown
**Expected Output**: **scores[2]** 10.7506 **boxes[2]** [ 8.42653275 3.27136683 -0.5313437 -4.94137383] **classes[2]** 7 **scores.shape** (?,) **boxes.shape** (?, 4) **classes.shape** (?,) 2.3 - Non-max suppression Even after filtering by thresholding over the classes scores, you still end up a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). **Figure 7** : In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probabiliy) one of the 3 boxes. Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU. **Figure 8** : Definition of "Intersection over Union". **Exercise**: Implement iou(). Some hints:- In this exercise only, we define a box using its two corners (upper left and lower right): `(x1, y1, x2, y2)` rather than the midpoint and height/width.- To calculate the area of a rectangle you need to multiply its height `(y2 - y1)` by its width `(x2 - x1)`.- You'll also need to find the coordinates `(xi1, yi1, xi2, yi2)` of the intersection of two boxes. Remember that: - xi1 = maximum of the x1 coordinates of the two boxes - yi1 = maximum of the y1 coordinates of the two boxes - xi2 = minimum of the x2 coordinates of the two boxes - yi2 = minimum of the y2 coordinates of the two boxes- In order to compute the intersection area, you need to make sure the height and width of the intersection are positive, otherwise the intersection area should be zero. Use `max(height, 0)` and `max(width, 0)`.In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
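As a quick worked example with the test boxes used in the cell below: for box1 = (2, 1, 4, 3) and box2 = (1, 2, 3, 4), the intersection corners are (xi1, yi1, xi2, yi2) = (2, 2, 3, 3), so the intersection area is $1 \times 1 = 1$; each box has area $2 \times 2 = 4$, so the union is $4 + 4 - 1 = 7$ and IoU $= 1/7 \approx 0.1429$, matching the expected output.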
###Code
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (x1, y1, x2, y2)
box2 -- second box, list object with coordinates (x1, y1, x2, y2)
"""
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
xi1 = max(box1[0],box2[0])
yi1 = max(box1[1],box2[1])
xi2 = min(box1[2],box2[2])
yi2 = min(box1[3],box2[3])
inter_area = max(xi2-xi1,0)*max(yi2-yi1,0)
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[3]-box1[1])*(box1[2]-box1[0])
box2_area = (box2[3]-box2[1])*(box2[2]-box2[0])
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area/union_area
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
###Output
iou = 0.14285714285714285
###Markdown
**Expected Output**: **iou = ** 0.14285714285714285

You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute its overlap with all other boxes, and remove boxes that overlap it more than `iou_threshold`.
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.

This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.

**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/gather)
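For intuition, here is a minimal pure-Python sketch of the greedy procedure described above, reusing the `iou()` helper defined earlier (an editor's illustration only; the graded cell below uses TensorFlow's built-in ops as instructed):

```python
def greedy_nms(boxes, scores, iou_threshold=0.5, max_boxes=10):
    """Greedy non-max suppression over corner-format boxes; returns kept indices."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:  # visit boxes from highest to lowest score
        # keep this box only if it does not overlap any already-kept box too much
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
        if len(keep) == max_boxes:
            break
    return keep

# toy usage: two heavily overlapping boxes and one separate box
print(greedy_nms([(0, 0, 2, 2), (0, 0, 2.1, 2.1), (5, 5, 7, 7)], [0.9, 0.8, 0.7]))  # [0, 2]
```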
###Code
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is made for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
    nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes_tensor, iou_threshold=iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores,nms_indices)
boxes = K.gather(boxes,nms_indices)
classes = K.gather(classes,nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
scores[2] = 6.9384
boxes[2] = [-5.299932 3.13798141 4.45036697 0.95942086]
classes[2] = -2.24527
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
###Markdown
**Expected Output**: **scores[2]** 6.9384 **boxes[2]** [-5.299932 3.13798141 4.45036697 0.95942086] **classes[2]** -2.24527 **scores.shape** (10,) **boxes.shape** (10, 4) **classes.shape** (10,) 2.4 Wrapping up the filteringIt's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented. **Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementational detail you have to know. There're a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided): ```pythonboxes = yolo_boxes_to_corners(box_xy, box_wh) ```which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes````pythonboxes = scale_boxes(boxes, image_shape)```YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image. Don't worry about these two functions; we'll show you where they need to be called.
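For intuition (not the exact helper implementation, which is provided for you): the usual midpoint-to-corners conversion is $x_1 = b_x - b_w/2$, $y_1 = b_y - b_h/2$, $x_2 = b_x + b_w/2$, $y_2 = b_y + b_h/2$, and `scale_boxes` then, roughly speaking, rescales those relative coordinates by the target image height and width so the boxes line up with the 720x1280 frames.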
###Code
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence,boxes,box_class_probs,score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores,boxes,classes,max_boxes,iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
scores[2] = 138.791
boxes[2] = [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
classes[2] = 54
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
###Markdown
**Expected Output**: **scores[2]** 138.791 **boxes[2]** [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141] **classes[2]** 54 **scores.shape** (10,) **boxes.shape** (10, 4) **classes.shape** (10,) **Summary for YOLO**:- Input image (608, 608, 3)- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output. - After flattening the last two dimensions, the output is a volume of shape (19, 19, 425): - Each cell in a 19x19 grid over the input image gives 425 numbers. - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture. - 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and and 80 is the number of classes we'd like to detect- You then select only few boxes based on: - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold - Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes- This gives you YOLO's final output. 3 - Test YOLO pretrained model on images In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by **creating a session to start your graph**. Run the following cell.
###Code
sess = K.get_session()
###Output
_____no_output_____
###Markdown
3.1 - Defining classes, anchors and image shape. Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell. The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
###Code
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
###Output
_____no_output_____
###Markdown
3.2 - Loading a pretrained modelTraining a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
###Code
yolo_model = load_model("model_data/yolo.h5")
###Output
/opt/conda/lib/python3.6/site-packages/keras/models.py:251: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
###Markdown
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
###Code
yolo_model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 608, 608, 3) 0
____________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 608, 608, 32) 864 input_1[0][0]
____________________________________________________________________________________________________
batch_normalization_1 (BatchNorm (None, 608, 608, 32) 128 conv2d_1[0][0]
____________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 608, 608, 32) 0 batch_normalization_1[0][0]
____________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 304, 304, 32) 0 leaky_re_lu_1[0][0]
____________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 304, 304, 64) 18432 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
batch_normalization_2 (BatchNorm (None, 304, 304, 64) 256 conv2d_2[0][0]
____________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 304, 304, 64) 0 batch_normalization_2[0][0]
____________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 152, 152, 64) 0 leaky_re_lu_2[0][0]
____________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 152, 152, 128) 73728 max_pooling2d_2[0][0]
____________________________________________________________________________________________________
batch_normalization_3 (BatchNorm (None, 152, 152, 128) 512 conv2d_3[0][0]
____________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 152, 152, 128) 0 batch_normalization_3[0][0]
____________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 152, 152, 64) 8192 leaky_re_lu_3[0][0]
____________________________________________________________________________________________________
batch_normalization_4 (BatchNorm (None, 152, 152, 64) 256 conv2d_4[0][0]
____________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 152, 152, 64) 0 batch_normalization_4[0][0]
____________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 152, 152, 128) 73728 leaky_re_lu_4[0][0]
____________________________________________________________________________________________________
batch_normalization_5 (BatchNorm (None, 152, 152, 128) 512 conv2d_5[0][0]
____________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 152, 152, 128) 0 batch_normalization_5[0][0]
____________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 76, 76, 128) 0 leaky_re_lu_5[0][0]
____________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 76, 76, 256) 294912 max_pooling2d_3[0][0]
____________________________________________________________________________________________________
batch_normalization_6 (BatchNorm (None, 76, 76, 256) 1024 conv2d_6[0][0]
____________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_6[0][0]
____________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 76, 76, 128) 32768 leaky_re_lu_6[0][0]
____________________________________________________________________________________________________
batch_normalization_7 (BatchNorm (None, 76, 76, 128) 512 conv2d_7[0][0]
____________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 76, 76, 128) 0 batch_normalization_7[0][0]
____________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 76, 76, 256) 294912 leaky_re_lu_7[0][0]
____________________________________________________________________________________________________
batch_normalization_8 (BatchNorm (None, 76, 76, 256) 1024 conv2d_8[0][0]
____________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_8[0][0]
____________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 38, 38, 256) 0 leaky_re_lu_8[0][0]
____________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 38, 38, 512) 1179648 max_pooling2d_4[0][0]
____________________________________________________________________________________________________
batch_normalization_9 (BatchNorm (None, 38, 38, 512) 2048 conv2d_9[0][0]
____________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_9[0][0]
____________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_9[0][0]
____________________________________________________________________________________________________
batch_normalization_10 (BatchNor (None, 38, 38, 256) 1024 conv2d_10[0][0]
____________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_10[0][0]
____________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_10[0][0]
____________________________________________________________________________________________________
batch_normalization_11 (BatchNor (None, 38, 38, 512) 2048 conv2d_11[0][0]
____________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_11[0][0]
____________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_11[0][0]
____________________________________________________________________________________________________
batch_normalization_12 (BatchNor (None, 38, 38, 256) 1024 conv2d_12[0][0]
____________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_12[0][0]
____________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_12[0][0]
____________________________________________________________________________________________________
batch_normalization_13 (BatchNor (None, 38, 38, 512) 2048 conv2d_13[0][0]
____________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_13[0][0]
____________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 19, 19, 512) 0 leaky_re_lu_13[0][0]
____________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 19, 19, 1024) 4718592 max_pooling2d_5[0][0]
____________________________________________________________________________________________________
batch_normalization_14 (BatchNor (None, 19, 19, 1024) 4096 conv2d_14[0][0]
____________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_14[0][0]
____________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_14[0][0]
____________________________________________________________________________________________________
batch_normalization_15 (BatchNor (None, 19, 19, 512) 2048 conv2d_15[0][0]
____________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_15[0][0]
____________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_15[0][0]
____________________________________________________________________________________________________
batch_normalization_16 (BatchNor (None, 19, 19, 1024) 4096 conv2d_16[0][0]
____________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_16[0][0]
____________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_16[0][0]
____________________________________________________________________________________________________
batch_normalization_17 (BatchNor (None, 19, 19, 512) 2048 conv2d_17[0][0]
____________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_17[0][0]
____________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_17[0][0]
____________________________________________________________________________________________________
batch_normalization_18 (BatchNor (None, 19, 19, 1024) 4096 conv2d_18[0][0]
____________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_18[0][0]
____________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_18[0][0]
____________________________________________________________________________________________________
batch_normalization_19 (BatchNor (None, 19, 19, 1024) 4096 conv2d_19[0][0]
____________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 38, 38, 64) 32768 leaky_re_lu_13[0][0]
____________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_19[0][0]
____________________________________________________________________________________________________
batch_normalization_21 (BatchNor (None, 38, 38, 64) 256 conv2d_21[0][0]
____________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_19[0][0]
____________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU) (None, 38, 38, 64) 0 batch_normalization_21[0][0]
____________________________________________________________________________________________________
batch_normalization_20 (BatchNor (None, 19, 19, 1024) 4096 conv2d_20[0][0]
____________________________________________________________________________________________________
space_to_depth_x2 (Lambda) (None, 19, 19, 256) 0 leaky_re_lu_21[0][0]
____________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_20[0][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 19, 19, 1280) 0 space_to_depth_x2[0][0]
leaky_re_lu_20[0][0]
____________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 19, 19, 1024) 11796480 concatenate_1[0][0]
____________________________________________________________________________________________________
batch_normalization_22 (BatchNor (None, 19, 19, 1024) 4096 conv2d_22[0][0]
____________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_22[0][0]
____________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 19, 19, 425) 435625 leaky_re_lu_22[0][0]
====================================================================================================
Total params: 50,983,561
Trainable params: 50,962,889
Non-trainable params: 20,672
____________________________________________________________________________________________________
###Markdown
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2). 3.3 - Convert output of the model to usable bounding box tensorsThe output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
###Code
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
###Output
_____no_output_____
###Markdown
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.

3.4 - Filtering boxes

`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you previously implemented, to do this.
###Code
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
###Output
_____no_output_____
###Markdown
3.5 - Run the graph on an imageLet the fun begin. You have created a (`sess`) graph that can be summarized as follows:1. yolo_model.input is given to `yolo_model`. The model is used to compute the output yolo_model.output 2. yolo_model.output is processed by `yolo_head`. It gives you yolo_outputs 3. yolo_outputs goes through a filtering function, `yolo_eval`. It outputs your predictions: scores, boxes, classes **Exercise**: Implement predict() which runs the graph to test YOLO on an image.You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.The code below also uses the following function:```pythonimage, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))```which outputs:- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.- image_data: a numpy-array representing the image. This will be the input to the CNN.**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
###Code
def predict(sess, image_file):
"""
    Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores,boxes,classes],feed_dict={yolo_model.input:image_data,K.learning_phase():0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
###Output
_____no_output_____
###Markdown
Run the following cell on the "test.jpg" image to verify that your function is correct.
###Code
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
###Output
Found 7 boxes for test.jpg
car 0.60 (925, 285) (1045, 374)
car 0.66 (706, 279) (786, 350)
bus 0.67 (5, 266) (220, 407)
car 0.70 (947, 324) (1280, 705)
car 0.74 (159, 303) (346, 440)
car 0.80 (761, 282) (942, 412)
car 0.89 (367, 300) (745, 648)
|
Lab1/.ipynb_checkpoints/tutorial-samples-checkpoint.ipynb | ###Markdown
Some examples from the W3Schools Python tutorials.
###Code
# The following examples are code from the W3Schools tutorials
###Output
_____no_output_____
###Markdown
Example 1 - Python list
###Code
thislist = ["apple", "banana", "cherry"]
print(thislist)
###Output
['apple', 'banana', 'cherry']
###Markdown
Example 2 - IF
###Code
a = 33
b = 200
if b > a:
print("b is greater than a")
###Output
b is greater than a
###Markdown
Example 3 - Matplotlib bar plot
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.array(["A", "B", "C", "D"])
y = np.array([3, 8, 1, 10])
plt.bar(x,y)
plt.show()
# Beautiful, isn't it? You can add whatever comments you like.
###Output
_____no_output_____ |
End-To-End & Modelling/House_Prices_Iowa.ipynb | ###Markdown
House Price Prediction in the Iowa Region

Let's import the libraries we need and the data.
###Code
# Importing Lib's
import numpy as np
import pandas as pd
# Scalers & Pipelines
from sklearn.preprocessing import RobustScaler, LabelEncoder
from sklearn.pipeline import make_pipeline
# Models
from sklearn.model_selection import cross_val_score, train_test_split, KFold
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.kernel_ridge import KernelRidge
from sklearn.ensemble import GradientBoostingRegressor, AdaBoostRegressor, RandomForestRegressor
from xgboost import XGBRegressor as xgb
from lightgbm import LGBMRegressor as lgb
# Metrics
from scipy import stats
from scipy.special import boxcox1p
from scipy.stats import skew, norm
from sklearn.metrics import mean_squared_error, make_scorer
# Viz Lib
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display
# Ignore Warnings
import warnings
warnings.filterwarnings('ignore')
# Parameters
pd.set_option('display.float_format', lambda x: '%.3f' % x) # Limit Float output to 3 decimal points
pd.set_option('max_columns', 10)
%matplotlib inline
sns.set_style('whitegrid')
# Importing Data
train = pd.read_csv('Data/Iowa_House_Prices/train.csv')
test = pd.read_csv('Data/Iowa_House_Prices/test.csv')
print(f'Train Data Shape:{train.shape},\n Test Data Shape: {test.shape}')
# Checking Column's and Id
train.head()
###Output
Train Data Shape:(1460, 81),
Test Data Shape: (1459, 80)
###Markdown
Data Description- SalePrice: the property's sale price in dollars. This is the target variable that you're trying to predict.- MSSubClass: The building class- MSZoning: The general zoning classification- LotFrontage: Linear feet of street connected to property- LotArea: Lot size in square feet- Street: Type of road access- Alley: Type of alley access- LotShape: General shape of property- LandContour: Flatness of the property- Utilities: Type of utilities available- LotConfig: Lot configuration- LandSlope: Slope of property- Neighborhood: Physical locations within Ames city limits- Condition1: Proximity to main road or railroad- Condition2: Proximity to main road or railroad (if a second is present)- BldgType: Type of dwelling- HouseStyle: Style of dwelling- OverallQual: Overall material and finish quality- OverallCond: Overall condition rating- YearBuilt: Original construction date- YearRemodAdd: Remodel date- RoofStyle: Type of roof- RoofMatl: Roof material- Exterior1st: Exterior covering on house- Exterior2nd: Exterior covering on house (if more than one material)- MasVnrType: Masonry veneer type- MasVnrArea: Masonry veneer area in square feet- ExterQual: Exterior material quality- ExterCond: Present condition of the material on the exterior- Foundation: Type of foundation- BsmtQual: Height of the basement- BsmtCond: General condition of the basement- BsmtExposure: Walkout or garden level basement walls- BsmtFinType1: Quality of basement finished area- BsmtFinSF1: Type 1 finished square feet- BsmtFinType2: Quality of second finished area (if present)- BsmtFinSF2: Type 2 finished square feet- BsmtUnfSF: Unfinished square feet of basement area- TotalBsmtSF: Total square feet of basement area- Heating: Type of heating- HeatingQC: Heating quality and condition- CentralAir: Central air conditioning- Electrical: Electrical system- 1stFlrSF: First Floor square feet- 2ndFlrSF: Second floor square feet- LowQualFinSF: Low quality finished square feet (all floors)- GrLivArea: Above grade (ground) living area square feet- BsmtFullBath: Basement full bathrooms- BsmtHalfBath: Basement half bathrooms- FullBath: Full bathrooms above grade- HalfBath: Half baths above grade- Bedroom: Number of bedrooms above basement level- Kitchen: Number of kitchens- KitchenQual: Kitchen quality- TotRmsAbvGrd: Total rooms above grade (does not include bathrooms)- Functional: Home functionality rating- Fireplaces: Number of fireplaces- FireplaceQu: Fireplace quality- GarageType: Garage location- GarageYrBlt: Year garage was built- GarageFinish: Interior finish of the garage- GarageCars: Size of garage in car capacity- GarageArea: Size of garage in square feet- GarageQual: Garage quality- GarageCond: Garage condition- PavedDrive: Paved driveway- WoodDeckSF: Wood deck area in square feet- OpenPorchSF: Open porch area in square feet- EnclosedPorch: Enclosed porch area in square feet- 3SsnPorch: Three season porch area in square feet- ScreenPorch: Screen porch area in square feet- PoolArea: Pool area in square feet- PoolQC: Pool quality- Fence: Fence quality- MiscFeature: Miscellaneous feature not covered in other categories- MiscVal: $Value of miscellaneous feature- MoSold: Month Sold- YrSold: Year Sold- SaleType: Type of sale- SaleCondition: Condition of sale
###Code
# Check duplicate ID's
unique_ids = len(set(train.Id))
total_ids = train.shape[0]
dupes = total_ids - unique_ids
print(f'The number of duplicate Ids is {dupes} out of {total_ids} total entries')
# Save Id's column
train_Id = train.Id
test_Id = test.Id
# Drop Id's column
train.drop('Id', axis=1, inplace=True)
test.drop('Id', axis=1, inplace=True)
print(f'Train Data Shape:{train.shape},\n Test Data Shape: {test.shape} after dropping Id column')
###Output
Train Data Shape:(1460, 80),
Test Data Shape: (1459, 79) after dropping Id column
###Markdown
1. Preprocessing 1.1 OutliersFirst, let's deal with outliers as mentioned in the [documentation]{https://ww2.amstat.org/publications/jse/v19n3/decock.pdf}. There seems to be 2 extreme outliers where very large houses sold for very cheap. The author recommends to delete these observations and gneerally any house beyond 4k sq ft from the dataset.
###Code
# Outliers
plt.scatter(train.GrLivArea, train.SalePrice, marker='x')
plt.xlabel('GrLivArea')
plt.ylabel('Sale Price')
plt.title('Outliers')
# Drop observations above 4k sq ft in GrLivArea
train = train[train.GrLivArea < 4000]
# Plotting
plt.scatter(train.GrLivArea, train.SalePrice, marker='x')
plt.xlabel('GrLivArea')
plt.ylabel('Sale Price')
plt.title('Outliers')
###Output
_____no_output_____
###Markdown
1.2 Target Variable (`SalePrice`)

Let's look at our target variable and check whether it is skewed.
###Code
sns.distplot(train.SalePrice, fit=norm)
plt.figure()
stats.probplot(train.SalePrice, plot=plt)
###Output
_____no_output_____
###Markdown
The target variable is right-skewed, so it is better to apply a log transformation so that errors in predicting expensive houses and cheap houses affect the result equally.
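A quick illustration of that point (an editor's toy check, not part of the original notebook): a 10% over-prediction on a cheap house and on an expensive house give essentially the same error in log space:

```python
import numpy as np

for price in (100_000, 1_000_000):
    pred = 1.1 * price  # 10% over-prediction
    print(price, np.log1p(pred) - np.log1p(price))  # ~0.0953 in both cases
```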
###Code
# Get Normal Distribution Parameters
mu, sigma = norm.fit(train.SalePrice)
print(f'mu: {mu:.3f} and sigma: {sigma:.3f} values before log transformation')
# Log Transformation
train['SalePrice'] = np.log1p(train['SalePrice'])
# Parameters after Transformation
mu, sigma = norm.fit(train.SalePrice)
print(f'mu: {mu:.3f} and sigma: {sigma:.3f} values after log transformation')
# Plot
sns.distplot(train.SalePrice, fit=norm)
plt.legend([f'$\mu=$ {mu:.2f}, $\sigma=$ {sigma:.2f}'], loc='best')
plt.ylabel('Frequency')
plt.title('SalePrice Distribution')
plt.figure()
stats.probplot(train.SalePrice, plot=plt)
###Output
mu: 12.022 and sigma: 0.396 values after log transformation
###Markdown
Now it's approximately normally distributed, which is easier for linear models to work with.

2. Feature Engineering

To keep things simple, combine the train and test sets for the feature-engineering part, so that whatever we do is reflected in both sets without repeating the steps on each dataset.
###Code
# Train + Test
df = pd.concat(objs=[train, test]).reset_index(drop=True)
# Target Variable Isolation
y_train = train.SalePrice.values
# Drop target from data for now
df.drop(['SalePrice'], axis=1, inplace=True)
# Total Data Shape
df.shape
###Output
_____no_output_____
###Markdown
2.1 Missing Data

Deal with missing data as per the data description. Get the percentage of missing data for each feature to get a clearer picture.
###Code
# Missing data Statistics
miss_count = df.isnull().sum().sort_values(ascending=False)
miss_percent = ((df.isnull().sum()/df.isnull().count())*100).sort_values(ascending=False)
missing_data = pd.concat([miss_count, miss_percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(35)
###Output
_____no_output_____
###Markdown
Impute the missing data for each feature, checking against the data description.
- Alley: Data description says NA means "no alley access"
- BsmtFinSF1, BsmtFinSF2, BsmtUnfSF, TotalBsmtSF, BsmtFullBath and BsmtHalfBath: Missing values mostly mean there is no basement
- BsmtQual, BsmtCond, BsmtExposure, BsmtFinType1 and BsmtFinType2: These are categorical basement values; there is no basement with a `NaN` value, so for now just replace with `None`
- Electrical: One missing value, so just replace with the most common observation
- Exterior1st and Exterior2nd: These have 1 and 2 missing values; substitute the most common value
- Functional: Description says `NA` means typical
- Fence: Data description says NA means "no fence"
- FireplaceQu: Data description says NA means "no fireplace"
- GarageType, GarageFinish, GarageQual and GarageCond: Missing mostly means not present, so replace with `None`
- GarageYrBlt, GarageArea and GarageCars: Replace missing values with `0`, since no garage = no cars
- KitchenQual: One missing value; replace with the most common record
- LotFrontage: `NA` most likely means no lot frontage
- MSZoning: Fill with the most common value, as this pertains to zoning classification
- MSSubClass: `NA` most likely means no building class; replace missing values with `None`
- MasVnrArea and MasVnrType: NA most likely means no masonry veneer, so fill with `0` for area and `None` for type
- MiscFeature: Data description says NA means "no misc feature"
- SaleType: Replace with the most common value
- PoolQC: Data description says NA means "No Pool". The vast majority (99%+) of missing values mean just that: most houses don't have pools, which is common.
- Utilities: NA most likely means `AllPub` like the other records; if nearly all records are `AllPub` it won't help much in prediction, so we can remove this feature altogether.
###Code
# Alley
df['Alley'] = df['Alley'].fillna('None')
# Basement Features
# Numerical
for col in ['BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF','TotalBsmtSF', 'BsmtFullBath', 'BsmtHalfBath']:
df[col] = df[col].fillna(0)
# Categorical
for col in ['BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2']:
df[col] = df[col].fillna('None')
# Electrical
df['Electrical'] = df['Electrical'].fillna(df['Electrical'].mode()[0])
# Exterior Features
df['Exterior1st'] = df['Exterior1st'].fillna(df['Exterior1st'].mode()[0])
df['Exterior2nd'] = df['Exterior2nd'].fillna(df['Exterior2nd'].mode()[0])
# Functional
df['Functional'] = df['Functional'].fillna('Typ')
# Fence
df['Fence'] = df['Fence'].fillna('None')
# Fireplace
df['FireplaceQu'] = df['FireplaceQu'].fillna('None')
# Garage Features
# Numerical
for col in ['GarageYrBlt', 'GarageArea', 'GarageCars']:
df[col] = df[col].fillna(0)
# Categorical
for col in ['GarageType', 'GarageFinish', 'GarageQual', 'GarageCond']:
df[col] = df[col].fillna('None')
# KitchenQuality
df['KitchenQual'] = df['KitchenQual'].fillna(df['KitchenQual'].mode()[0])
# LotFrontage
df['LotFrontage'] = df['LotFrontage'].fillna(0)
# MSZoning
df['MSZoning'] = df['MSZoning'].fillna(df['MSZoning'].mode()[0])
# MSSubclass
df['MSSubClass'] = df['MSSubClass'].fillna('None')
# MasVnr Features
# Numerical
df['MasVnrArea'] = df['MasVnrArea'].fillna(0)
# Categorical
df['MasVnrType'] = df['MasVnrType'].fillna('None')
# MiscFeatures
df['MiscFeature'] = df['MiscFeature'].fillna('None')
# SaleType
df['SaleType'] = df['SaleType'].fillna(df['SaleType'].mode()[0])
# PoolQC
df['PoolQC'] = df['PoolQC'].fillna('None')
# Utilities
df.drop(['Utilities'], axis=1, inplace=True)
# Check for missing values now
miss_count = df.isnull().sum().sort_values(ascending=False)
miss_percent = ((df.isnull().sum()/df.isnull().count())*100).sort_values(ascending=False)
missing_data = pd.concat([miss_count, miss_percent], axis=1, keys=['Total', 'Percent'])
missing_data.head()
###Output
_____no_output_____
###Markdown
2.2 Formatting
While filling the missing values we noticed a lot of categorical features that are wrongly identified as numerical. So let's deal with them now.
###Code
df['YrSold'] = df['YrSold'].astype(str)
df['MoSold'] = df['MoSold'].astype(str)
df['OverallCond'] = df['OverallCond'].astype(str)
df['MSSubClass'] = df['MSSubClass'].astype(str)
###Output
_____no_output_____
###Markdown
2.3 Numerical Values
Highly skewed numerical values can blow up the predictions and lead to overfitting; we can normalize them by transforming with `Box-Cox` or `log1p`.
###Code
# Extracting Numerical Features
numerical_features = df.dtypes[df.dtypes != 'object'].index
numerical_features
# Check how skewed the data is for these features
skewed_features = df[numerical_features].apply(lambda x: skew(x)).sort_values(ascending=False)
# Convert to Data Frame for convenience
skewed_df = pd.DataFrame({'Skew': skewed_features})
# Get most skewed data
skewed_df = skewed_df[abs(skewed_df) > 0.75]
print(f'There are {(skewed_df.shape[0])} skewed numerical features which needs to be transformed')
# Box-Cox Transform the skewed features
for f in skewed_df.index:
df[f] = boxcox1p(df[f], 0.15)
###Output
There are 32 skewed numerical features which needs to be transformed
###Markdown
2.4 Categorical ValuesWhen it comes to categorical values the equivalent of normalization to smooth the predcition process is `Label Encoding`. Then get dummies for the remaining categorical values.
###Code
# Label Encoding
lbe = LabelEncoder()
cols = ['FireplaceQu', 'BsmtQual', 'BsmtCond', 'GarageQual', 'GarageCond', 'ExterQual', 'ExterCond','HeatingQC',
'PoolQC', 'KitchenQual', 'BsmtFinType1', 'BsmtFinType2', 'Functional', 'Fence', 'BsmtExposure', 'GarageFinish',
'LandSlope', 'LotShape', 'PavedDrive', 'Street', 'Alley', 'CentralAir', 'MSSubClass', 'OverallCond', 'YrSold',
'MoSold']
for i in cols:
    df[i] = lbe.fit_transform(df[i].values)  # assign back, otherwise the encoding is discarded
# Get dummy values
df = pd.get_dummies(df)
df.shape
###Output
_____no_output_____
###Markdown
3. Modelling
Let's get on with the modelling part. First, split the data set back into train and test as before. Then start with some basic models, see how well they do, and try to improve on that. Evaluate with `RMSE` and cross-validate the models using `KFold`.
###Code
# Getting train and test sets back after Feature engineering
train_1 = df[:len(train)]
test_1 = df[len(train):]
print(f'Shape of Train Set: {train_1.shape}\n Shape of Test Set: {test_1.shape}')
# Cross Val with KFold
k_fold = KFold(10, shuffle=True, random_state=42).get_n_splits(train_1.values)  # get_n_splits returns the integer 10, so cross_val_score falls back to plain 10-fold CV
scorer = make_scorer(mean_squared_error, greater_is_better=False)
# Validation Function
def rmse_cv(model):
rmse = np.sqrt(-cross_val_score(model, train_1.values, y_train, scoring=scorer, cv=k_fold))
return rmse
###Output
_____no_output_____
###Markdown
`GridSearchCV` is one good way to find optimal hyperparameters. I have used it to find the best parameters for the models below; you can check my [other](https://github.com/srp98/Analysis-Engineering-and-Modelling/blob/master/End-To-End%20%26%20Modelling/RMS_Titanic.ipynb) notebook to see how the best parameters for the models are obtained. The search is a bit taxing, but a good processor with multithreading can get the job done in around 15-20 minutes.
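As a rough illustration, such a search might look like the sketch below. The parameter grid and the 5-fold CV are made-up illustrative values, not the grid actually used for the models above, and it assumes the `train_1` array created in the next cell together with the `y_train` defined earlier.

```python
# Hypothetical GridSearchCV sketch for tuning Lasso's alpha (illustrative values only)
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import Lasso

param_grid = {'lasso__alpha': [0.0001, 0.0005, 0.001, 0.005]}
search = GridSearchCV(
    make_pipeline(RobustScaler(), Lasso(random_state=3)),
    param_grid=param_grid,
    scoring='neg_mean_squared_error',
    cv=5,
    n_jobs=-1,
)
search.fit(train_1.values, y_train)
print(search.best_params_, np.sqrt(-search.best_score_))
```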
###Code
# Basic Models
regressors = [KernelRidge(alpha=0.5, kernel='polynomial', degree=2, coef0=2.5),
make_pipeline(RobustScaler(), ElasticNet(alpha=0.0005, l1_ratio=.9, random_state=3)),
make_pipeline(RobustScaler(), Lasso(alpha=0.0005, random_state=3)),
GradientBoostingRegressor(n_estimators=3000, learning_rate=1e-4, max_depth=4, max_features='sqrt',
min_samples_leaf=15, min_samples_split=10, loss='huber', random_state=3),
AdaBoostRegressor(n_estimators=3000, learning_rate=1e-4, loss='square', random_state=3),
RandomForestRegressor(n_estimators=3000, max_features='sqrt', max_depth=4, min_samples_split=10,
min_samples_leaf=15, random_state=3),
xgb(colsample_bytree=0.45, gamma=0.05, learning_rate=0.05, max_depth=3, min_child_weight=2,
n_estimators=2300, reg_alpha=0.5, reg_lambda=0.85, subsample=0.5, random_state=7),
lgb(objective='regression', num_leaves=5, learning_rate=0.05, n_estimators=750, max_bin=50,
bagging_fraction=0.7, feature_fraction=0.25, feature_fraction_seed=8, bagging_seed=8, bagging_freq=5,
min_data_in_leaf=5, min_sum_hessian_in_leaf=10)]
# Model Scores
cv_results = [rmse_cv(model) for model in regressors]
cv_means, cv_std = [], []
for result in cv_results:
cv_means.append(result.mean())
cv_std.append(result.std())
cv_res_df = pd.DataFrame({'Algorithms': ['KernalRidge', 'ElasticNet', 'Lasso', 'GradientBoost Regressor',
'AdaBoost Regressor', 'Random Forest Regressor', 'XGBoost', 'LightGBM'],
'RMSE_Score': cv_means, 'RMSE_Deviation': cv_std})
cv_res_df.sort_values('RMSE_Score')
# Plot to check which algorithms did better
plt.figure(figsize=(12, 8))
sns.barplot(x='Algorithms', y='RMSE_Score', data=cv_res_df)
plt.xticks(rotation=90)
plt.xlabel('Mean Accuracy')
plt.title('RMSE Scores')
###Output
_____no_output_____ |
use_cases/u2-genome.ipynb | ###Markdown
Setup
__Note:__ This analysis requires a lot of temporary storage space (~200 GB) in the data fetch phase. Subsequently, around 50 GB are required to store all the output artifacts.
###Code
import os
import matplotlib.pyplot as plt
import pandas as pd
import qiime2 as q2
import seaborn as sns
import skbio
from matplotlib import cm
from qiime2.plugins import (
fondue, sourmash, diversity, emperor, demux,
sample_classifier, cutadapt
)
data_loc = 'u2-genome-results'
if not os.path.isdir(data_loc):
os.mkdir(data_loc)
email = '[email protected]'
n_jobs = 16
nextstrain_metadata_path = os.path.join(data_loc, 'metadata_nextstrain.tsv')
nextstrain_meta_url = 'https://data.nextstrain.org/files/ncov/open/metadata.tsv.gz'
nextstrain_last_submit_date = '2022-01-31'
genomes_per_variant = 250
random_seed = 11
sra_metadata_path = os.path.join(data_loc, 'metadata_sra.tsv')
metadata_merged_path = os.path.join(data_loc, 'metadata_merged.tsv')
def sample_variants(metadata_df, n, grouping_col='Nextstrain_clade', random_state=1):
"""Draw a random, stratified sample from all available virus variants.
Args:
metadata_df (pd.DataFrame): Metadata of all samples.
n (int): Sample size per virus variant.
grouping_col (str): Name of the column containing variant name.
random_state (int): Random seed to be used when sampling.
Returns:
pd.DataFrame: DataFrame containing subsampled metadata.
"""
metadata_ns_vars_smp = metadata_df.groupby(grouping_col).apply(
lambda x: x.sample(n=n, random_state=random_state)
)
if 'sra_accession' in metadata_ns_vars_smp.columns:
metadata_ns_vars_smp.set_index('sra_accession', drop=True, inplace=True)
else:
metadata_ns_vars_smp.reset_index(level=0, drop=True, inplace=True)
metadata_ns_vars_smp.index.name = 'id'
return metadata_ns_vars_smp
def color_variants(x, cmap='plasma'):
"""
Return a color from provided color map based on virus variant.
Args:
x (str): Variant name.
cmap (str): Matplotlib's color map name.
Returns:
Color from Matplotlib's cmap.
"""
colors = cm.get_cmap(cmap, 8).colors
if x == 'Alpha':
return colors[0]
elif x == 'Delta':
return colors[1]
else:
return colors[2]
###Output
_____no_output_____
###Markdown
Process NextStrain's metadata
We are interested in taking a sample of SARS-CoV-2 genomes from the full Nextstrain list. We will only consider genomes available in the SRA repository for a few virus variants. Moreover, we will only work with single-end sequences to simplify the analysis. We begin by fetching the original Nextstrain metadata:
###Code
%%bash -s "$nextstrain_metadata_path" "$data_loc" "$nextstrain_meta_url"
if test -f "$1"; then
echo "$1 exists and will not be re-downloaded."
else
wget -nv -O "$2/metadata.tsv.gz" "$3";
gzip -f -d "$2/metadata.tsv.gz";
mv "$2/metadata.tsv" "$2/metadata_nextstrain.tsv"
fi
metadata_ns = pd.read_csv(nextstrain_metadata_path, sep='\t')
metadata_ns.shape
metadata_ns.head(5)
# remove the records obtained later than the indicated date
metadata_ns['date_submitted'] = pd.to_datetime(metadata_ns['date_submitted'])
metadata_ns = metadata_ns[metadata_ns['date_submitted'] <= nextstrain_last_submit_date]
metadata_ns.shape
# convert date_submitted back to string (to conform with QIIME 2' Metadata format)
metadata_ns['date_submitted'] = metadata_ns['date_submitted'].astype(str)
# check count of samples per variant
metadata_ns['Nextstrain_clade'].value_counts()
###Output
_____no_output_____
###Markdown
Only keep samples with SRA accession numbers.
###Code
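# Keep only samples with a single, well-formed SRA run accession (SRR/ERR); drop entries listing multiple accessions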
metadata_ns = metadata_ns[
metadata_ns['sra_accession'].notna() & \
metadata_ns['sra_accession'].str.startswith(('SRR', 'ERR')) &
(metadata_ns['sra_accession'].str.contains(',') == False)
]
metadata_ns.shape
###Output
_____no_output_____
###Markdown
Only keep samples with good `QC_missing_data`.
###Code
metadata_ns = metadata_ns[metadata_ns['QC_missing_data'] == 'good']
metadata_ns.shape
###Output
_____no_output_____
###Markdown
Filter out and group together only the desired virus variants.
###Code
variants = ['Alpha', 'Delta', 'Omicron']
metadata_ns_vars = metadata_ns[metadata_ns['Nextstrain_clade'].notna()]
metadata_ns_vars = metadata_ns_vars[metadata_ns_vars['Nextstrain_clade'].str.contains('|'.join(variants))]
metadata_ns_vars.shape
# rename clades to generate groups
metadata_ns_vars['Nextstrain_clade_grouped'] = metadata_ns_vars['Nextstrain_clade']
for variant in variants:
metadata_ns_vars['Nextstrain_clade_grouped'] = \
metadata_ns_vars['Nextstrain_clade_grouped'].str.replace(rf'.*{variant}.*', variant, regex=True)
clade_counts = metadata_ns_vars['Nextstrain_clade_grouped'].value_counts()
clade_counts
###Output
_____no_output_____
###Markdown
Sample _n_ sequences per virus variant. We are first sampling more than the final required count per variant - we will need some initial metadata to sample only the records with desired properties.
###Code
n = 150 * genomes_per_variant
metadata_ns_vars_smp = sample_variants(
metadata_ns_vars, n=n, grouping_col='Nextstrain_clade_grouped', random_state=random_seed
)
# remove some columns with mixed types (we will not need those)
for col in ['clock_deviation']:
metadata_ns_vars_smp.drop(col, axis=1, inplace=True)
metadata_ns_vars_smp['Nextstrain_clade'].value_counts()
###Output
_____no_output_____
###Markdown
Fetch sample metadata using q2-fondue
Fetch metadata for the pre-selected sequences using q2-fondue's `get-metadata` action. We will then use this metadata to keep only samples containing single-end reads and merge those with the original Nextstrain metadata. Finally, we will subsample those to get the final list of genomes to fetch, stratified per virus variant.
###Code
# we will be fetching metadata in several batches, due to large ID count
ids = metadata_ns_vars_smp.index.to_list()
ids_chunked = [ids[i:i + 4000] for i in range(0, metadata_ns_vars_smp.shape[0], 4000)]
all_meta = []
if not os.path.isfile(sra_metadata_path):
for i, _ids in enumerate(ids_chunked):
print(f'-----Fetching metadata - batch {i + 1} out of {len(ids_chunked)}...-----')
current_batch_loc = os.path.join(data_loc, f'sra_meta_batch{i}.qza')
_ids = pd.Series(_ids, name='ID')
if not os.path.isfile(current_batch_loc):
sra_meta, failed_ids, = fondue.methods.get_metadata(
accession_ids=q2.Artifact.import_data('NCBIAccessionIDs', _ids),
email=email,
n_jobs=n_jobs,
log_level='WARNING'
)
sra_meta.save(current_batch_loc)
failed_ids.save(os.path.join(data_loc, f'sra_failed_ids_batch{i}.qza'))
else:
print(f'Reading current SRA meta batch from file {current_batch_loc}...')
sra_meta = q2.Artifact.load(current_batch_loc)
all_meta.append(sra_meta)
del sra_meta
# merge metadata from all the batches
sra_meta, = fondue.methods.merge_metadata(
metadata=all_meta
)
sra_meta_df = sra_meta.view(pd.DataFrame)
sra_meta_df.to_csv(sra_metadata_path, sep='\t')
# clean up
del all_meta
else:
print(f'Metadata artifact exists and will be read from {sra_metadata_path}.')
sra_meta_df = pd.read_csv(sra_metadata_path, sep='\t', index_col=0)
###Output
_____no_output_____
###Markdown
Merge SRA and Nextstrain metadata
Merge SRA metadata with Nextstrain metadata and re-sample only __single-end short__ reads.
###Code
sra_meta_smp_df = metadata_ns_vars_smp.merge(sra_meta_df, left_index=True, right_index=True)
sra_meta_smp_df.shape
selection = \
(sra_meta_smp_df['Instrument'].str.contains('NextSeq 550')) & \
(sra_meta_smp_df['Library Layout'] == 'SINGLE')
sra_meta_smp_df = sra_meta_smp_df[selection]
sra_meta_smp_df_gr = sra_meta_smp_df.groupby(['Nextstrain_clade_grouped']).count()
sra_meta_smp_df_gr.iloc[:,:2]
###Output
_____no_output_____
###Markdown
Find the largest possible sample size.
###Code
n = sra_meta_smp_df_gr.iloc[:, 0].min()
n = n if n < genomes_per_variant else genomes_per_variant
print(f'Taking a sample of {n} genomes per variant.')
sra_meta_smp_df = metadata_ns_vars_smp.merge(
sra_meta_df, left_index=True, right_index=True
)
sra_meta_smp_df = sample_variants(
sra_meta_smp_df[selection], n=n,
grouping_col='Nextstrain_clade_grouped', random_state=random_seed
)
sra_meta_smp_df['Public'] = sra_meta_smp_df['Public'].astype(str)
sra_meta_smp_df.shape
# check count of samples per variant
sra_meta_smp_df['Nextstrain_clade_grouped'].value_counts()
###Output
_____no_output_____
###Markdown
Save merged & sampled metadata to file.
###Code
if not os.path.isfile(metadata_merged_path):
sra_meta_smp_df.to_csv(metadata_merged_path, sep='\t')
print('Saved sample metadata to', metadata_merged_path)
###Output
_____no_output_____
###Markdown
Fetch SARS-CoV-2 genomes using q2-fondue
We can use IDs from our final metadata table to fetch all the corresponding sequencing files from the SRA using `q2-fondue`'s `get-sequences` action.
###Code
single_reads_out = os.path.join(data_loc, 'sars-single.qza')
if not os.path.isfile(single_reads_out):
_ids = pd.Series(sra_meta_smp_df.index.to_list(), name='ID')
single_reads, _, _ = fondue.methods.get_sequences(
accession_ids=q2.Artifact.import_data('NCBIAccessionIDs', _ids),
email=email,
n_jobs=n_jobs
)
single_reads.save(single_reads_out)
else:
print(f'Single-reads artifact exists and will be read from {single_reads_out}.')
single_reads = q2.Artifact.load(single_reads_out)
###Output
_____no_output_____
###Markdown
Quality control of the sequences
Before proceeding to the next step, we can assess the quality of the retrieved dataset using the `summarize` action from the `q2-demux` plugin.
###Code
qc_viz_out = os.path.join(data_loc, 'qc-viz.qzv')
if not os.path.isfile(qc_viz_out):
qc_viz, = demux.visualizers.summarize(
data=single_reads
)
qc_viz.save(qc_viz_out)
else:
print(f'Quality control artifact exists and will be read from {qc_viz_out}.')
qc_viz = q2.Visualization.load(qc_viz_out)
qc_viz
###Output
_____no_output_____
###Markdown
Data clean-up: sequence trimming
As can be seen in the visualization above, the data is already of good quality. We will just perform one additional cleaning step with cutadapt, using a maximum error rate of 0.01 and discarding sequences shorter than 35 bp.
###Code
trimmed_out = os.path.join(data_loc, 'sars-single-trimmed.qza')
if not os.path.isfile(trimmed_out):
single_reads_trimmed, = cutadapt.methods.trim_single(
demultiplexed_sequences=single_reads,
error_rate=0.01,
minimum_length=35,
cores=n_jobs
)
single_reads_trimmed.save(trimmed_out)
else:
print(f'Trimmed reads artifact exists and will be read from {trimmed_out}.')
single_reads_trimmed = q2.Artifact.load(trimmed_out)
trimmed_viz_out = os.path.join(data_loc, 'qc-viz-trimmed.qzv')
if not os.path.isfile(trimmed_viz_out):
qc_viz_trimmed, = demux.visualizers.summarize(
data=single_reads_trimmed
)
qc_viz_trimmed.save(trimmed_viz_out)
else:
print(f'Trimmed reads visualization exists and will be read from {trimmed_viz_out}.')
qc_viz_trimmed = q2.Visualization.load(trimmed_viz_out)
qc_viz_trimmed
###Output
_____no_output_____
###Markdown
Calculate and compare MinHash signatures for every genome
Having checked the data quality, we will proceed to calculating the MinHash signatures of every genome using `q2-sourmash`. First, we calculate the hashes from the short reads using the `compute` action. Subsequently, we generate a distance matrix comparing hashes pairwise (using the `compare` action).
###Code
genome_hash_out = os.path.join(data_loc, 'genome-hash-trimmed.qza')
if not os.path.isfile(genome_hash_out):
genome_hash, = sourmash.methods.compute(
sequence_file=single_reads_trimmed,
ksizes=31,
scaled=10
)
genome_hash.save(genome_hash_out)
else:
print(f'Genome hashes artifact exists and will be read from {genome_hash_out}.')
genome_hash = q2.Artifact.load(genome_hash_out)
hash_compare_out = os.path.join(data_loc, 'hash-compare-trimmed.qza')
if not os.path.isfile(hash_compare_out):
hash_compare, = sourmash.methods.compare(
min_hash_signature=genome_hash,
ksize=31
)
hash_compare.save(hash_compare_out)
else:
print(f'Distance matrix artifact exists and will be read from {hash_compare_out}.')
hash_compare = q2.Artifact.load(hash_compare_out)
###Output
_____no_output_____
###Markdown
Perform dimensionality reduction of the genome MinHash distance matrix
Finally, a 2D t-SNE plot is generated from the obtained distance matrix (`tsne` method from the `q2-diversity` plugin) and visualized using the EMPeror plot (`plot` action from the `q2-emperor` plugin).
###Code
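# Embed the pairwise MinHash distance matrix in 2D with t-SNE before plotting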
genome_tsne, = diversity.methods.tsne(
distance_matrix=hash_compare,
learning_rate=125,
perplexity=18
)
cols = ['Nextstrain_clade_grouped', 'Nextstrain_clade', 'Instrument']
emperor_plot_out = os.path.join(data_loc, 'emperor-plot-trimmed.qzv')
if not os.path.isfile(emperor_plot_out):
emperor_plot, = emperor.visualizers.plot(
pcoa=genome_tsne,
metadata=q2.Metadata(sra_meta_smp_df[cols])
)
emperor_plot.save(emperor_plot_out)
else:
print(f'Emperor plot artifact exists and will be read from {emperor_plot_out}.')
emperor_plot = q2.Visualization.load(emperor_plot_out)
emperor_plot
###Output
_____no_output_____
###Markdown
We can also use the results above to generate our own plots using any of the Python plotting libraries - see below.
###Code
tsne_table = genome_tsne.view(skbio.OrdinationResults)
tsne_df = tsne_table.samples
# switch to inline plotting
%matplotlib inline
# create a 2D plot of Dim1 vs Dim2
sns.set(rc={'figure.figsize':(8, 8), 'font.family': ['Arial']}, style='white')
with sns.plotting_context("notebook", font_scale=1.2):
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel(f'Axis 1')
ax.set_ylabel(f'Axis 2')
sns.scatterplot(
x=tsne_df.iloc[:, 0],
y=tsne_df.iloc[:, 1],
s=70,
hue=sra_meta_smp_df['Nextstrain_clade_grouped'],
ax=ax,
alpha=0.75
)
ax.set_xticks([])
ax.set_yticks([])
plt.tight_layout()
fig.savefig(os.path.join(data_loc, 'sars_cov_2_tsne.eps'))
###Output
_____no_output_____
###Markdown
We can see from the plots above that when the `Nextstrain_clade_grouped` column is used to color the data points, genomes group into distinct clusters corresponding to their variant assignment. The Omicron variant is the most clearly separated, forming a single cluster, next to Alpha and Delta, which each form their own (multiple smaller) clusters. Classify samples using hash signatures
We can also test more quantitatively whether MinHash genome signatures generated by _sourmash_ are predictive of the SARS-CoV-2 genome variant. To do that we can use the `classify_samples_from_dist` method from the `q2-sample-classifier` plugin.
###Code
predictions_out = os.path.join(data_loc, 'predictions.qza')
accuracy_out = os.path.join(data_loc, 'accuracy.qzv')
if not os.path.isfile(predictions_out):
predictions, accuracy, = sample_classifier.pipelines.classify_samples_from_dist(
distance_matrix=hash_compare,
metadata = q2.CategoricalMetadataColumn(sra_meta_smp_df['Nextstrain_clade_grouped']),
k=3,
cv=10,
random_state=random_seed,
n_jobs=n_jobs
)
predictions.save(predictions_out)
accuracy.save(accuracy_out)
else:
print(f'Classification artifacts exist and will be read from {predictions_out} and {accuracy_out}.')
predictions = q2.Artifact.load(predictions_out)
accuracy = q2.Visualization.load(accuracy_out)
accuracy
###Output
_____no_output_____ |
test notebook.ipynb | ###Markdown
###Code
###Output
_____no_output_____ |
notebooks/data_reduction_with_psql.ipynb | ###Markdown
Data Reduction
Goals for this notebook:
- Show a definition of PUMAs for South King County
- Import PSQL searches and load them as DataFrames
- Find the total number of persons we can identify in South King County as OY
Method
We use Pandas to import PSQL searches and load the results as DataFrames. We find count totals with the `sum()` method. Finally, we display the total number of persons we can identify in South King County as OY.
Detailed Steps
Import the necessary packages
###Code
import pandas as pd
from sqlalchemy import create_engine
###Output
_____no_output_____
###Markdown
Create a pointer to the PSQL database
###Code
engine = create_engine("postgresql:///opportunity_youth")
###Output
_____no_output_____
###Markdown
Import a psql table as a pandas DataFrame and display the dataframe. This result is the sum of all people sampled in King County.
###Code
puma_name = pd.read_sql(sql="SELECT * FROM tot_people_in_KC;", con=engine)
puma_name
###Output
_____no_output_____
###Markdown
List all the OY sample results for all the pumas in King County.
###Code
puma_totals = pd.read_sql(sql="SELECT * FROM total_by_puma_f;", con=engine)
puma_totals_df = pd.DataFrame(puma_totals)
puma_totals_df.set_index('puma')
# puma_totals_df = puma_totals_df.style.hide_index()
# puma_totals_df
###Output
_____no_output_____
###Markdown
So, we have data from 16 PUMAs. These are persons
- between ages 16 and 24,
- not enrolled in school,
- unemployed or who have not worked.
The total sample across all PUMAs is:
###Code
puma_totals_df['sum'].sum()
###Output
_____no_output_____
###Markdown
That's ~20,000 OY in King County from our sample. We define South King County by the following six PUMAs:
###Code
#force rightmost column to display wider
pd.options.display.max_rows
pd.set_option('display.max_colwidth', None)  # None shows the full column contents; -1 is deprecated in newer pandas
puma_names_list = pd.read_sql(sql="SELECT * FROM puma_names_finder0;", con=engine)
puma_names_df = pd.DataFrame(puma_names_list)
puma_names_df.set_index('puma')
###Output
_____no_output_____
###Markdown
The weighted sum of persons for each PUMA in South King County is given by:
###Code
puma_oy_totals = pd.read_sql(sql="SELECT * FROM OY_by_puma0;", con=engine)
puma_oy_totals_df = pd.DataFrame(puma_oy_totals)
puma_oy_totals_df.set_index('puma')
###Output
_____no_output_____
###Markdown
Adding the sum column:
###Code
print('In South King county there are ' + str(puma_oy_totals['sum'].sum()) + ' persons we can identify as OY.')
final_df = pd.merge(puma_oy_totals_df, puma_names_df, on = 'puma')
# puma_names_df['puma_name']
final_df.set_index('puma')
###Output
_____no_output_____ |
scaling/scale_TrMassFlux.ipynb | ###Markdown
Tracer mass on shelf
Look at the tracer mass flux $\Phi_{Tr}$, calculated as the transport of water with tracer concentration higher than or equal to a threshold across the shelf and canyon lid, times its concentration. The threshold is the tracer concentration at shelf-break depth.
Scale the tracer mass flux onto the shelf as: $\Phi_{Tr}=\bar{C}\Phi_{HCW}$
where $\bar{C}$ is proportional to $-(Z/H_s)(1-3t/\tau)\,\delta_zC_0\,(H_{sb}+H_h)/2$ (see scaling in scale_TrGradient_within_canyon), and $\Phi_{HCW}$ is the upwelling flux, defined in Howatt and Allen, 2013.
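As a rough sketch of how the two pieces combine (the function and argument names below are mine and purely illustrative; the actual helpers used later in this notebook are `Dh`, `Ro`, `F` and the HA2013 expression):

```python
# Illustrative sketch of the scaling Phi_Tr ~ C_bar * Phi_HCW (names are hypothetical)
def c_bar(Z, Hs, Hh, t, tau, dC0_dz):
    """Concentration scale of upwelled water: -(Z/Hs) * (1 - 3t/tau) * dC0_dz * (Hs + Hh) / 2."""
    return -(Z / Hs) * (1.0 - 3.0 * t / tau) * dC0_dz * (Hs + Hh) / 2.0

def phi_tr(c_bar_value, phi_hcw):
    """Tracer mass flux onto the shelf: mean concentration times upwelling flux."""
    return c_bar_value * phi_hcw
```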
###Code
#%load_ext autoreload
#%autoreload 2
import matplotlib.pyplot as plt
%matplotlib inline
from netCDF4 import Dataset
import numpy as np
import pandas as pd
import scipy.stats
import seaborn as sns
import canyon_tools.metrics_tools as mtt
import canyon_tools.readout_tools as rout
def Tracer_AlongShelf(Tr,TrAdv,MaskC,rA,hFacC,drF,yin,zfin,xi,yi,nzlim):
'''
INPUT----------------------------------------------------------------------------------------------------------------
Tr : Array with concentration values for a tracer. Until this function is more general, size 19x90x360x360
TrAdv : Array with concentration values for low diffusivity tracer. Until this function is more general, size 19x90x360x360
MaskC : Land mask for tracer
nzlim : The nz index under which to look for water properties
rA : Area of cell faces at C points (360x360)
fFacC : Fraction of open cell (90x360x360)
drF : Distance between cell faces (90)
yin : across-shore index of shelf break
zfin : shelf break index + 1
xi : initial profile x index
yi : initial profile y index
OUTPUT----------------------------------------------------------------------------------------------------------------
TrMass = Array with the mass of tracer over the shelf in HCW [t,360] at every time output.
Total_Tracer = Array with the mass of tracer (m^3*[C]*l/m^3) at each x-position over the shelf [t,360] at
every time output.
-----------------------------------------------------------------------------------------------------------------------
'''
maskExp = mtt.maskExpand(MaskC,TrAdv)
TrMask=np.ma.array(TrAdv,mask=maskExp)
trlim = TrMask[0,nzlim,yi,xi]
print('tracer limit concentration is: ',trlim)
# mask cells with tracer concentration < trlim on shelf
HighConc_Masked = np.ma.masked_less(TrMask[:,:zfin,yin:,:], trlim)
HighConc_Mask = HighConc_Masked.mask
#Get volume of water of cells with relatively high concentration
rA_exp = np.expand_dims(rA[yin:,:],0)
drF_exp = np.expand_dims(np.expand_dims(drF[:zfin],1),1)
rA_exp = rA_exp + np.zeros(hFacC[:zfin,yin:,:].shape)
drF_exp = drF_exp + np.zeros(hFacC[:zfin,yin:,:].shape)
ShelfVolume = hFacC[:zfin,yin:,:]*drF_exp*rA_exp
ShelfVolume_exp = np.expand_dims(ShelfVolume,0)
ShelfVolume_exp = ShelfVolume_exp + np.zeros(HighConc_Mask.shape)
HighConc_CellVol = np.ma.masked_array(ShelfVolume_exp,mask = HighConc_Mask)
TrConc_HCW = np.ma.masked_array(Tr[:,:zfin,yin:,:],mask = HighConc_Mask)
MassTrHighConc =np.ma.sum(np.ma.sum(np.sum(HighConc_CellVol*TrConc_HCW*1000,axis = 1),axis=1),axis=1)
#Get total mass of tracer on shelf
Total_Tracer = np.ma.sum(np.ma.sum(np.sum(ShelfVolume_exp*TrMask[:,:zfin,yin:,:]*1000.0,axis = 1),axis=1),axis=1)
# 1 m^3 = 1000 l
return (MassTrHighConc, Total_Tracer)
# Set appearance options seaborn
sns.set_style('white')
sns.set_context('notebook')
CanyonGrid='/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/gridGlob.nc'
CanyonGridOut = Dataset(CanyonGrid)
CanyonGridNoC='/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run42/gridGlob.nc'
CanyonGridOutNoC = Dataset(CanyonGridNoC)
CanyonState='/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/stateGlob.nc'
CanyonStateOut = Dataset(CanyonState)
# Grid variables
nx = 360
ny = 360
nz = 90
nt = 19 # t dimension size
xc = rout.getField(CanyonGrid, 'XC') # x coords tracer cells
yc = rout.getField(CanyonGrid, 'YC') # y coords tracer cells
rc = CanyonGridOut.variables['RC']
dxg = rout.getField(CanyonGrid, 'dxG') # x coords tracer cells
dyg = rout.getField(CanyonGrid, 'dyG') # y coords tracer cells
bathy = rout.getField(CanyonGrid, 'Depth')
hFacC = rout.getField(CanyonGrid, 'HFacC')
MaskC = rout.getMask(CanyonGrid, 'HFacC')
bathyNoC = rout.getField(CanyonGridNoC, 'Depth')
hFacCNoC = rout.getField(CanyonGridNoC, 'HFacC')
MaskCNoC = rout.getMask(CanyonGridNoC, 'HFacC')
rA = rout.getField(CanyonGrid, 'rA')
z = CanyonStateOut.variables['Z']
drF = CanyonGridOut.variables['drF']
time = CanyonStateOut.variables['T']
import os
import sys
lib_path = os.path.abspath('../PythonScripts/Paper1Figures/') # Add absolute path to my python scripts
sys.path.append(lib_path)
import canyon_records
import nocanyon_records
records = canyon_records.main()
recordsNoC = nocanyon_records.main()
# Constants and scales
L = 6400.0 # canyon length
R = 5000.0 # Upstream radius of curvature
g = 9.81 # accel. gravity
Wsb = 13000 # Width at shelf break
Hs = 150.0 # Shelf break depth
Hh = 132.0 # Not real head depth
s = 0.005 # shelf slope
W = 8300 # mid-length width
# NOTE: The default values of all functions correspond to the base case
def Dh(f=9.66E-4,L=6400.0,N=5.5E-3):
'''Vertical scale Dh'''
return((f*L)/(N))
def Ro(U=0.37,f=9.66E-4,R=5000.0):
'''Rossby number using radius of curvature as length scale'''
return(U/(f*R))
def F(Ro):
'''Function that estimates the ability of the flow to follow isobaths'''
return(Ro/(0.9+Ro))
def Bu(N=5.5E-3,f=9.66E-5,L=6400.0,Hs=150.0):
'''Burger number'''
return(N*Hs/(f*L))
def RossbyRad(N=5.5E-3,Hs=150.0,f=9.66E-4):
'''1st Rossby radius of deformation'''
return(N*Hs/f)
def Phi(U=0.37,f=9.66E-5,L=6400,R=5000.0,Wsb=13000,N=0.0055):
''' flux of upwelling as in Allen and Hickey 2010 , with expected coef of 1/4'''
f2 = (0.9**(1.5))*((Ro(U,f,R))/(1+(Ro(U,f,R)/0.9)))**(1.5)
f3 = Ro(U,f,L)**(0.5)
return(f2*f3)
# Save the tracer mass within HCW on the shelf at each output time into each class record (canyon runs).
for record in records:
filename=('/ocean/kramosmu/MITgcm/TracerExperiments/%s/%s/ptracersGlob.nc' %(record.exp_code,record.run_num))
Tr1 = rout.getField(filename,'Tr1')
Tr2 = rout.getField(filename,'Tr2')
record.TrMass, TotTrMass = Tracer_AlongShelf(Tr1,Tr2, MaskCNoC, rA, hFacCNoC, drF[:], 227, 30, 180, 50,29)
# Save the tracer mass within HCW on the shelf at each output time into each class record (no-canyon runs).
for record in recordsNoC:
filename=('/ocean/kramosmu/MITgcm/TracerExperiments/%s/%s/ptracersGlob.nc' %(record.exp_code,record.run_num))
Tr1 = rout.getField(filename,'Tr1')
Tr2 = rout.getField(filename,'Tr2')
record.TrMass, TotTrMass = Tracer_AlongShelf(Tr1,Tr2, MaskCNoC, rA, hFacCNoC, drF[:], 227, 30, 180, 50,29)
###Output
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
tracer limit concentration is: 7.21757
###Markdown
Tracer mass of HCW on shelf
###Code
# Choose only the runs that satisfy all restrictions in Allen and Hickey (2010)
fig,ax = plt.subplots(1,1,figsize=(8,6))
labels=[]
for rec,recNoC in zip(records,recordsNoC):
plt1 = ax.plot(time[:]/(3600*24),((rec.TrMass)-(recNoC.TrMass))/TotTrMass,
marker = rec.mstyle,
markersize = rec.msize,
color = sns.xkcd_rgb[rec.color],
label=rec.label)
ax.set_title('Tracer mass upwelled onto the shelf')
ax.set_ylabel('Tracer Mass in pool/ Initial tracer mass on shelf ')
ax.set_xlabel('Days')
ax.legend(bbox_to_anchor=(1.3,1))
plt.show()
fig,ax = plt.subplots(1,1,figsize=(8,6))
labels=[]
for rec in records:
plt1 = ax.plot(time[:]/(3600*24),rec.TrMass/TotTrMass,
marker = rec.mstyle,
markersize = rec.msize,
color = sns.xkcd_rgb[rec.color],
label=rec.label)
ax.set_title('Tracer Mass in HCW: Canyon case')
ax.set_ylabel('Tracer Mass HCW/ Initial tracer mass')
ax.set_xlabel('Days')
ax.legend(bbox_to_anchor=(1.3,1))
plt.show()
fig,ax = plt.subplots(1,1,figsize=(8,6))
labels=[]
for recNoC in recordsNoC:
plt1 = ax.plot(time[:]/(3600*24),recNoC.TrMass/TotTrMass,
marker = recNoC.mstyle,
markersize = recNoC.msize,
color = sns.xkcd_rgb[recNoC.color],
label=recNoC.label)
ax.set_title('Tracer Mass in HCW: No canyon case')
ax.set_ylabel('Tracer Mass HCW/ Initial tracer mass')
ax.set_xlabel('Days')
ax.legend(bbox_to_anchor=(1.3,1))
plt.show()
###Output
_____no_output_____
###Markdown
Tracer mass flux compared to $\bar{C}\Phi_{HA2013}$
Upwelling flux from Howatt and Allen (2013) is: $\frac{\Phi_{AH}}{UWD_h}= 0.91\mathcal{F}_w^{3/2}R_L^{1/2}(1-1.21S_E)^3+0.07$, where $S_E=\frac{sN}{f(\mathcal{F}_w/R_L)^{1/2}}$,
while $\bar C$ is proportional to $-(Z/H_s)(1-3t/\tau)\,\delta_zC_0\,(H_{sb}+H_h)/2$, where $\delta_zC_0=-0.035983276367187497$.
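A minimal sketch of this flux scale, written as a helper on top of the `F`, `Ro` and `Dh` functions defined earlier in this notebook (the function name and default arguments are mine, not part of the original code):

```python
# Hedged sketch of the Howatt & Allen (2013) upwelling-flux scale used in the cells below
def phi_HA2013(U, f, N, W=8300.0, L=6400.0, s=0.005):
    """Phi_HA2013 = U*W*Dh * (0.91 * Fw^(3/2) * RL^(1/2) * (1 - 1.21*Se)^3 + 0.07)."""
    Fw = F(Ro(U, f, W))                    # ability of the flow to follow isobaths at mid-length width
    RL = Ro(U, f, L)                       # Rossby number based on canyon length
    Se = (s * N) / (f * (Fw / RL)**0.5)    # slope effect parameter
    return U * W * Dh(f, L, N) * (0.91 * Fw**1.5 * RL**0.5 * (1 - 1.21 * Se)**3 + 0.07)
```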
###Code
fig,ax = plt.subplots(1,1,figsize=(8,6))
labels=[]
t=4 # days
Hh=132.0
for rec,recNoC in zip(records,recordsNoC):
Z = ((rec.f*rec.u*F(Ro(rec.u,rec.f,R))*L)**(0.5))/rec.N
TauNo = 1-(3*t*24*3600*rec.kv/((Z**2)))
Se = (s*rec.N)/(rec.f*((F(Ro(rec.u,rec.f,W))/Ro(rec.u,rec.f,L))**(1/2)))
HA2013=(rec.u*W*Dh(rec.f,L,rec.N))*((0.91*(F(Ro(rec.u,rec.f,W))**(3/2))*(Ro(rec.u,rec.f,L)**(1/2))*((1-1.21*Se)**3))+0.07)
plt1 = ax.plot(1000*HA2013*((-(Z/Hs)*(TauNo))*(Hs+Hh)*-0.03598)/2.0,
(((rec.TrMass[8])-(rec.TrMass[6]))/(time[8]-time[6])),
marker = rec.mstyle,
markersize = rec.msize,
color = sns.xkcd_rgb[rec.color],
label=rec.label)
ax.set_title('Tracer Mass upwelled (canyon case) scaled as $C \Phi_{HA2013}$')
ax.set_xlabel('$C_{can} \Phi_{HA2013}$ (Mol/s)')
ax.set_ylabel('Tr mass flux (Mol/s)')
ax.legend(bbox_to_anchor=(1.3,1))
plt.show()
fig,ax = plt.subplots(1,1,figsize=(8,6))
labels=[]
t=4 # days
Hh=132.0
for rec,recNoC in zip(records,recordsNoC):
Z = ((rec.f*rec.u*F(Ro(rec.u,rec.f,R))*L)**(0.5))/rec.N
TauNo = 1-((3*t*24*3600*rec.kv)/(Z**2))
Se = (s*rec.N)/(rec.f*((F(Ro(rec.u,rec.f,W))/Ro(rec.u,rec.f,L))**(1/2)))
HA2013=(rec.u*W*Dh(rec.f,L,rec.N))*((0.91*(F(Ro(rec.u,rec.f,W))**(3/2))*(Ro(rec.u,rec.f,L)**(1/2))*((1-1.21*Se)**3))+0.07)
plt1 = ax.plot(1000*HA2013*((-(Z/Hs)*(TauNo))*(Hs+Hh)*-0.03598)/2.0,
(((rec.TrMass[8]-recNoC.TrMass[8])-(rec.TrMass[6]-recNoC.TrMass[6]))/(time[8]-time[6])),
marker = rec.mstyle,
markersize = rec.msize,
color = sns.xkcd_rgb[rec.color],
label=rec.label)
ax.set_title('Tracer Mass upwelled scaled as $C \Phi_{HA2013}$')
ax.set_xlabel('$C_{can} \Phi_{HA2013}$ (Mol/s)')
ax.set_ylabel('Tr mass flux (Mol/s)')
ax.legend(bbox_to_anchor=(1.3,1))
plt.show()
###Output
_____no_output_____
###Markdown
Using corrected stratification (kv) to calculate $\Phi$
###Code
fig,ax = plt.subplots(1,1,figsize=(8,6))
labels=[]
t=4 # days
Hh=132.0
for rec,recNoC in zip(records,recordsNoC):
Z = ((rec.f*rec.u*F(Ro(rec.u,rec.f,R))*L)**(0.5))/rec.N
TauNo = 1-((4*t*24*3600*rec.kv)/(Z**2))
rec.Z=Z
rec.TauNo=TauNo
Napprox=((Z/Hs)*(TauNo))*rec.N
Se = (s*Napprox)/(rec.f*((F(Ro(rec.u,rec.f,W))/Ro(rec.u,rec.f,L))**(1/2)))
HA2013=(rec.u*W*Dh(rec.f,L,Napprox))*((0.91*(F(Ro(rec.u,rec.f,W))**(3/2))*(Ro(rec.u,rec.f,L)**(1/2))*((1-1.21*Se)**3))+0.07)
rec.HA2013=HA2013
plt1 = ax.plot(1000*HA2013*((-(Z/Hs)*(TauNo))*(Hs+Hh)*-0.03598)/2.0,
(((rec.TrMass[8]-recNoC.TrMass[8])-(rec.TrMass[6]-recNoC.TrMass[6]))/(time[8]-time[6])),
marker = rec.mstyle,
markersize = rec.msize,
color = sns.xkcd_rgb[rec.color],
label=rec.label)
ax.set_title('Tracer Mass upwelled scaled as $C \Phi_{HA2013}$, using N approx')
ax.set_xlabel('$C_{can} \Phi_{HA2013}$ (Mol/s)')
ax.set_ylabel('Tr mass flux (Mol/s)')
ax.legend(bbox_to_anchor=(1.3,1))
#Linear fit
maxN_array_Kv = np.array([(((rec.TrMass[8]-recNoC.TrMass[8])-(rec.TrMass[6]-recNoC.TrMass[6]))/(time[8]-time[6]))
for rec,recNoC in zip(records,recordsNoC)])
tilt_array_Kv = np.array([1000*rec.HA2013*((-(rec.Z/Hs)*(rec.TauNo))*(Hs+Hh)*-0.03598)/2.0
for rec,recNoC in zip(records,recordsNoC)])
x_fit = np.linspace(1E8, 3.5E8, 50)
slope_Kv, intercept_Kv, r_value_Kv, p_value_Kv, std_err_Kv = scipy.stats.linregress(tilt_array_Kv,maxN_array_Kv)
plt3 = ax.plot(x_fit,slope_Kv*x_fit+intercept_Kv,'-k')
mean_sq_err_Kv = np.mean((maxN_array_Kv-(slope_Kv*tilt_array_Kv+intercept_Kv))**2)
upper_bound = ax.plot(x_fit,slope_Kv*x_fit+intercept_Kv+(mean_sq_err_Kv)**(0.5),linestyle = '--',color='0.5')
lower_bound = ax.plot(x_fit,slope_Kv*x_fit+intercept_Kv-(mean_sq_err_Kv)**(0.5),linestyle = '--',color='0.5')
ax.legend(bbox_to_anchor=(1.4,1))
plt.show()
print(slope_Kv,intercept_Kv)
fig,ax = plt.subplots(1,1,figsize=(8,6))
labels=[]
t=4 # days
Hh=132.0
for rec,recNoC in zip(records,recordsNoC):
Z = ((rec.f*rec.u*F(Ro(rec.u,rec.f,R))*L)**(0.5))/rec.N
TauNo = 1-((4*t*24*3600*rec.kv)/(Z**2))
rec.Z=Z
rec.TauNo=TauNo
Napprox=((Z/Hs)*(TauNo))*rec.N
Se = (s*Napprox)/(rec.f*((F(Ro(rec.u,rec.f,W))/Ro(rec.u,rec.f,L))**(1/2)))
HA2013=(rec.u*W*Dh(rec.f,L,Napprox))*((0.91*(F(Ro(rec.u,rec.f,W))**(3/2))*(Ro(rec.u,rec.f,L)**(1/2))*((1-1.21*Se)**3))+0.07)
rec.HA2013=HA2013
plt1 = ax.plot((1.19*1000*HA2013*((-(Z/Hs)*(TauNo))*(Hs+Hh)*(-0.03598))/2.0)-100499027.028,
(((rec.TrMass[8]-recNoC.TrMass[8])-(rec.TrMass[6]-recNoC.TrMass[6]))/(time[8]-time[6])),
marker = rec.mstyle,
markersize = rec.msize,
color = sns.xkcd_rgb[rec.color],
label=rec.label)
ax.plot(np.linspace(0.5E8,3.0E8,50),np.linspace(0.5E8,3.0E8,50),'k-')
ax.set_title('Tracer Mass upwelled scaled as $C \Phi_{HA2013}$, using N approx')
ax.set_xlabel('$1.19C_{can} \Phi_{HA2013}-10^{-8}$ (Mol/s)')
ax.set_ylabel('Tr mass flux (Mol/s)')
ax.legend(bbox_to_anchor=(1.3,1))
plt.show()
flux_file43 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run43/FluxTR01Glob.nc'
flux_file38 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/FluxTR01Glob.nc'
flux_file37 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run37/FluxTR01Glob.nc'
flux_file36 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run36/FluxTR01Glob.nc'
flux_fileN63 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run45/FluxTR01Glob.nc'
flux_fileN74 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run73/FluxTR01Glob.nc'
flux_fileN45 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run75/FluxTR01Glob.nc'
flux_filef10 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run67/FluxTR01Glob.nc'
flux_filef76 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run51/FluxTR01Glob.nc'
flux_filef86 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run69/FluxTR01Glob.nc'
flux_filef64 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run71/FluxTR01Glob.nc'
flux_file3D04 = '/ocean/kramosmu/MITgcm/TracerExperiments/3DVISC/run01/FluxTR01Glob.nc'
flux_file3D05 = '/ocean/kramosmu/MITgcm/TracerExperiments/3DVISC/run02/FluxTR01Glob.nc'
flux_file3D06 = '/ocean/kramosmu/MITgcm/TracerExperiments/3DVISC/run03/FluxTR01Glob.nc'
flux_file3D07 = '/ocean/kramosmu/MITgcm/TracerExperiments/3DVISC/run04/FluxTR01Glob.nc'
flux_fileU26 = '/ocean/kramosmu/MITgcm/TracerExperiments/LOW_BF/run01/FluxTR01Glob.nc'
flux_fileU32 = '/ocean/kramosmu/MITgcm/TracerExperiments/LOWER_BF/run01/FluxTR01Glob.nc'
import xarray as xr
grid = xr.open_dataset(CanyonGrid)
FluxFiles =[flux_file43 ,
flux_file38,
flux_file37,
flux_file36,
flux_fileN63,
flux_fileN74,
flux_fileN45,
flux_filef10,
flux_filef76,
flux_filef86,
flux_filef64,
flux_file3D04,
flux_file3D05,
flux_file3D06,
flux_file3D07 ,
flux_fileU26,
flux_fileU32]
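# For each run: area-integrate the vertical tracer flux diagnostic (WTRAC01) at model level 29
# over the shelf box (X 120:240, Y 229:267) and average over output steps 6-8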
for file,rec in zip(FluxFiles,records):
dataset = xr.open_dataset(file)
TotTr01 = (dataset.WTRAC01.isel(Zmd000090=29, X= slice(120,240), Y=slice(229,267)))
rAexp = np.expand_dims(grid.rA.isel(X= slice(120,240), Y=slice(229,267)),0)
rAexp = np.zeros(np.shape(TotTr01)) + rAexp
totalV = ((TotTr01 * rAexp).sum(dim='X')).sum(dim='Y')
rec.TotTrFlux=(totalV).isel(T=slice(6,9)).mean(dim='T')
fig,ax = plt.subplots(1,1,figsize=(8,6))
labels=[]
t=4 # days
Hh=132.0
for rec,recNoC in zip(records,recordsNoC):
plt1 = ax.plot(rec.TotTrFlux*1000,
(((rec.TrMass[8]-recNoC.TrMass[8])-(rec.TrMass[6]-recNoC.TrMass[6]))/(time[8]-time[6])),
marker = rec.mstyle,
markersize = rec.msize,
color = sns.xkcd_rgb[rec.color],
label=rec.label)
ax.plot(np.linspace(1E8,3E8,100),np.linspace(1E8,3E8,100),'k-')
ax.set_title('Tracer Mass flux vs tracer mass flux from diagnostics')
ax.set_xlabel(' Tracer mass flux from diagnostics (Mol/s)')
ax.set_ylabel('Tr mass flux (Mol/s)')
ax.legend(bbox_to_anchor=(1.3,1))
plt.show()
fig,ax = plt.subplots(1,1,figsize=(8,6))
labels=[]
t=4 # days
Hh=132.0
for rec,recNoC in zip(records,recordsNoC):
Z = ((rec.f*rec.u*F(Ro(rec.u,rec.f,R))*L)**(0.5))/rec.N
TauNo = 1-((4*t*24*3600*rec.kv)/(Z**2))
Napprox=((Z/Hs)*(TauNo))*rec.N
Se = (s*Napprox)/(rec.f*((F(Ro(rec.u,rec.f,W))/Ro(rec.u,rec.f,L))**(1/2)))
HA2013=(rec.u*W*Dh(rec.f,L,Napprox))*((0.91*(F(Ro(rec.u,rec.f,W))**(3/2))*(Ro(rec.u,rec.f,L)**(1/2))*((1-1.21*Se)**3))+0.07)
rec.HA2013=HA2013
plt1 = ax.plot(1000*HA2013*((-(Z/Hs)*(TauNo))*(Hs+Hh)*-0.03598)/2.0,
(rec.TotTrFlux*1000),
marker = rec.mstyle,
markersize = rec.msize,
color = sns.xkcd_rgb[rec.color],
label=rec.label)
ax.set_title('Tracer Mass upwelled scaled as $C \Phi_{HA2013}$, using N approx')
ax.set_xlabel('$C_{can} \Phi_{HA2013}$ (Mol/s)')
ax.set_ylabel('Tr mass flux diagnostics (Mol/s)')
ax.legend(bbox_to_anchor=(1.3,1))
#Linear fit
maxN_array_Kv = np.array([rec.TotTrFlux*1000 for rec,recNoC in zip(records,recordsNoC)])
tilt_array_Kv = np.array([1000*rec.HA2013*((-(rec.Z/Hs)*(rec.TauNo))*(Hs+Hh)*-0.03598)/2.0
for rec,recNoC in zip(records,recordsNoC)])
x_fit = np.linspace(1E8, 3.5E8, 50)
slope_Kv, intercept_Kv, r_value_Kv, p_value_Kv, std_err_Kv = scipy.stats.linregress(tilt_array_Kv,maxN_array_Kv)
plt3 = ax.plot(x_fit,slope_Kv*x_fit+intercept_Kv,'-k')
mean_sq_err_Kv = np.mean((maxN_array_Kv-(slope_Kv*tilt_array_Kv+intercept_Kv))**2)
upper_bound = ax.plot(x_fit,slope_Kv*x_fit+intercept_Kv+(mean_sq_err_Kv)**(0.5),linestyle = '--',color='0.5')
lower_bound = ax.plot(x_fit,slope_Kv*x_fit+intercept_Kv-(mean_sq_err_Kv)**(0.5),linestyle = '--',color='0.5')
ax.legend(bbox_to_anchor=(1.4,1))
plt.show()
###Output
_____no_output_____ |
gravity/7 - RadialVelocity.ipynb | ###Markdown
Radial Velocity Curves
Contents
1 The 2-body problem
1.1 Coordinates
1.2 Stepping
2 Defining the simulation
3 Interactive plotting
###Code
%matplotlib inline
import time
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 16
from ipywidgets import interact, Layout
import ipywidgets as w
from astropy import units as u
from astropy.constants import G
###Output
_____no_output_____
###Markdown
The 2-body problem
For a single-planet system, the star and planet are in elliptical orbits sharing a common focus at the center of mass. At any time, the line joining the two bodies must pass through this common focus.
Our calculations will assume that a system of two bodies in mutual orbit can be treated as mathematically equivalent to an object with reduced mass $\mu$ in orbit around a _stationary_ mass $M$ corresponding to the combined center of mass of the bodies. This gives us a 1-body problem which is relatively easy to model. Once we have the radius $r$ and angle $\theta$ for a point in the CoM reference frame, it is simple arithmetic to get the positions of each body.
From Kepler's Third Law, the semi-major axis of the binary system is $$a = \sqrt[3]{\frac{P^2 G (m_1+m_2)}{4 \pi^2}}$$
The reduced mass is: $$\mu = \frac{m_1 m_2}{m_1 + m_2}$$
The individual bodies have semi-major axes about the center of mass: $$a_1 = \frac{\mu}{m_1} a \qquad a_2 = \frac{\mu}{m_2} a$$
For convenience, we define $M = m_1+m_2$.
Coordinates
We are ignoring inclination of the orbital plane, as this merely scales the velocities without changing anything interesting in the simulation. This reduces it to a 2-D problem with the orbit in a plane that includes our line of sight.
We need to take two angles into account: the planet's position on the orbit, $\theta$, and the angle between pericenter and our line of sight, $\varpi$ (called varpi in the code and plots).
The sim always starts at pericenter, taken as $\theta=0$.
Stepping
In the simulation, we need to calculate the angular step $d\theta$ for each time step $dt$. This is Kepler II, a manifestation of angular momentum conservation.
The angular momentum is $$ L = \mu \sqrt{G M a (1-e^2)} $$
and $$ \frac{dA}{dt} = \frac{1}{2} r^2 \frac{d \theta}{dt} = \frac{1}{2} \frac{L}{\mu} \quad \Rightarrow \quad d \theta = \frac{dA}{dt} \frac{2}{r^2} dt $$
By Kepler II, $\frac{dA}{dt}$ is constant and $d \theta \sim r^{-2}$: varying around the orbit for non-zero eccentricities. Defining the simulation
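Before writing the simulation loop, it can help to sanity-check the relations above in isolation. The snippet below is a standalone illustration; the masses and period are arbitrary example values, not defaults used by the simulation.

```python
# Illustrative check of Kepler III, the reduced mass, and the individual semi-major axes
import numpy as np
from astropy import units as u
from astropy.constants import G

m_star = 1.0 * u.M_sun
m_p = 1.0 * u.M_jup
P = 20.0 * u.day

M = m_star + m_p
a = ((P**2 * G * M) / (4 * np.pi**2))**(1/3)   # Kepler III
mu = (m_star * m_p) / M                        # reduced mass
a1 = (mu / m_star) * a                         # star's semi-major axis about the CoM
a2 = (mu / m_p) * a                            # planet's semi-major axis about the CoM
print(a.to(u.AU), a1.to(u.AU), a2.to(u.AU))
```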
###Code
def runSim(m_p, P, e, varpi, m_star):
"Main orbit loop. Parameters should have appropriate units."
# initialize parameters
M = m_star + m_p
a = ((P**2 * G * M)/(4*np.pi**2))**(1/3)
mu = (m_star * m_p)/M
N = 500 # steps around the orbit
t = 0 * u.s
dt = P/N # time step
theta = 0 * u.rad
# calculate angular momentum about center of mass
L_ang = mu * np.sqrt(G * M * a * (1 - e**2))
# dA/dt, for calculating dtheta at each time step
dAdt = L_ang / (2 * mu)
# initialize output arrays,
t_P = np.zeros(N) # t/P, fraction of the orbit [dimensionless]
v_r_star = np.zeros(N) * u.m/u.s # radial velocity, star
# run the time-step loop
for step in range(N):
# Calculate orbit parameters in the CoM reference frame,
# then translate to individual body positions
# position
r = a*(1 - e**2)/(1 + e*np.cos(theta))
# velocity
v = np.sqrt(G * M * (2/r - 1/a))
# radial velocity for mu (along our line of sight)
vr = -v * np.sin(theta + varpi)
# radial velocity for star
v1r = mu/m_star * vr
# store the results
t_P[step] = t/P
v_r_star[step] = v1r
# prepare for next step
dtheta = 2 * dAdt/r**2 * dt * u.rad
theta += dtheta
t += dt
return t_P, v_r_star
###Output
_____no_output_____
###Markdown
The sim only runs for a single orbit, but the results are duplicated for plotting 2 orbits. Scientifically meaningless but visually helpful.
###Code
plt.rcParams['font.size'] = 16
def plotRV(m_p, m_p_unit, P, P_unit, e, varpi, m_star):
# add units
if m_p_unit=='M_earth':
m_p *= u.M_earth
else:
m_p *= u.M_jup
m_star *= u.M_sun
if P_unit=='day':
P *= u.day
else:
P *= u.year
varpi *= u.deg
# run the simulation
t_P, v_r_star = runSim(m_p, P, e, varpi, m_star)
# plotting
x = np.hstack( (t_P, 1+t_P) )
y = np.hstack( (v_r_star, v_r_star) )
plt.figure(figsize=(15, 5))
plt.plot(x, y)
plt.xlabel('t/P')
plt.ylabel('RV (m/s)')
plt.title(f"Planet: {m_p}, star: {m_star}, P: {P}, e: {e}, varpi: {varpi}");
###Output
_____no_output_____
###Markdown
Interactive plotting
The layout is still ugly at this stage: it would be good to have unit dropdowns alongside the associated sliders, but that's not very easy with widgets. It's on the TODO list...
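One possible direction for that TODO, sketched here as an untested idea rather than a working replacement: pair each slider with its unit selector in an `HBox` and wire the controls up with `interactive_output` instead of `interact`. The widget names below are hypothetical.

```python
# Hypothetical layout sketch: put a slider and its unit selector on one row
import ipywidgets as w
from IPython.display import display

mass_slider = w.FloatSlider(description="planet mass", min=0.1, max=10.0, value=1)
mass_unit = w.RadioButtons(options=['M_Earth', 'M_Jup'], layout=w.Layout(width='140px'))
row = w.HBox([mass_slider, mass_unit])
display(row)
# The rows would then be stacked in a VBox and connected to plotRV via w.interactive_output.
```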
###Code
style = {'description_width': 'initial'} # to avoid the labels getting truncated
interact(plotRV,
m_star = w.FloatSlider(description="star mass ($M_{\odot}$)", style=style,
layout=Layout(width='80%'),
continuous_update=False,
min=0.1, max=10.0, value=1),
m_p = w.FloatSlider(description="planet mass", style=style,
layout=Layout(width='80%'),
continuous_update=False,
min=0.1, max=10.0, value=1),
m_p_unit = w.RadioButtons(description="Planet unit",
options=['M_Earth', 'M_Jup']),
P = w.FloatSlider(description="Orbit period", style=style,
layout=Layout(width='80%'),
continuous_update=False,
min=1.0, max=50.0, step=0.1, value=20),
P_unit = w.RadioButtons(description="Period unit",
options=['day', 'year']),
e = w.FloatSlider(description="eccentricity", style=style,
layout=Layout(width='80%'),
continuous_update=False,
min=0.0, max=0.9, step=0.01, value=0),
varpi = w.FloatSlider(description="varpi (deg)", style=style,
layout=Layout(width='80%'),
continuous_update=False,
min=-90.0, max=90.0, value=0) );
###Output
_____no_output_____ |
vector/exercise/verifica.ipynb | ###Markdown
Exercise
**Participants**
* NAME SURNAME
* NAME SURNAME
**Submission instructions**
1. Click the "Share" button at the top right
2. In the window that opens, click the button at the bottom: "Change to anyone with the link"
3. Click the "Copy link" button
4. Make sure the link works, for example by pasting it into an incognito browser window
5. Paste the link into the Teams assignment and submit
Vettore
###Code
%%writefile vettore.hpp
// write the C++ code of the `Vettore` class here
###Output
Overwriting vettore.hpp
###Markdown
Vettori
###Code
%%writefile vettori.hpp
// write the C++ code of the `Vettori` class here
###Output
Overwriting vettori.hpp
###Markdown
Main
###Code
%%writefile main.cpp
#include <iostream>
int main() {
    // write the C++ code of the main here to test the `Vettore` and `Vettori` classes
std::cout << "Hello, World!" << std::endl;
return 0;
}
###Output
Overwriting main.cpp
###Markdown
Output
###Code
%%script bash
g++ main.cpp -std=c++11
./a.out
###Output
Hello, World!
###Markdown
Bouncing Ball
Engine
Implementation of the 2D graphics engine for the terminal
###Code
%%writefile engine.hpp
#include <iostream>
#include <vector>
class Engine
{
private:
std::vector<char> M;
int righe;
int colonne;
void init_board();
public:
Engine(int righe = 22, int colonne = 40);
void clear();
void flush();
void display();
void set(int x, int y, char c);
};
/** Create a graphics engine */
Engine::Engine(int righe, int colonne)
{
this->righe = righe;
this->colonne = colonne;
this->M.reserve(righe * colonne);
this->init_board();
}
/** Clear the terminal */
void Engine::clear()
{
std::system("clear");
}
/** Clear the matrix of previously drawn points */
void Engine::flush()
{
init_board();
}
/** Display the matrix on screen */
void Engine::display()
{
for (int i = 0; i < righe; i++)
{
for (int j = 0; j < colonne; j++)
{
std::cout << M[(i * colonne) + j];
}
std::cout << std::endl;
}
}
/** Set cell (x,y) to the character `c` */
void Engine::set(int x, int y, char c)
{
if (x < 0 || y < 0 || x > colonne - 1 || y > righe - 1)
{
return;
}
M[(y * colonne) + x] = c;
}
/** Initialize the board with blank spaces */
void Engine::init_board()
{
for (int i = 0; i < righe * colonne; i++)
{
M[i] = ' ';
}
}
###Output
Overwriting engine.hpp
###Markdown
Main
Use the graphics engine and the vector class to create a simple simulation: a bouncing ball.
###Code
%%writefile main.cpp
#include <iostream>
#include <chrono>
#include <thread>
#include "vettore.hpp"
#include "engine.hpp"
int main()
{
    // create the engine responsible for displaying 2D coordinates
    // with 1 row and 80 columns
Engine engine = Engine(1, 80);
    // create three vectors for the simulation
Vettore pallina = Vettore(70., 0.);
Vettore velocita = Vettore(-1., 0.);
Vettore accelerazione = Vettore(-1., 0.);
Vettore frizione = Vettore(2, 0.9);
    // start the simulation
while (true)
{
engine.clear();
engine.flush();
        // draw the floor
engine.set(0, 0, '|');
        // get the components of the ball
int x_pallina = (int)pallina[0];
int y_pallina = (int)pallina[1];
        // draw the ball
engine.set(x_pallina, y_pallina, '*');
        // display everything on screen
engine.display();
        // wait 200 ms before proceeding (5 frames per second!)
        // we have to wait, otherwise the human eye may not
        // perceive the motion if it runs too fast
std::this_thread::sleep_for(std::chrono::milliseconds(200));
        // update the state of the ball
velocita = velocita + accelerazione;
pallina = pallina + velocita;
        // when the ball touches the ground, make it bounce, i.e.
        // flip its velocity
if (pallina[0] <= 0 && velocita[0] <= 0)
{
Vettore inverti = Vettore(2, -1.0);
            // each time the ball touches the ground it is slowed by some friction
velocita = velocita * inverti * frizione;
            // adjust the position of the ball since it may have
            // gone off the screen
pallina = pallina + velocita;
}
}
}
###Output
Overwriting main.cpp
###Markdown
Demo
Compile the main program
###Code
%%script bash
g++ main.cpp -std=c++11
###Output
main.cpp: In function ‘int main()’:
main.cpp:15:5: error: ‘Vettore’ was not declared in this scope
Vettore pallina = Vettore(70., 0.);
^~~~~~~
main.cpp:16:13: error: expected ‘;’ before ‘velocita’
Vettore velocita = Vettore(-1., 0.);
^~~~~~~~
main.cpp:17:13: error: expected ‘;’ before ‘accelerazione’
Vettore accelerazione = Vettore(-1., 0.);
^~~~~~~~~~~~~
main.cpp:18:13: error: expected ‘;’ before ‘frizione’
Vettore frizione = Vettore(2, 0.9);
^~~~~~~~
main.cpp:30:30: error: ‘pallina’ was not declared in this scope
int x_pallina = (int)pallina[0];
^~~~~~~
main.cpp:30:30: note: suggested alternative: ‘x_pallina’
int x_pallina = (int)pallina[0];
^~~~~~~
x_pallina
main.cpp:45:9: error: ‘velocita’ was not declared in this scope
velocita = velocita + accelerazione;
^~~~~~~~
main.cpp:45:9: note: suggested alternative: ‘alloca’
velocita = velocita + accelerazione;
^~~~~~~~
alloca
main.cpp:45:31: error: ‘accelerazione’ was not declared in this scope
velocita = velocita + accelerazione;
^~~~~~~~~~~~~
main.cpp:52:21: error: expected ‘;’ before ‘inverti’
Vettore inverti = Vettore(2, -1.0);
^~~~~~~
main.cpp:55:35: error: ‘inverti’ was not declared in this scope
velocita = velocita * inverti * frizione;
^~~~~~~
main.cpp:55:45: error: ‘frizione’ was not declared in this scope
velocita = velocita * inverti * frizione;
^~~~~~~~
###Markdown
Run the compiled program. *Note*: to stop the execution (which unfortunately does not update on the same line but always prints a new one), press "Ctrl+M I".
###Code
!./a.out
###Output
_____no_output_____ |
data-science-master/Section-2-Basics-of-Python-Programming/Lec-2.15-Creating-Python-Modules-and-Packages/package-files/mypackage.ipynb | ###Markdown
--- Department of Data Science
Course: Tools and Techniques for Data Science
---
Instructor: Muhammad Arif Butt, Ph.D.
Lecture 2.15 (Part - II) _mypackage.ipynb_
1- What are Python Packages?
2- How to Create a Python Package?
**Let us create a package:**
- First of all, remember a package is a directory and the name of the package is the name of the directory.
- We create our directory with the name "_packageDemo_".
- Inside this we create sub-directories "_package1_" and "_package2_".
- The packageDemo, package1, and package2 directories each need to contain a file with the name `__init__.py`. To keep things simple, we leave these files empty.
- Now we can put all of the Python files which will be the modules into these directories. We create 3 different modules named arithmetic, comparison and bitcompare, containing our function definitions.
- We put the arithmetic module in package1 (sub-package), and place the comparison and bitcompare modules in package2 (sub-package).
Hierarchy:
- **packageDemo**
  - `__init__.py`
  - **package1**
    - `__init__.py`
    - `arithmetic.py`
  - **package2**
    - `__init__.py`
    - `comparison.py`
    - `bitcompare.py`
packageDemo/package1/arithmetic.py
```
def myadd(a, b):
    return a + b

def mysub(a, b):
    return a - b

def mymul(a, b):
    return a * b

def mydiv(a, b):
    if b == 0:
        return 'Not Possible'
    else:
        return a / b

def mypow(a, b):
    return a ** b
```
packageDemo/package2/bitcompare.py
```
def bitand(a, b):
    print('a & b is', a & b)

def bitor(a, b):
    print('a | b is', a | b)

def bitnot(a):
    print('~a is', ~a)

def bitxor(a, b):
    print('a ^ b is', a ^ b)

def bitright(a, b):
    print('a >> b is', a >> b)

def bitleft(a, b):
    print('a << b is', a << b)
```
packageDemo/package2/comparison.py
```
def comparison(x, y):
    print("Comparing both operands where x is ", x, "and y is ", y)
    if (x > y):
        print(" x > y is: ", x > y)
    elif (x < y):
        print(" x < y is: ", x < y)
    else:
        print(" x == y is: ", x == y)
```
3. How to import and use modules from the Package
**Using functions inside the arithmetic module**
###Code
import packageDemo.package1.arithmetic
packageDemo.package1.arithmetic.mymul(12, 15)
import packageDemo.package1.arithmetic as abc
abc.mypow(2,10)
from packageDemo.package1 import arithmetic
arithmetic.myadd(5,3)
from packageDemo.package1.arithmetic import *
myadd(3,9)
print(dir())
###Output
['In', 'Out', '_', '_2', '_4', '_6', '_8', '__', '___', '__builtin__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', '_dh', '_i', '_i1', '_i2', '_i3', '_i4', '_i5', '_i6', '_i7', '_i8', '_i9', '_ih', '_ii', '_iii', '_oh', 'abc', 'arithmetic', 'exit', 'get_ipython', 'myadd', 'mydiv', 'mymul', 'mypow', 'mysub', 'packageDemo', 'quit']
###Markdown
**Using functions inside the comparison module** **Using functions inside the bitcompare module** **Using functions of all the modules**
###Code
from packageDemo.package1 import arithmetic as arith
from packageDemo.package2 import bitcompare as bits
from packageDemo.package2 import comparison as comp
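# Example calls (a minimal sketch based on the function definitions shown above):
arith.mymul(6, 7)      # arithmetic module: returns 42
comp.comparison(3, 7)  # comparison module: prints that x < y is True
bits.bitxor(6, 3)      # bitcompare module: prints 'a ^ b is 5'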
###Output
_____no_output_____ |
analysis/perturb_thp1/preprocess/preprocess_raw.ipynb | ###Markdown
Assembling the complete ECCITE-seq dataset
###Code
import scanpy as sc
import pandas as pd
import seaborn as sns
pd.set_option('display.max_rows', 500)
data_path = '/data_volume/memento/eccite/'
a = 5
a
adata = sc.read_10x_h5(data_path + 'filtered_feature_bc_matrix.h5')
adata.var_names_make_unique()
adata.obs['lane'] = adata.obs.index.str.split('-').str[-1]
adata.var['mt'] = adata.var_names.str.startswith('MT-') # annotate the group of mitochondrial genes as 'mt'
sc.pp.calculate_qc_metrics(adata, qc_vars=['mt'], percent_top=None, log1p=False, inplace=True)
adata.obs.head(5)
adata = adata[adata.obs.n_genes_by_counts > 100, :]
adata = adata[adata.obs.pct_counts_mt < 10, :]
adata
adata.write(data_path + 'filtered_eccite_cDNA.h5ad')
adata.obs['original_bc'] = adata.obs.index.str.split('-').str[0]
for lane in range(1, 9):
bcs = adata.obs.query('lane == "{}"'.format(lane))[['original_bc']]
bcs.to_csv(data_path + 'cell_bcs/run{}.csv'.format(lane), index=False, header=False)
###Output
_____no_output_____
###Markdown
Combine HTO counts
###Code
adata = sc.read(data_path + 'filtered_eccite_cDNA.h5ad')
# Read HTO assignments
df_list = []
for lane in range(1, 9):
df = pd.read_csv(data_path + 'hto_counts/hto{}_out/umi_count/multi_class.csv'.format(lane), index_col=0)
df.index = df.index.values + '-' + str(lane)
df_list.append(df.copy())
multiseq_df = pd.concat(df_list)
overlap_bcs = list(set(adata.obs.index) & set(multiseq_df.index))
adata = adata[overlap_bcs, :].copy()
adata.obs = adata.obs.join(multiseq_df, how='left')
adata.obs['MULTI_ID'].value_counts()
adata = adata[adata.obs['MULTI_ID'].str.startswith('rep')].copy()
adata.obs['replicate'] = adata.obs['MULTI_ID'].str.split('-').str[0]
adata.obs['treatment'] = adata.obs['MULTI_ID'].str.split('-').str[1]
adata.write(data_path + 'filtered_eccite_cDNA_hto.h5ad')
###Output
... storing 'orig.ident' as categorical
... storing 'MULTI_ID' as categorical
... storing 'MULTI_classification' as categorical
... storing 'replicate' as categorical
... storing 'treatment' as categorical
###Markdown
Attach guide information
###Code
# Read HTO assignments
df_list = []
for lane in range(1, 9):
df = pd.read_csv(data_path + 'gdo_counts/gdo{}_out/umi_count/gdo_counts.csv'.format(lane), index_col=0).T
df.index = df.index.values + '-' + str(lane)
df_list.append(df.copy())
gdo_df = pd.concat(df_list)
gdo_df = gdo_df[gdo_df.columns[:-1]]
gdo_df = gdo_df[~((gdo_df > 5).sum(axis=1) == 0)]
gdo_df
gdo_df['guide_ID'] = gdo_df.idxmax(axis=1)
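# For each cell, second_percent is the ratio of the second-highest guide count
# to the highest; a large ratio means the guide assignment is ambiguous, and
# such cells are filtered out below (threshold 0.30).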
def second_percent(row):
return row.nlargest(2).values[-1]/row.nlargest(1).values[-1]
gdo_df['second_percent'] = gdo_df.iloc[:, :-1].apply(second_percent,axis=1)
filtered_gdo_df = gdo_df.query('second_percent < 0.30')
adata = sc.read(data_path + 'filtered_eccite_cDNA_hto.h5ad')
overlap_bcs = list(set(adata.obs.index) & set(filtered_gdo_df.index))
filtered_gdo_df['max'] = filtered_gdo_df.iloc[:, :-1].max(axis=1).values
filtered_gdo_df.iloc[:, :-2].sum(axis=1).mean()
filtered_gdo_df.loc[overlap_bcs, :].sum(axis=1).mean()
adata = adata[overlap_bcs, :].copy()
adata.shape
adata.obs = adata.obs.join(filtered_gdo_df.iloc[:, -2:], how='left')
adata.obs['gene'] = adata.obs['guide_ID'].str.split('g').str[0]
# adata = adata[adata.obs['replicate'] != 'rep4']
adata.write(data_path + 'eccite.h5ad')
adata.obs[adata.obs['gene']=='STAT1'].guide_ID.value_counts()
adata.obs[adata.obs['gene']=='NT'].guide_ID.value_counts()
adata.obs.query('treatment == "ctrl" & replicate != "rep4"').gene.value_counts()
###Output
_____no_output_____ |
notebooks/community/vertex_endpoints/nvidia-triton/nvidia-triton-custom-container-prediction.ipynb | ###Markdown
Getting started: Serving Models with NVIDIA Triton Inference Server with a custom container Run in Colab --> View on GitHub Run on Vertex AI Workbench OverviewThis tutorial shows how to use a custom container running [NVIDIA Triton Inference Server (Triton)](https://developer.nvidia.com/nvidia-triton-inference-server) to deploy a machine learning (ML) model on [Vertex AI Prediction](https://cloud.google.com/vertex-ai/docs/predictions/getting-predictions) that serves online predictions. DatasetThe tutorial uses Faster R-CNN with ResNet-101 v1 object detection model provided on [TensorFlow Hub](https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1) that has been trained on the [COCO 2017 dataset](https://cocodataset.org/download) with training images scaled to 640x640. ObjectiveIn this tutorial, you deploy a container running Triton to serve predictions from an object detection model on [Vertex AI Predictions](https://cloud.google.com/vertex-ai/docs/predictions/getting-predictions) and then use the Vertex AI Endpoints to detect objects in an image.The steps performed in this tutorial include:- Download model artifacts from TensorFlow Hub- Create Triton configuration file for the model- Pull a custom serving container running Triton- Upload the model as a Vertex `Model` resource- Deploy the `Model` resource to a serving `Endpoint` resource.- Make a prediction request- Undeploy the `Model` resource and delete the `Endpoint` CostsThis tutorial uses billable components of Google Cloud:* Vertex AI* Cloud StorageLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. NVIDIA Triton Inference Server (Triton) Overview[NVIDIA Triton Inference Server (Triton)](https://github.com/triton-inference-server/server) provides an inference solution optimized for both CPUs and GPUs. Triton can run multiple models from the same or different frameworks concurrently on a single GPU or CPU. In a multi-GPU server, it automatically creates an instance of each model on each GPU to increase utilization without extra coding. It supports real-time inferencing, batch inferencing to maximize GPU/CPU utilization, and streaming inference with built-in support for audio streaming input. It also supports model ensembles for use cases that require multiple models to perform end-to-end inference.The following figure shows the Triton's high-level architecture.- The model repository is a file-system based repository of the models that Triton will make available for inference.- Inference requests arrive at the server via either HTTP/REST or gRPC and are then routed to the appropriate per-model scheduler.- Triton implements multiple scheduling and batching algorithms that can be configured on a model-by-model basis.- The backend performs inference using the inputs provided in the batched requests to produce the requested outputs.Triton provides readiness and liveness health endpoints, as well as utilization, throughput, and latency metrics, which enable the integration of Triton into deployment environments, such as Vertex AI Prediction.Refer to [Triton architecture](https://github.com/triton-inference-server/server/blob/main/docs/architecture.md) for more detailed information. 
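For example, the per-model scheduling and batching behavior mentioned above is declared in each model's `config.pbtxt`. A minimal, illustrative snippet (an assumption for a model whose `max_batch_size` is greater than zero, unlike the non-batching configuration used later in this tutorial) might look like:
```
dynamic_batching {
  preferred_batch_size: [ 4, 8 ]
  max_queue_delay_microseconds: 100
}
```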
Triton on Vertex AI PredictionTriton inference server runs inside a container published by NVIDIA GPU Cloud (NGC) - [NVIDIA Triton Inference Server Image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver). NVIDIA and GCP Vertex AI team collaborated and added packages and configurations to align Triton with Vertex AI [requirements for custom serving container images](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements).The model to be served by Triton should be registered with Vertex AI as a `Model` resource. The `Model`'s metadata refer to a location of the ensemble artifacts in Google Cloud Storage (GCS) and the custom serving container including configuration.Triton loads the models and exposes inference, health, and model management REST endpoints using [standard inference protocols](https://github.com/kserve/kserve/tree/master/docs/predict-api/v2). While deploying to Vertex AI, Triton recognizes Vertex AI environment and adopts Vertex AI Prediction protocol for [health checks](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirementshealth) and predictions.To invoke the model through the Vertex AI Prediction endpoint, format prediction request using a [standard Inference Request JSON Object](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/required_api.mdinference) or a [Inference Request JSON Object with a binary extension](https://github.com/triton-inference-server/server/blob/main/docs/protocol/extension_binary_data.md) and submit a request to Vertex AI Prediction [REST rawPredict endpoint](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.endpoints/rawPredict). You need to use the `rawPredict` rather than `predict` endpoint because inference request formats used by Triton are not compatible with the Vertex AI Prediction [standard input format](https://cloud.google.com/vertex-ai/docs/predictions/online-predictions-custom-modelsformatting-prediction-input). Set up your local development environmentThis notebook is only supported on [Vertex AI Workbench](https://cloud.google.com/vertex-ai/docs/workbench/introduction), and the Vertex AI Workbench environment already meets all the requirements to run this notebook. Please make sure you are running the notebook in TensorFlow kernel in the Vertex AI Workbench notebook environment.**NOTE:** This notebook uses `docker` commands to build and test the containers in the local development environment before deploying a custom container to Vertex AI Predictions. [Google Colab currently does not natively support running docker](https://github.com/googlecolab/colabtools/issues/299issuecomment-615308778) and hence the notebook is supported only on Vertex AI Workbench. InstallationInstall the latest version of [Vertex AI SDK for Python](https://cloud.google.com/vertex-ai/docs/start/client-librariespython).
###Code
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtimeThis tutorial does not require a GPU runtime. Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.5. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
import os
# Get your Google Cloud project ID using google.auth
import google.auth
_, PROJECT_ID = google.auth.default()
print("Project ID: ", PROJECT_ID)
# validate PROJECT_ID
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
print(
f"Please set your project id before proceeding to next step. Currently it's set as {PROJECT_ID}"
)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are running Notebook from Vertex AI Workbench, your environment is already authenticated.** Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
---**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.---
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import json
import os
from pathlib import Path
import numpy as np
import requests
from google.api import httpbody_pb2
from google.cloud import aiplatform as aip
from google.cloud import aiplatform_v1 as gapic
from PIL import Image
###Output
_____no_output_____
###Markdown
`MODEL_ARTIFACTS_REPOSITORY` is a root GCS location where the Triton model artifacts will be stored.
###Code
MODEL_ARTIFACTS_REPOSITORY = f"{BUCKET_NAME}/triton-on-vertex/models"
###Output
_____no_output_____
###Markdown
The following set of constants will be used to create names and display names of Vertex Prediction resources like models, endpoints, and model deployments.
###Code
# set model names and version
MODEL_NAME = "faster-rcnn"
MODEL_VERSION = "v01"
MODEL_DISPLAY_NAME = f"triton-{MODEL_NAME}-{MODEL_VERSION}"
ENDPOINT_DISPLAY_NAME = f"endpoint-{MODEL_NAME}-{MODEL_VERSION}"
# You can get the latest Triton image uri from
# https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver
NGC_TRITON_IMAGE_URI = "nvcr.io/nvidia/tritonserver:22.01-py3"
# prediction container image name
IMAGE_NAME = "vertex-triton-inference:22.01"
###Output
_____no_output_____
###Markdown
Initialize Vertex SDK for PythonInitialize the Vertex SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
Download model artifactsFor this tutorial, download the object detection model provided on [TensorFlow Hub](https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1) that has been trained on the [COCO 2017 dataset](https://cocodataset.org/download). Triton expects [model repository]() to be organized in the following structure for serving [TensorFlow `SavedModel`](https://www.tensorflow.org/guide/saved_model) formats:> ```> └── model-repository-path> └── model_name> ├── config.pbtxt> └── 1> └── model.savedmodel> └── > ```The `config.pbtxt` file describes the [model configuration](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md) for the model.
###Code
# download and organize model artifacts as per Triton model repository spec
! mkdir -p models/object_detector/1/model.savedmodel/
! curl -L "https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1?tf-hub-format=compressed" | \
tar -zxvC ./models/object_detector/1/model.savedmodel/
! ls -ltr ./models/object_detector/1/model.savedmodel/
###Output
_____no_output_____
###Markdown
After downloading the model locally, model repository will be organized as following:> ```> ./models> └── object_detector> └── 1> └── model.savedmodel> ├── saved_model.pb> └── variables> ├── variables.data-00000-of-00001> └── variables.index> ```
###Code
!tree ./models
###Output
_____no_output_____
###Markdown
Create model configuration fileThe [model configuration](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md) file `config.pbtxt` provides information about the model such as inputs and outputs. Refer to the [Triton docs](https://github.com/triton-inference-server/common/blob/main/protobuf/model_config.proto) for the configuration format. For TensorFlow models, you could use the [`saved_model_cli` command](https://www.tensorflow.org/guide/saved_modeldetails_of_the_savedmodel_command_line_interface) and map to the Triton's configuration format. Note that Triton datatypes are different from the frameworks and should be mapped accordingly based on the table [here](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.mddatatypes).
###Code
!saved_model_cli show --dir ./models/object_detector/1/model.savedmodel/ --all
%%writefile ./models/object_detector/config.pbtxt
name: "object_detector"
platform: "tensorflow_savedmodel"
backend: "tensorflow"
max_batch_size: 0
input [
{
name: "input_tensor"
data_type: TYPE_UINT8
dims: [ 1, -1, -1, 3 ]
}
]
output [
{
name: "detection_anchor_indices"
data_type: TYPE_FP32
dims: [ 1, 300 ]
},
{
name: "detection_boxes"
data_type: TYPE_FP32
dims: [ 1, 300, 4 ]
},
{
name: "detection_classes"
data_type: TYPE_FP32
dims: [ 1, 300 ]
},
{
name: "detection_multiclass_scores"
data_type: TYPE_FP32,
dims: [ 1, 300, 91]
},
{
name: "detection_scores"
data_type: TYPE_FP32
dims: [ 1, 300 ]
},
{
name: "num_detections"
data_type: TYPE_FP32
dims: [ 1 ]
},
{
name: "raw_detection_boxes"
data_type: TYPE_FP32
dims: [ 1, 300, 4 ]
},
{
name: "raw_detection_scores"
data_type: TYPE_FP32
dims: [ 1, 300, 91 ]
}
]
###Output
_____no_output_____
###Markdown
Push model artifacts to GCS BucketThe downloaded model artifacts including model configuration file are pushed to GCS bucket defined by `MODEL_ARTIFACTS_REPOSITORY` which will be used when creating the Vertex AI Model resource.
###Code
! gsutil cp -r ./models/object_detector $MODEL_ARTIFACTS_REPOSITORY/
###Output
_____no_output_____
###Markdown
Validate model artifacts are copied in the GCS model artifacts uri location.
###Code
!gsutil ls -r $MODEL_ARTIFACTS_REPOSITORY/object_detector/
###Output
_____no_output_____
###Markdown
Download test image file and generate payload to make prediction requests- The following function downloads a test image file, formats the request payload as JSON file that will be passed to prediction requests.
###Code
def generate_payload(image_url):
# download image to memory and resize
image_inputs = Image.open(requests.get(image_url, stream=True).raw).resize(
(200, 200)
)
# convert image to numpy array
image_tensor = np.asarray(image_inputs)
# derive image shape
image_shape = [1] + list(image_tensor.shape)
# create/set directory to save payload
base = Path("./test")
base.mkdir(exist_ok=True)
# create payload request
payload = {
"id": "0",
"inputs": [
{
"name": "input_tensor",
"shape": image_shape,
"datatype": "UINT8",
"parameters": {},
"data": image_tensor.tolist(),
}
],
}
# save payload as json file
payload_file = os.path.join(base, "payload.json")
with open(payload_file, "w") as f:
json.dump(payload, f)
print(f"Payload generated at {payload_file}")
return payload_file
###Output
_____no_output_____
###Markdown
- Download and view the sample image
###Code
# set image url
image_url = "https://github.com/tensorflow/models/raw/master/research/object_detection/test_images/image2.jpg"
# show image
image = Image.open(requests.get(image_url, stream=True).raw).resize((200, 200))
image
###Output
_____no_output_____
###Markdown
- Format the request payload as JSON
###Code
# format payload as JSON
payload_file = generate_payload(image_url)
###Output
_____no_output_____
###Markdown
- Run prediction with the object detection model downloaded from TensorFlow Hub on the test image
###Code
import tensorflow as tf
import tensorflow_hub as hub
# download model
detector = hub.load("https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1")
# download image
image = Image.open(requests.get(image_url, stream=True).raw).resize((200, 200))
# convert image to a tensor using `tf.convert_to_tensor`
image_tensor = tf.convert_to_tensor(np.asarray(image))
# model expects an image batch, for single image add an axis with `tf.newaxis`
image_tensor = image_tensor[tf.newaxis, ...]
# run inference
detector_output = detector(image_tensor)
# return class_ids
class_ids = detector_output["detection_classes"]
print(class_ids)
###Output
_____no_output_____
###Markdown
Building and pushing the container imageTo use a custom container for serving predictions, you must specify a Docker container image that meets the [custom container requirements](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements). This section describes how to create the container image running Triton and push it to Google Artifact Registry (GAR) or Google Container Registry (GCR). This tutorial shows how to push the custom container to Artifact Registry. Setting up Artifact Registry - **Enable the Artifact Registry API service for your project.**
###Code
! gcloud services enable artifactregistry.googleapis.com
###Output
_____no_output_____
###Markdown
- **Create a private Docker repository to push the container images**
###Code
DOCKER_ARTIFACT_REPO = "triton-prediction-container"
# create a new Docker repository with your region with the description
! gcloud artifacts repositories create {DOCKER_ARTIFACT_REPO} \
--repository-format=docker \
--location={REGION} \
--description="Triton Docker repository"
# verify that your repository was created.
! gcloud artifacts repositories list \
--location={REGION} \
--filter="name~"{DOCKER_ARTIFACT_REPO}
###Output
_____no_output_____
###Markdown
- **Configure authentication to the private repo**Before you push or pull container images, configure Docker to use the `gcloud` command-line tool to authenticate requests to Artifact Registry for your region.
###Code
! gcloud auth configure-docker {REGION}-docker.pkg.dev --quiet
###Output
_____no_output_____
###Markdown
- **Build the image and tag the Artifact Registry path that the image will be pushed to**
###Code
IMAGE_URI = f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{DOCKER_ARTIFACT_REPO}/{IMAGE_NAME}"
! docker pull $NGC_TRITON_IMAGE_URI
! docker tag $NGC_TRITON_IMAGE_URI $IMAGE_URI
###Output
_____no_output_____
###Markdown
Run the container locally *[optional]*Before pushing the custom container image to Artifact Registry to use it with Vertex AI Predictions, run the container in local environment to verify that the server responds to prediction instances. 1. To run the container image as a container locally, run the following command:**NOTE:** You can ignore error `No such container` which is thrown when the container is not running.
###Code
! docker stop local_object_detector
! docker run -t -d -p 8000:8000 --rm \
--name=local_object_detector \
-e AIP_MODE=True \
$IMAGE_URI \
--model-repository $MODEL_ARTIFACTS_REPOSITORY
! sleep 10
# check if the triton container is running locally
!docker container ls -f"name=local_object_detector" --no-trunc
###Output
_____no_output_____
###Markdown
2. To send the container's server a health check, run the following command. It should return status code `200`.
###Code
! curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/v2/health/ready
###Output
_____no_output_____
###Markdown
3. To send the container's server a prediction request, run the following command with the sample image file as the payload and get the prediction response
###Code
# request prediction response
! curl -X POST -H "Content-Type: application/octet-stream" \
-H "Accept: */*" \
--data-binary @$payload_file \
localhost:8000/v2/models/object_detector/infer | \
jq -c '.outputs[] | select(.name == "detection_classes")'
###Output
_____no_output_____
###Markdown
4. To stop the container, run the following command:
###Code
! docker stop local_object_detector
###Output
_____no_output_____
###Markdown
Push the container image to Artifact RegistryAfter testing the container image locally, push the image to Artifact Registry. The Artifact Registry image URI will be used when creating the Vertex AI model resource.
###Code
! docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Create Vertex AI Model resourceA Vertex AI Model resource must be created to deploy the model to a Vertex AI Prediction Endpoint. Create a Vertex AI Model resource with the deployment image pointing to the model artifacts. Refer to [Vertex AI Prediction guide](https://cloud.google.com/vertex-ai/docs/predictions/use-custom-container) for creating Vertex AI Model resource with custom container.
###Code
model = aip.Model.upload(
display_name=MODEL_DISPLAY_NAME,
serving_container_image_uri=IMAGE_URI,
artifact_uri=MODEL_ARTIFACTS_REPOSITORY,
sync=True,
)
model.resource_name
###Output
_____no_output_____
###Markdown
Deploy the model to Vertex AI PredictionsDeploying a Vertex AI Prediction Model is a two step process.1. Create an `Endpoint` exposing an external interface to users consuming the model. 2. After the `Endpoint` is ready, deploy multiple versions of a model to the `Endpoint`. The deployed model runs the custom container image running Triton to serve predictions.Refer to Vertex AI Predictions guide to [Deploy a model using the Vertex AI API](https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api) for more information about the APIs used in the following cells. Create an endpointBefore deploying the model you need to create a Vertex AI Prediction endpoint.
###Code
endpoint = aip.Endpoint.create(display_name=ENDPOINT_DISPLAY_NAME)
###Output
_____no_output_____
###Markdown
Deploy model to an endpointAfter the endpoint is ready, deploy model to the endpoint.The deployed model runs the Triton Server on a GPU node equipped with the NVIDIA Tesla T4 GPUs. Refer to [Deploy a model using the Vertex AI API](https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api) guide for more information.**NOTE:** This step can take up to 20 min.
###Code
traffic_percentage = 100
machine_type = "n1-standard-4"
accelerator_type = "NVIDIA_TESLA_T4"
accelerator_count = 1
min_replica_count = 1
max_replica_count = 2
model.deploy(
endpoint=endpoint,
deployed_model_display_name=MODEL_DISPLAY_NAME,
machine_type=machine_type,
min_replica_count=min_replica_count,
max_replica_count=max_replica_count,
traffic_percentage=traffic_percentage,
accelerator_type=accelerator_type,
accelerator_count=accelerator_count,
sync=True,
)
endpoint.name
###Output
_____no_output_____
###Markdown
Invoking the model and getting predictionsTo invoke the model through Vertex AI Prediction endpoint, format prediction request using a [standard Inference Request JSON Object](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/required_api.mdinference) or a [Inference Request JSON Object with a binary extension](https://github.com/triton-inference-server/server/blob/main/docs/protocol/extension_binary_data.md) and submit the request to Vertex AI Prediction [REST rawPredict endpoint](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.endpoints/rawPredict). ---Instead of `predict` API, you must use the `rawPredict` API because prediction request formats used by Triton are not compatible with the Vertex AI Prediction [standard input format](https://cloud.google.com/vertex-ai/docs/predictions/online-predictions-custom-modelsformatting-prediction-input).---In the [previous section](Download-test-image-file-and-generate-payload-to-make-prediction-requests) the request body was formatted as a [standard Inference Request JSON Object](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/required_api.mdinference). **You can invoke the Vertex AI Prediction `rawPredict` endpoint using [Vertex AI SDK](https://googleapis.dev/python/aiplatform/latest/aiplatform_v1/prediction_service.html:~:text=async-,raw_predict,-(request%3A%20Optional%5BUnion), any HTTP tool or library, including `curl`.**To use `Endpoint` in another session: set endpoint as following```endpoint = aip.Endpoint('projects//locations//endpoints/')```
###Code
endpoint_name = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{endpoint.name}"
###Output
_____no_output_____
###Markdown
Calling `rawPredict` using Vertex AI SDK to get prediction response **Initialize prediction service client**
###Code
# initialize service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
prediction_client = gapic.PredictionServiceClient(client_options=client_options)
###Output
_____no_output_____
###Markdown
**Format the http request**
###Code
# format payload
http_body = httpbody_pb2.HttpBody(
data=open(payload_file).read().encode("utf-8"),
content_type="application/json",
)
# Initialize request argument(s)
request = gapic.RawPredictRequest(endpoint=endpoint_name, http_body=http_body)
###Output
_____no_output_____
###Markdown
**Make the prediction request**
###Code
# Make the prediction request
response = prediction_client.raw_predict(request=request)
result = json.loads(response.data)
###Output
_____no_output_____
###Markdown
**Get detection classes from the output**
###Code
detection_classes = [
item for item in result["outputs"] if item["name"] == "detection_classes"
][0]
json.dumps(detection_classes)
###Output
_____no_output_____
###Markdown
Making `curl` request to get prediction responseNotice the use of `rawPredict` API endpoint in the URI below
###Code
endpoint_uri = f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{endpoint.name}:rawPredict"
! curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
-H "Accept: */*" \
$endpoint_uri \
-d @$payload_file | \
jq -c '.outputs[] | select(.name == "detection_classes")'
###Output
_____no_output_____
###Markdown
Cleaning up Cleaning up training and deployment resourcesTo clean up all Google Cloud resources used in this notebook, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Model- Endpoint- Cloud Storage Bucket- Container Images Set flags for the resource type to be deleted
###Code
delete_endpoint = True
delete_model = True
delete_bucket = False
delete_image = True
###Output
_____no_output_____
###Markdown
**Undeploy models and Delete endpoints**
###Code
# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
if delete_endpoint or os.getenv("IS_TESTING"):
# get endpoint resource
endpoints = aip.Endpoint.list(
filter=f"display_name={ENDPOINT_DISPLAY_NAME}", order_by="create_time"
)
endpoint = endpoints[0]
# undeploy models from the endpoint
print(
f"Undeploying all deployed models from the endpoint {endpoint.display_name} [{endpoint._gca_resource.name}]"
)
endpoint.undeploy_all(sync=True)
# deleting endpoint
print(
f"Deleting endpoint {endpoint.display_name} [{endpoint._gca_resource.name}]"
)
aip.Endpoint.delete(endpoint)
print(f"Deleted endpoint {endpoint.display_name}")
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
**Deleting models**
###Code
# Delete the model using the Vertex AI fully qualified identifier for the model
try:
if delete_model or os.getenv("IS_TESTING"):
# get model resource
models = aip.Model.list(
filter=f"display_name={MODEL_DISPLAY_NAME}", order_by="create_time"
)
for model in models:
# deleting model
print(f"Deleting model {model.display_name} [{model._gca_resource.name}]")
aip.Model.delete(model)
print(f"Deleted model {model.display_name}")
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
**Delete contents from the staging bucket**---***NOTE: Everything in this Cloud Storage bucket will be DELETED. Please run it with caution.***---
###Code
if (delete_bucket or os.getenv("IS_TESTING")) and "BUCKET_NAME" in globals():
print(f"Deleting all contents from the bucket {BUCKET_NAME}")
shell_output = ! gsutil du -as $BUCKET_NAME
print(
f"Size of the bucket {BUCKET_NAME} before deleting = {shell_output[0].split()[0]} bytes"
)
# uncomment below line to delete contents of the bucket
# ! gsutil rm -r $BUCKET_NAME
shell_output = ! gsutil du -as $BUCKET_NAME
if float(shell_output[0].split()[0]) > 0:
print(
"PLEASE UNCOMMENT LINE TO DELETE BUCKET. CONTENT FROM THE BUCKET NOT DELETED"
)
print(
f"Size of the bucket {BUCKET_NAME} after deleting = {shell_output[0].split()[0]} bytes"
)
###Output
_____no_output_____
###Markdown
**Delete images from Artifact Registry**Deletes all the container images created in this tutorial with name defined by variable `IMAGE_NAME` from the registry. All associated tags are also deleted.
###Code
gar_images = ! gcloud artifacts docker images list $REGION-docker.pkg.dev/$PROJECT_ID/$DOCKER_ARTIFACT_REPO \
--filter="package~"$(echo $IMAGE_NAME | sed 's/:.*//') \
--format="get(package)"
try:
if delete_image or os.getenv("IS_TESTING"):
for image in gar_images:
# delete only if image name starts with valid region
if image.startswith(f'{REGION}-docker.pkg.dev'):
print(f"Deleting image {image} including all tags")
! gcloud artifacts docker images delete $image --delete-tags --quiet
except Exception as e:
print(e)
###Output
_____no_output_____ |
.ipynb_checkpoints/XGB_Model_Training_Notebook-checkpoint.ipynb | ###Markdown
Training a Gradient Boosting Model Checking if everything loads well. Change the input path and the number of variables!
###Code
df = pd.read_pickle('./experiments/wizard_nfsp_result/trick_prediction_results/tp01.pickle')
N_VARIABLES = 60 # Change this value depending on the dataset
# df1 = pd.read_pickle('C:/Users/Magnus/Documents/GitHub/rlcard/experiments/wizard_nfsp_result/trick_prediction_results/tp02.pickle')
# df=pd.concat((df,df1))
# df.set_index(list(np.arange(0,N_VARIABLES))).groupby(list(np.arange(0,24))).mean()
df2=df.set_index(list(np.arange(0,N_VARIABLES)))
df2
###Output
_____no_output_____
###Markdown
Training the model.
###Code
import xgboost as xgb
data=df
sz=data.shape
### Train-Test Split
train = data.loc[:int(sz[0] * 0.7), :]
test = data.loc[int(sz[0] * 0.7):, :]
train_X = train.loc[:, :N_VARIABLES-1]
train_Y = train.loc[:, N_VARIABLES]
test_X = test.loc[:, :N_VARIABLES-1]
test_Y = test.loc[:, N_VARIABLES]
xg_train = xgb.DMatrix(train_X, label=train_Y)
xg_test = xgb.DMatrix(test_X, label=test_Y)
# setup parameters for xgboost
param = {}
# use softmax multi-class classification
param['objective'] = 'multi:softmax'
# learning rate (step size shrinkage)
param['eta'] = 0.1
param['max_depth'] = 10
param['nthread'] = 4
param['num_class'] = 6
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
num_round = 200
bst = xgb.train(param, xg_train, num_round, watchlist)
# get prediction
pred = bst.predict(xg_test)
error_rate = np.sum(pred != test_Y) / test_Y.shape[0]
print('Test error using softmax = {}'.format(error_rate))
# # do the same thing again, but output probabilities
# param['objective'] = 'multi:softprob'
# bst = xgb.train(param, xg_train, num_round, watchlist)
# # Note: this convention has been changed since xgboost-unity
# # get prediction, this is in 1D array, need reshape to (ndata, nclass)
# pred_prob = bst.predict(xg_test).reshape(test_Y.shape[0], 6)
# pred_label = np.argmax(pred_prob, axis=1)
# error_rate = np.sum(pred_label != test_Y) / test_Y.shape[0]
# print('Test error using softprob = {}'.format(error_rate))
###Output
[21:12:25] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.5.1/src/learner.cc:1115: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softmax' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[0] train-mlogloss:1.76892 test-mlogloss:1.76927
[1] train-mlogloss:1.74890 test-mlogloss:1.74952
[2] train-mlogloss:1.73074 test-mlogloss:1.73165
[3] train-mlogloss:1.71457 test-mlogloss:1.71575
[4] train-mlogloss:1.69961 test-mlogloss:1.70103
[5] train-mlogloss:1.68614 test-mlogloss:1.68780
[6] train-mlogloss:1.67383 test-mlogloss:1.67570
[7] train-mlogloss:1.66247 test-mlogloss:1.66455
[8] train-mlogloss:1.65196 test-mlogloss:1.65425
[9] train-mlogloss:1.64217 test-mlogloss:1.64462
[10] train-mlogloss:1.63302 test-mlogloss:1.63567
[11] train-mlogloss:1.62456 test-mlogloss:1.62737
[12] train-mlogloss:1.61661 test-mlogloss:1.61958
[13] train-mlogloss:1.60917 test-mlogloss:1.61232
[14] train-mlogloss:1.60226 test-mlogloss:1.60559
[15] train-mlogloss:1.59572 test-mlogloss:1.59924
[16] train-mlogloss:1.58958 test-mlogloss:1.59324
[17] train-mlogloss:1.58382 test-mlogloss:1.58764
[18] train-mlogloss:1.57831 test-mlogloss:1.58234
[19] train-mlogloss:1.57319 test-mlogloss:1.57741
[20] train-mlogloss:1.56828 test-mlogloss:1.57266
[21] train-mlogloss:1.56368 test-mlogloss:1.56822
[22] train-mlogloss:1.55925 test-mlogloss:1.56399
[23] train-mlogloss:1.55506 test-mlogloss:1.55996
[24] train-mlogloss:1.55113 test-mlogloss:1.55617
[25] train-mlogloss:1.54730 test-mlogloss:1.55255
[26] train-mlogloss:1.54369 test-mlogloss:1.54913
[27] train-mlogloss:1.54018 test-mlogloss:1.54583
[28] train-mlogloss:1.53684 test-mlogloss:1.54267
[29] train-mlogloss:1.53370 test-mlogloss:1.53971
[30] train-mlogloss:1.53066 test-mlogloss:1.53687
[31] train-mlogloss:1.52779 test-mlogloss:1.53415
[32] train-mlogloss:1.52501 test-mlogloss:1.53155
[33] train-mlogloss:1.52229 test-mlogloss:1.52901
[34] train-mlogloss:1.51969 test-mlogloss:1.52663
[35] train-mlogloss:1.51722 test-mlogloss:1.52435
[36] train-mlogloss:1.51484 test-mlogloss:1.52214
[37] train-mlogloss:1.51253 test-mlogloss:1.52001
[38] train-mlogloss:1.51032 test-mlogloss:1.51798
[39] train-mlogloss:1.50817 test-mlogloss:1.51601
[40] train-mlogloss:1.50611 test-mlogloss:1.51410
[41] train-mlogloss:1.50407 test-mlogloss:1.51225
[42] train-mlogloss:1.50212 test-mlogloss:1.51047
[43] train-mlogloss:1.50025 test-mlogloss:1.50878
[44] train-mlogloss:1.49845 test-mlogloss:1.50714
[45] train-mlogloss:1.49671 test-mlogloss:1.50555
[46] train-mlogloss:1.49495 test-mlogloss:1.50399
[47] train-mlogloss:1.49324 test-mlogloss:1.50246
[48] train-mlogloss:1.49163 test-mlogloss:1.50103
[49] train-mlogloss:1.49007 test-mlogloss:1.49963
[50] train-mlogloss:1.48853 test-mlogloss:1.49828
[51] train-mlogloss:1.48703 test-mlogloss:1.49694
[52] train-mlogloss:1.48557 test-mlogloss:1.49568
[53] train-mlogloss:1.48415 test-mlogloss:1.49443
[54] train-mlogloss:1.48274 test-mlogloss:1.49321
[55] train-mlogloss:1.48138 test-mlogloss:1.49202
[56] train-mlogloss:1.48005 test-mlogloss:1.49087
[57] train-mlogloss:1.47874 test-mlogloss:1.48973
[58] train-mlogloss:1.47746 test-mlogloss:1.48864
[59] train-mlogloss:1.47620 test-mlogloss:1.48754
[60] train-mlogloss:1.47492 test-mlogloss:1.48645
[61] train-mlogloss:1.47372 test-mlogloss:1.48542
[62] train-mlogloss:1.47248 test-mlogloss:1.48437
[63] train-mlogloss:1.47131 test-mlogloss:1.48339
[64] train-mlogloss:1.47010 test-mlogloss:1.48239
[65] train-mlogloss:1.46897 test-mlogloss:1.48145
[66] train-mlogloss:1.46785 test-mlogloss:1.48050
[67] train-mlogloss:1.46677 test-mlogloss:1.47961
[68] train-mlogloss:1.46567 test-mlogloss:1.47870
[69] train-mlogloss:1.46456 test-mlogloss:1.47779
[70] train-mlogloss:1.46351 test-mlogloss:1.47692
[71] train-mlogloss:1.46246 test-mlogloss:1.47608
[72] train-mlogloss:1.46142 test-mlogloss:1.47523
[73] train-mlogloss:1.46040 test-mlogloss:1.47438
[74] train-mlogloss:1.45938 test-mlogloss:1.47356
[75] train-mlogloss:1.45839 test-mlogloss:1.47276
[76] train-mlogloss:1.45739 test-mlogloss:1.47196
[77] train-mlogloss:1.45643 test-mlogloss:1.47118
[78] train-mlogloss:1.45546 test-mlogloss:1.47043
[79] train-mlogloss:1.45452 test-mlogloss:1.46968
[80] train-mlogloss:1.45358 test-mlogloss:1.46893
[81] train-mlogloss:1.45267 test-mlogloss:1.46822
[82] train-mlogloss:1.45176 test-mlogloss:1.46752
[83] train-mlogloss:1.45088 test-mlogloss:1.46682
[84] train-mlogloss:1.45002 test-mlogloss:1.46615
[85] train-mlogloss:1.44915 test-mlogloss:1.46548
[86] train-mlogloss:1.44829 test-mlogloss:1.46482
[87] train-mlogloss:1.44744 test-mlogloss:1.46417
###Markdown
Potentially further training.
###Code
# bst2=xgb.train(param, xg_train, 50, watchlist,xgb_model=bst2)
###Output
[0] train-mlogloss:1.21898 test-mlogloss:1.24526
[1] train-mlogloss:1.21803 test-mlogloss:1.24458
[2] train-mlogloss:1.21697 test-mlogloss:1.24385
[3] train-mlogloss:1.21619 test-mlogloss:1.24333
[4] train-mlogloss:1.21542 test-mlogloss:1.24280
[5] train-mlogloss:1.21457 test-mlogloss:1.24223
[6] train-mlogloss:1.21385 test-mlogloss:1.24177
[7] train-mlogloss:1.21288 test-mlogloss:1.24109
[8] train-mlogloss:1.21201 test-mlogloss:1.24049
[9] train-mlogloss:1.21116 test-mlogloss:1.23993
[10] train-mlogloss:1.21016 test-mlogloss:1.23924
[11] train-mlogloss:1.20933 test-mlogloss:1.23868
[12] train-mlogloss:1.20852 test-mlogloss:1.23815
[13] train-mlogloss:1.20754 test-mlogloss:1.23748
[14] train-mlogloss:1.20673 test-mlogloss:1.23694
[15] train-mlogloss:1.20588 test-mlogloss:1.23639
[16] train-mlogloss:1.20497 test-mlogloss:1.23576
[17] train-mlogloss:1.20416 test-mlogloss:1.23521
[18] train-mlogloss:1.20339 test-mlogloss:1.23468
[19] train-mlogloss:1.20264 test-mlogloss:1.23418
[20] train-mlogloss:1.20192 test-mlogloss:1.23370
[21] train-mlogloss:1.20114 test-mlogloss:1.23319
[22] train-mlogloss:1.20034 test-mlogloss:1.23266
[23] train-mlogloss:1.19951 test-mlogloss:1.23212
[24] train-mlogloss:1.19863 test-mlogloss:1.23154
[25] train-mlogloss:1.19780 test-mlogloss:1.23099
[26] train-mlogloss:1.19698 test-mlogloss:1.23045
[27] train-mlogloss:1.19618 test-mlogloss:1.22993
[28] train-mlogloss:1.19532 test-mlogloss:1.22937
[29] train-mlogloss:1.19454 test-mlogloss:1.22887
[30] train-mlogloss:1.19365 test-mlogloss:1.22828
[31] train-mlogloss:1.19289 test-mlogloss:1.22779
[32] train-mlogloss:1.19218 test-mlogloss:1.22732
[33] train-mlogloss:1.19151 test-mlogloss:1.22688
[34] train-mlogloss:1.19078 test-mlogloss:1.22642
[35] train-mlogloss:1.19000 test-mlogloss:1.22591
[36] train-mlogloss:1.18928 test-mlogloss:1.22544
[37] train-mlogloss:1.18841 test-mlogloss:1.22490
[38] train-mlogloss:1.18754 test-mlogloss:1.22434
[39] train-mlogloss:1.18681 test-mlogloss:1.22387
[40] train-mlogloss:1.18608 test-mlogloss:1.22342
[41] train-mlogloss:1.18530 test-mlogloss:1.22293
[42] train-mlogloss:1.18457 test-mlogloss:1.22245
[43] train-mlogloss:1.18387 test-mlogloss:1.22203
[44] train-mlogloss:1.18306 test-mlogloss:1.22148
[45] train-mlogloss:1.18222 test-mlogloss:1.22092
[46] train-mlogloss:1.18155 test-mlogloss:1.22050
[47] train-mlogloss:1.18085 test-mlogloss:1.22005
[48] train-mlogloss:1.18017 test-mlogloss:1.21964
[49] train-mlogloss:1.17941 test-mlogloss:1.21914
[50] train-mlogloss:1.17871 test-mlogloss:1.21869
[51] train-mlogloss:1.17800 test-mlogloss:1.21825
[52] train-mlogloss:1.17728 test-mlogloss:1.21779
[53] train-mlogloss:1.17667 test-mlogloss:1.21742
[54] train-mlogloss:1.17599 test-mlogloss:1.21700
[55] train-mlogloss:1.17534 test-mlogloss:1.21661
[56] train-mlogloss:1.17473 test-mlogloss:1.21623
[57] train-mlogloss:1.17407 test-mlogloss:1.21583
[58] train-mlogloss:1.17329 test-mlogloss:1.21532
[59] train-mlogloss:1.17258 test-mlogloss:1.21488
[60] train-mlogloss:1.17183 test-mlogloss:1.21440
[61] train-mlogloss:1.17111 test-mlogloss:1.21395
[62] train-mlogloss:1.17036 test-mlogloss:1.21348
[63] train-mlogloss:1.16957 test-mlogloss:1.21299
[64] train-mlogloss:1.16877 test-mlogloss:1.21249
[65] train-mlogloss:1.16797 test-mlogloss:1.21198
[66] train-mlogloss:1.16721 test-mlogloss:1.21151
[67] train-mlogloss:1.16648 test-mlogloss:1.21104
[68] train-mlogloss:1.16579 test-mlogloss:1.21060
[69] train-mlogloss:1.16506 test-mlogloss:1.21014
[70] train-mlogloss:1.16438 test-mlogloss:1.20972
[71] train-mlogloss:1.16362 test-mlogloss:1.20924
[72] train-mlogloss:1.16288 test-mlogloss:1.20877
[73] train-mlogloss:1.16218 test-mlogloss:1.20832
[74] train-mlogloss:1.16144 test-mlogloss:1.20783
[75] train-mlogloss:1.16071 test-mlogloss:1.20737
[76] train-mlogloss:1.16003 test-mlogloss:1.20694
[77] train-mlogloss:1.15925 test-mlogloss:1.20643
[78] train-mlogloss:1.15854 test-mlogloss:1.20598
[79] train-mlogloss:1.15773 test-mlogloss:1.20548
[80] train-mlogloss:1.15699 test-mlogloss:1.20500
[81] train-mlogloss:1.15630 test-mlogloss:1.20458
[82] train-mlogloss:1.15561 test-mlogloss:1.20414
[83] train-mlogloss:1.15495 test-mlogloss:1.20372
[84] train-mlogloss:1.15430 test-mlogloss:1.20333
[85] train-mlogloss:1.15368 test-mlogloss:1.20295
[86] train-mlogloss:1.15301 test-mlogloss:1.20251
[87] train-mlogloss:1.15236 test-mlogloss:1.20210
[88] train-mlogloss:1.15170 test-mlogloss:1.20168
[89] train-mlogloss:1.15104 test-mlogloss:1.20127
[90] train-mlogloss:1.15039 test-mlogloss:1.20087
[91] train-mlogloss:1.14975 test-mlogloss:1.20049
[92] train-mlogloss:1.14907 test-mlogloss:1.20007
[93] train-mlogloss:1.14846 test-mlogloss:1.19968
[94] train-mlogloss:1.14781 test-mlogloss:1.19929
[95] train-mlogloss:1.14724 test-mlogloss:1.19894
[96] train-mlogloss:1.14664 test-mlogloss:1.19857
[97] train-mlogloss:1.14603 test-mlogloss:1.19821
[98] train-mlogloss:1.14539 test-mlogloss:1.19785
[99] train-mlogloss:1.14481 test-mlogloss:1.19750
###Markdown
Testing & SavingHere we have a look if the classifier actually works with a pre-set hand from the game which should result in almost 100% Win_Rate but mistakes are possible.
###Code
test_series1 = pd.DataFrame(np.array([[0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1]]))
print(bst.predict(xgb.DMatrix(test_series1)))
np.mean(df2.loc[0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1,:])
bst.save_model('./rlcard/games/wizard_trickpreds/xgb_models/2P5C.json')
###Output
_____no_output_____
###Markdown
Loading
###Code
model_xgb_2 = xgb.Booster()
model_xgb_2.load_model(r"./experiments/rlcard/games/wizard_ms_trickpreds/xgb_models/2P5C.json")
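# Quick sanity check (a sketch, assuming the file above loads and matches the
# feature layout of `test_series1` defined earlier): the reloaded booster
# should reproduce the prediction made by `bst`.
print(model_xgb_2.predict(xgb.DMatrix(test_series1)))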
###Output
_____no_output_____
###Markdown
Log Plotting
###Code
import re
import matplotlib.pyplot as plt
from sklearn import linear_model
import numpy as np
# PATH = "wizard_s_trickpreds_result_nfsp/"
PATH = "wizard_nfsp_result/"
with open(PATH+"complete_log.txt") as f:
lines = f.readlines()
timesteps = []
rewards = []
for line in lines:
x=re.match("^\s+timestep\s+\|\s+([0-9]*)\\n$",line)
if x:
timesteps.append(int(x.group(1)))
x=re.match("^\s+reward\s+\|\s+([0-9]+\.[0-9]+)\\n$",line)
if x:
rewards.append(float(x.group(1)))
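# The concatenated log can contain several training runs whose timestep counter
# restarts at 10; accumulate an offset so the plotted timesteps increase
# monotonically across runs.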
timesteps_transformed = []
last_timestep=0
add_timestep=0
for idx,timestep in enumerate(timesteps):
if idx==0:
add_timestep=0
elif timestep==10:
add_timestep=last_timestep
# print(timestep,add_timestep)
timesteps_transformed.append(timestep+add_timestep)
last_timestep=timestep+add_timestep
# #Linear Model
reg = linear_model.LinearRegression()
reg.fit(np.array(timesteps_transformed).reshape(-1, 1),rewards)
fig, ax = plt.subplots(1)
ax.plot(timesteps_transformed,rewards, ".")
ax.plot(timesteps_transformed,timesteps_transformed*reg.coef_+reg.intercept_,"-",color="red")
ax.set_title("Model improvement\nwizard_default")
ax.set_xlabel("timestep")
ax.set_ylabel("reward")
plt.savefig(PATH+"model_improvement.png",facecolor="white")
# # Logarithmic model
# reg = linear_model.LinearRegression()
# inputs=np.array(list(zip(timesteps_transformed,np.log(timesteps_transformed))))
# reg.fit(inputs,np.array(rewards).reshape(len(rewards), 1))
# fig, ax = plt.subplots(1)
# ax.plot(timesteps_transformed,np.array(rewards).reshape(len(rewards), 1), ".")
# ax.plot(timesteps_transformed,np.sum(inputs*reg.coef_,axis=1)+reg.intercept_,"-",color="red")
# ax.set_title("Model improvement\nwizard most simple with trick predictions")
# ax.set_xlabel("timestep")
# ax.set_ylabel("reward")
# plt.savefig(PATH+"model_improvement.png")
###Output
_____no_output_____
###Markdown
Helper & Misc cells
###Code
r"C:\Users\Magnus\Documents\GitHub\rlcard\rlcard\games\wizard_s_trickpreds\xgb_models\2P5C.json".replace("\\","/")
###Output
_____no_output_____ |
jupyter/Cloud Pak for Data v3.5.x/CopyAndSolveScenarios.ipynb | ###Markdown
To use this notebook you have to import the StaffPlanning example (New Decision Optimization Model/From File) ClientCreate a DODS client to connect to the initial scenario.
###Code
from dd_scenario import *
client = Client()
decision = client.get_model_builder(name="StaffPlanning")
scenario = decision.get_scenario(name="Scenario 1")
###Output
_____no_output_____
###Markdown
Global parametersThe number of days and number of periods per day:
###Code
N_DAYS = 2
N_PERIODS_PER_DAY = 24*4
N_PERIODS = N_DAYS * N_PERIODS_PER_DAY
###Output
_____no_output_____
###Markdown
Random generatorA method to generate the random demand for the given number of days and periods.
###Code
import random
import numpy as np
import pandas as pd
def random_demand( b_size ):
rs = []
for d in range(N_DAYS):
# Morning
p1 = random.uniform(0.2, 0.4)
s1 = int(random.uniform(b_size*0.5, b_size*1.5))
rs.append(np.random.binomial(n=N_PERIODS_PER_DAY, p=p1, size=s1) + d*N_PERIODS_PER_DAY)
# Afternoon
p2 = random.uniform(0.6, 0.8)
s2 = int(random.uniform(b_size*0.5, b_size*1.5))
rs.append(np.random.binomial(n=N_PERIODS_PER_DAY, p=p2, size=s2) + d*N_PERIODS_PER_DAY)
# Rest of day
s3 = int(random.uniform(b_size*0.4, b_size*0.7))
e = np.array([ random.randint(int(d*N_PERIODS_PER_DAY + 0.2*N_PERIODS_PER_DAY), int(d*N_PERIODS_PER_DAY + 0.8*N_PERIODS_PER_DAY)) for i in range(s3) ])
#print(e)
rs.append(e)
#print(rs)
s = np.concatenate(rs)
#print(s)
g_arrivals = pd.DataFrame(data=s, columns=['value'])
_demands = [0 for i in range(0, N_PERIODS+1)]
for t in s:
_demands[t] = _demands[t] +1
demands = pd.DataFrame(data= [(t, _demands[t]) for t in range(N_PERIODS)], columns = ['period', 'demand'])
return demands
###Output
_____no_output_____
###Markdown
The number of scenarios you want to generate and solve:
###Code
N_SCENARIOS = 5
###Output
_____no_output_____
###Markdown
When copying the scenario, copy the input data, the model and the solution, if any. Then attach new randomly generated data and solve. Grab the solution to perform some multi-scenario reporting in this notebook.
###Code
all_kpis = pd.DataFrame()
for i in range(1, N_SCENARIOS+1):
sc_name = "Copy %02d" % (i)
print(sc_name)
copy = decision.get_scenario(name=sc_name)
if (copy != None):
print(" Deleting old...")
decision.delete_container(copy)
print(" Copying from original scenario...")
copy = scenario.copy(sc_name)
print(" Generating new demand...")
df_demands = random_demand(200)
copy.add_table_data("demands", df_demands, category='input')
print(" Solving...")
copy.solve()
print(" Grabbing solution kpis...")
kpis = copy.get_table_data('kpis')
kpis['scenario'] = sc_name
mk = [[ kpis.iloc[0]['Value'], "%02d" % (kpis.iloc[1]['Value']), sc_name, "%02d" % (kpis.iloc[2]['Value'])]]
my_kpis = pd.DataFrame(data=mk, columns=['cost','fix','scenario','temp'])
copy.add_table_data('my_kpis', data=my_kpis, category='output')
all_kpis = all_kpis.append(kpis)
print("Done!")
###Output
Copy 01
Deleting old...
Copying from original scenario...
Generating new demand...
Solving...
Grabbing solution kpis...
Copy 02
Deleting old...
Copying from original scenario...
Generating new demand...
Solving...
Grabbing solution kpis...
Copy 03
Deleting old...
Copying from original scenario...
Generating new demand...
Solving...
Grabbing solution kpis...
Copy 04
Deleting old...
Copying from original scenario...
Generating new demand...
Solving...
Grabbing solution kpis...
Copy 05
Deleting old...
Copying from original scenario...
Generating new demand...
Solving...
Grabbing solution kpis...
Done!
###Markdown
ReportingDisplay the multi-scenario comparison report.
###Code
total_cost = all_kpis[all_kpis.Name=='Total Cost']
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
my_colors = mcolors.TABLEAU_COLORS
plot = plt.figure(figsize=(20,5))
plot = plt.bar(range(N_SCENARIOS),[total_cost.iloc[i]['Value'] for i in range(N_SCENARIOS)], width = 0.8, color = my_colors)
plot = plt.xticks(range(N_SCENARIOS),[total_cost.iloc[i]['scenario'] for i in range(N_SCENARIOS)])
labels = list(total_cost.iloc[i]['scenario'] for i in range(N_SCENARIOS))
handles = [plt.Rectangle((0,0),1,1, color = my_colors[v_color]) for v_color in my_colors]
plot = plt.legend(handles, labels, title = 'Scenario', loc = 'upper right', bbox_to_anchor=(1.1, 1))
###Output
_____no_output_____ |
notebooks/neural_nets/perceptron.ipynb | ###Markdown
Neural Network (Multilayer Perceptron) Demo> ☝Before moving on with this demo you might want to take a look at:**Artificial neural networks (ANN)** or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains. The neural network itself isn't an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules.A **multilayer perceptron (MLP)** is a class of feedforward artificial neural network. An MLP consists of, at least, three layers of nodes: an input layer, a hidden layer and an output layer. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. MLP utilizes a supervised learning technique called backpropagation for training. Its multiple layers and non-linear activation distinguish MLP from a linear perceptron. It can distinguish data that is not linearly separable.> **Demo Project:** In this example we will train a clothes classifier that will recognize clothes types (10 categories) from `28x28` pixel images using a simple multilayer perceptron.
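As a quick illustration of the forward pass such a network computes (a minimal NumPy sketch with made-up layer sizes, not the `homemade` implementation used below):
```
import numpy as np

def sigmoid(z):
    # non-linear activation applied by every non-input neuron
    return 1 / (1 + np.exp(-z))

x = np.random.rand(784)                                # one flattened 28x28 image
W1, b1 = np.random.rand(784, 25), np.random.rand(25)   # input -> hidden layer
W2, b2 = np.random.rand(25, 10), np.random.rand(10)    # hidden -> output layer (10 classes)

hidden = sigmoid(x @ W1 + b1)
scores = sigmoid(hidden @ W2 + b2)                     # one score per clothes category
```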
###Code
# To make debugging of multilayer_perceptron module easier we enable imported modules autoreloading feature.
# By doing this you may change the code of multilayer_perceptron library and all these changes will be available here.
%load_ext autoreload
%autoreload 2
# Add project root folder to module loading paths.
import sys
sys.path.append('../..')
###Output
_____no_output_____
###Markdown
Import Dependencies- [pandas](https://pandas.pydata.org/) - library that we will use for loading and displaying the data in a table- [numpy](http://www.numpy.org/) - library that we will use for linear algebra operations- [matplotlib](https://matplotlib.org/) - library that we will use for plotting the data- [math](https://docs.python.org/3/library/math.html) - math library that we will use to calculate sqaure roots etc.- [neural_network](https://github.com/trekhleb/homemade-machine-learning/blob/master/homemade/neural_network/multilayer_perceptron.py) - custom implementation of multilayer perceptron
###Code
# Import 3rd party dependencies.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import math
# Import custom multilayer perceptron implementation.
from homemade.neural_network import MultilayerPerceptron
###Output
_____no_output_____
###Markdown
Load the DataIn this demo we will use a sample of the [Fashion MNIST dataset in a CSV format](https://www.kaggle.com/zalando-research/fashionmnist). Fashion-MNIST is a dataset of Zalando's article images, consisting of a training set. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Zalando intends Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits. Instead of using the full dataset with 60000 training examples we will use a cut-down dataset of just 5000 examples that we will also split into training and testing sets. Each row in the dataset consists of 785 values: the first value is the label (a category from 0 to 9) and the remaining 784 values (28x28 pixels image) are the pixel values (a number from 0 to 255). Each training and test example is assigned to one of the following labels:- 0 T-shirt/top- 1 Trouser- 2 Pullover- 3 Dress- 4 Coat- 5 Sandal- 6 Shirt- 7 Sneaker- 8 Bag- 9 Ankle boot
###Code
# Load the data.
data = pd.read_csv('../../data/fashion-mnist-demo.csv')
# Let's create the mapping between numeric category and category name.
label_map = {
0: 'T-shirt/top',
1: 'Trouser',
2: 'Pullover',
3: 'Dress',
4: 'Coat',
5: 'Sandal',
6: 'Shirt',
7: 'Sneaker',
8: 'Bag',
9: 'Ankle boot',
}
# Print the data table.
data.head(10)
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Columns: 785 entries, label to pixel784
dtypes: int64(785)
memory usage: 29.9 MB
###Markdown
Plot the DataLet's peek at the first 25 rows of the dataset and display them as images to see examples of the clothes we will be working with.
###Code
# How many images to display.
numbers_to_display = 25
# Calculate the number of cells that will hold all the images.
num_cells = math.ceil(math.sqrt(numbers_to_display))
# Make the plot a little bit bigger than default one.
plt.figure(figsize=(10, 10))
# Go through the first images in a training set and plot them.
for plot_index in range(numbers_to_display):
# Extract image data.
digit = data[plot_index:plot_index + 1].values
digit_label = digit[0][0]
digit_pixels = digit[0][1:]
# Calculate image size (remember that each picture has square proportions).
image_size = int(math.sqrt(digit_pixels.shape[0]))
# Convert image vector into the matrix of pixels.
frame = digit_pixels.reshape((image_size, image_size))
# Plot the image matrix.
plt.subplot(num_cells, num_cells, plot_index + 1)
plt.imshow(frame, cmap='Greys')
plt.title(label_map[digit_label])
plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)
# Plot all subplots.
plt.subplots_adjust(hspace=0.5, wspace=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
Split the Data Into Training and Test SetsIn this step we will split our dataset into _training_ and _testing_ subsets (in proportion 80/20%). The training dataset will be used for training our model. The testing dataset will be used for validating the model. All data from the testing dataset will be new to the model, so we may check how accurate the model predictions are.
###Code
# Split data set on training and test sets with proportions 80/20.
# Function sample() returns a random sample of items.
pd_train_data = data.sample(frac=0.8)
pd_test_data = data.drop(pd_train_data.index)
# Convert training and testing data from Pandas to NumPy format.
train_data = pd_train_data.values
test_data = pd_test_data.values
# Extract training/test labels and features.
num_training_examples = 1000
x_train = train_data[:num_training_examples, 1:]
y_train = train_data[:num_training_examples, [0]]
x_test = test_data[:, 1:]
y_test = test_data[:, [0]]
###Output
_____no_output_____
###Markdown
Init and Train MLP Model> ☝🏻 This is the place where you might want to play with model configuration.> ⚠️ Be aware though that the training of the neural network with current parameters may take up to 15 minutes depending on the hardware. - `layers` - configuration of the multilayer perceptron layers (array of numbers where every number represents the number of neurons in a specific layer).- `max_iterations` - this is the maximum number of iterations that the gradient descent algorithm will use to find the minimum of a cost function. Low numbers may prevent gradient descent from reaching the minimum. High numbers will make the algorithm work longer without improving its accuracy.- `regularization_param` - parameter that will fight overfitting. The higher the parameter, the simpler the model will be.- `normalize_data` - boolean flag that indicates whether data normalization is needed or not.- `alpha` - the size of gradient descent steps. You may need to reduce the step size if gradient descent can't find the cost function minimum.
###Code
# Configure neural network.
layers = [
784, # Input layer - 28x28 input pixels.
25, # First hidden layer - 25 hidden units.
10, # Output layer - 10 labels, from 0 to 9.
];
normalize_data = True # Flag that detects whether we want to do features normalization or not.
epsilon = 0.12 # Defines the range for initial theta values.
max_iterations = 350 # Max number of gradient descent iterations.
regularization_param = 2 # Helps to fight model overfitting.
alpha = 0.1 # Gradient descent step size.
# Init neural network.
multilayer_perceptron = MultilayerPerceptron(x_train, y_train, layers, epsilon, normalize_data)
# Train neural network.
(thetas, costs) = multilayer_perceptron.train(regularization_param, max_iterations, alpha)
plt.plot(range(len(costs)), costs)
plt.xlabel('Gradient Steps')
plt.ylabel('Cost')
plt.show()
###Output
_____no_output_____
###Markdown
Illustrate Hidden Layers PerceptronsEach perceptron in the hidden layer learned something from the training process. What it learned is represented by its input theta parameters. Each perceptron in the hidden layer has 28x28 input thetas (one for each input image pixel). Each theta represents how valuable each pixel is for this particular perceptron. So let's try to plot how valuable each pixel of the input image is for each perceptron based on its theta values.
###Code
# Setup the number of layer we want to display.
# We want to display the first hidden layer.
layer_number = 1
# How many perceptrons to display.
num_perceptrons = len(thetas[layer_number - 1])
# Calculate the number of cells that will hold all the images.
num_cells = math.ceil(math.sqrt(num_perceptrons))
# Make the plot a little bit bigger than default one.
plt.figure(figsize=(10, 10))
# Go through the perceptrons plot what they've learnt.
for perceptron_index in range(num_perceptrons):
# Extract perceptron data.
perceptron = thetas[layer_number - 1][perceptron_index][1:]
# Calculate image size (remember that each picture has square proportions).
image_size = int(math.sqrt(perceptron.shape[0]))
# Convert image vector into the matrix of pixels.
frame = perceptron.reshape((image_size, image_size))
# Plot the image matrix.
plt.subplot(num_cells, num_cells, perceptron_index + 1)
plt.imshow(frame, cmap='Greys', vmin=np.amin(frame), vmax=np.amax(frame))
plt.title('Percep. #%s' % perceptron_index)
plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)
# Plot all subplots.
plt.subplots_adjust(hspace=0.5, wspace=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
Calculate Model Training PrecisionCalculate how many of the training and test examples have been classified correctly. Normally we need test precision to be as high as possible. If training precision is high and test precision is low it may mean that our model is overfitted (it works really well with the training dataset but it is not good at classifying new unknown data from the test dataset). In this case you may want to play with the `regularization_param` parameter to fight the overfitting.
###Code
# Make training set predictions.
y_train_predictions = multilayer_perceptron.predict(x_train)
y_test_predictions = multilayer_perceptron.predict(x_test)
# Check what percentage of them are actually correct.
train_precision = np.sum(y_train_predictions == y_train) / y_train.shape[0] * 100
test_precision = np.sum(y_test_predictions == y_test) / y_test.shape[0] * 100
print('Training Precision: {:5.4f}%'.format(train_precision))
print('Test Precision: {:5.4f}%'.format(test_precision))
###Output
Training Precision: 93.8000%
Test Precision: 80.6000%
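The gap between the training precision and the test precision above hints at some overfitting, so, as suggested earlier, `regularization_param` is the first knob to turn. A rough sketch of such a sweep is below (illustrative only: the candidate values are arbitrary and every iteration re-trains the network, so with the current settings this loop is slow):

```python
# Hypothetical sweep over regularization strengths (re-trains the model each time).
for reg in [0, 1, 2, 5, 10]:
    candidate = MultilayerPerceptron(x_train, y_train, layers, epsilon, normalize_data)
    candidate.train(reg, max_iterations, alpha)
    test_acc = np.sum(candidate.predict(x_test) == y_test) / y_test.shape[0] * 100
    print('regularization_param={:>2} -> test precision: {:5.2f}%'.format(reg, test_acc))
```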
###Markdown
Plot Test Dataset PredictionsIn order to illustrate how our model classifies unknown examples let's plot first 64 predictions for testing dataset. All green clothes on the plot below have been recognized correctly but all the red clothes have not been recognized correctly by our classifier. On top of each image you may see the class that has been recognized on the image.
###Code
# How many images to display.
numbers_to_display = 64
# Calculate the number of cells that will hold all the images.
num_cells = math.ceil(math.sqrt(numbers_to_display))
# Make the plot a little bit bigger than default one.
plt.figure(figsize=(15, 15))
# Go through the first images in a test set and plot them.
for plot_index in range(numbers_to_display):
# Extract digit data.
digit_label = y_test[plot_index, 0]
digit_pixels = x_test[plot_index, :]
# Predicted label.
predicted_label = y_test_predictions[plot_index][0]
# Calculate image size (remember that each picture has square proportions).
image_size = int(math.sqrt(digit_pixels.shape[0]))
# Convert image vector into the matrix of pixels.
frame = digit_pixels.reshape((image_size, image_size))
# Plot the image matrix.
color_map = 'Greens' if predicted_label == digit_label else 'Reds'
plt.subplot(num_cells, num_cells, plot_index + 1)
plt.imshow(frame, cmap=color_map)
plt.title(label_map[predicted_label])
plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)
# Plot all subplots.
plt.subplots_adjust(hspace=0.5, wspace=0.5)
plt.show()
###Output
_____no_output_____ |
seating_detection_data_analyze.ipynb | ###Markdown
###Code
import glob
import numpy as np
import matplotlib.pyplot as plt

#file_paths = glob.glob(".\\data\\*")
file_paths = glob.glob("./data/*")
#print(file_paths)
category =np.empty((0,1), float)
rssi =np.empty((0,100), float)
for file in file_paths:
d = np.loadtxt(file, delimiter=',')
category_tmp, rssi_tmp = np.hsplit(d, [1])
rssi = np.concatenate([rssi, rssi_tmp], axis=0)
category = np.concatenate([category, category_tmp], axis=0)
all_data_count = category.shape[0]
seating_data_count = np.count_nonzero(category > 0)
print("all data count : ", all_data_count)
data_pie = [seating_data_count, all_data_count - seating_data_count]
label = ["Seating", "NOT Seating"]
plt.pie(data_pie, labels=label, counterclock=False, startangle=90, autopct="%1.1f%%")
#file_paths = glob.glob(".\\data\\*")
file_paths = glob.glob("./data/*")
all_data = np.empty((0,101), float)
for file in file_paths:
d = np.loadtxt(file, delimiter=',')
all_data = np.concatenate([all_data, d], axis=0)
print(all_data.shape)
seating_data = np.empty((0,101), float)
not_seating_data = np.empty((0,101), float)
max_data = []
min_data = []
mean_data = []
median_data = []
std_data = []
diff_data = []
category_data = []
for data in all_data:
#print(data)
#if data[0] == 1.0:
category_data.append(data[0]*10-30)
data = np.delete(data, 0)
max_data.append(data.max())
min_data.append(data.min())
diff_data.append(data.max() - data.min())
mean_data.append(data.mean())
median_data.append(np.median(data))
std_data.append(np.std(data))
max_data2 = []
min_data2 = []
mean_data2 = []
median_data2 = []
std_data2 = []
category_data2 = []
for i in range(len(std_data)):
if std_data[i] < 1.0:
max_data2.append(max_data[i])
min_data2.append(min_data[i])
mean_data2.append(mean_data[i])
median_data2.append(median_data[i])
category_data2.append(category_data[i])
plt.plot(max_data)
plt.plot(min_data)
plt.plot(mean_data)
plt.plot(category_data)
#plt.title('max rssi')
plt.ylabel('rssi')
plt.xlabel('number')
plt.legend(['max', 'min', 'mean', 'category'], loc='upper left')
plt.show()
plt.plot(mean_data)
plt.plot(median_data)
#plt.plot(category_data)
#plt.title('max rssi')
plt.ylabel('rssi')
plt.xlabel('number')
plt.legend(['mean', 'median'], loc='upper left')
plt.show()
plt.hist(std_data)
plt.show()
plt.plot(max_data2)
plt.plot(min_data2)
plt.plot(mean_data2)
plt.plot(median_data2)
#plt.plot(category_data2)
#plt.title('max rssi')
plt.ylabel('rssi')
plt.xlabel('number')
plt.legend(['max2', 'min2', 'mean2', 'median2', 'category2'], loc='upper left')
plt.show()
plt.plot(diff_data)
#plt.plot(category_data)
#plt.title('max rssi')
plt.ylabel('rssi')
plt.xlabel('number')
plt.legend(['diff', 'category2'], loc='upper left')
plt.show()
file_paths = glob.glob(".\\data2\\*")
print(file_paths)
fig = plt.figure(figsize=(20,10))
GRAPHE_NUM = 4
X_LIM_MAX = -30
X_LIM_MIN = -70
X_TICKS = 9
Y_LIM_MAX = 0.3
Y_LIM_MIN = 0
Y_TICKS = 7
graph_count = 1
for file in file_paths:
d = np.loadtxt(file, delimiter=',')
cat, rssi = np.hsplit(d, [1])
rssi_max = rssi.max()
rssi_min = rssi.min()
rssi_mean = rssi.mean()
rssi_median = np.median(rssi)
rssi_width = np.abs(rssi.max()-rssi.min())
ax = fig.add_subplot(GRAPHE_NUM // 2, 2, graph_count)  # integer division: subplot counts must be ints
plt.xlim(X_LIM_MIN, X_LIM_MAX)
plt.xticks(np.linspace(X_LIM_MIN, X_LIM_MAX, X_TICKS))
plt.ylim(Y_LIM_MIN, Y_LIM_MAX)
plt.yticks(np.linspace(Y_LIM_MIN, Y_LIM_MAX, Y_TICKS))
plt.text(X_LIM_MIN, Y_LIM_MIN,\
'MAX : ' + '{:.1f}'.format(rssi_max) + '\n'\
'MIN : ' + '{:.1f}'.format(rssi_min) + '\n'\
'MEAN : ' + '{:.1f}'.format(rssi_mean) + '\n'\
'MEDIAN : ' + '{:.1f}'.format(rssi_median) + '\n'\
, fontsize=13)
if graph_count % 2 == 1:
plt.title('NOT seating hist\n( ' + file + ' )')
(a_hist3, a_bins3, _) = ax.hist(rssi, bins=int(rssi_width), density=True, color="tab:green")
else:
plt.title('seating hist\n( ' + file + ' )')
(a_hist3, a_bins3, _) = ax.hist(rssi, bins=int(rssi_width), density=True, color="tab:orange")
graph_count += 1
fig.show()
###Output
['.\\data2\\20200402_not_seating.csv', '.\\data2\\20200402_seating.csv', '.\\data2\\20200406_not_seating.csv', '.\\data2\\20200406_seating.csv']
|
use-gpu.ipynb | ###Markdown
GPU Computing: So far we have been using the CPU for all computation. For complex neural networks and large-scale data, computing on the CPU may not be efficient enough. In this section we describe how to use a single NVIDIA GPU for computation. First, make sure at least one NVIDIA GPU is installed. Then download CUDA and follow the prompts to set the appropriate paths (see the ["Using AWS to Run Code"](../chapter_appendix/aws.ipynb) section in the appendix). Once these preparations are done, the `nvidia-smi` command can be used to view the graphics card information.
###Code
!nvidia-smi # works for Linux/macOS users
###Output
Mon Aug 19 16:44:05 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.34 Driver Version: 430.34 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce MX150 Off | 00000000:01:00.0 Off | N/A |
| N/A 68C P0 N/A / N/A | 1290MiB / 2002MiB | 63% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1367 G /usr/lib/xorg/Xorg 27MiB |
| 0 1917 G /usr/lib/xorg/Xorg 148MiB |
| 0 2089 G /usr/bin/gnome-shell 263MiB |
| 0 2910 G ...equest-channel-token=622082868255724596 10MiB |
| 0 4905 G ...uest-channel-token=11903961447382698150 105MiB |
| 0 9112 G ...quest-channel-token=3355754333226412869 16MiB |
| 0 12618 C ...unyuan/miniconda3/envs/mxnet/bin/python 657MiB |
+-----------------------------------------------------------------------------+
###Markdown
Next, we need to confirm that the GPU version of MXNet is installed. For installation instructions see the ["Getting and Running the Book's Code"](../chapter_prerequisite/install.ipynb) section. Running the programs in this section requires at least two GPUs. Computing Devices: MXNet can specify the device used for storage and computation, such as a CPU that uses main memory or a GPU that uses GPU memory. By default, MXNet creates data in main memory and then uses the CPU for computation. In MXNet, `mx.cpu()` (or any integer inside the parentheses) represents all physical CPUs and memory, which means MXNet's computation will try to use all CPU cores. In contrast, `mx.gpu()` only represents one GPU and its memory. If there are multiple GPUs, we use `mx.gpu(i)` to denote the $i$-th GPU and its memory ($i$ starts from 0), and `mx.gpu(0)` is equivalent to `mx.gpu()`.
###Code
import mxnet as mx
from mxnet import nd
from mxnet.gluon import nn
mx.cpu(), mx.gpu(), mx.gpu(1)
###Output
_____no_output_____
###Markdown
GPU Computation with `NDArray`: By default, an `NDArray` is stored in main memory. That is why we saw the `@cpu(0)` tag every time we printed an `NDArray` before.
###Code
x = nd.array([1, 2, 3])
x
###Output
_____no_output_____
###Markdown
We can check which device an `NDArray` resides on through its `context` attribute.
###Code
x.context
###Output
_____no_output_____
###Markdown
Storage on the GPU: There are several ways to store an `NDArray` in GPU memory. For example, we can specify the storage device with the `ctx` parameter when creating an `NDArray`. Below we create the `NDArray` variable `a` on `gpu(0)`. Note that when `a` is printed, the device information becomes `@gpu(0)`. An `NDArray` created in GPU memory only consumes the memory of that graphics card, and we can check GPU memory usage with the `nvidia-smi` command. In general, we need to make sure we do not create data that exceeds the GPU memory limit.
###Code
a = nd.array([1, 2, 3], ctx=mx.gpu())
a
###Output
_____no_output_____
###Markdown
Assuming there are at least two GPUs, the following code creates a random array on `gpu(1)`.
###Code
B = nd.random.uniform(shape=(2, 3), ctx=mx.gpu(1))
B
###Output
_____no_output_____
###Markdown
Besides specifying a device at creation time, we can also transfer data between devices with the `copyto` and `as_in_context` functions. Below we copy the `NDArray` variable `x` in main memory to `gpu(0)`.
###Code
y = x.copyto(mx.gpu())
y
z = x.as_in_context(mx.gpu())
z
###Output
_____no_output_____
###Markdown
The distinction to keep in mind is that if the source and target variables share the same `context`, the `as_in_context` function makes the target variable share the source variable's memory or GPU memory.
###Code
y.as_in_context(mx.gpu()) is y
###Output
_____no_output_____
###Markdown
In contrast, the `copyto` function always allocates new memory or GPU memory for the target variable.
###Code
y.copyto(mx.gpu()) is y
###Output
_____no_output_____
###Markdown
Computation on the GPU: MXNet performs computation on the device specified by the data's `context` attribute. To use the GPU for computation, we only need to store the data in GPU memory beforehand. The result of the computation is automatically saved in the memory of the same GPU.
###Code
(z + 2).exp() * y
###Output
_____no_output_____
###Markdown
Note that MXNet requires all input data of a computation to be in main memory or in the memory of the same graphics card. This design choice was made because data transfer between the CPU and different GPUs is usually time-consuming, so MXNet expects the user to state explicitly that the inputs of a computation live in main memory or on the same GPU. For example, operating on the `NDArray` variable `x` in main memory together with the `NDArray` variable `y` in GPU memory produces an error message. When we print an `NDArray` or convert it to NumPy format, MXNet first copies the data to main memory if it is not already there, which incurs extra transfer overhead. GPU Computation with Gluon: Similar to `NDArray`, a Gluon model can specify its device through the `ctx` parameter at initialization. The following code initializes the model parameters in GPU memory.
###Code
net = nn.Sequential()
net.add(nn.Dense(1))
net.initialize(ctx=mx.gpu())
###Output
_____no_output_____
###Markdown
When the input is an `NDArray` in GPU memory, Gluon computes the result in the memory of the same GPU.
###Code
net(y)
###Output
_____no_output_____
###Markdown
Below we confirm that the model parameters are stored in the memory of the same GPU.
###Code
net[0].weight.data()
###Output
_____no_output_____ |
wei/p02.ipynb | ###Markdown
p.2 Stop Words
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('w36-U-ccajM')
###Output
_____no_output_____
###Markdown
1. Definition"Useless words" 1.1 Some word that indicates a discontinuance of analysis 1.2 Some word that has nothing to do with the analysisa, the, and...
###Code
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
example_sentence = "This is an example showing off stop word filtration."
stop_words = set(stopwords.words("english"))
# print(stop_words)
words = word_tokenize(example_sentence)
# filtered_sentence = []
# for w in words:
# if w not in stop_words:
# filtered_sentence.append(w)
# print(filtered_sentence)
filtered_sentence = [w for w in words if not w in stop_words]
print(filtered_sentence)
###Output
['This', 'example', 'showing', 'stop', 'word', 'filtration', '.']
|
Chapter19/chapter19.ipynb | ###Markdown
Scientific Computing with Python (Second Edition) Chapter 19: Comprehensive Examples*We start by importing all from Numpy. As explained in Chapter 01 the examples are written assuming this import is initially done.*
###Code
from numpy import *
###Output
_____no_output_____
###Markdown
19.1 Polynomials*Here we give the complete code, which was developed step-wise in the book Section 19.1* 19.1.3 The polynomial class
###Code
import scipy.linalg as sl
import matplotlib.pyplot as mp
class PolyNomial:
base='monomial'
def __init__(self,**args):
if 'points' in args:
self.points = array(args['points'])
self.xi = self.points[:,0]
self.coeff = self.point_2_coeff()
self.degree = len(self.coeff)-1
elif 'coeff' in args:
self.coeff = array(args['coeff'])
self.degree = len(self.coeff)-1
self.points = self.coeff_2_point()
else:
self.points = array([[0,0]])
self.xi = array([1.])
self.coeff = self.point_2_coeff()
self.degree = 0
def point_2_coeff(self):
return sl.solve(vander(self.x),self.y)
def coeff_2_point(self):
points = [[x,self(x)] for x in linspace(0,1,self.degree+1)]
return array(points)
def __call__(self,x):
return polyval(self.coeff,x)
@property
def x(self):
return self.points[:,0]
@property
def y(self):
return self.points[:,1]
def __repr__(self):
txt = f'Polynomial of degree {self.degree} \n'
txt += f'with coefficients {self.coeff} \n in {self.base} basis.'
return txt
margin = .05
plotres = 500
def plot(self,ab=None,plotinterp=True):
if ab is None: # guess a and b
x = self.x
a, b = x.min(), x.max()
h = b-a
a -= self.margin*h
b += self.margin*h
else:
a,b = ab
x = linspace(a,b,self.plotres)
y = vectorize(self.__call__)(x)
mp.plot(x,y)
mp.xlabel('$x$')
mp.ylabel('$p(x)$')
if plotinterp:
mp.plot(self.x, self.y, 'ro')
def companion(self):
companion = eye(self.degree, k=-1)
companion[0,:] -= self.coeff[1:]/self.coeff[0]
return companion
def zeros(self):
companion = self.companion()
return sl.eigvals(companion)
###Output
_____no_output_____
###Markdown
19.1.4 Usage examples of the polynomial class
###Code
p = PolyNomial(points=[(1,0),(2,3),(3,8)])
p.coeff
p.plot((-3.5,3.5))
pz = p.zeros()
print(pz)
###Output
[-1.+0.j 1.+0.j]
###Markdown
19.1.5 Newton polynomial*Again we present here the entire class directly and not the step-wise development as given in the text*
###Code
class NewtonPolynomial(PolyNomial):
base = 'Newton'
def __init__(self,**args):
if 'coeff' in args:
try:
self.xi = array(args['xi'])
except KeyError:
raise ValueError('Coefficients need to be given'
'together with abscissae values xi')
super(NewtonPolynomial, self).__init__(**args)
def point_2_coeff(self):
return array(list(self.divdiff()))
def divdiff(self):
xi = self.xi
row = self.y
yield row[0]
for level in range(1,len(xi)):
row = (row[1:] - row[:-1])/(xi[level:] - xi[:-level])
if allclose(row,0): # check: elements of row nearly zero
self.degree = level-1
break
yield row[0]
def __call__(self,x):
# first compute the sequence 1, (x-x_1), (x-x_1)(x-x_2),...
nps = hstack([1., cumprod(x-self.xi[:self.degree])])
return self.coeff@nps
def companion(self):
degree = self.degree
companion = eye(degree, k=-1)
diagonal = identity(degree,dtype=bool)
companion[diagonal] = self.x[:degree]
companion[:,-1] -= self.coeff[:degree]/self.coeff[degree]
return companion
# here we define the interpolation data: (x,y) pairs
pts = array([[0.,0],[.5,1],[1.,0],[2,0.]])
pN = NewtonPolynomial(points=pts) # this creates an instance of the
# polynomial class
pN.coeff # returns the coefficients array([0. , 2. , -4. , 2.66666667])
print(pN)
###Output
Polynomial of degree 3
with coefficients [ 0. 2. -4. 2.66666667]
in Newton basis.
###Markdown
*Here we make two demonstrations, which are not presented in the book*
###Code
print(pN.zeros())
pN.plot()
###Output
_____no_output_____
###Markdown
19.2 Spectral clustering We make first a general import as stated in Chapter 1 of the book:
###Code
from numpy import *
from matplotlib.pyplot import *
import scipy.linalg as sl
# create some data points
n = 100
x1 = 1.2 * random.randn(n, 2)
x2 = 0.8 * random.randn(n, 2) + tile([7, 0],(n, 1))
x = vstack((x1, x2))
# pairwise distance matrix
M = array([[ sqrt(sum((x[i] - x[j])**2))
for i in range(2*n)]
for j in range(2 * n)])
# create the Laplacian matrix
D = diag(1 / sqrt( M.sum(axis = 0) ))
L = identity(2 * n) - dot(D, dot(M, D))
# compute eigenvectors of L
S, V = sl.eig(L)
# As L is symmetric the imaginary parts
# in the eigenvalues are only due to negligible numerical errors
S = S.real
V = V.real
largest=abs(S).argmax()
plot(V[:,largest])
import scipy.linalg as sl
import scipy.cluster.vq as sc
# simple 4 class data
x = random.rand(1000,2)
ndx = ((x[:,0] < 0.4) | (x[:,0] > 0.6)) & \
((x[:,1] < 0.4) | (x[:,1] > 0.6))
x = x[ndx]
n = x.shape[0]
# pairwise distance matrix
M = array([[ sqrt(sum((x[i]-x[j])**2)) for i in range(n) ]
for j in range(n)])
# create the Laplacian matrix
D = diag(1 / sqrt( M.sum(axis=0) ))
L = identity(n) - dot(D, dot(M, D))
# compute eigenvectors of L
_,_,V = sl.svd(L)
k = 4
# take k first eigenvectors
eigv = V[:k,:].T
# k-means
centroids,dist = sc.kmeans(eigv,k)
clust_id = sc.vq(eigv,centroids)[0]
U, S, V = sl.svd(L)
_, _, V = sl.svd(L)
for i in range(k):
ndx = where(clust_id == i)[0]
plot(x[ndx, 0], x[ndx, 1],'o')
axis('equal')
###Output
_____no_output_____
###Markdown
Note, the plot above need not be identical to that presented in the book, as it is generated from random data. 19.3 Solving initial value problems
###Code
class IV_Problem:
"""
Initial value problem (IVP) class
"""
def __init__(self, rhs, y0, interval, name='IVP'):
"""
rhs 'right hand side' function of the ordinary differential
equation f(t,y)
y0 array with initial values
interval start and end value of the interval of independent
variables often initial and end time
name descriptive name of the problem
"""
self.rhs = rhs
self.y0 = y0
self.t0, self.tend = interval
self.name = name
def rhs(t,y):
g = 9.81
l = 1.
yprime = array([y[1], g / l * sin(y[0])])
return yprime
pendulum = IV_Problem(rhs, array([pi / 2, 0.]), [0., 10.] ,
'mathem. pendulum')
class IVPsolver:
"""
IVP solver class for explicit one-step discretization methods
with constant step size
"""
def __init__(self, problem, discretization, stepsize):
self.problem = problem
self.discretization = discretization
self.stepsize = stepsize
def one_stepper(self):
yield self.problem.t0, self.problem.y0
ys = self.problem.y0
ts = self.problem.t0
while ts <= self.problem.tend:
ts, ys = self.discretization(self.problem.rhs, ts, ys,
self.stepsize)
yield ts, ys
def solve(self):
return list(self.one_stepper())
def expliciteuler(rhs, ts, ys, h):
return ts + h, ys + h * rhs(ts, ys)
def rungekutta4(rhs, ts, ys, h):
k1 = h * rhs(ts, ys)
k2 = h * rhs(ts + h/2., ys + k1/2.)
k3 = h * rhs(ts + h/2., ys + k2/2.)
k4 = h * rhs(ts + h, ys + k3)
return ts + h, ys + (k1 + 2*k2 + 2*k3 + k4)/6.
pendulum_Euler = IVPsolver(pendulum, expliciteuler, 0.001)
pendulum_RK4 = IVPsolver(pendulum, rungekutta4, 0.001)
sol_Euler = pendulum_Euler.solve()
sol_RK4 = pendulum_RK4.solve()
tEuler, yEuler = zip(*sol_Euler)
tRK4, yRK4 = zip(*sol_RK4)
subplot(1,2,1), plot(tEuler,yEuler),\
title('Pendulum result with Explicit Euler'),\
xlabel('Time'), ylabel('Angle and angular velocity')
subplot(1,2,2), plot(tRK4,abs(array(yRK4)-array(yEuler))),\
title('Difference between both methods'),\
xlabel('Time'), ylabel('Angle and angular velocity')
###Output
_____no_output_____
|
dunovo_QualityTest.ipynb | ###Markdown
Different du novo pipeline parameters: G137_Br 1. Number of variants, overlaps & genome distribution
###Code
## Compare the variants (number, genome distribution & mutational spectrum) called based on four DCS datasets
## from du novo pipelines with different parameters (min base quality & percentage of the same base called)
# Note: use %cd (not !cd) so the working-directory change persists across cells
%cd /Users/Bruce/Google_Drive/de_novo_Mutation/mouse/mouse_denovo/du_novo_quality/
!ls
!wc -l S*
###Output
S1_Galaxy88-[G137_Br_25_70].vcf S3_Galaxy61-[G137_Br_28_80].vcf
S2-S1_overlap.bed S4-S1_overlap.bed
S2_Galaxy35-[G137_Br_28_70].vcf S4_Galaxy87-[G137_Br_30_70].vcf
S3-S1_overlap.bed dunovo_QualityTest.ipynb
5040 S1_Galaxy88-[G137_Br_25_70].vcf
4547 S2-S1_overlap.bed
4688 S2_Galaxy35-[G137_Br_28_70].vcf
4531 S3-S1_overlap.bed
4683 S3_Galaxy61-[G137_Br_28_80].vcf
4413 S4-S1_overlap.bed
4636 S4_Galaxy87-[G137_Br_30_70].vcf
32538 total
###Markdown
Number of variants & overlaps: 4 combinations of parameters
###Code
!head S1_Galaxy88-\[G137_Br_25_70\].vcf
!wc -l S1_Galaxy88-\[G137_Br_25_70\].vcf
!bedtools intersect -a S1_Galaxy88-\[G137_Br_25_70\].vcf -b S1_Galaxy88-\[G137_Br_25_70\].vcf | wc -l
!bedtools intersect -a S2_Galaxy35-\[G137_Br_28_70\].vcf -b S1_Galaxy88-\[G137_Br_25_70\].vcf > S2-S1_overlap.bed
!wc -l S2-S1_overlap.bed
!bedtools intersect -a S3_Galaxy61-\[G137_Br_28_80\].vcf -b S1_Galaxy88-\[G137_Br_25_70\].vcf > S3-S1_overlap.bed
!wc -l S3-S1_overlap.bed
!bedtools intersect -a S4_Galaxy87-\[G137_Br_30_70\].vcf -b S1_Galaxy88-\[G137_Br_25_70\].vcf > S4-S1_overlap.bed
!wc -l S4-S1_overlap.bed
%load_ext rpy2.ipython
%%R
setwd("/Users/Bruce/Google_Drive/de_novo_Mutation/mouse/mouse_denovo/du_novo_quality/")
%%R
library("rtracklayer")
%%R
library(karyoploteR)
library(GenomicRanges)
###Output
_____no_output_____
###Markdown
G137_Br 1: Min base quality = 25, Percentage of the same base called = 70% (Default)
###Code
%%R
#Sample 1: min base quality=25; percentage of the same base called=70%
G137_Br<-read.table("S1_Galaxy88-[G137_Br_25_70].vcf",sep="\t")[,1:3]
names(G137_Br)=c('chr','start','end')
#class(G137_Br)
#head(G137_Br)
## Karyotype plot with "karyoploteR",
gains <- makeGRangesFromDataFrame(G137_Br) ## Import target positions
head(gains)
length(gains)
%%R
## Plot Sample 1
kp<-plotKaryotype(genome="mm10", main="S1:G137_Br") ## Set genome assembly
#kpPlotRegions(kp, gains,col="red",avoid.overlapping=FALSE ) ## Choose color
getCytobandColors(color.table=NULL, color.schema=c("only.centromeres"))
kpPlotRegions(kp, gains,col="darkorange") ## Choose color
###Output
_____no_output_____
###Markdown
G137_Br 2: Min base quality = 28, Percentage of the same base called = 70%
###Code
%%R
#Sample 2: min base quality=28; percentage of the same base called=70%
G137_Br<-read.table("S2_Galaxy35-[G137_Br_28_70].vcf",sep="\t")[,1:3]
names(G137_Br)=c('chr','start','end')
#class(G137_Br)
#head(G137_Br)
## Karyotype plot with "karyoploteR",
gains <- makeGRangesFromDataFrame(G137_Br) ## Import target positions
head(gains)
length(gains)
%%R
## Plot Sample 2
kp <- plotKaryotype(genome="mm10", main="S2:G137_Br") ## Set genome assembly
#kpPlotRegions(kp, gains,col="red",avoid.overlapping=FALSE ) ## Choose color
getCytobandColors(color.table=NULL, color.schema=c("only.centromeres"))
kpPlotRegions(kp, gains,col="green") ## Choose color
###Output
_____no_output_____
###Markdown
G137_Br 3: Min base quality = 28, Percentage of the same base called = 80%
###Code
%%R
#Sample 3: min base quality=28; percentage of the same base called=80%
G137_Br<-read.table("S3_Galaxy61-[G137_Br_28_80].vcf",sep="\t")[,1:3]
names(G137_Br)=c('chr','start','end')
#class(G137_Br)
#head(G137_Br)
## Karyotype plot with "karyoploteR",
gains <- makeGRangesFromDataFrame(G137_Br) ## Import target positions
head(gains)
length(gains)
%%R
## Plot Sample 3: min base quality=28; percentage of the same base called=80%
kp <- plotKaryotype(genome="mm10", main="S3:G137_Br") ## Set genome assembly
#kpPlotRegions(kp, gains,col="red",avoid.overlapping=FALSE ) ## Choose color
getCytobandColors(color.table=NULL, color.schema=c("only.centromeres"))
kpPlotRegions(kp, gains,col="blue") ## Choose color
###Output
_____no_output_____
###Markdown
G137_Br 4: Min base quality = 30, Percentage of the same base called = 70%
###Code
%%R
#Sample 4: min base quality=30; percentage of the same base called=70%
G137_Br<-read.table("S4_Galaxy87-[G137_Br_30_70].vcf",sep="\t")[,1:3]
names(G137_Br)=c('chr','start','end')
#class(G137_Br)
#head(G137_Br)
## Karyotype plot with "karyoploteR",
gains <- makeGRangesFromDataFrame(G137_Br) ## Import target positions
head(gains)
length(gains)
%%R
## Plot Sample 4
kp <- plotKaryotype(genome="mm10", main="S4:G137_Br") ## Set genome assembly
#kpPlotRegions(kp, gains,col="red",avoid.overlapping=FALSE ) ## Choose color
getCytobandColors(color.table=NULL, color.schema=c("only.centromeres"))
kpPlotRegions(kp, gains,col="red") ## Choose color
###Output
_____no_output_____
###Markdown
2. G137_Br Samples 1-4: Mutational Spectrum
###Code
%load_ext rpy2.ipython
%%R
## Check the mutational spectrum
require(MutationalPatterns)
#require(BSgenome.Mmusculus.UCSC.mm10)
%%R
setwd("/Users/Bruce/Google_Drive/de_novo_Mutation/mouse/mouse_denovo/du_novo_quality/spetrum/")
# Input sample names
#sample_names <- c ( "G137_Br", "G137_M", "G137p2_Br", "G137p2_M","G137p3_Br", "G137p3_M", "G137p5_Br", "G137p5_M")
vcf_files <- list.files(pattern = ".vcf", full.names = FALSE)
vcf_files
!head -15 S1_Galaxy88-[G137_Br_25_70].vcf
%%R
## Load the reference genome
library(BSgenome.Mmusculus.UCSC.mm10)
ref_genome <- "BSgenome.Mmusculus.UCSC.mm10"
library("BSgenome")
library(ref_genome, character.only = TRUE)
# This function loads the files as GRanges objects
sample_names<-vcf_files
vcfs <- read_vcfs_as_granges(vcf_files, sample_names, ref_genome)
## Get the type occurrences for all VCF objects.
type_occurrences = mut_type_occurrences(vcfs, ref_genome)
%%R
## Plot the point mutation spectrum over all samples
plot_spectrum(type_occurrences, CT=FALSE)
%%R
# plot by sample tissues
par(mfrow=c(2,2))
tissue<-sample_names
plot_spectrum(type_occurrences, by = tissue, CT = FALSE)
###Output
_____no_output_____ |
Crude with other stock prediction based on FRED.ipynb | ###Markdown
Data Acquisition Overview: All data is at the daily level, represented as a volume weighted average. Data is acquired all the way back to 2011, the longest period to obtain a reasonably complete dataset. The sections below constitute a data dictionary for the columns utilized in this inquiry.

US Equity Indices: Given the importance of equity markets to the health of the overall economy, as well as the media's obsession with their movements, daily time series of the following were included:
- SP500: [SPX S&P 500 Index](https://us.spindices.com/indices/equity/sp-500) of large-cap US equities
- NASDAQCOM: [Nasdaq Composite Index](http://money.cnn.com/data/markets/nasdaq/) of large-cap US equities
- DJIA: [Dow Jones Industrial Average](https://quotes.wsj.com/index/DJIA) of US equities
- RU2000PR: [Russell 2000 Price Index](https://fred.stlouisfed.org/series/RU2000PR) of US equities

The `pandas_datareader.data` and `quandl` APIs were used to acquire this information.

Traditional Currencies: The [St. Louis Federal Reserve's FRED API](https://fred.stlouisfed.org/) was accessed using the [`pandas_datareader.data`](https://pandas-datareader.readthedocs.io/en/latest/) API to gather currency exchange rates of the US Dollar against the Japanese Yen, the Euro, the Chinese Yuan, the Mexican Peso, and the Australian Dollar:
- DEXCHUS: [Chinese Yuan to USD](https://fred.stlouisfed.org/series/DEXCHUS)
- DEXJPUS: [Japanese Yen to USD](https://fred.stlouisfed.org/series/DEXJPUS)
- DEXUSEU: [USD to European Union's Euro](https://fred.stlouisfed.org/series/DEXUSEU)
- DEXMXUS: [Mexican New Pesos to USD](https://fred.stlouisfed.org/series/DEXMXUS)
- DEXUSAL: [USD to Australian Dollar](https://fred.stlouisfed.org/series/DEXUSAL)

Debt Market Indicators: A ladder of bond market indicators is represented in the data by LIBOR rates at various maturities. Specifically, LIBOR is included at overnight, 1-month, 3-month and 12-month maturities. To (very crudely) represent the consumer and the corporate markets we also included indices representing high yield returns and prime corporate debt returns.
- USDONTD156N: [Overnight London Interbank Offered Rate (LIBOR)](https://fred.stlouisfed.org/series/USDONTD156N) based on USD
- USD1MTD156N: [One Month London Interbank Offered Rate (LIBOR)](https://fred.stlouisfed.org/series/USD1MTD156N) based on USD
- USD3MTD156N: [Three Month London Interbank Offered Rate (LIBOR)](https://fred.stlouisfed.org/series/USD3MTD156N) based on USD
- USD12MD156N: [Twelve Month London Interbank Offered Rate (LIBOR)](https://fred.stlouisfed.org/series/USD12MD156N) based on USD
- BAMLHYH0A0HYM2TRIV: [ICE BofAML US High Yield Total Return Index Value](https://fred.stlouisfed.org/series/BAMLHYH0A0HYM2TRIV)
- BAMLCC0A1AAATRIV: [ICE BofAML US Corp AAA Total Return Index Value](https://fred.stlouisfed.org/series/BAMLCC0A1AAATRIV)

These series were also acquired from the St. Louis Fed's FRED API.

Commodity Prices: We chose to include series that represent the oil market and the gold market, two assets that are not strongly tied to the others mentioned.
- GOLDAMGBD228NLBM: [Gold Fixing Price 10:30 AM (London Time) in London Bullion Market, based on USD](https://fred.stlouisfed.org/series/GOLDAMGBD228NLBM)
- DCOILWTICO: [West Texas Intermediate (WTI) - Cushing, Oklahoma](https://fred.stlouisfed.org/series/DCOILWTICO)

These series were also acquired from the St. Louis Fed's FRED API.
Energy-Related Series: To ensure we are getting signal from the energy sector, data on natural gas and energy sector volatility is gathered. This data is also acquired from the St. Louis Fed's FRED API.
- MHHNGSP: [Henry Hub Natural Gas Spot Price](https://fred.stlouisfed.org/series/MHHNGSP)
- VXXLECLS: [CBOE Energy Sector ETF Volatility Index](https://fred.stlouisfed.org/series/VXXLECLS)

Call FRED API: Below, a simple function `get_fred_data` is defined to call the Saint Louis Fed's FRED API via pandas_datareader.
###Code
import pandas as pd
import pandas_datareader.data as web
import quandl
from datetime import datetime
import warnings
warnings.filterwarnings('ignore')
pd.set_option('max_columns', 999)
pd.set_option('max_rows', 99999)
from fredapi import Fred
series_list = ['SP500', 'NASDAQCOM', 'DJIA','BOGMBASEW', 'DEXJPUS', 'DEXUSEU', 'DEXCHUS', 'DEXUSAL','VIXCLS','USDONTD156N',
'USD1MTD156N', 'USD3MTD156N', 'USD12MD156N',
'BAMLHYH0A0HYM2TRIV', 'BAMLCC0A1AAATRIV','GOLDAMGBD228NLBM',
'DCOILWTICO','MHHNGSP','VXXLECLS'] # cboe energy sector etf volatility
start = datetime(2015, 1, 1)
end = datetime.now()
def get_fred_data(series_list, start, end):
fred_df = pd.DataFrame()
for i, series in enumerate(series_list):
print('Calling FRED API for Series: {}'.format(series)),
if i == 0:
fred_df = web.get_data_fred(series, start, end)
else:
_df = web.get_data_fred(series, start, end)
fred_df = fred_df.join(_df, how='outer')
return fred_df
econ_df = get_fred_data(series_list, start, end)
econ_df.head()
import numpy as np
def generate_calendar(year, drop_index=False):
from pandas.tseries.offsets import YearEnd
from pandas.tseries.holiday import USFederalHolidayCalendar
start_date = pd.to_datetime('1/1/'+str(year))
end_date = start_date + YearEnd()
DAT = pd.date_range(str(start_date), str(end_date), freq='D')
MO = [d.strftime('%B') for d in DAT]
holidays = USFederalHolidayCalendar().holidays(start=start_date, end=end_date)
cal_df = pd.DataFrame({'date':DAT, 'month':MO})
cal_df['year'] = [format(d, '%Y') for d in DAT]
cal_df['weekday'] = [format(d, '%A') for d in DAT]
cal_df['is_weekday'] = cal_df.weekday.isin(['Monday','Tuesday','Wednesday','Thursday','Friday'])
cal_df['is_weekday'] = cal_df['is_weekday'].astype(int)
cal_df['is_holiday'] = cal_df['date'].isin(holidays)
cal_df['is_holiday'] = cal_df['is_holiday'].astype(int)
cal_df['is_holiday_week'] = cal_df.is_holiday.rolling(window=7,center=True,min_periods=1).sum()
cal_df['is_holiday_week'] = cal_df['is_holiday_week'].astype(int)
if not drop_index: cal_df.set_index('date', inplace=True)
return cal_df
def make_calendars(year_list, drop_index):
cal_df = pd.DataFrame()
for year in year_list:
cal_df = cal_df.append(generate_calendar(year, drop_index=drop_index))
return cal_df
year_list = [str(int(i)) for i in np.arange(2015, 2020)]
cal_df = make_calendars(year_list, drop_index=False)
cal_df.head()
econ_df = econ_df.join(cal_df, how='outer')
econ_df = econ_df.fillna(method='bfill')
econ_df = econ_df.fillna(method='ffill')
from datetime import datetime as dt
#drop future records introduced from the calendar function
before_future = pd.to_datetime(econ_df.index.values) <= dt.now()
econ_df = econ_df.loc[before_future]
econ_df = pd.get_dummies(econ_df,
columns=['month', 'year', 'weekday'],
drop_first=True)
econ_df.columns = [str.lower(s) for s in econ_df.columns]
print(econ_df.columns.tolist())
# Save original data to a dictionary
data = dict()
data['original'] = econ_df
econ_df.to_csv('econ_df.csv')
###Output
_____no_output_____
###Markdown
Feature Engineering: Two methods were undertaken to reduce the noise and spread the signal through time in the data. Rather than using the raw price data we do the following:
- Melt the data such that only three columns exist: `date`, `variable`, and `value`.
- Perform a split-apply-combine by grouping the data by `variable`, calculate a percent change, then calculate a rolling `window` mean of the percent change (a toy illustration follows this list).
- Spread the data back to its original shape, using the rolling `window` percent change as the new features.

This technique is an attempt to be more sensitive to changes in a given market as opposed to the actual value at any given time. Taking a rolling mean also spreads out any market movements that may be anomalous such that they are more in the ballpark. Melt `econ_df` on `date` Column: To simplify plotting and facilitate the split-apply-combine operation, `econ_df` is melted on the `date` column.
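As a tiny illustration of the middle step, here is what the transformation does to a made-up six-value price series with a 3-period window (toy numbers only, not the real data):

```python
import pandas as pd

toy = pd.Series([100, 102, 101, 105, 110, 108], name='price')
pct = toy.pct_change()                    # period-over-period percent change
smoothed = pct.rolling(window=3).mean()   # 3-period rolling mean of those changes
print(pd.DataFrame({'price': toy, 'pct_change': pct, 'rolling_3_mean': smoothed}))
```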
###Code
econ_df_melt = econ_df.copy()
econ_df_melt.reset_index(inplace=True)
econ_df_melt.rename(columns={'index': 'date'}, inplace=True)
econ_df_melt = econ_df_melt.melt('date')
econ_df_melt.head()
###Output
_____no_output_____
###Markdown
Perform the Split-Apply-Combine to Compute the Features: Below we define the `window`, then split `econ_df_melt` on the `variable` column (which contains the names of the original columns). The columns in `onehot_cols` are not subject to this calculation since they are binary.
###Code
onehot_cols = ['is_weekday', 'is_holiday', 'is_holiday_week', 'month_august', 'month_december',
'month_february', 'month_january', 'month_july',
'month_june', 'month_march',
'month_may', 'month_november', 'month_october',
'month_september', 'year_2011',
'year_2012', 'year_2013', 'year_2014', 'year_2015',
'year_2016', 'year_2017',
'year_2018', 'weekday_monday', 'weekday_saturday',
'weekday_sunday', 'weekday_thursday','weekday_tuesday', 'weekday_wednesday']
window = 30 #rolling avg
smooth_df = pd.DataFrame()
for name, df in econ_df_melt.groupby('variable'):
if name not in onehot_cols:
colname = 'rolling_'+str(window)+'_mean'
df['pct_change'] = df['value'].pct_change()
df[colname] = df['pct_change'].rolling(window=window).mean()
else:
df[colname] = df['value']
smooth_df = smooth_df.append(df)
smooth_df.head()
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
years = mdates.YearLocator() # every year
months = mdates.MonthLocator() # every month
yearsFmt = mdates.DateFormatter('%Y')
def plot_tseries_over_group_with_histograms(df, xcol,
ycol,grpcol,title_prepend='{}',
labs=None, x_angle=0, labelpad=60, window=15, ignore_cols=[]):
#Function for plotting time series df[ycol] over datetime range df[xcol]
#df: pd.DataFrame containing datetime and series to plot
#- xcol: str of column name in df for datetime series
#- ycol: str of column name in df for tseries
#- grpcol: str of column name in df of group over which to plot
#- labs: dict of xlab, ylab
# - title_prepend: str containing \"{}\" that prepends group names in title
# - window: int for calculating rolling means of each series
# - ignore_cols: list of column names not to plot
unique_grp_vals = df[grpcol].unique()
nrows = len(unique_grp_vals) - len(ignore_cols)
figsize = (13, 6 * nrows)
fig, axes = plt.subplots(nrows, 1, figsize=figsize)
title_prepend_hist = 'Histogram of ' + str(title_prepend)
j = 0
for i, grp in enumerate(unique_grp_vals):
_df = df.loc[df[grpcol] == grp]
if grp not in ignore_cols:
_df = df.loc[df[grpcol] == grp]
try:
ax = axes[j]
ax.plot(_df[xcol], _df[ycol], alpha=.2, color='black')
ax.plot(_df[xcol], _df[ycol].rolling(window=window, min_periods=min(5, window)).mean(),alpha=.5, color='r', label='{} period rolling avg'.format(window),linestyle='--')
longer_window = int(window * 3)
ax.plot(_df[xcol], _df[ycol].rolling(window=longer_window, min_periods=5).mean(),alpha=.8, color='darkred', label='{} period rolling avg'.format(longer_window),linewidth=2),
mu, sigma = _df[ycol].mean(), _df[ycol].std()
ax.axhline(mu, linestyle='--', color='r', alpha=.3)
ax.axhline(mu - sigma, linestyle='-.', color='y', alpha=.3)
ax.axhline(mu + sigma, linestyle='-.', color='y', alpha=.3)
ax.set_title(title_prepend.format(grp))
ax.legend(loc='best')
bottom, top = mu - 3*sigma, mu + 3*sigma
ax.set_ylim((bottom, top))
if labs is not None:
ax.set_xlabel(labs['xlab'])
ax.set_ylabel(labs['ylab'])
ax.xaxis.labelpad = labelpad
ax.xaxis.set_minor_locator(months)
ax.grid(alpha=.1)
if x_angle != 0:
for tick in ax.get_xticklabels():
tick.set_rotation(x_angle)
divider = make_axes_locatable(ax)
axHisty = divider.append_axes('right', 1.2, pad=0.1, sharey=ax)
axHisty.grid(alpha=.1)
axHisty.hist(_df[ycol].dropna(), orientation='horizontal', alpha=.5, color='lightgreen', bins=25)
axHisty.axhline(mu, linestyle='--', color='r', label='mu', alpha=.3)
axHisty.axhline(mu - sigma, linestyle='-.', color='y', label='+/- two sigma', alpha=.3)
axHisty.axhline(mu + sigma, linestyle='-.', color='y', alpha=.3)
axHisty.legend(loc='best')
j += 1
except IndexError:
pass
else:
pass
sns.set_style("whitegrid")
sns.despine()
title_prepend = 'Time Series for {}'
xcol = 'date'
ycol = colname # from the rolling mean of pct change
grpcol = 'variable'
labs = dict(xlab='',ylab=str(window)+' Day Rolling Mean of Daily Percent Change')
plot_tseries_over_group_with_histograms(smooth_df,
xcol, ycol, grpcol,
title_prepend, labs,
x_angle=90,
ignore_cols=onehot_cols,
window=50)
plt.show()
smooth_df = smooth_df.pivot(index='date',
columns='variable',
values=colname)
smooth_df.dropna(inplace=True)
smooth_df.head()
viz_cols = ['bamlcc0a1aaatriv', 'bamlhyh0a0hym2triv', 'bogmbasew',
'dcoilwtico','dexchus', 'dexjpus', 'dexusal', 'dexuseu', 'djia',
'goldamgbd228nlbm', 'mhhngsp', 'nasdaqcom', 'sp500', 'usd12md156n',
'usd1mtd156n','usd3mtd156n', 'usdontd156n', 'vixcls', 'vxxlecls']
def correlation_heatmap(df, cutoff=None, title=''):
df_corr = df.corr('pearson')
np.fill_diagonal(df_corr.values, 0)
if cutoff != None:
for col in df_corr.columns:
df_corr.loc[df_corr[col].abs() <= cutoff, col] = 0
fig, ax = plt.subplots(figsize=(20, 15))
sns.heatmap(df_corr, ax=ax, cmap='RdBu_r')
plt.suptitle(title, size=18)
plt.show()
return df_corr
cutoff = .3
y_col = 'dcoilwtico'
#map the values to the corresponding dates
# in the moving averages dataset
y_dict = econ_df[y_col].to_dict()
smooth_df[y_col] = smooth_df.index.values
smooth_df[y_col] = smooth_df[y_col].map(y_dict)
# shift back -window so we are predicting +window in the future
smooth_df[y_col] = smooth_df[y_col].shift(-window)
smooth_df.dropna(inplace=True)
# write to disk
fname = './data/smooth_df_'+str(window)+'_mean.csv'
smooth_df.to_csv(fname)
data['smooth_df'] = smooth_df
smooth_df.head()
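# Usage sketch of the helper defined above (illustrative addition): inspect how the
# engineered features co-move, hiding pairwise correlations weaker than `cutoff`.
smooth_corr = correlation_heatmap(smooth_df, cutoff=cutoff,
                                  title='Correlation of {}-day rolling mean features'.format(window))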
fred = Fred(api_key= '7da0b3b06d3293541808e737b9ce9add')
data = fred.get_series('DCOILWTICO')
###Output
_____no_output_____ |
notebooks/VietOCR_colab.ipynb | ###Markdown
IntroductionThis notebook describes how you can use VietOCR to train an OCR model
###Code
! pip install --quiet vietocr==0.3.2
###Output
[K |████████████████████████████████| 61kB 3.2MB/s
[K |████████████████████████████████| 286kB 7.2MB/s
[?25h Installing build dependencies ... [?25l[?25hdone
Getting requirements to build wheel ... [?25l[?25hdone
Preparing wheel metadata ... [?25l[?25hdone
[K |████████████████████████████████| 952kB 35.5MB/s
[?25h Building wheel for gdown (PEP 517) ... [?25l[?25hdone
[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.4.0 which is incompatible.[0m
###Markdown
Inference
###Code
import matplotlib.pyplot as plt
from PIL import Image
from vietocr.tool.predictor import Predictor
from vietocr.tool.config import Cfg
config = Cfg.load_config_from_name('vgg_transformer')
###Output
_____no_output_____
###Markdown
Change `weights` to your own weights or use the default weights from our pretrained model. The path can be a URL or a local file
###Code
# config['weights'] = './weights/transformerocr.pth'
config['weights'] = 'https://drive.google.com/uc?id=13327Y1tz1ohsm5YZMyXVMPIOjoOA0OaA'
config['cnn']['pretrained']=False
config['device'] = 'cuda:0'
config['predictor']['beamsearch']=False
detector = Predictor(config)
! gdown --id 1uMVd6EBjY4Q0G2IkU5iMOQ34X0bysm0b
! unzip -qq -o sample.zip
! ls sample | shuf |head -n 5
img = './sample/031189003299.jpeg'
img = Image.open(img)
plt.imshow(img)
s = detector.predict(img)
s
###Output
_____no_output_____
###Markdown
Download sample dataset
###Code
! gdown https://drive.google.com/uc?id=19QU4VnKtgm3gf0Uw_N2QKSquW1SQ5JiE
! unzip -qq -o ./data_line.zip
from google.colab import drive
drive.mount('/content/drive')
dataset_dir = '/content/drive/MyDrive/Receipt_OCR/OCR.zip'
!cp $dataset_dir .
! unzip -qq -o ./OCR.zip
###Output
_____no_output_____
###Markdown
Train model 1. Load your config2. Train model using your dataset above Load the default config, we adopt VGG for image feature extraction
###Code
from vietocr.tool.config import Cfg
from vietocr.model.trainer import Trainer
###Output
_____no_output_____
###Markdown
Change the config * *data_root*: the folder that holds all your images* *train_annotation*: path to the train annotation* *valid_annotation*: path to the valid annotation* *print_every*: show train loss at every n steps* *valid_every*: show validation loss at every n steps* *iters*: number of iterations to train your model* *export*: folder to export the weights to, which you can then use for inference* *metrics*: number of samples in the validation annotation used for computing full_sequence_accuracy; for a large dataset this will take too long, so you can reduce this number
###Code
!gdown 'https://drive.google.com/u/0/uc?export=download&confirm=5ALv&id=12dTOZ9VP7ZVzwQgVvqBWz5JO5RXXW5NY'
config = Cfg.load_config_from_name('vgg_seq2seq')
#config['vocab'] = 'aAàÀảẢãÃáÁạẠăĂằẰẳẲẵẴắẮặẶâÂầẦẩẨẫẪấẤậẬbBcCdDđĐeEèÈẻẺẽẼéÉẹẸêÊềỀểỂễỄếẾệỆfFgGhHiIìÌỉỈĩĨíÍịỊjJkKlLmMnNoOòÒỏỎõÕóÓọỌôÔồỒổỔỗỖốỐộỘơƠờỜởỞỡỠớỚợỢpPqQrRsStTuUùÙủỦũŨúÚụỤưƯừỪửỬữỮứỨựỰvVwWxXyYỳỲỷỶỹỸýÝỵỴzZ0123456789!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ '
dataset_params = {
'name':'coop',
'data_root':'.',
'train_annotation':'train_annotation.txt',
'valid_annotation':'test_annotation.txt'
}
params = {
'print_every':200,
'valid_every':15*200,
'iters':20000,
'checkpoint':'transformerocr.pth',
'export':'./transformerocr.pth',
'metrics': 10000
}
config['trainer'].update(params)
config['dataset'].update(dataset_params)
###Output
_____no_output_____
###Markdown
You can change any of these params in the full list below
###Code
import hashlib
def md5(fname):
hash_md5 = hashlib.md5()
with open(fname, "rb") as f:
for chunk in iter(lambda: f.read(4096), b""):
hash_md5.update(chunk)
return hash_md5.hexdigest()
config['pretrain']['cached'] = '/content/transformerocr.pt'
config['pretrain']['md5'] = md5(config['pretrain']['cached'])
import torch
torch.cuda.current_device()
config
###Output
_____no_output_____
###Markdown
You should train the model starting from our pretrained weights
###Code
trainer = Trainer(config, pretrained=True)
trainer.train()
###Output
iter: 000200 - train loss: 2.220 - lr: 6.35e-05 - load time: 65.98 - gpu time: 82.47
iter: 000400 - train loss: 1.266 - lr: 1.32e-04 - load time: 66.11 - gpu time: 82.21
iter: 000600 - train loss: 1.016 - lr: 2.38e-04 - load time: 66.13 - gpu time: 82.18
iter: 000800 - train loss: 0.909 - lr: 3.72e-04 - load time: 66.14 - gpu time: 82.30
iter: 001000 - train loss: 0.860 - lr: 5.20e-04 - load time: 66.18 - gpu time: 82.41
iter: 001200 - train loss: 0.839 - lr: 6.69e-04 - load time: 66.31 - gpu time: 82.11
iter: 001400 - train loss: 0.836 - lr: 8.03e-04 - load time: 66.25 - gpu time: 82.17
iter: 001600 - train loss: 0.824 - lr: 9.09e-04 - load time: 66.22 - gpu time: 81.82
iter: 001800 - train loss: 0.821 - lr: 9.77e-04 - load time: 66.34 - gpu time: 81.73
iter: 002000 - train loss: 0.822 - lr: 1.00e-03 - load time: 66.50 - gpu time: 81.80
iter: 002200 - train loss: 0.827 - lr: 1.00e-03 - load time: 66.68 - gpu time: 81.78
iter: 002400 - train loss: 0.816 - lr: 9.99e-04 - load time: 66.62 - gpu time: 81.57
iter: 002600 - train loss: 0.818 - lr: 9.97e-04 - load time: 66.55 - gpu time: 81.82
iter: 002800 - train loss: 0.815 - lr: 9.95e-04 - load time: 66.61 - gpu time: 81.61
iter: 003000 - train loss: 0.815 - lr: 9.92e-04 - load time: 66.33 - gpu time: 81.22
###Markdown
Save model configuration for inference, load_config_from_file
###Code
trainer.config.save('config.yml')
###Output
_____no_output_____
###Markdown
Visualize your dataset to check data augmentation is appropriate
###Code
trainer.visualize_dataset()
###Output
_____no_output_____
###Markdown
Train now
###Code
trainer.train()
###Output
iter: 000200 - train loss: 1.657 - lr: 1.91e-05 - load time: 1.08 - gpu time: 158.33
iter: 000400 - train loss: 1.429 - lr: 3.95e-05 - load time: 0.76 - gpu time: 158.76
iter: 000600 - train loss: 1.331 - lr: 7.14e-05 - load time: 0.73 - gpu time: 158.38
iter: 000800 - train loss: 1.252 - lr: 1.12e-04 - load time: 1.29 - gpu time: 158.43
iter: 001000 - train loss: 1.218 - lr: 1.56e-04 - load time: 0.84 - gpu time: 158.86
iter: 001200 - train loss: 1.192 - lr: 2.01e-04 - load time: 0.78 - gpu time: 160.20
iter: 001400 - train loss: 1.140 - lr: 2.41e-04 - load time: 1.54 - gpu time: 158.48
iter: 001600 - train loss: 1.129 - lr: 2.73e-04 - load time: 0.70 - gpu time: 159.42
iter: 001800 - train loss: 1.095 - lr: 2.93e-04 - load time: 0.74 - gpu time: 158.03
iter: 002000 - train loss: 1.098 - lr: 3.00e-04 - load time: 0.66 - gpu time: 159.21
iter: 002200 - train loss: 1.060 - lr: 3.00e-04 - load time: 1.52 - gpu time: 157.63
iter: 002400 - train loss: 1.055 - lr: 3.00e-04 - load time: 0.80 - gpu time: 159.34
iter: 002600 - train loss: 1.032 - lr: 2.99e-04 - load time: 0.74 - gpu time: 159.13
iter: 002800 - train loss: 1.019 - lr: 2.99e-04 - load time: 1.42 - gpu time: 158.27
###Markdown
Visualize prediction from our trained model
###Code
trainer.visualize_prediction()
###Output
_____no_output_____
###Markdown
Compute full seq accuracy for full valid dataset
###Code
trainer.precision()
###Output
_____no_output_____ |
bw.ipynb | ###Markdown
define the random variable
###Code
import ROOT
import array  # used further below for the contour levels

mll = ROOT.RooRealVar("mll", "mll", 60, 120)
###Output
[1mRooFit v3.60 -- Developed by Wouter Verkerke and David Kirkby[0m
Copyright (C) 2000-2013 NIKHEF, University of California & Stanford University
All rights reserved, please read http://roofit.sourceforge.net/license.txt
###Markdown
define the parameters of a Breit Wigner PDF
###Code
m0 = ROOT.RooRealVar("m0", "m0", 90)
Gamma = ROOT.RooRealVar("gamma", "gamma", 2.5)
bw = ROOT.RooBreitWigner("bw", "bw", mll, m0, Gamma)
###Output
_____no_output_____
###Markdown
generate a dataset of 100 events
###Code
dataset = bw.generate(mll, 100)
###Output
_____no_output_____
###Markdown
define a new PDF along with new parameters for the fit
###Code
m0fit = ROOT.RooRealVar("m0fit", "m0fit", 60, 120)
Gammafit = ROOT.RooRealVar("gammafir", "gammafit", 0, 10)
bwfit = ROOT.RooBreitWigner("bwfit", "bwfit", mll, m0fit, Gammafit)
###Output
_____no_output_____
###Markdown
fit to the data
###Code
bwfit.fitTo(dataset, ROOT.RooFit.Minos(ROOT.kTRUE))
m0fitRes = m0fit.getVal()
m0fitresDo = m0fit.getVal()+m0fit.getAsymErrorLo()
m0fitresUp = m0fit.getVal()+m0fit.getAsymErrorHi()
###Output
[#1] INFO:Minization -- RooMinimizer::optimizeConst: activating const optimization
**********
** 1 **SET PRINT 1
**********
**********
** 2 **SET NOGRAD
**********
PARAMETER DEFINITIONS:
NO. NAME VALUE STEP SIZE LIMITS
1 gammafir 5.00000e+00 1.00000e+00 0.00000e+00 1.00000e+01
2 m0fit 9.00000e+01 6.00000e+00 6.00000e+01 1.20000e+02
**********
** 3 **SET ERR 0.5
**********
**********
** 4 **SET PRINT 1
**********
**********
** 5 **SET STR 1
**********
NOW USING STRATEGY 1: TRY TO BALANCE SPEED AGAINST RELIABILITY
**********
** 6 **MIGRAD 1000 1
**********
FIRST CALL TO USER FUNCTION AT NEW START POINT, WITH IFLAG=4.
START MIGRAD MINIMIZATION. STRATEGY 1. CONVERGENCE WHEN EDM .LT. 1.00e-03
FCN=265.683 FROM MIGRAD STATUS=INITIATE 6 CALLS 7 TOTAL
EDM= unknown STRATEGY= 1 NO ERROR MATRIX
EXT PARAMETER CURRENT GUESS STEP FIRST
NO. NAME VALUE ERROR SIZE DERIVATIVE
1 gammafir 5.00000e+00 1.00000e+00 2.01358e-01 2.83906e+01
2 m0fit 9.00000e+01 6.00000e+00 2.01358e-01 9.72762e+01
ERR DEF= 0.5
MIGRAD MINIMIZATION HAS CONVERGED.
MIGRAD WILL VERIFY CONVERGENCE AND ERROR MATRIX.
COVARIANCE MATRIX CALCULATED SUCCESSFULLY
FCN=256.328 FROM MIGRAD STATUS=CONVERGED 52 CALLS 53 TOTAL
EDM=7.59505e-08 STRATEGY= 1 ERROR MATRIX ACCURATE
EXT PARAMETER STEP FIRST
NO. NAME VALUE ERROR SIZE DERIVATIVE
1 gammafir 2.72478e+00 3.88603e-01 9.65325e-04 -2.18911e-03
2 m0fit 8.97041e+01 1.92703e-01 7.08247e-05 -2.81993e-02
ERR DEF= 0.5
EXTERNAL ERROR MATRIX. NDIM= 25 NPAR= 2 ERR DEF=0.5
1.514e-01 7.072e-03
7.072e-03 3.713e-02
PARAMETER CORRELATION COEFFICIENTS
NO. GLOBAL 1 2
1 0.09432 1.000 0.094
2 0.09432 0.094 1.000
**********
** 7 **SET ERR 0.5
**********
**********
** 8 **SET PRINT 1
**********
**********
** 9 **HESSE 1000
**********
COVARIANCE MATRIX CALCULATED SUCCESSFULLY
FCN=256.328 FROM HESSE STATUS=OK 10 CALLS 63 TOTAL
EDM=7.58013e-08 STRATEGY= 1 ERROR MATRIX ACCURATE
EXT PARAMETER INTERNAL INTERNAL
NO. NAME VALUE ERROR STEP SIZE VALUE
1 gammafir 2.72478e+00 3.88586e-01 1.93065e-04 -4.72421e-01
2 m0fit 8.97041e+01 1.92694e-01 1.41649e-05 -9.86402e-03
ERR DEF= 0.5
EXTERNAL ERROR MATRIX. NDIM= 25 NPAR= 2 ERR DEF=0.5
1.514e-01 7.035e-03
7.035e-03 3.713e-02
PARAMETER CORRELATION COEFFICIENTS
NO. GLOBAL 1 2
1 0.09384 1.000 0.094
2 0.09384 0.094 1.000
**********
** 10 **MINOS 1000 1
**********
FCN=256.328 FROM MINOS STATUS=SUCCESSFUL 34 CALLS 97 TOTAL
EDM=7.58013e-08 STRATEGY= 1 ERROR MATRIX ACCURATE
EXT PARAMETER PARABOLIC MINOS ERRORS
NO. NAME VALUE ERROR NEGATIVE POSITIVE
1 gammafir 2.72478e+00 3.88586e-01 -3.63388e-01 4.17737e-01
2 m0fit 8.97041e+01 1.92694e-01
ERR DEF= 0.5
**********
** 11 **MINOS 1000 2
**********
FCN=256.328 FROM MINOS STATUS=SUCCESSFUL 26 CALLS 123 TOTAL
EDM=7.58013e-08 STRATEGY= 1 ERROR MATRIX ACCURATE
EXT PARAMETER PARABOLIC MINOS ERRORS
NO. NAME VALUE ERROR NEGATIVE POSITIVE
1 gammafir 2.72478e+00 3.88586e-01 -3.63388e-01 4.17737e-01
2 m0fit 8.97041e+01 1.92694e-01 -1.90704e-01 1.96464e-01
ERR DEF= 0.5
[#1] INFO:Minization -- RooMinimizer::optimizeConst: deactivating const optimization
###Markdown
plot the result
###Code
frame_mll = mll.frame()
dataset.plotOn(frame_mll)
bwfit.plotOn(frame_mll)
frame_mll.Draw()
ROOT.gPad.Draw()
###Output
Info in <TCanvas::MakeDefCanvas>: created default TCanvas with name c1
###Markdown
redo the fit instantiating the likelihood by hand
###Code
nll = bwfit.createNLL(dataset)
ROOT.RooMinuit(nll).migrad()
minNLL=nll.getVal()
###Output
**********
** 13 **MIGRAD 1000 1
**********
FIRST CALL TO USER FUNCTION AT NEW START POINT, WITH IFLAG=4.
START MIGRAD MINIMIZATION. STRATEGY 1. CONVERGENCE WHEN EDM .LT. 1.00e-03
FCN=256.328 FROM MIGRAD STATUS=INITIATE 4 CALLS 5 TOTAL
EDM= unknown STRATEGY= 1 NO ERROR MATRIX
EXT PARAMETER CURRENT GUESS STEP FIRST
NO. NAME VALUE ERROR SIZE DERIVATIVE
1 gammafir 2.72478e+00 3.88586e-01 8.74759e-02 -2.25434e-03
2 m0fit 8.97041e+01 1.92694e-01 6.42348e-03 -2.83420e-02
ERR DEF= 0.5
MIGRAD MINIMIZATION HAS CONVERGED.
MIGRAD WILL VERIFY CONVERGENCE AND ERROR MATRIX.
COVARIANCE MATRIX CALCULATED SUCCESSFULLY
FCN=256.328 FROM MIGRAD STATUS=CONVERGED 24 CALLS 25 TOTAL
EDM=6.1549e-11 STRATEGY= 1 ERROR MATRIX ACCURATE
EXT PARAMETER STEP FIRST
NO. NAME VALUE ERROR SIZE DERIVATIVE
1 gammafir 2.72486e+00 3.88618e-01 9.62800e-04 -5.09660e-06
2 m0fit 8.97041e+01 1.92707e-01 7.07712e-05 -1.21277e-03
ERR DEF= 0.5
EXTERNAL ERROR MATRIX. NDIM= 25 NPAR= 2 ERR DEF=0.5
1.514e-01 7.074e-03
7.074e-03 3.714e-02
PARAMETER CORRELATION COEFFICIENTS
NO. GLOBAL 1 2
1 0.09434 1.000 0.094
2 0.09434 0.094 1.000
###Markdown
draw the NLL point by point around the minimum
###Code
h2d = ROOT.TH2F("2d", "2d", 100, 89., 90.5, 100, 2., 4.)
for i in range(1,h2d.GetXaxis().GetNbins()+1):
for j in range(1,h2d.GetYaxis().GetNbins()+1):
m0here = h2d.GetXaxis().GetBinCenter(i)
gammaHere = h2d.GetYaxis().GetBinCenter(j)
m0fit.setVal(m0here)
Gammafit.setVal(gammaHere)
h2d.SetBinContent(i, j, 2*(nll.getVal()-minNLL))
contours = array.array('d', [1, 2.41,5.99])
h2d.Draw("COLZ")
h2dclone = h2d.Clone()
h2dclone.SetContour(3, contours)
h2dclone.SetLineStyle(2)
h2dclone.Draw("CONT2 LIST same")
ROOT.gPad.SetLogz()
line1 = ROOT.TLine(m0fitresDo, 2, m0fitresDo, 4)
line2 = ROOT.TLine(m0fitresUp, 2, m0fitresUp, 4)
line1.Draw("sames")
line2.Draw("sames")
ROOT.gPad.Update()
ROOT.gPad.Draw()
###Output
_____no_output_____
###Markdown
create the profile likelihood in the parameter m0fit
###Code
profile = nll.createProfile(m0fit)
frame1 = m0fit.frame(ROOT.RooFit.Bins(20), ROOT.RooFit.Range(89,90.5), ROOT.RooFit.Title("profileLL in mass"))
profile.plotOn(frame1)
frame1.GetYaxis().SetRangeUser(0, 2)
frame1.Draw()
line = ROOT.TLine(89, 0.5, 90.5, 0.5)
line.Draw("sames")
print(m0fit.getAsymErrorLo())
line1 = ROOT.TLine(m0fitresDo, 0, m0fitresDo, 2)
line2 = ROOT.TLine(m0fitresUp, 0, m0fitresUp, 2)
line1.Draw("sames")
line2.Draw("sames")
ROOT.gPad.Update()
ROOT.gPad.Draw()
###Output
0.0
[#1] INFO:Minization -- RooProfileLL::evaluate(nll_bwfit_bwData_Profile[m0fit]) Creating instance of MINUIT
[#1] INFO:Minization -- RooProfileLL::evaluate(nll_bwfit_bwData_Profile[m0fit]) determining minimum likelihood for current configurations w.r.t all observable
[#1] INFO:Minization -- RooProfileLL::evaluate(nll_bwfit_bwData_Profile[m0fit]) minimum found at (m0fit=89.7039)
.................................................................................. |
_demo/mixup-beta/MixUp and Beta Distribution.ipynb | ###Markdown
Understand Mixup Augmentation & Beta Distribution Implementation. In the original article, the authors suggested four things: 1. Create two separate dataloaders and draw a batch from each at every iteration to mix them up 2. Draw a t value following a beta distribution with a parameter alpha (0.4 is suggested in their article) 3. Mix up the two batches with the same value t. 4. Use one-hot encoded targets. Source: https://forums.fast.ai/t/mixup-data-augmentation/22764 (Sylvain Gugger) Beta Distribution. The beta distribution is controlled by two parameters, α and β, on the interval [0, 1], which makes it useful for Mixup. Mixup is basically a superposition of two images with a parameter t. Instead of a plain dog image, with Mixup you may end up with an image that is 0.7 dog + 0.3 cat. A minimal sketch of the mixup step itself is shown below; after that, to get some sense of what a beta distribution is, let's plot it with different alpha and beta values to see their effect
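The next cell is a minimal, self-contained sketch of steps 2 and 3 above (not the fastai implementation): draw t from Beta(alpha, alpha) and blend two batches and their one-hot targets with the same t. The batch shapes and alpha=0.4 are illustrative assumptions.
###Code
import torch

def mixup_batch(x1, y1, x2, y2, alpha=0.4):
    # t ~ Beta(alpha, alpha); one value per batch element
    t = torch.distributions.Beta(alpha, alpha).sample((x1.size(0),))
    t_x = t.view(-1, 1, 1, 1)      # broadcast over image dims
    t_y = t.view(-1, 1)            # broadcast over one-hot targets
    x = t_x * x1 + (1 - t_x) * x2  # blended images
    y = t_y * y1 + (1 - t_y) * y2  # blended (soft) targets
    return x, y

# toy usage with random "images" and one-hot labels
x1, x2 = torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32)
y1 = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
y2 = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
x_mix, y_mix = mixup_batch(x1, y1, x2, y2)
print(x_mix.shape, y_mix.shape)
###Output
_____no_output_____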
###Code
import math
import torch
import matplotlib.pyplot as plt
from torch import tensor
# PyTorch has a log-gamma but not a gamma, so we'll create one
Γ = lambda x: x.lgamma().exp()
facts = [math.factorial(i) for i in range(7)]
plt.plot(range(7), facts, 'ro')
plt.plot(torch.linspace(0,6), Γ(torch.linspace(0,6)+1))
plt.legend(['factorial','Γ']);
###Output
_____no_output_____
###Markdown
When α != β
###Code
_,ax = plt.subplots(1,1, figsize=(5,4))
x = torch.linspace(0.01,0.99, 100000)
a_ls = [5.0,1.0,0.4, 1.0]
b_ls = [1.0,5.0,0.4, 1.0]
for a, b in zip(a_ls, b_ls):
a=tensor(a,dtype=torch.float)
b=tensor(b,dtype=torch.float)
# y = (x.pow(α-1) * (1-x).pow(α-1)) / (gamma_func(α ** 2) / gamma_func(α))
y = (x**(a-1) * (1-x)**(b-1)) / (Γ(a)*Γ(b) / Γ(a+b))
ax.plot(x,y)
# ax.set_title(f"α={a.numpy()[0]:.1}")
ax.set_title('Beta distribution when α != β ')
ax.legend([f'α = {float(a):.2}, β = {float(b):.2}' for a,b in zip(a_ls, b_ls)])
###Output
C:\ProgramData\Anaconda3\envs\fastai2\lib\site-packages\IPython\core\pylabtools.py:132: UserWarning: Creating legend with loc="best" can be slow with large amounts of data.
fig.canvas.print_figure(bytes_io, **kw)
###Markdown
A few observations from this graph: * α and β control the curve symmetrically; the blue line is symmetric with the orange line. * when α = β = 1, it reduces to the uniform distribution * when α = β, the distribution is symmetric When α = β
###Code
_,ax = plt.subplots(1,1, figsize=(5,4))
x = torch.linspace(0.01,0.99, 100000)
a_ls = [0.1, 0.4, 0.6, 0.9]
b_ls = [0.1, 0.4, 0.6, 0.9]
for a, b in zip(a_ls, b_ls):
a=tensor(a,dtype=torch.float)
b=tensor(b,dtype=torch.float)
# y = (x.pow(α-1) * (1-x).pow(α-1)) / (gamma_func(α ** 2) / gamma_func(α))
y = (x**(a-1) * (1-x)**(b-1)) / (Γ(a)*Γ(b) / Γ(a+b))
ax.plot(x,y)
# ax.set_title(f"α={a.numpy()[0]:.1}")
ax.set_title('Beta distribution when α = β ')
ax.legend([f'α = {float(a):.2}, β = {float(b):.2}' for a,b in zip(a_ls, b_ls)])
###Output
C:\ProgramData\Anaconda3\envs\fastai2\lib\site-packages\IPython\core\pylabtools.py:132: UserWarning: Creating legend with loc="best" can be slow with large amounts of data.
fig.canvas.print_figure(bytes_io, **kw)
|
notebooks/examples/image_stack.ipynb | ###Markdown
Hyperspectral Images (Simultaneously acquired 2D images)**Suhas Somnath**10/12/2018**This example illustrates how a set of *simultaneously acquired* 2D grayscale images would be represented in the Universal Spectroscopy and Imaging Data (USID) schema and stored in a Hierarchical Data Format (HDF5) file, also referred to as the h5USID file.**This example is based on the popular Atomic Force Microscopy scan mode where multiple sensors *simultaneously* acquire a value at each position on a 2D grid, thereby resulting in a 2D image per sensor. Specifically, the goal of this example is to demonstrate the sharing of ``Ancillary`` datasets among multiple ``Main`` datasets. This document is intended as a supplement to the explanation about the [USID model](../../usid_model.html). Please consider downloading this document as a Jupyter notebook using the button at the bottom of this document. Prerequisites:--------------We recommend that you read about the [USID model](../../usid_model.html). We will be making use of the ``pyUSID`` package at multiple places to illustrate the central point. While it is recommended / a bonus, it is not absolutely necessary that the reader understands how the specific ``pyUSID`` functions work or why they were used in order to understand the data representation itself. Examples about these functions can be found in other documentation on pyUSID and the reader is encouraged to read the supplementary documents. Import all necessary packages: The main packages necessary for this example are ``h5py``, ``matplotlib``, and ``sidpy``, in addition to ``pyUSID``:
###Code
import subprocess
import sys
import os
import matplotlib.pyplot as plt
from warnings import warn
import h5py
%matplotlib inline
def install(package):
subprocess.call([sys.executable, "-m", "pip", "install", package])
try:
# This package is not part of anaconda and may need to be installed.
import wget
except ImportError:
warn('wget not found. Will install with pip.')
import pip
install('wget')
import wget
# Finally import pyUSID.
try:
import pyUSID as usid
import sidpy
except ImportError:
warn('pyUSID not found. Will install with pip.')
import pip
install('pyUSID')
import sidpy
import pyUSID as usid
###Output
_____no_output_____
###Markdown
Download the dataset---------------------As mentioned earlier, this image is available on the USID repository and can be accessed directly as well. Here, we will simply download the file using ``wget``:
###Code
h5_path = 'temp.h5'
url = 'https://raw.githubusercontent.com/pycroscopy/USID/master/data/SingFreqPFM_0003.h5'
if os.path.exists(h5_path):
os.remove(h5_path)
_ = wget.download(url, h5_path, bar=None)
###Output
_____no_output_____
###Markdown
Open the file-------------Let's open the file and look at its contents using [sidpy.hdf_utils.print_tree()](https://pycroscopy.github.io/sidpy/notebooks/03_hdf5/hdf_utils_read.html#print_tree())
###Code
h5_file = h5py.File(h5_path, mode='r')
usid.hdf_utils.print_tree(h5_file)
###Output
_____no_output_____
###Markdown
Notice that this file has multiple [Channel](../../usid_model.html#channels), each with a dataset named ``Raw_Data``. Are they all [Main Dataset](../../usid_model.html#main-datasets) datasets? There are multiple ways to find this out. One approach is simply to ask pyUSID to list out all available ``Main`` datasets. Visualize the contents in each of these channels------------------------------------------------
###Code
for main_dset in usid.hdf_utils.get_all_main(h5_file):
print(main_dset)
print('---------------------------------------------------------------\n')
###Output
_____no_output_____
###Markdown
From the print statements above, it is clear that each of these ``Raw_Data`` datasets is indeed a ``Main`` dataset. How can these datasets be ``Main`` if they are not co-located with the corresponding sets of ``Ancillary`` datasets within each ``Channel`` group? Sharing Ancillary Datasets--------------------------Since each of the ``Main`` datasets has the same position and spectroscopic dimensions, they share the same set of ancillary datasets that are under the ``Measurement_000`` group. This is common for Scanning Probe Microscopy scans where information from multiple sensors is recorded **simultaneously** during the scan. Recall from the USID documentation that: 1. Multiple ``Main`` datasets can share the same ``Ancillary`` datasets 2. The ``Main`` datasets only need to have ``attributes`` named ``Position_Indices``, ``Position_Values``, ``Spectroscopic_Indices``, and ``Spectroscopic_Values`` with the value set to the reference of the corresponding ``Ancillary`` datasets. We can investigate if this is indeed the case here. Let's get the references to the ``Ancillary`` datasets linked to each of the ``Main`` datasets:
###Code
for main_dset in usid.hdf_utils.get_all_main(h5_file):
print('Main Dataset: {}'.format(main_dset.name))
print('Position Indices: {}'.format(main_dset.h5_pos_inds.name))
print('Position Values: {}'.format(main_dset.h5_pos_vals.name))
print('Spectroscopic Indices: {}'.format(main_dset.h5_spec_inds.name))
print('Spectroscopic Values: {}'.format(main_dset.h5_spec_vals.name))
print('---------------------------------------------------------------\n')
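# The same links can also be read with plain h5py (a sketch, relying on the attribute names
# listed above: each Main dataset stores HDF5 object references under 'Position_Indices',
# 'Position_Values', 'Spectroscopic_Indices' and 'Spectroscopic_Values'):
for main_dset in usid.hdf_utils.get_all_main(h5_file):
    for attr_name in ['Position_Indices', 'Position_Values',
                      'Spectroscopic_Indices', 'Spectroscopic_Values']:
        ref = main_dset.attrs[attr_name]  # HDF5 object reference
        print('{} -> {}: {}'.format(main_dset.name, attr_name, h5_file[ref].name))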
###Output
_____no_output_____
###Markdown
From above, we see that all the ``Main`` datasets were indeed referencing the same set of ``Ancillary`` datasets. Note that it would **not** have been wrong to store (duplicate copies of) the ``Ancillary`` datasets within each ``Channel`` group. The data was stored in this manner since it is more efficient and because it was known *a priori* that all ``Main`` datasets are dimensionally equal. Also note that this sharing of ``Ancillary`` datasets is OK even though the physical quantity and units within each ``Main`` dataset are different, since these two pieces of information are stored in the attributes of the ``Main`` datasets (which are unique and independent) and not in the ``Ancillary`` datasets. The discussion regarding the contents of the ``Ancillary`` datasets is identical to that for the [2D grayscale image](./image.html) and will not be discussed here for brevity. Visualizing the contents within each channel--------------------------------------------Now let's visualize the contents within this ``Main Dataset`` using the ``USIDataset's`` built-in [visualize()](../user_guide/usi_dataset.html#Interactive-Visualization) function.
###Code
usid.plot_utils.use_nice_plot_params()
for main_dset in usid.hdf_utils.get_all_main(h5_file):
main_dset.visualize(num_ticks=3)
###Output
_____no_output_____
###Markdown
Clean up--------Finally, let's close and delete the example HDF5 file
###Code
h5_file.close()
os.remove(h5_path)
###Output
_____no_output_____ |
missing_data/mice.ipynb | ###Markdown
Multiple Imputation with Chained EquationsAuthor: Charles GuanDemonstration of Multiple Imputation with Chained Equations (MICE) using a toy dataset (Iris) and scikit-learn.
###Code
import seaborn as sns
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
# Plot formatting
sns.set(style='ticks')
# Parameters
ratio_missing_data = 0.2
# Rule of thumb: one imputation per percent of incomplete data
num_imputations = round(ratio_missing_data * 100)
# Seed random state for reproducibility
# Remove hard-coded seed value for true pseudo-randomness
rng = np.random.default_rng(seed=73)
# Checks on parameters
assert 0 <= ratio_missing_data < 1, 'Invalid missing data ratio'
###Output
_____no_output_____
###Markdown
Initialize sample dataset
###Code
# Load sample data
df = sns.load_dataset('iris')
df.index.name = 'observation'
df.head()
# Quick visualization of data
sns.pairplot(df, hue='species')
# Randomly replace some of the numeric values with NaNs
numeric_df = df.select_dtypes(include=np.number)
nonnumeric_df = df.select_dtypes(exclude=np.number)
missing_df = numeric_df.mask(rng.random(size=numeric_df.shape) < ratio_missing_data)
print('Number of rows:', len(missing_df))
print('Number of non-NaN values in each column:')
print(missing_df.count())
###Output
Number of rows: 150
Number of non-NaN values in each column:
sepal_length 121
sepal_width 122
petal_length 122
petal_width 117
dtype: int64
###Markdown
Multiple Imputation with Chained Equations with scikit-learn
###Code
# Run multivariate imputations multiple times
# Accumulate results from each imputation
pred_df_list = []
for iter in range(num_imputations):
# Add some random-ness to each iteration of the imputer
random_state = rng.integers(np.iinfo(np.int32).max)
imp = IterativeImputer(sample_posterior=True,
random_state=random_state)
# Predict the missing data
pred = imp.fit_transform(missing_df)
pred_df = pd.DataFrame(pred,
columns=missing_df.columns,
index=missing_df.index)
pred_df_list.append(pred_df)
# Merge results into a single dataframe
pred_all_df = pd.concat(pred_df_list,
keys=range(len(pred_df_list)),
names=['imputation'])
pred_all_df
###Output
_____no_output_____
###Markdown
Normally, we should use Rubin's rules to pool results. For now, the full treatment is left out for simplicity and we simply visualize the mean of the imputed values; a rough sketch of the pooling step is included first.
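The next cell is a minimal sketch (not a full implementation) of Rubin's rules for pooling the per-column mean: average the per-imputation estimates and combine the within- and between-imputation variances. It only uses `pred_all_df`, `missing_df` and `num_imputations` defined above.
###Code
# Rubin's rules sketch: pool the estimate of each column mean across imputations
m = num_imputations
est_per_imp = pred_all_df.groupby(level='imputation').mean()                    # Q_i, one row per imputation
var_per_imp = pred_all_df.groupby(level='imputation').var() / len(missing_df)   # U_i, variance of each mean

pooled_est = est_per_imp.mean()                      # Q_bar
within_var = var_per_imp.mean()                      # U_bar
between_var = est_per_imp.var()                      # B
total_var = within_var + (1 + 1 / m) * between_var   # T

print(pd.DataFrame({'pooled_mean': pooled_est, 'total_variance': total_var}))
###Output
_____no_output_____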
###Code
pred_mean_df = pred_all_df.mean(level='observation')
pred_mean_df.head()
# Quick visualization of data, colored by species
pred_mean_df['species'] = df.species
sns.pairplot(pred_mean_df, hue='species')
###Output
_____no_output_____
###Markdown
Distribution of the imputed data looks similar to the original data distribution. However, not all values were predicted perfectly; a quick error check on the masked entries follows below.
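As a rough check (a sketch reusing `numeric_df`, `missing_df` and `pred_mean_df` from above), we can compute the per-column RMSE of the pooled imputations on exactly those entries that were masked out:
###Code
# RMSE on the artificially masked entries only
mask = missing_df.isna()
errors = (pred_mean_df[numeric_df.columns] - numeric_df).where(mask)
rmse_per_column = np.sqrt((errors ** 2).mean())
print(rmse_per_column)
###Output
_____no_output_____
###Markdown
Show versions for documentation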
###Code
print(np.__version__)
pd.show_versions()
###Output
INSTALLED VERSIONS
------------------
commit : None
python : 3.6.10.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-101-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.3
numpy : 1.17.0
pytz : 2020.1
dateutil : 2.8.1
pip : 20.0.2
setuptools : 40.6.3
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.13.0
pandas_datareader: None
bs4 : 4.9.0
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.1.3
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
|
TensorBasics.ipynb | ###Markdown
Tensor Initialisation
###Code
import torch

# Create a 2d tensor ( like an array ):
# [ [1,2,3]
#   [3,4,2] ]
tensor = torch.tensor([[1,2,3],[3,4,2]])
print(tensor)
#Setting the type of tensor and whether to work on CPU or GPU
tensor = torch.tensor([[1,2,3],[3,4,2]],dtype = torch.float64,device = "cuda")
#Another thing that is often followed is
device = "cuda" if torch.cuda.is_available() else "cpu"
# And then instead of specifying device specifically for different systems , above function will automatically manage the device
tensor = torch.tensor([[1,2,3],[3,4,2]],dtype = torch.float64,device =device)
tensor
tensor.dtype
tensor.device
tensor.shape
tensor.requires_grad
# We can set whether a tensor will be used for gradient descent
tensor = torch.tensor([[1,2,3],[3,4,2]],dtype = torch.float64,device =device,requires_grad=True)
###Output
_____no_output_____
###Markdown
Other Initialisation Methods:
###Code
# Unitialised random data
x = torch.empty(size = (4,3))
x
x_zero = torch.zeros((4,3))
x_zero
#Initialize random numbers inside the matrix ranging from 0-1
x_rand = torch.rand((4,3))
x_rand
x_ones = torch.ones((4,3))
x_ones
# Identity matrix
# Notice the change in function input instead of (n,m) we use n,m
x_I = torch.eye(3,4)
x_I
# Tensor in a range
x_range = torch.arange(start = 7,end = 10,step = 1)
x_range
# Linear spaced Range tensor
x_lin = torch.linspace(start = 100,end = 114.2,steps = 11)
x_lin
#Normally distributed tensor
x_normal = torch.empty((3,3)).normal_(mean = 0,std = 1)
x_normal
#Uniform distributed tensor
x_uniform = torch.empty((3,3)).uniform_(0,1)
x_uniform
#Using diagonal matrix
x_diag = torch.diag(torch.ones(3))
x_diag
###Output
_____no_output_____
###Markdown
Converting tensors' datatype
###Code
test_tensor = torch.arange(start = 0,end = 9,step= 1)
test_tensor = test_tensor.long()
test_tensor.dtype
test_tensor = test_tensor.short()
test_tensor.dtype
###Output
_____no_output_____
###Markdown
Numpy array to tensor
###Code
import numpy as np
np_array = np.zeros((5,3))
tensor = torch.from_numpy(np_array)
np_array_back = tensor.numpy()
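# Note (a quick check of the conversion semantics): torch.from_numpy shares memory with
# the source array, so an in-place change to the numpy array is visible in the tensor.
np_array[0, 0] = 7.0
print(tensor[0, 0])  # tensor(7., dtype=torch.float64)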
###Output
_____no_output_____ |
Twitter Sentiment Analysis/Twitter Sentiment Analysis.ipynb | ###Markdown
loading pretrained model and vectorizer
###Code
import pickle

#Vectorizer
with open('tfidfmodel.pickle', 'rb') as f:
    vectorizer = pickle.load(f)

#Model
with open('classifier.pickle', 'rb') as f:
    classifier = pickle.load(f)
###Output
_____no_output_____
###Markdown
PreProcessing the Text & Predicting the Results
###Code
import re
import pprint as pp  # pp.pprint is used below; assuming pp refers to the pprint module

# list_tweets is assumed to be populated earlier with the fetched tweet texts
total_pos = 0
total_neg = 0
for tweet in list_tweets:
#Removing all the hyper links
# ^ means from start and $ means from last
tweet = re.sub(r"^https://t.co/[a-zA-Z0-9]*\s", " ", tweet)
tweet = re.sub(r"\s+https://t.co/[a-zA-Z0-9]*\s", " ", tweet)
tweet = re.sub(r"\s+https://t.co/[a-zA-Z0-9]*$", " ", tweet)
#Puntuation
tweet = re.sub(r"\W"," ",tweet)
#numbers
tweet = re.sub(r"\d"," ",tweet)
#Single alphabets
# ^ means from start and $ means from last
tweet = re.sub(r"\s+[a-z]\s+"," ",tweet)
tweet = re.sub(r"\s+[a-z]$"," ",tweet)
tweet = re.sub(r"^[a-z]\s+"," ",tweet)
#Vaccent Spaces
tweet = re.sub(r"\s+"," ",tweet)
#Lower Casing
tweet = tweet.lower()
#Replacing Contraction with full words
tweet = re.sub(r"that's","that is",tweet)
tweet = re.sub(r"there's","there is",tweet)
tweet = re.sub(r"what's","what is",tweet)
tweet = re.sub(r"where's","where is",tweet)
tweet = re.sub(r"it's","it is",tweet)
tweet = re.sub(r"who's","who is",tweet)
tweet = re.sub(r"i'm","i am",tweet)
tweet = re.sub(r"she's","she is",tweet)
tweet = re.sub(r"he's","he is",tweet)
tweet = re.sub(r"they're","they are",tweet)
tweet = re.sub(r"who're","who are",tweet)
tweet = re.sub(r"ain't","am not",tweet)
tweet = re.sub(r"wouldn't","would not",tweet)
tweet = re.sub(r"shouldn't","should not",tweet)
tweet = re.sub(r"can't","can not",tweet)
tweet = re.sub(r"couldn't","could not",tweet)
tweet = re.sub(r"won't","will not",tweet)
#Pretty Printing
pp.pprint(tweet)
print(len(tweet))
#Predicting The Reslts
sent = classifier.predict(vectorizer.transform([tweet]).toarray())
if sent[0] == 1:
total_pos += 1
else:
total_neg += 1
print("Positive Reviews: {}".format(total_pos))
print("Negative Reviews: {}".format(total_neg))
# Share of tweets predicted positive (there are no ground-truth labels here, so this is not a true accuracy)
print("Accuracy score is {}%".format((total_pos / 500)*100))
###Output
Accuracy score is 74.4%
###Markdown
Plotting The Results
###Code
import numpy as np
import matplotlib.pyplot as plt

# Visualizing the results
objects = ['Positive','Negative']
y_pos = np.arange(len(objects))
plt.bar(y_pos,[total_pos,total_neg],alpha=0.5)
plt.xticks(y_pos,objects)
plt.ylabel('Number')
plt.title('Number of Positive and Negative Tweets')
plt.show()
###Output
_____no_output_____ |
Pytorch_Lenet.ipynb | ###Markdown
Simple exercise in learning PyTorch. Here we write a simple LeNet clone and train it on the MNIST dataset.
###Code
#Code adapted from https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html for LeNet in PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
# Initialize parent module
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 3x3 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 3)
self.conv2 = nn.Conv2d(6, 16, 3)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
# Now that we have a working LeNet, we train it on MNIST
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose([
transforms.Resize((32,32)),
transforms.Grayscale(num_output_channels=1),
transforms.ToTensor()
])
trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
classes = (0,1,2,3,4,5,6,7,8,9)
import matplotlib.pyplot as plt
import numpy as np
def show_image(img):
img = img/2 + 0.5
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1,2,0)))
plt.show()
images, labels = iter(trainloader).next()
show_image(torchvision.utils.make_grid(images))
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
import torch.optim as optim
# Ref: https://rdipietro.github.io/friendly-intro-to-cross-entropy-loss/
criterion = nn.CrossEntropyLoss()
# Ref: https://towardsdatascience.com/stochastic-gradient-descent-with-momentum-a84097641a5d
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
inputs, labels=data
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if i % 2000 == 1999:
print('[%d, %5d] loss: %.3f' %(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
images, labels = iter(testloader).next()
show_image(torchvision.utils.make_grid(images))
print('Ground Truth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
outputs = net(images)
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
# Accuracy
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
###Output
_____no_output_____ |
0701.ipynb | ###Markdown
data load
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

patient = pd.read_csv('../../data/MIMIC_III/PATIENTS.csv') # patient info (inpatients only)
# cpt = pd.read_csv('../data/MIMIC_III_data/CPTEVENTS.csv')
lab = pd.read_csv('../../data/MIMIC_III/LABEVENTS.csv') # includes outpatients
diagnoses_icd = pd.read_csv('../../data/MIMIC_III/DIAGNOSES_ICD.csv') # contains patient IDs  #dis
diagnoses = pd.read_csv('../../data/MIMIC_III/D_ICD_DIAGNOSES.csv') # diagnosis name descriptions  #d_icd
diagnoses.head()
diagnoses_icd.head()
patient.head()
lab.head()
###Output
_____no_output_____
###Markdown
Extract rows with pneumonia diagnoses
###Code
diag_pneum = diagnoses[(diagnoses['SHORT_TITLE'].str.contains('pneum')|(diagnoses['SHORT_TITLE'].str.contains('Pneum')))]
len(diag_pneum)
diag_pneum
###Output
_____no_output_____
###Markdown
value_counts of lung-related diseases; decided to use only codes 486, 5070, and 48241```People diagnosed with a disease whose title contains 'pneum' or 'Pneum'. Googling 'ICD9 pneumonia' lists the pneumonia-related codes.```
###Code
pneum_id = diagnoses_icd[diagnoses_icd['ICD9_CODE'].isin(diag_pneum['ICD9_CODE'])].reset_index()
pneum_id['ICD9_CODE'].value_counts()[:10]
###Output
_____no_output_____
###Markdown
[:3] Check the exact disease names of the top 3 codes
###Code
diagnoses[diagnoses['ICD9_CODE'].isin(pneum_id['ICD9_CODE'].value_counts()[:3].index)]
pneum_id['SUBJECT_ID'].nunique()
###Output
_____no_output_____
###Markdown
0622: rows where FLAG is abnormal```Distribution of abnormal lab-event flags; check how many patients have 1, 2, ... abnormal results (frequency); count the unique ITEMID (test) values``` Extract the DIAGNOSES_ICD data for the top-3 ([:3]) codes
###Code
pneum_id
###Output
_____no_output_____
###Markdown
Drop unnecessary columns
###Code
pneum = pneum_id[(pneum_id['ICD9_CODE'].isin(pneum_id['ICD9_CODE'].value_counts()[:3].index))].drop(['index','ROW_ID','SEQ_NUM'],axis=1).reset_index(drop=True)
pneum
###Output
_____no_output_____
###Markdown
Confirm that the three codes are present
###Code
pneum['ICD9_CODE'].value_counts()
환자id = pneum['SUBJECT_ID'].unique()
len(환자id)
# check death/survival (EXPIRE_FLAG) for the extracted patient IDs
patient[patient['SUBJECT_ID'].isin(환자id)]['EXPIRE_FLAG'].value_counts()
###Output
_____no_output_____
###Markdown
From patient, keep only the patients with the top-3 pneumonia codes
###Code
환자 = patient[patient['SUBJECT_ID'].isin(환자id)]
환자
###Output
_____no_output_____
###Markdown
From lab, keep only the patients with the top-3 pneumonia codes
###Code
환자lab = lab[lab['SUBJECT_ID'].isin(환자id)].reset_index(drop=True)
환자lab
환자lab['SUBJECT_ID'].nunique()
환자lab['ITEMID'].nunique()
환자lab['FLAG'] = 환자lab['FLAG'].fillna('nan') # so the NaNs can be counted
환자lab['FLAG'].value_counts()
###Output
_____no_output_____
###Markdown
Keep only the rows flagged abnormal
###Code
ab_pneu = 환자lab[환자lab['FLAG'].str.contains('abnormal')]
ab_pneu
###Output
_____no_output_____
###Markdown
ITEMID nunique() in ab_pneu
###Code
ab_pneu['ITEMID'].nunique()
###Output
_____no_output_____
###Markdown
Distribution of FLAG counts in the patient lab data
###Code
sns.countplot(환자lab['FLAG'])
plt.figure(figsize=(100,25), dpi=100)
sns.countplot(ab_pneu['ITEMID'])#, rotation = - 45)
plt.xticks(rotation = - 45 )
ab_pneu['ITEMID']
ab_pneu['ITEMID'].value_counts()[:10]
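# Frequency of patients by number of abnormal lab results
# (the '1 abnormal, 2 abnormal, ...' distribution noted in the 0622 task above; a quick sketch)
abn_per_patient = ab_pneu.groupby('SUBJECT_ID').size()
abn_per_patient.value_counts().sort_index().head(20)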
###Output
_____no_output_____
###Markdown
0701
###Code
need = 환자lab[['SUBJECT_ID', 'ITEMID', 'FLAG']]
need.replace('abnormal', 1, inplace=True) # without inplace=True the change is not kept
# need.replace()
need.reset_index(drop=True, inplace = True) # rebuild and sort the index (ascending) and keep the change
need
###Output
/usr/local/lib/python3.8/dist-packages/pandas/core/frame.py:4524: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
return super().replace(
###Markdown
need['ITEMID'] == 50910
###Code
len(need)
# wanted to inspect need
# need.to_csv("need.csv")
need['ITEMID'].nunique()
###Output
_____no_output_____
###Markdown
0630
###Code
col = need['ITEMID'].unique()
row = need['SUBJECT_ID'].unique()
df = pd.DataFrame(columns = col, index = row)
# df[col]
df
df.index # it is .index, not row
df.columns
(need['SUBJECT_ID'] == 9).sum()
# need['ITEMID'] == 50821
# Mark 1 for every (SUBJECT_ID, ITEMID) pair that appears in need
# (direct .loc assignment; the original nested zip/if comparison was slow and did not set single cells)
for idx in range(len(need)):
    df.loc[need['SUBJECT_ID'][idx], need['ITEMID'][idx]] = 1
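# A vectorized alternative (a sketch): pd.crosstab builds a patient x item presence table (0/1)
# in one call; `indicator_df` is just an illustrative name.
indicator_df = (pd.crosstab(need['SUBJECT_ID'], need['ITEMID']) > 0).astype(int)
indicator_df.head()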
df
# df.to_csv("0629과제.csv")
df.sum() # column sums: find columns that do not contain a single 1
col = df.sum() == 0
idx = col.index
lst = list(idx)
# df.drop(columns = lst, axis = 1)
df = pd.read_csv(r"0629과제.csv")
df_copy = df.copy()
# df_copy.drop(columns = 50911, inplace=True)
df
df_copy
df_copy[['50911']].sum()
for i in lst:
if df_copy[str(i)].sum() == 0:
df_copy.drop(columns = str(i), axis = 1, inplace = True)
df
df_copy
# df_copy.to_csv("0629과제 최종.csv")
need['FLAG']
###Output
_____no_output_____
###Markdown
Finally the results match. The cells below are separate scratch work, so there is no need to review them.
###Code
lab_abn = lab[lab['FLAG'].isin(['abnormal'])] # keep only abnormal rows
lab_abn
lab_abn[lab_abn['SUBJECT_ID'].isin(환자id)].reset_index(drop=True)
labev_abn = lab[lab['FLAG'].isin(['abnormal'])]
labev_abn
labev_abn[labev_abn['SUBJECT_ID'].isin(pneum_id['SUBJECT_ID'])]
labev.loc[labev['ROW_ID']== dis2['ROW_ID']]
# labev['ROW_ID']==
dis2['ROW_ID']
labev_abn['ROW_ID']
###Output
_____no_output_____ |
GDAL/.ipynb_checkpoints/gdal2ncml-checkpoint.ipynb | ###Markdown
netcdf big {dimensions: x = 4212 ; y = 3912 ;variables: char GDAL_Geographics ; GDAL_Geographics:Northernmost_Northing = 39.755 ; GDAL_Geographics:Southernmost_Northing = 36.49500000000011 ; GDAL_Geographics:Easternmost_Easting = -70.24500000000012 ; GDAL_Geographics:Westernmost_Easting = -73.755 ; GDAL_Geographics:spatial_ref = "GEOGCS[\"GCS_North_American_1983\",DATUM[\"D_North_American_1983\",SPHEROID[\"GRS_1980\",6378137,298.257222101]],PRIMEM[\"Greenwich\",0],UNIT[\"Degree\",0.017453292519943295],VERTCS[\"Instantaneous Water Level height\",VERT_DATUM[\"Instantaneous Water Level\",2005],UNIT[\"Meter\",1]]]" ; GDAL_Geographics:GeoTransform = "-73.755 0.000833333 0 39.755 0 -0.000833333 " ; GDAL_Geographics:grid_mapping_name = "Geographics Coordinate System" ; GDAL_Geographics:long_name = "Grid_latitude" ; float Band1(y, x) ; Band1:_FillValue = -1.e+10f ; Band1:grid_mapping = "GDAL_Geographics" ; Band1:long_name = "GDAL Band Number 1" ;// global attributes: :Conventions = "CF-1.0" ;
###Code
value="COMPD_CS[\\\"NAD83 + NAVD88 height\\\",GEOGCS[\\\"NAD83\\\",DATUM[\\\"North_American_Datum_1983\\\",SPHEROID[\\\"GRS 1980\\\",6378137,298.257222101,AUTHORITY[\\\"EPSG\\\",\\\"7019\\\"]],TOWGS84[0,0,0,0,0,0,0],AUTHORITY[\\\"EPSG\\\",\\\"6269\\\"]],PRIMEM[\\\"Greenwich\\\",0,AUTHORITY[\\\"EPSG\\\",\\\"8901\\\"]],UNIT[\\\"degree\\\",0.0174532925199433,AUTHORITY[\\\"EPSG\\\",\\\"9122\\\"]],AUTHORITY[\\\"EPSG\\\",\\\"4269\\\"]],VERT_CS[\\\"NAVD88 height\\\",VERT_DATUM[\\\"North American Vertical Datum 1988\\\",2005,AUTHORITY[\\\"EPSG\\\",\\\"5103\\\"],EXTENSION[\\\"PROJ4_GRIDS\\\",\\\"g2012a_conus.gtx,g2012a_alaska.gtx,g2012a_guam.gtx,g2012a_hawaii.gtx,g2012a_puertorico.gtx,g2012a_samoa.gtx\\\"]],UNIT[\\\"metre\\\",1,AUTHORITY[\\\"EPSG\\\",\\\"9001\\\"]],AXIS[\\\"Up\\\",UP],AUTHORITY[\\\"EPSG\\\",\\\"5703\\\"]],AUTHORITY[\\\"EPSG\\\",\\\"5498\\\"]]"
value
value="COMPD_CS[\"NAD83 + NAVD88 height\",GEOGCS[\"NAD83\",DATUM[\"North_American_Datum_1983\",SPHEROID[\"GRS 1980\",6378137,298.257222101,AUTHORITY[\"EPSG\",\"7019\"]],TOWGS84[0,0,0,0,0,0,0],AUTHORITY[\"EPSG\",\"6269\"]],PRIMEM[\"Greenwich\",0,AUTHORITY[\"EPSG\",\"8901\"]],UNIT[\"degree\",0.0174532925199433,AUTHORITY[\"EPSG\",\"9122\"]],AUTHORITY[\"EPSG\",\"4269\"]],VERT_CS[\"NAVD88 height\",VERT_DATUM[\"North American Vertical Datum 1988\",2005,AUTHORITY[\"EPSG\",\"5103\"],EXTENSION[\"PROJ4_GRIDS\",\"g2012a_conus.gtx,g2012a_alaska.gtx,g2012a_guam.gtx,g2012a_hawaii.gtx,g2012a_puertorico.gtx,g2012a_samoa.gtx\"]],UNIT[\"metre\",1,AUTHORITY[\"EPSG\",\"9001\"]],AXIS[\"Up\",UP],AUTHORITY[\"EPSG\",\"5703\"]],AUTHORITY[\"EPSG\",\"5498\"]]"
value
###Output
_____no_output_____ |
Scripts/2_article_preprocessing.ipynb | ###Markdown
Imports
###Code
import re
from unicodedata import normalize
import html
from tqdm.notebook import tqdm
import pickle
import gc
###Output
_____no_output_____
###Markdown
Load articles
###Code
data = pickle.load(open("../Data/data_v1.p", "rb"))
###Output
_____no_output_____
###Markdown
Extract metadata HTML tags
###Code
def decode_html_tags(data: list, key_name: str, new_key_name: str):
'''This function decodes HTML tags from each article: <ref> --> <ref> '''
with tqdm(total=len(data)) as pbar:
for d in data:
d[new_key_name] = html.unescape(d[key_name])
pbar.update(1)
return data
####
data = decode_html_tags(data, 'body', 'body')
###Output
_____no_output_____
###Markdown
HTML comments
###Code
def remove_html_comments(data: list, key_name: str, new_key_name: str):
'''This function removes HTML comments from each article'''
with tqdm(total=len(data)) as pbar:
for d in data:
d[new_key_name] = re.sub("<!--.+?-->", "", d[key_name])
pbar.update(1)
return data
####
data = remove_html_comments(data, 'body', 'body')
s = [d['body'] for d in data if d['_id_']=='213688'][0]
print(s)
###Output
{{Ficha de persona
|nombre = Nick Mason
|imagen = Nick_Mason_20060603_Fnac_08.jpg
|tamaño de imagen = 250px
|pie de imagen = Nick Mason, junio 2006.
|nombre de nacimiento = Nicholas Berkeley Mason
|fecha de nacimiento = {{Fecha de inicio|27|1|1944|edad}} {{bandera|Reino Unido}}
|instrumento = [[batería (instrumento musical)|Batería]], [[Teclado electrónico|teclados]], [[bajo (instrumento musical)|bajo]], [[guitarra]]
|género = [[Rock progresivo]], [[rock psicodélico]], [[rock experimental]], [[rock instrumental]]
|ocupación = [[músico]], [[Productor discográfico|productor]], [[escritor]]
|años activo = [[1964]]-[[2018]]
|compañía discográfica = [[Capitol Records]], [[Columbia Records]], [[Sony Music Entertainment|Sony]], [[EMI]], [[Harvest Records|Harvest]]
|relacionados = [[Pink Floyd]]<br />[[Sigma 6 (banda)|Sigma 6]]<br />[[Sigma 6 (banda)|The Screaming Abdabs]]<br />[[Mason + Fenn]]<br />[[Nick Mason's Saucerful of Secrets]]<br />[[Robert Wyatt]]<br />[[Carla Bley]]<br />[[Michael Mantler]]<br />[[Death Grips ]]<br />[[Radiohead]]<br />[[Archie]]<br />[[Unkle Adams]]
|página web =
}}
'''Nicholas Berkeley Mason''' [[Comendador de la Orden del Imperio Británico|CBE]] ([[Birmingham]], [[Inglaterra]]; [[27 de enero]] de [[1944]]), más conocido como '''Nick Mason''', es un [[músico]], [[Productor discográfico|productor]], [[escritor]], [[Batería (instrumento musical)|baterista]] y piloto de automovilismo, reconocido por su trabajo en el grupo [[Reino Unido|británico]] de [[rock progresivo]] [[Pink Floyd]]. Mason ha escrito conjuntamente algunas de las composiciones más populares de Pink Floyd como «[[Echoes]]» y «[[Time (canción)|Time]]». Mason es el único miembro de Pink Floyd presente en cada uno de sus álbumes. Se estima que hasta 2010, el grupo ha vendido más de 250 millones de discos en todo el mundo<ref>{{Citation | title = Pink Floyd Reunion Tops Fans' Wish List in Music Choice Survey | url = http://www.bloomberg.com/apps/news?pid=newsarchive&sid=aOmothQgn6l4&refer=muse|work=Bloomberg| date = 26 de septiembre de 2007| accessdate =25 de mayo de 2012| postscript = }}</ref><ref>{{enlace roto|1={{Citation| title = Pink Floyd's a dream, Zeppelin's a reality| url = http://www2.timesdispatch.com/lifestyles/2007/sep/28/-rtd_2007_09_28_0044-ar-182172/| work = [[Richmond Times-Dispatch]]| date = 28 de septiembre de 2007| accessdate = 25 de mayo de 2012| postscript = }} |2=http://www2.timesdispatch.com/lifestyles/2007/sep/28/-rtd_2007_09_28_0044-ar-182172/ |bot=InternetArchiveBot }}</ref> incluyendo 75 millones de unidades vendidas en los Estados Unidos.
También compite en eventos de carreras de automóviles, como las [[24 Horas de Le Mans]].<ref>Documental de [[Discovery Channel]] «World's Most Expensive Cars»</ref>
El 26 de noviembre de 2012, Mason recibió una Honorario [[Doctor en Letras]] de la [[Universidad de Westminster]] en la ceremonia de presentación de la Escuela de Arquitectura, Construcción y Medio Ambiente (había estudiado arquitectura en el predecesor de la Universidad, [[Regent Street Polythechnic]], 1962-1967).<ref>University of Westminster presentation ceremony programme, 26 November 2012</ref>
== Primeros años ==
Es hijo del director de cine documental [[Bill Mason (director) | Bill Mason]], que nació en [[Birmingham]], pero fue criado en [[Hampstead (Londres)|Hampstead]], Londres,<ref group="nota">Muchas biografías en línea confunden la dirección de la calle «Downshire Hill» con «The Downshire Hills», un barrio de [[Birmingham]].</ref> y asistió a [[Frensham Heights School]] cerca de la [[Farnham]], Surrey. Más tarde estudió en la [[Universidad de Westminster|Regent Street Polytechnic]] (ahora la Universidad de Westminster), donde se asoció con [[Roger Waters]], [[Bob Klose]] y [[Richard Wright (músico)|Rick Wright]] en 1964 para formar el predecesor de Pink Floyd, [[Sigma 6 (banda)|Sigma 6]].
== Pink Floyd ==
Cuando Nick Mason estudiaba en la Regent Street Polytechnic, formó junto con [[Roger Waters]], [[Bob Klose]] y [[Richard Wright (Pink Floyd)|Richard Wright]] en 1964 la banda [[Sigma 6 (banda)|Sigma 6]], que posteriormente, tras la ida de Klose y la llegada de [[Syd Barrett]], se convirtió en [[Pink Floyd]]. Desde entonces ha permanecido en el grupo y, aunque es quien tiene menos créditos en la composición de las canciones de Pink Floyd, es el único miembro que ha aparecido en todos los discos del grupo.
Ha trabajado también a través de su compañía Ten Tenths, como baterista y productor con músicos como [[Steve Hillage]] y [[Robert Wyatt]], como baterista con [[Michael Mantler]] y como productor con [[The Damned]]. Mason colecciona automóviles clásicos, una afición heredada de su padre. Como fanático de [[Ferrari]], posee diez automóviles de esta marca, y es común que las compañías de automóviles le ofrezcan modelos limitados de lujo que solo son construidos para contados clientes habituales.
En 1985 Roger Waters inició un juicio contra [[David Gilmour]] por los derechos del nombre Pink Floyd. Nick Mason permaneció con Gilmour durante todo ese tiempo hasta que en 1986 ambos ganaron los derechos sobre el nombre de la banda y su repertorio.
En julio del 2005 Mason junto con Gilmour, Richard Wright y, por primera vez en 24 años, Roger Waters, tocó nuevamente como Pink Floyd cuatro canciones en el concierto masivo [[Live 8]] en [[Londres]]. Tras del reencuentro, Mason quedó en muy buenos términos con Roger Waters e incluso ya antes había tocado la batería en las dos últimas noches de la gira de Waters en 2002 para la canción de Pink Floyd «Set the Controls for the Heart of the Sun». En el 2006 Mason y Rick Wright participaron con David Gilmour durante su presentación en el [[Royal Albert Hall]] en Londres (aunque en realidad fue un ''show'' de David Gilmour). En 2007, esa misma formación tocó la canción «[[Arnold Layne]]» (primer sencillo de Pink Floyd) en el concierto tributo a Syd Barrett. Ese mismo año colaboró nuevamente con Roger Waters en algunos de los primeros conciertos de su gira ''[[Dark side of the Moon Live]]''.
== Estilo ==
Fue uno de los primeros bateristas en la historia del ''rock'' en usar el doble pedal (pero nunca lo ha ocupado) algo muy común en estos días.{{cr}}
Como compositor solo se le acreditan «[[Speak to Me]]» (''[[The Dark Side of the Moon]]'') y «[[The Grand Vizier's Garden Party]]» (''[[Ummagumma]]''), sin embargo, colaboraba con una de las «marcas registradas» de Pink Floyd, los efectos de sonidos o ''[[Loop (música)|loops]]'' (antes de la invención de máquinas especializadas para ello). También se le atribuye piezas angulares en la historia de Pink Floyd, como «[[Echoes]]», «[[Time (canción)|Time]]», «[[A Saucerful of Secrets]]», «[[One of These Days]]», «[[Atom Heart Mother (canción)|Atom Heart Mother]]», «[[Interstellar Overdrive]]», todos ellos compuestos por la banda.
== Discografía ==
=== Con Pink Floyd ===
{{AP|Discografía de Pink Floyd}}
=== Con Nick Mason's Fictitious Sports ===
* ''[[Nick Mason's Fictitious Sports]]'' – 3 de mayo de 1981 (aparece como álbum de Mason pero en realidad es un disco de [[Carla Bley]])
=== Con Rick Fenn ===
* ''[[Profiles]]'' – 29 de julio de 1985
* ''[[White of the Eye]]'' – 1987 (banda sonora)
* ''[[Tank Mailing]]'' – 1988 (banda sonora)
=== Canciones de Pink Floyd co-escritas por Mason ===
*«Nick's Boogie» (1967) (''London '66–'67'')
*«[[Pow R. Toc H.]]» (1967) (''[[The Piper at the Gates of Dawn]]'')
*«[[Interstellar Overdrive]]» (1967) (''The Piper at the Gates of Dawn'')
*«[[A Saucerful of Secrets (canción)|A Saucerful of Secrets]]» (1968) (''[[A Saucerful of Secrets]]'')
*«[[Careful with that Axe, Eugene]]» (1968) (cara B del sencillo «[[Point Me at the Sky]]»)
*«The Merry Xmas Song» (1969) (no publicado)
*«[[Up the Khyber]]» (1969) (''[[Music from the Film More]]'')
*«[[Party Sequence]]» (1969) (''Music from the Film More'')
*«[[Main Theme]]» (1969) (''Music from the Film More'')
*«[[Ibiza Bar]]» (1969) (''Music from the Film More'')
*«[[More Blues]]» (1969) (''Music from the Film More'')
*«[[Quicksilver (canción)|Quicksilver]]» (1969) (''Music from the Film More'')
*«[[Dramatic Theme]]» (1969) (''Music from the Film More'')
*«[[The Grand Vizier's Garden Party]]» (1969) (''[[Ummagumma]]'')
*«Come In Number 51, Your Time Is Up» (1970) (''[[Zabriskie Point]]'')
*«Country Song» (1970) (''Zabriskie Point'')
*«Crumbling Land» (1970) (''Zabriskie Point'')
*«Heart Beat, Pig Meat» (1970) (''Zabriskie Point'')
*«[[Atom Heart Mother (canción)|Atom Heart Mother]]» (1970) (''[[Atom Heart Mother]]'')
*«[[Alan's Psychedelic Breakfast]]» (1970) (''Atom Heart Mother'')
*«[[One of These Days]]» (1971) (''[[Meddle]]'')
*«[[Seamus]]» (''Meddle'')
*«[[Echoes]]» (1971) (''Meddle'')
*«[[When You're In]]» (1972) (''[[Obscured by Clouds]]'')
*«[[Speak to Me]]» (1973) (''[[The Dark Side of the Moon]]'')
*«[[Time (canción)|Time]]» (1973) (''The Dark Side of the Moon'')
*«[[Any Colour You Like]]» (1973) (''The Dark Side of the Moon'')
*«Carrera Slow Blues» (1992) (''[[La Carrera Panamericana]]'')
*«Pan Am Shuffle» (1992) (''La Carrera Panamericana'')
*«Soundscape» (1995) (pista extra del álbum ''[[Pulse (álbum)|P·U·L·S·E]]'')
*«Love Scene (Version 6)» (1997) (edición extendida de 1997 de ''[[Zabriskie Point]]'')
*«Unknown Song» (1997) (edición extendida de 1997 de ''Zabriskie Point'')
*«Love Scene (Version 4)» (1997) (edición extendida de 1997 de ''Zabriskie Point'')
*«The Hard Way» (2011) (''[[The Dark Side of the Moon]] (Immersion edition)'')
*«The Travel Sequence» (2011) (''The Dark Side of the Moon (Immersion edition)'')
*«Sum» (2014) (''[[The Endless River]]'')
*«Skins» (2014) (''The Endless River'')
=== Como productor ===
* Principal Edwards Magic Theatre – ''The Asmoto Running Band'' (1971)
* Principal Edwards Magic Theatre – ''Round One'' (1974)
* [[Robert Wyatt]] – ''Rock Bottom'' (1974)
* [[Gong (banda)|Gong]] – ''Shamal'' (1976)
* [[The Damned]] – ''Music for Pleasure'' (1977)
* [[Steve Hillage]] – ''Green'' (1978); Coproducido con Steve Hillage. Mason también toca la batería en «Leylines to Glassdom».
==Visión e inquietudes==
En común con [[Roger Waters]], Mason ha dado conciertos para recaudar fondos para la [[Countryside Alliance]], un grupo que hizo campaña en contra de la prohibición de la caza del zorro con la Ley de Caza de 2004.<ref>{{cita web | url= http://www.gigwise.com/news/14561/pink-floyd-legends-to-play-gig-for-pro-hunt-campaigners | título= Pink Floyd Legends To Play Gig For Pro-Hunt Campaigners | editorial=[[Gigwise]] | nombre=Daniel | apellido=Melia | fecha=14 de marzo de 2006 | fechaacceso=15 de agosto de 2015}}</ref> En 2007 ambos actuaron en el [[castillo de Highclere]] en Hampshire en apoyo del grupo.<ref name="Povey">{{cita libro |apellido=Povey |nombre=Glenn |título=Echoes: The Complete History of Pink Floyd|año=2008|editorial=Mind Head Publishing Ltd|isbn=978-0955462412}}</ref>
Es un miembro del consejo y copresidente de la Coalición [[Featured Artists' Coalition]].<ref name="Youngs">{{cita noticia|url=http://www.bbc.co.uk/news/entertainment-arts-11556101|título=Pink Floyd may get back together for charity|apellido=Youngs|nombre=Ian|fecha=16 de octubre de 2010|obra=[[BBC Online]]|fechaacceso=16 de octubre de 2010}}</ref><ref name="FAC-2010-09-20">{{enlace roto|1={{cita web|url=http://www.featuredartistscoalition.com/showscreen.php?site_id=161&screentype=site&screenid=161&loginreq=0&blogaction=showitem&bloginfo=1208|título=FAC Chairman Nick Mason in keynote interview at In The City 2010|fecha=20 de septiembre de 2010|editorial=[[Featured Artists' Coalition]]|fechaacceso=16 de octubre de 2010}} |2=http://www.featuredartistscoalition.com/showscreen.php?site_id=161&screentype=site&screenid=161&loginreq=0&blogaction=showitem&bloginfo=1208 |bot=InternetArchiveBot }}</ref> como portavoz de la organización, Mason ha expresado su apoyo a los derechos de los músicos y ofreció asesoramiento a los artistas más jóvenes en una industria de la música que cambia rápidamente.<ref>{{cita noticia|url=http://www.bbc.co.uk/news/entertainment-arts-11564393|título=Pink Floyd drummer Nick Mason gives advice to new bands|apellido=Youngs|nombre=Ian|fecha=18 de octubre de 2010|obra=[[BBC Online]]|fechaacceso=15 de agosto de 2015}}</ref>
Mason se ha unido a Roger Waters en la expresión de apoyo al movimiento «[[Boicot, Desinversiones y Sanciones]]», campaña contra Israel por el [[conflicto palestino-israelí]] e instó a [[The Rolling Stones]] a no tocar en Israel en 2014.<ref>{{cita web | url= http://www.salon.com/2014/05/01/pink_floyds_roger_waters_and_nick_mason_why_rolling_stones_shouldnt_play_in_israel/ | título= Pink Floyd’s Roger Waters and Nick Mason: Why Rolling Stones shouldn’t play in Israel | editorial=[[Salon (sitio web)|Salon]] | fecha=1 de mayo de 2014 | fechaacceso=15 de agosto de 2015}}</ref> Mason se considera [[ateo]].<ref>{{cita web|título=Q magazine Questionnaire|url=http://www.pinkfloyd-co.com/band/interviews/nbm/nbmquestions.html|fechaacceso=13 de mayo de 2011|urlarchivo=https://web.archive.org/web/20110527170254/http://www.pinkfloyd-co.com/band/interviews/nbm/nbmquestions.html|fechaarchivo=27 de mayo de 2011}}</ref>
== Libros ==
* ''Into the Red: 22 Classic Cars That Shaped a Century of Motor Sport'' (con Mark Hales) – 3 de septiembre de 1998 (primera edición), 9 de septiembre de 2004 (segunda edición).
* ''[[Inside Out: A Personal History of Pink Floyd]]'' – 28 de octubre de 2004.
== Véase también ==
* [[:Categoría:Canciones escritas por Nick Mason|Canciones escritas por Nick Mason]]
== Notas ==
{{listaref|group="nota"}}
==Referencias==
{{Listaref}}
== Enlaces externos ==
{{commonscat}}
{{NF|1944||Mason, Nick}}
[[Categoría:Bateristas de rock]]
[[Categoría:Bateristas del Reino Unido]]
[[Categoría:Miembros de Pink Floyd]]
[[Categoría:Pilotos de automovilismo de Inglaterra]]
[[Categoría:Pilotos de las 24 Horas de Le Mans]]
[[Categoría:Ateos de Inglaterra]]
[[Categoría:Nacidos en Birmingham]]
###Markdown
Break line tags
###Code
def replace_break_line_tags(data: list, key_name: str, new_key_name: str):
'''This function replaces <br> HTML tags'''
with tqdm(total=len(data)) as pbar:
for d in data:
text = d[key_name]
br_line_tags = ['<br>', '<br >',
'<Br>', '<Br >',
'<br/>', '<br />', '<br/ >', '<br / >',
'<Br/>', '<Br />', '<Br/ >', '<Br / >',
'>br>',
' ', ' ',
'<li>',
'<span>', '</span>',
]
for br in br_line_tags:
text = text.replace(br, '\n')
d[new_key_name] = text
pbar.update(1)
return data
####
data = replace_break_line_tags(data, 'body', 'body')
###Output
_____no_output_____
###Markdown
Double square brackets
###Code
def clean_double_square_brackets_metadata(data: list, key_name: str, new_key_name: str):
'''This function cleans double square brackets'''
with tqdm(total=len(data)) as pbar:
for ind,d in enumerate(data):
texto_processed = d[key_name]
left_index = 0
right_index = len(texto_processed)
while True:
reg_resp_l = re.search(r'\[\[', texto_processed[left_index:right_index])
if reg_resp_l == None:
break
else:
left_index = reg_resp_l.span()[0] + left_index
reg_resp_r = re.search(r'\]\]', texto_processed[left_index:right_index])
if reg_resp_r == None:
print('¡warning!')
print(ind)
break
else:
right_index = reg_resp_r.span()[1] + left_index
note = texto_processed[left_index:right_index]
note_replacements = note.split('|')
if len(note_replacements)==1:
note_replacement = note_replacements[0]
left_padding = 0
elif len(note_replacements)==2:
note_replacement = '[[' + note_replacements[1]
left_padding = len(note_replacements[0]) - 2
else:
note_replacement = '[[]]'
left_padding = len(note) - 4
texto_processed = note_replacement.join(texto_processed.split(note))
left_index = right_index-left_padding
right_index = len(texto_processed)
d[new_key_name] = texto_processed
pbar.update(1)
return data
####
data = clean_double_square_brackets_metadata(data, 'body', 'body')
###Output
_____no_output_____
###Markdown
HTML
###Code
def extract_html_element(text: str):
html_l = list()
left_index = 0
right_index = len(text)
list_index = 0
while True:
reg_resp_l = re.search(r'<[^/!]*?>', text[left_index:right_index])
if reg_resp_l == None:
reg_resp_l = re.search(r'<![\s\S]*?>', text[left_index:right_index])
if reg_resp_l != None:
text = ''.join(text.split(reg_resp_l.group()))
else:
reg_resp_l = re.search(r'<[^/]*?/>', text[left_index:right_index])
if reg_resp_l == None:
break
else:
tag_raw = reg_resp_l.group()
tag_name = tag_raw.split(' ')[0][1:]
try:
tag_value = tag_raw.split(' ')[1][:-2]
except:
tag_value = ''
tag = dict()
tag = {'tag': tag_name, 'value': tag_value}
list_index_str = '%«% '+ str(list_index) + ' %»%'
list_index += 1
text = list_index_str.join(text.split(tag_raw))
html_l.append(tag)
left_index = 0
right_index = len(text)
else:
tag_name = reg_resp_l.group()[1:-1].split(' ')[0]
tag_raw_open = reg_resp_l.group()
left_index = reg_resp_l.span()[1] + left_index
reg_resp_r = re.search(r'</[\s\S]*?>', text[left_index:right_index])
if reg_resp_r == None:
break
else:
tag_raw_close = reg_resp_r.group()
right_index = reg_resp_r.span()[0] + left_index
reg_resp_l_2 = re.search(r'<[\s\S]*?>', text[left_index:right_index-1])
if reg_resp_l_2 == None:
tag_value = text[left_index:right_index]
tag_raw = tag_raw_open + tag_value + tag_raw_close
tag = {'tag': tag_name, 'value': tag_value}
list_index_str = '%«% '+ str(list_index) + ' %»%'
list_index += 1
text = list_index_str.join(text.split(tag_raw))
html_l.append(tag)
left_index = 0
right_index = len(text)
else:
left_index = reg_resp_l_2.span()[0] + left_index
return text, html_l
####
def extract_html_elements_metadata(data: list, key_name: str, new_key_name: str):
'''This function extracts all the html elements from the text and replaces them with placeholders'''
with tqdm(total=len(data)) as pbar:
for d in data:
text_processed, html_l = extract_html_element(d[key_name])
d[new_key_name] = text_processed
d['_metadata_html_'] = html_l
pbar.update(1)
return data
####
data = extract_html_elements_metadata(data, 'body', 'body')
###Output
_____no_output_____
###Markdown
Curly brackets
###Code
def extract_curly_brackets_metadata(data: list, key_name: str, new_key_name: str):
'''This function extracts all the curly brackets from the text and replaces them with placeholders'''
with tqdm(total=len(data)) as pbar:
for d in data:
brackets_l = list()
texto_processed = d[key_name]
left_index = 0
right_index = len(texto_processed)
list_index = 0
while True:
reg_resp_l = re.search(r'{{', texto_processed[left_index:right_index])
if reg_resp_l == None:
break
else:
left_index = reg_resp_l.span()[0] + left_index
reg_resp_r = re.search(r'}}', texto_processed[left_index:right_index])
if reg_resp_r == None:
break
else:
right_index = reg_resp_r.span()[1] + left_index
reg_resp_l_2 = re.search(r'{{', texto_processed[left_index+2:right_index])
if reg_resp_l_2 == None:
bracket = re.search(r'{{[\s\S}]*?}}', texto_processed[left_index:]).group(0)
list_index_str = '%{% '+ str(list_index) + ' %}%'
texto_processed = list_index_str.join(texto_processed.split(bracket))
brackets_l.append(bracket)
list_index += 1
left_index = 0
right_index = len(texto_processed)
else:
left_index = reg_resp_l_2.span()[0] + left_index
d[new_key_name] = texto_processed
d['_metadata_brackets_'] = brackets_l
pbar.update(1)
return data
data = extract_curly_brackets_metadata(data, 'body', 'text')
###Output
_____no_output_____
###Markdown
Divide article parts
###Code
def extract_infobox(data: list):
''' This function extracts the infobox part of the articles'''
pattern = r'{{Ficha de'
pattern_persona = r'{{Ficha de persona'
with tqdm(total=len(data)) as pbar:
for d in data:
for b in d['_metadata_brackets_']:
response = re.match(pattern, b)
if response:
d['infobox'] = b
response_persona = re.match(pattern_persona, b)
if response_persona:
d['_tipo_'] = 'persona'
else:
d['_tipo_'] = 'grupo'
pbar.update(1)
return data
data = extract_infobox(data)
####
#for d in data:
#del d['body']
###Output
_____no_output_____
###Markdown
Divide article parts Infobox keys
###Code
def normalize_string(text: str):
'''This function normalizes a string'''
text = text.lower().replace('_',' ').replace('-',' ').strip()
# -> NFD & remove diacritical
text = re.sub(r'([^n\u0300-\u036f]|n(?!\u0303(?![\u0300-\u036f])))[\u0300-\u036f]+',
r'\1', normalize( "NFD", text), 0, re.I)
# -> NFC
text = normalize('NFC', text)
return text
#####
def get_infobox_attributes_pattern():
'''This function returns the regex pattern to extract the attributes from each infobox'''
pattern = r'(\n\|)([^=]*)(=)([^\n]*)' # group 2 -> attr. name , group 4 -> attr. value
# \n\| : ... each attribute starts with a line break "\n" and the "|" symbol
# [^=]* : ... then everything that follows is the variable name (group 2)
# = : ... until an "=" symbol is reached
# ([^\n]*) : ... then everything up to a line break "\n" is the variable value (group 4)
return pattern
#####
def extract_infobox_attributes(data: list):
'''This function extracts and returns all the attributes and their values from each infobox'''
pattern = get_infobox_attributes_pattern()
with tqdm(total=len(data)) as pbar:
for a in data:
attr_names = [x.group(2).strip() for x in re.finditer(pattern, a['infobox'])]
attr_values = [x.group(4).strip() for x in re.finditer(pattern, a['infobox'])]
for i in range(len(attr_names)):
attr_name = normalize_string(attr_names[i])
attr_value = attr_values[i].strip()
try:
if attr_name[0]=='|':
attr_name = attr_name[1:].strip()
except:
pass
attrs = attr_name.split('\n|')
num_attrs = len(attrs)
if num_attrs>1:
for n in attrs[:-1]:
a[n] = ''
a[attrs[-1]] = attr_value
else:
a[attr_name] = attr_value
pbar.update(1)
return data
####
data = extract_infobox_attributes(data)
###Output
_____no_output_____
###Markdown
Attribute freq
###Code
def get_attr_frequency(data: list):
'''This function returns the keys sorted by frequency of appearance in the dataset'''
attr_freq = {}
for a in data:
for k in list(a.keys()):
if k in attr_freq.keys():
attr_freq[k] += 1
else:
attr_freq[k] = 1
attr_freq = {k: v for k, v in sorted(attr_freq.items(), key=lambda item: item[1], reverse=True)}
num_attrs = len(attr_freq.keys())
print(f'# of distinct attributes: {num_attrs}')
return attr_freq
####
attr_frequency = get_attr_frequency(data)
attr_frequency
###Output
_____no_output_____
###Markdown
Extract new attributes
###Code
def get_new_attributes(text: str):
new_attributes = list()
pattern = '\|.*?='
elements = re.findall(pattern, text)
elements = elements[::-1]
t = text
if len(elements)>0:
t = text.split(elements[-1])[0]
for ind, e in enumerate(elements):
new_key = e[1:-1].strip().lower()
new_value = text.split(e)[1]
if ind!=0:
new_value = new_value.split(elements[ind-1])[0]
new_value = new_value.strip()
new_attributes.append([new_key, new_value])
return t, new_attributes
####
attr_l = list(attr_frequency.keys())[10:]
for a in data:
keys_l = list(a.keys())
for key in keys_l:
if key in attr_l:
a[key], new_attributes = get_new_attributes(a[key])
for attr in new_attributes:
a[attr[0]] = attr[1]
####
print(f'# of articles: {len(data)}')
attr_frequency = get_attr_frequency(data)
###Output
# of articles: 26789
# of distinct attributes: 1440
###Markdown
Remove empty keys
###Code
def remove_empty_attributes(data: list):
'''This function removes empty attributes from the dataset'''
for a in data:
attrs = list(a.keys())
for attr in attrs:
if a[attr]=='':
if attr == '_tipo_':
print(a[attr])
del a[attr]
return data
####
data_artists = remove_empty_attributes(data)
attr_frequency = get_attr_frequency(data)
###Output
# of distinct attributes: 1020
###Markdown
Split musicians & groups
###Code
def split_musicians_and_groups(data: list):
'''This function splits data between musicians and groups'''
data_persons = list()
data_groups = list()
for a in data:
if a['_tipo_'] == 'persona':
data_persons.append(a)
elif a['_tipo_'] == 'grupo':
data_groups.append(a)
return data_persons, data_groups
####
data_persons, data_groups = split_musicians_and_groups(data)
print(f'# of articles about persons: {len(data_persons)}')
attr_frequency_persons = get_attr_frequency(data_persons)
print(f'# of articles about groups: {len(data_groups)}')
attr_frequency_groups = get_attr_frequency(data_groups)
###Output
# of articles about groups: 11283
# of distinct attributes: 563
###Markdown
Save the articles
###Code
pickle.dump(data_persons, open( "../Data/data_v2_persons.p", "wb"))
pickle.dump(data_groups, open( "../Data/data_v2_groups.p", "wb"))
###Output
_____no_output_____ |
source/pytorch/pytorch_with_examples/dynamic_net.ipynb | ###Markdown
PyTorch: Control Flow + Weight Sharing--------------------------------------To showcase the power of PyTorch dynamic graphs, we will implement a very strangemodel: a third-fifth order polynomial that on each forward passchooses a random number between 4 and 5 and uses that many orders, reusingthe same weights multiple times to compute the fourth and fifth order.
###Code
import random
import torch
import math
class DynamicNet(torch.nn.Module):
def __init__(self):
"""
In the constructor we instantiate five parameters and assign them as members.
"""
super().__init__()
self.a = torch.nn.Parameter(torch.randn(()))
self.b = torch.nn.Parameter(torch.randn(()))
self.c = torch.nn.Parameter(torch.randn(()))
self.d = torch.nn.Parameter(torch.randn(()))
self.e = torch.nn.Parameter(torch.randn(()))
def forward(self, x):
"""
For the forward pass of the model, we randomly choose either 4, 5
and reuse the e parameter to compute the contribution of these orders.
Since each forward pass builds a dynamic computation graph, we can use normal
Python control-flow operators like loops or conditional statements when
defining the forward pass of the model.
Here we also see that it is perfectly safe to reuse the same parameter many
times when defining a computational graph.
"""
y = self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3
for exp in range(4, random.randint(4, 6)):
y = y + self.e * x ** exp
return y
def string(self):
"""
Just like any class in Python, you can also define custom method on PyTorch modules
"""
return f'y = {self.a.item()} + {self.b.item()} x + {self.c.item()} x^2 + {self.d.item()} x^3 + {self.e.item()} x^4 ? + {self.e.item()} x^5 ?'
# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
# Construct our model by instantiating the class defined above
model = DynamicNet()
# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-8, momentum=0.9)
for t in range(30000):
# Forward pass: Compute predicted y by passing x to the model
y_pred = model(x)
# Compute and print loss
loss = criterion(y_pred, y)
if t % 2000 == 1999:
print(t, loss.item())
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'Result: {model.string()}')
###Output
_____no_output_____ |
sem8/sem8_trees-checkpoint.ipynb | ###Markdown
Seminar 8. Decision Trees  *Source: https://www.upnxtblog.com/index.php/2017/12/06/17-machine-learning-algorithms-that-you-should-know/* Decision trees on their own are used relatively rarely in machine learning, but methods based on their composition, i.e. ensembles (Random Forest, XGBoost, LightGBM, CatBoost), are very widespread. Linear models or decision trees?Can we say that one of these two types of models is always better? No. Depending on the spatial structure of the data, one of them will work better:- A linear model, if the data are well linearly separable- A decision tree, if the data are poorly linearly separable (only piecewise-linear or nonlinear dependencies are present)
###Code
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
%matplotlib inline
plt.rcParams["figure.figsize"] = (11, 6.5)
np.random.seed(13)
n = 500
X = np.zeros(shape=(n, 2))
X[:, 0] = np.linspace(-5, 5, 500)
X[:, 1] = X[:, 0] + 0.5 * np.random.normal(size=n)
y = (X[:, 1] > X[:, 0]).astype(int)
plt.scatter(X[:, 0], X[:, 1], s=100, c=y, cmap="winter")
plt.show()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=13)
lr = LogisticRegression(random_state=13)
lr.fit(X_train, y_train)
y_pred_lr = lr.predict(X_test)
print(f"Linear model accuracy: {accuracy_score(y_pred_lr, y_test):.2f}")
!pip install mlxtend
from mlxtend.plotting import plot_decision_regions
plot_decision_regions(X_test, y_test, lr)
plt.show()
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier(random_state=13)
dt.fit(X_train, y_train)
y_pred_dt = dt.predict(X_test)
print(f"Decision tree accuracy: {accuracy_score(y_pred_dt, y_test):.2f}")
plot_decision_regions(X_test, y_test, dt)
plt.show()
from catboost import CatBoostClassifier
dt = CatBoostClassifier(random_state=13, verbose=False)
dt.fit(X_train, y_train)
y_pred_dt = dt.predict(X_test)
print(f"Catboost accuracy: {accuracy_score(y_pred_dt, y_test):.2f}")
plot_decision_regions(X_test, y_test, dt)
plt.show()
np.random.seed(13)
X = np.random.randn(500, 2)
y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0).astype(int)
plt.scatter(X[:, 0], X[:, 1], s=100, c=y, cmap="winter")
plt.show()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=13)
lr = LogisticRegression(random_state=13)
lr.fit(X_train, y_train)
y_pred_lr = lr.predict(X_test)
print(f"Linear model accuracy: {accuracy_score(y_pred_lr, y_test):.2f}")
plot_decision_regions(X_test, y_test, lr)
plt.show()
dt = DecisionTreeClassifier(random_state=13)
dt.fit(X_train, y_train)
y_pred_dt = dt.predict(X_test)
print(f"Decision tree accuracy: {accuracy_score(y_pred_dt, y_test):.2f}")
plot_decision_regions(X_test, y_test, dt)
plt.show()
dt = CatBoostClassifier(random_state=13, verbose=False)
dt.fit(X_train, y_train)
y_pred_dt = dt.predict(X_test)
print(f"Catboost accuracy: {accuracy_score(y_pred_dt, y_test):.2f}")
plot_decision_regions(X_test, y_test, dt)
plt.show()
###Output
_____no_output_____
###Markdown
Overfitting Without regularization, decision trees have a fantastic capacity for overfitting: one can build a decision tree that achieves zero error on a given dataset by allocating a separate leaf for every object.
###Code
np.random.seed(13)
n = 100
X = np.random.normal(size=(n, 2))
X[:50, :] += 0.25
X[50:, :] -= 0.25
y = np.array([1] * 50 + [0] * 50)
plt.scatter(X[:, 0], X[:, 1], s=100, c=y, cmap="winter")
plt.show()
###Output
_____no_output_____
###Markdown
Let's look at how different values of the decision tree hyperparameters affect its structure (a combined one-line example follows right after this list):- `max_depth`: the maximum depth of the tree- `min_samples_leaf`: the minimum number of objects in a tree node required for it to be a leaf. In other words, when we search over split thresholds in a particular node, we only consider thresholds such that, after the split, each of the two new nodes contains at least `min_samples_leaf` objects.- `min_samples_split`: the minimum number of objects in an internal node required to split that node.
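For instance (the parameter values here are purely illustrative), all three regularizers can be combined in a single constructor call: `DecisionTreeClassifier(max_depth=5, min_samples_leaf=5, min_samples_split=10, random_state=13)`.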
###Code
fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(15, 12))
for i, max_depth in enumerate([3, 5, None]):
for j, min_samples_leaf in enumerate([15, 5, 1]):
dt = DecisionTreeClassifier(max_depth=max_depth, min_samples_leaf=min_samples_leaf, random_state=13)
dt.fit(X, y)
ax[i][j].set_title("max_depth = {} | min_samples_leaf = {}".format(max_depth, min_samples_leaf))
ax[i][j].axis("off")
plot_decision_regions(X, y, dt, ax=ax[i][j])
plt.show()
###Output
_____no_output_____
###Markdown
On any dataset (excluding those containing objects with identical feature values but different labels), zero error can be obtained with a maximally overfitted tree:
###Code
dt = DecisionTreeClassifier(max_depth=None, min_samples_leaf=1, min_samples_split=2, random_state=13)
dt.fit(X, y)
print(f"Decision tree accuracy: {accuracy_score(y, dt.predict(X)):.2f}")
plot_decision_regions(X, y, dt)
plt.show()
###Output
_____no_output_____
###Markdown
Instability Let's look at how the structure of a tree without regularization changes if we take different 90% subsamples of the original dataset for training.
###Code
fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(15, 12))
for i in range(3):
for j in range(3):
seed_idx = 3 * i + j
np.random.seed(seed_idx)
dt = DecisionTreeClassifier(random_state=13)
idx_part = np.random.choice(len(X), replace=False, size=int(0.9 * len(X)))
X_part, y_part = X[idx_part, :], y[idx_part]
dt.fit(X_part, y_part)
ax[i][j].set_title("sample #{}".format(seed_idx))
ax[i][j].axis("off")
plot_decision_regions(X_part, y_part, dt, ax=ax[i][j])
plt.show()
###Output
_____no_output_____
###Markdown
Decision tree in sklearn
###Code
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor, plot_tree
boston = load_boston()
X = pd.DataFrame(data=boston["data"], columns=boston["feature_names"])
y = boston["target"]
plt.title("House price distribution")
plt.xlabel("price")
plt.ylabel("# samples")
plt.hist(y, bins=20)
plt.show()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=13)
dt = DecisionTreeRegressor(max_depth=3, random_state=13)
dt.fit(X_train, y_train)
plot_tree(dt, feature_names=X.columns, filled=True, rounded=True)
plt.show()
max_depth_array = range(2, 20)
mse_array = []
for max_depth in max_depth_array:
dt = DecisionTreeRegressor(max_depth=max_depth, random_state=13)
dt.fit(X_train, y_train)
mse_array.append(mean_squared_error(y_test, dt.predict(X_test)))
plt.plot(max_depth_array, mse_array)
plt.title("Dependence of MSE on max depth")
plt.xlabel("max depth")
plt.ylabel("MSE")
plt.show()
pd.DataFrame({
"max_depth": max_depth_array,
"MSE": mse_array
}).sort_values(by="MSE").reset_index(drop=True)
min_samples_leaf_array = range(1, 20)
mse_array = []
for min_samples_leaf in min_samples_leaf_array:
dt = DecisionTreeRegressor(max_depth=6, min_samples_leaf=min_samples_leaf, random_state=13)
dt.fit(X_train, y_train)
mse_array.append(mean_squared_error(y_test, dt.predict(X_test)))
plt.plot(min_samples_leaf_array, mse_array)
plt.title("Dependence of MSE on min samples leaf")
plt.xlabel("min samples leaf")
plt.ylabel("MSE")
plt.show()
pd.DataFrame({
"min_samples_leaf": min_samples_leaf_array,
"MSE": mse_array
}).sort_values(by="MSE").reset_index(drop=True)
min_samples_split_array = range(2, 20)
mse_array = []
for min_samples_split in min_samples_split_array:
dt = DecisionTreeRegressor(max_depth=6, min_samples_leaf=1, min_samples_split=min_samples_split, random_state=13)
dt.fit(X_train, y_train)
mse_array.append(mean_squared_error(y_test, dt.predict(X_test)))
plt.plot(min_samples_split_array, mse_array)
plt.title("Dependence of MSE on min samples split")
plt.xlabel("min samples split")
plt.ylabel("MSE")
plt.show()
pd.DataFrame({
"min_samples_split": min_samples_split_array,
"MSE": mse_array
}).sort_values(by="MSE").reset_index(drop=True)
%%time
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(DecisionTreeRegressor(random_state=13),
param_grid={
# 'max_features': ['auto', 'log2', 'sqrt'],
'max_depth': range(2, 20),
'min_samples_leaf': range(1, 20)
},
cv=5,
scoring='neg_mean_squared_error')
gs.fit(X_train, y_train)
gs.best_params_
dt_our_best = DecisionTreeRegressor(max_depth=6, random_state=13)
dt_our_best.fit(X_train, y_train)
print(mean_squared_error(y_test, dt_our_best.predict(X_test)))
plot_tree(dt_our_best, feature_names=X.columns, filled=True, rounded=True)
plt.show()
dt_gs_best = DecisionTreeRegressor(max_depth=11, min_samples_leaf=3, random_state=13)
dt_gs_best.fit(X_train, y_train)
mean_squared_error(y_test, dt_gs_best.predict(X_test))
###Output
_____no_output_____
###Markdown
Feature importance
###Code
df_importances = pd.DataFrame({
"feature": X.columns,
"importance": dt.feature_importances_
}).sort_values(by="importance", ascending=False).reset_index(drop=True)
plt.bar(df_importances['feature'], df_importances['importance'])
plt.show()
###Output
_____no_output_____
###Markdown
Does standardization (scaling) of the features affect the result of a decision tree?
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_scaled = pd.DataFrame(sc.fit_transform(X_train), columns=X_train.columns, index=X_train.index)
X_test_scaled = pd.DataFrame(sc.transform(X_test), columns=X_test.columns, index=X_test.index)
X_train_scaled.head()
print("No scaling is applied\n")
for max_depth in [3, 6]:
dt = DecisionTreeRegressor(max_depth=max_depth, random_state=13)
dt.fit(X_train, y_train)
print(f"MSE on test set for depth {max_depth}: {mean_squared_error(y_test, dt.predict(X_test)):.2f}")
print("Standard scaling is applied\n")
for max_depth in [3, 6]:
dt = DecisionTreeRegressor(max_depth=max_depth, random_state=13)
dt.fit(X_train_scaled, y_train)
print(f"MSE on test set for depth {max_depth}: {mean_squared_error(y_test, dt.predict(X_test_scaled)):.2f}")
###Output
_____no_output_____
###Markdown
Decision tree sketch $R_m$ is the set of objects in the node being split, $j$ is the index of the feature used for the split, and $t$ is the split threshold.Error criterion:$$Q(R_m, j, t) = \frac{|R_\ell|}{|R_m|}H(R_\ell) + \frac{|R_r|}{|R_m|}H(R_r) \to \min_{j, t}$$$R_\ell$ is the set of objects in the left subtree, $R_r$ is the set of objects in the right subtree.$H(R)$ is the impurity criterion, which measures how well the target variable is distributed among the objects of the set $R$.
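For regression, which is the setting of the tasks below, a natural choice of $H(R)$ is the variance of the target within the node:$$H(R) = \frac{1}{|R|}\sum_{(x_i, y_i) \in R}\left(y_i - \bar{y}_R\right)^2, \qquad \bar{y}_R = \frac{1}{|R|}\sum_{(x_i, y_i) \in R} y_i,$$which is exactly what `np.var` computes in the reference implementation below.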
###Code
boston = load_boston()
X = pd.DataFrame(data=boston["data"], columns=boston["feature_names"])
X["target"] = boston["target"]
###Output
_____no_output_____
###Markdown
**Task 1**: implement the computation of the error criterion. To do this, implement functions for computing the value of the impurity criterion and for splitting a node.
###Code
from typing import Iterable, List, Tuple
def H(R: np.array) -> float:
"""
Compute impurity criterion for a fixed set of objects R.
Last column is assumed to contain target value
"""
if(len(R)==0):
return 0
return np.var(R.iloc[:,-1])
def split_node(R_m: np.array, feature: str, t: float) -> Iterable[np.array]:
"""
Split a fixed set of objects R_m given feature number and threshold t
"""
mask = R_m[feature] < t
return R_m[mask], R_m[~mask]
def q_error(R_m: np.array, feature: str, t: float) -> float:
"""
Compute error criterion for given split parameters
"""
R_left, R_right = split_node(R_m,feature, t)
return len(R_left)/len(R_m) * H(R_left) + len(R_right)/len(R_m) * H(R_right)
###Output
_____no_output_____
###Markdown
**Task 2**: iterate over all possible splits of the dataset by one of the features and plot the error criterion as a function of the threshold value.
###Code
feature = "RM"
Q_array = []
feature_values = np.unique(X_train["RM"])
for t in feature_values:
Q_array.append(q_error(X_train, feature, t))
plt.plot(feature_values, Q_array)
plt.title(feature)
plt.xlabel("threshold")
plt.ylabel("Q error")
plt.show()
###Output
_____no_output_____
###Markdown
**Task 3**: Write a function that finds the optimal split of a given node by a given feature.
###Code
def get_optimal_split(R_m: np.array, feature: str) -> Tuple[float, List[float]]:
Q_array = []
feature_values = np.unique(R_m[feature])
for t in feature_values:
Q_array.append(q_error(R_m, feature, t))
opt_threshold = feature_values[np.argmin(Q_array)]
return opt_threshold, Q_array
t, Q_array = get_optimal_split(X_train, feature)
plt.plot(np.unique(X_train[feature]), Q_array)
plt.title(feature)
plt.xlabel("threshold")
plt.ylabel("Q error")
plt.show()
###Output
_____no_output_____
###Markdown
**Task 4**: For the first split, find the feature that gives the best quality. What are the split threshold and the value of the criterion? Plot the error criterion for this feature as a function of the threshold value.
###Code
results = []
for f in X_train.columns:
t, Q_array = get_optimal_split(X_train, f)
min_error = min(Q_array)
results.append((f, t, min_error))
plt.figure()
plt.title("Feature: {} | optimal t: {} | min Q error: {:.2f}".format(f, t, min_error))
plt.plot(np.unique(X_train[f]), Q_array)
plt.show()
results = sorted(results, key=lambda x: x[2])
results
thr = pd.DataFrame(results, columns=["feature", "optimal t", "min Q error"])
optimal_feature, optimal_t, optimal_error = thr.iloc[1,:]
###Output
_____no_output_____
###Markdown
**Task 5**: Visualize the split. To do this, make a scatter plot of the target variable against the value of the selected feature. Then draw a vertical line corresponding to the split threshold.
###Code
plt.scatter(X[optimal_feature], y)
plt.axvline(x=optimal_t, color="red")
plt.xlabel(optimal_feature)
plt.ylabel("target")
plt.title("Feature: {} | optimal t: {} | Q error: {:.2f}".format(optimal_feature, optimal_t, optimal_error))
plt.show()
###Output
_____no_output_____ |
Business Analysis.ipynb | ###Markdown
SQL Business Analysis Project-----
###Code
import sqlite3
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
db = 'chinook.db'
###Output
_____no_output_____
###Markdown
Function that takes a SQL query as an argument and returns a pandas dataframe of that query-------
###Code
def run_query(q):
with sqlite3.connect(db) as conn:
return pd.read_sql(q, conn)
###Output
_____no_output_____
###Markdown
Function that takes a SQL command as an argument and executes it using the sqlite module-------
###Code
def run_command(c):
with sqlite3.connect(db) as conn:
conn.isolation_level = None
conn.execute(c)
###Output
_____no_output_____
###Markdown
Function that calls the run_query() function to return a list of all tables and views in the database----
###Code
def show_tables():
q = '''
SELECT
name,
type
FROM sqlite_master
WHERE type IN ("table","view");
'''
return run_query(q)
show_tables()
###Output
_____no_output_____
###Markdown
Album to Purchase----- Recommendation for the three artists whose albums we should purchase for the store, based on sales of tracks from their genres. Query that returns each genre, with the number of tracks sold in the USA: in absolute numbers and in percentages.
###Code
albums_to_purchase = '''
WITH usa_tracks_sold AS (
SELECT il.* FROM customer c
INNER JOIN invoice i on c.customer_id = i.customer_id
INNER JOIN invoice_line il on i.invoice_id = il.invoice_id
WHERE c.country = "USA" )
select g.name genre, count(uts.invoice_line_id) tracks_sold, cast(count(uts.invoice_line_id) AS FLOAT) / (
SELECT COUNT(*) from usa_tracks_sold
) percentage_sold
FROM usa_tracks_sold uts
INNER JOIN track t on t.track_id = uts.track_id
INNER JOIN genre g on g.genre_id = t.genre_id
group by 1
order by 2 desc
limit 5
'''
run_query(albums_to_purchase)
genre_sales_usa = run_query(albums_to_purchase)
genre_sales_usa.set_index("genre", inplace=True, drop=True)
genre_sales_usa['tracks_sold'].plot.barh(title="Top purchased Genres in the USA",
xlim=(0, 600),
colormap=plt.cm.Accent)
for i, label in enumerate(list(genre_sales_usa.index)):
score = genre_sales_usa.loc[label, "tracks_sold"]
label = (genre_sales_usa.loc[label, "percentage_sold"] * 100
).astype(int).astype(str) + "%"
plt.annotate(str(label), (score + 10, i - 0.15))
plt.show()
###Output
_____no_output_____
###Markdown
The three artists whose albums we should purchase based on sales of tracks from their genres are:* Red Tone (Punk)* Slim Jim Bites (Blues)* Meteor and the Girls (Pop) Employee performance------ Query that finds the total dollar amount of sales assigned to each sales support agent within the company.
###Code
qq = "select * from invoice limit 10"
run_query(qq)
employee_performance = '''
WITH customer_support_rep_sales AS (
select i.customer_id,
c.support_rep_id,
sum(i.total) total
from invoice i
inner join customer c on c.customer_id = i.customer_id
group by 1,2
)
select e.first_name||" "||e.last_name employee,
e.hire_date,
sum(csrs.total) total_sales
from customer_support_rep_sales csrs
inner join employee e on e.employee_id = csrs.support_rep_id
group by 1
order by 3 desc
'''
run_query(employee_performance)
emp_sales = run_query(employee_performance)
emp_sales.set_index("employee", drop=True, inplace=True)
emp_sales.sort_values("total_sales", inplace=True)
emp_sales.plot.barh(
legend=False,
title='Sales Breakdown by Employee',
colormap=plt.cm.Accent
)
plt.ylabel('')
plt.show()
# Where a country has only one customer collect them into an "Other" group
sales_by_country = '''
with country_group AS (
SELECT
CASE
WHEN (
SELECT count(*)
FROM customer
where country = c.country
) = 1 THEN "Other"
ELSE c.country
END AS country,
c.customer_id,
il.*
FROM invoice_line il
INNER JOIN invoice i ON i.invoice_id = il.invoice_id
INNER JOIN customer c ON c.customer_id = i.customer_id
)
select
country,
customers,
total_sales,
average_order,
customer_lifetime_value
from
(
select country,
count(distinct customer_id) customers,
sum(unit_price) total_sales,
sum(unit_price)/count(distinct customer_id) customer_lifetime_value,
sum(unit_price)/count(distinct invoice_id) average_order,
CASE
WHEN country = "Other" THEN 1
ELSE 0
END AS sort
from country_group
group by country
order by sort asc, total_sales desc
);
'''
run_query(sales_by_country)
###Output
_____no_output_____
###Markdown
Dashboard Visualization------
###Code
country_metrics = run_query(sales_by_country)
country_metrics.set_index("country", drop=True, inplace=True)
colors = [plt.cm.Accent(i) for i in np.linspace(0, 1, country_metrics.shape[0])]
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(9, 10))
ax1, ax2, ax3, ax4 = axes.flatten()
fig.subplots_adjust(hspace=.5, wspace=.3)
# top left
sales_breakdown = country_metrics["total_sales"].copy().rename('')
sales_breakdown.plot.pie(
ax=ax1,
startangle=-90,
counterclock=False,
title='Sales Breakdown by Country,\nNumber of Customers',
colormap=plt.cm.Accent,
fontsize=8,
wedgeprops={'linewidth':0}
)
# top right
cvd_cols = ["customers","total_sales"]
custs_vs_dollars = country_metrics[cvd_cols].copy()
custs_vs_dollars.index.name = ''
for c in cvd_cols:
custs_vs_dollars[c] /= custs_vs_dollars[c].sum() / 100
custs_vs_dollars.plot.bar(
ax=ax2,
colormap=plt.cm.Set1,
title="% Customers vs total Sales"
)
ax2.tick_params(top="off", right="off", left="off", bottom="off")
ax2.spines["top"].set_visible(False)
ax2.spines["right"].set_visible(False)
# bottom left
avg_order = country_metrics["average_order"].copy()
avg_order.index.name = ''
difference_from_avg = avg_order * 100 / avg_order.mean() - 100
difference_from_avg.drop("Other", inplace=True)
difference_from_avg.plot.bar(
ax=ax3,
color=colors,
title="Average Order,\n % Difference from Mean"
)
ax3.tick_params(top="off", right="off", left="off", bottom="off")
ax3.axhline(0, color='k')
ax3.spines["top"].set_visible(False)
ax3.spines["right"].set_visible(False)
ax3.spines["bottom"].set_visible(False)
# bottom right
ltv = country_metrics["customer_lifetime_value"].copy()
ltv.index.name = ''
ltv.drop("Other",inplace=True)
ltv.plot.bar(
ax=ax4,
color=colors,
title="Customer Lifetime Value, Dollars"
)
ax4.tick_params(top="off", right="off", left="off", bottom="off")
ax4.spines["top"].set_visible(False)
ax4.spines["right"].set_visible(False)
plt.show()
###Output
_____no_output_____ |
ipypublish/tests/test_files/nb_with_glossary_bib/source/main.ipynb | ###Markdown
glossary term \gls{term1} \gls{acro1} \gls{symbol1}
###Code
a=1
print(a)
###Output
1
|
Introduction-to-Data-Science-in-python/Material/Week+3.ipynb | ###Markdown
---_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- Merging Dataframes
###Code
import pandas as pd
df = pd.DataFrame([{'Name': 'Chris', 'Item Purchased': 'Sponge', 'Cost': 22.50},
{'Name': 'Kevyn', 'Item Purchased': 'Kitty Litter', 'Cost': 2.50},
{'Name': 'Filip', 'Item Purchased': 'Spoon', 'Cost': 5.00}],
index=['Store 1', 'Store 1', 'Store 2'])
df
df['Date'] = ['December 1', 'January 1', 'mid-May']
df
df['Delivered'] = True
df
df['Feedback'] = ['Positive', None, 'Negative']
df
adf = df.reset_index()
adf['Date'] = pd.Series({0: 'December 1', 2: 'mid-May'})
adf
staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR'},
{'Name': 'Sally', 'Role': 'Course liasion'},
{'Name': 'James', 'Role': 'Grader'}])
staff_df = staff_df.set_index('Name')
student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business'},
{'Name': 'Mike', 'School': 'Law'},
{'Name': 'Sally', 'School': 'Engineering'}])
student_df = student_df.set_index('Name')
print(staff_df.head())
print()
print(student_df.head())
pd.merge(staff_df, student_df, how='outer', left_index=True, right_index=True)
pd.merge(staff_df, student_df, how='inner', left_index=True, right_index=True)
pd.merge(staff_df, student_df, how='left', left_index=True, right_index=True)
pd.merge(staff_df, student_df, how='right', left_index=True, right_index=True)
staff_df = staff_df.reset_index()
student_df = student_df.reset_index()
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name')
staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR', 'Location': 'State Street'},
{'Name': 'Sally', 'Role': 'Course liasion', 'Location': 'Washington Avenue'},
{'Name': 'James', 'Role': 'Grader', 'Location': 'Washington Avenue'}])
student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business', 'Location': '1024 Billiard Avenue'},
{'Name': 'Mike', 'School': 'Law', 'Location': 'Fraternity House #22'},
{'Name': 'Sally', 'School': 'Engineering', 'Location': '512 Wilson Crescent'}])
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name')
staff_df = pd.DataFrame([{'First Name': 'Kelly', 'Last Name': 'Desjardins', 'Role': 'Director of HR'},
{'First Name': 'Sally', 'Last Name': 'Brooks', 'Role': 'Course liasion'},
{'First Name': 'James', 'Last Name': 'Wilde', 'Role': 'Grader'}])
student_df = pd.DataFrame([{'First Name': 'James', 'Last Name': 'Hammond', 'School': 'Business'},
{'First Name': 'Mike', 'Last Name': 'Smith', 'School': 'Law'},
{'First Name': 'Sally', 'Last Name': 'Brooks', 'School': 'Engineering'}])
staff_df
student_df
pd.merge(staff_df, student_df, how='inner', left_on=['First Name','Last Name'], right_on=['First Name','Last Name'])
###Output
_____no_output_____
###Markdown
Idiomatic Pandas: Making Code Pandorable
###Code
import pandas as pd
df = pd.read_csv('census.csv')
df
(df.where(df['SUMLEV']==50)
.dropna()
.set_index(['STNAME','CTYNAME'])
.rename(columns={'ESTIMATESBASE2010': 'Estimates Base 2010'}))
df = df[df['SUMLEV']==50]
df.set_index(['STNAME','CTYNAME'], inplace=True)
df.rename(columns={'ESTIMATESBASE2010': 'Estimates Base 2010'})
import numpy as np
def min_max(row):
data = row[['POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']]
return pd.Series({'min': np.min(data), 'max': np.max(data)})
df.apply(min_max, axis=1)
import numpy as np
def min_max(row):
data = row[['POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']]
row['max'] = np.max(data)
row['min'] = np.min(data)
return row
df.apply(min_max, axis=1)
rows = ['POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']
df.apply(lambda x: np.max(x[rows]), axis=1)
###Output
_____no_output_____
###Markdown
Group by
###Code
import pandas as pd
import numpy as np
df = pd.read_csv('census.csv')
df = df[df['SUMLEV']==50]
df
%%timeit -n 10
for state in df['STNAME'].unique():
avg = np.average(df.where(df['STNAME']==state).dropna()['CENSUS2010POP'])
print('Counties in state ' + state + ' have an average population of ' + str(avg))
%%timeit -n 10
for group, frame in df.groupby('STNAME'):
avg = np.average(frame['CENSUS2010POP'])
print('Counties in state ' + group + ' have an average population of ' + str(avg))
df.head()
df = df.set_index('STNAME')
def fun(item):
if item[0]<'M':
return 0
if item[0]<'Q':
return 1
return 2
for group, frame in df.groupby(fun):
print('There are ' + str(len(frame)) + ' records in group ' + str(group) + ' for processing.')
df = pd.read_csv('census.csv')
df = df[df['SUMLEV']==50]
df.groupby('STNAME').agg({'CENSUS2010POP': np.average})
print(type(df.groupby(level=0)['POPESTIMATE2010','POPESTIMATE2011']))
print(type(df.groupby(level=0)['POPESTIMATE2010']))
(df.set_index('STNAME').groupby(level=0)['CENSUS2010POP']
.agg({'avg': np.average, 'sum': np.sum}))
(df.set_index('STNAME').groupby(level=0)['POPESTIMATE2010','POPESTIMATE2011']
.agg({'avg': np.average, 'sum': np.sum}))
(df.set_index('STNAME').groupby(level=0)['POPESTIMATE2010','POPESTIMATE2011']
.agg({'POPESTIMATE2010': np.average, 'POPESTIMATE2011': np.sum}))
###Output
_____no_output_____
###Markdown
Scales
###Code
df = pd.DataFrame(['A+', 'A', 'A-', 'B+', 'B', 'B-', 'C+', 'C', 'C-', 'D+', 'D'],
index=['excellent', 'excellent', 'excellent', 'good', 'good', 'good', 'ok', 'ok', 'ok', 'poor', 'poor'])
df.rename(columns={0: 'Grades'}, inplace=True)
df
df['Grades'].astype('category').head()
grades = df['Grades'].astype('category',
categories=['D', 'D+', 'C-', 'C', 'C+', 'B-', 'B', 'B+', 'A-', 'A', 'A+'],
ordered=True)
grades.head()
grades > 'C'
df = pd.read_csv('census.csv')
df = df[df['SUMLEV']==50]
df = df.set_index('STNAME').groupby(level=0)['CENSUS2010POP'].agg({'avg': np.average})
pd.cut(df['avg'],10)
###Output
_____no_output_____
###Markdown
Pivot Tables
###Code
#http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64
df = pd.read_csv('cars.csv')
df.head()
df.pivot_table(values='(kW)', index='YEAR', columns='Make', aggfunc=np.mean)
df.pivot_table(values='(kW)', index='YEAR', columns='Make', aggfunc=[np.mean,np.min], margins=True)
###Output
_____no_output_____
###Markdown
Date Functionality in Pandas
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Timestamp
###Code
pd.Timestamp('9/1/2016 10:05AM')
###Output
_____no_output_____
###Markdown
Period
###Code
pd.Period('1/2016')
pd.Period('3/5/2016')
###Output
_____no_output_____
###Markdown
DatetimeIndex
###Code
t1 = pd.Series(list('abc'), [pd.Timestamp('2016-09-01'), pd.Timestamp('2016-09-02'), pd.Timestamp('2016-09-03')])
t1
type(t1.index)
###Output
_____no_output_____
###Markdown
PeriodIndex
###Code
t2 = pd.Series(list('def'), [pd.Period('2016-09'), pd.Period('2016-10'), pd.Period('2016-11')])
t2
type(t2.index)
###Output
_____no_output_____
###Markdown
Converting to Datetime
###Code
d1 = ['2 June 2013', 'Aug 29, 2014', '2015-06-26', '7/12/16']
ts3 = pd.DataFrame(np.random.randint(10, 100, (4,2)), index=d1, columns=list('ab'))
ts3
ts3.index = pd.to_datetime(ts3.index)
ts3
pd.to_datetime('4.7.12', dayfirst=True)
###Output
_____no_output_____
###Markdown
Timedeltas
###Code
pd.Timestamp('9/3/2016')-pd.Timestamp('9/1/2016')
pd.Timestamp('9/2/2016 8:10AM') + pd.Timedelta('12D 3H')
###Output
_____no_output_____
###Markdown
Working with Dates in a Dataframe
###Code
dates = pd.date_range('10-01-2016', periods=9, freq='2W-SUN')
dates
df = pd.DataFrame({'Count 1': 100 + np.random.randint(-5, 10, 9).cumsum(),
'Count 2': 120 + np.random.randint(-5, 10, 9)}, index=dates)
df
df.index.weekday_name
df.diff()
df.resample('M').mean()
df['2017']
df['2016-12']
df['2016-12':]
df.asfreq('W', method='ffill')
import matplotlib.pyplot as plt
%matplotlib inline
df.plot()
###Output
_____no_output_____ |
Course 2: CNN/Week 2 - Question Cat Vs Dog.ipynb | ###Markdown
NOTE: In the cell below you MUST use a batch size of 10 (batch_size=10) for the train_generator and the validation_generator. Using a batch size greater than 10 will exceed memory limits on the Coursera platform
###Code
TRAINING_DIR = "/tmp/cats-v-dogs/training/"
train_datagen = ImageDataGenerator(
rescale = 1/255,
rotation_range = 40, #range - 0-100
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True,
fill_mode = "nearest"
)
train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
batch_size=10,
class_mode='binary',
target_size=(150, 150))
VALIDATION_DIR = "/tmp/cats-v-dogs/testing/"
validation_datagen = ImageDataGenerator(rescale=1.0/255)
validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
batch_size=10,
class_mode='binary',
target_size=(150, 150))
# Expected Output:
# Found 2700 images belonging to 2 classes.
# Found 300 images belonging to 2 classes.
history = model.fit_generator(train_generator,
epochs=2,
verbose=1,
validation_data=validation_generator)
# PLOT LOSS AND ACCURACY
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['acc']
val_acc=history.history['val_acc']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r', "Training Accuracy")
plt.plot(epochs, val_acc, 'b', "Validation Accuracy")
plt.title('Training and validation accuracy')
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r', "Training Loss")
plt.plot(epochs, val_loss, 'b', "Validation Loss")
plt.title('Training and validation loss')
# Desired output. Charts with training and validation metrics. No crash :)
###Output
_____no_output_____ |
Astronomy Picture of Day/Apod.ipynb | ###Markdown
Astronomy Picture of the Day Astronomy Picture of the Day is an API provided by NASA and Michigan Technological University. According to the APOD website, "Each day a different image or photograph of our universe is featured, along with a brief explanation written by a professional astronomer." Importing required Modules
###Code
import requests
from pprint import pprint
from IPython.display import Image
###Output
_____no_output_____
###Markdown
An API key is required to access the APOD API, which you can get from the link below
link = https://api.nasa.gov/
###Code
key = 'your_api_key_here'
###Output
_____no_output_____
###Markdown
API Endpoint and parameters
###Code
url = 'https://api.nasa.gov/planetary/apod?api_key='
main_url = url + key
# The data for which you want the picture
params = {'date':'2020-12-25'}
print(main_url)
###Output
https://api.nasa.gov/planetary/apod?api_key=your_api_key_here
###Markdown
Making a request
###Code
r = requests.get(main_url,params=params)
data = r.json()
data
###Output
_____no_output_____
###Markdown
APOD Information
###Code
date = data['date']
info = data['explanation']
img_url = data['url']
print(date)
pprint(info)
###Output
2020-12-25
('Orion always seems to come up sideways on northern winter evenings. Those '
'familiar stars of the constellation of the Hunter are caught above the trees '
'in this colorful night skyscape. Not a star at all but still visible to eye, '
"the Great Nebula of Orion shines below the Hunter's belt stars. The camera's "
"exposure reveals the stellar nursery's faint pinkish glow. Betelgeuse, giant "
"star at Orion's shoulder, has the color of warm and cozy terrestrial "
'lighting, but so does another familiar stellar giant, Aldebaran. Alpha star '
'of the constellation Taurus the Bull, Aldebaran anchors the recognizable '
'V-shape traced by the Hyades Cluster toward the top of the starry frame.')
###Markdown
Downloading & Displaying Image
###Code
r = requests.get(img_url)
with open('astro-pic.jpg','wb') as f:
f.write(r.content)
Image('astro-pic.jpg')
###Output
_____no_output_____ |
apk_analysis/data_analysis/cdv_plugins.ipynb | ###Markdown
Import Dataset
###Code
df_api = pd.read_csv("../db/cdv/cordova_API.csv")
df_permission = pd.read_csv("../db/cdv/cordova_PERMISSION.csv")
# df_feature = pd.read_csv("../db/fcordova/eatures.csv")
df_api.columns
l_api = list(df_api.columns)
l_permission = df_permission.columns
l_api
"successCallback" in l_api
df_api
df_api = df_api.dropna()
df_api = df_api.reset_index(drop=True)
df_api
###Output
_____no_output_____
###Markdown
Analyse API calls The occurrences of functions detected for each plugin in each APK
###Code
df_plugins_only = df_api.drop(columns=["apk_name"])
df_plugins_only
###Output
_____no_output_____
###Markdown
The occurrence of plugins for the entire dataset
###Code
total_apk = df_plugins_only.shape[0]
print(f"Total APKs: {total_apk}")
df_cnt = df_plugins_only.astype(bool).sum(axis=0).sort_values(ascending=False)
df_cnt
# percentage of apks using each plugin
df_pct = df_cnt.apply(lambda x: round(x/total_apk*100, 2))
df_pct
plt.figure(figsize=(14, 8))
sns.set(font_scale=1.5) # font size 2
sns_pct = sns.barplot(x=df_pct.values, y=df_pct.index)
# sns_pct.set_xticklabels(sns_pct.get_xticklabels(), rotation=45, horizontalalignment='right')
sns_pct.set_xticks(range(0, 101, 10))
plt.xlabel("")
plt.ylabel("Plugin")
plt.title(f"Plugin Usage for {total_apk} APKs")
for p in sns_pct.patches:
# print(p)
sns_pct.annotate(
"{:.1%}".format(p.get_width()/100),
(p.get_width(), p.get_y() + p.get_height()),
fontsize=15,
color='black',
xytext=(2, 5),
textcoords='offset points')
plt.show()
###Output
_____no_output_____
###Markdown
Heatmap Heatmap for Entire database
###Code
df_plugins_only_T = df_plugins_only.T # transpose
plt.figure(figsize=(20,10))
sns.set(font_scale=2) # font size 2
ax = sns.heatmap(df_plugins_only_T)
plt.title("The occurances of funcitons detected for each plugin in each APK")
###Output
_____no_output_____
###Markdown
Heatmap for a small subset of the dataset
###Code
# select a set of apks, originial
set_num = 20
plt.figure(figsize=(20,10))
sns.set(font_scale=2) # font size 2
ax = sns.heatmap(df_plugins_only_T.iloc[:, :set_num])
plt.title("The occurances of funcitons detected for each plugin in each APK")
# select a set of apks, heatmap with annotation
plt.figure(figsize=(20,10))
sns.set(font_scale=2) # font size 2
ax = sns.heatmap(df_plugins_only_T.iloc[:, :set_num], annot=True)
###Output
_____no_output_____
###Markdown
Heatmap without media and contacts
###Code
df_plugins_media = df_api.drop(columns=["apk_name", "media", "contacts"])
df_plugins_media_T = df_plugins_media.T
# select a set of apks, heatmap with annotation
plt.figure(figsize=(20,10))
sns.set(font_scale=2) # font size 2
ax = sns.heatmap(df_plugins_media_T.iloc[:, :set_num], annot=True)
###Output
_____no_output_____ |
examples/volatility_scaling.ipynb | ###Markdown
Loading and preprocessing the data
###Code
prices = pd.read_excel("factors/russia/monthlyprice.xlsx", index_col=0, parse_dates=True)
pe = pd.read_excel("factors/russia/PE.xlsx", index_col=0, parse_dates=True)
avg_volume = pd.read_excel("factors/russia/betafilter.xlsx", index_col=0, parse_dates=True)
index = pd.read_excel("factors/russia/imoex.xlsx", index_col=0, parse_dates=True)
prices, pe, avg_volume, index = pqr.utils.replace_with_nan(prices, pe, avg_volume, index)
###Output
_____no_output_____
###Markdown
Building the value factor and the benchmark
###Code
universe = pqr.Universe(prices)
universe.filter(avg_volume >= 10_000_000)
preprocessor = [
pqr.Filter(universe.mask),
pqr.LookBackMean(3),
pqr.Hold(3),
]
value = pqr.Factor(pe, "less", preprocessor)
benchmark = pqr.Benchmark.from_index(index["IMOEX"], name="IMOEX")
###Output
_____no_output_____
###Markdown
Constructing a portfolio of the top 50% of companies by the value factor
###Code
q05 = pqr.fm.Quantiles(0, 0.5)
portfolio = pqr.Portfolio(
universe,
longs=q05(value),
allocation_strategy=pqr.EqualWeights(),
name="Top 50%"
)
###Output
_____no_output_____
###Markdown
Let's look at its returns and basic statistics.
###Code
summary = pqr.dash.Dashboard(
pqr.dash.Table(
pqr.metrics.MeanReturn(annualizer=1, statistics=True),
pqr.metrics.Volatility(annualizer=1),
pqr.metrics.SharpeRatio(rf=0),
pqr.metrics.MeanExcessReturn(benchmark),
pqr.metrics.Alpha(benchmark, statistics=True),
pqr.metrics.Beta(benchmark),
),
pqr.dash.Graph(pqr.metrics.CompoundedReturns(), benchmark=benchmark, figsize=(16, 9)),
)
summary([portfolio])
###Output
_____no_output_____
###Markdown
Let's try volatility scaling
###Code
(pqr.metrics.TrailingVolatility()(portfolio) * 100).plot()
plt.title("Trailing volatility (12 months)")
plt.xlabel("Date")
plt.ylabel("Volatility, %")
plt.grid();
###Output
_____no_output_____
###Markdown
The portfolio volatility looks decent, but it is clear that in periods of high volatility (especially 2008) the portfolio loses to the benchmark. Let's try to fix this by scaling by volatility.
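The scaling below follows the usual volatility-targeting idea: at each date the weights are multiplied by the ratio of a target volatility to the trailing realized volatility, roughly$$w_t^{scaled} = w_t \cdot \frac{\sigma_{target}}{\hat{\sigma}_t},$$where $\hat{\sigma}_t$ is the trailing 12-month annualized volatility estimated in the class below; this formula is only a sketch of the intent, since the actual rescaling is delegated to `pqr.ScalingByFactor`.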
###Code
class VolatilityScaling:
def __init__(self, universe: pqr.Universe, target: float = 0.1):
self.universe = universe
self.target = target
def __call__(self, positions: pd.DataFrame) -> pd.DataFrame:
# compute the portfolio returns
portfolio_returns = self.universe(positions)
# compute the volatility of the portfolio returns (trailing 12 months, annualized)
volatility = portfolio_returns.rolling(12).std(ddof=1).iloc[12:] * np.sqrt(12)
# build the factor matrix (duplicate the volatility value across columns)
w, vol = pqr.utils.align(positions, volatility)
volatility_factor_values = np.ones_like(w) * vol.to_numpy()[:, np.newaxis]
volatility_factor = pd.DataFrame(
volatility_factor_values,
index=w.index,
columns=w.columns
)
scaler = pqr.ScalingByFactor(
factor=pqr.Factor(volatility_factor, better="less"),
target=self.target
)
return scaler(positions)
portfolio_scaled = pqr.Portfolio(
universe,
longs=q05(value),
allocation_strategy=[pqr.EqualWeights(), VolatilityScaling(universe, 0.15)],
name="Top 50% scale"
)
###Output
_____no_output_____
###Markdown
Let's look at the resulting portfolio leverage.
###Code
portfolio_scaled.positions.sum(axis=1).plot()
plt.title("Scaled Leverage")
plt.xlabel("Date")
plt.ylabel("Leverage")
plt.grid();
summary([portfolio_scaled])
###Output
_____no_output_____
###Markdown
It got worse, because in 2008 the leverage had to be reduced too late, so the market rebound was missed (although the drawdown was captured very well), while in 2017, at extremely low portfolio volatility, the leverage was increased very strongly, which led to large losses. Let's try to limit the leverage.
###Code
portfolio_scaled_limits = pqr.Portfolio(
universe,
longs=q05(value),
allocation_strategy=[pqr.EqualWeights(), VolatilityScaling(universe, 0.15), pqr.LeverageLimits(0.8, 1.5)],
name="Top 50% scaled with limits"
)
###Output
_____no_output_____
###Markdown
You can see that we now do not allow the portfolio to be less than 80% invested, while capping the leverage at 1.5.
###Code
portfolio_scaled_limits.positions.sum(axis=1).plot()
plt.title("Scaled Leverage")
plt.xlabel("Date")
plt.ylabel("Leverage")
plt.grid();
###Output
_____no_output_____
###Markdown
But it did not get much better: although the wild portfolio volatility of 2017 is gone, the underperformance after 2008 has not gone anywhere.
###Code
summary([portfolio, portfolio_scaled, portfolio_scaled_limits])
###Output
_____no_output_____ |
examples/09. Computing CG Properties.ipynb | ###Markdown
09. Computing coarse-grained molecular features This notebook shows how to compute pairwise distances, angles and dihedrals between CG beads given a CG mapping. The CG mapping used in this example is generated by the [DSGPM](https://github.com/rochesterxugroup/DSGPM) model. You need MDAnalysis and NetworkX in your working environment to run this example.
###Code
import hoomd
import hoomd.htf as htf
import tensorflow as tf
import MDAnalysis as mda
import numpy as np
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
import warnings
warnings.filterwarnings('ignore')
###Output
2 Physical GPUs, 2 Logical GPUs
###Markdown
In this example we read from a trajectory file to compute coarse-grained (CG) bond distances, angles and dihedrals. Here, we use two MDAnalysis universes, with and without hydrogens, because the mapping model used to compute the CG mappings (the DSGPM model) only maps the heavy atoms of a given molecule. Hence, we have to add the missing hydrogen atoms to the corresponding CG beads. Read frames from the trajectory
###Code
universe = mda.Universe('ex9_segA.pdb','ex9_segA.trr')
avg_cgr = tf.keras.metrics.MeanTensor()
avg_cga = tf.keras.metrics.MeanTensor()
avg_cgd = tf.keras.metrics.MeanTensor()
directory = os.getcwd()
jfile = os.path.join(directory,'ex9_cgmap_segA.json')
#mda universe without H's
uxh = mda.Universe(os.path.join(directory,'ex9_segA_xH.pdb'))
#mda universe with H's
uh = mda.Universe(os.path.join(directory,'ex9_segA.pdb'))
for inputs, ts in htf.iter_from_trajectory(16, universe, r_cut=10, start=400, end=700):
cg_fts = []
r_tensor = []
a_tensor = []
d_tensor = []
box = inputs[2]
#get CG bead indices that make bonds, angles, dihedrals and
#CG coordinates
cg_fts = htf.compute_cg_graph(DSGPM=True,infile=jfile,group_atoms=True,
u_no_H=uxh, u_H=uh)
for i in range(len(cg_fts[0])):
cg_r = htf.mol_bond_distance(CG = True, cg_positions = cg_fts[-1],
b1=cg_fts[0][i][0],b2=cg_fts[0][i][1],box=box)
r_tensor.append(cg_r)
for j in range(len(cg_fts[1])):
cg_a = htf.mol_angle(CG= True, cg_positions=cg_fts[-1],
b1=cg_fts[1][j][0],b2=cg_fts[1][j][1],b3=cg_fts[1][j][2],box=box)
a_tensor.append(cg_a)
for k in range(len(cg_fts[2])):
cg_d = htf.mol_dihedral(CG=True,cg_positions=cg_fts[-1],b1=cg_fts[2][k][0],
b2=cg_fts[2][k][1],b3=cg_fts[2][k][2],b4=cg_fts[2][k][3],box=box)
d_tensor.append(cg_d)
avg_cgr.update_state(r_tensor)
avg_cga.update_state(a_tensor)
avg_cgd.update_state(d_tensor)
cgR = avg_cgr.result().numpy()
cgD = avg_cgd.result().numpy()*180./np.pi
cgA = avg_cga.result().numpy()*180./np.pi
print('CG pairwise distances:',cgR,'\n')
print('CG angles:',cgA,'\n')
print('CG dihedral angles:',cgD)
###Output
CG pairwise distances: [ 5.4447026 1.1312671 6.8375177 2.9382892 2.4656532 4.4416947
3.199694 4.2150507 3.5845404 2.153627 7.9029765 3.8829455
6.7589035 6.4774413 2.255304 4.924929 15.143286 ]
CG angles: [ 57.06865 75.22357 83.657074 113.90926 30.8918 61.174572
40.556293 27.594091 50.535973 149.74725 46.7441 91.21376
44.42922 157.15317 45.61479 121.53312 140.93109 90.67879
51.733078 156.72841 ]
CG dihedral angles: [ 61.196575 177.25443 4.7860584 111.41965 176.07312 133.15497
84.99461 135.7665 147.13869 4.834345 168.7402 124.28182
175.61597 21.146255 163.78894 32.634514 9.021241 175.17809
10.565324 7.1954145]
###Markdown
Application to multiple molecules Note that the above computation is only applied to one molecule in the system. If a user has multiple copies of a molecule, the calculation of the indices of CG beads forming bonds, angles and dihedrals must be applied to all molecules. Let's assume 2 copies of the above molecule are available in the system. Each molecule has 18 CG beads. We can obtain the indices as follows, applying the outputs from `compute_cg_graph` to both molecules.
###Code
r_ids,a_ids,d_ids = htf.mol_features_multiple(bnd_indices=cg_fts[0],ang_indices=cg_fts[1],
dih_indices=cg_fts[2],molecules=2,beads=18)
# For example here are the CG bead indices involved in making angles
print('angles in molecule 1: ',a_ids[:20])
print('\n angles in molecule 2:',a_ids[20:])
# Now the same calculation with mol_bond_distance,mol_angle and mol_dihedral can be used
# to calculate CG bond distances, angles and dihedrals
###Output
angles in molecule 1: [[ 0 1 2]
[ 1 2 3]
[ 2 3 5]
[ 3 5 4]
[ 3 5 6]
[ 4 5 6]
[ 5 6 7]
[ 5 6 8]
[ 6 8 10]
[ 7 6 8]
[ 8 10 9]
[ 8 10 11]
[ 9 10 11]
[10 11 12]
[11 12 13]
[12 13 15]
[13 15 14]
[13 15 16]
[14 15 16]
[15 16 17]]
angles in molecule 2: [[18 19 20]
[19 20 21]
[20 21 23]
[21 23 22]
[21 23 24]
[22 23 24]
[23 24 25]
[23 24 26]
[24 26 28]
[25 24 26]
[26 28 27]
[26 28 29]
[27 28 29]
[28 29 30]
[29 30 31]
[30 31 33]
[31 33 32]
[31 33 34]
[32 33 34]
[33 34 35]]
|
NLP_t5_trivia.ipynb | ###Markdown
Copyright 2019 The T5 AuthorsLicensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2019 The T5 Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
Fine-Tuning the Text-To-Text Transfer Transformer (T5) for Context-Free Trivia _Or: What does T5 know?_*The following tutorial guides you through the process of fine-tuning a pre-trained T5 model, evaluating its accuracy, and using it for prediction, all on a free Google Cloud TPU.* BackgroundT5 was introduced in the paper [_Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer_](https://arxiv.org/abs/1910.10683). In this paper, we provide a comprehensive picture of how we pre-trained a standard text-to-text Transformer model on a large text corpus, achieving state-of-the-art results on many NLP tasks after fine-tuning.We pre-trained T5 on a mixture of supervised and unsupervised tasks with the majority of data coming from an unlabeled dataset we developed called [C4](https://www.tensorflow.org/datasets/catalog/c4). C4 is based on a massive scrape of the web produced by [Common Crawl](https://commoncrawl.org). Loosely speaking, pre-training on C4 ideally gives T5 an understanding of natural language in addition to general world knowledge. How can we assess what T5 knows?As the name implies, T5 is a text-to-text model, which enables us to train it on arbitrary tasks involving a textual input and output. As we showed in our paper, a huge variety of NLP tasks can be cast in this format, including translation, summarization, and even classification and regression tasks.One way to use this text-to-text framework is on question-answering problems, where the model is fed some context along with a question and is trained to predict the question's answer. For example, we might feed the model the text from the Wikipedia article about [Hurricane Connie](https://en.wikipedia.org/wiki/Hurricane_Connie) along with the question "On what date did Hurricane Connie occur?" and train the model to predict the answer "August 3rd, 1955".In this notebook, we'll be training T5 on a variant of this task which we call **context-free question answering (QA)**. In context-free QA, we feed the model a question *without any context* and train it to predict the answer. Since the model doesn't receive any context, the primary way it can learn to answer these questions is based on the "knowledge" it obtained during pre-training. We don't expect T5 to contain super specific information, so we will be focusing on two question-answering datasets which largely include trivia questions (i.e. facts about well-known subjects). [Similar](https://arxiv.org/abs/1909.01066) [investigations](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) have recently been done on BERT and GPT-2.T5 was not pre-trained on context-free QA, so in this notebook we'll first create two new tasks and then use the [`t5`](https://github.com/google-research/text-to-text-transfer-transformer) library to fine-tune, evaluate, and obtain predictions from T5. In the end, T5's performance on this context-free trivia QA can give us a sense of what kind (and how much) information T5 managed to learn during pre-training. Caveats* While we provide instructions for running on a [Cloud TPU](https://cloud.google.com/tpu/) via Colab for free, a [Google Cloud Storage (GCS)](http://console.cloud.google.com/storage) bucket is required for storing model parameters and data. The [GCS free tier](https://cloud.google.com/free/) provides 5 GB of storage, which should be enough to train the `large` model and smaller but not the `3B` or `11B` parameter models.
You can use part of your initial $300 credit to get more space.* The Cloud TPU provided by Colab (a `v2-8`) does not have enough memory to fine-tune the `11B` parameter model. For this model, you will need to fine-tune inside of a GCP instance (see [README](https://github.com/google-research/text-to-text-transfer-transformer/)). Set Up Train on TPU 1. Create a Cloud Storage bucket for your data and model checkpoints at http://console.cloud.google.com/storage, and fill in the `BASE_DIR` parameter in the following form. There is a [free tier](https://cloud.google.com/free/) if you do not yet have an account. 1. On the main menu, click Runtime and select **Change runtime type**. Set "TPU" as the hardware accelerator. 1. Run the following cell and follow instructions to: * Set up a Colab TPU running environment * Verify that you are connected to a TPU device * Upload your credentials to TPU to access your GCS bucket
###Code
import datetime
import functools
import json
import os
import pprint
import random
import string
import sys
import tensorflow as tf
BASE_DIR = "" #@param { type: "string" }
if not BASE_DIR:
raise ValueError("You must enter a BASE_DIR.")
DATA_DIR = os.path.join(BASE_DIR, "data")
MODELS_DIR = os.path.join(BASE_DIR, "models")
ON_CLOUD = True
if ON_CLOUD:
assert "COLAB_TPU_ADDR" in os.environ, "ERROR: Not connected to a TPU runtime; please see the first cell in this notebook for instructions!"
TPU_ADDRESS = "grpc://" + os.environ["COLAB_TPU_ADDR"]
TPU_TOPOLOGY = "2x2"
print("TPU address is", TPU_ADDRESS)
from google.colab import auth
auth.authenticate_user()
with tf.Session(TPU_ADDRESS) as session:
print('TPU devices:')
pprint.pprint(session.list_devices())
# Upload credentials to TPU.
with open('/content/adc.json', 'r') as f:
auth_info = json.load(f)
tf.contrib.cloud.configure_gcs(session, credentials=auth_info)
# Now credentials are set for all future sessions on this TPU.
#@title Install and import required packages
if ON_CLOUD:
!pip install -qU t5
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import t5
import tensorflow as tf
import tensorflow_datasets as tfds
import time
# Improve logging.
from contextlib import contextmanager
import logging as py_logging
if ON_CLOUD:
tf.get_logger().propagate = False
py_logging.root.setLevel('INFO')
@contextmanager
def tf_verbosity_level(level):
og_level = tf.logging.get_verbosity()
tf.logging.set_verbosity(level)
yield
tf.logging.set_verbosity(og_level)
###Output
_____no_output_____
###Markdown
Creating new Tasks and Mixture Two core components of the T5 library are `Task` and `Mixture` objects.A `Task` is a dataset along with preprocessing functions and evaluation metrics. A `Mixture` is a collection of `Task` objects along with a mixing rate or a function defining how to compute a mixing rate based on the properties of the constituent `Tasks`.For this example, we will fine-tune the model to do context-free trivia question answering. Natural Questions[Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) is a challenging corpus for open-domain QA. Each example includes a question along with an entire Wikipedia article that may or may not contain its answer. The goal is to produce the correct answer given this context. In our case, we will be ignoring the provided context in hopes that the model will learn to find the answers from the world knowledge it has acquired during pre-training.Since the raw data splits are stored as JSONL files, we will first need to convert them to TSV format to make them parseable in TensorFlow. We will also take the opportunity to drop information we will not be using, remove questions with multiple answers, and to do a bit of cleaning of the text.
###Code
import gzip
import json
# Public directory of Natural Questions data on GCS.
NQ_JSONL_DIR = "gs://natural_questions/v1.0-simplified/"
NQ_SPLIT_FNAMES = {
"train": "simplified-nq-train.jsonl.gz",
"validation": "nq-dev-all.jsonl.gz"
}
nq_counts_path = os.path.join(DATA_DIR, "nq-counts.json")
nq_tsv_path = {
"train": os.path.join(DATA_DIR, "nq-train.tsv"),
"validation": os.path.join(DATA_DIR, "nq-validation.tsv")
}
def nq_jsonl_to_tsv(in_fname, out_fname):
def extract_answer(tokens, span):
"""Reconstruct answer from token span and remove extra spaces."""
start, end = span["start_token"], span["end_token"]
ans = " ".join(tokens[start:end])
# Remove incorrect spacing around punctuation.
ans = ans.replace(" ,", ",").replace(" .", ".").replace(" %", "%")
ans = ans.replace(" - ", "-").replace(" : ", ":").replace(" / ", "/")
ans = ans.replace("( ", "(").replace(" )", ")")
ans = ans.replace("`` ", "\"").replace(" ''", "\"")
ans = ans.replace(" 's", "'s").replace("s ' ", "s' ")
return ans
count = 0
with tf.io.gfile.GFile(in_fname, "rb") as infile,\
tf.io.gfile.GFile(out_fname, "w") as outfile:
for line in gzip.open(infile):
ex = json.loads(line)
# Remove any examples with more than one answer.
if len(ex['annotations'][0]['short_answers']) != 1:
continue
# Questions in NQ do not include a question mark.
question = ex["question_text"] + "?"
answer_span = ex['annotations'][0]['short_answers'][0]
# Handle the two document formats in NQ (tokens or text).
if "document_tokens" in ex:
tokens = [t["token"] for t in ex["document_tokens"]]
elif "document_text" in ex:
tokens = ex["document_text"].split(" ")
answer = extract_answer(tokens, answer_span)
# Write this line as <question>\t<answer>
outfile.write("%s\t%s\n" % (question, answer))
count += 1
tf.logging.log_every_n(
tf.logging.INFO,
"Wrote %d examples to %s." % (count, out_fname),
1000)
return count
if tf.io.gfile.exists(nq_counts_path):
# Used cached data and counts.
tf.logging.info("Loading NQ from cache.")
num_nq_examples = json.load(tf.io.gfile.GFile(nq_counts_path))
else:
# Create TSVs and get counts.
tf.logging.info("Generating NQ TSVs.")
num_nq_examples = {}
for split, fname in NQ_SPLIT_FNAMES.items():
num_nq_examples[split] = nq_jsonl_to_tsv(
os.path.join(NQ_JSONL_DIR, fname), nq_tsv_path[split])
json.dump(num_nq_examples, tf.io.gfile.GFile(nq_counts_path, "w"))
###Output
I1206 00:11:00.169766 248738 <ipython-input-3-45e03d923fbf>:51] Loading NQ from cache.
###Markdown
Next, we define a function to load the TSV data as a `tf.data.Dataset` in TensorFlow.
###Code
def nq_dataset_fn(split, shuffle_files=False):
# We only have one file for each split.
del shuffle_files
# Load lines from the text file as examples.
ds = tf.data.TextLineDataset(nq_tsv_path[split])
# Split each "<question>\t<answer>" example into (question, answer) tuple.
ds = ds.map(
functools.partial(tf.io.decode_csv, record_defaults=["", ""],
field_delim="\t", use_quote_delim=False),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# Map each tuple to a {"question": ... "answer": ...} dict.
ds = ds.map(lambda *ex: dict(zip(["question", "answer"], ex)))
return ds
print("A few raw validation examples...")
for ex in tfds.as_numpy(nq_dataset_fn("validation").take(5)):
print(ex)
###Output
A few raw validation examples...
{'question': b'what do the 3 dots mean in math?', 'answer': b'the therefore sign (\xe2\x88\xb4) is generally used before a logical consequence, such as the conclusion of a syllogism'}
{'question': b'who is playing the halftime show at super bowl 2016?', 'answer': b'Coldplay with special guest performers Beyonc\xc3\xa9 and Bruno Mars'}
{'question': b'who won the 2017 sports personality of the year?', 'answer': b'Mo Farah'}
{'question': b'where was the world economic forum held this year?', 'answer': b'Davos, a mountain resort in Graub\xc3\xbcnden, in the eastern Alps region of Switzerland'}
{'question': b'who has made the most premier league appearances?', 'answer': b'Gareth Barry'}
###Markdown
Now, we write a preprocess function to convert the examples in the `tf.data.Dataset` into a text-to-text format, with both `inputs` and `targets` fields. The preprocessor also normalizes the text by lowercasing it and removing quotes since the answers are sometimes formatted in odd ways. Finally, we prepend 'trivia question:' to the inputs so that the model knows what task it's trying to solve.
###Code
def trivia_preprocessor(ds):
def normalize_text(text):
"""Lowercase and remove quotes from a TensorFlow string."""
text = tf.strings.lower(text)
text = tf.strings.regex_replace(text,"'(.*)'", r"\1")
return text
def to_inputs_and_targets(ex):
"""Map {"question": ..., "answer": ...}->{"inputs": ..., "targets": ...}."""
return {
"inputs":
tf.strings.join(
["trivia question: ", normalize_text(ex["question"])]),
"targets": normalize_text(ex["answer"])
}
return ds.map(to_inputs_and_targets,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
###Output
_____no_output_____
###Markdown
Finally, we put everything together to create a `Task`.
###Code
t5.data.TaskRegistry.add(
"nq_context_free",
# Supply a function which returns a tf.data.Dataset.
dataset_fn=nq_dataset_fn,
splits=["train", "validation"],
# Supply a function which preprocesses text from the tf.data.Dataset.
text_preprocessor=[trivia_preprocessor],
# Use the same vocabulary that we used for pre-training.
sentencepiece_model_path=t5.data.DEFAULT_SPM_PATH,
# Lowercase targets before computing metrics.
postprocess_fn=t5.data.postprocessors.lower_text,
# We'll use accuracy as our evaluation metric.
metric_fns=[t5.evaluation.metrics.accuracy],
# Not required, but helps for mixing and auto-caching.
num_input_examples=num_nq_examples
)
###Output
_____no_output_____
###Markdown
Let's look at a few pre-processed examples from the validation set. Note they contain both the tokenized (integer) and plain-text inputs and targets.
###Code
nq_task = t5.data.TaskRegistry.get("nq_context_free")
ds = nq_task.get_dataset(split="validation", sequence_length={"inputs": 128, "targets": 32})
print("A few preprocessed validation examples...")
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
###Output
A few preprocessed validation examples...
{'inputs_plaintext': b'trivia question: what is the average height of a chinese man?', 'inputs': array([22377, 822, 10, 125, 19, 8, 1348, 3902, 13,
3, 9, 3, 1436, 1496, 15, 388, 58, 1]), 'targets_plaintext': b'167.1 cm (5 ft 6 in)', 'targets': array([ 898, 25059, 2446, 9209, 3, 89, 17, 431, 16,
61, 1])}
{'inputs_plaintext': b'trivia question: what is the population of fayetteville north carolina?', 'inputs': array([22377, 822, 10, 125, 19, 8, 2074, 13, 3,
89, 9, 63, 1954, 1420, 3457, 443, 12057, 9,
58, 1]), 'targets_plaintext': b'204,408 in 2013', 'targets': array([ 3, 26363, 6, 2445, 927, 16, 2038, 1])}
{'inputs_plaintext': b'trivia question: capital of georgia the former soviet republic 7 letters?', 'inputs': array([22377, 822, 10, 1784, 13, 873, 1677, 23, 9,
8, 1798, 78, 5914, 17, 20237, 489, 5487, 58,
1]), 'targets_plaintext': b'tbilisi', 'targets': array([ 3, 17, 3727, 159, 23, 1])}
{'inputs_plaintext': b'trivia question: who plays jill bigelow in line of duty?', 'inputs': array([22377, 822, 10, 113, 4805, 3, 354, 1092, 600,
15, 3216, 16, 689, 13, 5461, 58, 1]), 'targets_plaintext': b'polly walker', 'targets': array([ 5492, 63, 3, 24063, 1])}
{'inputs_plaintext': b'trivia question: when did we first put a rover on mars?', 'inputs': array([22377, 822, 10, 116, 410, 62, 166, 474, 3,
9, 3, 52, 1890, 30, 8113, 58, 1]), 'targets_plaintext': b'january 2004', 'targets': array([ 3, 7066, 76, 1208, 4406, 1])}
###Markdown
**Note**: Instead of defining `nq_dataset_fn` and above, we also could have used the `TextLineTask` class with the `parse_tsv` preprocessor for equivalent results as follows:```pyt5.data.TaskRegistry.add( "nq_context_free", t5.data.TextLineTask, split_to_filepattern=nq_tsv_path, text_preprocessor=[ functools.partial( t5.data.preprocessors.parse_tsv, field_names=["question", "answer"]), trivia_preprocessor ], postprocess_fn=t5.data.postprocessors.lower_text, metric_fns=[t5.evaluation.metrics.accuracy], num_input_examples=num_nq_examples)``` TriviaQAA second dataset we will use is related to [TriviaQA](https://nlp.cs.washington.edu/triviaqa/). It is also intended for reading comprehension, but, once again, we will modify the task here by ignoring the provided context.Since the dataset has been imported into [TensorFlow Datasets (TFDS)](https://www.tensorflow.org/datasets/catalog/trivia_qa), we can let it handle the data parsing for us. It will take a few minutes to download and preprocess the first time, but we'll be able to access it instantly from our data directory afterward.
###Code
ds = tfds.load(
"trivia_qa/unfiltered.nocontext",
data_dir=DATA_DIR,
# Download data locally for preprocessing to avoid using GCS space.
download_and_prepare_kwargs={"download_dir": "./downloads"})
print("A few raw validation examples...")
for ex in tfds.as_numpy(ds["validation"].take(2)):
print(ex)
###Output
A few raw validation examples...
{'answer': {'aliases': array([b'Torquemada (disambiguation)', b'Torquemada'], dtype=object), 'matched_wiki_entity_name': b'', 'normalized_aliases': array([b'torquemada', b'torquemada disambiguation'], dtype=object), 'normalized_matched_wiki_entity_name': b'', 'normalized_value': b'torquemada', 'type': b'WikipediaEntity', 'value': b'Torquemada'}, 'entity_pages': {'doc_source': array([], dtype=object), 'filename': array([], dtype=object), 'title': array([], dtype=object), 'wiki_context': array([], dtype=object)}, 'question': b'In 1483, who was appointed the first grand inquisitor of the Spanish Inquisition?', 'question_id': b'qw_16011', 'question_source': b'http://www.quizwise.com/', 'search_results': {'description': array([], dtype=object), 'filename': array([], dtype=object), 'rank': array([], dtype=int32), 'search_context': array([], dtype=object), 'title': array([], dtype=object), 'url': array([], dtype=object)}}
{'answer': {'aliases': array([b'Austerlitz (disambiguation)', b'Austerlitz', b'AUSTERLITZ'],
dtype=object), 'matched_wiki_entity_name': b'', 'normalized_aliases': array([b'austerlitz', b'austerlitz disambiguation'], dtype=object), 'normalized_matched_wiki_entity_name': b'', 'normalized_value': b'austerlitz', 'type': b'WikipediaEntity', 'value': b'AUSTERLITZ'}, 'entity_pages': {'doc_source': array([], dtype=object), 'filename': array([], dtype=object), 'title': array([], dtype=object), 'wiki_context': array([], dtype=object)}, 'question': b'Which celebrated battle was fought near Brno on 2nd December 1805?', 'question_id': b'dpql_4053', 'question_source': b'https://derbyshirepubquizleague.wordpress.com/', 'search_results': {'description': array([], dtype=object), 'filename': array([], dtype=object), 'rank': array([], dtype=int32), 'search_context': array([], dtype=object), 'title': array([], dtype=object), 'url': array([], dtype=object)}}
###Markdown
As with Natural Questions, we need to preprocess the raw examples into `inputs` and `targets` features. We can reuse the `trivia_preprocessor` above, but first we need to convert the TriviaQA examples into the correct format, ignoring the fields we don't need for our task.We'll then define our `Task` and print out a few preprocessed examples from the validation set.Note that we do not need to specify the splits or number of examples since that information is provided by TFDS.
###Code
def tiviaqa_extract_qa(ds):
def exract_qa(ex):
return {
"question": ex["question"],
"answer": ex["answer"]["value"]
}
return ds.map(exract_qa, num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.add(
"triviaqa_context_free",
# A TfdsTask takes in a TFDS name instead of a tf.data.Dataset function.
t5.data.TfdsTask,
tfds_name="trivia_qa/unfiltered.nocontext:1.1.0",
tfds_data_dir=DATA_DIR,
sentencepiece_model_path=t5.data.DEFAULT_SPM_PATH,
text_preprocessor=[tiviaqa_extract_qa, trivia_preprocessor],
postprocess_fn=t5.data.postprocessors.lower_text,
metric_fns=[t5.evaluation.metrics.accuracy]
)
# Load and print a few examples.
triviaqa_task = t5.data.TaskRegistry.get("triviaqa_context_free")
ds = triviaqa_task.get_dataset(split="validation", sequence_length={"inputs": 128, "targets": 32})
print("A few preprocessed validation examples...")
for ex in tfds.as_numpy(ds.take(3)):
print(ex)
###Output
A few preprocessed validation examples...
{'inputs_plaintext': b'trivia question: what does a farrier do?', 'inputs': array([22377, 822, 10, 125, 405, 3, 9, 623, 6711,
103, 58, 1]), 'targets_plaintext': b'he shoes horses', 'targets': array([ 3, 88, 4439, 10235, 1])}
{'inputs_plaintext': b'trivia question: what is the name of the wooden panelled lining applied to a room', 'inputs': array([22377, 822, 10, 125, 19, 8, 564, 13, 8,
5726, 2952, 1361, 3, 9424, 2930, 12, 3, 9,
562, 1]), 'targets_plaintext': b'wainscotting', 'targets': array([ 3, 210, 13676, 10405, 53, 1])}
{'inputs_plaintext': b'trivia question: how did gus grissom, ed white and roger b. chaffee die in 1967?', 'inputs': array([22377, 822, 10, 149, 410, 3, 1744, 7, 19116,
10348, 6, 3, 15, 26, 872, 11, 3, 3822,
49, 3, 115, 5, 3, 3441, 7398, 15, 67,
16, 18148, 58, 1]), 'targets_plaintext': b'burned to death', 'targets': array([16644, 12, 1687, 1])}
###Markdown
Dataset MixtureWe now create a `Mixture` from the above `Tasks`, which we will fine-tune on.There are different ways to automatically set the rate (for example, based on the number of examples using `rate_num_examples`), but we will just hardcode an equal mixture for simplicity.
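If you want to try the example-proportional alternative mentioned above, a sketch is shown below. This is a sketch only: it assumes that `rate_num_examples` is exposed as `t5.data.rate_num_examples` in the installed version of the library and that `default_rate` accepts a callable; check your version's API if it differs.

```py
# Sketch (assumed API): mix tasks in proportion to their number of examples
# instead of the hardcoded equal rate used below.
# (Call t5.data.MixtureRegistry.remove("trivia_all_proportional") first if re-running.)
t5.data.MixtureRegistry.add(
    "trivia_all_proportional",
    ["nq_context_free", "triviaqa_context_free"],
    default_rate=t5.data.rate_num_examples  # assumed to exist in this t5 version
)
```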
###Code
t5.data.MixtureRegistry.remove("trivia_all")
t5.data.MixtureRegistry.add(
"trivia_all",
["nq_context_free", "triviaqa_context_free"],
default_rate=1.0
)
###Output
_____no_output_____
###Markdown
Transferring to new TasksWe are now ready to fine-tune one of the pre-trained T5 models on our new mixture of context-free QA tasks.First, we'll instantiate a `Model` object using the model size of your choice. Note that larger models are slower to train and use but will likely achieve higher accuracy. You also may be able to increase accuracy by training longer with more `FINETUNE_STEPS` below. Caveats* Due to its memory requirements, you will not be able to train the `11B` parameter model on the TPU provided by Colab. Instead, you will need to fine-tune inside of a GCP instance (see [README](https://github.com/google-research/text-to-text-transfer-transformer/)).* Due to the checkpoint size, you will not be able use the 5GB GCS free tier for the `3B` parameter models. You will need at least 25GB of space, which you can purchase with your $300 of initial credit on GCP.* While `large` can achieve decent results, it is recommended that you fine-tune at least the `3B` parameter model. Define Model
###Code
MODEL_SIZE = "3B" #@param["small", "base", "large", "3B", "11B"]
# Public GCS path for T5 pre-trained model checkpoints
BASE_PRETRAINED_DIR = "gs://t5-data/pretrained_models"
PRETRAINED_DIR = os.path.join(BASE_PRETRAINED_DIR, MODEL_SIZE)
MODEL_DIR = os.path.join(MODELS_DIR, MODEL_SIZE)
if ON_CLOUD and MODEL_SIZE == "3B":
tf.logging.warn(
"The `3B` model is too large to use with the 5GB GCS free tier. "
"Make sure you have at least 25GB on GCS before continuing."
)
elif ON_CLOUD and MODEL_SIZE == "11B":
raise ValueError(
"The `11B` parameter is too large to fine-tune on the `v2-8` TPU "
"provided by Colab. Please comment out this Error if you're running "
"on a larger TPU."
)
# Set parallelism and batch size to fit on v2-8 TPU (if possible).
# Limit number of checkpoints to fit within 5GB (if possible).
model_parallelism, train_batch_size, keep_checkpoint_max = {
"small": (1, 256, 16),
"base": (2, 128, 8),
"large": (8, 64, 4),
"3B": (8, 16, 1),
"11B": (8, 16, 1)}[MODEL_SIZE]
tf.io.gfile.makedirs(MODEL_DIR)
# The models from our paper are based on the Mesh Tensorflow Transformer.
model = t5.models.MtfModel(
model_dir=MODEL_DIR,
tpu=TPU_ADDRESS,
tpu_topology=TPU_TOPOLOGY,
model_parallelism=model_parallelism,
batch_size=train_batch_size,
sequence_length={"inputs": 128, "targets": 32},
learning_rate_schedule=0.003,
save_checkpoints_steps=5000,
keep_checkpoint_max=keep_checkpoint_max if ON_CLOUD else None,
iterations_per_loop=100,
)
###Output
_____no_output_____
###Markdown
Before we continue, let's load a [TensorBoard](https://www.tensorflow.org/tensorboard) visualizer so that we can monitor our progress. The page should automatically update as fine-tuning and evaluation proceed.
###Code
if ON_CLOUD:
%reload_ext tensorboard
import tensorboard as tb
tb.notebook.start("--logdir " + MODELS_DIR)
###Output
_____no_output_____
###Markdown
Fine-tuneWe are now ready to fine-tune our model. This will take a while (~2 hours with default settings), so please be patient! The larger the model and more `FINETUNE_STEPS` you use, the longer it will take.Don't worry, you can always come back later and increase the number of steps, and it will automatically pick up where you left off.
###Code
FINETUNE_STEPS = 25000 #@param {type: "integer"}
model.finetune(
mixture_or_task_name="trivia_all",
pretrained_model_dir=PRETRAINED_DIR,
finetune_steps=FINETUNE_STEPS
)
###Output
_____no_output_____
###Markdown
Expected Results [SPOILER ALERT] Below are the expected accuracies on the Natural Questions (NQ) and TriviaQA validation sets for various model sizes. The full 11B model produces the exact text of the answer 34.5% and 25.1% of the time on TriviaQA and NQ, respectively. The 3B parameter model, which is the largest that can be trained with a free Cloud TPU in Colab, achieves 29.7% and 23.7%, respectively. In reality, the model performs better than this since requiring an exact match is too strict a metric, as you'll see in the examples below. This helps to explain why the model appears to perform better on TriviaQA than NQ, as the latter tends to include more long-form answers extracted from the context. EvaluateWe now evaluate on the validation sets of the tasks in our mixture. Accuracy results will be logged and added to the TensorBoard above.
###Code
# Use a larger batch size for evaluation, which requires less memory.
model.batch_size = train_batch_size * 4
model.eval(
mixture_or_task_name="trivia_all",
checkpoint_steps="all"
)
###Output
_____no_output_____
###Markdown
Let's look at a few random predictions from the validation sets. Note that we measure accuracy based on an *exact match* of the predicted answer and the ground-truth answer. As a result, some of the answers are semantically correct but are counted wrong by the exact match score.
###Code
def print_random_predictions(task_name, n=10):
"""Print n predictions from the validation split of a task."""
# Grab the dataset for this task.
ds = t5.data.TaskRegistry.get(task_name).get_dataset(
split="validation",
sequence_length={"inputs": 128, "targets": 32},
shuffle=False)
def _prediction_file_to_ckpt(path):
"""Extract the global step from a prediction filename."""
return int(path.split("_")[-2])
# Grab the paths of all logged predictions.
prediction_files = tf.io.gfile.glob(
os.path.join(
MODEL_DIR,
"validation_eval/%s_*_predictions" % task_name))
# Get most recent prediction file by sorting by their step.
latest_prediction_file = sorted(
prediction_files, key=_prediction_file_to_ckpt)[-1]
# Collect (inputs, targets, prediction) from the dataset and predictions file
results = []
with tf.io.gfile.GFile(latest_prediction_file) as preds:
for ex, pred in zip(tfds.as_numpy(ds), preds):
results.append((tf.compat.as_text(ex["inputs_plaintext"]),
tf.compat.as_text(ex["targets_plaintext"]),
pred.strip()))
print("<== Random predictions for %s using checkpoint %s ==>\n" %
(task_name,
_prediction_file_to_ckpt(latest_prediction_file)))
for inp, tgt, pred in random.choices(results, k=10):
print("Input:", inp)
print("Target:", tgt)
print("Prediction:", pred)
print("Counted as Correct?", tgt == pred)
print()
print_random_predictions("triviaqa_context_free")
print_random_predictions("nq_context_free")
###Output
<== Random predictions for triviaqa_context_free using checkpoint 1100000 ==>
Input: trivia question: jackpot counter, ghost drop and drop zone are all terms used in which uk television game show?
Target: tipping point
Prediction: countdown
Counted as Correct? False
Input: trivia question: cursed to sail around the cape of good hope, which ghost ship is the theme of an 1841 opera by richard wagner?
Target: the flying dutchman
Prediction: baron von munchhausen
Counted as Correct? False
Input: trivia question: at what fret are found the same notes as the open strings, but an octave higher, on a standard guitar?
Target: 12th
Prediction: 12th
Counted as Correct? True
Input: trivia question: how many legs does a ladybird have?
Target: six
Prediction: six
Counted as Correct? True
Input: trivia question: in which city’s harbour was the ship queen elizabeth ravaged by fire in 1972?
Target: hong kong
Prediction: hong kong
Counted as Correct? True
Input: trivia question: what are the three largest islands in the world beginning with the letter n
Target: new guinea, north island
Prediction: new zealand; namibia and nova scotia
Counted as Correct? False
Input: trivia question: lenny bruce was in what field of entertainment in the 1960s?
Target: standup comedy
Prediction: comedy
Counted as Correct? False
Input: trivia question: in which sea are the cayman islands?
Target: caribbean
Prediction: caribbean
Counted as Correct? True
Input: trivia question: what is an astronomical event that occurs when one celestial object moves into the shadow of another?
Target: eclipse
Prediction: lunar eclipse
Counted as Correct? False
Input: trivia question: which tv cartoon series was about a meek janitor who led a double life as an unfortunate super-detective?
Target: hong kong fuey
Prediction: scooby-doo
Counted as Correct? False
<== Random predictions for nq_context_free using checkpoint 1100000 ==>
Input: trivia question: who is known as the super fast boy in the series the icredible?
Target: dashiell robert parr/dash
Prediction: dash
Counted as Correct? False
Input: trivia question: who played santa in the santa clause movies?
Target: tim allen
Prediction: tim allen
Counted as Correct? True
Input: trivia question: who has sold more albums kelly or carrie?
Target: carrie underwood
Prediction: carrie underwood
Counted as Correct? True
Input: trivia question: when did sweet caroline start at red sox games?
Target: at least 1997
Prediction: 2004
Counted as Correct? False
Input: trivia question: who plays mr wilson in dennis the menace?
Target: joseph sherrard kearns
Prediction: joseph sherrard kearns
Counted as Correct? True
Input: trivia question: who had a baby at 100 in the bible?
Target: abraham
Prediction: sarah
Counted as Correct? False
Input: trivia question: who is doing 2018 super bowl half time show?
Target: justin timberlake
Prediction: justin timberlake
Counted as Correct? True
Input: trivia question: what is the official slogan for the 2018 winter olympics?
Target: passion. connected.
Prediction: every step counts
Counted as Correct? False
Input: trivia question: ray charles hit the road jack album name?
Target: ray charles greatest hits
Prediction: the road jack album
Counted as Correct? False
Input: trivia question: who sang the theme song to step by step?
Target: jesse frederick james conaway
Prediction: frederick and teresa james
Counted as Correct? False
###Markdown
PredictNow that we have fine-tuned the model, we can feed T5 arbitrary questions and have it predict the answers!There is a significant amount of overhead in initializing the model so this may take a few minutes to run each time even though the prediction itself is quite fast.To avoid this overhead, you might consider exporting a `SavedModel` and running it on [Cloud ML Engine](https://cloud.google.com/ml-engine/).
###Code
question_1 = "Where is the Google headquarters located?" #@param {type:"string"}
question_2 = "What is the most populous country in the world?" #@param {type:"string"}
question_3 = "Who are the 4 members of The Beatles?" #@param {type:"string"}
question_4 = "How many teeth do humans have?" #@param {type:"string"}
questions = [question_1, question_2, question_3, question_4]
now = time.time()
# Write out the supplied questions to text files.
predict_inputs_path = os.path.join(MODEL_DIR, "predict_inputs_%d.txt" % now)
predict_outputs_path = os.path.join(MODEL_DIR, "predict_outputs_%d.txt" % now)
# Manually apply preprocessing by prepending "triviaqa question:".
with tf.io.gfile.GFile(predict_inputs_path, "w") as f:
for q in questions:
f.write("trivia question: %s\n" % q.lower())
# Ignore any logging so that we only see the model's answers to the questions.
with tf_verbosity_level('ERROR'):
model.batch_size = len(questions)
model.predict(
input_file=predict_inputs_path,
output_file=predict_outputs_path,
# Select the most probable output token at each step.
temperature=0,
)
# The output filename will have the checkpoint appended so we glob to get
# the latest.
prediction_files = sorted(tf.io.gfile.glob(predict_outputs_path + "*"))
print("\nPredictions using checkpoint %s:\n" % prediction_files[-1].split("-")[-1])
with tf.io.gfile.GFile(prediction_files[-1]) as f:
for q, a in zip(questions, f):
if q:
print("Q: " + q)
print("A: " + a)
print()
###Output
Predictions using checkpoint 1100000:
Q: Where is the Google headquarters located?
A: mountain view, california
Q: What is the most populous country in the world?
A: china
Q: Who are the 4 members of The Beatles?
A: john lennon, paul mccartney, george harrison and ringo starr
Q: How many teeth do humans have?
A: 30
###Markdown
ExportAs mentioned in the previous section, exporting a [`SavedModel`](https://www.tensorflow.org/guide/saved_model) can be useful for improving performance during inference or allowing your model to be deployed on a variety of platforms (e.g., TFLite, TensorFlow.js, TensorFlow Serving, or TensorFlow Hub).
###Code
model.export(
os.path.join(MODEL_DIR, "export"),
checkpoint_step=-1, # use most recent
beam_size=1, # no beam search
temperature=1.0, # sample according to predicted distribution
)
###Output
_____no_output_____ |
Notebooks/easy_track/pydata/NumPy.ipynb | ###Markdown
NumPy---In this tutorial, we are going to learn about NumPy and how to use it.NumPy, or simply Numpy, is Python's linear algebra library that allows working with large, multi-dimensional arrays and matrices, along with a collection of high-level mathematical functions to operate on these arrays.One of the most important PyData libraries (used for Data Science), Numpy is used as the base for almost all other PyData libraries, hence it is very important that you understand how to work with NumPy.One of the advantages that Numpy has over Python's built-in lists is its bindings with the C programming language, which allow mathematical functions to be performed at much faster speeds compared to the built-in lists. Now that we have a basic idea of what Numpy is, let us start working with it. Importing NumPy---In any project in which you want to use Numpy, add the following line of code to import Numpy.
###Code
import numpy as np # importing numpy as np means in order to use numpy, you can simply type np
###Output
_____no_output_____
###Markdown
In this tutorial, we will focus on some of the most important fundamental data types of Numpy (vectors, arrays, matrices), and we will also be working with various number generation methods. NumPy Arrays---Numpy arrays are the fundamental data type of Numpy, and are the primary way Numpy works with data. The array object in NumPy is called ndarray.Numpy arrays essentially come in two flavors: * Vectors, and * Matrices Vectors are 1-d arrays. On the other hand, matrices are 2-d arrays of dimension __m x n__ where m, n >= 1. This means that a matrix can still have only one row or one column.Let's have a look at the different ways you can create a Numpy array. A. How to Create NumPy Arrays---__1. Numpy Arrays from a Python List:__We can create a numpy array by directly converting a list or list of lists. For this, we use the array() method. The following is the syntax:> numpy_arr = np.array(python_list)To create an ndarray, we can pass a list, tuple or any array-like object into the array() method, and it will be converted into an ndarray object.
###Code
# creating a 1D python list
arr = [1, 2, 3, 4, 5, 6]
arr
# creating a numpy array from the python list
np.array(arr)
# creating a 2D python list
matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
matrix
# creating a numpy array from the python list
np.array(matrix)
###Output
_____no_output_____
###Markdown
B. Built-in Methods:---Here, we will see a bunch of built-in Numpy methods for a bunch of different array operations. (a). arangeThis method returns evenly spaced values within a given range. The following is the syntax:> np.arange(start, stop, step), where, * start -> Starting value * stop -> Upper bound of the range (exclusive) * step -> The spacing between consecutive values (step size)
###Code
# an array with all values in range 0-10 (excluding 10)
np.arange(0, 10)
# an array with values in range 0-10 (excluding 10) and a step size of 2
np.arange(0, 10, 2)
###Output
_____no_output_____
###Markdown
(b). zerosThe zeros() method returns an array of zeros. The following is the syntax- > np.zeros(shape), where,* shape -> Shape of the array; tuple
###Code
# array of 0s
np.zeros((10))
# matrix of 0s
np.zeros((3,4))
###Output
_____no_output_____
###Markdown
(c). onesThe ones() method returns an array of ones. The following is the syntax- > np.ones(shape), where,* shape -> Shape of the array; tuple
###Code
# vector of 1s
np.ones((5))
# matrix of 1s
np.ones((4,2))
###Output
_____no_output_____
###Markdown
(d). linspaceThis method returns evenly spaced numbers over a specific interval. The following is the syntax-> np.linspace(start, stop, num = 50), where,* start -> Starting value* stop -> Stopping value* num -> Number of evenly-spaced values between the start and stop index.
###Code
np.linspace(1,2,3)
np.linspace(0, 100)
###Output
_____no_output_____
###Markdown
(e). eyeThis method returns an identity matrix (i.e., a matrix that has 1 on the diagonal and 0 for all other elements). The following is the syntax-> np.eye(shape), where,* shape -> Shape of the matrix
###Code
# creating a square identity matrix
np.eye(4)
# creating an arbitrary identity matrix
np.eye(4,3)
###Output
_____no_output_____
###Markdown
C. NumPy Random Methods---Numpy's random methods allow us to randomly generate and work with random integers and floating point numbers. In this section, we will cover some of the most important random methods. (a). randThe rand() method returns an array of the given shape and populates it with random samples from a ***uniform distribution*** over the range ``[0, 1)``. The following is the syntax-> np.random.rand(b0, b1, b2....), where,* bn -> Size of nth dimension
###Code
# vector with random values between [0,1)
np.random.rand(5)
# matrix with random values between [0,1)
np.random.rand(3, 2)
###Output
_____no_output_____
###Markdown
(b). randnThe randn() method, just like rand(), also returns an array with randomly generated values. The only difference is that the values are samples from a ***standard normal distribution***. The following is the syntax-> np.random.randn(b0, b1, b2....), where,* bn -> Size of nth dimension
###Code
# vector with random values from a standard normal distribution
np.random.randn(5)
# matrix with random values from a standard normal distribution
np.random.randn(3,4)
###Output
_____no_output_____
###Markdown
(c). randintThe randint() method returns an array of random integers in the specified range. The following is the syntax-> np.random.randint(low, high, size), where,* low -> Left limit of the range (inclusive)* high -> Right limit of the range (exclusive)* size -> Shape of the array
###Code
# vector of 5 random integers between 0-10
np.random.randint(0, 10, 5)
# matrix of random integers between 10-50 of the shape 3x4
np.random.randint(10, 50, (3,4))
###Output
_____no_output_____
###Markdown
These were one of the most useful of the Numpy random class. To check for more methods, refer [here](https://docs.scipy.org/doc/numpy-1.14.0/reference/routines.random.html).Now, let us have a look at some of the important array methods in Numpy. D. Array Attributes and methods--- (a). reshapeThe reshape() method is used to, as the name suggest, change the shape of the array. While the data in the array remains the same, we can cast it to a different shape using the reshape method. The following is the syntax-> numpy_arr.reshape(shape), where, * shape -> New shape that you want to cast the array to; tupleOne thing to be noted is that the total number of elements in the original array and the dimensions of the new array should be the same.
###Code
# creating a new array
arr = np.arange(0,20)
# reshaping the array
arr.reshape((4,5))
###Output
_____no_output_____
###Markdown
Now, let's see what happens when you enter a reshape size that does not match the number of elements in the initial array. *__Hint__: We will run into an error!*
###Code
# Number of elements in arr = 20
# reshape size = 3,4 -> 3 * 4 = 12
arr.reshape(3,4)
###Output
_____no_output_____
###Markdown
(b). max & minAs the name suggests, the max() method returns the largest element in the array. The syntax is-> arr.max()The min() method returns the smallest element in the array. The syntax is-> arr.min()
###Code
# getting the largest element
arr = np.array([1,5,2,4,8,3])
arr.max()
# getting the smallest element
arr = np.array([1,5,2,4,8,3])
arr.min()
###Output
_____no_output_____
###Markdown
(c). argmax & argminThe argmax() method returns the index of the largest element in the array. The syntax is-> arr.argmax()The argmin() method returns the index of the smallest element in the array. The syntax is-> arr.argmin()
###Code
arr = np.array([6,3,1,9,6,4,5,2,3,9])
# getting the index of the largest element in the array
arr.argmax()
# getting the index of the smallest element in the array
arr.argmin()
###Output
_____no_output_____
###Markdown
(d). shapeThe shape attribute \[not a method] returns the shape of the array.
###Code
# shape of a vector
arr = np.array([5,3,1,7,2,4,3,5,9])
arr.shape
# shape of a matrix
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
arr.shape
###Output
_____no_output_____
###Markdown
(e). dtypeThe dtype attribute returns the data type of the elements in the array.
###Code
arr = np.array(['a', 'b', 'c', 'd'])
arr.dtype
arr2 = np.array([1.3, 4.5, 1.7])
arr2.dtype
arr3 = np.array([1, 2, 3, 4, 5])
arr3.dtype
###Output
_____no_output_____
###Markdown
E. NumPy Indexing Operations---Now that we know how to create an array, in this section we will have a look at the different indexing, slicing and selection operations in Numpy. (a). Bracket Indexing and SelectionJust like Python lists, you can use bracket indexing and slicing to select one or more elements from a numpy array.
###Code
arr = np.arange(10, 20)
arr
# selecting element at a certain index
arr[5]
# selecting a range of elements
arr[1:6]
# selecting a range of elements
arr[:8]
arr2 = np.random.rand(4, 5)
arr2
# selecting a 2d slice
arr2[1:3, 2:4]
# selecting a all rows but restricting columns
arr2[:, 2:4]
###Output
_____no_output_____
###Markdown
(b). BroadcastingBroadcasting is a unique property of Numpy arrays that sets them apart from Python lists: it lets you assign a single value to an entire array or slice at once.
###Code
arr = np.random.randint(20, 100, (6,7))
arr
# broadcasting a slice
arr[:3, :4] = 0
arr
###Output
_____no_output_____
###Markdown
**__NOTE__: If you assign an array or a slice of an array to another variable, this new variable will not be a separate array but will actually act as an alias (a view) of the original array (or of the slice of the original array). The new variable references the old array's location in memory.** What this means is that any change made through the new variable will be reflected in the old array as well. This is known as referencing, and the idea behind it is to save storage space.
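If you want to verify whether two arrays share the same underlying memory, NumPy ships a helper for exactly this. A minimal sketch (`np.shares_memory` is part of NumPy itself; the variable names are just for illustration):

```py
import numpy as np

a = np.arange(10, 20)
view = a[2:4]          # a slice is a view into a's memory
copy = a[2:4].copy()   # an explicit copy owns its own memory

print(np.shares_memory(a, view))  # True: modifying view also modifies a
print(np.shares_memory(a, copy))  # False: copy is independent of a
```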
###Code
# original array
arr = np.arange(10, 20)
arr
# referenced array generated from broadcasting
ref_arr = arr[2:4]
ref_arr
# broadcasting all values of ref_arr as 0
ref_arr[:] = 0
ref_arr
# checking if the values changed in arr as well
arr
###Output
_____no_output_____
###Markdown
As we can see, the values changed in the old array too. Now, here's how you get a dereferenced copy of a Numpy array. (c). copyThe copy method allows you to create a dereferenced copy of a Numpy array. The syntax is as follows-> new_arr = arr.copy()
###Code
# original array
arr = np.arange(40, 50)
# dereferenced copy of slice of old array
new_arr = arr.copy()
# broadcasting values of new_arr
new_arr[:] = 0
# checking if that changed old arr
arr
###Output
_____no_output_____
###Markdown
As expected, no change to the old array. (d). Fancy IndexingFancy indexing can be a bit confusing as it is not quite "Pythonic". Fancy indexing is used to select entire rows of a matrix, in any order, by passing a list of row indices.
###Code
arr = np.random.randint(0, 100, (5,6))
arr
# fancy indexing row 1
arr[[1]]
# fancy indexing row 2,3,4
arr[[2,3,4]]
# fancy indexing row 4,2,3 (in this exact order)
arr[[4,2,3]]
###Output
_____no_output_____
###Markdown
F. NumPy Selection Operations---This is in a way similar to indexing; however, you can select elements on the basis of conditions (boolean masks). Let's see how you can do it.
###Code
arr = np.random.randint(0, 50, (10,))
arr
# checks if value at each index is > 15
arr > 15
# selection on the basis of condition
arr[arr > 15]
###Output
_____no_output_____
###Markdown
Now, let us have a look at the arithmetic operations that you can perform using Numpy. G. Arithmetic Operations--- (a). add, subtract, multiply, divide, exponentiation> * Addition: arr1 + arr2> * Subtraction: arr1 - arr2> * Multiplication: arr1 * arr2> * Division: arr1 / arr2> * Exponentiation: arr ** n, where n is a number
###Code
arr1 = np.arange(10)
print(arr1)
arr2 = np.ones(10)
print(arr2)
arr3 = np.ones(5)
print(arr3)
# addition
print(arr1 + arr2)
# will throw an error as the arrays have to be the same shape
print(arr1 + arr3)
# subtraction
arr1 - arr2
# multiplication
arr1 * arr1
# division
print(arr1 / 2)
# 0's division by 0 will give nan (not a number) value
print(arr1 / arr1)
# exponentiation
print(arr1 ** 2)
print(arr1 ** 3.2)
###Output
[ 0 1 4 9 16 25 36 49 64 81]
[0.00000000e+00 1.00000000e+00 9.18958684e+00 3.36347354e+01
8.44485063e+01 1.72466208e+02 3.09089322e+02 5.06190194e+02
7.76046882e+02 1.13129542e+03]
###Markdown
(b). sqrt methodThe sqrt method returns square root of each element in the array. The syntax is-> np.sqrt(arr)
###Code
np.sqrt(arr1)
###Output
_____no_output_____
###Markdown
(c). exp methodThe exp method computes the exponential (e^n) of each element in the array. The syntax is-> np.exp(arr)
###Code
np.exp(arr1)
###Output
_____no_output_____
###Markdown
(d). sin methodThe sin method returns sine of each element (sin(n)) in the array. The syntax is-> np.sin(arr)
###Code
np.sin(arr1)
###Output
_____no_output_____
###Markdown
(e). log methodThe log method returns the natural logarithm of each element (log(n)) in the array. The syntax is-> np.log(arr)
###Code
np.log(arr1)
###Output
_____no_output_____ |
notebooks/exp144_analysis.ipynb | ###Markdown
Exp 144 analysisSee `./informercial/Makefile` for experimental details.
###Code
import os
import numpy as np
from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from infomercial.exp import meta_bandit
from infomercial.exp import epsilon_bandit
from infomercial.exp import beta_bandit
from infomercial.exp import softbeta_bandit
from infomercial.local_gym import bandit
from infomercial.exp.meta_bandit import load_checkpoint
import gym
def plot_meta(env_name, result, tie_threshold=0.0):
    """Plots! `tie_threshold` is taken as an argument (default 0.0) so the
    dashed reference lines below always have a defined value."""
# episodes, actions, scores_E, scores_R, values_E, values_R, ties, policies
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_E = result["scores_E"]
scores_R = result["scores_R"]
values_R = result["values_R"]
values_E = result["values_E"]
ties = result["ties"]
policies = result["policies"]
# -
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best, np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# Policy
policies = np.asarray(policies)
episodes = np.asarray(episodes)
plt.subplot(grid[1, 0])
m = policies == 0
plt.scatter(episodes[m], policies[m], alpha=.4, s=2, label="$\pi_E$", color="purple")
m = policies == 1
plt.scatter(episodes[m], policies[m], alpha=.4, s=2, label="$\pi_R$", color="grey")
plt.ylim(-.1, 1+.1)
plt.ylabel("Controlling\npolicy")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# score
plt.subplot(grid[2, 0])
plt.scatter(episodes, scores_E, color="purple", alpha=0.4, s=2, label="E")
plt.plot(episodes, scores_E, color="purple", alpha=0.4)
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.plot(episodes, scores_R, color="grey", alpha=0.4)
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Score")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[3, 0])
plt.scatter(episodes, values_E, color="purple", alpha=0.4, s=2, label="$Q_E$")
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Ties
plt.subplot(grid[4, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
# Ties
plt.subplot(grid[5, 0])
plt.scatter(episodes, ties, color="black", alpha=.5, s=2, label="$\pi_{tie}$ : 1\n $\pi_\pi$ : 0")
plt.ylim(-.1, 1+.1)
plt.ylabel("Ties index")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
def plot_epsilon(env_name, result):
"""Plots!"""
# episodes, actions, scores_E, scores_R, values_E, values_R, ties, policies
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_R = result["scores_R"]
values_R = result["values_R"]
epsilons = result["epsilons"]
# -
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
for b in best:
plt.plot(episodes, np.repeat(b, np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# score
plt.subplot(grid[1, 0])
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.ylabel("Score")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[2, 0])
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# best
plt.subplot(grid[3, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
# Decay
plt.subplot(grid[4, 0])
plt.scatter(episodes, epsilons, color="black", alpha=.5, s=2)
plt.ylabel("$\epsilon_R$")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
def plot_critic(critic_name, env_name, result):
# -
env = gym.make(env_name)
best = env.best
# Data
critic = result[critic_name]
arms = list(critic.keys())
values = list(critic.values())
# Plotz
fig = plt.figure(figsize=(8, 3))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0])
plt.scatter(arms, values, color="black", alpha=.5, s=30)
plt.plot([best]*10, np.linspace(min(values), max(values), 10), color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Arm")
###Output
_____no_output_____
###Markdown
Load and process data
###Code
data_path ="/Users/qualia/Code/infomercial/data/"
exp_name = "exp144"
sorted_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_sorted.pkl"))
# print(sorted_params.keys())
best_params = sorted_params[0]
sorted_params
###Output
_____no_output_____
###Markdown
Performance of best parameters
###Code
env_name = 'BanditOneHigh10-v0'
num_episodes = 100
# Run w/ best params
result = epsilon_bandit(
env_name=env_name,
num_episodes=num_episodes,
lr_R=best_params["lr_R"],
epsilon=best_params["epsilon"],
epsilon_decay_tau=best_params["epsilon_decay_tau"],
seed_value=3436,
)
print(best_params)
plot_epsilon(env_name, result=result)
plot_critic('critic_R', env_name, result)
###Output
_____no_output_____
###Markdown
Sensitivity to parameter choices
###Code
total_Rs = []
eps = []
lrs_R = []
lrs_E = []
decays = []
trials = list(sorted_params.keys())
for t in trials:
total_Rs.append(sorted_params[t]['total_R'])
lrs_R.append(sorted_params[t]['lr_R'])
eps.append(sorted_params[t]['epsilon'])
decays.append(sorted_params[t]['epsilon_decay_tau'])
# Init plot
fig = plt.figure(figsize=(5, 18))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Do plots:
# Arm
plt.subplot(grid[0, 0])
plt.scatter(trials, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("total R")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.scatter(trials, lrs_R, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("lr_R")
_ = sns.despine()
plt.subplot(grid[2, 0])
plt.scatter(lrs_R, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("lrs_R")
plt.ylabel("total_Rs")
_ = sns.despine()
plt.subplot(grid[3, 0])
plt.scatter(eps, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("epsilon")
plt.ylabel("total_Rs")
_ = sns.despine()
plt.subplot(grid[4, 0])
plt.scatter(decays, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Tau")
plt.ylabel("total_Rs")
_ = sns.despine()
###Output
_____no_output_____
###Markdown
Parameter correlations
###Code
from scipy.stats import spearmanr
spearmanr(eps, lrs_R)
spearmanr(eps, total_Rs)
spearmanr(lrs_R, total_Rs)
###Output
_____no_output_____
###Markdown
Distributions of parameters
###Code
# Init plot
fig = plt.figure(figsize=(5, 6))
grid = plt.GridSpec(3, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(eps, color="black")
plt.xlabel("epsilon")
plt.ylabel("Count")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.hist(lrs_R, color="black")
plt.xlabel("lr_R")
plt.ylabel("Count")
_ = sns.despine()
###Output
_____no_output_____
###Markdown
Distribution of total reward
###Code
# Init plot
fig = plt.figure(figsize=(5, 2))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(total_Rs, color="black", bins=50)
plt.xlabel("Total reward")
plt.ylabel("Count")
# plt.xlim(0, 10)
_ = sns.despine()
###Output
_____no_output_____ |
PA#3_Clustering/PA#3_clustering.ipynb | ###Markdown
Programming Assignment 3: Clustering

Student Details
When submitting, fill in your full name, your student ID, and your NetID in this cell. Note that this is a markdown cell!
Student Full Name:
ID:
Team Mate Name:
ID:

Rules
1. Work is to be done in a team.
2. Any cheating, including plagiarism or unauthorized cooperation, will be reported to the corresponding UTA instance.
3. If you use any resource (books, internet), please make sure that you cite it.
4. Follow the given structure. Specifically, place all your tasks in THIS NOTEBOOK BUT IN SEPARATE BLOCKS. Then save this notebook as 'yourNetID_pa3.ipynb' and submit it.
5. Do not alter the dataset name.
6. Please don't ask for details specific to the project, such as "How do I plot XYZ?" or "What parameters should be used?".
7. A report is not required for this assignment. If you want to document a function or a process, just add a comment or use a markdown cell.
8. Please don't send images of your visualizations for verification before the submission deadline.

Assignment Details
The purpose of this assignment is to cluster adults using K-means clustering and Hierarchical Agglomerative clustering models, and to visualize the clusters for predicted and actual cluster labels. Your dataset is part of "Adult"; you can find more information here: https://archive.ics.uci.edu/ml/datasets/adult. The classification problem is whether a person earns more than $50,000 or not. You need to submit this IPython file after renaming it. Preprocessing will be needed, as most of the data is in string form and needs to be quantified.
###Code
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
###Output
_____no_output_____
###Markdown
Required Python Packages
###Code
# Import required Python packages here
#Seaborn,numpy,pandas,sklearn,matplotlib only
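# --- Hedged example (not part of the original skeleton): one possible set of
# imports for the allowed packages listed above; the aliases are assumptions
# and can be adjusted to whatever the rest of your solution uses.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn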
###Output
_____no_output_____
###Markdown
TASK 1: K-Means Clustering Task 1-a: Determine “k” value from the elbow method In this task, you will be using the elbow method to determine the optimal number of clusters for k-means clustering.We need some way to determine whether we are using the right number of clusters when using k-means clustering. One method to validate the number of clusters is the elbow method. The idea of the elbow method is to run k-means clustering on the dataset for a range of values of k (k will be from 1 to 10 in this task), and for each value of k calculate the sum of squared errors (SSE). Then, plot a line chart of the SSE for each value of k. If the line chart looks like an arm, then the "elbow" on the arm is the value of k that is the best. The idea is that we want a small SSE, but that the SSE tends to decrease toward 0 as we increase k (the SSE is 0 when k is equal to the number of data points in the dataset, because then each data point is a cluster, and there is no error between it and the center of its cluster). So our goal is to choose a small value of k that still has a low SSE, and the elbow usually represents where we start to have diminishing returns by increasing k.For this task, you need to perform the elbow method for k from 1 to 10 and plot a line chart of the SSE for each value of k, and determine the best k (the number of clusters). Note that you need to use the whole dataset in this task and you need to print your decision for k.
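As a hedged illustration (not the required solution), the sketch below assumes the preprocessed features sit in a variable named `X` (a hypothetical name) and uses scikit-learn's `KMeans`, whose `inertia_` attribute is the SSE of the fitted clustering:

```python
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

sse = []
k_values = range(1, 11)
for k in k_values:
    km = KMeans(n_clusters=k, random_state=0)  # k-means for this candidate k
    km.fit(X)                                  # X: preprocessed feature matrix (assumed to exist)
    sse.append(km.inertia_)                    # inertia_ is the sum of squared errors

plt.plot(list(k_values), sse, marker='o')
plt.xlabel('k (number of clusters)')
plt.ylabel('SSE')
plt.title('Elbow method')
plt.show()
```

Read the "elbow" off the resulting curve and print the chosen k explicitly, as the task requires.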
###Code
#########################begin code for Task 1-a
#########################end code for Task 1-a
###Output
_____no_output_____
###Markdown
Task 1-b: Visualization for K-Means Clustering In this task, you will be performing k-means clustering for k=2 and visualize the predicted training samples and actual training samples on scatter plots. Use 70% of the dataset for training and 30% of the dataset for testing. Perform kmeans for clustering samples in your training set. Use two subplots for visualizing the predicted training samples and actual training samples on two scatter plots.Since your dataset has multiple features(dimensions), you won't be able to plot your data on a scatter plot. Thus, you’re going to visualize your data with the help of one of the Dimensionality Reduction techniques, namely Principal Component Analysis (PCA). The idea in PCA is to find a linear combination of the two variables that contains most of the information. This new variable or “principal component” can replace the two original variables. You can easily apply PCA to your data with the help of scikit-learn.
###Code
###################begin code for Task 1-b-1: Split the dataset 70% for training and 30% for testing
### Important!!!
###################end code for Task 1-b-1
###################begin code for Task 1-b-2: Visualize the predicted training labels vs actual training labels
# Import PCA
from sklearn.decomposition import PCA
# Create the KMeans model
# Compute cluster centers and predict cluster index for each sample
# Model and fit the data to the PCA model
X_train_pca = None
# Visualize the predicted training labels vs actual training labels.
### scatter(x, y, your_data)
x = X_train_pca[:, 0]
y = X_train_pca[:, 1]
###################end code for Task 1-b-2
###Output
_____no_output_____
###Markdown
Now, you need to visualize the predicted testing labels versus actual testing labels. Use the trained model in previous step.
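Continuing the sketch above (hypothetical names `kmeans`, `X_test`, `y_test`), the test clusters come from `predict` on the already-fitted model:

```python
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Predict cluster ids for the held-out test samples with the trained k-means model
test_pred = kmeans.predict(X_test)

# Project the test data to 2-D for plotting, as in the skeleton below
X_test_pca = PCA(n_components=2).fit_transform(X_test)

fig, axes = plt.subplots(1, 2, figsize=(12, 5))
axes[0].scatter(X_test_pca[:, 0], X_test_pca[:, 1], c=test_pred, cmap='coolwarm', s=10)
axes[0].set_title('Predicted testing labels')
axes[1].scatter(X_test_pca[:, 0], X_test_pca[:, 1], c=y_test, cmap='coolwarm', s=10)
axes[1].set_title('Actual testing labels')
plt.show()
```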
###Code
###################begin code for Task 1-b-3: Visualize the predicted testing labels vs actual testing labels
# predict cluster index for each sample
# Model and fit the data to the PCA model
X_test_pca = None
# Visualize the predicted testing labels vs actual testing labels.
### scatter(x, y, your_data)
x = X_test_pca[:, 0]
y = X_test_pca[:, 1]
###################end code for Task 1-b-3
###Output
_____no_output_____
###Markdown
In this step, you need to provide the evaluation of your clustering model. Print out a confusion matrix.
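A minimal sketch, assuming the predicted and actual test labels from the previous steps (`test_pred`, `y_test`); note that k-means cluster ids are arbitrary, so the matrix may appear "flipped" relative to the true classes:

```python
from sklearn.metrics import confusion_matrix

print(confusion_matrix(y_test, test_pred))  # rows: actual classes, columns: predicted clusters
```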
###Code
###################begin code for Task 1-b-4: Print out a confusion matrix
###################end code for Task 1-b-4
###Output
_____no_output_____
###Markdown
TASK 2: Hierarchical Agglomerative Clustering Task 2-a: Find the best Hierarchical Agglomerative Clustering Model In this task, you will be performing Hierarchical Agglomerative clustering with different linkage methods (complete and average) and different similarity measures (cosine, euclidean, and manhattan) in order to find the best pair of linkage method and similarity measure. Use F1 score for evaluation and take n_clusters = 2.
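A hedged sketch of the search loop is shown below; it assumes `X_train`/`y_train` from Task 1-b (labels encoded 0/1) and passes a precomputed distance matrix to `AgglomerativeClustering`. Depending on your scikit-learn version, the keyword is `affinity='precomputed'` (older releases) or `metric='precomputed'` (newer releases), and since cluster ids are arbitrary you may need to flip the predicted labels before computing F1:

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import f1_score
from sklearn.metrics.pairwise import pairwise_distances

best = None
for linkage in ['complete', 'average']:
    for metric in ['cosine', 'euclidean', 'manhattan']:
        dist = pairwise_distances(X_train, metric=metric)       # pairwise distance matrix
        model = AgglomerativeClustering(n_clusters=2,
                                        affinity='precomputed',  # or metric='precomputed'
                                        linkage=linkage)
        pred = model.fit_predict(dist)
        score = f1_score(y_train, pred)
        print("F1-score for %s linkage + %s: %.3f" % (linkage, metric, score))
        if best is None or score > best[0]:
            best = (score, linkage, metric)
print("Best combination:", best)
```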
###Code
###################begin code for Task 2-a: Find the best linkage method and similarity measure
# Import AgglomerativeClustering
from sklearn.cluster import AgglomerativeClustering
# Import pairwise_distances for calculating pairwise distance matrix
from sklearn.metrics.pairwise import pairwise_distances
# Import f1_score
from sklearn.metrics import f1_score
## Calculate pairwise distance matrix for X_train
pdm_train = None
## Model and fit the training data to the AgglomerativeClustering model
## complete linkage + cosine
## Model and fit the training data to the AgglomerativeClustering model
## complete linkage + euclidean
## Model and fit the training data to the AgglomerativeClustering model
## complete linkage + manhattan
## Model and fit the training data to the AgglomerativeClustering model
## average linkage + cosine
## Model and fit the training data to the AgglomerativeClustering model
## average linkage + euclidean
## Model and fit the training data to the AgglomerativeClustering model
## average linkage + manhattan
print("F1-score for complete linkage + cosine", None)
print("F1-score for complete linkage + euclidean", None)
print("F1-score for complete linkage + manhattan", None)
print("F1-score for average linkage + cosine", None)
print("F1-score for average linkage + euclidean", None)
print("F1-score for average linkage + manhattan", None)
###################end code for Task 2-a
###Output
_____no_output_____
###Markdown
Task 2-b: Visualization for Hierarchical Agglomerative Clustering Find the best performed model from the previous step and use that model for visualizing the predicted training samples and actual training samples on scatter plots. Use PCA model for visualizing your data (use X_train_pca from Task 1-b-2).
###Code
###################begin code for Task 2-b: Visualize the predicted training labels vs actual training labels
# Visualize the predicted training labels versus actual training labels.
###################end code for Task 2-b
###Output
_____no_output_____
###Markdown
TASK 3: WEKA Visualization of K-means Clustering and Hierarchical Agglomerative Clustering Task 3-a : Visualize the k-means clustering using weka
###Code
###################start Task 3-a
###################end Task 3-a
###Output
_____no_output_____
###Markdown
Task 3-b : Visualize the hierarchical clustering using weka
###Code
###################start Task 3-b
###################end Task 3-b
###Output
_____no_output_____
###Markdown
(BONUS) TASK 4: Compare K-Means Clustering and Hierarchical Agglomerative Clustering Task 4-a: Visualize Clusters In this task, use the whole dataset for training the k-means and hierarchical agglomerative clustering models. Use the best model for agglomerative clustering. Visualize the predicted labels from k-means clustering and agglomerative clustering versus the actual labels. Basically, you need to plot three scatter plots as subplots.
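A hedged sketch, assuming `X`/`y` hold the whole preprocessed dataset and its actual labels, and using placeholder linkage/metric values that should be replaced by the best combination found in Task 2-a:

```python
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import pairwise_distances
import matplotlib.pyplot as plt

# fit_predict clusters the data and returns the cluster labels in one call
kmeans_labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)

dist = pairwise_distances(X, metric='cosine')                  # placeholder metric
agg_labels = AgglomerativeClustering(n_clusters=2,
                                     affinity='precomputed',   # or metric='precomputed'
                                     linkage='average').fit_predict(dist)

X_pca = PCA(n_components=2).fit_transform(X)
fig, axes = plt.subplots(1, 3, figsize=(16, 4))
for ax, labels, title in zip(axes,
                             [kmeans_labels, agg_labels, y],
                             ['K-means', 'Agglomerative', 'Actual']):
    ax.scatter(X_pca[:, 0], X_pca[:, 1], c=labels, cmap='coolwarm', s=8)
    ax.set_title(title)
plt.show()
```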
###Code
###################begin code for Task 4-a: Visualize the predicted training labels vs actual training labels
### Kmeans Clustering
# Model and fit the data to the Kmeans (use fit_predict : Performs clustering on X and returns cluster labels.)
### Agglomerative Clustering
# Calculate pairwise distance matrix for X
# Model and fit the data to the Agglomerative (use fit_predict : Performs clustering on X and returns cluster labels.)
### Visualize Clusters
# Model and fit the data to the PCA model
X_pca = None
# Visualize the predicted Kmeans labels versus the predicted Agglomerative labels versus Actual labels.
###################end code for Task 4-a
###Output
_____no_output_____
###Markdown
Task 4-b: Compare K-Means Clustering & Hierarchical Agglomerative Clustering Print out confusion matrices for k-means and agglomerative clustering. Also, compare precision, recall, and F1-score for both models. Type your reasoning.
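A minimal sketch, reusing the hypothetical `kmeans_labels`, `agg_labels`, and `y` from the previous sketch:

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

for name, pred in [('K-means', kmeans_labels), ('Agglomerative', agg_labels)]:
    print(name)
    print(confusion_matrix(y, pred))
    print("precision: %.3f  recall: %.3f  F1: %.3f" %
          (precision_score(y, pred), recall_score(y, pred), f1_score(y, pred)))
```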
###Code
###################begin code for Task 4-b
###################end code for Task 4-b
###Output
_____no_output_____ |
scripts/methane_debug/debug.ipynb | ###Markdown
init at answer
###Code
g = esp.Graph('C')
forcefield = esp.graphs.legacy_force_field.LegacyForceField(
"smirnoff99Frosst"
)
forcefield.parametrize(g)
from espaloma.data.md import MoleculeVacuumSimulation
simulation = MoleculeVacuumSimulation(
n_samples=100,
n_steps_per_sample=10,
)
simulation.run(g)
representation = esp.nn.baselines.FreeParameterBaseline(g_ref=g.heterograph)
for term in ['n2', 'n3']:
for param in ['k', 'eq']:
setattr(
representation, '%s_%s' % (term, param),
torch.nn.Parameter(
g.nodes[term].data[param + '_ref'].data
)
)
net = torch.nn.Sequential(
representation,
esp.mm.geometry.GeometryInGraph(),
esp.mm.energy.EnergyInGraph(), # predicted energy -> u
esp.mm.energy.EnergyInGraph(suffix='_ref') # reference energy -> u_ref,
)
optimizer = torch.optim.Adam(
net.parameters(),
0.1,
)
# optimizer = torch.optim.LBFGS(
# net.parameters(),
# 0.1,
# line_search_fn='strong_wolfe',
# )
list(net.named_parameters())
net(g.heterograph)
states = []
losses = []
def l():
net(g.heterograph)
loss = torch.nn.MSELoss()(
g.nodes['n2'].data['u_ref'],
g.nodes['n2'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
# loss.backward()
print(loss)
return loss
l()
# optimizer.step(l)
g.nodes['n2'].data['k']
for _ in range(100):
optimizer.zero_grad()
def l():
net(g.heterograph)
loss = torch.nn.MSELoss()(
g.nodes['n2'].data['u_ref'],
g.nodes['n2'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
loss.backward()
print(loss)
return loss
optimizer.step(l)
states.append(
{
'%s_%s' % (term, param): getattr(
net[0],
'%s_%s' % (term, param)
).detach().clone().numpy()
for term in ['n2'] for param in ['k', 'eq']
}
)
plt.plot(losses)
ks = np.array([state['n2_k'].flatten() for state in states])
eqs = np.array([state['n2_eq'].flatten() for state in states])
eqs.std(axis=0)
for idx in range(8):
plt.plot(np.diff(ks[:, idx]))
for idx in range(8):
plt.plot(eqs[:, idx])
g.nodes['n2'].data['eq_ref']
###Output
_____no_output_____
###Markdown
param gaussian noise
###Code
g = esp.Graph('C')
forcefield = esp.graphs.legacy_force_field.LegacyForceField(
"smirnoff99Frosst"
)
forcefield.parametrize(g)
from espaloma.data.md import MoleculeVacuumSimulation
simulation = MoleculeVacuumSimulation(
n_samples=100,
n_steps_per_sample=10,
)
simulation.run(g)
representation = esp.nn.baselines.FreeParameterBaseline(g_ref=g.heterograph)
epsilon = 0.1
for term in ['n2', 'n3']:
for param in ['k', 'eq']:
setattr(
representation, '%s_%s' % (term, param),
torch.nn.Parameter(
g.nodes[term].data[param + '_ref'].data
+ torch.distributions.normal.Normal(
loc=torch.zeros_like(g.nodes[term].data[param + '_ref']),
scale=epsilon * torch.ones_like(g.nodes[term].data[param + '_ref']),
).sample()
)
)
net = torch.nn.Sequential(
representation,
esp.mm.geometry.GeometryInGraph(),
esp.mm.energy.EnergyInGraph(), # predicted energy -> u
esp.mm.energy.EnergyInGraph(suffix='_ref') # reference energy -> u_ref,
)
optimizer = torch.optim.Adam(
net.parameters(),
0.1,
)
# optimizer = torch.optim.LBFGS(
# net.parameters(),
# 0.1,
# line_search_fn='strong_wolfe',
# )
net(g.heterograph)
states = []
losses = []
def l():
net(g.heterograph)
loss = torch.nn.MSELoss()(
g.nodes['n2'].data['u_ref'],
g.nodes['n2'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
# loss.backward()
print(loss)
return loss
l()
# optimizer.step(l)
g.nodes['n2'].data['k']
for _ in range(1000):
optimizer.zero_grad()
def l():
net(g.heterograph)
loss = torch.nn.MSELoss()(
g.nodes['n2'].data['u_ref'],
g.nodes['n2'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
loss.backward()
print(loss)
return loss
optimizer.step(l)
states.append(
{
'%s_%s' % (term, param): getattr(
net[0],
'%s_%s' % (term, param)
).detach().clone().numpy()
for term in ['n2'] for param in ['k', 'eq']
}
)
plt.plot(losses)
ks = np.array([state['n2_k'].flatten() for state in states])
eqs = np.array([state['n2_eq'].flatten() for state in states])
eqs.std(axis=0)
for idx in range(8):
plt.plot(np.diff(ks[:, idx]))
for idx in range(8):
plt.plot(eqs[:, idx])
plt.scatter(
g.nodes['n2'].data['u_ref'].flatten().detach(),
g.nodes['n2'].data['u'].flatten().detach(),
)
###Output
_____no_output_____
###Markdown
angle
###Code
g = esp.Graph('C')
forcefield = esp.graphs.legacy_force_field.LegacyForceField(
"smirnoff99Frosst"
)
forcefield.parametrize(g)
from espaloma.data.md import MoleculeVacuumSimulation
simulation = MoleculeVacuumSimulation(
n_samples=100,
n_steps_per_sample=10,
)
simulation.run(g)
representation = esp.nn.baselines.FreeParameterBaseline(g_ref=g.heterograph)
epsilon = 0.1
for term in ['n2', 'n3']:
for param in ['k', 'eq']:
setattr(
representation, '%s_%s' % (term, param),
torch.nn.Parameter(
g.nodes[term].data[param + '_ref'].data
+ torch.distributions.normal.Normal(
loc=torch.zeros_like(g.nodes[term].data[param + '_ref']),
scale=epsilon * torch.ones_like(g.nodes[term].data[param + '_ref']),
).sample()
)
)
net = torch.nn.Sequential(
representation,
esp.mm.geometry.GeometryInGraph(),
esp.mm.energy.EnergyInGraph(), # predicted energy -> u
esp.mm.energy.EnergyInGraph(suffix='_ref') # reference energy -> u_ref,
)
optimizer = torch.optim.Adam(
net.parameters(),
0.1,
)
# optimizer = torch.optim.LBFGS(
# net.parameters(),
# 0.1,
# line_search_fn='strong_wolfe',
# )
net(g.heterograph)
states = []
losses = []
def l():
net(g.heterograph)
loss = torch.nn.MSELoss()(
g.nodes['n3'].data['u_ref'],
g.nodes['n3'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
# loss.backward()
print(loss)
return loss
l()
# optimizer.step(l)
g.nodes['n3'].data['k']
for _ in range(1000):
optimizer.zero_grad()
def l():
net(g.heterograph)
loss = torch.nn.MSELoss()(
g.nodes['n3'].data['u_ref'],
g.nodes['n3'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
loss.backward()
print(loss)
return loss
optimizer.step(l)
states.append(
{
'%s_%s' % (term, param): getattr(
net[0],
'%s_%s' % (term, param)
).detach().clone().numpy()
for term in ['n3'] for param in ['k', 'eq']
}
)
plt.plot(losses)
ks = np.array([state['n3_k'].flatten() for state in states])
eqs = np.array([state['n3_eq'].flatten() for state in states])
eqs.std(axis=0)
for idx in range(8):
plt.plot(np.diff(ks[:, idx]))
for idx in range(8):
plt.plot(eqs[:, idx])
plt.scatter(
g.nodes['n3'].data['u_ref'].flatten().detach(),
g.nodes['n3'].data['u'].flatten().detach(),
)
###Output
_____no_output_____
###Markdown
all
###Code
g = esp.Graph('C')
forcefield = esp.graphs.legacy_force_field.LegacyForceField(
"smirnoff99Frosst"
)
forcefield.parametrize(g)
from espaloma.data.md import MoleculeVacuumSimulation
simulation = MoleculeVacuumSimulation(
n_samples=100,
n_steps_per_sample=10,
)
simulation.run(g)
representation = esp.nn.baselines.FreeParameterBaseline(g_ref=g.heterograph)
epsilon = 0.1
for term in ['n2', 'n3']:
for param in ['k', 'eq']:
setattr(
representation, '%s_%s' % (term, param),
torch.nn.Parameter(
g.nodes[term].data[param + '_ref'].data
+ torch.distributions.normal.Normal(
loc=torch.zeros_like(g.nodes[term].data[param + '_ref']),
scale=epsilon * torch.ones_like(g.nodes[term].data[param + '_ref']),
).sample()
)
)
net = torch.nn.Sequential(
representation,
esp.mm.geometry.GeometryInGraph(),
esp.mm.energy.EnergyInGraph(), # predicted energy -> u
esp.mm.energy.EnergyInGraph(suffix='_ref') # reference energy -> u_ref,
)
optimizer = torch.optim.Adam(
net.parameters(),
0.1,
)
# optimizer = torch.optim.LBFGS(
# net.parameters(),
# 0.1,
# line_search_fn='strong_wolfe',
# )
net(g.heterograph)
states = []
losses = []
def l():
net(g.heterograph)
loss = torch.nn.MSELoss()(
g.nodes['g'].data['u_ref'],
g.nodes['g'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
# loss.backward()
print(loss)
return loss
l()
# optimizer.step(l)
g.nodes['n3'].data['k']
for _ in range(1000):
optimizer.zero_grad()
def l():
net(g.heterograph)
loss = torch.nn.MSELoss()(
g.nodes['n3'].data['u_ref'],
g.nodes['n3'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
loss.backward()
print(loss)
return loss
optimizer.step(l)
states.append(
{
'%s_%s' % (term, param): getattr(
net[0],
'%s_%s' % (term, param)
).detach().clone().numpy()
for term in ['n3'] for param in ['k', 'eq']
}
)
plt.plot(losses)
ks = np.array([state['n3_k'].flatten() for state in states])
eqs = np.array([state['n3_eq'].flatten() for state in states])
eqs.std(axis=0)
for idx in range(8):
plt.plot(np.diff(ks[:, idx]))
for idx in range(8):
plt.plot(eqs[:, idx])
plt.scatter(
g.nodes['n3'].data['u_ref'].flatten().detach(),
g.nodes['n3'].data['u'].flatten().detach(),
)
###Output
_____no_output_____
###Markdown
normalize
###Code
g = esp.Graph('C')
forcefield = esp.graphs.legacy_force_field.LegacyForceField(
"smirnoff99Frosst"
)
forcefield.parametrize(g)
from espaloma.data.md import MoleculeVacuumSimulation
simulation = MoleculeVacuumSimulation(
n_samples=100,
n_steps_per_sample=10,
)
simulation.run(g)
representation = esp.nn.baselines.FreeParameterBaseline(g_ref=g.heterograph)
normalize = esp.data.normalize.ESOL100LogNormalNormalize()
epsilon = 0.1
for term in ['n2', 'n3']:
for param in ['k', 'eq']:
setattr(
representation, '%s_%s' % (term, param),
torch.nn.Parameter(
torch.zeros_like(
g.nodes[term].data[param + '_ref'],
)
)
)
net = torch.nn.Sequential(
representation,
esp.mm.geometry.GeometryInGraph(),
esp.mm.energy.EnergyInGraph(), # predicted energy -> u
esp.mm.energy.EnergyInGraph(suffix='_ref') # reference energy -> u_ref,
)
optimizer = torch.optim.Adam(
net.parameters(),
0.1,
)
# optimizer = torch.optim.LBFGS(
# net.parameters(),
# 0.1,
# line_search_fn='strong_wolfe',
# )
net(g.heterograph)
states = []
losses = []
def l():
net(g.heterograph)
loss = torch.nn.MSELoss()(
g.nodes['g'].data['u_ref'],
g.nodes['g'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
# loss.backward()
print(loss)
return loss
l()
# optimizer.step(l)
g.nodes['n3'].data['k']
for _ in range(1000):
optimizer.zero_grad()
def l():
normalize.unnorm(net(g.heterograph))
loss = torch.nn.MSELoss()(
g.nodes['g'].data['u_ref'],
g.nodes['g'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
loss.backward()
print(loss)
return loss
optimizer.step(l)
states.append(
{
'%s_%s' % (term, param): getattr(
net[0],
'%s_%s' % (term, param)
).detach().clone().numpy()
for term in ['n3'] for param in ['k', 'eq']
}
)
plt.plot(losses)
ks = np.array([state['n3_k'].flatten() for state in states])
eqs = np.array([state['n3_eq'].flatten() for state in states])
eqs.std(axis=0)
for idx in range(8):
plt.plot(np.diff(ks[:, idx]))
eqs
for idx in range(8):
plt.plot(eqs[:, idx])
plt.scatter(
g.nodes['g'].data['u_ref'].flatten().detach(),
g.nodes['g'].data['u'].flatten().detach(),
)
###Output
_____no_output_____
###Markdown
initialize clever
###Code
g = esp.Graph('C')
forcefield = esp.graphs.legacy_force_field.LegacyForceField(
"smirnoff99Frosst"
)
forcefield.parametrize(g)
from espaloma.data.md import MoleculeVacuumSimulation
simulation = MoleculeVacuumSimulation(
n_samples=100,
n_steps_per_sample=10,
)
simulation.run(g)
for term in ['n2', 'n3']:
for param in ['k', 'eq']:
print(term, param)
print(g.nodes[term].data[param + '_ref'].mean())
print(g.nodes[term].data[param + '_ref'].shape)
representation = esp.nn.baselines.FreeParameterBaseline(g_ref=g.heterograph)
representation.n2_k = torch.nn.Parameter(torch.ones(8, 1) * 200000.0)
representation.n2_eq = torch.nn.Parameter(torch.ones(8, 1) * 0.10)
representation.n3_k = torch.nn.Parameter(torch.ones(12, 1) * 200.)
representation.n3_eq = torch.nn.Parameter(torch.ones(12, 1) * 1.0)
net = torch.nn.Sequential(
representation,
esp.mm.geometry.GeometryInGraph(),
esp.mm.energy.EnergyInGraph(), # predicted energy -> u
esp.mm.energy.EnergyInGraph(suffix='_ref') # reference energy -> u_ref,
)
# optimizer = torch.optim.Adam(
# net.parameters(),
# 0.1,
# )
optimizer = torch.optim.LBFGS(
net.parameters(),
0.1,
line_search_fn='strong_wolfe',
)
print(net[0].n2_k)
net(g.heterograph)
states = []
losses = []
for _ in range(1000):
optimizer.zero_grad()
def l():
net(g.heterograph)
loss = torch.nn.MSELoss()(
g.nodes['n2'].data['u_ref'],
g.nodes['n2'].data['u'],
)
loss = loss.sum()
losses.append(loss.detach().numpy())
loss.backward()
print(loss)
return loss
optimizer.step(l)
states.append(
{
'%s_%s' % (term, param): getattr(
net[0],
'%s_%s' % (term, param)
).detach().clone().numpy()
for term in ['n2'] for param in ['k', 'eq']
}
)
plt.plot(losses)
plt.yscale('log')
ks = np.array([state['n2_k'].flatten() for state in states])
eqs = np.array([state['n2_eq'].flatten() for state in states])
eqs.std(axis=0)
eqs
for idx in range(8):
plt.plot(np.diff(ks[:, idx]))
eqs
for idx in range(8):
plt.plot(eqs[:, idx])
plt.rc('font', family='serif', size=10)
plt.rc('xtick', labelsize=8)
plt.rc('ytick', labelsize=8)
plt.scatter(
g.nodes['n2'].data['u_ref'].flatten().detach(),
g.nodes['n2'].data['u'].flatten().detach(),
)
plt.xlabel('$U_\mathtt{ref}$')
plt.ylabel('$U_\mathtt{pred}$')
###Output
_____no_output_____ |
p_continuous_control/Continuous_Control.ipynb | ###Markdown
Continuous Control
---
You are welcome to use this coding environment to train your agent for the second project of the [__Deep Reinforcement Learning Nanodegree program__](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893). Follow the instructions below to get started!
1. Start the Environment
We begin by importing some necessary packages.
###Code
from unityagents import UnityEnvironment
from collections import namedtuple, deque
import numpy as np
import random
import torch
import matplotlib.pyplot as plt
%matplotlib inline
seed = 1
# set the seed for generating random numbers
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed) # (both CPU and CUDA)
# check for GPU
print("CUDA is available:", torch.cuda.is_available())
###Output
CUDA is available: True
###Markdown
Next, we will start the environment! *__Before running the code cell below__*, change the file_name parameter to match the location of the Unity environment that you downloaded.
- __Mac:__ "path/to/Reacher.app"
- __Windows (x86):__ "path/to/Reacher_Windows_x86/Reacher.exe"
- __Windows (x86_64):__ "path/to/Reacher_Windows_x86_64/Reacher.exe"
- __Linux (x86):__ "path/to/Reacher_Linux/Reacher.x86"
- __Linux (x86_64):__ "path/to/Reacher_Linux/Reacher.x86_64"

For instance, if you are using a Mac, then you downloaded Reacher.app. If this file is in the same folder as the notebook, then the line below should appear as follows: env = UnityEnvironment(file_name="Reacher.app")
###Code
env = UnityEnvironment(file_name="")
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
goal_speed -> 1.0
goal_size -> 5.0
Unity brain name: ReacherBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 33
Number of stacked Vector Observation: 1
Vector Action space type: continuous
Vector Action space size (per agent): 4
Vector Action descriptions: , , ,
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action Spaces
Run the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
Number of agents: 20
Size of each action: 4
There are 20 agents. Each observes a state with length: 33
The state for the first agent looks like: [ 0.00000000e+00 -4.00000000e+00 0.00000000e+00 1.00000000e+00
-0.00000000e+00 -0.00000000e+00 -4.37113883e-08 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.00000000e+01 0.00000000e+00
1.00000000e+00 -0.00000000e+00 -0.00000000e+00 -4.37113883e-08
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 5.75471878e+00 -1.00000000e+00
5.55726624e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00
-1.68164849e-01]
###Markdown
3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment. Note that **in this coding environment, you will not be able to watch the agents while they are training**, and you should set `train_mode=True` to restart the environment.
###Code
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
env_info = env.step(actions)[brain_name] # send all actions to tne environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
Total score (averaged over agents) this episode: 0.12099999729543924
###Markdown
4. Train the Agent with DDPG
Run the code cell below to train the agent from scratch. Alternatively, you can skip to the next code cell (***Watch a Smart Agent!***) to load the pre-trained weights from file. When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following: env_info = env.reset(train_mode=True)[brain_name]
###Code
from DDPG_agent import Agent
# instantiate the Agent
agent = Agent(state_size=state_size, action_size=action_size, num_agents=num_agents, seed=seed)
def ddpg(n_episodes=2000, max_t=1000):
"""...
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
"""
scores_deque = deque(maxlen=100) # last 100 scores
scores = [] # list containing scores from each episode
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
agent.reset()
score = np.zeros(num_agents) # initialize the score (for each agent)
for t in range(max_t):
actions = agent.act(states, score.mean()) # select an action according to the current policy
# and exploration noise (for each agent)
env_info = env.step(actions)[brain_name] # send all action to the environment
next_states = env_info.vector_observations # get the next state (for each agent)
rewards = env_info.rewards # get the reward (for each agent)
dones = env_info.local_done # see if episode has finished
score += env_info.rewards
for a in range(num_agents): # save experience of each agent in replay memory,
agent.step(states[a], # update networks n times after every n timesteps
actions[a], # in row (one for each agent),
rewards[a], # using different samples from the buffer
next_states[a],
dones[a])
states = next_states # roll over states to next time step
if np.any(dones):
break # exit loop if episode finished
final_score = score.mean()
scores_deque.append(final_score)
scores.append(final_score)
print('\rEpisode {}\tScore: {:.2f}'.format(i_episode, final_score), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
if np.mean(scores_deque) >= 30 and len(scores_deque) == 100:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
torch.save(agent.actor_local.state_dict(), 'checkpoint_actor.pth')
torch.save(agent.critic_local.state_dict(), 'checkpoint_critic.pth')
break
return scores
scores = ddpg()
# plot the scores
fig = plt.figure(dpi=125)
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.grid()
plt.show()
###Output
Episode 100 Average Score: 33.72
Environment solved in 100 episodes! Average Score: 33.72
###Markdown
5. Watch a Smart Agent!
###Code
# load the weights from file
agent.actor_local.load_state_dict(torch.load('checkpoint_actor.pth'))
for i_episode in range(3):
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = agent.act(states, add_noise=False) # select an action (for each agent)
env_info = env.step(actions)[brain_name] # send all actions to tne environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents): {:.2f}'.format(np.mean(scores)))
###Output
Total score (averaged over agents): 35.41
Total score (averaged over agents): 34.17
Total score (averaged over agents): 35.36
###Markdown
When finished, close the environment.
###Code
env.close()
###Output
_____no_output_____ |
FCD_M1_0_Introducao.ipynb | ###Markdown
[](https://mybinder.org/v2/gh/zavaleta/Fundamentos_DS/main)  Fundamentos de Ciência de Dados (Data Science Fundamentals) Module 1 - Markdown Introduction to the Python Notebook Introduction This is a quick test that the Markdown language is working
###Code
print('Hello')
###Output
Hello
###Markdown
Titles
###Code
# code 1
def functiox():
print(2)
functiox()
###Output
2
|
Numpy & Pandas/Pandas Operations.ipynb | ###Markdown
Pandas Operations
RIHAD VARIAWA
---
Series
Loading packages and initializations
###Code
import numpy as np
import pandas as pd
labels = ['a','b','c']
my_data = [10,20,30]
arr = np.array(my_data)
d = {'a':10,'b':20,'c':30}
print ("Labels:", labels)
print("My data:", my_data)
print("Dictionary:", d)
###Output
Labels: ['a', 'b', 'c']
My data: [10, 20, 30]
Dictionary: {'a': 10, 'b': 20, 'c': 30}
###Markdown
Creating a Series (Pandas class)
* From numerical data only
* From numerical data and corresponding index (row labels)
* From NumPy array as the source of numerical data
* Just using a pre-defined dictionary
###Code
pd.Series(data=my_data) # Output looks very similar to a NumPy array
pd.Series(data=my_data, index=labels) # Note the extra information about index
# Inputs are in order of the expected parameters (not explicitly named), NumPy array is used for data
pd.Series(arr, labels)
pd.Series(d) # Using a pre-defined Dictionary object
###Output
_____no_output_____
###Markdown
What type of values can a Pandas Series hold?
###Code
print ("\nHolding numerical data\n",'-'*25, sep='')
print(pd.Series(arr))
print ("\nHolding text labels\n",'-'*20, sep='')
print(pd.Series(labels))
print ("\nHolding functions\n",'-'*20, sep='')
print(pd.Series(data=[sum,print,len]))
print ("\nHolding objects from a dictionary\n",'-'*40, sep='')
print(pd.Series(data=[d.keys, d.items, d.values]))
###Output
Holding numerical data
-------------------------
0 10
1 20
2 30
dtype: int32
Holding text labels
--------------------
0 a
1 b
2 c
dtype: object
Holding functions
--------------------
0 <built-in function sum>
1 <built-in function print>
2 <built-in function len>
dtype: object
Holding objects from a dictionary
----------------------------------------
0 <built-in method keys of dict object at 0x0000...
1 <built-in method items of dict object at 0x000...
2 <built-in method values of dict object at 0x00...
dtype: object
###Markdown
Indexing and slicing
###Code
ser1 = pd.Series([1,2,3,4],['CA', 'OR', 'CO', 'AZ'])
ser2 = pd.Series([1,2,5,4],['CA', 'OR', 'NV', 'AZ'])
print ("\nIndexing by name of the item/object (string identifier)\n",'-'*56, sep='')
print("Value for CA in ser1:", ser1['CA'])
print("Value for AZ in ser1:", ser1['AZ'])
print("Value for NV in ser2:", ser2['NV'])
print ("\nIndexing by number (positional value in the series)\n",'-'*52, sep='')
print("Value for CA in ser1:", ser1[0])
print("Value for AZ in ser1:", ser1[3])
print("Value for NV in ser2:", ser2[2])
print ("\nIndexing by a range\n",'-'*25, sep='')
print ("Value for OR, CO, and AZ in ser1:\n", ser1[1:4], sep='')
###Output
Indexing by name of the item/object (string identifier)
--------------------------------------------------------
Value for CA in ser1: 1
Value for AZ in ser1: 4
Value for NV in ser2: 5
Indexing by number (positional value in the series)
----------------------------------------------------
Value for CA in ser1: 1
Value for AZ in ser1: 4
Value for NV in ser2: 5
Indexing by a range
-------------------------
Value for OR, CO, and AZ in ser1:
OR 2
CO 3
AZ 4
dtype: int64
###Markdown
Adding/Merging two series with common indices
###Code
ser1 = pd.Series([1,2,3,4],['CA', 'OR', 'CO', 'AZ'])
ser2 = pd.Series([1,2,5,4],['CA', 'OR', 'NV', 'AZ'])
ser3 = ser1+ser2
print ("\nAfter adding the two series, the result looks like this...\n",'-'*59, sep='')
print(ser3)
print("\nPython tries to add values where it finds common index name, and puts NaN where indices are missing\n")
print ("\nThe idea works even for multiplication...\n",'-'*43, sep='')
print (ser1*ser2)
print ("\nOr even for combination of mathematical operations!\n",'-'*53, sep='')
print (np.exp(ser1)+np.log10(ser2))
###Output
After adding the two series, the result looks like this...
-----------------------------------------------------------
AZ 8.0
CA 2.0
CO NaN
NV NaN
OR 4.0
dtype: float64
Python tries to add values where it finds common index name, and puts NaN where indices are missing
The idea works even for multiplication...
-------------------------------------------
AZ 16.0
CA 1.0
CO NaN
NV NaN
OR 4.0
dtype: float64
Or even for combination of mathematical operations!
-----------------------------------------------------
AZ 55.200210
CA 2.718282
CO NaN
NV NaN
OR 7.690086
dtype: float64
###Markdown
DataFrame (the Real Meat!)
###Code
from numpy.random import randn as rn
###Output
_____no_output_____
###Markdown
Creating and accessing DataFrame
* Indexing
* Adding and deleting rows and columns
* Subsetting DataFrame
###Code
np.random.seed(101)
matrix_data = rn(5,4)
row_labels = ['A','B','C','D','E']
column_headings = ['W','X','Y','Z']
df = pd.DataFrame(data=matrix_data, index=row_labels, columns=column_headings)
print("\nThe data frame looks like\n",'-'*45, sep='')
print(df)
###Output
The data frame looks like
---------------------------------------------
W X Y Z
A 2.706850 0.628133 0.907969 0.503826
B 0.651118 -0.319318 -0.848077 0.605965
C -2.018168 0.740122 0.528813 -0.589001
D 0.188695 -0.758872 -0.933237 0.955057
E 0.190794 1.978757 2.605967 0.683509
###Markdown
Indexing and slicing (columns)
* By bracket method
* By DOT method (NOT recommended)
###Code
print("\nThe 'X' column\n",'-'*25, sep='')
print(df['X'])
print("\nType of the column: ", type(df['X']), sep='')
print("\nThe 'X' and 'Z' columns indexed by passing a list\n",'-'*55, sep='')
print(df[['X','Z']])
print("\nType of the pair of columns: ", type(df[['X','Z']]), sep='')
print ("\nSo, for more than one column, the object turns into a DataFrame")
print("\nThe 'X' column accessed by DOT method (NOT recommended)\n",'-'*55, sep='')
print(df.X)
###Output
The 'X' column accessed by DOT method (NOT recommended)
-------------------------------------------------------
A 0.628133
B -0.319318
C 0.740122
D -0.758872
E 1.978757
Name: X, dtype: float64
###Markdown
Creating and deleting a (new) column (or row)
###Code
print("\nA column is created by assigning it in relation to an existing column\n",'-'*75, sep='')
df['New'] = df['X']+df['Z']
df['New (Sum of X and Z)'] = df['X']+df['Z']
print(df)
print("\nA column is dropped by using df.drop() method\n",'-'*55, sep='')
df = df.drop('New', axis=1) # Notice the axis=1 option, axis = 0 is default, so one has to change it to 1
print(df)
df1=df.drop('A')
print("\nA row (index) is dropped by using df.drop() method and axis=0\n",'-'*65, sep='')
print(df1)
print("\nAn in-place change can be done by making inplace=True in the drop method\n",'-'*75, sep='')
df.drop('New (Sum of X and Z)', axis=1, inplace=True)
print(df)
###Output
A column is created by assigning it in relation to an existing column
---------------------------------------------------------------------------
W X Y Z New New (Sum of X and Z)
A 2.706850 0.628133 0.907969 0.503826 1.131958 1.131958
B 0.651118 -0.319318 -0.848077 0.605965 0.286647 0.286647
C -2.018168 0.740122 0.528813 -0.589001 0.151122 0.151122
D 0.188695 -0.758872 -0.933237 0.955057 0.196184 0.196184
E 0.190794 1.978757 2.605967 0.683509 2.662266 2.662266
A column is dropped by using df.drop() method
-------------------------------------------------------
W X Y Z New (Sum of X and Z)
A 2.706850 0.628133 0.907969 0.503826 1.131958
B 0.651118 -0.319318 -0.848077 0.605965 0.286647
C -2.018168 0.740122 0.528813 -0.589001 0.151122
D 0.188695 -0.758872 -0.933237 0.955057 0.196184
E 0.190794 1.978757 2.605967 0.683509 2.662266
A row (index) is dropped by using df.drop() method and axis=0
-----------------------------------------------------------------
W X Y Z New (Sum of X and Z)
B 0.651118 -0.319318 -0.848077 0.605965 0.286647
C -2.018168 0.740122 0.528813 -0.589001 0.151122
D 0.188695 -0.758872 -0.933237 0.955057 0.196184
E 0.190794 1.978757 2.605967 0.683509 2.662266
An in-place change can be done by making inplace=True in the drop method
---------------------------------------------------------------------------
W X Y Z
A 2.706850 0.628133 0.907969 0.503826
B 0.651118 -0.319318 -0.848077 0.605965
C -2.018168 0.740122 0.528813 -0.589001
D 0.188695 -0.758872 -0.933237 0.955057
E 0.190794 1.978757 2.605967 0.683509
###Markdown
Selecting/indexing Rows
* Label-based 'loc' method
* Index (numeric) 'iloc' method
###Code
print("\nLabel-based 'loc' method can be used for selecting row(s)\n",'-'*60, sep='')
print("\nSingle row\n")
print(df.loc['C'])
print("\nMultiple rows\n")
print(df.loc[['B','C']])
print("\nIndex position based 'iloc' method can be used for selecting row(s)\n",'-'*70, sep='')
print("\nSingle row\n")
print(df.iloc[2])
print("\nMultiple rows\n")
print(df.iloc[[1,2]])
###Output
Label-based 'loc' method can be used for selecting row(s)
------------------------------------------------------------
Single row
W -2.018168
X 0.740122
Y 0.528813
Z -0.589001
Name: C, dtype: float64
Multiple rows
W X Y Z
B 0.651118 -0.319318 -0.848077 0.605965
C -2.018168 0.740122 0.528813 -0.589001
Index position based 'iloc' method can be used for selecting row(s)
----------------------------------------------------------------------
Single row
W -2.018168
X 0.740122
Y 0.528813
Z -0.589001
Name: C, dtype: float64
Multiple rows
W X Y Z
B 0.651118 -0.319318 -0.848077 0.605965
C -2.018168 0.740122 0.528813 -0.589001
###Markdown
Subsetting DataFrame
###Code
np.random.seed(101)
matrix_data = rn(5,4)
row_labels = ['A','B','C','D','E']
column_headings = ['W','X','Y','Z']
df = pd.DataFrame(data=matrix_data, index=row_labels, columns=column_headings)
print("\nThe DatFrame\n",'-'*45, sep='')
print(df)
print("\nElement at row 'B' and column 'Y' is\n")
print(df.loc['B','Y'])
print("\nSubset comprising of rows B and D, and columns W and Y, is\n")
df.loc[['B','D'],['W','Y']]
###Output
The DatFrame
---------------------------------------------
W X Y Z
A 2.706850 0.628133 0.907969 0.503826
B 0.651118 -0.319318 -0.848077 0.605965
C -2.018168 0.740122 0.528813 -0.589001
D 0.188695 -0.758872 -0.933237 0.955057
E 0.190794 1.978757 2.605967 0.683509
Element at row 'B' and column 'Y' is
-0.848076983404
Subset comprising of rows B and D, and columns W and Y, is
###Markdown
Conditional selection, index (re)setting, multi-index
Basic idea of conditional check and Boolean DataFrame
###Code
print("\nThe DataFrame\n",'-'*45, sep='')
print(df)
print("\nBoolean DataFrame(s) where we are checking if the values are greater than 0\n",'-'*75, sep='')
print(df>0)
print("\n")
print(df.loc[['A','B','C']]>0)
booldf = df>0
print("\nDataFrame indexed by boolean dataframe\n",'-'*45, sep='')
print(df[booldf])
###Output
The DataFrame
---------------------------------------------
W X Y Z
A 2.706850 0.628133 0.907969 0.503826
B 0.651118 -0.319318 -0.848077 0.605965
C -2.018168 0.740122 0.528813 -0.589001
D 0.188695 -0.758872 -0.933237 0.955057
E 0.190794 1.978757 2.605967 0.683509
Boolean DataFrame(s) where we are checking if the values are greater than 0
---------------------------------------------------------------------------
W X Y Z
A True True True True
B True False False True
C False True True False
D True False False True
E True True True True
W X Y Z
A True True True True
B True False False True
C False True True False
DataFrame indexed by boolean dataframe
---------------------------------------------
W X Y Z
A 2.706850 0.628133 0.907969 0.503826
B 0.651118 NaN NaN 0.605965
C NaN 0.740122 0.528813 NaN
D 0.188695 NaN NaN 0.955057
E 0.190794 1.978757 2.605967 0.683509
###Markdown
Passing Boolean series to conditionally subset the DataFrame
###Code
matrix_data = np.matrix('22,66,140;42,70,148;30,62,125;35,68,160;25,62,152')
row_labels = ['A','B','C','D','E']
column_headings = ['Age', 'Height', 'Weight']
df = pd.DataFrame(data=matrix_data, index=row_labels, columns=column_headings)
print("\nA new DataFrame\n",'-'*25, sep='')
print(df)
print("\nRows with Height > 65 inch\n",'-'*35, sep='')
print(df[df['Height']>65])
booldf1 = df['Height']>65
booldf2 = df['Weight']>145
print("\nRows with Height > 65 inch and Weight >145 lbs\n",'-'*55, sep='')
print(df[(booldf1) & (booldf2)])
print("\nDataFrame with only Age and Weight columns whose Height > 65 inch\n",'-'*68, sep='')
print(df[booldf1][['Age','Weight']])
###Output
A new DataFrame
-------------------------
Age Height Weight
A 22 66 140
B 42 70 148
C 30 62 125
D 35 68 160
E 25 62 152
Rows with Height > 65 inch
-----------------------------------
Age Height Weight
A 22 66 140
B 42 70 148
D 35 68 160
Rows with Height > 65 inch and Weight >145 lbs
-------------------------------------------------------
Age Height Weight
B 42 70 148
D 35 68 160
DataFrame with only Age and Weight columns whose Height > 65 inch
--------------------------------------------------------------------
Age Weight
A 22 140
B 42 148
D 35 160
###Markdown
Re-setting and Setting Index
###Code
matrix_data = np.matrix('22,66,140;42,70,148;30,62,125;35,68,160;25,62,152')
row_labels = ['A','B','C','D','E']
column_headings = ['Age', 'Height', 'Weight']
df = pd.DataFrame(data=matrix_data, index=row_labels, columns=column_headings)
print("\nThe DataFrame\n",'-'*25, sep='')
print(df)
print("\nAfter resetting index\n",'-'*35, sep='')
print(df.reset_index())
print("\nAfter resetting index with 'drop' option TRUE\n",'-'*45, sep='')
print(df.reset_index(drop=True))
print("\nAdding a new column 'Profession'\n",'-'*45, sep='')
df['Profession'] = "Student Teacher Engineer Doctor Nurse".split()
print(df)
print("\nSetting 'Profession' column as index\n",'-'*45, sep='')
print (df.set_index('Profession'))
###Output
The DataFrame
-------------------------
Age Height Weight
A 22 66 140
B 42 70 148
C 30 62 125
D 35 68 160
E 25 62 152
After resetting index
-----------------------------------
index Age Height Weight
0 A 22 66 140
1 B 42 70 148
2 C 30 62 125
3 D 35 68 160
4 E 25 62 152
After resetting index with 'drop' option TRUE
---------------------------------------------
Age Height Weight
0 22 66 140
1 42 70 148
2 30 62 125
3 35 68 160
4 25 62 152
Adding a new column 'Profession'
---------------------------------------------
Age Height Weight Profession
A 22 66 140 Student
B 42 70 148 Teacher
C 30 62 125 Engineer
D 35 68 160 Doctor
E 25 62 152 Nurse
Setting 'Profession' column as index
---------------------------------------------
Age Height Weight
Profession
Student 22 66 140
Teacher 42 70 148
Engineer 30 62 125
Doctor 35 68 160
Nurse 25 62 152
###Markdown
Multi-indexing
###Code
# Index Levels
outside = ['G1','G1','G1','G2','G2','G2']
inside = [1,2,3,1,2,3]
hier_index = list(zip(outside,inside))
print("\nTuple pairs after the zip and list command\n",'-'*45, sep='')
print(hier_index)
hier_index = pd.MultiIndex.from_tuples(hier_index)
print("\nIndex hierarchy\n",'-'*25, sep='')
print(hier_index)
print("\nIndex hierarchy type\n",'-'*25, sep='')
print(type(hier_index))
print("\nCreating DataFrame with multi-index\n",'-'*37, sep='')
np.random.seed(101)
df1 = pd.DataFrame(data=np.round(rn(6,3),2), index= hier_index, columns= ['A','B','C'])
print(df1)
print("\nSubsetting multi-index DataFrame using two 'loc' methods\n",'-'*60, sep='')
print(df1.loc['G2'].loc[[1,3]][['B','C']])
print("\nNaming the indices by 'index.names' method\n",'-'*45, sep='')
df1.index.names=['Outer', 'Inner']
print(df1)
###Output
Tuple pairs after the zip and list command
---------------------------------------------
[('G1', 1), ('G1', 2), ('G1', 3), ('G2', 1), ('G2', 2), ('G2', 3)]
Index hierarchy
-------------------------
MultiIndex(levels=[['G1', 'G2'], [1, 2, 3]],
labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]])
Index hierarchy type
-------------------------
<class 'pandas.core.indexes.multi.MultiIndex'>
Creating DataFrame with multi-index
-------------------------------------
A B C
G1 1 2.71 0.63 0.91
2 0.50 0.65 -0.32
3 -0.85 0.61 -2.02
G2 1 0.74 0.53 -0.59
2 0.19 -0.76 -0.93
3 0.96 0.19 1.98
Subsetting multi-index DataFrame using two 'loc' methods
------------------------------------------------------------
B C
1 0.53 -0.59
3 0.19 1.98
Naming the indices by 'index.names' method
---------------------------------------------
A B C
Outer Inner
G1 1 2.71 0.63 0.91
2 0.50 0.65 -0.32
3 -0.85 0.61 -2.02
G2 1 0.74 0.53 -0.59
2 0.19 -0.76 -0.93
3 0.96 0.19 1.98
###Markdown
Cross-section ('XS') command
###Code
print("\nGrabbing a cross-section from outer level\n",'-'*45, sep='')
print(df1.xs('G1'))
print("\nGrabbing a cross-section from inner level (for all outer levels)\n",'-'*65, sep='')
print(df1.xs(2,level='Inner'))
###Output
Grabbing a cross-section from outer level
---------------------------------------------
A B C
Inner
1 2.71 0.63 0.91
2 0.50 0.65 -0.32
3 -0.85 0.61 -2.02
Grabbing a cross-section from inner level (for all outer levels)
-----------------------------------------------------------------
A B C
Outer
G1 0.50 0.65 -0.32
G2 0.19 -0.76 -0.93
###Markdown
Missing Values
###Code
df = pd.DataFrame({'A':[1,2,np.nan],'B':[5,np.nan,np.nan],'C':[1,2,3]})
df['States']="CA NV AZ".split()
df.set_index('States',inplace=True)
print(df)
###Output
A B C
States
CA 1.0 5.0 1
NV 2.0 NaN 2
AZ NaN NaN 3
###Markdown
Pandas 'dropna' method
###Code
print("\nDropping any rows with a NaN value\n",'-'*35, sep='')
print(df.dropna(axis=0))
print("\nDropping any column with a NaN value\n",'-'*35, sep='')
print(df.dropna(axis=1))
print("\nDropping a row with a minimum 2 NaN value using 'thresh' parameter\n",'-'*68, sep='')
print(df.dropna(axis=0, thresh=2))
###Output
Dropping any rows with a NaN value
-----------------------------------
A B C
States
CA 1.0 5.0 1
Dropping any column with a NaN value
-----------------------------------
C
States
CA 1
NV 2
AZ 3
Dropping a row with a minimum 2 NaN value using 'thresh' parameter
--------------------------------------------------------------------
A B C
States
CA 1.0 5.0 1
NV 2.0 NaN 2
###Markdown
Pandas 'fillna' method
###Code
print("\nFilling values with a default value\n",'-'*35, sep='')
print(df.fillna(value='FILL VALUE'))
print("\nFilling values with a computed value (mean of column A here)\n",'-'*60, sep='')
print(df.fillna(value=df['A'].mean()))
###Output
Filling values with a default value
-----------------------------------
A B C
States
CA 1 5 1
NV 2 FILL VALUE 2
AZ FILL VALUE FILL VALUE 3
Filling values with a computed value (mean of column A here)
------------------------------------------------------------
A B C
States
CA 1.0 5.0 1
NV 2.0 1.5 2
AZ 1.5 1.5 3
###Markdown
GroupBy method
###Code
# Create dataframe
data = {'Company':['GOOG','GOOG','MSFT','MSFT','FB','FB'],
'Person':['Sam','Charlie','Amy','Vanessa','Carl','Sarah'],
'Sales':[200,120,340,124,243,350]}
df = pd.DataFrame(data)
df
byComp = df.groupby('Company')
print("\nGrouping by 'Company' column and listing mean sales\n",'-'*55, sep='')
print(byComp.mean())
print("\nGrouping by 'Company' column and listing sum of sales\n",'-'*55, sep='')
print(byComp.sum())
# Note dataframe conversion of the series and transpose
print("\nAll in one line of command (Stats for 'FB')\n",'-'*65, sep='')
print(pd.DataFrame(df.groupby('Company').describe().loc['FB']).transpose())
print("\nSame type of extraction with little different command\n",'-'*68, sep='')
print(df.groupby('Company').describe().loc[['GOOG', 'MSFT']])
###Output
Grouping by 'Company' column and listing mean sales
-------------------------------------------------------
Sales
Company
FB 296.5
GOOG 160.0
MSFT 232.0
Grouping by 'Company' column and listing sum of sales
-------------------------------------------------------
Sales
Company
FB 593
GOOG 320
MSFT 464
All in one line of command (Stats for 'FB')
-----------------------------------------------------------------
Sales
count mean std min 25% 50% 75% max
FB 2.0 296.5 75.660426 243.0 269.75 296.5 323.25 350.0
Same type of extraction with little different command
--------------------------------------------------------------------
Sales
count mean std min 25% 50% 75% max
Company
GOOG 2.0 160.0 56.568542 120.0 140.0 160.0 180.0 200.0
MSFT 2.0 232.0 152.735065 124.0 178.0 232.0 286.0 340.0
###Markdown
Merging, Joining, Concatenating
Concatenation
###Code
# Creating data frames
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8,9,10,11])
print("\nThe DataFrame number 1\n",'-'*30, sep='')
print(df1)
print("\nThe DataFrame number 2\n",'-'*30, sep='')
print(df2)
print("\nThe DataFrame number 3\n",'-'*30, sep='')
print(df3)
df_cat1 = pd.concat([df1,df2,df3], axis=0)
print("\nAfter concatenation along row\n",'-'*30, sep='')
print(df_cat1)
df_cat2 = pd.concat([df1,df2,df3], axis=1)
print("\nAfter concatenation along column\n",'-'*60, sep='')
print(df_cat2)
df_cat2.fillna(value=0, inplace=True)
print("\nAfter filling missing values with zero\n",'-'*60, sep='')
print(df_cat2)
###Output
After concatenation along row
------------------------------
A B C D
0 A0 B0 C0 D0
1 A1 B1 C1 D1
2 A2 B2 C2 D2
3 A3 B3 C3 D3
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
8 A8 B8 C8 D8
9 A9 B9 C9 D9
10 A10 B10 C10 D10
11 A11 B11 C11 D11
After concatenation along column
------------------------------------------------------------
A B C D A B C D A B C D
0 A0 B0 C0 D0 NaN NaN NaN NaN NaN NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN NaN NaN NaN NaN NaN
2 A2 B2 C2 D2 NaN NaN NaN NaN NaN NaN NaN NaN
3 A3 B3 C3 D3 NaN NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN A4 B4 C4 D4 NaN NaN NaN NaN
5 NaN NaN NaN NaN A5 B5 C5 D5 NaN NaN NaN NaN
6 NaN NaN NaN NaN A6 B6 C6 D6 NaN NaN NaN NaN
7 NaN NaN NaN NaN A7 B7 C7 D7 NaN NaN NaN NaN
8 NaN NaN NaN NaN NaN NaN NaN NaN A8 B8 C8 D8
9 NaN NaN NaN NaN NaN NaN NaN NaN A9 B9 C9 D9
10 NaN NaN NaN NaN NaN NaN NaN NaN A10 B10 C10 D10
11 NaN NaN NaN NaN NaN NaN NaN NaN A11 B11 C11 D11
After filling missing values with zero
------------------------------------------------------------
A B C D A B C D A B C D
0 A0 B0 C0 D0 0 0 0 0 0 0 0 0
1 A1 B1 C1 D1 0 0 0 0 0 0 0 0
2 A2 B2 C2 D2 0 0 0 0 0 0 0 0
3 A3 B3 C3 D3 0 0 0 0 0 0 0 0
4 0 0 0 0 A4 B4 C4 D4 0 0 0 0
5 0 0 0 0 A5 B5 C5 D5 0 0 0 0
6 0 0 0 0 A6 B6 C6 D6 0 0 0 0
7 0 0 0 0 A7 B7 C7 D7 0 0 0 0
8 0 0 0 0 0 0 0 0 A8 B8 C8 D8
9 0 0 0 0 0 0 0 0 A9 B9 C9 D9
10 0 0 0 0 0 0 0 0 A10 B10 C10 D10
11 0 0 0 0 0 0 0 0 A11 B11 C11 D11
###Markdown
Merging by a common 'key'
The **merge** function allows you to merge DataFrames together using logic similar to merging SQL tables.
###Code
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
print("\nThe DataFrame 'left'\n",'-'*30, sep='')
print(left)
print("\nThe DataFrame 'right'\n",'-'*30, sep='')
print(right)
merge1= pd.merge(left,right,how='inner',on='key')
print("\nAfter simple merging with 'inner' method\n",'-'*50, sep='')
print(merge1)
###Output
After simple merging with 'inner' method
--------------------------------------------------
A B key C D
0 A0 B0 K0 C0 D0
1 A1 B1 K1 C1 D1
2 A2 B2 K2 C2 D2
3 A3 B3 K3 C3 D3
###Markdown
Merging on a set of keys
###Code
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right
pd.merge(left, right, on=['key1', 'key2'])
pd.merge(left, right, how='outer',on=['key1', 'key2'])
pd.merge(left, right, how='left',on=['key1', 'key2'])
pd.merge(left, right, how='right',on=['key1', 'key2'])
###Output
_____no_output_____
###Markdown
Joining. Joining is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single DataFrame based on **'index keys'**.
###Code
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
left
right
left.join(right)
left.join(right, how='outer')
###Output
_____no_output_____
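One hedged usage note on `join`: `how='inner'` keeps only index labels present in both frames, and if the two frames share a column name you must pass `lsuffix`/`rsuffix`. A sketch reusing the `left` and `right` frames above:
```
# Keep only index labels present in both frames (K0 and K2 here)
print(left.join(right, how='inner'))

# Force a column-name clash just for illustration, then disambiguate with suffixes
right_overlap = right.rename(columns={'C': 'A'})
print(left.join(right_overlap, lsuffix='_left', rsuffix='_right'))
```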
###Markdown
Useful operations: head() and unique values
* head()
* unique()
* nunique()
* value_counts()
###Code
import pandas as pd
df = pd.DataFrame({'col1':[1,2,3,4,5,6,7,8,9,10],
'col2':[444,555,666,444,333,222,666,777,666,555],
'col3':'aaa bb c dd eeee fff gg h iii j'.split()})
df
print("\nMethod head() is for showing first few entries\n",'-'*50, sep='')
df.head()
print("\nFinding unique values in 'col2'\n",'-'*40, sep='') # Note 'unique' method applies to pd.series only
print(df['col2'].unique())
print("\nFinding number of unique values in 'col2'\n",'-'*45, sep='')
print(df['col2'].nunique())
print("\nTable of unique values in 'col2'\n",'-'*40, sep='')
t1=df['col2'].value_counts()
print(t1)
###Output
Table of unique values in 'col2'
----------------------------------------
666 3
444 2
555 2
222 1
333 1
777 1
Name: col2, dtype: int64
###Markdown
Applying functions. Pandas works with the **'apply'** method to accept any user-defined function.
###Code
import numpy as np  # needed for np.log10 below (harmless if numpy was already imported earlier in the notebook)
# Define a function
def testfunc(x):
if (x> 500):
return (10*np.log10(x))
else:
return (x/10)
df['FuncApplied'] = df['col2'].apply(testfunc)
print(df)
###Output
col1 col2 col3 FuncApplied
0 1 444 aaa 44.400000
1 2 555 bb 27.442930
2 3 666 c 28.234742
3 4 444 dd 44.400000
4 5 333 eeee 33.300000
5 6 222 fff 22.200000
6 7 666 gg 28.234742
7 8 777 h 28.904210
8 9 666 iii 28.234742
9 10 555 j 27.442930
###Markdown
**Apply works with built-in functions too!**
###Code
df['col3length']= df['col3'].apply(len)
print(df)
###Output
col1 col2 col3 FuncApplied col3length
0 1 444 aaa 44.400000 3
1 2 555 bb 27.442930 2
2 3 666 c 28.234742 1
3 4 444 dd 44.400000 2
4 5 333 eeee 33.300000 4
5 6 222 fff 22.200000 3
6 7 666 gg 28.234742 2
7 8 777 h 28.904210 1
8 9 666 iii 28.234742 3
9 10 555 j 27.442930 1
###Markdown
**Combine 'apply' with lambda expressions for in-line calculations**
###Code
df['FuncApplied'].apply(lambda x: np.sqrt(x))
###Output
_____no_output_____
###Markdown
**Standard statistical functions directly apply to columns**
###Code
print("\nSum of the column 'FuncApplied' is: ",df['FuncApplied'].sum())
print("Mean of the column 'FuncApplied' is: ",df['FuncApplied'].mean())
print("Std dev of the column 'FuncApplied' is: ",df['FuncApplied'].std())
print("Min and max of the column 'FuncApplied' are: ",df['FuncApplied'].min(),"and",df['FuncApplied'].max())
###Output
Sum of the column 'FuncApplied' is: 312.7942967255717
Mean of the column 'FuncApplied' is: 31.27942967255717
Std dev of the column 'FuncApplied' is: 7.4065059423607895
Min and max of the column 'FuncApplied' are: 22.2 and 44.4
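As a hedged companion to the individual statistics above, `describe()` bundles count, mean, standard deviation, min, quartiles and max into a single summary:
```
# Summary statistics for the numeric columns
print(df.describe())

# Or for a single column
print(df['FuncApplied'].describe())
```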
###Markdown
Deletion, sorting, list of column and row names **Getting the names of the columns**
###Code
print("\nName of columns\n",'-'*20, sep='')
print(df.columns)
l = list(df.columns)
print("\nColumn names in a list of strings for later manipulation:",l)
###Output
Name of columns
--------------------
Index(['col1', 'col2', 'col3', 'FuncApplied', 'col3length'], dtype='object')
Column names in a list of strings for later manipulation: ['col1', 'col2', 'col3', 'FuncApplied', 'col3length']
###Markdown
**Deletion by 'del' command** This affects the dataframe immediately, unlike the `drop` method.
###Code
print("\nDeleting last column by 'del' command\n",'-'*50, sep='')
del df['col3length']
print(df)
df['col3length']= df['col3'].apply(len)
###Output
Deleting last column by 'del' command
--------------------------------------------------
col1 col2 col3 FuncApplied
0 1 444 aaa 44.400000
1 2 555 bb 27.442930
2 3 666 c 28.234742
3 4 444 dd 44.400000
4 5 333 eeee 33.300000
5 6 222 fff 22.200000
6 7 666 gg 28.234742
7 8 777 h 28.904210
8 9 666 iii 28.234742
9 10 555 j 27.442930
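For contrast with `del`, here is a hedged sketch of the `drop` method mentioned above: it returns a modified copy and leaves the original frame untouched unless `inplace=True` is passed.
```
# drop returns a new DataFrame; df itself still has 'col3length'
df_dropped = df.drop('col3length', axis=1)
print(df_dropped.columns)  # no 'col3length'
print(df.columns)          # 'col3length' is still there

# df.drop('col3length', axis=1, inplace=True) would mutate df, much like del
```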
###Markdown
**Sorting and Ordering a DataFrame**
###Code
df.sort_values(by='col2') #inplace=False by default
df.sort_values(by='FuncApplied',ascending=False) #inplace=False by default
###Output
_____no_output_____
###Markdown
**Find Null Values or Check for Null Values**
###Code
df = pd.DataFrame({'col1':[1,2,3,np.nan],
'col2':[np.nan,555,666,444],
'col3':['abc','def','ghi','xyz']})
df.head()
df.isnull()
df.fillna('FILL')
###Output
_____no_output_____
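Two hedged companions to `isnull()` and `fillna()`: counting missing values per column, and dropping or filling them with a column statistic.
```
# How many missing values does each column have?
print(df.isnull().sum())

# Drop any row containing at least one missing value (returns a copy)
print(df.dropna())

# Fill a numeric column with its own mean instead of a constant
print(df['col1'].fillna(value=df['col1'].mean()))
```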
###Markdown
**Pivot Table**
###Code
data = {'A':['foo','foo','foo','bar','bar','bar'],
'B':['one','one','two','two','one','one'],
'C':['x','y','x','y','x','y'],
'D':[1,3,2,5,4,1]}
df = pd.DataFrame(data)
df
# Index out of 'A' and 'B', columns from 'C', actual numerical values from 'D'
df.pivot_table(values='D',index=['A', 'B'],columns=['C'])
# Index out of 'A' and 'B', columns from 'C', actual numerical values from 'D'
df.pivot_table(values='D',index=['A', 'B'],columns=['C'], fill_value='FILLED')
###Output
_____no_output_____
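One hedged note on `pivot_table`: duplicate index/column combinations are aggregated with `aggfunc` (the default is the mean), so switching to sums or counts is a one-argument change.
```
# Sum the values of D instead of averaging them
print(df.pivot_table(values='D', index=['A', 'B'], columns=['C'], aggfunc='sum'))

# Count how many rows fall into each cell
print(df.pivot_table(values='D', index=['A', 'B'], columns=['C'], aggfunc='count'))
```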
###Markdown
Pandas built-in Visualization **Import packages**
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Read in the CSV data file**
###Code
df1=pd.read_csv('df1.csv', index_col=0)
df1.head()
df2=pd.read_csv('df2')
df2.head()
###Output
_____no_output_____
###Markdown
**Histogram of a single column**
###Code
df1['A'].hist()
###Output
_____no_output_____
###Markdown
**Histogram with a different set of arguments (list of columns, bins, figure size, etc)**
###Code
df1.hist(column=['B','C'],bins=20,figsize=(10,4))
###Output
_____no_output_____
###Markdown
**Histogram with generic plot method of Pandas**
###Code
df1.plot(kind='hist', bins=30, grid=True, figsize=(12,7))
###Output
_____no_output_____
###Markdown
**Area plot**
###Code
import seaborn as sns #Plot style will change to Seaborn package style from now on
df2.plot.area(alpha=0.4)
###Output
_____no_output_____
###Markdown
**Bar plot (with and without stacking)**
###Code
df2.plot.bar()
df2.plot.bar(stacked=True)
###Output
_____no_output_____
###Markdown
**Lineplot**
###Code
df1.plot.line(x=df1.index,y=['B','C'],figsize=(12,4),lw=1) # Note matplotlib arguments like 'lw' and 'figsize'
###Output
_____no_output_____
###Markdown
**Scatterplot**
###Code
df1.plot.scatter(x='A',y='B',figsize=(12,8))
df1.plot.scatter(x='A',y='B',c='C',cmap='coolwarm',figsize=(12,8)) # Color of the scatter dots set based on column C
df1.plot.scatter(x='A',y='B',s=10*np.exp(df1['C']),c='C',figsize=(12,8)) # Size of the dots set based on column C
###Output
_____no_output_____
###Markdown
**Boxplot**
###Code
df2.plot.box()
###Output
_____no_output_____
###Markdown
**Hexagonal bin plot for bivariate data**
###Code
df=pd.DataFrame(data=np.random.randn(1000,2),columns=['A','B'])
df.head()
df.plot.hexbin(x='A',y='B',gridsize=30,cmap='coolwarm')
###Output
_____no_output_____
###Markdown
**Kernel density estimation**
###Code
df2.plot.density(lw=3)
###Output
_____no_output_____ |
python_conditionals.ipynb | ###Markdown
Python Conditionals: Boolean Conditionals. A boolean value is always either `True` or `False`; `True` and `False` are the only boolean values in Python.
###Code
gameOver = False
isOdd = True
###Output
_____no_output_____
###Markdown
Relational Operators. Relational operators compare two values and report a result of `True` or `False`. Equality checks (`==`, `!=`) can compare different types (a number and a string are simply not equal), but ordering comparisons such as `<` or `>` between a number and a string raise a `TypeError` in Python 3.

|Operator|Use|
|------|:--|
|`a == b`|`True` if a and b are equal|
|`a != b`|`True` if a and b are not equal|
|`a > b`|`True` if a is greater than b|
|`a < b`|`True` if a is less than b|
|`a >= b`|`True` if a is greater than or equal to b|
|`a <= b`|`True` if a is less than or equal to b|
###Code
a = 5
b = 7
c = 10
print( a == b ) # are a and b equal?
print( a != b ) # are a and b not equal?
print( a > b ) # is a greater than b?
print( a < b ) # is a less than b?
print( a >= b ) # is a greater than or equal to b?
print( a <= b ) # is a less than or equal to b?
###Output
False
True
False
True
False
True
###Markdown
Logical Operators. Logical operators allow you to check multiple comparisons at the same time. For example, if I am hungry and it is dinnertime, then eat. Without the logical operators you could only do one check at a time.

|Operator|Use|
|--------|:---|
|`a or b`|`True` if a is `True`, if b is `True`, or if both are `True`. `False` if both are `False`.|
|`a and b`|`True` if a is `True` and b is `True`. `False` if either is `False`.|
|`not a`|`True` if a is `False`. `False` if a is `True`.|

Note: for `or`, if the first condition is true, then the whole expression must be true, so the second condition is not checked. For `and`, if the first condition is false, then the whole expression must be false, so the second condition is not checked. This is called "short circuit evaluation" (see the demo after the code cell below).
###Code
a = 5
b = 7
c = 10
print( (a < b ) and (b < c) ) # True and True == True
print( (a < b ) and (b > c) ) # True and False == False
print( (a > b ) and (b < c) ) # False and ___ == False (see: short circuit evalutation)
print( (a > b ) and (b > c) ) # False and ____ == False (see: short circuit evalutation)
print()
print( (a < b ) or (b < c) ) # True or ___ == True (see: short circuit evalutation)
print( (a < b ) or (b > c) ) # True or _____ == True (see: short circuit evalutation)
print( (a > b ) or (b < c) ) # False or True == True
print( (a > b ) or (b > c) ) # False or False == False
print()
print( not True ) # not True == False
print( not False ) # not False == True
print( not (a < b) ) # not True == False
print( not (a > b) ) # not False == True
print()
# You cannot distribute the not like it was an algebraic equation.
# The next two expressions are not the same.
print( not ( (a < b) or (c < b) ) ) # not (True or False) == False
print( ( not (a < b) or not (c < b) ) ) # (not True) or (not False) == False or True == True
###Output
True
False
False
False
True
True
True
False
False
True
False
True
False
True
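As a hedged aside (not part of the original lesson), here is a tiny demo of the short-circuit note above. It uses a helper function, `noisy`, introduced here just for illustration, that prints whenever Python actually evaluates it:
```
def noisy(value, label):
    # print so we can see whether Python evaluated this operand
    print("evaluating", label)
    return value

print(noisy(False, "left") and noisy(True, "right"))   # only "left" prints; result is False
print(noisy(True, "left") or noisy(True, "right"))     # only "left" prints; result is True
print(noisy(True, "left") and noisy(False, "right"))   # both print; result is False
```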
###Markdown
if statements. A program would be very limited if it could not make decisions based on the value of inputs or results. `if` statements allow a program to check a value and perform different actions based on the result of the check. The format is:

```
if( boolean_condition ):
    do_stuff
    do_more_stuff
```

The `boolean_condition` must be a condition (like those shown above) that evaluates to either `True` or `False`. This line is followed by one or more (not zero) indented lines of code that make up the **if block**. In this case the **if block** consists of the two lines `do_stuff` and `do_more_stuff`. If the `boolean_condition` evaluates to `True` then the **if block** will be executed. If the `boolean_condition` evaluates to `False` then the **if block** will be skipped and will not execute. Note that the `boolean_condition` may be a simple condition like those in the **Relational Operators** section above, or it may be a compound condition like those in the **Logical Operators** section above.
###Code
grade = 85
if( grade >= 90 ):
print( "Your grade of", grade, "is an A!" )
if( grade < 90 and grade >= 80 ):
print( "Your grade of", grade, "is a B!" )
if( grade < 80 and grade >= 75 ):
print( "Your grade of", grade, "is a C." )
if( grade < 75 and grade >= 70 ):
print( "Your grade of", grade, "is a D." )
if( grade < 70 ):
print( "Your grade of", grade, "is an F." )
###Output
Your grade of 85 is a B!
###Markdown
if-else statements. There are times that you want your program to do one thing if a condition is true, and do another thing if it is not true. That's where the `else` keyword comes into play. The structure of an `if-else` block looks like this:

```
if( boolean_condition ):
    if-block-of-statements
else:
    else-block-of-statements
```

Just like with an if statement, the `if-block-of-statements` and the `else-block-of-statements` consist of one or more (not zero) Python statements. If the `boolean_condition` is `True` then the `if-block-of-statements` is executed and the `else-block-of-statements` is skipped. If the `boolean_condition` is `False` then the `if-block-of-statements` is skipped and the `else-block-of-statements` is executed.
###Code
num = 35
if( num > 50 ):
print( num, "is greater than 50." )
else:
print( num, "is not greater than 50." )
print()
cold = True
if( cold == True ): # could also say if( cold ): since cold evaluates to True or False.
print( "It\'s cold, wear a coat!" )
else:
print( "Not cold today!" )
###Output
35 is not greater than 50.
It's cold, wear a coat!
###Markdown
if-elif-else statements. Sometimes you want to have your program make different decisions for a number of conditions, not just a single True/False condition. For this you have the `elif` statement. `elif` is short for `else if`. The structure of an if-elif-else statement looks like this:

```
if( boolean_condition ):
    if-block-of-statements
elif ( another_boolean_condition ):
    elif-block-of-statements
else:
    else-block-of-statements
```

You may have more than one `elif` statement, each with their own boolean condition to be tested and their own block of statements to be executed. The `if` statement always comes first. The `else` statement always comes last. All `elif` statements should be between them. Just like with an if statement, the `elif-block-of-statements` consists of one or more (not zero) Python statements. If the `boolean_condition` is `True` then the `if-block-of-statements` is executed and the `elif-block-of-statements` and the `else-block-of-statements` are skipped. If the `boolean_condition` is `False` then the `if-block-of-statements` is skipped and the `elif` `another_boolean_condition` is evaluated. If the `another_boolean_condition` is `True` then the `elif-block-of-statements` is executed and the `else-block-of-statements` is skipped. If the `another_boolean_condition` is `False` then the `elif-block-of-statements` is skipped and the next `elif` is evaluated. If there are no more `elif` statements then the `else-block-of-statements` is executed.
###Code
a = 5
b = 10
if( a == b ):
print( "a equals b" )
elif( a < b ):
print( "a is less than b" )
elif( a > b ):
print( "a is greater than b" )
else:
print( "Something went wrong." )
print()
a = 10
b = 10
if( a == b ):
print( "a equals b" )
elif( b == a ):
print( "b equals a" ) # notice that once a condition is triggered all the other conditions are skipped.
else:
print( "Ummmm, something went wrong." )
###Output
a is less than b
a equals b
###Markdown
Algorithms. Here are some examples of common algorithms that use boolean conditions and if statements.
###Code
# Is num a multiple of 10?
# Think: what number could we use instead of 10 to determine if a number is even or odd?
# Refer to the use of modulus (%) on the Python Basics page.
num = 35
if( num % 10 == 0 ):
print( num, "is a multiple of 10." )
else:
print( num, "is not a multiple of 10." )
print()
num = 100
if( num % 10 == 0 ):
print( num, "is a multiple of 10." )
else:
print( num, "is not a multiple of 10." )
###Output
35 is not a multiple of 10.
100 is a multiple of 10.
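Answering the question posed in the comment above, here is a hedged sketch: swapping 10 for 2 in the modulus check tests whether a number is even or odd.
```
num = 35
if( num % 2 == 0 ):
    print( num, "is even." )
else:
    print( num, "is odd." )
```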
src/04_word2vec/.ipynb_checkpoints/04_w2v_results_alldiv-checkpoint.ipynb | ###Markdown
Visualizing Word2Vec Models Trained on Biomedical Abstracts in PubMed: A Comparison of Race and Diversity Over Time. Brandon Kramer, University of Virginia's Biocomplexity Institute. This notebook explores two Word2Vec models trained on the PubMed database taken from January 2021. Overall, I am interested in testing whether diversity and racial terms are becoming more closely related over time. To do this, I [trained](https://github.com/brandonleekramer/diversity/blob/master/src/04_word_embeddings/03_train_word2vec.ipynb) two models (one on the 1990-2000 data and one on a random sample of the 2010-2020 data). Now, I will visualize the results of these models to see which words are similar to race/diversity as well as plotting some comparisons of these two terms over time. For those unfamiliar with Word2Vec, it might be worth reading [this post from Connor Gilroy](https://ccgilroy.github.io/community-discourse/introduction.html), a sociologist who details how word embeddings can help us better understand the concept of "community." The post contains information on how Word2Vec and other word embedding approaches can teach us about word/document similarity, opposite words, and historical changes in words. Basically, Word2Vec turns each word in the corpus into a numeric vector based on how it is used within 5-word context windows (a parameter I defined in this model), making all of the words directly comparable to one another within a vector space. The end result is that we are able to compare how similar or different words are or, as we will see below, how similar or different words become over time. As we will come to see, this approach is useful but not perfect for dealing with our case due to the polysemy of 'diversity.' Import packages and ingest data: Let's load all of our packages and the `.bin` files that hold our models.
###Code
# load packages
import os
from itertools import product
import pandas.io.sql as psql
import pandas as pd
from pandas import DataFrame
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib.patches import Rectangle
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
# load data
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/word_embeddings/")
earlier_model = Word2Vec.load("word2vec_1990_2000_socdiv_0821.bin")
later_model_original = Word2Vec.load("word2vec_2010_2020_socdiv_0821.bin")
###Output
/home/kb7hp/.conda/envs/brandon_env/lib/python3.9/site-packages/gensim/similarities/__init__.py:15: UserWarning: The gensim.similarities.levenshtein submodule is disabled, because the optional Levenshtein package <https://pypi.org/project/python-Levenshtein/> is unavailable. Install Levenhstein (e.g. `pip install python-Levenshtein`) to suppress this warning.
warnings.warn(msg)
###Markdown
Normalizing Our Results
###Code
# http://www-personal.umich.edu/~tdszyman/misc/InsightSIGNLP16.pdf
# https://github.com/williamleif/histwords
# https://gist.github.com/zhicongchen/9e23d5c3f1e5b1293b16133485cd17d8 <<<<<<
# https://github.com/nikhgarg/EmbeddingDynamicStereotypes/blob/master/dataset_utilities/normalize_vectors.py
def intersection_align_gensim(m1, m2, words=None):
"""
Intersect two gensim word2vec models, m1 and m2.
Only the shared vocabulary between them is kept.
If 'words' is set (as list or set), then the vocabulary is intersected with this list as well.
Indices are re-organized from 0..N in order of descending frequency (=sum of counts from both m1 and m2).
These indices correspond to the new syn0 and syn0norm objects in both gensim models:
-- so that Row 0 of m1.syn0 will be for the same word as Row 0 of m2.syn0
-- you can find the index of any word on the .index2word list: model.index2word.index(word) => 2
The .vocab dictionary is also updated for each model, preserving the count but updating the index.
"""
# Get the vocab for each model
vocab_m1 = set(m1.wv.index_to_key)
vocab_m2 = set(m2.wv.index_to_key)
# Find the common vocabulary
common_vocab = vocab_m1 & vocab_m2
if words: common_vocab &= set(words)
# If no alignment necessary because vocab is identical...
if not vocab_m1 - common_vocab and not vocab_m2 - common_vocab:
return (m1,m2)
# Otherwise sort by frequency (summed for both)
common_vocab = list(common_vocab)
common_vocab.sort(key=lambda w: m1.wv.get_vecattr(w, "count") + m2.wv.get_vecattr(w, "count"), reverse=True)
# print(len(common_vocab))
# Then for each model...
for m in [m1, m2]:
# Replace old syn0norm array with new one (with common vocab)
indices = [m.wv.key_to_index[w] for w in common_vocab]
old_arr = m.wv.vectors
new_arr = np.array([old_arr[index] for index in indices])
m.wv.vectors = new_arr
# Replace old vocab dictionary with new one (with common vocab)
# and old index2word with new one
new_key_to_index = {}
new_index_to_key = []
for new_index, key in enumerate(common_vocab):
new_key_to_index[key] = new_index
new_index_to_key.append(key)
m.wv.key_to_index = new_key_to_index
m.wv.index_to_key = new_index_to_key
print(len(m.wv.key_to_index), len(m.wv.vectors))
return (m1,m2)
def smart_procrustes_align_gensim(base_embed, other_embed, words=None):
"""
Original script: https://gist.github.com/quadrismegistus/09a93e219a6ffc4f216fb85235535faf
Procrustes align two gensim word2vec models (to allow for comparison between same word across models).
Code ported from HistWords <https://github.com/williamleif/histwords> by William Hamilton <[email protected]>.
First, intersect the vocabularies (see `intersection_align_gensim` documentation).
Then do the alignment on the other_embed model.
Replace the other_embed model's syn0 and syn0norm numpy matrices with the aligned version.
Return other_embed.
If `words` is set, intersect the two models' vocabulary with the vocabulary in words (see `intersection_align_gensim` documentation).
"""
# patch by Richard So [https://twitter.com/richardjeanso) (thanks!) to update this code for new version of gensim
# base_embed.init_sims(replace=True)
# other_embed.init_sims(replace=True)
# make sure vocabulary and indices are aligned
in_base_embed, in_other_embed = intersection_align_gensim(base_embed, other_embed, words=words)
# get the (normalized) embedding matrices
base_vecs = in_base_embed.wv.get_normed_vectors()
other_vecs = in_other_embed.wv.get_normed_vectors()
# just a matrix dot product with numpy
m = other_vecs.T.dot(base_vecs)
# SVD method from numpy
u, _, v = np.linalg.svd(m)
# another matrix operation
ortho = u.dot(v)
# Replace original array with modified one, i.e. multiplying the embedding matrix by "ortho"
other_embed.wv.vectors = (other_embed.wv.vectors).dot(ortho)
return other_embed
later_model = smart_procrustes_align_gensim(earlier_model, later_model_original)
###Output
78100 78100
78100 78100
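A quick, hedged sanity check (not in the original notebook): after alignment the two models should share an identically ordered vocabulary, so any word of interest maps to the same row in both embedding matrices.
```
# Vocabularies should now be identical and identically ordered
assert earlier_model.wv.index_to_key == later_model.wv.index_to_key

# Spot-check one of the words used below
word = 'diversity'
print(earlier_model.wv.key_to_index[word], later_model.wv.key_to_index[word])
print(earlier_model.wv[word].shape, later_model.wv[word].shape)
```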
###Markdown
Analyzing Most Similar Words **What words are most similar to "racial," "ethnicity", and "diversity"?** As we can see below, "racial" and "ethnicity" is mostly similar to other racialized and/or gendered terms in both the 1990-2000 and 2010-20 periods. "Diversity", on the other hand, is most similar to heterogeneity, divergence and complexity in 1990-2000 and then richness, divergence and diversification in 2010-2020. Overall, this tells us a different version of the same story we saw when analyzing Hypothesis 1: "diversity" rarely refers to social diversity along racial or classed lines. Diversity is mostly used as a biological term. Even here, richness, along with evenness, are measure within Simpson's Index for measuring ecological biodiversity (e.g. [Stirling et al. 2001](https://www.journals.uchicago.edu/doi/abs/10.1086/321317?casa_token=Fb4sojZm9XgAAAAA:BV-t4e5f3SZ05gTJZRUydcQvHTYg47f1qRu51CixgF-b_HnGVXuPQFaqf_Lp88Tvy51Gnp7iw4yG)).
###Code
# average of earlier model
earlier_race = earlier_model.wv.most_similar(positive=['race', 'racial', 'racially'], topn=50)
earlier_race = pd.DataFrame(earlier_race).rename(columns={0: "term", 1: "score"})
earlier_race['year'] = '1990-2000'
earlier_race.reset_index(inplace=True)
earlier_race = earlier_race.rename(columns = {'index':'rank'})
# average of later model
later_race = later_model.wv.most_similar(positive=['race', 'racial', 'racially'], topn=50)
later_race = pd.DataFrame(later_race).rename(columns={0: "term", 1: "score"})
later_race['year'] = '2010-2020'
later_race.reset_index(inplace=True)
later_race = later_race.rename(columns = {'index':'rank'})
# merge the tables for comparison
top_race_vectors = pd.merge(earlier_race, later_race, on=["rank"])
top_race_vectors
# average of earlier model
earlier_ethnicity = earlier_model.wv.most_similar(positive=['ethnic', 'ethnicity', 'ethnically'], topn=50)
earlier_ethnicity = pd.DataFrame(earlier_ethnicity).rename(columns={0: "term", 1: "score"})
earlier_ethnicity['year'] = '1990-2000'
earlier_ethnicity.reset_index(inplace=True)
earlier_ethnicity = earlier_ethnicity.rename(columns = {'index':'rank'})
# average of later model
later_ethnicity = later_model.wv.most_similar(positive=['ethnic', 'ethnicity', 'ethnically'], topn=50)
later_ethnicity = pd.DataFrame(later_ethnicity).rename(columns={0: "term", 1: "score"})
later_ethnicity['year'] = '2010-2020'
later_ethnicity.reset_index(inplace=True)
later_ethnicity = later_ethnicity.rename(columns = {'index':'rank'})
# merge the tables for comparison
top_ethnicity_vectors = pd.merge(earlier_ethnicity, later_ethnicity, on=["rank"])
top_ethnicity_vectors
# average of earlier model
earlier_diversity = earlier_model.wv.most_similar(positive=['diverse', 'diversity'], topn=50)
earlier_diversity = pd.DataFrame(earlier_diversity).rename(columns={0: "term", 1: "score"})
earlier_diversity['year'] = '1990-2000'
earlier_diversity.reset_index(inplace=True)
earlier_diversity = earlier_diversity.rename(columns = {'index':'rank'})
# average of later model
later_diversity = later_model.wv.most_similar(positive=['diverse', 'diversity'], topn=50)
later_diversity = pd.DataFrame(later_diversity).rename(columns={0: "term", 1: "score"})
later_diversity['year'] = '2010-2020'
later_diversity.reset_index(inplace=True)
later_diversity = later_diversity.rename(columns = {'index':'rank'})
# merge the tables for comparison
top_diversity_vectors = pd.merge(earlier_diversity, later_diversity, on=["rank"])
top_diversity_vectors
###Output
_____no_output_____
###Markdown
Comparing Race and Diversity That makes it a little difficult to directly compare the terms, so let's use the `wv.similarity()` function to directly look at that. This basically allows you to directly compare the two words to see how close they are in the vector space. To make this process a little more efficient, we are going to make our own function named `w2v_similarities_over_time()` and then compare all the relevant terms. Following [Garg et al. (2018)](https://www.pnas.org/content/115/16/E3635.short), we also decided to average some of the terms in our dictionaries since it gets a little cumbersome trying to interpret the multiple outcomes of very similiary terms like diversity/diverse, race/racial, ethnic/ethnicity, etc.
###Code
def w2v_similarities_over_time(df, w2v_m1, w2v_m2):
'''
function compares several word2vec vectors from two different years
and then examines how those several comparisons change over time
----------------------------------------------------------------
1) first it takes a dictionary of words and creates its product
2) compares all of those words within the vector space of w2v_m1
3) compares all of those words within the vector space of w2v_m2
4) examines changes in the comparisons of w2v_m1 and w2v_m2 over time
'''
df = list(product(df['term'], df['term']))
df = pd.DataFrame(df, columns=['term1','term2'])
cos_sim_m1 = []
for index, row in df.iterrows():
cos_sim_m1.append(w2v_m1.wv.similarity(row[0],row[1]))
cos_sim_m1 = DataFrame(cos_sim_m1, columns=['cos_sim_m1'])
df = df.merge(cos_sim_m1, left_index=True, right_index=True)
cos_sim_m2 = []
for index, row in df.iterrows():
cos_sim_m2.append(w2v_m2.wv.similarity(row[0],row[1]))
cos_sim_m2 = DataFrame(cos_sim_m2, columns=['cos_sim_m2'])
df = df.merge(cos_sim_m2, left_index=True, right_index=True)
df["cos_sim_diffs"] = df["cos_sim_m1"] - df["cos_sim_m2"]
df_matrix = df.pivot("term1", "term2", "cos_sim_diffs")
return df_matrix
###Output
_____no_output_____
###Markdown
Let's pull in our dictionaries but filter to only the race and diversity entries:
###Code
race_diversity_early = earlier_model.wv.similarity('race','diversity')
race_diversity_later = later_model.wv.similarity('race','diversity')
racial_diversity_early = earlier_model.wv.similarity('racial','diversity')
racial_diversity_later = later_model.wv.similarity('racial','diversity')
ethnic_diversity_early = earlier_model.wv.similarity('ethnic','diversity')
ethnic_diversity_later = later_model.wv.similarity('ethnic','diversity')
ethnicity_diversity_early = earlier_model.wv.similarity('ethnicity','diversity')
ethnicity_diversity_later = later_model.wv.similarity('ethnicity','diversity')
black_div_early = earlier_model.wv.similarity('black','diversity')
black_div_later = later_model.wv.similarity('black','diversity')
afam_div_early = earlier_model.wv.similarity('africanamerican','diversity')
afam_div_later = later_model.wv.similarity('africanamerican','diversity')
white_div_early = earlier_model.wv.similarity('white','diversity')
white_div_later = later_model.wv.similarity('white','diversity')
caucasian_div_early = earlier_model.wv.similarity('caucasian','diversity')
caucasian_div_later = later_model.wv.similarity('caucasian','diversity')
hisp_div_early = earlier_model.wv.similarity('hispanic','diversity')
hisp_div_later = later_model.wv.similarity('hispanic','diversity')
asian_div_early = earlier_model.wv.similarity('asian','diversity')
asian_div_later = later_model.wv.similarity('asian','diversity')
latino_div_early = earlier_model.wv.similarity('latino','diversity')
latino_div_later = later_model.wv.similarity('latino','diversity')
native_div_early = earlier_model.wv.similarity('native','diversity')
native_div_later = later_model.wv.similarity('native','diversity')
print('Overall Comparisons of Racial and Diversity Terms:')
print('Race and diversity: 1990-2000 score:', race_diversity_early, ' 2010-2020 score:', race_diversity_later, ' Difference is:', race_diversity_later-race_diversity_early )
print('Racial and diversity: 1990-2000 score:', racial_diversity_early, ' 2010-2020 score:', racial_diversity_later, ' Difference is:', racial_diversity_later-racial_diversity_early)
print('Ethnic and diversity: 1990-2000 score:', ethnic_diversity_early, ' 2010-2020 score:', ethnic_diversity_later, ' Difference is:', ethnic_diversity_later-ethnic_diversity_early)
print('Ethnicity and diversity: 1990-2000 score:', ethnicity_diversity_early, ' 2010-2020 score:', ethnicity_diversity_later, ' Difference is:', ethnicity_diversity_later-ethnicity_diversity_early)
print('Black and diversity: 1990-2000 score:', black_div_early, ' 2010-2020 score:', black_div_later, ' Difference is:', black_div_later-black_div_early)
print('African American and diversity: 1990-2000 score:', afam_div_early, ' 2010-2020 score:', afam_div_later, ' Difference is:', afam_div_later-afam_div_early)
print('White and diversity: 1990-2000 score:', white_div_early, ' 2010-2020 score:', white_div_later, ' Difference is:', white_div_later-white_div_early)
print('Caucasian and diversity: 1990-2000 score:', caucasian_div_early, ' 2010-2020 score:', caucasian_div_later, ' Difference is:', caucasian_div_later-caucasian_div_early)
print('Hispanic and diversity: 1990-2000 score:', hisp_div_early, ' 2010-2020 score:', hisp_div_later, ' Difference is:', hisp_div_later-hisp_div_early)
print('Latino and diversity: 1990-2000 score:', latino_div_early, ' 2010-2020 score:', latino_div_later, ' Difference is:', latino_div_later-latino_div_early)
print('Asian and diversity: 1990-2000 score:', asian_div_early, ' 2010-2020 score:', asian_div_later, ' Difference is:', asian_div_later-asian_div_early)
print('Native and diversity: 1990-2000 score:', native_div_early, ' 2010-2020 score:', native_div_later, ' Difference is:', native_div_later-native_div_early)  # difference computed as later minus earlier, consistent with the rows above
###Output
Overall Comparisons of Racial and Diversity Terms:
Race and diversity: 1990-2000 score: 0.06736891 2010-2020 score: 0.03890413 Difference is: -0.02846478
Racial and diversity: 1990-2000 score: 0.23084037 2010-2020 score: 0.1350901 Difference is: -0.09575027
Ethnic and diversity: 1990-2000 score: 0.2257958 2010-2020 score: 0.17736048 Difference is: -0.04843533
Ethnicity and diversity: 1990-2000 score: 0.094476104 2010-2020 score: 0.074991435 Difference is: -0.019484669
Black and diversity: 1990-1995 score: 0.030569227 2015-2020 score: 0.05853089 Difference is: 0.027961662
African American and diversity: 1990-1995 score: 0.054992154 2015-2020 score: 0.043294504 Difference is: -0.01169765
White and diversity: 1990-1995 score: -0.0026508104 2015-2020 score: 0.028805489 Difference is: 0.0314563
Caucasian and diversity: 1990-1995 score: 0.15436253 2015-2020 score: 0.028390814 Difference is: -0.12597172
Hispanic and diversity: 1990-1995 score: 0.09954857 2015-2020 score: 0.07099947 Difference is: -0.028549097
Latino and diversity: 1990-1995 score: 0.088089146 2015-2020 score: 0.09452546 Difference is: 0.0064363107
Asian and diversity: 1990-1995 score: 0.18365769 2015-2020 score: 0.10658541 Difference is: -0.07707228
Native and diversity: 1990-1995 score: 0.113705866 2015-2020 score: 0.14128739 Difference is: -0.02758152
###Markdown
To interpret these scores, we have to know that a value of 1 means that two words have a perfect relationship, 0 means the two words have no relationship, and -1 means that they are perfect opposites ([Stack Overflow 2017](https://stackoverflow.com/questions/42381902/interpreting-negative-word2vec-similarity-from-gensim), [Google Groups 2019](https://groups.google.com/g/gensim/c/SZ1yct-7CuU)). Thus, when we compare all of the race, racial, ethnic and ethnicity vectors to diverse and diversity, we actually see that they are becoming *less* similar over time. Thus, despite our earlier hypotheses indicating that diversity is rising while racial terms decline, it does not seem to be the case that the two are being used in similar ways over time. It is worth noting that a number of things could complicate this interpretation, including the polysemy of diversity. Next, we will create a plot for this. We have to keep this grey scale, because sociologists are still living in the late 1900s. Before moving on to plots of these vectors, let's take a look at specific racial terms and see how they compare to diversity.
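Before the plot, a quick hedged aside on what these scores are computationally: `wv.similarity()` returns the cosine of the angle between the two word vectors, which is why the values are bounded between -1 and 1. The helper below (`manual_cosine`, introduced here only for illustration) should reproduce gensim's number up to floating-point error.
```
import numpy as np

def manual_cosine(model, word1, word2):
    # cosine similarity = dot product divided by the product of the vector norms
    v1, v2 = model.wv[word1], model.wv[word2]
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

print(manual_cosine(later_model, 'racial', 'diversity'))
print(later_model.wv.similarity('racial', 'diversity'))
```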
###Code
plt.figure(figsize=(6, 4))
sns.set_style("white")
d = {'group': [
'Asian', 'Latino', 'Hispanic', 'Caucasian', 'White',
'Black', 'African American', 'Ethnicity', 'Ethnic', 'Racial', 'Race'
],
'1990-2000': [
asian_div_early, latino_div_early, hisp_div_early, caucasian_div_early, white_div_early,
black_div_early, afam_div_early, ethnicity_diversity_early, ethnic_diversity_early, racial_diversity_early, race_diversity_early
],
'2010-2020': [
asian_div_later, latino_div_later, hisp_div_later, caucasian_div_later, white_div_later,
black_div_later, afam_div_later, ethnicity_diversity_later, ethnic_diversity_later, racial_diversity_later, race_diversity_later
]}
df = pd.DataFrame(data=d)
ordered_df = df
my_range=range(1,len(df.index)+1)
plt.hlines(y=my_range, xmin=ordered_df['1990-2000'], xmax=ordered_df['2010-2020'], color='lightgrey', alpha=0.4)
plt.scatter(ordered_df['1990-2000'], my_range, color='red', alpha=1, label='1990-2000')
plt.scatter(ordered_df['2010-2020'], my_range, color='skyblue', alpha=0.8 , label='2010-2020')
#plt.scatter(ordered_df['1990-2000'], my_range, color='black', alpha=1, label='1990-2000')
#plt.scatter(ordered_df['2010-2020'], my_range, color='dimgrey', alpha=0.4 , label='2010-2020')
plt.legend()
# Add title and axis names
plt.yticks(my_range, ordered_df['group'])
plt.title("Figure 4. Comparison of Racial/Ethnic Word Vectors Relative \n to the Diversity Vector for 1990-2000 and 2010-2020 Word2Vec Models", loc='center')
plt.xlabel('Cosine Similarity Scores')
#plt.ylabel('All Terms Compared to Diversity')
top_race_vectors.to_csv('/sfs/qumulo/qhome/kb7hp/git/diversity/data/final_data/race_vectors_alldiv_0921.csv')
top_ethnicity_vectors.to_csv('/sfs/qumulo/qhome/kb7hp/git/diversity/data/final_data/ethnicity_vectors_alldiv_0921.csv')
top_diversity_vectors.to_csv('/sfs/qumulo/qhome/kb7hp/git/diversity/data/final_data/diversity_vectors_alldiv_0921.csv')
ordered_df.to_csv('/sfs/qumulo/qhome/kb7hp/git/diversity/data/final_data/select_wv_comps_alldiv_0921.csv')
###Output
_____no_output_____
###Markdown
Overall, this approach gave us some mixed results. Asian and diversity/diverse become significantly more dissimilar while white and diversity/diverse become more similar. Once could argue that this supports Berrey's argument about diversity being used to reinforce whiteness, but it also might just be diverse/diversity being more to describe variation in neuroscience where a common term is 'white matter.' In the end, it might just be the case that the Word2Vec model's inability to deal with polysemy does not help us answer our research question. Before concluding that, let's look at visual plots of our vectors. Singular Value Decomposition In order to do that, we have to reduce the 512 dimension model into just 2 dimensions using the `TSNE` package. We will do this for both models, which will take around 30 minutes to run. Scroll down to see the results...
###Code
%%capture
earlier_vocab = list(earlier_model.wv.index_to_key)  # gensim 4.x API: the vocabulary lives on the KeyedVectors object
earlier_x = earlier_model.wv[earlier_vocab]
earlier_tsne = TSNE(n_components=2)
earlier_tsne_x = earlier_tsne.fit_transform(earlier_x)
df_earlier = pd.DataFrame(earlier_tsne_x, index=earlier_vocab, columns=['x', 'y'])
later_vocab = list(later_model.wv.index_to_key)
later_x = later_model.wv[later_vocab]
later_tsne = TSNE(n_components=2)
later_tsne_x = later_tsne.fit_transform(later_x)
df_later = pd.DataFrame(later_tsne_x, index=later_vocab, columns=['x', 'y'])
keys = ['race', 'racial', 'ethnic', 'ethnicity', 'diverse', 'diversity']
earlier_embedding_clusters = []
earlier_word_clusters = []
for word in keys:
earlier_embeddings = []
earlier_words = []
for similar_word, _ in earlier_model.wv.most_similar(word, topn=30):
earlier_words.append(similar_word)
earlier_embeddings.append(earlier_model.wv[similar_word])
earlier_embedding_clusters.append(earlier_embeddings)
earlier_word_clusters.append(earlier_words)
earlier_embedding_clusters = np.array(earlier_embedding_clusters)
n, m, k = earlier_embedding_clusters.shape
e_tsne_model_en_2d = TSNE(perplexity=15, n_components=2, init='pca', n_iter=3500, random_state=32)
e_embeddings_en_2d = np.array(e_tsne_model_en_2d.fit_transform(earlier_embedding_clusters.reshape(n * m, k))).reshape(n, m, 2)
later_embedding_clusters = []
later_word_clusters = []
for word in keys:
later_embeddings = []
later_words = []
for similar_word, _ in later_model.wv.most_similar(word, topn=30):
later_words.append(similar_word)
later_embeddings.append(later_model.wv[similar_word])
later_embedding_clusters.append(later_embeddings)
later_word_clusters.append(later_words)
later_embedding_clusters = np.array(later_embedding_clusters)
n, m, k = later_embedding_clusters.shape
l_tsne_model_en_2d = TSNE(perplexity=15, n_components=2, init='pca', n_iter=3500, random_state=32)
l_embeddings_en_2d = np.array(l_tsne_model_en_2d.fit_transform(later_embedding_clusters.reshape(n * m, k))).reshape(n, m, 2)
###Output
_____no_output_____
###Markdown
Plotting the Results of the Word2Vec Models (1990-2000 vs 2010-2020)
###Code
def tsne_plot_similar_words(title, labels, earlier_embedding_clusters, earlier_word_clusters, a, filename=None):
plt.figure(figsize=(16, 9))
colors = cm.rainbow(np.linspace(0, 1, len(labels)))
for label, earlier_embeddings, earlier_words, color in zip(labels, earlier_embedding_clusters, earlier_word_clusters, colors):
x = earlier_embeddings[:, 0]
y = earlier_embeddings[:, 1]
plt.scatter(x, y, c=color, alpha=a, label=label)
for i, word in enumerate(earlier_words):
plt.annotate(word, alpha=0.5, xy=(x[i], y[i]), xytext=(5, 2),
textcoords='offset points', ha='right', va='bottom', size=10)
plt.legend(loc=4)
plt.title(title)
plt.grid(True)
if filename:
plt.savefig(filename, format='png', dpi=150, bbox_inches='tight')
plt.show()
early_plot = tsne_plot_similar_words('Comparing the Use of Race, Ethnicity and Diversity (Word2Vec Model of 1990-2000 PubMed Data)',
keys, e_embeddings_en_2d, earlier_word_clusters, 0.7, 'earlier_comparison.png')
early_plot
###Output
_____no_output_____
###Markdown
Now we can look at how the words in each of the race, racial, ethnic, ethnicity, diversity and diverse vectors cluster. When we start to look at the specific terms of interest, we find 'racial' and 'ethnic' at the far left or toward the bottom-center. Other variants of these terms are more centered in the plot. On the other hand, diversity and diverse are both clustered toward the top-right, which means that race and diversity are fairly far apart in the vector space.
###Code
def tsne_plot_similar_words(title, labels, later_embedding_clusters, later_word_clusters, a, filename=None):
plt.figure(figsize=(16, 9))
colors = cm.rainbow(np.linspace(0, 1, len(labels)))
for label, later_embeddings, later_words, color in zip(labels, later_embedding_clusters, later_word_clusters, colors):
x = later_embeddings[:, 0]
y = later_embeddings[:, 1]
plt.scatter(x, y, c=color, alpha=a, label=label)
for i, word in enumerate(later_words):
plt.annotate(word, alpha=0.5, xy=(x[i], y[i]), xytext=(5, 2),
textcoords='offset points', ha='right', va='bottom', size=10)
plt.legend(loc=4)
plt.title(title)
plt.grid(True)
if filename:
plt.savefig(filename, format='png', dpi=150, bbox_inches='tight')
plt.show()
later_plot = tsne_plot_similar_words('Comparing the Use of Race, Ethnicity and Diversity (Word2Vec Model of 2010-2020 PubMed Data)',
keys, l_embeddings_en_2d, later_word_clusters,
0.7, 'later_comparison.png')
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/word_embeddings/")
plt.savefig('later_comparison.png')
later_plot
###Output
_____no_output_____
###Markdown
When we look at the same vectors in the 2010-20 model, it seems like the vectors are more closely related overall. However, when we look closer we see that 'race' and 'ethnicity' are up in the top-left corner while 'racial' and 'ethnic' are in the top-right corner. Both sets are still fairly separated from the red and orange diversity vectors. Although these plots do not show this as clearly as one might want, our analyses above do suggest that diverse and diversity as well as race, racial, ethnic, and ethnicity are being used more dissimilarly over time. The challenging thing about this analysis is disentangling the polysemy from how diversity is used. If we were able to 'disentangle' the use of diversity in its more general sense compared to its usage in the context of equity, inclusion and justice discussions, would we find that the two words are becoming more similar over time? Does diversity replace race/ethnicity?: Contextualizing Word Vectors with Heat Maps. After consulting some colleagues, we thought about two potential ways to test this. The first would be to turn to BERT or ELMo ([Fonteyn 2019](https://laurenthelinguist.files.wordpress.com/2019/08/sle_2019_bert.pdf); [Rakhmanberdieva 2019](https://towardsdatascience.com/word-representation-in-natural-language-processing-part-iii-2e69346007f)), which would allow us to identify the contextual variations in how diversity is used. The problem is that BERT, for example, is trained on Wikipedia data that is not historical. There are BERT options like PubMedBERT and BioBERT, but they are trained on the entirety of the PubMed abstracts, which fails to help us identify historical variations in how the terms change. Moreover, it would not make much sense to fine-tune a BERT model on the same data on which it was already trained. Thus, we ruled out BERT as an option. Instead, we decided to continue using Word2Vec and compare the diversity vector to a myriad of other vectors that we measured in H1. Our logic was that if we see diversity become more semantically similar to other diversity-related vectors over time, while also moving further away from the racial/ethnic vectors, we could infer that diversity is actually replacing race/ethnicity in biomedical abstracts over time. To do this, I developed a function named `w2v_similarities_over_time()` that calculates the difference between all the words within a dictionary of terms and then compares how they have changed relative to one another over time. Specifically, I will be comparing how diverse and diversity change relative to the terms in our race/ethnicity, sex/gender, sexuality, social class, and cultural/equity categories from [Hypotheses 1 and 2](https://growthofdiversity.netlify.app/methods/). Then, I will visualize the results of these model comparisons using some heat maps. First step is importing our H1 library so we can pluck out all of the vectors for a heat map in a relatively automated manner.
###Code
# load dictionary of words
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/dictionaries/")
h1_dictionary = pd.read_csv("diversity_project - tree_data.csv")
h1_dictionary = h1_dictionary[h1_dictionary['viz_embeddings'] == 1].drop(['hypothesis', 'viz_embeddings', 'mean_embeddings'], axis=1)
h1_dictionary = h1_dictionary.replace({'category': {'asian|black|hispanic_latinx|white': 'race_ethnicity'}}, regex=True)
h1_dictionary = h1_dictionary.replace({'category': {'sex_gender|sexuality': 'gender_sexuality'}}, regex=True)
h1_dictionary = h1_dictionary.replace({'category': {'cultural|equity': 'cultural_equity'}}, regex=True)
h1_dictionary = h1_dictionary.replace({'category': {'minority': 'social_class'}}, regex=True).sort_values(by=['category', 'term'])
h1_dictionary = h1_dictionary.replace({'term': {'under_served': 'underserved'}}, regex=True).sort_values(by=['category', 'term'])
h1_dictionary = h1_dictionary.replace({'term': {'african_american': 'africanamerican'}}, regex=True).sort_values(by=['category', 'term'])
# manual deletion after chatting with catherine
h1_dictionary = h1_dictionary[~h1_dictionary['term'].isin(['diverse', 'negro', 'ethnic', 'racist', 'racial',
'homosexual', 'men', 'women', 'inequality', 'equality'])]
h1_dictionary
###Output
_____no_output_____
###Markdown
Next, we will define our `w2v_similarities_over_time()` function.
###Code
def w2v_similarities_over_time(df, w2v_m1, w2v_m2):
'''
function compares several word2vec vectors from two different years
and then examines how those several comparisons change over time
----------------------------------------------------------------
1) first it takes a dictionary of words and creates its product
2) compares all of those words within the vector space of w2v_m1
3) compares all of those words within the vector space of w2v_m2
4) examines changes in the comparisons of w2v_m1 and w2v_m2 over time
'''
df = list(product(df['term'], df['term']))
df = pd.DataFrame(df, columns=['term1','term2'])
cos_sim_m1 = []
for index, row in df.iterrows():
cos_sim_m1.append(w2v_m1.wv.similarity(row[0],row[1]))
cos_sim_m1 = DataFrame(cos_sim_m1, columns=['cos_sim_m1'])
df = df.merge(cos_sim_m1, left_index=True, right_index=True)
cos_sim_m2 = []
for index, row in df.iterrows():
cos_sim_m2.append(w2v_m2.wv.similarity(row[0],row[1]))
cos_sim_m2 = DataFrame(cos_sim_m2, columns=['cos_sim_m2'])
df = df.merge(cos_sim_m2, left_index=True, right_index=True)
df["cos_sim_diffs"] = df["cos_sim_m2"] - df["cos_sim_m1"]
df_matrix = df.pivot("term1", "term2", "cos_sim_diffs")
return df_matrix
###Output
_____no_output_____
###Markdown
And check to make sure we get the same results as above...
###Code
race_ethnicity = h1_dictionary[(h1_dictionary['category'] == 'race_ethnicity') |
(h1_dictionary['category'] == 'diversity')]
race_ethnicity_matrix = w2v_similarities_over_time(race_ethnicity, earlier_model, later_model)
race_ethnicity_matrix
###Output
_____no_output_____
###Markdown
These do look similar to the basic plot we created above. So now we will create each of our four heat maps and combine them into a joint figure for publication (again in grey scale for the sociologists)...
###Code
sns.set(rc={'figure.figsize':(8,5)})
sns.set_style("whitegrid")
cultural_equity = h1_dictionary[(h1_dictionary['category'] == 'cultural_equity') |
(h1_dictionary['category'] == 'diversity')]
cultural_equity = cultural_equity[~cultural_equity['term'].isin(['interlinguistic', 'oppressive', 'religion', 'religiosity'])]
cultural_equity_matrix = w2v_similarities_over_time(cultural_equity, earlier_model, later_model)
cmap = sns.diverging_palette(20, 230, as_cmap=True)
#cmap = sns.cubehelix_palette(200, hue=0.05, rot=0, light=0, dark=0.9)
#corr_cultural = cultural_equity_matrix.corr()
#mask_cultural = np.triu(np.ones_like(corr_cultural, dtype=bool))
cultural_equity_heatmap = sns.heatmap(cultural_equity_matrix, center=0, #mask=mask_cultural,
cmap=cmap)
race_ethnicity = h1_dictionary[(h1_dictionary['category'] == 'race_ethnicity') |
(h1_dictionary['category'] == 'diversity')]
race_ethnicity_matrix = w2v_similarities_over_time(race_ethnicity, earlier_model, later_model)
#corr_race = race_ethnicity_matrix.corr()
#mask_race = np.triu(np.ones_like(corr_race, dtype=bool))
race_ethnicity_heatmap = sns.heatmap(race_ethnicity_matrix, #mask=mask_race,
center=0, cmap=cmap).set_title("Figure 4B. Racial and Ethnic Vectors")
gender_sexuality = h1_dictionary[(h1_dictionary['category'] == 'gender_sexuality') |
(h1_dictionary['category'] == 'diversity')]
gender_sexuality_matrix = w2v_similarities_over_time(gender_sexuality, earlier_model, later_model)
#corr_gender = gender_sexuality_matrix.corr()
#mask_gender = np.triu(np.ones_like(corr_gender, dtype=bool))
gender_sexuality_heatmap = sns.heatmap(gender_sexuality_matrix, center=0, #mask=mask_gender,
cmap=cmap).set_title("Figure 4C. Gender and Sexuality Vectors")
social_class = h1_dictionary[(h1_dictionary['category'] == 'social_class') |
(h1_dictionary['category'] == 'diversity')]
social_class_matrix = w2v_similarities_over_time(social_class, earlier_model, later_model)
#corr_class = social_class_matrix.corr()
#mask_class = np.triu(np.ones_like(corr_class, dtype=bool))
social_class_heatmap = sns.heatmap(social_class_matrix, center=0, #mask=mask_class,
cmap=cmap).set_title("Figure 4D. Socio-Economic Vectors")
sns.set(rc={'figure.figsize':(14,9)})
sns.set_style("whitegrid")
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, constrained_layout=False)
# race/ethnicity
race_ethnicity_g = sns.heatmap(race_ethnicity_matrix, center=0, cmap=cmap, ax=ax1)
race_ethnicity_labels = race_ethnicity['term'].sort_values().tolist()
N = len(race_ethnicity_labels)
race_label = 'diversity'
race_index = race_ethnicity_labels.index(race_label)
x, y, w, h = 0, race_index, N, 1
for _ in range(2):
race_ethnicity_g.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor='black', lw=2, clip_on=False))
x, y, w, h = y, x, h, w
race_ethnicity_g.set_title("Figure 5A. Racial and Ethnic Vectors")
race_ethnicity_g.set(xlabel=None)
race_ethnicity_g.set(ylabel=None)
race_ethnicity_g.set_xticklabels(race_ethnicity_g.get_xticklabels(), rotation=40, horizontalalignment='right')
## cultural
cultural_equity_g = sns.heatmap(cultural_equity_matrix, center=0, cmap=cmap, ax=ax2)
cultural_labels = cultural_equity['term'].sort_values().tolist()
N = len(cultural_labels)
cultural_label = 'diversity'
cultural_index = cultural_labels.index(cultural_label)
x, y, w, h = 0, cultural_index, N, 1
for _ in range(2):
cultural_equity_g.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor='black', lw=2, clip_on=False))
x, y, w, h = y, x, h, w
cultural_equity_g.set_title("Figure 5B. Cultural and Equity/Justice Vectors")
cultural_equity_g.set(xlabel=None)
cultural_equity_g.set(ylabel=None)
cultural_equity_g.set_xticklabels(cultural_equity_g.get_xticklabels(), rotation=40, horizontalalignment='right')
# socio-economic
social_class_g = sns.heatmap(social_class_matrix, center=0, cmap=cmap, ax=ax3)
ses_labels = social_class['term'].sort_values().tolist()
N = len(ses_labels)
ses_label = 'diversity'
ses_index = ses_labels.index(ses_label)
x, y, w, h = 0, ses_index, N, 1
for _ in range(2):
social_class_g.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor='black', lw=2, clip_on=False))
x, y, w, h = y, x, h, w
social_class_g.set_title("Figure 5C. Socio-Economic Vectors")
social_class_g.set(xlabel=None)
social_class_g.set(ylabel=None)
social_class_g.set_xticklabels(social_class_g.get_xticklabels(), rotation=40, horizontalalignment='right')
# sex/gender
gender_sexuality_g = sns.heatmap(gender_sexuality_matrix, center=0, cmap=cmap, ax=ax4)
gender_labels = gender_sexuality['term'].sort_values().tolist()
N = len(gender_labels)
gender_label = 'diversity'
gender_index = gender_labels.index(gender_label)
x, y, w, h = 0, gender_index, N, 1
for _ in range(2):
gender_sexuality_g.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor='black', lw=2, clip_on=False))
x, y, w, h = y, x, h, w
gender_sexuality_g.set_title("Figure 5D. Gender and Sexuality Vectors")
gender_sexuality_g.set(xlabel=None)
gender_sexuality_g.set(ylabel=None)
gender_sexuality_g.set_xticklabels(gender_sexuality_g.get_xticklabels(), rotation=40, horizontalalignment='right')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=0.5, hspace=0.5)
###Output
_____no_output_____
###Markdown
The final product provides some interesting results. It sure looks like diversity is generally more similar to most of the vectors apart from the race/ethnicity vectors, which could suggest that diversity is replacing race/ethnicity in the context of articles that are examining other historically underrepresented populations in biomedical research.
###Code
# load dictionary of words
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/dictionaries/")
national_terms = pd.read_csv("diversity_project - national_embeddings.csv")
national_terms.head()
sns.set(rc={'figure.figsize':(8,5)})
sns.set_style("whitegrid")
national_asian = national_terms[(national_terms['category'] == 'asian') |
(national_terms['category'] == 'diversity')]
national_asian_matrix = w2v_similarities_over_time(national_asian, earlier_model, later_model)
cmap = sns.diverging_palette(20, 230, as_cmap=True)
national_asian_heatmap = sns.heatmap(national_asian_matrix, center=0, cmap=cmap)
sns.set(rc={'figure.figsize':(8,5)})
sns.set_style("whitegrid")
national_europe = national_terms[(national_terms['category'] == 'europe') |
(national_terms['category'] == 'diversity')]
national_europe_matrix = w2v_similarities_over_time(national_europe, earlier_model, later_model)
cmap = sns.diverging_palette(20, 230, as_cmap=True)
national_europe_heatmap = sns.heatmap(national_europe_matrix, center=0, cmap=cmap)
sns.set(rc={'figure.figsize':(8,5)})
sns.set_style("whitegrid")
national_americas = national_terms[(national_terms['category'] == 'americas') |
(national_terms['category'] == 'diversity')]
national_americas_matrix = w2v_similarities_over_time(national_americas, earlier_model, later_model)
cmap = sns.diverging_palette(20, 230, as_cmap=True)
national_americas_heatmap = sns.heatmap(national_americas_matrix, center=0, cmap=cmap)
sns.set(rc={'figure.figsize':(8,5)})
sns.set_style("whitegrid")
national_africa = national_terms[(national_terms['category'] == 'africa') |
(national_terms['category'] == 'diversity')]
national_africa_matrix = w2v_similarities_over_time(national_africa, earlier_model, later_model)
cmap = sns.diverging_palette(20, 230, as_cmap=True)
national_africa_heatmap = sns.heatmap(national_africa_matrix, center=0, cmap=cmap)
sns.set(rc={'figure.figsize':(14,9)})
sns.set_style("whitegrid")
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, constrained_layout=False)
## cultural
national_asian_g = sns.heatmap(national_asian_matrix, center=0, cmap=cmap, ax=ax1)
national_asian_labels = national_asian['term'].sort_values().tolist()
N = len(national_asian_labels)
national_asian_label = 'diversity'
national_asian_index = national_asian_labels.index(national_asian_label)
x, y, w, h = 0, national_asian_index, N, 1
for _ in range(2):
national_asian_g.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor='black', lw=2, clip_on=False))
x, y, w, h = y, x, h, w
national_asian_g.set_title("Figure 6A. Top-10 Asian Vectors")
national_asian_g.set(xlabel=None)
national_asian_g.set(ylabel=None)
national_asian_g.set_xticklabels(national_asian_g.get_xticklabels(), rotation=40, horizontalalignment='right')
# race/ethnicity
national_europe_g = sns.heatmap(national_europe_matrix, center=0, cmap=cmap, ax=ax2)
national_europe_labels = national_europe['term'].sort_values().tolist()
N = len(national_europe_labels)
national_europe_label = 'diversity'
national_europe_index = national_europe_labels.index(national_europe_label)
x, y, w, h = 0, national_europe_index, N, 1
for _ in range(2):
national_europe_g.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor='black', lw=2, clip_on=False))
x, y, w, h = y, x, h, w
national_europe_g.set_title("Figure 6B. Top-10 European Vectors")
national_europe_g.set(xlabel=None)
national_europe_g.set(ylabel=None)
national_europe_g.set_xticklabels(national_europe_g.get_xticklabels(), rotation=40, horizontalalignment='right')
# americas national vectors
national_americas_g = sns.heatmap(national_americas_matrix, center=0, cmap=cmap, ax=ax3)
national_americas_labels = national_americas['term'].sort_values().tolist()
N = len(national_americas_labels)
national_americas_label = 'diversity'
national_americas_index = national_americas_labels.index(national_americas_label)
x, y, w, h = 0, national_americas_index, N, 1
for _ in range(2):
national_americas_g.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor='black', lw=2, clip_on=False))
x, y, w, h = y, x, h, w
national_americas_g.set_title("Figure 6C. Top-10 American Vectors")
national_americas_g.set(xlabel=None)
national_americas_g.set(ylabel=None)
national_americas_g.set_xticklabels(national_americas_g.get_xticklabels(), rotation=40, horizontalalignment='right')
# african national vectors
national_africa_g = sns.heatmap(national_africa_matrix, center=0, cmap=cmap, ax=ax4)
national_africa_labels = national_africa['term'].sort_values().tolist()
N = len(national_africa_labels)
national_africa_label = 'diversity'
national_africa_index = national_africa_labels.index(national_africa_label)
x, y, w, h = 0, national_africa_index, N, 1
for _ in range(2):
national_africa_g.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor='black', lw=2, clip_on=False))
x, y, w, h = y, x, h, w
national_africa_g.set_title("Figure 6D. Top-10 African Vectors")
national_africa_g.set(xlabel=None)
national_africa_g.set(ylabel=None)
national_africa_g.set_xticklabels(national_africa_g.get_xticklabels(), rotation=40, horizontalalignment='right')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=0.5, hspace=0.5)
###Output
_____no_output_____
###Markdown
Mean Differences
Let's define a function that compares the mean differences for all terms within each category so that we can make better sense of the visualizations.
###Code
def w2v_sim_mean_comps(df, w2v_m1, w2v_m2):
    """For every pair of terms in `df`, compute cosine similarity in each of the
    two word2vec models and the change between them (model 2 minus model 1)."""
    df = list(product(df['term'], df['term']))
df = pd.DataFrame(df, columns=['term1','term2'])
cos_sim_m1 = []
for index, row in df.iterrows():
cos_sim_m1.append(w2v_m1.wv.similarity(row[0],row[1]))
cos_sim_m1 = DataFrame(cos_sim_m1, columns=['cos_sim_m1'])
df = df.merge(cos_sim_m1, left_index=True, right_index=True)
cos_sim_m2 = []
for index, row in df.iterrows():
cos_sim_m2.append(w2v_m2.wv.similarity(row[0],row[1]))
cos_sim_m2 = DataFrame(cos_sim_m2, columns=['cos_sim_m2'])
df = df.merge(cos_sim_m2, left_index=True, right_index=True)
df["cos_sim_diffs"] = df["cos_sim_m2"] - df["cos_sim_m1"]
return df
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/dictionaries/")
h1_allterms = pd.read_csv("diversity_project - tree_data.csv")
h1_allterms = h1_allterms[h1_allterms['term'].isin(list(earlier_model.wv.key_to_index))]
h1_allterms['term'] = h1_allterms['term'].str.replace('_', '', regex=True)
#h1_allterms = h1_allterms[~h1_allterms['term'].isin(['intersexual'])]
list_of_terms = ['cultural', 'disability', 'equity', 'lifecourse', 'migration',
'minority', 'race_ethnicity', 'sex_gender', 'sexuality', 'social_class']
aggregated_means = pd.DataFrame(columns = ['term1', 'term2', 'cos_sim_m1', 'cos_sim_m2', 'cos_sim_diffs', 'group'])
for term in list_of_terms:
tmp_dictionary = h1_allterms[(h1_allterms['category'] == term) | (h1_allterms['term'] == 'diversity')]
comp_outcomes = w2v_sim_mean_comps(tmp_dictionary, earlier_model, later_model)
comp_outcomes = comp_outcomes[(comp_outcomes['term1'] == 'diversity') & (comp_outcomes['term2'] != 'diversity')]
comp_outcomes['group'] = [term] * len(comp_outcomes)
aggregated_means = pd.concat([aggregated_means, comp_outcomes])
aggregated_means = aggregated_means.groupby(by=["group"]).mean().round(3)
aggregated_means
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/dictionaries/")
h3_dictionary = pd.read_csv("diversity_project - h3_dictionary.csv")
h3_dictionary = h3_dictionary[h3_dictionary['term'].isin(list(earlier_model.wv.key_to_index))]
h3_dictionary = h3_dictionary.drop(['str_type','regional','subclass','source','date_added'], axis=1)
#h3_dictionary = h3_dictionary[h3_dictionary['category'] != 'subnational'] # need to update later
#h3_dictionary = h3_dictionary[h3_dictionary['category'] != 'subcontinental'] # need to update later
category_analysis = h3_dictionary.drop(['continental'], axis=1)
#category_analysis = category_analysis[category_analysis['mean_embeddings'] == 1]
#category_analysis = category_analysis.drop(['mean_embeddings'], axis=1)
#category_analysis = category_analysis[~category_analysis['term'].str.contains("s$")]
category_analysis = category_analysis[~category_analysis['term'].str.contains("_")]
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/dictionaries/")
h3_div_subset = pd.read_csv("diversity_project - tree_data.csv")
h3_div_subset = h3_div_subset.drop(['viz_embeddings','mean_embeddings','hypothesis'], axis=1)
h3_div_subset = h3_div_subset[h3_div_subset['category'] == 'diversity']
h3_div_subset = h3_div_subset[['term', 'category']]
category_analysis = pd.concat([category_analysis, h3_div_subset])
category_analysis
list_of_terms = ['continental', 'directional', 'national', 'omb/us census', 'race/ethnicity', 'subcontinental', 'subnational']
aggregated_means = pd.DataFrame(columns = ['term1', 'term2', 'cos_sim_m1', 'cos_sim_m2', 'cos_sim_diffs', 'group'])
for term in list_of_terms:
tmp_dictionary = category_analysis[(category_analysis['category'] == term) | (category_analysis['term'] == 'diversity')]
comp_outcomes = w2v_sim_mean_comps(tmp_dictionary, earlier_model, later_model)
comp_outcomes = comp_outcomes[(comp_outcomes['term1'] == 'diversity') & (comp_outcomes['term2'] != 'diversity')]
comp_outcomes['group'] = [term] * len(comp_outcomes)
aggregated_means = pd.concat([aggregated_means, comp_outcomes])
aggregated_means = aggregated_means.groupby(by=["group"]).mean().round(3)
aggregated_means
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/dictionaries/")
h3_dictionary = pd.read_csv("diversity_project - h3_dictionary.csv")
h3_dictionary = h3_dictionary[h3_dictionary['term'].isin(list(earlier_model.wv.key_to_index))]
h3_dictionary = h3_dictionary.drop(['str_type','regional','subclass','source','date_added'], axis=1)
national_means = h3_dictionary[h3_dictionary['category'] == 'national']
#national_means = national_means[national_means['mean_embeddings'] == 1]
national_means = national_means.drop(['category','mean_embeddings'], axis=1)
national_means = national_means.rename(columns={'continental': 'category'})
#national_means = national_means[~national_means['term'].str.contains("s$")]
national_means = national_means[~national_means['term'].str.contains("_")]
national_means = pd.concat([national_means, h3_div_subset])
national_means
list_of_terms = ['africa', 'asia', 'europe', 'north america', 'oceania', 'south america']
aggregated_means = pd.DataFrame(columns = ['term1', 'term2', 'cos_sim_m1', 'cos_sim_m2', 'cos_sim_diffs', 'group'])
for term in list_of_terms:
tmp_dictionary = national_means[(national_means['category'] == term) | (national_means['term'] == 'diversity')]
comp_outcomes = w2v_sim_mean_comps(tmp_dictionary, earlier_model, later_model)
comp_outcomes = comp_outcomes[(comp_outcomes['term1'] == 'diversity') & (comp_outcomes['term2'] != 'diversity')]
comp_outcomes['group'] = [term] * len(comp_outcomes)
aggregated_means = pd.concat([aggregated_means, comp_outcomes])
aggregated_means = aggregated_means.groupby(by=["group"]).mean().round(3)
aggregated_means
###Output
_____no_output_____ |
notebooks/building_in_processing_in_graph.ipynb | ###Markdown
Run in Google Colab

TF 2.0 Transition - Building Pre-processing into the Model

TF 2.0 adds a lot of new features and a more powerful representation. This notebook will demonstrate some of the newer features for building (custom) input pre-processing into the graph. The benefits of this are:
1. Since it is part of the model, the preprocessing does not need to be re-implemented on the inference side.
2. Since it is added as graph ops, the preprocessing happens on the GPU (instead of upstream on the CPU) and is faster.
3. The preprocessing graph operations can be optimized by the TensorFlow compiler.

Objective
We will be using the following TF 2.0 features / recommendations:
1. [Recommendation] Use tf.keras for the model building.
2. [Recommendation] Put preprocessing into the model.
3. [Feature] Use the @tf.function decorator to convert the Python preprocessing code into graph ops.
4. [Feature] Use subclassing of layers to define a new layer for the preprocessing.

Imports
If you haven't already, you need to install the TF 2.0 beta. If you are running this notebook in Colab (which ships 1.13 as of this writing), you will need to install TF 2.0 via a cell, as below:
###Code
# If not already installed
%pip install tensorflow==2.0.0-beta1
###Output
_____no_output_____
###Markdown
Now let's import what we will use in this demonstration.
###Code
import tensorflow as tf
from tensorflow.keras import Model, Input, layers
from tensorflow.keras.layers import Flatten, Dense
# expected output: 2.0.0-beta1
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Layer SubclassingWe will start by subclassing the tf.keras layers class to make a new layer type, as follows: 1. Will take an input vector whose shape is specified at instantiation. 2. Will normalize the data between 0 and 1 (assumes pixel data between 0 .. 255). 3. Outputs the normalized input. 4. Has no trainable parameters. Let's start by showing a basic template for subclassing layers and then explain it:```pythonclass NewLayer(layers.Layer): def __init__(self): super(NewLayer, self).__init__() self.my_vars = blash, blah def build(self, input_shape): """ Handler for building the layer """ self.kernel = blah, blah def call(self, inputs): """ Handler for layer object as callable """ outputs = do something with inputs return outputs``` SubclassingThe first line in the above template `class NewLayer(layers.Layer)` indicates we want to create a new class object named `NewLayer` which is subclassed (derived) from the tf.keras `layers` class. This will give us a custom layer definition. __init__() methodThis is the initializer (constructor) for the class object instantiation. We use the initializer to initialize layer specific variables. build() methodThis method handles the building of the layer when the model is compiled. A typical action is to define the shape of the kernel (trainable parameters) and initialization of the kernel. call() methodThis method handles calling the layer as a callable (function call) for execution in the graph. Subclassing Our Custom LayerIn the code below, we subclass a custom layer for doing preprocessing of the input, and where the preprocessing is converted to graph operations in the model.The first line in the code `class Normalize(layers.Layer)` indicates we want to create a new class object named `Normalize` which is subclassed (derived) from the tf.keras `layers` class. __init__() methodSince we won't have any constants or variables to preserve, we don't have any need to add anything to this method. build() methodOur custom layer won't have any trainable parameters. We will tell the compile process to not set up any gradient descent updates on the kernel during training by setting the `layers` class variable `self.kernel` to `None`. call() methodThis is where we add our preprocessing. The parameter `inputs` is the input tensor to the layer during training and prediction. A TF tensor object implements polymorphism to overload operators. We use the overloaded division operator, which will broadcast the division operation across the entire tensor --thus each element will be divided by 255.0.Finally, we add the decorator `@tf.function` to tell **TensorFlow AutoGraph** to convert convert the Python code in this method to graph operations in the model.
###Code
class Normalize(layers.Layer):
""" Custom Layer for Preprocessing Input """
def __init__(self):
""" Constructor """
super(Normalize, self).__init__()
def build(self, input_shape):
""" Handler for Input Shape """
self.kernel = None
@tf.function
def call(self, inputs):
""" Handler for layer object is callable """
inputs = inputs / 255.0
return inputs
###Output
_____no_output_____
###Markdown
Build the Model
Let's build a model to train on the MNIST dataset. We will keep it really basic:
1. Use the Functional API method for defining the model.
2. Make the first layer of our model the custom preprocessing layer.
3. The remaining layers are a basic DNN for MNIST.
###Code
# Create the input vector for 28x28 MNIST images
inputs = Input((28, 28))
# The first layer is the preprocessing layer, which is bound to the input vector
x = Normalize()(inputs)
# Next layer, we flatten the preprocessed input into a 1D vector
x = Flatten()(x)
# Create a hidden dense layer of 128 nodes
x = Dense(128, activation='relu')(x)
# Create an output layer for classifying the 10 digits
outputs = Dense(10, activation='sigmoid')(x)
# Instantiate the model
model = Model(inputs, outputs)
# Compile the model
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['acc'])
###Output
_____no_output_____
###Markdown
Get the Dataset
We will use the tf.keras built-in dataset for MNIST. The dataset is pre-split into train and test data, separated into NumPy multi-dimensional arrays for images and labels. The image data is not preprocessed --i.e., all values are between 0 and 255. The label data is not one-hot encoded --hence why we compiled with `loss='sparse_categorical_crossentropy'`.
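(If you prefer one-hot encoded labels, a minimal sketch would be to convert them and compile with the non-sparse loss; this assumes the `y_train`/`y_test` arrays loaded in the next cell.)

```python
# Sketch: one-hot encode the labels and use the non-sparse loss instead.
from tensorflow.keras.utils import to_categorical

y_train_onehot = to_categorical(y_train, num_classes=10)
y_test_onehot = to_categorical(y_test, num_classes=10)
# ...then compile with loss='categorical_crossentropy'.
```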
###Code
from tensorflow.keras.datasets import mnist
# Load the train and test data into memory
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Expected output: (60000, 28, 28) , (60000,)
print(x_train.shape, y_train.shape)
###Output
_____no_output_____
###Markdown
Train the Model
Let's now train the model (with the preprocessing built into the model graph) on the unpreprocessed MNIST data.
###Code
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_split=0.1, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate the Model
Let's now evaluate (predict with) the trained model using unpreprocessed test examples.
###Code
acc = model.evaluate(x_test, y_test)
print(acc)
###Output
_____no_output_____ |
locale/examples/01-filter/connectivity.ipynb | ###Markdown
Connectivity {connectivity_example}
====================================

Use the connectivity filter to remove noisy isosurfaces. This example is very similar to [this VTK example](https://kitware.github.io/vtk-examples/site/Python/VisualizationAlgorithms/PineRootConnectivity/)
###Code
# sphinx_gallery_thumbnail_number = 2
import pyvista as pv
from pyvista import examples
###Output
_____no_output_____
###Markdown
Load a dataset that has noisy isosurfaces
###Code
mesh = examples.download_pine_roots()
cpos = [(40.6018, -280.533, 47.0172),
(40.6018, 37.2813, 50.1953),
(0.0, 0.0, 1.0)]
# Plot the raw data
p = pv.Plotter()
p.add_mesh(mesh, color='#965434')
p.add_mesh(mesh.outline())
p.show(cpos=cpos)
###Output
_____no_output_____
###Markdown
The mesh plotted above is very noisy. We can extract the largest connected isosurface in that mesh using the `pyvista.DataSetFilters.connectivity`{.interpreted-text role="func"} filter and passing `largest=True` to the `connectivity` filter, or by using the `pyvista.DataSetFilters.extract_largest`{.interpreted-text role="func"} filter (both are equivalent).
###Code
# Grab the largest connected volume present
largest = mesh.connectivity(largest=True)
# or: largest = mesh.extract_largest()
p = pv.Plotter()
p.add_mesh(largest, color='#965434')
p.add_mesh(mesh.outline())
p.camera_position = cpos
p.show()
###Output
_____no_output_____ |
AS Strategy Data & Preparation for PV.ipynb | ###Markdown
+ Requirements
1. List of strategies on AllocateSmartly
2. CSVs of each strategy
3. Update every month (at rebalance date)
4. Import data table for each strategy on PortfolioVisualizer as "shares" and benchmarks
5. Update strategy (strategy of several strategies)
6.
###Code
import os
from selenium import webdriver
import pandas as pd
import datetime, time, csv
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
REMEMBER
export ASNAME='[email protected]'
export ASPASSWD=''
export PVNAME='[email protected]'
export PVPASSWD=''
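If these variables are exported in the shell before launching Jupyter, the hard-coded `os.environ.update` calls in the next cell are unnecessary; a minimal sketch of reading them (and failing early if any are missing) would be:

```python
# Sketch: read credentials from the environment instead of hard-coding them.
import os

required = ['ASNAME', 'ASPASSWD', 'PVNAME', 'PVPASSWD']
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise RuntimeError('Missing credentials in environment: ' + ', '.join(missing))
```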
###Code
# DON'T UPLOAD PASSWORDS TO GITHUB
os.environ.update({'PVNAME':'[email protected]'})
os.environ.update({'PVPASSWD':'9D@3G!#qH5ZK*k^X*L%x'})
os.environ.update({'ASNAME':'[email protected]'})
os.environ.update({'ASPASSWD':'LoZu#WJdjUvu3uFzLOst'})
###Output
_____no_output_____
###Markdown
Allocate Smartly
###Code
# prepare Data folders
import shutil
try:
shutil.rmtree(b'/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data')
except:
pass
os.makedirs(b'/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data')
os.makedirs(b'/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/strategy_returns')
os.makedirs(b'/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/benchmark_returns')
###Output
_____no_output_____
###Markdown
Login to allocatesmartly.com and generate CSV files for the Strategies
###Code
data_path = '/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/'
# browser = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver")
browser = webdriver.Chrome("/usr/local/share/chromedriver")
browser.get('https://allocatesmartly.com/login')
browser.set_window_position(1,1)
browser.maximize_window()
time.sleep(5)
user = browser.find_element_by_id('user_login')
user.send_keys(os.environ.get('ASNAME'))
password = browser.find_element_by_id('user_pass')
password.send_keys(os.environ.get('ASPASSWD'))
time.sleep(5)
login = browser.find_element_by_id('wp-submit')
login.click()
browser.get('https://allocatesmartly.com/members/strategies/')
strategies = pd.read_html(browser.page_source, header=0)[0]
strategies = strategies.filter(items=[c for c in strategies.columns][1:-2])
# strip the trailing 'Trading ...' text from each Strategy name (assumes every name contains 'Trading')
strategies.Strategy = strategies.Strategy.apply(lambda x: x[:x.find('Trading')])
strategies
# for each strategy, get monthly table and generate CSV files
for s in strategies.Strategy:
print(s)
browser.find_elements_by_link_text(s)[0].click()
browser.find_element_by_id('sx-periodic-table')
table = pd.read_html(browser.page_source, header=1)[2]
table.columns = ['Year','Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec','Total']
# need to change 60/40 to 60_40 to make a valid filename
table.to_csv(data_path + 'strategy_returns/' + s.replace('/','_')+'.csv')
browser.execute_script("window.history.go(-1)")
time.sleep(2)
browser.close()
#
strats = strategies.copy()
strats.to_csv(data_path + 'strats.csv')
###Output
_____no_output_____
###Markdown
Generate symbols (benchmark names) for use by Portfolio Visualizer
###Code
data_path = '/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/'
strategies = pd.read_csv(data_path + 'strats.csv')
# new Stratnames column and remove brackets
strategies['Stratnames'] = pd.Series(strategies.Strategy).str.replace(r"\(.*\)","")
# rename 60/40 Benchmark to B6040 Benchmark
strategies['Stratnames'] = strategies.Stratnames.str.replace('40', 'B6040')
# remove developer from name
strategies['Stratnames'] = strategies.Stratnames.str.replace("/('\w+)|(\w+'\w+)|(\w+')|(\w+)/", "")
# new column for Developers
strategies['Developer'] = strategies.Strategy.str.extract("/('\w+)|(\w+'\w+)|(\w+')|(\w+)/", expand=True)[1]
# create unique name >= 6 characters for Portfolio Visualizer Benchmarks
strategies['Name'] = [''.join([c for c in s if c.isupper()]) for s in strategies.Stratnames]
strategies['Numbers'] = strategies.Stratnames.str.extract('(\d+)', expand=True).fillna('')
strategies['Symbol'] = strategies['Name'] + strategies['Numbers']
# # append 0s so that Share Name is at least 6 characters
strategies['Symbol'] = [s+'00000'[:6-len(s)]for s in strategies.Symbol]
strategies['Symbol'] = strategies['Symbol'].str.replace('4000', 'B6040')
strategies = strategies.filter(items=['Strategy', 'Stratnames', 'Symbol', 'AnnualReturn (20Y)', 'SharpeRatio (20Y)'])
strategies['Strategy'] = strategies['Strategy'].str.replace('/','_')
strategies
# verfy that all Securities are unique
len(strategies) == len(strategies.Symbol.unique())
strategies.to_csv('/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/ASstrategies.csv', index=None)
strategies[:5]
strategies
###Output
_____no_output_____
###Markdown
Generate Benchmark Returns for use by Portfolio Visualizer
###Code
def update_strategy_data(data_path, strategy, symbol):
"""
For each strategy, save latest data table from website,
generate csv data to save benchmark with format suitable for
PortfolioVisualizer
- data_path : windows folder name for data
- symbol : strategy Symbol
- strategy : strategy Name
"""
df = pd.read_csv(data_path + 'strategy_returns/' + strategy + '.csv',index_col=[0] )
df1 = pd.DataFrame(columns=['Period','Return'])
df1 = df1.append({'Period': 'Period', 'Return': 'Return'}, ignore_index=True)
for row in range(0,len(df)):
for column in range(1,13):
year, month, value = df.iloc[row,0], column, df.iloc[row, column]
# period = (datetime.date (year, month, 1) - datetime.timedelta (days = 1)).strftime('%#m/%#d/%Y')
next_month = datetime.date (year, month, 1).replace(day=28) + datetime.timedelta(days=4) # this will never fail
period = next_month - datetime.timedelta(days=next_month.day)
if not np.isreal(value):
newline = str(period) + u',' + value
df1 = df1.append({'Period': period, 'Return': value}, ignore_index=True)
df1.to_csv(data_path + 'benchmark_returns/' + symbol.replace('/','_')+'.csv', index=False, header=False, quoting=csv.QUOTE_NONNUMERIC)
data_path = '/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/'
for n in range(len(strategies)):
# for n in [1]:
symbol = strategies.Symbol[n]
strategy = strategies.Strategy[n]
print((symbol, strategy))
update_strategy_data(data_path, strategy, symbol)
#browser.close()
###Output
('DAAA00', "Kipnis' Defensive Adaptive Asset Allocation")
('PAA000', 'Protective Asset Allocation')
('AMS000', "Allocate Smartly's Meta Strategy")
('AAA000', 'Adaptive Asset Allocation')
('VAAA00', 'Vigilant Asset Allocation - Aggressive')
('PC0000', "Varadi's Percentile Channels")
('PAACPR', 'Protective Asset Allocation - CPR')
('GPM000', "Keuning's Generalized Protective Momentum")
('ADM000', 'Accelerating Dual Momentum')
('USMD00', 'US Max Diversification')
('USRPTF', 'US Risk Parity Trend Following')
('ACA000', "Stoken's Active Combined Asset (ACA)")
('DAA000', 'Defensive Asset Allocation')
('ACAM00', "Stoken's Active Combined Asset (ACA) - Monthly")
('MCP000', "Varadi's Minimum Correlation Portfolio")
('CAAD00', 'Classical Asset Allocation - Defensive')
('RAAB00', 'Robust Asset Allocation - Balanced')
('USERC0', 'US Equal Risk Contribution')
('TPL000', "Faber's Trinity Portfolio Lite")
('MBP000', "Livingston's Mama Bear Portfolio")
('EI0000', 'Efficiente Index')
('CDM000', 'Composite Dual Momentum')
('USMC00', 'US Min Correlation')
('EAAD00', 'Elastic Asset Allocation - Defensive')
('GTAAA6', "Faber's Global Tactical Asset Alloc. - Agg. 6")
('EAAO00', 'Elastic Asset Allocation - Offensive')
('FAA000', 'Flexible Asset Allocation')
('VAAB00', 'Vigilant Asset Allocation - Balanced')
('GRPTF0', 'Global Risk Parity Trend Following')
('AWP000', "Dalio's All-Weather Portfolio")
('GTAA50', "Faber's Global Tactical Asset Alloc. 5 (GTAA 5)")
('GB0000', "PortfolioCharts' Golden Butterfly")
('GTAA13', "Faber's Global Tactical Asset Alloc. 13 (GTAA 13)")
('TPP000', 'Tactical Permanent Portfolio')
('PP0000', "Browne's Permanent Portfolio")
('DDM000', "Newfound's Diversified Dual Momentum")
('CAAO00', 'Classical Asset Allocation - Offensive')
('GTAAA3', "Faber's Global Tactical Asset Alloc. - Agg. 3")
('PBP000', "Livingston's Papa Bear Portfolio")
('RAAA00', 'Robust Asset Allocation - Aggressive')
('USMS00', 'US Max Sharpe')
('GTTUER', 'Growth-Trend Timing - UE Rate')
('TBS000', "Novell's Tactical Bond Strategy")
('GTTO00', 'Growth-Trend Timing - Original')
('TDM000', 'Traditional Dual Momentum')
('TWM000', "Davis' Three Way Model")
('PSS000', "Glenn's Paired Switching Strategy")
('BB6040', '60_40 Benchmark')
('IP0000', "Faber's Ivy Portfolio")
('SRS000', "Faber's Sector Relative Strength (Sector RS)")
###Markdown
Portfolio Visualizer
Login to PortfolioVisualizer
###Code
from selenium import webdriver
import time
# browser = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver")
browser = webdriver.Chrome("/usr/local/share/chromedriver")
# browser = webdriver.Firefox()
browser.set_window_position(1,1)
# browser.set_window_size(1038, 875)
browser.maximize_window()
browser.get('https://www.portfoliovisualizer.com/login')
time.sleep(2)
user = browser.find_element_by_id('username')
user.send_keys(os.environ.get('PVNAME'))
password = browser.find_element_by_id('password')
password.send_keys(os.environ.get('PVPASSWD'))
time.sleep(2)
login = browser.find_element_by_id('submitButton')
login.click()
###Output
_____no_output_____
###Markdown
Navigate to Import Benchmarks
###Code
len(strategies)
strategies[-3:]
data_path = '/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/benchmark_returns/'
# browser.get('https://www.portfoliovisualizer.com/preferences#import')
browser.get('https://www.portfoliovisualizer.com/manage-benchmarks#import')
#########################################################################
# IMPORTANT: ONLY 50 BENCHMARKS ALLOWED
# SKIP 4 STRATEGIES TO STAY WITHIN THE LIMIT
# indices 10, 16, 24, 27 (excluded in the loops below)
strategies[:3]
for n in range(len(strategies)):
if n in [10,16,27,24]:
print('OUT : ',n,strategies.Strategy[n])
else:
print(n,strategies.Strategy[n])
for n in range(len(strategies)):
# for n in [1]:
if n in [10,16,27,24]:
print('OUT : ',n,strategies.Strategy[n])
else:
print(n,strategies.Strategy[n])
symbol = strategies.Symbol[n]
strategy = strategies.Strategy[n]
browser.maximize_window()
# Series Name
browser.find_element_by_id("benchmarkName").clear()
browser.find_element_by_id("benchmarkName").send_keys(strategy)
# Upload Data File
browser.find_element_by_id("upload").clear()
browser.find_element_by_id("upload").send_keys(data_path + symbol +'.csv')
# Series (Type)
browser.find_element_by_id('seriesType_chosen').click()
# choose SeriesType : li[1|2|3|4] eg li[1] gives Monthly Returns
browser.find_element_by_xpath('//*[@id="seriesType_chosen"]/div/ul/li[1]').click()
# Percentage Values (2=Yes)
browser.find_element_by_id('percentageValues_chosen').click()
browser.find_element_by_xpath('//*[@id="percentageValues_chosen"]/div/ul/li[2]').click()
# Assigned Ticker
browser.find_element_by_id("benchmarkSymbol").clear()
browser.find_element_by_id("benchmarkSymbol").send_keys(symbol)
browser.find_element_by_id('benchmarkAssetClass_chosen').click()
# NOTE: there must be 10 years of monthly data to assign as an Asset Class (2=Yes)
# But don't need this for creating a benckmark
# Asset Class (2=Yes)
browser.find_element_by_css_selector('#benchmarkAssetClass_chosen > div > ul > li:nth-child(2)').click()
# Import Data Series
browser.implicitly_wait(90)
browser.find_element_by_id("importBenchmarkButton").click()
print(browser.find_element_by_class_name("alert").text)
browser.execute_script("window.history.go(-1)")
time.sleep(2)
browser.close()
# REMEMBER THAT THE LAST MONTH MAY BE INCOMPLETE!!
###Output
_____no_output_____
###Markdown
[WIP] SCRAPE METRICS FROM PV
###Code
# browser = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver")
browser = webdriver.Chrome("/usr/local/share/chromedriver")
# browser = webdriver.Firefox()
browser.set_window_position(1,1)
# browser.set_window_size(1038, 875)
browser.maximize_window()
browser.get('https://www.portfoliovisualizer.com/login')
time.sleep(2)
user = browser.find_element_by_id('username')
user.send_keys(os.environ.get('PVNAME'))
password = browser.find_element_by_id('password')
password.send_keys(os.environ.get('PVPASSWD'))
time.sleep(2)
login = browser.find_element_by_id('submitButton')
login.click()
###Output
_____no_output_____
###Markdown
[WIP] HOW TO CREATE OPTIMISED PORTFOLIOS OF N STRATEGIES?
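One simple approach (a sketch only, not the notebook's chosen method) is inverse-volatility weighting of the strategies, computed from the monthly returns DataFrame `data` that is built below:

```python
# Sketch: inverse-volatility weights across the N strategies, normalized to sum to 1.
# Assumes `data` is the DataFrame of monthly percentage returns built in the cells below.
inv_vol = 1.0 / data.std()
weights = inv_vol / inv_vol.sum()
weights.sort_values(ascending=False).head()
```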
###Code
strategies = pd.read_csv('/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/ASstrategies.csv')
strategies[:3]
data_path = '/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/benchmark_returns/'
df = pd.DataFrame(columns=list(strategies.Symbol), index=pd.read_csv(data_path + 'BB6040.csv').Period)
df[:2]
# dataframe of all returns
for n in range(len(strategies)):
print (n, strategies.Strategy[n], strategies.Symbol[n])
df[strategies.Symbol[n]] = pd.read_csv(data_path + strategies.Symbol[n] + '.csv',index_col="Period")
data = df.dropna()
data[:10]
data = (data.applymap(lambda x: x.rstrip('%'))).astype(float)
# data.cumsum()
data.astype(float)[:3]
# remove rows with all zeroes
data = data[(data.T != 0).any()]
data[-3:]
data_path = '/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/benchmark_returns/'
data.to_csv(data_path + 'data.csv')
returns = data.cumsum()
returns[-3:]
d = data[['DAAA00','AAA000']]
d.corr()
d.DAAA00.plot()
d = data.PAA000.describe()
d
returns[returns.index>'1999-12-31'].PAA000.plot(figsize=(15,10),grid=True)
returns.corr(method='spearman', min_periods=1)
# CORRELATIONS (COMPARE WITH AS)
corr = data.corr(method='spearman', min_periods=1)
corr
data_path = '/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/benchmark_returns/'
data = pd.read_csv(data_path + 'data.csv')
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=bool)  # np.bool is deprecated; use the builtin bool
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(50, 50))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=0.5, cbar_kws={"shrink": 1.5})
plt.figure(figsize = (20,20))
sns.heatmap(data)
corr = np.corrcoef(np.random.randn(100, 200))
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
    plt.figure(figsize=(20, 30))
    ax = sns.heatmap(corr, mask=mask, vmax=.3)
corr
###Output
_____no_output_____
###Markdown
SCRATCH
###Code
import fintools as ft
# for PAA000.csv
data_path = '/home/scubamut/MEGAsync/WORK_IN_PROGRESS/pvautomate/Data/benchmark_returns/'
PAA000 = d = pd.read_csv(data_path + 'PAA000.csv',index_col='Period')
d[:3]
rets = d.applymap(lambda x: x.rstrip('%')).astype(float)
rets
rets.plot()
rets.std()
rets.max()
rets.min()
ft.compute_annual_factor
import empyrical as e
import numpy as np
from empyrical import max_drawdown, alpha_beta
returns = np.array([.01, .02, .03, -.4, -.06, -.02])
benchmark_returns = np.array([.02, .02, .03, -.35, -.05, -.01])
# calculate the max drawdown
max_drawdown(returns)
# calculate alpha and beta
alpha, beta = alpha_beta(returns, benchmark_returns)
max_drawdown(returns)
###Output
_____no_output_____ |
Custom+CUDA+Kernels+in+Python+with+Numba.ipynb | ###Markdown
Custom CUDA Kernels in Python with NumbaIn this section we will go further into our understanding of how the CUDA programming model organizes parallel work, and will leverage this understanding to write custom CUDA **kernels**, functions which run in parallel on CUDA GPUs. Custom CUDA kernels, in utilizing the CUDA programming model, require more work to implement than, for example, simply decorating a ufunc with `@vectorize`. However, they make possible parallel computing in places where ufuncs are just not able, and provide a flexibility that can lead to the highest level of performance.This section contains three appendices for those of you interested in futher study: a variety of debugging techniques to assist your GPU programming, links to CUDA programming references, and coverage of Numba supported random number generation on the GPU. ObjectivesBy the time you complete this section you will be able to:* Write custom CUDA kernels in Python and launch them with an execution configuration.* Utilize grid stride loops for working in parallel over large data sets and leveraging memory coalescing.* Use atomic operations to avoid race conditions when working in parallel. The Need for Custom Kernels Ufuncs are fantastically elegant, and for any scalar operation that ought to be performed element wise on data, ufuncs are likely the right tool for the job.As you are well aware, there are many, if not more, classes of problems that cannot be solved by applying the same function to each element of a data set. Consider, for example, any problem that requires access to more than one element of a data structure in order to calculate its output, like stencil algorithms, or any problem that cannot be expressed by a one input value to one output value mapping, such as a reduction. Many of these problems are still inherently parallelizable, but cannot be expressed by a ufunc.Writing custom CUDA kernels, while more challenging than writing GPU accelerated ufuncs, provides developers with tremendous flexibility for the types of functions they can send to run in parallel on the GPU. Furthermore, as you will begin learning in this and the next section, it also provides fine-grained control over *how* the parallelism is conducted by exposing CUDA's thread hierarchy to developers explicitly.While remaining purely in Python, the way we write CUDA kernels using Numba is very reminiscent of how developers write them in CUDA C/C++. For those of you familiar with programming in CUDA C/C++, you will likely pick up custom kernels in Python with Numba very rapidly, and for those of you learning them for the first time, know that the work you do here will also serve you well should you ever need or wish to develop CUDA in C/C++, or even, make a study of the wealth of CUDA resources on the web that are most commonly portraying CUDA C/C++ code. Introduction to CUDA KernelsWhen programming in CUDA, developers write functions for the GPU called **kernels**, which are executed, or in CUDA parlance, **launched**, on the GPU's many cores in parallel **threads**. When kernels are launched, programmers use a special syntax, called an **execution configuration** (also called a launch configuration) to describe the parallel execution's configuration.The following slides (which will appear after executing the cell below) give a high level introduction to how CUDA kernels can be created to work on large datasets in parallel on the GPU device. 
Work through the slides and then you will begin writing and executing your own custom CUDA kernels, using the ideas presented in the slides.
###Code
from IPython.display import IFrame
IFrame('https://view.officeapps.live.com/op/view.aspx?src=https://developer.download.nvidia.com/training/courses/C-AC-02-V1/AC_CUDA_Python_1.pptx', 640, 390)
###Output
_____no_output_____
###Markdown
A First CUDA Kernel
Let's start with a concrete, very simple example: rewriting our addition function for 1D NumPy arrays. CUDA kernels are compiled using the `numba.cuda.jit` decorator. `numba.cuda.jit` is not to be confused with the `numba.jit` decorator you've already learned, which optimizes functions **for the CPU**.
We will begin with a very simple example to highlight some of the essential syntax. Worth mentioning is that this particular function could in fact be written as a ufunc, but we choose it here to keep the focus on learning the syntax. We will proceed to functions better suited to being written as custom kernels below. Be sure to read the comments carefully, as they provide some important information about the code.
###Code
from numba import cuda
# Note the use of an `out` array. CUDA kernels written with `@cuda.jit` do not return values,
# just like their C counterparts. Also, no explicit type signature is required with @cuda.jit
@cuda.jit
def add_kernel(x, y, out):
# The actual values of the following CUDA-provided variables for thread and block indices,
# like function parameters, are not known until the kernel is launched.
# This calculation gives a unique thread index within the entire grid (see the slides above for more)
idx = cuda.grid(1) # 1 = one dimensional thread grid, returns a single value.
# This Numba-provided convenience function is equivalent to
# `cuda.threadIdx.x + cuda.blockIdx.x * cuda.blockDim.x`
# This thread will do the work on the data element with the same index as its own
# unique index within the grid.
out[idx] = x[idx] + y[idx]
import numpy as np
n = 4096
x = np.arange(n).astype(np.int32) # [0...4095] on the host
y = np.ones_like(x) # [1...1] on the host
d_x = cuda.to_device(x) # Copy of x on the device
d_y = cuda.to_device(y) # Copy of y on the device
d_out = cuda.device_array_like(d_x) # Like np.array_like, but for device arrays
# Because of how we wrote the kernel above, we need to have a 1 thread to one data element mapping,
# therefore we define the number of threads in the grid (128*32) to equal n (4096).
threads_per_block = 128
blocks_per_grid = 32
add_kernel[blocks_per_grid, threads_per_block](d_x, d_y, d_out)
cuda.synchronize()
print(d_out.copy_to_host()) # Should be [1...4096]
###Output
[ 1 2 3 ... 4094 4095 4096]
###Markdown
Exercise: Tweak the CodeMake the following minor changes to the code above to see how it affects its execution. Make educated guesses about what will happen before running the code:* Decrease the `threads_per_block` variable* Decrease the `blocks_per_grid` variable* Increase the `threads_per_block` and/or `blocks_per_grid variables`* Remove or comment out the `cuda.synchronize()` call ResultsIn the example above, because the kernel is written so that each thread works on exactly one data element, it is essential for the number of threads in the grid equal the number of data elements.By **reducing the number of threads in the grid**, either by reducing the number of blocks, and/or reducing the number of threads per block, there are elements where work is left undone and thus we can see in the output that the elements toward the end of the `d_out` array did not have any values added to it. If you edited the execution configuration by reducing the number of threads per block, then in fact there are other elements through the `d_out` array that were not processed.**Increasing the size of the grid** in fact creates an error. Later in this section you will learn how to expose this error and debug it.You might have expected that **removing the synchronization point** would have resulted in a print showing that no or less work had been done. This is a reasonable guess since without a synchronization point the CPU will work asynchronously while the GPU is processing. The detail to learn here is that memory copies carry implicit synchronization, making the call to `cuda.synchronize` above unnecessary. Exercise: Accelerate a CPU Function as a Custom CUDA KernelBelow is CPU scalar function `square_device` that could be used as a CPU ufunc. Your job is to refactor it to run as a CUDA kernel decorated with the `@cuda.jit` decorator.You might think that making this function run on the device could be much more easily done with `@vectorize`, and you would be correct. But this scenario will give you a chance to work with all the syntax we've introduced before moving on to more complicated and realistic examples.In this exercise you will need to:* Refactor the `square_device` definition to be a CUDA kernel that will do one thread's worth of work on a single element.* Refactor the `d_a` and `d_out` arrays below to be CUDA device arrays.* Modify the `blocks` and `threads` variables to appropriate values for the provided `n`.* Refactor the call to `square_device` to be a kernel launch that includes an execution configuration.The assertion test below will fail until you successfully implement the above. If you get stuck, feel free to check out a [solution](../../../../edit/tasks/task2/task/solutions/square_device_solution.py).
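A general pattern for choosing an execution configuration that covers `n` elements exactly once, when each thread handles one element, is ceiling division (a sketch, not the only valid answer for the exercise):

```python
# Sketch: pick a block size, then compute how many blocks cover all n elements.
import math

threads_per_block = 128
blocks_per_grid = math.ceil(n / threads_per_block)  # for n = 4096 this gives 32
```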
###Code
# Refactor to be a CUDA kernel doing one thread's work.
# Don't forget that when using `@cuda.jit`, you must provide an output array as no value will be returned.
@cuda.jit
def square_device(a, out):
idx = cuda.grid(1)
out[idx] = a[idx]**2
# Leave the values in this cell fixed for this exercise
n = 4096
a = np.arange(n)
out = a**2 # `out` will only be used for testing below
d_a = cuda.to_device(a) # TODO make `d_a` a device array
d_out = cuda.device_array_like(np.zeros_like(a)) # TODO: make d_out a device array
# TODO: Update the execution configuration for the amount of work needed
blocks = 4
threads = 1024
# TODO: Launch as a kernel with an appropriate execution configuration
# d_out = square_device(d_a)
square_device[blocks, threads](d_a, d_out)
print(d_out.copy_to_host())
from numpy import testing
testing.assert_almost_equal(d_out, out)
###Output
_____no_output_____
###Markdown
An Aside on Hiding Latency and Execution Configuration Choices CUDA enabled NVIDIA GPUs consist of several [**Streaming Multiprocessors**](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.htmlhardware-implementation), or **SMs** on a die, with attached DRAM. SMs contain all required resources for the execution of kernel code including many CUDA cores. When a kernel is launched, each block is assigned to a single SM, with potentially many blocks assigned to a single SM. SMs partition blocks into further subdivisions of 32 threads called **warps** and it is these warps which are given parallel instructions to execute.When an instruction takes more than one clock cycle to complete (or in CUDA parlance, to **expire**) the SM can continue to do meaningful work *if it has additional warps that are ready to be issued new instructions.* Because of very large register files on the SMs, there is no time penalty for an SM to change context between issuing instructions to one warp or another. In short, the latency of operations can be hidden by SMs with other meaningful work so long as there is other work to be done.**Therefore, of primary importance to utilizing the full potential of the GPU, and thereby writing performant accelerated applications, it is essential to give SMs the ability to hide latency by providing them with a sufficient number of warps which can be accomplished most simply by executing kernels with sufficiently large grid and block dimensions.**Deciding the very best size for the CUDA thread grid is a complex problem, and depends on both the algorithm and the specific GPU's [compute capability](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.htmlcompute-capabilities), but here are some very rough heuristics that we tend to follow and which can work well for getting started: * The size of a block should be a multiple of 32 threads (the size of a warp), with typical block sizes between 128 and 512 threads per block. * The size of the grid should ensure the full GPU is utilized where possible. Launching a grid where the number of blocks is 2x-4x the number of SMs on the GPU is a good starting place. Something in the range of 20 - 100 blocks is usually a good starting point. * The CUDA kernel launch overhead does increase with the number of blocks, so when the input size is very large we find it best not to launch a grid where the number of threads equals the number of input elements, which would result in a tremendous number of blocks. Instead we use a pattern to which we will now turn our attention for dealing with large inputs. Working on Largest Datasets with Grid Stride LoopsThe following slides give a high level overview of a technique called a **grid stride loop** which will create flexible kernels where each thread is able to work on more than one data element, an essential technique for large datasets. Execute the cell to load the slides.
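Before the slides, here is a rough sketch of the execution configuration heuristics above (this assumes Numba exposes the device's `MULTIPROCESSOR_COUNT` attribute via `cuda.get_current_device()`; the numbers are just the suggested starting points):

```python
from numba import cuda

# Sketch: size the grid from the number of SMs on the current GPU.
device = cuda.get_current_device()
sm_count = device.MULTIPROCESSOR_COUNT   # assumed device attribute: number of SMs
threads_per_block = 256                  # a multiple of the 32-thread warp size
blocks_per_grid = 4 * sm_count           # 2x-4x the SM count is a good starting point
```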
###Code
from IPython.display import IFrame
IFrame('https://view.officeapps.live.com/op/view.aspx?src=https://developer.download.nvidia.com/training/courses/C-AC-02-V1/AC_CUDA_Python_2.pptx', 640, 390)
###Output
_____no_output_____
###Markdown
A First Grid Stride Loop
Let's refactor the `add_kernel` above to utilize a grid stride loop so that we can launch it to work on larger data sets flexibly while incurring the benefits of global **memory coalescing**, which allows parallel threads to access memory in contiguous chunks, a scenario which the GPU can leverage to reduce the total number of memory operations:
###Code
from numba import cuda
@cuda.jit
def add_kernel(x, y, out):
start = cuda.grid(1)
# This calculation gives the total number of threads in the entire grid
stride = cuda.gridsize(1) # 1 = one dimensional thread grid, returns a single value.
# This Numba-provided convenience function is equivalent to
# `cuda.blockDim.x * cuda.gridDim.x`
# This thread will start work at the data element index equal to that of its own
# unique index in the grid, and then, will stride the number of threads in the grid each
# iteration so long as it has not stepped out of the data's bounds. In this way, each
# thread may work on more than one data element, and together, all threads will work on
# every data element.
for i in range(start, x.shape[0], stride):
# Assuming x and y inputs are same length
out[i] = x[i] + y[i]
import numpy as np
n = 100000 # This is far more elements than threads in our grid
x = np.arange(n).astype(np.int32)
y = np.ones_like(x)
d_x = cuda.to_device(x)
d_y = cuda.to_device(y)
d_out = cuda.device_array_like(d_x)
threads_per_block = 128
blocks_per_grid = 30
add_kernel[blocks_per_grid, threads_per_block](d_x, d_y, d_out)
print(d_out.copy_to_host()) # Remember, memory copy carries implicit synchronization
###Output
[ 1 2 3 ... 99998 99999 100000]
###Markdown
Exercise: Implement a Grid Stride Loop
Refactor the following CPU scalar `hypot_stride` function to run as a CUDA kernel utilizing a grid stride loop. Feel free to look at [the solution](../../../../edit/tasks/task2/task/solutions/hypot_stride_solution.py) if you get stuck.
###Code
from math import hypot
@cuda.jit
def hypot_stride(a, b, c):
start = cuda.grid(1)
stride = cuda.gridsize(1)
for idx in range(start, a.shape[0], stride):
c[idx] = hypot(a[idx], b[idx])
# You do not need to modify the contents in this cell
n = 1000000
a = np.random.uniform(-12, 12, n).astype(np.float32)
b = np.random.uniform(-12, 12, n).astype(np.float32)
d_a = cuda.to_device(a)
d_b = cuda.to_device(b)
d_c = cuda.device_array_like(d_b)
blocks = 128
threads_per_block = 64
hypot_stride[blocks, threads_per_block](d_a, d_b, d_c)
from numpy import testing
# This assertion will fail until you successfully implement the hypot_stride kernel above
testing.assert_almost_equal(np.hypot(a,b), d_c.copy_to_host(), decimal=5)
###Output
_____no_output_____
###Markdown
Timing the Kernel
Let's take the time to do some performance timing for the `hypot_stride` kernel. If you weren't able to successfully implement it, copy and execute [the solution](../../../../edit/tasks/task2/task/solutions/hypot_stride_solution.py) before timing.
CPU Baseline
First let's get a baseline with `np.hypot`:
###Code
%timeit np.hypot(a, b)
###Output
5.69 ms ± 5.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Numba on the CPU
Next let's see about a CPU-optimized version:
###Code
from numba import jit
@jit
def numba_hypot(a, b):
return np.hypot(a, b)
%timeit numba_hypot(a, b)
###Output
5.4 ms ± 1.83 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Single Threaded on the Device
Just to see, let's launch our kernel in a grid with only a single thread. Here we will use `%time`, which only runs the statement once to ensure our measurement isn't affected by the finite depth of the CUDA kernel queue. We will also add a `cuda.synchronize` to be sure we don't get any inaccurate times on account of returning control to the CPU, where the timer is, before the kernel completes:
###Code
%time hypot_stride[1, 1](d_a, d_b, d_c); cuda.synchronize()
###Output
CPU times: user 204 ms, sys: 120 ms, total: 324 ms
Wall time: 322 ms
###Markdown
Hopefully not too much of a surprise that this is way slower than even the baseline CPU execution.
Parallel on the Device
###Code
%time hypot_stride[128, 64](d_a, d_b, d_c); cuda.synchronize()
###Output
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 568 µs
###Markdown
That's much faster! Atomic Operations and Avoiding Race ConditionsCUDA, like many general purpose parallel execution frameworks, makes it possible to have race conditions in your code. A race condition in CUDA arises when threads read to or write from a memory location that might be modified by another independent thread. Generally speaking, you need to worry about: * read-after-write hazards: One thread is reading a memory location at the same time another thread might be writing to it. * write-after-write hazards: Two threads are writing to the same memory location, and only one write will be visible when the kernel is complete. A common strategy to avoid both of these hazards is to organize your CUDA kernel algorithm such that each thread has exclusive responsibility for unique subsets of output array elements, and/or to never use the same array for both input and output in a single kernel call. (Iterative algorithms can use a double-buffering strategy if needed, and switch input and output arrays on each iteration.)However, there are many cases where different threads need to combine results. Consider something very simple, like: "every thread increments a global counter." Implementing this in your kernel requires each thread to:1. Read the current value of a global counter.2. Compute `counter + 1`.3. Write that value back to global memory.However, there is no guarantee that another thread has not changed the global counter between steps 1 and 3. To resolve this problem, CUDA provides **atomic operations** which will read, modify and update a memory location in one, indivisible step. Numba supports several of these functions, [described here](http://numba.pydata.org/numba-doc/dev/cuda/intrinsics.htmlsupported-atomic-operations).Let's make our thread counter kernel:
###Code
@cuda.jit
def thread_counter_race_condition(global_counter):
global_counter[0] += 1 # This is bad
@cuda.jit
def thread_counter_safe(global_counter):
cuda.atomic.add(global_counter, 0, 1) # Safely add 1 to offset 0 in global_counter array
# This gets the wrong answer
global_counter = cuda.to_device(np.array([0], dtype=np.int32))
thread_counter_race_condition[64, 64](global_counter)
print('Should be %d:' % (64*64), global_counter.copy_to_host())
# This works correctly
global_counter = cuda.to_device(np.array([0], dtype=np.int32))
thread_counter_safe[64, 64](global_counter)
print('Should be %d:' % (64*64), global_counter.copy_to_host())
###Output
Should be 4096: [4096]
###Markdown
AssessmentThe following exercise will require you to utilize everything you've learned so far. Unlike previous exercises, there will not be any solution code available to you, and, there are a couple additional steps you will need to take to "run the assessment" and get a score for your attempt(s). **Please read the directions carefully before beginning your work to ensure the best chance at successfully completing the assessment.** How to Run the AssessmentTake the following steps to complete this assessment:1. Using the instructions that follow, work on the cells below as you usually would for an exercise.2. When you are satisfied with your work, follow the instructions below to copy and paste code in into linked source code files. Be sure to save the files after you paste your work.3. Return to the browser tab you used to launch this notebook, and click on the **"Assess"** button. After a few seconds a score will be generated along with a helpful message.You are welcome to click on the **Assess** button as many times as you like, so feel free if you don't pass the first time to make additional modifications to your code and repeat steps 1 through 3. Good luck! Write an Accelerated Histogramming KernelFor this assessment, you will create an accelerated histogramming kernel. This will take an array of input data, a range, and a number of bins, and count how many of the input data elements land in each bin. Below is a working CPU implementation of histogramming to serve as an example for your work:
###Code
def cpu_histogram(x, xmin, xmax, histogram_out):
'''Increment bin counts in histogram_out, given histogram range [xmin, xmax).'''
# Note that we don't have to pass in nbins explicitly, because the size of histogram_out determines it
nbins = histogram_out.shape[0]
bin_width = (xmax - xmin) / nbins
# This is a very slow way to do this with NumPy, but looks similar to what you will do on the GPU
for element in x:
bin_number = np.int32((element - xmin)/bin_width)
if bin_number >= 0 and bin_number < histogram_out.shape[0]:
# only increment if in range
histogram_out[bin_number] += 1
x = np.random.normal(size=10000, loc=0, scale=1).astype(np.float32)
xmin = np.float32(-4.0)
xmax = np.float32(4.0)
histogram_out = np.zeros(shape=10, dtype=np.int32)
cpu_histogram(x, xmin, xmax, histogram_out)
histogram_out
###Output
_____no_output_____
###Markdown
Using a grid stride loop and atomic operations, implement your solution in the cell below. After making any modifications, and before running the assessment, paste this cell's content into [**`assessment/histogram.py`**](../../../../edit/tasks/task2/task/assessment/histogram.py) and save it.
###Code
@cuda.jit
def cuda_histogram(x, xmin, xmax, histogram_out):
'''Increment bin counts in histogram_out, given histogram range [xmin, xmax).'''
start = cuda.grid(1)
stride = cuda.gridsize(1)
nbins = histogram_out.shape[0]
bin_width = (xmax - xmin) / nbins
# This is a very slow way to do this with NumPy, but looks similar to what you will do on the GPU
for idx in range(start, x.shape[0], stride):
element = x[idx]
bin_number = np.int32((element - xmin)/bin_width)
if bin_number >= 0 and bin_number < histogram_out.shape[0]:
# only increment if in range
cuda.atomic.add(histogram_out, bin_number, 1)
d_x = cuda.to_device(x)
d_histogram_out = cuda.to_device(np.zeros(shape=10, dtype=np.int32))
blocks = 128
threads_per_block = 64
cuda_histogram[blocks, threads_per_block](d_x, xmin, xmax, d_histogram_out)
# This assertion will fail until you correctly implement `cuda_histogram`
np.testing.assert_array_almost_equal(d_histogram_out.copy_to_host(), histogram_out, decimal=2)
###Output
_____no_output_____
###Markdown
Summary
In this section you learned how to:
* Write custom CUDA kernels in Python and launch them with an execution configuration.
* Utilize grid stride loops for working in parallel over large data sets and leveraging memory coalescing.
* Use atomic operations to avoid race conditions when working in parallel.
Download Content
To download the contents of this notebook, execute the following cell and then click the download link below. Note: If you run this notebook on a local Jupyter server, you can expect some of the file path links in the notebook to be broken as they are shaped to our own platform. You can still navigate to the files through the Jupyter file navigator.
###Code
!tar -zcvf section2.tar.gz .
###Output
_____no_output_____
###Markdown
[Download files from this section.](files/section2.tar.gz)
Appendix: Troubleshooting and Debugging
Note about the Terminal
Debugging is an important part of programming. Unfortunately, it is pretty difficult to debug CUDA kernels directly in the Jupyter notebook for a variety of reasons, so this notebook will show terminal commands by executing Jupyter notebook cells using the shell. These shell commands will appear in notebook cells with the command line prefixed by `!`. When applying the debug methods described in this notebook, you will likely run the commands in the terminal directly.
Printing
A common debugging strategy is printing to the console. Numba supports printing from CUDA kernels, with some restrictions. Note that output printed from a CUDA kernel will not be captured by Jupyter, so you will need to debug with a script you can run from the terminal. Let's look at a CUDA kernel with a bug:
###Code
! cat debug/ex1.py
###Output
_____no_output_____
###Markdown
When we run this code to histogram 50 values, we see the histogram is not getting 50 entries:
###Code
! python debug/ex1.py
###Output
_____no_output_____
###Markdown
*(You might have already spotted the mistake, but let's pretend we don't know the answer.)*
We hypothesize that maybe a bin calculation error is causing many of the histogram entries to appear out of range. Let's add some printing around the `if` statement to show us what is going on:
###Code
! cat debug/ex1a.py
###Output
_____no_output_____
###Markdown
This kernel will print every value and bin number it calculates. Looking at one of the print statements, we see that `print` supports constant strings, and scalar values:``` pythonprint('in range', x[i], bin_number)```String substitution (using C printf syntax or the newer `format()` syntax) is not supported. If we run this script we see:
###Code
! python debug/ex1a.py
###Output
_____no_output_____
###Markdown
Scanning down that output, we see that all 50 values should be in range. Clearly we have some kind of race condition updating the histogram. In fact, the culprit line is:``` pythonhistogram_out[bin_number] += 1```which should be (as you may have seen in a previous exercise)``` pythoncuda.atomic.add(histogram_out, bin_number, 1)``` CUDA SimulatorBack in the early days of CUDA, `nvcc` had an "emulator" mode that would execute CUDA code on the CPU for debugging. That functionality was dropped in later CUDA releases after `cuda-gdb` was created. There isn't a debugger for CUDA+Python, so Numba includes a "CUDA simulator" that runs your CUDA code with the Python interpreter on the host CPU. This allows you to debug the logic of your code using Python modules and functions that would otherwise not be allowed by the compiler.A very common use case is to start the Python debugger inside one thread of a CUDA kernel:``` pythonimport numpy as npfrom numba import [email protected] histogram(x, xmin, xmax, histogram_out): nbins = histogram_out.shape[0] bin_width = (xmax - xmin) / nbins start = cuda.grid(1) stride = cuda.gridsize(1) DEBUG FIRST THREAD if start == 0: from pdb import set_trace; set_trace() for i in range(start, x.shape[0], stride): bin_number = np.int32((x[i] + xmin)/bin_width) if bin_number >= 0 and bin_number < histogram_out.shape[0]: cuda.atomic.add(histogram_out, bin_number, 1)x = np.random.normal(size=50, loc=0, scale=1).astype(np.float32)xmin = np.float32(-4.0)xmax = np.float32(4.0)histogram_out = np.zeros(shape=10, dtype=np.int32)histogram[64, 64](x, xmin, xmax, histogram_out)print('input count:', x.shape[0])print('histogram:', histogram_out)print('count:', histogram_out.sum())```This code allows a debug session like the following to take place:```(gtc2017) 0179-sseibert:gtc2017-numba sseibert$ NUMBA_ENABLE_CUDASIM=1 python debug/ex2.py> /Users/sseibert/continuum/conferences/gtc2017-numba/debug/ex2.py(18)histogram()-> for i in range(start, x.shape[0], stride):(Pdb) n> /Users/sseibert/continuum/conferences/gtc2017-numba/debug/ex2.py(19)histogram()-> bin_number = np.int32((x[i] + xmin)/bin_width)(Pdb) n> /Users/sseibert/continuum/conferences/gtc2017-numba/debug/ex2.py(21)histogram()-> if bin_number >= 0 and bin_number < histogram_out.shape[0]:(Pdb) p bin_number, x[i](-6, -1.4435024)(Pdb) p x[i], xmin, bin_width(-1.4435024, -4.0, 0.80000000000000004)(Pdb) p (x[i] - xmin) / bin_width3.1956219673156738(Pdb) q``` CUDA MemcheckAnother common error occurs when a CUDA kernel has an invalid memory access, typically caused by running off the end of an array. The full CUDA toolkit from NVIDIA (not the `cudatoolkit` conda package) contains a utility called `cuda-memcheck` that can check for a wide range of memory access mistakes in CUDA code.Let's debug the following code:
###Code
! cat debug/ex3.py
! cuda-memcheck python debug/ex3.py
###Output
_____no_output_____
###Markdown
The output of `cuda-memcheck` is clearly showing a problem with our histogram function:```========= Invalid __global__ write of size 4========= at 0x00000548 in cudapy::__main__::histogram$241(Array, float, float, Array)```But we don't know which line it is. To get better error information, we can turn "debug" mode on when compiling the kernel, by changing the kernel to look like this:``` [email protected](debug=True)def histogram(x, xmin, xmax, histogram_out): nbins = histogram_out.shape[0]```
###Code
! cuda-memcheck python debug/ex3a.py
###Output
_____no_output_____
###Markdown
Now we get an error message that includes a source file and line number: `ex3a.py:17`.
###Code
! cat -n debug/ex3a.py | grep -C 2 "17"
###Output
_____no_output_____
###Markdown
At this point, we might realize that our if statement incorrectly has an `or` instead of an `and`.`cuda-memcheck` has different modes for detecting different kinds of problems (similar to `valgrind` for debugging CPU memory access errors). Take a look at the documentation for more information: http://docs.nvidia.com/cuda/cuda-memcheck/ Appendix: CUDA ReferencesIt's worth bookmarking Chapters 1 and 2 of the CUDA C Programming Guide for study after the completion of this course. They are written for CUDA C, but are still highly applicable to programming CUDA Python. * Introduction: http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.htmlintroduction * Programming Model: http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.htmlprogramming-model Appendix: Random Number Generation on the GPU with NumbaGPUs can be extremely useful for Monte Carlo applications where you need to use large amounts of random numbers. CUDA ships with an excellent set of random number generation algorithms in the cuRAND library. Unfortunately, cuRAND is defined in a set of C headers which Numba can't easily compile or link to. (Numba's CUDA JIT does not ever create C code for CUDA kernels.) It is on the Numba roadmap to find a solution to this problem, but it may take some time.In the meantime, Numba version 0.33 and later includes the `xoroshiro128+` generator, which is pretty high quality, though with a smaller period ($2^{128} - 1$) than the XORWOW generator in cuRAND.To use it, you will want to initialize the RNG state on the host for each thread in your kernel. This state creation function initializes each state to be in the same sequence designated by the seed, but separated by $2^{64}$ steps from each other. This ensures that different threads will not accidentally end up with overlapping sequences (unless a single thread draws $2^{64}$ random numbers, which you won't have patience for):
###Code
import numpy as np
from numba import cuda
from numba.cuda.random import create_xoroshiro128p_states, xoroshiro128p_uniform_float32
threads_per_block = 64
blocks = 24
rng_states = create_xoroshiro128p_states(threads_per_block * blocks, seed=1)
###Output
_____no_output_____
###Markdown
We can use these random number states in our kernel by passing it in as an argument:
###Code
@cuda.jit
def monte_carlo_mean(rng_states, iterations, out):
thread_id = cuda.grid(1)
total = 0
for i in range(iterations):
sample = xoroshiro128p_uniform_float32(rng_states, thread_id) # Returns a float32 in range [0.0, 1.0)
total += sample
out[thread_id] = total/iterations
out = cuda.device_array(threads_per_block * blocks, dtype=np.float32)
monte_carlo_mean[blocks, threads_per_block](rng_states, 10000, out)
print(out.copy_to_host().mean())
###Output
_____no_output_____
###Markdown
Exercise: Monte Carlo Pi on the GPULet's revisit the Monte Carlo Pi generating algorithm from the first section, where we had compiled it with Numba on the CPU.
###Code
from numba import njit
import random
@njit
def monte_carlo_pi(nsamples):
acc = 0
for i in range(nsamples):
x = random.random()
y = random.random()
if (x**2 + y**2) < 1.0:
acc += 1
return 4.0 * acc / nsamples
nsamples = 10000000
%timeit monte_carlo_pi(nsamples)
###Output
_____no_output_____
###Markdown
Your task is to refactor `monte_carlo_pi_device` below, currently identical to `monte_carlo_pi` above, to run on the GPU. You can use `monte_carlo_mean` above for inspiration, but at the least you will need to:- Decorate to be a CUDA kernel- Draw samples for the thread from the device RNG state (generated 2 cells below)- Store each thread's results in an output array which will be meaned on the host (as `monte_carlo_mean` did above)If you look two cells below you will see that all the data has already been initialized, the execution configuration created, and the kernel launched. All you need to do is refactor the kernel definition in the cell immediately below. Check out [the solution](../../../../edit/tasks/task3/task/solutions/monte_carlo_pi_solution.py) if you get stuck.
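If you would like a hint before opening the linked solution, one possible shape for the refactored kernel is sketched below (this is only one valid approach and an assumption on my part — the provided solution file may differ in its details):

```python
@cuda.jit
def monte_carlo_pi_device(rng_states, nsamples, out):
    # One thread = one independent Monte Carlo estimate of pi
    thread_id = cuda.grid(1)
    acc = 0
    for i in range(nsamples):
        # Each call advances this thread's RNG state, so x and y are distinct draws
        x = xoroshiro128p_uniform_float32(rng_states, thread_id)
        y = xoroshiro128p_uniform_float32(rng_states, thread_id)
        if (x**2 + y**2) < 1.0:
            acc += 1
    out[thread_id] = 4.0 * acc / nsamples  # per-thread estimate, averaged on the host
```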
###Code
from numba import njit
import random
# TODO: All your work will be in this cell. Refactor to run on the device successfully given the way the
# kernel is launched below.
@njit
def monte_carlo_pi_device(nsamples):
acc = 0
for i in range(nsamples):
x = random.random()
y = random.random()
if (x**2 + y**2) < 1.0:
acc += 1
return 4.0 * acc / nsamples
# Do not change any of the values in this cell
nsamples = 10000000
threads_per_block = 128
blocks = 32
grid_size = threads_per_block * blocks
samples_per_thread = int(nsamples / grid_size) # Each thread only needs to work on a fraction of total number of samples.
                                               # This could also be calculated inside the kernel definition using `gridsize(1)`.
rng_states = create_xoroshiro128p_states(grid_size, seed=1)
d_out = cuda.device_array(threads_per_block * blocks, dtype=np.float32)
%time monte_carlo_pi_device[blocks, threads_per_block](rng_states, samples_per_thread, d_out); cuda.synchronize()
print(d_out.copy_to_host().mean())
###Output
_____no_output_____ |
animated_plots/ig_fft_animation_qpsk.ipynb | ###Markdown
Intensity Graded FFT Animation with QPSK
###Code
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.animation import FuncAnimation
import math
from IPython.display import HTML, Image
#Test QPSK Signal
num_symbols = 256*10240
sps = 2
x_int = np.random.randint(0, 4, num_symbols) # 0 to 3
x_int = np.repeat(x_int, sps, axis=0)
x_degrees = x_int*360/4.0 + 45 # 45, 135, 225, 315 degrees
x_radians = x_degrees*np.pi/180.0 # sin() and cos() takes in radians
x_symbols = np.cos(x_radians) + 1j*np.sin(x_radians) # this produces our QPSK complex symbols
# Create our raised-cosine filter
num_taps = 101
beta = 0.35
Ts = sps # Assume sample rate is 1 Hz, so sample period is 1 and the *symbol* period is sps (2 samples here)
t = np.arange(-51, 52) # remember it's not inclusive of final number
h = np.sinc(t/Ts) * np.cos(np.pi*beta*t/Ts) / (1 - (2*beta*t/Ts)**2)
# Filter our signal, in order to apply the pulse shaping
x_shaped = np.convolve(x_symbols, h)
n = (np.random.randn(len(x_shaped)) + 1j*np.random.randn(len(x_shaped)))/np.sqrt(2) # AWGN with unity power
noise_power = 0.001
r = x_shaped + n * np.sqrt(noise_power)
samples = r
#Animate function that iterates over fft data
def animate(i, im, norm_fft_array, mag_steps, fft_len, fft_div):
mag_step = 1/mag_steps
if i == 0:
hitmap_array = im.get_array()*np.exp(-10)
else:
hitmap_array = im.get_array()*np.exp(-0.04)
for m in range(fft_len):
hit_mag = int(norm_fft_array[i][m]/mag_step)
hitmap_array[hit_mag][int(m/fft_div)] = hitmap_array[hit_mag][int(m/fft_div)] + .5
#hitmap_array_db = 10.0 * np.log10(hitmap_array)
im.set_array(hitmap_array)
return [im]
#Function
def fft_intensity_animate(samples: np.ndarray, fft_len: int = 256, fft_div: int = 2, mag_steps: int = 100):
num_ffts = math.floor(len(samples)/fft_len)
fft_array = []
for i in range(num_ffts):
temp = np.fft.fftshift(np.fft.fft(samples[i*fft_len:(i+1)*fft_len]))
temp_mag = 20.0 * np.log10(np.abs(temp))
fft_array.append(temp_mag)
max_mag = np.amax(fft_array)
min_mag = np.abs(np.amin(fft_array))
norm_fft_array = fft_array
for i in range(num_ffts):
norm_fft_array[i] = (fft_array[i]+min_mag)/(max_mag+min_mag)
#animation place holder
fig = plt.figure()
a = np.random.random((mag_steps+1,int(fft_len/fft_div)))
im = plt.imshow(a, origin='lower', cmap='plasma', interpolation='bilinear')
#compute animation
anim = FuncAnimation(fig, animate, frames=1000, fargs = (im,norm_fft_array,mag_steps,fft_len,fft_div), interval=1, blit=True)
return(anim)
###Output
_____no_output_____
###Markdown
Animation
###Code
# Test parameters
fft_len = 1024
fft_div = 2
mag_steps = 400
anim = fft_intensity_animate(samples, fft_len, fft_div, mag_steps)
###Output
_____no_output_____
###Markdown
Save Animation
###Code
anim.save('fft_qpsk_animation.mp4', fps=30, extra_args=['-vcodec', 'libx264'])
###Output
_____no_output_____
###Markdown
Optional GIF Animation
###Code
#anim.save('fft_animation.gif', fps=30, writer='pillow')
#Image(url='fft_animation.gif')
###Output
_____no_output_____ |
notebooks/S15C_Spark_SQL_Notes.ipynb | ###Markdown
Spark SQL====- [Official Documentation](http://spark.apache.org/docs/latest/sql-programming-guide.html)A tour of the Spark SQL library, the `spark-csv` package and Spark DataFrames. Resources------ [Spark tutorials](http://www.sparktutorials.net/tutorials): A growing bunch of accessible tutorials on Spark, mostly in Scala but a few in Python.
###Code
from pyspark import SparkContext, SparkConf
conf = (SparkConf()
.setAppName('SparkSQL')
.setMaster('local[*]'))
sc = SparkContext(conf=conf)
from pyspark.sql import SQLContext
sqlc = SQLContext(sc)
###Output
_____no_output_____
###Markdown
DataFrame from `pandas`----
###Code
pandas_df = sns.load_dataset('iris')
spark_df = sqlc.createDataFrame(pandas_df)
spark_df.show(n=3)
###Output
+------------+-----------+------------+-----------+-------+
|sepal_length|sepal_width|petal_length|petal_width|species|
+------------+-----------+------------+-----------+-------+
| 5.1| 3.5| 1.4| 0.2| setosa|
| 4.9| 3.0| 1.4| 0.2| setosa|
| 4.7| 3.2| 1.3| 0.2| setosa|
+------------+-----------+------------+-----------+-------+
only showing top 3 rows
###Markdown
DataFrame from CSV files Using manual parsing and a schema
###Code
%%bash
cat data/cars.csv
from pyspark.sql.types import *
def pad(alist):
tmp = alist[:]
n = 5 - len(alist)
for i in range(n):
tmp.append('')
return tmp
# Load a text file and convert each line to a tuple.
lines = sc.textFile('data/cars.csv')
header = lines.first() #extract header
lines = lines.filter(lambda line: line != header)
lines = lines.filter(lambda line: line)
parts = lines.map(lambda l: l.split(','))
parts = parts.map(lambda part: pad(part))
fields = [
StructField('year', IntegerType(), True),
StructField('make', StringType(), True),
StructField('model', StringType(), True),
StructField('comment', StringType(), True),
StructField('blank', StringType(), True),
]
schema = StructType(fields)
# Apply the schema to the RDD.
df0 = sqlc.createDataFrame(parts, schema)
df0.show(n=3)
###Output
+----+-------+-----+--------------------+-----+
|year| make|model| comment|blank|
+----+-------+-----+--------------------+-----+
|null|"Tesla"| "S"| "No comment"| |
|null| Ford| E350|"Go get one now t...| |
|null| Chevy| Volt| | |
+----+-------+-----+--------------------+-----+
###Markdown
Using the `spark-csv` package
###Code
df = (sqlc.read.format('com.databricks.spark.csv')
.options(header='true', inferschema='true')
.load('data/cars.csv'))
###Output
_____no_output_____
###Markdown
Using the dataframe
###Code
df.printSchema()
df.show()
df.select(['year', 'make']).show()
###Output
+----+-----+
|year| make|
+----+-----+
|2012|Tesla|
|1997| Ford|
|2015|Chevy|
+----+-----+
###Markdown
To run SQL queries, we need to register the dataframe as a table
###Code
df.registerTempTable('cars')
q = sqlc.sql('select year, make from cars where year > 2000')
q.show()
###Output
+----+-----+
|year| make|
+----+-----+
|2012|Tesla|
|2015|Chevy|
+----+-----+
###Markdown
Spark dataframes can be converted to Pandas onesTypically, we would only convert small dataframes such as the results of SQL queries. If we could load the original dataset in memory as a pandas dataframe, why would we be using Spark?
###Code
q_df = q.toPandas()
q_df
###Output
_____no_output_____
###Markdown
DataFrame from JSON files----It is easier to read in JSON than CSV files because JSON is self-describing, allowing Spark SQL to infer the appropriate schema without additional hints.As an example, we will look at Durham police crime reports from the [Durham Open Data](https://opendurham.nc.gov/page/home/) website.
###Code
df = sqlc.read.json('data/durham-police-crime-reports.json')
###Output
_____no_output_____
###Markdown
How many records are there?
###Code
df.count()
###Output
_____no_output_____
###Markdown
Since this is JSON, it is possible to have a nested schema.
###Code
df.printSchema()
###Output
root
|-- datasetid: string (nullable = true)
|-- fields: struct (nullable = true)
| |-- addtime: string (nullable = true)
| |-- big_zone: string (nullable = true)
| |-- chrgdesc: string (nullable = true)
| |-- csstatus: string (nullable = true)
| |-- csstatusdt: string (nullable = true)
| |-- date_fnd: string (nullable = true)
| |-- date_occu: string (nullable = true)
| |-- date_rept: string (nullable = true)
| |-- dist: string (nullable = true)
| |-- dow1: string (nullable = true)
| |-- dow2: string (nullable = true)
| |-- geo_point_2d: array (nullable = true)
| | |-- element: double (containsNull = true)
| |-- geo_shape: struct (nullable = true)
| | |-- coordinates: array (nullable = true)
| | | |-- element: double (containsNull = true)
| | |-- type: string (nullable = true)
| |-- hour_fnd: string (nullable = true)
| |-- hour_occu: string (nullable = true)
| |-- hour_rept: string (nullable = true)
| |-- inci_id: string (nullable = true)
| |-- monthstamp: string (nullable = true)
| |-- reportedas: string (nullable = true)
| |-- reviewdate: string (nullable = true)
| |-- strdate: string (nullable = true)
| |-- ucr_code: string (nullable = true)
| |-- ucr_type_o: string (nullable = true)
| |-- yearstamp: string (nullable = true)
|-- geometry: struct (nullable = true)
| |-- coordinates: array (nullable = true)
| | |-- element: double (containsNull = true)
| |-- type: string (nullable = true)
|-- record_timestamp: string (nullable = true)
|-- recordid: string (nullable = true)
###Markdown
Show the top few rows.
###Code
df.show(n=5)
###Output
+--------------------+--------------------+--------------------+--------------------+--------------------+
| datasetid| fields| geometry| record_timestamp| recordid|
+--------------------+--------------------+--------------------+--------------------+--------------------+
|durham-police-cri...|[2013-12-01T19:00...|[WrappedArray(-78...|2016-03-12T02:32:...|2c0251654c4b7a006...|
|durham-police-cri...|[2013-12-01T19:00...|[WrappedArray(-78...|2016-03-12T02:32:...|e5fe0e483fdb17fb7...|
|durham-police-cri...|[2013-12-01T19:00...|[WrappedArray(-78...|2016-03-12T02:32:...|d16c330ea4b3e2a90...|
|durham-police-cri...|[2013-12-01T19:00...|[WrappedArray(-78...|2016-03-12T02:32:...|1128e12a912b16cfe...|
|durham-police-cri...|[2013-12-01T19:00...|[WrappedArray(-78...|2016-03-12T02:32:...|ac79bc9c709d5dfa4...|
+--------------------+--------------------+--------------------+--------------------+--------------------+
only showing top 5 rows
###Markdown
Make a dataframe only containing date and charges.
###Code
df.select(['fields.strdate', 'fields.chrgdesc']).show(n=5)
###Output
+-----------+--------------------+
| strdate| chrgdesc|
+-----------+--------------------+
|Dec 2 2013|CALLS FOR SERVICE...|
|Dec 2 2013|VANDALISM TO PROP...|
|Dec 2 2013|BURGLARY - FORCIB...|
|Dec 2 2013|LARCENY - SHOPLIF...|
|Dec 2 2013|BURGLARY - FORCIB...|
+-----------+--------------------+
only showing top 5 rows
###Markdown
Show distinct charges - note that for an actual analysis, you would probably want to consolidate these into a smaller number of groups to account for typos, etc.
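One rough way to do that consolidation later on is sketched below (the split rule and the `offense_group` column name are assumptions for illustration, not part of the original analysis):

```python
from pyspark.sql import functions as F

# Take the text before the first '-' as a coarse offense group,
# e.g. 'BURGLARY - FORCIBLE ENTRY' -> 'BURGLARY'
grouped = df.withColumn('offense_group',
                        F.trim(F.split(F.col('fields.chrgdesc'), '-').getItem(0)))
grouped.groupby('offense_group').count().sort('count', ascending=False).show()
```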
###Code
df.select('fields.chrgdesc').distinct().show()
###Output
+--------------------+
| chrgdesc|
+--------------------+
|ALL OTHER OFFENSE...|
|DRUG EQUIPMENT/PA...|
| ASSIST OTHER AGENCY|
|TOWED/ABANDONED V...|
|DRUG EQUIPMENT/PA...|
|BURGLARY - FORCIB...|
|SEX OFFENSE - STA...|
|ROBBERY - INDIVIDUAL|
|WEAPON VIOLATIONS...|
|ALL OTHER OFFENSE...|
|DRUG/NARCOTIC VIO...|
|SEX OFFENSE - PEE...|
|DRUG/NARCOTIC VIO...|
|DRUG/NARCOTIC VIO...|
|AGGRAVATED ASSAUL...|
|ALL OTHER OFFENSE...|
|LIQUOR LAW - POSS...|
|EMBEZZLEMENT - WI...|
|WEAPON VIOLATIONS...|
| RUNAWAY|
+--------------------+
only showing top 20 rows
###Markdown
What charges are the most common?
###Code
df.groupby('fields.chrgdesc').count().sort('count', ascending=False).show()
###Output
+--------------------+-----+
| chrgdesc|count|
+--------------------+-----+
|BURGLARY - FORCIB...|11630|
|LARCENY - SHOPLIF...| 7633|
|LARCENY - FROM MO...| 7405|
|SIMPLE ASSAULT (P...| 5085|
| LARCENY - ALL OTHER| 4666|
|LARCENY - FROM BU...| 4514|
|VANDALISM TO AUTO...| 4112|
|DRUG/NARCOTIC VIO...| 3790|
|LARCENY - AUTOMOB...| 3441|
|VANDALISM TO PROP...| 3422|
|CALLS FOR SERVICE...| 3207|
| AGGRAVATED ASSAULT| 3183|
|BURGLARY - NON-FO...| 2339|
|ROBBERY - INDIVIDUAL| 2330|
|TOWED/ABANDONED V...| 2244|
|MOTOR VEHICLE THE...| 1970|
|DRIVING WHILE IMP...| 1912|
|FRAUD - FALSE PRE...| 1660|
| FOUND PROPERTY| 1643|
|ALL TRAFFIC (EXCE...| 1436|
+--------------------+-----+
only showing top 20 rows
###Markdown
Register as table to run full SQL queries
###Code
df.registerTempTable('crimes')
q = sqlc.sql('''
select fields.chrgdesc, count(fields.chrgdesc) as count
from crimes
where fields.monthstamp=3
group by fields.chrgdesc
''')
q.show()
###Output
+--------------------+-----+
| chrgdesc|count|
+--------------------+-----+
|ALL OTHER OFFENSE...| 1|
|TOWED/ABANDONED V...| 258|
| ASSIST OTHER AGENCY| 19|
|BURGLARY - FORCIB...| 929|
|SEX OFFENSE - STA...| 3|
|ROBBERY - INDIVIDUAL| 157|
|WEAPON VIOLATIONS...| 6|
|SEX OFFENSE - PEE...| 5|
|ALL OTHER OFFENSE...| 8|
|DRUG/NARCOTIC VIO...| 14|
|DRUG/NARCOTIC VIO...| 28|
|AGGRAVATED ASSAUL...| 1|
|LIQUOR LAW - POSS...| 2|
|ALL OTHER OFFENSE...| 3|
|EMBEZZLEMENT - WI...| 7|
|WEAPON VIOLATIONS...| 1|
| RUNAWAY| 87|
| MISSING PERSON| 16|
|SIMPLE ASSAULT-PH...| 3|
|ALL OTHER OFFENSE...| 22|
+--------------------+-----+
only showing top 20 rows
###Markdown
Convert to `pandas`
###Code
crimes_df = q.toPandas()
crimes_df.head()
###Output
_____no_output_____
###Markdown
DataFrame from SQLite3----The official docs suggest that this can be done directly via JDBC but I cannot get it to work. As a workaround, you can convert to JSON before importing as a dataframe. If anyone finds out how to load an SQLite3 database table directly into a Spark dataframe, please let me know.
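For reference, the JDBC route described in the docs looks roughly like the sketch below — this is the approach I could not get to work, and it assumes the SQLite JDBC driver jar (e.g. `org.xerial:sqlite-jdbc`, class `org.sqlite.JDBC`) has been added to Spark's classpath:

```python
df_album = (sqlc.read.format('jdbc')
            .options(url='jdbc:sqlite:../data/Chinook_Sqlite.sqlite',
                     dbtable='Album',
                     driver='org.sqlite.JDBC')
            .load())
```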
###Code
from odo import odo
odo('sqlite:///../data/Chinook_Sqlite.sqlite::Album', 'Album.json')
df = sqlc.read.json('Album.json')
df.show(n=3)
###Output
+-------+--------+--------------------+
|AlbumId|ArtistId| Title|
+-------+--------+--------------------+
| 1| 1|For Those About T...|
| 2| 2| Balls to the Wall|
| 3| 2| Restless and Wild|
+-------+--------+--------------------+
only showing top 3 rows
###Markdown
DataSets----In Scala and Java, Spark 1.6 introduced a new type called `DataSet` that combines the relational properties of a `DataFrame` with the functional methods of an `RDD`. This will be available in Python in a later version. However, because of the dynamic nature of Python, you can already call functional methods on a Spark `Dataframe`, giving most of the ease of use of the `DataSet` type.
###Code
ds = sqlc.read.text('../data/Ulysses.txt')
ds
ds.show(n=3)
def remove_punctuation(s):
import string
return s.translate(dict.fromkeys(ord(c) for c in string.punctuation))
counts = (ds.map(lambda x: remove_punctuation(x[0]))
.flatMap(lambda x: x.lower().strip().split())
.filter(lambda x: x!= '')
.map(lambda x: (x, 1))
.countByKey())
sorted(counts.items(), key=lambda x: x[1], reverse=True)[:10]
###Output
_____no_output_____
###Markdown
**Optional Exercise**The crime data set includes both date and geospatial information. Consider creating an interactive map visualization of crimes in Durham by date using the `bokeh` package. See this [example](http://bokeh.pydata.org/en/0.11.1/docs/user_guide/geo.html) to get started. GeoJSON version of the Durham Police Crime Reports can be [downloaded](https://opendurham.nc.gov/explore/dataset/durham-police-crime-reports/download/?format=geojson&timezone=America/New_York). Version information
###Code
%load_ext version_information
%version_information pyspark
###Output
_____no_output_____ |
Tensorflow Developer Certificate Specialization/C4 - Sequences, Time Series and Prediction/W3/assignment/C4_W3_Assignment.ipynb | ###Markdown
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(False)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.1,
np.cos(season_time * 6 * np.pi),
2 / np.exp(9 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(10 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.005
noise_level = 3
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=51)
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
plot_series(time, series)
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1), input_shape=[None]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 100.0)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
# FROM THIS PICK A LEARNING RATE
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1), input_shape=[None]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 100.0)
])
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-6, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
              optimizer=optimizer,
              metrics=["mae"])
# Train at the fixed learning rate chosen from the LR sweep above; the
# LearningRateScheduler callback is deliberately left out here so it does not
# override the chosen rate.
history = model.fit(dataset, epochs=100)
forecast = []
results = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
# YOUR RESULT HERE SHOULD BE LESS THAN 4
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
###Output
_____no_output_____ |
Rethinking/Chp_06x.ipynb | ###Markdown
Code 6.1
###Code
# Assumed imports for this notebook (the original import cell is not included in this excerpt)
import numpy as np, pandas as pd, pymc3 as pm, theano
import statsmodels.api as sm, statsmodels.formula.api as smf, matplotlib.pyplot as plt
from scipy import stats
from scipy.special import logsumexp
data = {'species' : ['afarensis', 'africanus', 'habilis', 'boisei', 'rudolfensis', 'ergaster', 'sapiens'],
        'brain' : [438, 452, 612, 521, 752, 871, 1350],
        'mass' : [37., 35.5, 34.5, 41.5, 55.5, 61.0, 53.5]}
d = pd.DataFrame(data)
d
###Output
_____no_output_____
###Markdown
Code 6.2
###Code
m_6_1 = smf.ols('brain ~ mass', data=d).fit()
###Output
_____no_output_____
###Markdown
Code 6.3
###Code
1 - m_6_1.resid.var()/d.brain.var()
# m_6_1.summary() check the value for R-squared
###Output
_____no_output_____
###Markdown
Code 6.4
###Code
m_6_2 = smf.ols('brain ~ mass + I(mass**2)', data=d).fit()
###Output
_____no_output_____
###Markdown
Code 6.5
###Code
m_6_3 = smf.ols('brain ~ mass + I(mass**2) + I(mass**3)', data=d).fit()
m_6_4 = smf.ols('brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4)', data=d).fit()
m_6_5 = smf.ols('brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4) + I(mass**5)', data=d).fit()
m_6_6 = smf.ols('brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4) + I(mass**5) + I(mass**6)', data=d).fit()
###Output
_____no_output_____
###Markdown
Code 6.6
###Code
m_6_7 = smf.ols('brain ~ 1', data=d).fit()
###Output
_____no_output_____
###Markdown
Code 6.7
###Code
d_new = d.drop(d.index[-1])
###Output
_____no_output_____
###Markdown
Code 6.8
###Code
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8,3))
ax1.scatter(d.mass, d.brain, alpha=0.8)
ax2.scatter(d.mass, d.brain, alpha=0.8)
for i in range(len(d)):
d_new = d.drop(d.index[-i])
m0 = smf.ols('brain ~ mass', d_new).fit()
# need to calculate regression line
# need to add intercept term explicitly
x = sm.add_constant(d_new.mass) # add constant to new data frame with mass
x_pred = pd.DataFrame({'mass': np.linspace(x.mass.min() - 10, x.mass.max() + 10, 50)}) # create linspace dataframe
x_pred2 = sm.add_constant(x_pred) # add constant to newly created linspace dataframe
y_pred = m0.predict(x_pred2) # calculate predicted values
ax1.plot(x_pred, y_pred, 'gray', alpha=.5)
ax1.set_ylabel('body mass (kg)', fontsize=12);
ax1.set_xlabel('brain volume (cc)', fontsize=12)
ax1.set_title('Underfit model')
# fifth order model
m1 = smf.ols('brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4) + I(mass**5)', data=d_new).fit()
x = sm.add_constant(d_new.mass) # add constant to new data frame with mass
x_pred = pd.DataFrame({'mass': np.linspace(x.mass.min()-10, x.mass.max()+10, 200)}) # create linspace dataframe
x_pred2 = sm.add_constant(x_pred) # add constant to newly created linspace dataframe
y_pred = m1.predict(x_pred2) # calculate predicted values from fitted model
ax2.plot(x_pred, y_pred, 'gray', alpha=.5)
ax2.set_xlim(32,62)
ax2.set_ylim(-250, 2200)
ax2.set_ylabel('body mass (kg)', fontsize=12);
ax2.set_xlabel('brain volume (cc)', fontsize=12)
ax2.set_title('Overfit model')
plt.show()
###Output
_____no_output_____
###Markdown
Code 6.9
###Code
p = (0.3, 0.7)
-sum(p * np.log(p))
###Output
_____no_output_____
###Markdown
Code 6.10
###Code
# fit model
m_6_1 = smf.ols('brain ~ mass', data=d).fit()
# compute the deviance by cheating
-2 * m_6_1.llf
###Output
_____no_output_____
###Markdown
Code 6.11
###Code
# standardize the mass before fitting
d['mass_s'] = (d['mass'] - np.mean(d['mass'])) / np.std(d['mass'])
with pm.Model() as m_6_8 :
a = pm.Normal('a', mu=np.mean(d['brain']), sd=10)
b = pm.Normal('b', mu=0, sd=10)
sigma = pm.Uniform('sigma', 0, np.std(d['brain']) * 10)
mu = pm.Deterministic('mu', a + b * d['mass_s'])
brain = pm.Normal('brain', mu = mu, sd = sigma, observed = d['brain'])
m_6_8 = pm.sample(2000, tune=5000)
theta = pm.summary(m_6_8)['mean'][:3]
#compute deviance
dev = - 2 * sum(stats.norm.logpdf(d['brain'], loc = theta[0] + theta[1] * d['mass_s'] , scale = theta[2]))
dev
###Output
_____no_output_____
###Markdown
Code 6.12 [This](https://github.com/rmcelreath/rethinking/blob/a309712d904d1db7af1e08a76c521ab994006fd5/R/sim_train_test.R) is the original function.
###Code
# This function only works with number of parameters >= 2
def sim_train_test(N=20, k=3, rho=[0.15, -0.4], b_sigma=100):
n_dim = 1 + len(rho)
if n_dim < k:
n_dim = k
Rho = np.diag(np.ones(n_dim))
Rho[0, 1:3:1] = rho
i_lower = np.tril_indices(n_dim, -1)
Rho[i_lower] = Rho.T[i_lower]
x_train = stats.multivariate_normal.rvs(cov=Rho, size=N)
x_test = stats.multivariate_normal.rvs(cov=Rho, size=N)
mm_train = np.ones((N,1))
np.concatenate([mm_train, x_train[:, 1:k]], axis=1)
#Using pymc3
with pm.Model() as m_sim:
vec_V = pm.MvNormal('vec_V', mu=0, cov=b_sigma * np.eye(n_dim),
shape=(1, n_dim), testval=np.random.randn(1, n_dim)*.01)
mu = pm.Deterministic('mu', 0 + pm.math.dot(x_train, vec_V.T))
y = pm.Normal('y', mu=mu, sd=1, observed=x_train[:, 0])
with m_sim:
trace_m_sim = pm.sample()
vec = pm.summary(trace_m_sim)['mean'][:n_dim]
vec = np.array([i for i in vec]).reshape(n_dim, -1)
dev_train = - 2 * sum(stats.norm.logpdf(x_train, loc = np.matmul(x_train, vec), scale = 1))
mm_test = np.ones((N,1))
mm_test = np.concatenate([mm_test, x_test[:, 1:k +1]], axis=1)
dev_test = - 2 * sum(stats.norm.logpdf(x_test[:,0], loc = np.matmul(mm_test, vec), scale = 1))
return np.mean(dev_train), np.mean(dev_test)
n = 20
tries = 10
param = 6
r = np.zeros(shape=(param - 1, 4))
train = []
test = []
for j in range(2, param + 1):
print(j)
for i in range(1, tries + 1):
tr, te = sim_train_test(N=n, k=param)
train.append(tr), test.append(te)
r[j -2, :] = np.mean(train), np.std(train, ddof=1), np.mean(test), np.std(test, ddof=1)
###Output
2
###Markdown
Code 6.14
###Code
num_param = np.arange(2, param + 1)
plt.figure(figsize=(10, 6))
plt.scatter(num_param, r[:, 0], color='C0')
plt.xticks(num_param)
for j in range(param - 1):
plt.vlines(num_param[j], r[j,0] - r[j, 1], r[j,0] + r[j,1], color='mediumblue',
zorder=-1, alpha=0.80)
plt.scatter(num_param + 0.1, r[:, 2], facecolors='none', edgecolors='k')
for j in range(param - 1):
plt.vlines(num_param[j] + 0.1, r[j,2] - r[j, 3], r[j,2] + r[j,3], color='k',
zorder=-2, alpha=0.70)
dist = 0.20
plt.text(num_param[1] - dist, r[1, 0] - dist, 'in', color='C0', fontsize=13)
plt.text(num_param[1] + dist, r[1, 2] - dist, 'out', color='k', fontsize=13)
plt.text(num_param[1] + dist, r[1, 2] + r[1,3] - dist, '+1 SD', color='k', fontsize=10)
plt.text(num_param[1] + dist, r[1, 2] - r[1,3] - dist, '-1 SD', color='k', fontsize=10)
plt.xlabel('Number of parameters', fontsize=14)
plt.ylabel('Deviance', fontsize=14)
plt.title('N = {}'.format(n), fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Code 6.15
###Code
data = pd.read_csv('Data/cars.csv', sep=',')
with pm.Model() as m_6_15 :
a = pm.Normal('a', mu=0, sd=100)
b = pm.Normal('b', mu=0, sd=10)
sigma = pm.Uniform('sigma', 0, 30)
mu = pm.Deterministic('mu', a + b * data['speed'])
dist = pm.Normal('dist', mu=mu, sd=sigma, observed = data['dist'])
m_6_15 = pm.sample(5000, tune=10000)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma_interval__, b, a]
100%|██████████| 15000/15000 [00:36<00:00, 412.92it/s]
The acceptance probability does not match the target. It is 0.8918815238809351, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Code 6.16
###Code
n_samples = 1000
n_cases = data.shape[0]
ll = np.zeros((n_cases, n_samples))
for s in range(0, n_samples):
mu = m_6_15['a'][s] + m_6_15['b'][s] * data['speed']
p_ = stats.norm.logpdf(data['dist'], loc=mu, scale=m_6_15['sigma'][s])
ll[:,s] = p_
###Output
_____no_output_____
###Markdown
Code 6.17
###Code
n_cases = data.shape[0]
lppd = np.zeros((n_cases))
for a in range(n_cases):
lppd[a,] = logsumexp(ll[a,]) - np.log(n_samples)
###Output
_____no_output_____
###Markdown
Code 6.18
###Code
pWAIC = np.zeros((n_cases))
for i in range(n_cases):
pWAIC[i,] = np.var(ll[i,])
###Output
_____no_output_____
###Markdown
Code 6.19
###Code
- 2 * (sum(lppd) - sum(pWAIC))
###Output
_____no_output_____
###Markdown
Code 6.20
###Code
waic_vec = - 2 * (lppd - pWAIC)
np.sqrt(n_cases * np.var(waic_vec))
###Output
_____no_output_____
###Markdown
Code 6.21
###Code
d = pd.read_csv('Data/milk.csv', sep=';')
d['neocortex'] = d['neocortex.perc'] / 100
d.dropna(inplace=True)
d.shape
###Output
_____no_output_____
###Markdown
Code 6.22
###Code
a_start = d['kcal.per.g'].mean()
sigma_start = d['kcal.per.g'].std()
mass_shared = theano.shared(np.log(d['mass'].values))
neocortex_shared = theano.shared(d['neocortex'].values)
with pm.Model() as m6_11:
alpha = pm.Normal('alpha', mu=0, sd=10, testval=a_start)
mu = alpha + 0 * neocortex_shared
sigma = pm.HalfCauchy('sigma',beta=10, testval=sigma_start)
kcal = pm.Normal('kcal', mu=mu, sd=sigma, observed=d['kcal.per.g'])
trace_m6_11 = pm.sample(1000, tune=1000)
with pm.Model() as m6_12:
alpha = pm.Normal('alpha', mu=0, sd=10, testval=a_start)
beta = pm.Normal('beta', mu=0, sd=10)
sigma = pm.HalfCauchy('sigma',beta=10, testval=sigma_start)
mu = alpha + beta * neocortex_shared
kcal = pm.Normal('kcal', mu=mu, sd=sigma, observed=d['kcal.per.g'])
trace_m6_12 = pm.sample(5000, tune=15000)
with pm.Model() as m6_13:
alpha = pm.Normal('alpha', mu=0, sd=10, testval=a_start)
beta = pm.Normal('beta', mu=0, sd=10)
sigma = pm.HalfCauchy('sigma', beta=10, testval=sigma_start)
mu = alpha + beta * mass_shared
kcal = pm.Normal('kcal', mu=mu, sd=sigma, observed=d['kcal.per.g'])
trace_m6_13 = pm.sample(1000, tune=1000)
with pm.Model() as m6_14:
alpha = pm.Normal('alpha', mu=0, sd=10, testval=a_start)
beta = pm.Normal('beta', mu=0, sd=10, shape=2)
sigma = pm.HalfCauchy('sigma', beta=10, testval=sigma_start)
mu = alpha + beta[0] * mass_shared + beta[1] * neocortex_shared
kcal = pm.Normal('kcal', mu=mu, sd=sigma, observed=d['kcal.per.g'])
trace_m6_14 = pm.sample(5000, tune=15000)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma_log__, alpha]
100%|██████████| 2000/2000 [00:02<00:00, 979.98it/s]
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma_log__, beta, alpha]
100%|██████████| 20000/20000 [02:10<00:00, 152.87it/s]
There were 21 divergences after tuning. Increase `target_accept` or reparameterize.
There were 1 divergences after tuning. Increase `target_accept` or reparameterize.
There were 46 divergences after tuning. Increase `target_accept` or reparameterize.
The number of effective samples is smaller than 25% for some parameters.
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma_log__, beta, alpha]
100%|██████████| 2000/2000 [00:02<00:00, 675.77it/s]
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma_log__, beta, alpha]
100%|██████████| 20000/20000 [03:57<00:00, 84.07it/s]
There were 64 divergences after tuning. Increase `target_accept` or reparameterize.
There were 71 divergences after tuning. Increase `target_accept` or reparameterize.
There were 11 divergences after tuning. Increase `target_accept` or reparameterize.
There were 7 divergences after tuning. Increase `target_accept` or reparameterize.
The number of effective samples is smaller than 25% for some parameters.
###Markdown
Code 6.23
###Code
pm.waic(trace_m6_14, m6_14)
###Output
/home/rosgori/Python/pymc3_env/lib/python3.6/site-packages/pymc3/stats.py:211: UserWarning: For one or more samples the posterior variance of the
log predictive densities exceeds 0.4. This could be indication of
WAIC starting to fail see http://arxiv.org/abs/1507.04544 for details
""")
###Markdown
Code 6.24
###Code
compare_df = pm.compare({m6_11 : trace_m6_11,
m6_12 : trace_m6_12,
m6_13 : trace_m6_13,
m6_14 : trace_m6_14}, method='pseudo-BMA')
compare_df.loc[:,'model'] = pd.Series(['m6.11', 'm6.12', 'm6.13', 'm6.14'])
compare_df = compare_df.set_index('model')
compare_df
###Output
/home/rosgori/Python/pymc3_env/lib/python3.6/site-packages/pymc3/stats.py:211: UserWarning: For one or more samples the posterior variance of the
log predictive densities exceeds 0.4. This could be indication of
WAIC starting to fail see http://arxiv.org/abs/1507.04544 for details
""")
###Markdown
Code 6.25
###Code
pm.compareplot(compare_df);
###Output
_____no_output_____
###Markdown
Code 6.26
###Code
diff = np.random.normal(loc=6.7, scale=7.26, size=100000)
sum(diff[diff<0]) / 100000
###Output
_____no_output_____
###Markdown
Code 6.27 Compare function already checks number of observations to be equal.
###Code
coeftab = pd.DataFrame({'m6_11': pm.summary(trace_m6_11)['mean'],
'm6_12': pm.summary(trace_m6_12)['mean'],
'm6_13': pm.summary(trace_m6_13)['mean'],
'm6_14': pm.summary(trace_m6_14)['mean']})
coeftab
###Output
_____no_output_____
###Markdown
Code 6.28
###Code
traces = [trace_m6_11, trace_m6_12, trace_m6_13, trace_m6_14]
models = [m6_11, m6_12, m6_13, m6_14]
plt.figure(figsize=(10, 8))
pm.forestplot(traces, plot_kwargs={'fontsize':14});
###Output
_____no_output_____
###Markdown
Code 6.29
###Code
kcal_per_g = np.repeat(0, 30) # empty outcome
neocortex = np.linspace(0.5, 0.8, 30) # sequence of neocortex
mass = np.repeat(4.5, 30) # average mass
mass_shared.set_value(np.log(mass))
neocortex_shared.set_value(neocortex)
post_pred = pm.sample_ppc(trace_m6_14, samples=10000, model=m6_14)
###Output
100%|██████████| 10000/10000 [00:04<00:00, 2044.40it/s]
###Markdown
Code 6.30
###Code
milk_ensemble = pm.sample_ppc_w(traces, 10000,
models, weights=compare_df.weight.sort_index(ascending=True))
plt.figure(figsize=(8, 6))
plt.plot(neocortex, post_pred['kcal'].mean(0), ls='--', color='C2')
hpd_post_pred = pm.hpd(post_pred['kcal'])
plt.plot(neocortex,hpd_post_pred[:,0], ls='--', color='C2')
plt.plot(neocortex, hpd_post_pred[:,1], ls='--', color='C2')
plt.plot(neocortex, milk_ensemble['kcal'].mean(0), color='C0')
hpd_av = pm.hpd(milk_ensemble['kcal'])
plt.fill_between(neocortex, hpd_av[:,0], hpd_av[:,1], alpha=0.1, color='C0')
plt.scatter(d['neocortex'], d['kcal.per.g'], facecolor='None', edgecolors='C0')
plt.ylim(0.3, 1)
plt.xlabel('neocortex', fontsize=16)
plt.ylabel('kcal.per.g', fontsize=16);
import sys, IPython, scipy, matplotlib, platform
print("This notebook was createad on a computer %s running %s and using:\nPython %s\nIPython %s\nPyMC3 %s\nNumPy %s\nPandas %s\nSciPy %s\nMatplotlib %s\n" % (platform.machine(), ' '.join(platform.linux_distribution()[:2]), sys.version[:5], IPython.__version__, pm.__version__, np.__version__, pd.__version__, scipy.__version__, matplotlib.__version__))
###Output
This notebook was createad on a computer x86_64 running debian stretch/sid and using:
Python 3.6.4
IPython 6.3.1
PyMC3 3.4.1
NumPy 1.14.2
Pandas 0.22.0
SciPy 1.0.1
Matplotlib 2.2.2
|
Crash Course on Python/pygrams_notebooks/utf-8''C1M5L3_Code_Reuse_V2.ipynb | ###Markdown
Code Reuse Let’s put what we learned about code reuse all together. First, let’s look back at **inheritance**. Run the following cell that defines a generic `Animal` class.
###Code
class Animal:
name = ""
category = ""
def __init__(self, name):
self.name = name
def set_category(self, category):
self.category = category
###Output
_____no_output_____
###Markdown
What we have is not enough to do much -- yet. That’s where you come in. In the next cell, define a `Turtle` class that inherits from the `Animal` class. Then go ahead and set its category. For instance, a turtle is generally considered a reptile. Although modern cladistics call this categorization into question, for purposes of this exercise we will say turtles are reptiles!
###Code
class Turtle(Animal):
name = "turtle"
category = "Reptile"
###Output
_____no_output_____
###Markdown
Run the following cell to check whether you correctly defined your `Turtle` class and set its category to reptile.
###Code
print(Turtle.category)
###Output
Reptile
###Markdown
Was the output of the above cell reptile? If not, go back and edit your `Turtle` class making sure that it inherits from the `Animal` class and its category is properly set to reptile. Be sure to re-run that cell once you've finished your edits. Did you get it? If so, great! Next, let’s practice **composition** a little bit. This one will require a second type of `Animal` that is in the same category as the first. For example, since you already created a `Turtle` class, go ahead and create a `Snake` class. Don’t forget that it also inherits from the `Animal` class and that its category should be set to reptile.
###Code
class Snake(Animal):
name = "snake"
category = "Reptile"
###Output
_____no_output_____
###Markdown
Now, let’s say we have a large variety of `Animal`s (such as turtles and snakes) in a Zoo. Below we have the `Zoo` class. We’re going to use it to organize our various `Animal`s. Remember, inheritance says a Turtle is an `Animal`, but a `Zoo` is not an `Animal` and an `Animal` is not a `Zoo` -- though they are related to one another. Fill in the blanks of the `Zoo` class below so that you can use **zoo.add_animal( )** to add instances of the `Animal` subclasses you created above. Once you’ve added them all, you should be able to use **zoo.total_of_category( )** to tell you exactly how many individual `Animal` types the `Zoo` has for each category! Be sure to run the cell once you've finished your edits.
###Code
class Zoo:
def __init__(self):
self.current_animals = {}
def add_animal(self, animal):
self.current_animals[animal.name] = animal.category
def total_of_category(self, category):
result = 0
for animal in self.current_animals.values():
            # compare each stored category to the requested one (case-insensitive, so 'reptile' matches 'Reptile')
            if animal.lower() == category.lower():
result += 1
return result
zoo = Zoo()
###Output
_____no_output_____
###Markdown
Run the following cell to check whether you properly filled in the blanks of your `Zoo` class.
###Code
turtle = Turtle("Turtle") #create an instance of the Turtle class
snake = Snake("Snake") #create an instance of the Snake class
zoo.add_animal(turtle)
zoo.add_animal(snake)
print(zoo.total_of_category("reptile")) #how many zoo animal types in the reptile category
###Output
2
|
sample-notebooks/AzureNotebooks-azure-storage-genomics-giab.ipynb | ###Markdown
Genomics Data Analysis with Azure Jupyter Notebooks- Genome in a Bottle (GIAB) Jupyter notebook is a great tool for data scientists who are working on genomics data analysis. We will demonstrate Azure Jupyter notebook usage with GATK and Picard on an Azure Open Dataset. **Here is the coverage of this notebook:**1. Create an index file for the VCF file2. Convert the VCF file to a table **Dependencies:**This notebook requires the following libraries:- Azure storage `pip install azure-storage-blob==2.1.0`. Please visit [this page](https://github.com/Azure/azure-storage-python/wiki) for frequently encountered problems with this SDK.- Genome Analysis Toolkit (GATK) (*Users need to download GATK from the Broad Institute's webpage into the same compute environment as this notebook: https://github.com/broadinstitute/gatk/releases*)**Important information: This notebook uses the Python 3.6 kernel** 1. Getting the GIAB Genomics data from Azure Open DatasetSeveral public genomics datasets have been uploaded as Azure Open Datasets [here](https://azure.microsoft.com/services/open-datasets/catalog/). We create a blob service linked to these open datasets. Below is an example of the data access procedure for the `Genome In a Bottle - GIAB` datasets from Azure Open Datasets: **1.a. Install the Azure Blob Storage SDK**
###Code
pip install azure-storage-blob==2.1.0
###Output
_____no_output_____
###Markdown
**1.b.Download the targeted file**
###Code
import os
import uuid
import sys
from azure.storage.blob import BlockBlobService, PublicAccess
blob_service_client = BlockBlobService(account_name='datasetgiab', sas_token='sv=2019-02-02&se=2050-01-01T08%3A00%3A00Z&si=prod&sr=c&sig=7qp%2BxGLGc%2BO2MIVzzDZY7GSqEwthyGnhXJ566KoH7As%3D')
blob_service_client.get_blob_to_path('dataset/data/NA12878/analysis/GIAB_integration', 'NIST_RTG_PlatGen_merged_highconfidence_v0.2_Allannotate.vcf.gz', './NIST_RTG_PlatGen_merged_highconfidence_v0.2_Allannotate.vcf.gz')
###Output
_____no_output_____
###Markdown
2. Creates an index for a feature file, e.g. VCF or BED fileThis tool creates an index file for the various kinds of feature-containing files supported by GATK (such as VCF and BED files). An index allows querying features by a genomic interval.
###Code
!./gatk IndexFeatureFile -I NIST_RTG_PlatGen_merged_highconfidence_v0.2_Allannotate.vcf.gz
###Output
_____no_output_____
###Markdown
3. Extract fields from a VCF file to a tab-delimited table This tool extracts specified fields for each variant in a VCF file to a tab-delimited table, which may be easier to work with than a VCF.**INFO/site-level fields**Use the `-F` argument to extract INFO fields; each field will occupy a single column in the output file. The field can be any standard VCF column (e.g. CHROM, ID, QUAL) or any annotation name in the INFO field (e.g. AC, AF). The tool also supports the following additional fields:EVENTLENGTH (length of the event)TRANSITION (1 for a bi-allelic transition (SNP), 0 for bi-allelic transversion (SNP), -1 for INDELs and multi-allelics)HET (count of het genotypes)HOM-REF (count of homozygous reference genotypes)HOM-VAR (count of homozygous variant genotypes)NO-CALL (count of no-call genotypes)TYPE (type of variant, possible values are NO_VARIATION, SNP, MNP, INDEL, SYMBOLIC, and MIXED)VAR (count of non-reference genotypes)NSAMPLES (number of samples)NCALLED (number of called samples)MULTI-ALLELIC (is this variant multi-allelic? true/false)**FORMAT/sample-level fields**Use the `-GF` argument to extract FORMAT/sample-level fields. The tool will create a new column per sample with the name "SAMPLE_NAME.FORMAT_FIELD_NAME" e.g. NA12877.GQ, NA12878.GQ. **Input**A VCF file to convert to a table**Output**A tab-delimited file containing the values of the requested fields in the VCF file.
###Code
!./gatk VariantsToTable -V NIST_RTG_PlatGen_merged_highconfidence_v0.2_Allannotate.vcf.gz -F CHROM -F POS -F TYPE -O outputtable.table
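# Optional: peek at the first few rows of the table produced above
# (assumes pandas is available in this environment; the file is tab-delimited,
# with one column per -F field requested: CHROM, POS, TYPE)
import pandas as pd
pd.read_csv('outputtable.table', sep='\t').head()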
###Output
_____no_output_____ |
module3-ridge-regression/Jay_Adamo_Ridge_Regression_assignment.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 3*--- Ridge Regression AssignmentWe're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.But not just for condos in Tribeca...- [ ] Use a subset of the data where `BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'` and the sale price was more than 100 thousand and less than 2 million.- [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.- [ ] Do one-hot encoding of categorical features.- [ ] Do feature selection with `SelectKBest`.- [ ] Fit a ridge regression model with multiple features. Use the `normalize=True` parameter (or do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html) beforehand — use the scaler's `fit_transform` method with the train set, and the scaler's `transform` method with the test set)- [ ] Get mean absolute error for the test set.- [ ] As always, commit your notebook to your fork of the GitHub repo.The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal. Stretch GoalsDon't worry, you aren't expected to do all these stretch goals! These are just ideas to consider and choose from.- [ ] Add your own stretch goal(s) !- [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥- [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html).- [ ] Learn more about feature selection: - ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance) - [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html) - [mlxtend](http://rasbt.github.io/mlxtend/) library - scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection) - [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
df['SALE_PRICE']
.str.replace('$','')
.str.replace('-','')
.str.replace(',','')
.astype(int)
)
# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
df['BOROUGH'] = df['BOROUGH'].astype(str)
# Reduce cardinality for NEIGHBORHOOD feature
# Get a list of the top 10 neighborhoods
top10 = df['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
df.head()
df.shape
###Output
_____no_output_____
###Markdown
- [ ] Use a subset of the data where `BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'` and the sale price was more than 100 thousand and less than 2 million.- [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.- [ ] Do one-hot encoding of categorical features.- [ ] Do feature selection with `SelectKBest`.- [ ] Fit a ridge regression model with multiple features. Use the `normalize=True` parameter (or do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html) beforehand — use the scaler's `fit_transform` method with the train set, and the scaler's `transform` method with the test set)- [ ] Get mean absolute error for the test set.
###Code
# creating subset of data:
# Building class is equal to '01 ONE FAMILY DWELLINGS'
# Sale price more than 100,000 and less than 2,000,000
df_sub = df[(df['BUILDING_CLASS_CATEGORY'] == '01 ONE FAMILY DWELLINGS') & (df['SALE_PRICE'] < 2000000) &
(df['SALE_PRICE'] > 100000)]
df_sub
df_sub.shape
###Output
_____no_output_____
###Markdown
- [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.- [ ] Do one-hot encoding of categorical features.- [ ] Do feature selection with `SelectKBest`.- [ ] Fit a ridge regression model with multiple features. Use the `normalize=True` parameter (or do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html) beforehand — use the scaler's `fit_transform` method with the train set, and the scaler's `transform` method with the test set)- [ ] Get mean absolute error for the test set.
###Code
# need to format date and time for test/train data
df_sub['SALE_DATE'] = pd.to_datetime(df_sub['SALE_DATE'], infer_datetime_format=True)
df_sub['SALE_DATE']
# train/test split
# train = January - March 2019
# test = April 2019
train = df_sub[(df_sub['SALE_DATE'] >= '2019-01-01') & (df_sub['SALE_DATE'] <= '2019-03-31')]
test = df_sub[(df_sub['SALE_DATE'] >= '2019-04-01') & (df_sub['SALE_DATE'] <= '2019-04-30')]
train.shape, test.shape
###Output
_____no_output_____
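###Markdown
As a quick sanity check of the split (a small added sketch reusing `train` and `test` from the cell above), the latest training sale date should fall before the earliest test sale date.
###Code
# Verify the two windows do not overlap in time.
print(train['SALE_DATE'].max(), '<', test['SALE_DATE'].min())
assert train['SALE_DATE'].max() < test['SALE_DATE'].min()
###Output
_____no_output_____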
###Markdown
- [ ] Do one-hot encoding of categorical features.
###Code
# One-hot encoding on categorical features with low cardinality
target = 'SALE_PRICE'
high_cardinality = ['ADDRESS', 'LAND_SQUARE_FEET', 'SALE_DATE', 'EASE-MENT']
features = train.columns.drop([target] + high_cardinality)
# set train and test variables
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
# Checking X_train, X_test shape before encoding
X_train.shape, X_test.shape
import category_encoders as ce
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train = encoder.fit_transform(X_train)
X_test = encoder.transform(X_test)
# X_test head and shape after encoding
print(X_train.shape)
X_train.head()
###Output
(2507, 49)
###Markdown
- [ ] Do feature selection with `SelectKBest`.
###Code
# Feature selection with SelectKBest, k=15
from sklearn.feature_selection import f_regression, SelectKBest
# defining selector
selector = SelectKBest(score_func=f_regression, k=15)
# fit the selector on the train set, then transform both train and test
X_train_selected = selector.fit_transform(X_train, y_train)
X_test_selected = selector.transform(X_test)
# checking shape
X_train_selected.shape, X_test_selected.shape
# Which features were selected?
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print('\n')
print('Features not selected:')
for name in unselected_names:
print(name)
!pip install ipython
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
for k in range(1, len(X_train.columns)+1):
print(f'{k} features')
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train, y_train)
X_test_selected = selector.transform(X_test)
model = LinearRegression()
model.fit(X_train_selected, y_train)
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test MAE: ${mae:,.0f} \n')
###Output
Requirement already satisfied: ipython in /usr/local/lib/python3.6/dist-packages (5.5.0)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.6/dist-packages (from ipython) (4.8.0)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython) (0.8.1)
Requirement already satisfied: setuptools>=18.5 in /usr/local/lib/python3.6/dist-packages (from ipython) (47.1.1)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.6/dist-packages (from ipython) (4.3.3)
Requirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from ipython) (2.1.3)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.6/dist-packages (from ipython) (1.0.18)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython) (0.7.5)
Requirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from ipython) (4.4.2)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.6/dist-packages (from pexpect; sys_platform != "win32"->ipython) (0.6.0)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->ipython) (0.2.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->ipython) (1.12.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython) (0.1.9)
1 features
Test MAE: $183,641
2 features
Test MAE: $182,569
3 features
Test MAE: $182,569
4 features
Test MAE: $173,706
5 features
Test MAE: $174,556
6 features
Test MAE: $172,843
7 features
Test MAE: $173,412
8 features
Test MAE: $173,241
9 features
Test MAE: $168,668
10 features
Test MAE: $169,452
11 features
Test MAE: $169,006
12 features
Test MAE: $161,221
13 features
Test MAE: $162,578
14 features
Test MAE: $161,733
15 features
Test MAE: $161,735
16 features
Test MAE: $161,548
17 features
Test MAE: $161,548
18 features
Test MAE: $161,308
19 features
Test MAE: $161,308
20 features
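###Markdown
A small follow-up sketch (reusing the objects defined above) that records which `k` gives the lowest test MAE instead of only printing every value.
###Code
# Same loop as above, but keep track of the best k.
best_k, best_mae = None, float('inf')
for k in range(1, len(X_train.columns) + 1):
    selector = SelectKBest(score_func=f_regression, k=k)
    X_train_selected = selector.fit_transform(X_train, y_train)
    X_test_selected = selector.transform(X_test)
    model = LinearRegression()
    model.fit(X_train_selected, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test_selected))
    if mae < best_mae:
        best_k, best_mae = k, mae
print(f'Best k={best_k} with test MAE ${best_mae:,.0f}')
###Output
_____no_output_____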
###Markdown
- [ ] Fit a ridge regression model with multiple features. Use the `normalize=True` parameter (or do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html) beforehand — use the scaler's `fit_transform` method with the train set, and the scaler's `transform` method with the test set)- [ ] Get mean absolute error for the test set.
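###Markdown
Before using `normalize=True` in the next cell, here is a minimal sketch of the explicit feature-scaling alternative mentioned above: fit the scaler on the train set only, then apply the same transform to the test set (`alpha=1.0` is just an example value).
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)   # fit on train only
X_test_scaled = scaler.transform(X_test)         # reuse the train statistics

ridge = Ridge(alpha=1.0)
ridge.fit(X_train_scaled, y_train)
print(f'Test MAE: ${mean_absolute_error(y_test, ridge.predict(X_test_scaled)):,.0f}')
###Output
_____no_output_____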
###Code
from sklearn.linear_model import RidgeCV
from IPython.display import display, HTML
from sklearn.linear_model import Ridge
import matplotlib.pyplot as plt
# Range of alpha parameters for Ridge Regression.
for alpha in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]:
# Fit Ridge Regression model
display(HTML(f'Ridge Regression, with alpha={alpha}'))
model = Ridge(alpha=alpha, normalize=True)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# Get Test MAE
mae = mean_absolute_error(y_test, y_pred)
display(HTML(f'Test Mean Absolute Error: ${mae:,.0f}'))
# Plot coefficients
coefficients = pd.Series(model.coef_, X_train.columns)
plt.figure(figsize=(16,8))
coefficients.sort_values().plot.barh(color='grey')
plt.xlim(-400,700)
plt.show()
###Output
_____no_output_____ |
notebooks/train_local.ipynb | ###Markdown
Local Development Arguments
###Code
# Generator hyperparameters.
generator_dict = dict()
# Which paper to use for generator architecture: "berg", "GANomaly".
generator_dict["architecture"] = "GANomaly"
# Whether generator will be trained or not.
generator_dict["train"] = True
# Number of steps to train generator for per cycle.
generator_dict["train_steps"] = 1
# The latent size of the berg input noise vector or the GANomaly generator's
# encoder logits vector.
generator_dict["latent_size"] = 512
# Whether to normalize latent vector before projection.
generator_dict["normalize_latents"] = True
# Whether to use pixel norm op after each convolution.
generator_dict["use_pixel_norm"] = True
# Small value to add to denominator for numerical stability.
generator_dict["pixel_norm_epsilon"] = 1e-8
# The 3D dimensions to project latent noise vector into.
generator_dict["projection_dims"] = [4, 4, 512]
# The amount of leakyness of generator's leaky relus.
generator_dict["leaky_relu_alpha"] = 0.2
# The final activation function of generator: None, sigmoid, tanh, relu.
generator_dict["final_activation"] = "None"
# Whether to add uniform noise to fake images.
generator_dict["add_uniform_noise_to_fake_images"] = True
# Scale factor for L1 regularization for generator.
generator_dict["l1_regularization_scale"] = 0.
# Scale factor for L2 regularization for generator.
generator_dict["l2_regularization_scale"] = 0.
# Name of optimizer to use for generator.
generator_dict["optimizer"] = "Adam"
# How quickly we train model by scaling the gradient for generator.
generator_dict["learning_rate"] = 0.001
# Adam optimizer's beta1 hyperparameter for first moment.
generator_dict["adam_beta1"] = 0.0
# Adam optimizer's beta2 hyperparameter for second moment.
generator_dict["adam_beta2"] = 0.99
# Adam optimizer's epsilon hyperparameter for numerical stability.
generator_dict["adam_epsilon"] = 1e-8
# Global clipping to prevent gradient norm to exceed this value for generator.
generator_dict["clip_gradients"] = None
generator_berg_dict = dict()
generator_ganomaly_dict = dict()
generator_berg_losses_dict = dict()
generator_ganomaly_losses_dict = dict()
if generator_dict["architecture"] == "berg":
# The latent vector's random normal mean.
generator_berg_dict["latent_mean"] = 0.0
# The latent vector's random normal standard deviation.
generator_berg_dict["latent_stddev"] = 1.0
# These are just example values, yours will vary.
# Weights to multiply loss of D(G(z))
generator_berg_losses_dict["D_of_G_of_z_loss_weight"] = 1.0
# Weights to multiply loss of D(G(E(x)))
generator_berg_losses_dict["D_of_G_of_E_of_x_loss_weight"] = 0.0
# Weights to multiply loss of D(G(E(G(z)))
generator_berg_losses_dict["D_of_G_of_E_of_G_of_z_loss_weight"] = 0.0
# Weights to multiply loss of z - E(G(z))
generator_berg_losses_dict["z_minus_E_of_G_of_z_l1_loss_weight"] = 0.0
generator_berg_losses_dict["z_minus_E_of_G_of_z_l2_loss_weight"] = 0.0
# Weights to multiply loss of G(z) - G(E(G(z))
generator_berg_losses_dict["G_of_z_minus_G_of_E_of_G_of_z_l1_loss_weight"] = 0.0
generator_berg_losses_dict["G_of_z_minus_G_of_E_of_G_of_z_l2_loss_weight"] = 0.0
# Weights to multiply loss of E(x) - E(G(E(x)))
generator_berg_losses_dict["E_of_x_minus_E_of_G_of_E_of_x_l1_loss_weight"] = 1.0
generator_berg_losses_dict["E_of_x_minus_E_of_G_of_E_of_x_l2_loss_weight"] = 0.0
# Weights to multiply loss of x - G(E(x))
generator_berg_losses_dict["x_minus_G_of_E_of_x_l1_loss_weight"] = 0.0
generator_berg_losses_dict["x_minus_G_of_E_of_x_l2_loss_weight"] = 0.0
# GANomaly parameters to zero.
# Weights to multiply loss of D(G(x))
generator_ganomaly_losses_dict["D_of_G_of_x_loss_weight"] = 0.0
# Weights to multiply loss of x - G(x)
generator_ganomaly_losses_dict["x_minus_G_of_x_l1_loss_weight"] = 0.0
generator_ganomaly_losses_dict["x_minus_G_of_x_l2_loss_weight"] = 0.0
# Weights to multiply loss of Ge(x) - E(G(x))
generator_ganomaly_losses_dict["Ge_of_x_minus_E_of_G_of_x_l1_loss_weight"] = 0.0
generator_ganomaly_losses_dict["Ge_of_x_minus_E_of_G_of_x_l2_loss_weight"] = 0.0
else: # GANomaly
# Whether generator GANomaly architecture uses U-net skip connection for each block.
generator_ganomaly_dict["use_unet_skip_connections"] = [True] * 9
# Percent of masking image inputs to generator.
generator_ganomaly_dict["mask_generator_input_images_percent"] = 0.2
# Integer amount to randomly shift image mask block sizes.
generator_ganomaly_dict["image_mask_block_random_shift_amount"] = 0
# Whether to use shuffle or dead image block masking.
generator_ganomaly_dict["use_shuffle_image_masks"] = True
# Whether to add uniform noise to GANomaly Z vector.
generator_ganomaly_dict["add_uniform_noise_to_z"] = True
# These are just example values, yours will vary.
# Weights to multiply loss of D(G(x))
generator_ganomaly_losses_dict["D_of_G_of_x_loss_weight"] = 1.0
# Weights to multiply loss of x - G(x)
generator_ganomaly_losses_dict["x_minus_G_of_x_l1_loss_weight"] = 0.0
generator_ganomaly_losses_dict["x_minus_G_of_x_l2_loss_weight"] = 1.0
# Weights to multiply loss of Ge(x) - E(G(x))
generator_ganomaly_losses_dict["Ge_of_x_minus_E_of_G_of_x_l1_loss_weight"] = 0.0
generator_ganomaly_losses_dict["Ge_of_x_minus_E_of_G_of_x_l2_loss_weight"] = 0.0
# Berg parameters to zero.
# Weights to multiply loss of D(G(z))
generator_berg_losses_dict["D_of_G_of_z_loss_weight"] = 0.0
# Weights to multiply loss of D(G(E(x)))
generator_berg_losses_dict["D_of_G_of_E_of_x_loss_weight"] = 0.0
# Weights to multiply loss of D(G(E(G(z)))
generator_berg_losses_dict["D_of_G_of_E_of_G_of_z_loss_weight"] = 0.0
# Weights to multiply loss of z - E(G(z))
generator_berg_losses_dict["z_minus_E_of_G_of_z_l1_loss_weight"] = 0.0
generator_berg_losses_dict["z_minus_E_of_G_of_z_l2_loss_weight"] = 0.0
# Weights to multiply loss of G(z) - G(E(G(z))
generator_berg_losses_dict["G_of_z_minus_G_of_E_of_G_of_z_l1_loss_weight"] = 0.0
generator_berg_losses_dict["G_of_z_minus_G_of_E_of_G_of_z_l2_loss_weight"] = 0.0
# Weights to multiply loss of E(x) - E(G(E(x)))
generator_berg_losses_dict["E_of_x_minus_E_of_G_of_E_of_x_l1_loss_weight"] = 0.0
generator_berg_losses_dict["E_of_x_minus_E_of_G_of_E_of_x_l2_loss_weight"] = 0.0
# Weights to multiply loss of x - G(E(x))
generator_berg_losses_dict["x_minus_G_of_E_of_x_l1_loss_weight"] = 0.0
generator_berg_losses_dict["x_minus_G_of_E_of_x_l2_loss_weight"] = 0.0
generator_dict["berg"] = generator_berg_dict
generator_dict["GANomaly"] = generator_ganomaly_dict
generator_dict["losses"] = {}
generator_dict["losses"]["berg"] = generator_berg_losses_dict
generator_dict["losses"]["GANomaly"] = generator_ganomaly_losses_dict
generator_dict
# Encoder hyperparameters.
encoder_dict = dict()
# These are optional if using GANomaly architecture, required for berg.
# Whether encoder will be created or not.
encoder_dict["create"] = False
# Whether encoder will be trained or not.
encoder_dict["train"] = False
# Whether to use minibatch stddev op before first base conv layer.
encoder_dict["use_minibatch_stddev"] = True
# The size of groups to split minibatch examples into.
encoder_dict["minibatch_stddev_group_size"] = 4
# Whether to average across feature maps and pixels for minibatch stddev.
encoder_dict["minibatch_stddev_use_averaging"] = True
# The amount of leakyness of encoder's leaky relus.
encoder_dict["leaky_relu_alpha"] = 0.2
# Scale factor for L1 regularization for encoder.
encoder_dict["l1_regularization_scale"] = 0.
# Scale factor for L2 regularization for encoder.
encoder_dict["l2_regularization_scale"] = 0.
# Name of optimizer to use for encoder.
encoder_dict["optimizer"] = "Adam"
# How quickly we train model by scaling the gradient for encoder.
encoder_dict["learning_rate"] = 0.001
# Adam optimizer's beta1 hyperparameter for first moment.
encoder_dict["adam_beta1"] = 0.0
# Adam optimizer's beta2 hyperparameter for second moment.
encoder_dict["adam_beta2"] = 0.99
# Adam optimizer's epsilon hyperparameter for numerical stability.
encoder_dict["adam_epsilon"] = 1e-8
# Global clipping to prevent gradient norm to exceed this value for encoder.
encoder_dict["clip_gradients"] = None
encoder_losses_dict = dict()
# Berg Losses
encoder_losses_berg_dict = dict()
# Weights to multiply loss of D(G(E(x)))
encoder_losses_berg_dict["D_of_G_of_E_of_x_loss_weight"] = 0.0
# Weights to multiply loss of D(G(E(G(z)))
encoder_losses_berg_dict["D_of_G_of_E_of_G_of_z_loss_weight"] = 0.0
# Weights to multiply loss of z - E(G(z))
encoder_losses_berg_dict["z_minus_E_of_G_of_z_l1_loss_weight"] = 0.0
encoder_losses_berg_dict["z_minus_E_of_G_of_z_l2_loss_weight"] = 0.0
# Weights to multiply loss of G(z) - G(E(G(z))
encoder_losses_berg_dict["G_of_z_minus_G_of_E_of_G_of_z_l1_loss_weight"] = 0.0
encoder_losses_berg_dict["G_of_z_minus_G_of_E_of_G_of_z_l2_loss_weight"] = 0.0
# Weights to multiply loss of E(x) - E(G(E(x)))
encoder_losses_berg_dict["E_of_x_minus_E_of_G_of_E_of_x_l1_loss_weight"] = 0.0
encoder_losses_berg_dict["E_of_x_minus_E_of_G_of_E_of_x_l2_loss_weight"] = 0.0
# Weights to multiply loss of x - G(E(x))
encoder_losses_berg_dict["x_minus_G_of_E_of_x_l1_loss_weight"] = 0.0
encoder_losses_berg_dict["x_minus_G_of_E_of_x_l2_loss_weight"] = 0.0
# GANomaly Losses
encoder_losses_ganomaly_dict = dict()
# Weights to multiply loss of Ge(x) - E(G(x))
encoder_losses_ganomaly_dict["Ge_of_x_minus_E_of_G_of_x_l1_loss_weight"] = 0.0
encoder_losses_ganomaly_dict["Ge_of_x_minus_E_of_G_of_x_l2_loss_weight"] = 1.0
encoder_losses_dict["berg"] = encoder_losses_berg_dict
encoder_losses_dict["GANomaly"] = encoder_losses_ganomaly_dict
encoder_dict["losses"] = encoder_losses_dict
encoder_dict
# Discriminator hyperparameters.
discriminator_dict = dict()
# Whether discriminator will be created or not.
discriminator_dict["create"] = True
# Whether discriminator will be trained or not.
discriminator_dict["train"] = True
# Number of steps to train discriminator for per cycle.
discriminator_dict["train_steps"] = 1
# Whether to use minibatch stddev op before first base conv layer.
discriminator_dict["use_minibatch_stddev"] = True
# The size of groups to split minibatch examples into.
discriminator_dict["minibatch_stddev_group_size"] = 4
# Whether to average across feature maps and pixels for minibatch stddev.
discriminator_dict["minibatch_stddev_use_averaging"] = True
# The amount of leakyness of discriminator's leaky relus.
discriminator_dict["leaky_relu_alpha"] = 0.2
# Scale factor for L1 regularization for discriminator.
discriminator_dict["l1_regularization_scale"] = 0.
# Scale factor for L2 regularization for discriminator.
discriminator_dict["l2_regularization_scale"] = 0.
# Name of optimizer to use for discriminator.
discriminator_dict["optimizer"] = "Adam"
# How quickly we train model by scaling the gradient for discriminator.
discriminator_dict["learning_rate"] = 0.001
# Adam optimizer's beta1 hyperparameter for first moment.
discriminator_dict["adam_beta1"] = 0.0
# Adam optimizer's beta2 hyperparameter for second moment.
discriminator_dict["adam_beta2"] = 0.99
# Adam optimizer's epsilon hyperparameter for numerical stability.
discriminator_dict["adam_epsilon"] = 1e-8
# Global clipping to prevent gradient norm to exceed this value for discriminator.
discriminator_dict["clip_gradients"] = None
# Coefficient of gradient penalty for discriminator.
discriminator_dict["gradient_penalty_coefficient"] = 10.0
# Target value of gradient magnitudes for gradient penalty for discriminator.
discriminator_dict["gradient_penalty_target"] = 1.0
# Coefficient of epsilon drift penalty for discriminator.
discriminator_dict["epsilon_drift"] = 0.001
# Losses
discriminator_losses_dict = dict()
# Weight to multiply loss of D(x)
discriminator_losses_dict["D_of_x_loss_weight"] = 1.0
# Berg Losses
discriminator_losses_berg_dict = dict()
# Weight to multiply loss of D(G(z))
discriminator_losses_berg_dict["D_of_G_of_z_loss_weight"] = 0.0
# Weight to multiply loss of D(G(E(x)))
discriminator_losses_berg_dict["D_of_G_of_E_of_x_loss_weight"] = 0.0
# Weight to multiply loss of D(G(E(G(z)))
discriminator_losses_berg_dict["D_of_G_of_E_of_G_of_z_loss_weight"] = 0.0
# GANomaly Losses
discriminator_losses_ganomaly_dict = dict()
# Weight to multiply loss of D(G(x))
discriminator_losses_ganomaly_dict["D_of_G_of_x_loss_weight"] = 1.0
discriminator_losses_dict["berg"] = discriminator_losses_berg_dict
discriminator_losses_dict["GANomaly"] = discriminator_losses_ganomaly_dict
discriminator_dict["losses"] = discriminator_losses_dict
discriminator_dict
# Reconstruction training parameters.
reconstruction_dict = dict()
# Whether using multiple resolutions across a list of TF Records.
reconstruction_dict["use_multiple_resolution_records"] = True
# GCS locations to read reconstruction training data.
reconstruction_dict["train_file_patterns"] = [
tf.io.gfile.glob(
pattern="gs://.../data/*/*.svs.{}.*.tfrecords".format(i)
)[0:150]
for i in range(9 - 1, -1, -1)
]
# GCS locations to read reconstruction evaluation data.
reconstruction_dict["eval_file_patterns"] = [
tf.io.gfile.glob(
pattern="gs://.../data/*/*.svs.{}.*.tfrecords".format(i)
)[0:150]
for i in range(9 - 1, -1, -1)
]
# Which dataset to use for reconstruction training:
# "mnist", "cifar10", "cifar10_car", "tf_record"
reconstruction_dict["dataset"] = "tf_record"
# TF Record Example feature schema for reconstruction.
reconstruction_dict["tf_record_example_schema"] = [
{
"name": "image/encoded",
"type": "FixedLen",
"shape": [],
"dtype": "str"
},
{
"name": "image/name",
"type": "FixedLen",
"shape": [],
"dtype": "str"
}
]
# Name of image feature within schema dictionary.
reconstruction_dict["image_feature_name"] = "image/encoded"
# Encoding of image: raw, png, or jpeg.
reconstruction_dict["image_encoding"] = "png"
# Height of predownscaled image if NOT using multiple resolution records.
reconstruction_dict["image_predownscaled_height"] = 1024
# Width of predownscaled image if NOT using multiple resolution records.
reconstruction_dict["image_predownscaled_width"] = 1024
# Depth of image, number of channels.
reconstruction_dict["image_depth"] = 3
# Name of label feature within schema dictionary.
reconstruction_dict["label_feature_name"] = ""
# Schedule list of number of epochs to train for reconstruction.
reconstruction_dict["num_epochs_schedule"] = [1] * 9
# Number of examples in one epoch of reconstruction training set.
reconstruction_dict["train_dataset_length"] = 330415
# Schedule list of number of examples in reconstruction training batch for each resolution block.
reconstruction_dict["train_batch_size_schedule"] = [32] + [16] * 4 + [4] + [2] * 2 + [1]
# Schedule list of number of examples in reconstruction evaluation batch for each resolution block.
reconstruction_dict["eval_batch_size_schedule"] = [32] + [16] * 4 + [4] + [2] * 2 + [1]
# Number of steps/batches to evaluate for reconstruction.
reconstruction_dict["eval_steps"] = 1
# List of number of examples until block added to networks.
reconstruction_dict["num_examples_until_growth_schedule"] = [
epochs * reconstruction_dict["train_dataset_length"]
for epochs in reconstruction_dict["num_epochs_schedule"]
]
# List of number of steps/batches until block added to networks.
reconstruction_dict["num_steps_until_growth_schedule"] = [
ex // bs
for ex, bs in zip(
reconstruction_dict["num_examples_until_growth_schedule"],
reconstruction_dict["train_batch_size_schedule"]
)
]
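# For example, the first phase trains on 330,415 examples for 1 epoch at batch
# size 32, so 330415 // 32 = 10,325 steps pass before the next resolution block
# is added.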
# Whether to autotune input function performance for reconstruction datasets.
reconstruction_dict["input_fn_autotune"] = True
# How many steps to train before writing steps and loss to log.
reconstruction_dict["log_step_count_steps"] = 100
# How many steps to train before saving a summary.
reconstruction_dict["save_summary_steps"] = 100
# Whether to write loss summaries for TensorBoard.
reconstruction_dict["write_loss_summaries"] = False
# Whether to write generator image summaries for TensorBoard.
reconstruction_dict["write_generator_image_summaries"] = False
# Whether to write encoder image summaries for TensorBoard.
reconstruction_dict["write_encoder_image_summaries"] = False
# Whether to write variable histogram summaries for TensorBoard.
reconstruction_dict["write_variable_histogram_summaries"] = False
# Whether to write gradient histogram summaries for TensorBoard.
reconstruction_dict["write_gradient_histogram_summaries"] = False
# How many steps to train reconstruction before saving a checkpoint.
reconstruction_dict["save_checkpoints_steps"] = 10000
# Max number of reconstruction checkpoints to keep.
reconstruction_dict["keep_checkpoint_max"] = 100
# Whether to save checkpoint every growth phase.
reconstruction_dict["checkpoint_every_growth_phase"] = True
# Whether to save checkpoint every epoch.
reconstruction_dict["checkpoint_every_epoch"] = True
# Checkpoint growth index to restore checkpoint.
reconstruction_dict["checkpoint_growth_idx"] = 0
# Checkpoint epoch index to restore checkpoint.
reconstruction_dict["checkpoint_epoch_idx"] = 0
# The checkpoint save path for saving and restoring.
reconstruction_dict["checkpoint_save_path"] = ""
# Whether to store loss logs.
reconstruction_dict["store_loss_logs"] = True
# Whether to normalize loss logs.
reconstruction_dict["normalized_loss_logs"] = True
# Whether to print model summaries.
reconstruction_dict["print_training_model_summaries"] = False
# Initial growth index to resume training midway.
reconstruction_dict["initial_growth_idx"] = 0
# Initial epoch index to resume training midway.
reconstruction_dict["initial_epoch_idx"] = 0
# Max number of times training loop can be restarted such as for NaN losses.
reconstruction_dict["max_training_loop_restarts"] = 20
# Whether to scale layer weights to equalize learning rate each forward pass.
reconstruction_dict["use_equalized_learning_rate"] = True
# Whether to normalize reconstruction losses by number of pixels.
reconstruction_dict["normalize_reconstruction_losses"] = True
reconstruction_dict
# Error distribution training parameters.
error_distribution_dict = dict()
# Whether using multiple resolutions across a list of TF Records.
error_distribution_dict["use_multiple_resolution_records"] = False
# GCS locations to read error distribution training data.
error_distribution_dict["train_file_pattern"] = tf.io.gfile.glob(
pattern="gs://.../data/*/*.svs.{}.*.tfrecords".format(0)
)[150:175]
# GCS locations to read error distribution training data.
error_distribution_dict["eval_file_pattern"] = tf.io.gfile.glob(
pattern="gs://.../data/*/*.svs.{}.*.tfrecords".format(0)
)[150:175]
# Which dataset to use for error distribution training:
# "mnist", "cifar10", "cifar10_car", "tf_record"
error_distribution_dict["dataset"] = "tf_record"
# TF Record Example feature schema for error distribution.
error_distribution_dict["tf_record_example_schema"] = [
{
"name": "image/encoded",
"type": "FixedLen",
"shape": [],
"dtype": "str"
},
{
"name": "image/name",
"type": "FixedLen",
"shape": [],
"dtype": "str"
}
]
# Name of image feature within schema dictionary.
error_distribution_dict["image_feature_name"] = "image/encoded"
# Encoding of image: raw, png, or jpeg.
error_distribution_dict["image_encoding"] = "png"
# Height of predownscaled image if NOT using multiple resolution records.
error_distribution_dict["image_predownscaled_height"] = 1024
# Width of predownscaled image if NOT using multiple resolution records.
error_distribution_dict["image_predownscaled_width"] = 1024
# Depth of image, number of channels.
error_distribution_dict["image_depth"] = 3
# Name of label feature within schema dictionary.
error_distribution_dict["label_feature_name"] = ""
# Number of examples in one epoch of error distribution training set.
error_distribution_dict["train_dataset_length"] = 44693
# Number of examples in error distribution training batch.
error_distribution_dict["train_batch_size"] = 16
# Number of steps/batches to evaluate for error distribution.
error_distribution_dict["eval_steps"] = 1
# Whether to autotune input function performance for error distribution datasets.
error_distribution_dict["input_fn_autotune"] = True
# How many steps to train error distribution before saving a checkpoint.
error_distribution_dict["save_checkpoints_steps"] = 10000
# Max number of error distribution checkpoints to keep.
error_distribution_dict["keep_checkpoint_max"] = 100
# The checkpoint save path for saving and restoring.
error_distribution_dict["checkpoint_save_path"] = ""
# Max number of times training loop can be restarted.
error_distribution_dict["max_training_loop_restarts"] = 20
# Whether using sample or population covariance for error distribution.
error_distribution_dict["use_sample_covariance"] = True
error_distribution_dict
# Dynamic threshold training parameters.
dynamic_threshold_dict = dict()
# Whether using multiple resolutions across a list of TF Records.
dynamic_threshold_dict["use_multiple_resolution_records"] = False
# GCS locations to read dynamic threshold training data.
dynamic_threshold_dict["train_file_pattern"] = tf.io.gfile.glob(
pattern="gs://.../data/*/*.svs.{}.*.tfrecords".format(0)
)[175:200]
# GCS locations to read dynamic threshold evaluation data.
dynamic_threshold_dict["eval_file_pattern"] = tf.io.gfile.glob(
pattern="gs://.../data/*/*.svs.{}.*.tfrecords".format(0)
)[175:200]
# Which dataset to use for dynamic threshold training:
# "mnist", "cifar10", "cifar10_car", "tf_record"
dynamic_threshold_dict["dataset"] = "tf_record"
# TF Record Example feature schema for dynamic threshold.
dynamic_threshold_dict["tf_record_example_schema"] = [
{
"name": "image/encoded",
"type": "FixedLen",
"shape": [],
"dtype": "str"
},
{
"name": "image/name",
"type": "FixedLen",
"shape": [],
"dtype": "str"
}
]
# Name of image feature within schema dictionary.
dynamic_threshold_dict["image_feature_name"] = "image/encoded"
# Encoding of image: raw, png, or jpeg.
dynamic_threshold_dict["image_encoding"] = "png"
# Height of predownscaled image if NOT using multiple resolution records.
dynamic_threshold_dict["image_predownscaled_height"] = 1024
# Width of predownscaled image if NOT using multiple resolution records.
dynamic_threshold_dict["image_predownscaled_width"] = 1024
# Depth of image, number of channels.
dynamic_threshold_dict["image_depth"] = 3
# Name of label feature within schema dictionary.
dynamic_threshold_dict["label_feature_name"] = ""
# Number of examples in one epoch of dynamic threshold training set.
dynamic_threshold_dict["train_dataset_length"] = 52517
# Number of examples in dynamic threshold training batch.
dynamic_threshold_dict["train_batch_size"] = 16
# Number of steps/batches to evaluate for dynamic threshold.
dynamic_threshold_dict["eval_steps"] = 1
# Whether to autotune input function performance for dynamic threshold datasets.
dynamic_threshold_dict["input_fn_autotune"] = True
# How many steps to train dynamic threshold before saving a checkpoint.
dynamic_threshold_dict["save_checkpoints_steps"] = 10000
# Max number of dynamic threshold checkpoints to keep.
dynamic_threshold_dict["keep_checkpoint_max"] = 100
# The checkpoint save path for saving and restoring.
dynamic_threshold_dict["checkpoint_save_path"] = ""
# Max number of times training loop can be restarted.
dynamic_threshold_dict["max_training_loop_restarts"] = 20
# Whether using supervised dynamic thresholding or unsupervised.
dynamic_threshold_dict["use_supervised"] = False
supervised_dict = dict()
# Beta value for supervised F-beta score.
supervised_dict["f_score_beta"] = 0.05
unsupervised_dict = dict()
# Whether using sample or population covariance for dynamic threshold.
unsupervised_dict["use_sample_covariance"] = True
# Max standard deviations of Mahalanobis distance to flag as outlier.
unsupervised_dict["max_mahalanobis_stddevs"] = 3.0
dynamic_threshold_dict["supervised"] = supervised_dict
dynamic_threshold_dict["unsupervised"] = unsupervised_dict
dynamic_threshold_dict
# Training parameters.
training_dict = dict()
# GCS location to write checkpoints, loss logs, and export models.
training_dict["output_dir"] = "gs://my-bucket/trained_models/experiment"
# Version of TensorFlow.
training_dict["tf_version"] = 2.3
# Whether to use graph mode or not (eager).
training_dict["use_graph_mode"] = True
# Which distribution strategy to use, if any.
training_dict["distribution_strategy"] = "Mirrored"
# Whether we subclass models or use Functional API.
training_dict["subclass_models"] = True
# Whether performing training phase 1 or not.
training_dict["train_reconstruction"] = True
# Whether performing training phase 2 or not.
training_dict["train_error_distribution"] = True
# Whether performing training phase 3 or not.
training_dict["train_dynamic_threshold"] = True
training_dict["reconstruction"] = reconstruction_dict
training_dict["error_distribution"] = error_distribution_dict
training_dict["dynamic_threshold"] = dynamic_threshold_dict
training_dict
# Export parameters.
export_dict = dict()
# Most recent export's growth index so that there are no repeat exports.
export_dict["most_recent_export_growth_idx"] = -1
# Most recent export's epoch index so that there are no repeat exports.
export_dict["most_recent_export_epoch_idx"] = -1
# Whether to export SavedModel every growth phase.
export_dict["export_every_growth_phase"] = True
# Whether to export SavedModel every epoch.
export_dict["export_every_epoch"] = True
# Whether to export all growth phases or just current.
export_dict["export_all_growth_phases"] = True
# Using a random noise vector Z with shape (batch_size, generator_latent_size) for berg.
# Whether to export Z.
export_dict["export_Z"] = True
# Whether to export generated images, G(z).
export_dict["export_generated_images"] = True
# Whether to export encoded generated logits, E(G(z)).
export_dict["export_encoded_generated_logits"] = True
# Whether to export encoded generated images, G(E(G(z))).
export_dict["export_encoded_generated_images"] = True
# Whether to export Z generated images, Gd(z).
export_dict["export_Z_generated_images"] = True
# Using a query image with shape (batch_size, height, width, depth)
# Whether to export query images.
export_dict["export_query_images"] = True
# Berg encoded exports.
# Whether to export encoded query logits, E(x).
export_dict["export_query_encoded_logits"] = True
# Whether to export encoded query images, G(E(x)).
export_dict["export_query_encoded_images"] = True
# GANomaly encoded exports.
# Whether to export generator encoded query logits, Ge(x).
export_dict["export_query_gen_encoded_logits"] = True
# Whether to export generator encoded query images, G(x) = Gd(Ge(x)).
export_dict["export_query_gen_encoded_images"] = True
# Whether to export encoder encoded query logits, E(G(x)).
export_dict["export_query_enc_encoded_logits"] = True
# Whether to export encoder encoded query images, Gd(E(G(x))).
export_dict["export_query_enc_encoded_images"] = True
# Anomaly exports.
# Whether to export query anomaly images using sigmoid scaling.
export_dict["export_query_anomaly_images_sigmoid"] = True
# Whether to export query anomaly images using linear scaling.
export_dict["export_query_anomaly_images_linear"] = True
# Whether to export query Mahalanobis distances.
export_dict["export_query_mahalanobis_distances"] = True
# Whether to export query Mahalanobis distance images using sigmoid scaling.
export_dict["export_query_mahalanobis_distance_images_sigmoid"] = True
# Whether to export query Mahalanobis distance images using linear scaling.
export_dict["export_query_mahalanobis_distance_images_linear"] = True
# Whether to export query pixel anomaly flag binary images.
export_dict["export_query_pixel_anomaly_flag_images"] = True
# Whether to export query pixel anomaly flag binary images.
export_dict["export_query_pixel_anomaly_flag_counts"] = True
# Whether to export query pixel anomaly flag binary images.
export_dict["export_query_pixel_anomaly_flag_percentages"] = True
# Whether to export query anomaly scores, only for Berg.
export_dict["export_query_anomaly_scores"] = False
# Whether to export query anomaly flags, only for Berg.
export_dict["export_query_anomaly_flags"] = False
# Anomaly parameters.
# The threshold value at which above flags scores images as anomalous.
export_dict["anomaly_threshold"] = 5.0
# The anomaly convex combination factor for weighting the two anomaly losses.
export_dict["anom_convex_combo_factor"] = 0.05
# Whether to print model summaries.
export_dict["print_serving_model_summaries"] = False
export_dict
# Full parameters.
arguments = dict()
arguments["generator"] = generator_dict
arguments["encoder"] = encoder_dict
arguments["discriminator"] = discriminator_dict
arguments["training"] = training_dict
arguments["export"] = export_dict
# Full lists for full 1024x1024 network growth.
full_conv_num_filters = [[512, 512], [512, 512], [512, 512], [512, 512], [256, 256], [128, 128], [64, 64], [32, 32], [16, 16]]
full_conv_kernel_sizes = [[4, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]]
full_conv_strides = [[1, 1], [1, 1], [1, 1], [1, 1], [1, 1], [1, 1], [1, 1], [1, 1], [1, 1]]
# Set final image size as a power of 2, starting at 4.
image_size = 1024
num_conv_blocks = max(
min(int(math.log(image_size, 2) - 1), len(full_conv_num_filters)), 1
)
arguments["conv_num_filters"] = full_conv_num_filters[0:num_conv_blocks]
arguments["conv_kernel_sizes"] = full_conv_kernel_sizes[0:num_conv_blocks]
arguments["conv_strides"] = full_conv_strides[0:num_conv_blocks]
# Get conv layer properties for generator and discriminator.
(generator,
discriminator) = (
gan_layer_architecture_shapes.calc_generator_discriminator_conv_layer_properties(
arguments["conv_num_filters"],
arguments["conv_kernel_sizes"],
arguments["conv_strides"],
arguments["training"]["reconstruction"]["image_depth"]
)
)
# Split up generator properties into separate lists.
(generator_base_conv_blocks,
generator_growth_conv_blocks,
generator_to_rgb_layers) = (
gan_layer_architecture_shapes.split_up_generator_conv_layer_properties(
generator,
arguments["conv_num_filters"],
arguments["conv_strides"],
arguments["training"]["reconstruction"]["image_depth"]
)
)
# Generator list of list of lists of base conv block layer shapes.
arguments["generator"]["base_conv_blocks"] = generator_base_conv_blocks
# Generator list of list of lists of growth conv block layer shapes.
arguments["generator"]["growth_conv_blocks"] = generator_growth_conv_blocks
# Generator list of list of lists of to_RGB layer shapes.
arguments["generator"]["to_rgb_layers"] = generator_to_rgb_layers
# Split up discriminator properties into separate lists.
(discriminator_from_rgb_layers,
discriminator_base_conv_blocks,
discriminator_growth_conv_blocks) = (
gan_layer_architecture_shapes.split_up_discriminator_conv_layer_properties(
discriminator,
arguments["conv_num_filters"],
arguments["conv_strides"],
arguments["training"]["reconstruction"]["image_depth"]
)
)
# Discriminator list of list of lists of from_RGB layer shapes.
arguments["discriminator"]["from_rgb_layers"] = discriminator_from_rgb_layers
# Discriminator list of list of lists of base conv block layer shapes.
arguments["discriminator"]["base_conv_blocks"] = (
discriminator_base_conv_blocks
)
# Discriminator list of list of lists of growth conv block layer shapes.
arguments["discriminator"]["growth_conv_blocks"] = (
discriminator_growth_conv_blocks
)
if (arguments["generator"]["architecture"] == "GANomaly" and
arguments["generator"]["GANomaly"]["mask_generator_input_images_percent"] > 0.):
# Image mask block pixel sizes list of lists.
arguments["generator"]["GANomaly"]["image_mask_block_sizes"] = (
image_masks.calculate_image_mask_block_sizes_per_resolution(
num_resolutions=num_conv_blocks,
min_height=arguments["generator"]["projection_dims"][0],
min_width=arguments["generator"]["projection_dims"][1],
pixel_mask_percent=(
arguments["generator"]["GANomaly"][
"mask_generator_input_images_percent"]
)
)
)
arguments
###Output
_____no_output_____
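###Markdown
As a quick sanity check of the growth schedule above, the block-count formula can be evaluated for a few target resolutions (a small added sketch; it assumes `math` and `full_conv_num_filters` are available, as in the cell above).
###Code
# A 4x4 start uses 1 block and each doubling of resolution adds one more,
# capped by the length of the full filter list (9 blocks for 1024x1024).
for size in [4, 8, 32, 256, 1024]:
    blocks = max(min(int(math.log(size, 2) - 1), len(full_conv_num_filters)), 1)
    print(size, "->", blocks, "conv blocks")
###Output
_____no_output_____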
###Markdown
Run model
###Code
os.environ["OUTPUT_DIR"] = arguments["training"]["output_dir"]
%%bash
echo ${OUTPUT_DIR}
# %%bash
# gsutil -m rm -rf ${OUTPUT_DIR}
train_and_evaluate_model = model.TrainAndEvaluateModel(params=arguments)
train_and_evaluate_model.train_and_evaluate()
###Output
_____no_output_____ |
Sentiment Models/NTUSD.ipynb | ###Markdown
Process
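###Markdown
The scoring below is a plain lexicon lookup: every token that appears in the NTUSD-Fin word or emoji lexicon adds its market-sentiment weight to the text's score. A self-contained toy illustration of the idea (the tokens and weights here are invented for the example, not actual NTUSD-Fin values):
###Code
# Toy lexicon with made-up weights, just to show the mechanics.
toy_lexicon = {"bullish": 0.8, "bearish": -0.7, "moon": 0.5}

def toy_score(text):
    return sum(toy_lexicon.get(token, 0.0) for token in text.lower().split())

print(toy_score("btc looks bullish to the moon"))  # 0.8 + 0.5 = 1.3
###Output
_____no_output_____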
###Code
ntword=pd.read_json('Datasets/NTUSD_Fin_word_v1.0.json')
ntemoji=pd.read_json('Datasets/NTUSD_Fin_emoji_v1.0.json')
nthastag=pd.read_json('Datasets/NTUSD_Fin_hashtag_v1.0.json')
word2sent=ntword.set_index('token').market_sentiment.to_dict()
emoji2sent=ntemoji.set_index('token').market_sentiment.to_dict()
def get_ntusd_score(text):
score=0
for token in text.split():
if token in word2sent:
score=score+word2sent[token]
if token in emoji2sent:
score=score+emoji2sent[token]
return score
coms['sent_ntusd']=coms.body_clean.progress_apply(get_ntusd_score)
subs['sent_ntusd']=subs.clean_text.progress_apply(get_ntusd_score)
coms1=coms[coms.submission_id.isin(subs.id.unique().tolist())]
subid_to_sent_com_ntusd=coms1.groupby('submission_id').sent_ntusd.sum().to_dict()
subs['sent_ntusd_coms']=subs.id.map(subid_to_sent_com_ntusd)
subs=subs.rename(columns={'kpi1':'awards_value'})
#subs.to_pickle('Datasets/submissions.pickle')
nmean=pd.read_pickle('temp_ntsud_mean.pickle')
subs['sent_ntusd_coms_wavg']=subs.id.map(nmean.to_dict())
subs[['author', 'num_comments', 'score', 'title', 'selftext', 'award_name',
'award_description', 'award_count', 'award_coin_price',
'award_coin_reward', 'subreddit', 'subreddit_subscribers', 'id',
'domain', 'no_follow', 'send_replies', 'author_fullname',
'subreddit_id', 'permalink', 'url', 'created', 'author_created',
'clean_text', 'origin', 'topic',
'author_karma', 'awards_value', 'author_posts', 'num_awards',
'sell', 'buy',
'sent_ntusd','sent_ntusd_wavg','sent_ntusd_coms', 'sent_ntusd_coms_wavg',
'sent_lr',
'sent_lr_coms',
'sent_db', 'sent_fb','sent_fbt',
'sent_dbe_sadness', 'sent_dbe_joy', 'sent_dbe_love','sent_dbe_anger', 'sent_dbe_fear', 'sent_dbe_surprise'
]].to_pickle(r'C:\Users\Ben\Desktop\Diplomatiki\CryptoSent\Datasets\Main Dataset\submissions.pickle')
ntword=pd.read_json(r'C:\Users\Ben\Desktop\Diplomatiki\CryptoSent\Datasets\other\NTUSD_Fin_word_v1.0.json')
ntemoji=pd.read_json(r'C:\Users\Ben\Desktop\Diplomatiki\CryptoSent\Datasets\other\NTUSD_Fin_emoji_v1.0.json')
nthastag=pd.read_json(r'C:\Users\Ben\Desktop\Diplomatiki\CryptoSent\Datasets\other\NTUSD_Fin_hashtag_v1.0.json')
word2sent=ntword.set_index('token').market_sentiment.to_dict()
emoji2sent=ntemoji.set_index('token').market_sentiment.to_dict()
def get_ntusd_score(text):
score=0
for token in text.split():
if token in word2sent:
score=score+word2sent[token]
if token in emoji2sent:
score=score+emoji2sent[token]
return score
def get_ntusd_score_wavg(text):
score=0
count=0
for token in text.split():
if token in word2sent:
score=score+word2sent[token]
count=count+1
if token in emoji2sent:
score=score+emoji2sent[token]
count=count+1
    try:
        return score / count
    except ZeroDivisionError:
        # none of the tokens appeared in the lexicons
        return 0
###Output
_____no_output_____ |
docs/10c-Ellipsometry.ipynb | ###Markdown
Rotating Analyzer Ellipsometry**Scott Prahl****May 2020**
###Code
import numpy as np
import matplotlib.pyplot as plt
import pypolar.fresnel as fresnel
import pypolar.ellipsometry as ell
###Output
_____no_output_____
###Markdown
Ellipsometer LayoutA basic ellipsometer configuration is shown below. Typically the incident light is linearly polarized but the reflected light is, in general, elliptically polarized. $$E_{rs} = r_s e^{j\delta_s} E_{is} $$and$$E_{rp} = r_p e^{j\delta_p}E_{ip} $$The parameter $\Delta$ describes the change in phase from parallel polarization after reflection (because $E_p$ and $E_s$ are in phase before incidence). The amplitude ratio (parallel vs perpendicular reflected light) is represented by $\tan\psi$. Rotating EllipsometerAn ellipsometer is usually used at a single angle of incidence with linear or circularly polarized light. The reflected light passes through a rotating analyzer before hitting the detector. The reflected light is monitored over 360° with each rotation. This produces a sinusoidal signal that looks like$$I(\phi) = I_\mathrm{DC} + I_C \cos 2\phi + I_S \sin 2\phi$$where $\phi$ is the angle that the analyzer makes with the plane of incidence. To extract the index of refraction of a substrate three things must be done1. fit the ellipsometer signal to obtain $I_\mathrm{DC}$, $I_S$, and $I_C$2. calculate $\alpha =I_C/I_\mathrm{DC}$ and $\beta =I_S/I_\mathrm{DC}$2. calculate $\rho =\tan\psi\cdot\exp(j\Delta)$ from $\alpha$ and $\beta$3. calculate the index of refraction using $\rho$ Fitting to a sinusoidWe want to fit the detected signal $I(\phi)$ to find average value $I_\mathrm{DC}$ as well as the two Fourier coefficients $I_S$ and $I_C$ $$I(\phi) = I_\mathrm{DC} + I_C \cos 2\phi + I_S \sin 2\phi$$Our ellipsometer digitizes the signal every 5 degrees to produce an array of 72 elements, $I_i$. The first challenge is to determine the coefficients $I_\mathrm{DC}$, $I_S$, and $I_C$ using these discrete data points$$I_i = I_\mathrm{DC} + I_C \cos2\phi_i + I_S \sin2\phi_i$$where, $\phi_i=2\pi i/N$.The DC offset $I_\mathrm{DC}$ is found by averaging over one analyzer rotation ($0\le\phi\le2\pi$)$$I_\mathrm{DC} = {1\over N}\sum_{i=0}^{N-1} I_i$$The Fourier coefficients are given by$$ I_S = {1\over \pi} \int_0^{2\pi} I(\phi)\sin(2\phi) \,{\rm d} \phi \qquad\mbox{and}\qquad I_C = {1\over \pi} \int_0^{2\pi} I(\phi)\cos(2\phi) \,{\rm d} \phi$$For the discrete case this becomes$$ I_S = {1\over \pi} \sum_{i=0}^{N-1} I_i\sin2\phi_i \cdot \Delta\phi\qquad\mbox{and}\qquad I_C = {1\over \pi} \sum_{i=0}^{N-1} I_i\cos2\phi_i \cdot \Delta\phi$$where $\Delta\phi=2\pi/N$. If we substitute for for $\Delta\phi$, then we recognize that $I_C$ and $I_S$ are just the weighted averages of $I_i$ over one drum rotation$$ I_S = {1\over N} \sum_{i=0}^{N-1} I_i\cdot 2\sin 2\phi_i\qquad\mbox{and}\qquad I_C = {1\over N} \sum_{i=0}^{N-1} I_i\cdot 2\cos 2\phi_i$$where the quantities in brackets need only be calculated once at the beginning of the analysis. Every rotation of the drum requires three averages to be calculated.Below is a test with random noise added to a known sinsoidal signal.
###Code
degrees = np.linspace(0,360,num=72,endpoint=False)
phi = degrees*np.pi/180
# this is the signal we will try to recover
a=4.1
b=0.8
c=0.5
error=0.1
signal=ell.rotating_analyzer_signal(phi,a,b,c,error)
# finding the offset and coefficients of the sin() and cos() terms
# np.average sums the array and divides by the number of elements N
I_DC = np.average(signal)
I_S = 2*np.average(signal*np.sin(2*phi))
I_C = 2*np.average(signal*np.cos(2*phi))
plt.scatter(degrees,signal,s=5,color='red')
plt.plot(degrees,I_DC + I_S*np.sin(2*phi) + I_C*np.cos(2*phi))
plt.xlabel("Analyzer Angle (degrees)")
plt.ylabel("Ellipsometer Intensity")
plt.xlim(-10,370)
plt.show()
print("I_DC expected=%.3f obtained=%.3f" % (a,I_DC))
print("I_S expected=%.3f obtained=%.3f" % (b,I_S))
print("I_C expected=%.3f obtained=%.3f" % (c,I_C))
###Output
_____no_output_____
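###Markdown
The same three coefficients can also be recovered with an ordinary least-squares fit; a short added sketch (reusing `phi` and `signal` from the cell above) that should agree with the averaging approach.
###Code
# Design matrix with columns [1, sin 2phi, cos 2phi].
A = np.column_stack([np.ones_like(phi), np.sin(2 * phi), np.cos(2 * phi)])
I_DC_ls, I_S_ls, I_C_ls = np.linalg.lstsq(A, signal, rcond=None)[0]
print("I_DC=%.3f  I_S=%.3f  I_C=%.3f" % (I_DC_ls, I_S_ls, I_C_ls))
###Output
_____no_output_____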
###Markdown
Isotropic, Homogeneous MaterialsConverting $\alpha$ and $\beta$ to surface properties requires a physical model for the surface. The simplest model is that of an isotropic flat material with reflection determined by the Fresnel reflection properties. Tompkins 2005, page 282, writes>The major problem with this approach is that it ignores the surface layer of the material. All materials have this surface overlayer, which may due to surface roughness, surface oxide, surface reconstruction, etc. Therefore, any realistic model of the sample is more complicated than a simple air/material, and [this analysis] is not valid for any model involving a surface overlayer. However, [it] is quite useful as a limiting case ...The expression for the electric field at the detector can be found using Jones matrices. The incident light passes through a linear polarizer at an angle $\theta_p$ relative to the plane of incidence. The light is reflected off the surface and then passes through a linear analyzer at an angle $\phi$. The electric field is$$E_D=\left[\begin{array}{cc}\cos^2\phi & \sin \phi\cos\phi\\\sin\phi\cos\phi & \sin^2\phi\\\end{array}\right]\cdot\left[\begin{array}{cc}r_p(\theta_i) & 0\\0 & r_s(\theta_i)\\\end{array}\right]\cdot\left[\begin{array}{cc}\cos^2\theta_p & \sin \theta_p\cos \theta_p\\\sin \theta_p\cos \theta_p & \sin^2 \theta_p\\\end{array}\right]\cdot\left[\begin{array}{c}1\\0\\\end{array}\right]$$Therefore$$E_D=\left[\begin{array}{cc}r_p(\theta_i)\cos^2\theta_p\cos^2\phi+r_s(\theta_i)\cos \theta_p\cos\phi\sin \theta_p \sin\phi\\r_p(\theta_i)\cos^2 \theta_p\cos\phi\sin\phi+r_s(\theta_i)\cos P\sin \theta_p \sin^2\phi\\\end{array}\right]$$and since the intensity is product of $E_D$ with its conjugate transpose $I=E_D\cdot E_D^T$ with a bit of algebra we find that$$I_\mathrm{measured} = \cos^2\theta_p \cdot\left|r_p(\theta_i)\cos\theta_p \cos\phi+r_s(\theta_i)\sin\theta_p \sin\phi\right|^2$$and with even more algebra this can be related back to the sinusoidal function$$I(\phi) = I_\mathrm{DC} + I_C \cos 2\phi + I_S \sin 2\phi$$which leads to$$\alpha = {I_C\over I_\mathrm{DC}} = {\tan^2\psi -\tan^2 \theta_p \over \tan^2\psi+\tan^2 \theta_p}\qquad\mbox{and}\qquad\beta = {I_S\over I_\mathrm{DC}} = {2\tan\psi \cos\Delta \tan \theta_p \over \tan^2\psi+\tan^2 \theta_p}$$
###Code
degrees = np.linspace(0,360,num=72,endpoint=False)
phi = degrees*np.pi/180
# create signal
m=3-0.2j # sample index of refraction
P=30 # incident polarization azimuth (degrees)
theta_p = np.radians(P) # incident polarization azimuth (radians)
th=70 # angle of incidence (degrees)
theta_i = np.radians(th) # angle of incidence (radians)
# Generate 72 intensities based on experimental conditions
# On reflected intensity for each angle of the rotating analyzer
phi_deg = np.linspace(0,360,num=72,endpoint=False)
phi = np.radians(phi_deg)
signal = ell.rotating_analyzer_signal_from_m(phi, m, theta_i, theta_p, noise=0.0003)
# analyze signal
rho, fit = ell.rho_from_rotating_analyzer_data(phi, signal, theta_p)
m2 = ell.m_from_rho(rho,theta_i)
# display data and fit
plt.plot(degrees,signal,'xr')
plt.plot(degrees,fit)
plt.xlabel("Analyzer Angle (degrees)")
plt.ylabel("Ellipsometer Measurement")
plt.title("%.1f° Linear Polarized Light, No QWP" % np.degrees(theta_p))
plt.xlim(-10,370)
plt.show()
print("m=%.3f%+.3fj (expected)"%(m.real,m.imag))
print("m=%.3f%+.3fj"%(m2.real,m2.imag))
#rho2 = ell.rho_from_m(m,theta_i)
#print("rho=%.3f%+.3fj (expected)"%(rho2.real,rho2.imag))
#print("rho=%.3f%+.3fj"%(rho.real,rho.imag))
###Output
_____no_output_____
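###Markdown
The closed-form expressions for $\alpha$ and $\beta$ above can be checked numerically; a small self-contained sketch with arbitrary example values for $\tan\psi$, $\Delta$ and the polarizer azimuth (these numbers are illustrative, not from a measurement).
###Code
tanpsi_ex = 0.6                  # tan(psi)
Delta_ex = np.radians(40)        # Delta
theta_p_ex = np.radians(30)      # polarizer azimuth
tp = np.tan(theta_p_ex)
denom = tanpsi_ex**2 + tp**2
alpha_ex = (tanpsi_ex**2 - tp**2) / denom
beta_ex = 2 * tanpsi_ex * np.cos(Delta_ex) * tp / denom
print("alpha=%.4f  beta=%.4f" % (alpha_ex, beta_ex))
# For this configuration alpha^2 + beta^2 <= 1, with equality when Delta = 0 or pi.
print("alpha^2 + beta^2 = %.4f" % (alpha_ex**2 + beta_ex**2))
###Output
_____no_output_____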
###Markdown
Ellipsometry ParametersIf linearly polarized light is incident with an azimuthal angle $\theta_p$ (where $\theta_p=0^\circ$ is in the plane of incidence) thenthe normalized intensity is $${I(\phi)\over I_\mathrm{DC}} = 1 + \alpha \cos 2\phi + \beta\sin 2\phi$$ No quarter wave plate in the incident beam ($0\le\theta_p\le90°$)The parameters $\psi$ and $\Delta$ can now be calculated from $\alpha$ and $\beta$$$\tan\psi = \sqrt{1+\alpha\over 1-\alpha}\cdot |\tan \theta_p| \qquad\mbox{and}\qquad\cos\Delta = {\beta\over\sqrt{1-\alpha^2}}\cdot{\tan \theta_p\over |\tan \theta_p|} $$Or in terms of $I_\mathrm{DC}$, $I_S$, and $I_C$$$\tan\psi = \sqrt{I_\mathrm{DC}+I_C\over I_\mathrm{DC}-I_C}\cdot |\tan \theta_p|\qquad\mbox{and}\qquad\cos\Delta = {I_S\over\sqrt{I_\mathrm{DC}^2-I_C^2}}\cdot{\tan \theta_p\over |\tan \theta_p|}$$ Example* generate a theoretical ellipsometer signal* fit the signal to determine m
###Code
# Experimental Conditions
m=1.5-0.0j # sample index of refraction
P=45 # incident polarization azimuth (degrees)
theta_p = np.radians(P) # incident polarization azimuth (radians)
th=70 # angle of incidence (degrees)
theta_i = np.radians(th) # angle of incidence (radians)
# Generate 72 intensities based on experimental conditions
# On reflected intensity for each angle of the rotating analyzer
phiD = np.linspace(0,360,num=72,endpoint=False)
phi = np.radians(phiD)
signal = ell.rotating_analyzer_signal_from_m(phi, m, theta_i, theta_p, noise=0.005)
# Calculate Delta, tanpsi, and index from 72 intensities
rho, fit = ell.rho_from_rotating_analyzer_data(phi, signal, theta_p)
m2 = ell.m_from_rho(rho, theta_i)
tanpsi, Delta = ell.tanpsi_Delta_from_rho(rho)
# Show the results
plt.plot(phiD, signal, 'or')
plt.plot(phiD, fit, color="blue")
plt.title(r"P=%d°, $\theta_i=$%d, m=%.1f-%.1fj" % (P,th,m.real,-m.imag))
plt.xlabel("Analyzer Angle (degrees)")
plt.ylabel("Ellipsometer Intensity")
plt.xlim(-10,370)
plt.show()
print("Fitted Delta=%.1f°, tanpsi=%.3f" % (np.degrees(Delta),tanpsi))
print("Original refractive index = %.3f%+.3fj" % (m.real,m.imag))
print("Recovered refractive index = %.3f%+.3fj" % (m2.real,m2.imag))
###Output
_____no_output_____
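###Markdown
The explicit formulas above can also be applied directly to the fitted Fourier coefficients; a sketch (reusing `phi`, `signal` and `theta_p` from the previous cell) that should agree with the library fit up to noise.
###Code
# Fourier coefficients of the measured signal (same averaging as earlier).
I_DC = np.average(signal)
I_S = 2 * np.average(signal * np.sin(2 * phi))
I_C = 2 * np.average(signal * np.cos(2 * phi))

# Direct inversion for the no-quarter-wave-plate case.
tanP = np.tan(theta_p)
tanpsi_direct = np.sqrt((I_DC + I_C) / (I_DC - I_C)) * abs(tanP)
cos_Delta = I_S / np.sqrt(I_DC**2 - I_C**2) * np.sign(tanP)
Delta_direct = np.degrees(np.arccos(np.clip(cos_Delta, -1, 1)))
print("tanpsi=%.3f  Delta=%.1f°" % (tanpsi_direct, Delta_direct))
###Output
_____no_output_____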
###Markdown
Sensitivity to random gaussian noise
###Code
# Experimental Conditions
m=3.0-0.2j # sample index of refraction
P=20 # incident linear polarization angle (degrees)
theta_p = np.radians(P) # incident linear polarization angle (radians)
th=30 # angle of incidence (degrees)
theta_i = np.radians(th)# angle of incidence (radians)
# Generate 72 intensities based on experimental conditions
phi = np.radians(np.linspace(0,360,num=72,endpoint=False))
# error free signal needed to scale the plots
signal = ell.rotating_analyzer_signal_from_m(phi, m, theta_i, theta_p)
scale = signal.mean()
ymax = signal.max()
N=16
dev = np.zeros(N)
err = np.zeros(N)
print("m=%.3f%+.3f expected" % (m.real,m.imag))
# create fit plots for each error
plt.subplots(4,4,figsize=(10,10))
for i in range(N):
error = scale*2**(-i-2)
signal = ell.rotating_analyzer_signal_from_m(phi, m, theta_i, theta_p, noise=error)
rho, fit = ell.rho_from_rotating_analyzer_data(phi, signal, theta_p)
m2 = ell.m_from_rho(rho, theta_i)
dev[i]=abs(m2-m)
rel_error = 100*error/ymax
err[i]=rel_error
plt.subplot(4,4,i+1)
plt.plot(np.degrees(phi), signal, 'ob')
plt.plot(np.degrees(phi), fit, 'k')
plt.ylim(-0.1*ymax,1.1*ymax)
plt.text(0,0.99*ymax,'%.4f%%'%rel_error)
plt.yticks([])
plt.xticks([])
print("m=%.3f%+.3f error=%.3f%%" % (m2.real,m2.imag,rel_error))
plt.show()
plt.scatter(err,dev)
plt.xlim(1e-4,100)
plt.ylim(5e-4,20)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('Relative Noise Added to Signal (%)')
plt.ylabel('Absolute Error in Measured Index')
plt.title("Expected m=%.3f%+.3f" % (m.real,m.imag))
plt.show()
###Output
m=3.000-0.200 expected
m=0.610-0.968 error=12.505%
m=2.197-0.821 error=6.253%
m=1.713-1.200 error=3.126%
m=11.331+0.000 error=1.563%
m=2.751-0.797 error=0.782%
m=4.636+0.000 error=0.391%
m=2.941-0.412 error=0.195%
m=3.702+0.000 error=0.098%
m=2.989-0.295 error=0.049%
m=3.014-0.068 error=0.024%
m=3.002-0.188 error=0.012%
m=3.000-0.200 error=0.006%
m=3.005-0.165 error=0.003%
m=3.002-0.186 error=0.002%
m=3.001-0.196 error=0.001%
m=2.999-0.204 error=0.000%
###Markdown
Not all measurements give equally good results
###Code
# Experimental Conditions
phi = np.radians(np.linspace(0,360,num=72,endpoint=False))
tanpsi = 0.5
print(" P Delta tanpsi Delta tanpsi Delta tanpsi")
for P in [1, 45, 89, 91, 178, -10, -89]:
for Del in [10, 67, 120, 178]:
Delta = np.radians(Del)
theta_p = np.radians(P)
rho0 = tanpsi*np.exp(1j*Delta)
print("%6.1f° %6.1f° %6.3f: " % (P,Del,tanpsi), end='')
# print(" [%.2f° %.2f] " % (np.degrees(np.angle(rho0)),rho.imag),end='')
signal = ell.rotating_analyzer_signal_from_rho(phi, rho0, theta_p, noise=0.0002)
rho, fit = ell.rho_from_rotating_analyzer_data(phi, signal, theta_p)
print("%8.3f° %6.3f" % (np.degrees(np.angle(rho)),np.abs(rho)))
###Output
P Delta tanpsi Delta tanpsi Delta tanpsi
1.0° 10.0° 0.500: 6.748° 0.504
1.0° 67.0° 0.500: 67.123° 0.498
1.0° 120.0° 0.500: 120.367° 0.506
1.0° 178.0° 0.500: 173.541° 0.497
45.0° 10.0° 0.500: 10.051° 0.500
45.0° 67.0° 0.500: 67.000° 0.500
45.0° 120.0° 0.500: 119.997° 0.500
45.0° 178.0° 0.500: 177.919° 0.500
89.0° 10.0° 0.500: 0.000° 0.255
89.0° 67.0° 0.500: 62.731° 0.424
89.0° 120.0° 0.500: 126.181° 0.422
89.0° 178.0° 0.500: 142.567° 0.630
91.0° 10.0° 0.500: 33.703° 0.592
91.0° 67.0° 0.500: 67.805° 0.519
91.0° 120.0° 0.500: 129.220° 0.396
91.0° 178.0° 0.500: -180.000° 0.375
178.0° 10.0° 0.500: 10.602° 0.499
178.0° 67.0° 0.500: 66.955° 0.501
178.0° 120.0° 0.500: 120.087° 0.502
178.0° 178.0° 0.500: 177.213° 0.500
-10.0° 10.0° 0.500: 9.935° 0.500
-10.0° 67.0° 0.500: 67.001° 0.500
-10.0° 120.0° 0.500: 120.006° 0.500
-10.0° 178.0° 0.500: 178.094° 0.500
-89.0° 10.0° 0.500: -0.000° 0.665
-89.0° 67.0° 0.500: 67.228° 0.502
-89.0° 120.0° 0.500: 126.772° 0.417
-89.0° 178.0° 0.500: 175.630° 0.500
|
agglomerative.ipynb | ###Markdown
Agglomerative Clustering algorithm Loading numpy and AgglomerativeClustering function from sklearn
###Code
from sklearn.cluster import AgglomerativeClustering
import numpy as np
###Output
_____no_output_____
###Markdown
Creating input array
###Code
X = np.array([[1, 1], [1, 2], [1, 0],
[10, 3], [10, 4], [10, 1]])
###Output
_____no_output_____
###Markdown
Calling AgglomerativeClustering algorithm
###Code
cluster = AgglomerativeClustering().fit(X)
cluster.labels_
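# Note: called with no arguments this uses the defaults n_clusters=2 and
# linkage="ward"; other linkages ("average", "complete", "single") and cluster
# counts can be passed explicitly, e.g. AgglomerativeClustering(n_clusters=2, linkage="average").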
###Output
_____no_output_____ |
Logistic Regression/Code/Logistic Regression.ipynb | ###Markdown
Logistic Regression Created by Ramses Alexander Coraspe Valdez Created on October 9, 2019
###Code
import pandas as pd
import numpy as np
import math
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split, KFold, cross_val_score
from sklearn import linear_model
import statsmodels.api as sm
from sklearn import metrics
###Output
_____no_output_____
###Markdown
a. Perform a descriptive analysis of the variables/factors, including numerical and graphical analysis that you consider appropriate.
###Code
missing = ["?"]
df1 = pd.read_csv('https://raw.githubusercontent.com/Wittline/Machine_Learning/master/Logistic%20Regression/breast-cancer-wisconsin.data',
sep=',',
names=["id", "Clump_Thickness", "Uniformity_CellSize", "Uniformity_CellShape", 'Marginal_Adhesion', 'Single_Epithelial_CellSize', 'Bare_Nuclei', 'Bland_Chromatin', 'Normal_Nucleoli', 'Mitoses', 'Class'],
na_values = missing);
df1.head(13)
df1.isnull().sum()
mdn = df1['Bare_Nuclei'].median()
df1['Bare_Nuclei'].fillna(mdn, inplace=True)
df1.isnull().sum()
df1.drop(['id'], axis = 1, inplace = True)
#df1.drop(df1.columns[[0]], axis = 1, inplace = True)
benign = df1[df1['Class']==2]
malignant = df1[df1['Class']==4]
plt.figure(1, figsize=(18, 8));
bp = plt.boxplot([df1.Clump_Thickness, df1.Uniformity_CellSize, df1.Uniformity_CellShape, df1.Marginal_Adhesion, df1.Single_Epithelial_CellSize, df1.Bare_Nuclei, df1.Bland_Chromatin, df1.Normal_Nucleoli, df1.Mitoses], vert=True, patch_artist=True,
flierprops={'alpha':0.6, 'markersize': 6,
'markeredgecolor': '#555555','marker': 'd',
'markerfacecolor': "#555555"},
capprops={'color': '#555555', 'linewidth': 2},
boxprops={'color': '#555555', 'linewidth': 2},
whiskerprops={'color': '#555555', 'linewidth': 2},
medianprops={'color': '#555555', 'linewidth': 2},
meanprops={'color': '#555555', 'linewidth': 2});
plt.grid(True, alpha=0.6);
plt.title("Box Plot", fontsize=20);
plt.ylabel("Frequency", fontsize=20);
plt.xticks(ticks=[1,2,3,4,5,6,7,8,9], labels=["Clump_Thickness", "Uniformity_CellSize", "Uniformity_CellShape", 'Marginal_Adhesion', 'Single_Epithelial_CellSize', 'Bare_Nuclei', 'Bland_Chromatin', 'Normal_Nucleoli', 'Mitoses'], fontsize=10);
bp['boxes'][0].set(facecolor='blue', alpha= 0.6);
bp['boxes'][1].set(facecolor="blue",alpha= 0.6 );
bp['boxes'][2].set(facecolor='blue', alpha= 0.6);
bp['boxes'][3].set(facecolor="blue",alpha= 0.6 );
bp['boxes'][4].set(facecolor="blue",alpha= 0.6 );
bp['boxes'][5].set(facecolor="blue",alpha= 0.6 );
bp['boxes'][6].set(facecolor="blue",alpha= 0.6 );
bp['boxes'][7].set(facecolor="blue",alpha= 0.6 );
bp['boxes'][8].set(facecolor="blue",alpha= 0.6 );
plt.show();
f, axes = plt.subplots(3, 3, figsize=(20, 10))
sns.distplot( benign["Clump_Thickness"] , color="skyblue", ax=axes[0, 0], kde=False)
sns.distplot( malignant["Clump_Thickness"] , color="red", ax=axes[0, 0], kde=False)
axes[0,0].set_xlim([1, 10])
sns.distplot( benign["Uniformity_CellSize"] , color="skyblue", ax=axes[0, 1], kde=False)
sns.distplot( malignant["Uniformity_CellSize"] , color="red", ax=axes[0, 1], kde=False)
axes[0,1].set_xlim([1, 10])
sns.distplot( benign["Uniformity_CellShape"] , color="skyblue", ax=axes[0, 2], kde=False)
sns.distplot( malignant["Uniformity_CellShape"] , color="red", ax=axes[0, 2], kde=False)
axes[0,2].set_xlim([1, 10])
sns.distplot( benign["Marginal_Adhesion"] , color="skyblue", ax=axes[1, 0], kde=False)
sns.distplot( malignant["Marginal_Adhesion"] , color="red", ax=axes[1, 0], kde=False)
axes[1,0].set_xlim([1, 10])
sns.distplot( benign["Single_Epithelial_CellSize"] , color="skyblue", ax=axes[1, 1], kde=False)
sns.distplot( malignant["Single_Epithelial_CellSize"] , color="red", ax=axes[1, 1], kde=False)
axes[1,1].set_xlim([1, 10])
sns.distplot( benign["Bare_Nuclei"] , color="skyblue", ax=axes[1, 2], kde=False)
sns.distplot( malignant["Bare_Nuclei"] , color="red", ax=axes[1, 2], kde=False)
axes[1,2].set_xlim([1, 10])
sns.distplot( benign["Bland_Chromatin"] , color="skyblue", ax=axes[2, 0], kde=False)
sns.distplot( malignant["Bland_Chromatin"] , color="red", ax=axes[2, 0], kde=False)
axes[2,0].set_xlim([1, 10])
sns.distplot( benign["Normal_Nucleoli"] , color="skyblue", ax=axes[2, 1], kde=False)
sns.distplot( malignant["Normal_Nucleoli"] , color="red", ax=axes[2, 1], kde=False)
axes[2,1].set_xlim([1, 10])
sns.distplot( benign["Mitoses"] , color="skyblue", ax=axes[2,2], kde=False)
sns.distplot( malignant["Mitoses"] , color="red", ax=axes[2, 2], kde=False)
axes[2,2].set_xlim([1, 10])
plt.plot();
###Output
_____no_output_____
###Markdown
b. What variables will be the input variables? Indicate the type of each of these variables: numerical or categorical. In the case of categorical variables, clearly indicate the treatment of the dummy variables introduced. Would all the independent variables that are in the dataset be used? If not, indicate which of them would not be used and why.
1. "Clump_Thickness"
2. "Uniformity_CellSize"
3. "Uniformity_CellShape"
4. "Marginal_Adhesion"
5. "Single_Epithelial_CellSize"
6. "Bare_Nuclei"
7. "Bland_Chromatin"
8. "Normal_Nucleoli"
9. "Mitoses"
10. "Class"
The first nine variables are nominal, with ordinal values assigned to them and the same distance between categories. These variables could be normalized to another range, but they all behave the same way and are apparently already scaled; they will be the input variables.
The variable "Class" is categorical (binary) and will be used to identify the class of each record.
So far the only variable removed from the dataset is "id", which is just a unique identifier of the record; other variables may be discarded later because of high correlation.
c. Is it necessary to normalize or scale some of the variables? Justify your decision. Although all input variables have nominal ordinal values, they will not be normalized, since they are all in the same range of 1 - 10.
d. What is the output variable and how many levels will it have? How much data does the output variable have in each class?
1. The "Class" variable will be the output.
2. Two levels.
3. The class "Benign" represents 65.52% and the class "Malignant" represents 34.48%.
###Code
plt.figure(figsize=(10, 8))
ax= sns.countplot(df1['Class'], palette=['skyblue', 'red'])
total = len(df1['Class'])
for p in ax.patches:
name = 'Benign'
if((abs(p.get_x())*10 )- 2 == 4):
name = 'Malignant'
height = p.get_height()
ax.text(p.get_x()+p.get_width()/2.,
height + 3,
'{:1.2f} % {}'.format(round((height/total)*100,2), name ) ,
ha="center")
plt.show()
###Output
_____no_output_____
###Markdown
e. Obtain Pearson's correlation coefficients for each pair of variables and include your conclusions about it. Checking the correlation matrix below, we see that the variables *Uniformity_CellSize* and *Uniformity_CellShape* are highly correlated, so either of them could be dropped from the analysis; in this case we keep both in order to observe whether this affects the final outcomes.
###Code
corr = df1.corr(method='pearson').round(2)
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
f, ax = plt.subplots(figsize=(10, 10))
c_map = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr, mask=mask, cmap=c_map, vmin=-1, vmax=1, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5}, annot=True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
f. Perform a random partition into a Training set (80%) and a Test set (20%). Note that you must select the samples by stratified sampling to respect the proportion of classes M and B.
###Code
df1.Class = [1 if each == 4 else 0 for each in df1.Class]
y = df1.Class
X = df1.drop(['Class'], axis = 1)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=40)
###Output
_____no_output_____
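###Markdown
A note on the stratified sampling mentioned above (an added sketch, not part of the original analysis): `train_test_split` only stratifies when asked to, so the cell below shows how the class proportions could be preserved explicitly via the `stratify` argument. The `_s`-suffixed variable names are new and do not affect the rest of the notebook.
###Code
# Sketch: stratified 80/20 split preserving the Benign/Malignant proportions
x_train_s, x_test_s, y_train_s, y_test_s = train_test_split(
    X, y, test_size=0.2, random_state=40, stratify=y)
# the share of class 1 (malignant) should be ~0.345 in both parts
print(y_train_s.mean(), y_test_s.mean())
###Output
_____no_output_____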
###Markdown
g. Find the logistic regression model with the training data, including the p-values of each coefficient, and the AIC metric.
###Code
lg = linear_model.LogisticRegression(random_state = 40, max_iter = 100,solver='lbfgs')
print("Train accuracy: {} ".format(lg.fit(x_train, y_train).score(x_train, y_train)))
logit_model=sm.Logit(y,X)
result=logit_model.fit()
print(result.summary2())
###Output
Train accuracy: 0.9749552772808586
Optimization terminated successfully.
Current function value: 0.385080
Iterations 8
Results: Logit
==========================================================================
Model: Logit Pseudo R-squared: 0.402
Dependent Variable: Class AIC: 556.3415
Date: 2019-10-11 14:29 BIC: 597.2883
No. Observations: 699 Log-Likelihood: -269.17
Df Model: 8 LL-Null: -450.26
Df Residuals: 690 LLR p-value: 2.2651e-73
Converged: 1.0000 Scale: 1.0000
No. Iterations: 8.0000
--------------------------------------------------------------------------
Coef. Std.Err. z P>|z| [0.025 0.975]
--------------------------------------------------------------------------
Clump_Thickness -0.3365 0.0569 -5.9147 0.0000 -0.4480 -0.2250
Uniformity_CellSize 0.9223 0.1279 7.2127 0.0000 0.6717 1.1730
Uniformity_CellShape 0.1888 0.1078 1.7519 0.0798 -0.0224 0.4000
Marginal_Adhesion 0.1151 0.0737 1.5623 0.1182 -0.0293 0.2596
Single_Epithelial_CellSize -0.8097 0.1011 -8.0075 0.0000 -1.0078 -0.6115
Bare_Nuclei 0.5530 0.0622 8.8873 0.0000 0.4311 0.6750
Bland_Chromatin -0.5138 0.0891 -5.7645 0.0000 -0.6885 -0.3391
Normal_Nucleoli 0.3267 0.0702 4.6528 0.0000 0.1891 0.4644
Mitoses -0.2250 0.0860 -2.6148 0.0089 -0.3937 -0.0563
==========================================================================
###Markdown
h. Check the model's performance with the Test set. Show the confusion matrix and the threshold value used.
###Code
print("Test accuracy: {} ".format(lg.fit(x_train, y_train).score(x_test, y_test)))
score = lg.score(x_test, y_test)
predictions = lg.predict(x_test)
cm = metrics.confusion_matrix(y_test, predictions)
plt.figure(figsize=(9,9))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 15);
###Output
Test accuracy: 0.9357142857142857
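###Markdown
The confusion matrix above uses `lg.predict`, whose implicit decision threshold is 0.5 on the predicted probability. Below is a small added sketch (with an assumed example threshold) of how a different cut-off could be applied; `probs`, `threshold` and `custom_predictions` are new names introduced here.
###Code
# Sketch: apply an explicit threshold to the malignant-class probability
probs = lg.predict_proba(x_test)[:, 1]
threshold = 0.5                        # assumed value; 0.5 reproduces lg.predict
custom_predictions = (probs >= threshold).astype(int)
metrics.confusion_matrix(y_test, custom_predictions)
###Output
_____no_output_____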
###Markdown
i. Make the adjustments that you consider appropriate in the process to improve the model obtained, indicating the adjustments made and whether it was possible to improve it. Removing the *Uniformity_CellShape* variable, which is highly correlated and has a p-value greater than 0.05, does not improve the model.
###Code
y = df1.Class
X = df1.drop(['Uniformity_CellShape', 'Class'], axis = 1)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=40)
lg = linear_model.LogisticRegression(random_state = 40, max_iter = 100, solver='lbfgs')
print("Train accuracy: {} ".format(lg.fit(x_train, y_train).score(x_train, y_train)))
logit_model=sm.Logit(y,X)
result=logit_model.fit()
print(result.summary2())
print("Test accuracy: {} ".format(lg.fit(x_train, y_train).score(x_test, y_test)))
score = lg.score(x_test, y_test)
predictions = lg.predict(x_test)
cm = metrics.confusion_matrix(y_test, predictions)
plt.figure(figsize=(9,9))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 15);
###Output
Train accuracy: 0.9767441860465116
Optimization terminated successfully.
Current function value: 0.387342
Iterations 8
Results: Logit
==========================================================================
Model: Logit Pseudo R-squared: 0.399
Dependent Variable: Class AIC: 557.5045
Date: 2019-10-11 14:29 BIC: 593.9017
No. Observations: 699 Log-Likelihood: -270.75
Df Model: 7 LL-Null: -450.26
Df Residuals: 691 LLR p-value: 1.4417e-73
Converged: 1.0000 Scale: 1.0000
No. Iterations: 8.0000
--------------------------------------------------------------------------
Coef. Std.Err. z P>|z| [0.025 0.975]
--------------------------------------------------------------------------
Clump_Thickness -0.3151 0.0549 -5.7435 0.0000 -0.4226 -0.2076
Uniformity_CellSize 1.0402 0.1094 9.5053 0.0000 0.8257 1.2546
Marginal_Adhesion 0.1116 0.0740 1.5070 0.1318 -0.0335 0.2567
Single_Epithelial_CellSize -0.8018 0.1002 -8.0020 0.0000 -0.9982 -0.6054
Bare_Nuclei 0.5648 0.0621 9.0984 0.0000 0.4431 0.6864
Bland_Chromatin -0.5092 0.0886 -5.7475 0.0000 -0.6828 -0.3355
Normal_Nucleoli 0.3439 0.0688 4.9994 0.0000 0.2090 0.4787
Mitoses -0.2167 0.0849 -2.5521 0.0107 -0.3831 -0.0503
==========================================================================
Test accuracy: 0.9428571428571428
###Markdown
j. Repeat the process from item (f), but instead of performing the partition indicated in that item, use the cross-validation method with the partition value you consider appropriate.
###Code
y = df1.Class
X = df1.drop(['Class'], axis = 1)
kf = KFold(n_splits=6)
lg = linear_model.LogisticRegression(random_state = 40, max_iter = 100, solver='lbfgs')
print(cross_val_score(lg, X, y, cv=kf, scoring='accuracy').mean())
###Output
0.9643260634639944
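###Markdown
A possible refinement (an added sketch, not part of the original item): `StratifiedKFold` keeps the Benign/Malignant proportion in every fold, matching the stratification requirement from item (f). The variable `skf` is new.
###Code
# Sketch: stratified 6-fold cross-validation
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=6, shuffle=True, random_state=40)
print(cross_val_score(lg, X, y, cv=skf, scoring='accuracy').mean())
###Output
_____no_output_____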
|
Chapter1/01_03_regex.ipynb | ###Markdown
NLP Basics: Learning how to use regular expressions Using regular expressions in Python Python's `re` package is the most commonly used regex resource. More details can be found [here](https://docs.python.org/3/library/re.html).
###Code
import re
re_test = 'This is a made up string to test 2 different regex methods'
re_test_messy = 'This      is a made up     string to test 2    different regex methods'
re_test_messy1 = 'This-is-a-made/up.string*to>>>>test----2""""""different~regex-methods'
###Output
_____no_output_____
###Markdown
Splitting a sentence into a list of words
###Code
re.split('\s', re_test)
re.findall('\S+', re_test)
###Output
_____no_output_____
###Markdown
Replacing a specific string
###Code
re.split('\s+', re_test_messy)
re.findall('\S+', re_test_messy)
re.split('\W+', re_test_messy1)
re.findall('\w+', re_test_messy1)
pep8_test = 'I try to follow PEP8 guidelines'
pep7_test = 'I try to follow PEP7 guidelines'
peep8_test = 'I try to follow PEEP8 guidelines'
re.findall('[A-Z]+[0-9]+', pep8_test)
re.sub('[A-Z]+[0-9]+', 'PEP8 Python Styleguide', peep8_test)
###Output
_____no_output_____ |
examples/DNVGL-ST-F101_pressure_containment.ipynb | ###Markdown
**Task**: Pipe pressure containment (bursting) according to DNVGL-ST-F101.**References**:1. [DNVGL-ST-F101](https://www.dnvgl.com/oilgas/download/dnvgl-st-f101-submarine-pipeline-systems.html) (edition 2017-12)1. [PDover2t](https://github.com/qwilka/PDover2t) Copyright © 2018 Stephen McEntee. Licensed under the MIT license, see [PDover2t LICENSE file](https://github.com/qwilka/PDover2t/blob/master/LICENSE) for details.
###Code
import pprint
import numpy as np
import pdover2t
parameters = {
"alpha_U": 1.0,
"D": 0.660,
"g": 9.81,
"gamma_inc": 1.1,
"gamma_SCPC": 1.138,
"h_ref": 30.,
"h_l": 0.,
"material": "CMn",
"p_d": 240e5,
"rho_cont": 275.,
"rho_water": 1027.,
"rho_t": 1027.,
"SC": "medium",
"SMYS": 450.e6,
"SMTS": 535.e6,
"t": 0.0212,
"t_corr": 0.0005,
"t_fab": 0.001,
"T": 60,
}
###Output
_____no_output_____
###Markdown
Calculate pipe pressure containment utility, showing all intermediate results and unity value. Reference: DNVGL-ST-F101 (2017-12) sec:5.4.2.1, eq:5.6, page:93; $p_{li}$ sec:5.4.2.1, eq:5.7, page:94; $p_{lt}$ $$p_{li} - p_e \:\leq\: \min \left( \frac{p_b(t_1)}{\gamma_m \,\cdot\, \gamma_{SC,PC}} ;\frac{p_{lt}}{\alpha_{spt}} - p_e ;\frac{p_{mpt} \cdot \alpha_U}{\alpha_{mpt}} \right)$$$$p_{lt} - p_e \:\leq\: \min \left( \frac{p_b(t_1)}{\gamma_m \,\cdot\, \gamma_{SC,PC}} ;p_{mpt} \right)$$
###Code
p_cont_overall = pdover2t.dnvgl_st_f101.press_contain_all(ret="all", **parameters)
pprint.pprint(p_cont_overall)
print("Pressure containment unity check result: {:.2f}".format(p_cont_overall["p_cont_uty"]))
###Output
Pressure containment unity check result: 1.10
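###Markdown
For orientation only (an added sketch, not PDover2t output): the left-hand sides of the two criteria above can be written out directly from the input parameters. The expressions below assume the usual definitions of local incidental pressure, p_li = gamma_inc * p_d + rho_cont * g * (h_ref - h_l), and external pressure, p_e = rho_water * g * (-h_l); check them against the standard before relying on them.
###Code
# Rough sketch of the local pressure terms (assumed formulas, see the note above)
p = parameters
p_inc = p["gamma_inc"] * p["p_d"]                                 # incidental reference pressure
p_li = p_inc + p["rho_cont"] * p["g"] * (p["h_ref"] - p["h_l"])   # local incidental pressure
p_e = p["rho_water"] * p["g"] * (-p["h_l"])                       # external pressure at elevation h_l
print("p_li = {:.1f} bar, p_e = {:.1f} bar".format(p_li / 1e5, p_e / 1e5))
###Output
_____no_output_____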
|
Week 7/Quandl_2.ipynb | ###Markdown
Quandl 2: Analysis
This notebook describes the following:
1. Getting data from quandl,
2. Applying some descriptive analytics,
3. Applying time series analysis,
4. Running a linear regression,
5. Code: VAR,
6. Code: Fixed effects,
7. OLS with Pandas,
8. Logistic regression.
*Note: the notebook itself aims to introduce the "analysis" interface without concentrating on details of rigorous economic research.*
Resources:
- [Pandas computational functions](http://pandas.pydata.org/pandas-docs/version/0.9.0/computation.html)
1. Getting data from quandl
###Code
import quandl
with open('quandl_key.txt','r') as f:
key = f.read()
data = quandl.get(["FRED/GDP","FRED/UNRATE", "FRED/FEDFUNDS", "FRED/CPIAUCSL"],authtoken=key, collapse="annual")
data.head()
data[[0]].head()
###Output
_____no_output_____
###Markdown
2. Applying some descriptive analytics
###Code
data.info()
data.describe()
data.corr()
data.cov()
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.heatmap(data.corr())
###Output
_____no_output_____
###Markdown
3. Applying time series analysis
###Code
from statsmodels.tsa.arima_model import ARIMA
from statsmodels.tsa.stattools import acf, pacf #ACF and PACF, which are later explained
acf = acf(data[[0]])
pacf = pacf(data[[0]])
plt.plot(acf) # q: number of MA components
plt.plot(pacf,'r') # p: number of AR components
from statsmodels.tsa.stattools import adfuller
stationarity_test = adfuller(data["FRED/GDP - Value"])
print(stationarity_test[1])
model = ARIMA(data[[0]], order=(1, 1, 0)) #AR(p), I(d), MA(q)
results = model.fit()
plt.plot(data[[0]].diff()) # plot the original data graph
plt.plot(results.fittedvalues, 'r') # plot the fitted graph
from statsmodels.tsa.arima_model import ARIMAResults
results.summary()
###Output
_____no_output_____
###Markdown
4. Running a linear regression
###Code
from statsmodels.formula.api import ols
model_ols = ols(formula="data[[0]] ~ data[[1]]+data[[2]]+data[[3]]", data=data)
results_ols = model_ols.fit()
results_ols.summary()
data[[0]].hist(bins=5)
data[[0]].diff().hist()
import numpy as np
np.log(data[[0]]).hist()
results_ols.resid.hist()
sns.distplot(results_ols.resid)
###Output
_____no_output_____
###Markdown
5. VAR
###Code
from statsmodels.tsa.api import VAR
model_var = VAR(data[["FRED/GDP - Value", "FRED/UNRATE - Value"]].dropna())  # a VAR needs at least two series; drop missing rows
results_var = model_var.fit()
results_var.summary()
###Output
_____no_output_____
###Markdown
6. Panel data
###Code
from pandas.stats.plm import PanelOLS
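# note: pandas.stats.plm was removed in pandas 0.20+, so this cell needs an older pandas
# (the separate `linearmodels` package provides a PanelOLS replacement)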
reg = PanelOLS(y=data[[0]],x=data[[1]],time_effects=True)
reg
###Output
_____no_output_____
###Markdown
7. OLS with Pandas
###Code
from pandas.stats.ols import OLS
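# pandas.stats.ols is likewise only available in pandas versions before 0.20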
linear = OLS(y=data["FRED/GDP - Value"],x=data["FRED/UNRATE - Value"])
linear
###Output
_____no_output_____
###Markdown
8. Logistic regression
###Code
data[[0]].pct_change().mean()
import numpy as np
data["GDP_status"] = np.where(data[[0]].pct_change()>data[[0]].pct_change().mean(),1,0)
data.head()
data["FRED/UNRATE - Value"]= data["FRED/UNRATE - Value"].fillna(data["FRED/UNRATE - Value"].mean())
from statsmodels.api import Logit
logit = Logit(data['GDP_status'], data["FRED/CPIAUCSL - Value"])
results_logit = logit.fit()
results_logit.summary()
###Output
Optimization terminated successfully.
Current function value: 0.646569
Iterations 4
|
CaseStudy_DynamicTomography/01_LoadData_CreateSparseData.ipynb | ###Markdown
1) First, we download the dynamic tomographic data from [Zenodo](https://zenodo.org/record/3696817.YGyJMxMzb_Q).
2) Two resolutions are available, 256 x 256 or 512 x 512 (GelPhantomData_b4.mat and GelPhantomData_b2.mat, respectively). For the paper, we use the 256x256 resolution for simplicity.
3) Two additional datasets are provided in GelPhantom_extra_frames.mat, with densely sampled measurements from the first time step and the last (18th) time step.
**Note that the pixel size of the detector is wrong. The correct pixel size should be doubled.**
###Code
# Download files from Zenodo
download_zenodo()
# Read matlab files for the 17 frames
name = "GelPhantomData_b4"
path = "MatlabData/"
file_info = read_frames(path, name)
# Get sinograms + metadata
sinograms = file_info['sinograms']
frames = sinograms.shape[0]
angles = file_info['angles']
distanceOriginDetector = file_info['distanceOriginDetector']
distanceSourceOrigin = file_info['distanceSourceOrigin']
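# the detector pixel size stored in the file is wrong (see the note above), hence the factor of 2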
pixelSize = 2*file_info['pixelSize']
numDetectors = file_info['numDetectors']
# Create acquisition + image geometries
ag = AcquisitionGeometry.create_Cone2D(source_position = [0, distanceSourceOrigin],
detector_position = [0, -distanceOriginDetector])\
.set_panel(numDetectors, pixelSize)\
.set_channels(frames)\
.set_angles(angles, angle_unit="radian")\
.set_labels(['channel','angle', 'horizontal'])
ig = ag.get_ImageGeometry()
ig.voxel_num_x = 256
ig.voxel_num_y = 256
# Create the 2D+time acquisition data
data = ag.allocate()
for i in range(frames):
data.fill(sinograms[i], channel = i)
# Create and save Sparse Data with different angular sampling: 18, 36, 72, 360 projections
for i in [1, 5, 10, 20]:
name_proj = "data_{}".format(int(360/i))
new_data = Slicer(roi={'angle':(0,360,i)})(data)
ag = new_data.geometry
ig = ag.get_ImageGeometry()
ig.voxel_num_x = 256
ig.voxel_num_y = 256
show2D(new_data, slice_list = [0,5,10,16], num_cols=4, origin="upper",
cmap="inferno", title="Projections {}".format(int(360/i)), size=(25, 20))
writer = NEXUSDataWriter(file_name = "SparseData/"+name_proj+".nxs",
data = new_data)
writer.write()
# Read matlab files for the extra frames
name = "GelPhantom_extra_frames"
path = "MatlabData/"
frame1_info = read_extra_frames(path, name, "GelPhantomFrame1_b4")
frame18_info = read_extra_frames(path, name, "GelPhantomFrame18_b4")
# Acquisition geometry for the 1st frame: 720 projections
ag2D_frame1 = AcquisitionGeometry.create_Cone2D(source_position = [0, frame1_info['distanceSourceOrigin']],
detector_position = [0, -frame1_info['distanceOriginDetector']])\
.set_panel(num_pixels = frame1_info['numDetectors'], pixel_size = 2*frame1_info['pixelSize'])\
.set_angles(frame1_info['angles'])\
.set_labels(['angle', 'horizontal'])
# Acquisition geometry for the 18th frame: 1600 projections
ag2D_frame18 = AcquisitionGeometry.create_Cone2D(source_position = [0, frame18_info['distanceSourceOrigin']],
detector_position = [0, -frame18_info['distanceOriginDetector']])\
.set_panel(num_pixels = frame18_info['numDetectors'], pixel_size = 2*frame18_info['pixelSize'])\
.set_angles(frame18_info['angles'])\
.set_labels(['angle', 'horizontal'])
# Image geometry is the same
ig = ag2D_frame18.get_ImageGeometry()
ig.voxel_num_x = 256
ig.voxel_num_y = 256
# Create and save the 2D acquisition data for the extra frames
data = ag2D_frame1.allocate()
data.fill(frame1_info['sinograms'])
show2D(data, cmap="inferno")
# Save prescan
name = "data_prescan_720"
writer = NEXUSDataWriter(file_name = "SparseData/" + name +".nxs",
data = data)
writer.write()
data = ag2D_frame18.allocate()
data.fill(frame18_info['sinograms'])
show2D(data, cmap="inferno")
# Save postscan
name = "data_postscan_1600"
writer = NEXUSDataWriter(file_name = "SparseData/" + name +".nxs",
data = data)
writer.write()
###Output
_____no_output_____ |
4. Python for DS/5-1-Numpy1D.ipynb | ###Markdown
1D Numpy in Python Welcome! This notebook will teach you about using Numpy in the Python Programming Language. By the end of this lab, you'll know what Numpy is and the Numpy operations. Table of Contents Preparation What is Numpy? Type Assign Value Slicing Assign Value with List Other Attributes Numpy Array Operations Array Addition Array Multiplication Product of Two Numpy Arrays Dot Product Adding Constant to a Numpy Array Mathematical Functions Linspace Estimated time needed: 30 min Preparation
###Code
# Import the libraries
import time
import sys
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Plotting functions
def Plotvec1(u, z, v):
ax = plt.axes()
ax.arrow(0, 0, *u, head_width=0.05, color='r', head_length=0.1)
plt.text(*(u + 0.1), 'u')
ax.arrow(0, 0, *v, head_width=0.05, color='b', head_length=0.1)
plt.text(*(v + 0.1), 'v')
ax.arrow(0, 0, *z, head_width=0.05, head_length=0.1)
plt.text(*(z + 0.1), 'z')
plt.ylim(-2, 2)
plt.xlim(-2, 2)
def Plotvec2(a,b):
ax = plt.axes()
ax.arrow(0, 0, *a, head_width=0.05, color ='r', head_length=0.1)
plt.text(*(a + 0.1), 'a')
ax.arrow(0, 0, *b, head_width=0.05, color ='b', head_length=0.1)
plt.text(*(b + 0.1), 'b')
plt.ylim(-2, 2)
plt.xlim(-2, 2)
###Output
_____no_output_____
###Markdown
Create a Python List as follows:
###Code
# Create a python list
a = ["0", 1, "two", "3", 4]
###Output
_____no_output_____
###Markdown
We can access the data via an index: We can access each element using a square bracket as follows:
###Code
# Print each element
print("a[0]:", a[0])
print("a[1]:", a[1])
print("a[2]:", a[2])
print("a[3]:", a[3])
print("a[4]:", a[4])
###Output
a[0]: 0
a[1]: 1
a[2]: two
a[3]: 3
a[4]: 4
###Markdown
What is Numpy? A numpy array is similar to a list. It's usually fixed in size and each element is of the same type. We can cast a list to a numpy array by first importing numpy:
###Code
# import numpy library
import numpy as np
###Output
_____no_output_____
###Markdown
We then cast the list as follows:
###Code
# Create a numpy array
a = np.array([0, 1, 2, 3, 4])
a
###Output
_____no_output_____
###Markdown
Each element is of the same type, in this case integers: As with lists, we can access each element via a square bracket:
###Code
# Print each element
print("a[0]:", a[0])
print("a[1]:", a[1])
print("a[2]:", a[2])
print("a[3]:", a[3])
print("a[4]:", a[4])
###Output
a[0]: 0
a[1]: 1
a[2]: 2
a[3]: 3
a[4]: 4
###Markdown
Type If we check the type of the array we get numpy.ndarray:
###Code
# Check the type of the array
type(a)
###Output
_____no_output_____
###Markdown
As numpy arrays contain data of the same type, we can use the attribute "dtype" to obtain the Data-type of the array’s elements. In this case a 64-bit integer:
###Code
# Check the type of the values stored in numpy array
a.dtype
###Output
_____no_output_____
###Markdown
We can create a numpy array with real numbers:
###Code
# Create a numpy array
b = np.array([3.1, 11.02, 6.2, 213.2, 5.2])
###Output
_____no_output_____
###Markdown
When we check the type of the array we get numpy.ndarray:
###Code
# Check the type of array
type(b)
###Output
_____no_output_____
###Markdown
If we examine the attribute dtype we see float 64, as the elements are not integers:
###Code
# Check the value type
b.dtype
###Output
_____no_output_____
###Markdown
Assign value We can change the value of the array, consider the array c:
###Code
# Create numpy array
c = np.array([20, 1, 2, 3, 4])
c
###Output
_____no_output_____
###Markdown
We can change the first element of the array to 100 as follows:
###Code
# Assign the first element to 100
c[0] = 100
c
###Output
_____no_output_____
###Markdown
We can change the 5th element of the array to 0 as follows:
###Code
# Assign the 5th element to 0
c[4] = 0
c
###Output
_____no_output_____
###Markdown
Slicing Like lists, we can slice the numpy array, and we can select the elements from 1 to 3 and assign it to a new numpy array d as follows:
###Code
# Slicing the numpy array
d = c[1:4]
d
###Output
_____no_output_____
###Markdown
We can assign the corresponding indexes to new values as follows:
###Code
# Set the fourth element and fifth element to 300 and 400
c[3:5] = 300, 400
c
###Output
_____no_output_____
###Markdown
Assign Value with List Similarly, we can use a list to select a specific index. The list "select" contains several values:
###Code
# Create the index list
select = [0, 2, 3]
###Output
_____no_output_____
###Markdown
We can use the list as an argument in the brackets. The output is the elements corresponding to the particular index:
###Code
# Use List to select elements
d = c[select]
d
###Output
_____no_output_____
###Markdown
We can assign the specified elements to a new value. For example, we can assign the values to 100 000 as follows:
###Code
# Assign the specified elements to new value
c[select] = 100000
c
###Output
_____no_output_____
###Markdown
Other Attributes Let's review some basic array attributes using the array a:
###Code
# Create a numpy array
a = np.array([0, 1, 2, 3, 4])
a
###Output
_____no_output_____
###Markdown
The attribute size is the number of elements in the array:
###Code
# Get the size of numpy array
a.size
###Output
_____no_output_____
###Markdown
The next two attributes will make more sense when we get to higher dimensions but let's review them. The attribute ndim represents the number of array dimensions or the rank of the array, in this case, one:
###Code
# Get the number of dimensions of numpy array
a.ndim
###Output
_____no_output_____
###Markdown
The attribute shape is a tuple of integers indicating the size of the array in each dimension:
###Code
# Get the shape/size of numpy array
a.shape
# Create a numpy array
a = np.array([1, -1, 1, -1])
# Get the mean of numpy array
mean = a.mean()
mean
# Get the standard deviation of numpy array
standard_deviation=a.std()
standard_deviation
# Create a numpy array
b = np.array([-1, 2, 3, 4, 5])
b
# Get the biggest value in the numpy array
max_b = b.max()
max_b
# Get the smallest value in the numpy array
min_b = b.min()
min_b
###Output
_____no_output_____
###Markdown
Numpy Array Operations Array Addition Consider the numpy array u:
###Code
u = np.array([1, 0])
u
###Output
_____no_output_____
###Markdown
Consider the numpy array v:
###Code
v = np.array([0, 1])
v
###Output
_____no_output_____
###Markdown
We can add the two arrays and assign it to z:
###Code
# Numpy Array Addition
z = u + v
z
###Output
_____no_output_____
###Markdown
The operation is equivalent to vector addition:
###Code
# Plot numpy arrays
Plotvec1(u, z, v)
###Output
_____no_output_____
###Markdown
Array Multiplication Consider the vector numpy array y:
###Code
# Create a numpy array
y = np.array([1, 2])
y
###Output
_____no_output_____
###Markdown
We can multiply every element in the array by 2:
###Code
# Numpy Array Multiplication
z = 2 * y
z
###Output
_____no_output_____
###Markdown
This is equivalent to multiplying a vector by a scalar: Product of Two Numpy Arrays Consider the following array u:
###Code
# Create a numpy array
u = np.array([1, 2])
u
###Output
_____no_output_____
###Markdown
Consider the following array v:
###Code
# Create a numpy array
v = np.array([3, 2])
v
###Output
_____no_output_____
###Markdown
The product of the two numpy arrays u and v is given by:
###Code
# Calculate the production of two numpy arrays
z = u * v
z
###Output
_____no_output_____
###Markdown
Dot Product The dot product of the two numpy arrays u and v is given by:
###Code
# Calculate the dot product
np.dot(u, v)
###Output
_____no_output_____
###Markdown
Adding Constant to a Numpy Array Consider the following array:
###Code
# Create a constant to numpy array
u = np.array([1, 2, 3, -1])
u
###Output
_____no_output_____
###Markdown
Adding the constant 1 to each element in the array:
###Code
# Add the constant to array
u + 1
###Output
_____no_output_____
###Markdown
The process is summarised in the following animation: Mathematical Functions We can access the value of pi in numpy as follows:
###Code
# The value of pie
np.pi
###Output
_____no_output_____
###Markdown
We can create the following numpy array in Radians:
###Code
# Create the numpy array in radians
x = np.array([0, np.pi/2 , np.pi])
###Output
_____no_output_____
###Markdown
We can apply the function sin to the array x and assign the values to the array y; this applies the sine function to each element in the array:
###Code
# Calculate the sin of each elements
y = np.sin(x)
y
###Output
_____no_output_____
###Markdown
Linspace A useful function for plotting mathematical functions is "linspace". Linspace returns evenly spaced numbers over a specified interval. We specify the starting point of the sequence and the ending point of the sequence. The parameter "num" indicates the number of samples to generate, in this case 5:
###Code
# Makeup a numpy array within [-2, 2] and 5 elements
np.linspace(-2, 2, num=5)
###Output
_____no_output_____
###Markdown
If we change the parameter num to 9, we get 9 evenly spaced numbers over the interval from -2 to 2:
###Code
# Makeup a numpy array within [-2, 2] and 9 elements
np.linspace(-2, 2, num=9)
###Output
_____no_output_____
###Markdown
We can use the function linspace to generate 100 evenly spaced samples from the interval 0 to 2π:
###Code
# Makeup a numpy array within [0, 2π] and 100 elements
x = np.linspace(0, 2*np.pi, num=100)
###Output
_____no_output_____
###Markdown
We can apply the sine function to each element in the array x and assign it to the array y:
###Code
# Calculate the sine of x list
y = np.sin(x)
# Plot the result
plt.plot(x, y)
###Output
_____no_output_____
###Markdown
Quiz on 1D Numpy Array Implement the following vector subtraction in numpy: u-v
###Code
# Write your code below and press Shift+Enter to execute
u = np.array([1, 0])
v = np.array([0, 1])
u-v
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:u - v--> Multiply the numpy array z with -2:
###Code
# Write your code below and press Shift+Enter to execute
z = np.array([2, 4])
z*(-2)
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:-2 * z--> Consider the list [1, 2, 3, 4, 5] and [1, 0, 1, 0, 1], and cast both lists to a numpy array then multiply them together:
###Code
# Write your code below and press Shift+Enter to execute
a = np.array([1, 2, 3, 4, 5])
b = np.array([1, 0, 1, 0, 1])
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:a = np.array([1, 2, 3, 4, 5])b = np.array([1, 0, 1, 0, 1])a * b-->
###Code
print('type a:',type(a))
print('type b:', type(b))
c = a*b
c
###Output
_____no_output_____
###Markdown
Convert the lists [-1, 1] and [1, 1] to numpy arrays a and b. Then, plot the arrays as vectors using the function Plotvec2 and find the dot product:
###Code
# Write your code below and press Shift+Enter to execute
a = np.array([-1, 1])
b = np.array([1,1])
Plotvec2(a,b)
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:a = np.array([-1, 1])b = np.array([1, 1])Plotvec2(a, b)print("The dot product is", np.dot(a,b))-->
###Code
np.dot(a,b)
###Output
_____no_output_____
###Markdown
Convert the list [1, 0] and [0, 1] to numpy arrays a and b. Then, plot the arrays as vectors using the function Plotvec2 and find the dot product:
###Code
# Write your code below and press Shift+Enter to execute
a = np.array([1,0])
b = np.array([0,1])
Plotvec2(a,b)
np.dot(a,b)
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- a = np.array([1, 0])b = np.array([0, 1])Plotvec2(a, b)print("The dot product is", np.dot(a, b)) --> Convert the lists [1, 1] and [0, 1] to numpy arrays a and b. Then plot the arrays as vectors using the function Plotvec2 and find the dot product:
###Code
# Write your code below and press Shift+Enter to execute
a = np.array([1,1])
b = np.array([0,1])
Plotvec2(a,b)
np.dot(a,b)
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- a = np.array([1, 1])b = np.array([0, 1])Plotvec2(a, b)print("The dot product is", np.dot(a, b))print("The dot product is", np.dot(a, b)) --> Why are the results of the dot product for [-1, 1] and [1, 1] and the dot product for [1, 0] and [0, 1] zero, but not zero for the dot product for [1, 1] and [0, 1]? Hint: Study the corresponding figures, pay attention to the direction the arrows are pointing to.
###Code
# Write your code below and press Shift+Enter to execute
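# The first two pairs are perpendicular (their arrows meet at a right angle), so their
# dot products are zero; [1, 1] and [0, 1] are not perpendicular, giving a non-zero value.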
###Output
_____no_output_____ |
notebooks/semisupervised/FMNIST/baseline/augmented2/FMNIST-augment-baseline-16.ipynb | ###Markdown
Choose GPU
###Code
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print(gpu_devices)
tf.keras.backend.clear_session()
###Output
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
dataset information
###Code
from datetime import datetime
dataset = "fmnist"
dims = (28, 28, 1)
num_classes = 10
labels_per_class = 16 # full
batch_size = 128
datestring = datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f")
datestring = (
str(dataset)
+ "_"
+ str(labels_per_class)
+ "____"
+ datestring
+ '_baseline_augmented'
)
print(datestring)
###Output
fmnist_16____2020_08_25_22_20_54_702151_baseline_augmented
###Markdown
Load packages
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
from IPython import display
import pandas as pd
import umap
import copy
import os, tempfile
###Output
/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/tqdm/autonotebook/__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
" (e.g. in jupyter console)", TqdmExperimentalWarning)
###Markdown
Load dataset
###Code
from tfumap.load_datasets import load_FMNIST, mask_labels
X_train, X_test, X_valid, Y_train, Y_test, Y_valid = load_FMNIST(flatten=False)
X_train.shape
if labels_per_class == "full":
X_labeled = X_train
Y_masked = Y_labeled = Y_train
else:
X_labeled, Y_labeled, Y_masked = mask_labels(
X_train, Y_train, labels_per_class=labels_per_class
)
###Output
_____no_output_____
###Markdown
Build network
###Code
from tensorflow.keras import datasets, layers, models
from tensorflow_addons.layers import WeightNormalization
def conv_block(filts, name, kernel_size = (3, 3), padding = "same", **kwargs):
return WeightNormalization(
layers.Conv2D(
filts, kernel_size, activation=None, padding=padding, **kwargs
),
name="conv"+name,
)
#CNN13
#See:
#https://github.com/vikasverma1077/ICT/blob/master/networks/lenet.py
#https://github.com/brain-research/realistic-ssl-evaluation
lr_alpha = 0.1
dropout_rate = 0.5
num_classes = 10
input_shape = dims
model = models.Sequential()
model.add(tf.keras.Input(shape=input_shape))
### conv1a
name = '1a'
model.add(conv_block(name = name, filts = 128, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv1b
name = '1b'
model.add(conv_block(name = name, filts = 128, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv1c
name = '1c'
model.add(conv_block(name = name, filts = 128, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
# max pooling
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='valid', name="mp1"))
# dropout
model.add(layers.Dropout(dropout_rate, name="drop1"))
### conv2a
name = '2a'
model.add(conv_block(name = name, filts = 256, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha))
### conv2b
name = '2b'
model.add(conv_block(name = name, filts = 256, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv2c
name = '2c'
model.add(conv_block(name = name, filts = 256, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
# max pooling
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='valid', name="mp2"))
# dropout
model.add(layers.Dropout(dropout_rate, name="drop2"))
### conv3a
name = '3a'
model.add(conv_block(name = name, filts = 512, kernel_size = (3,3), padding="valid"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv3b
name = '3b'
model.add(conv_block(name = name, filts = 256, kernel_size = (1,1), padding="valid"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv3c
name = '3c'
model.add(conv_block(name = name, filts = 128, kernel_size = (1,1), padding="valid"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
# max pooling
model.add(layers.AveragePooling2D(pool_size=(3, 3), strides=2, padding='valid'))
model.add(layers.Flatten())
model.add(layers.Dense(256, activation=None, name='z'))
model.add(WeightNormalization(layers.Dense(256, activation=None)))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelufc1'))
model.add(WeightNormalization(layers.Dense(256, activation=None)))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelufc2'))
model.add(WeightNormalization(layers.Dense(num_classes, activation=None)))
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1a (WeightNormalization) (None, 28, 28, 128) 2689
_________________________________________________________________
bn1a (BatchNormalization) (None, 28, 28, 128) 512
_________________________________________________________________
lrelu1a (LeakyReLU) (None, 28, 28, 128) 0
_________________________________________________________________
conv1b (WeightNormalization) (None, 28, 28, 128) 295297
_________________________________________________________________
bn1b (BatchNormalization) (None, 28, 28, 128) 512
_________________________________________________________________
lrelu1b (LeakyReLU) (None, 28, 28, 128) 0
_________________________________________________________________
conv1c (WeightNormalization) (None, 28, 28, 128) 295297
_________________________________________________________________
bn1c (BatchNormalization) (None, 28, 28, 128) 512
_________________________________________________________________
lrelu1c (LeakyReLU) (None, 28, 28, 128) 0
_________________________________________________________________
mp1 (MaxPooling2D) (None, 14, 14, 128) 0
_________________________________________________________________
drop1 (Dropout) (None, 14, 14, 128) 0
_________________________________________________________________
conv2a (WeightNormalization) (None, 14, 14, 256) 590593
_________________________________________________________________
bn2a (BatchNormalization) (None, 14, 14, 256) 1024
_________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 14, 14, 256) 0
_________________________________________________________________
conv2b (WeightNormalization) (None, 14, 14, 256) 1180417
_________________________________________________________________
bn2b (BatchNormalization) (None, 14, 14, 256) 1024
_________________________________________________________________
lrelu2b (LeakyReLU) (None, 14, 14, 256) 0
_________________________________________________________________
conv2c (WeightNormalization) (None, 14, 14, 256) 1180417
_________________________________________________________________
bn2c (BatchNormalization) (None, 14, 14, 256) 1024
_________________________________________________________________
lrelu2c (LeakyReLU) (None, 14, 14, 256) 0
_________________________________________________________________
mp2 (MaxPooling2D) (None, 7, 7, 256) 0
_________________________________________________________________
drop2 (Dropout) (None, 7, 7, 256) 0
_________________________________________________________________
conv3a (WeightNormalization) (None, 5, 5, 512) 2360833
_________________________________________________________________
bn3a (BatchNormalization) (None, 5, 5, 512) 2048
_________________________________________________________________
lrelu3a (LeakyReLU) (None, 5, 5, 512) 0
_________________________________________________________________
conv3b (WeightNormalization) (None, 5, 5, 256) 262913
_________________________________________________________________
bn3b (BatchNormalization) (None, 5, 5, 256) 1024
_________________________________________________________________
lrelu3b (LeakyReLU) (None, 5, 5, 256) 0
_________________________________________________________________
conv3c (WeightNormalization) (None, 5, 5, 128) 65921
_________________________________________________________________
bn3c (BatchNormalization) (None, 5, 5, 128) 512
_________________________________________________________________
lrelu3c (LeakyReLU) (None, 5, 5, 128) 0
_________________________________________________________________
average_pooling2d (AveragePo (None, 2, 2, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 512) 0
_________________________________________________________________
z (Dense) (None, 256) 131328
_________________________________________________________________
weight_normalization (Weight (None, 256) 131841
_________________________________________________________________
lrelufc1 (LeakyReLU) (None, 256) 0
_________________________________________________________________
weight_normalization_1 (Weig (None, 256) 131841
_________________________________________________________________
lrelufc2 (LeakyReLU) (None, 256) 0
_________________________________________________________________
weight_normalization_2 (Weig (None, 10) 5151
=================================================================
Total params: 6,642,730
Trainable params: 3,388,308
Non-trainable params: 3,254,422
_________________________________________________________________
###Markdown
Augmentation
###Code
import tensorflow_addons as tfa
def norm(x):
return( x - tf.reduce_min(x))#/(tf.reduce_max(x) - tf.reduce_min(x))
def augment(image, label):
if tf.random.uniform((1,), minval=0, maxval = 2, dtype=tf.int32)[0] == 0:
# stretch
randint_hor = tf.random.uniform((2,), minval=0, maxval = 8, dtype=tf.int32)[0]
randint_vert = tf.random.uniform((2,), minval=0, maxval = 8, dtype=tf.int32)[0]
image = tf.image.resize(image, (dims[0]+randint_vert*2, dims[1]+randint_hor*2))
#image = tf.image.crop_to_bounding_box(image, randint_vert,randint_hor,28,28)
image = tf.image.resize_with_pad(
image, dims[0], dims[1]
)
image = tf.image.resize_with_crop_or_pad(
image, dims[0] + 3, dims[1] + 3
) # crop 6 pixels
image = tf.image.random_crop(image, size=dims)
if tf.random.uniform((1,), minval=0, maxval = 2, dtype=tf.int32)[0] == 0:
image = tfa.image.rotate(
image,
tf.squeeze(tf.random.uniform(shape=(1, 1), minval=-0.1, maxval=0.1)),
interpolation="BILINEAR",
)
image = tf.image.random_flip_left_right(image)
image = tf.clip_by_value(image, 0, 1)
if tf.random.uniform((1,), minval=0, maxval = 2, dtype=tf.int32)[0] == 0:
image = tf.image.random_brightness(image, max_delta=0.5) # Random brightness
image = tf.image.random_contrast(image, lower=0.5, upper=1.75)
image = norm(image)
image = tf.clip_by_value(image, 0, 1)
image = tfa.image.random_cutout(
tf.expand_dims(image, 0), (8, 8), constant_values=0.5
)[0]
image = tf.clip_by_value(image, 0, 1)
return image, label
nex = 10
for i in range(5):
fig, axs = plt.subplots(ncols=nex +1, figsize=((nex+1)*2, 2))
axs[0].imshow(np.squeeze(X_train[i]), cmap = plt.cm.Greys)
axs[0].axis('off')
for ax in axs.flatten()[1:]:
aug_img = np.squeeze(augment(X_train[i], Y_train[i])[0])
ax.matshow(aug_img, cmap = plt.cm.Greys, vmin=0, vmax=1)
ax.axis('off')
###Output
_____no_output_____
###Markdown
train
###Code
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_accuracy', min_delta=0, patience=100, verbose=1, mode='auto',
baseline=None, restore_best_weights=True
)
image = X_train[i]
print(image.shape)
# stretch
# stretch
randint_hor = tf.random.uniform((2,), minval=0, maxval = 8, dtype=tf.int32)[0]
randint_vert = tf.random.uniform((2,), minval=0, maxval = 8, dtype=tf.int32)[0]
image = tf.image.resize(image, (dims[0]+randint_vert*2, dims[1]+randint_hor*2))
image = tf.image.crop_to_bounding_box(image, randint_vert,randint_hor,28,28)
print(image.shape)
plt.matshow(np.squeeze(image), cmap = plt.cm.Greys, vmin=0, vmax=1)
import tensorflow_addons as tfa
opt = tf.keras.optimizers.Adam(1e-4)
opt = tfa.optimizers.MovingAverage(opt)
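# MovingAverage wraps the Adam optimizer above and keeps a moving average of the model
# weights (tensorflow-addons)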
loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True)
model.compile(opt, loss = loss, metrics=['accuracy'])
Y_valid_one_hot = tf.keras.backend.one_hot(
Y_valid, num_classes
)
Y_labeled_one_hot = tf.keras.backend.one_hot(
Y_labeled, num_classes
)
from livelossplot import PlotLossesKerasTF
# plot losses callback
plotlosses = PlotLossesKerasTF()
train_ds = (
tf.data.Dataset.from_tensor_slices((X_labeled, Y_labeled_one_hot))
.repeat()
.shuffle(len(X_labeled))
.map(augment, num_parallel_calls=tf.data.experimental.AUTOTUNE)
.batch(batch_size)
.prefetch(tf.data.experimental.AUTOTUNE)
)
steps_per_epoch = int(len(X_train)/ batch_size)
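# note: steps are counted against the full X_train length even though train_ds only draws
# from the small labeled subset, so each "epoch" cycles over the labeled data many times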
history = model.fit(
train_ds,
epochs=500,
validation_data=(X_valid, Y_valid_one_hot),
callbacks = [early_stopping, plotlosses],
steps_per_epoch = steps_per_epoch,
)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
submodel = tf.keras.models.Model(
[model.inputs[0]], [model.get_layer('z').output]
)
z = submodel.predict(X_train)
np.shape(z)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z.reshape(len(z), np.product(np.shape(z)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
z_valid = submodel.predict(X_valid)
np.shape(z_valid)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z_valid.reshape(len(z_valid), np.product(np.shape(z_valid)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_valid.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
fig, ax = plt.subplots(figsize=(10,10))
ax.scatter(embedding[:, 0], embedding[:, 1], c=Y_valid.flatten(), s= 1, alpha = 1, cmap = plt.cm.tab10)
predictions = model.predict(X_valid)
fig, ax = plt.subplots(figsize=(10,10))
ax.scatter(embedding[:, 0], embedding[:, 1], c=np.argmax(predictions, axis=1), s= 1, alpha = 1, cmap = plt.cm.tab10)
Y_test_one_hot = tf.keras.backend.one_hot(
Y_test, num_classes
)
result = model.evaluate(X_test, Y_test_one_hot)
###Output
_____no_output_____
###Markdown
save results
###Code
# save score, valid embedding, weights, results
from tfumap.paths import MODEL_DIR, ensure_dir
save_folder = MODEL_DIR / 'semisupervised-keras' / dataset / str(labels_per_class) / datestring
ensure_dir(save_folder)
###Output
_____no_output_____
###Markdown
save weights
###Code
encoder = tf.keras.models.Model(
[model.inputs[0]], [model.get_layer('z').output]
)
encoder.save_weights((save_folder / "encoder").as_posix())
classifier = tf.keras.models.Model(
[tf.keras.Input(tensor=model.get_layer('weight_normalization').input)], [model.outputs[0]]
)
print([i.name for i in classifier.layers])
classifier.save_weights((save_folder / "classifier").as_posix())
###Output
_____no_output_____
###Markdown
save score
###Code
Y_test_one_hot = tf.keras.backend.one_hot(
Y_test, num_classes
)
result = model.evaluate(X_test, Y_test_one_hot)
np.save(save_folder / 'test_loss.npy', result)
###Output
_____no_output_____
###Markdown
save embedding
###Code
z = encoder.predict(X_train)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z.reshape(len(z), np.product(np.shape(z)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
np.save(save_folder / 'train_embedding.npy', embedding)
###Output
_____no_output_____
###Markdown
save results
###Code
import pickle
with open(save_folder / 'history.pickle', 'wb') as file_pi:
pickle.dump(history.history, file_pi)
print('test')
###Output
_____no_output_____ |
py/notebooks/draft/.ipynb_checkpoints/Corrugated geometries simplified-checkpoint.ipynb | ###Markdown
Corrugated Shells Init symbols for *sympy*
###Code
from sympy import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
x1, x2, x3 = symbols("x_1 x_2 x_3")
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3")
R, L, ga, gv = symbols("R L g_a g_v")
init_printing()
###Output
_____no_output_____
###Markdown
Corrugated cylindrical coordinates
###Code
a1 = pi / 2 + (L / 2 - alpha1)/R
x = (R + alpha3 + ga * cos(gv * a1)) * cos(a1)
y = alpha2
z = (R + alpha3 + ga * cos(gv * a1)) * sin(a1)
r = x*N.i + y*N.j + z*N.k
###Output
_____no_output_____
###Markdown
Base Vectors $\vec{R}_1, \vec{R}_2, \vec{R}_3$
###Code
R1=r.diff(alpha1)
R2=r.diff(alpha2)
R3=r.diff(alpha3)
trigsimp(R1)
R2
R3
###Output
_____no_output_____
###Markdown
Base Vectors $\vec{R}^1, \vec{R}^2, \vec{R}^3$
###Code
eps=trigsimp(R1.dot(R2.cross(R3)))
R_1=simplify(trigsimp(R2.cross(R3)/eps))
R_2=simplify(trigsimp(R3.cross(R1)/eps))
R_3=simplify(trigsimp(R1.cross(R2)/eps))
R_1
R_2
R_3
###Output
_____no_output_____
###Markdown
Jacobi matrix:$ A = \left( \begin{array}{ccc} \frac{\partial x_1}{\partial \alpha_1} & \frac{\partial x_1}{\partial \alpha_2} & \frac{\partial x_1}{\partial \alpha_3} \\\frac{\partial x_2}{\partial \alpha_1} & \frac{\partial x_2}{\partial \alpha_2} & \frac{\partial x_2}{\partial \alpha_3} \\\frac{\partial x_3}{\partial \alpha_1} & \frac{\partial x_3}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \\\end{array} \right)$$ \left[\begin{array}{ccc} \vec{R}_1 & \vec{R}_2 & \vec{R}_3\end{array} \right] = \left[\begin{array}{ccc} \vec{e}_1 & \vec{e}_2 & \vec{e}_3\end{array} \right] \cdot \left( \begin{array}{ccc} \frac{\partial x_1}{\partial \alpha_1} & \frac{\partial x_1}{\partial \alpha_2} & \frac{\partial x_1}{\partial \alpha_3} \\\frac{\partial x_2}{\partial \alpha_1} & \frac{\partial x_2}{\partial \alpha_2} & \frac{\partial x_2}{\partial \alpha_3} \\\frac{\partial x_3}{\partial \alpha_1} & \frac{\partial x_3}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \\\end{array} \right) = \left[\begin{array}{ccc} \vec{e}_1 & \vec{e}_2 & \vec{e}_3\end{array} \right] \cdot A$$ \left[\begin{array}{ccc} \vec{e}_1 & \vec{e}_2 & \vec{e}_3\end{array} \right] =\left[\begin{array}{ccc} \vec{R}_1 & \vec{R}_2 & \vec{R}_3\end{array} \right] \cdot A^{-1}$
###Code
dx1da1=R1.dot(N.i)
dx1da2=R2.dot(N.i)
dx1da3=R3.dot(N.i)
dx2da1=R1.dot(N.j)
dx2da2=R2.dot(N.j)
dx2da3=R3.dot(N.j)
dx3da1=R1.dot(N.k)
dx3da2=R2.dot(N.k)
dx3da3=R3.dot(N.k)
A=Matrix([[dx1da1, dx1da2, dx1da3], [dx2da1, dx2da2, dx2da3], [dx3da1, dx3da2, dx3da3]])
simplify(A)
A_inv = A**-1
trigsimp(A_inv[0,0])
trigsimp(A.det())
###Output
_____no_output_____
###Markdown
Metric tensor ${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
###Code
g11=R1.dot(R1)
g12=R1.dot(R2)
g13=R1.dot(R3)
g21=R2.dot(R1)
g22=R2.dot(R2)
g23=R2.dot(R3)
g31=R3.dot(R1)
g32=R3.dot(R2)
g33=R3.dot(R3)
G=Matrix([[g11, g12, g13],[g21, g22, g23], [g31, g32, g33]])
G=trigsimp(G)
G
###Output
_____no_output_____
###Markdown
${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
###Code
g_11=R_1.dot(R_1)
g_12=R_1.dot(R_2)
g_13=R_1.dot(R_3)
g_21=R_2.dot(R_1)
g_22=R_2.dot(R_2)
g_23=R_2.dot(R_3)
g_31=R_3.dot(R_1)
g_32=R_3.dot(R_2)
g_33=R_3.dot(R_3)
G_con=Matrix([[g_11, g_12, g_13],[g_21, g_22, g_23], [g_31, g_32, g_33]])
G_con=trigsimp(G_con)
G_con
G_inv = G**-1
G_inv
###Output
_____no_output_____
###Markdown
Derivatives of vectors Derivative of base vectors
###Code
dR1dalpha1 = trigsimp(R1.diff(alpha1))
dR1dalpha1
###Output
_____no_output_____
###Markdown
$ \frac { d\vec{R_1} } { d\alpha_1} = -\frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} $
###Code
dR1dalpha2 = trigsimp(R1.diff(alpha2))
dR1dalpha2
dR1dalpha3 = trigsimp(R1.diff(alpha3))
dR1dalpha3
###Output
_____no_output_____
###Markdown
$ \frac { d\vec{R_1} } { d\alpha_3} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
###Code
dR2dalpha1 = trigsimp(R2.diff(alpha1))
dR2dalpha1
dR2dalpha2 = trigsimp(R2.diff(alpha2))
dR2dalpha2
dR2dalpha3 = trigsimp(R2.diff(alpha3))
dR2dalpha3
dR3dalpha1 = trigsimp(R3.diff(alpha1))
dR3dalpha1
###Output
_____no_output_____
###Markdown
$ \frac { d\vec{R_3} } { d\alpha_1} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
###Code
dR3dalpha2 = trigsimp(R3.diff(alpha2))
dR3dalpha2
dR3dalpha3 = trigsimp(R3.diff(alpha3))
dR3dalpha3
###Output
_____no_output_____
###Markdown
$ \frac { d\vec{R_3} } { d\alpha_3} = \vec{0} $ Derivative of vectors$ \vec{u} = u^1 \vec{R_1} + u^2\vec{R_2} + u^3\vec{R_3} $ $ \frac { d\vec{u} } { d\alpha_1} = \frac { d(u^1\vec{R_1}) } { d\alpha_1} + \frac { d(u^2\vec{R_2}) } { d\alpha_1}+ \frac { d(u^3\vec{R_3}) } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_1} + \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} + \frac { du^2 } { d\alpha_1} \vec{R_2}+ \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1}$Then$ \frac { d\vec{u} } { d\alpha_1} = \left( \frac { du^1 } { d\alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + \left( \frac { du^3 } { d\alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \right) \vec{R_3}$ $ \frac { d\vec{u} } { d\alpha_2} = \frac { d(u^1\vec{R_1}) } { d\alpha_2} + \frac { d(u^2\vec{R_2}) } { d\alpha_2}+ \frac { d(u^3\vec{R_3}) } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $Then$ \frac { d\vec{u} } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $ $ \frac { d\vec{u} } { d\alpha_3} = \frac { d(u^1\vec{R_1}) } { d\alpha_3} + \frac { d(u^2\vec{R_2}) } { d\alpha_3}+ \frac { d(u^3\vec{R_3}) } { d\alpha_3} = \frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_3} + \frac { du^2 } { d\alpha_3} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_3} + \frac { du^3 } { d\alpha_3} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_3} = \frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3} $ Then$ \frac { d\vec{u} } { d\alpha_3} = \left( \frac { du^1 } { d\alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3}$ Gradient of vector $\nabla_1 u^1 = \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$$\nabla_1 u^2 = \frac { \partial u^2 } { \partial \alpha_1} $$\nabla_1 u^3 = \frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) $$\nabla_2 u^1 = \frac { \partial u^1 } { \partial \alpha_2}$$\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$$\nabla_2 u^3 = \frac { \partial u^3 } { \partial \alpha_2}$$\nabla_3 u^1 = \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$$\nabla_3 u^2 = \frac { \partial u^2 } { \partial \alpha_3} $$\nabla_3 u^3 = \frac { \partial u^3 } { \partial \alpha_3}$$ \nabla \vec{u} = \left( \begin{array}{ccc} \nabla_1 u^1 & \nabla_1 u^2 & \nabla_1 u^3 \\\nabla_2 u^1 & \nabla_2 u^2 & \nabla_2 u^3 \\\nabla_3 u^1 & \nabla_3 u^2 & \nabla_3 u^3 \\\end{array} \right)$
###Code
u1=Function('u^1')
u2=Function('u^2')
u3=Function('u^3')
q=Function('q') # q(alpha3) = 1+alpha3/R
K = Symbol('K') # K = 1/R
u1_nabla1 = u1(alpha1, alpha2, alpha3).diff(alpha1) + u3(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla1 = u2(alpha1, alpha2, alpha3).diff(alpha1)
u3_nabla1 = u3(alpha1, alpha2, alpha3).diff(alpha1) - u1(alpha1, alpha2, alpha3) * K * q(alpha3)
u1_nabla2 = u1(alpha1, alpha2, alpha3).diff(alpha2)
u2_nabla2 = u2(alpha1, alpha2, alpha3).diff(alpha2)
u3_nabla2 = u3(alpha1, alpha2, alpha3).diff(alpha2)
u1_nabla3 = u1(alpha1, alpha2, alpha3).diff(alpha3) + u1(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla3 = u2(alpha1, alpha2, alpha3).diff(alpha3)
u3_nabla3 = u3(alpha1, alpha2, alpha3).diff(alpha3)
# $\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$
grad_u = Matrix([[u1_nabla1, u2_nabla1, u3_nabla1],[u1_nabla2, u2_nabla2, u3_nabla2], [u1_nabla3, u2_nabla3, u3_nabla3]])
grad_u
G_s = Matrix([[q(alpha3)**2, 0, 0],[0, 1, 0], [0, 0, 1]])
grad_u_down=grad_u*G_s
expand(simplify(grad_u_down))
###Output
_____no_output_____
###Markdown
$ \left( \begin{array}{c} \nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\\nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\\nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3 \\\end{array} \right) = \left( \begin{array}{c}\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \\\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_2} \\\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \\\frac { \partial u^2 } { \partial \alpha_1} \\\frac { \partial u^2 } { \partial \alpha_2} \\\frac { \partial u^2 } { \partial \alpha_3} \\\frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \\\frac { \partial u^3 } { \partial \alpha_2} \\\frac { \partial u^3 } { \partial \alpha_3} \\\end{array} \right)$ $ \left( \begin{array}{c} \nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\\nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\\nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3 \\\end{array} \right)= B \cdot\left( \begin{array}{c} u^1 \\\frac { \partial u^1 } { \partial \alpha_1} \\\frac { \partial u^1 } { \partial \alpha_2} \\\frac { \partial u^1 } { \partial \alpha_3} \\u^2 \\\frac { \partial u^2 } { \partial \alpha_1} \\\frac { \partial u^2 } { \partial \alpha_2} \\\frac { \partial u^2 } { \partial \alpha_3} \\u^3 \\\frac { \partial u^3 } { \partial \alpha_1} \\\frac { \partial u^3 } { \partial \alpha_2} \\\frac { \partial u^3 } { \partial \alpha_3} \\\end{array} \right) $
###Code
B = zeros(9, 12)
B[0,1] = (1+alpha3/R)**2
B[0,8] = (1+alpha3/R)/R
B[1,2] = (1+alpha3/R)**2
B[2,0] = (1+alpha3/R)/R
B[2,3] = (1+alpha3/R)**2
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[6,0] = -(1+alpha3/R)/R
B[7,10] = S(1)
B[8,11] = S(1)
B
###Output
_____no_output_____
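###Markdown
Sanity check (added here, not part of the original derivation): applying $B$ to the vector of displacement components and their partial derivatives should reproduce the lowered gradient components computed symbolically above, once $q(\alpha_3)$ and $K$ are written in terms of $R$ and $\alpha_3$.
###Code
# Stack the columns of grad_u_down in the same ordering used by B*(u^1, du^1/dalpha_1, ...)
args = (alpha1, alpha2, alpha3)
U_vec = Matrix([
    u1(*args), u1(*args).diff(alpha1), u1(*args).diff(alpha2), u1(*args).diff(alpha3),
    u2(*args), u2(*args).diff(alpha1), u2(*args).diff(alpha2), u2(*args).diff(alpha3),
    u3(*args), u3(*args).diff(alpha1), u3(*args).diff(alpha2), u3(*args).diff(alpha3),
])
expected = grad_u_down[:, 0].col_join(grad_u_down[:, 1]).col_join(grad_u_down[:, 2])
expected = expected.subs(q(alpha3), 1 + alpha3/R).subs(K, 1/R)
simplify(B*U_vec - expected)  # expected result: the 9x1 zero matrix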
###Markdown
Deformations tensor
###Code
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
Q=E*B
Q=simplify(Q)
Q
###Output
_____no_output_____
###Markdown
Tymoshenko theory $u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $ $u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $ $u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $ $ \left( \begin{array}{c} u^1 \\\frac { \partial u^1 } { \partial \alpha_1} \\\frac { \partial u^1 } { \partial \alpha_2} \\\frac { \partial u^1 } { \partial \alpha_3} \\u^2 \\\frac { \partial u^2 } { \partial \alpha_1} \\\frac { \partial u^2 } { \partial \alpha_2} \\\frac { \partial u^2 } { \partial \alpha_3} \\u^3 \\\frac { \partial u^3 } { \partial \alpha_1} \\\frac { \partial u^3 } { \partial \alpha_2} \\\frac { \partial u^3 } { \partial \alpha_3} \\\end{array} \right) = T \cdot \left( \begin{array}{c} u \\\frac { \partial u } { \partial \alpha_1} \\\gamma \\\frac { \partial \gamma } { \partial \alpha_1} \\w \\\frac { \partial w } { \partial \alpha_1} \\\end{array} \right) $
###Code
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
Q=E*B*T
Q=simplify(Q)
Q
###Output
_____no_output_____
###Markdown
Elasticity tensor (stiffness tensor) General form
###Code
from sympy import MutableDenseNDimArray
C_x = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x[i,j,k,l] = el
C_x
###Output
_____no_output_____
###Markdown
Include symmetry
###Code
C_x_symmetry = MutableDenseNDimArray.zeros(3, 3, 3, 3)
def getCIndecies(index):
if (index == 0):
return 0, 0
elif (index == 1):
return 1, 1
elif (index == 2):
return 2, 2
elif (index == 3):
return 0, 1
elif (index == 4):
return 0, 2
elif (index == 5):
return 1, 2
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x_symmetry[i,j,k,l] = el
C_x_symmetry[i,j,l,k] = el
C_x_symmetry[j,i,k,l] = el
C_x_symmetry[j,i,l,k] = el
C_x_symmetry[k,l,i,j] = el
C_x_symmetry[k,l,j,i] = el
C_x_symmetry[l,k,i,j] = el
C_x_symmetry[l,k,j,i] = el
C_x_symmetry
###Output
_____no_output_____
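###Markdown
A quick count (added check): with the minor and major symmetries imposed above, the 81 components of the fourth-order tensor reduce to 21 independent elastic constants.
###Code
components = {C_x_symmetry[i, j, k, l] for i in range(3) for j in range(3)
              for k in range(3) for l in range(3)}
len(components)  # 21 independent constants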
###Markdown
Isotropic material
###Code
C_isotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_isotropic_matrix = zeros(6)
mu = Symbol('mu')
la = Symbol('lambda')
for s in range(6):
for t in range(s, 6):
if (s < 3 and t < 3):
if(t != s):
C_isotropic_matrix[s,t] = la
C_isotropic_matrix[t,s] = la
else:
C_isotropic_matrix[s,t] = 2*mu+la
C_isotropic_matrix[t,s] = 2*mu+la
elif (s == t):
C_isotropic_matrix[s,t] = mu
C_isotropic_matrix[t,s] = mu
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_isotropic_matrix[s, t]
C_isotropic[i,j,k,l] = el
C_isotropic[i,j,l,k] = el
C_isotropic[j,i,k,l] = el
C_isotropic[j,i,l,k] = el
C_isotropic[k,l,i,j] = el
C_isotropic[k,l,j,i] = el
C_isotropic[l,k,i,j] = el
C_isotropic[l,k,j,i] = el
C_isotropic
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_isotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_isotropic, A_inv, i, j, k, l)
C_isotropic_alpha[i,j,k,l] = c
C_isotropic_alpha[0,0,0,0]
C_isotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha[s,t] = C_isotropic_alpha[i,j,k,l]
C_isotropic_matrix_alpha
###Output
_____no_output_____
###Markdown
Orthotropic material
###Code
C_orthotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_orthotropic_matrix = zeros(6)
for s in range(6):
for t in range(s, 6):
elem_index = 'C^{{{}{}}}'.format(s+1, t+1)
el = Symbol(elem_index)
if ((s < 3 and t < 3) or t == s):
C_orthotropic_matrix[s,t] = el
C_orthotropic_matrix[t,s] = el
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_orthotropic_matrix[s, t]
C_orthotropic[i,j,k,l] = el
C_orthotropic[i,j,l,k] = el
C_orthotropic[j,i,k,l] = el
C_orthotropic[j,i,l,k] = el
C_orthotropic[k,l,i,j] = el
C_orthotropic[k,l,j,i] = el
C_orthotropic[l,k,i,j] = el
C_orthotropic[l,k,j,i] = el
C_orthotropic
###Output
_____no_output_____
###Markdown
Orthotropic material in shell coordinates
###Code
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_orthotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_orthotropic, A_inv, i, j, k, l)
C_orthotropic_alpha[i,j,k,l] = c
C_orthotropic_alpha[0,0,0,0]
C_orthotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha[s,t] = C_orthotropic_alpha[i,j,k,l]
C_orthotropic_matrix_alpha
###Output
_____no_output_____
###Markdown
Physical coordinates $u^1=\frac{u_{[1]}}{1+\frac{\alpha_3}{R}}$ $\frac{\partial u^1} {\partial \alpha_3}=\frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} + u_{[1]} \frac{\partial} {\partial \alpha_3} \left( \frac{1}{1+\frac{\alpha_3}{R}} \right) =\frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} - u_{[1]} \frac{1}{R \left( 1+\frac{\alpha_3}{R} \right)^2} $
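###Markdown
A quick symbolic check (added here; `u_p` is an illustrative stand-in for $u_{[1]}$, not a symbol used elsewhere in this notebook) that the chain-rule identity above holds:
###Code
u_p = Function('u_p')
lhs = (u_p(alpha3) / (1 + alpha3/R)).diff(alpha3)
rhs = u_p(alpha3).diff(alpha3) / (1 + alpha3/R) - u_p(alpha3) / (R*(1 + alpha3/R)**2)
simplify(lhs - rhs)  # expected result: 0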
###Code
P=eye(12,12)
P[0,0]=1/(1+alpha3/R)
P[1,1]=1/(1+alpha3/R)
P[2,2]=1/(1+alpha3/R)
P[3,0]=-1/(R*(1+alpha3/R)**2)
P[3,3]=1/(1+alpha3/R)
P
Def=simplify(E*B*P)
Def
rows, cols = Def.shape
D_p=zeros(rows, cols)
q = 1+alpha3/R
for i in range(rows):
ratio = 1
if (i==0):
ratio = q*q
elif (i==3 or i == 4):
ratio = q
for j in range(cols):
D_p[i,j] = Def[i,j] / ratio
D_p = simplify(D_p)
D_p
###Output
_____no_output_____
###Markdown
Stiffness tensor
###Code
C_isotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_isotropic_alpha_p[i,j,k,l] = simplify(C_isotropic_alpha[i,j,k,l]*fact)
C_isotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha_p[s,t] = C_isotropic_alpha_p[i,j,k,l]
C_isotropic_matrix_alpha_p
C_orthotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_orthotropic_alpha_p[i,j,k,l] = simplify(C_orthotropic_alpha[i,j,k,l]*fact)
C_orthotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha_p[s,t] = C_orthotropic_alpha_p[i,j,k,l]
C_orthotropic_matrix_alpha_p
###Output
_____no_output_____
###Markdown
Tymoshenko
###Code
D_p_T = D_p*T
K = Symbol('K')
D_p_T = D_p_T.subs(R, 1/K)
simplify(D_p_T)
###Output
_____no_output_____
###Markdown
Square of segment $A=\frac {\theta}{2} \left( R + h_2 \right)^2-\frac {\theta}{2} \left( R + h_1 \right)^2$
###Code
theta, h1, h2=symbols('theta h_1 h_2')
square_geom=theta/2*(R+h2)**2-theta/2*(R+h1)**2
expand(simplify(square_geom))
###Output
_____no_output_____
###Markdown
${\displaystyle A=\int_{0}^{L}\int_{h_1}^{h_2} \left( 1+\frac{\alpha_3}{R} \right) d \alpha_1 d \alpha_3}, L=R \theta$
###Code
square_int=integrate(integrate(1+alpha3/R, (alpha3, h1, h2)), (alpha1, 0, theta*R))
expand(simplify(square_int))
###Output
_____no_output_____
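###Markdown
The geometric expression and the integral form of the segment area should agree; a one-line check (added here):
###Code
simplify(square_geom - square_int)  # expected result: 0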
###Markdown
Virtual work Isotropic material physical coordinates
###Code
simplify(D_p.T*C_isotropic_matrix_alpha_p*D_p)
###Output
_____no_output_____
###Markdown
Isotropic material physical coordinates - Tymoshenko
###Code
W = simplify(D_p_T.T*C_isotropic_matrix_alpha_p*D_p_T*(1+alpha3*K)**2)
W
h=Symbol('h')
E=Symbol('E')
v=Symbol('nu')
W_a3 = integrate(W, (alpha3, -h/2, h/2))
W_a3 = simplify(W_a3)
W_a3.subs(la, E*v/((1+v)*(1-2*v))).subs(mu, E/((1+v)*2))
A_M = zeros(3)
A_M[0,0] = E*h/(1-v**2)
A_M[1,1] = 5*E*h/(12*(1+v))
A_M[2,2] = E*h**3/(12*(1-v**2))
Q_M = zeros(3,6)
Q_M[0,1] = 1
Q_M[0,4] = K
Q_M[1,0] = -K
Q_M[1,2] = 1
Q_M[1,5] = 1
Q_M[2,3] = 1
W_M=Q_M.T*A_M*Q_M
W_M
###Output
_____no_output_____ |
docs/lmpm_demo.ipynb | ###Markdown
`lmpm` module demoThis notebook exemplifies the usage of the `lmpm` module to predict protein localization. 1. Installation The module requires `python 3` and depends on `numpy`, `pandas`, `sklearn`, `matplotlib` and `seaborn`. The specific versions of these libraries used for its development are:``` - python=3.8.8 - numpy=1.20.1 - pandas=1.2.3 - matplotlib=3.3.4 - seaborn=0.11.1 - scikit-learn=0.23.2```An easy way to recreate the environment is to build it with conda from the specification file in the [lmpm github repository](https://github.com/Lean-Mean-Protein-Machine-Learning/LMPM) by running:``` install minicondawget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.shbash Miniconda3-latest-Linux-x86_64.sh download environment specification filewget https://raw.githubusercontent.com/Lean-Mean-Protein-Machine-Learning/LMPM/main/environment.yml create "lmpmenv" environment from this fileconda env create -f environment.yml activate environmentconda activate lmpmenv```With an environment that fulfills the requirements, install the module by running:```python3 -m pip install git+https://github.com/Lean-Mean-Protein-Machine-Learning/LMPM```This will install the `lmpm` module in your environment and all functions will be available through imports.*Note:* If you want to reproduce the code of this notebook with that environment, you will also need to run `conda install notebook` to install and use Jupyter Notebooks. We did not include it in the environment as it is not essential for the module. 2. Import the module The module has many functions that can be useful for multiple purposes. One option is to import the whole module and access its functions through their dotted paths.
###Code
import lmpm
###Output
_____no_output_____
###Markdown
An alternative is to import the main functions of the module individually and use them directly. These functions are discussed in detail below.
###Code
from lmpm import predict_loc_simple
from lmpm import predict_location
from lmpm import optimize_sequence
from lmpm import plot_optimization
from lmpm import top_mutations
###Output
_____no_output_____
###Markdown
3. Predict protein localization Throughout this example we will be using the sequence of the human heat shock protein beta-1 (HSPB-1, UniProt KB id: A0A6Q8PHA6). This protein is known to be localized in the cell nucleus.
###Code
seq = 'MTERRVPFSLLRGPSWDPFRDWYPHSRLFDQAFGLPRLPEEWSQWLGGSSWPGYVRPLPPAAIESPAVAAPAYSRALSRQLSSGVSEIRHTADRWRVSLDVNHFAPDELTVKTKDGVVEITARGAAGRAWLHLPVLHAEIHAAPRCGPHPSFLLPVP'
###Output
_____no_output_____
###Markdown
Two functions can be used to predict the localization of the protein: `predict_loc_simple` and `predict_location`.The first one, `predict_loc_simple`, requires specifying:- Input protein sequence: string with the protein sequence of interest. It can be in either one- or three-letter format (e.g. MTE or MetThrGlu)- Organism of interest: `human` for *Homo sapiens*, `yeast` for *Saccharomyces cerevisiae*, or `ecoli` for *Escherichia coli*- Localization of interest: `cytoplasm`, `membrane` or `secreted`The function returns the probability that the input sequence is localized in the target location in that specific organism. Here, the model correctly predicts that this protein is localized in the cytoplasm, as the probability for that localization is larger than 0.5.
###Code
predict_loc_simple(seq, organism = 'human', target_loc = 'cytoplasm')
###Output
_____no_output_____
###Markdown
The fourth argument, `include_dg`, defaults to `False` but can be set to `True` to compute $\Delta G$ from the sequence and include it as an extra feature for the model. Including $\Delta G$ increases the accuracy of the model slightly and does not increase the computation time significantly. In this case, the predicted probability does not change.
###Code
predict_loc_simple(seq, organism = 'human', target_loc = 'cytoplasm', include_dg=True)
###Output
_____no_output_____
###Markdown
The second one, `predict_location`, is more versatile. In addition to the organism and localization of interest, it accepts the option `all`, which can be used to obtain multiple results at the same time. The results are returned as an object with multiple attributes.
###Code
preds = predict_location(seq, organism = 'human', target_loc = 'all')
###Output
_____no_output_____
###Markdown
- The `.result` attribute has the same effect as calling the returned object itself and returns a `pd.DataFrame` with the predicted probabilities for the query. In this case, since the location was `all`, the results include the probabilities for all locations.
###Code
preds.result
# gives the same as .result
preds()
###Output
_____no_output_____
###Markdown
- The `.predicted_loc` attribute returns the most probable localization, which is where the model predicts the protein will be localized. The `predicted_prob` attribute returns the probability the protein is localized at this most favorable localization.
###Code
print(preds.predicted_loc)
print(preds.predicted_prob)
###Output
cytoplasm
0.72
###Markdown
- The `.query` attribute returns a dictionary with the values used as inputs to the function.
###Code
preds.query
###Output
_____no_output_____
###Markdown
The results are similar when using `all` for the organism, but the three organism-specific models are used internally to predict the localization probabilities for each organism. Note that the displayed probabilities do not sum to one because each probability is computed with respect to the other localization probabilities **within** an organism, not across them. In this case, the results show that this protein would be localized in the cytoplasm for all organisms because all probabilities are higher than 0.5.
###Code
preds = predict_location(seq, organism = 'all', target_loc = 'cytoplasm')
preds.result
###Output
_____no_output_____
###Markdown
Combining `all` for both organism and location, the function returns all results.
###Code
preds = predict_location(seq, organism = 'all', target_loc = 'all')
preds.result
###Output
_____no_output_____
###Markdown
Note that when `all` is used for organisms, the `.predicted_loc` and `.predicted_prob` attributes indicating the most probable localization are calculated for *Homo sapiens*, ignoring the predictions for the other organisms.
###Code
print(preds.predicted_loc)
print(preds.predicted_prob)
###Output
cytoplasm
0.72
###Markdown
The function can also be used as a substitute for `predict_loc_simple` by specifying only one species and localization. It can also be used with or without the $\Delta G$ calculation, as shown below.
###Code
predict_location(seq, organism = 'human', target_loc = 'cytoplasm', include_dg=True).result
###Output
_____no_output_____
###Markdown
The function `predict_location` includes an additional parameter, `pred_all`, which is set to `False` by default but can be set to `True` to compute all probabilities in addition to the ones defined explicitly as shown above. Here, for instance, we define organism and location explicitly but include `pred_all=True` so that we can access all the probabilities with the `.all_predictions` attribute (which is not populated if `pred_all=False`). Note that the rest of the attributes display only the results according to the specified organism and location, just as shown above.
###Code
preds = predict_location(seq, organism = 'all', target_loc = 'cytoplasm', include_dg=True, pred_all=True)
preds.result
print(preds.predicted_loc)
print(preds.predicted_prob)
preds.query
preds.all_predictions
###Output
_____no_output_____
###Markdown
4. Mutate the sequence to improve localization The `lmpm` module has also implementations for functions that investigate the effect of point mutations on the localization of a protein.Three functions act in coordination for this task: `optimize_sequence`, `plot_optimization` and `top_mutations`.The first one, `optimize_sequence`, takes as arguments an input sequence, organism, location and whether to include $\Delta G$ in the model or not, just as before. However, in addition it has the `positions` argument to specify a list of positions to mutate. The format of this list should be a string with integers or ranges defined strictly with `-` indicating the residue positions to mutate. Note that the residue position is defined starting at 1 for consistency with the PDB format. For instance, to mutate the first 3 residues of the target protein, the 5th residue, and the residues from 9-12, the position would be: `'1-3,5,9-12'`. If `position` is not defined, it defaults to mutate all residues in the protein. However, mutating more than 10 residues in the protein is currently not supported and raises an error because it would take too much time.This function returns a tuple where the first element is a `pd.DataFrame` containing the predicted scores for the sequence at the target location and organism across all different mutations (row) for each position (column). The second element in the tuple corresponds to the probability of the initial sequence without mutations. In this example, the initial probability of being in the `cytoplasm` is of 0.72, and as it can be seen on the `pd.DataFrame` this could be improved to 0.77 by mutating residue at position 4 (which is a R) for S.
###Code
mutated_scores, initial_score = optimize_sequence(seq, 'human', 'cytoplasm', include_dg=False, positions='4,9')
initial_score
mutated_scores
###Output
_____no_output_____
###Markdown
The outputs of this function can be visualized with the `plot_optimization` function. This function takes as arguments the results returned by the `optimize_sequence` function. In addition, it has the `plot_inplace` and `dpi` arguments to control whether the plot should be drawn (default behavior, works well with jupyter notebooks) or returned from the function (useful for the web app and other applications) and the resolution in dots per inch of the figure (defaults to 100, but can be changed to 300 for better results).The resulting plot shows the effect of each mutation (vertical axis) at each position (horizontal axis) as the change in probability for the target class (in this case, cytoplasm, as defined by the function above). The mutations in more intense red are those that increase the probability the most (in this case by up to 0.04), and those in intense blue are those that decrease the probability.
###Code
plot_optimization(mutated_scores, initial_score, plot_inplace=True, dpi=100)
###Output
/home/mexposit/miniconda3/envs/lmpmdev/lib/python3.8/site-packages/lmpm/improve_sec.py:155: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_xticklabels(mutated_scores.columns, rotation=90)
###Markdown
Finally, the function `top_mutations` can be used to show the mutations that could improve the localization of the protein at the target location. It takes as arguments the results from `optimize_sequence`, and its `top_results` argument expects an integer `n`, the number of best mutations to report. If `top_results` is larger than the number of mutations that increase the probability, then only those mutations that increase the probability are returned.In this example, the top 10 mutations that could improve the probability are returned. The top mutation (R4S) increases the probability of being in the cytoplasm class by 0.05, up to 0.77 from the initial 0.72.
###Code
top_mutations(mutated_scores, initial_score, top_results=10)
###Output
_____no_output_____
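###Markdown
As an illustrative follow-up (not part of the original demo), the top-ranked mutation R4S can be applied to the sequence and the mutated protein re-scored; the cytoplasm probability should rise to roughly the value reported by `top_mutations`.
###Code
# Position 4 (1-indexed, as in the PDB convention used by lmpm) corresponds to seq[3]
mutated_seq = seq[:3] + 'S' + seq[4:]
predict_location(mutated_seq, organism='human', target_loc='cytoplasm').result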
###Markdown
5. Get unirep representations Additionally, the module contains many functions that are useful for the main functions. However, this functions may also be useful in other cases. For instance, the functions inside `check_inputs.py` can be useful to convert amino acid sequence inputs to a unified single-letter code and check for incorrect symbols. This functions can be accessed specifying the file name that contains them. For instance:
###Code
lmpm.check_inputs.check_input('ArgProLeuTrpArgProMetLeuTrpLeuProArg')
lmpm.check_inputs.check_input('rplwrpmlwlpr')
###Output
_____no_output_____
###Markdown
Similarly, the functions in the `unirep` submodule could be helpful to train other machine learning models based on protein sequences. The `get_UniReps` function in that submodule converts a given protein sequence to 1900 features.
###Code
unirep_rep = lmpm.unirep.get_UniReps('RPLWRPMLWLPR')
unirep_rep
unirep_rep.shape
###Output
_____no_output_____ |
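###Markdown
As a purely illustrative sketch (added here; `toy_sequences` and `toy_labels` are made-up placeholders, not data shipped with `lmpm`), the UniRep features could feed any scikit-learn estimator:
###Code
import numpy as np
from sklearn.linear_model import LogisticRegression

toy_sequences = ['RPLWRPMLWLPR', 'MTERRVPFSLLR']   # hypothetical example peptides
toy_labels = [0, 1]                                # hypothetical localization labels
X = np.vstack([lmpm.unirep.get_UniReps(s) for s in toy_sequences])
clf = LogisticRegression(max_iter=1000).fit(X, toy_labels)
clf.predict(X)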
EfectosLazoCerrado.ipynb | ###Markdown
Effects of the closed loop Objectives- Determine the stability of open-loop and closed-loop systems.- Verify the effect of closing a control loop around continuous-time systems. Closed loopStarting from the typical control loop shown in the following figure, the transfer function from the **reference** $R(s)$ to the **controlled** output $C(s)$ can be derived.Note that the error is defined as:\begin{equation}E(s) = R(s) - H(s) C(s)\end{equation}The **controlled** signal corresponds to the transformation that the **controller**, the **actuator** and the **plant** apply to the **error**.\begin{align}\color{blue}{C(s)} &= \left (G_c(s)G_a(s)G_p(s) \right ) E(s) \\\color{blue}{C(s)} &= \left (G_c(s)G_a(s)G_p(s) \right ) \left ( \color{blue}{\color{blue}{R(s)}} - H(s) \color{blue}{C(s)} \right ) \\\color{blue}{C(s)} &= \left (G_c(s)G_a(s)G_p(s) \right ) \color{blue}{\color{blue}{R(s)}} - \left (G_c(s)G_a(s)G_p(s)H(s) \right )\color{blue}{C(s)} \\\end{align}\begin{equation}\color{blue}{C(s)} + \left (G_c(s)G_a(s)G_p(s)H(s) \right )\color{blue}{C(s)} = \left (G_c(s)G_a(s)G_p(s) \right ) \color{blue}{R(s)}\end{equation}\begin{equation}\color{blue}{C(s)} \left ( 1 + G_c(s)G_a(s)G_p(s)H(s)\right ) = \left (G_c(s)G_a(s)G_p(s) \right ) \color{blue}{R(s)}\end{equation}Thus, the closed-loop transfer function is:\begin{equation}\frac{C(s)}{R(s)} = \frac{G_c(s)G_a(s)G_p(s)}{1 + G_c(s)G_a(s)G_p(s)H(s)}\end{equation}For practical purposes, the **Actuator** and **Plant** systems are lumped into a single model, since these two systems make up the **Process** to be controlled. The closed-loop transfer function is then:\begin{equation}\frac{C(s)}{R(s)} = \frac{G_c(s)G_p(s)}{1 + G_c(s)G_p(s)H(s)}\end{equation}Keep in mind that $G_p(s)$ incorporates the dynamics of both the **Actuator** and the **Plant**.The role of the **Sensor** calls for "fast" and "accurate" responses.The **Controller** must be designed to achieve the desired closed-loop behavior, that is, the shape of $C(s)$ obtained from $R(s)$ is molded by adjusting $G_c(s)$ appropriately. Open loop vs closed loopThe most relevant features of a system's transient response are related to the location of its poles.From the equations above, it can be seen that the closed-loop poles are relocated according to the controller. For this analysis, consider linear models defined as ratios of polynomials.\begin{equation}\frac{C(s)}{R(s)} = \frac{G_c(s)G_p(s)}{1 + G_c(s)G_p(s)H(s)}\end{equation}\begin{equation}\frac{C(s)}{R(s)} = \frac{\frac{N_c(s)}{D_c(s)}\frac{N_p(s)}{D_p(s)}}{1 + \frac{N_c(s)}{D_c(s)}\frac{N_p(s)}{D_p(s)}H(s)}\end{equation}Assume the sensor is perfect, that is, $H(s)=1$.The closed-loop transfer function is:\begin{equation}\frac{C(s)}{R(s)} = \frac{\frac{N_c(s)N_p(s)}{D_c(s)D_p(s)}}{\frac{D_c(s)D_p(s) + N_c(s)N_p(s)}{D_c(s)D_p(s)}} = \frac{N_c(s)N_p(s)}{D_c(s)D_p(s) + N_c(s)N_p(s)}\end{equation}while the open-loop transfer function is:\begin{equation}\frac{C(s)}{R(s)} = \frac{N_c(s)N_p(s)}{D_c(s)D_p(s)}\end{equation}Note that the numerators are preserved, which means the zeros of the closed-loop system are the same as in open loop.Note that the denominators change, which means the poles of the closed-loop system move with respect to the open loop. 
**Example**Consider a process modeled by:$$G_p(s) = \frac{2}{4s - 3}$$$$G_p(s) = \frac{-2/3}{\frac{-4}{3}s + \frac{3}{3}}$$$$G_p(s) = \frac{-2/3}{\frac{-4}{3}s + 1}$$and a control strategy defined by:$$G_c(s) = k_c$$ - Is the system $G_p$ stable?- What is the effect of closing the feedback loop around the system with the controller defined above?The analysis is carried out from the roots of the system, taking into account that the closed-loop transfer function is:\begin{equation}\frac{C(s)}{R(s)} = \frac{N_c(s)N_p(s)}{D_c(s)D_p(s) + N_c(s)N_p(s)}\end{equation}$$G_{LC}(s) = \frac{2k_c}{4s - 3 + 2k_c}$$
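###Markdown
Before analyzing the scenarios below, a quick illustrative check (added here) that building $C(s)/R(s) = G_cG_p/(1+G_cG_p)$ by hand agrees with `control.feedback` for this process and a sample gain $k_c = 3$; the manual construction needs `minreal()` to cancel the common $(4s-3)$ factor.
###Code
import control

Gp_demo = control.tf(2, [4, -3])   # same process as in the example above
Gc_demo = 3                        # sample proportional gain
manual = (Gc_demo*Gp_demo / (1 + Gc_demo*Gp_demo)).minreal()
print(manual)
print(control.feedback(Gc_demo*Gp_demo, 1))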
###Code
# Se define la función de transferencia del proceso
Gp = control.tf(2, [4,-3])
Gp
# Se hallan los polos del proceso
polos = Gp.pole()
polos
# Se hallan los ceros del proceso
ceros = Gp.zero()
ceros
# Se grafica el mapa de polos y ceros
control.pzmap(Gp)
plt.grid(True)
###Output
_____no_output_____
###Markdown
- The system has no zeros.- The system has a pole at $s = 0.75$.- The dynamic response of the system is dominated by $e^{0.75t}$.- This system is unstable.
###Code
# Se grafica la respuesta al escalón
ts = np.linspace(0, 20, 1000)
_, y = control.step_response(Gp, ts)
plt.plot(ts, y)
plt.grid(True)
###Output
_____no_output_____
###Markdown
Different scenarios for $G_c(s) = k_c$ will be considered.
###Code
Gc1 = 0.01
Gc2 = 0.1
Gc3 = 1.5
Gc4 = 3
Gc5 = 5
Gc6 = 10
Gc7 = 1.501
# Caso 1
G_LC1 = control.feedback(Gc1*Gp,1)
_, y1 = control.step_response(G_LC1, ts)
G_LC1
# Caso 2
G_LC2 = control.feedback(Gc2*Gp,1)
_, y2 = control.step_response(G_LC2, ts)
G_LC2
# Caso 3
G_LC3 = control.feedback(Gc3*Gp,1)
_, y3 = control.step_response(G_LC3, ts)
G_LC3
# Caso 4
G_LC4 = control.feedback(Gc4*Gp,1)
_, y4 = control.step_response(G_LC4, ts)
G_LC4
# Caso 5
G_LC5 = control.feedback(Gc5*Gp,1)
_, y5 = control.step_response(G_LC5, ts)
G_LC5
# Caso 6
G_LC6 = control.feedback(Gc6*Gp,1)
_, y6 = control.step_response(G_LC6, ts)
G_LC6
# Caso 7
G_LC7 = control.feedback(Gc7*Gp,1)
_, y7 = control.step_response(G_LC7, ts)
G_LC7
# Se grafica el mapa de polos y ceros para todos los escenarios
control.pzmap(Gp)
control.pzmap(G_LC1)
control.pzmap(G_LC2)
control.pzmap(G_LC3)
control.pzmap(G_LC4)
control.pzmap(G_LC5)
control.pzmap(G_LC6)
control.pzmap(G_LC7)
plt.grid(True)
# Se grafica la respuesta al escalón para todos los escenarios
plt.plot(ts, y,ts, y1,ts, y2,ts, y3,ts, y4,ts, y5,ts, y6)
plt.legend(['Proceso',
'k=' + str(Gc1),
'k=' + str(Gc2),
'k=' + str(Gc3),
'k=' + str(Gc4),
'k=' + str(Gc5),
'k=' + str(Gc6),
'k=' + str(Gc7)
])
plt.grid(True)
# Se grafica la respuesta al escalón para los escenarios inestables
plt.plot(ts, y,ts, y1,ts, y2,ts, y3)
plt.legend(['Proceso',
'k=' + str(Gc1),
'k=' + str(Gc2),
'k=' + str(Gc3)])
plt.grid(True)
# La respuesta al escalón para los escenarios estables y el integrador
plt.plot(ts, y3,ts, y4,ts, y5,ts,y6,ts,y7)
plt.legend(['k=' + str(Gc3),
'k=' + str(Gc4),
'k=' + str(Gc5),
'k=' + str(Gc6),
'k=' + str(Gc7)])
plt.grid(True)
###Output
_____no_output_____
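###Markdown
To make the stability boundary explicit (added check): the closed-loop pole is $s = \frac{3-2k_c}{4}$, so the loop is stable only for $k_c > 1.5$; printing the pole for the gains used above shows it crossing the imaginary axis exactly at $k_c = 1.5$.
###Code
for kc in [0.01, 0.1, 1.5, 1.501, 3, 5, 10]:
    pole = control.feedback(kc*Gp, 1).pole()[0]
    print(f"k_c = {kc:>6}: closed-loop pole at s = {pole.real:+.4f}")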
###Markdown
Effects of the closed loop Objectives- Determine the stability of open-loop and closed-loop systems.- Verify the effect of closing a control loop around continuous-time systems. Closed loopStarting from the typical control loop shown in the following figure, the transfer function from the **reference** $R(s)$ to the **controlled** output $C(s)$ can be derived.Note that the error is defined as:\begin{equation}E(s) = R(s) - H(s) C(s)\end{equation}The **controlled** signal corresponds to the transformation that the **controller**, the **actuator** and the **plant** apply to the **error**.\begin{align}\color{blue}{C(s)} &= \left (G_c(s)G_a(s)G_p(s) \right ) E(s) \\\color{blue}{C(s)} &= \left (G_c(s)G_a(s)G_p(s) \right ) \left ( \color{blue}{\color{blue}{R(s)}} - H(s) \color{blue}{C(s)} \right ) \\\color{blue}{C(s)} &= \left (G_c(s)G_a(s)G_p(s) \right ) \color{blue}{\color{blue}{R(s)}} - \left (G_c(s)G_a(s)G_p(s)H(s) \right )\color{blue}{C(s)} \\\end{align}\begin{equation}\color{blue}{C(s)} + \left (G_c(s)G_a(s)G_p(s)H(s) \right )\color{blue}{C(s)} = \left (G_c(s)G_a(s)G_p(s) \right ) \color{blue}{R(s)}\end{equation}\begin{equation}\color{blue}{C(s)} \left ( 1 + G_c(s)G_a(s)G_p(s)H(s)\right ) = \left (G_c(s)G_a(s)G_p(s) \right ) \color{blue}{R(s)}\end{equation}Thus, the closed-loop transfer function is:\begin{equation}\frac{C(s)}{R(s)} = \frac{G_c(s)G_a(s)G_p(s)}{1 + G_c(s)G_a(s)G_p(s)H(s)}\end{equation}For practical purposes, the **Actuator** and **Plant** systems are lumped into a single model, since these two systems make up the **Process** to be controlled. The closed-loop transfer function is then:\begin{equation}\frac{C(s)}{R(s)} = \frac{G_c(s)G_p(s)}{1 + G_c(s)G_p(s)H(s)}\end{equation}Keep in mind that $G_p(s)$ incorporates the dynamics of both the **Actuator** and the **Plant**.The role of the **Sensor** calls for "fast" and "accurate" responses.The **Controller** must be designed to achieve the desired closed-loop behavior, that is, the shape of $C(s)$ obtained from $R(s)$ is molded by adjusting $G_c(s)$ appropriately. Open loop vs closed loopThe most relevant features of a system's transient response are related to the location of its poles.From the equations above, it can be seen that the closed-loop poles are relocated according to the controller. For this analysis, consider linear models defined as ratios of polynomials.\begin{equation}\frac{C(s)}{R(s)} = \frac{G_c(s)G_p(s)}{1 + G_c(s)G_p(s)H(s)}\end{equation}\begin{equation}\frac{C(s)}{R(s)} = \frac{\frac{N_c(s)}{D_c(s)}\frac{N_p(s)}{D_p(s)}}{1 + \frac{N_c(s)}{D_c(s)}\frac{N_p(s)}{D_p(s)}H(s)}\end{equation}Assume the sensor is perfect, that is, $H(s)=1$.The closed-loop transfer function is:\begin{equation}\frac{C(s)}{R(s)} = \frac{\frac{N_c(s)N_p(s)}{D_c(s)D_p(s)}}{\frac{D_c(s)D_p(s) + N_c(s)N_p(s)}{D_c(s)D_p(s)}} = \frac{N_c(s)N_p(s)}{D_c(s)D_p(s) + N_c(s)N_p(s)}\end{equation}while the open-loop transfer function is:\begin{equation}\frac{C(s)}{R(s)} = \frac{N_c(s)N_p(s)}{D_c(s)D_p(s)}\end{equation}Note that the numerators are preserved, which means the zeros of the closed-loop system are the same as in open loop.Note that the denominators change, which means the poles of the closed-loop system move with respect to the open loop. 
**Example**Consider a process modeled by:$$G_p(s) = \frac{2}{4s - 3}$$$$G_p(s) = \frac{-2/3}{\frac{-4}{3}s + \frac{3}{3}}$$$$G_p(s) = \frac{-2/3}{\frac{-4}{3}s + 1}$$and a control strategy defined by:$$G_c(s) = k_c$$ - Is the system $G_p$ stable?- What is the effect of closing the feedback loop around the system with the controller defined above?The analysis is carried out from the roots of the system, taking into account that the closed-loop transfer function is:\begin{equation}\frac{C(s)}{R(s)} = \frac{N_c(s)N_p(s)}{D_c(s)D_p(s) + N_c(s)N_p(s)}\end{equation}$$G_{LC}(s) = \frac{2k_c}{4s - 3 + 2k_c}$$ Pole at$$4s-3+2k_c=0$$$$s=\frac{3-2k_c}{4}$$
###Code
# Se define la función de transferencia del proceso
Gp = control.tf(2, [4,-3])
Gp
# Se hallan los polos del proceso
polos = Gp.pole()
polos
# Se hallan los ceros del proceso
ceros = Gp.zero()
ceros
# Se grafica el mapa de polos y ceros
control.pzmap(Gp)
plt.grid(True)
###Output
_____no_output_____
###Markdown
- The system has no zeros.- The system has a pole at $s = 0.75$.- The dynamic response of the system is dominated by $e^{0.75t}$.- This system is unstable.
###Code
# Se grafica la respuesta al escalón
ts = np.linspace(0, 20, 1000)
_, y = control.step_response(Gp, ts)
plt.plot(ts, y)
plt.grid(True)
###Output
_____no_output_____
###Markdown
Different scenarios for $G_c(s) = k_c$ will be considered.
###Code
Gc1 = 0.01
Gc2 = 0.1
Gc3 = 1.5
Gc4 = 3
Gc5 = 5
Gc6 = 1000
Gc7 = 1.501
# Caso 1
G_LC1 = control.feedback(Gc1*Gp,1)
_, y1 = control.step_response(G_LC1, ts)
G_LC1
# Caso 2
G_LC2 = control.feedback(Gc2*Gp,1)
_, y2 = control.step_response(G_LC2, ts)
G_LC2
# Caso 3
G_LC3 = control.feedback(Gc3*Gp,1)
_, y3 = control.step_response(G_LC3, ts)
G_LC3
# Caso 4
G_LC4 = control.feedback(Gc4*Gp,1)
_, y4 = control.step_response(G_LC4, ts)
G_LC4
# Caso 5
G_LC5 = control.feedback(Gc5*Gp,1)
_, y5 = control.step_response(G_LC5, ts)
G_LC5
# Caso 6
G_LC6 = control.feedback(Gc6*Gp,1)
_, y6 = control.step_response(G_LC6, ts)
G_LC6
# Caso 7
G_LC7 = control.feedback(Gc7*Gp,1)
_, y7 = control.step_response(G_LC7, ts)
G_LC7
# Se grafica el mapa de polos y ceros para todos los escenarios
control.pzmap(Gp)
control.pzmap(G_LC1)
control.pzmap(G_LC2)
control.pzmap(G_LC3)
control.pzmap(G_LC4)
control.pzmap(G_LC5)
control.pzmap(G_LC6)
control.pzmap(G_LC7)
plt.grid(True)
# Se grafica la respuesta al escalón para todos los escenarios
plt.plot(ts, y,ts, y1,ts, y2,ts, y3,ts, y4,ts, y5,ts, y6)
plt.legend(['Proceso',
'k=' + str(Gc1),
'k=' + str(Gc2),
'k=' + str(Gc3),
'k=' + str(Gc4),
'k=' + str(Gc5),
'k=' + str(Gc6),
'k=' + str(Gc7)
])
plt.grid(True)
# Se grafica la respuesta al escalón para los escenarios inestables
plt.plot(ts, y,ts, y1,ts, y2,ts, y3)
plt.legend(['Proceso',
'k=' + str(Gc1),
'k=' + str(Gc2),
'k=' + str(Gc3)])
plt.grid(True)
# La respuesta al escalón para los escenarios estables y el integrador
plt.plot(ts, y3,ts, y4,ts, y5,ts,y6,ts,y7)
plt.legend(['k=' + str(Gc3),
'k=' + str(Gc4),
'k=' + str(Gc5),
'k=' + str(Gc6),
'k=' + str(Gc7)])
plt.grid(True)
# La respuesta al escalón para los escenarios estables y el integrador
plt.plot(ts, y4,ts, y5,ts,y6)
plt.legend(['k=' + str(Gc4),
'k=' + str(Gc5),
'k=' + str(Gc6)])
plt.grid(True)
###Output
_____no_output_____ |
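###Markdown
A complementary check (added here): for the stable gains, the steady-state value of the closed-loop step response equals the DC gain $\frac{2k_c}{2k_c-3}$, which explains why larger values of $k_c$ track the unit reference more closely.
###Code
for kc in [3, 5, 1000]:
    dc = control.dcgain(control.feedback(kc*Gp, 1))
    print(f"k_c = {kc:>5}: closed-loop DC gain = {float(dc):.4f}")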
Group8/problem generation/.ipynb_checkpoints/workflow_generating instances-checkpoint.ipynb | ###Markdown
generate client coordinates
###Code
x, y = generate_points(math.sqrt(2), num_clients)
draw_instance(radius, x, y)
###Output
_____no_output_____
###Markdown
generate dataframes to export as excel files
###Code
all_points = np.vstack((np.zeros(2),np.vstack((x, y)).T))
#all_points
travel_times_df = get_travel_times(all_points)
clients_df = generate_service_times(travel_times_df, all_points, l)
# probability vector for provider start time (at hour 0, 1, 2, 3)
# p = [0.5, 0.2, 0.15, 0.15]
providers_df = generate_providers(num_providers)
general_df = generate_general(l, providers_df, clients_df)
general_df.iloc[:,1] = general_df.iloc[:,1].astype(int)
travel_times_df
clients_df
providers_df
general_df
import openpyxl
import xlsxwriter
import xlwt
from datetime import datetime
now = datetime.now().strftime("%d_%m_%M:%S")
with pd.ExcelWriter(f'data_numP{num_providers}_numC{num_clients}_{now}.xlsx') as writer:
general_df.to_excel(writer, sheet_name='General', header=False, index=False)
providers_df.to_excel(writer, sheet_name='Providers', index=False)
clients_df.to_excel(writer, sheet_name='Clients', index=False)
travel_times_df.to_excel(writer, sheet_name='Distances', header=False, index=False)
###Output
_____no_output_____ |
Lessons/Lesson01-python-primer.ipynb | ###Markdown
Lesson 01: Lightning Fast Python Primer - Released in 1991- Created by Guido Van Rossum- Interpreted- Dynamically Typed- Emphasizes on Code Readability Learning by Doing 1. Collatz Conjecture2. Factorial3. Upper Case Class Collatz Conjecture Start with any positive integer n. Then each term is obtained from the previous term as follows: if the previous term is even, the next term is one half of the previous term. If the previous term is odd, the next term is 3 times the previous term plus 1. The conjecture is that no matter what value of n, the sequence will always reach 1.
###Code
n = input('Enter a number:')
n = int(n)
n_list = []
while n != 1:
if n%2 == 0:
n = n / 2
else:
n = (3 * n) + 1
n_list.append(n)
print(n_list)
###Output
[25.0, 76.0, 38.0, 19.0, 58.0, 29.0, 88.0, 44.0, 22.0, 11.0, 34.0, 17.0, 52.0, 26.0, 13.0, 40.0, 20.0, 10.0, 5.0, 16.0, 8.0, 4.0, 2.0, 1.0]
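###Markdown
An alternative formulation (added example) wraps the same logic in a reusable function and keeps the terms as integers; `collatz(50)` reproduces the sequence printed above.
###Code
def collatz(n):
    """Return the Collatz sequence from n down to 1 (excluding the starting value)."""
    terms = []
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        terms.append(n)
    return terms

collatz(50)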
###Markdown
Factorial Write a program which can compute the factorial of a given number.
###Code
def fact(x):
if x == 0:
return 1
else:
return x * fact(x - 1)
x = input('Enter a number: ')
x = int(x)
fact(x)
###Output
_____no_output_____
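###Markdown
A quick cross-check (added example) of the recursive implementation against the standard library:
###Code
import math
math.factorial(10) == fact(10)  # expected: True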
###Markdown
To Upper Case Class Define a class which has at least two methods:1. getString: to get a string from console input2. printString: to print the string in upper case.
###Code
class ToUpperCase():
def __init__(self):
self.str = ''
def getString(self, input_str=''):
self.str = input_str
def printString(self):
return self.str.upper()
up = ToUpperCase()
up.getString('hello')
up.printString()
###Output
_____no_output_____ |
20160606_xtract 1926.ipynb | ###Markdown
The Beautiful Soup documentation is here:
###Code
from bs4 import BeautifulSoup
soup = BeautifulSoup(open('data_1926-1928.html', encoding='iso-8859-1'), 'html.parser')
spacers = soup.find_all(class_='spacer')
node = spacers[4]
node
len(node)
node.contents
node.p.text
###Output
_____no_output_____
###Markdown
Let's check for all the spacers:
###Code
len([node.p.text for node in spacers if node.p is not None])
3 * 12
###Output
_____no_output_____
###Markdown
This is regular enough on our test data. We have 12 values per year * 3 years.
###Code
node.next_sibling.table.tr.text
[node.next_sibling.table.find(text='Maximum instantané').parent.parent.parent
for node in spacers if node.p is not None]
node
###Output
_____no_output_____
###Markdown
Using a regexp
###Code
node.next_sibling.table.text
t = node.next_sibling.table.text
import re
pattern_cm = re.compile('Hauteur :\n(\d+.\d*)\xa0cm.+Date :\n([\d/: ]+)', re.DOTALL)
pattern_mm = re.compile('Hauteur :\n(\d+.\d*)\xa0mm.+Date :\n([\d/: ]+)', re.DOTALL)
re.findall(pattern_cm, t)
[re.findall(p, t) for t in [node.next_sibling.table.text for node in spacers] if t is not None]
for node in spacers:
print(node)
if len(list(node.children)) > 0:
if node.next_sibling.table is not None:
t = node.next_sibling.table.text
print(re.findall(p, t))
list(node.children)
len(node.children)
###Output
_____no_output_____
###Markdown
Looking for the tables
###Code
table = soup.find_all('table', class_='statistiques')[1]
###Output
_____no_output_____
###Markdown
Monthly filtering We keep a table if it is tied to the month (i.e., daily discharges):
###Code
def is_monthly(table):
"Returns True if monthly data table."
cap = table.find('caption')
if cap is None:
label = table.find(class_='procedure_choix')
if label is not None:
return label.text == 'Ecoulement mensuel'
else:
return False
else:
return False
table = soup.find_all('table', class_='statistiques')[1]
table
table.find(class_='procedure_choix').text
is_monthly(table)
for table in soup.find_all('table', class_='statistiques'):
print(is_monthly(table))
###Output
False
False
True
True
True
True
True
True
True
True
True
True
True
True
False
False
True
True
True
True
True
True
True
True
True
True
True
True
False
False
True
True
True
True
True
True
True
True
True
True
True
True
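###Markdown
A compact way (added check) to confirm the expected 12 monthly tables per year over 3 years:
###Code
sum(is_monthly(t) for t in soup.find_all('table', class_='statistiques'))  # expected: 36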
###Markdown
Data extraction We know we have a monthly table, but we are not sure what data it contains.
###Code
def extract_height_date(soup):
"Extracts height (in cm) and date from soup."
pattern_cm = re.compile('Hauteur :\n(\d+.\d*)\xa0cm.+Date :\n([\d/: ]+)', re.DOTALL)
pattern_mm = re.compile('Hauteur :\n(\d+.\d*).*mm.+Date :\n([\d/: ]+)', re.DOTALL)
res = re.findall(pattern_cm, soup.text)
if len(res) > 0:
# assume it was cm
return res
else:
# assume it was mm
res = re.findall(pattern_mm, soup.text)
return res
data = []
for table in soup.find_all('table', class_='statistiques'):
if is_monthly(table):
data.append(*re.findall(p, table.text))
import pandas as pd
def parse_dates(df):
df = df.copy()
df['date'] = pd.to_datetime(df['date'])
return df
def make_numeric(df):
df = df.copy()
df['hauteur'] = [float(val.replace(',', '.')) for val in df['hauteur']]
return df
df = pd.DataFrame(data, columns=['hauteur', 'date']).pipe(parse_dates).pipe(make_numeric).set_index('date')
df
%matplotlib notebook
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
df.plot.line(ax=ax, style='-o')
###Output
_____no_output_____
###Markdown
Tests Function
###Code
def extract_height_date(soup):
"Extracts height (in cm) and date from soup."
pattern_cm = re.compile('Hauteur :\n(\d+.\d*)[ \xa0]cm.+Date :\n([\d/: ]+)', re.DOTALL)
pattern_mm = re.compile('Hauteur :\n(\d+.\d*)[ \xa0]mm.+Date :\n([\d/: ]+)', re.DOTALL)
res = re.findall(pattern_cm, soup.text)
if len(res) > 0:
# assume it was cm
return res
else:
# assume it was mm
res = re.findall(pattern_mm, soup.text)
if len(res) > 0:
height, date = res[0]
height = str(float(height) / 10)
return [(height, date)]
else:
return [(None, None)]
###Output
_____no_output_____
###Markdown
Test cases We will test the cm and mm extractions and validate the results.
###Code
test_cases = ["""<table class="statistiques" width="42%"><tbody>
<tr><td class="left" colspan="3" id="C1"><span class="procedure_choix">Ecoulement mensuel</span></td></tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="40%">Débit moyen :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">366.0 m3/s</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="40%">Débit moyen spécifique :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">8.37 l/s/km2</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="40%">Lame d'eau :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">22.4 mm</td>
</tr>
<tr><td class="left" colspan="3" id="C1"> </td></tr>
<tr><td class="left" colspan="3" id="C1"><span class="procedure_choix">Ecoulement naturel reconstitué</span></td></tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Débit moyen :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">366.0 m3/s</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Débit moyen spécifique :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">8.37 l/s/km2</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Lame d'eau :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">22.4 mm</td>
</tr>
<tr><td class="left" colspan="3" id="C1"> </td></tr>
<tr><td class="left" colspan="3" id="C1"><span class="procedure_choix">Maximum instantané</span></td></tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Hauteur :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">1570.0 mm</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Date :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">17/01/2008 19:10</td>
</tr>
</tbody></table>""",
"""<table class="statistiques" width="42%"><tbody>
<tr><td class="left" colspan="3" id="C1"><span class="procedure_choix">Ecoulement annuel</span></td></tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="40%">Débit moyen :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">441.0 m3/s</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="40%">Débit moyen spécifique :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">10.10 l/s/km2</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="40%">Lame d'eau :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">318.0 mm</td>
</tr>
<tr><td class="left" colspan="3" id="C1"> </td></tr>
<tr><td class="left" colspan="3" id="C1"><span class="procedure_choix">Ecoulement naturel reconstitué</span></td></tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Débit moyen :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">441.0 m3/s</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Débit moyen spécifique :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">10.10 l/s/km2</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Lame d'eau :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">318.0 mm</td>
</tr>
<tr><td class="left" colspan="3" id="C1"> </td></tr>
<tr><td class="left" colspan="3" id="C1"><span class="procedure_choix">Maximum instantané</span></td></tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Débit :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">1740.0 m3/s</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Validité :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">#</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Date :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">08/01/1926 07:00</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Hauteur :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">602.0 cm</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Date :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">08/01/1926 07:00</td>
</tr>
</tbody></table>""",
"""<table class="statistiques" width="42%"><tbody>
<tr><td class="left" colspan="3" id="C1"><span class="procedure_choix">Ecoulement mensuel</span></td></tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="40%">Débit moyen :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">518.0 m3/s</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="40%">Débit moyen spécifique :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">11.80 l/s/km2</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="40%">Lame d'eau :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">28.6 mm</td>
</tr>
<tr><td class="left" colspan="3" id="C1"> </td></tr>
<tr><td class="left" colspan="3" id="C1"><span class="procedure_choix">Ecoulement naturel reconstitué</span></td></tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Débit moyen :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">518.0 m3/s</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Débit moyen spécifique :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">11.80 l/s/km2</td>
</tr>
<tr>
<td class="left" id="C1"> </td>
<td class="left" id="C2" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="30%">Lame d'eau :</td>
<td class="left" id="C3" style="text-align:left;border-color:#FFFFFF;font-size:1em;" width="50%">28.6 mm</td>
</tr>
<tr><td class="left" colspan="3" id="C1"> </td></tr>
</tbody></table>"""]
test_results = [1, 1, 0]
for tc, tr in zip(test_cases, test_results):
test_soup = BeautifulSoup(tc, 'xml')
print("expected: {}, actual: {}".format(tr, extract_height_date(test_soup)))
###Output
expected: 1, actual: [('157.0', '17/01/2008 19:10')]
expected: 1, actual: [('602.0', '08/01/1926 07:00')]
expected: 0, actual: [(None, None)]
###Markdown
But does it scale? We can test our function on somewhat larger data.
###Code
soup = BeautifulSoup(open('data_2008-2016.html', encoding='iso-8859-1'), 'html.parser')
data = []
for table in soup.find_all('table', class_='statistiques'):
if is_monthly(table):
data.append(*extract_height_date(table))
data
pd.DataFrame(data).dropna()
###Output
_____no_output_____ |
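###Markdown
As a possible next step (added sketch), the same cleaning pipeline and plot used for the 1926-1928 data can be reused on this larger extraction:
###Code
df_large = (pd.DataFrame(data, columns=['hauteur', 'date'])
            .dropna()
            .pipe(parse_dates)
            .pipe(make_numeric)
            .set_index('date'))
fig, ax = plt.subplots()
df_large.plot.line(ax=ax, style='-o')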
Figure3/Cad6B_and_Snai2_HCR_dNT/.ipynb_checkpoints/20190722_Cad6BandSnai2_Intensity-checkpoint.ipynb | ###Markdown
Analyze Ceramide Staining Intensity Import Modules
###Code
# Import packages
import datetime
import os
import glob
import pandas as pd
import numpy as np
# Import plotting packages
import matplotlib as mpl
import seaborn as sns
import dabest
print("matplotlib v{}".format(mpl.__version__))
print("seaborn v{}".format(sns.__version__))
print("dabest v{}".format(dabest.__version__))
###Output
matplotlib v3.0.3
seaborn v0.9.0
dabest v0.2.4
###Markdown
Assemble image data into a single dataframe
###Code
# Navigate to CSV path
path = os.path.abspath('')+'/CSVs/'
full_df = pd.DataFrame()
list_ = []
for file_ in glob.glob(path + "/*.csv"): # For loop to bring in files and concatenate them into a single dataframe
df = pd.read_csv(file_)
df['Image'] = os.path.splitext(os.path.basename(file_))[0] # Determine Image name from file name
df['Stain'], df['ROI'] = zip(*df['Label'].map(lambda x: x.split(':'))) # Split values in ROI label
(df['ExptDate'], df['Treatment'], df['Dose'], df['Stains'], df['Embryo'], # Split values in Image name column
df['Somites'], df['Mag']) = zip(*df['Image'].map(lambda x: x.split('_')))
list_.append(df)
full_df = pd.concat(list_)
full_df.head()
# Add experiment date here to apply to dataframe
now = datetime.datetime.now()
analysis_date = now.strftime("%Y%m%d")
# Get a list of treatments and stains
treatment_list = full_df.Treatment.unique()
treatment_list = treatment_list.tolist()
stain_list = full_df.Stain.unique()
stain_list = stain_list.tolist()
# Mean background values and group by Treatment, Embryo, Fluor, ROI and Section
mean_sections = ((full_df.groupby(['Stain', 'Treatment', 'Embryo', 'ROI', 'ExptDate'])
['Area', 'Mean', 'IntDen']).mean())
# Loop through stains, performing the following analysis
for j in stain_list:
stain = j
df_stain = pd.DataFrame(mean_sections.xs(stain))
    # Loop through treatments, performing each analysis and exporting a CSV file for each treatment
for i in treatment_list:
# Slice dataframe to process only embryos with given treatment
treatment = i
df_treatment = pd.DataFrame(df_stain.xs(treatment))
# Determine CTCF values = ROI IntDen - (background mean * ROI area)
# Calculate background (background mean * ROI area)
background_corr_cntl = (df_treatment.xs('background', level='ROI')['Mean']
* df_treatment.xs('Cntl', level='ROI')['Area'])
background_corr_expt = (df_treatment.xs('background', level='ROI')['Mean']
* df_treatment.xs('Expt', level='ROI')['Area'])
# Slice out only Cntl or Expt values in IntDen
intdens_cntl = df_treatment.xs('Cntl', level='ROI')['IntDen']
intdens_expt = df_treatment.xs('Expt', level='ROI')['IntDen']
# Subtract background from IntDens to determine CTCF and concatenate into single dataframe
sub_cntl = pd.DataFrame(intdens_cntl - background_corr_cntl)
sub_expt = pd.DataFrame(intdens_expt - background_corr_expt)
full_ctcf = pd.concat([sub_cntl, sub_expt], keys = ['Cntl', 'Expt'])
full_ctcf.columns = ['CTCF']
# Combine raw values, generate ratio
ctcf_cntl = full_ctcf.xs('Cntl').reset_index()
ctcf_cntl.rename(columns={'CTCF':'Cntl CTCF'}, inplace=True)
ctcf_expt = full_ctcf.xs('Expt').reset_index()
ctcf_expt.rename(columns={'CTCF':'Expt CTCF'}, inplace=True)
results = pd.concat([ctcf_cntl,ctcf_expt], axis=1)
results['Expt/Cntl CTCF'] = ctcf_expt['Expt CTCF'] / ctcf_cntl['Cntl CTCF']
results = results.loc[:,~results.columns.duplicated()]
results = results.groupby(['Embryo', 'ExptDate']).mean().reset_index()
# Normalize means
# Normalize all migration area values to mean of control group
norm_cntl = pd.DataFrame(results['Cntl CTCF']/(float(results['Cntl CTCF'].mean())))
norm_cntl.rename(columns={'Cntl CTCF':'Norm Cntl CTCF'}, inplace=True)
norm_expt = pd.DataFrame(results['Expt CTCF']/(float(results['Cntl CTCF'].mean())))
norm_expt.rename(columns={'Expt CTCF':'Norm Expt CTCF'}, inplace=True)
norm_expt.columns = ['Norm Expt CTCF']
results = pd.concat([results, norm_cntl, norm_expt], axis=1, sort=False)
results.to_csv(analysis_date + '_' + stain + '_' + treatment + '_Intensity.csv')
results = pd.read_csv('20190722_Snai2_nSMase2MO_Intensity.csv')
results = dabest.load(results, idx=('Norm Cntl CTCF', 'Norm Expt CTCF')
,id_col='Embryo', paired=True)
results.mean_diff.statistical_tests
fig1 = results.mean_diff.plot(
#Set overall figure parameters
dpi=200
,fig_size=(3,3)
#Edit legend features, use matplotlib.Axes.legend kwargs in dictionary format
# ,legend_kwargs={'loc':'upper left'
# ,'frameon':True}
#Edit 0 line features, use matplotlib.Axes.hlines kwargs in dictionary format
,reflines_kwargs= {'linestyle':'dashed'
,'linewidth':.8
,'color' : 'black'}
#Set swarm plot parameters
,swarm_label='Ceramide Intensity'
,swarm_ylim=(0,1.5)
,show_pairs=False #connect paired points? Yes (True), no (False)
# ,color_col='ID' #color points based on defined column identifier
# ,custom_palette={'Cntl CTCF':'#747575'
# ,'Expt CTCF':'#139604'}
,swarm_desat=1
,group_summaries='mean_sd' #display mean+/-sd as bars next to swarm plots
,group_summaries_offset=0.15
#Edit swarmplot features, use seaborn.swarmplot kwargs in dictionary format
,swarmplot_kwargs={'size':7}
#Edit group summary line features, use matplotlib.lines.Line2D kwargs in dictionary format
,group_summary_kwargs={'lw':3
,'alpha':.7}
#Set effect size plot parameters
,float_contrast=True #displays mean difference next to graph (True) or below graph (False)
,contrast_label='mean difference'
,es_marker_size=9
,halfviolin_desat=1
,halfviolin_alpha=0.8
#Edit violin features, use sns.violinplot kwargs in dictionary format
,violinplot_kwargs={'widths':0.5}
#Edit legend features, use matplotlib.Axes.legend kwargs in dictionary format
# ,legend_kwargs={'loc':'upper left'
# ,'frameon':True}
#Edit slopegraph features, use
#kwargs in dictionary format
# ,slopegraph_kwargs={'color':'blue'}
)
###Output
_____no_output_____ |
module4-model-interpretation/Furkan_Onat_LS_DS_234_assignment.ipynb | ###Markdown
###Code
import pandas as pd
import os
from google.colab import files
uploaded = files.upload()
df = pd.read_csv('freMTPL2freq.csv')
###Output
_____no_output_____
###Markdown
Explanatory Analysis
###Code
df.head()
df.isnull().sum()
import sys
!{sys.executable} -m pip install pandas-profiling
from pandas_profiling import ProfileReport
profile = ProfileReport(df).to_notebook_iframe()
profile
# Adding a feature for annualized claim frequency
df['Frequency'] = df['ClaimNb'] /df['Exposure']
df.head()
df['Frequency'].value_counts(normalize=True)
df['Frequency'].nunique()
df.describe()
df['ClaimNb'].value_counts()
df.dtypes
###Output
_____no_output_____
###Markdown
Model 1: Target = ClaimNb; Model = DecisionTreeClassifier; Evaluation metric = validation accuracy; Description = turn the ClaimNb feature into a 3-class feature (a Frequency feature was also added)
###Code
%matplotlib inline
!pip install category_encoders
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
df_model_1 = df.copy()
df_model_1['ClaimNb'].value_counts(normalize=True)
df_model_1['ClaimNb'].value_counts()
# I will create a new column for number of claims per policy.
df_model_1['ClaimNb_Adj'] = df_model_1['ClaimNb']
df_model_1.head()
# I modify the new 'ClaimNb' column to have just 3 classes : 'no claim', 'once', 'more than once'.
df_model_1['ClaimNb_Adj'] = df_model_1['ClaimNb_Adj'].replace({0: 'no claim', 1: 'once', 2: 'more than once', 3: 'more than once', 4: 'more than once', 11: 'more than once', 5: 'more than once', 16: 'more than once', 9: 'more than once', 8: 'more than once', 6: 'more than once'})
df_model_1.head()
# I will use "ClaimNb_Adj" feature as the target for the model
y = df_model_1['ClaimNb_Adj']
# Baseline for the majority class
df_model_1['ClaimNb_Adj'].value_counts(normalize=True)
# Split for test and train
train, test = train_test_split(df_model_1, train_size=0.80, test_size=0.20, stratify=df_model_1['ClaimNb_Adj'], random_state=42)
train.shape, test.shape
# Split for train and val
train, val = train_test_split(train, train_size = 0.80, test_size=0.20, stratify=train['ClaimNb_Adj'], random_state=42)
train.shape, val.shape
def wrangle(X):
# Drop IDpol since it doesn't have any explanatory power
# Drop ClaimNb and Frequency as they are a function of our target.
column_drop = ['IDpol','ClaimNb', 'Frequency']
X = X.drop(columns=column_drop)
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
train.head()
!pip install --upgrade category_encoders
import category_encoders as ce
# Arranging features matrix and y target vector
target = 'ClaimNb_Adj'
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test.drop(columns=target)
y_test = test[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
DecisionTreeClassifier(max_depth = 3)
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
import graphviz
from sklearn.tree import export_graphviz
tree = pipeline.named_steps['decisiontreeclassifier']
dot_data = export_graphviz(tree,
out_file=None,
feature_names=X_train.columns,
class_names=y_train.unique().astype(str),
filled=True,
impurity=False,
proportion=True
)
graphviz.Source(dot_data)
y.value_counts(normalize=True)
# Getting feature importances
rf = pipeline.named_steps['decisiontreeclassifier']
importances = pd.Series(rf.feature_importances_,X_train.columns)
# plot feature importances
%matplotlib inline
n=11
plt.figure(figsize=(5,n))
plt.title("Feature Importances")
importances.sort_values()[-n:].plot.barh(color='black');
importances.sort_values(ascending=False)
# Predict on Test
y_pred = pipeline.predict(X_test)
y_pred.shape, y_test.shape
print('Train Accuracy', pipeline.score(X_train, y_train))
print('Validation Accuracy', pipeline.score(X_val, y_val))
from sklearn.metrics import accuracy_score
# print the accuracy
accuracy = accuracy_score(y_pred, y_test)
print("Accuracy : %.4f%%" % (accuracy * 100.0))
###Output
Accuracy : 94.9765%
###Markdown
Assignment DAY 4 Partial Dependence Plot
###Code
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi']=72
!pip install pdpbox
from pdpbox.pdp import pdp_isolate, pdp_plot
from pdpbox import pdp
feature = 'DrivAge'
encoder = ce.OrdinalEncoder()
X_val_encoded = encoder.fit_transform(X_val)
model = DecisionTreeClassifier()
model.fit(X_val_encoded, y_val)
X_val.head()
%matplotlib inline
from pdpbox import pdp
feature = 'DrivAge'
pdp_dist = pdp.pdp_isolate(model=model, dataset = X_val_encoded, model_features=X_val_encoded.columns, feature=feature)
pdp.pdp_plot(pdp_dist, feature);
feature = 'VehGas'
pdp_dist = pdp.pdp_isolate(model=model, dataset = X_val_encoded, model_features=X_val.columns, feature=feature)
pdp.pdp_plot(pdp_dist, feature);
###Output
_____no_output_____
###Markdown
Shapley value plots
###Code
pip install shap
X_train.shape, X_val.shape,X_test.shape
processor = make_pipeline(
ce.OrdinalEncoder(),
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model=DecisionTreeClassifier(max_depth = 3)
model.fit(X_train_processed, y_train)
row = X_test.iloc[[3094]]
row
import shap
explainer = shap.TreeExplainer(model)
row_processed = processor.transform(row)
shap_values = explainer.shap_values(row_processed)
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=row,
link='logit'
)
X_train_processed.shape
###Output
_____no_output_____ |
LinkedIn Learning/Tensorflow esencial/Shapes.ipynb | ###Markdown
Shape in TensorFlow Definition: Shape describes both the number of dimensions in a tensor and the length of each dimension. Syntax: tf.shape(tensor)
###Code
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
b = tf.placeholder(tf.int32, shape=None)
a = [1,2,3]
suma = tf.add(a,b )
print(tf.shape(suma))
sess = tf.Session()
dic = {b:[1,1,1]}
print(sess.run(suma, feed_dict=dic))
###Output
Tensor("Shape:0", shape=(?,), dtype=int32)
[2 3 4]
|
static/files/Hengchao_02.ipynb | ###Markdown
Assignment2 Hengchao Wang 1001778272
###Code
from csv import reader
from math import sqrt
import random
import operator
import matplotlib.pyplot as plt
# variable
filename = "iris_data"
splitRow = 0.8
K = 5
distanceMethod = 1
###Output
_____no_output_____
###Markdown
Divide the dataset as development and test.
###Code
def str_column_to_float(dataset):
for i in range(len(dataset[0]) - 1):
for row in dataset:
row[i] = float(row[i].strip())
# load csv files into a list. parameters: start row end row and which row is classRow
def load_csv(filename, start = 0, end = -1, classRow = -1):
dataset = list()
filename = filename + ".csv"
with open(filename, 'r') as file:
csv_reader = reader(file)
for row in csv_reader:
if not row:
continue
if end == -1:
if classRow != -1:
tmp = row[classRow]
row[classRow] = row[-1]
row[-1] = tmp
dataset.append(row[start:])
else :
if classRow != -1:
tmp = row[classRow]
row[classRow] = row[-1]
row[-1] = tmp
dataset.append(row[start:end])
str_column_to_float(dataset)
return dataset
dataset = load_csv(filename)
len(dataset)
def saperateDataset(dataset, splitRate):
developmentSet = list()
testSet = list()
for x in range(len(dataset)-1):
for y in range(4):
dataset[x][y] = float(dataset[x][y])
if random.random() < splitRate:
developmentSet.append(dataset[x])
else:
testSet.append(dataset[x])
print("developmentSet length:", len(developmentSet), "testSet length:", len(testSet))
return developmentSet,testSet
developmentSet,testSet = saperateDataset(dataset, splitRow)
type(developmentSet[1][1])
###Output
developmentSet length: 124 testSet length: 25
###Markdown
Distance metric
###Code
def euclideanDistance(a,b): #euclidean
sum = 0
for i in range(len(a)-1):
sum += (a[i]-b[i])**2
return sqrt(sum)
def normalizedEuclideanDistance(a,b): #normalized euclidean
sumnum = 0
for i in range(len(a)-1):
avg = (a[i]-b[i])/2
si = sqrt( (a[i] - avg) ** 2 + (b[i] - avg) ** 2 )
sumnum += ((a[i]-b[i])/si ) ** 2
return sqrt(sumnum)
def cosineSimilarity(a,b): #cosine similarity
sum_fenzi = 0.0
sum_fenmu_1,sum_fenmu_2 = 0,0
for i in range(len(a)-1):
sum_fenzi += a[i]*b[i]
sum_fenmu_1 += a[i]**2
sum_fenmu_2 += b[i]**2
return sum_fenzi/( sqrt(sum_fenmu_1) * sqrt(sum_fenmu_2) )
def distanceTest():
print( 'a,b euclidean distance:',euclideanDistance((1,2,1,2),(3,3,3,4)))
print( 'a,b normalized euclidean distance:',normalizedEuclideanDistance((1,2,1,2),(3,3,3,4)))
print( 'a,b cosine similarity:',cosineSimilarity((1,2,1,2),(3,3,3,4)))
distanceTest()
###Output
a,b euclidean distance: 3.0
a,b normalized euclidean distance: 0.6738353315566452
a,b cosine similarity: 0.9428090415820634
###Markdown
Implement KNN
###Code
def getNeighbors(developmentSet, instance, k, distanceMethod): # choose distance method.
distances = list()
for x in range(len(developmentSet)):
# print(developmentSet[x], instance)
if operator.eq(developmentSet[x],instance):
continue
else:
if distanceMethod == 1: #1 == euclideanDistance
dist = euclideanDistance(instance, developmentSet[x])
elif distanceMethod == 2: #2 == normalizedEuclideanDistance
dist = normalizedEuclideanDistance(instance, developmentSet[x])
elif distanceMethod == 3: #3 == cosineSimilarity
dist = cosineSimilarity(instance, developmentSet[x])
distances.append((developmentSet[x], dist))
distances.sort(key=operator.itemgetter(1))
sorted(distances,key=lambda x: x[0]) #sorted by
neighbors = list()
if distanceMethod == 1 or distanceMethod == 2:
for x in range(k):
neighbors.append(distances[x][0])
else:
for x in range(len(distances)-k, len(distances)): # cosineSimilarity need to get the top k biggest
neighbors.append(distances[x][0])
return neighbors
def getPrediction(neighbors):
classVotes = {}
for x in range(len(neighbors)):
response = neighbors[x][-1]
if response in classVotes:
classVotes[response] += 1
else:
classVotes[response] = 1
sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
return sortedVotes[0][0]
def getAccuracy(testSet, predictions):
correct = 0
for x in range(len(testSet)):
if testSet[x][-1] == predictions[x]:
correct += 1
return (correct/float(len(testSet)))*100.0
# test of KNN
neighbors = getNeighbors(developmentSet, developmentSet[1], K, 3) #test
getPrediction(neighbors)
def getAllPrediction(developmentSet, k, distanceMethod):
predictions = []
for instance in developmentSet:
neighbors = getNeighbors(developmentSet, instance, k, distanceMethod)
predictions.append(getPrediction(neighbors))
# print(predictions)
accuracy = getAccuracy(developmentSet, predictions)
if distanceMethod == 1:
method = "EuclideanDistance"
elif distanceMethod == 2:
method = "normalizedEuclideanDistance"
elif distanceMethod == 3:
method = "cosineSimilarity"
print("k = ", k, " distanceMethod = ", method , "accuracy = ",accuracy)
return accuracy
def compare(data):
for distanceMethod in [1,2,3]:
for K in [1,3,5,7]:
getAllPrediction(data, K, distanceMethod)
compare(developmentSet)
def plotRes(k, m1, m2, m3): # Draw line charts comparing accuracy for each distance metric
plt.xlim(0, 10)
plt.plot(k, m1,label = "$euclideanDistance$", c = "r")
plt.plot(k, m2,label = "$normalizedEuclideanDistance$", c = "y")
plt.plot(k, m3,label = "$cosineSimilarity$", c = "g")
plt.xlabel('K')
plt.ylabel('accuracy')
plt.legend()
plt.show()
def findBest(): # Find optimal hyperparameters
maxAccuracy, bestK, bestM = 0, 0, 1
m1 = list()
m2 = list()
m3 = list()
k = [1,2,3,4,5,6,7,8,9,10]
for K in k:
acc = getAllPrediction(developmentSet, K, 1)
m1.append(acc)
if acc > maxAccuracy: maxAccuracy, bestK, bestM = acc, K, 1
for K in k:
acc = getAllPrediction(developmentSet, K, 2)
m2.append(acc)
if acc > maxAccuracy: maxAccuracy, bestK, bestM = acc, K, 2
for K in k:
acc = getAllPrediction(developmentSet, K, 3)
m3.append(acc)
if acc > maxAccuracy: maxAccuracy, bestK, bestM = acc, K, 3
plotRes(k, m1, m2, m3)
return maxAccuracy, bestK, bestM
maxAccuracy, bestK, bestM = findBest()
getAllPrediction(testSet, bestK, bestM)
###Output
k = 5 distanceMethod = cosineSimilarity accuracy = 96.0
|
src/.ipynb_checkpoints/nn-copy-checkpoint.ipynb | ###Markdown
Table of Contents
###Code
from PIL import Image
import scipy.misc
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import random
import tensorflow as tf
from matplotlib.font_manager import FontProperties
def display_image_samples(data, labels=None): # labels are used for plot titles and are optional
font = FontProperties()
font.set_family('monospace')
plt.figure(figsize=(8,4))
rows, cols = 2, 4 # these are arbitrary
random_ids = random.sample(range(len(data)), rows*cols) # randomly select the images
for i in range(rows*cols):
curr_index = random_ids[i]
image = data[curr_index]
title_str = ('shape: ' + str(image.shape))
if labels:
title_str += ('\nclass ' + str(labels[curr_index]))
plt.subplot(rows, cols, i+1)
plt.title(title_str, fontproperties=font)
plt.imshow(image)
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Fetch Data
###Code
def clean_data(data):
# apply greyscale
data = data.mean(3) # dimension 3 of image shape corresponds to color channels
# data = data[:, :, :, 0] # same as above
# center-crop images
# data = data[:, :, 7:data.shape[2]-1]
print(data.shape)
return data
from sklearn.model_selection import train_test_split
def load_data(data_path, k, test_size=0.33):
x = []
y = []
for i in range(k):
curr_dir_path = data_path + 'c' + str(i) + '/'
for file in os.listdir(curr_dir_path):
file_name = os.fsdecode(file)
if file_name.endswith(".jpg"):
file_path = (os.path.join(curr_dir_path, file_name))
img = np.asarray(Image.open(file_path))#.flatten()
x.append(img)
y.append(i)
# apply greyscale (center-cropping is currently disabled in clean_data)
x = clean_data(np.asarray(x))
# np.asarray(x_train), np.asarray(labels)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=test_size, random_state=42)
return np.asarray(x_train), np.asarray(y_train), np.asarray(x_test), np.asarray(y_test)
###Output
_____no_output_____
###Markdown
Convolutional Neural Network
###Code
def weight_variable(shape):
# should initialize weights with a small amount of noise for symmetry breaking, and to prevent 0 gradients
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
# it is good practice to initialize them with a slightly positive initial bias to avoid "dead neurons"
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
# uses stride of 1 and padding of 0, allowing the output to be the same as the input
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
# max-pooling over 2x2 patches
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
def model(x):
# need to reshape x to a 4d tensor according to the image width and color channels (1)
x_image = tf.reshape(x, shape=[-1, 24, 24, 1])
# ---------- 1: CONVOLUTION + MAXPOOL ----------
# note: third dimension corresponds to num. of input channels
W_conv1 = weight_variable([5, 5, 1, 32]) # computing 32 features for each 5x5 patch
b_conv1 = bias_variable([32]) # one bias vector for each output channel (features computed)
# convolve x_image with weight tensor, then add bias and apply ReLU
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
# apply maxpooling - should half the size of the images
h_pool1 = max_pool_2x2(h_conv1)
# ---------- 2: CONVOLUTION + MAXPOOL ----------
W_conv2 = weight_variable([5, 5, 32, 64]) # this layer will compute 64 features
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2) # the image size should be halfed once again
# ---------- 3: FULLY CONNECTED LAYER ----------
# by now, the image should be 6x6 pixels in size
W_fc1 = weight_variable([6 * 6 * 64, 1024])
b_fc1 = bias_variable([1024])
# reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias
# first, flatten the images. instead of (6,6,64), now 6*6*64 = 2304
h_pool2_flat = tf.reshape(h_pool2, [-1, 6*6*64])
# finally, apply ReLU
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# ---------- 4: DROPOUT ----------
# this will be used in order to reduce overfitting the data
keep_prob = tf.placeholder(tf.float32) # probability that a neuron's output will be kept during dropout
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# ---------- 5: READOUT LAYER ----------
# regular layer, which will connect the fully-connected layer to the last output layer with the 10 final nodes
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
# obtain final prediction
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
return y_conv
def run_model(num_classes, x_train, y_train, x_test, y_test, num_epochs, num_batches):
# define the x and y placeholders
x = tf.placeholder(tf.float32, shape=[None, 24 * 24])
y = tf.placeholder(tf.float32, shape=[None, num_classes])
y_ = model(x)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_, labels=y))
train_step = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# train the model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# convert y_train labels into one-hot vectors
onehot_ytrain = tf.one_hot(y_train, num_classes, on_value=1., off_value=0., axis=-1)
onehot_ytrain = sess.run(onehot_ytrain)
# convert y_test labels into one-hot vectors for later testing
onehot_ytest = tf.one_hot(y_test, num_classes, on_value=1., off_value=0., axis=-1)
onehot_ytest = sess.run(onehot_ytest)
# define the size for each batch
batch_size = len(x_train) // num_batches
for j in range(num_epochs):
# initialize progress bar (printProgressBar is assumed to be defined elsewhere; it is not part of this notebook's shown code)
printProgressBar(0, 1, prefix='EPOCH ' + str(j+1) + ': ', length=50)
# if j % 10 == 0: print("\nEPOCH ", j+1)
# used to sum up the cost for each batch
total_cost = 0
# iterate through the training data in batches
for i in range(0, len(x_train), batch_size):
batch_xtrain = x_train[i : i + batch_size, :]
batch_ytrain_onehot = onehot_ytrain[i : i + batch_size, :]
_, c = sess.run([train_step, cross_entropy], feed_dict={x: batch_xtrain, y: batch_ytrain_onehot})
total_cost += c
# if (j % 10 == 0) and (i % batch_size == 0): print("batch", i + 1, ", cost =", total_cost)
printProgressBar(i, len(x_train), prefix='EPOCH ' + str(j+1) + ': ',
suffix='Cost = ' + str(total_cost), length=50)
print()
# if j % 10 == 0: print("> Total Cost =", total_cost)
#accuracy_val = sess.run(accuracy, feed_dict={x: x_test, y:onehot_ytest})
#print('\nAccuracy = ', accuracy_val*100, '%')
###Output
_____no_output_____
###Markdown
Implementation
###Code
csv_path = '../dataset/driver_imgs_list.csv'
# train_data_path = '../dataset/original/train/'
train_data_path = '../dataset/resized/'
# train_data_path = '../dataset/samples/'
drivers_csv = pd.read_csv(csv_path)
classes = (np.unique(np.asarray(drivers_csv)[:,1]))
NUM_CLASSES = len(classes) # 10
# fetch images from stored dataset in path
x_train, y_train, x_test, y_test = load_data(train_data_path, NUM_CLASSES) # test perc = 0.33 (default)
print(x_train.shape)
# print a sample of images
display_image_samples(x_train)
print('\n---------------------------------------- DETAILS ---------------------------------------\n')
print('data shape (original):', x_train.shape) # (13, 24, 24)
# want to flatten it, like: (13, 576)
x_train_flattened = x_train.reshape(x_train.shape[0], -1) # the -1 would be automatically calculated as 24*24 (=576)
x_test_flattened = x_test.reshape(x_test.shape[0], -1)
print('data shape (flattened):' , x_train_flattened.shape)
print('\nclass names:', classes, '\nclass names shape:', classes.shape)
print('\nlabels shape:', y_train.shape)
print('\n------------------------------------- CONFIGURATION -------------------------------------\n')
# SIZES: names: [] x 10 , data:(50000, 576), labels:(50000,)
num_epochs = 2
num_batches = 5
print('epochs:', num_epochs)
print('number of batches:', num_batches)
print('batch size:', len(x_train) // num_batches)
print('\n-----------------------------------------------------------------------------------------\n')
run_model(NUM_CLASSES, x_train_flattened, y_train, x_test_flattened, y_test, num_epochs, num_batches)
###Output
---------------------------------------- DETAILS ---------------------------------------
data shape (original): (15024, 24, 24)
data shape (flattened): (15024, 576)
class names: ['c0' 'c1' 'c2' 'c3' 'c4' 'c5' 'c6' 'c7' 'c8' 'c9']
class names shape: (10,)
labels shape: (15024,)
------------------------------------- CONFIGURATION -------------------------------------
epochs: 2
number of batches: 5
batch size: 3004
-----------------------------------------------------------------------------------------
<class 'numpy.ndarray'>--------------------------------------| 0.0%
<class 'numpy.ndarray'>
|
Recommender-Systems/Popularality based filtering.ipynb | ###Markdown
These are the most rated places (ranked by number of ratings)
###Code
rating_count = pd.DataFrame(ratings.groupby('placeID')['rating'].count())
rating_count.sort_values('rating', ascending=False).head()
most_rated_places = pd.DataFrame([135085, 132825, 135032, 135052, 132834], index=np.arange(5), columns=['placeID'])
final = pd.merge(most_rated_places, cuisine, on='placeID')
final
cuisine['Rcuisine'].describe()
###Output
_____no_output_____ |
2021Q1_DSF/5.- Spark/notebooks/spark_sql/05_dw_missing_values.ipynb | ###Markdown
Missing Values Missing values in _pyspark_ are identified as _null_. The `isNull` method identifies the null records and `isNotNull` the non-null ones.
###Code
from pyspark.sql import functions as F
vancouver_df = spark.read.csv(DATA_PATH + 'crime_in_vancouver.csv', sep=',', header=True, inferSchema=True)
vancouver_df.filter(F.col('NEIGHBOURHOOD').isNull()).show(4)
vancouver_df.filter(F.col('NEIGHBOURHOOD').isNotNull()).show(4)
###Output
_____no_output_____
###Markdown
Counting null values
###Code
vancouver_df.filter(F.col('NEIGHBOURHOOD').isNull()).count()
vancouver_df.filter(F.col('TYPE').isNull()).count()
###Output
_____no_output_____
###Markdown
Percentage of missing values per column The first method is less efficient than the second because it requires executing one action per column. As a general rule in Spark, you should try to perform the minimum number of actions.
###Code
n_rows_vancouver = vancouver_df.count()
###Output
_____no_output_____
###Markdown
__Method 1:__
###Code
%%time
for col in vancouver_df.columns:
n_missing = vancouver_df.filter(F.col(col).isNull()).count()
perc_missing = 100 * n_missing / n_rows_vancouver
print(col, round(perc_missing, 2))
###Output
_____no_output_____
###Markdown
__Method 2:__ For a single column
###Code
vancouver_df.select(F.round(F.sum(F.col('NEIGHBOURHOOD').isNull().cast('int')) * 100 / n_rows_vancouver, 2)\
.alias('NEIGHBOURHOOD')).show()
###Output
_____no_output_____
###Markdown
All columns
###Code
%%time
missing_ops = [F.round(F.sum(F.col(c).isNull().cast('int')) * 100 / n_rows_vancouver, 2).alias(c)
for c in vancouver_df.columns]
vancouver_df.select(missing_ops).show()
###Output
_____no_output_____
###Markdown
Dropping null records The `dropna` method is used to remove records with nulls. The `subset` parameter indicates which columns to check for nulls, and the `how` parameter selects the condition under which a record is removed. By default, `how` is set to 'any'.
###Code
vancouver_df.dropna(how='all').count()
vancouver_df.dropna(how='any').count()
vancouver_no_missing_df = vancouver_df.dropna(subset=['HOUR', 'MINUTE'])
vancouver_no_missing_df.select(missing_ops).show()
###Output
_____no_output_____
###Markdown
Imputing null values `fillna` imputes the null values of the columns with a chosen fixed value.
###Code
vancouver_df.show(3)
###Output
_____no_output_____
###Markdown
Impute the null values of the `HOUR` and `MINUTE` columns with the value 0, and those of the `NEIGHBOURHOOD` column with 'Unknown'.
###Code
vancouver_df.fillna(0, subset=['HOUR', 'MINUTE']).show(3)
vancouver_df.fillna('Unknown', subset=['NEIGHBOURHOOD']).show(3)
###Output
_____no_output_____
###Markdown
Exercise 1 Using the following dataframe
###Code
vancouver_df = spark.read.csv(DATA_PATH + 'crime_in_vancouver.csv', sep=',', header=True, inferSchema=True)
###Output
_____no_output_____
###Markdown
- a. Determine which column(s) have the largest number of nulls - b. Fill the categorical variables that contain nulls with the majority value - c. Remove the records with the largest number of nulls - d. Fill the quantitative variables that contain nulls with the corresponding mean values of those columns
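A minimal sketch for parts (a) and (b) is shown below, reusing the null-counting pattern from earlier in this notebook; it is only a starting point (using `NEIGHBOURHOOD` in part (b) is just an illustration), not the reference solution.
###Code
# Sketch for part (a): count nulls per column and look at the largest ones
null_counts = vancouver_df.select(
    [F.sum(F.col(c).isNull().cast('int')).alias(c) for c in vancouver_df.columns]
).collect()[0].asDict()
sorted(null_counts.items(), key=lambda kv: kv[1], reverse=True)[:3]
# Sketch for part (b): fill a categorical column with its most frequent (majority) value
top_value = (vancouver_df.dropna(subset=['NEIGHBOURHOOD'])
             .groupBy('NEIGHBOURHOOD').count()
             .orderBy(F.desc('count')).first()['NEIGHBOURHOOD'])
vancouver_df_filled = vancouver_df.fillna(top_value, subset=['NEIGHBOURHOOD'])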
###Code
# Answer here
# Answer here
# Answer here
# Answer here
###Output
_____no_output_____
###Markdown
Exercise 2 Data source: https://www.kaggle.com/abhinav89/telecom-customer 1) Obtain a dictionary of the variables with the percentage of nulls they contain. Sort it, in some way even if the output is not a dictionary, from the highest to the lowest percentage of nulls. 2) Apply the treatment you consider appropriate for the null data, based on the business meaning you assign to each case and the amount of null data the column contains. Impute at least five columns as an example, justifying the substituted values from a business point of view. Hint: we will consider that a column adds no value if it contains more than 40% null values
###Code
df = spark.read.csv(DATA_PATH + 'telecom_customer_churn.csv', sep=',', header=True, inferSchema=True)
df.count()
###Output
_____no_output_____
###Markdown
1) Obtain a dictionary of the variables with the percentage of nulls they contain. Sort it, in some way even if the output is not a dictionary, from the highest to the lowest percentage of nulls.
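A minimal sketch of one possible approach, following the same pattern used for the Vancouver dataframe above (a starting point, not the reference answer):
###Code
# Sketch: percentage of nulls per column as a dict, sorted from highest to lowest
n_rows = df.count()
null_perc = df.select(
    [(F.sum(F.col(c).isNull().cast('int')) * 100 / n_rows).alias(c) for c in df.columns]
).collect()[0].asDict()
sorted_null_perc = dict(sorted(null_perc.items(), key=lambda kv: kv[1], reverse=True))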
###Code
# Answer here
# Answer here
# Answer here
# Answer here
###Output
_____no_output_____
###Markdown
2) Apply the treatment you consider appropriate for the null data, based on the business meaning you assign to each case and the amount of null data the column contains. Impute at least five columns as an example, justifying the substituted values from a business point of view. Hint: we will consider that a column adds no value if it contains more than 40% null values
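One possible sketch of the 40% rule, using the `sorted_null_perc` dictionary from the sketch above; the actual imputation choices (and the placeholder column names below) should be decided and justified from the business meaning of each variable.
###Code
# Sketch: drop columns with more than 40% nulls, then impute the remaining ones case by case
cols_to_drop = [c for c, p in sorted_null_perc.items() if p > 40]
df_clean = df.drop(*cols_to_drop)
# Placeholder imputations (hypothetical column names, replace with real ones and justify each choice):
# df_clean = df_clean.fillna(0, subset=['some_numeric_column'])
# df_clean = df_clean.fillna('Unknown', subset=['some_categorical_column'])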
###Code
# Answer here
# Answer here
###Output
_____no_output_____ |
02_Python_Datatypes_examples/019_print_colored_text_to_the_terminal.ipynb | ###Markdown
All the IPython Notebooks in this **Python Examples** series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/90_Python_Examples)** Python Program to Print Colored Text to the TerminalIn this example, you will learn to print colored text to the terminal.To understand this example, you should have the knowledge of the following **[Python programming](https://github.com/milaan9/01_Python_Introduction/blob/main/000_Intro_to_Python.ipynb)** topics:* **[Python Strings](https://github.com/milaan9/02_Python_Datatypes/blob/main/002_Python_String.ipynb)**
###Code
# Example 1: Using ANSI escape sequences
print('\x1b[38;2;6;96;243m' + 'Python4DataScience' + '\x1b[0m')
###Output
[38;2;6;96;243mPython4DataScience[0m
###Markdown
**Explanation:** The working of the above line of code is shown below: Let's understand the escape code **`\x1b[38;2;6;96;243m`** used in Example 1.* **`\x1b`** is the escape character that starts the sequence. You can also use **`\033`** for the same purpose.* **`38;2;r;g;b`** sets an RGB foreground color. **`6;96;243`** is the RGB value of the blue used in the example above.* **`m`** is the final letter of the sequence; it selects the SGR (Select Graphic Rendition) function. For more information regarding the ANSI escape code, you can refer to **[ANSI escape code](https://en.wikipedia.org/wiki/ANSI_escape_code)**.
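Purely as an illustration (not part of the original example), the same escape sequence can be wrapped in a small helper so that any RGB value can be printed; the helper name and the sample call below are hypothetical.
###Code
# A tiny helper built on the same ANSI sequence shown above (illustrative sketch)
def print_rgb(text, r, g, b):
    # \x1b[38;2;R;G;Bm sets the foreground color, \x1b[0m resets it
    print('\x1b[38;2;{};{};{}m{}\x1b[0m'.format(r, g, b, text))
print_rgb('Python4DataScience', 6, 96, 243)  # same blue as in Example 1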
###Code
# Example 2: Using python module termcolor
from termcolor import colored
print(colored('Python4DataScience', 'red'))
###Output
[31mPython4DataScience[0m
|
DataAnalysis/notebooks/MallCustomersOutliersDetection.ipynb | ###Markdown
Calculate the variance (unbiased sample variance), standard deviation and outliers in one of your datasets. Approximate the values after outlier deletion. Visualize the results (before/after). Importing data
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
data = pd.read_csv("../data/mall_customers.csv")
display(data.sample(5))
###Output
_____no_output_____
###Markdown
Visualizing data Drawing boxplots and distribution plots.
###Code
sns.boxplot(data=data[data.columns[1:]])
plt.xticks(rotation=20)
plt.show()
plt.figure(1 , figsize = (15 , 3))
n = 0
for x in ['Age' , 'Annual Income (k$)' , 'Spending Score (1-100)']:
n += 1
plt.subplot(1 , 3 , n)
plt.subplots_adjust(hspace =0.5 , wspace = 0.5)
sns.distplot(data[x] , bins = 20)
plt.show()
###Output
_____no_output_____
###Markdown
Analyzing annual income distribution
###Code
def calculate_deviations(arr, k):
"""
Args:
arr (list): List to count characteristics of.
k (int): coefficient for range control.
"""
# Mean.
mean = np.mean(arr)
print("Mean = {:.1f}".format(mean))
# Variance.
variance = np.var(arr, ddof=1)
print("Variance = {:.1f}".format(variance))
# Standart deviation.
deviation = np.std(arr, ddof=1)
print("Deviation = {:.1f}".format(deviation))
# Range control.
interval_border_min = mean - k * deviation
interval_border_max = mean + k * deviation
print("Standart deviation borders: from {:.1f} to {:.1f}".format(interval_border_min,
interval_border_max))
return mean, variance, deviation, (interval_border_min, interval_border_max)
###Output
_____no_output_____
###Markdown
Values distribution before replacing outliers with mean.
###Code
sns.boxplot(data["Annual Income (k$)"])
plt.show()
sns.distplot(data["Annual Income (k$)"], bins=20)
plt.show()
mean, _, _, interval = calculate_deviations(arr=data["Annual Income (k$)"],
k=2)
###Output
Mean = 60.6
Variance = 689.8
Deviation = 26.3
Standart deviation borders: from 8.0 to 113.1
###Markdown
Values distribution after replacing outliers with mean.
###Code
balanced_income = [mean if ((i < interval[0]) | (i > interval[1])) else i for i in data["Annual Income (k$)"]]
sns.boxplot(balanced_income)
plt.show()
sns.distplot(balanced_income, bins=20)
plt.show()
###Output
_____no_output_____ |
differential_cryptanalysis.ipynb | ###Markdown
Differential Cryptanalysis Example Using a Toy Cipher Toy Cipher: the Toy Cipher used in this example was built based on the material explained at the following link: http://www.secmem.org/blog/2019/04/08/차분-공격의-이해/ This Toy Cipher is a 12-bit block cipher composed of 3 rounds, and it requires four 12-bit round keys. There is no separate round-key expansion (key schedule) function. The structure of the ToyCipher is illustrated below. The 4-bit S-Box used here is as follows. 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | a | b | c | d | e | f---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- 6 | 7 | b | c | 9 | 8 | 4 | 0 | e | 5 | 3 | d | 1 | 2 | f | a The bit permutation is performed as follows. 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 ---|---|---|---|---|---|---|---|---|---|---|--- 0 | 2 | 4 | 6 | 8 | 10 | 1 | 3 | 5 | 7 | 9 | 11
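To make the two tables above concrete, the sketch below shows one way the S-Box substitution and the bit permutation could be applied to a 12-bit block. It is illustrative only: the nibble order and the direction of the permutation table (input bit i moving to the position given by entry i) are assumptions, and the real `toycipher` module may implement these steps differently.
###Code
# Illustrative sketch only; not the actual ToyCipher implementation
SBOX = [0x6, 0x7, 0xb, 0xc, 0x9, 0x8, 0x4, 0x0, 0xe, 0x5, 0x3, 0xd, 0x1, 0x2, 0xf, 0xa]
PERM = [0, 2, 4, 6, 8, 10, 1, 3, 5, 7, 9, 11]  # entry i = assumed output position of input bit i
def substitute(block12):
    # apply the 4-bit S-Box to each of the three nibbles (MSB-first nibble order assumed)
    return (SBOX[(block12 >> 8) & 0xf] << 8) | (SBOX[(block12 >> 4) & 0xf] << 4) | SBOX[block12 & 0xf]
def permute(block12):
    # move input bit i (counting from the most significant bit) to output position PERM[i]
    return sum(((block12 >> (11 - i)) & 1) << (11 - PERM[i]) for i in range(12))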
###Code
# Import the required packages.
import differential_cryptanalysis as dc
import toycipher as tc
# Create a ToyCipher object and fill it with a random 48-bit key.
cipher = tc.ToyCipher()
cipher.random_keys()
# Print the input-output table of the S-Box in order to perform the differential cryptanalysis.
dc.print_inout_table(cipher.SBOX)
# Based on the input-output table generated above, print the input-difference/output-difference probability distribution table.
dc.print_differential_prob_table(cipher.SBOX)
###Output
SBox: [6, 7, 11, 12, 9, 8, 4, 0, 14, 5, 3, 13, 1, 2, 15, 10]
입력차분-출력차분 빈도표
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 4 0 2 2 2 0 2 0 0 0 2 0 0 2 0
2 0 0 0 0 0 0 0 0 6 0 0 2 0 6 2 0
3 0 0 0 2 0 0 2 0 0 2 2 2 4 2 0 0
4 0 0 0 0 0 0 0 4 0 0 0 0 4 0 0 8
5 0 0 2 0 2 0 0 0 2 2 0 2 2 0 4 0
6 0 2 6 0 2 0 0 2 0 0 0 0 0 0 0 4
7 0 2 0 4 2 2 2 0 0 0 2 0 2 0 0 0
8 0 2 2 0 0 0 0 0 6 0 4 2 0 0 0 0
9 0 0 0 2 0 0 2 0 0 4 0 2 0 0 2 4
10 0 0 4 0 0 6 2 0 0 2 2 0 0 0 0 0
11 0 2 2 2 2 0 2 2 0 0 0 2 0 0 2 0
12 0 0 0 0 2 2 2 6 0 0 0 0 0 4 0 0
13 0 2 0 4 2 0 4 0 0 2 0 0 2 0 0 0
14 0 0 0 0 0 4 0 0 0 2 6 0 0 2 2 0
15 0 2 0 0 2 0 0 0 2 2 0 2 2 2 2 0
###Markdown
Finding a differential characteristic 1 Using the input-difference/output-difference frequency table, we look for differential characteristics that occur with high probability. The figure above shows a path in which the input difference (0x2, 0x0, 0x0) leads to the output difference (0x8, 0x0, 0x0); its probability is 6/16 * 6/16 = 9/64. This means that, out of 64 random plaintext pairs with the input difference (0x2, 0x0, 0x0), we can expect about 9 of them to follow this differential path. When a pair follows this path, the output differences of the second and third nibbles must be 0, so plaintext pairs whose differences in those positions are non-zero are discarded. We then guess the round-key nibble corresponding to the first nibble, perform a one-round decryption, and count how many pairs yield a difference of 0x8. The key value for which the largest number of plaintext pairs produces a one-round-decryption difference of 0x8 is, with high probability, the corresponding part of the round key.
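The 6/16 factors above can be read directly from row 0x2 of the frequency table printed earlier (input difference 0x2 maps to output difference 0x8 for 6 of the 16 inputs). As an illustrative sketch, the same count can be re-derived from the S-Box (assuming only that `cipher.SBOX` is indexable, as shown above):
###Code
# Sketch: count the inputs x for which S(x) XOR S(x XOR din) equals dout
din, dout = 0x2, 0x8
count = sum(1 for x in range(16) if (cipher.SBOX[x] ^ cipher.SBOX[x ^ din]) == dout)
print(count, '/ 16')  # expected: 6 / 16, matching the table above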
###Code
# Recover the round-key nibble corresponding to the first nibble
dc.try_recover_key(cipher, 32, input_diff=(2, 0, 0), target_diff=(8, 0, 0), key_mask=(0xf, 0, 0))
###Output
Key Count [0x0 0x0 0x0 0x0 0x2 0x0 0x2 0x0 0x0 0x0 0x0 0x0 0x4 0x1 0x3 0x0]
Partial Key Candidates --> 1) 0xc 2) 0xe
###Markdown

###Code
# Recover the round-key nibble corresponding to the second nibble
dc.try_recover_key(cipher, 32, input_diff=(0, 2, 0), target_diff=(0, 0x4, 0), key_mask=(0, 0xf, 0))
###Output
Key Count [0x2 0x6 0x3 0x3 0x1 0x0 0x3 0x0 0x0 0x3 0x0 0x1 0x3 0x3 0x6 0x2]
Partial Key Candidates --> 1) 0x1 2) 0xe
###Markdown

###Code
# Recover the round-key nibble corresponding to the third nibble
dc.try_recover_key(cipher, 128, input_diff=(0, 0, 2), target_diff=(5, 0, 0xa), key_mask=(0, 0, 0xf))
# The actual round keys
print(cipher.rks)
###Output
[[0x4 0x3 0x4]
[0xb 0x1 0x6]
[0x1 0x2 0x7]
[0xc 0x1 0x3]]
|
0.Sandbox_problems.ipynb | ###Markdown
Importing packages Throughout this tutorial, we will use the following common Python packages:
###Code
# Use these packages to easily access files on your hard drive
import os, sys, glob
# The Numpy package allows you to manipulate data (mainly numerical)
import numpy as np
# The Pandas package allows more advanced data manipulation e.g. in structured data frames
import pandas as pd
# The Matplotlib package is for plotting - uses the same syntax as plotting in Matlab (figures, axes etc)
import matplotlib.pyplot as plt
# Seaborn is a higher-level package for plotting that calls functions in Matplotlib,
# you can usually input your Pandas dataframes to get pretty plots in 1 or 2 lines
import seaborn as sns
# We will use Scipy for advanced computation like model fitting
import scipy
###Output
_____no_output_____
###Markdown
Problems 1. Create two lists that separate numbers (e.g. from 1-100) divisible by 3 and numbers not divisible by 3. 2. Keep generating random numbers until a generated number is greater than 0.8 and store the number of times it takes you to get this number 3. Generate some random data in two variables of equal length and make a scatter plot using matplotlib 4. Generate some data for a linear relationship between two variables (e.g. age and height of schoolchildren), put them in a Pandas dataframe with 2 named columns, and use Seaborn to create a scatterplot with regression line
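Possible sketches for problems 1 and 2 are shown below (one of many valid approaches, using the packages imported above):
###Code
# Problem 1 (sketch): split 1-100 by divisibility by 3
divisible_by_3 = [n for n in range(1, 101) if n % 3 == 0]
not_divisible_by_3 = [n for n in range(1, 101) if n % 3 != 0]
# Problem 2 (sketch): count how many draws it takes to exceed 0.8
n_draws = 0
while True:
    n_draws += 1
    if np.random.rand() > 0.8:
        break
print(n_draws)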
###Code
# a hint to start with but feel free to make your own.
#generate random age with range (5-12)
age = 5 + np.random.rand(100)*7
#generate height as a linear function of age
height = 108 + (152-108)*((age-5)/7) + np.random.randn(100)*20
#put these values into a dataframe
age_height = pd.DataFrame.from_dict({'age':age,'height':height}).sort_values(by=['age','height'])
display(age_height.head())
###Output
_____no_output_____
###Markdown
5. Create a Pandas dataframe with height data for 5 age groups and use Seaborn to turn this into a barplot with errorbars and an overlaid stripplot or swarmplot.
###Code
# a hint of how to put data into groups
age_height['group'] = age_height['age'].apply(lambda x: np.floor(x)-4)
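# A possible sketch for problem 5 (one of several valid approaches), reusing the 'group' column above:
# sns.barplot(x='group', y='height', data=age_height, ci='sd', color='lightgray')
# sns.stripplot(x='group', y='height', data=age_height, color='black', alpha=0.5)
# plt.show()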
###Output
_____no_output_____ |