path | concatenated_notebook
---|---|
sample-code/l13-l16-supervised-text/l16-Text Classification.ipynb | ###Markdown
Data Retrieval and Preparation
###Code
# import libs
import pandas as pd
# Load data
train = pd.read_csv('https://raw.githubusercontent.com/biplav-s/course-nl/8f0bb9e50db6706595e6d5ca38c39d31e9bfc77b/common-data/Kaggle-fake-news-train.csv', header=0, lineterminator='\n')
nRow, nCol = train.shape
print(f'INFO: There are {nRow} rows and {nCol} columns in the training set.')
# Seeing a data sample
train.head()
# Clean up white spaces
train = train.applymap(lambda x: x.strip() if isinstance(x, str) else x)
# Print statistics
train.info()
# Distribution of positive and negative labels
train["label"].value_counts()
# Notice that it is quite balanced
# Removing empty rows from csv
train.dropna(axis=0,inplace=True)
nRow, nCol = train.shape
print(f'INFO: There are {nRow} rows and {nCol} columns in the training set.')
# Distribution of positive and negative labels after removing rows with empty data
train["label"].value_counts()
###Output
_____no_output_____
###Markdown
Building a classification model
###Code
# Add a column with content from title, author and text.
# We will check whether classification on content that includes just 'title' is better than with 'all content', or not.
train['all_content'] = train['title'] + ' ' + train['author'] + ' ' + train['text']
# Check new column
train.head()
###Output
_____no_output_____
###Markdown
Text processing
###Code
# Import for tokenization. We can use any library - nltk or spacy - for example.
from nltk.tokenize import word_tokenize
# Tokenize
train['title_tokenize'] = train['title'].apply(word_tokenize)
train['all_content_tokenize'] = train['all_content'].apply(word_tokenize)
train.head()
# We can extract the label column from train dataframe to be the target 'y' variable
targets = train['label'].values
# Import for representation
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
# Initialize the model
vectorizer = TfidfVectorizer()
# Get the tf idf representation for title fields
X = vectorizer.fit_transform(train['title'].values)
# If test_size is not specified, train_test_split defaults to test_size=0.25 (train size = 1 - test_size)
X_train, X_test, y_train, y_test = train_test_split(X, targets, random_state=0)
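# (Added note) train_test_split also accepts a stratify=targets argument to preserve the
# label balance seen above in both splits; it is not used here, matching the original run.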
###Output
_____no_output_____
###Markdown
Now learning classifier
###Code
# Import for prediction
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
# Try titles on random forest
RandomFC= RandomForestClassifier(n_estimators=5)
RandomFC.fit(X_train, y_train)
# Print accuracy
print('Accuracy of randomforest classifier on training set: {:.2f}'.format(RandomFC.score(X_train, y_train)))
print('Accuracy of randomforest classifier on test set: {:.2f}'.format(RandomFC.score(X_test, y_test)))
CM = confusion_matrix(y_test, RandomFC.predict(X_test))
print(CM)
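# (Added note) In scikit-learn, confusion_matrix(y_true, y_pred) puts the true labels on the
# rows and the predicted labels on the columns, so with labels {0, 1} the layout is
# [[TN, FP], [FN, TP]].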
# Also try logistic regression
logreg = LogisticRegression(C=1e5)
logreg.fit(X_train, y_train)
# Print accuracy
print('Accuracy of Logreg classifier on training set: {:.2f}'.format(logreg.score(X_train, y_train)))
print('Accuracy of Logreg classifier on test set: {:.2f}'.format(logreg.score(X_test, y_test)))
CM = confusion_matrix(y_test, logreg.predict(X_test))
print(CM)
###Output
Accuracy of Logreg classifier on training set: 1.00
Accuracy of Logreg classifier on test set: 0.93
[[2376 207]
[ 98 1891]]
###Markdown
Now train using all content
###Code
# Get the tf-idf representation for the combined 'all_content' field
X = vectorizer.fit_transform(train['all_content'].values)
from sklearn.model_selection import train_test_split
# If test_size is not specified, train_test_split defaults to test_size=0.25 (train size = 1 - test_size)
X_train, X_test, y_train, y_test = train_test_split(X, targets, random_state=0)
# Do random forest
RandomFC= RandomForestClassifier(n_estimators=5)
RandomFC.fit(X_train, y_train)
# Print accuracy
print('Accuracy of randomforest classifier on training set: {:.2f}'.format(RandomFC.score(X_train, y_train)))
print('Accuracy of randomforest classifier on test set: {:.2f}'.format(RandomFC.score(X_test, y_test)))
CM = confusion_matrix(y_test, RandomFC.predict(X_test))
print(CM)
# Also try logistic regression
logreg = LogisticRegression(C=1e5)
logreg.fit(X_train, y_train)
# Print stats
print('Accuracy of Logreg classifier on training set: {:.2f}'.format(logreg.score(X_train, y_train)))
print('Accuracy of Logreg classifier on test set: {:.2f}'.format(logreg.score(X_test, y_test)))
CM = confusion_matrix(y_test, logreg.predict(X_test))
print(CM)
###Output
Accuracy of Logreg classifier on training set: 1.00
Accuracy of Logreg classifier on test set: 0.98
[[2528 55]
[ 50 1939]]
|
lessons/Jupyter Notebooks/My First Notebook.ipynb | ###Markdown
NumPy Array Slicing

One way to select multiple elements from an array in Python is to "slice" pieces of the array. In order to do that, we specify two indices and an (optional) step value. The final "slice" contains a selection of elements from the "start" index to the "end" index. A slice is defined using the following notation: [start:end]. We can also define the step, like this: [start:end:step].

What happens if we do not define the start/end indices? Let's take a look at these special cases:
* If we don't specify a _start_ index, the slice starts from the first element along the specified dimension (element [0]).
* If we don't specify an _end_ index, the slice ends at the last element along the specified dimension.
* If we don't specify a _step_ value, it is treated as a unit step (step=1).

If you want to read more on the topic, there is a very nice tutorial on [W3Schools](https://www.w3schools.com/python/numpy/numpy_array_slicing.asp). Let's go over some examples!
###Code
# Let's create an array that contains all 100 integers from 0 to 99
import numpy as np
Z = np.arange(100)
Z
# Selection of the first element at position 0
Z[0]
# Let's cut ourselves a slice starting from 0 up to 50 using start/end indices
Z[0:50]
# Let's cut a smaller slice, without a start index
Z[:25]
# Let's add a step of 2
Z[0:50:2]
# What will this do?
Z[::10]
# What?! A negative starting index?! How is this possible?!
Z[-25:]
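# (Added note) Negative indices count from the end of the array, so Z[-25:] selects
# the last 25 elements (equivalent to Z[75:] for this 100-element array).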
###Output
_____no_output_____
###Markdown
Exponential Function

Let's see an example of an exponential function. We want to generate exponential curves that follow the equation $f(X) = \alpha e^{-(X/\beta)^2} + \gamma$. Let's write a function that takes the parameters $\alpha, \beta, \gamma$ and a sequence $X$ as arguments and generates exponential signals, and let's see how the parameters affect the shape of the curve.
###Code
# Import the MatplotLib library (https://matplotlib.org/)
import matplotlib.pyplot as plt
# Let's play with exponentials
def exponential(X, a, b, c):
return a*np.exp(-((X/b)**2)) + c
X = np.linspace(-1,+1, 100)
Y = exponential(X, 1, .2, 3)
n = 100
for a,b in zip( np.random.uniform(0.5,1.0,n), np.random.uniform(0.01,1.0,n)):
plt.plot(X, exponential(X, a, b, 0), color="k", alpha=.1)
l1 = [1,2,3]
l2 = ["a", "b", "c"]
for number, letter in zip(l1,l2):
print(number, letter)
###Output
1 a
2 b
3 c
|
Jupyter Notebook/Live Tweet Analysis.ipynb | ###Markdown
Live Tweet Analysis
###Code
# Import Necessary Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from random import randrange
import warnings
warnings.filterwarnings('ignore')
import tweepy
from tweepy import OAuthHandler
import nltk
import re
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from textblob import TextBlob
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
# Assign the credentials
consumer_key = "Your Consumer Key"
consumer_secret = "Your Consumer Secret"
access_token = "Your Access Token"
access_token_secret = "Your Access Secret"
# Authenticate the API using above credentials
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
# Create a Dataframe to store the extracted data.
df = pd.DataFrame(columns=["Date","User","Tweet",'User_location'])
print(df)
# Function to collect tweets
def get_tweets(topic, location_id, Count):
i=0
for tweet in tweepy.Cursor(api.search, q='{} place:{}'.format(topic, location_id), lang="en", exclude='retweets').items():
print(i, end='\r')
df.loc[i,"Date"] = tweet.created_at
df.loc[i,"User"] = tweet.user.name
df.loc[i,"Tweet"] = tweet.text
df.loc[i,"User_location"] = tweet.user.location
i=i+1
if i>Count:
break
else:
pass
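# (Added note) The Cursor call above uses the tweepy v3.x method api.search; in
# tweepy >= 4.0 it was renamed to api.search_tweets, so newer installs need that change.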
# Call the function to collect the tweets according to the given location, topic and count
location = api.geo_search(query="India", granularity="country")
location_id = location[0].id
topic = "Covid19"
get_tweets(topic, location_id, Count=100)
# First 10 tweets
df.head(10)
###Output
_____no_output_____
###Markdown
Tweet Processing
###Code
# Function to clean the tweets
stop_words = set(stopwords.words('english')) # Set of stopwords (words that don't carry meaningful information)
lemmatizer = WordNetLemmatizer() # Used to reduce inflected forms of a word to a common base form
def text_process(tweet):
processed_tweet = [] # To store processed text
tweet = tweet.lower() # Convert to lower case
tweet = re.sub(r'((www\.[\S]+)|(https?://[\S]+))', 'URL', tweet) # Replaces any URLs with the word URL
tweet = re.sub(r'@[\S]+', 'USER_MENTION', tweet) # Replace @handle with the word USER_MENTION
tweet = re.sub(r'#(\S+)', r' \1 ', tweet) # Removes # from hashtag
tweet = re.sub(r'\brt\b', '', tweet) # Remove RT (retweet)
tweet = re.sub(r'\.{2,}', ' ', tweet) # Replace 2+ dots with space
tweet = tweet.strip(' "\'') # Strip space, " and ' from tweet
tweet = re.sub(r'\s+', ' ', tweet) # Replace multiple spaces with a single space
words = tweet.split()
for word in words:
word = word.strip('\'"?!,.():;') # Remove Punctuations
word = re.sub(r'(.)\1+', r'\1\1', word) # Convert more than 2 letter repetitions to 2 letter (happppy -> happy)
word = re.sub(r'(-|\')', '', word) # Remove - & '
if (re.search(r'^[a-zA-Z][a-z0-9A-Z\._]*$', word) is not None): # Check if the word starts with an english letter
if(word not in stop_words): # Check if the word is a stopword.
word = str(lemmatizer.lemmatize(word)) # Lemmatize the word
processed_tweet.append(word)
return ' '.join(processed_tweet)
# Function to analyze sentiment
def analyze_sentiment(tweet):
analysis = TextBlob(tweet)
if analysis.sentiment.polarity > 0:
return 'Positive'
elif analysis.sentiment.polarity == 0:
return 'Neutral'
else:
return 'Negative'
# Function to clean tweets
df['clean_tweet'] = df['Tweet'].apply(lambda x : text_process(x))
df.head()
# Function to analyse the sentiments
df["Sentiment"] = df["Tweet"].apply(lambda x : analyze_sentiment(x))
df.head(5)
# Check the Summary of a random record
n = randrange(100)
print("Original tweet:\n",df['Tweet'][n])
print()
print("Clean tweet:\n",df['clean_tweet'][n])
print()
print("Sentiment of the tweet:\n",df['Sentiment'][n])
# Overall Summary
print("Total Tweets Extracted for Topic : {} are : {}".format(Topic,len(df.Tweet)))
print("Total Positive Tweets are : {}".format(len(df[df["Sentiment"]=="Positive"])))
print("Total Negative Tweets are : {}".format(len(df[df["Sentiment"]=="Negative"])))
print("Total Neutral Tweets are : {}".format(len(df[df["Sentiment"]=="Neutral"])))
df["Sentiment"].value_counts()
###Output
_____no_output_____
###Markdown
Visualisation
###Code
#Create a Bargraph for the sentiments of tweets
sns.countplot(df["Sentiment"],facecolor=(0, 0, 0, 0),linewidth=5,edgecolor=sns.color_palette("dark", 3))
sns.countplot(df["Sentiment"])
plt.title("Summary of Counts for Total tweets")
# Piechart
a=len(df[df["Sentiment"]=="Positive"])
b=len(df[df["Sentiment"]=="Negative"])
c=len(df[df["Sentiment"]=="Neutral"])
d=np.array([a,b,c])
explode = (0.1, 0.0, 0.1) # Used to offset (explode) slices of the pie chart
plt.pie(d,shadow=True,explode=explode,labels=["Positive","Negative","Neutral"],autopct='%1.2f%%');
###Output
_____no_output_____
###Markdown
Generate WordCloud
###Code
# Combine all individual sentences into a single text and create a WordCloud to see the common words in these tweets.
text = " ".join(review for review in df.clean_tweet)
print("There are {} words in the combination of all review.".format(len(text)))
# Create stopword list
stopwords = set(STOPWORDS)
stopwords.update(["USER_MENTION","URL"]) #To add any custom StopWords
# Generate a word cloud image
wordcloud = WordCloud(stopwords=stopwords,collocations=False, max_words=800,max_font_size=70).generate(text)
# Create the wordcloud
plt.figure(figsize=(12,8))
plt.imshow(wordcloud, interpolation='bilinear')
plt.title("The most frequently used words when searching for {}".format(Topic))
plt.axis("off")
plt.show()
###Output
There are 7611 characters in the combination of all reviews.
|
GP_experiments/marginal_lik_gps_rq.ipynb | ###Markdown
Prepare data
###Code
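# Editor's note: this excerpt relies on imports and an `ExactGPModel` class defined in
# earlier cells that are not shown here. The block below is a minimal, assumed sketch of
# those pieces (the standard GPyTorch exact-GP pattern); the choice of mean function
# (ZeroMean) is an assumption, not taken from the original notebook.
import numpy as np
import torch
import gpytorch
import matplotlib.pyplot as plt

class ExactGPModel(gpytorch.models.ExactGP):
    # Exact GP with a configurable kernel; works with train_x=train_y=None for prior sampling
    def __init__(self, train_x, train_y, likelihood, kernel):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ZeroMean()
        self.covar_module = kernel
    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(self.mean_module(x), self.covar_module(x))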
# x = torch.linspace(0., 1., 200)\
seed = 22
# true_nu = .5
x = np.linspace(0., 1., 200)
np.random.seed(seed)
torch.manual_seed(seed)
np.random.shuffle(x)
x = torch.from_numpy(x).float()
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RQKernel())
gp = ExactGPModel(
train_x=None,
train_y=None,
likelihood=gpytorch.likelihoods.GaussianLikelihood(), # p(y|f)
kernel=kernel
)
true_noise = 0.1
true_lengthscale = 0.3
true_alpha = .05
gp.covar_module.base_kernel.lengthscale = true_lengthscale
gp.covar_module.base_kernel.alpha = true_alpha
gp.eval()
f_preds = gp(x)
f = f_preds.rsample().detach()
y = f + true_noise * torch.randn(len(x))
plt.plot(x, y, "ro", ms=10, alpha=0.5)
plt.title("Test data")
split_n = 50
eval_x = x[split_n:]
x = x[:split_n]
eval_y = y[split_n:]
eval_f = f[split_n:]
y = y[:split_n]
order = np.argsort(eval_x)
eval_x = eval_x[order]
eval_f = eval_f[order]
eval_y = eval_y[order]
plt.plot(x, y, "ro", ms=10, alpha=0.5)
plt.plot(eval_x, eval_f)
plt.title("Train data")
# np.savez("plots/data/gp_rq/data.npz",
# x_train=x,
# y_train=y,
# x_test=eval_x,
# y_test=eval_y,
# f_test=eval_f)
x.shape
###Output
_____no_output_____
###Markdown
Error vs MLL vs CMLL
###Code
alpha_array = torch.linspace(0.0, 0.3, 300)
# true_noise = np.sqrt(0.03)
# true_noise = 0.03
true_noise = 0.1
# true_noise = 0.2
error_ll = []
for kernel_ll in alpha_array:
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RQKernel())
noise = torch.tensor([true_noise**2])
likelihood = gpytorch.likelihoods.GaussianLikelihood()
likelihood.noise_covar.noise = noise
gp = ExactGPModel(
train_x=x,#[:40],
train_y=y,#[:40],
likelihood=likelihood,
kernel=kernel
)
gp.covar_module.base_kernel.lengthscale = true_lengthscale
gp.covar_module.base_kernel.alpha = kernel_ll
gp.eval()
likelihood.eval()
f_preds = gp(eval_x)
y_preds = likelihood(f_preds)
error = y_preds.log_prob(eval_y)
error_ll.append(error.item())
plt.plot(alpha_array, error_ll)
print(alpha_array[np.argmax(error_ll)])
def get_log_mll(gp, x, y):
N = len(x)
covar_matrix = gp.covar_module(x,x).evaluate()
covar_matrix += gp.likelihood.noise * torch.eye(N)
log_mll = - 0.5 * (y.T @ torch.inverse(covar_matrix)) @ y
log_mll += - 0.5 * torch.logdet(covar_matrix)
log_mll += - 0.5 * N * np.log(2 * np.pi)
return log_mll
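# (Added note) get_log_mll computes the exact GP log marginal likelihood
#   log p(y | X) = -1/2 y^T (K + sigma^2 I)^{-1} y - 1/2 log|K + sigma^2 I| - (N/2) log(2*pi).
# The "conditional" marginal log-likelihood (CMLL) used further below is the difference of two
# such terms, log p(D | D_m) = log p(D) - log p(D_m), as computed in the loop over random orderings.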
ml_ll = []
for kernel_ll in alpha_array:
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RQKernel())
noise = torch.tensor([true_noise**2])
likelihood = gpytorch.likelihoods.GaussianLikelihood()
likelihood.noise_covar.noise = noise
gp = ExactGPModel(
train_x=x,#[:40],
train_y=y,#[:40],
likelihood=likelihood,
kernel=kernel
)
gp.covar_module.base_kernel.lengthscale = true_lengthscale
gp.covar_module.base_kernel.alpha = kernel_ll
gp.eval()
likelihood.eval()
ml_ll.append(get_log_mll(gp, x, y).item())
plt.plot(alpha_array, ml_ll)
print(alpha_array[np.argmax(ml_ll)])
all_cml_ll = []
all_cml_error = []
m = 45
# m = 20
n_orders = 20
for _ in range(n_orders):
cml_ll = []
cml_error = []
order = np.arange(len(x))
np.random.shuffle(order)
xm, ym = x[order[:m]], y[order[:m]]
x_, y_ = x[order[m:]], y[order[m:]]
for kernel_ll in alpha_array:
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RQKernel())
noise = torch.tensor([true_noise**2])
likelihood = gpytorch.likelihoods.GaussianLikelihood()
likelihood.noise_covar.noise = noise
gp = ExactGPModel(
train_x=xm,
train_y=ym,
likelihood=likelihood,
kernel=kernel
)
gp.covar_module.base_kernel.lengthscale = true_lengthscale
gp.covar_module.base_kernel.alpha = kernel_ll
gp.eval()
likelihood.eval()
cml_ll.append((get_log_mll(gp, x, y) - get_log_mll(gp, xm, ym)).item())
all_cml_ll.append(cml_ll)
all_cml_ll = np.array(all_cml_ll)
plt.plot(alpha_array, all_cml_ll.mean(0))
print(alpha_array[np.argmax(all_cml_ll.mean(0))])
for i in range(n_orders):
plt.plot(alpha_array, all_cml_ll[i])
# plt.plot(lengthscale_array, all_cml_ll[1])
# plt.plot(lengthscale_array, all_cml_ll[2])
# plt.plot(lengthscale_array, all_cml_ll[4])
# plt.plot(lengthscale_array, all_cml_ll[5])
def rescale(lst):
lst = np.array(lst)
return (lst - np.min(lst)) / (np.max(lst - np.min(lst)))
plt.plot(alpha_array, rescale(all_cml_ll.mean(0)), label="Conditional ML")
plt.plot(alpha_array, rescale(error_ll), "--k", label="Test Likelihood")
plt.plot(alpha_array, rescale(ml_ll), label="ML")
plt.legend()
plt.xlabel(r"RQ kernel, $\alpha$")
np.savez("plots/data/gp_rq/alpha_optimization_small_noise.npz",
# np.savez("plots/data/gp_rq/alpha_optimization_big_noise.npz",
alpha=alpha_array,
cmll=all_cml_ll.mean(0),
mll=ml_ll,
test_likelihood=error_ll)
i = np.argmax(all_cml_ll.mean(0))
alpha_cmll = alpha_array[i]
print("alpha: {} \t cmll: {} \t mll: {}".format(alpha_cmll.item(), all_cml_ll.mean(0)[i], ml_ll[i]))
# all_cml_ll.mean(0)
i = np.argmax(ml_ll)
alpha_cmll = alpha_array[i]
print("alpha: {} \t cmll: {} \t mll: {}".format(alpha_cmll.item(), all_cml_ll.mean(0)[i], ml_ll[i]))
# all_cml_ll.mean(0)
###Output
alpha: 0.0010033445432782173 cmll: 1.9466287612915039 mll: 16.170513153076172
###Markdown
How does MLL grow with observed data
###Code
import tqdm
true_noise = 0.2
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RQKernel())
noise = torch.tensor([true_noise**2])
likelihood = gpytorch.likelihoods.GaussianLikelihood()
likelihood.noise_covar.noise = noise
gp = ExactGPModel(
train_x=x,
train_y=y,
likelihood=likelihood,
kernel=kernel
)
gp.covar_module.base_kernel.lengthscale = true_lengthscale
gp.covar_module.base_kernel.alpha = 0.01
x.shape
x_combined = torch.cat([x, eval_x])
y_combined = torch.cat([y, eval_y])
alpha_lst = [0.001, 0.01, 0.1, 0.3]
mlls_data = {alpha: np.zeros((len(x_combined),)) for alpha in alpha_lst}
n_orders = 100
for _ in tqdm.tqdm(range(n_orders)):
cml_ll = []
cml_error = []
order = np.arange(len(x_combined))
np.random.shuffle(order)
x_, y_ = x_combined[order], y_combined[order]
# print(x_.shape)
for alpha in alpha_lst:
gp.covar_module.base_kernel.alpha = alpha
mlls = np.array([get_log_mll(gp, x_[:i], y_[:i]).item() for i in range(len(x_combined))])
mlls_data[alpha] += mlls / n_orders
plt.figure(figsize=(15, 7))
plt.grid()
for alpha in alpha_lst:
plt.plot(mlls_data[alpha][1:] - mlls_data[alpha][:-1], label=r"$\alpha={}$".format(alpha))
# mlls_data[-1]
plt.legend(fontsize=16)
plt.xlabel(r"$m$", fontsize=16)
plt.ylabel(r"$\log ~p(D_{m+1} \vert D_{ \leq m})$", fontsize=16)
plt.title("Rational quadratic kernel", fontsize=20)
mlls_data.keys()
np.savez("plots/data/gp_rq/learning_curve.npz",
mlls_0001=mlls_data[0.001],
mlls_001=mlls_data[0.01],
mlls_01=mlls_data[0.1],
mlls_03=mlls_data[0.3],
)
plt.figure(figsize=(10, 7))
plt.grid()
for alpha in alpha_lst:
plt.plot((mlls_data[alpha][1:] - mlls_data[alpha][:-1]), label=r"$\alpha={}$".format(alpha))
# mlls_data[-1]
plt.xlim(40, 48)
plt.ylim(0.3, 0.45)
plt.legend(fontsize=16)
plt.xlabel(r"$m$", fontsize=16)
plt.ylabel(r"$\log ~p(D_{m+1} \vert D_{ \leq m})$", fontsize=16)
plt.title("Rational quadratic kernel", fontsize=20)
# (Note) These two scratch lines predate mlls_data becoming a dict keyed by alpha and would now fail:
# plt.plot(mlls_data[1:] - mlls_data[:-1])
# mlls_data[-1]
get_log_mll(gp, x[:1], y[:1])
get_log_mll(gp, x[:2], y[:2]) - get_log_mll(gp, x[:1], y[:1])
###Output
_____no_output_____
###Markdown
How does the fit look?
###Code
true_noise = 0.2
# true_noise = 0.03
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RQKernel())
noise = torch.tensor([true_noise**2])
likelihood = gpytorch.likelihoods.GaussianLikelihood()
likelihood.noise_covar.noise = noise
gp = ExactGPModel(
train_x=x,#[:40],
train_y=y,#[:40],
likelihood=likelihood,
kernel=kernel
)
gp.covar_module.base_kernel.lengthscale = true_lengthscale
# gp.covar_module.base_kernel.alpha = 0.001
gp.covar_module.base_kernel.alpha = 0.19
gp.eval()
likelihood.eval()
f_preds = gp(eval_x)
f_preds.stddev
y_preds = likelihood(f_preds)
pred_means = f_preds.mean
pred_std = y_preds.stddev
# pred_std = f_preds.stddev
order = torch.argsort(eval_x)
plt.plot(eval_x[order].detach().numpy(), pred_means[order].detach().numpy())
plt.fill_between(eval_x[order].detach().numpy(),
pred_means[order].detach().numpy() + 2 * pred_std[order].detach().numpy(),
pred_means[order].detach().numpy() - 2 * pred_std[order].detach().numpy(),
color="b", alpha=0.2)
plt.plot(eval_x.detach().numpy(), eval_y.detach().numpy(), "bo")
plt.plot(x.detach().numpy(), y.detach().numpy(), "ro")
np.savez("plots/data/gp_rq/big_noise_cmll_fit.npz",
# np.savez("plots/data/gp_rq/big_noise_mll_fit.npz",
x_train=x.detach().numpy(),
y_train=y.detach().numpy(),
x_test=eval_x[order].detach().numpy(),
y_test=eval_y[order].detach().numpy(),
f_test=eval_f[order].detach().numpy(),
pred_mu=pred_means[order].detach().numpy(),
pred_sigma=pred_std[order].detach().numpy()
)
###Output
_____no_output_____ |
docs/r2/migrate_tf2.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Migrating tf.summary usage to TF 2.0

> Note: This doc is for people who are already familiar with TensorFlow 1.x TensorBoard and who want to migrate large TensorFlow code bases from TensorFlow 1.x to 2.0. If you're new to TensorBoard, see the [get started](get_started.ipynb) doc instead. If you are using `tf.keras` there may be no action you need to take to upgrade to TensorFlow 2.0.
###Code
!pip install tf-nightly-2.0-preview
import tensorflow as tf
###Output
_____no_output_____
###Markdown
TensorFlow 2.0 includes significant changes to the `tf.summary` API used to write summary data for visualization in TensorBoard.

What's changed

It's useful to think of the `tf.summary` API as two sub-APIs:
- A set of ops for recording individual summaries - `summary.scalar()`, `summary.histogram()`, `summary.image()`, `summary.audio()`, and `summary.text()` - which are called inline from your model code.
- Writing logic that collects these individual summaries and writes them to a specially formatted log file (which TensorBoard then reads to generate visualizations).

In TF 1.x

The two halves had to be manually wired together - by fetching the summary op outputs via `Session.run()` and calling `FileWriter.add_summary(output, step)`. The `v1.summary.merge_all()` op made this easier by using a graph collection to aggregate all summary op outputs, but this approach still worked poorly for eager execution and control flow, making it especially ill-suited for TF 2.0.

In TF 2.X

The two halves are tightly integrated, and now individual `tf.summary` ops write their data immediately when executed. Using the API from your model code should still look familiar, but it's now friendly to eager execution while remaining graph-mode compatible. Integrating both halves of the API means the `summary.FileWriter` is now part of the TensorFlow execution context and gets accessed directly by `tf.summary` ops, so configuring writers is the main part that looks different.

Example usage with eager execution, the default in TF 2.0:
###Code
writer = tf.summary.create_file_writer("/tmp/mylogs/eager")
with writer.as_default():
for step in range(100):
# other model code would go here
tf.summary.scalar("my_metric", 0.5, step=step)
writer.flush()
ls /tmp/mylogs/eager
###Output
_____no_output_____
###Markdown
Example usage with tf.function graph execution:
###Code
writer = tf.summary.create_file_writer("/tmp/mylogs/tf_function")
@tf.function
def my_func(step):
with writer.as_default():
# other model code would go here
tf.summary.scalar("my_metric", 0.5, step=step)
for step in range(100):
my_func(step)
writer.flush()
ls /tmp/mylogs/tf_function
###Output
_____no_output_____
###Markdown
Example usage with legacy TF 1.x graph execution:
###Code
g = tf.compat.v1.Graph()
with g.as_default():
step = tf.Variable(0, dtype=tf.int64)
step_update = step.assign_add(1)
writer = tf.summary.create_file_writer("/tmp/mylogs/session")
with writer.as_default():
tf.summary.scalar("my_metric", 0.5, step=step)
all_summary_ops = tf.compat.v1.summary.all_v2_summary_ops()
writer_flush = writer.flush()
with tf.compat.v1.Session(graph=g) as sess:
sess.run([writer.init(), step.initializer])
for i in range(100):
sess.run(all_summary_ops)
sess.run(step_update)
sess.run(writer_flush)
ls /tmp/mylogs/session
###Output
_____no_output_____ |
0_developed_draft/file_summary_mwt_chor_outputs.ipynb | ###Markdown
Get files under all MWT folders in dir_drive

Example path: /Volumes/COBOLT/MWT/20190418X_XX_100s30x10s10s_slo1/VG903_400mM/20190418_141335/VG903_OH_15x3_t96h20C_100s30x10s10s_A_0418_jv410014.png

Parse the file names into:
* extension
* filename prefix
* filename suffix (e.g. shanespark)
* mwt name
* group name
* expname
  * exp date
  * tracker
  * experimenter
  * exp condition
    * pre-plate
    * taps
    * ISI
    * post-tap
  * exp name tag
* MWT DB source (e.g. MWT, MWT bad)

(A sketch of parsing these expname components appears further down, after the suffix extraction.)
###Code
# get all unique suffix from all file paths
# load all file paths
filename = os.path.join(dir_save,'allfilepaths.pickle')
with open(filename,'rb') as f:
filepaths = pickle.load(f)
# make it into dataframe
df_filepaths = pd.DataFrame(filepaths,columns=['filepath'])
# transform files into path obj
file_path_obj = list(map(PurePath,df_filepaths['filepath']))
# get MWT path
df_filepaths['mwtpath'] = np.array(list(map(lambda x: os.path.dirname(x), df_filepaths['filepath'])))
# get suffix
df_filepaths['suffix'] = np.array(list(map(lambda x: x.suffix, file_path_obj)))
df_filenames = pd.DataFrame(list(map(lambda x: x.name, file_path_obj)),
columns=['filenames'])
# get .dat
a = df_filenames['filenames'].str.extract(r'(?<=[.])([A-z]{1,}\d{0,}[.]dat)')
ind_datext = ~a.iloc[:,0].isna()
df_filepaths.loc[ind_datext, 'suffix'] = a.loc[ind_datext,0]
# get xx.000.dat
a = df_filenames['filenames'].str.extract(r'(?<=[.])([A-z]{1,}[.]\d{1,}[.]dat)')
b = a.iloc[:,0].str.replace(r'(?<=[.])(\d{3,})(?=[.]dat)','id')
ind_datext = ~b.isna()
df_filepaths.loc[ind_datext, 'suffix'] = b[ind_datext]
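# (Added sketch) The markdown above also lists parsing the experiment-name components
# (exp date, tracker, experimenter, exp condition, exp name tag). One hedged way to derive
# them from mwtpath, kept in a separate frame so the suffix counting below is unchanged.
# This assumes folders follow the example layout .../<expname>/<group>/<mwt>/<file> and that
# expname looks like "20190418X_XX_100s30x10s10s_slo1"; the regex below is illustrative only.
df_name_parts = pd.DataFrame({'mwtpath': df_filepaths['mwtpath'].unique()})
df_name_parts['mwtname'] = df_name_parts['mwtpath'].map(lambda p: PurePath(p).name)
df_name_parts['groupname'] = df_name_parts['mwtpath'].map(lambda p: PurePath(p).parent.name)
df_name_parts['expname'] = df_name_parts['mwtpath'].map(lambda p: PurePath(p).parent.parent.name)
exp_parts = df_name_parts['expname'].str.extract(r'^(\d{8})([A-Z]?)_([^_]*)_([^_]*)_?(.*)$')
exp_parts.columns = ['expdate', 'tracker', 'experimenter', 'exp_condition', 'exp_tag']
df_name_parts = pd.concat([df_name_parts, exp_parts], axis=1)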
# count suffix by plate
df = df_filepaths.groupby(['mwtpath', 'suffix']).count()
df_suffix_count = df.unstack(level=-1)
df_suffix_count
# save file dataframe
path_save = os.path.join(dir_save, 'file_summary_mwt.pickle')
with open(path_save,'wb') as f:
pickle.dump(df_suffix_count,f)
print('complete')
###Output
complete
|
.ipynb_checkpoints/Compare Product Prices across Ecommerce Sites-checkpoint.ipynb | ###Markdown
Compare Product Prices across Ecommerce Sites

In this project we will help the user to compare prices across the different ecommerce sites, using the top 4 ecommerce sites available:
- [Shopee Singapore](www.shopee.sg)
- [Carousell Singapore](https://sg.carousell.com/)
- [Qoo10 Singapore](https://www.qoo10.sg/?__ar=Y)
- [Lazada Singapore](https://www.lazada.sg/)
###Code
#!pip install tabulate #Use this to get tabulate code to display the data nicely
#pip install tagui #Use this to install tagui if first time user
import tagui as t
class Product(object):
productName = ""
price=""
def make_product(productName, price):
product = Product()
product.productName = productName
product.price = price
return product
productSearch = input("Enter name of Product: ")
shopee = list()
carousell = list()
qoo10 = list()
lazada = list()
# Define the search URLs for the four ecommerce sites, starting with Shopee
urlFirst = 'https://www.shopee.sg/'
urlSecond = 'https://sg.carousell.com/'
urlThird = 'https://www.qoo10.sg/?__ar=Y'
urlFourth = 'https://www.lazada.sg/'
t.init(visual_automation = True, chrome_browser = True)
t.url(urlFirst)
t.type('/html/body/div[1]/div/div[2]/div[1]/div/div[2]/div/div[1]/div[1]/div/form/input',productSearch)
t.click('/html/body/div[1]/div/div[2]/div[1]/div/div[2]/div/div[1]/div[1]/button')
t.click('/html/body/div[1]/div/div[2]/div[1]/div/div[2]/div/div[1]/div[1]/button')
price1 = t.read('/html/body/div[1]/div/div[2]/div[3]/div[2]/div[2]/div[2]/div[1]/div/a/div/div[2]/div[2]/div')
price2 = t.read('/html/body/div[1]/div/div[2]/div[3]/div[2]/div[2]/div[2]/div[2]/div/a/div/div[2]/div[2]/div')
price3 = t.read('/html/body/div[1]/div/div[2]/div[3]/div[2]/div[2]/div[2]/div[3]/div/a/div/div[2]/div[2]/div')
name1 = t.read('/html/body/div[1]/div/div[2]/div[3]/div[2]/div[2]/div[2]/div[1]/div/a/div/div[2]/div[1]/div')
name2 = t.read('/html/body/div[1]/div/div[2]/div[3]/div[2]/div[2]/div[2]/div[2]/div/a/div/div[2]/div[1]/div')
name3 = t.read('/html/body/div[1]/div/div[2]/div[3]/div[2]/div[2]/div[2]/div[3]/div/a/div/div[2]/div[1]/div')
p1 = make_product(name1, price1)
p2 = make_product(name2, price2)
p3 = make_product(name3, price3)
shopee.append(p1)
shopee.append(p2)
shopee.append(p3)
#Carousell
t.url(urlSecond)
t.type('/html/body/div[1]/div/div[1]/div[1]/div[1]/div/form/div[3]/div/div/input',productSearch)
t.click('/html/body/div[1]/div/div[1]/div[1]/div[1]/div/form/button')
price1 = t.read('/html/body/div[1]/div/div[3]/div/div[2]/main/div/div/div[1]/div[1]/a[2]/p[2]')
price2 = t.read('/html/body/div[1]/div/div[3]/div/div[2]/main/div/div/div[2]/div[1]/a[2]/p[2]')
price3 = t.read('/html/body/div[1]/div/div[3]/div/div[2]/main/div/div/div[3]/div[1]/a[2]/p[2]')
name1 = t.read('/html/body/div[1]/div/div[3]/div/div[2]/main/div/div/div[1]/div[1]/a[2]/p[1]')
name2 = t.read('/html/body/div[1]/div/div[3]/div/div[2]/main/div/div/div[2]/div[1]/a[2]/p[1]')
name3 = t.read('/html/body/div[1]/div/div[3]/div/div[2]/main/div/div/div[3]/div[1]/a[2]/p[1]')
p1 = make_product(name1, price1)
p2 = make_product(name2, price2)
p3 = make_product(name3, price3)
carousell.append(p1)
carousell.append(p2)
carousell.append(p3)
#Qoo10
t.url(urlThird)
t.type('/html/body/div[2]/div[2]/div[2]/div/form/fieldset/input[1]',productSearch)
t.click('/html/body/div[2]/div[2]/div[2]/div/form/fieldset/button')
price1 = t.read('/html/body/div[3]/div[3]/div[1]/div/div[1]/form/div[4]/table/tbody/tr[1]/td[3]/div/strong')
price2 = t.read('/html/body/div[3]/div[3]/div[1]/div/div[1]/form/div[4]/table/tbody/tr[2]/td[3]/div/strong')
price3 = t.read('/html/body/div[3]/div[3]/div[1]/div/div[1]/form/div[4]/table/tbody/tr[3]/td[3]/div/strong')
name1 = t.read('/html/body/div[3]/div[3]/div[1]/div/div[1]/form/div[4]/table/tbody/tr[1]/td[2]/div/div[1]/a')
name2 = t.read('/html/body/div[3]/div[3]/div[1]/div/div[1]/form/div[4]/table/tbody/tr[2]/td[2]/div/div[1]/a')
name3 = t.read('/html/body/div[3]/div[3]/div[1]/div/div[1]/form/div[4]/table/tbody/tr[3]/td[2]/div/div[1]/a')
p1 = make_product(name1, price1)
p2 = make_product(name2, price2)
p3 = make_product(name3, price3)
qoo10.append(p1)
qoo10.append(p2)
qoo10.append(p3)
#Lazada
t.url(urlFourth)
t.type('/html/body/div[2]/div/div[1]/div/div/div[2]/div/div[2]/form/div/div[1]/input[1]',productSearch)
t.click('/html/body/div[2]/div/div[1]/div/div/div[2]/div/div[2]/form/div/div[2]/button')
price1 = t.read('/html/body/div[3]/div/div[2]/div[1]/div/div[1]/div[3]/div[1]/div/div/div[2]/div[3]/span')
price2 = t.read('/html/body/div[3]/div/div[2]/div[1]/div/div[1]/div[3]/div[2]/div/div/div[2]/div[3]/span')
price3 = t.read('/html/body/div[3]/div/div[2]/div[1]/div/div[1]/div[3]/div[3]/div/div/div[2]/div[3]/span')
name1 = t.read('/html/body/div[3]/div/div[2]/div[1]/div/div[1]/div[3]/div[1]/div/div/div[2]/div[2]/a')
name2 = t.read('/html/body/div[3]/div/div[2]/div[1]/div/div[1]/div[3]/div[2]/div/div/div[2]/div[2]/a')
name3 = t.read('/html/body/div[3]/div/div[2]/div[1]/div/div[1]/div[3]/div[3]/div/div/div[2]/div[2]/a')
p1 = make_product(name1, price1)
p2 = make_product(name2, price2)
p3 = make_product(name3, price3)
lazada.append(p1)
lazada.append(p2)
lazada.append(p3)
from IPython.display import HTML, display
import tabulate
product = list()
table = list()
for i in carousell:
product = [i.productName, i.price,"Carousell"]
table.append(product)
for i in shopee:
product = [i.productName, i.price, "Shopee"]
table.append(product)
for i in qoo10:
product = [i.productName, i.price,"Qoo10"]
table.append(product)
for i in lazada:
product = [i.productName, i.price,"Lazada"]
table.append(product)
display(HTML(tabulate.tabulate(table, tablefmt='html')))
t.close()
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to Hide/Show the raw code."></form>''')
###Output
_____no_output_____ |
Task 6_ Classification/Classification.ipynb | ###Markdown
The above data shows the coefficients from our model. Each coefficient is the change in log(odds) of the response variable being a "1" for a one-unit change in that predictor variable. A positive coefficient pushes the log(odds) up, a negative coefficient pushes it down, and the absolute value of the coefficient tells us how strong its effect is.

Note that we have not accounted for the units of the columns, so the predictors are on different scales: est_inc_USD is measured in dollars, for example, while hhold_oldest is a plain numeric age. Coefficient magnitudes are therefore not directly comparable across predictors.

Variables associated with a greater likelihood of "entertain" as the primary purpose of past visits are hhold_car_Sedan, homeState_New York, homeState_Connecticut, hhold_pax, and hhold_field_Services. These variables plausibly influence the outcome because they are all associated with some form of entertainment - a preference for a sedan as the primary vehicle, being located in New York or Connecticut, and so on. It is interesting that the occupation variable influencing 'entertain' is the one where the primary wage earner works in the Service sector. Also, if hhold_pax is higher, people are more likely to visit for entertainment, perhaps because those households have kids and larger families. Variables like owning an SUV, living in states that are not accessible to Lobsterland by car, and having older household members have less influence on 'entertain'.
###Code
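# (Added sketch) To make the log-odds coefficients discussed above easier to read, they can
# be exponentiated into odds ratios. This assumes the fitted model is named `logmodel` and
# that `X_train` is a DataFrame whose columns are the dummy-coded predictors listed above.
import numpy as np
for name, coef in sorted(zip(X_train.columns, logmodel.coef_[0]), key=lambda t: -t[1]):
    print("{}: coefficient = {:.3f}, odds ratio = {:.3f}".format(name, coef, np.exp(coef)))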
pred_train = logmodel.predict(X_train)
from sklearn.metrics import accuracy_score
accuracy_score(y_train, pred_train)
pred_test = logmodel.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, pred_test)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix
mat = confusion_matrix(pred_test, y_test)
sns.heatmap(mat, square=True, annot=True, cbar=False)
plt.xlabel("Actual Result")
plt.ylabel("Predicted Result")
a, b = plt.ylim()
a += 0.5
b -= 0.5
plt.ylim(a, b)
plt.show()
print(mat)
pred_train = logmodel.predict(X_train)
from sklearn.metrics import accuracy_score
accuracy_score(y_train, pred_train)
pred_test = logmodel.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, pred_test)
###Output
_____no_output_____ |
crawl/pyquery_ops.ipynb | ###Markdown
pyquery

pyquery allows you to make jquery queries on xml documents. The API is as similar to jquery as possible. pyquery uses lxml for fast xml and html manipulation.

https://pythonhosted.org/pyquery/
https://pythonhosted.org/pyquery/api.html
###Code
#!pip install -U pyquery
!pip freeze | grep pyquery
from pyquery import PyQuery as pq
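# A minimal sketch of a jquery-style query with pyquery (the HTML snippet below is illustrative):
d = pq("<div><p class='hello'>Hi</p><p>there</p></div>")
print(d('p.hello').text())  # -> Hi
print(d('p').eq(1).text())  # -> there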
###Output
_____no_output_____
###Markdown
Manipulating the HTML DOM tree with pyquery

Using pyquery to generate HTML: it can be done, but it is not very elegant...
###Code
html = """
<html ng-controller="JumperController" ng-strict-di="" class="ng-scope" ng-app="jumper" dir="ltr" lang="en">
<body dir="ltr" class="theme-for-videopage -mouse -should -not -have -outline">
<!-- ngIf: loadPage --><div ng-if="loadPage" class="ng-scope" style="">
<top-nav class="ng-isolate-scope"><topbar class="ng-isolate-scope"><!-- ngIf: $ctrl.shouldShowTopbar --><div class="c-bapi-header topbar topbar-v2 f-authenticated ng-scope" ng-if="$ctrl.shouldShowTopbar"> <header class="stream-office-top-nav" id="stream-office-top-nav"> <div id="O365_NavHeader" class="_48b3a6747e9ecbeb6809a478a379f572.scss o365cs-base o365sx-navbar o365sx-search" role="banner" style="height: 48px; line-height: 48px;"><div id="O365_HeaderLeftRegion" class="_23a17bfeb92ba0b654edfc6cab03ed3a.scss"><div class="_39d41f3de648b4f3498e1a38d97fe686.scss"><button id="O365_MainLink_NavMenu" class="_0299ff19e67bde1ea0ef4995f53311be.scss o365sx-button o365sx-waffle" aria-haspopup="true" aria-expanded="false" type="button" role="button" aria-label="App launcher" title="App launcher"><span class="ms-Icon--WaffleOffice365 ms-icon-font-size-16" style="display: inline-block; font-size: 16px;" role="presentation"></span></button></div></div><div id="CenterRegion" class="_907346bfcf1b63cbf1f8121fb205a803.scss" style="width: calc(100% - 96px);"><div style="width: 100%;"><div style="display: block; position: relative;"><div style="" data-automation-id="visibleContent"><div id="topLevelRegion" class="c1b953e95a09350215e599ea5f933e20.scss"><div id="leftRegion" class="c1b953e95a09350215e599ea5f933e20.scss" style="justify-content: flex-start;"><div id="O365_MainLink_TenantLogo_container" class="dc7ed9a5e04b6acb575f721fa13b8c10.scss"><div class="_2d0fa5f18ae6de3c95eac48b6eb50909.scss o365sx-primary-font"><a id="O365_MainLink_TenantLogo" class="fd3e30cd539c12fb6e85a2c7f74d46bb.scss" title="PROD - sensata.com Azure AD (sso.sensata.com) (Not O365)" aria-label="PROD - sensata.com Azure AD (sso.sensata.com) (Not O365)" href="https://www.office.com/?auth=2&home=1&username=amber-yan%40sensata.com&from=ShellLogo"><img id="O365_MainLink_TenantLogoImg" alt="PROD - sensata.com Azure AD (sso.sensata.com) (Not O365)" title="PROD - sensata.com Azure AD (sso.sensata.com) (Not O365)" 
src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMgAAAAyCAYAAAAZUZThAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAxdpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuNi1jMDY3IDc5LjE1Nzc0NywgMjAxNS8wMy8zMC0yMzo0MDo0MiAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOkluc3RhbmNlSUQ9InhtcC5paWQ6NjQ4NzRCNzczMDgzMTFFNTkyQ0NDRTI0Njc0Q0I3MjgiIHhtcE1NOkRvY3VtZW50SUQ9InhtcC5kaWQ6NjQ4NzRCNzgzMDgzMTFFNTkyQ0NDRTI0Njc0Q0I3MjgiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgSWxsdXN0cmF0b3IgQ1MzIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InV1aWQ6OERBNzY5NDdCRjY3REQxMUE0QzQ5NEE4QUM3OTRCRTIiIHN0UmVmOmRvY3VtZW50SUQ9InhtcC5kaWQ6NjQ4NzRCNzYzMDgzMTFFNTkyQ0NDRTI0Njc0Q0I3MjgiLz4gPC9yZGY6RGVzY3JpcHRpb24+IDwvcmRmOlJERj4gPC94OnhtcG1ldGE+IDw/eHBhY2tldCBlbmQ9InIiPz7lGcoBAAALsElEQVR42uxcCZBU1RX9s8iMLCPDsMmAAcQAwgyLGEEMBAdLUdDIJoulwIyICJKlkgJTGgwCGmNJpABFhxAwBEEoKhIiojFoBAWxFGVTZwIiYQkoCEFlm9xrn1dcXr3+3f93Tw8Z76k69bvf+rf73n333vfTKioqPIVC4Ua63gKFQgVEoVABUShUQBQKFRCFQgVEoVABUShUQBSKaorMGPltiX2JB4l/8ClXRLyC+CFxVchzuZtYh/gCcadIv57YkfgecXUC18rtdCC+S3wlQL0ridcSdxEXJ9D/CGJ94griJzHKfp94M/EAcUECffbH8/sB8RIie4U3El8mPkf8t4pADLAnPQo7VZyL54kZjnLFVrlxPm1G40pR/zixFdJHW22PCtE28xdWO6PjrFdk1XskZP9LRRsnid19yl5t9fliiP4aENdU+IPv880hr+f/gT2I9xLHEGuFbccvc77jpl7lKHfCKnMg4EkUOfqZgrxPE2ybmeZo/7M466521L0gYP+XOtp4yaf8Mkf5dgH73Czq7ic+Q5xMnEX8xGq7STUVkOXiGr8Xth0/FWuPI+2wI+1fUAkMDgScxA460vbiuI/YTKSHVQn2ExtZ/+OB3d8xqClB8F9H2qcB+jT9xos+xAL8fot4naP+EuIg/J5EHF8NlaPPjJJEPFIZKtZFxHeFFN4VpdzlxM/FtN01hKT+TPSzlpiN9M7Eo0j/klgYciTohvqMI2g3nnr5xHJxbn1D9j9WtLGemOdTtj5xgyg/IWBfj4q61/uUk8h25F9CHAa19qaA59AT96p9jHI1iLegj9uJbeJouwvxDtTpT7w4SrmZuLZTuKeuMrWgZnJbI6GWxa1iGfJNviIOnfcGYtMEpsRuPg+0WRKm3Itxjo0C1stGvbYJ9p8T8Dr4XnQM0c808eJP9Sk3gvgkXqTa1n1a4lDzPoagyzb4vdgGXglBKrfqrY3y/jxE3OPoZw3assv3wuBi42vi02KdMQpq5XFR5j9IKxDtTUW6jQ+IJbaA8Ig6FPpyMvXATOLgECOQZB5elLw4y99IHFhFem86rrefI695AjOgF1CwJB4mtggwY8qX5gjWaxIzRfnelgAZfGPVOWON4o9ZfWy06ldAMzHl+1h5vI7aAoOHwQaUHeNjmDACssgymmxxCPYQIyA3WRfSOUkP6kJrsTgzRBu9rJPuEaP8YuuG5aRQOLIslXSuyPuNSC9PwcL4A8fLsRovT4FPvbdE+enEOjBysEVzl8jrJCxFEhuhCeRgUNsm8opRp65Ie41YM4oq+qZIl0JbJNIbEv9hCRUPUvWI84SKVSAG2CJrtpIaRT/MSIw3jIAcsy7y1SQ9pPGOhxRUTXnPqr81hopm4/4UCsjdjv7rR7GiLazkc6lrmZZtvA9LYUNRp73In+dosyUGUMazSLtG1NkR45lMF2sIg9mOOguIr4i8dDz3fcTHHeW7i/ZuEOmPCwHJEukjoW7twTrLbu9N1NtkrFi1rHV77SRZEeo40moEbKN2HG0aZMd5DpUF133LiFI2p5LP5TCsVD2JtxFvJTYW+YXgeDh5NxFvFPmzHW2Ww8F5GbEb0r4S+Qt9LEmMmjjuEmn34L79Fc7bQ8Q7bDsSzvWUw8nNjt9fijRZ5kLxu76wyi6M4vSuCwtga/w/baxYJdbo0idJo1hjaxG2LEQbQ61zGxxjvbNWlN0ZZYSoLNa3/DbSwVcq0g8LFSVVzMZ64QHLQmbOx8Nsa1BG/AjrAsPtIv8r1Gkn0n7q6Le1yJ8h0p9wzGpsrfw78VdYr7ksXsVwWG+PMjNK9Wu2mEHyHe11gEFjDWYnG2/LRXon6F+XJfnB5MBrPSxB69ZoHOMp3xvrqswqWKRn4T5e58i7FdfRqBL7T4MjM5Yzc7L1MnSB9ScWTkNHL0M7BSJvoqOfNiL/CSuP3Qav+/Q1wCprLwVOIgJjaUABqRnFGbsb68atLgE5H5iboJlYGRkU2CKzF3p8vFECg2D5k/6TXMyKhrmONn6YgIBI694YvLSHRfkTjkU9L9aHQzNId6xB4hEQaUFbgXWUvLZVUkDSod+VQdd7x/JcpwojoZsy/+yju8fC/fAasy76dALthEUpdFf2nt9XBfeRrzsP6w1eWzT1Kfux5e3fKv7zuvQLRDkY8v9roKdfJdYHQdGD+AA8+OkITH2KOIDYAl5+xgW4lg6i7lDinxCJcEasp2LhjPjdF8fNxB8T/4lrM2hte9J3W1PN8hSPem0c0909IdopdLQzLoXXMdTRf6sqmEWko3AzRmCXr0T6KvIt73q5ZfmxfRErHKN3vDPIVJF2X4zA0lrwR/hZQaX/Qi4R5gq1UJbfJyIa7LaGWA7O
byW4qSPUOpVo4UhrHaKdZklqJyxc961xFcwiM8RvjsnaQZxJLCEWY2Z9SVgUFwoLz+/FM+ERdjRG3EkI0zf4eQwrnR/mi9/TYUnjGakrcSLxEeSVYWY7JMrPI3bHNoxhsKrJ92cUsbl1bvyOzyFOwP8NOHaFxtGe2IX4a2gvBq2+3SJhObEYd1aB93ldgtGrJq5mp+X0LEzhdbS0ruFDxyicKnZ1aAYuLHHUfSFGnf5RvPa/i2IpMlgk0ifG6OOMCE/JhN8mGqZY/+eg3mBH2UaO51RhzYzSy16RMXny5NegN++F/bs01VtSILm5xO2Q9I0h2jlJfB768zbinRgFU4UvsFmM7e/ricMDRuEmO5K1FL6HHPiDTOQ2j8jriI9ixPYckb5n8DzyMBJz9PNK4hDi65ZfqwnxI+KLeH62r6I58ldh05sHvX8nzitX+Cx2odztoiyfyyL0k4/newqz4ABsYuN+GhCPYkbk574F71YN+IW2o0453i/25zTE9fEMOhcz7Eps/kvjGSpNPz36nUADOMJO4WU/HqBeDWxhOFlJ58ZCXI94Io6tEtlw+h2BMCSKhhCuqNsoVEAUCh/oRxsUChUQhUIFRKFIOjL1FlQrsOWlLQY+XpBneGe91bEWtey5vtSLWOP2h+ibF/MtiZ97wb5L0JuY5UUies+/G6qL9GoFNpe6LFTsXFsXoy6bPdkcO4X4YIi+OSSETbPsKrg3QD0OcWELWxNVsRSVjW+8yL6OKfi/1It8gI41BfZT/M07G13APo5FGLnZ7m++JNPLi3i7J+H/QC/yUb9ZXsTz3UCksw+N/QfsmzBfavkSx7FexGfykDg//pDdci+yh2MCZqwFOC9GZy/yUTv2xdRD2rXog89/RMrvqEbAVku2sb7Ccghe82cQJl6ILbSHxJ6dWxBSXibim3pjG22F2EcyTWxbfQfHbWLvR7EInTdbeBdjC675TtcO/B6IfTP7EbdmdhmuQiSA2W/CW5l/Cy95Rirvpa5BqidMXBo739phNN7tRRxsrFNP8yKfIuWdgfztrI5Qc2ph9pmPcvlQ2XgNw58v5YiLyzHjMLpgDbHGi+xaZDT3IlEMPCNxvNMMzBbmu2TmyO3zjkT2lHPclYmy5SgAdkpy1DDHt5Vi5mBP+DzP7PRTFUuR4GKdwZ5n8+G62nghl3mRUBhGAQSln3c2eJGPNYWAcRtfi/flqFDHsr2zH6kzIePcB3umc7DwN8GEO3Dk8PTb8Ps02uRFuglKzMEifwXUNg79KIbQr/HODX9XK5YiFLJw5NH+L17kg92s/x+DhaoY+v5clOOXvqd4QbNEOxd5kZAMA45QLsEsYfak/xFri6cgTLz+2AQB87DmeMyLRM6+752NHs7EzNUZgsv7RIZDsF6F0HDc1BswQBwOaWFTK5biHLBK1QmqEasvbO4dhxdyjrB03QUhmIPRvAgj/T4IDH+tvylmlfXIZyF724uEgw+C6vYc2uN8/hQtBwReTfwRhJODBHMhQPsx2zyImWQH1K6XoeKNxfnNEupiCYSNQ/UPqoAoqiNYCBbDCpYNC9dPhNXr/NRVVUAUCl2kKxQqIAqFCohCoQKiUKiAKBQqIAqFCohCoQKiUHz38D8BBgA1ol6fZdZh+wAAAABJRU5ErkJggg=="></a></div><div class="b3a3fdb167dc849c79df08c917363fc9.scss o365sx-appName"></div></div><div id="O365_SuiteBranding_container" class="dc7ed9a5e04b6acb575f721fa13b8c10.scss"><div class="_314d83894b3882400fc35fc859c3ff16.scss o365sx-appName"><a href="https://web.microsoftstream.com" role="link" id="O365_AppName" aria-label="Go to Stream"><span class="ab38a845419f0ef0715ce595ac49a106.scss">Stream</span></a></div></div></div><div id="centerRegion" class="c1b953e95a09350215e599ea5f933e20.scss" style="flex: 1 0 auto; justify-content: center;"><div style="display: flex; flex: 1 0 auto;"><div id="O365_SearchBoxContainer_container" class="dc7ed9a5e04b6acb575f721fa13b8c10.scss" style="flex: 1 0 auto; justify-content: center;"><div style="width: 100%;"></div></div></div></div><div id="HeaderButtonRegion" class="c1b953e95a09350215e599ea5f933e20.scss" style="justify-content: flex-end;"><div id="O365_Feature_Flags_container" class="dc7ed9a5e04b6acb575f721fa13b8c10.scss"></div><div id="O365_MainLink_Day_container" class="dc7ed9a5e04b6acb575f721fa13b8c10.scss"></div><div id="O365_MainLink_Chat_container" class="dc7ed9a5e04b6acb575f721fa13b8c10.scss"></div><div id="O365_MainLink_Bell_container" class="dc7ed9a5e04b6acb575f721fa13b8c10.scss"><div class="ac25fed02f3f4133f1f21a7d9e3ca7f2.scss"><button class="_0299ff19e67bde1ea0ef4995f53311be.scss o365sx-button" id="O365_MainLink_Bell" aria-haspopup="true" aria-label="Notifications" role="button" type="button" title="Notifications" aria-expanded="false"><div class="o365cs-base" aria-hidden="true"><span class="ms-Icon--Ringer" style="display: inline-block; font-size: 16px;" role="presentation"></span></div></button></div></div><div id="O365_MainLink_Settings_container" class="dc7ed9a5e04b6acb575f721fa13b8c10.scss"><div class="ac25fed02f3f4133f1f21a7d9e3ca7f2.scss"><button class="_0299ff19e67bde1ea0ef4995f53311be.scss o365sx-button" 
id="O365_MainLink_Settings" aria-haspopup="true" aria-label="Settings" role="button" type="button" title="Settings" aria-expanded="false"><div class="o365cs-base" aria-hidden="true"><span class="ms-Icon--Settings" style="display: inline-block; font-size: 16px;" role="presentation"></span></div></button></div></div><div id="O365_MainLink_Help_container" class="dc7ed9a5e04b6acb575f721fa13b8c10.scss"><div class="ac25fed02f3f4133f1f21a7d9e3ca7f2.scss"><button class="_0299ff19e67bde1ea0ef4995f53311be.scss o365sx-button" id="O365_MainLink_Help" aria-haspopup="true" aria-label="Help" role="button" type="button" title="Help" aria-expanded="false"><div class="o365cs-base" aria-hidden="true"><span class="ms-Icon--Help" style="display: inline-block; font-size: 16px;" role="presentation"></span></div></button></div></div></div><div class="dc7ed9a5e04b6acb575f721fa13b8c10.scss abbf934e6755a54ab1e3873ff32905ab.scss"><div></div></div></div></div></div></div></div><div id="O365_HeaderRightRegion" class="f6f6055c0ea8e7d60465984814c03603.scss"><div class="_39d41f3de648b4f3498e1a38d97fe686.scss"><button id="O365_MainLink_Me" class="_0299ff19e67bde1ea0ef4995f53311be.scss _554bec52b64efc8c56ac2eafe5abdee5.scss o365sx-button o365sx-highContrastButton" role="button" aria-haspopup="true" type="button" title="Account manager for Yan, Amber" aria-label="Account manager for Yan, Amber"><div class="_971b1aa64c8487b35e2026e8dc272cef.scss"><div class="c461a030351b633b6e29ec434f4120f7.scss undefined"><span style="display: none;">Yan, Amber</span></div><div class="_0299ff19e67bde1ea0ef4995f53311be.scss f3229ec41b8583dd8dd7687630b709ca.scss"><div id="O365_MainLink_MePhoto" class="_5cd3b7e3330f359c70c3f8080b25a70b.scss"><div><div><div class="_38d7ebc8b9e301ac0fb6c7146c680f3a.scss" style="width: 32px; height: 32px;"><div id="meInitialsButton" style="background-color: rgb(126, 56, 120); font-size: 16px;" class="ebe3b1eb2850b56868257b865001fc57.scss" aria-hidden="true"><span>YA</span></div><div class="b09f993ead477113894c0a380aa56bae.scss"><img style="" src="https://outlook.office.com/owa/service.svc/s/[email protected]&UA=0&size=HR96x96&sc=1595600012300" alt="YA"></div></div></div></div></div></div></div></button></div><div class="_39d41f3de648b4f3498e1a38d97fe686.scss"><a id="O365_MainLink_Signin" class="_0299ff19e67bde1ea0ef4995f53311be.scss _554bec52b64efc8c56ac2eafe5abdee5.scss o365sx-button o365sx-highContrastButton" role="button" type="button" style="display: none;" aria-label="Sign in"><div class="_971b1aa64c8487b35e2026e8dc272cef.scss"><div class="c461a030351b633b6e29ec434f4120f7.scss undefined"><span style="display: inline;">Sign in</span></div><div class="_0299ff19e67bde1ea0ef4995f53311be.scss f3229ec41b8583dd8dd7687630b709ca.scss"><div id="O365_MainLink_SigninLink" class="_5144c36614176145aadaa132d7d63eeb.scss"><span class="ms-Icon--Contact ms-icon-font-size-18 o365sx-accent-font _29bfbb25c9d5ee5f6f8e79a840f4d4d6.scss" style="display: inline-block; font-size: 18px;" role="presentation"></span></div></div></div></a></div></div><div id="NotificationToastContainer"><div aria-live="polite" role="region" aria-relevant="additions text"></div></div></div><div id="o365cs-flexpane-overlay"></div><div></div></header> <div class="action-bar-container" ng-show="$ctrl.videoEditorInactive" aria-hidden="false"> <div action-list="" class="topbar-container"> <button data-bi-id="topbar-hamburger" class="c-action-trigger c-glyph hamburger topbar-button hide-for-medium" aria-label="Menu" ng-click="$ctrl.mobileMenuClicked($event)" 
style=""> <svg-src class="topbar-svg ng-isolate-scope" svg-name="icon_hamburger"><span class="svg-container" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_hamburger" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> <nav role="navigation" class="f-closed mobile-menu ng-hide" ng-attr-aria-hidden="!$ctrl.showMobileMenu" ng-show="$ctrl.showMobileMenu" style=""> <ul class="c-menu"> <li class="c-menu-item"> <a data-bi-id="topbar-home-button-mobile" href="/" class="home-link action-link ng-binding" ng-click="$ctrl.logAndSetFocus($event, '/')"> <svg-src class="topbar-svg ng-isolate-scope" svg-name="icon_home" inline-with-text="true"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_home" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> Home </a> </li> <li data-bi-id="topbar-discover-menu-button-mobile" class="c-menu-item f-sub-menu" ng-keydown="$ctrl.menuKeydown($event, 'discoverMenuExpanded')"> <button class="discover-navigation ms-mobile-sub-menu" id="topbar-discover-navigation-button-mobile" aria-haspopup="true" ng-attr-aria-expanded="{{$ctrl.discoverMenuExpanded}}" aria-label="Discover menu options" ng-click="$ctrl.expandMenu($event,'discoverMenuExpanded',$ctrl.discoverMenuExpanded)" aria-expanded="false"> <div class="mobile-submenu-content"> <svg-src class="topbar-svg ng-isolate-scope" svg-name="icon_discover" inline-with-text="true"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_discover" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> <span class="link-label ng-binding">Discover</span> </div> </button> <!-- ngIf: $ctrl.discoverMenuExpanded --> </li> <li class="c-menu-item f-sub-menu" ng-keydown="$ctrl.menuKeydown($event, 'myContentMenuExpanded')"> <button class="mycontent-navigation ms-mobile-sub-menu" id="topbar-mycontent-navigation-button-mobile" aria-haspopup="true" ng-attr-aria-expanded="{{$ctrl.myContentMenuExpanded}}" aria-label="My content menu options" ng-click="$ctrl.expandMenu($event, 'myContentMenuExpanded', $ctrl.myContentMenuExpanded)" aria-expanded="false"> <div class="mobile-submenu-content"> <svg-src class="topbar-svg ng-isolate-scope" svg-name="icon_mycontent" inline-with-text="true"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_mycontent" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> <span class="link-label ng-binding">My content</span> </div> </button> <!-- ngIf: $ctrl.myContentMenuExpanded --> </li> <li class="c-menu-item f-sub-menu" ng-keydown="$ctrl.menuKeydown($event, 'createMenuExpanded')"> <button class="create-navigation ms-mobile-sub-menu" id="topbar-create-navigation-button-mobile" aria-haspopup="true" ng-attr-aria-expanded="{{$ctrl.createMenuExpanded}}" aria-label="Create menu options" ng-click="$ctrl.expandMenu($event,'createMenuExpanded',$ctrl.createMenuExpanded)" aria-expanded="false"> <div class="mobile-submenu-content"> <svg-src class="topbar-svg ng-isolate-scope" svg-name="icon_create" inline-with-text="true"><span class="svg-container icon-inline" ng-class="{'icon-inline': 
id="transcript-b5e0539d-c84d-45b2-a823-f0336e1ed3bd" aria-posinset="1371" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5219 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:26:59 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1370" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> right training in order to be ready for that role, because </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5222 seconds from beginning." 
id="transcript-1e165221-f253-4648-a43d-8d7394794a0a" aria-posinset="1372" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5222 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:02 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1371" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> this is going to be a really exciting journey an we really </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5226 seconds from beginning." 
id="transcript-882ad8de-0d12-4a43-ac46-4bd696cdb111" aria-posinset="1373" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5226 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:06 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1372" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> need everybody to lean in and help us achieve that growth. </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5232 seconds from beginning." 
id="transcript-663ad0ca-79c9-4db3-8970-cdfabdf478fc" aria-posinset="1374" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5232 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:12 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1373" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> Alright, so I can't think of a better comment to close on. I </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5236 seconds from beginning." 
id="transcript-4f9f99e1-3717-4b1c-afa9-d4bb93a8c217" aria-posinset="1375" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5236 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:16 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1374" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> want to thank everyone on the panel. I wanna think Jeff, I </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5239 seconds from beginning." 
id="transcript-796b523c-9cdf-4c60-ab28-f265d6905cfc" aria-posinset="1376" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5239 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:19 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1375" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> want to thank all of you for attending. I really encourage </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5242 seconds from beginning." 
id="transcript-121c51ca-1966-4497-a949-7f3fff9ca962" aria-posinset="1377" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5242 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:22 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1376" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> you to attend our next quarterly town Hall on Aug 6th. I know you </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5246 seconds from beginning." 
id="transcript-536750e0-23ee-4982-b4b7-06b407f7bccf" aria-posinset="1378" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5246 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:26 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1377" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> have a lot of questions and he submitted a ton of questions </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5250 seconds from beginning." 
id="transcript-254e9ed2-49b2-4481-b1f7-345039944d8f" aria-posinset="1379" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5250 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:30 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1378" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> around. You know people that have taken a number of voluntary </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5253 seconds from beginning." 
id="transcript-a83df1f5-dd28-40f9-8262-e617edda9518" aria-posinset="1380" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5253 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:33 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1379" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> and involuntary actions will talk about all that on our next </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5256 seconds from beginning." 
id="transcript-dec7ed5c-25d3-4dd7-8085-ba121269fc2f" aria-posinset="1381" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5256 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:36 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1380" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> quarterly townhall. Obviously today was really focused on </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5258 seconds from beginning." 
id="transcript-f7256754-e9ba-4178-8559-909066aa6f2a" aria-posinset="1382" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5258 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:38 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1381" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> stint. Uhm, I want to thank everyone for participating and </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5261 seconds from beginning." 
id="transcript-31fc5c4f-4f9e-4eac-9b55-dbb1bdbbef55" aria-posinset="1383" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5261 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:41 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1382" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> thank you for working so hard </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5263 seconds from beginning." 
id="transcript-cd31625a-f693-42d6-83c6-6543117eb7cd" aria-posinset="1384" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5263 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:43 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1383" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> for Team Syncada. Please continue to stay safe and </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5266 seconds from beginning." 
id="transcript-819bcabc-e177-47f9-97eb-854d36c5b7b1" aria-posinset="1385" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5266 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:46 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1384" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> serve our customers and be well. Have a great day </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5268 seconds from beginning." 
id="transcript-4133dfb3-14c4-492f-9fc9-b76b4902aefa" aria-posinset="1386" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5268 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:48 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1385" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> everyone. </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." 
ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --> </ul> </ng-transclude> </div> </div></virtual-list><!-- end ngIf: $ctrl.transcriptLines.length --> <!-- ngIf: !$ctrl.isTranscriptEditV3Enabled --> </div> </div> </transcript> </ng-transclude> </div> </div> </div> </search-event><!-- end ngIf: $ctrl.servicePlans.capabilities.transcriptSearch && $ctrl.captionReady --> <!-- ngIf: !$ctrl.servicePlans.capabilities.transcriptSearch && $ctrl.captionReady --> </div> <div class="no-data-message-cont"> <!-- ngIf: $ctrl.isNoTranscriptData --> </div> </div></div> <div id="forms-right-tab-page" ng-class="{'live':$ctrl.isLiveNow || $ctrl.isLiveContentCreatedState}" ng-show="$ctrl.isRightTabSet(5)" aria-hidden="true" class="ng-hide"><div id="forms-container" class="forms-container"> <forms timed-events="$ctrl.timedEvents" show-controls="$ctrl.showFormControls" on-add-form="$ctrl.addForm(formEvent)" on-delete-form="$ctrl.deleteForm(formEvent)" player="$ctrl.player" on-form-clicked="$ctrl.jumpToForm(formEvent)" can-edit="$ctrl.isEditorOrInAdminMode" time-code="$ctrl.playerTime" video-id="$ctrl.videoId" player-session-id="$ctrl.playerSessionId" class="ng-isolate-scope"><div class="forms" dir="ltr"> <!-- ngIf: $ctrl.canEdit --> <!-- ngIf: $ctrl.timedEvents.length !== 0 --> <!-- ngIf: $ctrl.timedEvents.length === 0 && $ctrl.canEdit --> </div></forms> </div></div> </div> </div> </div> </div> </div><!-- end ngSwitchWhen: --> <!-- ngSwitchWhen: NotFound --> <!-- ngSwitchWhen: Forbidden --> <!-- ngIf: $ctrl.routingCase === 'Uploading' || $ctrl.routingCase === 'NotReady' || $ctrl.routingCase === 'Error' --> </div> </div> </section> <!-- ngIf: $ctrl.video --><section class="other-area ng-scope" ng-if="$ctrl.video" aria-label="section including comments and related videos" style=""> <div class="row"> <div class="column video-detail-container medium-12 large-8" ng-class="{'medium-12 large-12': $ctrl.display12Column, 'medium-12 large-8': $ctrl.showRightContainer || $ctrl.isCompletedOnLoad}" style="margin-top: 0px;"> <div class="bottom-navigation-container"> <tab-navigation class="bottom-navigation ng-isolate-scope" model="$ctrl.playerBottomTabOptions" value="$ctrl.currentBottomTab" direction="$ctrl.tabHeaderDirection" show-full-in-small="true"><nav class="tab-navigation c-in-page-navigation" ng-class="{'tab-navigation-v2': $ctrl.isV2StyleEnabled, 'c-in-page-navigation': !$ctrl.isVertical, 'vertical-navigation': $ctrl.isVertical, 'tab-navigation-minimal': $ctrl.minimalStyle}"> <ul class="tab-navigation-list show-for-small" role="tablist" ng-class="{'show-for-medium': !$ctrl.isVertical && !$ctrl.showFullInSmall, 'show-for-small': $ctrl.showFullInSmall}"> <!-- ngRepeat: option in $ctrl.model.options --><!-- 
ngIf: !option.hide --><li class="tab-navigation-item ng-scope details-bottom-tab f-active" role="presentation" ng-repeat="option in $ctrl.model.options" ng-if="!option.hide" ng-class="[option.className, {'f-active': $ctrl.value === option.value}]" style=""> <!-- ngIf: option.svgName --> <button class="c-action-trigger nav-link nav-link-option-0 f-active" role="tab" ng-class="{'f-active': $ctrl.value === option.value, 'item-error': option.value === $ctrl.errorTabValue}" ng-click="$ctrl.onTabClicked(option)" ng-disabled="option.disable" aria-posinset="1" aria-setsize="1" aria-selected="true"> <!-- ngIf: option.value === $ctrl.errorTabValue --> Details </button> <!-- ngIf: option.afterSvgName --> </li><!-- end ngIf: !option.hide --><!-- end ngRepeat: option in $ctrl.model.options --><!-- ngIf: !option.hide --><!-- end ngRepeat: option in $ctrl.model.options --><!-- ngIf: !option.hide --><!-- end ngRepeat: option in $ctrl.model.options --><!-- ngIf: !option.hide --><!-- end ngRepeat: option in $ctrl.model.options --><!-- ngIf: !option.hide --><!-- end ngRepeat: option in $ctrl.model.options --> </ul> <!-- ngIf: !$ctrl.isVertical --><ul class="tab-navigation-list ng-scope hide-for-small" ng-if="!$ctrl.isVertical" ng-class="{'hide-for-medium': !$ctrl.showFullInSmall, 'hide-for-small': $ctrl.showFullInSmall}"> <li class="tab-navigation-item"> <button class="c-action-trigger with-drawer nav-link nav-link-option-0 f-active" ng-class="{'f-active': $ctrl.value === $ctrl.activeOption.value, 'item-error': $ctrl.activeOption.value === $ctrl.errorTabValue}" ng-click="$ctrl.onTabClicked($ctrl.activeOption)"> <!-- ngIf: $ctrl.activeOption.value === $ctrl.errorTabValue --> Details </button> </li> <!-- ngIf: $ctrl.drawerItems --><li class="tab-navigation-item ng-scope" ng-if="$ctrl.drawerItems"> <drawer drawer-items="$ctrl.drawerItems" item="$ctrl.model" left-allign="$ctrl.leftAllign" fixed-elements="$ctrl.fixedElements" tab-navigation-drawer="true" class="ng-isolate-scope"><div class="drawer" action-list=""> <span ng-show="!$ctrl.transcludeButtonPresent" role="presentation" aria-hidden="false" class=""> <button name="dropdown" role="menu" class="c-action-trigger c-glyph drawer-button drawer-button-stream drawer-button-without-label ng-isolate-scope" aria-label="More actions" custom-title="More actions" aria-expanded="false" ng-click="return;"> <span class="x-screen-reader ng-binding">More actions</span> </button> </span> <span role="presentation" ng-show="$ctrl.transcludeButtonPresent" ng-transclude="transcludebutton" aria-hidden="true" class="ng-hide"> <ng-transclude></ng-transclude> </span> <ul class="drawer-content collapsed" ng-class="{'allign-opp': $ctrl.leftAllign}"> <!-- ngRepeat: item in $ctrl.drawerItems --> </ul> </div></drawer> </li><!-- end ngIf: $ctrl.drawerItems --> <li aria-label="Health" ng-class="$ctrl.svgClassName"> <!-- ngIf: $ctrl.isSvgAfterAvailable --> </li> </ul><!-- end ngIf: !$ctrl.isVertical --> </nav> </tab-navigation> <!-- ngIf: $ctrl.bottomTabDrawerItems.length --> </div> <div id="under-player-container" class="xsmall-12 small-12 medium-12 large-12"> <video-metadata ng-show="$ctrl.isBottomTabSet(0) && $ctrl.showMetadata" video="$ctrl.video" is-in-admin-mode="$ctrl.isInAdminMode" elevated-video-session-properties="$ctrl.elevatedVideoSessionProperties" video-page-session-properties="$ctrl.videoPageSessionProperties" player-options="$ctrl.playerOptions" dialog-interruption-handler="$ctrl.dialogInterruptionHandler" get-player-current-time="$ctrl.getCurrentTime()" 
drawer-items="$ctrl.drawerItems" view-settings="$ctrl.viewSettings" is-stream-admin="$ctrl.isStreamAdmin" is-live-content="$ctrl.isLiveContent" video-page-session-metrics="$ctrl.videoPageSessionMetrics" is-live-now="$ctrl.isLiveNow" is-completed="$ctrl.isCompleted" is-replacing="$ctrl.isReplacing" is-trimming="$ctrl.isTrimming" class="ng-isolate-scope" aria-hidden="false"><section> <div class="video-meta-container"> <div class="row row-size1"> <!-- ngIf: $ctrl.showErrorMessage --> </div> <div class="row column main-meta row-size0 remove-margin"> <div class="title-container title-with-completed" ng-class="{'title-with-live': $ctrl.isLiveNow, 'title-with-completed': $ctrl.isCompleted, 'title-before-live': $ctrl.isLiveContent && !$ctrl.isCompleted}"> <span class="live-label-cont"> <!-- ngIf: $ctrl.isLiveNow --> </span> <!-- ngIf: $ctrl.isLiveContent && !$ctrl.isCompleted && !$ctrl.isLiveNow && !$ctrl.isReplacing && !$ctrl.isTrimming --> <!-- ngIf: $ctrl.isLiveContent && $ctrl.isCompleted --> <h1 class="title ng-binding"> 2020 Global STINT Town Hall Meeting </h1> </div> <div class="creator-and-privacy"> <div class="video-create-message"> <video-creator item="$ctrl.video" get-creator-message-template="$ctrl.getCreateMessageTemplate()" class="ng-isolate-scope"><!-- ngIf: ::($ctrl.canDisplayCreator) --><span class="info-message-content ng-binding ng-scope" ng-if="::($ctrl.canDisplayCreator)">Published on 2020/7/22 by <a class="c-hyperlink info-message-link ng-binding" ng-href="/user/21bed756-c076-4add-9912-d49fcc3543d0" ng-click="$ctrl.cacheCreator()" href="/user/21bed756-c076-4add-9912-d49fcc3543d0">Cermak, Jennifer</a> </span><!-- end ngIf: ::($ctrl.canDisplayCreator) --></video-creator> </div> <div class="privacy-and-statistics"> <video-privacy class="video-privacy ng-isolate-scope" video-privacy-type="$ctrl.video.privacyMode"><p class="screen-reader-text ng-binding">organization</p> <div class="videoprivacy ng-isolate-scope" aria-hidden="true" custom-title="" light-style="$ctrl.lightCustomTitle"> <svg-src class="icon_privacytag ng-isolate-scope video-privacy-company" aria-label="Privacy" inline-with-text="true" svg-name="icon_privacy_my_org" ng-class="{
'video-privacy-company' : ($ctrl.videoPrivacyType | isVideoPrivacyCompany),
'video-privacy-limited' : ($ctrl.videoPrivacyType | isVideoPrivacyLimited)
}"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_privacy_my_org" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> <!-- ngIf: !$ctrl.hideLabel --><span class="c-badge privacytext ng-binding ng-scope" ng-if="!$ctrl.hideLabel"> Company </span><!-- end ngIf: !$ctrl.hideLabel --> </div> </video-privacy> <!-- ngIf: !$ctrl.isLiveNow --><video-statistics ng-if="!$ctrl.isLiveNow" video-statistics-data="$ctrl.video.metrics" show-icons="true" class="ng-scope ng-isolate-scope"><!-- ngIf: !$ctrl.showIcons --> <!-- ngIf: $ctrl.showIcons --><span class="video-statistics ng-scope" ng-if="$ctrl.showIcons"> <span role="img" class="view-count metric-inline-flex ng-isolate-scope" custom-title="366 views" light-style="$ctrl.lightCustomTitle" aria-label="366 views"> <span aria-hidden="true" class="ng-binding">366</span> <svg-src svg-name="icon_views_new" class="metric-icon ng-isolate-scope"><span class="svg-container" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_views_new" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </span> <!-- ngIf: !$ctrl.hideLikes --><span role="img" class="like-count metric-inline-flex ng-scope ng-isolate-scope" ng-if="!$ctrl.hideLikes" custom-title="4 likes" light-style="$ctrl.lightCustomTitle" aria-label="4 likes"> <span aria-hidden="true" class="ng-binding">4</span> <svg-src svg-name="icon_likes" class="metric-icon ng-isolate-scope"><span class="svg-container" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_likes" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </span><!-- end ngIf: !$ctrl.hideLikes --> <!-- ngIf: $ctrl.showComments --> </span><!-- end ngIf: $ctrl.showIcons --></video-statistics><!-- end ngIf: !$ctrl.isLiveNow --> <!-- ngIf: $ctrl.isLiveNow --> </div> </div> <!-- ngIf: $ctrl.video.description --> <!-- ngIf: $ctrl.showSeeMoreSeeLessButton --> </div> <!-- ngIf: !$ctrl.showSeeMoreSeeLessButton --><div class="row column divider ng-scope" ng-if="!$ctrl.showSeeMoreSeeLessButton"> <hr class="c-divider"> </div><!-- end ngIf: !$ctrl.showSeeMoreSeeLessButton --> <div class="row row-size0 action-trigger"> <div class="column xsmall-10 small-7 medium-8 large-8 relativeposition"> <div class="video-action-trigger"> <!-- ngIf: $ctrl.video.userData.permissions.canShareUrl --><button class="c-action-trigger c-glyph action-button video-action-share ng-scope" ng-click="$ctrl.onShareClick()" aria-label="Share" ng-if="$ctrl.video.userData.permissions.canShareUrl"> <span class="action-button-svg-container"><svg-src svg-name="icon_share" inline-with-text="true" class="action-button-svg ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_share" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src></span> <span class="action-button-label ng-binding">Share</span> </button><!-- end ngIf: $ctrl.video.userData.permissions.canShareUrl --> <!-- ngIf: $ctrl.video.userData.isPinned --> <!-- ngIf: !$ctrl.video.userData.isPinned --><button class="c-action-trigger c-glyph action-button video-action-pin ng-scope" ng-click="$ctrl.onPinButtonClick()" 
ng-disabled="$ctrl.pinButtonDisabled" aria-label="Add to watchlist" ng-if="!$ctrl.video.userData.isPinned"> <span class="action-button-svg-container"><svg-src svg-name="icon-add-to-watchlist" inline-with-text="true" class="action-button-svg video-action-pin ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon-add-to-watchlist" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src></span> <span class="action-button-label ng-binding">Add to watchlist</span> </button><!-- end ngIf: !$ctrl.video.userData.isPinned --> <!-- ngIf: $ctrl.video.userData.isLiked --> <!-- ngIf: !$ctrl.video.userData.isLiked --><button class="c-action-trigger c-glyph action-button video-like ng-scope" ng-click="$ctrl.onLikeClick()" ng-disabled="$ctrl.likeButtonDisabled" aria-label="Like" ng-if="!$ctrl.video.userData.isLiked"> <span class="action-button-svg-container"><svg-src svg-name="icon_likes" inline-with-text="true" class="action-button-svg video-action-like ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_likes" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src></span> <span class="action-button-label ng-binding">Like</span> </button><!-- end ngIf: !$ctrl.video.userData.isLiked --> <drawer class="action-drawer action-drawer-with-label ng-isolate-scope" drawer-items="$ctrl.drawerItems"><div class="drawer" action-list=""> <span ng-show="!$ctrl.transcludeButtonPresent" role="presentation" aria-hidden="false" class=""> <button name="dropdown" role="menu" class="c-action-trigger c-glyph drawer-button drawer-button-stream drawer-button-without-label ng-isolate-scope" aria-label="More actions" custom-title="More actions" aria-expanded="false" ng-click="return;"> <span class="x-screen-reader ng-binding">More actions</span> </button> </span> <span role="presentation" ng-show="$ctrl.transcludeButtonPresent" ng-transclude="transcludebutton" aria-hidden="true" class="ng-hide"> <ng-transclude></ng-transclude> </span> <ul class="drawer-content collapsed" ng-class="{'allign-opp': $ctrl.leftAllign}"> <!-- ngRepeat: item in $ctrl.drawerItems --><li ng-repeat="item in $ctrl.drawerItems" role="presentation" class="ng-scope"> <!-- ngIf: item.isVisible --><button role="menuitem" ng-class="item.iconClassName" class="drawer-item-Linked-group-or-channel c-glyph c-action-trigger drawer-content-item Linked-group-or-channel" aria-label="Linked groups/channels List item 1 of 2" ng-click="$ctrl.onItemClick(item)" ng-if="item.isVisible"> <!-- ngIf: item.svgSrc --><svg-src svg-name="icon_channel_poster" ng-if="item.svgSrc" class="small-icon ng-scope ng-isolate-scope" inline-with-text="true"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_channel_poster" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src><!-- end ngIf: item.svgSrc --> <span class="icon-text ng-binding svg-text" ng-class="item.svgSrc?'svg-text':'glyph-text'">Linked groups/channels</span> </button><!-- end ngIf: item.isVisible --> </li><!-- end ngRepeat: item in $ctrl.drawerItems --><li ng-repeat="item in $ctrl.drawerItems" role="presentation" class="ng-scope"> <!-- ngIf: item.isVisible --><button role="menuitem" 
ng-class="item.iconClassName" class="drawer-item-add-to-group-channel c-glyph c-action-trigger drawer-content-item add-to-group-channel" aria-label="Add to group/channel List item 2 of 2" ng-click="$ctrl.onItemClick(item)" ng-if="item.isVisible"> <!-- ngIf: item.svgSrc --><svg-src svg-name="icon-add-to-channel" ng-if="item.svgSrc" class="small-icon ng-scope ng-isolate-scope" inline-with-text="true"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon-add-to-channel" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src><!-- end ngIf: item.svgSrc --> <span class="icon-text ng-binding svg-text" ng-class="item.svgSrc?'svg-text':'glyph-text'">Add to group/channel</span> </button><!-- end ngIf: item.isVisible --> </li><!-- end ngRepeat: item in $ctrl.drawerItems --> </ul> </div></drawer> <!-- ngIf: $ctrl.isReactShareDialogEnabled() --> </div> </div> <div class="column xsmall-2 small-5 medium-4 large-4 video-action-trigger view-settings-button"> <!-- ngIf: $ctrl.viewSettings.length --><drawer drawer-items="$ctrl.viewSettings" ng-if="$ctrl.viewSettings.length" class="ng-scope ng-isolate-scope"><div class="drawer" action-list=""> <span ng-show="!$ctrl.transcludeButtonPresent" role="presentation" aria-hidden="true" class="ng-hide"> <button name="dropdown" role="menu" class="c-action-trigger c-glyph drawer-button drawer-button-stream drawer-button-without-label ng-isolate-scope" aria-label="More actions" custom-title="More actions" aria-expanded="false" ng-click="return;"> <span class="x-screen-reader ng-binding">More actions</span> </button> </span> <span role="presentation" ng-show="$ctrl.transcludeButtonPresent" ng-transclude="transcludebutton" aria-hidden="false" class=""><transcludebutton class="ng-scope"> <button class="c-action-trigger drawer-button-stream action-button drawer-button-with-label" aria-label="View settings" aria-expanded="false"> <span class="drawer-icon-container"><svg-src svg-name="icon_settings" inline-with-text="true" class="action-button-svg video-action-unpin ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_settings" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src></span> <span class="drawer-label ng-binding">View settings</span> <span class="drawer-icon-container"><svg-src class="drawer-down-arrow ng-isolate-scope" svg-name="icon_down_arrow" inline-with-text="true"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_down_arrow" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src></span> </button> </transcludebutton></span> <ul class="drawer-content collapsed" ng-class="{'allign-opp': $ctrl.leftAllign}"> <!-- ngRepeat: item in $ctrl.drawerItems --><li ng-repeat="item in $ctrl.drawerItems" role="presentation" class="ng-scope"> <!-- ngIf: item.isVisible --><button role="menuitem" ng-class="item.iconClassName" class="drawer-item-hide-transcript c-glyph c-action-trigger drawer-content-item hide-transcript" aria-label="Hide transcript List item 1 of 2" ng-click="$ctrl.onItemClick(item)" ng-if="item.isVisible"> <!-- ngIf: item.svgSrc --><svg-src svg-name="icon_hide" ng-if="item.svgSrc" class="small-icon ng-scope ng-isolate-scope" 
inline-with-text="true"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_hide" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src><!-- end ngIf: item.svgSrc --> <span class="icon-text ng-binding svg-text" ng-class="item.svgSrc?'svg-text':'glyph-text'">Hide transcript</span> </button><!-- end ngIf: item.isVisible --> </li><!-- end ngRepeat: item in $ctrl.drawerItems --><li ng-repeat="item in $ctrl.drawerItems" role="presentation" class="ng-scope"> <!-- ngIf: item.isVisible --><button role="menuitem" ng-class="item.iconClassName" class="drawer-item-theater-mode-on c-glyph c-action-trigger drawer-content-item theater-mode-on" aria-label="Theater mode List item 2 of 2" ng-click="$ctrl.onItemClick(item)" ng-if="item.isVisible"> <!-- ngIf: item.svgSrc --><svg-src svg-name="icon_theater_on" ng-if="item.svgSrc" class="small-icon ng-scope ng-isolate-scope" inline-with-text="true"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_theater_on" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src><!-- end ngIf: item.svgSrc --> <span class="icon-text ng-binding svg-text" ng-class="item.svgSrc?'svg-text':'glyph-text'">Theater mode</span> </button><!-- end ngIf: item.isVisible --> </li><!-- end ngRepeat: item in $ctrl.drawerItems --> </ul> </div></drawer><!-- end ngIf: $ctrl.viewSettings.length --> </div> </div> </div> </section></video-metadata> <div id="transcript-bottom-tab" ng-show="$ctrl.isBottomTabSet(1)" aria-hidden="true" class="ng-hide"> </div> <div id="yammer-bottom-tab" ng-show="$ctrl.isBottomTabSet(3)" aria-hidden="true" class="ng-hide"> </div> <div class="video-analytics-container ng-hide" ng-show="$ctrl.isBottomTabSet(4)" aria-hidden="true"> <video-analytics video="$ctrl.video" is-producer="false" is-vod="$ctrl.isVideoVod" class="ng-isolate-scope"><!-- ngIf: !$ctrl.isSmallSize --><div class="video-analytics-container ng-scope" ng-if="!$ctrl.isSmallSize"> <!-- ngIf: !$ctrl.hasData && !$ctrl.isVod --> <!-- ngIf: $ctrl.hasData || $ctrl.isVod --><div class="analytics-data-container ng-scope" ng-if="$ctrl.hasData || $ctrl.isVod"> <!-- ngIf: !$ctrl.isVod --> <div class="view-count-panel"> <div class="view-count-value ng-binding">365</div> <div class="view-count-label ng-binding">Views</div> </div> <div class="like-count-panel"> <div class="like-count-value ng-binding">4</div> <div class="like-count-label"> <svg-src class="likes-icon ng-isolate-scope" svg-name="icon_likes"><span class="svg-container" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_likes" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> <span class="ng-binding">Likes</span> </div> </div> <!-- ngIf: $ctrl.providerUrl --> <!-- ngIf: $ctrl.isDownloadAnalyticsEnabled --> </div><!-- end ngIf: $ctrl.hasData || $ctrl.isVod --> </div><!-- end ngIf: !$ctrl.isSmallSize --> <!-- ngIf: $ctrl.isSmallSize && $ctrl.isLiveNow --></video-analytics> </div> <div id="forms-bottom-tab" ng-show="$ctrl.isBottomTabSet(5)" aria-hidden="true" class="ng-hide"> </div> </div> <div id="faces-container" class="faces-container column xsmall-12 small-12 medium-12 large-12 ng-hide" ng-show="$ctrl.isBottomTabSet(2) && ($ctrl.showFaces || 
$ctrl.isLiveContentButFaceDetectionNotReady) && $ctrl.video.options.faceDetection" aria-hidden="true"> <div id="faces-content" class="faces-content"> <face-chart video="$ctrl.video" player-time="$ctrl.playerTime" is-admin="$ctrl.isInAdminMode" not-ready="$ctrl.isLiveContentButFaceDetectionNotReady" is-live-content="$ctrl.isLiveContent" is-created="$ctrl.isCreated" is-completed="$ctrl.isCompleted" class="ng-isolate-scope"><!-- ngIf: $ctrl.isLoading --> <!-- ngIf: $ctrl.isError --> <!-- ngIf: !$ctrl.isLoading && !$ctrl.isError --><div class="face-chart-container ng-scope" ng-if="!$ctrl.isLoading && !$ctrl.isError" style=""> <!-- ngIf: ($ctrl.video.userData.permissions.canEdit || $ctrl.isAdmin) && $ctrl.isEditMode --> <!-- ngIf: $ctrl.noFaceShown && !$ctrl.isEditMode --><div ng-if="$ctrl.noFaceShown && !$ctrl.isEditMode" class="no-face-shown ng-scope"> <div class="no-data-message-cont"> <div class="no-data-message"> <span class="message-icon-cont"> <svg-src svg-name="icon_library" inline-with-text="true" class="message-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_library" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </span> <!-- ngIf: !$ctrl.notReady --><span ng-if="!$ctrl.notReady" class="ng-binding ng-scope">People timelines are not available for this video.</span><!-- end ngIf: !$ctrl.notReady --> <!-- ngIf: $ctrl.vodNotReady || $ctrl.liveVodNotReady --> <!-- ngIf: $ctrl.liveNotReady --> </div> </div> </div><!-- end ngIf: $ctrl.noFaceShown && !$ctrl.isEditMode --> <!-- ngIf: !$ctrl.noFaceShown && !$ctrl.notReady --> <!-- ngIf: $ctrl.showCommandBar() --> </div><!-- end ngIf: !$ctrl.isLoading && !$ctrl.isError --></face-chart> </div> </div> <!-- ngIf: $ctrl.isCompletedOnLoad && !$ctrl.isVODLess --><div id="comments-container" class="comments-container column xsmall-12 small-6 medium-6 large-12 vertical-space ng-scope" ng-if="$ctrl.isCompletedOnLoad && !$ctrl.isVODLess"> <div id="comments"> <comments video="$ctrl.video" is-in-admin-mode="$ctrl.isInAdminMode" class="ng-isolate-scope"><div class="video-comments c-caption-1"> <div class="comments-loading ng-hide" ng-show="$ctrl.initState.loading" aria-hidden="true" style=""> <loading-panel class="ng-isolate-scope"><div class="loading-panel"> <div class="loading-panel-inner"> <div class="c-progress f-indeterminate-local f-progress-small" role="progressbar" aria-valuetext="Loading"> <span></span> <span></span> <span></span> <span></span> <span></span> </div> <h5 class="c-heading-4 ng-binding">Loading...</h5> </div> </div></loading-panel> </div> <div class="error-banner ng-hide" ng-show="$ctrl.initState.error" aria-hidden="true"> <h6 class="c-heading-6 ng-binding"></h6> </div> <div class="comments-ready" ng-show="$ctrl.initState.ready" aria-hidden="false" style=""> <h2 class="title-heading comments-label c-heading-5 ng-binding"> 0 Comments </h2> <div class="error-banner ng-hide" ng-show="$ctrl.postCommentState.error" aria-hidden="true"> <h6 class="c-heading-6 ng-binding"></h6> </div> <!-- ngIf: $ctrl.video.userData.permissions.canComment --><div class="comment-input-box ng-scope" ng-if="$ctrl.video.userData.permissions.canComment"> <div class="profile-image"> <!-- ngIf: $ctrl.getProfileImage() --><img ng-if="$ctrl.getProfileImage()" 
ng-src="https://uswe-1-content.api.microsoftstream.com/api/tenants/8fe11266-3a2b-494f-b7ce-92bc201dc2ab/users/79c34bfa-ad42-4aae-a681-1edc26a3b11e/photo?api-version=1.4-private" onerror="this.src="/bundles/app/generated/user-placeholder.svg"" alt="user profile image" class="ng-scope" src="/bundles/app/generated/user-placeholder.svg"><!-- end ngIf: $ctrl.getProfileImage() --> <!-- ngIf: !$ctrl.getProfileImage() --> </div> <div class="input-area"> <div class="input-arrow"> <div ng-class="$ctrl.commentInputFocused ? 'arrow-outer-focused' : 'arrow-outer'" class="arrow-outer"></div> </div> <div class="c-textarea"> <textarea aria-label="Post a new comment" id="comment-textarea" ng-focus="$ctrl.commentInputFocused = true; $ctrl.focusedBefore = true;" ng-blur="$ctrl.commentInputFocused = false" ng-model="$ctrl.commentInput" class="f-no-resize f-scroll ng-pristine ng-untouched ng-valid ng-empty" name="textarea-default" rows="1" placeholder="Post a new comment" aria-invalid="false"></textarea> </div> </div> <div class="actions ng-hide" ng-show="$ctrl.focusedBefore && !$ctrl.postCommentState.posting" aria-hidden="true"> <button type="button" class="stream-btn" ng-disabled="$ctrl.postCommentState.posting" ng-click="$ctrl.discardComment()" aria-label="Discard"> <span class="ng-binding">Discard</span> </button> <button type="button" class="stream-btn right-dialog-button ng-binding btn-primary" ng-disabled="!$ctrl.commentInput || $ctrl.postCommentState.posting" ng-click="$ctrl.postComment()" aria-label="Post" disabled="disabled"> <span class="ng-binding">Post</span> </button> </div> <!-- ngIf: $ctrl.postCommentState.posting --> </div><!-- end ngIf: $ctrl.video.userData.permissions.canComment --> <!-- ngIf: !$ctrl.video.userData.permissions.canComment --> <div class="comments-list"> <!-- ngRepeat: c in $ctrl.comments --> </div> <div class="load-more-status"> <div class="error-banner ng-hide" ng-show="$ctrl.loadMoreState.error" aria-hidden="true"> <h6 class="c-heading-6 ng-binding"></h6> </div> <!-- ngIf: $ctrl.reachedEnd --><div class="reach-end ng-scope" role="note" ng-if="$ctrl.reachedEnd" aria-label="No more comments" style=""> </div><!-- end ngIf: $ctrl.reachedEnd --> <!-- ngIf: !$ctrl.reachedEnd --> </div> </div> </div></comments> </div> </div><!-- end ngIf: $ctrl.isCompletedOnLoad && !$ctrl.isVODLess --> <!-- ngIf: $ctrl.showMetadata --><div id="related-videos-container-for-small-screen" class="related-video-container column xsmall-12 small-6 medium-6 large-12 ng-scope" ng-if="$ctrl.showMetadata"> <!-- ngIf: $ctrl.isCompletedOnLoad --><!-- end ngIf: $ctrl.isCompletedOnLoad --> </div><!-- end ngIf: $ctrl.showMetadata --> </div> <div id="related-videos-container-for-big-screen" class="column medium-4 large-4" style="margin-top: 0px;"> <div id="related-videos" ng-if="$ctrl.isCompletedOnLoad" class="ng-scope"> <!-- ngIf: $ctrl.video && !$ctrl.isWatchList --><related-videos video="$ctrl.video" ng-if="$ctrl.video && !$ctrl.isWatchList" secondary-related-video-clicked="$ctrl.secondaryRelatedVideosClicked()" related-video-clicked="$ctrl.relatedVideosClicked(value)" related-videos-list="$ctrl.getListOfChannelVideos(value)" auto-play-toggle-status="$ctrl.getAutoPlayToggleStatus(status)" class="ng-scope ng-isolate-scope"><div class="related-videos list-style"> <!-- ngIf: $ctrl.relatedVideos.length && $ctrl.hideRelatedVideoDiv() --><div class="primary-related-videos ng-scope" ng-if="$ctrl.relatedVideos.length && $ctrl.hideRelatedVideoDiv()" style=""> <h2 class="c-heading-5 related-video-label ng-binding">More 
from trending videos</h2> <!-- ngIf: $ctrl.showAutoPlayToggle() --> <ul class="row"> <!-- ngRepeat: relatedVideoItem in $ctrl.relatedVideos --><li class="related-list-item column xsmall-12 ng-scope" ng-class="{'end' : 0 === $ctrl.relatedVideos.length - 1}" ng-repeat="relatedVideoItem in $ctrl.relatedVideos"> <responsive-video-item video-item="relatedVideoItem" query-param="$ctrl.queryStrings" item-clicked="$ctrl.videoItemClicked()" list-style="true" auto-play-status="$ctrl.autoPlayDefault" related-videos="$ctrl.relatedVideos" class="ng-isolate-scope"><div class="responsive-video-item list-style" ng-class="{'list-style' : $ctrl.listStyle}"> <a ng-href="/video/06c63fed-6723-4b8c-90a4-c6a5c3865dde" ng-click="$ctrl.videoClicked()" aria-label=" Video SmarterTogether with Paul Vasington - July 23, 2020 Update, duration is 4 minutes 50 seconds, number of views is 701 " class="video-item-image-link" href="/video/06c63fed-6723-4b8c-90a4-c6a5c3865dde"> <div class="video-item-thumb"> <div class="video-item-image-link"> <div class="video-item-image-container"> <!-- ngIf: !$ctrl.isImageUrlsEmpty --><img ng-if="!$ctrl.isImageUrlsEmpty" set-img-src="{"extraSmall":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/c08c6151-b9d1-4287-bc5d-dd6f0b55519d/MS1mcHg0bm1lN29hejJ2ZGU2cnZlcGxjZGhwZV8xXzgweDQ1LnBuZw?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=KwUdNSrcZRAUjJcEKl8ArUe%2fNjtX3P0EBROJ01QmbNQ%3d"},"small":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/c08c6151-b9d1-4287-bc5d-dd6f0b55519d/MS1mcHg0bm1lN29hejJ2ZGU2cnZlcGxjZGhwZV8xXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=3URQY8uoOcQj4s8YpZXK%2fffzaITTR4%2f1RPCKylMVPpk%3d"},"medium":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/c08c6151-b9d1-4287-bc5d-dd6f0b55519d/MS1mcHg0bm1lN29hejJ2ZGU2cnZlcGxjZGhwZV8xXzMyMHgxODAucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=TFluZTsxbEcUHlvd%2fAIDNLfs9Nw%2fTrejQqgr5DVIKlo%3d"},"large":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/c08c6151-b9d1-4287-bc5d-dd6f0b55519d/MS1mcHg0bm1lN29hejJ2ZGU2cnZlcGxjZGhwZV8xXzY0MHgzNjAucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=%2fFLMb12SoIjqab530w7RKnntHzF2jMOC%2fhnCz7zZh0w%3d"}}" alt="SmarterTogether with Paul Vasington - July 23, 2020 Update" aria-hidden="true" class="ng-scope" style="display: block;" src="https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/c08c6151-b9d1-4287-bc5d-dd6f0b55519d/MS1mcHg0bm1lN29hejJ2ZGU2cnZlcGxjZGhwZV8xXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=3URQY8uoOcQj4s8YpZXK%2fffzaITTR4%2f1RPCKylMVPpk%3d"><!-- end ngIf: !$ctrl.isImageUrlsEmpty --> <!-- ngIf: $ctrl.isImageUrlsEmpty && $ctrl.isLiveVideo --> <!-- ngIf: !$ctrl.isLiveVideo --><div ng-if="!$ctrl.isLiveVideo" class="video-item-thumb-duration ng-binding ng-scope"> 04:50 </div><!-- end ngIf: !$ctrl.isLiveVideo --> </div> </div> <!-- ngIf: $ctrl.newVideo && !$ctrl.isLiveNow --> <!-- ngIf: $ctrl.isLiveNow --> <!-- ngIf: $ctrl.videoItem.userData.isViewed && 
$ctrl.playbackProgressPercentage --> </div> <div class="detail-container"> <h3 class="title-heading"> <div class="video-item-title"> <!-- ngIf: $ctrl.isLiveVideo --> <span class="video-title-text ng-binding ng-isolate-scope" aria-hidden="true" custom-title="SmarterTogether with Paul Vasington - July 23, 2020 Update" light-style="true" on-ellipsis-only="true"> SmarterTogether with Paul Vasington - July 23, 2020 Update </span> </div> </h3> <div class="video-footer"> <!-- ngIf: $ctrl.isLiveVideo --> <!-- ngIf: $ctrl.isLiveVideo && !$ctrl.isLiveVideoCompleted --> <!-- ngIf: $ctrl.isVideoBeingPlayed --> <!-- ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --><video-statistics ng-if="(!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed" video-statistics-data="$ctrl.videoItem.metrics" hide-likes="true" class="ng-scope ng-isolate-scope"><!-- ngIf: !$ctrl.showIcons --><span class="video-statistics ng-scope" ng-if="!$ctrl.showIcons"> <span class="view-count ng-binding"> 701 views </span> <!-- ngIf: !$ctrl.hideLikes --> <!-- ngIf: $ctrl.showComments --> </span><!-- end ngIf: !$ctrl.showIcons --> <!-- ngIf: $ctrl.showIcons --></video-statistics><!-- end ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --> </div> </div> </a> </div></responsive-video-item> </li><!-- end ngRepeat: relatedVideoItem in $ctrl.relatedVideos --><li class="related-list-item column xsmall-12 ng-scope" ng-class="{'end' : 1 === $ctrl.relatedVideos.length - 1}" ng-repeat="relatedVideoItem in $ctrl.relatedVideos"> <responsive-video-item video-item="relatedVideoItem" query-param="$ctrl.queryStrings" item-clicked="$ctrl.videoItemClicked()" list-style="true" auto-play-status="$ctrl.autoPlayDefault" related-videos="$ctrl.relatedVideos" class="ng-isolate-scope"><div class="responsive-video-item list-style" ng-class="{'list-style' : $ctrl.listStyle}"> <a ng-href="/video/f8d4b791-9efe-4a03-9cff-fd8efbf97c92" ng-click="$ctrl.videoClicked()" aria-label=" Video Sensata COVID-19 Public Service Announcement, duration is 2 minutes 34 seconds, number of views is 871 " class="video-item-image-link" href="/video/f8d4b791-9efe-4a03-9cff-fd8efbf97c92"> <div class="video-item-thumb"> <div class="video-item-image-link"> <div class="video-item-image-container"> <!-- ngIf: !$ctrl.isImageUrlsEmpty --><img ng-if="!$ctrl.isImageUrlsEmpty" 
set-img-src="{"extraSmall":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/8ef52c4b-3499-4cf9-a085-efddc3b4596a/Yzk3MzM0NTktNmQ0OC00NmZiLTg1YmYtOGUzOWFlOTI3NGQ4LXBvc3Rlcl9leHRyYXNtYWxsLnBuZw?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=001TFfnfst7hTyI2MRQ7oryaBZQ2HlUR6Jykjm6d97c%3d"},"small":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/8ef52c4b-3499-4cf9-a085-efddc3b4596a/Yzk3MzM0NTktNmQ0OC00NmZiLTg1YmYtOGUzOWFlOTI3NGQ4LXBvc3Rlcl9zbWFsbC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=IoNltM8Ku64RS0tkLapS4B%2fPd0t1YVPJ8Mfr4UtdSHU%3d"},"medium":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/8ef52c4b-3499-4cf9-a085-efddc3b4596a/Yzk3MzM0NTktNmQ0OC00NmZiLTg1YmYtOGUzOWFlOTI3NGQ4LXBvc3Rlcl9tZWRpdW0ucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=TmBiBC3E%2bzSyyjSWpJPi4MTD37UEC4z6%2bmgnmLDfVj0%3d"},"large":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/8ef52c4b-3499-4cf9-a085-efddc3b4596a/Yzk3MzM0NTktNmQ0OC00NmZiLTg1YmYtOGUzOWFlOTI3NGQ4LXBvc3Rlcl9sYXJnZS5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=jPYGFmQNXtdLXAJFtQjYS9VEJeX5mrT2LrjiADhJqw0%3d"}}" alt="Sensata COVID-19 Public Service Announcement" aria-hidden="true" class="ng-scope" style="display: block;" src="https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/8ef52c4b-3499-4cf9-a085-efddc3b4596a/Yzk3MzM0NTktNmQ0OC00NmZiLTg1YmYtOGUzOWFlOTI3NGQ4LXBvc3Rlcl9zbWFsbC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=IoNltM8Ku64RS0tkLapS4B%2fPd0t1YVPJ8Mfr4UtdSHU%3d"><!-- end ngIf: !$ctrl.isImageUrlsEmpty --> <!-- ngIf: $ctrl.isImageUrlsEmpty && $ctrl.isLiveVideo --> <!-- ngIf: !$ctrl.isLiveVideo --><div ng-if="!$ctrl.isLiveVideo" class="video-item-thumb-duration ng-binding ng-scope"> 02:34 </div><!-- end ngIf: !$ctrl.isLiveVideo --> </div> </div> <!-- ngIf: $ctrl.newVideo && !$ctrl.isLiveNow --> <!-- ngIf: $ctrl.isLiveNow --> <!-- ngIf: $ctrl.videoItem.userData.isViewed && $ctrl.playbackProgressPercentage --> </div> <div class="detail-container"> <h3 class="title-heading"> <div class="video-item-title"> <!-- ngIf: $ctrl.isLiveVideo --> <span class="video-title-text ng-binding ng-isolate-scope" aria-hidden="true" custom-title="Sensata COVID-19 Public Service Announcement" light-style="true" on-ellipsis-only="true"> Sensata COVID-19 Public Service Announcement </span> </div> </h3> <div class="video-footer"> <!-- ngIf: $ctrl.isLiveVideo --> <!-- ngIf: $ctrl.isLiveVideo && !$ctrl.isLiveVideoCompleted --> <!-- ngIf: $ctrl.isVideoBeingPlayed --> <!-- ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --><video-statistics ng-if="(!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed" video-statistics-data="$ctrl.videoItem.metrics" hide-likes="true" class="ng-scope ng-isolate-scope"><!-- ngIf: !$ctrl.showIcons --><span class="video-statistics ng-scope" ng-if="!$ctrl.showIcons"> <span class="view-count ng-binding"> 871 views </span> <!-- ngIf: !$ctrl.hideLikes --> <!-- ngIf: $ctrl.showComments --> </span><!-- 
end ngIf: !$ctrl.showIcons --> <!-- ngIf: $ctrl.showIcons --></video-statistics><!-- end ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --> </div> </div> </a> </div></responsive-video-item> </li><!-- end ngRepeat: relatedVideoItem in $ctrl.relatedVideos --><li class="related-list-item column xsmall-12 ng-scope" ng-class="{'end' : 2 === $ctrl.relatedVideos.length - 1}" ng-repeat="relatedVideoItem in $ctrl.relatedVideos"> <responsive-video-item video-item="relatedVideoItem" query-param="$ctrl.queryStrings" item-clicked="$ctrl.videoItemClicked()" list-style="true" auto-play-status="$ctrl.autoPlayDefault" related-videos="$ctrl.relatedVideos" class="ng-isolate-scope"><div class="responsive-video-item list-style" ng-class="{'list-style' : $ctrl.listStyle}"> <a ng-href="/video/90163e75-101b-4223-8fcf-bb4a29e68989" ng-click="$ctrl.videoClicked()" aria-label=" Video Global Sales Recognition Event, duration is 42 minutes 1 second, number of views is 52 " class="video-item-image-link" href="/video/90163e75-101b-4223-8fcf-bb4a29e68989"> <div class="video-item-thumb"> <div class="video-item-image-link"> <div class="video-item-image-container"> <!-- ngIf: !$ctrl.isImageUrlsEmpty --><img ng-if="!$ctrl.isImageUrlsEmpty" set-img-src="{"extraSmall":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/9b730d73-1820-4808-9543-bac24bdf4e97/MS1ndmh4Y2t0bG9kc2picGVoZ3BraXFrNmRtaF8yXzgweDQ1LnBuZw?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=dko4htzgJIHOhd5VSRvv0rP751niFrrFWD4ysHEbuiU%3d"},"small":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/9b730d73-1820-4808-9543-bac24bdf4e97/MS1ndmh4Y2t0bG9kc2picGVoZ3BraXFrNmRtaF8yXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=cZVA9wv%2bxPmjx6zoQUYXpfFiBDR6Om%2fCZyc2d%2fuAcZs%3d"},"medium":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/9b730d73-1820-4808-9543-bac24bdf4e97/MS1ndmh4Y2t0bG9kc2picGVoZ3BraXFrNmRtaF8yXzMyMHgxODAucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=5fmyuj9FZQROxdW9EPiCIHj5Pb4wvxBf%2f6hxSERD1q8%3d"},"large":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/9b730d73-1820-4808-9543-bac24bdf4e97/MS1ndmh4Y2t0bG9kc2picGVoZ3BraXFrNmRtaF8yXzY0MHgzNjAucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=tby8X7qPqWwVdkcE5d1GnklmYuzoFlzdx8qD4BhfAz4%3d"}}" alt="Global Sales Recognition Event" aria-hidden="true" class="ng-scope" style="display: block;" src="https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/9b730d73-1820-4808-9543-bac24bdf4e97/MS1ndmh4Y2t0bG9kc2picGVoZ3BraXFrNmRtaF8yXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=cZVA9wv%2bxPmjx6zoQUYXpfFiBDR6Om%2fCZyc2d%2fuAcZs%3d"><!-- end ngIf: !$ctrl.isImageUrlsEmpty --> <!-- ngIf: $ctrl.isImageUrlsEmpty && $ctrl.isLiveVideo --> <!-- ngIf: !$ctrl.isLiveVideo --><div ng-if="!$ctrl.isLiveVideo" class="video-item-thumb-duration ng-binding ng-scope"> 42:01 </div><!-- end ngIf: !$ctrl.isLiveVideo --> </div> </div> <!-- ngIf: $ctrl.newVideo && !$ctrl.isLiveNow --> <!-- ngIf: 
$ctrl.isLiveNow --> <!-- ngIf: $ctrl.videoItem.userData.isViewed && $ctrl.playbackProgressPercentage --> </div> <div class="detail-container"> <h3 class="title-heading"> <div class="video-item-title"> <!-- ngIf: $ctrl.isLiveVideo --> <span class="video-title-text ng-binding ng-isolate-scope" aria-hidden="true" custom-title="Global Sales Recognition Event" light-style="true" on-ellipsis-only="true"> Global Sales Recognition Event </span> </div> </h3> <div class="video-footer"> <!-- ngIf: $ctrl.isLiveVideo --> <!-- ngIf: $ctrl.isLiveVideo && !$ctrl.isLiveVideoCompleted --> <!-- ngIf: $ctrl.isVideoBeingPlayed --> <!-- ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --><video-statistics ng-if="(!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed" video-statistics-data="$ctrl.videoItem.metrics" hide-likes="true" class="ng-scope ng-isolate-scope"><!-- ngIf: !$ctrl.showIcons --><span class="video-statistics ng-scope" ng-if="!$ctrl.showIcons"> <span class="view-count ng-binding"> 52 views </span> <!-- ngIf: !$ctrl.hideLikes --> <!-- ngIf: $ctrl.showComments --> </span><!-- end ngIf: !$ctrl.showIcons --> <!-- ngIf: $ctrl.showIcons --></video-statistics><!-- end ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --> </div> </div> </a> </div></responsive-video-item> </li><!-- end ngRepeat: relatedVideoItem in $ctrl.relatedVideos --><li class="related-list-item column xsmall-12 ng-scope" ng-class="{'end' : 3 === $ctrl.relatedVideos.length - 1}" ng-repeat="relatedVideoItem in $ctrl.relatedVideos"> <responsive-video-item video-item="relatedVideoItem" query-param="$ctrl.queryStrings" item-clicked="$ctrl.videoItemClicked()" list-style="true" auto-play-status="$ctrl.autoPlayDefault" related-videos="$ctrl.relatedVideos" class="ng-isolate-scope"><div class="responsive-video-item list-style" ng-class="{'list-style' : $ctrl.listStyle}"> <a ng-href="/video/8cbe9a33-8504-491b-b42b-fd11fb7eeb70" ng-click="$ctrl.videoClicked()" aria-label=" Video ¿Por qué es importante para ti prevenir el COVID-19?, duration is 1 minute 37 seconds, number of views is 61 " class="video-item-image-link" href="/video/8cbe9a33-8504-491b-b42b-fd11fb7eeb70"> <div class="video-item-thumb"> <div class="video-item-image-link"> <div class="video-item-image-container"> <!-- ngIf: !$ctrl.isImageUrlsEmpty --><img ng-if="!$ctrl.isImageUrlsEmpty" 
set-img-src="{"extraSmall":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/eed2cd84-0161-4821-9d6b-963aab5081fa/MS11Z2dkYjdlMjRnYjVxZHN0c2pyNWZkN29yYl8yXzgweDQ1LnBuZw?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=jG3xrEDNBLeKSkkbSJtxgjhlZGuOScoIZHFGA7qi83Q%3d"},"small":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/eed2cd84-0161-4821-9d6b-963aab5081fa/MS11Z2dkYjdlMjRnYjVxZHN0c2pyNWZkN29yYl8yXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=nA4q4Tkd2JCBvJbla4B4G65OtpcU%2b%2bvRKTatdDBM6zI%3d"},"medium":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/eed2cd84-0161-4821-9d6b-963aab5081fa/MS11Z2dkYjdlMjRnYjVxZHN0c2pyNWZkN29yYl8yXzMyMHgxODAucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=gDZQKKM%2bB8XvI20W2mmlB6jeKvpixtbaxb4mFdamXLs%3d"},"large":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/eed2cd84-0161-4821-9d6b-963aab5081fa/MS11Z2dkYjdlMjRnYjVxZHN0c2pyNWZkN29yYl8yXzY0MHgzNjAucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=%2fPXDmhFKbBbG10pcFJRV1VY%2bDeMcF7iKkreKuM%2bBuNI%3d"}}" alt="¿Por qué es importante para ti prevenir el COVID-19?" aria-hidden="true" class="ng-scope" style="display: block;" src="https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/eed2cd84-0161-4821-9d6b-963aab5081fa/MS11Z2dkYjdlMjRnYjVxZHN0c2pyNWZkN29yYl8yXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=nA4q4Tkd2JCBvJbla4B4G65OtpcU%2b%2bvRKTatdDBM6zI%3d"><!-- end ngIf: !$ctrl.isImageUrlsEmpty --> <!-- ngIf: $ctrl.isImageUrlsEmpty && $ctrl.isLiveVideo --> <!-- ngIf: !$ctrl.isLiveVideo --><div ng-if="!$ctrl.isLiveVideo" class="video-item-thumb-duration ng-binding ng-scope"> 01:37 </div><!-- end ngIf: !$ctrl.isLiveVideo --> </div> </div> <!-- ngIf: $ctrl.newVideo && !$ctrl.isLiveNow --> <!-- ngIf: $ctrl.isLiveNow --> <!-- ngIf: $ctrl.videoItem.userData.isViewed && $ctrl.playbackProgressPercentage --> </div> <div class="detail-container"> <h3 class="title-heading"> <div class="video-item-title"> <!-- ngIf: $ctrl.isLiveVideo --> <span class="video-title-text ng-binding ng-isolate-scope" aria-hidden="true" custom-title="¿Por qué es importante para ti prevenir el COVID-19?" light-style="true" on-ellipsis-only="true"> ¿Por qué es importante para ti prevenir el COVID-19? 
</span> </div> </h3> <div class="video-footer"> <!-- ngIf: $ctrl.isLiveVideo --> <!-- ngIf: $ctrl.isLiveVideo && !$ctrl.isLiveVideoCompleted --> <!-- ngIf: $ctrl.isVideoBeingPlayed --> <!-- ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --><video-statistics ng-if="(!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed" video-statistics-data="$ctrl.videoItem.metrics" hide-likes="true" class="ng-scope ng-isolate-scope"><!-- ngIf: !$ctrl.showIcons --><span class="video-statistics ng-scope" ng-if="!$ctrl.showIcons"> <span class="view-count ng-binding"> 61 views </span> <!-- ngIf: !$ctrl.hideLikes --> <!-- ngIf: $ctrl.showComments --> </span><!-- end ngIf: !$ctrl.showIcons --> <!-- ngIf: $ctrl.showIcons --></video-statistics><!-- end ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --> </div> </div> </a> </div></responsive-video-item> </li><!-- end ngRepeat: relatedVideoItem in $ctrl.relatedVideos --><li class="related-list-item column xsmall-12 ng-scope" ng-class="{'end' : 4 === $ctrl.relatedVideos.length - 1}" ng-repeat="relatedVideoItem in $ctrl.relatedVideos"> <responsive-video-item video-item="relatedVideoItem" query-param="$ctrl.queryStrings" item-clicked="$ctrl.videoItemClicked()" list-style="true" auto-play-status="$ctrl.autoPlayDefault" related-videos="$ctrl.relatedVideos" class="ng-isolate-scope"><div class="responsive-video-item list-style" ng-class="{'list-style' : $ctrl.listStyle}"> <a ng-href="/video/a8e74ddd-8241-4aea-b5b3-ac00719bab9b" ng-click="$ctrl.videoClicked()" aria-label=" Video SmarterTogether with Jeff Cote - July 9, 2020 Update, duration is 3 minutes 31 seconds, number of views is 1136 " class="video-item-image-link" href="/video/a8e74ddd-8241-4aea-b5b3-ac00719bab9b"> <div class="video-item-thumb"> <div class="video-item-image-link"> <div class="video-item-image-container"> <!-- ngIf: !$ctrl.isImageUrlsEmpty --><img ng-if="!$ctrl.isImageUrlsEmpty" set-img-src="{"extraSmall":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/455822ac-a5f4-4839-8b9d-89ef7f0ade89/MS02ZW9veHlmdWJneG5xZWRsNXVheWdheGNiZV8xXzgweDQ1LnBuZw?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=ty6Eu98yjhXkYdjB2wPNwgssCskw5LlwM%2frdCBdpDlA%3d"},"small":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/455822ac-a5f4-4839-8b9d-89ef7f0ade89/MS02ZW9veHlmdWJneG5xZWRsNXVheWdheGNiZV8xXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=NE%2bk%2bTFGV7%2ftmqAN8zSFBQ8WSwEo5A%2fHhzVvfUWxZVA%3d"},"medium":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/455822ac-a5f4-4839-8b9d-89ef7f0ade89/MS02ZW9veHlmdWJneG5xZWRsNXVheWdheGNiZV8xXzMyMHgxODAucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=3%2fqwH8frvYkl8thFLiPAHPDIANaL%2f%2bRK035BwdVVJ8E%3d"},"large":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/455822ac-a5f4-4839-8b9d-89ef7f0ade89/MS02ZW9veHlmdWJneG5xZWRsNXVheWdheGNiZV8xXzY0MHgzNjAucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=In1hYj4ITMwOAG1WVZYphdgok7yJZy7XIFfTZ6dVYCg%3d"}}" alt="SmarterTogether with Jeff Cote - July 9, 2020 Update" 
aria-hidden="true" class="ng-scope" style="display: block;" src="https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/455822ac-a5f4-4839-8b9d-89ef7f0ade89/MS02ZW9veHlmdWJneG5xZWRsNXVheWdheGNiZV8xXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=NE%2bk%2bTFGV7%2ftmqAN8zSFBQ8WSwEo5A%2fHhzVvfUWxZVA%3d"><!-- end ngIf: !$ctrl.isImageUrlsEmpty --> <!-- ngIf: $ctrl.isImageUrlsEmpty && $ctrl.isLiveVideo --> <!-- ngIf: !$ctrl.isLiveVideo --><div ng-if="!$ctrl.isLiveVideo" class="video-item-thumb-duration ng-binding ng-scope"> 03:31 </div><!-- end ngIf: !$ctrl.isLiveVideo --> </div> </div> <!-- ngIf: $ctrl.newVideo && !$ctrl.isLiveNow --> <!-- ngIf: $ctrl.isLiveNow --> <!-- ngIf: $ctrl.videoItem.userData.isViewed && $ctrl.playbackProgressPercentage --> </div> <div class="detail-container"> <h3 class="title-heading"> <div class="video-item-title"> <!-- ngIf: $ctrl.isLiveVideo --> <span class="video-title-text ng-binding ng-isolate-scope" aria-hidden="true" custom-title="SmarterTogether with Jeff Cote - July 9, 2020 Update" light-style="true" on-ellipsis-only="true"> SmarterTogether with Jeff Cote - July 9, 2020 Update </span> </div> </h3> <div class="video-footer"> <!-- ngIf: $ctrl.isLiveVideo --> <!-- ngIf: $ctrl.isLiveVideo && !$ctrl.isLiveVideoCompleted --> <!-- ngIf: $ctrl.isVideoBeingPlayed --> <!-- ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --><video-statistics ng-if="(!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed" video-statistics-data="$ctrl.videoItem.metrics" hide-likes="true" class="ng-scope ng-isolate-scope"><!-- ngIf: !$ctrl.showIcons --><span class="video-statistics ng-scope" ng-if="!$ctrl.showIcons"> <span class="view-count ng-binding"> 1,136 views </span> <!-- ngIf: !$ctrl.hideLikes --> <!-- ngIf: $ctrl.showComments --> </span><!-- end ngIf: !$ctrl.showIcons --> <!-- ngIf: $ctrl.showIcons --></video-statistics><!-- end ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --> </div> </div> </a> </div></responsive-video-item> </li><!-- end ngRepeat: relatedVideoItem in $ctrl.relatedVideos --><li class="related-list-item column xsmall-12 ng-scope" ng-class="{'end' : 5 === $ctrl.relatedVideos.length - 1}" ng-repeat="relatedVideoItem in $ctrl.relatedVideos"> <responsive-video-item video-item="relatedVideoItem" query-param="$ctrl.queryStrings" item-clicked="$ctrl.videoItemClicked()" list-style="true" auto-play-status="$ctrl.autoPlayDefault" related-videos="$ctrl.relatedVideos" class="ng-isolate-scope"><div class="responsive-video-item list-style" ng-class="{'list-style' : $ctrl.listStyle}"> <a ng-href="/video/0a80e96e-0075-44c4-8df9-74ce52557e52" ng-click="$ctrl.videoClicked()" aria-label=" Video Global Safety - Returning to the Workplace (E), duration is 19 minutes 48 seconds, number of views is 99 " class="video-item-image-link" href="/video/0a80e96e-0075-44c4-8df9-74ce52557e52"> <div class="video-item-thumb"> <div class="video-item-image-link"> <div class="video-item-image-container"> <!-- ngIf: !$ctrl.isImageUrlsEmpty --><img ng-if="!$ctrl.isImageUrlsEmpty" 
aria-hidden="true" class="ng-scope" style="display: block;" src="https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/70d49280-5b7e-438e-bb1f-0f05150e13e4/MS1sMzRuanpqYTZnYng0cTR4cHh6Z2U2MmlsZF8yXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=4e%2bHhqbxuUha27VWPXr9K55mQL%2b1ufyKF7bOnoBNPDU%3d"><!-- end ngIf: !$ctrl.isImageUrlsEmpty --> <!-- ngIf: $ctrl.isImageUrlsEmpty && $ctrl.isLiveVideo --> <!-- ngIf: !$ctrl.isLiveVideo --><div ng-if="!$ctrl.isLiveVideo" class="video-item-thumb-duration ng-binding ng-scope"> 00:52 </div><!-- end ngIf: !$ctrl.isLiveVideo --> </div> </div> <!-- ngIf: $ctrl.newVideo && !$ctrl.isLiveNow --> <!-- ngIf: $ctrl.isLiveNow --> <!-- ngIf: $ctrl.videoItem.userData.isViewed && $ctrl.playbackProgressPercentage --> </div> <div class="detail-container"> <h3 class="title-heading"> <div class="video-item-title"> <!-- ngIf: $ctrl.isLiveVideo --> <span class="video-title-text ng-binding ng-isolate-scope" aria-hidden="true" custom-title="Si compartes comida, compartes el virus." light-style="true" on-ellipsis-only="true"> Si compartes comida, compartes el virus. </span> </div> </h3> <div class="video-footer"> <!-- ngIf: $ctrl.isLiveVideo --> <!-- ngIf: $ctrl.isLiveVideo && !$ctrl.isLiveVideoCompleted --> <!-- ngIf: $ctrl.isVideoBeingPlayed --> <!-- ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --><video-statistics ng-if="(!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed" video-statistics-data="$ctrl.videoItem.metrics" hide-likes="true" class="ng-scope ng-isolate-scope"><!-- ngIf: !$ctrl.showIcons --><span class="video-statistics ng-scope" ng-if="!$ctrl.showIcons"> <span class="view-count ng-binding"> 55 views </span> <!-- ngIf: !$ctrl.hideLikes --> <!-- ngIf: $ctrl.showComments --> </span><!-- end ngIf: !$ctrl.showIcons --> <!-- ngIf: $ctrl.showIcons --></video-statistics><!-- end ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --> </div> </div> </a> </div></responsive-video-item> </li><!-- end ngRepeat: relatedVideoItem in $ctrl.relatedVideos --><li class="related-list-item column xsmall-12 ng-scope" ng-class="{'end' : 21 === $ctrl.relatedVideos.length - 1}" ng-repeat="relatedVideoItem in $ctrl.relatedVideos"> <responsive-video-item video-item="relatedVideoItem" query-param="$ctrl.queryStrings" item-clicked="$ctrl.videoItemClicked()" list-style="true" auto-play-status="$ctrl.autoPlayDefault" related-videos="$ctrl.relatedVideos" class="ng-isolate-scope"><div class="responsive-video-item list-style" ng-class="{'list-style' : $ctrl.listStyle}"> <a ng-href="/video/b58b4f10-c43c-4664-8b18-1376be4c6b00" ng-click="$ctrl.videoClicked()" aria-label=" Video #InThisTogether Recap, duration is 1 minute 6 seconds, number of views is 194 " class="video-item-image-link" href="/video/b58b4f10-c43c-4664-8b18-1376be4c6b00"> <div class="video-item-thumb"> <div class="video-item-image-link"> <div class="video-item-image-container"> <!-- ngIf: !$ctrl.isImageUrlsEmpty --><img ng-if="!$ctrl.isImageUrlsEmpty" 
set-img-src="{"extraSmall":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/9a181aa7-4fda-4422-86ca-1b0c08664a71/NDAwZTIyNWEtM2E0Yy00YTA3LWI0YWMtYTM2ZjRkNzc0NWFiLXBvc3Rlcl9leHRyYXNtYWxsLnBuZw?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=RNCpaHn%2fkji2tXMkwPaIkWkJ0lE5cWQmuj2b1pbAtR8%3d"},"small":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/9a181aa7-4fda-4422-86ca-1b0c08664a71/NDAwZTIyNWEtM2E0Yy00YTA3LWI0YWMtYTM2ZjRkNzc0NWFiLXBvc3Rlcl9zbWFsbC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=ZS8Dc7ofarSRRSKdAxIfhEbIUvwccn0gtHZfTtY0UVM%3d"},"medium":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/9a181aa7-4fda-4422-86ca-1b0c08664a71/NDAwZTIyNWEtM2E0Yy00YTA3LWI0YWMtYTM2ZjRkNzc0NWFiLXBvc3Rlcl9tZWRpdW0ucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=vt%2fnfy0iLgAbKOCWd96smAbDxiGqvRXcusA%2bPobxUus%3d"},"large":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/9a181aa7-4fda-4422-86ca-1b0c08664a71/NDAwZTIyNWEtM2E0Yy00YTA3LWI0YWMtYTM2ZjRkNzc0NWFiLXBvc3Rlcl9sYXJnZS5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=ruvtKwAOHoxETRuUkTspczv6KITGV2%2fGdODioltWdMQ%3d"}}" alt="#InThisTogether Recap" aria-hidden="true" class="ng-scope" style="display: block;" src="https://uswe-1-content.api.microsoftstream.com/api/thumbnails/9a181aa7-4fda-4422-86ca-1b0c08664a71/NDAwZTIyNWEtM2E0Yy00YTA3LWI0YWMtYTM2ZjRkNzc0NWFiLXBvc3Rlcl9zbWFsbC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=ZS8Dc7ofarSRRSKdAxIfhEbIUvwccn0gtHZfTtY0UVM%3d"><!-- end ngIf: !$ctrl.isImageUrlsEmpty --> <!-- ngIf: $ctrl.isImageUrlsEmpty && $ctrl.isLiveVideo --> <!-- ngIf: !$ctrl.isLiveVideo --><div ng-if="!$ctrl.isLiveVideo" class="video-item-thumb-duration ng-binding ng-scope"> 01:06 </div><!-- end ngIf: !$ctrl.isLiveVideo --> </div> </div> <!-- ngIf: $ctrl.newVideo && !$ctrl.isLiveNow --> <!-- ngIf: $ctrl.isLiveNow --> <!-- ngIf: $ctrl.videoItem.userData.isViewed && $ctrl.playbackProgressPercentage --> </div> <div class="detail-container"> <h3 class="title-heading"> <div class="video-item-title"> <!-- ngIf: $ctrl.isLiveVideo --> <span class="video-title-text ng-binding ng-isolate-scope" aria-hidden="true" custom-title="#InThisTogether Recap" light-style="true" on-ellipsis-only="true"> #InThisTogether Recap </span> </div> </h3> <div class="video-footer"> <!-- ngIf: $ctrl.isLiveVideo --> <!-- ngIf: $ctrl.isLiveVideo && !$ctrl.isLiveVideoCompleted --> <!-- ngIf: $ctrl.isVideoBeingPlayed --> <!-- ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --><video-statistics ng-if="(!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed" video-statistics-data="$ctrl.videoItem.metrics" hide-likes="true" class="ng-scope ng-isolate-scope"><!-- ngIf: !$ctrl.showIcons --><span class="video-statistics ng-scope" ng-if="!$ctrl.showIcons"> <span class="view-count ng-binding"> 194 views </span> <!-- ngIf: !$ctrl.hideLikes --> <!-- ngIf: $ctrl.showComments --> </span><!-- end ngIf: !$ctrl.showIcons --> <!-- ngIf: $ctrl.showIcons 
--></video-statistics><!-- end ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --> </div> </div> </a> </div></responsive-video-item> </li><!-- end ngRepeat: relatedVideoItem in $ctrl.relatedVideos --><li class="related-list-item column xsmall-12 ng-scope" ng-class="{'end' : 22 === $ctrl.relatedVideos.length - 1}" ng-repeat="relatedVideoItem in $ctrl.relatedVideos"> <responsive-video-item video-item="relatedVideoItem" query-param="$ctrl.queryStrings" item-clicked="$ctrl.videoItemClicked()" list-style="true" auto-play-status="$ctrl.autoPlayDefault" related-videos="$ctrl.relatedVideos" class="ng-isolate-scope"><div class="responsive-video-item list-style" ng-class="{'list-style' : $ctrl.listStyle}"> <a ng-href="/video/a79120f1-58f2-46fa-a259-8f818c9e4f31" ng-click="$ctrl.videoClicked()" aria-label=" Video EU RTC 2020 Session 3, duration is 2 hours 21 minutes 34 seconds, number of views is 1 " class="video-item-image-link" href="/video/a79120f1-58f2-46fa-a259-8f818c9e4f31"> <div class="video-item-thumb"> <div class="video-item-image-link"> <div class="video-item-image-container"> <!-- ngIf: !$ctrl.isImageUrlsEmpty --><img ng-if="!$ctrl.isImageUrlsEmpty" set-img-src="{"extraSmall":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/60f57c7e-8b6f-4924-93e1-41aaa37d4280/NWYxMjlkNGUtN2FhZi00MmUyLTg0ZjUtMWNmYjc3NTZlZjVkLXBvc3Rlcl9leHRyYXNtYWxsLnBuZw?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=gNVUe6rLvr2H4i8MWaDCmtVNZPCDjVML%2f3LEtktWxZA%3d"},"small":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/60f57c7e-8b6f-4924-93e1-41aaa37d4280/NWYxMjlkNGUtN2FhZi00MmUyLTg0ZjUtMWNmYjc3NTZlZjVkLXBvc3Rlcl9zbWFsbC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=PSVCsZ6yLxMmq%2bsahyQuJwMxFClLfnY8SShHjODeWkQ%3d"},"medium":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/60f57c7e-8b6f-4924-93e1-41aaa37d4280/NWYxMjlkNGUtN2FhZi00MmUyLTg0ZjUtMWNmYjc3NTZlZjVkLXBvc3Rlcl9tZWRpdW0ucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=Hfb9r2b6cmCq%2beGvSPTEV%2bVd7JXspvABSzxSz%2by6ldg%3d"},"large":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/60f57c7e-8b6f-4924-93e1-41aaa37d4280/NWYxMjlkNGUtN2FhZi00MmUyLTg0ZjUtMWNmYjc3NTZlZjVkLXBvc3Rlcl9sYXJnZS5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=dOIPQeCWJK9dcGKkGa14p%2fMhHifL9y%2bQZ5FjorgUN7s%3d"}}" alt="EU RTC 2020 Session 3" aria-hidden="true" class="ng-scope" style="display: block;" src="https://uswe-1-content.api.microsoftstream.com/api/thumbnails/2/60f57c7e-8b6f-4924-93e1-41aaa37d4280/NWYxMjlkNGUtN2FhZi00MmUyLTg0ZjUtMWNmYjc3NTZlZjVkLXBvc3Rlcl9zbWFsbC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=PSVCsZ6yLxMmq%2bsahyQuJwMxFClLfnY8SShHjODeWkQ%3d"><!-- end ngIf: !$ctrl.isImageUrlsEmpty --> <!-- ngIf: $ctrl.isImageUrlsEmpty && $ctrl.isLiveVideo --> <!-- ngIf: !$ctrl.isLiveVideo --><div ng-if="!$ctrl.isLiveVideo" class="video-item-thumb-duration ng-binding ng-scope"> 02:21:34 </div><!-- end ngIf: !$ctrl.isLiveVideo --> </div> </div> <!-- ngIf: $ctrl.newVideo && 
!$ctrl.isLiveNow --> <!-- ngIf: $ctrl.isLiveNow --> <!-- ngIf: $ctrl.videoItem.userData.isViewed && $ctrl.playbackProgressPercentage --> </div> <div class="detail-container"> <h3 class="title-heading"> <div class="video-item-title"> <!-- ngIf: $ctrl.isLiveVideo --> <span class="video-title-text ng-binding ng-isolate-scope" aria-hidden="true" custom-title="EU RTC 2020 Session 3" light-style="true" on-ellipsis-only="true"> EU RTC 2020 Session 3 </span> </div> </h3> <div class="video-footer"> <!-- ngIf: $ctrl.isLiveVideo --> <!-- ngIf: $ctrl.isLiveVideo && !$ctrl.isLiveVideoCompleted --> <!-- ngIf: $ctrl.isVideoBeingPlayed --> <!-- ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --><video-statistics ng-if="(!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed" video-statistics-data="$ctrl.videoItem.metrics" hide-likes="true" class="ng-scope ng-isolate-scope"><!-- ngIf: !$ctrl.showIcons --><span class="video-statistics ng-scope" ng-if="!$ctrl.showIcons"> <span class="view-count ng-binding"> 1 view </span> <!-- ngIf: !$ctrl.hideLikes --> <!-- ngIf: $ctrl.showComments --> </span><!-- end ngIf: !$ctrl.showIcons --> <!-- ngIf: $ctrl.showIcons --></video-statistics><!-- end ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --> </div> </div> </a> </div></responsive-video-item> </li><!-- end ngRepeat: relatedVideoItem in $ctrl.relatedVideos --><li class="related-list-item column xsmall-12 ng-scope end" ng-class="{'end' : 23 === $ctrl.relatedVideos.length - 1}" ng-repeat="relatedVideoItem in $ctrl.relatedVideos"> <responsive-video-item video-item="relatedVideoItem" query-param="$ctrl.queryStrings" item-clicked="$ctrl.videoItemClicked()" list-style="true" auto-play-status="$ctrl.autoPlayDefault" related-videos="$ctrl.relatedVideos" class="ng-isolate-scope"><div class="responsive-video-item list-style" ng-class="{'list-style' : $ctrl.listStyle}"> <a ng-href="/video/d21795a2-ab2b-44ec-9fc4-3d9f9eeedddf" ng-click="$ctrl.videoClicked()" aria-label=" Video HD -B1 System Demo, duration is 1 hour 10 minutes 48 seconds, number of views is 2 " class="video-item-image-link" href="/video/d21795a2-ab2b-44ec-9fc4-3d9f9eeedddf"> <div class="video-item-thumb"> <div class="video-item-image-link"> <div class="video-item-image-container"> <!-- ngIf: !$ctrl.isImageUrlsEmpty --><img ng-if="!$ctrl.isImageUrlsEmpty" 
set-img-src="{"extraSmall":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/dc0592fa-1ff3-42ab-ae3c-49ed7d7b6fdc/MS01dXpleG5qdXF5eHFyem1vMnF4eHY3bmdvY18xXzgweDQ1LnBuZw?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=dEL%2fFp5%2b1uTvjYSDcdhxrF5cHs%2bTMzTVhAdL5NbSoh4%3d"},"small":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/dc0592fa-1ff3-42ab-ae3c-49ed7d7b6fdc/MS01dXpleG5qdXF5eHFyem1vMnF4eHY3bmdvY18xXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=JTYQ5jEKdfwrbqE8UeDwpN9X%2bI%2bn1NLSrF4QyDXRUZE%3d"},"medium":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/dc0592fa-1ff3-42ab-ae3c-49ed7d7b6fdc/MS01dXpleG5qdXF5eHFyem1vMnF4eHY3bmdvY18xXzMyMHgxODAucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=nJagbZxXRQHvUjzS60Kg07iAaEK4IGZB0HRl38lFVvY%3d"},"large":{"url":"https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/dc0592fa-1ff3-42ab-ae3c-49ed7d7b6fdc/MS01dXpleG5qdXF5eHFyem1vMnF4eHY3bmdvY18xXzY0MHgzNjAucG5n?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=mc1%2fYti1VputAgIaPDPnYunVr%2bhZLvU3J4rPL%2bsQwpU%3d"}}" alt="HD -B1 System Demo" aria-hidden="true" class="ng-scope" style="display: block;" src="https://uswe-1-content.api.microsoftstream.com/api/thumbnails/1/dc0592fa-1ff3-42ab-ae3c-49ed7d7b6fdc/MS01dXpleG5qdXF5eHFyem1vMnF4eHY3bmdvY18xXzE2MHg5MC5wbmc?validTill=2020-07-26T00%3a00%3a00.0000000Z&aadUserOId=79c34bfa-ad42-4aae-a681-1edc26a3b11e&encoding=base64&api-version=1.4-private&signature=JTYQ5jEKdfwrbqE8UeDwpN9X%2bI%2bn1NLSrF4QyDXRUZE%3d"><!-- end ngIf: !$ctrl.isImageUrlsEmpty --> <!-- ngIf: $ctrl.isImageUrlsEmpty && $ctrl.isLiveVideo --> <!-- ngIf: !$ctrl.isLiveVideo --><div ng-if="!$ctrl.isLiveVideo" class="video-item-thumb-duration ng-binding ng-scope"> 01:10:48 </div><!-- end ngIf: !$ctrl.isLiveVideo --> </div> </div> <!-- ngIf: $ctrl.newVideo && !$ctrl.isLiveNow --> <!-- ngIf: $ctrl.isLiveNow --> <!-- ngIf: $ctrl.videoItem.userData.isViewed && $ctrl.playbackProgressPercentage --> </div> <div class="detail-container"> <h3 class="title-heading"> <div class="video-item-title"> <!-- ngIf: $ctrl.isLiveVideo --> <span class="video-title-text ng-binding ng-isolate-scope" aria-hidden="true" custom-title="HD -B1 System Demo" light-style="true" on-ellipsis-only="true"> HD -B1 System Demo </span> </div> </h3> <div class="video-footer"> <!-- ngIf: $ctrl.isLiveVideo --> <!-- ngIf: $ctrl.isLiveVideo && !$ctrl.isLiveVideoCompleted --> <!-- ngIf: $ctrl.isVideoBeingPlayed --> <!-- ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed --><video-statistics ng-if="(!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && !$ctrl.isVideoBeingPlayed" video-statistics-data="$ctrl.videoItem.metrics" hide-likes="true" class="ng-scope ng-isolate-scope"><!-- ngIf: !$ctrl.showIcons --><span class="video-statistics ng-scope" ng-if="!$ctrl.showIcons"> <span class="view-count ng-binding"> 2 views </span> <!-- ngIf: !$ctrl.hideLikes --> <!-- ngIf: $ctrl.showComments --> </span><!-- end ngIf: !$ctrl.showIcons --> <!-- ngIf: $ctrl.showIcons --></video-statistics><!-- end ngIf: (!$ctrl.isLiveVideo || $ctrl.isLiveVideoCompleted) && 
!$ctrl.isVideoBeingPlayed --> </div> </div> </a> </div></responsive-video-item> </li><!-- end ngRepeat: relatedVideoItem in $ctrl.relatedVideos --> </ul> </div><!-- end ngIf: $ctrl.relatedVideos.length && $ctrl.hideRelatedVideoDiv() --> <!-- ngIf: $ctrl.secondaryRelatedVideos.length --> </div></related-videos><!-- end ngIf: $ctrl.video && !$ctrl.isWatchList --> <!-- ngIf: $ctrl.video && $ctrl.isWatchList --> </div></div> </div> </section><!-- end ngIf: $ctrl.video --> </div> </div> </div></video-page><!-- end ngIf: Video.videoId -->
</div>
</div>
<footer class="ng-isolate-scope"><div class="row column" ng-show="$ctrl.videoEditorInactive" aria-hidden="false"> <section class="footer-section"> <ul class="c-list f-bare footer-link-list-left"> <li> <a ng-href="/usersettings" aria-label=""Content Language Selector. Currently set to English"" class="language-selector-link ng-binding" href="/usersettings"> <span class="c-glyph glyph-world" aria-hidden="true"></span> English </a> </li> <li> <a external-link="" class="c-hyperlink footer-link company-policy-link ng-binding" ng-href="" target="_blank"> </a> </li> </ul> <ul class="c-list f-bare footer-link-list-right"> <li> <a external-link="" class="c-hyperlink footer-link contact-us-link ng-binding" ng-href="http://go.microsoft.com/fwlink/?LinkID=808783&clcid=0x0009" target="_blank" href="http://go.microsoft.com/fwlink/?LinkID=808783&clcid=0x0009"> Contact us </a> </li> <li> <a external-link="" class="c-hyperlink footer-link privacy-and-cookies-link ng-binding" ng-href="http://go.microsoft.com/fwlink/?LinkID=808237&clcid=0x0009" target="_blank" href="http://go.microsoft.com/fwlink/?LinkID=808237&clcid=0x0009"> Privacy & cookies </a> </li> <li> <a external-link="" class="c-hyperlink footer-link terms-of-use-link ng-binding" ng-href="http://go.microsoft.com/fwlink/?LinkID=808780&clcid=0x0009" target="_blank" href="http://go.microsoft.com/fwlink/?LinkID=808780&clcid=0x0009"> Terms of use </a> </li> <li> <a external-link="" class="c-hyperlink footer-link third-party-notices-link ng-binding" href="https://amsglob0cdnstream13.azureedge.net/1-0-2141-2/bundles/app/generated/thirdpartynotices/notice.txt" download="NOTICE" target="_blank"> Third party notices </a> </li> <li> <a external-link="" class="c-hyperlink footer-link terms-and-conditions-link ng-binding" ng-href="http://go.microsoft.com/fwlink/?LinkID=808240&clcid=0x0009" target="_blank" href="http://go.microsoft.com/fwlink/?LinkID=808240&clcid=0x0009"> Terms & conditions </a> </li> <li dir="ltr" class="copyright ng-binding"> © 2020 Microsoft </li> </ul> </section> </div></footer></div><!-- end ngIf: loadPage -->
<!-- ngIf: !loadPage -->
<style>.obf-OverallAnchor:focus { background-color: #93003D } .obf-OverallAnchor:hover { background-color: #C30052 } .obf-OverallAnchorActive { background-color: #93003D } .obf-SpinnerCircle { background-color: #93003D } .obf-ChoiceGroup input[type=radio]:checked+label>.obf-ChoiceGroupIcon { border-color: #93003D } .obf-ChoiceGroup input[type=radio]:hover+label>.obf-ChoiceGroupIcon { border-color: #C30052 } .obf-ChoiceGroup input[type=radio]:checked+label>.obf-ChoiceGroupIcon>span { background-color: #93003D } .obf-SubmitButton { background-color: #93003D } .obf-SubmitButton:hover { background-color: #C30052 } .obf-Link { color: #93003D } .obf-Link:hover { color: #C30052 } #obf-TPromptTitle { color: #93003D } #obf-TFormTitle { color: #93003D } </style><dropdown-overlay dropdown-overlay-name="uploadProgress" dropdown-overlay-class="topbarContextMenu uploadProgressContextMenu" parent-element="glyph-upload-progress" next-element="glyph-upload" class="ng-isolate-scope topbarContextMenu uploadProgressContextMenu overlay"><ng-transclude> <bg-upload-progress class="ng-scope ng-isolate-scope"><div class="bg-upload-progress"> <!-- ngRepeat: uploadingItem in $ctrl.uploadingList --> <!-- ngRepeat: otherUploadItem in $ctrl.nonUploadingList --> </div></bg-upload-progress> </ng-transclude></dropdown-overlay><div class="custom-title-body" collapsed="true" style="top: 0px; height: 0px; width: 0px; z-index: 21; left: 0px;">Search transcript</div><iframe src="https://outlook.office.com/owa/SuiteServiceProxy.aspx?suiteServiceUserName=amber-yan%40sensata.com&suiteServiceReturnUrl=https%3A%2F%2Fweb.microsoftstream.com%2Fvideo%2F59035293-9751-4eae-88d6-7a235a553791&returnUrl=https%3A%2F%2Fweb.microsoftstream.com%2Fvideo%2F59035293-9751-4eae-88d6-7a235a553791&apiver=1" id="o365shellowaframe" name="o365shellowaframe" style="display: none;" sandbox="allow-scripts allow-same-origin allow-forms"></iframe><iframe src="https://webshell.suite.office.com/iframe/TokenFactoryIframe?origin=https%3A%2F%2Fweb.microsoftstream.com&shsid=dae9f462-8523-44d7-b4ea-f7ab9dcd8314&apiver=oneshell&cshver=20200719.1" id="o365shellwcssframe" name="o365shellwcssframe" style="display: none;" sandbox="allow-scripts allow-same-origin allow-forms"></iframe><script src="https://r4.res.office365.com/footprint/v3.2/scripts/fp-min.js" type="text/javascript" crossorigin="anonymous"></script></body></html>
"""
id="transcript-254e9ed2-49b2-4481-b1f7-345039944d8f" aria-posinset="1379" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5250 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:30 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1378" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> around. You know people that have taken a number of voluntary </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5253 seconds from beginning." 
id="transcript-a83df1f5-dd28-40f9-8262-e617edda9518" aria-posinset="1380" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5253 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:33 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1379" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> and involuntary actions will talk about all that on our next </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5256 seconds from beginning." 
id="transcript-dec7ed5c-25d3-4dd7-8085-ba121269fc2f" aria-posinset="1381" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5256 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:36 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1380" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> quarterly townhall. Obviously today was really focused on </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5258 seconds from beginning." 
id="transcript-f7256754-e9ba-4178-8559-909066aa6f2a" aria-posinset="1382" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5258 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:38 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1381" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> stint. Uhm, I want to thank everyone for participating and </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5261 seconds from beginning." 
id="transcript-31fc5c4f-4f9e-4eac-9b55-dbb1bdbbef55" aria-posinset="1383" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5261 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:41 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1382" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> thank you for working so hard </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5263 seconds from beginning." 
id="transcript-cd31625a-f693-42d6-83c6-6543117eb7cd" aria-posinset="1384" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5263 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:43 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1383" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> for Team Syncada. Please continue to stay safe and </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5266 seconds from beginning." 
id="transcript-819bcabc-e177-47f9-97eb-854d36c5b7b1" aria-posinset="1385" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5266 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:46 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1384" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> serve our customers and be well. Have a great day </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --><!-- ngIf: line --><li ng-if="line" ng-repeat="line in $vs_renderItems track by line.id" ng-class="$ctrl.shouldHighlight(line) ? 'highlight-line' : 'default-line'" role="listitem" ng-click="$ctrl.onTranscriptClicked(line)" aria-label="Clicking on this transcript will seek video to the time of 5268 seconds from beginning." 
id="transcript-4133dfb3-14c4-492f-9fc9-b76b4902aefa" aria-posinset="1386" aria-setsize="1386" class="transcript-line c-caption-1 ng-scope default-line" tabindex="0" style=""> <!-- ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --><div class="transcript-line-wrapper ng-scope" ng-if="$ctrl.areTranscriptNewEditButtonsEnabled"> <div class="transcript-timestamp new ng-binding" tabindex="-1" aria-label="Clicking on this transcript will seek video to the time of 5268 seconds from beginning." ng-click="$ctrl.onTranscriptClicked(line)" role="button"> 01:27:48 </div> <div class="transcript-text-container"> <div class="transcript-text new ng-binding" ng-class="{editable: $ctrl.isActiveEdit(line)}" ng-focus="$ctrl.onTranscriptFocused($event, line)" vlist-index="1385" ng-keydown="$ctrl.onTextKeyDown($event)" ng-blur="$ctrl.onTranscriptBlur()" contenteditable="$ctrl.isActiveEdit(line)"> everyone. </div> <div class="transcript-text-action-bar" ng-class="{visible: $ctrl.shouldShowActionBar(line)}" ng-switch="" on="$ctrl.transcriptSavingStatus"> <!-- ngSwitchWhen: 0 --> <!-- ngSwitchWhen: 1 --> <!-- ngSwitchWhen: 2 --> <!-- ngSwitchWhen: 3 --> <!-- ngSwitchDefault: --><div class="transcript-text-action-buttons ng-scope" ng-switch-default=""> <div class="transcript-text-action-left-buttons"> <button class="stream-btn discard-btn ng-binding" ng-click="$ctrl.discard()"> Discard </button> <button class="stream-btn save-btn ng-binding" type="submit" ng-class="{'btn-primary': $ctrl.isTranscriptModified}" ng-click="$ctrl.save($event, line)"> Save </button> </div> <div class="transcript-text-action-divider"></div> <div class="transcript-text-action-right-buttons"> <button class="stream-btn replay-btn ng-isolate-scope" type="submit" ng-click="$ctrl.replay($event, line)" aria-label="Clicking this button restarts playback from the start of the current transcript segment and sets the focus to the current transcript segment's editable text contents." 
ng-mouseover="$ctrl.applyActiveStyleToReplayButton()" ng-mouseleave="$ctrl.applyInactiveStyleToReplayButton()" ng-focus="$ctrl.applyActiveStyleToReplayButton()" ng-blur="$ctrl.applyInactiveStyleToReplayButton()" custom-title="Replay this video segment"> <svg-src svg-name="icon_replay" inline-with-text="true" class="replay-icon ng-isolate-scope"><span class="svg-container icon-inline" ng-class="{'icon-inline': $ctrl.inlineWithText}"> <svg class="svg-icon" focusable="false" aria-hidden="true"> <use xlink:href="#icon_replay" ng-attr-xlink:href="{{$ctrl.getSvgPath}}"></use> </svg> </span></svg-src> </button> </div> </div><!-- end ngSwitchWhen: --> </div> </div> </div><!-- end ngIf: $ctrl.areTranscriptNewEditButtonsEnabled --> <!-- ngIf: !$ctrl.areTranscriptNewEditButtonsEnabled --> </li><!-- end ngIf: line --><!-- end ngRepeat: line in $vs_renderItems track by line.id --> </ul> </ng-transclude> </div> </div></virtual-list><!-- end ngIf: $ctrl.transcriptLines.length --> <!-- ngIf: !$ctrl.isTranscriptEditV3Enabled --> </div> </div> </transcript> </ng-transclude> </div> </div> </div> </search-event><!-- end ngIf: $ctrl.servicePlans.capabilities.transcriptSearch && $ctrl.captionReady --> <!-- ngIf: !$ctrl.servicePlans.capabilities.transcriptSearch && $ctrl.captionReady --> </div> <div class="no-data-message-cont"> <!-- ngIf: $ctrl.isNoTranscriptData --> </div> </div></div> <div id="forms-right-tab-page" ng-class="{'live':$ctrl.isLiveNow || $ctrl.isLiveContentCreatedState}" ng-show="$ctrl.isRightTabSet(5)" aria-hidden="true" class="ng-hide"><div id="forms-container" class="forms-container"> <forms timed-events="$ctrl.timedEvents" show-controls="$ctrl.showFormControls" on-add-form="$ctrl.addForm(formEvent)" on-delete-form="$ctrl.deleteForm(formEvent)" player="$ctrl.player" on-form-clicked="$ctrl.jumpToForm(formEvent)" can-edit="$ctrl.isEditorOrInAdminMode" time-code="$ctrl.playerTime" video-id="$ctrl.videoId" player-session-id="$ctrl.playerSessionId" class="ng-isolate-scope"><div class="forms" dir="ltr"> <!-- ngIf: $ctrl.canEdit --> <!-- ngIf: $ctrl.timedEvents.length !== 0 --> <!-- ngIf: $ctrl.timedEvents.length === 0 && $ctrl.canEdit --> </div></forms> </div></div> </div> </div> </div> </div> </div><!-- end ngSwitchWhen: --> <!-- ngSwitchWhen: NotFound --> <!-- ngSwitchWhen: Forbidden --> <!-- ngIf: $ctrl.routingCase === 'Uploading' || $ctrl.routingCase === 'NotReady' || $ctrl.routingCase === 'Error' --> </div> </div> </section> <!-- ngIf: $ctrl.video --><section class="other-area ng-scope" ng-if="$ctrl.video" aria-label="section including comments and related videos" style=""> <div class="row"> <div class="column video-detail-container medium-12 large-8" ng-class="{'medium-12 large-12': $ctrl.display12Column, 'medium-12 large-8': $ctrl.showRightContainer || $ctrl.isCompletedOnLoad}" style="margin-top: 0px;"> <div class="bottom-navigation-container"> <tab-navigation class="bottom-navigation ng-isolate-scope" model="$ctrl.playerBottomTabOptions" value="$ctrl.currentBottomTab" direction="$ctrl.tabHeaderDirection" show-full-in-small="true"><nav class="tab-navigation c-in-page-navigation" ng-class="{'tab-navigation-v2': $ctrl.isV2StyleEnabled, 'c-in-page-navigation': !$ctrl.isVertical, 'vertical-navigation': $ctrl.isVertical, 'tab-navigation-minimal': $ctrl.minimalStyle}"> <ul class="tab-navigation-list show-for-small" role="tablist" ng-class="{'show-for-medium': !$ctrl.isVertical && !$ctrl.showFullInSmall, 'show-for-small': $ctrl.showFullInSmall}"> <!-- ngRepeat: option in $ctrl.model.options --><!-- 
ngIf: !option.hide --><li class="tab-navigation-item ng-scope details-bottom-tab f-active" role="presentation" ng-repeat="option in $ctrl.model.options" ng-if="!option.hide" ng-class="[option.className, {'f-active': $ctrl.value === option.value}]" style=""> <!-- ngIf: option.svgName --> <button class="c-action-trigger nav-link nav-link-option-0 f-active" role="tab" ng-class="{'f-active': $ctrl.value === option.value, 'item-error': option.value === $ctrl.errorTabValue}" ng-click="$ctrl.onTabClicked(option)" ng-disabled="option.disable" aria-posinset="1" aria-setsize="1" aria-selected="true"> <!-- ngIf: option.value === $ctrl.errorTabValue --> Details </button> <!-- ngIf: option.afterSvgName --> </li><!-- end ngIf: !option.hide --><!-- end ngRepeat: option in $ctrl.model.options --><!-- ngIf: !option.hide --><!-- end ngRepeat: option in $ctrl.model.options --><!-- ngIf: !option.hide --><!-- end ngRepeat: option in $ctrl.model.options --><!-- ngIf: !option.hide --><!-- end ngRepeat: option in $ctrl.model.options --><!-- ngIf: !option.hide --><!-- end ngRepeat: option in $ctrl.model.options --> </ul> <!-- ngIf: !$ctrl.isVertical --><ul class="tab-navigation-list ng-scope hide-for-small" ng-if="!$ctrl.isVertical" ng-class="{'hide-for-medium': !$ctrl.showFullInSmall, 'hide-for-small': $ctrl.showFullInSmall}"> <li class="tab-navigation-item"> <button class="c-action-trigger with-drawer nav-link nav-link-option-0 f-active" ng-class="{'f-active': $ctrl.value === $ctrl.activeOption.value, 'item-error': $ctrl.activeOption.value === $ctrl.errorTabValue}" ng-click="$ctrl.onTabClicked($ctrl.activeOption)"> <!-- ngIf: $ctrl.activeOption.value === $ctrl.errorTabValue --> Details </button> </li> <!-- ngIf: $ctrl.drawerItems --><li class="tab-navigation-item ng-scope" ng-if="$ctrl.drawerItems"> <drawer drawer-items="$ctrl.drawerItems" item="$ctrl.model" left-allign="$ctrl.leftAllign" fixed-elements="$ctrl.fixedElements" tab-navigation-drawer="true" class="ng-isolate-scope"><div class="drawer" action-list=""> <span ng-show="!$ctrl.transcludeButtonPresent" role="presentation" aria-hidden="false" class=""> <button name="dropdown" role="menu" class="c-action-trigger c-glyph drawer-button drawer-button-stream drawer-button-without-label ng-isolate-scope" aria-label="More actions" custom-title="More actions" aria-expanded="false" ng-click="return;"> <span class="x-screen-reader ng-binding">More actions</span> </button> </span> <span role="presentation" ng-show="$ctrl.transcludeButtonPresent" ng-transclude="transcludebutton" aria-hidden="true" class="ng-hide"> <ng-transclude></ng-transclude> </span> <ul class="drawer-content collapsed" ng-class="{'allign-opp': $ctrl.leftAllign}"> <!-- ngRepeat: item in $ctrl.drawerItems --> </ul> </div></drawer> </li><!-- end ngIf: $ctrl.drawerItems --> <li aria-label="Health" ng-class="$ctrl.svgClassName"> <!-- ngIf: $ctrl.isSvgAfterAvailable --> </li> </ul><!-- end ngIf: !$ctrl.isVertical --> </nav> </tab-navigation> <!-- ngIf: $ctrl.bottomTabDrawerItems.length --> </div> <div id="under-player-container" class="xsmall-12 small-12 medium-12 large-12"> <video-metadata ng-show="$ctrl.isBottomTabSet(0) && $ctrl.showMetadata" video="$ctrl.video" is-in-admin-mode="$ctrl.isInAdminMode" elevated-video-session-properties="$ctrl.elevatedVideoSessionProperties" video-page-session-properties="$ctrl.videoPageSessionProperties" player-options="$ctrl.playerOptions" dialog-interruption-handler="$ctrl.dialogInterruptionHandler" get-player-current-time="$ctrl.getCurrentTime()" 
drawer-items="$ctrl.drawerItems" view-settings="$ctrl.viewSettings" is-stream-admin="$ctrl.isStreamAdmin" is-live-content="$ctrl.isLiveContent" video-page-session-metrics="$ctrl.videoPageSessionMetrics" is-live-now="$ctrl.isLiveNow" is-completed="$ctrl.isCompleted" is-replacing="$ctrl.isReplacing" is-trimming="$ctrl.isTrimming" class="ng-isolate-scope" aria-hidden="false"><section> <div class="video-meta-container"> <div class="row row-size1"> <!-- ngIf: $ctrl.showErrorMessage --> </div> <div class="row column main-meta row-size0 remove-margin"> <div class="title-container title-with-completed" ng-class="{'title-with-live': $ctrl.isLiveNow, 'title-with-completed': $ctrl.isCompleted, 'title-before-live': $ctrl.isLiveContent && !$ctrl.isCompleted}"> <span class="live-label-cont"> <!-- ngIf: $ctrl.isLiveNow --> </span> <!-- ngIf: $ctrl.isLiveContent && !$ctrl.isCompleted && !$ctrl.isLiveNow && !$ctrl.isReplacing && !$ctrl.isTrimming --> <!-- ngIf: $ctrl.isLiveContent && $ctrl.isCompleted --> <h1 class="title ng-binding"> 2020 Global STINT Town Hall Meeting </h1> </div> <div class="creator-and-privacy"> <div class="video-create-message"> <video-creator item="$ctrl.video" get-creator-message-template="$ctrl.getCreateMessageTemplate()" class="ng-isolate-scope"><!-- ngIf: ::($ctrl.canDisplayCreator) --><span class="info-message-content ng-binding ng-scope" ng-if="::($ctrl.canDisplayCreator)">Published on 2020/7/22 by <a class="c-hyperlink info-message-link ng-binding" ng-href="/user/21bed756-c076-4add-9912-d49fcc3543d0" ng-click="$ctrl.cacheCreator()" href="/user/21bed756-c076-4add-9912-d49fcc3543d0">Cermak, Jennifer</a> </span><!-- end ngIf: ::($ctrl.canDisplayCreator) --></video-creator> </div> <div class="privacy-and-statistics"> <video-privacy class="video-privacy ng-isolate-scope" video-privacy-type="$ctrl.video.privacyMode"><p class="screen-reader-text ng-binding">organization</p> <div class="videoprivacy ng-isolate-scope" aria-hidden="true" custom-title="" light-style="$ctrl.lightCustomTitle"> <svg-src class="icon_privacytag ng-isolate-scope video-privacy-company" aria-label="Privacy" inline-with-text="true" svg-name="icon_privacy_my_org" ng-class="{
html = """
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf8" />
<title></title>
</head>
<body>
</body>
</html>
"""
d = pq(html)
print(d.html())
print(d.outerHtml())
print(d.outer_html())
###Output
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf8">
<title></title>
</head>
<body>
</body>
</html>
###Markdown
Tag selectors
###Code
d('title').append('Title1')
print(d.outer_html())
table = pq('<table><thead></thead><tbody></tbody></table>')
table('thead').append('<tr><th>code</th><th>close</th><th>vol</th><th>trade_date</th></tr>')
table('tbody').append('<tr><td>code</td><td>close</td><td>vol</td><td>trade_date</td></tr>')
print(table.outer_html())
d('body').append(table)
print(d.outer_html())
d('#test').html()
d.show()
d('#transcript-f067e512-76fd-4c66-8b37-2a0315364266 > div > div:nth-child(2) > div').html()
html.ng-scope
body.theme-for-videopage.-mouse.-should.-not.-have.-outline
div.ng-scope
div.office-scroll-body
div.ng-scope
video-page.ng-scope.ng-isolate-scope
div div.container.video
div.main-area
section.player-area div div div.ng-scope
div.video-player-container div.row div#player-right-container-wrapper.column.xsmall-12.small-6.medium-4.large-4 div#player-right-container-content.player-right-container-content
div#transcript-right-tab-page div#transcript-container
div#transcript-content.transcript-content
search-event.ng-scope.ng-isolate-scope
div.search-event div.content
div.transclude-content ng-transclude
transcript.ng-scope.ng-isolate-scope div.transcript div.transcript-inner.new
virtual-list.transcript-list.ng-scope.ng-isolate-scope div.virtual-list
div.scrollable-content ng-transclude ul.ng-scope
li#transcript-2f780f01-9733-4286-8a6f-8c96870f5690.transcript-line.c-caption-1.ng-scope.default-line
div.transcript-line-wrapper.ng-scope
#transcript-2f780f01-9733-4286-8a6f-8c96870f5690 > div:nth-child(1)
d('.transcript-line-wrapper').text()
d('.transcript-line-wrapper').contents()
t = d('div#transcript-content.transcript-content div.scrollable-content ng-transclude ul.ng-scope li')
t.text()
len(t)
t[0]
t.eq(0)
for i in range(len(t)):
    # t[i] is a bare lxml element; wrap it with .eq(i) to get a PyQuery object before querying inside it
    print(t.eq(i).find('div.transcript-line-wrapper.ng-scope div.transcript-text.new.ng-binding').contents())
    break
d('ul.ng-scope:nth-child(1)')
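# A minimal sketch (the selectors are assumptions based on the page structure pasted above)
# of collecting every caption line into a single plain-text transcript string.
transcript_lines = [d(li).find('div.transcript-text').text().strip()
                    for li in d('li.transcript-line')]
full_transcript = ' '.join(line for line in transcript_lines if line)
print(full_transcript[:300])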
###Output
_____no_output_____ |
code/version1.0/Extract_masked_labels_and_check_masked_label.ipynb | ###Markdown
Loading the relevant libraries
###Code
import os
import pandas as pd
import numpy as np
import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from collections import Counter
from torchvision import transforms, utils, datasets
from torch.utils.data import Dataset, DataLoader, random_split, SubsetRandomSampler, WeightedRandomSampler
imagenet_label_list = pd.read_csv(r"/home/junzhin/Project/Summer_project/code/version1.0/imagenet_labels_set.txt", sep = " ",header = None)
imagenet_label_list_dict = dict(zip(imagenet_label_list[2].apply(lambda x: x.lower()), imagenet_label_list[0]))
###Output
_____no_output_____
###Markdown
Checking the ImageNet directory and making sure the label folders are present in both the train and validation splits
###Code
your_path =r"/data/gpfs/datasets/Imagenet/"
file_dir = os.listdir(your_path)
file_dir
train_filenames = os.listdir(your_path + file_dir[1])
valid_filenames = os.listdir(your_path + file_dir[2])
print(len(train_filenames))
print(len(valid_filenames))
print(set(train_filenames) == set(valid_filenames))
imagenet_training_filePath = os.path.join(your_path, file_dir[1])
imagenet_valid_filePath = os.path.join(your_path, file_dir[2])
image_net_train_dataset = datasets.ImageFolder(
imagenet_training_filePath)
###Output
_____no_output_____
###Markdown
Open the office31 dataset and extract the labels
###Code
office31_path =r"/home/junzhin/Project/datasets/domain_adaptation_images/"
office31_dir = os.listdir(office31_path)
amazon_file_path = os.path.join(office31_path, office31_dir[0]+ "/images/")
dslr_file_path = os.path.join(office31_path, office31_dir[1]+ "/images/")
webcam_file_path = os.path.join(office31_path, office31_dir[2]+ "/images/")
print(office31_dir)
###Output
_____no_output_____
###Markdown
Checking if three domains have the same set of class labels
###Code
amazon = os.listdir(amazon_file_path)
dslr = os.listdir(dslr_file_path)
webcam = os.listdir(webcam_file_path)
print(set(amazon) == set(dslr) == set(webcam))
###Output
True
###Markdown
Loading one of the domains of the class labels for further manipulation
###Code
office_31_train_dataset = datasets.ImageFolder(
amazon_file_path)
office_31_train_dataset
print(dict(Counter(office_31_train_dataset.targets)))
office_31_training_labels = office_31_train_dataset.classes
print(office_31_train_dataset.classes[:])
selected_index = [ office_31_train_dataset.class_to_idx[i] for i in office_31_train_dataset.classes[:20]]
print(selected_index)
trainset_1 = torch.utils.data.Subset(office_31_train_dataset, selected_index)
train_loader = torch.utils.data.DataLoader(trainset_1, batch_size=2, shuffle= False, num_workers=1, pin_memory=True,sampler=None)
print(len(trainset_1))
print(len(train_loader))
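# Note: torch.utils.data.Subset expects *sample* indices, not class indices, so the 20
# class indices above select the first 20 images of the ImageFolder (very likely all from
# the first alphabetical class) rather than 20 whole classes; keeping or dropping entire
# classes requires per-sample indices, as is done with chosen_index_train later in this notebook.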
###Output
20
10
###Markdown
Checking for duplicate classes between the 1000 ImageNet classes and the Office-31 classes
###Code
office_31_training_labels = [word.lower().replace("_", " ") for word in office_31_training_labels]
imagenet_label_list = [word.lower().replace("_", " ") for word in imagenet_label_list[2]]
###Output
_____no_output_____
###Markdown
Method 1: Exact string matching between the two datasets' class names
###Code
masked_list1 = set(office_31_training_labels).intersection(set(imagenet_label_list))
###Output
_____no_output_____
###Markdown
Method 2: Similarity-based string matching between the two datasets' class names, to catch class names that are similar but not identical across the two sets
###Code
import textdistance
def find_similars_dict(source_labels, target_labels, threshold = 1.0):
similarity_dict = {}
for x in source_labels:
similarity_dict[x] = []
for y in target_labels:
similarity_score = textdistance.smith_waterman.normalized_similarity(x,y)
if similarity_score >= threshold:
similarity_dict[x].append([y,similarity_score])
for each_word_list in similarity_dict:
similarity_dict[each_word_list].sort(key=lambda x: x[1], reverse = True)
# levenshtein damerau_levenshtein gotoh smith_waterman
return similarity_dict
similarity_dict = find_similars_dict(office_31_training_labels,imagenet_label_list,threshold = 1)
similarity_dict
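# Quick illustration (the exact scores depend on textdistance's normalisation, so treat the
# printed values as indicative only): with threshold = 1 the matcher keeps pairs where one
# class name aligns entirely inside the other, e.g. 'bike' inside 'mountain bike'.
print(textdistance.smith_waterman.normalized_similarity('bike', 'mountain bike'))
print(textdistance.smith_waterman.normalized_similarity('phone', 'dial telephone'))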
masked_list2 = set()
for key in similarity_dict:
for values in similarity_dict[key]:
masked_list2.add(values[0])
print(masked_list2)
###Output
{'fountain pen', 'loudspeaker', 'tray', 'dial telephone', 'mountain bike', 'beer bottle', 'desktop computer', 'coffee mug', 'typewriter keyboard', 'monitor', 'water bottle', 'pay-phone', 'pop bottle', 'printer', 'pill bottle', 'binder', 'bookcase', 'projector', 'notebook', 'wine bottle', 'microphone', 'cellular telephone', 'computer keyboard', 'mouse'}
###Markdown
Save them into files:
###Code
masked_list1 = [x.replace(" ", "_") for x in masked_list1]
masked_list2 = [x.replace(" ", "_") for x in masked_list2]
print(masked_list1)
print(masked_list2)
masked_list1_df = pd.DataFrame([(imagenet_label_list_dict[i], i) for i in masked_list1])
masked_list2_df = pd.DataFrame([(imagenet_label_list_dict[i], i) for i in masked_list2])
masked_list1_df.to_csv('masked_office31_imagenetlabel1_df.csv',index = False, header = False)
masked_list2_df.to_csv('masked_office31_imagenetlabel2_df.csv',index = False, header = False)
###Output
_____no_output_____
###Markdown
Open the officehome dataset and extract the labels
###Code
officehome_path =r"/home/junzhin/Project/datasets/OfficeHomeDataset_10072016"
officehome_dir = os.listdir(officehome_path)
art_file_path = os.path.join(officehome_path, officehome_dir[0])
clipart_file_path = os.path.join(officehome_path, officehome_dir[1])
product_file_path = os.path.join(officehome_path, officehome_dir[4])
world_file_path = os.path.join(officehome_path, officehome_dir[5])
print(officehome_path)
print(officehome_dir)
art = os.listdir(art_file_path)
clipart = os.listdir(clipart_file_path)
product = os.listdir(product_file_path)
world = os.listdir(world_file_path)
print(set(art) == set(clipart) == set(product) == set(world))
print(len(set(art)))
###Output
True
65
###Markdown
Choosing one of the domains for label extraction:
###Code
office_home_train_dataset = datasets.ImageFolder(art_file_path)
office_home_train_dataset.classes
office_home_label_list = [i.lower().replace("_", " ") for i in office_home_train_dataset.classes]
print(office_home_train_dataset.classes)
print(len(office_home_train_dataset.classes))
masked_list1 = set(office_home_label_list).intersection(set(imagenet_label_list))
masked_list1
similarity_dict = find_similars_dict(office_home_label_list, imagenet_label_list,threshold = 1)
similarity_dict
masked_list2 = set()
for key in similarity_dict:
for values in similarity_dict[key]:
masked_list2.add(values[0])
print(masked_list2)
###Output
_____no_output_____
###Markdown
Save them into files
###Code
masked_list1 = [x.replace(" ", "_") for x in masked_list1]
masked_list2 = [x.replace(" ", "_") for x in masked_list2]
print(masked_list1)
print(masked_list2)
masked_list1_df = pd.DataFrame([(imagenet_label_list_dict[i], i) for i in masked_list1])
masked_list2_df = pd.DataFrame([(imagenet_label_list_dict[i], i) for i in masked_list2])
masked_list1_df.to_csv('masked_officehome_imagenetlabel1_df.csv',index = False, header = False)
masked_list2_df.to_csv('masked_officehome_imagenetlabel2_df.csv',index = False, header = False)
import matplotlib.pyplot as plt
def dataview(number_of_samples, imageset, classename,imagenet_label_list_dict):
target_index = imageset.class_to_idx[imagenet_label_list_dict[classename]]
for index in range(len(imageset)):
if imageset.imgs[index][1] == target_index:
break
for i in range(number_of_samples):
plt.figure()
plt.imshow(imageset[index + i][0])
# print(imageset.imgs[index + i])
return
dataview(5,image_net_train_dataset,"sweatshirt",imagenet_label_list_dict)
plt.imshow(image_net_train_dataset[0][0])
###Output
_____no_output_____
###Markdown
Experimenting with choosing a subset of the dataset
###Code
image_net_train_dataset = datasets.ImageFolder(
imagenet_training_filePath)
print(image_net_train_dataset)
image_net_valid_dataset = datasets.ImageFolder(
imagenet_valid_filePath)
print(image_net_valid_dataset)
masked_label_list_office31 = pd.read_csv(r"/home/junzhin/Project/Summer_project/code/version1.0/masked_office31_imagenetlabel2_df.csv", header = None)
masked_label_list_officehome = pd.read_csv(r"/home/junzhin/Project/Summer_project/code/version1.0/masked_officehome_imagenetlabel2_df.csv", header = None)
chosen_index_office31 = [image_net_train_dataset.class_to_idx[i] for i in image_net_train_dataset.classes if i not in list(masked_label_list_office31[0])]
print(len(chosen_index_office31))
chosen_index_officehome = [image_net_train_dataset.class_to_idx[i] for i in image_net_train_dataset.classes if i not in list(masked_label_list_officehome[0])]
print(len(chosen_index_officehome))
masked_image_net_train_dataset_office31 = torch.utils.data.Subset(image_net_train_dataset, chosen_index_office31)
masked_image_net_train_dataset_officehome = torch.utils.data.Subset(image_net_train_dataset, chosen_index_officehome)
print(len(masked_image_net_train_dataset_office31))
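# Caveat: chosen_index_office31 / chosen_index_officehome hold *class* indices, while
# Subset() interprets its second argument as *sample* indices, so these subsets keep only
# the first few hundred images rather than every image of the kept classes; building
# per-sample indices (as done with chosen_index_train further below) avoids this.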
###Output
976
###Markdown
Random selection after masking the labels in the imagenet
###Code
train_dataset_initial = datasets.ImageFolder(r"/data/gpfs/datasets/Imagenet/train_blurred")
filepath = r'/home/junzhin/Project/Summer_project/code/version1.0/masked_office31_imagenetlabel2_manual_df.csv'#r'/home/junzhin/Project/Summer_project/code/version1.0/masked_officehome_imagenetlabel2_manual_df.csv'
masked_label_list = pd.read_csv(filepath, header = None)
masked_label_list.sort_values(0, inplace = True)
masked_label_list.drop_duplicates(inplace = True)
masked_label_list.reset_index(drop=True, inplace = True)
masked_label_list.to_csv(filepath, index = False, header = False)
import random
# fix the random seed to produce the replicable sampling results
random.seed(1000)
# select the classes after excluding the masked classes
exclude_masked_classes = [one_class for one_class in train_dataset_initial.classes if one_class not in list(masked_label_list[0])]
print(len(exclude_masked_classes))
random_selected_classes = random.sample(exclude_masked_classes, len(masked_label_list))
chosen_classes = [train_dataset_initial.class_to_idx[each] for each in train_dataset_initial.classes if each not in random_selected_classes]
print(len(chosen_classes))
chosen_index_train = [index for index in range(len(train_dataset_initial)) if train_dataset_initial.imgs[index][1] in chosen_classes]
# chosen_index_valid = [ index for index in range(len(valid_dataset_initial)) if valid_dataset_initial.imgs[index][1] not in chosen_classes]
train_dataset = torch.utils.data.Subset(train_dataset_initial, chosen_index_train)
# valid_dataset = torch.utils.data.Subset(valid_dataset_initial, chosen_index_valid)
print(len(train_dataset))
print(train_dataset_initial.class_to_idx)
remap_dict = { x:i for i, x in enumerate(chosen_classes)}
print(remap_dict)
target = torch.tensor([0,999, 887])
target = torch.tensor([remap_dict[int(x)] for x in target])  # tensors have no .map(); remap element-wise instead
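# A sketch of applying the same remapping automatically during loading (assumption: setting
# the attribute after construction is acceptable here; it could equally be passed as
# target_transform= when the ImageFolder is created), so batches carry contiguous labels:
train_dataset_initial.target_transform = lambda y: remap_dict.get(y, y)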
label_set = set([train_dataset_initial.imgs[index][1] for index in range(len(train_dataset_initial)) if train_dataset_initial.imgs[index][1] in chosen_classes])
print(chosen_index_train[:100])
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
|
Stochastic_Path.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
T=1
steps = 1000
dt = T/float(steps)
mu = 0.0
sigma = float(T)
#Normal distribution
Z = np.random.normal(mu,sigma,steps)
#Initialize
W = np.zeros(steps)# Wiener process
time_steps = np.zeros(steps)
I_W_dt = np.zeros(steps)
I_W_dt_1 = np.zeros(steps)
I_W_dW = np.zeros(steps)
I_W_dW_1 = np.zeros(steps)
W = np.append([0],np.cumsum(W + np.sqrt(dt)*(Z)))[:-1]
dW = np.append([0], W[1:] - W[:-1])
time_steps = np.append([0],np.cumsum(time_steps + dt))[:-1]
I_W_dt = np.append([0],np.cumsum(I_W_dt + W*dt))[:-1]
I_W_dt_1 = np.append([0],np.cumsum(I_W_dt_1 + (T-time_steps)*dW))[:-1]
I_W_dW = np.append([0],np.cumsum(I_W_dW + W*dW))[:-1]
I_W_dW_1 = 0.5*np.square(W[-1])- 0.5*T
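# The figures below illustrate two standard Ito-calculus identities, both of which hold at
# the final time T (the paths need not coincide at intermediate times):
#   int_0^T W_t dt   = int_0^T (T - t) dW_t     (stochastic integration by parts)
#   int_0^T W_t dW_t = 0.5*W_T**2 - 0.5*T       (Ito's formula applied to W_t**2)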
plt.figure(1)
plt.plot(time_steps, W)
plt.grid()
plt.xlabel("time")
plt.figure(2)
plt.plot(time_steps, I_W_dt,'r')
plt.plot(time_steps, I_W_dt_1,'b')
plt.grid()
plt.xlabel("time")
plt.figure(3)
plt.plot(time_steps, I_W_dW,'r')
plt.axhline(y =I_W_dW_1, color = 'r', linestyle = '-')
plt.grid()
plt.xlabel("time")
###Output
_____no_output_____ |
rust_vs_cython.ipynb | ###Markdown
Fixed k = 10 experiment
###Code
params = fixed_k(10, 15, 40)
fig = setup_plots('Fixed k = 10', 'N')
fig
for partial_result in run_experiment(params, 3):
plot_data(partial_result, fig, 'N')
###Output
_____no_output_____
###Markdown
Proportional k = 0.1% experiment
###Code
params = proportional_k(0.001, 20, 35)
fig = setup_plots('Proportional k = 0.1%', 'N')
fig
for partial_result in run_experiment(params, 3):
plot_data(partial_result, fig, 'N')
###Output
_____no_output_____
###Markdown
Fixed N = 100,000 experiment
###Code
params = fixed_n(100_000, 1, 22)
fig = setup_plots('Fixed N = 100,000', 'k')
fig
for partial_result in run_experiment(params, 3):
plot_data(partial_result, fig, 'k')
###Output
_____no_output_____ |
geolocation_of_Alumni.ipynb | ###Markdown
###Code
#!pip install geopandas
#!pip install pycountry
#!pip install pycountry-convert
#!pip install mapclassify==2.4.3
# import libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pycountry
import geopandas as gpd
import re
import sys
'geopandas' in sys.modules
from google.colab import files
uploaded = files.upload()
import pandas as pd
import io
df = pd.read_csv(io.StringIO(uploaded['final_preprocess_data.csv'].decode('utf-8')))
#df
df['job_location'] = df['job_location'].apply(lambda x: x.replace("islamabad", "islamabad"))
df['job_location'] = df['job_location'].apply(lambda x: x.replace("karachi", "karachi"))
df['skills'] = df['skills'].apply(lambda x: x.replace("||", ","))
df.to_csv (r'.\export_dataframe.csv', index = False, header=True)
clist=['Afghanistan','Albania','Algeria','Andorra','Angola','Antigua & Deps','Argentina','Armenia','Australia','Austria','Azerbaijan','Bahamas','Bahrain','Bangladesh','Barbados','Belarus','Belgium','Belize','Benin','Bhutan','Bolivia','Bosnia Herzegovina','Botswana','Brazil','Brunei','Bulgaria','Burkina','Burundi','Cambodia','Cameroon','Canada','Cape Verde','Central African Rep','Chad','Chile','China','Colombia','Comoros','Congo','Congo {Democratic Rep}','Costa Rica','Croatia','Cuba','Cyprus','Czech Republic','Denmark','Djibouti','Dominica','Dominican Republic','East Timor','Ecuador','Egypt','El Salvador','Equatorial Guinea','Eritrea','Estonia','Ethiopia','Fiji','Finland','France','Gabon','Gambia','Georgia','Germany','Ghana','Greece','Grenada','Guatemala','Guinea','Guinea-Bissau','Guyana','Haiti','Honduras','Hungary','Iceland','India','Indonesia','Iran','Iraq','Ireland ','Israel','Italy','Ivory Coast','Jamaica','Japan','Jordan','Kazakhstan','Kenya','Kiribati','Korea North','Korea South','Kosovo','Kuwait','Kyrgyzstan','Laos','Latvia','Lebanon','Lesotho','Liberia','Libya','Liechtenstein','Lithuania','Luxembourg','Macedonia','Madagascar','Malawi','Malaysia','Maldives','Mali','Malta','Marshall Islands','Mauritania','Mauritius','Mexico','Micronesia','Moldova','Monaco','Mongolia','Montenegro','Morocco','Mozambique','Myanmar','Namibia','Nauru','Nepal','Netherlands','New Zealand','Nicaragua','Niger','Nigeria','Norway','Oman','Pakistan','Palau','Panama','Papua New Guinea','Paraguay','Peru','Philippines','Poland','Portugal','Qatar','Romania','Russian Federation','Rwanda','St Kitts & Nevis','St Lucia','Saint Vincent & the Grenadines','Samoa','San Marino','Sao Tome & Principe','Saudi Arabia','Senegal','Serbia','Seychelles','Sierra Leone','Singapore','Slovakia','Slovenia','Solomon Islands','Somalia','South Africa','South Sudan','Spain','Sri Lanka','Sudan','Suriname','Swaziland','Sweden','Switzerland','Syria','Taiwan','Tajikistan','Tanzania','Thailand','Togo','Tonga','Trinidad & Tobago','Tunisia','Turkey','Turkmenistan','Tuvalu','Uganda','Ukraine','United Arab Emirates','United Kingdom','United States','Uruguay','Uzbekistan','Vanuatu','Vatican City','Venezuela','Vietnam','Yemen','Zambia','Zimbabwe']
for i in range(len(clist)):
clist[i] = clist[i].lower()
#clist
reg = '(%s)' % '|'.join(clist)
df['CountryName'] = df['job_location'].str.extract(reg)
#df
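# Illustrative check (hypothetical sample string, not from the dataset): the regex returns the
# first country name it finds in a free-text job location.
print(pd.Series(['software engineer, lahore, pakistan']).str.extract(reg))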
###Output
_____no_output_____
###Markdown
Impute / Replace Missing Values with Mode
###Code
df['CountryName'] = df['CountryName'].fillna(df['CountryName'].mode()[0])
# generate country code based on country name
import pycountry
def alpha3code(column):
CODE=[]
CODE1=[]
for country in column:
try:
code=pycountry.countries.get(name=country).alpha_3
CODE.append(code)
except:
CODE.append('None')
return CODE
# create a column for code
df['CODE']=alpha3code(df.CountryName)
#df.head()
df_value_counts=df["CountryName"].value_counts()
df_value_counts = df_value_counts.reset_index()
df_value_counts.columns = ['CountryName', 'Alumni']
df_value_counts.CountryName = df_value_counts.CountryName.str.title()
df_value_counts.head(10)
# generate country code based on country name
import pycountry
from pycountry_convert import country_alpha2_to_continent_code, country_name_to_country_alpha2
def alpha2code(column):
CODE=[]
COUNTRY=[]
COUNTRY1=[]
CONTINENT=[]
for country in column:
try:
code=(pycountry.countries.get(name=country).alpha_2,country_alpha2_to_continent_code(pycountry.countries.get(name=country).alpha_2))
countr=pycountry.countries.get(name=country).alpha_2
countr1=pycountry.countries.get(name=country).alpha_3
conti=country_alpha2_to_continent_code(pycountry.countries.get(name=country).alpha_2)
CODE.append(code)
COUNTRY.append(countr)
COUNTRY1.append(countr1)
CONTINENT.append(conti)
except:
CODE.append('None')
COUNTRY.append('None')
COUNTRY1.append('None')
CONTINENT.append('None')
return (CODE,COUNTRY,CONTINENT,COUNTRY1)
# create a column for code
(df_value_counts['Code'],df_value_counts['Country'],df_value_counts['Continent'],df_value_counts['CODE']) =alpha2code(df_value_counts.CountryName)
df_value_counts.head()
#Importing the required modules
import pandas as pd
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter
#Creating an instance of Nominatim Class
geolocator = Nominatim(user_agent="my_request")
#applying the rate limiter wrapper
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)
#Applying the method to pandas DataFrame
df_value_counts['Geolocate'] = df_value_counts['CountryName'].apply(geocode)
df_value_counts['Latitude'] = df_value_counts['Geolocate'].apply(lambda x: x.latitude if x else None)
df_value_counts['Longitude'] = df_value_counts['Geolocate'].apply(lambda x: x.longitude if x else None)
df_value_counts.head()
###Output
_____no_output_____
###Markdown
Create a world map
###Code
#installation
#!pip install folium
# Create a world map to show distributions of of Alumni
import folium
from folium.plugins import MarkerCluster
#empty map
world_map = folium.Map()
#world_map= folium.Map(tiles="cartodbpositron") #cartodbdark_matter cartodbpositron, Stamen Toner
marker_cluster = MarkerCluster().add_to(world_map)
#for each coordinate, create circlemarker of user percent
for i in range(len(df_value_counts)):
lat = df_value_counts.iloc[i]['Latitude']
longn = df_value_counts.iloc[i]['Longitude']
radius=10
popup_text = """<div style="background-color: lightyellow; color: black; padding: 3px; border: 2px solid black; border-radius: 3px;"> Country: {}<br> # of Alumni: {}<br> </div>"""
popup_text = popup_text.format(df_value_counts.iloc[i]['CountryName'],
df_value_counts.iloc[i]['Alumni']
)
folium.CircleMarker(location = [lat, longn], radius=radius, popup= popup_text, fill =True).add_to(marker_cluster)
#show the map
world_map
###Output
_____no_output_____
###Markdown
Data Visualization: How To Plot A Map with Geopandas in Python
###Code
# first let us merge geopandas data with our data
# 'naturalearth_lowres' is geopandas datasets so we can use it directly
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
# rename the columns so that we can merge with our data
world.columns=['pop_est', 'continent', 'name', 'CODE', 'gdp_md_est', 'geometry']
# then merge with our data
merge=pd.merge(world,df_value_counts,on='CODE')
#cmap= 'Accent', 'Accent_r', 'Blues', 'Blues_r', 'BrBG', 'BrBG_r', 'BuGn', 'BuGn_r', 'BuPu', 'BuPu_r', 'CMRmap', 'CMRmap_r', 'Dark2', 'Dark2_r', 'GnBu', 'GnBu_r', 'Greens', 'Greens_r', 'Greys', 'Greys_r', 'OrRd', 'OrRd_r', 'Oranges', 'Oranges_r', 'PRGn', 'PRGn_r', 'Paired', 'Paired_r', 'Pastel1', 'Pastel1_r', 'Pastel2', 'Pastel2_r', 'PiYG', 'PiYG_r', 'PuBu', 'PuBuGn', 'PuBuGn_r', 'PuBu_r', 'PuOr', 'PuOr_r', 'PuRd', 'PuRd_r', 'Purples', 'Purples_r', 'RdBu', 'RdBu_r', 'RdGy', 'RdGy_r', 'RdPu', 'RdPu_r', 'RdYlBu', 'RdYlBu_r', 'RdYlGn', 'RdYlGn_r', 'Reds', 'Reds_r', 'Set1', 'Set1_r', 'Set2', 'Set2_r', 'Set3', 'Set3_r', 'Spectral', 'Spectral_r', 'Wistia', 'Wistia_r', 'YlGn', 'YlGnBu', 'YlGnBu_r', 'YlGn_r', 'YlOrBr', 'YlOrBr_r', 'YlOrRd', 'YlOrRd_r', 'afmhot', 'afmhot_r', 'autumn', 'autumn_r', 'binary', 'binary_r', 'bone', 'bone_r', 'brg', 'brg_r', 'bwr', 'bwr_r', 'cividis', 'cividis_r', 'cool', 'cool_r', 'coolwarm', 'coolwarm_r', 'copper', 'copper_r', 'crest', 'crest_r', 'cubehelix', 'cubehelix_r', 'flag', 'flag_r', 'flare', 'flare_r', 'gist_earth', 'gist_earth_r', 'gist_gray', 'gist_gray_r', 'gist_heat', 'gist_heat_r', 'gist_ncar', 'gist_ncar_r', 'gist_rainbow', 'gist_rainbow_r', 'gist_stern', 'gist_stern_r', 'gist_yarg', 'gist_yarg_r', 'gnuplot', 'gnuplot2', 'gnuplot2_r', 'gnuplot_r', 'gray', 'gray_r', 'hot', 'hot_r', 'hsv', 'hsv_r', 'icefire', 'icefire_r', 'inferno', 'inferno_r', 'jet'
# plot Number of Alumni world map
merge.plot(column='Alumni', scheme="quantiles",
figsize=(25, 20),
legend=True,cmap='coolwarm')
plt.title('Number of Alumni in Different Countries',fontsize=25)
# add countries names and numbers
for i in range(0,10):
plt.text(float(merge.Longitude[i]),float(merge.Latitude[i]),"{}\n{}".format(merge.CountryName[i],merge.Alumni[i]),size=15)
plt.show()
#import geopandas
from sklearn.preprocessing import OrdinalEncoder
import category_encoders as ce
encoder= ce.OrdinalEncoder(cols=['Current_Job','Current_Company','Total_year','job_location','Last_University','Last_degree','Graduation_end_year'],return_df=True)
df=encoder.fit_transform(df)
df
df.corr()
sns.heatmap(df.corr());
# Increase the size of the heatmap.
plt.figure(figsize=(16, 6))
# Store heatmap object in a variable to easily access it when you want to include more features (such as title).
# Set the range of values to be displayed on the colormap from -1 to 1, and set the annotation to True to display the correlation values on the heatmap.
heatmap = sns.heatmap(df.corr(), vmin=-1, vmax=1, annot=True)
# Give a title to the heatmap. Pad defines the distance of the title from the top of the heatmap.
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize':14}, pad=12);
plt.figure(figsize=(16, 6))
heatmap = sns.heatmap(df.corr(), vmin=-1, vmax=1, annot=True, cmap='BrBG')
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize':18}, pad=12);
# save heatmap as .png file
# dpi - sets the resolution of the saved image in dots/inches
# bbox_inches - when set to 'tight' - does not allow the labels to be cropped
plt.savefig('heatmap.png', dpi=300, bbox_inches='tight')
np.triu(np.ones_like(df.corr()))
plt.figure(figsize=(16, 6))
# define the mask to set the values in the upper triangle to True
mask = np.triu(np.ones_like(df.corr(), dtype=bool))
heatmap = sns.heatmap(df.corr(), mask=mask, vmin=-1, vmax=1, annot=True, cmap='BrBG')
heatmap.set_title('Triangle Correlation Heatmap', fontdict={'fontsize':18}, pad=16);
df.corr()[['Current_Job']].sort_values(by='Current_Job', ascending=False)
plt.figure(figsize=(8, 12))
heatmap = sns.heatmap(df.corr()[['Current_Job']].sort_values(by='Current_Job', ascending=False), vmin=-1, vmax=1, annot=True, cmap='BrBG')
heatmap.set_title('Features Correlating with Current_Job', fontdict={'fontsize':18}, pad=16);
###Output
_____no_output_____ |
kNN_cred_data.ipynb | ###Markdown
Importing the database
###Code
import pandas as pd
import numpy as np

base = pd.read_csv('credit_data.csv')
base.loc[base.age < 0, 'age'] = 40.92
previsores = base.iloc[:, 1:4].values
classe = base.iloc[:, 4].values
###Output
_____no_output_____
###Markdown
Handling missing values
###Code
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values = np.nan, strategy = 'mean')
imputer = imputer.fit(previsores[:, 1:4])
previsores[:, 1:4] = imputer.transform(previsores[:, 1:4])
###Output
_____no_output_____
###Markdown
Standardizing the values
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
previsores = scaler.fit_transform(previsores)
###Output
_____no_output_____
###Markdown
Splitting the data into training and test sets
###Code
from sklearn.model_selection import train_test_split
previsores_treinamento, previsores_teste, classe_treinamento, classe_teste = train_test_split(previsores, classe, test_size=0.25, random_state=0)
###Output
_____no_output_____
###Markdown
KNN training
###Code
from sklearn.neighbors import KNeighborsClassifier
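# With metric='minkowski' and p=2, the distance used below is the ordinary Euclidean distance.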
classificador = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
classificador.fit(previsores_treinamento, classe_treinamento)
###Output
_____no_output_____
###Markdown
Prediction results
###Code
previsoes = classificador.predict(previsores_teste)
from sklearn.metrics import confusion_matrix, accuracy_score
precisao = accuracy_score(classe_teste, previsoes)
print(precisao)
matriz = confusion_matrix(classe_teste, previsoes)
matriz
###Output
_____no_output_____ |
DeepSurvivalMachines_MLDM.ipynb | ###Markdown
MLDM project Deep Survival Machine [paper](https://arxiv.org/abs/2003.01176)Reproducing article results by Sergei Telnov Prepare working environment
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/MyDrive/Github/
###Output
/content/drive/MyDrive/Github
###Markdown
GitHub Creds
###Code
git_token = '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'
username = 'SerTelnov'
repository = 'DeepSurvivalMachines'
###Output
_____no_output_____
###Markdown
Deep Survival Machines [GitHub repo](https://github.com/autonlab/DeepSurvivalMachines)
###Code
# My fork
!git clone https://{git_token}@github.com/{username}/{repository}
%cd {repository}
%ls -a
###Output
[0m[01;34mdocs[0m/ [01;34mexamples[0m/ LICENSE README.md [01;34mtests[0m/
[01;34mdsm[0m/ [01;34m.git[0m/ .pylintrc requirements.txt .travis.yml
###Markdown
Installing requirements
###Code
!pip install scikit-survival
###Output
Requirement already satisfied: scikit-survival in /usr/local/lib/python3.7/dist-packages (0.16.0)
Requirement already satisfied: osqp!=0.6.0,!=0.6.1 in /usr/local/lib/python3.7/dist-packages (from scikit-survival) (0.6.2.post0)
Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from scikit-survival) (1.1.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from scikit-survival) (1.19.5)
Requirement already satisfied: ecos in /usr/local/lib/python3.7/dist-packages (from scikit-survival) (2.0.7.post1)
Requirement already satisfied: scipy!=1.3.0,>=1.0 in /usr/local/lib/python3.7/dist-packages (from scikit-survival) (1.4.1)
Requirement already satisfied: scikit-learn<0.25,>=0.24.0 in /usr/local/lib/python3.7/dist-packages (from scikit-survival) (0.24.2)
Requirement already satisfied: numexpr in /usr/local/lib/python3.7/dist-packages (from scikit-survival) (2.7.3)
Requirement already satisfied: pandas>=0.25 in /usr/local/lib/python3.7/dist-packages (from scikit-survival) (1.1.5)
Requirement already satisfied: qdldl in /usr/local/lib/python3.7/dist-packages (from osqp!=0.6.0,!=0.6.1->scikit-survival) (0.1.5.post0)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.25->scikit-survival) (2.8.2)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.25->scikit-survival) (2018.9)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas>=0.25->scikit-survival) (1.15.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn<0.25,>=0.24.0->scikit-survival) (3.0.0)
###Markdown
Reproducing article results
###Code
import numpy as np
from sklearn.model_selection import ParameterGrid
from dsm import DeepSurvivalMachines
from dsm import datasets
###Output
_____no_output_____
###Markdown
Common methods
###Code
def get_dataset(x, t, e):
n = len(x)
tr_size = int(n*0.70)
vl_size = int(n*0.10)
te_size = int(n*0.20)
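    # e.g. with n = 1,000 samples this yields a 700 / 100 / 200 train / validation / test split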
x_train, x_test, x_val = x[:tr_size], x[-te_size:], x[tr_size:tr_size+vl_size]
t_train, t_test, t_val = t[:tr_size], t[-te_size:], t[tr_size:tr_size+vl_size]
e_train, e_test, e_val = e[:tr_size], e[-te_size:], e[tr_size:tr_size+vl_size]
return {
'train': [x_train, t_train, e_train],
'test': [x_test, t_test, e_test],
'val': [x_val, t_val, e_val],
'horizons': [0.25, 0.5, 0.75],
'times': np.quantile(t[e==1], [0.25, 0.5, 0.75]).tolist()
}
param_grid = {'k' : [3, 4, 6],
'distribution' : ['LogNormal', 'Weibull'],
'learning_rate' : [ 1e-4, 1e-3],
'layers' : [ [], [100], [100, 100] ]
}
default_params = ParameterGrid(param_grid)
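# Quick sanity check (illustrative, not required by the experiments below): the grid above
# expands to 3 * 2 * 2 * 3 = 36 hyperparameter combinations.
print(len(default_params))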
def train_and_validate(dataset, model_params=default_params):
models = []
x_train, t_train, e_train = dataset['train']
x_val, t_val, e_val = dataset['val']
for param in model_params:
print('Training for params:')
print('Number of underlying cox distributions (k):', param['k'])
print('Primitive distribution', param['distribution'])
print('Hidden Layers:', param['layers'])
print('Learning rate', param['learning_rate'])
model = DeepSurvivalMachines(k = param['k'],
distribution = param['distribution'],
layers = param['layers'])
# The fit method is called to train the model
model.fit(x_train, t_train, e_train, iters = 100, learning_rate = param['learning_rate'])
models.append([[model.compute_nll(x_val, t_val, e_val), model]])
return min(models)[0][1]
def test_model_prediction(dataset, model):
    x_test, _, _ = dataset['test']
times = dataset['times']
out_risk = model.predict_risk(x_test, times)
out_survival = model.predict_survival(x_test, times)
assert np.all((out_risk + out_survival) == 1.0)
from sksurv.metrics import concordance_index_ipcw, brier_score, cumulative_dynamic_auc
def evaluation(dataset, model):
times = dataset['times']
x_train, t_train, e_train = dataset['train']
x_val, t_val, e_val = dataset['val']
x_test, t_test, e_test = dataset['test']
horizons = dataset['horizons']
out_risk = model.predict_risk(x_test, times)
out_survival = model.predict_survival(x_test, times)
cis = []
brs = []
et_train = np.array([(e_train[i], t_train[i]) for i in range(len(e_train))],
dtype = [('e', bool), ('t', float)])
et_test = np.array([(e_test[i], t_test[i]) for i in range(len(e_test))],
dtype = [('e', bool), ('t', float)])
et_val = np.array([(e_val[i], t_val[i]) for i in range(len(e_val))],
dtype = [('e', bool), ('t', float)])
for i, _ in enumerate(times):
cis.append(concordance_index_ipcw(et_train, et_test, out_risk[:, i], times[i])[0])
brs.append(brier_score(et_train, et_test, out_survival, times)[1])
roc_auc = []
for i, _ in enumerate(times):
roc_auc.append(cumulative_dynamic_auc(et_train, et_test, out_risk[:, i], times[i])[0])
for horizon in enumerate(horizons):
print(f"For {horizon[1]} quantile,")
print("TD Concordance Index:", cis[horizon[0]])
print("Brier Score:", brs[0][horizon[0]])
print("ROC AUC ", roc_auc[horizon[0]][0], "\n")
###Output
_____no_output_____
###Markdown
Evaluation Dataset [SUPPORT](https://biostat.app.vumc.org/wiki/Main/SupportDesc) Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments
###Code
x, t, e = datasets.load_dataset('SUPPORT')
dataset = get_dataset(x, t, e)
model = train_and_validate(dataset)
test_model_prediction(dataset, model)
evaluation(dataset, model)
###Output
For 0.25 quantile,
TD Concordance Index: 0.7658881414459718
Brier Score: 0.10976772013463337
ROC AUC 0.7758300731042115
For 0.5 quantile,
TD Concordance Index: 0.7032522630011635
Brier Score: 0.182887471230353
ROC AUC 0.7236738029951486
For 0.75 quantile,
TD Concordance Index: 0.6561380569523884
Brier Score: 0.2210506115503837
ROC AUC 0.7159701944629551
###Markdown
Dataset METABRICMolecular Taxonomy of Breast Cancer InternationalConsortium
###Code
%cd ../METABRIC
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
patient_features = ['HORMONE_THERAPY', 'RADIO_THERAPY', 'ER_IHC', 'CHEMOTHERAPY', 'AGE_AT_DIAGNOSIS', 'OS_STATUS', 'OS_MONTHS']
rna_features = ['MKI67', 'EGFR', 'PGR', 'ERBB2']
patient_df = pd.read_csv('data_clinical_patient.txt', skiprows=4, sep='\t', index_col='PATIENT_ID')
patient_df
patient_df = patient_df[patient_features]
patient_df
rna_cases = pd.read_csv('cases_RNA_Seq_mRNA.txt', sep=': ', header=None, index_col=0).T
rna_cases
patient_ids = set(rna_cases['case_list_ids'].to_numpy()[0].split('\t'))
rna_patient_df = patient_df.loc[patient_df.index.isin(patient_ids)]
rna_patient_df['DEATH'] = rna_patient_df['OS_STATUS'].apply(lambda xx: 0 if xx.startswith('0') else 1)
rna_patient_df['OS_MONTHS'] = rna_patient_df['OS_MONTHS'].apply(lambda xx: int(xx) * 12)
rna_patient_df
len(rna_patient_df[rna_patient_df.isna().any(axis=1)])
rna_gene_indicators_df = pd.read_csv('data_cna.txt', sep='\t', index_col=0).T[rna_features]
rna_gene_indicators_df
rna_gene_indicators_df = rna_gene_indicators_df.loc[rna_gene_indicators_df.index.isin(patient_ids)]
rna_gene_indicators_df
data_df = pd.concat([rna_patient_df, rna_gene_indicators_df], axis=1, join="inner")
data_df
category_features = ['HORMONE_THERAPY', 'RADIO_THERAPY', 'ER_IHC', 'CHEMOTHERAPY']
x_category_df = pd.get_dummies(data_df[category_features])
x_category_df
x_numerical_df = data_df.drop(['OS_STATUS', 'OS_MONTHS', 'DEATH'], axis=1)
x_numerical_df = x_numerical_df.drop(category_features, axis=1)
x_numerical_df
x_df = pd.concat([x_category_df, x_numerical_df], axis=1, join="inner")
x = SimpleImputer(missing_values=np.nan, strategy='mean').fit_transform(x_df)
x = StandardScaler().fit_transform(x)
t = data_df['OS_MONTHS'].values
e = data_df['DEATH'].values
dataset = get_dataset(x, t, e)
model = train_and_validate(dataset)
test_model_prediction(dataset, model)
evaluation(dataset, model)
###Output
For 0.25 quantile,
TD Concordance Index: 0.7200427830138727
Brier Score: 0.11674009329667431
ROC AUC 0.7633641975308642
For 0.5 quantile,
TD Concordance Index: 0.676229755454679
Brier Score: 0.187154343887192
ROC AUC 0.7586206259585121
For 0.75 quantile,
TD Concordance Index: 0.652184898524165
Brier Score: 0.22280184639262526
ROC AUC 0.7058354796872583
###Markdown
Dataset SYNTHETIC
###Code
dataset_url = 'https://raw.githubusercontent.com/chl8856/DeepHit/master/sample%20data/SYNTHETIC/synthetic_comprisk.csv'
df = pd.read_csv(dataset_url)
df
event1_df = df[df.label.isin([0, 1])]
event2_df = df[df.label.isin([0, 2])]
def dataset_from_df(event_df):
x = event_df.drop(['time', 'label', 'true_label', 'true_time'], axis=1).values
e = event_df.label.apply(lambda xx: 0 if xx == 0 else 1).values
t = event_df.time.values
x = SimpleImputer(missing_values=np.nan, strategy='mean').fit_transform(x)
x = StandardScaler().fit_transform(x)
return get_dataset(x, t, e)
###Output
_____no_output_____
###Markdown
Event 1
###Code
event1_dataset = dataset_from_df(event1_df)
model = train_and_validate(event1_dataset)
test_model_prediction(event1_dataset, model)
evaluation(event1_dataset, model)
###Output
For 0.25 quantile,
TD Concordance Index: 0.7987622004815225
Brier Score: 0.12449735771844522
ROC AUC 0.7716902540612917
For 0.5 quantile,
TD Concordance Index: 0.7598582457645292
Brier Score: 0.15059612629961835
ROC AUC 0.7611583400130089
For 0.75 quantile,
TD Concordance Index: 0.7632150169762315
Brier Score: 0.1808870954514221
ROC AUC 0.7517560476670667
###Markdown
Event 2
###Code
event2_dataset = dataset_from_df(event2_df)
model = train_and_validate(event2_dataset)
test_model_prediction(event2_dataset, model)
evaluation(event2_dataset, model)
###Output
For 0.25 quantile,
TD Concordance Index: 0.803511665456491
Brier Score: 0.12449190134127838
ROC AUC 0.7795992465268462
For 0.5 quantile,
TD Concordance Index: 0.7629262025204677
Brier Score: 0.15062828024002406
ROC AUC 0.7535194669254959
For 0.75 quantile,
TD Concordance Index: 0.7293686382405302
Brier Score: 0.17165978935771656
ROC AUC 0.7454250415352373
|
09_Principal Component Analysis with NumPy/Principal_Component_Analysis.ipynb | ###Markdown
Principal Component Analysis Task 2: Load the Data and Libraries---
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
plt.style.use("ggplot")
plt.rcParams["figure.figsize"] = (12,8)
# data URL: https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data
iris = pd.read_csv("iris.data", header=None)
iris.head()
iris.columns = ["sepal_length", "sepal_width", "petal_length", "petal_width", "species"]
iris.head()
iris.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 sepal_length 150 non-null float64
1 sepal_width 150 non-null float64
2 petal_length 150 non-null float64
3 petal_width 150 non-null float64
4 species 150 non-null object
dtypes: float64(4), object(1)
memory usage: 6.0+ KB
###Markdown
Task 3: Visualize the Data---
###Code
sns.scatterplot(x="sepal_length", y="sepal_width", data=iris, hue="species", style="species")
###Output
_____no_output_____
###Markdown
Task 4: Standardize the Data---
###Code
X = iris.iloc[:, 0:4].values
y = iris.species.values
from sklearn.preprocessing import StandardScaler
X = StandardScaler().fit_transform(X)
###Output
_____no_output_____
###Markdown
Task 5: Compute the Eigenvectors and Eigenvalues--- Covariance: $\sigma_{jk} = \frac{1}{n-1}\sum_{i=1}^{n}(x_{ij}-\bar{x}_j)(x_{ik}-\bar{x}_k)$Covariance matrix: $\Sigma = \frac{1}{n-1}(X-\bar{x})^T(X-\bar{x})$
###Code
covariance_matrix = np.cov(X.T)
print("Covariance matrix: \n", covariance_matrix)
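# Illustrative check of the formula above: X was standardized, so its column means are ~0 and
# (X - mean).T @ (X - mean) / (n - 1) reproduces np.cov(X.T).
n_samples = X.shape[0]
X_centered = X - X.mean(axis=0)
print(np.allclose(X_centered.T @ X_centered / (n_samples - 1), covariance_matrix))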
###Output
Covariance matrix:
[[ 1.00671141 -0.11010327 0.87760486 0.82344326]
[-0.11010327 1.00671141 -0.42333835 -0.358937 ]
[ 0.87760486 -0.42333835 1.00671141 0.96921855]
[ 0.82344326 -0.358937 0.96921855 1.00671141]]
###Markdown
We can verify this by looking at the covariance matrix: because it is symmetric, its eigenvectors are orthogonal to each other. Each eigenvector (a column of $W$) is also normalized to unit length, i.e. its squared entries sum to one. Thus, the eigenvectors are orthonormal.Eigendecomposition of the covariance matrix: $\Sigma = W\Lambda W^{-1}$
###Code
eigen_values, eigen_vectors = np.linalg.eig(covariance_matrix)
print("Eigenvectors: \n", eigen_vectors, "\n")
print("Eigenvalues: \n", eigen_values)
###Output
Eigenvectors:
[[ 0.52237162 -0.37231836 -0.72101681 0.26199559]
[-0.26335492 -0.92555649 0.24203288 -0.12413481]
[ 0.58125401 -0.02109478 0.14089226 -0.80115427]
[ 0.56561105 -0.06541577 0.6338014 0.52354627]]
Eigenvalues:
[2.93035378 0.92740362 0.14834223 0.02074601]
###Markdown
Task 6: Singular Value Decomposition (SVD)---
###Code
eigen_vec_svd, s, v = np.linalg.svd(X.T)
eigen_vec_svd
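# Illustrative check: the left singular vectors of X.T span the same directions as the covariance
# eigenvectors, up to sign (and here the column order also matches, because the eigenvalues above
# happened to come out sorted in decreasing order).
print(np.allclose(np.abs(eigen_vec_svd), np.abs(eigen_vectors)))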
###Output
_____no_output_____
###Markdown
Task 7: Picking Principal Components Using the Explained Variance---
###Code
for val in eigen_values:
print(val)
variance_explained = [(i/sum(eigen_values))*100 for i in eigen_values]
variance_explained
cumulative_variance_explained = np.cumsum(variance_explained)
cumulative_variance_explained
sns.lineplot(x=[1,2,3,4], y=cumulative_variance_explained);
plt.xlabel("Number of Components")
plt.ylabel("Cumulative explained variance")
plt.title("Explained variance vs Number of Components")
plt.show()
###Output
_____no_output_____
###Markdown
Task 8: Project Data Onto Lower-Dimensional Linear Subspace---
###Code
# X_pca = X · W, where W holds the leading eigenvectors (principal components) as columns
eigen_vectors
projection_matrix = eigen_vectors[:, :2]  # first two eigenvectors as columns
print("Projection Matrix: \n", projection_matrix)
X_pca = X.dot(projection_matrix)
for species in ("Iris-setosa", "Iris-versicolor", "Iris-virginica"):
    sns.scatterplot(x=X_pca[y==species, 0], y=X_pca[y==species, 1])
###Output
_____no_output_____ |
MySparkSession.ipynb | ###Markdown
###Code
!apt-get update
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://archive.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz
!tar xf spark-2.3.1-bin-hadoop2.7.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.3.1-bin-hadoop2.7"
import findspark
findspark.init()
from pyspark import SparkContext
sc = SparkContext.getOrCreate()
import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
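# Quick check (illustrative): create a tiny DataFrame to confirm the session works.
demo_df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
demo_df.show()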
###Output
_____no_output_____ |
translations/ja/quantum-machine-learning/encoding.ipynb | ###Markdown
データの符号化このページでは、量子機械学習のためのデータ符号化の問題を紹介し、さまざまなデータ符号化手法について説明し、実装します。 はじめに機械学習モデルを成功させるには、データの表現が非常に重要です。古典的な機械学習の場合では、データをいかに数値的に表現し、古典的な機械学習アルゴリズムで最適に処理できるようにするかが問題となります。量子機械学習の場合、問題は似ていますが、より基本的です:問題は、データをどうやって表現して、量子システムに効率的に入力し、量子機械学習アルゴリズムで処理できるようにするかです。これは通常、データの符号化(データ *エンコーディング* )と呼ばれますが、データの *埋め込み* または *ロード* とも呼ばれます。このプロセスは、量子機械学習アルゴリズムの重要な部分であり、その計算能力に直接影響します。 手法それぞれが $N$ 個の [特徴](gloss:features) を持つ$M$個のサンプルで構成される古典的なデータセット$\mathscr{X}$を考えてみましょう: $$\class{script-x}{\mathscr{X}} = \class{brace}{{}x^{(1)},\class{ellipsis}{\dots},\cssId{_x-lil-m}{x^{(m)}},\dots,x^{(M)}\class{brace}{}}$$ ここで、$x^{(m)}$は$m = 1, ..., M$の$N$ 次元ベクトルです。このデータセットを量子ビットのシステムで表すために、さまざまな埋め込み手法を使用できます。以下に、その一部を参考文献[1](References)、 [2](References)に従って簡単に説明し、実装します。 計算基底符号化計算基底符号化は、古典的な$N$ビット文字列を$N$量子ビットシステムの [計算基底状態](gloss:computational-basis-state) に関連付けます。 たとえば、$x = 5$の場合、これは$0101$として$4$ビット文字列として表すことができ、量子状態$|0101\rangle$として4量子ビットシステムによって表すことができます。より一般的には、$N$ビット文字列:$x = (b_1, b_2, ... , b_N)$の場合、対応する$N$-量子ビット状態は$\cssId{ket-x}{| x \rangle} = | b_1, b_2, ... , b_N \rangle$です。ここで、$n = 1 , \dots , N$に対して$b_n \class{in}{\in} {0,1}$ です。上記の古典的なデータセット$\mathscr{X}$の場合に、計算基底符号化を使用するには、各データポイントを$N$ビット文字列$x^{(m)} = (b_1, b_2, ... , b_N)$ とし、これを量子状態 $|x^{m}\rangle = |b_1, b_2, ... , b_N \rangle$ に直接マッピングします。ここで$n = 1, ..., N$ において、$b_n \in {0, 1 } $であり、また、$m = 1, ..., M$です。 データセット全体を計算基底状態の重ね合わせとして表すことができます。$$\cssId{_ket-dataset}{| \mathscr{X} \rangle} = \frac{1}{\sqrt{\cssId{_m}{M}}}\cssId{*sum-m}{\sum*{m=1}^{M}|x^{m} \rangle} $$ 計算基底符号化```q-statevector-binary-encoding p 左の入力データセットにビット列を追加したり削除したりすると、右の状態ベクトルに計算基底符号化がどのように符号化されるかを見ることができる。```Qiskitでは、どのような状態でデータセットを符号化するかを計算したら、`initialize` 関数を使用して準備することができます。 たとえば、データセット $\mathscr{X} = {x^{(1)}=101, x^{(2)}=111}$ は、$|\mathscr{X}\rangle= \frac{1}{\sqrt{2}}(|101\rangle+|111\rangle)$ の状態としてエンコードされます:
###Code
import math
from qiskit import QuantumCircuit
desired_state = [
0,
0,
0,
0,
0,
1 / math.sqrt(2),
0,
1 / math.sqrt(2)]
qc = QuantumCircuit(3)
qc.initialize(desired_state, [0,1,2])
qc.decompose().decompose().decompose().decompose().decompose().draw()
###Output
_____no_output_____
###Markdown
この例は、計算基底符号化の欠点も示しています。つまり、計算基底符号化は理解するのは簡単ですが、状態ベクトルは非常に疎になる可能性があり、実装するスキームが通常効率的ではありません。 振幅符号化振幅符号化は、データを量子状態の振幅に符号化します。これは、正規化された古典的な$N$次元データポイント$x$を、$n$量子ビットの量子状態$|\psi_x\rangle$の振幅として表します:$$|\psi_x\rangle = \sum_{i=1}^N x_i |i\rangle$$ where $N = 2^n$ ここで、$N = 2^n$、$x_i$ は $x$ の $i^{th}$ 番目の要素であり、$|i\rangle$ は $i^{th}$ 番目の計算基底状態です。上記の古典的なデータセット$\mathscr{X}$ を符号化するために、すべての$M$ $N$次元データポイントを長さ $N \times M$ の1つの振幅ベクトルに連結します: $$\alpha=\cssId{*a-norm}{A*{\text{norm}}}(x_{1}^{(1)},...,x_{N}^{(1)},...,x_{1}^{(m)},...,x_{N}^{(m)},...,x_{1}^{(M)},...,x_{N}^{(M)})$$ ここで、$A_{\text{norm}}$ は正規化定数であり、$|\alpha|^2 = 1$ です。 データセット$\mathscr{X}$ は、計算基底で次のように表すことができます。 $$|\mathscr{X}\rangle = \sum_{i=1}^N \alpha_i |i\rangle$$ ここで、$\alpha_i$は振幅ベクトルの要素であり、$|i\rangle$は計算基底状態です。 符号化される振幅の数は$N \times M$です。 $n$量子ビットのシステムは$2^n$個の振幅を提供するので、振幅の埋め込みには$n \ge \mathrm{log}_2(NM)$個の量子ビットが必要です。 振幅符号化```q-statevector-amplitude-encoding p 左側のデータポイントの値を変更し、振幅エンコーディングがこれらを右側の状態ベクトルとしてどのようにエンコードするかを確認します。```例として、振幅符号化を使用してデータセット$\mathscr{X}= {x^{(1)}=(1.5,0), x^{(2)}=(-2,3)}$ を符号化してみましょう。両方のデータポイントを連結し、結果のベクトルを正規化すると、次のようになります: $$\alpha = \frac{1}{\sqrt{15.25}}(1.5,0,-2,3)$$ 結果として得られる2量子ビットの量子状態は次のようになります: $$|\mathscr{X}\rangle = \frac{1}{\sqrt{15.25}}(1.5|00\rangle-2|10\rangle+3|11\rangle)$$上記の例では、振幅ベクトルの要素の総数$N \times M$は2の累乗です。$N \times M$が2の累乗でない場合、$2^n\geq MN$ となるように$n$の値を選び、振幅ベクトルに無意味な定数を埋め込めばよいです。 計算基底符号化と同様に、データセットを符号化する状態を計算したら、Qiskitで`initialize`関数を使用してを準備できます:
###Code
desired_state = [
1 / math.sqrt(15.25) * 1.5,
0,
1 / math.sqrt(15.25) * -2,
1 / math.sqrt(15.25) * 3]
qc = QuantumCircuit(2)
qc.initialize(desired_state, [0,1])
qc.decompose().decompose().decompose().decompose().decompose().draw()
###Output
_____no_output_____
###Markdown
振幅符号化の利点は、符号化に必要なのは$\mathrm{log}_2(NM)$量子ビットのみであるということです。 ただし、その後のアルゴリズムは量子状態の振幅を操作する必要があり、量子状態を準備し測定する方法は効率的ではない傾向があります。 角度符号化角度符号化は、$N$ 個の特徴(feature)を $n$量子ビットの回転角に符号化します。ここで、$N \le n$です。 たとえば、データポイント$x = (x_1,...,x_N)$ は次のように符号化できます。$$\cssId{_}{|x\rangle} = \class{*big-o-times-n}{\bigotimes^N*{i=1}} \cos(x_i)|0\rangle + \sin(x_i)|1\rangle$$これは、前の2つの符号化方法とは異なり、データセット全体ではなく、一度に1つのデータポイントのみを符号化する方法です。ただし、使用するのは$N$量子ビットで量子回路の深さも一定であるため、現在の量子ハードウェアに適用可能です。角度符号化は[ユニタリー演算](gloss:unitary)で指定することができます:$$ S_{x_j} = \class{*big-o-times-n}{\bigotimes*{i=1}^N} U(x_j^{(i)}) $$ここで$$ U(x_j^{(i)}) = \begin{bmatrix} \cos(x_j^{(i)}) & -\sin(x_j^{(i)}) \ \sin(x_j^{(i)}) & \cos(x_j^{(i)}) \ \end{bmatrix} $$$Y$軸周りの単一量子ビットの回転は次のとおりでした: $$RY(\theta) = \exp(-i \frac{\theta}{2} Y) = \begin{pmatrix} \cos{\frac{\theta}{2}} & -\sin{\frac{\theta}{2}} \ \sin{\frac{\theta}{2}} & \cos{\frac{\theta}{2}} \end{pmatrix} $$ $U(x_j^{(i)}) = RY(2x_j^{(i)})$に注意し、例として、qiskitを使用してデータポイント$x = (0, \pi/4, \pi/2)$ を符号化します:
###Code
qc = QuantumCircuit(3)
qc.ry(0, 0)
qc.ry(math.pi/4, 1)
qc.ry(math.pi/2, 2)
qc.draw()
###Output
_____no_output_____
###Markdown
高密度角度符号化は、角度符号化を少し一般化したものであり、相対位相を使用して1量子ビットごとに2つの特徴を符号化します。データポイント$x = (x_1,...,x_N)$は次のように符号化されます: $$|x\rangle = \class{*big-o-times-n2}{\bigotimes*{i=1}^{N/2}} \cos(x_{2i-1})|0\rangle + e^{i x_{2i}}\sin(x_{2i-1})|1\rangle$$角度および高密度角度符号化は、正弦波関数および指数関数を使用しますが、この関数は特別である必要はなく、任意の関数を使う一般的な量子ビットの符号化クラスに簡単に抽象化できます。次のように、任意のユニタリー演算で定義されたパラメーター化された量子回路で符号化を実装することもできます。 任意の符号化任意の符号化は、$N$個の特徴を$n$量子ビット上の$N$個のパラメーター化されたゲートの回転として符号化します。ここで、$n \leq N$です。角度符号化と同様に、データセット全体ではなく、一度に1つのデータポイントのみを符号化します。 また、一定の深さの量子回路と$n \leq N$ 量子ビットを使用しているため、現在の量子ハードウェアで実行できます。 たとえば、[`EfficientSU2`](https://qiskit.org/documentation/stubs/qiskit.circuit.library.EfficientSU2.html)回路を使用して12個の特徴を符号化するには、3個の量子ビットを使用するだけです:
###Code
from qiskit.circuit.library import EfficientSU2
circuit = EfficientSU2(num_qubits=3, reps=1, insert_barriers=True)
circuit.decompose().draw()
###Output
_____no_output_____
###Markdown
例えば、データポイント$x = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0,1.1,1.2]$ を12の特徴で符号化し、パラメーター化された各ゲートを使用して異なる特徴を符号化します。
###Code
x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2]
encode = circuit.bind_parameters(x)
encode.decompose().draw()
###Output
_____no_output_____
###Markdown
3量子ビットのQiskit[`ZZFeatureMap`](https://qiskit.org/documentation/stubs/qiskit.circuit.library.ZZFeatureMap.html) 回路は、6つのパラメーター化されたゲートがあるにもかかわらず、3つの機能のデータポイントしか符号化しません:
###Code
from qiskit.circuit.library import ZZFeatureMap
circuit = ZZFeatureMap(3, reps=1, insert_barriers=True)
circuit.decompose().draw()
x = [0.1, 0.2, 0.3]
encode = circuit.bind_parameters(x)
encode.decompose().draw()
###Output
_____no_output_____
###Markdown
クイッククイズパラメーター化された量子回路には16個のパラメーターがあります。 エンコードできる機能の最大数はいくつですか?1. 41. 81. 161. 32 さまざまなタイプのデータに対するさまざまなパラメータ化された量子回路の性能は、活発に研究されている分野です。 $$\cssId{big-o-times}{\bigotimes}$$ 参考文献1. Maria Schuld and Francesco Petruccione, *Supervised Learning with Quantum Computers*, Springer 2018, [doi:10.1007/978-3-319-96424-9](https://www.springer.com/gp/book/9783319964232).2. Ryan LaRose and Brian Coyle, *Robust data encodings for quantum classifiers*, Physical Review A 102, 032420 (2020), [doi:10.1103/PhysRevA.102.032420](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.102.032420), [arXiv:2003.01695](https://arxiv.org/abs/2003.01695).
###Code
import qiskit.tools.jupyter
%qiskit_version_table
###Output
/usr/local/anaconda3/lib/python3.7/site-packages/qiskit/aqua/__init__.py:86: DeprecationWarning: The package qiskit.aqua is deprecated. It was moved/refactored to qiskit-terra For more information see <https://github.com/Qiskit/qiskit-aqua/blob/main/README.md#migration-guide>
warn_package('aqua', 'qiskit-terra')
|
My2-4-3-HorsesOrHumans.ipynb | ###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Download the necessary data into the Colab Instance
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \
-O /tmp/horse-or-human.zip
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip \
-O /tmp/validation-horse-or-human.zip
import os
import zipfile
local_zip = '/tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/horse-or-human')
local_zip = '/tmp/validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation-horse-or-human')
zip_ref.close()
# Directory with our training horse pictures
train_horse_dir = os.path.join('/tmp/horse-or-human/horses')
# Directory with our training human pictures
train_human_dir = os.path.join('/tmp/horse-or-human/humans')
# Directory with our training horse pictures
validation_horse_dir = os.path.join('/tmp/validation-horse-or-human/horses')
# Directory with our training human pictures
validation_human_dir = os.path.join('/tmp/validation-horse-or-human/humans')
train_horse_names = os.listdir('/tmp/horse-or-human/horses')
print(train_horse_names[:10])
train_human_names = os.listdir('/tmp/horse-or-human/humans')
print(train_human_names[:10])
validation_horse_hames = os.listdir('/tmp/validation-horse-or-human/horses')
print(validation_horse_hames[:10])
validation_human_names = os.listdir('/tmp/validation-horse-or-human/humans')
print(validation_human_names[:10])
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Define your model and optimizer
###Code
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(100, 100, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(256, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(256, activation='relu'),
    # Only 1 output neuron. It will contain a value from 0-1, where values near 0 indicate one class ('horses') and values near 1 the other ('humans')
tf.keras.layers.Dense(1, activation='sigmoid')
])
print(model.summary())
from tensorflow.keras.optimizers import RMSprop
optimizer = RMSprop(learning_rate=0.0001)
model.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['acc'])
###Output
_____no_output_____
###Markdown
Organize your data into Generators
###Code
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be augmented according to whichever lines are uncommented below.
# we can first try without any of the augmentation beyond the rescaling
train_datagen = ImageDataGenerator(
rescale=1./255,
#rotation_range=40,
#width_shift_range=0.2,
#height_shift_range=0.2,
#shear_range=0.2,
#zoom_range=0.2,
#horizontal_flip=True,
#fill_mode='nearest'
)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
'/tmp/horse-or-human/', # This is the source directory for training images
        target_size=(100, 100),  # All images will be resized to 100x100
batch_size=128,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(
'/tmp/validation-horse-or-human',
target_size=(100, 100),
class_mode='binary')
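# Quick check (illustrative): flow_from_directory assigns class indices alphabetically,
# so 'horses' maps to 0 and 'humans' maps to 1 here.
print(train_generator.class_indices)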
###Output
Found 1027 images belonging to 2 classes.
Found 256 images belonging to 2 classes.
###Markdown
Train your modelThis may take a little while. Remember we are now building and training relatively complex computer vision models!
###Code
history = model.fit(
train_generator,
steps_per_epoch=8,
epochs=100,
verbose=1,
validation_data=validation_generator)
###Output
Epoch 1/100
8/8 [==============================] - 37s 651ms/step - loss: 0.6835 - acc: 0.5573 - val_loss: 0.6385 - val_acc: 0.8125
Epoch 2/100
8/8 [==============================] - 6s 692ms/step - loss: 0.6207 - acc: 0.7275 - val_loss: 0.5776 - val_acc: 0.6914
Epoch 3/100
8/8 [==============================] - 6s 723ms/step - loss: 0.5311 - acc: 0.7969 - val_loss: 0.4571 - val_acc: 0.8125
Epoch 4/100
8/8 [==============================] - 5s 683ms/step - loss: 0.4058 - acc: 0.8765 - val_loss: 0.3962 - val_acc: 0.8477
Epoch 5/100
8/8 [==============================] - 5s 760ms/step - loss: 0.3171 - acc: 0.8877 - val_loss: 0.5633 - val_acc: 0.7930
Epoch 6/100
8/8 [==============================] - 5s 680ms/step - loss: 0.2583 - acc: 0.9110 - val_loss: 0.5881 - val_acc: 0.8320
Epoch 7/100
8/8 [==============================] - 5s 686ms/step - loss: 0.2259 - acc: 0.9299 - val_loss: 0.7214 - val_acc: 0.8242
Epoch 8/100
8/8 [==============================] - 5s 678ms/step - loss: 0.1820 - acc: 0.9321 - val_loss: 0.5951 - val_acc: 0.8594
Epoch 9/100
8/8 [==============================] - 6s 720ms/step - loss: 0.1467 - acc: 0.9521 - val_loss: 0.5738 - val_acc: 0.8672
Epoch 10/100
8/8 [==============================] - 5s 684ms/step - loss: 0.1510 - acc: 0.9410 - val_loss: 0.7826 - val_acc: 0.8516
Epoch 11/100
8/8 [==============================] - 5s 677ms/step - loss: 0.1269 - acc: 0.9522 - val_loss: 0.5946 - val_acc: 0.8828
Epoch 12/100
8/8 [==============================] - 5s 679ms/step - loss: 0.1333 - acc: 0.9555 - val_loss: 0.9744 - val_acc: 0.8320
Epoch 13/100
8/8 [==============================] - 6s 692ms/step - loss: 0.1272 - acc: 0.9522 - val_loss: 0.9294 - val_acc: 0.8398
Epoch 14/100
8/8 [==============================] - 5s 687ms/step - loss: 0.0870 - acc: 0.9711 - val_loss: 1.4153 - val_acc: 0.7852
Epoch 15/100
8/8 [==============================] - 5s 688ms/step - loss: 0.1405 - acc: 0.9399 - val_loss: 0.8569 - val_acc: 0.8477
Epoch 16/100
8/8 [==============================] - 5s 677ms/step - loss: 0.0764 - acc: 0.9744 - val_loss: 1.0020 - val_acc: 0.8359
Epoch 17/100
8/8 [==============================] - 5s 683ms/step - loss: 0.0608 - acc: 0.9833 - val_loss: 0.6788 - val_acc: 0.8789
Epoch 18/100
8/8 [==============================] - 5s 688ms/step - loss: 0.1048 - acc: 0.9555 - val_loss: 1.0391 - val_acc: 0.8359
Epoch 19/100
8/8 [==============================] - 5s 677ms/step - loss: 0.0525 - acc: 0.9844 - val_loss: 1.1165 - val_acc: 0.8320
Epoch 20/100
8/8 [==============================] - 5s 684ms/step - loss: 0.0744 - acc: 0.9722 - val_loss: 1.1162 - val_acc: 0.8281
Epoch 21/100
8/8 [==============================] - 5s 684ms/step - loss: 0.0456 - acc: 0.9867 - val_loss: 0.9586 - val_acc: 0.8711
Epoch 22/100
8/8 [==============================] - 5s 685ms/step - loss: 0.0700 - acc: 0.9744 - val_loss: 1.1987 - val_acc: 0.8281
Epoch 23/100
8/8 [==============================] - 6s 720ms/step - loss: 0.0358 - acc: 0.9902 - val_loss: 1.4710 - val_acc: 0.8242
Epoch 24/100
8/8 [==============================] - 6s 688ms/step - loss: 0.0717 - acc: 0.9711 - val_loss: 1.1249 - val_acc: 0.8359
Epoch 25/100
8/8 [==============================] - 5s 682ms/step - loss: 0.0319 - acc: 0.9900 - val_loss: 1.3386 - val_acc: 0.8359
Epoch 26/100
8/8 [==============================] - 5s 682ms/step - loss: 0.0244 - acc: 0.9933 - val_loss: 1.4342 - val_acc: 0.8320
Epoch 27/100
8/8 [==============================] - 5s 681ms/step - loss: 0.0677 - acc: 0.9677 - val_loss: 1.5867 - val_acc: 0.8164
Epoch 28/100
8/8 [==============================] - 5s 684ms/step - loss: 0.0287 - acc: 0.9911 - val_loss: 1.1941 - val_acc: 0.8359
Epoch 29/100
8/8 [==============================] - 6s 695ms/step - loss: 0.0135 - acc: 0.9978 - val_loss: 1.3209 - val_acc: 0.8320
Epoch 30/100
8/8 [==============================] - 5s 684ms/step - loss: 0.0168 - acc: 0.9956 - val_loss: 1.6094 - val_acc: 0.8320
Epoch 31/100
8/8 [==============================] - 5s 682ms/step - loss: 0.0116 - acc: 0.9989 - val_loss: 1.5172 - val_acc: 0.8320
Epoch 32/100
8/8 [==============================] - 6s 689ms/step - loss: 0.0082 - acc: 1.0000 - val_loss: 1.9393 - val_acc: 0.8281
Epoch 33/100
8/8 [==============================] - 6s 689ms/step - loss: 0.0478 - acc: 0.9822 - val_loss: 1.3902 - val_acc: 0.8359
Epoch 34/100
8/8 [==============================] - 5s 685ms/step - loss: 0.0157 - acc: 0.9978 - val_loss: 1.3971 - val_acc: 0.8359
Epoch 35/100
8/8 [==============================] - 5s 684ms/step - loss: 0.0114 - acc: 0.9989 - val_loss: 1.3864 - val_acc: 0.8438
Epoch 36/100
8/8 [==============================] - 5s 683ms/step - loss: 0.0082 - acc: 0.9989 - val_loss: 1.3643 - val_acc: 0.8555
Epoch 37/100
8/8 [==============================] - 6s 732ms/step - loss: 0.0063 - acc: 1.0000 - val_loss: 1.7181 - val_acc: 0.8359
Epoch 38/100
8/8 [==============================] - 5s 681ms/step - loss: 0.0056 - acc: 0.9989 - val_loss: 1.5996 - val_acc: 0.8438
Epoch 39/100
8/8 [==============================] - 5s 680ms/step - loss: 0.0039 - acc: 1.0000 - val_loss: 2.1373 - val_acc: 0.8320
Epoch 40/100
8/8 [==============================] - 5s 774ms/step - loss: 0.0091 - acc: 0.9989 - val_loss: 4.0722 - val_acc: 0.6992
Epoch 41/100
8/8 [==============================] - 5s 681ms/step - loss: 0.0268 - acc: 0.9867 - val_loss: 1.7784 - val_acc: 0.8398
Epoch 42/100
8/8 [==============================] - 6s 687ms/step - loss: 0.0027 - acc: 1.0000 - val_loss: 1.7981 - val_acc: 0.8438
Epoch 43/100
8/8 [==============================] - 6s 696ms/step - loss: 0.0212 - acc: 0.9933 - val_loss: 1.6352 - val_acc: 0.8398
Epoch 44/100
8/8 [==============================] - 5s 682ms/step - loss: 0.0032 - acc: 1.0000 - val_loss: 1.7446 - val_acc: 0.8438
Epoch 45/100
8/8 [==============================] - 6s 692ms/step - loss: 0.0019 - acc: 1.0000 - val_loss: 1.8080 - val_acc: 0.8398
Epoch 46/100
8/8 [==============================] - 6s 699ms/step - loss: 0.0013 - acc: 1.0000 - val_loss: 1.9747 - val_acc: 0.8398
Epoch 47/100
8/8 [==============================] - 6s 724ms/step - loss: 0.0013 - acc: 1.0000 - val_loss: 1.9379 - val_acc: 0.8516
Epoch 48/100
8/8 [==============================] - 5s 770ms/step - loss: 0.0378 - acc: 0.9878 - val_loss: 2.1242 - val_acc: 0.8281
Epoch 49/100
8/8 [==============================] - 5s 683ms/step - loss: 0.0020 - acc: 1.0000 - val_loss: 2.0169 - val_acc: 0.8359
Epoch 50/100
8/8 [==============================] - 6s 693ms/step - loss: 0.0020 - acc: 1.0000 - val_loss: 2.0784 - val_acc: 0.8359
Epoch 51/100
8/8 [==============================] - 5s 775ms/step - loss: 7.8102e-04 - acc: 1.0000 - val_loss: 2.0788 - val_acc: 0.8398
Epoch 52/100
8/8 [==============================] - 5s 688ms/step - loss: 7.2774e-04 - acc: 1.0000 - val_loss: 2.1921 - val_acc: 0.8398
Epoch 53/100
8/8 [==============================] - 5s 776ms/step - loss: 5.4943e-04 - acc: 1.0000 - val_loss: 2.3365 - val_acc: 0.8359
Epoch 54/100
8/8 [==============================] - 6s 690ms/step - loss: 5.0761e-04 - acc: 1.0000 - val_loss: 2.7908 - val_acc: 0.8281
Epoch 55/100
8/8 [==============================] - 5s 682ms/step - loss: 0.0602 - acc: 0.9811 - val_loss: 3.4787 - val_acc: 0.7383
Epoch 56/100
8/8 [==============================] - 6s 694ms/step - loss: 0.0120 - acc: 0.9956 - val_loss: 2.1008 - val_acc: 0.8359
Epoch 57/100
8/8 [==============================] - 5s 678ms/step - loss: 0.0017 - acc: 1.0000 - val_loss: 2.0369 - val_acc: 0.8398
Epoch 58/100
8/8 [==============================] - 6s 694ms/step - loss: 0.0013 - acc: 1.0000 - val_loss: 2.0438 - val_acc: 0.8398
Epoch 59/100
8/8 [==============================] - 5s 680ms/step - loss: 8.1109e-04 - acc: 1.0000 - val_loss: 2.1871 - val_acc: 0.8398
Epoch 60/100
8/8 [==============================] - 6s 734ms/step - loss: 5.2193e-04 - acc: 1.0000 - val_loss: 2.2547 - val_acc: 0.8398
Epoch 61/100
8/8 [==============================] - 6s 702ms/step - loss: 3.8745e-04 - acc: 1.0000 - val_loss: 2.3603 - val_acc: 0.8398
Epoch 62/100
8/8 [==============================] - 6s 724ms/step - loss: 2.5402e-04 - acc: 1.0000 - val_loss: 2.4167 - val_acc: 0.8398
Epoch 63/100
8/8 [==============================] - 6s 706ms/step - loss: 2.3233e-04 - acc: 1.0000 - val_loss: 2.5134 - val_acc: 0.8398
Epoch 64/100
8/8 [==============================] - 6s 748ms/step - loss: 1.7258e-04 - acc: 1.0000 - val_loss: 2.6453 - val_acc: 0.8398
Epoch 65/100
8/8 [==============================] - 6s 700ms/step - loss: 1.3896e-04 - acc: 1.0000 - val_loss: 2.5774 - val_acc: 0.8398
Epoch 66/100
8/8 [==============================] - 6s 707ms/step - loss: 1.2735e-04 - acc: 1.0000 - val_loss: 3.3187 - val_acc: 0.8203
Epoch 67/100
8/8 [==============================] - 6s 700ms/step - loss: 0.0475 - acc: 0.9844 - val_loss: 2.7380 - val_acc: 0.7930
Epoch 68/100
8/8 [==============================] - 5s 685ms/step - loss: 0.0053 - acc: 0.9989 - val_loss: 2.7062 - val_acc: 0.8125
Epoch 69/100
8/8 [==============================] - 6s 727ms/step - loss: 0.0026 - acc: 1.0000 - val_loss: 2.6588 - val_acc: 0.8203
Epoch 70/100
8/8 [==============================] - 6s 700ms/step - loss: 0.0013 - acc: 1.0000 - val_loss: 2.7468 - val_acc: 0.8242
Epoch 71/100
8/8 [==============================] - 6s 692ms/step - loss: 9.3190e-04 - acc: 1.0000 - val_loss: 2.6162 - val_acc: 0.8320
Epoch 72/100
8/8 [==============================] - 6s 766ms/step - loss: 5.3048e-04 - acc: 1.0000 - val_loss: 2.5601 - val_acc: 0.8359
Epoch 73/100
8/8 [==============================] - 6s 718ms/step - loss: 3.7345e-04 - acc: 1.0000 - val_loss: 2.5688 - val_acc: 0.8359
Epoch 74/100
8/8 [==============================] - 6s 713ms/step - loss: 3.1541e-04 - acc: 1.0000 - val_loss: 2.6924 - val_acc: 0.8359
Epoch 75/100
8/8 [==============================] - 6s 704ms/step - loss: 2.1960e-04 - acc: 1.0000 - val_loss: 2.6025 - val_acc: 0.8398
Epoch 76/100
8/8 [==============================] - 6s 742ms/step - loss: 1.5169e-04 - acc: 1.0000 - val_loss: 2.7581 - val_acc: 0.8398
Epoch 77/100
8/8 [==============================] - 6s 690ms/step - loss: 1.5959e-04 - acc: 1.0000 - val_loss: 2.8909 - val_acc: 0.8359
Epoch 78/100
8/8 [==============================] - 5s 684ms/step - loss: 1.1197e-04 - acc: 1.0000 - val_loss: 2.7923 - val_acc: 0.8398
Epoch 79/100
8/8 [==============================] - 5s 688ms/step - loss: 0.0280 - acc: 0.9867 - val_loss: 1.8628 - val_acc: 0.8672
Epoch 80/100
8/8 [==============================] - 5s 775ms/step - loss: 7.1661e-04 - acc: 1.0000 - val_loss: 2.1692 - val_acc: 0.8516
Epoch 81/100
8/8 [==============================] - 5s 686ms/step - loss: 3.4284e-04 - acc: 1.0000 - val_loss: 2.2236 - val_acc: 0.8516
Epoch 82/100
8/8 [==============================] - 5s 678ms/step - loss: 2.8167e-04 - acc: 1.0000 - val_loss: 2.3130 - val_acc: 0.8516
Epoch 83/100
8/8 [==============================] - 6s 716ms/step - loss: 2.1269e-04 - acc: 1.0000 - val_loss: 2.3876 - val_acc: 0.8516
Epoch 84/100
8/8 [==============================] - 5s 679ms/step - loss: 2.1119e-04 - acc: 1.0000 - val_loss: 2.5256 - val_acc: 0.8438
Epoch 85/100
8/8 [==============================] - 5s 678ms/step - loss: 1.0752e-04 - acc: 1.0000 - val_loss: 2.5970 - val_acc: 0.8438
Epoch 86/100
8/8 [==============================] - 5s 674ms/step - loss: 8.6612e-05 - acc: 1.0000 - val_loss: 2.6796 - val_acc: 0.8438
Epoch 87/100
8/8 [==============================] - 5s 682ms/step - loss: 7.0141e-05 - acc: 1.0000 - val_loss: 2.8184 - val_acc: 0.8438
Epoch 88/100
8/8 [==============================] - 5s 684ms/step - loss: 5.3502e-05 - acc: 1.0000 - val_loss: 2.9045 - val_acc: 0.8438
Epoch 89/100
8/8 [==============================] - 5s 685ms/step - loss: 9.1974e-05 - acc: 1.0000 - val_loss: 3.4343 - val_acc: 0.8320
Epoch 90/100
8/8 [==============================] - 5s 674ms/step - loss: 3.4796e-05 - acc: 1.0000 - val_loss: 3.2933 - val_acc: 0.8398
Epoch 91/100
8/8 [==============================] - 5s 684ms/step - loss: 2.7988e-05 - acc: 1.0000 - val_loss: 3.4356 - val_acc: 0.8359
Epoch 92/100
8/8 [==============================] - 5s 690ms/step - loss: 1.9205e-05 - acc: 1.0000 - val_loss: 3.4798 - val_acc: 0.8398
Epoch 93/100
8/8 [==============================] - 6s 692ms/step - loss: 4.8933e-05 - acc: 1.0000 - val_loss: 1.2326 - val_acc: 0.9258
Epoch 94/100
8/8 [==============================] - 5s 684ms/step - loss: 0.1174 - acc: 0.9744 - val_loss: 3.6169 - val_acc: 0.8203
Epoch 95/100
8/8 [==============================] - 6s 688ms/step - loss: 0.0010 - acc: 1.0000 - val_loss: 3.4198 - val_acc: 0.8242
Epoch 96/100
8/8 [==============================] - 6s 689ms/step - loss: 5.1098e-04 - acc: 1.0000 - val_loss: 3.3288 - val_acc: 0.8320
Epoch 97/100
8/8 [==============================] - 5s 683ms/step - loss: 3.2914e-04 - acc: 1.0000 - val_loss: 3.2461 - val_acc: 0.8359
Epoch 98/100
8/8 [==============================] - 5s 680ms/step - loss: 1.8483e-04 - acc: 1.0000 - val_loss: 3.1818 - val_acc: 0.8359
Epoch 99/100
8/8 [==============================] - 6s 718ms/step - loss: 1.1972e-04 - acc: 1.0000 - val_loss: 3.1206 - val_acc: 0.8398
Epoch 100/100
8/8 [==============================] - 5s 763ms/step - loss: 7.9419e-05 - acc: 1.0000 - val_loss: 3.0846 - val_acc: 0.8398
###Markdown
Run your ModelLet's now take a look at actually running a prediction using the model. This code will allow you to choose one or more files from your file system; it will then upload them and run them through the model, indicating whether each image shows a horse or a human. **Was the model correct? Try a couple more images and see if you can confuse it!**
###Code
import numpy as np
from google.colab import files
from tensorflow.keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
# predicting images
path = '/content/' + fn
img = image.load_img(path, target_size=(100, 100))
x = image.img_to_array(img)
x = x / 255.0
x = np.expand_dims(x, axis=0)
image_tensor = np.vstack([x])
classes = model.predict(image_tensor)
print(classes)
print(classes[0])
if classes[0]>0.5:
print(fn + " is a human")
else:
print(fn + " is a horse")
###Output
_____no_output_____
###Markdown
Finally let's visualize all of the model layers!
###Code
import matplotlib.pyplot as plt
import numpy as np
import random
from tensorflow.keras.preprocessing.image import img_to_array, load_img
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Let's define a new Model that will take an image as input, and will output
# intermediate representations for all layers in the previous model after the first.
successive_outputs = [layer.output for layer in model.layers[1:]]
visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs)
# Let's prepare a random input image from the training set.
horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]
human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]
img_path = random.choice(horse_img_files + human_img_files)
# the next line overrides the random choice above and picks the first human image; comment it out to visualize a random image instead
img_path = human_img_files[0]
img = load_img(img_path, target_size=(100, 100)) # this is a PIL image
x = img_to_array(img) # Numpy array with shape (100, 100, 3)
x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 100, 100, 3)
# Rescale by 1/255
x /= 255.0
# Let's run our image through our network, thus obtaining all
# intermediate representations for this image.
successive_feature_maps = visualization_model.predict(x)
# These are the names of the layers, so can have them as part of our plot
layer_names = [layer.name for layer in model.layers]
# Now let's display our representations
for layer_name, feature_map in zip(layer_names, successive_feature_maps):
if len(feature_map.shape) == 4:
# Just do this for the conv / maxpool layers, not the fully-connected layers
n_features = feature_map.shape[-1] # number of features in feature map
n_features = min(n_features,5) # limit to 5 features for easier viewing
# The feature map has shape (1, size, size, n_features)
size = feature_map.shape[1]
# We will tile our images in this matrix
display_grid = np.zeros((size, size * n_features))
for i in range(n_features):
# Postprocess the feature to make it visually palatable
x = feature_map[0, :, :, i]
x -= x.mean()
x /= x.std()
x *= 64
x += 128
x = np.clip(x, 0, 255).astype('uint8')
# We'll tile each filter into this big horizontal grid
display_grid[:, i * size : (i + 1) * size] = x
# Display the grid
scale = 20. / n_features
plt.figure(figsize=(scale * n_features, scale))
plt.title(layer_name)
plt.grid(False)
plt.imshow(display_grid, aspect='auto', cmap='viridis')
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:50: RuntimeWarning: invalid value encountered in true_divide
###Markdown
Clean UpBefore running the next exercise, run the following cell to terminate the kernel and free memory resources:
###Code
import os, signal
os.kill(os.getpid(), signal.SIGKILL)
###Output
_____no_output_____ |
solutions/S6_DQN_CartPole.ipynb | ###Markdown
Deep Q-Network with Cart PoleThis notebook shows an implementation of a DQN on the Cart Pole environment.Note: The following code is heavily inspired by [this]( https://www.katnoria.com/nb_dqn_lunar/) blog post. 1. SetupWe first need to install some dependencies for using the environment:
###Code
import random
import sys
from time import time
from collections import deque, defaultdict, namedtuple
import numpy as np
import gym
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
env = gym.make('CartPole-v1')
env.seed(0)
print(env.action_space.n)
print(env.observation_space.shape)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
device
###Output
_____no_output_____
###Markdown
2. Define the neural network, the replay buffer and the agent First, we define the neural network that predicts the Q-values for all actions, given a state as input.This is a fully-connected neural net with two hidden layers using Relu activations.The last layer does not have any activation and outputs a Q-value for every action.
###Code
class QNetwork(nn.Module):
def __init__(self, state_size, action_size, seed):
super(QNetwork, self).__init__()
self.seed = torch.manual_seed(seed)
self.fc1 = nn.Linear(state_size, 32)
self.fc2 = nn.Linear(32, 64)
self.fc3 = nn.Linear(64, action_size)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
return self.fc3(x)
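# Illustrative smoke test (not part of the original solution): for CartPole the state has
# 4 dimensions and there are 2 actions, so the network maps a (batch, 4) input to (batch, 2) Q-values.
_test_net = QNetwork(state_size=4, action_size=2, seed=0)
print(_test_net(torch.rand(1, 4)).shape)  # expected: torch.Size([1, 2])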
###Output
_____no_output_____
###Markdown
Next, we define a replay buffer that saves previous transitions (so-called `experiences`) and provides a `sample` function to randomly extract a batch of experiences from the buffer.Note that experiences are internally saved as `numpy`-arrays. They are converted back to PyTorch tensors before being returned by the `sample`-method.
###Code
class ReplayBuffer:
def __init__(self, buffer_size, batch_size, seed):
self.batch_size = batch_size
self.seed = random.seed(seed)
self.memory = deque(maxlen=buffer_size) # maximum size of buffer
self.experience = namedtuple("Experience", field_names=["state", "action", "reward", "next_state", "done"])
def add(self, state, action, reward, next_state, done):
experience = self.experience(state, action, reward, next_state, done)
self.memory.append(experience)
def sample(self):
experiences = random.sample(self.memory, self.batch_size)
# Convert to PyTorch tensors
states = np.vstack([experience.state for experience in experiences if experience is not None])
states_tensor = torch.from_numpy(states).float().to(device)
actions = np.vstack([experience.action for experience in experiences if experience is not None])
actions_tensor = torch.from_numpy(actions).long().to(device)
rewards = np.vstack([experience.reward for experience in experiences if experience is not None])
rewards_tensor = torch.from_numpy(rewards).float().to(device)
next_states = np.vstack([experience.next_state for experience in experiences if experience is not None])
next_states_tensor = torch.from_numpy(next_states).float().to(device)
# Convert done flag from boolean to int
dones = np.vstack([experience.done for experience in experiences if experience is not None]).astype(np.uint8)
dones_tensor = torch.from_numpy(dones).float().to(device)
return (states_tensor, actions_tensor, rewards_tensor, next_states_tensor, dones_tensor)
def __len__(self):
return len(self.memory)
BUFFER_SIZE = int(1e5) # Replay memory size
BATCH_SIZE = 64 # Number of experiences to sample from memory
GAMMA = 0.99 # Discount factor
TAU = 1e-3 # Soft update parameter for updating fixed q network
LR = 1e-4 # Q Network learning rate
UPDATE_EVERY = 4 # How often to update Q network
class DQNAgent:
def __init__(self, state_size, action_size, seed):
self.state_size = state_size
self.action_size = action_size
self.seed = random.seed(seed)
# Initialize Q and Fixed Q networks
self.q_network = QNetwork(state_size, action_size, seed).to(device)
self.fixed_network = QNetwork(state_size, action_size, seed).to(device)
self.optimizer = optim.Adam(self.q_network.parameters())
# Initiliase memory
self.memory = ReplayBuffer(BUFFER_SIZE, BATCH_SIZE, seed)
self.timestep = 0
def step(self, state, action, reward, next_state, done):
self.memory.add(state, action, reward, next_state, done)
self.timestep += 1
# trigger training
if self.timestep % UPDATE_EVERY == 0:
if len(self.memory) > BATCH_SIZE: # only when buffer is filled
sampled_experiences = self.memory.sample()
self.learn(sampled_experiences)
def learn(self, experiences):
states, actions, rewards, next_states, dones = experiences
action_values = self.fixed_network(next_states).detach()
max_action_values = action_values.max(1)[0].unsqueeze(1)
# If "done" just use reward, else update Q_target with discounted action values
Q_target = rewards + (GAMMA * max_action_values * (1 - dones))
Q_expected = self.q_network(states).gather(1, actions)
# Calculate loss and update weights
loss = F.mse_loss(Q_expected, Q_target)
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
# Update fixed weights
self.update_fixed_network(self.q_network, self.fixed_network)
def update_fixed_network(self, q_network, fixed_network):
for source_parameters, target_parameters in zip(q_network.parameters(), fixed_network.parameters()):
target_parameters.data.copy_(TAU * source_parameters.data + (1.0 - TAU) * target_parameters.data)
def act(self, state, eps=0.0):
rnd = random.random()
if rnd < eps:
return np.random.randint(self.action_size)
else:
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
action_values = self.q_network(state)
action = np.argmax(action_values.cpu().data.numpy())
return action
###Output
_____no_output_____
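###Markdown
A minimal sketch of the buffer in isolation (the transition values below are arbitrary placeholders): add a few fake transitions with a tiny batch size and draw one sample.
###Code
# Sketch: exercise the ReplayBuffer on its own with fake transitions
demo_buffer = ReplayBuffer(buffer_size=100, batch_size=2, seed=0)
for _ in range(4):
    demo_buffer.add(np.random.rand(4), 0, 1.0, np.random.rand(4), False)  # fake (s, a, r, s', done)
states, actions, rewards, next_states, dones = demo_buffer.sample()
print(states.shape, actions.shape, dones.shape)  # tensors of batch size 2, already moved to `device`
###Output
_____no_output_____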
###Markdown
3. Execute episodes and train the model
We first define some parameters that guide the training process:
###Code
MAX_EPISODES = 2000 # Max number of episodes to play
MAX_STEPS = 1000 # Max steps allowed in a single episode/play
# Epsilon schedule
EPS_START = 1.0 # Default/starting value of eps
EPS_DECAY = 0.999 # Epsilon decay rate
EPS_MIN = 0.01 # Minimum epsilon
###Output
_____no_output_____
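###Markdown
To get a feel for this schedule before training, a small sketch of the decay: epsilon is multiplied by EPS_DECAY once per episode, so after n episodes it equals max(EPS_START * EPS_DECAY**n, EPS_MIN).
###Code
# Sketch: evaluate the epsilon schedule at a few episode counts
for n in [0, 100, 500, 1000, 2000]:
    print(n, round(max(EPS_START * EPS_DECAY ** n, EPS_MIN), 4))
###Output
_____no_output_____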
###Markdown
Then we start executing episodes and observe the mean score per episode. The environment is considered solved if this score is above 200.
###Code
# Get state and action sizes
state_size = env.observation_space.shape[0]
action_size = env.action_space.n
print('State size: {}, action size: {}'.format(state_size, action_size))
dqn_agent = DQNAgent(state_size, action_size, seed=0)
start = time()
# Maintain a list of last 100 scores
scores_window = deque(maxlen=100)
eps = EPS_START
for episode in range(1, MAX_EPISODES + 1):
state = env.reset()
score = 0
for t in range(MAX_STEPS):
action = dqn_agent.act(state, eps)
next_state, reward, done, info = env.step(action)
dqn_agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
eps = max(eps * EPS_DECAY, EPS_MIN)
scores_window.append(score)
if episode % 99 == 0:
mean_score = np.mean(scores_window)
print('Progress {}/{}, average score:{:.2f}'.format(episode, MAX_EPISODES, mean_score))
mean_score = np.mean(scores_window)
if mean_score >= 200:
print('\rEnvironment solved in {} episodes, average score: {:.2f}'.format(episode, mean_score))
sys.stdout.flush()
break
end = time()
print('Took {} seconds'.format(end - start))
###Output
State size: 4, action size: 2
Progress 99/2000, average score:13.65
Progress 198/2000, average score:10.60
Progress 297/2000, average score:9.68
Progress 396/2000, average score:9.67
Progress 495/2000, average score:9.54
Progress 594/2000, average score:9.40
Progress 693/2000, average score:9.67
Progress 792/2000, average score:11.09
Progress 891/2000, average score:10.72
Progress 990/2000, average score:10.97
Progress 1089/2000, average score:14.88
Progress 1188/2000, average score:29.49
Progress 1287/2000, average score:129.15
Progress 1386/2000, average score:184.90
Progress 1485/2000, average score:170.18
Progress 1584/2000, average score:173.48
Progress 1683/2000, average score:176.79
Progress 1782/2000, average score:165.41
Progress 1881/2000, average score:155.59
Progress 1980/2000, average score:147.86
Took 270.2958333492279 seconds
###Markdown
4. Play an episode and record it
The following code enables Colab to record sessions (not needed when executing the code locally).
###Code
!apt-get install -y xvfb x11-utils
!pip install pyvirtualdisplay==0.2.* \
PyOpenGL==3.1.* \
PyOpenGL-accelerate==3.1.*
!pip install gym[box2d]==0.17.*
import pyvirtualdisplay
_display = pyvirtualdisplay.Display(visible=False, size=(1400, 900))
_ = _display.start()
###Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libxxf86dga1
Suggested packages:
mesa-utils
The following NEW packages will be installed:
libxxf86dga1 x11-utils xvfb
0 upgraded, 3 newly installed, 0 to remove and 16 not upgraded.
Need to get 993 kB of archives.
After this operation, 2,981 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 libxxf86dga1 amd64 2:1.1.4-1 [13.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 x11-utils amd64 7.7+3build1 [196 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 xvfb amd64 2:1.19.6-1ubuntu4.8 [784 kB]
Fetched 993 kB in 2s (504 kB/s)
Selecting previously unselected package libxxf86dga1:amd64.
(Reading database ... 145483 files and directories currently installed.)
Preparing to unpack .../libxxf86dga1_2%3a1.1.4-1_amd64.deb ...
Unpacking libxxf86dga1:amd64 (2:1.1.4-1) ...
Selecting previously unselected package x11-utils.
Preparing to unpack .../x11-utils_7.7+3build1_amd64.deb ...
Unpacking x11-utils (7.7+3build1) ...
Selecting previously unselected package xvfb.
Preparing to unpack .../xvfb_2%3a1.19.6-1ubuntu4.8_amd64.deb ...
Unpacking xvfb (2:1.19.6-1ubuntu4.8) ...
Setting up xvfb (2:1.19.6-1ubuntu4.8) ...
Setting up libxxf86dga1:amd64 (2:1.1.4-1) ...
Setting up x11-utils (7.7+3build1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1.2) ...
/sbin/ldconfig.real: /usr/local/lib/python3.6/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link
Collecting pyvirtualdisplay==0.2.*
Downloading https://files.pythonhosted.org/packages/69/ec/8221a07850d69fa3c57c02e526edd23d18c7c05d58ed103e3b19172757c1/PyVirtualDisplay-0.2.5-py2.py3-none-any.whl
Requirement already satisfied: PyOpenGL==3.1.* in /usr/local/lib/python3.6/dist-packages (3.1.5)
Collecting PyOpenGL-accelerate==3.1.*
[?25l Downloading https://files.pythonhosted.org/packages/a2/3c/f42a62b7784c04b20f8b88d6c8ad04f4f20b0767b721102418aad94d8389/PyOpenGL-accelerate-3.1.5.tar.gz (538kB)
[K |████████████████████████████████| 542kB 4.3MB/s
[?25hCollecting EasyProcess
Downloading https://files.pythonhosted.org/packages/48/3c/75573613641c90c6d094059ac28adb748560d99bd27ee6f80cce398f404e/EasyProcess-0.3-py2.py3-none-any.whl
Building wheels for collected packages: PyOpenGL-accelerate
Building wheel for PyOpenGL-accelerate (setup.py) ... [?25l[?25hdone
Created wheel for PyOpenGL-accelerate: filename=PyOpenGL_accelerate-3.1.5-cp36-cp36m-linux_x86_64.whl size=1593637 sha256=4299bd55c68fda6ca5b4a23598a8b9a750596719360a8c789a6cead3e3d6aba5
Stored in directory: /root/.cache/pip/wheels/bd/21/77/99670ceca25fddb3c2b60a7ae44644b8253d1006e8ec417bcc
Successfully built PyOpenGL-accelerate
Installing collected packages: EasyProcess, pyvirtualdisplay, PyOpenGL-accelerate
Successfully installed EasyProcess-0.3 PyOpenGL-accelerate-3.1.5 pyvirtualdisplay-0.2.5
Requirement already satisfied: gym[box2d]==0.17.* in /usr/local/lib/python3.6/dist-packages (0.17.3)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym[box2d]==0.17.*) (1.5.0)
Requirement already satisfied: numpy>=1.10.4 in /usr/local/lib/python3.6/dist-packages (from gym[box2d]==0.17.*) (1.19.5)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym[box2d]==0.17.*) (1.3.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym[box2d]==0.17.*) (1.4.1)
Collecting box2d-py~=2.3.5; extra == "box2d"
[?25l Downloading https://files.pythonhosted.org/packages/06/bd/6cdc3fd994b0649dcf5d9bad85bd9e26172308bbe9a421bfc6fdbf5081a6/box2d_py-2.3.8-cp36-cp36m-manylinux1_x86_64.whl (448kB)
[K |████████████████████████████████| 450kB 4.3MB/s
[?25hRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym[box2d]==0.17.*) (0.16.0)
Installing collected packages: box2d-py
Successfully installed box2d-py-2.3.8
###Markdown
Use the trained model to play and record one episode. The recorded video will be stored in the `video` subfolder.
###Code
import time
FPS = 25
record_folder="video"
env = gym.make('CartPole-v1')
env = gym.wrappers.Monitor(env, record_folder, force=True)
state = env.reset()
total_reward = 0.0
while True:
start_ts = time.time()
env.render()
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
action_values = dqn_agent.q_network(state)
action = np.argmax(action_values.cpu().data.numpy())
state, reward, done, _ = env.step(action)
total_reward += reward
if done:
break
delta = 1/FPS - (time.time() - start_ts)
if delta > 0:
time.sleep(delta)
print("Total reward: %.2f" % total_reward)
env.close()
###Output
_____no_output_____ |
examples/pyfvcom_preprocessing_example.ipynb | ###Markdown
This notebook shows how to create FVCOM forcing data for an unstructured grid. We need an SMS unstructured grid (`.2dm` file) in which we have defined some nodestrings to act as open boundaries. We'll be making the following files:
- casename_grd.dat
- casename_dep.dat
- sigma.dat
- casename_obc.dat
- casename_cor.dat
- casename_elevtide.nc
###Code
%matplotlib inline
from datetime import datetime
import PyFVCOM as pf
# Define a start, end and sampling interval for the tidal data
start = datetime.strptime('2018-04-01', '%Y-%m-%d')
end = datetime.strptime('2018-05-01', '%Y-%m-%d')
interval = 1 / 24 # 1 hourly in units of days
model = pf.preproc.Model(start, end, 'estuary.2dm', sampling=interval,
native_coordinates='spherical', zone='20N')
# Define everything we need for the open boundaries.
# We need the TPXO data to predict tides at the boundary. Get that from here:
# ftp://ftp.oce.orst.edu/dist/tides/Global/tpxo9_netcdf.tar.gz
# and extract its contents in the PyFVCOM/examples directory.
tpxo_harmonics = 'h_tpxo9.v1.nc'
constituents = ['M2', 'S2']
for boundary in model.open_boundaries:
# Create a 5km sponge layer for all open boundaries.
boundary.add_sponge_layer(5000, 0.001)
# Set the type of open boundary we've got.
boundary.add_type(1) # prescribed surface elevation
# And add some tidal data.
boundary.add_tpxo_tides(tpxo_harmonics, predict='zeta', constituents=constituents, interval=interval)
# Make a vertical grid with 21 uniform levels
model.sigma.type = 'uniform'
model.dims.levels = 21
# Write out the files for FVCOM.
model.write_grid('estuary_grd.dat', depth_file='estuary_dep.dat')
model.write_sponge('estuary_spg.dat')
model.write_obc('estuary_obc.dat')
model.write_coriolis('estuary_cor.dat')
model.write_sigma('sigma.dat')
model.write_tides('estuary_elevtide.nc')
# Let's have a look at the grid we've just worked on.
mesh = pf.read.Domain('estuary.2dm', native_coordinates='spherical', zone='20N')
domain = pf.plot.Plotter(mesh, figsize=(20, 10), tick_inc=(0.1, 0.05), cb_label='Depth (m)')
domain.plot_field(-mesh.grid.h)
for boundary in model.open_boundaries:
domain.axes.plot(*domain.m(boundary.grid.lon, boundary.grid.lat), 'ro')
###Output
_____no_output_____ |
section_2/01_loss_function.ipynb | ###Markdown
Plotting the loss functions. We draw the loss functions as graphs to understand their characteristics. This time we plot the sum-of-squares error and the binary cross-entropy error. Sum-of-squares error: the sum-of-squares error is expressed as $$ E = \frac{1}{2} \sum_{k=1}^n(y_k-t_k)^2 $$ where $y_k$ is the output, $t_k$ is the target and $n$ is the number of neurons in the output layer. Here we plot the individual squared error before taking the sum: $$E_k = \frac{1}{2}(y_k-t_k)^2$$ With the code below, we check how the squared error changes with `t` when `y` is 0.25, 0.5 and 0.75.
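(As a concrete point on these curves: with $y=0.75$ and $t=0.25$, the individual error is $E_k = \frac{1}{2}(0.75-0.25)^2 = 0.125$.)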
###Code
import numpy as np
import matplotlib.pyplot as plt
def square_error(y, t):
return (y - t)**2/2 # 二乗誤差
t = np.linspace(0, 1)
ys = [0.25, 0.5, 0.75]
for y in ys:
plt.plot(t, square_error(y, t), label="y="+str(y))
plt.legend()
plt.xlabel("t")
plt.ylabel("Error")
plt.show()
###Output
_____no_output_____
###Markdown
For the squared error, we can see that the error rises gently on both sides of the minimum. Binary cross-entropy error: the binary cross-entropy error is expressed as $$E = -y\log t-(1-y)\log(1-t)$$ Here $E$ measures how far apart the values of $y$ and $t$ are, and it takes its minimum when $y$ equals $t$. The code below plots the binary cross-entropy error, checking how it changes with `t` when `y` is 0.25, 0.5 and 0.75.
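(For example, with $y=0.75$ the error is minimized at $t=0.75$, where $E = -0.75\log 0.75 - 0.25\log 0.25 \approx 0.56$; unlike the squared error, this minimum is not zero.)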
###Code
import numpy as np
import matplotlib.pyplot as plt
def binary_crossentropy(y, t):
return -y*np.log(t) - (1-y)*np.log(1-t) # 二値の交差エントロピー
t = np.linspace(0.01, 0.99)
ys = [0.25, 0.5, 0.75]
for y in ys:
plt.plot(t, binary_crossentropy(y, t), label="y="+str(y))
plt.legend()
plt.xlabel("t")
plt.ylabel("Error")
plt.show()
###Output
_____no_output_____ |
datasets/Part 6 - Reinforcement Learning/Section 32 - Upper Confidence Bound (UCB)/random_selection.ipynb | ###Markdown
We clone the repository to obtain the datasets
###Code
!git clone https://github.com/joanby/machinelearning-az.git
###Output
_____no_output_____
###Markdown
We grant access to our Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Test it
###Code
!ls '/content/drive/My Drive'
###Output
_____no_output_____
###Markdown
Google colab tools
###Code
from google.colab import files # To handle files and, for example, export them to your browser
import glob # To handle files and, for example, export them to your browser
from google.colab import drive # To mount your Google Drive
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
!pip install sklearn
###Output
_____no_output_____
###Markdown
Random Selection
How to import the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Import the dataset
###Code
dataset = pd.read_csv('/content/machinelearning-az/datasets/Part 6 - Reinforcement Learning/Section 32 - Upper Confidence Bound (UCB)/Ads_CTR_Optimisation.csv')
###Output
_____no_output_____
###Markdown
Implement a Random Selection
###Code
import random
N = 10000
d = 10
ads_selected = []
total_reward = 0
for n in range(0, N):
ad = random.randrange(d)
ads_selected.append(ad)
reward = dataset.values[n, ad]
total_reward = total_reward + reward
###Output
_____no_output_____
###Markdown
Histogram of results
###Code
plt.hist(ads_selected)
plt.title('Histogram of ad selections')
plt.xlabel('Ad')
plt.ylabel('Number of times the ad was shown')
plt.show()
###Output
_____no_output_____ |
07-Rekurzija.ipynb | ###Markdown
Recursion is a procedure that uses itself in its own definition. In Python it is implemented as a function that calls itself. A standard example of recursion is the factorial. The factorial of a natural number N can be defined as the product of N and the factorial of its predecessor in the sequence of natural numbers, N-1. If N=1, the factorial equals 1. If we denote the factorial by f, this definition can be written as: f(N) = N * f(N-1). A Python implementation follows.
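(As a quick check of the definition: f(5) = 5 * f(4) = 5 * 4 * f(3) = 5 * 4 * 3 * f(2) = 5 * 4 * 3 * 2 * f(1) = 120, which matches the output of the code below.)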
###Code
def factorial(num):
if num == 1:
return 1
else:
return num * factorial(num - 1)
N = 5
fact = factorial(N)
print("Factorial of %d is %d" % (N, fact))
###Output
Factorial of 5 is 120
###Markdown
Another standard example of recursion is the Fibonacci sequence. The Fibonacci sequence is defined as follows: the first and second elements are equal to 1, and every subsequent element equals the sum of the previous two. The first few elements of the Fibonacci sequence are: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89... A Python implementation follows: a function that computes the N-th element of the Fibonacci sequence.
###Code
def fibonacci(nth):
if nth <= 2:
return 1
else:
return fibonacci(nth - 1) + fibonacci(nth - 2)
nth = 10
fib = fibonacci(nth)
print("The %dth element of the Fibonacci series is: %d" % (nth, fib))
###Output
The 10th element of the Fibonacci series is: 55
|
Week 06 Linear Classification/multiclass/multiclass-experiments.ipynb | ###Markdown
Multiclass Perceptron and SVM
In this notebook, we'll try out the multiclass Perceptron and SVM on small data sets.
Clone remote
###Code
import os, sys
from pathlib import Path
URL = "https://github.com/Data-Science-and-Data-Analytics-Courses/UCSanDiegoX---Machine-Learning-Fundamentals-03-Jan-2019-audit"
NBDIR = "Week 06 Linear Classification/multiclass"
def clone(url, dest=".", branch="master", reloc=True):
"""
Clone remote branch from url into dest
branch not provided: clone all branches
reloc is True: relocate to repository
"""
url = url.strip(" /")
repo = Path(dest, os.path.basename(url)).resolve()
# dest must not be inside existing repository
is_out = !git -C "$dest" rev-parse
if not is_out: # inside repository
raise ValueError("Can't clone into existing repository")
# Clone
p = repo.as_posix()
if branch: # specific branch
!git clone --single-branch "$url" -b "$branch" "$p"
else: # all branches
!git clone "$url" "$p"
# Relocate
if reloc:
%cd "$repo"
return repo.as_posix()
REPO = clone(URL)
%run .Importable.ipynb
sys.path.append(REPO)
%cd "$NBDIR"
###Output
Cloning into '/content/UCSanDiegoX---Machine-Learning-Fundamentals-03-Jan-2019-audit'...
remote: Enumerating objects: 87, done.[K
remote: Counting objects: 100% (87/87), done.[K
remote: Compressing objects: 100% (83/83), done.[K
remote: Total 966 (delta 43), reused 0 (delta 0), pack-reused 879[K
Receiving objects: 100% (966/966), 3.45 MiB | 10.76 MiB/s, done.
Resolving deltas: 100% (492/492), done.
/content/UCSanDiegoX---Machine-Learning-Fundamentals-03-Jan-2019-audit
###Markdown
1. Multiclass Perceptron
Let's start with the code for the multiclass Perceptron algorithm. This is similar in spirit to our earlier binary Perceptron algorithm, except that now there is a linear function for each class. If there are `k` classes, we will assume that they are numbered `0,1,...,k-1`. For `d`-dimensional data, the classifier will be parametrized by:
* `w`: this is a `kxd` numpy array with one row for each class
* `b`: this is a `k`-dimensional numpy array with one offset for each class

Thus the linear function for class `j` (where `j` lies in the range `0` to `k-1`) is given by `w[j,:], b[j]`. The first procedure, **evaluate_classifier**, takes as input the parameters of a linear classifier (`w,b`) as well as a data point (`x`) and returns the prediction of that classifier at `x`.
###Code
def evaluate_classifier(w,b,x):
k = len(b)
scores = np.zeros(k)
for j in range(k):
scores[j] = np.dot(w[j,:],x) + b[j]
return int(np.argmax(scores))
###Output
_____no_output_____
###Markdown
Here is the multiclass Perceptron training procedure. It is invoked as follows:
* `w,b,converged = train_multiclass_perceptron(x,y,k,n_iters)`

where
* `x`: n-by-d numpy array with n data points, each d-dimensional
* `y`: n-dimensional numpy array with the labels (in the range `0` to `k-1`)
* `k`: the number of classes
* `n_iters`: the training procedure will run through the data at most this many times (default: 100)
* `w,b`: parameters for the final linear classifier, as above
* `converged`: flag (True/False) indicating whether the algorithm converged within the prescribed number of iterations

If the data is not linearly separable, then the training procedure will not converge.
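Internally, whenever the prediction $\hat{y}$ for a training point $x$ disagrees with its true label $y$, the procedure applies the multiclass update $w_{y} \leftarrow w_{y} + x$, $b_{y} \leftarrow b_{y} + 1$ and $w_{\hat{y}} \leftarrow w_{\hat{y}} - x$, $b_{\hat{y}} \leftarrow b_{\hat{y}} - 1$.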
###Code
def train_multiclass_perceptron(x,y,k,n_iters=100):
n,d = x.shape
w = np.zeros((k,d))
b = np.zeros(k)
done = False
converged = True
iters = 0
np.random.seed(None)
while not(done):
done = True
I = np.random.permutation(n)
for j in I:
pred_y = evaluate_classifier(w,b,x[j,:])
true_y = int(y[j])
if pred_y != true_y:
w[true_y,:] = w[true_y,:] + x[j,:]
b[true_y] = b[true_y] + 1.0
w[pred_y,:] = w[pred_y,:] - x[j,:]
b[pred_y] = b[pred_y] - 1.0
done = False
iters = iters + 1
if iters > n_iters:
done = True
converged = False
if converged:
print("Perceptron algorithm: iterations until convergence: ", iters)
else:
print("Perceptron algorithm: did not converge within the specified number of iterations")
return w, b, converged
###Output
_____no_output_____
###Markdown
2. Experiments with multiclass Perceptron
###Code
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
###Output
_____no_output_____
###Markdown
This next routine takes as input a two-dimensional data set as well as a classifier. It plots the points, with different colors for different labels, and shows the decision boundaries of the classifier. It is invoked as follows:
* `display_data_and_boundary(x,y,pred_fn)`

where
* `x` and `y` are the two-dimensional data and their labels (in the range `0,...,k-1`)
* `pred_fn` is the classifier: it is a function that takes a data point and returns a label
###Code
def display_data_and_boundary(x,y,pred_fn):
# Determine the x1- and x2- limits of the plot
x1min = min(x[:,0]) - 1
x1max = max(x[:,0]) + 1
x2min = min(x[:,1]) - 1
x2max = max(x[:,1]) + 1
plt.xlim(x1min,x1max)
plt.ylim(x2min,x2max)
# Plot the data points
k = int(max(y)) + 1
cols = ['ro', 'k^', 'b*','gx']
for label in range(k):
plt.plot(x[(y==label),0], x[(y==label),1], cols[label%4], markersize=8)
# Construct a grid of points at which to evaluate the classifier
grid_spacing = 0.05
xx1, xx2 = np.meshgrid(np.arange(x1min, x1max, grid_spacing), np.arange(x2min, x2max, grid_spacing))
grid = np.c_[xx1.ravel(), xx2.ravel()]
Z = np.array([pred_fn(pt) for pt in grid])
# Show the classifier's boundary using a color plot
Z = Z.reshape(xx1.shape)
plt.pcolormesh(xx1, xx2, Z, cmap=plt.cm.Pastel1, vmin=0, vmax=k)
plt.show()
###Output
_____no_output_____
###Markdown
The following procedure, **run_multiclass_perceptron**, loads a labeled two-dimensional data set, learns a linear classifier using the Perceptron algorithm, and then displays the data as well as the boundary. The data file is assumed to contain one data point per line, along with a label, like:
* `3 8 2` (meaning that point `x=(3,8)` has label `y=2`)
###Code
def run_multiclass_perceptron(datafile):
data = np.loadtxt(datafile)
n,d = data.shape
# Create training set x and labels y
x = data[:,0:2]
y = data[:,2]
k = int(max(y)) + 1
print("Number of classes: ", k)
# Run the Perceptron algorithm for at most 1000 iterations
w,b,converged = train_multiclass_perceptron(x,y,k,1000)
# Show the data and boundary
pred_fn = lambda p: evaluate_classifier(w,b,p)
display_data_and_boundary(x,y,pred_fn)
###Output
_____no_output_____
###Markdown
Let's try this out on two simple data sets. Make sure that the directory containing this notebook also contains the two-dimensional data files `data_3.txt` and `data_4.txt`. You should run these next two cells a few times to get a sense of the variability of the outcome.
###Code
run_multiclass_perceptron('data_3.txt')
run_multiclass_perceptron('data_4.txt')
###Output
Number of classes: 3
Perceptron algorithm: iterations until convergence: 53
###Markdown
3. Experiments with multiclass SVM Now let's see how multiclass SVM fares on these same data sets. We start with an analog of the **run_multiclass_perceptron** function. The key difference is that the SVM version, **run_multiclass_svm**, takes a second parameter: the regularization constant `C` in the convex program of the soft-margin SVM.
###Code
from sklearn.svm import SVC, LinearSVC
def run_multiclass_svm(datafile,C_value=1.0):
data = np.loadtxt(datafile)
n,d = data.shape
# Create training set x and labels y
x = data[:,0:2]
y = data[:,2]
k = int(max(y)) + 1
print("Number of classes: ", k)
# Train an SVM
clf = LinearSVC(loss='hinge', multi_class='crammer_singer', C=C_value)
clf.fit(x,y)
# Show the data and boundary
pred_fn = lambda p: clf.predict(p.reshape(1,-1))
display_data_and_boundary(x,y,pred_fn)
###Output
_____no_output_____
###Markdown
Let's run this on the two data sets `data_3.txt` and `data_4.txt` that we saw earlier. Try playing with the second parameter to see how the decision boundary changes. You should try values like `C = 0.01, 0.1, 1.0, 10.0, 100.0`.
###Code
for c in [0.01, 0.1, 1.0, 10.0, 100.0]:
print("C: {}".format(c))
run_multiclass_svm('data_3.txt',c)
for c in [0.01, 0.1, 1.0, 10.0, 100.0]:
print("C: {}".format(c))
run_multiclass_svm('data_4.txt',c)
###Output
C: 0.01
Number of classes: 3
###Markdown
For you to think about: How would you summarize the effect of varying `C`? The final experiment is with the famous IRIS data set. This is four-dimensional data with three labels, but we will pick just two of the features, as a consequence of which the problem is not linearly separable. Thus the Perceptron algorithm would never converge. The soft-margin SVM obtains a reasonable solution, however.
###Code
# Load IRIS data
from sklearn import datasets
iris = datasets.load_iris()
x = iris.data
y = iris.target
# Select just two of the four features
features = [1,3]
x = x[:,features]
# Train SVM
clf = LinearSVC(loss='hinge', multi_class='crammer_singer')
clf.fit(x,y)
pred_fn = lambda p: clf.predict(p.reshape(1,-1))
display_data_and_boundary(x,y,pred_fn)
###Output
_____no_output_____ |
5_Cluster interpretation.ipynb | ###Markdown
For selected variables-kmeans
###Code
selected_cluster ='kmean_cluster_14'
summary_kmeans_cluster_selected_df = kmeans_cluster_selected_df.groupby(selected_cluster).agg([np.median])#, np.std])
summary_kmeans_cluster_selected_df['count']=kmeans_cluster_selected_df.groupby(selected_cluster).size()/len(kmeans_cluster_selected_df)*100
round(summary_kmeans_cluster_selected_df.transpose(),2)#.to_csv('kmean_N=14_median.csv')
kmeans_cluster_selected_df[['tripDuration','tripDistance_miles',selected_cluster]].groupby(selected_cluster).agg([np.sum]).transpose()#.to_csv('kmeans_N=14_sum.csv')
###Output
_____no_output_____
###Markdown
breaking down trip cluster
###Code
social_trips = kmeans_cluster_selected_df.loc[
kmeans_cluster_selected_df['kmean_cluster_14'].isin([3,13])]
social_trips.pivot_table(index=['kmean_cluster_14'], columns='weekend_trip', aggfunc='size', fill_value=0)#.to_csv('cluster_3_13.csv')
social_trips[['kmean_cluster_14','weekend_trip','tripDuration','tripDistance_miles']].groupby(
['kmean_cluster_14','weekend_trip']).agg(np.sum)#.to_csv('cluster_3_13_summary.csv')
###Output
_____no_output_____
###Markdown
Selected variables for GMM
###Code
# #cluster data
# gmm_cluster_selected_df = pd.read_csv('../temp/gmm_clusters_withFlags_PCA_selectedVariable.csv')
# selected_cluster ='gmm_cluster_9'
# summary_gmm_cluster_selected_df = gmm_cluster_selected_df.groupby(selected_cluster).agg([np.mean])#, np.std])
# summary_gmm_cluster_selected_df['count']=gmm_cluster_selected_df.groupby(selected_cluster).size()/len(gmm_cluster_selected_df)*100
# round(summary_gmm_cluster_selected_df.transpose(),2)
###Output
_____no_output_____
###Markdown
plotting graphs
###Code
def plot_graph(cluster_df,selected_cluster,variable_to_plot,title,ax,plot_upper_bound, x_label):
#PURPOSE: Plot kdeplot of a single variable for all cluster type
#INPUT
####ax = axis of gridspec
####title: title of the plot
####cluster_df: Dataframe with cluster id
####selected_cluster: cluster to plot
####variable_to_plot: name of the variable to plot
####plot_upper_bound: upper bound of the plot
#OUTPUT: plt plot
#fig, ax = plt.subplots(figsize=(6,4))
#run a loop for all the sorted unique values in cluster
for i in sorted(cluster_df[selected_cluster].unique()):
#plot kdeplot for each clusters in same plot
sns.kdeplot(cluster_df[cluster_df[selected_cluster]==i][variable_to_plot],
label=str('C'+str(i)), ax=ax)
ax.set_title(title, fontsize=10)
ax.legend(fontsize=8, ncol=2)
ax.axis([0, plot_upper_bound, None, None])
ax.set_ylabel('Probability', fontsize=9)
ax.set_xlabel(x_label, fontsize=9)
ax.tick_params(axis='both', which='major', labelsize=8)
#return plt
selected_cluster='kmean_cluster_14'
fig= plt.figure(figsize=(7,7))
#fig.suptitle('K mean cluster 10')
gspec = gridspec.GridSpec(2,2)
ax1 = plt.subplot(gspec[0,0])
ax2 = plt.subplot(gspec[0,1])
ax3 = plt.subplot(gspec[1,0])
ax4 = plt.subplot(gspec[1,1])
# ax5 = plt.subplot(gspec[2,0])
# ax6 = plt.subplot(gspec[2,1])
ax1=plot_graph(kmeans_cluster_selected_df,selected_cluster,'tripDistance_miles','Trip Distance in miles',ax1,1,'Distance (miles)')
ax2=plot_graph(kmeans_cluster_selected_df,selected_cluster,'tripDuration','Trip duration in minutes',ax2,30,'Duration (minutes)')
ax3=plot_graph(kmeans_cluster_selected_df,selected_cluster,'route_directness','Route Directness',ax3,1,'Ratio')
ax4=plot_graph(kmeans_cluster_selected_df,selected_cluster,'average_trip_speed_mph','Average speed in mph',ax4,10,'Speed (mph)')
# ax5=plot_graph(kmeans_cluster_selected_df,selected_cluster,'destination_emp_density_perMile','Employment density at destination',ax5,20000)
# ax6=plot_graph(kmeans_cluster_selected_df,selected_cluster,'StartTime_Day','Start time at Night',ax6,1)
plt.subplots_adjust(hspace=0.3, wspace=0.3)
plt.savefig(str('results/'+selected_cluster+'_plot1.png'), dpi=900)
start_time = pd.pivot_table(kmeans_cluster_selected_df[['StartTime_AMPeak', 'StartTime_Day', 'StartTime_PMPeak',
'StartTime_Night',selected_cluster]], index=selected_cluster)*100
fig, ax= plt.subplots(figsize=(9,5))
bar_height_am = start_time[['StartTime_Day','StartTime_PMPeak','StartTime_Night']].sum(axis=1).tolist()
bar_height_day = start_time[['StartTime_PMPeak','StartTime_Night']].sum(axis=1).tolist()
bar_height_pm = start_time[['StartTime_Night']].sum(axis=1).tolist()
bar_height_night = 0
p1=plt.bar(start_time.index,start_time['StartTime_AMPeak'], bottom=bar_height_am, label='AM Peak')
p2=plt.bar(start_time.index,start_time['StartTime_Day'], bottom=bar_height_day, label='Day')
p3=plt.bar(start_time.index,start_time['StartTime_PMPeak'], bottom=bar_height_pm, label='PM Peak')
p4=plt.bar(start_time.index,start_time['StartTime_Night'], bottom=bar_height_night, label='Night')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Trip start time')
#ax.set_title('Percentage of trips by time of the day')
plt.xticks(np.arange(14), ('C0','C1','C2','C3','C4','C5','C6','C7','C8','C9','C10','C11','C12','C13'))
ax.legend(loc='center left', bbox_to_anchor=(1, 0.85), fontsize=9)
plt.xlabel('Clusters', fontsize=9)
plt.xlabel('Proportion of trips', fontsize=9)
ax.tick_params(axis='both', which='major', labelsize=9)
plt.tight_layout()
plt.savefig(str('results/'+selected_cluster+'_TimeDistribution.png'), dpi=900)
density_plot = pd.pivot_table(kmeans_cluster_selected_df[['origin_SHT_PRK_density', 'origin_LNG_PRK_density',
'destination_SHT_PRK_density', 'destination_LNG_PRK_density'
,selected_cluster]], index=selected_cluster)
# men_means, men_std = (20, 35, 30, 35, 27), (2, 3, 4, 1, 2)
# women_means, women_std = (25, 32, 34, 20, 25), (3, 5, 2, 3, 3)
ind = np.arange(len(density_plot.index)) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots(figsize=(9,5))
rects1 = ax.bar(ind - width/2, density_plot['origin_SHT_PRK_density'], width, label='Origin short term')
rects2 = ax.bar(ind - width/2, density_plot['origin_LNG_PRK_density'], width, bottom=density_plot['origin_SHT_PRK_density'], label='Origin long term')
# rects2 = ax.bar(ind + width/2, density_plot['origin_LNG_PRK_density'], width, label='Origin long term')
rects3 = ax.bar(ind + width/2, density_plot['destination_SHT_PRK_density'], width, label='Destination short term')
rects4 = ax.bar(ind + width/2, density_plot['destination_LNG_PRK_density'], width, bottom=density_plot['destination_SHT_PRK_density'], label='Destination long term')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Parking density (number per sq. mile)')
#ax.set_title('Parking availability of clusters at origin and destination')
ax.set_xticks(ind)
#ax.set_xticklabels(('G1', 'G2', 'G3', 'G4', 'G5'))
ax.legend(loc='center left', bbox_to_anchor=(1, 0.85))
plt.tight_layout()
plt.savefig(str('results/'+selected_cluster+'_plot3.png'), dpi=900)
landuse_type_plot = pd.pivot_table(kmeans_cluster_selected_df[['origin_CBD', 'origin_URBAN', 'origin_SU', 'origin_RURAL',
'destination_CBD', 'destination_URBAN', 'destination_SU',
'destination_RURAL',selected_cluster]], index=selected_cluster)*100
fig, ax = plt.subplots(figsize=(9,5))
ind = np.arange(len(landuse_type_plot.index)) # the x locations for the groups
width = 0.35 # the width of the bars
bar_height_origin_CBD = landuse_type_plot[[ 'origin_URBAN', 'origin_SU', 'origin_RURAL']].sum(axis=1).tolist()
bar_height_origin_URBAN = landuse_type_plot[[ 'origin_SU', 'origin_RURAL']].sum(axis=1).tolist()
bar_height_origin_SU = landuse_type_plot[[ 'origin_RURAL']].sum(axis=1).tolist()
bar_height_origin_RURAL = 0
rects_o1 = ax.bar(ind - width/2, landuse_type_plot['origin_CBD'], width, bottom=bar_height_origin_CBD, label='Origin CBD')
rects_o2 = ax.bar(ind - width/2, landuse_type_plot['origin_URBAN'], width, bottom=bar_height_origin_URBAN, label='Origin urban')
rects_o3 = ax.bar(ind - width/2, landuse_type_plot['origin_SU'], width, bottom=bar_height_origin_SU, label='Origin sub-urban')
rects_o4 = ax.bar(ind - width/2, landuse_type_plot['origin_RURAL'], width, bottom=bar_height_origin_RURAL, label='Origin rural')
# rects2 = ax.bar(ind + width/2, density_plot['origin_LNG_PRK_density'], width, label='Origin long term')
bar_height_dest_CBD = landuse_type_plot[['destination_URBAN', 'destination_SU','destination_RURAL']].sum(axis=1).tolist()
bar_height_dest_URBAN = landuse_type_plot[['destination_SU','destination_RURAL']].sum(axis=1).tolist()
bar_height_dest_SU = landuse_type_plot[[ 'destination_RURAL']].sum(axis=1).tolist()
bar_height_dest_RURAL = 0
rects_d1 = ax.bar(ind + width/2, landuse_type_plot['destination_CBD'], width, bottom=bar_height_dest_CBD, label='Origin CBD')
rects_d2 = ax.bar(ind + width/2, landuse_type_plot['destination_URBAN'], width, bottom=bar_height_dest_URBAN, label='Origin urban')
rects_d3 = ax.bar(ind + width/2, landuse_type_plot['destination_SU'], width, bottom=bar_height_dest_SU, label='Origin sub-urban')
rects_d4 = ax.bar(ind + width/2, landuse_type_plot['destination_RURAL'], width, bottom=bar_height_dest_RURAL, label='Origin rural')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Percentage of landuse (by type)')
ax.set_title('Origin and destination landuse by clusters')
ax.set_xticks(ind)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.85))
landuse_type_plot
###Output
_____no_output_____ |
curso/4 - Machine Learning/Trees/Gradient Boost.ipynb | ###Markdown
GB
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
df = pd.read_csv(r'C:\Users\rodri\GitHub\My_Projects\1 Aulas Data Science\4 - Machine Learning\Regressao Linear/autompg-dataset.zip')
df.head()
df = df[df.horsepower != '?']
df.drop('car name', axis=1, inplace=True)
from sklearn.model_selection import train_test_split
X = df.drop(['mpg'], axis=1)
y = df['mpg']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import metrics
# Instantiate gb
gb = GradientBoostingRegressor(learning_rate=0.01,
n_estimators=500,
random_state=42)
# Instantiate sgbr
sgbr = GradientBoostingRegressor(max_depth=4,
subsample=0.9,
max_features=0.75,
learning_rate=0.01,
n_estimators=500,
random_state=42)
# Fit gb to the training set
gb.fit(X_train, y_train)
sgbr.fit(X_train, y_train)
y_pred = gb.predict(X_test)
print('MAE:', metrics.mean_absolute_error(y_test, y_pred))
print('MSE:', metrics.mean_squared_error(y_test, y_pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
y_pred = sgbr.predict(X_test)
print('MAE:', metrics.mean_absolute_error(y_test, y_pred))
print('MSE:', metrics.mean_squared_error(y_test, y_pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
sgbr.feature_importances_
# Create a pd.Series of features importances
importances = pd.Series(data=sgbr.feature_importances_,
index= X_train.columns)
# Sort importances
importances_sorted = importances.sort_values()
# Draw a horizontal barplot of importances_sorted
importances_sorted.plot(kind='barh', color='lightgreen')
plt.title('Features Importances')
plt.show()
df = pd.read_csv(r'C:\Users\rodri\GitHub\My_Projects\1 Aulas Data Science\Data Sets/breast-cancer-wisconsin-data.zip')
df.head()
d={'M':1,'B':0}
df['diagnosis'] = df['diagnosis'].map(d)
df['diagnosis'].value_counts()
from sklearn.model_selection import train_test_split
features = df.drop(['id', 'diagnosis', 'Unnamed: 32'], axis=1)
targets = df['diagnosis']
X_train, X_test, y_train, y_test = train_test_split(features, targets, test_size=0.30, stratify=targets,random_state=1)
# Instantiate gb
gb = GradientBoostingClassifier(learning_rate=0.01,
n_estimators=350,
random_state=42)
# Instantiate sgbr
sgbr = GradientBoostingClassifier(subsample=0.8,
max_features=0.70,
learning_rate=0.001,
n_estimators=750,
random_state=42)
# Fit gb to the training set
gb.fit(X_train, y_train)
sgbr.fit(X_train, y_train)
# Import accuracy_score
from sklearn.metrics import accuracy_score
# Predict test set labels
y_pred = gb.predict(X_test)
# Compute test set accuracy
acc = accuracy_score(y_test, y_pred)
print("Test set accuracy: {:.2f} %".format(acc*100))
# Predict test set labels
y_pred = sgbr.predict(X_test)
# Compute test set accuracy
acc = accuracy_score(y_test, y_pred)
print("Test set accuracy: {:.2f} %".format(acc*100))
y_pred_proba = gb.predict_proba(X_test)
y_pred_proba
y_pred
###Output
_____no_output_____ |
in-class/week-9-working-with-PDFs_BLANKS.ipynb | ###Markdown
Tables scattered in PDFs
As it is, PDFs are notoriously obnoxious. They are designed so people can't change them easily. PDFs that hold tables are pretty much the worst. You might have worked with the Tabula GUI to extract tables from PDFs, but there's a lot of manual work involved. To automate the process, we'll use the **Tabula Python Library**. There's NO satisfaction guarantee, but at least it's a way to try to tackle PDFs with tables.
THE SETUP
###Code
## !pip install tabula-py.
## it is not part of the standard Colab library
pip install -q tabula-py
###Output
_____no_output_____
###Markdown
You don't need to run this in Colab, but you do in Jupyter. Just uncomment it in Jupyter:
###Code
# pip install install-jdk
## import tabula
## check it's versioning
import tabula
tabula.environment_info() ## not need always
## import some libraries we need
import pandas as pd ## pandas to work with data
## in order to export our file to our computer drive, you need this only in Colab:
# from google.colab import files
### COLAB ONLY
## import colab's file uploader
files.upload()
## Let's pull in our first pdf with a single page, single table
## WHAT TYPE OF DATA?
## let's get the first table
## look at it
## WHAT TYPE OF DATA?
## Export and download as CSV file
###Output
_____no_output_____
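###Markdown
One way the cell above could be filled in (a sketch only; `single_table.pdf` is an assumed placeholder for whatever PDF you uploaded):
###Code
## read_pdf returns a Python list of pandas DataFrames, one per detected table
tables = tabula.read_pdf("single_table.pdf", pages=1)
print(type(tables))          # list
first_table = tables[0]      # the first (and only) table as a DataFrame
first_table.head()
## export the DataFrame as CSV and download it (files.download is Colab-only)
first_table.to_csv("first_table.csv", index=False)
files.download("first_table.csv")
###Output
_____no_output_____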
###Markdown
Multiple pages/ Multiple tables
###Code
## let's pull in our a file with multiple pages and tables
## we target the table on the first and second page.
## let's get the first table
## we are pulling in a range of pages
## IMPORTANT: There are no spaces in "1-2,4"
###Output
_____no_output_____
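###Markdown
Again a sketch, with `multi_page.pdf` as an assumed placeholder file name; `pages` accepts page ranges and comma-separated lists:
###Code
## one DataFrame per table found on the selected pages -- note: no spaces in "1-2,4"
tables = tabula.read_pdf("multi_page.pdf", pages="1-2,4")
print(len(tables))
tables[0].head()
###Output
_____no_output_____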
###Markdown
Homework for week 9
- Multi-page, Multi-table all zipped together
- Look at week 9 Foundational Multi-page, Multi-table Campaign contribution demo
###Code
## path to our "campaign_contribs.pdf" PDF
## get all the pages
## confirm we have the correct number of tables. should have 4 tables
## let's get the 3rd table
## create a function to download each table as CSV
## call the function
## write function to combine tabula tables into a single csv
## call the function
###Output
_____no_output_____
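###Markdown
A sketch of how the last two steps could look, assuming `tables` holds the list of DataFrames returned by `tabula.read_pdf("campaign_contribs.pdf", pages="all")`:
###Code
def download_tables_as_csv(tables):
    ## save each extracted table to its own CSV and download it (Colab only)
    for i, table in enumerate(tables):
        name = "table_{}.csv".format(i)
        table.to_csv(name, index=False)
        files.download(name)

def combine_tables_to_csv(tables, filename="combined_tables.csv"):
    ## stack all extracted tables into one DataFrame and write a single CSV
    combined_df = pd.concat(tables, ignore_index=True)
    combined_df.to_csv(filename, index=False)
    return combined_df
###Output
_____no_output_____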
###Markdown
Reality Check
###Code
## import who_covid.pdf
##look at it
## table on page 3
##table on page 4
###Output
_____no_output_____ |
jupyter/Cloud Pak for Data v3.0.x/ExtendWMLSoftwareSpec.ipynb | ###Markdown
Adding Software Specification to WML
This notebook's goal is to showcase how you can extend WML's existing Software Specification and how it can be useful for your Decision Optimization experiment. In this example, a new Python package is added and used in the optimization model.
Importing and instantiating WML
The first step is to add WML to your notebook. It is not imported by default, so you will have to add it manually. To do so, you can use the following code:
###Code
from watson_machine_learning_client import WatsonMachineLearningAPIClient
###Output
_____no_output_____
###Markdown
If you want to code on your own computer, you must install the WML client using pip. To do so, use the following command: `!pip install -i https://test.pypi.org/simple/ watson-machine-learning-client-V4==1.0.51`. This step, however, is not necessary when coding within a notebook created from the platform. Once imported, you will have to instantiate the WML client using your credentials; this is necessary in order to use it. To do so, use the WML client constructor `WatsonMachineLearningAPIClient(wml_credentials)` with a JSON `wml_credentials` describing all the necessary information, that is:
```
wml_credentials = {
    "username": "",
    "password": "",
    "instance_id": "",
    "url": "",
    "version": ""   # not mandatory
}
```
###Code
# Instantiating WML's client using our credentials. Don't forget to change this part with your own credentials!
# You might want to modify the instance_id, depending on where you are using this example
wml_credentials = {
"username":"xxx",
"password":"xxx",
"instance_id" : "wml_local",
"url": "xxx",
"version": "3.0.0"
}
client = WatsonMachineLearningAPIClient(wml_credentials)
###Output
_____no_output_____
###Markdown
In order to work, WML must also be given what's called a `space`, that is, the place where your model will be deployed. You might already have created a few spaces. You can check whether that is the case using the following code; it will display all the spaces you have created that currently exist on the platform.
###Code
def guid_from_space_name(client, space_name):
space = client.spaces.get_details()
return(next(item for item in space['resources'] if item['entity']["name"] == space_name)['metadata']['guid'])
client.spaces.list()
###Output
_____no_output_____
###Markdown
You can then find one space that you wish to use, and execute the following code to tell WML to use it.
###Code
# Select a space from the list displayed above
space_id = guid_from_space_name(client,"xxx")
client.set.default_space(space_id)
###Output
_____no_output_____
###Markdown
If you don't have any deployment spaces available, you must create one and use it. To do so, use the following code: `client.set.default_space(meta_props={client.spaces.ConfigurationMetaNames.NAME: "sample_space"})["metadata"]["guid"]`
Creating a simple package extension
For the purpose of this demonstration, you will create a very simple package extension that installs the pip package called `hello_demo`. Of course, feel free to replace that with whatever you might need. The first step is to write a small `yaml` file, here named `main.yml`, for this package extension, like so:
###Code
%%writefile main.yml
name: do_example
channels:
- defaults
dependencies:
- pip:
- hello_demo
###Output
_____no_output_____
###Markdown
Once done, you can store it in the package extensions using `client.package_extensions.store(meta_props=meta_prop_pkg_ext, file_path="/home/wsuser/work/main.yml")`. You can also store the uid of the extension for later usage, using `client.package_extensions.get_uid()`.
###Code
# These first few lines, makes the name of the package unique using the current time
import time
current_time = time.asctime()
meta_prop_pkg_ext = {
client.package_extensions.ConfigurationMetaNames.NAME: "conda_ext_" + current_time,
client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "Pkg extension for conda",
client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml",
}
# Storing the package and saving it's uid
pkg_ext_id = client.package_extensions.get_uid(client.package_extensions.store(meta_props=meta_prop_pkg_ext,
file_path="/home/wsuser/work/main.yml"))
###Output
_____no_output_____
###Markdown
Extend the existing DO V12.10 software specification with the package extension created previously
You now want to create a DO model that uses the pip package from the package extension. First, create a new model that prints the pip package version number. The model `main.py` will be:
```
import hello_demo
print("hello_demo version: " + hello_demo.__version__)
```
###Code
%mkdir -p model
%%writefile model/main.py
import hello_demo
print("hello_demo version: " + hello_demo.__version__)
###Output
_____no_output_____
###Markdown
You now need to compress the model directory you created with tar, so that it can be deployed in WML. That is what the next cell does.
###Code
# Creating a tar from the model you just created
import tarfile
def reset(tarinfo):
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = "root"
return tarinfo
tar = tarfile.open("model.tar.gz", "w:gz")
tar.add("model/main.py", arcname="main.py", filter=reset)
tar.close()
###Output
_____no_output_____
###Markdown
Done! This model is ready for use! Since the model uses a custom pip package that is not available by default in DO V12.10, you need to extend its software specification. To do so, use the following code. It will create an extension of the current DO V12.10 specification and add the package you previously created, making the `hello_demo` package available to your model.
###Code
# Look for the do_12.10 software specification
base_sw_id = client.software_specifications.get_uid_by_name("do_12.10")
# Create a new software specification using the default do_12.10 one as the base for it
meta_prop_sw_spec = {
client.software_specifications.ConfigurationMetaNames.NAME: "do_12.10_ext_"+current_time,
client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "Software specification for DO example",
client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {"guid": base_sw_id}
}
sw_spec_id = client.software_specifications.get_uid(client.software_specifications.store(meta_props=meta_prop_sw_spec)) # Creating the new software specification
client.software_specifications.add_package_extension(sw_spec_id, pkg_ext_id) # Adding the previously created package extension to it
###Output
_____no_output_____
###Markdown
Time to test everything! You can now store your model in WML, deploy it and then run it. When storing the model, you must specify the new software specification to use, i.e. the one you just created. As you can see, its ID is added to the metadata used to store the model, under `client.repository.ModelMetaNames.SOFTWARE_SPEC_UID`.
###Code
# Storing it with custom metadata, feel free to change this part...
mnist_metadata = {
client.repository.ModelMetaNames.NAME: "xxx",
client.repository.ModelMetaNames.DESCRIPTION: "xxx",
client.repository.ModelMetaNames.TYPE: "do-docplex_12.10",
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id
}
model_details = client.repository.store_model(model='/home/wsuser/work/model.tar.gz', meta_props=mnist_metadata)
model_uid = client.repository.get_model_uid(model_details)
print(model_uid)
# Deploying the model
meta_props = {
client.deployments.ConfigurationMetaNames.NAME: "xxx",
client.deployments.ConfigurationMetaNames.DESCRIPTION: "xxx",
client.deployments.ConfigurationMetaNames.BATCH: {},
client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {"name" : "S", "num_nodes":1 }
}
deployment_details = client.deployments.create(model_uid, meta_props=meta_props)
deployment_uid = client.deployments.get_uid(deployment_details)
print(deployment_uid)
###Output
_____no_output_____
###Markdown
The next few cells create the WML job for this model and wait for it to be solved. Once solved, logs are displayed, where you should see the `hello_demo` version number.
###Code
solve_payload = {
client.deployments.DecisionOptimizationMetaNames.SOLVE_PARAMETERS: {
"oaas.logTailEnabled":"true"
}
}
job_details = client.deployments.create_job(deployment_uid, solve_payload)
job_uid = client.deployments.get_job_uid(job_details)
print(job_uid)
from time import sleep
while job_details['entity']['decision_optimization']['status']['state'] not in ['completed', 'failed', 'canceled']:
print(job_details['entity']['decision_optimization']['status']['state'] + '...')
sleep(5)
job_details=client.deployments.get_job_details(job_uid)
print( job_details['entity']['decision_optimization']['solve_state']['latest_engine_activity'])
###Output
_____no_output_____ |
trigger_word.ipynb | ###Markdown
save model
###Code
model.save('/content/drive/MyDrive/phoebe/trigger_word_model.h5')
###Output
_____no_output_____
###Markdown
Load model and test it on an audio image, transformed into an array of numbers
###Code
import tensorflow as tf
new_model = tf.keras.models.load_model('/content/drive/MyDrive/phoebe/trigger_word_model.h5')
#use 1 validation audio for testing
from keras.preprocessing import image
path = '/content/drive/MyDrive/phoebe/phoebe_chunk_val_spec/class1/audio00.png'
img=image.load_img(path, target_size=(150, 150))
x=image.img_to_array(img)
x=np.expand_dims(x, axis=0)
#images = np.vstack([x])
print(new_model.predict(x))
###Output
_____no_output_____ |
Model_Seq2Seq_Attention/Seq2Seq-train.ipynb | ###Markdown
Create Model with specified optimizer and loss function
###Code
model = Seq2SeqAttention(config, len(dataset.vocab), dataset.word_embeddings)
if torch.cuda.is_available():
model.cuda()
model.train()
optimizer = optim.SGD(model.parameters(), lr=config.lr)
NLLLoss = nn.NLLLoss()
model.add_optimizer(optimizer)
model.add_loss_op(NLLLoss)
train_losses = []
val_accuracies = []
def train():
for i in range(config.max_epochs):
print ("Epoch: {}".format(i))
train_loss,val_accuracy = model.run_epoch(dataset.train_iterator, dataset.val_iterator, i)
train_losses.append(train_loss)
val_accuracies.append(val_accuracy)
%time train()
def calc_true_and_pred(model, iterator):
all_preds = []
all_y = []
for idx,batch in enumerate(iterator):
if torch.cuda.is_available():
x = batch.text.cuda()
else:
x = batch.text
y_pred = model(x)
predicted = torch.max(y_pred.cpu().data, 1)[1] + 1
all_preds.extend(predicted.numpy())
all_y.extend(batch.label.numpy())
return all_y, all_preds
all_y, all_preds = calc_true_and_pred(model, dataset.test_iterator)
###Output
_____no_output_____
###Markdown
accuracy score
###Code
test_acc = accuracy_score(all_y, np.array(all_preds).flatten())
print ('Final Test Accuracy: {:.4f}'.format(test_acc))
###Output
_____no_output_____
###Markdown
classification report
###Code
report = classification_report(all_y, np.array(all_preds).flatten())
print(report)
###Output
_____no_output_____ |
week_02/column_transformers/ColumnTransformers.ipynb | ###Markdown
Feature Engineering II: putting things together
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler
from sklearn.pipeline import make_pipeline
###Output
_____no_output_____
###Markdown
1. Load the data
###Code
df = pd.read_csv('penguins_simple.csv', sep=';')
df.shape
###Output
_____no_output_____
###Markdown
2. Train-Test Split
###Code
X = df.iloc[:, 1:]
y = df['Species']
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=42)
Xtrain.shape, Xtest.shape, ytrain.shape, ytest.shape
###Output
_____no_output_____
###Markdown
3. Define a ColumnTransformer
###Code
pipeline = make_pipeline(
SimpleImputer(strategy='most_frequent'),
MinMaxScaler()
)
"""
使用方法:
输入一连串数据挖掘步骤,最后一步必须是估计器,前几步是转换器
输入的数据集经过转换器的处理后,输出的结果作为下一步的输入
最后用于估计器进行分类
每一步都是元祖(‘名称’,步骤)来表示
流水线功能:
跟踪记录各步骤操作
对各步骤进行封装
确保代码的复杂程度不至于超出掌控范围
"""
trans = ColumnTransformer([
('kristians_onehot', OneHotEncoder(sparse=False, handle_unknown='ignore'), ['Sex']),
('kristians_scale', MinMaxScaler(), ['Body Mass (g)', 'Culmen Depth (mm)']),
('impute_then_scale', pipeline, ['Flipper Length (mm)']),
('do_nothing', 'passthrough', ['Culmen Length (mm)']),
])
###Output
_____no_output_____
###Markdown
4. fit + transform training data
###Code
trans.fit(Xtrain)
Xtrain_transformed = trans.transform(Xtrain) # result is a single numpy array
Xtrain_transformed.shape
###Output
_____no_output_____
###Markdown
5. fit a LogReg model
###Code
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(max_iter=1000)
model.fit(Xtrain_transformed, ytrain)
###Output
_____no_output_____
###Markdown
6. transform test data
###Code
Xtest_transform = trans.transform(Xtest)
Xtest_transform.shape
###Output
_____no_output_____
###Markdown
7. predict
###Code
ypred = model.predict(Xtest_transform)
ypred[:5]
###Output
_____no_output_____ |
Problemset 07 - CIFAR-10 Multimodal data.ipynb | ###Markdown
Exercise 1: Load CIFAR-10 and create `info` data
Part A: Load CIFAR-10
Download the dataset from https://www.cs.toronto.edu/~kriz/cifar.html and create a function to return selected batches
###Code
def load_cifar(batches = [1,2,3,4,5]):
cifar10_dir ="mnt/cifar-10-batches-py/cifar-10-batches-py/" # this is where you downloaded CIFAR10
data_cifar={"data":np.r_, "labels":[]}
def unpickle(file):
import cPickle
with open(file, 'rb') as fo:
dict = cPickle.load(fo)
return dict
    data1=unpickle(cifar10_dir+"data_batch_%d"%batches[0])
imgs=data1["data"]
labels=data1["labels"]
if len(batches)>1:
for batch in batches[1:]:
data2=unpickle(cifar10_dir+"data_batch_%d"%batch)
data_imgs=data2["data"]
data_labels=data2["labels"]
labels+=data_labels
imgs=np.concatenate((imgs,data_imgs))
def onehot_labels(labels):
return np.eye(10)[labels]
imgs = imgs.reshape(-1,3,32,32).transpose(0,2,3,1)
labels = np.r_[labels]
ohlabs = onehot_labels(labels)
return imgs, labels, ohlabs
def train_test_split(imgs, labels, ohlabs, train_pct=.8, shuffle=True):
train_size=int(len(imgs)*train_pct)
print(train_size)
if shuffle:
idxs=np.random.permutation(range(len(imgs)))
else:
idxs=np.arange((len(imgs)))
train_imgs = imgs[idxs[:train_size]]
train_ohlabs = ohlabs[idxs[:train_size]]
train_labels = labels[idxs[:train_size]]
test_imgs = imgs[idxs[train_size:]]
test_ohlabs = ohlabs[idxs[train_size:]]
test_labels = labels[idxs[train_size:]]
return train_imgs, train_labels, train_ohlabs, test_imgs, test_labels, test_ohlabs
print "free mem", humanbytes(psutil.virtual_memory().free)
###Output
free mem 1.18 GB
###Markdown
**We will use only batch 1.** Create variables as in previous problemsets and create additional **INFO** variables with the new information. Data shapes must be as follows:

    imgs         (10000, 32, 32, 3)
    labels       (10000,)
    onehot       (10000, 10)
    train_imgs   (8000, 32, 32, 3)
    train_labels (8000,)
    train_ohlabs (8000, 10)
    train_info   (8000,)
    test_imgs    (2000, 32, 32, 3)
    test_labels  (2000,)
    test_ohlabs  (2000, 10)
    test_info    (2000,)
###Code
imgs, labels, ohlabs = load_cifar(batches = [1,2,3,4,5])
d = train_test_split(imgs, labels, ohlabs,)
train_imgs, train_labels, train_ohlabs, test_imgs, test_labels, test_ohlabs = d
print "imgs ", imgs.shape
print "labels", labels.shape
print "onehot", ohlabs.shape
print "train_imgs ", train_imgs.shape
print "train_labels", train_labels.shape
print "train_ohlabs", train_ohlabs.shape
print "test_imgs ", test_imgs.shape
print "test_labels ", test_labels.shape
print "test_ohlabs ", test_ohlabs.shape
gc.collect()
print "free mem", humanbytes(psutil.virtual_memory().free)
###Output
40000
imgs (50000, 32, 32, 3)
labels (50000,)
onehot (50000, 10)
train_imgs (40000, 32, 32, 3)
train_labels (40000,)
train_ohlabs (40000, 10)
test_imgs (10000, 32, 32, 3)
test_labels (10000,)
test_ohlabs (10000, 10)
free mem 755.23 MB
###Markdown
Part B: Generate info We will add **ONE NEURON** at the last **fully connected layer** with the following values:- **+1** if the image corresponds to a **LIVING THING**- **-1** otherwise Complete the following function to transform the `labels` numpy array passed as argument using this convention, such as [3, 1, 7, 9] --> [1, -1, 1, -1]
###Code
labcode= np.r_[[ 0 , 1, 2, 3, 4, 5, 6, 7, 8, 9 ]]
cnames = np.r_[["plane", "car", "bird", "cat", "deer", "dog", "frog", "horse", "boat", "truck"]]
info = np.r_[[ -1, -1, 1, 1, 1, 1, 1, 1, -1, -1 ]]
def get_info(labels):
    labels_info = info[labels]
return labels_info
###Output
_____no_output_____
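###Markdown
A quick sanity check with a made-up list of labels (3=cat, 1=car, 7=horse, 9=truck) should reproduce the example above, i.e. [1, -1, 1, -1]:
###Code
get_info(np.r_[3, 1, 7, 9])
###Output
_____no_output_____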
###Markdown
so now we create the info variables for train and test
###Code
train_info = get_info(train_labels)
test_info = get_info(test_labels)
print train_info.shape
###Output
(40000,)
###Markdown
Exercise 2: Build `tflearn` model Use the same network as in the previous problemset **BUT** add an additional input neuron at the OUTPUT layer:

| layer | input_size | output_size | filter_size | stride | n_filters | activation | var sizes | params |
| ------- |:-----------:|:-----------:|:------------:|:------:|:---------:|:--------:|:--------------:|:---:|
| conv1 | 32x32x3 | 32x32x15 | 5x5 | 1 | 15 | relu | W1 = [5,5,3,15] b = [15] | - |
| conv2 | 32x32x15 | 16x16x18 | 5x5 | 2 | 18 | relu | W2 = [5,5,15,18] b = [18] | - |
| conv3 | 16x16x18 | 8x8x20 | 3x3 | 2 | 20 | relu | W3 = [3,3,18,20] b = [20] | - |
| maxpool | 8x8x20 | 4x4x20 | | | | | | k = 2 |
| fc | 4x4x20 | 100 | | | | relu | W4 = [320,100] b = [100] | - |
| dropout | 100 | 100 | | | | | | pkeep = .75 |
| output **+ INFO** | **101** | 10 | | | | softmax | W5 = [**101**,10] b = [10] | - |
###Code
def get_TF_graph(pkeep=.75):  # dropout keep probability, per the table above
tf.reset_default_graph()
with tf.name_scope("data"):
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
X_info = tf.placeholder(tf.float32, [None, 1])
Y = tf.placeholder(tf.float32, [None, 10])
network1 = tf.reshape(X, [-1, 32, 32, 3])
with tf.name_scope("layers"):
num_classes=10
network2 = conv_2d(network1, 15, 5, strides=1, activation='relu', name="conv1", padding="SAME")
network3 = conv_2d(network2, 18, 5, strides=2, activation='relu', name="conv2", padding="SAME")
network4 = conv_2d(network3, 20, 3, strides=2, activation='relu', name="conv3", padding="SAME")
network5 = max_pool_2d(network4, 2, strides=2, name="conv3", padding="SAME")
network6 = fully_connected(network5, 100, activation='relu', name='fc1')
network7 = dropout(network6, pkeep, name='dropout')
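        # concatenate the extra INFO input with the 100 fc activations -> 101 inputs for the output layer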
network8 = tf.concat((X_info, network7), axis=1)
y_hat = fully_connected(network8, num_classes, activation='softmax', name='fc2')
with tf.name_scope("loss"):
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=y_hat, labels=Y)
loss = tf.reduce_mean(cross_entropy)*100
with tf.name_scope("accuracy"):
correct_pred = tf.equal(tf.argmax(y_hat, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
with tf.name_scope("optimizer"):
train_step = tf.train.AdamOptimizer().minimize(loss)
return X, Y, X_info, y_hat, loss, train_step, accuracy, network2
print "free mem", humanbytes(psutil.virtual_memory().free)
X, Y, X_info, y_hat, loss, train_step, accuracy, network2 = get_TF_graph()
vars = {i.name:i for i in tflearn.variables.get_all_trainable_variable()}
vars
###Output
WARNING:tensorflow:From /opt/miniconda/lib/python2.7/site-packages/tflearn/initializations.py:119: __init__ (from tensorflow.python.ops.init_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.initializers.variance_scaling instead with distribution=uniform to get equivalent behavior.
###Markdown
Exercise 3: implement optimization loop
###Code
model_name = "cnn_cifar10_info_" + datetime.now().strftime("%m-%d_%H:%M")
print model_name
def fit (X, Y, X_info,
X_train, y_train, X_train_info, X_test, y_test, X_test_info,
model_name, loss, train_step, accuracy,
batch_size, n_epochs, log_freq):
saver = tf.train.Saver()
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
log_train, log_test = [], []
# Start training
from rlx.ml import Batches
from time import time
with tf.Session() as sess:
from datetime import datetime
model_name = model_name + "_" + datetime.now().strftime("%Y-%m-%d_%H:%M")
sess.run(init)
step = 1
t1 = time()
for epoch in range(1,n_epochs+1):
ndata = 0
for batch_X, batch_X_info, batch_y in Batches([X_train, X_train_info, y_train],
batch_size=batch_size,
n_epochs=1, shuffle=True).get():
ndata += len(batch_X)
# Run optimization
train_acc, _ = sess.run([accuracy, train_step],
feed_dict={X: batch_X, Y: batch_y,
X_info: batch_X_info.reshape(-1,1)})
t2 = time()
log_train.append([step, t2-t1, train_acc])
print "\repoch %3d/%d step %5d: train acc: %.4f, time: %.3f segs, %7d/%d"%(epoch, n_epochs, step, train_acc, (t2-t1), ndata, len(train_imgs)),
if step%log_freq==0:
idxs = np.random.permutation(range(len(X_test)))[:1000]
test_acc = sess.run(accuracy, feed_dict = {X: X_test[idxs],
Y: y_test[idxs],
X_info: X_test_info[idxs].reshape(-1,1)})
print "\repoch %3d/%d step %5d: train acc: %.4f, test acc: %.4f, time: %.3f segs, %d data items"%(epoch, n_epochs, step, train_acc, test_acc, (t2-t1), ndata)
log_test.append([step, t2-t1, test_acc])
saver.save(sess, "models/"+model_name+".tf")
step += 1
print("Optimization Finished!")
# Calculate accuracy for test images by batches (for low memory)
test_acc = 0
for batch_X, batch_X_info, batch_y in Batches([X_test, X_test_info, y_test], batch_size=batch_size, n_epochs=1).get():
test_acc += sess.run(accuracy, feed_dict={X: batch_X,
Y: batch_y,
X_info: batch_X_info.reshape(-1,1)})*len(batch_X)/len(X_test)
print("Testing Accuracy:", test_acc)
saver.save(sess, "models/"+model_name+".tf")
log_train = pd.DataFrame(log_train, columns=["step", "time", "accuracy"])
log_test = pd.DataFrame(log_test, columns=["step", "time", "accuracy"])
return log_train, log_test, model_name
def plot_results(log_train, log_test):
k = log_train.rolling(window=10).mean().dropna()
plt.plot(k.time, k.accuracy, color="blue", lw=2, label="train")
plt.plot(log_test.time, log_test.accuracy, color="red",lw=2, label="test")
plt.legend(loc="center left", bbox_to_anchor=(1,.5))
plt.plot(log_train.time, log_train.accuracy, alpha=.3, color="blue")
plt.grid()
plt.xlabel("elapsed time (secs)")
plt.ylabel("accuracy")
plt.axhline(0.5, color="black")
plt.xlim(0,log_train.time.max()+1)
plt.title("final train_acc=%.4f, test_acc=%.4f"%(log_train.accuracy.values[-1], log_test.accuracy.values[-1]))
###Output
_____no_output_____
###Markdown
train your model !!!
###Code
X, Y, X_info, y_hat, loss, train_step, accuracy, network2 = get_TF_graph()
print "free mem", humanbytes(psutil.virtual_memory().free)
log_train, log_test, model_name = fit(X, Y, X_info,
train_imgs, train_ohlabs, train_info,
test_imgs, test_ohlabs, test_info,
"extended_cnn_mnist", loss, train_step, accuracy,
batch_size=100, n_epochs=20, log_freq=400)
plot_results(log_train, log_test)
print "free mem", humanbytes(psutil.virtual_memory().free)
###Output
epoch 1/20 step 400: train acc: 0.3600, test acc: 0.3550, time: 763.751 segs, 40000 data items
epoch 2/20 step 800: train acc: 0.3600, test acc: 0.3680, time: 1439.305 segs, 40000 data items
epoch 3/20 step 1200: train acc: 0.3800, test acc: 0.4120, time: 2112.301 segs, 40000 data items
epoch 4/20 step 1600: train acc: 0.3600, test acc: 0.4090, time: 2803.351 segs, 40000 data items
epoch 5/20 step 2000: train acc: 0.5000, test acc: 0.4650, time: 3515.082 segs, 40000 data items
epoch 6/20 step 2400: train acc: 0.4200, test acc: 0.4600, time: 4176.871 segs, 40000 data items
epoch 7/20 step 2800: train acc: 0.4100, test acc: 0.4620, time: 4849.320 segs, 40000 data items
epoch 8/20 step 3200: train acc: 0.4100, test acc: 0.4850, time: 5519.653 segs, 40000 data items
epoch 9/20 step 3226: train acc: 0.5200, time: 5572.843 segs, 2600/40000
###Markdown
Exercise 4: visualize filters, sample misses and activations
###Code
with tf.Session() as sess:
saver = tf.train.Saver()
saver.restore(sess, "models/"+model_name+".tf")
C1_activations, test_preds, w1= sess.run([network2, y_hat, network2.W], feed_dict={X:test_imgs, X_info: test_info.reshape(-1,1)})
print "Show Confusion Matrix"
preds = np.argmax(test_preds, axis=1)
print preds.shape, test_labels.shape
print np.mean(test_labels==preds)
from rlx.ml import confusion_matrix
confusion_matrix(test_labels, preds)
def display_imgs(w, figsize=(20,3)):
plt.figure(figsize=figsize)
for i in range(w.shape[-1]):
plt.subplot(1,w.shape[-1],i+1)
plt.imshow(w[:,:,i], cmap = plt.cm.Greys_r, interpolation="none")
plt.axis("off")
plt.title(i)
display_imgs(w1[:,:,0,:])
print "SHOW SOME MISSES"
misses = np.argwhere(test_labels != preds)[:,0]
idxs = np.random.permutation(misses)[:10]
plt.figure(figsize=(20,3))
for i,idx in enumerate(idxs):
plt.subplot(1,10,i+1)
plt.imshow(test_imgs[idx][:,:,0], cmap=plt.cm.Greys_r)
plt.axis("off")
plt.title("true %d, pred %d"%(test_labels[idx], preds[idx]))
print "show conv1 activations for a random image"
i = np.random.randint(len(test_imgs))
plt.imshow(test_imgs[i])
display_imgs(w1[:,:,0,:])
display_imgs(C1_activations[i])
###Output
show conv1 activations for a random image
|
AirQo Ugadan Air Quality Forecast Challenge/Solution 3/Unohana_Airqo_Challenge_Solution.ipynb | ###Markdown
Feature engineering part
###Code
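# Assumed setup: the cells that load the competition data are not shown here, so the
# imports and the `features` list are reconstructed from how they are used below;
# `train`, `test` and `sample_sub` are assumed to be the competition dataframes loaded beforehand.
import gc
import math
from collections import OrderedDict

import numpy as np
import pandas as pd
from tqdm import tqdm, tqdm_notebook
from sklearn.model_selection import KFold
from catboost import CatBoostRegressor

# The six raw weather series referenced throughout this notebook
features = ['temp', 'precip', 'rel_humidity', 'wind_dir', 'wind_spd', 'atmos_press']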
def aggregate_features(x,col_name):
x["max_"+col_name]=x[col_name].apply(np.max)
x["min_"+col_name]=x[col_name].apply(np.min)
x["mean_"+col_name]=x[col_name].apply(np.mean)
x["std_"+col_name]=x[col_name].apply(np.std)
x["var_"+col_name]=x[col_name].apply(np.var)
x["median_"+col_name]=x[col_name].apply(np.median)
x["ptp_"+col_name]=x[col_name].apply(np.ptp)
return x
def remove_nan_values(x):
return [e for e in x if not math.isnan(e)]
data=pd.concat([train,test],sort=False).reset_index(drop=True)
data.columns.tolist()
for x in range(121):
data["temp_"+ str(x)] = data.temp.str[x]
data["precip_"+ str(x)] = data.precip.str[x]
data["rel_humidity_"+ str(x)] = data.rel_humidity.str[x]
data["wind_dir_"+ str(x)] = data.wind_dir.str[x]
data["wind_spd_"+ str(x)] = data.wind_spd.str[x]
data["atmos_press_"+ str(x)] = data.atmos_press.str[x]
data.shape
len(data.precip[10])
for col_name in tqdm(features):
data[col_name]=data[col_name].apply(remove_nan_values)
for col_name in tqdm(features):
data=aggregate_features(data,col_name)
data.drop(features,1,inplace=True)
train=data[data.target.notnull()].reset_index(drop=True)
test=data[data.target.isna()].reset_index(drop=True)
del data
gc.collect()
# Concatenating train and test
train_len = train.shape[0]
df = pd.concat([train, test])
# Feature engineering, aggregating features by hour of day
for feat in features:
for aggregate in ['kurt', 'skew']:
df[aggregate+'_'+feat] = df[[x for x in df.columns if x.startswith(feat)]].agg(aggregate, axis = 1)
aggregates = ['mean', 'max', 'min', 'kurt', 'skew']
for feat in features:
hours = OrderedDict()
for i in range(24):
hours['hour_' + str(i)] = []
for i in range(24):
for x, y in zip([int(j%24) for j in range(121)], [x for x in df.columns if x.startswith(feat)]):
if x == i: hours.get('hour_' + str(i)).append(y)
for i in range(24):
for aggregate in aggregates:
if aggregate in ['kurt', 'skew']:
df[aggregate+'_'+feat+'_hour_'+str(i)] = df[hours.get('hour_' + str(i))].agg(aggregate)
else:
df[aggregate+'_'+feat+'_hour_'+str(i)] = df[hours.get('hour_' + str(i))].agg(aggregate, axis = 1)
# Binning values to speed up training and to remove noise
df[[x for x in df.columns if x not in ['ID', 'location', 'target']]] = \
df[[x for x in df.columns if x not in ['ID', 'location', 'target']]].\
apply(lambda x: pd.qcut(labels=False, duplicates='drop', q = 24, x = x))
# Splitting training and testing data from the concatenated dataframe
train = df[:train_len]
test = df[train_len:]
%%time
# Training model per location, per seed, while cross validating
seed_predictions = []
seed_predictions_2 = []
for i in tqdm_notebook(range(1)):
loc_predictions = []
loc_predictions_2 = []
learning_rates = [0.052201, 0.03723, 0.043131, 0.051959, 0.047195]
for location, rate in tqdm_notebook(zip(['A', 'B', 'C', 'D', 'E'], learning_rates), leave = False):
X = train[(train.location == location)]
y = X.target
X = X.drop(['ID', 'location', 'target'], axis = 1)
testt = test[(test.location == location)]
testt = testt.drop(['ID', 'location', 'target'], axis = 1)
if i in [0]:
fold_predictions = []
kfold = KFold(n_splits = 5, random_state = 0)
for train_index, test_index in tqdm_notebook(kfold.split(X, y), leave = False):
X.reset_index(drop = True, inplace=True), y.reset_index(drop = True, inplace = True)
X_train, X_test, y_train, y_test = X.loc[train_index], X.loc[test_index], y[train_index], y[test_index]
cat = CatBoostRegressor(verbose = False, eval_metric='RMSE', use_best_model=True, random_seed=i)
cat.fit(X_train, y_train, eval_set=[(X_train, y_train), (X_test, y_test)], early_stopping_rounds=100)
fold_predictions.append(cat.predict(testt))
loc_predictions.extend(np.mean(fold_predictions, axis = 0))
loc_predictions_2.extend(CatBoostRegressor(verbose = False, learning_rate=rate, random_seed=i).fit(X, y).predict(testt))
seed_predictions.append(loc_predictions)
seed_predictions_2.append(loc_predictions_2)
test_ids = []
for ids in ['A', 'B', 'C', 'D', 'E']:
test_ids.extend(test[test.location == ids].ID)
mean_preds = ((np.mean(seed_predictions, axis = 0)*0.5) + (np.mean(seed_predictions_2, axis = 0)*0.5))
sub = pd.DataFrame({'ID': test_ids, 'target': mean_preds})
sub_1 = sample_sub.drop('target', axis = 1).merge(sub, how = 'left', on = 'ID')
%%time
# Training model on the full data
X = train.drop(['ID', 'location', 'target'], axis = 1)
y = train.target
testt = test.drop(['ID', 'location', 'target'], axis = 1)
predictions = []
kfold = KFold(n_splits = 20, random_state = 0)
for train_index, test_index in tqdm_notebook(kfold.split(X, y)):
X_train, X_test, y_train, y_test = X.loc[train_index], X.loc[test_index], y[train_index], y[test_index]
cat = CatBoostRegressor(verbose = False, iterations=20000, learning_rate=0.063602, eval_metric='RMSE', use_best_model=True)
cat.fit(X_train, y_train, eval_set=[(X_train, y_train), (X_test, y_test)], early_stopping_rounds=100)
predictions.append(cat.predict(testt))
sub = pd.DataFrame({'ID': test.ID, 'target': np.mean(predictions, axis = 0)})
sub_2 = sample_sub.drop('target', axis = 1).merge(sub, how = 'left', on = 'ID')
# Ensembling the models and creating a submission file
ensemble = sub_1.target*0.5 + sub_2.target*0.5
pd.DataFrame({'ID': sub_1.ID, 'target': ensemble}).to_csv('unohana_final_sub.csv', index = False)
###Output
_____no_output_____ |
docs/intermediate/sampling.ipynb | ###Markdown
Introduction What happens when we hit the Inference Button (tm)? Is it gradient descent that is happening underneath the hood? What is this "sampler" we speak about, and what exactly is it doing? As we take a detour away from PyMC3 for a moment, those fundamental questions are the questions that we are going to go through in this chapter. It's my hope that you'll enjoy peeling back the covers on _some_ of what happens underneath the hood. In the beginning... First off, we must remember that with Bayesian statistical inference, we are most concerned with computing the posterior distribution of parameters conditioned on data, $P(H|D)$. Here, $H$ refers to the parameter set and the model, while $D$ refers to the data. Because it is a conditional distribution, by invoking the rules of probability, where $$P(H,D)=P(H|D)P(D)=P(D|H)P(H)$$ if you were to treat each of the $P$s as algebraic elements, then a simple rearrangement gives us: $$P(H|D)=\frac{P(D|H)P(H)}{P(D)}$$ This, then, is Bayes' rule as applied to joint distributions of data and model parameters. One hiccup shows here, though, in that we cannot analytically know how to calculate $P(D)$. The reason for this is that we don't have an analytical form for the probability distribution of how data could have been configured. In practice, we treat that as a normalizing constant, since philosophically, data are considered constant while parameters are random variables. Hence, our posterior distribution is calculated as a proportionality term: $$P(H|D) \propto P(D|H)P(H)$$ Let's look at an illustration To make everything a bit more concrete, let's look at what I call a "simplest complex example", one that is not too hard to "grok" (geek slang for _understand_), but one that is complex enough to be interesting. We're going to inspect this particular model: $$\mu_d \sim N(\mu=3, \sigma=1)$$ $$\sigma_d \sim Exp(\lambda=13)$$ $$d \sim N(\mu=\mu_d, \sigma=\sigma_d)$$ We have Gaussian-distributed data, where the mean of the data distribution, $\mu_d$, is a Gaussian-distributed random variable that has a configuration that specifies our prior belief about it having not seen any data, while the variance of the data distribution, $\sigma_d$, is an Exponentially-distributed random variable, also configured in a way that specifies our prior without having seen any data. The model's PGM looks something like this:
###Code
from daft import PGM
G = PGM()
G.add_node("mu", content=r"$\mu$")
G.add_node("sigma", content=r"$\sigma$", x=1)
G.add_node("d", content="d", x=0.5, y=-1)
G.add_edge("mu", "d")
G.add_edge("sigma", "d")
G.show()
###Output
_____no_output_____
###Markdown
Exercise: Generative Model
###Code
def generative_model():
mu = norm(loc=8, scale=3).rvs()
sigma = expon(scale=3).rvs()
data = norm(loc=mu, scale=sigma).rvs()
return data
###Output
_____no_output_____
###Markdown
Exercise: Joint log-likelihood Write down the joint log-likelihood between data and the model parameters under the pre-specified priors.
###Code
from scipy.stats import norm, expon
def log_like(mu, sigma, data):
mu_like = norm(loc=8, scale=1).logpdf(mu)
sigma_like = expon(scale=3).logpdf(sigma)
data_like = norm(loc=mu, scale=sigma).logpdf(data).sum() # sum is important!
return mu_like + sigma_like + data_like
def generate_data(n):
for i in range(n):
yield generative_model()
###Output
_____no_output_____
###Markdown
Now, I'm going to give you some _actual_ data,and I'd like you to propose a $\mu$ and a $\sigma$,and then evaluate their joint log-likelihood with the data.
###Code
import numpy as np
true_mu = 2
true_sigma = 1
data = norm(true_mu, true_sigma).rvs(150)
log_like(-2, 1, data)
###Output
_____no_output_____
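###Markdown
For comparison, evaluating the same joint log-likelihood at the true parameter values should give a noticeably higher (less negative) number than the guess above:
###Code
log_like(true_mu, true_sigma, data)
###Output
_____no_output_____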
###Markdown
Let's plot how the log likelihood varies with $\mu$ and $\sigma$.This will give us a great way to visualize the posterior distribution space.
###Code
from ipywidgets import interact, FloatSlider, IntSlider
import matplotlib.pyplot as plt
import seaborn as sns
min_mu, max_mu = -3, 15
mu = FloatSlider(min=min_mu, max=max_mu, value=0, step=0.1, description=r"$\mu$")
min_sigma, max_sigma = 0.1, 5
sigma = FloatSlider(min=min_sigma, max=max_sigma, value=1, step=0.1, description=r"$\sigma$")
num_data = IntSlider(min=0, max=100, step=1, value=20, description="Num Data Points")
@interact(mu=mu, sigma=sigma, num_data=num_data)
def plot_univariate_posterior(mu, sigma, num_data):
mu_range = np.linspace(min_mu, max_mu, 100)
sigma_range = np.linspace(min_sigma, max_sigma, 100)
d = data[0:num_data]
ll_sigma = [log_like(mu, s, d) for s in sigma_range]
ll_mu = [log_like(m, sigma, d) for m in mu_range]
fig, ax = plt.subplots(figsize=(8,4), nrows=1, ncols=2)
ax[0].plot(sigma_range, np.exp(ll_sigma))
ax[0].set_xlabel("$\sigma$")
ax[0].set_title(f"$\mu$ fixed at {mu}")
ax[1].plot(mu_range, np.exp(ll_mu))
ax[1].set_xlabel("$\mu$")
ax[1].set_title(f"$\sigma$ fixed at {sigma}")
ax[0].set_ylabel("log likelihood")
sns.despine()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
If you're on the online version of this notebook, you'll have some sliders that you can play with. The first slider, $\mu$, allows you to fix $\mu$ at a particular value, while letting $\sigma$ vary. The second slider, $\sigma$, allows you to fix $\sigma$ at a particular value, while letting $\mu$ vary. On the y-axis of both is the joint log likelihood of the model and data, but with $\mu$ fixed with the data on the $\sigma$ plot, and with $\sigma$ fixed with the data on the $\mu$ plot. There's a few things you might want to note. Firstly, if you set Num Data to 0, you should see the joint log likelihood with no data, a.k.a. the prior log likelihood. Secondly, the parameter sliders for $\mu$ and $\sigma$ will allow you to "fix" a parameter value. The plot on the left has $\mu$ fixed, while the plot on the right has $\sigma$ fixed. In both cases, the data are also fixed at what was observed. Thirdly, as you add the number of data points used to evaluate the log likelihood, you should see the true values of the parameters converge towards the true value, conditioned on you setting the "fixed" parameter value (using the sliders) to the true value too. When the "fixed" parameters are wrong, inference about the other parameter will be wrong too. This shows the importance of _jointly_ inferring the values. Sampling: Metropolis-Hastings An easy-to-understand sampler that we can start with is the Metropolis-Hastings sampler. I first learned it in a grad-level computational biology class, but I expect most statistics undergrads should have a good working knowledge of the algorithm. Here's how the algorithm works, shamelessly copied (and modified) from the [Wikipedia article](https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm):- For each parameter $p$, do the following.- Initialize an arbitrary point for the parameter (this is $p_t$, or $p$ at step $t$).- Define a probability density $P(p_t)$, for which we will draw new values of the parameters. Here, we will use $P(p) = Normal(p_{t-1}, 1)$.- For each iteration: - Generate a new candidate $p_t$ drawn from $P(p_t)$. - Calculate the likelihood of the data under the previous parameter value(s) $p_{t-1}$: $L(p_{t-1})$ - Calculate the likelihood of the data under the proposed parameter value(s) $p_t$: $L(p_t)$ - Calculate acceptance ratio $r = \frac{L(p_t)}{L(p_{t-1})}$. - Generate a new random number on the unit interval: $s \sim U(0, 1)$. - Compare $s$ to $r$. - If $s \leq r$, accept $p_t$. - If $s \gt r$, reject $p_t$ and continue sampling again with $p_{t-1}$. Because of a desire for convenience, we could choose to use a normal distribution to sample all values. However, that distribution choice is possibly going to bite us during sampling, because the values that we could possibly sample for the $\sigma$ parameter can take on negatives, but when a negative $\sigma$ is passed into the normally-distributed likelihood, we are going to get computation errors! This is because the scale parameter of a normal distribution can only be positive, and cannot be negative or zero. (If it were zero, there would be no randomness.) 
Transformations as a hack The key problem here is that the support of the Exponential distribution is bound to be positive real numbers only. That said, we can get around this problem simply by sampling amongst the unbounded real number space $(-\inf, +\inf)$, and then transforming the number by a math function to be in the bounded space. One way we can transform numbers from an unbounded space to a positive-bounded space is to use the exponential transform: $$y = e^x$$ For any given value $x$, $y$ will be guaranteed to be positive. And _voila_! The key trick here was to **sample in unbounded space**, but **evaluate log-likelihood in bounded space**. We call the "unbounded" space the _transformed_ space, while the "bounded" space is the _original_ or _untransformed_ space. We have implemented the necessary components to compute posterior distributions on parameters! This is something that a probabilistic programming language, such as PyMC3, will do for you, so you don't have to worry about transforming your random variables for sampling. (That said, you still have to worry about transforming your RVs for model building, as we have seen in the chapter on hierarchical modelling.) Putting things all together Let's see how we can implement a simple MCMC sampler, one that uses the Metropolis-Hastings algorithm, for sampling. Follow along the block of code below: _look at the comments for notes on the important bits!_
###Code
from tqdm.autonotebook import tqdm
##### Read the code and comments carefully! #####
# Here, we initialize our "guesses" for mu and sigma.
mu_prev = np.random.normal()
sigma_prev = np.random.normal()
# Keep a history of the parameter values and ratio.
mu_history = dict()
sigma_history = dict()
ratio_history = dict()
for i in tqdm(range(1000)):
# We record the history of values on each loop.
mu_history[i] = mu_prev
sigma_history[i] = sigma_prev
# Now, we propose new values centered on the previous values
mu_t = np.random.normal(mu_prev, 0.1)
sigma_t = np.random.normal(sigma_prev, 0.1)
# Compute joint log likelihood
LL_t = log_like(mu_t, np.exp(sigma_t), data)
# NOTE: because sigma has to be positive, we apply an exponential transform
LL_prev = log_like(mu_prev, np.exp(sigma_prev), data)
# Calculate the difference in log-likelihoods
# (or a.k.a. ratio of likelihoods)
diff_log_like = LL_t - LL_prev
if diff_log_like > 0:
ratio = 1
else:
# We need to exponentiate to get the correct ratio,
# since all of our calculations were in log-space
ratio = np.exp(diff_log_like)
# Defensive programming check
if np.isinf(ratio) or np.isnan(ratio):
raise ValueError(f"LL_t: {LL_t}, LL_prev: {LL_prev}")
# Ratio comparison step
ratio_history[i] = ratio
p = np.random.uniform(0, 1)
if ratio >= p:
mu_prev = mu_t
sigma_prev = sigma_t
###Output
_____no_output_____
###Markdown
Now, let's visualize how our sampling went.
###Code
import janitor
import pandas as pd
trace = (
pd.DataFrame(
pd.Series(sigma_history, name="sigma")
)
.join(
pd.Series(mu_history, name="mu")
)
# We have to transform the sampled values into the correct space.
.transform_column("sigma", np.exp)
)
trace.plot();
###Output
_____no_output_____
###Markdown
It looks like we were able to recover the true $\mu$ and $\sigma$, as well as estimate the uncertainty around their values! Notice how it took a few dozen steps before the trace becomes **stationary**, that is, it becomes a flat trend-line. If we prune the trace to just the values after the 200th iteration, we get the following trace:
###Code
trace.loc[200:].plot();
###Output
_____no_output_____ |
Titanic Kaggle Competition .ipynb | ###Markdown
INTRODUCTION This is my first machine learning project for Kaggle. It was built entirely in Python and its libraries, with the goal of learning and understanding more about the platform, as well as putting some data science concepts into practice. **Importing the required libraries**
###Code
# Data analysis and manipulation
import pandas as pd
import numpy as np
import random as rnd
# Visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# Machine learning
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import accuracy_score, log_loss
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
**Data collection** Combining the training set with the test set makes it easier to apply future operations that may be needed on both sets at the same time.
###Code
train_df = pd.read_csv('/kaggle/input/titanic/train.csv')
test_df = pd.read_csv('/kaggle/input/titanic/test.csv')
combined_train_test = [train_df, test_df]
###Output
_____no_output_____
###Markdown
UNDERSTANDING THE DATASET It is important to understand a little more about the state of the initial dataset: identifying columns that may have missing values, understanding the data types, how the numerical and categorical variables are distributed, etc. This way it will be possible to design the best data preprocessing strategies.
###Code
train_df.head()
print(train_df.isnull().sum())
print('-'*50)
print(test_df.isnull().sum())
print(train_df.info())
print('-'*50)
print(test_df.info())
train_df.describe()
train_df.describe(include=['O'])
###Output
_____no_output_____
###Markdown
**Conclusions*** Not all features in the dataset are numeric; the Ticket and Cabin columns contain mixed data.* The Age, Cabin and Embarked columns have missing values, with Cabin having more than 75% null entries.* The Sex column is 65% male.* Few passengers travelled in first class.* The Ticket column has roughly 22% duplicated values. **Likely actions*** Fill in the Age and Embarked columns, since they contribute to survival given the description of the event.* Drop the PassengerId column, since it does not contribute to survival.* Drop the Ticket column from the analysis, given the high number of duplicated values and its probable lack of correlation with survival.* Drop the Cabin column from the analysis, because of the number of missing values.* Create a new column with the total number of family members on board from the SibSp and Parch columns.* Bin Age and Fare into intervals to turn continuous numerical variables into ordinal categorical ones. **Hypotheses** From the analysis so far we can make a few assumptions, whose validity can be checked later. They are:* Women are more likely to have survived.* Children are more likely to have survived.* First-class passengers are more likely to have survived. DATA CLEANING AND TRANSFORMATION It will be important to modify the columns of our dataset based on the observations made above, converting our columns' values to numeric types, handling missing values and removing *features* that are not decisive for the analysis. **Dropping *features* that do not add to the analysis** It will be beneficial to drop columns that do not contribute significantly to our analysis; as seen above, they are: * PassengerId* Ticket* Cabin
###Code
train_df = train_df.drop(['PassengerId', 'Ticket', 'Cabin'], axis=1)
# Keep the passenger ID before dropping it, since it will be needed when submitting the results
passenger_id_test = test_df['PassengerId']
test_df = test_df.drop(['PassengerId', 'Ticket', 'Cabin'], axis=1)
combined_train_test = [train_df, test_df]
###Output
_____no_output_____
###Markdown
**Converting non-numeric *features*** Most *machine learning* models require numeric values, so we will need to convert the values in our dataset that contain *strings*. **1. *Sex*** The *Sex* column will be mapped to 0 (male) or 1 (female).
###Code
for dataset in combined_train_test:
dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
# Taking a look at our training set
train_df.tail()
###Output
_____no_output_____
###Markdown
**2. *Embarked*** The Embarked column will be mapped to 0 (C), 1 (Q) or 2 (S), but first its missing values need to be filled in. Since there are only two missing values, they will be filled with the most frequent value in the dataset.
###Code
# Find the most frequent Embarked value in the training set
embarked_mode = train_df.Embarked.dropna().mode()[0]
# Fill the missing values with the column mode
for dataset in combined_train_test:
dataset['Embarked'] = dataset['Embarked'].fillna(embarked_mode)
# Map the values
for dataset in combined_train_test:
dataset['Embarked'] = dataset['Embarked'].map( {'C': 0, 'Q': 1, 'S': 2} ).astype(int)
train_df.tail()
###Output
_____no_output_____
###Markdown
**Filling missing values of continuous *features*** 1. **Age** Since age is a relevant factor for the analysis, its missing values need to be filled in. For that, we should aim for a value that adds as little noise to the data as possible. One strategy is to use *features* that are correlated with age, such as travel class and the person's sex, and find the median age for each combination of the related *features* (Pclass = 0 and Sex = 0, Pclass = 0 and Sex = 1, and so on).
###Code
# Build the grid of age histograms for every possible combination of Pclass and Sex
grid = sns.FacetGrid(train_df, row='Pclass', col='Sex', height=2.5, aspect=2)
grid.map(plt.hist, 'Age', alpha=0.5, bins=20)
###Output
_____no_output_____
###Markdown
The visualization makes the correlation between Pclass, Sex and Age clear (most visible when Pclass=3 and Sex=0). Having verified the correlation, we now find the median Age values for each Pclass x Sex combination.
###Code
# Empty array that will hold the guessed values for every combination
guess_ages = np.zeros((2, 3))
for dataset in combined_train_test:
for sex in range(2):
for pclass in range(3):
            # Find the ages for this combination
guess_df = dataset[((dataset['Sex'] == sex) & (dataset['Pclass'] == pclass + 1))]['Age'].dropna()
            # Store the median age of this combination in guess_ages
guess_ages[sex, pclass] = guess_df.median()
for sex in range(2):
for pclass in range(3):
            # Fill in the missing values
dataset.loc[(dataset['Age'].isnull()) & (dataset['Sex'] == sex) & (dataset['Pclass'] == pclass+1),
'Age'] = guess_ages[sex, pclass]
    # Convert the Age values to int
dataset['Age'] = dataset['Age'].astype(int)
train_df.tail()
###Output
_____no_output_____
###Markdown
The missing age values are now handled! However, it would be better for visualizing the correlation with survival if age were split into age bands, as planned earlier.
###Code
# Bin the ages into age bands
for dataset in combined_train_test:
dataset.loc[dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
train_df.head()
###Output
_____no_output_____
###Markdown
2. **Fare** There is still one missing value of the *Fare* feature in the test set, which will be filled with the column median.
###Code
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
###Output
_____no_output_____
###Markdown
Although the missing-value problem is solved, it would be better if Fare were split into intervals just like Age, turning the data into categories.
###Code
# Split Fare into 4 quantile-based bins
for dataset in combined_train_test:
dataset.loc[dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
combined_train_test = [train_df, test_df]
train_df.head()
###Output
_____no_output_____
###Markdown
FEATURE ENGINEERING Here the data already present in the dataset will be used to create new *features* that are more relevant to the problem. 1. **FamilySize** Adding the values of the SibSp and Parch columns plus the passenger themselves gives the total family size. This makes it possible to drop SibSp and Parch from our dataset.
###Code
# Create FamilySize
for dataset in combined_train_test:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
# Drop SibSp and Parch
train_df = train_df.drop(['SibSp', 'Parch'], axis=1)
test_df = test_df.drop(['SibSp', 'Parch'], axis=1)
combined_train_test = [train_df, test_df]
train_df.head()
###Output
_____no_output_____
###Markdown
2. **IsAlone** To check the correlation between being alone on the ship and survival, we can create an IsAlone column from FamilySize.
###Code
# Create IsAlone
for dataset in combined_train_test:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
train_df.tail()
###Output
_____no_output_____
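###Markdown
A quick check (just a sanity check at this point) of how survival varies with this new feature:
###Code
train_df[['IsAlone', 'Survived']].groupby('IsAlone', as_index=False).mean()
###Output
_____no_output_____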
###Markdown
3. **Title** At first the Name column does not seem to help the analysis; however, we can extract the passenger's title from the *feature*, which probably correlates with survival.
###Code
# Create the Title column by extracting the titles with a regex
for dataset in combined_train_test:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
train_df['Title'].unique()
###Output
_____no_output_____
###Markdown
With the titles extracted, we can see that many of them can be rewritten in a more common form or classified as rare (since they represent a small percentage of the population).
###Code
for dataset in combined_train_test:
    # Classify uncommon titles as rare
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col','Don', 'Dr',
'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
    # Replace unusual spellings of common titles
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
# Drop the Name column
train_df = train_df.drop('Name', axis=1)
test_df = test_df.drop('Name', axis=1)
combined_train_test = [train_df, test_df]
train_df.tail()
###Output
_____no_output_____
###Markdown
Now the Title column will be mapped to numeric values so that the dataset contains only *int* data.
###Code
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combined_train_test:
dataset['Title'] = dataset['Title'].map(title_mapping)
train_df.tail()
###Output
_____no_output_____
###Markdown
UNDERSTANDING THE CORRELATION OF THE FEATURES WITH SURVIVAL Now that the datasets are organized and free of missing values, we can study the impact each *feature* has on passenger survival by looking at the mean of Survived for each value of each *feature*. To do this, a visualization will be created for each feature using *bar plots*.
###Code
# Changing the font
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.sans-serif'] = 'Helvetica'
# Changing the axes style
plt.rcParams['axes.edgecolor']='#333F4B'
plt.rcParams['axes.linewidth']=0.8
plt.rcParams['xtick.color']='#333F4B'
plt.rcParams['ytick.color']='#333F4B'
# Creating the bar plots
fig, ax = plt.subplots(2, 4)
fig.set_size_inches(15, 9)
df_column = 0
for row in range(2):
for col in range(4):
survival_correlation = pd.DataFrame(train_df[[train_df.columns[1:].tolist()[df_column],
'Survived']].groupby([train_df.columns[1:].tolist()[df_column]],
as_index=False).mean().iloc[:, 1].values)
my_range=list(range(len(survival_correlation)))
ax[row, col].hlines(y=my_range, xmin=0, xmax=survival_correlation, color='#007ACC', alpha=0.2, linewidth=5)
ax[row, col].plot(survival_correlation, my_range, "o", markersize=5, color='#007ACC', alpha=0.6)
ax[row, col].set_xlim(0, 1)
ax[row, col].set_yticks(np.arange(len(survival_correlation)))
        # Changing the label style
        ax[row, col].set_xlabel('Mean survival', color = '#333F4B')
ax[row, col].set_ylabel(train_df.columns[1:].tolist()[df_column])
        # Removing the top and right spines of the plot
ax[row, col].spines['top'].set_color('none')
ax[row, col].spines['right'].set_color('none')
ax[row, col].spines['left'].set_smart_bounds(True)
ax[row, col].spines['bottom'].set_smart_bounds(True)
df_column += 1
plt.show()
###Output
/opt/conda/lib/python3.6/site-packages/matplotlib/font_manager.py:1241: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans.
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
Although extremely rare, we observe correlation in every feature examined. This is probably because the competition is aimed at beginners and the features to be used were pre-selected based on knowledge of the problem. Model comparison It is time to use different machine learning models to classify the test set and then submit the predictions of the best model.
###Code
classifiers = [
KNeighborsClassifier(3),
SVC(),
DecisionTreeClassifier(),
RandomForestClassifier(),
AdaBoostClassifier(),
GradientBoostingClassifier(),
GaussianNB(),
LinearDiscriminantAnalysis(),
QuadraticDiscriminantAnalysis(),
LogisticRegression()
]
log_cols = ["Modelo", "Acurácia"]
log = pd.DataFrame(columns=log_cols)
sss = StratifiedShuffleSplit(n_splits=10, test_size=0.1, random_state=9)
# Feature values
X = train_df.values[0::, 1::]
# Target (Survived) values
y = train_df.values[0::, 0]
acc_dict = {}
for train_index, test_index in sss.split(X, y):
    # Hold out validation rows from the training set to measure each model's accuracy
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
for clf in classifiers:
name = clf.__class__.__name__
clf.fit(X_train, y_train)
train_predictions = clf.predict(X_test)
acc = accuracy_score(y_test, train_predictions)
if name in acc_dict:
acc_dict[name] += acc
else:
acc_dict[name] = acc
for clf in acc_dict:
acc_dict[clf] = acc_dict[clf] / 10.0
log_entry = pd.DataFrame([[clf, acc_dict[clf]]], columns=log_cols)
log = log.append(log_entry)
plt.xlabel('Accuracy')
plt.title('Model Accuracy')
fig = plt.gcf()
fig.set_size_inches(10, 6)
sns.barplot(x='Accuracy', y='Model', data=log, color='#007ACC', alpha=0.6)
###Output
_____no_output_____
###Markdown
We conclude that the model with the best result on the training set was the Support Vector Machine (SVC). Submitting the result
###Code
# Training the model
best_classifier = SVC()
best_classifier.fit(train_df.values[0::, 1::], train_df.values[0::, 0])
# Building a dataframe with the results
submission = pd.DataFrame()
submission['PassengerId'] = passenger_id_test.values
submission['Survived'] = best_classifier.predict(test_df)
# Convertendo o df para csv
submission.to_csv('submission.csv', index=False)
###Output
_____no_output_____ |
Fundamentals of Deep Learning/Reuters Newswires Dataset, Multiclass Classification.ipynb | ###Markdown
Best fit at epoch = 9
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=9,
batch_size=512,
validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
print("Test loss:", results[0])
print("Test acc:", results[1])
# Predicting
predictions = model.predict(x_test)
print(predictions[0].shape)
print(sum(predictions[0]))
###Output
(46,)
0.999999927199
###Markdown
Here we used one-hot categorical labels for learning. If the labels are kept as plain integers instead, the loss function to use is `sparse_categorical_crossentropy`: `y_train = np.array(train_labels)`, `y_test = np.array(test_labels)`, `model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])`
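###Code
# Sketch of the integer-label variant (assumes `train_labels` / `test_labels` are the raw
# integer label lists from the Reuters dataset; not executed in this run)
import numpy as np

y_train_int = np.array(train_labels)
y_test_int = np.array(test_labels)
model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])
###Output
_____no_output_____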
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=128,
validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
print("Test loss:", results[0])
print("Test acc:", results[1])
###Output
Train on 7982 samples, validate on 1000 samples
Epoch 1/20
7982/7982 [==============================] - 2s 189us/step - loss: 3.5173 - acc: 0.0787 - val_loss: 3.2234 - val_acc: 0.1280
Epoch 2/20
7982/7982 [==============================] - 1s 123us/step - loss: 2.8013 - acc: 0.2492 - val_loss: 2.4683 - val_acc: 0.4560
Epoch 3/20
7982/7982 [==============================] - 1s 122us/step - loss: 2.0043 - acc: 0.4743 - val_loss: 1.9022 - val_acc: 0.4680
Epoch 4/20
7982/7982 [==============================] - 1s 121us/step - loss: 1.5417 - acc: 0.5336 - val_loss: 1.5835 - val_acc: 0.6340
Epoch 5/20
7982/7982 [==============================] - 1s 125us/step - loss: 1.2759 - acc: 0.6461 - val_loss: 1.4968 - val_acc: 0.6240
Epoch 6/20
7982/7982 [==============================] - 1s 124us/step - loss: 1.1686 - acc: 0.6463 - val_loss: 1.4493 - val_acc: 0.6340
Epoch 7/20
7982/7982 [==============================] - 1s 131us/step - loss: 1.0853 - acc: 0.6630 - val_loss: 1.4443 - val_acc: 0.6400
Epoch 8/20
7982/7982 [==============================] - 1s 130us/step - loss: 1.0081 - acc: 0.6977 - val_loss: 1.4804 - val_acc: 0.6620
Epoch 9/20
7982/7982 [==============================] - 1s 135us/step - loss: 0.9428 - acc: 0.7499 - val_loss: 1.4518 - val_acc: 0.6840
Epoch 10/20
7982/7982 [==============================] - 1s 148us/step - loss: 0.8844 - acc: 0.7633 - val_loss: 1.4667 - val_acc: 0.6900
Epoch 11/20
7982/7982 [==============================] - 1s 125us/step - loss: 0.8391 - acc: 0.7724 - val_loss: 1.4829 - val_acc: 0.6920
Epoch 12/20
7982/7982 [==============================] - 1s 125us/step - loss: 0.7986 - acc: 0.7804 - val_loss: 1.5089 - val_acc: 0.6870
Epoch 13/20
7982/7982 [==============================] - 1s 140us/step - loss: 0.7662 - acc: 0.7870 - val_loss: 1.5467 - val_acc: 0.6890
Epoch 14/20
7982/7982 [==============================] - 1s 138us/step - loss: 0.7366 - acc: 0.7914 - val_loss: 1.5910 - val_acc: 0.6850
Epoch 15/20
7982/7982 [==============================] - 1s 137us/step - loss: 0.7109 - acc: 0.7974 - val_loss: 1.6020 - val_acc: 0.6900
Epoch 16/20
7982/7982 [==============================] - 1s 131us/step - loss: 0.6875 - acc: 0.8031 - val_loss: 1.6407 - val_acc: 0.6830
Epoch 17/20
7982/7982 [==============================] - 1s 130us/step - loss: 0.6519 - acc: 0.8127 - val_loss: 1.6495 - val_acc: 0.6920
Epoch 18/20
7982/7982 [==============================] - 1s 134us/step - loss: 0.6095 - acc: 0.8326 - val_loss: 1.6842 - val_acc: 0.6980
Epoch 19/20
7982/7982 [==============================] - 1s 133us/step - loss: 0.5746 - acc: 0.8434 - val_loss: 1.6725 - val_acc: 0.7020
Epoch 20/20
7982/7982 [==============================] - 1s 156us/step - loss: 0.5454 - acc: 0.8472 - val_loss: 1.6693 - val_acc: 0.7050
2246/2246 [==============================] - 0s 163us/step
Test loss: 1.83595189174
Test acc: 0.688334817453
|
examples/simple_models/titanic_xgboost/demo/titanic_demo.ipynb | ###Markdown
Titanic survival classification demo
###Code
import numpy as np
import pandas as pd
import grpc
import hydro_serving_grpc as hs
###Output
_____no_output_____
###Markdown
1. Read a data sample
###Code
train_df = pd.read_csv('../data/train.csv', header=0, delimiter=',', nrows=10)
train_df.columns = train_df.columns.str.rstrip()
def age_converter(text):
try:
return int(text.rstrip())
except ValueError:
return np.nan
train_df["Age"] = train_df["Age"].apply(age_converter)
train_df.head()
train_df.columns
example = train_df.iloc[0]
###Output
_____no_output_____
###Markdown
2. Perform inference
###Code
from hydrosdk import Cluster, Application
import grpc
cluster = Cluster(
http_address="<hydrosphere-http-address>",
grpc_address="<hydrosphere-grpc-address>",
ssl=True, # turn off, if your Hydrosphere instance doesn't have
grpc_credentials=grpc.ssl_channel_credentials() # TLS certificates installed
)
app = Application.find(cluster, "<application-name>")
app.lock_while_starting()
predictor = app.predictor()
result = predictor.predict({
"pclass": example.Pclass.astype("int32"),
"sex": example.Sex,
"age": example.Age.astype("int32"),
"fare": example.Fare,
"parch": example.Parch.astype("int32")
})
print("Predicted value:", result["survived"])
print("Actual value:", example.Survived)
###Output
Predicted value: 0
Actual value: 0
|
Machine Learning Algorithms/Section 2 - Regression/7. Model Evaluation and Selection/Model Selection Reference/polynomial_regression.ipynb | ###Markdown
Polynomial Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('ENTER_THE_NAME_OF_YOUR_DATASET_HERE.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Training the Polynomial Regression model on the Training set
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
poly_reg = PolynomialFeatures(degree = 4)
X_poly = poly_reg.fit_transform(X_train)
regressor = LinearRegression()
regressor.fit(X_poly, y_train)
###Output
_____no_output_____
###Markdown
Predicting the Test set results
###Code
y_pred = regressor.predict(poly_reg.transform(X_test))
np.set_printoptions(precision=2)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
_____no_output_____
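###Markdown
To predict a single new observation, transform it with the same `poly_reg` object first (the value 6.5 below is only a hypothetical single-feature example; use as many columns as your dataset has):
###Code
regressor.predict(poly_reg.transform([[6.5]]))
###Output
_____no_output_____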
###Markdown
Evaluating the Model Performance
###Code
from sklearn.metrics import r2_score
r2_score(y_test, y_pred)
###Output
_____no_output_____ |
A_05_Fake_News_Classification/nlp_a5.ipynb | ###Markdown
submitted by Tarang Ranpara (202011057)
###Code
import os
import spacy
import logging
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import Dataset, DataLoader
from sklearn.model_selection import train_test_split
from tqdm import tqdm
spacy_eng = spacy.load("en_core_web_sm")
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
###Output
_____no_output_____
###Markdown
Exploring the data
###Code
train_path = 'drive/MyDrive/NLP_A5/'
# Load the raw training data (the same train.csv that DatasetLoader reads below)
data = pd.read_csv(os.path.join(train_path, 'train.csv'),
                   error_bad_lines=False, warn_bad_lines=False, engine='python')
# Keep only rows whose label is one of the two valid classes
data = data.drop(data[~data.label.isin(['0', '1'])].index)
print("Cleaned Dataset shape:", data.shape)
data.head()
X = data[['id', 'title', 'author', 'text']]
y = data[['label']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, random_state=42)
###Output
_____no_output_____
###Markdown
Building the vocab
###Code
class Vocab:
def __init__(self):
self.itos = {0: "<PAD>", 1: "<SOS>", 2: "<EOS>", 3: "<UNK>"}
self.stoi = {t: i for i, t in self.itos.items()}
def __len__(self):
return len(self.itos)
@staticmethod
def tokenizer_eng(text):
return [tok.text.lower() for tok in spacy_eng.tokenizer(text)]
def build_vocab(self, sentences):
logging.info("Building vocab")
idx = 4
for sent in sentences:
for word in self.tokenizer_eng(sent):
self.stoi[word] = idx
self.itos[idx] = word
idx += 1
logging.info("Vocab built.")
def vectorize(self, text):
tokenized_text = self.tokenizer_eng(text)
return [
self.stoi[token] if token in self.stoi else self.stoi["<UNK>"]
for token in tokenized_text
]
###Output
_____no_output_____
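###Markdown
A tiny usage example with a made-up headline, just to illustrate the mapping (words not seen during `build_vocab` fall back to `<UNK>`):
###Code
toy_vocab = Vocab()
toy_vocab.build_vocab(["Breaking news about the election"])
toy_vocab.vectorize("breaking news about sports")
###Output
_____no_output_____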
###Markdown
Dataset Loader
###Code
class DatasetLoader(Dataset):
def __init__(self, root_dir, subset="train", transform=None, test_size=0.1):
self.root_dir = root_dir
self.transform = transform
self.test_size = test_size
self.subset = subset
# Loading dataset
self.df = pd.read_csv(
os.path.join(root_dir, "train.csv"),
error_bad_lines=False,
warn_bad_lines=False,
engine="python")
# Cleaning dataset
self.__clean_data()
# Splitting the data
self.train_data, self.test_data = self.__train_test_split()
# Get texts and labels
self.texts = self.train_data["title"].values
self.labels = self.train_data["label"].values.astype(np.int64)
self.test_texts = self.test_data["title"].values
self.test_labels = self.test_data["label"].values.astype(np.int64)
self.classes = ["Non Fake", "Fake"]
# Initialize and build vocabulary
self.vocab = Vocab()
self.vocab.build_vocab(self.texts.tolist())
def __len__(self):
return len(self.texts) if self.subset == "train" else len(self.test_texts)
def __getitem__(self, index):
if self.subset == "train":
text = self.texts[index]
label = self.labels[index]
else:
text = self.test_texts[index]
label = self.test_labels[index]
if self.transform is not None:
text = self.transform(text)
vectorized_text = [self.vocab.stoi["<SOS>"]]
vectorized_text += self.vocab.vectorize(text)
vectorized_text.append(self.vocab.stoi["<EOS>"])
return vectorized_text, label
def __clean_data(self):
self.df = self.df.dropna()
self.df = self.df.drop(self.df[~self.df.label.isin(['0', '1'])].index)
def __train_test_split(self):
return train_test_split(self.df, test_size=self.test_size, random_state=42)
###Output
_____no_output_____
###Markdown
pad the sequences
###Code
class PadTextSequence:
def __init__(self, pad_idx):
self.pad_idx = pad_idx
def __call__(self, batch):
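        # `batch` is a list of (vectorized_text, label) pairs produced by DatasetLoader.__getitem__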
texts = [torch.tensor(item[0]) for item in batch]
labels = [torch.tensor(item[1]) for item in batch]
texts = pad_sequence(texts, batch_first=True, padding_value=self.pad_idx)
return texts, torch.Tensor(labels).to(torch.int64)
###Output
_____no_output_____
###Markdown
get train/test loaders
###Code
def get_train_test_loader(
root_fldr,
transform=None,
batch_size=32,
shuffle=True,
test_split=0.1
):
train_dataset = DatasetLoader(root_fldr, subset="train", transform=transform, test_size=test_split)
test_dataset = DatasetLoader(root_fldr, subset="test", transform=transform, test_size=test_split)
pad_idx = train_dataset.vocab.stoi["<PAD>"]
train_loader = DataLoader(
dataset=train_dataset,
batch_size=batch_size,
shuffle=shuffle,
collate_fn=PadTextSequence(pad_idx=pad_idx)
)
test_loader = DataLoader(
dataset=test_dataset,
batch_size=batch_size,
shuffle=shuffle,
collate_fn=PadTextSequence(pad_idx=pad_idx)
)
return train_loader, test_loader
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
setting hyper params
###Code
# Set up hyperparameters
num_layers = 1
hidden_nodes = 256
embedding_dim = 300
learning_rate = 0.0001
batch_size = 64
num_epochs = 15
train_loader, test_loader = get_train_test_loader(train_path, batch_size=batch_size, shuffle=True)
vocab_size = len(train_loader.dataset.vocab)
###Output
2021-09-15 16:39:37,732 : INFO : NumExpr defaulting to 2 threads.
2021-09-15 16:39:37,755 : INFO : Building vocab
2021-09-15 16:39:40,346 : INFO : Vocab built.
2021-09-15 16:39:42,170 : INFO : Building vocab
2021-09-15 16:39:44,115 : INFO : Vocab built.
###Markdown
LSTM Model
###Code
class LSTM_model(nn.Module):
def __init__(
self,
inp_size,
hidden_nodes=64,
num_layers=1,
embedding_dim=100
):
super(LSTM_model, self).__init__()
self.hidden_nodes = hidden_nodes
self.num_layers = num_layers
self.embedding = nn.Embedding(inp_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_nodes, num_layers, batch_first=True)
self.fc = nn.Linear(hidden_nodes, 2)
def forward(self, x):
h0 = torch.randn(self.num_layers, x.size(0), self.hidden_nodes).to(device)
c0 = torch.randn(self.num_layers, x.size(0), self.hidden_nodes).to(device)
x = self.embedding(x)
x, _ = self.lstm(x, [h0, c0])
        # Consider only the last hidden state and return raw logits;
        # nn.CrossEntropyLoss applies log-softmax internally, so no softmax is needed here
        x = self.fc(x[:, -1, :])
        return x
model = LSTM_model(vocab_size, hidden_nodes, num_layers, embedding_dim).to(device=device)
criterion = nn.CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr=learning_rate)
def get_accuracy(loader, model):
num_correct = 0
num_samples = 0
model.eval() # Switch model to evaluation mode
with torch.no_grad(): # We don't need to compute gradients here
for x, y in tqdm(loader, ascii="123456789=", desc="Evaluating:"):
x = x.to(device=device)
y = y.to(device=device)
scores = model(x)
loss = criterion(scores, y)
_, predictions = scores.max(1)
num_correct += (predictions == y).sum()
num_samples += predictions.size(0) # predictions.shape[0]
accuracy = (num_correct / num_samples) * 100
print(f"Loss: {loss.item()}, Accuracy: {num_correct} / {num_samples} = {accuracy: .3f}%")
# Switch back to training mode
model.train()
return accuracy
def train_loop():
print("Training begins..")
for epoch in range(num_epochs):
num_correct = 0
num_samples = 0
loop = tqdm(enumerate(train_loader), total=len(train_loader), ascii=" 123456789=")
for batch_idx, (data, targets) in loop:
data = data.to(device=device)
# targets = targets.view(-1, 1).to(torch.float32)
targets = targets.to(device=device)
# Forward step
scores = model(data)
loss = criterion(scores, targets)
# Backward step
optimizer.zero_grad() # To clear out previous step's gradients
loss.backward()
# Gradient descent
optimizer.step()
# Calculate ratio of correct predictions
_, predictions = scores.max(1)
num_correct += (predictions == targets).sum()
num_samples += predictions.size(0) # predictions.shape[0]
# Update loss and accuracy on progress bar
accuracy = (num_correct / num_samples)
loop.set_description(f"=> Epoch {epoch + 1}/{num_epochs}")
loop.set_postfix(loss=loss.item(), accuracy=accuracy.item())
train_loop()
test_accuracy = get_accuracy(test_loader, model)
###Output
Evaluating:: 100%|==========| 29/29 [00:01<00:00, 16.10it/s] |
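###Markdown
As a rough sanity check, the trained model can score a single headline. This is a minimal sketch: it reuses the vocabulary built by the training dataset (`stoi` / `vectorize`, exactly as used in `DatasetLoader.__getitem__`); the example headline is arbitrary, and the 0/1 label meaning (1 = unreliable, following the Kaggle fake-news convention) is an assumption here.
###Code
def predict_text(text, model, vocab, device):
    # Vectorize the same way the dataset does: <SOS> + tokens + <EOS>
    ids = [vocab.stoi["<SOS>"]] + vocab.vectorize(text) + [vocab.stoi["<EOS>"]]
    x = torch.tensor(ids).unsqueeze(0).to(device)  # add a batch dimension
    model.eval()
    with torch.no_grad():
        pred = model(x).argmax(dim=1).item()
    model.train()
    return pred  # assumed: 0 = reliable, 1 = unreliable

predict_text("Breaking: markets tumble after surprise announcement", model, train_loader.dataset.vocab, device)
###Output
_____no_output_____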
_lessons/11-hipoteses.ipynb | ###Markdown
---
layout: page
title: Hypothesis Tests
nav_order: 11
---
[](https://colab.research.google.com/github/icd-ufmg/icd-ufmg.github.io/blob/master/_lessons/11-hipoteses.ipynb)

Hypothesis Tests
{: .no_toc .mb-2 }

Understanding p-values
{: .fs-6 .fw-300 }

{: .no_toc .text-delta }
Expected outcomes
1. Understand the concept of a hypothesis test
1. Understand the p-value
1. Know how to run and interpret hypothesis tests
1. Know how to compute and interpret p-values

---
**Contents**
1. TOC
{:toc}
---
###Code
# -*- coding: utf8
from scipy import stats as ss
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
plt.style.use('seaborn-colorblind')
plt.rcParams['figure.figsize'] = (16, 10)
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['legend.fontsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['lines.linewidth'] = 4
plt.ion()
def despine(ax=None):
if ax is None:
ax = plt.gca()
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# Only show ticks on the left and bottom spines
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
###Output
_____no_output_____
###Markdown
Introduction

This is the first notebook on hypothesis testing. To understand this world, we need to cover:
1. Tests when we know the population
1. Tests when we do not know the population
1. Errors and test power (next notebook)
1. Causality (next notebook)

As in the world of confidence intervals, the idea here is to have some statistical guarantee for the claims we make when we only have a sample. Ideally, that sample will be large enough to estimate, with little bias, parameters such as the population mean and standard deviation.

When we test hypotheses we want to answer questions like: "Is this scenario plausible when compared with a __null hypothesis__?". The __null hypothesis__ represents a probabilistic model of the population. In general, we want to reject it, indicating that we have no evidence that the data follow that model. For the purposes of this notebook, a model means a way of sampling the population. Imagine a coin is flipped 100 times and lands heads 99 times. A __null hypothesis__ would be: "The coin is fair!". What is the chance of a fair coin landing heads 99 times?
###Code
x = np.arange(0, 101) # Valores no eixo x
prob_binom = ss.distributions.binom.pmf(x, 100, 0.5)
plt.plot(x, prob_binom, 'o')
plt.plot([99], prob_binom[99], 'ro')
plt.text(99, 0.0018, 'Aqui', horizontalalignment='center')
plt.xlabel('Num Caras - x')
plt.ylabel('P(sair x caras)')
plt.title('Chance de sair 99 caras (pontinho vermelho!)')
despine()
###Output
_____no_output_____
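###Markdown
For reference, the probability of seeing 99 or more heads out of 100 flips of a fair coin can be computed directly from the binomial distribution (a quick check using the same `scipy.stats` import as above).
###Code
# P(X >= 99) for X ~ Binomial(100, 0.5): survival function evaluated at 98
ss.distributions.binom.sf(98, 100, 0.5)
###Output
_____no_output_____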
###Markdown
Almost zero! But coins are the easy case; it was the same when we studied confidence intervals. Let's explore other cases.

Example 1: Jury in Alabama

__In the early 1960s, in Talladega County, Alabama, a Black man named Robert Swain was sentenced to death for raping a white woman. He appealed the sentence, citing among other factors the all-white jury. At the time, only men aged 21 or older were allowed to serve on juries in Talladega County. In the county, 26% of eligible jurors were Black. On the final panel, there were only 8 Black men among the 100 selected for the jury panel in Swain's trial.__

Our question: **What is the probability (chance) of only 8 Black individuals being selected?**

To help with this example, the code below samples group counts from a population. It generates 10,000 samples ($n$) from a population of size `pop_size`, drawn uniformly at random. In addition, the code assumes that:
1. [0, $pop\_size * prop$) belong to one group.
1. [$pop\_size * prop$, $pop\_size$) belong to the other group.

That is, in a population of 10 (`pop_size`) people with proportion 0.2 (`prop`), the population looks like:
__[G1, G1, G2, G2, G2, G2, G2, G2, G2, G2]__.

The idea of the code is to answer: **when taking uniform samples of the population above, how many people of type G1 and type G2 are sampled?** To do this, we take 10,000 samples.
###Code
def sample_proportion(pop_size, prop, n=10000):
    '''
    Samples group counts from a population.
    Parameters
    ----------
    pop_size: int, population size
    prop: double, between 0 and 1
    n: int, number of samples
    '''
assert(prop >= 0)
assert(prop <= 1)
grupo = pop_size * prop
resultados = np.zeros(n)
for i in range(n):
sample = np.random.randint(0, pop_size, 100)
resultados[i] = np.sum(sample < grupo)
return resultados
###Output
_____no_output_____
###Markdown
Let's now see what 10,000 samples from the Talladega population look like. We will assume the county has a population of 100,000. The exact number does not matter much for this example, since we are looking at samples; we could adjust it to the real figure.

The plot below shows, on the x-axis, the number of Black people in each uniform sample (we take 10,000 samples). The y-axis shows how many samples had that count. Now answer: what is the chance of getting only 8 people?
###Code
proporcoes = sample_proportion(pop_size=100000, prop=0.26)
bins = np.linspace(1, 100, 100) + 0.5
plt.hist(proporcoes, bins=bins, edgecolor='k')
plt.xlim(0, 52)
plt.ylabel('Numero de Amostras de Tamanho 10k')
plt.xlabel('Número no Grupo')
plt.plot([8], [0], 'ro', ms=15)
despine()
###Output
_____no_output_____
###Markdown
We can use a 5\% chance as a reference; this is how such tests work. With 5\% probability, at least 19 Black people are selected. We are well below that, so the observed value is quite rare!
###Code
np.percentile(proporcoes, 5)
###Output
_____no_output_____
###Markdown
This example, like the next one, does not assume anything about the population itself, but it does assume a null hypothesis of uniform selection. That is: what is the probability of observing the value we saw under uniform selection?!

Idea of hypothesis tests
1. Given an observed value $t_{obs}$.
1. What is the chance of this value under a null hypothesis (model)?!

In the example above, $t_{obs}=8$.

Example 2 -- Another jury

In 2010, the American Civil Liberties Union (ACLU) of Northern California presented a report on jury selection in Alameda County, California. The report concluded that certain ethnic groups are underrepresented among jury panelists in Alameda County, and suggested some reforms of the process by which eligible jurors are assigned to panels. In this section, we will perform our own analysis of the data and examine some questions that arise as a result.

Here we have another jury example. In this case, we have several racial groups. How can we say something about the groups? We need a test statistic, so we will use the __total variation distance__:

$$TVD(p, q) = \frac{1}{2}\sum_{i=0}^{n-1} |p_i - q_i|$$

Here p and q are vectors of proportions; each vector must sum to 1. The larger the TVD, the larger the difference between the proportions in the two vectors! When p == q, TVD == 0.
###Code
def total_variation(p, q):
    '''
    Computes the total variation distance between two vectors, p and q.
    Parameters
    ----------
    p: probability vector of length n
    q: probability vector of length n
    '''
return np.sum(np.abs(p - q)) / 2
###Output
_____no_output_____
###Markdown
Our data. On the jury panel, we have the following proportions for each group.
###Code
idx = ['Asian', 'Black', 'Latino', 'White', 'Other']
df = pd.DataFrame(index=idx)
df['pop'] = [0.15, 0.18, 0.12, 0.54, 0.01]
df['sample'] = [0.26, 0.08, 0.08, 0.54, 0.04]
df
df.plot.bar()
plt.ylabel('Proporção')
plt.xlabel('Grupo')
despine()
###Output
_____no_output_____
###Markdown
Let's compare with a random sample! To do so, we sample each group in turn. Note the difference.
###Code
N = 1453
uma_amostra = []
for g in df.index:
p = df.loc[g]['pop']
s = sample_proportion(N, p, 1)[0]
uma_amostra.append(s/100)
df['1random'] = uma_amostra
df.plot.bar()
plt.ylabel('Proporção')
plt.xlabel('Grupo')
despine()
###Output
_____no_output_____
###Markdown
Now compare the TVD on the observed data and on the random sample!
###Code
total_variation(df['1random'], df['pop'])
total_variation(df['sample'], df['pop'])
###Output
_____no_output_____
###Markdown
To run the test, we draw 10,000 samples and compare the TVDs! The code below stores the result of each sample in one row of a matrix.
###Code
N = 1453
A = np.zeros(shape=(10000, len(df.index)))
for i, g in enumerate(df.index):
p = df.loc[g]['pop']
A[:, i] = sample_proportion(N, p) / 100
A
###Output
_____no_output_____
###Markdown
Now the histogram of the TVD. The red dot shows the value we observed; the bars show the different samples. Again, such a value is quite rare. We reject the null hypothesis and conclude that the panel was not selected uniformly!
###Code
all_distances = []
for i in range(A.shape[0]):
all_distances.append(total_variation(df['pop'], A[i]))
plt.hist(all_distances, bins=30, edgecolor='k')
plt.ylabel('Numero de Amostras de Tamanho 10k')
plt.xlabel('Total Variation Distance')
plt.plot([0.14], [0], 'ro', ms=15)
despine()
np.percentile(all_distances, 97.5)
###Output
_____no_output_____
###Markdown
Case 3. Real data

Now, finally, let's work with real data. In particular, we will compare salaries of two NBA teams. The dataframe is shown below. We will focus on the teams:
1. Houston Rockets
1. Cleveland Cavaliers

Unlike the previous example, simulating here is a bit more complicated: it is harder to assume a population from which to generate samples. Therefore we will use __permutation tests__. First, let's explore the data (below).
###Code
df = pd.read_csv('https://media.githubusercontent.com/media/icd-ufmg/material/master/aulas/11-Hipoteses/nba_salaries.csv')
df.head()
df = df[df['TEAM'].isin(['Houston Rockets', 'Cleveland Cavaliers'])]
df.head()
###Output
_____no_output_____
###Markdown
Let's take the mean salary of each team. Note that the difference is roughly 3 million dollars; this statistic will be our $t_{obs}$. Before that, let's create a filter (a boolean vector) for Houston.
###Code
filtro = df['TEAM'] == 'Houston Rockets'
###Output
_____no_output_____
###Markdown
Now Houston's mean salary.
###Code
filtro = df['TEAM'] == 'Houston Rockets'
df[filtro]['SALARY'].mean()
###Output
_____no_output_____
###Markdown
And for the non-Houston rows, that is, Cleveland.
###Code
df[~filtro]['SALARY'].mean()
###Output
_____no_output_____
###Markdown
Finally, our observed statistic.
###Code
t_obs = df[~filtro]['SALARY'].mean() - df[filtro]['SALARY'].mean()
t_obs
###Output
_____no_output_____
###Markdown
Permutation test

Although we are not comparing proportions as in the previous examples, we can still play with simulation. Let's assume the following null model: __the probability of a player choosing (or being hired by) a team is the same for both teams__. In other words, the Player x Team assignment is uniformly random! How do we simulate this? We simply shuffle the filter above.

Assuming only 5 players, the real data are:
1. Names: [J1, J2, J3, J4, J5]
1. Salaries: [S1, S2, S3, S4, S5]
1. Teams: [T1, T1, T2, T2, T2]

We keep the names and salaries fixed. After shuffling the teams, we get:
1. Names: [J1, J2, J3, J4, J5]
1. Salaries: [S1, S2, S3, S4, S5]
1. Teams: [T1, T2, T2, T2, T1]

This is our null hypothesis! Let's run one permutation!
###Code
np.random.shuffle(filtro.values)
diff = df[~filtro]['SALARY'].mean() - df[filtro]['SALARY'].mean()
diff
###Output
_____no_output_____
###Markdown
Note above that we get a salary difference different from the one we observed. Let's now generate 10,000 permutations!
###Code
N = 10000
diferencas = np.zeros(N)
for i in range(N):
np.random.shuffle(filtro.values)
diff = df[~filtro]['SALARY'].mean() - df[filtro]['SALARY'].mean()
diferencas[i] = diff
###Output
_____no_output_____
###Markdown
The figure below shows the results of the simulated world. Note that in about 16% of the cases the salary difference is more extreme than $t_{obs}$. That is a fairly high chance! In the previous examples we had much smaller values. In other words, **we do not reject the null hypothesis (the simulated model)**: the salary difference can be explained by chance.
###Code
plt.hist(diferencas, bins=50, edgecolor='k')
plt.xlabel('Diferença na Permutação')
plt.ylabel('Pr(diff)')
plt.vlines(t_obs, 0, 0.14, color='red')
plt.text(t_obs+1, 0.10, '$16\%$ dos casos')
despine()
plt.show()
###Output
_____no_output_____
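###Markdown
As a quick check of the 16% figure quoted above, we can compute the empirical (one-sided) p-value directly from the permutation results; here "more extreme" is taken as a difference at least as large as the observed one.
###Code
# Fraction of permutations with a difference at least as extreme as t_obs
p_value = np.mean(diferencas >= t_obs)
p_value
###Output
_____no_output_____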
###Markdown
Animation
###Code
from IPython.display import HTML
from matplotlib import animation
def update_hist(num, data):
plt.cla()
plt.hist(data[0:100 * (num+1)], bins=50,
density=True, edgecolor='k')
plt.xlabel('Diferença na Permutação')
plt.ylabel('Pr(diff)')
despine()
fig = plt.figure()
ani = animation.FuncAnimation(fig, update_hist, 30, fargs=(diferencas, ))
HTML(ani.to_html5_video())
###Output
_____no_output_____ |
Codes/Hierachical Clustering analysis-Most Diabetic & Obese.ipynb | ###Markdown
Hierarchical Clustering Analysis of Counties with High Prevalence of Diabetes and High Prevalence of Obesity
###Code
import seaborn as sns; sns.set(color_codes=True)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
url='https://raw.githubusercontent.com/cathyxinxyz/Capstone_Project_1/master/Datasets/combined.csv'
df=pd.read_csv(url,index_col='FIPS',encoding="ISO-8859-1")
#read in variable information and build variable information dictionary with variable names as key
url='https://raw.githubusercontent.com/cathyxinxyz/Capstone_Project_1/master/Datasets/Food_atlas/Var_name_info.csv'
var_info_df=pd.read_csv(url,encoding="ISO-8859-1", index_col='variable')
df.info()
from sklearn import preprocessing
from pandas.api.types import is_string_dtype
from pandas.api.types import is_numeric_dtype
#remove categorical variables
df=df[df.columns.difference(['State', 'County', 'RUCC_2013'])]
#finds columns with more than 30 missing values
number_null_values_percol=df.isnull().sum(axis=0)
cols_with_over_30_null_values=number_null_values_percol[number_null_values_percol>30]
print (cols_with_over_30_null_values.index)
#drop these columns first
df=df.drop(list(cols_with_over_30_null_values.index), axis=1)
df=df.dropna()
df.shape
###Output
_____no_output_____
###Markdown
Single out the counties in the top sextile (roughly the top 17%) of diabetes prevalence and the top sextile of obesity prevalence, matching the six quantile bins used below
###Code
df['db_level']=pd.qcut(df['prevalence of diabetes'],6, labels=range(6))
df['ob_level']=pd.qcut(df['prevalence of obesity'],6, labels=range(6))
#preprocess the subset of data
from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler()
df_subset=df[(df['db_level']==5) & (df['ob_level']==5)].drop(['db_level','ob_level'],axis=1)
X_minmax = min_max_scaler.fit_transform(df_subset.values)
normed_df=pd.DataFrame(X_minmax, index=df_subset.index, columns=df_subset.columns)
###Output
_____no_output_____
###Markdown
Principal Component Analysis
###Code
#PCA using Sklearn
from sklearn.decomposition import PCA
explained_var=list()
for d in range(1,len(normed_df.columns)):
pca = PCA(n_components=d)
pca.fit(normed_df)
explained_var.append(sum(pca.explained_variance_ratio_))
plt.plot(range(1,len(normed_df.columns)), explained_var)
plt.axhline(y=0.95, color='r')
plt.show()
#95% of variance can be explained by 22 components out of total 52 variables
pca = PCA(n_components=15)
pca.fit(normed_df)
data_transformed=pca.transform(normed_df)
df_transformed=pd.DataFrame(data_transformed, index=normed_df.index)
X=df_transformed
from scipy.cluster.hierarchy import dendrogram, fcluster, leaves_list, set_link_color_palette,linkage
#test different ways of linkage computation and choose one that gives most tight and reasonable clusters
distance_way= ['ward', 'single', 'average', 'weighted', 'centroid', 'median']
for method in ['average', 'centroid', 'median', 'complete']:
plt.figure(figsize=(40,10))
Z = linkage(X, method)
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=8., # font size for the x axis labels
)
plt.title(method)
plt.show()
#choose complete method
Z = linkage(X, 'complete')
from collections import Counter
#cut the dendrogram where reasonable clusters are identified
labels = fcluster(Z,3.2,'distance')
print (Counter(labels))
group_colors=['gold','red','purple']
set_link_color_palette(group_colors)
plt.figure(figsize=(80,20))
D = dendrogram(Z=Z, color_threshold=3.2, leaf_font_size=12, leaf_rotation=45)
plt.show()
df_subset['group']=labels
#write function to draw stacked percentage plot with standard deviation: bar_race for race variables and bar_age for age variables
import numpy as np
import matplotlib.pyplot as plt
def Bar_race(groups_to_plot, df):
N=len(groups_to_plot)
whitemeans=np.empty(N)
whitestd=np.empty(N)
blackmeans=np.empty(N)
blackstd=np.empty(N)
asianmeans=np.empty(N)
asianstd=np.empty(N)
hispmeans=np.empty(N)
hispstd=np.empty(N)
Ame_Ind_Alkmeans=np.empty(N)
Ame_Ind_Alkstd=np.empty(N)
Hawa_Islandmeans=np.empty(N)
Hawa_Islandstd=np.empty(N)
groups=groups_to_plot
for n, group in enumerate(groups):
df_group=df[df['group']==group]
whitemeans[n]=np.mean(df_group['var59'])
whitestd[n]=np.std(df_group['var59'])
blackmeans[n]=np.mean(df_group['var60'])
blackstd[n]=np.std(df_group['var60'])
asianmeans[n]=np.mean(df_group['var62'])
asianstd[n]=np.std(df_group['var62'])
hispmeans[n]=np.mean(df_group['var61'])
hispstd[n]=np.std(df_group['var61'])
Ame_Ind_Alkmeans[n]=np.mean(df_group['var63'])
Ame_Ind_Alkstd[n]=np.std(df_group['var63'])
Hawa_Islandmeans[n]=np.mean(df_group['var64'])
Hawa_Islandstd[n]=np.std(df_group['var64'])
width = 0.35 # the width of the bars: can also be len(x) sequence
ind = np.arange(N)
p1=plt.bar(ind, whitemeans, width, color='r', yerr=whitestd)
p2=plt.bar(ind, blackmeans, width, color='m', bottom=whitemeans, yerr=blackstd)
p3=plt.bar(ind, hispmeans, width, color='g', bottom=sum([whitemeans,blackmeans]), yerr=hispstd)
p4=plt.bar(ind, asianmeans, width, color='y', bottom=sum([whitemeans, blackmeans, hispmeans]), yerr=asianstd)
p5=plt.bar(ind, Ame_Ind_Alkmeans, width, color='b', bottom=sum([whitemeans, blackmeans, hispmeans, asianmeans]), yerr=Ame_Ind_Alkstd)
p6=plt.bar(ind, Hawa_Islandmeans, width, color='purple',
bottom=np.sum([whitemeans,blackmeans,hispmeans,asianmeans,Ame_Ind_Alkmeans]), yerr=Hawa_Islandstd)
plt.legend((p1[0], p2[0],p3[0], p4[0],p5[0], p6[0]), ('White', 'Black',
'Hispanic', 'Asian','American Indian or Alaska Native',
'Hawaiian or Pacific Islander'), bbox_to_anchor=(1.04,1))
plt.xticks(range(N), ['cluster_'+str(g) for g in groups_to_plot], rotation=45)
plt.ylim(0, 110)
plt.show()
def Bar_age(groups_to_plot, df):
N=len(groups_to_plot)
under_18_means=np.empty(N)
under_18_std=np.empty(N)
fr18to65_means=np.empty(N)
fr18to65_std=np.empty(N)
above_65_means=np.empty(N)
above_65_std=np.empty(N)
groups=groups_to_plot
for n, group in enumerate(groups):
        df_group=df[df['group']==group]  # use the dataframe passed in, not the global df_subset
df_group['adult']=100-df_group['var65']-df_group['var66']
under_18_means[n]=np.mean(df_group['var66'])
under_18_std[n]=np.std(df_group['var66'])
fr18to65_means[n]=np.mean(df_group['adult'])
fr18to65_std[n]=np.std(df_group['adult'])
above_65_means[n]=np.mean(df_group['var65'])
above_65_std[n]=np.std(df_group['var65'])
width = 0.35 # the width of the bars: can also be len(x) sequence
ind = np.arange(N)
p1=plt.bar(ind, under_18_means, width, color='y', yerr=under_18_std)
#p2=plt.bar(ind, fr18to65_means, width, color='r', bottom=under_18_means, yerr=fr18to65_std)
p2=plt.bar(ind, above_65_means, width, color='m', bottom=sum([under_18_means]), yerr=above_65_std)
plt.legend((p1[0], p2[0]), ('<18 years', '>=65 years'), bbox_to_anchor=(1.04,1))
plt.xticks(range(N), ['group'+str(i) for i in range(1,N+1)], rotation=45)
plt.ylim(0, 50)
plt.show()
#write function to draw bar chart of features by clusters
def Bar_features(groups, df, features, group_colors, nrow, ncol, size):
N=len(groups)
print (N)
for i,var in enumerate(features):
plt.subplot(nrow, ncol, i+1)
means=np.empty(N)
stds=np.empty(N)
max_values=list()
for n, group in enumerate(groups):
df_group=df[df['group']==group]
means[n]=np.mean(df_group[var])
stds[n]=np.std(df_group[var])
max_values.append(np.mean(df_group[var])+np.std(df_group[var]))
width = 1 # the width of the bars: can also be len(x) sequence
ind = np.arange(N)
plt.bar(ind, means, width, yerr=stds, color=group_colors)
if var in var_info_df.index:
plt.ylabel(var_info_df.loc[var]['meaning'], size=size)
else:
plt.ylabel(var, size=size)
plt.xticks(range(N), ['cluster_'+str(g) for g in groups], rotation=45, size=size)
plt.yticks(size=size)
plt.ylim(0, max(max_values))
plt.show()
Bar_race([1,2,3], df_subset)
Bar_age([1,2,3], df_subset)
plt.figure(facecolor="w", figsize=(30,30))
features=['prevalence of physical inactivity']
features.extend(['var'+str(n) for n in [20,68,33]])
features.append('frac_uninsured')
Bar_features([1,2,3], df_subset, features, group_colors, 2,3, 30)
###Output
3
###Markdown
Construct the dataframe to record the clusters that each county belongs to
###Code
df_with_groups=df[[]]
df_with_groups['group']=df_subset['group']
df_with_groups.shape
df_with_groups
#assign the counties that are not among the most diabetic and obese counties to cluster 0
df_with_groups=df_with_groups.fillna(0)
df_with_groups['clusters']=df_with_groups['group'].apply((lambda x:'cluster_'+str(int(x))))
df_with_groups[['clusters']].to_csv('C:/Users/cathy/Capstone_project_1/Datasets/cluster_groups_diabetic_obese.csv')
###Output
_____no_output_____ |
01_introduction/0_1pytorch_basics.ipynb | ###Markdown
Load libraries
###Code
import torch
import numpy as np
import pandas as pd
a = torch.tensor([7,5,4,8,5], dtype=torch.int32)
a.dtype
a.type()
a = torch.tensor([7,5,4,8,5], dtype=torch.int32)
print(a.dtype)
b = torch.FloatTensor([0.1, 0.3, 5.9])
b.type()
c = b.type(torch.FloatTensor)
c.dtype
a.size()
a.ndimension()
a_col = a.view(5, 1)
a_col
a_col = a.view(-1, 1)
a_col
a_col.ndimension()
np_array = np.array([0.1, 2.5, 3.9])
torch_tensor = torch.from_numpy(np_array)
back_to_np = torch_tensor.numpy()
pandas_series = pd.Series([0.1, 2.5, 3.9])
pandas_to_torch = torch.from_numpy(pandas_series.values)
pandas_to_torch
my_tensor = torch.tensor([0,1,2,3])
torch_to_list = my_tensor.tolist()
torch_to_list
c = torch.tensor([0,1,2,3])
print(c)
c[0] = 100
print(c)
print(c[:-2])
u = torch.tensor([2,10])
v = torch.tensor([5,33])
z = u + v
z
z = 2 * u
z
z = u * v
z
# dot product
u = torch.tensor([2,10])
v = torch.tensor([5,33])
z = torch.dot(u, v)
z
u = u + 1
u
a = torch.tensor([2,10], dtype=torch.float32)
a.mean()
a.max()
torch.linspace(-2, 2, steps=9)
a = torch.tensor([[11,12,13], [21,22,23], [31,32,33]])
a.ndimension()
a.shape
#number of elements
a.numel()
a[1][2]
a[0, 0:2]
a = torch.tensor([[0,1,1], [1,0,1]])
b = torch.tensor([[1,1], [1,1], [1,1]])
c = torch.mm(a, b)
print(c)
x = torch.tensor(2, dtype=torch.float32, requires_grad=True)
y = x ** 2
y
# y = torch.tensor(x ** 2, dtype=torch.float32, requires_grad=True)
# to calc the deriv of y
y.backward()
# to eval the derivative of y at x = 2
x.grad
y.grad
x = torch.tensor(2, dtype=torch.float32, requires_grad=True)
z = x ** 2 + 2 * x + 1
z.backward()
x.grad
# Partial derivatives
#f(u=1, v=2) = uv + u^2
u = torch.tensor(1, dtype=torch.float32, requires_grad=True)
v = torch.tensor(2, dtype=torch.float32, requires_grad=True)
f = u * v + u**2
f.backward()
print(u.grad, v.grad)
###Output
_____no_output_____ |
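###Markdown
Sanity check: for $f(u, v) = uv + u^2$ at $u=1, v=2$, the partial derivatives are $\partial f / \partial u = v + 2u = 4$ and $\partial f / \partial v = u = 1$, which is what `u.grad` and `v.grad` print above.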
qradar-notebooks/UBA-ML.ipynb | ###Markdown
ML models for a large feature set in one AQL query

The goal of this notebook is to build some models from 70+ features extracted from one big QRadar query. Each row is an entry for a user (plus a time slice).

The features are broken into a few different models; see the large AQL query below for all the features. It would also be possible to build a separate model for each individual feature, but that could be very noisy or generate a lot of alerts. Instead, I would propose a model for related features, like:
- General: general QRadar features like number of QIDs, events, log sources, devices, context
- Time: time-based features like min/max/avg, events exactly on the start of the hour and the start of each minute
- Network: things related to network addresses like IP, local/remote, MAC addresses
- Port: traffic in the ranges 0 - 1024 - 49151 - max, plus unique ports in each range
- Rules: info around QRadar and UBA building blocks (BBs), like unique rules, UBA risk
- 'All Columns': all 70+ features at once

Ideas for other models:
- Proxy: things like unique URLs, http/https traffic, URL categories, source and dest IPs
- Windows: object name, types, domain, eventID, process name, etc.
- UNIX model
- Cloud: AWS, Azure, Office 365. Things like how many EC2 instances, how many files, cloud object storage, how many S3 buckets accessed
- Authentication model: normal auth times, amount of auth, auth device (VPN/domain controller etc.), auth source IPs

The first model is for an entire population; we then look at some sample users to check whether they are an inlier or outlier. This checks how close or similar each person is compared to everyone. For example, if someone's 'Port' features are very different from the rest of the peers' 'Port' features, then they would be marked as an outlier. This same model could be used to look at peer groups instead of the whole population, so we could make a model for each department, city, job title, etc. Example: make a model for the city 'Sandy Springs', then for each person in Sandy Springs see if they are an outlier for each model (Port, Proxy, Network, etc.).

The next model is a person versus their own historical data: take a week of data, and check the latest point to see if it is an inlier or outlier. This would determine whether their behavior changed for a given model versus their own past. For example, we could make a model for Proxy features and see if my Proxy features today or this hour are different from my past ones.

Advantages:
- one big query for all data at once is much more efficient. See: https://github.ibm.com/infosec/uba/issues/4203issuecomment-12407381
- have all features at once, which can then be used for multiple models and views
- **view:** vs self, vs whole population, vs peers (department, city, job title)
- **model:** can use all features, subsets of features (port/proxy/IP etc.), or one by one
###Code
!pip install git+https://github.com/IBM/ibm-security-notebooks.git
# Default settings, constants
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', -1)
pd.set_option('mode.chained_assignment', None)
FIGSIZE=(15,10)
LABEL = 'label'
FAMILY = 'family'
RANDOM=0
SPLIT=0.2
SCALE_DATA = False
matplotlib.rcParams['figure.figsize'] = FIGSIZE
USERS = [
'admin',
'root',
'DeptMgrAll-5057',
'testuser-31176',
'svc_emon',
'configservices',
'MATTO'
]
# Prefixes for ariel results
PREFIX = [
'General',
'Time',
'Network',
'Port',
'Rules',
'All Columns'
]
from pyclient.qradar import QRadar, AQL
qi = QRadar(console='YOUR-CONSOLE-IP-ADDRESS', username='admin', token='YOUR-SERVICE-TOKEN')
df = pd.DataFrame.from_records(qi.search(AQL.proxy_model))
df.head(10)
df.shape
df.describe()
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.preprocessing import MinMaxScaler, StandardScaler, RobustScaler
import time
def mean(l):
return float(sum(l))/float(len(l))
def test_model(prefix='All Columns'):
print('Model for %s' % prefix)
start = time.time()
if prefix == 'All Columns':
data = df
else:
cols = ['user', 'timeslice']
cols.extend([col for col in df if col.startswith(prefix.lower()+'_')])
data = df[cols]
# Scale data
if SCALE_DATA:
numeric_col = data.columns[data.dtypes.apply(lambda c: np.issubdtype(c, np.number))]
scaler = RobustScaler()
scaled_values = scaler.fit_transform(data[numeric_col])
data[numeric_col] = scaled_values
features=data.drop('user',axis=1).drop('timeslice',axis=1)
print('%s feature columns' % len(list(features)))
isof = IsolationForest(behaviour='new', contamination='auto', n_jobs=-1)
try:
isof.fit(features)
except:
return # skip the rest
for username in USERS:
sample = data[data['user'] == username].drop('user',axis=1).drop('timeslice',axis=1)
if sample.empty:
continue
isof_pred = isof.predict(sample)
isof_dec = isof.decision_function(sample) # > 0 normal, < 0 outlier
isof_score = isof.score_samples(sample) # lower more abnormal
pred[username].append(mean(isof_pred))
dec[username].append(mean(isof_dec))
score[username].append(mean(isof_score))
print('took %.2f seconds' % (time.time() - start))
print('')
###Output
_____no_output_____
###Markdown
Model each user vs the entire population

This uses the whole population to make a model, and looks for outliers vs the general population per model.

You could use the same idea and code, but instead of the whole population, key off some other attribute like job title, city, or department. So build the same kind of model, and instead of comparing against everyone, compare against the same city, e.g. 'Fredericton', and so on for each value.

I try a big general model using all the features, and then some models using a subset of features: just the network ones, just the port ones, just the proxy features, etc.
###Code
from collections import defaultdict
pred = defaultdict(list)
dec = defaultdict(list)
score = defaultdict(list)
for prefix in PREFIX:
test_model(prefix)
def pretty_print(d, label):
print(label)
for key in d:
print('%s: %s' % (key, [ '%.2f' % i for i in d[key] ]))
print('')
print(', '.join(PREFIX))
print('')
pretty_print(pred, "Prediction (1 inlier, -1 outlier)")
pretty_print(dec, "Decison ( > 0 inlier, < 0 outlier)")
pretty_print(score, "Score (lower more abnormal)")
###Output
_____no_output_____
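###Markdown
The same idea extends to peer groups. The sketch below assumes a hypothetical `department` column has been joined onto `df` from some directory source (no such column exists in this query's output) and reuses the IsolationForest setup from `test_model` above.
###Code
# Sketch: fit one model per peer group instead of the whole population.
# The 'department' column is hypothetical; adjust to whatever peer attribute you actually have.
def test_peer_group_model(peer_col, peer_value, prefix='All Columns'):
    peers = df[df[peer_col] == peer_value]
    if prefix == 'All Columns':
        data = peers
    else:
        cols = ['user', 'timeslice'] + [c for c in peers if c.startswith(prefix.lower() + '_')]
        data = peers[cols]
    features = data.drop('user', axis=1).drop('timeslice', axis=1)
    isof = IsolationForest(behaviour='new', contamination='auto', n_jobs=-1)
    isof.fit(features)
    # Score every member of the peer group against their peers (1 inlier, -1 outlier)
    for username in data['user'].unique():
        sample = data[data['user'] == username].drop('user', axis=1).drop('timeslice', axis=1)
        print('%s: %.2f' % (username, mean(isof.predict(sample))))

# Hypothetical usage, if a 'department' column were available:
# test_peer_group_model('department', 'Finance', prefix='Port')
###Output
_____no_output_____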
###Markdown
Model each user vs themselves

This builds a model per user, to see whether a point is an outlier vs all of their old data points.

To give more weight to newer points we could, for example, add the more recent ones twice.
###Code
def test_user_model(username, prefix='All Columns'):
if prefix == 'All Columns':
data = df
else:
cols = ['user', 'timeslice']
cols.extend([col for col in df if col.startswith(prefix.lower()+'_')])
data = df[cols]
# Scale data
if SCALE_DATA:
numeric_col = data.columns[data.dtypes.apply(lambda c: np.issubdtype(c, np.number))]
scaler = StandardScaler()
scaled_values = scaler.fit_transform(data[numeric_col])
data[numeric_col] = scaled_values
data = data[data['user'] == username].drop('user',axis=1).drop('timeslice',axis=1)
if data.empty:
return # skip the rest
    # Hold out the first row as the sample to score...
    sample = pd.DataFrame(data.iloc[0]).transpose()
    # ...and fit the model on the remaining historical rows
    features = data.drop(data.head(1).index)
isof = IsolationForest(behaviour='new', contamination='auto', n_jobs=-1)
isof.fit(features)
isof_pred = isof.predict(sample)
isof_dec = isof.decision_function(sample) # > 0 normal, < 0 outlier
isof_score = isof.score_samples(sample) # lower more abnormal
print('%s, %s: %.2f, %.2f, %.2f' % (username, prefix, mean(isof_pred), mean(isof_dec), mean(isof_score)))
print("Prediction (1 inlier, -1 outlier), Decison ( > 0 inlier, < 0 outlier), Score (lower more abnormal)\n")
for username in USERS:
for prefix in PREFIX:
test_user_model(username,prefix=prefix)
print('')
###Output
_____no_output_____
###Markdown
PCA on models

Look at what the data looks like when condensed down to just 2 dimensions, computed over the whole population.

On the scatter plot, grey is all points, red is an outlier and green is an inlier.
###Code
from sklearn.decomposition import PCA
X = 'PC 1'
Y = 'PC 2'
def draw_pca(prefix):
print('Calculating PCA for %s' % prefix)
if prefix == 'All Columns':
data = df
else:
cols = ['user', 'timeslice']
cols.extend([col for col in df if col.startswith(prefix.lower()+'_')])
data = df[cols]
numeric_col = data.columns[data.dtypes.apply(lambda c: np.issubdtype(c, np.number))]
# Scale features
#scaler = RobustScaler() # use robust scaler because we have outliers and big range
#scaled_values = scaler.fit_transform(data[numeric_col])
#data[numeric_col] = scaled_values
pca = PCA(n_components=2)
try:
components = pca.fit_transform(data[numeric_col])
except:
return # skip the rest
components_df = pd.DataFrame(components, columns = [X, Y])
data[X] = components_df[X]
data[Y] = components_df[Y]
inlier_users = []
outlier_users = []
for user in USERS:
try:
val = pred[user][PREFIX.index(prefix)]
except:
continue
if val > 0:
inlier_users.append(user)
else:
outlier_users.append(user)
inlier = data[data['user'].isin(inlier_users)]
outlier = data[data['user'].isin(outlier_users)]
ax1 = data.plot(kind='scatter', x=X, y=Y, color='grey', s=1, title='PCA for %s' % prefix)
if not inlier.empty:
inlier.plot(kind='scatter', x=X, y=Y, color='green', ax=ax1, s=15)
if not outlier.empty:
outlier.plot(kind='scatter', x=X, y=Y, color='red', ax=ax1, s=15)
for prefix in PREFIX:
draw_pca(prefix)
def test_user_pca(username, prefix='All Columns'):
if prefix == 'All Columns':
data = df
else:
cols = ['user', 'timeslice']
cols.extend([col for col in df if col.startswith(prefix.lower()+'_')])
data = df[cols]
data = data[data['user'] == username]
numeric_col = data.columns[data.dtypes.apply(lambda c: np.issubdtype(c, np.number))]
# Scale features
#scaler = RobustScaler()
#scaled_values = scaler.fit_transform(data[numeric_col])
#data[numeric_col] = scaled_values
pca = PCA(n_components=2)
try:
components = pca.fit_transform(data[numeric_col])
except:
return # skip the rest
components_df = pd.DataFrame(components, columns = [X, Y])
data[X] = components_df[X]
data[Y] = components_df[Y]
print(data.shape)
data.plot(kind='scatter', x=X, y=Y, color='blue', s=10, title='PCA for %s for %s' % (username, prefix))
#plt.scatter(data[X], data[Y])
for username in USERS:
test_user_pca(username, prefix="General")
###Output
_____no_output_____ |
5-RL-Quadcopter/Quadcopter_Project-Solution.ipynb | ###Markdown
Project: Train a Quadcopter How to Fly

Design an agent to fly a quadcopter, and then train it using a reinforcement learning algorithm of your choice! Try to apply the techniques you have learnt, but also feel free to come up with innovative ideas and test them.

Instructions

Take a look at the files in the directory to better understand the structure of the project.
- `task.py`: Define your task (environment) in this file.
- `agents/`: Folder containing reinforcement learning agents.
    - `policy_search.py`: A sample agent has been provided here.
    - `agent.py`: Develop your agent here.
- `physics_sim.py`: This file contains the simulator for the quadcopter. **DO NOT MODIFY THIS FILE**.

For this project, you will define your own task in `task.py`. Although we have provided an example task to get you started, you are encouraged to change it. Later in this notebook, you will learn more about how to amend this file.

You will also design a reinforcement learning agent in `agent.py` to complete your chosen task. You are welcome to create any additional files to help you to organize your code. For instance, you may find it useful to define a `model.py` file defining any needed neural network architectures.

Controlling the Quadcopter

We provide a sample agent in the code cell below to show you how to use the sim to control the quadcopter. This agent is even simpler than the sample agent that you'll examine (in `agents/policy_search.py`) later in this notebook!

The agent controls the quadcopter by setting the revolutions per second on each of its four rotors. The provided agent in the `Basic_Agent` class below always selects a random action for each of the four rotors. These four speeds are returned by the `act` method as a list of four floating-point numbers. For this project, the agent that you will implement in `agents/agent.py` will have a far more intelligent method for selecting actions!
###Code
import random
class Basic_Agent():
def __init__(self, task):
self.task = task
def act(self):
new_thrust = random.gauss(450., 25.)
return [new_thrust + random.gauss(0., 1.) for x in range(4)]
###Output
_____no_output_____
###Markdown
Run the code cell below to have the agent select actions to control the quadcopter. Feel free to change the provided values of `runtime`, `init_pose`, `init_velocities`, and `init_angle_velocities` below to change the starting conditions of the quadcopter.The `labels` list below annotates statistics that are saved while running the simulation. All of this information is saved in a text file `data.txt` and stored in the dictionary `results`.
###Code
%load_ext autoreload
%autoreload 2
import csv
import numpy as np
from task import Task
# Modify the values below to give the quadcopter a different starting position.
runtime = 5. # time limit of the episode
init_pose = np.array([0., 0., 10., 0., 0., 0.]) # initial pose
init_velocities = np.array([0., 0., 0.]) # initial velocities
init_angle_velocities = np.array([0., 0., 0.]) # initial angle velocities
file_output = 'data.txt' # file name for saved results
# Setup
task = Task(init_pose, init_velocities, init_angle_velocities, runtime)
agent = Basic_Agent(task)
done = False
labels = ['time', 'x', 'y', 'z', 'phi', 'theta', 'psi', 'x_velocity',
'y_velocity', 'z_velocity', 'phi_velocity', 'theta_velocity',
'psi_velocity', 'rotor_speed1', 'rotor_speed2', 'rotor_speed3', 'rotor_speed4']
results = {x : [] for x in labels}
# Run the simulation, and save the results.
with open(file_output, 'w') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(labels)
while True:
rotor_speeds = agent.act()
_, _, done = task.step(rotor_speeds)
to_write = [task.sim.time] + list(task.sim.pose) + list(task.sim.v) + list(task.sim.angular_v) + list(rotor_speeds)
for ii in range(len(labels)):
results[labels[ii]].append(to_write[ii])
writer.writerow(to_write)
if done:
break
###Output
_____no_output_____
###Markdown
Run the code cell below to visualize how the position of the quadcopter evolved during the simulation.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(results['time'], results['x'], label='x')
plt.plot(results['time'], results['y'], label='y')
plt.plot(results['time'], results['z'], label='z')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
The next code cell visualizes the velocity of the quadcopter.
###Code
plt.plot(results['time'], results['x_velocity'], label='x_hat')
plt.plot(results['time'], results['y_velocity'], label='y_hat')
plt.plot(results['time'], results['z_velocity'], label='z_hat')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
Next, you can plot the Euler angles (the rotation of the quadcopter over the $x$-, $y$-, and $z$-axes),
###Code
plt.plot(results['time'], results['phi'], label='phi')
plt.plot(results['time'], results['theta'], label='theta')
plt.plot(results['time'], results['psi'], label='psi')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
before plotting the velocities (in radians per second) corresponding to each of the Euler angles.
###Code
plt.plot(results['time'], results['phi_velocity'], label='phi_velocity')
plt.plot(results['time'], results['theta_velocity'], label='theta_velocity')
plt.plot(results['time'], results['psi_velocity'], label='psi_velocity')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
Finally, you can use the code cell below to print the agent's choice of actions.
###Code
plt.plot(results['time'], results['rotor_speed1'], label='Rotor 1 revolutions / second')
plt.plot(results['time'], results['rotor_speed2'], label='Rotor 2 revolutions / second')
plt.plot(results['time'], results['rotor_speed3'], label='Rotor 3 revolutions / second')
plt.plot(results['time'], results['rotor_speed4'], label='Rotor 4 revolutions / second')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
When specifying a task, you will derive the environment state from the simulator. Run the code cell below to print the values of the following variables at the end of the simulation:
- `task.sim.pose` (the position of the quadcopter in ($x,y,z$) dimensions and the Euler angles),
- `task.sim.v` (the velocity of the quadcopter in ($x,y,z$) dimensions), and
- `task.sim.angular_v` (radians/second for each of the three Euler angles).
###Code
# the pose, velocity, and angular velocity of the quadcopter at the end of the episode
print(task.sim.pose)
print(task.sim.v)
print(task.sim.angular_v)
###Output
_____no_output_____
###Markdown
In the sample task in `task.py`, we use the 6-dimensional pose of the quadcopter to construct the state of the environment at each timestep. However, when amending the task for your purposes, you are welcome to expand the size of the state vector by including the velocity information. You can use any combination of the pose, velocity, and angular velocity - feel free to tinker here, and construct the state to suit your task.

The Task

A sample task has been provided for you in `task.py`. Open this file in a new window now. The `__init__()` method is used to initialize several variables that are needed to specify the task.
- The simulator is initialized as an instance of the `PhysicsSim` class (from `physics_sim.py`).
- Inspired by the methodology in the original DDPG paper, we make use of action repeats. For each timestep of the agent, we step the simulation `action_repeats` timesteps. If you are not familiar with action repeats, please read the **Results** section in [the DDPG paper](https://arxiv.org/abs/1509.02971).
- We set the number of elements in the state vector. For the sample task, we only work with the 6-dimensional pose information. To set the size of the state (`state_size`), we must take action repeats into account.
- The environment will always have a 4-dimensional action space, with one entry for each rotor (`action_size=4`). You can set the minimum (`action_low`) and maximum (`action_high`) values of each entry here.
- The sample task in this provided file is for the agent to reach a target position. We specify that target position as a variable.

The `reset()` method resets the simulator. The agent should call this method every time the episode ends. You can see an example of this in the code cell below.

The `step()` method is perhaps the most important. It accepts the agent's choice of action `rotor_speeds`, which is used to prepare the next state to pass on to the agent. Then, the reward is computed from `get_reward()`. The episode is considered done if the time limit has been exceeded, or the quadcopter has travelled outside of the bounds of the simulation.

In the next section, you will learn how to test the performance of an agent on this task.

The Agent

The sample agent given in `agents/policy_search.py` uses a very simplistic linear policy to directly compute the action vector as a dot product of the state vector and a matrix of weights. Then, it randomly perturbs the parameters by adding some Gaussian noise, to produce a different policy. Based on the average reward obtained in each episode (`score`), it keeps track of the best set of parameters found so far, how the score is changing, and accordingly tweaks a scaling factor to widen or tighten the noise.

Run the code cell below to see how the agent performs on the sample task.
###Code
import sys
import pandas as pd
from agents.policy_search import PolicySearch_Agent
from task import Task
num_episodes = 1000
target_pos = np.array([0., 0., 10.])
task = Task(target_pos=target_pos)
agent = PolicySearch_Agent(task)
for i_episode in range(1, num_episodes+1):
state = agent.reset_episode() # start a new episode
while True:
action = agent.act(state)
next_state, reward, done = task.step(action)
agent.step(reward, done)
state = next_state
if done:
print("\rEpisode = {:4d}, score = {:7.3f} (best = {:7.3f}), noise_scale = {}".format(
i_episode, agent.score, agent.best_score, agent.noise_scale), end="") # [debug]
break
sys.stdout.flush()
###Output
_____no_output_____
###Markdown
This agent should perform very poorly on this task. And that's where you come in!

Define the Task, Design the Agent, and Train Your Agent!

Amend `task.py` to specify a task of your choosing. If you're unsure what kind of task to specify, you may like to teach your quadcopter to takeoff, hover in place, land softly, or reach a target pose.

After specifying your task, use the sample agent in `agents/policy_search.py` as a template to define your own agent in `agents/agent.py`. You can borrow whatever you need from the sample agent, including ideas on how you might modularize your code (using helper methods like `act()`, `learn()`, `reset_episode()`, etc.).

Note that it is **highly unlikely** that the first agent and task that you specify will learn well. You will likely have to tweak various hyperparameters and the reward function for your task until you arrive at reasonably good behavior.

As you develop your agent, it's important to keep an eye on how it's performing. Use the code above as inspiration to build in a mechanism to log/save the total rewards obtained in each episode to file. If the episode rewards are gradually increasing, this is an indication that your agent is learning.
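###Markdown
Since the reward function usually needs the most tweaking, here is a minimal sketch of one possible `get_reward()` for a reach-the-target task, meant to be dropped into the `Task` class in `task.py`. It is not the provided sample reward; the attributes it uses (`self.sim.pose`, `self.target_pos`) follow the description of the sample task above, and the scaling constants are arbitrary.
###Code
# A sketch of a reshaped reward for a "reach the target position" task (constants are arbitrary).
def get_reward(self):
    # Distance between the current (x, y, z) position and the target position
    distance = np.linalg.norm(self.sim.pose[:3] - self.target_pos)
    reward = 1.0 - 0.005 * distance   # closer to the target => higher reward
    return np.tanh(reward)            # squash into (-1, 1) to keep the scale bounded
###Output
_____no_output_____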
###Code
## TODO: Train your agent here.
###Output
_____no_output_____
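###Markdown
A minimal training-loop sketch with per-episode reward logging. As written it reuses the `agent` and `task` objects from the sample run above (so it runs with `PolicySearch_Agent`); swap in your own agent from `agents/agent.py` and adapt the `agent.step()` arguments to its signature.
###Code
# Sketch: training loop that records the total reward of every episode (interface as in the sample loop above)
rewards_log = []
for i_episode in range(1, num_episodes + 1):
    state = agent.reset_episode()
    total_reward = 0.0
    while True:
        action = agent.act(state)
        next_state, reward, done = task.step(action)
        agent.step(reward, done)        # adapt these arguments to your own agent's signature
        state = next_state
        total_reward += reward
        if done:
            rewards_log.append(total_reward)
            break
    print("\rEpisode = {:4d}, total reward = {:7.3f}".format(i_episode, total_reward), end="")
    sys.stdout.flush()
###Output
_____no_output_____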
###Markdown
Plot the Rewards

Once you are satisfied with your performance, plot the episode rewards, either from a single run, or averaged over multiple runs.
###Code
## TODO: Plot the rewards.
###Output
_____no_output_____ |
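###Markdown
A minimal plotting sketch for the `rewards_log` list collected above; a rolling mean makes the learning trend easier to see (the window size is arbitrary).
###Code
# Raw per-episode rewards plus a rolling mean over the last 10 episodes
plt.plot(rewards_log, alpha=0.4, label='episode reward')
rolling = pd.Series(rewards_log).rolling(10).mean()
plt.plot(rolling, label='rolling mean (10 episodes)')
plt.xlabel('Episode')
plt.ylabel('Total reward')
_ = plt.legend()
###Output
_____no_output_____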
PID-ejercicio.ipynb | ###Markdown
Assignment: PID Controllers

**Full name**: *Fulano de Tal*

Review the [documentation](http://ftp.esat.kuleuven.be/pub/SISTA/data/process_industry/distill2.txt) and explore the [data](https://ftp.esat.kuleuven.be/pub/SISTA/data/process_industry/distill2.dat.gz) for the distill2 system. More datasets and descriptions are available [here](http://ftp.esat.kuleuven.be/pub/SISTA/data/process_industry/).

1. Plot the system's experimental response to a unit step. For this exercise, select the U2Y2 combination.
###Code
import pandas as pd
import matplotlib.pyplot as plt

datos = pd.read_csv("Datos/distill2.dat",
sep="\s+",
header = None,
names=['muestra',
'U1Y1',
'U1Y2',
'U2Y1',
'U2Y2',
'U3Y1',
'U3Y2',
])
datos["Tiempo"] = datos["muestra"] * 10
datos
plt.plot(datos["Tiempo"], datos["U2Y2"])  # the exercise asks for the U2Y2 combination
plt.grid()
# Write your code here
###Output
_____no_output_____
###Markdown
--------

2. Find an appropriate model for the system.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
Describe your procedure and results.

--------

3. Direct method

A. Determine the parameters related to the desired behavior of the controlled system.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
Describe your procedure and results.

B. Find a PID controller that meets the requirements.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
Describe your procedure and results.

C. Simulate the behavior of the controlled system and compare it with the uncompensated system.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
Describe your procedure and results.

---------

5. Method based on optimization criteria

A. Determine the parameters related to the desired behavior of the controlled system.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
Describe your procedure and results.

B. Find a PID controller that meets the requirements.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
Describe your procedure and results.

C. Simulate the behavior of the controlled system and compare it with the uncompensated system.
###Code
# Write your code here
###Output
_____no_output_____ |
examples/adapter-inference-demo.ipynb | ###Markdown
Quick demo for inferencing a sentiment-analysis adapter
###Code
!pip install git+https://github.com/adapter-hub/adapter-transformers.git
!git clone https://github.com/huggingface/transformers
!pip install mecab-python3==0.996.5
!pip install unidic-lite
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, AdapterType
###Output
_____no_output_____
###Markdown
We can load the pretrained adapter from the path where it was saved (`$ADAPTER_PATH`), along with the base model and tokenizer.
###Code
adapter_path = "$ADAPTER_PATH"
model = AutoModelForSequenceClassification.from_pretrained("cl-tohoku/bert-base-japanese-whole-word-masking")
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-whole-word-masking")
model.load_adapter(adapter_path)
def predict(sentence):
token_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentence))
input_tensor = torch.tensor([token_ids])
outputs = model(input_tensor, adapter_names=['sst-2'])
result = torch.argmax(outputs[0]).item()
return 'positive' if result == 1 else 'negative'
###Output
_____no_output_____
###Markdown
Now we can feed in sentences to check whether they are predicted correctly.
###Code
assert predict("すばらしいですね") == "positive"
assert predict("全然面白くない") == "negative"
###Output
_____no_output_____ |
Ikea-distance.ipynb | ###Markdown
How far is the closest Ikea?

This is a little data science toy project that originated from a walk in the park and the following thought: living in Flagstaff, Arizona, the closest Ikea is located in Tempe, which is a two-hour one-way drive. *Is this a short distance by American standards? What percentage of US Americans have a shorter or longer drive to get their 2x2 Kallax shelf unit?*

Analysis

In order to address these questions, we can map a function that calculates the distance to the closest Ikea location across the United States and weight the distance distribution with the population density distribution. We need the following data for this task:
1. *a map of the United States*
2. *a list of all Ikea locations in the States*, possibly with latitudes and longitudes of each location
3. a grid of points within the borders of the US that allows us to derive the *distance to the closest Ikea location for each point on the grid*
4. *the geographic population density distribution across the States*

Putting all these things together, we can derive a *distribution of the distance to the closest Ikea per capita*. For the sake of simplicity, we will only consider the continental US in this study. There is actually no Ikea location in Alaska, Hawai'i, or any of the US territories, so including them would significantly skew the results of this analysis.

1. We need a map

[cartopy](https://github.com/SciTools/cartopy) seemed like a good way to obtain the map data that we need for this study:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import cartopy.io.shapereader as shpreader
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import warnings
warnings.simplefilter('ignore', RuntimeWarning)
# define crop area on the globe for this study
# [lon_min, lon_max, lat_min, lat_max]
map_crop = [-130, -60, 20, 50]
def basemap():
"""this function defines a matplotlib axis with a basemap that
will be used several times in this notebook"""
f, ax = plt.subplots(figsize=(12,8))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent(map_crop) # define map crop
ax.add_feature(cfeature.LAND, color='white') # add land
ax.add_feature(cfeature.OCEAN, color='lightblue') # add oceans
return ax
# read map files from naturalearthdata.com
shpfilename = shpreader.natural_earth(resolution='110m',
category='cultural',
name='admin_0_countries')
# extract US data from the map files
countries = shpreader.Reader(shpfilename).records()
for country in countries:
if country.attributes['ABBREV'] == 'U.S.A.':
us_geometry = country.geometry
us_shape = country._shape
break
# plot a simple map
ax = basemap()
ax.add_geometries(us_geometry, ccrs.PlateCarree(), edgecolor='black',
facecolor='none', linewidth=2,
label=country.attributes['ABBREV'])
###Output
_____no_output_____
###Markdown
While this is a very simple map, it will be sufficient for our purposes. Note that this map does not include lakes, for instance the Great Lakes. This is a small detail we should keep in mind for later.

2. We need a list of Ikea locations in the US

This information is luckily provided by Ikea itself. Just go to [ikea.com](https://ikea.com) and you will get a list of their locations. With a little bit of digging through their JavaScript files, you can even extract a list of geographic coordinates for their locations, which are compiled below.
###Code
import numpy as np
ikea_locations = np.rec.array(
[('Tempe', 33.3406452 , -111.9701824 ),
('Burbank', 34.174558 , -118.303142 ),
('Carson', 33.84179776, -118.26031208),
('Costa Mesa', 33.69092406, -117.91640997),
('Covina', 34.07422161, -117.87920237),
('East Palo Alto', 37.4610567 , -122.139374 ),
('Emeryville', 37.8310722 , -122.2923363 ),
('San Diego', 32.7800402 , -117.1263976 ),
('West Sacramento', 38.5866091 , -121.5535246 ),
('Centennial', 39.571952 , -104.8750419 ),
('New Haven', 41.2961069 , -72.9198649 ),
('Jacksonville', 30.332184 , -81.655651 ),
('Miami', 25.7915476 , -80.3838211 ),
('Orlando', 28.4830125 , -81.427288 ),
('Sunrise', 26.121522 , -80.3317868 ),
('Tampa', 27.9539394 , -82.4317366 ),
('Atlanta', 33.7887958 , -84.4052785 ),
('Bolingbrook', 41.7279305 , -88.0378324 ),
('Schaumburg', 42.0568612 , -88.0361715 ),
('Fishers', 39.954334 , -86.0120759 ),
('Merriam', 39.019697 , -94.6909241 ),
('Baltimore', 39.3753398 , -76.4616532 ),
('College Park', 39.0208564 , -76.9285061 ),
('Stoughton', 42.1375763 , -71.0685603 ),
('Canton', 42.324824 , -83.4516619 ),
('Twin Cities', 44.8584007 , -93.2448277 ),
('St. Louis', 38.6331482 , -90.2440372 ),
('Las Vegas', 36.0689264 , -115.280825 ),
('Elizabeth', 40.6751301 , -74.1697393 ),
('Paramus', 40.9245349 , -74.0732861 ),
('Brooklyn', 40.6719755 , -74.0114608 ),
('Long Island', 40.7741334 , -73.5305931 ),
('Charlotte', 35.2933918 , -80.7650555 ),
('Columbus', 40.1485005 , -82.9676117 ),
('West Chester', 39.3154678 , -84.4339977 ),
('Portland', 45.5712882 , -122.5535108 ),
('Conshohocken', 40.0951758 , -75.3063721 ),
('Philadelphia', 39.9170912 , -75.141859 ),
('Pittsburgh', 40.4518981 , -80.1679925 ),
('Memphis', 35.1899914 , -89.7993025 ),
('Frisco', 33.093868 , -96.8212207 ),
('Grand Prairie', 32.689824 , -97.02212 ),
('Houston', 29.7854269 , -95.4722835 ),
('Round Rock', 30.5569728 , -97.6902621 ),
('Draper', 40.508693 , -111.893237 ),
('Woodbridge', 38.6432536 , -77.289176 ),
('Renton', 47.4424256 , -122.2256924 ),
('Oak Creek', 42.88585 , -87.863136 )],
dtype=[('index', 'O'), ('lat', '<f8'), ('lon', '<f8')])
###Output
_____no_output_____
###Markdown
Let's put these locations on the map as black circles and add Flagstaff as a green star:
###Code
ax = basemap()
ax.scatter(ikea_locations['lon'], ikea_locations['lat'], color='black', marker='o', s=10) # add Ikeas
ax.scatter(-111.631111, 35.199167, color='green', marker='*', s=200) # add Flagstaff
###Output
_____no_output_____
###Markdown
There are strong clusters of Ikea locations on the East coast and the West coast, whereas most of the western states seem to be abandoned. Let's put a grid on the US and check the distance to the closest Ikea location. What is the maximum distance from any location in the US to the closest Ikea?

3. The distance to the closest Ikea location from any point in the US

The task seems simple: create a grid of equidistant or equiangular cells within the borders of the States. The problem is: how to define the borders? Here, `cartopy` comes in handy. When we obtained the map information from [naturalearthdata.com](https://www.naturalearthdata.com/) using `cartopy`, we extracted a shapefile for the US, which contains information on its borders, and named this object `us_shape`. Now we can check, for each cell of our grid, whether the cell is located within the borders of the US. To make this check, we use functionality from the [shapely](https://github.com/Toblerity/Shapely) module, which can directly use the shapefile information provided in `us_shape`; we wrap this into the function `in_country`. In order to speed up this process, we `vectorize` this function and apply it to an equiangular grid that covers the area defined by `map_crop`, i.e., the area covered by the map plot shown above.

The result is a boolean `mask` array, defining for each cell whether it is part of the US (`True`) or not (`False`).
###Code
from shapely.geometry import Point, shape
def in_country(lon, lat):
"""check whether coordinate pairs lies within the borders of the US"""
return shape(us_shape).contains(Point(lon, lat))
# build grid
xx, yy = np.mgrid[map_crop[0]:map_crop[1]:300j, map_crop[2]:map_crop[3]:150j]
# mask areas outside the US
mask = np.vectorize(in_country)(xx.ravel(), yy.ravel()).reshape(xx.shape)
###Output
_____no_output_____
###Markdown
We plot this mask on the map plot created by `basemap` to check whether it is consistent with the border outlines shown above:
###Code
ax = basemap()
# plot mask (only areas within the borders)
ax.scatter(xx[mask], yy[mask], color='red', marker='s', s=30)
# add border lines
ax.add_geometries(us_geometry, ccrs.PlateCarree(), edgecolor='black',
facecolor='none', linewidth=2,
label=country.attributes['ABBREV'])
###Output
_____no_output_____
###Markdown
That's good enough for the purpose of this study. Now, we just have to find a way to calculate the distance between two points, where the first point is any of the grid cells shown above and the second point is the closest Ikea location. This is an easy task for a [Ball Tree](https://sklearn.org/modules/generated/sklearn.neighbors.BallTree.html) algorithm. Keep in mind that this map is a projection from a spherical surface. To find the distance between two points on a sphere, we can make use of the [Haversine](https://en.wikipedia.org/wiki/VersineHaversine) function and set this as our distance metric.
###Code
from sklearn.neighbors import BallTree
# build a tree containing Ikea locations
ikea_tree = BallTree(list(zip(np.deg2rad(ikea_locations.lat),
np.deg2rad(ikea_locations.lon))),
metric='haversine')
# derive the distances and indices of the closest Ikea locations
# for each point in our grid
match_dist, match_id = ikea_tree.query(list(zip(np.deg2rad(yy.ravel()),
np.deg2rad(xx.ravel()))))
match_dist = match_dist.reshape(xx.shape)*6371 # reshape and convert distances to km
print('maximum distance to Ikea in km:', np.max(match_dist[mask]))
###Output
maximum distance to Ikea in km: 1050.6663178983786
###Markdown
The maximum distance from anywhere in the continental US to the closest Ikea is 1050 km. That's a long drive. But where would you have to live to be that far away from the closest Ikea?
###Code
ax = basemap()
# mask areas outside the country borders
match_dist[~mask] = np.nan
# plot the distance map
grid_plot = ax.pcolor(xx, yy, match_dist, snap=True, cmap='hot_r')
ax.scatter(ikea_locations['lon'], ikea_locations['lat'], color='black', marker='o', s=10) # add Ikea location
ax.scatter(-111.631111, 35.199167, color='green', marker='*', s=200) # add Flagstaff
plt.colorbar(grid_plot, label='Distance to the closest Ikea location (km)', ax=ax, fraction=0.024, pad=0.05)
###Output
_____no_output_____
###Markdown
Well, I guess you shouldn't live in Montana if you are a big Ikea fan. This is where you would have to drive the maximum distance of 1050 km to get to the next location.

4. It's the people who matter
So far, we treated every cell in our grid and hence every point in the continental US as equally important. But this is not really the case. Remember the caveat about the map: some areas that are unpopulated (e.g., the Great Lakes) are considered in this map and hence also in our grid. Also, some areas have a much higher population density (e.g., New York City) than others (e.g., Arizona). Instead of treating every cell in our grid equally, we should account for the population density distribution if we want to derive useful information on the average distance of US Americans to the closest Ikea.

Census 2010
We can construct an approximation of the population density distribution from [Census 2010](https://en.wikipedia.org/wiki/2010_United_States_Census) data. [This website](https://www.census.gov/geo/reference/centersofpop.html) provides files that list the centers of population on different levels and as a function of geographic coordinates. We download the [population distribution on the "Block Group" level](https://www2.census.gov/geo/docs/reference/cenpop2010/blkgrp/CenPop2010_Mean_BG.txt) (the finest level available) file and map it on our grid. This file basically provides the number of people counted in a "block", as well as the geographic coordinates of the center of this block. The mapping is done by building a 2-dimensional histogram over the grid from the previous step. By using the population number as weights, this method basically sums up the population that falls within one grid cell. The result is the total population per map grid cell. This method is not perfect in that it does not properly sample the population distribution on the very fine scale. Each block is treated as a singularity, whereas in reality, the population counted in this block is spread over a much larger area that might even spread into a neighboring cell. For the sake of simplicity, we will ignore this effect for now.
###Code
import pandas as pd
# read in population data
# https://www.census.gov/geo/reference/centersofpop.html
pop = pd.read_csv('CenPop2010_Mean_BG.txt')
# build a two-dimensional histogram of the population distribution
pop_grid, pop_xgrid, pop_ygrid = np.histogram2d(
pop.LONGITUDE, pop.LATITUDE,
bins=xx.shape,
range=[[map_crop[0], map_crop[1]], [map_crop[2], map_crop[3]]],
weights=pop.POPULATION)
pop_grid[~mask] = np.nan # mask areas outside the country borders
# plot the population density using a logarithmic scale
ax = basemap()
pop_plot = ax.pcolor(xx, yy, np.log10(pop_grid), snap=True, cmap='hot_r')
plt.colorbar(pop_plot, label=r'Population Count ($\log10$)', ax=ax, fraction=0.024, pad=0.05)
###Output
_____no_output_____
###Markdown
While not perfect, this logarithmic map looks like a credible representation of the population density distribution in the US:
* the highly populated areas on the East and West coasts are clearly visible
* the Great Lakes clearly pop out as unpopulated areas
* larger cities in the Midwest are clearly identifiable

On the downside, some areas in the West appear as completely unpopulated, which is a result of the singularized population distribution resulting from the simplistic data format.

Results: the distance distribution to the closest Ikea per capita
We have all the ingredients in place. Now it's time to calculate the final result: *What is the distribution of distances to Ikea among the population?* In order to obtain this distribution, we multiply the *distance to the closest Ikea per grid cell* (step 3) with the *population density distribution* (step 4) for each of the grid cells:
###Code
dist_pop_hist, dist_pop_bins = np.histogram(match_dist.ravel(), bins=100,
range=(0, np.nanmax(match_dist)),
weights=pop_grid.ravel())
###Output
_____no_output_____
###Markdown
To derive some useful information from this distribution we turn it into a normalized cumulative distribution and interpolate some meaningful distances:
###Code
dist_pop_cumhist = np.cumsum(dist_pop_hist)
dist_pop_cumhist /= dist_pop_cumhist[-1]
from scipy.interpolate import interp1d
from scipy.optimize import bisect
# interpolate the distance function (=f_dist)
f_dist = interp1d(dist_pop_bins[:-1]+dist_pop_bins[1]/2, dist_pop_cumhist)
# find the distances at which f_dist equals some meaningful percentages
for p in [50, 10, 90]:
print('{:.1f}% of US Americans live within {:.1f} km of an Ikea location'.format(
p, bisect(lambda d: f_dist(d)-p/100, dist_pop_bins[0]+dist_pop_bins[1]/2, dist_pop_bins[-2]+dist_pop_bins[1]/2)))
###Output
50.0% of US Americans live within 60.1 km of an Ikea location
10.0% of US Americans live within 7.7 km of an Ikea location
90.0% of US Americans live within 293.8 km of an Ikea location
###Markdown
Given that the continental US spans about 4000 km from coast to coast, this is a pretty impressive coverage: 50% of US Americans have to drive less than 60 km to get to Ikea. Now, of course, the most important question: how does Flagstaff, which has a (straight-line) distance of 206 km to the closest Ikea in Tempe, fare in comparison to these numbers:
###Code
print('{:.1f}% of Americans are closer to the closest Ikea than Flagstaff.'.format(f_dist(206)*100))
###Output
79.4% of Americans are closer to the closest Ikea than Flagstaff.
###Markdown
Well, this is demotivating... Just out of curiosity: where do those ~20% of US Americans live that are worse off than Flagstaff?
###Code
ax = basemap()
# copy the distance distribution and extract only those areas with a larger distance than 206 km
moredistant = match_dist.copy()
moredistant[moredistant < 206] = np.nan
# plot the culled distance distribution
grid_plot = ax.pcolor(xx, yy, moredistant, snap=True, cmap='hot_r')
ax.scatter(ikea_locations['lon'], ikea_locations['lat'], color='black', marker='o', s=10) # add Ikeas
ax.scatter(-111.631111, 35.199167, color='green', marker='*', s=200) # add Flagstaff
plt.colorbar(grid_plot, label='Distance to the closest Ikea location (km)', ax=ax, fraction=0.024, pad=0.05)
###Output
_____no_output_____
###Markdown
For the sake of completeness, here is the complete cumulative distribution of population-weighted distances:
###Code
# plot cumulative distribution of distances
f, ax = plt.subplots(figsize=(8,6))
ax.plot(dist_pop_bins[:-1]+(dist_pop_bins[0]+dist_pop_bins[1])/2,
dist_pop_cumhist/dist_pop_cumhist[-1], lw=2, color='gray')
# label Flagstaff
ax.scatter(206, 0.8, color='green')
ax.annotate('Flagstaff', xy=(240, 0.8), horizontalalignment='left',
verticalalignment='center', fontsize=14, color='green')
# label the population median
ax.scatter(60, 0.5, color='red')
ax.annotate('Population Median', xy=(90, 0.5), horizontalalignment='left',
verticalalignment='center', fontsize=14, color='red')
# label axes
ax.set_xlabel('Distance to the closest Ikea location (km)', fontsize=14)
ax.set_ylabel('Cumulative Fraction of Population', fontsize=14)
###Output
_____no_output_____
###Markdown
One final question: what fraction of the US population can reach an Ikea within ~30 min to 1 hr, roughly corresponding to a 50 km to 100 km drive?
###Code
print('{:.1f}% of Americans live within 50 km of the closest Ikea.'.format(f_dist(50)*100))
print('{:.1f}% of Americans live within 100 km of the closest Ikea.'.format(f_dist(100)*100))
###Output
46.4% of Americans live within 50 km of the closest Ikea.
59.5% of Americans live within 100 km of the closest Ikea.
|
project_maslow_starter_notebook.ipynb | ###Markdown
###Code
import pandas as pd
import os
import urllib.request
def download_project_maslow_files():
"""The purpose of this function is to handle getting the nonprofit.txt and nonprofit_text.txt
files. This function will create a data directory, add a .gitignore file so
source control does not pick up the txt files, and then check to see
whether nonprofit.txt and nonprofit_text.txt have been downloaded yet.
If not, it will proceed to download the files.
"""
if os.path.isdir("data") is False:
print("data/ folder does not exist, creating it now")
os.mkdir("data")
if os.path.isfile("data/.gitignore") is False:
os.system("echo *.txt > data/.gitignore")
if os.path.isfile("data/nonprofit.txt") is False:
print("Downloading nonprofit.txt...")
urllib.request.urlretrieve("https://mtsu-dsi-hackathon-2022.s3.amazonaws.com/nonprofit.txt", 'data/nonprofit.txt')
if os.path.isfile("data/nonprofit_text.txt") is False:
print("Downloading nonprofit_text.txt...")
urllib.request.urlretrieve("https://mtsu-dsi-hackathon-2022.s3.amazonaws.com/nonprofit_text.txt", 'data/nonprofit_text.txt')
def get_non_profit_df():
"""This function will first run the download_project_maslow_files() to check to
see if the files have been downloaded to the `data/` folder. If not,
it will download the files.
Then, it will read in the file and return a dataframe with the nonprofit data.
"""
download_project_maslow_files()
col_types = {
"nonprofit_id": "Int64",
"reporting_year": "Int64",
"ein": "Int64",
"businessname": "str",
"phone": "str",
"address1": "str",
"address2": "str",
"city": "str",
"stabbrv": "str",
"zip": "str"
}
df = pd.read_csv("data/nonprofit.txt", sep = "|", dtype=col_types)
return df
df_np = get_non_profit_df()
df_np.head()
def get_non_profit_text_df():
"""This function will first run the download_project_maslow_files() to check to
see if the files have been downloaded to the `data/` folder. If not,
it will download the files.
Then, it will read in the file and return a dataframe with the nonprofit_text data.
"""
download_project_maslow_files()
col_types = {
"nonprofit_text_id": "Int64",
"reporting_year": "Int64",
"nonprofit_id": "Int64",
"grouptype": "str",
"description": "str"
}
df = pd.read_csv("data/nonprofit_text.txt", sep = "|", encoding="cp1252", dtype=col_types)
return df
df_text = get_non_profit_text_df()
df_text.head()
###Output
_____no_output_____ |
classwork/.ipynb_checkpoints/AppStore Dataset-checkpoint.ipynb | ###Markdown
AppStore Dataset
Tutorial: https://www.makeschool.com/academy/track/app-store-dataset-tutorial
###Code
# Pandas is a library for basic data analysis
import pandas as pd
# NumPy is a library for advanced mathematical computation
import numpy as np
# MatPlotLib is a library for basic data visualization
import matplotlib.pyplot as plt
# SeaBorn is a library for advanced data visualization
import seaborn as sns
sns.set(style="white", context="notebook", palette="deep")
COLOR_COLUMNS = ["#66C2FF", "#5CD6D6", "#00CC99", "#85E085", "#FFD966", "#FFB366", "#FFB3B3", "#DAB3FF", "#C2C2D6"]
sns.set_palette(palette=COLOR_COLUMNS, n_colors=4)
ls #bash command to list all files in the current directory
cd ../datasets/ #check and see if our csv file is there
cd MakeSchool/Term4/DS1.1/classwork/
FILEPATH = "../datasets/AppleStore.csv"
df = pd.read_csv(FILEPATH, index_col="Unnamed: 0")
df
###Output
_____no_output_____
###Markdown
Some good questions to keep in mind:
- Any convoluted data?
- Any repeated data?
- Any redundant data?
- Any data that's difficult to understand?
---
- Remove **unclean data**
  - irregularities in data types
  - inconsistencies in how data was recorded
  - inappropriate data values (e.g. ```null``` values)

.iloc[] and .loc[]
- ```.iloc[]``` and ```.loc[]``` are selector tools in Pandas to view a single or multiple rows, columns, and/or cells in a dataset
- `.iloc[]` is useful for selecting data by **index**
- `.loc[]` is useful for selecting data by **label**
###Code
# df.iloc[0] #get first index = PACMAN
# df.loc[1] #get first label = PACMAN
# df.iloc[:3] # get top 3
# df.loc[:3] # get top 3
# df.head() #peeks top 5
# df.tail() #peeks bottom 5
# df.price.mean() #does the same as line below
df["price"].mean()
# df.describe() # show general column-formatted information on the dataset
df.describe(include="O")
#grab size_bytes data and divide to give us our new size_Mb column
def _byte_resizer(data):
return np.around(data / 1000000, decimals=2)
df["size_Mb"] = df["size_bytes"].apply(_byte_resizer)
df.drop("size_bytes", axis="columns", inplace=True)
df
###Output
_____no_output_____
###Markdown
Page 3: Creating Basic Visualization Part 2: Exploratory Data Science
https://www.makeschool.com/academy/track/standalone/app-store-dataset-tutorial/creating-basic-visualizations
###Code
plt.subplots(figsize=(10, 8))
BINS = [0.00, 10.00, 20.00, 50.00, 100.00, 200.00, 500.00, 1000.00, 2000.00, np.inf]
LABELS = ["<10m", "10-20m", "20-50m", "50-100m", "100-200m", "200-500m", "500-1000m", "1-2G", ">2G"]
freqs = pd.cut(df["size_Mb"], BINS, include_lowest=True, labels=LABELS)
sns.barplot(y=freqs.value_counts().values, x=freqs.value_counts().index)
BINS = [-np.inf, 0.00, np.inf]
LABELS = ["FREE", "PAID"]
colors = ['lightcoral', 'yellowgreen']
df["price_categories"] = pd.cut(df["price"], BINS, include_lowest=True, labels=LABELS)
fig, axs = plt.subplots(figsize=(10, 5)) #initialize our plotting space in MatPlotLib.
price_df = df["price_categories"].value_counts()
#Now create a doughnut plot in MapPlotLib
plt.pie(price_df.values, labels=LABELS, colors=colors, autopct='%1.1f%%', shadow=True)
centre_circle = plt.Circle((0,0),0.75,color='black', fc='white',linewidth=1.25)
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
plt.axis('equal')
###Output
_____no_output_____
###Markdown
Let's keep moving with this idea and check out the highest rated free and paid apps.
###Code
free_apps = df.loc[df["price_categories"] == "FREE"]
paid_apps = df.loc[df["price_categories"] == "PAID"]
# sort data based on total user ratings
free_apps_rated = free_apps.sort_values(by=["rating_count_tot"], ascending=False)
paid_apps_rated = paid_apps.sort_values(by=["rating_count_tot"], ascending=False)
# plot free apps
sns.barplot(x=free_apps_rated["rating_count_tot"][:10], y=free_apps_rated["track_name"][:10])
# plot paid apps
sns.barplot(x=paid_apps_rated["rating_count_tot"][:10], y=paid_apps_rated["track_name"][:10])
plt.subplots(figsize=(20, 20))
ratings = df.sort_values(by=["rating_count_tot"], ascending=False) #descending sort of total rating count
sns.barplot(x=ratings["rating_count_tot"][:30], y=ratings["track_name"][:30]) #plot the first 30 rated
###Output
_____no_output_____
###Markdown
Page 4: See Games with Us Part 3: Exploratory Data Science (finale)
https://www.makeschool.com/academy/track/standalone/app-store-dataset-tutorial/see-games-with-us
Visualize the distribution of apps based on genre
###Code
df["prime_genre"].value_counts()
genres = df["prime_genre"].value_counts()
genres.sort_values(ascending=False, inplace=True) #default
plt.subplots(figsize=(20, 10))
sns.barplot(x=genres.values, y=genres.index, order=genres.index, orient="h")
###Output
_____no_output_____
###Markdown
Slice our data to grab only the mobile games
###Code
games = df.loc[df["prime_genre"] == "Games"]
games.head()
games["price"].value_counts()
#cell to hold mobile game data by price.
prices = (games["price"].value_counts()) / (games["price"].shape[0]) * 100 #divide sum of a certain price by the sum of all games (3862) * 100 //to get the percentage
prices.sort_values(ascending=False, inplace=True)
prices
plt.subplots(figsize=(20, 10))
ax = sns.barplot(y=prices.values, x=prices.index, order=prices.index)
ax.set(xlabel="USD", ylabel="percent (%)")
###Output
_____no_output_____
###Markdown
Visualizations looking at mobile game popularity.
###Code
free_games = games.loc[games["price_categories"] == "FREE"]
paid_games = games.loc[games["price_categories"] == "PAID"]
free_games_rated = free_games.sort_values(by=["rating_count_tot"], ascending=False)
paid_games_rated = paid_games.sort_values(by=["rating_count_tot"], ascending=False)
# Then, let's initialize our plotting space (in a new cell), this time using subplots eloquently to display two plots dynamically.
fig = plt.figure(figsize=(20, 10))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
# Finally, let's create two barplots for top rated free and paid mobile apps (in a new cell).
sns.barplot(x=free_games_rated["rating_count_tot"][:10], y=free_games_rated["track_name"][:10], ax=ax1)
sns.barplot(x=paid_games_rated["rating_count_tot"][:10], y=paid_games_rated["track_name"][:10], ax=ax2)
###Output
_____no_output_____ |
notebooks/Session_2/Episode_2/minipy_pandas.ipynb | ###Markdown
Mini Intro to Pandas
by Liang Jin
Part of Mini Python Sessions:
- [github.com/drliangjin/minipy](https://github.com/drliangjin/minipy)
Official Pandas Doc:
- [pandas.pydata.org](https://pandas.pydata.org/)

Pandas -- "Panel Data"
Built on `NumPy`, `Pandas` is the key package to manipulate data, particularly good for working on `tabular` data.
1. Pandas basics, `Series` and `DataFrame` objects
2. Data Cleaning
3. Data Wrangling
4. Data Aggregating and grouping

1. Getting started with Pandas
###Code
import numpy as np
import pandas as pd
# Remember you can also import sub-objects under one package:
from pandas import Series, DataFrame
###Output
_____no_output_____
###Markdown
Pandas Series
A Pandas `Series` is an advanced list/array with much more flexibility and functionality:
- a sequence of values
- an associated data index
###Code
# create a random numpy array as data
arr = np.random.randint(-5, 6, 5) # <= 5 random numbers from -5 to 5
arr
###Output
_____no_output_____
###Markdown
Create Pandas Series (1)
###Code
# by default, numerical index are assigned, starting from 0
ser1 = pd.Series(arr)
ser1
###Output
_____no_output_____
###Markdown
Create Pandas Series (2)
###Code
# we can also specify index for the values
ser2 = pd.Series(arr, index = ['e', 'd', 'c', 'b', 'a'])
ser2
###Output
_____no_output_____
###Markdown
Series Indexing
###Code
# retrieve a single value using "label", loc, and iloc
ser2['a'], ser2.loc['a'], ser2.iloc[len(ser2)-1]
# update a single value using "label"
ser2['a'] = -99
# retrieve multiple elements using a list of "label"
ser2[['a', 'e']]
# boolean indexing
ser2[ser2 < 0]
###Output
_____no_output_____
###Markdown
Series Attributes
###Code
# return values of a Series
ser2.values
# return index of a Series, dtype = 'object' is a general form for string
ser2.index
# change labels
ser2.index = ['Lancaster', 'York', 'Manchester', 'Edinburgh', 'Liverpool']
# assign a name to the Series and to its index
ser2.name = 'example'
ser2.index.name = 'city'
ser2
###Output
_____no_output_____
###Markdown
Pandas DataFrame
A Pandas `DataFrame` represents a rectangular table of data:
- has two dimensions;
- contains an ordered (indexed) collection of columns (Series);
- each column can be a different value type (`int`, `float`, `string`, `boolean`, etc);
- has a row and column index

Create a Pandas DataFrame
###Code
# UK University League
# A dictionary of equal-length lists of NumPy arrays
data = {'uni':['Lancaster', 'Lancaster', 'Lancaster', 'Manchester', 'Manchester', 'Manchester'],
'year':[2017, 2018, 2019, 2017, 2018, 2019],
'rank':[9, 9, 8, 25, 22, 18]}
# from Python dictionary to Pandas DataFrame
df = pd.DataFrame(data, columns = ['uni', 'year', 'rank'])
###Output
_____no_output_____
###Markdown
What does a Pandas DataFrame look like?
###Code
data # <== this is the raw dictionary of data
# Jupyter displays a DataFrame as a nice-looking HTML table
# Same as Series, by default, numerical index are assigned to DataFrame
df
###Output
_____no_output_____
###Markdown
"Label" Indexing
###Code
# we can of course change the index, Pandas is very flexible
df.index = ['zero', 'one', 'two', 'three', 'four', 'five']
# Now let's have another look at the DataFrame
df
###Output
_____no_output_____
###Markdown
Retrieve rows and columns
###Code
# retrieve a column
df['rank'] # note: we still have index with this single column
# selection using "label"
df.loc['two']
# df.iloc[2] # or selection using integer index
###Output
_____no_output_____
###Markdown
Create new columns
###Code
# An empty or new column can be assigned a scalar value, an array or a Series
df['new_col1'] = 99
df['new_col2'] = np.arange(6.)
df['new_col3'] = pd.Series([0.5, 0.7], index = [0, 5])
# or create a dummy variable base on other column(s)
df['top10'] = df['rank'] <= 10
df
# let's delete non-sense columns by:
df.drop(['new_col1', 'new_col2', 'new_col3'], axis=1, inplace=True)
# or delete a single column at a time using, for example:
# del df['new_col1']
###Output
_____no_output_____
###Markdown
Sorting and Ranking (1)
###Code
# sort_index: rows
df.sort_index() # by default, axis = 0
# sort_index: columns
df.sort_index(axis=1) # note, this is a copy
# let's switch back to numeric index
df.reset_index(drop=True, inplace=True)
###Output
_____no_output_____
###Markdown
Sorting and Ranking (2)
###Code
# sort_values
df.sort_values(by=['rank'])
# sort_values
df.sort_values(by=['year', 'rank'])
# let's get back the original order
df.sort_index(axis=0, inplace=True)
###Output
_____no_output_____
###Markdown
Summarizing Descriptive Statistics
###Code
df.info()
is_lancs = df['uni'] == 'Lancaster'
df[is_lancs].describe() # only numeric data will be summarized, how about year?
###Output
_____no_output_____
###Markdown
Unique values, Value counts, and Membership
###Code
# unique values, such as unique university in dataset
uniques = df['uni'].unique()
uniques
# how many values/observations for each unique university
df['uni'].value_counts()
# vectorized set membership check
mask = df['uni'].isin(['Warwick', 'Bath'])
mask # we can then use this mask object as boolean to filter data
###Output
_____no_output_____
###Markdown
2. Data Cleaning

Missing data
- Python built-in: `None`
- Numpy and Pandas: `NaN` (Not a Number)
###Code
ser = pd.Series([None, 1, 3, 5, np.nan])
ser.isnull() # or notnull()
###Output
_____no_output_____
###Markdown
Drop missing data
###Code
# drop missing data from Series
ser.dropna() # same as ser[ser.notnull()]
# drop missing data from DataFrame, a bit more complex
df.loc[(df['uni'] == 'Manchester') & (df['year'] >= 2018), 'rank'] = np.nan
df.loc[(df['uni'] == 'Manchester') & (df['year'] == 2019), 'top10'] = None
# df[(df['uni'] == 'Manchester') & (df['year'] == 2018)]['rank'] = np.nan
df.dropna() # drop any row if containing one missing value
# df.dropna(axis=1) # also play with arg: how='all'
###Output
_____no_output_____
###Markdown
Fill missing data
###Code
# fill missing data with 0
df.fillna(0) # you can also put a callable inside, e.g., mean()
# fill missing data with the most "recent" data points
df.fillna(method='ffill') # forward fill and backward fill
###Output
_____no_output_____
###Markdown
Replace data
###Code
# replace obvious errors with np.nan
# 99, or -99 by construction are errors
ser = pd.Series([-99, -0.05, 0.01, 0.03, 0.02, 99])
ser.replace([99, -99], np.nan)
# replace according to a dictionary
ser.replace({-99: np.nan, 99: 0})
###Output
_____no_output_____
###Markdown
3. Data Wrangling Merge data
###Code
# left and right dataframe
left = pd.DataFrame({'uni':['Lancaster', 'Lancaster', 'Lancaster', 'Manchester', 'Manchester', 'Manchester'],
'year':[2017, 2018, 2019, 2017, 2018, 2019],
'rank':[9, 9, 8, 25, 22, 18]})
right = pd.DataFrame({'uni':['Lancaster', 'Lancaster', 'Manchester', 'Manchester'],
'year':[2018, 2019, 2018, 2019],
'acf_rank':[7, 10, 35, 20]})
pd.merge(left, right, on=['uni', 'year'], how='left') # keep all left, get matched right
pd.merge(left, right, on=['uni', 'year'], how='inner') # get matched from both dataframes
###Output
_____no_output_____
###Markdown
Concat data
###Code
# left and right dataframe
up = pd.DataFrame({'uni':['Lancaster', 'Lancaster', 'Lancaster', 'Manchester', 'Manchester', 'Manchester'],
'year':[2017, 2018, 2019, 2017, 2018, 2019],
'rank':[9, 9, 8, 25, 22, 18]})
down = pd.DataFrame({'uni':['Lancaster', 'Lancaster', 'Manchester', 'Manchester'],
'year':[2015, 2016, 2015, 2016],
'rank':[11, 9, 28, 28]})
pd.concat([up, down]).sort_values(by=['uni', 'year']).reset_index(drop=True)
###Output
_____no_output_____ |
00 Introduction/ml_linear_non_linear.ipynb | ###Markdown
Linear, Non-Linear ExamplesObjective: visualize linear, quadratic, cubic, exponential, log, sine functions on a plot
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
def log_func(x):
if x > 0:
return math.log(x,10)
else:
return np.nan
df = pd.DataFrame(index=range(-200,200))
df.shape
title = ['Linear 3*X','Quadratic X^2',
'Cubic X^3','Absolute abs(X)',
'sine(X)','log(X)',
'Exponential 2^X']
df['linear']=df.index.map(lambda x: 3*x)
df['quadratic'] = df.index.map(lambda x: x**2)
df['cubic'] = df.index.map(lambda x: x**3)
df['abs'] = df.index.map(lambda x: abs(x))
df['sine'] = np.sin(np.arange(-20,20,.1))
df['log'] = df.index.map(log_func)
df['exponential'] = df.index.map(lambda x: 2.0**x)
df.head()
df.tail()
fig, axs = plt.subplots(2, 2, figsize=(10, 10), sharex=False)
axx = axs.ravel()
for i in range(df.shape[1]-3):
axx[i].set_title(title[i])
df[df.columns[i]].plot(ax=axx[i])
axx[i].set_xlabel('X')
if i % 2 == 1 :
axx[i].yaxis.tick_right()
axx[i].grid()
plt.figure(figsize=(10,3))
plt.plot(df['exponential'])
plt.title('Exponential 2^X')
plt.xlim(-10,20)
plt.ylim(0,100000)
plt.xlabel('X')
plt.grid()
plt.figure(figsize=(10,3))
plt.plot(df['log'])
plt.title('Log(X)')
plt.xlim(-10,200)
plt.xlabel('X')
plt.grid()
plt.figure(figsize=(10,3))
plt.plot(np.arange(-20,20,.1), df['sine'])
plt.title('Sine(X)')
plt.xlabel('X')
plt.grid()
###Output
_____no_output_____ |
Unit03/3_Similarity.ipynb | ###Markdown
Similarity
manhattan_distance: $$d(x,y)=\sum^n_{i=1}|(x_i - y_i)|$$
Euclidean: $$d(x,y)=\sqrt{\sum^n_{i=1}(x_i - y_i)^2}$$
Cosine: $$\theta(x,y)=\frac{x \bullet y}{ \sqrt{x \bullet x} \sqrt{y \bullet y}}$$
Using numpy functions -> Q: which pair is less related, (x1, y1) or (x2, y2)?
x1 = [0,5], y1 = [1,5]
x2 = [0,5], y2 = [100,5]
###Code
import numpy as np
def manhattan_distance(x,y):
    return np.sum(np.abs(x-y))
def euclidean_distance(x,y):
    return np.sqrt(np.sum((x-y) ** 2))
def cosine_similarity(x,y):
return np.dot(x,y) / (np.sqrt(np.dot(x,x)) * np.sqrt(np.dot(y,y)))
X1 = np.array([0,5])
Y1 = np.array([1,5])
print('x=[0,5],y=[1,5]')
print("歐式距離(x1,y1)= ",euclidean_distance(X1,Y1))
print("cos距離(x1,y1)= ",cosine_similarity(X1,Y1))
print('\nx=[100,5],y=[1,5]')
X2 = np.array([0,5])
Y2 = np.array([0,100])
print("歐式距離(x2,y2)= ",euclidean_distance(X2,Y2))
print("cos距離(x2,y2)= ",cosine_similarity(X2,Y2))
print()
print("cos值越大 角度越小 表示很有相關")
print("cos值越小 角度越大 表示越不相關 ")
print("================================")
print("三維")
A = np.array([99,1,1])
B = np.array([0,1,1])
C = np.array([100,0,0])
print('A={0},B={1},C={2}'.format(A,B,C))
print("歐式距離(A,B)= ",euclidean_distance(A,B))
print("歐式距離(A,C)= ",euclidean_distance(A,C))
print("cos距離(A,B)= ",cosine_similarity(A,B))
print("cos距離(A,C)= ",cosine_similarity(A,C))
print("================================")
print("多維")
a = np.array([1,0,0,0])
b = np.array([1,0,1,1])
c = np.array([1,1,1,1])
d = np.array([10,0,0,0])
print('a={0},b={1},c={2},d={3}'.format(a,b,c,d))
print("歐式距離(a,c)= ",euclidean_distance(a,c))
print("歐式距離(b,c)= ",euclidean_distance(b,c))
print("歐式距離(c,c)= ",euclidean_distance(c,c))
print("歐式距離(a,d)= ",euclidean_distance(a,d))
print("cos距離(a,c)= ",cosine_similarity(a,c))
print("cos距離(b,c)= ",cosine_similarity(b,c))
print("cos距離(a,b)= ",cosine_similarity(a,b))
print("cos距離(a,d)= ",cosine_similarity(a,d))
print("ab ac 哪個比較近(cos距離)?")
###Output
================================
Multi-dimensional
a=[1 0 0 0],b=[1 0 1 1],c=[1 1 1 1],d=[10 0 0 0]
Euclidean distance (a,c) =  3.0
Euclidean distance (b,c) =  1.0
Euclidean distance (c,c) =  0.0
Euclidean distance (a,d) =  9.0
cosine similarity (a,c) =  0.5
cosine similarity (b,c) =  0.8660254037844387
cosine similarity (a,b) =  0.5773502691896258
cosine similarity (a,d) =  1.0
Which is closer, (a,b) or (a,c), by cosine similarity?
###Markdown
Supplement: Cosine Similarity (sklearn)
###Code
from sklearn.metrics.pairwise import cosine_similarity
# Vectors
vec_a = [1, 2, 3, 4, 5]
vec_b = [1, 3, 5, 7, 9]
# Dot and norm
dot = sum(a*b for a, b in zip(vec_a, vec_b))
norm_a = sum(a*a for a in vec_a) ** 0.5
norm_b = sum(b*b for b in vec_b) ** 0.5
# Cosine similarity
cos_sim = dot / (norm_a*norm_b)
# Results
print('My version:', cos_sim)
print('Scikit-Learn:', cosine_similarity([vec_a], [vec_b]))
###Output
My version: 0.9972413740548081
Scikit-Learn: [[0.99724137]]
|
Copy_of_Covid19_detection_using_X_ray_images.ipynb | ###Markdown
TASK 1 : Import Libraries
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D,MaxPooling2D,Dropout,Flatten,Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
TASK 2 : Clone & Explore dataset
###Code
#clone the dataset from the github repository
! git clone https://github.com/education454/datasets.git
#set the path to the main data directory
import os
main_dir="/content/datasets/Data"
#set the path to the train dir
train_dir=os.path.join(main_dir,'train')
#set the path to the test dir
test_dir=os.path.join(main_dir,'test')
#directory with the training covid images
train_covid_dir=os.path.join(train_dir,'COVID19')
#directory with the training normal images
train_normal_dir=os.path.join(train_dir,'NORMAL')
#directory with the testing covid images
test_covid_dir=os.path.join(test_dir,'COVID19')
#directory with the testing normal images
test_normal_dir=os.path.join(test_dir,'NORMAL')
#print the filenames
train_covid_names=os.listdir(train_covid_dir)
print(train_covid_names[:10])
train_normal_names=os.listdir(train_normal_dir)
print(train_normal_names[:10])
test_covid_names=os.listdir(test_covid_dir)
print(test_covid_names[:10])
test_normal_names=os.listdir(test_normal_dir)
print(test_normal_names[:10])
#print the total no of images present in each dir
print("total images in the training set:",len(train_covid_names+train_normal_names))
print("total images in the testing set:",len(test_covid_names+test_normal_names))
###Output
total images in the training set: 1811
total images in the testing set: 484
###Markdown
TASK 3 : Data Visualization
###Code
# plot a grid of 16 images (8 images of Covid19 and 8 images of Normal)
import matplotlib.image as mpimg
#set the number of columns and rows
rows=4
cols=4
#set the figure size
fig=plt.gcf()
fig.set_size_inches(12,12)
#get the filenames from the covid & normal dir of the train dataset
covid_pic=[os.path.join(train_covid_dir,filename)for filename in train_covid_names[0:8]]
normal_pic=[os.path.join(train_normal_dir,filename)for filename in train_normal_names[0:8]]
#print the list
print(covid_pic)
print(normal_pic)
#merge the covid and normal list
merge_list=covid_pic+normal_pic
for i,img_path in enumerate(covid_pic+normal_pic):
data=img_path.split('/',6)[6]
sp=plt.subplot(rows,cols,i+1)
sp.axis('Off')
img=mpimg.imread(img_path)
sp.set_title(data,fontsize=10)
plt.imshow(img,cmap='gray')
plt.show()
###Output
['/content/datasets/Data/train/COVID19/COVID19(510).jpg', '/content/datasets/Data/train/COVID19/COVID19(107).jpg', '/content/datasets/Data/train/COVID19/COVID19(378).jpg', '/content/datasets/Data/train/COVID19/COVID19(471).jpg', '/content/datasets/Data/train/COVID19/COVID19(330).jpg', '/content/datasets/Data/train/COVID19/COVID19(429).jpg', '/content/datasets/Data/train/COVID19/COVID19(239).jpg', '/content/datasets/Data/train/COVID19/COVID19(31).jpg']
['/content/datasets/Data/train/NORMAL/NORMAL(1255).jpg', '/content/datasets/Data/train/NORMAL/NORMAL(986).jpg', '/content/datasets/Data/train/NORMAL/NORMAL(1556).jpg', '/content/datasets/Data/train/NORMAL/NORMAL(1389).jpg', '/content/datasets/Data/train/NORMAL/NORMAL(1149).jpg', '/content/datasets/Data/train/NORMAL/NORMAL(819).jpg', '/content/datasets/Data/train/NORMAL/NORMAL(708).jpg', '/content/datasets/Data/train/NORMAL/NORMAL(1254).jpg']
###Markdown
TASK 4 : Data Preprocessing & Augmentation
###Code
# generate training,testing and validation batches
dgen_train=ImageDataGenerator(rescale=1./255,validation_split=0.2,zoom_range=0.2,horizontal_flip=True)
dgen_validation=ImageDataGenerator(rescale=1./255)
dgen_test=ImageDataGenerator(rescale=1./255)
train_generator=dgen_train.flow_from_directory(train_dir,target_size=(150,150),subset='training',batch_size=32,class_mode='binary')
validation_generator=dgen_train.flow_from_directory(train_dir,target_size=(150,150),subset='validation',batch_size=32,class_mode='binary')
test_generator=dgen_test.flow_from_directory(test_dir,target_size=(150,150),batch_size=32,class_mode='binary')
#get the class indices
train_generator.class_indices
#get the image shape
train_generator.image_shape
###Output
_____no_output_____
###Markdown
TASK 5 : Build Convolutional Neural Network Model
###Code
model=Sequential()
# add the convolutional layer
# filters, size of filters,padding,activation_function,input_shape
model.add(Conv2D(32,(5,5),padding='SAME',activation='relu',input_shape=(150,150,3)))
# pooling layer
model.add(MaxPooling2D(pool_size=(2,2)))
# place a dropout layer
model.add(Dropout(0.5))
# add another convolutional layer
model.add(Conv2D(64,(5,5),padding='SAME',activation='relu'))
# pooling layer
model.add(MaxPooling2D(pool_size=(2,2)))
# place a dropout layer
model.add(Dropout(0.5))
# Flatten layer
model.add(Flatten())
# add a dense layer : amount of nodes, activation
model.add(Dense(256,activation='relu'))
# place a dropout layer
# 0.5 drop out rate is recommended, half input nodes will be dropped at each update
model.add(Dropout(0.5))
model.add(Dense(1,activation='sigmoid'))
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 150, 150, 32) 2432
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 75, 75, 32) 0
_________________________________________________________________
dropout (Dropout) (None, 75, 75, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 75, 75, 64) 51264
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 37, 37, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 37, 37, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 87616) 0
_________________________________________________________________
dense (Dense) (None, 256) 22429952
_________________________________________________________________
dropout_2 (Dropout) (None, 256) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 257
=================================================================
Total params: 22,483,905
Trainable params: 22,483,905
Non-trainable params: 0
_________________________________________________________________
###Markdown
TASK 6 : Compile & Train the Model
###Code
#compile the model
model.compile(Adam(lr=0.001),loss='binary_crossentropy',metrics=['accuracy'])
#train the model
history=model.fit(train_generator,epochs=30,validation_data=validation_generator)
###Output
Epoch 1/30
46/46 [==============================] - 48s 1s/step - loss: 0.6055 - accuracy: 0.8061 - val_loss: 0.4119 - val_accuracy: 0.8867
Epoch 2/30
46/46 [==============================] - 48s 1s/step - loss: 0.2205 - accuracy: 0.9068 - val_loss: 0.1879 - val_accuracy: 0.9420
Epoch 3/30
46/46 [==============================] - 49s 1s/step - loss: 0.1602 - accuracy: 0.9413 - val_loss: 0.1486 - val_accuracy: 0.9530
Epoch 4/30
46/46 [==============================] - 51s 1s/step - loss: 0.1543 - accuracy: 0.9524 - val_loss: 0.1313 - val_accuracy: 0.9558
Epoch 5/30
46/46 [==============================] - 50s 1s/step - loss: 0.1383 - accuracy: 0.9462 - val_loss: 0.1397 - val_accuracy: 0.9613
Epoch 6/30
46/46 [==============================] - 50s 1s/step - loss: 0.1166 - accuracy: 0.9669 - val_loss: 0.1066 - val_accuracy: 0.9696
Epoch 7/30
46/46 [==============================] - 50s 1s/step - loss: 0.1256 - accuracy: 0.9600 - val_loss: 0.1081 - val_accuracy: 0.9696
Epoch 8/30
46/46 [==============================] - 50s 1s/step - loss: 0.0805 - accuracy: 0.9731 - val_loss: 0.0955 - val_accuracy: 0.9724
Epoch 9/30
46/46 [==============================] - 51s 1s/step - loss: 0.0870 - accuracy: 0.9758 - val_loss: 0.0904 - val_accuracy: 0.9724
Epoch 10/30
46/46 [==============================] - 51s 1s/step - loss: 0.1157 - accuracy: 0.9593 - val_loss: 0.1168 - val_accuracy: 0.9669
Epoch 11/30
46/46 [==============================] - 50s 1s/step - loss: 0.1010 - accuracy: 0.9710 - val_loss: 0.1458 - val_accuracy: 0.9530
Epoch 12/30
46/46 [==============================] - 50s 1s/step - loss: 0.1008 - accuracy: 0.9703 - val_loss: 0.1324 - val_accuracy: 0.9448
Epoch 13/30
46/46 [==============================] - 50s 1s/step - loss: 0.0912 - accuracy: 0.9717 - val_loss: 0.0737 - val_accuracy: 0.9834
Epoch 14/30
46/46 [==============================] - 49s 1s/step - loss: 0.0854 - accuracy: 0.9724 - val_loss: 0.0778 - val_accuracy: 0.9779
Epoch 15/30
46/46 [==============================] - 50s 1s/step - loss: 0.0730 - accuracy: 0.9758 - val_loss: 0.0742 - val_accuracy: 0.9724
Epoch 16/30
46/46 [==============================] - 49s 1s/step - loss: 0.0555 - accuracy: 0.9834 - val_loss: 0.0871 - val_accuracy: 0.9641
Epoch 17/30
46/46 [==============================] - 49s 1s/step - loss: 0.0619 - accuracy: 0.9848 - val_loss: 0.0752 - val_accuracy: 0.9834
Epoch 18/30
46/46 [==============================] - 48s 1s/step - loss: 0.0771 - accuracy: 0.9772 - val_loss: 0.1205 - val_accuracy: 0.9475
Epoch 19/30
46/46 [==============================] - 49s 1s/step - loss: 0.0900 - accuracy: 0.9689 - val_loss: 0.1106 - val_accuracy: 0.9669
Epoch 20/30
46/46 [==============================] - 49s 1s/step - loss: 0.0567 - accuracy: 0.9848 - val_loss: 0.0848 - val_accuracy: 0.9696
Epoch 21/30
46/46 [==============================] - 49s 1s/step - loss: 0.0506 - accuracy: 0.9807 - val_loss: 0.0866 - val_accuracy: 0.9779
Epoch 22/30
46/46 [==============================] - 49s 1s/step - loss: 0.0674 - accuracy: 0.9807 - val_loss: 0.0910 - val_accuracy: 0.9807
Epoch 23/30
46/46 [==============================] - 49s 1s/step - loss: 0.0601 - accuracy: 0.9807 - val_loss: 0.0771 - val_accuracy: 0.9696
Epoch 24/30
46/46 [==============================] - 48s 1s/step - loss: 0.0473 - accuracy: 0.9841 - val_loss: 0.0672 - val_accuracy: 0.9834
Epoch 25/30
46/46 [==============================] - 49s 1s/step - loss: 0.0448 - accuracy: 0.9827 - val_loss: 0.0530 - val_accuracy: 0.9862
Epoch 26/30
46/46 [==============================] - 48s 1s/step - loss: 0.0736 - accuracy: 0.9717 - val_loss: 0.0840 - val_accuracy: 0.9724
Epoch 27/30
46/46 [==============================] - 49s 1s/step - loss: 0.0585 - accuracy: 0.9800 - val_loss: 0.0815 - val_accuracy: 0.9724
Epoch 28/30
46/46 [==============================] - 48s 1s/step - loss: 0.0569 - accuracy: 0.9841 - val_loss: 0.0666 - val_accuracy: 0.9862
Epoch 29/30
46/46 [==============================] - 47s 1s/step - loss: 0.0549 - accuracy: 0.9807 - val_loss: 0.0404 - val_accuracy: 0.9862
Epoch 30/30
46/46 [==============================] - 47s 1s/step - loss: 0.0508 - accuracy: 0.9821 - val_loss: 0.0841 - val_accuracy: 0.9696
###Markdown
TASK 7 : Performance Evaluation
###Code
#get the keys of history object
#plot graph between training and validation loss
history.history.keys()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['Training','Validation'])
plt.title(['Training and validation losses'])
plt.xlabel('epoch')
#plot graph between training and validation accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['Training','Validation'])
plt.title(['Training and validation accuracy'])
plt.xlabel('epoch')
# get the test accuracy and loss
test_loss,test_acc=model.evaluate(test_generator)
print('test loss:{} test acc:{}'.format(test_loss,test_acc))
###Output
16/16 [==============================] - 10s 626ms/step - loss: 0.0645 - accuracy: 0.9793
test loss:0.06453290581703186 test acc:0.9793388247489929
###Markdown
TASK 8 : Prediction On New Data
###Code
from google.colab import files
from tensorflow.keras.preprocessing import image
upload=files.upload()
for filename in upload.keys():
    img_path='/content/'+filename
    img=image.load_img(img_path,target_size=(150,150))
    images=image.img_to_array(img)/255.0  # rescale the pixels as the data generators did
    images=np.expand_dims(images,axis=0)
    prediction=model.predict(images)
    print(filename)
    # class indices are {'COVID19': 0, 'NORMAL': 1}, so a sigmoid output below 0.5 means COVID19
    if(prediction[0][0] < 0.5):
        print("covid detected")
    else:
        print("covid not detected")
###Output
_____no_output_____ |
MaterialCursoPython/Fase 2 - Manejo de datos y optimizacion/Tema 04 - Colecciones de datos/Apuntes/Leccion 3 (Apuntes) - Diccionarios.ipynb | ###Markdown
Dictionaries
Along with lists, dictionaries are the most widely used collections. They are based on a mapped structure in which each element of the collection is identified by a unique key. Therefore, there cannot be two identical keys. In other languages they are known as associative arrays.
###Code
vacio = {}
c = set()
c ={1,2,2,4,5}
print(vacio)
c
###Output
{}
###Markdown
Type of a variable
###Code
type(c)
###Output
_____no_output_____
###Markdown
Definition: for each element the structure is defined as -> key:value
###Code
colores = {'amarillo':'yellow','azul':'blue'}
for color in colores:
if 'yellow' == colores[color]:
print(color)
###Output
amarillo
###Markdown
Elements can also be added on the fly
###Code
colores['verde'] = 'green'
colores
colores['azul']
colores['amarillo']
###Output
_____no_output_____
###Markdown
Keys can also be numbers, but they are a bit confusing
###Code
numeros = {10:'diez',20:'veinte'}
numeros[10]
###Output
_____no_output_____
###Markdown
Modifying a value using its key
###Code
colores['amarillo'] = 'white'
colores
###Output
_____no_output_____
###Markdown
The del() function
It is used to delete an element from the dictionary.
###Code
del(colores['amarillo'])
colores
###Output
_____no_output_____
###Markdown
Working directly with records
###Code
edades = {'Hector':27,'Juan':45,'Maria':34}
edades
edades['Hector']+=1
edades
edades['Juan'] + edades['Maria']
###Output
_____no_output_____
###Markdown
Sequential reading with for .. in ..
It is possible to use a for loop to iterate over the elements of the dictionary:
###Code
for edad in edades:
print(edad)
###Output
Maria
Hector
Juan
###Markdown
The problem is that the keys are returned, not the values
To solve this, we should look up the dictionary with the key of each element.
###Code
for clave in edades:
print(edades[clave])
for clave in edades:
print(clave,edades[clave])
###Output
Maria 34
Hector 28
Juan 45
###Markdown
The .items() method
It makes reading the key and value of each element easier because it automatically returns both in every iteration:
###Code
for c,v in edades.items():
print(c,v)
###Output
Maria 34
Hector 28
Juan 45
###Markdown
Example using dictionaries and lists together
We can create our own advanced structures by mixing both collections. While the dictionaries handle the individual properties of each record, the lists let us handle all of the records as a group.
###Code
personajes = []
p = {'Nombre':'Gandalf','Clase':'Mago','Raza':'Humano'} # Not sure if he was actually a human, but he looked like one
personajes.append(p)
personajes
p = {'Nombre':'Legolas','Clase':'Arquero','Raza':'Elfo'}
personajes.append(p)
p = {'Nombre':'Gimli','Clase':'Guerrero','Raza':'Enano'}
personajes.append(p)
personajes
for p in personajes:
print(p['Nombre'], p['Clase'], p['Raza'])
###Output
Gandalf Mago Humano
Legolas Arquero Elfo
Gimli Guerrero Enano
|
NPTEL-Course-Lecture Programmes/Pager.ipynb | ###Markdown
🙂 Welcome
👇 Here we are reading nodes/edges from a text file named page_rank.txt and plotting a graph using the values present in the file.
[page_rank.txt](https://github.com/Lakhankumawat/LearnPython/blob/main/NPTEL-Course-Lecture%20Programmes/page_rank.txt)
Program Code :
###Code
import networkx as nx
import matplotlib.pyplot as plt
G=nx.read_edgelist(r"page_rank.txt",create_using=nx.DiGraph,nodetype=str)
nx.draw(G,with_labels=True)
plt.show()
###Output
_____no_output_____ |
ML with Python JB 2.ipynb | ###Markdown
10. Machine Learning Algorithm Performance Metrics
Evaluating machine learning algorithms is very important. The choice of metrics influences how the performance of machine learning algorithms is measured and compared. They influence how you weight the importance of different characteristics in the results and your ultimate choice of which algorithm to choose.

10.1 Algorithm Evaluation Metrics
The recipes evaluate the same algorithms: Logistic Regression for classification and Linear Regression for the regression problems. A 10-fold cross validation test harness is used to demonstrate each metric, and the cross_val_score function is used to report the performance in each recipe. All scores are reported so that they can be sorted in ascending order (largest score is best).

10.2 Classification Metrics
In this section we will review how to use the metrics below.

10.2.1 Classification Accuracy
Classification accuracy is the number of correct predictions made as a ratio of all predictions made. This is the most common evaluation metric for classification problems. It is really only suitable when there are an equal number of observations in each class (a rare case) and all predictions and prediction errors are equally important (often not the case).

10.2.2 Logarithmic Loss
Logarithmic Loss is a performance metric for evaluating the predictions of probabilities of membership to a given class. The scalar probability between 0 and 1 can be seen as a measure of confidence for a prediction by an algorithm. Correct or incorrect predictions are rewarded or punished proportionally to the confidence of the prediction.

10.2.3 Area Under ROC Curve (AUC)
Area Under ROC Curve is a performance metric for binary classification problems. The AUC represents a model's ability to discriminate between positive and negative classes. An area of 1.0 represents a model that made all predictions perfectly. An area of 0.5 represents a model that is as good as random. ROC can be broken down into sensitivity and specificity. Sensitivity is the true positive rate, also called the recall: it is the number of instances from the positive (first) class that were actually predicted correctly. Specificity is also called the true negative rate: it is the number of instances from the negative (second) class that were actually predicted correctly.

10.2.4 Confusion Matrix
The confusion matrix is a presentation of the accuracy of a model with two or more classes. The table presents predictions on the x-axis and accuracy outcomes on the y-axis. The cells of the table are the number of predictions made by a machine learning algorithm. For example, a machine learning algorithm can predict 0 or 1, and each prediction may actually have been a 0 or 1. Predictions for 0 that were actually 0 appear in the cell for prediction = 0 and actual = 0, whereas predictions for 0 that were actually 1 appear in the cell for prediction = 0 and actual = 1, and so on.

10.2.5 Classification Report
The classification report gives you a quick idea of the accuracy of a model using a number of measures while working on classification problems. The classification_report() function displays the precision, recall, F1-score and support for each class.
###Code
from pandas import read_csv
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
rawdata = read_csv('C:/Users/Satish/python_files/diabetes.csv')
array = rawdata.values
x = array[:,0:8]
y = array[:,8]
kfold = KFold(n_splits=10, random_state=7)
model = LogisticRegression()
################ Cross Validation Classification Accuracy ############################
# scoring = 'accuracy'
# result = cross_val_score(model, x, y, cv = kfold, scoring = scoring)
# print("Mean Accuracy: ", result.mean()*100)
# print("Standard Deviation Accuracy: ", result.std()*100)
# #### Mean Accuracy: 76.95, std Accuracy:4.84
################ Cross Validation Classification using Logarithmic Loss ###############
# scoring = 'neg_log_loss'
# result = cross_val_score(model, x, y, cv = kfold, scoring = scoring)
# print("Mean Accuracy: ", result.mean()*100)
# print("Standard Deviation Accuracy: ", result.std()*100)
#### Mean Accuracy: -49.26, std Accuracy:4.68
#### Smaller logloss is better with 0 representing a perfect logloss. From above,
#### the measure is inverted to be ascending when using the cross val score() function.
################ Cross Validation Classification using Area Under ROC Curve (AUC) ######
# scoring = 'roc_auc'
# result = cross_val_score(model, x, y, cv = kfold, scoring = scoring)
# print("Mean Accuracy: ", result.mean()*100)
# print("Standard Deviation Accuracy: ", result.std()*100)
#### Mean Accuracy: 82.34, std Accuracy:4.07
#### AUC is relatively close to 1 and greater than 0.5, suggesting some skill in predictions
################ Cross Validation Classification using Confusion Matrix ###############
test_size = 0.33
seed = 7
x_train,x_test,y_train,y_test = train_test_split(x, y, test_size = test_size, random_state = seed)
model.fit(x_train,y_train)
predict = model.predict(x_test)
# matrix = confusion_matrix(y_test, predict)
# print(matrix)
### the majority of predictions fall on the diagonal of the matrix, i.e. correct predictions.
################ Cross Validation Classification using Classification Report ############
### the report reuses the train/test split, fitted model and predictions from just above
report = classification_report(y_test, predict)
print(report)
### good precision and recall for the algorithm
###Output
precision recall f1-score support
0.0 0.77 0.87 0.82 162
1.0 0.71 0.55 0.62 92
avg / total 0.75 0.76 0.75 254
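###Markdown
The AUC discussion above defines sensitivity and specificity; as an added illustration (not part of the original recipe), they can be computed directly from the confusion matrix of the hold-out split used in the previous cell, treating class 1 (onset of diabetes) as the positive class.
###Code
# sensitivity (true positive rate / recall) and specificity (true negative rate)
# computed from the hold-out predictions made above
matrix = confusion_matrix(y_test, predict)
tn, fp, fn, tp = matrix.ravel()
print('Sensitivity: {:.2f}'.format(tp / (tp + fn)))
print('Specificity: {:.2f}'.format(tn / (tn + fp)))
###Output
_____no_output_____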
###Markdown
10.3 Regression Metrics
For the regression metrics, the Boston House Price dataset is used to demonstrate a regression problem where all of the input variables are also numeric. Below are the most common metrics for evaluating predictions on regression machine learning problems.

10.3.1 Mean Absolute Error (MAE)
The Mean Absolute Error is the sum of the absolute differences between predictions and actual values. It gives an idea of how wrong the predictions were. The measure gives an idea of the magnitude of the error, but no idea of the direction (e.g. over or under predicting).

10.3.2 Mean Squared Error (MSE)
Mean Squared Error is much like the mean absolute error in that it provides a gross idea of the magnitude of error. Taking the square root of the mean squared error converts the units back to the original units of the output variable and can be meaningful for description and presentation. This is called the Root Mean Squared Error (or RMSE).

10.3.3 R Squared Metric (R^2)
This metric provides an indication of the goodness of fit of a set of predictions to the actual values. In statistical literature this measure is called the coefficient of determination. This is a value between 0 and 1 for no-fit and perfect fit respectively.
###Code
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO',
'B', 'LSTAT', 'MEDV']
data = read_csv('C:/Users/Satish/python_files/housing.csv', delim_whitespace=True, names=names)
# print(data.head(10))
array = data.values
x = array[:,0:13]
y = array[:,13]
kfold = KFold(n_splits=10, random_state=7)
model = LinearRegression()
############# Cross Validation Regression using Mean Absolute Error #################
# scoring = 'neg_mean_absolute_error'
# result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
# print("Mean: ", result.mean())
# print("STD: ", result.std())
### Mean:-4.00 STD:2.08
### A value of 0 indicates no error or perfect predictions. Metric is inverted by the
### cross val score() function.
############# Cross Validation Regression using Mean Squared Error #################
# scoring = 'neg_mean_squared_error'
# result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
# print("Mean: ", result.mean())
# print("STD: ", result.std())
### Mean:-34.70 STD:45.57
### This metric too is inverted so that the results are increasing.
############# Cross Validation Regression using R^2 ##################################
# scoring = 'r2'
# result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
# print("Mean: ", result.mean())
# print("STD: ", result.std())
### Mean:0.20 STD:0.59
### the predictions have a poor fit to the actual values with a value closer to zero and less
### than 0.5.
###Output
Mean: 0.20252899006056085
STD: 0.5952960169512264
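###Markdown
The text above notes that taking the square root of the MSE gives the RMSE in the original units. As a small added illustration (reusing model, x, y and kfold from the previous cell; the exact value depends on the scikit-learn version), the cross-validated negative MSE can be converted to an RMSE like this:
###Code
# re-run cross validation with MSE scoring and convert to RMSE (original units of MEDV)
mse_scores = cross_val_score(model, x, y, cv=kfold, scoring='neg_mean_squared_error')
print('RMSE: {:.2f}'.format((-mse_scores.mean()) ** 0.5))
###Output
_____no_output_____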
###Markdown
11. Spot-Check Classification Algorithms
Spot-checking is a way of discovering which algorithms perform well on your machine learning problem. You cannot know which algorithms are best suited to your problem beforehand. You must trial a number of methods and focus attention on those that prove themselves the most promising.

11.1 Algorithm Spot-Checking
We cannot know which algorithm will work best on your dataset beforehand. We must use trial and error to discover a shortlist of algorithms that do well on the problem that we can then double down on and tune further. We can guess at what algorithms might do well on the dataset, and this can be a good starting point. Try a mixture of algorithms and see what is good at picking out the structure in the data. Below are some suggestions when spot-checking algorithms on your dataset:
1) Try a mixture of algorithm representations (e.g. instances and trees).
2) Try a mixture of learning algorithms (e.g. different algorithms for learning the same type of representation).
3) Try a mixture of modeling types (e.g. linear and nonlinear functions or parametric and nonparametric).
Each recipe is demonstrated on the Pima Indians onset of Diabetes dataset. A test harness using 10-fold cross validation is used to demonstrate how to spot-check each machine learning algorithm, and mean accuracy measures are used to indicate algorithm performance.

11.3 Linear Machine Learning Algorithms
This section demonstrates recipes for how to use two linear machine learning algorithms, i.e. logistic regression and linear discriminant analysis.

11.3.1 Logistic Regression
Logistic regression assumes a Gaussian distribution for the numeric input variables and can model binary classification problems.

11.3.2 Linear Discriminant Analysis
Linear Discriminant Analysis or LDA is a statistical technique for binary and multiclass classification. It too assumes a Gaussian distribution for the numerical input variables.

11.4 Nonlinear Machine Learning Algorithms
Recipes for how to use 4 nonlinear machine learning algorithms.

11.4.1 k-Nearest Neighbors (KNN)
The k-Nearest Neighbors algorithm uses a distance metric to find the k most similar instances in the training data for a new instance and takes the mean outcome of the neighbors as the prediction.

11.4.2 Naive Bayes
Naive Bayes calculates the probability of each class and the conditional probability of each class given each input value. These probabilities are estimated for new data and multiplied together, assuming that they are all independent (a simple or naive assumption). When working with real-valued data, a Gaussian distribution is assumed to easily estimate the probabilities for input variables using the Gaussian Probability Density Function.

11.4.3 Classification and Regression Trees (CART)
Classification and Regression Trees (CART or just decision trees) construct a binary tree from the training data. Split points are chosen greedily by evaluating each attribute and each value of each attribute in the training data in order to minimize a cost function (like the Gini index).

11.4.4 Support Vector Machines (SVM)
Support Vector Machines seek a line that best separates two classes. Those data instances that are closest to the line that best separates the classes are called support vectors and influence where the line is placed. SVM has been extended to support multiple classes. Of particular importance is the use of different kernel functions via the kernel parameter.
###Code
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
rawdata = read_csv('C:/Users/Satish/python_files/diabetes.csv')
array = rawdata.values
x = array[:,0:8]
y = array[:,8]
kfold = KFold(n_splits=10, random_state=7)  # note: recent scikit-learn requires shuffle=True whenever random_state is set (applies to the KFold calls throughout)
######################## Logistic Regression Classification #############################
model = LogisticRegression()
result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Mean estimated accuracy = 0.769
######################## Linear Discriminant Analysis ####################################
model = LinearDiscriminantAnalysis()
result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Mean estimated accuracy = 0.773
######################## k-Nearest Neighbors Classification #############################
model = KNeighborsClassifier()
result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Mean estimated accuracy = 0.726
######################## Gaussian Naive Bayes Classification #############################
model = GaussianNB()
result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Mean estimated accuracy = 0.755
######################### CART Classification ############################################
model = DecisionTreeClassifier()
result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Mean estimated accuracy = 0.699
######################## Support Vector Machines Classification ##########################
model = SVC()
result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Mean estimated accuracy = 0.651
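######################## SVM kernel variation (illustrative sketch) ######################
# The markdown above notes the importance of the kernel parameter; this commented sketch
# (not part of the recorded output) shows how an alternative kernel could be spot-checked
# with the same x, y and kfold:
# model = SVC(kernel='linear')
# result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())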
###Output
0.773462064251538
###Markdown
12. Spot-Check Regression Algorithms Spot-checking is a way of discovering which algorithms perform well on a machine learning problem. We cannot know which algorithms are best suited to the problem beforehand. We must trial a number of methods and focus attention on those that prove themselves the most promising. In this section we will discover six machine learning algorithms that you can use when spot-checking your regression problem in Python with scikit-learn. 12.2 Linear Machine Learning Algorithms This section shows how to use four different linear machine learning algorithms for regression in Python with scikit-learn. 12.2.1 Linear Regression Linear regression assumes that the input variables have a Gaussian distribution. It is also assumed that input variables are relevant to the output variable and that they are not highly correlated with each other. 12.2.2 Ridge Regression Ridge regression is an extension of linear regression where the loss function is modified to minimize the complexity of the model measured as the sum squared value of the coefficient values (also called the L2-norm). 12.2.3 LASSO Regression The Least Absolute Shrinkage and Selection Operator (or LASSO for short) is a modification of linear regression, like ridge regression, where the loss function is modified to minimize the complexity of the model measured as the sum absolute value of the coefficient values (also called the L1-norm). 12.2.4 ElasticNet Regression ElasticNet is a form of regularization regression that combines the properties of both Ridge Regression and LASSO regression. It seeks to minimize the complexity of the regression model (magnitude and number of regression coefficients) by penalizing the model using both the L2-norm (sum squared coefficient values) and the L1-norm (sum absolute coefficient values). 12.3 Nonlinear Machine Learning Algorithms This section provides examples of how to use three different nonlinear machine learning algorithms for regression in Python with scikit-learn. 12.3.1 K-Nearest Neighbors (KNN) The k-Nearest Neighbors algorithm (or KNN) locates the k most similar instances in the training dataset for a new data instance. From the k neighbors, a mean or median output variable is taken as the prediction. Of note is the distance metric used (the metric argument). The Minkowski distance is used by default, which is a generalization of both the Euclidean distance (used when all inputs have the same scale) and the Manhattan distance (for when the scales of the input variables differ). 12.3.2 Classification and Regression Trees (CART) CART uses the training data to select the best points to split the data in order to minimize a cost metric. The default cost metric for regression decision trees is the mean squared error, specified in the criterion parameter. 12.3.3 Support Vector Machines (SVM) Support Vector Machines (SVM) were developed for binary classification. The technique has been extended for the prediction of real-valued problems, called Support Vector Regression (SVR).
###Code
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO',
'B', 'LSTAT', 'MEDV']
data = read_csv('C:/Users/Satish/python_files/housing.csv', delim_whitespace=True, names=names)
# print(data.head(10))
array = data.values
x = array[:,0:13]
y = array[:,13]
kfold = KFold(n_splits=10, random_state=7)
############################ Linear Regression #######################################
model = LinearRegression()
scoring = 'neg_mean_squared_error'
result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
# print(result.mean())
### Mean estimated accuracy = -34.70
############################ Ridge Regression #######################################
model = Ridge()
scoring = 'neg_mean_squared_error'
result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
# print(result.mean())
### Mean estimated accuracy = -34.07
############################ Lasso Regression #######################################
model = Lasso()
scoring = 'neg_mean_squared_error'
result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
# print(result.mean())
### Mean estimated accuracy = -34.46
############################ ElasticNet Regression #######################################
model = ElasticNet()
scoring = 'neg_mean_squared_error'
result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
# print(result.mean())
### Mean estimated accuracy = -34.16
############################ KNN Regression #######################################
model = KNeighborsRegressor()
scoring = 'neg_mean_squared_error'
result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
# print(result.mean())
### Mean estimated accuracy = -107.28
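# The metric argument mentioned above selects the distance measure; an illustrative,
# commented sketch (not part of the recorded output) using Manhattan distance:
# model = KNeighborsRegressor(metric='manhattan')
# result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
# print(result.mean())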
############################ CART Regression #######################################
model = DecisionTreeRegressor()
scoring = 'neg_mean_squared_error'
result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
# print(result.mean())
### Mean estimated accuracy = -35.65
############################ SVR Regression #######################################
model = SVR()
scoring = 'neg_mean_squared_error'
result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
print(result.mean())
### Mean estimated accuracy = -91.04
###Output
-91.04782433324428
###Markdown
13. Compare Machine Learning Algorithms It is important to compare the performance of multiple different machine learning algorithms consistently. In this chapter you will discover how you can create a test harness to compare multiple different machine learning algorithms in Python with scikit-learn. You can use this test harness as a template on your own machine learning problems and add more and different algorithms to compare. 13.1 Choose The Best Machine Learning Model While working on a machine learning project, we often end up with multiple good models to choose from, and each model will have different performance characteristics. Using resampling methods like cross validation, you can get an estimate for how accurate each model may be on unseen data. We need to be able to use these estimates to choose one or two best models from the suite of models that we have created. When we have a new dataset, it is a good idea to visualize the data using different techniques in order to look at the data from different perspectives. The same idea applies to model selection. We should use a number of different ways of looking at the estimated accuracy of machine learning algorithms in order to choose the one or two algorithms to finalize. 13.2 Compare Machine Learning Algorithms Consistently The key to a fair comparison of machine learning algorithms is ensuring that each algorithm is evaluated in the same way on the same data. This can be achieved by forcing each algorithm to be evaluated on a consistent test harness. In the example below six different classification algorithms are compared on a single dataset, the diabetes dataset, which has two classes and eight numeric input variables of varying scales. The 10-fold cross validation procedure is used to evaluate each algorithm with the same random seed to ensure that the same splits of the training data are performed and that each algorithm is evaluated in precisely the same way. 1) Logistic Regression. 2) Linear Discriminant Analysis. 3) k-Nearest Neighbors. 4) Classification and Regression Trees. 5) Naive Bayes. 6) Support Vector Machines.
###Code
from pandas import read_csv
from matplotlib import pyplot
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
rawdata = read_csv('C:/Users/Satish/python_files/diabetes.csv')
array = rawdata.values
x = array[:,0:8]
y = array[:,8]
#### Prepare model
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('GNB', GaussianNB()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('SVM', SVC()))
#### EValuate each model in turn
result = []
names = []
scoring = 'accuracy'
for name, model in models:
kfold = KFold(n_splits=10, random_state=7)
cv_result = cross_val_score(model, x, y, cv=kfold, scoring=scoring)
result.append(cv_result)
names.append(name)
print(name, cv_result.mean(), cv_result.std())
#box plot algorithm comparison
fig = pyplot.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(result)
ax.set_xticklabels(names)
pyplot.show()
#### From these results, it would suggest that both logistic regression and linear discriminant
#### analysis are perhaps worthy of further study on this problem.
###Output
LR 0.7695146958304853 0.04841051924567195
LDA 0.773462064251538 0.05159180390446138
KNN 0.7265550239234451 0.06182131406705549
GNB 0.7551777170198223 0.04276593954064409
CART 0.6900034176349965 0.06249622191574835
SVM 0.6510252904989747 0.07214083485055327
###Markdown
14. Automate ML Workflows with Pipelines There are standard workflows in a machine learning project that can be automated. In Python scikit-learn, Pipelines help to clearly define and automate these workflows. 14.1 Automating Machine Learning Workflows There are standard workflows in applied ML to overcome common problems like data leakage in the test harness. Python scikit-learn provides a Pipeline utility to help automate machine learning workflows by allowing a linear sequence of data transforms to be chained together, culminating in a modeling process that can be evaluated. The goal is to ensure that all of the steps in the pipeline are constrained to the data available for the evaluation, such as the training dataset or each fold of the cross validation procedure. 14.2 Data Preparation and Modeling Pipeline An easy trap to fall into in applied ML is leaking data from the training dataset to the test dataset. To avoid this trap we need a robust test harness with strong separation of training and testing, including data preparation. Data preparation is one easy way to leak knowledge of the whole training dataset to the algorithm, e.g. preparing your data using normalization or standardization on the entire training dataset before learning would not be a valid test because the training dataset would have been influenced by the scale of the data in the test set. Pipelines help to prevent data leakage in the test harness by ensuring that data preparation like standardization is constrained to each fold of the cross validation procedure. The example below demonstrates this important data preparation and model evaluation workflow on the diabetes dataset. The pipeline is defined with two steps: 1) Standardize the data. 2) Learn a Linear Discriminant Analysis model. The pipeline is then evaluated using 10-fold cross validation.
###Code
####### Create a pipeline that standardizes the data then creates a model ###########
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
rawdata = read_csv('C:/Users/Satish/python_files/diabetes.csv')
array = rawdata.values
x = array[:,0:8]
y = array[:,8]
### Create the Pipeline with list of steps
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('LDA', LinearDiscriminantAnalysis()))
model = Pipeline(estimators) # provide list of steps to Pipeline for process the data
### Evaluate the Pipeline
kfold = KFold(n_splits=10, random_state=7)
result = cross_val_score(model, x, y, cv=kfold)
print(result.mean())
#### Mean Accuracy = 0.77
#### The Pipeline itself is treated like an estimator and evaluated entirety by the k-fold
#### cross validation procedure.
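# Illustrative sketch (commented, not run here): once fit, the same Pipeline applies the
# scaler and the LDA model together, so new samples are standardized with statistics
# learned only from the data the Pipeline was fit on.
# model.fit(x, y)
# print(model.predict(x[:5]))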
###Output
0.773462064251538
###Markdown
14.3 Feature Extraction and Modeling Pipeline Feature extraction is a procedure that is susceptible to data leakage. Like data preparation, feature extraction procedures must be restricted to the data in the training dataset. The pipeline provides a handy tool called the FeatureUnion which allows the results of multiple feature selection and extraction procedures to be combined into a larger dataset on which a model can be trained. Importantly, all the feature extraction and the feature union occur within each fold of the cross validation procedure. The example below demonstrates the pipeline defined with four steps: 1) Feature Extraction with Principal Component Analysis (3 features). 2) Feature Extraction with Statistical Selection (6 features). 3) Feature Union. 4) Learn a Logistic Regression Model. The pipeline is then evaluated using 10-fold cross validation.
###Code
####### Create a pipeline that extracts feature from the data then creates a model ###########
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.pipeline import FeatureUnion
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
rawdata = read_csv('C:/Users/Satish/python_files/diabetes.csv')
array = rawdata.values
x = array[:,0:8]
y = array[:,8]
### create feature Union
features = []
features.append(('pca', PCA(n_components=3))) # step 1 Feature extraction with PCA, 3 component
features.append(('select_best', SelectKBest(k=6))) #step 2 Feature selection with selectKbest
feature_union = FeatureUnion(features) #FeatureUnion is it's own Pipeline of PCA and selectKbest
### create Pipeline
estimators = []
estimators.append(('feature_union', feature_union)) # FeatureUnion feed to final pipeline
estimators.append(('logistic', LogisticRegression()))
model = Pipeline(estimators) # Final pipeline with FeatureUnion and LogisticRegression
### Evaluate the Pipeline
kfold = KFold(n_splits=10, random_state=7)
result = cross_val_score(model, x, y, cv=kfold)
print(result.mean())
### Mean Accuracy = 0.776
###Output
0.7760423786739576
###Markdown
15. Improve Performance with Ensembles Ensembles can give a boost in accuracy on the dataset. 15.1 Combine Models Into Ensemble Predictions The three most popular methods for combining the predictions from different models are: 1) Bagging - Building multiple models (typically of the same type) from different subsamples of the training dataset. 2) Boosting - Building multiple models (typically of the same type), each of which learns to fix the prediction errors of a prior model in the sequence of models. 3) Voting - Building multiple models (typically of differing types) and using simple statistics (like calculating the mean) to combine predictions. 15.2 Bagging Algorithms Bootstrap Aggregation (or Bagging) involves taking multiple samples from your training dataset (with replacement) and training a model for each sample. The final output prediction is averaged across the predictions of all of the sub-models. 15.2.1 Bagged Decision Trees Bagging performs best with algorithms that have high variance. A popular example is decision trees, often constructed without pruning. The example below uses the BaggingClassifier with the Classification and Regression Trees algorithm (DecisionTreeClassifier). A total of 100 trees are created. 15.2.2 Random Forest Random Forests is an extension of bagged decision trees. Samples of the training dataset are taken with replacement, but the trees are constructed in a way that reduces the correlation between individual classifiers. Rather than greedily choosing the best split point in the construction of each tree, only a random subset of features is considered for each split. 15.2.3 Extra Trees Extra Trees are a modification of bagging where random trees are constructed from samples of the training dataset. 15.3 Boosting Algorithms Boosting ensemble algorithms create a sequence of models that attempt to correct the mistakes of the models before them in the sequence. Once created, the models make predictions which may be weighted by their demonstrated accuracy, and the results are combined to create a final output prediction. 15.3.1 AdaBoost AdaBoost works by weighting instances in the dataset by how easy or difficult they are to classify, allowing the algorithm to pay more or less attention to them in the construction of subsequent models. 15.3.2 Stochastic Gradient Boosting Stochastic Gradient Boosting (also called Gradient Boosting Machines) is one of the most sophisticated ensemble techniques. It is also a technique that is proving to be perhaps one of the best techniques available for improving performance via ensembles. You can construct a Gradient Boosting model for classification using the GradientBoostingClassifier class. 15.4 Voting Ensemble Voting is one of the simplest ways of combining the predictions from multiple machine learning algorithms. It works by first creating two or more standalone models from the training dataset. A Voting Classifier can then be used to wrap the models and average the predictions of the sub-models when asked to make predictions for new data. The predictions of the sub-models can be weighted, but specifying the weights for classifiers manually or even heuristically is difficult. More advanced methods can learn how to best weight the predictions from sub-models; this is called stacking (stacked aggregation).
###Code
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
rawdata = read_csv('C:/Users/Satish/python_files/diabetes.csv')
array = rawdata.values
x = array[:,0:8]
y = array[:,8]
seed = 7
kfold = KFold(n_splits=10, random_state=7)
################# Bagged Decision Trees for Classification ####################
# num_trees = 100
# cart = DecisionTreeClassifier()
# model = BaggingClassifier(base_estimator=cart, n_estimators=num_trees, random_state=seed)
# result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Estimated mean Accuracy = 0.77
#################### Random Forest Classification ##############################
# max_feature = 3
# num_trees = 100
# model = RandomForestClassifier(n_estimators=num_trees, max_features=max_feature)
# result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Estimated mean Accuracy = 0.772
#################### Extra trees Classification ##############################
# max_feature = 7
# num_trees = 100
# model = ExtraTreesClassifier(n_estimators=num_trees, max_features=max_feature)
# result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Estimated mean Accuracy = 0.755
#################### AdaBoost Classification ####################################
# num_trees = 100
# model = AdaBoostClassifier(n_estimators=num_trees, random_state=seed)
# result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Estimated mean Accuracy = 0.772
#################### Stochastic Gradient Boosting Classification #####################
num_trees = 100
model = GradientBoostingClassifier(n_estimators=num_trees, random_state=seed)
result = cross_val_score(model, x, y, cv=kfold)
print(result.mean())
### Estimated mean Accuracy = 0.766
#################### Voting Ensemble for Classification ###############################
##### create sub models
# estimator = []
# estimator.append(('cart', DecisionTreeClassifier()))
# estimator.append(('logic', LogisticRegression()))
# estimator.append(('svm', SVC()))
# model = VotingClassifier(estimator)
# result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())
### Estimated mean Accuracy = 0.739
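#################### Stacking (Stacked Aggregation) sketch ############################
# Stacking is mentioned in the markdown above but not demonstrated; this commented,
# illustrative sketch assumes scikit-learn >= 0.22 (which provides StackingClassifier)
# and reuses x, y and kfold from this cell:
# from sklearn.ensemble import StackingClassifier
# base_learners = [('cart', DecisionTreeClassifier()), ('svm', SVC())]
# model = StackingClassifier(estimators=base_learners, final_estimator=LogisticRegression())
# result = cross_val_score(model, x, y, cv=kfold)
# print(result.mean())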
###Output
0.7669002050580999
###Markdown
16. Improve Performance with Algorithm Tuning Machine learning models are parameterized so that their behavior can be tuned for a given problem. Models can have many parameters, and finding the best combination of parameters can be treated as a search problem. 16.1 Machine Learning Algorithm Parameters Algorithm tuning is a final step in the process of applied machine learning before finalizing your model. It is sometimes called hyperparameter optimization, where the algorithm parameters are referred to as hyperparameters, whereas the coefficients found by the machine learning algorithm itself are referred to as parameters. Optimization suggests the search nature of the problem. We can use different search strategies to find a good and robust parameter or set of parameters for an algorithm on a given problem. 16.2 Grid Search Parameter Tuning Grid search is an approach to parameter tuning that will methodically build and evaluate a model for each combination of algorithm parameters specified in a grid. The example below evaluates different alpha values for the Ridge Regression algorithm on the diabetes dataset, performing a grid search using the GridSearchCV class. 16.3 Random Search Parameter Tuning Random search is an approach to parameter tuning that will sample algorithm parameters from a random distribution (i.e. uniform) for a fixed number of iterations. A model is constructed and evaluated for each combination of parameters chosen. A random search for algorithm parameters is performed using the RandomizedSearchCV class. The example below evaluates different random alpha values between 0 and 1 for the Ridge Regression algorithm on the diabetes dataset. A total of 100 iterations are performed, with uniformly random alpha values selected in the range between 0 and 1.
###Code
import numpy
from pandas import read_csv
from scipy.stats import uniform
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
rawdata = read_csv('C:/Users/Satish/python_files/diabetes.csv')
array = rawdata.values
x = array[:,0:8]
y = array[:,8]
#################### Grid search for algorithm Tunning ###############################
# alph = numpy.array([1,0.1,0.01,0.001,0.0001,0])
# param_grid = dict(alpha=alph)
# model = Ridge()
# grid = GridSearchCV(estimator=model, param_grid=param_grid)
# grid.fit(x, y)
# print(grid.best_score_) # best_score = 0.279 ## the optimal score achieved
# print(grid.best_estimator_.alpha) # alpha = 1.0 parameters in the grid that achieved that score
#################### Randomized for algorithm Tunning ###############################
param_grid = {'alpha': uniform()}
model = Ridge()
search = RandomizedSearchCV(estimator=model, param_distributions=param_grid, n_iter=100,
random_state=7)
search.fit(x, y)
print(search.best_score_) # 0.279
print(search.best_estimator_.alpha) # 0.977 optimal alpha value near 1.0 is discovered.
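# A possible final step (illustrative sketch, commented so the recorded output is unchanged):
# refit the best configuration found by the search on all of the data before finalizing it.
# best_model = search.best_estimator_
# best_model.fit(x, y)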
###Output
0.27961712703051084
0.9779895119966027
###Markdown
17. Save and Load ML Models Finding an accurate machine learning model is not the end of the project. We need to save the model to a file and load it later in order to make predictions. 17.1 Finalize Your Model with pickle Pickle is the standard way of serializing objects in Python. We can use the pickle operation to serialize machine learning algorithms and save the serialized format to a file. Later we can load this file to deserialize the model and use it to make new predictions. The example demonstrates how to train a model on the dataset, save the model, and load it to make predictions on unseen data. 17.2 Finalize Your Model with Joblib The Joblib library is part of the SciPy ecosystem and provides utilities for efficiently saving and loading Python objects that make use of NumPy data structures. This can be useful for some machine learning algorithms that require a lot of parameters or store the entire dataset (e.g. k-Nearest Neighbors). The example below demonstrates how you can train a logistic regression model on the diabetes dataset, save the model to a file using Joblib, and load it to make predictions on the unseen test set.
###Code
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
# from pickle import dump
# from pickle import load
# Note: sklearn.externals.joblib was removed in newer scikit-learn releases; import joblib directly
from joblib import dump
from joblib import load
rawdata = read_csv('C:/Users/Satish/python_files/diabetes.csv')
array = rawdata.values
x = array[:,0:8]
y = array[:,8]
##### split the dataset and train the model
x_train, x_test,y_train, y_test = train_test_split(x, y, test_size=0.33, random_state=7)
model = LogisticRegression()
model.fit(x_train, y_train)
#################### Save model using Pickle ###############################
##### save model on local disk
# filename = 'final_model.sav'
# dump(model, open(filename, 'wb'))
#### load the model from disk and make prediction
# loaded_model = load(open(filename, 'rb'))
# result = loaded_model.score(x_test, y_test)
# print(result)
#################### Save model using Joblib ###############################
##### save model on local disk
filename = 'final_model23.sav'
dump(model, filename)
#### load the model from disk and make prediction
loaded_model = load(filename)
result = loaded_model.score(x_test, y_test)
print(result)
## the model saves a file as final_model23.sav on local disk and creates one file for each
## NumPy array in the model
## After the model is loaded an estimate of the accuracy of the model on unseen data.
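# Illustrative sketch (commented): the reloaded model can also be used for class predictions
# on new rows with the same 8 input columns, e.g.
# print(loaded_model.predict(x_test[:5]))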
###Output
0.7559055118110236
|
python/template.ipynb | ###Markdown
**Title**: A short explicit title for that notebook **Date**: DD-Mmm-YYYY **Description**: * This notebook is intended to provide a baseline structure to help standardize how Flywheel presents public facing Jupyter notebooks.* In general, a Jupyter notebook should be structured as a tutorial that walks the user through one specific use case.* It should be possible to run this notebook on any Jupyter-compatible third-party platform such as [Google Colab](https://colab.research.google.com/) or [mybinder.org](https://mybinder.org/).* This template notebook is structured as follows: 1. Three required sections are described: [Requirements](Requirements), [Setup](Setup) and [Flywheel API Key and Client](Flywheel-API-Key-and-Client). 2. Some [Suggestions on how to structure your notebook](Suggestions-on-how-to-structure-your-notebook) are provided. 3. A brief [Style Guide](Style-Guide) is described. Requirements- Detail here the minimum requirements for using this notebook.- Examples: - Access to a Flywheel Instance - Flywheel permission (i.e. admin, read/write, etc.) - Pre-requisites in terms of access to data- Do not list here the required python packages as this is captured in the [Setup](Setup) section. The next 2 sections ([Setup](Setup) and [Flywheel API Key and Client](Flywheel-API-Key-and-Client)) should be found in any notebook and kept as consistent as possible. Setup Packages required for the execution of this notebook should be installed with `pip` (using the `!` jupyter operator to run shell commands). This is required to ensure that the notebook is "standalone" and to avoid any issue with undefined package requirements. It also allows the notebook to be run out of the box on Jupyter third-party platforms such as [Google Colab](https://colab.research.google.com/) or [mybinder.org](https://mybinder.org/).
###Code
# Here is an example to install the flywheel SDK
!pip install flywheel-sdk
###Output
_____no_output_____
###Markdown
Once installed, packages get imported. Imports should first list Python standard packages and then third-party packages.
###Code
# Python standard package come first
from getpass import getpass
import logging
import os
# Third party packages come second
import flywheel
from permission import check_user_permission
###Output
_____no_output_____
###Markdown
If useful, a logger can be instantiated to display information during notebook execution (e.g. useful to keep track of runtime).
###Code
# Instantiate a logger
logging.basicConfig(level=logging.INFO)
log = logging.getLogger('root')
###Output
_____no_output_____
###Markdown
Flywheel API Key and Client Tutorials based on Jupyter notebooks aim at illustrating interactions with a Flywheel instance using the Flywheel SDK. To communicate with a Flywheel instance you first need to authenticate with the Flywheel API, which requires getting an API_KEY for your account. You can get your API_KEY by following the steps described in the Flywheel SDK doc [here](https://flywheel-io.gitlab.io/product/backend/sdk/branches/master/python/getting_started.htmlapi-key). DANGER: Do NOT share your API key with anyone for any reason - it is the same as sharing your password and constitutes a HIPAA violation. ALWAYS obscure credentials from your code, especially when sharing with others/committing to a shared repository.
###Code
API_KEY = getpass('Enter API_KEY here: ')
###Output
_____no_output_____
###Markdown
Instantiate the Flywheel API client either using the API_KEY provided by the user input above or by reading it from the environment variable `FW_KEY`.
###Code
fw = flywheel.Client(API_KEY if 'API_KEY' in locals() else os.environ.get('FW_KEY'))
###Output
_____no_output_____
###Markdown
You can check which Flywheel instance you have been authenticated against with the following:
###Code
log.info('You are now logged in as %s to %s', fw.get_current_user()['email'], fw.get_config()['site']['api_url'])
###Output
_____no_output_____
###Markdown
Suggestions on how to structure your notebook Constants Often you will have to define a few constants in your notebook which serve as the inputs. One such constant, for instance, is the API_KEY that was used to instantiate the Flywheel client. Other examples could be a PROJECT_ID or PROJECT_LABEL that will be used to identify a specific project.
###Code
PROJECT_LABEL = 'MyProject'
###Output
_____no_output_____
###Markdown
Requirements Before getting started, you need to make sure the user has the right permissions to run specific actions like `run_gear` or `create_project_container`, etc. We will be calling the `check_user_permission` function that we imported earlier to verify the user's permissions. First of all, we will define the minimum requirements needed in each container to run this notebook.
###Code
min_reqs = {
"site": "user",
"group": "rw",
"project": ['files_download',
'files_create_upload',
'files_modify_metadata',
'files_delete_non_device_data',
'files_delete_device_data',]
}
###Output
_____no_output_____
###Markdown
The `check_user_permission` function takes 5 arguments: `fw_client`, `min_reqs`, `group_id` (if needed), `project_label` (if needed) and `show_compatible` (which defaults to `True`). In this example, since `PROJECT_LABEL` has been defined in the previous section, `GROUP_ID` needs to be defined before calling the function.
###Code
GROUP_ID = input('Please enter your Group ID here: ')
###Output
_____no_output_____
###Markdown
Here we will call `check_user_permission`, which will return True if both the group and the project meet the minimum requirements; otherwise a compatibility list will be printed.
###Code
check_user_permission(fw, min_reqs, group=GROUP_ID, project=PROJECT_LABEL)
###Output
_____no_output_____
###Markdown
Helper functions Your notebook may also need a few helper functions. We recommend that you define them first in the notebook and use [Google Style Python docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.htmlexample-google-style-python-docstrings) to make explicit what they do. Here is an example:
###Code
def get_project_id(fw, project_label):
"""Return the first project ID matching project_label
Args:
fw (flywheel.Client): A flywheel client
project_label (str): A Project label
Returns:
(str): Project ID or None if no project found
"""
project = fw.projects.find_first(f'label={project_label}')
if project:
return project.id
else:
return None
###Output
_____no_output_____
###Markdown
There are a few useful Jupyter operators that can be leveraged when functions are defined that way, such as displaying the docstring of any function with the `?` operator or the source code itself with `??`.
###Code
get_project_id?
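# The `??` operator mentioned above would display the full source of the function
# (assuming an interactive Jupyter session):
# get_project_id??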
###Output
_____no_output_____
###Markdown
Main script Once you have all the pieces defined, a short description of what the code does should be provided. In this script, we will be retrieving the Project ID and a message will be printed to notify the user whether the project exists or not.
###Code
project_id = get_project_id(fw, PROJECT_LABEL)
if project_id:
print(f'Project ID is: {project_id}.')
else:
print(f'No Project with label {PROJECT_LABEL} found.')
###Output
_____no_output_____ |
Species Segmentation_Iris/.ipynb_checkpoints/Species Segmentation with Cluster Analysis - Iris Dataset-checkpoint.ipynb | ###Markdown
Species Segmentation with Cluster Analysis The Iris flower dataset is one of the most popular ones for machine learning. You can read a lot about it online and have probably already heard of it: https://en.wikipedia.org/wiki/Iris_flower_data_set We didn't want to use it in the lectures, but believe that it would be very interesting for you to try it out (and maybe read about it on your own). There are 4 features: sepal length, sepal width, petal length, and petal width. Start by creating 2 clusters. Then standardize the data and try again. Does it make a difference? Use the Elbow rule to determine how many clusters there are. Import the relevant libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Load the data Load data from the csv file: 'iris_dataset.csv'.
###Code
rawdata = pd.read_csv('iris-dataset.csv')
rawdata
###Output
_____no_output_____
###Markdown
Plot the data For this exercise, try to cluster the iris flowers by the shape of their sepal. Use the 'sepal_length' and 'sepal_width' variables.
###Code
plt.scatter(rawdata['sepal_length'],rawdata['sepal_width'])
plt.xlabel('Length of Sepal')
plt.ylabel('Width of Sepal')
###Output
_____no_output_____
###Markdown
Clustering (unscaled data) Separate the original data into 2 clusters.
###Code
x = rawdata.copy()
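# Note: x copied here still contains all four features; to cluster on sepal shape only,
# as suggested above, one could instead use x = rawdata[['sepal_length','sepal_width']].copy()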
kmeans = KMeans(2)
kmeans
clusters = x.copy()
clusters['cluster_pred']=kmeans.fit_predict(x)
plt.scatter(clusters['sepal_length'],clusters['sepal_width'],c=clusters['cluster_pred'],cmap='rainbow')
plt.xlabel('Length of Sepal')
plt.ylabel('Width of Sepal')
###Output
_____no_output_____
###Markdown
Standardize the variables Import and use the `scale` method from sklearn's preprocessing module to standardize the data.
###Code
from sklearn import preprocessing
x_scaled = preprocessing.scale(x)
x_scaled
###Output
_____no_output_____
###Markdown
Clustering (scaled data)
###Code
kmeans_new = KMeans(2)
kmeans_new.fit(x_scaled)
clusters_new = x.copy()
clusters_new['cluster_pred'] = kmeans_new.fit_predict(x_scaled)
#display old unscaled data with cluster prediction(done with scaled data)
#better clarity in comparison
clusters_new
plt.scatter(clusters_new['sepal_length'],clusters_new['sepal_width'],c=clusters_new['cluster_pred'],cmap='rainbow')
plt.xlabel('Length of Sepal')
plt.ylabel('Width of Sepal')
###Output
_____no_output_____
###Markdown
Take Advantage of the Elbow Method WCSS
###Code
wcss =[]
for i in range(1,10):
kmeans = KMeans(i)
kmeans.fit(x_scaled)
wcss.append(kmeans.inertia_)
wcss
###Output
C:\Users\User\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:881: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
warnings.warn(
###Markdown
The Elbow Method
###Code
plt.plot(range(1,10),wcss)
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
###Output
_____no_output_____
###Markdown
Taking k = 3 as the best solution
###Code
kmeans_3 = KMeans(3)
kmeans_3.fit(x_scaled)
clusters_3 = x.copy()
clusters_3['cluster_pred'] = kmeans_3.fit_predict(x_scaled)
plt.scatter(clusters_3['sepal_length'],clusters_3['sepal_width'],c=clusters_3['cluster_pred'],cmap='rainbow')
plt.xlabel('Length of Sepal')
plt.ylabel('Width of Sepal')
###Output
_____no_output_____ |
LAB 3/Practice/1_NB_Classifier_Whether.ipynb | ###Markdown
**Aim: Implement Naive Bayes classifier: Weather Example** Step 1: Import necessary libraries. We will use the preprocessing and naive bayes libraries of sklearn
###Code
from sklearn import preprocessing
from sklearn.naive_bayes import GaussianNB, MultinomialNB
###Output
_____no_output_____
###Markdown
Step 2: Prepare dataset. Create feature sets for weather and temperature, and the class label play.
###Code
weather = ['Sunny', 'Sunny', 'Overcast', 'Rainy', 'Rainy','Rainy', 'Overcast',
'Sunny', 'Sunny', 'Rainy', 'Sunny', 'Overcast', 'Overcast', 'Rainy']
temp = ['Hot','Hot','Hot','Mild','Cool','Cool','Cool','Mild',
'Cool','Mild','Mild','Mild','Hot','Mild']
play=['No','No','Yes','Yes','Yes','No','Yes','No','Yes',
'Yes','Yes','Yes','Yes','No']
###Output
_____no_output_____
###Markdown
Step 3: Digitize the data set using encoding
###Code
#creating labelEncoder
le = preprocessing.LabelEncoder()
# Converting string labels into numbers.
weather_encoded=le.fit_transform(weather)
print("Weather:" ,weather_encoded)
temp_encoded=le.fit_transform(temp)
label=le.fit_transform(play)
print("Temp:",temp_encoded)
print("Play:",label)
###Output
Temp: [1 1 1 2 0 0 0 2 0 2 2 2 1 2]
Play: [0 0 1 1 1 0 1 0 1 1 1 1 1 0]
###Markdown
Step 4: Merge different features to prepare dataset
###Code
#Combinig weather and temp into single listof tuples
features=tuple(zip(weather_encoded,temp_encoded))
print("Features:",features)
###Output
Features: ((2, 1), (2, 1), (0, 1), (1, 2), (1, 0), (1, 0), (0, 0), (2, 2), (2, 0), (1, 2), (2, 2), (0, 2), (0, 1), (1, 2))
###Markdown
Step 5: Train ’Naive Bayes Classifier’
###Code
#Create a Classifier
model=MultinomialNB()
# Train the model using the training sets
model.fit(features,label)
###Output
_____no_output_____
###Markdown
Step 6: Predict Output for new data
###Code
#Predict Output
predicted= model.predict([[0,2]]) # 0:Overcast, 2:Mild
print("Predicted Value:", predicted)
predicted= model.predict([[0,1]]) # 0:Overcast, 1:Hot
print("Predicted Value:", predicted)
predicted= model.predict([[2,2]]) # 2:Sunny, 2:Mild
print("Predicted Value:", predicted)
###Output
Predicted Value: [1]
###Markdown
Exercise: **Manually calculate the output for the following cases and compare it with the system's output.** (1) Will you play if the temperature is 'Hot' and the weather is 'Overcast'? (2) Will you play if the temperature is 'Mild' and the weather is 'Sunny'?
###Code
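# Illustrative sketch (kept commented, since the manual calculation is the exercise):
# predict_proba shows the class probabilities the trained model assigns, which can be
# compared against a manual Naive Bayes calculation. Encodings follow the output above:
# weather 0:Overcast, 2:Sunny; temp 1:Hot, 2:Mild.
# print(model.predict_proba([[0, 1]]))  # (1) Overcast, Hot
# print(model.predict_proba([[2, 2]]))  # (2) Sunny, Mild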
###Output
_____no_output_____ |
Kaggle-fruits360-DL-to-Peltarion-dataset.ipynb | ###Markdown
Fruit-360 preprocessor This notebook will prepare the fruit-360 dataset for the Peltarion platform. Note: This notebook requires installation of Sidekick. For more information about this package, see: https://github.com/Peltarion/sidekick
###Code
#Get sidekick
!pip install git+https://github.com/Peltarion/sidekick#egg=sidekick
import os
import sidekick
import resource
import functools
import pandas as pd
from glob import glob
from PIL import Image
from sklearn.model_selection import train_test_split
!mkdir out
# Path to the raw dataset
input_path = 'Fruit-Images-Dataset/Training'
os.chdir(input_path)
# Path to the zip output
output_path = 'out/out-data.zip'
images_rel_path = glob(os.path.join('*', '*.jpg')) + glob(os.path.join('*', '*.png'))
print("Images found: ", len(images_rel_path))
###Output
Images found: 53177
###Markdown
Create Dataframe The class column values are derived from the names of the subfolders in the `input_path`. The image column contains the relative path to the images in the subfolders.
###Code
df = pd.DataFrame({'image': images_rel_path})
df['class'] = df['image'].apply(lambda path: os.path.basename(os.path.dirname(path)))
df.head()
### Check that all images have the same format, e.g., RGB
def get_mode(path):
im = Image.open(path)
im.close()
return im.mode
df['image_mode'] = df['image'].apply(lambda path: get_mode(path))
print(df['image_mode'].value_counts())
df = df.drop(['image_mode'], axis=1)
## Create subsets for training and validation
def create_subsets(df, col='class', validation_size=0.20):
train_data, validate_data = train_test_split(df, test_size=validation_size, random_state=42, stratify=df[[col]])
train_data.insert(loc=2, column='subset', value='T')
validate_data.insert(loc=2, column='subset', value='V')
    return pd.concat([train_data, validate_data], ignore_index=True)  # DataFrame.append was removed in pandas 2.x
df = create_subsets(df)
df['subset'].value_counts()
df.head()
## Create dataset bundle
'''
Available modes:
- crop_and_resize
- center_crop_or_pad
- resize_image
'''
image_processor = functools.partial(sidekick.process_image, mode='crop_and_resize', size=(100, 100), file_format='jpeg')
sidekick.create_dataset(
output_path,
df,
path_columns=['image'],
preprocess={
'image': image_processor
}
)
# Adding Drive folders to colab notebook
from google.colab import drive
drive.mount('/content/drive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code
Enter your authorization code:
··········
Mounted at /content/drive
|
CESM2_COSP/taylor_plots/newobs_taylor.ipynb | ###Markdown
Make new Taylor plots (with both CAM6 and updated observations) Here, I am using the updated set of observations and copying all needed files to a single directory for organization and reproducibility. I am also using a bilinear (linear?) interpolation instead of a nearest neighbor interpolation. Verify my methods by reproducing Figure 7 from Kay 2012 using the stored data from Ben Hillman. Function and package imports
###Code
import sys
# Add common resources folder to path
sys.path.append('/glade/u/home/jonahshaw/Scripts/git_repos/CESM2_analysis')
sys.path.append('/glade/u/home/jonahshaw/Scripts/git_repos/CESM2_analysis/Common/')
# sys.path.append("/home/jonahks/git_repos/netcdf_analysis/Common/")
from imports import (
pd, np, xr, mpl, plt, sns, os,
datetime, sys, crt, gridspec,
ccrs, metrics, Iterable, cmaps,
mpl,glob
)
from functions import (
masked_average, add_weights, sp_map,
season_mean, get_dpm, leap_year, share_ylims,
to_png
)
import cftime
from cloud_metric import Cloud_Metric
from collections import deque
%matplotlib inline
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Taylor plot specific imports
###Code
import taylor_jshaw as taylor
import matplotlib as matplotlib
import matplotlib.patches as patches
from interp_functions import *
from functions import calculate
###Output
_____no_output_____
###Markdown
Label appropriate directories I am reorganizing to have everything in this first directory
###Code
# where to save processed files
save_dir = '/glade/u/home/jonahshaw/w/archive/taylor_files/'
###Output
_____no_output_____
###Markdown
Old processing here
###Code
# _ceres_data = xr.open_dataset('/glade/work/jonahshaw/obs/CERES/CERES_EBAF-TOA_Ed4.1_Subset_200003-202102.nc')
# _ceres_data = add_weights(_ceres_data)
# # Flip FLNSC so it matches model convention (net-LW is down, net-SW is up)
# # _ceres_data['sfc_net_lw_clr_t_mon'] = -1*_ceres_data['sfc_net_lw_clr_t_mon']
# # Derive Cloud Radiative Effect Variables
# _ceres_data['SWCF'] = (_ceres_data['toa_sw_clr_c_mon'] - _ceres_data['toa_sw_all_mon']).assign_attrs(
# {'units': 'W/m2','long_name': 'Shortwave cloud forcing'})
# _ceres_data['LWCF'] = (_ceres_data['toa_lw_clr_c_mon'] - _ceres_data['toa_lw_all_mon']).assign_attrs(
# {'units': 'W/m2','long_name': 'Longwave cloud forcing'})
# _ceres_data['SWCF'].to_netcdf('%s/2021_obs/CERES_SWCF_200003_202102.nc' % save_dir)
# _ceres_data['LWCF'].to_netcdf('%s/2021_obs/CERES_LWCF_200003_202102.nc' % save_dir)
# # original observations from the Kay 2012 paper
# # og_dir = '/glade/u/home/jonahshaw/w/kay2012_OGfiles'
# # CAM4 and CAM5 model runs
# oldcase_dir = '/glade/u/home/jonahshaw/w/archive/Kay_COSP_2012/'
# # CAM6 model runs
# newcase_dir = '/glade/p/cesm/pcwg/jenkay/COSP/cesm21/'
# # Time ranges to select by:
# time_range1 = ['2001-01-01', '2010-12-31']
# time_range2 = ['0001-01-01', '0010-12-31']
# _ceres_data = xr.open_dataset('/glade/u/home/jonahshaw/obs/CERES_EBAF/CERES_EBAF-TOA_Ed4.1_Subset_200003-202002.nc')
# # _ceres_data = _ceres_data.sel(time=slice('2000-01-01', '2015-12-31')) # This was a problem.
# _ceres_data = add_weights(_ceres_data)
# # Flip FLNSC so it matches model convention (net-LW is down, net-SW is up)
# # _ceres_data['sfc_net_lw_clr_t_mon'] = -1*_ceres_data['sfc_net_lw_clr_t_mon']
# # Derive Cloud Radiative Effect Variables
# _ceres_data['SWCF'] = (_ceres_data['toa_sw_clr_c_mon'] - _ceres_data['toa_sw_all_mon']).assign_attrs(
# {'units': 'W/m2','long_name': 'Shortwave cloud forcing'})
# _ceres_data['LWCF'] = (_ceres_data['toa_lw_clr_c_mon'] - _ceres_data['toa_lw_all_mon']).assign_attrs(
# {'units': 'W/m2','long_name': 'Longwave cloud forcing'})
# _ceres_data['SWCF'].to_netcdf('%s/2021_obs/CERES_SWCF_2000_2020.nc' % save_dir)
# # _ceres_data['LWCF'].to_netcdf('%s/2021_obs/CERES_LWCF_2000_2020.nc' % save_dir)
###Output
_____no_output_____
###Markdown
BLANK
###Code
# case_dirs = [oldcase_dir,oldcase_dir,newcase_dir]
# cases = [
# '%s%s' % (oldcase_dir,'cam4_1deg_release_amip'),
# '%s%s' % (oldcase_dir,'cam5_1deg_release_amip'),
# '%s%s' % (newcase_dir,'f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1')
# ]
cases = [
'cam4_1deg_release_amip',
'cam5_1deg_release_amip',
'f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1'
]
cosp_v = [1,1,2] # cosp version (guess)
def get_file(var_name,case,suffix=''):
return glob.glob('%s/%s/*%s*.nc' % (case,suffix,var_name))
def fix_cam_time(ds):
try:
ds['time'] = ds['time_bnds'].isel(bnds=0)
except:
ds['time'] = ds['time_bnds'].isel(nbnd=0)
return ds
def select_AMIP(ds):
if ds['time.year'][0] > 1000: # bad way to discriminate by year format
_ds = ds.sel(time=slice('2001-01-01', '2010-12-31')) # work for the AMIP 0000-0010
else:
_ds = ds.sel(time=slice('0001-01-01', '0010-12-31')) # work for the AMIP 0000-0010
return _ds
###Output
_____no_output_____
###Markdown
Panel 1 (CERES-EBAF LWCF and SWCF) Open observation files
###Code
new_swcf = xr.open_dataset('%s/2021_obs/CERES_SWCF_200003_202102.nc' % (save_dir))
new_lwcf = xr.open_dataset('%s/2021_obs/CERES_LWCF_200003_202102.nc' % (save_dir))
proc_swcf = new_swcf.groupby('time.month').mean('time').mean('month')
proc_lwcf = new_lwcf.groupby('time.month').mean('time').mean('month')
cntlnames = {
'SWCF': proc_swcf['SWCF'], # these have to be dataarrays, not datasets
'LWCF': proc_lwcf['LWCF'],
}
###Output
_____no_output_____
###Markdown
Open model files
###Code
_vars = ['SWCF','LWCF']
suffix = 'atm/proc/tseries/month_1'
model_das = {}
for j in _vars:
var_files = []
for ii in cases:
_f = glob.glob('%s/%s/*%s*.nc' % (save_dir,ii,j)) # get the correct file
# print(_f)
# break
# open dataset
_ds = xr.open_dataset(_f[0])
print(_f[0])
# apply time bounds
_ds = fix_cam_time(_ds)
# select the AMIP period
_ds = select_AMIP(_ds)
# Fix any weird month/year mismatch by weighting months equally.
_da = _ds[j].groupby('time.month').mean('time').mean('month')
# Interpolate to the control (observation) grid
# _da = _da.interp_like(cntlnames[j],method='nearest')
# _da = _da.interp_like(cntlnames[j])
_da,_rgrd = interp_like2D(_da,target=cntlnames[j])
# var_files.append(_da)
var_files.append(_da[j])
# print(_f)
model_das[j] = var_files
testmetrics = model_das
###Output
/glade/u/home/jonahshaw/w/archive/taylor_files//cam4_1deg_release_amip/cam4_1deg_release_amip.cam.h0.SWCF.200011-201012.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//cam5_1deg_release_amip/cam5_1deg_release_amip.cam.h0.SWCF.200101-201012.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1/f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.SWCF.197901-201412.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//cam4_1deg_release_amip/cam4_1deg_release_amip.cam.h0.LWCF.200011-201012.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//cam5_1deg_release_amip/cam5_1deg_release_amip.cam.h0.LWCF.200101-201012.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1/f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.LWCF.197901-201412.nc
###Markdown
Calculate
###Code
testnames = ('CAM4','CAM5','CAM6')
varnames = ['SWCF','LWCF']
varlabels = ['CERES-EBAF SWCF','CERES-EBAF LWCF']
nvars = 2; ntest = 3;
cc = np.zeros([nvars,ntest])
ratio = np.zeros([nvars,ntest])
bias = np.zeros([nvars,ntest])
bias_abs = np.zeros([nvars,ntest])
for ivar,var in enumerate(varnames): # iterate over the variables for a specific Taylor plot
# Select observational dataarray:
obs_da = cntlnames[var]
obs_ds = obs_da
for itest,(name,metric) in enumerate(zip(testnames,testmetrics[var])): # iterate over the models to test/plot
# Time average:
test_ds = metric #[var]
# Calculate Taylor diagram relevant variables:
_bias,_corr,_rmsnorm,_ratio,_bias_abs = calculate(obs_ds,test_ds)
# print(_bias[var],_corr[var],_rmsnorm[var],_ratio[var])
cc[ivar,itest] = _corr
ratio[ivar,itest] = _ratio
bias[ivar,itest] = _bias
bias_abs[ivar,itest] = _bias_abs
# print(bias,corr,rmsnorm,ratio)
###Output
_____no_output_____
###Markdown
Plot
###Code
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 10
mpl.rcParams['text.usetex'] = True
figure = plt.figure(figsize=(4,4))
figure.set_dpi(300)
testcolors = ('SkyBlue','Firebrick','#f6d921')
ax = figure.add_subplot(1,1,1,frameon=False)
taylor_diagram = taylor.Taylor_diagram(
ax,cc,ratio,bias,
casecolors=testcolors,
varlabels=range(1,len(varnames)+1),
)
# Reference bias bubbles, wut is this?
ref_bias = 0.1 # This is a 10% bias reference bubble in the lower-left corner
yloc = 0.05*taylor_diagram.xymax + ref_bias/2.0
xloc = 0.05*taylor_diagram.xymax + ref_bias/2.0
circle = patches.Circle(
(xloc,yloc),ref_bias/2.0,
color="black",
alpha=0.30,
)
ax.add_patch(circle)
# Reference bias bubble points - centered at the reference bubble
circle = patches.Circle(
(xloc,yloc),0.01,
color="black",
)
ax.add_patch(circle)
# Reference bias text
ax.text(
xloc+ref_bias/2.0 + 0.01*taylor_diagram.xymax,yloc,
"%.0f%s bias"%(ref_bias*100,r"\%"),
color="Black",
fontsize=8,
horizontalalignment="left",
verticalalignment="center"
)
# Case labels
xloc = taylor_diagram.xymax*0.95
yloc = taylor_diagram.xymax*0.05
dy = taylor_diagram.xymax*0.05
for itest,testname in enumerate(testnames[::-1]):
ax.text(
xloc,yloc+itest*dy, # place these just above the dots
testname,
color=testcolors[::-1][itest],
fontsize=11,
horizontalalignment="right",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
# Variable labels
xloc0 = taylor_diagram.xymax*-0.1
yloc0 = taylor_diagram.xymax*-0.15
dy = taylor_diagram.xymax*0.055
for ivar,(varlabel,biases) in enumerate(zip(varlabels, bias_abs)):
xloc = xloc0
yloc = yloc0+ivar*-dy
_text = ax.text(
xloc,yloc, # place these just above the dots
'%s. %s: ' % (str(ivar+1), varlabel),
color='black',
fontsize=10,
horizontalalignment="left",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
for itest,testname in enumerate(testnames):
if itest == 0:
xloc += xloc + 0.2#0.09
xloc += 0.04*len(_text.get_text()) + 0.025
_text = ax.text(
xloc,yloc, # place these just above the dots
round(biases[itest]),
color=testcolors[itest],
fontsize=10,
horizontalalignment="left",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
mpl.rcParams['text.usetex'] = False
plt.tight_layout() # make sure the bottom wording doesn't get cut off
to_png(figure,'CERESCRF_taylor_cosp1.3',bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Panel 2 (ISCCP, MISR, and CALIPSO total cloud) BLANK I have to shift the time coordinate, because it is off by a month. I centered it at th 15th so there is no future confusion.
###Code
# new_time = cftime.num2date(range(243), 'months since 2000-03-15', '360_day')
# misr_dir = '/glade/work/jonahshaw/obs/MISR/'
# misr_monthly_file = 'clt_MISR_20000301_20200531.nc'
# misr_monthly = xr.open_dataset('%s/%s' % (misr_dir,misr_monthly_file))
# misr_monthly['time'] = new_time
# misr_monthly = misr_monthly.rename({'clMISR':'cltMISR'})
# misr_monthly['cltMISR'].to_netcdf('%s/2021_obs/MISR_CLDTOT_200003_202005.nc' % save_dir)
# og_caliop_dir = '/glade/work/jonahshaw/obs/CALIPSO/GOCCP/2Ddata/grid_2x2_L40/'
# all_caliopf = glob.glob('%s/**/*' % og_caliop_dir)
# # len(all_caliop)
# all_caliopf.sort()
# check1 = xr.open_dataset(bob[0])
# check2 = xr.open_dataset(bob[11])
# check1
# bob = [i for i in all_caliopf if '2017' in i]
# test1 = xr.open_mfdataset(bob,combine='by_coords')
# all_cal = xr.open_mfdataset(all_caliopf,combine='by_coords')
# all_cal
# all_cal['cltcalipso'].to_netcdf('%s/2021_obs/CALIOP_CLDTOT_200606_201512.nc' % save_dir)
# all_cal['cllcalipso'].to_netcdf('%s/2021_obs/CALIOP_CLDLOW_200606_201512.nc' % save_dir)
# all_cal['clmcalipso'].to_netcdf('%s/2021_obs/CALIOP_CLDMED_200606_201512.nc' % save_dir)
# all_cal['clhcalipso'].to_netcdf('%s/2021_obs/CALIOP_CLDHGH_200606_201512.nc' % save_dir)
# all_cal['cltcalipso_liq'].to_netcdf('%s/2021_obs/CALIOP_CLDTOTLIQ_200606_201512.nc' % save_dir)
# all_cal['cltcalipso_ice'].to_netcdf('%s/2021_obs/CALIOP_CLDTOTICE_200606_201512.nc' % save_dir)
###Output
_____no_output_____
###Markdown
BLANK Open files
###Code
new_clt_isccp = xr.open_dataset('%s/2021_obs/ISCCP_CLDTOT_199807_201612.nc' % (save_dir))
new_clt_misr = xr.open_dataset('%s/2021_obs/MISR_CLDTOT_200003_202005.nc' % (save_dir))
new_clt_caliop = xr.open_dataset('%s/2021_obs/CALIOP_CLDTOT_200606_201512.nc' % (save_dir))
proc_clt_isccp = new_clt_isccp.groupby('time.month').mean('time').mean('month')
# proc_clt_misr = new_clt_misr.groupby('time.month').mean('time').mean('month')
proc_clt_caliop = 100*new_clt_caliop.groupby('time.month').mean('time').mean('month')
proc_clt_isccp = proc_clt_isccp.rename({'cltisccp':'CLDTOT_ISCCP'})
proc_clt_misr = new_clt_misr.rename({'clMISR':'CLDTOT_MISR'})
proc_clt_caliop = proc_clt_caliop.rename({'cltcalipso':'CLDTOT_CAL'}) # yes, this is on a 2x2deg grid
# Mask passive sensors (ISCCP and MISR)
proc_clt_isccp = proc_clt_isccp.where(np.abs(proc_clt_isccp['lat'])<60)
proc_clt_misr = proc_clt_misr.where(np.abs(proc_clt_misr['lat'])<60)
###Output
/glade/work/jonahshaw/miniconda3/envs/cheycomp/lib/python3.7/site-packages/xarray/core/nanops.py:142: RuntimeWarning: Mean of empty slice
return np.nanmean(a, axis=axis, dtype=dtype)
###Markdown
Open model files
###Code
# Pre-proc for MISR obs
clt_misr1 = proc_clt_misr['CLDTOT_MISR'].assign_coords(lon=(proc_clt_misr['CLDTOT_MISR'].lon % 360)).sortby('lon')
clt_misr2 = xr.where(clt_misr1 == 0,np.nan,clt_misr1)
# Necessary pre-processing for CALIOP obs when using interp_like
clt_cal1 = proc_clt_caliop['CLDTOT_CAL'].rename({'longitude':'lon','latitude':'lat'})
clt_cal2 = clt_cal1.assign_coords(lon=(clt_cal1.lon % 360)).sortby('lon')
cntlnames = {
'CLDTOT_ISCCP': proc_clt_isccp['CLDTOT_ISCCP'], # these have to be dataarrays, not datasets
'CLDTOT_MISR': clt_misr2,#.where(np.abs(clt_misr1['lat'])<60),
'CLDTOT_CAL': clt_cal2,
}
_vars = ['CLDTOT_ISCCP','CLDTOT_MISR','CLDTOT_CAL']
model_das = {}
for j in _vars:
print(j)
var_files = []
for ii in cases:
print(ii)
_f = glob.glob('%s/%s/*.%s.*' % (save_dir,ii,j)) # get the correct file
# open dataset
# print(_f[0])
_ds = xr.open_dataset(_f[0])
# Fix any weird month/year mismatch by weighting months equally.
try:
_ds = fix_cam_time(_ds)
except:
print('Check that time is already selected.')
pass
# select the AMIP period
_ds = select_AMIP(_ds)
# Fix any weird month/year mismatch by weighting months equally.
_da = _ds[j].groupby('time.month').mean('time').mean('month')
_da,_rgrd = interp_like2D(_da,target=cntlnames[j]) # this should return a da if given a da
# print(_da.name)
# var_files.append(_da)
var_files.append(_da[j])
model_das[j] = var_files
testmetrics = model_das
###Output
CLDTOT_ISCCP
cam4_1deg_release_amip
###Markdown
Calculate These are being masked now via the pre-masked observational data.
###Code
testnames = ('CAM4','CAM5','CAM6')
varnames = ['CLDTOT_ISCCP','CLDTOT_MISR','CLDTOT_CAL']
varlabels = ['ISCCP Total Cloud (60S-60N)','MISR Total Cloud (60S-60N, Ocean Only)','CALIPSO Total Cloud']
nvars = 3; ntest = 3;
cc = np.zeros([nvars,ntest])
ratio = np.zeros([nvars,ntest])
bias = np.zeros([nvars,ntest])
bias_abs = np.zeros([nvars,ntest])
for ivar,var in enumerate(varnames): # iterate over the variables for a specific Taylor plot
# Select observational dataarray:
obs_da = cntlnames[var]
obs_ds = obs_da
for itest,(name,metric) in enumerate(zip(testnames,testmetrics[var])): # iterate over the models to test/plot
# Time average:
test_ds = metric #[var]
# Calculate Taylor diagram relevant variables:
_bias,_corr,_rmsnorm,_ratio,_bias_abs = calculate(obs_ds,test_ds)
# print(_bias[var],_corr[var],_rmsnorm[var],_ratio[var])
cc[ivar,itest] = _corr
ratio[ivar,itest] = _ratio
bias[ivar,itest] = _bias
bias_abs[ivar,itest] = _bias_abs
# print(bias,corr,rmsnorm,ratio)
###Output
_____no_output_____
###Markdown
Plot
###Code
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 10
mpl.rcParams['text.usetex'] = True
figure = plt.figure(figsize=(4,4))
figure.set_dpi(300)
testcolors = ('SkyBlue','Firebrick','#f6d921')
ax = figure.add_subplot(1,1,1,frameon=False)
taylor_diagram = taylor.Taylor_diagram(
ax,cc,ratio,bias,
casecolors=testcolors,
varlabels=range(1,len(varnames)+1),
)
# Reference bias bubbles, wut is this?
ref_bias = 0.1 # This is a 10% bias reference bubble in the lower-left corner
yloc = 0.05*taylor_diagram.xymax + ref_bias/2.0
xloc = 0.05*taylor_diagram.xymax + ref_bias/2.0
circle = patches.Circle(
(xloc,yloc),ref_bias/2.0,
color="black",
alpha=0.30,
)
ax.add_patch(circle)
# Reference bias bubble points - centered at the reference bubble
circle = patches.Circle(
(xloc,yloc),0.01,
color="black",
)
ax.add_patch(circle)
# Reference bias text
ax.text(
xloc+ref_bias/2.0 + 0.01*taylor_diagram.xymax,yloc,
"%.0f%s bias"%(ref_bias*100,r"\%"),
color="Black",
fontsize=8,
horizontalalignment="left",
verticalalignment="center"
)
# Case labels
xloc = taylor_diagram.xymax*0.95
yloc = taylor_diagram.xymax*0.05
dy = taylor_diagram.xymax*0.05
for itest,testname in enumerate(testnames[::-1]):
ax.text(
xloc,yloc+itest*dy, # place these just above the dots
testname,
color=testcolors[::-1][itest],
fontsize=11,
horizontalalignment="right",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
# Variable labels
xloc0 = taylor_diagram.xymax*-0.1
yloc0 = taylor_diagram.xymax*-0.15
dy = taylor_diagram.xymax*0.055
for ivar,(varlabel,biases) in enumerate(zip(varlabels, bias_abs)):
xloc = xloc0
yloc = yloc0+ivar*-dy
_text = ax.text(
xloc,yloc, # place these just above the dots
'%s. %s: ' % (str(ivar+1), varlabel),
color='black',
fontsize=10,
horizontalalignment="left",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
for itest,testname in enumerate(testnames):
if itest == 0:
xloc += xloc + 0.15#0.09
xloc += 0.04*len(_text.get_text()) + 0.025
_text = ax.text(
xloc,yloc, # place these just above the dots
round(biases[itest])/100,
color=testcolors[itest],
fontsize=10,
horizontalalignment="left",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
mpl.rcParams['text.usetex'] = False
to_png(figure,'cldtot_taylor_cosp1.3',bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Panel 3 (CALIPSO low-, mid-, and high-level cloud) Open files
###Code
new_cldlow_cal = xr.open_dataset('%s/2021_obs/CALIOP_CLDLOW_200606_201512.nc' % (save_dir))
new_cldmed_cal = xr.open_dataset('%s/2021_obs/CALIOP_CLDMED_200606_201512.nc' % (save_dir))
new_cldhgh_cal = xr.open_dataset('%s/2021_obs/CALIOP_CLDHGH_200606_201512.nc' % (save_dir))
proc_cldlow_cal = 100*new_cldlow_cal.groupby('time.month').mean('time').mean('month')
proc_cldmed_cal = 100*new_cldmed_cal.groupby('time.month').mean('time').mean('month')
proc_cldhgh_cal = 100*new_cldhgh_cal.groupby('time.month').mean('time').mean('month')
proc_cldlow_caliop = proc_cldlow_cal.rename({'cllcalipso':'CLDLOW_CAL'}) # yes, this is on a 2x2deg grid
proc_cldmed_caliop = proc_cldmed_cal.rename({'clmcalipso':'CLDMED_CAL'}) # yes, this is on a 2x2deg grid
proc_cldhgh_caliop = proc_cldhgh_cal.rename({'clhcalipso':'CLDHGH_CAL'}) # yes, this is on a 2x2deg grid
# Necessary pre-processing for CALIOP obs when using interp_like
cll1 = proc_cldlow_caliop['CLDLOW_CAL'].rename({'longitude':'lon','latitude':'lat'})
cll2 = cll1.assign_coords(lon=(cll1.lon % 360)).sortby('lon')
clm1 = proc_cldmed_caliop['CLDMED_CAL'].rename({'longitude':'lon','latitude':'lat'})
clm2 = clm1.assign_coords(lon=(clm1.lon % 360)).sortby('lon')
clh1 = proc_cldhgh_caliop['CLDHGH_CAL'].rename({'longitude':'lon','latitude':'lat'})
clh2 = clh1.assign_coords(lon=(clh1.lon % 360)).sortby('lon')
###Output
_____no_output_____
###Markdown
Open model files
###Code
cntlnames = {
'CLDLOW_CAL': cll2, # these have to be dataarrays, not datasets
'CLDMED_CAL': clm2,
'CLDHGH_CAL': clh2,
}
_vars = ['CLDLOW_CAL','CLDMED_CAL','CLDHGH_CAL']
suffix = 'atm/proc/tseries/month_1'
model_das = {}
for j in _vars:
var_files = []
for ii in cases:
_f = glob.glob('%s/%s/*.%s.*' % (save_dir,ii,j)) # get the correct file
# open dataset
print(_f[0])
_ds = xr.open_dataset(_f[0])
# apply time bounds
_ds = fix_cam_time(_ds)
# select the AMIP period
_ds = select_AMIP(_ds)
# Fix any weird month/year mismatch by weighting months equally.
try:
_da = _ds[j].groupby('time.month').mean('time').mean('month')
_ds.close() # ?
except:
print(_ds)
# Interpolate to the control (observation) grid
_da = _da.interp_like(cntlnames[j],method='linear')
# _da = _da.interp_like(cntlnames[j])
var_files.append(_da)
# print(_f)
model_das[j] = var_files
testmetrics = model_das
###Output
/glade/u/home/jonahshaw/w/archive/taylor_files//cam4_1deg_release_amip/cam4_1deg_release_amip.cam.h0.CLDLOW_CAL.200011-201012.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//cam5_1deg_release_amip/cam5_1deg_release_amip.cam.h0.CLDLOW_CAL.200101-201012.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1/f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.CLDLOW_CAL.197901-201412.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//cam4_1deg_release_amip/cam4_1deg_release_amip.cam.h0.CLDMED_CAL.200011-201012.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//cam5_1deg_release_amip/cam5_1deg_release_amip.cam.h0.CLDMED_CAL.200101-201012.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1/f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.CLDMED_CAL.197901-201412.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//cam4_1deg_release_amip/cam4_1deg_release_amip.cam.h0.CLDHGH_CAL.200011-201012.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//cam5_1deg_release_amip/cam5_1deg_release_amip.cam.h0.CLDHGH_CAL.200101-201012.nc
/glade/u/home/jonahshaw/w/archive/taylor_files//f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1/f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.CLDHGH_CAL.197901-201412.nc
###Markdown
Calculate
###Code
# Case names
testnames = ('CAM4','CAM5','CAM6')
testmetrics = model_das
varnames = ['CLDLOW_CAL','CLDMED_CAL','CLDHGH_CAL']
varlabels = ['CALIPSO Low Cloud','CALIPSO Mid Cloud','CALIPSO High Cloud']
nvars = 3; ntest = 3;
cc = np.zeros([nvars,ntest])
ratio = np.zeros([nvars,ntest])
bias = np.zeros([nvars,ntest])
bias_abs = np.zeros([nvars,ntest])
for ivar,var in enumerate(varnames): # iterate over the variables for a specific Taylor plot
# Select observational dataarray:
obs_da = cntlnames[var]
obs_ds = obs_da
for itest,(name,metric) in enumerate(zip(testnames,testmetrics[var])): # iterate over the models to test/plot
# Time average:
test_ds = metric #[var]
# Calculate Taylor diagram relevant variables:
_bias,_corr,_rmsnorm,_ratio,_bias_abs = calculate(obs_ds,test_ds)
# print(_bias[var],_corr[var],_rmsnorm[var],_ratio[var])
cc[ivar,itest] = _corr
ratio[ivar,itest] = _ratio
bias[ivar,itest] = _bias
bias_abs[ivar,itest] = _bias_abs
# print(bias,corr,rmsnorm,ratio)
###Output
_____no_output_____
###Markdown
Plot
###Code
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 10
mpl.rcParams['text.usetex'] = True
figure = plt.figure(figsize=(8,8))
figure.set_dpi(200)
testcolors = ('SkyBlue','Firebrick','#f6d921')
ax = figure.add_subplot(2,2,1,frameon=False)
taylor_diagram = taylor.Taylor_diagram(
ax,cc,ratio,bias,
casecolors=testcolors,
varlabels=range(1,len(varnames)+1),
)
# Reference bias bubbles, wut is this?
ref_bias = 0.1 # This is a 10% bias reference bubble in the lower-left corner
yloc = 0.05*taylor_diagram.xymax + ref_bias/2.0
xloc = 0.05*taylor_diagram.xymax + ref_bias/2.0
circle = patches.Circle(
(xloc,yloc),ref_bias/2.0,
color="black",
alpha=0.30,
)
ax.add_patch(circle)
# Reference bias bubble points - centered at the reference bubble
circle = patches.Circle(
(xloc,yloc),0.01,
color="black",
)
ax.add_patch(circle)
# Reference bias text
ax.text(
xloc+ref_bias/2.0 + 0.01*taylor_diagram.xymax,yloc,
"%.0f%s bias"%(ref_bias*100,r"\%"),
color="Black",
fontsize=8,
horizontalalignment="left",
verticalalignment="center"
)
# Case labels
xloc = taylor_diagram.xymax*0.95
yloc = taylor_diagram.xymax*0.05
dy = taylor_diagram.xymax*0.05
for itest,testname in enumerate(testnames[::-1]):
ax.text(
xloc,yloc+itest*dy, # place these just above the dots
testname,
color=testcolors[::-1][itest],
fontsize=11,
horizontalalignment="right",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
# Variable labels
xloc0 = taylor_diagram.xymax*-0.1
yloc0 = taylor_diagram.xymax*-0.15
dy = taylor_diagram.xymax*0.055
for ivar,(varlabel,biases) in enumerate(zip(varlabels, bias_abs)):
xloc = xloc0
yloc = yloc0+ivar*-dy
_text = ax.text(
xloc,yloc, # place these just above the dots
'%s. %s: ' % (str(ivar+1), varlabel),
color='black',
fontsize=10,
horizontalalignment="left",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
for itest,testname in enumerate(testnames):
if itest == 0:
xloc += xloc + 0.1#0.09
xloc += 0.04*len(_text.get_text()) + 0.025
_text = ax.text(
xloc,yloc, # place these just above the dots
round(biases[itest])/100,
color=testcolors[itest],
fontsize=10,
horizontalalignment="left",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
mpl.rcParams['text.usetex'] = False
to_png(figure,'CLDCAL_taylor_cosp1.3')
###Output
_____no_output_____
###Markdown
Panel 4 (MISR low-topped thick and MODIS high-topped thick cloud) Open files and mask above 60 degrees MISR
###Code
new_clmisr = xr.open_dataset('%s/2021_obs/%s' % (save_dir,'MISR_CLDTHCK_200003_202005.nc'))
new_clmisr = new_clmisr.where(np.abs(new_clmisr['lat'])<60) # Mask on obs is transferred to model in calculate()
og_clmisr = new_clmisr.rename({'clMISR':'CLDTHCK_MISR'})
###Output
_____no_output_____
###Markdown
MODIS
###Code
new_clmodis = xr.open_dataset('%s/2021_obs/MODIS_CLDTHCK_200301_202012.nc' % save_dir)
og_clmodis = 100*new_clmodis.where(np.abs(new_clmodis['lat'])<60)
###Output
_____no_output_____
###Markdown
Set-up
###Code
# Control names dictionary (i.e. observations)
cntlnames = {
'CLDTHCK_MISR': og_clmisr['CLDTHCK_MISR'], # these have to be dataarrays, not datasets
'CLDTHCK_MODIS': og_clmodis['CLDTHCK_MODIS'],
}
# suffixes = ['atm/proc/tseries/month_1','','atm/proc/tseries/month_1']
# paths to use if I have pr
_vars = ['CLDTHCK_MISR','CLDTHCK_MODIS']
_sdir = '/glade/u/home/jonahshaw/w/archive/taylor_files/'
model_das = {}
# for j,_dir in zip(_vars,case_dirs):
for j in _vars:
print(j)
var_files = []
for ii in cases:
print(ii)
_f = glob.glob('%s/%s/*.%s.*' % (save_dir,ii,j)) # get the correct file
# open dataset
_ds = xr.open_dataset(_f[0])
# Fix any weird month/year mismatch by weighting months equally.
_da = _ds[j].groupby('time.month').mean('time').mean('month')
# Hide latitudes above 60 degrees:
_da = _da.where(np.abs(_da['lat'])<60)
# Interpolate to the control (observation) grid
_da = _da.interp_like(cntlnames[j],method='linear')
# _da = _da.interp_like(cntlnames[j])
var_files.append(_da)
model_das[j] = var_files
###Output
CLDTHCK_MISR
cam4_1deg_release_amip
cam5_1deg_release_amip
###Markdown
Calculate These are being masked correctly. Should just use values below 60 degrees.
###Code
# Case names
testnames = ('CAM4','CAM5','CAM6')
testmetrics = model_das
varnames = ['CLDTHCK_MISR','CLDTHCK_MODIS']
varlabels = ['MISR low-topped thick cloud (ocean-only)','MODIS high-topped thick cloud']
nvars = 2; ntest = 3;
cc = np.zeros([nvars,ntest])
ratio = np.zeros([nvars,ntest])
bias = np.zeros([nvars,ntest])
bias_abs = np.zeros([nvars,ntest])  # absolute biases, used by the variable-label annotations in the plot below
for ivar,var in enumerate(varnames): # iterate over the variables for a specific Taylor plot
# Select observational dataarray:
obs_da = cntlnames[var]
obs_ds = obs_da
for itest,(name,metric) in enumerate(zip(testnames,testmetrics[var])): # iterate over the models to test/plot
# Time average:
test_ds = metric #[var]
# Calculate Taylor diagram relevant variables:
_bias,_corr,_rmsnorm,_ratio,_bias_abs = calculate(obs_ds,test_ds)
# print(_bias[var],_corr[var],_rmsnorm[var],_ratio[var])
cc[ivar,itest] = _corr
ratio[ivar,itest] = _ratio
        bias[ivar,itest] = _bias
        bias_abs[ivar,itest] = _bias_abs
# print(bias,corr,rmsnorm,ratio)
###Output
_____no_output_____
###Markdown
Plot
###Code
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 10
mpl.rcParams['text.usetex'] = True
figure = plt.figure(figsize=(8,8))
figure.set_dpi(200)
testcolors = ('SkyBlue','Firebrick','#f6d921')
ax = figure.add_subplot(2,2,1,frameon=False)
taylor_diagram = taylor.Taylor_diagram(
ax,cc,ratio,bias,
casecolors=testcolors,
varlabels=range(1,len(varnames)+1),
)
# Reference bias bubbles, wut is this?
ref_bias = 0.1 # This is a 10% bias reference bubble in the lower-left corner
yloc = 0.05*taylor_diagram.xymax + ref_bias/2.0
xloc = 0.05*taylor_diagram.xymax + ref_bias/2.0
circle = patches.Circle(
(xloc,yloc),ref_bias/2.0,
color="black",
alpha=0.30,
)
ax.add_patch(circle)
# Reference bias bubble points - centered at the reference bubble
circle = patches.Circle(
(xloc,yloc),0.01,
color="black",
)
ax.add_patch(circle)
# Reference bias text
ax.text(
xloc+ref_bias/2.0 + 0.01*taylor_diagram.xymax,yloc,
"%.0f%s bias"%(ref_bias*100,r"\%"),
color="Black",
fontsize=8,
horizontalalignment="left",
verticalalignment="center"
)
# Case labels
xloc = taylor_diagram.xymax*0.95
yloc = taylor_diagram.xymax*0.05
dy = taylor_diagram.xymax*0.05
for itest,testname in enumerate(testnames[::-1]):
ax.text(
xloc,yloc+itest*dy, # place these just above the dots
testname,
color=testcolors[::-1][itest],
fontsize=11,
horizontalalignment="right",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
# Variable labels
xloc0 = taylor_diagram.xymax*-0.1
yloc0 = taylor_diagram.xymax*-0.15
dy = taylor_diagram.xymax*0.055
for ivar,(varlabel,biases) in enumerate(zip(varlabels, bias_abs)):
xloc = xloc0
yloc = yloc0+ivar*-dy
_text = ax.text(
xloc,yloc, # place these just above the dots
'%s. %s: ' % (str(ivar+1), varlabel),
color='black',
fontsize=10,
horizontalalignment="left",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
for itest,testname in enumerate(testnames):
if itest == 0:
xloc += xloc + 0.45#0.09
xloc += 0.1*len(_text.get_text()) + 0.025
_text = ax.text(
xloc,yloc, # place these just above the dots
round(biases[itest])/100,
color=testcolors[itest],
fontsize=10,
horizontalalignment="left",
verticalalignment="bottom",
# fontweight='bold', # doesn't do anything
)
mpl.rcParams['text.usetex'] = False
to_png(figure,'THCKCLD_taylor_cosp1.3')
###Output
_____no_output_____
###Markdown
Old
###Code
# start_dir = '/glade/u/home/jonahshaw/w/archive/taylor_files'
# for i in os.listdir(start_dir):
# _path = '%s/%s' % (start_dir,i)
# _file = os.listdir(_path)[0]
# print(_file)
# # print(os.listdir('%s/%s' % (start_dir,i)))
# _temp = xr.open_dataset('%s/%s' % (_path,_file))
# _temp = _temp.rename({'CLD_MISR':'CLDTOT_MISR'})
# _temp.to_netcdf('%s/%s' % (_path,_file)+'1')
###Output
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.CLDTHCK_MISR.nc
cam5_1deg_release_amip.CLDTHCK_MISR.nc
cam4_1deg_release_amip.CLDTHCK_MODIS.nc
|
Nonconvex_Scalar_Osher_Solution.ipynb | ###Markdown
Osher solution to a scalar Riemann problem. Implementation of the general solution to the scalar Riemann problem that is valid also for non-convex fluxes, using the formula from (Osher 1984): $$Q(\xi) = \begin{cases} \text{argmin}_{q_l \leq q \leq q_r} [f(q) - \xi q] & \text{if} ~q_l\leq q_r,\\ \text{argmax}_{q_r \leq q \leq q_l} [f(q) - \xi q] & \text{if} ~q_r\leq q_l.\end{cases}$$ See also Section 16.1.2 of (LeVeque 2002).
###Code
%matplotlib inline
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
###Output
_____no_output_____
###Markdown
Select an animation style:
###Code
from utils import animation_tools
#animation_style = 'ipywidgets'
animation_style = 'JSAnimation'
def osher_solution(f, q_left, q_right, xi_left=None, xi_right=None):
"""
Compute the Riemann solution to a scalar conservation law.
Compute the similarity solution Q(x/t) and also the
(possibly multi-valued) solution determined by tracing
characteristics.
Input:
f = flux function (possibly nonconvex)
q_left, q_right = Riemann data
xi_left, xi_right = optional left and right limits for xi = x/t
in similarity solution.
If not specified, chosen based on the characteristic speeds.
Returns:
xi = array of values between xi_left and xi_right
q = array of corresponding q(xi) values (xi = x/t)
q_char = array of values of q between q_left and q_right
xi_char = xi value for each q_char for use in plotting the
(possibly multi-valued) solution where each q value
propagates at speed f'(q).
"""
from numpy import linspace,empty,argmin,argmax,diff,hstack
q_min = min(q_left, q_right)
q_max = max(q_left, q_right)
qv = linspace(q_min, q_max, 1000)
# define the function qtilde as in (16.7)
if q_left <= q_right:
def qtilde(xi):
Q = empty(xi.shape, dtype=float)
for j,xij in enumerate(xi):
i = argmin(f(qv) - xij*qv)
Q[j] = qv[i]
return Q
else:
def qtilde(xi):
Q = empty(xi.shape, dtype=float)
for j,xij in enumerate(xi):
i = argmax(f(qv) - xij*qv)
Q[j] = qv[i]
return Q
# The rest is just for plotting purposes:
fv = f(qv)
dfdq = diff(fv) / (qv[1] - qv[0])
dfdq_min = dfdq.min()
dfdq_max = dfdq.max()
dfdq_range = dfdq_max - dfdq_min
#print("Mininum characteristic velocity: %g" % dfdq_min)
#print("Maximum characteristic velocity: %g" % dfdq_max)
if xi_left is None:
xi_left = min(0,dfdq_min) - 0.1*dfdq_range
if xi_right is None:
xi_right = max(0,dfdq_max) + 0.1*dfdq_range
q_char = hstack((q_min, 0.5*(qv[:-1] + qv[1:]), q_max))
if q_left <= q_right:
xi_min = xi_left
xi_max = xi_right
else:
xi_min = xi_right
xi_max = xi_left
xi_char = hstack((xi_min, dfdq, xi_max))
xi = linspace(xi_left, xi_right, 1000)
q = qtilde(xi)
return xi, q, q_char, xi_char
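
# Quick sanity check (extra illustration, not required by the rest of the notebook):
# for Burgers' flux f(q) = q^2/2 with q_left > q_right the exact solution is a single shock
# moving at the Rankine-Hugoniot speed (q_left + q_right)/2, so with q_left = 2, q_right = 0
# the similarity solution should jump from 2 to 0 near xi = 1.
f_burgers = lambda q: 0.5*q**2
xi_b, q_b, _, _ = osher_solution(f_burgers, 2., 0., -1., 3.)
print('q just left of the shock: ', q_b[xi_b < 0.9][-1])
print('q just right of the shock:', q_b[xi_b > 1.1][0])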
###Output
_____no_output_____
###Markdown
Traffic flow. First try a convex flux, such as $f(q) = q(1-q)$ from traffic flow (with $u_{max}=1$ in the notation of Chapter 11):
###Code
f = lambda q: q*(1-q)
plt.figure(figsize=(12,5))
plt.subplot(121)
q_left = 0.6
q_right = 0.1
xi, qxi, q_char, xi_char = osher_solution(f, q_left, q_right, -2, 2)
plt.plot(xi_char, q_char,'r')
plt.plot(xi, qxi, 'b', linewidth=2)
plt.ylim(-0.1,1.1)
plt.title('Rarefaction solution')
plt.subplot(122)
q_left = 0.1
q_right = 0.6
xi, qxi, q_char, xi_char = osher_solution(f, q_left, q_right, -2, 2)
plt.plot(xi_char, q_char,'r')
plt.plot(xi, qxi, 'b', linewidth=2)
plt.ylim(-0.1,1.1)
plt.title('Shock solution');
###Output
_____no_output_____
###Markdown
Buckley-Leverett Equation. The Buckley-Leverett equation for two-phase flow is described in Section 16.1.1. It has the non-convex flux function $$ f(q) = \frac{q^2}{q^2 + a(1-q)^2}$$ where $a$ is some constant.
###Code
a = 0.5
f_buckley_leverett = lambda q: q**2 / (q**2 + a*(1-q)**2)
q_left = 1.
q_right = 0.
###Output
_____no_output_____
###Markdown
Plot the flux and its derivative
###Code
qvals = np.linspace(q_right, q_left, 200)
fvals = f_buckley_leverett(qvals)
dfdq = np.diff(fvals) / (qvals[1]-qvals[0]) # approximate df/dq
qmid = 0.5*(qvals[:-1] + qvals[1:]) # midpoints for plotting dfdq
plt.figure(figsize=(10,4))
plt.subplot(131)
plt.plot(qvals,fvals)
plt.xlabel('q')
plt.ylabel('f(q)')
plt.title('flux function f(q)')
plt.subplot(132)
plt.plot(qmid, dfdq)
plt.xlabel('q')
plt.ylabel('df/dq')
plt.title('characteristic speed df/dq')
plt.subplot(133)
plt.plot(dfdq, qmid, 'r')
plt.xlabel('df/dq')
plt.ylabel('q')
plt.title('q vs. df/dq')
plt.subplots_adjust(left=0.)
###Output
_____no_output_____
###Markdown
Note that the third plot above shows $q$ on the vertical axis and $df/dq$ on the horizontal axis (it's the middle figure turned sideways). You can think of this as showing the characteristic velocity for each point on a jump discontinuity from $q=0$ to $q=1$, and hence a triple valued solution of the Riemann problem at $t=1$ when each $q$ value has propagated this far. Below we show this together with the correct solution to the Riemann problem, with a shock wave inserted (as computed using the Osher solution defined above). Note that for this non-convex flux function the Riemann solution consists partly of a rarefaction wave together with a shock wave.
###Code
xi, qxi, q_char, xi_char = osher_solution(f_buckley_leverett, q_left, q_right, -2, 2)
plt.plot(xi_char, q_char,'r')
plt.plot(xi, qxi, 'b', linewidth=2)
plt.ylim(-0.1,1.1)
###Output
_____no_output_____
###Markdown
Create an animation:
###Code
figs = []
# adjust first and last elements in xi arrays
# so things plot nicely for t \approx 0:
xi[0] *= 1e6; xi[-1] *= 1e6
xi_char[0] *= 1e6; xi_char[-1] *= 1e6
times = np.linspace(0,1,11)
times[0] = 1e-3 # adjust first time to be >0
for t in times:
fig = plt.figure(figsize=(6,3))
plt.plot(xi_char*t,q_char,'r')
plt.plot(xi*t, qxi, 'b', linewidth=2)
plt.xlim(-2, 2.5)
plt.ylim(-0.1,1.1)
figs.append(fig)
plt.close(fig)
anim = animation_tools.animate_figs(figs, style=animation_style, figsize=(6,3))
display(anim)
###Output
_____no_output_____
###Markdown
Sinusoidal flux. As another test, the flux function $f(q) = \sin(q)$ is used in Example 16.1 to produce Figure 16.4, reproduced below.
###Code
f_sin = lambda q: np.sin(q)
q_left = np.pi/4.
q_right = 15*np.pi/4.
xi, qxi, q_char, xi_char = osher_solution(f_sin, q_left, q_right, -1.5, 1.5)
plt.plot(xi_char, q_char,'r')
plt.plot(xi, qxi, 'b', linewidth=2)
plt.ylim(0.,14.)
###Output
_____no_output_____
###Markdown
Make an animation
###Code
figs = []
# adjust first and last elements in xi arrays
# so things plot nicely for t \approx 0:
xi[0] *= 1e6; xi[-1] *= 1e6
xi_char[0] *= 1e6; xi_char[-1] *= 1e6
times = np.linspace(0,1,11)
times[0] = 1e-3 # adjust first time to be >0
for t in times:
fig = plt.figure(figsize=(6,3))
plt.plot(xi_char*t,q_char,'r')
plt.plot(xi*t, qxi, 'b', linewidth=2)
plt.xlim(-1.5, 1.5)
plt.ylim(0.,14.)
figs.append(fig)
plt.close(fig)
anim = animation_tools.animate_figs(figs, style=animation_style, figsize=(6,3))
display(anim)
###Output
_____no_output_____
###Markdown
Yet another example
###Code
f = lambda q: q*np.sin(q)
q_left = 2.
q_right = 20.
xi, qxi, q_char, xi_char = osher_solution(f, q_left, q_right)
plt.plot(xi_char,q_char,'r')
plt.plot(xi, qxi, 'b', linewidth=2)
plt.ylim(0,25)
###Output
_____no_output_____ |
Model backlog/Train/4-commonlit-roberta-large-seq-300.ipynb | ###Markdown
Dependencies
###Code
import random, os, warnings, math
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
import tensorflow as tf
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras import optimizers, losses, metrics, Model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, LearningRateScheduler
from transformers import TFAutoModelForSequenceClassification, TFAutoModel, AutoTokenizer
def seed_everything(seed=0):
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
seed = 0
seed_everything(seed)
sns.set(style='whitegrid')
warnings.filterwarnings('ignore')
pd.set_option('display.max_colwidth', 150)
###Output
_____no_output_____
###Markdown
Hardware configuration
###Code
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print(f'Running on TPU {tpu.master()}')
except ValueError:
tpu = None
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
strategy = tf.distribute.get_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
###Output
Running on TPU grpc://10.0.0.2:8470
REPLICAS: 8
###Markdown
Load data
###Code
train_filepath = '/kaggle/input/commonlitreadabilityprize/train.csv'
train = pd.read_csv(train_filepath)
print(f'Train samples: {len(train)}')
display(train.head())
# removing unused columns
train.drop(['url_legal', 'license'], axis=1, inplace=True)
###Output
Train samples: 2834
###Markdown
Model parameters
###Code
BATCH_SIZE = 8 * REPLICAS
LEARNING_RATE = 1e-5 * REPLICAS
EPOCHS = 35
ES_PATIENCE = 10
PATIENCE = 2
N_FOLDS = 5
SEQ_LEN = 300
BASE_MODEL = '/kaggle/input/huggingface-roberta/roberta-large/'
###Output
_____no_output_____
###Markdown
Auxiliary functions
###Code
# Datasets utility functions
def custom_standardization(text):
text = text.lower() # if encoder is uncased
text = text.strip()
return text
def sample_target(features, target):
mean, stddev = target
sampled_target = tf.random.normal([], mean=tf.cast(mean, dtype=tf.float32),
stddev=tf.cast(stddev, dtype=tf.float32), dtype=tf.float32)
return (features, sampled_target)
def get_dataset(pandas_df, tokenizer, labeled=True, ordered=False, repeated=False,
is_sampled=False, batch_size=32, seq_len=128):
"""
Return a Tensorflow dataset ready for training or inference.
"""
text = [custom_standardization(text) for text in pandas_df['excerpt']]
# Tokenize inputs
tokenized_inputs = tokenizer(text, max_length=seq_len, truncation=True,
padding='max_length', return_tensors='tf')
if labeled:
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': tokenized_inputs['input_ids'],
'attention_mask': tokenized_inputs['attention_mask']},
(pandas_df['target'], pandas_df['standard_error'])))
if is_sampled:
dataset = dataset.map(sample_target, num_parallel_calls=tf.data.AUTOTUNE)
else:
dataset = tf.data.Dataset.from_tensor_slices({'input_ids': tokenized_inputs['input_ids'],
'attention_mask': tokenized_inputs['attention_mask']})
if repeated:
dataset = dataset.repeat()
if not ordered:
dataset = dataset.shuffle(1024)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(tf.data.AUTOTUNE)
return dataset
def plot_metrics(history):
metric_list = list(history.keys())
size = len(metric_list)//2
fig, axes = plt.subplots(size, 1, sharex='col', figsize=(20, size * 5))
axes = axes.flatten()
for index in range(len(metric_list)//2):
metric_name = metric_list[index]
val_metric_name = metric_list[index+size]
axes[index].plot(history[metric_name], label='Train %s' % metric_name)
axes[index].plot(history[val_metric_name], label='Validation %s' % metric_name)
axes[index].legend(loc='best', fontsize=16)
axes[index].set_title(metric_name)
plt.xlabel('Epochs', fontsize=16)
sns.despine()
plt.show()
###Output
_____no_output_____
###Markdown
Model
###Code
def model_fn(encoder, seq_len=256):
input_ids = L.Input(shape=(seq_len,), dtype=tf.int32, name='input_ids')
input_attention_mask = L.Input(shape=(seq_len,), dtype=tf.int32, name='attention_mask')
outputs = encoder({'input_ids': input_ids,
'attention_mask': input_attention_mask})
last_hidden_state = outputs['last_hidden_state']
x = L.GlobalAveragePooling1D()(last_hidden_state)
# x = L.Dropout(.5)(x)
output = L.Dense(1, name='output')(x)
model = Model(inputs=[input_ids, input_attention_mask], outputs=output)
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer,
loss=losses.MeanSquaredError(),
metrics=[metrics.RootMeanSquaredError()])
return model
with strategy.scope():
# encoder = TFAutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=1)
encoder = TFAutoModel.from_pretrained(BASE_MODEL)
model = model_fn(encoder, SEQ_LEN)
model.summary()
###Output
Some layers from the model checkpoint at /kaggle/input/huggingface-roberta/roberta-large/ were not used when initializing TFRobertaModel: ['lm_head']
- This IS expected if you are initializing TFRobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFRobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFRobertaModel were initialized from the model checkpoint at /kaggle/input/huggingface-roberta/roberta-large/.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFRobertaModel for predictions without further training.
###Markdown
Training
###Code
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
skf = KFold(n_splits=N_FOLDS, shuffle=True, random_state=seed)
oof_pred = []; oof_labels = []; history_list = []
for fold,(idxT, idxV) in enumerate(skf.split(train)):
if tpu: tf.tpu.experimental.initialize_tpu_system(tpu)
print(f'\nFOLD: {fold+1}')
print(f'TRAIN: {len(idxT)} VALID: {len(idxV)}')
# Model
K.clear_session()
with strategy.scope():
# encoder = TFAutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=1)
encoder = TFAutoModel.from_pretrained(BASE_MODEL)
model = model_fn(encoder, SEQ_LEN)
model_path = f'model_{fold}.h5'
es = EarlyStopping(monitor='val_root_mean_squared_error', mode='min',
patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_root_mean_squared_error', mode='min',
save_best_only=True, save_weights_only=True)
# Train
history = model.fit(x=get_dataset(train.loc[idxT], tokenizer, repeated=True, is_sampled=True,
batch_size=BATCH_SIZE, seq_len=SEQ_LEN),
validation_data=get_dataset(train.loc[idxV], tokenizer, ordered=True,
batch_size=BATCH_SIZE, seq_len=SEQ_LEN),
steps_per_epoch=50,
callbacks=[es, checkpoint],
epochs=EPOCHS,
verbose=2).history
history_list.append(history)
# Save last model weights
model.load_weights(model_path)
# Results
print(f"#### FOLD {fold+1} OOF RMSE = {np.min(history['val_root_mean_squared_error']):.4f}")
# OOF predictions
valid_ds = get_dataset(train.loc[idxV], tokenizer, ordered=True, batch_size=BATCH_SIZE, seq_len=SEQ_LEN)
oof_labels.append([target[0].numpy() for sample, target in iter(valid_ds.unbatch())])
x_oof = valid_ds.map(lambda sample, target: sample)
oof_pred.append(model.predict(x_oof))
###Output
FOLD: 1
TRAIN: 2267 VALID: 567
###Markdown
Model loss and metrics graph
###Code
for fold, history in enumerate(history_list):
print(f'\nFOLD: {fold+1}')
plot_metrics(history)
###Output
FOLD: 1
###Markdown
Model evaluation. We are evaluating the model on the `OOF` (`Out Of Fold`) predictions: since we train with `K-Fold` cross-validation, the model eventually sees all of the data, so the correct way to evaluate each fold is to look at the predictions for the samples that were held out of that fold. OOF metrics
###Code
y_true = np.concatenate(oof_labels)
y_preds = np.concatenate(oof_pred)
for fold, history in enumerate(history_list):
print(f"FOLD {fold+1} RMSE: {np.min(history['val_root_mean_squared_error']):.4f}")
print(f'OOF RMSE: {mean_squared_error(y_true, y_preds, squared=False):.4f}')
###Output
FOLD 1 RMSE: 0.6931
FOLD 2 RMSE: 0.7034
FOLD 3 RMSE: 0.5952
FOLD 4 RMSE: 0.5963
FOLD 5 RMSE: 0.6469
OOF RMSE: 0.6486
###Markdown
**Error analysis**, label vs. prediction distribution. Here we can compare the distribution of the labels with the distribution of the predicted values; in a perfect scenario they should align.
###Code
preds_df = pd.DataFrame({'Label': y_true, 'Prediction': y_preds[:,0]})
fig, ax = plt.subplots(1, 1, figsize=(20, 6))
sns.distplot(preds_df['Label'], ax=ax, label='Label')
sns.distplot(preds_df['Prediction'], ax=ax, label='Prediction')
ax.legend()
plt.show()
sns.jointplot(data=preds_df, x='Label', y='Prediction', kind='reg', height=10)
plt.show()
###Output
_____no_output_____ |
src/genomic_benchmarks/loc2seq/demo/demo_notebook.ipynb | ###Markdown
Transforming demo datasets
###Code
download_dataset("demo_coding_vs_intergenomic_seqs")
download_dataset("dummy_mouse_enhancers_ensembl", version=0)
###Output
Reference /Users/katarina/.genomic_benchmarks/fasta/Mus_musculus.GRCm38.dna_rm.toplevel.fa.gz already exists. Skipping.
###Markdown
Checking the output
###Code
!ls ~/.genomic_benchmarks/
!ls ~/.genomic_benchmarks/dummy_mouse_enhancers_ensembl
!ls ~/.genomic_benchmarks/dummy_mouse_enhancers_ensembl/test
!ls ~/.genomic_benchmarks/dummy_mouse_enhancers_ensembl/test/negative
!cat ~/.genomic_benchmarks/dummy_mouse_enhancers_ensembl/test/negative/0.txt
###Output
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGTCACACATACAGAATCATTTCATAAGTGGAGAGATACTTACCATGGTAGATAGACCCTCAATTGTAACTATAATTATATATGGCAAACACAAGGGCACTATCCTTTGAAGTTTAGACTTGGAAAAACACTTTTCTACGGNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGACAGCCTGTCAACCAGAGAAAAAAATTTTTCAAACTTTACGTGATAGCCAGCTTGTAAATTTGGCAGCCTATGCAATCTGCACAGGAAAGGCAACTTCACACATACTCCAAAAGAACAAGAAATCATATATCTTTTTTACCCTCTAAAGACTGGGTCTTGAAACAGAGGCCTCTTGGACAGCACACTAGACTCATGTAA |
classes/calculus/derivative.ipynb | ###Markdown
Numerical derivative > Given a function $f$, we want to estimate the derivative $f'$ of $f$ at a point $x$ when the derivative is not explicitly known. Introduction Derivatives are used everywhere. What we're most interested in is: - solving physical systems - image or signal processing Numerical differentiation lets us approximate the derivative using only a discrete set of points, and we will be interested in error estimates. Functions studied: - analytical functions (functions for which we know the formula) $\mathbb{R} \to \mathbb{R}$, continuous and differentiable - non-analytical functions that can nevertheless be evaluated at any point - a set of points representing a continuous function I - Derivative calculation The derivative of a function $f(x)$ at $x = a$ is the limit $$f'(a) = \lim_{h \to 0}\frac{f(a + h) - f(a)}{h}$$ Part of this expression is simply the slope between two points: $$\frac{f(x_B) - f(x_A)}{x_B - x_A}$$ If we have the expression of the function, we can differentiate it symbolically; if we don't, we can **estimate** the derivative. The formula above is equivalent to: $$f'(x_0) = \lim_{\Delta x \to 0}\frac{f(x_0 + \Delta x) - f(x_0)}{\Delta x}$$ There are 3 main difference formulas for numerically approximating derivatives. The **forward difference formula** with step size $h$: $$ f'(a) \approx \frac{f(a + h) - f(a)}{h}$$ The **backward difference formula** with step size $h$: $$ f'(a) \approx \frac{f(a) - f(a - h)}{h}$$ The **central difference formula** with step size $h$ is the average of the forward and backward difference formulas: $$ f'(a) \approx \frac{f(a + h) - f(a - h)}{2h}$$ **$h$ can also be written as $\Delta x$.**
###Code
import numpy as np
import matplotlib.pyplot as plt
def derivative(f,a,method='central',h=0.01):
'''Compute the difference formula for f'(a) with step size h.
Parameters
----------
f : function
Vectorized function of one variable
a : number
Compute derivative at x = a
method : string
Difference formula: 'forward', 'backward' or 'central'
h : number
Step size in difference formula
Returns
-------
float
Difference formula:
            central: (f(a+h) - f(a-h))/(2h)
            forward: (f(a+h) - f(a))/h
            backward: (f(a) - f(a-h))/h
'''
if method == 'central':
return (f(a + h) - f(a - h))/(2*h)
elif method == 'forward':
return (f(a + h) - f(a))/h
elif method == 'backward':
return (f(a) - f(a - h))/h
else:
raise ValueError("Method must be 'central', 'forward' or 'backward'.")
# For example, we can check and plot the difference between the central-difference
# approximation of the derivative of sine and the true value (cosine)
x = np.linspace(0,5*np.pi,100)
dydx = derivative(np.sin,x)
dYdx = np.cos(x)
plt.figure(figsize=(12,5))
plt.plot(x,dydx,'r.',label='Central difference')
plt.plot(x,dYdx,'b',label='True value')
plt.legend(loc='best')
plt.show()
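
# Extra check of the truncation error (added illustration): halving h should roughly halve
# the forward-difference error (first order) but divide the central-difference error by ~4
# (second order). Here f = exp at a = 1, whose exact derivative is e.
for h in [0.1, 0.05, 0.025]:
    err_forward = abs(derivative(np.exp, 1.0, method='forward', h=h) - np.exp(1.0))
    err_central = abs(derivative(np.exp, 1.0, method='central', h=h) - np.exp(1.0))
    print(f'h={h:<6} forward error={err_forward:.2e}  central error={err_central:.2e}')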
###Output
_____no_output_____ |
jupyter_notebook_zh/underline_usage_in_class.ipynb | ###Markdown
A study summary of underscores in class attribute names 1. Double leading underscore (__param) **If an instance variable name starts with a double underscore (__param), it becomes a private variable: it can only be accessed from inside the class, not from outside.** Example code below:
###Code
class Student:
def __init__(self, name):
self.__name = name
print("inner print: ", self.__name)
stu = Student('Bob')
print("Running Class Student: ", stu)
print("Get param in Class Student: ", stu.__name)
###Output
inner print: Bob
Running Class Student: <__main__.Student object at 0x110ecc5c0>
###Markdown
In the code above, - first, we define a class named `Student`; - then, in the class initializer, we define a variable whose name is prefixed with a **double underscore**. Under Python 3's rules this variable becomes a private variable of the class; in short, it can be accessed inside the class but not from outside. The output shows that the private variable `__name` can be accessed inside the class, while accessing it from outside raises an error. 2. Single leading underscore (_param) Using the same approach as in section 1, we again write a `Student` class; to distinguish it from the `Student` class of section 1, it is named `Student1`. Example code below:
###Code
class Student1:
def __init__(self, name, age):
self._name = name
self.age = age
stu1 = Student1('Alvin', '30')
print("stu1._name = ", stu1._name)
print("stu1.age = ", stu1.age)
###Output
stu1._name = Alvin
stu1.age = 30
###Markdown
The output shows that a **single leading underscore** on a class attribute does not prevent access from outside the class; it behaves just like an attribute without an underscore. 3. Single trailing underscore (param_) We study this by writing a class `Student2`; example code below:
###Code
class Student2:
def __init__(self, name):
self.name_ = name
stu2 = Student2('Kate')
print("stu2.name_ = ", stu2.name_)
###Output
stu2.name_ = Kate
|
pipelining/ale-exp1/ale-exp1_cslg-rand-5000_1w_ale_plotting.ipynb | ###Markdown
Experiment Description. 1-way ALE. > This notebook is for experiment \ and data sample \. Initialization
###Code
%load_ext autoreload
%autoreload 2
import numpy as np, sys, os
in_colab = 'google.colab' in sys.modules
# fetching code and data(if you are using colab
if in_colab:
!rm -rf s2search
!git clone --branch pipelining https://github.com/youyinnn/s2search.git
sys.path.insert(1, './s2search')
%cd s2search/pipelining/ale-exp1/
pic_dir = os.path.join('.', 'plot')
if not os.path.exists(pic_dir):
os.mkdir(pic_dir)
###Output
_____no_output_____
###Markdown
Loading data
###Code
sys.path.insert(1, '../../')
import numpy as np, sys, os, pandas as pd
from getting_data import read_conf
from s2search_score_pdp import pdp_based_importance
sample_name = 'cslg-rand-5000'
f_list = [
'title', 'abstract', 'venue', 'authors',
'year',
'n_citations'
]
ale_xy = {}
ale_metric = pd.DataFrame(columns=['feature_name', 'ale_range', 'ale_importance'])
for f in f_list:
file = os.path.join('.', 'scores', f'{sample_name}_1w_ale_{f}.npz')
if os.path.exists(file):
nparr = np.load(file)
quantile = nparr['quantile']
ale_result = nparr['ale_result']
values_for_rug = nparr.get('values_for_rug')
ale_xy[f] = {
'x': quantile,
'y': ale_result,
'rug': values_for_rug,
'weird': ale_result[len(ale_result) - 1] > 20
}
if f != 'year' and f != 'n_citations':
ale_xy[f]['x'] = list(range(len(quantile)))
ale_xy[f]['numerical'] = False
else:
ale_xy[f]['xticks'] = quantile
ale_xy[f]['numerical'] = True
ale_metric.loc[len(ale_metric.index)] = [f, np.max(ale_result) - np.min(ale_result), pdp_based_importance(ale_result, f)]
# print(len(ale_result))
print(ale_metric.sort_values(by=['ale_importance'], ascending=False))
print()
des, sample_configs, sample_from_other_exp = read_conf('.')
if sample_configs['cslg-rand-5000'][0].get('quantiles') != None:
print(f'The following feature choose quantiles as ale bin size:')
for k in sample_configs['cslg-rand-5000'][0]['quantiles'].keys():
print(f" {k} with {sample_configs['cslg-rand-5000'][0]['quantiles'][k]}% quantile, {len(ale_xy[k]['x'])} bins are used")
if sample_configs['cslg-rand-5000'][0].get('intervals') != None:
print(f'The following feature choose fixed amount as ale bin size:')
for k in sample_configs['cslg-rand-5000'][0]['intervals'].keys():
print(f" {k} with {sample_configs['cslg-rand-5000'][0]['intervals'][k]} values, {len(ale_xy[k]['x'])} bins are used")
###Output
feature_name ale_range ale_importance
1 abstract 13.457690 5.085414
0 title 12.135204 3.992325
2 venue 12.598691 1.340829
4 year 1.777482 0.449724
5 n_citations 1.245702 0.238238
3 authors 0.000000 0.000000
The following feature choose quantiles as ale bin size:
year with 1% quantile, 100 bins are used
n_citations with 1% quantile, 100 bins are used
title with 1% quantile, 100 bins are used
abstract with 1% quantile, 100 bins are used
authors with 1% quantile, 100 bins are used
venue with 1% quantile, 100 bins are used
###Markdown
ALE Plots
###Code
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import MaxNLocator
categorical_plot_conf = [
{
'xlabel': 'Title',
'ylabel': 'ALE',
'ale_xy': ale_xy['title']
},
{
'xlabel': 'Abstract',
'ale_xy': ale_xy['abstract']
},
{
'xlabel': 'Authors',
'ale_xy': ale_xy['authors'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 14],
# }
},
{
'xlabel': 'Venue',
'ale_xy': ale_xy['venue'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 13],
# }
},
]
numerical_plot_conf = [
{
'xlabel': 'Year',
'ylabel': 'ALE',
'ale_xy': ale_xy['year'],
# 'zoom': {
# 'inset_axes': [0.15, 0.4, 0.4, 0.4],
# 'x_limit': [2019, 2023],
# 'y_limit': [1.9, 2.1],
# },
},
{
'xlabel': 'Citations',
'ale_xy': ale_xy['n_citations'],
# 'zoom': {
# 'inset_axes': [0.4, 0.65, 0.47, 0.3],
# 'x_limit': [-1000.0, 12000],
# 'y_limit': [-0.1, 1.2],
# },
},
]
def pdp_plot(confs, title):
fig, axes_list = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)
subplot_idx = 0
# plt.suptitle(title, fontsize=20, fontweight='bold')
# plt.autoscale(False)
for conf in confs:
axes = axes if len(confs) == 1 else axes_list[subplot_idx]
sns.rugplot(conf['ale_xy']['rug'], ax=axes, height=0.02)
axes.axhline(y=0, color='k', linestyle='-', lw=0.8)
axes.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axes.grid(alpha = 0.4)
# axes.set_ylim([-2, 20])
axes.xaxis.set_major_locator(MaxNLocator(integer=True))
# axes.yaxis.set_major_locator(MaxNLocator(integer=True))
if ('ylabel' in conf):
axes.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)
# if ('xticks' not in conf['ale_xy'].keys()):
# xAxis.set_ticklabels([])
axes.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)
if not (conf['ale_xy']['weird']):
if (conf['ale_xy']['numerical']):
axes.set_ylim([-1.5, 1.5])
pass
else:
axes.set_ylim([-7, 15])
pass
if 'zoom' in conf:
axins = axes.inset_axes(conf['zoom']['inset_axes'])
axins.xaxis.set_major_locator(MaxNLocator(integer=True))
axins.yaxis.set_major_locator(MaxNLocator(integer=True))
axins.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axins.set_xlim(conf['zoom']['x_limit'])
axins.set_ylim(conf['zoom']['y_limit'])
axins.grid(alpha=0.3)
rectpatch, connects = axes.indicate_inset_zoom(axins)
connects[0].set_visible(False)
connects[1].set_visible(False)
connects[2].set_visible(True)
connects[3].set_visible(True)
subplot_idx += 1
pdp_plot(categorical_plot_conf, f"ALE for {len(categorical_plot_conf)} categorical features")
plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')
pdp_plot(numerical_plot_conf, f"ALE for {len(numerical_plot_conf)} numerical features")
plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')
###Output
_____no_output_____ |
notebooks/train_model-fastai.ipynb | ###Markdown
Build Model
###Code
from torch.utils.data import DataLoader, random_split
from PIL import Image
from torchvision import transforms
from hpa_src.data.transforms import ToPIL
from torch.nn import BCEWithLogitsLoss
from hpa_src.models.loss import FocalLoss
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
# input_size = 299
input_size = 512
train_transform = transforms.Compose([
ToPIL(),
transforms.RandomResizedCrop(input_size, scale=(0.5,1)),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(90),
transforms.ToTensor(),
# transforms.Normalize((0.1149, 0.0922, 0.0553),
# (0.1694, 0.1381, 0.1551))
transforms.Normalize((0.08069, 0.05258, 0.05487, 0.08282),
(0.13704, 0.10145, 0.15313, 0.13814))
])
val_transform = transforms.Compose([
ToPIL(),
# transforms.Resize(input_size),
transforms.ToTensor(),
# transforms.Normalize((0.1149, 0.0922, 0.0553),
# (0.1694, 0.1381, 0.1551))
transforms.Normalize((0.08069, 0.05258, 0.05487, 0.08282),
(0.13704, 0.10145, 0.15313, 0.13814))
])
test_transform = transforms.Compose([
ToPIL(),
# transforms.Resize(input_size),
transforms.ToTensor(),
transforms.Normalize((0.05913, 0.0454 , 0.04066, 0.05928),
(0.11734, 0.09503, 0.129 , 0.11528))
])
# train_sampler, val_sampler = train_val_split(image_df.shape[0])
# train_sampler, val_sampler = train_val_split(100)
train_dataset = HpaDataset(DATA + 'raw/png/training.csv', transform=train_transform)
val_dataset = HpaDataset(DATA + 'raw/png/validation.csv', transform=val_transform)
bz = 8
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=bz, #sampler=train_sampler,
num_workers=bz
)
val_loader = torch.utils.data.DataLoader(
val_dataset, batch_size=bz, #sampler=val_sampler,
num_workers=bz
)
dataloaders = {'train': train_loader, 'val': val_loader}
###Output
_____no_output_____
###Markdown
Model
###Code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# import pretrainedmodels
# pretrained = pretrainedmodels.__dict__['inceptionresnetv2'](num_classes=1000, pretrained='imagenet')
from hpa_src.models.inception import inceptionresnetv2
model = inceptionresnetv2(pretrained='imagenet')
model = nn.DataParallel(model, device_ids=[0,1])
# criterion = BCEWithLogitsLoss()
criterion = FocalLoss(gamma=2, logits=True)
# Observe that all parameters are being optimized
optimizer_ft = optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-6)
# Decay LR by a factor of 0.1 every 7 epochs
# exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
# dataset_sizes = {'train': len(train_indx), 'val': len(val_indx)}
# tmp = next(iter(dataloaders['train']))
# pretrained(tmp[0].to_device())
from hpa_src.models.training import ModelTrainer
from hpa_src.models.callbacks import TorchModelCheckpoint
trainer = ModelTrainer(model)
trainer.compile(optimizer_ft, criterion, device=device)
checker = TorchModelCheckpoint('../models/torch_trained', monitor='val_f1', save_best_only=True, mode='max')
###Output
_____no_output_____
###Markdown
First train classifier layer
###Code
from keras.callbacks import History
for p in trainer.model.parameters():
p.requires_grad = False
for p in trainer.model.module.conv2d_last.parameters():
p.requires_grad = True
trainer.fit(dataloaders['train'], dataloaders['val'], epochs=3)
for p in trainer.model.parameters():
p.requires_grad = True
trainer.fit(dataloaders['train'], dataloaders['val'], epochs=60, model_checker = checker)
# plt.subplot(121)
fig = plt.figure(figsize=(13,5))
ax1 = fig.add_subplot(121)
ax1.plot(trainer.history.epoch, trainer.history.history['train_loss'], label='train_loss')
ax1.plot(trainer.history.epoch, trainer.history.history['val_loss'], label='val_loss')
ax1.legend()
ax2 = fig.add_subplot(122)
ax2.plot(trainer.history.epoch, trainer.history.history['train_f1'], label='train_f1')
ax2.plot(trainer.history.epoch, trainer.history.history['val_f1'], label='val_f1')
ax2.legend()
plt.savefig('lr0.0001_focal3channel.png')
plt.show()
###Output
_____no_output_____
###Markdown
Cross validate threshold
###Code
import resource
rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (2048*2, rlimit[1]))
model = torch.load("../models/torch_trained")
model = model.eval()
all_dataset = HpaDataset(DATA + 'raw/png/train.csv', transform=val_transform)
all_loader = torch.utils.data.DataLoader(
all_dataset, batch_size=bz, #sampler=val_sampler,
num_workers=bz
)
with torch.no_grad():
val_preds = []
val_true = []
for inputs, labels in all_loader:
inputs = inputs.to(device)
val_preds.append(model(inputs))
val_true.append(labels)
val_preds = torch.cat(val_preds)
val_true = torch.cat(val_true)
from hpa_src.data.functional import preds2label, preds2onehot, array2str
from sklearn.metrics import f1_score
scores = []
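# val_preds are raw logits, so each candidate probability cut-off p is converted to a logit
# threshold with np.log(p/(1-p)) (the inverse of the sigmoid) before comparing.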
for p in np.arange(0.1,0.9,0.02):
#tmp = preds2onehot(val_preds, threshold=np.log(p/(1-p)))
scores.append(f1_score(val_true, val_preds>np.log(p/(1-p)), average='macro'))
plt.scatter(np.arange(0.1,0.9,0.02), scores)
plt.show()
p_opt = np.arange(0.1,0.9,0.02)[np.argmax(scores)]
p_opt
###Output
_____no_output_____
###Markdown
Optimize per class threshold
###Code
from hpa_src.data.functional import apply_threshold, optim_threshold
thresholds = []
for i in range(val_true.shape[1]):
thresholds.append(optim_threshold(val_true[:,i], val_preds[:,i], logits=True))
tmp = np.stack([val_preds[:,i] > thresholds[i] for i in range(val_preds.shape[1])]).T
f1_score(val_true, tmp, average='macro')
###Output
_____no_output_____
###Markdown
Test prediction
###Code
test = TestDataset(DATA + 'raw/sample_submission.csv', transform=test_transform)
test_dl = DataLoader(test, batch_size=16, num_workers=16)
with torch.no_grad():
prediction = [model(img.to(device)) for img in test_dl]
prediction = torch.cat(prediction)
# prediction = preds2label(prediction)
preds = list(array2str(apply_threshold(prediction, thresholds)))
# p = np.arange(0.1,0.9,0.02)[np.argmax(scores)]
# p = p_opt
# # p = 0.3
# preds = preds2label(prediction, threshold=np.log(p/(1-p)), fill_na=False)
# preds = list(array2str(preds))
tst = pd.read_csv(DATA + "raw/sample_submission.csv")
tst.Predicted = preds
tst.to_csv(DATA + "processed/Submission.csv", index=False)
tst.head()
torch.sigmoid(torch.tensor(thresholds))
###Output
_____no_output_____
###Markdown
Visualize performance
###Code
from sklearn.metrics import precision_recall_curve
val_pred_prob = torch.sigmoid(val_preds)
i = 25
prec, rec, _ = precision_recall_curve(val_true[:,i], val_pred_prob[:,i])
plt.plot(rec, prec)
plt.show()
###Output
_____no_output_____
###Markdown
ignite
###Code
#best_model = train_model(pretrained, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=20)
from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import CategoricalAccuracy, Loss
from ignite.handlers import EarlyStopping
from hpa_src.models.metrics import F1Score
trainer = create_supervised_trainer(pretrained, optimizer_ft, criterion, device=device)
def score_function(engine):
val_loss = engine.state.metrics['loss']
return -val_loss
f1score = F1Score()
handler = EarlyStopping(patience=5, score_function=score_function, trainer=trainer)
evaluator = create_supervised_evaluator(pretrained,
metrics={
'loss': Loss(criterion),
'f1': F1Score()
}, device=device)
evaluator.add_event_handler(Events.COMPLETED, handler)
@trainer.on(Events.EPOCH_COMPLETED)
def log_training_results(trainer):
evaluator.run(train_loader)
metrics = evaluator.state.metrics
print("Training Results - Epoch: {} Avg loss: {:.2f} Avg f1: {:.2f}"
.format(trainer.state.epoch, metrics['loss'], metrics['f1']))
@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(trainer):
evaluator.run(val_loader)
metrics = evaluator.state.metrics
print("Validation Results - Epoch: {} Avg loss: {:.2f} Avg f1: {:.2f}"
.format(trainer.state.epoch, metrics['loss'], metrics['f1']))
trainer.run(train_loader, max_epochs=100)
torch.save(pretrained.state_dict(), DATA + '../models/torch_3epoch')
evaluator.state.metrics
###Output
_____no_output_____ |
PY0101EN-2.2_notebook_quizz_sets.ipynb | ###Markdown
List to Set Cast the following list to a set:
###Code
A=['A','B','C','A','B','C']
B=set(A)
type(B)
print (B)
###Output
{'B', 'C', 'A'}
###Markdown
Add an Element to the Set Add the string 'D' to the set S
###Code
S={'A','B','C'}
S.add('D')
print(S)
###Output
{'B', 'C', 'A', 'D'}
###Markdown
Intersection Find the intersection of set A and B
###Code
A={1,2,3,4,5}
B={1,3,9, 12}
C=A & B
print(C)
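# Other common set operations (for comparison): A | B gives the union,
# A - B the difference, and A ^ B the symmetric difference.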
###Output
{1, 3}
###Markdown
Cast a List to a Set Cast the following list to a set:
###Code
['A','B','C','A','B','C']
###Output
_____no_output_____
###Markdown
Click here for the solution: `set(['A','B','C','A','B','C'])` Add an Element to the Set Add the string 'D' to the set S.
###Code
S={'A','B','C'}
###Output
_____no_output_____
###Markdown
Click here for the solution: `S.add('D')` followed by `S` to display the result. Intersection of Sets Find the intersection of set A and B.
###Code
A={1,2,3,4,5}
B={1,3,9, 12}
###Output
_____no_output_____ |
05_test_Document_Reader.ipynb | ###Markdown
Try out pretrained DrQA Document Reader with a simple query
###Code
MODEL_FILE = 'models_lisandro/document_reader.mdl'
model_class = Model(MODEL_FILE)
query = "How long do Hamsters live?"
selected_ids = str(("Animal testing on Syrian hamsters", "Domestication of the Syrian hamster",
"Golden hamster", "Hamster", "The Hamsters (album)"))
db_path = "data/wikipedia/docs.db"
connection = sqlite3.connect(db_path, check_same_thread=False)
cursor = connection.cursor()
cursor.execute("SELECT id, text FROM documents WHERE id IN " + selected_ids)
data_json = {"id": [], "text": [], "predicted_answer" : [], "score": []}
for r in cursor.fetchall():
data_json["id"].append(r[0]); data_json["text"].append(r[1])
cursor.close()
connection.close()
examples = [[data_json['text'][idx], query] for idx in range(len(data_json["text"]))]
predictions = model_class.predict_batch(examples)
for p in predictions:
data_json["predicted_answer"].append(p[0][0])
data_json["score"].append(p[0][1])
data_df = pd.DataFrame.from_dict(data_json)
print(f"Question: {query}")
print("\nRelevant documents, with their corresponding answers:")
display(data_df.head(5))
# Argmax among all documents
answers = [p[0][0] for p in predictions]
scores = [p[0][1] for p in predictions]
idx = torch.argmax(torch.tensor(scores))
print(f"\nBest prediction : {answers[idx]}, with score : {scores[idx]}")
###Output
Question: How long do Hamsters live?
Relevant documents, with their corresponding answers:
|
h2o-py/demos/constrained_kmeans_demo_chicago.ipynb | ###Markdown
Constrained K-Means demo - Chicago Weather Dataset H2O K-Means algorithm K-Means falls in the general category of clustering algorithms. Clustering is a form of unsupervised learning that tries to find structures in the data without using any labels or target values. Clustering partitions a set of observations into separate groupings such that an observation in a given group is more similar to another observation in the same group than to another observation in a different group. More about H2O K-means Clustering: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/k-means.html Constrained K-Means algorithm in H2O Using the `cluster_size_constraints` parameter, a user can set the minimum size of each cluster during training via an array of numbers. The length of the array must be equal to the `k` parameter. To satisfy the custom minimal cluster sizes, the calculation of clusters is converted to the Minimal Cost Flow problem. Instead of using the Lloyd iteration algorithm, a graph is constructed based on the distances and constraints. The goal is to go iteratively through the input edges and create an optimal spanning tree that satisfies the constraints. More information about how to convert the standard K-means algorithm to the Minimal Cost Flow problem is described in this paper: https://pdfs.semanticscholar.org/ecad/eb93378d7911c2f7b9bd83a8af55d7fa9e06.pdf.**The minimum-cost flow problem can be solved efficiently in polynomial time. Currently, the performance of this implementation of the Constrained K-means algorithm is slow due to many repeated calculations which cannot be parallelized or further optimized in the H2O backend.**Expected time with various data sizes:* 5 000 rows, 5 features ~ 0h 4m 3s* 10 000 rows, 5 features ~ 0h 9m 21s* 15 000 rows, 5 features ~ 0h 22m 25s* 20 000 rows, 5 features ~ 0h 39m 27s* 25 000 rows, 5 features ~ 1h 06m 8s* 30 000 rows, 5 features ~ 1h 26m 43s* 35 000 rows, 5 features ~ 1h 44m 7s* 40 000 rows, 5 features ~ 2h 13m 31s* 45 000 rows, 5 features ~ 2h 4m 29s* 50 000 rows, 5 features ~ 4h 4m 18s(OS debian 10.0 (x86-64), processor Intel© Core™ i7-7700HQ CPU @ 2.80GHz × 4, RAM 23.1 GiB) Shorter time using Aggregator Model To solve Constrained K-means in a shorter time, you can use the H2O Aggregator model to aggregate the data to a smaller size first and then pass these data to the Constrained K-means model to calculate the final centroids to be used for scoring. The results won't be as accurate as a result from a model trained on the whole dataset. However, it should help solve the problem of huge datasets. There are some assumptions:* the large dataset has to consist of many similar data points - if not, the insensitive aggregation can break the structure of the dataset* the resulting clustering may not meet the initial constraints exactly when scoring (this also applies to the Constrained K-means model itself, since scoring uses only the resulting centroids and not the constraints defined before) The H2O Aggregator method is a clustering-based method for reducing a numerical/categorical dataset into a dataset with fewer rows. Aggregator maintains outliers as outliers but lumps together dense clusters into exemplars with an attached count column showing the member points. More about H2O Aggregator: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/aggregator.html
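As a quick, minimal sketch of the pattern described above (aggregate first, then run constrained K-means on the reduced frame) — `df`, the exemplar count, and the constraint sizes below are illustrative placeholders, not part of the demo data:
```python
from h2o.estimators import H2OKMeansEstimator
from h2o.estimators.aggregator import H2OAggregatorEstimator

# df is assumed to be an existing H2OFrame; target_num_exemplars and the
# cluster size constraints are placeholder values for illustration only
agg = H2OAggregatorEstimator(target_num_exemplars=df.nrow // 2)
agg.train(training_frame=df)
reduced = agg.aggregated_frame

km = H2OKMeansEstimator(k=3, cluster_size_constraints=[100, 200, 100], standardize=True)
km.train(training_frame=reduced)  # the resulting centroids can then be used to score the full df
```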
###Code
# run h2o Kmeans
# Import h2o library
import h2o
from h2o.estimators import H2OKMeansEstimator
# init h2o cluster
h2o.init(strict_version_check=False, url="http://192.168.59.147:54321")
###Output
versionFromGradle='3.29.0',projectVersion='3.29.0.99999',branch='maurever_PUBDEV-6447_constrained_kmeans_improvement',lastCommitHash='162ceb18eae8b773028f27b284129c3ef752d001',gitDescribe='jenkins-master-4952-11-g162ceb18ea-dirty',compiledOn='2020-02-20 15:01:59',compiledBy='mori'
Checking whether there is an H2O instance running at http://192.168.59.147:54321 . connected.
###Markdown
Data - Chicago Weather dataset - 5162 rows - 5 features (month, day, year, maximal temperature, mean temperature)
###Code
# load data
import pandas as pd
data = pd.read_csv("../../smalldata/chicago/chicagoAllWeather.csv")
data = data.iloc[:,[1, 2, 3, 4, 5]]
print(data.shape)
data.head()
# import time to measure elapsed time
from timeit import default_timer as timer
from datetime import timedelta
import time
start = timer()
end = timer()
print("Time:", timedelta(seconds=end-start))
###Output
Time: 0:00:00.000010
###Markdown
Traditional K-means Constrained K-means
###Code
data_h2o = h2o.H2OFrame(data)
# run h2o Kmeans to get good starting points
h2o_km = H2OKMeansEstimator(k=3, init="furthest", standardize=True)
start = timer()
h2o_km.train(training_frame=data_h2o)
end = timer()
user_points = h2o.H2OFrame(h2o_km.centers())
# show details
h2o_km.show()
time_km = timedelta(seconds=end-start)
print("Time:", time_km)
# run h2o constrained Kmeans
h2o_km_co = H2OKMeansEstimator(k=3, user_points=user_points, cluster_size_constraints=[1000, 2000, 1000], standardize=True)
start = timer()
h2o_km_co.train(training_frame=data_h2o)
end = timer()
# show details
h2o_km_co.show()
time_km_co = timedelta(seconds=end-start)
print("Time:", time_km_co)
###Output
kmeans Model Build progress: |████████████████████████████████████████████| 100%
Model Details
=============
H2OKMeansEstimator : K-means
Model Key: KMeans_model_python_1582207404277_10
Model Summary:
###Markdown
Constrained K-means on data reduced with the Aggregator - 1/2 of the original data size
###Code
from h2o.estimators.aggregator import H2OAggregatorEstimator
# original data size 5162, constraints 1000, 2000, 1000
# aggregated data size ~ 2581, constraints 500, 1000, 500
params = {
"target_num_exemplars": 2581,
"rel_tol_num_exemplars": 0.01,
"categorical_encoding": "eigen"
}
agg = H2OAggregatorEstimator(**params)
start = timer()
agg.train(training_frame=data_h2o)
data_agg = agg.aggregated_frame
# run h2o Kmeans
h2o_km_co_agg = H2OKMeansEstimator(k=3, user_points=user_points, cluster_size_constraints=[500, 1000, 500], standardize=True)
h2o_km_co_agg.train(x=["month", "day", "year", "maxTemp", "meanTemp"],training_frame=data_agg)
end = timer()
# show details
h2o_km_co_agg.show()
time_km_co_12 = timedelta(seconds=end-start)
print("Time:", time_km_co_12)
###Output
aggregator Model Build progress: |████████████████████████████████████████| 100%
kmeans Model Build progress: |████████████████████████████████████████████| 100%
Model Details
=============
H2OKMeansEstimator : K-means
Model Key: KMeans_model_python_1582207404277_12
Model Summary:
###Markdown
Constrained K-means on data reduced with the Aggregator - 1/4 of the original data size
###Code
from h2o.estimators.aggregator import H2OAggregatorEstimator
# original data size 5162, constraints 1000, 2000, 1000
# aggregated data size ~ 1290, constraints 250, 500, 250
params = {
"target_num_exemplars": 1290,
"rel_tol_num_exemplars": 0.01,
"categorical_encoding": "eigen"
}
agg_14 = H2OAggregatorEstimator(**params)
start = timer()
agg_14.train(training_frame=data_h2o)
data_agg_14 = agg_14.aggregated_frame
# run h2o Kmeans
h2o_km_co_agg_14 = H2OKMeansEstimator(k=3, user_points=user_points, cluster_size_constraints=[240, 480, 240], standardize=True)
h2o_km_co_agg_14.train(x=list(range(5)),training_frame=data_agg_14)
end = timer()
# show details
h2o_km_co_agg_14.show()
time_km_co_14 = timedelta(seconds=end-start)
print("Time:", time_km_co_14)
###Output
aggregator Model Build progress: |████████████████████████████████████████| 100%
kmeans Model Build progress: |████████████████████████████████████████████| 100%
Model Details
=============
H2OKMeansEstimator : K-means
Model Key: KMeans_model_python_1582207404277_14
Model Summary:
###Markdown
Results

Time

| Data | Number of rows | Time |
|---|---|---|
| Original data | {{data.shape[0]}} | {{print(time_km_co)}} |
| Aggregated data 1/2 size of original data | {{data_agg.shape[0]}} | {{print(time_km_co_12)}} |
| Aggregated data 1/4 size of original data | {{data_agg_14.shape[0]}} | {{print(time_km_co_14)}} |

Accuracy
###Code
centers_km_co = h2o_km_co.centers()
centers_km_co_agg_12 = h2o_km_co_agg.centers()
centers_km_co_agg_14 = h2o_km_co_agg_14.centers()
centers_all = pd.concat([pd.DataFrame(centers_km_co).sort_values(by=[0]), pd.DataFrame(centers_km_co_agg_12).sort_values(by=[0]), pd.DataFrame(centers_km_co_agg_14).sort_values(by=[0])])
###Output
_____no_output_____
###Markdown
Difference between coordinates of original data and aggregated data
###Code
diff_first_cluster = pd.concat([centers_all.iloc[0,:] - centers_all.iloc[3,:], centers_all.iloc[0,:] - centers_all.iloc[6,:]], axis=1, ignore_index=True).transpose()
diff_first_cluster.index = ["1/2", "1/4"]
diff_first_cluster.style.bar(subset=[0,1,2,3,4], align='mid', color=['#d65f5f', '#5fba7d'], width=90)
diff_second_cluster = pd.concat([centers_all.iloc[1,:] - centers_all.iloc[4,:], centers_all.iloc[1,:] - centers_all.iloc[7,:]], axis=1, ignore_index=True).transpose()
diff_second_cluster.index = ["1/2", "1/4"]
diff_second_cluster.style.bar(subset=[0,1,2,3,4], align='mid', color=['#d65f5f', '#5fba7d'], width=90)
diff_third_cluster = pd.concat([centers_all.iloc[2,:] - centers_all.iloc[5,:], centers_all.iloc[2,:] - centers_all.iloc[8,:]], axis=1, ignore_index=True).transpose()
diff_third_cluster.index = ["1/2", "1/4"]
diff_third_cluster.style.bar(subset=[0,1,2,3,4], color=['#d65f5f', '#5fba7d'], align="mid", width=90)
###Output
_____no_output_____ |
docs/allcools/cluster_level/RegionDS/02.annotation.ipynb | ###Markdown
Annotate RegionDS After getting an {{ RegionDS }} from DMR calling or any other genome region set, we can annotate the regions with other epigenomic profiles or genomic features stored in the BigWig or BED format. For example, in this section, we will annotate the DMR RegionDS with chromatin accessibility profiles and some general genomic features. Import
###Code
import pandas as pd
import pathlib
from ALLCools.mcds import RegionDS
###Output
_____no_output_____
###Markdown
Open RegionDS
###Code
dmr_ds = RegionDS.open('test_HIP')
dmr_ds
###Output
Using dmr as region_dim
###Markdown
DMR Chromatin Accessibility Profile (BigWig) For example, here we annotate cluster-matched chromatin accessibility profiles from a mouse hippocampus snATAC-seq dataset. Each profile is stored in BigWig format. The {func}`annotate_by_bigwigs ` method of RegionDS scans the regions against each BigWig file, resulting in a new data variable stored in the RegionDS
###Code
# prepare the bigwig tab-separated table, first column is cluster name, second column is BigWig path
bigwig_dir = '../../data/HIPBulk/atac_bulk/'
bigwigs = pd.Series({
p.name.split('.')[0].split('_')[-1]: str(p)
for p in pathlib.Path(bigwig_dir).glob('HIP_snATAC_*.bw')
})
bigwigs.to_csv('test_bigwig.csv', header=False)
bigwigs
dmr_ds.annotate_by_bigwigs(slop=250,
bigwig_table='test_bigwig.csv',
dim='snATAC',
cpu=30)
# the annotated matrix are stored in a new data variable
dmr_ds['dmr_snATAC_da']
###Output
_____no_output_____
###Markdown
DMR Overlapping Genome Features (BED) Next, we overlap the DMR regions with a set of BED files that contain different kinds of genome features (e.g. CGI, promoter). The {func}`annotate_by_beds ` method of RegionDS scans the regions against each BED file, resulting in a new data variable stored in the RegionDS. The output dataset is a boolean matrix recording whether a DMR overlaps with each feature.
###Code
genome_feature_dir = '../../data/genome/genome_feature/'
genome_feature_beds = {
'.'.join(p.name.split('.')[:-4]): str(p)
for p in pathlib.Path(genome_feature_dir).glob('*.bed.gz')
}
beds = pd.Series(genome_feature_beds)
beds.to_csv('test_genome_featue_bed.csv', header=False)
beds
dmr_ds.annotate_by_beds(slop=250,
bed_table='test_genome_featue_bed.csv',
dim='genome-features',
bed_sorted=False,
cpu=30)
# the annotated matrix are stored in a new data variable
dmr_ds['dmr_genome-features_da']
###Output
_____no_output_____
###Markdown
After Annotation After annotation, the RegionDS will contain additional dmr-by-feature matrices.
###Code
# note the dmr_genome-features and dmr_snATAC dir is newly added by the annotation functions
!tree -L 2 test_HIP/
###Output
test_HIP/
├── chrom_sizes.txt
├── dmr
│ ├── count_type
│ ├── dmr
│ ├── dmr_chrom
│ ├── dmr_da
│ ├── dmr_da_frac
│ ├── dmr_end
│ ├── dmr_length
│ ├── dmr_ndms
│ ├── dmr_start
│ ├── dmr_state
│ └── sample
├── dmr_genome-features
│ ├── dmr
│ ├── dmr_genome-features_da
│ └── genome-features
├── dmr_snATAC
│ ├── dmr
│ ├── dmr_snATAC_da
│ └── snATAC
└── dms
├── count_type
├── dms
├── dms_chrom
├── dms_contexts
├── dms_da
├── dms_da_frac
├── dms_pos
├── dms_p-values
├── dms_residual
└── sample
31 directories, 1 file
###Markdown
Selectively open When you open a RegionDS containing multiple annotations, you can select specific datasets with `select_dir` and select specific DMRs with `use_regions`.
###Code
# select 2 DMR and their snATAC annotation
RegionDS.open('test_HIP/',
select_dir=['dmr', 'dmr_snATAC'],
use_regions=['chr1-0', 'chr1-1'])
###Output
Using dmr as region_dim
|
notebooks/survey_calibration_cvxr.ipynb | ###Markdown
###Code
#@title Copyright 2019 The Empirical Calibration Authors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
###Output
_____no_output_____
###Markdown
A common use case of surveys is to estimate the mean or total of a quantity. We replicate the [direct standardization example](https://rviews.rstudio.com/2018/07/20/cvxr-a-direct-standardization-example/). Data was obtained from the CVXR package's [github repo](https://github.com/anqif/CVXR/tree/master/data), where* `dspop` contains 1000 rows with columns - $\text{sex} \sim \text{Bernoulli}(0.5)$ - $\text{age} \sim \text{Uniform}(10, 60)$ - $y_i \sim N(5 \times \text{sex}_i + 0.1 \times \text{age}_i, 1)$.* `dssamp` contains a skewed sample of 100 rows with small values of $y$ over-represented. Imports
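As a rough illustration of the data generating process above (a sketch with hypothetical variable names, not the actual CVXR data):
```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
sex = rng.binomial(1, 0.5, size=n)      # sex ~ Bernoulli(0.5)
age = rng.uniform(10, 60, size=n)       # age ~ Uniform(10, 60)
y = rng.normal(5 * sex + 0.1 * age, 1)  # y ~ N(5*sex + 0.1*age, 1)
simulated_dspop = pd.DataFrame({'sex': sex, 'age': age, 'y': y})
```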
###Code
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
%config InlineBackend.figure_format='retina'
# install and import ec
!pip install -q git+https://github.com/google/empirical_calibration
import empirical_calibration as ec
# install and import rdata
!pip install -q rdata
import rdata
###Output
Building wheel for empirical-calibration (setup.py) ... done
###Markdown
Download data
###Code
# Read data from R package's github repo.
!wget -q https://github.com/anqif/CVXR/raw/master/data/dspop.rda
!wget -q https://github.com/anqif/CVXR/raw/master/data/dssamp.rda
dspop = rdata.conversion.convert(rdata.parser.parse_file('dspop.rda'))['dspop']
dssamp = rdata.conversion.convert(rdata.parser.parse_file('dssamp.rda'))['dssamp']
###Output
_____no_output_____
###Markdown
Analysis
###Code
#@title Apply empirical calibration to get weights
cols = ['sex', 'age']
weights, _ = ec.maybe_exact_calibrate(
covariates=dssamp[cols],
target_covariates=dspop[cols],
objective=ec.Objective.ENTROPY
)
###Output
_____no_output_____
###Markdown
Estimates of mean The true mean of $y$ based on the data generating process is 6.0. Using the generated population of size 1000 and sample of size 100 contained in the CVXR package, the population mean is 6.01, but the mean of the skewed sample is 3.76, which is a gross underestimation. However, with empirical calibration, the weighted mean is 5.82, which is closer to the population mean.
###Code
print('True mean of y: {}'.format(dspop['y'].mean()))
print('Un-weighted sample mean: {}'.format(dssamp['y'].mean()))
print('Weighted mean: {}'.format(np.average(dssamp['y'], weights=weights)))
###Output
True mean of y: 6.009373516521697
Un-weighted sample mean: 3.7554424512120006
Weighted mean: 5.821816581178334
###Markdown
Kernel density The kernel density plots show that the unweighted curve is biased toward smaller values of $y$, but the weights help recover the true density.
###Code
import statsmodels.api as sm
true = sm.nonparametric.KDEUnivariate(dspop['y'])
un_weighted = sm.nonparametric.KDEUnivariate(dssamp['y'])
weighted = sm.nonparametric.KDEUnivariate(dssamp['y'])
true.fit(bw='Scott')
un_weighted.fit(bw='Scott')
weighted.fit(fft=False, weights=weights, bw='Scott')
fig, ax = plt.subplots(1, 1)
ax.plot(true.support, true.density, label='true')
ax.plot(un_weighted.support, un_weighted.density, label='un-weighted')
ax.plot(weighted.support, weighted.density, label='weighted')
ax.legend()
###Output
_____no_output_____ |
additional_content/8_1_Inference_Data_Setup.ipynb | ###Markdown
First, select Head of Household information, including geocodes and member info. Then, add on spells.
###Code
##DO NOT RERUN UNLESS CHANGING VARIABLES/PARAMETERS BELOW
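# NOTE: this notebook assumes an open psycopg2 connection `conn` and cursor `cursor` created earlier
# (e.g. conn = psycopg2.connect(...); cursor = conn.cursor()), as well as imports of psycopg2.extras
# and datetime used in the cells below.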
conn.rollback()
sql_hhspells = """CREATE TEMP TABLE hhspells AS
SELECT a.*, b.start_date, b.end_date, b.benefit_type, b.ch_dpa_caseid
FROM idhs.hh_member a LEFT JOIN idhs.hh_indcase_spells b ON a.recptno = b.recptno
WHERE end_date BETWEEN '2013-01-01' AND '2013-12-31';"""
sql_hhselect1 = """CREATE TEMP TABLE hhinfo AS
SELECT a.*, b.edlevel, b.health, b.martlst, b.workexp
FROM hhspells a LEFT JOIN idhs.member_info b
ON a.ch_dpa_caseid=b.ch_dpa_caseid AND a.recptno=b.recptno;"""
sql_hhselect2 = """CREATE TEMP TABLE hhinfo2 AS
SELECT a.*, b.lng_x, b.lat_y, b.geom, b.geom_2163, b.county_fips_10_nbr,
b.tract_fips_10_nbr, b.place_10_nm
FROM hhinfo a LEFT JOIN idhs.case_geocode b
ON a.ch_dpa_caseid=b.ch_dpa_caseid;"""
sql_hhselect3 = """CREATE TEMP TABLE hhinfo3 AS
SELECT a.*, b.district, b.case_group, b.homeless
FROM hhinfo2 a LEFT JOIN idhs.assistance_case b
ON a.ch_dpa_caseid=b.ch_dpa_caseid;"""
cursor.execute(sql_hhspells)
cursor.execute(sql_hhselect1)
cursor.execute(sql_hhselect2)
cursor.execute(sql_hhselect3)
conn.commit()
#DO NOT RERUN UNLESS RERUNNING THE ABOVE - IF SO, DROP TABLE FIRST!
sql_createtable = """CREATE TABLE class2.for_inference_example AS
SELECT * FROM hhinfo3;
ALTER TABLE class2.for_inference_example
ADD COLUMN new_spell_win1yr INTEGER,
ADD COLUMN new_spell_win1yr_benefit INTEGER,
ADD COLUMN has_job_q1 INTEGER,
ADD COLUMN has_job_q2 INTEGER,
ADD COLUMN has_job_q3 INTEGER,
ADD COLUMN has_job_q4 INTEGER,
ADD COLUMN has_job_win1yr INTEGER,
ADD COLUMN lose_job_win1yr INTEGER,
ADD COLUMN wage_q1 REAL,
ADD COLUMN wage_q2 REAL,
ADD COLUMN wage_q3 REAL,
ADD COLUMN wage_q4 REAL,
ADD COLUMN total_wage_1yr REAL,
ADD COLUMN new_id SERIAL PRIMARY KEY;
ALTER TABLE class2.for_inference_example
OWNER TO class2_admin;
GRANT ALL PRIVILEGES ON TABLE class2.for_inference_example TO class2_admin;
GRANT SELECT ON TABLE class2.for_inference_example TO class2_select;
"""
cursor.execute(sql_createtable)
conn.commit()
conn.rollback()
###Output
_____no_output_____
###Markdown
Now loop through all rows and update table with job variables
###Code
select_statement = "SELECT * FROM class2.for_inference_example WHERE total_wage_1yr IS NULL;"
row_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
qtr_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
update_cur = conn.cursor()
row_cur.execute(select_statement)
for row in row_cur: #loop through each row (spell) of participation
#initialize variables
new_spell_win1yr = 0
new_spell_win1yr_benefit = 0
has_job_win1yr = 0
lose_job_win1yr = 0
total_wage_1yr = 0
end_year = row['end_date'].year
# determine quarter from month of end_spell date
if row['end_date'].month < 4:
end_qtr = 1
elif row['end_date'].month < 7:
end_qtr = 2
elif row['end_date'].month < 10:
end_qtr = 3
else:
end_qtr = 4
#create a list (length 4) of dictionaries where each dict contains quarter info for 4 quarters after
# end_date so as to pull data from il_wage - different dates for each row/spell
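# e.g. a spell ending in 2013 Q3 is checked against 2013Q3, 2013Q4, 2014Q1 and 2014Q2;
# quarters past Q4 wrap around to the following year in the adjustment below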
oneyear = []
for i in range(0,4):
qtr_info={}
qtr_info['count'] = i+1
qtr_info['quarter'] = end_qtr + i
if qtr_info['quarter'] >= 5:
qtr_info['quarter'] = qtr_info['quarter'] - 4
qtr_info['year'] = end_year + 1
else:
qtr_info['year'] = end_year
oneyear.append(qtr_info)
## END FOR LOOP - FOR I IN RANGE (0,4)
for qtr in oneyear: #loop through the four quarters determined above
#initialize variables
has_job = 0;
wage = 0;
#select any wage data for this SSN, this quarter
qtr_emp_select = "SELECT empr_no, wage FROM ides.il_wage_" + str(qtr['year']) + "q" + str(qtr['quarter'])
qtr_emp_select += " WHERE ssn LIKE '" + str(row['ssn_hash']) + "' "
qtr_emp_select += ";"
qtr_cur.execute(qtr_emp_select)
qtr_result = qtr_cur.fetchall()
if qtr_result: #if results are obtained
has_job = 1
has_job_win1yr = 1 #this global variable is set if any instance of has_job
for entry in qtr_result:
wage += entry['wage']
elif has_job_win1yr == 1:
lose_job_win1yr = 1 #set lose job if respondent previously had job indicator positive
#update quarter specific wage info
update_stmt = "UPDATE " + schema + "." + table
update_stmt += " SET has_job_q" + str(qtr['count']) + " = " + str(has_job) + ","
update_stmt += " wage_q" + str(qtr['count']) + " = " + str(wage)
update_stmt += " WHERE new_id = " + str(row['new_id']) + ";"
update_cur.execute(update_stmt)
total_wage_1yr += wage
## END FOR LOOP - FOR QTR IN YEAR
conn.commit() #commit the 4qtr updates
#update row specific wage info
tot_up_stmt = "UPDATE " + schema + "." + table
tot_up_stmt += " SET has_job_win1yr = " + str(has_job_win1yr) + ", "
tot_up_stmt += "lose_job_win1yr = " + str(lose_job_win1yr) + ", "
tot_up_stmt += "total_wage_1yr = " + str(total_wage_1yr)
tot_up_stmt += " WHERE new_id = " + str(row['new_id']) + ";"
update_cur.execute(tot_up_stmt)
conn.commit() #commit row updates
##END FOR LOOP - FOR ROW IN RESULT
select_statement = "SELECT * FROM class2.for_inference_example WHERE total_wage_1yr IS NULL;"
row_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
bene_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
update_cur = conn.cursor()
row_cur.execute(select_statement)
for row in row_cur: #loop through each row (spell) of participation
#initialize variables
new_spell_win1yr = 0
new_spell_win1yr_benefit = 0
#get the date 1 year from end_date of spell
date_1yr = row['end_date'] + datetime.timedelta(days=365)
#build the select statement to get start of next spell for this individual (by recptno and ch_dpa_caseid)
#start dates between last end and 1 year later
#order by start_date to get the first start date after end using fetchone
bene_select = "SELECT start_date, benefit_type FROM idhs.hh_indcase_spells"
bene_select += " WHERE recptno = " + str(row['recptno']) + " AND "
bene_select += " ch_dpa_caseid = " + str(row['ch_dpa_caseid']) + " AND "
bene_select += " (start_date BETWEEN '" + str(row['end_date']) + "' AND '" + str(date_1yr) + "')"
bene_select += " ORDER BY start_date;"
bene_cur.execute(bene_select)
bene_result = bene_cur.fetchall()
if bene_result: #if a spell is obtained within 1 year of end_date of the spell
new_spell_win1yr = 1
#check to see if its the same type of benefit - need to check all spells, not just first
for spell in bene_result:
if spell['benefit_type'] == row['benefit_type']:
new_spell_win1yr_benefit = 1
# END IF BENEFIT TYPES MATCH
#END FOR LOOP - SPELL IN BENE_RESULT
#END IF NEW SPELL EXISTS - IMPLIED ELSE IS THE ZEROS THAT THE VARIABLES WERE INITIALIZED TO
#update record with results
update_stmt = "UPDATE " + schema + "." + table
update_stmt += " SET new_spell_win1yr = " + str(new_spell_win1yr) + ","
update_stmt += " new_spell_win1yr_benefit = " + str(new_spell_win1yr_benefit)
update_stmt += " WHERE new_id = " + str(row['new_id']) + ";"
update_cur.execute(update_stmt)
conn.commit()
##END FOR LOOP - FOR ROW IN RESULT
conn.rollback()
###Output
_____no_output_____ |
docs/source/tutorials/function_print_title.ipynb | ###Markdown
functions: print_title [print_title](../api/functions.rst#nornir_utils.plugins.functions.print_title) is a function that prints a title
###Code
from nornir_utils.plugins.functions import print_title
print_title("About to do somehting!")
###Output
[1m[32m**** About to do something! ****************************************************[0m
[0m |
dataproject/Data project MEC.ipynb | ###Markdown
**Data project***** Introduction This data project addresses the development of home care for the elderly in Denmark. The main focus of the project will be on people aged 65 years and above. The project will look at the number of referral hours to home care in total and divided into whether the recipients have chosen a private or a public supplier. It is a known fact that the share of elders has been growing for the last decade, but at the same time, the number of referral hours to home care has been decreasing. Whether the decreasing number is due to the elderly generally needing less help with personal care and practical tasks or a down-prioritization by the public sector is an interesting discussion, but will remain out of the scope for this project. In Denmark, it is the responsibility of the public sector to take care of the elderly, including providing home care for both personal care and practical tasks. The majority of the public services for the elderly are controlled by the state but supplied by the municipalities. The municipalities can outsource home care to private contractors, who need to live up to a certain level of quality and price. In the end, it is the elderly that choose which supplier they wish to receive help from. The data project uses data from Statistics Denmark. Table AED06 shows total recipients referral to home care and hours per week. Table AED12 shows the share of home care provided by private contractors. These two tables can be used to find the number of recipients referral to home care and the number that choose a public/private supplier. Furthermore, table BY2 is used to get data on the entire population. The data project is organized as follows. Section 3 describes the import, cleaning and structuring of the data used in the project. Section 4 presents a graphical analysis and a map, and lastly, section 5 concludes on the project. Data import, cleaning and structuring Importing relevant packages Importing useful packages for the project:
###Code
# importing useful packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
As already mentioned, we are going to use data from Statistics Denmark. Instead of going to their website to download their data in Excel format and then import it to Python, we choose to use an API (Application Programming Interface) to fetch the data.
###Code
# importing a package that allow to extract data from Statistics Denmark (DST)
import pydst
dst = dst = pydst.Dst(lang='en')
# importing a package that allow to make a map of Denmark
import geopandas as gpd
###Output
_____no_output_____
###Markdown
*Note that the two packages* `pydst` *and* `geopandas` *are not default packages in python, and hence, one needs to install them in order to be able to run the code in this notebook.*- *To install* `pydst` (works for both windows and mac)*: open terminal and run the command: pip install git+https://github.com/elben10/pydst*- *To install* `geopandas`: - *Windows: open terminal and run the command: conda install -c conda-forge geopandas* - *Mac: open terminal* 1. Run the command: conda install -c conda-forge geopandas 2. Uninstall fiona by running the command: conda uninstall fiona 3. Download fiona wheel and install: can be downloaded from here (https://pypi.python.org/packages/71/ea/908bf078499b30d1ec374eb5baba016a568fc8142ee6ccf72e356d20871c/Fiona-1.7.4-cp27-cp27m-macosx_10_6_intel.whlmd5=971393c23ffc552664b7c694b992fb3e) and run: pip install Fiona-1.7.4-cp27-cp27m-macosx_10_6_intel.whl 4. Reinstall geopandas by running the command: pip install git+git://github.com/geopandas/geopandas.git **NOTE: We only use `geopandas` to make it easier to compare municipalities at a given point in time. We have another interactive figure (that does not require `geopandas`) where it is possible to see the development over time in each municipality. Comparing the level across municipalities is also possible, but not as easy** Importing data on referral hours from Statistics Denmark The first step is to import the relevant data. We need information on the total, public and private referral hours to home care. Statistics Denmark has a data set including total referral hours to home care (AED06) and a data set including the share of privately provided referral hours (AED12). We start by getting an overview of the variables by using `get_variables()`. The function lists all the values that can be attained in the desired table "*Recipients referral to home care, free choice, by region, type of benefits, hours per week, age and sex (2008-2017)*" (AED06). It can also be found here: http://statistikbanken.dk/statbank5a/default.asp?w=1368.
###Code
# importing total referral hours
totalhours_vars = dst.get_variables(table_id = 'AED06')
totalhours_vars
###Output
_____no_output_____
###Markdown
Now, the same approach is taken to get an overview of the table: "*Recipients referral to home care, free choice, who use private contractor by region, type of benefits, age and sex (2008-2017)*" (AED12).
###Code
# importing the share of privately provided referral hours
privateshare_vars = dst.get_variables(table_id = 'AED12')
privateshare_vars
###Output
_____no_output_____
###Markdown
Cleaning and structuring data on referral hours In order to fetch the data we use `get_data()`. The function allows us to choose the desired variables from the original table, and further restrict the data on several conditions listed below. With data from table AED06 containing total referral hours to home care, we generate two dataframes. Common for both of them is that we restrict it to only include *Permanent home help, total* (100), which states the total number of referral hours to home care per week. In order to find the underlying value of *Permanent home help, total* we use a function that extracts the second element of the variable *values* from the data set *totalhours_vars*. This is found to be 100, which can be seen in the output below.
###Code
totalhours_vars['values'][1]
###Output
_____no_output_____
###Markdown
Using the method we restrict the data to include ages *65 years and above* (200-800) and both *men and women* (100). For one of the two dataframes we only keep the country level (000) and for the other, we only keep the municipality level (i.e. we remove observations on country and regional level). The time period is restricted to 2010-2017. We also rename a few of the variables and remove missing observations for some municipalities. Lastly, we convert our variables of interest, namely the hours of home care to a numeric variable.
###Code
# a. defining dictionary to rename variables
col_dict1 = {}
col_dict1['TID'] = 'year'
col_dict1['INDHOLD'] = 'home_help_total'
col_dict1['OMRÅDE'] = 'geo_area'
col_dict1['ALDER'] = 'age'
# b. load data
totalhours_denmark = dst.get_data(table_id = 'AED06', variables={'OMRÅDE':['000'], 'YDELSESTYPE':['100'],
'ALDER':['200','300','400','500','600','700','800'], 'KOEN':['100'],
'Tid':['2010','2011','2012','2013','2014','2015','2016','2017']})
totalhours_municipality = dst.get_data(table_id = 'AED06', variables={'OMRÅDE':['*'], 'YDELSESTYPE':['100'],
'ALDER':['200','300','400','500','600','700','800'], 'KOEN':['100'],
'Tid':['2010','2011','2012','2013','2014','2015','2016','2017']})
# c. renaming and dropping variables
totalhours_denmark.rename(columns=col_dict1, inplace=True)
totalhours_denmark.drop(['YDELSESTYPE', 'KOEN', 'TIMEUGE'], axis=1, inplace=True)
totalhours_municipality.rename(columns=col_dict1, inplace=True)
totalhours_municipality.drop(['YDELSESTYPE', 'KOEN', 'TIMEUGE'], axis=1, inplace=True)
# d. removing missing observations
totalhours_municipality = totalhours_municipality[totalhours_municipality.home_help_total !=".."]
# e. removing observations on country and regional level in the municipality dataset
for val in ['Region', 'All Denmark']:
I = totalhours_municipality.geo_area.str.contains(val)
totalhours_municipality = totalhours_municipality.loc[I == False]
# f. convert values to numeric
totalhours_denmark.home_help_total=totalhours_denmark.home_help_total.astype('float')
totalhours_municipality.home_help_total=totalhours_municipality.home_help_total.astype('float')
# g. multiplying by 52 as there are 52 weeks in a year
totalhours_denmark.home_help_total=totalhours_denmark.home_help_total*52
totalhours_municipality.home_help_total=totalhours_municipality.home_help_total*52
###Output
_____no_output_____
###Markdown
Now, the data set for all Denmark on the total referral hours to home care looks like the table below:
###Code
totalhours_denmark.head(5)
###Output
_____no_output_____
###Markdown
And likewise, the data set for the total referral hours to home care on the municipality level looks like the table below:
###Code
totalhours_municipality.head(5)
###Output
_____no_output_____
###Markdown
The same approach is taken for the data from AED12 containing the private share of referral hours to home care. We generate two dataframes, where we restrict both of them to only include *the fraction of privately supplied home care* (100), only ages *65 years and above* (200-800) and both *men and women* (100). For one of the two dataframes we only keep the country level (000) and for the other, we only keep the municipality level (i.e. we remove observations on country and regional level). The time period is restricted to 2010-2017. We also rename a few of the variables and remove missing observations for some municipalities. Lastly, we convert our variables of interest, namely the hours of home care to a numeric variable.
###Code
# a. defining dictionary to rename variables
col_dict2 = {}
col_dict2['TID'] = 'year'
col_dict2['INDHOLD'] = 'frac_private'
col_dict2['OMRÅDE'] = 'geo_area'
col_dict2['ALDER'] = 'age'
# b. load data
privateshare_denmark = dst.get_data(table_id = 'AED12', variables={'OMRÅDE':['000'], 'YDELSESTYPE':['100'],
'ALDER':['200','300','400','500','600','700','800'], 'KOEN':['100'],
'Tid':['2010','2011','2012','2013','2014','2015','2016','2017']})
privateshare_municipality = dst.get_data(table_id = 'AED12', variables={'OMRÅDE':['*'], 'YDELSESTYPE':['100'],
'ALDER':['200','300','400','500','600','700','800'], 'KOEN':['100'],
'Tid':['2010','2011','2012','2013','2014','2015','2016','2017']})
# c. renaming and dropping variables
privateshare_denmark.rename(columns=col_dict2, inplace=True)
privateshare_denmark.drop(['YDELSESTYPE', 'KOEN'], axis=1, inplace=True)
privateshare_municipality.rename(columns=col_dict2, inplace=True)
privateshare_municipality.drop(['YDELSESTYPE', 'KOEN'], axis=1, inplace=True)
# d. removing missing observations
privateshare_municipality = privateshare_municipality[privateshare_municipality.frac_private !=".."]
# e. removing observations on country and regional level
for val in ['Region', 'All Denmark']:
I = privateshare_municipality.geo_area.str.contains(val)
privateshare_municipality = privateshare_municipality.loc[I == False]
# f. convert values to numeric
privateshare_denmark.frac_private=privateshare_denmark.frac_private.astype('float')
privateshare_municipality.frac_private=privateshare_municipality.frac_private.astype('float')
###Output
_____no_output_____
###Markdown
Now, the data set for all Denmark on the private share of referral hours looks like the table below:
###Code
privateshare_denmark.head(5)
###Output
_____no_output_____
###Markdown
And likewise, the data set on the private share of referral hours on the municipality level looks like the table below:
###Code
privateshare_municipality.head(5)
###Output
_____no_output_____
###Markdown
Now that the data is cleaned, we merge the relevant data sets in order to obtain two tables with all the relevant variables.
###Code
# a. merging the two data sets for Denmark
mergeddata_denmark = pd.merge(totalhours_denmark,privateshare_denmark,how='left',on=['year', 'geo_area', 'age'])
# b.merging the two data sets on municipality level
mergeddata_municipality = pd.merge(totalhours_municipality,privateshare_municipality,how='left',on=['year', 'geo_area', 'age'])
mergeddata_denmark.head(5)
mergeddata_municipality.head(5)
###Output
_____no_output_____
###Markdown
We are interested in analysing referral hours to home care in total and referral hours to home care supplied by public and private firms. Further, we want to see the development over time in the period 2010-2017 and sum all the age groups into one large group containing all elderly aged 65 years and above. The data from Statistics Denmark only includes total hours and the private share divided into many age groups. To find referral hours to home care supplied by the public and private firms for all elderly, we need to do some algebra. The new data sets are shown in the two tables below.
###Code
# a. making 'frac_private' a share and not a percentage
mergeddata_denmark['temp'] = mergeddata_denmark.frac_private / 100
mergeddata_municipality['temp'] = mergeddata_municipality.frac_private / 100
# b. calculating the hours of home care supplied by the private sector
mergeddata_denmark['private_supply'] = mergeddata_denmark.home_help_total * mergeddata_denmark.temp
mergeddata_municipality['private_supply'] = mergeddata_municipality.home_help_total * mergeddata_municipality.temp
# c. calculating the hours of home care supplied by the public sector
mergeddata_denmark['public_supply'] = mergeddata_denmark.home_help_total-mergeddata_denmark.private_supply
mergeddata_municipality['public_supply'] = mergeddata_municipality.home_help_total-mergeddata_municipality.private_supply
# d. dropping irrelevant variables
mergeddata_denmark.drop(['frac_private','temp'], axis=1, inplace=True)
mergeddata_municipality.drop(['frac_private','temp'], axis=1, inplace=True)
# e. summing the number of referral hours for the different age groups
data_denmark = mergeddata_denmark.groupby(['geo_area', 'year']).sum().reset_index()
data_municipality = mergeddata_municipality.groupby(['geo_area', 'year']).sum().reset_index()
# d. sorting the dataset
data_denmark.sort_values(by=['year', 'geo_area'], inplace=True)
data_municipality.sort_values(by=['year', 'geo_area'], inplace=True)
data_denmark.head(5)
data_municipality.head(5)
###Output
_____no_output_____
###Markdown
Cleaning and structuring population data Besides the two data sets including total referrals to home care and the private share, we also need population data for two reasons. First, to illustrate that both the number and share of people aged 65 years and above have been increasing for the last decade. Further, we need the population size in each municipality to calculate the number of referral hours to home care per person in order to be able to compare the municipalities. To do this, we fetch data from the table BY2 from Statistics Denmark using the same steps described in more detail above. Again we use `get_data()` to choose the desired variables and restrict the data on several conditions such as age (65 years and above) and time period (2010-2017). Further, we rename and drop some variables. Three data sets are generated. One which shows the population size for everyone in Denmark, one which shows the population size for all people aged 65 years and above in Denmark and one which shows the population size for all aged 65 years and above in all the municipalities. The first two data sets are merged and used to calculate the share of people aged 65 years and above. The last data set is used later on to find the number of referrals to home care per person.
###Code
# a. defining dictionary to rename variables
col_dict1 = {}
col_dict1['TID'] = 'year'
col_dict1['ALDER'] = 'age'
col_dict1['INDHOLD'] = 'persons'
col_dict1['KOMK'] = 'geo_area'
col_dict2 = {}
col_dict2['persons_x'] = 'total'
col_dict2['persons_y'] = 'above64'
# b. load data
Population_all = dst.get_data(table_id = 'BY2', variables={'KOMK':['*'],'BYST':['*'],'KØN':['*'],'ALDER':['*'],
'Tid':['2010','2011','2012','2013','2014','2015','2016','2017']})
Population_above64 = dst.get_data(table_id = 'BY2', variables={'KOMK':['*'],'BYST':['*'],'KØN':['*'],
'ALDER':['65','66','67','68','69','70','71','72','73','74',
'75','76','77','78','79','80','81','82','83','84',
'85','86','87','88','89','90','91','92','93','94',
'95','96','97','98','99','100','101','102','103',
'104','105','106','107','108','109','110','111',
'112','113','114','115','116','117','118','119',
'120','121','122','123','124','125'],'Tid':['2010',
'2011','2012','2013','2014','2015','2016','2017']})
Population_above64_mun = dst.get_data(table_id = 'BY2', variables={'KOMK':['*'],'BYST':['*'],'KØN':['*'],
'ALDER':['65','66','67','68','69','70','71','72','73',
'74','75','76','77','78','79','80','81','82',
'83','84','85','86','87','88','89','90','91',
'92','93','94','95','96','97','98','99','100',
'101','102','103','104','105','106','107',
'108','109','110','111','112','113','114',
'115','116','117','118','119','120','121',
'122','123','124','125'],'Tid':['2010','2011',
'2012','2013','2014','2015','2016','2017']})
# c. renaming and dropping variables
Population_all.rename(columns=col_dict1, inplace=True)
Population_above64.rename(columns=col_dict1, inplace=True)
Population_above64_mun.rename(columns=col_dict1, inplace=True)
# d. summing over the ages and municipalities
Population_all = Population_all.groupby(['year']).sum().reset_index()
Population_all.rename(columns = {'persons':'total'}, inplace=True)
Population_above64 = Population_above64.groupby(['year']).sum().reset_index()
Population_above64.rename(columns = {'persons':'above64'}, inplace=True)
# e. summing over the ages and keeping municipalities
Population_above64_mun = Population_above64_mun.groupby(['year', 'geo_area']).sum().reset_index()
Population_above64_mun.rename(columns = {'persons':'above64'}, inplace=True)
# f. merging the datasets Population_all and Population_above64
popdata = pd.merge(Population_all,Population_above64,how='left',on=['year'])
popdata.rename(columns=col_dict2, inplace=True)
# g. calculating the share
popdata['share'] = popdata.above64/popdata.total*100
popdata.head(5)
###Output
_____no_output_____
###Markdown
In order to compare the municipalities, we merge the data set containing the number of persons aged 65 years and above and the data set on referral hours on the municipality level. Thereafter we calculate the total number of referral hours per person and the number of publicly and privately supplied referral hours per person. We do this to get a relative measure.
###Code
# a. merging the population data and the data on referral hours on municipality level
data_municipality1 = pd.merge(data_municipality,Population_above64_mun,how='left',on=['year','geo_area'])
# b. creating variables
data_municipality1['total_per_pers'] = data_municipality1.home_help_total/data_municipality1.above64*100
data_municipality1['public_per_pers'] = data_municipality1.public_supply/data_municipality1.above64*100
data_municipality1['private_per_pers'] = data_municipality1.private_supply/data_municipality1.above64*100
# c. viewing the head of the data
data_municipality1.head(5)
###Output
_____no_output_____
###Markdown
Data analysis Development in referral hours to home care The development in referral hours to home care is shown in the figure below, and it can be seen that there has been a decrease in referral hours to home care over the period 2010-2017. The number of referral hours has especially decreased sharply in the years 2010 to 2013. There was a small increase from 2015 to 2016.Note the statistics are compiled on the basis of monthly digitalized data reports from the municipalities. The monthly statistical coverage varies among the municipalities.
###Code
# a. plot figure
plt.figure(figsize=(4,3), dpi=120)
plt.plot('year', 'home_help_total', 'ko-', data=data_denmark)
# b. ad title and labels
plt.title('Development in permanent home help 2010-2017')
plt.ylabel('Permanent home help, total')
plt.rc('font', size=6)
###Output
_____no_output_____
###Markdown
The rather large decrease in the number of referral hours to home care has happened even though both the number of persons aged 65 and above and their share of the total population have been steadily increasing over the period. This is illustrated in the two figures below.
###Code
# a. plot figure
plt.figure(figsize=(7,3), dpi=120)
# b. specifying the left subplot
plt.subplot(1, 2, 1)
plt.plot('year', 'total', 'ko-', data=popdata)
plt.title('Number of people aged 65+ years')
plt.ylabel('Number of people')
plt.rc('font', size=6)
# c. specifying the right subplot
plt.subplot(1, 2, 2)
plt.plot('year', 'share', 'bo-', data=popdata)
plt.title('Share of people aged 65+ years')
plt.ylabel('Percentage')
plt.rc('font', size=6)
# d. showing the figure
plt.show()
###Output
_____no_output_____
###Markdown
In the figure below, the number of referral hours to home care that is chosen by the recipients to be supplied by a private contractor is a bit lower in 2017 than it was in 2010. However, the number of referral hours to home care that is chosen to be supplied by the public has decreased a lot more from 2010 to 2017. This shows that, in general, fewer referral hours are being supplied. A benefit from outsourcing home care is that private companies can be more effective as well as supplying faster, better and cheaper solutions than the public (i.e. the municipality). On the other hand, private companies most likely want to maximise profits, which might make them more focused on making the costs of home care as small as possible. This might result in private companies supplying lower quality home care. However, the municipality has to offer its residents two or three possible contractors to provide home care, where one of them has to be the municipality itself. If the private companies drive down the quality of home care in order to maximise profit, the elderly can always choose a different provider (e.g. the municipality itself). This can help keep the quality of the home care provided by private companies at a certain level.
###Code
# a. plot figure
plt.figure(figsize=(4,3), dpi=120)
plt.plot('year', 'private_supply', 'ko-', data=data_denmark)
plt.plot('year', 'public_supply', 'bo-', data=data_denmark)
# b. ad title and labels
plt.title('Development of number of privately and publicly supplied referral hours, 2010-2017')
plt.ylabel('Number of referral hours')
plt.legend()
plt.rc('font', size=6)
###Output
_____no_output_____
###Markdown
Differences across municipalities In this section, we first look at the development in privately and publicly supplied referral hours per person for the elderly aged 65 and above in the period 2010-2017. Thereafter we visualise the differences across municipalities for the total number of supplied referral hours per person for the elderly aged 65 and above in 2017 in a map of Danish municipalities. **Development in privately and publicly supplied referral hours per person** From the figure below it is possible to see the development of the number of referral hours to home care per person in each municipality in the period 2010-2017, split into the number of hours that are supplied by the municipality and the number of hours that are supplied by a private supplier. Generally, most of the municipalities have experienced a decrease in the number of referral hours in total. Some municipalities, e.g. Faxe and Herlev, have experienced a shift from most of the referral hours to home care being supplied by the municipality to instead being supplied by a private supplier. Other municipalities, e.g. Hørsholm, have recipients who primarily have used private suppliers in the whole period 2010-2017. However, most municipalities have recipients who primarily have used the municipality as the supplier of home care.
###Code
# defining the figure
def plot_e(dataframe, municipality):
I = dataframe['geo_area'] == municipality
ax=dataframe.loc[I,:].plot(x='year', y=['private_per_pers','public_per_pers'], style=['-bo', '-ko'], figsize=(8, 5),fontsize=11, legend='False')
ax.legend(loc='upper right', fontsize=11)
ax.set_title('Development of number of privately and publicly supplied referral hours per person on municipality level, 2010-2017', fontsize=13)
ax.set_xlabel('Year',fontsize=11)
ax.set_ylabel('Number of referral hours per person', fontsize=11)
# making the figure interactive
widgets.interact(plot_e,
dataframe = widgets.fixed(data_municipality1),
municipality = widgets.Dropdown(description='Municipality', options=data_municipality1.geo_area.unique(), value='Copenhagen')
);
###Output
_____no_output_____
###Markdown
**Mapping the total referral hours per person** In this section, we map the total number of referral hours per person for the elderly aged 65 years and above across the municipalities in Denmark. The colouring of the map indicates an average of the total number of referral hours per person aged 65 and above in each of the municipalities. First, we restrict the data to only include 2017.
###Code
# restricting data to only include 2017
data_municipality2 = data_municipality1[data_municipality1.year == 2017]
###Output
_____no_output_____
###Markdown
Then we need to import a shape file for the Danish municipalities. The figure below is just a simple map of Denmark to see that the map is working properly.
###Code
# a. read shape file
file = './KOMMUNE.shp'
map_df = gpd.read_file(file)
# b. plot map
map_df.plot()
###Output
_____no_output_____
###Markdown
Now we need to merge our data on the total referral hours per person in each municipality with the geographical data in the shape file.
###Code
# a. merge on municipality
map_data = map_df.set_index('KOMNAVN').join(data_municipality2.set_index('geo_area'))
# b. look at data
map_data.head()
###Output
_____no_output_____
###Markdown
Lastly, we choose the relevant variable we want to visualise on the map, specify the range of that variable and make some visualisation specifications related to the map.
###Code
# a. specify the variable to visualize on the map
variable = 'total_per_pers'
# b. range of variable
vmin, vmax = 0, 1000
# c. create figure and axes
fig, ax = plt.subplots(1,figsize=(20,12))
# d. plot the map
map_data.plot(column=variable,cmap='Blues',linewidth=0.8,ax=ax,edgecolor='0.8', legend=False)
# e. remove the axis
ax.axis('off')
# f. inserting a title to the figure
ax.set_title('Total referral hours per person aged 65+ in the municipalities, 2017', fontdict={'fontsize': '20', 'fontweight' : '3'})
# g. colorbar as a legend
sm = plt.cm.ScalarMappable(cmap='Blues', norm=plt.Normalize(vmin=vmin, vmax=vmax))
# h. empty array for the data range
sm._A = []
# i. Changing the font size of the colorbar
cbar = fig.colorbar(sm, ax=ax,)
cbar.ax.tick_params(labelsize=12)
###Output
_____no_output_____ |
examples/interpretation/segyconverter/01_segy_sample_files.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. Licensed under the MIT License. Generate Synthetic SEG-Y files for testing This notebook builds the test data used by the convert_segy unit tests. It covers just a few of the SEG-Y files that could be encountered if you bring your own SEG-Y files for training. This is not a comprehensive set of files, so there still may be situations where segyio or the convert_segy.py utility would fail to load the SEG-Y data.
###Code
import deepseismic_interpretation.segyconverter.utils.create_segy as utils
import segyio
###Output
_____no_output_____
###Markdown
Create sample SEG-Y files for testing 1. Control, which represents perfect data with no missing traces. 2. Missing traces on the top-left and bottom-right of the geographic field w/ inline sorting 3. Missing traces on the top-left and bottom-right of the geographic field w/ crossline sorting 4. Missing traces in the center of the geographic field w/ inline sorting Control File Create a file that has a cuboid shape with traces at all inlines/crosslines
###Code
controlfile = './normalsegy.segy'
utils.create_segy_file(lambda il, xl: True, controlfile)
utils.show_segy_details(controlfile)
utils.load_segy_with_geometry(controlfile)
###Output
_____no_output_____
###Markdown
Inline Error File inlineerror.segy will throw an error that inlines are not unique because segyio assumes the same number of inlines per crossline
###Code
inlinefile = './inlineerror.segy'
utils.create_segy_file(lambda il, xl: not ((il < 20 and xl < 125) or (il > 40 and xl > 250)),
inlinefile, segyio.TraceSortingFormat.INLINE_SORTING)
utils.show_segy_details(inlinefile)
# Cannot load this file with inferred geometry; segyio will fail
# utils.load_segy_with_geometry(inlinefile)
###Output
_____no_output_____
###Markdown
Crossline Error File xlineerror.segy will throw an error that crosslines are not unique because segyio assumes the same number of crosslines per inline
###Code
xlineerrorfile = './xlineerror.segy'
utils.create_segy_file(lambda il, xl: not ((il < 20 and xl < 125) or (il > 40 and xl > 250)),
xlineerrorfile, segyio.TraceSortingFormat.CROSSLINE_SORTING)
utils.show_segy_details(xlineerrorfile)
# Cannot load this file with inferred geometry; segyio will fail
# utils.load_segy_with_geometry(xlineerrorfile)
###Output
_____no_output_____
###Markdown
Cube hole SEG-Y file When collecting seismic data, unless in an area of open ocean, it is rare to be able to collect all trace data from a rectangular field, so the collected traces rarely form a complete, uniform cube.
###Code
cubehole_segyfile = './cubehole.segy'
utils.create_segy_file(lambda il, xl: not ((20 < il < 30) and (150 < xl < 250)),
cubehole_segyfile, segyio.TraceSortingFormat.INLINE_SORTING)
utils.show_segy_details(cubehole_segyfile)
# Cannot load this file with inferred geometry; segyio will fail
# utils.load_segy_with_geometry(cubehole_segyfile)
###Output
_____no_output_____ |
Section 3/.ipynb_checkpoints/image_classification_svm-checkpoint.ipynb | ###Markdown
Classifying images using SVM Import the digit image dataset
###Code
from sklearn.datasets import load_digits
digits = load_digits()
# Print to show there are 1797 images (8 by 8 images for a dimensionality of 64)
print("Image Data Shape" , digits.data.shape)
# Print to show there are 1797 labels (integers from 0–9)
print("Label Data Shape", digits.target.shape)
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(20,4))
for index, (image, label) in enumerate(zip(digits.data[0:5], digits.target[0:5])):
plt.subplot(1, 5, index + 1)
plt.imshow(np.reshape(image, (8,8)), cmap=plt.cm.gray)
plt.title('Training: %i\n' % label, fontsize = 20)
#We split the data into train and test using train_test_split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(digits.data, digits.target, test_size=0.25, random_state=0)
#We import Logistic Regression
from sklearn.linear_model import LogisticRegression
# we use the default values
logisticRegr = LogisticRegression()
# we train the model
logisticRegr.fit(x_train, y_train)
#We make predictions on the entire test data
predictions = logisticRegr.predict(x_test)
# Use score method to get accuracy of model
score = logisticRegr.score(x_test, y_test)
print(score)
###Output
0.9533333333333334
###Markdown
Using Support Vector Machines
###Code
# Import classifiers
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Using a linear kernel
###Code
# we use the linear kernel to create an SVM instance
svmClassifierLinear = SVC(kernel='linear')
# we train the SVC model with a linear kernel
svmClassifierLinear.fit(x_train, y_train)
#We make predictions on the entire test data
predictionsSVMLinear = svmClassifierLinear.predict(x_test)
# Use score method to get accuracy of model
scoreSVMLinear = svmClassifierLinear.score(x_test, y_test)
print(scoreSVMLinear)
###Output
0.9711111111111111
###Markdown
We get an improved accuracy of 97%
###Code
# we use a polynomial kernel to create an SVM instance
svmClassifierPoly = SVC(kernel='poly')
# we train the SVC model with a polynomial kernel
svmClassifierPoly.fit(x_train, y_train)
# Use score method to get accuracy of model
scoreSVMPoly = svmClassifierPoly.score(x_test, y_test)
print(scoreSVMPoly)
###Output
0.9822222222222222
|
DIgit Recognizer Data Augmentation-Kaggle.ipynb | ###Markdown
Image Augmentation in Digit Recognition with numpy for beginners ...1) Image augmentation is a technique to modify the images in order to expand the dataset. 2) Take the image below of an inverted mountain as an example: the human eye will still call it a mountain even when seen from any angle. 3) We apply a similar concept to the training images by rotating them about different axes. 4) Since it is sometimes hard to get a sufficient number of images for your model, you can enhance the existing images to increase the training set size and improve performance
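As a minimal illustration of the rotation used later in this notebook (assuming a 28×28 single-channel image stored as a numpy array), `np.rot90` rotates the pixel grid counter-clockwise by multiples of 90°:

```python
import numpy as np

# Illustrative example: rotate a 28x28 image array by 90, 180 and 270 degrees
img = np.arange(28 * 28).reshape(28, 28)
rot90 = np.rot90(img, 1)    # 90 degrees counter-clockwise
rot180 = np.rot90(img, 2)   # 180 degrees
rot270 = np.rot90(img, 3)   # 270 degrees
print(img.shape, rot90.shape, rot180.shape, rot270.shape)  # all (28, 28)
```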
###Code
# Importing the usual libraries and filter warnings
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.pyplot import xticks
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
# train = pd.read_csv('/kaggle/input/digit-recognizer/train.csv')
# test = pd.read_csv('/kaggle/input/digit-recognizer/test.csv')
print(train.shape,test.shape)
#In the beginning it's important to check the size of your train and test data which later helps in
#deciding the sample size while testing your model on train data
train.head(5)
test.head(5)
# Let's see if we have a null value in the whole dataset
# Usually we would check isnull().sum(), but here the dataset has 784 columns and a groupby won't fit the buffer
print(np.unique([train.isnull().sum()]))
print(np.unique([test.isnull().sum()]))
y = train['label']
df_train = train.drop(columns=["label"],axis=1)
print(y.shape,df_train.shape)
###Output
(42000,) (42000, 784)
###Markdown
VisualizationIt's quite evident that this is a multiclass classification problem, and the target classes (digits 0-9) are almost uniformly distributed in the dataset
###Code
sns.countplot(y)
#Lets see the first 50 images of the dataset
df_train_img = df_train.values.reshape(-1,28,28,1)
plt.figure(figsize=(15,8))
for i in range(50):
plt.subplot(5,10,i+1)
plt.imshow(df_train_img[i].reshape((28,28)),cmap='gray')
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Function for image augmentation 1. The function below takes the training set pixels and labels as input 2. Defines 3 global dataframe variables which will store the images rotated at various angles 3. The for loop rotates each image using the numpy function rot90 and appends the results to 3 different lists 4. Finally, 3 dataframes are created, shaped just like the training set but holding the rotated images 5. Visualize the various datasets - [Images rotated by 90°](DF1) - [Images rotated by 180°](DF2) - [Images rotated by 270°](DF3)
###Code
def augment(df_aug,y):
col_list = df_aug.columns.tolist()
col_list = ['label']+col_list
list1=[]
list2=[]
list3=[]
global df1
global df2
global df3
df_train_img = df_aug.values.reshape(-1,28,28,1)
for i in range(len(df_aug)):
list1.append([y[i]]+np.rot90(df_train_img[i],1).flatten().tolist())
list2.append([y[i]]+np.rot90(df_train_img[i],2).flatten().tolist())
list3.append([y[i]]+np.rot90(df_train_img[i],3).flatten().tolist())
df1= pd.DataFrame(list1,columns=col_list)
df2 = pd.DataFrame(list2,columns=col_list)
df3 = pd.DataFrame(list3,columns=col_list)
#Function is called
augment(df_train,y)
#3 new dataframes are created with the same size as the training set, as expected
print(df1.shape,df2.shape,df3.shape)
###Output
(42000, 785) (42000, 785) (42000, 785)
###Markdown
###Code
df_train1 = df1.drop(columns=["label"],axis=1)
df_train_img1 = df_train1.values.reshape(-1,28,28,1)
#Lets see the first 50 images of the dataset
plt.figure(figsize=(15,8))
for i in range(50):
plt.subplot(5,10,i+1)
plt.imshow(df_train_img1[i].reshape((28,28)),cmap='gray')
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
###Code
df_train2 = df2.drop(columns=["label"],axis=1)
df_train_img2 = df_train2.values.reshape(-1,28,28,1)
#Lets see the first 50 images of the dataset
plt.figure(figsize=(15,8))
for i in range(50):
plt.subplot(5,10,i+1)
plt.imshow(df_train_img2[i].reshape((28,28)),cmap='gray')
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
###Code
df_train3 = df3.drop(columns=["label"],axis=1)
df_train_img3 = df_train3.values.reshape(-1,28,28,1)
#Lets see the first 50 images of the dataset
plt.figure(figsize=(15,8))
for i in range(50):
plt.subplot(5,10,i+1)
plt.imshow(df_train_img3[i].reshape((28,28)),cmap='gray')
plt.axis("off")
plt.show()
#Lets merge all the dataframes
#frames = [train,df1,df2,df3]
frames = [train,df1,df3]
final_df = pd.concat(frames)
final_df.shape
y = final_df['label']
df_train = final_df.drop(columns=["label"],axis=1)
print(y.shape,df_train.shape)
# Normalize the dataset
df_train = df_train / 255
test = test / 255
#Looks like the values are equally distributed in the dataset
y.value_counts()
sns.countplot(y)
# Loading the usual libraries
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
seed = 2
test_size = 0.3
X_train, X_test, y_train, y_test = train_test_split(df_train,y, test_size = test_size , random_state = seed)
print(X_train.shape,X_test.shape,y_train.shape,y_test.shape)
#KNN
knn = KNeighborsClassifier(n_neighbors=10)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
accuracy = accuracy_score(y_test,y_pred)
print('Accuracy: %f' % accuracy)
y_pred_test = knn.predict(test)
submission = pd.DataFrame({"ImageId": list(range(1, len(y_pred_test)+1)),"Label": y_pred_test})
submission.to_csv("submission_digit1.csv", index=False)
###Output
_____no_output_____ |
examples/tutorials/PytorchLightning_Integration.ipynb | ###Markdown
Pytorch-Lightning Integration for DeepChem ModelsIn this tutorial we will go through how to setup a deepchem model inside the [pytorch-lightning](https://www.pytorchlightning.ai/) framework. Lightning is a pytorch framework which simplifies the process of experimenting with pytorch models easier. A few key functionalities offered by pytorch lightning which deepchem users can find useful are:1. Multi-gpu training functionalities: pytorch-lightning provides easy multi-gpu, multi-node training. It also simplifies the process of launching multi-gpu, multi-node jobs across different cluster infrastructure, e.g. AWS, slurm based clusters.1. Reducing boilerplate pytorch code: lightning takes care of details like, `optimizer.zero_grad(), model.train(), model.eval()`. Lightning also provides experiment logging functionality, for e.g. irrespective of training on CPU, GPU, multi-nodes the user can use the method `self.log` inside the trainer and it will appropriately log the metrics.1. Features that can speed up training: half-precision training, gradient checkpointing, code profiling.[](https://colab.research.google.com/drive/1VVLqq0vMlPkSEXeqcFnHY_zEuvDOQu50?usp=sharing) Setup- This notebook assumes that you have already installed deepchem, if you have not follow the instructions at the deepchem installation page: https://deepchem.readthedocs.io/en/latest/get_started/installation.html.- Install pytorch lightning following the instructions on lightning's home page: https://www.pytorchlightning.ai/
###Code
!pip install --pre deepchem
!pip install pytorch_lightning
###Output
Requirement already satisfied: deepchem in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (2.6.1.dev20220119163852)
Requirement already satisfied: numpy>=1.21 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from deepchem) (1.22.0)
Requirement already satisfied: scikit-learn in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from deepchem) (1.0.2)
Requirement already satisfied: pandas in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from deepchem) (1.4.0)
Collecting rdkit-pypi
Downloading rdkit_pypi-2021.9.5.1-cp38-cp38-macosx_11_0_arm64.whl (15.9 MB)
     |████████████████████████████████| 15.9 MB 6.8 MB/s eta 0:00:01
Requirement already satisfied: joblib in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from deepchem) (1.1.0)
Requirement already satisfied: scipy in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from deepchem) (1.7.3)
Requirement already satisfied: threadpoolctl>=2.0.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from scikit-learn->deepchem) (3.0.0)
Requirement already satisfied: python-dateutil>=2.8.1 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pandas->deepchem) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pandas->deepchem) (2021.3)
Requirement already satisfied: Pillow in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from rdkit-pypi->deepchem) (8.4.0)
Requirement already satisfied: six>=1.5 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from python-dateutil>=2.8.1->pandas->deepchem) (1.16.0)
Installing collected packages: rdkit-pypi
Successfully installed rdkit-pypi-2021.9.5.1
Requirement already satisfied: pytorch_lightning in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (1.5.8)
Requirement already satisfied: typing-extensions in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pytorch_lightning) (4.0.1)
Requirement already satisfied: numpy>=1.17.2 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pytorch_lightning) (1.22.0)
Requirement already satisfied: torch>=1.7.* in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pytorch_lightning) (1.10.2)
Requirement already satisfied: tensorboard>=2.2.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pytorch_lightning) (2.7.0)
Requirement already satisfied: tqdm>=4.41.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pytorch_lightning) (4.62.3)
Requirement already satisfied: fsspec[http]!=2021.06.0,>=2021.05.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pytorch_lightning) (2022.1.0)
Requirement already satisfied: packaging>=17.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pytorch_lightning) (21.3)
Requirement already satisfied: PyYAML>=5.1 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pytorch_lightning) (6.0)
Requirement already satisfied: pyDeprecate==0.3.1 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pytorch_lightning) (0.3.1)
Processing /Users/princychahal/Library/Caches/pip/wheels/8e/70/28/3d6ccd6e315f65f245da085482a2e1c7d14b90b30f239e2cf4/future-0.18.2-py3-none-any.whl
Requirement already satisfied: torchmetrics>=0.4.1 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pytorch_lightning) (0.7.0)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (0.6.0)
Requirement already satisfied: absl-py>=0.4 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (1.0.0)
Requirement already satisfied: grpcio>=1.24.3 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (1.43.0)
Requirement already satisfied: requests<3,>=2.21.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (2.27.1)
Requirement already satisfied: google-auth<3,>=1.6.3 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (2.3.3)
Requirement already satisfied: wheel>=0.26 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (0.37.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (0.4.6)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (1.8.1)
Requirement already satisfied: setuptools>=41.0.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (60.5.0)
Requirement already satisfied: werkzeug>=0.11.15 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (2.0.2)
Requirement already satisfied: markdown>=2.6.8 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (3.3.6)
Requirement already satisfied: protobuf>=3.6.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from tensorboard>=2.2.0->pytorch_lightning) (3.18.1)
Requirement already satisfied: aiohttp; extra == "http" in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from fsspec[http]!=2021.06.0,>=2021.05.0->pytorch_lightning) (3.8.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from packaging>=17.0->pytorch_lightning) (3.0.7)
Requirement already satisfied: six in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from absl-py>=0.4->tensorboard>=2.2.0->pytorch_lightning) (1.16.0)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard>=2.2.0->pytorch_lightning) (1.26.8)
Requirement already satisfied: certifi>=2017.4.17 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard>=2.2.0->pytorch_lightning) (2021.10.8)
Requirement already satisfied: charset-normalizer~=2.0.0; python_version >= "3" in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard>=2.2.0->pytorch_lightning) (2.0.10)
Requirement already satisfied: idna<4,>=2.5; python_version >= "3" in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard>=2.2.0->pytorch_lightning) (3.3)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from google-auth<3,>=1.6.3->tensorboard>=2.2.0->pytorch_lightning) (4.2.4)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from google-auth<3,>=1.6.3->tensorboard>=2.2.0->pytorch_lightning) (0.2.7)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from google-auth<3,>=1.6.3->tensorboard>=2.2.0->pytorch_lightning) (4.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0->pytorch_lightning) (1.3.0)
Requirement already satisfied: importlib-metadata>=4.4; python_version < "3.10" in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from markdown>=2.6.8->tensorboard>=2.2.0->pytorch_lightning) (4.10.1)
Requirement already satisfied: aiosignal>=1.1.2 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from aiohttp; extra == "http"->fsspec[http]!=2021.06.0,>=2021.05.0->pytorch_lightning) (1.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from aiohttp; extra == "http"->fsspec[http]!=2021.06.0,>=2021.05.0->pytorch_lightning) (1.2.0)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from aiohttp; extra == "http"->fsspec[http]!=2021.06.0,>=2021.05.0->pytorch_lightning) (4.0.2)
Requirement already satisfied: multidict<7.0,>=4.5 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from aiohttp; extra == "http"->fsspec[http]!=2021.06.0,>=2021.05.0->pytorch_lightning) (6.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from aiohttp; extra == "http"->fsspec[http]!=2021.06.0,>=2021.05.0->pytorch_lightning) (1.7.2)
Requirement already satisfied: attrs>=17.3.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from aiohttp; extra == "http"->fsspec[http]!=2021.06.0,>=2021.05.0->pytorch_lightning) (21.4.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard>=2.2.0->pytorch_lightning) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0->pytorch_lightning) (3.1.1)
Requirement already satisfied: zipp>=0.5 in /Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages (from importlib-metadata>=4.4; python_version < "3.10"->markdown>=2.6.8->tensorboard>=2.2.0->pytorch_lightning) (3.7.0)
###Markdown
Import the relevant packages.
###Code
import deepchem as dc
from deepchem.models import GCNModel
import pytorch_lightning as pl
import torch
from torch.nn import functional as F
from torch import nn
from pytorch_lightning.core.lightning import LightningModule
from torch.optim import Adam
import numpy as np
###Output
_____no_output_____
###Markdown
Deepchem ExampleBelow we show an example of a Graph Convolution Network (GCN). Note that this is a simple example which uses a GCNModel to predict the label from an input sequence. We do not showcase the complete functionality of deepchem in this example as we want to restructure the deepchem code and adapt it so that it can be easily plugged into pytorch-lightning. This example was inspired by the `GCNModel` documentation present [here](https://github.com/deepchem/deepchem/blob/a68f8c072b80a1bce5671250aef60f9cc8519bec/deepchem/models/torch_models/gcn.pyL200). **Prepare the dataset**: for training our deepchem models we need a dataset that we can use to train the model. Below we prepare a sample dataset for the purposes of this tutorial, and use the featurizer directly to encode examples for the dataset.
###Code
smiles = ["C1CCC1", "CCC"]
labels = [0., 1.]
featurizer = dc.feat.MolGraphConvFeaturizer()
X = featurizer.featurize(smiles)
dataset = dc.data.NumpyDataset(X=X, y=labels)
###Output
_____no_output_____
###Markdown
**Setup the model**: now we initialize the Graph Convolutional Network model that we will use in our training.
###Code
model = GCNModel(
mode='classification',
n_tasks=1,
batch_size=2,
learning_rate=0.001
)
###Output
[16:00:37] /Users/princychahal/Documents/github/dgl/src/runtime/tensordispatch.cc:43: TensorDispatcher: dlopen failed: Using backend: pytorch
dlopen(/Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages/dgl-0.8-py3.8-macosx-11.0-arm64.egg/dgl/tensoradapter/pytorch/libtensoradapter_pytorch_1.10.2.dylib, 1): image not found
###Markdown
**Train the model**: fit the model on our training dataset, also specify the number of epochs to run.
###Code
loss = model.fit(dataset, nb_epoch=5)
print(loss)
###Output
0.18830760717391967
###Markdown
Pytorch-Lightning + Deepchem exampleNow we will look at an example of the GCN model adapted for Pytorch-Lightning. For using Pytorch-Lightning there are two important components:1. `LightningDataModule`: This module defines how the data is prepared and fed into the model so that the model can use it for training. The module defines the train dataloader function which is directly used by the trainer to generate data for the `LightningModule`. To learn more about the `LightningDataModule` refer to the [datamodules documentation](https://pytorch-lightning.readthedocs.io/en/stable/extensions/datamodules.html).2. `LightningModule`: This module defines the training and validation steps for our model. We can use this module to initialize our model based on the hyperparameters. There are a number of boilerplate functions which we use directly to track our experiments; for example, we can save all the hyperparameters that we used for training using the `self.save_hyperparameters()` method. For more details on how to use this module refer to the [lightningmodules documentation](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html). **Setup the torch dataset**: Note that here we need to create a custom `SmilesDataset` so that we can easily interface with the deepchem featurizers. For this interface we need to define a collate method so that we can create batches for the dataset.
###Code
# prepare LightningDataModule
class SmilesDataset(torch.utils.data.Dataset):
def __init__(self, smiles, labels):
assert len(smiles) == len(labels)
featurizer = dc.feat.MolGraphConvFeaturizer()
X = featurizer.featurize(smiles)
self._samples = dc.data.NumpyDataset(X=X, y=labels)
def __len__(self):
return len(self._samples)
def __getitem__(self, index):
return (
self._samples.X[index],
self._samples.y[index],
self._samples.w[index],
)
class SmilesDatasetBatch:
def __init__(self, batch):
X = [np.array([b[0] for b in batch])]
y = [np.array([b[1] for b in batch])]
w = [np.array([b[2] for b in batch])]
self.batch_list = [X, y, w]
def collate_smiles_dataset_wrapper(batch):
return SmilesDatasetBatch(batch)
###Output
_____no_output_____
###Markdown
**Create the dataset module**: in this part we use the `SmilesDataset` defined above to create the `SmilesDatasetModule`, the `LightningDataModule` that feeds batches to the trainer
###Code
class SmilesDatasetModule(pl.LightningDataModule):
def __init__(self, train_smiles, train_labels, batch_size):
super().__init__()
self._train_smiles = train_smiles
self._train_labels = train_labels
self._batch_size = batch_size
def setup(self, stage):
self.train_dataset = SmilesDataset(
self._train_smiles,
self._train_labels,
)
def train_dataloader(self):
return torch.utils.data.DataLoader(
self.train_dataset,
batch_size=self._batch_size,
collate_fn=collate_smiles_dataset_wrapper,
shuffle=True,
)
###Output
_____no_output_____
###Markdown
**Create the lightning module**: in this part we create the GCN specific lightning module. This class specifies the logic flow for the training step. We also create the required models, optimizers and losses for the training flow.
###Code
# prepare the LightningModule
class GCNModule(pl.LightningModule):
def __init__(self, mode, n_tasks, learning_rate):
super().__init__()
self.save_hyperparameters(
"mode",
"n_tasks",
"learning_rate",
)
self.gcn_model = GCNModel(
mode=self.hparams.mode,
n_tasks=self.hparams.n_tasks,
learning_rate=self.hparams.learning_rate,
)
self.pt_model = self.gcn_model.model
self.loss = self.gcn_model._loss_fn
def configure_optimizers(self):
return self.gcn_model.optimizer._create_pytorch_optimizer(
self.pt_model.parameters(),
)
def training_step(self, batch, batch_idx):
batch = batch.batch_list
inputs, labels, weights = self.gcn_model._prepare_batch(batch)
outputs = self.pt_model(inputs)
if isinstance(outputs, torch.Tensor):
outputs = [outputs]
if self.gcn_model._loss_outputs is not None:
outputs = [outputs[i] for i in self.gcn_model._loss_outputs]
loss_outputs = self.loss(outputs, labels, weights)
self.log(
"train_loss",
loss_outputs,
on_epoch=True,
sync_dist=True,
reduce_fx="mean",
prog_bar=True,
)
return loss_outputs
###Output
_____no_output_____
###Markdown
**Create the relevant objects**
###Code
# create module objects
smiles_datasetmodule = SmilesDatasetModule(
train_smiles=["C1CCC1", "CCC", "C1CCC1", "CCC", "C1CCC1", "CCC", "C1CCC1", "CCC", "C1CCC1", "CCC"],
train_labels=[0., 1., 0., 1., 0., 1., 0., 1., 0., 1.],
batch_size=2,
)
gcnmodule = GCNModule(
mode="classification",
n_tasks=1,
learning_rate=1e-3,
)
###Output
_____no_output_____
###Markdown
Lightning TrainerThe Trainer is the wrapper which builds on top of the `LightningDataModule` and `LightningModule`. When constructing the lightning trainer you can also specify the number of epochs, the maximum number of steps to run, the number of GPUs, and the number of nodes to be used for training. The lightning trainer acts as a wrapper over your distributed training setup, so you can build your models the same way you would for simple local runs.
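For example, a multi-GPU / multi-node run could be configured roughly as follows. This is only a sketch — it is not executed in this tutorial, and the exact argument names such as `gpus`, `num_nodes` and `strategy` depend on the pytorch-lightning version installed:

```python
# Hypothetical distributed configuration (illustrative, not run here)
trainer = pl.Trainer(
    max_epochs=5,
    gpus=2,          # GPUs per node
    num_nodes=4,     # number of machines participating in training
    strategy="ddp",  # distributed data parallel
)
```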
###Code
trainer = pl.Trainer(
max_epochs=5,
)
###Output
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
###Markdown
**Call the fit function to run model training**
###Code
# train
trainer.fit(
model=gcnmodule,
datamodule=smiles_datasetmodule,
)
###Output
| Name | Type | Params
----------------------------------
0 | pt_model | GCN | 29.4 K
----------------------------------
29.4 K Trainable params
0 Non-trainable params
29.4 K Total params
0.118 Total estimated model params size (MB)
/Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:132: UserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
/Users/princychahal/mambaforge/envs/keras_try_5/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:428: UserWarning: The number of training samples (5) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
rank_zero_warn(
|
03_survey-analysis/02b_documentation.ipynb | ###Markdown
Documentation Table of Contents:* [Are you documenting your software?](Are-you-documenting-your-software?)* [What kind of documentation do you write?](What-kind-of-documentation-do-you-write?) Setting up
###Code
# Import notebook containing sampled dataset
%run "./00_data-cleaning.ipynb"
# Filtering the df
df = df[(df['Do you write code as part of your job?'] =='Yes')]
###Output
_____no_output_____
###Markdown
Are you documenting your software?
###Code
#Are you documenting your software?
count = df['Are you documenting your software?'].value_counts()
results = pd.DataFrame(count.values, count.index)
display(results)
# Pie Chart Documenting
colors = ['#a4b085','#6898d1']
df.groupby('Are you documenting your software?')['Are you documenting your software?'].count().plot.pie( autopct='%1.1f%%',figsize=(6,6), fontsize = 20, colors=colors)
count = df['Are you documenting your software?'].value_counts()
plt.title('Are you documenting your software?', bbox={'facecolor':'0.8', 'pad':12})
plt.show()
#Are you documenting your software?
plt.figure(figsize=(15,10))
label_text=['never','sometimes','regularly']
count = df['Guide for new contributors'].value_counts()
results = pd.DataFrame(count.values, count.index)
display(results)
sns.set(style="darkgrid")
sns.barplot(count.index, count.values)
plt.xticks(rotation= 90)
plt.title('Are you documenting your software?', bbox={'facecolor':'0.8', 'pad':12})
ax = plt.gca()
totals = []
for i in ax.patches:
totals.append(i.get_height())
total = sum(totals)
for i in ax.patches:
ax.text(i.get_x()+ 0.15, i.get_height()+.9, \
str(round((i.get_height()/total)*100, 2))+'%', fontsize=13,
color='dimgrey')
plt.show()
###Output
_____no_output_____
###Markdown
What kind of documentation do you write?
###Code
# What kind of documentation do you write?
label_text=['never','sometimes','regularly']
fig, ax =plt.subplots(3,3, sharey=True, figsize=(15,18))
fig.suptitle('What kind of documentation do you write?', bbox={'facecolor':'0.8', 'pad':12})
count1 = df['In an average month, how much time do you spend on software development?'].value_counts()
ax1 = sns.countplot(df['Developer guide'], ax=ax[0,0])
ax1.set_xticklabels(label_text)
ax1.set_title("Documentation developer guide")
ax2 = sns.countplot(df['Code comments'], ax=ax[0,2])
ax2.set_xticklabels(label_text)
ax2.set_title("Documentation code comments")
ax3 = sns.countplot(df['Guide for new contributors'], ax=ax[0,1])
ax3.set_xticklabels(label_text)
ax3.set_title("'Guide for new contributors'")
ax4 = sns.countplot(df['Release notes'], ax=ax[1,0])
ax4.set_xticklabels(label_text)
ax4.set_title("Documentation release notes")
ax5 = sns.countplot(df['Installation notes'], ax=ax[1,1])
ax5.set_xticklabels(label_text)
ax5.set_title("Documentation installation notes")
ax6 = sns.countplot(df['README'], ax=ax[1,2])
ax6.set_xticklabels(label_text)
ax6.set_title("Documentation readme")
ax7 = sns.countplot(df['User manual'], ax=ax[2,0])
ax7.set_xticklabels(label_text)
ax7.set_title("Documentation user manual")
plt.show()
###Output
_____no_output_____ |
Cumulative COVID-19 Global Analysis and Predictions .ipynb | ###Markdown
Importing all the libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
keras = tf.keras
from sklearn.preprocessing import MinMaxScaler
import plotly.graph_objects as go
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
def create_dataset(X, y, time_steps=1):
    # Build sliding windows of length `time_steps` over X and pair each window
    # with the target value that immediately follows it.
    Xs, ys = [], []
    for i in range(len(X) - time_steps):
        v = X.iloc[i:(i + time_steps)].values
        Xs.append(v)
        ys.append(y.iloc[i + time_steps])
    return np.array(Xs), np.array(ys)
class ResetStatesCallback(keras.callbacks.Callback):
    # Reset the model's internal states at the beginning of every epoch.
    def on_epoch_begin(self, epoch, logs):
        self.model.reset_states()
###Output
_____no_output_____
###Markdown
Importing CoronaVirus Cases Data
###Code
confirmed_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv')
death_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv')
recovered_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv')
###Output
_____no_output_____
###Markdown
Data Pre Processing Confirmed Data
###Code
confirmed_df.head()
death_df.head()
recovered_df.head()
confirmed_df = confirmed_df.iloc[:, 4:]
confirmed_df = confirmed_df.sum(axis=0)
confirmed_df.index = pd.to_datetime(confirmed_df.index).strftime('%Y-%m-%d')
confirmed_df = pd.DataFrame(confirmed_df)
###Output
_____no_output_____
###Markdown
Death Data
###Code
death_df = death_df.iloc[:, 4:]
death_df = death_df.sum(axis=0)
death_df.index = pd.to_datetime(death_df.index).strftime('%Y-%m-%d')
death_df = pd.DataFrame(death_df)
###Output
_____no_output_____
###Markdown
Recovered Data
###Code
recovered_df = recovered_df.iloc[:, 4:]
recovered_df = recovered_df.sum(axis=0)
recovered_df.index = pd.to_datetime(recovered_df.index).strftime('%Y-%m-%d')
recovered_df = pd.DataFrame(recovered_df)
###Output
_____no_output_____
###Markdown
Overall Corona Cases Scaling Data Confirmed Cases Data
###Code
scaler_confirm = MinMaxScaler()
scaler_confirm = scaler_confirm.fit(confirmed_df)
train_confirm = scaler_confirm.fit_transform(confirmed_df)
train_confirm = pd.DataFrame(train_confirm)
###Output
_____no_output_____
###Markdown
Death Cases Data
###Code
scaler_death = MinMaxScaler()
scaler_death = scaler_death.fit(death_df)
train_death = scaler_death.transform(death_df)
train_death = pd.DataFrame(train_death)
###Output
_____no_output_____
###Markdown
Recovered Data
###Code
scaler_recovered = MinMaxScaler()
scaler_recovered = scaler_recovered.fit(recovered_df)
train_recovered = scaler_recovered.fit_transform(recovered_df)
train_recovered = pd.DataFrame(train_recovered)
###Output
_____no_output_____
###Markdown
Preparing Data For Time Series Analysis Using LSTM
###Code
time_steps = 1
X_train_confirm, y_train_confirm = create_dataset(train_confirm, train_confirm, time_steps)
X_train_death, y_train_death = create_dataset(train_death, train_death, time_steps)
X_train_recovered, y_train_recovered = create_dataset(train_recovered, train_recovered, time_steps)
###Output
_____no_output_____
###Markdown
Model Building Model for Confirmed Cases
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
confirm_model = keras.Sequential()
confirm_model.add(keras.layers.LSTM(units=128,input_shape=(X_train_confirm.shape[1], X_train_confirm.shape[2]),return_sequences = True))
confirm_model.add(keras.layers.LSTM(units=128,return_sequences = True))
confirm_model.add(keras.layers.LSTM(units=128,return_sequences = True))
confirm_model.add(keras.layers.LSTM(units=128))
confirm_model.add(keras.layers.Dropout(rate=0.2))
confirm_model.add(keras.layers.Dense(units=1))
confirm_model.compile(loss='mean_squared_error', optimizer='adam')
reset_states = ResetStatesCallback()
confirm_model.fit(X_train_confirm, y_train_confirm,epochs=500,batch_size=16,shuffle=False,callbacks = [reset_states],verbose = 0)
###Output
_____no_output_____
###Markdown
Model for Death Cases
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
death_model = keras.Sequential()
death_model.add(keras.layers.LSTM(units=128,input_shape=(X_train_death.shape[1], X_train_death.shape[2]),return_sequences = True))
death_model.add(keras.layers.LSTM(units=128,return_sequences = True))
death_model.add(keras.layers.LSTM(units=128,return_sequences = True))
death_model.add(keras.layers.LSTM(units=128))
death_model.add(keras.layers.Dropout(rate=0.2))
death_model.add(keras.layers.Dense(units=1))
death_model.compile(loss='mean_squared_error', optimizer='adam')
reset_states = ResetStatesCallback()
death_model.fit(X_train_death, y_train_death,epochs=500,batch_size=16,shuffle=False,callbacks = [reset_states],verbose=0)
###Output
_____no_output_____
###Markdown
Model for Recovered Cases
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
recovered_model = keras.Sequential()
recovered_model.add(keras.layers.LSTM(units=128,input_shape=(X_train_recovered.shape[1], X_train_recovered.shape[2]),return_sequences = True))
recovered_model.add(keras.layers.LSTM(units=128,return_sequences = True))
recovered_model.add(keras.layers.LSTM(units=128,return_sequences = True))
recovered_model.add(keras.layers.LSTM(units=128))
recovered_model.add(keras.layers.Dropout(rate=0.2))
recovered_model.add(keras.layers.Dense(units=1))
recovered_model.compile(loss='mean_squared_error', optimizer='adam')
reset_states = ResetStatesCallback()
recovered_model.fit(X_train_recovered, y_train_recovered,epochs=500,batch_size=16,shuffle=False,callbacks = [reset_states],verbose=0)
###Output
_____no_output_____
###Markdown
Predicting The Results
###Code
DAYS_TO_PREDICT = int(input('Enter Number Of Days You Want to Predict : '))
###Output
_____no_output_____
###Markdown
Predicting Results For Confirmed Cases
###Code
confirm_test_seq = y_train_confirm[-1:]
confirm_preds = []
for _ in range(DAYS_TO_PREDICT):
y2_confirm = confirm_test_seq.reshape((len(confirm_test_seq ), time_steps, 1))
confirm_pred = confirm_model.predict(y2_confirm)
confirm_preds.append(confirm_pred)
confirm_new_seq = confirm_test_seq.flatten()
confirm_new_seq = np.append(confirm_new_seq,[confirm_pred])
confirm_new_seq = confirm_new_seq[-1:]
confirm_test_seq = confirm_new_seq.reshape((len(confirm_new_seq), time_steps, 1))
y2_confirm = confirm_test_seq
confirm_preds = (np.array(confirm_preds).flatten()).reshape(-1,1)
confirm_preds = scaler_confirm.inverse_transform(confirm_preds)
predicted_confirmed_index = pd.date_range(start=confirmed_df.index[-1],periods=DAYS_TO_PREDICT + 1,closed='right').strftime('%Y-%m-%d')
predicted_confirmed_cases = pd.DataFrame()
predicted_confirmed_cases['Dates'] = predicted_confirmed_index
predicted_confirmed_cases['Confirmed Cases Predictions'] = confirm_preds
predicted_confirmed_cases.set_index('Dates', inplace = True)
fig = go.Figure()
fig.update_layout(template='plotly_dark')
fig.update_xaxes(tickangle=90, showticklabels = True, type = 'category')
fig.add_trace(go.Scatter(x=predicted_confirmed_cases.index,
y=predicted_confirmed_cases['Confirmed Cases Predictions'],
mode='lines+markers',
name='confirm',
line=dict(color='red', width=2)))
###Output
_____no_output_____
###Markdown
Predicting Results For Death Cases
###Code
death_test_seq = y_train_death[-1:]
death_preds = []
for i in range(DAYS_TO_PREDICT):
y2_death = death_test_seq.reshape((len(death_test_seq), time_steps, 1))
death_pred = death_model.predict(y2_death)
death_preds.append(death_pred)
death_new_seq = death_test_seq.flatten()
death_new_seq = np.append(death_new_seq,[death_pred])
death_new_seq = death_new_seq[-1:]
death_test_seq = death_new_seq.reshape((len(death_new_seq), time_steps, 1))
y2_death = death_test_seq
death_preds = (np.array(death_preds).flatten()).reshape(-1,1)
death_preds = scaler_death.inverse_transform(death_preds)
predict_death_index = pd.date_range(start=death_df.index[-1],periods=DAYS_TO_PREDICT + 1,closed='right').strftime('%Y-%m-%d')
predict_death_cases = pd.DataFrame()
predict_death_cases['Dates'] = predict_death_index
predict_death_cases['Death Cases Predictions'] = death_preds
predict_death_cases.set_index('Dates', inplace = True)
fig = go.Figure()
fig.update_layout(template='plotly_dark')
fig.update_xaxes(tickangle=90, showticklabels = True, type = 'category')
fig.add_trace(go.Scatter(x=predict_death_cases.index,
y=predict_death_cases['Death Cases Predictions'],
mode='lines+markers',
name='death',
line=dict(color='blue', width=2)))
###Output
_____no_output_____
###Markdown
Predicting Results For Recovered Cases
###Code
recovered_test_seq = y_train_recovered[-1:]
recovered_preds = []
for i in range(DAYS_TO_PREDICT):
y2_recovered = recovered_test_seq.reshape((len(recovered_test_seq), time_steps, 1))
recovered_pred = recovered_model.predict(y2_recovered)
recovered_preds.append(recovered_pred)
recovered_new_seq = recovered_test_seq.flatten()
recovered_new_seq = np.append(recovered_new_seq,[recovered_pred])
recovered_new_seq = recovered_new_seq[-1:]
recovered_test_seq = recovered_new_seq.reshape((len(recovered_new_seq), time_steps, 1))
y2_recovered = recovered_test_seq
recovered_preds = (np.array(recovered_preds).flatten()).reshape(-1,1)
recovered_preds = scaler_recovered.inverse_transform(recovered_preds)
predict_recovered_index = pd.date_range(start=recovered_df.index[-1],periods=DAYS_TO_PREDICT + 1,closed='right').strftime('%Y-%m-%d')
predict_recovered_cases = pd.DataFrame()
predict_recovered_cases['Dates'] = predict_recovered_index
predict_recovered_cases['Recovered Cases Predictions'] = recovered_preds
predict_recovered_cases.set_index('Dates', inplace = True)
fig = go.Figure()
fig.update_layout(template='plotly_dark')
fig.update_xaxes(tickangle=90, showticklabels = True, type = 'category')
fig.add_trace(go.Scatter(x=predict_recovered_cases.index,
y=predict_recovered_cases['Recovered Cases Predictions'],
mode='lines+markers',
name='recovered',
line=dict(color='green', width=2)))
###Output
_____no_output_____
###Markdown
Visualizations Historial Cases
###Code
fig = go.Figure()
fig.update_layout(template='plotly_dark')
fig.update_xaxes(tickangle=90, showticklabels = True, type = 'category')
fig.add_trace(go.Scatter(x=confirmed_df.index,
y=confirmed_df[0],
mode='lines+markers',
name='confirm',
line=dict(color='red', width=2)))
fig.add_trace(go.Scatter(x=death_df.index,
y=death_df[0],
mode='lines+markers',
name='death',
line=dict(color='green', width=2)))
fig.add_trace(go.Scatter(x=recovered_df.index,
y=recovered_df[0],
mode='lines+markers',
name='recovered',
line=dict(color='blue', width=2)))
###Output
_____no_output_____
###Markdown
Predictions for every individual scenario
###Code
fig = go.Figure()
fig.update_layout(template='plotly_dark')
fig.update_xaxes(tickangle=90, showticklabels = True, type = 'category')
fig.add_trace(go.Scatter(x=predicted_confirmed_cases.index,
y=predicted_confirmed_cases['Confirmed Cases Predictions'],
mode='lines+markers',
name='Confirmed Cases Predictions',
line=dict(color='yellow', width=2)))
fig.add_trace(go.Scatter(x=predict_death_cases.index,
y=predict_death_cases['Death Cases Predictions'],
mode='lines+markers',
name='Death Cases Predictions',
line=dict(color='turquoise', width=2)))
fig.add_trace(go.Scatter(x=predict_recovered_cases.index,
y=predict_recovered_cases['Recovered Cases Predictions'],
mode='lines+markers',
name='Recovered Cases Predictions',
line=dict(color='white', width=2)))
###Output
_____no_output_____
###Markdown
Historical and Predicted Values
###Code
fig = go.Figure()
fig.update_layout(template='plotly_dark')
fig.update_xaxes(tickangle=90, showticklabels = True, type = 'category')
fig.add_trace(go.Scatter(x=confirmed_df.index,
y=confirmed_df[0],
mode='lines+markers',
name='confirm',
line=dict(color='red', width=2)))
fig.add_trace(go.Scatter(x=death_df.index,
y=death_df[0],
mode='lines+markers',
name='death',
line=dict(color='green', width=2)))
fig.add_trace(go.Scatter(x=recovered_df.index,
y=recovered_df[0],
mode='lines+markers',
name='recovered',
line=dict(color='blue', width=2)))
fig.add_trace(go.Scatter(x=predicted_confirmed_cases.index,
y=predicted_confirmed_cases['Confirmed Cases Predictions'],
mode='lines+markers',
name='Confirmed Cases Predictions',
line=dict(color='yellow', width=2)))
fig.add_trace(go.Scatter(x=predict_death_cases.index,
y=predict_death_cases['Death Cases Predictions'],
mode='lines+markers',
name='Death Cases Predictions',
line=dict(color='turquoise', width=2)))
fig.add_trace(go.Scatter(x=predict_recovered_cases.index,
y=predict_recovered_cases['Recovered Cases Predictions'],
mode='lines+markers',
name='Recovered Cases Predictions',
line=dict(color='white', width=2)))
###Output
_____no_output_____ |
dlaicourse-master/TensorFlow Deployment/Course 4 - TensorFlow Serving/Week 2/Exercises/TFServing_Week2_Exercise.ipynb | ###Markdown
Exporting an MNIST Classifier in SavedModel FormatIn this exercise, we will learn how to create models for TensorFlow Hub. You will be tasked with performing the following tasks:* Creating a simple MNIST classifier and evaluating its accuracy.* Exporting it into SavedModel.* Hosting the model as a TF Hub Module.* Importing this TF Hub Module to be used with Keras Layers.
###Code
try:
%tensorflow_version 2.x
except:
pass
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
print("\u2022 Using TensorFlow Version:", tf.__version__)
###Output
_____no_output_____
###Markdown
Create an MNIST ClassifierWe will start by creating a class called `MNIST`. This class will load the MNIST dataset, preprocess the images from the dataset, and build a CNN based classifier. This class will also have some methods to train, test, and save our model. In the cell below, fill in the missing code and create the following Keras `Sequential` model:``` Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param ================================================================= lambda (Lambda) (None, 28, 28, 1) 0 _________________________________________________________________ conv2d (Conv2D) (None, 28, 28, 8) 80 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 14, 14, 8) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 14, 14, 16) 1168 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 7, 7, 16) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 7, 7, 32) 4640 _________________________________________________________________ flatten (Flatten) (None, 1568) 0 _________________________________________________________________ dense (Dense) (None, 128) 200832 _________________________________________________________________ dense_1 (Dense) (None, 10) 1290 =================================================================```Notice that we are using a ` tf.keras.layers.Lambda` layer at the beginning of our model. `Lambda` layers are used to wrap arbitrary expressions as a `Layer` object:```pythontf.keras.layers.Lambda(expression)```The `Lambda` layer exists so that arbitrary TensorFlow functions can be used when constructing `Sequential` and Functional API models. `Lambda` layers are best suited for simple operations.
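For instance (an illustrative snippet, separate from the graded exercise below), a `Lambda` layer that casts `uint8` pixel values to `float32` and rescales them to [0, 1] could be written as:

```python
import tensorflow as tf

# Illustrative Lambda layer: cast incoming uint8 pixels and scale to [0, 1]
normalize = tf.keras.layers.Lambda(lambda x: tf.cast(x, tf.float32) / 255.0)
```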
###Code
class MNIST:
def __init__(self, export_path, buffer_size=1000, batch_size=32,
learning_rate=1e-3, epochs=10):
self._export_path = export_path
self._buffer_size = buffer_size
self._batch_size = batch_size
self._learning_rate = learning_rate
self._epochs = epochs
self._build_model()
self.train_dataset, self.test_dataset = self._prepare_dataset()
# Function to preprocess the images.
def preprocess_fn(self, x):
# EXERCISE: Cast x to tf.float32 using the tf.cast() function.
# You should also normalize the values of x to be in the range [0, 1].
x = # YOUR CODE HERE
return x
def _build_model(self):
# EXERCISE: Build the model according to the model summary shown above.
self._model = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(28, 28, 1), dtype=tf.uint8),
# Use a Lambda layer to use the self.preprocess_fn function
# defined above to preprocess the images.
# YOUR CODE HERE
# Create a Conv2D layer with 8 filters, a kernel size of 3
# and padding='same'.
# YOUR CODE HERE
# Create a MaxPool2D() layer. Use default values.
# YOUR CODE HERE
# Create a Conv2D layer with 16 filters, a kernel size of 3
# and padding='same'.
# YOUR CODE HERE
# Create a MaxPool2D() layer. Use default values.
# YOUR CODE HERE
# Create a Conv2D layer with 32 filters, a kernel size of 3
# and padding='same'.
# YOUR CODE HERE
# Create the Flatten and Dense layers as described in the
# model summary shown above.
# YOUR CODE HERE
])
# EXERCISE: Define the optimizer, loss function and metrics.
# Use the tf.keras.optimizers.Adam optimizer and set the
# learning rate to self._learning_rate.
optimizer_fn = # YOUR CODE HERE
# Use sparse_categorical_crossentropy as your loss function.
loss_fn = # YOUR CODE HERE
# Set the metrics to accuracy.
metrics_list = # YOUR CODE HERE
# Compile the model.
self._model.compile(optimizer_fn, loss=loss_fn, metrics=metrics_list)
def _prepare_dataset(self):
# EXERCISE: Load the MNIST dataset using tfds.load(). You should
# load the images as well as their corresponding labels and
# load both the test and train splits.
dataset = # YOUR CODE HERE
# EXERCISE: Extract the 'train' and 'test' splits from the dataset above.
train_dataset, test_dataset = # YOUR CODE HERE
return train_dataset, test_dataset
def train(self):
# EXERCISE: Shuffle and batch the self.train_dataset. Use self._buffer_size
# as the shuffling buffer and self._batch_size as the batch size for batching.
dataset_tr = # YOUR CODE HERE
# Train the model for specified number of epochs.
self._model.fit(dataset_tr, epochs=self._epochs)
def test(self):
# EXERCISE: Batch the self.test_dataset. Use a batch size of 32.
dataset_te = # YOUR CODE HERE
# Evaluate the dataset
results = self._model.evaluate(dataset_te)
# Print the metric values on which the model is being evaluated on.
for name, value in zip(self._model.metrics_names, results):
print("%s: %.3f" % (name, value))
def export_model(self):
# Save the model.
tf.saved_model.save(self._model, self._export_path)
###Output
_____no_output_____
###Markdown
Train, Evaluate, and Save the ModelWe will now use the `MNIST` class we created above to create an `mnist` object. When creating our `mnist` object we will use a dictionary to pass our training parameters. We will then call the `train` and `export_model` methods to train and save our model, respectively. Finally, we call the `test` method to evaluate our model after training. **NOTE:** It will take about 12 minutes to train the model for 5 epochs.
###Code
# Define the training parameters.
args = {'export_path': './saved_model',
'buffer_size': 1000,
'batch_size': 32,
'learning_rate': 1e-3,
'epochs': 5
}
# Create the mnist object.
mnist = MNIST(**args)
# Train the model.
mnist.train()
# Save the model.
mnist.export_model()
# Evaluate the trained MNIST model.
mnist.test()
###Output
_____no_output_____
###Markdown
Create a TarballThe `export_model` method saved our model in the TensorFlow SavedModel format in the `./saved_model` directory. The SavedModel format saves our model and its weights in various files and directories. This makes it difficult to distribute our model. Therefore, it is convenient to create a single compressed file that contains all the files and folders of our model. To do this, we will use the `tar` archiving program to create a tarball (similar to a Zip file) that contains our SavedModel.
###Code
# Create a tarball from the SavedModel.
!tar -cz -f module.tar.gz -C ./saved_model .
###Output
_____no_output_____
###Markdown
Inspect the TarballWe can uncompress our tarball to make sure it has all the files and folders from our SavedModel.
###Code
# Inspect the tarball.
!tar -tf module.tar.gz
###Output
_____no_output_____
###Markdown
Simulate Server ConditionsOnce we have verified our tarball, we can now simulate server conditions. In a normal scenario, we will fetch our TF Hub module from a remote server using the module's handle. However, since this notebook cannot host the server, we will instead point the module handle to the directory where our SavedModel is stored.
###Code
!rm -rf ./module
!mkdir -p module
!tar xvzf module.tar.gz -C ./module
# Define the module handle.
MODULE_HANDLE = './module'
###Output
_____no_output_____
###Markdown
Load the TF Hub Module
###Code
# EXERCISE: Load the TF Hub module using the hub.load API.
model = # YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Test the TF Hub ModuleWe will now test our TF Hub module with images from the `test` split of the MNIST dataset.
###Code
# EXERCISE: Load the MNIST 'test' split using tfds.load(). You
# should load the images along with their corresponding labels.
dataset = # YOUR CODE HERE
# EXERCISE: Batch the dataset using a batch size of 32.
test_dataset = # YOUR CODE HERE
# Test the TF Hub module for a single batch of data
for batch_data in test_dataset.take(1):
outputs = model(batch_data[0])
outputs = np.argmax(outputs, axis=-1)
print('Predicted Labels:', outputs)
print('True Labels: ', batch_data[1].numpy())
###Output
_____no_output_____
###Markdown
We can see that the model correctly predicts the labels for most images in the batch. Evaluate the Model Using KerasIn the cell below, you will integrate the TensorFlow Hub module into the high level Keras API.
###Code
# EXERCISE: Integrate the TensorFlow Hub module into a Keras
# sequential model. You should use a hub.KerasLayer and you
# should make sure to use the correct values for the output_shape,
# and input_shape parameters. You should also use tf.uint8 for
# the dtype parameter.
model = # YOUR CODE HERE
# Compile the model.
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Evaluate the model on the test_dataset.
results = model.evaluate(test_dataset)
# Print the metric values on which the model is being evaluated on.
for name, value in zip(model.metrics_names, results):
print("%s: %.3f" % (name, value))
###Output
_____no_output_____ |
project/Py_lectures_07_My_project.ipynb | ###Markdown
How to structure a working code: a small exampleThis simple script plots the info about the students of a classroom, the real students of the course itself.You already contributed to this script, so let's assume it is the one you want to present for the final exam Some (required) rules1. Write an external Class to import here2. Keep the notebook very simple to better emphasize your results3. Make extensive use of comments, description, etc (using Markdown)4. Trap the possible errors! (wrong directory, non-existing Surname, etc)5. Give clear instructions to setup the environment, and provide the file(s) to run the script6. Be OS independent!7. A reader is NOT an expert of the field, keep everything simple, do not use jargon!Here it is an example: Statistics of PhD students @ Python in the lab: the winter edition Name(s), Surname(s) PhD Physics, XXXII cycle, Politecnico of Torino (change as needed) Winter edition**Abstract** This script describes the distribution of PhD student of the Winter Edition of the Python in the lab course. It has been made with the contribution of different groups bla bla bla... The calculusAdd a clear description of the calculus you are performing. Explain in detail the scope of your researche and the goal you aim to reach.For instance, the picture below shows a case where labeling of a signal is required to feed a Machine Learning code. The script wants to find an automatic way to define portions of the signal where the sensor detects a signal (scintillation) or not (no-scintillation) 
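As a small illustration of rule 4 above (trapping possible errors such as a wrong directory or a non-existing file), a minimal sketch using the course csv file could be:

```python
# Sketch: trap a missing input file instead of letting the script crash
try:
    with open("01RONKG_2019.csv") as f:
        lines = f.readlines()
except FileNotFoundError:
    print("Input file not found: check the working directory before running")
```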
###Code
# Setup the environment if needed
myDir = "/this_is/myworking_dir"
# add any other setup
###Output
_____no_output_____
###Markdown
First import the file with the Class(es) and make an instance
###Code
import classroomW_solution as cl
csv_file = "01RONKG_2019.csv"
cds_file ='phds.txt'
wClass = cl.WinterClassroom(csv_file, cds_file)
###Output
_____no_output_____
###Markdown
Print the distribution of the students' PhD
###Code
for c in sorted(wClass.cds):
wClass.get_students_cds(c, is_print=True)
###Output
There are 7 students from Aerospace Engineering
There is 1 student from Architectural and Landscape Heritage
There is 1 student from Bioengineering and Medical-Surgical sciences
There is 1 student from Chemical Engineering
There are 15 students from Electrical, Electronics and Communications Engineering
There are 12 students from Energetics
There is 1 student from Physics
There is 1 student from Management, Production and Design
There are 16 students from Civil and Environmental Engineering
There are 5 students from Computer and Control Engineering
There are 8 students from Mechanical Engineering
There are 3 students from Metrology
There are 3 students from Materials Science and Technology
There are 5 students from Urban and Regional Development
###Markdown
Make a pie plot of the distribution
###Code
# Plot the solution of group 1
wClass.plot_pie(group=1)
###Output
There are 3 students from Materials Science and Technology
There are 5 students from Urban and Regional Development
There are 8 students from Mechanical Engineering
There are 15 students from Electrical, Electronics and Communications Engineering
There are 3 students from Metrology
There is 1 student from Physics
There is 1 student from Architectural and Landscape Heritage
There are 5 students from Computer and Control Engineering
There are 12 students from Energetics
There is 1 student from Bioengineering and Medical-Surgical sciences
There is 1 student from Management, Production and Design
There are 7 students from Aerospace Engineering
There are 16 students from Civil and Environmental Engineering
There is 1 student from Chemical Engineering
###Markdown
Note: group 1 did not consider plotting the chart without also printing the list above
###Code
# Plot the solution of group 6
wClass.plot_pie(group=6)
###Output
_____no_output_____
###Markdown
Make a bar plot of the distribution
###Code
# Plot the solution of group 2
wClass.plot_bar(group=2)
# Plot the solution of group 7
wClass.plot_bar(group=7)
###Output
_____no_output_____
###Markdown
Write the email of the students
###Code
wClass.write_emails()
###Output
Student_emails.txt written
|
gs_quant/content/reports_and_screens/00_fx/0004_bear_put_spread_screen.ipynb | ###Markdown
Costless 1x2 Put Spreads This screen looks for max payouts and breakeven levels for 1x2 zero cost put spreads for a given tenor. Top strike is fixed at the specified delta and the bottom strike is solved to make the structure zero-cost.
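As a purely illustrative numeric check of the payout and breakeven formulas used in the screen below (the strikes are made-up numbers, not market data):
###Code
# Buy 1 put at the upper strike, sell 2 puts at the lower strike (1x2 ratio)
upper_strike, lower_strike = 10.0, 9.5
# Max payout, as a fraction of notional, is reached when spot settles at the lower strike
max_payout = upper_strike / lower_strike - 1
# Below this level the extra short put loses more than the put spread makes
lower_breakeven = 2 * lower_strike - upper_strike
print(f'Max payout: {max_payout:.2%}, lower breakeven: {lower_breakeven}')
###Output
Max payout: 5.26%, lower breakeven: 9.0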
###Code
from gs_quant.markets.portfolio import Portfolio
from gs_quant.markets import PricingContext
from gs_quant.instrument import FXOption
from gs_quant.risk import FXSpot
from gs_quant.common import BuySell
import pandas as pd; pd.set_option('display.precision', 2)
def get_screen(df_portfolio, strike_label='40d'):
screen = pd.pivot_table(df_portfolio, values='strike_price', index=['pair', 'Spot'], columns=['buy_sell'])
screen = screen.reset_index('Spot')
upper_label, lower_label = f'Upper Strike ({strike_label})', 'Lower Strike (Mid)*'
screen = screen.rename(columns={BuySell.Buy: upper_label, BuySell.Sell: lower_label})
screen['Max Payout'] = screen[upper_label] / screen[lower_label] - 1
screen['Lower Breakeven (Mid)'] = 2 * screen[lower_label] - screen[upper_label]
screen['Lower Breakeven OTMS'] = abs(screen['Lower Breakeven (Mid)'] / screen['Spot'] - 1)
return screen
def calculate_bear_put_spreads(upper_strike='40d', crosses=['USDNOK'], tenor='3m'):
portfolio = Portfolio()
for cross in crosses:
option_type = 'Put' if cross[:3] == 'USD' else 'Call'
upper_leg = FXOption(pair=f'{cross}', strike_price=upper_strike, notional_amount=10e6,
option_type=option_type, buy_sell=BuySell.Buy, expiration_date=tenor, premium_currency=cross[:3])
lower_leg = FXOption(pair=f'{cross}', strike_price=f'P={abs(upper_leg.premium)}', notional_amount=20e6,
option_type=option_type, buy_sell=BuySell.Sell, expiration_date=tenor)
portfolio.append((upper_leg, lower_leg))
with PricingContext():
portfolio.resolve()
spot = portfolio.calc(FXSpot)
summary = portfolio.to_frame()
summary['Spot'] = list(spot.result())
return get_screen(summary, upper_strike)
g10 = ['USDJPY', 'EURUSD', 'AUDUSD', 'GBPUSD', 'USDCAD', 'USDNOK', 'NZDUSD', 'USDSEK', 'USDCHF']
result = calculate_bear_put_spreads(upper_strike='40d', crosses=g10, tenor='3m')
result.sort_values(by='Max Payout', ascending=False).style.format({
'Max Payout': '{:,.2%}'.format, 'Lower Breakeven OTMS': '{:,.2%}'.format}).background_gradient(
subset=['Max Payout', 'Lower Breakeven OTMS'])
###Output
_____no_output_____ |
notebooks/map_1415.ipynb | ###Markdown
Mapping ATL14 and ATL15. Creates a mosaic of ATL14 tiles, applying weights to each tile.
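The pad and feather parameters below control how much weight each tile contributes near its edges. As a simplified 1-D sketch of the idea (an illustration only, not the pointCollection implementation), a weight that is zero inside the pad and ramps linearly to one across the feathering width could look like this:
###Code
import numpy as np
# distance of each grid node from the tile edge (metres), purely illustrative
distance_from_edge = np.linspace(0.0, 4.0e4, 9)
W0_demo, WF_demo = 1.0e4, 2.0e4  # pad width and feathering width, same values as used below
# zero inside the pad, linear ramp across the feather, then saturates at one
weight = np.clip((distance_from_edge - W0_demo) / WF_demo, 0.0, 1.0)
print(np.round(weight, 2))
###Output
_____no_output_____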
###Code
import glob
import re
import numpy as np
import pointCollection as pc
import matplotlib.pyplot as plt
# directory with ATL14 files
thedir = '/Volumes/ice2/ben/ATL14_test/IS2/z03xlooser_dt10xlooser_80km'
# create figure axis
fig, hax = plt.subplots(nrows=3, ncols=2, sharex=True, sharey=True, figsize=(11,16))
# valid range of tiles
XR=[-440000., -120000.]
YR= [-1840000., -1560000.]
# weight parameters
# pad width
W0 = 1e4
# feathering width
WF = 2e4
# add MODIS mosaic of Greenland as background
MOG=pc.grid.data().from_geotif('/Volumes/ice1/ben/MOG/2005/mog_2005_1km.tif')
for count in range(6):
i,j = (count % 3, count//3)
MOG.show(ax=hax[i,j], vmin=14000, vmax=17000, cmap='gray')
# find list of valid files
ctr_dict={}
file_list = []
for file in glob.glob(thedir+'/*/*.h5'):
xc,yc=[int(item)*1.e3 for item in re.compile('E(.*)_N(.*).h5').search(file).groups()]
    if ((xc >= XR[0]) and (xc <= XR[1]) and (yc >= YR[0]) and (yc <= YR[1])):
file_list.append(file)
# get bounds, grid spacing and dimensions of output mosaic
mosaic=pc.grid.mosaic()
for file in file_list:
# read ATL14 grid from HDF5
temp=pc.grid.mosaic().from_h5( file, group='dz/', field_mapping={'z':'dz'})
ctr_dict[(np.mean(temp.x), np.mean(temp.y))]=file
# update grid spacing of output mosaic
mosaic.update_spacing(temp)
# update the extents of the output mosaic
mosaic.update_bounds(temp)
# update dimensions of output mosaic
mosaic.update_dimensions(temp)
# create output mosaic
mosaic.data = np.zeros(mosaic.dimensions)
mosaic.mask = np.ones(mosaic.dimensions, dtype=bool)
mosaic.count = np.zeros(mosaic.dimensions)
mosaic.weight = np.zeros((mosaic.dimensions[0],mosaic.dimensions[1]))
for file in file_list:
# read ATL14 grid from HDF5
temp=pc.grid.mosaic().from_h5( file, group='dz/', field_mapping={'z':'dz'})
temp=temp.weights(pad=W0, feather=WF, apply=True)
# get the image coordinates of the input file
iy,ix = mosaic.image_coordinates(temp)
for band in range(mosaic.dimensions[2]):
mosaic.data[iy,ix,band] += temp.z[:,:,band]
mosaic.mask[iy,ix,band] = False
# add weights to total weight matrix
mosaic.weight[iy,ix] += temp.weight[:,:]
# read ATL14 grid from HDF5
temp=pc.grid.mosaic().from_h5( file, group='dz/', field_mapping={'z':'count'})
temp=temp.weights(pad=W0, feather=WF, apply=True)
for band in range(mosaic.dimensions[2]):
mosaic.count[iy,ix,band] += temp.z[:,:,band]
# find valid weights
iy,ix = np.nonzero(mosaic.weight == 0)
mosaic.mask[iy,ix,:] = True
# normalize weights
iy,ix = np.nonzero(mosaic.weight > 0)
for band in range(mosaic.dimensions[2]):
mosaic.data[iy,ix,band] /= mosaic.weight[iy,ix]
mosaic.count[iy,ix,band] /= mosaic.weight[iy,ix]
# replace invalid points with fill_value
mosaic.data[mosaic.mask] = mosaic.fill_value
# plot ATL14 mosaics
for count in range(0, 3):
# show mosaic image of elevation change
hax[count,0].imshow((mosaic.data[:,:,count+1]-mosaic.data[:,:,count]),
extent=mosaic.extent, cmap='Spectral', vmin=-3, vmax=3, origin='lower')
# show mosaic image of beam count
hax[count,1].imshow((mosaic.count[:,:,count+1] > 0) & (mosaic.count[:,:,count] > 0),
extent=mosaic.extent, vmin=0, vmax=1, origin='lower')
# set x and y labels
hax[count,0].set_ylabel(f'$\delta$ z\n {temp.t[count]} to {temp.t[count+1]}')
hax[count,1].set_ylabel(f'coverage\n {temp.t[count]} to {temp.t[count+1]}')
hax[count,1].set_yticks([])
for count in range(2):
hax[count,0].set_xticks([])
hax[0,0].set_xlim(mosaic.extent[0:2])
hax[0,0].set_ylim(mosaic.extent[2:4])
fig.subplots_adjust(left=0.1, right=0.95, bottom=0.05, top=0.95, hspace=0.05, wspace=0.1)
plt.show()
###Output
_____no_output_____ |
python-statatics-tutorial/basic-theme/python-language/Itertools.ipynb | ###Markdown
itertools module
###Code
from itertools import *
###Output
_____no_output_____
###Markdown
1 chain(*iterables) Make an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables are exhausted.
###Code
for value in chain('gau', 'fung'):
print value,
###Output
g a u f u n g
###Markdown
2 combinations(iterable, r)Return r length subsequences of elements from the input iterable.
###Code
for value in combinations('gaufung', 5):
print value
###Output
('g', 'a', 'u', 'f', 'u')
('g', 'a', 'u', 'f', 'n')
('g', 'a', 'u', 'f', 'g')
('g', 'a', 'u', 'u', 'n')
('g', 'a', 'u', 'u', 'g')
('g', 'a', 'u', 'n', 'g')
('g', 'a', 'f', 'u', 'n')
('g', 'a', 'f', 'u', 'g')
('g', 'a', 'f', 'n', 'g')
('g', 'a', 'u', 'n', 'g')
('g', 'u', 'f', 'u', 'n')
('g', 'u', 'f', 'u', 'g')
('g', 'u', 'f', 'n', 'g')
('g', 'u', 'u', 'n', 'g')
('g', 'f', 'u', 'n', 'g')
('a', 'u', 'f', 'u', 'n')
('a', 'u', 'f', 'u', 'g')
('a', 'u', 'f', 'n', 'g')
('a', 'u', 'u', 'n', 'g')
('a', 'f', 'u', 'n', 'g')
('u', 'f', 'u', 'n', 'g')
###Markdown
3 combinations_with_replacement(iterable, r) Return r length subsequences of elements from the input iterable allowing individual elements to be repeated more than once.
###Code
for value in combinations_with_replacement('abc', 2):
print value
###Output
('a', 'a')
('a', 'b')
('a', 'c')
('b', 'b')
('b', 'c')
('c', 'c')
###Markdown
4 compress(data, selectors)Make an iterator that filters elements from data returning only those that have a corresponding element in selectors that evaluates to True. Stops when either the data or selectors iterables has been exhausted.
###Code
for value in compress('gaufung',[1,0,0,1,0,0,0]):
print value,
###Output
g f
###Markdown
5 count(start=0, step=1)Make an iterator that returns evenly spaced values starting with start. 6 cycle(iterable)Make an iterator returning elements from the iterable and saving a copy of each. When the iterable is exhausted, return elements from the saved copy. Repeats indefinitely. 7 dropwhile(predicate, iterable)Make an iterator that drops elements from the beginning of iterable as long as the predicate is true; afterwards, returns every element. Note, the iterator does not produce any output until the predicate first becomes false, so it may have a lengthy start-up time.
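count and cycle are infinite iterators, so the short illustration below (added for completeness) caps them with islice.
###Code
# count(10, 2) yields 10, 12, 14, ... forever; islice takes only the first five values
print(list(islice(count(10, 2), 5)))
# cycle('ab') repeats 'a', 'b', 'a', 'b', ... forever
print(list(islice(cycle('ab'), 5)))
###Output
[10, 12, 14, 16, 18]
['a', 'b', 'a', 'b', 'a']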
###Code
for value in dropwhile(lambda x: x>5, [7,4,6,4,1]):
print value,
###Output
4 6 4 1
###Markdown
8 groupby(iterable[, key])Make an iterator that returns consecutive keys and groups from the iterable. The key is a function computing a key value for each element. If not specified or is None, key defaults to an identity function and returns the element unchanged. Generally, the iterable needs to already be sorted on the same key function.
###Code
for k, g in groupby('AAAABBBCCDAABBB'):
print k,list(g)
###Output
A ['A', 'A', 'A', 'A']
B ['B', 'B', 'B']
C ['C', 'C']
D ['D']
A ['A', 'A']
B ['B', 'B', 'B']
###Markdown
9 Iterator functions+ ifilter(predicate, iterable)+ ifilterfalse(predicate, iterable)+ imap(function, *iterables)+ islice(iterable, stop) + izip(*iterables)+ izip_longest(*iterables[, fillvalue]) 10 product(*iterables[, repeat]) Cartesian product of input iterables. ((x,y) for x in A for y in B).
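The functions of section 9 have no example above, so here is a brief illustrative one for izip_longest, using the Python 2 names of the rest of this notebook; the product examples follow right after.
###Code
# izip_longest stops only when the longest iterable is exhausted,
# padding the shorter one with fillvalue ('-' here)
for pair in izip_longest('gau', 'fung', fillvalue='-'):
    print pair
###Output
('g', 'f')
('a', 'u')
('u', 'n')
('-', 'g')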
###Code
for value in product('gau','fung'):
print value
for value in product(range(2), repeat=3):
print value
###Output
(0, 0, 0)
(0, 0, 1)
(0, 1, 0)
(0, 1, 1)
(1, 0, 0)
(1, 0, 1)
(1, 1, 0)
(1, 1, 1)
###Markdown
11 repeat(object[, times])Make an iterator that returns object over and over again. Runs indefinitely unless the times argument is specified.
###Code
for value in repeat('gau', 4):
print value,
###Output
gau gau gau gau
###Markdown
12 takewhile(predicate, iterable)Make an iterator that returns elements from the iterable as long as the predicate is true.
###Code
for value in takewhile(lambda x: x<5, [1,4,6,4,1]):
print value,
###Output
1 4
|
examples/pretrain/prepare_dataset.ipynb | ###Markdown
prepare_dataset
###Code
from EduData import get_data
get_data("open-luna", "../../data/")
###Output
downloader, INFO http://base.ustc.edu.cn/data/OpenLUNA/OpenLUNA.json is saved as ..\..\data\OpenLUNA.json
|
More condensed dataset/notebook2/flo_test/5. Single-output.ipynb | ###Markdown
D - Optimizing diameter model 1D. Extra Trees
###Code
# This is a grid search for three parameters in the Extra Trees algorithm.
# Parameters are: random_state, n_estimators, max_features.
# This gives the best combination of the three parameters for the smallest mean absolute error.
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 25)):
for j in range(1, 25):
for k in range(2, 40, 1):
ET_regr = ExtraTreesRegressor(n_estimators=i,
max_features=j,
random_state=k)
ET_regr.fit(X_train_d, np.ravel(Y_train_d))
ET_Y_pred_d = pd.DataFrame(ET_regr.predict(X_test_d))
mae = mean_absolute_error(Y_test_d, ET_Y_pred_d)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
###Output
100%|██████████| 24/24 [05:34<00:00, 13.95s/it]
###Markdown
2D. Decision Tree
###Code
# This is a grid search for three parameters in the Decision Trees algorithm.
# Parameters are: max_depth, max_features, random_state.
# This gives the best combination of the three parameters for the smallest mean absolute error.
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 30)):
for j in range(1, 30):
for k in range(4, 80, 2):
DT_regr = DecisionTreeRegressor(max_depth=i,
max_features=j,
random_state=k)
DT_regr.fit(X_train_d, np.ravel(Y_train_d))
DT_Y_pred_d = pd.DataFrame(DT_regr.predict(X_test_d))
mae = mean_absolute_error(Y_test_d, DT_Y_pred_d)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
###Output
100%|██████████| 29/29 [02:47<00:00, 5.78s/it]
###Markdown
3D. Random Forest
###Code
# This is a grid search for three parameters in the Random Forest algorithm.
# Parameters are: max_depth, n_estimators, max_features.
# Random_state is set to 45.
# This gives the best combination of the three parameters for the smallest mean absolute error.
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 31)):
for j in range(1, 31):
for k in range(2, 50, 1):
RF_regr = RandomForestRegressor(max_depth=i,
n_estimators=j,
max_features=k,
random_state=45)
RF_regr.fit(X_train_d, np.ravel(Y_train_d))
RF_Y_pred_d = pd.DataFrame(RF_regr.predict(X_test_d))
mae = mean_absolute_error(Y_test_d, RF_Y_pred_d)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
###Output
100%|██████████| 30/30 [16:41<00:00, 33.38s/it]
###Markdown
4D. K Neighbors
###Code
min_mae = 99999
min_i, min_j = 0, 0
for i in tqdm(range(1, 40)):
for j in range(1, 40):
KNN_reg_d = KNeighborsRegressor(n_neighbors=i,
p=j).fit(X_train_d, np.ravel(Y_train_d))
KNN_Y_pred_d = KNN_reg_d.predict(X_test_d)
mae = mean_absolute_error(Y_test_d, KNN_Y_pred_d)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
print(min_mae, min_i, min_j)
###Output
100%|██████████| 39/39 [00:19<00:00, 2.03it/s]
###Markdown
Saving Decision Tree model
###Code
DT_regr_d = DecisionTreeRegressor(max_depth=12,
max_features=23,
random_state=22)
DT_regr_d.fit(X_train_d, np.ravel(Y_train_d))
DT_Y_pred_d = pd.DataFrame(DT_regr_d.predict(X_test_d))
joblib.dump(DT_regr_d, "./model_SO_diameter_DecisionTree.joblib")
###Output
_____no_output_____
###Markdown
E - Optimizing emission model 1E. Extra Trees
###Code
# This is a grid search for three parameters in the Extra Trees algorithm.
# Parameters are: random_state, n_estimators, max_features.
# This gives the best combination of the three parameters for the smallest mean absolute error.
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 25)):
for j in range(1, 25):
for k in range(2, 50, 1):
ET_regr_e = ExtraTreesRegressor(n_estimators=i,
max_features=j,
random_state=k)
ET_regr_e.fit(X_train_e, np.ravel(Y_train_e))
ET_Y_pred_e = pd.DataFrame(ET_regr_e.predict(X_test_e))
mae = mean_absolute_error(Y_test_e, ET_Y_pred_e)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
###Output
100%|██████████| 24/24 [07:36<00:00, 19.03s/it]
###Markdown
2E. Decision Trees
###Code
# This is a grid search for three parameters in the Decision Trees algorithm.
# This gives the best combination of the three parameters for the smallest mean absolute error.
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 30)):
for j in range(1, 30):
for k in range(4, 50, 2):
DT_regr_e = DecisionTreeRegressor(max_depth=i,
max_features=j,
random_state=k)
DT_regr_e.fit(X_train_e, np.ravel(Y_train_e))
DT_Y_pred_e = pd.DataFrame(DT_regr_e.predict(X_test_e))
mae = mean_absolute_error(Y_test_e, DT_Y_pred_e)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
###Output
100%|██████████| 29/29 [01:49<00:00, 3.78s/it]
###Markdown
3E. Random Forest
###Code
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 31)):
for j in range(1, 31):
for k in range(2, 50, 1):
RF_regr_e = RandomForestRegressor(max_depth=i,
n_estimators=j,
max_features=k,
random_state=45)
RF_regr_e.fit(X_train_e, np.ravel(Y_train_e))
RF_Y_pred_e = pd.DataFrame(RF_regr_e.predict(X_test_e))
mae = mean_absolute_error(Y_test_e, RF_Y_pred_e)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
###Output
100%|██████████| 30/30 [17:34<00:00, 35.15s/it]
###Markdown
4E. K Neighbors
###Code
min_mae = 99999
min_i, min_j = 0, 0
for i in tqdm(range(1, 40)):
for j in range(1, 40):
KNN_reg_e = KNeighborsRegressor(n_neighbors=i,
p=j).fit(X_train_e, np.ravel(Y_train_e))
KNN_Y_pred_e = KNN_reg_e.predict(X_test_e)
mae = mean_absolute_error(Y_test_e, KNN_Y_pred_e)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
print(min_mae, min_i, min_j)
###Output
100%|██████████| 39/39 [00:21<00:00, 1.80it/s]
###Markdown
Saving Extra Trees model
###Code
ET_regr_e = ExtraTreesRegressor(n_estimators=20,
max_features=21,
random_state=23).fit(X_train_e, np.ravel(Y_train_e))
ET_Y_pred_e = ET_regr_e.predict(X_test_e)
joblib.dump(ET_regr_e, "./model_SO_emission_ExtraTrees.joblib")
###Output
_____no_output_____
###Markdown
A - Optimizing absorption model 1A: Extra Trees
###Code
# This is a grid search for three parameters in the Extra Trees algorithm.
# Parameters are: random_state, n_estimators, max_features.
# This gives the best combination of the three parameters for the smallest mean absolute error.
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 30)):
for j in range(1, 30):
for k in range(2, 50, 1):
ET_regr_a = ExtraTreesRegressor(n_estimators=i,
max_features=j,
random_state=k)
ET_regr_a.fit(X_train_a, np.ravel(Y_train_a))
ET_Y_pred_a = pd.DataFrame(ET_regr_a.predict(X_test_a))
mae = mean_absolute_error(Y_test_a, ET_Y_pred_a)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
###Output
100%|██████████| 29/29 [13:46<00:00, 28.50s/it]
###Markdown
2A. Decision Trees
###Code
# This is a grid search for three parameters in the Decision Trees algorithm.
# This gives the best combination of the three parameters for the smallest mean absolute error.
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 30)):
for j in range(1, 30):
for k in range(4, 80, 2):
DT_regr_a = DecisionTreeRegressor(max_depth=i,
max_features=j,
random_state=k)
DT_regr_a.fit(X_train_a, np.ravel(Y_train_a))
DT_Y_pred_a = pd.DataFrame(DT_regr_a.predict(X_test_a))
mae = mean_absolute_error(Y_test_a, DT_Y_pred_a)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
DT_regr_a = DecisionTreeRegressor(max_depth=20,
max_features=20,
random_state=12).fit(X_train_a, np.ravel(Y_train_a))
DT_Y_pred_a = DT_regr_a.predict(X_test_a)
DT_r2_a = r2_score(Y_test_a, DT_Y_pred_a)
DT_MSE_a = mean_squared_error(Y_test_a, DT_Y_pred_a)
DT_RMSE_a = mean_squared_error(Y_test_a, DT_Y_pred_a, squared=False)
DT_MAE_a = mean_absolute_error(Y_test_a, DT_Y_pred_a)
print('absorption:', 'r2:', DT_r2_a, '; MSE:', DT_MSE_a, '; RMSE:', DT_RMSE_a, '; MAE:', DT_MAE_a)
###Output
absorption: r2: 0.7157594152974505 ; MSE: 1033.208754199327 ; RMSE: 32.14356474007398 ; MAE: 20.595959593939398
###Markdown
3A. Random Forest
###Code
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 31)):
for j in range(1, 31):
for k in range(2, 50, 1):
RF_regr_a = RandomForestRegressor(max_depth=i,
n_estimators=j,
max_features=k,
random_state=45)
RF_regr_a.fit(X_train_a, np.ravel(Y_train_a))
RF_Y_pred_a = pd.DataFrame(RF_regr_a.predict(X_test_a))
mae = mean_absolute_error(Y_test_a, RF_Y_pred_a)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
###Output
100%|██████████| 30/30 [18:12<00:00, 36.43s/it]
###Markdown
4A. K Neighbors
###Code
min_mae = 99999
min_i, min_j = 0, 0
for i in tqdm(range(1, 40)):
for j in range(1, 40):
KNN_reg_a = KNeighborsRegressor(n_neighbors=i,
p=j).fit(X_train_a, np.ravel(Y_train_a))
KNN_Y_pred_a = KNN_reg_a.predict(X_test_a)
mae = mean_absolute_error(Y_test_a, KNN_Y_pred_a)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
print(min_mae, min_i, min_j)
###Output
100%|██████████| 39/39 [00:19<00:00, 2.01it/s]
###Markdown
Saving model
###Code
ET_regr_a = ExtraTreesRegressor(n_estimators=6,
max_features=29,
random_state=24
)
ET_regr_a.fit(X_train_a, np.ravel(Y_train_a))
ET_Y_pred_a = pd.DataFrame(ET_regr_a.predict(X_test_a))
joblib.dump(ET_regr_a, "./model_SO_abs_ExtraTrees.joblib")
###Output
_____no_output_____
###Markdown
Analyzing
###Code
## Diameter
DT_regr_d = DecisionTreeRegressor(max_depth=12,
max_features=23,
random_state=22)
DT_regr_d.fit(X_train_d, np.ravel(Y_train_d))
DT_Y_pred_d = DT_regr_d.predict(X_test_d)
D_mae = mean_absolute_error(Y_test_d, DT_Y_pred_d)
D_r_2 = r2_score(Y_test_d, DT_Y_pred_d)
D_mse = mean_squared_error(Y_test_d, DT_Y_pred_d)
D_rmse = mean_squared_error(Y_test_d, DT_Y_pred_d, squared=False)
## Emission
ET_regr_e = ExtraTreesRegressor(n_estimators=20,
max_features=21,
random_state=23).fit(X_train_e, np.ravel(Y_train_e))
ET_Y_pred_e = ET_regr_e.predict(X_test_e)
E_mae = mean_absolute_error(Y_test_e, ET_Y_pred_e)
E_r_2 = r2_score(Y_test_e, ET_Y_pred_e)
E_mse = mean_squared_error(Y_test_e, ET_Y_pred_e)
E_rmse = mean_squared_error(Y_test_e, ET_Y_pred_e, squared=False)
### Absorption
ET_regr_a = ExtraTreesRegressor(n_estimators=6,
max_features=29,
random_state=24)
ET_regr_a.fit(X_train_a, np.ravel(Y_train_a))
ET_Y_pred_a = ET_regr_a.predict(X_test_a)
A_mae = mean_absolute_error(Y_test_a, ET_Y_pred_a)
A_r_2 = r2_score(Y_test_a, ET_Y_pred_a)
A_mse = mean_squared_error(Y_test_a, ET_Y_pred_a)
A_rmse = mean_squared_error(Y_test_a, ET_Y_pred_a, squared=False)
from tabulate import tabulate
d = [ ["Diameter", D_r_2, D_mae, D_mse, D_rmse],
["Absorption", A_r_2, A_mae, A_mse, A_rmse],
["Emission", E_r_2, E_mae, E_mse, E_rmse]]
print(tabulate(d, headers=["Outputs", "R2", "Mean absolute error", "Mean squared error", "Root mean squared error"]))
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,5))
fig.suptitle('Single Outputs', fontsize=25)
ax1.plot(DT_Y_pred_d, Y_test_d, 'o')
ax1.plot([1.5,9],[1.5,9], color = 'r')
ax1.set_title('Diameter')
ax1.set(xlabel='Predicted Values (nm)', ylabel='Observed Values (nm)')
ax2.plot(ET_Y_pred_a, Y_test_a, 'o')
ax2.plot([400,650],[400,650], color = 'r')
ax2.set_title('Absorption')
ax2.set(xlabel='Predicted Values (nm)', ylabel='Observed Values (nm)')
ax3.plot(ET_Y_pred_e, Y_test_e, 'o')
ax3.plot([450,700],[450,700], color = 'r')
ax3.set_title('Emission')
ax3.set(xlabel='Predicted Values (nm)', ylabel='Observed Values (nm)')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Feature importance For diameter prediction
###Code
importance_dict_d = dict()
for i in range(0,71):
importance_dict_d[input_col[i]] = DT_regr_d.feature_importances_[i]
sorted_importance_d = sorted(importance_dict_d.items(), key=lambda x: x[1], reverse=True)
sorted_importance_d
top7_d = DataFrame(sorted_importance_d[0:7], columns=['features', 'importance score'])
others_d = DataFrame(sorted_importance_d[7:], columns=['features', 'importance score'])
import seaborn as sns
a4_dims = (20.7, 8.27)
fig, ax = plt.subplots(figsize=a4_dims)
sns.set_theme(style="whitegrid")
ax = sns.barplot(x="features", y="importance score", data=top7_d)
###Output
_____no_output_____
###Markdown
Emission prediction
###Code
importance_dict_e = dict()
for i in range(0,71):
importance_dict_e[input_col[i]] = ET_regr_e.feature_importances_[i]
sorted_importance_e = sorted(importance_dict_e.items(), key=lambda x: x[1], reverse=True)
sorted_importance_e
top7_e = DataFrame(sorted_importance_e[0:7], columns=['features', 'importance score'])
others_e = DataFrame(sorted_importance_e[7:], columns=['features', 'importance score'])
# combined_others2 = pd.DataFrame(data = {
# 'features' : ['others'],
# 'importance score' : [others2['importance score'].sum()]
# })
# #combining top 10 with others
# imp_score2 = pd.concat([top7, combined_others2])
import seaborn as sns
a4_dims = (20.7, 8.27)
fig, ax = plt.subplots(figsize=a4_dims)
sns.set_theme(style="whitegrid")
ax = sns.barplot(x="features", y="importance score", data=top7_e)
###Output
_____no_output_____
###Markdown
Absorption prediction
###Code
importance_dict_a = dict()
for i in range(0,71):
importance_dict_a[input_col[i]] = ET_regr_a.feature_importances_[i]
sorted_importance_a = sorted(importance_dict_a.items(), key=lambda x: x[1], reverse=True)
sorted_importance_a
top7_a = DataFrame(sorted_importance_a[0:7], columns=['features', 'importance score'])
others_a = DataFrame(sorted_importance_a[7:], columns=['features', 'importance score'])
import seaborn as sns
a4_dims = (20.7, 8.27)
fig, ax = plt.subplots(figsize=a4_dims)
sns.set_theme(style="whitegrid")
ax = sns.barplot(x="features", y="importance score", data=top7_a)
importance_dict_a
###Output
_____no_output_____
###Markdown
Combine
###Code
sorted_a = sorted(importance_dict_a.items(), key=lambda x: x[0], reverse=False)
sorted_d = sorted(importance_dict_d.items(), key=lambda x: x[0], reverse=False)
sorted_e = sorted(importance_dict_e.items(), key=lambda x: x[0], reverse=False)
sorted_d
combined_importance = dict()
for i in range(0,71):
combined_importance[sorted_e[i][0]] = sorted_e[i][1] + sorted_a[i][1] + sorted_d[i][1]
combined_importance
sorted_combined_importance = sorted(combined_importance.items(), key=lambda x: x[1], reverse=True)
sorted_combined_importance
top7_combined = DataFrame(sorted_combined_importance[0:7], columns=['features', 'importance score'])
others_combined = DataFrame(sorted_combined_importance [7:], columns=['features', 'importance score'])
import seaborn as sns
a4_dims = (20.7, 8.27)
fig, ax = plt.subplots(figsize=a4_dims)
sns.set_theme(style="whitegrid")
ax = sns.barplot(x="features", y="importance score", data=top7_combined)
from sklearn.datasets import load_boston
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFE
from sklearn.linear_model import RidgeCV, LassoCV, Ridge, Lasso
###Output
_____no_output_____ |
Example Notebooks/GenePattern Files in Python.ipynb | ###Markdown
Using GenePattern Files in PythonThis is a short tutorial on how to use GenePattern files in conjunction with common Python libraries, such as [csv](https://docs.python.org/2/library/csv.html), [numpy](http://www.numpy.org/), [pandas](http://pandas.pydata.org/) and [matplotlib](http://matplotlib.org/). It demonstrates how to load data from GenePattern into these libraries, and how to send data from these libraries back to GenePattern.This tutorial assumes that the reader is familiar with [GenePattern](http://genepattern.org) and its basic associated concepts, such as modules, jobs and result files. A simple GenePattern tutorial is [available here](http://www.broadinstitute.org/cancer/software/genepattern/tutorial). It also assumes that the reader is familiar with the basics of the [GenePattern Python library](http://www.broadinstitute.org/cancer/software/genepattern/programmers-guide_Using_GenePattern_from_Python). If you are not already familiar with this library, it is recommended that you first complete the [GenePattern Python Tutorial](http://www.broadinstitute.org/cancer/software/genepattern/example-genepattern-notebooks). InstallationIf you're not running this notebook on the [GenePattern Notebook Repository](https://notebook.genepattern.org), several of the libraries in this tutorial may need to be installed in order to use their functionality. Installation is typically done through PIP. All of the relevant libraries can be installed by executing the line below on the command line, with the necessary permissions. You may need to restart the IPython kernel before the libraries become available after installation.> *pip install numpy pandas matplotlib genepattern-python* GPFile ObjectsFiles in the GenePattern Python library are represented by GPFile objects. These objects contain an URL to the file on the GenePattern server and several methods for obtaining the contents of the file.GPFile objects can either be created directly or obtained from a GenePattern job, through the GPJob object. An example of directly instantiating a GPFile object to represent a remote file is below. An example of obtaining a GPFile object representing a GenePattern job's output is also given, but is commented out, as this tutorial does not cover launching GenePattern jobs.
###Code
import gp
# Create a GenePattern server proxy instance
gpserver = gp.GPServer('https://cloud.genepattern.org/gp','myusername', 'mypassword')
# Directly create the GPFile object
gpfile = gp.GPFile(gpserver, 'https://datasets.genepattern.org/data/all_aml/all_aml_test.preprocessed.gct')
# The code below demonstrates how to obtain a GPFile from a GPJob object.
# It assumes the user already has a GPJob launched and represented in the gpjob variable.
# This returns a list of all output files as GPFiles.
# output_list = gpjob.get_output_files()
# The list can be iterated over or a single member obtained
# gpfile = output_list[0]
###Output
_____no_output_____
###Markdown
Understanding Your FileWhen doing any sort of file parsing in Python, it is important to understand the contents of the file that you are using. The file we will use with this tutorial is a GCT containing expression data for different leukemia samples.[Here is a link to a GCT file](https://datasets.genepattern.org/data/all_aml/all_aml_test.preprocessed.gct). Go ahead and click the link. It will open up in your browser.As you can see, the file consists of a couple lines with header information, followed by a tab-separated matrix of expression data. This will be important to keep in mind later in the tutorial, when we start loading the data using different libraries.More information on the GCT format is [available here](http://www.broadinstitute.org/cancer/software/genepattern/file-formats-guideGCT). GCT and ODF ParsersThe GenePattern Python Library comes with built-in support for the GCT and ODF file formats. These are two of the most common formats used by GenePattern. While this tutorial does not use these parsers, in favor of doing things manually, we thought the parsers worth mentioning as they make working with GCT and ODF files particularly easy.To use one of these parsers, simply import the `gp.data` package and pass one of their constructors a GPFile object. The parser will do the rest, loading the contents of the file and importing it into a pandas Dataframe (see the pandas section below).Here is an example of how to load the `all_aml_test.preprocessed.gct` file using the GCT parser.
###Code
# Import the GCT and ODF parsers
from gp.data import GCT, ODF
# Load and parse the GCT file
gpfile_parsed = GCT(gpfile)
# Reference and return the resulting Dataframe
gpfile_parsed.dataframe
###Output
_____no_output_____
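###Markdown
Before reading the file manually in the next sections, it may help to recall what the raw GCT layout looks like. The snippet below prints an illustration with made-up values; only the layout matters.
###Code
# Illustrative GCT layout (made-up values): a version line, a dimensions line,
# then a tab-separated header row and one data row per gene
gct_example = """#1.2
2\t3
Name\tDescription\tsample_1\tsample_2\tsample_3
GENE_A\tna\t120.5\t98.2\t143.0
GENE_B\tna\t15.1\t22.8\t19.4"""
print(gct_example)
###Output
_____no_output_____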
###Markdown
Reading the FileEvery GPFile object has a read() method, which downloads the file and returns its contents as a string. As strings are handled differently, depending on whether you are using Python 2 or Python 3, we recommend explicitly reading the contents as unicode. This will ensure that the file can be easily used in either version of Python.An example of reading the contents of the file to a variable is given below.
###Code
# In Python 3 this will be unicode by default
raw_txt = gpfile.read()
###Output
_____no_output_____
###Markdown
GenePattern File IOYou now have the contents of the file represented as a string. With this you can read the data, parse the file or, with the right knowhow, perform analyses.Many common libraries, however, expect data to come in the form of a Python IO object, rather than a raw string. Thankfully, the data can be used in this way simply by instantiating a StringIO object. The code below shows how to do this.
###Code
from io import StringIO
# Wrap the data in a StringIO object
data_io = StringIO(raw_txt)
###Output
_____no_output_____
###Markdown
**Note:** The GPFile object also possesses the open() method. This method directly returns an IO object for the remote file, bypassing the need to first read the file and then wrap its contents with a StringIO. Unfortunately, due to the way that Python handles strings, this results in different behavior between Python 2 and Python 3. If you use this method and run into errors, we recommend trying the above method instead. Using GenePattern with the csv moduleOne of the simplest ways to parse the data is using Python's built-in [csv](https://docs.python.org/2/library/csv.html) module. This module provides a variety of methods for parsing text into matrices of raw values.The most important method in this library is the reader() method. It will parse each line into a list of values and then return an object from which each line can be read in order. This can either be iterated over or cast as a list.An example of this is shown below.
###Code
import csv
# Load the data into a reader object.
# For a GCT file, the delimiter between columns is the tab character
csv_reader = csv.reader(data_io, delimiter='\t')
# Cast the reader object as a list of lists
data_matrix = list(csv_reader)
# Print the fourth line of the file (the first sample)
# Remember, the first line will be index 0.
print(data_matrix[3])
###Output
_____no_output_____
###Markdown
Note that the first two columns in this matrix will be the row name and description. These correspond to the first two entries in the list printed above. Also note that the csv library parses all values as strings, including the numerical values in the list. Before analyses can be performed with these values, they will need to be parsed as numbers. This is a fundamental limitation of the csv library, and one that other libraries avoid. Using GenePattern with numpyThe popular [numpy](http://www.numpy.org/) package provides a variety of tools useful for scientific computing. Among these tools is a method to quickly parse files into matrices of data. This method is named [genfromtxt()](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.genfromtxt.html).Below is an example of how to load the GCT file into a numpy matrix, using the IO object we created earlier. This method call requires that a number of parameters are set. This tailors the parsing of the data to fit the GCT format.
###Code
import numpy
# Wrap the data in a clean StringIO object
data_io = StringIO(raw_txt)
# This reads the file and generates a matrix of values.
# The parameters are as follows:
#
# data_io: This is a StringIO object wrapping the raw data
# delimiter: GCT files are tab-separated
# dtype: Determine the type of each column based on its contents
# comments: GCT files contain no comment lines
data_matrix = numpy.genfromtxt(data_io, delimiter='\t', skip_header=2, dtype=None, comments=None)
# Print the column names of the matrix to demonstrate
print(data_matrix[3])
###Output
_____no_output_____
###Markdown
**Note:** You will notice that this method of parsing is able to recognize when a column's value is either a string or a number. This allows you to perform analyses on the data without first having to convert each numerical value to the appropriate type. Using GenePattern with pandas[pandas](http://pandas.pydata.org/) is a popular data analysis library that provides a variety of tools and data structures for use in scientific computing. Additionally, pandas comes with a number of Jupyter integrations, making it easy to use from within a Jupyter notebook.Among the tools provided by pandas is the *read_csv()* method. This method can be used to easily read a GenePattern file, and displays the results as a table. Example code demonstrating this is shown below.
###Code
import pandas
# Reset the state of the IO object.
# This prepares it for reading again.
data_io = StringIO(raw_txt)
# This reads the file and generates data structure representing the values.
# If a code cell returns this value, it will be displayed in the notebook as a table.
# The parameters are as follows:
#
# delimiter: GCT files are tab-separated
# header: This line contains the column headers
pandas.read_csv(data_io, delimiter='\t', header=2)
###Output
_____no_output_____
###Markdown
**Note:** The pandas library requires certain versions of the numpy library in order to run. If you are experiencing errors when attempting to import pandas, you may have an incompatible version of numpy installed. This is particularly common on Mac OSX, as many Macs come with an older version of the numpy library. To fix this problem you will first need to remove the older version of the library from your Python path, and then install the newer version through PIP. Using GenePattern with matplotlib[matplotlib](http://matplotlib.org/) is a popular plotting and graphing library. It comes with a variety of Jupyter integrations. These features are ideal for visualizing the contents of GenePattern files in ways that supplement GenePattern's native visualizers.By default matplotlib displays its visualizations a separate Python window. This may or may not work for you, depending on whether you are running Jupyter locally. When working with matplotlib, it is good to tell the notebook to display all matplotlib visualizations inline. The magic for this is shown below.
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
The matplotlib library comes with far more display options than can be covered in this tutorial. Furthermore, doing so would be outside its scope. Instead, below we present a piece of code that demonstrates how to use pandas and matplotlib to display the GenePattern file as a heatmap. Note that for ease of demonstration and readability this code displays only a small portion of the GenePattern file.
###Code
import matplotlib.cbook as cbook
import matplotlib.pyplot as plt
import pylab
# Reset the state of the IO object.
# This prepares it for reading again.
data_io = StringIO(raw_txt)
# Parse the file using pandas's read_csv() method,
# as was used in the previous section
table = pandas.read_csv(data_io, delimiter='\t', header=2)
# Tell matplotlib to generate a 15 in x 10 in image
pylab.rcParams['figure.figsize'] = 15, 10
# For ease of demonstration, display only a portion of the GenePattern file
submatrix = table.iloc[0:11, 2:10].values
# Note that if we were doing real science in this demonstration,
# we would normalize the values in the matrix somewhere around this point
# Obtain references to the figure and axis parts of the heatmap
fig, ax = plt.subplots()
# Set the color scheme used by the heatmap
heatmap = ax.pcolor(submatrix, cmap=plt.cm.bwr)
# Put the major ticks at the middle of each cell
ax.set_xticks(numpy.arange(submatrix.shape[1])+0.5, minor=False)
ax.set_yticks(numpy.arange(submatrix.shape[0])+0.5, minor=False)
# Use a more natural, table-like display
ax.invert_yaxis()
ax.xaxis.tick_top()
# Extract the column and row labels from the file
column_labels = [item for item in table.columns.values[2:]]
row_labels = list(table.columns[2:13])
# Set the table and row labels for display
ax.set_xticklabels(column_labels, minor=False)
ax.set_yticklabels(row_labels, minor=False)
# Display the heatmap
plt.show()
###Output
_____no_output_____ |
notebooks/05_gr_gtr_subject_classifier.ipynb | ###Markdown
GtR Topic Classifier Preamble
###Code
%run notebook_preamble.ipy
pd.set_option('max_columns', 99)
import ast
import seaborn as sns
from itertools import chain
from collections import Counter, defaultdict
import itertools
import re
from eu_funding.visualization.visualize import pdf_cdf
from eu_funding.utils.nlp_utils import remove_markup, normalise_digits, lemmatize, bigram, stringify_docs
# from src.visualization.visualize import pdf_cdf
from sklearn.metrics import confusion_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score,GridSearchCV
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import LabelBinarizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.feature_selection import chi2
import networkx as nx
import community
import warnings
warnings.simplefilter('ignore', UserWarning)
from nesta.packages.nlp_utils import preprocess
list_cols = ['research_topics', 'research_subjects']
gtr_projects_df = pd.read_csv(
os.path.join(ext_data_path, 'gtr', 'gtr_projects.csv'),
converters={k: ast.literal_eval for k in list_cols}
)
gtr_projects_df.head()
research_subject_counter = Counter(chain(*gtr_projects_df['research_subjects']))
research_topic_counter = Counter(chain(*gtr_projects_df['research_topics']))
print('There are {} unique research subjects in the GtR projects dataset.'.format(len(research_subject_counter)))
print('There are {} unique research topics in the GtR projects dataset.'.format(len(research_topic_counter)))
research_subject_counter.most_common(40)
###Output
_____no_output_____
###Markdown
Field Definition Through Community Detection
###Code
combos = list(chain(*[sorted(itertools.combinations(d, 2)) for d in gtr_projects_df['research_topics']]))
research_topic_edge_counter = Counter(combos)
total_research_topics = len(list(chain(*gtr_projects_df['research_topics'])))
def association_strength(combo, occurrences, cooccurrences, total):
return (2 * total * cooccurrences[combo]) / (occurrences[combo[0]] * occurrences[combo[1]])
edges = set(combos)
assoc_strengths = [association_strength(
edge,
research_topic_counter,
research_topic_edge_counter,
total_research_topics) for edge in edges]
plt.hist(np.log10(assoc_strengths), bins=100)
plt.show()
edge_df = pd.DataFrame()
edge_df['source'] = [e[0] for e in edges]
edge_df['target'] = [e[1] for e in edges]
edge_df['weight'] = np.log10(assoc_strengths)
g = nx.from_pandas_edgelist(edge_df, edge_attr='weight')
class CommunityPartition:
def __init__(self, graph):
self.graph = graph
def edgelist_to_cooccurrence(self, repeats, **best_partition_kwargs):
edge_counter = Counter()
for i in range(repeats):
partition = community.best_partition(self.graph, **best_partition_kwargs)
edgelist = self.partition_to_edgelist(partition)
edge_counter.update(edgelist)
g = nx.Graph()
g.add_weighted_edges_from([(e[0][0], e[0][1], e[1]) for e in edge_counter.items()])
return g
def partition_to_edgelist(self, partition):
partition_reverse_mapping = self.reverse_index_partition(partition)
edgelist = []
for community, elements in partition_reverse_mapping.items():
combos = [tuple(sorted(e)) for e in itertools.combinations(elements, 2)]
edgelist.extend(combos)
return edgelist
def reverse_index_partition(self, partition):
partition_reverse_mapping = defaultdict(list)
for k, v in partition.items():
partition_reverse_mapping[v].append(k)
return partition_reverse_mapping
cp = CommunityPartition(g)
co = cp.edgelist_to_cooccurrence(3, resolution=.4)
nx.draw(co)
#Extract the best partition
part = community.best_partition(g, resolution=0.4, random_state=0, weight='weight')
set(part.values())
size = float(len(set(part.values())))
pos = nx.spring_layout(co)
count = 0.
for com in set(part.values()) :
count = count + 1.
list_nodes = [nodes for nodes in part.keys()
if part[nodes] == com]
nx.draw_networkx_nodes(co, pos, list_nodes, node_size = 20,
node_color = str(count / size))
nx.draw_networkx_edges(co, pos, alpha=0.5)
plt.show()
pd.Series(part).reset_index(drop=False).groupby(0)['index'].apply(lambda x: print(', '.join(list(x))+'\n'))
import pickle
category_name_lookup = {
0: 'social_sciences',
1: 'arts_linguistics',
2: 'chem_mater_phys_eng',
3: 'social_sciences',
4: 'maths_computing_ee',
5: 'social_sciences',
6: 'social_sciences',
7: 'biological_sciences',
8: 'social_sciences',
9: 'humanities',
10: 'arts_linguistics',
11: 'humanities',
12: 'social_sciences',
13: 'physics',
14: 'environmental_sciences',
15: 'social_sciences',
16: 'humanities'
}
topic_discipline_lookup = {top:category_name_lookup[disc] for top, disc in part.items()}
with open(os.path.join(model_path, 'communities_partition.pkl'), 'wb') as f:
pickle.dump(part, f)
with open(os.path.join(model_path, 'communities_partition_labels.pkl'), 'wb') as f:
pickle.dump(category_name_lookup, f)
gtr_projects_df['discipline'] = gtr_projects_df['research_topics'].apply(
lambda x: [topic_discipline_lookup[val] for val in x])
gtr_projects_df['discipline_sets'] = [set(x) for x in gtr_projects_df['discipline']]
gtr_projects_df['single_disc'] = [True if len(x)==1 else np.nan if len(x)==0 else False for x in gtr_projects_df['discipline_sets']]
gtr_projects_df['single_disc'].mean()
gtr_projects_df['discipline_sets'] = [
set(['medical_sciences']) if f =='MRC' else x for f,x in zip(
gtr_projects_df['funder_name'],
gtr_projects_df['discipline_sets'])]
def modal_value(l):
c = Counter(l)
try:
return c.most_common(1)[0][0]
    except IndexError:
return np.nan
gtr_projects_df['modal_discipline'] = [modal_value(d) for d in gtr_projects_df['discipline_sets']]
gtr_projects_df['modal_discipline'].value_counts()
Counter(chain(*gtr_projects_df['discipline_sets'])).most_common()
n_labels = [True if len(s) > 0 else False for s in gtr_projects_df['discipline_sets']]
# remove projects without abstracts
gtr_projects_df = gtr_projects_df[~pd.isnull(gtr_projects_df['abstract_texts'])]
# remove projects with short abstracts
gtr_projects_df = gtr_projects_df[gtr_projects_df['abstract_texts'].str.len() > 250]
# remove projects with no labels
n_labels = [True if len(s) > 0 else False for s in gtr_projects_df['discipline_sets']]
gtr_projects_df = gtr_projects_df[n_labels]
###Output
_____no_output_____
###Markdown
Text Preprocessing
###Code
import spacy
from gensim.models.phrases import Phraser
nlp = spacy.load('en')
nlp.remove_pipe('parser')
nlp.remove_pipe('ner')
with open(os.path.join(raw_data_path, 'stopwords_en_long.txt'), 'r') as f:
stopwords = f.read().splitlines()
for stopword in stopwords:
nlp.vocab[stopword.lower()].is_stop = True
nlp.vocab[stopword.upper()].is_stop = True
nlp.vocab[stopword.title()].is_stop = True
abstracts = [remove_markup(a) for a in gtr_projects_df['abstract_texts']]
abstracts = [normalise_digits(a) for a in abstracts]
abstracts = lemmatize(abstracts, nlp)
bigrammer = Phraser.load(os.path.join(model_path, 'gtr_discipline_bigrammer.pkl'))
abstracts = bigram(abstracts, phraser=bigrammer)
abstracts_str = list(stringify_docs(abstracts))
gtr_projects_df['abstract_processed'] = abstracts_str
###Output
_____no_output_____
###Markdown
Single Label
###Code
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import ComplementNB, MultinomialNB
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.pipeline import Pipeline
from sklearn.externals import joblib
gtr_projects_df['n_disciplines'] = [len(a) for a in gtr_projects_df['discipline_sets']]
gtr_pure_df = gtr_projects_df[gtr_projects_df['n_disciplines'] == 1]
tfidf_single = TfidfVectorizer(
max_df=0.4,
min_df=5,
sublinear_tf=True,
norm='l2'
)
tfidf_pure_vecs = tfidf_single.fit_transform(gtr_pure_df['abstract_processed'])
X_train, X_test, y_train, y_test = train_test_split(tfidf_pure_vecs, gtr_pure_df['modal_discipline'], train_size=0.8, test_size=0.2)
sp = SelectPercentile(chi2, 40)
sp_vecs = sp.fit_transform(X_train, y_train)
sp_vecs_test = sp.transform(X_test)
tfidf_pure_vecs.shape
sp_vecs.shape
lr = LogisticRegression(C=10, solver='lbfgs', multi_class='auto')
mnb = MultinomialNB()
cmb = ComplementNB()
voter = VotingClassifier(estimators=[('lr', lr), ('cmb', cmb), ('mnb', mnb)])
pipe = Pipeline??
from sklearn.model_selection import GridSearchCV
###Output
_____no_output_____
###Markdown
Optimise
###Code
lr_params = {'C': [0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100]}
lr_grid = GridSearchCV(lr, param_grid=lr_params, cv=5, verbose=2)
lr_grid.fit(sp_vecs, y_train)
lr_best = lr_grid.best_estimator_
print(classification_report(y_test, lr_best.predict(sp_vecs_test)))
sns.heatmap(
pd.DataFrame(confusion_matrix(y_test, lr_best.predict(sp_vecs_test)), columns=mlb.classes_, index=mlb.classes_),
annot=True,
fmt='d'
)
###Output
_____no_output_____
###Markdown
Export Pipe
###Code
tfidf_single = TfidfVectorizer(
max_df=0.4,
min_df=5,
sublinear_tf=True,
norm='l2'
)
sp = SelectPercentile(chi2, 40)
lr = LogisticRegression(C=10, solver='lbfgs', multi_class='auto')
pipe = Pipeline(
steps=[
        ('tfidf', tfidf_single),
('sp', sp),
('lr', lr)
]
)
pipe.fit(gtr_pure_df['abstract_processed'], gtr_pure_df['modal_discipline'])
joblib.dump(pipe, os.path.join(model_path, 'gtr_discipline_lvl9_lr_20190222.pkl'))
###Output
_____no_output_____
###Markdown
Multilabel
###Code
tfidf = TfidfVectorizer(
max_df=0.3,
min_df=5,
sublinear_tf=True,
norm='l2'
)
tfidf_vecs = tfidf.fit_transform(abstracts_str)
classes = list(set(chain(*gtr_projects_df['discipline_sets'])))
mlb = MultiLabelBinarizer(classes=classes)
target_binarized = mlb.fit_transform(gtr_projects_df['discipline_sets'])
target_binarized_df = pd.DataFrame(target_binarized, columns=mlb.classes_)
X_train, X_test, y_train, y_test = train_test_split(tfidf_vecs, target_binarized_df, train_size=0.8, test_size=0.2)
sp = SelectPercentile(chi2, 40)
sp_vecs = sp.fit_transform(X_train, y_train)
sp_vecs_test = sp.transform(X_test)
lr = LogisticRegression(solver='lbfgs', multi_class='auto')
mnb = MultinomialNB()
cmb = ComplementNB()
voter = VotingClassifier(estimators=[('lr', lr), ('cmb', cmb), ('mnb', mnb)])
for cls in mlb.classes_:
print(cls)
lr.fit(sp_vecs, y_train[cls])
print(classification_report(y_test[cls], lr.predict(sp_vecs_test)))
print(classification_report(y_test, voter.predict(sp_vecs_test)))
###Output
_____no_output_____
###Markdown
Train RF
###Code
from sklearn.model_selection import RandomizedSearchCV
rf_params = {
'bootstrap': [True, False],
'max_depth': [10, 20, 50, 100, None],
'max_features': ['auto', 'sqrt'],
'min_samples_leaf': [1, 2, 4],
'min_samples_split': [2, 5, 10],
'n_estimators': [100, 200, 500]
}
rf = RandomForestClassifier(n_jobs=3)
rf_random = RandomizedSearchCV(
estimator=rf,
param_distributions=rf_params,
n_iter=100,
cv=3,
verbose=1,
random_state=42,
n_jobs=3
)
rf_random.fit(X_train, y_train)
print(classification_report(y_test, rf_random.best_estimator_.predict(X_test)))
rf_random.best_estimator_.fit(tfidf_vecs_filt, target_binarized)
from sklearn.externals import joblib
joblib.dump(rf_random.best_estimator_, os.path.join(model_path, 'gtr_discipline_classifier_rf_8_20192102.pkl'))
joblib.dump(tfidf, os.path.join(model_path, 'gtr_discipline_tfidf_20192102.pkl'))
bigrammer.save(os.path.join(model_path, 'gtr_discipline_bigrammer.pkl'))
nlp.vocab.to_disk(os.path.join(model_path, 'gtr_discipline_vocab'))
###Output
_____no_output_____
###Markdown
Apply to CORDIS
###Code
cordis_projects_df = pd.read_csv(os.path.join(inter_data_path, 'fp7_h2020_projects.csv'))
cordis_abstracts = [remove_markup(a) for a in cordis_projects_df['objective'][:25]]
cordis_abstracts = [normalise_digits(a) for a in cordis_abstracts]
cordis_abstracts = lemmatize(cordis_abstracts, nlp)
cordis_abstracts = bigram(cordis_abstracts, phraser=bigrammer)
cordis_abstracts = list(stringify_docs(cordis_abstracts))
for abstract, pred in zip(cordis_projects_df['objective'][:25], pipe.predict(cordis_abstracts)):
print(pred)
print(abstract)
print('\n==============')
cordis_tfidf_vecs = tfidf.transform(cordis_abstracts)
cordis_subject_probs = rf_random.best_estimator_.predict_proba(cordis_tfidf_vecs)
cordis_subjects = rf_random.best_estimator_.predict(cordis_tfidf_vecs)
subject_probs = np.zeros((len(cordis_projects_df), 8))
for i in range(8):
subject_probs[:, i] = cordis_subject_probs[i][:, 0]
n = 101
cordis_projects_df['objective'][n]
pd.DataFrame(cordis_subjects, columns=mlb.classes_).sum()
###Output
_____no_output_____
###Markdown
Alternative Feature Selection
###Code
feature_terms = []
indices = np.array(range(0, X_train.shape[1]))
for discipline in y_train.columns:
features_chi2 = chi2(X_train, y_train[discipline])[0]
threshold = np.percentile(features_chi2[~pd.isnull(features_chi2)], 90)
discipline_indices = indices[features_chi2 > threshold]
feature_terms.extend(np.array(tfidf.get_feature_names())[discipline_indices])
tfidf_stop_words = set(tfidf.get_feature_names()).difference(set(feature_terms))
tfidf = TfidfVectorizer(
# max_df=0.5,
min_df=5,
sublinear_tf=True,
norm='l2',
stop_words=tfidf_stop_words
)
tfidf_vecs_filt = tfidf.fit_transform(abstracts_str)
X_train, X_test, y_train, y_test = train_test_split(tfidf_vecs_filt, target_binarized, train_size=0.9, test_size=0.1)
###Output
_____no_output_____ |
course-2/week-1/peer_review_linreg_height_weight.ipynb | ###Markdown
Linear regression and the main Python libraries for data analysis and scientific computing This assignment is devoted to linear regression. Using the prediction of a person's height from their weight as an example, you will see the mathematics behind it and, along the way, get to know the main Python libraries needed for the rest of the course. **Materials**- The lectures of this course on linear models and gradient descent- [Documentation](http://docs.scipy.org/doc/) for the NumPy and SciPy libraries- [Documentation](http://matplotlib.org/) for the Matplotlib library - [Documentation](http://pandas.pydata.org/pandas-docs/stable/tutorials.html) for the Pandas library- [Pandas Cheat Sheet](http://www.analyticsvidhya.com/blog/2015/07/11-steps-perform-data-analysis-pandas-python/)- [Documentation](http://stanford.edu/~mwaskom/software/seaborn/) for the Seaborn library Assignment 1. Exploratory data analysis with Pandas In this assignment we will use the [SOCR](http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_Dinov_020108_HeightsWeights) data on the height and weight of 25 thousand teenagers. **[1].** If you do not have the Seaborn library installed, run *conda install seaborn* in a terminal. (Seaborn is not included in the Anaconda distribution, but this library provides convenient high-level functionality for data visualization.)
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Read the height and weight data (*weights_heights.csv*, attached to the assignment) into a Pandas DataFrame object:
###Code
data = pd.read_csv('weights_heights.csv', index_col='Index')
###Output
_____no_output_____
###Markdown
Most often, the first thing to do after reading the data is to look at the first few records. This helps to catch data reading errors (for example, when instead of 10 columns you get a single one whose name contains 9 semicolons). It also lets you get to know the data, at the very least look at the features and their nature (quantitative, categorical, etc.). After that it is worth plotting histograms of the feature distributions: this again helps to understand the nature of a feature (whether its distribution is power-law, normal, or something else). A histogram also helps to find values that are very unlike the others, i.e. "outliers" in the data. Histograms are conveniently plotted with the *plot* method of a Pandas DataFrame with the argument *kind='hist'*.**Example.** Let's plot the histogram of the height distribution of the teenagers in the *data* sample. We use the *plot* method of the DataFrame *data* with the argument *y='Height'* (the feature whose distribution we are plotting)
###Code
data.plot(y='Height', kind='hist',
color='red', title='Height (inch.) distribution')
###Output
_____no_output_____
###Markdown
Arguments:- *y='Height'* is the feature whose distribution we are plotting- *kind='hist'* means that a histogram is plotted- *color='red'* is the color **[2]**. Look at the first 5 records using the *head* method of the Pandas DataFrame. Draw a histogram of the weight distribution using the *plot* method of the Pandas DataFrame. Make the histogram green and give the figure a title.
###Code
data.head()
data.Weight.plot(kind = 'hist', color = 'green', title = 'Weight (pounds) distribution')
###Output
_____no_output_____
###Markdown
One effective technique of exploratory data analysis is plotting the pairwise dependencies of the features. This creates $m \times m$ plots (*m* is the number of features), where the diagonal shows histograms of the feature distributions and the off-diagonal cells show scatter plots for pairs of features. This can be done with the $scatter\_matrix$ method of a Pandas DataFrame or the *pairplot* method of the Seaborn library. To illustrate this method it is more interesting to add a third feature. Let's create the *Body Mass Index* feature ([BMI](https://en.wikipedia.org/wiki/Body_mass_index)). To do so we use the convenient combination of the *apply* method of a Pandas DataFrame and Python lambda functions.
###Code
def make_bmi(height_inch, weight_pound):
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
return (weight_pound / KILO_TO_POUND) / \
(height_inch / METER_TO_INCH) ** 2
data['BMI'] = data.apply(lambda row: make_bmi(row['Height'],
row['Weight']), axis=1)
###Output
_____no_output_____
###Markdown
**[3].** Build a figure displaying the pairwise relationships between the features 'Height', 'Weight' and 'BMI'. Use the *pairplot* method of the Seaborn library.
###Code
sns.pairplot(data)
###Output
_____no_output_____
###Markdown
Often in exploratory data analysis one has to study how a quantitative feature depends on a categorical one (say, salary on an employee's gender). For this, "box-and-whisker" plots - the boxplots of the Seaborn library - are helpful. A box plot is a compact way to show the statistics of a real-valued feature (mean and quartiles) across the values of a categorical feature. It also helps to track "outliers" - observations whose value of the real-valued feature differs strongly from the others. **[4]**. Create a new feature *weight_category* in the DataFrame *data* that takes 3 values: 1 if the weight is below 120 pounds (~54 kg), 3 if the weight is greater than or equal to 150 pounds (~68 kg), and 2 otherwise. Build a box plot showing how height depends on the weight category. Use the *boxplot* method of the Seaborn library and the *apply* method of the Pandas DataFrame. Label the *y* axis «Рост» (height) and the *x* axis «Весовая категория» (weight category).
###Code
def weight_category(weight):
weight_category = None
if weight < 120:
weight_category = 1
elif weight >= 150:
weight_category = 3
else:
weight_category = 2
return weight_category
data['weight_category'] = data['Weight'].apply(weight_category)
ax = sns.boxplot(data = data, x = 'weight_category', y = 'Height')
ax.set(ylabel = u'Рост', xlabel = u'Весовая категория')
###Output
_____no_output_____
###Markdown
**[5].** Build a scatter plot of height versus weight using the *plot* method of the Pandas DataFrame with the argument *kind='scatter'*. Give the figure a title.
###Code
ax = data.plot(kind = 'scatter', x = 'Weight', y = 'Height', title = u'Зависимость роста от веса')
ax.set(xlabel = u'Вес', ylabel = u'Рост')
###Output
_____no_output_____
###Markdown
Task 2. Minimizing the squared error In the simplest setting, the problem of predicting the value of a real-valued feature from other features (the regression problem) is solved by minimizing a quadratic error function. **[6].** Write a function that, for two parameters $w_0$ and $w_1$, computes the squared error of approximating the dependence of height $y$ on weight $x$ by the straight line $y = w_0 + w_1 * x$:$$error(w_0, w_1) = \sum_{i=1}^n {(y_i - (w_0 + w_1 * x_i))}^2 $$Here $n$ is the number of observations in the dataset, and $y_i$ and $x_i$ are the height and weight of the $i$-th person in the dataset.
###Code
def y_app(w, x):
return w[0] + w[1]*x
def error(w):
return sum(map(lambda x, y: (y - y_app(w, x)) ** 2, data.Weight, data.Height))
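# An equivalent vectorized form of the same error (a sketch for scalar w0, w1;
# NumPy broadcasting instead of map/sum, noticeably faster on 25,000 rows):
def error_vectorized(w):
    return np.sum((data.Height.values - (w[0] + w[1] * data.Weight.values)) ** 2)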
###Output
_____no_output_____
###Markdown
So we are solving the following problem: how do we draw a straight line through the cloud of points corresponding to the observations in our dataset, in the space of the features "Height" and "Weight", so as to minimize the functional from item 6? To begin with, let's draw a couple of lines and make sure they convey the height-weight relationship poorly.**[7].** On the plot from item 5 of Task 1, draw two straight lines corresponding to the parameter values ($w_0, w_1) = (60, 0.05)$ and ($w_0, w_1) = (50, 0.16)$. Use the *plot* method from *matplotlib.pyplot* and the *linspace* method of the NumPy library. Label the axes and the plot.
###Code
ax_x = np.linspace(data['Weight'].min(), data['Weight'].max())
y1 = y_app([60, 0.05], ax_x)
y2 = y_app([50, 0.16], ax_x)
ax = data.plot(kind = 'scatter', x = 'Weight', y = 'Height', title = u'Зависимость роста от веса')
ax.set(xlabel = u'Вес', ylabel = u'Рост')
plt.plot(ax_x, y1, color='red')
plt.plot(ax_x, y2, color='yellow')
###Output
_____no_output_____
###Markdown
Minimizing a quadratic error function is a relatively simple problem, since the function is convex. Many optimization methods exist for such a problem. Let's look at how the error function depends on one parameter (the slope of the line) when the other parameter (the intercept) is fixed.**[8].** Plot the error function computed in item 6 as a function of the parameter $w_1$ with $w_0$ = 50. Label the axes and the plot.
###Code
w1=np.linspace(-5, 5)
w_error = list(error([50, i]) for i in w1)
plt.ylabel('error')
plt.xlabel('w1')
plt.title(u'Зависимость ошибки от $w_1$ при $w_0=50$')
plt.plot(w1, w_error)
###Output
_____no_output_____
###Markdown
Now let's use an optimization method to find the "optimal" slope of the line approximating the height-weight dependence, with the coefficient fixed at $w_0 = 50$.**[9].** Using the *minimize_scalar* method from *scipy.optimize*, find the minimum of the function defined in item 6 for values of the parameter $w_1$ in the range [-5,5]. On the plot from item 5 of Task 1, draw the straight line corresponding to the parameter values ($w_0$, $w_1$) = (50, $w_1\_opt$), where $w_1\_opt$ is the optimal value of $w_1$ found in item 8.
###Code
import scipy.optimize as opt
w1_opt = opt.minimize_scalar(lambda x: error([50, x]), bounds=(-5,5), method='bounded')
y_opt = y_app([50, w1_opt.x], ax_x)
ax = data.plot(kind = 'scatter', x = 'Weight', y = 'Height', title = u'Зависимость роста от веса')
ax.set(xlabel = u'Вес', ylabel = u'Рост')
plt.plot(ax_x, y_opt, color = 'red')
###Output
_____no_output_____
###Markdown
When analyzing multidimensional data, one often wants to get an intuitive picture of the nature of the data through visualization. Alas, with more than 3 features such pictures cannot be drawn. In practice, to visualize data in 2D and 3D, 2 or, respectively, 3 principal components are extracted from the data (we will see exactly how this is done later in the course) and the data are displayed in the plane or in space. Let's see how to draw 3D pictures in Python, using as an example the plot of the function $z(x,y) = sin(\sqrt{x^2+y^2})$ for values of $x$ and $y$ in the interval [-5,5] with step 0.25.
###Code
from mpl_toolkits.mplot3d import Axes3D
###Output
_____no_output_____
###Markdown
We create objects of type matplotlib.figure.Figure (the figure) and matplotlib.axes._subplots.Axes3DSubplot (the axes).
###Code
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
# Create NumPy arrays with the point coordinates along the X and Y axes.
# Use the meshgrid method, which builds coordinate matrices
# from the coordinate vectors. Define the desired function Z(x, y).
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = np.sin(np.sqrt(X**2 + Y**2))
# Finally, use the *plot_surface* method of the
# Axes3DSubplot object. Also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
###Output
_____no_output_____
###Markdown
**[10].** Build a 3D plot of the error function computed in item 6 as a function of the parameters $w_0$ and $w_1$. Label the $x$ axis "Intercept", the $y$ axis "Slope", and the $z$ axis "Error".
###Code
fig = plt.figure()
fig.set_size_inches(13.5, 7.5)
ax = fig.gca(projection='3d') # get current axis
W0 = np.arange(-100, 101, 1)
W1 = np.arange(-5, 5, 0.25)
W0, W1 = np.meshgrid(W0, W1)
ERR = error([W0, W1])
surf = ax.plot_surface(W0, W1, ERR)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Error')
plt.show()
###Output
_____no_output_____
###Markdown
**[11].** Using the *minimize* method from scipy.optimize, find the minimum of the function defined in item 6 for values of the parameter $w_0$ in the range [-100,100] and $w_1$ in the range [-5, 5]. The starting point is ($w_0$, $w_1$) = (0, 0). Use the L-BFGS-B optimization method (the method argument of minimize). On the plot from item 5 of Task 1, draw the straight line corresponding to the optimal values of the parameters $w_0$ and $w_1$ that you found. Label the axes and the plot.
###Code
w0_opt, w1_opt = opt.minimize(error, [0, 0], bounds = [(-100, 100), (-5, 5)], method='L-BFGS-B').x
y_min = y_app([w0_opt, w1_opt], ax_x)
ax = data.plot(kind = 'scatter', x = 'Weight', y = 'Height', title = u'Зависимость роста от веса')
ax.set(xlabel = u'Вес', ylabel = u'Рост')
plt.plot(ax_x, y_min, color = 'red')
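# Optional sanity check (a sketch): ordinary least squares has a closed-form
# solution, so np.polyfit should recover nearly the same line as L-BFGS-B.
w1_ls, w0_ls = np.polyfit(data.Weight.values, data.Height.values, deg=1)
print('L-BFGS-B: w0={:.3f}, w1={:.3f}; polyfit: w0={:.3f}, w1={:.3f}'.format(w0_opt, w1_opt, w0_ls, w1_ls))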
###Output
_____no_output_____ |
Jupyter notebook/Practice 2 - How to build a recommender.ipynb | ###Markdown
Recommender Systems 2018/19 Practice 2 - Non personalized recommenders We will use the Movielens 10 million dataset. We download it and uncompress the file we need
###Code
from urllib.request import urlretrieve
import zipfile
urlretrieve ("http://files.grouplens.org/datasets/movielens/ml-10m.zip", "data/Movielens_10M/movielens_10m.zip")
dataFile = zipfile.ZipFile("data/Movielens_10M/movielens_10m.zip")
URM_path = dataFile.extract("ml-10M100K/ratings.dat", path = "data/Movielens_10M")
URM_file = open(URM_path, 'r')
type(URM_file)
###Output
_____no_output_____
###Markdown
Let's take a look at the data
###Code
for _ in range(10):
print(URM_file.readline())
# Start from beginning of the file
URM_file.seek(0)
numberInteractions = 0
for _ in URM_file:
numberInteractions += 1
print ("The number of interactions is {}".format(numberInteractions))
###Output
The number of interactions is 10000054
###Markdown
We split each row to separate user, item, rating and timestamp. We do that with a custom function creating a tuple for each interaction
###Code
def rowSplit (rowString):
split = rowString.split("::")
split[3] = split[3].replace("\n","")
split[0] = int(split[0])
split[1] = int(split[1])
split[2] = float(split[2])
split[3] = int(split[3])
result = tuple(split)
return result
URM_file.seek(0)
URM_tuples = []
for line in URM_file:
URM_tuples.append(rowSplit (line))
URM_tuples[0:10]
###Output
_____no_output_____
###Markdown
We can easily separate the four columns in different independent lists
###Code
userList, itemList, ratingList, timestampList = zip(*URM_tuples)
userList = list(userList)
itemList = list(itemList)
ratingList = list(ratingList)
timestampList = list(timestampList)
userList[0:10]
itemList[0:10]
ratingList[0:10]
timestampList[0:10]
###Output
_____no_output_____
###Markdown
Now we can display some statistics
###Code
userList_unique = list(set(userList))
itemList_unique = list(set(itemList))
numUsers = len(userList_unique)
numItems = len(itemList_unique)
print ("Number of items\t {}, Number of users\t {}".format(numItems, numUsers))
print ("Max ID items\t {}, Max Id users\t {}\n".format(max(itemList_unique), max(userList_unique)))
print ("Average interactions per user {:.2f}".format(numberInteractions/numUsers))
print ("Average interactions per item {:.2f}\n".format(numberInteractions/numItems))
print ("Sparsity {:.2f} %".format((1-float(numberInteractions)/(numItems*numUsers))*100))
###Output
Number of items 10677, Number of users 69878
Max ID items 65133, Max Id users 71567
Average interactions per user 143.11
Average interactions per item 936.60
Sparsity 98.66 %
###Markdown
Rating distribution in time
###Code
import matplotlib.pyplot as pyplot
# Clone the list to avoid changing the ordering of the original data
timestamp_sorted = list(timestampList)
timestamp_sorted.sort()
pyplot.plot(timestamp_sorted, 'ro')
pyplot.ylabel('Timestamp ')
pyplot.xlabel('Item Index')
pyplot.show()
###Output
_____no_output_____
###Markdown
To store the data we use a sparse matrix. We build it as a COO matrix and then change its format. The COO constructor expects (data, (row, column))
###Code
import scipy.sparse as sps
URM_all = sps.coo_matrix((ratingList, (userList, itemList)))
URM_all
URM_all.tocsr()
###Output
_____no_output_____
###Markdown
Item popularity
###Code
import numpy as np
itemPopularity = (URM_all>0).sum(axis=0)
itemPopularity
itemPopularity = np.array(itemPopularity).squeeze()
itemPopularity
itemPopularity = np.sort(itemPopularity)
itemPopularity
pyplot.plot(itemPopularity, 'ro')
pyplot.ylabel('Num Interactions ')
pyplot.xlabel('Item Index')
pyplot.show()
tenPercent = int(numItems/10)
print("Average per-item interactions over the whole dataset {:.2f}".
format(itemPopularity.mean()))
print("Average per-item interactions for the top 10% popular items {:.2f}".
format(itemPopularity[-tenPercent].mean()))
print("Average per-item interactions for the least 10% popular items {:.2f}".
format(itemPopularity[:tenPercent].mean()))
print("Average per-item interactions for the median 10% popular items {:.2f}".
format(itemPopularity[int(numItems*0.45):int(numItems*0.55)].mean()))
print("Number of items with zero interactions {}".
format(np.sum(itemPopularity==0)))
itemPopularityNonzero = itemPopularity[itemPopularity>0]
tenPercent = int(len(itemPopularityNonzero)/10)
print("Average per-item interactions over the whole dataset {:.2f}".
format(itemPopularityNonzero.mean()))
print("Average per-item interactions for the top 10% popular items {:.2f}".
format(itemPopularityNonzero[-tenPercent].mean()))
print("Average per-item interactions for the least 10% popular items {:.2f}".
format(itemPopularityNonzero[:tenPercent].mean()))
print("Average per-item interactions for the median 10% popular items {:.2f}".
format(itemPopularityNonzero[int(numItems*0.45):int(numItems*0.55)].mean()))
pyplot.plot(itemPopularityNonzero, 'ro')
pyplot.ylabel('Num Interactions ')
pyplot.xlabel('Item Index')
pyplot.show()
###Output
_____no_output_____
###Markdown
User activity
###Code
userActivity = (URM_all>0).sum(axis=1)
userActivity = np.array(userActivity).squeeze()
userActivity = np.sort(userActivity)
pyplot.plot(userActivity, 'ro')
pyplot.ylabel('Num Interactions ')
pyplot.xlabel('User Index')
pyplot.show()
###Output
_____no_output_____
###Markdown
Now that we have the data, we can build our first recommender. We need two things:* a 'fit' function to train our model* a 'recommend' function that uses our model to recommend Let's start with a random recommender In a random recommender we don't have anything to learn from the data
###Code
class RandomRecommender(object):
def fit(self, URM_train):
self.numItems = URM_train.shape[1]  # items are the columns of the URM (rows are users)
def recommend(self, user_id, at=5):
recommended_items = np.random.choice(self.numItems, at)
return recommended_items
###Output
_____no_output_____
###Markdown
In order to evaluate our recommender we have to define:* A splitting of the data into URM_train and URM_test* An evaluation metric* A function computing the evaluation for each user The splitting of the data is very important to ensure your algorithm is evaluated in a realistic scenario by using test data it has never seen.
###Code
train_test_split = 0.80
numInteractions = URM_all.nnz
train_mask = np.random.choice([True,False], numInteractions, p=[train_test_split, 1-train_test_split])
train_mask
userList = np.array(userList)
itemList = np.array(itemList)
ratingList = np.array(ratingList)
URM_train = sps.coo_matrix((ratingList[train_mask], (userList[train_mask], itemList[train_mask])))
URM_train = URM_train.tocsr()
URM_train
test_mask = np.logical_not(train_mask)
URM_test = sps.coo_matrix((ratingList[test_mask], (userList[test_mask], itemList[test_mask])))
URM_test = URM_test.tocsr()
URM_test
###Output
_____no_output_____
###Markdown
Evaluation metric
###Code
user_id = userList_unique[1]
user_id
randomRecommender = RandomRecommender()
randomRecommender.fit(URM_train)
recommended_items = randomRecommender.recommend(user_id, at=5)
recommended_items
###Output
_____no_output_____
###Markdown
We call items in the test set 'relevant'
###Code
relevant_items = URM_test[user_id].indices
relevant_items
is_relevant = np.in1d(recommended_items, relevant_items, assume_unique=True)
is_relevant
###Output
_____no_output_____
###Markdown
Precision: how many of the recommended items are relevant
###Code
def precision(recommended_items, relevant_items):
is_relevant = np.in1d(recommended_items, relevant_items, assume_unique=True)
precision_score = np.sum(is_relevant, dtype=np.float32) / len(is_relevant)
return precision_score
###Output
_____no_output_____
###Markdown
Recall: how many of the relevant items I was able to recommend
###Code
def recall(recommended_items, relevant_items):
is_relevant = np.in1d(recommended_items, relevant_items, assume_unique=True)
recall_score = np.sum(is_relevant, dtype=np.float32) / relevant_items.shape[0]
return recall_score
###Output
_____no_output_____
###Markdown
Mean Average Precision
###Code
def MAP(recommended_items, relevant_items):
is_relevant = np.in1d(recommended_items, relevant_items, assume_unique=True)
# Cumulative sum: precision at 1, at 2, at 3 ...
p_at_k = is_relevant * np.cumsum(is_relevant, dtype=np.float32) / (1 + np.arange(is_relevant.shape[0]))
map_score = np.sum(p_at_k) / np.min([relevant_items.shape[0], is_relevant.shape[0]])
return map_score
###Output
_____no_output_____
###Markdown
And let's test it!
###Code
# We pass the recommender object as a parameter
def evaluate_algorithm(URM_test, recommender_object, at=5):
cumulative_precision = 0.0
cumulative_recall = 0.0
cumulative_MAP = 0.0
num_eval = 0
for user_id in userList_unique:
relevant_items = URM_test[user_id].indices
if len(relevant_items)>0:
recommended_items = recommender_object.recommend(user_id, at=at)
num_eval+=1
cumulative_precision += precision(recommended_items, relevant_items)
cumulative_recall += recall(recommended_items, relevant_items)
cumulative_MAP += MAP(recommended_items, relevant_items)
cumulative_precision /= num_eval
cumulative_recall /= num_eval
cumulative_MAP /= num_eval
print("Recommender performance is: Precision = {:.4f}, Recall = {:.4f}, MAP = {:.4f}".format(
cumulative_precision, cumulative_recall, cumulative_MAP))
evaluate_algorithm(URM_test, randomRecommender)
###Output
Recommender performance is: Precision = 0.0004, Recall = 0.0001, MAP = 0.0002
###Markdown
So the code works. The performance however... Top Popular recommender We recommend to all users the most popular items, that is those with the highest number of interactions In this case our model is the item popularity
###Code
class TopPopRecommender(object):
def fit(self, URM_train):
itemPopularity = (URM_train>0).sum(axis=0)
itemPopularity = np.array(itemPopularity).squeeze()
# We are not interested in sorting the popularity value,
# but to order the items according to it
self.popularItems = np.argsort(itemPopularity)
self.popularItems = np.flip(self.popularItems, axis = 0)
def recommend(self, user_id, at=5):
recommended_items = self.popularItems[0:at]
return recommended_items
###Output
_____no_output_____
###Markdown
Now train and test our model
###Code
topPopRecommender = TopPopRecommender()
topPopRecommender.fit(URM_train)
for user_id in userList_unique[0:10]:
print(topPopRecommender.recommend(user_id, at=5))
evaluate_algorithm(URM_test, topPopRecommender, at=5)
###Output
Recommender performance is: Precision = 0.0962, Recall = 0.0308, MAP = 0.0531
###Markdown
That's better, but we can improve Hint, remove items already seen by the user
###Code
class TopPopRecommender(object):
def fit(self, URM_train):
self.URM_train = URM_train
itemPopularity = (URM_train>0).sum(axis=0)
itemPopularity = np.array(itemPopularity).squeeze()
# We are not interested in sorting the popularity value,
# but to order the items according to it
self.popularItems = np.argsort(itemPopularity)
self.popularItems = np.flip(self.popularItems, axis = 0)
def recommend(self, user_id, at=5, remove_seen=True):
if remove_seen:
unseen_items_mask = np.in1d(self.popularItems, self.URM_train[user_id].indices,
assume_unique=True, invert = True)
unseen_items = self.popularItems[unseen_items_mask]
recommended_items = unseen_items[0:at]
else:
recommended_items = self.popularItems[0:at]
return recommended_items
topPopRecommender_removeSeen = TopPopRecommender()
topPopRecommender_removeSeen.fit(URM_train)
for user_id in userList_unique[0:10]:
print(topPopRecommender_removeSeen.recommend(user_id, at=5))
evaluate_algorithm(URM_test, topPopRecommender_removeSeen)
###Output
Recommender performance is: Precision = 0.1988, Recall = 0.0536, MAP = 0.1476
###Markdown
Simple but effective. Always remove seen items if your purpose is to recommend "new" ones Global effects recommender We recommend to all users the highest rated items First we compute the average of all ratings, or global average
###Code
globalAverage = np.mean(URM_train.data)
print("The global average is {:.2f}".format(globalAverage))
###Output
The global average is 3.51
###Markdown
We subtract the bias to all ratings
###Code
URM_train_unbiased = URM_train.copy()
URM_train_unbiased.data -= globalAverage
print(URM_train_unbiased.data[0:10])
###Output
[1.48775009 1.48775009 1.48775009 1.48775009 1.48775009 1.48775009
1.48775009 1.48775009 1.48775009 1.48775009]
###Markdown
Then we compute the average rating for each item, or itemBias
###Code
item_mean_rating = URM_train_unbiased.mean(axis=0)
item_mean_rating
item_mean_rating = np.array(item_mean_rating).squeeze()
item_mean_rating = np.sort(item_mean_rating[item_mean_rating!=0])
pyplot.plot(item_mean_rating, 'ro')
pyplot.ylabel('Item Bias')
pyplot.xlabel('Item Index')
pyplot.show()
###Output
_____no_output_____
###Markdown
And the average rating for each user, or userBias
###Code
user_mean_rating = URM_train_unbiased.mean(axis=1)
user_mean_rating
user_mean_rating = np.array(user_mean_rating).squeeze()
user_mean_rating = np.sort(user_mean_rating[user_mean_rating!=0.0])
pyplot.plot(user_mean_rating, 'ro')
pyplot.ylabel('User Bias')
pyplot.xlabel('User Index')
pyplot.show()
###Output
_____no_output_____
###Markdown
Now we can sort the items by their itemBias and use the same recommendation principle as in TopPop
###Code
class GlobalEffectsRecommender(object):
def fit(self, URM_train):
self.URM_train = URM_train
globalAverage = np.mean(URM_train.data)
URM_train_unbiased = URM_train.copy()
URM_train_unbiased.data -= globalAverage
item_mean_rating = URM_train_unbiased.mean(axis=0)
item_mean_rating = np.array(item_mean_rating).squeeze()
self.bestRatedItems = np.argsort(item_mean_rating)
self.bestRatedItems = np.flip(self.bestRatedItems, axis = 0)
def recommend(self, user_id, at=5, remove_seen=True):
if remove_seen:
unseen_items_mask = np.in1d(self.bestRatedItems, self.URM_train[user_id].indices,
assume_unique=True, invert = True)
unseen_items = self.bestRatedItems[unseen_items_mask]
recommended_items = unseen_items[0:at]
else:
recommended_items = self.bestRatedItems[0:at]
return recommended_items
globalEffectsRecommender = GlobalEffectsRecommender()
globalEffectsRecommender.fit(URM_train)
evaluate_algorithm(URM_test, globalEffectsRecommender)
###Output
Recommender performance is: Precision = 0.1683, Recall = 0.0387, MAP = 0.1223
###Markdown
Now let's try to combine User bias an item bias
###Code
class GlobalEffectsRecommender(object):
def fit(self, URM_train):
self.URM_train = URM_train
globalAverage = np.mean(URM_train.data)
URM_train_unbiased = URM_train.copy()
URM_train_unbiased.data -= globalAverage
# User Bias
user_mean_rating = URM_train_unbiased.mean(axis=1)
user_mean_rating = np.array(user_mean_rating).squeeze()
# In order to apply the user bias we have to change the rating value
# in the URM_train_unbiased inner data structures
# If we were to write:
# URM_train_unbiased[user_id].data -= user_mean_rating[user_id]
# we would change the value of a new matrix with no effect on the original data structure
for user_id in range(len(user_mean_rating)):
start_position = URM_train_unbiased.indptr[user_id]
end_position = URM_train_unbiased.indptr[user_id+1]
URM_train_unbiased.data[start_position:end_position] -= user_mean_rating[user_id]
# Item Bias
item_mean_rating = URM_train_unbiased.mean(axis=0)
item_mean_rating = np.array(item_mean_rating).squeeze()
self.bestRatedItems = np.argsort(item_mean_rating)
self.bestRatedItems = np.flip(self.bestRatedItems, axis = 0)
def recommend(self, user_id, at=5, remove_seen=True):
if remove_seen:
unseen_items_mask = np.in1d(self.bestRatedItems, self.URM_train[user_id].indices,
assume_unique=True, invert = True)
unseen_items = self.bestRatedItems[unseen_items_mask]
recommended_items = unseen_items[0:at]
else:
recommended_items = self.bestRatedItems[0:at]
return recommended_items
globalEffectsRecommender = GlobalEffectsRecommender()
globalEffectsRecommender.fit(URM_train)
evaluate_algorithm(URM_test, globalEffectsRecommender)
###Output
Recommender performance is: Precision = 0.1683, Recall = 0.0387, MAP = 0.1223
###Markdown
The result is identical. User bias is essential for rating prediction but not relevant for top-K recommendations. Question: why is GlobalEffects performing worse than TopPop even though we are taking more information about the interactions into account? ......... The test data contains a lot of low-rating interactions. We are testing against those as well, but GlobalEffects penalizes interactions with low ratings
###Code
URM_test.data[URM_test.data<=2]
###Output
_____no_output_____
###Markdown
In reality we want to recommend items rated in a positive way, so let's build a new Test set with positive interactions only
###Code
URM_test_positiveOnly = URM_test.copy()
URM_test_positiveOnly.data[URM_test.data<=2] = 0
URM_test_positiveOnly.eliminate_zeros()
URM_test_positiveOnly
print("Deleted {} negative interactions".format(URM_test.nnz - URM_test_positiveOnly.nnz))
###Output
Deleted 277317 negative interactions
###Markdown
Run the evaluation again for both
###Code
evaluate_algorithm(URM_test_positiveOnly, topPopRecommender_removeSeen)
evaluate_algorithm(URM_test_positiveOnly, globalEffectsRecommender)
###Output
Recommender performance is: Precision = 0.1627, Recall = 0.0419, MAP = 0.1184
|
Stocks/Stock King 2.1-Copy1.ipynb | ###Markdown
Days of prediction:
###Code
# How many periods looking back to learn
n_per_in = 90
# How many periods to predict
n_per_out = 30
# Features
n_features = df.shape[1]
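# NOTE (assumption): `split_sequence` is defined earlier in this notebook.
# The guarded sketch below only illustrates what such a helper typically does
# (slide a window of n_per_in rows as input, take the next n_per_out closing
# prices as the target); it is skipped if the real helper already exists.
if 'split_sequence' not in globals():
    def split_sequence(seq, n_per_in, n_per_out):
        X, y = [], []
        for i in range(len(seq)):
            end_in = i + n_per_in
            end_out = end_in + n_per_out
            if end_out > len(seq):
                break
            X.append(seq[i:end_in, :])        # all features as input
            y.append(seq[end_in:end_out, 0])  # assumed target: first column (Close)
        return np.array(X), np.array(y)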
# Splitting the data into appropriate sequences
X, y = split_sequence(df.to_numpy(), n_per_in, n_per_out)
###Output
_____no_output_____
###Markdown
Modding the network:
###Code
## Creating the NN
# Instatiating the model
model = Sequential()
# Activation
activ = "tanh"
# Input layer
model.add(LSTM(90,
activation=activ,
return_sequences=True,
input_shape=(n_per_in, n_features)))
# Hidden layers
layer_maker(n_layers=1,
n_nodes=50,
activation=activ)
# Final Hidden layer
model.add(LSTM(60, activation=activ))
# Output layer
model.add(Dense(n_per_out))
# Model summary
model.summary()
# Compiling the data with selected specifications
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
## Fitting and Training
res = model.fit(X, y, epochs=200, batch_size=128, validation_split=0.1)
visualize_training_results(res)
# Transforming the actual values to their original price
actual = pd.DataFrame(close_scaler.inverse_transform(df[["Close"]]),
index=df.index,
columns=[df.columns[0]])
# Getting a DF of the predicted values to validate against
predictions = validater(n_per_in, n_per_out)
# Printing the RMSE
print("RMSE:", val_rmse(actual, predictions))
# Plotting
pp.figure(figsize=(16,6))
# Plotting those predictions
pp.plot(predictions, label='Predicted')
# Plotting the actual values
pp.plot(actual, label='Actual')
pp.title(f"Predicted vs Actual Closing Prices")
pp.ylabel("Price")
pp.legend()
# pp.xlim('2015-05-01', '2020-05-01')
pp.show()
# Predicting off of the most recent days from the original DF
yhat = model.predict(np.array(df.tail(n_per_in)).reshape(1, n_per_in, n_features))
# Transforming the predicted values back to their original format
yhat = close_scaler.inverse_transform(yhat)[0]
# Creating a DF of the predicted prices
preds = pd.DataFrame(yhat,
index=pd.date_range(start=df.index[-1]+timedelta(days=1),
periods=len(yhat),
freq="B"),
columns=[df.columns[0]])
# Number of periods back to plot the actual values
pers = n_per_in
# Transforming the actual values to their original price
actual = pd.DataFrame(close_scaler.inverse_transform(df[["Close"]].tail(pers)),
index=df.Close.tail(pers).index,
columns=[df.columns[0]]).append(preds.head(1))
# Printing the predicted prices
print(preds)
# Plotting
pp.figure(figsize=(16,6))
pp.plot(actual, label="Actual Prices")
pp.plot(preds, label="Predicted Prices")
pp.ylabel("Price")
pp.xlabel("Dates")
pp.title(f"Forecasting the next {len(yhat)} days")
pp.legend()
pp.show()
###Output
Close
2020-05-25 4037.141602
2020-05-26 4419.875000
2020-05-27 4144.038086
2020-05-28 4141.793945
2020-05-29 4467.413574
2020-06-01 4275.016113
2020-06-02 4079.013184
2020-06-03 3970.238281
2020-06-04 4183.469238
2020-06-05 3941.726318
2020-06-08 4045.078125
2020-06-09 4131.540039
2020-06-10 4403.567383
2020-06-11 4384.712891
2020-06-12 4689.707031
2020-06-15 4965.892578
2020-06-16 4843.812988
2020-06-17 5389.931152
2020-06-18 5655.073242
2020-06-19 6072.718750
2020-06-22 6277.929199
2020-06-23 6442.253906
2020-06-24 6295.768066
2020-06-25 6322.183594
2020-06-26 6333.503418
2020-06-29 5820.988281
2020-06-30 5685.297852
2020-07-01 5393.449219
2020-07-02 5466.540039
2020-07-03 5372.300781
|
Experiments/toy_stationary_ex.ipynb | ###Markdown
Demonstration of stationary property
###Code
import jax
from jax import random, grad, jit, vmap
from jax.config import config
import jax.numpy as np
from jax.experimental import optimizers, stax
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import os
from tqdm.notebook import tqdm as tqdm
## Random seed
rand_key = random.PRNGKey(10)
signal_length = 256
x_min = -np.pi
x_max = np.pi
x_vals = np.linspace(x_min,x_max,signal_length)[...,None]
x_vals_encoded = np.concatenate([np.sin(x_vals), np.cos(x_vals)], axis=-1)
def gen_y_vals(x_vals, center, width):
return(np.roll(np.exp(-.5 * ((x_vals)/width)**2), int((center/(2*np.pi))*x_vals.shape[0])))
for center in np.linspace(x_min, x_max, 8):
plt.plot(x_vals, gen_y_vals(x_vals, center, .2))
plt.show()
def make_network(num_layers, num_channels):
layers = []
for i in range(num_layers-1):
layers.append(stax.Dense(num_channels))
layers.append(stax.Relu)
layers.append(stax.Dense(1))
return stax.serial(*layers)
init_fn, apply_fn = make_network(4, 256)
model_loss = jit(lambda params, x, y: np.mean((apply_fn(params, x) - y) ** 2))
model_grad_loss = jit(lambda params, x, y: jax.grad(model_loss)(params, x, y))
def train_model(key, lr, iters, data):
opt_init, opt_update, get_params = optimizers.adam(lr)
opt_update = jit(opt_update)
_, params = init_fn(key, (-1, data[0].shape[-1]))
opt_state = opt_init(params)
losses = []
for i in tqdm(range(iters), desc='train iter', leave=False):
opt_state = opt_update(i, model_grad_loss(get_params(opt_state), *data), opt_state)
if i % 50 == 0:
loss = model_loss(get_params(opt_state), *data)
losses.append(loss)
return losses
###Output
_____no_output_____
###Markdown
Ensemble
###Code
ensemble_size = 20
rand_key = random.PRNGKey(10)
ensemble_key = random.split(rand_key, ensemble_size)
lr = 1e-4
iters = 500
no_encoding_losses = []
encoded_losses = []
centers = np.linspace(x_min, x_max, 40)
for center in tqdm(centers, leave=False):
y_vals = gen_y_vals(x_vals, center, .05)
no_encoding_losses.append(vmap(train_model, in_axes=[0,None,None,None])(ensemble_key, lr, iters, (x_vals, y_vals))[-1])
encoded_losses.append(vmap(train_model, in_axes=[0,None,None,None])(ensemble_key, lr, iters, (x_vals_encoded, y_vals))[-1])
no_encoding_losses = np.array(no_encoding_losses)
encoded_losses = np.array(encoded_losses)
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
params = {'legend.fontsize': 20,
'axes.labelsize': 18,
'axes.titlesize': 22,
'xtick.labelsize':16,
'ytick.labelsize':16}
pylab.rcParams.update(params)
matplotlib.rcParams['mathtext.fontset'] = 'cm'
matplotlib.rcParams['mathtext.rm'] = 'serif'
plt.rcParams["font.family"] = "cmr10"
colors_k = np.array([[0.8872, 0.4281, 0.1875],
# [0.8136, 0.6844, 0.0696],
[0.2634, 0.6634, 0.4134],
[0.0943, 0.5937, 0.8793],
[0.3936, 0.2946, 0.6330],
[0.7123, 0.2705, 0.3795]])
fig = plt.figure(constrained_layout=True, figsize=(14,4))
gs = fig.add_gridspec(1,2)
centers_to_highlight = [-np.pi/2,0,np.pi/2]
linewidth = 4
title_offset = -.35
ax = fig.add_subplot(gs[0,0])
for i, center in enumerate(centers_to_highlight):
plt.plot(x_vals, gen_y_vals(x_vals, center, .05), linewidth=linewidth, color=colors_k[i], alpha=.8)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.yticks([0,.25,.5,.75,1], ['$0$','','','','$1$'])
plt.grid(True, which='major', alpha=.3)
ax.set_title('(a) Example target signals', y=title_offset)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], ['$-\pi$','$-\pi/2$','$0$','$\pi/2$','$\pi$'])
ax = fig.add_subplot(gs[0,1])
for i, center in enumerate(centers_to_highlight):
ax.vlines(center, ymin=0, ymax=70, color=colors_k[i], linestyle=':', linewidth=linewidth, alpha=.8)
no_encoding_mean = np.mean(no_encoding_losses, axis=-1)
no_encoding_std = np.std(no_encoding_losses, axis=-1)
plt.semilogy(centers, no_encoding_mean, color=colors_k[-1], linewidth=linewidth, label='No Mapping')
plt.fill_between(centers, no_encoding_mean-no_encoding_std, no_encoding_mean+no_encoding_std, color=colors_k[-1], alpha=.15)
encoded_mean = np.mean(encoded_losses, axis=-1)
encoded_std = np.std(encoded_losses, axis=-1)
plt.semilogy(centers, encoded_mean, color=colors_k[-2], linewidth=linewidth, label='Basic Mapping')
plt.fill_between(centers, encoded_mean-encoded_std, encoded_mean+encoded_std, color=colors_k[-2], alpha=.15)
plt.xlabel('Center of Gaussian')
plt.ylabel('Mean squared error')
plt.ylim([.0001,.02])
plt.grid(True, which='major', alpha=.3)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], ['$-\pi$','$-\pi/2$','$0$','$\pi/2$','$\pi$'])
ax.set_title('(b) Reconstruction accuracy', y=title_offset)
plt.legend(loc='center left', bbox_to_anchor=(1,.5), handlelength=1)
plt.savefig('supp_stationary.pdf', bbox_inches='tight', pad_inches=0)
###Output
_____no_output_____ |
examples/FS_components.ipynb | ###Markdown
FS ETo Components (FS1 and FS2) Example
###Code
import ee
import openet.refetgee
import openet.refetgee.units as units
ee.Initialize()
###Output
_____no_output_____
###Markdown
Load GRIDMET for a single date
###Code
gridmet_coll = ee.ImageCollection('IDAHO_EPSCOR/GRIDMET')\
.filterDate('2019-04-01', '2019-04-02')
gridmet_img = ee.Image(ee.ImageCollection(gridmet_coll.first()))
gridmet_img = ee.Image('IDAHO_EPSCOR/GRIDMET/20190801')
###Output
_____no_output_____
###Markdown
Ancillary data and test point
###Code
test_pnt = ee.Geometry.Point(-119.01, 39.01);
# Project latitude array to match GRIDMET elevation grid exactly
elev_img = ee.Image("projects/climate-engine/gridmet/elevation")
lat_img = ee.Image.pixelLonLat().select('latitude')\
.reproject('EPSG:4326', elev_img.projection().getInfo()['transform'])
# Set the output crs and crsTransform to match the GRIDMET images
crs = gridmet_img.projection().getInfo()['wkt']
geo = gridmet_img.projection().getInfo()['transform']
###Output
_____no_output_____
###Markdown
Precomputed ETo
###Code
eto = gridmet_img.select('eto').reduceRegion(
ee.Reducer.first(), test_pnt, crs=crs, crsTransform=geo)
print(eto.getInfo()['eto'])
###Output
7.099999904632568
###Markdown
Compute FS1 for a single image
###Code
print(openet.refetgee.Daily.gridmet(gridmet_img, elev=elev_img, lat=lat_img).eto_fs1\
.reduceRegion(ee.Reducer.first(), test_pnt, crs=crs, crsTransform=geo)\
.getInfo()['eto_fs1'])
###Output
3.5710874773859875
###Markdown
Compute FS2 for a single image
###Code
print(openet.refetgee.Daily.gridmet(gridmet_img, elev=elev_img, lat=lat_img).eto_fs2\
.reduceRegion(ee.Reducer.first(), test_pnt, crs=crs, crsTransform=geo)\
.getInfo()['eto_fs2'])
###Output
3.5242705617140135
###Markdown
Compute ETo using FS components (FS1 + FS2)
###Code
fs1 = openet.refetgee.Daily.gridmet(gridmet_img, elev=elev_img, lat=lat_img).eto_fs1\
.reduceRegion(ee.Reducer.first(), test_pnt, crs=crs, crsTransform=geo)\
.getInfo()['eto_fs1']
fs2 = openet.refetgee.Daily.gridmet(gridmet_img, elev=elev_img, lat=lat_img).eto_fs2\
.reduceRegion(ee.Reducer.first(), test_pnt, crs=crs, crsTransform=geo)\
.getInfo()['eto_fs2']
eto = fs1+fs2
print(eto)
###Output
7.095358039100001
|
homework 3/.ipynb_checkpoints/whale_analysis-checkpoint.ipynb | ###Markdown
A Whale off the Port(folio) --- In this assignment, you'll get to use what you've learned this week to evaluate the performance among various algorithmic, hedge, and mutual fund portfolios and compare them against the S&P 500 Index.
###Code
# Initial imports
import pandas as pd
import numpy as np
import datetime as dt
from pathlib import Path
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data CleaningIn this section, you will need to read the CSV files into DataFrames and perform any necessary data cleaning steps. After cleaning, combine all DataFrames into a single DataFrame.Files:* `whale_returns.csv`: Contains returns of some famous "whale" investors' portfolios.* `algo_returns.csv`: Contains returns from the in-house trading algorithms from Harold's company.* `sp500_history.csv`: Contains historical closing prices of the S&P 500 Index. Whale ReturnsRead the Whale Portfolio daily returns and clean the data
###Code
# Reading whale returns
# Count nulls
# Drop nulls
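# A minimal hedged sketch of one way to fill this cell in (assumptions: the
# CSV sits next to this notebook and has a "Date" column) -- adjust as needed.
whale_returns = pd.read_csv(Path("whale_returns.csv"), index_col="Date", parse_dates=True)
print(whale_returns.isnull().sum())
whale_returns = whale_returns.dropna()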
###Output
_____no_output_____
###Markdown
Algorithmic Daily ReturnsRead the algorithmic daily returns and clean the data
###Code
# Reading algorithmic returns
# Count nulls
# Drop nulls
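# Same pattern as above (assumption: algo_returns.csv with a "Date" column).
algo_returns = pd.read_csv(Path("algo_returns.csv"), index_col="Date", parse_dates=True)
print(algo_returns.isnull().sum())
algo_returns = algo_returns.dropna()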
###Output
_____no_output_____
###Markdown
S&P 500 ReturnsRead the S&P 500 historic closing prices and create a new daily returns DataFrame from the data.
###Code
# Reading S&P 500 Closing Prices
# Check Data Types
# Fix Data Types
# Calculate Daily Returns
# Drop nulls
# Rename `Close` Column to be specific to this portfolio.
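# Hedged sketch (assumptions: sp500_history.csv has "Date" and "Close" columns,
# with Close stored as strings like "$2933.68").
sp500_history = pd.read_csv(Path("sp500_history.csv"), index_col="Date", parse_dates=True)
print(sp500_history.dtypes)
sp500_history["Close"] = sp500_history["Close"].str.replace("$", "", regex=False).astype("float")
sp500_returns = sp500_history.pct_change().dropna()
sp500_returns = sp500_returns.rename(columns={"Close": "S&P 500"})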
###Output
_____no_output_____
###Markdown
Combine Whale, Algorithmic, and S&P 500 Returns
###Code
# Join Whale Returns, Algorithmic Returns, and the S&P 500 Returns into a single DataFrame with columns for each portfolio's returns.
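# Sketch: align the three frames on their shared dates (assumes the
# whale_returns, algo_returns and sp500_returns frames built above).
combined_returns = pd.concat([whale_returns, algo_returns, sp500_returns], axis="columns", join="inner")
combined_returns.head()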
###Output
_____no_output_____
###Markdown
--- Conduct Quantitative AnalysisIn this section, you will calculate and visualize performance and risk metrics for the portfolios. Performance Anlysis Calculate and Plot the daily returns.
###Code
# Plot daily returns of all portfolios
###Output
_____no_output_____
###Markdown
Calculate and Plot cumulative returns.
###Code
# Calculate cumulative returns of all portfolios
# Plot cumulative returns
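# Sketch: cumulative growth of $1 invested, assuming `combined_returns` from above.
cumulative_returns = (1 + combined_returns).cumprod()
cumulative_returns.plot(figsize=(15, 8), title="Cumulative Returns")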
###Output
_____no_output_____
###Markdown
--- Risk AnalysisDetermine the _risk_ of each portfolio:1. Create a box plot for each portfolio. 2. Calculate the standard deviation for all portfolios 4. Determine which portfolios are riskier than the S&P 500 5. Calculate the Annualized Standard Deviation Create a box plot for each portfolio
###Code
# Box plot to visually show risk
###Output
_____no_output_____
###Markdown
Calculate Standard Deviations
###Code
# Calculate the daily standard deviations of all portfolios
###Output
_____no_output_____
###Markdown
Determine which portfolios are riskier than the S&P 500
###Code
# Calculate the daily standard deviation of S&P 500
# Determine which portfolios are riskier than the S&P 500
###Output
_____no_output_____
###Markdown
Calculate the Annualized Standard Deviation
###Code
# Calculate the annualized standard deviation (252 trading days)
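# Sketch: scale the daily std by sqrt(252) trading days (assumes `combined_returns`).
annualized_std = combined_returns.std() * np.sqrt(252)
annualized_std.sort_values()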
###Output
_____no_output_____
###Markdown
--- Rolling StatisticsRisk changes over time. Analyze the rolling statistics for Risk and Beta. 1. Calculate and plot the rolling standard deviation for the S&P 500 using a 21-day window 2. Calculate the correlation between each stock to determine which portfolios may mimic the S&P 500 3. Choose one portfolio, then calculate and plot the 60-day rolling beta between it and the S&P 500 Calculate and plot rolling `std` for all portfolios with 21-day window
###Code
# Calculate the rolling standard deviation for all portfolios using a 21-day window
# Plot the rolling standard deviation
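# Sketch: 21-day rolling standard deviation (assumes `combined_returns` from above).
combined_returns.rolling(window=21).std().plot(figsize=(15, 8), title="21-Day Rolling Standard Deviation")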
###Output
_____no_output_____
###Markdown
Calculate and plot the correlation
###Code
# Calculate the correlation
# Display the correlation matrix
###Output
_____no_output_____
###Markdown
Calculate and Plot Beta for a chosen portfolio and the S&P 500
###Code
# Calculate covariance of a single portfolio
# Calculate variance of S&P 500
# Computing beta
# Plot beta trend
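# Sketch: rolling 60-day beta = rolling covariance / rolling variance.
# The portfolio column name below is an assumption -- swap in any column
# actually present in `combined_returns`.
portfolio_col = "SOROS FUND MANAGEMENT LLC"
rolling_covariance = combined_returns[portfolio_col].rolling(window=60).cov(combined_returns["S&P 500"])
rolling_variance = combined_returns["S&P 500"].rolling(window=60).var()
rolling_beta = rolling_covariance / rolling_variance
rolling_beta.plot(figsize=(15, 8), title="Rolling 60-Day Beta vs S&P 500")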
###Output
_____no_output_____
###Markdown
Rolling Statistics Challenge: Exponentially Weighted Average An alternative way to calculate a rolling window is to take the exponentially weighted moving average. This is like a moving window average, but it assigns greater importance to more recent observations. Try calculating the [`ewm`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html) with a 21-day half-life.
###Code
# Use `ewm` to calculate the rolling window
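# Sketch: exponentially weighted moving std with a 21-day half-life (assumes `combined_returns`).
combined_returns.ewm(halflife=21).std().plot(figsize=(15, 8), title="EWM Standard Deviation (21-day half-life)")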
###Output
_____no_output_____
###Markdown
--- Sharpe RatiosIn reality, investment managers and their institutional investors look at the ratio of return-to-risk, and not just returns alone. After all, if you could invest in one of two portfolios, and each offered the same 10% return, yet one offered lower risk, you'd take that one, right? Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot
###Code
# Annualized Sharpe Ratios
# Visualize the sharpe ratios as a bar plot
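# Sketch: annualized Sharpe ratio = annualized mean return / annualized std
# (assumes `combined_returns` from above).
sharpe_ratios = (combined_returns.mean() * 252) / (combined_returns.std() * np.sqrt(252))
sharpe_ratios.plot(kind="bar", title="Sharpe Ratios")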
###Output
_____no_output_____
###Markdown
Determine whether the algorithmic strategies outperform both the market (S&P 500) and the whale portfolios. Write your answer here! --- Create Custom PortfolioIn this section, you will build your own portfolio of stocks, calculate the returns, and compare the results to the Whale Portfolios and the S&P 500. 1. Choose 3-5 custom stocks with at least 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.2. Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock3. Join your portfolio returns to the DataFrame that contains all of the portfolio returns4. Re-run the performance and risk analysis with your portfolio to see how it compares to the others5. Include correlation analysis to determine which stocks (if any) are correlated Choose 3-5 custom stocks with at least 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.For this demo solution, we fetch data from three companies listed in the S&P 500 index.* `GOOG` - [Google, LLC](https://en.wikipedia.org/wiki/Google)* `AAPL` - [Apple Inc.](https://en.wikipedia.org/wiki/Apple_Inc.)* `COST` - [Costco Wholesale Corporation](https://en.wikipedia.org/wiki/Costco)
###Code
# Reading data from 1st stock
# Reading data from 2nd stock
# Reading data from 3rd stock
# Combine all stocks in a single DataFrame
# Reset Date index
# Reorganize portfolio data by having a column per symbol
# Calculate daily returns
# Drop NAs
# Display sample data
###Output
_____no_output_____
###Markdown
Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock
###Code
# Set weights
weights = [1/3, 1/3, 1/3]
# Calculate portfolio return
# Display sample data
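# Sketch: portfolio daily return as the weighted sum of the stocks' returns.
# `stock_returns` (one column per ticker) is assumed to have been built in the
# previous cell from the GOOG/AAPL/COST closing prices.
portfolio_returns = stock_returns.dot(weights)
portfolio_returns.head()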
###Output
_____no_output_____
###Markdown
Join your portfolio returns to the DataFrame that contains all of the portfolio returns
###Code
# Join your returns DataFrame to the original returns DataFrame
# Only compare dates where return data exists for all the stocks (drop NaNs)
###Output
_____no_output_____
###Markdown
Re-run the risk analysis with your portfolio to see how it compares to the others Calculate the Annualized Standard Deviation
###Code
# Calculate the annualized `std`
###Output
_____no_output_____
###Markdown
Calculate and plot rolling `std` with 21-day window
###Code
# Calculate rolling standard deviation
# Plot rolling standard deviation
###Output
_____no_output_____
###Markdown
Calculate and plot the correlation
###Code
# Calculate and plot the correlation
###Output
_____no_output_____
###Markdown
Calculate and Plot Rolling 60-day Beta for Your Portfolio compared to the S&P 500
###Code
# Calculate and plot Beta
###Output
_____no_output_____
###Markdown
Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot
###Code
# Calculate Annualzied Sharpe Ratios
# Visualize the sharpe ratios as a bar plot
###Output
_____no_output_____ |
Task-6_Decision_Tree.ipynb | ###Markdown
Author : Shradha Pujari Task 6 : Prediction using Decision Tree Algorithm GRIP @ The Sparks Foundation Decision Trees are versatile Machine Learning algorithms that can perform both classification and regression tasks, and even multioutput tasks. For the given ‘Iris’ dataset, I created the Decision Tree classifier and visualized it graphically. The purpose of this task is that, if we feed any new data to this classifier, it should be able to predict the right class accordingly. Technical Stack : Scikit-Learn, NumPy, Seaborn, Pandas, Matplotlib, Pydot
###Code
# Importing the required Libraries
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.model_selection import train_test_split
import sklearn.metrics as sm
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pydot
from IPython.display import Image
###Output
_____no_output_____
###Markdown
Step 1 - Loading the Dataset
###Code
# Loading Dataset
iris = load_iris()
X=iris.data[:,:]
y=iris.target
###Output
_____no_output_____
###Markdown
Step 2 - Exploratory Data Analysis
###Code
#Input data
data = pd.DataFrame(iris['data'], columns=["Sepal Length", "Sepal Width", "Petal Length", "Petal Width"])  # iris.data columns are sepal length/width, then petal length/width
data['Species']=iris['target']
data['Species']=data['Species'].apply(lambda x: iris['target_names'][x])
data.head()
data.shape
data.describe()
###Output
_____no_output_____
###Markdown
Step 3 - Data Visualization comparing various features
###Code
# Input data Visualization
sns.pairplot(data)
# Scatter plot of data based on Sepal Length and Width features
sns.FacetGrid(data,hue='Species').map(plt.scatter,'Sepal Length','Sepal Width').add_legend()
plt.show()
# Scatter plot of data based on Petal Length and Width features
sns.FacetGrid(data,hue='Species').map(plt.scatter,'Petal Length','Petal Width').add_legend()
plt.show()
###Output
_____no_output_____
###Markdown
Step 4 - Decision Tree Model Training
###Code
# Model Training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=1)
tree_classifier = DecisionTreeClassifier()
tree_classifier.fit(X_train,y_train)
print("Training Complete.")
y_pred = tree_classifier.predict(X_test)
###Output
Training Complete.
###Markdown
Step 5 - Comparing the actual and predicted flower classification
###Code
df = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df
###Output
_____no_output_____
###Markdown
Step 6 - Visualizing the Trained Model
###Code
#Visualizing the trained Decision Tree Classifier taking all 4 features in consideration
export_graphviz(
tree_classifier,
out_file="img/decision_tree.dot",
feature_names=iris.feature_names[:],
class_names=iris.target_names,
rounded=True,
filled=True
)
(graph,) = pydot.graph_from_dot_file('img/decision_tree.dot')
graph.write_png('img/decision_tree.png')
Image(filename='img/decision_tree.png')
###Output
_____no_output_____
###Markdown
Step 7 - Predicting the class output for some random values of petal and sepal length and width
###Code
print("Class Names = ",iris.target_names)
# Estimating class probabilities
print()
print("Estimating Class Probabilities for a flower whose sepal length and width are 4.7cm and 3.2cm and petal length and width are 1.3cm and 0.2cm.")
print()
print('Output = ',tree_classifier.predict([[4.7, 3.2, 1.3, 0.2]]))
print()
print("Our model predicts the class as 0, that is, setosa.")
###Output
Class Names = ['setosa' 'versicolor' 'virginica']
Estimating Class Probabilities for a flower whose sepal length and width are 4.7cm and 3.2cm and petal length and width are 1.3cm and 0.2cm.
Output = [0]
Our model predicts the class as 0, that is, setosa.
###Markdown
Step 8 - Calculating the Model accuracy
###Code
# Model Accuracy
print("Accuracy:",sm.accuracy_score(y_test, y_pred))
###Output
Accuracy: 1.0
|
src/02_conditionals_and_loops.ipynb | ###Markdown
02. Conditionals and Loops 1. Conditionals - use `if` - write `:` after the condition - indent the next block ```py if condition1: run this code when condition1 is true elif condition2: run this code when condition1 is false and condition2 is true else: run this code when both condition1 and condition2 are false```
###Code
condition = True
if condition:
print('condition is true')
condition = 100 > 20
if condition:
print('100 is bigger than 20')
score = 100
if score > 80:
print("Pass")
score = 75
if score < 20:
print("Grade F")
elif 20 <= score < 40:
print("Grade D")
elif 40 <= score < 60:
print("Grade C")
elif 60 <= score < 80:
print("Grade B")
else:
print("Grade A")
score = -1
if 0 <= score < 20:
print("Grade F")
elif 20 <= score < 40:
print("Grade D")
elif 40 <= score < 60:
print("Grade C")
elif 60 <= score < 80:
print("Grade B")
elif 80 <= score <= 100:
print("Grade A")
else:
print("Score is out of range")
# Writing it on one line is not a syntax error, but it is not recommended
weather = "sunny"
if weather == 'sunny': print('Happy')
else: print("Sad")
# When assigning a value, the conditional expression is often used for brevity
temperature = -8
feeling = 'cold' if temperature < -5 else 'hot'
print(feeling)
gift = 'apple'
bag = ['apple', 'banana', 'sandwich', 'coffee', 'book']
if gift in bag:
print(f"{gift}은 이미 가지고 있어요")
else:
bag.append(gift)
print(f'{gift} 가방에 잘 넣어둘게요.')
print(f"이제 내 가방엔 {bag} 들이 있어요")
###Output
apple은 이미 가지고 있어요
###Markdown
2. Loops Used when you need to repeat a calculation or output several times, or when you want to check the elements of a list or dictionary 2.1 for Just remember the `for ... in ...` syntax ```for element in Iterable: print(element)```
###Code
colors = ['red', 'green', 'blue', 'yellow']
for c in colors:
print(c)
# Inside the for loop you can use any other variable, not just c
my_name = "q"
colors = ['red', 'green', 'blue', 'yellow']
for c in colors:
print(f"{c}\tin {colors}, feat. {my_name}")
# Iterating over numbers
# range(stop)
for i in range(5):
print(i, end=' ')
# range(start, stop)
for i in range(10, 20):
print(i, end=' ')
# range(start, stop, step)
for i in range(100, 200, 20):
print(i, end=' ')
# Strings are iterable too
for c in "Artificial Intelligence":
print(c, end='')
# Strings are iterable too
small_letters = 'abcdefghijklmnopqrstuvwxyz'
capital_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
sentence = "Artificial Intelligence"
for c in sentence:
if c in small_letters:
c = c.upper()
elif c in capital_letters:
c = c.lower()
else:
pass
print(c, end='')
# Stopping a loop: break
for i in range(10, 100, 2):
if i == 40:
break
print(i, end=' ')
###Output
10 12 14 16 18 20 22 24 26 28 30 32 34 36 38
###Markdown
List comprehension Building a list with a for loop
###Code
# Building the list with a plain for loop
h_letters = []
for letter in 'human':
h_letters.append(letter)
print(h_letters)
###Output
['h', 'u', 'm', 'a', 'n']
###Markdown
The format is as follows
###Code
# list comprehension
h_letters = [ letter for letter in 'human' ]
print( h_letters)
# Build a list of consecutive multiples of 1000
result = [i * 1000 for i in range(1, 10)]
print(result)
###Output
[1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000]
###Markdown
An if clause can be added to a list comprehension. If the purpose is to filter elements, write the if clause at the end
###Code
# Pick out only the even numbers, version 1
number_list = [ x for x in range(20) if x % 2 == 0]
print(number_list)
# Pick out only the even numbers, version 2
number_list = [ x if x % 2 == 0 else None for x in range(20)]
print(number_list)
###Output
[0, None, 2, None, 4, None, 6, None, 8, None, 10, None, 12, None, 14, None, 16, None, 18, None]
###Markdown
2.2 while ```py while condition: code```
###Code
count = 5
while count > 0:
print(f"count is {count}")
count -= 1
destination = 100
count = 0
while True:
if count > 100:
break
print(f"count is {count}")
count += 20
###Output
count is 0
count is 20
count is 40
count is 60
count is 80
count is 100
|