path (stringlengths 7-265) | concatenated_notebook (stringlengths 46-17M) |
---|---|
Dynamics_lab12_SWE.ipynb | ###Markdown
AG Dynamics of the Earth Jupyter notebooks Georg Kaufmann Dynamic systems: 12. Shallow-water Shallow-water equations----*Georg Kaufmann, Geophysics Section, Institute of Geological Sciences, Freie Universität Berlin, Germany*
###Code
import numpy as np
import scipy.special
import matplotlib.pyplot as plt
###Output
_____no_output_____ |
4_5_Bayesian_with_python.ipynb | ###Markdown
Today's seminar agenda
---
1. Review of weeks 1 & 2
1. R vs Python comparison
1. What is PyMC?
1. PyMC vs MCMCPACK with disaster data
1. Bayesian Methods for Hackers, with Python (Chapter 2)

1. Review of weeks 1 & 2
---
1. Week 1: Chapter 1 — Jupyter notebook [[link text](https://nbviewer.jupyter.org/github/sk-rhyeu/bayesian_lab/blob/master/3_8_Bayesian_with_python_Intro.ipynb)]
2. Week 2: Chapter 2 — PPT slides

2. R vs Python comparison
---
Reference: https://www.youtube.com/watch?v=jLGsrGk2pDU

1. R
* Strengths
> 1) A very large collection of packages and resources for data mining and statistical analysis (R's main purpose)
> 2) A wealth of code for already-proven machine-learning algorithms (e.g., SVM)
> 3) Plenty of material for advanced, large-scale AI code (easy to find by googling)
> 4) Well suited to programming for research, analysis, and experimentation
* Weaknesses
> 1) The language itself is old and less efficient (R: ~100 lines vs Python: ~30 lines)
> 2) A structure that is hard for beginners to learn
> 3) Hard to build real production systems such as services and applications

------

2. Python
* Strengths
> 1) The language is concise, so it is easy for beginners to learn
> 2) Services and applications built with Python are routinely commercialized (well suited as a development language)
> 3) Optimized for supporting AI and deep learning (essential)
> 4) Well suited to machine learning on "real data" collected from production environments (the biggest difference from R)
* Weaknesses
> 1) Relatively fewer libraries/packages than R
> 2) Not much Python-based data-mining code (only the basics)

3. What is PyMC?
---
Reference: https://en.wikipedia.org/wiki/PyMC3
* A Python library for Bayesian analysis (vs. MCMCPACK in R)
* Focused on MCMC sampling algorithms
* Based on Theano (a library that optimizes mathematical expressions such as matrix values)
* Widely used to solve inference problems in astronomy, molecular biology, psychology, and other scientific fields
* Together with Stan, one of the most popular probabilistic-programming tools (Stan is a statistical-inference programming language written in C++)
* Released up to version PyMC4 at the time of writing

4. Chapter 2: A little more PyMC
---
1. Introduction
1. Modeling approaches
1. Is our model appropriate?
1. Conclusion
1. Appendix

2.1.1 Parent and child relationships
- Parent variables are variables that influence other variables.
- Child variables are variables that are influenced by, i.e. depend on, parent variables.
- Any variable can be a parent variable and a child variable at the same time.
###Code
!pip install pymc
import pymc as pm
import matplotlib
matplotlib.rc('font', family='Malgun Gothic') # set a Korean font for figures (Malgun Gothic)
lambda_ = pm.Exponential("poisson_param", 1)
# used in the call to the next variable...
data_generator = pm.Poisson("data_generator", lambda_)
data_plus_one = data_generator + 1
print ("Children of โlambda_โ: ")
print (lambda_.children)
print ("\nParents of โdata_generatorโ: ")
print (data_generator.parents)
print ("\nChildren of โdata_generatorโ: ")
print (data_generator.children)
###Output
Children of 'lambda_': 
{<pymc.distributions.new_dist_class.<locals>.new_class 'data_generator' at 0x7fceb80cf668>}
Parents of 'data_generator': 
{'mu': <pymc.distributions.new_dist_class.<locals>.new_class 'poisson_param' at 0x7fceb80cf630>}
Children of 'data_generator': 
{<pymc.PyMCObjects.Deterministic '(data_generator_add_1)' at 0x7fceb2217cc0>}
###Markdown
2.1.2 PyMC variables
- Every PyMC variable has a `value` attribute
- `value` holds the variable's current (possibly random) internal value ($P(\theta \mid y)$)
1. Stochastic variables (a stochastic process is a collection of random variables)
> 1) Variables whose value is not fixed
> 2) Still random even if the values of all parent variables are known
> 3) e.g., Poisson, DiscreteUniform, Exponential
> 4) Call the random() method to draw a new value
2. Deterministic variables
> 1) Variables that are not random once all of their parents are known
> 2) Declared with the @pm.deterministic decorator
> ex) A deterministic variable is a variable that you can predict with almost 100% accuracy. For example, if your age is x this year, it will definitely be x+1 next year, whether you are alive or not. So age is a deterministic variable in this case.
###Code
print ("lambda_.value =", lambda_.value)
print ("data_generator.value =", data_generator.value)
print ("data_plus_one.value =", data_plus_one.value)
lambda_1 = pm.Exponential("lambda_1", 1) # prior on first behaviour
lambda_2 = pm.Exponential("lambda_2", 1) # prior on second behaviour
tau = pm.DiscreteUniform("tau", lower=0, upper=10) # prior on behaviour change
print ("Initialized values...")
print("lambda_1.value = %.3f" % lambda_1.value)
print("lambda_2.value = %.3f" % lambda_2.value)
print("tau.value = %.3f" % tau.value, "\n")
lambda_1.random(), lambda_2.random(), tau.random()
print("After calling random() on the variables...")
print("lambda_1.value = %.3f" % lambda_1.value)
print("lambda_2.value = %.3f" % lambda_2.value)
print("tau.value = %.3f" % tau.value)
type(lambda_1 + lambda_2)
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
@pm.deterministic
def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
out = np.zeros(n_data_points)
out[:tau] = lambda_1 # lambda before tau is lambda1
out[tau:] = lambda_2 # lambda after tau is lambda2
return out
###Output
_____no_output_____
###Markdown
2.1.3 Including observations in the model
* Specify $P(\theta)$ concretely (a subjective choice)
* $P(\theta \mid y) = \dfrac{P(\theta, y)}{P(y)} = \dfrac{P(y \mid \theta)\,P(\theta)}{P(y)} \propto P(y \mid \theta)\,P(\theta)$
###Code
%matplotlib inline
from IPython.core.pylabtools import figsize
from matplotlib import pyplot as plt
figsize(12.5, 4)
samples = [lambda_1.random() for i in range(20000)]
plt.hist(samples, bins=70, normed=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
data = np.array([10, 5])
fixed_variable = pm.Poisson("fxd", 1, value=data, observed=True)
print("value: ", fixed_variable.value)
print("calling .random()")
fixed_variable.random()
print("value: ", fixed_variable.value)
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
obs = pm.Poisson("obs", lambda_, value=data, observed=True)
print(obs.value)
###Output
[10 25 15 20 35]
###Markdown
2.1.4 Finally
* model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])
###Code
model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])
###Output
_____no_output_____
###Markdown
2.2 Modeling approaches
- How was our data generated?
1. The best random variable (i.e., distribution) to represent the data
1. The parameters that the distribution needs
> 1) one for the early behaviour
> 2) one for the later behaviour
> 3) the switchpoint $\tau$ (we do not know when the behaviour changes) -> with no expert opinion, assume a discrete uniform distribution
3. A reasonable prior distribution for the parameters in 1) and 2)
> - If our beliefs are not strong, it is best to stop the modeling there rather than pile on further assumptions about how the parameters relate to one another

2.2.1 The same story, a different ending
* By running the story of Section 2.2 forward, we can generate new artificial datasets
* It is fine if an artificial dataset does not look like the dataset we observed (that particular dataset simply has fairly low probability)
* PyMC is designed to find good parameter values that maximize this probability
* This idea is very important in Bayesian inference
* It also lets us check the model's goodness of fit
###Code
tau = pm.rdiscrete_uniform(0, 80)
print(tau)
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
print(lambda_1, lambda_2)
lambda_ = np.r_[ lambda_1*np.ones(tau), lambda_2*np.ones(80-tau) ]
print (lambda_)
data = pm.rpoisson(lambda_)
print (data)
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
def plot_artificial_sms_dataset():
tau = pm.rdiscrete_uniform(0, 80)
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlim(0, 80)
plt.xlabel("Time (days)")
plt.ylabel("Text messages received")
figsize(12.5, 5)
plt.suptitle("More examples of artificial datasets", fontsize=14)
for i in range(1, 5):
plt.subplot(4, 1, i)
plot_artificial_sms_dataset()
###Output
/usr/local/lib/python3.6/dist-packages/matplotlib/font_manager.py:1241: UserWarning: findfont: Font family ['Malgun Gothic'] not found. Falling back to DejaVu Sans.
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
2.2.2 Example: Bayesian A/B testing
* A statistical design pattern for revealing the difference in effectiveness between two different treatments (cf. the two-sample t-test)
* The key point is that there is only one difference between the groups, so a meaningful change in the measurements can be attributed directly to that difference
* In post-experiment analysis, 'hypothesis tests' such as a difference-of-means test or a difference-of-proportions test are usually used -> z-scores or p-values
* Related: the Bayes factor — the larger its value, the stronger the evidence for accepting the null hypothesis

2.2.3 A simple case
* Conversion: a site visitor signing up as a member, purchasing something, or taking some other desired action
* $p_A$: the probability that a user exposed to site A ultimately converts (site A's true effectiveness, which we do not know)
> 1) Suppose site A is shown to N people and n of them convert
> 2) The observed frequency n/N is not necessarily equal to $p_A$ -> the observed frequency of an event can differ from its true frequency
> ex) the true probability of rolling a 1 with a six-sided die is 1/6, yet in 6 rolls we may never observe a single 1 (the observed frequency)
> 3) Because of noise and complexity we do not know the true frequency, so we must infer it from the observed data
> 4) Bayesian statistics lets us infer an estimate of the true frequency from an appropriate prior and the observed data
> 5) Since we have no strong conviction about $p_A$ yet, assume a uniform prior
> 6) Assume $p_A = 0.05$ and that the site was shown to N = 1,500 users; X = whether or not each user purchased -> use a Bernoulli distribution
> Conclusion: our posterior distribution puts its weight around the true $p_A$ suggested by the data
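For instance, here is a quick simulation of the die example above (a small illustrative sketch, not part of the original notebook):

```python
import numpy as np

np.random.seed(0)                        # fixed seed so the example is reproducible
rolls = np.random.randint(1, 7, size=6)  # six rolls of a fair six-sided die
print("rolls:", rolls)
print("observed frequency of 1:", (rolls == 1).mean())  # often differs from the true 1/6
```

Even though the true probability of rolling a 1 is 1/6, the observed frequency over only six rolls can easily be 0 — which is exactly why we infer $p_A$ instead of reading it off the data directly.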
###Code
import pymc as pm
# The parameters are the bounds of the Uniform.
p = pm.Uniform('p', lower=0, upper=1)
# set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = pm.rbernoulli(p_true, N)
print(occurrences) # Remember: Python treats True == 1, and False == 0
print(occurrences.sum())
# Occurrences.mean is equal to n/N.
print("What is the observed frequency in Group A? %.4f" % occurrences.mean())
print("Does this equal the true frequency? %s" % (occurrences.mean() == p_true))
# include the observations, which are Bernoulli
obs = pm.Bernoulli("obs", p, value=occurrences, observed=True)
# To be explained in chapter 3
mcmc = pm.MCMC([p, obs])
mcmc.sample(18000, 1000)
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(mcmc.trace("p")[:], bins=25, histtype="stepfilled", normed=True)
plt.legend();
###Output
/usr/local/lib/python3.6/dist-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
###Markdown
2.2.4 A and B together
1. Model $p_B$ in the same way as above
1. Infer delta = $p_A$ - $p_B$, $p_A$, and $p_B$ jointly
###Code
import pymc as pm
figsize(12, 4)
# these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
# notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
# generate some observations
observations_A = pm.rbernoulli(true_p_A, N_A)
observations_B = pm.rbernoulli(true_p_B, N_B)
print("Obs from Site A: ", observations_A[:30].astype(int), "...")
print("Obs from Site B: ", observations_B[:30].astype(int), "...")
print(observations_A.mean())
print(observations_B.mean())
# Set up the pymc model. Again assume Uniform priors for p_A and p_B.
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
@pm.deterministic
def delta(p_A=p_A, p_B=p_B):
return p_A - p_B
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, value=observations_A, observed=True)
obs_B = pm.Bernoulli("obs_B", p_B, value=observations_B, observed=True)
# To be explained in chapter 3.
mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])
mcmc.sample(20000, 1000)
p_A_samples = mcmc.trace("p_A")[:]
p_B_samples = mcmc.trace("p_B")[:]
delta_samples = mcmc.trace("delta")[:]
figsize(12.5, 10)
# histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", normed=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", normed=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
###Output
/usr/local/lib/python3.6/dist-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
###Markdown
Interpreting the plots above
* The posterior distribution of $p_B$ is flatter than that of $p_A$ (the sample size for $p_B$ is smaller) -> we are less certain about the true value of $p_B$
###Code
figsize(12.5, 3)
# histogram of posteriors
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=30, alpha=0.80,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.hist(p_B_samples, histtype='stepfilled', bins=30, alpha=0.80,
label="posterior of $p_B$", color="#467821", normed=True)
plt.legend(loc="upper right")
plt.xlabel("Value")
plt.ylabel("Density")
plt.title("Posterior distributions of $p_A$ and $p_B$")
plt.ylim(0,80);
###Output
/usr/local/lib/python3.6/dist-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
###Markdown
Interpreting the plot above
* Most of the posterior distribution of delta is above 0, meaning site A's response is better than site B's (it attracts more purchasers)
###Code
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \
(delta_samples < 0).mean())
print("Probability site A is BETTER than site B: %.3f" % \
(delta_samples > 0).mean())
###Output
Probability site A is WORSE than site B: 0.219
Probability site A is BETTER than site B: 0.781
###Markdown
2.2.4 Conclusion
* Note that the difference in sample size between site A and site B was never an issue -> Bayesian inference handles such cases naturally
* The A/B test is more natural than a hypothesis test

2.2.5 Example: an algorithm for human deceit
* The true proportion of honest answers may be lower than what the observed data suggest
> ex) the question "Have you ever cheated on an exam?"

2.2.6 The binomial distribution
* Has two parameters, N and p
* The larger p is, the more likely the event is to occur
* With N = 1 it reduces to the Bernoulli distribution; a binomial random variable takes values from 0 to N, and the sum of N Bernoulli random variables with parameter p follows a binomial distribution
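For reference, the binomial probability mass function used in the plot below is

$$P(X = k) = \binom{N}{k}\, p^{k} (1 - p)^{N - k}, \qquad k = 0, 1, \ldots, N.$$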
###Code
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
###Output
_____no_output_____
###Markdown
2.2.7 Example: cheating among students
* Topic: use the binomial distribution to find out how frequently students cheat during an exam
* To get better answers, a new scheme is proposed (the so-called privacy algorithm):
> 1) A student whose first coin flip lands heads answers honestly
> 2) A student whose first coin flip lands tails flips again: heads means answering "I did cheat", tails means answering "I did not cheat"
> With this scheme we cannot tell whether an "I did cheat" answer is a genuine admission or merely the result of heads on the second flip -> privacy is preserved and the researchers still receive honest answers
* Example: we survey 100 students and want to find p, the proportion of cheaters; since we have no prior information, assume a uniform prior for p
###Code
import pymc as pm
N = 100
p = pm.Uniform("freq_cheating", 0, 1)
true_answers = pm.Bernoulli("truths", p, size=N)
first_coin_flips = pm.Bernoulli("first_flips", 0.5, size=N)
print(first_coin_flips.value)
second_coin_flips = pm.Bernoulli("second_flips", 0.5, size=N)
@pm.deterministic
def observed_proportion(t_a=true_answers,
fc=first_coin_flips,
sc=second_coin_flips):
observed = fc * t_a + (1 - fc) * sc
return observed.sum() / float(N)
observed_proportion.value
X = 35
observations = pm.Binomial("obs", N, observed_proportion, observed=True,
value=X)
model = pm.Model([p, true_answers, first_coin_flips,
second_coin_flips, observed_proportion, observations])
# To be explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(40000, 15000)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.xlabel("Value of $p$")
plt.ylabel("Density")
plt.title("Posterior distribution of parameter $p$")
plt.legend();
###Output
_____no_output_____
###Markdown
Interpreting the plot above
* p is the probability that a student cheated
* The posterior narrows to roughly the range 0.05–0.35 (so a true value around 0.3 is plausible within this range)
* Algorithms of this kind can also be used to collect users' personal information (while preserving privacy)

2.2.8 An alternative PyMC model
* $P(\text{"yes"}) = P(\text{first coin heads})\,P(\text{cheater}) + P(\text{first coin tails})\,P(\text{second coin heads}) = \dfrac{p}{2} + \dfrac{1}{4}$
* If we know p, we can compute the probability that a student answers "yes" (use a deterministic function of p)
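As a quick sanity check of this formula (the value of $p$ here is purely illustrative): if $p = 0.2$, then

$$P(\text{"yes"}) = \frac{0.2}{2} + \frac{1}{4} = 0.35,$$

which corresponds to the 35 "yes" answers out of 100 used as the observation in the code below.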
###Code
p = pm.Uniform("freq_cheating", 0, 1)
@pm.deterministic
def p_skewed(p=p):
return 0.5 * p + 0.25
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed,
value=35, observed=True)
model = pm.Model([yes_responses, p_skewed, p])
# To Be Explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(25000, 2500)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
###Output
/usr/local/lib/python3.6/dist-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
###Markdown
2.2.9 More PyMC tricks
* Operations such as indexing or slicing -> the built-in Lambda function lets us handle them more concisely and simply
###Code
# note: pm.Lambda wraps a plain lambda into a Deterministic; a precision of 1 is added so pm.Normal has its required tau argument
beta = pm.Normal("coefficients", 0, 1, size=(N, 1))
x = np.random.randn(N, 1)
linear_combination = pm.Lambda("linear_combination",
                               lambda x=x, beta=beta: np.dot(x.T, beta))
N = 10
x = np.empty(N, dtype=object)
for i in range(0, N):
x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)
###Output
_____no_output_____
###Markdown
2.2.10 Example: the Space Shuttle Challenger disaster
* Summary of the incident
> 1) The 25th space-shuttle flight ended in disaster
> 2) Cause: a defect in an O-ring connected to the rocket booster; the O-rings were designed in a way that made them overly sensitive to many factors, including the outside temperature
> 3) Of the previous 24 flights, O-ring defect data are available for 23 of them
> 4) Only the data from 7 of those flights were treated as important
* Compare the outside temperature with the damage incidents to understand the relationship between the two
###Code
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt(r"C:\Users\wh\006775\Probabilistic-Programming-and-Bayesian-Methods-for-Hackers-master\Chapter2_MorePyMC\data\challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
# drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
# plot it, as a function of temperature (the first column)
print("Temp (F), O-Ring failure?")
print(challenger_data)
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature");
###Output
_____no_output_____
###Markdown
Interpreting the plot above
Reference: https://blog.naver.com/varkiry05/221057275615 (about the logistic function)
* The lower the outside temperature, the higher the probability of a damage incident
* Goal: through modeling, answer "What is the probability of a damage incident at temperature t?"
* Use a logistic function of temperature: $p(t) = \dfrac{1}{1 + e^{\,\beta t}}$
###Code
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.title("Logistic functon plotted for several value of $\\beta$ parameter", fontsize=14)
plt.legend();
###Output
_____no_output_____
###Markdown
Interpreting the plot above
* Since we have no conviction about $\beta$, several values (1, 3, -5) were plugged in
* In the plain logistic function the probability changes only near 0, whereas in Figure 2-11 the probability changes around 65–70 °F -> add a bias term
* $p(t) = \dfrac{1}{1 + e^{\,\beta t + \alpha}}$ (an $\alpha$ term is added)
###Code
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.title("Logistic functon with bias, plotted for several value of $\\alpha$ bias parameter", fontsize=14)
plt.legend(loc="lower left");
###Output
_____no_output_____
###Markdown
Interpreting the plot above
* Several values of alpha (1, -2, 7) were plugged in
* Adding alpha shifts the curve to the left or to the right (a bias)

2.2.11 The normal distribution
* Parameters of the normal distribution: the mean and the precision (the reciprocal of the variance; the larger the precision, the narrower the distribution)
* $X \sim N(\mu, 1/\tau)$, i.e. $1/\tau = \sigma^2$
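Written out with the precision parameterization that PyMC uses, the normal density is

$$f(x \mid \mu, \tau) = \sqrt{\frac{\tau}{2\pi}}\, \exp\!\left(-\frac{\tau}{2}(x - \mu)^{2}\right),$$

so a larger precision $\tau$ (smaller variance $\sigma^2 = 1/\tau$) gives a narrower distribution.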
###Code
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
plt.plot(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)),
label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
plt.fill_between(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)), color=_color,
alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
import pymc as pm
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
# notice the`value` here. We explain why below.
beta = pm.Normal("beta", 0, 0.001, value=0)
alpha = pm.Normal("alpha", 0, 0.001, value=0)
@pm.deterministic
def p(t=temperature, alpha=alpha, beta=beta):
return 1.0 / (1. + np.exp(beta * t + alpha))
###Output
_____no_output_____
###Markdown
Notes on the model above
* Defect incidents: $D_i \sim \text{Ber}(p(t_i))$, $i = 1, 2, \ldots, N$
* $p(t)$ is the logistic function and $t$ is the observed temperature
* The initial values of beta and alpha are set to 0 (if they are too large, p jumps to 0 or 1) -> this does not affect the results and does not mean the prior carries any additional information
###Code
p.value
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
observed = pm.Bernoulli("bernoulli_obs", p, value=D, observed=True)
model = pm.Model([observed, beta, alpha])
# Mysterious code to be explained in Chapter 3
map_ = pm.MAP(model)
map_.fit()
mcmc = pm.MCMC(model)
mcmc.sample(120000, 100000, 2)
alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d
beta_samples = mcmc.trace('beta')[:, None]
figsize(12.5, 6)
# histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"1 posterior of $\beta$", color="#7A68A6", normed=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"2 posterior of $\alpha$", color="#A60628", normed=True)
plt.legend();
###Output
_____no_output_____
###Markdown
Interpreting the plots above
* All samples of beta are greater than 0 -> if the posterior had been concentrated around 0, we would conclude that temperature has no effect on the probability of a defect
* The same holds for alpha (it is far from 0)
* The wider the distribution, the less certain we are about the parameter
###Code
t = np.linspace(temperature.min() - 5, temperature.max() + 5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
###Output
_____no_output_____
###Markdown
Interpreting the plots above
* ex) at 65 degrees we can be 95% certain that the probability of a defect lies between 0.25 and 0.75 (the Bayesian credible-interval interpretation)
1. Confidence interval vs. credible interval
Reference: https://freshrimpsushi.tistory.com/752
###Code
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
###Output
_____no_output_____
###Markdown
2.2.12 What happened on the day of the Challenger disaster?
* The outside temperature on the day of the disaster was 31 °F (about -0.5 °C)

2.3 Is our model appropriate?
* A good model represents the data well
* How do we evaluate how good a model is?
> 1) Compare the observed data (a fixed random variable) with artificial datasets that we simulate
> 2) Create a new stochastic variable (but leave the observed data itself out)
###Code
simulated = pm.Bernoulli("bernoulli_sim", p)
N = 10000
mcmc = pm.MCMC([simulated, alpha, beta, observed])
mcmc.sample(N)
figsize(12.5, 5)
simulations = mcmc.trace("bernoulli_sim")[:]
print(simulations.shape)
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i + 1)
plt.scatter(temperature, simulations[1000 * i, :], color="k",
s=50, alpha=0.6)
###Output
_____no_output_____
###Markdown
Interpreting the plots above
* Evaluating the model's quality: use the Bayesian p-value (unlike the frequentist p-value, it is subjective)
* For this reason Gelman emphasizes that graphical tests are clearer than p-value tests

2.3.1 Separation plots
* The graphical test is a new way of visualizing data for logistic-regression-style fits (the so-called separation plot)
* A separation plot lets the user compare different models graphically
* The method presented next computes, for each model, the proportion of posterior simulations equal to 1 at a given temperature: $P(\text{Defect} = 1 \mid t)$
###Code
posterior_probability = simulations.mean(axis=0)
print("Obs. | Array of Simulated Defects\
| Posterior Probability of Defect | Realized Defect ")
for i in range(len(D)):
print ("%s | %s | %.2f | %d" %\
(str(i).zfill(2),str(simulations[:10,i])[:-1] + "...]".ljust(12),
posterior_probability[i], D[i]))
ix = np.argsort(posterior_probability)
print("probb | defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]]))
import separation_plot
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
###Output
_____no_output_____
###Markdown
Interpreting the plot above
* The jagged line is the sorted probabilities, the blue bars are defects, the empty spaces are non-defects, and the black vertical line marks the expected number of defects we should observe
* As the probability increases, more and more defects occur
* This lets us compare the total number of events the model predicts with the actual number of events in the data
###Code
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the probability of defect is equal to if a defect occurred or not.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7. / 23 * np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model");
###Output
_____no_output_____ |
2.Feature Engineering - Review Analysis.ipynb | ###Markdown
Yelp Project Part II: Feature Engineering - Review Analysis - LDA
###Code
import pandas as pd
df = pd.read_csv('restaurant_reviews.csv', encoding ='utf-8')
df.head()
# load the training/testing business ids so that the LDA is fit on the training set
# and then used to predict the topic categories of the testing set
train_id = pd.read_csv('train_set_id.csv', encoding ='utf-8')
train_id.columns = ['business_id']
test_id = pd.read_csv('test_set_id.csv', encoding ='utf-8')
test_id.columns = ['business_id']
df_train = train_id.merge(df, how = 'left', left_on='business_id', right_on='business_id')
df_train.dropna(how='any', inplace = True)
df_test = test_id.merge(df, how = 'left', left_on='business_id', right_on='business_id')
df_test.dropna(how='any', inplace = True)
df_train.shape
df_test.shape
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=0.1,
max_features=10000)
X_train = count.fit_transform(df_train['text'].values)
X_test = count.transform(df_test['text'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components = 10,
random_state = 1,
learning_method = 'online',
max_iter = 15,
verbose=1,
n_jobs = -1)
X_topics_train = lda.fit_transform(X_train)
X_topics_test = lda.transform(X_test)
n_top_words = 30
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print('Topic %d:' % (topic_idx))
print(" ".join([feature_names[i]
for i in topic.argsort()
[:-n_top_words - 1: -1]]))
# identify the column index of the max values in the rows, which is the class of each row
import numpy as np
idx = np.argmax(X_topics_train, axis=1)
df_train['label'] = (df_train['stars'] >= 4)*1
df_train['Topic'] = idx
df_train.head()
df_train.to_csv('review_train.csv', index = False)
df_test['label'] = (df_test['stars'] >= 4)*1
# identify the column index of the max values in the rows, which is the class of each row
import numpy as np
idx = np.argmax(X_topics_test, axis=1)
df_test['Topic'] = idx
df_test.head()
df_test.to_csv('review_test.csv', index = False)
import pandas as pd
import numpy as np
df_train = pd.read_csv('review_train.csv')
df_test = pd.read_csv('review_test.csv')
df_train['score'] = df_train['label'].replace(0, -1)
df_test['score'] = df_test['label'].replace(0, -1)
len(df_train['business_id'].unique())
topic_train = df_train.groupby(['business_id', 'Topic']).mean()['score'].unstack().fillna(0).reset_index()
topic_train.index.name = None
topic_train.columns = ['business_id', 'Topic0', 'Topic1', 'Topic2', 'Topic3', 'Topic4',
'Topic5', 'Topic6', 'Topic7', 'Topic8', 'Topic9']
topic_train.head()
topic_train.to_csv('train_topic_score.csv', index = False)
topic_test = df_test.groupby(['business_id', 'Topic']).mean()['score'].unstack().fillna(0).reset_index()
topic_test.index.name = None
topic_test.columns = ['business_id', 'Topic0', 'Topic1', 'Topic2', 'Topic3', 'Topic4',
'Topic5', 'Topic6', 'Topic7', 'Topic8', 'Topic9']
topic_test.head()
topic_test.to_csv('test_topic_score.csv', index = False)
print(topic_train.shape)
print(topic_test.shape)
topic = pd.concat([topic_train, topic_test])
topic.to_csv('topic_score.csv', index = False)
# inspect the three training reviews that load most heavily on topic 0
top_topic0 = X_topics_train[:, 0].argsort()[::-1]
for iter_idx, review_idx in enumerate(top_topic0[:3]):
    print('\nTopic 0 review #%d:' % (iter_idx + 1))
    print(df_train['text'].iloc[review_idx][:300], '...')
#### Now is the example in the slide
# E.g. take restaurant 'cInZkUSckKwxCqAR7s2ETw' as an example: First Watch
eg_res = df[df['business_id'] == 'cInZkUSckKwxCqAR7s2ETw']
eg = pd.read_csv('topic_score.csv')
eg[eg['business_id'] == 'cInZkUSckKwxCqAR7s2ETw']
eg_res
eg_res.loc[715, :]['text']
###Output
_____no_output_____ |
5.Data-Visualization-with-Python/c.Pie-charts-box-plots-scatter-plots-bubble-plots.ipynb | ###Markdown
Pie Charts, Box Plots, Scatter Plots, and Bubble Plots

Introduction
In this lab session, we continue exploring the Matplotlib library. More specifically, we will learn how to create pie charts, box plots, scatter plots, and bubble charts.

Table of Contents
1. [Exploring Datasets with *pandas*](#0)
2. [Downloading and Prepping Data](#2)
3. [Visualizing Data using Matplotlib](#4)
4. [Pie Charts](#6)
5. [Box Plots](#8)
6. [Scatter Plots](#10)
7. [Bubble Plots](#12)

Exploring Datasets with *pandas* and Matplotlib
Toolkits: The course heavily relies on [*pandas*](http://pandas.pydata.org/) and [**Numpy**](http://www.numpy.org/) for data wrangling, analysis, and visualization. The primary plotting library we will explore in the course is [Matplotlib](http://matplotlib.org/).

Dataset: Immigration to Canada from 1980 to 2013 - [International migration flows to and from selected countries - The 2015 revision](http://www.un.org/en/development/desa/population/migration/data/empirical2/migrationflows.shtml) from United Nation's website.

The dataset contains annual data on the flows of international migrants as recorded by the countries of destination. The data presents both inflows and outflows according to the place of birth, citizenship or place of previous / next residence both for foreigners and nationals. In this lab, we will focus on the Canadian Immigration data.

Downloading and Prepping Data

Import primary modules.
###Code
import numpy as np # useful for many scientific computing in Python
import pandas as pd # primary data structure library
###Output
_____no_output_____
###Markdown
Let's download and import our primary Canadian Immigration dataset using *pandas* `read_excel()` method. Normally, before we can do that, we would need to download a module which *pandas* requires to read in excel files. This module is **xlrd**. For your convenience, we have pre-installed this module, so you would not have to worry about that. Otherwise, you would need to run the following line of code to install the **xlrd** module:```!conda install -c anaconda xlrd --yes``` Download the dataset and read it into a *pandas* dataframe.
###Code
!conda install -c anaconda xlrd --yes
df_can = pd.read_excel('https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DV0101EN/labs/Data_Files/Canada.xlsx',
sheet_name='Canada by Citizenship',
skiprows=range(20),
skipfooter=2
)
print('Data downloaded and read into a dataframe!')
###Output
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
Data downloaded and read into a dataframe!
###Markdown
Let's take a look at the first five items in our dataset.
###Code
df_can.head()
###Output
_____no_output_____
###Markdown
Let's find out how many entries there are in our dataset.
###Code
# print the dimensions of the dataframe
print(df_can.shape)
###Output
(195, 43)
###Markdown
Clean up data. We will make some modifications to the original dataset to make it easier to create our visualizations. Refer to *Introduction to Matplotlib and Line Plots* and *Area Plots, Histograms, and Bar Plots* for a detailed description of this preprocessing.
###Code
# clean up the dataset to remove unnecessary columns (eg. REG)
df_can.drop(['AREA', 'REG', 'DEV', 'Type', 'Coverage'], axis=1, inplace=True)
# let's rename the columns so that they make sense
df_can.rename(columns={'OdName':'Country', 'AreaName':'Continent','RegName':'Region'}, inplace=True)
# for sake of consistency, let's also make all column labels of type string
df_can.columns = list(map(str, df_can.columns))
# set the country name as index - useful for quickly looking up countries using .loc method
df_can.set_index('Country', inplace=True)
# add total column
df_can['Total'] = df_can.sum(axis=1)
# years that we will be using in this lesson - useful for plotting later on
years = list(map(str, range(1980, 2014)))
print('data dimensions:', df_can.shape)
###Output
data dimensions: (195, 38)
###Markdown
Visualizing Data using Matplotlib Import `Matplotlib`.
###Code
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.style.use('ggplot') # optional: for ggplot-like style
# check for latest version of Matplotlib
print('Matplotlib version: ', mpl.__version__) # >= 2.0.0
###Output
Matplotlib version: 3.1.1
###Markdown
Pie Charts

A `pie chart` is a circular graphic that displays numeric proportions by dividing a circle (or pie) into proportional slices. You are most likely already familiar with pie charts as they are widely used in business and media. We can create pie charts in Matplotlib by passing in the `kind=pie` keyword.

Let's use a pie chart to explore the proportion (percentage) of new immigrants grouped by continents for the entire time period from 1980 to 2013.

Step 1: Gather data. We will use *pandas* `groupby` method to summarize the immigration data by `Continent`. The general process of `groupby` involves the following steps:
1. **Split:** Splitting the data into groups based on some criteria.
2. **Apply:** Applying a function to each group independently: .sum(), .count(), .mean(), .std(), .aggregate(), .apply(), etc.
3. **Combine:** Combining the results into a data structure.
###Code
# group countries by continents and apply sum() function
df_continents = df_can.groupby('Continent', axis=0).sum()
# note: the output of the groupby method is a `groupby' object.
# we can not use it further until we apply a function (eg .sum())
print(type(df_can.groupby('Continent', axis=0)))
df_continents.head()
###Output
<class 'pandas.core.groupby.generic.DataFrameGroupBy'>
###Markdown
Step 2: Plot the data. We will pass in `kind = 'pie'` keyword, along with the following additional parameters:- `autopct` - is a string or function used to label the wedges with their numeric value. The label will be placed inside the wedge. If it is a format string, the label will be `fmt%pct`.- `startangle` - rotates the start of the pie chart by angle degrees counterclockwise from the x-axis.- `shadow` - Draws a shadow beneath the pie (to give a 3D feel).
###Code
# autopct create %, start angle represent starting point
df_continents['Total'].plot(kind='pie',
figsize=(5, 6),
autopct='%1.1f%%', # add in percentages
startangle=90, # start angle 90ยฐ (Africa)
shadow=True, # add shadow
)
plt.title('Immigration to Canada by Continent [1980 - 2013]')
plt.axis('equal') # Sets the pie chart to look like a circle.
plt.show()
###Output
_____no_output_____
###Markdown
The above visual is not very clear, the numbers and text overlap in some instances. Let's make a few modifications to improve the visuals:
* Remove the text labels on the pie chart by passing in `legend` and add it as a separate legend using `plt.legend()`.
* Push out the percentages to sit just outside the pie chart by passing in `pctdistance` parameter.
* Pass in a custom set of colors for continents by passing in `colors` parameter.
* **Explode** the pie chart to emphasize the lowest three continents (Africa, North America, and Latin America and Caribbean) by passing in `explode` parameter.
###Code
colors_list = ['gold', 'yellowgreen', 'lightcoral', 'lightskyblue', 'lightgreen', 'pink']
explode_list = [0.1, 0, 0, 0, 0.1, 0.1] # ratio for each continent with which to offset each wedge.
df_continents['Total'].plot(kind='pie',
figsize=(15, 6),
autopct='%1.1f%%',
startangle=90,
shadow=True,
labels=None, # turn off labels on pie chart
pctdistance=1.12, # the ratio between the center of each pie slice and the start of the text generated by autopct
colors=colors_list, # add custom colors
explode=explode_list # 'explode' lowest 3 continents
)
# scale the title up by 12% to match pctdistance
plt.title('Immigration to Canada by Continent [1980 - 2013]', y=1.12)
plt.axis('equal')
# add legend
plt.legend(labels=df_continents.index, loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
**Question:** Using a pie chart, explore the proportion (percentage) of new immigrants grouped by continents in the year 2013.

**Note**: You might need to play with the explode values in order to fix any overlapping slice values.
###Code
### type your answer here
colors_list = ['gold', 'yellowgreen', 'lightcoral', 'lightskyblue', 'lightgreen', 'pink']
#explode_list = [0.1, 0, 0, 0, 0.1, 0.2] # ratio for each continent with which to offset each wedge.
df_continents['2013'].plot(kind='pie',
figsize=(15, 6),
autopct='%1.1f%%',
startangle=90,
shadow=True,
labels=None, # turn off labels on pie chart
pctdistance=1.12, # the ratio between the center of each pie slice and the start of the text generated by autopct
colors=colors_list, # add custom colors
explode=explode_list # 'explode' lowest 3 continents
)
# scale the title up by 12% to match pctdistance
plt.title('Immigration to Canada by Continent, 2013]', y=1.12)
plt.axis('equal')
# add legend
plt.legend(labels=df_continents.index, loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:explode_list = [0.1, 0, 0, 0, 0.1, 0.2] ratio for each continent with which to offset each wedge.--><!--df_continents['2013'].plot(kind='pie', figsize=(15, 6), autopct='%1.1f%%', startangle=90, shadow=True, labels=None, turn off labels on pie chart pctdistance=1.12, the ratio between the pie center and start of text label explode=explode_list 'explode' lowest 3 continents )--><!--\\ scale the title up by 12% to match pctdistanceplt.title('Immigration to Canada by Continent in 2013', y=1.12) plt.axis('equal') --><!--\\ add legendplt.legend(labels=df_continents.index, loc='upper left') --><!--\\ show plotplt.show()-->

Box Plots

A `box plot` is a way of statistically representing the *distribution* of the data through five main dimensions:
- **Minimum:** Smallest number in the dataset.
- **First quartile:** Middle number between the `minimum` and the `median`.
- **Second quartile (Median):** Middle number of the (sorted) dataset.
- **Third quartile:** Middle number between `median` and `maximum`.
- **Maximum:** Highest number in the dataset.

To make a `box plot`, we can use `kind=box` in `plot` method invoked on a *pandas* series or dataframe. Let's plot the box plot for the Japanese immigrants between 1980 - 2013.

Step 1: Get the dataset. Even though we are extracting the data for just one country, we will obtain it as a dataframe. This will help us with calling the `dataframe.describe()` method to view the percentiles.
###Code
# to get a dataframe, place extra square brackets around 'Japan'.
df_japan = df_can.loc[['Japan'], years].transpose()
df_japan.head()
###Output
_____no_output_____
###Markdown
Step 2: Plot by passing in `kind='box'`.
###Code
df_japan.plot(kind='box', figsize=(8, 6))
plt.title('Box plot of Japanese Immigrants from 1980 - 2013')
plt.ylabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
We can immediately make a few key observations from the plot above:
1. The minimum number of immigrants is around 200 (min), the maximum number is around 1300 (max), and the median number of immigrants is around 900 (median).
2. 25% of the years for period 1980 - 2013 had an annual immigrant count of ~500 or fewer (First quartile).
3. 75% of the years for period 1980 - 2013 had an annual immigrant count of ~1100 or fewer (Third quartile).

We can view the actual numbers by calling the `describe()` method on the dataframe.
###Code
df_japan.describe()
###Output
_____no_output_____
###Markdown
One of the key benefits of box plots is comparing the distribution of multiple datasets. In one of the previous labs, we observed that China and India had very similar immigration trends. Let's analyze these two countries further using box plots.

**Question:** Compare the distribution of the number of new immigrants from India and China for the period 1980 - 2013.

Step 1: Get the dataset for China and India and call the dataframe **df_CI**.
###Code
### type your answer here
df_CI= df_can.loc[['China', 'India'], years].transpose()
df_CI.head()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:df_CI= df_can.loc[['China', 'India'], years].transpose()df_CI.head()--> Let's view the percentages associated with both countries using the `describe()` method.
###Code
### type your answer here
df_CI.describe()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:df_CI.describe()--> Step 2: Plot data.
###Code
### type your answer here
df_CI.plot(kind='box', figsize=(8, 6))
plt.title('Box plots of Immigrants from China and India (1980 - 2013)')
plt.ylabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:df_CI.plot(kind='box', figsize=(10, 7))--><!--plt.title('Box plots of Immigrants from China and India (1980 - 2013)')plt.xlabel('Number of Immigrants')--><!--plt.show()--> We can observe that, while both countries have around the same median immigrant population (~20,000), China's immigrant population range is more spread out than India's. The maximum population from India for any year (36,210) is around 15% lower than the maximum population from China (42,584). If you prefer to create horizontal box plots, you can pass the `vert` parameter in the **plot** function and assign it to *False*. You can also specify a different color in case you are not a big fan of the default red color.
###Code
# horizontal box plots
df_CI.plot(kind='box', figsize=(10, 7), color='blue', vert=False)
plt.title('Box plots of Immigrants from China and India (1980 - 2013)')
plt.xlabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
**Subplots**

Often times we might want to plot multiple plots within the same figure. For example, we might want to perform a side by side comparison of the box plot with the line plot of China and India's immigration.

To visualize multiple plots together, we can create a **`figure`** (overall canvas) and divide it into **`subplots`**, each containing a plot. With **subplots**, we usually work with the **artist layer** instead of the **scripting layer**. Typical syntax is:

```python
fig = plt.figure()  # create figure
ax = fig.add_subplot(nrows, ncols, plot_number)  # create subplots
```

Where
- `nrows` and `ncols` are used to notionally split the figure into (`nrows` \* `ncols`) sub-axes,
- `plot_number` is used to identify the particular subplot that this function is to create within the notional grid. `plot_number` starts at 1, increments across rows first and has a maximum of `nrows` * `ncols` as shown below.

We can then specify which subplot to place each plot by passing in the `ax` parameter in `plot()` method as follows:
###Code
fig = plt.figure() # create figure
ax0 = fig.add_subplot(1, 2, 1) # add subplot 1 (1 row, 2 columns, first plot)
ax1 = fig.add_subplot(1, 2, 2) # add subplot 2 (1 row, 2 columns, second plot). See tip below**
# Subplot 1: Box plot
df_CI.plot(kind='box', color='blue', vert=False, figsize=(20, 6), ax=ax0) # add to subplot 1
ax0.set_title('Box Plots of Immigrants from China and India (1980 - 2013)')
ax0.set_xlabel('Number of Immigrants')
ax0.set_ylabel('Countries')
# Subplot 2: Line plot
df_CI.plot(kind='line', figsize=(20, 6), ax=ax1) # add to subplot 2
ax1.set_title ('Line Plots of Immigrants from China and India (1980 - 2013)')
ax1.set_ylabel('Number of Immigrants')
ax1.set_xlabel('Years')
plt.show()
###Output
_____no_output_____
###Markdown
**Tip regarding subplot convention:** In the case when `nrows`, `ncols`, and `plot_number` are all less than 10, a convenience exists such that a 3-digit number can be given instead, where the hundreds represent `nrows`, the tens represent `ncols` and the units represent `plot_number`. For instance,
```python
subplot(211) == subplot(2, 1, 1)
```
produces a subaxes in a figure which represents the top plot (i.e. the first) in a 2 rows by 1 column notional grid (no grid actually exists, but conceptually this is how the returned subplot has been positioned).

Let's try something a little more advanced. Previously we identified the top 15 countries based on total immigration from 1980 - 2013.

**Question:** Create a box plot to visualize the distribution of the top 15 countries (based on total immigration) grouped by the *decades* `1980s`, `1990s`, and `2000s`.

Step 1: Get the dataset. Get the top 15 countries based on Total immigrant population. Name the dataframe **df_top15**.
###Code
### type your answer here
df_top15 = df_can.sort_values(['Total'], ascending=False, axis=0).head(15)
df_top15
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:df_top15 = df_can.sort_values(['Total'], ascending=False, axis=0).head(15)df_top15--> Step 2: Create a new dataframe which contains the aggregate for each decade. One way to do that: 1. Create a list of all years in decades 80's, 90's, and 00's. 2. Slice the original dataframe df_can to create a series for each decade and sum across all years for each country. 3. Merge the three series into a new data frame. Call your dataframe **new_df**.
###Code
### type your answer here
# create a list of all years in decades 80's, 90's, and 00's
years_80s = list(map(str, range(1980, 1990)))
years_90s = list(map(str, range(1990, 2000)))
years_00s = list(map(str, range(2000, 2010)))
# slice the original dataframe df_can to create a series for each decade
df_80s = df_top15.loc[:, years_80s].sum(axis=1)
df_90s = df_top15.loc[:, years_90s].sum(axis=1)
df_00s = df_top15.loc[:, years_00s].sum(axis=1)
# merge the three series into a new data frame
new_df = pd.DataFrame({'1980s': df_80s, '1990s': df_90s, '2000s':df_00s})
new_df.head(15)
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ create a list of all years in decades 80's, 90's, and 00'syears_80s = list(map(str, range(1980, 1990))) years_90s = list(map(str, range(1990, 2000))) years_00s = list(map(str, range(2000, 2010))) --><!--\\ slice the original dataframe df_can to create a series for each decadedf_80s = df_top15.loc[:, years_80s].sum(axis=1) df_90s = df_top15.loc[:, years_90s].sum(axis=1) df_00s = df_top15.loc[:, years_00s].sum(axis=1)--><!--\\ merge the three series into a new data framenew_df = pd.DataFrame({'1980s': df_80s, '1990s': df_90s, '2000s':df_00s}) --><!--\\ display dataframenew_df.head()--> Let's learn more about the statistics associated with the dataframe using the `describe()` method.
###Code
### type your answer here
new_df.describe()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:new_df.describe()--> Step 3: Plot the box plots.
###Code
### type your answer here
new_df.plot(kind='box', figsize=(10, 6))
plt.title('Immigration from top 15 countries for decades 80s, 90s and 2000s')
plt.show()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:new_df.plot(kind='box', figsize=(10, 6))--><!--plt.title('Immigration from top 15 countries for decades 80s, 90s and 2000s')--><!--plt.show()--> Note how the box plot differs from the summary table created. The box plot scans the data and identifies the outliers. In order to be an outlier, the data value must be:* larger than Q3 by at least 1.5 times the interquartile range (IQR), or,* smaller than Q1 by at least 1.5 times the IQR.Let's look at decade 2000s as an example: * Q1 (25%) = 36,101.5 * Q3 (75%) = 105,505.5 * IQR = Q3 - Q1 = 69,404 Using the definition of outlier, any value that is greater than Q3 by 1.5 times IQR will be flagged as outlier.Outlier > 105,505.5 + (1.5 * 69,404) Outlier > 209,611.5
###Code
# let's check how many entries fall above the outlier threshold
new_df[new_df['2000s']> 209611.5]
###Output
_____no_output_____
###Markdown
China and India are both considered as outliers since their population for the decade exceeds 209,611.5. The box plot is an advanced visualization tool, and there are many options and customizations that exceed the scope of this lab. Please refer to [Matplotlib documentation](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.boxplot) on box plots for more information.

Scatter Plots

A `scatter plot` (2D) is a useful method of comparing variables against each other. `Scatter` plots look similar to `line plots` in that they both map independent and dependent variables on a 2D graph. While the datapoints are connected together by a line in a line plot, they are not connected in a scatter plot. The data in a scatter plot is considered to express a trend. With further analysis using tools like regression, we can mathematically calculate this relationship and use it to predict trends outside the dataset.

Let's start by exploring the following: Using a `scatter plot`, let's visualize the trend of total immigration to Canada (all countries combined) for the years 1980 - 2013.

Step 1: Get the dataset. Since we are expecting to use the relationship between `years` and `total population`, we will convert `years` to `int` type.
###Code
# we can use the sum() method to get the total population per year
df_tot = pd.DataFrame(df_can[years].sum(axis=0))
# change the years to type int (useful for regression later on)
df_tot.index = map(int, df_tot.index)
# reset the index to put in back in as a column in the df_tot dataframe
df_tot.reset_index(inplace = True)
# rename columns
df_tot.columns = ['year', 'total']
# view the final dataframe
df_tot.head()
###Output
_____no_output_____
###Markdown
Step 2: Plot the data. In `Matplotlib`, we can create a `scatter` plot by passing in `kind='scatter'` as plot argument. We will also need to pass in `x` and `y` keywords to specify the columns that go on the x- and the y-axis.
###Code
df_tot.plot(kind='scatter', x='year', y='total', figsize=(10, 6), color='darkblue')
plt.title('Total Immigration to Canada from 1980 - 2013')
plt.xlabel('Year')
plt.ylabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
Notice how the scatter plot does not connect the datapoints together. We can clearly observe an upward trend in the data: as the years go by, the total number of immigrants increases. We can mathematically analyze this upward trend using a regression line (line of best fit). So let's try to plot a linear line of best fit, and use it to predict the number of immigrants in 2015.Step 1: Get the equation of line of best fit. We will use **Numpy**'s `polyfit()` method by passing in the following:- `x`: x-coordinates of the data. - `y`: y-coordinates of the data. - `deg`: Degree of fitting polynomial. 1 = linear, 2 = quadratic, and so on.
###Code
x = df_tot['year'] # year on x-axis
y = df_tot['total'] # total on y-axis
fit = np.polyfit(x, y, deg=1)
fit
###Output
_____no_output_____
###Markdown
The output is an array with the polynomial coefficients, highest powers first. Since we are plotting a linear regression `y= a*x + b`, our output has 2 elements `[5.56709228e+03, -1.09261952e+07]` with the slope in position 0 and intercept in position 1.

Step 2: Plot the regression line on the `scatter plot`.
###Code
df_tot.plot(kind='scatter', x='year', y='total', figsize=(10, 6), color='darkblue')
plt.title('Total Immigration to Canada from 1980 - 2013')
plt.xlabel('Year')
plt.ylabel('Number of Immigrants')
# plot line of best fit
plt.plot(x, fit[0] * x + fit[1], color='red') # recall that x is the Years
plt.annotate('y={0:.0f} x + {1:.0f}'.format(fit[0], fit[1]), xy=(2000, 150000))
plt.show()
# print out the line of best fit
'No. Immigrants = {0:.0f} * Year + {1:.0f}'.format(fit[0], fit[1])
###Output
_____no_output_____
###Markdown
Using the equation of line of best fit, we can estimate the number of immigrants in 2015:
```python
No. Immigrants = 5567 * Year - 10926195
No. Immigrants = 5567 * 2015 - 10926195
No. Immigrants = 291,310
```
When compared to the actuals from Citizenship and Immigration Canada's (CIC) [2016 Annual Report](http://www.cic.gc.ca/english/resources/publications/annual-report-2016/index.asp), we see that Canada accepted 271,845 immigrants in 2015. Our estimated value of 291,310 is within 7% of the actual number, which is pretty good considering our original data came from United Nations (and might differ slightly from CIC data).

As a side note, we can observe that immigration took a dip around 1993 - 1997. Further analysis into the topic revealed that in 1993 Canada introduced Bill C-86 which introduced revisions to the refugee determination system, mostly restrictive. Further amendments to the Immigration Regulations cancelled the sponsorship required for "assisted relatives" and reduced the points awarded to them, making it more difficult for family members (other than nuclear family) to immigrate to Canada. These restrictive measures had a direct impact on the immigration numbers for the next several years.

**Question**: Create a scatter plot of the total immigration from Denmark, Norway, and Sweden to Canada from 1980 to 2013.

Step 1: Get the data:
1. Create a dataframe that consists of the numbers associated with Denmark, Norway, and Sweden only. Name it **df_countries**.
2. Sum the immigration numbers across all three countries for each year and turn the result into a dataframe. Name this new dataframe **df_total**.
3. Reset the index in place.
4. Rename the columns to **year** and **total**.
5. Display the resulting dataframe.
###Code
### type your answer here
# create df_countries dataframe
df_countries = df_can.loc[['Denmark', 'Norway', 'Sweden'], years].transpose()
# create df_total by summing across three countries for each year
df_total = pd.DataFrame(df_countries.sum(axis=1))
# reset index in place
df_total.reset_index(inplace=True)
# rename columns
df_total.columns = ['year', 'total']
# change column year from string to int to create scatter plot
df_total['year'] = df_total['year'].astype(int)
# show resulting dataframe
df_total.head()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ create df_countries dataframedf_countries = df_can.loc[['Denmark', 'Norway', 'Sweden'], years].transpose()--><!--\\ create df_total by summing across three countries for each yeardf_total = pd.DataFrame(df_countries.sum(axis=1))--><!--\\ reset index in placedf_total.reset_index(inplace=True)--><!--\\ rename columnsdf_total.columns = ['year', 'total']--><!--\\ change column year from string to int to create scatter plotdf_total['year'] = df_total['year'].astype(int)--><!--\\ show resulting dataframedf_total.head()--> Step 2: Generate the scatter plot by plotting the total versus year in **df_total**.
###Code
### type your answer here
# generate scatter plot
df_total.plot(kind='scatter', x='year', y='total', figsize=(10, 6), color='darkblue')
# add title and label to axes
plt.title('Immigration from Denmark, Norway, and Sweden to Canada from 1980 - 2013')
plt.xlabel('Year')
plt.ylabel('Number of Immigrants')
# show plot
plt.show()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ generate scatter plotdf_total.plot(kind='scatter', x='year', y='total', figsize=(10, 6), color='darkblue')--><!--\\ add title and label to axesplt.title('Immigration from Denmark, Norway, and Sweden to Canada from 1980 - 2013')plt.xlabel('Year')plt.ylabel('Number of Immigrants')--><!--\\ show plotplt.show()--> Bubble Plots A `bubble plot` is a variation of the `scatter plot` that displays three dimensions of data (x, y, z). The datapoints are replaced with bubbles, and the size of each bubble is determined by the third variable 'z', also known as the weight. In `matplotlib`, we can pass an array or scalar to the keyword `s` of `plot()`, which contains the weight of each point.**Let's start by analyzing the effect of Argentina's great depression**.Argentina suffered a great depression from 1998 - 2002, which caused widespread unemployment, riots, the fall of the government, and a default on the country's foreign debt. In terms of income, over 50% of Argentines were poor, and seven out of ten Argentine children were poor at the depth of the crisis in 2002. Let's analyze the effect of this crisis, and compare Argentina's immigration to that of its neighbour Brazil. Let's do that using a `bubble plot` of immigration from Brazil and Argentina for the years 1980 - 2013. We will set the weights for the bubble as the *normalized* value of the population for each year. Step 1: Get the data for Brazil and Argentina. Like in the previous example, we will convert the `Years` to type int and bring it into the dataframe.
###Code
df_can_t = df_can[years].transpose() # transposed dataframe
# cast the Years (the index) to type int
df_can_t.index = map(int, df_can_t.index)
# let's label the index. This will automatically be the column name when we reset the index
df_can_t.index.name = 'Year'
# reset index to bring the Year in as a column
df_can_t.reset_index(inplace=True)
# view the changes
df_can_t.head()
###Output
_____no_output_____
###Markdown
Step 2: Create the normalized weights. There are several methods of normalization in statistics, each with its own use. In this case, we will use [feature scaling](https://en.wikipedia.org/wiki/Feature_scaling) to bring all values into the range [0,1]. The general formula is `X' = (X - X_min) / (X_max - X_min)`, where *`X`* is an original value and *`X'`* is the normalized value. The formula sets the max value in the dataset to 1, and sets the min value to 0. The rest of the datapoints are scaled to a value between 0 and 1 accordingly.
###Code
# normalize Brazil data
norm_brazil = (df_can_t['Brazil'] - df_can_t['Brazil'].min()) / (df_can_t['Brazil'].max() - df_can_t['Brazil'].min())
# normalize Argentina data
norm_argentina = (df_can_t['Argentina'] - df_can_t['Argentina'].min()) / (df_can_t['Argentina'].max() - df_can_t['Argentina'].min())
###Output
_____no_output_____
###Markdown
Step 3: Plot the data. - To plot two different scatter plots in one plot, we can include the axes of one plot in the other by passing it via the `ax` parameter. - We will also pass in the weights using the `s` parameter. Given that the normalized weights are between 0-1, they won't be visible on the plot. Therefore we will: - multiply the weights by 2000 to scale them up on the graph, and, - add 10 to compensate for the min value (which has a 0 weight and therefore would not show up even after scaling by 2000).
###Code
# Brazil
ax0 = df_can_t.plot(kind='scatter',
x='Year',
y='Brazil',
figsize=(14, 8),
alpha=0.5, # transparency
color='green',
s=norm_brazil * 2000 + 10, # pass in weights
xlim=(1975, 2015)
)
# Argentina
ax1 = df_can_t.plot(kind='scatter',
x='Year',
y='Argentina',
alpha=0.5,
color="blue",
s=norm_argentina * 2000 + 10,
ax = ax0
)
ax0.set_ylabel('Number of Immigrants')
ax0.set_title('Immigration from Brazil and Argentina from 1980 - 2013')
ax0.legend(['Brazil', 'Argentina'], loc='upper left', fontsize='x-large')
###Output
_____no_output_____
###Markdown
The size of the bubble corresponds to the magnitude of the immigrating population for that year, compared to the 1980 - 2013 data. The larger the bubble, the more immigrants in that year.From the plot above, we can see a corresponding increase in immigration from Argentina during the 1998 - 2002 great depression. We can also observe a similar spike around 1985 to 1993. In fact, Argentina had suffered a great depression from 1974 - 1990, just before the onset of the 1998 - 2002 great depression. On a similar note, Brazil suffered the *Samba Effect* where the Brazilian real (currency) dropped nearly 35% in 1999. There was a fear of a South American financial crisis as many South American countries were heavily dependent on industrial exports from Brazil. The Brazilian government subsequently adopted an austerity program, and the economy slowly recovered over the years, culminating in a surge in 2010. The immigration data reflect these events. **Question**: Previously in this lab, we created box plots to compare immigration from China and India to Canada. Create bubble plots of immigration from China and India to visualize any differences with time from 1980 to 2013. You can use **df_can_t** that we defined and used in the previous example. Step 1: Normalize the data pertaining to China and India.
###Code
### type your answer here
norm_china = (df_can_t['China'] - df_can_t['China'].min()) / (df_can_t['China'].max() - df_can_t['China'].min())
norm_india = (df_can_t['India'] - df_can_t['India'].min()) / (df_can_t['India'].max() - df_can_t['India'].min())
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ normalize China datanorm_china = (df_can_t['China'] - df_can_t['China'].min()) / (df_can_t['China'].max() - df_can_t['China'].min())--><!-- normalize India datanorm_india = (df_can_t['India'] - df_can_t['India'].min()) / (df_can_t['India'].max() - df_can_t['India'].min())--> Step 2: Generate the bubble plots.
###Code
### type your answer here
# China
ax0 = df_can_t.plot(kind='scatter',
x='Year',
y='China',
figsize=(14, 8),
alpha=0.5, # transparency
color='green',
s=norm_china * 2000 + 10, # pass in weights
xlim=(1975, 2015)
)
# India
ax1 = df_can_t.plot(kind='scatter',
x='Year',
y='India',
alpha=0.5,
color="blue",
s=norm_india * 2000 + 10,
ax = ax0
)
ax0.set_ylabel('Number of Immigrants')
ax0.set_title('Immigration from China and India from 1980 - 2013')
ax0.legend(['China', 'India'], loc='upper left', fontsize='x-large')
###Output
_____no_output_____ |
BankCustomerPrediction/Copy_of_artificial_neural_network.ipynb | ###Markdown
Artificial Neural Network Importing the libraries
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
tf.__version__
###Output
_____no_output_____
###Markdown
Part 1 - Data Preprocessing Importing the dataset
###Code
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:-1].values
y = dataset.iloc[:, -1].values
print(X)
print(y)
###Output
[1 0 1 ... 1 1 0]
###Markdown
Encoding categorical data Label Encoding the "Gender" column
###Code
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
X[:, 2] = le.fit_transform(X[:, 2])
print(X)
###Output
[[1.0 0.0 0.0 ... 1 1 101348.88]
[0.0 0.0 1.0 ... 0 1 112542.58]
[1.0 0.0 0.0 ... 1 0 113931.57]
...
[1.0 0.0 0.0 ... 0 1 42085.58]
[0.0 1.0 0.0 ... 1 0 92888.52]
[1.0 0.0 0.0 ... 1 0 38190.78]]
###Markdown
One Hot Encoding the "Geography" column
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [1])], remainder='passthrough')
X = np.array(ct.fit_transform(X))
print(X)  # note: X_train does not exist yet at this point, so inspect the encoded feature matrix X instead
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Part 2 - Building the ANN Initializing the ANN
###Code
ann = tf.keras.models.Sequential()
###Output
_____no_output_____
###Markdown
Adding the input layer and the first hidden layer
###Code
ann.add(tf.keras.layers.Dense(units=6,activation = 'relu'))
###Output
_____no_output_____
###Markdown
Adding the second hidden layer
###Code
ann.add(tf.keras.layers.Dense(units=6,activation='relu'))
###Output
_____no_output_____
###Markdown
Adding the output layer
###Code
ann.add(tf.keras.layers.Dense(units=1,activation = 'sigmoid'))
###Output
_____no_output_____
###Markdown
Part 3 - Training the ANN
###Code
ann.compile(optimizer = 'adam',loss = 'binary_crossentropy',metrics = ['accuracy'])
###Output
_____no_output_____
###Markdown
Training the ANN on the Training set
###Code
ann.fit(X_train,y_train,batch_size=32,epochs=100)
###Output
Epoch 1/100
250/250 [==============================] - 0s 1ms/step - loss: 0.5581 - accuracy: 0.7591
Epoch 2/100
250/250 [==============================] - 0s 1ms/step - loss: 0.4566 - accuracy: 0.7965
Epoch 3/100
250/250 [==============================] - 0s 1ms/step - loss: 0.4366 - accuracy: 0.8029
Epoch 4/100
250/250 [==============================] - 0s 1ms/step - loss: 0.4227 - accuracy: 0.8165
Epoch 5/100
250/250 [==============================] - 0s 1ms/step - loss: 0.4087 - accuracy: 0.8305
Epoch 6/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3936 - accuracy: 0.8415
Epoch 7/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3801 - accuracy: 0.8481
Epoch 8/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3689 - accuracy: 0.8539
Epoch 9/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3603 - accuracy: 0.8556
Epoch 10/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3544 - accuracy: 0.8576
Epoch 11/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3505 - accuracy: 0.8575
Epoch 12/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3482 - accuracy: 0.8596
Epoch 13/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3467 - accuracy: 0.8597
Epoch 14/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3452 - accuracy: 0.8605
Epoch 15/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3441 - accuracy: 0.8599
Epoch 16/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3434 - accuracy: 0.8608
Epoch 17/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3424 - accuracy: 0.8609
Epoch 18/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3422 - accuracy: 0.8591
Epoch 19/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3419 - accuracy: 0.8609
Epoch 20/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3413 - accuracy: 0.8612
Epoch 21/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3411 - accuracy: 0.8600
Epoch 22/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3411 - accuracy: 0.8612
Epoch 23/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3408 - accuracy: 0.8608
Epoch 24/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3403 - accuracy: 0.8590
Epoch 25/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3404 - accuracy: 0.8612
Epoch 26/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3402 - accuracy: 0.8606
Epoch 27/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3399 - accuracy: 0.8596
Epoch 28/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3394 - accuracy: 0.8594
Epoch 29/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3391 - accuracy: 0.8611
Epoch 30/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3390 - accuracy: 0.8622
Epoch 31/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3388 - accuracy: 0.8615
Epoch 32/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3386 - accuracy: 0.8612
Epoch 33/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3385 - accuracy: 0.8610
Epoch 34/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3383 - accuracy: 0.8615
Epoch 35/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3382 - accuracy: 0.8620
Epoch 36/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3381 - accuracy: 0.8597
Epoch 37/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3380 - accuracy: 0.8611
Epoch 38/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3379 - accuracy: 0.8612
Epoch 39/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3379 - accuracy: 0.8611
Epoch 40/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3376 - accuracy: 0.8618
Epoch 41/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3373 - accuracy: 0.8611
Epoch 42/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3372 - accuracy: 0.8616
Epoch 43/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3376 - accuracy: 0.8606
Epoch 44/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3367 - accuracy: 0.8627
Epoch 45/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3372 - accuracy: 0.8610
Epoch 46/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3367 - accuracy: 0.8591
Epoch 47/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3368 - accuracy: 0.8618
Epoch 48/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3364 - accuracy: 0.8631
Epoch 49/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3362 - accuracy: 0.8615
Epoch 50/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3366 - accuracy: 0.8620
Epoch 51/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3362 - accuracy: 0.8610
Epoch 52/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3365 - accuracy: 0.8602
Epoch 53/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3357 - accuracy: 0.8616
Epoch 54/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3359 - accuracy: 0.8614
Epoch 55/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3358 - accuracy: 0.8616
Epoch 56/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3359 - accuracy: 0.8597
Epoch 57/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3355 - accuracy: 0.8619
Epoch 58/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3355 - accuracy: 0.8600
Epoch 59/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3353 - accuracy: 0.8609
Epoch 60/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3354 - accuracy: 0.8602
Epoch 61/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3354 - accuracy: 0.8615
Epoch 62/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3351 - accuracy: 0.8615
Epoch 63/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3349 - accuracy: 0.8614
Epoch 64/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3349 - accuracy: 0.8610
Epoch 65/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3349 - accuracy: 0.8614
Epoch 66/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3348 - accuracy: 0.8611
Epoch 67/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3345 - accuracy: 0.8609
Epoch 68/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3348 - accuracy: 0.8626
Epoch 69/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3346 - accuracy: 0.8626
Epoch 70/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3342 - accuracy: 0.8610
Epoch 71/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3346 - accuracy: 0.8611
Epoch 72/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3343 - accuracy: 0.8611
Epoch 73/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3343 - accuracy: 0.8612
Epoch 74/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3344 - accuracy: 0.8606
Epoch 75/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3337 - accuracy: 0.8608
Epoch 76/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3341 - accuracy: 0.8622
Epoch 77/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3338 - accuracy: 0.8612
Epoch 78/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3339 - accuracy: 0.8608
Epoch 79/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3338 - accuracy: 0.8622
Epoch 80/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3337 - accuracy: 0.8615
Epoch 81/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3339 - accuracy: 0.8608
Epoch 82/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3339 - accuracy: 0.8604
Epoch 83/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3335 - accuracy: 0.8614
Epoch 84/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3339 - accuracy: 0.8622
Epoch 85/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3336 - accuracy: 0.8615
Epoch 86/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3336 - accuracy: 0.8618
Epoch 87/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3336 - accuracy: 0.8610
Epoch 88/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3335 - accuracy: 0.8611
Epoch 89/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3335 - accuracy: 0.8621
Epoch 90/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3334 - accuracy: 0.8624
Epoch 91/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3335 - accuracy: 0.8629
Epoch 92/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3334 - accuracy: 0.8600
Epoch 93/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3329 - accuracy: 0.8610
Epoch 94/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3335 - accuracy: 0.8600
Epoch 95/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3338 - accuracy: 0.8626
Epoch 96/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3333 - accuracy: 0.8637
Epoch 97/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3329 - accuracy: 0.8624
Epoch 98/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3333 - accuracy: 0.8612
Epoch 99/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3332 - accuracy: 0.8618
Epoch 100/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3329 - accuracy: 0.8622
###Markdown
Part 4 - Making the predictions and evaluating the model Predicting the result of a single observation **Homework**: Use our ANN model to predict whether the customer with the following information will leave the bank: Geography: France; Credit Score: 600; Gender: Male; Age: 40 years old; Tenure: 3 years; Balance: \$ 60000; Number of Products: 2; Does this customer have a credit card? Yes; Is this customer an Active Member? Yes; Estimated Salary: \$ 50000. So, should we say goodbye to that customer? **Solution**
###Code
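# The original cell was left empty, so the line below is a reconstruction (an assumption)
# that matches the printed output [[False]] and the explanation in the next markdown cell:
# the single observation is passed as a 2D array, with "France" one-hot encoded as 1, 0, 0
# in the first three columns and "Male" encoded as 1, and it is scaled with the same
# StandardScaler (sc) that was fit on the training set.
print(ann.predict(sc.transform([[1, 0, 0, 600, 1, 40, 3, 60000, 2, 1, 1, 50000]])) > 0.5)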
###Output
[[False]]
###Markdown
Therefore, our ANN model predicts that this customer stays in the bank!**Important note 1:** Notice that the values of the features were all input in a double pair of square brackets. That's because the "predict" method always expects a 2D array as the format of its inputs, and putting our values into a double pair of square brackets makes the input exactly a 2D array.**Important note 2:** Notice also that "France" was not input as a string but as "1, 0, 0" in the first three columns. That's because the predict method expects the one-hot-encoded values of the "Geography" variable, and as we see in the first row of the matrix of features X, "France" was encoded as "1, 0, 0". Be careful to include these values in the first three columns, because the dummy variables are always created in the first columns. Predicting the Test set results
###Code
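# The original cell was left empty; this sketch (an assumption consistent with the output
# below) predicts churn probabilities for the test set, thresholds them at 0.5, and prints
# the predictions side by side with the true labels in y_test.
y_pred = ann.predict(X_test)
y_pred = (y_pred > 0.5)
print(np.concatenate((y_pred.reshape(len(y_pred), 1), y_test.reshape(len(y_test), 1)), 1))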
###Output
[[0 0]
[0 1]
[0 0]
...
[0 0]
[0 0]
[0 0]]
###Markdown
Making the Confusion Matrix
###Code
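# The original cell was left empty; a minimal sketch (an assumption consistent with the
# confusion matrix printed below) using scikit-learn on the test-set predictions above.
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)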
###Output
[[1516 79]
[ 200 205]]
|
Crypto_Analysis.ipynb | ###Markdown
Quarterly Crypto Market Review Framework Setting up the folders and importing modules
###Code
import os
from datetime import date, datetime
import calendar
import requests
import pandas as pd
import numpy as np
from threading import Thread
import sqlite3
import time
import json
Data_Path = 'Data'
Result_Path = 'Results'
for path in [Data_Path,Result_Path]:
if not os.path.exists(path):
os.makedirs(path)
###Output
_____no_output_____
###Markdown
Add new exchanges to exchange csv file
###Code
url='https://min-api.cryptocompare.com/data/all/exchanges'
data = requests.get(url).json()
ccc_new_df = pd.DataFrame.from_dict(data).T
ccc_new_df.to_csv(os.path.join(Data_Path,'CCCExchanges_List.csv'))
new_exchanges = set(ccc_new_df.index.values)
ccc_old_df = pd.read_csv(os.path.join(Data_Path,'CCCExchanges_Old.csv'))
ccc_old_df = ccc_old_df[~pd.isnull(ccc_old_df['Country'])]
old_exchanges = set(ccc_old_df['Source'].values)
old_exchanges.add('CCCAGG')
## In old but not in new
if old_exchanges-new_exchanges:
print("That's weird: ",old_exchanges-new_exchanges)
## In new but not in old
print('%s exchanges not included in the old exchange list'%len(new_exchanges-old_exchanges))
new_ex = pd.DataFrame(list(new_exchanges-old_exchanges),columns = ['Source'])
complete_ex = ccc_old_df.append(new_ex.sort_values(by=['Source']))
complete_ex = complete_ex[ccc_old_df.columns]
complete_ex.to_csv(os.path.join(Data_Path,'CCCExchanges.csv'),index=False)
complete_ex
## Download all coins from CCC
def unix_time(d):
return calendar.timegm(d.timetuple())
end_date = datetime.today()
url='https://min-api.cryptocompare.com/data/all/exchanges'
data = requests.get(url).json()
ccc_df = pd.DataFrame.from_dict(data).T
ex_list = list(ccc_df.index)
ex_list.remove('EtherDelta')
ex_list.append('EtherDelta')
conn = sqlite3.connect(os.path.join(Data_Path,"CCC.db"))
cursor = conn.cursor()
all_crypto = []
pair_list = pd.DataFrame(columns = ['Exchange','Crypto','Fiat'])
for ex in ex_list:
cur_exchange = ccc_df.loc[ex].dropna()
ex_currencies = cur_exchange.index
all_crypto = list(set(all_crypto))
for crypto in ex_currencies:
fiat_list = cur_exchange.loc[crypto]
for fiat in fiat_list:
#Make a List of Cryptos to go through
pair_list=pair_list.append(pd.DataFrame([[ex,crypto,fiat]],columns = ['Exchange','Crypto','Fiat']))
## Add cccagg USD exchange rate for all cryptos
cccagg_df = pd.DataFrame(columns = ['Exchange','Crypto','Fiat'])
cccagg_df['Crypto']=pair_list['Crypto'].unique()
cccagg_df['Exchange']='cccagg'
cccagg_df['Fiat']='USD'
pair_list=pair_list.append(cccagg_df)
pair_list.reset_index(inplace=True,drop=True)
pair_list.to_csv(os.path.join(Data_Path,"Exchange_Pair_List.csv"))
pair_list
## Download all coins from CCC
#This has been outsourced to an external python script
def unix_time(d):
return calendar.timegm(d.timetuple())
end_date = datetime.today()
# os.remove(os.path.join(Data_Path,"CCC_new.db"))
conn = sqlite3.connect(os.path.join(Data_Path,"CCC_new.db"))
# cursor = conn.cursor()
all_crypto = []
#Benchmark
pair_list = pd.read_csv(os.path.join(Data_Path,"Exchange_Pair_List.csv"))
def partition(pair_list, threads=4):
    # split the row indices into `threads` roughly equal chunks, one chunk per worker thread
    return np.array_split(range(len(pair_list)), threads)
def download_rows(pair_list,res_index=0,start=0,end=0,sleep_time=60):
start_time = time.time()
res_df = pd.DataFrame()
cur_sleep_time = sleep_time
if not end:
end = len(pair_list)
for index,row in pair_list[start:end].iterrows():
crypto = row['Crypto']
fiat = row['Fiat']
ex = row['Exchange']
try:
hit_url = 'https://min-api.cryptocompare.com/data/histoday?fsym='+str(crypto)+'&tsym='+str(fiat)+'&limit=2000&aggregate=1&toTs='+str(unix_time(end_date))+'&e='+ str(ex)
#Check for rate limit! If we hit rate limit, then wait!
while True:
d = json.loads(requests.get(hit_url).text)
if d['Response'] =='Success':
df = pd.DataFrame(d["Data"])
if index%1000==0:
print('hitting', ex, crypto.encode("utf-8"), fiat, 'on thread', res_index)
if not df.empty:
df['Source']=ex
df['From']=crypto
df['To']=fiat
df=df[df['volumeto']>0.0]
res_df = res_df.append(df)
cur_sleep_time = sleep_time
break
else:
cur_sleep_time = int((np.random.rand()+.5)*cur_sleep_time*1.5)
if cur_sleep_time>1800:
print('Hit rate limit on thread %d, waiting for %ds'%(res_index,cur_sleep_time))
time.sleep(cur_sleep_time)
except Exception as err:
time.sleep(15)
print('problem with',ex.encode("utf-8"),crypto,fiat)
end_time = time.time()
result_dfs[res_index] = res_df
print('Total time spent %ds on thread %d'%(end_time-start_time,res_index))
threads = 4
parts = partition(pair_list[:500],threads)
thread_list = [0 for _ in range(threads)]
result_dfs = [0 for _ in range(threads)]
for i, pair in enumerate(parts):
    thread_list[i] = Thread(target=download_rows, args=(pair_list, i, pair[0], pair[-1] + 1,))  # end index is exclusive, so add 1 to include the last row of each chunk
for i in range(threads):
# starting thread i
thread_list[i].start()
for i in range(threads):
thread_list[i].join()
for result in result_dfs:
result.to_sql("Data", conn, if_exists="append")
print(len(result))
conn.commit()
conn.close()
hit_url = 'https://min-api.cryptocompare.com/data/histoday?fsym=LTC&tsym=BTC&limit=2000&aggregate=1&toTs=1570012197&e=Bitfinex'
d = json.loads(requests.get(hit_url).text)
#Check for rate limit! If we hit rate limit, then wait!
df = pd.DataFrame(d["Data"])
print(d['Response'])
print(d)
(np.random.rand()+.5)
sleep_seconds = 30
print('Hit rate limit, waiting for %ds' % sleep_seconds)
time.sleep(sleep_seconds)
print('done')
df = pd.DataFrame({
'first column': [1, 2, 3, 4],
'second column': [10, 20, 30, 40]
})
res_df = pd.DataFrame()
res_df= res_df.append(df)
res_df
pair_list[10:30]
list(range(100))[10:]
## Download all coins from CCC
from threading import Thread
import sqlite3
import time
def unix_time(d):
return calendar.timegm(d.timetuple())
end_date = datetime.today()
conn = sqlite3.connect(os.path.join(Data_Path,"CCC_new.db"))
cursor = conn.cursor()
all_crypto = []
#Benchmark
pair_list = pd.read_csv(os.path.join(Data_Path,"Exchange_Pair_List.csv"))
pair_list = pair_list.iloc[:100]
start_time = time.time()

def download_rows(pair_list, conn, start=0, end=0):
    # keep the result accumulator local so it is defined before its first use
    res_df = pd.DataFrame()
    if not end:
        end = len(pair_list)
    for index, row in pair_list[start:end].iterrows():
        crypto = row['Crypto']
        fiat = row['Fiat']
        ex = row['Exchange']
        try:
            hit_url = 'https://min-api.cryptocompare.com/data/histoday?fsym='+str(crypto)+'&tsym='+str(fiat)+'&limit=2000&aggregate=1&toTs='+str(unix_time(end_date))+'&e='+ str(ex)
            d = json.loads(requests.get(hit_url).text)
            df = pd.DataFrame(d["Data"])
            if not df.empty:
                print('hitting', ex, crypto.encode("utf-8"), fiat)
                # record the crypto symbols we actually found data for
                # (mutates the module-level list defined above without rebinding it)
                all_crypto.append(crypto)
                df['Source'] = ex
                df['From'] = crypto
                df['To'] = fiat
                df = df[df['volumeto'] > 0.0]
                res_df = res_df.append(df)
                # periodically flush the accumulated rows to the database
                # (note: sharing one sqlite3 connection across threads can fail;
                # the threaded version above accumulates results and writes after join)
                if index % 1000 == 1:
                    res_df.to_sql("Data", conn, if_exists="append")
                    res_df = pd.DataFrame()
        except Exception as err:
            time.sleep(15)
            print('problem with', ex.encode("utf-8"), crypto, fiat)
    # write any remaining rows and report elapsed time
    if not res_df.empty:
        res_df.to_sql("Data", conn, if_exists="append")
    print('Total time spent %ds' % (time.time() - start_time))
t1 = Thread(target=download_rows, args=(pair_list,conn,0,50,))
t2 = Thread(target=download_rows, args=(pair_list,conn,50,100,))
# starting thread 1
t1.start()
# starting thread 2
t2.start()
# wait until thread 1 is completely executed
t1.join()
# wait until thread 2 is completely executed
t2.join()
# url='https://min-api.cryptocompare.com/data/all/exchanges'
# data = requests.get(url).json()
# ccc_df = pd.DataFrame.from_dict(data).T
# ex_list = list(ccc_df.index)
# ex_list.remove('EtherDelta')
# conn = sqlite3.connect(os.path.join(Data_Path,"CCC_new.db"))
# cursor = conn.cursor()
# all_crypto = []
# for ex in ex_list:
# cur_exchange = ccc_df.loc[ex].dropna()
# ex_currencies = cur_exchange.index
# all_crypto = list(set(all_crypto))
# for crypto in ex_currencies:
# fiat_list = cur_exchange.loc[crypto]
# for fiat in fiat_list:
# try:
# hit_url = 'https://min-api.cryptocompare.com/data/histoday?fsym='+str(crypto)+'&tsym='+str(fiat)+'&limit=2000&aggregate=1&toTs='+str(unix_time(end_date))+'&e='+ str(ex)
# # print(hit_url)
# d = json.loads(requests.get(hit_url).text)
# df=pd.DataFrame(d["Data"])
# if not df.empty:
# # print('hitting',ex,crypto.encode("utf-8"),fiat)
# all_crypto = all_crypto + [crypto]
# df['Source']=ex
# df['From']=crypto
# df['To']=fiat
# df=df[df['volumeto']>0.0]
# df.to_sql("Data", conn, if_exists="append")
# except Exception as err:
# time.sleep(15)
# print('problem with',ex.encode("utf-8"),crypto)
# #Final run with cccagg
# ex='cccagg'
# all_crypto = list(set(all_crypto))
# fiat_list = ['USD']
# for crypto in all_crypto:
# for fiat in fiat_list:
# try:
# hit_url = 'https://min-api.cryptocompare.com/data/histoday?fsym='+str(crypto)+'&tsym='+str(fiat)+'&limit=2000&aggregate=1&toTs='+str(unix_time(end_date))+'&e='+ str(ex)
# # print(hit_url)
# d = json.loads(requests.get(hit_url).text)
# df=pd.DataFrame(d["Data"])
# if not df.empty:
# print('hitting',ex,crypto,fiat)
# df['Source']=ex
# df['From']=crypto
# df['To']=fiat
# df=df[df['volumeto']>0.0]
# df.to_sql("Data", conn, if_exists="append")
# except Exception as err:
# print('problem with',ex,crypto)
# #Final final run with Etherdelta dropping all weird characters
# ex = 'EtherDelta'
# cur_exchange = ccc_df.loc[ex].dropna()
# ex_currencies = cur_exchange.index
# all_crypto = list(set(all_crypto))
# for crypto in ex_currencies:
# fiat_list = cur_exchange.loc[crypto]
# for fiat in fiat_list:
# try:
# hit_url = 'https://min-api.cryptocompare.com/data/histoday?fsym='+str(crypto)+'&tsym='+str(fiat)+'&limit=2000&aggregate=1&toTs='+str(unix_time(end_date))+'&e='+ str(ex)
# d = json.loads(requests.get(hit_url).text)
# df=pd.DataFrame(d["Data"])
# if not df.empty:
# # print('hitting',ex,crypto.encode("utf-8"),fiat)
# all_crypto = all_crypto + [crypto]
# df['Source']=ex
# df['From']=crypto
# df['To']=fiat
# df=df[df['volumeto']>0.0]
# df.to_sql("Data", conn, if_exists="append")
# except Exception as err:
# time.sleep(10)
# print('problem with',ex.encode("utf-8"),crypto)
# # Commit changes and close connection
conn.commit()
conn.close()
###Output
_____no_output_____ |
music recommendation.ipynb | ###Markdown
Data Preparation and Exploration
###Code
# imports needed by this notebook (pandas/numpy for dataframes, sqlite3 for the track metadata db)
import sqlite3

import numpy as np
import pandas as pd

data_home = './'
# get data from data file
user_song_df = pd.read_csv(filepath_or_buffer=data_home+'train_triplets.txt',
sep='\t', header=None,
names=['user','song','play_count'])
# check general info of dataframe
user_song_df.info()
user_song_df.head(10)
# check play count for each user
user_play_count = pd.DataFrame(user_song_df.groupby('user')['play_count'].sum())
user_play_count=user_play_count.sort_values('play_count',ascending=False)
user_play_count.head(10)
user_play_count.info()
user_play_count.describe()
# check play count for each song
song_play_count = pd.DataFrame(user_song_df.groupby('song')['play_count'].sum())
song_play_count = song_play_count.sort_values('play_count',ascending=False)
song_play_count.head(10)
song_play_count.info()
song_play_count.describe()
user_play_count.head(100000)['play_count'].sum()/user_play_count['play_count'].sum()
song_play_count.head(30000)['play_count'].sum()/song_play_count['play_count'].sum()
# songs are too many, only choose 30000 songs from the most listened songs
song_count_subset = song_play_count.head(30000)
# users are too many, only choose 100000 users from whom listened most songs
user_count_subset = user_play_count.head(n=100000)
#keep 100K users and 30k songs , delete others
subset = user_song_df[(user_song_df['user'].isin(user_count_subset.index))&(user_song_df['song'].isin(song_count_subset.index))]
subset.info()
del(user_song_df)
subset.head(10)
# get song detailed information
conn = sqlite3.connect(data_home+'track_metadata.db')
cur = conn.cursor()
track_metadata_df = pd.read_sql(con=conn, sql='select * from songs')
metadata_df_sub = track_metadata_df[track_metadata_df.song_id.isin(song_count_subset.index)]
metadata_df_sub.head(5)
metadata_df_sub.info()
#remove useless info
metadata_df_sub = metadata_df_sub.drop('track_id',axis=1)
metadata_df_sub = metadata_df_sub.drop('artist_mbid',axis=1)
#remove duplicate rows
metadata_df_sub = metadata_df_sub.drop_duplicates('song_id')
data_merge = pd.merge(subset,metadata_df_sub,how='left',left_on='song',right_on='song_id')
data_merge.head(5)
#remove useless features
del(data_merge['song_id'])
del(data_merge['artist_id'])
del(data_merge['duration'])
del(data_merge['artist_familiarity'])
del(data_merge['artist_hotttnesss'])
del(data_merge['track_7digitalid'])
del(data_merge['shs_perf'])
del(data_merge['shs_work'])
data_merge.head(5)
data_merge.rename(columns={"play_count": "listen_count"},inplace=True)
data_merge.head(5)
# check user listen count distribution
user_listen =data_merge.groupby('user')['title'].count().reset_index().sort_values(by='title',ascending = False)
import matplotlib.pyplot as plt
%matplotlib inline
user_listen.head(5)
user_listen.describe()
plt.figure(figsize=(7,8))
# plt.hist(user_listen['title'])
n, bins, patches = plt.hist(user_listen['title'], 50, facecolor='green', alpha=0.75)
plt.xlabel('Play Counts')
plt.ylabel('Num of Users')
plt.title('Histogram of User Play Count Distribution')
###Output
_____no_output_____
###Markdown
Popularity-Based Recommendation
###Code
from sklearn.model_selection import train_test_split
train_data, test_data = train_test_split(data_merge, test_size=0.4, random_state=0)
# recommend by popularity for new user who doesn't have listening record
def popularity_recommendation(data, user_id, item_id,recommend_num):
# based on the item_id, get the popular items
popular = data.groupby(item_id)[user_id].count().reset_index()
# rename groupby column
popular.rename(columns = {user_id:"score"},inplace = True)
# sort the data
popular = popular.sort_values('score',ascending=False)
# create rank
popular['rank'] = popular['score'].rank(ascending=0, method='first')
return popular.set_index('rank').head(recommend_num)
# recommend songs
popularity_recommendation(train_data,'user','title',20)
# recommend releases
popularity_recommendation(train_data,'user','release',10)
# recommend artists
popularity_recommendation(train_data,'user','artist_name',20)
###Output
_____no_output_____
###Markdown
Item-based Collaborative Filtering Recommendation
###Code
# Item-based collaborative filtering
#Refer to https://github.com/llSourcell/recommender_live/blob/master/Song%20Recommender_Python.ipynb
class ItemCFRecommendation:
def __init__(self):
self.train_data = None
self.user_id = None
self.item_id = None
self.cooccurence_matrix = None
def set_data(self,train_data, user_id, item_id):
self.train_data = train_data
self.user_id = user_id
self.item_id = item_id
# get listened songs of a certain user
def get_user_items(self,user):
user_data = self.train_data[self.train_data[self.user_id] == user]
user_items = list(user_data[self.item_id].unique())
return user_items
# get users who listened to a certain song
def get_item_users(self,item):
item_data = self.train_data[self.train_data[self.item_id] == item]
item_users = list(item_data[self.user_id].unique())
return item_users
# get all unique items from trainning data
def get_all_items_training_data(self):
all_items = list(self.train_data[self.item_id].unique())
return all_items
# construct cooccurence matrix
def construct_cooccurence_matrix(self, user_items, all_items):
# get all the users which listened to the songs that the certain user listened to
user_item_users = []
for i in user_items:
users = self.get_item_users(i)
user_item_users.append(users)
self.cooccurence_matrix = np.zeros((len(user_items),len(all_items)),float)
print(self.cooccurence_matrix.shape)
# calculate the similarity between user listened songs and all songs in the training data
# using Jaccard similarity coefficient
for i in range(0, len(user_items)):
# get users of a certain listened song of a certain user
user_listened_certain = set(user_item_users[i])
for j in range(0, len(all_items)):
user_unique = self.get_item_users(all_items[j])
user_intersection = user_listened_certain.intersection(user_unique)
if len(user_intersection)!=0:
user_union = user_listened_certain.union(user_unique)
self.cooccurence_matrix[i][j] = float(len(user_intersection)/len(user_union))
else:
self.cooccurence_matrix[i][j] = 0
return self.cooccurence_matrix
# use cooccurence matrix to make top recommendation
def generate_top_recommendation(self, user, all_songs, user_songs):
print("Non Zero values in cooccurence %d" % np.count_nonzero(self.cooccurence_matrix))
# get average similarity between all the listened songs and a certain song
scores = self.cooccurence_matrix.sum(axis=0)/float(self.cooccurence_matrix.shape[0])
print("score's shape: {n}".format(n=scores.shape))
scores = scores.tolist()
sort_index = sorted(((e,i) for (i,e) in enumerate(scores)),reverse=True)
col = ['user_id', 'song', 'score', 'rank']
df = pd.DataFrame(columns=col)
rank = 1
for i in range(0,len(sort_index)):
if ~np.isnan(sort_index[i][0]) and all_songs[sort_index[i][1]] not in user_songs and rank <= 10:
df.loc[len(df)]=[user,all_songs[sort_index[i][1]],sort_index[i][0],rank]
rank = rank+1
if df.shape[0] == 0:
print("The current user has no songs for training the item similarity based recommendation model.")
return -1
else:
return df
def recommend(self, user):
user_songs = self.get_user_items(user)
print("No. of unique songs for the user: %d" % len(user_songs))
all_songs = self.get_all_items_training_data()
print("no. of unique songs in the training set: %d" % len(all_songs))
self.construct_cooccurence_matrix(user_songs, all_songs)
df_recommendations = self.generate_top_recommendation(user, all_songs, user_songs)
return df_recommendations
data_merge.head(5)
len(data_merge)
data_merge['song'].nunique()
song_count_subset = song_count_subset.head(5000)
len(song_count_subset)
song_count_subset.head(5)
song_sub = song_count_subset.index
# data is too large to calculate, limit to 1000 songs
data_merge_sub = data_merge[data_merge['song'].isin(song_sub[:1000])]
len(data_merge_sub)
data_merge_sub['song'].nunique()
data_merge_sub['user'].nunique()
del(train_data)
del(test_data)
# data is too large to calculate, limit to 10000 users
data_merge_sub = data_merge_sub[data_merge_sub['user'].isin(user_count_subset.head(10000).index)]
len(data_merge_sub)
data_merge_sub['user'].nunique()
data_merge_sub['song'].nunique()
user_count_subset.head(5)
train_data, test_data = train_test_split(data_merge_sub,test_size=0.3, random_state=0)
model = ItemCFRecommendation()
model.set_data(train_data,'user','title')
# get a specific user
user = train_data['user'].iloc[7]
# recommend songs for this user
model.recommend(user)
###Output
No. of unique songs for the user: 115
no. of unique songs in the training set: 997
(115, 997)
Non Zero values in cooccurence 113158
score's shape: (997,)
|
examples/Introducing_CivisML_v2.ipynb | ###Markdown
Introducing CivisML 2.0Note: We are continually releasing changes to CivisML, and this notebook is useful for any versions 2.0.0 and above.Data scientists are on the front lines of their organizationโs most important customer growth and engagement questions, and they need to guide action as quickly as possible by getting models into production. CivisML is a machine learning service that makes it possible for data scientists to massively increase the speed with which they can get great models into production. And because itโs built on open-source packages, CivisML remains transparent and data scientists remain in control.In this notebook, weโll go over the new features introduced in CivisML 2.0. For a walkthrough of CivisMLโs fundamentals, check out this introduction to the mechanics of CivisML: https://github.com/civisanalytics/civis-python/blob/master/examples/CivisML_parallel_training.ipynbCivisML 2.0 is full of new features to make modeling faster, more accurate, and more portable. This notebook will cover the following topics:- CivisML overview- Parallel training and validation- Use of the new ETL transformer, `DataFrameETL`, for easy, customizable ETL- Stacked models: combine models to get one bigger, better model- Model portability: get trained models out of CivisML- Multilayer perceptron models: neural networks built in to CivisML- Hyperband: a smarter alternative to grid searchCivisML can be used to build models that answer all kinds of business questions, such as what movie to recommend to a customer, or which customers are most likely to upgrade their accounts. For the sake of example, this notebook uses a publicly available dataset on US colleges, and focuses on predicting the type of college (public non-profit, private non-profit, or private for-profit).
###Code
# first, let's import the packages we need
import requests
from io import StringIO
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import model_selection
# import the Civis Python API client
import civis
# ModelPipeline is the class used to build CivisML models
from civis.ml import ModelPipeline
# Suppress warnings for demo purposes. This is not recommended as a general practice.
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Downloading dataBefore we build any models, we need a dataset to play with. We're going to use the most recent College Scorecard data from the Department of Education.This dataset is collected to study the performance of US higher education institutions. You can learn more about it in [this technical paper](https://collegescorecard.ed.gov/assets/UsingFederalDataToMeasureAndImprovePerformance.pdf), and you can find details on the dataset features in [this data dictionary](https://collegescorecard.ed.gov/data/).
###Code
# Downloading data; this may take a minute
# Two kind of nulls
df = pd.read_csv("https://ed-public-download.app.cloud.gov/downloads/Most-Recent-Cohorts-All-Data-Elements.csv", sep=",", na_values=['NULL', 'PrivacySuppressed'], low_memory=False)
# How many rows and columns?
df.shape
# What are some of the column names?
df.columns
###Output
_____no_output_____
###Markdown
Data MungingBefore running CivisML, we need to do some basic data munging, such as removing missing data from the dependent variable, and splitting the data into training and test sets.Throughout this notebook, we'll be trying to predict whether a college is public (labelled as 1), private non-profit (2), or private for-profit (3). The column name for this dependent variable is "CONTROL".
###Code
# Make sure to remove any rows with nulls in the dependent variable
df = df[np.isfinite(df['CONTROL'])]
# split into training and test sets
train_data, test_data = model_selection.train_test_split(df, test_size=0.2)
# print a few sample columns
train_data.head()
###Output
_____no_output_____
###Markdown
Some of these columns are duplicates, or contain information we don't want to use in our model (like college names and URLs). CivisML can take a list of columns to exclude and do this part of the data munging for us, so let's make that list here.
###Code
to_exclude = ['ADM_RATE_ALL', 'OPEID', 'OPEID6', 'ZIP', 'INSTNM',
'INSTURL', 'NPCURL', 'ACCREDAGENCY', 'T4APPROVALDATE',
'STABBR', 'ALIAS', 'REPAY_DT_MDN', 'SEPAR_DT_MDN']
###Output
_____no_output_____
###Markdown
Basic CivisML UsageWhen building a supervised model, there are a few basic things you'll probably want to do:1. Transform the data into a modelling-friendly format2. Train the model on some labelled data3. Validate the model4. Use the model to make predictions about unlabelled dataCivisML does all of this in three lines of code. Let's fit a basic sparse logistic model to see how. The first thing we need to do is build a `ModelPipeline` object. This stores all of the basic configuration options for the model. We'll tell it things like the type of model, dependent variable, and columns we want to exclude. CivisML handles basic ETL for you, including categorical expansion of any string-type columns.
###Code
# Use a push-button workflow to fit a model with reasonable default parameters
sl_model = ModelPipeline(model='sparse_logistic',
model_name='Example sparse logistic',
primary_key='UNITID',
dependent_variable=['CONTROL'],
excluded_columns=to_exclude)
###Output
_____no_output_____
###Markdown
Next, we want to train and validate the model by calling `.train` on the `ModelPipeline` object. CivisML uses 4-fold cross-validation on the training set. You can train on local data or query data from Redshift. In this case, we have our data locally, so we just pass the data frame.
###Code
sl_train = sl_model.train(train_data)
###Output
_____no_output_____
###Markdown
This returns a `ModelFuture` object, which is *non-blocking*-- this means that you can keep doing things in your notebook while the model runs on Civis Platform in the background. If you want to make a blocking call (one that doesn't complete until your model is finished), you can use `.result()`.
###Code
# non-blocking
sl_train
# blocking
sl_train.result()
###Output
_____no_output_____
###Markdown
Parallel Model Tuning and ValidationWe didn't specify the number of jobs in the `.train()` call above, but behind the scenes, the model was actually training in parallel! In CivisML 2.0, model tuning and validation will automatically be distributed across your computing cluster, without ever using more than 90% of the cluster resources. This means that you can build models faster and try more model configurations, leaving you more time to think critically about your data. If you decide you want more control over the resources you're using, you can set the `n_jobs` parameter to a specific number of jobs, and CivisML won't run more than that at once, as in the sketch below.
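A minimal sketch of capping the parallelism explicitly (routing `n_jobs` through `.train()` and the value 4 are illustrative assumptions, not something run in this notebook):

```python
# Cap CivisML at four simultaneous tuning/validation jobs for this training run.
# Note: passing n_jobs here and the value 4 are assumptions for illustration only.
sl_train_capped = sl_model.train(train_data, n_jobs=4)
```

We can see how well the model did by looking at the validation metrics.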
###Code
# loop through the metric names and print to screen
metrics = [print(key) for key in sl_train.metrics.keys()]
# ROC AUC for each of the three categories in our dependent variable
sl_train.metrics['roc_auc']
###Output
_____no_output_____
###Markdown
Impressive!This is the basic CivisML workflow: create the model, train, and make predictions. There are other configuration options for more complex use cases; for example, you can create a custom estimator, pass custom dependencies, manage the computing resources for larger models, and more. For more information, see the Machine Learning section of the [Python API client docs](https://civis-python.readthedocs.io).Now that we can build a simple model, let's see what's new to CivisML 2.0! Custom ETLCivisML can do several data transformations to prepare your data for modeling. This makes data preprocessing easier, and makes it part of your model pipeline rather than an additional script you have to run. CivisML's built-in ETL includes:- Categorical expansion: expand a single column of strings or categories into separate binary variables.- Dropping columns: remove columns not needed in a model, such as an ID number.- Removing null columns: remove columns that contain no data.With CivisML 2.0, you can now recreate and customize this ETL using `DataFrameETL`, our open source ETL transformer, [available on GitHub](https://github.com/civisanalytics/civisml-extensions).By default, CivisML will use DataFrameETL to automatically detect non-numeric columns for categorical expansion. Our example college dataset has a lot of integer columns which are actually categorical, but we can make sure they're handled correctly by passing CivisML a custom ETL transformer.
###Code
# The ETL transformer used in CivisML can be found in the civismlext module
from civismlext.preprocessing import DataFrameETL
###Output
_____no_output_____
###Markdown
This creates a list of columns to categorically expand, identified using the data dictionary available [here](https://collegescorecard.ed.gov/data/).
###Code
# column indices for columns to expand
to_expand = list(df.columns[:21]) + list(df.columns[23:36]) + list(df.columns[99:290]) + \
list(df.columns[[1738, 1773, 1776]])
# create ETL estimator to pass to CivisML
etl = DataFrameETL(cols_to_drop=to_exclude,
cols_to_expand=to_expand, # we made this column list during data munging
check_null_cols='warn')
###Output
_____no_output_____
###Markdown
Model StackingNow it's time to fit a model. Let's take a look at model stacking, which is new to CivisML 2.0.Stacking lets you combine several algorithms into a single model which performs as well or better than the component algorithms. We use stacking at Civis to build more accurate models, which saves our data scientists time comparing algorithm performance. In CivisML, we have two stacking workflows: `stacking_classifier` (sparse logistic, GBT, and random forest, with a logistic regression model as a "meta-estimator" to combine predictions from the other models); and `stacking_regressor` (sparse linear, GBT, and random forest, with a non-negative linear regression as the meta-estimator). Use them the same way you use `sparse_logistic` or other pre-defined models. If you want to learn more about how stacking works under the hood, take a look at [this talk](https://www.youtube.com/watch?v=3gpf1lGwecA&t=1058s) by the person at Civis who wrote it!Let's fit both a stacking classifier and some un-stacked models, so we can compare the performance.
###Code
workflows = ['stacking_classifier',
'sparse_logistic',
'random_forest_classifier',
'gradient_boosting_classifier']
models = []
# create a model object for each of the four model types
for wf in workflows:
model = ModelPipeline(model=wf,
model_name=wf + ' v2 example',
primary_key='UNITID',
dependent_variable=['CONTROL'],
etl=etl # use the custom ETL we created
)
models.append(model)
# iterate over the model objects and run a CivisML training job for each
trains = []
for model in models:
train = model.train(train_data)
trains.append(train)
###Output
_____no_output_____
###Markdown
Let's plot diagnostics for each of the models. In the Civis Platform, these plots will automatically be built and displayed in the "Models" tab. But for the sake of example, let's also explicitly plot ROC curves and AUCs in the notebook.There are three classes (public, non-profit private, and for-profit private), so we'll have three curves per model. It looks like all of the models are doing well, with sparse logistic performing slightly worse than the other three.
###Code
%matplotlib inline
# Let's look at how the model performed during validation
def extract_roc(fut_job, model_name):
'''Build a data frame of ROC curve data from the completed training job `fut_job`
with model name `model_name`. Note that this function will only work for a classification
model where the dependent variable has more than two classes.'''
aucs = fut_job.metrics['roc_auc']
roc_curve = fut_job.metrics['roc_curve_by_class']
n_classes = len(roc_curve)
fpr = []
tpr = []
class_num = []
auc = []
for i, curve in enumerate(roc_curve):
fpr.extend(curve['fpr'])
tpr.extend(curve['tpr'])
class_num.extend([i] * len(curve['fpr']))
auc.extend([aucs[i]] * len(curve['fpr']))
model_vec = [model_name] * len(fpr)
df = pd.DataFrame({
'model': model_vec,
'class': class_num,
'fpr': fpr,
'tpr': tpr,
'auc': auc
})
return df
# extract ROC curve information for all of the trained models
workflows_abbrev = ['stacking', 'logistic', 'RF', 'GBT']
roc_dfs = [extract_roc(train, w) for train, w in zip(trains, workflows_abbrev)]
roc_df = pd.concat(roc_dfs)
# create faceted ROC curve plots. Each row of plots is a different model type, and each
# column of plots is a different class of the dependent variable.
g = sns.FacetGrid(roc_df, col="class", row="model")
g = g.map(plt.plot, "fpr", "tpr", color='blue')
###Output
_____no_output_____
###Markdown
All of the models perform quite well, so it's difficult to compare based on the ROC curves. Let's plot the AUCs themselves.
###Code
# Plot AUCs for each model
%matplotlib inline
auc_df = roc_df[['model', 'class', 'auc']]
auc_df.drop_duplicates(inplace=True)
plt.show(sns.swarmplot(x=auc_df['model'], y=auc_df['auc']))
###Output
_____no_output_____
###Markdown
Here we can see that all models but sparse logistic perform quite well, but stacking appears to perform marginally better than the others. For more challenging modeling tasks, the difference between stacking and other models will often be more pronounced. Now our models are trained, and we know that they all perform very well. Because the AUCs are all so high, we would expect the models to make similar predictions. Let's see if that's true.
###Code
# kick off a prediction job for each of the four models
preds = [model.predict(test_data) for model in models]
# This will run on Civis Platform cloud resources
[pred.result() for pred in preds]
# print the top few rows for each of the models
pred_df = [pred.table.head() for pred in preds]
import pprint
pprint.pprint(pred_df)
###Output
[ control_1 control_2 control_3
UNITID
217882 0.993129 0.006856 0.000015
195234 0.001592 0.990423 0.007985
446385 0.002784 0.245300 0.751916
13508115 0.003109 0.906107 0.090785
459499 0.005351 0.039922 0.954726,
control_1 control_2 control_3
UNITID
217882 9.954234e-01 0.000200 0.004377
195234 6.766601e-08 0.999615 0.000385
446385 4.571749e-03 0.056303 0.939125
13508115 1.768058e-02 0.699806 0.282514
459499 1.319468e-02 0.285295 0.701510,
control_1 control_2 control_3
UNITID
217882 0.960 0.034 0.006
195234 0.012 0.974 0.014
446385 0.020 0.508 0.472
13508115 0.006 0.914 0.080
459499 0.032 0.060 0.908,
control_1 control_2 control_3
UNITID
217882 0.993809 0.005610 0.000581
195234 0.004323 0.991094 0.004583
446385 0.001309 0.066452 0.932238
13508115 0.012525 0.809062 0.178413
459499 0.002034 0.061846 0.936120]
###Markdown
Looks like the probabilities here aren't exactly the same, but are directionally identical-- so, if you chose the class that had the highest probability for each row, you'd end up with the same predictions for all models. This makes sense, because all of the models performed well. Model PortabilityWhat if you want to score a model outside of Civis Platform? Maybe you want to deploy this model in an app for education policy makers. In CivisML 2.0, you can easily get the trained model pipeline out of the `ModelFuture` object.
###Code
train_stack = trains[0] # Get the ModelFuture for the stacking model
trained_model = train_stack.estimator
###Output
_____no_output_____
###Markdown
This `Pipeline` contains all of the steps CivisML used to train the model, from ETL to the model itself. We can print each step individually to get a better sense of what is going on.
###Code
# print each of the estimators in the pipeline, separated by newlines for readability
for step in train_stack.estimator.steps:
print(step[1])
print('\n')
###Output
DataFrameETL(check_null_cols='warn',
cols_to_drop=['ADM_RATE_ALL', 'OPEID', 'OPEID6', 'ZIP', 'INSTNM', 'INSTURL', 'NPCURL', 'ACCREDAGENCY', 'T4APPROVALDATE', 'STABBR', 'ALIAS', 'REPAY_DT_MDN', 'SEPAR_DT_MDN'],
cols_to_expand=['UNITID', 'OPEID', 'OPEID6', 'INSTNM', 'CITY', 'STABBR', 'ZIP', 'ACCREDAGENCY', 'INSTURL', 'NPCURL', 'SCH_DEG', 'HCM2', 'MAIN', 'NUMBRANCH', 'PREDDEG', 'HIGHDEG', 'CONTROL', 'ST_FIPS', 'REGION', 'LOCALE', 'LOCALE2', 'CCBASIC', 'CCUGPROF', 'CCSIZSET', 'HBCU', 'PBI', 'ANNHI', 'TRIBAL',...RT2', 'CIP54ASSOC', 'CIP54CERT4', 'CIP54BACHL', 'DISTANCEONLY', 'ICLEVEL', 'OPENADMP', 'ACCREDCODE'],
dataframe_output=False, dummy_na=True, fill_value=0.0)
Imputer(axis=0, copy=True, missing_values='NaN', strategy='mean', verbose=0)
StackedClassifier(cv=StratifiedKFold(n_splits=4, random_state=42420, shuffle=True),
estimator_list=[('sparse_logistic', Pipeline(memory=None,
steps=[('selectfrommodel', SelectFromModel(estimator=LogitNet(alpha=1, cut_point=0.5, fit_intercept=True, lambda_path=None,
max_iter=10000, min_lambda_ratio=0.0001, n_jobs=1, n_lambda=100,
n_splits=4, random_state=42, scoring='... random_state=42, refit=True, scoring=None, solver='lbfgs',
tol=1e-08, verbose=0))]))],
n_jobs=1, pre_dispatch='2*n_jobs', verbose=0)
###Markdown
Now we can see that there are three steps: the `DataFrameETL` object we passed in, a null imputation step, and the stacking estimator itself. We can use this outside of CivisML simply by calling `.predict` on the estimator. This will make predictions using the model in the notebook without using CivisML.
###Code
# drop the dependent variable so we don't use it to predict itself!
predictions = trained_model.predict(test_data.drop(labels=['CONTROL'], axis=1))
# print out the class predictions. These will be integers representing the predicted
# class rather than probabilities.
predictions
###Output
_____no_output_____
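###Markdown
For portability beyond the notebook (for example, the policy-maker app mentioned above), one common approach is to persist the extracted scikit-learn pipeline with joblib. This is only a sketch: the file name is illustrative, and loading the pipeline elsewhere requires the same packages (e.g. civisml-extensions and its dependencies) to be installed in that environment.
###Code
import joblib

# Save the extracted pipeline to disk (illustrative file name)
joblib.dump(trained_model, 'college_scorecard_stacking.joblib')

# ...later, in another process or application:
loaded_model = joblib.load('college_scorecard_stacking.joblib')
loaded_model.predict(test_data.drop(labels=['CONTROL'], axis=1))
###Output
_____no_output_____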
###Markdown
Hyperparameter optimization with Hyperband and Neural Networks: Multilayer Perceptrons (MLPs) are simple neural networks, which are now built into CivisML. The MLP estimators in CivisML come from [muffnn](https://github.com/civisanalytics/muffnn), another open source package written and maintained by Civis Analytics using [tensorflow](https://www.tensorflow.org/). Let's fit one using hyperband. Tuning hyperparameters is a critical chore for getting an algorithm to perform at its best, but it can take a long time to run. Using CivisML 2.0, we can use hyperband as an alternative to conventional grid search for hyperparameter optimization-- it runs about twice as fast. While grid search runs every parameter combination for the full time, hyperband runs many combinations for a short time, then filters out the best, runs them for longer, filters again, and so on. This means that you can try more combinations in less time, so we recommend using it whenever possible. The hyperband estimator is open source and [available on GitHub](https://github.com/civisanalytics/civisml-extensions). You can learn about the details in [the original paper, Li et al. (2016)](https://arxiv.org/abs/1603.06560). Right now, hyperband is implemented in the CivisML named preset models for the following algorithms: Multilayer Perceptrons (MLPs), Stacking, Random forests, GBTs, and ExtraTrees. Unlike grid search, you don't need to specify values to search over. If you pass `cross_validation_parameters='hyperband'` to `ModelPipeline`, hyperparameter combinations will be randomly drawn from preset distributions.
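###Markdown
To make the filtering idea concrete, here is a small illustrative sketch of successive halving, the building block that hyperband repeats with different trade-offs. It is not the civisml-extensions implementation; `sample_config` and `evaluate` are hypothetical stand-ins you would supply, where `evaluate(config, budget)` returns a validation score after training with the given resource budget (for example, a number of epochs).
###Code
def successive_halving(sample_config, evaluate, n_configs=27, min_budget=5, eta=3):
    """Keep the best 1/eta of the configurations each round, giving survivors more budget."""
    configs = [sample_config() for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        scored = [(evaluate(cfg, budget), cfg) for cfg in configs]
        scored.sort(key=lambda pair: pair[0], reverse=True)  # higher score is better
        configs = [cfg for _, cfg in scored[:max(1, len(configs) // eta)]]
        budget *= eta  # e.g. epochs: 5 -> 15 -> 45
    return configs[0]
###Output
_____no_output_____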
###Code
# build a model specifying the MLP model with hyperband
model_mlp = ModelPipeline(model='multilayer_perceptron_classifier',
model_name='MLP example',
primary_key='UNITID',
dependent_variable=['CONTROL'],
cross_validation_parameters='hyperband',
etl=etl
)
train_mlp = model_mlp.train(train_data,
n_jobs=10) # parallel hyperparameter optimization and validation!
# block until the job finishes
train_mlp.result()
###Output
_____no_output_____
###Markdown
Let's dig into the hyperband model a little bit. Like the stacking model, the model below starts with ETL and null imputation, but contains some additional steps: a step to scale the predictor variables (which improves neural network performance), and a hyperband searcher containing the MLP.
###Code
for step in train_mlp.estimator.steps:
print(step[1])
print('\n')
###Output
INFO:tensorflow:Restoring parameters from /tmp/tmpe49np0dv/saved_model
DataFrameETL(check_null_cols='warn',
cols_to_drop=['ADM_RATE_ALL', 'OPEID', 'OPEID6', 'ZIP', 'INSTNM', 'INSTURL', 'NPCURL', 'ACCREDAGENCY', 'T4APPROVALDATE', 'STABBR', 'ALIAS', 'REPAY_DT_MDN', 'SEPAR_DT_MDN'],
cols_to_expand=['UNITID', 'OPEID', 'OPEID6', 'INSTNM', 'CITY', 'STABBR', 'ZIP', 'ACCREDAGENCY', 'INSTURL', 'NPCURL', 'SCH_DEG', 'HCM2', 'MAIN', 'NUMBRANCH', 'PREDDEG', 'HIGHDEG', 'CONTROL', 'ST_FIPS', 'REGION', 'LOCALE', 'LOCALE2', 'CCBASIC', 'CCUGPROF', 'CCSIZSET', 'HBCU', 'PBI', 'ANNHI', 'TRIBAL',...RT2', 'CIP54ASSOC', 'CIP54CERT4', 'CIP54BACHL', 'DISTANCEONLY', 'ICLEVEL', 'OPENADMP', 'ACCREDCODE'],
dataframe_output=False, dummy_na=True, fill_value=0.0)
Imputer(axis=0, copy=True, missing_values='NaN', strategy='mean', verbose=0)
MinMaxScaler(copy=False, feature_range=(0, 1))
HyperbandSearchCV(cost_parameter_max={'n_epochs': 50},
cost_parameter_min={'n_epochs': 5}, cv=None, error_score='raise',
estimator=MLPClassifier(activation=<function relu at 0x7f28a5746510>, batch_size=64,
hidden_units=(256,), init_scale=0.1, keep_prob=1.0, n_epochs=5,
random_state=None,
solver=<class 'tensorflow.python.training.adam.AdamOptimizer'>,
solver_kwargs=None),
eta=3, iid=True, n_jobs=1,
param_distributions={'keep_prob': <scipy.stats._distn_infrastructure.rv_frozen object at 0x7f28b44a9400>, 'hidden_units': [(), (16,), (32,), (64,), (64, 64), (64, 64, 64), (128,), (128, 128), (128, 128, 128), (256,), (256, 256), (256, 256, 256), (512, 256, 128, 64), (1024, 512, 256, 128)], 'solver_k...rning_rate': 0.002}, {'learning_rate': 0.005}, {'learning_rate': 0.008}, {'learning_rate': 0.0001}]},
pre_dispatch='2*n_jobs', random_state=42, refit=True,
return_train_score=True, scoring=None, verbose=0)
###Markdown
`HyperbandSearchCV` essentially works like `GridSearchCV`. If you want to get the best estimator without all of the extra CV information, you can access it using the `best_estimator_` attribute.
###Code
train_mlp.estimator.steps[3][1].best_estimator_
###Output
_____no_output_____
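###Markdown
Hard-coding `steps[3][1]` works here, but it breaks if the ETL steps ever change. A hedged alternative is to locate the searcher by class name, which avoids importing it from civisml-extensions:
###Code
# Find the hyperband searcher step without relying on its position in the pipeline
searcher = next(est for _, est in train_mlp.estimator.steps
                if type(est).__name__ == 'HyperbandSearchCV')
searcher.best_estimator_
###Output
_____no_output_____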
###Markdown
To see how well the best model performed, you can look at the `best_score_`.
###Code
train_mlp.estimator.steps[3][1].best_score_
###Output
_____no_output_____
###Markdown
And to look at information about the different hyperparameter configurations that were tried, you can look at the `cv_results_`.
###Code
train_mlp.estimator.steps[3][1].cv_results_
###Output
_____no_output_____
###Markdown
Just like any other model in CivisML, we can use hyperband-tuned models to make predictions using `.predict()` on the `ModelPipeline`.
###Code
predict_mlp = model_mlp.predict(test_data)
predict_mlp.table.head()
###Output
_____no_output_____ |
dataset test bikes.ipynb | ###Markdown
BIKES
###Code
day = pd.read_csv("data/day.csv")
data = day.drop(["dteday", "instant", "casual", 'registered', 'cnt', 'yr'], axis=1)
data.columns
data_raw = data.copy()
data.season = data.season.map({1: "spring", 2: "summer", 3: "fall", 4: 'winter'})
data.weathersit = data.weathersit.map({1: "clear, partly cloudy", 2: 'mist, cloudy', 3: 'light snow, light rain', 4:'heavy rain, snow and fog'})
data.mnth = pd.to_datetime(data.mnth, format="%m").dt.strftime("%b")
data.weekday = pd.to_datetime(data.weekday, format="%w").dt.strftime("%a")
data_dummies = pd.get_dummies(data, columns=['season', 'mnth', 'weekday', 'weathersit'])
data_dummies.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data_raw.values, day.cnt.values, random_state=0)
from sklearn.linear_model import RidgeCV
ridge = RidgeCV().fit(X_train, y_train)
from sklearn.metrics import r2_score
ridge.score(X_train, y_train)
ridge.score(X_test, y_test)
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=5).fit(X_train, y_train)
print(tree.score(X_train, y_train))
print(tree.score(X_test, y_test))
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=500).fit(X_train, y_train)
print(forest.score(X_train, y_train))
print(forest.score(X_test, y_test))
data_raw['cnt'] = day.cnt
data_dummies['cnt'] = day.cnt
data_raw.to_csv("data/bike_day_raw.csv", index=None)
data_dummies.to_csv("data/bike_day_dummies.csv", index=None)
###Output
_____no_output_____
###Markdown
LOANS
###Code
data = pd.read_csv("data/loan.csv")[::23]
data.shape
data.head()
counts = data.notnull().sum(axis=0).sort_values(ascending=False)
columns = counts[:52].index
data = data[columns]
data = data.dropna()
data.head()
bad_statuses = ["Charged Off ", "Default", "Does not meet the credit policy. Status:Charged Off", "In Grace Period",
"Default Receiver", "Late (16-30 days)", "Late (31-120 days)"]
data['bad_status'] = data.loan_status.isin(bad_statuses)
data = data.drop(["url", "title", "id", "emp_title", "loan_status"], axis=1)
data.columns
data.dtypes
data.purpose.value_counts()
float_columns = data.dtypes[data.dtypes == "float64"].index
data_float = data[float_columns]
data_float.shape
X = data_float.values
y = data.bad_status.values
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
lr = LogisticRegression()
lr.fit(X_train, y_train)
print(lr.score(X_train, y_train))
print(lr.score(X_test, y_test))
lr.coef_.shape
plt.figure(figsize=(8, 8))
plt.barh(range(X.shape[1]), lr.coef_.ravel())
plt.yticks(np.arange(X.shape[1]) + .5, data_float.columns.tolist(), va="center");
data_float_hard = data_float.drop(['total_rec_late_fee', "revol_util"], axis=1)
X = data_float_hard.values
###Output
_____no_output_____
###Markdown
SHELTER ANIMALS
###Code
train = pd.read_csv("data/shelter_train.csv")
test = pd.read_csv("data/shelter_test.csv")
train.head()
###Output
_____no_output_____
###Markdown
Bank marketing
###Code
data = pd.read_csv("data/bank-additional/bank-additional-full.csv", sep=";")
data.head()
data.job.value_counts()
data.columns
data.dtypes
target = data.y
data = data.drop("y", axis=1)
bla = pd.get_dummies(data)
bla.columns
X = bla.values
y = target.values
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
lr = LogisticRegression()
lr.fit(X_train, y_train)
print(lr.score(X_train, y_train))
print(lr.score(X_test, y_test))
plt.figure(figsize=(10, 12))
plt.barh(range(X.shape[1]), lr.coef_.ravel())
plt.yticks(np.arange(X.shape[1]) + .5, bla.columns.tolist(), va="center");
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
rf.score(X_train, y_train)
rf.score(X_test, y_test)
bla['target'] = target
bla.to_csv("data/bank-campaign.csv", index=None)
###Output
_____no_output_____ |
exercise12/exercise12-2.ipynb | ###Markdown
###Code
import torch
import torch.nn as nn
import torch.optim as optim
one_hot_lookup = [
[1, 0, 0, 0, 0], # 0 h
[0, 1, 0, 0, 0], # 1 i
[0, 0, 1, 0, 0], # 2 e
[0, 0, 0, 1, 0], # 3 l
[0, 0, 0, 0, 1], # 4 o
]
x_data = [0, 1, 0, 2, 3, 3] # hihell
y_data = [1, 0, 2, 3, 3, 4] # ihello
x_one_hot = [one_hot_lookup[i] for i in x_data]
###Output
_____no_output_____
###Markdown
(2) Parameters
###Code
num_classes = 5
input_size = 5 # one_hot size
hidden_size = 5 # output from the RNN. 5 to directly predict one-hot
batch_size = 1 # one sentence
sequence_length = 1 # Let's do one by one
num_layers = 1 # one-layer rnn
inputs = torch.tensor(x_one_hot, dtype=torch.float)
labels = torch.tensor(y_data, dtype=torch.long)
###Output
_____no_output_____
###Markdown
1. Model
###Code
class Model(nn.Module):
def __init__(self,
input_size=5,
hidden_size=5,
num_layers=1,
batch_size=1,
sequence_length=1,
num_classes=5):
super().__init__()
self.rnn = nn.RNN(input_size=input_size,
hidden_size=hidden_size,
batch_first=True)
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.batch_size = batch_size
self.sequence_length = sequence_length
self.num_classes = num_classes
# Fully-Connected layer
        self.fc = nn.Linear(hidden_size, num_classes)  # map the RNN hidden output to class scores
def forward(self, x, hidden):
# Reshape input in (batch_size, sequence_length, input_size)
x = x.view(self.batch_size, self.sequence_length, self.input_size)
out, hidden = self.rnn(x, hidden)
out = self.fc(out) # Add here
out = out.view(-1, self.num_classes)
return hidden, out
def init_hidden(self):
return torch.zeros(self.num_layers, self.batch_size, self.hidden_size)
###Output
_____no_output_____
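###Markdown
As a quick, purely illustrative sanity check of the class above: a freshly constructed model maps a length-6 one-hot sequence to one row of class scores per character.
###Code
demo_model = Model(sequence_length=6)
demo_hidden = demo_model.init_hidden()
_, demo_out = demo_model(torch.tensor(x_one_hot, dtype=torch.float), demo_hidden)
demo_out.shape  # torch.Size([6, 5])
###Output
_____no_output_____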
###Markdown
2. Criterion & Loss
###Code
model = Model()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.1)
###Output
_____no_output_____
###Markdown
3. Training
###Code
model = Model(input_size=5, hidden_size=5, num_layers=1, batch_size=1, sequence_length=6, num_classes=5)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.1)
hidden = model.init_hidden()
loss = 0
idx2char = ['h', 'i', 'e', 'l', 'o']
x_data = [0, 1, 0, 2, 3, 3] # hihell
one_hot_dict = {
'h': [1, 0, 0, 0, 0],
'i': [0, 1, 0, 0, 0],
'e': [0, 0, 1, 0, 0],
'l': [0, 0, 0, 1, 0],
'o': [0, 0, 0, 0, 1],
}
one_hot_lookup = [
[1, 0, 0, 0, 0], # 0 h
[0, 1, 0, 0, 0], # 1 i
[0, 0, 1, 0, 0], # 2 e
[0, 0, 0, 1, 0], # 3 l
[0, 0, 0, 0, 1], # 4 o
]
y_data = [1, 0, 2, 3, 3, 4] # ihello
x_one_hot = [one_hot_lookup[x] for x in x_data]
inputs = torch.tensor(x_one_hot, dtype=torch.float)
labels = torch.tensor(y_data, dtype=torch.long)
inputs
labels
for epoch in range(0, 15 + 1):
    hidden = hidden.clone().detach().requires_grad_(True)  # detach the hidden state between epochs
hidden, outputs = model(inputs, hidden)
optimizer.zero_grad()
loss = criterion(outputs, labels) # It wraps for-loop in here
loss.backward()
optimizer.step()
_, idx = outputs.max(1)
idx = idx.data.numpy()
result_str = [idx2char[c] for c in idx.squeeze()]
print(f"epoch: {epoch}, loss: {loss.data}")
print(f"Predicted string: {''.join(result_str)}")
###Output
epoch: 0, loss: 1.632482886314392
Predicted string: eieeee
epoch: 1, loss: 1.344533920288086
Predicted string: olello
epoch: 2, loss: 1.0991240739822388
Predicted string: olelll
epoch: 3, loss: 0.8392814993858337
Predicted string: ihello
epoch: 4, loss: 0.6179984211921692
Predicted string: ihello
epoch: 5, loss: 0.45398271083831787
Predicted string: ihello
epoch: 6, loss: 0.32671499252319336
Predicted string: ihello
epoch: 7, loss: 0.22967374324798584
Predicted string: ihello
epoch: 8, loss: 0.15975196659564972
Predicted string: ihello
epoch: 9, loss: 0.1100870743393898
Predicted string: ihello
epoch: 10, loss: 0.07598868757486343
Predicted string: ihello
epoch: 11, loss: 0.05339379981160164
Predicted string: ihello
epoch: 12, loss: 0.03852824494242668
Predicted string: ihello
epoch: 13, loss: 0.028578201308846474
Predicted string: ihello
epoch: 14, loss: 0.021733442321419716
Predicted string: ihello
epoch: 15, loss: 0.016893386840820312
Predicted string: ihello
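###Markdown
After training, the model can be reused for inference. A minimal hedged sketch, reusing the objects defined above and keeping the input six characters long because the model was built with `sequence_length=6`:
###Code
with torch.no_grad():
    test_x = [0, 1, 0, 2, 3, 3]  # "hihell"
    test_inputs = torch.tensor([one_hot_lookup[i] for i in test_x], dtype=torch.float)
    _, test_out = model(test_inputs, model.init_hidden())
    print(''.join(idx2char[c] for c in test_out.max(1)[1].numpy()))  # expected: "ihello"
###Output
_____no_output_____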
|
notebooks/5.0_comparing_magnifications/2.1_20x_dw.matching_overlay.field_thr.ipynb | ###Markdown
Read shift data
###Code
shifts = pd.read_csv(f"shift_correction/{selected_magnification}_{selected_image_type}.shifts.csv")
shifts.index = shifts["sid"].values
shifts.drop("sid", 1, inplace=True)
###Output
_____no_output_____
###Markdown
Matching 20x_raw and reference dots
###Code
dots_data = pd.read_csv("/mnt/data/Imaging/202105-Deconwolf/data_210726/dots_data.clean.tsv.gz", sep="\t")
dots_data = dots_data[selected_magnification == dots_data["magnification"]]
dots_data = dots_data[selected_image_type == dots_data["image_type"]]
thresholds_table = pd.read_csv("../../data/magnifications_matching/intensity_thresholds.by_field.tsv", sep="\t")
matched_dots = pd.read_csv(
os.path.join("../../data/magnifications_matching",
f"{selected_magnification}_{selected_image_type}.matched_dots.field_thr.tsv"
), sep="\t")
reference = pd.read_csv("../../data/60x_reference/ref__dw.field_thr.tsv", sep="\t")
for current_field_id in tqdm(np.unique(dots_data["sid"])):
thresholds = thresholds_table.loc[current_field_id == thresholds_table["sid"], :]
intensity_thr = thresholds.loc[selected_image_type == thresholds["image_type"], "thr"].values[0]
dot_max_z_proj = tifffile.imread(os.path.join(dot_image_folder_path, f"a647_{current_field_id:03d}.tif")).max(0)
ref_max_z_proj = tifffile.imread(os.path.join(ref_image_folder_path, f"a647_{current_field_id:03d}.tif")).max(0)
dot_labels = tifffile.imread(os.path.join(dot_mask_folder_path, f"a647_{current_field_id:03d}.dilated_labels.from_60x.tiff")
).reshape(dot_max_z_proj.shape)
ref_labels = tifffile.imread(os.path.join(ref_mask_folder_path, f"a647_{current_field_id:03d}.dilated_labels.tiff")
).reshape(ref_max_z_proj.shape)
dots = dots_data.loc[current_field_id == dots_data["sid"], :].copy(
).sort_values("Value2", ascending=False).reset_index(drop=True)
dot_coords = dots.loc[intensity_thr <= dots["Value2"], ("x", "y")].copy().reset_index(drop=True)
dot_coords2 = dot_coords.copy() / aspect
dot_coords2["x"] += (shifts.loc[current_field_id, "x"] * 9)
dot_coords2["y"] += (shifts.loc[current_field_id, "y"] * 9)
ref_coords = reference.loc[reference["sid"] == current_field_id, ("x", "y")].copy().reset_index(drop=True)
matched_20x_dots = matched_dots.loc[matched_dots["series"] == current_field_id, "id_20x"].values
matched_60x_dots = matched_dots.loc[matched_dots["series"] == current_field_id, "id_60x"].values
max_match_dist = matched_dots.loc[matched_dots["series"] == current_field_id, "eudist"].max()
selected_20x_dots = dot_coords.loc[matched_20x_dots, :]
selected_20x_dots2 = dot_coords2.loc[matched_20x_dots, :]
selected_60x_dots = ref_coords.loc[matched_60x_dots, :]
fig3, ax = plt.subplots(figsize=(30, 10), ncols=3, constrained_layout=True)
fig3.suptitle(f"Field #{current_field_id} (n.matched_dots={matched_20x_dots.shape[0]}; max.dist={max_match_dist:.03f})")
print(" > Plotting dot")
ax[0].set_title(f"{selected_magnification}_{selected_image_type} (n.total={dot_coords2.shape[0]}, only matched are plotted)")
ax[0].imshow(
dot_max_z_proj, cmap=plt.get_cmap("gray"), interpolation="none",
vmin=dot_max_z_proj.min(), vmax=dot_max_z_proj.max(),
resample=False, filternorm=False)
ax[0].scatter(
x=selected_20x_dots["y"].values,
y=selected_20x_dots["x"].values,
s=30, facecolors='none', edgecolors='r', linewidth=.5)
print(" > Plotting ref")
ax[1].set_title(f"60x_dw (n.total={ref_coords.shape[0]}, only matched are plotted)")
ax[1].imshow(
ref_max_z_proj, cmap=plt.get_cmap("gray"), interpolation="none",
vmin=ref_max_z_proj.min()*1.5, vmax=ref_max_z_proj.max()*.5,
resample=False, filternorm=False)
ax[1].scatter(
x=selected_60x_dots["y"].values,
y=selected_60x_dots["x"].values,
s=30, facecolors='none', edgecolors='r', linewidth=.5)
print(" > Plotting contours [20x]")
for lid in range(1, dot_labels.max()):
contours = measure.find_contours(dot_labels == lid, 0.8)
for contour in contours:
ax[0].scatter(x=contour[:,1], y=contour[:,0], c="yellow", s=.005)
print(" > Plotting contours [60x]")
for lid in range(1, ref_labels.max()):
contours = measure.find_contours(ref_labels == lid, 0.8)
for contour in contours:
ax[1].scatter(x=contour[:,1], y=contour[:,0], c="yellow", s=.005)
print(" > Plotting overlapped points between raw and dw")
ax[2].set_title(f"Red: {selected_magnification}_{selected_image_type}. Blue: 60x_dw.")
ax[2].plot(
selected_20x_dots2["y"].values,
selected_20x_dots2["x"].values,
'r.', marker=".", markersize=2)
ax[2].plot(
selected_60x_dots["y"].values,
selected_60x_dots["x"].values,
'b.', marker=".", markersize=.8)
plt.close(fig3)
print(" > Exporting")
fig3.savefig(os.path.join("../../data/magnifications_matching",
f"{selected_magnification}_{selected_image_type}.overlays.field_thr.matched",
f"overlay_{current_field_id:03d}.png"), bbox_inches='tight')
print(" ! DONE")
###Output
0%| | 0/7 [00:00<?, ?it/s] |
NLP/BERT_training.ipynb | ###Markdown
Courtesy - https://mccormickml.com/2019/07/22/BERT-fine-tuning/
###Code
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
!pip install pytorch-pretrained-bert pytorch-nlp
import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from pytorch_pretrained_bert import BertTokenizer, BertConfig
from pytorch_pretrained_bert import BertAdam, BertForSequenceClassification, BertModel
from tqdm import tqdm, trange
import pandas as pd
import io
import numpy as np
import matplotlib.pyplot as plt
import spacy
from nltk.corpus import stopwords
% matplotlib inline
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count()
torch.cuda.get_device_name(0)
stop_words = set(stopwords.words('english'))
df = pd.read_excel("PPMdata.xlsx")
print (df.shape)
df.head(3)
df = df.dropna(subset=['Sentence','Sentiment'])
print (df.shape)
df.Sentiment = df.Sentiment.astype(int)
df.Sentence = df.Sentence.str.lower()
df.Sentiment.value_counts()
df = df.sample(frac=1)
punctuation = '!"#$%&()*+-/:;<=>?@[\\]^_`{|}~.,'
df['clean_text'] = df.Sentence.apply(lambda x: ''.join(ch for ch in x if ch not in set(punctuation)))
# remove numbers
df['clean_text'] = df['clean_text'].str.replace("[0-9]", " ")
# remove whitespaces
df['clean_text'] = df['clean_text'].apply(lambda x:' '.join(x.split()))
df['clean_text'] = df.clean_text.apply(lambda x: " ".join([i for i in x.split() if i not in stop_words]).strip())
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
# function to lemmatize text
def lemmatization(texts):
output = []
for i in texts:
s = [token.lemma_ for token in nlp(i)]
output.append(' '.join(s))
return output
df['clean_text'] = lemmatization(df['clean_text'])
df['num_words'] = df.clean_text.apply(lambda x: len(x.split()))
df = df[(df.num_words >= 5) & (df.num_words <= 50)]
print (df.shape)
print (df.Sentiment.value_counts())
df.num_words.plot.hist()
plt.show()
sentences = df.clean_text.values
# We need to add special tokens at the beginning and end of each sentence for BERT to work properly
sentences = ["[CLS] " + sentence + " [SEP]" for sentence in sentences]
labels = df.Sentiment.values
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
print ("Tokenize the first sentence:")
print (sentences[0])
print (tokenized_texts[0])
from collections import Counter
Counter([len(ids) for ids in tokenized_texts])
MAX_LEN = 64
tokenizer.convert_tokens_to_ids(tokenized_texts[0])
input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts],
maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
input_ids
# Create attention masks
attention_masks = []
# Create a mask of 1s for each token followed by 0s for padding
for seq in input_ids:
seq_mask = [float(i>0) for i in seq]
attention_masks.append(seq_mask)
np.array(attention_masks)
# Use train_test_split to split our data into train and validation sets for training
train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(input_ids, labels,
random_state=2018, test_size=0.1)
train_masks, validation_masks, _, _ = train_test_split(attention_masks, input_ids,
random_state=2018, test_size=0.1)
# Convert all of our data into torch tensors, the required datatype for our model
train_inputs = torch.tensor(train_inputs,dtype=torch.long)
validation_inputs = torch.tensor(validation_inputs,dtype=torch.long)
train_labels = torch.tensor(train_labels,dtype=torch.long)
validation_labels = torch.tensor(validation_labels,dtype=torch.long)
train_masks = torch.tensor(train_masks,dtype=torch.long)
validation_masks = torch.tensor(validation_masks,dtype=torch.long)
validation_inputs
# Select a batch size for training. For fine-tuning BERT on a specific task, the authors recommend a batch size of 16 or 32
batch_size = 32
# Create an iterator of our data with torch DataLoader. This helps save on memory during training because, unlike a for loop,
# with an iterator the entire dataset does not need to be loaded into memory
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
validation_sampler = SequentialSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)
train_data.tensors
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
# This variable contains all of the hyperparemeter information our training loop needs
optimizer = BertAdam(optimizer_grouped_parameters,
lr=2e-5,
warmup=.1)
# Function to calculate the accuracy of our predictions vs labels
def flat_accuracy(preds, labels):
pred_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat)
device
?model
# Store our loss and accuracy for plotting
train_loss_set = []
# Number of training epochs (authors recommend between 2 and 4)
epochs = 1
# trange is a tqdm wrapper around the normal python range
for _ in trange(epochs, desc="Epoch"):
# Training
# Set our model to training mode (as opposed to evaluation mode)
model.train()
# Tracking variables
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
# Train the data for one epoch
for step, batch in enumerate(train_dataloader):
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Clear out the gradients (by default they accumulate)
optimizer.zero_grad()
# Forward pass
loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
train_loss_set.append(loss.item())
# Backward pass
loss.backward()
# Update parameters and take a step using the computed gradient
optimizer.step()
# Update tracking variables
tr_loss += loss.item()
nb_tr_examples += b_input_ids.size(0)
nb_tr_steps += 1
print("Train loss: {}".format(tr_loss/nb_tr_steps))
# Validation
# Put model in evaluation mode to evaluate loss on the validation set
model.eval()
# Tracking variables
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
# Evaluate data for one epoch
for batch in validation_dataloader:
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Telling the model not to compute or store gradients, saving memory and speeding up validation
with torch.no_grad():
# Forward pass, calculate logit predictions
logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_accuracy += tmp_eval_accuracy
nb_eval_steps += 1
print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
model.eval()
# Tracking variables
predictions , true_labels = [], []
# Predict
for batch in validation_dataloader:
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Telling the model not to compute or store gradients, saving memory and speeding up prediction
with torch.no_grad():
# Forward pass, calculate logit predictions
logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
# Store predictions and true labels
predictions.append(logits)
true_labels.append(label_ids)
logits[:,1]
# Flatten the predictions and true values for aggregate Matthew's evaluation on the whole dataset
flat_predictions = [item for sublist in predictions for item in sublist]
flat_predictions = np.argmax(flat_predictions, axis=1).flatten()
flat_true_labels = np.array([item for sublist in true_labels for item in sublist])
flat_predictions
flat_true_labels
from sklearn.metrics import accuracy_score, f1_score
accuracy_score(flat_true_labels,flat_predictions)
f1_score(flat_true_labels,flat_predictions)
model.parameters
model2 = BertModel.from_pretrained('bert-base-uncased')
for param in model2.parameters():
param.requires_grad = False
from torch import nn
import torch.nn.functional as F
class Flatten(nn.Module):
def forward(self, input):
return input.view(input.size(0), -1)
class finetuneBERT(nn.Module):
    def __init__(self, bert_output_size, output_size):
        super(finetuneBERT, self).__init__()
        self.bertmodel = model2
        self.flatten = Flatten()
        self.attn = nn.Linear(bert_output_size, bert_output_size)
        self.out = nn.Linear(in_features=bert_output_size, out_features=output_size)

    def forward(self, input_token, input_mask):
        # Encoded layers from the frozen BERT model; hidden[-1] is the last layer,
        # with shape (batch_size, seq_len, 768)
        hidden, _ = self.bertmodel(input_token, input_mask)
        # Flatten to (batch_size, seq_len * 768) for the classification head
        flatten = self.flatten(hidden[-1])
        # Return raw logits; nn.CrossEntropyLoss applies log-softmax internally
        return self.out(flatten)
model2.parameters
!pip install torchsummary
from torchsummary import summary
model3 = finetuneBERT(768*MAX_LEN,2)
model3.parameters
model3 = model3.to(device)
criterion = nn.CrossEntropyLoss()
from torch import optim
optimizer_ft = optim.SGD(model3.out.parameters(), lr=0.001, momentum=0.9)
# Store our loss and accuracy for plotting
train_loss_set = []
# Number of training epochs (authors recommend between 2 and 4)
epochs = 1
# trange is a tqdm wrapper around the normal python range
for _ in trange(epochs, desc="Epoch"):
# Training
# Set our model to training mode (as opposed to evaluation mode)
model3.train()
# Tracking variables
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
# Train the data for one epoch
for step, batch in enumerate(train_dataloader):
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Clear out the gradients (by default they accumulate)
optimizer_ft.zero_grad()
# Forward pass
output = model3(b_input_ids,b_input_mask)
#output = output.reshape(output.shape[0])
loss = criterion(output, b_labels)
train_loss_set.append(loss.item())
# Backward pass
loss.backward()
# Update parameters and take a step using the computed gradient
        optimizer_ft.step()
# Update tracking variables
tr_loss += loss.item()
nb_tr_examples += b_input_ids.size(0)
nb_tr_steps += 1
print("Train loss: {}".format(tr_loss/nb_tr_steps))
# Validation
# Put model in evaluation mode to evaluate loss on the validation set
model3.eval()
# Tracking variables
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
# Evaluate data for one epoch
for batch in validation_dataloader:
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Telling the model not to compute or store gradients, saving memory and speeding up validation
with torch.no_grad():
# Forward pass, calculate logit predictions
logits = model3(b_input_ids,b_input_mask)
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
        tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_accuracy += tmp_eval_accuracy
nb_eval_steps += 1
print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
hidden, _ = model3.bertmodel(validation_inputs, validation_masks)
validation_inputs.shape
np.array(hidden[0]).shape
np.array(hidden[0])[0]
output.reshape(output.shape[0])
output.shape
criterion(output, b_labels)
?criterion
torch.randn(3, 5, requires_grad=True).shape
torch.empty(3, dtype=torch.long).random_(5).shape
torch.randn(3, 5, requires_grad=True)
###Output
_____no_output_____ |
docs/auto_examples/plot_tutorial_05.ipynb | ###Markdown
Tutorial 5: Colors and colorbars. This tutorial demonstrates how to configure the colorbar(s) with ``surfplot``. Layer color maps and colorbars: The color map can be specified for each added plotting layer using the `cmap` parameter of :func:`~surfplot.plotting.Plot.add_layer`, and the associated ``matplotlib`` colorbar is drawn if specified. The colorbar can be turned off with `cbar=False`. The range of the colormap is specified with the `color_range` parameter, which takes a tuple of (`minimum`, `maximum`) values. If no color range is specified (the default, i.e. `None`), then the color range is computed automatically based on the minimum and maximum of the data. Let's get started by setting up a plot with surface shading added as well, following the initial steps of `sphx_glr_auto_examples_plot_tutorial_01.py`:
###Code
from neuromaps.datasets import fetch_fslr
from surfplot import Plot
surfaces = fetch_fslr()
lh, rh = surfaces['inflated']
p = Plot(lh, rh)
sulc_lh, sulc_rh = surfaces['sulc']
p.add_layer({'left': sulc_lh, 'right': sulc_rh}, cmap='binary_r', cbar=False)
###Output
_____no_output_____
###Markdown
Now let's add a plotting layer with a colorbar using the example data. The `cmap` parameter accepts any named `matplotlib colormap`_, or a `colormap object`_. This means that ``surfplot`` can work with pretty much any colormap, including those from `seaborn`_ and `cmasher`_, for example.
###Code
from surfplot.datasets import load_example_data
# default mode network associations
default = load_example_data(join=True)
p.add_layer(default, cmap='GnBu_r', cbar_label='Default mode')
fig = p.build()
fig.show()
###Output
_____no_output_____
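###Markdown
The layer above let ``surfplot`` compute the color range automatically. A hedged variant that pins the range explicitly (the bounds below are made up for illustration) would look like this:
###Code
p_fixed = Plot(lh, rh)
p_fixed.add_layer({'left': sulc_lh, 'right': sulc_rh}, cmap='binary_r', cbar=False)
p_fixed.add_layer(default, cmap='GnBu_r', color_range=(1, 6),
                  cbar_label='Default mode (fixed range)')
fig = p_fixed.build()
fig.show()
###Output
_____no_output_____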
###Markdown
`cbar_label` added a text label to the colorbar. Although not necessary in cases where a single layer/colorbar is shown, it can be useful when adding multiple layers. To demonstrate that, let's add another layer using the `frontoparietal` network associations from :func:`~surfplot.datasets.load_example_data`:
###Code
fronto = load_example_data('frontoparietal', join=True)
p.add_layer(fronto, cmap='YlOrBr_r', cbar_label='Frontoparietal')
fig = p.build()
fig.show()
###Output
_____no_output_____
###Markdown
The order of the colorbars is always based on the order of the layers, where the outermost colorbar is the last (i.e. uppermost) plotting layer. Of course, more layers and colorbars can lead to a busy-looking figure, so be sure not to overdo it. cbar_kws: Once all layers have been added, the positioning and style can be adjusted using the `cbar_kws` parameter in :func:`~surfplot.plotting.Plot.build`, which are keyword arguments for :func:`surfplot.plotting.Plot._add_colorbars`. Each one is briefly described below (see :func:`~surfplot.plotting.Plot._add_colorbars` for more detail):
1. `location`: The location, relative to the surface plot
2. `label_direction`: Angle to draw label for colorbars
3. `n_ticks`: Number of ticks to include on colorbar
4. `decimals`: Number of decimals to show for colorbar tick values
5. `fontsize`: Font size for colorbar labels and tick labels
6. `draw_border`: Draw ticks and black border around colorbar
7. `outer_labels_only`: Show tick labels for only the outermost colorbar
8. `aspect`: Ratio of long to short dimensions
9. `pad`: Space that separates each colorbar
10. `shrink`: Fraction by which to multiply the size of the colorbar
11. `fraction`: Fraction of original axes to use for colorbar
Let's plot colorbars on the right, which will generate vertical colorbars instead of horizontal colorbars. We'll also add some style changes for a cleaner look:
###Code
kws = {'location': 'right', 'label_direction': 45, 'decimals': 1,
'fontsize': 8, 'n_ticks': 2, 'shrink': .15, 'aspect': 8,
'draw_border': False}
fig = p.build(cbar_kws=kws)
fig.show()
# sphinx_gallery_thumbnail_number = 3
###Output
_____no_output_____ |
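###Markdown
A second hedged variation using a few of the other `cbar_kws` listed above, leaving `location` at its default; the parameter values are illustrative.
###Code
kws2 = {'n_ticks': 3, 'outer_labels_only': True,
        'pad': .05, 'shrink': .2, 'draw_border': False}
fig = p.build(cbar_kws=kws2)
fig.show()
###Output
_____no_output_____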
wisconsin/PCA_log_regression.ipynb | ###Markdown
Creation of synthetic data for the Wisconsin Breast Cancer data set using Principal Component Analysis, tested using a logistic regression model. Aim: To test a statistical method (principal component analysis) for synthesising data that can be used to train a logistic regression machine learning model. Data: Raw data is available at: https://www.kaggle.com/uciml/breast-cancer-wisconsin-data Basic methods description: * Create synthetic data by sampling from distributions based on Principal Component Analysis of the original data * Train a logistic regression model on synthetic data and test against held-back raw data. Code & results
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
# Turn warnings off for notebook publication
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Import Data
###Code
def load_data():
""""
Load Wisconsin Breast Cancer Data Set
Inputs
------
None
Returns
-------
X: NumPy array of X
y: Numpy array of y
col_names: column names for X
"""
# Load data and drop 'id' column
data = pd.read_csv('./wisconsin.csv')
data.drop('id', axis=1, inplace=True)
# Change 'diagnosis' column to 'malignant', and put in last column place
malignant = pd.DataFrame()
data['malignant'] = data['diagnosis'] == 'M'
data.drop('diagnosis', axis=1, inplace=True)
# Split data in X and y
X = data.drop(['malignant'], axis=1)
y = data['malignant']
# Get col names and convert to NumPy arrays
X_col_names = list(X)
X = X.values
y = y.values
return data, X, y, X_col_names
###Output
_____no_output_____
###Markdown
Data processing Split X and y into training and test sets
###Code
def split_into_train_test(X, y, test_proportion=0.25):
""""
Randomly split X and y numpy arrays into training and test data sets
Inputs
------
X and y NumPy arrays
Returns
-------
X_test, X_train, y_test, y_train Numpy arrays
"""
X_train, X_test, y_train, y_test = \
train_test_split(X, y, shuffle=True, test_size=test_proportion)
return X_train, X_test, y_train, y_test
###Output
_____no_output_____
###Markdown
Standardise data
###Code
def standardise_data(X_train, X_test):
""""
Standardise training and tets data sets according to mean and standard
deviation of test set
Inputs
------
X_train, X_test NumPy arrays
Returns
-------
X_train_std, X_test_std
"""
mu = X_train.mean(axis=0)
std = X_train.std(axis=0)
X_train_std = (X_train - mu) / std
X_test_std = (X_test - mu) /std
return X_train_std, X_test_std
###Output
_____no_output_____
###Markdown
Calculate accuracy measures
###Code
def calculate_diagnostic_performance(actual, predicted):
""" Calculate sensitivty and specificty.
Inputs
------
actual, predted numpy arrays (1 = +ve, 0 = -ve)
Returns
-------
A dictionary of results:
1) accuracy: proportion of test results that are correct
2) sensitivity: proportion of true +ve identified
3) specificity: proportion of true -ve identified
4) positive likelihood: increased probability of true +ve if test +ve
5) negative likelihood: reduced probability of true +ve if test -ve
6) false positive rate: proportion of false +ves in true -ve patients
7) false negative rate: proportion of false -ves in true +ve patients
8) positive predictive value: chance of true +ve if test +ve
9) negative predictive value: chance of true -ve if test -ve
10) actual positive rate: proportion of actual values that are +ve
11) predicted positive rate: proportion of predicted vales that are +ve
12) recall: same as sensitivity
13) precision: the proportion of predicted +ve that are true +ve
14) f1 = 2 * ((precision * recall) / (precision + recall))
*false positive rate is the percentage of healthy individuals who
incorrectly receive a positive test result
    * false negative rate is the percentage of diseased individuals who
incorrectly receive a negative test result
"""
# Calculate results
actual_positives = actual == 1
actual_negatives = actual == 0
test_positives = predicted == 1
test_negatives = predicted == 0
test_correct = actual == predicted
accuracy = test_correct.mean()
true_positives = actual_positives & test_positives
false_positives = actual_negatives & test_positives
true_negatives = actual_negatives & test_negatives
sensitivity = true_positives.sum() / actual_positives.sum()
specificity = np.sum(true_negatives) / np.sum(actual_negatives)
positive_likelihood = sensitivity / (1 - specificity)
negative_likelihood = (1 - sensitivity) / specificity
false_postive_rate = 1 - specificity
false_negative_rate = 1 - sensitivity
positive_predictive_value = true_positives.sum() / test_positives.sum()
negative_predicitive_value = true_negatives.sum() / test_negatives.sum()
actual_positive_rate = actual.mean()
predicted_positive_rate = predicted.mean()
recall = sensitivity
precision = \
true_positives.sum() / (true_positives.sum() + false_positives.sum())
f1 = 2 * ((precision * recall) / (precision + recall))
# Add results to dictionary
results = dict()
results['accuracy'] = accuracy
results['sensitivity'] = sensitivity
results['specificity'] = specificity
results['positive_likelihood'] = positive_likelihood
results['negative_likelihood'] = negative_likelihood
results['false_postive_rate'] = false_postive_rate
results['false_postive_rate'] = false_postive_rate
results['false_negative_rate'] = false_negative_rate
results['positive_predictive_value'] = positive_predictive_value
results['negative_predicitive_value'] = negative_predicitive_value
results['actual_positive_rate'] = actual_positive_rate
results['predicted_positive_rate'] = predicted_positive_rate
results['recall'] = recall
results['precision'] = precision
results['f1'] = f1
return results
###Output
_____no_output_____
###Markdown
Logistic Regression Model
###Code
def fit_and_test_logistic_regression_model(X_train, X_test, y_train, y_test):
""""
Fit and test logistic regression model.
Return a dictionary of accuracy measures.
Calls on `calculate_diagnostic_performance` to calculate results
Inputs
------
X_train, X_test NumPy arrays
Returns
-------
A dictionary of accuracy results.
"""
# Fit logistic regression model
lr = LogisticRegression(C=0.1)
lr.fit(X_train,y_train)
    # Predict test set labels using the X_test passed into this function
    y_pred = lr.predict(X_test)
# Get accuracy results
accuracy_results = calculate_diagnostic_performance(y_test, y_pred)
return accuracy_results
###Output
_____no_output_____
###Markdown
Synthetic Data Method - Principal Component Analysis: * Transform original data by principal components * Take mean and standard deviation of transformed data * Create new data by sampling from distributions * Inverse transform generated data back to original dimension space
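###Markdown
The idea in miniature, on made-up data and independent of the functions defined below (a sketch only): transform to principal-component space, sample new scores from per-component normal distributions, then inverse-transform back to the original feature space.
###Code
toy = np.random.normal(size=(200, 3))
pca_demo = PCA(n_components=3)
scores = pca_demo.fit_transform(toy)  # transform to principal-component space
sampled = np.random.normal(scores.mean(axis=0), scores.std(axis=0), size=(5, 3))
pca_demo.inverse_transform(sampled).shape  # back to the original feature space
###Output
_____no_output_____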
###Code
def get_principal_component_model(data, n_components=0):
"""
Principal component analysis
Inputs
------
data: raw data (DataFrame)
Returns
-------
A dictionary of:
model: pca model object
transformed_X: transformed_data
explained_variance: explained_variance
"""
# If n_components not passed to function, use number of features in data
if n_components == 0:
n_components = data.shape[1]
pca = PCA(n_components)
transformed_X = pca.fit_transform(data)
#fit_transform reduces X to the new datasize if n components is specified
explained_variance = pca.explained_variance_ratio_
# Compile a dictionary to return results
results = {'model': pca,
'transformed_X': transformed_X,
'explained_variance': explained_variance}
return results
def make_synthetic_data_pc(X_original, y_original, number_of_samples=1000,
n_components=0):
"""
Synthetic data generation.
Calls on `get_principal_component_model` for PCA model
If number of components not defined then the function sets it to the number
of features in X
Inputs
------
original_data: X, y numpy arrays
number_of_samples: number of synthetic samples to generate
n_components: number of principal components to use for data synthesis
Returns
-------
X_synthetic: NumPy array
y_synthetic: NumPy array
"""
# If number of PCA not passed, set to number fo features in X
if n_components == 0:
n_components = X_original.shape[1]
# Split the training data into positive and negative
mask = y_original == 1
X_train_pos = X_original[mask]
mask = y_original == 0
X_train_neg = X_original[mask]
# Pass malignant and benign X data sets to Principal Component Analysis
pca_pos = get_principal_component_model(X_train_pos, n_components)
pca_neg = get_principal_component_model(X_train_neg, n_components)
# Set up list to hold malignant and benign transformed data
transformed_X = []
# Create synthetic data for malignant and benign PCA models
for pca_model in [pca_pos, pca_neg]:
# Get PCA tranformed data
transformed = pca_model['transformed_X']
# Get means and standard deviations, to use for sampling
means = transformed.mean(axis=0)
stds = transformed.std(axis=0)
# Make synthetic PC data using sampling from normal distributions
synthetic_pca_data = np.zeros((number_of_samples, n_components))
for pc in range(n_components):
synthetic_pca_data[:, pc] = \
np.random.normal(means[pc], stds[pc], size=number_of_samples)
transformed_X.append(synthetic_pca_data)
# Reverse transform data to create synthetic data to be used
X_synthetic_pos = pca_pos['model'].inverse_transform(transformed_X[0])
X_synthetic_neg = pca_neg['model'].inverse_transform(transformed_X[1])
y_synthetic_pos = np.ones((X_synthetic_pos.shape[0],1))
y_synthetic_neg = np.zeros((X_synthetic_neg.shape[0],1))
# Combine positive and negative and shuffle rows
X_synthetic = np.concatenate((X_synthetic_pos, X_synthetic_neg), axis=0)
y_synthetic = np.concatenate((y_synthetic_pos, y_synthetic_neg), axis=0)
# Randomise order of X, y
synthetic = np.concatenate((X_synthetic, y_synthetic), axis=1)
shuffle_index = np.random.permutation(np.arange(X_synthetic.shape[0]))
synthetic = synthetic[shuffle_index]
X_synthetic = synthetic[:,0:-1]
y_synthetic = synthetic[:,-1]
return X_synthetic, y_synthetic
###Output
_____no_output_____
###Markdown
Main code
###Code
# Load data
original_data, X, y, X_col_names = load_data()
# Set up results DataFrame
results = pd.DataFrame()
###Output
_____no_output_____
###Markdown
Fitting classification model to raw data
###Code
# Set number of replicate runs
number_of_runs = 30
# Set up lists for results
accuracy_measure_names = []
accuracy_measure_data = []
for run in range(number_of_runs):
# Print progress
print (run + 1, end=' ')
# Split training and test set
X_train, X_test, y_train, y_test = split_into_train_test(X, y)
# Standardise data
X_train_std, X_test_std = standardise_data(X_train, X_test)
# Get accuracy of fitted model
accuracy = fit_and_test_logistic_regression_model(
X_train_std, X_test_std, y_train, y_test)
# Get accuracy measure names if not previously done
if len(accuracy_measure_names) == 0:
for key, value in accuracy.items():
accuracy_measure_names.append(key)
# Get accuracy values
run_accuracy_results = []
for key, value in accuracy.items():
run_accuracy_results.append(value)
# Add results to results list
accuracy_measure_data.append(run_accuracy_results)
# Store mean and sem in results DataFrame
accuracy_array = np.array(accuracy_measure_data)
results['raw_mean'] = accuracy_array.mean(axis=0)
results['raw_sem'] = accuracy_array.std(axis=0)/np.sqrt(number_of_runs)
results.index = accuracy_measure_names
###Output
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
###Markdown
Fitting classification model to synthetic data
###Code
# Set number of replicate runs
number_of_runs = 30
# Set up lists for results
accuracy_measure_names = []
accuracy_measure_data = []
for run in range(number_of_runs):
# Get synthetic data
X_synthetic, y_synthetic = make_synthetic_data_pc(
X, y, number_of_samples=1000)
# Print progress
print (run + 1, end=' ')
# Split training and test set
X_train, X_test, y_train, y_test = split_into_train_test(X, y)
# Standardise data (using synthetic data)
X_train_std, X_test_std = standardise_data(X_synthetic, X_test)
# Get accuracy of fitted model
accuracy = fit_and_test_logistic_regression_model(
X_train_std, X_test_std, y_synthetic, y_test)
# Get accuracy measure names if not previously done
if len(accuracy_measure_names) == 0:
for key, value in accuracy.items():
accuracy_measure_names.append(key)
# Get accuracy values
run_accuracy_results = []
for key, value in accuracy.items():
run_accuracy_results.append(value)
# Add results to results list
accuracy_measure_data.append(run_accuracy_results)
# Store mean and sem in results DataFrame
accuracy_array = np.array(accuracy_measure_data)
results['pca_mean'] = accuracy_array.mean(axis=0)
results['pca_sem'] = accuracy_array.std(axis=0)/np.sqrt(number_of_runs)
###Output
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
###Markdown
Save last synthetic data set
###Code
# Create a data frame with id
synth_df = pd.DataFrame()
synth_df['id'] = np.arange(y_synthetic.shape[0])
# Transfer X values to DataFrame
synth_df=pd.concat([synth_df,
pd.DataFrame(X_synthetic, columns=X_col_names)],
axis=1)
# Add a 'M' or 'B' diagnosis
y_list = list(y_synthetic)
diagnosis = ['M' if y==1 else 'B' for y in y_list]
synth_df['diagnosis'] = diagnosis
# Shuffle data
synth_df = synth_df.sample(frac=1.0)
# Save data
synth_df.to_csv('./Output/synthetic_data_pca.csv', index=False)
###Output
_____no_output_____
###Markdown
Show results
###Code
results
###Output
_____no_output_____
###Markdown
Compare raw and synthetic data means and standard deviations
###Code
# Process synthetic data
synth_df.drop('id', axis=1, inplace=True)
malignant = pd.DataFrame()
synth_df['malignant'] = synth_df['diagnosis'] == 'M'
synth_df.drop('diagnosis', axis=1, inplace=True)
descriptive_stats = pd.DataFrame()
descriptive_stats['Original M mean'] = \
original_data[original_data['malignant']==True].mean()
descriptive_stats['Synthetic M mean'] = \
synth_df[synth_df['malignant']==True].mean()
descriptive_stats['Original B mean'] = \
original_data[original_data['malignant']==False].mean()
descriptive_stats['Synthetic B mean'] = \
synth_df[synth_df['malignant']==False].mean()
descriptive_stats['Original M std'] = \
original_data[original_data['malignant']==True].std()
descriptive_stats['Synthetic M std'] = \
synth_df[synth_df['malignant']==True].std()
descriptive_stats['Original B std'] = \
original_data[original_data['malignant']==False].std()
descriptive_stats['Synthetic B std'] = \
synth_df[synth_df['malignant']==False].std()
descriptive_stats
###Output
_____no_output_____ |
tests/SDK/test_sdk08_benchmarker/analysis_d63.ipynb | ###Markdown
Benchmarker Analysis: analysis of tng-sdk-benchmark's behavior for 5GTANGO D6.3.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import matplotlib
import numpy as np
sns.set(font_scale=1.3, style="ticks")
def select_and_rename(df, mapping):
"""
Helper: Selects columns of df using the keys
of the mapping dict.
It renames the columns to the values of the
mappings dict.
"""
# select subset of columns
dff = df[list(mapping.keys())]
# rename
for k, v in mapping.items():
#print("Renaming: {} -> {}".format(k, v))
dff.rename(columns={k: v}, inplace=True)
#print(dff.head())
return dff
def cleanup(df):
"""
Cleanup of df data.
Dataset specific.
"""
def _replace(df, column, str1, str2):
if column in df:
df[column] = df[column].str.replace(str1, str2)
def _to_num(df, column):
if column in df:
df[column] = pd.to_numeric(df[column])
_replace(df, "flow_size", "tcpreplay -i data -tK --loop 40000 --preload-pcap /pcaps/smallFlows.pcap", "0")
_replace(df, "flow_size", "tcpreplay -i data -tK --loop 40000 --preload-pcap /pcaps/bigFlows.pcap", "1")
_to_num(df, "flow_size")
_replace(df, "ruleset", "./start.sh small_ruleset", "1")
_replace(df, "ruleset", "./start.sh big_ruleset", "2")
_replace(df, "ruleset", "./start.sh", "0")
_to_num(df, "ruleset")
_replace(df, "req_size", "ab -c 1 -t 60 -n 99999999 -e /tngbench_share/ab_dist.csv -s 60 -k -i http://20.0.0.254:8888/", "0")
_replace(df, "req_size", "ab -c 1 -t 60 -n 99999999 -e /tngbench_share/ab_dist.csv -s 60 -k http://20.0.0.254:8888/bunny.mp4", "1")
_replace(df, "req_size", "ab -c 1 -t 60 -n 99999999 -e /tngbench_share/ab_dist.csv -s 60 -k -i -X 20.0.0.254:3128 http://40.0.0.254:80/", "0")
_replace(df, "req_size", "ab -c 1 -t 60 -n 99999999 -e /tngbench_share/ab_dist.csv -s 60 -k -X 20.0.0.254:3128 http://40.0.0.254:80/bunny.mp4", "1")
_to_num(df, "req_size")
_replace(df, "req_type", "malaria publish -t -n 20000 -H 20.0.0.254 -q 1 --json /tngbench_share/malaria.json", "0")
_replace(df, "req_type", "malaria publish -t -n 20000 -H 20.0.0.254 -q 2 --json /tngbench_share/malaria.json", "1")
_replace(df, "req_type", "malaria publish -s 10 -n 20000 -H 20.0.0.254 --json /tngbench_share/malaria.json", "2")
_replace(df, "req_type", "malaria publish -s 10000 -n 20000 -H 20.0.0.254 --json /tngbench_share/malaria.json", "3")
_to_num(df, "req_type")
###Output
_____no_output_____
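###Markdown
A tiny usage sketch of the helper above on made-up data; the column names mimic the real CSVs but the values are invented.
###Code
demo = pd.DataFrame({"param__header__all__repetition": [0, 1],
                     "metric__vnf0.vdu01.0__stat__input__rx_bytes": [100, 200],
                     "ignored_column": ["a", "b"]})
select_and_rename(demo, {"param__header__all__repetition": "repetition",
                         "metric__vnf0.vdu01.0__stat__input__rx_bytes": "if_rx_bytes"})
###Output
_____no_output_____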
###Markdown
Data
###Code
df_sec01 = pd.read_csv("/home/manuel/sndzoo/ds_nfv_sec01/data/csv_experiments.csv")
df_sec02 = pd.read_csv("/home/manuel/sndzoo/ds_nfv_sec02/data/csv_experiments.csv")
df_sec03 = pd.read_csv("/home/manuel/sndzoo/ds_nfv_sec03/data/csv_experiments.csv")
df_web01 = pd.read_csv("/home/manuel/sndzoo/ds_nfv_web01/data/csv_experiments.csv")
df_web02 = pd.read_csv("/home/manuel/sndzoo/ds_nfv_web02/data/csv_experiments.csv")
df_web03 = pd.read_csv("/home/manuel/sndzoo/ds_nfv_web03/data/csv_experiments.csv")
df_iot01 = pd.read_csv("/home/manuel/sndzoo/ds_nfv_iot01/data/csv_experiments.csv")
df_iot02 = pd.read_csv("/home/manuel/sndzoo/ds_nfv_iot02/data/csv_experiments.csv")
# do renaming and selection
map_sec01 = {
"run_id": "run_id",
"experiment_name": "ex_name",
"experiment_start": "ex_start",
"experiment_stop": "ex_stop",
"param__header__all__config_id": "conf_id",
"param__header__all__repetition": "repetition",
"param__header__all__time_limit": "time_limit",
"param__header__all__time_warmup": "time_warmup",
"param__func__mp.input__cmd_start": "flow_size",
"param__func__de.upb.ids-suricata.0.1__cmd_start": "ruleset",
"param__func__de.upb.ids-suricata.0.1__cpu_bw": "cpu_bw",
"param__func__de.upb.ids-suricata.0.1__mem_max": "memory",
#"metric__vnf0.vdu01.0__suricata_bytes": "ids_bytes",
#"metric__vnf0.vdu01.0__suricata_packets": "ids_pkts",
#"metric__vnf0.vdu01.0__suricata_dropped": "ids_drop",
#"metric__vnf0.vdu01.0__suricata_drops": "ids_drops",
"metric__vnf0.vdu01.0__stat__input__rx_bytes": "if_rx_bytes",
#"metric__vnf0.vdu01.0__stat__input__rx_dropped": "if_in_rx_dropped",
#"metric__vnf0.vdu01.0__stat__input__rx_errors": "if_in_rx_errors",
#"metric__vnf0.vdu01.0__stat__input__rx_packets": "if_in_rx_packets",
#"metric__vnf0.vdu01.0__stat__input__tx_bytes": "if_in_tx_byte",
#"metric__vnf0.vdu01.0__stat__input__tx_dropped": "if_in_tx_dropped",
#"metric__vnf0.vdu01.0__stat__input__tx_errors": "if_in_tx_errors",
#"metric__vnf0.vdu01.0__stat__input__tx_packets": "if_in_tx_packets",
}
map_sec02 = {
"experiment_name": "ex_name",
"experiment_start": "ex_start",
"experiment_stop": "ex_stop",
"param__header__all__config_id": "conf_id",
"param__header__all__repetition": "repetition",
"param__header__all__time_limit": "time_limit",
"param__header__all__time_warmup": "time_warmup",
"param__func__mp.input__cmd_start": "flow_size",
"param__func__de.upb.ids-snort2.0.1__cmd_start": "ruleset",
"param__func__de.upb.ids-snort2.0.1__cpu_bw": "cpu_bw",
"param__func__de.upb.ids-snort2.0.1__mem_max": "memory",
#"metric__vnf0.vdu01.0__snort_bytes": "ids_bytes",
#"metric__vnf0.vdu01.0__snort_packets": "ids_pkts",
#"metric__vnf0.vdu01.0__snort_dropped": "ids_drop",
#"metric__vnf0.vdu01.0__snort_drops": "ids_drops",
"metric__vnf0.vdu01.0__stat__input__rx_bytes": "if_rx_bytes",
#"metric__vnf0.vdu01.0__stat__input__rx_dropped": "if_in_rx_dropped",
#"metric__vnf0.vdu01.0__stat__input__rx_errors": "if_in_rx_errors",
#"metric__vnf0.vdu01.0__stat__input__rx_packets": "if_in_rx_packets",
#"metric__vnf0.vdu01.0__stat__input__tx_bytes": "if_in_tx_byte",
#"metric__vnf0.vdu01.0__stat__input__tx_dropped": "if_in_tx_dropped",
#"metric__vnf0.vdu01.0__stat__input__tx_errors": "if_in_tx_errors",
#"metric__vnf0.vdu01.0__stat__input__tx_packets": "if_in_tx_packets",
}
map_sec03 = {
"experiment_name": "ex_name",
"experiment_start": "ex_start",
"experiment_stop": "ex_stop",
"param__header__all__config_id": "conf_id",
"param__header__all__repetition": "repetition",
"param__header__all__time_limit": "time_limit",
"param__header__all__time_warmup": "time_warmup",
"param__func__mp.input__cmd_start": "flow_size",
"param__func__de.upb.ids-snort3.0.1__cmd_start": "ruleset",
"param__func__de.upb.ids-snort3.0.1__cpu_bw": "cpu_bw",
"param__func__de.upb.ids-snort3.0.1__mem_max": "memory",
#"metric__vnf0.vdu01.0__snort3_total_allow": "ids_allow",
#"metric__vnf0.vdu01.0__snort3_total_analyzed": "ids_anlyzd",
#"metric__vnf0.vdu01.0__snort3_total_received": "ids_recv",
#"metric__vnf0.vdu01.0__snort3_total_outstanding": "ids_outstanding",
#"metric__vnf0.vdu01.0__snort3_total_dropped": "ids_drop",
"metric__vnf0.vdu01.0__stat__input__rx_bytes": "if_rx_bytes",
#"metric__vnf0.vdu01.0__stat__input__rx_dropped": "if_in_rx_dropped",
#"metric__vnf0.vdu01.0__stat__input__rx_errors": "if_in_rx_errors",
#"metric__vnf0.vdu01.0__stat__input__rx_packets": "if_in_rx_packets",
#"metric__vnf0.vdu01.0__stat__input__tx_bytes": "if_in_tx_byte",
#"metric__vnf0.vdu01.0__stat__input__tx_dropped": "if_in_tx_dropped",
#"metric__vnf0.vdu01.0__stat__input__tx_errors": "if_in_tx_errors",
#"metric__vnf0.vdu01.0__stat__input__tx_packets": "if_in_tx_packets",
}
map_web01 = {
"experiment_name": "ex_name",
"experiment_start": "ex_start",
"experiment_stop": "ex_stop",
"param__header__all__config_id": "conf_id",
"param__header__all__repetition": "repetition",
"param__header__all__time_limit": "time_limit",
"param__header__all__time_warmup": "time_warmup",
"param__func__mp.input__cmd_start": "req_size",
"param__func__de.upb.lb-nginx.0.1__cpu_bw": "cpu_bw",
"param__func__de.upb.lb-nginx.0.1__mem_max": "memory",
"metric__mp.input.vdu01.0__ab_completed_requests": "req_compl",
#"metric__mp.input.vdu01.0__ab_concurrent_lvl": "req_concurrent",
#"metric__mp.input.vdu01.0__ab_failed_requests": "req_failed",
#"metric__mp.input.vdu01.0__ab_html_transfer_byte": "req_html_bytes",
#"metric__mp.input.vdu01.0__ab_mean_time_per_request": "req_time_mean",
#"metric__mp.input.vdu01.0__ab_request_per_second": "req_per_sec",
#"metric__mp.input.vdu01.0__ab_time_used_s": "req_time_used",
#"metric__mp.input.vdu01.0__ab_total_transfer_byte": "transf_bytes",
#"metric__mp.input.vdu01.0__ab_transfer_rate_kbyte_per_second": "req_transf_rate",
"metric__vnf0.vdu01.0__stat__input__rx_bytes": "if_rx_bytes",
#"metric__vnf0.vdu01.0__stat__input__rx_dropped": "if_in_rx_dropped",
#"metric__vnf0.vdu01.0__stat__input__rx_errors": "if_in_rx_errors",
#"metric__vnf0.vdu01.0__stat__input__rx_packets": "if_in_rx_packets",
#"metric__vnf0.vdu01.0__stat__input__tx_bytes": "if_tx_bytes",
#"metric__vnf0.vdu01.0__stat__input__tx_dropped": "if_in_tx_dropped",
#"metric__vnf0.vdu01.0__stat__input__tx_errors": "if_in_tx_errors",
#"metric__vnf0.vdu01.0__stat__input__tx_packets": "if_in_tx_packets",
}
map_web02 = {
"experiment_name": "ex_name",
"experiment_start": "ex_start",
"experiment_stop": "ex_stop",
"param__header__all__config_id": "conf_id",
"param__header__all__repetition": "repetition",
"param__header__all__time_limit": "time_limit",
"param__header__all__time_warmup": "time_warmup",
"param__func__mp.input__cmd_start": "req_size",
"param__func__de.upb.lb-haproxy.0.1__cpu_bw": "cpu_bw",
"param__func__de.upb.lb-haproxy.0.1__mem_max": "memory",
"metric__mp.input.vdu01.0__ab_completed_requests": "req_compl",
#"metric__mp.input.vdu01.0__ab_concurrent_lvl": "req_concurrent",
#"metric__mp.input.vdu01.0__ab_failed_requests": "req_failed",
#"metric__mp.input.vdu01.0__ab_html_transfer_byte": "req_html_bytes",
#"metric__mp.input.vdu01.0__ab_mean_time_per_request": "req_time_mean",
#"metric__mp.input.vdu01.0__ab_request_per_second": "req_per_sec",
#"metric__mp.input.vdu01.0__ab_time_used_s": "req_time_used",
#"metric__mp.input.vdu01.0__ab_total_transfer_byte": "transf_bytes",
#"metric__mp.input.vdu01.0__ab_transfer_rate_kbyte_per_second": "req_transf_rate",
"metric__vnf0.vdu01.0__stat__input__rx_bytes": "if_rx_bytes",
#"metric__vnf0.vdu01.0__stat__input__rx_dropped": "if_in_rx_dropped",
#"metric__vnf0.vdu01.0__stat__input__rx_errors": "if_in_rx_errors",
#"metric__vnf0.vdu01.0__stat__input__rx_packets": "if_in_rx_packets",
#"metric__vnf0.vdu01.0__stat__input__tx_bytes": "if_tx_bytes",
#"metric__vnf0.vdu01.0__stat__input__tx_dropped": "if_in_tx_dropped",
#"metric__vnf0.vdu01.0__stat__input__tx_errors": "if_in_tx_errors",
#"metric__vnf0.vdu01.0__stat__input__tx_packets": "if_in_tx_packets",
}
map_web03 = {
"experiment_name": "ex_name",
"experiment_start": "ex_start",
"experiment_stop": "ex_stop",
"param__header__all__config_id": "conf_id",
"param__header__all__repetition": "repetition",
"param__header__all__time_limit": "time_limit",
"param__header__all__time_warmup": "time_warmup",
"param__func__mp.input__cmd_start": "req_size",
"param__func__de.upb.px-squid.0.1__cpu_bw": "cpu_bw",
"param__func__de.upb.px-squid.0.1__mem_max": "memory",
"metric__mp.input.vdu01.0__ab_completed_requests": "req_compl",
#"metric__mp.input.vdu01.0__ab_concurrent_lvl": "req_concurrent",
#"metric__mp.input.vdu01.0__ab_failed_requests": "req_failed",
#"metric__mp.input.vdu01.0__ab_html_transfer_byte": "req_html_bytes",
#"metric__mp.input.vdu01.0__ab_mean_time_per_request": "req_time_mean",
#"metric__mp.input.vdu01.0__ab_request_per_second": "req_per_sec",
#"metric__mp.input.vdu01.0__ab_time_used_s": "req_time_used",
#"metric__mp.input.vdu01.0__ab_total_transfer_byte": "transf_bytes",
#"metric__mp.input.vdu01.0__ab_transfer_rate_kbyte_per_second": "req_transf_rate",
"metric__vnf0.vdu01.0__stat__input__rx_bytes": "if_rx_bytes",
#"metric__vnf0.vdu01.0__stat__input__rx_dropped": "if_in_rx_dropped",
#"metric__vnf0.vdu01.0__stat__input__rx_errors": "if_in_rx_errors",
#"metric__vnf0.vdu01.0__stat__input__rx_packets": "if_in_rx_packets",
#"metric__vnf0.vdu01.0__stat__input__tx_bytes": "if_tx_bytes",
#"metric__vnf0.vdu01.0__stat__input__tx_dropped": "if_in_tx_dropped",
#"metric__vnf0.vdu01.0__stat__input__tx_errors": "if_in_tx_errors",
#"metric__vnf0.vdu01.0__stat__input__tx_packets": "if_in_tx_packets",
}
map_iot01 = {
"experiment_name": "ex_name",
"experiment_start": "ex_start",
"experiment_stop": "ex_stop",
"param__header__all__config_id": "conf_id",
"param__header__all__repetition": "repetition",
"param__header__all__time_limit": "time_limit",
"param__header__all__time_warmup": "time_warmup",
"param__func__mp.input__cmd_start": "req_type",
"param__func__de.upb.broker-mosquitto.0.1__cpu_bw": "cpu_bw",
"param__func__de.upb.broker-mosquitto.0.1__mem_max": "memory",
#"metric__mp.input.vdu01.0__malaria_clientid": "mal_id",
#"metric__mp.input.vdu01.0__malaria_count_ok": "mal_count_ok",
#"metric__mp.input.vdu01.0__malaria_count_total": "mal_count_total",
#"metric__mp.input.vdu01.0__malaria_msgs_per_sec": "msg_per_sec",
#"metric__mp.input.vdu01.0__malaria_rate_ok": "mal_rate_ok",
#"metric__mp.input.vdu01.0__malaria_time_max": "mal_time_max",
#"metric__mp.input.vdu01.0__malaria_time_mean": "msg_t_mean",
#"metric__mp.input.vdu01.0__malaria_time_min": "mal_time_min",
#"metric__mp.input.vdu01.0__malaria_time_stddev": "msg_t_std",
#"metric__mp.input.vdu01.0__malaria_time_total": "mal_time_total",
#"metric__mp.output.vdu01.0__malaria_client_count": "mal_ccount",
#"metric__mp.output.vdu01.0__malaria_clientid": "mal_cid2",
#"metric__mp.output.vdu01.0__malaria_flight_time_max": "mal_ft_max",
#"metric__mp.output.vdu01.0__malaria_flight_time_mean": "mal_ft_mean",
#"metric__mp.output.vdu01.0__malaria_flight_time_min": "mal_ft_min",
#"metric__mp.output.vdu01.0__malaria_flight_time_stddev": "mal_ft_stddev",
#"metric__mp.output.vdu01.0__malaria_ms_per_msg": "mal_ms_per_msg",
#"metric__mp.output.vdu01.0__malaria_msg_count": "mal_out_msg_count",
#"metric__mp.output.vdu01.0__malaria_msg_duplicates": "mal_out_msg_dup",
#"metric__mp.output.vdu01.0__malaria_msg_per_sec": "mal_out_msgs_per_sec",
#"metric__mp.output.vdu01.0__malaria_test_complete": "mal_test_complete",
#"metric__mp.output.vdu01.0__malaria_time_total": "mal_out_t_total",
"metric__vnf0.vdu01.0__stat__input__rx_bytes": "if_rx_bytes",
#"metric__vnf0.vdu01.0__stat__input__rx_dropped": "if_in_rx_dropped",
#"metric__vnf0.vdu01.0__stat__input__rx_errors": "if_in_rx_errors",
#"metric__vnf0.vdu01.0__stat__input__rx_packets": "if_in_rx_packets",
#"metric__vnf0.vdu01.0__stat__input__tx_bytes": "if_tx_bytes",
#"metric__vnf0.vdu01.0__stat__input__tx_dropped": "if_in_tx_dropped",
#"metric__vnf0.vdu01.0__stat__input__tx_errors": "if_in_tx_errors",
#"metric__vnf0.vdu01.0__stat__input__tx_packets": "if_in_tx_packets",
}
map_iot02 = {
"experiment_name": "ex_name",
"experiment_start": "ex_start",
"experiment_stop": "ex_stop",
"param__header__all__config_id": "conf_id",
"param__header__all__repetition": "repetition",
"param__header__all__time_limit": "time_limit",
#"param__header__all__time_warmup": "time_warmup",
#"param__func__mp.input__cmd_start": "req_type",
#"param__func__de.upb.broker-emqx.0.1__cpu_bw": "cpu_bw",
#"param__func__de.upb.broker-emqx.0.1__mem_max": "memory",
#"metric__mp.input.vdu01.0__malaria_clientid": "mal_id",
#"metric__mp.input.vdu01.0__malaria_count_ok": "mal_count_ok",
#"metric__mp.input.vdu01.0__malaria_count_total": "mal_count_total",
#"metric__mp.input.vdu01.0__malaria_msgs_per_sec": "msg_per_sec",
#"metric__mp.input.vdu01.0__malaria_rate_ok": "mal_rate_ok",
#"metric__mp.input.vdu01.0__malaria_time_max": "mal_time_max",
#"metric__mp.input.vdu01.0__malaria_time_mean": "msg_t_mean",
#"metric__mp.input.vdu01.0__malaria_time_min": "mal_time_min",
#"metric__mp.input.vdu01.0__malaria_time_stddev": "msg_t_std",
#"metric__mp.input.vdu01.0__malaria_time_total": "mal_time_total",
#"metric__mp.output.vdu01.0__malaria_client_count": "mal_ccount",
#"metric__mp.output.vdu01.0__malaria_clientid": "mal_cid2",
#"metric__mp.output.vdu01.0__malaria_flight_time_max": "mal_ft_max",
#"metric__mp.output.vdu01.0__malaria_flight_time_mean": "mal_ft_mean",
#"metric__mp.output.vdu01.0__malaria_flight_time_min": "mal_ft_min",
#"metric__mp.output.vdu01.0__malaria_flight_time_stddev": "mal_ft_stddev",
#"metric__mp.output.vdu01.0__malaria_ms_per_msg": "mal_ms_per_msg",
#"metric__mp.output.vdu01.0__malaria_msg_count": "mal_out_msg_count",
#"metric__mp.output.vdu01.0__malaria_msg_duplicates": "mal_out_msg_dup",
#"metric__mp.output.vdu01.0__malaria_msg_per_sec": "mal_out_msgs_per_sec",
#"metric__mp.output.vdu01.0__malaria_test_complete": "mal_test_complete",
#"metric__mp.output.vdu01.0__malaria_time_total": "mal_out_t_total",
"metric__vnf0.vdu01.0__stat__input__rx_bytes": "if_rx_bytes",
#"metric__vnf0.vdu01.0__stat__input__rx_dropped": "if_in_rx_dropped",
#"metric__vnf0.vdu01.0__stat__input__rx_errors": "if_in_rx_errors",
#"metric__vnf0.vdu01.0__stat__input__rx_packets": "if_in_rx_packets",
#"metric__vnf0.vdu01.0__stat__input__tx_bytes": "if_tx_bytes",
#"metric__vnf0.vdu01.0__stat__input__tx_dropped": "if_in_tx_dropped",
#"metric__vnf0.vdu01.0__stat__input__tx_errors": "if_in_tx_errors",
#"metric__vnf0.vdu01.0__stat__input__tx_packets": "if_in_tx_packets",
}
# add additional data
df_sec01["vnf"] = "suricata"
df_sec02["vnf"] = "snort2"
df_sec03["vnf"] = "snort3"
df_web01["vnf"] = "nginx"
df_web02["vnf"] = "haproxy"
df_web03["vnf"] = "squid"
df_iot01["vnf"] = "mosquitto"
df_iot02["vnf"] = "emqx"
# cleanup data sets
dfs_raw = [df_sec01, df_sec02, df_sec03, df_web01, df_web02, df_web03, df_iot01, df_iot02]
map_list = [map_sec01, map_sec02, map_sec03, map_web01, map_web02, map_web03, map_iot01, map_iot02]
dfs = list() # clean data frames
for (df, m) in zip(dfs_raw, map_list):
tmp = select_and_rename(df.copy(), m)
cleanup(tmp)
dfs.append(tmp)
dfs[0].info()
dfs[0]["ex_start"] = pd.to_datetime(dfs[0]["ex_start"], errors='coerce')
dfs[0]["ex_stop"] = pd.to_datetime(dfs[0]["ex_stop"], errors='coerce')
dfs[0]["td_measure"] = dfs[0]["ex_stop"] - dfs[0]["ex_start"]
dfs[0]["td_measure"] = dfs[0]["td_measure"]/np.timedelta64(1,'s')
dfs[0]["delta_s"] = dfs[0]["time_limit"] - dfs[0]["td_measure"]
dfs[0].info()
#dfs[0].describe()
dfs[0]
g = sns.scatterplot(data=dfs[0], x="run_id", y="td_measure", linewidth=0, alpha=0.5)
g.set_ylim(120.0, 120.15)
g.set(xlabel="Experiment run ID", ylabel="Measurement time [s]")
plt.tight_layout()
plt.savefig("bench_roundtime.png", dpi=300)
###Output
_____no_output_____
###Markdown
Experiment Runtime
###Code
rtdata = list()
rtdata.append({"name": "SEC01", "runtime": 4266})
rtdata.append({"name": "SEC02", "runtime": 4352})
rtdata.append({"name": "SEC03", "runtime": 2145})
rtdata.append({"name": "WEB01", "runtime": 4223})
rtdata.append({"name": "WEB02", "runtime": 4213})
rtdata.append({"name": "WEB03", "runtime": 4232})
rtdata.append({"name": "IOT01", "runtime": 4298})
rtdata.append({"name": "IOT02", "runtime": 6949})
rtdf = pd.DataFrame(rtdata)
rtdf
g = sns.barplot(data=rtdf, x="name", y="runtime", color="gray")
for item in g.get_xticklabels():
item.set_rotation(45)
g.set(xlabel="Experiment", ylabel="Runtime [min]")
plt.tight_layout()
plt.savefig("bench_experiment_runtime_total.png", dpi=300)
###Output
_____no_output_____ |
py/ssd_v1.ipynb | ###Markdown
w and heights are always the same for our standard shape[[[ 0.07023411 0.10281222 0.04966302 0.09932604], [ 0.07023411 0.10281222 0.09932604 0.04966302]],[[ 0.15050167 0.22323005 0.10642076 0.21284151 0.08689218 0.26067653], [ 0.15050167 0.22323005 0.21284151 0.10642076 0.26067653 0.08689218]],[[ 0.33110368 0.41161588 0.23412566 0.46825132 0.19116279 0.57348841], [ 0.33110368 0.41161588 0.46825132 0.23412566 0.57348841 0.19116279]],[[ 0.5117057 0.59519559 0.36183056 0.72366112 0.2954334 0.88630027], [ 0.5117057 0.59519559 0.72366112 0.36183056 0.88630027 0.2954334]],[[ 0.69230771 0.77738154 0.48953545 0.9790709], [ 0.69230771 0.77738154 0.9790709 0.48953545]],[[ 0.87290972 0.95896852 0.61724037 1.23448074], [ 0.87290972 0.95896852 1.23448074 0.61724037]]]
###Code
"""
we are passed x,y points and a selection of widths and heights
"""
with tf.variable_scope('ssd/select'):
l_feed = tf.placeholder(tf.float32, [None, None, None, None, 4], name="localizations")
p_feed = tf.placeholder(tf.float32, [None, None, None, None, 21], name="predictions")
d_pred = p_feed[:, :, :, :, 1:]
d_conditions = tf.greater(d_pred, 0.5)
d_chosen = tf.where(condition=d_conditions)
c_index = d_chosen[:,:-1]
x_feed = tf.placeholder(tf.float32, [None, None, None], name="x")
y_feed = tf.placeholder(tf.float32, [None, None, None], name="y")
h_feed = tf.placeholder(tf.float32, [None], name="h")
w_feed = tf.placeholder(tf.float32, [None], name="w")
box_shape = tf.shape(l_feed)
box_reshape = [-1, box_shape[-2], box_shape[-1]]
box_feat_localizations = tf.reshape(l_feed, box_reshape)
box_yref = tf.reshape(y_feed, [-1, 1])
box_xref = tf.reshape(x_feed, [-1, 1])
box_dx = box_feat_localizations[:, :, 0] * w_feed * 0.1 + box_xref
box_dy = box_feat_localizations[:, :, 1] * h_feed * 0.1 + box_yref
box_w = w_feed * tf.exp(box_feat_localizations[:, :, 2] * 0.2)
box_h = h_feed * tf.exp(box_feat_localizations[:, :, 3] * 0.2)
box_ymin = box_dy - box_h / 2.
box_xmin = box_dx - box_w / 2.
box_xmax = box_dy + box_h / 2.
box_ymax = box_dx + box_w / 2.
box_stack = tf.stack([box_ymin, box_xmin, box_xmax, box_ymax], axis=1)
box_transpose = tf.transpose(box_stack, [0,2,1])
box_gather_reshape = tf.reshape(box_transpose, box_shape, name="reshaping")
classes_selected = tf.cast(tf.transpose(d_chosen)[-1]+1, tf.float32)
classes_expand = tf.expand_dims(classes_selected, 1)
box_gather = tf.gather_nd(box_gather_reshape, c_index)
p_gather = tf.expand_dims(tf.gather_nd(d_pred, d_chosen), 1)
s_out = tf.concat([box_gather, p_gather, classes_expand], axis=1, name="output")
###Output
_____no_output_____
###Markdown
Basic image input: get a local image and expand it to a 4d tensor
###Code
image_path = os.path.join('images/', 'street_smaller.jpg')
mean = tf.constant([123, 117, 104], dtype=tf.float32)
with tf.variable_scope('image'):
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
#we want to use decode_image here but it's buggy
decoded = tf.image.decode_jpeg(image_data, channels=None)
normed = tf.divide(tf.cast(decoded, tf.float32), 255.0)
batched = tf.expand_dims(normed, 0)
resized_image = tf.image.resize_bilinear(batched, [299, 299])
standard_size = resized_image
graph_norm = standard_size * 255.0 - mean
with tf.Session() as image_session:
raw_image, file_image, plot_image = image_session.run((decoded, graph_norm, standard_size), feed_dict={})
# Main image processing routine.
predictions_net, localizations_net = ssd_session.run([predictions, localisations],
feed_dict={'ssd/input:0': file_image})
l_bboxes = []
for i in range(6):
box_feed = {l_feed: localizations_net[i], p_feed: predictions_net[i], \
y_feed: ssd_anchors[i][0], x_feed: ssd_anchors[i][1], \
h_feed: ssd_anchors[i][2], w_feed: ssd_anchors[i][3]}
bboxes = ssd_session.run([s_out], feed_dict=box_feed)
l_bboxes.append(bboxes[0])
bboxes = np.concatenate(l_bboxes, 0)
# implement these in frontend
# rclasses, rscores, rbboxes = np_methods.bboxes_sort(rclasses, rscores, rbboxes, top_k=400)
# rclasses, rscores, rbboxes = np_methods.bboxes_nms(rclasses, rscores, rbboxes, nms_threshold=nms_threshold)
print(predictions)
print(localisations)
print(bboxes)
from simple_heatmap import create_nms
create_nms()
with tf.variable_scope('gather'):
gather_indices = tf.placeholder(tf.int32, [None], name='indices')
gather_values = tf.placeholder(tf.float32, [None, 6], name='values')
gathered = tf.gather(gather_values, gather_indices, name='output')
nms_feed={'nms/bounds:0': bboxes, 'nms/threshold:0': [.8]}
pick = ssd_session.run(('nms/output:0'), feed_dict=nms_feed)
if bboxes.size>0 and pick.size>0:
gather_feed={'gather/indices:0': pick, 'gather/values:0': bboxes}
boxes = ssd_session.run(('gather/output:0'), feed_dict=gather_feed)
print(boxes)
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.image as mpimg
fig, ax = plt.subplots(1)
show_image = np.reshape(plot_image, (299,299,3))
ax.imshow(raw_image)
print(raw_image.shape)
height = raw_image.shape[0]
width = raw_image.shape[1]
for box in boxes:
# Create a Rectangle patch
x = box[1] * width
y = box[0] * height
w = (box[3]-box[1]) * width
h = (box[2]-box[0]) * height
rect = patches.Rectangle((x,y),w,h,linewidth=3,edgecolor='r',facecolor='none')
# Add the patch to the Axes
ax.add_patch(rect)
plt.show()
from tensorflow.python.framework import graph_util
from tensorflow.python.training import saver as saver_lib
from tensorflow.core.protobuf import saver_pb2
checkpoint_prefix = os.path.join("checkpoints", "saved_checkpoint")
checkpoint_state_name = "checkpoint_state"
input_graph_name = "input_ssd_graph.pb"
output_graph_name = "ssd.pb"
input_graph_path = os.path.join("checkpoints", input_graph_name)
saver = saver_lib.Saver(write_version=saver_pb2.SaverDef.V2)
checkpoint_path = saver.save(
ssd_session,
checkpoint_prefix,
global_step=0,
latest_filename=checkpoint_state_name)
graph_def = ssd_session.graph.as_graph_def()
from tensorflow.python.lib.io import file_io
file_io.atomic_write_string_to_file(input_graph_path, str(graph_def))
print("wroteIt")
from tensorflow.python.tools import freeze_graph
input_saver_def_path = ""
input_binary = False
output_node_names = "ssd_300_vgg/softmax/Reshape_1,"+\
"ssd_300_vgg/softmax_1/Reshape_1,"+\
"ssd_300_vgg/softmax_2/Reshape_1,"+\
"ssd_300_vgg/softmax_3/Reshape_1,"+\
"ssd_300_vgg/softmax_4/Reshape_1,"+\
"ssd_300_vgg/softmax_5/Reshape_1,"+\
"ssd_300_vgg/block4_box/Reshape,"+\
"ssd_300_vgg/block7_box/Reshape,"+\
"ssd_300_vgg/block8_box/Reshape,"+\
"ssd_300_vgg/block9_box/Reshape,"+\
"ssd_300_vgg/block10_box/Reshape,"+\
"ssd_300_vgg/block11_box/Reshape,"+\
"ssd/priors/x,"+\
"ssd/priors/y,"+\
"gather/output,"+\
"nms/output,"+\
"ssd/select/output"
restore_op_name = "save/restore_all"
filename_tensor_name = "save/Const:0"
output_graph_path = os.path.join("data", output_graph_name)
clear_devices = False
freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
input_binary, checkpoint_path, output_node_names,
restore_op_name, filename_tensor_name,
output_graph_path, clear_devices, "")
###Output
INFO:tensorflow:Froze 71 variables.
Converted 71 variables to const ops.
685 ops in the final graph.
|
Courses/IadMl/IntroToDeepLearning/seminars/sem05/sem05_task.ipynb | ###Markdown
ATTENTION! It is strongly recommended to do the following assignment in Google Colab, both to avoid connection problems when downloading the dataset and to have enough speed when training the neural network.
###Code
import glob
import sys
import warnings
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn.functional as F
from torch import nn
from tqdm.auto import tqdm
%matplotlib inline
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Transfer learning. In this seminar we will learn how to train a neural network on a hard image classification task very quickly, using a very simple trick called fine-tuning. First, let's download the dataset. This time we will teach the network to tell cats from dogs.
###Code
# !wget https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip && unzip kagglecatsanddogs_3367a.zip > /dev/null
###Output
_____no_output_____
###Markdown
Remove a few corrupted images
###Code
# !rm -rf ./PetImages/Cat/666.jpg ./PetImages/Dog/11702.jpg
###Output
_____no_output_____
###Markdown
Split the dataset into train and test parts using PyTorch utilities.
###Code
from torchvision.datasets import ImageFolder
from torchvision.transforms import Compose, Normalize, Resize, ToTensor
dataset = ImageFolder(
"./PetImages",
transform=Compose(
[
Resize((224, 224)),
ToTensor(),
Normalize((0.5, 0.5, 0.5), (1, 1, 1)),
]
)
)
train_set, test_set = torch.utils.data.random_split(
dataset,
[int(0.8 * len(dataset)), len(dataset) - int(0.8 * len(dataset))]
)
###Output
_____no_output_____
###Markdown
Create dataloaders from the downloaded datasets
###Code
train_dataloader = torch.utils.data.DataLoader(train_set, batch_size=256, shuffle=True)
test_dataloader = torch.utils.data.DataLoader(test_set, batch_size=256, shuffle=False)
###Output
_____no_output_____
###Markdown
Let's look at what the images look like.
###Code
file = np.random.choice(glob.glob("./PetImages/*/*.jpg"))
plt.imshow(plt.imread(file))
###Output
_____no_output_____
###Markdown
Fine-Tuning. Cats and dogs are all well and good, but training a model from scratch that performs well on this dataset can take a very long time... However, the images we look at today turn out to be very similar to the images from the huge ImageNet dataset. The task we consider today is called transfer learning. We really do transfer knowledge: from a network that works well on one dataset (ImageNet) to other data (the Cats vs Dogs dataset). Let's load an already trained network. The torchvision library implements not only a large number of models (all kinds of ResNets, Inception, VGG, AlexNet, DenseNet, ResNeXt, WideResNet, MobileNet, ...) but also ships checkpoints of these models trained on ImageNet. Training such a model from scratch just for Cats vs Dogs would be a luxury...
###Code
from torchvision.models import resnet18
# Load the pretrained network -- pretrained=True
model = resnet18(pretrained=True)
model
for param in model.parameters():
param.requires_grad = False
###Output
_____no_output_____
###Markdown
In the transfer learning task we replace the last layer of the network with a linear layer with two outputs.
###Code
model.fc = nn.Linear(512, 2)
###Output
_____no_output_____
###Markdown
Below are several functions that we have already seen in the previous seminars.
###Code
def train_epoch(
model,
data_loader,
optimizer,
criterion,
return_losses=False,
device="cuda:0",
):
model = model.to(device).train()
total_loss = 0
num_batches = 0
all_losses = []
total_predictions = np.array([])#.reshape((0, ))
total_labels = np.array([])#.reshape((0, ))
with tqdm(total=len(data_loader), file=sys.stdout) as prbar:
for images, labels in data_loader:
# Move Batch to GPU
images = images.to(device)
labels = labels.to(device)
predicted = model(images)
loss = criterion(predicted, labels)
# Update weights
loss.backward()
optimizer.step()
optimizer.zero_grad()
            # Update description for tqdm
accuracy = (predicted.argmax(1) == labels).float().mean()
prbar.set_description(
f"Loss: {round(loss.item(), 4)} "
f"Accuracy: {round(accuracy.item() * 100, 4)}"
)
prbar.update(1)
total_loss += loss.item()
total_predictions = np.append(total_predictions, predicted.argmax(1).cpu().detach().numpy())
total_labels = np.append(total_labels, labels.cpu().detach().numpy())
num_batches += 1
all_losses.append(loss.detach().item())
metrics = {"loss": total_loss / num_batches}
metrics.update({"accuracy": (total_predictions == total_labels).mean()})
if return_losses:
return metrics, all_losses
else:
return metrics
def validate(model, data_loader, criterion, device="cuda:0"):
model = model.eval()
total_loss = 0
num_batches = 0
total_predictions = np.array([])
total_labels = np.array([])
with tqdm(total=len(data_loader), file=sys.stdout) as prbar:
for images, labels in data_loader:
images = images.to(device)
labels = labels.to(device)
predicted = model(images)
loss = criterion(predicted, labels)
accuracy = (predicted.argmax(1) == labels).float().mean()
prbar.set_description(
f"Loss: {round(loss.item(), 4)} "
f"Accuracy: {round(accuracy.item() * 100, 4)}"
)
prbar.update(1)
total_loss += loss.item()
total_predictions = np.append(total_predictions, predicted.argmax(1).cpu().detach().numpy())
total_labels = np.append(total_labels, labels.cpu().detach().numpy())
num_batches += 1
metrics = {"loss": total_loss / num_batches}
metrics.update({"accuracy": (total_predictions == total_labels).mean()})
return metrics
def fit(
model,
epochs,
train_data_loader,
validation_data_loader,
optimizer,
criterion,
device="cuda:0"
):
all_train_losses = []
epoch_train_losses = []
epoch_eval_losses = []
for epoch in range(epochs):
# Train step
print(f"Train Epoch: {epoch}")
train_metrics, one_epoch_train_losses = train_epoch(
model=model,
data_loader=train_data_loader,
optimizer=optimizer,
return_losses=True,
criterion=criterion,
device=device
)
# Save Train losses
all_train_losses.extend(one_epoch_train_losses)
epoch_train_losses.append(train_metrics["loss"])
# Eval step
print(f"Validation Epoch: {epoch}")
with torch.no_grad():
validation_metrics = validate(
model=model,
data_loader=validation_data_loader,
criterion=criterion
)
# Save eval losses
epoch_eval_losses.append(validation_metrics["loss"])
###Output
_____no_output_____
###Markdown
Create the loss object and the optimizer.
###Code
criterion = nn.CrossEntropyLoss()
# YOUR CODE: it must optimize only the parameters of the fully connected layer.
# One reasonable completion of the exercise (the optimizer choice and learning rate are an assumption):
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
device = "cuda:0" if torch.cuda.is_available() else "cpu"
fit(model, 5, train_dataloader, test_dataloader, optimizer, criterion, device=device)
###Output
_____no_output_____
###Markdown
As we can see, one training epoch takes about two minutes, and the quality is already acceptable after a single epoch. Now let's initialize the model from scratch and try to train it.
###Code
model_full = resnet18(pretrained=False)
model_full.fc = nn.Linear(512, 2)
# YOUR CODE: it must optimize across all parameters of the model.
# One reasonable completion of the exercise (the optimizer choice and learning rate are an assumption):
optimizer = torch.optim.Adam(model_full.parameters(), lr=1e-3)
fit(model_full, 5, train_dataloader, test_dataloader, optimizer, criterion, device=device)
###Output
_____no_output_____
###Markdown
__Question__. Why is the time per epoch almost the same when we train the full model? We recommend thinking about this question on your own. As we can see, with transfer learning the network converges very quickly, much faster than the one initialized from scratch. We can confidently say that transfer learning is a very useful technique. Adversarial attacks. Attacks on neural networks are extremely important to take into account during development. There are many methods both for generating such attacks and for defending against them. Today we will look at the basic concepts to give an idea of what is going on. We can call an adversarial attack the generation of an example that is indistinguishable from a real one by eye, but for which the network is VERY confident that it belongs to a different class. We will now try to generate a dog image such that the network is confident it is a cat. Today we consider the Fast Gradient Sign Attack (FGSM; why there is an extra M at the end, who knows). The idea is very simple: it turns out that if we compute the gradient with respect to the original image through the trained network, take its sign, multiply it by a small number and add it to the image, the model will decide the image belongs to a different class. In order to compute the gradient with respect to the input, we first have to "unfreeze" all of its gradients.
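In formula form this is the standard FGSM step (matching the verbal description above): $$ x_{adv} = x + \epsilon \cdot \operatorname{sign}\left(\nabla_x J(\theta, x, y)\right) $$ where $x$ is the input image, $y$ its label, $J$ the loss function and $\epsilon$ a small constant.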
###Code
model.eval()
for param in model.parameters():
param.requires_grad = True
def fgsm_attack(image, epsilon, data_grad):
    # YOUR CODE / DO EXACTLY WHAT IS WRITTEN ON THE ABOVE IMAGE
    # One straightforward completion of the exercise: a single FGSM step,
    # i.e. shift the image in the direction of the sign of the gradient.
    sign_data_grad = data_grad.sign()
    perturbed_image = image + epsilon * sign_data_grad
    return perturbed_image
###Output
_____no_output_____
###Markdown
Pick a random cat image from the dataset
###Code
cl = 1
while cl == 1:
i = np.random.randint(0, len(train_set))
cl = train_set[i][1]
image = train_set[i][0]
image = image.to(device)
# Allow gradient computation with respect to the image
image.requires_grad = True
pred = model(image[None])
predicted_label = pred.argmax(1).item()
confidence = pred.softmax(1)[0][predicted_label]
# plot the prediction nicely
if predicted_label == 1:
plt.title("Dog, confidence = %0.4f" % confidence.item());
else:
plt.title("Cat, confidence = %0.4f" % confidence.item());
plt.imshow(image.cpu().detach().numpy().transpose((1, 2, 0)) + 0.5)
###Output
_____no_output_____
###Markdown
The most interesting part starts here. Compute the gradient of the loss function with respect to the image by calling .backward().
###Code
loss = criterion(pred, torch.tensor(cl).reshape((1,)).to(device))
loss.backward()
###Output
_____no_output_____
###Markdown
Perform the attack.
###Code
eps = 0.007
attack = fgsm_attack(image, eps, image.grad)
pred = model(attack[None])
predicted_label = pred.argmax(1).item()
confidence = pred.softmax(1)[0][predicted_label]
if predicted_label == 1:
plt.title("Dog, confidence = %0.4f" % confidence.item());
else:
plt.title("Cat, confidence = %0.4f" % confidence.item());
plt.imshow(attack.cpu().detach().numpy().transpose((1, 2, 0)) + 0.5)
###Output
_____no_output_____ |
checkbox/dl_checkboxes.ipynb | ###Markdown
Deep Learning Checkboxes

1. My Development Environment
- Windows Desktop with NVIDIA GPU - 1080 ti
- Conda environment with Tensorflow (1.13.1) and Keras GPU version, Python 3.6

2. Folder Structure
- data path -> C:\projects\science\checkbox-data (i.e. extract checkbox-data.tgz here)
- scripts path -> C:\projects\science\checkbox (this notebook lives in here)
- models -> C:\projects\science\models (upon running this notebook the model is stored here)
- pwd -> C:\projects\science\checkbox

3. Build Model
###Code
from split import split
from train import train_resnet_classification
from report import report
# Make train-val-test split
dpath = '../checkbox-data/'
proc_data_path = '../'
split(dpath, proc_data_path)
# above call creates ../data/ folder and copies images as needed by flow_from_directory
# Train Resnet50 - Transfer learning
num_classes = 3
tmode = "train_head"
train_resnet_classification(num_classes, tmode, proc_data_path)
# above call creates models in ../models/train_head.h5
#Fine tune- ResNet50
tmode = "finetune"
train_resnet_classification(num_classes, tmode, proc_data_path)
# above call creates models in ../models/finetune.h5
###Output
Found 502 images belonging to 3 classes.
Found 143 images belonging to 3 classes.
Epoch 1/30
100/100 [==============================] - 18s 185ms/step - loss: 0.3944 - acc: 0.8683 - val_loss: 0.6091 - val_acc: 0.8113
Epoch 00001: val_acc improved from -inf to 0.81132, saving model to ..//models/finetune.h5
Epoch 2/30
100/100 [==============================] - 11s 112ms/step - loss: 0.3835 - acc: 0.8550 - val_loss: 0.7059 - val_acc: 0.7358
Epoch 00002: val_acc did not improve from 0.81132
Epoch 3/30
100/100 [==============================] - 11s 112ms/step - loss: 0.3348 - acc: 0.8717 - val_loss: 0.6730 - val_acc: 0.8050
Epoch 00003: val_acc did not improve from 0.81132
Epoch 4/30
100/100 [==============================] - 11s 112ms/step - loss: 0.2764 - acc: 0.9075 - val_loss: 0.6744 - val_acc: 0.7799
Epoch 00004: val_acc did not improve from 0.81132
Epoch 5/30
100/100 [==============================] - 11s 112ms/step - loss: 0.2507 - acc: 0.9100 - val_loss: 0.7667 - val_acc: 0.8113
Epoch 00005: val_acc improved from 0.81132 to 0.81132, saving model to ..//models/finetune.h5
Epoch 6/30
100/100 [==============================] - 11s 112ms/step - loss: 0.2333 - acc: 0.9125 - val_loss: 0.7639 - val_acc: 0.7862
Epoch 00006: val_acc did not improve from 0.81132
Epoch 7/30
100/100 [==============================] - 11s 112ms/step - loss: 0.2107 - acc: 0.9204 - val_loss: 0.6841 - val_acc: 0.8679
Epoch 00007: val_acc improved from 0.81132 to 0.86792, saving model to ..//models/finetune.h5
Epoch 8/30
100/100 [==============================] - 11s 112ms/step - loss: 0.1548 - acc: 0.9546 - val_loss: 0.7705 - val_acc: 0.7862
Epoch 00008: val_acc did not improve from 0.86792
Epoch 9/30
100/100 [==============================] - 11s 112ms/step - loss: 0.1622 - acc: 0.9487 - val_loss: 0.8206 - val_acc: 0.8101
Epoch 00009: val_acc did not improve from 0.86792
Epoch 10/30
100/100 [==============================] - 11s 112ms/step - loss: 0.1621 - acc: 0.9458 - val_loss: 0.7701 - val_acc: 0.8365
Epoch 00010: val_acc did not improve from 0.86792
Epoch 00010: ReduceLROnPlateau reducing learning rate to 9.999999747378752e-07.
Epoch 11/30
100/100 [==============================] - 11s 112ms/step - loss: 0.1311 - acc: 0.9608 - val_loss: 0.8128 - val_acc: 0.8428
Epoch 00011: val_acc did not improve from 0.86792
Epoch 12/30
100/100 [==============================] - 11s 112ms/step - loss: 0.1114 - acc: 0.9625 - val_loss: 0.9309 - val_acc: 0.8239
Epoch 00012: val_acc did not improve from 0.86792
Epoch 13/30
100/100 [==============================] - 11s 112ms/step - loss: 0.1256 - acc: 0.9458 - val_loss: 0.9322 - val_acc: 0.8113
Epoch 00013: val_acc did not improve from 0.86792
Epoch 14/30
100/100 [==============================] - 11s 112ms/step - loss: 0.1065 - acc: 0.9662 - val_loss: 0.8435 - val_acc: 0.8302
Epoch 00014: val_acc did not improve from 0.86792
Epoch 15/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1337 - acc: 0.9512 - val_loss: 0.9022 - val_acc: 0.8302
Epoch 00015: val_acc did not improve from 0.86792
Epoch 16/30
100/100 [==============================] - 11s 112ms/step - loss: 0.1225 - acc: 0.9587 - val_loss: 1.0039 - val_acc: 0.8428
Epoch 00016: val_acc did not improve from 0.86792
Epoch 00016: ReduceLROnPlateau reducing learning rate to 9.999999974752428e-08.
Epoch 17/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1408 - acc: 0.9500 - val_loss: 0.7442 - val_acc: 0.8428
Epoch 00017: val_acc did not improve from 0.86792
Epoch 18/30
100/100 [==============================] - 11s 112ms/step - loss: 0.1189 - acc: 0.9612 - val_loss: 1.0460 - val_acc: 0.8291
Epoch 00018: val_acc did not improve from 0.86792
Epoch 00018: ReduceLROnPlateau reducing learning rate to 1.0000000116860975e-08.
Epoch 19/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1262 - acc: 0.9546 - val_loss: 1.0775 - val_acc: 0.8176
Epoch 00019: val_acc did not improve from 0.86792
Epoch 20/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1094 - acc: 0.9650 - val_loss: 0.6837 - val_acc: 0.8616
Epoch 00020: val_acc did not improve from 0.86792
Epoch 00020: ReduceLROnPlateau reducing learning rate to 9.999999939225292e-10.
Epoch 21/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1080 - acc: 0.9650 - val_loss: 1.0299 - val_acc: 0.8050
Epoch 00021: val_acc did not improve from 0.86792
Epoch 22/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1048 - acc: 0.9646 - val_loss: 0.7698 - val_acc: 0.8616
Epoch 00022: val_acc did not improve from 0.86792
Epoch 23/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1153 - acc: 0.9621 - val_loss: 0.9307 - val_acc: 0.8365
Epoch 00023: val_acc did not improve from 0.86792
Epoch 24/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1333 - acc: 0.9525 - val_loss: 1.0923 - val_acc: 0.8239
Epoch 00024: val_acc did not improve from 0.86792
Epoch 00024: ReduceLROnPlateau reducing learning rate to 9.999999717180686e-11.
Epoch 25/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1136 - acc: 0.9612 - val_loss: 0.8623 - val_acc: 0.8491
Epoch 00025: val_acc did not improve from 0.86792
Epoch 26/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1041 - acc: 0.9696 - val_loss: 0.8077 - val_acc: 0.8428
Epoch 00026: val_acc did not improve from 0.86792
Epoch 27/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1093 - acc: 0.9625 - val_loss: 0.9493 - val_acc: 0.8354
Epoch 00027: val_acc did not improve from 0.86792
Epoch 28/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1425 - acc: 0.9587 - val_loss: 0.8306 - val_acc: 0.8491
Epoch 00028: val_acc did not improve from 0.86792
Epoch 00028: ReduceLROnPlateau reducing learning rate to 9.99999943962493e-12.
Epoch 29/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1066 - acc: 0.9662 - val_loss: 0.9946 - val_acc: 0.8239
Epoch 00029: val_acc did not improve from 0.86792
Epoch 30/30
100/100 [==============================] - 11s 113ms/step - loss: 0.1385 - acc: 0.9537 - val_loss: 0.8873 - val_acc: 0.8302
Epoch 00030: val_acc did not improve from 0.86792
Epoch 00030: ReduceLROnPlateau reducing learning rate to 9.999999092680235e-13.
###Markdown
Fine-tuning improved the validation accuracy to 86%
###Code
#Run the model on test set and get model accuracy on test set
report(proc_data_path)
###Output
Test accuracy = 87.0%
[[21 2 0]
[ 1 22 0]
[ 1 5 17]]
###Markdown
We are able to get a test accuracy of 87%, which is comparable to the validation accuracy. 4. Make Predictions
###Code
from predict import predict
#Predict on a test image
test_image = '../data/test/0_checkbox-06.open.png'
predict(test_image, proc_data_path)
###Output
WARNING:tensorflow:From C:\Users\Vaishali\Anaconda3\envs\vinayenv\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
|
example/model_basemaps.ipynb | ###Markdown
**Explore basemaps**

You can use the ToucanDataSdk to access basemaps and visualize them in your notebook. First install the GeoJSON renderer: **jupyter labextension install @jupyterlab/geojson-extension**

**1- Connect to your instance with the sdk**
###Code
from toucan_data_sdk import ToucanDataSdk
from IPython.display import GeoJSON
import pandas as pd
from pandas.io.json import json_normalize
import getpass
instance = 'demo'
small_app = 'demo'
instance_url = f"https://api-{instance}.toucantoco.com"
username = 'toucantoco'
try:
auth = get_auth(instance)
except Exception:
auth = (username, getpass.getpass())
sdk = ToucanDataSdk(instance_url, small_app=small_app, auth=auth)
###Output
_____no_output_____
###Markdown
**2- Query basemaps**
###Code
query={'properties.id':'FRA'}
basemaps = sdk.query_basemaps(query)
GeoJSON(basemaps)
pd.DataFrame(json_normalize(basemaps['features']))
###Output
_____no_output_____ |
notebooks/Gaia cov.ipynb | ###Markdown
Make some fake Gaia uncertainties with the same column names as the simulated data:
###Code
import numpy as np

A = np.random.uniform(size=(32,5,5))
cov = 0.5 * np.einsum('nij,nkj->nik', A, A)
data = dict()
for i,name1 in enumerate(['ra', 'dec', 'parallax', 'pmra', 'pmdec']):
data['{}_error'.format(name1)] = np.sqrt(cov[:,i,i])
for j,name2 in enumerate(['ra', 'dec', 'parallax', 'pmra', 'pmdec']):
if j >= i: continue
data['{}_{}_corr'.format(name1,name2)] = cov[:,i,j] / (np.sqrt(cov[:,i,i]*cov[:,j,j]))
def construct_cov(gaia_data):
"""
If the real data look like the simulated data, Gaia will provide
correlation coefficients and standard deviations for
(ra,dec,parallax,pm_ra,pm_dec), but we probably want to turn that
into a covariance matrix.
"""
names = ['ra', 'dec', 'parallax', 'pmra', 'pmdec']
n = len(gaia_data['ra_error'])
C = np.zeros((n,len(names),len(names)))
# pre-load the diagonal
for i,name in enumerate(names):
full_name = "{}_error".format(name)
C[:,i,i] = gaia_data[full_name]**2
for i,name1 in enumerate(names):
for j,name2 in enumerate(names):
if j >= i: continue
full_name = "{}_{}_corr".format(name1, name2)
C[...,i,j] = gaia_data[full_name]*np.sqrt(C[...,i,i]*C[...,j,j])
C[...,j,i] = gaia_data[full_name]*np.sqrt(C[...,i,i]*C[...,j,j])
return C
out_cov = construct_cov(data)
assert np.allclose(out_cov, cov)
###Output
_____no_output_____ |
jupyter-notebooks/Run the generate-sitemap pipeline.ipynb | ###Markdown
Run the generate-sitemap pipeline

[dpp](https://github.com/frictionlessdata/datapackage-pipelines) runs the knesset data pipelines periodically on our server. This notebook runs the generate-sitemap pipeline, which generates the sitemap at https://oknesset.org/sitemap.txt

Generate the site: run the render site pages notebook, then verify:
###Code
%%bash
echo committees
ls -lah ../data/committees/dist/dist/committees | wc -l
echo factions
ls -lah ../data/committees/dist/dist/factions | wc -l
echo meetings
ls -lah ../data/committees/dist/dist/meetings/*/* | wc -l
echo members
ls -lah ../data/committees/dist/dist/members | wc -l
###Output
committees
1559
factions
26
meetings
2977
members
2081
###Markdown
Run the generate-sitemap pipeline
###Code
!{'cd /pipelines; dpp run --verbose ./knesset/generate-sitemap'}
###Output
[./knesset/generate-sitemap:T_0] >>> INFO :6d1ce4d1 RUNNING ./knesset/generate-sitemap
[./knesset/generate-sitemap:T_0] >>> INFO :6d1ce4d1 Collecting dependencies
[./knesset/generate-sitemap:T_0] >>> INFO :6d1ce4d1 Running async task
[./knesset/generate-sitemap:T_0] >>> INFO :6d1ce4d1 Waiting for completion
[./knesset/generate-sitemap:T_0] >>> INFO :6d1ce4d1 Async task starting
[./knesset/generate-sitemap:T_0] >>> INFO :6d1ce4d1 Searching for existing caches
[./knesset/generate-sitemap:T_0] >>> INFO :6d1ce4d1 Building process chain:
[./knesset/generate-sitemap:T_0] >>> INFO :- generate_sitemap
[./knesset/generate-sitemap:T_0] >>> INFO :- (sink)
[./knesset/generate-sitemap:T_0] >>> INFO :generate_sitemap: INFO :loading from data path: /pipelines/data/committees/dist/dist
[./knesset/generate-sitemap:T_0] >>> INFO :generate_sitemap: INFO :num_links_per_file=50000
[./knesset/generate-sitemap:T_0] >>> INFO :6d1ce4d1 DONE /usr/local/lib/python3.6/site-packages/datapackage_pipelines/manager/../lib/internal/sink.py
[./knesset/generate-sitemap:T_0] >>> INFO :6d1ce4d1 DONE /pipelines/knesset/generate_sitemap.py
[./knesset/generate-sitemap:T_0] >>> INFO :6d1ce4d1 DONE V ./knesset/generate-sitemap {'num-directories': 43, 'num-files': 3197, 'num-sitemap-links': 3197, 'num-sitemap-txt-files': 1}
INFO :RESULTS:
INFO :SUCCESS: ./knesset/generate-sitemap {'num-directories': 43, 'num-files': 3197, 'num-sitemap-links': 3197, 'num-sitemap-txt-files': 1}
###Markdown
View the sitemap
###Code
%%bash
echo number of committees: `cat ../data/committees/dist/dist/sitemap.txt | grep committees | wc -l`
echo first 10 committees:
cat ../data/committees/dist/dist/sitemap.txt | grep committees | head
echo number of meetings: `cat ../data/committees/dist/dist/sitemap.txt | grep meetings | wc -l`
echo first 10 meetings:
cat ../data/committees/dist/dist/sitemap.txt | grep meetings | head
###Output
number of committees: 778
first 10 committees:
https://oknesset.org/committees/965.html
https://oknesset.org/committees/928.html
https://oknesset.org/committees/index.html
https://oknesset.org/committees/109.html
https://oknesset.org/committees/430.html
https://oknesset.org/committees/1004.html
https://oknesset.org/committees/23.html
https://oknesset.org/committees/126.html
https://oknesset.org/committees/123.html
https://oknesset.org/committees/711.html
number of meetings: 1002
first 10 meetings:
https://oknesset.org/meetings/4/2/425865.html
https://oknesset.org/meetings/4/2/428527.html
https://oknesset.org/meetings/4/2/422217.html
https://oknesset.org/meetings/4/2/425287.html
https://oknesset.org/meetings/4/2/429615.html
https://oknesset.org/meetings/4/2/425155.html
https://oknesset.org/meetings/4/2/426910.html
https://oknesset.org/meetings/4/2/425961.html
https://oknesset.org/meetings/4/2/424526.html
https://oknesset.org/meetings/4/2/426405.html
|
hw11/graph-recsys/recsys-skillfactory.ipynb | ###Markdown
Course "Practical Machine Learning". Andrey Shestakov. Introduction to recommender systems
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (12, 8)
import warnings
warnings.filterwarnings('ignore')
from ipywidgets import interact, IntSlider, fixed, FloatSlider
###Output
_____no_output_____
###Markdown
**Motivation**
* People are consumers of content and services: music, movies, books, games, food, ...
* But the choice is far too large:
  * Spotify - about 30 million songs
  * Netflix - about 20 thousand movies
  * Amazon - about 500 thousand books
  * Steam - about 20 thousand games
* We have to filter it somehow:
  * ask friends (their tastes may differ),
  * read reviews (takes a lot of time),
  * or use an automatic recommender system! (Netflix Prize)

**Sources of personal recommendations**
* Based on the user's own preferences: a user "profile" is computed, and the best-matching items are selected for it
* Based on similar users: find other users with similar interests and deliver a recommendation
* Based on similar items: recommend items similar to the ones I already like

**Problem statement**
* Users rate items: binary ratings, a number of "stars", or implicit feedback (time/money spent)
* We need to fill in the missing ratings
* And provide recommendations

**Pitfalls**
* Good rating reconstruction $\neq$ a good recommender system
* The sellers' economic preferences have to be taken into account
* Learning loop
* Cold start: arises for new items and new users
* Scalability
* Rating manipulation
* Inactive users
* Trivial recommendations

**Approaches**
* Collaborative filtering
* Latent methods (matrix factorizations)

**Collaborative filtering**
* User-based
* Item-based

**User-based CF.** Notation:
* $U$ - the set of users
* $I$ - the set of items
* $U_i$ - the set of users who rated item $i$
* $I_u$ - the set of items rated by user $u$
* $R_{ui}$ - the rating that user $u$ gave to item $i$
* $\hat{R}_{ui}$ - the predicted rating

**Rating prediction**
* Compute the similarity between users, $s \in \mathbb{R}^{U \times U}$
* For the target user $u$ find similar users $N(u)$
$$ \hat{R}_{ui} = \bar{R}_u + \frac{\sum_{v \in N(u)} s_{uv}(R_{vi} - \bar{R}_v)}{\sum_{v \in N(u)} \left| s_{uv}\right|} $$
* $\bar{R}_u$ corrects for the pessimism/optimism of individual users

**How to choose $N(u)$?** $N(u)$ can be chosen in different ways:
* take everybody
* top-$k$
* $s_{uv} > \theta$

**How to measure the similarity of users?** For every pair $(u,v)$ we have to intersect their sets of rated items.
* Pearson correlation
$$ s_{uv} = \frac{\sum\limits_{i \in I_u\cap I_v} (R_{ui} - \bar{R}_u)(R_{vi} - \bar{R}_v)}{\sqrt{\sum\limits_{i \in I_u\cap I_v}(R_{ui} - \bar{R}_u)^2}\sqrt{\sum\limits_{i \in I_u\cap I_v}(R_{vi} - \bar{R}_v)^2}}$$
* Spearman correlation
* Cosine similarity
$$ s_{uv} = \frac{\sum\limits_{i \in I_u\cap I_v} R_{ui} R_{vi}}{\sqrt{{\sum\limits_{i \in I_u\cap I_v}R_{ui}^2}}\sqrt{{\sum\limits_{i \in I_u\cap I_v}R_{vi}^2}}}$$

**Item-based CF. Rating prediction**
* Compute the similarity between items, $s \in \mathbb{R}^{I \times I}$
* For item $i$ find the items rated by user $u$ that are similar to it: $N(i)$
$$ \hat{R}_{ui} = \frac{\sum_{j \in N(i)} s_{ij}R_{uj}}{\sum_{j \in N(i)} \left| s_{ij}\right|} $$

**Item similarity**
* Conditional probability
$$ s_{ij} = \frac{n_{ij}}{n_i} $$
* Dependence
$$ s_{ij} = \frac{n_{ij}}{n_i n_j} $$

Let's try to do something with the [surprise](http://surprise.readthedocs.io/en/stable/index.html) module. **CF demo**
###Code
filepath = './data/user_ratedmovies.dat'
df_rates = pd.read_csv(filepath, sep='\t')
filepath = './data/movies.dat'
df_movies = pd.read_csv(filepath, sep='\t', encoding='iso-8859-1')
df_movies.head()
df_movies.loc[:, 'id'] = df_movies.loc[:, 'id'].astype('str')
df_movies = df_movies.set_index('id')
df_rates.head()
q = df_rates.datetime.quantile(0.85)
filepath = './data/user_ratedmovies_train.dat'
idx = df_rates.datetime < q
df_rates.loc[idx].to_csv(filepath, sep='\t', columns=['userID', 'movieID', 'rating'], index=None)
filepath = './data/user_ratedmovies_test.dat'
df_rates.loc[~idx].to_csv(filepath, sep='\t', columns=['userID', 'movieID', 'rating'], index=None)
from surprise import Dataset, Reader
filepaths = [('./data/user_ratedmovies_train.dat', './data/user_ratedmovies_test.dat')]
reader = Reader(line_format='user item rating', sep='\t', skip_lines=1)
data = Dataset.load_from_folds(filepaths, reader=reader)
from surprise import KNNBasic, KNNWithMeans
from surprise.accuracy import rmse
from surprise import dump
###Output
_____no_output_____
###Markdown
A description of the CF-based algorithms is available [here](http://surprise.readthedocs.io/en/stable/knn_inspired.html)
###Code
sim_options = {'name': 'cosine',
'user_based': True
}
dumpfile = './alg.dump'
algo = KNNWithMeans(k=20, min_k=1, sim_options=sim_options)
for trainset, testset in data.folds():
algo.train(trainset)
predictions = algo.test(testset)
rmse(predictions)
dump.dump(dumpfile, predictions, algo)
df_predictions = pd.DataFrame(predictions, columns=['uid', 'iid', 'rui', 'est', 'details'])
df_predictions.head()
algo.predict('190', '173', verbose=2)
anti_train = trainset.build_anti_testset()
one_user = filter(lambda r: r[0] == '75', anti_train)
# This will take a while..
# anti_train_predictions = algo.test(one_user)
anti_train_predictions = algo.test(one_user)
from collections import defaultdict
def get_top_n(predictions, n=10):
# First map the predictions to each user.
top_n = defaultdict(list)
for uid, iid, true_r, est, _ in predictions:
top_n[uid].append((iid, est))
# Then sort the predictions for each user and retrieve the k highest ones.
for uid, user_ratings in top_n.items():
user_ratings.sort(key=lambda x: x[1], reverse=True)
top_n[uid] = user_ratings[:n]
return top_n
df_movies.loc['5695', 'title']
top_n = get_top_n(anti_train_predictions, n=10)
for uid, user_ratings in top_n.items():
print(uid, [df_movies.loc[iid, 'title'] for (iid, _) in user_ratings])
###Output
_____no_output_____
###Markdown
**Models with latent factors**

For every user and every item we build vectors $p_u\in \mathbb{R}^{k}$ and $q_i \in \mathbb{R}^{k}$ such that
$$ R_{ui} \approx p_u^\top q_i $$
* $p_u$ can often be interpreted as the user's interest in certain categories of items
* $q_i$ can often be interpreted as the item's membership in certain categories

In addition, in the resulting space we can measure the similarity of users and items.

**Non-negative Matrix Factorization**
* $P \geq 0$
* $Q \geq 0$

**SVD decomposition**
* The missing entries have to be filled with something: zeros or baseline predictions
* As an option: $R' = R-B$ and fill with $0$
* Then: $P = U\sqrt{\Sigma}$, $Q = \sqrt{\Sigma}V^\top$, $\hat{R} = P^\top Q$
* And how do we make predictions for new users?

**Latent Factor Model**
* We optimize the following functional
$$ \sum\limits_{u,i}(R_{ui} - \bar{R}_u - \bar{R}_i - \langle p_u, q_i \rangle)^2 + \lambda \sum_u\| p_u \|^2 + \mu\sum_i\| q_i \|^2 \rightarrow \min\limits_{P, Q} $$
* using gradient descent (at every step a pair $(u,i)$ is picked at random):
$$ p_{uk} = p_{uk} + 2\alpha \left(q_{ik}(R_{ui} - \bar{R}_u - \bar{R}_i - \langle p_u, q_i \rangle) - \lambda p_{uk}\right)$$
$$ q_{ik} = q_{ik} + 2\alpha \left(p_{uk}(R_{ui} - \bar{R}_u - \bar{R}_i - \langle p_u, q_i \rangle) - \mu q_{ik}\right)$$

**SVD demo.** Re-encode the movie and user IDs
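As a rough illustration of the SGD updates above, here is a minimal NumPy sketch of the latent factor model (toy data and arbitrary hyperparameters; the baseline terms $\bar{R}_u$, $\bar{R}_i$ are dropped for brevity, and the actual demo below uses a plain truncated SVD instead):
###Code
import numpy as np

def lfm_sgd(R, k=2, alpha=0.01, lam=0.1, mu=0.1, n_iter=200, seed=0):
    """Toy latent factor model fitted with SGD on the observed entries of R (0 = missing)."""
    n_users, n_items = R.shape
    rng = np.random.RandomState(seed)
    P = 0.1 * rng.randn(n_users, k)
    Q = 0.1 * rng.randn(n_items, k)
    users, items = np.nonzero(R)
    for _ in range(n_iter):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += 2 * alpha * (err * Q[i] - lam * P[u])
            Q[i] += 2 * alpha * (err * P[u] - mu * Q[i])
    return P, Q

R_toy = np.array([[5., 3., 0., 1.],
                  [4., 0., 0., 1.],
                  [1., 1., 0., 5.],
                  [1., 0., 0., 4.],
                  [0., 1., 5., 4.]])
P, Q = lfm_sgd(R_toy)
print(np.round(P @ Q.T, 2))  # reconstructed rating matrix
###Output
_____no_output_____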
###Code
filepath = './data/user_ratedmovies.dat'
df_rates = pd.read_csv(filepath, sep='\t')
filepath = './data/movies.dat'
df_movies = pd.read_csv(filepath, sep='\t', encoding='iso-8859-1')
from sklearn.preprocessing import LabelEncoder
mov_enc = LabelEncoder()
mov_enc.fit(df_rates.movieID.values)
n_movies = df_rates.movieID.nunique()
user_enc = LabelEncoder()
user_enc.fit(df_rates.userID.values)
n_users = df_rates.userID.nunique()
idx = df_movies.loc[:, 'id'].isin(df_rates.movieID)
df_movies = df_movies.loc[idx, :]
df_rates.loc[:, 'movieID'] = mov_enc.transform(df_rates.movieID.values)
df_movies.loc[:, 'id'] = mov_enc.transform(df_movies.loc[:, 'id'].values)
df_rates.loc[:, 'userID'] = user_enc.transform(df_rates.userID.values)
df_rates.head()
###Output
_____no_output_____
###Markdown
Let's write out the ratings matrix explicitly
###Code
from scipy.sparse import coo_matrix, csr_matrix
n_users_train = df_rates.userID.nunique()
R_train = coo_matrix((df_rates.rating,
(df_rates.userID.values, df_rates.movieID.values)),
shape=(n_users, n_movies))
from scipy.sparse.linalg import svds
u, s, vt = svds(R_train, k=10, )
vt.shape
from sklearn.neighbors import NearestNeighbors
nn = NearestNeighbors(n_neighbors=10, metric='cosine')
v = vt.T
nn.fit(v)
ind = nn.kneighbors(v, return_distance=False)
m_names = df_movies.title.values
m_names = pd.DataFrame(data=m_names[ind], columns=['movie']+['nn_{}'.format(i) for i in range(1,10)])
idx = m_names.movie.str.contains('Terminator')
m_names.loc[idx]
###Output
_____no_output_____ |
project/starter_code/student_project-Copy1.ipynb | ###Markdown
Overview 1. Project Instructions & Prerequisites2. Learning Objectives3. Data Preparation4. Create Categorical Features with TF Feature Columns5. Create Continuous/Numerical Features with TF Feature Columns6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset(denormalized at the line level augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial.This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine(https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema**The dataset reference information can be https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Information was extracted from the database for encounters that satisfied the following criteria. (1) It is an inpatient encounter (a hospital admission). 
(2) It is a "diabetic" encounter, that is, one during which any kind of diabetes was entered to the system as a diagnosis. (3) The length of stay was at least 1 day and at most 14 days. (4) Laboratory tests were performed during the encounter. (5) Medications were administered during the encounter. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py should be where you put most of your code that you write and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python- Basic knowledge of probability and statistics- Basic knowledge of machine learning concepts- Installation of Tensorflow 2.0 and other dependencies(conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels(longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features(bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - SWBAT use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import pandas as pd
import matplotlib.pyplot as plt
import aequitas as ae
import warnings
warnings.filterwarnings("ignore")
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
!pip install xlrd
feat_desc_df = pd.read_excel("features.xlsx")
feat_desc_df
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 143424 entries, 0 to 143423
Data columns (total 26 columns):
encounter_id 143424 non-null int64
patient_nbr 143424 non-null int64
race 143424 non-null object
gender 143424 non-null object
age 143424 non-null object
weight 143424 non-null object
admission_type_id 143424 non-null int64
discharge_disposition_id 143424 non-null int64
admission_source_id 143424 non-null int64
time_in_hospital 143424 non-null int64
payer_code 143424 non-null object
medical_specialty 143424 non-null object
primary_diagnosis_code 143424 non-null object
other_diagnosis_codes 143424 non-null object
number_outpatient 143424 non-null int64
number_inpatient 143424 non-null int64
number_emergency 143424 non-null int64
num_lab_procedures 143424 non-null int64
number_diagnoses 143424 non-null int64
num_medications 143424 non-null int64
num_procedures 143424 non-null int64
ndc_code 119962 non-null object
max_glu_serum 143424 non-null object
A1Cresult 143424 non-null object
change 143424 non-null object
readmitted 143424 non-null object
dtypes: int64(13), object(13)
memory usage: 28.5+ MB
###Markdown
Determine Level of Dataset (Line or Encounter) **Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked.
###Code
len(df) > df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Student Response: The dataset is at the line level: there are more rows than unique encounter_id values, so each encounter spans multiple lines. Besides encounter_id and patient_nbr, there are no other key fields we need to aggregate on. Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library(https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project. **Student Response**: 2a. number_outpatient, number_inpatient, number_emergency and num_procedures are the fields with the highest proportion of zero values, while ndc_code is the only field containing null (missing) values. 2b. Based on the frequency histogram for each numerical field, num_lab_procedures is the field with an approximately Gaussian (normal) distribution shape. 2c. 'primary_diagnosis_code', 'ndc_code' and 'other_diagnosis_codes' have the highest cardinality, because diagnosis codes (ICD) and drug codes (NDC) are industry code sets with thousands of possible distinct values. 2d. The age variable is slightly skewed to the left, forming a left-tailed distribution; the combined (age, gender) count plot in the EDA below shows the largest number of patients in the 70-80 age group, and more males than females between ages 40 and 70. Exploratory Data Analysis
###Code
# Missing values
def check_null_values(df):
null_df = pd.DataFrame({'columns': df.columns,
'percent_null': df.isnull().sum() * 100 / len(df),
'percent_zero': df.isin([0]).sum() * 100 / len(df)
} )
return null_df
null_df = check_null_values(df)
null_df
df.isna().sum()
#Subset only numerical columns in the dataframe for plotting distributions
num_df = df.select_dtypes(include=['int64'])
num_df.head()
df.hist()
plt.rcParams["figure.figsize"] = 14, 14
plt.show()
def create_cardinality_feature(df):
num_rows = len(df)
random_code_list = np.arange(100, 1000, 1)
return np.random.choice(random_code_list, num_rows)
def count_unique_values(df, cat_col_list):
cat_df = df[cat_col_list]
#cat_df['principal_diagnosis_code'] = create_cardinality_feature(cat_df)
#add feature with high cardinality
val_df = pd.DataFrame({'columns': cat_df.columns,
'cardinality': cat_df.nunique() } )
return val_df
cat_df = df.select_dtypes(exclude=['int64'])
cat_df.head()
categorical_feature_list = cat_df.columns
val_df = count_unique_values(df, categorical_feature_list)
val_df
import numpy as np
# Filter out E and V codes since processing will be done on the numeric first 3 values
df['recode'] = df['primary_diagnosis_code']
df['recode'] = df['recode'][~df['recode'].str.contains("[a-zA-Z]").fillna(False)]
df['recode'] = np.where(df['recode'] == '?', '999', df['recode'])
df['recode'].fillna(value='999', inplace=True)
df['recode'] = df['recode'].str.slice(start=0, stop=3, step=1)
df['recode'] = df['recode'].astype(int)
df.head()
# ICD-9 Main Category ranges
icd9_ranges = [(1, 140), (140, 240), (240, 280), (280, 290), (290, 320), (320, 390),
(390, 460), (460, 520), (520, 580), (580, 630), (630, 680), (680, 710),
(710, 740), (740, 760), (760, 780), (780, 800), (800, 1000), (1000, 2000)]
# Associated category names
diag_dict = {0: 'infectious', 1: 'neoplasms', 2: 'endocrine', 3: 'blood',
4: 'mental', 5: 'nervous', 6: 'circulatory', 7: 'respiratory',
8: 'digestive', 9: 'genitourinary', 10: 'pregnancy', 11: 'skin',
12: 'muscular', 13: 'congenital', 14: 'prenatal', 15: 'misc',
16: 'injury', 17: 'misc'}
# Re-code in terms of integer
for num, cat_range in enumerate(icd9_ranges):
df['recode'] = np.where(df['recode'].between(cat_range[0],cat_range[1]),
num, df['recode'])
# Convert integer to category name using diag_dict
df['recode'] = df['recode']
df['cat'] = df['recode'].replace(diag_dict)
df.head()
import seaborn as sns
sns.countplot(x="age", hue='gender', palette="ch:.25", data=df)
###Output
_____no_output_____
###Markdown
2d. The age variable is slightly skewed to the left, forming a left-tailed distribution. From the combined (age, gender) plot above, there appears to be a higher number of sample patients in the 70-80 age group, and the number of males is higher between ages 40 and 70.
###Code
#checking for unknown/invalid values in gender
df[df['gender'] == 'Unknown/Invalid']
#removing unknow/invalid rows from the gender
df = df[df.gender != 'Unknown/Invalid']
df.head()
df['gender'].hist()
df['medical_specialty'].value_counts()
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
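The reduce_dimension_ndc helper imported below lives in student_utils and is not shown in this notebook. A minimal sketch of the idea (map each NDC code to its generic, non-proprietary drug name via the lookup table) could look like the following; the 'Non-proprietary Name' column name is an assumption and should be adjusted to whatever the lookup file actually uses.
###Code
def reduce_dimension_ndc_sketch(df, ndc_df, ndc_col='ndc_code',
                                generic_col='Non-proprietary Name'):
    # Build an NDC code -> generic drug name mapping from the lookup table.
    # generic_col is assumed; check ndc_df.columns for the real column name.
    mapping = dict(zip(ndc_df['NDC_Code'], ndc_df[generic_col]))
    out = df.copy()
    out['generic_drug_name'] = out[ndc_col].map(mapping)
    return out
###Output
_____no_output_____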
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
ndc_code_df.head()
df.head()
ndc_code_df[ndc_code_df['NDC_Code'].isin(df['ndc_code'].unique())]
from student_utils import reduce_dimension_ndc
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
reduce_dim_df.head()
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
###Output
_____no_output_____
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
len(reduce_dim_df) > reduce_dim_df['encounter_id'].nunique()
len(reduce_dim_df) == reduce_dim_df['encounter_id'].nunique()
reduce_dim_df[reduce_dim_df['encounter_id'] == 12522]
###Output
_____no_output_____
###Markdown
Scratch work (an earlier aggregation attempt, not used for the solution below): grouping on grouping_field_list = ['encounter_id', 'patient_nbr', 'primary_diagnosis_code'] and aggregating the remaining columns into sets with reduce_dim_df.groupby(grouping_field_list)[non_grouped_field_list].agg(lambda x: set([y for y in x if y is not np.nan])).reset_index() moves the data from the line level to the encounter level (len(encounter_df) == encounter_df['encounter_id'].nunique()); the line-level dataframe had multiple repeating rows per encounter (for example encounter_id 12522 and patient_nbr 135), which this aggregation collapses into a single encounter each.
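The select_first_encounter helper imported in the next cell also comes from student_utils; a possible implementation, assuming ascending encounter_id gives the time ordering, is sketched here.
###Code
def select_first_encounter_sketch(df, patient_col='patient_nbr', encounter_col='encounter_id'):
    # For each patient keep only the rows belonging to their earliest encounter.
    # The data is still at the line level, so a single encounter can span several rows.
    first_encounter_ids = (df.sort_values(encounter_col)
                             .groupby(patient_col)[encounter_col]
                             .first())
    return df[df[encounter_col].isin(first_encounter_ids)].copy()
###Output
_____no_output_____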
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df, 'patient_nbr', 'encounter_id')
first_encounter_df
#take subset of output
test_first_encounter_df = first_encounter_df[['encounter_id', 'patient_nbr']]
test_first_encounter_df[test_first_encounter_df['patient_nbr']== 135]
first_encounter_df[first_encounter_df['patient_nbr']== 135]
df[df['patient_nbr']== 135]
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:71515
Number of unique encounters:71515
Tests passed!!
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
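The provided helper is used as-is in the next cell; as a rough illustrative sketch of the idea (one-hot encode the generic drug names, then collapse the line-level rows so each drug indicator is 1 if the drug appears anywhere in the group), something like the following would work, although the real utils.aggregate_dataset also handles naming and edge-case details.
###Code
def aggregate_dataset_sketch(df, grouping_field_list, array_field):
    # One-hot encode the generic drug names...
    dummies = pd.get_dummies(df[array_field], prefix=array_field)
    dummy_col_list = list(dummies.columns)
    concat_df = pd.concat([df, dummies], axis=1)
    # ...then collapse line-level rows to one row per grouping key,
    # keeping a 1 in each drug column if that drug appeared on any line.
    agg_df = concat_df.groupby(grouping_field_list)[dummy_col_list].max().reset_index()
    return agg_df, dummy_col_list
###Output
_____no_output_____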
###Code
first_encounter_df['generic_drug_name'].unique()
exclusion_list = ['generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
first_encounter_df['generic_drug_name'].nunique()
agg_drug_df.head()
ndc_col_list
agg_drug_df.columns
agg_drug_df.info()
agg_drug_df.columns
grouping_field_list = ['encounter_id', 'patient_nbr', 'race', 'gender', 'weight',
'admission_type_id', 'discharge_disposition_id', 'admission_source_id',
'time_in_hospital', 'payer_code', 'medical_specialty',
'primary_diagnosis_code', 'other_diagnosis_codes', 'number_outpatient',
'number_inpatient', 'number_emergency', 'num_lab_procedures',
'number_diagnoses', 'num_medications', 'num_procedures',
'max_glu_serum', 'A1Cresult', 'change', 'readmitted']
non_grouped_field_list = [c for c in agg_drug_df.columns if c not in grouping_field_list]
encounter_agg_drug_df = agg_drug_df.groupby(grouping_field_list)[non_grouped_field_list].agg(lambda x:
list(set([y for y in x if y is not np.nan ] ) )).reset_index()
encounter_agg_drug_df.head()
len(encounter_agg_drug_df)
encounter_agg_drug_df['patient_nbr'].nunique()
encounter_agg_drug_df['encounter_id'].nunique()
assert len(encounter_agg_drug_df) == encounter_agg_drug_df['patient_nbr'].nunique() == encounter_agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. Student response: The weight field should be excluded: it is unknown ('?') for roughly 96% of encounters (52,294 of 54,269), so it carries almost no usable signal. The payer_code field is missing for about 42% of encounters; it is kept in the categorical feature list below (with missing values imputed), since the remaining values may still carry some information, although it is unlikely to be a strong predictor of length of stay.
###Code
encounter_agg_drug_df['weight'].value_counts()
encounter_agg_drug_df['payer_code'].value_counts()
print("Percent of missing weights: %f" %(52294 / 54269))
print("Percent of missing payer_codes: %f" %(22594 / 54269))
encounter_agg_drug_df.info()
encounter_agg_drug_df.columns
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = [ 'primary_diagnosis_code', 'payer_code', 'medical_specialty',
'other_diagnosis_codes', 'max_glu_serum', 'change', 'readmitted' ] + required_demo_col_list + ndc_col_list
student_numerical_col_list = [ 'encounter_id', 'patient_nbr', 'admission_type_id', 'discharge_disposition_id',
'admission_source_id', 'number_outpatient',
'number_inpatient', 'num_lab_procedures', 'number_diagnoses',
'num_medications', 'num_procedures']
PREDICTOR_FIELD = 'time_in_hospital'
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
return encounter_agg_drug_df[selected_col_list]
selected_features_df = select_model_features(encounter_agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
selected_features_df.head()
encounter_agg_drug_df.head()
sel_copy = selected_features_df.copy()
sel_copy.head()
###Output
_____no_output_____
###Markdown
k = []for x in sel_copy['time_in_hospital']: k.append(list(x)[0]) ser = pd.Series(k)ser sel_copy['time_in_hospital'] = k Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
_____no_output_____
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidently leak across partitions.Please complete the function below to split the input dataset into three partitions(train, validation, test) with the following requirements.- Approximately 60%/20%/20% train/validation/test split- Randomly sample different patients into each data partition- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset- Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
processed_df = processed_df.loc[:,~processed_df.columns.duplicated()]
processed_df.head()
def patient_dataset_splitter(df, patient_key='patient_nbr'):
'''
df: pandas dataframe, input dataset that will be split
patient_key: string, column that is the patient id
return:
- train: pandas dataframe,
- validation: pandas dataframe,
- test: pandas dataframe,
'''
    # The input dataframe has one row (first encounter) per patient, so a random
    # row-level split is also a patient-level split and cannot leak a patient across partitions.
    train, validation, test = np.split(df.sample(frac=1, random_state=1), [int(.6*len(df)), int(.8*len(df))])
return train, validation, test
#from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == processed_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
processed_df['patient_nbr'].nunique()
d_test['patient_nbr'].nunique()
d_train['patient_nbr'].nunique()
d_val['patient_nbr'].nunique()
d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()
###Output
_____no_output_____
###Markdown
Demographic Representation Analysis of Split After the split, we should check the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 1459
2.0 1857
3.0 2002
4.0 1427
5.0 1036
6.0 836
7.0 594
8.0 456
9.0 302
10.0 256
11.0 219
12.0 160
13.0 141
14.0 109
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
d_train.head()
###Output
_____no_output_____
###Markdown
Model
###Code
!pip install sklearn
from sklearn import linear_model
from sklearn.model_selection import train_test_split
d_train.head()
y = d_train.pop('time_in_hospital')
y
X = d_train.copy()
X.head()
X_num = X[student_numerical_col_list]
student_numerical_col_list
df.info()
d_train.info()
#Splitting the datasets into train, tests for model training
X_train, X_test, y_train, y_test = train_test_split(X_num, y, test_size=0.33, random_state=42)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
lm = linear_model.LinearRegression()
model = lm.fit(X_train,y_train)
preds = lm.predict(X_test)
preds[:5]
y_test[:5]
print("Score:", model.score(X_test, y_test))
###Output
Score: 0.2960284378154876
###Markdown
------------------------------------END----------------------------------------------------------- Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
for feature_batch, label_batch in diabetes_train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch )
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
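Conceptually, the helper just writes the unique training-set values of each categorical column to a text file, one value per line; an illustrative sketch is below (the exact file layout of the provided utils.build_vocab_files may differ, but the './<column>_vocab.txt' naming matches the paths visible in the feature-column output further on).
###Code
import os

def build_vocab_files_sketch(df, categorical_col_list, vocab_dir='./'):
    vocab_files_list = []
    for c in categorical_col_list:
        v_path = os.path.join(vocab_dir, str(c) + "_vocab.txt")
        # One unique value per line, taken from the TRAINING split only
        pd.Series(df[c].unique()).to_csv(v_path, index=False, header=False)
        vocab_files_list.append(v_path)
    return vocab_files_list
###Output
_____no_output_____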
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
vocab_file_list
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
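The create_tf_categorical_feature_cols implementation itself is imported from student_utils; the sketch below is consistent with the embedding column printed in the output further down (one vocabulary file per column, a single out-of-vocabulary bucket, embedding dimension 10), but treat it as illustrative rather than the exact code used.
###Code
import os
import tensorflow as tf

def create_tf_categorical_feature_cols_sketch(categorical_col_list, vocab_dir='./'):
    output_tf_list = []
    for c in categorical_col_list:
        vocab_file_path = os.path.join(vocab_dir, c + "_vocab.txt")
        cat_col = tf.feature_column.categorical_column_with_vocabulary_file(
            key=c, vocabulary_file=vocab_file_path, num_oov_buckets=1)
        # Wrap the high-cardinality categorical column in a dense embedding
        output_tf_list.append(tf.feature_column.embedding_column(cat_col, dimension=10))
    return output_tf_list
###Output
_____no_output_____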
###Code
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
tf_cat_col_list[0]
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
EmbeddingColumn(categorical_column=VocabularyFileCategoricalColumn(key='primary_diagnosis_code', vocabulary_file='./primary_diagnosis_code_vocab.txt', vocabulary_size=611, num_oov_buckets=1, dtype=tf.string, default_value=-1), dimension=10, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x197c032b90>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True)
tf.Tensor(
[[-0.04107003 0.46910128 -0.1286355 ... 0.10986607 0.37177095
-0.4413 ]
[-0.22344758 -0.24544352 -0.36145702 ... -0.06554496 -0.17308289
-0.37864628]
[ 0.01946732 0.34461108 0.17054473 ... -0.55793405 -0.06951376
-0.32023928]
...
[-0.11926756 0.43515435 -0.08595119 ... -0.32181257 0.07880753
-0.2593021 ]
[-0.14955378 -0.33592322 -0.23300701 ... 0.02312117 0.12493869
0.06630494]
[-0.22344758 -0.24544352 -0.36145702 ... -0.06554496 -0.17308289
-0.37864628]], shape=(128, 10), dtype=float32)
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
###Output
_____no_output_____
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this as a good resource in determining which transformation fits best for the data https://developers.google.com/machine-learning/data-prep/transform/normalization.
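The imported create_tf_numeric_feature is likewise not shown here; one possible implementation, assuming the z-score normalization is attached through normalizer_fn, is sketched below (the normalize_numeric_with_zscore helper is a hypothetical name, not code taken from student_utils).
###Code
import functools
import tensorflow as tf

def normalize_numeric_with_zscore(col, mean, std):
    # Hypothetical z-score helper applied to the raw tensor values
    return (col - mean) / std

def create_tf_numeric_feature_sketch(col, mean, std, default_value=0):
    normalizer = functools.partial(normalize_numeric_with_zscore, mean=mean, std=std)
    return tf.feature_column.numeric_column(
        key=col, default_value=default_value, normalizer_fn=normalizer, dtype=tf.float64)
###Output
_____no_output_____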
###Code
student_numerical_col_list
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, str(c))
tf_numeric_feature = create_tf_numeric_feature(str(c), mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
for c in student_numerical_col_list:
print(c)
d_train.head()
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='encounter_id', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=None)
tf.Tensor(
[[9.71870480e+07]
[7.37602000e+07]
[3.88390560e+08]
[2.74544096e+08]
[6.20624040e+07]
[6.45479040e+07]
[1.11461512e+08]
[1.93734660e+07]
[2.82687168e+08]
[2.28184620e+07]
[6.29942480e+07]
[2.71811392e+08]
[1.10734384e+08]
[3.07856416e+08]
[4.43119904e+08]
[2.66514640e+08]
[1.07173840e+08]
[1.30960048e+08]
[3.62383800e+07]
[1.00912232e+08]
[2.91254336e+08]
[4.62366120e+07]
[1.73230976e+08]
[1.67119920e+08]
[2.31158544e+08]
[1.59251424e+08]
[1.72427680e+08]
[6.46619120e+07]
[1.66424560e+08]
[1.11026592e+08]
[1.12118536e+08]
[3.73815744e+08]
[2.70149184e+08]
[1.45804608e+08]
[2.06199168e+08]
[4.53150160e+07]
[2.21781248e+08]
[1.29799072e+08]
[4.19629952e+08]
[3.95468928e+08]
[2.29775232e+08]
[4.65806280e+07]
[2.18337456e+08]
[1.05478880e+08]
[1.19104112e+08]
[2.69760192e+08]
[1.75875120e+08]
[2.11050320e+08]
[1.66885264e+08]
[1.07495072e+08]
[6.59567880e+07]
[2.19611056e+08]
[1.60415376e+08]
[1.55362832e+08]
[1.87619376e+08]
[1.26235960e+08]
[4.57391400e+06]
[1.00426140e+07]
[3.72580192e+08]
[1.06177488e+08]
[2.39464672e+08]
[2.94252192e+08]
[1.06844152e+08]
[1.49975936e+08]
[1.11765960e+08]
[3.64903424e+08]
[2.65061712e+08]
[4.38302560e+07]
[3.53698624e+08]
[2.84216704e+08]
[1.61579856e+08]
[1.95660448e+08]
[9.33182000e+07]
[1.65280160e+08]
[1.48979936e+08]
[2.04223344e+08]
[1.71581632e+08]
[2.96047104e+08]
[3.20265024e+08]
[8.40175120e+07]
[2.29132912e+08]
[3.27179940e+07]
[3.30322200e+07]
[1.02215728e+08]
[1.95686688e+08]
[1.03299760e+08]
[2.25037120e+08]
[2.57973888e+08]
[8.27915360e+07]
[2.57718128e+08]
[1.54424384e+08]
[2.65226080e+08]
[3.33220768e+08]
[5.20069080e+07]
[1.45258720e+08]
[1.93635856e+08]
[6.66494880e+07]
[9.22080000e+05]
[5.77896120e+07]
[1.57523072e+08]
[9.57312560e+07]
[9.32112160e+07]
[9.93063200e+07]
[1.70168304e+08]
[1.06429072e+08]
[2.53793248e+08]
[8.30495840e+07]
[1.94227568e+08]
[1.02977940e+07]
[4.42569984e+08]
[9.05858400e+07]
[1.50104688e+08]
[2.69447160e+07]
[1.63599072e+08]
[1.10369368e+08]
[2.09251424e+08]
[2.74338496e+08]
[1.19751416e+08]
[1.67606000e+08]
[3.41412800e+08]
[3.13958460e+07]
[1.17621472e+08]
[2.38325056e+08]
[2.38248400e+08]
[2.62454016e+08]
[2.75895488e+08]
[1.42995424e+08]
[2.14773200e+08]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric,'accuracy'])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=10)
history.history
accuracy = diabetes_model.evaluate(diabetes_test_ds)
print("Accuracy", accuracy)
###Output
85/85 [==============================] - 2s 24ms/step - loss: 17.9685 - mse: 17.0264 - accuracy: 0.1192
Accuracy [17.96854642980239, 17.026354, 0.11921872]
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
len(d_test)
preds
diabetes_yhat[:]
#from student_utils import get_mean_std_from_preds
def get_mean_std_from_preds(diabetes_yhat):
'''
diabetes_yhat: TF Probability prediction object
'''
m = diabetes_yhat.mean()
s = diabetes_yhat.stddev()
return m, s
m, s = get_mean_std_from_preds(diabetes_yhat)
m
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets the criteria.
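get_student_binary_prediction also comes from student_utils; since the drug needs at least 5 days of in-hospital administration, a straightforward version simply thresholds the mean predicted stay at 5 days, mirroring the label_value rule used a few cells below.
###Code
def get_student_binary_prediction_sketch(df, col, threshold=5):
    # 1 = predicted stay of at least `threshold` days (include in the trial), 0 = exclude
    return (df[col] >= threshold).astype(int).values
###Output
_____no_output_____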
###Code
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output that is a numpy array with binary labels, we can use this to add to a dataframe to better visualize and also to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score(weighted), class precision and recall scores. For the report please be sure to include the following three parts:- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.- What are some areas of improvement for future iterations?
###Code
# AUC, F1, precision and recall
# Summary
###Output
_____no_output_____
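###Markdown
A sketch of the requested metrics using scikit-learn on the binary score and label_value columns built earlier; note that passing the raw mean predictions rather than the hard 0/1 scores would give a more informative ROC AUC.
###Code
from sklearn.metrics import roc_auc_score, f1_score, classification_report

y_true = pred_test_df['label_value'].values
y_pred = pred_test_df['score'].values

print("ROC AUC:", roc_auc_score(y_true, y_pred))
print("F1 (weighted):", f1_score(y_true, y_pred, average='weighted'))
print(classification_report(y_true, y_pred))
###Output
_____no_output_____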
###Markdown
7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
_____no_output_____
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
_____no_output_____
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
# Is there significant bias in your model for either race or gender?
###Output
_____no_output_____
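###Markdown
For patient selection, false negative rate (eligible patients the model misses) and true positive rate are two reasonable group metrics to plot; the method and metric names below follow the Aequitas plotting API and should be checked against the installed version.
###Code
# `aqp` (aequitas.plotting.Plot) and `clean_xtab` come from the cells above.
fnr_plot = aqp.plot_group_metric(clean_xtab, 'fnr')
tpr_plot = aqp.plot_group_metric(clean_xtab, 'tpr')
###Output
_____no_output_____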
###Markdown
Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
# Reference group fairness plot
###Output
_____no_output_____ |
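###Markdown
A possible fairness visualization relative to the Caucasian/Male reference group defined above, assuming the plot_fairness_group_all helper from the Aequitas plotting module is available in the installed version.
###Code
fg = aqp.plot_fairness_group_all(fdf, ncols=5, metrics="all")
###Output
_____no_output_____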
app/notebooks/labeled_identities/shooters/tashfeen_malik.ipynb | ###Markdown
Table of Contents1 Name2 Search2.1 Load Cached Results2.2 Build Model From Google Images3 Analysis3.1 Gender cross validation3.2 Face Sizes3.3 Screen Time Across All Shows3.4 Appearances on a Single Show3.5 Other People Who Are On Screen4 Persist to Cloud4.1 Save Model to Google Cloud Storage4.2 Save Labels to DB4.2.1 Commit the person and labeler4.2.2 Commit the FaceIdentity labels
###Code
from esper.prelude import *
from esper.identity import *
from esper.topics import *
from esper.plot_util import *
from esper import embed_google_images
###Output
_____no_output_____
###Markdown
Name Please add the person's name and their expected gender below (Male/Female).
###Code
name = 'Tashfeen Malik'
gender = 'Female'
###Output
_____no_output_____
###Markdown
Search Load Cached Results Reads cached identity model from local disk. Run this if the person has been labelled before and you only wish to regenerate the graphs. Otherwise, if you have never created a model for this person, please see the next section.
###Code
assert name != ''
results = FaceIdentityModel.load(name=name)
imshow(tile_images([cv2.resize(x[1][0], (200, 200)) for x in results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(results)
###Output
_____no_output_____
###Markdown
Build Model From Google Images Run this section if you do not have a cached model and precision curve estimates. This section will grab images using Google Image Search and score each of the faces in the dataset. We will interactively build the precision vs score curve.It is important that the images that you select are accurate. If you make a mistake, rerun the cell below.
###Code
assert name != ''
# Grab face images from Google
img_dir = embed_google_images.fetch_images(name)
# If the images returned are not satisfactory, rerun the above with extra params:
# query_extras='' # additional keywords to add to search
# force=True # ignore cached images
face_imgs = load_and_select_faces_from_images(img_dir)
face_embs = embed_google_images.embed_images(face_imgs)
assert(len(face_embs) == len(face_imgs))
reference_imgs = tile_imgs([cv2.resize(x[0], (200, 200)) for x in face_imgs if x], cols=10)
def show_reference_imgs():
print('User selected reference images for {}.'.format(name))
imshow(reference_imgs)
plt.show()
show_reference_imgs()
# Score all of the faces in the dataset (this can take a minute)
face_ids_by_bucket, face_ids_to_score = face_search_by_embeddings(face_embs)
precision_model = PrecisionModel(face_ids_by_bucket)
###Output
_____no_output_____
###Markdown
Now we will validate which of the images in the dataset are of the target identity.__Hover over with mouse and press S to select a face. Press F to expand the frame.__
###Code
show_reference_imgs()
print(('Mark all images that ARE NOT {}. Thumbnails are ordered by DESCENDING distance '
'to your selected images. (The first page is more likely to have non "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
lower_widget = precision_model.get_lower_widget()
lower_widget
show_reference_imgs()
print(('Mark all images that ARE {}. Thumbnails are ordered by ASCENDING distance '
'to your selected images. (The first page is more likely to have "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
upper_widget = precision_model.get_upper_widget()
upper_widget
###Output
_____no_output_____
###Markdown
Run the following cell after labelling to compute the precision curve. Do not forget to re-enable jupyter shortcuts.
###Code
# Compute the precision from the selections
lower_precision = precision_model.compute_precision_for_lower_buckets(lower_widget.selected)
upper_precision = precision_model.compute_precision_for_upper_buckets(upper_widget.selected)
precision_by_bucket = {**lower_precision, **upper_precision}
results = FaceIdentityModel(
name=name,
face_ids_by_bucket=face_ids_by_bucket,
face_ids_to_score=face_ids_to_score,
precision_by_bucket=precision_by_bucket,
model_params={
'images': list(zip(face_embs, face_imgs))
}
)
plot_precision_and_cdf(results)
###Output
_____no_output_____
###Markdown
The next cell persists the model locally.
###Code
results.save()
###Output
_____no_output_____
###Markdown
Analysis Gender cross validationSituations where the identity model disagrees with the gender classifier may be cause for alarm. We would like to check that instances of the person have the expected gender as a sanity check. This section shows the breakdown of the identity instances and their labels from the gender classifier.
###Code
gender_breakdown = compute_gender_breakdown(results)
print('Expected counts by gender:')
for k, v in gender_breakdown.items():
print(' {} : {}'.format(k, int(v)))
print()
print('Percentage by gender:')
denominator = sum(v for v in gender_breakdown.values())
for k, v in gender_breakdown.items():
print(' {} : {:0.1f}%'.format(k, 100 * v / denominator))
print()
###Output
_____no_output_____
###Markdown
Situations where the identity detector returns high confidence, but where the gender is not the expected gender indicate either an error on the part of the identity detector or the gender detector. The following visualization shows randomly sampled images, where the identity detector returns high confidence, grouped by the gender label.
###Code
high_probability_threshold = 0.8
show_gender_examples(results, high_probability_threshold)
###Output
_____no_output_____
###Markdown
Face SizesFaces shown on-screen vary in size. For a person such as a host, they may be shown in a full-body shot or as a face in a box. Faces in the background or those that are part of side graphics might be smaller than the rest. When calculating screen time for a person, we would like to know whether the results represent the time the person was featured, as opposed to merely being in the background or appearing as a tiny thumbnail in some graphic. The next cell plots the distribution of face sizes. Some possible anomalies include there being only very small faces or only large faces.
###Code
plot_histogram_of_face_sizes(results)
###Output
_____no_output_____
###Markdown
The histogram above shows the distribution of face sizes, but not how those sizes occur in the dataset. For instance, one might ask why some faces are so large or whether the small faces are actually errors. The following cell groups example faces that are of the target identity with high probability by their sizes in terms of screen area.
###Code
high_probability_threshold = 0.8
show_faces_by_size(results, high_probability_threshold, n=10)
###Output
_____no_output_____
###Markdown
Screen Time Across All ShowsOne question that we might ask about a person is whether they received a significantly different amount of screentime on different shows. The following section visualizes the amount of screentime by show in total minutes and also in proportion of the show's total time. For a celebrity or political figure such as Donald Trump, we would expect significant screentime on many shows. For a show host such as Wolf Blitzer, we expect that the screentime be high for shows hosted by Wolf Blitzer.
###Code
screen_time_by_show = get_screen_time_by_show(results)
plot_screen_time_by_show(name, screen_time_by_show)
###Output
_____no_output_____
###Markdown
We might also wish to validate these findings by checking whether the person's name is mentioned in the subtitles. This might be helpful in determining whether extra or missing screen time for a person is due to a show's aesthetic choices. The following plots compare the screen time with the number of caption mentions.
###Code
caption_mentions_by_show = get_caption_mentions_by_show([name.upper()])
plot_screen_time_and_other_by_show(name, screen_time_by_show, caption_mentions_by_show,
'Number of caption mentions', 'Count')
###Output
_____no_output_____
###Markdown
Appearances on a Single ShowFor people such as hosts, we would like to examine in greater detail the screen time allotted for a single show. First, fill in a show below.
###Code
show_name = 'FOX and Friends'
# Compute the screen time for each video of the show
screen_time_by_video_id = compute_screen_time_by_video(results, show_name)
###Output
_____no_output_____
###Markdown
One question we might ask about a host is how long they are shown on screen in an episode. Likewise, we might also ask in how many episodes the host is not present, due to being on vacation or on assignment elsewhere. The following cell plots a histogram of the distribution of the length of the person's appearances in videos of the chosen show.
###Code
plot_histogram_of_screen_times_by_video(name, show_name, screen_time_by_video_id)
###Output
_____no_output_____
###Markdown
For a host, we expect screen time to be consistent over time as long as the person remains a host. For figures such as Hillary Clinton, we expect the screen time to track events in the real world, such as the lead-up to the 2016 election, and then to drop afterwards. The following cell plots a time series of the person's screen time over time. Each dot is a video of the chosen show. Red Xs are videos for which the face detector did not run.
###Code
plot_screentime_over_time(name, show_name, screen_time_by_video_id)
###Output
_____no_output_____
###Markdown
We hypothesized that a host is more likely to appear at the beginning of a video and then also appear throughout the video. The following plot visualizes the distribution of shot beginning times for videos of the show.
###Code
plot_distribution_of_appearance_times_by_video(results, show_name)
###Output
_____no_output_____
###Markdown
In section 3.3, we saw that some shows may have much larger variance in the screen time estimates than others. This may be because a host or frequent guest appears similar to the target identity. Alternatively, the images of the identity may be consistently low quality, leading to lower scores. The next cell plots a histogram of the identity probabilities for faces in the chosen show.
###Code
plot_distribution_of_identity_probabilities(results, show_name)
###Output
_____no_output_____
###Markdown
Other People Who Are On ScreenFor some people, we are interested in who they are often portrayed on screen with. For instance, the White House press secretary might routinely be shown with the same group of political pundits. The host of a show might be expected to be on screen with their co-host most of the time. The next cell takes an identity model with high-probability faces and displays clusters of faces that are on screen with the target person.
###Code
get_other_people_who_are_on_screen(results, k=25, precision_thresh=0.8)
###Output
_____no_output_____
###Markdown
Persist to CloudThe remaining code in this notebook uploads the built identity model to Google Cloud Storage and adds the FaceIdentity labels to the database. Save Model to Google Cloud Storage
###Code
gcs_model_path = results.save_to_gcs()
###Output
_____no_output_____
###Markdown
To ensure that the model stored to Google Cloud is valid, we load it and print the precision and cdf curve below.
###Code
gcs_results = FaceIdentityModel.load_from_gcs(name=name)
imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in gcs_results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(gcs_results)
###Output
_____no_output_____
###Markdown
Save Labels to DBIf you are satisfied with the model, we can commit the labels to the database.
###Code
from django.core.exceptions import ObjectDoesNotExist
def standardize_name(name):
return name.lower()
person_type = ThingType.objects.get(name='person')
try:
person = Thing.objects.get(name=standardize_name(name), type=person_type)
print('Found person:', person.name)
except ObjectDoesNotExist:
person = Thing(name=standardize_name(name), type=person_type)
print('Creating person:', person.name)
labeler = Labeler(name='face-identity:{}'.format(person.name), data_path=gcs_model_path)
###Output
_____no_output_____
###Markdown
Commit the person and labelerThe labeler and person have been created but not yet saved to the database. If a person was created, please make sure that the name is correct before saving.
###Code
person.save()
labeler.save()
###Output
_____no_output_____
###Markdown
Commit the FaceIdentity labelsNow, we are ready to add the labels to the database. We will create a FaceIdentity for each face whose probability exceeds the minimum threshold.
###Code
commit_face_identities_to_db(results, person, labeler, min_threshold=0.001)
print('Committed {} labels to the db'.format(FaceIdentity.objects.filter(labeler=labeler).count()))
###Output
_____no_output_____ |
Copy_of_Team_5_.ipynb | ###Markdown
**MISSING MIGRANTS PROJECT**

**Business problem**

Every year, hundreds of thousands of people leave their homes in search of a better life. In the process, many are injured or killed, so IOM created the Missing Migrants Project to track deaths of migrants and those who have gone missing along migratory routes across the globe. This enables them to identify ways of curbing deaths among migrants and to understand the background of those who are most at risk of losing their lives during migration.

**Defining the Metric for Success**

This analysis requires us to come up with a solution that provides a better understanding of the leading cause of death of migrants. We therefore need to identify the metrics that are significant in determining this and offer insights. We will implement the solution by performing the analysis.

**Understanding the context**

The International Organization for Migration (IOM)'s Missing Migrants Project records incidents in which migrants, including refugees and asylum-seekers, have died at state borders or in the process of migrating to an international destination. It was developed in response to disparate reports of people dying or disappearing along migratory routes around the world. The data is used to inform Sustainable Development Goals Indicator 10.7.3 on the "[n]umber of people who died or disappeared in the process of migration towards an international destination." More than 40,000 people have lost their lives during unsafe migration journeys since 2014. The data collected by the Missing Migrants Project bear witness to one of the great political failures of modern times. IOM calls for immediate safe, humane and legal routes for migration. Better data can help inform policies to end migrant deaths and address the needs of families left behind.

**Business Understanding**

Business Objective: to find the leading cause of migrants' deaths and the factors that may influence it.

Research Questions:
1. What was the leading cause of death?
2. Which migrant's region of origin had the highest deaths?
3. Which affected nationality had the highest deaths?
4. What was the highest no. of missing people per region?
5. Which incident region were most deaths likely to occur in?

Importing Libraries
###Code
# Importing the pandas library
#
import pandas as pd
# Importing the numpy library
#
import numpy as np
###Output
_____no_output_____
###Markdown
Loading and reading our dataset
###Code
#reading and loading our dataset
mm= pd.read_csv('/content/MissingMigrantsProject.csv', encoding= 'unicode_escape')
mm.head()
mm.tail()
###Output
_____no_output_____
###Markdown
**Data Understanding**
###Code
#getting info
mm.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2420 entries, 0 to 2419
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 2420 non-null int64
1 cause_of_death 2217 non-null object
2 region_origin 1977 non-null object
3 affected_nationality 845 non-null object
4 missing 271 non-null float64
5 dead 2318 non-null float64
6 incident_region 2410 non-null object
7 date 2411 non-null object
8 source 2413 non-null object
9 reliability 2096 non-null object
10 lat 2416 non-null float64
11 lon 2416 non-null float64
dtypes: float64(4), int64(1), object(7)
memory usage: 227.0+ KB
###Markdown
The columns use three different data types.
###Code
#describing our dataset
mm.describe()
###Output
_____no_output_____
###Markdown
These are the basic statistical values. Several columns have missing values.
###Code
#getting the shape of our dataset
mm.shape
###Output
_____no_output_____
###Markdown
The dataframe has 2420 rows and 12 columns.
###Code
#looking for duplicates
mm.duplicated(keep=False).sum()
###Output
_____no_output_____
###Markdown
The dataframe has no duplicates. **Data Cleaning**This is done by following the data integrity rules, i.e. Validity, Accuracy, Completeness, Consistency and Uniformity, to ensure the data is ready for analysis. Validity
###Code
mm.columns
#Procedure 1: Irrelevant Data
#Data Cleaning Action:Dropping
#Explanation:dropped the columns since they had data which was not necessary for the analysis
mm.drop(['source','lat', 'lon',],axis=1,inplace=True)
mm
mm.shape
###Output
_____no_output_____
###Markdown
We have 9 columns after dropping the 3 irrelevant ones. We dropped the columns since they were not required in our analysis.
###Code
#importing the library
import matplotlib.pyplot as plt
#visualising outliers using boxplot
#Procedure 2: Outliers
#Data Cleaning Action:Checking for outliers on the 'missing'
#Explanation: We will check for outliers on the required columns separately
mm.boxplot(column =['missing'], grid = False)
plt.title('Missing_migrants')
plt.show()
###Output
_____no_output_____
###Markdown
There are a few outliers, but we will keep them because they carry critical information that can't be ignored.
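As a rough cross-check (a sketch only; the 1.5*IQR cutoff is a convention, not a rule, and the column still contains NaNs at this point), we can count how many values the usual IQR rule would flag:

```python
# Sketch: count values in 'missing' flagged by the 1.5*IQR rule (NaNs are skipped)
q1, q3 = mm['missing'].quantile([0.25, 0.75])
iqr = q3 - q1
print('Flagged as outliers:', int((mm['missing'] > q3 + 1.5 * iqr).sum()))
```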
###Code
#Procedure 2: Outliers
#Data Cleaning Action:Checking for outliers on the 'dead' column
mm.boxplot(column =['dead'], grid = False)
plt.title('Dead_migrants')
plt.show()
###Output
_____no_output_____
###Markdown
There are outliers, but we will keep them because they carry critical information that can't be ignored. Accuracy
###Code
#Procedure 1: None
#Data Cleaning Action: None
#Explanation:None
###Output
_____no_output_____
###Markdown
*COMPLETENESS*
###Code
#Procedure 1: Missing values
#Data Cleaning Action: Counting
#Explanation:counting missing values
mm.isnull().sum()
#Procedure 2: Missing values
#Data Cleaning Action: Checking percentage of the missing values
mm.isna().mean().round(4) * 100
###Output
_____no_output_____
###Markdown
Each column has missing values apart from the id column.
###Code
#Procedure 3: Missing values(missing and dead columns)
#Data Cleaning Action: Replacing
#Explanation:We replaced the missing values with 0
missing_value=0
mm['missing'].fillna(missing_value,inplace=True)
dead_value=0
mm['dead'].fillna(dead_value,inplace=True)
mm
###Output
_____no_output_____
###Markdown
We replaced the missing values in both the missing and dead columns with 0, rather than dropping them or replacing them with the mean/median, because of how crucial the data is for the research. We cannot assume the number of fatalities that took place, because doing so would compromise the integrity of the results.
###Code
#Procedure 4: Missing values(missing and dead columns)
#Data Cleaning Action: Counter- Checking
#Explanation:We check if the missing values from both columns have
# been replaced
mm.isnull().sum()
###Output
_____no_output_____
###Markdown
The missing values in the respective columns have been replaced.
###Code
#Procedure 5: Missing values(all columns apart from the date column)
#Data Cleaning Action: Replacing the missing values(object type)
# with unknown
#Explanation:We replaced the missing values with unknown
nulls='unknown'
mm['cause_of_death'].fillna(nulls,inplace=True)
mm['region_origin'].fillna(nulls,inplace=True)
mm['affected_nationality'].fillna(nulls,inplace=True)
mm['incident_region'].fillna(nulls,inplace=True)
mm['reliability'].fillna(nulls,inplace=True)
mm.head(30)
###Output
_____no_output_____
###Markdown
We cannot predict or guess what caused the death of a victim, or what their nationality/region of origin might be, without a proper investigation. Even if a row is missing a value in one column, it may still contain critical information for the research in another.
###Code
#Procedure 5: Missing values
#Data Cleaning Action: Counter- Checking
#Explanation:We check if the missing values have been replaced
mm.isnull().sum()
###Output
_____no_output_____
###Markdown
The missing values have been replaced. *CONSISTENCY*
###Code
#Procedure 1: Duplicates
#Data Cleaning Action:Checking
#Explanation:
mm.duplicated().sum()
###Output
_____no_output_____
###Markdown
No duplicates *UNIFORMITY*
###Code
#Procedure 1: Checking the length of unique values in the date column
#Data Cleaning Action:None
#Explanation: We used the len function
len(mm['date'].unique())
#Procedure 2: converting the date column to date time format
#Data Cleaning Action: Change from object type to date time
#Explanation: Change from object type to date time
mm['date'] = pd.to_datetime(mm['date'])
mm['date'].head()
###Output
_____no_output_____
###Markdown
The date column data type was changed to datetime (YYYY-MM-DD).
###Code
#Procedure 3: Finding the first and last date entries
#Data Cleaning Action: None
#Explanation:Use min and max functions
print (mm['date'].min())
print (mm['date'].max())
###Output
2014-01-05 00:00:00
2017-12-04 00:00:00
###Markdown
The data contains a series of incidents that took place between January 2014 and December 2017 (roughly four years).
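As a quick sanity check (a sketch run on the datetime column before the nulls below are replaced), the number of recorded incidents per year can be tabulated directly:

```python
# Sketch: incidents recorded per year; NaT entries are dropped by value_counts
mm['date'].dt.year.value_counts().sort_index()
```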
###Code
#Procedure 4: Missing values(date column)
#Data Cleaning Action:Replacing the missing values with 0
#Explanation: None
null=0
mm['date'].fillna(null,inplace=True)
###Output
_____no_output_____
###Markdown
Replaced the null values in the date column with 0.
###Code
#Procedure 5: Checking for missing values
#Data Cleaning Action: Counting
#Explanation: Using isna() and sum() function
mm.isna().sum()
###Output
_____no_output_____
###Markdown
There are no missing values. **Data Analysis**
###Code
mm.columns
# Importing the seaborn library as sns
import seaborn as sns
###Output
_____no_output_____
###Markdown
**1. What was the leading cause of death?**
###Code
mm['cause_of_death'].value_counts().head(15).plot(kind = "bar", title = "Reason for Death");
###Output
_____no_output_____
###Markdown
From the graph, we can clearly see that drowning is the leading cause of death. **2. Which migrantโs region of origin had the highest deaths?**
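To back the incident counts above with fatality totals, a small follow-up sketch (using the same columns) sums the recorded deaths per cause:

```python
# Sketch: total recorded deaths per cause, not just the number of incidents
mm.groupby('cause_of_death')['dead'].sum().sort_values(ascending=False).head(10)
```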
###Code
mm.groupby('region_origin')['dead'].sum().to_frame().sort_values(by='dead', ascending=False).head(2)
###Output
_____no_output_____
###Markdown
Most recorded deaths fall under an unknown region of origin, followed by the Sub-Saharan Africa region.
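Since 'unknown' is a placeholder we introduced during cleaning, a hedged follow-up sketch excludes it to surface the top known regions of origin:

```python
# Sketch: deaths by region of origin, excluding the 'unknown' placeholder
mm[mm['region_origin'] != 'unknown'].groupby('region_origin')['dead'].sum().sort_values(ascending=False).head(5)
```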
###Code
mm.groupby('affected_nationality')['dead'].sum().to_frame().sort_values(by='dead',ascending= False).head(2)
###Output
_____no_output_____
###Markdown
The most affected nationality was unknown. **4. What was the highest no. of missing people per region of origin?**
###Code
mm.groupby('region_origin')['missing'].sum().to_frame().sort_values(by='missing', ascending=False).head(2)
###Output
_____no_output_____
###Markdown
Sub-Saharan Africa had the second highest number of missing people. **5. Which incident region had the highest deaths**
###Code
mm.groupby('incident_region')['dead'].sum().to_frame().sort_values(by='dead',ascending= False).head(2)
###Output
_____no_output_____ |
Image_loading_and_processing.ipynb | ###Markdown
1. Import Python libraries

*Image: a honey bee.*

The question at hand is: can a machine identify a bee as a honey bee or a bumble bee? These bees have different behaviors and appearances, but given the variety of backgrounds, positions, and image resolutions it can be a challenge for machines to tell them apart. Being able to identify bee species from images is a task that ultimately would allow researchers to more quickly and effectively collect field data. Pollinating bees have critical roles in both ecology and agriculture, and diseases like colony collapse disorder threaten these species. Identifying different species of bees in the wild means that we can better understand the prevalence and growth of these important insects.

*Image: a bumble bee.*

This notebook walks through loading and processing images. After loading and processing these images, they will be ready for building models that can automatically detect honeybees and bumblebees.
###Code
# Used to change filepaths
from pathlib import Path
# We set up matplotlib, pandas, and the display function
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import display
import pandas as pd
# import numpy to use in this cell
import numpy as np
# import Image from PIL so we can use it later
from PIL import Image
# generate test_data
test_data = np.random.beta(1, 1, size=(100, 100, 3))
# display the test_data
plt.imshow(test_data)
###Output
_____no_output_____
###Markdown
2. Opening images with PILNow that we have all of our imports ready, it is time to work with some real images.Pillow is a very flexible image loading and manipulation library. It works with many different image formats, for example, .png, .jpg, .gif and more. For most image data, one can work with images using the Pillow library (which is imported as PIL).Now we want to load an image, display it in the notebook, and print out the dimensions of the image. By dimensions, we mean the width of the image and the height of the image. These are measured in pixels. The documentation for Image in Pillow gives a comprehensive view of what this object can do.
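Because Pillow infers the output format from the file extension, converting between formats is a one-line operation. A small sketch (the output path is illustrative and assumes the `saved_images/` folder used later in this notebook already exists):

```python
# Sketch: re-save the JPEG as a PNG; Pillow picks the output format from the extension
from PIL import Image
Image.open('datasets/bee_1.jpg').save('saved_images/bee_1_copy.png')
```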
###Code
# open the image
img = Image.open('datasets/bee_1.jpg')
# Get the image size
img_size = img.size
print("The image size is: {}".format(img_size))
# Just having the image as the last line in the cell will display it in the notebook
img
###Output
The image size is: (100, 100)
###Markdown
3. Image manipulation with PILPillow has a number of common image manipulation tasks built into the library. For example, one may want to resize an image so that the file size is smaller. Or, perhaps, convert an image to black-and-white instead of color. Operations that Pillow provides include:

- resizing
- cropping
- rotating
- flipping
- converting to greyscale (or other color modes)

Often, these kinds of manipulations are part of the pipeline for turning a small number of images into more images to create training data for machine learning algorithms. This technique is called data augmentation, and it is a common technique for image classification. We'll try a couple of these operations and look at the results.
###Code
# Crop the image to 25, 25, 75, 75
img_cropped = img.crop([25, 25, 75, 75])
display(img_cropped)
# rotate the image by 45 degrees
img_rotated = img.rotate(45, expand=25)
display(img_rotated)
# flip the image left to right
img_flipped = img.transpose(Image.FLIP_LEFT_RIGHT)
display(img_flipped)
###Output
_____no_output_____
###Markdown
4. Images as arrays of dataWhat is an image? So far, PIL has handled loading images and displaying them. However, if we're going to use images as data, we need to understand what that data looks like.Most image formats have three color "channels": red, green, and blue (some images also have a fourth channel called "alpha" that controls transparency). For each pixel in an image, there is a value for every channel.The way this is represented as data is as a three-dimensional matrix. The width of the matrix is the width of the image, the height of the matrix is the height of the image, and the depth of the matrix is the number of channels. So, as we saw, the height and width of our image are both 100 pixels. This means that the underlying data is a matrix with the dimensions 100x100x3.
###Code
# Turn our image object into a NumPy array
img_data = np.array(img)
# get the shape of the resulting array
img_data_shape = img_data.shape
print("Our NumPy array has the shape: {}".format(img_data_shape))
# plot the data with `imshow`
plt.imshow(img_data)
plt.show()
# plot the red channel
plt.imshow(img_data[:,:,0], cmap = plt.cm.Reds_r)
plt.show()
# plot the green channel
plt.imshow(img_data[:,:,1], cmap = plt.cm.Greens_r)
plt.show()
# plot the blue channel
plt.imshow(img_data[:,:,2], cmap=plt.cm.Blues_r)
plt.show()
###Output
Our NumPy array has the shape: (100, 100, 3)
###Markdown
5. Explore the color channelsColor channels can help provide more information about an image. A picture of the ocean will be more blue, whereas a picture of a field will be more green. This kind of information can be useful when building models or examining the differences between images.We'll look at the kernel density estimate for each of the color channels on the same plot so that we can understand how they differ.When we make this plot, we'll see that a shape that appears further to the right means more of that color, whereas further to the left means less of that color.
###Code
def plot_kde(channel, color):
""" Plots a kernel density estimate for the given data.
`channel` must be a 2d array
`color` must be a color string, e.g. 'r', 'g', or 'b'
"""
data = channel.flatten()
return pd.Series(data).plot.density(c=color)
# create the list of channels
channels = ['r','g','b']
def plot_rgb(image_data):
# use enumerate to loop over colors and indexes
for ix, color in enumerate(channels):
        plot_kde(image_data[:, :, ix], color)
plt.show()
plot_rgb(img_data)
###Output
_____no_output_____
###Markdown
6. Honey bees and bumble bees (i)Now we'll look at two different images and some of the differences between them. The first image is of a honey bee, and the second image is of a bumble bee.First, let's look at the honey bee.
###Code
# load bee_12.jpg as honey
honey = Image.open('datasets/bee_12.jpg')
# display the honey bee image
display(honey)
# NumPy array of the honey bee image data
honey_data = np.array(honey)
# plot the rgb densities for the honey bee image
plot_rgb(honey_data)
###Output
_____no_output_____
###Markdown
7. Honey bees and bumble bees (ii)Now let's look at the bumble bee.When one compares these images, it is clear how different the colors are. The honey bee image above, with a blue flower, has a strong peak on the right-hand side of the blue channel. The bumble bee image, which has a lot of yellow for the bee and the background, has almost perfect overlap between the red and green channels (which together make yellow).
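As a quick numeric check of this claim (a sketch; `honey_data` was created above and `bumble_data` is created in the next cell), the mean of each channel can be compared directly:

```python
# Sketch: mean value of each RGB channel for the two bee images
for label, data in [('honey', honey_data), ('bumble', bumble_data)]:
    print(label, [round(float(data[:, :, ix].mean()), 1) for ix in range(3)])
```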
###Code
# load bee_3.jpg as bumble
bumble = Image.open('datasets/bee_3.jpg')
# display the bumble bee image
display(bumble)
# NumPy array of the bumble bee image data
bumble_data = np.array(bumble)
# plot the rgb densities for the bumble bee image
plot_rgb(bumble_data)
###Output
_____no_output_____
###Markdown
8. Simplify, simplify, simplifyWhile sometimes color information is useful, other times it can be distracting. In this examples where we are looking at bees, the bees themselves are very similar colors. On the other hand, the bees are often on top of different color flowers. We know that the colors of the flowers may be distracting from separating honey bees from bumble bees, so let's convert these images to black-and-white, or "grayscale."Grayscale is just one of the modes that Pillow supports. Switching between modes is done with the .convert() method, which is passed a string for the new mode.Because we change the number of color "channels," the shape of our array changes with this change. It also will be interesting to look at how the KDE of the grayscale version compares to the RGB version above.
###Code
# convert honey to grayscale
honey_bw = honey.convert("L")
display(honey_bw)
# convert the image to a NumPy array
honey_bw_arr = np.array(honey_bw)
# get the shape of the resulting array
honey_bw_arr_shape = honey_bw_arr.shape
print("Our NumPy array has the shape: {}".format(honey_bw_arr_shape))
# plot the array using matplotlib
plt.imshow(honey_bw_arr, cmap=plt.cm.gray)
plt.show()
# plot the kde of the new black and white array
plot_kde(honey_bw_arr, 'k')
###Output
_____no_output_____
###Markdown
9. Save your work!We've been talking this whole time about making changes to images and the manipulations that might be useful as part of a machine learning pipeline. To use these images in the future, we'll have to save our work after we've made changes.Now, we'll make a couple changes to the Image object from Pillow and save that. We'll flip the image left-to-right, just as we did with the color version. Then, we'll change the NumPy version of the data by clipping it. Using the np.maximum function, we can take any number in the array smaller than 100 and replace it with 100. Because this reduces the range of values, it will increase the contrast of the image. We'll then convert that back to an Image and save the result.
###Code
# flip the image left-right with transpose
honey_bw_flip = honey_bw.transpose(Image.FLIP_LEFT_RIGHT)
# show the flipped image
display(honey_bw_flip)
# save the flipped image
honey_bw_flip.save("saved_images/bw_flipped.jpg")
# create higher contrast by reducing range
honey_hc_arr = np.maximum(honey_bw_arr, 100)
# show the higher contrast version
plt.imshow(honey_hc_arr, cmap=plt.cm.gray)
# convert the NumPy array of high contrast to an Image
honey_bw_hc = Image.fromarray(honey_hc_arr)
# save the high contrast version
honey_bw_hc.save("saved_images/bw_hc.jpg")
###Output
_____no_output_____
###Markdown
10. Make a pipelineNow it's time to create an image processing pipeline. We have all the tools in our toolbox to load images, transform them, and save the results. In this pipeline we will do the following:

- Load the image with Image.open and create paths to save our images to
- Convert the image to grayscale
- Save the grayscale image
- Rotate, crop, and zoom in on the image and save the new image
###Code
image_paths = ['datasets/bee_1.jpg', 'datasets/bee_12.jpg', 'datasets/bee_2.jpg', 'datasets/bee_3.jpg']
def process_image(path):
img = Image.open(path)
# create paths to save files to
bw_path = "saved_images/bw_{}.jpg".format(path.stem)
rcz_path = "saved_images/rcz_{}.jpg".format(path.stem)
print("Creating grayscale version of {} and saving to {}.".format(path, bw_path))
bw = img.convert("L")
bw.save(bw_path)
print("Creating rotated, cropped, and zoomed version of {} and saving to {}.".format(path, rcz_path))
rcz = bw.rotate(45).crop([25, 25, 75, 75]).resize((100,100))
rcz.save(rcz_path)
# for loop over image paths
for img_path in image_paths:
process_image(Path(img_path))
###Output
Creating grayscale version of datasets/bee_1.jpg and saving to saved_images/bw_bee_1.jpg.
Creating rotated, cropped, and zoomed version of datasets/bee_1.jpg and saving to saved_images/rcz_bee_1.jpg.
Creating grayscale version of datasets/bee_12.jpg and saving to saved_images/bw_bee_12.jpg.
Creating rotated, cropped, and zoomed version of datasets/bee_12.jpg and saving to saved_images/rcz_bee_12.jpg.
Creating grayscale version of datasets/bee_2.jpg and saving to saved_images/bw_bee_2.jpg.
Creating rotated, cropped, and zoomed version of datasets/bee_2.jpg and saving to saved_images/rcz_bee_2.jpg.
Creating grayscale version of datasets/bee_3.jpg and saving to saved_images/bw_bee_3.jpg.
Creating rotated, cropped, and zoomed version of datasets/bee_3.jpg and saving to saved_images/rcz_bee_3.jpg.
|
experiments/hyperparameters_1/seeds/oracle.run2/trials/4/trial.ipynb | ###Markdown
PTN TemplateThis notebook serves as a template for single dataset PTN experiments. It can be run on its own by setting STANDALONE to True (do a find for "STANDALONE" to see where), but it is intended to be executed as part of a *papermill.py script. See any of the experiments with a papermill script to get started with that workflow.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Required ParametersThese are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
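For reference, a minimal sketch of how a template like this is typically driven by papermill (the file names here are hypothetical, and the real *papermill.py scripts supply the complete parameter set listed below):

```python
# Hypothetical papermill invocation; not part of the original workflow scripts
import papermill as pm

pm.execute_notebook(
    'trial.ipynb',           # this template (hypothetical path)
    'trial_output.ipynb',    # executed copy with the injected parameters
    parameters={'lr': 0.001, 'seed': 1337, 'device': 'cuda'},
)
```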
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"labels_source",
"labels_target",
"domains_source",
"domains_target",
"num_examples_per_domain_per_label_source",
"num_examples_per_domain_per_label_target",
"n_shot",
"n_way",
"n_query",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_transforms_source",
"x_transforms_target",
"episode_transforms_source",
"episode_transforms_target",
"pickle_name",
"x_net",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"torch_default_dtype"
}
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.0001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["num_examples_per_domain_per_label_source"]=100
standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 100
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "target_accuracy"
standalone_parameters["x_transforms_source"] = ["unit_power"]
standalone_parameters["x_transforms_target"] = ["unit_power"]
standalone_parameters["episode_transforms_source"] = []
standalone_parameters["episode_transforms_target"] = []
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# uncomment for CORES dataset
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
standalone_parameters["labels_source"] = ALL_NODES
standalone_parameters["labels_target"] = ALL_NODES
standalone_parameters["domains_source"] = [1]
standalone_parameters["domains_target"] = [2,3,4,5]
standalone_parameters["pickle_name"] = "cores.stratified_ds.2022A.pkl"
# Uncomment these for ORACLE dataset
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# standalone_parameters["labels_source"] = ALL_SERIAL_NUMBERS
# standalone_parameters["labels_target"] = ALL_SERIAL_NUMBERS
# standalone_parameters["domains_source"] = [8,20, 38,50]
# standalone_parameters["domains_target"] = [14, 26, 32, 44, 56]
# standalone_parameters["pickle_name"] = "oracle.frame_indexed.stratified_ds.2022A.pkl"
# standalone_parameters["num_examples_per_domain_per_label_source"]=1000
# standalone_parameters["num_examples_per_domain_per_label_target"]=1000
# Uncomment these for Metahan dataset
# standalone_parameters["labels_source"] = list(range(19))
# standalone_parameters["labels_target"] = list(range(19))
# standalone_parameters["domains_source"] = [0]
# standalone_parameters["domains_target"] = [1]
# standalone_parameters["pickle_name"] = "metehan.stratified_ds.2022A.pkl"
# standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# standalone_parameters["num_examples_per_domain_per_label_source"]=200
# standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# Parameters
parameters = {
"experiment_name": "seeds_oracle.run2",
"lr": 0.001,
"device": "cuda",
"seed": 12341234,
"dataset_seed": 1337,
"labels_source": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"labels_target": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"x_transforms_source": [],
"x_transforms_target": [],
"episode_transforms_source": [],
"episode_transforms_target": [],
"num_examples_per_domain_per_label_source": 1000,
"num_examples_per_domain_per_label_target": 1000,
"n_shot": 3,
"n_way": 16,
"n_query": 2,
"train_k_factor": 1,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_loss",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"pickle_name": "oracle.Run2_10kExamples_stratified_ds.2022A.pkl",
"domains_source": [8, 32, 50],
"domains_target": [14, 20, 26, 38, 44],
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
# (This is due to the randomized initial weights)
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
###################################
# Build the dataset
###################################
if p.x_transforms_source == []: x_transform_source = None
else: x_transform_source = get_chained_transform(p.x_transforms_source)
if p.x_transforms_target == []: x_transform_target = None
else: x_transform_target = get_chained_transform(p.x_transforms_target)
if p.episode_transforms_source == []: episode_transform_source = None
else: raise Exception("episode_transform_source not implemented")
if p.episode_transforms_target == []: episode_transform_target = None
else: raise Exception("episode_transform_target not implemented")
eaf_source = Episodic_Accessor_Factory(
labels=p.labels_source,
domains=p.domains_source,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_source,
example_transform_func=episode_transform_source,
)
train_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test()
eaf_target = Episodic_Accessor_Factory(
labels=p.labels_target,
domains=p.domains_target,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_target,
example_transform_func=episode_transform_target,
)
train_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test()
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
# Some quick unit tests on the data
from steves_utils.transforms import get_average_power, get_average_magnitude
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_source))
assert q_x.dtype == eval(p.torch_default_dtype)
assert s_x.dtype == eval(p.torch_default_dtype)
print("Visually inspect these to see if they line up with expected values given the transforms")
print('x_transforms_source', p.x_transforms_source)
print('x_transforms_target', p.x_transforms_target)
print("Average magnitude, source:", get_average_magnitude(q_x[0].numpy()))
print("Average power, source:", get_average_power(q_x[0].numpy()))
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_target))
print("Average magnitude, target:", get_average_magnitude(q_x[0].numpy()))
print("Average power, target:", get_average_power(q_x[0].numpy()))
###################################
# Build the model
###################################
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assesment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
ICCT_it/examples/02/.ipynb_checkpoints/TD-12-Approssimazione-a-poli-dominanti-checkpoint.ipynb | ###Markdown
Dominant pole approximationWhen studying the behaviour of a system, it is often approximated by a single dominant pole or by a pair of dominant complex poles.The second-order system presented here is defined by the following transfer function:\begin{equation} G(s)=\frac{\alpha\beta}{(s+\alpha)(s+\beta)}=\frac{1}{(\frac{1}{\alpha}s+1)(\frac{1}{\beta}s+1)},\end{equation}where $\beta=1$ and $\alpha$ is variable.The third-order system presented here is instead defined by the following transfer function:\begin{equation} G(s)=\frac{\alpha{\omega_0}^2}{\big(s+\alpha\big)\big(s^2+2\zeta\omega_0s+\omega_0^2\big)}=\frac{1}{(\frac{1}{\alpha}s+1)(\frac{1}{\omega_0^2}s^2+\frac{2\zeta}{\omega_0}s+1)},\end{equation}where $\beta=1$, $\omega_0=4.1$, $\zeta=0.24$ and $\alpha$ is variable.--- How to use this notebook?Toggle between the second- and third-order system and move the slider to change the position of the movable pole $\alpha$.This notebook is based on the following [tutorial](https://lpsa.swarthmore.edu/PZXferStepBode/DomPole.html "The Dominant Pole Approximation") by Prof. Erik Cheever.
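A minimal non-interactive sketch of the same idea, using the `control` package (imported as `c` in the code below): build the full second-order system for one value of $\alpha$ and compare its step response with the first-order approximation that keeps only the dominant (slower) pole.

```python
# Sketch: dominant-pole approximation for the second-order case (alpha chosen arbitrarily)
import control as c
import matplotlib.pyplot as plt

alpha, beta = 10.0, 1.0
G_full = c.TransferFunction(alpha * beta, [1, alpha + beta, alpha * beta])
G_dom = c.TransferFunction(beta, [1, beta])   # keep only the slower pole at s = -beta

t_full, y_full = c.step_response(G_full)
t_dom, y_dom = c.step_response(G_dom)
plt.plot(t_full, y_full, label='full second-order system')
plt.plot(t_dom, y_dom, '--', label='dominant-pole approximation')
plt.xlabel('t [s]'); plt.legend(); plt.show()
```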
###Code
# Required imports (this is the first code cell of the notebook)
import numpy as np
import sympy as sym
import control as c
import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import display, Markdown

# System selector buttons
style = {'description_width': 'initial','button_width': '200px'}
typeSelect = widgets.ToggleButtons(
options=[('Sistema del secondo ordine', 0), ('Sistema del terzo ordine', 1),],
description='Seleziona: ',style=style)
display(typeSelect)
continuous_update=False
# set up plot
fig, ax = plt.subplots(2,1,figsize=[9.8,7],num='Approssimazione a poli dominanti')
plt.subplots_adjust(hspace=0.35)
ax[0].grid(True)
ax[1].grid(True)
# ax[2].grid(which='both', axis='both', color='lightgray')
ax[0].axhline(y=0,color='k',lw=.8)
ax[1].axhline(y=0,color='k',lw=.8)
ax[0].axvline(x=0,color='k',lw=.8)
ax[1].axvline(x=0,color='k',lw=.8)
ax[0].set_xlabel('Re')
ax[0].set_ylabel('Im')
ax[0].set_xlim([-10,0.5])
ax[1].set_xlim([-0.5,20])
ax[1].set_xlabel('$t$ [s]')
ax[1].set_ylabel('input, output')
ax[0].set_title('Mappa poli-zeri')
ax[1].set_title('Risposta')
plotzero, = ax[0].plot([], [])
response, = ax[1].plot([], [])
responseAdom, = ax[1].plot([], [])
responseBdom, = ax[1].plot([], [])
ax[1].step([0,50],[0,1],color='C0',label='input')
# generate x values
def response_func(a,index):
global plotzero, response, responseAdom, responseBdom
# global bodePlot, bodePlotAdom, bodePlotBdom
t = np.linspace(0, 50, 1000)
if index==0:
b=1
num=a*b
den=([1,a+b,a*b])
tf_sys=c.TransferFunction(num,den)
poles_sys,zeros_sys=c.pzmap(tf_sys, Plot=False)
tout, yout = c.step_response(tf_sys,t)
den1=([1,a])
tf_sys1=c.TransferFunction(a,den1)
toutA, youtA = c.step_response(tf_sys1,t)
den2=([1,b])
tf_sys2=c.TransferFunction(b,den2)
toutB, youtB = c.step_response(tf_sys2,t)
mag, phase, omega = c.bode_plot(tf_sys, Plot=False) # Bode-plot
magA, phase, omegaA = c.bode_plot(tf_sys1, Plot=False) # Bode-plot
magB, phase, omegaB = c.bode_plot(tf_sys2, Plot=False) # Bode-plot
s=sym.Symbol('s')
eq=(a*b/((s+a)*(s+b)))
eq1=1/(((1/a)*s+1)*((1/b)*s+1))
display(Markdown('Il polo variabile (curva viola) $\\alpha$ รจ uguale %.1f, il polo fisso (curva rossa) $b$ รจ uguale a %i; la funzione di trasferimento รจ uguale a:'%(a,1)))
display(eq),display(Markdown('o')),display(eq1)
elif index==1:
omega0=4.1
zeta=0.24
num=a*omega0**2
den=([1,2*zeta*omega0+a,omega0**2+2*zeta*omega0*a,a*omega0**2])
tf_sys=c.TransferFunction(num,den)
poles_sys,zeros_sys=c.pzmap(tf_sys, Plot=False)
tout, yout = c.step_response(tf_sys,t)
den1=([1,a])
tf_sys1=c.TransferFunction(a,den1)
toutA, youtA = c.step_response(tf_sys1,t)
den2=([1,2*zeta*omega0,omega0**2])
tf_sys2=c.TransferFunction(omega0**2,den2)
toutB, youtB = c.step_response(tf_sys2,t)
mag, phase, omega = c.bode_plot(tf_sys, Plot=False) # Bode-plot
magA, phase, omegaA = c.bode_plot(tf_sys1, Plot=False) # Bode-plot
magB, phase, omegaB = c.bode_plot(tf_sys2, Plot=False) # Bode-plot
s=sym.Symbol('s')
eq=(a*omega0**2/((s+a)*(s**2+2*zeta*omega0*s+omega0*omega0)))
eq1=1/(((1/a)*s+1)*((1/(omega0*omega0))*s*s+(2*zeta*a/omega0)*s+1))
display(Markdown('Il polo variabile (curva viola) $\\alpha$ รจ uguale %.1f, i poli fissi (curva rossa) $b$ sono uguali a $1\pm4j$ ($\omega_0 = 4.1$, $\zeta=0.24$). La funzione di trasferimento รจ uguale a:'%(a)))
display(eq),display(Markdown('o')),display(eq1)
ax[0].lines.remove(plotzero)
ax[1].lines.remove(response)
ax[1].lines.remove(responseAdom)
ax[1].lines.remove(responseBdom)
plotzero, = ax[0].plot(np.real(poles_sys), np.imag(poles_sys), 'xg', markersize=10, label = 'polo')
response, = ax[1].plot(tout,yout,color='C1',label='risposta',lw=3)
responseAdom, = ax[1].plot(toutA,youtA,color='C4',label='risposta dovuta al solo polo variabile')
responseBdom, = ax[1].plot(toutB,youtB,color='C3',label='risposta dovuta al solo polo fisso (o coppia)')
ax[0].legend()
ax[1].legend()
a_slider=widgets.FloatSlider(value=0.1, min=0.1, max=10, step=.1,
description='$\\alpha$:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
input_data=widgets.interactive_output(response_func,{'a':a_slider,'index':typeSelect})
def update_slider(index):
global a_slider
aval=[0.1,0.1]
a_slider.value=aval[index]
input_data2=widgets.interactive_output(update_slider,{'index':typeSelect})
display(a_slider,input_data)
###Output
_____no_output_____ |
Milestone Project 1- Walkthrough Steps Workbook.ipynb | ###Markdown
Milestone Project 1: Walk-through Steps WorkbookBelow is a set of steps for you to follow to try to create the Tic Tac Toe Milestone Project game!
###Code
# For using the same code in either Python 2 or 3
from __future__ import print_function
## Note: Python 2 users, use raw_input() to get player input. Python 3 users, use input()
###Output
_____no_output_____
###Markdown
**Step 1: Write a function that can print out a board. Set up your board as a list, where each index 1-9 corresponds with a number on a number pad, so you get a 3 by 3 board representation.**
###Code
from IPython.display import clear_output
def display_board(board):
    clear_output()
    # board indices follow the numeric keypad layout (7-8-9 across the top row)
    print(board[7] + '|' + board[8] + '|' + board[9])
    print(board[4] + '|' + board[5] + '|' + board[6])
    print(board[1] + '|' + board[2] + '|' + board[3])
###Output
_____no_output_____
###Markdown
**Step 2: Write a function that can take in a player input and assign their marker as 'X' or 'O'. Think about using *while* loops to continually ask until you get a correct answer.**
###Code
def player_input():
marker = ''
while not (marker == 'O' or marker == 'X'):
marker = raw_input('Player 1: Do you want to be O or X? ').upper()
if marker == 'X':
return ('X', 'O')
else:
return ('O','X')
player_input()
###Output
Player 1: Do you want to be O or X? O
###Markdown
**Step 3: Write a function that takes, in the board list object, a marker ('X' or 'O'), and a desired position (number 1-9) and assigns it to the board.**
###Code
def place_marker(board, marker, position):
board[position] = marker
###Output
_____no_output_____
###Markdown
**Step 4: Write a function that takes in a board and a mark (X or O) and then checks to see if that mark has won. **
###Code
def win_check(board,mark):
    # check all eight winning lines on the 3x3 board
    wins = [(7, 8, 9), (4, 5, 6), (1, 2, 3), (7, 4, 1), (8, 5, 2), (9, 6, 3), (7, 5, 3), (9, 5, 1)]
    return any(board[a] == board[b] == board[c] == mark for a, b, c in wins)
###Output
_____no_output_____
###Markdown
**Step 5: Write a function that uses the random module to randomly decide which player goes first. You may want to lookup random.randint() Return a string of which player went first.**
###Code
import random
def choose_first():
if random.randint(0, 1) == 0:
return 'Player 1'
else:
return 'Player 2'
###Output
_____no_output_____
###Markdown
**Step 6: Write a function that returns a boolean indicating whether a space on the board is freely available.**
###Code
def space_check(board, position):
    return board[position] == ' '
###Output
_____no_output_____
###Markdown
**Step 7: Write a function that checks if the board is full and returns a boolean value. True if full, False otherwise.**
###Code
def full_board_check(board):
    # the board is a flat 1D list; index 0 is unused
for i in range(1, 10):
if space_check(board, i):
return False
return True
###Output
_____no_output_____
###Markdown
**Step 8: Write a function that asks for a player's next position (as a number 1-9) and then uses the function from step 6 to check if its a free position. If it is, then return the position for later use. **
###Code
def player_choice(board):
position = ''
    while position not in '1 2 3 4 5 6 7 8 9'.split() or not space_check(board, int(position)):
        position = raw_input('Choose your next position: (1-9) ')
    return int(position)
###Output
_____no_output_____
###Markdown
**Step 9: Write a function that asks the player if they want to play again and returns a boolean True if they do want to play again.**
###Code
def replay():
    # chain string methods: lowercase the answer and check whether it starts with 'y'
return raw_input('Do you want to play again? Enter Yes or No').lower().startswith('y')
###Output
_____no_output_____
###Markdown
**Step 10: Here comes the hard part! Use while loops and the functions you've made to run the game!**
###Code
print('Welcome to Tic Tac Toe!')
while True:
# Set the game up here
theBoard = [' '] * 10
player1_marker, player2_marker = player_input() #tuple unpacking
turn = choose_first()
    print(turn + ' will go first!')
game_on = True
while game_on:
#Player 1 Turn
        if turn == 'Player 1':
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player1_marker, position)
if win_check(theBoard, player1_marker):
display_board(theBoard)
                print('Congratulations! {pl} has won the game!'.format(pl=turn))
game_on = False
else:
                turn = 'Player 2'
# Player2's turn.
if turn == 'Player 2':
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player2_marker, position)
if win_check(theBoard, player2_marker):
display_board(theBoard)
                print('Congratulations! {pl} has won the game!'.format(pl=turn))
game_on = False
else:
                turn = 'Player 1'
if not replay():
break
###Output
Welcome to Tic Tac Toe!
|
new_link_prediction.ipynb | ###Markdown
ๅ็ฝ็ป้พ่ทฏ้ขๆต็ปๆ
###Code
# enron
AUC = link_prediction('./predict_TTMs/enron49_50.txt', './TTIE_matrix.txt', './enron_sub/49.txt', './enron_sub/50.txt', 2115)
AUC
# facebook
AUC = link_prediction('./predict_TTMs/facebook8_9.txt', './TTIE_matrix.txt', './facebook_sub/8.txt', './facebook_sub/9.txt', 5111)
AUC
# col_ms
AUC = link_prediction('./predict_TTMs/col24_25.txt', './TTIE_matrix.txt', './col_ms_sub/24.txt', './col_ms_sub/25.txt', 1899)
AUC
# email-eu
AUC = link_prediction('./predict_TTMs/email69_70.txt', './TTIE_matrix.txt', './email_eu_sub/69.txt', './email_eu_sub/70.txt', 1005)
AUC
###Output
100%|โโโโโโโโโโ| 6221/6221 [04:25<00:00, 23.42it/s]
###Markdown
Enron ่ฟ็ปญ้พ่ทฏ้ขๆต็ปๆ
###Code
for i in range(40, 50):
TTM_file = "./predict_TTMS/enron" + str(i) + "_" + str(i+1) + ".txt"
TTIE_file = './TTIE_matrix.txt'
sub_1 = './enron_sub/'+ str(i) + '.txt'
sub_2 = './enron_sub/' + str(i+1) + '.txt'
AUC = link_prediction(TTM_file, TTIE_file, sub_1, sub_2, 2115)
print(AUC)
# artificial network
for i in range(3, 12):
TTM_file = "./predict_TTMS/art" + str(i) + "_" + str(i+1) + ".txt"
TTIE_file = './TTIE_matrix.txt'
sub_1 = './art_sub/'+ str(i) + '.txt'
sub_2 = './art_sub/' + str(i+1) + '.txt'
AUC = link_prediction(TTM_file, TTIE_file, sub_1, sub_2, 200)
print(AUC)
###Output
100%|โโโโโโโโโโ| 996/996 [00:08<00:00, 121.48it/s]
1%| | 14/1394 [00:00<00:10, 132.84it/s] |
Facial-Keypoint-Detection-P1/3. Facial Keypoint Detection, Complete Pipeline.ipynb | ###Markdown
Face and Facial Keypoint detectionAfter you've trained a neural network to detect facial keypoints, you can then apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input, so to detect any face, you'll first have to do some pre-processing.1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook).2. Pre-process those face images so that they are grayscale, and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was to rescale, normalize, and turn any image into a Tensor to be accepted as input to your CNN.3. Use your trained model to detect facial keypoints on the image.--- In the next python cell we load in required libraries for this section of the project.
###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Select an image Select an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory.
###Code
import cv2
# load in color image for face detection
image = cv2.imread('images/obamas.jpg')
# switch red and blue color channels
# --> by default OpenCV assumes BLUE comes first, not RED as in many images
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# plot the image
fig = plt.figure(figsize=(9,9))
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Detect all faces in an imageNext, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image.In the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original). You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors.An example of face detection on a variety of images is shown below.
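As the optional exercise suggests, eye detection can be layered on top in the same way. A hedged sketch (assuming an eye cascade file such as `haarcascade_eye.xml` is available in `detector_architectures/`, and run after the next cell has defined `faces` and `image_with_detections`):

```python
# Sketch: detect eyes inside each detected face region (cascade file name is an assumption)
eye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')
for (x, y, w, h) in faces:
    roi = image_with_detections[y:y+h, x:x+w]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
        cv2.rectangle(roi, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)
```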
###Code
# load in a haar cascade classifier for detecting frontal faces
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
# run the detector
# the output here is an array of detections; the corners of each detection box
# if necessary, modify these parameters until you successfully identify every face in a given image
faces = face_cascade.detectMultiScale(image, 1.2, 2)
# make a copy of the original image to plot detections on
image_with_detections = image.copy()
# loop over the detected faces, mark the image where each face is found
for (x,y,w,h) in faces:
# draw a rectangle around each detected face
# you may also need to change the width of the rectangle drawn depending on image resolution
cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3)
fig = plt.figure(figsize=(9,9))
plt.imshow(image_with_detections)
###Output
_____no_output_____
###Markdown
Loading in a trained modelOnce you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector.First, load your best model by its filename.
###Code
import torch
from models import Net
net = Net()
## TODO: load the best saved model parameters (by your path name)
## You'll need to un-comment the line below and add the correct name for *your* saved model
net.load_state_dict(torch.load('saved_models/keypoints_model_1.pt'))
if torch.cuda.is_available():
net = net.cuda()
## print out your net and prepare it for testing (uncomment the line below)
net.eval()
###Output
_____no_output_____
###Markdown
Keypoint detectionNow, we'll loop over each detected face in an image (again!), only this time you'll transform those faces into Tensors that your CNN can accept as input images. TODO: Transform each detected face into an input TensorYou'll need to perform the following steps for each detected face:1. Convert the face from RGB to grayscale2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested)4. Reshape the numpy image into a torch image.You may find it useful to consult the transformation code in `data_load.py` to help you perform these processing steps. TODO: Detect and display the predicted keypointsAfter each face has been appropriately converted into an input Tensor for your network to see as input, you'll wrap that Tensor in a Variable() and can apply your `net` to each face. The output should be the predicted facial keypoints. These keypoints will need to be "un-normalized" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following with facial keypoints that closely match the facial features on each individual face:
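A possible version of the suggested `show_keypoints` helper (a sketch; the plotting style mirrors the inline code in the cell below, which also handles the un-normalization):

```python
# Sketch: display one face crop together with its predicted keypoints
def show_keypoints(face_img, keypoints):
    plt.figure()
    plt.imshow(face_img, cmap='gray')
    plt.scatter(keypoints[:, 0], keypoints[:, 1], s=40, marker='.', c='m')
```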
###Code
image_copy = np.copy(image)
# loop over the detected faces from your haar cascade
for (x,y,w,h) in faces:
# Select the region of interest that is the face in the image
#roi = image_copy[y:y+h, x:x+w]
roi = image_copy[y:y + int(1.5 * h), x - int(0.4 * w):x + int(1.1 * w)]
## TODO: Convert the face region from RGB to grayscale
gray = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)
## TODO: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
norm = gray.astype(np.float32) / 255.0
## TODO: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
resz = cv2.resize(norm, (224, 224))
## TODO: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W)
resh = resz[None]
resh = resh[None] # 1 x 1 x 224 x 224
## TODO: Make facial keypoint predictions using your loaded, trained network
## perform a forward pass to get the predicted facial keypoints
tens = torch.from_numpy(resh)
if torch.cuda.is_available():
tens = tens.cuda()
else:
tens = tens.cpu()
output = net(tens)
if torch.cuda.is_available():
output = output.cpu()
output = output.detach().numpy()
output = output.reshape((-1, 2))
# Renormalize the points
# Adjusting the normalisation due to different ROI above
output = 60 * output + 96
## TODO: Display each detected face and the corresponding keypoints
plt.figure()
plt.imshow(resz, cmap='gray')
plt.scatter(output[:, 0], output[:, 1], s=40, marker='.', c='m')
###Output
_____no_output_____ |
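###Markdown
The `show_keypoints` helper mentioned above can tidy up the display code; a minimal sketch (assuming a grayscale crop and keypoints that are already un-normalized to pixel coordinates) could look like this.
###Code
def show_keypoints(image, keypoints, ax=None):
    """Display a grayscale face crop with its predicted keypoints overlaid."""
    if ax is None:
        ax = plt.gca()
    ax.imshow(image, cmap='gray')
    ax.scatter(keypoints[:, 0], keypoints[:, 1], s=40, marker='.', c='m')
    ax.axis('off')

# hypothetical usage with the variables from the loop above:
# show_keypoints(resz, output)
###Output
_____no_output_____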
Sentiment Analysis/Amazon Review Sentiment Analysis.ipynb | ###Markdown
Loading the dataset
###Code
train = bz2.BZ2File('../input/amazonreviews/train.ft.txt.bz2')
test = bz2.BZ2File('../input/amazonreviews/test.ft.txt.bz2')
train = train.readlines()
test = test.readlines()
train[0]
# decode the raw byte strings into regular text strings that can be parsed
train = [x.decode('utf-8') for x in train]
test = [x.decode('utf-8') for x in test]
train[0]
print(type(train), type(test), "\n")
print(f"Train Data Volume: {len(train)}\n")
print(f"Test Data Volume: {len(test)}\n\n")
print("Demo: ", "\n")
for x in train[:5]:
print(x, "\n")
# extract labels from the dataset
# judging from the dataset, let's set 0 for negative sentiment and 1 for positive sentiment
train_labels = [0 if x.split(' ')[0] == '__label__1' else 1 for x in train]
test_labels = [0 if x.split(' ')[0] =='__label__1' else 1 for x in test]
sns.countplot(train_labels)
plt.title('Train Labels Distribution')
sns.countplot(test_labels)
plt.title('Test Labels Distribution')
# let's extract the texts
train_texts = [x.split(' ', maxsplit=1)[1][:-1] for x in train]
test_texts = [x.split(' ', maxsplit=1)[1][:-1] for x in test]
train_texts[0]
del train, test
gc.collect()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
Word Cloud
###Code
from wordcloud import WordCloud
# let's have a corpus for all the texts in train_text
corpus = ' '.join(text for text in train_texts[:100000])
print(f'There are {len(corpus)} characters in the corpus')
wordcloud = WordCloud(max_font_size=50, max_words=100, background_color='white')
wordcloud = wordcloud.generate(corpus)
plt.figure(figsize=(10, 8))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
del wordcloud
gc.collect()
###Output
_____no_output_____
###Markdown
Distribution of word count
###Code
# let's count the number of words in the review and see the distribution
train_texts_size = list(map(lambda x: len(x.split()), train_texts))
sns.displot(train_texts_size)
plt.xlabel('No. of words in review')
plt.ylabel('Frequency')
plt.title('Word Frequency Distribution in Reviews')
train_size_df = pd.DataFrame({'len': train_texts_size, 'labels': train_labels})
train_size_df.head(10)
neg_mean_len = train_size_df[train_size_df['labels'] == 0]['len'].mean()
pos_mean_len = train_size_df[train_size_df['labels'] == 1]['len'].mean()
print(f'Negative mean length: {neg_mean_len:.2f}')
print(f'Positive mean length: {pos_mean_len: .2f}')
print(f'Mean difference: {neg_mean_len - pos_mean_len:.2f}')
sns.catplot(x='labels', y='len', data=train_size_df, kind='box')
plt.title('Review length by Sentiment')
plt.ylabel('No. words in review')
plt.xlabel('Label -> 0 for Negative and 1 for Positive')
del train_size_df
gc.collect()
###Output
_____no_output_____
###Markdown
Tokenizing and Vectorizing
###Code
len(train_texts), len(test_texts)
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
num_words = 70000 # number of words from the train_text to tokenize by frequency
tokenizer = Tokenizer(num_words = num_words)
tokenizer.fit_on_texts(train_texts)
# let's see the dictionary of words tokenized
word_index = tokenizer.word_index
print(f'The size of the vocabulary: {len(word_index)}')
word_index
# let's save the tokenizer for future use
import pickle
# saving
with open('tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# loading
#with open('tokenizer.pickle', 'rb') as handle:
# tokenizer = pickle.load(handle)
sequences = tokenizer.texts_to_sequences(train_texts)
print(len(sequences))
# pad sequences to the same shape
maxlen = 100
sequences = pad_sequences(sequences, maxlen=maxlen)
sequences[0].shape
train_texts[0]
len(train_labels)
# let's convert to numpy array
import numpy as np
labels = np.array(train_labels)
# let's reduce the dataset size by shuffling and keeping a subset
# (600,000 examples are kept for training below; the test set is reduced to 50,000 later)
# first we shuffle the indices of train_texts
indices = np.arange(len(train_texts))
np.random.shuffle(indices)
train_data = sequences[indices]
train_labels = labels[indices]
train_size = 600000
train_data = train_data[:train_size]
train_labels = train_labels[:train_size]
# let's split the dataset
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(train_data, train_labels, random_state=42, test_size=0.2)
len(X_train), len(X_valid)
X_train.shape
# sanity check
sanity_text = tokenizer.sequences_to_texts(sequences[:3])
sanity_text
train_texts[:3]
sanity_text = tokenizer.sequences_to_texts(X_train[:3])
sanity_text
###Output
_____no_output_____
###Markdown
Everything works, and we can see that most of the text from each review is kept
###Code
sns.countplot(train_labels)
plt.title('Train Labels Distribution (subset)')
del sequences, train_texts, train_data
gc.collect()
###Output
_____no_output_____
###Markdown
Preprocessing the Test set
###Code
len(test_texts)
# convert test_labels to numpy arrays
test_labels = np.array(test_labels)
# vectorize the test set
test = tokenizer.texts_to_sequences(test_texts)
# pad the sequence
test = pad_sequences(test, maxlen=maxlen)
# let's reduce the dataset size
indices = np.arange(len(test_texts))
np.random.shuffle(indices)
test = test[indices]
test_labels = test_labels[indices]
test_size = 50000
test = test[:test_size]
test_labels = test_labels[:test_size]
# sanity check
print(test_texts[:3])
print('\n')
print(tokenizer.sequences_to_texts(test[:3]))
###Output
['Great CD: My lovely Pat has one of the GREAT voices of her generation. I have listened to this CD for YEARS and I still LOVE IT. When I\'m in a good mood it makes me feel better. A bad mood just evaporates like sugar in the rain. This CD just oozes LIFE. Vocals are jusat STUUNNING and lyrics just kill. One of life\'s hidden gems. This is a desert isle CD in my book. Why she never made it big is just beyond me. Everytime I play this, no matter black, white, young, old, male, female EVERYBODY says one thing "Who was that singing ?"', "One of the best game music soundtracks - for a game I didn't really play: Despite the fact that I have only played a small portion of the game, the music I heard (plus the connection to Chrono Trigger which was great as well) led me to purchase the soundtrack, and it remains one of my favorite albums. There is an incredible mix of fun, epic, and emotional songs. Those sad and beautiful tracks I especially like, as there's not too many of those kinds of songs in my other video game soundtracks. I must admit that one of the songs (Life-A Distant Promise) has brought tears to my eyes on many occasions.My one complaint about this soundtrack is that they use guitar fretting effects in many of the songs, which I find distracting. But even if those weren't included I would still consider the collection worth it.", 'Batteries died within a year ...: I bought this charger in Jul 2003 and it worked OK for a while. The design is nice and convenient. However, after about a year, the batteries would not hold a charge. Might as well just get alkaline disposables, or look elsewhere for a charger that comes with batteries that have better staying power.']
["fiona's review to tell the truth it is very fragile and frustrating if you keep on doing leg kick action fiona's leg will snap in not time it is smaller for than the other figures and is just borderline for having fun", "vivid colors i'm amazed at how my pictures turned out with the use of this film the colors are so vivid and vibrant the images come out sharp definitely the film i'll be using from now on", "good product but there are better sheet feeders i bought this as an upgrade for my 1 person office i went from an hp that did a great job with printing and scanning but lacked a flatbed copy function to this generally i'm pleased with the product but the scanning function is a bit cumbersome and the for the bypass tray does not always do a good job with envelopes"]
###Markdown
ML Models
Baseline Model
###Code
embedding_dim = 100
model = models.Sequential(name='baseline_amazon')
model.add(layers.Embedding(input_dim=num_words, output_dim=embedding_dim, input_length=maxlen))
model.add(layers.Conv1D(64, 7, padding='valid', activation='relu'))
model.add(layers.Conv1D(128, 7, padding='valid', activation='relu'))
model.add(layers.Conv1D(256, 7, padding='valid', activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dropout(0.2))
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
callbacks = [keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),
keras.callbacks.ModelCheckpoint('baseline.h5', save_best_only=True)]
history = model.fit(X_train, y_train, epochs=30, validation_data=(X_valid, y_valid), callbacks=callbacks)
def learning_curve(history):
loss = history.history['loss']
val_loss = history.history['val_loss']
accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Train Loss')
plt.plot(epochs, val_loss, 'b-', label='Validation Loss')
plt.title('Train and Validation Loss')
plt.legend()
plt.figure()
plt.plot(epochs, accuracy, 'bo', label='Train Accuracy')
plt.plot(epochs, val_accuracy, 'b-', label='Validation Accuracy')
plt.title('Train and Validation Accuracy')
plt.legend()
plt.show()
learning_curve(history)
# let's evaluate the model's result on the test set
loss_1, acc_1 = model.evaluate(test, test_labels)
loss_1, acc_1
###Output
1563/1563 [==============================] - 5s 3ms/step - loss: 0.1994 - accuracy: 0.9220
###Markdown
Baseline Model + BatchNorm & Higher Dropout
###Code
model = models.Sequential(name='baseline2_amazon')
model.add(layers.Embedding(input_dim=num_words, output_dim=embedding_dim, input_length=maxlen))
model.add(layers.BatchNormalization())
model.add(layers.Conv1D(64, 7, padding='valid', use_bias=False))
model.add(layers.Dropout(0.2))
model.add(layers.BatchNormalization())
model.add(layers.Activation('relu'))
model.add(layers.Conv1D(128, 7, padding='valid', use_bias=False))
model.add(layers.Dropout(0.2))
model.add(layers.BatchNormalization())
model.add(layers.Activation('relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dropout(0.4))
model.add(layers.Dense(128, use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.Activation('relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
callbacks = [keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),
keras.callbacks.ModelCheckpoint('baseline2.h5', save_best_only=True)]
history = model.fit(X_train, y_train, epochs=30, validation_data=(X_valid, y_valid), callbacks=callbacks)
learning_curve(history)
loss_2, acc_2 = model.evaluate(test, test_labels)
loss_2, acc_2
###Output
1563/1563 [==============================] - 4s 3ms/step - loss: 0.2052 - accuracy: 0.9196
###Markdown
Model 3 - LSTM
###Code
model = models.Sequential(name='lstm_amazon')
model.add(layers.Embedding(input_dim=num_words, output_dim=embedding_dim, input_length=maxlen))
model.add(layers.LSTM(64, dropout=0.2, return_sequences=False))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
callbacks = [keras.callbacks.EarlyStopping(patience=10, restore_best_weights=False),
keras.callbacks.ModelCheckpoint('lstm_amazon.h5', save_best_only=True)]
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid), callbacks=callbacks)
learning_curve(history)
loss_3, acc_3 = model.evaluate(test, test_labels)
loss_3, acc_3
###Output
1563/1563 [==============================] - 6s 4ms/step - loss: 0.4213 - accuracy: 0.9132
###Markdown
Bidirectional LSTM
###Code
model = models.Sequential(name='bidirectional_lstm')
model.add(layers.Embedding(input_dim=num_words, output_dim=embedding_dim, input_length=maxlen))
model.add(layers.Bidirectional(layers.LSTM(64, dropout=0.2, return_sequences=False)))
model.add(layers.Dropout(0.4))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
callbacks = [keras.callbacks.EarlyStopping(patience=10, restore_best_weights=False),
keras.callbacks.ModelCheckpoint('bilstm_amazon.h5', save_best_only=True)]
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid), callbacks=callbacks)
learning_curve(history)
loss_4, acc_4 = model.evaluate(test, test_labels)
loss_4, acc_4
###Output
1563/1563 [==============================] - 9s 6ms/step - loss: 0.4269 - accuracy: 0.9111
###Markdown
Using Pretrained GloVe Embeddings
###Code
model = models.Sequential(name='pretrained_embeddings')
model.add(layers.Embedding(input_dim=num_words, output_dim=embedding_dim, input_length=maxlen))
model.add(layers.LSTM(64, dropout=0.2, return_sequences=False))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
# we need to use the glove embeddings to set the weights of the Embedding layer
embedding_path = '../input/glove6b100dtxt/glove.6B.100d.txt'
# create a dictionary to store the index
embedding_index = {}
f = open(embedding_path)
for line in f:
values = line.split()
word = values[0]
coefs = np.array(values[1:], dtype='float32')
embedding_index[word] = coefs
f.close()
print(f'There are {len(embedding_index)} words found')
# initialize an zero matrix of shape (num_words, embedding_dim)
embedding_matrix = np.zeros((num_words, embedding_dim))
for word, index in word_index.items():
if index < num_words:
embedding_vector = embedding_index.get(word)
if embedding_vector is not None:
embedding_matrix[index] = embedding_vector # maps each index in our word_index to its glove embeddings
embedding_matrix[0].shape
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
callbacks = [keras.callbacks.EarlyStopping(patience=15, restore_best_weights=False),
keras.callbacks.ModelCheckpoint('glove_amazon.h5', save_best_only=True)]
history = model.fit(X_train, y_train, epochs=30, validation_data=(X_valid, y_valid), callbacks=callbacks)
learning_curve(history)
loss_5, acc_5 = model.evaluate(test, test_labels)
loss_5, acc_5
###Output
1563/1563 [==============================] - 6s 4ms/step - loss: 0.1860 - accuracy: 0.9285
###Markdown
Using Transformer Architecture From Scratch
###Code
# create the transformer block
class TransformerBlock(layers.Layer):
def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1):
super(TransformerBlock, self).__init__()
self.att = layers.MultiHeadAttention(num_heads = num_heads, key_dim=embed_dim)
self.ffn = keras.Sequential([layers.Dense(ff_dim, activation='relu'), layers.Dense(embed_dim),])
self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = layers.Dropout(rate)
self.dropout2 = layers.Dropout(rate)
def call(self, inputs, training):
attn_output = self.att(inputs, inputs)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(inputs + attn_output)
ffn_output = self.ffn(out1)
ffn_output = self.dropout2(ffn_output, training=training)
return self.layernorm2(out1 + ffn_output)
# implement the embedding layer
class TokenAndPositionEmbedding(layers.Layer):
def __init__(self, maxlen, vocab_size, embed_dim):
super(TokenAndPositionEmbedding, self).__init__()
self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim = embed_dim)
self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)
def call(self, x):
maxlen = tf.shape(x)[-1]
positions = tf.range(start=0, limit=maxlen, delta=1)
positions = self.pos_emb(positions)
x = self.token_emb(x)
return x + positions
# parameters for training
embed_dim = 100
num_heads = 2
ff_dim = 32
vocab_size = 70000
# create the model
inputs = layers.Input(shape=(maxlen,))
embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)
x = embedding_layer(inputs)
transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
x = transformer_block(x)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(20, activation='relu')(x)
x = layers.Dropout(0.1)(x)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs=inputs, outputs = outputs)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
learning_curve(history)
loss_6, acc_6 = model.evaluate(test, test_labels)
loss_6, acc_6
result = pd.DataFrame({'loss': [loss_1, loss_2, loss_3, loss_4, loss_5, loss_6],
'accuracy': [acc_1, acc_2, acc_3, acc_4, acc_5, acc_6],
}, index = ['Baseline', 'Baseline with dropout', 'LSTM Model', 'Bidirectional LSTM',
'Pretrained Embeddings', 'Transformers from Scratch'])
result
###Output
_____no_output_____ |
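###Markdown
As a final, hedged sketch (not part of the original notebook): the saved tokenizer and any of the trained models above can be reused to score a raw review. The `maxlen` value and the 0 = negative / 1 = positive convention are the same as during training.
###Code
# Illustrative inference helper (assumes `model`, `tokenizer`, `pad_sequences` and `maxlen` are in scope)
def predict_sentiment(text, model, tokenizer, maxlen=100):
    """Return the predicted probability that a raw review is positive (label 1)."""
    seq = tokenizer.texts_to_sequences([text])
    seq = pad_sequences(seq, maxlen=maxlen)
    return float(model.predict(seq)[0][0])

sample = "The product arrived quickly and works exactly as described."
print(f"P(positive) = {predict_sentiment(sample, model, tokenizer):.3f}")
###Output
_____no_output_____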
Python_While_Loops.ipynb | ###Markdown
**Python While Loops**
**1. Python Loops**
Python has two primitive loop commands:
- **while loops**
- **for loops**
**2. The while Loop**
- With the while loop we can execute a set of statements as long as a condition is true.
###Code
# Example - Print i as long as i is less than 6:
i = 1
while i < 6:
print(i)
i += 1
###Output
1
2
3
4
5
###Markdown
- **Note**: remember to increment i, or else the loop will continue forever.
- The while loop requires relevant variables to be ready; in this example we need to define an indexing variable, i, which we set to 1.
**3. The break Statement**
- With the break statement we can stop the loop even if the while condition is true:
###Code
# Example - Exit the loop when i is 3:
i = 1
while i < 6:
print(i)
if i == 3:
break
i += 1
###Output
1
2
3
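###Markdown
- A related idiom (added here as an illustrative example) is an intentionally infinite `while True:` loop that relies on `break` to exit:
###Code
# Example - Loop "forever" until a stopping condition is met:
i = 1
while True:
    print(i)
    if i == 3:
        break
    i += 1
###Output
_____no_output_____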
###Markdown
**4. The continue Statement**
- With the continue statement we can stop the current iteration, and continue with the next:
###Code
# Example - Continue to the next iteration if i is 3:
i = 0
while i < 6:
i += 1
if i == 3:
continue
print(i)
###Output
1
2
4
5
6
###Markdown
**5. The else Statement**
- With the else statement we can run a block of code once when the condition is no longer true:
###Code
# Example - Print a message once the condition is false:
i = 1
while i < 6:
print(i)
i += 1
else:
print("i is no longer less than 6")
###Output
1
2
3
4
5
i is no longer less than 6
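###Markdown
- Note (illustrative addition): the else block only runs when the while condition becomes false. If the loop is exited with break, the else block is skipped:
###Code
# Example - The else block is skipped because the loop ends with break:
i = 1
while i < 6:
    if i == 3:
        break
    print(i)
    i += 1
else:
    print("This message is never printed")
###Output
_____no_output_____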
|
AAAI/Learnability/CIN/MLP/ds3/synthetic_type3_MLP_size_500_m_2000.ipynb | ###Markdown
###Markdown
Generate dataset
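###Markdown
The cells below use numpy, matplotlib and PyTorch, and refer to `m` and `train_size`, which are not defined in this excerpt. A minimal setup sketch is given here; the two constants are inferred from the notebook filename (`size_500_m_2000`) and should be treated as assumptions.
###Code
# Assumed imports and constants (not shown in the original cells)
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader

m = 2000          # points averaged per mosaic example (inferred from the filename)
train_size = 500  # number of training examples (inferred from the filename)
###Output
_____no_output_____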
###Code
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [7,4],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [8,6.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [5.5,6.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [-1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [0,2],cov=[[0.1,0],[0,0.1]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [0,-1],cov=[[0.1,0],[0,0.1]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [-0.5,-0.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [0.4,0.2],cov=[[0.1,0],[0,0.1]],size=sum(idx[9]))
x[idx[0]][0], x[idx[5]][5]
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
bg_idx = [ np.where(idx[3] == True)[0],
np.where(idx[4] == True)[0],
np.where(idx[5] == True)[0],
np.where(idx[6] == True)[0],
np.where(idx[7] == True)[0],
np.where(idx[8] == True)[0],
np.where(idx[9] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
train_data=[]
a = []
fg_instance = np.array([[0.0,0.0]])
bg_instance = np.array([[0.0,0.0]])
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
fg_instance += x[b]
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
bg_instance += x[b]
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
a
fg_instance
bg_instance
(fg_instance+bg_instance)/m , m
# mosaic_list_of_images =[]
# mosaic_label = []
train_label=[]
fore_idx=[]
train_data = []
for j in range(train_size):
np.random.seed(j)
fg_instance = torch.zeros([2], dtype=torch.float64) #np.array([[0.0,0.0]])
bg_instance = torch.zeros([2], dtype=torch.float64) #np.array([[0.0,0.0]])
# a=[]
for i in range(m):
if i == fg_idx:
fg_class = np.random.randint(0,3)
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
fg_instance += x[b]
# a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
bg_instance += x[b]
# a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
train_data.append((fg_instance+bg_instance)/m)
# a = np.concatenate(a,axis=0)
# mosaic_list_of_images.append(np.reshape(a,(2*m,1)))
train_label.append(fg_class)
fore_idx.append(fg_idx)
train_data[0], train_label[0]
train_data = torch.stack(train_data, axis=0)
train_data.shape, len(train_label)
test_label=[]
# fore_idx=[]
test_data = []
for j in range(1000):
np.random.seed(j)
fg_instance = torch.zeros([2], dtype=torch.float64) #np.array([[0.0,0.0]])
fg_class = np.random.randint(0,3)
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
fg_instance += x[b]
# a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
test_data.append((fg_instance)/m)
# a = np.concatenate(a,axis=0)
# mosaic_list_of_images.append(np.reshape(a,(2*m,1)))
test_label.append(fg_class)
# fore_idx.append(fg_idx)
test_data[0], test_label[0]
test_data = torch.stack(test_data, axis=0)
test_data.shape, len(test_label)
x1 = (train_data).numpy()
y1 = np.array(train_label)
x1[y1==0,0]
x1[y1==0,0][:,0]
x1[y1==0,0][:,1]
x1 = (train_data).numpy()
y1 = np.array(train_label)
plt.scatter(x1[y1==0,0][:,0], x1[y1==0,0][:,1], label='class 0')
plt.scatter(x1[y1==1,0][:,0], x1[y1==1,0][:,1], label='class 1')
plt.scatter(x1[y1==2,0][:,0], x1[y1==2,0][:,1], label='class 2')
plt.legend()
plt.title("dataset4 CIN with alpha = 1/"+str(m))
x1 = (test_data).numpy()
y1 = np.array(test_label)
plt.scatter(x1[y1==0,0][:,0], x1[y1==0,0][:,1], label='class 0')
plt.scatter(x1[y1==1,0][:,0], x1[y1==1,0][:,1], label='class 1')
plt.scatter(x1[y1==2,0][:,0], x1[y1==2,0][:,1], label='class 2')
plt.legend()
plt.title("test dataset4")
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
train_data[0].shape, train_data[0]
batch = 200
traindata_1 = MosaicDataset(train_data, train_label )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(test_data, test_label )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
# testdata_11 = MosaicDataset(test_dataset, labels )
# testloader_11 = DataLoader( testdata_11 , batch_size= batch ,shuffle=False)
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(2,50)
self.linear2 = nn.Linear(50,3)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
x = (self.linear2(x))
        return x  # logits for all three classes
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = model(inputs)
# print(outputs.shape)
loss = criter(outputs, labels)
r_loss += loss.item()
return r_loss/(i+1)
def test_all(number, testloader,net):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
pred = np.concatenate(pred, axis = 0)
out = np.concatenate(out, axis = 0)
print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred) )
print("correct: ", correct, "total ", total)
print('Accuracy of the network on the %d test dataset %d: %.2f %%' % (total, number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list, lr_list):
final_loss = []
for LR in lr_list:
print("--"*20, "Learning Rate used is", LR)
torch.manual_seed(12)
net = Whatnet().double()
net = net.to("cuda")
criterion_net = nn.CrossEntropyLoss()
        optimizer_net = optim.Adam(net.parameters(), lr=LR)  # use the current learning rate from lr_list
acti = []
loss_curi = []
epochs = 1000
running_loss = calculate_loss(trainloader,net,criterion_net)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_net.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
# print(outputs.shape)
loss = criterion_net(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_net.step()
running_loss = calculate_loss(trainloader,net,criterion_net)
if(epoch%200 == 0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %.2f %%' % (total, 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
final_loss.append(loss_curi)
return final_loss
train_loss_all=[]
testloader_list= [ testloader_1]
lr_list = [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5 ]
fin_loss = train_all(trainloader_1, 1, testloader_list, lr_list)
train_loss_all.append(fin_loss)
%matplotlib inline
len(fin_loss)
for i,j in enumerate(fin_loss):
plt.plot(j,label ="LR = "+str(lr_list[i]))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
###Output
_____no_output_____ |
introduction_to_machine_learning/py_05_h2o_in_the_cloud.ipynb | ###Markdown
Machine Learning with H2O - Tutorial 5: H2O in the Cloud
**Objective**:
- This tutorial demonstrates how to connect to an H2O cluster in the cloud.
**Steps**:
1. Create an H2O cluster in the cloud. Follow instructions from http://h2o-release.s3.amazonaws.com/h2o/latest_stable.html
2. Import h2o module.
3. Connect to cluster using h2o.connect(...) with specific IP address.
Step 1: Create an H2O cluster in the Cloud
Follow the instructions from http://h2o-release.s3.amazonaws.com/h2o/latest_stable.html
Step 2: Import H2O module
###Code
# Import module
import h2o
###Output
_____no_output_____
###Markdown
Step 3: Connect to H2O cluster with IP address
###Code
# In order to connect to a H2O cluster in the cloud, you need to specify the IP address
h2o.connect(ip = "xxx.xxx.xxx.xxx") # fill in the real IP
###Output
_____no_output_____ |
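###Markdown
Optional (illustrative) sanity check after connecting: print the cluster status to confirm the connection is healthy.
###Code
# Hedged example - confirm the cluster is reachable
h2o.cluster().show_status()
###Output
_____no_output_____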
module1-regression-1/LS_DS11_211.ipynb | ###Markdown
Lambda School Data Science
*Unit 2, Sprint 1, Module 1*
---
Regression 1
- Begin with baselines for regression
- Use scikit-learn to fit a linear regression
- Explain the coefficients from a linear regression
Brandon Rohrer wrote a good blog post, ["What questions can machine learning answer?"](https://brohrer.github.io/five_questions_data_science_answers.html)
We'll focus on two of these questions in Unit 2. These are both types of "supervised learning."
- "How Much / How Many?" (Regression)
- "Is this A or B?" (Classification)
This unit, you'll build supervised learning models with "tabular data" (data in tables, like spreadsheets). Including, but not limited to:
- Predict New York City real estate prices <-- **Today, we'll start this!**
- Predict which water pumps in Tanzania need repairs
- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model!
Setup
Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.
Libraries:
- ipywidgets
- pandas
- plotly
- scikit-learn
If your **Plotly** visualizations aren't working:
- You must have JavaScript enabled in your browser
- You probably want to use Chrome or Firefox
- You may need to turn off ad blockers
- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35)
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
###Output
_____no_output_____
###Markdown
Begin with baselines for regression
Overview
Predict how much a NYC condo costs
Regression models output continuous numbers, so we can use regression to answer questions like "How much?" or "How many?" Often, the question is "How much will this cost? How many dollars?" For example, here's a fun YouTube video, which we'll use as our scenario for this lesson: [Amateurs & Experts Guess How Much a NYC Condo With a Private Terrace Costs](https://www.youtube.com/watch?v=JQCctBOgH9I)
> Real Estate Agent Leonard Steinberg just sold a pre-war condo in New York City's Tribeca neighborhood. We challenged three people - an apartment renter, an apartment owner and a real estate expert - to try to guess how much the apartment sold for. Leonard reveals more and more details to them as they refine their guesses.
The condo from the video is **1,497 square feet**, built in 1852, and is in a desirable neighborhood. According to the real estate agent, _"Tribeca is known to be one of the most expensive ZIP codes in all of the United States of America."_
How can we guess what this condo sold for? Let's look at 3 methods:
1. Heuristics
2. Descriptive Statistics
3. Predictive Model
Follow Along
1. Heuristics
Heuristics are "rules of thumb" that people use to make decisions and judgments. The video participants discussed their heuristics:
**Participant 1**, Chinwe, is a real estate amateur. She rents her apartment in New York City. Her first guess was $8 million, and her final guess was $15 million. [She said](https://youtu.be/JQCctBOgH9I?t=465), _"People just go crazy for numbers like 1852. You say **'pre-war'** to anyone in New York City, they will literally sell a kidney. They will just give you their children."_
**Participant 3**, Pam, is an expert. She runs a real estate blog. Her first guess was $1.55 million, and her final guess was $2.2 million. [She explained](https://youtu.be/JQCctBOgH9I?t=280) her first guess: _"I went with a number that I think is kind of the going rate in the location, and that's **a thousand bucks a square foot.**"_
**Participant 2**, Mubeen, is between the others in his expertise level. He owns his apartment in New York City. His first guess was $1.7 million, and his final guess was also $2.2 million.
2. Descriptive Statistics
We can use data to try to do better than these heuristics. How much have other Tribeca condos sold for?
Let's answer this question with a relevant dataset, containing most of the single residential unit, elevator apartment condos sold in Tribeca, from January through April 2019.
We can get descriptive statistics for the dataset's `SALE_PRICE` column.
How many condo sales are in this dataset? What was the average sale price? The median? Minimum? Maximum?
###Code
import pandas as pd
df = pd.read_csv(DATA_PATH+'condos/tribeca.csv')
pd.options.display.float_format = '{:,.0f}'.format
df['SALE_PRICE'].describe()
###Output
_____no_output_____
###Markdown
On average, condos in Tribeca have sold for \$3.9 million. So that could be a reasonable first guess.
In fact, here's the interesting thing: **we could use this one number as a "prediction", if we didn't have any data except for sales price...**
Imagine we didn't have any other information about condos; what would you tell somebody if you had some sales prices like this, but didn't have any of these other columns? If somebody asked you, "How much do you think a condo in Tribeca costs?" you could say, "Well, I've got 90 sales prices here, and I see that on average they cost \$3.9 million."
So we do this all the time in the real world. We use descriptive statistics for prediction. And that's not wrong or bad, in fact **that's where you should start. This is called the _mean baseline_.**
**Baseline** is an overloaded term, with multiple meanings:
1. [**The score you'd get by guessing**](https://twitter.com/koehrsen_will/status/1088863527778111488)
2. [**Fast, first models that beat guessing**](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)
3. **Complete, tuned "simpler" model** (Simpler mathematically, computationally. Or less work for you, the data scientist.)
4. **Minimum performance that "matters"** to go to production and benefit your employer and the people you serve.
5. **Human-level performance**
Baseline type 1 is what we're doing now. (Linear models can be great for 2, 3, 4, and [sometimes even 5 too!](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.188.5825))
---
Let's go back to our mean baseline for Tribeca condos. If we just guessed that every Tribeca condo sold for \$3.9 million, how far off would we be, on average?
###Code
guess = df['SALE_PRICE'].mean()
errors = guess - df['SALE_PRICE']
mean_absolute_error = errors.abs().mean()
print(f'If we just guessed every Tribeca condo sold for ${guess:,.0f},')
print(f'we would be off by ${mean_absolute_error:,.0f} on average.')
###Output
If we just guessed every Tribeca condo sold for $3,928,736,
we would be off by $2,783,380 on average.
###Markdown
That sounds like a lot of error! But fortunately, we can do better than this first baseline: we can use more data. For example, the condo's size.
Could sale price be **dependent** on square feet? To explore this relationship, let's make a scatterplot, using [Plotly Express](https://plot.ly/python/plotly-express/):
###Code
import plotly.express as px
px.scatter(df, x='GROSS_SQUARE_FEET', y='SALE_PRICE')
###Output
_____no_output_____
###Markdown
3. Predictive Model
To go from a _descriptive_ [scatterplot](https://www.plotly.express/plotly_express/plotly_express.scatter) to a _predictive_ regression, just add a _line of best fit:_
###Code
# trendline='ols' draws an Ordinary Least Squares regression line
px.scatter(df, x='GROSS_SQUARE_FEET', y='SALE_PRICE', trendline='ols')
###Output
_____no_output_____
###Markdown
Roll over the Plotly regression line to see its equation and predictions for sale price, dependent on gross square feet.
Linear Regression helps us **interpolate.** For example, in this dataset, there's a gap between 4016 sq ft and 4663 sq ft. There were no 4300 sq ft condos sold, but what price would you predict, using this line of best fit?
Linear Regression also helps us **extrapolate.** For example, in this dataset, there were no 6000 sq ft condos sold, but what price would you predict?
The line of best fit tries to summarize the relationship between our x variable and y variable in a way that enables us to use the equation for that line to make predictions.
**Synonyms for "y variable"**
- **Dependent Variable**
- Response Variable
- Outcome Variable
- Predicted Variable
- Measured Variable
- Explained Variable
- **Label**
- **Target**
**Synonyms for "x variable"**
- **Independent Variable**
- Explanatory Variable
- Regressor
- Covariate
- Correlate
- **Feature**
The bolded terminology will be used most often by your instructors this unit.
Challenge
In your assignment, you will practice how to begin with baselines for regression, using a new dataset!
Use scikit-learn to fit a linear regression
Overview
We can use visualization libraries to do simple linear regression ("simple" means there's only one independent variable). But during this unit, we'll usually use the scikit-learn library for predictive models, and we'll usually have multiple independent variables.
In [_Python Data Science Handbook,_ Chapter 5.2: Introducing Scikit-Learn](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.html#Basics-of-the-API), Jake VanderPlas explains **how to structure your data** for scikit-learn:
> The best way to think about data within Scikit-Learn is in terms of tables of data.
>
> The features matrix is often stored in a variable named `X`. The features matrix is assumed to be two-dimensional, with shape `[n_samples, n_features]`, and is most often contained in a NumPy array or a Pandas `DataFrame`.
>
> We also generally work with a label or target array, which by convention we will usually call `y`. The target array is usually one dimensional, with length `n_samples`, and is generally contained in a NumPy array or Pandas `Series`. The target array may have continuous numerical values, or discrete classes/labels.
>
> The target array is the quantity we want to _predict from the data:_ in statistical terms, it is the dependent variable.
VanderPlas also lists a **5 step process** for scikit-learn's "Estimator API":
> Every machine learning algorithm in Scikit-Learn is implemented via the Estimator API, which provides a consistent interface for a wide range of machine learning applications.
>
> Most commonly, the steps in using the Scikit-Learn estimator API are as follows:
>
> 1. Choose a class of model by importing the appropriate estimator class from Scikit-Learn.
> 2. Choose model hyperparameters by instantiating this class with desired values.
> 3. Arrange data into a features matrix and target vector following the discussion above.
> 4. Fit the model to your data by calling the `fit()` method of the model instance.
> 5. Apply the Model to new data: For supervised learning, often we predict labels for unknown data using the `predict()` method.
Let's try it!
Follow Along
Follow the 5 step process, and refer to [Scikit-Learn LinearRegression documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).
###Code
# 1. Import the appropriate estimator class from Scikit-Learn
from sklearn.linear_model import LinearRegression
# 2. Instantiate this class
model = LinearRegression()
# 3. Arrange X features matrix & y target vector
type(df[['GROSS_SQUARE_FEET']])
df[['GROSS_SQUARE_FEET']].shape
df[['GROSS_SQUARE_FEET']]
type(df['SALE_PRICE'])
df['SALE_PRICE'].shape
df['SALE_PRICE']
features = ['GROSS_SQUARE_FEET']
target = 'SALE_PRICE'
X_train = df[features]
y_train = df[target]
# 4. Fit the model
model.fit(X_train, y_train)
# 5. Apply the model to new data
square_feet = 1497
X_test = [[square_feet]]
y_pred = model.predict(X_test)
y_pred
###Output
_____no_output_____
###Markdown
So, we used scikit-learn to fit a linear regression, and predicted the sales price for a 1,497 square foot Tribeca condo, like the one from the video.
Now, what did that condo actually sell for? ___The final answer is revealed in [the video at 12:28](https://youtu.be/JQCctBOgH9I?t=748)!___
###Code
y_test = [2800000]
###Output
_____no_output_____
###Markdown
What was the error for our prediction, versus the video participants?
Let's use [scikit-learn's mean absolute error function](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html).
###Code
chinwe_final_guess = [15000000]
mubeen_final_guess = [2200000]
pam_final_guess = [2200000]
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(y_test, y_pred)
print(f"Our model's error: ${mae:,.0f}")
mae = mean_absolute_error(y_test, chinwe_final_guess)
print(f"Chinwe's error: ${mae:,.0f}")
mae = mean_absolute_error(y_test, mubeen_final_guess)
print(f"Mubeen's error: ${mae:,.0f}")
mae = mean_absolute_error(y_test, pam_final_guess)
print(f"Pam's error: ${mae:,.0f}")
###Output
Pam's error: $600,000
###Markdown
This [diagram](https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/tutorial/text_analytics/general_concepts.html#supervised-learning-model-fit-x-y) shows what we just did! Don't worry about understanding it all now. But can you start to match some of these boxes/arrows to the corresponding lines of code from above?
Here's [another diagram](https://livebook.manning.com/book/deep-learning-with-python/chapter-1/), which shows how machine learning is a "new programming paradigm":
> A machine learning system is "trained" rather than explicitly programmed. It is presented with many "examples" relevant to a task, and it finds statistical structure in these examples which eventually allows the system to come up with rules for automating the task. - [Francois Chollet](https://livebook.manning.com/book/deep-learning-with-python/chapter-1/)
Wait, are we saying that *linear regression* could be considered a *machine learning algorithm*? Maybe it depends? What do you think? We'll discuss throughout this unit.
Challenge
In your assignment, you will use scikit-learn for linear regression with one feature. For a stretch goal, you can do linear regression with two or more features.
Explain the coefficients from a linear regression
Overview
What pattern did the model "learn", about the relationship between square feet & price?
Follow Along
To help answer this question, we'll look at the `coef_` and `intercept_` attributes of the `LinearRegression` object. (Again, [here's the documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).)
###Code
model.coef_
model.intercept_
# Equations for a line
m = model.coef_[0]
b = model.intercept_
print('y = mx + b')
print(f'y = {m:,.0f}*x + {b:,.0f}')
print(f'price = {m:,.0f}*square_feet + {b:,.0f}')
###Output
y = mx + b
y = 3,076*x + -1,505,364
price = 3,076*square_feet + -1,505,364
###Markdown
We can repeatedly apply the model to new/unknown data, and explain the coefficient:
###Code
def predict(square_feet):
y_pred = model.predict([[square_feet]])
estimate = y_pred[0]
coefficient = model.coef_[0]
result = f'${estimate:,.0f} estimated price for {square_feet:,.0f} square foot condo in Tribeca.'
explanation = f'In this linear regression, each additional square foot adds ${coefficient:,.0f}.'
return result + '\n' + explanation
print(predict(1497))
# What does the model predict for low square footage?
print(predict(500))
# For high square footage?
print(predict(10000))
# These values are outside the min & max of the data the model was fit on,
# but predictive models assume future data will have similar distribution.
df['SALE_PRICE'].describe()
df['GROSS_SQUARE_FEET'].describe()
# Re-run the prediction function interactively
from ipywidgets import interact
interact(predict, square_feet=(630,5000)); # (min, max)
# Single brackets with string column name
# selects that column as a pandas Series (1D + index)
df['SALE_PRICE']
# "Double" brackets (list of strings with column name(s))
# selects the column(s) as a pandas DataFrame (2D + index)
df[['ADDRESS', 'NEIGHBORHOOD', 'ZIP_CODE']]
df[['GROSS_SQUARE_FEET']]
###Output
_____no_output_____ |
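###Markdown
As a hedged sketch of the stretch goal (linear regression with two or more features): the code below assumes the dataframe contains a second numeric column. `'YEAR_BUILT'` is used purely as a stand-in and may need to be replaced by a column that actually exists in `tribeca.csv`.
###Code
# Stretch goal sketch: multiple regression (second feature name is an assumption)
features = ['GROSS_SQUARE_FEET', 'YEAR_BUILT']  # swap 'YEAR_BUILT' for any numeric column present
target = 'SALE_PRICE'

model2 = LinearRegression()
model2.fit(df[features], df[target])

print(f'Intercept: {model2.intercept_:,.0f}')
for name, coef in zip(features, model2.coef_):
    print(f'{name}: {coef:,.2f} per unit')
###Output
_____no_output_____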
Notebooks/Aircraft Classification/Classification Case 2.ipynb | ###Markdown
Classification__Note:__ - Running this notebook requires extracting the audio features and processing the states via _Feature Extraction.ipynb_Aircraft/nonaircraft classification is done using the __Classification__ class. This class expects the root directory of the dataset and, optionally, non-default parameters for spectrum-, feature-, and state settings. This notebook classifies the noisy Mel spectra, then evaluates performance in the mismatched conditions caused by reducing MAV ego-noise.The network used is a __convolutional neural network__ with two inputs: - 3 convolutional layers and 2 fully-connected layers for input 1 (spectra)- 2 fully-connected layers for input 2 (states)Network configuration and training settings are passed to the class through dictionaries containing the appropriate _torch.nn_ attributes.
###Code
import os
import aircraft_detector.aircraft_classification.classification as cla
# assign root directory
root_directory = os.path.join(os.pardir, os.pardir, os.pardir, 'Data')
# load the settings:
# many of these are default settings, but are loaded explicitly for transparency
# spectrum settings of previously extracted set
spectrum_settings = {
'feature': 'Mel', # default = 'Stft'
'fft_sample_rate': 44100, # default
'stft_window_length': 1024, # default
'stft_hop_length': 512, # default
'frequency_bins': 60, # default
}
# feature settings: used to split up the (5 second/431 frames) spectra
feature_settings = {
'segment_frames': 60, # frames per segment (default: 60), approx. 70ms
'segment_hop': 30, # hop length per segment (default: 30), approx. 35ms
'frequency_smoothing': True, # smooth each spectrum in frequency (default: True)
'use_delta': True, # extract time derivate of spectrum as second channel (default: True)
}
# classification settings: how to load the dataset
classification_settings = {
'binary': True, # do binary classification (default: True)
'aircraft': ['airplane', 'helicopter'], # designated aircraft classes (default if binary)
'nonaircraft': ['engine', 'train', 'wind'], # nonaircraft classes (default if binary)
'balanced_split': True, # use a balanced split (default if binary)
'balance_ratios': [0.2, 0.2, 0.6] # balance ratios of 'larger' class (nonaircraft),
# overflow in ratios is automatically corrected
}
# load class with settings
classifier = cla.AircraftClassifier(
root_directory,
spectrum_settings=spectrum_settings,
feature_settings=feature_settings,
classification_settings=classification_settings,
implicit_denoise=True # denoise and classify in one
)
classifier.verbose = True
classifier.super_verbose = False # print every epoch (default: False)
# split noisy spectra into 60x60 features, export them
classifier.split_features(augmentations=[], noise_set='Mixed', noise_ratio=1.0) # no augmentation
# load features
df = classifier.load_datasets() # dataframe listing files, categories and labels
"""
Set the model configuration (list of layers): the first entry
{'layer_type': 'Conv2d', 'out_channels': 16, 'kernel_size': (5, 5), 'dilation': (2, 2)}
is equivalent to
torch.nn.Conv2d(in_channels=2, out_channels=16, kernel_size=(5, 5), dilation=(2, 2));
input_size is derived from the dataset.
From thereon, in_channels or in_features is derived from the previous layer.
'Linear_2' indicates a Linear layer belonging to the second input (states).
By default, a linear output layer is added at the end:
torch.nn.Linear(in_features=32, out_features=1).
"""
# 'location' in BatchNorm2d indicates if it should be before or after ReLU (default: before)
bn_location = 'before'
config = [
{'layer_type': 'Conv2d', 'out_channels': 16, 'kernel_size': (5, 5), 'dilation': (2, 2)},
{'layer_type': 'BatchNorm2d', 'location': bn_location, 'momentum': 0.1},
{'layer_type': 'MaxPool2d', 'kernel_size': (2, 2)},
{'layer_type': 'Conv2d', 'out_channels': 16, 'kernel_size': (5, 5), 'dilation': (2, 2)},
{'layer_type': 'BatchNorm2d', 'location': bn_location, 'momentum': 0.1},
{'layer_type': 'MaxPool2d', 'kernel_size': (2, 2)},
{'layer_type': 'Conv2d', 'out_channels': 32, 'kernel_size': (5, 5)},
{'layer_type': 'BatchNorm2d', 'location': bn_location, 'momentum': 0.1},
{'layer_type': 'Linear_2', 'out_features': 200},
{'layer_type': 'Dropout', 'p': 0.2},
{'layer_type': 'Linear_2', 'out_features': 200},
{'layer_type': 'Dropout', 'p': 0.5},
{'layer_type': 'Linear', 'out_features': 128},
{'layer_type': 'Dropout', 'p': 0.5},
{'layer_type': 'Linear', 'out_features': 32},
{'layer_type': 'Dropout', 'p': 0.5},
]
classifier.set_net_configuration(config)
# set the training configuration
# equivalent to torch.optimizer.Adamw(lr=0.0001, weight_decay=0.01, amsgrad=False)
optimizer = {'optimizer': 'AdamW', 'lr': 0.0001, 'weight_decay': 0.01, 'amsgrad': False}
train_settings = {
'epochs': 100,
'es_patience': 25, # early stopping patience
'batch_size': 256,
'optimizer': optimizer,
}
from aircraft_detector.utils.plot_helper import plot_training_history
# train model
model, train_losses, loss_history = classifier.train_network(train_settings)
train_loss, val_loss = train_losses
# plot training history
plot_training_history(loss_history)
# test model
test_loss = classifier.test_network(model)
# get accuracy
df_out = classifier.classify_dataset(model, 'Test', df) # adds 'Predicted' to dataframe
df_log = classifier.log_accuracy(df_out, index_name='1.00') # log accuracy
#accuracies = classifier.print_accuracy(df_out)
#print("Segment-based accuracy: %.3f%%." % accuracies[0]) # should be around 95%
#print("Recording-based accuracy: %.3f%%." % accuracies[1]) # should be 97.5%
print("Training loss: %.6f, Validation loss: %.6f, Test loss: %.6f."
% (train_loss, val_loss, test_loss))
df_log
# plot some example predictions
fig = classifier.plot_predictions(df_out, plot_title='prediction')
# save the model
#dir_network = classifier.save_network(model, test_loss)
# load the model
#model, dir_model = classifier.load_network()
from aircraft_detector.utils.plot_helper import plot_roc
# evaluate in mismatched conditions
augmentation = 'No'
colors = ['darkorchid', 'darkolivegreen', 'steelblue', 'goldenrod'] # looks better than rgby
# plot matching ROC
fig_title = "ROC Curve: %s augmentation, mixed evaluation" % augmentation.lower()
plt_label = "Noise ratio = 1.00"
fig = plot_roc(
df_out['Label'], df_out['Predicted'], title=fig_title, label=plt_label
)
# plot mismatched ROC
for i, noise_ratio in enumerate([0.75, 0.50, 0.25]):
# split evaluation set into 60x60 features
classifier.split_features('Test', noise_set='Mixed', noise_ratio=noise_ratio)
# evaluate on segments (noisy)
df_mismatched = classifier.classify_mismatched_test_set(model)
# add to log
df_log = classifier.log_accuracy(df_mismatched, df_log, '%.2f' % noise_ratio)
# print accuracies
#print("Accuracy for 'Mixed' with noise ratio = %.2f:" % noise_ratio)
#accuracies = classifier.print_accuracy(df_noisy)
# add to ROC plot
plt_label = "Noise ratio = %.2f" % noise_ratio
plot_roc(
df_mismatched['Label'],
df_mismatched['Predicted'],
fig=fig,
label=plt_label,
color=colors[i]
)
# save the plot
#fig_dest = 'ROC_%s_%s.eps' % (augmentation.replace(' ', '').lower(), noise_set.lower())
#fig.savefig(fig_dest, format='eps')
# save the results
#df_log.to_csv('%s_results.csv' % augmentation.replace(' ', '').lower())
df_log
###Output
Split 40 files (5 categories) into 499 files
Split 40 files (5 categories) into 499 files
Split 40 files (5 categories) into 499 files
why_heatmap.ipynb | ###Markdown
###Markdown
The tutorial generally aims to keep things as simple as possible. The intention is to be understandable to first-time Python users. Using the rather complex code in *heatmap.ipynb* to generate heatmaps seems to contradict this approach. This notebook explains what that code does and why the simple alternatives (e.g. seaborn and pcolormesh) aren't 100% fit for the task.
###Code
from streakimage import StreakImage
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
# We import heatmap.ipynb to easily plot heatmaps
import import_ipynb
from heatmap import heatmap
path_to_bg = "files/example_bg ST4 g20 20x556ms.img"
bg = StreakImage(path_to_bg)
path_to_img = "files/example_streak-image ST4 g20 20x556ms.img"
image = StreakImage(path_to_img, bg=bg)
###Output
_____no_output_____
###Markdown
Data plotted with *seaborn*
###Code
sns.heatmap(image.data)
###Output
_____no_output_____
###Markdown
Data plotted with *pcolormesh* **without** explicitly passing the index and columns.
###Code
plt.pcolormesh(image.data)
###Output
_____no_output_____
###Markdown
Data plotted with *pcolormesh* **with** explicitly passing the index and columns.
###Code
plt.pcolormesh(image.data.columns, image.data.index, image.data.values)
###Output
<ipython-input-6-44569ec3b509>:1: MatplotlibDeprecationWarning: shading='flat' when X and Y have the same dimensions as C is deprecated since 3.3. Either specify the corners of the quadrilaterals with X and Y, or pass shading='auto', 'nearest' or 'gouraud', or set rcParams['pcolor.shading']. This will become an error two minor releases later.
plt.pcolormesh(image.data.columns, image.data.index, image.data.values)
###Markdown
*heatmap* vs *pcolormesh* with minimal data set
###Code
#generate test data
indeces = [1,2,3,4]
columns = [1,2,3,4]
small_data = pd.DataFrame(np.random.randint(0,10, size=(4,4)), index=indeces, columns=columns)
fig, axes = plt.subplots(1,2, figsize=(10,6))
axes[0].pcolormesh(small_data.columns, small_data.index, small_data.values)
heatmap(small_data, axes[1])
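# A possible sketch of the underlying trick: pcolormesh also centers the cells on
# the tick labels if it is given the N+1 cell *edges* instead of the N centers,
# which is presumably what heatmap() computes internally.
# (Assumes unit spacing between the integer index/column values.)
cols = np.asarray(small_data.columns, dtype=float)
rows = np.asarray(small_data.index, dtype=float)
x_edges = np.append(cols - 0.5, cols[-1] + 0.5)
y_edges = np.append(rows - 0.5, rows[-1] + 0.5)
plt.figure()
plt.pcolormesh(x_edges, y_edges, small_data.values)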
###Output
<ipython-input-10-6dc15736b89d>:8: MatplotlibDeprecationWarning: shading='flat' when X and Y have the same dimensions as C is deprecated since 3.3. Either specify the corners of the quadrilaterals with X and Y, or pass shading='auto', 'nearest' or 'gouraud', or set rcParams['pcolor.shading']. This will become an error two minor releases later.
axes[0].pcolormesh(small_data.columns, small_data.index, small_data.values)
###Markdown
Comparing the outputs of the simple pcolormesh function (top left), the heatmap function (top right) and the underlying values (bottom) clearly shows that pcolormesh places the axis labels at one corner of the corresponding coloured field, whereas *heatmap* places them (correctly) at its center.
###Code
small_data
###Output
_____no_output_____
###Markdown
For small numbers of integer values as index and columns (like in this example) seaborn actually works pretty well.
###Code
sns.heatmap(small_data)
###Output
_____no_output_____ |
week1/1- mnist_classification_dense_tensorflow.ipynb | ###Markdown
Imports
###Code
# Change tensorflow version to 1.x
%tensorflow_version 1.x
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
#Read MNIST Data
mnist_data = input_data.read_data_sets('MNIST_data/',one_hot=True)
###Output
WARNING:tensorflow:From <ipython-input-4-5391ee05a7ac>:1: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/base.py:252: _internal_retry.<locals>.wrap.<locals>.wrapped_fn (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please use urllib or similar directly.
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
###Markdown
Dataset
###Code
import matplotlib.pyplot as plt
import numpy as np
import math
inp_batch, gt_batch = mnist_data.train.next_batch(10)
x,y = inp_batch[0], gt_batch[0]
#Checking one image and one label shapes
print(x.shape, y.shape)
#Checking a batch of images and a batch of labels shapes
print(inp_batch.shape, gt_batch.shape)
#Formatting images to matrix, from vector
def imformat(x):
horlen = int(math.sqrt(len(x)))
verlen = horlen
x_imformat = x.reshape((horlen,verlen))
return x_imformat
x_imformat = imformat(x)
plt.imshow(x_imformat,cmap = 'gray')
print(x.max(),x.min())
print(np.amax(x),np.amin(x))
###Output
1.0 0.0
1.0 0.0
###Markdown
Network
###Code
#Definin hyperparameters
batch_num = 50
input_shape = 784
label_shape = 10
lr = 0.003
layer_1_neurons = 200
layer_2_neurons = 80
layer_3_neurons = 10
# Placeholders are the things that we FEED to our tensorflow graph when
# we run our graph
inp = tf.placeholder(dtype = tf.float32 , shape = (None,input_shape))
lab = tf.placeholder(dtype = tf.float32, shape = (None, label_shape))
# We define our variables that we will use in our graph.
# Think of this like we define some nodes on the graph, but we didnt define the edges yet
W1 = tf.Variable(tf.random_normal(shape = [input_shape, layer_1_neurons]))
b1 = tf.Variable(tf.random_normal(shape = [layer_1_neurons]))
W2 = tf.Variable(tf.random_normal(shape = [layer_1_neurons, layer_2_neurons]))
b2 = tf.Variable(tf.random_normal(shape = [layer_2_neurons]))
W3 = tf.Variable(tf.random_normal(shape = [layer_2_neurons, layer_3_neurons]))
b3 = tf.Variable(tf.random_normal(shape = [layer_3_neurons]))
# Here we finish defining everything in our computational graph
y1 = tf.nn.sigmoid(tf.matmul(inp,W1) + b1)
y2 = tf.nn.sigmoid(tf.matmul(y1,W2) + b2)
y3 = tf.nn.sigmoid(tf.matmul(y2,W3) + b3)
pred = y3
# We need loss in our comp graph to optimize it
loss = tf.nn.softmax_cross_entropy_with_logits_v2(lab,pred)
# We need tstep in our comp graph to obtain the gradients
tstep = tf.train.AdamOptimizer(lr).minimize(loss)
#if this is an interactive session, I won't be needing python contexts after.
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
# Our training loop
itnum = 1000
epnum = 25
for epoch in range(epnum):
aggloss = 0
for itr in range(1,itnum):
xbatch,ybatch = mnist_data.train.next_batch(batch_num)
# I run my computational graph to obtain LOSS and TSTEP objects residing in my graph
# I assign the obtained values to itrloss variable and _ variable (i will not use _ variable)
# I feed my graph the INP and LAB objects. inp object is xbatch here, lab object is ybatch here
itrloss, _ = sess.run([loss,tstep], feed_dict = {inp:xbatch, lab:ybatch})
aggloss = aggloss + np.mean(itrloss)
print(epoch,aggloss/itnum)
#Checking accuracy
acc = 0
sample_size = 5000
for _ in range(sample_size):
xtest, ytest = mnist_data.test.next_batch(50)
# I run my graph to obtain my prediction this time. Same things apply as in the previous cell.
  testpred = sess.run(pred, feed_dict={inp:xtest, lab:ytest})
  # compare per-sample argmax of predictions and labels (class axis = 1)
  acc = acc + np.mean(np.argmax(ytest, axis=1) == np.argmax(testpred, axis=1))
acc = acc/sample_size
print(acc)
###Output
0.4726
|
notebooks/1.Hello, TensorFlow!.ipynb | ###Markdown
1. Hello, TensorFlow! 3x4 example
###Code
import tensorflow as tf
a = tf.placeholder('float')
b = tf.placeholder('float')
y = tf.mul(a, b)
sess = tf.Session()
print(sess.run(y, feed_dict={a: 3, b: 4}))
###Output
12.0
|
DataItGirls Colab/20180724_RegexOne.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/YoungestSalon/TIL/blob/master/20180724_RegexOne.ipynb) Assignment overview Required: work through as many stages as possible on [this site](https://regexone.com/lesson/introduction_abcs), and for each stage note down the regular expression you used and share it in the assignment folder. Reference: [Regular expression cheat sheet](http://www.cbs.dtu.dk/courses/27610/regular-expressions-cheat-sheet-v2.pdf)--- Lesson 1~15 Exercise 1: Matching Characters
###Code
abc
###Output
_____no_output_____
###Markdown
Exercise 1ยฝ: Matching Digits
###Code
123
###Output
_____no_output_____
###Markdown
Exercise 2: Matching With Wildcards
###Code
.
###Output
_____no_output_____
###Markdown
Exercise 3: Matching Characters
###Code
[cmf]an
###Output
_____no_output_____
###Markdown
Exercise 4: Excluding Characters
###Code
[^b]og
###Output
_____no_output_____
###Markdown
Exercise 5: Matching Character Ranges
###Code
[ABC]
###Output
_____no_output_____
###Markdown
Exercise 6: Matching Repeated Characters
###Code
wazz
###Output
_____no_output_____
###Markdown
Exercise 7: Matching Repeated Characters
###Code
aa
###Output
_____no_output_____
###Markdown
Exercise 8: Matching Optional Characters
###Code
\d
###Output
_____no_output_____
###Markdown
Exercise 9: Matching Whitespaces
###Code
\d.\s
###Output
_____no_output_____
###Markdown
Exercise 10: Matching Lines
###Code
^Mission
###Output
_____no_output_____
###Markdown
Exercise 11: Matching Groups
###Code
^(file_\S+).pdf$
###Output
_____no_output_____
###Markdown
Exercise 12: Matching Nested Groups
###Code
(\S{3}\s(\d{4}))
###Output
_____no_output_____
###Markdown
Exercise 13: Matching Nested Groups
###Code
(\d{4})x(\d{3,4})
###Output
_____no_output_____
###Markdown
Exercise 14: Matching Conditional Text
###Code
I love (cats|dogs)
###Output
_____no_output_____
###Markdown
Exercise 15: Matching Other Special Characters
###Code
^The
###Output
_____no_output_____
###Markdown
--- Problem 1~8 Exercise 1: Matching Numbers
###Code
(\d|-)*(\d)$
###Output
_____no_output_____
###Markdown
Exercise 2: Matching Phone Numbers
###Code
(\d{3})
###Output
_____no_output_____
###Markdown
Exercise 3: Matching Emails
###Code
([a-z.?a-z]*)\+?[a-z]*@
###Output
_____no_output_____
###Markdown
Exercise 4: Capturing HTML Tags
###Code
</((a|div))>
###Output
_____no_output_____
###Markdown
Exercise 5: Capturing Filename Data
###Code
(\S*).(jpg|png|gif)$
###Output
_____no_output_____
###Markdown
Exercise 6: Matching Lines
###Code
\s*(\D*)
###Output
_____no_output_____
###Markdown
Exercise 7: Extracting Data From Log Entries
###Code
\(\s(1553)\):\s+\D{2}\s{1}\D{6}.\D{4}.(\D{8})\((\D{13}).(\d*)\)
###Output
_____no_output_____
###Markdown
Exercise 8: Extracting Data From URLs
###Code
((\D*)://(\S*):(\d*)|(\D*)://([a-z.?a-z]*))
###Output
_____no_output_____ |
notebooks/DevelopingAnalyzeModule.ipynb | ###Markdown
Notebook for developing functions in analyze.py
###Code
# figures.py imports
from __future__ import division
#from cStringIO import StringIO
import datetime
import glob
import os
import arrow
from dateutil import tz
import matplotlib.dates as mdates
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import netCDF4 as nc
import numpy as np
import pandas as pd
import requests
from scipy import interpolate as interp
from salishsea_tools import (
nc_tools,
viz_tools,
stormtools,
tidetools,
)
from salishsea_tools.nowcast import figures
#from salishsea_tools.nowcast import analyze
from salishsea_tools.nowcast import residuals
%matplotlib inline
t_orig=datetime.datetime(2015, 1, 22); t_final=datetime.datetime(2015, 1, 29)
bathy = nc.Dataset('/data/nsoontie/MEOPAR/NEMO-forcing/grid/bathy_meter_SalishSea2.nc')
###Output
_____no_output_____
###Markdown
Constants
###Code
paths = {'nowcast': '/data/dlatorne/MEOPAR/SalishSea/nowcast/',
'forecast': '/ocean/sallen/allen/research/MEOPAR/SalishSea/forecast/',
'forecast2': '/ocean/sallen/allen/research/MEOPAR/SalishSea/forecast2/'}
colours = {'nowcast': 'DodgerBlue',
'forecast': 'ForestGreen',
'forecast2': 'MediumVioletRed',
'observed': 'Indigo',
'predicted': 'ForestGreen',
'model': 'blue',
'residual': 'DimGray'}
###Output
_____no_output_____
###Markdown
Functions in module
###Code
def create_path(mode, t_orig, file_part):
""" Creates a path to a file associated with a simulation for date t_orig.
E.g. create_path('nowcast',datatime.datetime(2015,1,1), 'SalishSea_1h*grid_T.nc') gives
/data/dlatorne/MEOPAR/SalishSea/nowcast/01jan15/SalishSea_1h_20150101_20150101_grid_T.nc
:arg mode: Mode of results - nowcast, forecast, forecast2.
:type mode: string
:arg t_orig: The simulation start date.
:type t_orig: datetime object
:arg file_part: Identifier for type of file. E.g. SalishSea_1h*grif_T.nc or ssh*.txt
:type grid: string
:returns: filename, run_date
filename is the full path of the file or an empty list if the file does not exist.
run_date is a datetime object that represents the date the simulation ran
"""
run_date = t_orig
if mode == 'nowcast':
results_home = paths['nowcast']
elif mode == 'forecast':
results_home = paths['forecast']
run_date = run_date + datetime.timedelta(days=-1)
elif mode == 'forecast2':
results_home = paths['forecast2']
run_date = run_date + datetime.timedelta(days=-2)
results_dir = os.path.join(results_home,
run_date.strftime('%d%b%y').lower())
filename = glob.glob(os.path.join(results_dir, file_part))
try:
filename = filename[-1]
except IndexError:
pass
return filename, run_date
create_path('forecast2', t_orig, 'SalishSea*.nc')
def verified_runs(t_orig):
""" Compiles a list of run types (nowcast, forecast, and/or forecast 2)
that have been verified as complete by checking if their corresponding
.nc files for that day (generated by create_path) exist.
:arg t_orig:
:type t_orig: datetime object
:returns: runs_list, list strings representing the runs that completed
"""
runs_list = []
for mode in ['nowcast', 'forecast', 'forecast2']:
files, run_date = create_path(mode, t_orig, 'SalishSea*grid_T.nc')
if files:
runs_list.append(mode)
return runs_list
def truncate_data(data,time, sdt, edt):
""" Truncates data for a desired time range: sdt <= time <= edt
data and time must be numpy arrays.
sdt, edt, and times in time must all have a timezone or all be naive.
:arg data: the data to be truncated
:type data: numpy array
:arg time: array of times associated with data
:type time: numpy array
:arg sdt: the start time of the tuncation
:type sdt: datetime object
:arg edt: the end time of the truncation
:type edt: datetime object
:returns: data_trun, time_trun, the truncated data and time arrays
"""
inds = np.where(np.logical_and(time <=edt, time >=sdt))
return data[inds], time[inds]
def calculate_residual(ssh, time_ssh, tides, time_tides):
""" Calculates the residual of the model sea surface height or
observed water levels with respect to the predicted tides.
:arg ssh: Sea surface height (observed or modelled).
:type ssh: numpy array
:arg time_ssh: Time component for sea surface height (observed or modelled)
:type time_ssh: numpy array
:arg tides: Predicted tides.
:type tides: dataFrame object
:arg time_tides: Time component for predicted tides.
:type time_tides: dataFrame object
:returns: res, the residual
"""
tides_interp = figures.interp_to_model_time(time_ssh, tides, time_tides)
res = ssh - tides_interp
return res
def plot_residual_forcing(ax, runs_list, t_orig):
""" Plots the observed water level residual at Neah Bay against
forced residuals from existing ssh*.txt files for Neah Bay.
Function may produce none, any, or all (nowcast, forecast, forecast 2)
forced residuals depending on availability for specified date (runs_list).
:arg ax: The axis where the residuals are plotted.
:type ax: axis object
:arg runs_list: Runs that are verified as complete.
:type runs_list: list
:arg t_orig: Date being considered.
:type t_orig: datetime object
"""
# truncation times
sdt = t_orig.replace(tzinfo=tz.tzutc())
edt = sdt + datetime.timedelta(days=1)
# retrieve observations, tides and residual
start_date = t_orig.strftime('%d-%b-%Y'); end_date = start_date
stn_no = figures.SITES['Neah Bay']['stn_no']
obs = figures.get_NOAA_wlevels(stn_no, start_date, end_date)
tides = figures.get_NOAA_tides(stn_no, start_date, end_date)
res_obs = calculate_residual(obs.wlev, obs.time, tides.pred, tides.time)
# truncate and plot
res_obs_trun, time_trun = truncate_data(np.array(res_obs),np.array(obs.time), sdt, edt)
ax.plot(time_trun, res_obs_trun, colours['observed'], label='observed',
linewidth=2.5)
# plot forcing for each simulation
for mode in runs_list:
filename_NB, run_date = create_path(mode, t_orig, 'ssh*.txt')
if filename_NB:
data = residuals._load_surge_data(filename_NB)
surge, dates = residuals._retrieve_surge(data, run_date)
surge_t, dates_t = truncate_data(np.array(surge),np.array(dates),sdt,edt)
ax.plot(dates_t, surge_t, label=mode, linewidth=2.5,
color=colours[mode])
ax.set_title('Comparison of observed and forced sea surface height residuals at Neah Bay:'
'{t_forcing:%d-%b-%Y}'.format(t_forcing=t_orig))
def plot_residual_model(axs, names, runs_list, grid_B, t_orig):
""" Plots the observed sea surface height residual against the
sea surface height model residual (calculate_residual) at
specified stations. Function may produce none, any, or all
(nowcast, forecast, forecast 2) model residuals depending on
availability for specified date (runs_list).
:arg ax: The axis where the residuals are plotted.
:type ax: list of axes
:arg names: Names of station.
:type names: list of names
:arg runs_list: Runs that have been verified as complete.
:type runs_list: list
:arg grid_B: Bathymetry dataset for the Salish Sea NEMO model.
:type grid_B: :class:`netCDF4.Dataset`
:arg t_orig: Date being considered.
:type t_orig: datetime object
"""
bathy, X, Y = tidetools.get_bathy_data(grid_B)
t_orig_obs = t_orig + datetime.timedelta(days=-1)
t_final_obs = t_orig + datetime.timedelta(days=1)
# truncation times
sdt = t_orig.replace(tzinfo=tz.tzutc())
edt = sdt + datetime.timedelta(days=1)
for ax, name in zip(axs, names):
lat = figures.SITES[name]['lat']; lon = figures.SITES[name]['lon']; msl = figures.SITES[name]['msl']
j, i = tidetools.find_closest_model_point(lon, lat, X, Y, bathy, allow_land=False)
ttide = figures.get_tides(name)
wlev_meas = figures.load_archived_observations(name, t_orig_obs.strftime('%d-%b-%Y'), t_final_obs.strftime('%d-%b-%Y'))
res_obs = calculate_residual(wlev_meas.wlev, wlev_meas.time, ttide.pred_all + msl, ttide.time)
# truncate and plot
res_obs_trun, time_obs_trun = truncate_data(np.array(res_obs), np.array(wlev_meas.time), sdt, edt)
ax.plot(time_obs_trun, res_obs_trun, color=colours['observed'], linewidth=2.5, label='observed')
for mode in runs_list:
filename, run_date = create_path(mode, t_orig, 'SalishSea_1h_*_grid_T.nc')
grid_T = nc.Dataset(filename)
ssh_loc = grid_T.variables['sossheig'][:, j, i]
t_start, t_final, t_model = figures.get_model_time_variables(grid_T)
res_mod = calculate_residual(ssh_loc, t_model, ttide.pred_8, ttide.time)
# truncate and plot
res_mod_trun, t_mod_trun = truncate_data(res_mod, t_model, sdt, edt)
ax.plot(t_mod_trun, res_mod_trun, label=mode, color=colours[mode], linewidth=2.5)
ax.set_title('Comparison of modelled sea surface height residuals at {station}: {t:%d-%b-%Y}'.format(station=name, t=t_orig))
def calculate_error(res_mod, time_mod, res_obs, time_obs):
""" Calculates the model or forcing residual error.
:arg res_mod: Residual for model ssh or NB surge data.
:type res_mod: numpy array
:arg time_mod: Time of model output.
:type time_mod: numpy array
:arg res_obs: Observed residual (archived or at Neah Bay)
:type res_obs: numpy array
:arg time_obs: Time corresponding to observed residual.
:type time_obs: numpy array
:return: error
"""
res_obs_interp = figures.interp_to_model_time(time_mod, res_obs, time_obs)
error = res_mod - res_obs_interp
return error
def calculate_error_model(names, runs_list, grid_B, t_orig):
""" Sets up the calculation for the model residual error.
:arg names: Names of station.
:type names: list of strings
:arg runs_list: Runs that have been verified as complete.
:type runs_list: list
:arg grid_B: Bathymetry dataset for the Salish Sea NEMO model.
:type grid_B: :class:`netCDF4.Dataset`
:arg t_orig: Date being considered.
:type t_orig: datetime object
:returns: error_mod_dict, t_mod_dict, t_orig_dict
"""
bathy, X, Y = tidetools.get_bathy_data(grid_B)
t_orig_obs = t_orig + datetime.timedelta(days=-1)
t_final_obs = t_orig + datetime.timedelta(days=1)
# truncation times
sdt = t_orig.replace(tzinfo=tz.tzutc())
edt = sdt + datetime.timedelta(days=1)
error_mod_dict = {}; t_mod_dict = {}; t_orig_dict = {}
for name in names:
error_mod_dict[name] = {}; t_mod_dict[name] = {}; t_orig_dict[name] = {}
lat = figures.SITES[name]['lat']; lon = figures.SITES[name]['lon']; msl = figures.SITES[name]['msl']
j, i = tidetools.find_closest_model_point(lon, lat, X, Y, bathy, allow_land=False)
ttide = figures.get_tides(name)
wlev_meas = figures.load_archived_observations(name, t_orig_obs.strftime('%d-%b-%Y'), t_final_obs.strftime('%d-%b-%Y'))
res_obs = calculate_residual(wlev_meas.wlev, wlev_meas.time, ttide.pred_all + msl, ttide.time)
for mode in runs_list:
filename, run_date = create_path(mode, t_orig, 'SalishSea_1h_*_grid_T.nc')
grid_T = nc.Dataset(filename)
ssh_loc = grid_T.variables['sossheig'][:, j, i]
t_start, t_final, t_model = figures.get_model_time_variables(grid_T)
res_mod = calculate_residual(ssh_loc, t_model, ttide.pred_8, ttide.time)
# truncate
res_mod_trun, t_mod_trun = truncate_data(res_mod, t_model, sdt, edt)
error_mod = calculate_error(res_mod_trun, t_mod_trun, res_obs, wlev_meas.time)
error_mod_dict[name][mode] = error_mod; t_mod_dict[name][mode] = t_mod_trun; t_orig_dict[name][mode] = t_orig
return error_mod_dict, t_mod_dict, t_orig_dict
def calculate_error_forcing(name, runs_list, t_orig):
""" Sets up the calculation for the forcing residual error.
:arg names: Name of station.
:type names: string
:arg runs_list: Runs that have been verified as complete.
:type runs_list: list
:arg t_orig: Date being considered.
:type t_orig: datetime object
:returns: error_frc_dict, t_frc_dict
"""
# truncation times
sdt = t_orig.replace(tzinfo=tz.tzutc())
edt = sdt + datetime.timedelta(days=1)
# retrieve observed residual
start_date = t_orig.strftime('%d-%b-%Y'); end_date = start_date
stn_no = figures.SITES['Neah Bay']['stn_no']
obs = figures.get_NOAA_wlevels(stn_no, start_date, end_date)
tides = figures.get_NOAA_tides(stn_no, start_date, end_date)
res_obs_NB = calculate_residual(obs.wlev, obs.time, tides.pred, tides.time)
# calculate forcing error
error_frc_dict = {}; t_frc_dict = {}; error_frc_dict[name] = {}; t_frc_dict[name] = {}
for mode in runs_list:
filename_NB, run_date = create_path(mode, t_orig, 'ssh*.txt')
if filename_NB:
data = residuals._load_surge_data(filename_NB)
surge, dates = residuals._retrieve_surge(data, run_date)
surge_t, dates_t = truncate_data(np.array(surge),np.array(dates), sdt, edt)
error_frc = calculate_error(surge_t, dates_t, res_obs_NB, obs.time)
error_frc_dict[name][mode] = error_frc; t_frc_dict[name][mode] = dates_t
return error_frc_dict, t_frc_dict
def plot_error_model(axs, names, runs_list, grid_B, t_orig):
""" Plots the model residual error.
:arg axs: The axis where the residual errors are plotted.
:type axs: list of axes
:arg names: Names of station.
:type names: list of strings
:arg runs_list: Runs that have been verified as complete.
:type runs_list: list of strings
:arg grid_B: Bathymetry dataset for the Salish Sea NEMO model.
:type grid_B: :class:`netCDF4.Dataset`
:arg t_orig: Date being considered.
:type t_orig: datetime object
"""
error_mod_dict, t_mod_dict, t_orig_dict = calculate_error_model(names, runs_list, grid_B, t_orig)
for ax, name in zip(axs, names):
ax.set_title('Comparison of modelled residual errors at {station}: {t:%d-%b-%Y}'.format(station=name, t=t_orig))
for mode in runs_list:
ax.plot(t_mod_dict[name][mode], error_mod_dict[name][mode], label=mode, color=colours[mode], linewidth=2.5)
def plot_error_forcing(ax, runs_list, t_orig):
""" Plots the forcing residual error.
:arg ax: The axis where the residual errors are plotted.
:type ax: axis object
:arg runs_list: Runs that have been verified as complete.
:type runs_list: list
:arg t_orig: Date being considered.
:type t_orig: datetime object
"""
name = 'Neah Bay'
error_frc_dict, t_frc_dict = calculate_error_forcing(name, runs_list, t_orig)
for mode in runs_list:
ax.plot(t_frc_dict[name][mode], error_frc_dict[name][mode], label=mode, color=colours[mode], linewidth=2.5)
ax.set_title('Comparison of observed and forced residual errors at Neah Bay: {t_forcing:%d-%b-%Y}'.format(t_forcing=t_orig))
def plot_residual_error_all(subject ,grid_B, t_orig, figsize=(20,16)):
""" Sets up and combines the plots produced by plot_residual_forcing
and plot_residual_model or plot_error_forcing and plot_error_model.
This function specifies the stations for which the nested functions
apply. Figure formatting except x-axis limits and titles are included.
:arg subject: Subject of figure, either 'residual' or 'error' for residual error.
:type subject: string
:arg grid_B: Bathymetry dataset for the Salish Sea NEMO model.
:type grid_B: :class:`netCDF4.Dataset`
:arg t_orig: Date being considered.
:type t_orig: datetime object
:arg figsize: Figure size (width, height) in inches.
:type figsize: 2-tuple
:returns: fig
"""
# set up axis limits - based on full 24 hour period 0000 to 2400
sax = t_orig
eax = t_orig +datetime.timedelta(days=1)
runs_list = verified_runs(t_orig)
fig, axes = plt.subplots(4, 1, figsize=figsize)
axs_mod = [axes[1], axes[2], axes[3]]
names = ['Point Atkinson', 'Victoria', 'Campbell River']
if subject == 'residual':
plot_residual_forcing(axes[0], runs_list, t_orig)
plot_residual_model(axs_mod, names, runs_list, grid_B, t_orig)
elif subject == 'error':
plot_error_forcing(axes[0], runs_list, t_orig)
plot_error_model(axs_mod, names, runs_list, grid_B, t_orig)
for ax in axes:
ax.set_ylim([-0.4, 0.4])
ax.set_xlabel('[hrs UTC]')
ax.set_ylabel('[m]')
hfmt = mdates.DateFormatter('%m/%d %H:%M')
ax.xaxis.set_major_formatter(hfmt)
ax.legend(loc=2, ncol=4)
ax.grid()
ax.set_xlim([sax,eax])
return fig
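# Example usage sketch: the full residual and residual-error comparison figures
# for the simulation start date t_orig defined above.
fig = plot_residual_error_all('residual', bathy, t_orig)
fig = plot_residual_error_all('error', bathy, t_orig)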
def compare_errors(name, mode, start, end, grid_B, figsize=(20,12)):
""" compares the model and forcing error at a station between dates start and end
for a simulation mode."""
# array of dates for iteration
numdays = (end-start).days
dates = [start + datetime.timedelta(days=num)
for num in range(0, numdays+1)]
dates.sort()
    # initialize figure and arrays
fig,axs = plt.subplots(3,1,figsize=figsize)
e_frc=np.array([])
t_frc=np.array([])
e_mod=np.array([])
t_mod=np.array([])
# mean daily error
frc_daily= np.array([])
mod_daily = np.array([])
t_daily = np.array([])
ttide=figures.get_tides(name)
for t_sim in dates:
# check if the run happened
if mode in verified_runs(t_sim):
# retrieve forcing and model error
e_frc_tmp, t_frc_tmp = calculate_error_forcing('Neah Bay', [mode], t_sim)
e_mod_tmp, t_mod_tmp, _ = calculate_error_model([name], [mode], grid_B, t_sim)
e_frc_tmp= figures.interp_to_model_time(t_mod_tmp[name][mode],e_frc_tmp['Neah Bay'][mode],t_frc_tmp['Neah Bay'][mode])
# append to larger array
e_frc = np.append(e_frc,e_frc_tmp)
t_frc = np.append(t_frc,t_mod_tmp[name][mode])
e_mod = np.append(e_mod,e_mod_tmp[name][mode])
t_mod = np.append(t_mod,t_mod_tmp[name][mode])
# append daily mean error
frc_daily=np.append(frc_daily, np.mean(e_frc_tmp))
mod_daily=np.append(mod_daily, np.mean(e_mod_tmp[name][mode]))
t_daily=np.append(t_daily,t_sim+datetime.timedelta(hours=12))
else:
print '{mode} simulation for {start} did not occur'.format(mode=mode, start=t_sim)
# Plotting time series
ax=axs[0]
ax.plot(t_frc, e_frc, 'b', label = 'Forcing error', lw=2)
ax.plot(t_mod, e_mod, 'g', lw=2, label = 'Model error')
ax.set_title(' Comparison of {mode} error at {name}'.format(mode=mode,name=name))
ax.set_ylim([-.4,.4])
hfmt = mdates.DateFormatter('%m/%d %H:%M')
# Plotting daily means
ax=axs[1]
ax.plot(t_daily, frc_daily, 'b', label = 'Forcing daily mean error', lw=2)
ax.plot([t_frc[0],t_frc[-1]],[np.mean(e_frc),np.mean(e_frc)], '--b', label='Mean forcing error', lw=2)
ax.plot(t_daily, mod_daily, 'g', lw=2, label = 'Model daily mean error')
ax.plot([t_mod[0],t_mod[-1]],[np.mean(e_mod),np.mean(e_mod)], '--g', label='Mean model error', lw=2)
ax.set_title(' Comparison of {mode} daily mean error at {name}'.format(mode=mode,name=name))
ax.set_ylim([-.2,.2])
# Plot tides
ax=axs[2]
ax.plot(ttide.time,ttide.pred_all, 'k', lw=2, label='tides')
ax.set_title('Tidal predictions')
ax.set_ylim([-3,3])
# format axes
hfmt = mdates.DateFormatter('%m/%d %H:%M')
for ax in axs:
ax.xaxis.set_major_formatter(hfmt)
ax.legend(loc=2, ncol=4)
ax.grid()
ax.set_xlim([start,end+datetime.timedelta(days=1)])
ax.set_ylabel('[m]')
return fig
###Output
_____no_output_____
###Markdown
* Clear tidal signal in model errors. I don't think we are removing the tidal energy in the residual calculation. * Bizarre forcing behavior on Jan 22. Looked at the ssh text file in run directory and everything was recorded as a forecast. Weird!! Is it possible that this text file did not generate the forcing for the Jan 22 nowcast run?* Everything produced by Jan 22 (18hr) text file is a fcst* worker links forcing in obs and fcst. So the obs/Jan21 was not related to this text file. But does that matter? This is a nowcast so it should only use Jan 22 forcing data fcst.There are 4 Jan 22 ssh text files in /ocean/nsoontie/MEOPAR/sshNeahBay/txt/* ssh-2015-02-22_12.txt is a forecast2 file* '' 18, 19, 21 are all in forecast/22jan15* '' 18 are is also in nowcast/22jan15So it appears that the forecast had to be restarted several times. What about the nowcast? Did that run smoothly?
###Code
def get_filenames(t_orig, t_final, period, grid, model_path):
"""Returns a list with the filenames for all files over the
defined period of time and sorted in chronological order.
:arg t_orig: The beginning of the date range of interest.
:type t_orig: datetime object
:arg t_final: The end of the date range of interest.
:type t_final: datetime object
:arg period: Time interval of model results (eg. 1h or 1d).
:type period: string
:arg grid: Type of model results (eg. grid_T, grid_U, etc).
:type grid: string
:arg model_path: Defines the path used (eg. nowcast)
:type model_path: string
:returns: files, a list of filenames
"""
numdays = (t_final-t_orig).days
dates = [t_orig + datetime.timedelta(days=num)
for num in range(0, numdays+1)]
dates.sort()
allfiles = glob.glob(model_path+'*/SalishSea_'+period+'*_'+grid+'.nc')
sdt = dates[0].strftime('%Y%m%d')
edt = dates[-1].strftime('%Y%m%d')
sstr = 'SalishSea_{}_{}_{}_{}.nc'.format(period, sdt, sdt, grid)
estr = 'SalishSea_{}_{}_{}_{}.nc'.format(period, edt, edt, grid)
files = []
for filename in allfiles:
if os.path.basename(filename) >= sstr:
if os.path.basename(filename) <= estr:
files.append(filename)
files.sort(key=os.path.basename)
return files
def combine_files(files, var, depth, j, i):
"""Returns the value of the variable entered over
multiple files covering a certain period of time.
:arg files: Multiple result files in chronological order.
:type files: list
:arg var: Name of variable (sossheig = sea surface height,
vosaline = salinity, votemper = temperature,
vozocrtx = Velocity U-component,
vomecrty = Velocity V-component).
:type var: string
:arg depth: Depth of model results ('None' if var=sossheig).
:type depth: integer or string
:arg j: Latitude (y) index of location (<=897).
:type j: integer
:arg i: Longitude (x) index of location (<=397).
:type i: integer
:returns: var_ary, time - array of model results and time.
"""
time = np.array([])
var_ary = np.array([])
for f in files:
G = nc.Dataset(f)
if depth == 'None':
var_tmp = G.variables[var][:, j, i]
else:
var_tmp = G.variables[var][:, depth, j, i]
var_ary = np.append(var_ary, var_tmp, axis=0)
t = nc_tools.timestamp(G, np.arange(var_tmp.shape[0]))
for ind in range(len(t)):
t[ind] = t[ind].datetime
time = np.append(time, t)
return var_ary, time
def plot_files(ax, grid_B, files, var, depth, t_orig, t_final,
name, label, colour):
"""Plots values of variable over multiple files covering
a certain period of time.
:arg ax: The axis where the variable is plotted.
:type ax: axis object
:arg grid_B: Bathymetry dataset for the Salish Sea NEMO model.
:type grid_B: :class:`netCDF4.Dataset`
:arg files: Multiple result files in chronological order.
:type files: list
:arg var: Name of variable (sossheig = sea surface height,
vosaline = salinity, votemper = temperature,
vozocrtx = Velocity U-component,
vomecrty = Velocity V-component).
:type var: string
:arg depth: Depth of model results ('None' if var=sossheig).
:type depth: integer or string
:arg t_orig: The beginning of the date range of interest.
:type t_orig: datetime object
:arg t_final: The end of the date range of interest.
:type t_final: datetime object
:arg name: The name of the station.
:type name: string
:arg label: Label for plot line.
:type label: string
:arg colour: Colour of plot lines.
:type colour: string
:returns: axis object (ax).
"""
bathy, X, Y = tidetools.get_bathy_data(grid_B)
lat = figures.SITES[name]['lat']; lon = figures.SITES[name]['lon']
[j, i] = tidetools.find_closest_model_point(lon, lat, X, Y,
bathy, allow_land=False)
# Call function
var_ary, time = combine_files(files, var, depth, j, i)
# Plot
ax.plot(time, var_ary, label=label, color=colour, linewidth=2)
# Figure format
ax_start = t_orig
ax_end = t_final + datetime.timedelta(days=1)
ax.set_xlim(ax_start, ax_end)
hfmt = mdates.DateFormatter('%m/%d %H:%M')
ax.xaxis.set_major_formatter(hfmt)
return ax
def compare_ssh_tides(grid_B, files, t_orig, t_final, name, PST=0, MSL=0,
figsize=(20, 5)):
"""
:arg grid_B: Bathymetry dataset for the Salish Sea NEMO model.
:type grid_B: :class:`netCDF4.Dataset`
:arg files: Multiple result files in chronological order.
:type files: list
:arg t_orig: The beginning of the date range of interest.
:type t_orig: datetime object
:arg t_final: The end of the date range of interest.
:type t_final: datetime object
:arg name: Name of station.
:type name: string
:arg PST: Specifies if plot should be presented in PST.
1 = plot in PST, 0 = plot in UTC.
:type PST: 0 or 1
:arg MSL: Specifies if the plot should be centred about mean sea level.
1=centre about MSL, 0=centre about 0.
:type MSL: 0 or 1
:arg figsize: Figure size (width, height) in inches.
:type figsize: 2-tuple
:returns: matplotlib figure object instance (fig).
"""
# Figure
fig, ax = plt.subplots(1, 1, figsize=figsize)
# Model
ax = plot_files(ax, grid_B, files, 'sossheig', 'None',
t_orig, t_final, name, 'Model', colours['model'])
# Tides
figures.plot_tides(ax, name, PST, MSL, color=colours['predicted'])
# Figure format
ax.set_title('Modelled Sea Surface Height versus Predicted Tides at {station}: {t_start:%d-%b-%Y} to {t_end:%d-%b-%Y}'.format(station=name, t_start=t_orig, t_end=t_final))
ax.set_ylim([-3.0, 3.0])
ax.set_xlabel('[hrs]')
ax.legend(loc=2, ncol=2)
ax.grid()
return fig
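# Example usage sketch: compare one week of nowcast sea surface heights against
# the predicted tides at Point Atkinson, using the helpers defined above.
files = get_filenames(t_orig, t_final, '1h', 'grid_T', paths['nowcast'])
fig = compare_ssh_tides(bathy, files, t_orig, t_final, 'Point Atkinson')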
def plot_wlev_residual_NOAA(t_orig, elements, figsize=(20, 5)):
""" Plots the water level residual as calculated by the function
calculate_residual_obsNB and has the option to also plot the
observed water levels and predicted tides over the course of one day.
:arg t_orig: The beginning of the date range of interest.
:type t_orig: datetime object
:arg elements: Elements included in figure.
'residual' for residual only and 'all' for residual,
observed water level, and predicted tides.
:type elements: string
:arg figsize: Figure size (width, height) in inches.
:type figsize: 2-tuple
:returns: fig
"""
res_obs_NB, obs, tides = calculate_residual_obsNB('Neah Bay', t_orig)
# Figure
fig, ax = plt.subplots(1, 1, figsize=figsize)
# Plot
ax.plot(obs.time, res_obs_NB, 'Gray', label='Obs Residual', linewidth=2)
if elements == 'all':
ax.plot(obs.time, obs.wlev,
'DodgerBlue', label='Obs Water Level', linewidth=2)
ax.plot(tides.time, tides.pred[tides.time == obs.time],
'ForestGreen', label='Pred Tides', linewidth=2)
if elements == 'residual':
pass
ax.set_title('Residual of the observed water levels at Neah Bay: {t:%d-%b-%Y}'.format(t=t_orig))
ax.set_ylim([-3.0, 3.0])
ax.set_xlabel('[hrs]')
hfmt = mdates.DateFormatter('%m/%d %H:%M')
ax.xaxis.set_major_formatter(hfmt)
ax.legend(loc=2, ncol=3)
ax.grid()
return fig
def feet_to_metres(feet):
""" Converts feet to metres.
:returns: metres
"""
metres = feet*0.3048
return metres
def load_surge_data(filename_NB):
"""Loads the textfile with surge predictions for Neah Bay.
:arg filename_NB: Path to file of predicted water levels at Neah Bay.
:type filename_NB: string
:returns: data (data structure)
"""
# Loading the data from that text file.
data = pd.read_csv(filename_NB, skiprows=3,
names=['date', 'surge', 'tide', 'obs',
'fcst', 'anom', 'comment'], comment='#')
# Drop rows with all Nans
data = data.dropna(how='all')
return data
def to_datetime(datestr, year, isDec, isJan):
""" Converts the string given by datestr to a datetime object.
The year is an argument because the datestr in the NOAA data
doesn't have a year. Times are in UTC/GMT.
:arg datestr: Date of data.
:type datestr: datetime object
:arg year: Year of data.
:type year: datetime object
:arg isDec: True if run date was December.
:type isDec: Boolean
:arg isJan: True if run date was January.
:type isJan: Boolean
:returns: dt (datetime representation of datestr)
"""
dt = datetime.datetime.strptime(datestr, '%m/%d %HZ')
# Dealing with year changes.
if isDec and dt.month == 1:
dt = dt.replace(year=year+1)
elif isJan and dt.month == 12:
dt = dt.replace(year=year-1)
else:
dt = dt.replace(year=year)
dt = dt.replace(tzinfo=tz.tzutc())
return dt
def retrieve_surge(data, run_date):
""" Gathers the surge information a forcing file from on run_date.
:arg data: Surge predictions data.
:type data: data structure
:arg run_date: Simulation run date.
:type run_date: datetime object
:returns: surges (meteres), times (array with time_counter)
"""
surge = []
times = []
isDec, isJan = False, False
if run_date.month == 1:
isJan = True
if run_date.month == 12:
isDec = True
    # Convert the date strings in the data to datetime objects
    for d in data.date:
        dt = to_datetime(d, run_date.year, isDec, isJan)
times.append(dt)
daystr = dt.strftime('%m/%d %HZ')
tide = data.tide[data.date == daystr].item()
obs = data.obs[data.date == daystr].item()
fcst = data.fcst[data.date == daystr].item()
if obs == 99.90:
# Fall daylight savings
if fcst == 99.90:
# If surge is empty, just append 0
if not surge:
surge.append(0)
else:
# Otherwise append previous value
surge.append(surge[-1])
else:
                surge.append(feet_to_metres(fcst-tide))
        else:
            surge.append(feet_to_metres(obs-tide))
return surge, times
###Output
_____no_output_____
###Markdown
Close up
###Code
def compare_errors1(name, mode, start, end, grid_B, figsize=(20,3)):
""" compares the model and forcing error at a station between dates start and end
for a simulation mode."""
# array of dates for iteration
numdays = (end-start).days
dates = [start + datetime.timedelta(days=num)
for num in range(0, numdays+1)]
dates.sort()
# intiialize figure and arrays
fig,ax = plt.subplots(1,1,figsize=figsize)
e_frc=np.array([])
t_frc=np.array([])
e_mod=np.array([])
t_mod=np.array([])
ttide=figures.get_tides(name)
for t_sim in dates:
# check if the run happened
if mode in verified_runs(t_sim):
# retrieve forcing and model error
e_frc_tmp, t_frc_tmp = calculate_error_forcing('Neah Bay', [mode], t_sim)
e_mod_tmp, t_mod_tmp, _ = calculate_error_model([name], [mode], grid_B, t_sim)
e_frc_tmp= figures.interp_to_model_time(t_mod_tmp[name][mode],e_frc_tmp['Neah Bay'][mode],t_frc_tmp['Neah Bay'][mode])
# append to larger array
e_frc = np.append(e_frc,e_frc_tmp)
t_frc = np.append(t_frc,t_mod_tmp[name][mode])
e_mod = np.append(e_mod,e_mod_tmp[name][mode])
t_mod = np.append(t_mod,t_mod_tmp[name][mode])
else:
print '{mode} simulation for {start} did not occur'.format(mode=mode, start=t_sim)
# Plotting time series
ax.plot(t_mod, e_mod*5, 'g', lw=2, label = 'Model error x 5')
ax.plot(ttide.time,ttide.pred_all, 'k', lw=2, label='tides')
ax.set_title(' Comparison of {mode} error at {name}'.format(mode=mode,name=name))
ax.set_ylim([-3,3])
hfmt = mdates.DateFormatter('%m/%d %H:%M')
ax.xaxis.set_major_formatter(hfmt)
ax.legend(loc=2, ncol=4)
ax.grid()
ax.set_xlim([start,end+datetime.timedelta(days=1)])
ax.set_ylabel('[m]')
return fig
t_orig=datetime.datetime(2015,1,10)
t_final = datetime.datetime(2015,1,19)
mode = 'nowcast'
fig = compare_errors1('Point Atkinson', mode, t_orig,t_final,bathy)
fig = compare_errors1('Victoria', mode, t_orig,t_final,bathy)
fig = compare_errors1('Campbell River', mode, t_orig,t_final,bathy)
def compare_errors2(ax, name, mode, start, end, grid_B, cf, cm):
""" compares the model and forcing error at a station between dates start and end
for a simulation mode."""
# array of dates for iteration
numdays = (end-start).days
dates = [start + datetime.timedelta(days=num)
for num in range(0, numdays+1)]
dates.sort()
    # initialize figure and arrays
e_frc=np.array([])
t_frc=np.array([])
e_mod=np.array([])
t_mod=np.array([])
# mean daily error
frc_daily= np.array([])
mod_daily = np.array([])
t_daily = np.array([])
ttide=figures.get_tides(name)
for t_sim in dates:
# check if the run happened
if mode in verified_runs(t_sim):
# retrieve forcing and model error
e_frc_tmp, t_frc_tmp = calculate_error_forcing('Neah Bay', [mode], t_sim)
e_mod_tmp, t_mod_tmp, _ = calculate_error_model([name], [mode], grid_B, t_sim)
e_frc_tmp= figures.interp_to_model_time(t_mod_tmp[name][mode],e_frc_tmp['Neah Bay'][mode],t_frc_tmp['Neah Bay'][mode])
# append to larger array
e_frc = np.append(e_frc,e_frc_tmp)
t_frc = np.append(t_frc,t_mod_tmp[name][mode])
e_mod = np.append(e_mod,e_mod_tmp[name][mode])
t_mod = np.append(t_mod,t_mod_tmp[name][mode])
# append daily mean error
frc_daily=np.append(frc_daily, np.mean(e_frc_tmp))
mod_daily=np.append(mod_daily, np.mean(e_mod_tmp[name][mode]))
t_daily=np.append(t_daily,t_sim+datetime.timedelta(hours=12))
else:
print '{mode} simulation for {start} did not occur'.format(mode=mode, start=t_sim)
# Plotting daily means
ax.plot(t_daily, frc_daily, cf, label = 'Forcing, ' + mode, lw=2)
ax.plot(t_daily, mod_daily, cm, lw=2, label = 'Model, ' + mode)
ax.set_title(' Comparison of daily mean error at {name}'.format(mode=mode,name=name))
ax.set_ylim([-.35,.35])
# format axes
hfmt = mdates.DateFormatter('%m/%d %H:%M')
ax.xaxis.set_major_formatter(hfmt)
ax.legend(loc=2, ncol=6)
ax.grid()
ax.set_xlim([start,end+datetime.timedelta(days=1)])
ax.set_ylabel('[m]')
return fig
t_orig=datetime.datetime(2015,1,1)
t_final = datetime.datetime(2015,1,31)
fig,axs = plt.subplots(3,1,figsize=(20,12))
for name, n in zip (['Point Atkinson','Victoria','Campbell River'], np.arange(3)):
fig = compare_errors2(axs[n], name, 'nowcast', t_orig,t_final,bathy,'DeepSkyBlue','YellowGreen')
fig = compare_errors2(axs[n], name, 'forecast', t_orig,t_final,bathy,'DodgerBlue','OliveDrab')
fig = compare_errors2(axs[n], name, 'forecast2', t_orig,t_final,bathy,'SteelBlue','DarkGreen')
def compare_errors3(name, mode, start, end, grid_B, figsize=(20,3)):
""" compares the model and forcing error at a station between dates start and end
for a simulation mode."""
# array of dates for iteration
numdays = (end-start).days
dates = [start + datetime.timedelta(days=num)
for num in range(0, numdays+1)]
dates.sort()
fig,ax = plt.subplots(1,1,figsize=figsize)
    # initialize figure and arrays
e_frc=np.array([])
t_frc=np.array([])
e_mod=np.array([])
t_mod=np.array([])
# mean daily error
frc_daily= np.array([])
mod_daily = np.array([])
t_daily = np.array([])
ttide=figures.get_tides(name)
for t_sim in dates:
# check if the run happened
if mode in verified_runs(t_sim):
# retrieve forcing and model error
e_frc_tmp, t_frc_tmp = calculate_error_forcing('Neah Bay', [mode], t_sim)
e_mod_tmp, t_mod_tmp, _ = calculate_error_model([name], [mode], grid_B, t_sim)
e_frc_tmp= figures.interp_to_model_time(t_mod_tmp[name][mode],e_frc_tmp['Neah Bay'][mode],t_frc_tmp['Neah Bay'][mode])
# append to larger array
e_frc = np.append(e_frc,e_frc_tmp)
t_frc = np.append(t_frc,t_mod_tmp[name][mode])
e_mod = np.append(e_mod,e_mod_tmp[name][mode])
t_mod = np.append(t_mod,t_mod_tmp[name][mode])
# append daily mean error
frc_daily=np.append(frc_daily, np.mean(e_frc_tmp))
mod_daily=np.append(mod_daily, np.mean(e_mod_tmp[name][mode]))
t_daily=np.append(t_daily,t_sim+datetime.timedelta(hours=12))
# stdev
stdev_mod = (max(np.cumsum((mod_daily-np.mean(e_mod))**2))/len(mod_daily))**0.5
else:
print '{mode} simulation for {start} did not occur'.format(mode=mode, start=t_sim)
# Plotting daily means
ax.plot(t_daily, frc_daily, 'b', label = 'Forcing, ' + mode, lw=2)
ax.plot(t_daily, mod_daily, 'g', lw=2, label = 'Model, ' + mode)
#ax.plot([t_frc[0],t_frc[-1]],[np.mean(e_frc),np.mean(e_frc)], '--b', label='Mean forcing error', lw=2)
#ax.plot([t_mod[0],t_mod[-1]],[np.mean(e_mod),np.mean(e_mod)], '--g', label='Mean model error', lw=2)
ax.set_title(' Comparison of daily mean error at {name}'.format(mode=mode,name=name))
ax.set_ylim([-.35,.35])
# format axes
hfmt = mdates.DateFormatter('%m/%d %H:%M')
ax.xaxis.set_major_formatter(hfmt)
ax.legend(loc=2, ncol=6)
ax.grid()
ax.set_xlim([start,end+datetime.timedelta(days=1)])
ax.set_ylabel('[m]')
print stdev_mod
return fig
t_orig=datetime.datetime(2015,1,22)
t_final = datetime.datetime(2015,1,24)
fig = compare_errors3('Victoria', 'nowcast', t_orig,t_final,bathy)
fig = compare_errors3('Victoria', 'forecast', t_orig,t_final,bathy)
fig = compare_errors3('Victoria', 'forecast2', t_orig,t_final,bathy)
###Output
0.0125982966754
0.0435648311803
0.0388926269505
|
Code/PC-CMI_Algorithm/PCA_CMI_HumanCancer.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
!pip install pycm
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn import preprocessing
from sklearn.feature_selection import VarianceThreshold
import numpy as np
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import keras
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.layers.advanced_activations import LeakyReLU
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_curve, roc_curve, auc, average_precision_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold
from pycm import *
from matplotlib.pyplot import figure
import seaborn as sn
import time
import os
import numpy as np
import pandas as pd
import argparse
import matplotlib.pyplot as plt
from copy import deepcopy
from scipy import interpolate
from sklearn.feature_selection import mutual_info_regression
from scipy.stats import pearsonr
import scipy.sparse
import sys
import pickle
import re
from scipy import stats
from numpy import savetxt
from numpy import genfromtxt
import networkx as nx
from scipy.stats import norm
import itertools
import math
import copy
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.metrics import precision_recall_curve, roc_curve, auc, average_precision_score
from sklearn.metrics import confusion_matrix
from pycm import *
tcga_data_df = pd.read_csv('/content/drive/MyDrive/Thesis/Human-Cancer-Prediction/TCGA_GTEX_Data_18212_7142.tsv', delimiter='\t')
tcga_metadata_df = pd.read_csv('/content/drive/MyDrive/Thesis/Human-Cancer-Prediction/TCGA_GTEX_MetaData_7142_23.tsv', delimiter='\t')
tcga_data_df = tcga_data_df.drop(['NCBI_description','NCBI_other_designations','NCBI_chromosome', 'NCBI_map_location', 'NCBI_OMIM', 'CGC_Tumour Types(Somatic)', 'CGC_Tumour Types(Germline)', 'CGC_Role in Cancer', 'CGC_Translocation Partner', 'CGC_Somatic', 'CGC_Germline', 'CGC_Mutation Types', 'CGC_Molecular Genetics', 'CGC_Tissue Type', 'CGC_Cancer Syndrome', 'CGC_Other Syndrome', 'OMIM_Comments', 'OMIM_Phenotypes', 'Hugo_RefSeq IDs', 'Hugo_Ensembl gene ID', 'Hugo_Enzyme IDs', 'Hugo_Pubmed IDs', 'Hugo_Locus group', 'Hugo_Gene group name'],axis=1)
tcga_data_df = tcga_data_df.T
tcga_data_df.columns = tcga_data_df.iloc[0]
tcga_data_df = tcga_data_df.drop(tcga_data_df.index[0])
def x(a):
return np.log2(a.astype('float32') + 1)
tcga_data_df = tcga_data_df.apply(x, axis = 1)
tcga_data_df
tcga_metadata_df = tcga_metadata_df[['portions.analytes.aliquots.submitter_id', 'clinical.disease']]
tcga_metadata_df['clinical.disease'] = tcga_metadata_df['clinical.disease'].fillna('normal')
tcga_metadata_df = tcga_metadata_df.set_index('portions.analytes.aliquots.submitter_id')
tcga_metadata_df
tcga_data_df = pd.merge(tcga_data_df, tcga_metadata_df, left_index=True, right_index=True)
tcga_data_df
some_values = ['BRCA', 'Breast_normal']
tcga_data_breast_df = tcga_data_df.loc[tcga_data_df['clinical.disease'].isin(some_values)]
tcga_data_breast_df
tcga_data_breast_df = tcga_data_breast_df[['CD300LG','COL10A1','CA4','ADH1B','SCARA5','AQP7','FABP4','RBP4','MMP13','CIDEC', 'clinical.disease']]
tcga_data_breast_df
tcga_data_brca_df = tcga_data_breast_df.loc[tcga_data_breast_df['clinical.disease'] == 'BRCA']
tcga_data_brca_df = tcga_data_brca_df[['CD300LG','COL10A1','CA4','ADH1B','SCARA5','AQP7','FABP4','RBP4','MMP13','CIDEC']]
tcga_data_breastnormal_df = tcga_data_breast_df.loc[tcga_data_breast_df['clinical.disease'] == 'Breast_normal']
tcga_data_breastnormal_df = tcga_data_breastnormal_df[['CD300LG','COL10A1','CA4','ADH1B','SCARA5','AQP7','FABP4','RBP4','MMP13','CIDEC']]
tcga_data_breastnormal_df
def conditional_mutual_info(X,Y,Z=np.array(1)):
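    # Gaussian estimate of (conditional) mutual information from covariance
    # determinants: I(X;Y) = 0.5*log(|C_X||C_Y| / |C_XY|) and
    # I(X;Y|Z) = 0.5*log(|C_XZ||C_YZ| / (|C_Z||C_XYZ|)); infinite values are mapped to 0.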
if X.ndim == 1:
X = np.reshape(X, (-1, 1))
if Y.ndim == 1:
Y = np.reshape(Y, (-1, 1))
if Z.ndim == 0:
c1 = np.cov(X)
if c1.ndim != 0:
d1 = np.linalg.det(c1)
else:
d1 = c1.item()
c2 = np.cov(Y)
if c2.ndim != 0:
d2 = np.linalg.det(c2)
else:
d2 = c2.item()
c3 = np.cov(X,Y)
if c3.ndim != 0:
d3 = np.linalg.det(c3)
else:
d3 = c3.item()
cmi = (1/2)*np.log((d1*d2)/d3)
else:
if Z.ndim == 1:
Z = np.reshape(Z, (-1, 1))
c1 = np.cov(np.concatenate((X, Z), axis=0))
if c1.ndim != 0:
d1 = np.linalg.det(c1)
else:
d1 = c1.item()
c2 = np.cov(np.concatenate((Y, Z), axis=0))
if c2.ndim != 0:
d2 = np.linalg.det(c2)
else:
d2 = c2.item()
c3 = np.cov(Z)
if c3.ndim != 0:
d3 = np.linalg.det(c3)
else:
d3 = c3.item()
c4 = np.cov(np.concatenate((X, Y, Z), axis=0))
if c4.ndim != 0:
d4 = np.linalg.det(c4)
else:
d4 = c4.item()
cmi = (1/2)*np.log((d1*d2)/(d3*d4))
if math.isinf(cmi):
cmi = 0
return cmi
def pca_cmi(data, theta, max_order,filename):
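    # PC-style pruning with conditional mutual information (PC-CMI): start from the
    # complete graph and, at each order L, remove an edge when the maximum CMI between
    # its endpoints, conditioned on every size-L subset of their common neighbours,
    # falls below the threshold theta; L grows until max_order is reached or a full
    # pass removes no edge.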
genes = list(data.columns)
predicted_graph = nx.complete_graph(genes)
num_edges = predicted_graph.number_of_edges()
L = -1
nochange = False
while L < max_order and nochange == False:
L = L+1
predicted_graph, nochange = remove_edges(predicted_graph, data, L, theta)
print()
print()
print("Final Prediction:")
print("-----------------")
print("Order : {}".format(L))
print("Number of edges in the predicted graph : {}".format(predicted_graph.number_of_edges()))
f = plt.figure()
nx.draw(predicted_graph, with_labels=True, font_weight='bold')
plt.savefig('/content/drive/MyDrive/COM S 673/DREAM3 in silico challenge/Results_HumanCancer/Undirected_'+filename+'_'+str(theta)+'.png')
plt.show()
print()
return predicted_graph
def remove_edges(predicted_graph, data, L, theta):
initial_num_edges = predicted_graph.number_of_edges()
edges = predicted_graph.edges()
for edge in edges:
neighbors = nx.common_neighbors(predicted_graph, edge[0], edge[1])
nhbrs = copy.deepcopy(sorted(neighbors))
T = len(nhbrs)
if T < L and L != 0:
continue
else:
x = data[edge[0]].to_numpy()
if x.ndim == 1:
x = np.reshape(x, (-1, 1))
y = data[edge[1]].to_numpy()
if y.ndim == 1:
y = np.reshape(y, (-1, 1))
K = list(itertools.combinations(nhbrs, L))
if L == 0:
cmiVal = conditional_mutual_info(x.T, y.T)
if cmiVal < theta:
predicted_graph.remove_edge(edge[0], edge[1])
else:
maxCmiVal = 0
for zgroup in K:
z = data[list(zgroup)].to_numpy()
if z.ndim == 1:
z = np.reshape(z, (-1, 1))
cmiVal = conditional_mutual_info(x.T, y.T, z.T)
if cmiVal > maxCmiVal:
maxCmiVal = cmiVal
if maxCmiVal < theta:
predicted_graph.remove_edge(edge[0], edge[1])
final_num_edges = predicted_graph.number_of_edges()
if final_num_edges < initial_num_edges:
return predicted_graph, False
return predicted_graph, True
def get_chains(graph):
adj_list = nx.generate_adjlist(graph, delimiter=" ")
mapping = {}
for idx,line in enumerate(adj_list):
line = line.split(" ")
mapping[line[0]] = set(line[1:])
for element in mapping:
for adjacent_element in mapping[element]:
mapping[adjacent_element].add(element)
triples = []
for element in mapping:
for adjacent_element in mapping[element]:
for adj_adj_element in mapping[adjacent_element]:
if adj_adj_element != element:
triple = [element, adjacent_element, adj_adj_element]
triples.append(triple)
return triples
def forms_v_shape(adjMatrix, point1, point2):
length = adjMatrix.shape[0]
for i in range(0,length):
if adjMatrix[i][point2] == 1 and adjMatrix[point2][i] == 0 and i != point1:
return True
return False
def forms_cycle(adjMatrix, point1, point2):
len = adjMatrix.shape[0]
for i in range(0,len):
for j in range(0,len):
if adjMatrix[i][j] == 1 and adjMatrix[j][i] == 1:
adjMatrix[i][j] = 0
adjMatrix[j][i] = 0
adjMatrix[point1][point2] = 1
adjMatrix[point2][point1] = 0
G = nx.from_numpy_matrix(adjMatrix,create_using=nx.DiGraph)
return not(nx.is_directed_acyclic_graph(G))
def align_edges(graph, data, theta):
num_nodes = graph.number_of_nodes()
directed_graph = nx.to_numpy_array(graph)
#Step 1: Align the v-structure
mapping = {}
for i in range(0,num_nodes):
mapping[i] = 'G'+str(i+1)
non_edge_pairs = list(nx.non_edges(graph))
for non_edge in non_edge_pairs:
common_neighbors = sorted(nx.common_neighbors(graph, non_edge[0], non_edge[1]))
x = data[non_edge[0]].to_numpy()
if x.ndim == 1:
x = np.reshape(x, (-1, 1))
y = data[non_edge[1]].to_numpy()
if y.ndim == 1:
y = np.reshape(y, (-1, 1))
for neighbor in common_neighbors:
z = data[neighbor].to_numpy()
if z.ndim == 1:
z = np.reshape(z, (-1, 1))
cmiVal = conditional_mutual_info(x.T, y.T, z.T)
xind = data.columns.get_loc(non_edge[0])
yind = data.columns.get_loc(non_edge[1])
zind = data.columns.get_loc(neighbor)
if directed_graph[xind][zind] == 1 and directed_graph[zind][xind] == 1 and directed_graph[yind][zind] == 1 and directed_graph[zind][yind] == 1:
if not cmiVal < theta:
directed_graph[xind][zind] = 1
directed_graph[zind][xind] = 0
directed_graph[yind][zind] = 1
directed_graph[zind][yind] = 0
# Step 2: Use Rule 1 of edge alignments to orient edges a -> b - c to a -> b -> c if adding the edge does not form a cycle or v-structure
triples = get_chains(graph)
for triple in triples:
xind = data.columns.get_loc(triple[0])
yind = data.columns.get_loc(triple[1])
zind = data.columns.get_loc(triple[2])
if directed_graph[xind][zind] == 0 and directed_graph[zind][xind] == 0 :
frozen_graph = np.copy(directed_graph)
forms_v = forms_v_shape(frozen_graph, yind, zind)
forms_cyc = forms_cycle(frozen_graph, yind, zind)
if not ( forms_v or forms_cyc ):
if directed_graph[xind][yind] == 1 and directed_graph[yind][xind] == 0 and directed_graph[yind][zind] == 1 and directed_graph[zind][yind] == 1:
directed_graph[yind][zind] = 1
directed_graph[zind][yind] = 0
# Step 3: Use Rule 2 of edge alignments to orient edges that form a cycle if oriented the other way.
frozen_graph = np.copy(directed_graph)
for i in range(0,num_nodes):
for j in range(0,num_nodes):
if frozen_graph[i][j] == 1 and frozen_graph[j][i] == 1:
if forms_cycle(frozen_graph, i, j) and not(forms_cycle(frozen_graph, j, i)):
directed_graph[j][i] = 1
directed_graph[i][j] = 0
G = nx.from_numpy_matrix(directed_graph,create_using=nx.DiGraph)
G = nx.relabel_nodes(G, mapping)
return G
predicted_graph_brca = pca_cmi(tcga_data_brca_df, 0.05, 20, "HumanCancer_BRCA")
predicted_graph_breastnormal = pca_cmi(tcga_data_breastnormal_df, 0.05, 20, "HumanCancer_BreastNormal")
###Output
Final Prediction:
-----------------
Order : 4
Number of edges in the predicted graph : 12
|
examproject/Exam Bella v2.ipynb | ###Markdown
**Question 2:** Find and illustrate the equilibrium when $y_{t-1} = \pi_{t-1} = v_t = s_t = s_{t-1} = 0$. Illustrate how the equilibrium changes when instead $v_t = 0.1$.
###Code
#Define new AD-curve and SRAS-curve according to the new values for the variables
_AD = sm.Eq(pi_t, (-1/h*alpha)*((1+b*alpha)*y_t))
_SRAS = sm.Eq(pi_t, (gamma*y_t))
#Solve the new equilibrium
_EQ = sm.solve((_AD, _SRAS), (pi_t, y_t))
#Display result
_EQ
#Set v_t equal to 0.1
v_t = 0.1
#Define the AD-curve according to new value for the demand disturbance
newAD = sm.Eq(pi_t, ((1/h*alpha)*(v_t-(1+b*alpha)*y_t)))
#Solve the new equilibrium
newEQ = sm.solve((newAD, _SRAS), (pi_t, y_t))
#Display result
newEQ
#Defining functions in order to display result graphically
def SRAS(y_t):
return gamma*y_t
def AD(y_t):
return (-1/h*alpha)*((1+b*alpha)*y_t)
def newAD(y_t):
return (1/h*alpha)*(v_t-(1+b*alpha)*y_t)
# Evaluate the AD and SRAS curves on a grid of output gaps
x = np.linspace(-0.05,0.05,100)
SRAS = SRAS(x)
y = np.linspace(-0.05,0.05,100)
AD = AD(y)
z = np.linspace(-0.05,0.05,100)
newAD = newAD(z)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(x,SRAS,"m",label='SRAS')
ax.plot(y,AD,"c",label='AD')
ax.plot(z,newAD,"r",label='newAD')
ax.legend()
ax.grid()
ax.set_xlabel('Outputgap $y_t$')
ax.set_ylabel('Inflationgap $\pi_t$')
ax.set_title('Graphical illustration, Q2')
ax.set_xlim([-0.1,0.1])
ax.set_ylim([-0.1,0.1]);
# Plot results
plt.plot(x,SRAS,"m",label='SRAS')
plt.plot(y,AD,"c",label='AD')
plt.plot(z,newAD,"r",label='newAD')
plt.legend()
plt.xlabel('Outputgap, $y_t$')
plt.ylabel('Inflationgap, $\pi_t$')
plt.title('Graphical solution, Q2')
plt.grid()
plt.show()
#### I would like the range on the y-axis to be smaller, so the SRAS curve does not look like it equals 0
### But it feels like I cannot quite get that fixed
### If someone wants to take a look at it, that would be great!
###Output
_____no_output_____
###Markdown
**Persistent disturbances:** Now, additionally, assume that both the demand and the supply disturbances are AR(1) processes $$ v_{t} = \delta v_{t-1} + x_{t} $$ $$ s_{t} = \omega s_{t-1} + c_{t} $$ where $x_{t}$ is a **demand shock**, and $c_t$ is a **supply shock**. The **autoregressive parameters** are:
###Code
par['delta'] = 0.80
par['omega'] = 0.15
###Output
_____no_output_____
###Markdown
**Question 3:** Starting from $y_{-1} = \pi_{-1} = s_{-1} = 0$, how does the economy evolve for $x_0 = 0.1$, $x_t = 0, \forall t > 0$ and $c_t = 0, \forall t \geq 0$?
###Code
#Define disturbances
def v_t(v_lag, x_t):
return par['delta']*v_lag+x_t
def s_t(s_lag,c_t):
return par['omega']*s_lag+c_t
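# Illustrative sketch (not part of the original notebook): roll the demand disturbance
# forward for x_0 = 0.1 and x_t = 0 for t > 0, using the v_t helper and the par dictionary
# defined above. T, x_path and v_path are names introduced here for the example.
T = 25
x_path = [0.1] + [0.0] * (T - 1)
v_path = []
v_lag = 0.0
for x_shock in x_path:
    v_lag = v_t(v_lag, x_shock)
    v_path.append(v_lag)
# after the initial shock, v_path decays geometrically at the rate par['delta'] = 0.80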
###Output
_____no_output_____ |
ConditioningRealData2.ipynb | ###Markdown
Condition on City Group, Type and P1 - because they look like categorical* The first issue will be to define the labels and the mappings, and the join table; this will not be hard. The real issue will be to make the probability space.
###Code
group=np.unique(train['City Group'],return_counts=True)
group
types=np.unique(train['Type'],return_counts=True)
types
p1=np.unique(train['P1'],return_counts=True)
p1
###Output
_____no_output_____
###Markdown
The conditional probability model will be:* P(group, type, p1) * Meaning: what is the probability of P1 being some value given the fact that group and type have some values* This will be the first derived feature
###Code
group_labels=list(np.unique(train['City Group']))
group_labels
types_labels=list(np.unique(train['Type']))
types_labels
p1_labels=list(np.unique(train['P1']))
p1_labels
group_mapping={label: index
for index,label in enumerate(group_labels)}
group_mapping
types_mapping={label: index
for index,label in enumerate(types_labels)}
types_mapping
p1_mapping={label: index
for index,label in enumerate(p1_labels)}
p1_mapping
len(p1_mapping)
tr=train[['City Group','Type','P1']]
import itertools
lista=tr[:].values.tolist()
counts=([lista.count(ls) for ls in lista])
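# counts how many times each full (City Group, Type, P1) row occurs; a collections.Counter
# over the tuple keys would give the same counts in a single pass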
keys=[tuple(elem) for elem in lista]
prob_space={}
for i in range(len(keys)):
prob_space[keys[i]]=counts[i]/train.shape[0]
prob_space
sum(prob_space.values())
train['counts']=np.array(counts)
train.to_csv('trainCounts.csv',index=False)
train.columns
###Output
_____no_output_____
###Markdown
Building the joint distribution for these 3 vars * cardinalities are: ** group: 3 ** type: 3 ** p1: 8
###Code
joint_prob_table = np.zeros((3, 3, 8))
for gr, tp, p in prob_space:
joint_prob_table[group_mapping[gr],
types_mapping[tp],
p1_mapping[p]]=prob_space[gr,tp,p]
joint_prob_table.shape
joint_prob_table
#Lets see prob of getting P1 for each city group
#this means marginalizing type
joint_prob_city_p1 = joint_prob_table.sum(axis=1)
tr.head(1)
#perform a query
#what is prob of obtaining particular P1 knowing City Group from joint prob distribution
#where we marginalized the type
prob_p1_city=[]
for i in range(tr.shape[0]):
query=joint_prob_city_p1[group_mapping[tr['City Group'][i]],p1_mapping[tr['P1'][i]]]
prob_p1_city.append(query)
train['query1']=np.array(prob_p1_city)
train.to_csv('trainCounts.csv',index=False)
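# Hypothetical follow-up (not in the original notebook): the query above uses the marginal
# joint P(City Group, P1); the conditional P(P1 | City Group) can be obtained by normalising
# each City Group row of joint_prob_city_p1. p1_given_group is a name introduced here.
p1_given_group = joint_prob_city_p1 / joint_prob_city_p1.sum(axis=1, keepdims=True)
# p1_given_group[g, p] approximates P(P1 = p | City Group = g); each row sums to 1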
###Output
_____no_output_____ |
notebooks/pursuit/omp/omp_step_by_step.ipynb | ###Markdown
Dictionary Setup
###Code
M = 32
N = 64
K = 3
key = random.PRNGKey(0)
Phi = dict.gaussian_mtx(key, M,N)
Phi.shape
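# mutual coherence of the dictionary: the largest absolute inner product between two distinct
# atoms of Phi; smaller coherence generally means better sparse-recovery guarantees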
dict.coherence(Phi)
###Output
_____no_output_____
###Markdown
Signal Setup
###Code
x, omega = data.sparse_normal_representations(key, N, K, 1)
x = jnp.squeeze(x)
x
omega, omega.shape
y = Phi @ x
y
###Output
_____no_output_____
###Markdown
Development of OMP algorithm First iteration
###Code
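# OMP iteration 1, spelled out step by step: correlate the residual (initially y) with every
# atom, pick the index with the largest absolute correlation, add that atom to the active set,
# and solve the least-squares problem restricted to the selected atoms.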
r = y
norm_y_sqr = r.T @ r
norm_r_sqr = norm_y_sqr
norm_r_sqr
p = Phi.T @ y
p, p.shape
h = p
h, h.shape
i = pursuit.abs_max_idx(h)
i
indices = jnp.array([i])
indices, indices.shape
atom = Phi[:, i]
atom, atom.shape
subdict = jnp.expand_dims(atom, axis=1)
subdict.shape
L = jnp.ones((1,1))
L, L.shape
p_I = p[indices]
p_I, p_I.shape
x_I = p_I
x_I, x_I.shape
r_new = y - subdict @ x_I
r_new, r_new.shape
norm_r_new_sqr = r_new.T @ r_new
norm_r_new_sqr
###Output
_____no_output_____
###Markdown
Second iteration
###Code
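# Iteration 2: the correlation/selection step is unchanged, but the Gram matrix of the growing
# subdictionary is maintained through an incremental Cholesky update (gram_chol_update) and the
# least-squares coefficients are recovered with solve_spd_chol.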
r = r_new
norm_r_sqr = norm_r_new_sqr
h = Phi.T @ r
h, h.shape
i = pursuit.abs_max_idx(h)
i
indices = jnp.append(indices, i)
indices
atom = Phi[:, i]
atom, atom.shape
b = subdict.T @ atom
b
L = pursuit.gram_chol_update(L, b)
L, L.shape
subdict = jnp.hstack((subdict, jnp.expand_dims(atom,1)))
subdict, subdict.shape
p_I = p[indices]
p_I, p_I.shape
x_I = la.solve_spd_chol(L, p_I)
x_I, x_I.shape
subdict.shape, x_I.shape
r_new = y - subdict @ x_I
r_new, r_new.shape
norm_r_new_sqr = r_new.T @ r_new
norm_r_new_sqr
###Output
_____no_output_____
###Markdown
Third iteration
###Code
r = r_new
norm_r_sqr = norm_r_new_sqr
h = Phi.T @ r
h, h.shape
i = pursuit.abs_max_idx(h)
i
indices = jnp.append(indices, i)
indices
atom = Phi[:, i]
atom, atom.shape
b = subdict.T @ atom
b
L = pursuit.gram_chol_update(L, b)
L, L.shape
subdict = jnp.hstack((subdict, jnp.expand_dims(atom,1)))
subdict, subdict.shape
p_I = p[indices]
p_I, p_I.shape
x_I = la.solve_spd_chol(L, p_I)
x_I, x_I.shape
r_new = y - subdict @ x_I
r_new, r_new.shape
norm_r_new_sqr = r_new.T @ r_new
norm_r_new_sqr
from cr.sparse.pursuit import omp
solution = omp.solve(Phi, y, K)
solution.x_I
solution.I
solution.r
solution.r_norm_sqr
solution.iterations
def time_solve():
solution = omp.solve(Phi, y, K)
solution.x_I.block_until_ready()
solution.r.block_until_ready()
solution.I.block_until_ready()
solution.r_norm_sqr.block_until_ready()
%timeit time_solve()
omp_solve = jax.jit(omp.solve, static_argnums=(2))
sol = omp_solve(Phi, y, K)
sol.r_norm_sqr
def time_solve_jit():
solution = omp_solve(Phi, y, K)
solution.x_I.block_until_ready()
solution.r.block_until_ready()
solution.I.block_until_ready()
solution.r_norm_sqr.block_until_ready()
%timeit time_solve_jit()
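# rough speedup of the jitted solver: un-jitted time over jitted time, assuming the %timeit
# runs above came out around 14.3 ms and 49.3 microseconds respectively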
14.3 * 1000 / 49.3
###Output
_____no_output_____ |
notebooks/9-submission_E-drop_rare_words-score_069.ipynb | ###Markdown
Read data
###Code
import pandas as pd
import numpy as np
train = pd.read_csv('data_in/TrainingData.csv')
test = pd.read_csv('data_in/TestData.csv')
# train.columns
# train['Position_Type'].head()
features = list(set(train.columns).intersection(set(test.columns)) - set(['FTE','Total']))
features.sort()
features
target = set(train.columns) - set(test.columns)
target = list(target)
target.sort()
target
for col in target:
test[col] = np.nan
train.shape, test.shape
train['is_holdout'] = False
test ['is_holdout'] = True
df = pd.concat([train,test], axis=0)
df.shape
# plt.plot(df['FTE'].sort_values().values[50000:100000])
# plt.plot(df['FTE'].sort_values().values[100000:142000])
# plt.plot(df['FTE'].sort_values().values[142000:144000])
# plt.plot(df['FTE'].sort_values().values[144000:145630])
# plt.plot(np.log10(df['FTE'][~pd.isnull(df['FTE'])].sort_values().values.squeeze()))
# plt.plot(df['FTE'][~pd.isnull(df['FTE'])].sort_values().values.squeeze()[:60]) # -0.08 .. 0
# plt.plot(df['FTE'][~pd.isnull(df['FTE'])].sort_values().values.squeeze()[60:30000]) # 0
# plt.plot(df['FTE'][~pd.isnull(df['FTE'])].sort_values().values.squeeze()[30000:100000]) # 0 - 1
# plt.plot(df['FTE'][~pd.isnull(df['FTE'])].sort_values().values.squeeze()[100000:-4000]) # 1
plt.plot(df['FTE'][~pd.isnull(df['FTE'])].sort_values().values.squeeze()[100000:136000]) # 1
# plt.plot(df['FTE'][~pd.isnull(df['FTE'])].sort_values().values.squeeze()[-4000:-1500]) # 1.0 .. 1.000003
# plt.plot(df['FTE'][~pd.isnull(df['FTE'])].sort_values().values.squeeze()[-1500:-900]) # 1.000003 .. 1.006
plt.show()
# convert FTE to string boolean and append to list of features
# df['FTE'] = ~pd.isnull(df['FTE']).astype('str')
df['FTE'] = df['FTE'].apply(lambda x: 'nan' if pd.isnull(x) else ( str(round(x,1)) if x <=1 else '>1' ) )
features = features + ['FTE']
features.sort()
# categorize Total field
def total_to_category(x):
if pd.isnull(x): return 'nan'
# if x < 1000: return str(round(x,1)) if x <=1 else '>1' ) )
ranges = [10,100,1000,10000, 1e5]
for i in ranges:
if abs(x) < i: return str(int(x//(i//10)*(i//10)))
return "> %s"%str(int(max(ranges)))
{x: total_to_category(x) for x in [1.5, 3.43, 15, 153, 2153, 9123, 42153, 142153]}
df['Total_sign'] = df['Total'].apply(lambda x: 'nan' if pd.isnull(x) else ('0' if x==0 else ('+' if x>0 else '-')))
df['Total_bin'] = df['Total'].apply(lambda x: total_to_category(x))
features = features + ['Total_sign', 'Total_bin']
features.sort()
df['Total_sign'].value_counts().head(n=20)
df['Total_bin'].value_counts().shape
df['Total_bin'].value_counts().head(n=20)
features
###Output
_____no_output_____
###Markdown
Fix "General"
Based on analysis of test vs train
###Code
for keyword in ['General Supplies *', 'General Supplies', 'GENERAL SUPPLIES *']:
df.loc[df['Sub_Object_Description']==keyword,'Sub_Object_Description'] = 'General'
###Output
_____no_output_____
###Markdown
Frequent words
###Code
# https://keras.io/preprocessing/text/
from keras.preprocessing.text import text_to_word_sequence
text_to_word_sequence("foo, bar-yo * baz/Pla")
class KeywordReplacer:
"""
e.g.
kr1 = KeywordReplacer(df['Sub_Object_Description'])
kr1.calculate_list_words()
new_series = kr1.do_replace()
"""
def __init__(self, my_series):
self.my_series = my_series.fillna("")
def calculate_list_words(self):
list_words = text_to_word_sequence(" ".join(self.my_series.values))
list_words = pd.Series(list_words)
list_words = list_words.value_counts()
list_words['and'] = 0
list_words['for'] = 0
list_words['or'] = 0
list_words['is'] = 0
list_words['non'] = 0
list_words['with'] = 0
list_words['that'] = 0
list_words = list_words.sort_values(ascending=False)
self.list_words = list_words
def replace_with_keyword(self, x, order=1):
"""
order=1 or order=2
"""
x_seq = text_to_word_sequence(x)
x_max_1 = [self.list_words[y] for y in x_seq]
if len(x_max_1)==0: return ""
x_max_1 = np.argmax(x_max_1)
x_max_1 = x_seq[x_max_1]
if order==1: return x_max_1
x_seq = [y for y in x_seq if y!=x_max_1]
x_max_2 = [self.list_words[y] for y in x_seq]
if len(x_max_2)==0: return ""
x_max_2 = np.argmax(x_max_2)
x_max_2 = x_seq[x_max_2]
return x_max_2
def do_replace(self,order=1):
return self.my_series.apply(lambda x: "" if x=="" else self.replace_with_keyword(x,order))
# df_sub.fillna('').apply(lambda x: replace_with_keyword(x,2))
# testing
df_sub = df['Sub_Object_Description'][~pd.isnull(df['Sub_Object_Description'])].head()
kr1 = KeywordReplacer(df_sub)
kr1.calculate_list_words()
new_series_1 = kr1.do_replace(1)
new_series_2 = kr1.do_replace(2)
pd.DataFrame({'ori': df_sub, 'new_1': new_series_1, 'new_2': new_series_2})
# implement
main_map = (
('Object_Description', 'Object_key_1', 'Object_key_2'),
('Sub_Object_Description', 'Sub_Object_key_1', 'Sub_Object_key_2'),
('Job_Title_Description', 'Job_Title_key_1', 'Job_Title_key_2'),
('Location_Description', 'Location_key_1', 'Location_key_2'),
('Fund_Description', 'Fund_key_1', 'Fund_key_2'),
('Program_Description', 'Program_key_1', 'Program_key_2'),
)
for k1,k2,k3 in main_map:
print("%s .. %s"%(time.ctime(), k1))
kr2 = KeywordReplacer(df[k1])
print("%s .. calc list"%time.ctime())
kr2.calculate_list_words()
print("%s .. replace 1"%time.ctime())
df[k2] = kr2.do_replace(1)
print("%s .. replace 2"%time.ctime())
df[k3] = kr2.do_replace(2)
for k1,k2,k3 in main_map:
features = [x for x in features if x!=k1] # drop main description
features = features + [k2, k3] # add key_1 and key_2
###Output
_____no_output_____
###Markdown
update "test/train" variables after postprocessing above
###Code
train = df[~df['is_holdout']]
test = df[ df['is_holdout']]
###Output
_____no_output_____
###Markdown
check status
###Code
meta = list(set(df.columns) - set(features) - set(target))
meta
df.shape, df[features].shape, df[target].shape, df[meta].shape
###Output
_____no_output_____
###Markdown
Analyze how close the train and test features are
###Code
results = []
for ff in features:
vc_train = df[ff][~df['is_holdout']].value_counts()
vc_test = df[ff][ df['is_holdout']].value_counts()
# vc_train.shape, vc_test.shape
vc_both = vc_train.reset_index().merge(
vc_test.reset_index(),
left_on = 'index',
right_on='index',
how='outer',
suffixes=['_train', '_test']
)
vc_both = vc_both.set_index('index')
# vc_both.head()
# vc_both[pd.isnull(vc_both['Facility_or_Department_test'])].head()
out = {
'feature': ff,
'train all': df[~df['is_holdout']].shape[0],
# 'train': vc_both['%s_train'%ff].sum(),
'train non-null': (~pd.isnull(df[ff][~df['is_holdout']])).sum(),
'train_minus_test': vc_both['%s_train'%ff][pd.isnull(vc_both['%s_test'%ff ])].sum(),
'test_minus_train': vc_both['%s_test'%ff ][pd.isnull(vc_both['%s_train'%ff])].sum(),
}
out['tmt_pct'] = out['test_minus_train'] * 100 // out['train non-null']
results.append(out)
results = pd.DataFrame(results)
results = results.set_index('feature').sort_index()
results = results.astype('uint32')
# results.shape
# results.head()
results[['train all', 'train non-null', 'train_minus_test', 'test_minus_train', 'tmt_pct']]
# sod = train['Sub_Object_Description'].value_counts()
# field_name = 'Sub_Object_Description'
field_name = 'Object_Description'
sod = test[field_name][~test[field_name].isin(train[field_name])].value_counts()
sod.head(n=20)
sod.iloc[2:].sum()
# field_name = 'Sub_Object_Description'
# field_name = 'Object_Description'
# keyword = 'general'
# keyword = 'money'
# keyword = 'supplies'
keyword = 'item'
train[field_name][
train[field_name].apply(lambda x: False if pd.isnull(x) else keyword in x.lower())
].value_counts()
field_name = 'Sub_Object_Description'
# field_name = 'Object_Description'
# keyword = 'general'
# keyword = 'money'
# keyword = 'supplies'
keyword = 'item'
test[field_name][
test[field_name].apply(lambda x: False if pd.isnull(x) else keyword in x.lower())
].value_counts()
from matplotlib import pyplot as plt
# plt.bar(x=range(sod.shape[0]), height=sod.values)
plt.bar(x=range(sod.shape[0]-5), height=sod.iloc[5:].values)
plt.show()
# sod[sod<10].shape[0], sod.shape[0]
sod[sod<10]
subtest = test['Sub_Object_Description'].apply(lambda x: (~pd.isnull(x)) & ('community' in str(x).lower())) # .sum()
test['Sub_Object_Description'][subtest].head()
###Output
_____no_output_____
###Markdown
Read target labels
###Code
import yaml
labels = yaml.safe_load(open("labels.yml", 'r'))
# Function': ['Aides Compensation
prediction_names = []
for k,v1 in labels.items():
for v2 in v1:
pn = "%s__%s"%(k,v2)
prediction_names.append(pn)
assert 'Function__Aides Compensation' in prediction_names
prediction_names.sort()
prediction_names[:5]
###Output
_____no_output_____
###Markdown
one-hot encode each target by its classes
###Code
for p in prediction_names: df[p] = False
for k,v1 in labels.items():
for v2 in v1:
pn = "%s__%s"%(k,v2)
# print(pn)
df[pn] = df[k] == v2
# since NO_LABEL is replaced with NaN, need this
for dependent in labels.keys():
target_sub = [x for x in df.columns if x.startswith("%s__"%dependent)]
df.loc[~df[target_sub].any(axis=1), '%s__NO_LABEL'%dependent]=True
df[['Function', 'Function__Teacher Compensation', 'Function__Substitute Compensation', 'Function__NO_LABEL']].head()
df.shape, df[pd.isnull(df[prediction_names]).all(axis=1)].shape, df.loc[~df[prediction_names].any(axis=1)].shape
assert ~pd.isnull(df[prediction_names]).any().any()
df[prediction_names] = df[prediction_names].astype('uint8')
###Output
_____no_output_____
###Markdown
Factorize features
###Code
print(time.ctime())
df_feat = df[features].apply(lambda x: pd.factorize(x)[0], axis=0)
df_feat = df_feat + 1 # +1 for the -1 from pd.factorize on nan (keras Embedding supports [0,N) )
print(time.ctime())
df_feat.max().max(), df_feat.min().min()
vocab_size = df_feat.max(axis=0) + 1 # +1 to count the 0 index
vocab_size = vocab_size.sort_index()
vocab_size
assert df[prediction_names].max().max()==1
df['Total_sign'].value_counts()#.tail()#head(n=100).tail()
###Output
_____no_output_____
###Markdown
split the non-holdout into train/test
###Code
# calculate label_keys array whose order is replicable
label_keys = labels.keys()
label_keys = list(label_keys)
label_keys.sort()
df_x = df_feat[~df['is_holdout']]
df_y = df[prediction_names][~df['is_holdout']] # .fillna(0)
###Output
_____no_output_____
###Markdown
simple train/test split with sklearn
from sklearn.model_selection import train_test_split
test_size=0.33
test_size=0
x_train, x_test, y_train, y_test = train_test_split(df_x, df_y, test_size=test_size, random_state=42)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
###Code
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold
rskf = RepeatedStratifiedKFold(n_splits=3, n_repeats=3, random_state=36851234)
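# 3 splits x 3 repeats -> 9 train/validation folds, so 9 trained models are collected in models_k below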
###Output
_____no_output_____
###Markdown
build a dummy equi-probable target
###Code
def get_y_equi(n):
y_equi = {}
for k in label_keys:
y_equi[k] = np.ones(shape=(n, len(labels[k]))) / len(labels[k])
y_equi = [y_equi[k] for k in label_keys]
y_equi = np.concatenate(y_equi, axis=1)
y_equi = pd.DataFrame(y_equi, columns=y_train.columns, index=y_train.index)
# y_equi.shape
return y_equi
###Output
_____no_output_____
###Markdown
keras embedding + Dense/LSTM
###Code
from keras.layers import Embedding, Dense, Flatten, LSTM, Input, Concatenate, Add, Lambda, Dropout
from keras.models import Sequential, Model
from keras import backend as K
def build_fn():
# vocab_size = stats.shape[0]
# inputs = [Input(shape=(prob3.shape[1],)) for f in vocab_size.index]
inputs = {f: Input(shape=(1,), name=f) for f in vocab_size.index}
# embeddings = [Embedding(vocab_size[f], embedding_dim, input_length=prob3.shape[1]) for f in vocab_size.index]
if True:
embedding_dim = 10 # 3 # 12 # 2 # 64 # FIXME
embeddings = {f: Embedding(vocab_size[f], embedding_dim, input_length=1)(inputs[f]) for f in vocab_size.index}
else:
embeddings = {f: Embedding(vocab_size[f], max(3, vocab_size[f]//15//10), input_length=1)(inputs[f]) for f in vocab_size.index}
# the model will take as input an integer matrix of size (batch, input_length).
# the largest integer (i.e. word index) in the input should be no larger than 999 (vocabulary size).
# now model.output_shape == (None, input_length, embedding_dim), where None is the batch dimension.
# dummy variable
x1= embeddings
# flatten each feature since no sequences anyway
x1 = {f: Flatten(name="%s_flat"%f)(x1[f]) for f in vocab_size.index}
# dense layer for each feature
# x1 = {f: Dense(10, activation = 'relu', name="%s_d01"%f)(x1[f]) for f in vocab_size.index}
# x1 = {f: Dense( 3, activation = 'relu', name="%s_d02"%f)(x1[f]) for f in vocab_size.index}
# a dropout for each feature, this way, the network is more robust to dependencies on a single feature
x1 = {f: Dropout(0.3, name="%s_dropout"%f)(x1[f]) for f in vocab_size.index}
x1 = [x1[f] for f in vocab_size.index]
x1 = Concatenate()(x1)
# x1 = Flatten()(x1)
x1 = Dropout(0.3)(x1)
x1 = Dense(1000, activation='relu')(x1)
x1 = Dense( 300, activation='relu')(x1)
# x1 = Dense( 50, activation='relu')(x1)
# o1 = {dependent: Dense(50, activation = 'relu', name="%s_d1"%dependent)(x1) for dependent in label_keys}
# o1 = {dependent: Dense(50, activation = 'relu', name="%s_d2"%dependent)(o1[dependent]) for dependent in label_keys}
# outputs = [Dense(len(labels[dependent]), activation = 'softmax', name="%s_out"%dependent)(o1[dependent]) for dependent in label_keys]
outputs = [Dense(len(labels[dependent]), activation = 'softmax', name="%s_out"%dependent)(x1) for dependent in label_keys]
inputs = [inputs[f] for f in vocab_size.index]
model = Model(inputs=inputs, outputs=outputs)
# model.compile('rmsprop', loss=multi_multiclass_logloss, metrics=['acc'])
# model.compile('rmsprop', loss='categorical_crossentropy', metrics=['acc'])
model.compile('adam', loss='categorical_crossentropy', metrics=['acc'])
return model
model_test = build_fn()
model_test.summary()
from matplotlib import pyplot as plt
import time
verbose = 2
models_k = []
# instead of df_y, use y_zeros below
# otherwise will get error
# "ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multilabel-indicator' instead."
y_zeros = np.zeros(shape=(df_x.shape[0], 1))
for k, indeces in enumerate(rskf.split(df_x.values, y_zeros)):
print('%s .. fold %s'%(time.ctime(), k+1))
train_index, test_index = indeces
print("TRAIN:", train_index, "TEST:", test_index)
x_train, x_test = df_x.iloc[train_index], df_x.iloc[test_index]
y_train, y_test = df_y.iloc[train_index], df_y.iloc[test_index]
y_equi = get_y_equi(y_train.shape[0])
# convert 2-D matrix of features into array of 1-D features
# This is needed because each feature has a different vocabulary for its embedding
x_train = [x_train[f].values for f in vocab_size.index]
x_test = [x_test [f].values for f in vocab_size.index]
# convert 2-D matrix of targets into K arrays of C-D matrices
# where C is the number of classes of each target
y_train = [y_train[[x for x in prediction_names if x.startswith("%s__"%f)]].values for f in label_keys]
y_test = [y_test [[x for x in prediction_names if x.startswith("%s__"%f)]].values for f in label_keys]
y_equi = [y_equi [[x for x in prediction_names if x.startswith("%s__"%f)]].values for f in label_keys]
# len(y_train), y_train[0].shape, y_train[1].shape, len(y_test), y_test[0].shape, len(y_equi), y_equi[0].shape
# y_equi[0][:2], y_equi[1][:2]
model = build_fn()
print('first train to equi-probable')
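# warm-up pass: fit every softmax head to the uniform (equi-probable) targets first; the asserts
# below then check that the predictions and loss match the uniform baseline before the model is
# trained on the real labels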
model.fit(
x_train,
y_equi,
batch_size=32*32, # 32, # FIXME
epochs=5,
verbose=verbose, #0,#2,
validation_split = 0.2,
# validation_split = 0,
shuffle=True
)
# y_pred = model.predict(x_train, batch_size=32*32)
y_pred = model.predict(x_test, batch_size=32*32)
assert abs(y_pred[0][0,0] - 0.027) < .001
assert abs(y_pred[1][0,0] - 0.090) < .001
# evaluate on the real data
score = model.evaluate(x_train, y_train, batch_size = 32*32)
assert abs(score[0] - 18.69) < 0.01
print('then train to actual probabilities')
history = model.fit(
# pd.get_dummies(train3['x'].values),
# # train2[list(set(train2.columns) - set(['joined']))],
# train3['y'].values,
x_train,
y_train,
batch_size=32*32, # 32, # FIXME
epochs=30,
#initial_epoch=30,
verbose=verbose,#0, #2,
validation_split = 0.2,
# validation_split = 0,
shuffle=True
)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.show()
print('model.evaluate', model.evaluate(x_test, y_test, batch_size = 32*32))
models_k.append(model)
print('')
print('')
print('')
###Output
_____no_output_____
###Markdown
Predict complete dataset for visualization
###Code
len(models_k)
def predict_k_models(x_in):
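# average the per-output predictions of all cross-validation models in models_k
# (a simple ensemble over the repeated K-fold splits)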
y_pred = []
n = len(models_k)
for k,model in enumerate(models_k):
print('fold %s / %s'%(k+1,n))
y_pred.append(model.predict(x_in, verbose=2))
# y_pred = pd.Panel(y_pred).mean(axis=2)
# TODO replace pd.Panel with xarray
# http://xarray.pydata.org/en/stable/
y_out = []
for fi in range(len(label_keys)):
# print(fi)
y_out.append(pd.Panel([yyy[fi] for yyy in y_pred]).mean(axis=0))
for i,k in enumerate(label_keys):
y_out[i].columns = labels[k]
return y_out
x_test = df_x # .head()
y_test = df_y # .head()
x_test = [x_test [f].values for f in vocab_size.index]
y_test = [y_test [[x for x in prediction_names if x.startswith("%s__"%f)]].values for f in label_keys]
y_pred = predict_k_models(x_test)
# y_pred = pd.Panel(y_pred).mean(axis=2)
# y_pred.shape
y_test[0][0,:].sum(), y_pred[0].iloc[0,:].sum() # , y_pred[0]
from keras.losses import categorical_crossentropy
from keras import backend as K
result_all = []
for yi in range(len(y_test)):
result_i = categorical_crossentropy(K.variable(y_test[yi]), K.variable(y_pred[yi].values))
result_all.append(K.eval(result_i))
# len(result_all), result_all[0].shape
# result_all[0].mean()
# result_all[0][0:5], result_all[1][0:5], len(y_test)
pd.Series([r.mean() for r in result_all]).sum()
###Output
_____no_output_____
###Markdown
Spatial comparison
###Code
# sub_labels = labels
sub_labels = {k:labels[k] for k in label_keys if k in ['Function']}
n_show = 1000
for i,v0 in enumerate(sub_labels.items()):
k,v1 = v0
# y_pred2 = pd.DataFrame(y_pred[i], columns=v1)
y_pred2 = y_pred[i]
y_test2 = pd.DataFrame(y_test[i], columns=v1)
for v2 in v1:
plt.figure(figsize=(20,3))
plt.plot(y_pred2.loc[:n_show,v2], label='pred')
#plt.plot(sum_pred, label='sum_pred', alpha=0.2)
plt.plot(y_test2.loc[:n_show,v2], '.', label='actual')
plt.legend(loc='best')
plt.title("%s: %s"%(k,v2))
axes = plt.gca()
axes.set_ylim([-.1,1.1])
plt.show()
###Output
_____no_output_____
###Markdown
temporal comparison
###Code
y_pred2 = {label_keys[i]: y_pred[i] for i in range(len(label_keys))}
y_test2 = {label_keys[i]: y_test[i] for i in range(len(label_keys))}
k2 = 'Function'
y_pred3 = y_pred2[k2].values
y_test3 = y_test2[k2]
for i in range(15):
plt.figure(figsize=(10,3))
plt.subplot(121)
plt.bar(x=range(y_pred3.shape[1]), height=y_test3[i])
plt.title('%s. actual, argmax=%s'%(i,np.argmax(y_test3[i])))
axes = plt.gca()
axes.set_ylim([-.1,1.1])
plt.subplot(122)
plt.bar(x=range(y_pred3.shape[1]), height=y_pred3[i])
plt.title('%s. prediction, argmax=%s'%(i,np.argmax(y_pred3[i])))
axes = plt.gca()
axes.set_ylim([-.1,1.1])
# plt.title(y_test.index[i])
plt.show()
###Output
_____no_output_____
###Markdown
Mock submission
###Code
x_ho = df_feat[features][~df['is_holdout']].head()
x_ho = [x_ho [f].values for f in vocab_size.index]
y_ho = predict_k_models(x_ho)
df_submit = pd.DataFrame(np.concatenate(y_ho, axis=1), columns=prediction_names)
df_submit.shape
df_submit.head().round(1)
df[target].head()
###Output
_____no_output_____
###Markdown
Prepare submission
###Code
df.shape, df_feat.shape
x_ho = df_feat[features][ df['is_holdout']]
x_ho = [x_ho [f].values for f in vocab_size.index]
y_ho = predict_k_models(x_ho)
len(y_ho), y_ho[0].shape, y_ho[1].shape
df_submit = pd.DataFrame(np.concatenate(y_ho, axis=1), columns=prediction_names, index=df_feat[ df['is_holdout']].index)
df_submit.shape
df_submit.head()
# plt.plot(df_submit['Use__NO_LABEL'].sort_values().values)
plt.plot(df_submit['Operating_Status__NO_LABEL'].sort_values().values)
plt.show()
test.head()
assert (df_submit['Operating_Status__NO_LABEL']<0.0001).all()
del df_submit['Operating_Status__NO_LABEL']
fn = 'data_out/submission_E1_%s.csv'%(time.strftime("%Y%m%d_%H%M%S"))
df_submit.to_csv(fn)
from zipfile import ZipFile, ZIP_DEFLATED
with ZipFile('%s.zip'%fn, 'w', ZIP_DEFLATED) as myzip:
myzip.write(fn)
###Output
_____no_output_____ |
SOM_ZhRane.ipynb | ###Markdown
ๅบไบPython3ๅฎ็ฐ่ช็ป็ปๆ ๅฐ๏ผSelf-Organizing Map๏ผ็ฅ็ป็ฝ็ป็ฎๆณSOM็ฅ็ป็ฝ็ปๆไธคๅฑ๏ผ็ฌฌไธๅฑ่พๅ
ฅๅฑ๏ผ่พๅ
ฅๆฐๆฎ็ๅฑ๏ผๆฏไธ็ปด็๏ผ็ฅ็ปๅ
็ไธชๆฐๅฐฑๆฏๆฐๆฎ็นๅพๆฐ๏ผ็ฌฌไบๅฑๆฏ็ซไบๅฑ๏ผไนๅฐฑๆฏๆ นๆฎ่พๅ
ฅๅฑ่พๅ
ฅ็ๆฐๆฎ๏ผ็ฅ็ปๅ
ไน้ดๆ็
ง็ญ็ฅ่ฟ่ก็ซไบ็ๅฑ๏ผ้ๅธธๆฏไบ็ปด็๏ผ่กใๅ็ฅ็ปๅ
ไธชๆฐๅฏ้่ฟไธไบ็ป้ช่งๅ็ปๅฎใๅ
ถไธญ็ซไบ็ญ็ฅๆฏ้่ฟไธ้ข็ๆนๅผๅฎ็ฐ็๏ผ็ซไบๅฑ็ๆฏไธช็ฅ็ปๅ
้ฝๆๆ้๏ผๅฝ่พๅ
ฅๅฑ่พๅ
ฅๆไธชๆ ทๆฌๆถ๏ผๅฐฑ่ฎก็ฎๆๆ็ฅ็ปๅ
็ๆ้ไธ่ฏฅๆกๆ ทๆฌ็่ท็ฆป๏ผ็ถๅ้่ฟ่งๅ่ฐๆด่ท็ฆปๆฏ่พๅฐ็็ฅ็ปๅ
็ๆ้๏ผไฝฟๅพๅ
ถๆดๆฅ่ฟ่ฏฅๆ ทๆฌใ**SOM็ฎๆณๅฐฑๆฏๅฐๅคๆก้ซ็ปดๆฐๆฎๆ ๅฐๅฐไบ็ปด็ๅนณ้ขไธ๏ผๅนถไธไฟ่ฏ็ธ่ฟ็ๆฐๆฎๅจๅนณ้ขไธ็ๆ ๅฐไฝ็ฝฎๆฏ่พ้ ่ฟ๏ผไป่่ฟ่ก่็ฑปใ**็ฐๆๅพ
่็ฑปๆ ทๆฌๆฐๆฎ้$D$๏ผ็ปดๅบฆไธบ$(N, M)$๏ผๅ
ถไธญๆฐๆฎๆกๆฐไธบ$N$๏ผๆฏๆกๆฐๆฎ็็นๅพๆฐไธบ$M$ใ ไธใSOMๆจกๅๆญฅ้ชค๏ผ + ๏ผ1๏ผ**็ซไบๅฑ็ฅ็ปๅ
ไธชๆฐ่ฎพ็ฝฎ**๏ผ่ก็ฅ็ปๅ
ไธชๆฐๅฏไปฅๅ $\sqrt{5\sqrt{MN}}$(ๅไธๅๆด)๏ผๅ็ฅ็ปๅ
ไธชๆฐๅฏไปฅๅ่กไธๆ ท๏ผๆ่
ๅคไบ่กๆฐ็ไธๅ๏ผ + ๏ผ2๏ผ**ๆ ทๆฌๆฐๆฎ้ๅฝไธๅ**๏ผๅฏนๆฏไธๅ่ฟ่กๅ่ช็ๅฝไธๅ๏ผไพๅฆๅฏไปฅๅฐๆฏไธๅ็ผฉๆพๅฐ$[0,1]$ไน้ด๏ผ + ๏ผ3๏ผ**ๅๅงๅ็ฅ็ปๅ
ๆ้$W$**๏ผๆฏไธช็ฅ็ปๅ
็ๆ้ๅๆฏ้ฟๅบฆไธบ$M$็ๅ้๏ผๆฏไธชๅ
็ด ๅไธบ้ๆบ้ๆฉ็ๆฏ่พๅฐ็ๆฐ๏ผไพๅฆๅจ0ๅฐ0.01ไน้ด๏ผ **ๅผๅง่ฎญ็ปSOM็ฝ็ป๏ผ่ฟญไปฃๆฌกๆฐ$s$๏ผๅๅง็ๅญฆไน ็$u$๏ผๅๅง็้ปๅๅๅพ$r$๏ผ** + ๏ผ4๏ผ**่ฎก็ฎๆไฝณๅน้
็ฅ็ปๅ
(BMU)**๏ผ้ๆฉไธๆกๆ ทๆฌๆฐๆฎ$X$๏ผ่ฎก็ฎ่ฏฅๆฐๆฎไธๆๆ็ฅ็ปๅ
ๆ้ไน้ด็ๆฐๆฎ่ท็ฆป$m$๏ผๅ
ถไธญ่ท็ฆปๆๅฐ็็ฅ็ปๅ
ๅฎไนไธบไธบ$BMU$๏ผ + ๏ผ5๏ผ**่ทๅพ้ปๅๅ
็็ฅ็ปๅ
**๏ผๆ นๆฎๅฝๅ็้ปๅๅๅพ$R(cs)$่ทๅ้ปๅไธญๅฟไธบ$BMU$็ๆๆ็ฅ็ปๅ
๏ผๅ
ถไธญ$R(cs)$ๆฏ้็่ฟญไปฃๆฌกๆฐ็ๅขๅ ไฝฟๅพ้ปๅๅๅพ้ๆธ่กฐๅ็ๅฝๆฐ๏ผ$cs$ๆฏๅฝๅ็่ฟญไปฃๆฌกๆฐ๏ผ + ๏ผ6๏ผ**ๆดๆนๆ ทๆฌๆ้**๏ผๅฏน้ปๅไธญ็ๆฏไธไธช็ฅ็ปๅ
๏ผๆ นๆฎๅฝๅ็ๅญฆไน ็$U(cs)$ไปฅๅ่ฏฅ็ฅ็ปๅ
ไธ$BMU$ไน้ด็ๆๆ่ท็ฆป$d$๏ผ่ฎก็ฎ่ฏฅ็ฅ็ปๅ
็ๅญฆไน ็$L(U(cs),d)$ใ็ถๅๆ็
งไธๅผๆดๆฐ่ฏฅ็ฅ็ปๅ
็ๆ้๏ผ $$W = W + L(U(cs),d)*(X-W)$$ ๅ
ถไธญ$L(U(cs),d)$ๆฏๅ
ณไบๅฝๅ็ๅญฆไน ็ๅ็ฅ็ปๅ
ไน้ด็ๆๆ่ท็ฆป็ๅฝๆฐ๏ผ่พๅบ็ๆฏไธ$BMU$็ๆๆ่ท็ฆปไธบ$d$็็ฅ็ปๅ
็ๅญฆไน ็๏ผๅนถไธ$d$่ถๅคง๏ผๅญฆไน ็่ถๅฐ๏ผ$U(cs)$ๆฏ้็่ฟญไปฃๆฌกๆฐ็ๅขๅ ๅญฆไน ็้ๆธ่กฐๅ็ๅฝๆฐ๏ผ + ๏ผ7๏ผ**่ฎญ็ปๅฎๆ**๏ผๆๆๆ ทๆฌๆฐๆฎ่ฟ่กๅฎ๏ผ4๏ผ-๏ผ6๏ผ๏ผ$cs$ๅ 1๏ผๅฝๅ
ถ็ญไบ$s$ๆ่
่ฏฏๅทฎๅบๆฌไธๅๆถๅๆญข่ฟญไปฃ๏ผ ๆณจ๏ผ็ฅ็ปๅ
ไธชๆฐไธๅฎๅฐ๏ผๆฐๆฎ่ท็ฆป$m$ๅฐฑๆฏๆ็ๆฏ็ฅ็ปๅ
็M็ปดๆ้ไธๆ ทๆฌๆฐๆฎไน้ด็่ท็ฆป๏ผไพๅฆๆฌงๅผ่ท็ฆป๏ผไฝๅผฆ่ท็ฆป๏ผๆๆ่ท็ฆป$d$ๆ็ๆฏ็ฅ็ปๅ
็ฝ็ปไธญ็็ฅ็ปๅ
็ๅ ไฝไฝ็ฝฎไน้ด็่ท็ฆป๏ผไพๅฆๆผๅ้กฟ่ท็ฆป๏ผๆฌงๅผ่ท็ฆป๏ผ่ฏฏๅทฎๅฏไปฅๅฎไนไธบๆๆๆ ทๆฌๆฐๆฎไธๅ
ถ$BMU$็ๆๅฐๆฐๆฎ่ท็ฆป็ๅ๏ผ ไบใSOM็ปๆ็ๅฏ่งๅๅ
ๅฎน๏ผ + ็ปๆๅฏ่งๅ + ็ฅ็ปๅ
ๆ ทๅผ + ๅ
ญ่พนๅฝข + ๅฏ่งๅๅ
ๅฎน + ็นๅพ็ๅผ็ๅๅธ(ๅซๆ็ฑปๅซ็้) + ็ฑปๅซ็็ฅ็ปๅ
ๆฟๆดป็จๅบฆ + ๆ ทๆฌๆ ็ญพ็ๅๅธๅฏ่งๅ ไธใPython3ๅฎ็ฐ
###Code
import numpy as np
import pandas as pd
from math import ceil, exp
import matplotlib.pyplot as plt
from matplotlib import rcParams
from matplotlib.colors import ListedColormap,LinearSegmentedColormap
from mpl_toolkits.axes_grid1 import host_subplot
from mpl_toolkits import axisartist
import matplotlib as mpl
import sys
from sklearn.cluster import KMeans
print('matplotlib็ๆฌ', mpl.__version__)
print('numpy็ๆฌ',np.__version__)
print('pandas็ๆฌ',pd.__version__)
print('python็ๆฌ', sys.version)
# SOMๅฏ่งๅ็ปๅพ็่ฎพ็ฝฎ
config = {
"font.family":'serif',
"mathtext.fontset":'stix',
"font.serif": ['SimSun'],
"font.size": 16,
"axes.formatter.limits": [-2, 3]}
rcParams.update(config)
plt.rcParams['axes.unicode_minus']=False
# ็คบไพๆฐๆฎ้๏ผ้ธขๅฐพ่ฑ
def get_som_data(filepath=r'C:\Users\GWT9\Desktop\Iris.xlsx'):
data = pd.read_excel(filepath)
# ๆฐๆฎๆไนฑ
data = data.sample(frac=1)
# ๅพ
่็ฑปๆฐๆฎ
cluster_data = data.values[:,:-1]
# ็นๅพๆฐๆฎ
feature_data = list(data.keys())[:-1]
# ๆฐๆฎ็ฑปๅซ
class_data = data.values[:, -1]
return cluster_data, feature_data, class_data
som_data, ldata, stdata = get_som_data()
# ่ฎญ็ปSOM็ฅ็ป็ฝ็ป
class AFSOM:
def __init__(self, data):
self.data = data # ้่ฆๅ็ฑป็ๆฐๆฎ๏ผnumpyๆฐ็ปๆ ผๅผ๏ผN,M๏ผ
# ๅฎไน็ฅ็ปๅ
็ไธชๆฐ
self.net_count = ceil((5 * (len(self.data)*len(self.data[0]))**0.5) **0.5)
self.net_row = self.net_count # ็ฅ็ปๅ
็่กๆฐ
self.net_column = int(0.7 * self.net_count) + 1 # ็ฅ็ปๅ
็ๅๆฐ
# ็ฎๆณ่ฎญ็ป,่ฟญไปฃๆฌกๆฐ
self.epochs = 200
# ่ท็ฆปๅฝๆฐ
self.disfunc = 'm' # m๏ผ้ตๅฏๅคซๆฏๅบ่ท็ฆป(้ป่ฎคไธบๆฌงๅผ่ท็ฆป)
# ๅญฆไน ็่กฐๅๅฝๆฐ
self.learningrate_decay = 'e' # e๏ผๆๆฐ่กฐๅ
# ้ปๅๅๅพ่กฐๅๅฝๆฐ
self.radius_decay = 'e' # e๏ผๆๆฐ่กฐๅ
# ้ปๅๅ
ๅญฆไน ็ๆ นๆฎ่ท็ฆป็่กฐๅๅฝๆฐ
self.lr_r_decay = 'g' # g๏ผ้ซๆฏๅฝๆฐ
# ๅๅงๅญฆไน ็
self.learning_rate = 0.9
# ๅๅง็้ปๅๅๅพ
self.radius = 5
# ๆฏๆฌก่ฟญไปฃ็ๅญฆไน ็ไปฅๅ้ปๅๅๅพ็ๅญๅ
ธ
self.learning_rate_dict = self.decay_lr()
self.radius_dict = self.decay_radius()
# ็ฅ็ปๅ
ๆฐๆฎๅๅงๅ
self.net_data_dict = self.inital_net_data()
# ๅพ
่็ฑปๆฐๆฎๅฝไธๅ
self.data_normal, self.maxdata, self.mindata = self.column_norm_dataset()
# ็ฅ็ปๅ
็็น็ไฝ็ฝฎ
self.net_point_dict = self.get_point()
# ๅญๅจ่ฏฏๅทฎ
self.error_list = []
# ๆ นๆฎ่กใๅ็็ฅ็ปๅ
็ไธชๆฐ๏ผๅฎไนไบ็ปดๅนณ้ขไธ็ฅ็ปๅ
็ๅ ไฝไฝ็ฝฎใไนๅฐฑๆฏ่ทๅพๆฏไธช็ฅ็ปๅ
็XYๅๆ .ๆฏไธช็ฅ็ปๅ
ๆๅคๆ6ไธช็ธ้ป็็ฅ็ปๅ
๏ผ
# ๅนถไธไธ็ธ้ป็ๆฌงๅผ่ท็ฆปไธบ1
def get_point(self):
# ๅญๅ
ธๅฝขๅผ
point_dict = {}
sign = 0
for i in range(self.net_row):
for j in range(self.net_column):
if i % 2 == 0:
point_dict[sign] = np.array([j, i*(-(3**.5)/2)])
else:
point_dict[sign] = np.array([j+(1/2), i*(-(3**.5)/2)])
sign += 1
return point_dict
# ๆๅๅฝไธๅ็ๅฝๆฐ
def column_norm_dataset(self):
min_data = np.min(self.data, axis=0)
max_data = np.max(self.data, axis=0)
# ้ฒๆญขๆไบ็นๅพๆๅคงๆๅฐๅผ็ธๅ๏ผๅบ็ฐ้คๆฐไธบ0็ๆ
ๅฝข
return (self.data - min_data +1e-28) / (max_data- min_data+1e-28), max_data, min_data
# ๅๅงๅ็ฅ็ปๅ
ๆฐๆฎ
def inital_net_data(self):
# ไธบๆฏไธไธช็ฅ็ปๅ
ๅๅงๅๆฐๆฎ
init_net_data = {}
for n in range(self.net_row * self.net_column):
np.random.seed(n)
random_data = np.random.random(len(self.data[0])) / 100 # ้ๆบๅฐๆฐ
# ๆฐๆฎๅๅงๅ
init_net_data[n] = random_data
return init_net_data
# ๅฎไนๅญฆไน ็็่กฐๅๅฝๆฐ๏ผๅญๅ
ธ
def decay_lr(self, decay_rate=0.9, step=25):
# ๅญฆไน ็ๅญๅ
ธ
lr_step_dict = {}
# ่ฟญไปฃไธญๆฏไธๆฌก็ๅญฆไน ็
for i in range(self.epochs):
if self.learningrate_decay == 'e':# ๆๆฐ่กฐๅ
lr = self.learning_rate * decay_rate ** (i / (self.epochs/ step))
else: # ๆๅฎๅญฆไน ็
lr = 0.1
lr_step_dict[i] = lr
return lr_step_dict
# ๅฎไน้ปๅๅๅพ่กฐๅ็ๅฝๆฐ๏ผๅญๅ
ธ
def decay_radius(self, decay_rate=0.9, step=18):
# ้ปๅๅๅพๅญๅ
ธ
radius_step_dict = {}
# ่ฟญไปฃไธญๆฏไธๆฌก็้ปๅๅๅพ
for i in range(self.epochs):
if self.radius_decay == 'e':# ๆๆฐ่กฐๅ
radius = self.radius * decay_rate ** (i / (self.epochs/ step))
else:
radius = 0
radius_step_dict[i] = radius
return radius_step_dict
# ๆ นๆฎ่ท็ฆป็กฎๅฎๆฏไธชๅจ้ปๅ่ๅดๅ
็็ฅ็ปๅ
็ๅญฆไน ็
def decay_lr_r(self, dis, epoch):
if self.lr_r_decay == 'g':
return self.learning_rate_dict[epoch] * exp(-(dis ** 2) / (2 * self.radius_dict[epoch] ** 2))
# ๅๅฝไธๅ็ฅ็ปๅ
็ๆฐๆฎ
def anti_norm(self, netdata, index):
return netdata * (self.maxdata[index] - self.mindata[index]) + self.mindata[index]
def minkowski_distance(self, datasample, datanet, p=2):
edis = np.sum(np.power(datasample - datanet, p)) ** (1/p)
return edis
def data_distance(self, datasample, datanet, p=2):
if self.disfunc == 'm':
return self.minkowski_distance(datasample, datanet, p)
# ่ทๅๆไฝณๅน้
็ฅ็ปๅ
้ปๅๅๅพ่ๅดๅ
็็ฅ็ปๅ
def get_round(self, bmu_sign, epoch):
# BMU็ฅ็ปๅ
็ไฝ็ฝฎ
cpoint_set = self.net_point_dict[bmu_sign]
round_net = []
for net in self.net_point_dict:
# ่ฎก็ฎๆฌงๆฐ่ท็ฆป
dis = self.minkowski_distance(cpoint_set, self.net_point_dict[net], p=2)
if dis <= self.radius_dict[epoch]:
round_net.append(net)
return round_net
# ่ทๅไธๆไธช็ฅ็ปๅ
ๅๅจ็ธ้ป็ๅๅ
๏ผๆๆ็ปๆ็ธ้ป
def adjoin_net(self, net):
x, y = self.net_point_dict[net]
point_set = [[x-1, y], [x+1, y], [x-1/2, y+(3**.5)/2], [x+1/2, y+(3**.5)/2], [x-1/2, y-(3**.5)/2], [x+1/2, y-(3**.5)/2]]
net_set =[]
for n in self.net_point_dict:
for j in point_set:
net_p = self.net_point_dict[n]
if abs(j[0] -net_p[0]) < 1e-5 and abs(j[1] -net_p[1]) < 1e-5:
net_set.append(n)
return net_set
# ๅผๅง่ฎญ็ป
def som_train(self):
# ๅฝๅ่ฟญไปฃๆฌกๆฐ
epoch = 0
while epoch < self.epochs:
# ่ฏฏๅทฎ
error = 0
print('ๅฝๅ่ฟญไปฃๆฌกๆฐ', epoch)
# ้ๅๆฐๆฎ
for sdata in self.data_normal:
# ่ฎก็ฎBMU
dis_dict = {}
for nett in self.net_data_dict:
dis_dict[nett] = self.data_distance(sdata, self.net_data_dict[nett])
# ้ๆฉๆๅฐ่ท็ฆปๅฏนๅบ็็ฅ็ปๅ
min_net, min_dis = sorted(dis_dict.items(), key=lambda s:s[1])[0]
# ๅญๅจ่ฏฏๅทฎ
error += min_dis
# ่ทๅ่ฟไธช็ฅ็ปๅ
็้ปๅ
neibourhood_nets = self.get_round(min_net, epoch)
# ๅผๅงๆดๆน็ฅ็ปๅ
็
for nn in neibourhood_nets:
# ่ฎก็ฎ็ฅ็ปๅ
ๅพๅไธญๅฟ็นไน้ด็่ท็ฆป:ๆฌงๅผ่ท็ฆป
dis_net = self.minkowski_distance(self.net_point_dict[min_net], self.net_point_dict[nn], 2)
# ่ทๅพๅญฆไน ็
lr = self.decay_lr_r(dis_net, epoch)
# ๆดๆนๆ้
self.net_data_dict[nn] =np.add(self.net_data_dict[nn], lr * (sdata - self.net_data_dict[nn]))
# ๅญๅจ่ฏฏๅทฎ
self.error_list.append(error)
epoch += 1
return print('SOM่ฎญ็ปๅฎๆฏ')
# ็ปๅถๅญฆไน ็ใ้ปๅๅๅพใไปฅๅ่ฎญ็ป่ฏฏๅทฎ็ๆฒ็บฟ
def plot_train(self):
# ่ทๅๆฐๆฎ:่ฟญไปฃๆฌกๆฐ
epoch_list = range(self.epochs)
# ๅญฆไน ็
lr_list = [self.learning_rate_dict[k] for k in epoch_list]
# ้ปๅๅๅพ
r_list = [self.radius_dict[k] for k in epoch_list]
# ๅฉ็จๅ
ฑๅ็ๅๆ ่ฝด
plt.figure(figsize=(10, 5))
host = host_subplot(111, axes_class=axisartist.Axes)
plt.subplots_adjust(right=0.75)
par1 = host.twinx()
par2 = host.twinx()
par2.axis["right"] = par2.new_fixed_axis(loc="right", offset=(60, 0))
par1.axis["right"].toggle(all=True)
par2.axis["right"].toggle(all=True)
p1, = host.plot(epoch_list, lr_list, label="ๅญฆไน ็")
p2, = par1.plot(epoch_list, r_list, label="้ปๅๅๅพ")
p3, = par2.plot(epoch_list, self.error_list, label="่ฏฏๅทฎ")
host.set_xlabel("่ฟญไปฃๆฌกๆฐ")
host.set_ylabel("ๅญฆไน ็")
par1.set_ylabel("้ปๅๅๅพ")
par2.set_ylabel("่ฏฏๅทฎ")
host.legend()
host.axis["left"].label.set_color(p1.get_color())
par1.axis["right"].label.set_color(p2.get_color())
par2.axis["right"].label.set_color(p3.get_color())
plt.show()
# ้ธขๅฐพ่ฑ็็คบไพ
som_iris = AFSOM(som_data)
# SOM็ฎๆณ่ฟ่ก
som_iris.som_train()
# ่พๅบ่ฎญ็ป่ฟ็จ
som_iris.plot_train()
# ็ฌฌไบ้จๅ๏ผ ๅพๅฝข
class VISOM(AFSOM):
def __init__(self, data, net_data_dict, stationlist, datalabel=None, cluster=False):
super(VISOM, self).__init__(data)
# ่ฎญ็ปๅ็ๆ ทๆฌ็ๆ้
self.net_data_weight = net_data_dict
self.slist = stationlist # ๆฏๆกๆฐๆฎ็ๆ ็ญพ๏ผๅ่กจๆ ผๅผ๏ผ้ฟๅบฆไธบ๏ผฎ ไธไธบ็ฉบ
self.label=datalabel # ๆฏๆกๆฐๆฎไธญๆฏไธช็นๅพ็ๆ ็ญพ๏ผๅ่กจๆ ผๅผ๏ผ้ฟๅบฆไธบM๏ผๅฏไธบ็ฉบ
# stationlistไธไธบNone, cluster=Trueใ่ฏดๆๆฏๆกๆฐๆฎ็ฅ้็ฑปๅซ๏ผ็จ่ฏฅๆนๆณ่ฟ่ก้ช่ฏ็ฑปๅซใๅฆๅ็่ฏ๏ผ้่ฆ่ชๅฎไน็ฑปๅซๆฐใ
self.c_sign = cluster
# ๆฐๆฎ็ฑปๅซไธชๆฐ
self.cc = 3
self.define_cc()
# ๅฏ่งๅ่ฎพ็ฝฎ
# ้ซๅฎฝๆฏไธบ2ๆฏๆ นๅท3๏ผไธบๆญฃๅ
ญ่พนๅฝข
self.height = 8
self.width = 4*(3**0.5)
self.tap = 0 # ๆงๅถๅพๅฝขไน้ด็้ด้
# ้ข่ฒๆ ๅฐ
# ๅฎไนcolorbar็้ข่ฒๅธง,็บฟๆง่กฅๅธง๏ผ่ชๅฎไนcolormap็ๅๅญ
my_cmap = LinearSegmentedColormap.from_list('SOM', ['#3D26A8', '#3AC893', '#F9FA15'])
self.color_config = my_cmap # ๆ่
ๅฎๆน่ฎพๅฎplt.get_cmap('viridis))
# ่พๅบ็ๅพ็ๅคงๅฐ
self.figsize=(8, 7)
# ๅพๅฝข็น็ๅญๅ
ธ:่พๅบๆฏไธชๆๆ ็็นๅพๅพ่ฐฑ็ๅพๅฝข๏ผๅ
ญ่พนๅฝข
self.point_dict = self.get_hexagon()
# ๅพๅฝข็นๅญๅ
ธ๏ผ่พๅบ็ฅ็ปๅ
ไน้ด่ท็ฆป็ๅพๅฝข
self.point_dict_distance, self.net_line_dict = self.transpose_point()
self.border = None
# ่ขซๅปไธญ็
self.data_hinted_dict, self.data_hinted_data, self.hhnet_list = self.get_hinted_net()
# ๅฎไนๆฐๆฎ็ฑปๅซไธชๆฐ
def define_cc(self):
if self.slist is not None:
if self.c_sign:
self.cc = len(set(self.slist))
# ็ฅ็ปๅ
็ๅพๅฝขไธบ๏ผๅ
ญ่พนๅฝข
def get_hexagon(self):
# ๅๅง็็น
a=b=A=B=0
# ๅญๅจๆฏไธช็ฅ็ปๅ
็็น็ๅญๅ
ธ
net_point_dict = {}
# ็ฝ็ป็่ก
row_sign = 0
# ็ฅ็ปๅ
็ผๅท
net_sign = 0
while row_sign < self.net_row:
# ็ฝ็ป็ๅ
column_sign = 0
while column_sign < self.net_column:
# ้ๆถ้็ๅ
ญไธช็น
one = [a, b+self.height/2]
two = [a-self.width/2, b+(self.height/2-self.width/(2 * 3** 0.5))]
three = [a - self.width/2, b-(self.height/2- self.width/(2 * 3** 0.5))]
four = [a, b-self.height/2]
five = [a + self.width/2, b-(self.height/2-self.width/(2 * 3** 0.5))]
six = [a + self.width/2, b+(self.height/2-self.width/(2 * 3** 0.5))]
# ๆจช็บตๅๆ ๅๅผ
x_number = [one[0], two[0], three[0], four[0], five[0], six[0], one[0]]
y_number = [one[1], two[1], three[1], four[1], five[1], six[1], one[1]]
# ๅญๅจ
net_point_dict[net_sign] = [np.array([a, b]), [x_number, y_number]]
net_sign +=1
column_sign += 1
# ๆดๆฐa๏ผb
a = a+ self.width
b = b
row_sign += 1
if row_sign % 2 == 1:
a = A + self.width/2
b = B - self.height + self.width/ (2 * 3** 0.5)
C = a
D = b
else:
a = C - self.width/2
b = D - self.height + self.width/(2 * 3** 0.5)
A = a
B = b
return net_point_dict
# ็ๆๅๅ
ๆ ผไน้ด็่ท็ฆปๅพๅ๏ผ
def transpose_point(self, m=0.3):
# ็ฌฌไธ้จๅ๏ผๅพๅฝขๆๆฏไพ็ผฉๅฐ
# ็ฌฌไบ้จๅ๏ผๅพๅไน้ดๆทปๅ ๅค่พนๅฝข่ฟ็บฟ
point_net_dict = {}
# ไธคไธชๅๅ
ไน้ด็่ฟ็บฟ
add_nets = {}
# ๅญๅจ
for k in self.point_dict:
# ่ฏฅ็ฅ็ปๅ
็ไธญๅฟ็น
kx, ky = self.point_dict[k][0]
# ็ญๆฏไพ็ผฉๅฐ
listx, listy = self.point_dict[k][1]
# ็ผฉๅฐๅ็็น
small_x, small_y = [], []
for x, y in zip(listx, listy):
small_x.append(m*x +(1-m) * kx)
small_y.append(m*y +(1-m) * ky)
# ๅญๅจไธญๅฟ็นใๅพๅฝข็็น
point_net_dict[k] = [[kx, ky], [small_x, small_y]]
# ่ทๅๅๅจ็ๅๅ
ๆ ผ
round_net = self.adjoin_net(k)
# ๅพๅฝขไน้ด็ๅค่พนๅฝข็่ฟ็บฟ
for rnet in round_net:
cxx, cyy = self.point_dict[rnet][0]
# ้ฆๅ
่ทๅไธคไธช็ธ้ป็ฅ็ปๅๅ
็2ไธชไบค็น
list_X, list_Y = self.point_dict[rnet][1]
add_point_x = []
add_point_y = []
for a, b in zip(listx[:-1], listy[:-1]):
for c, d in zip(list_X[:-1], list_Y[:-1]):
if abs(a-c) < 1e-8 and abs(b-d) < 1e-8:
add_point_x.append(a)
add_point_y.append(b)
# ็ถๅๅพๅฐ่ฟ2ไธช็นๅ่ช็ผฉๅฐๅ็็น
mul_x = []
mul_y = []
for xx, yy in zip(add_point_x, add_point_y):
# ่ฎก็ฎๅๅ
็ผฉๅฐๅ็็น
k_xx = m*xx +(1-m) * kx
k_yy = m*yy +(1-m) * ky
rnet_xx = m*xx +(1-m) * cxx
rnet_yy = m*yy +(1-m) * cyy
mul_x += [k_xx, xx, rnet_xx]
mul_y += [k_yy, yy, rnet_yy]
new_mu_x = mul_x[:3] + mul_x[3:][::-1] + [mul_x[0]]
new_mu_y = mul_y[:3] + mul_y[3:][::-1] + [mul_y[0]]
# ไธญๅฟ็น็่ฟ็บฟ,่ฎพ็ฝฎ่พ็ญ
# ๅพๅฝขไน้ด็่ฟๆฅ็นๅจ่พนไธ๏ผ่ฆ็ธๅบ็็ผฉๅฐ
add_x = (kx + cxx) / 2
add_y = (ky + cyy) / 2
p1_x, p1_y = (1-m)*add_x + m*kx, (1-m)*add_y + m*ky
p2_x, p2_y = (1-m)* add_x + m*cxx, (1-m)* add_y + m*cyy
add_nets['%s_%s'%(rnet, k)] = [[p1_x, p2_x], [p1_y, p2_y], [new_mu_x, new_mu_y]]
return point_net_dict, add_nets
# ็ฌฌไธ้จๅ๏ผ ๅฏ่งๅ็ปๆ
# ่ทๅ่ขซๅปไธญ็็ฅ็ปๅ
ๅบๅ๏ผ็ฅ็ปๅ
๏ผๅปไธญ็ๆฐๆฎๆ ทๆฌๅฏนๅบ็ๅ็งฐใ็ฅ็ปๅ
๏ผๅปไธญ็ๆ ทๆฌๆฐๆฎ
def get_hinted_net(self):
# ่ขซๅปไธญ็็ฅ็ปๅ
hited_nets_list = []
hinted_nets_dict = {}
hinted_nets_data = {}
for danor, sanor in zip(self.data_normal, self.slist):
nets_dict = {}
for nor in self.net_data_weight:
diss = self.data_distance(self.net_data_weight[nor], danor)
nets_dict[nor] = diss
# ่ฎก็ฎๆๅฐๅผ
min_nn = sorted(nets_dict.items(), key=lambda s:s[1])[0][0]
if min_nn in hinted_nets_dict:
hinted_nets_dict[min_nn].append(sanor)
hinted_nets_data[min_nn].append(danor)
else:
hinted_nets_dict[min_nn] = [sanor]
hinted_nets_data[min_nn] = [danor]
if min_nn not in hited_nets_list:
hited_nets_list.append(min_nn)
return hinted_nets_dict, hinted_nets_data, hited_nets_list
# ๆ นๆฎๅๅ
ๆ ผๅญ็ๆฐๅผ๏ผๅๅ
ๆ ผๅญ็ผๅท๏ผๅพ็ๅ็งฐ๏ผๅพไพๅ็งฐ๏ผๆฏไธช็ฅ็ปๅ
ๆๅญ
def plot_net_data(self, netdata, netlist, title, label, tap, net_text=None):
# ๅฐๆฐๆฎ่ฝฌๅๅฐ0ๅฐ1
ndata = (netdata - min(netdata)) / (max(netdata) - min(netdata))
# ๅฏนๅบ็้ข่ฒ
color_map = self.color_config(ndata)[:, :-1]
# ๆฐๅปบๅพ็
plt.figure(figsize=self.figsize)
plt.axis('equal')
plt.axis('off')
# ๅผๅง็ปๅถ
for ni, nv in enumerate(netlist):
x, y = self.point_dict[nv][1]
cx, cy = self.point_dict[nv][0]
plt.plot(x, y, lw=tap,color='gray')
plt.fill(x, y,color=color_map[ni])
# ๆทปๅ ๅๅ
ๆ ็ญพ
if net_text:
plt.text(cx, cy, net_text[ni], horizontalalignment='center',verticalalignment='center')
plt.text((self.net_column-1) * self.width/2 , self.height/2+4, title, horizontalalignment='center',verticalalignment='center')
# ๆทปๅ ็ฑปๅซ็้
if self.border is not None:
title = 'border%s' % title
for kkk in self.border:
for xyuu in self.border[kkk]:
ux, uy = xyuu
plt.plot(ux, uy, color='tab:red', lw=3)
# ๆทปๅ colorbar
norm = mpl.colors.Normalize(vmin=min(netdata), vmax=max(netdata))
#ใๅพไพ็ๆ ็ญพ
ticks = np.linspace(min(netdata), max(netdata), 3)
plt.colorbar(mpl.cm.ScalarMappable(norm=norm, cmap=self.color_config),
shrink=0.5*(self.height/self.width), label=label, ticks=ticks, orientation='vertical',aspect=30)
plt.tight_layout()
plt.savefig('data/%s_SOM.png' % title, dpi=100, bbox_inches = 'tight')
plt.close()
# ็ปๅถๆฏไธช็ๆตๆๆ ็็นๅพๅพ่ฐฑ
def plot_sigle(self):
if self.label is None:
self.label = ['feature%s' % d for d in range(len(self.data[0]))]
# ่ฎฐๅฝๆฏไธชๅๅ
ๆ ผ็ๆฏไธช็นๅพ็ๆฐๆฎ
for index, value in enumerate(self.label):
# ็นๅพๆฐๆฎ
net_feature = []
# ็ฅ็ปๅ
็ผๅท
net_sign = []
# ้ๅ็ฅ็ปๅ
for nn in self.net_data_weight:
# ๅๅฝไธๅ
f_data = self.net_data_weight[nn][index] * (self.maxdata[index] - self.mindata[index]) +self.mindata[index]
net_feature.append(f_data)
net_sign.append(nn)
# ็ปๅพ
self.plot_net_data(np.array(net_feature), net_sign, value, 'ๆตๅบฆ', self.tap)
return print('ๆฏไธช็นๅพ็ๅพ่ฐฑ็ปๅถๅฎๆฏ')
# ่ฎก็ฎๆฏไธช็ฅ็ปๅ
ไธๅ
ถ็ธ้ป็็ฅ็ปๅ
ๅไธช่พน็้ข่ฒ
def plot_class_distance(self):
# ้ๅ็ฅ็ปๅ
็ๆ้ๅญๅ
ธ
plt.figure(figsize=self.figsize)
plt.axis('equal')
plt.axis('off')
# ๆ นๆฎ่ท็ฆป็ๅผ่พๅบไธๅ็้ข่ฒ:้ฆๅ
่ฆ่ทๅ่ท็ฆปๆฐๅผ็ๅบๅ
dis_net_dict = {}
for net in self.net_data_weight:
round_net = self.adjoin_net(net)
for rnet in round_net:
dis = self.data_distance(self.net_data_weight[net], self.net_data_weight[rnet])
dis_net_dict['%s_%s'%(net, rnet)] = dis
# ่ท็ฆป็ๅ่กจ
dis_lsit = np.array(list(set(dis_net_dict.values())))
# ๆ ๅฐ้ข่ฒ
# ๅฐๆฐๆฎ่ฝฌๅๅฐ0ๅฐ1
ndata = (dis_lsit - min(dis_lsit)) / (max(dis_lsit) - min(dis_lsit))
# ๅฏนๅบ็้ข่ฒ
color_map = plt.get_cmap('YlOrRd')(ndata)[:, :-1]
# ็ถๅๆ นๆฎไธๅ็ๆฐๅผ็ปๅถไธๅ็้ข่ฒ
for kf in self.net_line_dict:
x, y, z= self.net_line_dict[kf]
# ็ปๅถๅค่พนๅฝข
plt.fill(z[0], z[1], color=color_map[list(dis_lsit).index(dis_net_dict[kf])])
# ็ปๅถ่พน
plt.plot(x, y, color='r', lw=1)
# ๆทปๅ colorbar
norm = mpl.colors.Normalize(vmin=min(dis_lsit), vmax=max(dis_lsit))
#ใๅพไพ็ๆ ็ญพ
ticks = np.linspace(min(dis_lsit), max(dis_lsit), 3)
plt.colorbar(mpl.cm.ScalarMappable(norm=norm, cmap=plt.get_cmap('YlOrRd')),
shrink=0.5*(self.height/self.width), label='่ท็ฆป', ticks=ticks, orientation='vertical',aspect=30)
for k in self.point_dict_distance:
point_set_x, point_set_y = self.point_dict_distance[k][1]
plt.plot(point_set_x, point_set_y, lw=3, c='#666699')
# ๅปไธญไธๆฒกๆๅปไธญ็้ข่ฒไธๅ
if k in self.hhnet_list:
plt.fill(point_set_x, point_set_y, c='#666699')
else:
plt.fill(point_set_x, point_set_y, c='w')
plt.tight_layout()
plt.savefig('data/bian_class.png', dpi=100, bbox_inches = 'tight')
plt.close()
# ็ปๅถๅปไธญ็ฅ็ปๅ
็ๆฌกๆฐ
def plot_out_hits(self):
plt.figure(figsize=(10, 10))
plt.axis('equal')
plt.axis('off')
# ๅญๆฐๆฏไธชๅๅ
่ขซๅปไธญ็ๆฌกๆฐ
net_hits_dict = {net:len(self.data_hinted_dict[net]) for net in self.data_hinted_dict}
# ๅผๅง็ปๅถ๏ผๅพๅฝข็ๅคงๅฐๆ็
งๆฏไพ่ฟ่ก็ผฉๆพ
# ้ฆๅ
่ทๅๅผ็ๅ่กจ
hits_list = sorted(list(set(net_hits_dict.values())))
# ๅจ0.5ๅฐ1ไน้ด
shape_out = list(np.linspace(0.5, 1, len(hits_list)))
# ่ทๅพๅพๅฝข็็น
tuxing_dict = {}
# ่ฎก็ฎ็ผฉๆพๅๆ ็ๅฝๆฐ
for k in self.point_dict:
# ่ฏฅ็ฅ็ปๅ
็ไธญๅฟ็น
kx, ky = self.point_dict[k][0]
# ็ญๆฏไพ็ผฉๅฐ
listx, listy = self.point_dict[k][1]
# ็ผฉๅฐๅ็็น
small_x, small_y = [], []
if k in net_hits_dict:
hits_n = net_hits_dict[k]
# ็ผฉๆพ็ๆฏไพ
p = shape_out[hits_list.index(hits_n)]
# ่ทๅพ็ธๅบ็ๅๆ
for x, y in zip(listx, listy):
small_x.append(p*x+(1-p)*kx)
small_y.append(p*y+(1-p)*ky)
tuxing_dict[k] = [[kx, ky], [small_x, small_y]]
# ๅผๅง็ปๅถ:้ฆๅ
็ปๅถๅค้ข็่ๆก
for nn in self.point_dict:
x, y = self.point_dict[nn][1]
plt.plot(x, y, lw=0.8,color='silver')
plt.fill(x, y,color='w')
if nn in tuxing_dict:
sx, sy = tuxing_dict[nn][1]
cx, cy = tuxing_dict[nn][0]
plt.plot(sx, sy, lw=0.5,color='k')
plt.fill(sx, sy, color='rosybrown')
plt.text(cx, cy, net_hits_dict[nn], horizontalalignment='center',verticalalignment='center')
plt.tight_layout()
plt.savefig('data/nints_SOM.png', dpi=100, bbox_inches = 'tight')
plt.close()
# Kmeans่็ฑป๏ผ้ๅฏนๅทฒ็ปๅปไธญ็็ฅ็ปๅ
่ฟ่ก่็ฑปใๅนถ็ปๅถ็ฑปไธ็ฑปไน้ด็ๅบๅซ็บฟ
def som_kmeans(self):
# ๅฏน่ขซๅปไธญ็็ฅ็ปๅ
็ๆ้ๆฐๆฎๅผๅง่็ฑป
hits_data = []
for hn in self.hhnet_list:
hits_data.append(self.net_data_weight[hn])
kmeans = KMeans(n_clusters=self.cc, random_state=0).fit(hits_data)
# ่ทๅๆฏไธช่ขซๅปไธญ็็ฅ็ปๅ
ๆฏๅชไธ็ฑป็ๅญๅ
ธ
zidian_dict = {}
for dd in self.hhnet_list:
lei = kmeans.predict([self.net_data_weight[dd]])[0]
if lei in zidian_dict:
zidian_dict[lei].append(dd)
else:
zidian_dict[lei] = [dd]
# ๅผๅง่ฎก็ฎๆฏไธไธช็ฑปๅซไธญ็ๆฐๆฎ็ๆ ทๆฌ็ๆฟๆดป็จๅบฆ
for mcc in zidian_dict:
counnt_b = 0
net_jihuo_dict = {}
for bjz in zidian_dict[mcc]:
for kk in self.data_hinted_data[bjz]:
counnt_b += 1
# ่ฎก็ฎ่ฏฅๆ ทๆฌๆฐๆฎไธๆๆ็ฅ็ปๅๅ
็่ท็ฆป
for nettt in self.net_data_weight:
ddiss = self.data_distance(self.net_data_weight[nettt], kk)
if nettt in net_jihuo_dict:
net_jihuo_dict[nettt] += ddiss
else:
net_jihuo_dict[nettt] = ddiss
# ่ฎก็ฎๅๅผ
new_net_jihuo_dict = {}
for hh in net_jihuo_dict:
new_net_jihuo_dict[hh] = net_jihuo_dict[hh] / counnt_b
# ็ปๅถๅพ
jihuo_data = new_net_jihuo_dict.values()
# ๅป้p
sub_qu = list(set(jihuo_data))
# ๅๅบ
shengxu = sorted(sub_qu)
# ้ๅบ
jiangxu = sorted(sub_qu, reverse=True)
#้็ปๆฐๆฎ
new_data = [jiangxu[shengxu.index(k)] for k in jihuo_data]
new_net = new_net_jihuo_dict.keys()
self.plot_net_data(np.array(new_data), new_net, '็ฑปๅซ%s'% mcc, 'ๆฟๆดป็จๅบฆ',1)
# ้ๅๆฏไธชๆ ทๆฌๆ้๏ผๅฐ็ฅ็ปๅๅ
ๅ็ป
# ๆฏไธไธช็ฅ็ปๅ
ๅฏนๅบ็็ฑปๅซ
net_ddd_dict = {}
group_net_dict = {}
for neet in self.net_data_weight:
middle_r = {}
for cla in range(self.cc):
dis = self.data_distance(self.net_data_weight[neet], kmeans.cluster_centers_[cla])
middle_r[cla] = dis
# ้ๆฉๆๅฐ็
min_dd = sorted(middle_r.items(), key=lambda s:s[1])[0][0]
if min_dd in group_net_dict:
group_net_dict[min_dd].append(neet)
else:
group_net_dict[min_dd] = [neet]
net_ddd_dict[neet] = min_dd
# ๆพๅฐไธ้จๅ็ๅ็็บฟ๏ผ
border_net = []
line_point = {}
for k in range(self.cc):
line_point[k] = []
# ้ๅ่ฏฅ็ฑป็็ฅ็ปๅ
for skh in group_net_dict[k]:
# ่ทๅ่ฏฅ็ฅ็ปๅ
็ๅจๅด็็ฅ็ปๅ
round_netts = self.adjoin_net(skh)
for kk in round_netts:
if kk not in group_net_dict[k]:
# ่ฏดๆๆฏ่พน็๏ผๆทปๅ ไบค็น
k_listx, k_listy = self.point_dict[skh][1]
kk_listx, kk_listy = self.point_dict[kk][1]
# ๅญๅจไบค็น็่พน
j_ccx = []
j_ccy = []
for a, b in zip(k_listx[:-1], k_listy[:-1]):
for c, d in zip(kk_listx[:-1], kk_listy[:-1]):
if abs(a-c) < 1e-8 and abs(b-d) < 1e-8:
j_ccx.append(a)
j_ccy.append(b)
line_point[k].append([j_ccx, j_ccy])
# ๆทปๅ ่พน็็็น
border_net.append(skh)
self.border = line_point
self.plot_sigle()
return print('ๅธฆๆ็ฑปๅซ่พน็็ๆๆ ๅพ่ฐฑ')
# ็ปๅถๆ ทๆฌๆ ็ญพไปฅๅๅ็ฑปๅบๅซ็บฟ๏ผๅฐฝๅฏ่ฝ้ฟๅ
้ๅคๅ ๅ ็้ฎ้ข
def plot_sample_border(self):
# ๅผๅง็ปๅถ
plt.figure(figsize=(10, 10))
plt.axis('equal')
plt.axis('off')
for nn in self.point_dict:
x, y = self.point_dict[nn][1]
plt.plot(x, y, lw=0.8,color='silver')
plt.fill(x, y,color='w')
if nn in self.data_hinted_dict:
# ๆทปๅ ๆๅญ
# ็กฎๅฎๆฏไธช็ไฝ็ฝฎ
length = len(self.data_hinted_dict[nn])+1
one_pointy = y[0]
four_pointy = y[3]
pointx = x[0]
for index, value in enumerate(self.data_hinted_dict[nn]):
pp = (index+1) / length
ygg = one_pointy *pp + four_pointy * (1-pp)
plt.text(pointx, ygg, value, color='tab:blue',fontsize=10,
horizontalalignment='center',verticalalignment='center')
# ๆทปๅ ็ฑปๅซ็้
if self.border is not None:
for kkk in self.border:
for xyuu in self.border[kkk]:
ux, uy = xyuu
plt.plot(ux, uy, color='tab:red', lw=3)
plt.tight_layout()
plt.savefig('data/label_SOM.png', dpi=100, bbox_inches='tight')
plt.close()
visom = VISOM(som_data, som_iris.net_data_dict,stdata, ldata)
visom.plot_sigle()
visom.plot_class_distance()
visom.plot_out_hits()
visom.som_kmeans()
visom.plot_sample_border()
###Output
_____no_output_____ |
notebook/Pandas Tutorials.ipynb | ###Markdown
Pandas is used for- Calculate statistics such as mean, meadian, standard-deviation to answer data questions.- Cleaning the data by removing missing values and filtering rows or columns by some criteria- Visualize the data with help from Matplotlib- a python liberary for data visualizations.- Store the cleaned, transformed data back into a CSV, other file or database Official Tutorial Link: - [pandas tutorial](https://pandas.pydata.org/pandas-docs/version/0.15/tutorials.html) **Pandas is designed on the top of Numpy, can be used as data source for Matplotlib, SciPy and SkLearn** Series Vs Dataframe Creating pandas dataframe- Using dictionary is the simple one, even we can construct it from array, list and tuples- loading data from file (most useful in practical application)
###Code
# read data into dataframe from csv file
import pandas as pd
df=pd.read_csv('../Boston.csv') # read_csv function will read the csv file from specified file path
df.head()
df.index
df.columns
# read data into dataframe from csv file with first column as an index
import pandas as pd
df=pd.read_csv('../Boston.csv', index_col=0) # file path
df['chas'].unique()
df.columns
df[df.chas==0]['chas'].count()
###Output
_____no_output_____
###Markdown
data attribute description - https://archive.ics.uci.edu/ml/datasets/Housing- ZN: proportion of residential land zoned for lots over 25,000 sq.ft.- INDUS: proportion of non-retail business acres per town- CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)- NOX: nitric oxides concentration (parts per 10 million)- RM: average number of rooms per dwelling- AGE: proportion of owner-occupied units built prior to 1940- DIS: weighted distances to ๏ฌve Boston employment centers- RAD: index of accessibility to radial highways- TAX: full-value property-tax rate per 10000s- PTRATIO: pupil-teacher ratio by town - BLACK: 1000(Bkโ0.63)2 where Bk is the proportion of blacks by town 13.- LSTAT: percentage of lower status of the population- MEDV: Median value of owner-occupied homes in $1000s
###Code
# lets summarize the data
df.describe()
df[df.chas==0]['chas'].count()
# lets visualize the last five rows
df.tail()
# lets display first ten rows
df.head(10)
# lets see the detail information about each column- datatype
df.info()
# short information about dataframe row and column size
df.shape
###Output
_____no_output_____
###Markdown
data cleaning activities- drop duplicate- since in this dataset, there is no duplicate row, we will extend the existing dataset first then drop the duplicate
###Code
df_temp=df.append(df) # append dataframe 'df' over iteself 'df'
df_temp.head()
df_temp=df_temp.drop_duplicates()
df_temp.shape
###Output
_____no_output_____
###Markdown
Data cleaning activities- drop null value or na
###Code
# lets check whether there is some null value or not
df_temp.isnull()
df_temp.isnull().sum()
df_temp.sum()
# lets forcfull try to put some null values
import numpy as np
df_temp.loc[0:4]=np.nan # nan- not a number in numpy liberary
df_temp.head()
df_temp.isnull().sum()
df_temp.head()
df_temp.dropna(inplace=True, axis=0) # axis argument can be used to remove null row (axis=0) or Null column (axis=1)
df_temp.head()
###Output
_____no_output_____
###Markdown
Data clearning activities- fill na with some imputation technique such as mean or median- we will work on original dataframe from here- df
###Code
# lets put first four value in column zn of this dataframe as NaN
df['zn'][0:4]=np.nan
df.head()
# lets fill these NaN value with mean of this column
zn_col=df['zn']
mean=zn_col.mean()
# mean=df['zn'].mean()
mean
zn_col.fillna(23, inplace=True) #this inplace=True, will replace all NaN by mean into original dataframe
df.head()
# lets describe each column individually
df['zn'].describe()
# subset of dataframe
col_subset=df[['crim', 'medv']]
col_subset
# subset by row- can be done with two: loc or iloc
row_subset=df.loc[0:15] # loc is dataframe index approach
row_subset.head()
row_subset=df.iloc[0:4] # iloc is pure integer index based approach
row_subset.head()
###Output
_____no_output_____
###Markdown
try to complete exercise-notebook in github (https://github.com/tejshahi/starter-machine-learning)
###Code
data=pd.read_csv('../Exercise/IMDB-Movie-Data.csv', index_col=1)
data.head()
row_subset=data.loc['Prometheus':'Suicide Squad']
row_subset
row_subset=data.iloc[1:5]
row_subset.index
row_subset.index
row_subset.columns
###Output
_____no_output_____ |
notebooks/cassandra_snitches/snitch_lpf_comparison.ipynb | ###Markdown
DynamicEndpoint Snitch Measurement ChoicesHistorically the DES has used a Median Filter approximated with a Codahale ExpononentiallyDecayingReservoir with a memory of about 100 items. There are proposals that we should change this ranking filter, for example to an Exponential Moving Average. This notebook is my attempt to model Cassandra replica latencies using probability distributions taking into account the frequent causes of latency (e.g. disks, safepoints, networks, and timeouts) and figure out which filter is appropriate.
###Code
import numpy as np
import matplotlib.pyplot as plt
import random
import scipy
import scipy.stats
scipy.random.seed(1234)
class EMA(object):
def __init__(self, alpha1, initial):
self._ema_1 = initial
self.alpha1 = alpha1
def _ema(self, alpha, value, past):
return alpha * value + (1-alpha) * past
def sample(self, value):
self._ema_1 = self._ema(self.alpha1, value, self._ema_1)
def measure(self):
return self._ema_1
class MedianFilter(object):
def __init__(self, initial, size):
self.samples = []
self.size = size
def sample(self, value):
self.samples.append(value)
if len(self.samples) > self.size:
self.samples = self.samples[1:]
def measure(self):
d = sorted(self.samples)
return d[len(self.samples) // 2]
class LatencyGenerator(object):
"""
latency_ranges is a list of tuples of (distribution, probability)
"""
def __init__(self, latency_ranges, max_sample):
self.max = max_sample
self.i = 0
self.d = [i[0] for i in latency_ranges]
self.p = [i[1] for i in latency_ranges]
def __iter__(self):
self.i = 0
return self;
def __next__(self):
if self.i > self.max:
raise StopIteration()
self.i += 1
distribution = np.random.choice(self.d, p=self.p)
return distribution.sample()
class LatencyDistribution(object):
def __init__(self, minimum, maximum, skew):
self.dist = scipy.stats.truncexpon(
(maximum - minimum) / skew, loc = minimum, scale=skew
)
def sample(self):
return int(self.dist.rvs(1)[0])
latencies = LatencyGenerator(
[
# Most of the requests
(LatencyDistribution(1, 10, 5), 0.9),
# Young GC
(LatencyDistribution(20, 30, 3), 0.0925),
# Segment retransmits
(LatencyDistribution(200, 210, 5), 0.005),
# Safepoint pauses
(LatencyDistribution(1000, 2000, 10), 0.00195),
# Timeouts / stuck connections / safepoint pauses
(LatencyDistribution(10000, 10005, 1), 0.00055)
],
50000
)
data = np.array([i for i in latencies])
typical = np.array([i for i in data if i < 1000])
fig = plt.figure(None, (20, 3))
plt.title("Latency Histgoram")
plt.semilogy()
plt.ylabel("Count / {}".format(50000))
plt.xlabel("Latency (ms)")
plt.hist(data, 200)
plt.gca().set_xlim(0)
plt.xticks(np.arange(0, max(data)+1, 400))
plt.show()
fig2 = plt.figure(None, (20, 1))
plt.title("Latency Distribution All")
plt.xlabel("Latency (ms)")
plt.gca().set_xlim(0)
plt.xticks(np.arange(0, max(data)+1, 400))
plt.boxplot([data], vert=False, labels=["raw"])
plt.show()
fig3 = plt.figure(None, (20, 1))
plt.title("Latency Distribution Typical")
plt.xlabel("Latency (ms)")
plt.gca().set_xlim(0, max(typical)+5)
plt.xticks(np.arange(0, max(typical)+5, 5))
plt.boxplot([typical], vert=False, labels=["typical"])
plt.show()
from pprint import pprint
print("Summary Statistics:")
percentiles = [50, 75, 90, 95, 99, 99.9, 100]
summary = np.percentile(data, percentiles)
m = {
percentiles[i] : summary[i] for i in range(len(percentiles))
}
print("{:.10}: {:.10s}".format("Percentile", "Millis"))
for (k, v) in sorted(m.items()):
print("{:9.2f}%: {:10.0f}".format(k, v))
ema = EMA(0.05, data[0])
result = []
for d in data:
ema.sample(d)
result.append(ema.measure())
plt.figure(None, (20, 10))
plt.plot(result)
plt.ylabel("Latency (ms)")
plt.title('EMA')
plt.show()
mf = MedianFilter(data[0], 100)
result = []
for d in data:
mf.sample(d)
result.append(mf.measure())
plt.figure(None, (20, 10))
plt.plot(result)
plt.ylabel("Latency (ms)")
plt.title('Median Filter')
plt.show()
###Output
_____no_output_____ |
charles-university/statistical-nlp/assignment-1/nlp-assignment-1.ipynb | ###Markdown
[Assignment 1: PFL067 Statistical NLP](http://ufal.mff.cuni.cz/~hajic/courses/npfl067/assign1.html) Exploring Entropy and Language Modeling Author: Dan Kondratyuk November 15, 2017--- This Python notebook examines conditional entropy as it relates to bigram language models and cross entropy as it relates to linear interpolation smoothing.Code and explanation of results is fully viewable within this webpage. Files- [index.html](./index.html) - Contains all veiwable code and a summary of results- [README.md](./README.md) - Instructions on how to run the code with Python- [nlp-assignment-1.ipynb](./nlp-assignment-1.ipynb) - Jupyter notebook where code can be run- [requirements.txt](./requirements.txt) - Required python packages for running 1. Entropy of a Text Problem Statement> In this experiment, you will determine the conditional entropy of the word distribution in a text given the previous word. To do this, you will first have to compute P(i,j), which is the probability that at any position in the text you will find the word i followed immediately by the word j, and P(j|i), which is the probability that if word i occurs in the text then word j will follow. Given these probabilities, the conditional entropy of the word distribution in a text given the previous word can then be computed as:> $$H(J|I) = -\sum_{i \in I, j \in J} P(i,j) \log_2 P(j|i)$$> The perplexity is then computed simply as> $$P_X(P(J|I)) = 2^{H(J|I)}$$> Compute this conditional entropy and perplexity for `TEXTEN1.txt`. This file has every word on a separate line. (Punctuation is considered a word, as in many other cases.) The i,j above will also span sentence boundaries, where i is the last word of one sentence and j is the first word of the following sentence (but obviously, there will be a fullstop at the end of most sentences).> Next, you will mess up the text and measure how this alters the conditional entropy. For every character in the text, mess it up with a likelihood of 10%. If a character is chosen to be messed up, map it into a randomly chosen character from the set of characters that appear in the text. Since there is some randomness to the outcome of the experiment, run the experiment 10 times, each time measuring the conditional entropy of the resulting text, and give the min, max, and average entropy from these experiments. Be sure to use srand to reset the random number generator seed each time you run it. Also, be sure each time you are messing up the original text, and not a previously messed up text. Do the same experiment for mess up likelihoods of 5%, 1%, .1%, .01%, and .001%.> Next, for every word in the text, mess it up with a likelihood of 10%. If a word is chosen to be messed up, map it into a randomly chosen word from the set of words that appear in the text. Again run the experiment 10 times, each time measuring the conditional entropy of the resulting text, and give the min, max, and average entropy from these experiments. Do the same experiment for mess up likelihoods of 5%, 1%, .1%, .01%, and .001%.> Now do exactly the same for the file `TEXTCZ1.txt`, which contains a similar amount of text in an unknown language (just FYI, that's Czech*)> Tabulate, graph and explain your results. Also try to explain the differences between the two languages. 
To substantiate your explanations, you might want to tabulate also the basic characteristics of the two texts, such as the word count, number of characters (total, per word), the frequency of the most frequent words, the number of words with frequency 1, etc. Process Text The first step is to define functions to calculate probabilities of bigrams/unigrams and the conditional entropy of a text. This can be done by counting up the frequency of bigrams and unigrams. The `BigramModel` class contains all the necessary functionality to compute the entropy of a text. By counting up the word unigram/bigram frequencies, we can divide the necessary counts to get the appropriate probabilities for the entropy function.
###Code
# Import Python packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import nltk
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import collections as c
from collections import defaultdict
# Configure Plots
plt.rcParams['lines.linewidth'] = 4
np.random.seed(200) # Set a seed so that this notebook has the same output each time
def open_text(filename):
"""Reads a text line by line, applies light preprocessing, and returns an array of words"""
with open(filename, encoding='iso-8859-2') as f:
content = f.readlines()
preprocess = lambda word: word.strip()
return np.array([preprocess(word) for word in content])
class BigramModel:
"""Counts up bigrams and calculates probabilities"""
def __init__(self, words):
self.words = words
self.word_set = list(set(words))
self.word_count = len(self.word_set)
self.total_word_count = len(self.words)
self.unigram_dist = c.Counter(words)
self.bigrams = list(nltk.bigrams(words))
self.bigram_set = list(set(self.bigrams))
self.bigram_count = len(self.bigram_set)
self.total_bigram_count = len(self.bigrams)
self.dist = c.Counter(self.bigrams)
def p_bigram(self, wprev, w):
"""Calculates the probability a bigram appears in the distribution"""
return self.dist[(wprev, w)] / self.total_bigram_count
def p_bigram_cond(self, wprev, w):
"""Calculates the probability a word appears in the distribution given the previous word"""
return self.dist[(wprev, w)] / self.unigram_dist[wprev]
def entropy_cond(self):
"""Calculates the conditional entropy from a list of bigrams"""
bigram_set = self.bigram_set
return - np.sum(self.p_bigram(*bigram) *
np.log2(self.p_bigram_cond(*bigram))
for bigram in bigram_set)
def perplexity_cond(self, entropy=-1):
"""Calculates the conditional perplexity from the given conditional entropy"""
if (entropy < 0):
return 2 ** self.entropy_cond()
else:
return 2 ** entropy
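# tiny usage example (illustrative only): exercise the BigramModel API on a toy word list
toy_model = BigramModel(['a', 'b', 'a', 'b', 'a'])
print(toy_model.entropy_cond(), toy_model.perplexity_cond())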
###Output
_____no_output_____
###Markdown
Perturb Texts Define functions to process a list of words and, with a given probability, alter each character/word to a random character/word.
###Code
def charset(words):
"""Given a list of words, calculates the set of characters over all words"""
return np.array(list(set(char for word in words for char in word)))
def vocab_list(words):
"""Given a list of words, calculates the vocabulary (word set)"""
return np.array(list(set(word for word in words)))
def perturb_char(word, charset, prob=0.1):
"""Changes each character with given probability to a random character in the charset"""
return ''.join(np.random.choice(charset) if np.random.random() < prob else char for char in word)
def perturb_word(word, vocabulary, prob=0.1):
"""Changes a word with given probability to a random word in the vocabulary"""
return np.random.choice(vocabulary) if np.random.random() < prob else word
def perturb_text(words, seed=200):
"""Given a list of words, perturbs each word both on the character level
and the word level. Does this for a predefined list of probabilties"""
np.random.seed(seed)
chars = charset(words)
vocab = vocab_list(words)
text_chars, text_words = pd.DataFrame(), pd.DataFrame()
probabilities = [0, 0.00001, 0.0001, 0.001, 0.01, 0.05, 0.1]
for prob in probabilities:
text_chars[str(prob)] = [perturb_char(word, chars, prob=prob) for word in words]
text_words[str(prob)] = [perturb_word(word, vocab, prob=prob) for word in words]
return text_chars, text_words
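# small illustration (not part of the original pipeline): perturb a single word
# at the character and word level with an exaggerated 50% probability
demo_words = ['hello', 'world']
print(perturb_char('hello', charset(demo_words), prob=0.5))
print(perturb_word('hello', vocab_list(demo_words), prob=0.5))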
###Output
_____no_output_____
###Markdown
Gather Statistics The following functions perturb a given text on the character and word level by a defined list of probabilities and compute statistical information for each probability data point.
###Code
def text_stats(words):
"""Given a list of words, this calculates various statistical
properties like entropy, number of characters, etc."""
bigram_model = BigramModel(words)
entropy = bigram_model.entropy_cond()
perplexity = bigram_model.perplexity_cond(entropy=entropy)
vocab_size = bigram_model.word_count
char_count = len([char for word in words for char in word])
chars_per_word = char_count / len(words)
words_freq_1 = sum(1 for key in bigram_model.unigram_dist if bigram_model.unigram_dist[key] == 1)
return [entropy, perplexity, vocab_size, char_count, chars_per_word, words_freq_1]
def run_stats(words, seed=200):
"""Calculates statistics for one run of perturbed probabilities of a given text
and outputs them to two tables (character and word level respectively)"""
perturbed_text = perturb_text(words, seed=seed)
text_chars, text_words = perturbed_text
col_names = [
'prob', 'entropy', 'perplexity', 'vocab_size', 'char_count',
'chars_per_word', 'words_freq_1'
]
char_stats = pd.DataFrame(columns=col_names)
word_stats = pd.DataFrame(columns=col_names)
# Iterate through all perturbation probabilities and gather statistics
for col in text_chars:
char_stats_calc = text_stats(list(text_chars[col]))
char_stats.loc[len(char_stats)] = [float(col)] + char_stats_calc
word_stats_calc = text_stats(list(text_words[col]))
word_stats.loc[len(word_stats)] = [float(col)] + word_stats_calc
return char_stats, word_stats
def all_stats(words, num_runs=10):
"""Calculates statistics for all runs of perturbed probabilities of a given text
and outputs the averaged values to two tables (character and word level respectively)"""
char_runs, word_runs = zip(*[run_stats(words, seed=i) for i in range(num_runs)])
char_concat, word_concat = pd.concat(char_runs), pd.concat(word_runs)
char_avg = char_concat.groupby(char_concat.index).mean()
word_avg = word_concat.groupby(word_concat.index).mean()
return char_avg, word_avg
def create_cond_entropy_plot(label, word_stats, char_stats):
"""Plots the word and character entropy of the given text statistics"""
plt.plot(word_stats.prob, word_stats.entropy, label='Word Entropy')
plt.plot(char_stats.prob, char_stats.entropy, label='Character Entropy')
plt.suptitle('Entropy (' + label + ')')
plt.xlabel('Probability')
plt.ylabel('Entropy')
_ = plt.legend()
###Output
_____no_output_____
###Markdown
Results (part 1): Calculate, Tabulate, and Graph Statistics Finally, we calculate the conditional entropy of both English and Czech texts, along with their perturbed counterparts as specified in the problem statement. Some additional statistics are calculated to better explain results. Explanations and conclusions of results are given at the end of this section.
###Code
# Read the texts into memory
english = './TEXTEN1.txt'
czech = './TEXTCZ1.txt'
words_en = open_text(english)
words_cz = open_text(czech)
# Calculate statistics on all data points
char_stats_en, word_stats_en = all_stats(words_en)
char_stats_cz, word_stats_cz = all_stats(words_cz)
###Output
_____no_output_____
###Markdown
English Character Statistics The table below displays the conditional entropy of the English text when each character can be perturbed with the given probability. The entropy of the English text starts at 5.28 and decreases steadily to 4.7 as more characters are changed randomly. The vocabulary size and number of words with frequency 1 increase substantially.
###Code
char_stats_en
###Output
_____no_output_____
###Markdown
English Word Statistics The table below displays the conditional entropy of the English text when each word can be perturbed with the given probability. The entropy of the English text starts at 5.28 and increases slightly to 5.45 as more words are changed randomly. The vocabulary size decreases very slightly and the number of words with frequency 1 decreases substantially.
###Code
word_stats_en
###Output
_____no_output_____
###Markdown
Czech Character Statistics The table below displays the conditional entropy of the Czech text when each character can be perturbed with the given probability. The entropy of the Czech text starts at 4.74 and decreases steadily to 4.0 as more characters are changed randomly. The vocabulary size and number of words with frequency 1 increase substantially.
###Code
char_stats_cz
###Output
_____no_output_____
###Markdown
Czech Word Statistics The table below displays the conditional entropy of the Czech text when each word can be perturbed with the given probability. The entropy of the Czech text starts at 4.74 and decreases slightly to 4.63 as more words are changed randomly. The vocabulary size decreases very slightly and the number of words with frequency 1 decreases as well.
###Code
word_stats_cz
###Output
_____no_output_____
###Markdown
English Plot The graph below plots the conditional entropy of the English text as a function of the probability of perturbing it. The blue line plots the entropy of the text with perturbed words, and the orange line plots the entropy of the text with perturbed characters.
###Code
create_cond_entropy_plot('English', word_stats_en, char_stats_en)
###Output
_____no_output_____
###Markdown
The plot shows that the conditional entropy drops as more characters in the words of the text are changed. Looking back at the table, not only does the vocabulary increase substantially, but the number of words with frequency 1 rises as well. Changing a character to a random symbol will more often than not create a new word. Conditional entropy can be thought of as the average amount of information needed to find the next word given its previous word. If the frequency of the previous word is 1, then the next word can be determined entirely from the previous, so no new information is necessary. In other words, $$p(w_1,w_2) \log_2 p(w_2|w_1) = p(w_1,w_2) \log_2 \frac{c(w_1,w_2)}{c(w_1)} = p(w_1,w_2) \log_2 1 = 0$$ where $(w_1,w_2)$ is a bigram and $c(w_1) = 1$. Therefore, as repeated words are changed to single frequency words, the conditional entropy would go down, as seen in the graph. The plot also shows that the conditional entropy rises slightly as words in the text are altered to random words in the vocabulary. The table shows that the number of words with frequency 1 decreases rapidly. As no new words can be created, the chance that a single frequency word will be mapped to a multiple frequency word increases with the probability. This has the effect of increasing the conditional entropy, since more information is necessary to determine the next word given the previous multiple frequency word. In other words, $- p(w_1,w_2) \log_2 p(w_2|w_1) > 0$ for $c(w_1) > 1$. Czech Plot The graph below plots the conditional entropy of the Czech text as a function of the probability of perturbing it. The blue line plots the entropy of the text with perturbed words, and the orange line plots the entropy of the text with perturbed characters.
###Code
create_cond_entropy_plot('Czech', word_stats_cz, char_stats_cz)
###Output
_____no_output_____
###Markdown
The first thing to notice is that the Czech language has an inherently lower conditional entropy than English (at least for this text). This can be explained by the fact that the Czech text contains many more words with a frequency of 1. As opposed to English, Czech has many more word forms due to its declension and conjugation of words, further increasing its vocabulary size and making it much less likely that words of the same inflection appear in the text. As explained earlier, single frequency words have the effect of decreasing conditional entropy. Very similar to the English plot, the conditional entropy drops as more characters in the words of the text are changed. This is due to the same reasons as explained above: the number of words of frequency 1 increases, lowering the amount of information needed to determine the next word given the previous. Somewhat unexpectedly, the Czech plot shows that the conditional entropy decreases as words in the text are altered to random words in the vocabulary. The English plot shows the opposite effect. Czech is known to be a [free word order](https://en.wikipedia.org/wiki/Czech_word_order) language, which means that (in many cases) words are free to move around the sentence without changing its syntactic structure. What this means is that determining the next word is harder, as other words can be mixed in without changing overall meaning. This requires more information overall (but this is offset relative to English by the difference in vocabulary size). However, as words are altered randomly the chance that the same next word appears increases, further decreasing entropy. Since English is highly dependent on word order (making it easy to determine what the next word is), it would make sense that randomly altering words would make it harder to determine what the next word is. It is important to keep in mind that even in the English case, after altering words past a certain point, the entropy should begin to decrease again. This is because low frequency words followed by high frequency words that keep the entropy high will decrease to an equilibrium point where every bigram is equally likely. Problem Statement> Now assume two languages, $L_1$ and $L_2$ do not share any vocabulary items, and that the conditional entropy as described above of a text $T_1$ in language $L_1$ is $E$ and that the conditional entropy of a text $T_2$ in language $L_2$ is also $E$. Now make a new text by appending $T_2$ to the end of $T_1$. Will the conditional entropy of this new text be greater than, equal to, or less than $E$? Explain. [This is a paper-and-pencil exercise of course!] Conditional entropy $H(Y|X)$ is the amount of information needed to determine the outcome of $Y$ given that the outcome $X$ is known. Since the texts are disjoint, the amount of information needed to find a word given the previous word will not increase between them (no bigrams are shared), except in one special case. Let $T_3 = T_1 \oplus T_2$ be the concatenation of the two texts. Note that $T_3$ has a newly formed bigram on the boundary of $T_1$ and $T_2$. Let $(t_1, t_2)$ be such a bigram. Then there is a nonzero term in the conditional entropy sum, increasing $E$ by $$- p(t_1,t_2) \log_2 p(t_2|t_1) = - \frac{1}{|T_3|} \log_2 \frac{1}{c(t_1)} = \frac{\log_2 c(t_1)}{|T_3|}$$ where $c(t)$ is the number of times word $t$ appears in its text and $|T|$ is the length of $T$. If we let $|T_2| = 1$ and $c(t_1) = |T_1|$, this cannot be more than $\max\{\frac{\log_2 n}{n}\} \approx 0.53$ bits of information. 
In short, $E$ will increase by a small amount. The larger $E$ is, the more insignificant these terms will be and so the new conditional entropy will approach $E$.$E$ will also decrease very slightly as well. Notice that $|T_3| = |T_1| + |T_2| + 1$, one more than the addition of the two texts. This term will appear in every part of the sum, so it can be factored out. This has the effect of modifying the total conditional entropy by the ratio$$\frac{|T_1| + |T_2|}{|T_3|} = \frac{|T_1| + |T_2|}{|T_1| + |T_2| + 1}$$This gets arbitrarily close to 100% as either text becomes large. Putting these two facts together, the new entropy $E_{new}$ is$$E_{new} = \frac{|T_1| + |T_2|}{|T_1| + |T_2| + 1} E + \frac{\log_2 c(t_1)}{|T_1| + |T_2| + 1}$$which approaches $E$ as either text $T_1,T_2$ increases in length.---<!-- Denote $H_C(T)$ to be the conditional entropy of a text $T$ and $|T|$ to be the length of $T$. Then$$H_C(T) = - \sum_{i,j} p(w_i,w_j) \log_2 p(w_j|w_i) = - \sum_{i,j} \frac{c(w_i,w_j)}{|T|} \log_2 \frac{c(w_i,w_j)}{c(w_i)}$$where $c(w_1,\dots,w_n)$ counts the frequency of an $n$-gram in $T$.Let $T_3 = T_1 \oplus T_2$ be the concatenation of the two texts. Then $H_C(T_1) = H_C(T_2) = E$, and$$H_C(T_3) = - \frac{1}{|T_1 + T_2|} \sum_{i,j} c(w_i,w_j) \log_2 \frac{c(w_i,w_j)}{c(w_i)}$$If $T_1$, $T_2$ are nonempty, then $E$ must decrease, as $$. --- --> 2. Cross-Entropy and Language Modeling Problem Statement> This task will show you the importance of smoothing for language modeling, and in certain detail it lets you feel its effects.> First, you will have to prepare data: take the same texts as in the previous task, i.e. `TEXTEN1.txt` and `TEXTCZ1.txt`> Prepare 3 datasets out of each: strip off the last 20,000 words and call them the Test Data, then take off the last 40,000 words from what remains, and call them the Heldout Data, and call the remaining data the Training Data.> Here comes the coding: extract word counts from the training data so that you are ready to compute unigram-, bigram- and trigram-based probabilities from them; compute also the uniform probability based on the vocabulary size. Remember (T being the text size, and V the vocabulary size, i.e. the number of types - different word forms found in the training text):> $p_0(w_i) = 1 / V $> $p_1(w_i) = c_1(w_i) / T$> $p_2(w_i|w_{i-1}) = c_2(w_{i-1},w_i) / c_1(w_{i-1})$> $p_3(w_i|w_{i-2},w_{i-1}) = c_3(w_{i-2},w_{i-1},w_i) / c_2(w_{i-2},w_{i-1})$> Be careful; remember how to handle correctly the beginning and end of the training data with respect to bigram and trigram counts.> Now compute the four smoothing parameters (i.e. "coefficients", "weights", "lambdas", "interpolation parameters" or whatever, for the trigram, bigram, unigram and uniform distributions) from the heldout data using the EM algorithm. [Then do the same using the training data again: what smoothing coefficients have you got? After answering this question, throw them away!] Remember, the smoothed model has the following form:> $p_s(w_i|w_{i-2},w_{i-1}) = l_0p_0(w_i)+ l_1p_1(w_i)+ l_2p_2(w_i|w_{i-1}) + l_3p_3(w_i|w_{i-2},w_{i-1})$,> where> $$l_0 + l_1 + l_2 + l_3 = 1$$> And finally, compute the cross-entropy of the test data using your newly built, smoothed language model. Now tweak the smoothing parameters in the following way: add 10%, 20%, 30%, ..., 90%, 95% and 99% of the difference between the trigram smoothing parameter and 1.0 to its value, discounting at the same the remaining three parameters proportionally (remember, they have to sum up to 1.0!!). 
Then set the trigram smoothing parameter to 90%, 80%, 70%, ... 10%, 0% of its value, boosting proportionally the other three parameters, again to sum up to one. Compute the cross-entropy on the test data for all these 22 cases (original + 11 trigram parameter increase + 10 trigram smoothing parameter decrease). Tabulate, graph and explain what you have got. Also, try to explain the differences between the two languages based on similar statistics as in the Task No. 2, plus the "coverage" graph (defined as the percentage of words in the test data which have been seen in the training data). Process Text The first step is to define functions to calculate probabilities of uniform, unigram, bigram, and trigram distributions with respect to a text. As before, this can be done by counting up the ngrams. The LanguageModel class contains all the necessary functionality to compute these probabilities.
###Code
np.random.seed(200) # Set a seed so that this notebook has the same output each time
class Dataset:
"""Splits a text into training, test, and heldout sets"""
def __init__(self, words):
self.train, self.test, self.heldout = self.split_data(words)
train_vocab = set(self.train)
test_vocab = set(self.test)
self.coverage = len([w for w in test_vocab if w in train_vocab]) / len(test_vocab)
def split_data(self, words, test_size = 20000, heldout_size = 40000):
words = list(words)
test, remain = words[-test_size:], words[:-test_size]
heldout, train = remain[-heldout_size:], remain[:-heldout_size]
return train, test, heldout
class LanguageModel:
"""Counts words and calculates probabilities (up to trigrams)"""
def __init__(self, words):
# Prepend two tokens to avoid beginning-of-data problems
words = np.array(['<ss>', '<s>'] + list(words))
# Unigrams
self.unigrams = words
self.unigram_set = list(set(self.unigrams))
self.unigram_count = len(self.unigram_set)
self.total_unigram_count = len(self.unigrams)
self.unigram_dist = c.Counter(self.unigrams)
# Bigrams
self.bigrams = list(nltk.bigrams(words))
self.bigram_set = list(set(self.bigrams))
self.bigram_count = len(self.bigram_set)
self.total_bigram_count = len(self.bigrams)
self.bigram_dist = c.Counter(self.bigrams)
# Trigrams
self.trigrams = list(nltk.trigrams(words))
self.trigram_set = list(set(self.trigrams))
self.trigram_count = len(self.trigram_set)
self.total_trigram_count = len(self.trigrams)
self.trigram_dist = c.Counter(self.trigrams)
def count(ngrams):
ngram_set = list(set(ngrams))
ngram_count = len(ngram_set)
total_ngram_count = len(ngrams)
ngram_dist = c.Counter(ngrams)
return ngram_set, ngram_count, total_ngram_count, ngram_dist
def p_uniform(self):
"""Calculates the probability of choosing a word uniformly at random"""
return self.div(1, self.unigram_count)
def p_unigram(self, w):
"""Calculates the probability a unigram appears in the distribution"""
return self.div(self.unigram_dist[w], self.total_unigram_count)
def p_bigram_cond(self, wprev, w):
"""Calculates the probability a word appears in the distribution given the previous word"""
# If neither ngram has been seen, use the uniform distribution for smoothing purposes
if ((self.bigram_dist[wprev, w], self.unigram_dist[wprev]) == (0,0)):
return self.p_uniform()
return self.div(self.bigram_dist[wprev, w], self.unigram_dist[wprev])
def p_trigram_cond(self, wprev2, wprev, w):
"""Calculates the probability a word appears in the distribution given the previous word"""
# If neither ngram has been seen, use the uniform distribution for smoothing purposes
if ((self.trigram_dist[wprev2, wprev, w], self.bigram_dist[wprev2, wprev]) == (0,0)):
return self.p_uniform()
return self.div(self.trigram_dist[wprev2, wprev, w], self.bigram_dist[wprev2, wprev])
def div(self, a, b):
"""Divides a and b safely"""
return a / b if b != 0 else 0
###Output
_____no_output_____
###Markdown
Expectation Maximization Algorithm Define functions to compute the EM algorithm on a language model using linear interpolation smoothing.
###Code
def init_lambdas(n=3):
"""Initializes a list of lambdas for an ngram language model with uniform probabilities"""
return [1 / (n + 1)] * (n + 1)
def p_smoothed(lm, lambdas, wprev2, wprev, w):
"""Calculate the smoothed trigram probability using the weighted product of lambdas"""
return np.multiply(lambdas, [
lm.p_uniform(),
lm.p_unigram(w),
lm.p_bigram_cond(wprev, w),
lm.p_trigram_cond(wprev2, wprev, w)
])
def expected_counts(lm, lambdas, heldout):
"""Computes the expected counts by smoothing across all trigrams and summing them all together"""
smoothed_probs = (p_smoothed(lm, lambdas, *trigram) for trigram in heldout) # Multiply lambdas by probabilities
return np.sum(smoothed / np.sum(smoothed) for smoothed in smoothed_probs) # Element-wise sum
def next_lambda(lm, lambdas, heldout):
"""Computes the next lambda from the current lambdas by normalizing the expected counts"""
expected = expected_counts(lm, lambdas, heldout)
return expected / np.sum(expected) # Normalize
def em_algorithm(train, heldout, stop_tolerance=1e-4):
"""Computes the EM algorithm for linear interpolation smoothing"""
lambdas = init_lambdas(3)
lm = LanguageModel(train)
heldout_trigrams = LanguageModel(heldout).trigrams
print('Lambdas:')
next_l = next_lambda(lm, lambdas, heldout_trigrams)
while not np.all([diff < stop_tolerance for diff in np.abs(lambdas - next_l)]):
print(next_l)
lambdas = next_l
next_l = next_lambda(lm, lambdas, heldout_trigrams)
lambdas = next_l
return lambdas
def log_sum(lm, lambdas, trigram):
"""Computes the log base 2 of the sum of the smoothed trigram probability"""
return np.log2(np.sum(p_smoothed(lm, lambdas, *trigram)))
def cross_entropy(lm, lambdas, test_trigrams):
"""Computes the cross entropy of the language model with respect to the test set"""
return - np.sum(log_sum(lm, lambdas, trigram) for trigram in test_trigrams) / len(test_trigrams)
def tweak_trigram_lambda(lambdas, amount):
"""Adds the given amount to the trigram lambda and removes
the same amount from the other lambdas (normalized)"""
first = np.multiply(lambdas[:-1], (1.0 - amount / np.sum(lambdas[:-1])))
last = lambdas[-1] + amount
return np.append(first, last)
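# sanity check (illustrative): tweaked lambdas should still sum to 1
check_lambdas = init_lambdas(3)
for amount in (-0.1, 0.0, 0.2):
    print(tweak_trigram_lambda(check_lambdas, amount), np.sum(tweak_trigram_lambda(check_lambdas, amount)))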
###Output
_____no_output_____
###Markdown
Discount and Boost the Trigram Probabilities Define a function to discount or boost the trigram probabilities in the language model by adding/removing probability mass to/from the trigram lambda $l_3$ smoothing parameter.
###Code
discount_ratios = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9] # Discount trigram by this ratio
boost_ratios = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99] # Boost trigram by this ratio
def boost_stats(lm, test, lambdas):
"""Calculates the cross entropy of the language model with
respect to several ratios which boost or discount the trigram
lambda parameter"""
boost = pd.DataFrame(columns=['boost_trigram_ratio', 'trigram_lambda', 'cross_entropy'])
test_trigrams = LanguageModel(test).trigrams
for p in discount_ratios:
lambdas_tweaked = tweak_trigram_lambda(lambdas, (p - 1) * lambdas[-1])
entropy = cross_entropy(lm, lambdas_tweaked, test_trigrams)
boost.loc[len(boost)] = [p - 1, lambdas_tweaked[-1], entropy]
for p in boost_ratios:
lambdas_tweaked = tweak_trigram_lambda(lambdas, p * (1 - lambdas[-1]))
entropy = cross_entropy(lm, lambdas_tweaked, test_trigrams)
boost.loc[len(boost)] = [p, lambdas_tweaked[-1], entropy]
return boost
def create_lambdas_plot(label, boost_stats):
"""Plots the boosted lambda stats"""
plt.plot(boost_stats.boost_trigram_ratio, boost_stats.cross_entropy, label='Boosted Cross Entropy')
plt.suptitle('Cross Entropy (' + label + ')')
plt.xlabel('Trigram Boost Ratio')
plt.ylabel('Cross Entropy')
_ = plt.legend()
###Output
_____no_output_____
###Markdown
Results (part 2): Calculate, Tabulate, and Graph Statistics Finally: calculate the language model of the English and Czech texts, compute the smoothed lambda parameters using the EM algorithm, and calculate the cross entropy. The cross entropy will also be calculated for discounting or boosting the trigram model by set ratios.
###Code
en = Dataset(words_en)
cz = Dataset(words_cz)
lm_en = LanguageModel(en.train)
lm_cz = LanguageModel(cz.train)
# Here we can see the 4 lambdas converge (English)
lambdas_en = em_algorithm(en.train, en.heldout)
# Here we can see the 4 lambdas converge (Czech)
lambdas_cz = em_algorithm(cz.train, cz.heldout)
boost_en = boost_stats(lm_en, en.test, lambdas_en)
boost_cz = boost_stats(lm_cz, cz.test, lambdas_cz)
###Output
_____no_output_____
###Markdown
English Cross Entropy The table below displays the cross entropy of the English test data under the language model (as trained on the training set). We see that the unmodified cross entropy is ~7.5, which increases as the trigram lambda is discounted or boosted.
###Code
# Cross entropy without lambda modifications (English)
boost_en[boost_en.boost_trigram_ratio == 0.0].cross_entropy.iloc[0]
# Cross entropy with lambda modifications (English)
boost_en
###Output
_____no_output_____
###Markdown
Czech Cross Entropy The table below displays the cross entropy of the Czech test data under the language model (as trained on the training set). We see that the unmodified cross entropy is ~10.2, which increases as the trigram lambda is discounted or boosted.
###Code
# Cross entropy without lambda modifications (Czech)
boost_cz[boost_cz.boost_trigram_ratio == 0.0].cross_entropy.iloc[0]
# Cross entropy with lambda modifications (English)
boost_cz
###Output
_____no_output_____
###Markdown
English Plot The graph below plots the cross entropy of the English text as a function of the trigram boost ratio. Negative values indicate the amount the trigram parameter was discounted, while positive values indicate how much it was boosted.
###Code
create_lambdas_plot('English', boost_en)
# The ratio of English words in the test data which have been seen in the training data
en.coverage
###Output
_____no_output_____
###Markdown
Cross entropy can be thought of intuitively as the average number of bits needed to predict an outcome from a probability distribution given we use another probability distribution to approximate it. If we calculate the cross entropy between our training data and test data as done in this experiment, then we will have a value which will tell us how close our approximation is to the true distribution. The lower the cross entropy, the better. The plot above indicates that modifying the trigram lambda parameter will only increase the cross entropy, and therefore worsen the language model's approximation with respect to the test distribution. This means that the trigram lambda is in a (local) minimum. This is as expected, as the EM algorithm is an optimization algorithm that (in this case) finds the optimal lambda weights for each ngram probability function. The final thing to note is that boosting the trigram lambda results in a much higher cross entropy than discounting it. This is because there are much fewer trigrams in the dataset, so the trigram model is much sparser than the unigram or bigram model. Thus, assigning more probability mass to the trigram model will weaken the entire model significantly. However, reducing the probability mass of the trigram model is also detrimental, as it has some useful information that can improve the language model (just not as much as the unigrams and bigrams). Czech Plot The graph below plots the cross entropy of the Czech text as a function of the trigram boost ratio. Negative values indicate the amount the trigram parameter was discounted, while positive values indicate how much it was boosted.
###Code
create_lambdas_plot('Czech', boost_cz)
# The ratio of Czech words in the test data which have been seen in the training data
cz.coverage
###Output
_____no_output_____ |
FerNote.ipynb | ###Markdown
Title 1
###Code
title = soup.find('div', class_='headline-hed-last').text
title
match = soup.find('p', class_='element element-paragraph').text
match
ulist = soup.find("ul")
ulist
items = ulist.find_all('li') and soup.find('a')
items # where there is only one <a> inside a <ul>
items = ulist.find_all('li') and soup.find_all('a')
for x in items:
    print(x.text)  # shows all the <a> elements inside a <ul> on the page
items = soup.find('ul', class_="nav navbar-nav navbar-left").find_all('li')
items
items[0].text
for x in items:
print(x.text)
###Output
Deportes
Farรกndula
Galerรญas
Internacionales
Nacional
Sucesos
El Novelรณn
|
notebooks/2021-08/20210819_light.ipynb | ###Markdown
Load train set
###Code
stock_ids = get_training_stock_ids('book_train.parquet') # all stocks by default
if not USE_ALL_STOCK_IDS:
# choose a random subset
print(f"Using a subset of {NBR_FOR_SUBSET_OF_STOCK_IDS}")
rng.shuffle(stock_ids)
#random.shuffle(stock_ids)
stock_ids = stock_ids[:NBR_FOR_SUBSET_OF_STOCK_IDS]
else:
print("Using all")
stock_ids[:3] # expect 59, 58, 23 if we're using all or 76, 73, 0 on the RANDOM_STATE of 1 if we don't use all stock ids
df_train_all = pd.read_csv(TRAIN_CSV)
df_train_all = df_train_all.set_index(['stock_id', 'time_id'])
print(df_train_all.shape)
#rows_for_stock_id_0 = df_train_all.query('stock_id == 0').shape[0]
#rows_for_stock_id_0
def show_details(df):
try:
nbr_index_levels = len(df.index.levels)
except AttributeError:
nbr_index_levels = 1
nbr_nulls = df.isnull().sum().sum()
#nulls_msg = "Has no nulls"
#if nbr_nulls==0:
nulls_msg = f"{nbr_nulls} nulls"
is_view_msg = f'is_view {df_train_all._data.is_view}'
is_single_block_msg = f'is_single_block {df_train_all._data.is_single_block}'
is_consolidated_msg = f'is_consolidated {df_train_all._data.is_consolidated()}'
print(f'[{nbr_index_levels}c] {df.shape[0]:,}x{df.shape[1]:,}, {nulls_msg}, {is_view_msg}, {is_single_block_msg}, {is_consolidated_msg}')
show_details(df_train_all)
all_time_ids = df_train_all.reset_index().time_id.unique()
#np.random.shuffle(all_time_ids) # shuffle the time_ids
rng.shuffle(all_time_ids)
print(f"We have {len(all_time_ids):,} time ids")
time_ids_train, time_ids_test = make_unique_time_ids(all_time_ids, test_size=TEST_SIZE)
assert len(time_ids_train) + len(time_ids_test) == len(all_time_ids)
assert len(time_ids_train.intersection(time_ids_test)) == 0, "Expecting no overlap between train and test time ids"
print(f"Example time ids for training, min first: {sorted(list(time_ids_train))[:5]}")
# make feature columns
def make_features_stats(df_book, agg_type, cols):
features_var1 = df_book.groupby(['stock_id', 'time_id'])[cols].agg(agg_type)
#print(type(features_var1))
if isinstance(features_var1, pd.Series):
# .size yields a series not a df
#features_var1.name = str(agg_type)
features_var1 = pd.DataFrame(features_var1, columns=[agg_type])
#pass
else:
features_var1_col_names = [f"{col}_{agg_type}" for col in cols]
features_var1.columns = features_var1_col_names
return features_var1
if True: # lightweight tests
df_book_train_stock_XX = pd.read_parquet(os.path.join(ROOT, f"book_train.parquet/stock_id=0"))
df_book_train_stock_XX["stock_id"] = 0
df_book_train_stock_XX = df_book_train_stock_XX.set_index(['stock_id', 'time_id'])
display(make_features_stats(df_book_train_stock_XX, 'nunique', ['ask_size1']).head())
def log_return(list_stock_prices):
return np.log(list_stock_prices).diff()
def realized_volatility(series_log_return):
return np.sqrt(np.sum(series_log_return**2))
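# note: the unweighted realized_volatility above appears unused in this notebook;
# the weighted variant defined below is what make_realized_volatility aggregates with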
def _realized_volatility_weighted_sub(ser, weights):
ser_weighted = ser * weights
return np.sqrt(np.sum(ser_weighted**2))
def realized_volatility_weighted(ser, weights_type):
"""Weighted volatility"""
# as a numpy array
# we drop from 12us to 3us by adding @njit to the _sub function
# we can't make _sub a closure, it loses all compilation benefits
# and we can't add njit(cache=True) in Jupyter as it can't
# find a cache location
# as a Series we have 5us and 15us w/wo @njit respectively
if isinstance(ser, pd.Series):
ser = ser.to_numpy()
nbr_items = ser.shape[0]
if weights_type == 'uniform':
weights = np.ones(nbr_items)
elif weights_type == 'linear':
weights = np.linspace(0.1, 1, nbr_items) # linear increasing weight
elif weights_type == 'half0half1':
half_way = int(ser.shape[0] / 2)
weights = np.concatenate((np.zeros(half_way), np.ones(ser.shape[0] - half_way))) # 0s then 1s weight
elif weights_type == 'geom':
weights = np.geomspace(0.01, 1, nbr_items) # geometric increase
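    else:
        # defensive addition (not in the original code): an unrecognized weights_type
        # would otherwise leave `weights` undefined and raise a NameError below
        raise ValueError(f"unknown weights_type: {weights_type!r}")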
#assert isinstance(weights_type, str) == False, f"Must not be a string like '{weights}' at this point"
return _realized_volatility_weighted_sub(ser, weights)
if True:
series_log_return = pd.Series(np.linspace(0, 10, 6))
print(realized_volatility_weighted(series_log_return, weights_type="uniform"))
#%timeit realized_volatility_weighted(series_log_return, weights_type="uniform")
def realized_volatility_weightedOLD(ser, weights=None):
"""Weighted volatility"""
#ser = series_log_return
if weights == "uniform":
weight_arr = np.ones(ser.shape[0])
elif weights == 'linear':
weight_arr = np.linspace(0.1, 1, ser.shape[0]) # linear increasing weight
#assert weights is not None, "Must have set a valid description before here"
#ser_weighted = ser * weights
return np.sqrt(np.sum((ser * weight_arr)**2))
if False:
# example usage
series_log_return = np.linspace(0, 10, 6)
weights = np.linspace(0.1, 1, series_log_return.shape[0]) # linear increasing weight
half_way = int(series_log_return.shape[0] / 2)
weights = np.concatenate((np.zeros(half_way), np.ones(series_log_return.shape[0] - half_way))) # 0s then 1s weight
weights = np.ones(series_log_return.shape[0]) # use all items equally
assert weights.shape[0] == series_log_return.shape[0]
realized_volatility_weighted(series_log_return, 'linear')
def make_wap(df_book_data, num=1, wap_colname="wap"):
"""Modifies df_book_data"""
assert num==1 or num==2
wap_numerator = (df_book_data[f'bid_price{num}'] * df_book_data[f'ask_size{num}'] +
df_book_data[f'ask_price{num}'] * df_book_data[f'bid_size{num}'])
wap_denominator = df_book_data[f'bid_size{num}'] + df_book_data[f'ask_size{num}']
df_book_data[wap_colname] = wap_numerator / wap_denominator
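# quick illustrative check of the WAP formula on a tiny hand-made frame (toy values, not real data)
toy_book = pd.DataFrame({'bid_price1': [1.00, 1.00], 'ask_price1': [1.10, 1.20],
                         'bid_size1': [10, 5], 'ask_size1': [10, 20]})
make_wap(toy_book, 1, 'wap')
print(toy_book['wap'])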
@memory.cache
def make_realized_volatility(df_book_data, log_return_name='log_return', wap_colname='wap', weights=None):
"""Consume wap column"""
df_book_data[log_return_name] = df_book_data.groupby(['stock_id', 'time_id'])[wap_colname].apply(log_return)
df_book_data = df_book_data[~df_book_data[log_return_name].isnull()]
df_realized_vol_per_stock = pd.DataFrame(df_book_data.groupby(['stock_id', 'time_id'])[log_return_name].agg(realized_volatility_weighted, weights))
return df_realized_vol_per_stock
if True: # lightweight tests
df_book_train_stock_XX = pd.read_parquet(os.path.join(ROOT, f"book_train.parquet/stock_id=0"))
df_book_train_stock_XX["stock_id"] = 0
df_book_train_stock_XX = df_book_train_stock_XX.set_index(['stock_id', 'time_id'])
make_wap(df_book_train_stock_XX, 2) # adds 'wap' column
#df_realized_vol_per_stockXX = make_realized_volatility(df_book_train_stock_XX, log_return_name="log_return2", weights='linear')
#display(df_realized_vol_per_stockXX)
@memory.cache
def load_data_build_features(stock_id, ROOT, filename, cols, df_target):
# filename e.g. book_train.parquet
assert isinstance(stock_id, int)
df_book_train_stock_X = pd.read_parquet(
os.path.join(ROOT, f"{filename}/stock_id={stock_id}")
)
df_book_train_stock_X["stock_id"] = stock_id
df_book_train_stock_X = df_book_train_stock_X.set_index(['stock_id', 'time_id'])
#assert df_book_train_stock_X.shape[0] > rows_for_stock_id_0, (df_book_train_stock_X.shape[0], rows_for_stock_id_0)
#df_book_train_stock_X_gt500 = df_book_train_stock_X.query("seconds_in_bucket>500").copy()
#df_realized_vol_per_stock_short500 = add_wap_make_realized_volatility(df_book_train_stock_X_gt500, log_return_name='log_return_gt500sec')
#df_book_train_stock_X_gt300 = df_book_train_stock_X.query("seconds_in_bucket>300").copy()
#df_realized_vol_per_stock_short300 = add_wap_make_realized_volatility(df_book_train_stock_X_gt300, log_return_name='log_return_gt300sec')
make_wap(df_book_train_stock_X, 2, "wap2")
df_realized_vol_per_stock_wap2_uniform = make_realized_volatility(df_book_train_stock_X, log_return_name="log_return2_uniform", wap_colname="wap2", weights='uniform')
df_realized_vol_per_stock_wap2_linear = make_realized_volatility(df_book_train_stock_X, log_return_name="log_return2_linear", wap_colname="wap2", weights='linear')
df_realized_vol_per_stock_wap2_half0half1 = make_realized_volatility(df_book_train_stock_X, log_return_name="log_return2_half0half1", wap_colname="wap2", weights='half0half1')
make_wap(df_book_train_stock_X, 1, "wap") # adds 'wap' column
df_realized_vol_per_stock_wap1_uniform = make_realized_volatility(df_book_train_stock_X, log_return_name="log_return1_uniform", weights='uniform')
df_realized_vol_per_stock_wap1_linear = make_realized_volatility(df_book_train_stock_X, log_return_name="log_return1_linear", weights='linear')
df_realized_vol_per_stock_wap1_half0half1 = make_realized_volatility(df_book_train_stock_X, log_return_name="log_return1_half0half1", weights='half0half1')
features_var1 = make_features_stats(df_book_train_stock_X, 'var', cols)
features_mean1 = make_features_stats(df_book_train_stock_X, 'mean', cols)
features_size1 = make_features_stats(df_book_train_stock_X, 'size', cols)
features_min1 = make_features_stats(df_book_train_stock_X, 'min', cols)
features_max1 = make_features_stats(df_book_train_stock_X, 'max', cols)
features_nunique1 = make_features_stats(df_book_train_stock_X, 'nunique', cols)
df_train_stock_X = df_target.query('stock_id == @stock_id')
to_merge = [df_train_stock_X,
features_var1, features_mean1, features_size1,
features_min1, features_max1, features_nunique1,
df_realized_vol_per_stock_wap1_uniform,
df_realized_vol_per_stock_wap2_uniform,
df_realized_vol_per_stock_wap1_linear,
df_realized_vol_per_stock_wap2_linear,
df_realized_vol_per_stock_wap1_half0half1,
df_realized_vol_per_stock_wap2_half0half1]
row_lengths = [df.shape[0] for df in to_merge]
assert len(set(row_lengths)) == 1, row_lengths # should all be same length
train_merged = pd.concat(to_merge, axis=1)
if 'target' in train_merged.columns:
# no need to check for duplication on the test set
features = train_merged.drop(columns='target').columns
#print(features)
assert len(set(features)) == len(features), f"Feature duplication! {len(set(features))} vs {len(features)}"
return train_merged
#if 'memory' in dir():
# # only setup local cache if we're running locally in development
# load_data_build_features = memory.cache(load_data_build_features)
cols = ['bid_price1', 'ask_price1', 'bid_price2', 'ask_price2',]
cols += ['bid_size1', 'ask_size1', 'bid_size2', 'ask_size2']
if True:
# test...
train_mergedXX = load_data_build_features(0, ROOT, 'book_train.parquet', cols, df_train_all)
display(train_mergedXX)
from joblib import Parallel, delayed
print(f'Iterating over {len(stock_ids)} stocks:')
all_train_merged = Parallel(n_jobs=-1, verbose=10)(delayed(load_data_build_features)(stock_id, ROOT, 'book_train.parquet', cols, df_train_all) for stock_id in stock_ids)
# join all the partial results back together
train_merged = pd.concat(all_train_merged)
show_details(train_merged)
train_merged.head()
features = train_merged.drop(columns='target').columns
print(features)
assert len(set(features)) == len(features), f"{len(set(features))} vs {len(features)} features, we should not have any duplicates"
###Output
Index(['bid_price1_var', 'ask_price1_var', 'bid_price2_var', 'ask_price2_var',
'bid_size1_var', 'ask_size1_var', 'bid_size2_var', 'ask_size2_var',
'bid_price1_mean', 'ask_price1_mean', 'bid_price2_mean',
'ask_price2_mean', 'bid_size1_mean', 'ask_size1_mean', 'bid_size2_mean',
'ask_size2_mean', 'size', 'bid_price1_min', 'ask_price1_min',
'bid_price2_min', 'ask_price2_min', 'bid_size1_min', 'ask_size1_min',
'bid_size2_min', 'ask_size2_min', 'bid_price1_max', 'ask_price1_max',
'bid_price2_max', 'ask_price2_max', 'bid_size1_max', 'ask_size1_max',
'bid_size2_max', 'ask_size2_max', 'bid_price1_nunique',
'ask_price1_nunique', 'bid_price2_nunique', 'ask_price2_nunique',
'bid_size1_nunique', 'ask_size1_nunique', 'bid_size2_nunique',
'ask_size2_nunique', 'log_return1_uniform', 'log_return2_uniform',
'log_return1_linear', 'log_return2_linear', 'log_return1_half0half1',
'log_return2_half0half1'],
dtype='object')
In [192] used 137.4570 MiB RAM in 0.21s, peaked 0.00 MiB above current, total RAM usage 2176.74 MiB
###Markdown
Features
###Code
def train_test_split(df, target_col, time_ids_train, time_ids_test):
X_train = df.query('time_id in @time_ids_train').drop(columns=[target_col, 'time_id'])
X_test = df.query('time_id in @time_ids_test').drop(columns=[target_col, 'time_id'])
y_train = df.query('time_id in @time_ids_train')[target_col]
y_test = df.query('time_id in @time_ids_test')[target_col]
return X_train, X_test, y_train, y_test
feature_cols = list(features) + ['stock_id']
X_train, X_test, y_train, y_test = train_test_split(train_merged.reset_index()[feature_cols+['time_id', 'target']], 'target', time_ids_train, time_ids_test)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
X_train.head(3)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
ML on a train/test split
###Code
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
import xgboost as xgb
from lightgbm import LGBMRegressor
from sklearn.experimental import enable_hist_gradient_boosting
from sklearn.ensemble import HistGradientBoostingRegressor
#est = LinearRegression()
#est = RandomForestRegressor(n_estimators=10, n_jobs=-1, random_state=RANDOM_STATE) # default n_estimators==100
#est = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=RANDOM_STATE) # default n_estimators==100
#est = GradientBoostingRegressor(random_state=RANDOM_STATE)
#est = HistGradientBoostingRegressor(random_state=RANDOM_STATE)
# https://xgboost.readthedocs.io/en/latest/python/python_api.html
#tree_method='exact' default
#est = xgb.XGBRegressor(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1, max_depth = 5, alpha = 10, n_estimators = 10)
est = xgb.XGBRegressor(tree_method='hist', )
#est = LGBMRegressor()
est.fit(X_train, y_train)
from sklearn.metrics import r2_score
print(f"USE_ALL_STOCK_IDS: {USE_ALL_STOCK_IDS}")
print(f"{df_train_all.reset_index().stock_id.unique().shape[0]} unique stock ids, test set is {TEST_SIZE*100:0.1f}%")
print(f"Features:", feature_cols)
print(est)
if X_test.shape[0] > 0:
y_pred = est.predict(X_test)
score = r2_score(y_test, y_pred)
rmspe = rmspe_score(y_test, y_pred)
print(f"rmspe score {rmspe:0.3f}, r^2 score {score:0.3f} on {y_pred.shape[0]:,} predictions")
else:
print('No testing rows in X_test')
%%time
scores = []
if TEST_SIZE > 0:
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GroupKFold.html
# note the splits appear to be deterministic, possibly on discovery order
from sklearn.model_selection import GroupKFold
train_merged_no_idx = train_merged.reset_index()
groups = train_merged_no_idx['time_id']
group_kfold = GroupKFold(n_splits=3)
X_all = train_merged_no_idx[feature_cols]
y_all = train_merged_no_idx['target']
print(group_kfold.get_n_splits(X_all, y_all, groups))
for train_index, test_index in group_kfold.split(X_all, y_all, groups):
print("TRAIN:", train_index, "TEST:", test_index)
X_train, X_test = X_all.loc[train_index], X_all.loc[test_index]
y_train, y_test = y_all.loc[train_index], y_all.loc[test_index]
est.fit(X_train, y_train)
y_pred = est.predict(X_test)
score = r2_score(y_test, y_pred)
rmspe = rmspe_score(y_test, y_pred)
print(f"rmspe score {rmspe:0.3f}, r^2 score {score:0.3f} on {y_pred.shape[0]:,} predictions")
scores.append({'r2': score, 'rmspe': rmspe})
if len(scores) > 0:
# only show results if we've used cross validation
df_scores = pd.DataFrame(scores).T
folds = df_scores.columns.values
df_scores['std'] = df_scores[folds].std(axis=1)
df_scores['mean'] = df_scores[folds].mean(axis=1)
display(df_scores)
if X_test.shape[0] > 0:
df_preds = pd.DataFrame({'y_test': y_test, 'y_pred': y_pred})
df_preds['abs_diff'] = (df_preds['y_test'] - df_preds['y_pred']).abs()
display(df_preds.sort_values('abs_diff', ascending=False))
#item_to_debug = 32451
#train_merged.reset_index().loc[item_to_debug][['stock_id', 'time_id', 'target']]
try:
if X_test.shape[0] > 0:
from yellowbrick.regressor import PredictionError
visualizer = PredictionError(est)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
ax_subplot = visualizer.show()
except ModuleNotFoundError:
print('no yellowbrick')
if ENV_HOME:
import eli5
display(eli5.show_weights(est, feature_names=feature_cols, top=30))
if 'feature_importances_' in dir(est):
feature_col = 'feature_importances_'
elif 'coef_' in dir(est):
feature_col = 'coef_'
df_features = pd.DataFrame(zip(getattr(est, feature_col), feature_cols), columns=['importance', 'feature']).set_index('importance')
df_features.sort_index(ascending=False)
###Output
_____no_output_____
###Markdown
Make predictions
###Code
len(stock_ids) # expecting 112
if USE_TEST_LOCAL_6_ITEMS: # True if debugging
# book train as a substitute
df_test_all = pd.read_csv(os.path.join(ROOT, 'test_local.csv'))
df_test_all = df_test_all.rename(columns={'target': 'train_target'})
TEST_FOLDER = 'book_test_local.parquet'
assert ENV_HOME == True
else:
df_test_all = pd.read_csv(TEST_CSV)
if df_test_all.shape[0] == 3: # kaggle test data
df_test_all = df_test_all[:1] # cut out 2 rows so predictions work
TEST_FOLDER = 'book_test.parquet'
print(ROOT, TEST_FOLDER)
df_test_all = df_test_all.set_index(['stock_id', 'time_id'])
show_details(df_test_all)
test_set_predictions = []
stock_ids_test = get_training_stock_ids(TEST_FOLDER) # all stocks by default
df_test_predictions = pd.DataFrame() # prediction set to build up
for stock_id in tqdm(stock_ids_test):
df_test_all_X = df_test_all.query('stock_id==@stock_id').copy()
test_merged = load_data_build_features(stock_id, ROOT, TEST_FOLDER, cols, df_test_all)
test_set_predictions_X = est.predict(test_merged.reset_index()[list(features) + ['stock_id']])
df_test_all_X['target'] = test_set_predictions_X
df_test_predictions = pd.concat((df_test_predictions, df_test_all_X))
assert df_test_all.shape[0] == df_test_predictions.shape[0], "Expecting all rows to be predicted"
print(f"Writing {df_test_predictions.shape[0]} rows to submission.csv on {datetime.datetime.utcnow()}")
df_test_predictions.reset_index()[['row_id', 'target']].to_csv('submission.csv', index=False)
show_details(df_test_predictions)
print(f'Notebook took {datetime.datetime.utcnow()-t1_notebook_start} to run')
if not ENV_HOME:
assert USE_ALL_STOCK_IDS, "If we're on Kaggle but not using all stock_ids, we're not ready to submit, so fail here to remind me to change USSE_ALL_STOCK_IDS!"
###Output
In [213] used 0.0000 MiB RAM in 0.10s, peaked 0.00 MiB above current, total RAM usage 2250.90 MiB
|
lab/1. Lab - Introduction to UpLabel.ipynb | ###Markdown
Introduction to UpLabel UpLabel is a lightweight, Python-based and modular tool which serves to support your machine learning tasks by making the data labeling process more efficient and structured. UpLabel is presented and tested within the MLADS session *"Distributed and Automated Data Labeling using Active Learning: Insights from the Field"*. Session Description High-quality training data is essential for succeeding at any supervised machine learning task. There are numerous open source tools that allow for a structured approach to labeling. Instead of randomly choosing labeling data, we make use of machine learning itself for continuously improving the training data quality. Based on the expertise of the labelers as well as the complexity of the data, labeling tasks can be distributed in an intelligent way. Based on a real-world example from one of our customers, we will show how to apply the latest technology to optimize the task of labeling data for NLP problems. Software Component and User Flow The following images serve to illustrate the user labeler flow and the software component flow. Software Component Flow--- User Flow--- Prepare Workspace Required libraries are loaded below; for the most part they are imported by the main script.
###Code
import matplotlib as plt
import sys
sys.path.append('../code')
import main
%matplotlib inline
###Output
_____no_output_____
###Markdown
Task Setup There are two possible ways to go for this session:
1. You can use our example data (German news data)
2. Or your own data, if you brought some.

If you want to use our example:
- Use 'lab' as your project reference below (see step *"Run Iteration 0"*). The example case will be loaded.
- Set the `dir` parameter to the folder where the lab data is located, e.g. `C:/uplabel/data/lab/`

If you brought your own data:
- Either create a task config (copy the code below and save it as `params.yml`) and save it in a subfolder of `task`
- The task can be named as you like
- Or simply rename the folder "sample" to your desired project name and use the sample file in it
- Set the `dir` parameter to the folder where your data is going to be located

```yaml
data:
  dir: ~/[YOUR DIRECTORY GOES HERE]/[projectname]
  source: input.txt
  cols: ['text','label']
  extras: []
  target_column: label
  text_column: text
parameters:
  task: cat
  language: de
  labelers: 3
  min_split_size: 0
  max_split_size : 300
  quality: 1
  estimate_clusters: True
  quality_size: 0.1
  overlap_size: 0.1
```
###Code
project_name = 'news_en'
###Output
_____no_output_____
###Markdown
Run Iteration 0- This is the start of the initial iteration of the UpLabel process. - Feel free to create your own project by adding a parameter file to `\tasks` and your data to `\data\[project name]`. Don't forget to update the `'project_name'` variable above with the name of your task. Note: you can add `'debug_iter_id=X'` to repeat an iteration, where X is your iteration number.
###Code
main.Main(project_name)
###Output
_____no_output_____
###Markdown
Fun part: label your data- After the first iteration, you can start labeling your data- You can find the data splits in the folder you have set to the `dir`-parameter- File names are named this way: - `[original file name]-it_[iteration number]-split_[split number].xlsx`, like `data-it_1-split_1.xlsx`- Open your data and label it! Run Iteration 1
###Code
main.Main(project_name, debug_iter_id=1)
###Output
_____no_output_____
###Markdown
Label some more! Run Iteration 2
###Code
main.Main(project_name, debug_iter_id=2)
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/fit_DM_PPI-checkpoint.ipynb | ###Markdown
Masses of compact remnant from CO core massesauthor: [M. Renzo]([email protected])
###Code
import numpy as np
import sys
import scipy
from scipy.optimize import curve_fit
# needed for the figures below (plt.figure, gridspec.GridSpec)
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
# optional for prettier plots
sys.path.append('/mnt/home/mrenzo/codes/python_stuff/plotFunc/')
from plotDefaults import set_plot_defaults_from_matplotlibrc
set_plot_defaults_from_matplotlibrc()
###Output
_____no_output_____
###Markdown
IntroductionWe want to develop a new mapping between star (and core) mass and compact object remnant for rapid population synthesis calculations. Our aim is to have one way to calculate this across the entire mass range (from neutron stars to above the pair-instability black hole mass gap). Moreover, we want the mapping to be continuous. This is not because it is a priori unphysical to have discontinuities, but because we don't want to artificially introduce features. The idea is to calculate the mass of the compact object remnant as the total mass minus various mass loss terms: $$ M_\mathrm{remnant} = M_\mathrm{tot} - \left( \Delta M_\mathrm{PPI} + \Delta M_\mathrm{NLW} + \Delta M_\mathrm{SN} + \Delta M_{\nu, \mathrm{core}} + \Delta M_\mathrm{lGRB} + \cdots \right) $$ In this way, pre-explosion binary interactions reduce $M_\mathrm{tot}$ already (and possibly modify the core masses), and then each mass loss process at core-collapse can be added separately. This can also be extended to add, say, long gamma-ray burst mass loss (as a function of core spin), etc. Note that "building" the compact object mass from the bottom up (e.g., the [Fryer et al. 2012](https://ui.adsabs.harvard.edu/abs/2012ApJ...749...91F/abstract) approach of starting with a proto neutron star mass and accreting the fallback on it) makes it very difficult to use observationally informed values for some of the terms in parentheses. Conversely, in our approach of "building" the compact object by removing the ejecta from the total mass, we can easily use observationally informed quantities for each term here. If one (or more) of these terms has a stochastic component, this can naturally produce the scatter in compact object masses expected because of the stochasticity in supernova explosions (e.g., [Mandel & Mueller 2020](https://ui.adsabs.harvard.edu/abs/2020MNRAS.499.3214M/abstract)). In the following, we explain and calculate each mass loss term separately. Pulsational-pair instability mass loss $\Delta M_\mathrm{PPI}\equiv M_\mathrm{PPI}(M_\mathrm{CO})$This term represents the amount of mass lost in pulsational pair-instability SNe. Although the delay times between pulses (and core-collapse) can be very long (especially at the highest mass end), this is treated as instantaneous mass loss at the time of core-collapse in rapid population synthesis calculations. We do not improve on this here. Many codes use the fit from [Farmer et al. 2019](https://ui.adsabs.harvard.edu/abs/2019ApJ...887...53F/abstract), which however is discontinuous with [Fryer et al. 2012](https://ui.adsabs.harvard.edu/abs/2012ApJ...749...91F/abstract), typically used for core-collapse SNe. However, that is not a fit to the amount of mass *lost*, which is what we need here. One is provided in [Renzo et al. 2020](https://ui.adsabs.harvard.edu/abs/2020A%26A...640A..56R/abstract), but it does not contain the metallicity dependence, which is desirable. Thus, we re-fit the Z-dependent data from [Farmer et al. 2019](https://ui.adsabs.harvard.edu/abs/2019ApJ...887...53F/abstract). Below, `datafile1.txt` is a cleaned-up version of the `datafile1.txt` available on [zenodo](https://zenodo.org/record/3346593). We note that [Farmer et al. 2019](https://ui.adsabs.harvard.edu/abs/2019ApJ...887...53F/abstract) simulated only He cores, and [Renzo et al. 2020](https://ui.adsabs.harvard.edu/abs/2020A%26A...640A..56R/abstract) showed that the H-rich envelope, if present, is likely to fly away during the first pulse. 
Therefore, one should *add any residual H-rich envelope present in the star at the time of the pulsations* to the amount of mass loss $\Delta M_\mathrm{PPI}$ that we fit here.
###Code
datafile = "datafile1.txt"
src = np.genfromtxt(datafile, skip_header=1)
with open(datafile, 'r') as f:
for i, line in enumerate(f):
if i==0:
col = line.split()
print(col)
break
def linear(x, a, b):
return a*x+b
def fitting_func_Z(data, a, b, c, d):
""" shifted cube plus square term, with the coefficient of the cubic term linear function in log10(Z) """
mco = data[0]
Z = data[1]
return linear(np.log10(Z),a,b)*(mco-c)**3+d*(mco-c)**2
fig=plt.figure(figsize=(12,20))
gs = gridspec.GridSpec(7, 1)
gs.update(wspace=0.00, hspace=0.00)
ax1 = fig.add_subplot(gs[0])
ax2 = fig.add_subplot(gs[1])
ax3 = fig.add_subplot(gs[2])
ax4 = fig.add_subplot(gs[3])
ax5 = fig.add_subplot(gs[4])
ax6 = fig.add_subplot(gs[5])
ax7 = fig.add_subplot(gs[6])
axes = [ax1,ax2,ax3,ax4,ax5,ax6,ax7]
rainbow = plt.cm.rainbow(np.linspace(0,1,8))
# --------------------------------------------------------------------------------------
# fit happens here!
# reload data
Mco = src[:, col.index("Mco")]
Z = src[:, col.index('Z')]
Mhe = src[:, col.index('Mhe')]
dMpulse = src[:, col.index('dMpulse')]
# fit only in the PPISN range -- neglect the Z dependence of this range
ind_for_fit = (Mco>=38) & (Mco<=60)
popt, pcov = curve_fit(fitting_func_Z, [Mco[ind_for_fit], Z[ind_for_fit]], dMpulse[ind_for_fit])
print(popt)
fit = "$\Delta M_\mathrm{PPI} = ("+f"{popt[0]:.4f}"+r"\log_{10}(Z)+"+f"{popt[1]:.4f})"+r"\times (M_\mathrm{CO}+"+f"{popt[2]:.1f}"+")^3"+f"{popt[3]:.4f}"+r"\times (M_\mathrm{CO}+"+f"{popt[2]:.1f}"+")^2$"
ax1.set_title(fit, fontsize=20)
# --------------------------------------------------------------------------------------
for i, metallicity in enumerate(sorted(np.unique(Z))):
ax = axes[i]
ax.axhline(0, 0,1,lw='1', c='k', ls='--', zorder=0)
# first plot data
x = Mco[Z==metallicity]
y = dMpulse[Z==metallicity]
ax.scatter(x, y, color=rainbow[i], label=r"$Z="+f"{metallicity:.0e}"+"$")
# then plot fit
ind_for_fit = (x>=38) & (x<=60)
x = x[ind_for_fit]
ax.plot(x, fitting_func_Z([x,[metallicity]*len(x)],*popt), c=rainbow[i])
# larger range to show the fit
xx = np.linspace(30,60,1000)
yy = fitting_func_Z([xx,[metallicity]*len(xx)],*popt)
ax.plot(xx, yy, c=rainbow[i], ls="--", lw=8, alpha=0.5, zorder=0)
# ----------
ax.legend(fontsize=20, handletextpad=0.1, frameon=True)
ax.set_ylim(-5,42)
ax.set_xlim(30,75)
if ax != ax7:
ax.set_xticklabels([])
ax4.set_ylabel(r"$\Delta M_\mathrm{PPI} \ [M_\odot]$")
ax7.set_xlabel(r"$M_\mathrm{CO} \ [M_\odot]$")
plt.savefig('fit1.png')
###Output
_____no_output_____
###Markdown
Notes on the PPI mass loss formulaTherefore we recommend the fit above for $38<M_\mathrm{CO} / M_\odot<60$, $\Delta M_\mathrm{PPI}=M_\mathrm{tot}$ for $60\leq M_\mathrm{CO} / M_\odot< 130$, and 0 above. If the pre-pulse star has an H-rich envelope, the entirety of the H-rich envelope should be added to $\Delta M_\mathrm{PPI}$ - and then we set $\Delta M_\mathrm{NLW} =0$. Note that our fit: - neglects the mild Z-dependence of the edges of the gap (see [Farmer et al. 2019](https://ui.adsabs.harvard.edu/abs/2019ApJ...887...53F/abstract)) - neglects the delay between pulses and intra-pulse binary interactions (see [Marchant et al. 2019](https://ui.adsabs.harvard.edu/abs/2019ApJ...882...36M/abstract)) - may not properly resolve the least massive BHs that can be made post-pulse (see [Marchant et al. 2019](https://ui.adsabs.harvard.edu/abs/2019ApJ...882...36M/abstract)) Neutrino caused envelope losses $\Delta M_{\rm NLW}$This is the mass loss caused by the [Nadhezin 1980](https://ui.adsabs.harvard.edu/abs/1980Ap%26SS..69..115N/abstract) - [Lovegrove & Woosley](https://ui.adsabs.harvard.edu/search/p_=0&q=%5Elovegrove%202013%20&sort=date%20desc%2C%20bibcode%20desc) mechanism: the losses of the neutrinos (see above) change the gravitational potential of the core and cause a shock wave that can eject loosely bound envelopes. If the envelope is not present (because another mechanism, e.g., binary interactions or pulsational pair instability, has removed it before), this should be zero.
###Code
def delta_m_nadhezin_lovegrove_woosley(star):
""" See Nadhezin 1980, Lovegrove & Woosley 2013, Fernandez et al. 2018, Ivanov & Fernandez 2021 """
""" this should also be zero post-PPISN """
if star == RSG:
""" if H-rich and large radius """
return star.mtot - star.mhe
else:
return 0
###Output
_____no_output_____
###Markdown
Core-collapse SN mass loss $\Delta M_\mathrm{SN}\equiv\Delta M_\mathrm{SN}(M_\mathrm{CO})$This is a very uncertain amount of mass loss: the supernova ejecta. We still use the *delayed* algorithm from [Fryer et al. 2012](https://ui.adsabs.harvard.edu/abs/2012ApJ...749...91F/abstract), though these results should be revisited.
###Code
def delta_m_SN(star):
""" this is Fryer+12 """
###Output
_____no_output_____
###Markdown
Neutrino core losses $\Delta M_{\nu, \mathrm{core}}\equiv \Delta M_{\nu, \mathrm{core}}(M_\mathrm{remnant})$When a core collapses, it releases about $10^{53}$ ergs of gravitational potential energy to neutrinos. These leave the core. The neutrino emission is estimated following [Fryer et al. 2012](https://ui.adsabs.harvard.edu/abs/2012ApJ...749...91F/abstract), but we cap it at $10^{54}\ \mathrm{erg}/c^2\simeq0.5\,M_\odot$.
###Code
def delta_m_neutrino_core_losses(m_compact_object):
""" the amount of mass lost to neutrinos correspond to the minimum between 0.1 times the compact object and 0.5Msun~10^54 ergs/c^2 """
return min(0.1*m_compact_object, 0.5)
###Output
_____no_output_____
###Markdown
Miscellanea and sanity checksOne should always check that: $$ M_{\rm remnant} \leq M_{\rm tot} $$ The fallback fraction, for kick-related problems, can then be easily calculated as: $$ f_b = (M_{\rm tot}-M_{\rm remnant})/M_{\rm tot} $$ Moreover, if the PPISN removes the H-rich envelope, then $\Delta M_{\rm NLW}=0$ (there is no envelope to be lost!). A minimal sketch putting these pieces together is given below.
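A minimal sketch (assumed variable names, not code from the paper) of the mass budget and the checks above:
```python
def remnant_mass(m_tot, dm_ppi=0.0, dm_nlw=0.0, dm_sn=0.0, dm_nu_core=0.0):
    """Combine the mass-loss terms discussed above into a remnant mass."""
    m_rem = max(m_tot - (dm_ppi + dm_nlw + dm_sn + dm_nu_core), 0.0)
    assert m_rem <= m_tot, "sanity check: the remnant cannot exceed the total mass"
    return m_rem

def fallback_fraction(m_tot, m_rem):
    """f_b = (M_tot - M_remnant) / M_tot, as defined in the text above."""
    return (m_tot - m_rem) / m_tot
```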
###Code
# Farmer+19 Eq. 1
def farmer19(mco, Z=0.001):
"""
gets CO core mass in Msun units, returns the value of Eq. 1 from Farmer+19
If a metallicity Z is not given, assume the baseline value of Farmer+19
N.B. this fit is accurate at ~20% level
"""
mco = np.atleast_1d(mco)
# initialize at zero, takes care of PISN
m_remnant = np.zeros(len(mco))
# overwrite low mass
i = mco<38
m_remnant[i] = mco[i]+4
# overwrite PPISN
j = (mco >= 38) & (mco<=60)
# fit coefficients
a1 = -0.096
a2 = 8.564
a3 = -2.07
a4 = -152.97
m_remnant[j] = a1*mco[j]**2+a2*mco[j]+a3*np.log10(Z)+a4
# overwrite the highest most masses -- direct collapse
k = mco >= 130
m_remnant[k] = mco[k]
return m_remnant
# minimum post PPI BH mass
a1 = -0.096
a2 = 8.564
a3 = -2.07
a4 = -152.97
mco = 60
m_remnant = a1*mco**2+a2*mco+a3*np.log10(0.001)+a4
print(m_remnant)
fig=plt.figure()
gs = gridspec.GridSpec(100, 110)
ax = fig.add_subplot(gs[:,:])
mco = np.linspace(25, 250, 2000)
m_bh = farmer19(mco)
ax.scatter(mco, m_bh)
ax.set_xlabel(r"$M_\mathrm{CO} \ [M_\odot]$")
ax.set_ylabel(r"$M_\mathrm{remnant}\ [M_\odot]$")
###Output
_____no_output_____ |
lessons/06-decision-trees-random-forests/02-random-forests.ipynb | ###Markdown
Random ForestsRandom Forests are a popular form of "ensembling", the strategy of combining multiple different kinds of ML models to make a single decision. In ensembling in general, any number of models might be combined, many different types of models might be used, and their votes might be weighted or unweighted. A Random Forest is a specific strategy for applying the concept of ensembling to a series of Decision Trees. Two techniques are used in order to ensure that each Decision Tree is different from the other trees in the forest: 1. Bagging (short for bootstrap aggregation), and 2. Random feature selection. Bagging is a fancy term for sampling with replacement. For us, it means that for every underlying decision tree we randomly sample the items in our training data, with replacement, typically up to the size of the training data (but this is a hyperparameter you can change). In a standard decision tree we consider EVERY feature and EVERY possible split point per feature. With random feature selection we instead specify a number of features to consider for split points when we first build the model. Every time we make a new split, we randomly select that number of features to consider. Among the selected features every split point will still be considered, and the optimum split will still be chosen, but the model will not have access to every possible feature at every possible split point. These two changes generally make RFs a bit more robust than DTs. In particular, an RF is less prone to overfitting than a DT. On the other hand, DTs are generally faster to train and use, since you're only building one tree as opposed to many. Anything that you can control via hyperparameters in a DT can be applied in an RF, as well as a few unique hyperparameters such as the number of trees to build (a short sketch of these knobs follows below).
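A minimal sketch (hypothetical values, not tuned for this dataset) of the knobs described above: `bootstrap`/`max_samples` control the bagging step, `max_features` controls the random feature selection at each split, and `n_estimators` sets the number of trees.
```python
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=200,      # number of trees in the forest
    bootstrap=True,        # sample training rows with replacement (bagging)
    max_samples=0.8,       # each tree sees a bootstrap sample of 80% of the rows
    max_features="sqrt",   # number of features considered at each split
    random_state=0,
)
```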
###Code
# Let's look at the same examples from the DT lessons.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split, KFold
from sklearn.ensemble import RandomForestClassifier
# Load the data
heart_dataset = pd.read_csv('../../datasets/uci-heart-disease/heart.csv')
# Split the data into input and labels
labels = heart_dataset['target']
input_data = heart_dataset.drop(columns=['target'])
# Split the data into training and test
training_data, test_data, training_labels, test_labels = train_test_split(
input_data,
labels,
test_size=0.20
)
model = RandomForestClassifier()
model.fit(training_data, training_labels)
model.score(test_data, test_labels)
# We can still get the feature importances:
feat_importances = pd.Series(model.feature_importances_, index=training_data.columns)
feat_importances.sort_values().plot(kind='barh', figsize=(10,10))
from sklearn.ensemble import RandomForestRegressor
# Load the data
fish_dataset = pd.read_csv('../../datasets/fish/Fish.csv')
# Split the data into input and labels โ we're trying to predict fish weight based on
# its size and species
labels = fish_dataset['Weight']
input_data = fish_dataset.drop(columns=['Weight'])
# We have one categorical parameter, so lets tell pandas to one-hot encode this value.
input_data = pd.get_dummies(input_data, columns=['Species'])
# Split the data into training and test
training_data, test_data, training_labels, test_labels = train_test_split(
input_data,
labels,
test_size=0.20
)
model = RandomForestRegressor()
model.fit(training_data, training_labels)
model.score(test_data, test_labels)
feat_importances = pd.Series(model.feature_importances_, index=training_data.columns)
feat_importances.sort_values().plot(kind='barh', figsize=(10,10))
###Output
_____no_output_____ |
Mathematics/Statistics/Statistics and Probability Python Notebooks/Computational and Inferential Thinking - The Foundations of Data Science (book)/Notebooks - by chapter/5. Python Sequences/5.2.1 Ranges.ipynb | ###Markdown
RangesA *range* is an array of numbers in increasing or decreasing order, each separated by a regular interval. Ranges are useful in a surprisingly large number of situations, so it's worthwhile to learn about them. Ranges are defined using the `np.arange` function, which takes either one, two, or three arguments: a start, an end, and a 'step'. If you pass one argument to `np.arange`, this becomes the `end` value, with `start=0`, `step=1` assumed. Two arguments give the `start` and `end` with `step=1` assumed. Three arguments give the `start`, `end` and `step` explicitly. A range always includes its `start` value, but does not include its `end` value. It counts up by `step`, and it stops before it gets to the `end`. np.arange(end): An array starting with 0 of increasing consecutive integers, stopping before end.
###Code
np.arange(5)
###Output
_____no_output_____
###Markdown
Notice how the array starts at 0 and goes only up to 4, not to the end value of 5. np.arange(start, end): An array of consecutive increasing integers from start, stopping before end.
###Code
np.arange(3, 9)
###Output
_____no_output_____
###Markdown
np.arange(start, end, step): A range with a difference of step between each pair of consecutive values, starting from start and stopping before end.
###Code
np.arange(3, 30, 5)
###Output
_____no_output_____
###Markdown
This array starts at 3, then takes a step of 5 to get to 8, then another step of 5 to get to 13, and so on.When you specify a step, the start, end, and step can all be either positive or negative and may be whole numbers or fractions.
###Code
np.arange(1.5, -2, -0.5)
###Output
_____no_output_____
###Markdown
Example: Leibniz's formula for $\pi$ The great German mathematician and philosopher [Gottfried Wilhelm Leibniz](https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz) (1646 - 1716) discovered a wonderful formula for $\pi$ as an infinite sum of simple fractions. The formula is$$\pi = 4 \cdot \left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \frac{1}{11} + \dots\right)$$ Though some math is needed to establish this, we can use arrays to convince ourselves that the formula works. Let's calculate the first 5000 terms of Leibniz's infinite sum and see if it is close to $\pi$.$$4 \cdot \left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \frac{1}{11} + \dots - \frac{1}{9999} \right)$$We will calculate this finite sum by adding all the positive terms first and then subtracting the sum of all the negative terms [[1]](footnotes):$$4 \cdot \left( \left(1 + \frac{1}{5} + \frac{1}{9} + \dots + \frac{1}{9997} \right) - \left(\frac{1}{3} + \frac{1}{7} + \frac{1}{11} + \dots + \frac{1}{9999} \right) \right)$$ The positive terms in the sum have 1, 5, 9, and so on in the denominators. The array `by_four_to_20` contains these numbers up to 17:
###Code
by_four_to_20 = np.arange(1, 20, 4)
by_four_to_20
###Output
_____no_output_____
###Markdown
To get an accurate approximation to $\pi$, we'll use the much longer array `positive_term_denominators`.
###Code
positive_term_denominators = np.arange(1, 10000, 4)
positive_term_denominators
###Output
_____no_output_____
###Markdown
The positive terms we actually want to add together are just 1 over these denominators:
###Code
positive_terms = 1 / positive_term_denominators
###Output
_____no_output_____
###Markdown
The negative terms have 3, 7, 11, and so on on in their denominators. This array is just 2 added to `positive_term_denominators`.
###Code
negative_terms = 1 / (positive_term_denominators + 2)
###Output
_____no_output_____
###Markdown
The overall sum is
###Code
4 * ( sum(positive_terms) - sum(negative_terms) )
###Output
_____no_output_____ |
1-initial-sentiment-analysis.ipynb | ###Markdown
Initial setup
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai import *
from fastai.text import *
# import fastai.utils.collect_env
# fastai.utils.collect_env.show_install()
bs = 256
###Output
_____no_output_____
###Markdown
Prepare data
###Code
data_path = Config.data_path()
lang = 'nl'
name = f'{lang}wiki'
path = data_path/name
mdl_path = path/'models'
lm_fns = [f'{lang}_wt', f'{lang}_wt_vocab']
# The language model was previously saved like this:
# learn.save(mdl_path/lm_fns[0], with_opt=False)
# learn.data.vocab.save(mdl_path/(lm_fns[1] + '.pkl'))
sa_path = path/'110kDBRD'
# Takes ~ 6 minutes:
data_lm = (TextList.from_folder(sa_path)
.filter_by_folder(include=['train', 'test', 'unsup'])
.split_by_rand_pct(0.1, seed=42)
.label_for_lm()
.databunch(bs=bs, num_workers=1))
len(data_lm.vocab.itos), len(data_lm.train_ds)
data_lm.save('lm_databunch')
data_lm = load_data(sa_path, 'lm_databunch', bs=bs)
# learn_lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=1.,
# path = path,
# pretrained_fnames= ['lm_best', 'itos']).to_fp16()
learn_lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=1.,
path = path,
pretrained_fnames= ['lm_best', 'itos']).to_fp16()
language_model_learner??
Learner??
###Output
_____no_output_____ |
src/core/base_libs/BlazeFace/Convert.ipynb | ###Markdown
Convert TFLite model to PyTorchThis uses the model **face_detection_front.tflite** from [MediaPipe](https://github.com/google/mediapipe/tree/master/mediapipe/models). Prerequisites:
1) Clone the MediaPipe repo:
```
git clone https://github.com/google/mediapipe.git
```
2) Install **flatbuffers**:
```
git clone https://github.com/google/flatbuffers.git
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release
make -j
cd flatbuffers/python
python setup.py install
```
3) Clone the TensorFlow repo. We only need this to get the FlatBuffers schema files (I guess you could just download [schema.fbs](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs)).
```
git clone https://github.com/tensorflow/tensorflow.git
```
4) Convert the schema files to Python files using **flatc**:
```
./flatbuffers/flatc --python tensorflow/tensorflow/lite/schema/schema.fbs
```
Now we can use the Python FlatBuffer API to read the TFLite file!
###Code
import os
import numpy as np
from collections import OrderedDict
###Output
_____no_output_____
###Markdown
Get the weights from the TFLite file Load the TFLite model using the FlatBuffers library:
###Code
from tflite import Model
data = open("./mediapipe/mediapipe/models/face_detection_front.tflite", "rb").read()
model = Model.Model.GetRootAsModel(data, 0)
subgraph = model.Subgraphs(0)
subgraph.Name()
def get_shape(tensor):
return [tensor.Shape(i) for i in range(tensor.ShapeLength())]
###Output
_____no_output_____
###Markdown
List all the tensors in the graph:
###Code
for i in range(0, subgraph.TensorsLength()):
tensor = subgraph.Tensors(i)
print("%3d %30s %d %2d %s" % (i, tensor.Name(), tensor.Type(), tensor.Buffer(),
get_shape(subgraph.Tensors(i))))
###Output
0 b'input' 0 0 [1, 128, 128, 3]
1 b'conv2d/Kernel' 0 1 [24, 5, 5, 3]
2 b'conv2d/Bias' 0 2 [24]
3 b'conv2d' 0 0 [1, 64, 64, 24]
4 b'activation' 0 0 [1, 64, 64, 24]
5 b'depthwise_conv2d/Kernel' 0 3 [1, 3, 3, 24]
6 b'depthwise_conv2d/Bias' 0 4 [24]
7 b'depthwise_conv2d' 0 0 [1, 64, 64, 24]
8 b'conv2d_1/Kernel' 0 5 [24, 1, 1, 24]
9 b'conv2d_1/Bias' 0 6 [24]
10 b'conv2d_1' 0 0 [1, 64, 64, 24]
11 b'add' 0 0 [1, 64, 64, 24]
12 b'activation_1' 0 0 [1, 64, 64, 24]
13 b'depthwise_conv2d_1/Kernel' 0 7 [1, 3, 3, 24]
14 b'depthwise_conv2d_1/Bias' 0 8 [24]
15 b'depthwise_conv2d_1' 0 0 [1, 64, 64, 24]
16 b'conv2d_2/Kernel' 0 9 [28, 1, 1, 24]
17 b'conv2d_2/Bias' 0 10 [28]
18 b'conv2d_2' 0 0 [1, 64, 64, 28]
19 b'channel_padding/Paddings' 2 11 [4, 2]
20 b'channel_padding' 0 0 [1, 64, 64, 28]
21 b'add_1' 0 0 [1, 64, 64, 28]
22 b'activation_2' 0 0 [1, 64, 64, 28]
23 b'depthwise_conv2d_2/Kernel' 0 12 [1, 3, 3, 28]
24 b'depthwise_conv2d_2/Bias' 0 13 [28]
25 b'depthwise_conv2d_2' 0 0 [1, 32, 32, 28]
26 b'max_pooling2d' 0 0 [1, 32, 32, 28]
27 b'conv2d_3/Kernel' 0 14 [32, 1, 1, 28]
28 b'conv2d_3/Bias' 0 15 [32]
29 b'conv2d_3' 0 0 [1, 32, 32, 32]
30 b'channel_padding_1/Paddings' 2 16 [4, 2]
31 b'channel_padding_1' 0 0 [1, 32, 32, 32]
32 b'add_2' 0 0 [1, 32, 32, 32]
33 b'activation_3' 0 0 [1, 32, 32, 32]
34 b'depthwise_conv2d_3/Kernel' 0 17 [1, 3, 3, 32]
35 b'depthwise_conv2d_3/Bias' 0 18 [32]
36 b'depthwise_conv2d_3' 0 0 [1, 32, 32, 32]
37 b'conv2d_4/Kernel' 0 19 [36, 1, 1, 32]
38 b'conv2d_4/Bias' 0 20 [36]
39 b'conv2d_4' 0 0 [1, 32, 32, 36]
40 b'channel_padding_2/Paddings' 2 21 [4, 2]
41 b'channel_padding_2' 0 0 [1, 32, 32, 36]
42 b'add_3' 0 0 [1, 32, 32, 36]
43 b'activation_4' 0 0 [1, 32, 32, 36]
44 b'depthwise_conv2d_4/Kernel' 0 22 [1, 3, 3, 36]
45 b'depthwise_conv2d_4/Bias' 0 23 [36]
46 b'depthwise_conv2d_4' 0 0 [1, 32, 32, 36]
47 b'conv2d_5/Kernel' 0 24 [42, 1, 1, 36]
48 b'conv2d_5/Bias' 0 25 [42]
49 b'conv2d_5' 0 0 [1, 32, 32, 42]
50 b'channel_padding_3/Paddings' 2 26 [4, 2]
51 b'channel_padding_3' 0 0 [1, 32, 32, 42]
52 b'add_4' 0 0 [1, 32, 32, 42]
53 b'activation_5' 0 0 [1, 32, 32, 42]
54 b'depthwise_conv2d_5/Kernel' 0 27 [1, 3, 3, 42]
55 b'depthwise_conv2d_5/Bias' 0 28 [42]
56 b'depthwise_conv2d_5' 0 0 [1, 16, 16, 42]
57 b'max_pooling2d_1' 0 0 [1, 16, 16, 42]
58 b'conv2d_6/Kernel' 0 29 [48, 1, 1, 42]
59 b'conv2d_6/Bias' 0 30 [48]
60 b'conv2d_6' 0 0 [1, 16, 16, 48]
61 b'channel_padding_4/Paddings' 2 31 [4, 2]
62 b'channel_padding_4' 0 0 [1, 16, 16, 48]
63 b'add_5' 0 0 [1, 16, 16, 48]
64 b'activation_6' 0 0 [1, 16, 16, 48]
65 b'depthwise_conv2d_6/Kernel' 0 32 [1, 3, 3, 48]
66 b'depthwise_conv2d_6/Bias' 0 33 [48]
67 b'depthwise_conv2d_6' 0 0 [1, 16, 16, 48]
68 b'conv2d_7/Kernel' 0 34 [56, 1, 1, 48]
69 b'conv2d_7/Bias' 0 35 [56]
70 b'conv2d_7' 0 0 [1, 16, 16, 56]
71 b'channel_padding_5/Paddings' 2 36 [4, 2]
72 b'channel_padding_5' 0 0 [1, 16, 16, 56]
73 b'add_6' 0 0 [1, 16, 16, 56]
74 b'activation_7' 0 0 [1, 16, 16, 56]
75 b'depthwise_conv2d_7/Kernel' 0 37 [1, 3, 3, 56]
76 b'depthwise_conv2d_7/Bias' 0 38 [56]
77 b'depthwise_conv2d_7' 0 0 [1, 16, 16, 56]
78 b'conv2d_8/Kernel' 0 39 [64, 1, 1, 56]
79 b'conv2d_8/Bias' 0 40 [64]
80 b'conv2d_8' 0 0 [1, 16, 16, 64]
81 b'channel_padding_6/Paddings' 2 41 [4, 2]
82 b'channel_padding_6' 0 0 [1, 16, 16, 64]
83 b'add_7' 0 0 [1, 16, 16, 64]
84 b'activation_8' 0 0 [1, 16, 16, 64]
85 b'depthwise_conv2d_8/Kernel' 0 42 [1, 3, 3, 64]
86 b'depthwise_conv2d_8/Bias' 0 43 [64]
87 b'depthwise_conv2d_8' 0 0 [1, 16, 16, 64]
88 b'conv2d_9/Kernel' 0 44 [72, 1, 1, 64]
89 b'conv2d_9/Bias' 0 45 [72]
90 b'conv2d_9' 0 0 [1, 16, 16, 72]
91 b'channel_padding_7/Paddings' 2 46 [4, 2]
92 b'channel_padding_7' 0 0 [1, 16, 16, 72]
93 b'add_8' 0 0 [1, 16, 16, 72]
94 b'activation_9' 0 0 [1, 16, 16, 72]
95 b'depthwise_conv2d_9/Kernel' 0 47 [1, 3, 3, 72]
96 b'depthwise_conv2d_9/Bias' 0 48 [72]
97 b'depthwise_conv2d_9' 0 0 [1, 16, 16, 72]
98 b'conv2d_10/Kernel' 0 49 [80, 1, 1, 72]
99 b'conv2d_10/Bias' 0 50 [80]
100 b'conv2d_10' 0 0 [1, 16, 16, 80]
101 b'channel_padding_8/Paddings' 2 51 [4, 2]
102 b'channel_padding_8' 0 0 [1, 16, 16, 80]
103 b'add_9' 0 0 [1, 16, 16, 80]
104 b'activation_10' 0 0 [1, 16, 16, 80]
105 b'depthwise_conv2d_10/Kernel' 0 52 [1, 3, 3, 80]
106 b'depthwise_conv2d_10/Bias' 0 53 [80]
107 b'depthwise_conv2d_10' 0 0 [1, 16, 16, 80]
108 b'conv2d_11/Kernel' 0 54 [88, 1, 1, 80]
109 b'conv2d_11/Bias' 0 55 [88]
110 b'conv2d_11' 0 0 [1, 16, 16, 88]
111 b'channel_padding_9/Paddings' 2 56 [4, 2]
112 b'channel_padding_9' 0 0 [1, 16, 16, 88]
113 b'add_10' 0 0 [1, 16, 16, 88]
114 b'activation_11' 0 0 [1, 16, 16, 88]
115 b'depthwise_conv2d_11/Kernel' 0 57 [1, 3, 3, 88]
116 b'depthwise_conv2d_11/Bias' 0 58 [88]
117 b'depthwise_conv2d_11' 0 0 [1, 8, 8, 88]
118 b'max_pooling2d_2' 0 0 [1, 8, 8, 88]
119 b'conv2d_12/Kernel' 0 59 [96, 1, 1, 88]
120 b'conv2d_12/Bias' 0 60 [96]
121 b'conv2d_12' 0 0 [1, 8, 8, 96]
122 b'channel_padding_10/Paddings' 2 61 [4, 2]
123 b'channel_padding_10' 0 0 [1, 8, 8, 96]
124 b'add_11' 0 0 [1, 8, 8, 96]
125 b'activation_12' 0 0 [1, 8, 8, 96]
126 b'depthwise_conv2d_12/Kernel' 0 62 [1, 3, 3, 96]
127 b'depthwise_conv2d_12/Bias' 0 63 [96]
128 b'depthwise_conv2d_12' 0 0 [1, 8, 8, 96]
129 b'conv2d_13/Kernel' 0 64 [96, 1, 1, 96]
130 b'conv2d_13/Bias' 0 65 [96]
131 b'conv2d_13' 0 0 [1, 8, 8, 96]
132 b'add_12' 0 0 [1, 8, 8, 96]
133 b'activation_13' 0 0 [1, 8, 8, 96]
134 b'depthwise_conv2d_13/Kernel' 0 66 [1, 3, 3, 96]
135 b'depthwise_conv2d_13/Bias' 0 67 [96]
136 b'depthwise_conv2d_13' 0 0 [1, 8, 8, 96]
137 b'conv2d_14/Kernel' 0 68 [96, 1, 1, 96]
138 b'conv2d_14/Bias' 0 69 [96]
139 b'conv2d_14' 0 0 [1, 8, 8, 96]
140 b'add_13' 0 0 [1, 8, 8, 96]
141 b'activation_14' 0 0 [1, 8, 8, 96]
142 b'depthwise_conv2d_14/Kernel' 0 70 [1, 3, 3, 96]
143 b'depthwise_conv2d_14/Bias' 0 71 [96]
144 b'depthwise_conv2d_14' 0 0 [1, 8, 8, 96]
145 b'conv2d_15/Kernel' 0 72 [96, 1, 1, 96]
146 b'conv2d_15/Bias' 0 73 [96]
147 b'conv2d_15' 0 0 [1, 8, 8, 96]
148 b'add_14' 0 0 [1, 8, 8, 96]
149 b'activation_15' 0 0 [1, 8, 8, 96]
150 b'depthwise_conv2d_15/Kernel' 0 74 [1, 3, 3, 96]
151 b'depthwise_conv2d_15/Bias' 0 75 [96]
152 b'depthwise_conv2d_15' 0 0 [1, 8, 8, 96]
153 b'conv2d_16/Kernel' 0 76 [96, 1, 1, 96]
154 b'conv2d_16/Bias' 0 77 [96]
155 b'conv2d_16' 0 0 [1, 8, 8, 96]
156 b'add_15' 0 0 [1, 8, 8, 96]
157 b'activation_16' 0 0 [1, 8, 8, 96]
158 b'classificator_8/Kernel' 0 78 [2, 1, 1, 88]
159 b'classificator_8/Bias' 0 79 [2]
160 b'classificator_8' 0 0 [1, 16, 16, 2]
161 b'classificator_16/Kernel' 0 80 [6, 1, 1, 96]
162 b'classificator_16/Bias' 0 81 [6]
163 b'classificator_16' 0 0 [1, 8, 8, 6]
164 b'regressor_8/Kernel' 0 82 [32, 1, 1, 88]
165 b'regressor_8/Bias' 0 83 [32]
166 b'regressor_8' 0 0 [1, 16, 16, 32]
167 b'regressor_16/Kernel' 0 84 [96, 1, 1, 96]
168 b'regressor_16/Bias' 0 85 [96]
169 b'regressor_16' 0 0 [1, 8, 8, 96]
170 b'reshape' 0 0 [1, 512, 1]
171 b'reshape_2' 0 0 [1, 384, 1]
172 b'reshape_1' 0 0 [1, 512, 16]
173 b'reshape_3' 0 0 [1, 384, 16]
174 b'classificators' 0 0 [1, 896, 1]
175 b'regressors' 0 0 [1, 896, 16]
###Markdown
Make a look-up table that lets us get the tensor index based on the tensor name:
###Code
tensor_dict = {(subgraph.Tensors(i).Name().decode("utf8")): i
for i in range(subgraph.TensorsLength())}
###Output
_____no_output_____
###Markdown
Grab only the tensors that represent weights and biases.
###Code
parameters = {}
for i in range(subgraph.TensorsLength()):
tensor = subgraph.Tensors(i)
if tensor.Buffer() > 0:
name = tensor.Name().decode("utf8")
parameters[name] = tensor.Buffer()
len(parameters)
###Output
_____no_output_____
###Markdown
The buffers are simply arrays of bytes. As the docs say,> The data_buffer itself is an opaque container, with the assumption that the> target device is little-endian. In addition, all builtin operators assume> the memory is ordered such that if `shape` is [4, 3, 2], then index> [i, j, k] maps to `data_buffer[i*3*2 + j*2 + k]`.For weights and biases, we need to interpret every 4 bytes as a float. On my machine, the native byte ordering is already little-endian, so we don't need to do anything special for that.
###Code
def get_weights(tensor_name):
i = tensor_dict[tensor_name]
tensor = subgraph.Tensors(i)
buffer = tensor.Buffer()
shape = get_shape(tensor)
assert(tensor.Type() == 0) # FLOAT32
W = model.Buffers(buffer).DataAsNumpy()
W = W.view(dtype=np.float32)
W = W.reshape(shape)
return W
W = get_weights("conv2d/Kernel")
b = get_weights("conv2d/Bias")
W.shape, b.shape
###Output
_____no_output_____
###Markdown
Now we can get the weights for all the layers and copy them into our PyTorch model. Convert the weights to PyTorch format
###Code
import torch
from blazeface import BlazeFace
net = BlazeFace()
net
###Output
_____no_output_____
###Markdown
Make a lookup table that maps the layer names between the two models. We're going to assume here that the tensors will be in the same order in both models. If not, we should get an error because shapes don't match.
###Code
probable_names = []
for i in range(0, subgraph.TensorsLength()):
tensor = subgraph.Tensors(i)
if tensor.Buffer() > 0 and tensor.Type() == 0:
probable_names.append(tensor.Name().decode("utf-8"))
probable_names[:5]
convert = {}
i = 0
for name, params in net.state_dict().items():
convert[name] = probable_names[i]
i += 1
###Output
_____no_output_____
###Markdown
Copy the weights into the layers.Note that the ordering of the weights is different between PyTorch and TFLite, so we need to transpose them.Convolution weights: TFLite: (out_channels, kernel_height, kernel_width, in_channels) PyTorch: (out_channels, in_channels, kernel_height, kernel_width)Depthwise convolution weights: TFLite: (1, kernel_height, kernel_width, channels) PyTorch: (channels, 1, kernel_height, kernel_width)
###Code
new_state_dict = OrderedDict()
for dst, src in convert.items():
W = get_weights(src)
print(dst, src, W.shape, net.state_dict()[dst].shape)
if W.ndim == 4:
if W.shape[0] == 1:
W = W.transpose((3, 0, 1, 2)) # depthwise conv
else:
W = W.transpose((0, 3, 1, 2)) # regular conv
new_state_dict[dst] = torch.from_numpy(W)
net.load_state_dict(new_state_dict, strict=True)
###Output
_____no_output_____
###Markdown
No errors? Then the conversion was successful! Save the checkpoint
###Code
torch.save(net.state_dict(), "blazeface.pth")
###Output
_____no_output_____ |
notebooks/4_Descriptor.ipynb | ###Markdown
Descriptor This notebook showcases the functions used in descriptor analysis, that is, determining the keypoint's descriptor or unique identifier. This descriptor is composed of the orientation histograms in local neighborhoods near the keypoint. Imports
###Code
# Handles relative import
import os, sys
dir2 = os.path.abspath('')
dir1 = os.path.dirname(dir2)
if not dir1 in sys.path: sys.path.append(dir1)
import cv2
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import const
import octaves as octaves_lib
import keypoints as keypoints_lib
import reference_orientation as reference_lib
import descriptor as descriptor_lib
###Output
_____no_output_____
###Markdown
Find a Keypoint
###Code
img = cv2.imread('../images/box_in_scene.png', flags=cv2.IMREAD_GRAYSCALE)
img = cv2.normalize(img, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
img = img[0:300, 100:400]
octave_idx = 4
gauss_octaves = octaves_lib.build_gaussian_octaves(img)
gauss_octave = gauss_octaves[octave_idx]
dog_octave = octaves_lib.build_dog_octave(gauss_octave)
extrema = octaves_lib.find_dog_extrema(dog_octave)
keypoint_coords = keypoints_lib.find_keypoints(extrema, dog_octave)
keypoints = reference_lib.assign_reference_orientations(keypoint_coords, gauss_octave, octave_idx)
keypoint = keypoints[0]
magnitudes, orientations = reference_lib.gradients(gauss_octave)
coord = keypoint.coordinate
sigma = keypoint.sigma
shape = gauss_octave.shape
s, y, x = coord.round().astype(int)
pixel_dist = octaves_lib.pixel_dist_in_octave(octave_idx)
max_width = (np.sqrt(2) * const.descriptor_locality * sigma) / pixel_dist
max_width = max_width.round().astype(int)
in_frame = descriptor_lib.patch_in_frame(coord, max_width, shape)
print(f'This keypoint is in frame: {in_frame}')
###Output
This keypoint is in frame: True
###Markdown
Relative Coordinates At this point, a keypoint has an orientation (see notebook 3). This orientation becomes the local neighborhood's x axis. In other words, there is a change of reference frame. This is visualized here by showing each point's relative x and y coordinate.
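The actual transform lives in `descriptor_lib.relative_patch_coordinates`; the sketch below (standalone, with assumed argument names) only illustrates the idea of rotating pixel offsets by the keypoint orientation and rescaling them by the keypoint scale.
```python
import numpy as np

def to_keypoint_frame(dy, dx, orientation, sigma, pixel_dist):
    """Rotate (dy, dx) pixel offsets so the keypoint orientation becomes the x axis,
    then rescale them into units of the keypoint scale sigma."""
    c, s = np.cos(orientation), np.sin(orientation)
    rel_x = ( c * dx + s * dy) * pixel_dist / sigma   # rotation by -orientation
    rel_y = (-s * dx + c * dy) * pixel_dist / sigma
    return rel_y, rel_x
```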
###Code
orientation_patch = orientations[s,
y - max_width: y + max_width,
x - max_width: x + max_width]
magnitude_patch = magnitudes[s,
y - max_width: y + max_width,
x - max_width: x + max_width]
patch_shape = magnitude_patch.shape
center_offset = [coord[1] - y, coord[2] - x]
rel_patch_coords = descriptor_lib.relative_patch_coordinates(center_offset, patch_shape, pixel_dist, sigma, keypoint.orientation)
plt.imshow(rel_patch_coords[1])
plt.title(f'rel X coords')
plt.colorbar()
plt.show()
plt.imshow(rel_patch_coords[0])
plt.title(f'rel Y coords')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Gaussian Weighting of Neighborhood
###Code
magnitude_patch = descriptor_lib.mask_outliers(magnitude_patch, rel_patch_coords, const.descriptor_locality)
orientation_patch = (orientation_patch - keypoint.orientation) % (2 * np.pi)
weights = descriptor_lib.weighting_matrix(center_offset, patch_shape, octave_idx, sigma, const.descriptor_locality)
plt.imshow(weights)
###Output
_____no_output_____
###Markdown
Descriptor Patch
###Code
magnitude_patch = magnitude_patch * weights
plt.imshow(magnitude_patch)
###Output
_____no_output_____
###Markdown
Descriptor Patch of Each Histogram
###Code
coords_rel_to_hists = rel_patch_coords[None] - descriptor_lib.histogram_centers[..., None, None]
hists_magnitude_patch = descriptor_lib.mask_outliers(magnitude_patch[None], coords_rel_to_hists, const.inter_hist_dist, 1)
nr_cols = 4
fig, axs = plt.subplots(nr_cols, nr_cols, figsize=(7, 7))
for idx, masked_magntiude in enumerate(hists_magnitude_patch):
row = idx // nr_cols
col = idx % nr_cols
axs[row, col].imshow(masked_magntiude)
axs[row, col].axis('off')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Histograms to SIFT Feature
###Code
hists_magnitude_patch = descriptor_lib.interpolate_2d_grid_contribution(hists_magnitude_patch, coords_rel_to_hists)
hists = descriptor_lib.interpolate_1d_hist_contribution(hists_magnitude_patch, orientation_patch)
sift_feature = descriptor_lib.normalize_sift_feature(hists.ravel())
###Output
_____no_output_____
###Markdown
Visualize Descriptor on Input Image
###Code
abs_coord = keypoint.absolute_coordinate[1:][::-1]
coord = keypoint.coordinate
sigma = keypoint.sigma
shape = gauss_octave.shape
s, y, x = coord.round().astype(int)
center_offset = [coord[1] - y, coord[2] - x]
pixel_dist = octaves_lib.pixel_dist_in_octave(octave_idx)
width = const.descriptor_locality * sigma
theta = keypoint.orientation
c, s = np.cos(theta), np.sin(theta)
rot_mat = np.array(((c, -s), (s, c)))
arrow = np.matmul(rot_mat, np.array([1, 0])) * 50
hist_centers = descriptor_lib.histogram_centers.T
hist_centers = hist_centers * sigma
hist_centers = np.matmul(rot_mat, hist_centers)
hist_centers = (hist_centers + abs_coord[:,None]).round().astype(int)
color = (1, 0, 0)
darkened = cv2.addWeighted(img, 0.5, np.zeros(img.shape, img.dtype),0,0)
col_img = cv2.cvtColor(darkened, cv2.COLOR_GRAY2RGB)
# Horizontal lines
for i in range(5):
offset = np.array([0, width/2]) * i
l = np.array([-width, -width]) + offset
r = np.array([width, -width]) + offset
l = (np.matmul(rot_mat, l) + abs_coord).round().astype(int)
r = (np.matmul(rot_mat, r) + abs_coord).round().astype(int)
col_img = cv2.line(col_img, l, r, color=color, thickness=1)
# Vertical lines
for i in range(5):
offset = np.array([width/2, 0]) * i
t = np.array([-width, -width]) + offset
b = np.array([-width, width]) + offset
t = (np.matmul(rot_mat, t) + abs_coord).round().astype(int)
b = (np.matmul(rot_mat, b) + abs_coord).round().astype(int)
col_img = cv2.line(col_img, t, b, color=color, thickness=1)
plt.figure(figsize=(8, 8))
plt.imshow(col_img)
plt.axis('off')
plt.title('red arrow is x axis relative to keypoint')
xs, ys = hist_centers
plt.scatter(xs, ys, c=[x for x in range(len(xs))], cmap='autumn_r')
plt.arrow(abs_coord[0], abs_coord[1], arrow[0], arrow[1], color='red', width=1, head_width=10)
plt.show()
print(f'The red arrow represtns a rotation of {np.rad2deg(keypoint.orientation)} degrees.')
###Output
_____no_output_____
###Markdown
Histogram Content
###Code
cmap = matplotlib.cm.get_cmap('autumn_r')
fig, axs = plt.subplots(4, 4, figsize=(8, 8))
for idx, hist in enumerate(hists):
row = idx // 4
col = idx % 4
color = cmap((idx + 1) / len(hists))
axs[row, col].bar(list(range(const.nr_descriptor_bins)), hist, color=color)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The SIFT feature a.k.a Concatenated Histograms
###Code
colors = [cmap((idx+1) / len(hists)) for idx in range(16)]
colors = np.repeat(colors, const.nr_descriptor_bins, axis=0)
plt.figure(figsize=(20, 4))
plt.bar(range(len(sift_feature)), sift_feature, color=colors)
###Output
_____no_output_____ |
Utils/dowhy/docs/source/example_notebooks/dowhy_interpreter.ipynb | ###Markdown
DoWhy: Interpreters for Causal EstimatorsThis is a quick introduction to the use of interpreters in the DoWhy causal inference library. We will load in a sample dataset, use different methods for estimating the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable, and demonstrate how to interpret the obtained results. First, let us add the required path for Python to find the DoWhy code and load all required packages.
###Code
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import logging
import dowhy
from dowhy import CausalModel
import dowhy.datasets
###Output
_____no_output_____
###Markdown
Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect.
###Code
data = dowhy.datasets.linear_dataset(beta=1,
num_common_causes=5,
num_instruments = 2,
num_treatments=1,
num_discrete_common_causes=1,
num_samples=10000,
treatment_is_binary=True,
outcome_is_binary=False)
df = data["df"]
print(df[df.v0==True].shape[0])
df
###Output
6257
###Markdown
Note that we are using a pandas dataframe to load the data. Identifying the causal estimand We now input a causal graph in the GML graph format.
###Code
# With graph
model=CausalModel(
data = df,
treatment=data["treatment_name"],
outcome=data["outcome_name"],
graph=data["gml_graph"],
instruments=data["instrument_names"]
)
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
###Output
_____no_output_____
###Markdown
We get a causal graph. Now identification and estimation are done.
###Code
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
###Output
Estimand type: nonparametric-ate
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
โโโโโ(Expectation(y|W1,W3,Z1,W2,W4,Z0,W0))
d[vโ]
Estimand assumption 1, Unconfoundedness: If Uโ{v0} and Uโy then P(y|v0,W1,W3,Z1,W2,W4,Z0,W0,U) = P(y|v0,W1,W3,Z1,W2,W4,Z0,W0)
### Estimand : 2
Estimand name: iv
Estimand expression:
Expectation(Derivative(y, [Z1, Z0])*Derivative([v0], [Z1, Z0])**(-1))
Estimand assumption 1, As-if-random: If Uโโy then ยฌ(U โโ{Z1,Z0})
Estimand assumption 2, Exclusion: If we remove {Z1,Z0}โ{v0}, then ยฌ({Z1,Z0}โy)
### Estimand : 3
Estimand name: frontdoor
No such variable found!
###Markdown
Method 1: Propensity Score StratificationWe will be using propensity scores to stratify units in the data.
###Code
causal_estimate_strat = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_stratification",
target_units="att")
print(causal_estimate_strat)
print("Causal Estimate is " + str(causal_estimate_strat.value))
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/utils/validation.py:63: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(*args, **kwargs)
*** Causal Estimate ***
## Identified estimand
Estimand type: nonparametric-ate
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
โโโโโ(Expectation(y|W1,W3,Z1,W2,W4,Z0,W0))
d[vโ]
Estimand assumption 1, Unconfoundedness: If Uโ{v0} and Uโy then P(y|v0,W1,W3,Z1,W2,W4,Z0,W0,U) = P(y|v0,W1,W3,Z1,W2,W4,Z0,W0)
## Realized estimand
b: y~v0+W1+W3+Z1+W2+W4+Z0+W0
Target units: att
## Estimate
Mean value: 0.9937614288925221
Causal Estimate is 0.9937614288925221
###Markdown
Textual InterpreterThe textual interpreter describes (in words) the effect of a unit change in the treatment variable on the outcome variable.
###Code
# Textual Interpreter
interpretation = causal_estimate_strat.interpret(method_name="textual_effect_interpreter")
###Output
Increasing the treatment variable(s) [v0] from 0 to 1 causes an increase of 0.9937614288925221 in the expected value of the outcome [y], over the data distribution/population represented by the dataset.
###Markdown
Visual InterpreterThe visual interpreter plots the change in the standardized mean difference (SMD) before and after propensity-score-based adjustment of the dataset. The formula for SMD is given below (a small numeric sketch follows). $SMD = \frac{\bar X_{1} - \bar X_{2}}{\sqrt{(S_{1}^{2} + S_{2}^{2})/2}}$Here, $\bar X_{1}$ and $\bar X_{2}$ are the sample means for the treated and control groups.
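As an illustration only (DoWhy computes this internally), the SMD of a single covariate between treated and control units can be computed directly from the formula above:
```python
import numpy as np

def standardized_mean_difference(x_treated, x_control):
    num = np.mean(x_treated) - np.mean(x_control)
    denom = np.sqrt((np.var(x_treated, ddof=1) + np.var(x_control, ddof=1)) / 2)
    return num / denom

# e.g. for the common cause W0 (v0 is the binary treatment in this dataset):
# standardized_mean_difference(df.loc[df.v0, "W0"], df.loc[~df.v0, "W0"])
```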
###Code
# Visual Interpreter
interpretation = causal_estimate_strat.interpret(method_name="propensity_balance_interpreter")
###Output
_____no_output_____
###Markdown
This plot shows how the SMD decreases from the unadjusted to the stratified units. Method 2: Propensity Score MatchingWe will be using propensity scores to match units in the data.
###Code
causal_estimate_match = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_matching",
target_units="atc")
print(causal_estimate_match)
print("Causal Estimate is " + str(causal_estimate_match.value))
# Textual Interpreter
interpretation = causal_estimate_match.interpret(method_name="textual_effect_interpreter")
###Output
Increasing the treatment variable(s) [v0] from 0 to 1 causes an increase of 0.9974377964538144 in the expected value of the outcome [y], over the data distribution/population represented by the dataset.
###Markdown
We cannot use the propensity balance interpreter here, since that interpreter method only supports the propensity score stratification estimator. Method 3: WeightingWe will be using (inverse) propensity scores to assign weights to units in the data. DoWhy supports a few different weighting schemes (a small sketch of the vanilla IPS weights is given below): 1. Vanilla Inverse Propensity Score weighting (IPS) (weighting_scheme="ips_weight") 2. Self-normalized IPS weighting (also known as the Hajek estimator) (weighting_scheme="ips_normalized_weight") 3. Stabilized IPS weighting (weighting_scheme = "ips_stabilized_weight")
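Illustration only (DoWhy computes these weights internally); a minimal sketch of the vanilla and self-normalized (Hajek) IPS estimators, where `e_hat` is an assumed array of estimated propensity scores $P(T=1\mid W)$ and `t` is the binary treatment indicator:
```python
import numpy as np

def ips_weights(t, e_hat):
    t = np.asarray(t, dtype=float)
    return t / e_hat + (1 - t) / (1 - e_hat)

def hajek_ate(y, t, e_hat):
    """Self-normalized IPS estimate of the average treatment effect."""
    y, t = np.asarray(y, dtype=float), np.asarray(t, dtype=float)
    w = ips_weights(t, e_hat)
    treated, control = t == 1, t == 0
    return (np.sum(w[treated] * y[treated]) / np.sum(w[treated])
            - np.sum(w[control] * y[control]) / np.sum(w[control]))
```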
###Code
causal_estimate_ipw = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_weighting",
target_units = "ate",
method_params={"weighting_scheme":"ips_weight"})
print(causal_estimate_ipw)
print("Causal Estimate is " + str(causal_estimate_ipw.value))
# Textual Interpreter
interpretation = causal_estimate_ipw.interpret(method_name="textual_effect_interpreter")
interpretation = causal_estimate_ipw.interpret(method_name="confounder_distribution_interpreter", fig_size=(8,8), font_size=12, var_name='W4', var_type='discrete')
###Output
_____no_output_____ |
scikit-learn/machine-learning-course-notebooks/multiple-linear-regression/ML0101EN-Reg-Mulitple-Linear-Regression-Co2.ipynb | ###Markdown
Multiple Linear RegressionEstimated time needed: **15** minutes ObjectivesAfter completing this lab you will be able to:* Use scikit-learn to implement Multiple Linear Regression* Create a model, train it, test it and use the model Table of contents Understanding the Data Reading the Data in Multiple Regression Model Prediction Practice Importing Needed packages
###Code
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading DataTo download the data, we will use !wget to download it from IBM Object Storage.
###Code
!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
###Output
_____no_output_____
###Markdown
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) Understanding the Data `FuelConsumptionCo2.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumptionCo2.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01)* **MODELYEAR** e.g. 2014* **MAKE** e.g. Acura* **MODEL** e.g. ILX* **VEHICLE CLASS** e.g. SUV* **ENGINE SIZE** e.g. 4.7* **CYLINDERS** e.g 6* **TRANSMISSION** e.g. A6* **FUELTYPE** e.g. z* **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9* **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9* **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2* **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 Reading the data in
###Code
df = pd.read_csv("FuelConsumptionCo2.csv")
# take a look at the dataset
df.head()
###Output
_____no_output_____
###Markdown
Let's select some features that we want to use for regression.
###Code
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
###Output
_____no_output_____
###Markdown
Let's plot Emission values with respect to Engine size:
###Code
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Creating train and test datasetTrain/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After that, you train with the training set and test with the testing set. This will provide a more accurate evaluation of out-of-sample accuracy because the testing dataset is not part of the dataset that has been used to train the model. Therefore, it gives us a better understanding of how well our model generalizes on new data. We know the outcome of each data point in the testing dataset, making it great to test with! Since this data has not been used to train the model, the model has no knowledge of the outcome of these data points. So, in essence, it is truly out-of-sample testing. Let's split our dataset into train and test sets. Around 80% of the entire dataset will be used for training and 20% for testing. We create a mask to select random rows using the **np.random.rand()** function:
###Code
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
###Output
_____no_output_____
###Markdown
Train data distribution
###Code
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Multiple Regression Model In reality, there are multiple variables that impact the co2emission. When more than one independent variable is present, the process is called multiple linear regression. An example of multiple linear regression is predicting co2emission using the features FUELCONSUMPTION_COMB, EngineSize and Cylinders of cars. The good thing here is that the multiple linear regression model is an extension of the simple linear regression model.
###Code
from sklearn import linear_model
regr = linear_model.LinearRegression()
x = np.asanyarray(train[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']])
y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit (x, y)
# The coefficients
print ('Coefficients: ', regr.coef_)
print ('Intercept: ',regr.intercept_)
###Output
Coefficients: [[10.20264689 7.70246152 9.71483433]]
Intercept: [64.8558952]
###Markdown
As mentioned before, **Coefficient** and **Intercept** are the parameters of the fitted line. Given that it is a multiple linear regression model with 3 parameters, and that the parameters are the intercept and coefficients of the hyperplane, sklearn can estimate them from our data. Scikit-learn uses the plain Ordinary Least Squares (OLS) method to solve this problem. Ordinary Least Squares (OLS)OLS is a method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by minimizing the sum of the squares of the differences between the target dependent variable and those predicted by the linear function. In other words, it tries to minimize the sum of squared errors (SSE) or mean squared error (MSE) between the target variable (y) and our predicted output ($\hat{y}$) over all samples in the dataset. OLS can find the best parameters using one of the following methods:* Solving the model parameters analytically using closed-form equations (the normal equation; a short sketch follows below)* Using an optimization algorithm (Gradient Descent, Stochastic Gradient Descent, Newton's Method, etc.)
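For illustration (sklearn already does this internally), a minimal sketch of the closed-form normal equation $\theta = (X^T X)^{-1} X^T y$ using the training arrays defined earlier:
```python
X_ols = np.asanyarray(train[['ENGINESIZE', 'CYLINDERS', 'FUELCONSUMPTION_COMB']])
y_ols = np.asanyarray(train[['CO2EMISSIONS']])
X_b = np.hstack([np.ones((X_ols.shape[0], 1)), X_ols])   # add a column of 1s for the intercept
theta = np.linalg.pinv(X_b.T @ X_b) @ X_b.T @ y_ols       # normal equation
print("Intercept:", theta[0], " Coefficients:", theta[1:].ravel())
```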
###Code
y_hat= regr.predict(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']])
x = np.asanyarray(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']])
y = np.asanyarray(test[['CO2EMISSIONS']])
print("Residual sum of squares: %.2f"
% np.mean((y_hat - y) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(x, y))
###Output
Residual sum of squares: 596.55
Variance score: 0.86
###Markdown
**Explained variance regression score:** Let $\hat{y}$ be the estimated target output, $y$ the corresponding (correct) target output, and $Var$ be the variance (the square of the standard deviation). Then the explained variance is estimated as follows: $\texttt{explainedVariance}(y, \hat{y}) = 1 - \frac{Var\{y - \hat{y}\}}{Var\{y\}}$ The best possible score is 1.0, and lower values are worse (a tiny numeric sketch of this formula is given below, before the practice solution). PracticeTry to use a multiple linear regression with the same dataset, but this time use FUELCONSUMPTION_CITY and FUELCONSUMPTION_HWY instead of FUELCONSUMPTION_COMB. Does it result in better accuracy?
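As a quick check of the explained-variance formula above (illustration only; note that `regr.score` actually reports $R^2$, which coincides with the explained variance when the residuals have zero mean), it can be computed directly from the `y` and `y_hat` arrays of the previous cell:
```python
explained_variance = 1 - np.var(y - y_hat) / np.var(y)
print("Explained variance: %.2f" % explained_variance)
```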
###Code
regr = linear_model.LinearRegression()
x = np.asanyarray(train[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']])
y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit (x, y)
print ('Coefficients: ', regr.coef_)
print ('Intercept: ',regr.intercept_)
y_= regr.predict(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']])
x = np.asanyarray(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']])
y = np.asanyarray(test[['CO2EMISSIONS']])
print("Residual sum of squares: %.2f"% np.mean((y_ - y) ** 2))
print('Variance score: %.2f' % regr.score(x, y))
###Output
Coefficients: [[10.2480558 7.52299311 5.82539573 3.73058977]]
Intercept: [65.43986651]
Residual sum of squares: 595.22
Variance score: 0.86
|
ColonizingMars/ChallengeTemplates/challenge-option-2-how-could-we-colonize-mars.ipynb | ###Markdown
 Data Scientist Challenge: How could we colonize Mars?Use this notebook if you are interested in proposing ways to colonize Mars. HowUse data to answer questions such as:1. How do we decide who will go? population proportions, demographics, health, qualifications, genetic diversity2. What do we need to bring?3. What are some essential services?4. What kinds of jobs should people do?5. How do we feed people there? Consider: supply, manage, distribute, connect6. Where should we land?7. What structures should we design and build?8. Should we terraform Mars? How?9. How should Mars be governed?Pick as many questions from the above section (or come up with your own). Complete the sections within this notebook. Section I: About YouDouble click this cell and tell us:1. Your name2. Your email address3. Why you picked this challenge4. The questions you pickedFor example 1. Your name: Not-my Name 2. Your email address: [email protected] 3. Why you picked this challenge: I don't think we should attempt to colonize Mars 4. The questions you picked: Why does humanity tend to colonize? Why not focus on making Earth a better place? Section II: The data you usedPlease provide the following information:1. Name of dataset2. Link to dataset3. Why you picked the datasetIf you picked multiple datasets, separate them using commas ","
###Code
# Use this cell to import libraries
import pandas as pd
import plotly_express as px
import numpy as np
# Use this cell to read the data - use the tutorials if you are not sure how to do this
###Output
_____no_output_____
###Markdown
Section III: Data Analysis and VisualizationUse as many code cells as you need - remember to add a title, as well as appropriate x and y labels to your visualizations. Ensure to briefly comment on what you see. A sample is provided.
###Code
# Sample code
x_values = np.array([i for i in range(-200,200)])
y_values = x_values**3
px.line(x=x_values,
y=y_values,
title="Line plot of x and y values",
labels = {'x':'Independent Variable x','y':'Dependent Variable y'})
###Output
_____no_output_____ |
Deep-Learning-Analysis/Dynamic_Hand_Gestures_DL_v1.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
import os
import time
import joblib
import shutil
import tarfile
import requests
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from scipy.signal import butter, lfilter
from tensorflow.keras.models import Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv1D
from tensorflow.keras.layers import MaxPooling1D
from tensorflow.keras.layers import concatenate
from tensorflow.keras.utils import to_categorical
from sklearn.utils import shuffle
from sklearn.metrics import classification_report, accuracy_score
DATASET_ID = '1p0CSRb9gax0sKqdyzOYVt-BXvZ4GtrBv'
# -------------BASE DIR (MODIFY THIS TO YOUR NEED) ------------ #
# BASE_DIR = '../'
BASE_DIR = '/content/drive/MyDrive/Research/Hand Gesture/GitHub/'
DATA_DIR = 'Sensor-Data/'
CHANNELS_DIR = 'Channels/'
FEATURES_DIR = 'Features/'
FIGURE_DIR = 'Figures/'
LOG_DIR = 'Logs/'
USERS = ['001', '002', '003', '004', '005', '006', '007', '008', '009',
'010', '011', '012', '013', '014', '015', '016', '017', '018',
'019', '020', '021', '022', '023', '024', '025']
GESTURES = ['j', 'z', 'bad', 'deaf', 'fine', 'good', 'goodbye', 'hello', 'hungry',
'me', 'no', 'please', 'sorry', 'thankyou', 'yes', 'you']
WINDOW_LEN = 150
# ------------- FOR THE GREATER GOOD :) ------------- #
DATASET_LEN = 1120
TRAIN_LEN = 960
TEST_LEN = 160
TEST_USER = '001'
EPOCHS = 5
CHANNELS_GROUP = 'DYNAMIC_ACC_ONLY_'
CUT_OFF = 3.0
ORDER = 4
FS = 100
CONFIG = "Rolling median filter for flex, LPF for IMU, Stacked CNN, epochs 20, lr 0.001\n"
#--------------------- Download util for Google Drive ------------------- #
def download_file_from_google_drive(id, destination):
URL = "https://docs.google.com/uc?export=download"
session = requests.Session()
response = session.get(URL, params = { 'id' : id }, stream = True)
token = get_confirm_token(response)
if token:
params = { 'id' : id, 'confirm' : token }
response = session.get(URL, params = params, stream = True)
save_response_content(response, destination)
def get_confirm_token(response):
for key, value in response.cookies.items():
if key.startswith('download_warning'):
return value
return None
def save_response_content(response, destination):
CHUNK_SIZE = 32768
with open(destination, "wb") as f:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk:
f.write(chunk)
def download_data(fid, destination):
print('cleaning already existing files ... ', end='')
try:
shutil.rmtree(destination)
print('✓')
except:
print('✗')
print('creating data directory ... ', end='')
os.mkdir(destination)
print('✓')
print('downloading dataset from the repository ... ', end='')
filename = os.path.join(destination, 'dataset.tar.xz')
try:
download_file_from_google_drive(fid, filename)
print('✓')
except:
print('✗')
print('extracting the dataset ... ', end='')
try:
tar = tarfile.open(filename)
tar.extractall(destination)
tar.close()
print('✓')
except:
print('✗')
# ------- Comment This if already downloaded -------- #
# destination = os.path.join(BASE_DIR, DATA_DIR)
# download_data(DATASET_ID, destination)
class LowPassFilter(object):
def butter_lowpass(cutoff, fs, order):
nyq = 0.5 * fs
normal_cutoff = cutoff / nyq
b, a = butter(order, normal_cutoff, btype='low', analog=False)
return b, a
def apply(data, cutoff=CUT_OFF, fs=FS, order=ORDER):
b, a = LowPassFilter.butter_lowpass(cutoff, fs, order=order)
y = lfilter(b, a, data)
return y
def clean_dir(path):
print('cleaning already existing files ... ', end='')
try:
shutil.rmtree(path)
print('✓')
except:
print('✗')
print('creating ' + path + ' directory ... ', end='')
os.mkdir(path)
print('✓')
def extract_channels():
channels_dir = os.path.join(BASE_DIR, CHANNELS_DIR)
clean_dir(channels_dir)
for user in USERS:
print('Processing data for user ' + user, end=' ')
X = []
y = []
first_time = True
for gesture in GESTURES:
user_dir = os.path.join(BASE_DIR, DATA_DIR, user)
gesture_dir = os.path.join(user_dir, gesture + '.csv')
dataset = pd.read_csv(gesture_dir)
dataset['flex_1'] = dataset['flex_1'].rolling(3).median()
dataset['flex_2'] = dataset['flex_2'].rolling(3).median()
dataset['flex_3'] = dataset['flex_3'].rolling(3).median()
dataset['flex_4'] = dataset['flex_4'].rolling(3).median()
dataset['flex_5'] = dataset['flex_5'].rolling(3).median()
dataset.fillna(0, inplace=True)
# flex = ['flex_1', 'flex_2', 'flex_3', 'flex_4', 'flex_5']
# max_flex = dataset[flex].max(axis=1)
# max_flex.replace(0, 1, inplace=True)
# dataset[flex] = dataset[flex].divide(max_flex, axis=0)
flx1 = dataset['flex_1'].to_numpy().reshape(-1, WINDOW_LEN)
flx2 = dataset['flex_2'].to_numpy().reshape(-1, WINDOW_LEN)
flx3 = dataset['flex_3'].to_numpy().reshape(-1, WINDOW_LEN)
flx4 = dataset['flex_4'].to_numpy().reshape(-1, WINDOW_LEN)
flx5 = dataset['flex_5'].to_numpy().reshape(-1, WINDOW_LEN)
accx = dataset['ACCx'].to_numpy()
accy = dataset['ACCy'].to_numpy()
accz = dataset['ACCz'].to_numpy()
accx = LowPassFilter.apply(accx).reshape(-1, WINDOW_LEN)
accy = LowPassFilter.apply(accy).reshape(-1, WINDOW_LEN)
accz = LowPassFilter.apply(accz).reshape(-1, WINDOW_LEN)
gyrx = dataset['GYRx'].to_numpy()
gyry = dataset['GYRy'].to_numpy()
gyrz = dataset['GYRz'].to_numpy()
gyrx = LowPassFilter.apply(gyrx).reshape(-1, WINDOW_LEN)
gyry = LowPassFilter.apply(gyry).reshape(-1, WINDOW_LEN)
gyrz = LowPassFilter.apply(gyrz).reshape(-1, WINDOW_LEN)
accm = np.sqrt(accx ** 2 + accy ** 2 + accz ** 2)
gyrm = np.sqrt(gyrx ** 2 + gyry ** 2 + gyrz ** 2)
g_idx = GESTURES.index(gesture)
labels = np.ones((accx.shape[0], 1)) * g_idx
channels = np.stack([
flx1, flx2, flx3, flx4, flx5,
accx, accy, accz
], axis=-1)
if first_time == True:
X = channels
y = labels
first_time = False
else:
X = np.append(X, channels, axis=0)
y = np.append(y, labels, axis=0)
x_path = os.path.join(BASE_DIR, CHANNELS_DIR, CHANNELS_GROUP + user + '_X.joblib')
y_path = os.path.join(BASE_DIR, CHANNELS_DIR, CHANNELS_GROUP + user + '_y.joblib')
joblib.dump(X, x_path)
joblib.dump(y, y_path)
print('✓')
# extract_channels()
def get_model(input_shape = (150, 8)):
optimizer = tf.keras.optimizers.Adam(0.0001)
model = Sequential()
model.add(BatchNormalization(input_shape=input_shape))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(GESTURES), activation='softmax'))
model.compile(
loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy']
)
return model
def get_conv_block():
input = Input(shape=(150, 1))
x = BatchNormalization()(input)
x = Conv1D(filters=8, kernel_size=3, activation='selu', padding='valid')(x)
x = Conv1D(filters=16, kernel_size=3, activation='selu', padding='valid')(x)
x = MaxPooling1D(2)(x)
x = Conv1D(filters=16, kernel_size=3, activation='selu', padding='valid')(x)
x = Conv1D(filters=16, kernel_size=3, activation='selu', padding='valid')(x)
x = MaxPooling1D(2)(x)
x = Flatten()(x)
x = Dense(50, activation='elu')(x)
return input, x
def get_stacked_model(n=8):
inputs = []
CNNs = []
for i in range(n):
input_i, CNN_i = get_conv_block()
inputs.append(input_i)
CNNs.append(CNN_i)
x = concatenate(CNNs, axis=-1)
x = Dropout(0.5)(x)
x = Dense(100, activation='selu')(x)
x = Dropout(0.5)(x)
# x = Dense(20, activation='selu')(x)
# x = Dropout(0.5)(x)
output = Dense(len(GESTURES), activation='sigmoid')(x)
model = Model(inputs, output)
opt = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
return model
ACC = []
logs = ''
for test_user in USERS:
print('Processing results for user ' + test_user, end='... \n')
X_train = []
X_test = []
y_train = []
y_test = []
first_time_train = True
first_time_test = True
for user in USERS:
x_path = os.path.join(BASE_DIR, CHANNELS_DIR, CHANNELS_GROUP + user + '_X.joblib')
y_path = os.path.join(BASE_DIR, CHANNELS_DIR, CHANNELS_GROUP + user + '_y.joblib')
X = joblib.load(x_path)
y = joblib.load(y_path)
if user == test_user:
if first_time_train == True:
first_time_train = False
X_test = X
y_test = y
else:
X_test = np.append(X_test, X, axis=0)
y_test = np.append(y_test, y, axis=0)
else:
if first_time_test == True:
first_time_test = False
X_train = X
y_train = y
else:
X_train = np.append(X_train, X, axis=0)
y_train = np.append(y_train, y, axis=0)
X_train, y_train = shuffle(X_train, y_train)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
model = get_stacked_model()
model.fit(
np.split(X_train, 8, axis=-1), y_train, epochs=20, batch_size=32
)
_, accuracy = model.evaluate(np.split(X_test, 8, axis=-1), y_test, batch_size=32)
accuracy = accuracy * 100
print(f'%.2f %%' %(accuracy))
logs = logs + 'Accuracy for user ' + str(test_user) + '... ' + str(accuracy) + '\n'
ACC.append(accuracy)
AVG_ACC = np.mean(ACC)
STD = np.std(ACC)
print('------------------------------------')
print(f'Average accuracy %.2f +/- %.2f' %(AVG_ACC, STD))
line = '---------------------------------------\n'
log_dir = os.path.join(BASE_DIR, LOG_DIR)
if not os.path.exists(log_dir):
os.mkdir(log_dir)
f = open(os.path.join(log_dir, 'logs_dl_basic_cnn.txt'), 'a')
f.write(CONFIG)
f.write(logs)
f.write(line)
f.write(f'Average accuracy %.2f +/- %.2f' %(AVG_ACC, STD))
f.write('\n\n')
f.close()
###Output
_____no_output_____ |
Lab1_MNIST_DataLoader should try tensorflow errored.ipynb | ###Markdown
Lab 1: MNIST Data Loader
This notebook is the first lab of the "Deep Learning Explained" course. It is derived from the tutorial numbered CNTK_103A in the CNTK repository. This notebook is used to download and pre-process the [MNIST][] digit images to be used for building different models to recognize handwritten digits.
**Note:** This notebook must be run to completion before the other course notebooks can be run.
[MNIST]: http://yann.lecun.com/exdb/mnist/
###Code
# Import the relevant modules to be used later
from __future__ import print_function
import gzip
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import os
import shutil
import struct
import sys
import os
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
# Config matplotlib for inline plotting
%matplotlib inline
os.getcwd()
###Output
_____no_output_____
###Markdown
Data download
We will download the data onto the local machine. The MNIST database is a standard set of handwritten digits that has been widely used for training and testing machine learning algorithms. It has a training set of 60,000 images and a test set of 10,000 images, with each image being 28 x 28 grayscale pixels. This set is easy to use, visualize, and train on with any computer.
###Code
# Functions to load MNIST images and unpack into train and test set.
# - loadData reads image data and formats into a 28x28 long array
# - loadLabels reads the corresponding labels data, 1 for each image
# - load packs the downloaded image and labels data into a combined format to be read later by
# CNTK text reader
def loadData(src, cimg):
print ('Downloading ' + src)
gzfname, h = urlretrieve(src, './delete.me')
print ('Done.')
try:
with gzip.open(gzfname) as gz:
n = struct.unpack('I', gz.read(4))
# Read magic number.
if n[0] != 0x3080000:
raise Exception('Invalid file: unexpected magic number.')
# Read number of entries.
n = struct.unpack('>I', gz.read(4))[0]
if n != cimg:
raise Exception('Invalid file: expected {0} entries.'.format(cimg))
crow = struct.unpack('>I', gz.read(4))[0]
ccol = struct.unpack('>I', gz.read(4))[0]
if crow != 28 or ccol != 28:
raise Exception('Invalid file: expected 28 rows/cols per image.')
# Read data.
res = np.frombuffer(gz.read(cimg * crow * ccol), dtype = np.uint8)  # frombuffer replaces the deprecated fromstring
finally:
os.remove(gzfname)
return res.reshape((cimg, crow * ccol))
def loadLabels(src, cimg):
print ('Downloading ' + src)
gzfname, h = urlretrieve(src, './delete.me')
print ('Done.')
try:
with gzip.open(gzfname) as gz:
n = struct.unpack('I', gz.read(4))
# Read magic number.
if n[0] != 0x1080000:
raise Exception('Invalid file: unexpected magic number.')
# Read number of entries.
n = struct.unpack('>I', gz.read(4))
if n[0] != cimg:
raise Exception('Invalid file: expected {0} rows.'.format(cimg))
# Read labels.
res = np.frombuffer(gz.read(cimg), dtype = np.uint8)  # frombuffer replaces the deprecated fromstring
finally:
os.remove(gzfname)
return res.reshape((cimg, 1))
def try_download(dataSrc, labelsSrc, cimg):
data = loadData(dataSrc, cimg)
labels = loadLabels(labelsSrc, cimg)
return np.hstack((data, labels))
###Output
_____no_output_____
###Markdown
Download the data
In the following code, we use the functions defined above to download and unzip the MNIST data into memory. The training set has 60,000 images while the test set has 10,000 images.
###Code
# URLs for the train image and labels data
url_train_image = 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz'
url_train_labels = 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz'
num_train_samples = 60000
print("Downloading train data")
train = try_download(url_train_image, url_train_labels, num_train_samples)
url_test_image = 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz'
url_test_labels = 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
num_test_samples = 10000
print("Downloading test data")
test = try_download(url_test_image, url_test_labels, num_test_samples)
from tensorflow import keras
from tensorflow.keras import layers
from kerastuner.tuners import RandomSearch
###Output
_____no_output_____
###Markdown
Visualize the data
Here, we use matplotlib to display one of the training images and its associated label.
###Code
sample_number = 5001
plt.imshow(train[sample_number,:-1].reshape(28,28), cmap="gray_r")
plt.axis('off')
print("Image Label: ", train[sample_number,-1])
###Output
Image Label: 3
###Markdown
Save the images
Save the images in a local directory. While saving the data we flatten the images to a vector (28x28 image pixels become an array of length 784 data points). The labels are encoded using [1-hot][] encoding (a label of 3 with 10 digits becomes `0001000000`, where the first index corresponds to digit `0` and the last one corresponds to digit `9`).
[1-hot]: https://en.wikipedia.org/wiki/One-hot
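As a quick sketch of that encoding (using the same `np.eye` trick as `savetxt` below), a label of 3 maps to the ten-digit vector like so:

```python
import numpy as np

label = 3
one_hot = np.eye(10, dtype=np.uint)[label]
print(one_hot)  # [0 0 0 1 0 0 0 0 0 0]
```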
###Code
# Save the data files into a format compatible with CNTK text reader
def savetxt(filename, ndarray):
dir = os.path.dirname(filename)
if not os.path.exists(dir):
os.makedirs(dir)
if not os.path.isfile(filename):
print("Saving", filename )
with open(filename, 'w') as f:
labels = list(map(' '.join, np.eye(10, dtype=np.uint).astype(str)))
for row in ndarray:
row_str = row.astype(str)
label_str = labels[row[-1]]
feature_str = ' '.join(row_str[:-1])
f.write('|labels {} |features {}\n'.format(label_str, feature_str))
else:
print("File already exists", filename)
# Save the train and test files (prefer our default path for the data)
data_dir = os.path.join("..", "Examples", "Image", "DataSets", "MNIST")
if not os.path.exists(data_dir):
data_dir = os.path.join("data", "MNIST")
print ('Writing train text file...')
savetxt(os.path.join(data_dir, "Train-28x28_cntk_text.txt"), train)
print ('Writing test text file...')
savetxt(os.path.join(data_dir, "Test-28x28_cntk_text.txt"), test)
print('Done')
###Output
Writing train text file...
Saving data\MNIST\Train-28x28_cntk_text.txt
Writing test text file...
Saving data\MNIST\Test-28x28_cntk_text.txt
Done
|
initial_exploration.ipynb | ###Markdown
As seen above, the chunks do not come padded -- the padding method below may not be the most efficient, but it'll get the job done for now -- may also want to go further up the data pipeline at some point
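For what it is worth, `torch.nn.functional.pad` can do the same right-padding in one call. A minimal sketch, assuming `sample` is the 1-D chunk tensor used above:

```python
import torch.nn.functional as F

# pad spec is (left, right) for the last dimension
padded = F.pad(sample, (0, 4096 - len(sample)))
padded.shape  # torch.Size([4096])
```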
###Code
zs = torch.zeros(4096)
zs
#https://discuss.pytorch.org/t/how-to-do-padding-based-on-lengths/24442/2?u=aza
zs[:len(sample)] = sample
zs
from torch.utils.data import Dataset, DataLoader
class taxon_ds(Dataset):
def __init__(self, chunks):
self.chunks = chunks
def __len__(self):
return len(self.chunks)
def __getitem__(self, idx):
x = self.chunks[idx][1]
if (len(x) < 4096):
padded = torch.zeros(4096)
padded[:len(x)] = x
x = padded
y = self.chunks[idx][2]
return (x, y)
ds = taxon_ds(chunks)
ds[0]
###Output
_____no_output_____
###Markdown
That dataset above should be using a transform :D
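One possible shape for that, sketched below; `pad_to_4096` and `taxon_ds_v2` are hypothetical names, not part of the existing code (torch and Dataset are assumed to be imported as above):

```python
def pad_to_4096(x):
    # right-pad a 1-D tensor with zeros up to length 4096
    padded = torch.zeros(4096)
    padded[:len(x)] = x
    return padded

class taxon_ds_v2(Dataset):
    def __init__(self, chunks, transform=None):
        self.chunks = chunks
        self.transform = transform

    def __len__(self):
        return len(self.chunks)

    def __getitem__(self, idx):
        x = self.chunks[idx][1]
        if self.transform is not None:
            x = self.transform(x)
        y = self.chunks[idx][2]
        return (x, y)

# ds = taxon_ds_v2(chunks, transform=pad_to_4096)
```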
###Code
dl = DataLoader(ds, batch_size=16, shuffle=True)
len(dl)
batch = next(iter(dl))
len(batch), batch[0].shape, batch[1].shape
###Output
_____no_output_____
###Markdown
We now have functioning dataloaders!
###Code
sample = batch[0]
###Output
_____no_output_____
###Markdown
Let's see how we'll pass a batch through a convolutional layer -- will need to add a dimension to the tensor in order to provide the channel dimension that the conv layer is expecting
###Code
import torch.nn as nn
#torch.nn.Conv1d??
nn.Conv1d(1, 2, 3)(sample.unsqueeze(1)).shape
###Output
_____no_output_____ |
tutorials/notebook/cx_site_chart_examples/layout_2.ipynb | ###Markdown
Example: CanvasXpress layout Chart No. 2
This example page demonstrates how to use the Python package to create a chart that matches the CanvasXpress online example located at: https://www.canvasxpress.org/examples/layout-2.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function. Everything required for the chart to render is included in the code below. Simply run the code block.
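A rough sketch of that generation step (assuming the reproducible JSON from the page is saved locally as `layout-2.json` and that the generator returns the generated Python source as a string; both are assumptions, so the exact call may differ):

```python
from canvasxpress.util.generator import generate_canvasxpress_code_from_json_file

# Hypothetical usage: turn the saved reproducible JSON into Python source like the cell below
generated_code = generate_canvasxpress_code_from_json_file("layout-2.json")
print(generated_code)
```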
###Code
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="layout2",
data={
"z": {
"Species": [
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica"
]
},
"y": {
"vars": [
"s1",
"s2",
"s3",
"s4",
"s5",
"s6",
"s7",
"s8",
"s9",
"s10",
"s11",
"s12",
"s13",
"s14",
"s15",
"s16",
"s17",
"s18",
"s19",
"s20",
"s21",
"s22",
"s23",
"s24",
"s25",
"s26",
"s27",
"s28",
"s29",
"s30",
"s31",
"s32",
"s33",
"s34",
"s35",
"s36",
"s37",
"s38",
"s39",
"s40",
"s41",
"s42",
"s43",
"s44",
"s45",
"s46",
"s47",
"s48",
"s49",
"s50",
"s51",
"s52",
"s53",
"s54",
"s55",
"s56",
"s57",
"s58",
"s59",
"s60",
"s61",
"s62",
"s63",
"s64",
"s65",
"s66",
"s67",
"s68",
"s69",
"s70",
"s71",
"s72",
"s73",
"s74",
"s75",
"s76",
"s77",
"s78",
"s79",
"s80",
"s81",
"s82",
"s83",
"s84",
"s85",
"s86",
"s87",
"s88",
"s89",
"s90",
"s91",
"s92",
"s93",
"s94",
"s95",
"s96",
"s97",
"s98",
"s99",
"s100",
"s101",
"s102",
"s103",
"s104",
"s105",
"s106",
"s107",
"s108",
"s109",
"s110",
"s111",
"s112",
"s113",
"s114",
"s115",
"s116",
"s117",
"s118",
"s119",
"s120",
"s121",
"s122",
"s123",
"s124",
"s125",
"s126",
"s127",
"s128",
"s129",
"s130",
"s131",
"s132",
"s133",
"s134",
"s135",
"s136",
"s137",
"s138",
"s139",
"s140",
"s141",
"s142",
"s143",
"s144",
"s145",
"s146",
"s147",
"s148",
"s149",
"s150"
],
"smps": [
"Sepal.Length",
"Sepal.Width",
"Petal.Length",
"Petal.Width"
],
"data": [
[
5.1,
3.5,
1.4,
0.2
],
[
4.9,
3,
1.4,
0.2
],
[
4.7,
3.2,
1.3,
0.2
],
[
4.6,
3.1,
1.5,
0.2
],
[
5,
3.6,
1.4,
0.2
],
[
5.4,
3.9,
1.7,
0.4
],
[
4.6,
3.4,
1.4,
0.3
],
[
5,
3.4,
1.5,
0.2
],
[
4.4,
2.9,
1.4,
0.2
],
[
4.9,
3.1,
1.5,
0.1
],
[
5.4,
3.7,
1.5,
0.2
],
[
4.8,
3.4,
1.6,
0.2
],
[
4.8,
3,
1.4,
0.1
],
[
4.3,
3,
1.1,
0.1
],
[
5.8,
4,
1.2,
0.2
],
[
5.7,
4.4,
1.5,
0.4
],
[
5.4,
3.9,
1.3,
0.4
],
[
5.1,
3.5,
1.4,
0.3
],
[
5.7,
3.8,
1.7,
0.3
],
[
5.1,
3.8,
1.5,
0.3
],
[
5.4,
3.4,
1.7,
0.2
],
[
5.1,
3.7,
1.5,
0.4
],
[
4.6,
3.6,
1,
0.2
],
[
5.1,
3.3,
1.7,
0.5
],
[
4.8,
3.4,
1.9,
0.2
],
[
5,
3,
1.6,
0.2
],
[
5,
3.4,
1.6,
0.4
],
[
5.2,
3.5,
1.5,
0.2
],
[
5.2,
3.4,
1.4,
0.2
],
[
4.7,
3.2,
1.6,
0.2
],
[
4.8,
3.1,
1.6,
0.2
],
[
5.4,
3.4,
1.5,
0.4
],
[
5.2,
4.1,
1.5,
0.1
],
[
5.5,
4.2,
1.4,
0.2
],
[
4.9,
3.1,
1.5,
0.2
],
[
5,
3.2,
1.2,
0.2
],
[
5.5,
3.5,
1.3,
0.2
],
[
4.9,
3.6,
1.4,
0.1
],
[
4.4,
3,
1.3,
0.2
],
[
5.1,
3.4,
1.5,
0.2
],
[
5,
3.5,
1.3,
0.3
],
[
4.5,
2.3,
1.3,
0.3
],
[
4.4,
3.2,
1.3,
0.2
],
[
5,
3.5,
1.6,
0.6
],
[
5.1,
3.8,
1.9,
0.4
],
[
4.8,
3,
1.4,
0.3
],
[
5.1,
3.8,
1.6,
0.2
],
[
4.6,
3.2,
1.4,
0.2
],
[
5.3,
3.7,
1.5,
0.2
],
[
5,
3.3,
1.4,
0.2
],
[
7,
3.2,
4.7,
1.4
],
[
6.4,
3.2,
4.5,
1.5
],
[
6.9,
3.1,
4.9,
1.5
],
[
5.5,
2.3,
4,
1.3
],
[
6.5,
2.8,
4.6,
1.5
],
[
5.7,
2.8,
4.5,
1.3
],
[
6.3,
3.3,
4.7,
1.6
],
[
4.9,
2.4,
3.3,
1
],
[
6.6,
2.9,
4.6,
1.3
],
[
5.2,
2.7,
3.9,
1.4
],
[
5,
2,
3.5,
1
],
[
5.9,
3,
4.2,
1.5
],
[
6,
2.2,
4,
1
],
[
6.1,
2.9,
4.7,
1.4
],
[
5.6,
2.9,
3.6,
1.3
],
[
6.7,
3.1,
4.4,
1.4
],
[
5.6,
3,
4.5,
1.5
],
[
5.8,
2.7,
4.1,
1
],
[
6.2,
2.2,
4.5,
1.5
],
[
5.6,
2.5,
3.9,
1.1
],
[
5.9,
3.2,
4.8,
1.8
],
[
6.1,
2.8,
4,
1.3
],
[
6.3,
2.5,
4.9,
1.5
],
[
6.1,
2.8,
4.7,
1.2
],
[
6.4,
2.9,
4.3,
1.3
],
[
6.6,
3,
4.4,
1.4
],
[
6.8,
2.8,
4.8,
1.4
],
[
6.7,
3,
5,
1.7
],
[
6,
2.9,
4.5,
1.5
],
[
5.7,
2.6,
3.5,
1
],
[
5.5,
2.4,
3.8,
1.1
],
[
5.5,
2.4,
3.7,
1
],
[
5.8,
2.7,
3.9,
1.2
],
[
6,
2.7,
5.1,
1.6
],
[
5.4,
3,
4.5,
1.5
],
[
6,
3.4,
4.5,
1.6
],
[
6.7,
3.1,
4.7,
1.5
],
[
6.3,
2.3,
4.4,
1.3
],
[
5.6,
3,
4.1,
1.3
],
[
5.5,
2.5,
4,
1.3
],
[
5.5,
2.6,
4.4,
1.2
],
[
6.1,
3,
4.6,
1.4
],
[
5.8,
2.6,
4,
1.2
],
[
5,
2.3,
3.3,
1
],
[
5.6,
2.7,
4.2,
1.3
],
[
5.7,
3,
4.2,
1.2
],
[
5.7,
2.9,
4.2,
1.3
],
[
6.2,
2.9,
4.3,
1.3
],
[
5.1,
2.5,
3,
1.1
],
[
5.7,
2.8,
4.1,
1.3
],
[
6.3,
3.3,
6,
2.5
],
[
5.8,
2.7,
5.1,
1.9
],
[
7.1,
3,
5.9,
2.1
],
[
6.3,
2.9,
5.6,
1.8
],
[
6.5,
3,
5.8,
2.2
],
[
7.6,
3,
6.6,
2.1
],
[
4.9,
2.5,
4.5,
1.7
],
[
7.3,
2.9,
6.3,
1.8
],
[
6.7,
2.5,
5.8,
1.8
],
[
7.2,
3.6,
6.1,
2.5
],
[
6.5,
3.2,
5.1,
2
],
[
6.4,
2.7,
5.3,
1.9
],
[
6.8,
3,
5.5,
2.1
],
[
5.7,
2.5,
5,
2
],
[
5.8,
2.8,
5.1,
2.4
],
[
6.4,
3.2,
5.3,
2.3
],
[
6.5,
3,
5.5,
1.8
],
[
7.7,
3.8,
6.7,
2.2
],
[
7.7,
2.6,
6.9,
2.3
],
[
6,
2.2,
5,
1.5
],
[
6.9,
3.2,
5.7,
2.3
],
[
5.6,
2.8,
4.9,
2
],
[
7.7,
2.8,
6.7,
2
],
[
6.3,
2.7,
4.9,
1.8
],
[
6.7,
3.3,
5.7,
2.1
],
[
7.2,
3.2,
6,
1.8
],
[
6.2,
2.8,
4.8,
1.8
],
[
6.1,
3,
4.9,
1.8
],
[
6.4,
2.8,
5.6,
2.1
],
[
7.2,
3,
5.8,
1.6
],
[
7.4,
2.8,
6.1,
1.9
],
[
7.9,
3.8,
6.4,
2
],
[
6.4,
2.8,
5.6,
2.2
],
[
6.3,
2.8,
5.1,
1.5
],
[
6.1,
2.6,
5.6,
1.4
],
[
7.7,
3,
6.1,
2.3
],
[
6.3,
3.4,
5.6,
2.4
],
[
6.4,
3.1,
5.5,
1.8
],
[
6,
3,
4.8,
1.8
],
[
6.9,
3.1,
5.4,
2.1
],
[
6.7,
3.1,
5.6,
2.4
],
[
6.9,
3.1,
5.1,
2.3
],
[
5.8,
2.7,
5.1,
1.9
],
[
6.8,
3.2,
5.9,
2.3
],
[
6.7,
3.3,
5.7,
2.5
],
[
6.7,
3,
5.2,
2.3
],
[
6.3,
2.5,
5,
1.9
],
[
6.5,
3,
5.2,
2
],
[
6.2,
3.4,
5.4,
2.3
],
[
5.9,
3,
5.1,
1.8
]
]
},
"m": {
"Name": "Anderson's Iris data set",
"Description": "The data set consists of 50 Ss from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each S: the length and the width of the sepals and petals, in centimetres.",
"Reference": "R. A. Fisher (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics 7 (2): 179-188."
}
},
config={
"broadcast": True,
"colorBy": "Species",
"graphType": "Scatter2D",
"layoutAdjust": True,
"scatterPlotMatrix": True,
"theme": "CanvasXpress"
},
width=613,
height=613,
events=CXEvents(),
after_render=[
[
"addRegressionLine",
[
"Species",
None,
None
]
]
],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="layout_2.html")
###Output
_____no_output_____ |
Course/AIDrug/Homework2/Work2.ipynb | ###Markdown
Drug Screening Assignment
> 10185101210 ้ไฟๆฝผ
Use a `Random Forest` model to predict small molecules with antibacterial activity.
Prepare the activity data and import the relevant rdkit modules:
###Code
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import Draw
###Output
_____no_output_____
###Markdown
Import the libraries for data processing:
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Load the molecular activity data:
###Code
df_all = pd.read_csv('./Experimental_anti_bact.csv', delimiter=',', header=0)
act_smiles = df_all[df_all['Activity']=='Active']['SMILES'].tolist()
inact_smiles = df_all[df_all['Activity']=='Inactive']['SMILES'].tolist()
df_all.head()
print(len(act_smiles), len(inact_smiles))
###Output
120 2215
###Markdown
Compute the molecular fingerprints for all molecules:
###Code
from rdkit import Chem
from rdkit.Chem import rdFingerprintGenerator
mols_act = [Chem.MolFromSmiles(x) for x in act_smiles]
fps_act = rdFingerprintGenerator.GetFPs(mols_act)
mols_inact = [Chem.MolFromSmiles(x) for x in inact_smiles]
fps_inact = rdFingerprintGenerator.GetFPs(mols_inact)
fps = fps_act + fps_inact
###Output
_____no_output_____
###Markdown
Prepare the sample labels:
###Code
tag = []
for i in range(len(fps_act)):
tag.append("ACTIVE")
for i in range(len(fps_inact)):
tag.append("INACTIVE")
###Output
_____no_output_____
###Markdown
Use the random forest model
Import the random forest model and train it:
###Code
from sklearn.model_selection import train_test_split
# 20% for testing, 80% for training
X_train, X_test, y_train, y_test = train_test_split(fps, tag, test_size=0.20, random_state = 0)
print(len(X_train), len(y_test))
###Output
1868 467
###Markdown
Train the model and measure its accuracy:
###Code
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_jobs=-1, n_estimators=100)
forest.fit(X_train, y_train) # Build a forest of trees from the training set
from sklearn import metrics
y_pred = forest.predict(X_test) # Predict class for X
accuracy = metrics.accuracy_score(y_test, y_pred)
print("Model Accuracy: %.2f" %accuracy)
###Output
Model Accuracy: 0.96
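###Markdown
Note that the classes are heavily imbalanced (120 active vs. 2215 inactive molecules), so a classifier that always predicts "INACTIVE" would already score roughly 0.95; plain accuracy therefore says little on its own. A minimal sketch of a more informative check on the same test split, reusing the variables defined above:

```python
from sklearn.metrics import confusion_matrix, balanced_accuracy_score

# Rows are true classes, columns are predicted classes (order follows forest.classes_)
print(confusion_matrix(y_test, y_pred, labels=forest.classes_))
print("Balanced accuracy: %.2f" % balanced_accuracy_score(y_test, y_pred))
```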
###Markdown
Import the drug information
###Code
df_new = pd.read_csv('./Drug_HUB.csv', delimiter='\t', header=0)
df_new = df_new[['Name', 'SMILES']]
df_new.head()
###Output
_____no_output_____
###Markdown
Run the drug screening and save the results to a CSV file:
###Code
print("Runnig...")
i = 0;
df_result = pd.DataFrame({"Name":[], "SMILES":[], "Probability":[]})
df_result.head()
for one in zip(df_new['Name'], df_new['SMILES']):
i = i + 1
mol = Chem.MolFromSmiles(one[1])
fingerPrint = rdFingerprintGenerator.GetFPs([mol])
y_pred = forest.predict(fingerPrint)
y_prob = forest.predict_proba(fingerPrint)
print('\r', str(i) + "/" + str(len(df_new)),one[0], y_pred, y_prob)
if(y_pred[0] == 'ACTIVE'):
new = pd.DataFrame({"Name": [one[0]],
"SMILES": [one[1]],
"Probability": [y_prob[0][0]]})
df_result = pd.concat([df_result, new], ignore_index=True, sort=True)  # DataFrame.append was removed in pandas 2.x
print('Finished.')
df_result.to_csv("./Drug_avtive.csv", index = False)
###Output
Running...
1/4496 cefmenoxime ['ACTIVE'] [[0.7 0.3]]
2/4496 ulifloxacin ['ACTIVE'] [[0.52 0.48]]
3/4496 cefotiam ['ACTIVE'] [[0.6 0.4]]
4/4496 ceftriaxone ['ACTIVE'] [[0.6 0.4]]
5/4496 balofloxacin ['ACTIVE'] [[0.79 0.21]]
6/4496 cefminox ['ACTIVE'] [[0.58 0.42]]
7/4496 danofloxacin ['ACTIVE'] [[0.61 0.39]]
8/4496 besifloxacin ['ACTIVE'] [[0.79 0.21]]
9/4496 cefazolin ['ACTIVE'] [[0.54 0.46]]
10/4496 cefodizime ['ACTIVE'] [[0.56 0.44]]
11/4496 trovafloxacin ['ACTIVE'] [[0.57 0.43]]
12/4496 cefpirome ['ACTIVE'] [[0.63 0.37]]
13/4496 cefotiam-cilexetil ['INACTIVE'] [[0.45 0.55]]
14/4496 sitafloxacin ['ACTIVE'] [[0.62 0.38]]
15/4496 ceftizoxim ['INACTIVE'] [[0.45 0.55]]
16/4496 cefmetazole ['ACTIVE'] [[0.54 0.46]]
17/4496 cefoselis ['ACTIVE'] [[0.57 0.43]]
18/4496 cefotaxime ['ACTIVE'] [[0.57 0.43]]
19/4496 ceftazidime ['ACTIVE'] [[0.77 0.23]]
20/4496 cefetamet ['INACTIVE'] [[0.48 0.52]]
21/4496 cefamandole ['ACTIVE'] [[0.78 0.22]]
22/4496 cefuroxime ['INACTIVE'] [[0.45 0.55]]
23/4496 thiomersal ['INACTIVE'] [[0.16 0.84]]
24/4496 cefoxitin ['ACTIVE'] [[0.67 0.33]]
25/4496 cefonicid ['ACTIVE'] [[0.77 0.23]]
26/4496 cefamandole-nafate ['INACTIVE'] [[0.46 0.54]]
27/4496 cefetamet-pivoxil ['ACTIVE'] [[0.6 0.4]]
28/4496 finafloxacin ['ACTIVE'] [[0.78 0.22]]
29/4496 moxalactam ['ACTIVE'] [[0.51 0.49]]
30/4496 cefozopran ['INACTIVE'] [[0.42 0.58]]
31/4496 Ro-9187 ['INACTIVE'] [[0.1 0.9]]
32/4496 R-1479 ['INACTIVE'] [[0.1 0.9]]
33/4496 cephalosporin-c-zn ['INACTIVE'] [[0.15 0.85]]
34/4496 oxolinic-acid ['ACTIVE'] [[0.55 0.45]]
35/4496 carumonam ['INACTIVE'] [[0.39 0.61]]
36/4496 piperacillin ['ACTIVE'] [[0.55 0.45]]
37/4496 bleomycin ['ACTIVE'] [[0.61 0.39]]
38/4496 ceftaroline-fosamil ['INACTIVE'] [[0.38 0.62]]
39/4496 bleomycetin ['ACTIVE'] [[0.62 0.38]]
40/4496 7-aminocephalosporanic-acid ['INACTIVE'] [[0.19 0.81]]
41/4496 inimur ['INACTIVE'] [[0.31 0.69]]
42/4496 voreloxin ['INACTIVE'] [[0.29 0.71]]
43/4496 colistin-b-sulfate ['ACTIVE'] [[0.8 0.2]]
44/4496 balapiravir ['INACTIVE'] [[0.11 0.89]]
45/4496 faropenem ['INACTIVE'] [[0.11 0.89]]
46/4496 colistimethate ['ACTIVE'] [[0.8 0.2]]
47/4496 imipenem ['INACTIVE'] [[0.15 0.85]]
48/4496 meclocycline-sulfosalicylate ['INACTIVE'] [[0.25 0.75]]
49/4496 colistin ['ACTIVE'] [[0.76 0.24]]
50/4496 faropenem-medoxomil ['INACTIVE'] [[0.11 0.89]]
51/4496 cephalothin ['INACTIVE'] [[0.29 0.71]]
52/4496 demeclocycline ['INACTIVE'] [[0.27 0.73]]
53/4496 AGN-195183 ['INACTIVE'] [[0.04 0.96]]
54/4496 doripenem ['INACTIVE'] [[0.27 0.73]]
55/4496 nifurtimox ['INACTIVE'] [[0.33 0.67]]
56/4496 fdcyd ['INACTIVE'] [[0.23 0.77]]
57/4496 chlortetracycline ['INACTIVE'] [[0.19 0.81]]
58/4496 strontium-ranelate ['INACTIVE'] [[0.08 0.92]]
59/4496 solithromycin ['INACTIVE'] [[0.22 0.78]]
60/4496 cabotegravir ['INACTIVE'] [[0.24 0.76]]
61/4496 dolutegravir ['INACTIVE'] [[0.23 0.77]]
62/4496 elvitegravir ['INACTIVE'] [[0.2 0.8]]
63/4496 alvespimycin ['INACTIVE'] [[0.07 0.93]]
64/4496 flucloxacillin ['INACTIVE'] [[0.07 0.93]]
65/4496 avatrombopag ['INACTIVE'] [[0.21 0.79]]
66/4496 isepamicin ['INACTIVE'] [[0.28 0.72]]
67/4496 API-1 ['INACTIVE'] [[0.08 0.92]]
68/4496 NSC-3852 ['INACTIVE'] [[0.12 0.88]]
69/4496 benzyldimethyloctylammonium ['INACTIVE'] [[0.47 0.53]]
70/4496 tetroquinone ['INACTIVE'] [[0.01 0.99]]
71/4496 loracarbef ['INACTIVE'] [[0.23 0.77]]
72/4496 doxycycline-hyclate ['INACTIVE'] [[0.26 0.74]]
73/4496 AC-261066 ['INACTIVE'] [[0.06 0.94]]
74/4496 cefalonium ['INACTIVE'] [[0.21 0.79]]
75/4496 abemaciclib ['INACTIVE'] [[0.07 0.93]]
76/4496 nisin ['INACTIVE'] [[0.45 0.55]]
77/4496 piroctone-olamine ['INACTIVE'] [[0.04 0.96]]
78/4496 oxytetracycline ['INACTIVE'] [[0.21 0.79]]
79/4496 WIN-18446 ['INACTIVE'] [[0.14 0.86]]
80/4496 garenoxacin ['INACTIVE'] [[0.46 0.54]]
81/4496 pyrithione-zinc ['INACTIVE'] [[0.19 0.81]]
82/4496 gentamycin ['INACTIVE'] [[0.31 0.69]]
83/4496 cytochlor ['INACTIVE'] [[0.12 0.88]]
84/4496 decitabine ['INACTIVE'] [[0.21 0.79]]
85/4496 Ro-15-4513 ['INACTIVE'] [[0.01 0.99]]
86/4496 talmapimod ['INACTIVE'] [[0.15 0.85]]
87/4496 ertapenem ['INACTIVE'] [[0.27 0.73]]
88/4496 AL-8697 ['INACTIVE'] [[0.08 0.92]]
89/4496 SU3327 ['INACTIVE'] [[0.04 0.96]]
90/4496 omadacycline ['INACTIVE'] [[0.17 0.83]]
91/4496 azomycin-(2-nitroimidazole) ['INACTIVE'] [[0.02 0.98]]
92/4496 sulfanilate-zinc ['INACTIVE'] [[0.09 0.91]]
93/4496 filanesib ['INACTIVE'] [[0.05 0.95]]
94/4496 5-FP ['INACTIVE'] [[0. 1.]]
95/4496 RX-3117 ['INACTIVE'] [[0.03 0.97]]
96/4496 enocitabine ['INACTIVE'] [[0.18 0.82]]
97/4496 1-octacosanol ['INACTIVE'] [[0.05 0.95]]
98/4496 aldoxorubicin ['INACTIVE'] [[0.13 0.87]]
99/4496 MK-2048 ['INACTIVE'] [[0.13 0.87]]
100/4496 NS-309 ['INACTIVE'] [[0.02 0.98]]
101/4496 azimilide ['INACTIVE'] [[0.12 0.88]]
102/4496 tandospirone ['INACTIVE'] [[0.23 0.77]]
103/4496 FRAX486 ['INACTIVE'] [[0.12 0.88]]
104/4496 sipatrigine ['INACTIVE'] [[0.12 0.88]]
105/4496 valspodar ['INACTIVE'] [[0.15 0.85]]
106/4496 orphanin-fq ['INACTIVE'] [[0.24 0.76]]
107/4496 chloramphenicol-sodium-succinate ['ACTIVE'] [[0.76 0.24]]
108/4496 micronomicin ['INACTIVE'] [[0.25 0.75]]
109/4496 cefsulodin ['INACTIVE'] [[0.29 0.71]]
110/4496 ciclopirox ['INACTIVE'] [[0.44 0.56]]
111/4496 uprosertib ['INACTIVE'] [[0.05 0.95]]
112/4496 CYM-50260 ['INACTIVE'] [[0.17 0.83]]
113/4496 gepon ['INACTIVE'] [[0.25 0.75]]
114/4496 cephapirin ['INACTIVE'] [[0.35 0.65]]
115/4496 biapenem ['INACTIVE'] [[0.12 0.88]]
116/4496 methacycline ['INACTIVE'] [[0.23 0.77]]
117/4496 caspofungin-acetate ['INACTIVE'] [[0.15 0.85]]
118/4496 caspofungin ['INACTIVE'] [[0.15 0.85]]
119/4496 FCE-22250 ['ACTIVE'] [[0.6 0.4]]
120/4496 tigecycline ['INACTIVE'] [[0.12 0.88]]
121/4496 doxycycline ['INACTIVE'] [[0.18 0.82]]
122/4496 ftorafur ['INACTIVE'] [[0.25 0.75]]
123/4496 hetacillin ['INACTIVE'] [[0.03 0.97]]
124/4496 rifabutin ['ACTIVE'] [[0.55 0.45]]
125/4496 piricapiron ['INACTIVE'] [[0.07 0.93]]
126/4496 rifamycin ['INACTIVE'] [[0.4 0.6]]
127/4496 P22077 ['INACTIVE'] [[0.06 0.94]]
128/4496 opicapone ['INACTIVE'] [[0.06 0.94]]
129/4496 cetrorelix ['INACTIVE'] [[0.21 0.79]]
130/4496 palbociclib ['INACTIVE'] [[0.07 0.93]]
131/4496 actinomycin-d ['INACTIVE'] [[0.13 0.87]]
132/4496 dicloxacillin ['INACTIVE'] [[0.03 0.97]]
133/4496 dactinomycin ['INACTIVE'] [[0.13 0.87]]
134/4496 clevudine ['INACTIVE'] [[0.17 0.83]]
135/4496 eperezolid ['INACTIVE'] [[0.11 0.89]]
136/4496 dichloroacetate ['INACTIVE'] [[0.04 0.96]]
137/4496 F-11440 ['INACTIVE'] [[0.2 0.8]]
138/4496 nedocromil ['INACTIVE'] [[0.07 0.93]]
139/4496 RO-3 ['INACTIVE'] [[0.36 0.64]]
140/4496 sulbutiamine ['INACTIVE'] [[0.1 0.9]]
141/4496 exatecan-mesylate ['INACTIVE'] [[0.08 0.92]]
142/4496 GSK461364 ['INACTIVE'] [[0.09 0.91]]
143/4496 rifamycin-sv ['INACTIVE'] [[0.29 0.71]]
144/4496 geldanamycin ['INACTIVE'] [[0.13 0.87]]
145/4496 flosequinan ['INACTIVE'] [[0.04 0.96]]
146/4496 tetracycline ['INACTIVE'] [[0.2 0.8]]
147/4496 2,4-dinitrochlorobenzene ['INACTIVE'] [[0.03 0.97]]
148/4496 ticagrelor ['INACTIVE'] [[0.11 0.89]]
149/4496 minocycline ['INACTIVE'] [[0.19 0.81]]
150/4496 rolziracetam ['INACTIVE'] [[0.01 0.99]]
151/4496 JTE-607 ['INACTIVE'] [[0.13 0.87]]
152/4496 netupitant ['INACTIVE'] [[0.06 0.94]]
153/4496 BI-D1870 ['INACTIVE'] [[0.14 0.86]]
154/4496 azlocillin ['INACTIVE'] [[0.14 0.86]]
155/4496 NH125 ['INACTIVE'] [[0.33 0.67]]
156/4496 substance-p ['INACTIVE'] [[0.24 0.76]]
157/4496 zotarolimus ['INACTIVE'] [[0.13 0.87]]
158/4496 thonzonium ['INACTIVE'] [[0.44 0.56]]
159/4496 AZD3759 ['INACTIVE'] [[0.09 0.91]]
160/4496 pivampicillin ['INACTIVE'] [[0.24 0.76]]
161/4496 PF-03084014 ['INACTIVE'] [[0.09 0.91]]
162/4496 chlorproguanil ['INACTIVE'] [[0.09 0.91]]
163/4496 adaptavir ['INACTIVE'] [[0.12 0.88]]
164/4496 MK-8745 ['INACTIVE'] [[0.07 0.93]]
165/4496 tedizolid ['INACTIVE'] [[0.1 0.9]]
166/4496 GSK2830371 ['INACTIVE'] [[0.04 0.96]]
167/4496 Ro-60-0175 ['INACTIVE'] [[0.06 0.94]]
168/4496 WAY-161503 ['INACTIVE'] [[0.06 0.94]]
169/4496 NMS-1286937 ['INACTIVE'] [[0.1 0.9]]
170/4496 tafenoquine ['INACTIVE'] [[0.03 0.97]]
171/4496 octenidine ['INACTIVE'] [[0.2 0.8]]
172/4496 dantrolene ['INACTIVE'] [[0.39 0.61]]
173/4496 ALS-8176 ['INACTIVE'] [[0.02 0.98]]
174/4496 TAK-960 ['INACTIVE'] [[0.12 0.88]]
175/4496 triapine ['INACTIVE'] [[0.06 0.94]]
176/4496 terreic-acid-(-) ['INACTIVE'] [[0. 1.]]
177/4496 crizotinib ['INACTIVE'] [[0.13 0.87]]
178/4496 crizotinib-(S) ['INACTIVE'] [[0.13 0.87]]
179/4496 cloxacillin ['INACTIVE'] [[0.02 0.98]]
180/4496 sangivamycin ['INACTIVE'] [[0.06 0.94]]
181/4496 FPA-124 ['INACTIVE'] [[0.03 0.97]]
182/4496 tanespimycin ['INACTIVE'] [[0.09 0.91]]
183/4496 m-Chlorophenylbiguanide ['INACTIVE'] [[0.04 0.96]]
184/4496 AR-C155858 ['INACTIVE'] [[0.04 0.96]]
185/4496 MUT056399 ['INACTIVE'] [[0.02 0.98]]
186/4496 PSI-6130 ['INACTIVE'] [[0.04 0.96]]
187/4496 gilteritinib ['INACTIVE'] [[0.12 0.88]]
188/4496 rubitecan ['INACTIVE'] [[0.06 0.94]]
189/4496 GSK2656157 ['INACTIVE'] [[0.09 0.91]]
190/4496 picolinic-acid ['INACTIVE'] [[0.06 0.94]]
191/4496 degarelix ['INACTIVE'] [[0.21 0.79]]
192/4496 evacetrapib ['INACTIVE'] [[0.17 0.83]]
193/4496 omarigliptin ['INACTIVE'] [[0.14 0.86]]
194/4496 IOX1 ['INACTIVE'] [[0.08 0.92]]
195/4496 5-fluoro-3-pyridyl-methanol ['INACTIVE'] [[0. 1.]]
196/4496 BMS-777607 ['INACTIVE'] [[0.06 0.94]]
197/4496 aminothiadiazole ['INACTIVE'] [[0.01 0.99]]
198/4496 OTS167 ['INACTIVE'] [[0.08 0.92]]
199/4496 MK-5108 ['INACTIVE'] [[0.09 0.91]]
200/4496 MK-3207 ['INACTIVE'] [[0.12 0.88]]
201/4496 ligustilide ['INACTIVE'] [[0.02 0.98]]
202/4496 [sar9,met(o2)11]-substance-p ['INACTIVE'] [[0.28 0.72]]
203/4496 fludarabine ['INACTIVE'] [[0.08 0.92]]
204/4496 4EGI-1 ['INACTIVE'] [[0.08 0.92]]
205/4496 GS-39783 ['INACTIVE'] [[0.08 0.92]]
206/4496 VCH-916 ['INACTIVE'] [[0.02 0.98]]
207/4496 gallium-triquinolin-8-olate ['INACTIVE'] [[0.07 0.93]]
208/4496 5,7-dichlorokynurenic-acid ['INACTIVE'] [[0.1 0.9]]
209/4496 tavaborole ['INACTIVE'] [[0.02 0.98]]
210/4496 PF-5274857 ['INACTIVE'] [[0.03 0.97]]
211/4496 EB-47 ['INACTIVE'] [[0.03 0.97]]
212/4496 VX-702 ['INACTIVE'] [[0.05 0.95]]
213/4496 zinc-undecylenate ['INACTIVE'] [[0.02 0.98]]
214/4496 AZD3965 ['INACTIVE'] [[0.11 0.89]]
215/4496 MLN2480 ['INACTIVE'] [[0.07 0.93]]
216/4496 TCID ['INACTIVE'] [[0.06 0.94]]
217/4496 perospirone ['INACTIVE'] [[0.16 0.84]]
218/4496 CC-930 ['INACTIVE'] [[0.13 0.87]]
219/4496 1-phenylbiguanide ['INACTIVE'] [[0.02 0.98]]
220/4496 carbazochrome ['INACTIVE'] [[0.05 0.95]]
221/4496 PF-04691502 ['INACTIVE'] [[0.09 0.91]]
222/4496 NU6027 ['INACTIVE'] [[0.05 0.95]]
223/4496 echinomycin ['INACTIVE'] [[0.21 0.79]]
224/4496 azodicarbonamide ['INACTIVE'] [[0.03 0.97]]
225/4496 TC-S-7009 ['INACTIVE'] [[0.08 0.92]]
226/4496 A-987306 ['INACTIVE'] [[0.09 0.91]]
227/4496 voriconazole ['INACTIVE'] [[0.06 0.94]]
228/4496 VX-222 ['INACTIVE'] [[0.1 0.9]]
229/4496 kifunensine ['INACTIVE'] [[0.05 0.95]]
230/4496 MM-102 ['INACTIVE'] [[0.1 0.9]]
231/4496 NM107 ['INACTIVE'] [[0.05 0.95]]
232/4496 BMS-265246 ['INACTIVE'] [[0.03 0.97]]
233/4496 carbetocin ['INACTIVE'] [[0.08 0.92]]
234/4496 VX-745 ['INACTIVE'] [[0.02 0.98]]
235/4496 bicuculline-methochloride-(-) ['INACTIVE'] [[0.03 0.97]]
236/4496 RS-127445 ['INACTIVE'] [[0.05 0.95]]
237/4496 niridazole ['INACTIVE'] [[0.04 0.96]]
238/4496 5-Amino-3-D-ribofuranosylthiazolo[4,5-d]pyrimidin-2,7(3H,6H)-dione ['INACTIVE'] [[0.09 0.91]]
239/4496 GSK2110183 ['INACTIVE'] [[0.06 0.94]]
240/4496 lymecycline ['INACTIVE'] [[0.14 0.86]]
241/4496 flumatinib ['INACTIVE'] [[0.06 0.94]]
242/4496 atosiban ['INACTIVE'] [[0.13 0.87]]
243/4496 raltegravir ['INACTIVE'] [[0.04 0.96]]
244/4496 tirapazamine ['INACTIVE'] [[0.03 0.97]]
245/4496 viomycin ['INACTIVE'] [[0.12 0.88]]
246/4496 EC-144 ['INACTIVE'] [[0.08 0.92]]
247/4496 vasopressin ['INACTIVE'] [[0.19 0.81]]
248/4496 oprozomib ['INACTIVE'] [[0.08 0.92]]
249/4496 NBI-27914 ['INACTIVE'] [[0.13 0.87]]
250/4496 oxytocin ['INACTIVE'] [[0.13 0.87]]
251/4496 SM-164 ['INACTIVE'] [[0.22 0.78]]
252/4496 NVP-HSP990 ['INACTIVE'] [[0.03 0.97]]
253/4496 chlorisondamine-diiodide ['INACTIVE'] [[0.05 0.95]]
254/4496 octadecan-1-ol ['INACTIVE'] [[0.05 0.95]]
255/4496 daprodustat ['INACTIVE'] [[0.11 0.89]]
256/4496 Ro-5126766 ['INACTIVE'] [[0.04 0.96]]
257/4496 zebularine ['INACTIVE'] [[0.09 0.91]]
258/4496 ganglioside-gm1 ['INACTIVE'] [[0.34 0.66]]
259/4496 favipiravir ['INACTIVE'] [[0. 1.]]
260/4496 LDC1267 ['INACTIVE'] [[0.08 0.92]]
261/4496 GDC-0980 ['INACTIVE'] [[0.16 0.84]]
262/4496 BMY-14802 ['INACTIVE'] [[0.1 0.9]]
263/4496 MK-8245 ['INACTIVE'] [[0.08 0.92]]
264/4496 mericitabine ['INACTIVE'] [[0.04 0.96]]
265/4496 etanidazole ['INACTIVE'] [[0.03 0.97]]
266/4496 sanazole ['INACTIVE'] [[0. 1.]]
267/4496 CHIR-98014 ['INACTIVE'] [[0.14 0.86]]
268/4496 P5091 ['INACTIVE'] [[0.07 0.93]]
269/4496 blonanserin ['INACTIVE'] [[0.04 0.96]]
270/4496 AZ505 ['INACTIVE'] [[0.08 0.92]]
271/4496 golvatinib ['INACTIVE'] [[0.14 0.86]]
272/4496 dolastatin-10 ['INACTIVE'] [[0.23 0.77]]
273/4496 tretazicar ['INACTIVE'] [[0.07 0.93]]
274/4496 tenidap ['INACTIVE'] [[0.05 0.95]]
275/4496 CGK-733 ['INACTIVE'] [[0.15 0.85]]
276/4496 SAR405838 ['INACTIVE'] [[0.04 0.96]]
277/4496 SX-011 ['INACTIVE'] [[0.11 0.89]]
278/4496 GDC-0994 ['INACTIVE'] [[0.06 0.94]]
279/4496 SB-415286 ['INACTIVE'] [[0.08 0.92]]
280/4496 brequinar ['INACTIVE'] [[0.04 0.96]]
281/4496 ravoxertinib ['INACTIVE'] [[0.06 0.94]]
282/4496 TAK-733 ['INACTIVE'] [[0.06 0.94]]
283/4496 ixabepilone ['INACTIVE'] [[0.11 0.89]]
284/4496 TGR-1202 ['INACTIVE'] [[0.17 0.83]]
285/4496 idasanutlin ['INACTIVE'] [[0.11 0.89]]
286/4496 terlipressin ['INACTIVE'] [[0.33 0.67]]
287/4496 gadodiamide ['INACTIVE'] [[0.03 0.97]]
288/4496 genz-644282 ['INACTIVE'] [[0.08 0.92]]
289/4496 Y-29794 ['INACTIVE'] [[0.12 0.88]]
290/4496 AZD9668 ['INACTIVE'] [[0.1 0.9]]
291/4496 DCEBIO ['INACTIVE'] [[0.12 0.88]]
292/4496 XL888 ['INACTIVE'] [[0.11 0.89]]
293/4496 SNX-5422 ['INACTIVE'] [[0.05 0.95]]
294/4496 DMP-777 ['INACTIVE'] [[0.11 0.89]]
295/4496 MRK-409 ['INACTIVE'] [[0.11 0.89]]
296/4496 doxylamine ['INACTIVE'] [[0. 1.]]
297/4496 lypressin ['INACTIVE'] [[0.23 0.77]]
298/4496 erythromycin-estolate ['INACTIVE'] [[0.02 0.98]]
299/4496 PD-160170 ['INACTIVE'] [[0.27 0.73]]
300/4496 bitopertin ['INACTIVE'] [[0.12 0.88]]
301/4496 torcitabine ['INACTIVE'] [[0.08 0.92]]
302/4496 AMG-337 ['INACTIVE'] [[0.13 0.87]]
303/4496 metazosin ['INACTIVE'] [[0.11 0.89]]
304/4496 polidocanol ['INACTIVE'] [[0.04 0.96]]
305/4496 SAR131675 ['INACTIVE'] [[0.05 0.95]]
306/4496 volasertib ['INACTIVE'] [[0.08 0.92]]
307/4496 GDC-0834 ['INACTIVE'] [[0.13 0.87]]
308/4496 micafungin ['INACTIVE'] [[0.21 0.79]]
309/4496 DFB ['INACTIVE'] [[0.03 0.97]]
310/4496 SNX-2112 ['INACTIVE'] [[0.04 0.96]]
311/4496 NSC636819 ['INACTIVE'] [[0.09 0.91]]
312/4496 C646 ['INACTIVE'] [[0.08 0.92]]
313/4496 pyridoxal ['INACTIVE'] [[0.03 0.97]]
314/4496 riociguat ['INACTIVE'] [[0.07 0.93]]
315/4496 cariprazine ['INACTIVE'] [[0.11 0.89]]
316/4496 sutezolid ['INACTIVE'] [[0.13 0.87]]
317/4496 etazolate ['INACTIVE'] [[0.04 0.96]]
318/4496 BI-78D3 ['INACTIVE'] [[0.12 0.88]]
319/4496 LY3009120 ['INACTIVE'] [[0.04 0.96]]
320/4496 trametinib ['INACTIVE'] [[0.09 0.91]]
321/4496 omecamtiv-mecarbil ['INACTIVE'] [[0.1 0.9]]
322/4496 misonidazole ['INACTIVE'] [[0.02 0.98]]
323/4496 LY2090314 ['INACTIVE'] [[0.23 0.77]]
324/4496 clofarabine ['INACTIVE'] [[0.12 0.88]]
325/4496 PF-03049423 ['INACTIVE'] [[0.12 0.88]]
326/4496 MK-0812 ['INACTIVE'] [[0.12 0.88]]
327/4496 BMS-626529 ['INACTIVE'] [[0.03 0.97]]
328/4496 olaparib ['INACTIVE'] [[0.07 0.93]]
329/4496 desmopressin-acetate ['INACTIVE'] [[0.17 0.83]]
330/4496 BMY-7378 ['INACTIVE'] [[0.22 0.78]]
331/4496 telaprevir ['INACTIVE'] [[0.11 0.89]]
332/4496 BMS-688521 ['INACTIVE'] [[0.19 0.81]]
333/4496 fidarestat ['INACTIVE'] [[0.04 0.96]]
334/4496 LDN-57444 ['INACTIVE'] [[0.05 0.95]]
335/4496 AS-2444697 ['INACTIVE'] [[0.11 0.89]]
336/4496 cilengitide ['INACTIVE'] [[0.11 0.89]]
337/4496 ribociclib ['INACTIVE'] [[0.09 0.91]]
338/4496 BAY-87-2243 ['INACTIVE'] [[0.12 0.88]]
339/4496 avridine ['INACTIVE'] [[0.07 0.93]]
340/4496 oxacillin ['INACTIVE'] [[0.02 0.98]]
341/4496 sapropterin ['INACTIVE'] [[0.04 0.96]]
342/4496 edoxudine ['INACTIVE'] [[0.22 0.78]]
343/4496 AV-412 ['INACTIVE'] [[0.13 0.87]]
344/4496 MK-0773 ['INACTIVE'] [[0.06 0.94]]
345/4496 UBP-310 ['INACTIVE'] [[0.17 0.83]]
346/4496 triciribine ['INACTIVE'] [[0.1 0.9]]
347/4496 1-hexadecanol ['INACTIVE'] [[0.05 0.95]]
348/4496 adatanserin ['INACTIVE'] [[0.18 0.82]]
349/4496 LCL-161 ['INACTIVE'] [[0.09 0.91]]
350/4496 OR-486 ['INACTIVE'] [[0.05 0.95]]
351/4496 cevipabulin ['INACTIVE'] [[0.04 0.96]]
352/4496 AZD5069 ['INACTIVE'] [[0.09 0.91]]
353/4496 FERb-033 ['INACTIVE'] [[0.03 0.97]]
354/4496 nimustine ['INACTIVE'] [[0. 1.]]
355/4496 actinoquinol ['INACTIVE'] [[0.14 0.86]]
356/4496 gentiopicrin ['INACTIVE'] [[0.01 0.99]]
357/4496 lurasidone ['INACTIVE'] [[0.11 0.89]]
358/4496 WR99210 ['INACTIVE'] [[0.16 0.84]]
359/4496 apalutamide ['INACTIVE'] [[0.09 0.91]]
360/4496 L-838417 ['INACTIVE'] [[0.07 0.93]]
361/4496 BMS-566419 ['INACTIVE'] [[0.3 0.7]]
362/4496 BMS-927711 ['INACTIVE'] [[0.09 0.91]]
363/4496 daptomycin ['INACTIVE'] [[0.32 0.68]]
364/4496 JNJ-38877605 ['INACTIVE'] [[0.11 0.89]]
365/4496 metatinib ['INACTIVE'] [[0.02 0.98]]
366/4496 golgicide-a ['INACTIVE'] [[0.07 0.93]]
367/4496 tasquinimod ['INACTIVE'] [[0.03 0.97]]
368/4496 BGT226 ['INACTIVE'] [[0.12 0.88]]
369/4496 ST-2825 ['INACTIVE'] [[0.16 0.84]]
370/4496 safingol ['INACTIVE'] [[0.05 0.95]]
371/4496 7-hydroxystaurosporine ['INACTIVE'] [[0.02 0.98]]
372/4496 icatibant-acetate ['INACTIVE'] [[0.32 0.68]]
373/4496 carboxyamidotriazole ['INACTIVE'] [[0.05 0.95]]
374/4496 GSK503 ['INACTIVE'] [[0.13 0.87]]
375/4496 valrubicin ['INACTIVE'] [[0.12 0.88]]
376/4496 saracatinib ['INACTIVE'] [[0.07 0.93]]
377/4496 nifekalant ['INACTIVE'] [[0.1 0.9]]
378/4496 telbivudine ['INACTIVE'] [[0.27 0.73]]
379/4496 A-839977 ['INACTIVE'] [[0.05 0.95]]
380/4496 vicriviroc ['INACTIVE'] [[0.05 0.95]]
381/4496 ML-348 ['INACTIVE'] [[0.02 0.98]]
382/4496 DHBP ['INACTIVE'] [[0.24 0.76]]
383/4496 BTZ043-racemate ['INACTIVE'] [[0.16 0.84]]
384/4496 VBY-825 ['INACTIVE'] [[0.11 0.89]]
385/4496 trichloroacetic-acid ['INACTIVE'] [[0.02 0.98]]
386/4496 CX-5461 ['INACTIVE'] [[0.12 0.88]]
387/4496 ergotamine ['INACTIVE'] [[0.1 0.9]]
388/4496 AZD3839 ['INACTIVE'] [[0.07 0.93]]
389/4496 antagonist-g ['INACTIVE'] [[0.18 0.82]]
390/4496 sodium-nitrite ['INACTIVE'] [[0.01 0.99]]
391/4496 JNJ-42165279 ['INACTIVE'] [[0.04 0.96]]
392/4496 R406 ['INACTIVE'] [[0.12 0.88]]
393/4496 VUF10166 ['INACTIVE'] [[0.02 0.98]]
394/4496 flupentixol ['INACTIVE'] [[0.04 0.96]]
395/4496 2-(chloromethyl)-5,6,7,8-tetrahydro[1]benzothieno[2,3-d]pyrimidin-4(3H)-one ['INACTIVE'] [[0.03 0.97]]
396/4496 VX-765 ['INACTIVE'] [[0.08 0.92]]
397/4496 CB-839 ['INACTIVE'] [[0.06 0.94]]
398/4496 altiratinib ['INACTIVE'] [[0.03 0.97]]
399/4496 PIT ['INACTIVE'] [[0.02 0.98]]
400/4496 RI-1 ['INACTIVE'] [[0.07 0.93]]
401/4496 RRx-001 ['INACTIVE'] [[0.01 0.99]]
402/4496 MK-1775 ['INACTIVE'] [[0.08 0.92]]
403/4496 PH-797804 ['INACTIVE'] [[0.05 0.95]]
404/4496 TAK-063 ['INACTIVE'] [[0.13 0.87]]
405/4496 PHA-848125 ['INACTIVE'] [[0.05 0.95]]
406/4496 maribavir ['INACTIVE'] [[0.06 0.94]]
407/4496 seclazone ['INACTIVE'] [[0.02 0.98]]
408/4496 KI-8751 ['INACTIVE'] [[0.07 0.93]]
409/4496 pomalidomide ['INACTIVE'] [[0.03 0.97]]
410/4496 exherin ['INACTIVE'] [[0.16 0.84]]
411/4496 gemcitabine-elaidate ['INACTIVE'] [[0.06 0.94]]
412/4496 JNJ-40411813 ['INACTIVE'] [[0.07 0.93]]
413/4496 elagolix ['INACTIVE'] [[0.09 0.91]]
414/4496 PF-3274167 ['INACTIVE'] [[0.04 0.96]]
415/4496 GW-2580 ['INACTIVE'] [[0.46 0.54]]
416/4496 piperaquine-phosphate ['INACTIVE'] [[0.09 0.91]]
417/4496 romidepsin ['INACTIVE'] [[0.09 0.91]]
418/4496 UNC2025 ['INACTIVE'] [[0.14 0.86]]
419/4496 purvalanol-b ['INACTIVE'] [[0.07 0.93]]
420/4496 bacitracin-zinc ['INACTIVE'] [[0.31 0.69]]
421/4496 apricitabine ['INACTIVE'] [[0.01 0.99]]
422/4496 ascorbic-acid ['INACTIVE'] [[0.05 0.95]]
423/4496 tezacaftor ['INACTIVE'] [[0.05 0.95]]
424/4496 foropafant ['INACTIVE'] [[0.06 0.94]]
425/4496 gadobutrol ['INACTIVE'] [[0.04 0.96]]
426/4496 walrycin-b ['INACTIVE'] [[0.03 0.97]]
427/4496 BD-1047 ['INACTIVE'] [[0.05 0.95]]
428/4496 JNJ-17203212 ['INACTIVE'] [[0.08 0.92]]
429/4496 RHC-80267 ['INACTIVE'] [[0.04 0.96]]
430/4496 imidurea ['INACTIVE'] [[0.12 0.88]]
431/4496 domiphen ['INACTIVE'] [[0.32 0.68]]
432/4496 hemoglobin-modulators-1 ['INACTIVE'] [[0.06 0.94]]
433/4496 enasidenib ['INACTIVE'] [[0.03 0.97]]
434/4496 troleandomycin ['INACTIVE'] [[0.14 0.86]]
435/4496 norvancomycin ['INACTIVE'] [[0.18 0.82]]
436/4496 lemborexant ['INACTIVE'] [[0.04 0.96]]
437/4496 ZK811752 ['INACTIVE'] [[0.03 0.97]]
438/4496 NSI-189 ['INACTIVE'] [[0.02 0.98]]
439/4496 GPBAR-A ['INACTIVE'] [[0.07 0.93]]
440/4496 lumacaftor ['INACTIVE'] [[0.03 0.97]]
441/4496 EN460 ['INACTIVE'] [[0.06 0.94]]
442/4496 9-aminocamptothecin ['INACTIVE'] [[0.02 0.98]]
443/4496 lorlatinib ['INACTIVE'] [[0.07 0.93]]
444/4496 PLX-647 ['INACTIVE'] [[0.05 0.95]]
445/4496 PF-06463922 ['INACTIVE'] [[0.07 0.93]]
446/4496 GSK923295 ['INACTIVE'] [[0.11 0.89]]
447/4496 ponatinib ['INACTIVE'] [[0.06 0.94]]
448/4496 2-chloroadenosine ['INACTIVE'] [[0.12 0.88]]
449/4496 I-BRD9 ['INACTIVE'] [[0.13 0.87]]
450/4496 AMN-082 ['INACTIVE'] [[0.06 0.94]]
451/4496 PLX8394 ['INACTIVE'] [[0.13 0.87]]
452/4496 dovitinib ['INACTIVE'] [[0.11 0.89]]
453/4496 WAY-208466 ['INACTIVE'] [[0.06 0.94]]
454/4496 ranirestat ['INACTIVE'] [[0.08 0.92]]
455/4496 DMNB ['INACTIVE'] [[0.05 0.95]]
456/4496 AN2718 ['INACTIVE'] [[0.06 0.94]]
457/4496 entrectinib ['INACTIVE'] [[0.1 0.9]]
458/4496 ampicillin ['INACTIVE'] [[0.08 0.92]]
459/4496 sorbinil ['INACTIVE'] [[0.03 0.97]]
460/4496 darapladib ['INACTIVE'] [[0.09 0.91]]
461/4496 GZD824 ['INACTIVE'] [[0.07 0.93]]
462/4496 BIBN4096 ['INACTIVE'] [[0.15 0.85]]
463/4496 methicillin ['INACTIVE'] [[0.06 0.94]]
464/4496 clonixin-lysinate ['INACTIVE'] [[0.05 0.95]]
465/4496 CC-223 ['INACTIVE'] [[0.06 0.94]]
466/4496 NBI-74330-(+/-) ['INACTIVE'] [[0.07 0.93]]
467/4496 loreclezole ['INACTIVE'] [[0.03 0.97]]
468/4496 delanzomib ['INACTIVE'] [[0.01 0.99]]
469/4496 MK-6096 ['INACTIVE'] [[0.02 0.98]]
470/4496 bosutinib ['INACTIVE'] [[0.14 0.86]]
471/4496 gemcadiol ['INACTIVE'] [[0.07 0.93]]
472/4496 CGS-21680 ['INACTIVE'] [[0.06 0.94]]
473/4496 BAY-41-2272 ['INACTIVE'] [[0.09 0.91]]
474/4496 L-670596 ['INACTIVE'] [[0.08 0.92]]
475/4496 MK-2461 ['INACTIVE'] [[0.12 0.88]]
476/4496 UK-383367 ['INACTIVE'] [[0.1 0.9]]
477/4496 ixazomib-citrate ['INACTIVE'] [[0.11 0.89]]
478/4496 2'-MeCCPA ['INACTIVE'] [[0.04 0.96]]
479/4496 afatinib ['INACTIVE'] [[0.06 0.94]]
480/4496 BCTC ['INACTIVE'] [[0.04 0.96]]
481/4496 PF-04136309 ['INACTIVE'] [[0.05 0.95]]
482/4496 SB-205384 ['INACTIVE'] [[0.07 0.93]]
483/4496 ademetionine ['INACTIVE'] [[0.04 0.96]]
484/4496 GDC-0068 ['INACTIVE'] [[0.09 0.91]]
485/4496 MEK162 ['INACTIVE'] [[0.06 0.94]]
486/4496 losmapimod ['INACTIVE'] [[0.01 0.99]]
487/4496 gadoteridol ['INACTIVE'] [[0.05 0.95]]
488/4496 T-0156 ['INACTIVE'] [[0.12 0.88]]
489/4496 MLN9708 ['INACTIVE'] [[0.07 0.93]]
490/4496 UNC1999 ['INACTIVE'] [[0.08 0.92]]
491/4496 GSK2816126 ['INACTIVE'] [[0.12 0.88]]
492/4496 GS-143 ['INACTIVE'] [[0.1 0.9]]
493/4496 LY411575 ['INACTIVE'] [[0.14 0.86]]
494/4496 ivosidenib ['INACTIVE'] [[0.13 0.87]]
495/4496 torin-1 ['INACTIVE'] [[0.12 0.88]]
496/4496 dosulepin ['INACTIVE'] [[0.03 0.97]]
497/4496 T-0070907 ['INACTIVE'] [[0.03 0.97]]
498/4496 nafcillin ['INACTIVE'] [[0.03 0.97]]
499/4496 teneligliptin ['INACTIVE'] [[0.13 0.87]]
500/4496 GSK2606414 ['INACTIVE'] [[0.07 0.93]]
501/4496 PR-619 ['INACTIVE'] [[0.07 0.93]]
502/4496 boceprevir ['INACTIVE'] [[0.12 0.88]]
503/4496 almitrine ['INACTIVE'] [[0.08 0.92]]
504/4496 2-octyldodecan-1-ol ['INACTIVE'] [[0.08 0.92]]
505/4496 maraviroc ['INACTIVE'] [[0.06 0.94]]
506/4496 A-438079 ['INACTIVE'] [[0.04 0.96]]
507/4496 amthamine ['INACTIVE'] [[0.03 0.97]]
508/4496 meclinertant ['INACTIVE'] [[0.09 0.91]]
509/4496 DSR-6434 ['INACTIVE'] [[0.05 0.95]]
510/4496 fiacitabine ['INACTIVE'] [[0.09 0.91]]
511/4496 somatostatin ['INACTIVE'] [[0.35 0.65]]
512/4496 TAK-285 ['INACTIVE'] [[0.08 0.92]]
513/4496 AZD8835 ['INACTIVE'] [[0.18 0.82]]
514/4496 merbromin ['INACTIVE'] [[0.01 0.99]]
515/4496 carmustine ['INACTIVE'] [[0. 1.]]
516/4496 ML-179 ['INACTIVE'] [[0.12 0.88]]
517/4496 Org-12962 ['INACTIVE'] [[0.1 0.9]]
518/4496 decamethoxine ['INACTIVE'] [[0.07 0.93]]
519/4496 CE3F4 ['INACTIVE'] [[0.03 0.97]]
520/4496 hydroxypropyl-beta-cyclodextrin ['INACTIVE'] [[0.11 0.89]]
521/4496 streptozotocin ['INACTIVE'] [[0.02 0.98]]
522/4496 tetrahydrouridine ['INACTIVE'] [[0.08 0.92]]
523/4496 DAPT ['INACTIVE'] [[0.01 0.99]]
524/4496 ICA-069673 ['INACTIVE'] [[0.01 0.99]]
525/4496 MNITMT ['INACTIVE'] [[0.04 0.96]]
526/4496 taladegib ['INACTIVE'] [[0.11 0.89]]
527/4496 bivalirudin ['INACTIVE'] [[0.2 0.8]]
528/4496 TC-NTR1-17 ['INACTIVE'] [[0.07 0.93]]
529/4496 MK-2295 ['INACTIVE'] [[0.06 0.94]]
530/4496 2-chloro-N6-cyclopentyladenosine ['INACTIVE'] [[0.08 0.92]]
531/4496 talazoparib ['INACTIVE'] [[0.05 0.95]]
532/4496 DNQX ['INACTIVE'] [[0.05 0.95]]
533/4496 amyl-nitrite ['INACTIVE'] [[0.03 0.97]]
534/4496 masitinib ['INACTIVE'] [[0.02 0.98]]
535/4496 pyridoxal-isonicotinoyl-hydrazone ['INACTIVE'] [[0.04 0.96]]
536/4496 L-Ascorbyl-6-palmitate ['INACTIVE'] [[0.05 0.95]]
537/4496 GSK163090 ['INACTIVE'] [[0.05 0.95]]
538/4496 WEHI-345-analog ['INACTIVE'] [[0.03 0.97]]
539/4496 BD-1063 ['INACTIVE'] [[0.05 0.95]]
540/4496 3-deazaadenosine ['INACTIVE'] [[0.09 0.91]]
541/4496 pimecrolimus ['INACTIVE'] [[0.07 0.93]]
542/4496 talc ['INACTIVE'] [[0.03 0.97]]
543/4496 azasetron ['INACTIVE'] [[0.08 0.92]]
544/4496 DMP-543 ['INACTIVE'] [[0.07 0.93]]
545/4496 sinefungin ['INACTIVE'] [[0.02 0.98]]
546/4496 retaspimycin ['INACTIVE'] [[0.09 0.91]]
547/4496 AS-252424 ['INACTIVE'] [[0.05 0.95]]
548/4496 robalzotan ['INACTIVE'] [[0.07 0.93]]
549/4496 telotristat-ethyl ['INACTIVE'] [[0.1 0.9]]
550/4496 MGCD-265 ['INACTIVE'] [[0.07 0.93]]
551/4496 proglumetacin ['INACTIVE'] [[0.06 0.94]]
552/4496 CUDC-427 ['INACTIVE'] [[0.1 0.9]]
553/4496 voxtalisib ['INACTIVE'] [[0.09 0.91]]
554/4496 lonafarnib ['INACTIVE'] [[0.1 0.9]]
555/4496 orphenadrine ['INACTIVE'] [[0. 1.]]
556/4496 mezlocillin ['INACTIVE'] [[0.24 0.76]]
557/4496 dimethisoquin ['INACTIVE'] [[0.03 0.97]]
558/4496 VU-0422288 ['INACTIVE'] [[0.19 0.81]]
559/4496 pyridoxamine ['INACTIVE'] [[0.01 0.99]]
560/4496 2'-c-methylguanosine ['INACTIVE'] [[0.08 0.92]]
561/4496 6-aminochrysene ['INACTIVE'] [[0.06 0.94]]
562/4496 dasatinib ['INACTIVE'] [[0.11 0.89]]
563/4496 narlaprevir ['INACTIVE'] [[0.12 0.88]]
564/4496 propatylnitrate ['INACTIVE'] [[0.01 0.99]]
565/4496 phytosphingosine ['INACTIVE'] [[0.05 0.95]]
566/4496 ziprasidone ['INACTIVE'] [[0.12 0.88]]
567/4496 Ro-4987655 ['INACTIVE'] [[0.12 0.88]]
568/4496 DDR1-IN-1 ['INACTIVE'] [[0.06 0.94]]
569/4496 NVP-BGJ398 ['INACTIVE'] [[0.16 0.84]]
570/4496 TCS-21311 ['INACTIVE'] [[0.08 0.92]]
571/4496 rolitetracycline ['INACTIVE'] [[0.16 0.84]]
572/4496 exenatide ['INACTIVE'] [[0.33 0.67]]
573/4496 tiaramide ['INACTIVE'] [[0.06 0.94]]
574/4496 tarafenacin ['INACTIVE'] [[0.07 0.93]]
575/4496 ADX-10059 ['INACTIVE'] [[0.07 0.93]]
576/4496 teroxirone ['INACTIVE'] [[0.05 0.95]]
577/4496 CGH2466 ['INACTIVE'] [[0.1 0.9]]
578/4496 SGC-0946 ['INACTIVE'] [[0.09 0.91]]
579/4496 liraglutide ['INACTIVE'] [[0.35 0.65]]
580/4496 BMS-754807 ['INACTIVE'] [[0.1 0.9]]
581/4496 XAV-939 ['INACTIVE'] [[0.01 0.99]]
582/4496 AMG-925 ['INACTIVE'] [[0.06 0.94]]
583/4496 lomustine ['INACTIVE'] [[0. 1.]]
584/4496 rebastinib ['INACTIVE'] [[0.07 0.93]]
585/4496 GDC-0152 ['INACTIVE'] [[0.03 0.97]]
586/4496 nifenalol ['INACTIVE'] [[0.1 0.9]]
587/4496 irinotecan ['INACTIVE'] [[0.1 0.9]]
588/4496 NECA ['INACTIVE'] [[0.02 0.98]]
589/4496 SR-1664 ['INACTIVE'] [[0.18 0.82]]
590/4496 cladribine ['INACTIVE'] [[0.07 0.93]]
591/4496 tracazolate ['INACTIVE'] [[0.05 0.95]]
592/4496 AZD8330 ['INACTIVE'] [[0.06 0.94]]
593/4496 YM-976 ['INACTIVE'] [[0.03 0.97]]
594/4496 LY2334737 ['INACTIVE'] [[0.07 0.93]]
595/4496 trans-4-[8-(3-Fluorophenyl)-1,7-naphthyridin-6-yl]cyclohexanecarboxylic-acid ['INACTIVE'] [[0.03 0.97]]
596/4496 ICI-63197 ['INACTIVE'] [[0.04 0.96]]
597/4496 PD-166285 ['INACTIVE'] [[0.09 0.91]]
598/4496 nelarabine ['INACTIVE'] [[0.14 0.86]]
599/4496 benfluralin ['INACTIVE'] [[0.08 0.92]]
600/4496 4-galactosyllactose ['INACTIVE'] [[0.03 0.97]]
601/4496 BAM7 ['INACTIVE'] [[0.04 0.96]]
602/4496 alda-1 ['INACTIVE'] [[0.02 0.98]]
603/4496 CS-917 ['INACTIVE'] [[0.03 0.97]]
604/4496 GSK690693 ['INACTIVE'] [[0.21 0.79]]
605/4496 AZ191 ['INACTIVE'] [[0.14 0.86]]
606/4496 ipidacrine ['INACTIVE'] [[0.01 0.99]]
607/4496 PA-452 ['INACTIVE'] [[0.02 0.98]]
608/4496 ON123300 ['INACTIVE'] [[0.1 0.9]]
609/4496 regadenoson ['INACTIVE'] [[0.13 0.87]]
610/4496 semustine ['INACTIVE'] [[0.01 0.99]]
611/4496 A66 ['INACTIVE'] [[0.06 0.94]]
612/4496 3-bromo-7-nitroindazole ['INACTIVE'] [[0.2 0.8]]
613/4496 halofantrine ['INACTIVE'] [[0.1 0.9]]
614/4496 quinolinic-acid ['INACTIVE'] [[0.02 0.98]]
615/4496 apoptosis-activator-II ['INACTIVE'] [[0.02 0.98]]
616/4496 torcetrapib ['INACTIVE'] [[0.04 0.96]]
617/4496 SHP099 ['INACTIVE'] [[0.12 0.88]]
618/4496 perzinfotel ['INACTIVE'] [[0.03 0.97]]
619/4496 CaCCinh-A01 ['INACTIVE'] [[0.02 0.98]]
620/4496 vonoprazan ['INACTIVE'] [[0.04 0.96]]
621/4496 YO-01027 ['INACTIVE'] [[0.09 0.91]]
622/4496 lenvatinib ['INACTIVE'] [[0.06 0.94]]
623/4496 iodixanol ['INACTIVE'] [[0.08 0.92]]
624/4496 vanoxerine ['INACTIVE'] [[0.02 0.98]]
625/4496 LIMKi-3 ['INACTIVE'] [[0.02 0.98]]
626/4496 dabrafenib ['INACTIVE'] [[0.06 0.94]]
627/4496 aclarubicin ['INACTIVE'] [[0.09 0.91]]
628/4496 LY2228820 ['INACTIVE'] [[0.08 0.92]]
629/4496 antimonyl ['INACTIVE'] [[0.05 0.95]]
630/4496 80841-78-7 ['INACTIVE'] [[0.02 0.98]]
631/4496 novobiocin ['INACTIVE'] [[0.05 0.95]]
632/4496 pixantrone ['INACTIVE'] [[0.02 0.98]]
633/4496 piclamilast ['INACTIVE'] [[0.06 0.94]]
634/4496 selumetinib ['INACTIVE'] [[0.07 0.93]]
635/4496 tedizolid-phosphate ['INACTIVE'] [[0.13 0.87]]
636/4496 GSK1838705A ['INACTIVE'] [[0.05 0.95]]
637/4496 INC-280 ['INACTIVE'] [[0.07 0.93]]
638/4496 anacetrapib ['INACTIVE'] [[0.05 0.95]]
639/4496 GW-1100 ['INACTIVE'] [[0.05 0.95]]
640/4496 brivanib-alaninate ['INACTIVE'] [[0.14 0.86]]
641/4496 ioxaglic-acid ['INACTIVE'] [[0.06 0.94]]
642/4496 ticarcillin ['INACTIVE'] [[0.08 0.92]]
643/4496 COR-170 ['INACTIVE'] [[0.14 0.86]]
644/4496 AZD1480 ['INACTIVE'] [[0. 1.]]
645/4496 PI3K-IN-2 ['INACTIVE'] [[0.11 0.89]]
646/4496 AZD3264 ['INACTIVE'] [[0.1 0.9]]
647/4496 CTS21166 ['INACTIVE'] [[0.1 0.9]]
648/4496 SB-243213 ['INACTIVE'] [[0.06 0.94]]
649/4496 2-pyridylethylamine ['INACTIVE'] [[0. 1.]]
650/4496 pyrazoloacridine ['INACTIVE'] [[0.09 0.91]]
651/4496 eptifibatide ['INACTIVE'] [[0.2 0.8]]
652/4496 APD597 ['INACTIVE'] [[0.04 0.96]]
653/4496 PDP-EA ['INACTIVE'] [[0.09 0.91]]
654/4496 torin-2 ['INACTIVE'] [[0.14 0.86]]
655/4496 MSX-122 ['INACTIVE'] [[0.04 0.96]]
656/4496 SSR128129E ['INACTIVE'] [[0.03 0.97]]
657/4496 edaglitazone ['INACTIVE'] [[0.03 0.97]]
658/4496 CDBA ['INACTIVE'] [[0.04 0.96]]
659/4496 NGB-2904 ['INACTIVE'] [[0.08 0.92]]
660/4496 anlotinib ['INACTIVE'] [[0.08 0.92]]
661/4496 SAG ['INACTIVE'] [[0.09 0.91]]
662/4496 GP2a ['INACTIVE'] [[0.01 0.99]]
663/4496 allopurinol-riboside ['INACTIVE'] [[0.03 0.97]]
664/4496 SID-7969543 ['INACTIVE'] [[0.03 0.97]]
665/4496 telotristat ['INACTIVE'] [[0.08 0.92]]
666/4496 hydroxytacrine-maleate-(r,s) ['INACTIVE'] [[0.08 0.92]]
667/4496 JLK-6 ['INACTIVE'] [[0.04 0.96]]
668/4496 batimastat ['INACTIVE'] [[0.05 0.95]]
669/4496 3,3'-dichlorobenzaldazine ['INACTIVE'] [[0.05 0.95]]
670/4496 CCT137690 ['INACTIVE'] [[0.07 0.93]]
671/4496 flunixin-meglumin ['INACTIVE'] [[0.01 0.99]]
672/4496 thiocolchicoside ['INACTIVE'] [[0.11 0.89]]
673/4496 N-methylquipazine ['INACTIVE'] [[0.01 0.99]]
674/4496 oritavancin ['INACTIVE'] [[0.27 0.73]]
675/4496 zosuquidar ['INACTIVE'] [[0.08 0.92]]
676/4496 AZD7762 ['INACTIVE'] [[0.04 0.96]]
677/4496 talniflumate ['INACTIVE'] [[0.03 0.97]]
678/4496 halofuginone ['INACTIVE'] [[0.07 0.93]]
679/4496 NSC-95397 ['INACTIVE'] [[0.05 0.95]]
680/4496 troclosene ['INACTIVE'] [[0.02 0.98]]
681/4496 AV-608 ['INACTIVE'] [[0.06 0.94]]
682/4496 AZ-10606120 ['INACTIVE'] [[0.05 0.95]]
683/4496 SB-216763 ['INACTIVE'] [[0.03 0.97]]
684/4496 JIB04 ['INACTIVE'] [[0.08 0.92]]
685/4496 SB-772077B ['INACTIVE'] [[0.07 0.93]]
686/4496 lesinurad ['INACTIVE'] [[0.04 0.96]]
687/4496 broxuridine ['INACTIVE'] [[0.24 0.76]]
688/4496 RWJ-50271 ['INACTIVE'] [[0.05 0.95]]
689/4496 GW-842166 ['INACTIVE'] [[0.08 0.92]]
690/4496 LXR-623 ['INACTIVE'] [[0.08 0.92]]
691/4496 GNF-7 ['INACTIVE'] [[0.13 0.87]]
692/4496 2-fluoro-2-deoxy-D-galactose ['INACTIVE'] [[0. 1.]]
693/4496 CP-532623 ['INACTIVE'] [[0.03 0.97]]
694/4496 vismodegib ['INACTIVE'] [[0.04 0.96]]
695/4496 brigatinib ['INACTIVE'] [[0.11 0.89]]
696/4496 DCC-2618 ['INACTIVE'] [[0.02 0.98]]
697/4496 dihydroergocristine ['INACTIVE'] [[0.11 0.89]]
698/4496 MKT-077 ['INACTIVE'] [[0.06 0.94]]
699/4496 BIIB021 ['INACTIVE'] [[0.06 0.94]]
700/4496 trelagliptin ['INACTIVE'] [[0.08 0.92]]
701/4496 LY2801653 ['INACTIVE'] [[0.14 0.86]]
702/4496 lirimilast ['INACTIVE'] [[0.05 0.95]]
703/4496 6-aminopenicillanic-acid ['INACTIVE'] [[0.03 0.97]]
704/4496 dalbavancin ['INACTIVE'] [[0.24 0.76]]
705/4496 Rec-15/2615 ['INACTIVE'] [[0.04 0.96]]
706/4496 alpelisib ['INACTIVE'] [[0.05 0.95]]
707/4496 dinaciclib ['INACTIVE'] [[0.09 0.91]]
708/4496 NVP-BHG712 ['INACTIVE'] [[0.05 0.95]]
709/4496 methoxyamine ['INACTIVE'] [[0. 1.]]
710/4496 NSC-663284 ['INACTIVE'] [[0.05 0.95]]
711/4496 eterobarb ['INACTIVE'] [[0.01 0.99]]
712/4496 SJ-172550 ['INACTIVE'] [[0.06 0.94]]
713/4496 ACDPP ['INACTIVE'] [[0.05 0.95]]
714/4496 SBE-13 ['INACTIVE'] [[0.04 0.96]]
715/4496 fursultiamine ['INACTIVE'] [[0.04 0.96]]
716/4496 nadide ['INACTIVE'] [[0.04 0.96]]
717/4496 BIBU-1361 ['INACTIVE'] [[0.08 0.92]]
718/4496 BMS-817378 ['INACTIVE'] [[0.06 0.94]]
719/4496 plicamycin ['INACTIVE'] [[0.08 0.92]]
720/4496 aurora-a-inhibitor-i ['INACTIVE'] [[0.08 0.92]]
721/4496 pevonedistat ['INACTIVE'] [[0.03 0.97]]
722/4496 SB-328437 ['INACTIVE'] [[0.16 0.84]]
723/4496 VUF10460 ['INACTIVE'] [[0.05 0.95]]
724/4496 alexidine ['INACTIVE'] [[0.1 0.9]]
725/4496 AS-604850 ['INACTIVE'] [[0.02 0.98]]
726/4496 CGM097 ['INACTIVE'] [[0.08 0.92]]
727/4496 suritozole ['INACTIVE'] [[0.03 0.97]]
728/4496 PD-161570 ['INACTIVE'] [[0.05 0.95]]
729/4496 DCPIB ['INACTIVE'] [[0.03 0.97]]
730/4496 birinapant ['INACTIVE'] [[0.1 0.9]]
731/4496 XL388 ['INACTIVE'] [[0.1 0.9]]
732/4496 APD668 ['INACTIVE'] [[0.05 0.95]]
733/4496 JNJ-27141491 ['INACTIVE'] [[0.04 0.96]]
734/4496 fenthion ['INACTIVE'] [[0.08 0.92]]
735/4496 raclopride ['INACTIVE'] [[0.16 0.84]]
736/4496 ibudilast ['INACTIVE'] [[0.01 0.99]]
737/4496 2-methyl-5-nitrophenol ['INACTIVE'] [[0.02 0.98]]
738/4496 pralidoxime-chloride ['INACTIVE'] [[0.02 0.98]]
739/4496 cipemastat ['INACTIVE'] [[0.1 0.9]]
740/4496 EMD-53998 ['INACTIVE'] [[0.03 0.97]]
741/4496 malotilate ['INACTIVE'] [[0.03 0.97]]
742/4496 pumosetrag ['INACTIVE'] [[0.03 0.97]]
743/4496 dihydroergotamine ['INACTIVE'] [[0.14 0.86]]
744/4496 carboxypyridine-disulfide ['INACTIVE'] [[0.01 0.99]]
745/4496 benzylpenicillin ['INACTIVE'] [[0.07 0.93]]
746/4496 CFTRinh-172 ['INACTIVE'] [[0.01 0.99]]
747/4496 rosoxacin ['INACTIVE'] [[0.28 0.72]]
748/4496 pepstatin ['INACTIVE'] [[0.1 0.9]]
749/4496 pelitinib ['INACTIVE'] [[0.03 0.97]]
750/4496 epothilone-a ['INACTIVE'] [[0.1 0.9]]
751/4496 MK-2894 ['INACTIVE'] [[0.06 0.94]]
752/4496 regorafenib ['INACTIVE'] [[0.05 0.95]]
753/4496 haloprogin ['INACTIVE'] [[0.13 0.87]]
754/4496 SR-2211 ['INACTIVE'] [[0.08 0.92]]
755/4496 CT-7758 ['INACTIVE'] [[0.07 0.93]]
756/4496 prucalopride ['INACTIVE'] [[0.03 0.97]]
757/4496 amflutizole ['INACTIVE'] [[0.07 0.93]]
758/4496 nemonapride ['INACTIVE'] [[0.17 0.83]]
759/4496 tegobuvir ['INACTIVE'] [[0.08 0.92]]
760/4496 marimastat ['INACTIVE'] [[0.06 0.94]]
761/4496 AC1NDSS5 ['INACTIVE'] [[0.11 0.89]]
762/4496 AMD-3465 ['INACTIVE'] [[0.03 0.97]]
763/4496 arbidol ['INACTIVE'] [[0.08 0.92]]
764/4496 AZD6482 ['INACTIVE'] [[0.13 0.87]]
765/4496 relebactam ['INACTIVE'] [[0.08 0.92]]
766/4496 montelukast ['INACTIVE'] [[0.06 0.94]]
767/4496 ixazomib ['INACTIVE'] [[0.01 0.99]]
768/4496 ACTB-1003 ['INACTIVE'] [[0.11 0.89]]
769/4496 CPSI-1306-(+/-) ['INACTIVE'] [[0.01 0.99]]
770/4496 inositol-hexanicotinate ['INACTIVE'] [[0.01 0.99]]
771/4496 laquinimod ['INACTIVE'] [[0.01 0.99]]
772/4496 ardeparin ['INACTIVE'] [[0.14 0.86]]
773/4496 zoniporide ['INACTIVE'] [[0.09 0.91]]
774/4496 almorexant ['INACTIVE'] [[0.04 0.96]]
775/4496 SKA-31 ['INACTIVE'] [[0.03 0.97]]
776/4496 AT-7519 ['INACTIVE'] [[0.03 0.97]]
777/4496 ataluren ['INACTIVE'] [[0. 1.]]
778/4496 flindokalner ['INACTIVE'] [[0.03 0.97]]
779/4496 telcagepant ['INACTIVE'] [[0.17 0.83]]
780/4496 XL228 ['INACTIVE'] [[0.12 0.88]]
781/4496 2-[1-(4-piperonyl)piperazinyl]benzothiazole ['INACTIVE'] [[0.05 0.95]]
782/4496 AZD9272 ['INACTIVE'] [[0. 1.]]
783/4496 octreotide ['INACTIVE'] [[0.22 0.78]]
784/4496 ZM-306416 ['INACTIVE'] [[0.04 0.96]]
785/4496 phortress ['INACTIVE'] [[0.06 0.94]]
786/4496 EPZ004777 ['INACTIVE'] [[0.1 0.9]]
787/4496 SB-408124 ['INACTIVE'] [[0.03 0.97]]
788/4496 SN-38 ['INACTIVE'] [[0.04 0.96]]
789/4496 SAR405 ['INACTIVE'] [[0.08 0.92]]
790/4496 CJ-033466 ['INACTIVE'] [[0.09 0.91]]
791/4496 buparlisib ['INACTIVE'] [[0.12 0.88]]
792/4496 lazabemide ['INACTIVE'] [[0.01 0.99]]
793/4496 TH588 ['INACTIVE'] [[0.05 0.95]]
794/4496 BMS-806 ['INACTIVE'] [[0.07 0.93]]
795/4496 forodesine ['INACTIVE'] [[0.06 0.94]]
796/4496 valnemulin ['INACTIVE'] [[0.03 0.97]]
797/4496 FG-2216 ['INACTIVE'] [[0.02 0.98]]
798/4496 spirobromin ['INACTIVE'] [[0.07 0.93]]
799/4496 CP-339818 ['INACTIVE'] [[0.08 0.92]]
800/4496 EMD-66684 ['INACTIVE'] [[0.12 0.88]]
801/4496 lasalocid ['INACTIVE'] [[0.07 0.93]]
802/4496 INCB-024360 ['INACTIVE'] [[0.02 0.98]]
803/4496 pexidartinib ['INACTIVE'] [[0.06 0.94]]
804/4496 anidulafungin ['INACTIVE'] [[0.16 0.84]]
805/4496 betahistine ['INACTIVE'] [[0. 1.]]
806/4496 amrubicin ['INACTIVE'] [[0.04 0.96]]
807/4496 7-chlorokynurenic-acid ['INACTIVE'] [[0.03 0.97]]
808/4496 PK-44 ['INACTIVE'] [[0.12 0.88]]
809/4496 GBR-12935 ['INACTIVE'] [[0.03 0.97]]
810/4496 GSK-2837808A ['INACTIVE'] [[0.05 0.95]]
811/4496 VU10010 ['INACTIVE'] [[0.05 0.95]]
812/4496 NQDI-1 ['INACTIVE'] [[0.04 0.96]]
813/4496 BMS-387032 ['INACTIVE'] [[0.02 0.98]]
814/4496 sulbactam-pivoxil ['INACTIVE'] [[0.01 0.99]]
815/4496 fidaxomicin ['INACTIVE'] [[0.19 0.81]]
816/4496 AF38469 ['INACTIVE'] [[0.01 0.99]]
817/4496 pipecuronium ['INACTIVE'] [[0.16 0.84]]
818/4496 ispinesib ['INACTIVE'] [[0.07 0.93]]
819/4496 MLN0128 ['INACTIVE'] [[0.04 0.96]]
820/4496 MJ-15 ['INACTIVE'] [[0.03 0.97]]
821/4496 LDN-209929 ['INACTIVE'] [[0.01 0.99]]
822/4496 josamycin ['INACTIVE'] [[0.04 0.96]]
823/4496 TEPP-46 ['INACTIVE'] [[0.09 0.91]]
824/4496 KH-CB19 ['INACTIVE'] [[0.04 0.96]]
825/4496 guanfacine ['INACTIVE'] [[0.03 0.97]]
826/4496 yoda-1 ['INACTIVE'] [[0.07 0.93]]
827/4496 plerixafor ['INACTIVE'] [[0.01 0.99]]
828/4496 KRCA-0008 ['INACTIVE'] [[0.15 0.85]]
829/4496 epothilone-b ['INACTIVE'] [[0.1 0.9]]
830/4496 tasuldine ['INACTIVE'] [[0.06 0.94]]
831/4496 NPPB ['INACTIVE'] [[0.02 0.98]]
832/4496 BTB1 ['INACTIVE'] [[0.03 0.97]]
833/4496 spautin-1 ['INACTIVE'] [[0.03 0.97]]
834/4496 WAY-100635 ['INACTIVE'] [[0.08 0.92]]
835/4496 pleconaril ['INACTIVE'] [[0.05 0.95]]
836/4496 ascomycin ['INACTIVE'] [[0.04 0.96]]
837/4496 ethotoin ['INACTIVE'] [[0.03 0.97]]
838/4496 metrizamide ['INACTIVE'] [[0.05 0.95]]
839/4496 fimasartan ['INACTIVE'] [[0.03 0.97]]
840/4496 lonidamine ['INACTIVE'] [[0.06 0.94]]
841/4496 HKI-357 ['INACTIVE'] [[0.1 0.9]]
842/4496 arotinolol ['INACTIVE'] [[0.04 0.96]]
843/4496 SB-747651A ['INACTIVE'] [[0.02 0.98]]
844/4496 benznidazole ['INACTIVE'] [[0.07 0.93]]
845/4496 MK2-IN-1 ['INACTIVE'] [[0.05 0.95]]
846/4496 midecamycin ['INACTIVE'] [[0.12 0.88]]
847/4496 SMI-4a ['INACTIVE'] [[0. 1.]]
848/4496 rociletinib ['INACTIVE'] [[0.12 0.88]]
849/4496 lornoxicam ['INACTIVE'] [[0.04 0.96]]
850/4496 A-784168 ['INACTIVE'] [[0.05 0.95]]
851/4496 JNJ-37822681 ['INACTIVE'] [[0.05 0.95]]
852/4496 bicyclol ['INACTIVE'] [[0.02 0.98]]
853/4496 FK-33-824 ['INACTIVE'] [[0.08 0.92]]
854/4496 ki16198 ['INACTIVE'] [[0.05 0.95]]
855/4496 nepicastat ['INACTIVE'] [[0.02 0.98]]
856/4496 TC-G-1004 ['INACTIVE'] [[0.08 0.92]]
857/4496 ARV-825 ['INACTIVE'] [[0.16 0.84]]
858/4496 pemirolast ['INACTIVE'] [[0.03 0.97]]
859/4496 fluphenazine-decanoate ['INACTIVE'] [[0.01 0.99]]
860/4496 TC-LPA5-4 ['INACTIVE'] [[0.02 0.98]]
861/4496 LY3000328 ['INACTIVE'] [[0.09 0.91]]
862/4496 PF-8380 ['INACTIVE'] [[0.07 0.93]]
863/4496 laropiprant ['INACTIVE'] [[0.03 0.97]]
864/4496 myristyl-nicotinate ['INACTIVE'] [[0.17 0.83]]
865/4496 CEP-37440 ['INACTIVE'] [[0.05 0.95]]
866/4496 CGP-54626 ['INACTIVE'] [[0.09 0.91]]
867/4496 2-CMDO ['INACTIVE'] [[0.03 0.97]]
868/4496 antimycin-a ['INACTIVE'] [[0. 1.]]
869/4496 GW-583340 ['INACTIVE'] [[0.08 0.92]]
870/4496 TBA-354 ['INACTIVE'] [[0.06 0.94]]
871/4496 1-naphthyl-PP1 ['INACTIVE'] [[0.03 0.97]]
872/4496 eticlopride ['INACTIVE'] [[0.05 0.95]]
873/4496 enzalutamide ['INACTIVE'] [[0.07 0.93]]
874/4496 MLN-8054 ['INACTIVE'] [[0.08 0.92]]
875/4496 miriplatin ['INACTIVE'] [[0.07 0.93]]
876/4496 AMG-487-(+/-) ['INACTIVE'] [[0.06 0.94]]
877/4496 3-deazaneplanocin-A ['INACTIVE'] [[0.09 0.91]]
878/4496 DQP-1105 ['INACTIVE'] [[0.06 0.94]]
879/4496 AMG-487 ['INACTIVE'] [[0.06 0.94]]
880/4496 AZD8931 ['INACTIVE'] [[0.03 0.97]]
881/4496 SD-208 ['INACTIVE'] [[0.02 0.98]]
882/4496 pexmetinib ['INACTIVE'] [[0.11 0.89]]
883/4496 2-Chloropyrazine ['INACTIVE'] [[0. 1.]]
884/4496 AMG900 ['INACTIVE'] [[0.12 0.88]]
885/4496 BMS-587101 ['INACTIVE'] [[0.14 0.86]]
886/4496 mesulergine ['INACTIVE'] [[0.09 0.91]]
887/4496 adoprazine ['INACTIVE'] [[0.11 0.89]]
888/4496 GSK2190915 ['INACTIVE'] [[0.07 0.93]]
889/4496 xanomeline ['INACTIVE'] [[0.04 0.96]]
890/4496 SANT-1 ['INACTIVE'] [[0.06 0.94]]
891/4496 FR-139317 ['INACTIVE'] [[0.08 0.92]]
892/4496 tripelennamine ['INACTIVE'] [[0. 1.]]
893/4496 TP-003 ['INACTIVE'] [[0.03 0.97]]
894/4496 dipivefrine ['INACTIVE'] [[0.03 0.97]]
895/4496 fostamatinib ['INACTIVE'] [[0.1 0.9]]
896/4496 KHK-IN-1 ['INACTIVE'] [[0.1 0.9]]
897/4496 GSK256066 ['INACTIVE'] [[0.1 0.9]]
898/4496 fingolimod ['INACTIVE'] [[0.03 0.97]]
899/4496 K145 ['INACTIVE'] [[0.01 0.99]]
900/4496 WZ8040 ['INACTIVE'] [[0.04 0.96]]
901/4496 PNU-89843 ['INACTIVE'] [[0.11 0.89]]
902/4496 lometrexol ['INACTIVE'] [[0.1 0.9]]
903/4496 alisertib ['INACTIVE'] [[0.1 0.9]]
904/4496 eltoprazine ['INACTIVE'] [[0.05 0.95]]
905/4496 SB-743921 ['INACTIVE'] [[0.05 0.95]]
906/4496 GSK-2193874 ['INACTIVE'] [[0.14 0.86]]
907/4496 fluoromethylcholine ['INACTIVE'] [[0.01 0.99]]
908/4496 ansamitocin-p-3 ['INACTIVE'] [[0.11 0.89]]
909/4496 SR-57227A ['INACTIVE'] [[0.08 0.92]]
910/4496 GR-127935 ['INACTIVE'] [[0.05 0.95]]
911/4496 org-26576 ['INACTIVE'] [[0.03 0.97]]
912/4496 MBX-2982 ['INACTIVE'] [[0.13 0.87]]
913/4496 GS-6201 ['INACTIVE'] [[0.09 0.91]]
914/4496 TMC647055 ['INACTIVE'] [[0.13 0.87]]
915/4496 SYM-2206 ['INACTIVE'] [[0.02 0.98]]
916/4496 CF102 ['INACTIVE'] [[0.02 0.98]]
917/4496 EGF816 ['INACTIVE'] [[0.06 0.94]]
918/4496 lorediplon ['INACTIVE'] [[0.06 0.94]]
919/4496 NADPH ['INACTIVE'] [[0.24 0.76]]
920/4496 INCB-003284 ['INACTIVE'] [[0.05 0.95]]
921/4496 CCT129202 ['INACTIVE'] [[0.06 0.94]]
922/4496 AZD3988 ['INACTIVE'] [[0.08 0.92]]
923/4496 maytansinol-isobutyrate ['INACTIVE'] [[0.11 0.89]]
924/4496 WZ4003 ['INACTIVE'] [[0.03 0.97]]
925/4496 tivozanib ['INACTIVE'] [[0.06 0.94]]
926/4496 efinaconazole ['INACTIVE'] [[0.03 0.97]]
927/4496 TH-302 ['INACTIVE'] [[0.09 0.91]]
928/4496 chlorophyllide-cu-complex-na-salt ['INACTIVE'] [[0.12 0.88]]
929/4496 IWP-L6 ['INACTIVE'] [[0.06 0.94]]
930/4496 tenilsetam ['INACTIVE'] [[0.02 0.98]]
931/4496 RWJ-21757 ['INACTIVE'] [[0.1 0.9]]
932/4496 CDK9-IN-6 ['INACTIVE'] [[0.06 0.94]]
933/4496 rapastinel ['INACTIVE'] [[0.06 0.94]]
934/4496 GPI-1046 ['INACTIVE'] [[0.06 0.94]]
935/4496 INCB-3284 ['INACTIVE'] [[0.05 0.95]]
936/4496 poziotinib ['INACTIVE'] [[0.06 0.94]]
937/4496 GS-9620 ['INACTIVE'] [[0.07 0.93]]
938/4496 OICR-9429 ['INACTIVE'] [[0.12 0.88]]
939/4496 AZ-10417808 ['INACTIVE'] [[0.11 0.89]]
940/4496 CEP-33779 ['INACTIVE'] [[0.05 0.95]]
941/4496 carboxylosartan ['INACTIVE'] [[0.03 0.97]]
942/4496 TC-S-7004 ['INACTIVE'] [[0.03 0.97]]
943/4496 bafetinib ['INACTIVE'] [[0.08 0.92]]
944/4496 GSK-1562590 ['INACTIVE'] [[0.12 0.88]]
945/4496 ginkgolide-b ['INACTIVE'] [[0.03 0.97]]
946/4496 SCH-900776 ['INACTIVE'] [[0.16 0.84]]
947/4496 idarubicin ['INACTIVE'] [[0.04 0.96]]
948/4496 tonabersat ['INACTIVE'] [[0.02 0.98]]
949/4496 cot-inhibitor-1 ['INACTIVE'] [[0.12 0.88]]
950/4496 H2L-5765834 ['INACTIVE'] [[0.12 0.88]]
951/4496 1,2,3,4,5,6-hexabromocyclohexane ['INACTIVE'] [[0.03 0.97]]
952/4496 miocamycin ['INACTIVE'] [[0.13 0.87]]
953/4496 linagliptin ['INACTIVE'] [[0.15 0.85]]
954/4496 LOXO-101 ['INACTIVE'] [[0.09 0.91]]
955/4496 fenoverine ['INACTIVE'] [[0.01 0.99]]
956/4496 NVP-TAE684 ['INACTIVE'] [[0.1 0.9]]
957/4496 LY255283 ['INACTIVE'] [[0.05 0.95]]
958/4496 NMS-P715 ['INACTIVE'] [[0.09 0.91]]
959/4496 PD-81723 ['INACTIVE'] [[0. 1.]]
960/4496 XL-647 ['INACTIVE'] [[0.04 0.96]]
961/4496 BMS-CCR2-22 ['INACTIVE'] [[0.08 0.92]]
962/4496 acivicin ['INACTIVE'] [[0.02 0.98]]
963/4496 GW-843682X ['INACTIVE'] [[0.07 0.93]]
964/4496 perchlozone ['INACTIVE'] [[0.07 0.93]]
965/4496 imidapril ['INACTIVE'] [[0.03 0.97]]
966/4496 JTE-013 ['INACTIVE'] [[0.07 0.93]]
967/4496 homoquinolinic-acid ['INACTIVE'] [[0. 1.]]
968/4496 GW-0742 ['INACTIVE'] [[0.07 0.93]]
969/4496 lorglumide ['INACTIVE'] [[0.04 0.96]]
970/4496 ARC-239 ['INACTIVE'] [[0.02 0.98]]
971/4496 WIKI4 ['INACTIVE'] [[0.07 0.93]]
972/4496 CNX-774 ['INACTIVE'] [[0.09 0.91]]
973/4496 PRT062607 ['INACTIVE'] [[0.03 0.97]]
974/4496 bis(maltolato)oxovanadium(IV) ['INACTIVE'] [[0.01 0.99]]
975/4496 Wy-16922 ['INACTIVE'] [[0. 1.]]
976/4496 talampanel ['INACTIVE'] [[0.05 0.95]]
977/4496 BIBX-1382 ['INACTIVE'] [[0.05 0.95]]
978/4496 iohexol ['INACTIVE'] [[0. 1.]]
979/4496 teicoplanin ['INACTIVE'] [[0.13 0.87]]
980/4496 teicoplanin-a2-3 ['INACTIVE'] [[0.18 0.82]]
981/4496 foretinib ['INACTIVE'] [[0.08 0.92]]
982/4496 WZ-4002 ['INACTIVE'] [[0.04 0.96]]
983/4496 PACOCF3 ['INACTIVE'] [[0.05 0.95]]
984/4496 chiniofon ['INACTIVE'] [[0.08 0.92]]
985/4496 articaine ['INACTIVE'] [[0.04 0.96]]
986/4496 taltobulin ['INACTIVE'] [[0.04 0.96]]
987/4496 carbenicillin ['INACTIVE'] [[0.1 0.9]]
988/4496 LY2857785 ['INACTIVE'] [[0.01 0.99]]
989/4496 neratinib ['INACTIVE'] [[0.07 0.93]]
990/4496 CP-945,598 ['INACTIVE'] [[0.12 0.88]]
991/4496 afloqualone ['INACTIVE'] [[0.01 0.99]]
992/4496 zuclopenthixol ['INACTIVE'] [[0.05 0.95]]
993/4496 brivanib ['INACTIVE'] [[0.09 0.91]]
994/4496 amonafide ['INACTIVE'] [[0.07 0.93]]
995/4496 ACT-132577 ['INACTIVE'] [[0.01 0.99]]
996/4496 enzastaurin ['INACTIVE'] [[0.09 0.91]]
997/4496 norfluoxetine ['INACTIVE'] [[0.02 0.98]]
998/4496 mdivi-1 ['INACTIVE'] [[0.08 0.92]]
999/4496 LY2886721 ['INACTIVE'] [[0.1 0.9]]
1000/4496 SirReal-2 ['INACTIVE'] [[0.05 0.95]]
1001/4496 deforolimus ['INACTIVE'] [[0.11 0.89]]
1002/4496 omeprazole-magnesium ['INACTIVE'] [[0.08 0.92]]
1003/4496 3-deazauridine ['INACTIVE'] [[0.04 0.96]]
1004/4496 JZL-184 ['INACTIVE'] [[0.06 0.94]]
1005/4496 guanosine ['INACTIVE'] [[0.11 0.89]]
1006/4496 1E-1-(2-hydroxy-5-methylphenyl)-1-dodecanone oxime ['INACTIVE'] [[0.05 0.95]]
1007/4496 4E1RCat ['INACTIVE'] [[0.07 0.93]]
1008/4496 NSC-697923 ['INACTIVE'] [[0.09 0.91]]
1009/4496 propidium-iodide ['INACTIVE'] [[0.07 0.93]]
1010/4496 PD-318088 ['INACTIVE'] [[0.01 0.99]]
1011/4496 AT13387 ['INACTIVE'] [[0.04 0.96]]
1012/4496 evans-blue ['INACTIVE'] [[0.05 0.95]]
1013/4496 A12B4C3 ['INACTIVE'] [[0.1 0.9]]
1014/4496 VER-155008 ['INACTIVE'] [[0.15 0.85]]
1015/4496 naphthoquine-phosphate ['INACTIVE'] [[0.03 0.97]]
1016/4496 sotrastaurin ['INACTIVE'] [[0.18 0.82]]
1017/4496 MRK-016 ['INACTIVE'] [[0.09 0.91]]
1018/4496 AZD6765 ['INACTIVE'] [[0. 1.]]
1019/4496 CGS-15943 ['INACTIVE'] [[0.07 0.93]]
1020/4496 ESI-09 ['INACTIVE'] [[0.05 0.95]]
1021/4496 GSK2256294A ['INACTIVE'] [[0.06 0.94]]
1022/4496 rupatadine ['INACTIVE'] [[0.03 0.97]]
1023/4496 PSB-36 ['INACTIVE'] [[0.1 0.9]]
1024/4496 tideglusib ['INACTIVE'] [[0.04 0.96]]
1025/4496 TMN-355 ['INACTIVE'] [[0.05 0.95]]
1026/4496 amperozide ['INACTIVE'] [[0.01 0.99]]
1027/4496 K-Ras(G12C)-inhibitor-12 ['INACTIVE'] [[0.05 0.95]]
1028/4496 ER-27319 ['INACTIVE'] [[0.01 0.99]]
1029/4496 KI-16425 ['INACTIVE'] [[0.03 0.97]]
1030/4496 2,3-DCPE ['INACTIVE'] [[0.03 0.97]]
1031/4496 nastorazepide ['INACTIVE'] [[0.13 0.87]]
1032/4496 cefathiamidine ['INACTIVE'] [[0.26 0.74]]
1033/4496 L-701252 ['INACTIVE'] [[0.05 0.95]]
1034/4496 SB-221284 ['INACTIVE'] [[0.09 0.91]]
1035/4496 ceritinib ['INACTIVE'] [[0.06 0.94]]
1036/4496 cot-inhibitor-2 ['INACTIVE'] [[0.16 0.84]]
1037/4496 iopromide ['INACTIVE'] [[0.03 0.97]]
1038/4496 AZD2858 ['INACTIVE'] [[0.06 0.94]]
1039/4496 CW-008 ['INACTIVE'] [[0.03 0.97]]
1040/4496 uracil-mustard ['INACTIVE'] [[0.03 0.97]]
1041/4496 PF-03814735 ['INACTIVE'] [[0.03 0.97]]
1042/4496 VU1545 ['INACTIVE'] [[0.11 0.89]]
1043/4496 MK-3697 ['INACTIVE'] [[0.01 0.99]]
1044/4496 temocapril ['INACTIVE'] [[0.06 0.94]]
1045/4496 PF-4800567 ['INACTIVE'] [[0.02 0.98]]
1046/4496 SB-228357 ['INACTIVE'] [[0.07 0.93]]
1047/4496 ezatiostat ['INACTIVE'] [[0.06 0.94]]
1048/4496 pirarubicin ['INACTIVE'] [[0.09 0.91]]
1049/4496 dofequidar ['INACTIVE'] [[0.06 0.94]]
1050/4496 nitisinone ['INACTIVE'] [[0.04 0.96]]
1051/4496 zotepine ['INACTIVE'] [[0.01 0.99]]
1052/4496 STA-5326 ['INACTIVE'] [[0.05 0.95]]
1053/4496 LB-100 ['INACTIVE'] [[0.09 0.91]]
1054/4496 BS-181 ['INACTIVE'] [[0.1 0.9]]
1055/4496 erastin ['INACTIVE'] [[0.01 0.99]]
1056/4496 macitentan ['INACTIVE'] [[0.02 0.98]]
1057/4496 CITCO ['INACTIVE'] [[0.05 0.95]]
1058/4496 dextran ['INACTIVE'] [[0.03 0.97]]
1059/4496 PF-4708671 ['INACTIVE'] [[0.06 0.94]]
1060/4496 cidofovir ['INACTIVE'] [[0.04 0.96]]
1061/4496 venetoclax ['INACTIVE'] [[0.15 0.85]]
1062/4496 JW-67 ['INACTIVE'] [[0.05 0.95]]
1063/4496 dexloxiglumide ['INACTIVE'] [[0.06 0.94]]
1064/4496 5'-Chloro-5'-deoxy-ENBA-(+/-) ['INACTIVE'] [[0.04 0.96]]
1065/4496 ML-298 ['INACTIVE'] [[0.05 0.95]]
1066/4496 JNJ-47965567 ['INACTIVE'] [[0.11 0.89]]
1067/4496 anpirtoline ['INACTIVE'] [[0.04 0.96]]
1068/4496 embelin ['INACTIVE'] [[0.01 0.99]]
1069/4496 UNC0321 ['INACTIVE'] [[0.16 0.84]]
1070/4496 silodosin ['INACTIVE'] [[0.12 0.88]]
1071/4496 5-methylhydantoin-(L) ['INACTIVE'] [[0. 1.]]
1072/4496 PF-03758309 ['INACTIVE'] [[0.11 0.89]]
1073/4496 setipiprant ['INACTIVE'] [[0.08 0.92]]
1074/4496 LPA2-antagonist-1 ['INACTIVE'] [[0.12 0.88]]
1075/4496 WZ-3146 ['INACTIVE'] [[0.02 0.98]]
1076/4496 TAK-632 ['INACTIVE'] [[0.06 0.94]]
1077/4496 temsirolimus ['INACTIVE'] [[0.1 0.9]]
1078/4496 R547 ['INACTIVE'] [[0.06 0.94]]
1079/4496 CHF5074 ['INACTIVE'] [[0.03 0.97]]
1080/4496 7-nitroindazole ['INACTIVE'] [[0.07 0.93]]
1081/4496 andarine ['INACTIVE'] [[0.04 0.96]]
1082/4496 chicago-sky-blue-6b ['INACTIVE'] [[0.08 0.92]]
1083/4496 NMS-E973 ['INACTIVE'] [[0.11 0.89]]
1084/4496 AS-703026 ['INACTIVE'] [[0.02 0.98]]
1085/4496 etofylline-clofibrate ['INACTIVE'] [[0.03 0.97]]
1086/4496 Q-203 ['INACTIVE'] [[0.07 0.93]]
1087/4496 latrepirdine ['INACTIVE'] [[0.07 0.93]]
1088/4496 CYM-50358 ['INACTIVE'] [[0. 1.]]
1089/4496 aurothioglucose ['INACTIVE'] [[0.03 0.97]]
1090/4496 TRV130 ['INACTIVE'] [[0.05 0.95]]
1091/4496 bifendate ['INACTIVE'] [[0.04 0.96]]
1092/4496 PS178990 ['INACTIVE'] [[0.05 0.95]]
1093/4496 5-methylhydantoin-(D) ['INACTIVE'] [[0. 1.]]
1094/4496 floctafenine ['INACTIVE'] [[0.05 0.95]]
1095/4496 CCG-50014 ['INACTIVE'] [[0.03 0.97]]
1096/4496 zacopride ['INACTIVE'] [[0.05 0.95]]
1097/4496 acenocoumarol ['INACTIVE'] [[0.08 0.92]]
1098/4496 UNBS-5162 ['INACTIVE'] [[0.04 0.96]]
1099/4496 GDC-0879 ['INACTIVE'] [[0.05 0.95]]
1100/4496 picartamide ['INACTIVE'] [[0.01 0.99]]
1101/4496 PIK-75 ['INACTIVE'] [[0.1 0.9]]
1102/4496 tipifarnib ['INACTIVE'] [[0.03 0.97]]
1103/4496 L-655240 ['INACTIVE'] [[0.08 0.92]]
1104/4496 PSN-375963 ['INACTIVE'] [[0.06 0.94]]
1105/4496 CGP-55845 ['INACTIVE'] [[0.07 0.93]]
1106/4496 4-chlorophenylguanidine ['INACTIVE'] [[0.01 0.99]]
1107/4496 EPZ-5676 ['INACTIVE'] [[0.08 0.92]]
1108/4496 everolimus ['INACTIVE'] [[0.08 0.92]]
1109/4496 sorbitan-monostearate ['INACTIVE'] [[0.08 0.92]]
1110/4496 CYM-5442 ['INACTIVE'] [[0.06 0.94]]
1111/4496 ABT-737 ['INACTIVE'] [[0.06 0.94]]
1112/4496 VX-11e ['INACTIVE'] [[0.13 0.87]]
1113/4496 nTZDpa ['INACTIVE'] [[0.05 0.95]]
1114/4496 oglemilast ['INACTIVE'] [[0.06 0.94]]
1115/4496 HTH-01-015 ['INACTIVE'] [[0.08 0.92]]
1116/4496 niceritrol ['INACTIVE'] [[0.01 0.99]]
1117/4496 capadenoson ['INACTIVE'] [[0.09 0.91]]
1118/4496 midostaurin ['INACTIVE'] [[0.07 0.93]]
1119/4496 R-268712 ['INACTIVE'] [[0.06 0.94]]
1120/4496 VT-464 ['INACTIVE'] [[0.02 0.98]]
1121/4496 SUN-B-8155 ['INACTIVE'] [[0.08 0.92]]
1122/4496 clofilium ['INACTIVE'] [[0.13 0.87]]
1123/4496 rosamicin ['INACTIVE'] [[0.06 0.94]]
1124/4496 toyocamycin ['INACTIVE'] [[0.08 0.92]]
1125/4496 SB-268262 ['INACTIVE'] [[0.07 0.93]]
1126/4496 NS-018 ['INACTIVE'] [[0.08 0.92]]
1127/4496 ELN-441958 ['INACTIVE'] [[0.11 0.89]]
1128/4496 dimetindene ['INACTIVE'] [[0.08 0.92]]
1129/4496 UBP-296 ['INACTIVE'] [[0.08 0.92]]
1130/4496 CU-T12-9 ['INACTIVE'] [[0.03 0.97]]
1131/4496 L-733060 ['INACTIVE'] [[0.04 0.96]]
1132/4496 UBP-302 ['INACTIVE'] [[0.08 0.92]]
1133/4496 cabozantinib ['INACTIVE'] [[0.05 0.95]]
1134/4496 MK-571 ['INACTIVE'] [[0.03 0.97]]
1135/4496 diazooxonorleucine ['INACTIVE'] [[0.02 0.98]]
1136/4496 buclizine ['INACTIVE'] [[0.02 0.98]]
1137/4496 BCI-540 ['INACTIVE'] [[0.02 0.98]]
1138/4496 tetrahydrofolic-acid ['INACTIVE'] [[0.1 0.9]]
1139/4496 XMD17-109 ['INACTIVE'] [[0.08 0.92]]
1140/4496 PP-2 ['INACTIVE'] [[0.01 0.99]]
1141/4496 CX-4945 ['INACTIVE'] [[0.02 0.98]]
1142/4496 SA-47 ['INACTIVE'] [[0.03 0.97]]
1143/4496 MNS-(3,4-Methylenedioxy-nitrostyrene) ['INACTIVE'] [[0.04 0.96]]
1144/4496 WH-4-023 ['INACTIVE'] [[0.12 0.88]]
1145/4496 CP-376395 ['INACTIVE'] [[0.06 0.94]]
1146/4496 GW-4064 ['INACTIVE'] [[0.07 0.93]]
1147/4496 penciclovir ['INACTIVE'] [[0.01 0.99]]
1148/4496 SN-6 ['INACTIVE'] [[0.05 0.95]]
1149/4496 lidoflazine ['INACTIVE'] [[0.02 0.98]]
1150/4496 LDN-212854 ['INACTIVE'] [[0.14 0.86]]
1151/4496 7-methoxytacrine ['INACTIVE'] [[0.01 0.99]]
1152/4496 BI-2536 ['INACTIVE'] [[0.06 0.94]]
1153/4496 sucrose ['INACTIVE'] [[0.02 0.98]]
1154/4496 rose-bengal ['INACTIVE'] [[0.05 0.95]]
1155/4496 BTB06584 ['INACTIVE'] [[0.05 0.95]]
1156/4496 AZD3514 ['INACTIVE'] [[0.18 0.82]]
1157/4496 dimethindene-(S)-(+) ['INACTIVE'] [[0.08 0.92]]
1158/4496 TC-SP-14 ['INACTIVE'] [[0.04 0.96]]
1159/4496 avibactam ['INACTIVE'] [[0.05 0.95]]
1160/4496 tozasertib ['INACTIVE'] [[0.07 0.93]]
1161/4496 pirinixic-acid ['INACTIVE'] [[0.02 0.98]]
1162/4496 EUK-134 ['INACTIVE'] [[0.04 0.96]]
1163/4496 CU-CPT-4a ['INACTIVE'] [[0.01 0.99]]
1164/4496 DR-4485 ['INACTIVE'] [[0.02 0.98]]
1165/4496 Gue-1654 ['INACTIVE'] [[0.07 0.93]]
1166/4496 BMS-599626 ['INACTIVE'] [[0.13 0.87]]
1167/4496 AGI-5198 ['INACTIVE'] [[0.02 0.98]]
1168/4496 bretylium ['INACTIVE'] [[0.05 0.95]]
1169/4496 preladenant ['INACTIVE'] [[0.12 0.88]]
1170/4496 BIRT-377 ['INACTIVE'] [[0.04 0.96]]
1171/4496 SRPIN340 ['INACTIVE'] [[0. 1.]]
1172/4496 PK-11195 ['INACTIVE'] [[0.08 0.92]]
1173/4496 GR-79236 ['INACTIVE'] [[0.06 0.94]]
1174/4496 prochlorperazine ['INACTIVE'] [[0. 1.]]
1175/4496 tafamidis-meglumine ['INACTIVE'] [[0.03 0.97]]
1176/4496 UNC0224 ['INACTIVE'] [[0.14 0.86]]
1177/4496 CUDC-907 ['INACTIVE'] [[0.06 0.94]]
1178/4496 azatadine ['INACTIVE'] [[0.01 0.99]]
1179/4496 carfilzomib ['INACTIVE'] [[0.03 0.97]]
1180/4496 nizofenone ['INACTIVE'] [[0.04 0.96]]
1181/4496 sucralfate ['INACTIVE'] [[0.08 0.92]]
1182/4496 CAY10505 ['INACTIVE'] [[0.05 0.95]]
1183/4496 HA-130 ['INACTIVE'] [[0.01 0.99]]
1184/4496 isofloxythepin ['INACTIVE'] [[0.03 0.97]]
1185/4496 RTA-408 ['INACTIVE'] [[0.07 0.93]]
1186/4496 lestaurtinib ['INACTIVE'] [[0.08 0.92]]
1187/4496 ambenonium ['INACTIVE'] [[0.04 0.96]]
1188/4496 pirenoxine ['INACTIVE'] [[0.07 0.93]]
1189/4496 KD-023 ['INACTIVE'] [[0.1 0.9]]
1190/4496 nilotinib ['INACTIVE'] [[0.11 0.89]]
1191/4496 Lu-AA-47070 ['INACTIVE'] [[0.07 0.93]]
1192/4496 abametapir ['INACTIVE'] [[0.04 0.96]]
1193/4496 1-hexadecanal ['INACTIVE'] [[0.13 0.87]]
1194/4496 BMS-191011 ['INACTIVE'] [[0.03 0.97]]
1195/4496 PIK-293 ['INACTIVE'] [[0.05 0.95]]
1196/4496 GW-9662 ['INACTIVE'] [[0.03 0.97]]
1197/4496 R-59022 ['INACTIVE'] [[0. 1.]]
1198/4496 CYT-997 ['INACTIVE'] [[0.1 0.9]]
1199/4496 CR8-(R) ['INACTIVE'] [[0.03 0.97]]
1200/4496 CL-218872 ['INACTIVE'] [[0.08 0.92]]
1201/4496 TC-S-7003 ['INACTIVE'] [[0.08 0.92]]
1202/4496 nolatrexed ['INACTIVE'] [[0.07 0.93]]
1203/4496 iCRT-14 ['INACTIVE'] [[0.02 0.98]]
1204/4496 AZD5363 ['INACTIVE'] [[0.08 0.92]]
1205/4496 PF-04620110 ['INACTIVE'] [[0.11 0.89]]
1206/4496 PD-0325901 ['INACTIVE'] [[0.01 0.99]]
1207/4496 muscimol ['INACTIVE'] [[0.01 0.99]]
1208/4496 ANR-94 ['INACTIVE'] [[0.01 0.99]]
1209/4496 dehydrocorydaline ['INACTIVE'] [[0.03 0.97]]
1210/4496 UNC0642 ['INACTIVE'] [[0.14 0.86]]
1211/4496 CID-2745687 ['INACTIVE'] [[0.04 0.96]]
1212/4496 compound-w ['INACTIVE'] [[0.05 0.95]]
1213/4496 TC-OT-39 ['INACTIVE'] [[0.11 0.89]]
1214/4496 vipadenant ['INACTIVE'] [[0.08 0.92]]
1215/4496 nintedanib ['INACTIVE'] [[0.02 0.98]]
1216/4496 cromoglicic-acid ['INACTIVE'] [[0.03 0.97]]
1217/4496 sunitinib ['INACTIVE'] [[0.05 0.95]]
1218/4496 NGD-98-2 ['INACTIVE'] [[0.07 0.93]]
1219/4496 diethylcarbamazine ['INACTIVE'] [[0.03 0.97]]
1220/4496 foxy-5 ['INACTIVE'] [[0.07 0.93]]
1221/4496 PP-1 ['INACTIVE'] [[0.02 0.98]]
1222/4496 UCL-2077 ['INACTIVE'] [[0.01 0.99]]
1223/4496 AMG-PERK-44 ['INACTIVE'] [[0.1 0.9]]
1224/4496 CGP-7930 ['INACTIVE'] [[0.02 0.98]]
1225/4496 CI-976 ['INACTIVE'] [[0.03 0.97]]
1226/4496 nitecapone ['INACTIVE'] [[0.03 0.97]]
1227/4496 KI-20227 ['INACTIVE'] [[0.07 0.93]]
1228/4496 mofegiline ['INACTIVE'] [[0. 1.]]
1229/4496 danirixin ['INACTIVE'] [[0.07 0.93]]
1230/4496 CB-5083 ['INACTIVE'] [[0.07 0.93]]
1231/4496 resiquimod ['INACTIVE'] [[0.06 0.94]]
1232/4496 benzamil ['INACTIVE'] [[0.1 0.9]]
1233/4496 PF-670462 ['INACTIVE'] [[0.04 0.96]]
1234/4496 apremilast ['INACTIVE'] [[0.07 0.93]]
1235/4496 tenofovir-disoproxil ['INACTIVE'] [[0.02 0.98]]
1236/4496 VU0360172 ['INACTIVE'] [[0.03 0.97]]
1237/4496 filgotinib ['INACTIVE'] [[0.06 0.94]]
1238/4496 BQ-788 ['INACTIVE'] [[0.15 0.85]]
1239/4496 SEW-2871 ['INACTIVE'] [[0.01 0.99]]
1240/4496 GSK1292263 ['INACTIVE'] [[0.1 0.9]]
1241/4496 clorotepine ['INACTIVE'] [[0.02 0.98]]
1242/4496 5-methylfurmethiodide ['INACTIVE'] [[0.01 0.99]]
1243/4496 mubritinib ['INACTIVE'] [[0.04 0.96]]
1244/4496 indatraline ['INACTIVE'] [[0.03 0.97]]
1245/4496 pyrvinium-pamoate ['INACTIVE'] [[0.08 0.92]]
1246/4496 SB-297006 ['INACTIVE'] [[0.12 0.88]]
1247/4496 SKLB1002 ['INACTIVE'] [[0.02 0.98]]
1248/4496 NSC-23766 ['INACTIVE'] [[0.08 0.92]]
1249/4496 JNJ-26481585 ['INACTIVE'] [[0.08 0.92]]
1250/4496 TC-S-7005 ['INACTIVE'] [[0.1 0.9]]
1251/4496 LCQ908 ['INACTIVE'] [[0.06 0.94]]
1252/4496 thiethylperazine ['INACTIVE'] [[0.03 0.97]]
1253/4496 V-51 ['INACTIVE'] [[0.03 0.97]]
1254/4496 etofylline ['INACTIVE'] [[0. 1.]]
1255/4496 GW-501516 ['INACTIVE'] [[0.06 0.94]]
1256/4496 NSC5844 ['INACTIVE'] [[0.06 0.94]]
1257/4496 BVT-2733 ['INACTIVE'] [[0.03 0.97]]
1258/4496 SB-218078 ['INACTIVE'] [[0.05 0.95]]
1259/4496 PHA-665752 ['INACTIVE'] [[0.06 0.94]]
1260/4496 RS-67333 ['INACTIVE'] [[0.01 0.99]]
1261/4496 antalarmin ['INACTIVE'] [[0.06 0.94]]
1262/4496 bedaquiline ['INACTIVE'] [[0.04 0.96]]
1263/4496 dalargin ['INACTIVE'] [[0.1 0.9]]
1264/4496 quiflapon ['INACTIVE'] [[0.04 0.96]]
1265/4496 IB-MECA ['INACTIVE'] [[0.03 0.97]]
1266/4496 clofedanol ['INACTIVE'] [[0.04 0.96]]
1267/4496 taranabant ['INACTIVE'] [[0.02 0.98]]
1268/4496 2-hydroxyflutamide ['INACTIVE'] [[0.02 0.98]]
1269/4496 decernotinib ['INACTIVE'] [[0.08 0.92]]
1270/4496 PU-H71 ['INACTIVE'] [[0.08 0.92]]
1271/4496 AMG-548 ['INACTIVE'] [[0.02 0.98]]
1272/4496 TC-G-1005 ['INACTIVE'] [[0.04 0.96]]
1273/4496 SDZ-WAG-994 ['INACTIVE'] [[0.07 0.93]]
1274/4496 captan ['INACTIVE'] [[0.04 0.96]]
1275/4496 LY2874455 ['INACTIVE'] [[0.1 0.9]]
1276/4496 telatinib ['INACTIVE'] [[0.07 0.93]]
1277/4496 bilobalide ['INACTIVE'] [[0.01 0.99]]
1278/4496 saquinavir ['INACTIVE'] [[0.08 0.92]]
1279/4496 SGX523 ['INACTIVE'] [[0.16 0.84]]
1280/4496 PYR-41 ['INACTIVE'] [[0.09 0.91]]
1281/4496 aminopurvalanol-a ['INACTIVE'] [[0.1 0.9]]
1282/4496 salubrinal ['INACTIVE'] [[0.09 0.91]]
1283/4496 CG-400549 ['INACTIVE'] [[0.07 0.93]]
1284/4496 SHA-68 ['INACTIVE'] [[0.02 0.98]]
1285/4496 itacitinib ['INACTIVE'] [[0.09 0.91]]
1286/4496 ZD-7155 ['INACTIVE'] [[0.09 0.91]]
1287/4496 efatutazone ['INACTIVE'] [[0.07 0.93]]
1288/4496 lisofylline ['INACTIVE'] [[0.02 0.98]]
1289/4496 phentermine ['INACTIVE'] [[0. 1.]]
1290/4496 MDL-29951 ['INACTIVE'] [[0.02 0.98]]
1291/4496 AM-1241 ['INACTIVE'] [[0.1 0.9]]
1292/4496 iniparib ['INACTIVE'] [[0.03 0.97]]
1293/4496 sennoside-protonated ['INACTIVE'] [[0.12 0.88]]
1294/4496 chloroxoquinoline ['INACTIVE'] [[0. 1.]]
1295/4496 dapiprazole ['INACTIVE'] [[0.1 0.9]]
1296/4496 obidoxime ['INACTIVE'] [[0.08 0.92]]
1297/4496 PNU-120596 ['INACTIVE'] [[0.05 0.95]]
1298/4496 epothilone-d ['INACTIVE'] [[0.06 0.94]]
1299/4496 lixivaptan ['INACTIVE'] [[0.07 0.93]]
1300/4496 PD-184352 ['INACTIVE'] [[0.1 0.9]]
1301/4496 CGP-20712A ['INACTIVE'] [[0.04 0.96]]
1302/4496 GYKI-52466 ['INACTIVE'] [[0.06 0.94]]
1303/4496 TAS-103 ['INACTIVE'] [[0.01 0.99]]
1304/4496 KG-5 ['INACTIVE'] [[0.06 0.94]]
1305/4496 R-428 ['INACTIVE'] [[0.03 0.97]]
1306/4496 spiradoline ['INACTIVE'] [[0.07 0.93]]
1307/4496 VU-152100 ['INACTIVE'] [[0.04 0.96]]
1308/4496 ravuconazole ['INACTIVE'] [[0.17 0.83]]
1309/4496 SB-242235 ['INACTIVE'] [[0.01 0.99]]
1310/4496 revaprazan ['INACTIVE'] [[0.03 0.97]]
1311/4496 cisapride ['INACTIVE'] [[0.06 0.94]]
1312/4496 TC-S-7006 ['INACTIVE'] [[0.05 0.95]]
1313/4496 AZD1208 ['INACTIVE'] [[0.06 0.94]]
1314/4496 CGP-74514 ['INACTIVE'] [[0.07 0.93]]
1315/4496 PF-04217903 ['INACTIVE'] [[0.11 0.89]]
1316/4496 SU-11274 ['INACTIVE'] [[0.07 0.93]]
1317/4496 BVT-948 ['INACTIVE'] [[0. 1.]]
1318/4496 latrunculin-b ['INACTIVE'] [[0.07 0.93]]
1319/4496 emorfazone ['INACTIVE'] [[0.07 0.93]]
1320/4496 sesamin ['INACTIVE'] [[0.05 0.95]]
1321/4496 GSK1904529A ['INACTIVE'] [[0.18 0.82]]
1322/4496 semagacestat ['INACTIVE'] [[0.13 0.87]]
1323/4496 OC000459 ['INACTIVE'] [[0.07 0.93]]
1324/4496 VU0361737 ['INACTIVE'] [[0.02 0.98]]
1325/4496 PSB-06126 ['INACTIVE'] [[0.04 0.96]]
1326/4496 NKY-80 ['INACTIVE'] [[0.03 0.97]]
1327/4496 olmutinib ['INACTIVE'] [[0.04 0.96]]
1328/4496 SDZ-205-557 ['INACTIVE'] [[0.04 0.96]]
1329/4496 PF-573228 ['INACTIVE'] [[0.07 0.93]]
1330/4496 RS-100329 ['INACTIVE'] [[0.08 0.92]]
1331/4496 LY2157299 ['INACTIVE'] [[0.02 0.98]]
1332/4496 YK-4-279 ['INACTIVE'] [[0.09 0.91]]
1333/4496 trequinsin ['INACTIVE'] [[0.09 0.91]]
1334/4496 chidamide ['INACTIVE'] [[0.03 0.97]]
1335/4496 asunaprevir ['INACTIVE'] [[0.08 0.92]]
1336/4496 leuprolide ['INACTIVE'] [[0.11 0.89]]
1337/4496 CH-170 ['INACTIVE'] [[0.01 0.99]]
1338/4496 idazoxane ['INACTIVE'] [[0.02 0.98]]
1339/4496 SCH-58261 ['INACTIVE'] [[0.04 0.96]]
1340/4496 amuvatinib ['INACTIVE'] [[0.14 0.86]]
1341/4496 proacipimox ['INACTIVE'] [[0.01 0.99]]
1342/4496 SB-239063 ['INACTIVE'] [[0.03 0.97]]
1343/4496 GSK2578215A ['INACTIVE'] [[0.06 0.94]]
1344/4496 GKA-50 ['INACTIVE'] [[0.06 0.94]]
1345/4496 AZD5582 ['INACTIVE'] [[0.13 0.87]]
1346/4496 PD-198306 ['INACTIVE'] [[0.03 0.97]]
1347/4496 GSK1059615 ['INACTIVE'] [[0.07 0.93]]
1348/4496 bupicomide ['INACTIVE'] [[0.01 0.99]]
1349/4496 JW-74 ['INACTIVE'] [[0.04 0.96]]
1350/4496 flumecinol ['INACTIVE'] [[0. 1.]]
1351/4496 schisandrin-b ['INACTIVE'] [[0.08 0.92]]
1352/4496 T-5601640 ['INACTIVE'] [[0.03 0.97]]
1353/4496 GW-441756 ['INACTIVE'] [[0.03 0.97]]
1354/4496 siponimod ['INACTIVE'] [[0.13 0.87]]
1355/4496 sitaxentan ['INACTIVE'] [[0.05 0.95]]
1356/4496 TC-N-1752 ['INACTIVE'] [[0.11 0.89]]
1357/4496 S1P1-agonist-III ['INACTIVE'] [[0.03 0.97]]
1358/4496 N6-cyclopentyladenosine ['INACTIVE'] [[0.05 0.95]]
1359/4496 oxaliplatin ['INACTIVE'] [[0.04 0.96]]
1360/4496 sertaconazole ['INACTIVE'] [[0.05 0.95]]
1361/4496 ERK5-IN-1 ['INACTIVE'] [[0.09 0.91]]
1362/4496 5-(4chlorophenyl)-4-ethyl-2,4-dihydro-3H-1,2,4-triazol-3-one ['INACTIVE'] [[0.02 0.98]]
1363/4496 GSK3787 ['INACTIVE'] [[0.02 0.98]]
1364/4496 cathepsin-inhibitor-1 ['INACTIVE'] [[0.02 0.98]]
1365/4496 acriflavine ['INACTIVE'] [[0.02 0.98]]
1366/4496 FPL-55712 ['INACTIVE'] [[0.03 0.97]]
1367/4496 GBR-13069 ['INACTIVE'] [[0.02 0.98]]
1368/4496 WAY-362450 ['INACTIVE'] [[0.03 0.97]]
1369/4496 NMS-873 ['INACTIVE'] [[0.01 0.99]]
1370/4496 coenzyme-i ['INACTIVE'] [[0.11 0.89]]
1371/4496 Mps1-IN-5 ['INACTIVE'] [[0.07 0.93]]
1372/4496 pivmecillinam ['INACTIVE'] [[0.15 0.85]]
1373/4496 PP-121 ['INACTIVE'] [[0.03 0.97]]
1374/4496 RWJ-67657 ['INACTIVE'] [[0.06 0.94]]
1375/4496 ASP3026 ['INACTIVE'] [[0.09 0.91]]
1376/4496 CCG-63808 ['INACTIVE'] [[0.09 0.91]]
1377/4496 indibulin ['INACTIVE'] [[0.05 0.95]]
1378/4496 liranaftate ['INACTIVE'] [[0.04 0.96]]
1379/4496 4-HQN ['INACTIVE'] [[0.04 0.96]]
1380/4496 HEMADO ['INACTIVE'] [[0.09 0.91]]
1381/4496 ipragliflozin-l-proline ['INACTIVE'] [[0.07 0.93]]
1382/4496 2-Deoxy-2-{[methyl(nitroso)carbamoyl]amino}hexose ['INACTIVE'] [[0.07 0.93]]
1383/4496 inosine ['INACTIVE'] [[0.03 0.97]]
1384/4496 tomelukast ['INACTIVE'] [[0.05 0.95]]
1385/4496 auranofin ['INACTIVE'] [[0.08 0.92]]
1386/4496 ZLN-024 ['INACTIVE'] [[0.02 0.98]]
1387/4496 radezolid ['INACTIVE'] [[0.08 0.92]]
1388/4496 1-octanol ['INACTIVE'] [[0.01 0.99]]
1389/4496 PHA-680632 ['INACTIVE'] [[0.11 0.89]]
1390/4496 sertindole ['INACTIVE'] [[0.09 0.91]]
1391/4496 nitrocaramiphen ['INACTIVE'] [[0.07 0.93]]
1392/4496 UNC2250 ['INACTIVE'] [[0.07 0.93]]
1393/4496 FR-122047 ['INACTIVE'] [[0.02 0.98]]
1394/4496 MIRA-1 ['INACTIVE'] [[0. 1.]]
1395/4496 metrizoic-acid ['INACTIVE'] [[0.03 0.97]]
1396/4496 sodium-tanshinone-ii-a-sulfonate ['INACTIVE'] [[0.03 0.97]]
1397/4496 camicinal ['INACTIVE'] [[0.08 0.92]]
1398/4496 PAC-1 ['INACTIVE'] [[0.04 0.96]]
1399/4496 GSK429286A ['INACTIVE'] [[0.04 0.96]]
1400/4496 bisindolylmaleimide-ix ['INACTIVE'] [[0.12 0.88]]
1401/4496 tienilic-acid ['INACTIVE'] [[0.04 0.96]]
1402/4496 CC4 ['INACTIVE'] [[0.11 0.89]]
1403/4496 sodium-dodecyl-sulfate ['INACTIVE'] [[0.04 0.96]]
1404/4496 semapimod ['INACTIVE'] [[0.17 0.83]]
1405/4496 LY2183240 ['INACTIVE'] [[0.04 0.96]]
1406/4496 sorafenib ['INACTIVE'] [[0.05 0.95]]
1407/4496 AMG-517 ['INACTIVE'] [[0.03 0.97]]
1408/4496 ETC-159 ['INACTIVE'] [[0.04 0.96]]
1409/4496 JZL-195 ['INACTIVE'] [[0.1 0.9]]
1410/4496 AST-1306 ['INACTIVE'] [[0.05 0.95]]
1411/4496 NAV-26 ['INACTIVE'] [[0.05 0.95]]
1412/4496 P276-00 ['INACTIVE'] [[0.05 0.95]]
1413/4496 caracemide ['INACTIVE'] [[0.01 0.99]]
1414/4496 trap-101 ['INACTIVE'] [[0.07 0.93]]
1415/4496 NAN-190 ['INACTIVE'] [[0.06 0.94]]
1416/4496 sodium-picosulfate ['INACTIVE'] [[0.03 0.97]]
1417/4496 halobetasol-propionate ['INACTIVE'] [[0. 1.]]
1418/4496 TCS-359 ['INACTIVE'] [[0. 1.]]
1419/4496 GSK-J1 ['INACTIVE'] [[0.05 0.95]]
1420/4496 dacomitinib ['INACTIVE'] [[0.04 0.96]]
1421/4496 YM-244769 ['INACTIVE'] [[0.02 0.98]]
1422/4496 cordycepin ['INACTIVE'] [[0.01 0.99]]
1423/4496 MDL-73005EF ['INACTIVE'] [[0.13 0.87]]
1424/4496 monosodium-alpha-luminol ['INACTIVE'] [[0.06 0.94]]
1425/4496 tiquizium ['INACTIVE'] [[0.04 0.96]]
1426/4496 gonadorelin ['INACTIVE'] [[0.13 0.87]]
1427/4496 VU591 ['INACTIVE'] [[0.06 0.94]]
1428/4496 BI-224436 ['INACTIVE'] [[0.06 0.94]]
1429/4496 LRRK2-IN-1 ['INACTIVE'] [[0.1 0.9]]
1430/4496 CGP-60474 ['INACTIVE'] [[0.03 0.97]]
1431/4496 BAY-K-8644-(+/-) ['INACTIVE'] [[0.06 0.94]]
1432/4496 dipraglurant ['INACTIVE'] [[0.09 0.91]]
1433/4496 mabuterol ['INACTIVE'] [[0. 1.]]
1434/4496 TAK-220 ['INACTIVE'] [[0.1 0.9]]
1435/4496 propentofylline ['INACTIVE'] [[0.03 0.97]]
1436/4496 NK-252 ['INACTIVE'] [[0.01 0.99]]
1437/4496 azaguanine-8 ['INACTIVE'] [[0. 1.]]
1438/4496 AQ-RA741 ['INACTIVE'] [[0.03 0.97]]
1439/4496 GW-7647 ['INACTIVE'] [[0.04 0.96]]
1440/4496 S-14506 ['INACTIVE'] [[0.02 0.98]]
1441/4496 CYM-5541 ['INACTIVE'] [[0.07 0.93]]
1442/4496 AC-710 ['INACTIVE'] [[0.09 0.91]]
1443/4496 clebopride ['INACTIVE'] [[0.02 0.98]]
1444/4496 LY2608204 ['INACTIVE'] [[0.09 0.91]]
1445/4496 demecarium ['INACTIVE'] [[0.1 0.9]]
1446/4496 ARQ-621 ['INACTIVE'] [[0.14 0.86]]
1447/4496 RS-56812 ['INACTIVE'] [[0.07 0.93]]
1448/4496 risedronate ['INACTIVE'] [[0. 1.]]
1449/4496 TPCA-1 ['INACTIVE'] [[0.02 0.98]]
1450/4496 SNS-314 ['INACTIVE'] [[0.03 0.97]]
1451/4496 CZC24832 ['INACTIVE'] [[0.04 0.96]]
1452/4496 PD-173212 ['INACTIVE'] [[0.04 0.96]]
1453/4496 MM77 ['INACTIVE'] [[0.08 0.92]]
1454/4496 PNU-142633 ['INACTIVE'] [[0.01 0.99]]
1455/4496 BML-284 ['INACTIVE'] [[0.04 0.96]]
1456/4496 CUR-61414 ['INACTIVE'] [[0.05 0.95]]
1457/4496 BMS-707035 ['INACTIVE'] [[0.03 0.97]]
1458/4496 chlorcyclizine ['INACTIVE'] [[0.01 0.99]]
1459/4496 isavuconazole ['INACTIVE'] [[0.18 0.82]]
1460/4496 TG-100713 ['INACTIVE'] [[0.05 0.95]]
1461/4496 HER2-Inhibitor-1 ['INACTIVE'] [[0.05 0.95]]
1462/4496 KRN-633 ['INACTIVE'] [[0.06 0.94]]
1463/4496 tosedostat ['INACTIVE'] [[0.05 0.95]]
1464/4496 danusertib ['INACTIVE'] [[0.06 0.94]]
1465/4496 PIK-294 ['INACTIVE'] [[0.09 0.91]]
1466/4496 PF-04937319 ['INACTIVE'] [[0.03 0.97]]
1467/4496 VU0364770 ['INACTIVE'] [[0.03 0.97]]
1468/4496 pamabrom ['INACTIVE'] [[0.02 0.98]]
1469/4496 geniposide ['INACTIVE'] [[0.12 0.88]]
1470/4496 GW-6471 ['INACTIVE'] [[0.07 0.93]]
1471/4496 ZM-323881 ['INACTIVE'] [[0.04 0.96]]
1472/4496 vindesine ['INACTIVE'] [[0.05 0.95]]
1473/4496 AZD8186 ['INACTIVE'] [[0.08 0.92]]
1474/4496 PD-168568 ['INACTIVE'] [[0.06 0.94]]
1475/4496 stattic ['INACTIVE'] [[0.02 0.98]]
1476/4496 SB-399885 ['INACTIVE'] [[0.04 0.96]]
1477/4496 naringin-dihydrochalcone ['INACTIVE'] [[0.06 0.94]]
1478/4496 GDC-0623 ['INACTIVE'] [[0.08 0.92]]
1479/4496 N6-methyladenosine ['INACTIVE'] [[0.01 0.99]]
1480/4496 AMG458 ['INACTIVE'] [[0.08 0.92]]
1481/4496 etoricoxib ['INACTIVE'] [[0.05 0.95]]
1482/4496 PF-477736 ['INACTIVE'] [[0.06 0.94]]
1483/4496 moxaverine ['INACTIVE'] [[0. 1.]]
1484/4496 TCS-OX2-29 ['INACTIVE'] [[0.03 0.97]]
1485/4496 U-50488-(-) ['INACTIVE'] [[0.03 0.97]]
1486/4496 SKF-81297 ['INACTIVE'] [[0.01 0.99]]
1487/4496 cobimetinib ['INACTIVE'] [[0.05 0.95]]
1488/4496 gadopentetic-acid ['INACTIVE'] [[0.05 0.95]]
1489/4496 3-MPPI ['INACTIVE'] [[0.02 0.98]]
1490/4496 DR-2313 ['INACTIVE'] [[0.02 0.98]]
1491/4496 FLI-06 ['INACTIVE'] [[0.01 0.99]]
1492/4496 ZM-241385 ['INACTIVE'] [[0.05 0.95]]
1493/4496 BAY-K-8644-(s)-(-) ['INACTIVE'] [[0.06 0.94]]
1494/4496 SCH-23390 ['INACTIVE'] [[0.03 0.97]]
1495/4496 leucovorin ['INACTIVE'] [[0.09 0.91]]
1496/4496 calcium-levofolinate ['INACTIVE'] [[0.09 0.91]]
1497/4496 metocurine ['INACTIVE'] [[0.03 0.97]]
1498/4496 centazolone ['INACTIVE'] [[0.03 0.97]]
1499/4496 uracil ['INACTIVE'] [[0.03 0.97]]
1500/4496 EVP-6124 ['INACTIVE'] [[0.04 0.96]]
1501/4496 metafolin ['INACTIVE'] [[0.08 0.92]]
1502/4496 neohesperidin-dihydrochalcone ['INACTIVE'] [[0.06 0.94]]
1503/4496 fluprazine ['INACTIVE'] [[0. 1.]]
1504/4496 canertinib ['INACTIVE'] [[0.05 0.95]]
1505/4496 JTE-907 ['INACTIVE'] [[0.04 0.96]]
1506/4496 MDL-72832 ['INACTIVE'] [[0.15 0.85]]
1507/4496 vilanterol ['INACTIVE'] [[0.12 0.88]]
1508/4496 pitavastatin ['INACTIVE'] [[0.02 0.98]]
1509/4496 ONO-8130 ['INACTIVE'] [[0.02 0.98]]
1510/4496 trimeprazine ['INACTIVE'] [[0.1 0.9]]
1511/4496 tazarotene ['INACTIVE'] [[0.01 0.99]]
1512/4496 suramin ['INACTIVE'] [[0.08 0.92]]
1513/4496 LY2979165 ['INACTIVE'] [[0.04 0.96]]
1514/4496 ADX-47273 ['INACTIVE'] [[0.06 0.94]]
1515/4496 BLU9931 ['INACTIVE'] [[0.11 0.89]]
1516/4496 CPI-360 ['INACTIVE'] [[0.05 0.95]]
1517/4496 bephenium-hydroxynaphthoate ['INACTIVE'] [[0.09 0.91]]
1518/4496 NSC-319726 ['INACTIVE'] [[0.02 0.98]]
1519/4496 AP26113 ['INACTIVE'] [[0.09 0.91]]
1520/4496 varespladib ['INACTIVE'] [[0.01 0.99]]
1521/4496 LY2584702 ['INACTIVE'] [[0.13 0.87]]
1522/4496 SIS3 ['INACTIVE'] [[0.06 0.94]]
1523/4496 NHI-2 ['INACTIVE'] [[0.06 0.94]]
1524/4496 ODQ ['INACTIVE'] [[0. 1.]]
1525/4496 DPCPX ['INACTIVE'] [[0.15 0.85]]
1526/4496 bunazosin ['INACTIVE'] [[0.12 0.88]]
1527/4496 U-54494A ['INACTIVE'] [[0.03 0.97]]
1528/4496 toceranib ['INACTIVE'] [[0.04 0.96]]
1529/4496 alpha-D-glucopyranose ['INACTIVE'] [[0.03 0.97]]
1530/4496 amifampridine ['INACTIVE'] [[0.05 0.95]]
1531/4496 PLX-4720 ['INACTIVE'] [[0.05 0.95]]
1532/4496 SPP-86 ['INACTIVE'] [[0.03 0.97]]
1533/4496 bortezomib ['INACTIVE'] [[0.04 0.96]]
1534/4496 GSK2334470 ['INACTIVE'] [[0.03 0.97]]
1535/4496 icilin ['INACTIVE'] [[0.05 0.95]]
1536/4496 BAY-60-6583 ['INACTIVE'] [[0.06 0.94]]
1537/4496 RN-1747 ['INACTIVE'] [[0.03 0.97]]
1538/4496 purvalanol-a ['INACTIVE'] [[0.06 0.94]]
1539/4496 etofibrate ['INACTIVE'] [[0.01 0.99]]
1540/4496 solcitinib ['INACTIVE'] [[0.09 0.91]]
1541/4496 SRT1720 ['INACTIVE'] [[0.11 0.89]]
1542/4496 SKF-86002 ['INACTIVE'] [[0.01 0.99]]
1543/4496 ABT-491 ['INACTIVE'] [[0.15 0.85]]
1544/4496 sodium-tetradecyl-sulfate ['INACTIVE'] [[0.04 0.96]]
1545/4496 colforsin-daproate ['INACTIVE'] [[0.06 0.94]]
1546/4496 GSK-J2 ['INACTIVE'] [[0.05 0.95]]
1547/4496 N-acetylmannosamine ['INACTIVE'] [[0.02 0.98]]
1548/4496 PhiKan-083 ['INACTIVE'] [[0.03 0.97]]
1549/4496 mCPP ['INACTIVE'] [[0.02 0.98]]
1550/4496 JNJ-16259685 ['INACTIVE'] [[0.03 0.97]]
1551/4496 PIM-1-Inhibitor-2 ['INACTIVE'] [[0.05 0.95]]
1552/4496 isotiquimide ['INACTIVE'] [[0.01 0.99]]
1553/4496 epirizole ['INACTIVE'] [[0.07 0.93]]
1554/4496 ML314 ['INACTIVE'] [[0.06 0.94]]
1555/4496 D-delta-Tocopherol ['INACTIVE'] [[0.02 0.98]]
1556/4496 peficitinib ['INACTIVE'] [[0.07 0.93]]
1557/4496 Mps-BAY-2a ['INACTIVE'] [[0.08 0.92]]
1558/4496 SB-742457 ['INACTIVE'] [[0.08 0.92]]
1559/4496 GP1a ['INACTIVE'] [[0.04 0.96]]
1560/4496 BVD-523 ['INACTIVE'] [[0.02 0.98]]
1561/4496 schisandrol-b ['INACTIVE'] [[0.06 0.94]]
1562/4496 BX-795 ['INACTIVE'] [[0.04 0.96]]
1563/4496 telmesteine ['INACTIVE'] [[0.02 0.98]]
1564/4496 DRF053-(R) ['INACTIVE'] [[0.02 0.98]]
1565/4496 furegrelate ['INACTIVE'] [[0.02 0.98]]
1566/4496 KB-SRC-4 ['INACTIVE'] [[0.09 0.91]]
1567/4496 LDN193189 ['INACTIVE'] [[0.12 0.88]]
1568/4496 JHW-007 ['INACTIVE'] [[0.06 0.94]]
1569/4496 ceramide ['INACTIVE'] [[0.1 0.9]]
1570/4496 tebipenem ['INACTIVE'] [[0.2 0.8]]
1571/4496 apraclonidine ['INACTIVE'] [[0.07 0.93]]
1572/4496 zindotrine ['INACTIVE'] [[0.04 0.96]]
1573/4496 gimeracil ['INACTIVE'] [[0.01 0.99]]
1574/4496 CGP-37157 ['INACTIVE'] [[0.07 0.93]]
1575/4496 diclofensine ['INACTIVE'] [[0.02 0.98]]
1576/4496 BAX-channel-blocker ['INACTIVE'] [[0.02 0.98]]
1577/4496 presatovir ['INACTIVE'] [[0.21 0.79]]
1578/4496 ML-297 ['INACTIVE'] [[0.03 0.97]]
1579/4496 bretazenil ['INACTIVE'] [[0.06 0.94]]
1580/4496 norcyclobenzaprine ['INACTIVE'] [[0.02 0.98]]
1581/4496 mebicar ['INACTIVE'] [[0.08 0.92]]
1582/4496 AY-9944 ['INACTIVE'] [[0.06 0.94]]
1583/4496 ethyl-2-(carbamoyloxy)benzoate ['INACTIVE'] [[0.03 0.97]]
1584/4496 YM-58483 ['INACTIVE'] [[0.04 0.96]]
1585/4496 GSK-J4 ['INACTIVE'] [[0.04 0.96]]
1586/4496 casanthranol-variant ['INACTIVE'] [[0.05 0.95]]
1587/4496 ABT-639 ['INACTIVE'] [[0.04 0.96]]
1588/4496 L-745870 ['INACTIVE'] [[0.04 0.96]]
1589/4496 mivobulin ['INACTIVE'] [[0. 1.]]
1590/4496 LH846 ['INACTIVE'] [[0.02 0.98]]
1591/4496 ML-10302 ['INACTIVE'] [[0.02 0.98]]
1592/4496 roquinimex ['INACTIVE'] [[0.02 0.98]]
1593/4496 PDE10-IN-1 ['INACTIVE'] [[0.06 0.94]]
1594/4496 tenoxicam ['INACTIVE'] [[0.04 0.96]]
1595/4496 navarixin ['INACTIVE'] [[0.06 0.94]]
1596/4496 phenethicillin ['INACTIVE'] [[0.06 0.94]]
1597/4496 chlorophyllin-copper ['INACTIVE'] [[0.12 0.88]]
1598/4496 eact ['INACTIVE'] [[0.01 0.99]]
1599/4496 CRT0044876 ['INACTIVE'] [[0.05 0.95]]
1600/4496 MK-0752 ['INACTIVE'] [[0.07 0.93]]
1601/4496 pozanicline ['INACTIVE'] [[0. 1.]]
1602/4496 AM679 ['INACTIVE'] [[0.11 0.89]]
1603/4496 A-887826 ['INACTIVE'] [[0.01 0.99]]
1604/4496 mangafodipir ['INACTIVE'] [[0.14 0.86]]
1605/4496 fulvestrant ['INACTIVE'] [[0.01 0.99]]
1606/4496 rimcazole ['INACTIVE'] [[0.08 0.92]]
1607/4496 SU014813 ['INACTIVE'] [[0.11 0.89]]
1608/4496 secoisolariciresinol-diglucoside ['INACTIVE'] [[0.06 0.94]]
1609/4496 dexfenfluramine ['INACTIVE'] [[0.04 0.96]]
1610/4496 ampyrone ['INACTIVE'] [[0. 1.]]
1611/4496 ITI214 ['INACTIVE'] [[0.09 0.91]]
1612/4496 monostearin ['INACTIVE'] [[0.01 0.99]]
1613/4496 ethacridine-lactate-monohydrate ['INACTIVE'] [[0.08 0.92]]
1614/4496 PA-824 ['INACTIVE'] [[0.06 0.94]]
1615/4496 A0001 ['INACTIVE'] [[0.02 0.98]]
1616/4496 SB-334867 ['INACTIVE'] [[0.04 0.96]]
1617/4496 clomethiazole ['INACTIVE'] [[0.04 0.96]]
1618/4496 RU-58841 ['INACTIVE'] [[0.02 0.98]]
1619/4496 tioguanine ['INACTIVE'] [[0.13 0.87]]
1620/4496 proquazone ['INACTIVE'] [[0.02 0.98]]
1621/4496 JTC-801 ['INACTIVE'] [[0.08 0.92]]
1622/4496 delamanid ['INACTIVE'] [[0.11 0.89]]
1623/4496 UB-165 ['INACTIVE'] [[0.04 0.96]]
1624/4496 retapamulin ['INACTIVE'] [[0.14 0.86]]
1625/4496 TC-I-2014 ['INACTIVE'] [[0.06 0.94]]
1626/4496 EPZ011989 ['INACTIVE'] [[0.11 0.89]]
1627/4496 LY3023414 ['INACTIVE'] [[0.08 0.92]]
1628/4496 alarelin ['INACTIVE'] [[0.12 0.88]]
1629/4496 GPP-78 ['INACTIVE'] [[0.06 0.94]]
1630/4496 AMPA-(RS) ['INACTIVE'] [[0. 1.]]
1631/4496 cilastatin ['INACTIVE'] [[0.05 0.95]]
1632/4496 AMPA-(S) ['INACTIVE'] [[0. 1.]]
1633/4496 piperazinedione ['INACTIVE'] [[0.02 0.98]]
1634/4496 GKT137831 ['INACTIVE'] [[0.07 0.93]]
1635/4496 SIB-1757 ['INACTIVE'] [[0.04 0.96]]
1636/4496 vesnarinone ['INACTIVE'] [[0.05 0.95]]
1637/4496 lomitapide ['INACTIVE'] [[0.03 0.97]]
1638/4496 sotagliflozin ['INACTIVE'] [[0.04 0.96]]
1639/4496 KHS-101 ['INACTIVE'] [[0.08 0.92]]
1640/4496 KY02111 ['INACTIVE'] [[0.01 0.99]]
1641/4496 MM-11253 ['INACTIVE'] [[0.05 0.95]]
1642/4496 trapidil ['INACTIVE'] [[0.03 0.97]]
1643/4496 L-(+)-Rhamnose-Monohydrate ['INACTIVE'] [[0. 1.]]
1644/4496 FK-888 ['INACTIVE'] [[0.1 0.9]]
1645/4496 ASP-2535 ['INACTIVE'] [[0.12 0.88]]
1646/4496 1-acetyl-4-methylpiperazine ['INACTIVE'] [[0.04 0.96]]
1647/4496 atorvastatin ['INACTIVE'] [[0.02 0.98]]
1648/4496 penicillin-v-potassium ['INACTIVE'] [[0.04 0.96]]
1649/4496 tofogliflozin ['INACTIVE'] [[0.04 0.96]]
1650/4496 ML-323 ['INACTIVE'] [[0.04 0.96]]
1651/4496 YIL-781 ['INACTIVE'] [[0.1 0.9]]
1652/4496 YM-298198-desmethyl ['INACTIVE'] [[0.03 0.97]]
1653/4496 isbufylline ['INACTIVE'] [[0.01 0.99]]
1654/4496 CNQX ['INACTIVE'] [[0.06 0.94]]
1655/4496 WZ811 ['INACTIVE'] [[0.01 0.99]]
1656/4496 OXA-06 ['INACTIVE'] [[0.01 0.99]]
1657/4496 clonazepam ['INACTIVE'] [[0.06 0.94]]
1658/4496 pardoprunox ['INACTIVE'] [[0.08 0.92]]
1659/4496 proxodolol ['INACTIVE'] [[0. 1.]]
1660/4496 VTP-27999 ['INACTIVE'] [[0.04 0.96]]
1661/4496 SDZ-NKT-343 ['INACTIVE'] [[0.13 0.87]]
1662/4496 fosaprepitant-dimeglumine ['INACTIVE'] [[0.07 0.93]]
1663/4496 FH-535 ['INACTIVE'] [[0.05 0.95]]
1664/4496 pentacosanoic-acid ['INACTIVE'] [[0.04 0.96]]
1665/4496 lercanidipine ['INACTIVE'] [[0.1 0.9]]
1666/4496 SB-206553 ['INACTIVE'] [[0.03 0.97]]
1667/4496 simeprevir ['INACTIVE'] [[0.17 0.83]]
1668/4496 VLX600 ['INACTIVE'] [[0.05 0.95]]
1669/4496 tipranavir ['INACTIVE'] [[0.04 0.96]]
1670/4496 leteprinim ['INACTIVE'] [[0.01 0.99]]
1671/4496 T-62 ['INACTIVE'] [[0.02 0.98]]
1672/4496 pyrazinoylguanidine ['INACTIVE'] [[0.03 0.97]]
1673/4496 NVP-BVU972 ['INACTIVE'] [[0.14 0.86]]
1674/4496 vandetanib ['INACTIVE'] [[0.02 0.98]]
1675/4496 WAY-213613 ['INACTIVE'] [[0.07 0.93]]
1676/4496 triptorelin ['INACTIVE'] [[0.15 0.85]]
1677/4496 GSK-J5 ['INACTIVE'] [[0.03 0.97]]
1678/4496 RS-23597-190 ['INACTIVE'] [[0.01 0.99]]
1679/4496 AZ960 ['INACTIVE'] [[0.03 0.97]]
1680/4496 Ro-106-9920 ['INACTIVE'] [[0.02 0.98]]
1681/4496 YS-035 ['INACTIVE'] [[0.02 0.98]]
1682/4496 salmeterol ['INACTIVE'] [[0.11 0.89]]
1683/4496 safflower-yellow ['INACTIVE'] [[0.2 0.8]]
1684/4496 mirabegron ['INACTIVE'] [[0.09 0.91]]
1685/4496 dideoxyadenosine ['INACTIVE'] [[0.04 0.96]]
1686/4496 VU-0240551 ['INACTIVE'] [[0.06 0.94]]
1687/4496 PK-THPP ['INACTIVE'] [[0.07 0.93]]
1688/4496 RS-16566 ['INACTIVE'] [[0.08 0.92]]
1689/4496 SUN-11602 ['INACTIVE'] [[0.04 0.96]]
1690/4496 TXA127 ['INACTIVE'] [[0.24 0.76]]
1691/4496 fexinidazole ['INACTIVE'] [[0.05 0.95]]
1692/4496 sibutramine ['INACTIVE'] [[0.02 0.98]]
1693/4496 tabimorelin ['INACTIVE'] [[0.08 0.92]]
1694/4496 AZD2461 ['INACTIVE'] [[0.04 0.96]]
1695/4496 L-760735 ['INACTIVE'] [[0.08 0.92]]
1696/4496 AZD8055 ['INACTIVE'] [[0.04 0.96]]
1697/4496 sucrose-octaacetate ['INACTIVE'] [[0.04 0.96]]
1698/4496 bremelanotide ['INACTIVE'] [[0.34 0.66]]
1699/4496 sofosbuvir ['INACTIVE'] [[0.05 0.95]]
1700/4496 lucitanib ['INACTIVE'] [[0.05 0.95]]
1701/4496 prinaberel ['INACTIVE'] [[0.05 0.95]]
1702/4496 SNAP-94847 ['INACTIVE'] [[0.09 0.91]]
1703/4496 AGM-1470 ['INACTIVE'] [[0. 1.]]
1704/4496 TNP-470 ['INACTIVE'] [[0. 1.]]
1705/4496 BTS-72664 ['INACTIVE'] [[0.03 0.97]]
1706/4496 TCS-5861528 ['INACTIVE'] [[0.03 0.97]]
1707/4496 CCMI ['INACTIVE'] [[0.12 0.88]]
1708/4496 ICG-001 ['INACTIVE'] [[0.11 0.89]]
1709/4496 3-bromopyruvate ['INACTIVE'] [[0. 1.]]
1710/4496 2-hydroxy-4-((E)-3-(4-hydroxyphenyl)acryloyl)-2-((2R,3R,4S,5S,6R)-3,4,5-trihydroxy-6-(hydroxymethyl)tetrahydro-2H-pyran-2-yl)-6-((2S,3R,4R,5S,6R)-3,4,5-trihydroxy-6-(hydroxymethyl)tetrahydro-2H-pyran-2-yl)cyclohexane-1,3,5-trione ['INACTIVE'] [[0.14 0.86]]
1711/4496 ku-0063794 ['INACTIVE'] [[0.04 0.96]]
1712/4496 KBG ['INACTIVE'] [[0. 1.]]
1713/4496 TRAM-34 ['INACTIVE'] [[0.09 0.91]]
1714/4496 entecavir ['INACTIVE'] [[0.02 0.98]]
1715/4496 AMG319 ['INACTIVE'] [[0.05 0.95]]
1716/4496 metenkephalin ['INACTIVE'] [[0.07 0.93]]
1717/4496 AC-264613 ['INACTIVE'] [[0.02 0.98]]
1718/4496 nitrosodimethylurea ['INACTIVE'] [[0.01 0.99]]
1719/4496 SR-27897 ['INACTIVE'] [[0.05 0.95]]
1720/4496 tetraethylenepentamine ['INACTIVE'] [[0. 1.]]
1721/4496 palonosetron ['INACTIVE'] [[0.07 0.93]]
1722/4496 cutamesine ['INACTIVE'] [[0.04 0.96]]
1723/4496 HC-030031 ['INACTIVE'] [[0. 1.]]
1724/4496 esaprazole ['INACTIVE'] [[0.03 0.97]]
1725/4496 istradefylline ['INACTIVE'] [[0.11 0.89]]
1726/4496 genipin ['INACTIVE'] [[0.05 0.95]]
1727/4496 dutasteride ['INACTIVE'] [[0.07 0.93]]
1728/4496 OSI-930 ['INACTIVE'] [[0.07 0.93]]
1729/4496 aptiganel ['INACTIVE'] [[0.08 0.92]]
1730/4496 AZD1981 ['INACTIVE'] [[0.01 0.99]]
1731/4496 basimglurant ['INACTIVE'] [[0.02 0.98]]
1732/4496 BQU57 ['INACTIVE'] [[0.03 0.97]]
1733/4496 timofibrate ['INACTIVE'] [[0.02 0.98]]
1734/4496 cinepazet ['INACTIVE'] [[0.07 0.93]]
1735/4496 examorelin ['INACTIVE'] [[0.12 0.88]]
1736/4496 ibuprofen-piconol ['INACTIVE'] [[0.02 0.98]]
1737/4496 neohesperidin ['INACTIVE'] [[0.01 0.99]]
1738/4496 skepinone-l ['INACTIVE'] [[0.04 0.96]]
1739/4496 tiracizine ['INACTIVE'] [[0.03 0.97]]
1740/4496 ARQ-092 ['INACTIVE'] [[0.04 0.96]]
1741/4496 AZD2014 ['INACTIVE'] [[0.05 0.95]]
1742/4496 alpidem ['INACTIVE'] [[0.08 0.92]]
1743/4496 BIX-01294 ['INACTIVE'] [[0.07 0.93]]
1744/4496 A205804 ['INACTIVE'] [[0.05 0.95]]
1745/4496 NVP-BSK805 ['INACTIVE'] [[0.14 0.86]]
1746/4496 EIPA ['INACTIVE'] [[0.03 0.97]]
1747/4496 firategrast ['INACTIVE'] [[0.05 0.95]]
1748/4496 zardaverine ['INACTIVE'] [[0.04 0.96]]
1749/4496 meclofenamic-acid ['INACTIVE'] [[0.04 0.96]]
1750/4496 OTS514 ['INACTIVE'] [[0.07 0.93]]
1751/4496 Ro-48-8071 ['INACTIVE'] [[0.03 0.97]]
1752/4496 AEE788 ['INACTIVE'] [[0.02 0.98]]
1753/4496 efonidipine-monoethanolate ['INACTIVE'] [[0.07 0.93]]
1754/4496 PD-98059 ['INACTIVE'] [[0.01 0.99]]
1755/4496 alvocidib ['INACTIVE'] [[0.04 0.96]]
1756/4496 APY0201 ['INACTIVE'] [[0.09 0.91]]
1757/4496 ABC-294640 ['INACTIVE'] [[0. 1.]]
1758/4496 URMC-099 ['INACTIVE'] [[0.05 0.95]]
1759/4496 3-alpha-bis-(4-fluorophenyl)-methoxytropane ['INACTIVE'] [[0.05 0.95]]
1760/4496 lubiprostone ['INACTIVE'] [[0.02 0.98]]
1761/4496 roscovitine ['INACTIVE'] [[0.05 0.95]]
1762/4496 YM-298198 ['INACTIVE'] [[0.06 0.94]]
1763/4496 hyaluronic-acid ['INACTIVE'] [[0.05 0.95]]
1764/4496 barasertib-HQPA ['INACTIVE'] [[0.04 0.96]]
1765/4496 Ro-08-2750 ['INACTIVE'] [[0.04 0.96]]
1766/4496 trigonelline ['INACTIVE'] [[0.02 0.98]]
1767/4496 nelfinavir ['INACTIVE'] [[0.07 0.93]]
1768/4496 ML-228 ['INACTIVE'] [[0.02 0.98]]
1769/4496 troglitazone ['INACTIVE'] [[0.03 0.97]]
1770/4496 pirlindole ['INACTIVE'] [[0.07 0.93]]
1771/4496 metamizole ['INACTIVE'] [[0.01 0.99]]
1772/4496 TP-0903 ['INACTIVE'] [[0.05 0.95]]
1773/4496 ID-8 ['INACTIVE'] [[0.06 0.94]]
1774/4496 TA-01 ['INACTIVE'] [[0.02 0.98]]
1775/4496 K-MAP ['INACTIVE'] [[0. 1.]]
1776/4496 SDZ-220-040 ['INACTIVE'] [[0.06 0.94]]
1777/4496 amiprilose ['INACTIVE'] [[0.01 0.99]]
1778/4496 BQ-123 ['INACTIVE'] [[0.08 0.92]]
1779/4496 difluprednate ['INACTIVE'] [[0.04 0.96]]
1780/4496 KP-1212 ['INACTIVE'] [[0.08 0.92]]
1781/4496 4-mu-8C ['INACTIVE'] [[0.02 0.98]]
1782/4496 SB-705498 ['INACTIVE'] [[0.05 0.95]]
1783/4496 mosapride ['INACTIVE'] [[0.02 0.98]]
1784/4496 hypericin ['INACTIVE'] [[0.02 0.98]]
1785/4496 XBD173 ['INACTIVE'] [[0.03 0.97]]
1786/4496 ICA-110381 ['INACTIVE'] [[0.02 0.98]]
1787/4496 tolrestat ['INACTIVE'] [[0.05 0.95]]
1788/4496 moxonidine ['INACTIVE'] [[0.02 0.98]]
1789/4496 JNJ-7706621 ['INACTIVE'] [[0.05 0.95]]
1790/4496 perfluorodecalin ['INACTIVE'] [[0.01 0.99]]
1791/4496 GSK2838232 ['INACTIVE'] [[0.04 0.96]]
1792/4496 delivert ['INACTIVE'] [[0.22 0.78]]
1793/4496 PF-04457845 ['INACTIVE'] [[0.06 0.94]]
1794/4496 geniposidic-acid ['INACTIVE'] [[0.13 0.87]]
1795/4496 AR-A014418 ['INACTIVE'] [[0.04 0.96]]
1796/4496 butofilolol ['INACTIVE'] [[0.04 0.96]]
1797/4496 droxicam ['INACTIVE'] [[0.01 0.99]]
1798/4496 KW-3902 ['INACTIVE'] [[0.08 0.92]]
1799/4496 SB-225002 ['INACTIVE'] [[0.06 0.94]]
1800/4496 Calhex-231 ['INACTIVE'] [[0.01 0.99]]
1801/4496 GSK2126458 ['INACTIVE'] [[0.05 0.95]]
1802/4496 indocyanine-green ['INACTIVE'] [[0.07 0.93]]
1803/4496 nibentan ['INACTIVE'] [[0.15 0.85]]
1804/4496 SB-505124 ['INACTIVE'] [[0.08 0.92]]
1805/4496 aloxistatin ['INACTIVE'] [[0.01 0.99]]
1806/4496 C34 ['INACTIVE'] [[0.04 0.96]]
1807/4496 cinalukast ['INACTIVE'] [[0.09 0.91]]
1808/4496 cardionogen-1 ['INACTIVE'] [[0.1 0.9]]
1809/4496 MK-0354 ['INACTIVE'] [[0.02 0.98]]
1810/4496 olomoucine ['INACTIVE'] [[0. 1.]]
1811/4496 AVL-292 ['INACTIVE'] [[0.09 0.91]]
1812/4496 MRS-1220 ['INACTIVE'] [[0.06 0.94]]
1813/4496 balaglitazone ['INACTIVE'] [[0.05 0.95]]
1814/4496 PD-173074 ['INACTIVE'] [[0.06 0.94]]
1815/4496 SB-216641 ['INACTIVE'] [[0. 1.]]
1816/4496 dixanthogen ['INACTIVE'] [[0.07 0.93]]
1817/4496 motesanib ['INACTIVE'] [[0. 1.]]
1818/4496 brexpiprazole ['INACTIVE'] [[0.04 0.96]]
1819/4496 CYM-50769 ['INACTIVE'] [[0.04 0.96]]
1820/4496 2-deoxyglucose ['INACTIVE'] [[0.01 0.99]]
1821/4496 KN-62 ['INACTIVE'] [[0.05 0.95]]
1822/4496 rivaroxaban ['INACTIVE'] [[0.09 0.91]]
1823/4496 BD-1008 ['INACTIVE'] [[0.06 0.94]]
1824/4496 CV-1808 ['INACTIVE'] [[0.12 0.88]]
1825/4496 otenzepad ['INACTIVE'] [[0.03 0.97]]
1826/4496 TTP-22 ['INACTIVE'] [[0.03 0.97]]
1827/4496 GANT-61 ['INACTIVE'] [[0.1 0.9]]
1828/4496 CI-966 ['INACTIVE'] [[0.02 0.98]]
1829/4496 tenofovir-alafenamide ['INACTIVE'] [[0.01 0.99]]
1830/4496 co-102862 ['INACTIVE'] [[0.05 0.95]]
1831/4496 LGK-974 ['INACTIVE'] [[0.05 0.95]]
1832/4496 exo-IWR-1 ['INACTIVE'] [[0.15 0.85]]
1833/4496 endo-IWR-1 ['INACTIVE'] [[0.15 0.85]]
1834/4496 dimethicone ['INACTIVE'] [[0.01 0.99]]
1835/4496 LX7101 ['INACTIVE'] [[0.13 0.87]]
1836/4496 ICI-199441 ['INACTIVE'] [[0.04 0.96]]
1837/4496 BIO-5192 ['INACTIVE'] [[0.15 0.85]]
1838/4496 indacaterol ['INACTIVE'] [[0.07 0.93]]
1839/4496 Teijin-compound-1 ['INACTIVE'] [[0.01 0.99]]
1840/4496 clocortolone-pivalate ['INACTIVE'] [[0.02 0.98]]
1841/4496 cevimeline ['INACTIVE'] [[0.11 0.89]]
1842/4496 AMG-319 ['INACTIVE'] [[0.08 0.92]]
1843/4496 fotemustine ['INACTIVE'] [[0.02 0.98]]
1844/4496 Ro-10-5824 ['INACTIVE'] [[0.05 0.95]]
1845/4496 IPA-3 ['INACTIVE'] [[0.12 0.88]]
1846/4496 wortmannin ['INACTIVE'] [[0.05 0.95]]
1847/4496 BYK-204165 ['INACTIVE'] [[0.02 0.98]]
1848/4496 WDR5-0103 ['INACTIVE'] [[0.03 0.97]]
1849/4496 linsitinib ['INACTIVE'] [[0.06 0.94]]
1850/4496 TC1 ['INACTIVE'] [[0.02 0.98]]
1851/4496 taurocholate ['INACTIVE'] [[0.03 0.97]]
1852/4496 YM-201636 ['INACTIVE'] [[0.1 0.9]]
1853/4496 LY393558 ['INACTIVE'] [[0.04 0.96]]
1854/4496 AZD7545 ['INACTIVE'] [[0.03 0.97]]
1855/4496 12-O-tetradecanoylphorbol-13-acetate ['INACTIVE'] [[0.07 0.93]]
1856/4496 osimertinib ['INACTIVE'] [[0.14 0.86]]
1857/4496 GSK2636771 ['INACTIVE'] [[0.09 0.91]]
1858/4496 necrostatin-2 ['INACTIVE'] [[0.05 0.95]]
1859/4496 FR-180204 ['INACTIVE'] [[0.06 0.94]]
1860/4496 diadenosine-tetraphosphate ['INACTIVE'] [[0.1 0.9]]
1861/4496 UNC-3230 ['INACTIVE'] [[0.02 0.98]]
1862/4496 DPC-681 ['INACTIVE'] [[0.08 0.92]]
1863/4496 paritaprevir ['INACTIVE'] [[0.23 0.77]]
1864/4496 2-hydroxysaclofen ['INACTIVE'] [[0. 1.]]
1865/4496 2-cyanopyrimidine ['INACTIVE'] [[0.03 0.97]]
1866/4496 SB-612111 ['INACTIVE'] [[0.05 0.95]]
1867/4496 AGN-194310 ['INACTIVE'] [[0.06 0.94]]
1868/4496 vorapaxar ['INACTIVE'] [[0.05 0.95]]
1869/4496 BETP ['INACTIVE'] [[0.05 0.95]]
1870/4496 EPZ015666 ['INACTIVE'] [[0.05 0.95]]
1871/4496 AC-55541 ['INACTIVE'] [[0.07 0.93]]
1872/4496 LY215490 ['INACTIVE'] [[0.1 0.9]]
1873/4496 PF-3845 ['INACTIVE'] [[0.04 0.96]]
1874/4496 titanocene-dichloride ['INACTIVE'] [[0.03 0.97]]
1875/4496 SC-12267 ['INACTIVE'] [[0.01 0.99]]
1876/4496 zimelidine ['INACTIVE'] [[0.06 0.94]]
1877/4496 KPT-185 ['INACTIVE'] [[0.03 0.97]]
1878/4496 darglitazone ['INACTIVE'] [[0.08 0.92]]
1879/4496 tandutinib ['INACTIVE'] [[0.08 0.92]]
1880/4496 taselisib ['INACTIVE'] [[0.1 0.9]]
1881/4496 GBR-12783 ['INACTIVE'] [[0.02 0.98]]
1882/4496 A-1070722 ['INACTIVE'] [[0.06 0.94]]
1883/4496 4,5,6,7-tetrabromobenzotriazole ['INACTIVE'] [[0.07 0.93]]
1884/4496 VE-822 ['INACTIVE'] [[0.05 0.95]]
1885/4496 PF-562271 ['INACTIVE'] [[0.19 0.81]]
1886/4496 amitifadine ['INACTIVE'] [[0.05 0.95]]
1887/4496 bruceantin ['INACTIVE'] [[0.11 0.89]]
1888/4496 NTRC-824 ['INACTIVE'] [[0.14 0.86]]
1889/4496 NSC-405020 ['INACTIVE'] [[0.04 0.96]]
1890/4496 sildenafil ['INACTIVE'] [[0.04 0.96]]
1891/4496 diphenylguanidine ['INACTIVE'] [[0.07 0.93]]
1892/4496 Ko143 ['INACTIVE'] [[0.03 0.97]]
1893/4496 oxonic-acid ['INACTIVE'] [[0.01 0.99]]
1894/4496 adefovir ['INACTIVE'] [[0.02 0.98]]
1895/4496 CGP-57380 ['INACTIVE'] [[0.02 0.98]]
1896/4496 TAK-715 ['INACTIVE'] [[0.05 0.95]]
1897/4496 morniflumate ['INACTIVE'] [[0.02 0.98]]
1898/4496 ciglitazone ['INACTIVE'] [[0.01 0.99]]
1899/4496 EBPC ['INACTIVE'] [[0.01 0.99]]
1900/4496 PSNCBAM-1 ['INACTIVE'] [[0.02 0.98]]
1901/4496 ABT-702 ['INACTIVE'] [[0.07 0.93]]
1902/4496 LX1031 ['INACTIVE'] [[0.06 0.94]]
1903/4496 tetrindole ['INACTIVE'] [[0.09 0.91]]
1904/4496 APTO-253 ['INACTIVE'] [[0.09 0.91]]
1905/4496 MK-0893 ['INACTIVE'] [[0.07 0.93]]
1906/4496 empagliflozin ['INACTIVE'] [[0.01 0.99]]
1907/4496 BRL-15572 ['INACTIVE'] [[0.03 0.97]]
1908/4496 GW-3965 ['INACTIVE'] [[0.09 0.91]]
1909/4496 oncrasin-1 ['INACTIVE'] [[0.08 0.92]]
1910/4496 CGS-20625 ['INACTIVE'] [[0.05 0.95]]
1911/4496 ABT-202 ['INACTIVE'] [[0.05 0.95]]
1912/4496 atovaquone ['INACTIVE'] [[0.02 0.98]]
1913/4496 LGX818 ['INACTIVE'] [[0.14 0.86]]
1914/4496 EPZ020411 ['INACTIVE'] [[0.02 0.98]]
1915/4496 4-CMTB ['INACTIVE'] [[0.01 0.99]]
1916/4496 ML204 ['INACTIVE'] [[0.03 0.97]]
1917/4496 UNC0631 ['INACTIVE'] [[0.17 0.83]]
1918/4496 vatalanib ['INACTIVE'] [[0.08 0.92]]
1919/4496 citicoline ['INACTIVE'] [[0.02 0.98]]
1920/4496 haloperidol-decanoate ['INACTIVE'] [[0.03 0.97]]
1921/4496 YM-750 ['INACTIVE'] [[0.09 0.91]]
1922/4496 oridonin ['INACTIVE'] [[0.02 0.98]]
1923/4496 tocofersolan ['INACTIVE'] [[0.06 0.94]]
1924/4496 4-acetyl-1,1-dimethylpiperazinium ['INACTIVE'] [[0.04 0.96]]
1925/4496 cephalomannine ['INACTIVE'] [[0.06 0.94]]
1926/4496 KB-R7943 ['INACTIVE'] [[0.06 0.94]]
1927/4496 PQ-401 ['INACTIVE'] [[0.05 0.95]]
1928/4496 EG00229 ['INACTIVE'] [[0.09 0.91]]
1929/4496 CFM-2 ['INACTIVE'] [[0.03 0.97]]
1930/4496 methylergometrine ['INACTIVE'] [[0.04 0.96]]
1931/4496 N-(2-chlorophenyl)-2-({(2E)-2-[1-(2-pyridinyl)ethylidene]hydrazino}carbothioyl)hydrazinecarbothioamide ['INACTIVE'] [[0.04 0.96]]
1932/4496 scriptaid ['INACTIVE'] [[0.03 0.97]]
1933/4496 emoxipin ['INACTIVE'] [[0.03 0.97]]
1934/4496 alogliptin ['INACTIVE'] [[0.04 0.96]]
1935/4496 talnetant ['INACTIVE'] [[0.03 0.97]]
1936/4496 bismuth-subgallate ['INACTIVE'] [[0.04 0.96]]
1937/4496 VU29 ['INACTIVE'] [[0.1 0.9]]
1938/4496 GPR120-modulator-1 ['INACTIVE'] [[0.07 0.93]]
1939/4496 cytochalasin-b ['INACTIVE'] [[0.12 0.88]]
1940/4496 tropesin ['INACTIVE'] [[0.04 0.96]]
1941/4496 R112 ['INACTIVE'] [[0.07 0.93]]
1942/4496 RAF265 ['INACTIVE'] [[0.09 0.91]]
1943/4496 AZ3146 ['INACTIVE'] [[0.06 0.94]]
1944/4496 rucaparib ['INACTIVE'] [[0.05 0.95]]
1945/4496 OT-R-antagonist-1 ['INACTIVE'] [[0.07 0.93]]
1946/4496 CDK1-5-inhibitor ['INACTIVE'] [[0.04 0.96]]
1947/4496 PF-4981517 ['INACTIVE'] [[0.06 0.94]]
1948/4496 cediranib ['INACTIVE'] [[0.04 0.96]]
1949/4496 uridine-triacetate ['INACTIVE'] [[0.07 0.93]]
1950/4496 SB-218795 ['INACTIVE'] [[0.03 0.97]]
1951/4496 senicapoc ['INACTIVE'] [[0.01 0.99]]
1952/4496 CL316243 ['INACTIVE'] [[0.03 0.97]]
1953/4496 chloroprocaine ['INACTIVE'] [[0.01 0.99]]
1954/4496 CPCCOEt ['INACTIVE'] [[0. 1.]]
1955/4496 ML-218 ['INACTIVE'] [[0.03 0.97]]
1956/4496 JNJ-1661010 ['INACTIVE'] [[0.04 0.96]]
1957/4496 BMS-299897 ['INACTIVE'] [[0.05 0.95]]
1958/4496 TG100-115 ['INACTIVE'] [[0.05 0.95]]
1959/4496 ursodeoxycholyltaurine ['INACTIVE'] [[0.04 0.96]]
1960/4496 IOX2 ['INACTIVE'] [[0.04 0.96]]
1961/4496 naloxone-benzoylhydrazone ['INACTIVE'] [[0.01 0.99]]
1962/4496 CGP-53353 ['INACTIVE'] [[0.12 0.88]]
1963/4496 TCS-HDAC6-20b ['INACTIVE'] [[0.03 0.97]]
1964/4496 tetradecylthioacetic-acid ['INACTIVE'] [[0.12 0.88]]
1965/4496 BAY-61-3606 ['INACTIVE'] [[0.08 0.92]]
1966/4496 clobetasone-butyrate ['INACTIVE'] [[0.03 0.97]]
1967/4496 benaxibine ['INACTIVE'] [[0. 1.]]
1968/4496 PD-158780 ['INACTIVE'] [[0.06 0.94]]
1969/4496 sonidegib ['INACTIVE'] [[0.05 0.95]]
1970/4496 RU-SKI-43 ['INACTIVE'] [[0.04 0.96]]
1971/4496 M8-B ['INACTIVE'] [[0.04 0.96]]
1972/4496 procysteine ['INACTIVE'] [[0.01 0.99]]
1973/4496 pralatrexate ['INACTIVE'] [[0.05 0.95]]
1974/4496 CP-724714 ['INACTIVE'] [[0.04 0.96]]
1975/4496 CEP-32496 ['INACTIVE'] [[0.01 0.99]]
1976/4496 TAK-593 ['INACTIVE'] [[0.11 0.89]]
1977/4496 JNJ-7777120 ['INACTIVE'] [[0.04 0.96]]
1978/4496 AMG-208 ['INACTIVE'] [[0.06 0.94]]
1979/4496 idalopirdine ['INACTIVE'] [[0.04 0.96]]
1980/4496 ATN-161 ['INACTIVE'] [[0.12 0.88]]
1981/4496 savolitinib ['INACTIVE'] [[0.11 0.89]]
1982/4496 PRT062070 ['INACTIVE'] [[0.04 0.96]]
1983/4496 palovarotene ['INACTIVE'] [[0.03 0.97]]
1984/4496 AMZ30 ['INACTIVE'] [[0.09 0.91]]
1985/4496 caricotamide ['INACTIVE'] [[0.04 0.96]]
1986/4496 RG4733 ['INACTIVE'] [[0.06 0.94]]
1987/4496 ebrotidine ['INACTIVE'] [[0.08 0.92]]
1988/4496 UC-112 ['INACTIVE'] [[0.09 0.91]]
1989/4496 nornicotine ['INACTIVE'] [[0.02 0.98]]
1990/4496 minodronic-acid ['INACTIVE'] [[0.03 0.97]]
1991/4496 K-247 ['INACTIVE'] [[0. 1.]]
1992/4496 hyperin ['INACTIVE'] [[0.02 0.98]]
1993/4496 VU-0364739 ['INACTIVE'] [[0.03 0.97]]
1994/4496 palmitoylcarnitine ['INACTIVE'] [[0.14 0.86]]
1995/4496 ertugliflozin ['INACTIVE'] [[0.06 0.94]]
1996/4496 flurofamide ['INACTIVE'] [[0.01 0.99]]
1997/4496 UNC669 ['INACTIVE'] [[0.1 0.9]]
1998/4496 siramesine ['INACTIVE'] [[0.03 0.97]]
1999/4496 ozolinone ['INACTIVE'] [[0.1 0.9]]
2000/4496 MG-132 ['INACTIVE'] [[0.04 0.96]]
2001/4496 SQ-22536 ['INACTIVE'] [[0.01 0.99]]
2002/4496 genz-123346 ['INACTIVE'] [[0.08 0.92]]
2003/4496 clobutinol ['INACTIVE'] [[0.02 0.98]]
2004/4496 betazole ['INACTIVE'] [[0. 1.]]
2005/4496 denotivir ['INACTIVE'] [[0.01 0.99]]
2006/4496 tariquidar ['INACTIVE'] [[0.04 0.96]]
2007/4496 tyrphostin-AG-1478 ['INACTIVE'] [[0.04 0.96]]
2008/4496 LCZ696 ['INACTIVE'] [[0.03 0.97]]
2009/4496 U-75799E ['INACTIVE'] [[0.16 0.84]]
2010/4496 UK-356618 ['INACTIVE'] [[0.07 0.93]]
2011/4496 NS-11021 ['INACTIVE'] [[0.04 0.96]]
2012/4496 8-bromo-cGMP ['INACTIVE'] [[0.07 0.93]]
2013/4496 7-methylxanthine ['INACTIVE'] [[0.01 0.99]]
2014/4496 rose-bengal-lactone ['INACTIVE'] [[0.05 0.95]]
2015/4496 BIMU-8 ['INACTIVE'] [[0.08 0.92]]
2016/4496 zibotentan ['INACTIVE'] [[0.05 0.95]]
2017/4496 GPi-688 ['INACTIVE'] [[0.13 0.87]]
2018/4496 DBeQ ['INACTIVE'] [[0.05 0.95]]
2019/4496 TG-101209 ['INACTIVE'] [[0.1 0.9]]
2020/4496 linsidomine ['INACTIVE'] [[0.01 0.99]]
2021/4496 IDO5L ['INACTIVE'] [[0.08 0.92]]
2022/4496 sodium-stibogluconate ['INACTIVE'] [[0.07 0.93]]
2023/4496 F351 ['INACTIVE'] [[0.03 0.97]]
2024/4496 STF-118804 ['INACTIVE'] [[0.07 0.93]]
2025/4496 acalabrutinib ['INACTIVE'] [[0.07 0.93]]
2026/4496 tiludronate ['INACTIVE'] [[0.03 0.97]]
2027/4496 dextroamphetamine ['INACTIVE'] [[0. 1.]]
2028/4496 iloperidone ['INACTIVE'] [[0.06 0.94]]
2029/4496 fagomine ['INACTIVE'] [[0.02 0.98]]
2030/4496 1-(2-chloro-5-methylphenoxy)-3-(isopropylamino)-2-propanol ['INACTIVE'] [[0.01 0.99]]
2031/4496 relcovaptan ['INACTIVE'] [[0.09 0.91]]
2032/4496 BMS-863233 ['INACTIVE'] [[0.02 0.98]]
2033/4496 ORE1001 ['INACTIVE'] [[0.05 0.95]]
2034/4496 GSK1070916 ['INACTIVE'] [[0.14 0.86]]
2035/4496 saclofen ['INACTIVE'] [[0. 1.]]
2036/4496 midafotel ['INACTIVE'] [[0.04 0.96]]
2037/4496 nomifensine ['INACTIVE'] [[0.05 0.95]]
2038/4496 WAY-629 ['INACTIVE'] [[0.01 0.99]]
2039/4496 solamargine ['INACTIVE'] [[0.05 0.95]]
2040/4496 C106 ['INACTIVE'] [[0.02 0.98]]
2041/4496 PETCM ['INACTIVE'] [[0. 1.]]
2042/4496 L-NAME ['INACTIVE'] [[0.01 0.99]]
2043/4496 SGI-1027 ['INACTIVE'] [[0.1 0.9]]
2044/4496 EPZ005687 ['INACTIVE'] [[0.1 0.9]]
2045/4496 kynurenic-acid ['INACTIVE'] [[0.03 0.97]]
2046/4496 ZSTK-474 ['INACTIVE'] [[0.06 0.94]]
2047/4496 SC-144 ['INACTIVE'] [[0.07 0.93]]
2048/4496 narasin ['INACTIVE'] [[0.08 0.92]]
2049/4496 SB-683698 ['INACTIVE'] [[0.09 0.91]]
2050/4496 rimonabant ['INACTIVE'] [[0.05 0.95]]
2051/4496 nimorazole ['INACTIVE'] [[0.04 0.96]]
2052/4496 pyroxamide ['INACTIVE'] [[0.05 0.95]]
2053/4496 ML-3403 ['INACTIVE'] [[0.09 0.91]]
2054/4496 thaliblastine ['INACTIVE'] [[0.07 0.93]]
2055/4496 NS-19504 ['INACTIVE'] [[0.02 0.98]]
2056/4496 NVP-231 ['INACTIVE'] [[0.04 0.96]]
2057/4496 CTEP ['INACTIVE'] [[0.03 0.97]]
2058/4496 UNC0646 ['INACTIVE'] [[0.21 0.79]]
2059/4496 lofepramine ['INACTIVE'] [[0.02 0.98]]
2060/4496 isoguvacine ['INACTIVE'] [[0.03 0.97]]
2061/4496 flutrimazole ['INACTIVE'] [[0.03 0.97]]
2062/4496 balicatib ['INACTIVE'] [[0.13 0.87]]
2063/4496 org-27569 ['INACTIVE'] [[0.01 0.99]]
2064/4496 PHT-427 ['INACTIVE'] [[0.04 0.96]]
2065/4496 CID-16020046 ['INACTIVE'] [[0. 1.]]
2066/4496 wiskostatin ['INACTIVE'] [[0.02 0.98]]
2067/4496 2,5-furandimethanol ['INACTIVE'] [[0.03 0.97]]
2068/4496 CCG-1423 ['INACTIVE'] [[0.03 0.97]]
2069/4496 piboserod ['INACTIVE'] [[0.08 0.92]]
2070/4496 10-deacetylbaccatin ['INACTIVE'] [[0.01 0.99]]
2071/4496 PRX-08066 ['INACTIVE'] [[0.04 0.96]]
2072/4496 efavirenz ['INACTIVE'] [[0. 1.]]
2073/4496 ripasudil ['INACTIVE'] [[0.04 0.96]]
2074/4496 bay-w-9798 ['INACTIVE'] [[0.03 0.97]]
2075/4496 clorprenaline ['INACTIVE'] [[0.02 0.98]]
2076/4496 GLPG0492-R-enantiomer ['INACTIVE'] [[0.01 0.99]]
2077/4496 trovirdine ['INACTIVE'] [[0.03 0.97]]
2078/4496 PT-2385 ['INACTIVE'] [[0.06 0.94]]
2079/4496 MC1568 ['INACTIVE'] [[0. 1.]]
2080/4496 methyllycaconitine ['INACTIVE'] [[0.08 0.92]]
2081/4496 GLPG0492 ['INACTIVE'] [[0.01 0.99]]
2082/4496 salinomycin ['INACTIVE'] [[0.09 0.91]]
2083/4496 erythrosine ['INACTIVE'] [[0. 1.]]
2084/4496 VUF11207 ['INACTIVE'] [[0.01 0.99]]
2085/4496 SB-271046 ['INACTIVE'] [[0.02 0.98]]
2086/4496 DS2-(806622) ['INACTIVE'] [[0.03 0.97]]
2087/4496 carmoterol ['INACTIVE'] [[0.07 0.93]]
2088/4496 LM-22A4 ['INACTIVE'] [[0. 1.]]
2089/4496 CCT128930 ['INACTIVE'] [[0.08 0.92]]
2090/4496 ENMD-2076 ['INACTIVE'] [[0.01 0.99]]
2091/4496 kuromanin ['INACTIVE'] [[0.01 0.99]]
2092/4496 zilpaterol ['INACTIVE'] [[0.02 0.98]]
2093/4496 CNX-2006 ['INACTIVE'] [[0.12 0.88]]
2094/4496 PRT4165 ['INACTIVE'] [[0.05 0.95]]
2095/4496 terameprocol ['INACTIVE'] [[0.04 0.96]]
2096/4496 picotamide ['INACTIVE'] [[0.01 0.99]]
2097/4496 dianhydrogalactitol ['INACTIVE'] [[0.01 0.99]]
2098/4496 bemesetron ['INACTIVE'] [[0.03 0.97]]
2099/4496 clodronic-acid ['INACTIVE'] [[0. 1.]]
2100/4496 raltitrexed ['INACTIVE'] [[0.06 0.94]]
2101/4496 indirubin-3-monoxime ['INACTIVE'] [[0.01 0.99]]
2102/4496 dibenzepine ['INACTIVE'] [[0.05 0.95]]
2103/4496 KPT-276 ['INACTIVE'] [[0.03 0.97]]
2104/4496 I-BET151 ['INACTIVE'] [[0.03 0.97]]
2105/4496 GSK2801 ['INACTIVE'] [[0.01 0.99]]
2106/4496 xaliproden ['INACTIVE'] [[0.07 0.93]]
2107/4496 halopemide ['INACTIVE'] [[0.06 0.94]]
2108/4496 A-967079 ['INACTIVE'] [[0. 1.]]
2109/4496 SP-100030 ['INACTIVE'] [[0.02 0.98]]
2110/4496 licofelone ['INACTIVE'] [[0.03 0.97]]
2111/4496 amdinocillin ['INACTIVE'] [[0.12 0.88]]
2112/4496 LY344864 ['INACTIVE'] [[0.05 0.95]]
2113/4496 grazoprevir ['INACTIVE'] [[0.13 0.87]]
2114/4496 Ro-04-5595 ['INACTIVE'] [[0.01 0.99]]
2115/4496 ibotenic-acid ['INACTIVE'] [[0. 1.]]
2116/4496 bay-60-7550 ['INACTIVE'] [[0.06 0.94]]
2117/4496 IPI-145 ['INACTIVE'] [[0.1 0.9]]
2118/4496 GW-803430 ['INACTIVE'] [[0.02 0.98]]
2119/4496 tofisopam ['INACTIVE'] [[0.01 0.99]]
2120/4496 AR-12 ['INACTIVE'] [[0.03 0.97]]
2121/4496 eluxadoline ['INACTIVE'] [[0.06 0.94]]
2122/4496 dodecyl-sulfate ['INACTIVE'] [[0.02 0.98]]
2123/4496 bropirimine ['INACTIVE'] [[0.01 0.99]]
2124/4496 G007-LK ['INACTIVE'] [[0.14 0.86]]
2125/4496 goserelin-acetate ['INACTIVE'] [[0.18 0.82]]
2126/4496 ocinaplon ['INACTIVE'] [[0.03 0.97]]
2127/4496 VS-4718 ['INACTIVE'] [[0.08 0.92]]
2128/4496 aprepitant ['INACTIVE'] [[0.05 0.95]]
2129/4496 LY364947 ['INACTIVE'] [[0.04 0.96]]
2130/4496 piroxicam ['INACTIVE'] [[0.02 0.98]]
2131/4496 JNJ-10191584 ['INACTIVE'] [[0.01 0.99]]
2132/4496 PD-123319 ['INACTIVE'] [[0.12 0.88]]
2133/4496 IKK-2-inhibitor-V ['INACTIVE'] [[0.01 0.99]]
2134/4496 Ro-28-1675 ['INACTIVE'] [[0.06 0.94]]
2135/4496 IKK-2-inhibitor ['INACTIVE'] [[0.04 0.96]]
2136/4496 ezutromid ['INACTIVE'] [[0.03 0.97]]
2137/4496 oxypurinol ['INACTIVE'] [[0.01 0.99]]
2138/4496 ML167 ['INACTIVE'] [[0.02 0.98]]
2139/4496 AS-1949490 ['INACTIVE'] [[0.03 0.97]]
2140/4496 falecalcitriol ['INACTIVE'] [[0.03 0.97]]
2141/4496 sodium-ascorbate ['INACTIVE'] [[0. 1.]]
2142/4496 K02288 ['INACTIVE'] [[0.09 0.91]]
2143/4496 NS-5806 ['INACTIVE'] [[0.02 0.98]]
2144/4496 EPPTB ['INACTIVE'] [[0.01 0.99]]
2145/4496 MEK1-2-inhibitor ['INACTIVE'] [[0.03 0.97]]
2146/4496 ZK-93426 ['INACTIVE'] [[0.01 0.99]]
2147/4496 tetrazol-5-yl-glycine-(RS) ['INACTIVE'] [[0. 1.]]
2148/4496 lerisetron ['INACTIVE'] [[0.04 0.96]]
2149/4496 BRL-52537 ['INACTIVE'] [[0.06 0.94]]
2150/4496 SB-222200 ['INACTIVE'] [[0.03 0.97]]
2151/4496 OTX015 ['INACTIVE'] [[0.06 0.94]]
2152/4496 CYM-5520 ['INACTIVE'] [[0.08 0.92]]
2153/4496 candicidin ['INACTIVE'] [[0.13 0.87]]
2154/4496 cyclopiazonic-acid ['INACTIVE'] [[0.03 0.97]]
2155/4496 SCH-51344 ['INACTIVE'] [[0. 1.]]
2156/4496 CGS-9896 ['INACTIVE'] [[0.07 0.93]]
2157/4496 cetaben ['INACTIVE'] [[0.08 0.92]]
2158/4496 SR-3576 ['INACTIVE'] [[0.06 0.94]]
2159/4496 rac-BHFF ['INACTIVE'] [[0.02 0.98]]
2160/4496 SCS ['INACTIVE'] [[0. 1.]]
2161/4496 BRD-7389 ['INACTIVE'] [[0.01 0.99]]
2162/4496 mibefradil ['INACTIVE'] [[0.02 0.98]]
2163/4496 CHIR-124 ['INACTIVE'] [[0.05 0.95]]
2164/4496 menadione-bisulfite ['INACTIVE'] [[0.01 0.99]]
2165/4496 travoprost ['INACTIVE'] [[0.06 0.94]]
2166/4496 talipexole ['INACTIVE'] [[0.01 0.99]]
2167/4496 lafutidine ['INACTIVE'] [[0.07 0.93]]
2168/4496 G-15 ['INACTIVE'] [[0.09 0.91]]
2169/4496 RQ-00203078 ['INACTIVE'] [[0.03 0.97]]
2170/4496 cridanimod ['INACTIVE'] [[0.01 0.99]]
2171/4496 FGIN-1-43 ['INACTIVE'] [[0.03 0.97]]
2172/4496 RLX ['INACTIVE'] [[0.01 0.99]]
2173/4496 1-deoxymannojirimycin ['INACTIVE'] [[0.03 0.97]]
2174/4496 CP-316819 ['INACTIVE'] [[0.03 0.97]]
2175/4496 GSK-3-inhibitor-IX ['INACTIVE'] [[0.02 0.98]]
2176/4496 fruquintinib ['INACTIVE'] [[0.03 0.97]]
2177/4496 NDT-9513727 ['INACTIVE'] [[0.05 0.95]]
2178/4496 TW-37 ['INACTIVE'] [[0.06 0.94]]
2179/4496 fludroxycortide ['INACTIVE'] [[0.01 0.99]]
2180/4496 Org-25543 ['INACTIVE'] [[0. 1.]]
2181/4496 danoprevir ['INACTIVE'] [[0.19 0.81]]
2182/4496 metharbital ['INACTIVE'] [[0.03 0.97]]
2183/4496 GNF-5 ['INACTIVE'] [[0.03 0.97]]
2184/4496 eliglustat ['INACTIVE'] [[0.08 0.92]]
2185/4496 10058-F4 ['INACTIVE'] [[0. 1.]]
2186/4496 CCMQ ['INACTIVE'] [[0. 1.]]
2187/4496 satraplatin ['INACTIVE'] [[0.05 0.95]]
2188/4496 PKI-179 ['INACTIVE'] [[0.07 0.93]]
2189/4496 tazemetostat ['INACTIVE'] [[0.03 0.97]]
2190/4496 ozanimod ['INACTIVE'] [[0.06 0.94]]
2191/4496 setiptiline ['INACTIVE'] [[0.03 0.97]]
2192/4496 SNG-1153 ['INACTIVE'] [[0.01 0.99]]
2193/4496 KS-176 ['INACTIVE'] [[0.06 0.94]]
2194/4496 PJ-34 ['INACTIVE'] [[0.05 0.95]]
2195/4496 antimony-potassium ['INACTIVE'] [[0.03 0.97]]
2196/4496 AZM-475271 ['INACTIVE'] [[0.04 0.96]]
2197/4496 BET-BAY-002 ['INACTIVE'] [[0.05 0.95]]
2198/4496 vinflunine ['INACTIVE'] [[0.01 0.99]]
2199/4496 NBQX ['INACTIVE'] [[0.14 0.86]]
2200/4496 cloranolol ['INACTIVE'] [[0.02 0.98]]
2201/4496 benzoquinonium-dibromide ['INACTIVE'] [[0.03 0.97]]
2202/4496 HLCL-61 ['INACTIVE'] [[0.05 0.95]]
2203/4496 MY-5445 ['INACTIVE'] [[0.01 0.99]]
2204/4496 TC-F-2 ['INACTIVE'] [[0.1 0.9]]
2205/4496 HA-966-(S)-(-) ['INACTIVE'] [[0.05 0.95]]
2206/4496 defactinib ['INACTIVE'] [[0.09 0.91]]
2207/4496 NXY-059 ['INACTIVE'] [[0.01 0.99]]
2208/4496 HA-966-(R)-(+) ['INACTIVE'] [[0.05 0.95]]
2209/4496 SRT2104 ['INACTIVE'] [[0.15 0.85]]
2210/4496 Z160 ['INACTIVE'] [[0.02 0.98]]
2211/4496 chlorfenson ['INACTIVE'] [[0.04 0.96]]
2212/4496 oligomycin-a ['INACTIVE'] [[0.14 0.86]]
2213/4496 pemetrexed ['INACTIVE'] [[0.11 0.89]]
2214/4496 pritelivir ['INACTIVE'] [[0.04 0.96]]
2215/4496 TMC-353121 ['INACTIVE'] [[0.09 0.91]]
2216/4496 TC-FPR-43 ['INACTIVE'] [[0.02 0.98]]
2217/4496 ivacaftor ['INACTIVE'] [[0.03 0.97]]
2218/4496 MPEP ['INACTIVE'] [[0.07 0.93]]
2219/4496 ML-193 ['INACTIVE'] [[0.12 0.88]]
2220/4496 AS-2034178 ['INACTIVE'] [[0.07 0.93]]
2221/4496 repsox ['INACTIVE'] [[0.09 0.91]]
2222/4496 mocetinostat ['INACTIVE'] [[0.04 0.96]]
2223/4496 AZD7687 ['INACTIVE'] [[0.04 0.96]]
2224/4496 ryuvidine ['INACTIVE'] [[0.02 0.98]]
2225/4496 meloxicam ['INACTIVE'] [[0.03 0.97]]
2226/4496 methoxyflurane ['INACTIVE'] [[0.04 0.96]]
2227/4496 atracurium ['INACTIVE'] [[0.03 0.97]]
2228/4496 LY2811376 ['INACTIVE'] [[0.05 0.95]]
2229/4496 priralfinamide ['INACTIVE'] [[0. 1.]]
2230/4496 madrasin ['INACTIVE'] [[0.09 0.91]]
2231/4496 etofenamate ['INACTIVE'] [[0.03 0.97]]
2232/4496 trebenzomine ['INACTIVE'] [[0.01 0.99]]
2233/4496 GF109203X ['INACTIVE'] [[0.06 0.94]]
2234/4496 ethimizol ['INACTIVE'] [[0.06 0.94]]
2235/4496 ebselen ['INACTIVE'] [[0.02 0.98]]
2236/4496 IBC-293 ['INACTIVE'] [[0.02 0.98]]
2237/4496 betrixaban ['INACTIVE'] [[0.04 0.96]]
2238/4496 fenoxaprop-p-ethyl ['INACTIVE'] [[0.04 0.96]]
2239/4496 camostat-mesilate ['INACTIVE'] [[0.03 0.97]]
2240/4496 cisatracurium ['INACTIVE'] [[0.03 0.97]]
2241/4496 ZCL-278 ['INACTIVE'] [[0.06 0.94]]
2242/4496 dabigatran ['INACTIVE'] [[0.08 0.92]]
2243/4496 LY231617 ['INACTIVE'] [[0.01 0.99]]
2244/4496 C-021 ['INACTIVE'] [[0.13 0.87]]
2245/4496 VU0357121 ['INACTIVE'] [[0.01 0.99]]
2246/4496 isosorbide ['INACTIVE'] [[0.03 0.97]]
2247/4496 oleuropein ['INACTIVE'] [[0.05 0.95]]
2248/4496 TG-100801 ['INACTIVE'] [[0.1 0.9]]
2249/4496 arecaidine-but-2-ynyl-ester ['INACTIVE'] [[0. 1.]]
2250/4496 HC-067047 ['INACTIVE'] [[0.02 0.98]]
2251/4496 U-0126 ['INACTIVE'] [[0.08 0.92]]
2252/4496 cobicistat ['INACTIVE'] [[0.16 0.84]]
2253/4496 tiplaxtinin ['INACTIVE'] [[0.05 0.95]]
2254/4496 talopram ['INACTIVE'] [[0.01 0.99]]
2255/4496 T-5224 ['INACTIVE'] [[0.07 0.93]]
2256/4496 asenapine ['INACTIVE'] [[0.09 0.91]]
2257/4496 PNU-74654 ['INACTIVE'] [[0.03 0.97]]
2258/4496 chlorphensin-carbamate ['INACTIVE'] [[0.01 0.99]]
2259/4496 GSK269962 ['INACTIVE'] [[0.07 0.93]]
2260/4496 RS-0481 ['INACTIVE'] [[0.03 0.97]]
2261/4496 WYE-687 ['INACTIVE'] [[0.13 0.87]]
2262/4496 BW-180C ['INACTIVE'] [[0.07 0.93]]
2263/4496 otilonium ['INACTIVE'] [[0.07 0.93]]
2264/4496 dynole-34-2 ['INACTIVE'] [[0.09 0.91]]
2265/4496 CGP-78608 ['INACTIVE'] [[0.01 0.99]]
2266/4496 ML324 ['INACTIVE'] [[0.09 0.91]]
2267/4496 edelfosine ['INACTIVE'] [[0.2 0.8]]
2268/4496 CPP ['INACTIVE'] [[0.01 0.99]]
2269/4496 PD-118057 ['INACTIVE'] [[0.04 0.96]]
2270/4496 indiplon ['INACTIVE'] [[0.04 0.96]]
2271/4496 L-161982 ['INACTIVE'] [[0.06 0.94]]
2272/4496 nicaraven ['INACTIVE'] [[0. 1.]]
2273/4496 R-1485 ['INACTIVE'] [[0.12 0.88]]
2274/4496 EO-1428 ['INACTIVE'] [[0.01 0.99]]
2275/4496 Ro-19-4605 ['INACTIVE'] [[0.04 0.96]]
2276/4496 eprazinone ['INACTIVE'] [[0.04 0.96]]
2277/4496 amodiaquine ['INACTIVE'] [[0.01 0.99]]
2278/4496 morinidazole ['INACTIVE'] [[0.03 0.97]]
2279/4496 MK-2206 ['INACTIVE'] [[0.06 0.94]]
2280/4496 chlorothymol ['INACTIVE'] [[0. 1.]]
2281/4496 fenticonazole ['INACTIVE'] [[0.07 0.93]]
2282/4496 neo-gilurytmal ['INACTIVE'] [[0.02 0.98]]
2283/4496 triflusal ['INACTIVE'] [[0.02 0.98]]
2284/4496 UNC0638 ['INACTIVE'] [[0.13 0.87]]
2285/4496 bepotastine ['INACTIVE'] [[0.01 0.99]]
2286/4496 LY266097 ['INACTIVE'] [[0.07 0.93]]
2287/4496 idelalisib ['INACTIVE'] [[0.08 0.92]]
2288/4496 NSC-9965 ['INACTIVE'] [[0.06 0.94]]
2289/4496 amiflamine ['INACTIVE'] [[0.05 0.95]]
2290/4496 avasimibe ['INACTIVE'] [[0.06 0.94]]
2291/4496 CP-91149 ['INACTIVE'] [[0.06 0.94]]
2292/4496 N-hydroxynicotinamide ['INACTIVE'] [[0. 1.]]
2293/4496 XMD8-92 ['INACTIVE'] [[0.12 0.88]]
2294/4496 LY2603618 ['INACTIVE'] [[0.05 0.95]]
2295/4496 NS-3861 ['INACTIVE'] [[0.02 0.98]]
2296/4496 quazinone ['INACTIVE'] [[0.04 0.96]]
2297/4496 didanosine ['INACTIVE'] [[0.04 0.96]]
2298/4496 BW-373U86 ['INACTIVE'] [[0.06 0.94]]
2299/4496 VU0238429 ['INACTIVE'] [[0.03 0.97]]
2300/4496 napirimus ['INACTIVE'] [[0.02 0.98]]
2301/4496 PF-04447943 ['INACTIVE'] [[0.14 0.86]]
2302/4496 Ro-61-8048 ['INACTIVE'] [[0.03 0.97]]
2303/4496 CINPA-1 ['INACTIVE'] [[0.01 0.99]]
2304/4496 etravirine ['INACTIVE'] [[0.1 0.9]]
2305/4496 taltirelin ['INACTIVE'] [[0.04 0.96]]
2306/4496 vinorelbine ['INACTIVE'] [[0. 1.]]
2307/4496 bupranolol ['INACTIVE'] [[0.03 0.97]]
2308/4496 PTC-209 ['INACTIVE'] [[0.06 0.94]]
2309/4496 tymazoline ['INACTIVE'] [[0.01 0.99]]
2310/4496 CCG-63802 ['INACTIVE'] [[0.07 0.93]]
2311/4496 MDL-27531 ['INACTIVE'] [[0. 1.]]
2312/4496 PHTPP ['INACTIVE'] [[0.06 0.94]]
2313/4496 tecalcet ['INACTIVE'] [[0.09 0.91]]
2314/4496 amlodipine ['INACTIVE'] [[0.06 0.94]]
2315/4496 STAT3-inhibitor-VI ['INACTIVE'] [[0.02 0.98]]
2316/4496 clobazam ['INACTIVE'] [[0.02 0.98]]
2317/4496 L-mimosine ['INACTIVE'] [[0.03 0.97]]
2318/4496 nemorubicin ['INACTIVE'] [[0.12 0.88]]
2319/4496 GW-405833 ['INACTIVE'] [[0.11 0.89]]
2320/4496 compound-401 ['INACTIVE'] [[0.01 0.99]]
2321/4496 homochlorcyclizine ['INACTIVE'] [[0.01 0.99]]
2322/4496 homoharringtonine ['INACTIVE'] [[0.06 0.94]]
2323/4496 PFK-158 ['INACTIVE'] [[0.01 0.99]]
2324/4496 NFKB-activation-inhibitor-II ['INACTIVE'] [[0.03 0.97]]
2325/4496 GNF-5837 ['INACTIVE'] [[0.04 0.96]]
2326/4496 zopolrestat ['INACTIVE'] [[0.03 0.97]]
2327/4496 TGX-221 ['INACTIVE'] [[0.05 0.95]]
2328/4496 MRS1334 ['INACTIVE'] [[0.06 0.94]]
2329/4496 sumanirole ['INACTIVE'] [[0.01 0.99]]
2330/4496 irosustat ['INACTIVE'] [[0.06 0.94]]
2331/4496 flavin-adenine-dinucleotide ['INACTIVE'] [[0.08 0.92]]
2332/4496 methylnaltrexone ['INACTIVE'] [[0.03 0.97]]
2333/4496 linifanib ['INACTIVE'] [[0.03 0.97]]
2334/4496 4-hydroxy-phenazone ['INACTIVE'] [[0. 1.]]
2335/4496 lauric-diethanolamide ['INACTIVE'] [[0.03 0.97]]
2336/4496 EMD-386088 ['INACTIVE'] [[0.03 0.97]]
2337/4496 tolcapone ['INACTIVE'] [[0.03 0.97]]
2338/4496 xenalipin ['INACTIVE'] [[0.01 0.99]]
2339/4496 ATPA ['INACTIVE'] [[0. 1.]]
2340/4496 azelnidipine ['INACTIVE'] [[0.07 0.93]]
2341/4496 ZK-164015 ['INACTIVE'] [[0.05 0.95]]
2342/4496 selinexor ['INACTIVE'] [[0.05 0.95]]
2343/4496 pikamilone ['INACTIVE'] [[0. 1.]]
2344/4496 pimonidazole ['INACTIVE'] [[0.02 0.98]]
2345/4496 LTA ['INACTIVE'] [[0.02 0.98]]
2346/4496 RGFP966 ['INACTIVE'] [[0.05 0.95]]
2347/4496 retigabine ['INACTIVE'] [[0. 1.]]
2348/4496 letermovir ['INACTIVE'] [[0.11 0.89]]
2349/4496 oleoylethanolamide ['INACTIVE'] [[0.03 0.97]]
2350/4496 CGP-13501 ['INACTIVE'] [[0.02 0.98]]
2351/4496 harringtonine ['INACTIVE'] [[0.06 0.94]]
2352/4496 omega-(4-Iodophenyl)pentadecanoic acid ['INACTIVE'] [[0.09 0.91]]
2353/4496 cebranopadol ['INACTIVE'] [[0.03 0.97]]
2354/4496 tryptanthrin ['INACTIVE'] [[0.04 0.96]]
2355/4496 tribenoside ['INACTIVE'] [[0.04 0.96]]
2356/4496 SR-140333 ['INACTIVE'] [[0.1 0.9]]
2357/4496 FPL-12495 ['INACTIVE'] [[0. 1.]]
2358/4496 compound-58112 ['INACTIVE'] [[0.01 0.99]]
2359/4496 otamixaban ['INACTIVE'] [[0.11 0.89]]
2360/4496 A-317491 ['INACTIVE'] [[0.07 0.93]]
2361/4496 isopropyl-palmitate ['INACTIVE'] [[0.04 0.96]]
2362/4496 maropitant ['INACTIVE'] [[0.05 0.95]]
2363/4496 TC-I-2000 ['INACTIVE'] [[0.02 0.98]]
2364/4496 AP1903 ['INACTIVE'] [[0.12 0.88]]
2365/4496 RN-1 ['INACTIVE'] [[0.01 0.99]]
2366/4496 IEM1460 ['INACTIVE'] [[0.11 0.89]]
2367/4496 AZD6738 ['INACTIVE'] [[0.07 0.93]]
2368/4496 crenolanib ['INACTIVE'] [[0.15 0.85]]
2369/4496 ochromycinone ['INACTIVE'] [[0.01 0.99]]
2370/4496 lobenzarit ['INACTIVE'] [[0.01 0.99]]
2371/4496 LY303511 ['INACTIVE'] [[0.04 0.96]]
2372/4496 TG-100572 ['INACTIVE'] [[0.06 0.94]]
2373/4496 4SC-202 ['INACTIVE'] [[0.11 0.89]]
2374/4496 arofylline ['INACTIVE'] [[0.02 0.98]]
2375/4496 EW-7197 ['INACTIVE'] [[0.1 0.9]]
2376/4496 alclofenac ['INACTIVE'] [[0.02 0.98]]
2377/4496 vercirnon ['INACTIVE'] [[0.09 0.91]]
2378/4496 AS-1269574 ['INACTIVE'] [[0.02 0.98]]
2379/4496 zatebradine ['INACTIVE'] [[0.01 0.99]]
2380/4496 myricitrin ['INACTIVE'] [[0.01 0.99]]
2381/4496 vemurafenib ['INACTIVE'] [[0.05 0.95]]
2382/4496 AZD4547 ['INACTIVE'] [[0.09 0.91]]
2383/4496 gavestinel ['INACTIVE'] [[0.06 0.94]]
2384/4496 prolylleucylglycinamide ['INACTIVE'] [[0.02 0.98]]
2385/4496 ISX-9 ['INACTIVE'] [[0.05 0.95]]
2386/4496 BIX-02188 ['INACTIVE'] [[0.05 0.95]]
2387/4496 PHCCC ['INACTIVE'] [[0.01 0.99]]
2388/4496 loteprednol ['INACTIVE'] [[0. 1.]]
2389/4496 clindamycin-phosphate ['INACTIVE'] [[0.01 0.99]]
2390/4496 ML347 ['INACTIVE'] [[0.06 0.94]]
2391/4496 fluvastatin ['INACTIVE'] [[0.02 0.98]]
2392/4496 TAK-901 ['INACTIVE'] [[0.05 0.95]]
2393/4496 tenovin-6 ['INACTIVE'] [[0.04 0.96]]
2394/4496 lorcaserin ['INACTIVE'] [[0.02 0.98]]
2395/4496 U-0124 ['INACTIVE'] [[0.06 0.94]]
2396/4496 verbascoside ['INACTIVE'] [[0.06 0.94]]
2397/4496 PX-478 ['INACTIVE'] [[0.03 0.97]]
2398/4496 IOWH032 ['INACTIVE'] [[0.02 0.98]]
2399/4496 lappaconite ['INACTIVE'] [[0.06 0.94]]
2400/4496 NNC-55-0396 ['INACTIVE'] [[0.02 0.98]]
2401/4496 melphalan-n-oxide ['INACTIVE'] [[0.03 0.97]]
2402/4496 butaclamol ['INACTIVE'] [[0.03 0.97]]
2403/4496 AGP-103 ['INACTIVE'] [[0.04 0.96]]
2404/4496 NGD-94-1 ['INACTIVE'] [[0.14 0.86]]
2405/4496 AZD1446 ['INACTIVE'] [[0.07 0.93]]
2406/4496 SNAP-5089 ['INACTIVE'] [[0.05 0.95]]
2407/4496 thiazovivin ['INACTIVE'] [[0.06 0.94]]
2408/4496 safinamide ['INACTIVE'] [[0. 1.]]
2409/4496 BMS-345541 ['INACTIVE'] [[0.02 0.98]]
2410/4496 dioscin ['INACTIVE'] [[0.04 0.96]]
2411/4496 niraparib ['INACTIVE'] [[0.13 0.87]]
2412/4496 fosinopril ['INACTIVE'] [[0.01 0.99]]
2413/4496 tiotropium ['INACTIVE'] [[0.06 0.94]]
2414/4496 IU1 ['INACTIVE'] [[0.03 0.97]]
2415/4496 FK-962 ['INACTIVE'] [[0.01 0.99]]
2416/4496 molsidomine ['INACTIVE'] [[0. 1.]]
2417/4496 gastrodin ['INACTIVE'] [[0. 1.]]
2418/4496 ipsapirone ['INACTIVE'] [[0.16 0.84]]
2419/4496 ISRIB ['INACTIVE'] [[0.07 0.93]]
2420/4496 gallopamil ['INACTIVE'] [[0.05 0.95]]
2421/4496 RKI-1447 ['INACTIVE'] [[0.06 0.94]]
2422/4496 acalisib ['INACTIVE'] [[0.14 0.86]]
2423/4496 toloxatone ['INACTIVE'] [[0.01 0.99]]
2424/4496 SNC-80 ['INACTIVE'] [[0.06 0.94]]
2425/4496 mebrofenin ['INACTIVE'] [[0.01 0.99]]
2426/4496 BMS-536924 ['INACTIVE'] [[0.12 0.88]]
2427/4496 benzofenac ['INACTIVE'] [[0.01 0.99]]
2428/4496 DU-728 ['INACTIVE'] [[0.11 0.89]]
2429/4496 BI-847325 ['INACTIVE'] [[0.07 0.93]]
2430/4496 UNC0737 ['INACTIVE'] [[0.13 0.87]]
2431/4496 capsazepine ['INACTIVE'] [[0.04 0.96]]
2432/4496 1,3-dipropyl-8-phenylxanthine ['INACTIVE'] [[0.04 0.96]]
2433/4496 meisoindigo ['INACTIVE'] [[0.02 0.98]]
2434/4496 colfosceril-palmitate ['INACTIVE'] [[0.14 0.86]]
2435/4496 drotaverine ['INACTIVE'] [[0.03 0.97]]
2436/4496 saroglitazar ['INACTIVE'] [[0.04 0.96]]
2437/4496 MRS1845 ['INACTIVE'] [[0.1 0.9]]
2438/4496 NS-3623 ['INACTIVE'] [[0.05 0.95]]
2439/4496 SB-2343 ['INACTIVE'] [[0.1 0.9]]
2440/4496 GGTI-298 ['INACTIVE'] [[0.03 0.97]]
2441/4496 guacetisal ['INACTIVE'] [[0.03 0.97]]
2442/4496 ICI-118,551 ['INACTIVE'] [[0.01 0.99]]
2443/4496 ICI-162846 ['INACTIVE'] [[0.09 0.91]]
2444/4496 O6-Benzylguanine ['INACTIVE'] [[0.03 0.97]]
2445/4496 docusate ['INACTIVE'] [[0.02 0.98]]
2446/4496 WYE-125132 ['INACTIVE'] [[0.14 0.86]]
2447/4496 4-phenyl-1,2,3,4-tetrahydroisoquinoline ['INACTIVE'] [[0.01 0.99]]
2448/4496 GR-103691 ['INACTIVE'] [[0.04 0.96]]
2449/4496 fanetizole ['INACTIVE'] [[0.02 0.98]]
2450/4496 ZK-200775 ['INACTIVE'] [[0.07 0.93]]
2451/4496 elinogrel ['INACTIVE'] [[0.06 0.94]]
2452/4496 SC-560 ['INACTIVE'] [[0.01 0.99]]
2453/4496 AZD9496 ['INACTIVE'] [[0.09 0.91]]
2454/4496 CTS-1027 ['INACTIVE'] [[0.05 0.95]]
2455/4496 alclometasone-dipropionate ['INACTIVE'] [[0.02 0.98]]
2456/4496 3-MATIDA ['INACTIVE'] [[0.02 0.98]]
2457/4496 pizotifen ['INACTIVE'] [[0.01 0.99]]
2458/4496 K-858 ['INACTIVE'] [[0.01 0.99]]
2459/4496 SR-33805 ['INACTIVE'] [[0.04 0.96]]
2460/4496 nesbuvir ['INACTIVE'] [[0.08 0.92]]
2461/4496 LY2365109 ['INACTIVE'] [[0.06 0.94]]
2462/4496 CPI-203 ['INACTIVE'] [[0.07 0.93]]
2463/4496 tamibarotene ['INACTIVE'] [[0.01 0.99]]
2464/4496 istaroxime ['INACTIVE'] [[0.08 0.92]]
2465/4496 sevelamer ['INACTIVE'] [[0.01 0.99]]
2466/4496 SR-3306 ['INACTIVE'] [[0.05 0.95]]
2467/4496 L-655,708 ['INACTIVE'] [[0.04 0.96]]
2468/4496 velneperit ['INACTIVE'] [[0.03 0.97]]
2469/4496 WP1130 ['INACTIVE'] [[0.03 0.97]]
2470/4496 indium(iii)-isopropoxide ['INACTIVE'] [[0. 1.]]
2471/4496 JQ1-(+) ['INACTIVE'] [[0.1 0.9]]
2472/4496 Y-134 ['INACTIVE'] [[0.05 0.95]]
2473/4496 QX-222 ['INACTIVE'] [[0.01 0.99]]
2474/4496 amibegron ['INACTIVE'] [[0.01 0.99]]
2475/4496 3'-fluorobenzylspiperone ['INACTIVE'] [[0.05 0.95]]
2476/4496 S26948 ['INACTIVE'] [[0.03 0.97]]
2477/4496 WIN-64338 ['INACTIVE'] [[0.09 0.91]]
2478/4496 terciprazine ['INACTIVE'] [[0.04 0.96]]
2479/4496 paeoniflorin ['INACTIVE'] [[0.04 0.96]]
2480/4496 mirodenafil ['INACTIVE'] [[0.18 0.82]]
2481/4496 PNU-22394 ['INACTIVE'] [[0.02 0.98]]
2482/4496 procaterol ['INACTIVE'] [[0.02 0.98]]
2483/4496 RS-504393 ['INACTIVE'] [[0.04 0.96]]
2484/4496 pimavanserin ['INACTIVE'] [[0.03 0.97]]
2485/4496 IKK-16 ['INACTIVE'] [[0.07 0.93]]
2486/4496 alverine ['INACTIVE'] [[0.02 0.98]]
2487/4496 LDN-27219 ['INACTIVE'] [[0.04 0.96]]
2488/4496 KML29 ['INACTIVE'] [[0.02 0.98]]
2489/4496 LY2109761 ['INACTIVE'] [[0.1 0.9]]
2490/4496 dalcetrapib ['INACTIVE'] [[0.02 0.98]]
2491/4496 butabindide ['INACTIVE'] [[0.01 0.99]]
2492/4496 naloxegol ['INACTIVE'] [[0.05 0.95]]
2493/4496 SDZ-SER-082 ['INACTIVE'] [[0.07 0.93]]
2494/4496 MLN1117 ['INACTIVE'] [[0.11 0.89]]
2495/4496 felbamate ['INACTIVE'] [[0.02 0.98]]
2496/4496 PF-03882845 ['INACTIVE'] [[0.06 0.94]]
2497/4496 CaMKII-IN-1 ['INACTIVE'] [[0.03 0.97]]
2498/4496 chlorindanol ['INACTIVE'] [[0.02 0.98]]
2499/4496 WHI-P154 ['INACTIVE'] [[0.02 0.98]]
2500/4496 meglitinide ['INACTIVE'] [[0. 1.]]
2501/4496 pentobarbital ['INACTIVE'] [[0. 1.]]
2502/4496 isoxicam ['INACTIVE'] [[0.01 0.99]]
2503/4496 A740003 ['INACTIVE'] [[0.08 0.92]]
2504/4496 avagacestat ['INACTIVE'] [[0.07 0.93]]
2505/4496 XE-991 ['INACTIVE'] [[0.04 0.96]]
2506/4496 N-acetyl-D-glucosamine ['INACTIVE'] [[0.06 0.94]]
2507/4496 lodoxamide ['INACTIVE'] [[0.01 0.99]]
2508/4496 pirodavir ['INACTIVE'] [[0.02 0.98]]
2509/4496 BIX-02189 ['INACTIVE'] [[0.09 0.91]]
2510/4496 tyrphostin-AG-825 ['INACTIVE'] [[0.08 0.92]]
2511/4496 SN-2 ['INACTIVE'] [[0.04 0.96]]
2512/4496 tecastemizole ['INACTIVE'] [[0.02 0.98]]
2513/4496 FK-866 ['INACTIVE'] [[0. 1.]]
2514/4496 triclocarban ['INACTIVE'] [[0.09 0.91]]
2515/4496 TC-G-1008 ['INACTIVE'] [[0.04 0.96]]
2516/4496 quinelorane ['INACTIVE'] [[0.02 0.98]]
2517/4496 STF-62247 ['INACTIVE'] [[0.01 0.99]]
2518/4496 ONO-AE3-208 ['INACTIVE'] [[0.05 0.95]]
2519/4496 AM-580 ['INACTIVE'] [[0.01 0.99]]
2520/4496 doramapimod ['INACTIVE'] [[0.14 0.86]]
2521/4496 autotaxin-modulator-1 ['INACTIVE'] [[0.15 0.85]]
2522/4496 8-bromo-cAMP ['INACTIVE'] [[0.04 0.96]]
2523/4496 tecovirimat ['INACTIVE'] [[0.04 0.96]]
2524/4496 pelanserin ['INACTIVE'] [[0.02 0.98]]
2525/4496 AM-251 ['INACTIVE'] [[0.06 0.94]]
2526/4496 alcuronium ['INACTIVE'] [[0.05 0.95]]
2527/4496 ombitasvir ['INACTIVE'] [[0.11 0.89]]
2528/4496 CIQ ['INACTIVE'] [[0.01 0.99]]
2529/4496 RITA ['INACTIVE'] [[0.06 0.94]]
2530/4496 SC-51089 ['INACTIVE'] [[0.03 0.97]]
2531/4496 2-iminobiotin ['INACTIVE'] [[0. 1.]]
2532/4496 rosuvastatin ['INACTIVE'] [[0.02 0.98]]
2533/4496 dizocilpine-(-) ['INACTIVE'] [[0.04 0.96]]
2534/4496 PAOPA ['INACTIVE'] [[0.07 0.93]]
2535/4496 flecainide ['INACTIVE'] [[0.01 0.99]]
2536/4496 dizocilpine-(+) ['INACTIVE'] [[0.04 0.96]]
2537/4496 5-HMF ['INACTIVE'] [[0. 1.]]
2538/4496 PSB-603 ['INACTIVE'] [[0.14 0.86]]
2539/4496 AMG-232 ['INACTIVE'] [[0.06 0.94]]
2540/4496 THZ2 ['INACTIVE'] [[0.11 0.89]]
2541/4496 lycopene ['INACTIVE'] [[0.01 0.99]]
2542/4496 delavirdine ['INACTIVE'] [[0.1 0.9]]
2543/4496 ALX-5407 ['INACTIVE'] [[0.01 0.99]]
2544/4496 1-((Z)-3-Chloroallyl)-1,3,5,7-tetraazaadamantan-1-ium ['INACTIVE'] [[0. 1.]]
2545/4496 REV-5901 ['INACTIVE'] [[0.03 0.97]]
2546/4496 TU-2100 ['INACTIVE'] [[0.05 0.95]]
2547/4496 PRIMA1 ['INACTIVE'] [[0.05 0.95]]
2548/4496 phenindamine ['INACTIVE'] [[0.04 0.96]]
2549/4496 SR-2640 ['INACTIVE'] [[0.05 0.95]]
2550/4496 BRL-50481 ['INACTIVE'] [[0. 1.]]
2551/4496 sal003 ['INACTIVE'] [[0.07 0.93]]
2552/4496 navitoclax ['INACTIVE'] [[0.13 0.87]]
2553/4496 CYC116 ['INACTIVE'] [[0.07 0.93]]
2554/4496 chuanxiongzine ['INACTIVE'] [[0. 1.]]
2555/4496 SKF-83566 ['INACTIVE'] [[0.02 0.98]]
2556/4496 droxinostat ['INACTIVE'] [[0.03 0.97]]
2557/4496 anagliptin ['INACTIVE'] [[0.12 0.88]]
2558/4496 azalomycin-b ['INACTIVE'] [[0.13 0.87]]
2559/4496 GSK650394 ['INACTIVE'] [[0.06 0.94]]
2560/4496 NVP-ADW742 ['INACTIVE'] [[0.11 0.89]]
2561/4496 K-Ras(G12C)-inhibitor-6 ['INACTIVE'] [[0.03 0.97]]
2562/4496 NU-2058 ['INACTIVE'] [[0.03 0.97]]
2563/4496 NIDA-41020 ['INACTIVE'] [[0.03 0.97]]
2564/4496 gadoterate-meglumine ['INACTIVE'] [[0.07 0.93]]
2565/4496 AC-7954-(+/-) ['INACTIVE'] [[0.06 0.94]]
2566/4496 sari-59-801 ['INACTIVE'] [[0.02 0.98]]
2567/4496 cleviprex ['INACTIVE'] [[0.03 0.97]]
2568/4496 homoveratrylamine ['INACTIVE'] [[0. 1.]]
2569/4496 CP-673451 ['INACTIVE'] [[0.13 0.87]]
2570/4496 TC-E-5006 ['INACTIVE'] [[0.07 0.93]]
2571/4496 ufenamate ['INACTIVE'] [[0.02 0.98]]
2572/4496 Y-27632 ['INACTIVE'] [[0.01 0.99]]
2573/4496 broxaterol ['INACTIVE'] [[0.05 0.95]]
2574/4496 CFM-1571 ['INACTIVE'] [[0.03 0.97]]
2575/4496 pranidipine ['INACTIVE'] [[0.01 0.99]]
2576/4496 monensin ['INACTIVE'] [[0.03 0.97]]
2577/4496 mecarbinate ['INACTIVE'] [[0.05 0.95]]
2578/4496 macelignan ['INACTIVE'] [[0.01 0.99]]
2579/4496 pifithrin-alpha ['INACTIVE'] [[0.01 0.99]]
2580/4496 OMDM-2 ['INACTIVE'] [[0.03 0.97]]
2581/4496 3-CPMT ['INACTIVE'] [[0.05 0.95]]
2582/4496 iocetamic-acid ['INACTIVE'] [[0.01 0.99]]
2583/4496 BMS-649 ['INACTIVE'] [[0.04 0.96]]
2584/4496 ANA-12 ['INACTIVE'] [[0.05 0.95]]
2585/4496 buparvaquone ['INACTIVE'] [[0.03 0.97]]
2586/4496 pasireotide ['INACTIVE'] [[0.26 0.74]]
2587/4496 balsalazide ['INACTIVE'] [[0.01 0.99]]
2588/4496 C11-Acetate ['INACTIVE'] [[0. 1.]]
2589/4496 vinburnine ['INACTIVE'] [[0.02 0.98]]
2590/4496 mometasone ['INACTIVE'] [[0.01 0.99]]
2591/4496 WP1066 ['INACTIVE'] [[0.04 0.96]]
2592/4496 beclomethasone ['INACTIVE'] [[0. 1.]]
2593/4496 S-Nitrosoglutathione ['INACTIVE'] [[0.15 0.85]]
2594/4496 tetrahydropapaverine ['INACTIVE'] [[0.02 0.98]]
2595/4496 cyclocreatine ['INACTIVE'] [[0. 1.]]
2596/4496 BP-897 ['INACTIVE'] [[0.02 0.98]]
2597/4496 odanacatib ['INACTIVE'] [[0.13 0.87]]
2598/4496 darusentan ['INACTIVE'] [[0.03 0.97]]
2599/4496 PCI-29732 ['INACTIVE'] [[0.05 0.95]]
2600/4496 YC-1 ['INACTIVE'] [[0.04 0.96]]
2601/4496 YM-90709 ['INACTIVE'] [[0.02 0.98]]
2602/4496 BIO-1211 ['INACTIVE'] [[0.13 0.87]]
2603/4496 GW-1929 ['INACTIVE'] [[0.04 0.96]]
2604/4496 GANT-58 ['INACTIVE'] [[0.03 0.97]]
2605/4496 fosinoprilat ['INACTIVE'] [[0.02 0.98]]
2606/4496 THZ1 ['INACTIVE'] [[0.11 0.89]]
2607/4496 4-(4-fluorobenzoyl)-1-(4-phenylbutyl)-piperidine ['INACTIVE'] [[0.02 0.98]]
2608/4496 aloe-emodin ['INACTIVE'] [[0.01 0.99]]
2609/4496 acesulfame-potassium ['INACTIVE'] [[0. 1.]]
2610/4496 PHA-568487 ['INACTIVE'] [[0.04 0.96]]
2611/4496 A-412997 ['INACTIVE'] [[0.03 0.97]]
2612/4496 thiothixene ['INACTIVE'] [[0.03 0.97]]
2613/4496 PNU-282987 ['INACTIVE'] [[0.01 0.99]]
2614/4496 rivanicline ['INACTIVE'] [[0. 1.]]
2615/4496 7,8,9,10-tetrahydroazepino[2,1-b]quinazolin-12(6H)-one ['INACTIVE'] [[0. 1.]]
2616/4496 PF-05190457 ['INACTIVE'] [[0.05 0.95]]
2617/4496 fananserin ['INACTIVE'] [[0.08 0.92]]
2618/4496 talarozole ['INACTIVE'] [[0.05 0.95]]
2619/4496 lisuride ['INACTIVE'] [[0.05 0.95]]
2620/4496 BMS-986020 ['INACTIVE'] [[0.01 0.99]]
2621/4496 R306465 ['INACTIVE'] [[0.08 0.92]]
2622/4496 tetramethylthiuram-monosulfide ['INACTIVE'] [[0.01 0.99]]
2623/4496 piketoprofen ['INACTIVE'] [[0.02 0.98]]
2624/4496 arsenic-trioxide ['INACTIVE'] [[0.01 0.99]]
2625/4496 enprofylline ['INACTIVE'] [[0. 1.]]
2626/4496 CHIR-99021 ['INACTIVE'] [[0.03 0.97]]
2627/4496 CP-93129 ['INACTIVE'] [[0.04 0.96]]
2628/4496 etomoxir ['INACTIVE'] [[0.01 0.99]]
2629/4496 BMS-470539 ['INACTIVE'] [[0.1 0.9]]
2630/4496 AI-10-49 ['INACTIVE'] [[0.05 0.95]]
2631/4496 A-1120 ['INACTIVE'] [[0.13 0.87]]
2632/4496 beta-funaltrexamine ['INACTIVE'] [[0.02 0.98]]
2633/4496 GNF-2 ['INACTIVE'] [[0.04 0.96]]
2634/4496 PIK-93 ['INACTIVE'] [[0.05 0.95]]
2635/4496 flibanserin ['INACTIVE'] [[0.02 0.98]]
2636/4496 dimethyl-isosorbide ['INACTIVE'] [[0.04 0.96]]
2637/4496 befuraline ['INACTIVE'] [[0.02 0.98]]
2638/4496 oxolamine ['INACTIVE'] [[0.01 0.99]]
2639/4496 PF-05212384 ['INACTIVE'] [[0.1 0.9]]
2640/4496 GW-788388 ['INACTIVE'] [[0.02 0.98]]
2641/4496 nedaplatin ['INACTIVE'] [[0.02 0.98]]
2642/4496 linopirdine ['INACTIVE'] [[0.03 0.97]]
2643/4496 meluadrine ['INACTIVE'] [[0.01 0.99]]
2644/4496 erdafitinib ['INACTIVE'] [[0.04 0.96]]
2645/4496 prenylamine ['INACTIVE'] [[0.02 0.98]]
2646/4496 sudan-iv ['INACTIVE'] [[0.05 0.95]]
2647/4496 rizatriptan ['INACTIVE'] [[0.01 0.99]]
2648/4496 butalbital ['INACTIVE'] [[0.01 0.99]]
2649/4496 nanchangmycin ['INACTIVE'] [[0.08 0.92]]
2650/4496 difenpiramide ['INACTIVE'] [[0.01 0.99]]
2651/4496 cinepazide ['INACTIVE'] [[0.07 0.93]]
2652/4496 pirquinozol ['INACTIVE'] [[0.03 0.97]]
2653/4496 NS-9283 ['INACTIVE'] [[0.01 0.99]]
2654/4496 diminazene-aceturate ['INACTIVE'] [[0.06 0.94]]
2655/4496 dilazep ['INACTIVE'] [[0.03 0.97]]
2656/4496 5-iodo-A-85380 ['INACTIVE'] [[0.03 0.97]]
2657/4496 hemin ['INACTIVE'] [[0.02 0.98]]
2658/4496 coenzyme-a ['INACTIVE'] [[0.1 0.9]]
2659/4496 alagebrium ['INACTIVE'] [[0.03 0.97]]
2660/4496 dynasore ['INACTIVE'] [[0.01 0.99]]
2661/4496 ambrisentan ['INACTIVE'] [[0.03 0.97]]
2662/4496 diclofenac ['INACTIVE'] [[0. 1.]]
2663/4496 rhein ['INACTIVE'] [[0.01 0.99]]
2664/4496 NVP-AEW541 ['INACTIVE'] [[0.12 0.88]]
2665/4496 dexniguldipine ['INACTIVE'] [[0.06 0.94]]
2666/4496 palosuran ['INACTIVE'] [[0.05 0.95]]
2667/4496 niguldipine-(S)-(+) ['INACTIVE'] [[0.06 0.94]]
2668/4496 castanospermine ['INACTIVE'] [[0. 1.]]
2669/4496 fluorometholone-acetate ['INACTIVE'] [[0.01 0.99]]
2670/4496 toremifene ['INACTIVE'] [[0. 1.]]
2671/4496 HA-14-1 ['INACTIVE'] [[0.03 0.97]]
2672/4496 wnt-c59 ['INACTIVE'] [[0.04 0.96]]
2673/4496 RS-102221 ['INACTIVE'] [[0.05 0.95]]
2674/4496 napabucasin ['INACTIVE'] [[0.02 0.98]]
2675/4496 CP-94253 ['INACTIVE'] [[0.04 0.96]]
2676/4496 4,4-pentamethylenepiperidine ['INACTIVE'] [[0.04 0.96]]
2677/4496 SB-200646 ['INACTIVE'] [[0.02 0.98]]
2678/4496 ulipristal ['INACTIVE'] [[0.03 0.97]]
2679/4496 molidustat ['INACTIVE'] [[0.02 0.98]]
2680/4496 BML-190 ['INACTIVE'] [[0.03 0.97]]
2681/4496 PHP-501 ['INACTIVE'] [[0.1 0.9]]
2682/4496 TCS2002 ['INACTIVE'] [[0.03 0.97]]
2683/4496 G-1 ['INACTIVE'] [[0.05 0.95]]
2684/4496 EED226 ['INACTIVE'] [[0.03 0.97]]
2685/4496 JK-184 ['INACTIVE'] [[0.02 0.98]]
2686/4496 RG7112 ['INACTIVE'] [[0.11 0.89]]
2687/4496 NAB-2 ['INACTIVE'] [[0.04 0.96]]
2688/4496 CKI-7 ['INACTIVE'] [[0.04 0.96]]
2689/4496 pranlukast ['INACTIVE'] [[0.05 0.95]]
2690/4496 lumateperone ['INACTIVE'] [[0.1 0.9]]
2691/4496 chlorophyllin ['INACTIVE'] [[0.04 0.96]]
2692/4496 ginsenoside-rg3 ['INACTIVE'] [[0.06 0.94]]
2693/4496 fentiazac ['INACTIVE'] [[0.03 0.97]]
2694/4496 BMS-983970 ['INACTIVE'] [[0.05 0.95]]
2695/4496 LY310762 ['INACTIVE'] [[0.02 0.98]]
2696/4496 eprosartan ['INACTIVE'] [[0.06 0.94]]
2697/4496 SPP301 ['INACTIVE'] [[0.08 0.92]]
2698/4496 deptropine ['INACTIVE'] [[0.04 0.96]]
2699/4496 barnidipine ['INACTIVE'] [[0.04 0.96]]
2700/4496 lorazepam ['INACTIVE'] [[0.06 0.94]]
2701/4496 ACY-1215 ['INACTIVE'] [[0.04 0.96]]
2702/4496 TCN-238 ['INACTIVE'] [[0.06 0.94]]
2703/4496 bipenamol ['INACTIVE'] [[0.09 0.91]]
2704/4496 GK921 ['INACTIVE'] [[0.11 0.89]]
2705/4496 HA14-1 ['INACTIVE'] [[0.03 0.97]]
2706/4496 org-9768 ['INACTIVE'] [[0.01 0.99]]
2707/4496 SB-590885 ['INACTIVE'] [[0.05 0.95]]
2708/4496 JNJ-63533054 ['INACTIVE'] [[0.03 0.97]]
2709/4496 alacepril ['INACTIVE'] [[0.05 0.95]]
2710/4496 2-APB ['INACTIVE'] [[0.03 0.97]]
2711/4496 cinflumide ['INACTIVE'] [[0.01 0.99]]
2712/4496 mivacurium ['INACTIVE'] [[0.06 0.94]]
2713/4496 propylhexedrine ['INACTIVE'] [[0.03 0.97]]
2714/4496 PFI-4 ['INACTIVE'] [[0.07 0.93]]
2715/4496 DMH1 ['INACTIVE'] [[0.07 0.93]]
2716/4496 zometapine ['INACTIVE'] [[0.02 0.98]]
2717/4496 SB-939 ['INACTIVE'] [[0.08 0.92]]
2718/4496 bentiromide ['INACTIVE'] [[0.02 0.98]]
2719/4496 amezinium ['INACTIVE'] [[0. 1.]]
2720/4496 CIM-0216 ['INACTIVE'] [[0.03 0.97]]
2721/4496 cabergoline ['INACTIVE'] [[0.09 0.91]]
2722/4496 ecabet ['INACTIVE'] [[0.05 0.95]]
2723/4496 VE-821 ['INACTIVE'] [[0.02 0.98]]
2724/4496 WB-4101 ['INACTIVE'] [[0. 1.]]
2725/4496 remoxipride ['INACTIVE'] [[0.02 0.98]]
2726/4496 bismuth-subcitrate-potassium ['INACTIVE'] [[0.02 0.98]]
2727/4496 terodiline ['INACTIVE'] [[0.05 0.95]]
2728/4496 YM-155 ['INACTIVE'] [[0.04 0.96]]
2729/4496 TC-Mps1-12 ['INACTIVE'] [[0.03 0.97]]
2730/4496 triazolam ['INACTIVE'] [[0.1 0.9]]
2731/4496 nerbacadol ['INACTIVE'] [[0.04 0.96]]
2732/4496 licarbazepine ['INACTIVE'] [[0. 1.]]
2733/4496 dimemorfan ['INACTIVE'] [[0. 1.]]
2734/4496 WYE-354 ['INACTIVE'] [[0.15 0.85]]
2735/4496 thymol-iodide ['INACTIVE'] [[0.06 0.94]]
2736/4496 OXF-BD-02 ['INACTIVE'] [[0.02 0.98]]
2737/4496 AIM-100 ['INACTIVE'] [[0.01 0.99]]
2738/4496 JNJ-10397049 ['INACTIVE'] [[0.02 0.98]]
2739/4496 lersivirine ['INACTIVE'] [[0.03 0.97]]
2740/4496 letosteine ['INACTIVE'] [[0.05 0.95]]
2741/4496 TC-I-15 ['INACTIVE'] [[0.08 0.92]]
2742/4496 ON-01500 ['INACTIVE'] [[0.07 0.93]]
2743/4496 diroximel-fumarate ['INACTIVE'] [[0.06 0.94]]
2744/4496 verteporfin ['INACTIVE'] [[0.06 0.94]]
2745/4496 ethyl-pyruvate ['INACTIVE'] [[0. 1.]]
2746/4496 ciaftalan-zinc ['INACTIVE'] [[0.11 0.89]]
2747/4496 benethamine ['INACTIVE'] [[0.01 0.99]]
2748/4496 FPS-ZM1 ['INACTIVE'] [[0.01 0.99]]
2749/4496 NO-ASA ['INACTIVE'] [[0.03 0.97]]
2750/4496 enciprazine ['INACTIVE'] [[0.03 0.97]]
2751/4496 JNJ-26990990 ['INACTIVE'] [[0.03 0.97]]
2752/4496 oseltamivir-carboxylate ['INACTIVE'] [[0.01 0.99]]
2753/4496 apabetalone ['INACTIVE'] [[0. 1.]]
2754/4496 CH223191 ['INACTIVE'] [[0.06 0.94]]
2755/4496 ganetespib ['INACTIVE'] [[0.02 0.98]]
2756/4496 verinurad ['INACTIVE'] [[0.04 0.96]]
2757/4496 AH6809 ['INACTIVE'] [[0.01 0.99]]
2758/4496 stiripentol ['INACTIVE'] [[0.01 0.99]]
2759/4496 etizolam ['INACTIVE'] [[0.09 0.91]]
2760/4496 INH1 ['INACTIVE'] [[0.02 0.98]]
2761/4496 BW-B70C ['INACTIVE'] [[0.02 0.98]]
2762/4496 EI1 ['INACTIVE'] [[0.05 0.95]]
2763/4496 bismuth-oxalate ['INACTIVE'] [[0. 1.]]
2764/4496 dextrose ['INACTIVE'] [[0.02 0.98]]
2765/4496 tanshinone-i ['INACTIVE'] [[0. 1.]]
2766/4496 procyanidin-b-2 ['INACTIVE'] [[0.02 0.98]]
2767/4496 resatorvid ['INACTIVE'] [[0. 1.]]
2768/4496 AC-186 ['INACTIVE'] [[0.04 0.96]]
2769/4496 nalfurafine ['INACTIVE'] [[0.04 0.96]]
2770/4496 fudosteine ['INACTIVE'] [[0. 1.]]
2771/4496 gly-gln ['INACTIVE'] [[0.01 0.99]]
2772/4496 nor-Binaltorphimine-dihydrochloride ['INACTIVE'] [[0.08 0.92]]
2773/4496 tivantinib ['INACTIVE'] [[0.09 0.91]]
2774/4496 FG-4592 ['INACTIVE'] [[0.11 0.89]]
2775/4496 1-azakenpaullone ['INACTIVE'] [[0.06 0.94]]
2776/4496 AA-29504 ['INACTIVE'] [[0.04 0.96]]
2777/4496 WAY-207024 ['INACTIVE'] [[0.08 0.92]]
2778/4496 SDZ-220-581 ['INACTIVE'] [[0.07 0.93]]
2779/4496 ODM-201 ['INACTIVE'] [[0.04 0.96]]
2780/4496 PD-153035 ['INACTIVE'] [[0.03 0.97]]
2781/4496 CHPG ['INACTIVE'] [[0.01 0.99]]
2782/4496 enisamium-iodide ['INACTIVE'] [[0.06 0.94]]
2783/4496 APcK-110 ['INACTIVE'] [[0. 1.]]
2784/4496 methyl-nicotinate ['INACTIVE'] [[0.03 0.97]]
2785/4496 CBS,-N-Cyclohexyl-2-benzothiazolesulfenamide ['INACTIVE'] [[0.02 0.98]]
2786/4496 cinnarazine ['INACTIVE'] [[0.01 0.99]]
2787/4496 M-25 ['INACTIVE'] [[0.05 0.95]]
2788/4496 phenylmethylsulfonyl-fluoride ['INACTIVE'] [[0.03 0.97]]
2789/4496 AZ-628 ['INACTIVE'] [[0.05 0.95]]
2790/4496 creatine ['INACTIVE'] [[0. 1.]]
2791/4496 abiraterone-acetate ['INACTIVE'] [[0.01 0.99]]
2792/4496 CI-923 ['INACTIVE'] [[0.04 0.96]]
2793/4496 meprobamate ['INACTIVE'] [[0.02 0.98]]
2794/4496 deoxyepinephrine ['INACTIVE'] [[0. 1.]]
2795/4496 dimesna ['INACTIVE'] [[0. 1.]]
2796/4496 amtolmetin-guacil ['INACTIVE'] [[0.03 0.97]]
2797/4496 aminopentamide ['INACTIVE'] [[0.03 0.97]]
2798/4496 L-778123 ['INACTIVE'] [[0.08 0.92]]
2799/4496 TC-ASK-10 ['INACTIVE'] [[0.05 0.95]]
2800/4496 JNJ-42041935 ['INACTIVE'] [[0.1 0.9]]
2801/4496 MK-8033 ['INACTIVE'] [[0.12 0.88]]
2802/4496 ginsenoside-rd ['INACTIVE'] [[0.08 0.92]]
2803/4496 daphnetin ['INACTIVE'] [[0.01 0.99]]
2804/4496 BTS-54505 ['INACTIVE'] [[0.02 0.98]]
2805/4496 MMPX ['INACTIVE'] [[0.04 0.96]]
2806/4496 phenytoin ['INACTIVE'] [[0.03 0.97]]
2807/4496 GSK9027 ['INACTIVE'] [[0.06 0.94]]
2808/4496 ginsenoside-c-k ['INACTIVE'] [[0.05 0.95]]
2809/4496 RP-001 ['INACTIVE'] [[0.05 0.95]]
2810/4496 TC-A-2317 ['INACTIVE'] [[0.07 0.93]]
2811/4496 tifenazoxide ['INACTIVE'] [[0.05 0.95]]
2812/4496 tiotidine ['INACTIVE'] [[0.04 0.96]]
2813/4496 IBMX ['INACTIVE'] [[0. 1.]]
2814/4496 levopropoxyphene ['INACTIVE'] [[0.02 0.98]]
2815/4496 SCH-202676 ['INACTIVE'] [[0. 1.]]
2816/4496 lutein ['INACTIVE'] [[0.03 0.97]]
2817/4496 TCS-3035 ['INACTIVE'] [[0. 1.]]
2818/4496 zafirlukast ['INACTIVE'] [[0.17 0.83]]
2819/4496 EX-527 ['INACTIVE'] [[0.02 0.98]]
2820/4496 zolimidine ['INACTIVE'] [[0.06 0.94]]
2821/4496 chlorphenesin ['INACTIVE'] [[0. 1.]]
2822/4496 SD-2590 ['INACTIVE'] [[0.12 0.88]]
2823/4496 FGIN-1-27 ['INACTIVE'] [[0.02 0.98]]
2824/4496 oleamide ['INACTIVE'] [[0.05 0.95]]
2825/4496 L-741742 ['INACTIVE'] [[0.04 0.96]]
2826/4496 daucosterol ['INACTIVE'] [[0.04 0.96]]
2827/4496 bismuth(iii)-trifluoromethanesulfonate ['INACTIVE'] [[0.02 0.98]]
2828/4496 NKP-1339 ['INACTIVE'] [[0.08 0.92]]
2829/4496 BAY-85-8050 ['INACTIVE'] [[0. 1.]]
2830/4496 caroxazone ['INACTIVE'] [[0.02 0.98]]
2831/4496 umeclidinium ['INACTIVE'] [[0. 1.]]
2832/4496 vitamin-b12 ['INACTIVE'] [[0.17 0.83]]
2833/4496 axitinib ['INACTIVE'] [[0.06 0.94]]
2834/4496 sphingosylphosphorylcholine ['INACTIVE'] [[0.28 0.72]]
2835/4496 VU0364439 ['INACTIVE'] [[0.07 0.93]]
2836/4496 aminosalicylate ['INACTIVE'] [[0. 1.]]
2837/4496 paraxanthine ['INACTIVE'] [[0.01 0.99]]
2838/4496 RS-67506 ['INACTIVE'] [[0.04 0.96]]
2839/4496 isopropyl-myristate ['INACTIVE'] [[0.04 0.96]]
2840/4496 AT-1015 ['INACTIVE'] [[0.02 0.98]]
2841/4496 bifemelane ['INACTIVE'] [[0.06 0.94]]
2842/4496 NTNCB ['INACTIVE'] [[0.07 0.93]]
2843/4496 ecamsule-triethanolamine ['INACTIVE'] [[0.04 0.96]]
2844/4496 go-6983 ['INACTIVE'] [[0.09 0.91]]
2845/4496 E-2012 ['INACTIVE'] [[0.08 0.92]]
2846/4496 TC2559 ['INACTIVE'] [[0.03 0.97]]
2847/4496 chlorpropham ['INACTIVE'] [[0. 1.]]
2848/4496 AJ76-(+) ['INACTIVE'] [[0.02 0.98]]
2849/4496 stepronin ['INACTIVE'] [[0.05 0.95]]
2850/4496 apatinib ['INACTIVE'] [[0.05 0.95]]
2851/4496 GDC-0349 ['INACTIVE'] [[0.09 0.91]]
2852/4496 BP-554 ['INACTIVE'] [[0. 1.]]
2853/4496 phenprobamate ['INACTIVE'] [[0. 1.]]
2854/4496 fluspirilene ['INACTIVE'] [[0.03 0.97]]
2855/4496 A-674563 ['INACTIVE'] [[0.01 0.99]]
2856/4496 AMI-1 ['INACTIVE'] [[0.02 0.98]]
2857/4496 UAMC-00039 ['INACTIVE'] [[0.01 0.99]]
2858/4496 misoprostol ['INACTIVE'] [[0.05 0.95]]
2859/4496 UNC1215 ['INACTIVE'] [[0.05 0.95]]
2860/4496 mizolastine ['INACTIVE'] [[0.11 0.89]]
2861/4496 DMSO ['INACTIVE'] [[0.02 0.98]]
2862/4496 glycerol-monolaurate ['INACTIVE'] [[0.01 0.99]]
2863/4496 lylamine ['INACTIVE'] [[0.02 0.98]]
2864/4496 cyclovirobuxin-d ['INACTIVE'] [[0.06 0.94]]
2865/4496 alibendol ['INACTIVE'] [[0.03 0.97]]
2866/4496 pancuronium ['INACTIVE'] [[0.03 0.97]]
2867/4496 diphenyleneiodonium ['INACTIVE'] [[0. 1.]]
2868/4496 peramivir ['INACTIVE'] [[0.08 0.92]]
2869/4496 UNC-2327 ['INACTIVE'] [[0.05 0.95]]
2870/4496 GW-5074 ['INACTIVE'] [[0.01 0.99]]
2871/4496 mozavaptan ['INACTIVE'] [[0.02 0.98]]
2872/4496 refametinib ['INACTIVE'] [[0.01 0.99]]
2873/4496 opipramol ['INACTIVE'] [[0.02 0.98]]
2874/4496 SB-525334 ['INACTIVE'] [[0.08 0.92]]
2875/4496 sophocarpine ['INACTIVE'] [[0.07 0.93]]
2876/4496 L-monomethylarginine ['INACTIVE'] [[0.01 0.99]]
2877/4496 WWL-113 ['INACTIVE'] [[0.06 0.94]]
2878/4496 N-methyllidocaine-iodide ['INACTIVE'] [[0.03 0.97]]
2879/4496 tubastatin-a ['INACTIVE'] [[0.06 0.94]]
2880/4496 silymarin ['INACTIVE'] [[0. 1.]]
2881/4496 ABT-724 ['INACTIVE'] [[0.03 0.97]]
2882/4496 E7449 ['INACTIVE'] [[0.05 0.95]]
2883/4496 BMS-833923 ['INACTIVE'] [[0.02 0.98]]
2884/4496 perfluamine ['INACTIVE'] [[0.04 0.96]]
2885/4496 ethanolamine-oleate ['INACTIVE'] [[0.04 0.96]]
2886/4496 dipeptamin ['INACTIVE'] [[0.03 0.97]]
2887/4496 2-Oleoylglycerol ['INACTIVE'] [[0.04 0.96]]
2888/4496 TG-003 ['INACTIVE'] [[0.02 0.98]]
2889/4496 SM-21 ['INACTIVE'] [[0.02 0.98]]
2890/4496 HJC-0350 ['INACTIVE'] [[0.05 0.95]]
2891/4496 tolonidine ['INACTIVE'] [[0.01 0.99]]
2892/4496 olsalazine ['INACTIVE'] [[0.01 0.99]]
2893/4496 TG-02 ['INACTIVE'] [[0.06 0.94]]
2894/4496 denatonium-benzoate ['INACTIVE'] [[0.02 0.98]]
2895/4496 L-732,138 ['INACTIVE'] [[0.13 0.87]]
2896/4496 lomeguatrib ['INACTIVE'] [[0.02 0.98]]
2897/4496 nutlin-3 ['INACTIVE'] [[0.11 0.89]]
2898/4496 PF-04418948 ['INACTIVE'] [[0.01 0.99]]
2899/4496 DUP-697 ['INACTIVE'] [[0.05 0.95]]
2900/4496 alcaftadine ['INACTIVE'] [[0.03 0.97]]
2901/4496 methoxyphenamine ['INACTIVE'] [[0.03 0.97]]
2902/4496 LY225910 ['INACTIVE'] [[0.02 0.98]]
2903/4496 olvanil ['INACTIVE'] [[0.04 0.96]]
2904/4496 propylene-glycol ['INACTIVE'] [[0. 1.]]
2905/4496 prednicarbate ['INACTIVE'] [[0. 1.]]
2906/4496 levomepromazine ['INACTIVE'] [[0. 1.]]
2907/4496 3,4-DCPG-(+/-) ['INACTIVE'] [[0.04 0.96]]
2908/4496 KF-38789 ['INACTIVE'] [[0.02 0.98]]
2909/4496 merbarone ['INACTIVE'] [[0.01 0.99]]
2910/4496 salazodine ['INACTIVE'] [[0.01 0.99]]
2911/4496 KU-60019 ['INACTIVE'] [[0.14 0.86]]
2912/4496 L-Quisqualic-acid ['INACTIVE'] [[0.04 0.96]]
2913/4496 3,4-DCPG-(S) ['INACTIVE'] [[0.04 0.96]]
2914/4496 guanadrel ['INACTIVE'] [[0.05 0.95]]
2915/4496 NVP-AUY922 ['INACTIVE'] [[0.02 0.98]]
2916/4496 NT157 ['INACTIVE'] [[0.01 0.99]]
2917/4496 UH-232-(+) ['INACTIVE'] [[0. 1.]]
2918/4496 NVP-TAE226 ['INACTIVE'] [[0.1 0.9]]
2919/4496 celiprolol ['INACTIVE'] [[0.04 0.96]]
2920/4496 varenicline ['INACTIVE'] [[0.08 0.92]]
2921/4496 cardiogenol-c ['INACTIVE'] [[0. 1.]]
2922/4496 bryostatin-1 ['INACTIVE'] [[0.14 0.86]]
2923/4496 PSB-1115 ['INACTIVE'] [[0.01 0.99]]
2924/4496 3,4-DCPG-(R) ['INACTIVE'] [[0.04 0.96]]
2925/4496 PI-103 ['INACTIVE'] [[0.07 0.93]]
2926/4496 benidipine ['INACTIVE'] [[0.03 0.97]]
2927/4496 altanserin ['INACTIVE'] [[0.02 0.98]]
2928/4496 ZD-7288 ['INACTIVE'] [[0.02 0.98]]
2929/4496 AD-5467 ['INACTIVE'] [[0.02 0.98]]
2930/4496 OLDA ['INACTIVE'] [[0.05 0.95]]
2931/4496 aclidinium ['INACTIVE'] [[0.06 0.94]]
2932/4496 SIB-1893 ['INACTIVE'] [[0.02 0.98]]
2933/4496 alafosfalin ['INACTIVE'] [[0.02 0.98]]
2934/4496 elacridar ['INACTIVE'] [[0.04 0.96]]
2935/4496 LMK-235 ['INACTIVE'] [[0.03 0.97]]
2936/4496 OPC-21268 ['INACTIVE'] [[0.05 0.95]]
2937/4496 fenobam ['INACTIVE'] [[0.03 0.97]]
2938/4496 NE-100 ['INACTIVE'] [[0.01 0.99]]
2939/4496 THZ1-R ['INACTIVE'] [[0.03 0.97]]
2940/4496 ponesimod ['INACTIVE'] [[0.03 0.97]]
2941/4496 tucatinib ['INACTIVE'] [[0.08 0.92]]
2942/4496 VER-49009 ['INACTIVE'] [[0. 1.]]
2943/4496 ibandronate ['INACTIVE'] [[0.01 0.99]]
2944/4496 GSK-LSD-1 ['INACTIVE'] [[0.02 0.98]]
2945/4496 neridronic-acid ['INACTIVE'] [[0.01 0.99]]
2946/4496 A-7 ['INACTIVE'] [[0.08 0.92]]
2947/4496 TIC10 ['INACTIVE'] [[0.02 0.98]]
2948/4496 adrafinil ['INACTIVE'] [[0.02 0.98]]
2949/4496 perampanel ['INACTIVE'] [[0.03 0.97]]
2950/4496 butylphthalide ['INACTIVE'] [[0.01 0.99]]
2951/4496 FPL-64176 ['INACTIVE'] [[0.02 0.98]]
2952/4496 adenosine-triphosphate ['INACTIVE'] [[0. 1.]]
2953/4496 SR-3677 ['INACTIVE'] [[0.05 0.95]]
2954/4496 MRK-560 ['INACTIVE'] [[0.09 0.91]]
2955/4496 JW-642 ['INACTIVE'] [[0.03 0.97]]
2956/4496 carboplatin ['INACTIVE'] [[0.01 0.99]]
2957/4496 PD-168393 ['INACTIVE'] [[0.02 0.98]]
2958/4496 PF-02545920 ['INACTIVE'] [[0.05 0.95]]
2959/4496 BNC105 ['INACTIVE'] [[0.04 0.96]]
2960/4496 AZD3463 ['INACTIVE'] [[0.09 0.91]]
2961/4496 loxistatin-acid ['INACTIVE'] [[0.01 0.99]]
2962/4496 oxatomide ['INACTIVE'] [[0.04 0.96]]
2963/4496 NPS-2143 ['INACTIVE'] [[0.02 0.98]]
2964/4496 solifenacin-succinate ['INACTIVE'] [[0.01 0.99]]
2965/4496 ML130 ['INACTIVE'] [[0.02 0.98]]
2966/4496 NSC-625987 ['INACTIVE'] [[0.03 0.97]]
2967/4496 acrivastine ['INACTIVE'] [[0.07 0.93]]
2968/4496 I-BET-762 ['INACTIVE'] [[0.05 0.95]]
2969/4496 CB-10-277 ['INACTIVE'] [[0. 1.]]
2970/4496 ditolylguanidine ['INACTIVE'] [[0.03 0.97]]
2971/4496 zamifenacin ['INACTIVE'] [[0.03 0.97]]
2972/4496 ingenol-mebutate ['INACTIVE'] [[0.06 0.94]]
2973/4496 bimatoprost ['INACTIVE'] [[0.03 0.97]]
2974/4496 pruvanserin ['INACTIVE'] [[0.04 0.96]]
2975/4496 litronesib ['INACTIVE'] [[0.08 0.92]]
2976/4496 ledipasvir ['INACTIVE'] [[0.09 0.91]]
2977/4496 10-DEBC ['INACTIVE'] [[0.04 0.96]]
2978/4496 MC-1 ['INACTIVE'] [[0.07 0.93]]
2979/4496 pilaralisib ['INACTIVE'] [[0.02 0.98]]
2980/4496 barasertib ['INACTIVE'] [[0.06 0.94]]
2981/4496 gynostemma-extract ['INACTIVE'] [[0.05 0.95]]
2982/4496 LSN-2463359 ['INACTIVE'] [[0.04 0.96]]
2983/4496 PFI-3 ['INACTIVE'] [[0.07 0.93]]
2984/4496 BAN-ORL-24 ['INACTIVE'] [[0.01 0.99]]
2985/4496 polythiazide ['INACTIVE'] [[0. 1.]]
2986/4496 INT-767 ['INACTIVE'] [[0.08 0.92]]
2987/4496 1,12-Besm ['INACTIVE'] [[0.01 0.99]]
2988/4496 SCMC-Lys ['INACTIVE'] [[0. 1.]]
2989/4496 SMER-28 ['INACTIVE'] [[0.03 0.97]]
2990/4496 enflurane ['INACTIVE'] [[0.01 0.99]]
2991/4496 AZD5438 ['INACTIVE'] [[0.03 0.97]]
2992/4496 LY2784544 ['INACTIVE'] [[0.05 0.95]]
2993/4496 treosulfan ['INACTIVE'] [[0. 1.]]
2994/4496 medorinone ['INACTIVE'] [[0.01 0.99]]
2995/4496 iododexetimide ['INACTIVE'] [[0.04 0.96]]
2996/4496 stobadine ['INACTIVE'] [[0. 1.]]
2997/4496 NVP-TNKS656 ['INACTIVE'] [[0.08 0.92]]
2998/4496 alaproclate ['INACTIVE'] [[0.03 0.97]]
2999/4496 DAA-1106 ['INACTIVE'] [[0.03 0.97]]
3000/4496 tiagabine ['INACTIVE'] [[0.06 0.94]]
3001/4496 palifosfamide ['INACTIVE'] [[0. 1.]]
3002/4496 luliconazole ['INACTIVE'] [[0.1 0.9]]
3003/4496 acitazanolast ['INACTIVE'] [[0.02 0.98]]
3004/4496 osemozotan ['INACTIVE'] [[0.04 0.96]]
3005/4496 NG-nitro-arginine ['INACTIVE'] [[0.02 0.98]]
3006/4496 Y-320 ['INACTIVE'] [[0.1 0.9]]
3007/4496 L-689560 ['INACTIVE'] [[0.09 0.91]]
3008/4496 metipranolol ['INACTIVE'] [[0.04 0.96]]
3009/4496 azacyclonol ['INACTIVE'] [[0.03 0.97]]
3010/4496 succinylcholine-chloride ['INACTIVE'] [[0.03 0.97]]
3011/4496 CPI-0610 ['INACTIVE'] [[0.03 0.97]]
3012/4496 MI-14 ['INACTIVE'] [[0.02 0.98]]
3013/4496 pyrantel ['INACTIVE'] [[0.03 0.97]]
3014/4496 YZ9 ['INACTIVE'] [[0.02 0.98]]
3015/4496 ITX3 ['INACTIVE'] [[0.02 0.98]]
3016/4496 eupatilin ['INACTIVE'] [[0.01 0.99]]
3017/4496 DMeOB ['INACTIVE'] [[0. 1.]]
3018/4496 thiamine-pyrophosphate ['INACTIVE'] [[0.05 0.95]]
3019/4496 BMS-690514 ['INACTIVE'] [[0.05 0.95]]
3020/4496 AS-1892802 ['INACTIVE'] [[0.06 0.94]]
3021/4496 indirubin ['INACTIVE'] [[0.01 0.99]]
3022/4496 NS-8 ['INACTIVE'] [[0.07 0.93]]
3023/4496 1,4-butanediol ['INACTIVE'] [[0. 1.]]
3024/4496 U-99194 ['INACTIVE'] [[0.01 0.99]]
3025/4496 zofenopril-calcium ['INACTIVE'] [[0. 1.]]
3026/4496 dorsomorphin ['INACTIVE'] [[0.06 0.94]]
3027/4496 clomesone ['INACTIVE'] [[0.01 0.99]]
3028/4496 methanesulfonyl-fluoride ['INACTIVE'] [[0. 1.]]
3029/4496 AZD1283 ['INACTIVE'] [[0.07 0.93]]
3030/4496 nalbuphine ['INACTIVE'] [[0.04 0.96]]
3031/4496 apratastat ['INACTIVE'] [[0.03 0.97]]
3032/4496 Y-39983 ['INACTIVE'] [[0. 1.]]
3033/4496 acetylsalicylsalicylic-acid ['INACTIVE'] [[0. 1.]]
3034/4496 BW-616U ['INACTIVE'] [[0.01 0.99]]
3035/4496 W-54011 ['INACTIVE'] [[0.04 0.96]]
3036/4496 AP-18 ['INACTIVE'] [[0. 1.]]
3037/4496 BRD4770 ['INACTIVE'] [[0.06 0.94]]
3038/4496 ibrutinib ['INACTIVE'] [[0.07 0.93]]
3039/4496 ferrostatin-1 ['INACTIVE'] [[0.02 0.98]]
3040/4496 dihydromyricetin ['INACTIVE'] [[0. 1.]]
3041/4496 tyrphostin-AG-1296 ['INACTIVE'] [[0.03 0.97]]
3042/4496 SSR-180711 ['INACTIVE'] [[0.07 0.93]]
3043/4496 SB-452533 ['INACTIVE'] [[0.02 0.98]]
3044/4496 ONC201 ['INACTIVE'] [[0.05 0.95]]
3045/4496 LB42708 ['INACTIVE'] [[0.13 0.87]]
3046/4496 4-PPBP ['INACTIVE'] [[0.03 0.97]]
3047/4496 leucomethylene-blue ['INACTIVE'] [[0.07 0.93]]
3048/4496 stemregenin-1 ['INACTIVE'] [[0.01 0.99]]
3049/4496 azeliragon ['INACTIVE'] [[0.09 0.91]]
3050/4496 N-benzylnaltrindole ['INACTIVE'] [[0.12 0.88]]
3051/4496 ITE ['INACTIVE'] [[0.05 0.95]]
3052/4496 DG-172 ['INACTIVE'] [[0.11 0.89]]
3053/4496 ethacizin ['INACTIVE'] [[0.01 0.99]]
3054/4496 APR-246 ['INACTIVE'] [[0.08 0.92]]
3055/4496 PHA-793887 ['INACTIVE'] [[0.05 0.95]]
3056/4496 deslanoside ['INACTIVE'] [[0.01 0.99]]
3057/4496 ectoine-zwitterion ['INACTIVE'] [[0.01 0.99]]
3058/4496 5-hydroxyectoine ['INACTIVE'] [[0.05 0.95]]
3059/4496 igmesine ['INACTIVE'] [[0.02 0.98]]
3060/4496 ADL5859 ['INACTIVE'] [[0.04 0.96]]
3061/4496 P7C3 ['INACTIVE'] [[0.04 0.96]]
3062/4496 8-hydroxy-DPAT ['INACTIVE'] [[0.01 0.99]]
3063/4496 cafestol ['INACTIVE'] [[0.01 0.99]]
3064/4496 tribomsalan ['INACTIVE'] [[0.02 0.98]]
3065/4496 resminostat ['INACTIVE'] [[0.07 0.93]]
3066/4496 landiolol ['INACTIVE'] [[0.03 0.97]]
3067/4496 iodoantipyrine ['INACTIVE'] [[0. 1.]]
3068/4496 azilsartan-medoxomil ['INACTIVE'] [[0.04 0.96]]
3069/4496 FK-3311 ['INACTIVE'] [[0.07 0.93]]
3070/4496 AG-14361 ['INACTIVE'] [[0.04 0.96]]
3071/4496 hexoprenaline ['INACTIVE'] [[0.08 0.92]]
3072/4496 triptan ['INACTIVE'] [[0. 1.]]
3073/4496 ETP-46464 ['INACTIVE'] [[0.06 0.94]]
3074/4496 entinostat ['INACTIVE'] [[0.02 0.98]]
3075/4496 cloprostenol-(+/-) ['INACTIVE'] [[0.02 0.98]]
3076/4496 sodium-gluconate ['INACTIVE'] [[0. 1.]]
3077/4496 DBPR108 ['INACTIVE'] [[0.02 0.98]]
3078/4496 isamoltane ['INACTIVE'] [[0.03 0.97]]
3079/4496 BF2.649 ['INACTIVE'] [[0.01 0.99]]
3080/4496 pentostatin ['INACTIVE'] [[0.06 0.94]]
3081/4496 CI-844 ['INACTIVE'] [[0. 1.]]
3082/4496 sodium-gualenate ['INACTIVE'] [[0.01 0.99]]
3083/4496 imiloxan ['INACTIVE'] [[0.04 0.96]]
3084/4496 6-chloromelatonin ['INACTIVE'] [[0.03 0.97]]
3085/4496 PI4KIII-beta-inhibitor-1 ['INACTIVE'] [[0.03 0.97]]
3086/4496 EXO-1 ['INACTIVE'] [[0.01 0.99]]
3087/4496 ONX-0914 ['INACTIVE'] [[0.09 0.91]]
3088/4496 Ro-20-1724 ['INACTIVE'] [[0.02 0.98]]
3089/4496 4-DAMP ['INACTIVE'] [[0.04 0.96]]
3090/4496 BHQ ['INACTIVE'] [[0. 1.]]
3091/4496 guvacine ['INACTIVE'] [[0.02 0.98]]
3092/4496 TPPS4 ['INACTIVE'] [[0.06 0.94]]
3093/4496 RS-17053 ['INACTIVE'] [[0.06 0.94]]
3094/4496 brilliant-green ['INACTIVE'] [[0. 1.]]
3095/4496 cyclic-AMP ['INACTIVE'] [[0.03 0.97]]
3096/4496 ORY-1001 ['INACTIVE'] [[0.01 0.99]]
3097/4496 SR-95639A ['INACTIVE'] [[0. 1.]]
3098/4496 azosemide ['INACTIVE'] [[0.04 0.96]]
3099/4496 cadralazine ['INACTIVE'] [[0.03 0.97]]
3100/4496 fexaramine ['INACTIVE'] [[0.02 0.98]]
3101/4496 SGC-707 ['INACTIVE'] [[0.02 0.98]]
3102/4496 LY2452473 ['INACTIVE'] [[0.04 0.96]]
3103/4496 SKI-II ['INACTIVE'] [[0. 1.]]
3104/4496 uric-acid ['INACTIVE'] [[0.01 0.99]]
3105/4496 anthranilic-acid ['INACTIVE'] [[0.02 0.98]]
3106/4496 eucatropine ['INACTIVE'] [[0.03 0.97]]
3107/4496 omeprazole-sulfide ['INACTIVE'] [[0.04 0.96]]
3108/4496 remodelin ['INACTIVE'] [[0.07 0.93]]
3109/4496 eliprodil ['INACTIVE'] [[0.01 0.99]]
3110/4496 SB-258585 ['INACTIVE'] [[0.07 0.93]]
3111/4496 SGC-CBP30 ['INACTIVE'] [[0.09 0.91]]
3112/4496 CPI-613 ['INACTIVE'] [[0.04 0.96]]
3113/4496 cis-ACPD ['INACTIVE'] [[0.01 0.99]]
3114/4496 neuropathiazol ['INACTIVE'] [[0.02 0.98]]
3115/4496 DAU-5884 ['INACTIVE'] [[0.05 0.95]]
3116/4496 NVP-DPP728 ['INACTIVE'] [[0.07 0.93]]
3117/4496 MK-5046 ['INACTIVE'] [[0.13 0.87]]
3118/4496 6-iodo-nordihydrocapsaicin ['INACTIVE'] [[0. 1.]]
3119/4496 NU-1025 ['INACTIVE'] [[0.08 0.92]]
3120/4496 NVS-PAK1-1 ['INACTIVE'] [[0.07 0.93]]
3121/4496 swainsonine ['INACTIVE'] [[0. 1.]]
3122/4496 apafant ['INACTIVE'] [[0.17 0.83]]
3123/4496 donitriptan ['INACTIVE'] [[0.07 0.93]]
3124/4496 pibenzimol ['INACTIVE'] [[0.02 0.98]]
3125/4496 CMPD-1 ['INACTIVE'] [[0.01 0.99]]
3126/4496 quinagolide ['INACTIVE'] [[0.04 0.96]]
3127/4496 splitomycin ['INACTIVE'] [[0.02 0.98]]
3128/4496 carmoxirole ['INACTIVE'] [[0.02 0.98]]
3129/4496 troventol ['INACTIVE'] [[0.02 0.98]]
3130/4496 S-methylcysteine ['INACTIVE'] [[0.01 0.99]]
3131/4496 GNTI ['INACTIVE'] [[0.08 0.92]]
3132/4496 ripazepam ['INACTIVE'] [[0.05 0.95]]
3133/4496 cisplatin ['INACTIVE'] [[0. 1.]]
3134/4496 bemegride ['INACTIVE'] [[0.02 0.98]]
3135/4496 bosentan ['INACTIVE'] [[0.03 0.97]]
3136/4496 PP242 ['INACTIVE'] [[0.03 0.97]]
3137/4496 RS-45041-190 ['INACTIVE'] [[0.01 0.99]]
3138/4496 BINA ['INACTIVE'] [[0.02 0.98]]
3139/4496 N-demethylantipyrine ['INACTIVE'] [[0.01 0.99]]
3140/4496 calpeptin ['INACTIVE'] [[0.02 0.98]]
3141/4496 dicyclohexylamine ['INACTIVE'] [[0.01 0.99]]
3142/4496 DPPE ['INACTIVE'] [[0.03 0.97]]
3143/4496 spiperone ['INACTIVE'] [[0.02 0.98]]
3144/4496 ML161 ['INACTIVE'] [[0.03 0.97]]
3145/4496 tridihexethyl ['INACTIVE'] [[0.07 0.93]]
3146/4496 TUG-891 ['INACTIVE'] [[0.01 0.99]]
3147/4496 CPI-169 ['INACTIVE'] [[0.04 0.96]]
3148/4496 EDTMP ['INACTIVE'] [[0.02 0.98]]
3149/4496 Y-11 ['INACTIVE'] [[0.02 0.98]]
3150/4496 JX-401 ['INACTIVE'] [[0.01 0.99]]
3151/4496 volinanserin ['INACTIVE'] [[0.05 0.95]]
3152/4496 ARRY-334543 ['INACTIVE'] [[0.07 0.93]]
3153/4496 NS-1643 ['INACTIVE'] [[0.03 0.97]]
3154/4496 ZK-93423 ['INACTIVE'] [[0.01 0.99]]
3155/4496 EVP4593 ['INACTIVE'] [[0.06 0.94]]
3156/4496 anandamide ['INACTIVE'] [[0.01 0.99]]
3157/4496 miltefosine ['INACTIVE'] [[0.22 0.78]]
3158/4496 UPF-1069 ['INACTIVE'] [[0.02 0.98]]
3159/4496 necrostatin-1 ['INACTIVE'] [[0.02 0.98]]
3160/4496 anguidine ['INACTIVE'] [[0.04 0.96]]
3161/4496 aloperine ['INACTIVE'] [[0.04 0.96]]
3162/4496 RG2833 ['INACTIVE'] [[0.01 0.99]]
3163/4496 DY131 ['INACTIVE'] [[0.01 0.99]]
3164/4496 IRL-2500 ['INACTIVE'] [[0.04 0.96]]
3165/4496 isradipine ['INACTIVE'] [[0.03 0.97]]
3166/4496 E-64 ['INACTIVE'] [[0.02 0.98]]
3167/4496 nisoxetine ['INACTIVE'] [[0.01 0.99]]
3168/4496 propagermanium ['INACTIVE'] [[0.01 0.99]]
3169/4496 NVP-BEZ235 ['INACTIVE'] [[0.1 0.9]]
3170/4496 ciclesonide ['INACTIVE'] [[0.03 0.97]]
3171/4496 M-344 ['INACTIVE'] [[0.01 0.99]]
3172/4496 PF-750 ['INACTIVE'] [[0.02 0.98]]
3173/4496 SGI-1776 ['INACTIVE'] [[0.05 0.95]]
3174/4496 L-701324 ['INACTIVE'] [[0.02 0.98]]
3175/4496 A-582941 ['INACTIVE'] [[0.04 0.96]]
3176/4496 milacemide ['INACTIVE'] [[0.02 0.98]]
3177/4496 L-368899 ['INACTIVE'] [[0.1 0.9]]
3178/4496 isobutyramide ['INACTIVE'] [[0. 1.]]
3179/4496 guanidinoethyldisulfide-bicarbonate ['INACTIVE'] [[0.01 0.99]]
3180/4496 salirasib ['INACTIVE'] [[0.04 0.96]]
3181/4496 ETP-45658 ['INACTIVE'] [[0.06 0.94]]
3182/4496 maxacalcitol ['INACTIVE'] [[0.05 0.95]]
3183/4496 nuclomedone ['INACTIVE'] [[0.07 0.93]]
3184/4496 AMD11070 ['INACTIVE'] [[0.08 0.92]]
3185/4496 imidafenacin ['INACTIVE'] [[0.04 0.96]]
3186/4496 calcipotriol ['INACTIVE'] [[0.03 0.97]]
3187/4496 pacritinib ['INACTIVE'] [[0.06 0.94]]
3188/4496 clomifene ['INACTIVE'] [[0. 1.]]
3189/4496 amineptine ['INACTIVE'] [[0.04 0.96]]
3190/4496 cilazapril ['INACTIVE'] [[0.13 0.87]]
3191/4496 potassium-iodide ['INACTIVE'] [[0. 1.]]
3192/4496 pamidronate ['INACTIVE'] [[0. 1.]]
3193/4496 secalciferol ['INACTIVE'] [[0.02 0.98]]
3194/4496 N20C ['INACTIVE'] [[0.02 0.98]]
3195/4496 ubenimex ['INACTIVE'] [[0. 1.]]
3196/4496 lidamidine ['INACTIVE'] [[0.04 0.96]]
3197/4496 acamprosate ['INACTIVE'] [[0. 1.]]
3198/4496 FPH1-(BRD-6125) ['INACTIVE'] [[0.01 0.99]]
3199/4496 L-152804 ['INACTIVE'] [[0.07 0.93]]
3200/4496 leucylleucine-methyl-ester ['INACTIVE'] [[0. 1.]]
3201/4496 methylprednisolone-aceponate ['INACTIVE'] [[0.01 0.99]]
3202/4496 Mps1-IN-1 ['INACTIVE'] [[0.12 0.88]]
3203/4496 lacosamide ['INACTIVE'] [[0.02 0.98]]
3204/4496 EHop-016 ['INACTIVE'] [[0.07 0.93]]
3205/4496 bavisant ['INACTIVE'] [[0.03 0.97]]
3206/4496 CH5132799 ['INACTIVE'] [[0.14 0.86]]
3207/4496 remimazolam ['INACTIVE'] [[0.05 0.95]]
3208/4496 MRS-3777 ['INACTIVE'] [[0.06 0.94]]
3209/4496 PCI-24781 ['INACTIVE'] [[0.03 0.97]]
3210/4496 HTMT ['INACTIVE'] [[0.1 0.9]]
3211/4496 SAR407899 ['INACTIVE'] [[0.02 0.98]]
3212/4496 rhodamine-123 ['INACTIVE'] [[0.04 0.96]]
3213/4496 BL-5583 ['INACTIVE'] [[0.04 0.96]]
3214/4496 Nampt-IN-1 ['INACTIVE'] [[0. 1.]]
3215/4496 docebenone ['INACTIVE'] [[0.03 0.97]]
3216/4496 eslicarbazepine-acetate ['INACTIVE'] [[0.04 0.96]]
3217/4496 riboflavin-5-phosphate-sodium ['INACTIVE'] [[0.02 0.98]]
3218/4496 ostarine ['INACTIVE'] [[0.07 0.93]]
3219/4496 perflubron ['INACTIVE'] [[0. 1.]]
3220/4496 SB-706375 ['INACTIVE'] [[0.02 0.98]]
3221/4496 phenprocoumon ['INACTIVE'] [[0.01 0.99]]
3222/4496 MTPG ['INACTIVE'] [[0.02 0.98]]
3223/4496 bromantan ['INACTIVE'] [[0.01 0.99]]
3224/4496 cibenzoline ['INACTIVE'] [[0.01 0.99]]
3225/4496 mabuprofen ['INACTIVE'] [[0. 1.]]
3226/4496 SCH-221510 ['INACTIVE'] [[0.09 0.91]]
3227/4496 MR-16728 ['INACTIVE'] [[0. 1.]]
3228/4496 WAY-200070 ['INACTIVE'] [[0.02 0.98]]
3229/4496 thioproperazine ['INACTIVE'] [[0.02 0.98]]
3230/4496 NPY-5RA972 ['INACTIVE'] [[0.07 0.93]]
3231/4496 CCT018159 ['INACTIVE'] [[0.02 0.98]]
3232/4496 SKF-91488 ['INACTIVE'] [[0.01 0.99]]
3233/4496 TUG-770 ['INACTIVE'] [[0.06 0.94]]
3234/4496 bevantolol ['INACTIVE'] [[0.02 0.98]]
3235/4496 EC-23 ['INACTIVE'] [[0.04 0.96]]
3236/4496 talabostat ['INACTIVE'] [[0.03 0.97]]
3237/4496 ethionamide ['INACTIVE'] [[0.03 0.97]]
3238/4496 CY208-243 ['INACTIVE'] [[0.01 0.99]]
3239/4496 secoisolariciresinol-(-) ['INACTIVE'] [[0.02 0.98]]
3240/4496 BC-11 ['INACTIVE'] [[0.02 0.98]]
3241/4496 gabazine ['INACTIVE'] [[0.01 0.99]]
3242/4496 PHA-767491 ['INACTIVE'] [[0. 1.]]
3243/4496 IC261 ['INACTIVE'] [[0. 1.]]
3244/4496 ONO-4059 ['INACTIVE'] [[0.08 0.92]]
3245/4496 trimethoquinol ['INACTIVE'] [[0.07 0.93]]
3246/4496 SP-141 ['INACTIVE'] [[0.06 0.94]]
3247/4496 phosphatidylcholine ['INACTIVE'] [[0.13 0.87]]
3248/4496 protionamide ['INACTIVE'] [[0.03 0.97]]
3249/4496 GSK2330672 ['INACTIVE'] [[0.08 0.92]]
3250/4496 pantothenic-acid ['INACTIVE'] [[0. 1.]]
3251/4496 L-168049 ['INACTIVE'] [[0.06 0.94]]
3252/4496 GGsTop ['INACTIVE'] [[0.03 0.97]]
3253/4496 rutaecarpine ['INACTIVE'] [[0.07 0.93]]
3254/4496 pifithrin-cyclic ['INACTIVE'] [[0.03 0.97]]
3255/4496 SB-203580 ['INACTIVE'] [[0.04 0.96]]
3256/4496 EMD-1214063 ['INACTIVE'] [[0.12 0.88]]
3257/4496 tempol ['INACTIVE'] [[0.01 0.99]]
3258/4496 lisadimate ['INACTIVE'] [[0. 1.]]
3259/4496 remacemide ['INACTIVE'] [[0.02 0.98]]
3260/4496 darunavir ['INACTIVE'] [[0.07 0.93]]
3261/4496 tamoxifen ['INACTIVE'] [[0. 1.]]
3262/4496 epitiostanol ['INACTIVE'] [[0. 1.]]
3263/4496 dithranol ['INACTIVE'] [[0.03 0.97]]
3264/4496 tacalcitol ['INACTIVE'] [[0.02 0.98]]
3265/4496 resibufogenin ['INACTIVE'] [[0. 1.]]
3266/4496 cinanserin ['INACTIVE'] [[0.01 0.99]]
3267/4496 TBOA-(DL) ['INACTIVE'] [[0.06 0.94]]
3268/4496 cyclopenthiazide ['INACTIVE'] [[0.07 0.93]]
3269/4496 NLG919 ['INACTIVE'] [[0.03 0.97]]
3270/4496 S-Trityl-L-cysteine ['INACTIVE'] [[0.02 0.98]]
3271/4496 levomequitazine ['INACTIVE'] [[0.02 0.98]]
3272/4496 D-7193 ['INACTIVE'] [[0.05 0.95]]
3273/4496 trichostatin-a ['INACTIVE'] [[0.04 0.96]]
3274/4496 tipiracil ['INACTIVE'] [[0.01 0.99]]
3275/4496 givinostat ['INACTIVE'] [[0. 1.]]
3276/4496 A77636 ['INACTIVE'] [[0.03 0.97]]
3277/4496 U-18666A ['INACTIVE'] [[0.02 0.98]]
3278/4496 VU0155069 ['INACTIVE'] [[0.04 0.96]]
3279/4496 merimepodib ['INACTIVE'] [[0.01 0.99]]
3280/4496 tenovin-1 ['INACTIVE'] [[0.02 0.98]]
3281/4496 trofinetide ['INACTIVE'] [[0.06 0.94]]
3282/4496 reversan ['INACTIVE'] [[0.07 0.93]]
3283/4496 DPO-1 ['INACTIVE'] [[0.05 0.95]]
3284/4496 kartogenin ['INACTIVE'] [[0. 1.]]
3285/4496 alfacalcidol ['INACTIVE'] [[0.02 0.98]]
3286/4496 tasisulam ['INACTIVE'] [[0.09 0.91]]
3287/4496 O-1918 ['INACTIVE'] [[0.03 0.97]]
3288/4496 benzenemethanol,-2,5-dimethyl-[[(1-methylethyl)amino]methyl] ['INACTIVE'] [[0. 1.]]
3289/4496 NNC-711 ['INACTIVE'] [[0.03 0.97]]
3290/4496 tanaproget ['INACTIVE'] [[0.03 0.97]]
3291/4496 tocainide ['INACTIVE'] [[0. 1.]]
3292/4496 SB-657510 ['INACTIVE'] [[0.04 0.96]]
3293/4496 trimethylolpropane-triacrylate ['INACTIVE'] [[0.01 0.99]]
3294/4496 laurocapram ['INACTIVE'] [[0.03 0.97]]
3295/4496 ZM-336372 ['INACTIVE'] [[0.01 0.99]]
3296/4496 PD-102807 ['INACTIVE'] [[0.02 0.98]]
3297/4496 ercalcitriol ['INACTIVE'] [[0.01 0.99]]
3298/4496 tolimidone ['INACTIVE'] [[0.04 0.96]]
3299/4496 latanoprost ['INACTIVE'] [[0.03 0.97]]
3300/4496 potassium-p-aminobenzoate ['INACTIVE'] [[0. 1.]]
3301/4496 JW-55 ['INACTIVE'] [[0. 1.]]
3302/4496 Ro-3306 ['INACTIVE'] [[0.05 0.95]]
3303/4496 dihydroxyacetone ['INACTIVE'] [[0. 1.]]
3304/4496 SU-16f ['INACTIVE'] [[0.01 0.99]]
3305/4496 2-hydroxyethyl-salicylate ['INACTIVE'] [[0. 1.]]
3306/4496 ibrolipim ['INACTIVE'] [[0.01 0.99]]
3307/4496 eniluracil ['INACTIVE'] [[0.05 0.95]]
3308/4496 TFC-007 ['INACTIVE'] [[0.11 0.89]]
3309/4496 ilomastat ['INACTIVE'] [[0.03 0.97]]
3310/4496 grapiprant ['INACTIVE'] [[0.06 0.94]]
3311/4496 diphencyprone ['INACTIVE'] [[0. 1.]]
3312/4496 radafaxine ['INACTIVE'] [[0.04 0.96]]
3313/4496 RS-39604 ['INACTIVE'] [[0.05 0.95]]
3314/4496 benzo[d]thiazole-2(3h)-thione ['INACTIVE'] [[0. 1.]]
3315/4496 medica-16 ['INACTIVE'] [[0.04 0.96]]
3316/4496 arglabin ['INACTIVE'] [[0.05 0.95]]
3317/4496 ML-213 ['INACTIVE'] [[0.01 0.99]]
3318/4496 CKD-712 ['INACTIVE'] [[0.04 0.96]]
3319/4496 AM-92016 ['INACTIVE'] [[0.02 0.98]]
3320/4496 SR-59230A ['INACTIVE'] [[0.02 0.98]]
3321/4496 GDC-0810 ['INACTIVE'] [[0.05 0.95]]
3322/4496 ML-281 ['INACTIVE'] [[0.09 0.91]]
3323/4496 taxifolin ['INACTIVE'] [[0. 1.]]
3324/4496 calcitriol ['INACTIVE'] [[0.02 0.98]]
3325/4496 hypoestoxide ['INACTIVE'] [[0.04 0.96]]
3326/4496 farnesyl-thiosalicylic-acid-amide ['INACTIVE'] [[0.06 0.94]]
3327/4496 fosfomycin ['INACTIVE'] [[0.01 0.99]]
3328/4496 esomeprazole ['INACTIVE'] [[0.02 0.98]]
3329/4496 OF-1 ['INACTIVE'] [[0.04 0.96]]
3330/4496 estramustine-phosphate ['INACTIVE'] [[0.01 0.99]]
3331/4496 anthraquinone ['INACTIVE'] [[0. 1.]]
3332/4496 SLV-320 ['INACTIVE'] [[0.02 0.98]]
3333/4496 flumexadol ['INACTIVE'] [[0.02 0.98]]
3334/4496 KX2-391 ['INACTIVE'] [[0.04 0.96]]
3335/4496 AZ5104 ['INACTIVE'] [[0.08 0.92]]
3336/4496 tozadenant ['INACTIVE'] [[0.09 0.91]]
3337/4496 valrocemide ['INACTIVE'] [[0.03 0.97]]
3338/4496 ammonium-lactate ['INACTIVE'] [[0. 1.]]
3339/4496 AVE-0991 ['INACTIVE'] [[0.05 0.95]]
3340/4496 PG-9 ['INACTIVE'] [[0.04 0.96]]
3341/4496 A-803467 ['INACTIVE'] [[0.01 0.99]]
3342/4496 deltarasin ['INACTIVE'] [[0. 1.]]
3343/4496 aphidicolin ['INACTIVE'] [[0.01 0.99]]
3344/4496 hexamethylenebisacetamide ['INACTIVE'] [[0.01 0.99]]
3345/4496 silver-sulfadiazine ['INACTIVE'] [[0.03 0.97]]
3346/4496 bromosporine ['INACTIVE'] [[0.07 0.93]]
3347/4496 3-bromocamphor ['INACTIVE'] [[0.02 0.98]]
3348/4496 SQ-109 ['INACTIVE'] [[0.01 0.99]]
3349/4496 AM-281 ['INACTIVE'] [[0.09 0.91]]
3350/4496 lasofoxifene ['INACTIVE'] [[0.01 0.99]]
3351/4496 tasimelteon ['INACTIVE'] [[0.04 0.96]]
3352/4496 zolantidine ['INACTIVE'] [[0.05 0.95]]
3353/4496 talinolol ['INACTIVE'] [[0. 1.]]
3354/4496 eprodisate ['INACTIVE'] [[0. 1.]]
3355/4496 palomid-529 ['INACTIVE'] [[0. 1.]]
3356/4496 melperone ['INACTIVE'] [[0.01 0.99]]
3357/4496 prednisolone-tebutate ['INACTIVE'] [[0. 1.]]
3358/4496 pyrintegrin ['INACTIVE'] [[0.02 0.98]]
3359/4496 GR-159897 ['INACTIVE'] [[0.04 0.96]]
3360/4496 L-693403 ['INACTIVE'] [[0. 1.]]
3361/4496 abafungin ['INACTIVE'] [[0.03 0.97]]
3362/4496 KU-55933 ['INACTIVE'] [[0.08 0.92]]
3363/4496 sarcosine ['INACTIVE'] [[0. 1.]]
3364/4496 SM-108 ['INACTIVE'] [[0. 1.]]
3365/4496 RU-42173 ['INACTIVE'] [[0.01 0.99]]
3366/4496 D-alpha-tocopheryl-succinate ['INACTIVE'] [[0.05 0.95]]
3367/4496 desvenlafaxine ['INACTIVE'] [[0.01 0.99]]
3368/4496 CC-115 ['INACTIVE'] [[0.08 0.92]]
3369/4496 CID-2011756 ['INACTIVE'] [[0.02 0.98]]
3370/4496 AC-55649 ['INACTIVE'] [[0.02 0.98]]
3371/4496 TRAM-39 ['INACTIVE'] [[0. 1.]]
3372/4496 TRIM ['INACTIVE'] [[0.05 0.95]]
3373/4496 TCS-2210 ['INACTIVE'] [[0.03 0.97]]
3374/4496 TC-H-106 ['INACTIVE'] [[0.03 0.97]]
3375/4496 FIPI ['INACTIVE'] [[0.01 0.99]]
3376/4496 JDTic ['INACTIVE'] [[0.09 0.91]]
3377/4496 NNC-05-2090 ['INACTIVE'] [[0.05 0.95]]
3378/4496 doxercalciferol ['INACTIVE'] [[0.01 0.99]]
3379/4496 3-AQC ['INACTIVE'] [[0.09 0.91]]
3380/4496 estetrol ['INACTIVE'] [[0.02 0.98]]
3381/4496 asiatic-acid ['INACTIVE'] [[0. 1.]]
3382/4496 seocalcitol ['INACTIVE'] [[0.06 0.94]]
3383/4496 salvianolic-acid-B ['INACTIVE'] [[0.04 0.96]]
3384/4496 carteolol ['INACTIVE'] [[0.02 0.98]]
3385/4496 spizofurone ['INACTIVE'] [[0.05 0.95]]
3386/4496 azilsartan ['INACTIVE'] [[0.04 0.96]]
3387/4496 bentazepam ['INACTIVE'] [[0.04 0.96]]
3388/4496 purmorphamine ['INACTIVE'] [[0.07 0.93]]
3389/4496 felbinac-ethyl ['INACTIVE'] [[0.03 0.97]]
3390/4496 BNTX ['INACTIVE'] [[0.04 0.96]]
3391/4496 fluroxene ['INACTIVE'] [[0.01 0.99]]
3392/4496 LMI070 ['INACTIVE'] [[0.01 0.99]]
3393/4496 SB-431542 ['INACTIVE'] [[0.09 0.91]]
3394/4496 A-939572 ['INACTIVE'] [[0.03 0.97]]
3395/4496 cilostamide ['INACTIVE'] [[0.02 0.98]]
3396/4496 SCH-28080 ['INACTIVE'] [[0.01 0.99]]
3397/4496 bucladesine ['INACTIVE'] [[0.03 0.97]]
3398/4496 atizoram ['INACTIVE'] [[0.11 0.89]]
3399/4496 NNC-63-0532 ['INACTIVE'] [[0.02 0.98]]
3400/4496 AT7867 ['INACTIVE'] [[0.05 0.95]]
3401/4496 C-1 ['INACTIVE'] [[0.01 0.99]]
3402/4496 PF-431396 ['INACTIVE'] [[0.09 0.91]]
3403/4496 pazopanib ['INACTIVE'] [[0.03 0.97]]
3404/4496 amprenavir ['INACTIVE'] [[0.06 0.94]]
3405/4496 I-BZA2 ['INACTIVE'] [[0. 1.]]
3406/4496 atiprimod ['INACTIVE'] [[0.02 0.98]]
3407/4496 frovatriptan ['INACTIVE'] [[0.01 0.99]]
3408/4496 mesoridazine ['INACTIVE'] [[0.06 0.94]]
3409/4496 carbetapentane ['INACTIVE'] [[0.01 0.99]]
3410/4496 ICI-185,282 ['INACTIVE'] [[0.03 0.97]]
3411/4496 PD-156707 ['INACTIVE'] [[0.02 0.98]]
3412/4496 8-hydroxy-PIPAT ['INACTIVE'] [[0.02 0.98]]
3413/4496 gabexate ['INACTIVE'] [[0.01 0.99]]
3414/4496 beta-CCB ['INACTIVE'] [[0.02 0.98]]
3415/4496 hordenine ['INACTIVE'] [[0. 1.]]
3416/4496 cromakalim ['INACTIVE'] [[0.12 0.88]]
3417/4496 deserpidine ['INACTIVE'] [[0.01 0.99]]
3418/4496 verubulin ['INACTIVE'] [[0.03 0.97]]
3419/4496 bonaphthone ['INACTIVE'] [[0. 1.]]
3420/4496 cinaciguat ['INACTIVE'] [[0.01 0.99]]
3421/4496 ospemifene ['INACTIVE'] [[0.01 0.99]]
3422/4496 xibenolol ['INACTIVE'] [[0.05 0.95]]
3423/4496 etoxybamide ['INACTIVE'] [[0. 1.]]
3424/4496 mifobate ['INACTIVE'] [[0.02 0.98]]
3425/4496 IMREG-1 ['INACTIVE'] [[0. 1.]]
3426/4496 oleandrin ['INACTIVE'] [[0.03 0.97]]
3427/4496 3-carboxy-4-hydroxyphenylglycine-(S) ['INACTIVE'] [[0.01 0.99]]
3428/4496 oxelaidin ['INACTIVE'] [[0.02 0.98]]
3429/4496 TMPH ['INACTIVE'] [[0.02 0.98]]
3430/4496 IPAG ['INACTIVE'] [[0.04 0.96]]
3431/4496 LUF-5834 ['INACTIVE'] [[0.04 0.96]]
3432/4496 3-carboxy-4-hydroxyphenylglycine-(R) ['INACTIVE'] [[0.01 0.99]]
3433/4496 teriflunomide ['INACTIVE'] [[0.02 0.98]]
3434/4496 VLX-600 ['INACTIVE'] [[0.02 0.98]]
3435/4496 3-methyladenine ['INACTIVE'] [[0.01 0.99]]
3436/4496 asymmetrical-dimethylarginine ['INACTIVE'] [[0.01 0.99]]
3437/4496 GR-144053 ['INACTIVE'] [[0.02 0.98]]
3438/4496 calcifediol ['INACTIVE'] [[0.01 0.99]]
3439/4496 ecabapide ['INACTIVE'] [[0. 1.]]
3440/4496 AR-42 ['INACTIVE'] [[0.02 0.98]]
3441/4496 dimaprit ['INACTIVE'] [[0.01 0.99]]
3442/4496 TC-E-5002 ['INACTIVE'] [[0.02 0.98]]
3443/4496 plinabulin ['INACTIVE'] [[0.04 0.96]]
3444/4496 indoximod ['INACTIVE'] [[0.01 0.99]]
3445/4496 exisulind ['INACTIVE'] [[0.06 0.94]]
3446/4496 hydroxyfasudil ['INACTIVE'] [[0.05 0.95]]
3447/4496 estropipate ['INACTIVE'] [[0. 1.]]
3448/4496 levobetaxolol ['INACTIVE'] [[0. 1.]]
3449/4496 nalmefene ['INACTIVE'] [[0.05 0.95]]
3450/4496 caroverine ['INACTIVE'] [[0.01 0.99]]
3451/4496 ibuprofen-lysine ['INACTIVE'] [[0.01 0.99]]
3452/4496 metoxibutropate ['INACTIVE'] [[0.03 0.97]]
3453/4496 EIT-hydrobromide ['INACTIVE'] [[0.01 0.99]]
3454/4496 IT1t ['INACTIVE'] [[0.06 0.94]]
3455/4496 BCX-1470 ['INACTIVE'] [[0.05 0.95]]
3456/4496 mavoglurant ['INACTIVE'] [[0.01 0.99]]
3457/4496 danegaptide ['INACTIVE'] [[0.01 0.99]]
3458/4496 perindopril ['INACTIVE'] [[0. 1.]]
3459/4496 olesoxime ['INACTIVE'] [[0.02 0.98]]
3460/4496 paricalcitol ['INACTIVE'] [[0.04 0.96]]
3461/4496 XL019 ['INACTIVE'] [[0.02 0.98]]
3462/4496 astemizole ['INACTIVE'] [[0.02 0.98]]
3463/4496 N-methylformamide ['INACTIVE'] [[0.01 0.99]]
3464/4496 betaxolol ['INACTIVE'] [[0. 1.]]
3465/4496 sufentanil ['INACTIVE'] [[0.07 0.93]]
3466/4496 terguride ['INACTIVE'] [[0.02 0.98]]
3467/4496 tyloxapol ['INACTIVE'] [[0.01 0.99]]
3468/4496 acetyl-farnesyl-cysteine ['INACTIVE'] [[0.05 0.95]]
3469/4496 targinine ['INACTIVE'] [[0.04 0.96]]
3470/4496 SF-11 ['INACTIVE'] [[0. 1.]]
3471/4496 iobenguane ['INACTIVE'] [[0.03 0.97]]
3472/4496 dihydrexidine ['INACTIVE'] [[0.04 0.96]]
3473/4496 N6022 ['INACTIVE'] [[0.02 0.98]]
3474/4496 RS-102895 ['INACTIVE'] [[0.02 0.98]]
3475/4496 sodium-orthovanadate ['INACTIVE'] [[0. 1.]]
3476/4496 indobufen ['INACTIVE'] [[0. 1.]]
3477/4496 CGP-71683 ['INACTIVE'] [[0.1 0.9]]
3478/4496 S4 ['INACTIVE'] [[0.01 0.99]]
3479/4496 alfadolone-acetate ['INACTIVE'] [[0.01 0.99]]
3480/4496 anisotropine ['INACTIVE'] [[0.01 0.99]]
3481/4496 SNAP ['INACTIVE'] [[0.07 0.93]]
3482/4496 NMDA ['INACTIVE'] [[0.02 0.98]]
3483/4496 KW-2478 ['INACTIVE'] [[0.03 0.97]]
3484/4496 chromanol-(+/-) ['INACTIVE'] [[0.1 0.9]]
3485/4496 moracizine ['INACTIVE'] [[0.04 0.96]]
3486/4496 OSI-027 ['INACTIVE'] [[0.07 0.93]]
3487/4496 MCOPPB ['INACTIVE'] [[0.12 0.88]]
3488/4496 sulfisoxazole-acetyl ['INACTIVE'] [[0. 1.]]
3489/4496 zaltidine ['INACTIVE'] [[0.07 0.93]]
3490/4496 I-BZA ['INACTIVE'] [[0.04 0.96]]
3491/4496 anecortave-acetate ['INACTIVE'] [[0.03 0.97]]
3492/4496 psoralen ['INACTIVE'] [[0. 1.]]
3493/4496 sulfamoxole ['INACTIVE'] [[0. 1.]]
3494/4496 PD-168077 ['INACTIVE'] [[0.03 0.97]]
3495/4496 methantheline ['INACTIVE'] [[0.04 0.96]]
3496/4496 rotigotine ['INACTIVE'] [[0.02 0.98]]
3497/4496 trepibutone ['INACTIVE'] [[0.02 0.98]]
3498/4496 KW-2449 ['INACTIVE'] [[0.03 0.97]]
3499/4496 aminohydroxybutyric-acid ['INACTIVE'] [[0. 1.]]
3500/4496 riodipine ['INACTIVE'] [[0.03 0.97]]
3501/4496 delphinidin ['INACTIVE'] [[0. 1.]]
3502/4496 monastrol ['INACTIVE'] [[0.02 0.98]]
3503/4496 UNC-926 ['INACTIVE'] [[0.06 0.94]]
3504/4496 5-hydroxymethyl-tolterodine ['INACTIVE'] [[0. 1.]]
3505/4496 ZAPA ['INACTIVE'] [[0.01 0.99]]
3506/4496 AT13148 ['INACTIVE'] [[0.04 0.96]]
3507/4496 2-ethyl-1,3-hexanediol ['INACTIVE'] [[0. 1.]]
3508/4496 VGX-1027 ['INACTIVE'] [[0.01 0.99]]
3509/4496 taprenepag ['INACTIVE'] [[0.09 0.91]]
3510/4496 APC-100 ['INACTIVE'] [[0.02 0.98]]
3511/4496 ML133 ['INACTIVE'] [[0.01 0.99]]
3512/4496 CD-437 ['INACTIVE'] [[0. 1.]]
3513/4496 BRL-37344 ['INACTIVE'] [[0.02 0.98]]
3514/4496 LE-135 ['INACTIVE'] [[0.02 0.98]]
3515/4496 ZM-447439 ['INACTIVE'] [[0.02 0.98]]
3516/4496 OCO-1112 ['INACTIVE'] [[0.01 0.99]]
3517/4496 pinaverium ['INACTIVE'] [[0.09 0.91]]
3518/4496 3-amino-benzamide ['INACTIVE'] [[0.01 0.99]]
3519/4496 diphemanil ['INACTIVE'] [[0.01 0.99]]
3520/4496 heclin ['INACTIVE'] [[0.02 0.98]]
3521/4496 PF-429242 ['INACTIVE'] [[0.05 0.95]]
3522/4496 etoposide-phosphate ['INACTIVE'] [[0.09 0.91]]
3523/4496 methyldopate ['INACTIVE'] [[0. 1.]]
3524/4496 indalpine ['INACTIVE'] [[0.01 0.99]]
3525/4496 isoflurane ['INACTIVE'] [[0. 1.]]
3526/4496 atrasentan ['INACTIVE'] [[0.03 0.97]]
3527/4496 SKF-96365 ['INACTIVE'] [[0. 1.]]
3528/4496 A-804598 ['INACTIVE'] [[0.03 0.97]]
3529/4496 TCS-PIM-1-1 ['INACTIVE'] [[0.01 0.99]]
3530/4496 ME-0328 ['INACTIVE'] [[0.06 0.94]]
3531/4496 SEN-1269 ['INACTIVE'] [[0.05 0.95]]
3532/4496 icotinib ['INACTIVE'] [[0.14 0.86]]
3533/4496 fluoxymesterone ['INACTIVE'] [[0.01 0.99]]
3534/4496 4-carboxy-3-hydroxyphenylglycine-(RS) ['INACTIVE'] [[0. 1.]]
3535/4496 QX-314 ['INACTIVE'] [[0.01 0.99]]
3536/4496 methylprednisolone-sodium-succinate ['INACTIVE'] [[0. 1.]]
3537/4496 brivaracetam ['INACTIVE'] [[0. 1.]]
3538/4496 quizartinib ['INACTIVE'] [[0.11 0.89]]
3539/4496 4-carboxy-3-hydroxyphenylglycine-(S) ['INACTIVE'] [[0. 1.]]
3540/4496 McN-5652-(+/-) ['INACTIVE'] [[0.03 0.97]]
3541/4496 lupanine ['INACTIVE'] [[0.01 0.99]]
3542/4496 ibuproxam ['INACTIVE'] [[0. 1.]]
3543/4496 SANT-2 ['INACTIVE'] [[0.03 0.97]]
3544/4496 tolterodine ['INACTIVE'] [[0. 1.]]
3545/4496 D-4476 ['INACTIVE'] [[0.09 0.91]]
3546/4496 S-07662 ['INACTIVE'] [[0.03 0.97]]
3547/4496 LY456236 ['INACTIVE'] [[0.03 0.97]]
3548/4496 bindarit ['INACTIVE'] [[0. 1.]]
3549/4496 indium-tri(2-propanolate) ['INACTIVE'] [[0.01 0.99]]
3550/4496 L-Hydroxyproline ['INACTIVE'] [[0.03 0.97]]
3551/4496 propiverine ['INACTIVE'] [[0.01 0.99]]
3552/4496 bromfenac ['INACTIVE'] [[0.02 0.98]]
3553/4496 pyrazolanthrone ['INACTIVE'] [[0.03 0.97]]
3554/4496 HA-1004 ['INACTIVE'] [[0.05 0.95]]
3555/4496 GW-627368 ['INACTIVE'] [[0.03 0.97]]
3556/4496 levocabastine ['INACTIVE'] [[0.02 0.98]]
3557/4496 N-acetylglycyl-D-glutamic-acid ['INACTIVE'] [[0. 1.]]
3558/4496 foscarnet ['INACTIVE'] [[0. 1.]]
3559/4496 lanoconazole ['INACTIVE'] [[0.09 0.91]]
3560/4496 ZM-226600 ['INACTIVE'] [[0.01 0.99]]
3561/4496 evatanepag ['INACTIVE'] [[0.06 0.94]]
3562/4496 selfotel ['INACTIVE'] [[0.01 0.99]]
3563/4496 OSI-420 ['INACTIVE'] [[0.03 0.97]]
3564/4496 4-iodo-6-phenylpyrimidine ['INACTIVE'] [[0.01 0.99]]
3565/4496 CD-1530 ['INACTIVE'] [[0. 1.]]
3566/4496 sulmazole ['INACTIVE'] [[0.06 0.94]]
3567/4496 GSK2879552 ['INACTIVE'] [[0.02 0.98]]
3568/4496 acrylate ['INACTIVE'] [[0. 1.]]
3569/4496 PCA-4248 ['INACTIVE'] [[0.02 0.98]]
3570/4496 levothyroxine ['INACTIVE'] [[0.01 0.99]]
3571/4496 anethole-trithione ['INACTIVE'] [[0.01 0.99]]
3572/4496 RSV604 ['INACTIVE'] [[0.01 0.99]]
3573/4496 monoctanoin ['INACTIVE'] [[0.01 0.99]]
3574/4496 8-M-PDOT ['INACTIVE'] [[0.03 0.97]]
3575/4496 WAY-600 ['INACTIVE'] [[0.14 0.86]]
3576/4496 tolmetin ['INACTIVE'] [[0. 1.]]
3577/4496 AMG-9810 ['INACTIVE'] [[0.01 0.99]]
3578/4496 xamoterol ['INACTIVE'] [[0.01 0.99]]
3579/4496 SKF-38393 ['INACTIVE'] [[0.03 0.97]]
3580/4496 BMS-182874 ['INACTIVE'] [[0.06 0.94]]
3581/4496 butylscopolamine-bromide ['INACTIVE'] [[0.01 0.99]]
3582/4496 semaxanib ['INACTIVE'] [[0. 1.]]
3583/4496 MDMS ['INACTIVE'] [[0. 1.]]
3584/4496 WWL-123 ['INACTIVE'] [[0.04 0.96]]
3585/4496 SC-19220 ['INACTIVE'] [[0.02 0.98]]
3586/4496 AMG-837 ['INACTIVE'] [[0.09 0.91]]
3587/4496 ISO-1 ['INACTIVE'] [[0. 1.]]
3588/4496 curcumol ['INACTIVE'] [[0.02 0.98]]
3589/4496 evodiamine ['INACTIVE'] [[0.02 0.98]]
3590/4496 BRL-26314 ['INACTIVE'] [[0.02 0.98]]
3591/4496 oxazepam ['INACTIVE'] [[0.02 0.98]]
3592/4496 rabeprazole ['INACTIVE'] [[0.03 0.97]]
3593/4496 cerulenin ['INACTIVE'] [[0.03 0.97]]
3594/4496 hexaminolevulinate ['INACTIVE'] [[0.02 0.98]]
3595/4496 methyl-aminolevulinate ['INACTIVE'] [[0.01 0.99]]
3596/4496 ketorolac ['INACTIVE'] [[0. 1.]]
3597/4496 corosolic-acid ['INACTIVE'] [[0. 1.]]
3598/4496 BMY-45778 ['INACTIVE'] [[0.03 0.97]]
3599/4496 STF-31 ['INACTIVE'] [[0.01 0.99]]
3600/4496 polyinosine ['INACTIVE'] [[0.03 0.97]]
3601/4496 phenylacetylglutamine ['INACTIVE'] [[0. 1.]]
3602/4496 ME0328 ['INACTIVE'] [[0.05 0.95]]
3603/4496 SLV-319-(+/-) ['INACTIVE'] [[0.12 0.88]]
3604/4496 veliflapon ['INACTIVE'] [[0.06 0.94]]
3605/4496 CB-03-01 ['INACTIVE'] [[0. 1.]]
3606/4496 GABA-linoleamide ['INACTIVE'] [[0.05 0.95]]
3607/4496 LFM-A13 ['INACTIVE'] [[0.04 0.96]]
3608/4496 BADGE ['INACTIVE'] [[0.03 0.97]]
3609/4496 BAG-956 ['INACTIVE'] [[0.1 0.9]]
3610/4496 dapivirine ['INACTIVE'] [[0.12 0.88]]
3611/4496 tyrphostin-AG-879 ['INACTIVE'] [[0. 1.]]
3612/4496 2-methoxyestradiol ['INACTIVE'] [[0. 1.]]
3613/4496 ammonium-perfluorocaprylate ['INACTIVE'] [[0. 1.]]
3614/4496 SEP-227900 ['INACTIVE'] [[0. 1.]]
3615/4496 N-[2-(Piperidinylamino)ethyl]-4-iodobenzamide ['INACTIVE'] [[0.04 0.96]]
3616/4496 ICI-192605 ['INACTIVE'] [[0. 1.]]
3617/4496 IEM1754 ['INACTIVE'] [[0.02 0.98]]
3618/4496 SB-202190 ['INACTIVE'] [[0.03 0.97]]
3619/4496 DVD-111 ['INACTIVE'] [[0. 1.]]
3620/4496 7-hydroxy-PIPAT ['INACTIVE'] [[0.02 0.98]]
3621/4496 tremorine ['INACTIVE'] [[0.03 0.97]]
3622/4496 amyleine ['INACTIVE'] [[0.02 0.98]]
3623/4496 sodium-glucoheptonate ['INACTIVE'] [[0. 1.]]
3624/4496 S-Sulfo-L-cysteine-sodium-salt ['INACTIVE'] [[0.01 0.99]]
3625/4496 TC-G-1000 ['INACTIVE'] [[0.01 0.99]]
3626/4496 isocarboxazid ['INACTIVE'] [[0. 1.]]
3627/4496 KN-93 ['INACTIVE'] [[0.05 0.95]]
3628/4496 ramatroban ['INACTIVE'] [[0.05 0.95]]
3629/4496 diosmetin ['INACTIVE'] [[0.01 0.99]]
3630/4496 7-hydroxy-DPAT ['INACTIVE'] [[0. 1.]]
3631/4496 I-CBP-112 ['INACTIVE'] [[0.09 0.91]]
3632/4496 SR-1078 ['INACTIVE'] [[0.02 0.98]]
3633/4496 URB597 ['INACTIVE'] [[0.04 0.96]]
3634/4496 rigosertib ['INACTIVE'] [[0.05 0.95]]
3635/4496 oxacyclohexadecan-2-one ['INACTIVE'] [[0.01 0.99]]
3636/4496 LY288513 ['INACTIVE'] [[0.02 0.98]]
3637/4496 hemomex-s ['INACTIVE'] [[0.02 0.98]]
3638/4496 XD-14 ['INACTIVE'] [[0.03 0.97]]
3639/4496 betaine ['INACTIVE'] [[0. 1.]]
3640/4496 NU-7441 ['INACTIVE'] [[0.05 0.95]]
3641/4496 BTT-3033 ['INACTIVE'] [[0.04 0.96]]
3642/4496 ampalex ['INACTIVE'] [[0.01 0.99]]
3643/4496 hexonic-acid ['INACTIVE'] [[0. 1.]]
3644/4496 pilsicainide ['INACTIVE'] [[0.01 0.99]]
3645/4496 thymol ['INACTIVE'] [[0. 1.]]
3646/4496 CX-516 ['INACTIVE'] [[0. 1.]]
3647/4496 sulfametopyrazine ['INACTIVE'] [[0. 1.]]
3648/4496 BX-912 ['INACTIVE'] [[0.03 0.97]]
3649/4496 reboxetine ['INACTIVE'] [[0.1 0.9]]
3650/4496 ER-50891 ['INACTIVE'] [[0.01 0.99]]
3651/4496 erlotinib ['INACTIVE'] [[0.03 0.97]]
3652/4496 CP-99994 ['INACTIVE'] [[0.05 0.95]]
3653/4496 piperacetazine ['INACTIVE'] [[0.02 0.98]]
3654/4496 piroximone ['INACTIVE'] [[0.03 0.97]]
3655/4496 isoxepac ['INACTIVE'] [[0.12 0.88]]
3656/4496 SP-420 ['INACTIVE'] [[0.05 0.95]]
3657/4496 L-asparagine-n-hydroxy ['INACTIVE'] [[0.01 0.99]]
3658/4496 valpromide ['INACTIVE'] [[0. 1.]]
3659/4496 meprednisone ['INACTIVE'] [[0.03 0.97]]
3660/4496 CID-5458317 ['INACTIVE'] [[0.03 0.97]]
3661/4496 methiopril ['INACTIVE'] [[0.01 0.99]]
3662/4496 AM-630 ['INACTIVE'] [[0.07 0.93]]
3663/4496 alprostadil ['INACTIVE'] [[0.04 0.96]]
3664/4496 TAK-875 ['INACTIVE'] [[0.04 0.96]]
3665/4496 didox ['INACTIVE'] [[0. 1.]]
3666/4496 scopine ['INACTIVE'] [[0.09 0.91]]
3667/4496 croconazole ['INACTIVE'] [[0.03 0.97]]
3668/4496 luteolin ['INACTIVE'] [[0. 1.]]
3669/4496 diclofenamide ['INACTIVE'] [[0.03 0.97]]
3670/4496 methyl-salicylate ['INACTIVE'] [[0.01 0.99]]
3671/4496 tiprenolol ['INACTIVE'] [[0.03 0.97]]
3672/4496 selexipag ['INACTIVE'] [[0.04 0.96]]
3673/4496 acifran ['INACTIVE'] [[0.01 0.99]]
3674/4496 AZD4282 ['INACTIVE'] [[0. 1.]]
3675/4496 delta-Tocotrienol ['INACTIVE'] [[0.02 0.98]]
3676/4496 orteronel ['INACTIVE'] [[0.03 0.97]]
3677/4496 SC-236 ['INACTIVE'] [[0.02 0.98]]
3678/4496 mepazine ['INACTIVE'] [[0.01 0.99]]
3679/4496 etretinate ['INACTIVE'] [[0.05 0.95]]
3680/4496 tropanyl-3,5-dimethylbenzoate ['INACTIVE'] [[0.02 0.98]]
3681/4496 casin ['INACTIVE'] [[0.05 0.95]]
3682/4496 bicifadine ['INACTIVE'] [[0.02 0.98]]
3683/4496 kainic-acid ['INACTIVE'] [[0.03 0.97]]
3684/4496 cimetropium ['INACTIVE'] [[0.02 0.98]]
3685/4496 centpropazine ['INACTIVE'] [[0.02 0.98]]
3686/4496 dazoxiben ['INACTIVE'] [[0.02 0.98]]
3687/4496 perifosine ['INACTIVE'] [[0.07 0.93]]
3688/4496 LY2140023 ['INACTIVE'] [[0.01 0.99]]
3689/4496 clotiapine ['INACTIVE'] [[0.02 0.98]]
3690/4496 enalaprilat ['INACTIVE'] [[0.04 0.96]]
3691/4496 MG-624 ['INACTIVE'] [[0.01 0.99]]
3692/4496 azapropazone ['INACTIVE'] [[0.04 0.96]]
3693/4496 salsolinol-1-carboxylic-acid ['INACTIVE'] [[0.04 0.96]]
3694/4496 imeglimin ['INACTIVE'] [[0.03 0.97]]
3695/4496 S-111 ['INACTIVE'] [[0.02 0.98]]
3696/4496 SRC-kinase-inhibitor-I ['INACTIVE'] [[0.07 0.93]]
3697/4496 SC-9 ['INACTIVE'] [[0.09 0.91]]
3698/4496 3,3'-diindolylmethane ['INACTIVE'] [[0.07 0.93]]
3699/4496 GMX1778 ['INACTIVE'] [[0.03 0.97]]
3700/4496 6-benzylaminopurine ['INACTIVE'] [[0. 1.]]
3701/4496 aminolevulinic-acid-benzyl-ester ['INACTIVE'] [[0.01 0.99]]
3702/4496 RGB-286638 ['INACTIVE'] [[0.11 0.89]]
3703/4496 bardoxolone-methyl ['INACTIVE'] [[0.07 0.93]]
3704/4496 ML-277 ['INACTIVE'] [[0.04 0.96]]
3705/4496 fenretinide ['INACTIVE'] [[0.02 0.98]]
3706/4496 pinocembrin ['INACTIVE'] [[0.01 0.99]]
3707/4496 API-001 ['INACTIVE'] [[0.05 0.95]]
3708/4496 stanozolol ['INACTIVE'] [[0.05 0.95]]
3709/4496 etamsylate ['INACTIVE'] [[0. 1.]]
3710/4496 LY450108 ['INACTIVE'] [[0.05 0.95]]
3711/4496 formoterol ['INACTIVE'] [[0.02 0.98]]
3712/4496 H-89 ['INACTIVE'] [[0.04 0.96]]
3713/4496 4-IBP ['INACTIVE'] [[0.03 0.97]]
3714/4496 NI-57 ['INACTIVE'] [[0.04 0.96]]
3715/4496 etifenin ['INACTIVE'] [[0.03 0.97]]
3716/4496 CS-110266 ['INACTIVE'] [[0.01 0.99]]
3717/4496 etilefrine ['INACTIVE'] [[0. 1.]]
3718/4496 glutathione-monoisopropyl-ester ['INACTIVE'] [[0.02 0.98]]
3719/4496 XL-147 ['INACTIVE'] [[0. 1.]]
3720/4496 phenylpiracetam ['INACTIVE'] [[0.02 0.98]]
3721/4496 liarozole ['INACTIVE'] [[0.02 0.98]]
3722/4496 LY223982 ['INACTIVE'] [[0.02 0.98]]
3723/4496 2,3-cis/exo-camphanediol ['INACTIVE'] [[0.01 0.99]]
3724/4496 WAY-316606 ['INACTIVE'] [[0.01 0.99]]
3725/4496 SB-756050 ['INACTIVE'] [[0.05 0.95]]
3726/4496 L-Theanine ['INACTIVE'] [[0. 1.]]
3727/4496 levobunolol ['INACTIVE'] [[0.03 0.97]]
3728/4496 beta-amyloid-synthesis-inhibitor ['INACTIVE'] [[0.06 0.94]]
3729/4496 promestriene ['INACTIVE'] [[0.02 0.98]]
3730/4496 glycitein ['INACTIVE'] [[0. 1.]]
3731/4496 N-MPPP ['INACTIVE'] [[0.02 0.98]]
3732/4496 ABT-751 ['INACTIVE'] [[0.03 0.97]]
3733/4496 cis-exo-camphanediol-2,3 ['INACTIVE'] [[0.01 0.99]]
3734/4496 2-ethoxybenzoic-acid ['INACTIVE'] [[0. 1.]]
3735/4496 levobunolol-(+) ['INACTIVE'] [[0.03 0.97]]
3736/4496 YM022 ['INACTIVE'] [[0.09 0.91]]
3737/4496 MCC950 ['INACTIVE'] [[0.01 0.99]]
3738/4496 phenserine ['INACTIVE'] [[0.03 0.97]]
3739/4496 alpha-methylhistamine-dihydrobromide-(S)-(+) ['INACTIVE'] [[0.01 0.99]]
3740/4496 dihydrotachysterol ['INACTIVE'] [[0. 1.]]
3741/4496 caramiphen ['INACTIVE'] [[0. 1.]]
3742/4496 AS-77 ['INACTIVE'] [[0.02 0.98]]
3743/4496 L-Cysteinesulfinic-acid ['INACTIVE'] [[0. 1.]]
3744/4496 levobunolol-(+/-) ['INACTIVE'] [[0.03 0.97]]
3745/4496 alpha-methylhistamine-dihydrobromide-(R)-(-) ['INACTIVE'] [[0.01 0.99]]
3746/4496 2-iodohippuric-acid ['INACTIVE'] [[0. 1.]]
3747/4496 DPI-201106 ['INACTIVE'] [[0.04 0.96]]
3748/4496 ASA-404 ['INACTIVE'] [[0.01 0.99]]
3749/4496 GW-438014A ['INACTIVE'] [[0.01 0.99]]
3750/4496 melevodopa ['INACTIVE'] [[0. 1.]]
3751/4496 eglumetad ['INACTIVE'] [[0.01 0.99]]
3752/4496 fumagillin ['INACTIVE'] [[0.05 0.95]]
3753/4496 FG-7142 ['INACTIVE'] [[0. 1.]]
3754/4496 etafenone ['INACTIVE'] [[0.01 0.99]]
3755/4496 aceclidine ['INACTIVE'] [[0.04 0.96]]
3756/4496 PCI-34051 ['INACTIVE'] [[0.03 0.97]]
3757/4496 ICA-121431 ['INACTIVE'] [[0.04 0.96]]
3758/4496 acetyl-11-keto-beta-boswellic-acid ['INACTIVE'] [[0.02 0.98]]
3759/4496 CK-636 ['INACTIVE'] [[0.04 0.96]]
3760/4496 dexamethasone-sodium-phosphate ['INACTIVE'] [[0.01 0.99]]
3761/4496 GR125487 ['INACTIVE'] [[0.11 0.89]]
3762/4496 4-phenolsulfonic-acid ['INACTIVE'] [[0.01 0.99]]
3763/4496 AM-404 ['INACTIVE'] [[0.02 0.98]]
3764/4496 zaldaride ['INACTIVE'] [[0.04 0.96]]
3765/4496 E-4031 ['INACTIVE'] [[0.04 0.96]]
3766/4496 arcyriaflavin-a ['INACTIVE'] [[0.05 0.95]]
3767/4496 Ro-25-6981 ['INACTIVE'] [[0.03 0.97]]
3768/4496 diazepam ['INACTIVE'] [[0.04 0.96]]
3769/4496 thiamet-g ['INACTIVE'] [[0.04 0.96]]
3770/4496 AKBA ['INACTIVE'] [[0.02 0.98]]
3771/4496 1,5-dicaffeoylquinic-acid ['INACTIVE'] [[0.02 0.98]]
3772/4496 threo-2-methylisocitrate-(DL) ['INACTIVE'] [[0.03 0.97]]
3773/4496 SU-4312 ['INACTIVE'] [[0.01 0.99]]
3774/4496 frentizole ['INACTIVE'] [[0.03 0.97]]
3775/4496 TTNPB ['INACTIVE'] [[0. 1.]]
3776/4496 gepefrine ['INACTIVE'] [[0.01 0.99]]
3777/4496 allicin ['INACTIVE'] [[0.01 0.99]]
3778/4496 safranal ['INACTIVE'] [[0.04 0.96]]
3779/4496 3PO ['INACTIVE'] [[0.01 0.99]]
3780/4496 TG-101348 ['INACTIVE'] [[0.06 0.94]]
3781/4496 5,7-Dihydroxy-2-(4-methoxyphenyl)-8-(3-methyl-2-buten-1-yl)-2H-chromene-3,4-dione ['INACTIVE'] [[0.01 0.99]]
3782/4496 moclobemide ['INACTIVE'] [[0. 1.]]
3783/4496 L-glutamic-acid ['INACTIVE'] [[0. 1.]]
3784/4496 L-690330 ['INACTIVE'] [[0. 1.]]
3785/4496 BAZ2-ICR ['INACTIVE'] [[0.12 0.88]]
3786/4496 NU-7026 ['INACTIVE'] [[0.03 0.97]]
3787/4496 menatetrenone ['INACTIVE'] [[0. 1.]]
3788/4496 etidronic-acid ['INACTIVE'] [[0. 1.]]
3789/4496 SBHA ['INACTIVE'] [[0.02 0.98]]
3790/4496 ST-91 ['INACTIVE'] [[0.05 0.95]]
3791/4496 obeticholic-acid ['INACTIVE'] [[0.01 0.99]]
3792/4496 copanlisib ['INACTIVE'] [[0.13 0.87]]
3793/4496 etilevodopa ['INACTIVE'] [[0.01 0.99]]
3794/4496 ethoxyquin ['INACTIVE'] [[0. 1.]]
3795/4496 rostafuroxine ['INACTIVE'] [[0. 1.]]
3796/4496 oxaloacetate ['INACTIVE'] [[0. 1.]]
3797/4496 RN-1734 ['INACTIVE'] [[0.03 0.97]]
3798/4496 SKF-77434 ['INACTIVE'] [[0.04 0.96]]
3799/4496 alpha-Asarone ['INACTIVE'] [[0.01 0.99]]
3800/4496 SAR-245409 ['INACTIVE'] [[0.01 0.99]]
3801/4496 INT-747 ['INACTIVE'] [[0.01 0.99]]
3802/4496 isobutamben ['INACTIVE'] [[0. 1.]]
3803/4496 elafibranor ['INACTIVE'] [[0.05 0.95]]
3804/4496 rucinol ['INACTIVE'] [[0.01 0.99]]
3805/4496 propacetamol ['INACTIVE'] [[0.01 0.99]]
3806/4496 ONO-4817 ['INACTIVE'] [[0.06 0.94]]
3807/4496 ID-1101 ['INACTIVE'] [[0. 1.]]
3808/4496 elactocin ['INACTIVE'] [[0.11 0.89]]
3809/4496 RG108 ['INACTIVE'] [[0.03 0.97]]
3810/4496 tianeptine ['INACTIVE'] [[0.06 0.94]]
3811/4496 orantinib ['INACTIVE'] [[0. 1.]]
3812/4496 4-pyrimidinecarbonitrile ['INACTIVE'] [[0. 1.]]
3813/4496 alanine ['INACTIVE'] [[0. 1.]]
3814/4496 CP-640186 ['INACTIVE'] [[0.08 0.92]]
3815/4496 LY2606368 ['INACTIVE'] [[0.08 0.92]]
3816/4496 anisomycin ['INACTIVE'] [[0.03 0.97]]
3817/4496 L-alanine ['INACTIVE'] [[0. 1.]]
3818/4496 ZAMI-633 ['INACTIVE'] [[0. 1.]]
3819/4496 efaproxiral ['INACTIVE'] [[0.01 0.99]]
3820/4496 cyamemazine ['INACTIVE'] [[0. 1.]]
3821/4496 AZ-12080282 ['INACTIVE'] [[0.02 0.98]]
3822/4496 metixene ['INACTIVE'] [[0.02 0.98]]
3823/4496 epomediol ['INACTIVE'] [[0.02 0.98]]
3824/4496 trans-4-Hydroxycrotonic-acid ['INACTIVE'] [[0. 1.]]
3825/4496 miglustat ['INACTIVE'] [[0.02 0.98]]
3826/4496 OAC1 ['INACTIVE'] [[0. 1.]]
3827/4496 INCA-6 ['INACTIVE'] [[0.01 0.99]]
3828/4496 AGI-6780 ['INACTIVE'] [[0.03 0.97]]
3829/4496 cimaterol ['INACTIVE'] [[0.08 0.92]]
3830/4496 costunolide ['INACTIVE'] [[0.03 0.97]]
3831/4496 BE-2254 ['INACTIVE'] [[0.03 0.97]]
3832/4496 WAY-170523 ['INACTIVE'] [[0.06 0.94]]
3833/4496 brolitene ['INACTIVE'] [[0. 1.]]
3834/4496 PPT ['INACTIVE'] [[0.02 0.98]]
3835/4496 GSK2194069 ['INACTIVE'] [[0.05 0.95]]
3836/4496 CUDC-101 ['INACTIVE'] [[0.03 0.97]]
3837/4496 GW-542573X ['INACTIVE'] [[0.01 0.99]]
3838/4496 AG-556 ['INACTIVE'] [[0. 1.]]
3839/4496 oxotremorine-m ['INACTIVE'] [[0.06 0.94]]
3840/4496 impentamine ['INACTIVE'] [[0.03 0.97]]
3841/4496 elbasvir ['INACTIVE'] [[0.08 0.92]]
3842/4496 arctigenin ['INACTIVE'] [[0.04 0.96]]
3843/4496 HMN-214 ['INACTIVE'] [[0.11 0.89]]
3844/4496 tyrphostin-AG-99 ['INACTIVE'] [[0. 1.]]
3845/4496 testosterone-undecanoate ['INACTIVE'] [[0. 1.]]
3846/4496 N-oxydiethylenebenzothiazole-2-sulfenamide ['INACTIVE'] [[0.08 0.92]]
3847/4496 R-96544 ['INACTIVE'] [[0.03 0.97]]
3848/4496 benzoin ['INACTIVE'] [[0.01 0.99]]
3849/4496 esmolol ['INACTIVE'] [[0.01 0.99]]
3850/4496 loxapine ['INACTIVE'] [[0.02 0.98]]
3851/4496 estradiol-acetate ['INACTIVE'] [[0. 1.]]
3852/4496 immepip ['INACTIVE'] [[0.03 0.97]]
3853/4496 ML-786 ['INACTIVE'] [[0.06 0.94]]
3854/4496 spiroxatrine ['INACTIVE'] [[0.02 0.98]]
3855/4496 alvimopan ['INACTIVE'] [[0.05 0.95]]
3856/4496 vernakalant ['INACTIVE'] [[0.07 0.93]]
3857/4496 carebastine ['INACTIVE'] [[0.02 0.98]]
3858/4496 tacedinaline ['INACTIVE'] [[0.01 0.99]]
3859/4496 amfepramone ['INACTIVE'] [[0.02 0.98]]
3860/4496 gabapentin-enacarbil ['INACTIVE'] [[0.03 0.97]]
3861/4496 pyrethrins ['INACTIVE'] [[0.02 0.98]]
3862/4496 levo-phencynonate ['INACTIVE'] [[0.03 0.97]]
3863/4496 daclatasvir ['INACTIVE'] [[0.08 0.92]]
3864/4496 tartaric-acid ['INACTIVE'] [[0. 1.]]
3865/4496 osthol ['INACTIVE'] [[0.01 0.99]]
3866/4496 AG-555 ['INACTIVE'] [[0. 1.]]
3867/4496 YM-511 ['INACTIVE'] [[0.1 0.9]]
3868/4496 somantadine ['INACTIVE'] [[0.05 0.95]]
3869/4496 sparfosate ['INACTIVE'] [[0.03 0.97]]
3870/4496 o-acetyl-L-serine ['INACTIVE'] [[0.01 0.99]]
3871/4496 ilepcimide ['INACTIVE'] [[0.01 0.99]]
3872/4496 4-methylhistamine ['INACTIVE'] [[0.01 0.99]]
3873/4496 dehydroepiandrosterone-sulfate ['INACTIVE'] [[0.02 0.98]]
3874/4496 ifenprodil ['INACTIVE'] [[0.06 0.94]]
3875/4496 spaglumic-acid ['INACTIVE'] [[0.03 0.97]]
3876/4496 bromebric-acid ['INACTIVE'] [[0.02 0.98]]
3877/4496 GDC-0941 ['INACTIVE'] [[0.03 0.97]]
3878/4496 butorphanol-(+)-tartrate ['INACTIVE'] [[0. 1.]]
3879/4496 ZD-7114 ['INACTIVE'] [[0. 1.]]
3880/4496 MPI-0479605 ['INACTIVE'] [[0.02 0.98]]
3881/4496 STF-083010 ['INACTIVE'] [[0.05 0.95]]
3882/4496 desoxycorticosterone-pivalate ['INACTIVE'] [[0. 1.]]
3883/4496 meldonium ['INACTIVE'] [[0.01 0.99]]
3884/4496 SYM-2081 ['INACTIVE'] [[0. 1.]]
3885/4496 tolamolol ['INACTIVE'] [[0.03 0.97]]
3886/4496 CH55 ['INACTIVE'] [[0. 1.]]
3887/4496 catharanthine ['INACTIVE'] [[0.02 0.98]]
3888/4496 16,16-dimethylprostaglandin-e2 ['INACTIVE'] [[0.04 0.96]]
3889/4496 ETC-1002 ['INACTIVE'] [[0.01 0.99]]
3890/4496 SR-11302 ['INACTIVE'] [[0.05 0.95]]
3891/4496 TWS-119 ['INACTIVE'] [[0.05 0.95]]
3892/4496 icaritin ['INACTIVE'] [[0.01 0.99]]
3893/4496 PPY-A ['INACTIVE'] [[0.05 0.95]]
3894/4496 SB-366791 ['INACTIVE'] [[0.01 0.99]]
3895/4496 ACPC ['INACTIVE'] [[0.01 0.99]]
3896/4496 retinaldehyde ['INACTIVE'] [[0. 1.]]
3897/4496 oxprenolol ['INACTIVE'] [[0. 1.]]
3898/4496 hexylcaine ['INACTIVE'] [[0.01 0.99]]
3899/4496 DMAB-anabaseine ['INACTIVE'] [[0.06 0.94]]
3900/4496 MRS2578 ['INACTIVE'] [[0.06 0.94]]
3901/4496 BW-723C86 ['INACTIVE'] [[0.04 0.96]]
3902/4496 tromaril ['INACTIVE'] [[0. 1.]]
3903/4496 nomegestrol-acetate ['INACTIVE'] [[0.01 0.99]]
3904/4496 PGL5001 ['INACTIVE'] [[0.07 0.93]]
3905/4496 exifone ['INACTIVE'] [[0. 1.]]
3906/4496 C-751 ['INACTIVE'] [[0.06 0.94]]
3907/4496 budipine ['INACTIVE'] [[0.01 0.99]]
3908/4496 4-P-PDOT ['INACTIVE'] [[0.03 0.97]]
3909/4496 CPI-1189 ['INACTIVE'] [[0.02 0.98]]
3910/4496 cabagin ['INACTIVE'] [[0. 1.]]
3911/4496 sodium-oxybate ['INACTIVE'] [[0. 1.]]
3912/4496 4-propylbenzoic-acid ['INACTIVE'] [[0. 1.]]
3913/4496 diphenidol ['INACTIVE'] [[0.03 0.97]]
3914/4496 KD025 ['INACTIVE'] [[0.08 0.92]]
3915/4496 antioxine ['INACTIVE'] [[0. 1.]]
3916/4496 sarpogrelate ['INACTIVE'] [[0.04 0.96]]
3917/4496 mofezolac ['INACTIVE'] [[0.01 0.99]]
3918/4496 meptazinol ['INACTIVE'] [[0.01 0.99]]
3919/4496 MR-948 ['INACTIVE'] [[0.04 0.96]]
3920/4496 BMS-309403 ['INACTIVE'] [[0.02 0.98]]
3921/4496 GR-135531 ['INACTIVE'] [[0.03 0.97]]
3922/4496 formestane ['INACTIVE'] [[0.01 0.99]]
3923/4496 SC-10 ['INACTIVE'] [[0.04 0.96]]
3924/4496 CVT-10216 ['INACTIVE'] [[0. 1.]]
3925/4496 nandrolone-decanoate ['INACTIVE'] [[0.01 0.99]]
3926/4496 NCS-382 ['INACTIVE'] [[0. 1.]]
3927/4496 1-EBIO ['INACTIVE'] [[0.01 0.99]]
3928/4496 3-hydroxy-3-phenylpentanamide ['INACTIVE'] [[0.01 0.99]]
3929/4496 digitoxigenin ['INACTIVE'] [[0.02 0.98]]
3930/4496 CGP-52411 ['INACTIVE'] [[0.12 0.88]]
3931/4496 MN-64 ['INACTIVE'] [[0. 1.]]
3932/4496 PX-12 ['INACTIVE'] [[0. 1.]]
3933/4496 pramiracetam ['INACTIVE'] [[0.02 0.98]]
3934/4496 FPL-62064 ['INACTIVE'] [[0.04 0.96]]
3935/4496 diftalone ['INACTIVE'] [[0.03 0.97]]
3936/4496 fosphenytoin ['INACTIVE'] [[0.02 0.98]]
3937/4496 AZ20 ['INACTIVE'] [[0.04 0.96]]
3938/4496 epoprostenol ['INACTIVE'] [[0.03 0.97]]
3939/4496 A-867744 ['INACTIVE'] [[0.03 0.97]]
3940/4496 clinofibrate ['INACTIVE'] [[0.04 0.96]]
3941/4496 adiporon ['INACTIVE'] [[0. 1.]]
3942/4496 MDL-11939 ['INACTIVE'] [[0. 1.]]
3943/4496 NSC-632839 ['INACTIVE'] [[0.02 0.98]]
3944/4496 buthionine-sulfoximine ['INACTIVE'] [[0.01 0.99]]
3945/4496 p-dimethylinamyl-benzoate ['INACTIVE'] [[0.02 0.98]]
3946/4496 limaprost-alfadex ['INACTIVE'] [[0.13 0.87]]
3947/4496 droloxifene ['INACTIVE'] [[0. 1.]]
3948/4496 guanidinopropionic-acid ['INACTIVE'] [[0. 1.]]
3949/4496 beta-hydroxy-beta-methylbutyrate ['INACTIVE'] [[0. 1.]]
3950/4496 CP-471474 ['INACTIVE'] [[0.02 0.98]]
3951/4496 2-oxopropanoate ['INACTIVE'] [[0. 1.]]
3952/4496 GW-311616 ['INACTIVE'] [[0.11 0.89]]
3953/4496 moprolol ['INACTIVE'] [[0. 1.]]
3954/4496 iopodic-acid ['INACTIVE'] [[0.03 0.97]]
3955/4496 atglistatin ['INACTIVE'] [[0.05 0.95]]
3956/4496 arecaidine-propargyl-ester ['INACTIVE'] [[0.01 0.99]]
3957/4496 Y-27152 ['INACTIVE'] [[0.05 0.95]]
3958/4496 diiodothyropropionic-acid ['INACTIVE'] [[0.01 0.99]]
3959/4496 4-tert-butylphenol ['INACTIVE'] [[0. 1.]]
3960/4496 valnoctamide ['INACTIVE'] [[0. 1.]]
3961/4496 deoxyarbutin ['INACTIVE'] [[0.01 0.99]]
3962/4496 isoflurophate ['INACTIVE'] [[0. 1.]]
3963/4496 6,_7-dehydro-17-acetoxy-progesterone ['INACTIVE'] [[0. 1.]]
3964/4496 AVN-944 ['INACTIVE'] [[0.05 0.95]]
3965/4496 CP-775146 ['INACTIVE'] [[0.03 0.97]]
3966/4496 beta-elemene ['INACTIVE'] [[0.01 0.99]]
3967/4496 sacubitril ['INACTIVE'] [[0.03 0.97]]
3968/4496 elemene ['INACTIVE'] [[0.01 0.99]]
3969/4496 chlorotrianisene ['INACTIVE'] [[0. 1.]]
3970/4496 tiaprofenic-acid ['INACTIVE'] [[0. 1.]]
3971/4496 bopindolol ['INACTIVE'] [[0.04 0.96]]
3972/4496 carbenoxolone ['INACTIVE'] [[0.01 0.99]]
3973/4496 naltriben ['INACTIVE'] [[0.06 0.94]]
3974/4496 quinpirol-(-) ['INACTIVE'] [[0.02 0.98]]
3975/4496 L-165041 ['INACTIVE'] [[0.03 0.97]]
3976/4496 APY-29 ['INACTIVE'] [[0.12 0.88]]
3977/4496 alectinib ['INACTIVE'] [[0.1 0.9]]
3978/4496 bardoxolone ['INACTIVE'] [[0.06 0.94]]
3979/4496 suxibuzone ['INACTIVE'] [[0.02 0.98]]
3980/4496 BIBR-1532 ['INACTIVE'] [[0.05 0.95]]
3981/4496 5-carboxamidotryptamine ['INACTIVE'] [[0. 1.]]
3982/4496 PKI-166 ['INACTIVE'] [[0.02 0.98]]
3983/4496 lauric-acid ['INACTIVE'] [[0.03 0.97]]
3984/4496 oxotremorine-sesquifumarate ['INACTIVE'] [[0.04 0.96]]
3985/4496 boronophenylalanine ['INACTIVE'] [[0. 1.]]
3986/4496 AK-7 ['INACTIVE'] [[0.01 0.99]]
3987/4496 PI-828 ['INACTIVE'] [[0.04 0.96]]
3988/4496 cariporide ['INACTIVE'] [[0. 1.]]
3989/4496 FH1 ['INACTIVE'] [[0.03 0.97]]
3990/4496 ropivacaine ['INACTIVE'] [[0.01 0.99]]
3991/4496 alosetron ['INACTIVE'] [[0.03 0.97]]
3992/4496 basic-fuchsin ['INACTIVE'] [[0.03 0.97]]
3993/4496 uridine-5'-triphosphate ['INACTIVE'] [[0.06 0.94]]
3994/4496 pivanex ['INACTIVE'] [[0. 1.]]
3995/4496 PMPA ['INACTIVE'] [[0.01 0.99]]
3996/4496 INS316 ['INACTIVE'] [[0.06 0.94]]
3997/4496 xipamide ['INACTIVE'] [[0.02 0.98]]
3998/4496 ML-365 ['INACTIVE'] [[0.01 0.99]]
3999/4496 nexturastat-a ['INACTIVE'] [[0.04 0.96]]
4000/4496 sulmetozine ['INACTIVE'] [[0.02 0.98]]
4001/4496 vanillylacetone ['INACTIVE'] [[0. 1.]]
4002/4496 MRS3777 ['INACTIVE'] [[0.01 0.99]]
4003/4496 GR46611 ['INACTIVE'] [[0. 1.]]
4004/4496 pipotiazine-palmitate ['INACTIVE'] [[0.08 0.92]]
4005/4496 brefeldin-a ['INACTIVE'] [[0.04 0.96]]
4006/4496 oxyfedrine ['INACTIVE'] [[0.05 0.95]]
4007/4496 2-TEDC ['INACTIVE'] [[0.02 0.98]]
4008/4496 aprindine ['INACTIVE'] [[0.05 0.95]]
4009/4496 hydroxyprogesterone-acetate ['INACTIVE'] [[0. 1.]]
4010/4496 scopolamine-n-oxide ['INACTIVE'] [[0.03 0.97]]
4011/4496 2-oxoglutaric-acid ['INACTIVE'] [[0. 1.]]
4012/4496 practolol ['INACTIVE'] [[0. 1.]]
4013/4496 tributyrin ['INACTIVE'] [[0.02 0.98]]
4014/4496 higenamine ['INACTIVE'] [[0.02 0.98]]
4015/4496 sodium-nitroprusside ['INACTIVE'] [[0.01 0.99]]
4016/4496 ameltolide ['INACTIVE'] [[0. 1.]]
4017/4496 Y16 ['INACTIVE'] [[0.01 0.99]]
4018/4496 lacidipine ['INACTIVE'] [[0.03 0.97]]
4019/4496 nebracetam ['INACTIVE'] [[0.04 0.96]]
4020/4496 afimoxifene ['INACTIVE'] [[0. 1.]]
4021/4496 dinoprostone ['INACTIVE'] [[0.05 0.95]]
4022/4496 CH-5183284 ['INACTIVE'] [[0.1 0.9]]
4023/4496 serdemetan ['INACTIVE'] [[0.03 0.97]]
4024/4496 Ro-67-7476 ['INACTIVE'] [[0.04 0.96]]
4025/4496 benperidol ['INACTIVE'] [[0.01 0.99]]
4026/4496 altinicline ['INACTIVE'] [[0. 1.]]
4027/4496 minerval ['INACTIVE'] [[0.05 0.95]]
4028/4496 L-365260 ['INACTIVE'] [[0.06 0.94]]
4029/4496 GTS21 ['INACTIVE'] [[0.03 0.97]]
4030/4496 ZD-2079 ['INACTIVE'] [[0.04 0.96]]
4031/4496 geranyl-farnesylacetate ['INACTIVE'] [[0. 1.]]
4032/4496 pravastatin ['INACTIVE'] [[0.05 0.95]]
4033/4496 riodoxol ['INACTIVE'] [[0.01 0.99]]
4034/4496 dihydroxyphenylglycine ['INACTIVE'] [[0. 1.]]
4035/4496 isometheptene-mucate ['INACTIVE'] [[0.03 0.97]]
4036/4496 TCS-2314 ['INACTIVE'] [[0.05 0.95]]
4037/4496 Y-26763 ['INACTIVE'] [[0.01 0.99]]
4038/4496 3,5-DHPG-(S) ['INACTIVE'] [[0. 1.]]
4039/4496 mitiglinide ['INACTIVE'] [[0.02 0.98]]
4040/4496 suprofen ['INACTIVE'] [[0.03 0.97]]
4041/4496 cyt387 ['INACTIVE'] [[0.01 0.99]]
4042/4496 AGN-192403 ['INACTIVE'] [[0.03 0.97]]
4043/4496 IDRA-21 ['INACTIVE'] [[0.03 0.97]]
4044/4496 PFK-015 ['INACTIVE'] [[0.04 0.96]]
4045/4496 SIB-1553A ['INACTIVE'] [[0.03 0.97]]
4046/4496 se-methylselenocysteine ['INACTIVE'] [[0. 1.]]
4047/4496 ML-141 ['INACTIVE'] [[0.02 0.98]]
4048/4496 3-anilinopropan-1-ol ['INACTIVE'] [[0. 1.]]
4049/4496 1-(1,2-Diphenylethyl)piperidine-(+/-) ['INACTIVE'] [[0. 1.]]
4050/4496 RU-24969 ['INACTIVE'] [[0.04 0.96]]
4051/4496 ibutamoren ['INACTIVE'] [[0.05 0.95]]
4052/4496 corticosterone ['INACTIVE'] [[0. 1.]]
4053/4496 testosterone-enanthate ['INACTIVE'] [[0. 1.]]
4054/4496 dolasetron ['INACTIVE'] [[0.04 0.96]]
4055/4496 PD-407824 ['INACTIVE'] [[0.03 0.97]]
4056/4496 rimexolone ['INACTIVE'] [[0. 1.]]
4057/4496 DMH4 ['INACTIVE'] [[0.09 0.91]]
4058/4496 alizapride ['INACTIVE'] [[0.05 0.95]]
4059/4496 algestone-acetophenide ['INACTIVE'] [[0.04 0.96]]
4060/4496 eperisone ['INACTIVE'] [[0.02 0.98]]
4061/4496 rilpivirine ['INACTIVE'] [[0.12 0.88]]
4062/4496 LM11A-31 ['INACTIVE'] [[0.01 0.99]]
4063/4496 ITD-1 ['INACTIVE'] [[0.04 0.96]]
4064/4496 dromostanolone-propionate ['INACTIVE'] [[0. 1.]]
4065/4496 AG-1024 ['INACTIVE'] [[0. 1.]]
4066/4496 dinoprost ['INACTIVE'] [[0.03 0.97]]
4067/4496 topiroxostat ['INACTIVE'] [[0.02 0.98]]
4068/4496 olprinone ['INACTIVE'] [[0.06 0.94]]
4069/4496 ML-9 ['INACTIVE'] [[0.04 0.96]]
4070/4496 conivaptan ['INACTIVE'] [[0.03 0.97]]
4071/4496 N-alpha-Methylhistamine-dihydrochloride ['INACTIVE'] [[0.04 0.96]]
4072/4496 L-755507 ['INACTIVE'] [[0.08 0.92]]
4073/4496 GTP-14564 ['INACTIVE'] [[0. 1.]]
4074/4496 hesperadin ['INACTIVE'] [[0.09 0.91]]
4075/4496 ruxolitinib ['INACTIVE'] [[0.03 0.97]]
4076/4496 erbstatin-analog ['INACTIVE'] [[0. 1.]]
4077/4496 ruxolitinib-(S) ['INACTIVE'] [[0.03 0.97]]
4078/4496 vigabatrin ['INACTIVE'] [[0.01 0.99]]
4079/4496 tofacitinib ['INACTIVE'] [[0.12 0.88]]
4080/4496 panobinostat ['INACTIVE'] [[0.04 0.96]]
4081/4496 parbendazole ['INACTIVE'] [[0. 1.]]
4082/4496 malic-acid ['INACTIVE'] [[0.02 0.98]]
4083/4496 SDZ-21009 ['INACTIVE'] [[0.02 0.98]]
4084/4496 2-methyl-5-hydroxytryptamine ['INACTIVE'] [[0.01 0.99]]
4085/4496 oxandrolone ['INACTIVE'] [[0.02 0.98]]
4086/4496 nafadotride ['INACTIVE'] [[0.04 0.96]]
4087/4496 combretastatin-A-4 ['INACTIVE'] [[0. 1.]]
4088/4496 PF-04885614 ['INACTIVE'] [[0.04 0.96]]
4089/4496 enbucrilate ['INACTIVE'] [[0.02 0.98]]
4090/4496 rofecoxib ['INACTIVE'] [[0.04 0.96]]
4091/4496 gedunin ['INACTIVE'] [[0. 1.]]
4092/4496 N-cyanopyrrolidine ['INACTIVE'] [[0.01 0.99]]
4093/4496 A-769662 ['INACTIVE'] [[0.01 0.99]]
4094/4496 terutroban ['INACTIVE'] [[0.01 0.99]]
4095/4496 7-keto-DHEA ['INACTIVE'] [[0.02 0.98]]
4096/4496 pincainide ['INACTIVE'] [[0. 1.]]
4097/4496 pinanediol ['INACTIVE'] [[0.01 0.99]]
4098/4496 arundic-acid ['INACTIVE'] [[0.01 0.99]]
4099/4496 pentamidine ['INACTIVE'] [[0.05 0.95]]
4100/4496 hydroxyamphetamine ['INACTIVE'] [[0.01 0.99]]
4101/4496 beta-Hydroxybutyrate ['INACTIVE'] [[0. 1.]]
4102/4496 2,3-cis/exo-pinanediol ['INACTIVE'] [[0.01 0.99]]
4103/4496 amfenac ['INACTIVE'] [[0.02 0.98]]
4104/4496 phenylbenzimidazole-sulfonic-acid ['INACTIVE'] [[0.04 0.96]]
4105/4496 lofemizole ['INACTIVE'] [[0. 1.]]
4106/4496 albendazole ['INACTIVE'] [[0.02 0.98]]
4107/4496 elesclomol ['INACTIVE'] [[0.02 0.98]]
4108/4496 immethridine ['INACTIVE'] [[0.05 0.95]]
4109/4496 DC-260126 ['INACTIVE'] [[0.02 0.98]]
4110/4496 ZM-39923 ['INACTIVE'] [[0.01 0.99]]
4111/4496 furamidine ['INACTIVE'] [[0.08 0.92]]
4112/4496 crisaborole ['INACTIVE'] [[0.1 0.9]]
4113/4496 zeatin ['INACTIVE'] [[0. 1.]]
4114/4496 oxiperomide ['INACTIVE'] [[0.04 0.96]]
4115/4496 mandelic-acid ['INACTIVE'] [[0. 1.]]
4116/4496 pivagabine ['INACTIVE'] [[0.01 0.99]]
4117/4496 creatinol-phosphate ['INACTIVE'] [[0.02 0.98]]
4118/4496 N-acetylcarnosine ['INACTIVE'] [[0.02 0.98]]
4119/4496 dazmegrel ['INACTIVE'] [[0.05 0.95]]
4120/4496 methiazole ['INACTIVE'] [[0. 1.]]
4121/4496 PF-915275 ['INACTIVE'] [[0.05 0.95]]
4122/4496 A922500 ['INACTIVE'] [[0.02 0.98]]
4123/4496 CPA-inhibitor ['INACTIVE'] [[0. 1.]]
4124/4496 A-922500 ['INACTIVE'] [[0.02 0.98]]
4125/4496 SCP-1 ['INACTIVE'] [[0.02 0.98]]
4126/4496 ricinoleic-acid ['INACTIVE'] [[0.06 0.94]]
4127/4496 trolox ['INACTIVE'] [[0.02 0.98]]
4128/4496 S-Isopropylisothiourea ['INACTIVE'] [[0.01 0.99]]
4129/4496 trans-10,cis-12-Conjugated-linoleic-acid ['INACTIVE'] [[0.06 0.94]]
4130/4496 sirtinol ['INACTIVE'] [[0.06 0.94]]
4131/4496 sodium-danshensu ['INACTIVE'] [[0. 1.]]
4132/4496 bilastine ['INACTIVE'] [[0.07 0.93]]
4133/4496 naltrindole ['INACTIVE'] [[0.07 0.93]]
4134/4496 mesna ['INACTIVE'] [[0. 1.]]
4135/4496 4-iodo-L-phenylalanine ['INACTIVE'] [[0.03 0.97]]
4136/4496 alfaxalone ['INACTIVE'] [[0. 1.]]
4137/4496 RP67580 ['INACTIVE'] [[0.05 0.95]]
4138/4496 sodium-monofluorophosphate ['INACTIVE'] [[0.01 0.99]]
4139/4496 bevirimat-dimeglumine ['INACTIVE'] [[0. 1.]]
4140/4496 LY294002 ['INACTIVE'] [[0.04 0.96]]
4141/4496 metirosine ['INACTIVE'] [[0. 1.]]
4142/4496 m-3M3FBS ['INACTIVE'] [[0.05 0.95]]
4143/4496 enoximone ['INACTIVE'] [[0.02 0.98]]
4144/4496 CZC-54252 ['INACTIVE'] [[0.08 0.92]]
4145/4496 desoxycortone ['INACTIVE'] [[0. 1.]]
4146/4496 hippuric-acid ['INACTIVE'] [[0. 1.]]
4147/4496 3-(4-methylbenzylidene)camphor ['INACTIVE'] [[0.01 0.99]]
4148/4496 doconexent-ethyl-ester ['INACTIVE'] [[0.01 0.99]]
4149/4496 tie2-kinase-inhibitor ['INACTIVE'] [[0.06 0.94]]
4150/4496 U-0521 ['INACTIVE'] [[0. 1.]]
4151/4496 1-ethyl-2-pyrrolidone ['INACTIVE'] [[0. 1.]]
4152/4496 dihydrotestosterone ['INACTIVE'] [[0. 1.]]
4153/4496 fmoc-l-leucine ['INACTIVE'] [[0.03 0.97]]
4154/4496 flavoxate ['INACTIVE'] [[0.04 0.96]]
4155/4496 cis-9,trans-11-Conjugated-linoleic-acid ['INACTIVE'] [[0.06 0.94]]
4156/4496 NPC-15199 ['INACTIVE'] [[0.03 0.97]]
4157/4496 CGP-37849 ['INACTIVE'] [[0. 1.]]
4158/4496 kenpaullone ['INACTIVE'] [[0.06 0.94]]
4159/4496 trans-2-Undecenoic-acid ['INACTIVE'] [[0.05 0.95]]
4160/4496 U-104 ['INACTIVE'] [[0. 1.]]
4161/4496 ATB-346 ['INACTIVE'] [[0.03 0.97]]
4162/4496 cyanocobalamin ['INACTIVE'] [[0.14 0.86]]
4163/4496 succinic-acid ['INACTIVE'] [[0. 1.]]
4164/4496 PCO-400 ['INACTIVE'] [[0. 1.]]
4165/4496 TCN201 ['INACTIVE'] [[0.01 0.99]]
4166/4496 carnosine ['INACTIVE'] [[0.01 0.99]]
4167/4496 nafamostat ['INACTIVE'] [[0.02 0.98]]
4168/4496 3-methyl-GABA ['INACTIVE'] [[0. 1.]]
4169/4496 sulforaphane ['INACTIVE'] [[0.06 0.94]]
4170/4496 2-methylimidazole ['INACTIVE'] [[0.01 0.99]]
4171/4496 clopamide ['INACTIVE'] [[0.05 0.95]]
4172/4496 o-mercapto-benzoic-acid ['INACTIVE'] [[0. 1.]]
4173/4496 gidazepam ['INACTIVE'] [[0.04 0.96]]
4174/4496 CDPPB ['INACTIVE'] [[0.04 0.96]]
4175/4496 oxfenicine ['INACTIVE'] [[0. 1.]]
4176/4496 RU-28318 ['INACTIVE'] [[0. 1.]]
4177/4496 tesaglitazar ['INACTIVE'] [[0.01 0.99]]
4178/4496 levallorphan ['INACTIVE'] [[0.01 0.99]]
4179/4496 galeterone ['INACTIVE'] [[0.04 0.96]]
4180/4496 sobetirome ['INACTIVE'] [[0.01 0.99]]
4181/4496 chloramine-t ['INACTIVE'] [[0.01 0.99]]
4182/4496 CHR-6494 ['INACTIVE'] [[0.05 0.95]]
4183/4496 CO-101244 ['INACTIVE'] [[0.01 0.99]]
4184/4496 isosteviol ['INACTIVE'] [[0.01 0.99]]
4185/4496 BMS-191095 ['INACTIVE'] [[0.04 0.96]]
4186/4496 tyrphostin-AG-835 ['INACTIVE'] [[0.05 0.95]]
4187/4496 palmitoleic-acid ['INACTIVE'] [[0.06 0.94]]
4188/4496 pyrrolidine-dithiocarbamate ['INACTIVE'] [[0.01 0.99]]
4189/4496 N-acetyl-tyrosine ['INACTIVE'] [[0. 1.]]
4190/4496 oxymetholone ['INACTIVE'] [[0.02 0.98]]
4191/4496 daminozide ['INACTIVE'] [[0. 1.]]
4192/4496 eltanolone ['INACTIVE'] [[0. 1.]]
4193/4496 vildagliptin ['INACTIVE'] [[0.03 0.97]]
4194/4496 benzotript ['INACTIVE'] [[0.02 0.98]]
4195/4496 L-798,106 ['INACTIVE'] [[0.09 0.91]]
4196/4496 iloprost ['INACTIVE'] [[0.05 0.95]]
4197/4496 sulfaquinoxaline ['INACTIVE'] [[0.02 0.98]]
4198/4496 medetomidine ['INACTIVE'] [[0.02 0.98]]
4199/4496 pregnanolone ['INACTIVE'] [[0. 1.]]
4200/4496 17-PA ['INACTIVE'] [[0.01 0.99]]
4201/4496 NPC-01 ['INACTIVE'] [[0.01 0.99]]
4202/4496 GR-235 ['INACTIVE'] [[0.02 0.98]]
4203/4496 phenyl-salicylate ['INACTIVE'] [[0. 1.]]
4204/4496 norelgestromin ['INACTIVE'] [[0.01 0.99]]
4205/4496 sobrepin ['INACTIVE'] [[0. 1.]]
4206/4496 endoxifen ['INACTIVE'] [[0. 1.]]
4207/4496 PD1-PDL-inhibitor-1 ['INACTIVE'] [[0.05 0.95]]
4208/4496 dacinostat ['INACTIVE'] [[0.01 0.99]]
4209/4496 bucillamine ['INACTIVE'] [[0. 1.]]
4210/4496 propyl-benzoate ['INACTIVE'] [[0. 1.]]
4211/4496 DMPS ['INACTIVE'] [[0. 1.]]
4212/4496 CC-401 ['INACTIVE'] [[0.04 0.96]]
4213/4496 plurisin-#1 ['INACTIVE'] [[0.03 0.97]]
4214/4496 parethoxycaine ['INACTIVE'] [[0. 1.]]
4215/4496 parecoxib ['INACTIVE'] [[0.02 0.98]]
4216/4496 AM-24 ['INACTIVE'] [[0. 1.]]
4217/4496 N-methylpyrrolidone ['INACTIVE'] [[0. 1.]]
4218/4496 cicloprofen ['INACTIVE'] [[0.05 0.95]]
4219/4496 ramosetron ['INACTIVE'] [[0.06 0.94]]
4220/4496 AG-490 ['INACTIVE'] [[0.01 0.99]]
4221/4496 etifoxine ['INACTIVE'] [[0. 1.]]
4222/4496 medroxyprogesterone ['INACTIVE'] [[0. 1.]]
4223/4496 L-ergothioneine ['INACTIVE'] [[0.02 0.98]]
4224/4496 5-BDBD ['INACTIVE'] [[0.07 0.93]]
4225/4496 SU9516 ['INACTIVE'] [[0.05 0.95]]
4226/4496 LTB4 ['INACTIVE'] [[0.03 0.97]]
4227/4496 traxoprodil ['INACTIVE'] [[0.04 0.96]]
4228/4496 reversine ['INACTIVE'] [[0.03 0.97]]
4229/4496 mitoflaxone ['INACTIVE'] [[0.01 0.99]]
4230/4496 veliparib ['INACTIVE'] [[0.03 0.97]]
4231/4496 T-0901317 ['INACTIVE'] [[0.01 0.99]]
4232/4496 lithium-acetoacetate ['INACTIVE'] [[0. 1.]]
4233/4496 hydroxystilbamidine ['INACTIVE'] [[0.06 0.94]]
4234/4496 piretanide ['INACTIVE'] [[0.02 0.98]]
4235/4496 acitretin ['INACTIVE'] [[0.02 0.98]]
4236/4496 N-salicoylaminophenol ['INACTIVE'] [[0.01 0.99]]
4237/4496 caprylic-acid ['INACTIVE'] [[0.01 0.99]]
4238/4496 medronic-acid ['INACTIVE'] [[0. 1.]]
4239/4496 benproperine ['INACTIVE'] [[0.03 0.97]]
4240/4496 dasabuvir ['INACTIVE'] [[0.12 0.88]]
4241/4496 UF-010 ['INACTIVE'] [[0.01 0.99]]
4242/4496 TAME ['INACTIVE'] [[0. 1.]]
4243/4496 BU226 ['INACTIVE'] [[0.03 0.97]]
4244/4496 adarotene ['INACTIVE'] [[0.05 0.95]]
4245/4496 ICI-89406 ['INACTIVE'] [[0.06 0.94]]
4246/4496 anamorelin ['INACTIVE'] [[0.03 0.97]]
4247/4496 pafuramidine ['INACTIVE'] [[0.05 0.95]]
4248/4496 L-694247 ['INACTIVE'] [[0.04 0.96]]
4249/4496 clobenpropit ['INACTIVE'] [[0.02 0.98]]
4250/4496 fenigam ['INACTIVE'] [[0.01 0.99]]
4251/4496 albendazole-oxide ['INACTIVE'] [[0. 1.]]
4252/4496 levcromakalim ['INACTIVE'] [[0.04 0.96]]
4253/4496 SDM25N ['INACTIVE'] [[0.08 0.92]]
4254/4496 desmethylclozapine ['INACTIVE'] [[0.05 0.95]]
4255/4496 siguazodan ['INACTIVE'] [[0.02 0.98]]
4256/4496 2-iodomelatonin ['INACTIVE'] [[0. 1.]]
4257/4496 5-hydroxytryptophan ['INACTIVE'] [[0. 1.]]
4258/4496 MF-101 ['INACTIVE'] [[0.01 0.99]]
4259/4496 linoleic-acid ['INACTIVE'] [[0.03 0.97]]
4260/4496 imetit ['INACTIVE'] [[0.01 0.99]]
4261/4496 disodium sebacate ['INACTIVE'] [[0. 1.]]
4262/4496 kevetrin ['INACTIVE'] [[0.01 0.99]]
4263/4496 adaprev ['INACTIVE'] [[0. 1.]]
4264/4496 phenylbutyrate ['INACTIVE'] [[0. 1.]]
4265/4496 biperiden ['INACTIVE'] [[0.03 0.97]]
4266/4496 cyclovalone ['INACTIVE'] [[0. 1.]]
4267/4496 ganaxolone ['INACTIVE'] [[0.01 0.99]]
4268/4496 dihomo-gamma-linolenic-acid ['INACTIVE'] [[0.03 0.97]]
4269/4496 ASC-J9 ['INACTIVE'] [[0.02 0.98]]
4270/4496 DH-97 ['INACTIVE'] [[0.09 0.91]]
4271/4496 o-3M3FBS ['INACTIVE'] [[0.04 0.96]]
4272/4496 potassium-canrenoate ['INACTIVE'] [[0.03 0.97]]
4273/4496 trilostane ['INACTIVE'] [[0.03 0.97]]
4274/4496 taurolidine ['INACTIVE'] [[0.17 0.83]]
4275/4496 stetaderm ['INACTIVE'] [[0. 1.]]
4276/4496 proxyfan ['INACTIVE'] [[0.03 0.97]]
4277/4496 Ro-90-7501 ['INACTIVE'] [[0.02 0.98]]
4278/4496 tirofiban ['INACTIVE'] [[0.05 0.95]]
4279/4496 1400W ['INACTIVE'] [[0. 1.]]
4280/4496 propylparaben ['INACTIVE'] [[0. 1.]]
4281/4496 rosmarinic-acid ['INACTIVE'] [[0.02 0.98]]
4282/4496 M-14157 ['INACTIVE'] [[0. 1.]]
4283/4496 hexasodium-phytate ['INACTIVE'] [[0. 1.]]
4284/4496 A-61603 ['INACTIVE'] [[0.04 0.96]]
4285/4496 KT-433 ['INACTIVE'] [[0.01 0.99]]
4286/4496 ibutilide ['INACTIVE'] [[0.02 0.98]]
4287/4496 briciclib ['INACTIVE'] [[0.04 0.96]]
4288/4496 temoporfin ['INACTIVE'] [[0.06 0.94]]
4289/4496 RG1530 ['INACTIVE'] [[0.03 0.97]]
4290/4496 cinoxate ['INACTIVE'] [[0. 1.]]
4291/4496 CBiPES ['INACTIVE'] [[0.1 0.9]]
4292/4496 ACT-462206 ['INACTIVE'] [[0.01 0.99]]
4293/4496 caffeic-acid-phenethyl-ester ['INACTIVE'] [[0.04 0.96]]
4294/4496 PRE-084 ['INACTIVE'] [[0.04 0.96]]
4295/4496 ST-1859 ['INACTIVE'] [[0.06 0.94]]
4296/4496 indisulam ['INACTIVE'] [[0.04 0.96]]
4297/4496 begacestat ['INACTIVE'] [[0.02 0.98]]
4298/4496 GR-113808 ['INACTIVE'] [[0.1 0.9]]
4299/4496 L-leucine ['INACTIVE'] [[0. 1.]]
4300/4496 fadrozole ['INACTIVE'] [[0.1 0.9]]
4301/4496 valproic-acid ['INACTIVE'] [[0. 1.]]
4302/4496 alpha-methylserotonin ['INACTIVE'] [[0.02 0.98]]
4303/4496 Ro-1138452 ['INACTIVE'] [[0.02 0.98]]
4304/4496 rasagiline ['INACTIVE'] [[0.01 0.99]]
4305/4496 indoramin ['INACTIVE'] [[0.01 0.99]]
4306/4496 dydrogesterone ['INACTIVE'] [[0. 1.]]
4307/4496 morantel ['INACTIVE'] [[0. 1.]]
4308/4496 sivelestat ['INACTIVE'] [[0.02 0.98]]
4309/4496 RBC8 ['INACTIVE'] [[0.05 0.95]]
4310/4496 luzindole ['INACTIVE'] [[0.07 0.93]]
4311/4496 icomucret ['INACTIVE'] [[0.02 0.98]]
4312/4496 valdecoxib ['INACTIVE'] [[0. 1.]]
4313/4496 magnolol ['INACTIVE'] [[0.03 0.97]]
4314/4496 BRD-9876 ['INACTIVE'] [[0.04 0.96]]
4315/4496 OG-L002 ['INACTIVE'] [[0.01 0.99]]
4316/4496 NS-8593 ['INACTIVE'] [[0.02 0.98]]
4317/4496 testolactone ['INACTIVE'] [[0. 1.]]
4318/4496 TMS ['INACTIVE'] [[0. 1.]]
4319/4496 phloroglucin ['INACTIVE'] [[0. 1.]]
4320/4496 fosbretabulin ['INACTIVE'] [[0.01 0.99]]
4321/4496 ditiocarb-sodium-trihydrate ['INACTIVE'] [[0. 1.]]
4322/4496 pravadoline ['INACTIVE'] [[0.05 0.95]]
4323/4496 AT-9283 ['INACTIVE'] [[0.07 0.93]]
4324/4496 SD-169 ['INACTIVE'] [[0.02 0.98]]
4325/4496 cilomilast ['INACTIVE'] [[0.03 0.97]]
4326/4496 equol ['INACTIVE'] [[0.05 0.95]]
4327/4496 dexmedetomidine ['INACTIVE'] [[0.02 0.98]]
4328/4496 SK&F-10047-(+) ['INACTIVE'] [[0.03 0.97]]
4329/4496 choline-alfoscerate ['INACTIVE'] [[0. 1.]]
4330/4496 dofetilide ['INACTIVE'] [[0.02 0.98]]
4331/4496 gamma-linolenic-acid ['INACTIVE'] [[0.01 0.99]]
4332/4496 icosapent ['INACTIVE'] [[0. 1.]]
4333/4496 integrin-antagonist-1 ['INACTIVE'] [[0.03 0.97]]
4334/4496 UK-5099 ['INACTIVE'] [[0.04 0.96]]
4335/4496 erteberel ['INACTIVE'] [[0. 1.]]
4336/4496 mebendazole ['INACTIVE'] [[0. 1.]]
4337/4496 preclamol ['INACTIVE'] [[0.04 0.96]]
4338/4496 curcumin ['INACTIVE'] [[0.01 0.99]]
4339/4496 hydrocortisone-phosphate ['INACTIVE'] [[0.01 0.99]]
4340/4496 butibufen ['INACTIVE'] [[0. 1.]]
4341/4496 CGP-12177 ['INACTIVE'] [[0.01 0.99]]
4342/4496 tyrphostin-AG-494 ['INACTIVE'] [[0.01 0.99]]
4343/4496 sofalcone ['INACTIVE'] [[0.01 0.99]]
4344/4496 baricitinib ['INACTIVE'] [[0.03 0.97]]
4345/4496 piceatannol ['INACTIVE'] [[0. 1.]]
4346/4496 isopentyl-4-methoxycinnamate ['INACTIVE'] [[0.01 0.99]]
4347/4496 N-acetyltryptamine ['INACTIVE'] [[0.02 0.98]]
4348/4496 chromanol-293B-(-)-[3R,4S] ['INACTIVE'] [[0.01 0.99]]
4349/4496 daltroban ['INACTIVE'] [[0.02 0.98]]
4350/4496 SKF-89976A ['INACTIVE'] [[0.01 0.99]]
4351/4496 KU14R ['INACTIVE'] [[0.04 0.96]]
4352/4496 nandrolone ['INACTIVE'] [[0. 1.]]
4353/4496 alpha-linolenic-acid ['INACTIVE'] [[0.03 0.97]]
4354/4496 L-NIL ['INACTIVE'] [[0. 1.]]
4355/4496 BU-224 ['INACTIVE'] [[0.02 0.98]]
4356/4496 PD-128907 ['INACTIVE'] [[0.03 0.97]]
4357/4496 17-alpha-methyltestosterone ['INACTIVE'] [[0.02 0.98]]
4358/4496 m-THP ['INACTIVE'] [[0.1 0.9]]
4359/4496 GW-9508 ['INACTIVE'] [[0.01 0.99]]
4360/4496 copper-histidine ['INACTIVE'] [[0. 1.]]
4361/4496 KL-001 ['INACTIVE'] [[0.03 0.97]]
4362/4496 fosfructose ['INACTIVE'] [[0.04 0.96]]
4363/4496 AEG3482 ['INACTIVE'] [[0.03 0.97]]
4364/4496 HQK-1001 ['INACTIVE'] [[0. 1.]]
4365/4496 bisphenol-a ['INACTIVE'] [[0.01 0.99]]
4366/4496 2-phenylmelatonin ['INACTIVE'] [[0.01 0.99]]
4367/4496 aminomethyltransferase ['INACTIVE'] [[0. 1.]]
4368/4496 CHC ['INACTIVE'] [[0. 1.]]
4369/4496 prednisolone-sodium-phosphate ['INACTIVE'] [[0.01 0.99]]
4370/4496 carbendazim ['INACTIVE'] [[0. 1.]]
4371/4496 mirin ['INACTIVE'] [[0.01 0.99]]
4372/4496 licochalcone-a ['INACTIVE'] [[0.01 0.99]]
4373/4496 AZD1080 ['INACTIVE'] [[0.04 0.96]]
4374/4496 naringeninic-acid ['INACTIVE'] [[0.02 0.98]]
4375/4496 2-(3-mercaptopropyl)pentanedioic acid ['INACTIVE'] [[0.01 0.99]]
4376/4496 TC-O-9311 ['INACTIVE'] [[0.02 0.98]]
4377/4496 thiorphan ['INACTIVE'] [[0.03 0.97]]
4378/4496 viloxazine ['INACTIVE'] [[0.03 0.97]]
4379/4496 neurodazine ['INACTIVE'] [[0.06 0.94]]
4380/4496 RX-821002 ['INACTIVE'] [[0.02 0.98]]
4381/4496 penicillamine-(racemic) ['INACTIVE'] [[0.01 0.99]]
4382/4496 almotriptan ['INACTIVE'] [[0. 1.]]
4383/4496 sorbic-acid ['INACTIVE'] [[0. 1.]]
4384/4496 butein ['INACTIVE'] [[0. 1.]]
4385/4496 sodium-butyrate ['INACTIVE'] [[0. 1.]]
4386/4496 iodophenpropit ['INACTIVE'] [[0.07 0.93]]
4387/4496 ICI-215,001 ['INACTIVE'] [[0.01 0.99]]
4388/4496 OAC2 ['INACTIVE'] [[0.02 0.98]]
4389/4496 tyrphostin-AG-18 ['INACTIVE'] [[0. 1.]]
4390/4496 penicillamine-(D) ['INACTIVE'] [[0.01 0.99]]
4391/4496 tyrphostin-A9 ['INACTIVE'] [[0. 1.]]
4392/4496 mercaptosuccinic-acid ['INACTIVE'] [[0. 1.]]
4393/4496 indole-3-pyrubate ['INACTIVE'] [[0.01 0.99]]
4394/4496 peretinoin ['INACTIVE'] [[0.02 0.98]]
4395/4496 motolimod ['INACTIVE'] [[0.01 0.99]]
4396/4496 DIPT ['INACTIVE'] [[0. 1.]]
4397/4496 GSK-0660 ['INACTIVE'] [[0.01 0.99]]
4398/4496 dienogest ['INACTIVE'] [[0.01 0.99]]
4399/4496 mazindol ['INACTIVE'] [[0.08 0.92]]
4400/4496 2-PMDQ ['INACTIVE'] [[0.05 0.95]]
4401/4496 atipamezole ['INACTIVE'] [[0.01 0.99]]
4402/4496 phenethyl-isothiocyanate ['INACTIVE'] [[0.01 0.99]]
4403/4496 thioperamide ['INACTIVE'] [[0.09 0.91]]
4404/4496 androstenone ['INACTIVE'] [[0.01 0.99]]
4405/4496 honokiol ['INACTIVE'] [[0.01 0.99]]
4406/4496 PD-166793 ['INACTIVE'] [[0.02 0.98]]
4407/4496 iguratimod ['INACTIVE'] [[0.05 0.95]]
4408/4496 BRL-54443 ['INACTIVE'] [[0.02 0.98]]
4409/4496 PNU-177864 ['INACTIVE'] [[0.01 0.99]]
4410/4496 A-33903 ['INACTIVE'] [[0.01 0.99]]
4411/4496 cyanopindolol ['INACTIVE'] [[0.02 0.98]]
4412/4496 idronoxil ['INACTIVE'] [[0.02 0.98]]
4413/4496 4BP-TQS ['INACTIVE'] [[0.02 0.98]]
4414/4496 fosamprenavir ['INACTIVE'] [[0.05 0.95]]
4415/4496 cinnamaldehyde ['INACTIVE'] [[0.01 0.99]]
4416/4496 ZLN005 ['INACTIVE'] [[0.02 0.98]]
4417/4496 metaphit ['INACTIVE'] [[0.01 0.99]]
4418/4496 LY320135 ['INACTIVE'] [[0.09 0.91]]
4419/4496 SB-269970 ['INACTIVE'] [[0.08 0.92]]
4420/4496 BW-A4C ['INACTIVE'] [[0.02 0.98]]
4421/4496 JAK3-inhibitor-V ['INACTIVE'] [[0. 1.]]
4422/4496 rilmenidine ['INACTIVE'] [[0. 1.]]
4423/4496 ciproxifan ['INACTIVE'] [[0.03 0.97]]
4424/4496 PNU-37883 ['INACTIVE'] [[0.08 0.92]]
4425/4496 RS-79948 ['INACTIVE'] [[0.04 0.96]]
4426/4496 CCT-031374 ['INACTIVE'] [[0.03 0.97]]
4427/4496 argatroban ['INACTIVE'] [[0.1 0.9]]
4428/4496 D609 ['INACTIVE'] [[0.02 0.98]]
4429/4496 trans-4-Methoxycinnamic-acid ['INACTIVE'] [[0. 1.]]
4430/4496 mibampator ['INACTIVE'] [[0.05 0.95]]
4431/4496 norethisterone-enanthate ['INACTIVE'] [[0.01 0.99]]
4432/4496 2-aminobenzenesulfonamide ['INACTIVE'] [[0.01 0.99]]
4433/4496 bakuchiol ['INACTIVE'] [[0. 1.]]
4434/4496 fomocaine ['INACTIVE'] [[0.02 0.98]]
4435/4496 GS-9973 ['INACTIVE'] [[0.07 0.93]]
4436/4496 tirasemtiv ['INACTIVE'] [[0.02 0.98]]
4437/4496 indeloxazine ['INACTIVE'] [[0.03 0.97]]
4438/4496 pyrantel-tartrate ['INACTIVE'] [[0.01 0.99]]
4439/4496 barium-6-O-phosphonato-D-glucose ['INACTIVE'] [[0.02 0.98]]
4440/4496 O4I1 ['INACTIVE'] [[0. 1.]]
4441/4496 PFI-1 ['INACTIVE'] [[0.02 0.98]]
4442/4496 phenol ['INACTIVE'] [[0. 1.]]
4443/4496 cis-urocanic acid ['INACTIVE'] [[0.01 0.99]]
4444/4496 cis-urocanic-acid ['INACTIVE'] [[0.01 0.99]]
4445/4496 beta-glycerophosphoric-acid ['INACTIVE'] [[0. 1.]]
4446/4496 CL-225385 ['INACTIVE'] [[0. 1.]]
4447/4496 3-indolebutyric-acid ['INACTIVE'] [[0. 1.]]
4448/4496 fosfosal ['INACTIVE'] [[0. 1.]]
4449/4496 GSK-37647 ['INACTIVE'] [[0.01 0.99]]
4450/4496 BAY-11-7085 ['INACTIVE'] [[0. 1.]]
4451/4496 naratriptan ['INACTIVE'] [[0.04 0.96]]
4452/4496 PF-06447475 ['INACTIVE'] [[0.07 0.93]]
4453/4496 ABT-239 ['INACTIVE'] [[0.09 0.91]]
4454/4496 SB-203186 ['INACTIVE'] [[0.01 0.99]]
4455/4496 VP-20629 ['INACTIVE'] [[0. 1.]]
4456/4496 diarylpropionitrile ['INACTIVE'] [[0.02 0.98]]
4457/4496 BAY-11-7082 ['INACTIVE'] [[0. 1.]]
4458/4496 BU-239 ['INACTIVE'] [[0. 1.]]
4459/4496 S18986 ['INACTIVE'] [[0.02 0.98]]
4460/4496 afobazole ['INACTIVE'] [[0.05 0.95]]
4461/4496 ADD-233089 ['INACTIVE'] [[0.03 0.97]]
4462/4496 blebbistatin-(+/-) ['INACTIVE'] [[0.08 0.92]]
4463/4496 blebbistatin-(-) ['INACTIVE'] [[0.08 0.92]]
4464/4496 D-64131 ['INACTIVE'] [[0. 1.]]
4465/4496 gestrinone ['INACTIVE'] [[0. 1.]]
4466/4496 thiopental ['INACTIVE'] [[0.02 0.98]]
4467/4496 tibolone ['INACTIVE'] [[0. 1.]]
4468/4496 obatoclax ['INACTIVE'] [[0.01 0.99]]
4469/4496 para-toluenesulfonamide ['INACTIVE'] [[0. 1.]]
4470/4496 belinostat ['INACTIVE'] [[0.01 0.99]]
4471/4496 reparixin ['INACTIVE'] [[0.01 0.99]]
4472/4496 desoxypeganine ['INACTIVE'] [[0.01 0.99]]
4473/4496 TY-52156 ['INACTIVE'] [[0.01 0.99]]
4474/4496 ambazone ['INACTIVE'] [[0.02 0.98]]
4475/4496 FIT ['INACTIVE'] [[0.02 0.98]]
4476/4496 AH-7614 ['INACTIVE'] [[0.04 0.96]]
4477/4496 etonogestrel ['INACTIVE'] [[0. 1.]]
4478/4496 BRL-44408 ['INACTIVE'] [[0.02 0.98]]
4479/4496 A-366 ['INACTIVE'] [[0.04 0.96]]
4480/4496 AH11110 ['INACTIVE'] [[0.03 0.97]]
4481/4496 fosfestrol ['INACTIVE'] [[0. 1.]]
4482/4496 pifithrin-mu ['INACTIVE'] [[0.01 0.99]]
4483/4496 BTS ['INACTIVE'] [[0. 1.]]
4484/4496 thiamylal ['INACTIVE'] [[0.01 0.99]]
4485/4496 AFN-1252 ['INACTIVE'] [[0.03 0.97]]
4486/4496 xilobam ['INACTIVE'] [[0.02 0.98]]
4487/4496 2-BFI ['INACTIVE'] [[0.01 0.99]]
4488/4496 allylthiourea ['INACTIVE'] [[0.02 0.98]]
4489/4496 glasdegib ['INACTIVE'] [[0.11 0.89]]
4490/4496 oxantel ['INACTIVE'] [[0. 1.]]
4491/4496 LY-404187 ['INACTIVE'] [[0.08 0.92]]
4492/4496 cirazoline ['INACTIVE'] [[0.04 0.96]]
4493/4496 phenacaine ['INACTIVE'] [[0. 1.]]
4494/4496 PSB-11 ['INACTIVE'] [[0.08 0.92]]
4495/4496 CBS-1114 ['INACTIVE'] [[0.01 0.99]]
4496/4496 A61603 ['INACTIVE'] [[0.01 0.99]]
Finished.
|
chapter6/code/.ipynb_checkpoints/torch_nlp_deeplearning-checkpoint.ipynb | ###Markdown
Non-Linearities
~~~~~~~~~~~~~~~

First, note the following fact, which will explain why we need
non-linearities in the first place. Suppose we have two affine maps
$f(x) = Ax + b$ and $g(x) = Cx + d$. What is $f(g(x))$?

\begin{align}f(g(x)) = A(Cx + d) + b = ACx + (Ad + b)\end{align}

$AC$ is a matrix and $Ad + b$ is a vector, so we see that composing
affine maps gives you an affine map.

From this, you can see that if you wanted your neural network to be long
chains of affine compositions, that this adds no new power to your model
than just doing a single affine map.

If we introduce non-linearities in between the affine layers, this is no
longer the case, and we can build much more powerful models.

There are a few core non-linearities. $\tanh(x), \sigma(x), \text{ReLU}(x)$
are the most common. You are probably wondering: "why these functions? I can
think of plenty of other non-linearities." The reason for this is that they
have gradients that are easy to compute, and computing gradients is essential
for learning. For example

\begin{align}\frac{d\sigma}{dx} = \sigma(x)(1 - \sigma(x))\end{align}

A quick note: although you may have learned some neural networks in your
intro to AI class where $\sigma(x)$ was the default non-linearity,
typically people shy away from it in practice. This is because the
gradient *vanishes* very quickly as the absolute value of the argument
grows. Small gradients means it is hard to learn. Most people default to
tanh or ReLU.
###Code
# In pytorch, most non-linearities are in torch.functional (we have it imported as F)
# Note that non-linearites typically don't have parameters like affine maps do.
# That is, they don't have weights that are updated during training.
data = torch.randn(2, 2)
print(data)
print(F.relu(data))
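# Added illustration (not in the original tutorial text): tanh and sigmoid, plus a quick
# autograd check of the gradient identity d(sigma)/dx = sigma(x) * (1 - sigma(x)) quoted above.
print(torch.tanh(data))
print(torch.sigmoid(data))
x = torch.randn(3, requires_grad=True)
torch.sigmoid(x).sum().backward()
print(x.grad)                                      # gradient from autograd
print(torch.sigmoid(x) * (1 - torch.sigmoid(x)))   # closed-form gradient; should match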
###Output
_____no_output_____
###Markdown
Softmax and Probabilities
~~~~~~~~~~~~~~~~~~~~~~~~~

The function $\text{Softmax}(x)$ is also just a non-linearity, but it is
special in that it usually is the last operation done in a network. This
is because it takes in a vector of real numbers and returns a probability
distribution. Its definition is as follows. Let $x$ be a vector of real
numbers (positive, negative, whatever, there are no constraints). Then the
i'th component of $\text{Softmax}(x)$ is

\begin{align}\frac{\exp(x_i)}{\sum_j \exp(x_j)}\end{align}

It should be clear that the output is a probability distribution: each
element is non-negative and the sum over all components is 1.

You could also think of it as just applying an element-wise exponentiation
operator to the input to make everything non-negative and then dividing by
the normalization constant.
###Code
# Softmax is also in torch.nn.functional
data = torch.randn(5)
print(data)
print(F.softmax(data, dim=0))
print(F.softmax(data, dim=0).sum()) # Sums to 1 because it is a distribution!
print(F.log_softmax(data, dim=0)) # theres also log_softmax
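# Added illustration: computing softmax by hand with the exp / sum(exp) formula above;
# it should match F.softmax element-wise.
manual = torch.exp(data) / torch.exp(data).sum()
print(manual)
print(torch.allclose(manual, F.softmax(data, dim=0)))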
###Output
_____no_output_____
###Markdown
Objective Functions
~~~~~~~~~~~~~~~~~~~

The objective function is the function that your network is being trained
to minimize (in which case it is often called a *loss function* or *cost
function*). This proceeds by first choosing a training instance, running
it through your neural network, and then computing the loss of the output.
The parameters of the model are then updated by taking the derivative of
the loss function. Intuitively, if your model is completely confident in
its answer, and its answer is wrong, your loss will be high. If it is very
confident in its answer, and its answer is correct, the loss will be low.

The idea behind minimizing the loss function on your training examples is
that your network will hopefully generalize well and have small loss on
unseen examples in your dev set, test set, or in production. An example
loss function is the *negative log likelihood loss*, which is a very
common objective for multi-class classification. For supervised
multi-class classification, this means training the network to minimize
the negative log probability of the correct output (or equivalently,
maximize the log probability of the correct output).

Optimization and Training
=========================

So now that we can compute a loss function for an instance, what do we do
with that? We saw earlier that Tensors know how to compute gradients with
respect to the things that were used to compute it. Well, since our loss
is a Tensor, we can compute gradients with respect to all of the
parameters used to compute it! Then we can perform standard gradient
updates. Let $\theta$ be our parameters, $L(\theta)$ the loss function,
and $\eta$ a positive learning rate. Then:

\begin{align}\theta^{(t+1)} = \theta^{(t)} - \eta \nabla_\theta L(\theta)\end{align}

There are a huge collection of algorithms and active research in
attempting to do something more than just this vanilla gradient update.
Many attempt to vary the learning rate based on what is happening at
train time. You don't need to worry about what specifically these
algorithms are doing unless you are really interested. Torch provides
many in the torch.optim package, and they are all completely transparent.
Using the simplest gradient update is the same as the more complicated
algorithms. Trying different update algorithms and different parameters
for the update algorithms (like different initial learning rates) is
important in optimizing your network's performance. Often, just replacing
vanilla SGD with an optimizer like Adam or RMSProp will boost performance
noticeably.

Creating Network Components in PyTorch
======================================

Before we move on to our focus on NLP, let's do an annotated example of
building a network in PyTorch using only affine maps and non-linearities.
We will also see how to compute a loss function, using PyTorch's built in
negative log likelihood, and update parameters by backpropagation.

All network components should inherit from nn.Module and override the
forward() method. That is about it, as far as the boilerplate is
concerned. Inheriting from nn.Module provides functionality to your
component. For example, it makes it keep track of its trainable
parameters, you can swap it between CPU and GPU with the ``.to(device)``
method, where device can be a CPU device ``torch.device("cpu")`` or CUDA
device ``torch.device("cuda:0")``.

Let's write an annotated example of a network that takes in a sparse
bag-of-words representation and outputs a probability distribution over
two labels: "English" and "Spanish". This model is just logistic
regression.
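Below is a tiny hand-rolled sketch of the vanilla gradient update above on a toy quadratic loss (added for illustration only; the names and the loss are made up, and torch.optim will do this for us later).
###Code
# one manual gradient step: theta <- theta - eta * dL/dtheta, on L(theta) = ||theta||^2
import torch
theta = torch.tensor([1.0, -2.0], requires_grad=True)  # toy "parameters"
eta = 0.1                                              # learning rate
loss = (theta ** 2).sum()
loss.backward()                                        # autograd fills theta.grad
with torch.no_grad():
    theta -= eta * theta.grad                          # vanilla SGD update
    theta.grad.zero_()
print(theta)
###Output
_____no_output_____
###Markdown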
Example: Logistic Regression Bag-of-Words classifier
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Our model will map a sparse BoW representation to log probabilities over
labels. We assign each word in the vocab an index. For example, say our
entire vocab is two words "hello" and "world", with indices 0 and 1
respectively. The BoW vector for the sentence "hello hello hello hello" is

\begin{align}\left[ 4, 0 \right]\end{align}

For "hello world world hello", it is

\begin{align}\left[ 2, 2 \right]\end{align}

etc. In general, it is

\begin{align}\left[ \text{Count}(\text{hello}), \text{Count}(\text{world}) \right]\end{align}

Denote this BOW vector as $x$. The output of our network is:

\begin{align}\log \text{Softmax}(Ax + b)\end{align}

That is, we pass the input through an affine map and then do log softmax.
###Code
data = [("me gusta comer en la cafeteria".split(), "SPANISH"),
("Give it to me".split(), "ENGLISH"),
("No creo que sea una buena idea".split(), "SPANISH"),
("No it is not a good idea to get lost at sea".split(), "ENGLISH")]
test_data = [("Yo creo que si".split(), "SPANISH"),
("it is lost on me".split(), "ENGLISH")]
# word_to_ix maps each word in the vocab to a unique integer, which will be its
# index into the Bag of words vector
word_to_ix = {}
for sent, _ in data + test_data:
for word in sent:
if word not in word_to_ix:
word_to_ix[word] = len(word_to_ix)
print(word_to_ix)
VOCAB_SIZE = len(word_to_ix)
NUM_LABELS = 2
class BoWClassifier(nn.Module): # inheriting from nn.Module!
def __init__(self, num_labels, vocab_size):
# calls the init function of nn.Module. Dont get confused by syntax,
# just always do it in an nn.Module
super(BoWClassifier, self).__init__()
# Define the parameters that you will need. In this case, we need A and b,
# the parameters of the affine mapping.
# Torch defines nn.Linear(), which provides the affine map.
# Make sure you understand why the input dimension is vocab_size
# and the output is num_labels!
self.linear = nn.Linear(vocab_size, num_labels)
# NOTE! The non-linearity log softmax does not have parameters! So we don't need
# to worry about that here
def forward(self, bow_vec):
# Pass the input through the linear layer,
# then pass that through log_softmax.
# Many non-linearities and other functions are in torch.nn.functional
return F.log_softmax(self.linear(bow_vec), dim=1)
def make_bow_vector(sentence, word_to_ix):
vec = torch.zeros(len(word_to_ix))
for word in sentence:
vec[word_to_ix[word]] += 1
return vec.view(1, -1)
def make_target(label, label_to_ix):
return torch.LongTensor([label_to_ix[label]])
model = BoWClassifier(NUM_LABELS, VOCAB_SIZE)
# the model knows its parameters. The first output below is A, the second is b.
# Whenever you assign a component to a class variable in the __init__ function
# of a module, which was done with the line
# self.linear = nn.Linear(...)
# Then through some Python magic from the PyTorch devs, your module
# (in this case, BoWClassifier) will store knowledge of the nn.Linear's parameters
for param in model.parameters():
print(param)
# To run the model, pass in a BoW vector
# Here we don't need to train, so the code is wrapped in torch.no_grad()
with torch.no_grad():
sample = data[0]
bow_vector = make_bow_vector(sample[0], word_to_ix)
log_probs = model(bow_vector)
print(log_probs)
###Output
_____no_output_____
###Markdown
Which of the above values corresponds to the log probability of ENGLISH, and which to SPANISH? We never defined it, but we need to if we want to train the thing.
###Code
label_to_ix = {"SPANISH": 0, "ENGLISH": 1}
###Output
_____no_output_____
###Markdown
So let's train! To do this, we pass instances through to get log
probabilities, compute a loss function, compute the gradient of the loss
function, and then update the parameters with a gradient step. Loss
functions are provided by Torch in the nn package. nn.NLLLoss() is the
negative log likelihood loss we want. It also defines optimization
functions in torch.optim. Here, we will just use SGD.

Note that the *input* to NLLLoss is a vector of log probabilities, and a
target label. It doesn't compute the log probabilities for us. This is
why the last layer of our network is log softmax. The loss function
nn.CrossEntropyLoss() is the same as NLLLoss(), except it does the log
softmax for you.
###Code
# Run on test data before we train, just to see a before-and-after
with torch.no_grad():
for instance, label in test_data:
bow_vec = make_bow_vector(instance, word_to_ix)
log_probs = model(bow_vec)
print(log_probs)
# Print the matrix column corresponding to "creo"
print(next(model.parameters())[:, word_to_ix["creo"]])
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
# Usually you want to pass over the training data several times.
# 100 is much bigger than on a real data set, but real datasets have more than
# two instances. Usually, somewhere between 5 and 30 epochs is reasonable.
for epoch in range(100):
for instance, label in data:
# Step 1. Remember that PyTorch accumulates gradients.
# We need to clear them out before each instance
model.zero_grad()
# Step 2. Make our BOW vector and also we must wrap the target in a
# Tensor as an integer. For example, if the target is SPANISH, then
# we wrap the integer 0. The loss function then knows that the 0th
# element of the log probabilities is the log probability
# corresponding to SPANISH
bow_vec = make_bow_vector(instance, word_to_ix)
target = make_target(label, label_to_ix)
# Step 3. Run our forward pass.
log_probs = model(bow_vec)
# Step 4. Compute the loss, gradients, and update the parameters by
# calling optimizer.step()
loss = loss_function(log_probs, target)
loss.backward()
optimizer.step()
with torch.no_grad():
for instance, label in test_data:
bow_vec = make_bow_vector(instance, word_to_ix)
log_probs = model(bow_vec)
print(log_probs)
# Index corresponding to Spanish goes up, English goes down!
print(next(model.parameters())[:, word_to_ix["creo"]])
###Output
_____no_output_____ |
Python/Notebooks/Extract perspective view from 360 footage.ipynb | ###Markdown
Step 1. Extract video frames using FFMPEG

```
mkdir frames
ffmpeg -i VIDEO.mp4 -q 2 frames\%04d.jpg
```
###Code
video_dir = r"G:\OmniPhotos\data\2018-09 360 videos from Canada\30fps"
videos = [e for e in os.listdir(video_dir) if 'stitch' in e]
# print(videos)
print(f"cd \"{video_dir}\"\n")
for video in videos:
output_path = video[:-4]
print(f"mkdir \"{output_path}\"")
print(f"ffmpeg -i \"{video}\" -q 2 \"{output_path}\\%04d.jpg\"") # use '%%' in batch files to escape '%'
print()
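# Optional convenience (added): write the same commands to a batch file instead of copying
# them from the cell output; 'extract_frames.bat' is just an illustrative filename.
with open(os.path.join(video_dir, "extract_frames.bat"), "w") as f:
    f.write(f"cd \"{video_dir}\"\n")
    for video in videos:
        output_path = video[:-4]
        f.write(f"mkdir \"{output_path}\"\n")
        f.write(f"ffmpeg -i \"{video}\" -q 2 \"{output_path}\\%%04d.jpg\"\n")  # '%%' escapes '%' in batch files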
###Output
_____no_output_____
###Markdown
Step 2. Extract perspective views
###Code
working_dir = r"G:\OmniPhotos\data\Jaman\Studio-flow-stitch"
output_dir = working_dir + '-pinhole-azimuth-test'
if not os.path.exists(output_dir):
os.mkdir(output_dir)
rotation = np.eye(3) # look forward
# rotation = np.mat([[-1, 0, 0], [0, 1, 0], [0, 0, -1]]) # look backward
vfov = 120
resolution = (1200, 1200)
aspect = resolution[0] / resolution[1]
# aspect = 1. # NB. aspect ratio of viewing angles (hfov / vfov), not resolution (width / height)
for frame in range(137, 2000):
filename = os.path.join(working_dir, '%04d.jpg' % frame)
# print(filename)
image = cv2.imread(filename)
image = image[:,:,::-1] / 255. # convert to unit range RGB
# imshow(image)
env = envmap.EnvironmentMap(image, 'latlong')
pinhole = env.project(vfov, rotation, ar=aspect, resolution=resolution)
# imshow(pinhole)
cv2.imwrite(os.path.join(output_dir, '%04d.jpg' % frame), 255 * pinhole[:,:,::-1])
if frame == 1:
mask = env.project(vfov, rotation, ar=aspect, mode='mask')
cv2.imwrite(os.path.join(output_dir, 'mask.png'), 255 * mask)
print(frame, end=', ')
## Construct intrinsic matrix K.
f_x = (resolution[0] / 2.) / tan(vfov * aspect / 180. * np.pi / 2.)
f_y = (resolution[1] / 2.) / tan(vfov / 180. * np.pi / 2.)
K = np.mat([[f_x, 0., resolution[0] / 2.], [0., f_y, resolution[1] / 2.], [0., 0., 1.]])
print(K)
###Output
_____no_output_____
###Markdown
---- Explore different parameters
###Code
#rotation = np.eye(3) # look forward
# rotation = np.mat([[-1, 0, 0], [0, 1, 0], [0, 0, -1]]) # look backward
vfov = 120
aspect = 1.
resolution=(1200, 1200)
frame = 1
filename = os.path.join(working_dir, '%04d.jpg' % frame)
# print(filename)
image = cv2.imread(filename)
image = image[:,:,::-1] / 255. # convert to unit range RGB
# imshow(image)
env = envmap.EnvironmentMap(image, 'latlong')
for angle_deg in range(0, 360, 30):
angle = np.deg2rad(angle_deg)
rotation = np.mat([
[np.cos(angle), 0, np.sin(angle)],
[0, 1, 0],
[-np.sin(angle), 0, np.cos(angle)]])
pinhole = env.project(vfov, rotation, ar=aspect, resolution=resolution)
imshow(pinhole)
# cv2.imwrite(os.path.join(output_dir, '%04d-azimuth%03d.jpg' % (frame, angle_deg)), 255 * pinhole[:,:,::-1])
# # if frame == 1:
# mask = env.project(vfov, rotation, ar=aspect, mode='mask')
# cv2.imwrite(os.path.join(output_dir, 'mask-azimuth%03i.png' % angle_deg), 255 * mask)
# print(frame, end=', ')
###Output
_____no_output_____ |
Introduction-of-Tensorflow/Introduction-of-Tensorflow.ipynb | ###Markdown
Basic Introduction to TensorFlow
###Code
import sys
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
#Tensors
3 # a rank 0 tensor; this is a scalar with shape []
[1. ,2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
###Output
_____no_output_____
###Markdown
Constants
###Code
node1 = tf.constant(3.0,dtype=tf.float32)
node2 = tf.constant(4.0) #also dtype=tf.float32 implicitly
print(node1,node2)
#Notice that printing the nodes does not output the values 3.0 and 4.0 as you might expect.
#Instead, they are nodes that, when evaluated, would produce 3.0 and 4.0, respectively.
#To actually evaluate the nodes, we must run the computational graph within a session.
sess = tf.Session()
print(sess.run([node1,node2]))
#more complicated computations
node3 = tf.add(node1,node2)
print("node3 : ",node3)
print("sess.run(node3) : ",sess.run(node3))
###Output
node3 : Tensor("Add:0", shape=(), dtype=float32)
sess.run(node3) : 7.0
###Markdown
Placeholders
###Code
#A graph can be parameterized to accept external inputs, known as placeholders.
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b # + provides a shortcut for tf.add(a, b)
print(sess.run(adder_node, {a: 3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))
#more complex computations
add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b:4.5}))
###Output
22.5
###Markdown
Variables
###Code
#In ML we typically want a model that can take arbitrary inputs.
#To make the model trainable, we need to be able to modify the graph to get new outputs with the same input.
#Variables allow us to add trainable parameters to a graph.
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
#Constants are initialized when you call tf.constant, and their value can never change.
#By contrast, variables are not initialized when you call tf.Variable. To initialize
#all the variables in a TensorFlow program, you must explicitly call a special
#operation as follows
init = tf.global_variables_initializer()
sess.run(init)
#Since x is a placeholder, we can evaluate linear_model
#for several values of x simultaneously as follows
print(sess.run(linear_model, {x:[1,2,3,4]}))
###Output
[0. 0.3 0.6 0.90000004]
###Markdown
How accurate is the model?
###Code
#We created a model. How good it is?
#To evaluate the model on training data, we need a y placeholder to provide the desired values,
#and we need to write a loss function.
y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(squared_deltas, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
#tf.reduce_sum sums all the squared errors to create a single scalar
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
print(sess.run(b))
print(sess.run(W))
#let us improve the model manually
fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
print(sess.run(b))
print(sess.run(W))
#Yay! we rightly guessed the values of w and b
###Output
_____no_output_____
###Markdown
Learning our first TensorFlow model
###Code
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init)
for i in range(1000):
sess.run(train,{x:[1,2,3,4], y:[0,-1,-2,-3]})
print(sess.run([W,b])) #w=-1 and b=1 will be predicted
###Output
WARNING:tensorflow:From /home/manjeet/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
[array([-0.9999969], dtype=float32), array([0.9999908], dtype=float32)]
###Markdown
Complete Program -- Linear Regression Model
###Code
import numpy as np
import tensorflow as tf
#Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
#Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
#loss
loss = tf.reduce_sum(tf.square(linear_model - y)) #sum of the squares
#optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
#training data
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]
#training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(1000):
sess.run(train, {x:x_train, y:y_train})
#evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W,b,loss], {x:x_train, y:y_train})
print("W : %s b : %s loss : %s"%(curr_W,curr_b,curr_loss))
###Output
W : [-0.9999969] b : [0.9999908] loss : 5.6999738e-11
|
bird_data/bag_of_sentiments.ipynb | ###Markdown
Sentiment Classifier Designed to showcase the Bag of Words approach
###Code
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
import csv
bird_df = pd.read_csv('~/Desktop/training.1600000.processed.noemoticon.csv', encoding='latin-1', header=None)  # the file has no header row
bird_df.sample(3)
bird_df.columns = ['score', '', 'date', '', 'usr', 'review']
bird_training_df = bird_df[['score', 'review']].dropna()
bird_training_df.sample(5)
bird_training_df.describe()
def clean_commas(str):
return str.replace(',', ';')
bird_training_df['review'] = bird_training_df['review'].apply(clean_commas)
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(bird_training_df.review)
training_birds = tokenizer.texts_to_sequences(bird_training_df.review)
training_birds
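# Added sketch: build an explicit bag-of-words count matrix, which is the representation the
# notebook title refers to. num_words=5000 and the 1000-review slice are assumptions to keep
# the matrix small; mode='count' gives one row per review and one column per vocabulary word.
bow_tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=5000)
bow_tokenizer.fit_on_texts(bird_training_df.review)
bow_matrix = bow_tokenizer.texts_to_matrix(bird_training_df.review.iloc[:1000], mode='count')
bow_matrix.shape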
###Output
_____no_output_____ |
mathBit/primeNumbers.ipynb | ###Markdown
Title : Prime Numbers
Chapter : Math, Bit
Link : 
ChapterLink : 
Problem : Find all prime numbers smaller than the given number n
###Code
def primeNumbers(n: int) -> int:
if n<=2:
return []
primes = []
for i in range (2,n):
prime = True
for j in range(2,i):
if i % j == 0:
prime = False
                break  # stop checking once a divisor is found
if prime:
primes.append(i)
return primes
print(primeNumbers(50))
%timeit primeNumbers(10000)
import math
def primeNumbers2(n: int) -> int:
if n <= 2:
return []
numbers = [True]*n
numbers[0] = False
numbers[1] = False
for idx in range(2, int(math.sqrt(n)) + 1):
if numbers[idx] == True:
for i in range(idx*idx, n, idx):
numbers[i] = False
primes = []
for idx,prime in enumerate(numbers):
if prime==True:
primes.append(idx)
return primes
print(primeNumbers2(50))
%timeit primeNumbers2(10000)
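# quick cross-check (added): both implementations should return the same primes
print(primeNumbers(100) == primeNumbers2(100))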
###Output
1000 loops, best of 5: 1.79 ms per loop
|
data_cleaning_analysis/Reshape_data_fuel_prices_DEP.ipynb | ###Markdown
Reshape Fuel Prices - Duke Energy Progress

3/18/2021 \
by [Mauricio Hernandez]([email protected])
###Code
import csv
import datetime as dt
import numpy as np
import pandas as pd
df_lookup = pd.read_csv('./inputs/UnitLookupAndDetailTable_(DEC-DEP).csv')
df_fuel_DEP = pd.read_csv('./inputs/UNIT_FUEL_PRICE(DEP 2019).csv')
list(df_fuel_DEP.columns)
#Slicing data and filter all the values where end date is before Jan 1st
df_fuel_DEP['UNIT_ID'] = df_fuel_DEP.UNIT_NAME + '_'+ df_fuel_DEP.CC_KEY.apply(str)
df_fuel_DEP = df_fuel_DEP.loc[:, ['UNIT_ID', 'FUEL_TYPE','PRICE $/MBTU', 'FROM_DATE', 'TO_DATE']]
df_fuel_DEP.sort_values(by=['UNIT_ID', 'FUEL_TYPE'], inplace=True)
df_fuel_DEP.to_csv('./outputs/UNIT_FUEL_PRICE(DEP 2019)_sorted.csv', sep=',', encoding='utf-8', index= False)
df_fuel_DEP.head()
###Output
_____no_output_____
###Markdown
Descriptive statistics

Data from Duke Energy Carolinas and Duke Energy Progress
###Code
df_fuel_DEP.describe(include='all')
###Output
_____no_output_____
###Markdown
Calculating range of days between initial and end dates
###Code
def convertStringToDate(date_string):
date_obj = dt.datetime.strptime(date_string.split(" ")[0], '%m/%d/%Y')
#if date_obj - dt.date(2018, 7, 11)
return date_obj
#convertStringToDate('5/10/2018')
df_fuel_DEP['FROM_DATE'] = df_fuel_DEP['FROM_DATE'].apply(convertStringToDate)
df_fuel_DEP['TO_DATE'] = df_fuel_DEP['TO_DATE'].apply(convertStringToDate)
df_fuel_DEP.describe(include='all')
First_day = convertStringToDate('1/1/2019')
Last_day = convertStringToDate('12/31/2019')
#remove all the values where the end dates are in 2018
df_fuel_DEP['END_YEAR'] = df_fuel_DEP['TO_DATE'].map(lambda TO_DATE: TO_DATE.year)
df_fuel_DEP['START_YEAR'] = df_fuel_DEP['FROM_DATE'].map(lambda FROM_DATE: FROM_DATE.year)
df_fuel_DEP = df_fuel_DEP[df_fuel_DEP['START_YEAR'] < 2020]
df_fuel_DEP = df_fuel_DEP[df_fuel_DEP['END_YEAR'] >= 2019]
df_fuel_DEP['FROM_DATE'] = df_fuel_DEP['FROM_DATE'].map(lambda FROM_DATE: First_day if (First_day - FROM_DATE).days > 0 else FROM_DATE )
df_fuel_DEP['TO_DATE'] = df_fuel_DEP['TO_DATE'].map(lambda TO_DATE: Last_day if (TO_DATE - Last_day).days > 0 else TO_DATE)
df_fuel_DEP = df_fuel_DEP[df_fuel_DEP['TO_DATE'] != First_day]
df_fuel_DEP.describe(include='all')
# Adding columns to compute number of days from FROM_DATE to TO_DATE
df_fuel_DEP['DAYS'] = df_fuel_DEP['TO_DATE'] - df_fuel_DEP['FROM_DATE']
df_fuel_DEP['DAYS'] = df_fuel_DEP['DAYS'].map(lambda DAYS: DAYS.days )
df_fuel_DEP['REF_FROM_DATE'] = df_fuel_DEP['FROM_DATE'] - First_day
df_fuel_DEP['REF_FROM_DATE'] = df_fuel_DEP['REF_FROM_DATE'].map(lambda DAYS: DAYS.days )
# Replace last value when the number of days is zero
df_fuel_DEP['DAYS'] = np.where((df_fuel_DEP['DAYS'] == 0) & (df_fuel_DEP['TO_DATE'] == Last_day), 1, df_fuel_DEP['DAYS'])
df_fuel_DEP = df_fuel_DEP.loc[:, ['UNIT_ID', 'FUEL_TYPE', 'PRICE $/MBTU', 'FROM_DATE', 'TO_DATE', 'DAYS', 'REF_FROM_DATE']]
df_fuel_DEP.head()
# Creating pivot tableto summarize unit units and fuel type
df_fuel_DEP_pivot = df_fuel_DEP.groupby(['UNIT_ID', 'FUEL_TYPE']).sum()
df_fuel_DEP_pivot.to_csv('./outputs/fuel_summary.csv', sep=',', encoding='utf-8')
#print(list(df_fuel_DEP_pivot.index))
df_fuel_DEP_pivot
###Output
_____no_output_____
###Markdown
Manipulating dataframe to organize data
###Code
First_day = convertStringToDate('1/1/2019')
Last_day = convertStringToDate('12/31/2019')
#Create list with dates from First_day to last_day
date_list = [First_day + dt.timedelta(days=x) for x in range(0, (Last_day-First_day).days + 1)]
date_str_list = []
for date in date_list:
date_str_list.append(date.strftime("%m/%d/%Y"))
#create results dataframe to store prices every day
df_fuel_result = pd.DataFrame(index=df_fuel_DEP_pivot.index, columns=date_list)
#df_fuel_DEP_pivot = df_fuel_DEP_pivot.reindex(columns = df_fuel_DEP_pivot.columns.tolist() + date_str_list)
df_fuel_result.head(n=5)
current_index = ()
old_index = ()
aux_index = 0
fuel_price_list = [None] * 365
for index, row in df_fuel_DEP.iterrows():
aux_index = index
index_current = (row['UNIT_ID'], row['FUEL_TYPE'])
# access data using column names
fuel_price = row['PRICE $/MBTU']
days = row['DAYS']
ref_day = row['REF_FROM_DATE']
current_index = (row['UNIT_ID'], row['FUEL_TYPE'])
#print(index, row['UNIT_ID'], row['FUEL_TYPE'], row['PRICE $/MBTU'], row['REF_FROM_DATE'], row['DAYS'])
if index == 0:
old_index = current_index
if (old_index != current_index):
df_fuel_result.loc[old_index] = fuel_price_list
old_index = current_index
fuel_price_list = [None] * 365
fuel_price_list[ref_day:(ref_day + days)] = [fuel_price]*(days)
#print(index, row['PRICE $/MBTU'], row['REF_FROM_DATE'], row['DAYS'])
#Save last value
if aux_index != 0 :
df_fuel_result.loc[current_index] = fuel_price_list
df_fuel_result.head()
df_fuel_result.to_csv('./outputs/UNIT_FUEL_PRICES_DEP_Results.csv', sep=',', encoding='utf-8')
df_fuel_DEP.to_csv('./outputs/UNIT_FUEL_PRICES_DEP_Short.csv', sep=',', encoding='utf-8')
#dfSummary['UNIT_ID'] dfSummary.UNIT_ID == 'ALLE_UN01_0')
#dfSummary[dfSummary.DAYS == 364]
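# quick sanity checks (added): one row per (unit, fuel type) pair and one column per day of 2019;
# the per-row count of non-null prices should roughly match the summed DAYS when intervals do not overlap
print(df_fuel_result.shape)
print(df_fuel_result.notna().sum(axis=1).head())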
###Output
_____no_output_____ |
07-Inputs.ipynb | ###Markdown
Inputs
###Code
print('Enter any character:')
i = input()
i
###Output
_____no_output_____
###Markdown
The input() function always returns the entered value in string format.
###Code
int(i)
type(int(i))
print('Enter any number')
i=int(input())
print(i)
i=int(input('Enter Number: '))
print(i,type(i))
i=float(input('Enter number: '))
i
###Output
_____no_output_____
###Markdown
List comprehension

.split() is a very important function and works on a string.
###Code
my_string='Himanshu'
my_string.split('a')
my_str='I am Himanshu and you are in bootcamp'
my_str.split(' ')
l=input("Enter number: ")
l.split(' ')
l=[]
for i in range(4):
i=int(input('Enter number: '))
l.append(i)
print(l)
l=[int(i) for i in input('Enter number: ').split()]
print(l)
l=input('Enter number: ').split()
print(l)
###Output
['1', '2', '3', '4', '5']
###Markdown
Input Dictionary
###Code
#1st way
d={}
for i in range(1,4):
k=input('Enter key: ')
l=int(input('Enter numeric val: '))
d.update({k:l})
print(d)
i=[int(i) for i in input('Enter number: ').split()]
j=[int(i) for i in input('Enter number: ').split()]
k=zip(i,j)
d=dict(k)
print(d)
d=eval(input('Enter number: '))
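# note (added): eval() executes whatever Python is typed at the prompt; for untrusted input,
# ast.literal_eval from the standard library is the safer choice, e.g.
# import ast; d = ast.literal_eval(input('Enter dict: '))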
print(d)
###Output
{1: 5, 2: 7, 3: 9}
|
IOT/IoTApps/IOTFeatures.ipynb | ###Markdown
This notebook covers the cleaning and exploration of data for 'Google Play Store Apps'

Importing Libraries
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # for plots
import os
print(os.listdir("../input"))
###Output
['googleplaystore.csv', 'googleplaystore_user_reviews.csv']
###Markdown
Reading data from the csv file
###Code
data = pd.read_csv('../input/googleplaystore.csv')
data.head()
data.columns = data.columns.str.replace(' ', '_')
print("Shape of data (samples, features): ",data.shape)
print("Data Types: \n", data.dtypes.value_counts())
###Output
Shape of data (samples, features): (10841, 13)
Data Types:
object 12
float64 1
dtype: int64
###Markdown
The data has **12** object and **1** numeric feature i.e. *Rating*. Now exploring each feature individually:

1. [Size](size)
2. [Installs](installs)
3. [Reviews](reviews)
4. [Rating](rating)
5. [Type](type)
6. [Price](price)
7. [Category](cat)
8. [Content Rating](content_rating)
9. [Genres](genres)
10. [Last Updated](last_updated)
11. [Current Version](current_version)
12. [Android Version](android_version)
###Code
data.Size.value_counts().head()
#please remove head() to get a better understanding
###Output
_____no_output_____
###Markdown
It can be seen that data has metric prefixes (Kilo and Mega) along with another string.

Replacing k and M with their values to convert values to numeric.
###Code
data.Size=data.Size.str.replace('k','e+3')
data.Size=data.Size.str.replace('M','e+6')
data.Size.head()
###Output
_____no_output_____
###Markdown
Now, we have two types of values in our Size data:

1. exponential values (still strings, not yet converted to numeric)
2. strings (that cannot be converted into numeric)

Thus specifying categories 1 and 2 as a boolean array **temp**, to convert category 1 to numeric.
###Code
def is_convertable(v):
try:
float(v)
return True
except ValueError:
return False
temp=data.Size.apply(lambda x: is_convertable(x))
temp.head()
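# added check: how many Size entries are still non-numeric strings after the replacements
print((~temp).sum())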
###Output
_____no_output_____
###Markdown
Now checking unique non numeric values (***~temp***) in Size.
###Code
data.Size[~temp].value_counts()
###Output
_____no_output_____
###Markdown
- Replacing 'Varies with Device' by nan and
- Converting 1,000+ to 1000, to make it numeric
###Code
data.Size=data.Size.replace('Varies with device',np.nan)
data.Size=data.Size.replace('1,000+',1000)
###Output
_____no_output_____
###Markdown
Converting the cleaned Size data to numeric type
###Code
data.Size=pd.to_numeric(data.Size)
data.hist(column='Size')
plt.xlabel('Size')
plt.ylabel('Frequency')
###Output
_____no_output_____
###Markdown
Installs

Checking unique values in Install data
###Code
data.Installs.value_counts()
###Output
_____no_output_____
###Markdown
It can be seen that there are 22 unique values, out of which:

- 1 is 0,
- 1 is Free (string), which we will be converting to nan here
- and the rest are numeric but with '+' and ',' which shall be removed to convert these into numeric type.
###Code
data.Installs=data.Installs.apply(lambda x: x.strip('+'))
data.Installs=data.Installs.apply(lambda x: x.replace(',',''))
data.Installs=data.Installs.replace('Free',np.nan)
data.Installs.value_counts()
###Output
_____no_output_____
###Markdown
Checking if data is converted to numeric
###Code
data.Installs.str.isnumeric().sum()
###Output
_____no_output_____
###Markdown
Now in Installs, 1 sample is non numeric out of 10841, which is nan (converted from Free to nan in previous step)
###Code
data.Installs=pd.to_numeric(data.Installs)
data.Installs=pd.to_numeric(data.Installs)
data.Installs.hist();
plt.xlabel('No. of Installs')
plt.ylabel('Frequency')
###Output
_____no_output_____
###Markdown
Reviews

Checking if all values in number of Reviews are numeric
###Code
data.Reviews.str.isnumeric().sum()
###Output
_____no_output_____
###Markdown
One value is non numeric out of 10841. Let's find its value and id.
###Code
data[~data.Reviews.str.isnumeric()]
###Output
_____no_output_____
###Markdown
We could have converted it into an integer like we did for Size but the data for this App looks different. It can be noticed that the entries are entered wrong (i.e. shifted back by one cell). We could fix it by setting **Category** as nan and shifting all the values, but deleting the sample for now.
###Code
data=data.drop(data.index[10472])
###Output
_____no_output_____
###Markdown
To check if row is deleted
###Code
data[10471:].head(2)
data.Reviews=data.Reviews.replace(data.Reviews[~data.Reviews.str.isnumeric()],np.nan)
data.Reviews=pd.to_numeric(data.Reviews)
data.Reviews.hist();
plt.xlabel('No. of Reviews')
plt.ylabel('Frequency')
###Output
_____no_output_____
###Markdown
Rating

For entries to be right we need to make sure they fall within the range 1 to 5.
###Code
print("Range: ", data.Rating.min(),"-",data.Rating.max())
###Output
Range: 1.0 - 5.0
###Markdown
Checking the type of data, to see if it needs to be converted to numeric
###Code
data.Rating.dtype
###Output
_____no_output_____
###Markdown
Data is already numeric, now checking if the data has null values
###Code
print(data.Rating.isna().sum(),"null values out of", len(data.Rating))
data.Rating.hist();
plt.xlabel('Rating')
plt.ylabel('Frequency')
###Output
_____no_output_____
###Markdown
Type

Checking for unique type values and any problem with the data
###Code
data.Type.value_counts()
###Output
_____no_output_____
###Markdown
There are only two types, free and paid. No unwanted data here.

Price

Checking for unique values of price, along with any abnormalities
###Code
data.Price.unique()
###Output
_____no_output_____
###Markdown
The data has a **$** sign which shall be removed to convert it to numeric
###Code
data.Price=data.Price.apply(lambda x: x.strip('$'))
data.Price=pd.to_numeric(data.Price)
data.Price.hist();
plt.xlabel('Price')
plt.ylabel('Frequency')
###Output
_____no_output_____
###Markdown
Some apps have price higher than 350. Out of curiosity I checked the apps to see if there is a problem with data. But no !! they do exist, and Yes !! people buy them.
###Code
temp=data.Price.apply(lambda x: True if x>350 else False)
data[temp].head(3)
###Output
_____no_output_____
###Markdown
Category

Now let's inspect the category by looking into the unique terms.
###Code
data.Category.unique()
###Output
_____no_output_____
###Markdown
It shows no repetition or false data
###Code
data.Category.value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Content Rating

Checking unique terms in Content Rating Categories, and for repetitive or abnormal data.
###Code
data.Content_Rating.unique()
###Output
_____no_output_____
###Markdown
No abnormalities or repetition found
###Code
data.Content_Rating.value_counts().plot(kind='bar')
plt.yscale('log')
###Output
_____no_output_____
###Markdown
Genres

Checking for unique values, abnormality or repetition in data
###Code
data.Genres.unique()
###Output
_____no_output_____
###Markdown
The data is in the format **Category;Subcategory**. Let's divide the data into two columns, one as primary category and the other as secondary, using **;** as separator.
###Code
sep = ';'
rest = data.Genres.apply(lambda x: x.split(sep)[0])
data['Pri_Genres']=rest
data.Pri_Genres.head()
rest = data.Genres.apply(lambda x: x.split(sep)[-1])
rest.unique()
data['Sec_Genres']=rest
data.Sec_Genres.head()
grouped = data.groupby(['Pri_Genres','Sec_Genres'])
grouped.size().head(15)
###Output
_____no_output_____
###Markdown
Generating a two-way table to better understand the relationship between primary and secondary categories of Genres
###Code
twowaytable = pd.crosstab(index=data["Pri_Genres"],columns=data["Sec_Genres"])
twowaytable.head()
###Output
_____no_output_____
###Markdown
For visual representation of this data, let's use stacked columns
###Code
twowaytable.plot(kind="barh", figsize=(15,15),stacked=True);
plt.legend(bbox_to_anchor=(1.0,1.0))
###Output
_____no_output_____
###Markdown
Last Updated

Checking the format of data in Last Updated dates
###Code
data.Last_Updated.head()
###Output
_____no_output_____
###Markdown
Converting the data, i.e. string, to datetime format for further processing
###Code
from datetime import datetime,date
temp=pd.to_datetime(data.Last_Updated)
temp.head()
###Output
_____no_output_____
###Markdown
Taking a difference between last updated date and today to simplify the data for future processing. It gives days.
###Code
data['Last_Updated_Days'] = temp.apply(lambda x:date.today()-datetime.date(x))
data.Last_Updated_Days.head()
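# added step: store the gap as a plain integer number of days, which is easier to work with
# in later numeric analysis than a timedelta object
data['Last_Updated_Days'] = data.Last_Updated_Days.apply(lambda d: d.days)
data.Last_Updated_Days.head()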
###Output
_____no_output_____
###Markdown
Android Version

Checking unique values, repetition, or any abnormalities.
###Code
data.Android_Ver.unique()
###Output
_____no_output_____
###Markdown
Most of the values have an upper value and a lower value (i.e. a range), so let's divide them into two new features, **Version begin and end**, which might come in handy while processing the data further.
###Code
data['Version_begin']=data.Android_Ver.apply(lambda x:str(x).split(' and ')[0].split(' - ')[0])
data.Version_begin=data.Version_begin.replace('4.4W','4.4')
data['Version_end']=data.Android_Ver.apply(lambda x:str(x).split(' and ')[-1].split(' - ')[-1])
data.Version_begin.unique()
###Output
_____no_output_____
###Markdown
Representing categorical data as a two-way table and plotting it as stacked columns for better understanding.
###Code
twowaytable = pd.crosstab(index=data.Version_begin,columns=data.Version_end)
twowaytable.head()
twowaytable.plot(kind="barh", figsize=(15,15),stacked=True);
plt.legend(bbox_to_anchor=(1.0,1.0))
plt.xscale('log')
data.Version_end.unique()
###Output
_____no_output_____
###Markdown
Current Version
###Code
data.Current_Ver.value_counts().head(6)
###Output
_____no_output_____
###Markdown
Let's convert all the versions to the format **number.number** to simplify the data, and check if the data has null values. Also, we are not converting 'Varies with device' to nan here, due to its high frequency (see the value_counts above).
###Code
data.Current_Ver.isna().sum()
###Output
_____no_output_____
###Markdown
As we have only **8** nans, let's replace them with **Varies with device** to simplify
###Code
import re
temp=data.Current_Ver.replace(np.nan,'Varies with device')
temp=temp.apply(lambda x: 'Varies with device' if x=='Varies with device' else re.findall('^[0-9]\.[0-9]|[\d]|\W*',str(x))[0] )
temp.unique()
###Output
_____no_output_____
###Markdown
Saving the updated current version values as a new column
###Code
data['Current_Ver_updated']=temp
data.Current_Ver_updated.value_counts().plot(kind="barh", figsize=(15,15));
plt.legend(bbox_to_anchor=(1.0,1.0))
plt.xscale('log')
###Output
_____no_output_____ |
Project 1/Used Vehicles Price case study & Prediction Rama Danda.ipynb | ###Markdown
We have more vehicles from the years 2010 to 2015, mostly skewed towards the last 20 years. Mostly front wheel drive, followed by 4-wheel drive and rear wheel drive. Most vehicles are powered by gas, followed by diesel at a distant low.
###Code
df['condition'].unique() #finding unique values
df.drive.unique()#finding unique values
df.cylinders.unique()#finding unique values
df.paint_color.unique()#finding unique values
plt.rcParams['figure.figsize'] = (20, 10) #set figure size to 20 by 10
fig, axes = plt.subplots(nrows = 2, ncols = 2)#subplots with 2 by 2
#groupedby for condition values in x and y for counts
X_condition = df.groupby('condition').size().reset_index(name='Counts')['condition']
Y_condition = df.groupby('condition').size().reset_index(name='Counts')['Counts']
# make the bar plot
axes[0, 0].bar(X_condition, Y_condition) #bar graph at axes[0,0] for counts of each condition value
axes[0, 0].set_title('Condition', fontsize=25) #title set as condition
axes[0, 0].set_ylabel('Counts', fontsize=20) #Ylabel as 'count'
axes[0, 0].tick_params(axis='both', labelsize=15) ##set the appearance of ticks to both and size 15
#groupedby for transmission values in x and y for counts
X_transmission = df.groupby('transmission').size().reset_index(name='Counts')['transmission']
Y_transmission = df.groupby('transmission').size().reset_index(name='Counts')['Counts']
# make the bar plot
axes[0, 1].bar(X_transmission, Y_transmission) #bar graph at axes[0,1] for the transmission types
axes[0, 1].set_title('transmission', fontsize=25) #title set as transmission
axes[0, 1].set_ylabel('Counts', fontsize=20) #Ylabel as 'count'
axes[0, 1].tick_params(axis='both', labelsize=15) ##set the appearance of ticks to both and size 15
#map the cylinder text labels to numeric values, then group by cylinder count for x and y counts
X_cylinders = df.replace({'cylinders': {'8 cylinders':8,'6 cylinders':6,'4 cylinders':4,'5 cylinders':5,
'10 cylinders':10, 'other':1,'3 cylinders':3, '12 cylinders':12}}).groupby('cylinders').size().reset_index(name='Counts')['cylinders']
Y_cylinders = df.replace({'cylinders': {'8 cylinders':8,'6 cylinders':6,'4 cylinders':4,'5 cylinders':5,
'10 cylinders':10, 'other':1,'3 cylinders':3, '12 cylinders':12}}).groupby('cylinders').size().reset_index(name='Counts')['Counts']
# make the bar plot
axes[1, 0].bar(X_cylinders, Y_cylinders) #bar graph at axes[1,0] for the cylinder counts
axes[1, 0].set_title('Cylinders', fontsize=25) #title set as condition
axes[1, 0].set_ylabel('Counts', fontsize=20) #Ylabel as 'count'
axes[1, 0].tick_params(axis='both', labelsize=15) ##set the appearance of ticks to both and size 15
X_manufacturer = df.groupby('manufacturer').size().reset_index(name='Counts')['manufacturer']
Y_manufacturer = df.groupby('manufacturer').size().reset_index(name='Counts')['Counts']
# make the bar plot
axes[1, 1].bar(X_manufacturer, Y_manufacturer) #bar graph at axes[1,1] for the manufacturers
axes[1, 1].set_title('Manufacturer', fontsize=25) #title set as condition
axes[1, 1].set_ylabel('Counts', fontsize=20) #Ylabel as 'count'
axes[1, 1].tick_params(axis='both', labelsize=15,rotation=90) ##set the appearance of ticks to both and size 15
###Output
_____no_output_____
###Markdown
It appears there are more vehicles in excellent and good condition. Chevy & Ford vehicles are sold the most among used vehicles.
###Code
#set up the figure size
plt.rcParams['figure.figsize'] = (9, 9)
num_features1 = ['price', 'year', 'odometer'] #features for correlation analysis
X = df[num_features1].to_numpy()
#X
# instantiate the visualizer with the Covariance ranking algorithm using Pearson
visualizer = Rank2D(features=num_features1, algorithm='pearson')
visualizer.fit(X) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof(outpath=r'C:\Users\rdanda\OneDrive - Microsoft\Documents\Bellevue\DSC 550 Mining Data\week-6\pcoordsato.png') # Draw/show/poof the data
plt.show()
###Output
_____no_output_____
###Markdown
Negative correlation between year and odometer (understandably, higher mileage for older years). Positive correlation between price and year.
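To back these observations with numbers, a short hedged sketch (reusing the `num_features1` list defined above) that prints the pairwise Pearson coefficients directly with pandas:
```python
# Pairwise Pearson correlations for the selected numeric features (illustrative check)
print(df[num_features1].corr(method='pearson'))
```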
###Code
df.paint_color = pd.Categorical(df.paint_color) #converting color to categorical
df['color_code'] = df.paint_color.cat.codes #create color code instead of text
#df.head()
# stacked bar charts to compare Excelent and Good condition vehicles by color
#set up the figure size
plt.rcParams['figure.figsize'] = (20, 10)
# make subplots (just one here)
fig, axes = plt.subplots(nrows =1, ncols =1)
#get the counts of excelt and good condition vehicles
condition_excellent = df[df['condition']=='excellent']['paint_color'].value_counts()
condition_good = df[df['condition']=='good']['paint_color'].value_counts()
condition_good = condition_excellent.reindex(index = condition_good.index) #reindex with good condition values
# make the bar plot
p1 = axes.bar(condition_excellent.index, condition_excellent.values) #create bar graph with excelent values
p2 = axes.bar(condition_good.index, condition_good.values, bottom=condition_good.values) #create bar graph with good by having excelent at the bottom of the stacked chart
axes.set_title('Condition By Color', fontsize=25) #title at 0,0 axis
axes.set_ylabel('Counts', fontsize=20)#ylable count
axes.tick_params(axis='both', labelsize=15,rotation=90) #ticks on both axis with size 15
axes.legend((p1[0], p2[0]), ('Excelent', 'good'), fontsize = 15) #legend on Ecxcelent and good
###Output
_____no_output_____
###Markdown
White & black colors are dominant among vehicles in excellent and good condition, followed by silver.
###Code
plt.rcParams['figure.figsize'] = (20, 10)
# make subplots (just one here)
fig, axes = plt.subplots(nrows =1, ncols =1)
#get the counts of excelt and good condition vehicles
condition_excellent = df[df['condition']=='excellent']['region'].value_counts()
condition_good = df[df['condition']=='good']['region'].value_counts()
condition_good = condition_excellent.reindex(index = condition_good.index) #reindex with good condition values
# make the bar plot
p1 = axes.bar(condition_excellent.index, condition_excellent.values) #create bar graph with excelent values
p2 = axes.bar(condition_good.index, condition_good.values, bottom=condition_good.values) #create bar graph with good by having excelent at the bottom of the stacked chart
axes.set_title('Condition By Region', fontsize=25) #title at 0,0 axis
axes.set_ylabel('Counts', fontsize=20)#ylable count
axes.tick_params(axis='both', labelsize=15,rotation=90) #ticks on both axis with size 15
axes.legend((p1[0], p2[0]), ('Excelent', 'good'), fontsize = 15) #legend on Ecxcelent and good
###Output
_____no_output_____
###Markdown
Orlando and Washington DC have more cars in excellent condition, followed by Reno/Tahoe.
###Code
#Step 11- Fill in missing values and eliminate feature
#create a function that takes data frame column and replace missing values with median values
def fill_na_median(df, inplace=True):
return df.fillna(df.median(), inplace=inplace) #return median values of the column beeing passed
fill_na_median(df['odometer'])
df['odometer'].describe()
def fill_na_most(df, inplace=True): #defing the function to replace missing with most occured value 'sedan'
return df.fillna('sedan', inplace=inplace)
fill_na_most(df['type'])
df['type'].describe()
# log-transformation
def log_transformation(df): #define a function to return log1p(natural logarithmic value of x + 1) values for given df
return df.apply(np.log1p)
df['price_log1p'] = log_transformation(df['price'])
df.describe()
#Step 12 - adjust skewed data (fare)
plt.rcParams['figure.figsize'] = (10, 5) #set figure size to 10,5
plt.hist(df['price_log1p'], bins=40) #check the distribution using histogram
plt.xlabel('Price_log1p', fontsize=20) #check xlabel with fontsize 20
plt.ylabel('Counts', fontsize=20) #Y axis label
plt.tick_params(axis='both', labelsize=15) #ticks on both axis and label size 15
#plt.show()
#df.head()
df.type = pd.Categorical(df.type) #converting type to categorical
df.region = pd.Categorical(df.region) #converting region to categorical
df.manufacturer =pd.Categorical(df.manufacturer) #converting type to categorical
df.model =pd.Categorical(df.model) #converting type to categorical
df.condition =pd.Categorical(df.condition) #converting type to categorical
df.cylinders =pd.Categorical(df.cylinders) #converting type to categorical
df.fuel =pd.Categorical(df.fuel) #converting type to categorical
df.transmission =pd.Categorical(df.transmission) #converting type to categorical
df.drive =pd.Categorical(df.drive) #converting type to categorical
# converting catagorical values to numbers (type of vehicles)
df['type_code'] = df.type.cat.codes
# converting catagorical values to numbers (region of vehicles)
df['region_code'] = df.region.cat.codes
# converting catagorical values to numbers (manufacturer of vehicles)
df['manufacturer_code'] = df.manufacturer.cat.codes
#converting catagorical values to numbers (model of vehicles)
df['model_code'] = df.model.cat.codes
#converting catagorical values to numbers (condition of vehicles)
df['condition_code'] = df.condition.cat.codes
#converting catagorical values to numbers (cylinders of vehicles)
df['cylinders_code'] = df.cylinders.cat.codes
#converting catagorical values to numbers (fuel of vehicles)
df['fuel_code'] = df.fuel.cat.codes
#converting catagorical values to numbers (transmission of vehicles)
df['transmission_code'] = df.transmission.cat.codes
#converting catagorical values to numbers (drive of vehicles)
df['drive_code'] = df.drive.cat.codes
#Columns with too many Null Values
NotAvailable_val = df.isna().sum() #find all the columns with null values
def natavailable_func(na, threshold = .4): #only select variables that pass the threshold
columns_passed = [] #define the empty list
for i in na.keys(): #loop through the columns
if na[i]/df.shape[0]<threshold: #if the shape is grater than 40% then append the values
columns_passed.append(i) #append the colunm to the list
return columns_passed #return the columns
#get the columns that are not having too many null values (>40%)
df_clean = df[natavailable_func(NotAvailable_val)]
df_clean.columns
#Identify outliner if any in the price
df_clean = df_clean[df_clean['price'].between(999.99, 250000)] # calclulating Inter Quartile Range
Q1 = df_clean['price'].quantile(0.25) #get 25%
Q3 = df_clean['price'].quantile(0.75) #get 75%
IQR = Q3 - Q1 #get the inter quartile by taking the differnece btw 3 and 1 quarters
# get only Values between Q1-1.5IQR and Q3+1.5IQR
df_filtered = df_clean.query('(@Q1 - 1.5 * @IQR) <= price <= (@Q3 + 1.5 * @IQR)')
df_filtered.boxplot('price') #showing using boxplot
#Identify outliner if any in the millage
df_clean = df_clean[df_clean['odometer'].between(999.99, 250000)] # Computing IQR
Q1 = df_clean['odometer'].quantile(0.25)
Q3 = df_clean['odometer'].quantile(0.75)
IQR = Q3 - Q1
# Filtering Values between Q1-1.5IQR and Q3+1.5IQR
df_filtered = df_clean.query('(@Q1 - 1.5 * @IQR) <= price <= (@Q3 + 1.5 * @IQR)')
df_filtered.boxplot('odometer')
# calculate correlation matrix on the cleaned data
corr = df_clean.corr()# plot the heatmap
sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns, annot=True, cmap=sns.diverging_palette(220, 50, as_cmap=True))
df_clean.columns
removecolumns =['price_log1p','id','region', 'manufacturer', 'model', 'condition', 'cylinders','fuel', 'transmission', 'drive','type','paint_color', 'description', 'state']
df_clean = df_clean.drop(columns = removecolumns)
df_clean = pd.get_dummies(df_clean, drop_first=True)
print(df_clean.columns)
df_clean
#hot encoding color code of the data frame
print(df_clean['color_code'].unique())
df_clean['color_code'] = pd.Categorical(df_clean['color_code'])
color_code_Type = pd.get_dummies(df_clean['color_code'], prefix = 'color_code')
color_code_Type.head()
#hot encoding type_code code of the data frame
print(df_clean['type_code'].unique())
df_clean['type_code'] = pd.Categorical(df_clean['type_code'])
type_code_Type = pd.get_dummies(df_clean['type_code'], prefix = 'type_code')
type_code_Type.head()
#hot encoding region_code code of the data frame
print(df_clean['region_code'].unique())
df_clean['region_code'] = pd.Categorical(df_clean['region_code'])
region_code_Type = pd.get_dummies(df_clean['region_code'], prefix = 'region_code')
region_code_Type.head()
#hot encoding region_code code of the data frame
print(df_clean['cylinders_code'].unique())
df_clean['cylinders_code'] = pd.Categorical(df_clean['cylinders_code'])
cylinders_code_Type = pd.get_dummies(df_clean['cylinders_code'], prefix = 'cylinders_code')
cylinders_code_Type.head()
df_clean = pd.concat([df_clean, cylinders_code_Type, region_code_Type, type_code_Type,color_code_Type], axis=1)
df_clean = df_clean.drop(columns=['cylinders_code', 'region_code', 'type_code','color_code'])
df_clean.head()
###Output
_____no_output_____
###Markdown
Random forest model before applying the model on 15 columns
###Code
# scaled the data using StandardScaler on price
Xo = df_clean.loc[1:1000, df_clean.columns != 'price']#all values except price to X
yo = df_clean.loc[1:1000, df_clean.columns == 'price']
yo = yo.values.flatten()
#creating random forest model to check the variables using price
Xo_train, Xo_test, yo_train, yo_test = train_test_split(Xo, yo, test_size=.25, random_state=1) #split the data in to test and train with test size at 25%
sc = StandardScaler()
Xo_train = sc.fit_transform(Xo_train)
Xo_test = sc.transform(Xo_test)
modelo = RandomForestRegressor(random_state=2) #building the model b random forest method
modelo.fit(Xo_train, yo_train) #fitting the model using training data
predo = modelo.predict(Xo_test) #predicting the model using the test data
print('The mean absolute error',mae(yo_test, predo)) #mean absolute error of the model
print('Score of the model is ',modelo.score(Xo_test,yo_test)) #accuracy of the model based on train and test
###Output
Score of the model is 0.6889615728801413
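###Markdown
The random forest also exposes per-feature importance scores, which is one way to see which columns drive the price prediction. A minimal hedged sketch, assuming the fitted `modelo` and the feature frame `Xo` from the cell above:
```python
# Rank features by the fitted random forest's importance scores (illustrative)
importances = pd.Series(modelo.feature_importances_, index=Xo.columns)
print(importances.sort_values(ascending=False).head(10))
```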
###Markdown
PCA to transform the 88 features into a small number of components
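A common way to choose the number of components is the cumulative explained variance. A hedged sketch of that check (to be run once the scaled training matrix `Xp_train` from the cell below exists):
```python
# Inspect cumulative explained variance to pick a component count (illustrative)
import numpy as np
from sklearn.decomposition import PCA

pca_full = PCA().fit(Xp_train)                      # assumes the scaled training features
cumvar = np.cumsum(pca_full.explained_variance_ratio_)
k_95 = int(np.argmax(cumvar >= 0.95)) + 1           # smallest k explaining 95% of the variance
print(k_95, cumvar[:5])
```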
###Code
#PCA analysis
# scaled the data using StandardScaler on price
Xp = df_clean.loc[1:1000, df_clean.columns != 'price']#all values except price to X
#X = StandardScaler().fit_transform(X) #making the data zero mean and variance along each feature
#y = df_clean['price'] #actual price values while still retaining before standar scalor operation
yp = df_clean.loc[1:1000, df_clean.columns == 'price']
yp = yp.values.flatten()
#creating random forest model to check the variables using price
Xp_train, Xp_test, yp_train, yp_test = train_test_split(Xp, yp, test_size=.25, random_state=0) #split the data in to test and train with test size at 25%
sc = StandardScaler()
Xp_train = sc.fit_transform(Xp_train)
Xp_test = sc.transform(Xp_test)
pca = PCA(n_components=4)
Xp_train = pca.fit_transform(Xp_train)
Xp_test = pca.transform(Xp_test)
explained_variance = pca.explained_variance_ratio_
explained_variance
pca = PCA(n_components=1)
Xp_train = pca.fit_transform(Xp_train)
Xp_test = pca.transform(Xp_test)
modelp = RandomForestRegressor(random_state=2) #building the model b random forest method
modelp.fit(Xp_train, yp_train) #fitting the model using training data
predp = modelp.predict(Xp_test) #predicting the model using the test data
print('The mean absolute error',mae(yp_test, predp)) #mean absolute error of the model
print('The mean price of the vehicle is',df_clean['price'].mean()) #mean vehicle price of all data set
print('Score of the model is ',modelp.score(Xp_test,yp_test)) #accuracy of the model based on train and test
#Lasso Regression to reduce features
from sklearn.linear_model import Lasso
from sklearn.datasets import load_boston
from sklearn.preprocessing import StandardScaler
# Create features
Xl = df_clean.loc[1:1000, df_clean.columns != 'price']#all values except price to X (features)
# Create target
yl = df_clean.loc[1:1000, df_clean.columns == 'price'] #lable data
#Standardize features
scaler = StandardScaler() #instance of a scalar
features_standardized = scaler.fit_transform(Xl) #fit the features to scalar
#Create lasso regression with alpha value
regression = Lasso(alpha=0.5)
#Fit the linear regression
model = regression.fit(features_standardized, yl)
print(model)
col1=list(Xl.columns)
coef1=list(model.coef_)
print(model.intercept_)
print(model.coef_)
coef1[1]
for i in range(12):
print('Effect of Price for Feature',col1[i],' is ', coef1[i])
###Output
Effect of Price for Feature year is 3506.6848264167925
Effect of Price for Feature odometer is -3674.573701489352
Effect of Price for Feature manufacturer_code is 508.4682607371529
Effect of Price for Feature model_code is -724.3019669311052
Effect of Price for Feature condition_code is 119.77246218342775
Effect of Price for Feature fuel_code is -639.1800164333616
Effect of Price for Feature transmission_code is -169.63319866094332
Effect of Price for Feature drive_code is 3.3135451799948035
Effect of Price for Feature cylinders_code_0 is 0.0
Effect of Price for Feature cylinders_code_2 is 0.0
Effect of Price for Feature cylinders_code_3 is -3176.151161881574
Effect of Price for Feature cylinders_code_4 is -475.6209434460783
###Markdown
Based on the results, it is evident that transmission type, condition, color and type of vehicle have a lower effect on the price of the car compared to the rest. I would eliminate these 4 columns. What is surprising to me is that the number of cylinders has about a 3,900 influence for each unit the cylinder count goes up. I am not surprised that for each unit the year goes up there is a 1,700 positive change. The region in which the vehicle is sold is not intuitive to interpret, and it shows a negative 1,000 dollar effect. The biggest influencing factor is the mileage a vehicle has, with about a -4,457 dollar effect on price for each unit change.
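Dropping the low-impact features could also be done programmatically from the fitted Lasso instead of by hand. A minimal hedged sketch with scikit-learn's `SelectFromModel`, reusing `model` and `Xl` from above (the threshold of 100 is an arbitrary illustration):
```python
# Keep only features whose |Lasso coefficient| exceeds an illustrative threshold
from sklearn.feature_selection import SelectFromModel

selector = SelectFromModel(model, prefit=True, threshold=100.0)
kept = Xl.columns[selector.get_support()]
print(len(kept), list(kept)[:10])
```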
###Code
#model evaluation using larso score
from sklearn.model_selection import train_test_split
data_train, data_val = train_test_split(df_clean, test_size = 0.2, random_state = 2)
#Classifying Independent and Dependent Features
#_______________________________________________
#Dependent Variable
Y_train = data_train.iloc[:, -1].values
#Independent Variables
X_train = data_train.iloc[:,0 : -1].values
#Independent Variables for Test Set
X_test = data_val.iloc[:,0 : -1].values
data_val.head()
#Evaluating The Model With RMSLE
def score(y_pred, y_true):
    # root mean squared log error between the predicted and the actual prices
    error = np.square(np.log10(y_pred + 1) - np.log10(y_true + 1)).mean() ** 0.5
    score = 1 - error  # turn the error into a score (1 = perfect)
    return score       # return the score
actual_price = list(data_val['price']) #getting the values of price of test data
actual_price = np.asarray(actual_price) #in to np array
#Lasso Regression
from sklearn.linear_model import Lasso
#Initializing the Lasso Regressor with Normalization Factor as True
lasso_reg = Lasso(normalize=True)
#Fitting the Training data to the Lasso regressor
lasso_reg.fit(X_train,Y_train)
#Predicting for X_test
y_pred_lass =lasso_reg.predict(X_test)
#Printing the Score with RMLSE
print("\n\nLasso SCORE : ", score(y_pred_lass, actual_price))
###Output
Lasso SCORE : 0.9974925191203602
###Markdown
The Lasso Regression attained a score of 73% with the given dataset.
###Code
#plotting LR and Ridge regression scores
import matplotlib
matplotlib.rcParams.update({'font.size': 12})
X_train, X_test, y_train, y_test = train_test_split(Xp, yp, test_size=.25, random_state=0) #split the data in to test and train with test size at 25%
print( len(X_test), len(y_test)) #checking to see if the data lengths are same
lr = LinearRegression() #initializing the LinearRegression
lr.fit(X_train, y_train) #fitting the train data to LR
rr = Ridge(alpha=0.01) #setting the alpha to 0.01 (hyper parameter)
# higher the alpha value, more restriction on the coefficients; low alpha > more generalization
rr.fit(X_train, y_train) #using ridge to fit the training data
rr100 = Ridge(alpha=100) # comparison with alpha value at 100
rr100.fit(X_train, y_train) #fitting the ridge at 100
train_score=lr.score(X_train, y_train) #train scoring of x and y values for LR
test_score=lr.score(X_test, y_test) #testing score of x and y values for LR
Ridge_train_score = rr.score(X_train,y_train) ##train scoring of x and y values for Ridge at alpha 0.01
Ridge_test_score = rr.score(X_test, y_test)#testing score of x and y values for Ridge at apha 0.01
Ridge_train_score100 = rr100.score(X_train,y_train)##train scoring of x and y values for Ridge at alpha 100
Ridge_test_score100 = rr100.score(X_test, y_test)#testing score of x and y values for Ridge at apha 100
plt.plot(rr.coef_,alpha=0.7,linestyle='none',marker='*',markersize=5,color='red',label=r'Ridge; $\alpha = 0.01$',zorder=7) #plot the alpha 0.1 for ridge
plt.plot(rr100.coef_,alpha=0.5,linestyle='none',marker='d',markersize=6,color='blue',label=r'Ridge; $\alpha = 100$') #plot the coefs with alpth at 100 for ridge
plt.plot(lr.coef_,alpha=0.4,linestyle='none',marker='o',markersize=7,color='green',label='Linear Regression')#plot the coefs for LR
plt.xlabel('Coefficient Index',fontsize=12) #plot xlabel
plt.ylabel('Coefficient Magnitude',fontsize=10)#plot ylabel
plt.legend(fontsize=11,loc=4) #legend for the plot
plt.show() #show the plot
###Output
77 77
###Markdown
On the X axis we plot the coefficient index for the 12 features. When α = 0.01 the coefficients are less restricted and are essentially the same as those of plain linear regression. For α = 100, coefficient indices 7, 8, 9 and 10 are shrunk compared to linear regression.
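The train/test R² scores computed in the previous cell are never printed; displaying them makes the comparison explicit. A short sketch that only reuses the variables already defined above:
```python
# R^2 scores computed above but not displayed
print('Linear regression   train/test:', train_score, test_score)
print('Ridge (alpha=0.01)  train/test:', Ridge_train_score, Ridge_test_score)
print('Ridge (alpha=100)   train/test:', Ridge_train_score100, Ridge_test_score100)
```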
###Code
#hyper parameters influence on Lasso and LR
# lasso and ridge regression coefficients can be zero (used less features) (dimensinality reduction too)
X_train, X_test, y_train, y_test = train_test_split(Xp, yp, test_size=.25, random_state=0) #split the data in to test and train with test size at 25%
lasso = Lasso() #initializing the Lasso
lasso.fit(X_train,y_train) #fitting the train data to lasso
train_score=lasso.score(X_train,y_train) #getting the score to lasso train
test_score=lasso.score(X_test,y_test) #getting the score to lasso test
coeff_used = np.sum(lasso.coef_!=0) #use all the coefs that are not zero
print ("Training score is ", train_score ) #printing the training score
print ( "Test score is ", test_score) #printing the trest score
print ("Number of features used are ", coeff_used) #print coefs used in the lasso coefs scores evalution
lasso001 = Lasso(alpha=0.01, max_iter=10e5) #setting the lasso alpha to start at 0.01 to max of 10e5
lasso001.fit(X_train,y_train) #fitting the lasso train at alpha set a 0.01
train_score001=lasso001.score(X_train,y_train)#lasso score for train at 0.01 alpha
test_score001=lasso001.score(X_test,y_test)#lasso score for test at 0.01 alpha
coeff_used001 = np.sum(lasso001.coef_!=0) #use all the coefs that are not zero for alpha 0.01
print ("Training score for alpha=0.01 is ", train_score001 )#printing the train scores for alpha 0.01
print( "Test score for alpha =0.01 is ", test_score001) #printing the score for test score for alpha 0.01
print ("Number of features used for alpha =0.01:", coeff_used001) #number of features used for alpha 0.01
lasso00001 = Lasso(alpha=0.0001, max_iter=10e5)#setting the lasso alpha to start at 0.0001 to max of 10e5
lasso00001.fit(X_train,y_train)#fitting the lasso train at alpha set a 0.0001
train_score00001=lasso00001.score(X_train,y_train)#lasso score for train at 0.0001 alpha
test_score00001=lasso00001.score(X_test,y_test)#lasso score for test at 0.0001 alpha
coeff_used00001 = np.sum(lasso00001.coef_!=0)#use all the coefs that are not zero for alpha 0.0001
print ("Training score for alpha=0.0001 ", train_score00001 )#printing the train scores for alpha 0.0001
print ("Test score for alpha =0.0001 ", test_score00001)#printing the score for test score for alpha 0.0001
print ("Number of features used: for alpha =0.0001 ", coeff_used00001) #number of features used for alpha 0.0001
lr = LinearRegression() #Initialize LR
lr.fit(X_train,y_train) #fit LR to train
lr_train_score=lr.score(X_train,y_train) #score LR on train data
lr_test_score=lr.score(X_test,y_test)#score LR to trest
print ("LR training score is ", lr_train_score )#print LR train score
print ("LR test score is ", lr_test_score) #print LR test score
plt.subplot(1,2,1) #subplot 1 row two columns first column value
plt.plot(lasso.coef_,alpha=0.7,linestyle='none',marker='*',markersize=5,color='red',label=r'Lasso; $\alpha = 1$',zorder=7) # plot lasso coefs
plt.plot(lasso001.coef_,alpha=0.5,linestyle='none',marker='d',markersize=6,color='blue',label=r'Lasso; $\alpha = 0.01$') # plot lasso coefs at 0.01
plt.xlabel('Coefficient Index',fontsize=12) #xlable set
plt.ylabel('Coefficient Magnitude',fontsize=12) #y lable set
plt.legend(fontsize=11,loc=4) #legend
plt.subplot(1,2,2) #plot size for 2nd column
plt.plot(lasso.coef_,alpha=0.7,linestyle='none',marker='*',markersize=5,color='red',label=r'Lasso; $\alpha = 1$',zorder=7) # # plot lasso coefs
plt.plot(lasso001.coef_,alpha=0.5,linestyle='none',marker='d',markersize=6,color='blue',label=r'Lasso; $\alpha = 0.01$') # # plot lasso coefs at 0.01
plt.plot(lasso00001.coef_,alpha=0.8,linestyle='none',marker='v',markersize=6,color='black',label=r'Lasso; $\alpha = 0.00001$') # # plot lasso coefs at 0.0001
plt.plot(lr.coef_,alpha=0.7,linestyle='none',marker='o',markersize=5,color='green',label='Linear Regression',zorder=2)
plt.xlabel('Coefficient Index',fontsize=12)
plt.ylabel('Coefficient Magnitude',fontsize=12)
plt.legend(fontsize=11,loc=4)
plt.tight_layout()
plt.show()
###Output
Training score is 0.6209544639364323
Test score is 0.5758669243418286
Number of features used are 41
Training score for alpha=0.01 is 0.6209746497513919
Test score for alpha =0.01 is 0.5728970928493532
Number of features used for alpha =0.01: 42
###Markdown
Comparing the Lasso to Ridge, the scores are 79% versus 59%. For this data set I would prefer to use Ridge regression going forward.
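Rather than hard-coding alpha values, cross-validation can choose the regularization strength. A hedged sketch with scikit-learn's built-in CV estimators (the alpha grid is an arbitrary example, and `X_train`/`y_train` are assumed to be the split from the previous cell):
```python
# Pick alpha by cross-validation instead of fixing it manually (illustrative)
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV

alphas = np.logspace(-4, 2, 25)
ridge_cv = RidgeCV(alphas=alphas).fit(X_train, y_train)
lasso_cv = LassoCV(alphas=alphas, max_iter=100000).fit(X_train, np.ravel(y_train))
print('best ridge alpha:', ridge_cv.alpha_, '| best lasso alpha:', lasso_cv.alpha_)
```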
###Code
## NN prediction of price using Keras/TF
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train) #scaling the datasets of train
X_test = sc.transform(X_test) #scaling the datasets of test
from keras.models import Sequential #sequential reg from keras
from keras.layers import Dense #dense layers from keras
from keras.wrappers.scikit_learn import KerasRegressor
from matplotlib import pyplot as plt #matplot lib
import warnings
warnings.filterwarnings('ignore')
# define base model
def baseline_model():
# create model
model = Sequential() #sequential model
model.add(Dense(30, input_dim=88, kernel_initializer='normal', activation='relu')) #with 30 nodes and 88 inputs features
    model.add(Dense(88, kernel_initializer='uniform', activation='relu')) # hidden layer 1 with 88 units
    model.add(Dense(88, kernel_initializer='uniform', activation='relu')) # hidden layer 2 with 88 units
model.add(Dense(1, kernel_initializer='normal')) #output layer
# Compile model
model.compile(loss='mse',
optimizer='adam',
metrics=['mae'] ) #compiling the model
return model
model = baseline_model() #calling the above function
model.summary() #get summary of the network
from __future__ import absolute_import, division, print_function #function for printing and divisions
import tensorflow as tf #importing tensor flow
from tensorflow import keras #importing keras
EPOCHS = 500 #initializing the total EPOCHS
# Store training stats
history = model.fit(X_train, y_train, epochs=EPOCHS,
validation_split=0.2, verbose=0) #fitting the model with train dataset
import tensorflow_docs as tfdocs
import tensorflow_docs.plots
plotter = tfdocs.plots.HistoryPlotter(smoothing_std=2)
#visualize the modelโs training progress using the stats stored in the history object
plotter.plot({'Basic': history}, metric = "mae")
plt.ylim([0, 5000])
plt.ylabel('MAE [Price]')
test_predictions = model.predict(X_test).flatten() #preditct the model using test data set
plt.scatter(y_test, test_predictions) #plotting actual test data set and predicted data set
plt.xlabel('True Values [1000$]')
plt.ylabel('Predictions [1000$]')
plt.axis('equal')
plt.xlim(plt.xlim())
plt.ylim(plt.ylim())
_ = plt.plot([-100, 100], [-100, 100])
#error distribution
error = test_predictions - y_test #getting the error distribution for prediction and test datasets
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [Price]")
_ = plt.ylabel("Count")
n=((np.sqrt(test_predictions - y_test)))
n = n[np.logical_not(np.isnan(n))]
error = np.sum(n)
error/len(df_clean) #percentage of errors to the total dataset
from sklearn.metrics import r2_score
r2_score(y_test, test_predictions) #r squared value
###Output
_____no_output_____ |
MLE Mattis.ipynb | ###Markdown
Miguel Mattis Maximum Likelihood Estimation Exercise Maximum Likelihood Estimation Maximum likelihood estimation finds the parameter value that makes the observed data most probable. For a sequence of coin flips the likelihood follows $p^{n_\text{heads}} \cdot (1-p)^{n_\text{tails}}$. These values will be calculated manually and then compared with the result of maximizing the function directly.
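As a quick worked derivation (under the usual independent coin-flip assumption), maximizing the log-likelihood gives the estimator that the notebook later recovers both numerically and with sympy:

$$\log L(p) = n_\text{heads}\log p + n_\text{tails}\log(1-p),\qquad \frac{d\log L}{dp} = \frac{n_\text{heads}}{p} - \frac{n_\text{tails}}{1-p} = 0 \;\Longrightarrow\; \hat{p} = \frac{n_\text{heads}}{n_\text{heads} + n_\text{tails}}.$$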
###Code
%%html
<div class="mxgraph" style="max-width:100%;border:1px solid transparent;" data-mxgraph="{"highlight":"#0000ff","nav":true,"resize":true,"toolbar":"zoom layers lightbox","edit":"_blank","xml":"<mxfile modified=\"2019-04-08T02:57:38.239Z\" host=\"www.draw.io\" agent=\"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36\" etag=\"HCbSf2SMyf0mKBm6y943\" version=\"10.6.0\" type=\"google\"><diagram id=\"MUb6XdWlYKxxCSPsXOCQ\" name=\"Page-1\">5VfJbtswEP0aH11os+wcvSU5pEUBp0t6I8yxxJTSqDRlS/36UhJprTHcNkYK9CTNG3I4nDfzbI3cZZTdCZKE75ECHzkWzUbuauQ4tnvjqUeB5BUy9TUQCEb1ohrYsJ+gQUujKaOwby2UiFyypA1uMY5hK1sYEQKP7WU75O1TExJAD9hsCe+jXxiVYYXOJlaN3wMLQnOybWlPRMxiDexDQvHYgNz1yF0KRFm9RdkSeFE8U5dq3+0L3lNiAmJ5yYZv1h2df7p9SLObx8/wDJY9puOpzk3m5sJA1f21iUKGGGBM+LpGFwLTmEIR1VJWveYBMVGgrcBnkDLXZJJUooJCGXHtVQmL/KveXxpPhfFuYsxV1nSucm1VuRYJvlgCDe0xFVs4c2/TSkQEIM+s809EqQ4HjEDlo/YJ4ESyQzsPolstOK2r2VAvmpDfIEfHPRCe6pNGjs9Vuosdqgs3afN/pGgc431Z+Lla4PpJVjvVW1A874HQvQmlMquiVb5eP0jIZJu+vRT4HZbIUSgkxrhoih3jvAMRzoJYmVvFDih8cQAhmRqtuXZEjNKyo44hk7BJSEnZUQlJr8t0JVQAyM6T3yfLbJjpQczNDFfmsR5r24x12BjpqXUlep0BervDGNN5IWJ1URs8XDoO/Yo0rjx0Y4Nd3OX6hI/IyrbUBXesdsEdv1PJakr1rqZydQJ5TjuQ7XUCVWPcC1Sycrr2nxM1eVuRrHXxqSWLVxdJ90KR9N5SJN0rieQjYfz/E0nX7cyssZsq6Q1oxuxaKuldiV97/OFSdttVHuKhp8p/R4LXJsGzBkhwBkjoCuyrkeBfiYR/lgLP78zBwJ+FV2JAmfVHQPWTVX9Kuetf</diagram></mxfile>"}"></div>
<script type="text/javascript" src="https://www.draw.io/js/viewer.min.js"></script>
import numpy as np
def coin(p): # coin(p) draws a uniform random number in [0, 1) and returns 1 if it is less than p, otherwise 0
return int(np.random.random() < p)
N = 100 #iterates the function 100 times
p = 0.3
S = np.zeros((N,))
for i in range(N):
S[i] = coin(p) #resets the function each time and for each iteration from 1 to 100, checks if the random variable from 0 to 1 is less than p
coin(0.3)
S
sum(S)/S.shape[0] #the sum of the results being 1 divided by the shape of the function being 100
S.shape[0]
num_heads = np.sum(S) # treating 1 as heads: num_heads counts the draws that fell below p
num_tails = N - np.sum(S) # num_tails counts the remaining draws (the zeros)
(p**num_heads)*(1-p)**num_tails #follows the formula p^n * (1-p)^n-1
(p**np.sum(S))*((1-p)**np.sum(np.logical_not(S))) #another way of stating the above
np.logical_not(S).astype(int) #tells us the opposite of list S
def likelihood(p,S): #defining the likelihood function
return (p**np.sum(S))*((1-p)**np.sum(np.logical_not(S)))
def likelihood(p,S): #defining the likelihood function with respect to coin values
num_heads = np.sum(S)
num_tails = np.sum(np.logical_not(S))
return (p**num_heads)*((1-p)**num_tails)
start = 0 #goes from value 0 to 1 with 100 even steps
stop = 1
steps = 100
p = np.linspace(start,stop,steps)
p = np.linspace(0,1,100)
p
L = likelihood(np.array([0.0,0.1]),S) #chance of the probability of the coin
L #the likelihood at 0 of it occuring
L = likelihood(p,S) #the likelihood off all the runs
L
import matplotlib.pyplot as plt
plt.plot(p,L)
np.argmax(L) #The maximum
p[np.argmax(L)] #probability at the maximum
np.sum
L
#the symbolic representation of the maximum likelihood formula
from sympy import *
N_heads, N_total, p = symbols('N_heads,N_total,p')
f = p**N_heads*(1-p)**(N_total-N_heads)
f
df_dp = diff(f,p)
df_dp
solve(df_dp,p)
import numpy as np
mu = 2.1
sigma = 0.12
x = sigma * np.random.randn(1,10) + mu #follows mx+b for each point
np.random.random(1)
plt.hist(np.random.randn(10000,),50); #demonstrates the normal curve and also proves the maximum likelihood function
x = sigma * np.random.randn(1,10) + mu
x = sigma * np.random.randn(1000,1) + mu
plt.hist(x,50);
x
###Output
_____no_output_____
###Markdown

###Code
def normal_pdf(x,mu,sigma): #defines the normal pdf
return (1/(np.sqrt(2*np.pi*sigma**2)))*np.exp((-(x-mu)**2)/(2*sigma**2))
x = np.linspace(-4,4,100)
mu = 2.1
sigma = 0.12
y = normal_pdf(x,mu,sigma)
plt.plot(x,y) #normal distribution curve centered around mu with a modifier of sigma
S = sigma * np.random.randn(1,10) + mu
normal_pdf(S,mu,sigma)
np.prod(normal_pdf(S,mu,sigma))
mu = 1
sigma = 1
plt.plot(x,y)
S = sigma * np.random.randn(1,10) + mu
normal_pdf(S,mu,sigma)
np.prod(normal_pdf(S,mu,sigma))
def normal_likelihood(S,mu,sigma):
return np.prod(normal_pdf(S,mu,sigma))
start = -5
stop = 5
step = 0.1
L = []
for m in np.arange(start,stop,step):
L.append((m,normal_likelihood(S,m,sigma)))
L = np.asarray(L)
L.shape
plt.plot(L[:,0],L[:,1])
np.argmax(L[:,1])
L[71,0]
mu = 1.123
sigma = 0.123
S = sigma * np.random.randn(1,100) + mu
mu = np.linspace(-4,4,1000)
sigma = np.linspace(-4,4,1000)
L = np.zeros ((mu.shape[0],sigma.shape[0]))
for i in range(mu.shape[0]):
for j in range(sigma.shape[0]):
L[i,j] = normal_likelihood(S,mu[i],sigma[j])
def plot(x): #graphs the pdf
fig, ax = plt.subplots()
im = ax.imshow(x,cmap=plt.get_cmap('cool'))
    plt.show()
plot(L)
np.argmax(L)
np.unravel_index(np.argmax(L), L.shape)
L[639,485]
mu[639]
sigma[485]
np.mean(x)
###Output
_____no_output_____ |
Archieved FP/monev/pkg_ta/scripts/waypoints/EDIT_WAYPOINTS.ipynb | ###Markdown
Waypoints Lurus
###Code
wp_26 = np.load('waypoints/08_09_wp_lurus.npy')
plt.plot(wp_26[:,0], wp_26[:,1], label='26 Agustus 2020')
plt.legend()
plt.xlabel("X (m)")
plt.ylabel("Y (m)")
plt.title('')
plt.show()
# Shift to the new Position
wp_new = np.copy(wp_26)
wp_new[:,0] = wp_new[:,0] - temp[0,0] + x0
wp_new[:,1] = wp_new[:,1] - temp[0,1] + y0
# Align the initial position
plt.plot(wp_26[:,0], wp_26[:,1], label='26 Agustus 2020')
plt.plot(wp_new[:,0], wp_new[:,1], label='31 Agustus 2020')
plt.legend()
plt.xlabel("X (m)")
plt.ylabel("Y (m)")
plt.show()
np.save('waypoints/'+name+'_wp_lurus', wp_new)
###Output
_____no_output_____
###Markdown
Waypoints Belok
###Code
wp_26 = np.load('waypoints/08_09_wp_belok.npy')
plt.plot(wp_26[:,0], wp_26[:,1], label='26 Agustus 2020')
plt.legend()
plt.xlabel("X (m)")
plt.ylabel("Y (m)")
plt.title('')
plt.show()
# Shift to the new Position
wp_new = np.copy(wp_26)
wp_new[:,0] = wp_new[:,0] - temp[0,0] + x0
wp_new[:,1] = wp_new[:,1] - temp[0,1] + y0
# Align the initial position
plt.plot(wp_26[:,0], wp_26[:,1], label='26 Agustus 2020')
plt.plot(wp_new[:,0], wp_new[:,1], label='31 Agustus 2020')
plt.legend()
plt.xlabel("X (m)")
plt.ylabel("Y (m)")
plt.show()
np.save('waypoints/'+name+'_wp_belok', wp_new)
###Output
_____no_output_____
###Markdown
Waypoints S
###Code
wp_26 = np.load('waypoints/08_09_wp_S.npy')
plt.plot(wp_26[:,0], wp_26[:,1], label='26 Agustus 2020')
plt.legend()
plt.xlabel("X (m)")
plt.ylabel("Y (m)")
plt.title('')
plt.show()
# Shift to the new Position
wp_new = np.copy(wp_26)
wp_new[:,0] = wp_new[:,0] - temp[0,0] + x0
wp_new[:,1] = wp_new[:,1] - temp[0,1] + y0
# Align the initial position
plt.plot(wp_26[:,0], wp_26[:,1], label='26 Agustus 2020')
plt.plot(wp_new[:,0], wp_new[:,1], label='31 Agustus 2020')
plt.legend()
plt.xlabel("X (m)")
plt.ylabel("Y (m)")
plt.show()
np.save('waypoints/'+name+'_wp_S', wp_new)
###Output
_____no_output_____ |
Multithreading_speed_up/Speed_Up_Multithreading.ipynb | ###Markdown
Before multithreading
###Code
import requests
from time import time
url_list = [
"https://via.placeholder.com/400",
"https://via.placeholder.com/410",
"https://via.placeholder.com/420",
"https://via.placeholder.com/430",
"https://via.placeholder.com/440",
"https://via.placeholder.com/450",
"https://via.placeholder.com/460",
"https://via.placeholder.com/470",
"https://via.placeholder.com/480",
"https://via.placeholder.com/490",
"https://via.placeholder.com/500",
"https://via.placeholder.com/510",
"https://via.placeholder.com/520",
"https://via.placeholder.com/530",
]
def download_file(url):
html = requests.get(url, stream=True)
return html.status_code
start = time()
for url in url_list:
print(download_file(url))
print(f'Time taken: {time() - start}')
###Output
200
200
200
200
200
200
200
200
200
200
200
200
200
200
Time taken: 13.687922239303589
###Markdown
After Speed up
###Code
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from time import time
url_list = [
"https://via.placeholder.com/400",
"https://via.placeholder.com/410",
"https://via.placeholder.com/420",
"https://via.placeholder.com/430",
"https://via.placeholder.com/440",
"https://via.placeholder.com/450",
"https://via.placeholder.com/460",
"https://via.placeholder.com/470",
"https://via.placeholder.com/480",
"https://via.placeholder.com/490",
"https://via.placeholder.com/500",
"https://via.placeholder.com/510",
"https://via.placeholder.com/520",
"https://via.placeholder.com/530",
]
def download_file(url):
html = requests.get(url, stream=True)
return html.status_code
start = time()
processes = []
with ThreadPoolExecutor(max_workers=10) as executor:
for url in url_list:
processes.append(executor.submit(download_file, url))
for task in as_completed(processes):
print(task.result())
print(f'Time taken: {time() - start}')
###Output
200
200
200
200
200
200
200
200
200
200
200
200
200
200
Time taken: 2.054238796234131
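###Markdown
The same thread-pool pattern can be written more compactly with `executor.map`, which returns results in the order of the inputs. A minimal hedged sketch reusing `download_file` and `url_list` from above:
```python
# Equivalent, more compact form using executor.map (results come back in input order)
from concurrent.futures import ThreadPoolExecutor
from time import time

start = time()
with ThreadPoolExecutor(max_workers=10) as executor:
    for status in executor.map(download_file, url_list):
        print(status)
print(f'Time taken: {time() - start}')
```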
|
Analyse_Twitter_Data/wrangle_act.ipynb | ###Markdown
Project: Wrangling and Analyze DataThis Jupyter notebook contains the complete code and basic documentation of the "Wrangle and Analyse Data" project that is part of Udacity's Data Analyst Nanodegreee Program. There are two other deliverables of the project:- **WeRateDogs Data Wrangle Report** briefly describes our wrangling efforts.- **Dog Breeds Popularity** (aka Act Report) communicates the insights and displays the visualization(s) produced from our wrangled data.
###Code
# Import dependencies
import requests
import os
import json
import tweepy
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Data Gathering(1) The WeRateDogs Twitter archive data (twitter_archive_enhanced.csv) is downloaded directly from a GitHub repository using `pd.read_csv`.
###Code
path_csv = 'https://raw.githubusercontent.com/lustraka/Data_Analysis_Workouts/main/Analyse_Twitter_Data/'
dfa = pd.read_csv(path_csv+'twitter-archive-enhanced.csv')
dfa.head()
###Output
_____no_output_____
###Markdown
(2) The tweet image predictions (image_predictions.tsv) are downloaded from given URL using the `requests` library.
###Code
url_tsv = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv'
r = requests.get(url_tsv)
with open('image-predictions.tsv', 'wb') as file:
file.write(r.content)
dfi = pd.read_csv('image-predictions.tsv', sep='\t')
dfi.head()
###Output
_____no_output_____
###Markdown
(3) Additional data (tweet_json.txt) are gathered via the Twitter API using the `tweepy` library.
###Code
consumer_key = 'hidden'
consumer_secret = 'hidden'
access_token = 'hidden'
access_secret = 'hidden'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
from timeit import default_timer as timer
count = 0
fails_dict = {}
start = timer()
if 'tweet_json.txt' in os.listdir():
os.remove('tweet_json.txt')
with open('tweet_json.txt', 'a') as file:
for tweet_id in dfa.tweet_id.values:
count += 1
if count % 42 == 0:
print(str(count) + ' (' + str(tweet_id), end='): ')
try:
status = api.get_status(tweet_id, tweet_mode='extended')._json
if count % 42 == 0:
print("Success")
file.write(json.dumps(status, ensure_ascii=False)+'\n')
except tweepy.TweepError as e:
if count % 42 == 0:
print('Fail')
fails_dict[tweet_id] = e
pass
        except Exception as e:
            print('Fail', e)
end = timer()
print(f'Elapsed time: {end - start}')
print(fails_dict)
###Output
42 (884441805382717440): Success
84 (876537666061221889): Success
126 (868622495443632128): Success
168 (859851578198683649): Success
210 (852226086759018497): Success
252 (844979544864018432): Success
294 (837820167694528512): Success
336 (832645525019123713): Success
378 (828011680017821696): Success
420 (822244816520155136): Success
462 (817536400337801217): Success
504 (813066809284972545): Success
546 (805826884734976000): Success
588 (799757965289017345): Success
630 (794355576146903043): Success
672 (789960241177853952): Success
714 (784183165795655680): Success
756 (778748913645780993): Success
798 (773191612633579521): Success
840 (767191397493538821): Success
882 (760521673607086080): Success
###Markdown
Data gathered form Twitter API:| Attribute | Type | Description || --- | :-: | --- || id | int | The integer representation of unique identifier for this Tweet || retweet_count | int | Number of times this Tweet has been retweeted. || favorite_count | int | *Nullable*. Indicates approximately how many times this tweet has been liked by Twitter users. |Reference: [Tweepy docs: Tweet Object](https://developer.twitter.com/en/docs/twitter-api/v1/data-dictionary/object-model/tweet)
###Code
df_tweets = []
with open('tweet_json.txt', 'r') as file:
line = file.readline()
while line:
status = json.loads(line)
df_tweets.append({'tweet_id': status['id'], 'retweet_count': status['retweet_count'], 'favorite_count': status['favorite_count']})
line = file.readline()
dft = pd.DataFrame(df_tweets)
dft.head()
###Output
_____no_output_____
###Markdown
Assessing DataKey assumptions:* We only want original ratings (no retweets or replies) that have images. Though there are 5000+ tweets in the dataset, not all are dog ratings and some are retweets.* Assessing and cleaning the entire dataset completely would require a lot of time. Therefore, we will assess and clean 8 quality issues and 3 tidiness issues in this dataset.* The fact that the rating numerators are greater than the denominators does not need to be cleaned. This [unique rating system](http://knowyourmeme.com/memes/theyre-good-dogs-brent) is a big part of the popularity of WeRateDogs.* We will gather the additional tweet data only for tweets in the *twitter_archive_enhanced.csv* dataset. The archive `twitter_archive_enhanced.csv` (alias `dba`)> "I extracted this data programmatically, but I didn't do a very good job. The ratings probably aren't all correct. Same goes for the dog names and probably dog stages (see below for more information on these) too. You'll need to assess and clean these columns if you want to use them for analysis and visualization."
###Code
dfa.sample(15)
dfa.info()
for col in dfa.columns[[10,11,13,14,15,16]]:
print(dfa[col].unique())
###Output
[ 13 12 14 5 17 11 10 420 666 6 15 182 960 0
75 7 84 9 24 8 1 27 3 4 165 1776 204 50
99 80 45 60 44 143 121 20 26 2 144 88]
[ 10 0 15 70 7 11 150 170 20 50 90 80 40 130 110 16 120 2]
['None' 'doggo']
['None' 'floofer']
['None' 'pupper']
['None' 'puppo']
###Markdown
Curated `twitter_archive_enhanced.csv` Info| | Variable | Non-Null | Nunique | Dtype | Notes ||---|----------|----------|---------|-------|-------|| 0 | tweet_id | 2356 | 2356 | int64 | || 1 | in_reply_to_status_id | 78 | 77 | float64 | these tweets are replies || 2 | in_reply_to_user_id | 78 | 31 | float64 | see $\uparrow$ || 3 | timestamp | 2356 | 2356 | object | object $\to$ datetime | | 4 | source | 2356 | 4 | object | || 5 | text | 2356 | 2356 | object | some tweets don't have an image (1) || 6 | retweeted_status_id | 181 | 181 | float64 | these are retweets || 7 | retweeted_status_user_id | 181 | 25 | float64 | see $\uparrow$ || 8 | retweeted_status_timestamp | 181 | 181 | object | see $\uparrow$ || 9 | expanded_urls | 2297 | 2218 | object | missing values || 10 | rating_numerator | 2356 | 40 | int64 | entries with numerator $> 20$ may be incorrect (4a) || 11 | rating_denominator | 2356 | 18 | int64 | entries with denominator $\neq 10$ may be incorrect (4b) || 12 | name | 2356 | 957 | object | incorrect names or missing values (2) || 13 | doggo | 2356 | 2 | object | a value as a column + (3) some misclassified stages|| 14 | floofer | 2356 | 2 | object | see $\uparrow$ || 15 | pupper | 2356 | 2 | object | see $\uparrow$ || 16 | puppo | 2356 | 2 | object | see $\uparrow$ |Source: visual and programmatic assessment```python , Variable, Non-Null (Count), Dtype:dfa.info() Nunique:dfa.nunique() Check unique valuesfor col in dfa.columns[[10,11,13,14,15,16]]: print(dfa[col].unique()) Notes (1) Some tweets don't have an imagedfa.loc[dfa.text.apply(lambda s: 'https://t.co' not in s)].shape[0] [Out] 124```
###Code
# (2a) Incorrect names - begin with a lowercase
import re
print(re.findall(r';([a-z].*?);', ';'.join(dfa.name.unique())))
# (2b) Incorrect names - None
dfa.loc[dfa.name == 'None'].shape[0]
# (3a) Misclassified stages - indicated in the stage but not present in the text
stages = ['doggo', 'pupper', 'puppo', 'floofer']
print('Stage | Total | Misclassified |')
print('-'*35)
for stage in stages:
total = dfa.loc[dfa[stage] == stage].shape[0]
missed = dfa.loc[(dfa[stage] == stage) & (dfa.text.apply(lambda s: stage not in s.lower()))].shape[0]
print(f"{stage.ljust(9)} | {total:5d} | {missed:13d} |")
# (3b) Misclassified stages - not indicated in the stage but is present in the text
stages = ['doggo', 'pupper', 'puppo', 'floofer']
print('Stage | Total | Misclassified |')
print('-'*35)
for stage in stages:
total = dfa.loc[dfa[stage] == stage].shape[0]
missed = dfa.loc[(dfa[stage] != stage) & (dfa.text.apply(lambda s: stage in s.lower()))].shape[0]
print(f"{stage.ljust(9)} | {total:5d} | {missed:13d} |")
###Output
Stage | Total | Misclassified |
-----------------------------------
doggo | 97 | 10 |
pupper | 257 | 26 |
puppo | 30 | 8 |
floofer | 10 | 0 |
###Markdown
Note (4) Ratings where `rating_numerator` $ > 20$ or `rating_denomiator` $\neq 10$Code used:```python Show the whole textpd.options.display.max_colwidth = None (4a) Show tweets with possibly incorrect rating : rating_numerator > 20dfa.loc[dfa.rating_numerator > 20, ['text', 'rating_numerator', 'rating_denominator']] (4b) Show tweets with possibly incorrect rating : rating_denominator != 10dfa.loc[dfa.rating_denominator != 10, ['text', 'rating_numerator', 'rating_denominator']]```In cases where users used float numbers, such as 9.75/10 or 11.27/10, we will use the floor rounding, i.e. 9/10 or 11/10 respectively. We will correct only those rating which were incorrectly identified in the text. Ratings with weird values used in the text are left unchanged cos they're good dogs Brent.Results:
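To make the issue concrete, a hedged sketch of a regular expression that also captures decimal numerators such as 9.75/10 (illustrative only, not part of the cleaning below):
```python
# Capture a rating like "9.75/10" from a tweet's text (made-up example string)
import re
example = "good dogs get 9.75/10"
numerator, denominator = re.search(r'(\d+(?:\.\d+)?)\s*/\s*(\d+)', example).groups()
print(numerator, denominator)  # -> 9.75 10
```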
###Code
# Show the whole text
pd.options.display.max_colwidth = None
# Fill dict with key = index and value = correct rating
incorrect_rating = {313 : '13/10', 340 : '9/10', 763: '11/10', 313 : '13/10', 784 : '14/10', 1165 : '13/10', 1202 : '11/10', 1662 : '10/10', 2335 : '9/10'}
# Indicate tweets with missing rating
missing_rating = [342, 516]
# Show tweet with incorrectly identified rating
dfa.loc[list(incorrect_rating.keys()), ['text', 'rating_numerator', 'rating_denominator']]
###Output
_____no_output_____
###Markdown
The Tweet Image Predictions `image_predictions.tsv`> "A table full of image predictions (the top three only) alongside each tweet ID, image URL, and the image number that corresponded to the most confident prediction (numbered 1 to 4 since tweets can have up to four images)."
###Code
dfi.sample(10)
dfi.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2075 entries, 0 to 2074
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 tweet_id 2075 non-null int64
1 jpg_url 2075 non-null object
2 img_num 2075 non-null int64
3 p1 2075 non-null object
4 p1_conf 2075 non-null float64
5 p1_dog 2075 non-null bool
6 p2 2075 non-null object
7 p2_conf 2075 non-null float64
8 p2_dog 2075 non-null bool
9 p3 2075 non-null object
10 p3_conf 2075 non-null float64
11 p3_dog 2075 non-null bool
dtypes: bool(3), float64(3), int64(2), object(4)
memory usage: 152.1+ KB
###Markdown
Curated Info| | Variable | Non-Null | Nunique | Dtype | Notes ||---|----------|----------|---------|-------|-------|| 0 | tweet_id | 2075 | 2078 | int64 | || 1 | jpg_url | 2075 | 2009 | object | || 2 | img_num | 2075 | 4 | int64 | the image number that corresponded to the most confident prediction|| 3 | p1 | 2075 | 378 | object | prediction || 4 | p1_conf | 2075 | 2006 | float64 | confidence of prediction || 5 | p1_dog | 2075 | 2 | int64 | Is the prediction a breed of dog? : int $\to$ bool || 6 | p2 | 2075 | 405 | object | dtto || 7 | p2_conf | 2075 | 2004 | float64 | dtto || 8 | p2_dog | 2075 | 2 | int64 | dtto || 9 | p3 | 2075 | 408 | object | dtto || 10 | p3_conf | 2075 | 2006 | float64 | dtto || 11 | p3_dog | 2075 | 2 | int64 | dtto |Source: visual and programmatic assessment```python , Variable, Non-Null (Count), Dtype:dfa.info() Nunique:dfa.nunique()``` Additional Data From Twitter API
###Code
dft.sample(10)
dft.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2328 entries, 0 to 2327
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 tweet_id 2328 non-null int64
1 retweet_count 2328 non-null int64
2 favorite_count 2328 non-null int64
dtypes: int64(3)
memory usage: 54.7 KB
###Markdown
Curated Info| | Variable | Non-Null | Nunique | Dtype | Notes ||---|----------|----------|---------|-------|-------|| 0 | tweet_id | 2327 | 2327 | int64 | || 1 | retweet_count | 2327 | 1671 | int64 | || 2 | favorite_count | 2327 | 2006 | int64 | |Source: visual and programmatic assessment```python , Variable, Non-Null (Count), Dtype:dfa.info() Nunique:dfa.nunique()``` Quality issues1. Replies are not original tweets.2. Retweets are not original tweets.3. Some tweets don't have any image4. Some ratings are incorrectly identified5. Some ratings are missing6. Names starting with lowercase are incorrect7. Names with value None are incorrect8. Column timestamp has the dtype object (string) Tidiness issues1. Dogs' stages (doggo, pupper, puppo, floofer) as columns2. Multiple image predictions in one row3. Data in multiple datasets Cleaning DataIn this section, we will clean all of the issues documented above.
###Code
# Make copies of original pieces of data
dfa_clean = dfa.copy() # archive
dfi_clean = dfi.copy() # image predictions
dft_clean = dft.copy() # data from Twitter API
###Output
_____no_output_____
###Markdown
Q1: Replies are not original tweets. Define:- Remove replies from `dfa_clean` dataframe by preserving only observations where `dfa_clean.in_reply_to_status_id.isna()` - Then drop variables *in_reply_to_status_id* and *in_reply_to_user_id*. We don't need them any more. Code
###Code
dfa_clean = dfa_clean.loc[dfa_clean.in_reply_to_status_id.isna()]
print('Check the emptiness of the in_reply_to_status_id (sum should be 0): ', dfa_clean.in_reply_to_status_id.notna().sum())
dfa_clean.drop(columns=['in_reply_to_status_id', 'in_reply_to_user_id'], inplace=True)
###Output
Check the emptiness of the in_reply_to_status_id (sum should be 0): 0
###Markdown
Test
###Code
dfa_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2278 entries, 0 to 2355
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 tweet_id 2278 non-null int64
1 timestamp 2278 non-null object
2 source 2278 non-null object
3 text 2278 non-null object
4 retweeted_status_id 181 non-null float64
5 retweeted_status_user_id 181 non-null float64
6 retweeted_status_timestamp 181 non-null object
7 expanded_urls 2274 non-null object
8 rating_numerator 2278 non-null int64
9 rating_denominator 2278 non-null int64
10 name 2278 non-null object
11 doggo 2278 non-null object
12 floofer 2278 non-null object
13 pupper 2278 non-null object
14 puppo 2278 non-null object
dtypes: float64(2), int64(3), object(10)
memory usage: 284.8+ KB
###Markdown
Q2: Retweets are not original tweets. Define- Remove retweets from `dfa_clean` by preserving only observation where `dfa_clean.retweeted_status_id.isna()`, i.e. empty.- Then drop variables *retweeted_status_id*, *retweeted_status_user_id*, and *retweeted_status_timestamp*. We don't need them any more Code
###Code
dfa_clean = dfa_clean.loc[dfa_clean.retweeted_status_id.isna()]
print('Check the emptiness of the retweeted_status_id (sum should be 0): ', dfa_clean.retweeted_status_id.notna().sum())
dfa_clean.drop(columns=['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], inplace=True)
###Output
Check the emptiness of the retweeted_status_id (sum should be 0): 0
###Markdown
Test
###Code
dfa_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2097 entries, 0 to 2355
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 tweet_id 2097 non-null int64
1 timestamp 2097 non-null object
2 source 2097 non-null object
3 text 2097 non-null object
4 expanded_urls 2094 non-null object
5 rating_numerator 2097 non-null int64
6 rating_denominator 2097 non-null int64
7 name 2097 non-null object
8 doggo 2097 non-null object
9 floofer 2097 non-null object
10 pupper 2097 non-null object
11 puppo 2097 non-null object
dtypes: int64(3), object(9)
memory usage: 213.0+ KB
###Markdown
Q3: Some tweets don't have any image DefineRemove tweets that don't have image from `dfa_clean`. We detect an image by an occurence of the string 'https://t.co' in the *text* variable. Code
###Code
dfa_clean = dfa_clean.loc[dfa_clean.text.apply(lambda s: 'https://t.co' in s)]
###Output
_____no_output_____
###Markdown
Test
###Code
dfa_clean.loc[dfa_clean.text.apply(lambda s: 'https://t.co' not in s)].shape[0]
###Output
_____no_output_____
###Markdown
Q4: Some ratings are incorrectly identified DefineUpdat the incorrect ratings with the correct ones (both numerator and denominator being stored in a dictionary *incorrect_rating* during assessment). Code
###Code
# Some observations could have been removed in previous steps
ratings_to_update = dfa_clean.index.intersection(list(incorrect_rating.keys()))
for rating in ratings_to_update:
dfa_clean.at[rating,'rating_numerator'] = incorrect_rating[rating].split('/')[0]
dfa_clean.at[rating, 'rating_denominator'] = incorrect_rating[rating].split('/')[1]
###Output
_____no_output_____
###Markdown
Test
###Code
# Show the whole text
pd.options.display.max_colwidth = None
# Show tweets
dfa_clean.loc[ratings_to_update, ['text', 'rating_numerator', 'rating_denominator']]
dfa_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2094 entries, 0 to 2355
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 tweet_id 2094 non-null int64
1 timestamp 2094 non-null object
2 source 2094 non-null object
3 text 2094 non-null object
4 expanded_urls 2094 non-null object
5 rating_numerator 2094 non-null int64
6 rating_denominator 2094 non-null int64
7 name 2094 non-null object
8 doggo 2094 non-null object
9 floofer 2094 non-null object
10 pupper 2094 non-null object
11 puppo 2094 non-null object
dtypes: int64(3), object(9)
memory usage: 292.7+ KB
###Markdown
Q5: Some ratings are missing DefineDelete observations with missing rating in `dfa_clean` identified in the variable *missing_rating* during assessment. Code
###Code
# Some observations could have been removed
tweets_to_delete = dfa_clean.index.intersection(missing_rating)
# Delete tweets without rating
dfa_clean.drop(index=tweets_to_delete, inplace=True)
###Output
_____no_output_____
###Markdown
Test
###Code
# Should be empty
dfa_clean.index.intersection(missing_rating)
###Output
_____no_output_____
###Markdown
Q6: Names starting with lowercase are incorrect Define- Identify incorrect names in `dfa_clean` using a regular expression and store them in a list `incorrect_names`. Incorrect names start with a lowercase letter.- Replace incorrect names in `dfa_clean` with an empty string using a user defined function `clean_names(name)`. Code
###Code
# Join all names to one string separated by ';;'
# Find all incorrect names using a regular expresion
incorrect_names = re.findall(r';([a-z].*?);', ';;'.join(dfa_clean.name.unique()))
def clean_names(name):
"""If a name is in a global variable `incorrect_names`,
replace it by empty string,
otherwise return the original name."""
if name in incorrect_names:
return ''
else:
return name
# Apply the clean_names func on the variable 'name'
dfa_clean['name'] = dfa_clean.name.apply(clean_names)
###Output
_____no_output_____
###Markdown
Test
###Code
# Should be empty
print(re.findall(r';([a-z].*?);', ';;'.join(dfa_clean.name.unique())))
###Output
[]
###Markdown
Q7: Names with value None are incorrect DefineReplace names 'None' in `dfa_clean` with an empty string. Code
###Code
dfa_clean['name'] = dfa_clean.name.apply(lambda name: '' if name == 'None' else name)
###Output
_____no_output_____
###Markdown
Test
###Code
# Should be zero
dfa_clean.query('name == "None"').shape[0]
dfa_clean.name.value_counts()[:10]
###Output
_____no_output_____
###Markdown
Q8: Column timestamp has the dtype object (string) DefineConvert variable *timestamp* in `dfa_clean` to datetime. Code
###Code
dfa_clean['timestamp'] = pd.to_datetime(dfa_clean.timestamp)
###Output
_____no_output_____
###Markdown
Test
###Code
dfa_clean.timestamp.dtype
###Output
_____no_output_____
###Markdown
T1: Dogs' stages (doggo, pupper, puppo, floofer) as columns DefineDerive a new variable *stage* from variables *doggo, pupper, puppo, floofer*. Fill an empty string if no stage indicated. Then drop exploited variables. Code
###Code
def get_stage(row):
"""Fill the stage or an empty string (if the stage is not identified)."""
stage = set([row['doggo'], row['pupper'], row['puppo'], row['floofer']])
if len(stage) > 1:
return list(stage.difference({'None'}))[0]
else:
return ''
dfa_clean['stage'] = dfa_clean.apply(get_stage, axis=1)
dfa_clean.drop(columns=['doggo', 'pupper', 'puppo', 'floofer'], inplace=True)
###Output
_____no_output_____
###Markdown
Test
###Code
dfa_clean.stage.value_counts()
dfa_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2093 entries, 0 to 2355
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 tweet_id 2093 non-null int64
1 timestamp 2093 non-null datetime64[ns, UTC]
2 source 2093 non-null object
3 text 2093 non-null object
4 expanded_urls 2093 non-null object
5 rating_numerator 2093 non-null int64
6 rating_denominator 2093 non-null int64
7 name 2093 non-null object
8 stage 2093 non-null object
dtypes: datetime64[ns, UTC](1), int64(3), object(5)
memory usage: 163.5+ KB
###Markdown
T2: Multiple image predictions in one row DefineExtract the most confident prediction of a breed of dog. Drop exploited columns and remove observations without a prediction. Code
###Code
def get_breed(row):
"""Extract the most confident prediction of a breed of dog."""
predictions = [[row['p1'], row['p1_conf'], row['p1_dog']],
[row['p2'], row['p2_conf'], row['p2_dog']],
[row['p3'], row['p3_conf'], row['p3_dog']]]
# Filter predictions of a breed of dog
dogs = list(filter(lambda x: x[2], predictions))
# Sort predictions according to confidence
best = sorted(dogs, key=lambda x: x[1], reverse=True)
# Return the best prediction
if len(best) == 0:
return ''
else:
return str(best[0][0]).replace('_', ' ').title()
dfi_clean['breed'] = dfi_clean.apply(get_breed, axis=1)
dfi_clean.drop(columns=['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], inplace=True)
# Remove tweets without a prediction
dfi_clean = dfi_clean.query('breed != ""')
###Output
_____no_output_____
###Markdown
Test
###Code
dfi_clean.info()
print(sorted(dfi_clean.breed.unique()))
###Output
['Afghan Hound', 'Airedale', 'American Staffordshire Terrier', 'Appenzeller', 'Australian Terrier', 'Basenji', 'Basset', 'Beagle', 'Bedlington Terrier', 'Bernese Mountain Dog', 'Black-And-Tan Coonhound', 'Blenheim Spaniel', 'Bloodhound', 'Bluetick', 'Border Collie', 'Border Terrier', 'Borzoi', 'Boston Bull', 'Bouvier Des Flandres', 'Boxer', 'Brabancon Griffon', 'Briard', 'Brittany Spaniel', 'Bull Mastiff', 'Cairn', 'Cardigan', 'Chesapeake Bay Retriever', 'Chihuahua', 'Chow', 'Clumber', 'Cocker Spaniel', 'Collie', 'Curly-Coated Retriever', 'Dalmatian', 'Dandie Dinmont', 'Doberman', 'English Setter', 'English Springer', 'Entlebucher', 'Eskimo Dog', 'Flat-Coated Retriever', 'French Bulldog', 'German Shepherd', 'German Short-Haired Pointer', 'Giant Schnauzer', 'Golden Retriever', 'Gordon Setter', 'Great Dane', 'Great Pyrenees', 'Greater Swiss Mountain Dog', 'Groenendael', 'Ibizan Hound', 'Irish Setter', 'Irish Terrier', 'Irish Water Spaniel', 'Irish Wolfhound', 'Italian Greyhound', 'Japanese Spaniel', 'Keeshond', 'Kelpie', 'Komondor', 'Kuvasz', 'Labrador Retriever', 'Lakeland Terrier', 'Leonberg', 'Lhasa', 'Malamute', 'Malinois', 'Maltese Dog', 'Mexican Hairless', 'Miniature Pinscher', 'Miniature Poodle', 'Miniature Schnauzer', 'Newfoundland', 'Norfolk Terrier', 'Norwegian Elkhound', 'Norwich Terrier', 'Old English Sheepdog', 'Papillon', 'Pekinese', 'Pembroke', 'Pomeranian', 'Pug', 'Redbone', 'Rhodesian Ridgeback', 'Rottweiler', 'Saint Bernard', 'Saluki', 'Samoyed', 'Schipperke', 'Scotch Terrier', 'Scottish Deerhound', 'Shetland Sheepdog', 'Shih-Tzu', 'Siberian Husky', 'Silky Terrier', 'Soft-Coated Wheaten Terrier', 'Staffordshire Bullterrier', 'Standard Poodle', 'Standard Schnauzer', 'Sussex Spaniel', 'Tibetan Mastiff', 'Tibetan Terrier', 'Toy Poodle', 'Toy Terrier', 'Vizsla', 'Walker Hound', 'Weimaraner', 'Welsh Springer Spaniel', 'West Highland White Terrier', 'Whippet', 'Wire-Haired Fox Terrier', 'Yorkshire Terrier']
###Markdown
T3: Data in multiple datasets DefineMerge archive `dfa_clean`, breed predictions `dfi_clean`, and metrics `dft_clean` into `df_clean` for further analysis and visualization. Code
###Code
df_clean = dfa_clean.merge(dfi_clean, how='inner', on='tweet_id')
df_clean = df_clean.merge(dft_clean, how='inner', on='tweet_id')
###Output
_____no_output_____
###Markdown
Test
###Code
df_clean.info()
# Set the default value for max_colwidth
pd.options.display.max_colwidth = 50
df_clean.head()
###Output
_____no_output_____
###Markdown
Storing DataSave the gathered, assessed, and cleaned master dataset to a CSV file named "twitter_archive_master.csv" and to an SQLite database for further exploration.
###Code
with open('twitter_archive_master.csv', 'w') as file:
df_clean.to_csv(file)
# Store the dataframe for further processing
from sqlalchemy import create_engine
# Create SQLAlchemy engine and empty database
engine = create_engine('sqlite:///weratedogsdata_clean.db')
# Store dataframes in database
df_clean.to_sql('df_clean', engine, index=False)
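# Optional sanity check (a minimal sketch, not part of the original wrangling):
# read the table back from the SQLite database and confirm the row count matches.
# `df_check` is an arbitrary name introduced here only for illustration.
df_check = pd.read_sql('SELECT * FROM df_clean', engine)
print(df_check.shape[0] == df_clean.shape[0])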
###Output
_____no_output_____
###Markdown
Analyzing and Visualizing Data Extended Info for the Cleaned Dataset| | Variable | Non-Null | Nunique | Dtype | Notes ||---|----------|----------|---------|-------|-------|| 0 | tweet_id | 1657 | 1657 | int64 | The Tweet's unique identifier .|| 1 | timestamp | 1657 | 1657 | datetime64[ns, UTC] | Time when this Tweet was created. || 2 | source | 1657 | 3 | object | Utility used to post the Tweet. || 3 | text | 1657 | 1657 | object | The actual text of the status update. || 4 | expanded_urls | 1657 | 1657 | object | The URLs of the Tweet's photos. || 5 | rating_numerator | 1657 | 26 | int64 | The rating numerator extracted from the text. || 6 | rating_denominator | 1657 | 10 | int64 | The rating denominator extracted from the text. || 7 | name | 1657 | 831 | object | The dog's name extracted from the text. || 8 | stage | 1657 | 5 | object | The dog's stage extracted from the text.|| 9 | jpg_url | 1657 | 1657 | object | The URL of the image used to classify the breed of dog. || 10 | img_num | 1657 | 4 | int64 | The image number that corresponded to the most confident prediction. || 11 | breed | 1657 | 113 | object | The most confident classification of the breed of dog predicted from the image. || 12 | retweet_count | 1657 | 1352 | int64 | Number of times this Tweet has been retweeted. || 13 | favorite_count | 1657 | 1561 | int64 | Indicates approximately how many times this Tweet has been liked by Twitter users. |
###Code
df_clean.timestamp.min(), df_clean.timestamp.max()
###Output
_____no_output_____
###Markdown
The cleaned dataset has 1657 observations, starting on November 15th, 2015 when the WeRateDogs Twitter account was launched and ending on August 1st, 2017 when the archive was exported.**Assumptions**:- Variables *rating_numerator, rating_denominator, name,* and *stage* were extracted from the tweet's text. The rating is part of the humorous aspect of the content. There is hardly any value in analysing these variables.- The variable *breed* is inferred from the image using a machine learning algorithm. We can use this variable keeping in mind that there can be some inaccuracies.- The variables *favorite_count* and *retweet_count* reflect the preferences of Twitter users. We can use these variables keeping in mind they come from a non-random sample of the human population. Insight 1: Most Popular Dog NamesThe top 10 most popular dog names in our dataset are:
###Code
print(list(df_clean.name.value_counts(ascending=True)[-11:-1].index)[::-1])
df_clean.name.value_counts(ascending=True)[-11:-1].plot(kind='barh', title='The Top 10 Most Popular Dog Names')
plt.xlabel('Frequency');
###Output
_____no_output_____
###Markdown
Insight 2: Most Popular Dog BreedsThe top 10 most popular dog breeds according to number of tweets.
###Code
df_clean.breed.value_counts(ascending=True)[-10:].plot(kind='barh', title='The Top 10 Most Popular Dog Breeds\naccording to number of tweets')
plt.ylabel('Dog Breed')
plt.xlabel('Number of Tweets');
###Output
_____no_output_____
###Markdown
The top 10 most popular dog breeds according to number of likes:
###Code
df_clean.groupby('breed')['favorite_count'].sum().sort_values().tail(10).plot(kind='barh', title='The Top 10 Most Popular Dog Breeds\naccording to number of likes')
plt.ylabel('Dog Breed')
plt.xlabel('Number of likes (in million)');
###Output
_____no_output_____
###Markdown
The top 10 most popular dog breeds according to the average number of likes per tweet:
###Code
df_clean.groupby('breed')['favorite_count'].mean().sort_values().tail(10).plot(kind='barh', title='The Top 10 Most Popular Dog Breeds\naccording to average number of likes per tweet')
plt.ylabel('Dog Breed')
plt.xlabel('Average Number of Likes');
###Output
_____no_output_____
###Markdown
**Insight 2 Conclusions**- The popularity rank of a dog breed depends on the metric used. The comparison of the number of tweets and the number of likes gives quite similar rankings.- In the comparison of the absolute number of likes (sum) and the average number of likes (mean), the rank of dog breeds differs due to the frequency of tweets. Insight 3: Relation Between Favourite Count and Retweet Count
###Code
df_clean.plot(kind='scatter', x='favorite_count', y='retweet_count', title='The Scatter Plot of Favourite Count vs Retweet Count');
df_clean.plot(kind='scatter', x='favorite_count', y='retweet_count', logx=True, logy=True, title='The Scatter Plot of Favourite Count vs Retweet Count\n(with logarithmic scales)');
import statsmodels.api as sm
df_clean['intercept'] = 1
lm = sm.OLS(df_clean['retweet_count'], df_clean[['intercept', 'favorite_count']])
res = lm.fit()
res.summary()
# Compute the correlation coefficient
np.sqrt(res.rsquared)
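# Optional cross-check (a small sketch): for a single-predictor OLS fit, the square
# root of R-squared equals the absolute value of the Pearson correlation between the
# two variables, so the value below should match the one above up to sign.
print(np.corrcoef(df_clean['favorite_count'], df_clean['retweet_count'])[0, 1])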
df_clean.plot(kind='scatter', x='favorite_count', y='retweet_count', title='The Scatter Plot of Favourite Count vs Retweet Count\nwith the regression line')
fav_min_max = [df_clean.favorite_count.min(), df_clean.favorite_count.max()]
# Draw a regression line using 'res.params'
plt.plot(fav_min_max, [res.params.intercept + res.params.favorite_count*x for x in fav_min_max], color='tab:orange')
plt.xlabel('Number of Likes')
plt.ylabel('Number of Retweets')
plt.show()
###Output
_____no_output_____ |
Experiments/3d_shape_occupancy.ipynb | ###Markdown
###Code
import jax
from jax import random, grad, jit, vmap
from jax.config import config
from jax.lib import xla_bridge
import jax.numpy as np
from jax.experimental import stax
from jax.experimental import optimizers
from livelossplot import PlotLosses
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm as tqdm
import time
import imageio
import json
import os
import numpy as onp
from IPython.display import clear_output
## Random seed
rand_key = random.PRNGKey(0)
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
basedir = '' # base output dir
import trimesh
import pyembree
def as_mesh(scene_or_mesh):
"""
Convert a possible scene to a mesh.
If conversion occurs, the returned mesh has only vertex and face data.
"""
if isinstance(scene_or_mesh, trimesh.Scene):
if len(scene_or_mesh.geometry) == 0:
mesh = None # empty scene
else:
# we lose texture information here
mesh = trimesh.util.concatenate(
tuple(trimesh.Trimesh(vertices=g.vertices, faces=g.faces)
for g in scene_or_mesh.geometry.values()))
else:
assert(isinstance(scene_or_mesh, trimesh.Trimesh))
mesh = scene_or_mesh
return mesh
def recenter_mesh(mesh):
mesh.vertices -= mesh.vertices.mean(0)
mesh.vertices /= np.max(np.abs(mesh.vertices))
mesh.vertices = .5 * (mesh.vertices + 1.)
def load_mesh(mesh_name, verbose=True):
mesh = trimesh.load(mesh_files[mesh_name])
mesh = as_mesh(mesh)
if verbose:
print(mesh.vertices.shape)
recenter_mesh(mesh)
c0, c1 = mesh.vertices.min(0) - 1e-3, mesh.vertices.max(0) + 1e-3
corners = [c0, c1]
if verbose:
print(c0, c1)
print(c1-c0)
print(np.prod(c1-c0))
print(.5 * (c0+c1) * 2 - 1)
test_pt_file = os.path.join(logdir, mesh_name + '_test_pts.npy')
if not os.path.exists(test_pt_file):
if verbose: print('regen pts')
test_pts = np.array([make_test_pts(mesh, corners), make_test_pts(mesh, corners)])
np.save(test_pt_file, test_pts)
else:
if verbose: print('load pts')
test_pts = np.load(test_pt_file)
if verbose: print(test_pts.shape)
return mesh, corners, test_pts
###################
def make_network(num_layers, num_channels):
layers = []
for i in range(num_layers-1):
layers.append(stax.Dense(num_channels))
layers.append(stax.Relu)
layers.append(stax.Dense(1))
return stax.serial(*layers)
input_encoder = jit(lambda x, a, b: (np.concatenate([a * np.sin((2.*np.pi*x) @ b.T),
a * np.cos((2.*np.pi*x) @ b.T)], axis=-1) / np.linalg.norm(a)) if a is not None else (x * 2. - 1.))
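# Note on the encoder above: it maps input points x to random Fourier features
# [a*sin(2*pi*x @ b^T), a*cos(2*pi*x @ b^T)] normalized by ||a||; when no frequencies
# are given (a is None) it simply rescales x from [0, 1] to [-1, 1]. With
# `embedding_size` rows in b, a batch of shape [N, 3] becomes [N, 2*embedding_size].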
trans_t = lambda t : np.array([
[1,0,0,0],
[0,1,0,0],
[0,0,1,t],
[0,0,0,1],
], dtype=np.float32)
rot_phi = lambda phi : np.array([
[1,0,0,0],
[0,np.cos(phi),-np.sin(phi),0],
[0,np.sin(phi), np.cos(phi),0],
[0,0,0,1],
], dtype=np.float32)
rot_theta = lambda th : np.array([
[np.cos(th),0,-np.sin(th),0],
[0,1,0,0],
[np.sin(th),0, np.cos(th),0],
[0,0,0,1],
], dtype=np.float32)
def pose_spherical(theta, phi, radius):
c2w = trans_t(radius)
c2w = rot_phi(phi/180.*np.pi) @ c2w
c2w = rot_theta(theta/180.*np.pi) @ c2w
# c2w = np.array([[-1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]]) @ c2w
return c2w
def get_rays(H, W, focal, c2w):
i, j = np.meshgrid(np.arange(W), np.arange(H), indexing='xy')
dirs = np.stack([(i-W*.5)/focal, -(j-H*.5)/focal, -np.ones_like(i)], -1)
rays_d = np.sum(dirs[..., np.newaxis, :] * c2w[:3,:3], -1)
rays_o = np.broadcast_to(c2w[:3,-1], rays_d.shape)
return np.stack([rays_o, rays_d], 0)
get_rays = jit(get_rays, static_argnums=(0, 1, 2,))
#########
def render_rays_native_hier(params, ab, rays, corners, near, far, N_samples, N_samples_2, clip): #, rand=False):
rays_o, rays_d = rays[0], rays[1]
c0, c1 = corners
th = .5
# Compute 3D query points
z_vals = np.linspace(near, far, N_samples)
pts = rays_o[...,None,:] + rays_d[...,None,:] * z_vals[...,:,None]
# Run network
alpha = jax.nn.sigmoid(np.squeeze(apply_fn(params, input_encoder(.5 * (pts + 1), *ab))))
if clip:
mask = np.logical_or(np.any(.5 * (pts + 1) < c0, -1), np.any(.5 * (pts + 1) > c1, -1))
alpha = np.where(mask, 0., alpha)
alpha = np.where(alpha > th, 1., 0)
trans = 1.-alpha + 1e-10
trans = np.concatenate([np.ones_like(trans[...,:1]), trans[...,:-1]], -1)
weights = alpha * np.cumprod(trans, -1)
depth_map = np.sum(weights * z_vals, -1)
acc_map = np.sum(weights, -1)
# Second pass to refine isosurface
z_vals = np.linspace(-1., 1., N_samples_2) * .01 + depth_map[...,None]
pts = rays_o[...,None,:] + rays_d[...,None,:] * z_vals[...,:,None]
# Run network
alpha = jax.nn.sigmoid(np.squeeze(apply_fn(params, input_encoder(.5 * (pts + 1), *ab))))
if clip:
# alpha = np.where(np.any(np.abs(pts) > 1, -1), 0., alpha)
mask = np.logical_or(np.any(.5 * (pts + 1) < c0, -1), np.any(.5 * (pts + 1) > c1, -1))
alpha = np.where(mask, 0., alpha)
alpha = np.where(alpha > th, 1., 0)
trans = 1.-alpha + 1e-10
trans = np.concatenate([np.ones_like(trans[...,:1]), trans[...,:-1]], -1)
weights = alpha * np.cumprod(trans, -1)
depth_map = np.sum(weights * z_vals, -1)
acc_map = np.sum(weights, -1)
return depth_map, acc_map
render_rays = jit(render_rays_native_hier, static_argnums=(3,4,5,6,7,8))
@jit
def make_normals(rays, depth_map):
rays_o, rays_d = rays
pts = rays_o + rays_d * depth_map[...,None]
dx = pts - np.roll(pts, -1, axis=0)
dy = pts - np.roll(pts, -1, axis=1)
normal_map = np.cross(dx, dy)
normal_map = normal_map / np.maximum(np.linalg.norm(normal_map, axis=-1, keepdims=True), 1e-5)
return normal_map
def render_mesh_normals(mesh, rays):
origins, dirs = rays.reshape([2,-1,3])
origins = origins * .5 + .5
dirs = dirs * .5
z = mesh.ray.intersects_first(origins, dirs)
pic = onp.zeros([origins.shape[0],3])
pic[z!=-1] = mesh.face_normals[z[z!=-1]]
pic = np.reshape(pic, rays.shape[1:])
return pic
def uniform_bary(u):
su0 = np.sqrt(u[..., 0])
b0 = 1. - su0
b1 = u[..., 1] * su0
return np.stack([b0, b1, 1. - b0 - b1], -1)
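# Note: the square-root trick above turns uniform samples u in [0, 1]^2 into
# barycentric coordinates (b0, b1, b2), which yields points distributed uniformly
# over a triangle once combined with the face vertices in get_normal_batch below.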
def get_normal_batch(mesh, bsize):
batch_face_inds = np.array(onp.random.randint(0, mesh.faces.shape[0], [bsize]))
batch_barys = np.array(uniform_bary(onp.random.uniform(size=[bsize, 2])))
batch_faces = mesh.faces[batch_face_inds]
batch_normals = mesh.face_normals[batch_face_inds]
batch_pts = np.sum(mesh.vertices[batch_faces] * batch_barys[...,None], 1)
return batch_pts, batch_normals
def make_test_pts(mesh, corners, test_size=2**18):
c0, c1 = corners
test_easy = onp.random.uniform(size=[test_size, 3]) * (c1-c0) + c0
batch_pts, batch_normals = get_normal_batch(mesh, test_size)
test_hard = batch_pts + onp.random.normal(size=[test_size,3]) * .01
return test_easy, test_hard
gt_fn = lambda queries, mesh : mesh.ray.contains_points(queries.reshape([-1,3])).reshape(queries.shape[:-1])
embedding_size = 256
embedding_method = 'gaussian'
embedding_param = 12.
embed_params = [embedding_method, embedding_size, embedding_param]
init_fn, apply_fn = make_network(8, 256)
N_iters = 10000
batch_size = 64*64*2 * 4
lr = 5e-4
step = optimizers.exponential_decay(lr, 5000, .1)
R = 2.
c2w = pose_spherical(90. + 10 + 45, -30., R)
N_samples = 64
N_samples_2 = 64
H = 180
W = H
focal = H * .9
rays = get_rays(H, W, focal, c2w[:3,:4])
render_args_lr = [get_rays(H, W, focal, c2w[:3,:4]), None, R-1, R+1, N_samples, N_samples_2, True]
N_samples = 256
N_samples_2 = 256
H = 512
W = H
focal = H * .9
rays = get_rays(H, W, focal, c2w[:3,:4])
render_args_hr = [get_rays(H, W, focal, c2w[:3,:4]), None, R-1, R+1, N_samples, N_samples_2, True]
def run_training(embed_params, mesh, corners, test_pts, render_args_lr, name=''):
validation_pts, testing_pts = test_pts
N = 256
x_test = np.linspace(0.,1.,N, endpoint=False) * 1.
x_test = np.stack(np.meshgrid(*([x_test]*2), indexing='ij'), -1)
queries_plot = np.concatenate([x_test, .5 + np.zeros_like(x_test[...,0:1])], -1)
embedding_method, embedding_size, embedding_scale = embed_params
c0, c1 = corners
if embedding_method == 'gauss':
print('gauss bvals')
bvals = onp.random.normal(size=[embedding_size,3]) * embedding_scale
if embedding_method == 'posenc':
print('posenc bvals')
bvals = 2.**np.linspace(0,embedding_scale,embedding_size//3) - 1
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
if embedding_method == 'basic':
print('basic bvals')
bvals = np.eye(3)
if embedding_method == 'none':
print('NO abvals')
avals = None
bvals = None
else:
avals = np.ones_like(bvals[:,0])
ab = (avals, bvals)
x_enc = input_encoder(np.ones([1,3]), avals, bvals)
print(x_enc.shape)
_, net_params = init_fn(rand_key, (-1, x_enc.shape[-1]))
opt_init, opt_update, get_params = optimizers.adam(step)
opt_state = opt_init(net_params)
@jit
def network_pred(params, inputs):
return jax.nn.sigmoid(np.squeeze(apply_fn(params, input_encoder(inputs, *ab))))
@jit
def loss_fn(params, inputs, z):
x = (np.squeeze(apply_fn(params, input_encoder(inputs, *ab))[...,0]))
loss_main = np.mean(np.maximum(x, 0) - x * z + np.log(1 + np.exp(-np.abs(x))))
return loss_main
@jit
def step_fn(i, opt_state, inputs, outputs):
params = get_params(opt_state)
g = grad(loss_fn)(params, inputs, outputs)
return opt_update(i, g, opt_state)
psnrs = []
losses = []
tests = [[],[]]
xs = []
gt_val = [gt_fn(test, mesh) for test in validation_pts]
for i in tqdm(range(N_iters+1)):
inputs = onp.random.uniform(size=[batch_size, 3]) * (c1-c0) + c0
opt_state = step_fn(i, opt_state, inputs, gt_fn(inputs, mesh))
if i%100==0:
clear_output(wait=True)
inputs = queries_plot
outputs = gt_fn(inputs, mesh)
losses.append(loss_fn(get_params(opt_state), inputs, outputs))
pred = network_pred(get_params(opt_state), inputs)
psnrs.append(-10.*np.log10(np.mean(np.square(pred-outputs))))
xs.append(i)
slices = [outputs, pred, np.abs(pred - outputs)]
renderings = list(render_rays(get_params(opt_state), ab, *render_args_lr))
renderings.append(make_normals(render_args_lr[0], renderings[0]) * .5 + .5)
for to_show in [slices, renderings]:
L = len(to_show)
plt.figure(figsize=(6*L,6))
for i, z in enumerate(to_show):
plt.subplot(1,L,i+1)
plt.imshow(z)
plt.colorbar()
plt.show()
plt.figure(figsize=(25,4))
plt.subplot(151)
plt.plot(xs, psnrs)
plt.subplot(152)
plt.plot(xs, np.log10(np.array(losses)))
for j, test in enumerate(validation_pts):
full_pred = network_pred(get_params(opt_state), test)
# outputs = gt_fn(test, mesh)
outputs = gt_val[j]
val_iou = np.logical_and(full_pred > .5, outputs > .5).sum() / np.logical_or(full_pred > .5, outputs > .5).sum()
tests[j].append(val_iou)
plt.subplot(153)
for t in tests:
plt.plot(np.log10(1-np.array(t)))
plt.subplot(154)
for t in tests[:1]:
plt.plot(np.log10(1-np.array(t)))
for k in tests_all:
plt.plot(np.log10(1-tests_all[k][0]), label=k + ' easy')
plt.legend()
plt.subplot(155)
for t in tests[1:]:
plt.plot(np.log10(1-np.array(t)))
for k in tests_all:
plt.plot(np.log10(1-tests_all[k][1]), label=k + ' hard')
plt.legend()
plt.show()
print(name, i, tests[0][-1], tests[1][-1])
scores = []
for i, test in enumerate(testing_pts):
full_pred = network_pred(get_params(opt_state), test)
outputs = gt_fn(test, mesh)
val_iou = np.logical_and(full_pred > .5, outputs > .5).sum() / np.logical_or(full_pred > .5, outputs > .5).sum()
scores.append(val_iou)
meta_run = [
(get_params(opt_state), ab),
np.array(tests),
scores,
renderings,
]
return meta_run
# Put your mesh files here
mesh_files = {
'dragon' : 'dragon_obj.obj',
'armadillo' : 'Armadillo.ply',
'buddha' : 'buddha_obj.obj',
'lucy' : 'Alucy.obj',
}
logdir = os.path.join(basedir, 'occupancy_logs')
os.makedirs(logdir, exist_ok=True)
N_iters = 10000
tests_all = {}
out_all = {}
scores = {}
mesh_names = ['dragon', 'buddha', 'armadillo', 'lucy']
embed_tasks = [
['gauss', 256, 12.],
['posenc', 256, 6.],
['basic', None, None],
['none', None, None],
]
expdir = os.path.join(logdir, 'full_runs')
os.makedirs(expdir, exist_ok = True)
print(expdir)
for mesh_name in mesh_names:
mesh, corners, test_pts = load_mesh(mesh_name)
render_args_lr[1] = corners
render_args_hr[1] = corners
mesh_normal_map = render_mesh_normals(mesh, render_args_hr[0])
plt.imshow(mesh_normal_map * .5 + .5)
plt.show()
for embed_params in embed_tasks:
embedding_method, embedding_size, embedding_param = embed_params
expname = f'{mesh_name}_{embedding_method}_{embedding_param}'
print(expname)
out = run_training(embed_params, mesh, corners, test_pts, render_args_lr, expname)
tests_all[expname] = out[1]
out_all[expname] = out
rays = render_args_hr[0]
rets = []
hbatch = 16
for i in tqdm(range(0, H, hbatch)):
rets.append(render_rays(*out[0], rays[:,i:i+hbatch], *render_args_hr[1:]))
depth_map, acc_map = [np.concatenate([r[i] for r in rets], 0) for i in range(2)]
normal_map = make_normals(rays, depth_map)
normal_map = (255 * (.5 * normal_map + .5)).astype(np.uint8)
imageio.imsave(os.path.join(expdir, expname + '.png'), normal_map)
np.save(os.path.join(expdir, expname + '_netparams.npy'), out[0])
scores[expname] = out[2]
with open(os.path.join(expdir, 'scores.txt'), 'w') as f:
f.write(str(scores))
with open(os.path.join(expdir, 'scores_json.txt'), 'w') as f:
json.dump({k : onp.array(scores[k]).tolist() for k in scores}, f, indent=4)
###Output
_____no_output_____ |
docs/tutorials/day4.ipynb | ###Markdown
Day 4: Passport Processing ProblemNoteYou arrive at the airport only to realize that you grabbed your North Pole Credentials instead of your passport. While these documents are extremely similar, North Pole Credentials aren't issued by a country and therefore aren't actually valid documentation for travel in most of the world.It seems like you're not the only one having problems, though; a very long line has formed for the automatic passport scanners, and the delay could upset your travel itinerary.Due to some questionable network security, you realize you might be able to solve both of these problems at the same time.The automatic passport scanners are slow because they're having trouble detecting which passports have all required fields. The expected fields are as follows:```byr (Birth Year)iyr (Issue Year)eyr (Expiration Year)hgt (Height)hcl (Hair Color)ecl (Eye Color)pid (Passport ID)cid (Country ID)```Passport data is validated in batch files (your puzzle input). Each passport is represented as a sequence of key:value pairs separated by spaces or newlines. Passports are separated by blank lines.Here is an example batch file containing four passports:```ecl:gry pid:860033327 eyr:2020 hcl:fffffdbyr:1937 iyr:2017 cid:147 hgt:183cmiyr:2013 ecl:amb cid:350 eyr:2023 pid:028048884hcl:cfa07d byr:1929hcl:ae17e1 iyr:2013eyr:2024ecl:brn pid:760753108 byr:1931hgt:179cmhcl:cfa07d eyr:2025 pid:166559648iyr:2011 ecl:brn hgt:59in```The first passport is valid - all eight fields are present. The second passport is invalid - it is missing hgt (the Height field).The third passport is interesting; the only missing field is cid, so it looks like data from North Pole Credentials, not a passport at all! Surely, nobody would mind if you made the system temporarily ignore missing cid fields. Treat this "passport" as valid.The fourth passport is missing two fields, cid and byr. Missing cid is fine, but missing any other field is not, so this passport is invalid.According to the above rules, your improved system would report 2 valid passports.Count the number of valid passports - those that have all required fields. Treat cid as optional. In your batch file, how many passports are valid?https://adventofcode.com/2020/day/4 Solution 1> Author: Thรฉo Alves Da Costa Tip Here we will python ``sets`` to find if we have all the keysIt's an accelerated way to compute the difference between two list of values and avoiding a costly double for loop
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Solving the example
###Code
x = """
ecl:gry pid:860033327 eyr:2020 hcl:#fffffd
byr:1937 iyr:2017 cid:147 hgt:183cm
iyr:2013 ecl:amb cid:350 eyr:2023 pid:028048884
hcl:#cfa07d byr:1929
hcl:#ae17e1 iyr:2013
eyr:2024
ecl:brn pid:760753108 byr:1931
hgt:179cm
hcl:#cfa07d eyr:2025 pid:166559648
iyr:2011 ecl:brn hgt:59in
"""
text_array = x.strip().split("\n\n")
text_array
def passport_to_dict(x):
values = x.replace("\n"," ").split(" ")
d = {}
for value in values:
k,v = value.split(":")
d[k] = v
return d
passports = [passport_to_dict(x) for x in text_array]
mandatory_keys = ["byr","iyr","eyr","hgt","hcl","ecl","pid"]
optional_keys = ["cid"]
def is_passport_valid(x):
return set(mandatory_keys).issubset(set(x.keys()))
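# Tiny illustration of the set check above (hypothetical passport dict):
# {'byr', 'iyr', 'eyr', 'hgt', 'hcl', 'ecl', 'pid'}.issubset(
#     {'byr', 'iyr', 'eyr', 'hgt', 'hcl', 'ecl', 'pid', 'cid'}) -> True,
# while a passport missing e.g. 'hgt' fails the issubset test and is counted as invalid.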
def count_valid(passports):
count = 0
for passport in passports:
count += int(is_passport_valid(passport))
return count
count_valid(passports)
###Output
_____no_output_____
###Markdown
Writing the final solution function
###Code
def solve_problem(text_input: str) -> int:
"""Solve the day 4 problem using other helper functions
"""
text_array = text_input.strip().split("\n\n")
passports = [passport_to_dict(x) for x in text_array]
return count_valid(passports)
###Output
_____no_output_____
###Markdown
Solving the final solution
###Code
text_input = open("inputs/day4.txt","r").read()
print(text_input[:500])
solve_problem(text_input)
###Output
_____no_output_____ |
tarea_02_Andres_Riveros/tarea_02_Andres_Riveros.ipynb | ###Markdown
Assignment Nº02 Instructions1.- Fill in your personal data (name and USM student ID) in the following cell.**Name**: Andrés Riveros Neira**Student ID (Rol)**: 201710505-42.- You must push this file with your changes to your personal course repository, including data, images, scripts, etc.3.- The following will be evaluated:- Solutions- Code- That Binder is properly configured.- When pressing `Kernel -> Restart Kernel and Run All Cells`, all cells must run without errors. I.- Digit classificationIn this lab we will work on recognizing a digit from an image.  The goal is to make the best possible prediction for each image from the data. To do so, the classic steps of a _Machine Learning_ project are required, such as descriptive statistics, visualization and preprocessing. * You are asked to fit at least three classification models: * Logistic regression * K-Nearest Neighbours * One or more algorithms of your choice [link](https://scikit-learn.org/stable/supervised_learning.htmlsupervised-learning) (it is mandatory to choose an _estimator_ that has at least one hyperparameter). * For models that have hyperparameters, it is mandatory to search for the best one(s) with some technique available in `scikit-learn` ([see more](https://scikit-learn.org/stable/modules/grid_search.htmltuning-the-hyper-parameters-of-an-estimator)).* For each model, _Cross Validation_ with 10 _folds_ must be performed using the training data in order to determine a confidence interval for the model's _score_.* Make a prediction with each of the three models on the _test_ data and obtain the _score_. * Analyze their error metrics (**accuracy**, **precision**, **recall**, **f-score**) Data explorationNext, the dataset to be used is loaded through the `datasets` sub-module of `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets,preprocessing
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a dataframe named `digits` is created from the data in `digits_dict` so that it has 65 columns, the first 64 corresponding to the grayscale representation of the image (0 = white, larger values = darker) and the last one corresponding to the digit (`target`), named _target_.
###Code
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Exercise 1**Exploratory analysis:** Perform your exploratory analysis, don't forget anything! Remember, each analysis should answer a question.Some suggestions:* How are the data distributed?* How much memory am I using?* What type of data is it?* How many records are there per class?* Are there records that do not match your prior knowledge of the data?
###Code
digits.describe(include='all')
digits.info(memory_usage='deep')
int(digits.describe(include='all').iloc[0,0])
###Output
_____no_output_____
###Markdown
How are the data distributed?__A__: the dataset consists of 65 columns, the first 64 corresponding to the grayscale representation of the image and the last one corresponding to the digit (target), named target.How much memory am I using?, What type of data is it?__A__: From what is shown above, the data take up 456.4 KB of memory and the data type is int32How many records are there per class?, Are there records that do not match your prior knowledge of the data?__A__: There are 1797 records in total and there are no NaN values, i.e., the data match what was expected. Exercise 2**Visualization:** To visualize the data we will use the `imshow` method from `matplotlib`. It is necessary to convert the array from dimensions (1,64) to (8,8) so that the image is square and the digit can be distinguished. We will also overlay the label corresponding to the digit, using the `text` method. This will allow us to compare the generated image with the label associated with the values. We will do this for the first 25 records in the file.
###Code
digits_dict["images"][0]
###Output
_____no_output_____
###Markdown
Visualize images of the digits using the `images` key of `digits_dict`. Suggestion: Use `plt.subplots` and the `imshow` method. You can make a grid of several images at the same time!
###Code
#Visualizacion de imagenes
nx, ny = 5, 5
fig, axs = plt.subplots(nx, ny, figsize=(12, 12))
for j in range(5):
for i in range(5):
axs[i,j].imshow(digits_dict["images"][i+j],cmap='Greys')
###Output
_____no_output_____
###Markdown
Exercise 3**Machine Learning**: In this part you must train the different models chosen from the `sklearn` library. For each model, you must carry out the following steps:* **train-test** * Create training and test sets (you determine the appropriate proportions). * Print the length of the training and test sets. * **model**: * Instantiate the target model from the sklearn library. * *Hyperparameters*: Use `sklearn.model_selection.GridSearchCV` to obtain the best estimate of the target model's parameters.* **Metrics**: * Plot the confusion matrix. * Analyze error metrics.__Questions to answer:__* Which model is best based on its metrics?* Which model takes the least time to fit?* Which model do you choose?
###Code
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print('Separando informacion:\n')
print('numero de filas data original : ',len(X))
print('numero de filas train set : ',len(X_train))
print('numero de filas test set : ',len(X_test))
from time import time
###Output
_____no_output_____
###Markdown
Logistic regression
###Code
from sklearn.linear_model import LogisticRegression
rlog =LogisticRegression()
rlog.fit(X_train, y_train)
from sklearn.model_selection import GridSearchCV,cross_val_score
param_grid = {'C':[1, 2,3,4,5,7,8,9,10],'max_iter':[100,110,130],'penalty':['l1', 'l2', 'elasticnet', 'none']} #parametros a alterar
tgs_in=time()
gs = GridSearchCV(estimator=rlog,
param_grid=param_grid,
scoring='accuracy',
cv=5,
n_jobs=-1)
gs = gs.fit(X_train, y_train)
tgs_fin=time()
print('mejor score:')
print(gs.best_score_)
print()
print('mejores parametros:')
print(gs.best_params_)
print('Tiempo de ejecuciรณn rlog:\n')
print(tgs_fin-tgs_in)
rlog_better = gs.best_estimator_
rlog_better.fit(X_train, y_train) #rlog mejorado
print('Precisiรณn: {0:.3f}'.format(rlog_better.score(X_test, y_test)))
from metrics_classification import *
from sklearn.metrics import confusion_matrix
y_true = list(y_test)
y_pred = list(rlog_better.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas rlog:")
print("")
print(df_metrics)
rlog.get_params() #parametros rlog
###Output
_____no_output_____
###Markdown
K-Nearest Neighbours
###Code
from sklearn.neighbors import KNeighborsClassifier
knn =KNeighborsClassifier()
knn.fit(X_train, y_train)
knn.get_params()
param_grid_1 = {'n_neighbors':[1, 2,3,4,5,7,8,9,10],'p':[1,2,3,4,5]}
tgs1_in=time()
gs_1 = GridSearchCV(estimator=knn,
param_grid=param_grid_1,
scoring='accuracy',
cv=5,
n_jobs=-1)
gs_1 = gs_1.fit(X_train, y_train)
tgs1_fin=time()
print('mejor score:')
print(gs_1.best_score_)
print()
print('mejores parametros:')
print(gs_1.best_params_)
knn_better = gs_1.best_estimator_
knn_better.fit(X_train, y_train) #knn mejorado
print('Precisiรณn: {0:.3f}'.format(knn_better.score(X_test, y_test)))
print('Tiempo de ejecuciรณn knn:\n')
print(tgs1_fin-tgs1_in)
from metrics_classification import *
from sklearn.metrics import confusion_matrix
y_true = list(y_test)
y_pred = list(knn_better.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas knn:")
print("")
print(df_metrics)
###Output
Matriz de confusion:
[[33 0 0 0 0 0 0 0 0 0]
[ 0 28 0 0 0 0 0 0 0 0]
[ 0 0 33 0 0 0 0 0 0 0]
[ 0 0 0 34 0 0 0 0 0 0]
[ 0 0 0 0 46 0 0 0 0 0]
[ 0 0 0 0 0 46 1 0 0 0]
[ 0 0 0 0 0 0 35 0 0 0]
[ 0 0 0 0 0 0 0 33 0 1]
[ 0 1 0 0 0 0 0 0 29 0]
[ 0 0 0 1 1 1 0 0 0 37]]
Metricas knn:
accuracy recall precision fscore
0 0.9833 0.9841 0.984 0.9839
###Markdown
SVC
###Code
from sklearn import svm
svc=svm.SVC(probability=True)
svc.fit(X_train, y_train)
svc.get_params() #parametros svc
param_grid_2 = {'C':[1,2,3,4,5,7,8,9,10],'gamma':['scale', 'auto'], 'decision_function_shape':['ovo', 'ovr']} #parametros para alterar
tgs2in=time()
gs_2 = GridSearchCV(estimator=svc,
param_grid=param_grid_2,
scoring='accuracy',
cv=5,
n_jobs=-1)
gs_2 = gs_2.fit(X_train, y_train)
tgs2fin=time()
print('mejor score:')
print(gs_2.best_score_)
print()
print('mejores parametros:')
print(gs_2.best_params_)
svc_better = gs_2.best_estimator_
svc_better.fit(X_train, y_train) #svc mejorado
print('Precisiรณn: {0:.3f}'.format(svc_better.score(X_test, y_test)))
print('Tiempo de ejecuciรณn svc:\n')
print(tgs2fin-tgs2in)
y_true = list(y_test)
y_pred = list(svc_better.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas svc:")
print("")
print(df_metrics)
###Output
Matriz de confusion:
[[33 0 0 0 0 0 0 0 0 0]
[ 0 28 0 0 0 0 0 0 0 0]
[ 0 0 33 0 0 0 0 0 0 0]
[ 0 0 0 33 0 1 0 0 0 0]
[ 0 0 0 0 46 0 0 0 0 0]
[ 0 0 0 0 0 46 1 0 0 0]
[ 0 0 0 0 0 0 35 0 0 0]
[ 0 0 0 0 0 0 0 33 0 1]
[ 0 0 0 0 0 1 0 0 29 0]
[ 0 0 0 0 0 0 0 1 0 39]]
Metricas svc:
accuracy recall precision fscore
0 0.9861 0.9862 0.9876 0.9868
###Markdown
__Answer__: From the results obtained, we see that the SVC (Support Vector Machine) model achieves the best metrics overall compared to the other models; however, it also has the longest fitting time, while the fastest model to fit was K-Nearest Neighbours, which still obtained metric values close to those of SVC. Thus, the chosen model is SVC, since more emphasis is placed on achieving better metric values than on fitting time, which is not that far from the other models. Exercise 4__Understanding the model:__ Taking into account the best model found in `Exercise 3`, you must thoroughly understand and interpret the results and plots associated with the model under study. To do so, you must address the following points: * **Cross validation**: using **cv** (with n_fold = 10), obtain a kind of "confidence interval" for one of the metrics studied in class: * $\mu \pm \sigma$ = mean $\pm$ standard deviation * **Validation curve**: Replicate the example in the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.htmlsphx-glr-auto-examples-model-selection-plot-validation-curve-py) but with the appropriate model, parameters and metric. Draw conclusions from the plot. * **AUC-ROC curve**: Replicate the example in the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlsphx-glr-auto-examples-model-selection-plot-roc-py) but with the appropriate model, parameters and metric. Draw conclusions from the plot.
###Code
#cross validation
precision = cross_val_score(estimator=svc_better,
X=X_train,
y=y_train,
cv=10)
precision = [round(x,2) for x in precision]
print('Precisiones: {} '.format(precision))
print('Precision promedio: {0: .3f} +/- {1: .3f}'.format(np.mean(precision),
np.std(precision)))
#curva de aprendizaje
from sklearn.model_selection import learning_curve
train_sizes, train_scores, test_scores = learning_curve(
estimator=svc_better,
X=X_train,
y=y_train,
train_sizes=np.linspace(0.1, 1.0, 20),
cv=5,
n_jobs=-1
)
# calculo de metricas
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean, color='r', marker='o', markersize=5,
label='entrenamiento')
plt.fill_between(train_sizes, train_mean + train_std,
train_mean - train_std, alpha=0.15, color='r')
plt.plot(train_sizes, test_mean, color='b', linestyle='--',
marker='s', markersize=5, label='evaluacion')
plt.fill_between(train_sizes, test_mean + test_std,
test_mean - test_std, alpha=0.15, color='b')
plt.grid()
plt.title('Curva de aprendizaje')
plt.legend(loc='center left', bbox_to_anchor=(1.25, 0.5), ncol=1)
plt.xlabel('Cant de ejemplos de entrenamiento')
plt.ylabel('Precision')
plt.show()
#curva de validaciรณn
from sklearn.model_selection import validation_curve
param_range = np.logspace(-6, -1, 5)
train_scores, test_scores = validation_curve(
svc_better, X_train, y_train,
param_name="gamma",
param_range=param_range,
scoring="accuracy", n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with SVM")
plt.xlabel(r"$\gamma$")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
lw = 2
plt.semilogx(param_range, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.show()
###Output
_____no_output_____
###Markdown
__Answer:__ We see that up to a value of the model's gamma parameter close to 0.0001 the cross-validation score is quite similar to the training score; however, if gamma takes a larger value, overfitting occurs, given the progressive separation between the two curves.
###Code
#curva AUC-ROC
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize
# funcion para graficar curva roc
def plot_roc_curve(fpr, tpr):
plt.figure(figsize=(9,4))
plt.plot(fpr, tpr, color='orange', label='ROC')
plt.plot([0, 1], [0, 1], color='darkblue', linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic (ROC) Curve')
plt.legend()
plt.show()
X_auc=X
y_auc=[]
for i in range(10): #binarizar targets
y_auc.append(np.array(pd.Series(y).apply(lambda x: 1 if x ==i else 0)))
# split dataset
X_auc_train, X_auc_test, y1_train, y1_test = train_test_split(X_auc, y_auc[0], test_size=0.3, random_state = 2)
# ajustar modelo
svc_better.fit(X_auc_train,y1_train)
probs = svc_better.predict_proba(X_auc_test) # predecir probabilidades para X_test
probs_tp = probs[:, 1] # mantener solo las probabilidades de la clase positiva
auc = roc_auc_score(y1_test, probs_tp) # calcular score AUC
print('AUC: %.2f' % auc)
# calcular curva ROC
fpr, tpr, thresholds = roc_curve(y1_test, probs_tp) # obtener curva ROC
plot_roc_curve(fpr, tpr)
###Output
_____no_output_____
###Markdown
__Answer:__ Looking at the plot, the area under the ROC curve is clearly close to 1, which supports the effectiveness of the chosen model. Exercise 5__Dimensionality reduction:__ Taking into account the best model found in `Exercise 3`, you must perform a dimensionality reduction of the dataset. To do so, you must approach the problem using the two criteria seen in class: * **Feature selection*** **Feature extraction**__Questions to answer:__Once the dimensionality reduction has been carried out, you must produce some comparative statistics and plots between the original dataset and the new dataset (dataset size, model fitting time, etc.) Feature selection
###Code
# Selecciรณn de atributos
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
# Separamos las columnas objetivo
x_training = digits.drop(['target',], axis=1)
y_training = digits['target']
# Aplicando el algoritmo univariante de prueba F.
k = 15 # nรบmero de atributos a seleccionar
columnas = list(x_training.columns.values)
seleccionadas = SelectKBest(f_classif, k=k).fit(x_training, y_training)
catrib = seleccionadas.get_support()
atributos = [columnas[i] for i in list(catrib.nonzero()[0])] #calculo de columnas de atributos
atributos
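# Optional inspection (a sketch beyond what the task asks for): after fitting,
# SelectKBest exposes per-column F-scores, which shows how the k selected
# attributes rank relative to the rest.
print(pd.Series(seleccionadas.scores_, index=columnas).sort_values(ascending=False).head(k))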
###Output
_____no_output_____
###Markdown
Feature extraction
###Code
#extracciรณn de atributos (PCA)
from sklearn.preprocessing import StandardScaler
features = atributos
x = digits.loc[:, features].values
y = digits.loc[:, ['target']].values
x = StandardScaler().fit_transform(x)
# ajustar modelo
from sklearn.decomposition import PCA
pca = PCA(n_components=15)
principalComponents = pca.fit_transform(x)
# graficar varianza por componente
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
columns = ['PC1', 'PC2', 'PC3', 'PC4','PC5','PC1', 'PC2', 'PC3', 'PC4','PC5','PC1', 'PC2', 'PC3', 'PC4','PC15']
plt.figure(figsize=(12,4))
plt.bar(x= range(1,16), height=percent_variance, tick_label=columns)
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.show()
# graficar varianza por la suma acumulada de los componente
percent_variance_cum = np.cumsum(percent_variance)
columns = ['1', '2', '3','4', '5','PC1', 'PC2', 'PC3', 'PC4','PC5','PC1', 'PC2', 'PC3', 'PC4','PC5']
plt.figure(figsize=(12,4))
plt.bar(x= range(1,16), height=percent_variance_cum, tick_label=columns)
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.show()
###Output
_____no_output_____
###Markdown
__It can be seen that more than 90% of the variance is explained by the first 13 principal components, which are the ones that will be considered__
###Code
pca = PCA(n_components=13)
principalComponents = pca.fit_transform(x) #Se escogen las primeras 13 componentes y se reduce la dimensionalidad de digits
principalDataframe = pd.DataFrame(data = principalComponents, columns = ['PC1', 'PC2','PC3','PC4','PC5', 'PC6', 'PC7', 'PC8','PC9','PC10', 'PC11', 'PC12','P13'])
targetDataframe = digits['target']
newDataframe = pd.concat([principalDataframe, targetDataframe],axis = 1)
newDataframe.head()
digits[atributos].shape
digits[:-1].shape
###Output
_____no_output_____
###Markdown
Dimensions of the new dataset: 1797 rows, 15 data columns. Dimensions of the original dataset: 1797 rows, 65 data columns.
###Code
digits[atributos].info(memory_usage='deep')
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1797 entries, 0 to 1796
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 c10 1797 non-null int32
1 c13 1797 non-null int32
2 c20 1797 non-null int32
3 c21 1797 non-null int32
4 c26 1797 non-null int32
5 c28 1797 non-null int32
6 c30 1797 non-null int32
7 c33 1797 non-null int32
8 c34 1797 non-null int32
9 c36 1797 non-null int32
10 c42 1797 non-null int32
11 c43 1797 non-null int32
12 c46 1797 non-null int32
13 c60 1797 non-null int32
14 c61 1797 non-null int32
dtypes: int32(15)
memory usage: 105.4 KB
###Markdown
The new dataset takes up 105.4 KB of memory, which is less than the 456.4 KB taken up by the original dataset.
###Code
X_new = pca.fit_transform(digits[atributos])
y_new = digits['target'] #conjunto de datos reduci
X_trainn, X_testn, y_trainn, y_testn = train_test_split(X_new, y_new, test_size=0.2, random_state = 42) #conjunto de datos de entrenamiento con nuevo conjunto de datos
t1=time()
svc_better.fit(X_trainn, y_trainn)
t2=time()
t3=time()
svc_better.fit(X_train,y_train)
t4=time()
print('tiempo nuevo conjunto de datos')
print(t2-t1)
print('tiempo conjunto original')
print(t4-t3)
t4-t3-t2+t1
###Output
_____no_output_____
###Markdown
We see that the reduced dataset is 0.44301557540893555 [s] faster to fit to the model under study than the original dataset.
###Code
y_true = list(y_testn)
y_pred = list(svc_better.predict(X_testn))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas svc:")
print("")
print(df_metrics)
###Output
Matriz de confusion:
[[33 0 0 0 0 0 0 0 0 0]
[ 0 26 1 0 1 0 0 0 0 0]
[ 0 0 33 0 0 0 0 0 0 0]
[ 0 0 0 34 0 0 0 0 0 0]
[ 0 2 0 0 43 0 1 0 0 0]
[ 0 0 0 0 0 46 1 0 0 0]
[ 0 0 0 0 0 1 34 0 0 0]
[ 0 0 0 1 0 0 0 32 0 1]
[ 0 1 5 0 0 1 0 0 23 0]
[ 0 0 0 0 0 1 0 1 0 38]]
Metricas svc:
accuracy recall precision fscore
0 0.95 0.9471 0.9519 0.9471
###Markdown
Finally, it can be seen that the metric values associated with the model on the reduced dataset are close to the values obtained with the original data, which is quite interesting and provides a good alternative, since there is also a shorter fitting time, which is a good quality of dimensionality reduction. Exercise 6__Visualizing results:__ Below, code is provided to compare the predicted labels vs the true labels of the _test_ set.
###Code
def mostar_resultados(digits,model,nx=5, ny=5,label = "correctos"):
"""
Muestra los resultados de las prediciones de un modelo
de clasificacion en particular. Se toman aleatoriamente los valores
de los resultados.
- label == 'correcto': retorna los valores en que el modelo acierta.
- label == 'incorrecto': retorna los valores en que el modelo no acierta.
Observacion: El modelo que recibe como argumento debe NO encontrarse
'entrenado'.
:param digits: dataset 'digits'
:param model: modelo de sklearn
:param nx: numero de filas (subplots)
:param ny: numero de columnas (subplots)
:param label: datos correctos o incorrectos
:return: graficos matplotlib
"""
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state = 42)
model.fit(X_train, Y_train) # ajustando el modelo
y_pred = list(model.predict(X_test))
# Mostrar los datos correctos
if label=="correctos":
mask = (y_pred == y_test)
color = "green"
# Mostrar los datos correctos
elif label=="incorrectos":
mask = (y_pred != y_test)
color = "red"
else:
raise ValueError("Valor incorrecto")
X_aux = X_test[mask]
y_aux_true = y_test[mask]
y_aux_pred = np.asarray(y_pred)[mask]
# We'll plot the first 100 examples, randomly choosen
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j + ny * i
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap='gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment='center', verticalalignment='center', fontsize=10, color=color)
ax[i][j].text(7, 0, label_true, horizontalalignment='center', verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
**Question*** Taking into account the best model found in `Exercise 3`, plot the results when: * the predicted and original values are equal * the predicted and original values are different * When the predicted and original values are different, why do these failures occur?
###Code
mostar_resultados(digits, svc_better,nx=5,ny=5,label='correctos')
mostar_resultados(digits, svc_better,nx=2,ny=2,label='incorrectos')
###Output
_____no_output_____ |
week3/nimiabhishekawasthi/Q4 - 3/Attempt1_filesubmission_WEEK_3_3_pure_pursuit.ipynb | ###Markdown
Configurable parameters for pure pursuit+ How fast do you want the robot to move? It is fixed at $v_{max}$ in this exercise+ When can we declare the goal has been reached?+ What is the lookahead distance? Determines the next position on the reference path that we want the vehicle to catch up to
###Code
vmax = 0.75
goal_threshold = 0.05
lookahead = 3.0
#You know what to do!
def simulate_unicycle(pose, v,w, dt=0.1):
x, y, t = pose
return x + v*np.cos(t)*dt, y + v*np.sin(t)*dt, t+w*dt
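# Quick check of the Euler step above (not required): starting at the origin facing
# +x and driving straight for one step, simulate_unicycle((0, 0, 0), 1.0, 0.0)
# should return (0.1, 0.0, 0.0), i.e. the robot advances v*dt along its heading.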
class PurePursuitTracker(object):
def __init__(self, x, y, v, lookahead = 3.0):
"""
Tracks the path defined by x, y at velocity v
x and y must be numpy arrays
v and lookahead are floats
"""
self.length = len(x)
self.ref_idx = 0 #index on the path that tracker is to track
self.lookahead = lookahead
self.x, self.y = x, y
self.v, self.w = v, 0
def update(self, xc, yc, theta):
"""
Input: xc, yc, theta - current pose of the robot
Update v, w based on current pose
Returns True if trajectory is over.
"""
#Calculate ref_x, ref_y using current ref_idx
#Check if we reached the end of path, then return TRUE
#Two conditions must satisfy
#1. ref_idx exceeds length of traj
#2. ref_x, ref_y must be within goal_threshold
# Write your code to check end condition
ref_x, ref_y = self.x[self.ref_idx], self.y[self.ref_idx]
goal_x, goal_y = self.x[-1], self.y[-1]
if (self.ref_idx < self.length) and (np.linalg.norm([ref_x-goal_x, ref_y-goal_y])) < goal_threshold:
return True
#End of path has not been reached
#update ref_idx using np.hypot([ref_x-xc, ref_y-yc]) < lookahead
if np.hypot(ref_x - xc, ref_y - yc) < self.lookahead:
self.ref_idx += 1
#Find the anchor point
# this is the line we drew between (0, 0) and (x, y)
anchor = np.asarray([ref_x - xc, ref_y - yc])
#Remember right now this is drawn from current robot pose
#we have to rotate the anchor to (0, 0, pi/2)
#code is given below for this
theta = np.pi/2 - theta
rot = np.asarray([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
anchor = np.dot(rot, anchor)
L = (anchor[0] ** 2 + anchor[1] **2) # dist to reference path
L=np.sqrt(L)
X = anchor[0] #cross-track error
#from the derivation in notes, plug in the formula for omega
self.w =-(2 * vmax / (L ** 2) * X)
return False
###Output
_____no_output_____
###Markdown
Visualize given trajectory
###Code
x = np.arange(0, 50, 0.5)
y = [np.sin(idx / 5.0) * idx / 2.0 for idx in x]
# plot the reference trajectory
plt.figure()
plt.plot(x, y, 'b.-')
plt.title('Reference trajectory')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Run the tracker simulation1. Instantiate the tracker class2. Initialize some starting pose3. Simulate robot motion 1 step at a time - get $v$, $\omega$ from tracker, predict new pose using $v$, $\omega$, current pose in simulate_unicycle()4. Stop simulation if tracker declares that end-of-path is reached5. Record all parameters
###Code
#write code to instantiate the tracker class
tracker = PurePursuitTracker(x,y,vmax)
pose = -1, 0, np.pi/2 #arbitrary initial pose
x0,y0,t0 = pose # record it for plotting
traj =[]
while True:
#write the usual code to obtain successive poses
pose = simulate_unicycle(pose, tracker.v, tracker.w)
if tracker.update(*pose):
print("ARRIVED!!")
break
traj.append([*pose, tracker.w, tracker.ref_idx])
xs,ys,ts,ws,ids = zip(*traj)
plt.figure()
plt.plot(x,y,label='Reference')
plt.quiver(x0,y0, np.cos(t0), np.sin(t0),scale=12)
plt.plot(xs,ys,label='Tracked')
x0,y0,t0 = pose
plt.quiver(x0,y0, np.cos(t0), np.sin(t0),scale=12)
plt.title('Pure Pursuit trajectory')
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
Visualize curvature
###Code
plt.figure()
plt.title('Curvature')
plt.plot(np.abs(ws))
plt.grid()
###Output
_____no_output_____
###Markdown
AnimateMake a video to plot the current pose of the robot and reference pose it is trying to track. You can use funcAnimation in matplotlib
###Code
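# One possible animation sketch (an assumption about what is wanted, not the only
# approach): replay the tracked poses over the reference path with FuncAnimation.
# `robot_dot`, `ref_dot` and the save filename are arbitrary names chosen here.
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
ax.plot(x, y, label='Reference')
robot_dot, = ax.plot([], [], 'ro', label='Robot')
ref_dot, = ax.plot([], [], 'bs', label='Tracked reference point')
ax.legend()
ax.grid()

def animate(i):
    # show the robot pose and the path index it was chasing at step i
    robot_dot.set_data([xs[i]], [ys[i]])
    ref_dot.set_data([x[ids[i]]], [y[ids[i]]])
    return robot_dot, ref_dot

anim = FuncAnimation(fig, animate, frames=len(xs), interval=50, blit=True)
# anim.save('pure_pursuit.mp4')  # saving to video requires ffmpeg
plt.show()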
###Output
_____no_output_____
###Markdown
Effect of noise in simulationsWhat happens if you add a bit of Gaussian noise to the simulate_unicycle() output? Is the tracker still robust?The noise signifies that $v$, $\omega$ commands did not get realized exactly
###Code
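# A possible noise experiment (sketch): rerun the tracking loop while perturbing
# the executed v and w with small Gaussian noise, mimicking commands that are not
# realized exactly. The noise level 0.05 and the variable names are arbitrary here.
noise_std = 0.05
tracker_n = PurePursuitTracker(x, y, vmax)
pose_n = -1, 0, np.pi/2
traj_n = []
while True:
    v_noisy = tracker_n.v + np.random.normal(0, noise_std)
    w_noisy = tracker_n.w + np.random.normal(0, noise_std)
    pose_n = simulate_unicycle(pose_n, v_noisy, w_noisy)
    if tracker_n.update(*pose_n):
        print("ARRIVED (with noise)!!")
        break
    traj_n.append(pose_n)
    if len(traj_n) > 5000:  # safety stop in case the noise prevents convergence
        break
xn, yn, tn = zip(*traj_n)
plt.figure()
plt.plot(x, y, label='Reference')
plt.plot(xn, yn, label='Tracked with noise')
plt.legend()
plt.grid()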
###Output
_____no_output_____ |
bronze/Q20_Hadamard.ipynb | ###Markdown
$ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $ Hadamard Operator_prepared by Abuzer Yakaryilmaz_[](https://youtu.be/VKva2R5FVfI) An example quantum operator for quantum coin-flipping is Hadamard. It is defined as h-gate in Qiskit.We implement all three experiments by using Qiskit. Here we present the first and third experiment. The second experiment will be presented later._This will be a warm-up step before introducing a quantum bit more formally._ The first experimentOur quantum bit (qubit) starts in state 0, which is shown as $ \ket{0} = \myvector{1 \\ 0} $.$ \ket{\cdot} $ is called ket-notation: Ket-notation is used to represent a column vector in quantum mechanics. For a given column vector $ \ket{v} $, its conjugate transpose is a row vector represented as $ \bra{v} $ (bra-notation). The circuit with a single Hadamard We design a circuit with one qubit and apply quantum coin-flipping once.
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# define a quantum register with one qubit
q = QuantumRegister(1,"qreg")
# define a classical register with one bit
# it stores the measurement result of the quantum part
c = ClassicalRegister(1,"creg")
# define our quantum circuit
qc = QuantumCircuit(q,c)
# apply h-gate (Hadamard: quantum coin-flipping) to the first qubit
qc.h(q[0])
# measure the first qubit, and store the result in the first classical bit
qc.measure(q,c)
# draw the circuit by using matplotlib
qc.draw(output='mpl') # re-run the cell if the figure is not displayed
###Output
_____no_output_____
###Markdown
###Code
# execute the circuit 10000 times in the local simulator
job = execute(qc,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(qc)
print(counts) # print the outcomes
print()
n_zeros = counts['0']
n_ones = counts['1']
print("State 0 is observed with frequency %",100*n_zeros/(n_zeros+n_ones))
print("State 1 is observed with frequency %",100*n_ones/(n_zeros+n_ones))
# we can show the result by using histogram
print()
from qiskit.visualization import plot_histogram
plot_histogram(counts)
###Output
{'1': 5068, '0': 4932}
State 0 is observed with frequency % 49.32
State 1 is observed with frequency % 50.68
###Markdown
The numbers of outcomes '0's and '1's are expected to be close to each other. As we have observed after this implementation, quantum systems output probabilistically. The third experiment _We will examine the second experiment later because it requires intermediate measurement. (We can do intermediate measurements in simulators, but it is not possible in the real machines.)_Now, we implement the third experiment. The circuit with two Hadamards We design a circuit with one qubit and apply quantum coin-flipping twice.
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# define a quantum register with one qubit
q2 = QuantumRegister(1,"qreg2")
# define a classical register with one bit
# it stores the measurement result of the quantum part
c2 = ClassicalRegister(1,"creg2")
# define our quantum circuit
qc2 = QuantumCircuit(q2,c2)
# apply h-gate (Hadamard: quantum coin-flipping) to the first qubit
qc2.h(q2[0])
# apply h-gate (Hadamard: quantum coin-flipping) to the first qubit once more
qc2.h(q2[0])
# measure the first qubit, and store the result in the first classical bit
qc2.measure(q2,c2)
# draw the circuit by using matplotlib
qc2.draw(output='mpl') # re-run the cell if the figure is not displayed
# execute the circuit 10000 times in the local simulator
job = execute(qc2,Aer.get_backend('qasm_simulator'),shots=10000)
counts2 = job.result().get_counts(qc2)
print(counts2) # print the outcomes
###Output
{'0': 10000}
###Markdown
The only outcome must be '0'. Task 1 Remember that x-gate flips the value of a qubit.Design a quantum circuit with a single qubit.The qubit is initially set to $ \ket{0} $.Set the value of qubit to $ \ket{1} $ by using x-gate.Experiment 1: Apply one Hadamard gate, make measurement, and execute your program 10000 times.Experiment 2: Apply two Hadamard gates, make measurement, and execute your program 10000 times.Compare your results.The following two diagrams represent these experiments.
###Code
#
# your solution is here
#
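# A possible solution sketch (not the official one). Both experiments start from |1> via an x-gate.
# Experiment 1: x-gate followed by one Hadamard -> roughly half '0' and half '1'
q3 = QuantumRegister(1,"qreg3")
c3 = ClassicalRegister(1,"creg3")
qc3 = QuantumCircuit(q3,c3)
qc3.x(q3[0])    # set the qubit to |1>
qc3.h(q3[0])    # one quantum coin-flip
qc3.measure(q3,c3)
counts1 = execute(qc3,Aer.get_backend('qasm_simulator'),shots=10000).result().get_counts(qc3)
print("x + one Hadamard:", counts1)
# Experiment 2: x-gate followed by two Hadamards -> HH = I, so only '1' is observed
q4 = QuantumRegister(1,"qreg4")
c4 = ClassicalRegister(1,"creg4")
qc4 = QuantumCircuit(q4,c4)
qc4.x(q4[0])
qc4.h(q4[0])
qc4.h(q4[0])
qc4.measure(q4,c4)
counts2 = execute(qc4,Aer.get_backend('qasm_simulator'),shots=10000).result().get_counts(qc4)
print("x + two Hadamards:", counts2)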
###Output
_____no_output_____ |
Code/.ipynb_checkpoints/20210525_FS_LT_Performance_FS03-FS06-checkpoint.ipynb | ###Markdown
This script is designed to take metadata from specific animal files and then display it as a graph
###Code
animal = '//10.153.170.3/storage2/fabian/data/project/FS10/'
result=pd.DataFrame()
for dirpath, dirnames, files in os.walk(animal, topdown=True):
fullstring = dirpath
for metadata in files:
if fnmatch.fnmatch(metadata, 'metadata_*'):
print(metadata)
print(dirpath)
k=(dirpath+'/'+metadata)
day = pd.read_csv(k,sep=" : ", header=None,engine='python')
df=day.T
df= df.rename(columns=df.iloc[0])
df=df.drop(df.index[0])
if int(df['Pellets'].values[0])>1:
result = result.append(df, ignore_index=True,sort=False)
sorted_data = result.sort_values('Computer time was',)
sorted_data
make_graphs('FS11')
def make_graphs (animal_ID):
result=pd.DataFrame()
path = '//10.153.170.3/storage2/fabian/data/project/'+ animal_ID
#print(path)
for dirpath, dirnames, files in os.walk(path, topdown=True):
fullstring = dirpath
for metadata in files:
if fnmatch.fnmatch(metadata, 'metadata_*'):
#print(metadata)
k=(dirpath+'/'+metadata)
day = pd.read_csv(k,sep=" : ", header=None,engine='python')
df=day.T
df= df.rename(columns=df.iloc[0])
df=df.drop(df.index[0])
try:
if int(df['Pellets'].values[0])>1:
result = result.append(df, ignore_index=True,sort=False)
except KeyError:
print("Bad session")
sorted_data = result.sort_values('Computer time was',)
sorted_data
day_list_short=[]
for day in sorted_data['Recording started on']:
day_list_short.append(day[5:13])
sorted_data['Pellets']= sorted_data['Pellets'].astype(int)
sorted_data['high pellets']=sorted_data['high pellets'].astype(float)
sorted_data['Sham']=sorted_data['Sham'].astype(float)
sorted_data['Beacon']=sorted_data['Beacon'].astype(float)
sorted_data['Distance']=sorted_data['Distance'].astype(float)
sorted_data['Speed']=sorted_data['Speed'].astype(float)
sorted_data['position_change']=sorted_data['position_change'].astype(int)
sorted_data['light_off']=sorted_data['light_off'].astype(int)
sorted_data['time_in_cylinder'] = sorted_data['time_in_cylinder'].astype(float)
sorted_data['background_color'] = sorted_data['background_color'].astype(str)
sorted_data['invisible_count']= sorted_data['invisible_count'].astype(int)
plt.tight_layout
fig, ax = plt.subplots(2,2,dpi=400,sharex=True)
fig.suptitle(animal_ID +' long term performance',y=1)
ax[0][0].bar(day_list_short,sorted_data['Pellets'],label='pellets',color ='g')
ax[0][0].bar(day_list_short,sorted_data['high pellets'],label='high pellets',color ='y')
ax[0][0].bar(day_list_short,sorted_data['invisible_count'],label='invisible beacons',color ='m')
ax[0][0].set_title('pellets')
ax[0][0].legend(loc='upper left',prop={'size': 5})
ax[1][1].set_xlabel('day')
ax[1][0].set_xlabel('day')
ax[0][0].set_ylabel('pellets')
ax[0][1].plot(day_list_short,sorted_data['Beacon'],label = 'beacon')
ax[0][1].plot(day_list_short,sorted_data['Sham'],label = 'sham')
ax[0][1].legend(loc='upper left',prop={'size': 5})
ax[0][1].set_title('beacon time (s)')
#ax[0][1].set_ylabel('time in beacon')
ax[1][0].plot(day_list_short,sorted_data['Distance'], label = 'distance')
ax[1][0].legend(loc='upper left',prop={'size': 5})
ax[1][0].set_title('movement')
ax[1][0].set_ylabel('meters')
ax[1][0].tick_params(axis="x", labelsize=6, labelrotation=-60, labelcolor="turquoise")
ax[1][0]=ax[1][0].twinx()
ax[1][0].plot(day_list_short,sorted_data['Speed'],label= 'speed cm/s',color = 'cyan')
ax[1][0].legend(loc='upper right',prop={'size': 5})
ax[1][0].tick_params(axis="x", labelsize=6, labelrotation=-60, labelcolor="turquoise")
succes_rate=sorted_data['invisible_count']/(sorted_data['Pellets']/sorted_data['light_off'])
ax[1][1].bar(day_list_short,succes_rate,label= '% of invisible correct',color = 'm')
ax[1][1].legend(loc='upper left',prop={'size': 5})
ax[1][1].set_title('succes_rate')
ax[1][1].tick_params(axis="x", labelsize=6, labelrotation=-60, labelcolor="turquoise")
ax[1][1].yaxis.tick_right()
ax[1][1].yaxis.set_major_formatter(mtick.PercentFormatter(xmax=1, decimals=None, symbol='%', is_latex=False))
#fig.tight_layout()#pad=3.0
#plt.show()
plt.savefig('%sephys_long_term_perfomance %s.png'%(figures,animal_ID), dpi = 300)
day_number = 0
# for day in sorted_data['Pellets']:
# print("%s Pellets dispensed : %s required time in cylinder %s background color: %s position change every: %s, invisible every: %s rear time reguired: %s"
# %(day_list_short[day_number],day,sorted_data['time_in_cylinder'][day_number],
# sorted_data['background_color'][day_number],sorted_data['position_change'][day_number],
# sorted_data['light_off'][day_number],sorted_data['high_time_in_cylinder'][day_number]))
# day_number+=1
make_graphs('FS11')
###Output
_____no_output_____ |
osm_python_tools.ipynb | ###Markdown
The Overpass API works with the https://github.com/mocnik-science/osm-python-tools/ library. Categories are presented, for example, here: https://github.com/GIScience/openpoiservice/blob/master/categories_docker.yml
###Code
import pandas as pd
import time
from OSMPythonTools.overpass import overpassQueryBuilder
from OSMPythonTools.overpass import Overpass
overpass = Overpass(waitBetweenQueries = 50)
custom_bbox = [48.1, 16.3, 48.3, 16.5]
def poi_request(item_type, search_query):
# query built
query = overpassQueryBuilder(bbox=custom_bbox, elementType='node',
selector='"{}"="{}"'.format(str(item_type), str(search_query)), out='body')
result = overpass.query(query, timeout=100000)
# result in json
result = result.toJSON()['elements']
# separate tags
for row in result:
row.update(row['tags'])
df = pd.DataFrame(result)
return df
# lists of items to request
tourism = ['hotel', 'motel']
amenity = ['library', 'museum', 'bank', 'hospital', 'cafe', 'fast_food', 'pub', 'restaurant']
shop = ['shoes', 'alcohol', 'bakery', 'cheese', 'tobacco']
# request for items in item category
for search_query in amenity:
df = poi_request('amenity', search_query)
time.sleep(100)
df.to_csv("./output/{}.csv".format(str(search_query)))
###Output
[overpass] downloading data: [timeout:100000][out:json];(node["amenity"="library"](48.1,16.3,48.3,16.5);); out body;
[overpass] downloading data: [timeout:100000][out:json];(node["amenity"="museum"](48.1,16.3,48.3,16.5);); out body;
[overpass] downloading data: [timeout:100000][out:json];(node["amenity"="bank"](48.1,16.3,48.3,16.5);); out body;
[overpass] downloading data: [timeout:100000][out:json];(node["amenity"="hospital"](48.1,16.3,48.3,16.5);); out body;
[overpass] downloading data: [timeout:100000][out:json];(node["amenity"="cafe"](48.1,16.3,48.3,16.5);); out body;
[overpass] downloading data: [timeout:100000][out:json];(node["amenity"="fast_food"](48.1,16.3,48.3,16.5);); out body;
[overpass] downloading data: [timeout:100000][out:json];(node["amenity"="pub"](48.1,16.3,48.3,16.5);); out body;
[overpass] downloading data: [timeout:100000][out:json];(node["amenity"="restaurant"](48.1,16.3,48.3,16.5);); out body;
|
Multi digit recognition.ipynb | ###Markdown
Multi Digit Recognition This notebook shows a simple model in Keras to recognize a digit sequence in a real-world image. The image data is taken from the Street View House Numbers dataset. This work is divided into two parts. The **Preprocessing** notebook converts the images in the dataset to 32x32 greyscale image arrays and saves them in an h5 file. The **Multi Digit Recognition** notebook contains the CNN model to predict the multi-digit number in the images. Let's import the main packages
###Code
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
import seaborn as sns
from PIL import Image
import numpy as np
import time
import os
from keras import backend as K
from keras.models import Model
from keras.layers import Input,Lambda,Dense,Dropout,Activation,Flatten,Conv2D,MaxPooling2D
K.clear_session()
###Output
C:\Users\saiki\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
Extract the data from the h5 file created in the preprocessing notebook
###Code
h5f = h5py.File('data/svhn_multi_grey.h5','r')
# Extract the datasets
x_train = h5f['train_dataset'][:]
y_train = h5f['train_labels'][:]
x_val = h5f['valid_dataset'][:]
y_val = h5f['valid_labels'][:]
x_test = h5f['test_dataset'][:]
y_test = h5f['test_labels'][:]
# Close the file
h5f.close()
print('Training set', x_train.shape, y_train.shape)
print('Validation set', x_val.shape, y_val.shape)
print('Test set ', x_test.shape, y_test.shape)
###Output
Training set (230754, 32, 32, 1) (230754, 5)
Validation set (5000, 32, 32, 1) (5000, 5)
Test set (13068, 32, 32, 1) (13068, 5)
###Markdown
I merge the validation set into the training set and shuffle the combined data
###Code
X_train = np.concatenate([x_train, x_val])
Y_train = np.concatenate([y_train, y_val])
from sklearn.utils import shuffle
# Randomly shuffle the training data
X_train, Y_train = shuffle(X_train, Y_train)
###Output
_____no_output_____
###Markdown
Normalizing the data is done to get better results and to reduce the time needed to train
###Code
def subtract_mean(a):
""" Helper function for subtracting the mean of every image
"""
for i in range(a.shape[0]):
a[i] -= a[i].mean()
return a
# Subtract the mean from every image
X_train = subtract_mean(X_train)
X_test = subtract_mean(x_test)
###Output
_____no_output_____
###Markdown
Creating a helper function to convert the number into a one-hot encoding for each digit and combine them into one array of length 55 (5 digits × 11 classes)
###Code
#preparing the y data
def y_data_transform(y):
y_new=np.zeros((y.shape[0],y.shape[1]*11),dtype="int")
for (i,j),l in np.ndenumerate(y):
y_new[i,j*11+l]=1
return y_new
Y_Train=y_data_transform(Y_train)
Y_test=y_data_transform(y_test)
###Output
_____no_output_____
###Markdown
This model is created using the Keras functional (Input/Model) API. The following model summary shows the main model used to recognize the number
###Code
input_data=Input(name="input",shape=(32,32,1),dtype='float32')
conv1=Conv2D(32,5,padding="same",activation="relu")(input_data)
conv2=Conv2D(32,5,padding="same",activation="relu")(conv1)
max1=MaxPooling2D(pool_size=(2, 2),padding="same")(conv2)
drop1=Dropout(0.75)(max1)
conv3=Conv2D(64,5,padding="same",activation="relu")(drop1)
conv4=Conv2D(64,5,padding="same",activation="relu")(conv3)
max2=MaxPooling2D(pool_size=(2, 2),padding="same")(conv4)
drop2=Dropout(0.75)(max2)
conv5=Conv2D(128,5,padding="same",activation="relu")(drop2)
conv6=Conv2D(128,5,padding="same",activation="relu")(conv5)
conv7=Conv2D(128,5,padding="same",activation="relu")(conv6)
flat=Flatten()(conv7)
fc1=Dense(256,activation="relu")(flat)
drop3=Dropout(0.5)(fc1)
fc2=Dense(253,activation="relu")(drop3)
output=Dense(55,activation="sigmoid")(fc2)
model1=Model(inputs=input_data, outputs=output)
model1.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input (InputLayer) (None, 32, 32, 1) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 32, 32, 32) 832
_________________________________________________________________
conv2d_2 (Conv2D) (None, 32, 32, 32) 25632
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 32) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 16, 16, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 16, 16, 64) 51264
_________________________________________________________________
conv2d_4 (Conv2D) (None, 16, 16, 64) 102464
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 64) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 8, 8, 64) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 8, 8, 128) 204928
_________________________________________________________________
conv2d_6 (Conv2D) (None, 8, 8, 128) 409728
_________________________________________________________________
conv2d_7 (Conv2D) (None, 8, 8, 128) 409728
_________________________________________________________________
flatten_1 (Flatten) (None, 8192) 0
_________________________________________________________________
dense_1 (Dense) (None, 256) 2097408
_________________________________________________________________
dropout_3 (Dropout) (None, 256) 0
_________________________________________________________________
dense_2 (Dense) (None, 253) 65021
_________________________________________________________________
dense_3 (Dense) (None, 55) 13970
=================================================================
Total params: 3,380,975
Trainable params: 3,380,975
Non-trainable params: 0
_________________________________________________________________
###Markdown
**Custom Loss Function** This is the custom loss function created to compare the predicted y to the actual y
###Code
_EPSILON=1e-7
def _loss_tensor(y_true, y_pred):
y_pred = K.clip(y_pred, _EPSILON, 1.0-_EPSILON)
out = -(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred))
return K.mean(out, axis=-1)
def loss_func(y):
y_pred,y_true=y
loss=_loss_tensor(y_true,y_pred)
return loss
###Output
_____no_output_____
###Markdown
A Lambda layer applies the loss function to the model output and the y_true input to calculate the loss; the output of this layer is the loss value
###Code
from keras.callbacks import TensorBoard
y_true = Input(name='y_true', shape=[55], dtype='float32')
loss_out = Lambda(loss_func, output_shape=(1,), name='loss')([output, y_true])
model = Model(inputs=[input_data,y_true], outputs=loss_out)
model.add_loss(K.sum(loss_out,axis=None))
###Output
_____no_output_____
###Markdown
Because the loss is added through the last layer, the loss argument is set to None in the compiler so that the value coming out of that layer is driven towards zero
###Code
tensor_board = TensorBoard(log_dir='./Graph', histogram_freq=0, write_graph=True, write_images=True)
model.compile(loss=None, optimizer="adam", loss_weights=None)
model.fit(x=[X_train,Y_Train],y=None, batch_size=1000, epochs=25, verbose=1,callbacks=[tensor_board])
###Output
Epoch 1/25
235754/235754 [==============================] - 116s 493us/step - loss: 193.0511
Epoch 2/25
235754/235754 [==============================] - 110s 465us/step - loss: 161.9264
Epoch 3/25
235754/235754 [==============================] - 110s 466us/step - loss: 150.3311
Epoch 4/25
235754/235754 [==============================] - 111s 470us/step - loss: 131.5419
Epoch 5/25
235754/235754 [==============================] - 112s 473us/step - loss: 109.1504
Epoch 6/25
235754/235754 [==============================] - 110s 469us/step - loss: 87.3990
Epoch 7/25
235754/235754 [==============================] - 110s 467us/step - loss: 68.9336
Epoch 8/25
235754/235754 [==============================] - 110s 467us/step - loss: 56.4606
Epoch 9/25
235754/235754 [==============================] - 110s 467us/step - loss: 48.6544
Epoch 10/25
235754/235754 [==============================] - 110s 467us/step - loss: 43.7350
Epoch 11/25
235754/235754 [==============================] - 111s 471us/step - loss: 40.2667
Epoch 12/25
235754/235754 [==============================] - 110s 467us/step - loss: 37.4090
Epoch 13/25
235754/235754 [==============================] - 111s 472us/step - loss: 35.1917
Epoch 14/25
235754/235754 [==============================] - 112s 474us/step - loss: 33.6588
Epoch 15/25
235754/235754 [==============================] - 112s 475us/step - loss: 31.7385
Epoch 16/25
235754/235754 [==============================] - 126s 535us/step - loss: 30.1490
Epoch 17/25
235754/235754 [==============================] - 126s 535us/step - loss: 29.1520
Epoch 18/25
235754/235754 [==============================] - 126s 536us/step - loss: 27.9035
Epoch 19/25
235754/235754 [==============================] - 126s 535us/step - loss: 26.9709
Epoch 20/25
235754/235754 [==============================] - 126s 535us/step - loss: 26.5437
Epoch 21/25
235754/235754 [==============================] - 126s 535us/step - loss: 25.7282
Epoch 22/25
235754/235754 [==============================] - 126s 535us/step - loss: 25.1414
Epoch 23/25
235754/235754 [==============================] - 126s 535us/step - loss: 24.6235
Epoch 24/25
235754/235754 [==============================] - 126s 535us/step - loss: 24.3906
Epoch 25/25
235754/235754 [==============================] - 126s 534us/step - loss: 24.1711
###Markdown
The loss value seems big because of the custom function used; the accuracy calculated below shows the accuracy in detecting the right digits
###Code
Accuracy=(1-np.mean(model.predict([X_test[:],Y_test[:]])))*100
print(Accuracy)
model.save("MDR_model.h5")
model.save_weights("MDR_model_weights.h5")
###Output
_____no_output_____
###Markdown
This helper function will convert the 55 logits into a number.
###Code
def convert_to_num(x):
num=""
if len(x)==55:
for i in range(5):
c=np.argmax(x[i*11:(i+1)*11])
if c!=10:
num+=str(c)
return num
else:
print("This function might not be used that way")
###Output
_____no_output_____
###Markdown
Even though the accuracy for each digit is high, the accuracy for predicting the full number is lower.
###Code
X1=model1.predict(X_test)
Y1=Y_test
j=0
for i in range(len(X_test)):
try:
if eval(convert_to_num(X1[i]))!=eval(convert_to_num(Y1[i])):
j+=1
#print(i,[convert_to_num(X1[i]),convert_to_num(Y1[i])])
except:
j+=1
print("total error",j," out of ",len(X1),"and total accuracy",(1-(j/len(X1)))*100)
###Output
total error 1561 out of 13068 and total accuracy 88.0547903275176
|
notebooks/DC-2-layer-foundation-app.ipynb | ###Markdown
DC 2 layer foundation- [**Questions**](https://www.dropbox.com/s/uizpgz3eyt3urim/DC-2-layer-foundation.pdf?dl=0) In this notebook, we use widgets to explore the physical principals governing DC resistivity. For a half-space and a 2-layer resistivity model, we will learn about the behavior of the *currents*, *electric fields* and *electric potentials* that are produced when electric currents are injected into the Earth.In the DC resistivity experiment, we measure the different in electric potential between two locations; also known as a *voltage*. Using these voltage measurements, we can get information about the resistivity of the Earth. Here, be begin to understand how these measurements depend on the electrode locations.**DC resistivity over a 2 layered Earth** Background: Computing Apparent ResistivityIn practice we cannot measure the electric potentials everywhere. We are instead limited to a set of locations where we have placed potential electrodes. For each source (current electrode pair) many potential differences are measured between M and N electrode pairs to characterize the overall distribution of potentials. The widget below allows you to visualize the potentials, electric fields, and current densities from a dipole source in a simple model with 2 layers. For different electrode configurations you can measure the potential differences and see the calculated apparent resistivities. In a uniform halfspace the potential differences can be computed by summing up the potentials at each measurement point from the different current sources based on the following equations:\begin{align} V_M = \frac{\rho I}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} \right] \\ V_N = \frac{\rho I}{2 \pi} \left[ \frac{1}{AN} - \frac{1}{NB} \right] \end{align} where $AM$, $MB$, $AN$, and $NB$ are the distances between the corresponding electrodes. The potential difference $\Delta V_{MN}$ in a dipole-dipole survey can therefore be expressed as follows,\begin{equation} \Delta V_{MN} = V_M - V_N = \rho I \underbrace{\frac{1}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} - \frac{1}{AN} + \frac{1}{NB} \right]}_{G}\end{equation}and the resistivity of the halfspace $\rho$ is equal to,$$ \rho = \frac{\Delta V_{MN}}{IG}$$In this equation $G$ is often referred to as the geometric factor. In the case where we are not in a uniform halfspace the above equation is used to compute the apparent resistivity ($\rho_a$) which is the resistivity of the uniform halfspace which best reproduces the measured potential difference.In the top plot the location of the A electrode is marked by the red +, the B electrode is marked by the blue -, and the M/N potential electrodes are marked by the black dots. The $V_M$ and $V_N$ potentials are printed just above and to the right of the black dots. The calculted apparent resistivity is shown in the grey box to the right. The bottom plot can show the resistivity model, the electric fields (e), potentials, or current densities (j) depending on which toggle button is selected. Some patience may be required for the plots to update after parameters have been changed. Import Packages
###Code
from utils import DCLayers
from IPython.display import display
%matplotlib inline
from matplotlib import rcParams
rcParams['font.size'] = 14
###Output
_____no_output_____
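###Markdown
The equations above can be checked numerically. The short cell below is an added illustration (not part of the original app): it computes the geometric factor $G$ and the apparent resistivity for one assumed set of electrode positions on a half-space.
###Code
# Illustrative sketch: geometric factor and apparent resistivity for a single measurement.
import numpy as np

def apparent_resistivity(A, B, M, N, dV, I=1.0):
    """Return rho_a (Ohm m) from electrode x-positions (m), potential difference dV (V) and current I (A)."""
    AM, MB = abs(M - A), abs(B - M)
    AN, NB = abs(N - A), abs(B - N)
    G = (1.0 / (2 * np.pi)) * (1.0/AM - 1.0/MB - 1.0/AN + 1.0/NB)
    return dV / (I * G)

# example numbers (purely illustrative)
print(apparent_resistivity(A=-10, B=10, M=-2, N=2, dV=0.05, I=0.5))
###Output
_____no_output_____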
###Markdown
User Defined Parameters for the AppBelow are the parameters that can be changed by the user: - **A**: (+) Current electrode location - **B**: (-) Current electrode location - **M**: (+) Potential electrode location - **N**: (-) Potential electrode location - **$\rho_1$**: Resistivity of the first layer - **$\rho_2$**: Resistivity of the second layer - **h**: Thickness of the first layer - **Plot**: Choice of 2D plot (Model, Potential, Electric field, Currents) Run the App
###Code
out = DCLayers.plot_layer_potentials_app()
display(out)
###Output
_____no_output_____ |
usr/ard/10/10_listes_student.ipynb | ###Markdown
Lists Lists in Python are an ordered collection of objects. The objects can be of various types. A list can contain another list. A list is a sequence* A list is delimited by square brackets `[]`* Elements are separated by a comma `,`* An element can be accessed by its index `L[1]`* A list can be empty `L=[]`
###Code
a = [10, 20, 30]
fruits = ['banane', 'orange', 'pomme']
###Output
_____no_output_____
###Markdown
An **index** is used to access an element of a list.
###Code
a[1], fruits[2]
###Output
_____no_output_____
###Markdown
A list can **contain different types** of elements.
###Code
c = [1, 1.2, True, None, 'abc', [], (), {}]
for i in c:
print(i, ' - ', type(i))
###Output
1 - <class 'int'>
1.2 - <class 'float'>
True - <class 'bool'>
None - <class 'NoneType'>
abc - <class 'str'>
[] - <class 'list'>
() - <class 'tuple'>
{} - <class 'dict'>
###Markdown
A list inside another list is said to be **nested**.
###Code
L = [1, 2, [3, 4]]
print(L[2])
print(L[2][0])
###Output
[3, 4]
3
###Markdown
A list that contains no elements is an **empty** list. Lists are mutable Unlike a string, a list is **mutable**
###Code
a = [10, 20, 30]
a
a[1] = 'twenty'
a
s = 'hello'
L = list(s)
L[1] = 'a'
L
# a string cannot be modified, but a list can
###Output
_____no_output_____
###Markdown
The second element `a[1]`, which contained the numeric value 20, has been replaced by the string `'twenty'`.
###Code
L = list(range(6))
print('L =', L)
print('L[2:4] =', L[2:4])
L[3] = [1, 2, 3]
print('L = ', L)
L = list(range(10))
L[3:7]
###Output
_____no_output_____
###Markdown
A slice of a list can be replaced by a single element.
###Code
L[3:7] = 'x'
L
###Output
_____no_output_____
###Markdown
An element can be replaced by a list.
###Code
L[4] = [10, 20]
L
###Output
_____no_output_____
###Markdown
An element can be replaced by a reference to a list.
###Code
L[5] = a
L
###Output
_____no_output_____
###Markdown
If the inserted list `a` is modified, the containing list `L` is also modified. A list variable therefore does not hold a copy of the list, but a reference to that list.
###Code
a[0] = 'xxx'
L
###Output
_____no_output_____
###Markdown
Traversing a list The `for` loop lets you iterate over a list, in exactly the same way as for strings.
###Code
for i in a:
print(i)
for i in L:
print(i)
###Output
0
1
2
[1, 2, 3]
[10, 20]
['xxxxxx', 'twentytwenty', 60]
###Markdown
To modify a list, we need the index. The loop traverses the list and multiplies each element by 2.
###Code
n = len(a)
for i in range(n):
a[i] *= 2
a
n = len(L)
for i in range(n):
L[i] = L[i] * 2
L
###Output
_____no_output_____
###Markdown
If a list is empty, the loop body is never executed.
###Code
for x in []:
print('this never prints')
###Output
_____no_output_____
###Markdown
**Exercise** Compare iterating through: a list `[1, 2, 3]`, a string `'abc'` and a range `range(3)`
###Code
L = [1, 2, 3]
for item in L:
print(item)
L = ['banane', 'orange', 'apple']
n = len(L)
for i in range(n):
print(i, '=', L[i])
L[i] *= 2
L
###Output
0 = banane
1 = orange
2 = apple
###Markdown
List operations The addition `+` and multiplication `*` operators for numbers have a different interpretation for lists.
###Code
a = [1, 2, 3]
b = ['a', 'b']
###Output
_____no_output_____
###Markdown
The `+` operator concatenates lists.
###Code
a+b
###Output
_____no_output_____
###Markdown
The `*` operator repeats a list.
###Code
b * 3
[0] * 10
L = [[0] * 10] * 5
L[2][2] = 'x'
L
def identity(n):
    L = [[0] * n for _ in range(n)]  # build an n x n matrix of zeros (a null() helper is not defined in this notebook)
for i in range(n):
L[i][i] = 1
return L
identity(5)
###Output
_____no_output_____
###Markdown
The `list` function turns an iterable such as `range(10)` into an actual list.
###Code
list(range(10))
###Output
_____no_output_____
###Markdown
The `list` function also turns strings into an actual list.
###Code
list('hello')
###Output
_____no_output_____
###Markdown
List slices
###Code
t = list('abcdef')
t
###Output
_____no_output_____
###Markdown
The **slice** operator `[m:n]` can be used with lists.
###Code
t[1:3]
###Output
_____no_output_____
###Markdown
All elements from the beginning:
###Code
t[:4]
###Output
_____no_output_____
###Markdown
All elements up to the end:
###Code
t[4:]
a = list(range(10))
a
a[:4]
a[4:] # starts at index 4 and goes to the end
a[2:10:3]
a
###Output
_____no_output_____
###Markdown
List methods The `append` method adds an element to the end of a list.
###Code
a = [1, 2, 3]
a.append('a')
a
###Output
_____no_output_____
###Markdown
The `extend` method adds the elements of a list to the end of another list.
###Code
a.extend([10, 20])
a
###Output
_____no_output_____
###Markdown
The `sort` method sorts the elements of a list. It does not return a new sorted list, but modifies the list in place.
###Code
c = [23, 12, 54, 2]
c.sort()
c
###Output
_____no_output_____
###Markdown
The optional `reverse` parameter reverses the sort order.
###Code
c.sort(reverse=True)
c
###Output
_____no_output_____
###Markdown
We can sort letters
###Code
a = list('world')
a.sort()
a
print(1, 2, 3, sep=', ')
###Output
1, 2, 3
###Markdown
Most list methods return nothing (`None`).
###Code
L = a.sort()
print(a)
print(L)
###Output
_____no_output_____
###Markdown
The `sorted(L)` function, on the other hand, returns a new sorted list.
###Code
a = list('world')
L = sorted(a)
print(a)
print(L)
###Output
['w', 'o', 'r', 'l', 'd']
['a', 'b']
###Markdown
Map, filter and reduce To add up all the elements of a list, you can initialize the variable `total` to zero and add one element of the list at each iteration. A variable used in this way is called an **accumulator**.
###Code
def somme(t):
total = 0
for i in t:
total += i
return total
b = [1, 2, 32, 42]
somme(b)
###Output
_____no_output_____
###Markdown
Adding up the elements of a list is so common that Python provides a `sum` function.
###Code
sum(b)
def tout_en_majuscules(t):
"""t: une liste de mots."""
res = []
for s in t:
res.append(s.capitalize())
return res
tout_en_majuscules(['good', 'hello', 'world'])
###Output
_____no_output_____
###Markdown
The `isupper` method is true if all letters are uppercase.
###Code
def seulement_majuscules(t):
res = []
for s in t:
if s.isupper():
res.append(s)
return res
b = ['aa', 'AAA', 'Hello', 'HELLO']
seulement_majuscules(b)
###Output
_____no_output_____
###Markdown
A function like `seulement_majuscules` is called a **filter** because it selects only certain elements. Removing elements With the `pop` method you can remove an element.
###Code
a = list('hello')
a
###Output
_____no_output_____
###Markdown
The `pop` method modifies the list and returns an element. Used without an argument, `pop` removes the last element of the list.
###Code
a.pop()
a
###Output
_____no_output_____
###Markdown
Used with an argument, the element at that index is removed from the list.
###Code
a.pop(0)
a
###Output
_____no_output_____
###Markdown
The `del` operator can also remove an element.
###Code
del(a[0])
a
###Output
_____no_output_____
###Markdown
Lists of strings A string is a sequence of characters, and characters only. A list, on the other hand, is a sequence of elements of any type. The `list` function turns an iterable such as a string into a list.
###Code
s = 'spam'
print(s)
print(list(s))
###Output
_____no_output_____
###Markdown
Since `list` is the name of a built-in function, you should not use it as a variable name. Avoid using the lowercase letter L (`l`) because it looks almost identical to the digit one (`1`); that is why `t` is used here instead. The `split` function splits a sentence into words and returns them in a list.
###Code
s = 'je suis ici en ce moment'
t = s.split()
t
###Output
_____no_output_____
###Markdown
`join` is the inverse of `split`.
###Code
' - '.join(t)
###Output
_____no_output_____
###Markdown
Objects and values Two variables that refer to the same string point to the same object. The `is` operator returns true if the two variables point to the same object.
###Code
a = 'banane'
b = 'banane'
a is b
###Output
_____no_output_____
###Markdown
Two variables that are initialized with the same list are not the same object.
###Code
a = [1, 2, 3]
b = [1, 2, 3]
a is b
###Output
_____no_output_____
###Markdown
In this case we say that the two lists are **equivalent**, but not identical, since they are not the same object. Aliasing If a variable is initialized with another variable, then both point to the same object.
###Code
a = [1, 2, 3]
b = a
a is b
###Output
_____no_output_____
###Markdown
If an element of `b` is modified, the variable `a` changes as well.
###Code
b[0] = 42
print(a)
print(b)
###Output
_____no_output_____
###Markdown
The association between a variable and an object is called a **reference**. In this example there are two references, `a` and `b`, to the same object. If the objects are immutable (strings, tuples) this is not a problem, but with two variables referring to the same list, you must be careful not to modify it inadvertently. List arguments If a list is passed as a function argument, the function can modify the list.
###Code
def modifie_list(t):
    t[0] *= 2 # multiply by two
    t[1] = 42 # new assignment
    del t[2] # deletion
a = [1, 2, 3, 4, 5]
print(a)
modifie_list(a)
a
b = list('abcde')
modifie_list(b)
b
###Output
_____no_output_____
###Markdown
The `append` method modifies a list, whereas the `+` operator creates a new list.
###Code
a = [1, 2]
b = a.append(3)
print('a =', a)
print('b =', b)
###Output
_____no_output_____
###Markdown
`append` modifies the list and returns `None`.
###Code
b = a + [4]
print('a =', a)
print('b =', b)
###Output
_____no_output_____
###Markdown
Exercises **Exercise 1** Write a function called `nested_sum` that takes a list of lists of integers and adds up the elements of all the nested lists.
###Code
def nested_sum(L):
s = 0
for sublist in L:
for item in sublist:
s = s + item
#print(sublist, s)
return s
t = [[1, 2], [3], [4, 5, 6]]
nested_sum(t)
###Output
_____no_output_____
###Markdown
**Exercise 2** Write a function called `cumsum` that takes a list of numbers and returns the cumulative sum; that is, a new list where the n-th element is the sum of the first n + 1 elements of the original list.
###Code
def cumsum(t):
pass
t = range(5)
cumsum(t)
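# One possible solution (illustrative):
def cumsum(t):
    total = 0
    res = []
    for x in t:
        total += x
        res.append(total)
    return res

cumsum(range(5))   # -> [0, 1, 3, 6, 10]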
###Output
_____no_output_____
###Markdown
**Exercise 3** Write a function called `middle` that takes a list and returns a new list containing all the elements except the first and the last.
###Code
def middle(t):
pass
t = list(range(10))
print(t)
print(middle(t))
print(t)
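# One possible solution (illustrative): slicing leaves the original list untouched.
def middle(t):
    return t[1:-1]

print(middle(list(range(10))))   # [1, 2, 3, 4, 5, 6, 7, 8]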
###Output
_____no_output_____
###Markdown
**Exercise 4** Write a function called `chop` that takes a list, modifies it by removing the first and last elements, and returns `None`.
###Code
def chop(t):
pass
t = list(range(10))
print(t)
print(chop(t))
print(t)
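# One possible solution (illustrative): modify the list in place and return None.
def chop(t):
    del t[0]
    del t[-1]

t = list(range(10))
chop(t)
print(t)   # [1, 2, 3, 4, 5, 6, 7, 8]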
###Output
_____no_output_____
###Markdown
**Exercise 5** Write a function called `is_sorted` that takes a list as a parameter and returns True if the list is sorted in ascending order and False otherwise.
###Code
def is_sorted(t):
pass
is_sorted([11, 2, 3])
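# One possible solution (illustrative):
def is_sorted(t):
    return all(t[i] <= t[i + 1] for i in range(len(t) - 1))

print(is_sorted([1, 2, 3]), is_sorted([11, 2, 3]))   # True False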
###Output
_____no_output_____
###Markdown
**Exercise 6** Two words are anagrams if you can rearrange the letters of one to form the other (for example ALEVIN and NIVELA are anagrams). Write a function called `is_anagram` that takes two strings and returns `True` if they are anagrams.
###Code
def is_anagram(s1, s2):
pass
is_anagram('ALEVIN', 'NIVELA')
is_anagram('ALEVIN', 'NIVEL')
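# One possible solution (illustrative): two words are anagrams when their sorted letters match.
def is_anagram(s1, s2):
    return sorted(s1) == sorted(s2)

print(is_anagram('ALEVIN', 'NIVELA'), is_anagram('ALEVIN', 'NIVEL'))   # True False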
###Output
_____no_output_____
###Markdown
**Exercise 7** Write a function called `has_duplicates` that takes a list and returns `True` if there is at least one element that appears more than once. The function should not modify the original list.
###Code
def has_duplicates(t):
pass
t = [1, 2, 3, 4, 1]
has_duplicates(t)
t = [1, 2, 3, 4, '1']
has_duplicates(t)
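# One possible solution (illustrative): a set drops duplicates of hashable elements,
# so comparing lengths detects them without modifying the original list.
def has_duplicates(t):
    return len(set(t)) < len(t)

print(has_duplicates([1, 2, 3, 4, 1]), has_duplicates([1, 2, 3, 4, '1']))   # True False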
###Output
_____no_output_____
###Markdown
**Exercise 8** This exercise relates to the so-called birthday paradox, which you can read about at https://fr.wikipedia.org/wiki/Paradoxe_des_anniversaires . If there are 23 students in your class, what are the chances that two of you have the same birthday? You can estimate this probability by generating random samples of 23 birthdays and checking for matches. Hint: you can generate random birthdays with the randint function from the random module.
###Code
import random
def birthdays(n):
pass
m = 1000
n = 0
for i in range(m):
pass
print(n/m)
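# One possible solution (illustrative), reusing has_duplicates from the previous exercise:
def birthdays(n):
    """Return n random birthdays encoded as day-of-year numbers."""
    return [random.randint(1, 365) for _ in range(n)]

m = 1000
hits = 0
for _ in range(m):
    if has_duplicates(birthdays(23)):
        hits += 1
print(hits / m)   # roughly 0.5, as the birthday paradox predicts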
###Output
_____no_output_____
###Markdown
**Exercise 9** Write a function that reads the file mots.txt from the previous chapter and builds a list with one element per word. Write two versions of this function, one using the append method and the other using the syntax `t = t + [x]`. Which one takes longer to run? Why?
###Code
%%time
fin = open('mots.txt')
t = []
for line in fin:
    t.append(line.strip())
len(t)
%%time
fin = open('mots.txt')
t = []
i = 0
for line in fin:
    t = t + [line.strip()]
###Output
_____no_output_____
###Markdown
The second version gets slower and slower because it has to copy and create a new list each time. **Exercise 10** To check whether a word is in the word list, you could use the `in` operator, but that would be slow, because it checks the words one by one in the order they appear. If the words are in alphabetical order, we can speed things up with a bisection search (also known as binary search), which is similar to what you do when you look up a word in the dictionary. You start in the middle and check whether the word you are looking for comes before the middle word of the list. If so, you search the first half of the list in the same way. Otherwise, you look in the second half. In either case, you cut the remaining search space in half. If the word list has 130,557 words, it will take about 17 steps to find the word or conclude that it is not there. Write a function called `in_bisect` that takes a sorted list and a target value and returns the index of the value in the list if it is there, or None if it is not. Don't forget that the list must first be sorted in alphabetical order for this algorithm to work; you will save time if you start by sorting the input list and storing it in a new file (you can use your operating system's sort function if it exists, or otherwise do it in Python), so you only need to do it once.
###Code
fin = open('mots.txt')
t = []
for line in fin:
mot = line.strip()
t.append(mot)
t.sort()
len(t)
def in_bisect(t, val):
a = 0
b = len(t)-1
while b > a:
i = (b+a) // 2
print(t[a], t[i], t[b], sep=' - ')
if val == t[i]:
return True
if val > t[i]:
a = i
else:
b = i
return False
in_bisect(t, 'MAISON')
###Output
_____no_output_____ |
Clase_3_MD.ipynb | ###Markdown
**Arithmetic operators** Operators allow you to perform different calculation processes in any programming language. The most basic operators: 1. Addition 2. Subtraction 3. Multiplication 4. Division **Addition** The addition symbol (+) is used between the declared variables to be operated on.
###Code
print(10+100)
# Declaring the addition; both forms can also be used for subtraction
a=20
b=35
print(a+b)
###Output
55
|
Day 3/Python_3tut.ipynb | ###Markdown
**This notebook is an exercise in the [Python](https://www.kaggle.com/learn/python) course. You can reference the tutorial at [this link](https://www.kaggle.com/colinmorris/functions-and-getting-help).**--- Functions are powerful. Try writing some yourself.As before, don't forget to run the setup code below before jumping into question 1.
###Code
# SETUP. You don't need to worry for now about what this code does or how it works.
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex2 import *
print('Setup complete.')
###Output
_____no_output_____
###Markdown
1.Complete the body of the following function according to its docstring.HINT: Python has a built-in function `round`.
###Code
def round_to_two_places(num):
"""Return the given number rounded to two decimal places.
>>> round_to_two_places(3.14159)
3.14
"""
return round(num,2)
# Check your answer
q1.check()
# Uncomment the following for a hint
#q1.hint()
# Or uncomment the following to peek at the solution
q1.solution()
###Output
_____no_output_____
###Markdown
2.The help for `round` says that `ndigits` (the second argument) may be negative.What do you think will happen when it is? Try some examples in the following cell.
###Code
round(63773852573534,-1)
###Output
_____no_output_____
###Markdown
Can you think of a case where this would be useful? Once you're ready, run the code cell below to see the answer and to receive credit for completing the problem.
###Code
# Check your answer (Run this code cell to receive credit!)
q2.solution()
###Output
_____no_output_____
###Markdown
3.In the previous exercise, the candy-sharing friends Alice, Bob and Carol tried to split candies evenly. For the sake of their friendship, any candies left over would be smashed. For example, if they collectively bring home 91 candies, they'll take 30 each and smash 1.Below is a simple function that will calculate the number of candies to smash for *any* number of total candies.Modify it so that it optionally takes a second argument representing the number of friends the candies are being split between. If no second argument is provided, it should assume 3 friends, as before.Update the docstring to reflect this new behaviour.
###Code
def to_smash(total_candies,n=3):
"""Return the number of leftover candies that must be smashed after distributing
the given number of candies evenly between 3 friends.
>>> to_smash(91)
1
"""
return total_candies % n
# Check your answer
q3.check()
#q3.hint()
q3.solution()
###Output
_____no_output_____
###Markdown
4. (Optional)It may not be fun, but reading and understanding error messages will be an important part of your Python career.Each code cell below contains some commented buggy code. For each cell...1. Read the code and predict what you think will happen when it's run.2. Then uncomment the code and run it to see what happens. (**Tip**: In the kernel editor, you can highlight several lines and press `ctrl`+`/` to toggle commenting.)3. Fix the code (so that it accomplishes its intended purpose without throwing an exception)
###Code
round_to_two_places(9.9999)
x = -10
y = 5
# # Which of the two variables above has the smallest absolute value?
smallest_abs = round_to_two_places(abs(x))
def f(x):
y = abs(x)
    return y
print(f(5))
###Output
_____no_output_____ |
AirBnB_Project3.ipynb | ###Markdown
Section 1: Business Understanding * With calendar and listings data from Airbnb, we want to know which month has the highest and lowest occupancy in Seattle. * Which month has the lowest and highest prices for the listings? * What are the different factors/features influencing the listing prices? * Which areas in Seattle have the highest and lowest occupancy?
###Code
# Import all required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import seaborn as sns
%matplotlib inline
df_calendar = pd.read_csv("airbnb_data/calendar.csv")
df_listings = pd.read_csv("airbnb_data/listings.csv")
df_reviews = pd.read_csv("airbnb_data/reviews.csv")
pd.options.mode.chained_assignment = None
###Output
_____no_output_____
###Markdown
Section 2: Data Understanding Listings and Calendar have a one-to-many relationship Listings and Reviews also have a one-to-many relationship The Listings dataframe has 91 columns, and we need to extract only those which influence price - id, host_response_time, host_response_rate, accommodates, bathrooms, bedrooms, beds, price, weekly_price, monthly_price, cleaning_fee, extra_people, minimum_nights, review_scores_rating, and instant_bookable The datatype of all price columns needs to be changed to numeric - Calendar and Listings For our business questions we may not even require the Reviews dataframe
###Code
df_calendar.head()
df_calendar.info()
df_listings.head()
df_listings.info()
df_reviews.head()
df_reviews.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 84849 entries, 0 to 84848
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 listing_id 84849 non-null int64
1 id 84849 non-null int64
2 date 84849 non-null object
3 reviewer_id 84849 non-null int64
4 reviewer_name 84849 non-null object
5 comments 84831 non-null object
dtypes: int64(3), object(3)
memory usage: 3.9+ MB
###Markdown
Section 3: Prepare Data Cleanup calendar and listings data (details below)
###Code
# Cleanup the calendar data
"""
1. First convert the price to numeric (float) and Fill nulls with 0 as those listing are left blank.
We could also use mean which will give very different results as we inflate the price of the listings which
were vacant during the calendar days. We definitely do not want to drop the na records as it would not give
right answer for occupancy rate.
2. Convert date column from object to date
3. Convert listing_id to string so that it is not interfere as metric column
4. Add new columns for Year, Month and Year-Month for easy grouping of price
"""
replace_decimal = (lambda x:x[:-3].replace(',', '.') if type(x) is str else x)
replace_dollar = (lambda x:x.replace('$', '') if type(x) is str else x)
df_calendar['price'] = df_calendar.price.apply(replace_decimal)
df_calendar['price'] = df_calendar.price.apply(replace_dollar)
df_calendar['price'] = df_calendar['price'].astype(float)
df_calendar['price'].fillna(0, inplace=True)
df_calendar['date'] = pd.to_datetime(df_calendar['date'])
df_calendar['listing_id'] = df_calendar.listing_id.astype(str)
df_calendar['month'] = pd.DatetimeIndex(df_calendar['date']).month
df_calendar['year'] = pd.DatetimeIndex(df_calendar['date']).year
df_calendar['month_year'] = pd.to_datetime(df_calendar['date']).dt.to_period('M')
df_calendar.info()
df_calendar.head()
# Cleanup Listing Dataframe
"""
1. We only need columns from listing dataframe which have influence on price prediction, so extract
following columns from listing df into new df - id, host_response_time, host_response_rate, accommodates,
bathrooms, bedrooms, beds, price, weekly_price, monthly_price, cleaning_fee, extra_people,
minimum_nights, review_scores_rating, instant_bookable
2. Convert id (which is listing_id) to str
3. Convert all price columns to float, i.e., remove $ sign and any extra ,
4. Impute columns bathrooms, beds, bedrooms with mode value
5. Convert percentage to float - host_response_rate and review_scores_rating
"""
df_listings_sub = df_listings[['id', 'host_response_time', 'host_response_rate', 'accommodates', 'bathrooms',
'bedrooms', 'beds', 'price', 'weekly_price', 'monthly_price', 'cleaning_fee',
'extra_people', 'minimum_nights', 'review_scores_rating', 'instant_bookable', 'zipcode']]
df_listings_sub['id'] = df_listings['id'].astype(str)
"""
Lambda function to fill nan value mode value of aparticular column
Impute values for beds, bathrooms and bedrooms
"""
fill_mode = lambda col:col.fillna(col.mode()[0])
df_listings_sub[['beds', 'bathrooms', 'bedrooms']] = df_listings_sub[['beds', 'bathrooms', 'bedrooms']].apply(fill_mode, axis=0)
"""
Fill all nan price related records with 0 value as listings were empty
"""
df_listings_sub['weekly_price'] = df_listings_sub.weekly_price.apply(replace_decimal)
df_listings_sub['weekly_price'] = df_listings_sub.weekly_price.apply(replace_dollar)
df_listings_sub['weekly_price'] = df_listings_sub['weekly_price'].astype(float)
df_listings_sub['weekly_price'].fillna(0, inplace=True)
df_listings_sub['monthly_price'] = df_listings_sub.monthly_price.apply(replace_decimal)
df_listings_sub['monthly_price'] = df_listings_sub.monthly_price.apply(replace_dollar)
df_listings_sub['monthly_price'] = df_listings_sub['monthly_price'].astype(float)
df_listings_sub['monthly_price'].fillna(0, inplace=True)
df_listings_sub['price'] = df_listings_sub.price.apply(replace_decimal)
df_listings_sub['price'] = df_listings_sub.price.apply(replace_dollar)
df_listings_sub['price'] = df_listings_sub['price'].astype(float)
df_listings_sub['cleaning_fee'] = df_listings_sub.cleaning_fee.apply(replace_decimal)
df_listings_sub['cleaning_fee'] = df_listings_sub.cleaning_fee.apply(replace_dollar)
df_listings_sub['cleaning_fee'] = df_listings_sub['cleaning_fee'].astype(float)
df_listings_sub['cleaning_fee'].fillna(0, inplace=True)
df_listings_sub['extra_people'] = df_listings_sub.extra_people.apply(replace_decimal)
df_listings_sub['extra_people'] = df_listings_sub.extra_people.apply(replace_dollar)
df_listings_sub['extra_people'] = df_listings_sub['extra_people'].astype(float)
"""
Lambda function which receive a value checks if it's a string, replaces any % characters and convert it to Float
else return the same value
Input: x
Output: Float(x)/100 if String else x
"""
replace_percent = (lambda x:(float(x.replace('%', ''))/100.0) if type(x) is str else x)
df_listings_sub['host_response_rate'] = df_listings_sub.host_response_rate.apply(replace_percent)
"""
Lambda function which receive a float value and return a value between 0 and 1 (non-percentage)
Input: x
Output: x/100
"""
replace_review_per = (lambda x:(x)/100.0)
df_listings_sub['review_scores_rating'] = df_listings_sub['review_scores_rating'].apply(replace_review_per)
df_listings_sub.info()
df_listings_sub.head()
###Output
_____no_output_____
###Markdown
Section 4 & 5: Model Data and Results 1. 2016 Occupancy rate through out the year
###Code
"""
Analyze the 2016 Occupancy month over month
"""
plt.rcParams['figure.figsize'] = (12,6)
font = {'color': 'blue',
'weight': 'normal',
'size': 20,
}
base_color = sns.color_palette()[0]
df_calendar_2016 = df_calendar[df_calendar.year == 2016]
month = df_calendar_2016.month
sns.countplot(data = df_calendar, x = month, hue = 'available');
# set title for plot
plt.title('Occupancy during 2016', fontdict=font);
###Output
_____no_output_____
###Markdown
Occupancy is lowest during December, and highest during January 2. Average Price Per Month for year 2016
###Code
"""
Analyze the price over period of time
"""
sns.barplot(data = df_calendar_2016, x = month, y = 'price',color=base_color)
plt.ylabel('Average price')
plt.xlabel('Months')
plt.title('Average price per month', fontdict=font);
plt.axhline(df_calendar_2016.price.mean(), linestyle='--', color='red');
###Output
_____no_output_____
###Markdown
Price is consistently high between June and December. December prices are at their peak, and January is at its lowest 3. Correlation of different features with price
###Code
"""
Find the correlation of different features with price
"""
listing_corr = df_listings_sub.corr()
kot = listing_corr[listing_corr.apply(lambda x: abs(x)>=0)]
sns.heatmap(kot, annot = True, fmt = '.2f', cmap = 'Reds', center = 0)
plt.title('Features Correlation', fontdict=font);
plt.xticks(rotation = 15);
###Output
_____no_output_____
###Markdown
Strong Correlation with Price: accommodates, bathrooms, bedrooms, beds, and monthly_price 4. Response time for the hosts
###Code
def plot_historgram(df, column_name, base_color, plot_title):
"""Plot the historgram with passed parameters
Input:
df = dataframe
column_name = name of the column which goes as X-axis
base_color = Color of the histogram plot
plot_title = Title to given for the plot
"""
cat_order = df[column_name].value_counts().index
sns.countplot(data= df, x= column_name, color= base_color, order= cat_order)
plt.title(plot_title, fontdict= font)
"""
Analyze the host response time w.r.t. all the listings
"""
plot_historgram(df= df_listings_sub,
column_name= 'host_response_time',
base_color= base_color,
plot_title= 'The most host response time')
###Output
_____no_output_____
###Markdown
Most hosts respond within an hour of the request 5. Occupancy per Zip Code - Areas most in demand, to least demand
###Code
"""
Area within Seattle with highest and lowest occupancy
"""
plt.rcParams['figure.figsize'] = (20,6)
plot_historgram(df= df_listings_sub,
column_name= 'zipcode',
base_color= base_color,
plot_title= 'Occupancy per Zip Code in Seattle')
###Output
_____no_output_____
###Markdown
Price Prediction - Based on accommodates, bathrooms, beds and bedrooms
###Code
# Form the X (independent features) and y (dependent variable) dataframes
X = df_listings_sub[['accommodates', 'bathrooms', 'beds', 'bedrooms']]
y = df_listings_sub['price']
# Split train and test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Create model/Fit and Predict
lm_model = LinearRegression(normalize=True) # Instantiate
lm_model.fit(X_train, y_train) #Fit
#Predict and score the model - test set
y_train_preds = lm_model.predict(X_train)
print("The r-squared score for the model using only quantitative variables was {} on {} values."
.format(r2_score(y_train, y_train_preds), len(y_train)))
#Predict and score the model - test set
y_test_preds = lm_model.predict(X_test)
print("The r-squared score for the model using only quantitative variables was {} on {} values."
.format(r2_score(y_test, y_test_preds), len(y_test)))
coef_df = pd.DataFrame()
coef_df['feature'] = X_train.columns
coef_df['coef'] = lm_model.coef_
coef_df['abs_coef'] = np.abs(lm_model.coef_)
coef_df = coef_df.sort_values(by=['abs_coef'], ascending=False)
print('Rank features by their impact on the price: \n', coef_df, '\n')
plt.figure(figsize = (15,5))
plt.bar(coef_df['feature'], coef_df['abs_coef'])
plt.xlabel('features')
plt.xticks(coef_df['feature'], rotation = 90)
plt.ylabel('abs_coef')
plt.title('Rank features by their impact on the price')
plt.show()
###Output
Rank features by their impact on the price:
feature coef abs_coef
1 bathrooms 29.519086 29.519086
3 bedrooms 19.707436 19.707436
0 accommodates 19.351816 19.351816
2 beds -1.747924 1.747924
###Markdown
The minimal difference in r-squared scores between the training and test data shows that the model is not overfitting.
###Code
!!jupyter nbconvert *.ipynb
###Output
_____no_output_____ |
notebooks/Test Gabor pyramid.ipynb | ###Markdown
i
###Code
variance_baseline = total_labels_squared / n - (total_labels / n / labels.shape[2]) ** 2
variance_baseline
variance_after = total_labels_sse / n
r2 = 1 - variance_after / variance_baseline
plt.hist(r2.cpu().squeeze().numpy(), 25)
plt.xlabel('Validation R2')
plt.title('Pyramid model with space')
labels.shape
r2
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
fig = plt.figure(figsize=(6, 6))
ax = plt.gca()
for i in range(trainset.total_electrodes):
ellipse = Ellipse((net.wx[i].item(), net.wy[i].item()),
width=2.35*(.1 + abs(net.wsigmax[i].item())),
height=2.35*(.1 + abs(net.wsigmay[i].item())),
facecolor='none',
edgecolor=[0, 0, 0, .5]
)
ax.add_patch(ellipse)
ax.set_xlim((-.1, 1.1))
ax.set_ylim((1.1, -0.1))
r2[~r2.isnan()].mean()
"""
import wandb
import numpy as np
wandb.init(project="crcns-test", config={
"learning_rate": 0.01,
"architecture": "pyramid-2d",
})
config = wandb.config
#r2 = r2.cpu().detach().numpy()
r2 = r2[~np.isnan(r2)]
wandb.log({"valr2": r2})
"""
###Output
_____no_output_____ |
notebooks/perturbation_temp_scaling_liang2018/experiments_mnist10_cnn.ipynb | ###Markdown
Create network architecture
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
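        # At this point the feature map is 20 channels of 4x4 (for 28x28 MNIST
        # input), which is why the flatten below reshapes to 320 features.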
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
# return F.log_softmax(x, dim=1)
return x
###Output
_____no_output_____
###Markdown
Training and Testing functions
###Code
from novelty.utils import Progbar
def train(model, device, train_loader, optimizer, epoch):
progbar = Progbar(target=len(train_loader.dataset))
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = F.log_softmax(model(data), dim=1)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
progbar.add(len(data), [("loss", loss.item())])
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = F.log_softmax(model(data), dim=1)
# sum up batch loss
test_loss += F.nll_loss(output, target, size_average=False).item()
# get the index of the max log-probability
pred = output.max(1, keepdim=True)[1]
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_acc = 100. * correct / len(test_loader.dataset)
print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset), test_acc))
return test_loss, test_acc
###Output
_____no_output_____
###Markdown
Initialize model and load MNIST
###Code
from novelty.utils import DATA_DIR
from src.wide_resnet import Wide_ResNet
torch.manual_seed(SEED)
use_cuda = not NO_CUDA and torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
# Dataset transformation
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(CHANNEL_MEANS, CHANNEL_STDS),
])
# Load training and test sets
kwargs = {'num_workers': 2, 'pin_memory': True} if use_cuda else {}
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(os.path.join(DATA_DIR, 'mnist'), train=True, transform=transform, download=True),
batch_size=BATCH_SIZE, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(os.path.join(DATA_DIR, 'mnist'), train=False, transform=transform, download=True),
batch_size=BATCH_SIZE, shuffle=False, **kwargs)
# Create model instance
model = Net().to(device)
# Initialize optimizer
optimizer = optim.Adam(model.parameters(), lr=LR)
# optimizer = optim.SGD(model.parameters(), lr=LR, momentum=MOMENTUM)
###Output
_____no_output_____
###Markdown
Optimization loop
###Code
if os.path.exists(MODEL_PATH):
# load previously trained model:
model.load_state_dict(torch.load(MODEL_PATH))
else:
# Training loop
for epoch in range(1, EPOCHS + 1):
print("Epoch:", epoch)
train(model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
# save the model
torch.save(model.state_dict(), MODEL_PATH)
###Output
Epoch: 1
60000/60000 [==============================] - 2s 38us/step - loss: 0.5380
Test set: Average loss: 0.1088, Accuracy: 9674/10000 (97%)
Epoch: 2
60000/60000 [==============================] - 2s 38us/step - loss: 0.3419
Test set: Average loss: 0.0942, Accuracy: 9730/10000 (97%)
Epoch: 3
60000/60000 [==============================] - 2s 38us/step - loss: 0.3185
Test set: Average loss: 0.1026, Accuracy: 9700/10000 (97%)
Epoch: 4
60000/60000 [==============================] - 2s 37us/step - loss: 0.3067
Test set: Average loss: 0.0933, Accuracy: 9713/10000 (97%)
Epoch: 5
60000/60000 [==============================] - 2s 38us/step - loss: 0.3038
Test set: Average loss: 0.0895, Accuracy: 9722/10000 (97%)
Epoch: 6
60000/60000 [==============================] - 2s 38us/step - loss: 0.3095
Test set: Average loss: 0.0944, Accuracy: 9704/10000 (97%)
Epoch: 7
60000/60000 [==============================] - 2s 38us/step - loss: 0.3002
Test set: Average loss: 0.0904, Accuracy: 9723/10000 (97%)
Epoch: 8
60000/60000 [==============================] - 2s 38us/step - loss: 0.2937
Test set: Average loss: 0.0949, Accuracy: 9716/10000 (97%)
Epoch: 9
60000/60000 [==============================] - 2s 39us/step - loss: 0.2972
Test set: Average loss: 0.0920, Accuracy: 9739/10000 (97%)
Epoch: 10
60000/60000 [==============================] - 2s 38us/step - loss: 0.2902
Test set: Average loss: 0.0870, Accuracy: 9740/10000 (97%)
Epoch: 11
60000/60000 [==============================] - 2s 38us/step - loss: 0.2932
Test set: Average loss: 0.0830, Accuracy: 9774/10000 (98%)
Epoch: 12
60000/60000 [==============================] - 2s 39us/step - loss: 0.2877
Test set: Average loss: 0.0886, Accuracy: 9735/10000 (97%)
Epoch: 13
60000/60000 [==============================] - 2s 39us/step - loss: 0.2794
Test set: Average loss: 0.0903, Accuracy: 9720/10000 (97%)
Epoch: 14
43392/60000 [====================>.........] - ETA: 0s - loss: 0.2906
###Markdown
ODIN prediction functions
###Code
from torch.autograd import Variable
def predict(model, data, device):
model.eval()
data = data.to(device)
outputs = model(data)
outputs = outputs - outputs.max(1)[0].unsqueeze(1) # For stability
return F.softmax(outputs, dim=1)
def predict_temp(model, data, device, temp=1000.):
model.eval()
data = data.to(device)
outputs = model(data)
outputs /= temp
outputs = outputs - outputs.max(1)[0].unsqueeze(1) # For stability
return F.softmax(outputs, dim=1)
def predict_novelty(model, data, device, temp=1000., noiseMagnitude=0.0012):
model.eval()
# Create a variable so we can get the gradients on the input
inputs = Variable(data.to(device), requires_grad=True)
# Get the predicted labels
outputs = model(inputs)
outputs = outputs / temp
outputs = F.log_softmax(outputs, dim=1)
# Calculate the perturbation to add to the input
maxIndexTemp = torch.argmax(outputs, dim=1)
labels = Variable(maxIndexTemp).to(device)
loss = F.nll_loss(outputs, labels)
loss.backward()
    # Take the sign of the gradient: binarize to {0, 1}, then rescale to {-1, +1}
gradient = torch.ge(inputs.grad.data, 0)
gradient = (gradient.float() - 0.5) * 2
# Normalize the gradient to the same space of image
for channel, (mean, std) in enumerate(zip(CHANNEL_MEANS, CHANNEL_STDS)):
gradient[0][channel] = (gradient[0][channel] - mean) / std
# Add small perturbations to image
# TODO, this is from the released code, but disagrees with paper I think
tempInputs = torch.add(inputs.data, -noiseMagnitude, gradient)
# Get new outputs after perturbations
outputs = model(Variable(tempInputs))
outputs = outputs / temp
outputs = outputs - outputs.max(1)[0].unsqueeze(1) # For stability
outputs = F.softmax(outputs, dim=1)
return outputs
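
# Illustrative helper (not part of the original notebook): in both the baseline
# and ODIN settings, an input is flagged as in-distribution when its maximum
# softmax score exceeds a chosen threshold delta.
def is_in_distribution(softmax_outputs, delta=0.5):
    max_scores, _ = torch.max(softmax_outputs, dim=1)
    return max_scores >= delta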
###Output
_____no_output_____
###Markdown
Evaluate method on outlier datasets
###Code
def get_max_model_outputs(data_loader, device):
"""Get the max softmax output from the model in a Python array.
data_loader: object
A pytorch dataloader with the data you want to calculate values for.
device: object
The CUDA device handle.
"""
result = []
for data, target in data_loader:
# Using regular model
p = predict(model, data, device)
max_val, label = torch.max(p, dim=1)
# Convert torch tensors to python list
max_val = list(max_val.cpu().detach().numpy())
result += max_val
return result
def get_max_odin_outputs(data_loader, device, temp=1000., noiseMagnitude=0.0012):
"""Convenience function to get the max softmax values from the ODIN model in a Python array.
data_loader: object
A pytorch dataloader with the data you want to calculate values for.
device: object
The CUDA device handle.
temp: float, optional (default=1000.)
The temp the model should use to do temperature scaling on the softmax outputs.
noiseMagnitude: float, optional (default=0.0012)
The epsilon value used to scale the input images according to the ODIN paper.
"""
result = []
for data, target in data_loader:
# Using ODIN model
p = predict_novelty(model, data, device, temp=temp, noiseMagnitude=noiseMagnitude)
max_val, label = torch.max(p, dim=1)
# Convert torch tensors to python list
max_val = list(max_val.cpu().detach().numpy())
result += max_val
return result
import pandas as pd
df = pd.DataFrame(columns=['auroc', 'aupr_in', 'aupr_out', 'fpr_at_95_tpr', 'detection_error'],
index=['letters', 'rot90', 'gaussian', 'uniform', 'not_mnist'])
df_odin = pd.DataFrame(columns=['auroc', 'aupr_in', 'aupr_out', 'fpr_at_95_tpr', 'detection_error'],
index=['letters', 'rot90', 'gaussian', 'uniform', 'not_mnist'])
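# Note: rows for datasets not listed in the index above (e.g. 'fashion') are
# appended later via .loc, which simply adds new rows to the frames.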
###Output
_____no_output_____
###Markdown
Process Inliers
###Code
num_inliers = len(test_loader.dataset)
# Get predictions on in-distribution images
mnist_model_maximums = get_max_model_outputs(test_loader, device)
mnist_odin_maximums = get_max_odin_outputs(test_loader, device, temp=TEMP, noiseMagnitude=NOISE_MAGNITUDE)
###Output
_____no_output_____
###Markdown
Fashion MNIST
###Code
directory = os.path.join(DATA_DIR, 'fashion_mnist')
# Dataset transformation
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(CHANNEL_MEANS, CHANNEL_STDS),
])
# Load the dataset
kwargs = {'num_workers': 2, 'pin_memory': True} if use_cuda else {}
fashion_loader = torch.utils.data.DataLoader(
datasets.FashionMNIST(directory, train=False, transform=transform, download=True),
batch_size=BATCH_SIZE, shuffle=True, **kwargs)
num_fashion = len(fashion_loader.dataset)
# Get predictions on in-distribution images
fashion_model_maximums = get_max_model_outputs(fashion_loader, device)
fashion_odin_maximums = get_max_odin_outputs(fashion_loader, device, temp=TEMP, noiseMagnitude=NOISE_MAGNITUDE)
labels = [1] * num_inliers + [0] * num_fashion
predictions = mnist_model_maximums + fashion_model_maximums
predictions_odin = mnist_odin_maximums + fashion_odin_maximums
stats = get_summary_statistics(predictions, labels)
df.loc['fashion'] = pd.Series(stats)
stats_odin = get_summary_statistics(predictions_odin, labels)
df_odin.loc['fashion'] = pd.Series(stats_odin)
if PLOT_CHARTS:
plot_roc(predictions, labels, title="Softmax Thresholding ROC Curve")
plot_roc(predictions_odin, labels, title="ODIN ROC Curve")
# plot_prc(predictions, labels, title="Softmax Thresholding PRC Curve")
# plot_prc(predictions_odin, labels, title="ODIN PRC Curve")
###Output
_____no_output_____
###Markdown
EMNIST Letters
###Code
directory = os.path.join(DATA_DIR, 'emnist')
# Dataset transformation
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(CHANNEL_MEANS, CHANNEL_STDS),
])
# Load the dataset
kwargs = {'num_workers': 2, 'pin_memory': True} if use_cuda else {}
emnist_loader = torch.utils.data.DataLoader(
datasets.EMNIST(directory, "letters", train=False, transform=transform, download=True),
batch_size=BATCH_SIZE, shuffle=True, **kwargs)
num_emnist = len(emnist_loader.dataset)
# Get predictions on in-distribution images
emnist_model_maximums = get_max_model_outputs(emnist_loader, device)
emnist_odin_maximums = get_max_odin_outputs(emnist_loader, device, temp=TEMP, noiseMagnitude=NOISE_MAGNITUDE)
labels = [1] * num_inliers + [0] * num_emnist
predictions = mnist_model_maximums + emnist_model_maximums
predictions_odin = mnist_odin_maximums + emnist_odin_maximums
stats = get_summary_statistics(predictions, labels)
df.loc['letters'] = pd.Series(stats)
stats_odin = get_summary_statistics(predictions_odin, labels)
df_odin.loc['letters'] = pd.Series(stats_odin)
if PLOT_CHARTS:
plot_roc(predictions, labels, title="Softmax Thresholding ROC Curve")
plot_roc(predictions_odin, labels, title="ODIN ROC Curve")
# plot_prc(predictions, labels, title="Softmax Thresholding PRC Curve")
# plot_prc(predictions_odin, labels, title="ODIN PRC Curve")
###Output
_____no_output_____
###Markdown
Not MNIST
###Code
directory = os.path.join(DATA_DIR, 'notmnist/notMNIST_small')
# Dataset transformation
transform = transforms.Compose([
transforms.Grayscale(),
transforms.ToTensor(),
transforms.Normalize(CHANNEL_MEANS, CHANNEL_STDS),
])
# Load the dataset
kwargs = {'num_workers': 2, 'pin_memory': True} if use_cuda else {}
notmnist_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(directory, transform=transform),
batch_size=BATCH_SIZE, shuffle=True, **kwargs)
num_notmnist = len(notmnist_loader.dataset)
# Get predictions on in-distribution images
notmnist_model_maximums = get_max_model_outputs(notmnist_loader, device)
notmnist_odin_maximums = get_max_odin_outputs(notmnist_loader, device, temp=TEMP, noiseMagnitude=NOISE_MAGNITUDE)
labels = [1] * num_inliers + [0] * num_notmnist
predictions = mnist_model_maximums + notmnist_model_maximums
predictions_odin = mnist_odin_maximums + notmnist_odin_maximums
stats = get_summary_statistics(predictions, labels)
df.loc['not_mnist'] = pd.Series(stats)
stats_odin = get_summary_statistics(predictions_odin, labels)
df_odin.loc['not_mnist'] = pd.Series(stats_odin)
if PLOT_CHARTS:
plot_roc(predictions, labels, title="Softmax Thresholding ROC Curve")
plot_roc(predictions_odin, labels, title="ODIN ROC Curve")
# plot_prc(predictions, labels, title="Softmax Thresholding PRC Curve")
# plot_prc(predictions_odin, labels, title="ODIN PRC Curve")
###Output
_____no_output_____
###Markdown
Rotated 90 MNIST
###Code
directory = os.path.join(DATA_DIR, 'mnist')
# Dataset transformation
transform = transforms.Compose([
transforms.Lambda(lambda image: image.rotate(90)),
transforms.ToTensor(),
transforms.Normalize(CHANNEL_MEANS, CHANNEL_STDS),
])
# Load the dataset
kwargs = {'num_workers': 2, 'pin_memory': True} if use_cuda else {}
rot90_loader = torch.utils.data.DataLoader(
datasets.MNIST(directory, train=False, transform=transform, download=True),
batch_size=BATCH_SIZE, shuffle=True, **kwargs)
num_rot90 = len(rot90_loader.dataset)
# Get predictions on in-distribution images
rot90_model_maximums = get_max_model_outputs(rot90_loader, device)
rot90_odin_maximums = get_max_odin_outputs(rot90_loader, device, temp=TEMP, noiseMagnitude=NOISE_MAGNITUDE)
labels = [1] * num_inliers + [0] * num_rot90
predictions = mnist_model_maximums + rot90_model_maximums
predictions_odin = mnist_odin_maximums + rot90_odin_maximums
stats = get_summary_statistics(predictions, labels)
df.loc['rot90'] = pd.Series(stats)
stats_odin = get_summary_statistics(predictions_odin, labels)
df_odin.loc['rot90'] = pd.Series(stats_odin)
if PLOT_CHARTS:
plot_roc(predictions, labels, title="Softmax Thresholding ROC Curve")
plot_roc(predictions_odin, labels, title="ODIN ROC Curve")
# plot_prc(predictions, labels, title="Softmax Thresholding PRC Curve")
# plot_prc(predictions_odin, labels, title="ODIN PRC Curve")
###Output
_____no_output_____
###Markdown
Gaussian Noise Dataset
###Code
from novelty.utils.datasets import GaussianNoiseDataset
gaussian_transform = transforms.Compose([
#TODO clip to [0,1] range
transforms.ToTensor()
])
kwargs = {'num_workers': 2, 'pin_memory': True} if use_cuda else {}
gaussian_loader = torch.utils.data.DataLoader(
GaussianNoiseDataset((10000, 28, 28, 1), mean=0., std=1., transform=gaussian_transform),
batch_size=BATCH_SIZE, shuffle=True, **kwargs)
num_gaussian = len(gaussian_loader.dataset)
# Get predictions on in-distribution images
gaussian_model_maximums = get_max_model_outputs(gaussian_loader, device)
gaussian_odin_maximums = get_max_odin_outputs(
gaussian_loader, device, temp=TEMP, noiseMagnitude=NOISE_MAGNITUDE)
labels = [1] * num_inliers + [0] * num_gaussian
predictions = mnist_model_maximums + gaussian_model_maximums
predictions_odin = mnist_odin_maximums + gaussian_odin_maximums
stats = get_summary_statistics(predictions, labels)
df.loc['gaussian'] = pd.Series(stats)
stats_odin = get_summary_statistics(predictions_odin, labels)
df_odin.loc['gaussian'] = pd.Series(stats_odin)
if PLOT_CHARTS:
plot_roc(predictions, labels, title="Softmax Thresholding ROC Curve")
plot_roc(predictions_odin, labels, title="ODIN ROC Curve")
# plot_prc(predictions, labels, title="Softmax Thresholding PRC Curve")
# plot_prc(predictions_odin, labels, title="ODIN PRC Curve")
###Output
_____no_output_____
###Markdown
Uniform Noise Dataset
###Code
from novelty.utils.datasets import UniformNoiseDataset
import math
kwargs = {'num_workers': 2, 'pin_memory': True} if use_cuda else {}
uniform_loader = torch.utils.data.DataLoader(
UniformNoiseDataset((10000, 28, 28, 1), low=-math.sqrt(3.), high=math.sqrt(3.), transform=transforms.ToTensor()),
batch_size=BATCH_SIZE, shuffle=True, **kwargs)
num_uniform = len(uniform_loader.dataset)
# Get predictions on in-distribution images
uniform_model_maximums = get_max_model_outputs(uniform_loader, device)
uniform_odin_maximums = get_max_odin_outputs(
uniform_loader, device, temp=TEMP, noiseMagnitude=NOISE_MAGNITUDE)
labels = [1] * num_inliers + [0] * num_uniform
predictions = mnist_model_maximums + uniform_model_maximums
predictions_odin = mnist_odin_maximums + uniform_odin_maximums
stats = get_summary_statistics(predictions, labels)
df.loc['uniform'] = pd.Series(stats)
stats_odin = get_summary_statistics(predictions_odin, labels)
df_odin.loc['uniform'] = pd.Series(stats_odin)
if PLOT_CHARTS:
plot_roc(predictions, labels, title="Softmax Thresholding ROC Curve")
plot_roc(predictions_odin, labels, title="ODIN ROC Curve")
# plot_prc(predictions, labels, title="Softmax Thresholding PRC Curve")
# plot_prc(predictions_odin, labels, title="ODIN PRC Curve")
df.to_pickle('./results/mnist10_cnn_liang2018.pkl')
df_odin.to_pickle('./results/mnist10_cnn_odin_liang2018.pkl')
df
df_odin
###Output
_____no_output_____ |
Final_year_project_compiling_h5_model_.ipynb | ###Markdown
For the operation below, I am taking the dataset from Google Drive. Please make sure that you have the Dataset in your drive; if you don't have it, contact me. Run the cell below to get authorization.
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
[Click](https://datascience.stackexchange.com/questions/29480/uploading-images-folder-from-my-system-into-google-colab) to see how I arrived at this.
###Code
EPOCHS = 25
INIT_LR = 1e-3 #https://towardsdatascience.com/learning-rate-schedules-and-adaptive-learning-rate-methods-for-deep-learning-2c8f433990
BS = 32
default_image_size = tuple((256, 256))
image_size = 0
directory_root = '/content/drive/MyDrive/PATH_TO_OUTPUT'
width=256
height=256
depth=3
!unzip -uq "/content/drive/MyDrive/Final Year Project/Dataset.zip" -d "/content/drive/My Drive/PATH_TO_OUTPUT"
#Function to convert image into array
#
def convert_image_to_array(image_dir):
try:
image = cv2.imread(image_dir)
if image is not None :
image = cv2.resize(image, default_image_size)
return img_to_array(image)
else :
return np.array([])
except Exception as e:
print(f"Error : {e}")
return None
#Below code is for loading the images
#Make sure to know the directory root
# This is the most important cell. Different engineers implement this step in different ways.
image_list, label_list = [], []
try:
print("[INFO] Loading images ...")
root_dir = listdir(directory_root)
for directory in root_dir :
# remove .DS_Store from list
if directory == ".DS_Store" :
root_dir.remove(directory)
for plant_folder in root_dir :
plant_disease_folder_list = listdir(f'{directory_root}/{plant_folder}')
for disease_folder in plant_disease_folder_list :
# remove .DS_Store from list
if disease_folder == ".DS_Store" :
plant_disease_folder_list.remove(disease_folder)
for plant_disease_folder in plant_disease_folder_list:
print(f'[INFO] Processing {plant_disease_folder} ...')
plant_disease_image_list = listdir(f'{directory_root}/{plant_folder}/{plant_disease_folder}/')
for single_plant_disease_image in plant_disease_image_list :
if single_plant_disease_image == ".DS_Store" :
plant_disease_image_list.remove(single_plant_disease_image)
for image in plant_disease_image_list[:200]:
image_directory = f'{directory_root}/{plant_folder}/{plant_disease_folder}/{image}'
if image_directory.endswith(".jpg") == True or image_directory.endswith(".JPG") == True:
image_list.append(convert_image_to_array(image_directory))
label_list.append(plant_disease_folder)
print("[INFO] Image loading completed")
except Exception as e:
print(f"Error : {e}")
print(label_list)
image_size = len(image_list)
label_binarizer = LabelBinarizer()  # one-hot encode the class labels
image_labels = label_binarizer.fit_transform(label_list)
pickle.dump(label_binarizer,open('label_transform.pkl', 'wb')) #not necessary
n_classes = len(label_binarizer.classes_) #
print(label_binarizer.classes_)
np_image_list = np.array(image_list, dtype=np.float16) / 225.0 # customize and check as per discussion (note: 225.0 is also used at prediction time, so the scaling stays consistent)
###Output
['Tomato_Bacterial_spot' 'Tomato_Early_blight'
'Tomato__Tomato_mosaic_virus' 'Tomato_healthy']
###Markdown
https://machinelearningmastery.com/train-test-split-for-evaluating-machine-learning-algorithms/
###Code
print("[INFO] Spliting data to train, test")
x_train, x_test, y_train, y_test = train_test_split(np_image_list, image_labels, test_size=0.2, random_state = 42)
aug = ImageDataGenerator(
rotation_range=25, width_shift_range=0.1,
height_shift_range=0.1, shear_range=0.2,
zoom_range=0.2,horizontal_flip=True,
fill_mode="nearest")
model = Sequential()
inputShape = (height, width, depth)
chanDim = -1
if K.image_data_format() == "channels_first":
inputShape = (depth, height, width)
chanDim = 1
model.add(Conv2D(32, (3, 3), padding="same",input_shape=inputShape))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(Conv2D(64, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))#Dropout is a technique where randomly selected neurons are ignored during training. They are โdropped-outโ randomly.
model.add(Conv2D(128, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(Conv2D(128, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(n_classes))
model.add(Activation("softmax"))
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
# distribution
model.compile(loss="binary_crossentropy", optimizer=opt,metrics=["accuracy"])
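# Note (assumption): with 4 one-hot encoded classes, 'categorical_crossentropy'
# would be the more conventional loss; 'binary_crossentropy' is kept here unchanged.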
# train the network
print("[INFO] training network...")
history = model.fit_generator(
aug.flow(x_train, y_train, batch_size=BS),
validation_data=(x_test, y_test),
steps_per_epoch=len(x_train) // BS,
epochs=EPOCHS, verbose=1
)
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
#Train and validation accuracy
plt.plot(epochs, acc, 'b', label='Training accurarcy')
plt.plot(epochs, val_acc, 'r', label='Validation accurarcy')
plt.title('Training and Validation accurarcy')
plt.legend()
plt.figure()
#Train and validation loss
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and Validation loss')
plt.legend()
plt.show()
print("[INFO] Calculating model accuracy")
scores = model.evaluate(x_test, y_test)
print(f"Test Accuracy: {scores[1]*100}")
model.save("model.h5")
###Output
_____no_output_____
###Markdown
[Click](https://intellipaat.com/community/9487/how-to-predict-input-image-using-trained-model-in-keras) to learn how to use a pre-trained model to predict new images.
###Code
from google.colab import files
from IPython import display
uploaded = files.upload()
from google.colab import files
from IPython import display
display.Image("00bce074-967b-4d50-967a-31fdaa35e688___RS_HL 0223.JPG",
width=1000)
from keras.models import load_model
model = load_model('model.h5')
model.compile(loss='binary_crossentropy',
optimizer=opt,
metrics=['accuracy'])
im=convert_image_to_array("00bce074-967b-4d50-967a-31fdaa35e688___RS_HL 0223.JPG")
np_image_li = np.array(im, dtype=np.float16) / 225.0
npp_image = np.expand_dims(np_image_li, axis=0)
classes = model.predict_classes(npp_image)
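# Note: predict_classes returns the index of the most likely class, not a probability.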
print(classes)
itemindex = np.where(classes==np.max(classes))
print(itemindex)
print("probability:"+str(np.max(classes))+"\n"+label_binarizer.classes_[itemindex[0][0]])
itemindex = np.where(classes==np.max(classes))
print("probability:"+str(np.max(model.predict(npp_image))))
print(label_binarizer.classes_[classes])
print(label_binarizer.classes_[classes])
###Output
['Tomato_healthy']
|
notebooks/response_figures.ipynb | ###Markdown
Response Figure 1
Since this manuscript's main advance regarding variation in paternal age effect over the Rahbari, et al. result is greater statistical power, more robust statistical analyses of this pattern would strengthen the paper. Figure 3 presents a commendable amount of raw data in a fairly clear way, yet the authors use only a simple ANOVA to test whether different families have different dependencies on paternal age. The supplement claims that this result cannot be an artifact of low sequencing coverage because regions covered by <12 reads are excluded from the denominator, but there still might be subtle differences in variant discovery power between e.g. regions covered by 12 reads and regions covered by 30 reads. To hedge against this, the authors can define the "callable genome" continuously (point 1 under "other questions and suggestions") or they can check whether mean read coverage appears to covary with mutation rates across individuals after filtering away the regions covered by <12 reads.
###Code
library(ggplot2)
library(cowplot)
# read in second- and third-generation DNMs (now with a column for mean read depth)
gen2 = read.csv("../data/second_gen.dnms.summary.csv")
gen3 = read.csv("../data/third_gen.dnms.summary.csv")
gen2$generation = "2nd"
gen3$generation = "3rd"
# combine the second- and third-generation dataframes, and calculate
# autosomal mutation rates for all samples
combined = rbind(gen2, gen3)
combined$autosomal_mutation_rate = combined$autosomal_dnms / (combined$autosomal_callable_fraction * 2)
# generate the figure
p <- ggplot(combined, aes(x=autosomal_mutation_rate, y=mean_depth)) +
facet_wrap(~generation) +
geom_smooth(method="lm", aes(col=generation)) +
geom_point(aes(fill=generation), pch=21, col="white", size=2) +
xlab("Autosomal mutation rate") +
ylab("Mean autosomal read depth\n(sites >= 12 reads)") +
theme(axis.text.x = element_text(angle=45, vjust=0.5)) +
coord_fixed(ratio=9.5e-10)
# fit models predicting mean autosomal read depth as a function
# of the autosomal mutation rate
m_second_gen = lm(mean_depth ~ autosomal_mutation_rate, data=subset(combined, generation=="2nd"))
m_third_gen = lm(mean_depth ~ autosomal_mutation_rate, data=subset(combined, generation=="3rd"))
# test whether read depth and mutation rates are correlated
summary(m_second_gen)
summary(m_third_gen)
p
###Output
_____no_output_____
###Markdown
Response Figure 2
Another concern about paternal age effects is the extent to which outlier offspring may be driving the apparent rate variation across families. If the authors were to randomly sample half of the children from each family and run the analysis again, how much is the paternal age effect rank preserved? Alternatively, how much is the family rank ordering preserved if mutations are only called from a subset of the chromosomes?
###Code
library(MASS)
library(ggplot2)
library(cowplot)
library(ggridges)
library(tidyr)
library(reshape2)
library(dplyr)
library(viridis)
set.seed(12345)
# read in third-generation DNMs
gen3 = read.csv("../data/third_gen.dnms.summary.csv")
# number of subsampling experiments to perform
n_sims = 100
# empty dataframe to store ranks from subsampling experiments
sub_df = data.frame(matrix(NA, nrow=n_sims*40, ncol=2))
# empty dataframe to store original ranks
orig_df = data.frame()
fam_list = unique(gen3$family_id)
# this function accepts a dataframe with information
# about each family's slope, and sorts it in order of increasing
# slope
sort_and_index <- function(df) {
df = df[order(df$slope),]
df$facet_order = factor(df$family_id, levels = unique(df$family_id))
df = df[!duplicated(df[,c('family_id')]),]
df$order = rev(c(1:nrow(df)))
return(df)
}
# loop over every family, fit a regression (using all of the
# third-generation samples in the family), and store the results
# in `orig_df`
for (sp in split(gen3, as.factor(gen3$family_id))) {
m = glm(autosomal_dnms ~ dad_age, data=sp, family=poisson(link="identity"))
s = summary(m)
sp$slope = s$coefficients[[2]]
sp$intercept = s$coefficients[[1]]
orig_df = rbind(orig_df, sp)
}
# sort and rank results from the full third-generation dataset
orig_df = sort_and_index(orig_df)
orig_rank_order = rev(c(as.character(orig_df$family_id)))
# for the specified number of simulations, subsample each
# family's children, fit a regression, and store the results in
# `sub_df`
for (iter in 1:n_sims) {
new_gen3 = data.frame()
for (sp in split(gen3, as.factor(gen3$family_id))) {
# sub sample the family's children (here, 75% of children)
sub_sampled = sp[sample(1:nrow(sp), nrow(sp) * 0.75, replace=FALSE),]
m = glm(autosomal_dnms ~ dad_age, data=sub_sampled, family=poisson(link="identity"))
s = summary(m)
sub_sampled$slope = s$coefficients[[2]]
new_gen3 = rbind(new_gen3, sub_sampled)
}
# sort and rank results
sorted_df = sort_and_index(new_gen3)
# add the results to the main `sub_df`
for (f in 1:length(fam_list)) {
# get the index (rank) of each family ID in `sorted_df`,
# which contains the new rank of each family ID after subsampling
            # use a unique row per (simulation, family) pair; `iter * f` would collide
            row = (iter - 1) * length(fam_list) + f
            sub_df[row,2] = which(fam_list[[f]] == sorted_df$family_id)
            sub_df[row,1] = as.character(fam_list[[f]])
}
}
colnames(sub_df) = c("family_id", "rank")
# make sure `family_id` is treated as a factor
sub_df$family_id <- as.factor(sub_df$family_id)
# create a "ridges" or "joyplot" summarizing the
# distribution of ranks for each family across
# simulations, ordered by their original ranks
p <- ggplot(sub_df, aes(x = rank, y = family_id, fill=..x..)) +
geom_density_ridges_gradient(scale = 3, rel_min_height = 0.01, quantile_lines = TRUE, quantiles = 2) +
scale_fill_gradient(name = "Rank", low="dodgerblue", high="firebrick") +
scale_y_discrete(limits=orig_rank_order) +
xlab("Distribution of family ranks\nfollowing 100 sampling trials") +
ylab("Family ID (original rankings)") +
theme(axis.text.x = element_text(size=12)) +
theme(axis.text.y = element_text(size=10))
p
###Output
_____no_output_____
###Markdown
Response Figure 3
A factor regarding paternal age effects that is only briefly alluded to late in the paper is the difference in intercepts between families (Figure 3c). Do either the intercept or slope vary with the number of F2 children? Is there any (anti-)correlation between slope and intercept? It would seem odd if the intercept strongly impacts the slope since a low per-year rate probably should not correlate with a high initial rate at younger ages. Is there anti-correlation between slopes and intercepts in CEPH families?
###Code
# plot the correlation between family size and intercept
p <- ggplot(orig_df, aes(x=slope, y=intercept)) +
geom_smooth(method='lm', color='firebrick', alpha=0.25) +
geom_point(pch=21, fill="black", col="white", size=3, lwd=0.5) +
ylab('Initial mutation count (intercept) in family') +
xlab('Slope in family') +
theme(axis.text.x=element_text(size=16)) +
theme(axis.text.y=element_text(size=16)) +
theme(text=element_text(size=16))
m = lm(intercept ~ slope, data=orig_df)
summary(m)
p
###Output
_____no_output_____
###Markdown
Anti-correlation between slopes and intercepts is expected for randomly distributed DNM counts...
###Code
library(MASS)
library(ggplot2)
library(cowplot)
# set seed so that example is reproducible
set.seed(1234)
gen3 = read.csv("../data/third_gen.dnms.summary.csv")
library(dplyr)
df1 = gen3[,c("dad_age","family_id")]
FUN <- function(x) rpois(lambda=x * 1.72 + 15, n=1)
# randomly assign DNM counts to each third-generation sample based
# on their paternal age at birth
df1$dnms = lapply(df1$dad_age, FUN)
df1$dnms = as.numeric(df1$dnms)
colnames(df1) = c('age', 'fam_id', 'dnms')
# get the slopes for each family and add
# to a new dataframe
new_df1 = data.frame()
for (sp in split(df1, df1$fam_id)) {
m = glm(dnms ~ age, data=sp, family=poisson(link="identity"))
s = summary(m)
sp$slope = s$coefficients[[2]]
sp$intercept = s$coefficients[[1]]
new_df1 = rbind(new_df1, sp)
}
# get rid of duplicates (i.e., samples from the same
# family)
sort_and_index <- function(df) {
df = df[order(df$slope),]
df$facet_order = factor(df$fam_id, levels = unique(df$fam_id))
df = df[!duplicated(df[,c('fam_id')]),]
df$order = rev(c(1:nrow(df)))
return(df)
}
new_df1 = sort_and_index(new_df1)
# plot slopes and intercepts
p <- ggplot(new_df1, aes(x=slope, y=intercept)) +
geom_smooth(method='lm', color='dodgerblue', alpha=0.25) +
geom_point(pch=21, fill="black", col="white", size=3, lwd=0.5) +
ylab('Initial mutation count (intercept) in family') +
xlab('Slope in family') +
theme(axis.text.x=element_text(size=16)) +
theme(axis.text.y=element_text(size=16)) +
theme(text=element_text(size=16))
p
print(cor.test(new_df1$slope, new_df1$intercept))
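
# Side note (illustrative): for an ordinary least-squares fit, the sampling
# covariance between the estimated intercept and slope is -mean(x) * Var(slope),
# so with paternal ages well above zero a negative slope/intercept correlation
# is expected even for randomly generated DNM counts, as seen above.
mean_age <- mean(df1$age)  # well above zero here, hence the expected anti-correlation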
###Output
_____no_output_____
###Markdown
...but inter-family variability is not expected for randomly distributed DNM counts
###Code
m = glm(dnms ~ age * fam_id, data=df1, family=poisson(link="identity"))
anova(m, test="Chisq")
###Output
_____no_output_____
###Markdown
Is family size correlated with the slope or intercept for a family?
###Code
library(MASS)
library(ggplot2)
library(cowplot)
# read in third-generation DNMs
gen3 = read.csv("../data/third_gen.dnms.summary.csv")
orig_df = data.frame()
# this function accepts a dataframe with information
# about each family's slope, and sorts it in order of increasing
# slope
sort_and_index <- function(df) {
df = df[order(df$slope),]
df$facet_order = factor(df$family_id, levels = unique(df$family_id))
df = df[!duplicated(df[,c('family_id')]),]
df$order = rev(c(1:nrow(df)))
return(df)
}
# loop over every family, fit a regression, and store the results
# in `orig_df`
for (sp in split(gen3, as.factor(gen3$family_id))) {
m = glm(autosomal_dnms ~ dad_age, data=sp, family=poisson(link="identity"))
s = summary(m)
sp$slope = s$coefficients[[2]]
sp$intercept = s$coefficients[[1]]
orig_df = rbind(orig_df, sp)
}
# sort the rank results from the full third-generation dataset
orig_df = sort_and_index(orig_df)
# plot the correlation between family size and slope
p1 <- ggplot(orig_df, aes(x=n_sibs, y=slope)) +
geom_smooth(method='lm', color='firebrick', alpha=0.25) +
geom_point(pch=21, fill="black", col="white", size=4, lwd=0.5) +
ylab('Paternal age effect (slope) in family') +
xlab('Number of siblings in family') +
theme(axis.text.x=element_text(size=16)) +
theme(axis.text.y=element_text(size=16)) +
theme(text=element_text(size=16))
m = lm(slope ~ n_sibs, data=orig_df)
summary(m)
# plot the correlation between family size and intercept
p2 <- ggplot(orig_df, aes(x=n_sibs, y=intercept)) +
geom_smooth(method='lm', color='firebrick', alpha=0.25) +
geom_point(pch=21, fill="black", col="white", size=4, lwd=0.5) +
ylab('Initial mutation count (intercept) in family') +
xlab('Number of siblings in family') +
theme(axis.text.x=element_text(size=16)) +
theme(axis.text.y=element_text(size=16)) +
theme(text=element_text(size=16))
m = lm(intercept ~ n_sibs, data=orig_df)
summary(m)
p1
p2
###Output
_____no_output_____
###Markdown
Other questions and suggestions
Suggestion 5. Other things to explore further related to parental age effects are: how do the conclusions change and/or can you detect similar variability in maternal age when analyzing phased DNMs? This may be underpowered, but for those families that share grandparents, if two brothers are in the F1 generation, do their paternal age effects differ?
###Code
gen3 = read.csv('../data/third_gen.dnms.summary.csv')
# fit model with only paternally-phased counts
m = glm(dad_dnms_auto ~ dad_age * family_id, data=gen3, family=poisson(link="identity"))
anova(m, test="Chisq")
# fit model with only maternally-phased counts.
# need to add a pseudo-count of 1 to maternal DNMs first.
gen3$mom_dnms_auto = gen3$mom_dnms_auto + 1
m = glm(mom_dnms_auto ~ mom_age * family_id, data=gen3, family=poisson(link="identity"))
anova(m, test="Chisq")
# identify difference in paternal age effects between pairs of brothers
# first, for family 24
gen3_fam24 = subset(gen3, family_id %in% c("24_C", "24_D"))
m_exp = glm(autosomal_dnms ~ dad_age * family_id, data=gen3_fam24, family=poisson(link="identity"))
m_null = glm(autosomal_dnms ~ dad_age, data=gen3_fam24, family=poisson(link="identity"))
anova(m_null, m_exp, test="Chisq")
# next, for family 19
gen3_fam19 = subset(gen3, family_id %in% c("19_A", "19_B"))
m_exp = glm(autosomal_dnms ~ dad_age * family_id, data=gen3_fam19, family=poisson(link="identity"))
m_null = glm(autosomal_dnms ~ dad_age, data=gen3_fam19, family=poisson(link="identity"))
anova(m_null, m_exp, test="Chisq")
###Output
_____no_output_____ |
ANL-TD-Iterative-Pflow/.ipynb_checkpoints/runtdpflow-checkpoint.ipynb | ###Markdown
Transmission-Distribution Power Flow Co-simulation
This script runs a transmission-distribution power flow. The network is assumed to consist of a single transmission network connected to distribution feeders at each load bus. The number of distribution feeders connected is determined based on the real power load at the bus and the injection of the distribution feeder. Here, as an example, the T and D networks consist of the following:
+ Transmission system: 200-bus network (synthetic network for the Illinois system, from TAMU)
+ Distribution feeder: IEEE 8500-node feeder
# Metadafile having number of boundary buses
# and the feeder connections at those buses
file = open('metadatafile',"r")
linenum = 1
bdry_buses = []
# Number of boundary buses selected
# One can vary the number of boundary buses
nbdry = 100
dist_feeders = {}
for line in file:
if linenum == 1:
nbdry_nfeeders = line.split(',')
nbdry = int(nbdry_nfeeders[0])
nfeeders = int(nbdry_nfeeders[1])
# print("%d,%d" % (nbdry,nfeeders))
elif linenum < nbdry+2:
bdry_buses.append(line)
else:
# print line
values1 = line.rstrip(' \n')
values = values1.split(',')
dist_feeders[values[1]]= int(values[0]) # name:boundary bus
# print '%s:%d' %(values[1],dist_feeders[values[1]])
linenum = linenum+1
file.close()
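
# Sanity check (illustrative addition, assuming feeder names are unique):
# the parsed structures should match the counts declared on the first line.
assert len(bdry_buses) == nbdry
assert len(dist_feeders) == nfeeders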
print 'nbdry=%d ' %nbdry
print 'nfeeders=%d'%nfeeders
for k in dist_feeders:
print("%d,%s" %(dist_feeders[k],k))
nfeds = nfeeders+1
print 'nfederates=%d' %nfeds
%%python
import shlex
import subprocess
print 'Broker args'
broker_args='-nfeds '+str(nfeds)
print broker_args
broker_cmdline='./helicsbroker '+broker_args
broker = shlex.split(broker_cmdline)
print broker
## Launch broker
subprocess.Popen(broker)
##Launch Transmission federate
print 'T args'
netfile='case_ACTIVSg200.m'
metadatafile='metadatafile'
#print metadatafile
# Launch Transmission federate simulation
pflowhelicst_args_files ='-netfile '+netfile+' -metadatafile '+metadatafile
pflowhelicst_args=pflowhelicst_args_files
print pflowhelicst_args+'\n'
pflowhelicst_cmdline='./PFLOWHELICST '+pflowhelicst_args
pflowhelicst = shlex.split(pflowhelicst_cmdline)
subprocess.Popen(pflowhelicst)
##Launch distribution federates
fednum=0
dnetfile='/Users/Shri/packages/OpenDSSDirect.jl/examples/8500-Node/Master.dss'
for k in dist_feeders:
fednum = fednum + 1
print 'D federate '+k+' args'
# Dist. federate 1
netfile=dnetfile
dtopic=k
pflowhelicsd_args = '-netfile '+netfile+' -dtopic '+dtopic
print pflowhelicsd_args+'\n'
pflowhelicsd_cmdline='./PFLOWHELICSD '+pflowhelicsd_args
pflowhelicsd = shlex.split(pflowhelicsd_cmdline)
subprocess.Popen(pflowhelicsd)
###Output
Broker args
-nfeds 3
['./helicsbroker', '-nfeds', '3']
T args
-netfile case_ACTIVSg200.m -metadatafile metadatafile
D federate dcase_tbdry_2_feeder_1 args
-netfile /Users/Shri/packages/OpenDSSDirect.jl/examples/8500-Node/Master.dss -dtopic dcase_tbdry_2_feeder_1
D federate dcase_tbdry_4_feeder_1 args
-netfile /Users/Shri/packages/OpenDSSDirect.jl/examples/8500-Node/Master.dss -dtopic dcase_tbdry_4_feeder_1
|
code/ideamDataReader_v2.ipynb | ###Markdown
Weather Derivatives Precipitation Bogota Exploration - El Dorado Airport
Developed by [Jesus Solano](mailto:[email protected]) 31 July 2018
###Code
# Configure path to read txts.
path = '../datasets/ideamBogota/'
# Download the update dataset.
import os
if not os.path.exists('../datasets/soi.dat'):
! wget https://crudata.uea.ac.uk/cru/data/soi/soi.dat -P ../datasets/
# Import modules to read and visualize.
import pandas as pd
import numpy as np
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Load One Year Data
###Code
from io import StringIO
# """Determine whether a year is a leap year."""
def isleapyear(year):
if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
return True
return False
# Read only one year.
def loadYear(year):
year=str(year)
filedata = open(path+ year +'.txt', 'r')
# Create a dataframe from the year's txt.
columnNames=['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
precipitationYear =pd.read_csv(StringIO('\n'.join(' '.join(l.split()) for l in filedata)),sep=' ',header=None, names=columnNames,skiprows=lambda x: x in list(range(0,3)) , skipfooter=4 )
# Sort data to solve problem of 28 days of Feb.
for i in range(28,30):
for j in reversed(range(1,12)):
precipitationYear.iloc[i,j]= precipitationYear.iloc[i,j-1]
# Fix leap years.
if isleapyear(int(year)) and i == 28:
count = 1
else:
precipitationYear.iloc[i,1]= np.nan
# Fix problem related to months with 31 days.
precipitationYear.iloc[30,11] = precipitationYear.iloc[30,6]
precipitationYear.iloc[30,9] = precipitationYear.iloc[30,5]
precipitationYear.iloc[30,7] = precipitationYear.iloc[30,4]
precipitationYear.iloc[30,6] = precipitationYear.iloc[30,3]
precipitationYear.iloc[30,4] = precipitationYear.iloc[30,2]
precipitationYear.iloc[30,2] = precipitationYear.iloc[30,1]
for i in [1,3,5,8,10]:
precipitationYear.iloc[30,i] = np.nan
return precipitationYear
# Show a year data example.
nYear = 2004
testYear =loadYear(nYear)
testYear
# Convert one year data frame to timeseries.
def convertOneYearToSeries(dataFrameYear,nYear):
dataFrameYearT = dataFrameYear.T
dates = pd.date_range(str(nYear)+'-01-01', end = str(nYear)+'-12-31' , freq='D')
dataFrameYearAllTime = dataFrameYearT.stack()
dataFrameYearAllTime.index = dates
return dataFrameYearAllTime
# Plot data from one year.
timeYear = convertOneYearToSeries(testYear,nYear)
meanTimeYear = timeYear.mean()
ax = timeYear.plot(title='Precipitation(mm) for '+str(nYear),figsize=(20,10),grid=True)
ax.axhline(y=meanTimeYear, xmin=-1, xmax=1, color='r', linestyle='--', lw=2)
timeYear
###Output
_____no_output_____
###Markdown
Load history data
###Code
# Concatenate all time series over a range of years.
def concatYearsPrecipitation(startYear,endYear):
precipitationAllTime = loadYear(startYear)
precipitationAllTime = convertOneYearToSeries(precipitationAllTime,startYear)
for i in range(startYear+1,endYear+1):
tempPrecipitation=loadYear(i)
tempPrecipitation= convertOneYearToSeries(tempPrecipitation,i)
precipitationAllTime = pd.concat([precipitationAllTime,tempPrecipitation])
return precipitationAllTime
# Plot precipitation over a set of years.
startYear = 1972
endYear = 2015
precipitationAllTime = concatYearsPrecipitation(startYear,endYear)
meanAllTime = precipitationAllTime.mean()
ax = precipitationAllTime.plot(title='Precipitation(mm) from '+ str(startYear) +' to '+str(endYear),figsize=(20,10),grid=True,color='steelblue')
ax.axhline(y=meanAllTime, xmin=-1, xmax=1, color='r', linestyle='--', lw=2)
###Output
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:19: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support skipfooter; you can avoid this warning by specifying engine='python'.
###Markdown
Daily Nino3.4 Index
###Code
######## Nino 3.4 (nino34) ##############
# https://climexp.knmi.nl/selectdailyindex.cgi?id=someone@somewhere
# Download the update dataset.
import os
if not os.path.exists('../datasets/nino34_daily.dat'):
! wget https://climexp.knmi.nl/data/inino34_daily.dat -O ../datasets/nino34_daily.dat
# Import modules to read and visualize.
import pandas as pd
import numpy as np
%pylab inline
# Read dataset. We attempt to replace three spaces with two spaces to read correctly.
from io import StringIO
columnNames=['Date','Index']
filedata = open('../datasets/nino34_daily.dat', 'r')
nino34=pd.read_csv(StringIO('\n'.join(' '.join(l.split()) for l in filedata)),sep=' ',header=None,skiprows=lambda x: x in list(range(0,20)), names=columnNames, skipfooter=8 )
datesNino34 = pd.date_range('1981-09-10', periods=nino34.shape[0], freq='D')
nino34.index = datesNino34
nino34 = nino34.drop(['Date'], axis=1)
nino34.head(10)
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Precipitation vs Index Choose dates interval
###Code
startYear = 2000
endYear = 2015
nino34Time = nino34.loc[str(startYear)+'-01-01':str(endYear)+'-12-31']
nino34Time = nino34Time.iloc[:,0]
datesNino34All = pd.date_range(str(startYear)+'-01-01', periods=nino34Time.shape[0], freq='D')
nino34Time.index = datesNino34All
precipitationTime = precipitationAllTime.loc[str(startYear)+'-01-01':str(endYear)+'-12-31']
precipitationTime = precipitationTime.iloc[:]
datesPrecipitationAll = pd.date_range(str(startYear)+'-01-01', periods=precipitationTime.shape[0], freq='D')
precipitationTime.index = datesPrecipitationAll
ax1=precipitationTime.plot(figsize=(20,10),label='Precipitation - Ideam',grid=True, title= 'Index vs Precipitation')
plt.legend(bbox_to_anchor=(0.01, 0.95, 0.2, 0.8), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
ax2= ax1.twinx()
ax2.spines['right'].set_position(('axes',1.0))
nino34Time.plot(ax=ax2,color='green',label='NINO 3.4')
plt.legend(bbox_to_anchor=(0.9, 0.95, 0.1, 0.8), loc=3, ncol=2, mode="expand", borderaxespad=0.)
ax1.set_xlabel('Year')
ax1.set_ylabel('Precipitation Amount (mm)')
ax2.set_ylabel('Index Value')
###Output
_____no_output_____
###Markdown
Dispersion Plot
###Code
nino34IdeamMix = pd.concat([nino34Time,precipitationTime],axis=1)
nino34IdeamMix.set_axis(['Nino 3.4','Precipitation(mm)'],axis='columns',inplace=True)
nino34IdeamMix = nino34IdeamMix.dropna()
import seaborn as sns
sns.lmplot(x='Nino 3.4',y='Precipitation(mm)',data=nino34IdeamMix,fit_reg=True, size=8, aspect= 2, line_kws={'color': 'red'})
print('The correlation matrix is:\n', np.corrcoef(nino34IdeamMix['Nino 3.4'],nino34IdeamMix['Precipitation(mm)']))
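# Illustrative follow-up (assumes SciPy is available): a p-value for the weak
# correlation reported above can be obtained with a Pearson test.
from scipy import stats
nino34_r, nino34_p = stats.pearsonr(nino34IdeamMix['Nino 3.4'], nino34IdeamMix['Precipitation(mm)'])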
###Output
The correlation matrix is:
[[ 1. -0.06553467]
[-0.06553467 1. ]]
###Markdown
Box Plot
###Code
#Set up bins
bin = [-50,-1,1,50]
#use pd.cut function can attribute the values into its specific bins
category = pd.cut(nino34IdeamMix['Nino 3.4'],bin)
category = category.to_frame()
category.columns = ['range']
#concatenate age and its bin
nino34IdeamMix_New = pd.concat([nino34IdeamMix,category],axis = 1)
nino34IdeamMix_New.boxplot(column='Precipitation(mm)',by='range', figsize=(20,10))
###Output
_____no_output_____
###Markdown
Same Analysis removing month average
###Code
# Calculates the average precipitation for each month over a range of years.
def calculateMonthMeanOverYears(startYear,endYear):
precipitationAllTime = loadYear(startYear)
for i in range(startYear+1,endYear+1):
tempPrecipitation=loadYear(i)
precipitationAllTime = pd.concat([precipitationAllTime,tempPrecipitation])
return precipitationAllTime.mean()
# Test the rainfall mean over all data.
startYear = 1972
endYear = 2015
doradoMonthPrecipitationAve = calculateMonthMeanOverYears(startYear,endYear)
# Concatenate all time series over a range of years, subtracting the long-run month average from every entry (monthly anomalies).
def concatYearsPrecipitationRE(startYear,endYear):
doradoMonthPrecipitationAve = calculateMonthMeanOverYears(startYear,endYear)
precipitationAllTime = loadYear(startYear) - doradoMonthPrecipitationAve
precipitationAllTime = convertOneYearToSeries(precipitationAllTime,startYear)
for i in range(startYear+1,endYear+1):
tempPrecipitation=loadYear(i) - doradoMonthPrecipitationAve
tempPrecipitation= convertOneYearToSeries(tempPrecipitation,i)
precipitationAllTime = pd.concat([precipitationAllTime,tempPrecipitation])
return precipitationAllTime
# Plot precipitation over a set of years.
startYear = 1972
endYear = 2015
precipitationReAllTime = concatYearsPrecipitationRE(startYear,endYear)
meanReAllTime = precipitationReAllTime.mean()
ax = precipitationReAllTime.plot(title='Precipitation(mm) from '+ str(startYear) +' to '+str(endYear)+'-- Removing Month Average',figsize=(20,10),grid=True,color='steelblue')
ax.axhline(y=meanReAllTime, xmin=-1, xmax=1, color='r', linestyle='--', lw=2)
###Output
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:19: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support skipfooter; you can avoid this warning by specifying engine='python'.
###Markdown
Precipitation vs Index Choose dates interval
###Code
startYear = 2000
endYear = 2015
nino34Time = nino34.loc[str(startYear)+'-01-01':str(endYear)+'-12-31']
nino34Time = nino34Time.iloc[:,0]
datesNino34All = pd.date_range(str(startYear)+'-01-01', periods=nino34Time.shape[0], freq='D')
nino34Time.index = datesNino34All
precipitationReTime = precipitationReAllTime.loc[str(startYear)+'-01-01':str(endYear)+'-12-31']
precipitationReTime = precipitationReTime.iloc[:]
datesPrecipitationReAll = pd.date_range(str(startYear)+'-01-01', periods=precipitationReTime.shape[0], freq='D')
precipitationReTime.index = datesPrecipitationReAll
ax1=precipitationReTime.plot(figsize=(20,10),label='Precipitation (Average Removed) - Ideam',grid=True, title= 'Index vs Precipitation')
plt.legend(bbox_to_anchor=(0.01, 0.95, 0.2, 0.8), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
ax2= ax1.twinx()
ax2.spines['right'].set_position(('axes',1.0))
nino34Time.plot(ax=ax2,color='green',label='NINO 3.4')
plt.legend(bbox_to_anchor=(0.9, 0.95, 0.1, 0.8), loc=3, ncol=2, mode="expand", borderaxespad=0.)
ax1.set_xlabel('Year')
ax1.set_ylabel('Precipitation Amount (mm)')
ax2.set_ylabel('Index Value')
###Output
_____no_output_____
###Markdown
Dispersion Plot
###Code
nino34IdeamMix = pd.concat([nino34Time,precipitationReTime],axis=1)
nino34IdeamMix.set_axis(['Nino 3.4','Precipitation(mm)'],axis='columns',inplace=True)
nino34IdeamMix = nino34IdeamMix.dropna()
import seaborn as sns
sns.lmplot(x='Nino 3.4',y='Precipitation(mm)',data=nino34IdeamMix,fit_reg=True, size=8, aspect= 2, line_kws={'color': 'red'})
print('The correlation matrix is:\n', np.corrcoef(nino34IdeamMix['Nino 3.4'],nino34IdeamMix['Precipitation(mm)']))
###Output
The correlation matrix is:
[[ 1. -0.08050763]
[-0.08050763 1. ]]
###Markdown
Box Plot
###Code
#Set up bins
bin = [-50,-1,1,50]
#use pd.cut function can attribute the values into its specific bins
category = pd.cut(nino34IdeamMix['Nino 3.4'],bin)
category = category.to_frame()
category.columns = ['range']
#concatenate age and its bin
nino34IdeamMix_New = pd.concat([nino34IdeamMix,category],axis = 1)
nino34IdeamMix_New.boxplot(column='Precipitation(mm)',by='range', figsize=(20,10))
###Output
_____no_output_____
###Markdown
Export Ideam Data to external Files.
###Code
# Save precipitation data to .csv
precipitationAllTime.to_csv('../datasets/precipitationAllTime.csv')
precipitationReAllTime.to_csv('../datasets/precipitationRemovingAverageAllTime.csv')
precipitationAllTime
precipitationAllTime.loc['2008-12-31']
###Output
_____no_output_____ |