path | concatenated_notebook |
---|---|
presentaciones/14-pandas/14-pandas-ex-clean1.ipynb | ###Markdown
Pandas exercises ***
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Exercise 1 1. Generate numpy arrays to obtain the result of $ z = f(x,y) $ for a random input set of at least 100 rows.- Create a pandas `DataFrame` with the columns $x$, $y$ and $z$.- Compute the mean of each `Series` in the `DataFrame`.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
Exercise 1.1 Similarly to the previous exercise, generate a `DataFrame` with the results of a function $y = f(x)$ with at least 100 rows. - Plot the results with `matplotlib` using 2 variants: - a. Using the numpy arrays (`ndarray` objects) you used to generate the `DataFrame`. - b. Using the `Series` objects of the `DataFrame`. - Get the *handle* object of the plot and save the plot as a PDF.- Make sure you understand the difference between generating the plot to save it (without showing it) and generating the plot to show it. Exercise 2 1. Try 5 functions from the first page of the *cheat sheet* that were not covered in the presentation.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
Exercise 3 Answer the following questions using pandas functions. Use the cheat sheet for help.
###Code
df = pd.read_csv("data/titanic.csv")
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB
###Markdown
- What is the maximum fare that was paid? And the median?
###Code
# Your code here
###Output
_____no_output_____
###Markdown
- Compute the average survival rate for all passengers (note: the `Survived` column indicates whether someone survived (1) or not (0)).
###Code
# Your code here
###Output
_____no_output_____
###Markdown
- Plot the age distribution of the Titanic passengers
###Code
# Your code here
###Output
_____no_output_____
###Markdown
- Using the Titanic dataset, select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
- According to the Titanic dataset, how many passengers older than 70 were on the Titanic?
###Code
# Your code here
###Output
_____no_output_____
###Markdown
- Compute the average age for each sex. Now use the `groupby` method.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
- Compute this survival ratio for all passengers younger than 25 (remember: boolean filtering / indexing).
###Code
# Your code here
###Output
_____no_output_____
###Markdown
- What is the difference in the survival ratio between the sexes?
###Code
# Your code here
###Output
_____no_output_____
###Markdown
- Or how does the proportion of survivors differ across the different passenger classes? Make a bar chart visualizing the survival ratio for the 3 classes.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
- Make a bar chart to visualize the average fare paid by passengers according to their age. To do this: - First, we split the passengers into ranges using the `pd.cut` function and add this series to the DataFrame. - We must group by this column and compute the average of the fares. - Finally, we append `plot(kind='bar')` to obtain the bar chart.
###Code
df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))
# Your code here
###Output
_____no_output_____ |
NumPy/02.07-Fancy-Indexing.ipynb | ###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* Fancy Indexing In the previous sections, we saw how to access and modify portions of arrays using simple indices (e.g., ``arr[0]``), slices (e.g., ``arr[:5]``), and Boolean masks (e.g., ``arr[arr > 0]``).In this section, we'll look at another style of array indexing, known as *fancy indexing*.Fancy indexing is like the simple indexing we've already seen, but we pass arrays of indices in place of single scalars.This allows us to very quickly access and modify complicated subsets of an array's values. Exploring Fancy IndexingFancy indexing is conceptually simple: it means passing an array of indices to access multiple array elements at once.For example, consider the following array:
###Code
import numpy as np
rand = np.random.RandomState(42)
x = rand.randint(100, size=10)
print(x)
###Output
[51 92 14 71 60 20 82 86 74 74]
###Markdown
Suppose we want to access three different elements. We could do it like this:
###Code
[x[3], x[7], x[2]]
###Output
_____no_output_____
###Markdown
Alternatively, we can pass a single list or array of indices to obtain the same result:
###Code
ind = [3, 7, 4]
x[ind]
###Output
_____no_output_____
###Markdown
When using fancy indexing, the shape of the result reflects the shape of the *index arrays* rather than the shape of the *array being indexed*:
###Code
ind = np.array([[3, 7],
[4, 5]])
x[ind]
###Output
_____no_output_____
###Markdown
Fancy indexing also works in multiple dimensions. Consider the following array:
###Code
X = np.arange(12).reshape((3, 4))
X
###Output
_____no_output_____
###Markdown
Like with standard indexing, the first index refers to the row, and the second to the column:
###Code
row = np.array([0, 1, 2])
col = np.array([2, 1, 3])
X[row, col]
###Output
_____no_output_____
###Markdown
Notice that the first value in the result is ``X[0, 2]``, the second is ``X[1, 1]``, and the third is ``X[2, 3]``.The pairing of indices in fancy indexing follows all the broadcasting rules that were mentioned in [Computation on Arrays: Broadcasting](02.05-Computation-on-arrays-broadcasting.ipynb).So, for example, if we combine a column vector and a row vector within the indices, we get a two-dimensional result:
###Code
X[row[:, np.newaxis], col]
###Output
_____no_output_____
###Markdown
Here, each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations.For example:
###Code
row[:, np.newaxis] * col
###Output
_____no_output_____
###Markdown
It is always important to remember with fancy indexing that the return value reflects the *broadcasted shape of the indices*, rather than the shape of the array being indexed. Combined IndexingFor even more powerful operations, fancy indexing can be combined with the other indexing schemes we've seen:
###Code
print(X)
###Output
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
###Markdown
We can combine fancy and simple indices:
###Code
X[2, [2, 0, 1]]
###Output
_____no_output_____
###Markdown
We can also combine fancy indexing with slicing:
###Code
X[1:, [2, 0, 1]]
###Output
_____no_output_____
###Markdown
And we can combine fancy indexing with masking:
###Code
mask = np.array([1, 0, 1, 0], dtype=bool)
X[row[:, np.newaxis], mask]
###Output
_____no_output_____
###Markdown
All of these indexing options combined lead to a very flexible set of operations for accessing and modifying array values. Example: Selecting Random PointsOne common use of fancy indexing is the selection of subsets of rows from a matrix.For example, we might have an $N$ by $D$ matrix representing $N$ points in $D$ dimensions, such as the following points drawn from a two-dimensional normal distribution:
###Code
mean = [0, 0]
cov = [[1, 2],
[2, 5]]
X = rand.multivariate_normal(mean, cov, 100)
X.shape
###Output
_____no_output_____
###Markdown
Using the plotting tools we will discuss in [Introduction to Matplotlib](04.00-Introduction-To-Matplotlib.ipynb), we can visualize these points as a scatter-plot:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # for plot styling
plt.scatter(X[:, 0], X[:, 1]);
###Output
_____no_output_____
###Markdown
Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and use these indices to select a portion of the original array:
###Code
indices = np.random.choice(X.shape[0], 20, replace=False)
indices
selection = X[indices] # fancy indexing here
selection.shape
###Output
_____no_output_____
###Markdown
Now to see which points were selected, let's over-plot large circles at the locations of the selected points:
###Code
plt.scatter(X[:, 0], X[:, 1], alpha=0.3)
plt.scatter(selection[:, 0], selection[:, 1],
facecolor='none', s=200);
###Output
_____no_output_____
###Markdown
This sort of strategy is often used to quickly partition datasets, as is often needed in train/test splitting for validation of statistical models (see [Hyperparameters and Model Validation](05.03-Hyperparameters-and-Model-Validation.ipynb)), and in sampling approaches to answering statistical questions. Modifying Values with Fancy IndexingJust as fancy indexing can be used to access parts of an array, it can also be used to modify parts of an array.For example, imagine we have an array of indices and we'd like to set the corresponding items in an array to some value:
###Code
x = np.arange(10)
i = np.array([2, 1, 8, 4])
x[i] = 99
print(x)
###Output
[ 0 99 99 3 99 5 6 7 99 9]
###Markdown
We can use any assignment-type operator for this. For example:
###Code
x[i] -= 10
print(x)
###Output
[ 0 89 89 3 89 5 6 7 89 9]
###Markdown
Notice, though, that repeated indices with these operations can cause some potentially unexpected results. Consider the following:
###Code
x = np.zeros(10)
x[[0, 0]] = [4, 6]
print(x)
###Output
[ 6. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
Where did the 4 go? The result of this operation is to first assign ``x[0] = 4``, followed by ``x[0] = 6``.The result, of course, is that ``x[0]`` contains the value 6.Fair enough, but consider this operation:
###Code
i = [2, 3, 3, 4, 4, 4]
x[i] += 1
x
###Output
_____no_output_____
###Markdown
You might expect that ``x[3]`` would contain the value 2, and ``x[4]`` would contain the value 3, as this is how many times each index is repeated. Why is this not the case?Conceptually, this is because ``x[i] += 1`` is meant as a shorthand of ``x[i] = x[i] + 1``. ``x[i] + 1`` is evaluated, and then the result is assigned to the indices in x.With this in mind, it is not the augmentation that happens multiple times, but the assignment, which leads to the rather nonintuitive results.So what if you want the other behavior where the operation is repeated? For this, you can use the ``at()`` method of ufuncs (available since NumPy 1.8), and do the following:
###Code
x = np.zeros(10)
np.add.at(x, i, 1)
print(x)
###Output
[ 0. 0. 1. 2. 3. 0. 0. 0. 0. 0.]
###Markdown
The ``at()`` method does an in-place application of the given operator at the specified indices (here, ``i``) with the specified value (here, 1).Another method that is similar in spirit is the ``reduceat()`` method of ufuncs, which you can read about in the NumPy documentation. Example: Binning DataYou can use these ideas to efficiently bin data to create a histogram by hand.For example, imagine we have 1,000 values and would like to quickly find where they fall within an array of bins.We could compute it using ``ufunc.at`` like this:
###Code
np.random.seed(42)
x = np.random.randn(100)
# compute a histogram by hand
bins = np.linspace(-5, 5, 20)
counts = np.zeros_like(bins)
# find the appropriate bin for each x
i = np.searchsorted(bins, x)
# add 1 to each of these bins
np.add.at(counts, i, 1)
###Output
_____no_output_____
###Markdown
The counts now reflect the number of points within each bin–in other words, a histogram:
###Code
# plot the results
plt.plot(bins, counts, linestyle='steps');
###Output
_____no_output_____
###Markdown
Of course, it would be silly to have to do this each time you want to plot a histogram.This is why Matplotlib provides the ``plt.hist()`` routine, which does the same in a single line:```pythonplt.hist(x, bins, histtype='step');```This function will create a nearly identical plot to the one seen here.To compute the binning, ``matplotlib`` uses the ``np.histogram`` function, which does a very similar computation to what we did before. Let's compare the two here:
###Code
print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)
print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
###Output
NumPy routine:
10000 loops, best of 3: 97.6 µs per loop
Custom routine:
10000 loops, best of 3: 19.5 µs per loop
###Markdown
Our own one-line algorithm is several times faster than the optimized algorithm in NumPy! How can this be?If you dig into the ``np.histogram`` source code (you can do this in IPython by typing ``np.histogram??``), you'll see that it's quite a bit more involved than the simple search-and-count that we've done; this is because NumPy's algorithm is more flexible, and particularly is designed for better performance when the number of data points becomes large:
###Code
x = np.random.randn(1000000)
print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)
print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
###Output
NumPy routine:
10 loops, best of 3: 68.7 ms per loop
Custom routine:
10 loops, best of 3: 135 ms per loop
|
ipython_ntb/5.kernel_svm_implementation.ipynb | ###Markdown
This notebook solves the kernel SVM with CVXOPT
###Code
import numpy as np
import math
import cvxopt as cvx
import pickle
import pandas as pd
import networkx as nx
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
X_mat = pd.read_csv('data/building_level_data.csv').to_numpy()
filename_y_vec = 'data/y_vec_full_adj_Dec-23-2018.txt'
with open(filename_y_vec, 'r') as f:
lines = [el.rstrip()[-1] for el in f.readlines()]
y_vec = np.array([float(el) for el in lines if el != '"'])
f.close()
print(X_mat.shape)
print(y_vec.shape)
###Output
(39, 14)
(39,)
###Markdown
Solve the SVM with CVXOPT
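As a reading aid for the `fit` function below: CVXOPT's `qp(P, q, G, h, A, b)` solves a quadratic program in the standard form, and the particular matrices passed in can be interpreted as follows (this mapping is my own reading of the code, not a statement by the author):
$$\min_{\alpha}\ \tfrac{1}{2}\,\alpha^{\top} P\,\alpha + q^{\top}\alpha \quad \text{s.t.} \quad G\alpha \preceq h, \quad A\alpha = b .$$
With $P = K$, $q = -\mathbf{1}$, $G = -I$, $h = \mathbf{0}$, $A = y^{\top}$ and $b = 0$ this becomes
$$\min_{\alpha}\ \tfrac{1}{2}\,\alpha^{\top} K\,\alpha - \mathbf{1}^{\top}\alpha \quad \text{s.t.} \quad \alpha \ge 0, \quad y^{\top}\alpha = 0 ,$$
which has the shape of the hard-margin kernel-SVM dual; note that in many textbook statements the quadratic term carries the label products, $P_{ij} = y_i y_j K(x_i, x_j)$, whereas here the plain Gram matrix is passed.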
###Code
def rbf_kernel_function(x1, x2, gamma_param):
return math.exp(-gamma_param*np.dot(x1-x2, x1-x2))
def create_Gram_matrix_RBF(X_mat, gamma_param=500):
'''
I am going to assume that in X_mat the samples are on the rows and the features on the columns
'''
n_samples = X_mat.shape[0]
out_mat = np.ones((n_samples, n_samples))
for ii in range(n_samples):
for jj in range(ii+1, n_samples):
out_mat[ii, jj] = rbf_kernel_function(X_mat[ii,:], X_mat[jj,:], gamma_param=gamma_param)
out_mat[jj, ii] = out_mat[ii, jj]
return out_mat
def kernel_building(filename='data/gram_mat_full_adj_Dec-23-2018.csv'):
kern_mat = pd.read_csv(filename).set_index('Unnamed: 0').to_numpy()
return kern_mat
kern_mat_temp = kernel_building()
print(kern_mat_temp.shape)
## Account for cases in which we have -1 (Kernel Singular Matrix) or 0 (Graph too big) in the columns of the Kernel.
## Either the matrix was not invertible or we did not have enough computing power
indices = []
for row in range(X_mat.shape[0]):
if sum(kern_mat_temp[row, :]) == -1*X_mat.shape[0]:
print(row)
indices.append(row)
if len(np.where(kern_mat_temp[row, :] == 0)[0]) > 0:
print('Removing row:', row)
indices.append(row)
indices_sel = [el for el in range(X_mat.shape[0]) if el not in indices]
X_mat = X_mat[indices_sel, :]
y_vec = y_vec[indices_sel]
kern_mat_temp = kern_mat_temp[indices_sel, :]
kern_mat_temp = kern_mat_temp[:, indices_sel]
print('X matrix shape:', X_mat.shape)
print('Y vector shape:', y_vec.shape)
print('Proportion of Mosques:', 1 - np.mean(y_vec))
def fit(x, y, K_mat):
NUM = x.shape[0]
P = cvx.matrix(K_mat)
q = cvx.matrix(-np.ones((NUM, 1)))
G = cvx.matrix(-np.eye(NUM))
h = cvx.matrix(np.zeros(NUM))
A = cvx.matrix(y.reshape(1, -1))
b = cvx.matrix(np.zeros(1))
cvx.solvers.options['show_progress'] = False
sol = cvx.solvers.qp(P, q, G, h, A, b)
alphas = np.array(sol['x'])
return alphas
w_vec = fit(X_mat, y_vec, K_mat=kern_mat_temp)
## Printing Indices of the Smallest Values
for value in np.sort(np.abs(w_vec).reshape((-1,)))[::-1]:
print('Solution Value:', value, 'Index:', np.where(np.abs(w_vec) == value)[0][0])
from sklearn.metrics import accuracy_score
def calculate_optimal_parameters(X_mat, y_mat, solution, gram_mat=kern_mat_temp, sol_thresh=1e-5):
idx_alpha_not_zero = np.where(np.abs(solution) > sol_thresh)[0]
beta0 = []
for idx in idx_alpha_not_zero:
beta0_temp = 0
for i in range(solution.shape[0]):
beta0_temp += solution[i]*y_mat[i]*gram_mat[i, idx]
beta0.append(y_mat[idx] - beta0_temp)
return np.mean(beta0)
beta_0 = calculate_optimal_parameters(X_mat=X_mat, y_mat=y_vec, solution=w_vec)
### LOOCV Calculation
def loocv_SVM(X, y, kern_mat, sol_thresh=1e-5):
y_predicted = []
y_true = []
n = X.shape[0]
for row in range(n):
indices_temp = [el for el in range(n) if el != row]
X_mat = X[indices_temp, :]
y_vec = y[indices_temp]
kern_mat_temp = kern_mat[indices_temp, :]
kern_mat_temp = kern_mat_temp[:, indices_temp]
w_vec = fit(X_mat, y_vec, K_mat=kern_mat_temp)
beta_0 = calculate_optimal_parameters(X_mat=X_mat, y_mat=y_vec, solution=w_vec, sol_thresh=sol_thresh)
y_pred = 0
for i in range(w_vec.shape[0]):
y_pred += w_vec[i]*y_vec[i]*kern_mat[row, i]
y_pred += beta_0
y_predicted.append(1 if np.sign(y_pred) == 1 else 0)
y_true.append(y[row])
return y_predicted, y_true
y_predicted_loocv, y_true_loocv = loocv_SVM(X=X_mat, y=y_vec, kern_mat=kern_mat_temp, sol_thresh=1e-5)
print('Predicted:', y_predicted_loocv)
print('Actual:', list(map(int, y_true_loocv)))
print('Classification accuracy:', (accuracy_score(y_true_loocv, y_predicted_loocv))*100)
###Output
Predicted: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Actual: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Classification accuracy: 94.73684210526315
###Markdown
Let's plot the mis-classified graphs
###Code
full_dict = pickle.load(open('processed_dict/graph_dict_flnms_dfs_adjmats.pkl', 'rb'))
dataframe_dict = full_dict['dataframe']
dataframe_list = []
name_list = []
for bld_name in ['Monasteries', 'Mosques']:
for i in range(len(dataframe_dict[bld_name])):
dataframe_list.append(dataframe_dict[bld_name][i])
name_list.append(full_dict['filename'][bld_name][i])
dataframe_list_pruned = [el for j, el in enumerate(dataframe_list) if j in indices_sel]
name_list_pruned = [el for j, el in enumerate(name_list) if j in indices_sel]
def return_graph(df):
G=nx.Graph()
df['nconn'] = df['connectivity'].apply(lambda x: len(x))
node_list = range(len(df['index'].values))
G.add_nodes_from(node_list)
sublist_connectivity = df['connectivity']
list_connectivity = [el for sublist in sublist_connectivity for el in sublist]
G.add_edges_from(list_connectivity)
return G
correct_predictions = [index for index, pred in enumerate(y_predicted_loocv) if pred == y_true_loocv[index]]
wrong_predictions = [index for index, pred in enumerate(y_predicted_loocv) if pred != y_true_loocv[index]]
plt.rcParams['figure.figsize'] = 25, 35
number_of_columns = 4
number_of_rows = (len(correct_predictions) // number_of_columns)
indices_plot = [((index // number_of_columns) + 1, (index % number_of_columns)+1) for index, _ in enumerate(correct_predictions)]
number_of_subplots = len(correct_predictions)
plt_index = 1
for j,df in enumerate(dataframe_list_pruned):
if j in correct_predictions:
G = return_graph(df)
ax1 = plt.subplot(number_of_rows, number_of_columns, plt_index)
plt_index += 1
nx.draw(G)
if j <= 18:
ax1.set_title('Monastery ' + name_list_pruned[j].split('/')[-1].replace('.txt', ''), fontsize=16)
else:
ax1.set_title('Mosque ' + name_list_pruned[j].split('/')[-1].replace('.txt', ''), fontsize=16)
plt.savefig('images/correct_predictions_graphs.png', dpi=200)
plt.show()
plt.rcParams['figure.figsize'] = 9, 5
number_of_columns = 2
number_of_rows = (len(wrong_predictions) // number_of_columns)
indices_plot = [((index // number_of_columns) + 1, (index % number_of_columns)+1) for index, _ in enumerate(wrong_predictions)]
number_of_subplots = len(wrong_predictions)
fig = plt.figure(1)
plt_index = 1
for j,df in enumerate(dataframe_list_pruned):
if j in wrong_predictions:
G = return_graph(df)
ax1 = plt.subplot(number_of_rows, number_of_columns, plt_index)
plt_index += 1
nx.draw(G)
if j <= 18:
ax1.set_title('Monastery ' + name_list_pruned[j].split('/')[-1].replace('.txt', ''))
else:
ax1.set_title('Mosque ' + name_list_pruned[j].split('/')[-1].replace('.txt', ''))
plt.savefig('images/wrong_predictions_graph.png', dpi=600)
plt.show()
###Output
_____no_output_____
###Markdown
Plot the closest points to the boundary
###Code
indices_sv = [21, 22, 37, 17]
plt.rcParams['figure.figsize'] = 14, 6
number_of_columns = 4
number_of_rows = (len(indices_sv) // number_of_columns)
number_of_subplots = len(indices_sv)
for j, idx in enumerate(indices_sv):
df = dataframe_list_pruned[idx]
G = return_graph(df)
ax1 = plt.subplot(number_of_rows, number_of_columns, j+1)
#nx.draw_networkx(G, node_size=150)
nx.draw(G, node_size=70)
if idx <= 18:
ax1.set_title('Monastery ' + name_list_pruned[idx].split('/')[-1].replace('.txt', ''))
else:
ax1.set_title('Mosque ' + name_list_pruned[idx].split('/')[-1].replace('.txt', ''))
plt.savefig('images/closest_points_boundary.png', dpi=500)
plt.show()
###Output
_____no_output_____ |
Python Statistics/Exercise Files/chapter4/04_04/04_04_testing_begin.ipynb | ###Markdown
Python statistics essential training - 04_04_testing Standard imports
###Code
import math
import io
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as pp
%matplotlib inline
import scipy.stats
import scipy.optimize
import scipy.spatial
pumps = pd.read_csv('pumps.csv')
pumps
cholera = pd.read_csv('cholera.csv')
cholera.loc[0:20]
pp.figure(figsize=(6,6))
pp.scatter(pumps.x, pumps.y, color='b')
pp.scatter(cholera.x, cholera.y, color='r', s=3)
img = matplotlib.image.imread('london.png')
pp.figure(figsize=(10,10))
pp.imshow(img, extent=[-0.38, 0.38, -0.38, 0.38])
pp.scatter(pumps.x, pumps.y, color='b')
pp.scatter(cholera.x, cholera.y, color='r', s=3)
cholera.closest.value_counts()
cholera.groupby('closest').deaths.sum()
def simulate(n):
return pd.DataFrame({'closest': np.random.choice([0,1,4,5], size=n, p=[0.65, 0.15, 0.10, 0.10])})
simulate(489).closest.value_counts()
sampling = pd.DataFrame({'counts': [simulate(489).closest.value_counts()[0] for i in range(10000)]})
sampling.counts.hist(histtype='step')
scipy.stats.percentileofscore(sampling.counts, 340)
100 - 98.14
###Output
_____no_output_____ |
03. Training a Classifier/DigitClassifier.ipynb | ###Markdown
**INITIALIZATION:**- I use these three lines of code at the top of each of my notebooks because they help prevent problems when reloading the same project. The third line of code enables visualization within the notebook.
###Code
#@ INITIALIZATION:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
**DOWNLOADING LIBRARIES AND DEPENDENCIES:**- I have downloaded all the libraries and dependencies required for the project in one particular cell.
###Code
#@ INSTALLING DEPENDENCIES: UNCOMMENT BELOW:
# !pip install -Uqq fastbook
# import fastbook
# fastbook.setup_book()
#@ DOWNLOADING LIBRARIES AND DEPENDENCIES:
from fastai.vision.all import * # Getting all the Libraries.
from fastbook import * # Getting all the Libraries.
matplotlib.rc("image", cmap="Greys") # Initializing Dependencies.
###Output
_____no_output_____
###Markdown
**GETTING THE DATA:**- I will download a sample of **MNIST**.
###Code
#@ GETTING THE DATA:
PATH = untar_data(URLs.MNIST_SAMPLE) # Path to the Dataset.
PATH.ls() # Inspecting Items.
#@ INSPECTING THE TRAINING SET:
(PATH/"train").ls() # Inspecting Items.
#@ INSPECTING THE FOLDERS:
threes = (PATH/"train"/"3").ls().sorted() # Getting Same Order of Items.
sevens = (PATH/"train"/"7").ls().sorted() # Getting Same Order of Items.
threes # Inspecting Items.
#@ INSPECTING IMAGE:
im3_path = threes[1] # Path to the Image.
im3 = Image.open(im3_path) # Getting an Image.
im3
#@ CONVERTING INTO ARRAY:
array(im3)[4:10, 4:10] # Getting Numpy Array.
#@ CONVERTING INTO ARRAY:
tensor(im3)[4:10, 4:10] # Getting a Tensor.
#@ INSPECTING PIXELS:
im3_t = tensor(im3) # Getting a Tensor.
df = pd.DataFrame(im3_t[4:15, 4:22]) # Creating a DataFrame.
df.style.set_properties(**{"font-size":"6pt"})\
.background_gradient("Greys") # Inspecting the Image.
###Output
_____no_output_____
###Markdown
**PIXEL SIMILARITY:**- I will get the average of the pixel values for each of the groups of 3s and 7s. I will create a tensor containing all the 3s stacked together.
###Code
#@ GETTING TENSOR VALUES:
seven_tensors = [tensor(Image.open(o)) for o in sevens] # Initializing Tensor Values.
three_tensors = [tensor(Image.open(o)) for o in threes] # Initializing Tensor Values.
len(three_tensors), len(seven_tensors) # Inspecting Number of Tensors.
#@ INSPECTING A IMAGE:
show_image(three_tensors[1]); # Inspecting a Image.
#@ GETTING STACKED TENSORS OF FLOATS:
stacked_sevens = torch.stack(seven_tensors).float() / 255 # Getting Stacked Tensors.
stacked_threes = torch.stack(three_tensors).float() / 255 # Getting Stacked Tesnors.
stacked_threes.shape # Inspecting Shape of Stack.
#@ RANK OF TENSORS:
len(stacked_threes.shape) # Getting Rank of Tensor.
#@ RANK OF TENSORS:
stacked_threes.ndim # Getting Rank of Tensor.
#@ MEAN OF IMAGE TENSORS:
mean3 = stacked_threes.mean(0) # Getting Mean of Pixels.
show_image(mean3); # Inspecting the Mean.
#@ MEAN OF IMAGE TENSORS:
mean7 = stacked_sevens.mean(0) # Getting Mean of Pixels.
show_image(mean7); # Inspecting the Mean.
#@ GETTING SAMPLE IMAGE:
a_3 = stacked_threes[1] # Getting an Element.
show_image(a_3); # Inspecting the Item.
###Output
_____no_output_____
###Markdown
**Note:**- Taking the mean of absolute value of differences is called **Mean Absolute Difference** or **L1 Norm**.- Taking the mean of square of differences and then taking the square root is called **Root Mean Squared Error** or **L2 Norm**.
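Written out explicitly (my notation, for two tensors $a$ and $b$ with $n$ entries each), the two distances computed in the next cells are
$$\mathrm{L1}(a, b) = \frac{1}{n}\sum_{i=1}^{n}\lvert a_i - b_i\rvert , \qquad \mathrm{L2}(a, b) = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(a_i - b_i)^2} .$$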
###Code
#@ CALCULATING L1 NORM AND L2 NORM:
dist_3_abs = (a_3 - mean3).abs().mean() # Getting L1 Norm.
dist_3_sqr = ((a_3 - mean3)**2).mean().sqrt() # Getting L2 Norm.
dist_3_abs, dist_3_sqr # Inspecting the Results.
#@ CALCULATING L1 NORM AND L2 NORM:
dist_7_abs = (a_3 - mean7).abs().mean() # Getting L1 Norm.
dist_7_sqr = ((a_3 - mean7)**2).mean().sqrt() # Getting L2 Norm.
dist_7_abs, dist_7_sqr # Inspecting the Results.
#@ CALCULATING L1 NORM AND L2 NORM:
print(F.l1_loss(a_3.float(), mean7)) # Getting L1 Norm.
print(F.mse_loss(a_3, mean7).sqrt()) # Getting L2 Norm.
###Output
tensor(0.1586)
tensor(0.3021)
###Markdown
**ARRAYS AND TENSORS:**
###Code
#@ ARRAYS AND TENSORS:
data = [[1, 2, 3], [4, 5, 6]] # Example List.
arr = array(data) # Creating Array.
tns = tensor(data) # Creating Tensor.
arr, tns # Inspecting the Results.
###Output
_____no_output_____
###Markdown
**COMPUTING METRICS USING BROADCASTING:**
###Code
#@ CREATING VALIDATION DATASET:
valid_3_tens = torch.stack([tensor(Image.open(o)) for o in \
(PATH/"valid"/"3").ls()]) # Creating Validation Tensors.
valid_3_tens = valid_3_tens.float() / 255 # Getting Normalized Tensors.
valid_7_tens = torch.stack([tensor(Image.open(o)) for o in \
(PATH/"valid"/"7").ls()]) # Creating Validation Tensors.
valid_7_tens = valid_7_tens.float() / 255 # Getting Normalized Tensors.
valid_3_tens.shape, valid_7_tens.shape # Inspecting the Shape of Tensors.
#@ FUNCTION FOR CALCULATING MAE:
def mnist_distance(a, b): # Initializing Function.
return (a - b).abs().mean((-1, -2)) # Getting MAE.
mnist_distance(a_3, mean3) # Implementation of Function.
#@ IMPLEMENTATION OF MAE:
valid_3_dist = mnist_distance(valid_3_tens, mean3) # Initializing the Function.
valid_3_dist, valid_3_dist.shape # Inspecting the Results.
###Output
_____no_output_____
###Markdown
**Note:**- I will use the function to figure out whether an image is a 3 by using the following logic: if the distance between the digit in question and the ideal 3 is less than the distance to the ideal 7, then it's a 3.
###Code
#@ DEFINING THE FUNCTION:
def is_3(x): # Initializing the Function.
return mnist_distance(x, mean3) < mnist_distance(x, mean7) # Inspecting Distance.
is_3(a_3), is_3(a_3).float() # Inspecting the Results.
#@ INSPECTING ACCURACY:
accuracy_3s = is_3(valid_3_tens).float().mean() # Getting Accuracy.
accuracy_7s = (1 - is_3(valid_7_tens).float()).mean() # Getting Accuracy.
accuracy_3s, accuracy_7s, (accuracy_3s + accuracy_7s) / 2 # Getting Results.
###Output
_____no_output_____
###Markdown
**STOCHASTIC GRADIENT DESCENT:**
###Code
#@ GETTING GRADIENTS:
def f(x): # Initializing a Function.
return x**2
xt = tensor(3.).requires_grad_() # Getting Gradients.
yt = f(xt) # Calculating Gradients.
yt # Inspecting.
#@ GETTING GRADIENTS:
def f(x): # Initializing a Function.
return (x**2).sum()
xt = tensor([3., 4., 10.]).requires_grad_() # Getting Gradients.
yt = f(xt) # Calculating Gradients.
yt # Inspecting.
#@ STOCHASTIC GRADIENT DESCENT:
time = torch.arange(0, 20).float(); time # Initialization.
speed = torch.randn(20)*3 + 0.75*(time-9.5)**2 + 1 # Initialization.
plt.scatter(time, speed); # Inspecting.
#@ IMPLEMENTATION OF STOCHASTIC GRADIENT DESCENT:
def f(t, params): # Defining a Function.
a, b, c = params
return a*(t**2) + (b*t) + c # Initializing a Quadratic Function.
def mse(preds, targets): # Defining a Loss Function.
return ((preds - targets)**2).mean() # Initializing MSE.
###Output
_____no_output_____
###Markdown
**Step 1: Initialize the Parameters**- I will initialize the parameters to random values and tell **PyTorch** to track the gradients.
###Code
#@ INITIALIZING PARAMETERS:
params = torch.randn(3).requires_grad_() # Initializing Parameters.
orig_params = params.clone()
###Output
_____no_output_____
###Markdown
**Step 2: Calculate the Predictions**- Now I will calculate the predictions.
###Code
#@ CALCULATING PREDICTIONS:
preds = f(time, params) # Implementation of Function.
#@ INSPECTING PREDICTIONS:
def show_preds(preds, ax=None): # Defining a Function.
if ax is None: ax=plt.subplots()[1] # Initialization.
ax.scatter(time, speed) # Scatter Plots.
ax.scatter(time, to_np(preds), color="red") # Scatter Plots.
ax.set_ylim(-300, 100)
show_preds(preds) # Implementation of Function.
###Output
_____no_output_____
###Markdown
**Step 3: Calculate the Loss**- I will calculate the loss here.
###Code
#@ CALCULATING LOSS:
loss = mse(preds, speed) # Implementation of Function.
loss
###Output
_____no_output_____
###Markdown
**Step 4: Calculate the Gradients**- The next step is to calculate the gradients or an approximation of how the parameters need to change. I will pick the learning rate to improve the predictions.
###Code
#@ CALCULATING GRADIENTS:
loss.backward() # Initializing Backpropagation.
params.grad # Gradients.
params.grad * 1e-5 # Gradients.
###Output
_____no_output_____
###Markdown
**Step 5: Step the Weights**- I will update the parameters based on the gradients.
###Code
#@ STEPPING THE WEIGHTS:
lr = 1e-5 # Initializing Learning Rate.
params.data -= lr * params.grad.data # Updating Parameters.
params.grad = None
#@ INSPECTING LOSS:
preds = f(time, params) # Implementation of Function.
mse(preds, speed) # Getting Loss.
#@ INSPECTING PREDICTIONS:
show_preds(preds)
###Output
_____no_output_____
###Markdown
**Step 6: Repeat the Process**- I will define a function to repeat the above mentioned process a few times.
###Code
#@ DEFINING THE FUNCTION:
def apply_step(params, prn=True): # Defining the Function.
preds = f(time, params) # Getting Predictions.
loss = mse(preds, speed) # Getting Loss.
loss.backward() # Initializing Backpropagation.
params.data -= lr * params.grad.data # Updating Parameters.
params.grad = None
if prn: print(loss.item())
return preds
#@ IMPLEMENTATION OF FUNCTION:
for i in range(10):
apply_step(params) # Repeating the Process.
#@ INSPECTING THE PREDICTIONS:
params = orig_params.detach().requires_grad_()
_, axs = plt.subplots(1, 4, figsize=(12, 3)) # Initialization.
for ax in axs:
show_preds(apply_step(params, False), ax) # Inspecting Predictions.
plt.tight_layout()
###Output
_____no_output_____
###Markdown
**Step 7: Stop** **MNIST LOSS FUNCTION:**- I will concatenate all the images into a single tensor and change them from a list of matrices or **Rank 3** tensor to a list of vectors or **Rank 2** tensor. I will label 1 for 3s images and 0 for 7s images.
###Code
#@ CREATING TRAINING DATASET:
train_x = torch.cat([stacked_threes, stacked_sevens]).view(-1, 28*28) # Initializing Concatenation.
train_y = tensor([1]*len(threes) + [0]*len(sevens)).unsqueeze(1) # Creating Labels.
train_x.shape, train_y.shape # Inspecting Shape of Dataset.
#@ CREATING TRAINING DATASET:
dset = list(zip(train_x, train_y)) # Creating Dataset.
x, y = dset[0]
x.shape, y # Inspecting Shape.
#@ CREATING VALIDATION DATASET:
valid_x = torch.cat([valid_3_tens, valid_7_tens]).view(-1, 28*28) # Initializing Concatenation.
valid_y=tensor([1]*len(valid_3_tens)+[0]*len(valid_7_tens)).unsqueeze(1) # Creating Labels.
valid_dset = list(zip(valid_x, valid_y)) # Creating Dataset.
#@ INITIALIZING RANDOM WEIGHTS:
def init_params(size, std=1.0): # Initializing Function.
return (torch.randn(size) * std).requires_grad_() # Getting Gradients.
weights = init_params((28*28, 1)) # Implementation of Function.
###Output
_____no_output_____
###Markdown
**Note:**- The function **weights*pixels** alone is not flexible enough. I will initialize a random number for the intercept as well. In **Neural Networks**, in the equation **y=w*x+b**, w is called the **weights** and b is called the **bias**. Together, the weights and bias make up the parameters.
###Code
#@ INITIALIZING BIAS:
bias = init_params(1) # Initializing Intercepts.
(train_x[0]*weights.T).sum() + bias # Inspecting Predictions.
#@ FUNCTION FOR MATRIX MULTIPLICATION:
def linear1(xb): # Initializing Function.
return xb@weights + bias # Initializing Matrix Multiplication.
#@ IMPLEMENTATION:
preds = linear1(train_x) # Initializing Function.
preds # Inspecting Predictions.
#@ CALCULATING ACCURACY:
corrects = (preds > 0.0).float() == train_y # Getting Accuracy.
corrects.float().mean().item() # Getting Accuracy.
#@ CALCULATING ACCURACY:
with torch.no_grad():
weights[0] *= 1.0001 # Changing Weights.
preds = linear1(train_x) # Getting Predictions.
((preds > 0.0).float() == train_y).float().mean().item() # Getting Accuracy.
#@ DEFINING LOSS FUNCTION:
def mnist_loss(predictions, targets): # Initializing Function.
return torch.where(targets==1, 1-predictions,
predictions).mean() # Getting Mean of Distances.
#@ INSPECTING THE IMPLEMENTATION:
trgts = tensor([1, 0, 1]) # Initializing Tensor.
prds = tensor([0.9, 0.4, 0.2]) # Initializing Tensor.
torch.where(trgts==1, 1-prds, prds) # Getting Loss.
###Output
_____no_output_____
###Markdown
**SIGMOID FUNCTION:**- The **Sigmoid** function always outputs a number between 0 and 1.
###Code
#@ INITIALIZING SIGMOID FUNCTION:
def sigmoid(x): # Defining Sigmoid Function.
return 1 / (1 + torch.exp(-x));
#@ INSPECTING SIGMOID FUNCTION:
plot_function(torch.sigmoid, title="Sigmoid", min=-4,
max=4); # Inspecting Function.
#@ UPDATING LOSS FUNCTION:
def mnist_loss(predictions, targets): # Initializing Loss Function.
predictions = predictions.sigmoid() # Initializing Sigmoid Function.
return torch.where(targets==1, 1-predictions,
predictions).mean() # Getting Mean of Losses.
###Output
_____no_output_____
###Markdown
**SGD AND MINIBATCHES:**- The process of changing or updating the weights based on the gradients is called an **Optimization Step**. The small group of data items over which the average loss is calculated at a time is called a **Minibatch**. The number of data items in the **Minibatch** is called the **Batch Size**. A larger **Batch Size** gives a more accurate and stable estimate of the dataset gradients from the loss function, whereas a **Batch Size** of one results in an imprecise and unstable gradient.
###Code
#@ INITIALIZING DATALOADER:
coll = range(15) # Initialization.
dl = DataLoader(coll, batch_size=5, shuffle=True) # Initializing DataLoader.
list(dl) # Inspecting the Results.
###Output
_____no_output_____
###Markdown
**Note:**- A collection that contains the tuples of independent and dependent variables is known in **PyTorch** as a **Dataset**.
###Code
#@ IMPLEMENTATION OF DATALOADER:
ds = L(enumerate(string.ascii_lowercase)) # Initializing a Dataset.
dl = DataLoader(ds, batch_size=6, shuffle=True) # Initializing DataLoader.
list(dl) # Inspecting the Results.
###Output
_____no_output_____
###Markdown
**PUTTING IT ALL TOGETHER AND CONCLUSION:**
###Code
#@ INITIALIZING PARAMETERS:
weights = init_params((28*28, 1)) # Initializing Weights.
bias = init_params(1) # Initializing Bias.
#@ INITIALIZING DATALOADER: TRAINING DATASET:
dl = DataLoader(dset, batch_size=256) # Initializing DataLoader.
xb, yb = first(dl) # Getting First Elements.
xb.shape, yb.shape # Inspecting the Shape.
#@ INITIALIZING DATALOADER: VALIDATION DATASET:
valid_dl = DataLoader(valid_dset, batch_size=256) # Initializing DataLoader.
v_xb, v_yb = first(valid_dl) # Getting First Elements.
v_xb.shape, v_yb.shape # Inspecting the Shape.
#@ CREATING MINBATCH FOR TESTING:
batch = train_x[:4] # Initialization.
batch.shape # Inspecting Shape.
#@ GETTING PREDICTIONS:
preds = linear1(batch); preds # Getting Predictions.
#@ GETTING LOSS:
loss = mnist_loss(preds, train_y[:4]); loss # Getting Loss.
#@ CALCULATING GRADIENTS:
loss.backward() # Backpropagation.
weights.grad.shape, weights.grad.mean(), bias.grad # Inspecting Gradients.
#@ DEFINING THE FUNCTION:
def calc_grad(xb, yb, model): # Defining Function.
preds = model(xb) # Getting Predictions.
loss = mnist_loss(preds, yb) # Getting Loss.
loss.backward() # Initializing Backpropagation.
#@ IMPLEMENTATION OF FUNCTION:
calc_grad(batch, train_y[:4], linear1) # Initializing the Function.
weights.grad.mean(), bias.grad # Inspecting Weights and Bias.
#@ ZEROING GRADIENTS:
weights.grad.zero_() # Zeroing Weights.
bias.grad.zero_(); # Zeroing Bias.
#@ INITIALIZING BASIC TRAINING LOOP:
def train_epoch(model, lr, params): # Initializing Training Function.
for xb, yb in dl:
calc_grad(xb, yb, model) # Calculating Gradients.
for p in params:
p.data -= p.grad * lr # Optimizing Parameters.
p.grad.zero_() # Zeroing Gradients.
#@ DEFINING FUNCTION FOR ACCURACY:
def batch_accuracy(xb, yb): # Initializing Function.
preds = xb.sigmoid() # Initializing Sigmoid.
correct = (preds > 0.5) == yb # Checking Predictions.
return correct.float().mean() # Getting Mean.
#@ IMPLEMENTATION:
batch_accuracy(batch, train_y[:4]) # Getting Accuracy.
#@ DEFINING THE FUNCTION:
def validate_epoch(model): # Initializing the Function.
accs = [batch_accuracy(model(xb), yb) for xb, yb in \
valid_dl] # Getting Accuracy.
return round(torch.stack(accs).mean().item(), 4) # Getting Rounded Accuracy.
#@ IMPLEMENTATION:
validate_epoch(linear1) # Implementation of Function.
#@ TRAINING AND EVALUATION: ONE EPOCH:
lr = 1. # Initializing LR.
params = weights, bias # Initializing Weights and Bias.
train_epoch(linear1, lr, params) # Initializing Training.
validate_epoch(linear1) # Evaluation.
#@ TRAINING AND EVALUATION: 20 EPOCH:
for i in range(20):
train_epoch(linear1, lr, params) # Initializing Training.
print(validate_epoch(linear1), end=(" ")) # Evaluation.
###Output
0.8265 0.89 0.9183 0.9276 0.9398 0.9466 0.9505 0.9524 0.9559 0.9578 0.9598 0.9608 0.9612 0.9617 0.9632 0.9637 0.9647 0.9656 0.9671 0.9676
###Markdown
**CREATING AN OPTIMIZER:**
###Code
#@ INITIALIZING LINEAR MODEL:
linear_model = nn.Linear(28*28, 1) # Linear Model.
#@ INSPECTING PARAMETERS:
w, b = linear_model.parameters() # Initializing Parameters.
w.shape, b.shape # Inspecting Shape.
#@ CREATING AN OPTIMIZER:
class BasicOptim: # Initializing Optimizer Class.
def __init__(self, params, lr): # Initializing Constructor Function.
self.params, self.lr = list(params), lr # Initializing Parameters.
def step(self, *args, **kwargs): # Function for Optimization.
for p in self.params:
p.data -= p.grad.data * self.lr # Optimizing Parameters.
def zero_grad(self, *args, **kwargs): # Function for Zeroing Gradients.
for p in self.params:
p.grad = None # Zeroing Gradients.
#@ IMPLEMENTATION:
opt = BasicOptim(linear_model.parameters(), lr) # Initializing Optimizer.
#@ UPDATING THE TRAINING LOOP:
def train_epoch(model): # Initializing Training Function.
for xb, yb in dl:
calc_grad(xb, yb, model) # Calculating Gradients.
opt.step() # Initializing Optimization.
opt.zero_grad() # Zeroing Gradients.
#@ INSPECTING VALIDATION:
validate_epoch(linear_model) # Implementation of Function.
#@ CREATING TRAINING LOOP:
def train_model(model, epochs): # Function for Training.
for i in range(epochs):
train_epoch(model) # Training Epoch.
print(validate_epoch(model), end=(" ")) # Inspecting Accuracy.
#@ IMPLEMENTATION:
train_model(linear_model, 20) # Training the Model.
###Output
0.4932 0.7744 0.854 0.9165 0.936 0.9482 0.956 0.9633 0.9658 0.9668 0.9682 0.9712 0.9726 0.9746 0.9755 0.9765 0.9775 0.978 0.9785 0.9785
###Markdown
**IMPLEMENTATION USING FASTAI:**
###Code
#@ IMPLEMENTATION OF OPTIMIZER USING FASTAI:
linear_model = nn.Linear(28*28, 1) # Linear Model.
opt = SGD(linear_model.parameters(), lr) # Initializing SGD Optimizer.
train_model(linear_model, 20) # Training the Model.
#@ IMPLEMENTATION OF LEARNER USING FASTAI:
dls = DataLoaders(dl, valid_dl) # Initializing DataLoader.
learn = Learner(dls, nn.Linear(28*28, 1), opt_func=SGD,
loss_func=mnist_loss,
metrics=batch_accuracy) # Initializing Learner.
learn.fit(10, lr=lr) # Training the Model.
###Output
_____no_output_____
###Markdown
**ADDING NONLINEARITY:**
###Code
#@ DEFINING SIMPLE NEURAL NETWORK:
def simple_net(xb): # Initializing Neural Networks.
res = xb@w1 + b1 # Simple Linear Classifier.
res = res.max(tensor(0.0)) # Getting Maximum.
res = res@w2 + b2 # Simple Linear Classifier.
return res
#@ INITIALIZING PARAMETERS:
w1 = init_params((28*28, 30)) # Initializing Weight.
b1 = init_params(30) # Initializing Bias.
w2 = init_params((30, 1)) # Initializing Weight.
b2 = init_params(1) # Initializing Bias.
###Output
_____no_output_____
###Markdown
**Note:**- **w1** has 30 output activations which means that **w2** must have 30 input activations. That means that the first layer can construct 30 different features, each representing different mix of pixels. - The function **res.max(tensor(0.0))** is called a **Rectified Linear Unit** or **RELU**. It replaces every negative number with zero.
###Code
#@ INSPECTING RELU:
plot_function(F.relu); # RELU Function.
#@ SIMPLE NEURAL NETWORKS USING PYTORCH:
simple_net = nn.Sequential(nn.Linear(28*28, 30), # Initializing Linear Layer.
nn.ReLU(), # Initializing RELU Activation.
nn.Linear(30, 1)) # Initializing Linear Layer.
#@ INITIALIZING MODEL:
learn = Learner(dls, simple_net, opt_func=SGD, # Initializing DataLoaders, Model and Optimizer.
loss_func=mnist_loss, # Initializing Loss Function.
metrics=batch_accuracy) # Initializing Accuracy.
learn.fit(40, 0.1) # Training the Model.
#@ INSPECTING ACCURACY:
plt.plot(L(learn.recorder.values).itemgot(2));
#@ GETTING ACCURACY:
learn.recorder.values[-1][2] # Getting Accuracy.
#@ IMPLEMENTATION USING FASTAI:
dls = ImageDataLoaders.from_folder(PATH) # Initializing DataLoders.
learn = cnn_learner(dls, resnet18, pretrained=False, # Initializing Convolutions.
loss_func=F.cross_entropy, # Initializing Cross Entropy Loss.
metrics=accuracy) # Initializing Accuracy Metric.
learn.fit_one_cycle(1, 0.1) # Training the Classifier.
###Output
_____no_output_____ |
pytorch/02-nn.ipynb | ###Markdown
Neural NetworksNeural networks can be constructed using torch.nn package.* autograd: defines models and differentiates them* nn.Module: contains layers, and a method forward(input) that returns the outputA typical training procedure for a neural network is as follows:1. define the neural network that has some learnable parameters(or weights)2. iterate over a dataset of inputs3. process input through the network(forward pass)4. compute the loss(how far is the output from being correct)5. propagate gradients back into the network's parameters(backward pass)6. update the weights of the network, typically using a simple update rule: weight -= learning_rate * gradientMain objects and modules in PyTorch* **torch.Tensor** - A multi-dimensional array with support for autograd operations like *backward()*. Also holds the gradient w.r.t. the tensor.* **nn.Module** - Neural network module. Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.* **nn.Parameter** - A kind of Tensor, that is automatically registered as a parameter when assigned as an attribute to a Module.* **autograd.Function** - Implements forward and backward definitions of an autograd operation. Every Tensor operation, creates at least a single Function node, that connects to functions that created a Tensor and encodes its history.
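As a compact reference before the example network below, here is a minimal, self-contained sketch of how these six steps typically fit together; the tiny linear model and the random stand-in data are illustrative assumptions, not part of the tutorial.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a toy model and random data stand in for a real setup.
net = nn.Linear(10, 1)                                   # 1. define a model with learnable parameters
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
dataloader = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(5)]  # stand-in for a DataLoader

for inputs, targets in dataloader:                       # 2. iterate over the inputs
    optimizer.zero_grad()                                #    clear previously accumulated gradients
    outputs = net(inputs)                                # 3. forward pass
    loss = criterion(outputs, targets)                   # 4. compute the loss
    loss.backward()                                      # 5. backward pass: propagate gradients
    optimizer.step()                                     # 6. update the weights
```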
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
###Output
_____no_output_____
###Markdown
Define the NetworkYou just have to define the **forward** function, and the **backward** function (where gradients are computed) is automatically defined for you using **autograd**. You can use any of the **Tensor** operations in the forward function.
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 channel, 6 output channels, 5x5 square convoluation
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# the square: only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
# list the net parameters
params = list(net.parameters())
for param in params:
print(param.size())
###Output
_____no_output_____
###Markdown
Feed the Network and Process forward
###Code
# feed random data into the network
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
# zero the gradient buffers of all parameters and backprops with random gradients
net.zero_grad()
out.backward(torch.randn(1, 10))
###Output
_____no_output_____
###Markdown
Compute the LossA *loss function* takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target. There are several different loss functions under the nn package. A simple loss is: nn.MSELoss, which computes the mean-squared error between the input and the target.
###Code
output = net(input)
target = torch.randn(10)
target = target.view(1, -1)
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
print(loss.grad_fn)
print(loss.grad_fn.next_functions[0][0])
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])
###Output
_____no_output_____
###Markdown
Back PropagationTo backpropagate the error all we have to do is to *loss.backward()*. You need to clear the existing gradients though, else gradients will be accumulated to existing gradients. All Tensors in the graph that has *requires_grad=True* will have their *.grad* Tensor accumulated with the gradient.
###Code
net.zero_grad()
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
###Output
_____no_output_____
###Markdown
Update the WeightsThe simplest update rule used in practice is the Stochastic Gradient Descent (SGD): weight = weight - learning_rate * gradient
###Code
learning_rate = 0.01
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
###Output
_____no_output_____
###Markdown
However, as you use neural networks, you want to use various different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. To enable this, we built a small package: *torch.optim* that implements all these methods. Using it is very simple:
###Code
import torch.optim as optim
optimizer = optim.SGD(net.parameters(), lr=0.01)
optimizer.zero_grad()
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()
print(net.conv1.bias.data)
###Output
_____no_output_____ |
feasibility_problem/Tomography_reconstruction.ipynb | ###Markdown
Tomography reconstruction$\newcommand{\n}[1]{\left\|#1 \right\|}$ $\renewcommand{\a}{\alpha} $ $\renewcommand{\b}{\beta} $ $\renewcommand{\c}{\gamma} $ $\renewcommand{\d}{\delta} $ $\newcommand{\D}{\Delta} $ $\newcommand{\la}{\lambda} $ $\renewcommand{\t}{\tau} $ $\newcommand{\s}{\sigma} $ $\newcommand{\e}{\varepsilon} $ $\renewcommand{\th}{\theta} $ $\newcommand{\x}{\bar x} $ $\newcommand{\R}{\mathbb R} $ $\newcommand{\N}{\mathbb N} $ $\newcommand{\Z}{\mathbb Z} $ $\newcommand{\E}{\mathcal E} $ $\newcommand{\lr}[1]{\left\langle #1\right\rangle}$$\newcommand{\nf}[1]{\nabla f(#1)} $$\newcommand{\hx}{\hat x} $$\newcommand{\hy}{\hat y} $$\DeclareMathOperator{\prox}{prox} $$\DeclareMathOperator{\argmin}{argmin} $$\DeclareMathOperator{\dom}{dom} $$\DeclareMathOperator{\id}{Id} $$\DeclareMathOperator{\conv}{conv} $We want to solve $Ax = b$, where $A$ is the matrix obtained from the projection tomography operator and $b$ is the observed sinogram.
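The cells below define the operator `T` whose fixed points we look for. As a reading aid (my own rewriting of the code, with $a_i$ the rows of $A$, $m$ the number of measurements and $D$ the diagonal matrix of squared row norms), the operator is
$$T(x) = x - \frac{1}{m}\,A^{\top} D^{-1}(Ax - b), \qquad D = \operatorname{diag}\big(\lVert a_1\rVert^2, \dots, \lVert a_m\rVert^2\big),$$
so any solution of $Ax = b$ is a fixed point of $T$; the Krasnoselskii-Mann and aGRAAL iterations below search for such a fixed point.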
###Code
import scipy.sparse.linalg as spr_LA
import matplotlib as mpl
from skimage import data, transform, img_as_float, transform
from skimage.color import rgb2gray
from tomo_utils import generate_synthetic_data, build_projection_operator
from fixed_points import *
import matplotlib.pyplot as plt
import seaborn as sns
%load_ext autoreload
%autoreload 2
%matplotlib inline
sns.set()
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Generate the data
###Code
# number of positions
n_pos = 256
# number or angles
n_angles = 128
#img = generate_synthetic_data(p)
x = rgb2gray(data.load('phantom.png'))
img = transform.resize(x, (n_pos, n_pos))
w = img.ravel()
A = build_projection_operator(n_pos, n_angles)
#A = build_projection_operator(n_pos, n_dir=n_angles, l_det=64)
n = n_pos * n_pos
m = n_pos * n_angles
x_true = w
# no noise
#b = A.dot(x_true)
# with noise
b = A.dot(x_true) + np.random.randn(m)
# starting point
x0 = np.zeros(n)
# define operator T:
norms = spr_LA.norm(A, axis=1)
T = lambda x: x - 1./m * A.T.dot((A.dot(x)-b)/norms**2)
J = lambda x: LA.norm(T(x)-x)
N = 1000
ans1 = krasn_mann(T, x0, 0, numb_iter=N)
ans2 = fixed_point_agraal(T, x0, numb_iter=N, phi=1.5, output=False)
x1 = ans1[1]
x2 = ans2[1]
print("Fixed point residuals. KM and aGRAAL:", J(x1), J(x2))
###Output
Fixed point residuals. KM and aGRAAL: 0.01206923787295626 4.7282722448181315e-05
###Markdown
Show the results
###Code
plt.plot(ans1[0], '--b', label="KM: $x^{k+1}=Tx^k$")
plt.plot(ans2[0], '#FFD700', label="aGRAAL")
plt.yscale('log')
plt.legend()
plt.xlabel(u'iterations, $k$')
plt.ylabel('residual')
plt.legend()
#plt.grid()
plt.savefig('figures/tomo-12-darkgrid.pdf',bbox_inches='tight')
plt.show()
plt.clf()
###Output
_____no_output_____
###Markdown
What is the stepsize in aGRAAL?
###Code
plt.plot(ans2[2], '.', color='#FFD700', label="aGRAAL")
plt.legend()
plt.xlabel(u'iterations, $k$')
plt.ylabel('stepsize $\lambda_k$')
plt.legend()
#plt.grid()
plt.savefig('figures/tomo-22-darkdrid.pdf',bbox_inches='tight')
plt.show()
plt.clf()
###Output
_____no_output_____
###Markdown
Show the original image and reconstructed ones
###Code
print("Original image and reconstructed ones")
img1 = x1.reshape(n_pos, n_pos)
img2 = x2.reshape(n_pos, n_pos)
fig, ax, = plt.subplots(nrows=1, ncols=3, figsize=(15, 5))
ax[0].imshow(img, cmap='gray')
ax[0].set_title("True")
ax[1].imshow(img1, cmap='gray')
ax[1].set_title("KM")
ax[2].imshow(img2, cmap='gray')
ax[2].set_title("aGRAAL")
plt.show(fig)
###Output
Original image and reconstructed ones
|
Model/12-1xgb(easy).ipynb | ###Markdown
XGBoost: Easy version
###Code
import numpy as np # array, vector, matrix calculations
import pandas as pd # DataFrame handling
#import shap # for consistent, signed variable importance measurements
import xgboost as xgb # gradient boosting machines (GBMs)
import math
import matplotlib.pyplot as plt # plotting
pd.options.display.max_columns = 999 # enable display of all columns in notebook
# enables display of plots in notebook
%matplotlib inline
np.random.seed(42) # set random seed for reproducibility
# import XLS file
path = ".\\credit_cards_dataset.csv"
#data = pd.read_excel(path, skiprows=1) # skip the first row of the spreadsheet
#path = 'C:\\Users\\User\\Desktop\\data\\original_data.csv'
#data = pd.read_csv(path, skiprows=1) # skip the first row of the spreadsheet
data = pd.read_csv(path) # skip the first row of the spreadsheet
# remove spaces from target column name
data = data.rename(columns={'default payment next month': 'DEFAULT_NEXT_MONTH'})
# assign target and inputs for GBM
#y = 'DEFAULT_NEXT_MONTH'
y='default.payment.next.month'
X = [name for name in data.columns if name not in [y, 'ID', 'Y_Value']]
print('y =', y)
print('X =', X)
split_ratio=0.7
# execute split
split = np.random.rand(len(data)) < split_ratio
train=data[split]
test=data[~split]
print('Train data rows = %d. columns = %d' % (train.shape[0], train.shape[1]))
print('Test data rows = %d. columns = %d' % (test.shape[0], test.shape[1]))
# XGBoost uses SVMLight data structure, not Numpy arrays or Pandas DataFrames
mod = xgb.XGBRegressor(
gamma=1,
learning_rate=0.01,
max_depth=3,
n_estimators=10000,
subsample=0.8,
random_state=42,
verbosity=1
)
mod.fit(train[X], train[y])
predictions = mod.predict(test[X])
from sklearn.metrics import mean_squared_error
test[y]
rmse = math.sqrt(mean_squared_error(test[y], predictions))
print(rmse)
#print("score: {0:,.0f}".format(rmse))
predictions=np.rint(predictions)
predictions
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn import metrics
from sklearn.model_selection import GridSearchCV
accuracy = accuracy_score(test[y], predictions)
cm = confusion_matrix(test[y], predictions)
precision = precision_score(test[y], predictions)
recall = recall_score(test[y], predictions)
print(accuracy)
print(cm)
print(precision)
print(recall)
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.figure()
plot_confusion_matrix(cm, classes=['Non_Default','Default'], normalize=False,
title='Non Normalized confusion matrix')
###Output
Confusion matrix, without normalization
[[6567 338]
[1283 725]]
|
Section_04/Example_04_(05-08)/05-3D_Chart.ipynb | ###Markdown
Data visualization with 3D Charts $$f(x, y)=\frac{1}{3^{-x^2-y^2}+1}$$
###Code
import numpy as np                           # imports added so this cell runs on its own
import matplotlib.pyplot as plt
from matplotlib import cm                    # colour maps (cm.seismic is used below)
from mpl_toolkits.mplot3d import Axes3D      # noqa: F401 - registers the 3d projection on older Matplotlib
x = np.linspace(-2, 2, 1001)
y = np.linspace(-2, 2, 1001)
x, y = np.meshgrid(x, y) # This turns the x and y vectors into 2D arrays.
def f(x, y):
    return 1/(3**(-(x**2)-(y**2)) + 1)
fig = plt.figure(figsize=[16, 10], dpi=200)
ax = fig.add_subplot(111, projection='3d')   # fig.gca(projection=...) is deprecated in newer Matplotlib
ax.set_xlabel('x', fontsize=20)
ax.set_ylabel('y', fontsize=20)
ax.set_zlabel('f(x, y) - Cost', fontsize=20)
ax.plot_surface(x, y, f(x,y), cmap=cm.seismic, alpha=.5) # cmap defines the colors of the figure.
plt.show()
###Output
_____no_output_____ |
notebook_templates/_02_loss.ipynb | ###Markdown
Loss> Train and evaluate your algorithms on real data. You can also save the model for later use, or deploy it to production! ***input:*** clean and tidy dataset from the data notebook + ML model class from the hypothesis space notebook. ***output:*** evaluated, trained and (optionally) deployed model. ***description:*** In this notebook you train and evaluate the ML methods implemented, using the whole dataset. You can also save the model for later use, deploy it to a production environment, or use this notebook as the final output. Edit this and the other text cells to describe your project. Import relevant modules
###Code
import numpy as np
# your code here
###Output
_____no_output_____
###Markdown
Define notebook parameters
###Code
# This cell is tagged with 'parameters'
seed = 0
###Output
_____no_output_____
###Markdown
make direct derivations from the parameters:
###Code
np.random.seed(seed)
# your code here
###Output
_____no_output_____
###Markdown
Load clean and tidy dataset
###Code
# your code here
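# A minimal, hedged sketch: the file name and format below are assumptions,
# not part of the template; adapt them to the output of your data notebook.
import pandas as pd
df = pd.read_csv("data/clean_dataset.csv")  # hypothetical path
df.head()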
###Output
_____no_output_____
###Markdown
> Note that depending on the file format and your variables, you might have to redefine datatypes in your dataframe! Split the data into training, testing and validation data
###Code
# your code here
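# A hedged example (assumes `df` holds the features plus a 'target' column; the
# column name is an assumption, not part of the template). Two chained splits
# give train / validation / test sets while reusing the notebook's `seed`:
from sklearn.model_selection import train_test_split
X, y = df.drop(columns=["target"]), df["target"]
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=seed)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25, random_state=seed)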
###Output
_____no_output_____
###Markdown
Define loss. How do you evaluate model performance?
###Code
# your code here
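# A hedged example: for a regression task, mean squared error is a common choice;
# swap in whatever metric matches your problem.
from sklearn.metrics import mean_squared_error
def loss(y_true, y_pred):
    return mean_squared_error(y_true, y_pred)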
###Output
_____no_output_____
###Markdown
Train and evaluate the models
###Code
# your code here
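# A hedged example with a simple baseline; the estimator choice is an assumption,
# replace it with the model class from your hypothesis space notebook.
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train, y_train)
print("validation loss:", loss(y_val, model.predict(X_val)))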
###Output
_____no_output_____
###Markdown
Visualize the results
###Code
# your code here
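# A hedged example: predicted vs. true values on the validation set
import matplotlib.pyplot as plt
plt.scatter(y_val, model.predict(X_val), alpha=0.5)
plt.xlabel("true")
plt.ylabel("predicted")
plt.show()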
###Output
_____no_output_____
###Markdown
You can also include statistical tests!
###Code
# your code here
###Output
_____no_output_____
###Markdown
Validate model (if hyperparameters are optimized)
###Code
# your code here
###Output
_____no_output_____
###Markdown
Visualize validation
###Code
# your code here
###Output
_____no_output_____
###Markdown
Conclusions: How do the results look? Output of this notebook: a saved or deployed trained model
###Code
# your code here
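# A hedged example: persist the trained model for later use or deployment
# (the file name is an assumption).
import joblib
joblib.dump(model, "trained_model.joblib")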
###Output
_____no_output_____ |
web_scraping_tuto/web_scraping_tutorial.ipynb | ###Markdown
Scrape HTML data from a URL and save it. Here we get all links (`<a>` tags) from "https://www.total.com/" and we download the HTML content from all those links. You should adapt this part according to the structure of the website you want to scrape
###Code
# DATA_FOLDER = "html_data_dir/"
# DOCUMENTS_FOLDER = "documents/"
url = "https://www.total.com/"
check_directory(DATA_FOLDER)
check_directory(DOCUMENTS_FOLDER)
data_folder_is_empty = len(glob.glob(DATA_FOLDER+"*")) == 0
if data_folder_is_empty:
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')
all_a_tags = soup.find_all("a")
for a_tag in tqdm(all_a_tags): ##this part is specific to each website structure. Please change it accordingly
href = a_tag.get("href",None)
if (href is not None):
if "http" not in href:
href = "https://total.com" + href
if "total.com/" in href:
filename = DATA_FOLDER + from_url_to_filename(href)
download_save_html_data(href,filename)
###Output
_____no_output_____
###Markdown
Process the downloaded HTML data and save it as JSON files. Each JSON file is a document that our ODQA framework can process. It has 3 key-value pairs: "title", "text" and "url"
###Code
NB_SENTENCES_PER_JSON_DOCUMENT = 5 #Number of sentences per json document.
for filepath in tqdm(glob.glob(DATA_FOLDER+"*")):
json_file_path = os.path.join(DOCUMENTS_FOLDER,os.path.basename(filepath))
if not os.path.isfile(json_file_path+"_0.json"):
try:
doc_text = document_string_from_source(filepath)
doc_json = {"title": doc_text[:20], "text" : "", "url": from_filename_to_url(os.path.basename(filepath))}
doc_text_sentences = doc_text.split(". ")
for i in range(0,len(doc_text_sentences),NB_SENTENCES_PER_JSON_DOCUMENT):
doc_json["text"] = ". ".join(doc_text_sentences[i:i+NB_SENTENCES_PER_JSON_DOCUMENT])
sub_doc_json_file_path = json_file_path + "_{}.json".format(i)
with open(sub_doc_json_file_path,"w+") as json_file:
json.dump(doc_json,json_file)
except UnicodeDecodeError as e:
print("Error : ",e,json_file_path)
else:
print("{} file(s) already exist(s)".format(json_file_path))
###Output
1%| | 1/136 [00:00<00:21, 6.17it/s] |
notebook - machine learning sklearn/ipython notebook/2017-25-11-so-grid-search-sklearn.ipynb | ###Markdown
Grid Search: Hyper-parameters are parameters that are not directly learnt within estimators. In scikit-learn they are passed as arguments to the constructor of the estimator classes. Typical examples include C, kernel and gamma for the Support Vector Classifier, alpha for Lasso, etc.
###Code
# Load libraries
import numpy as np
from sklearn.datasets import load_iris
from sklearn import linear_model
from sklearn.model_selection import GridSearchCV
# Load data
iris = load_iris()
X = iris.data
y = iris.target
###Output
_____no_output_____
###Markdown
create logistic regression
###Code
logistic = linear_model.LogisticRegression()
###Output
_____no_output_____
###Markdown
Create regularization penalty space
###Code
penalty = ['l1', 'l2']
# Create regularization hyperparameter space
C = np.logspace(0, 4, 10)
# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)
###Output
_____no_output_____
###Markdown
Create grid search using 10-fold cross validation
###Code
clf = GridSearchCV(logistic, hyperparameters, cv=10)
###Output
_____no_output_____
###Markdown
Fit grid search
###Code
best_model = clf.fit(X, y)
# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C'])
###Output
Best Penalty: l1
Best C: 7.74263682681
###Markdown
Predict target vector
###Code
best_model.predict(X)
best_model.score(X,y)
###Output
_____no_output_____ |
text_classsification_module/Sentiment_Classification.ipynb | ###Markdown
pre-process text data
###Code
stop_words = stopwords.words("english")
TEXT_CLEANING_RE = "@\S+|https?:\S+|http?:\S|[^A-Za-z0-9]+"
def preprocess(text, stem=False):
# Remove link,user and special characters
text = re.sub(TEXT_CLEANING_RE, ' ', str(text).lower()).strip()
tokens = []
for token in text.split():
if token not in stop_words:
if stem:
tokens.append(stemmer.stem(token))
else:
tokens.append(token)
return " ".join(tokens)
###Output
_____no_output_____
###Markdown
before pre-processing text
###Code
df.text.tail(3)
df.text = df.text.apply(lambda x: preprocess(x))
###Output
_____no_output_____
###Markdown
text after pre-processing
###Code
df.text.tail()
df_train, df_test = train_test_split(df, test_size=0.2, random_state=2)
print("TRAIN size:", len(df_train))
print("TEST size:", len(df_test))
###Output
TRAIN size: 1280000
TEST size: 320000
###Markdown
Word2vec
###Code
documents = [_text.split() for _text in df_train.text]
w2v_model = gensim.models.word2vec.Word2Vec(size=300,
                                            window=7,
                                            min_count=10,
                                            workers=8)
# NOTE: the next line replaces the freshly initialised model above with a previously
# saved one; comment it out to train a new Word2Vec model from scratch instead
w2v_model = gensim.models.Word2Vec.load('model.w2v')
w2v_model.build_vocab(documents)
words = w2v_model.wv.vocab.keys()
vocab_size = len(words)
print("Vocab size", vocab_size)
%%time
w2v_model.train(documents, total_examples=len(documents), epochs=32)
w2v_model.wv.most_similar("queen")
w2v_model.save("model.w2v")
###Output
_____no_output_____
###Markdown
Label encoding
###Code
labels = df_train.target.unique().tolist()
labels
encoder = LabelEncoder()
encoder.fit(df_train.target.tolist())
y_train = encoder.transform(df_train.target.tolist())
y_test = encoder.transform(df_test.target.tolist())
y_train = y_train.reshape(-1,1)
y_test = y_test.reshape(-1,1)
print("y_train",y_train.shape)
print("y_test",y_test.shape)
###Output
y_train (1280000, 1)
y_test (320000, 1)
###Markdown
Preparing X_train and X_test
###Code
%%time
tokenizer = Tokenizer()
tokenizer.fit_on_texts(df_train.text)
vocab_size = len(tokenizer.word_index) + 1
print("Total words", vocab_size)
X_train = pad_sequences(tokenizer.texts_to_sequences(df_train.text), maxlen=300)
X_test = pad_sequences(tokenizer.texts_to_sequences(df_test.text), maxlen=300)
print("X_train", X_train.shape)
print("y_train", y_train.shape)
print("X_test", X_test.shape)
print("y_test", y_test.shape)
X_train[-10:]
###Output
_____no_output_____
###Markdown
Embedding layers
###Code
embedding_matrix = np.zeros((vocab_size, 300))
for word, i in tokenizer.word_index.items():
if word in w2v_model.wv:
embedding_matrix[i] = w2v_model.wv[word]
print(embedding_matrix.shape)
embedding_layer = Embedding(vocab_size, 300, weights=[embedding_matrix], input_length=300, trainable=False)
model = Sequential()
model.add(embedding_layer)
model.add(Dropout(0.2))
model.add(Conv1D(250,
4,
padding='valid',
activation='relu',
strides=1))
model.add(GlobalAveragePooling1D())
model.add(Dense(64))
model.add(Dropout(0.2))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.summary()
utils.plot_model(model,to_file='CNNTextclassifier.png',show_shapes=True)
opt = Adam(learning_rate=0.001)
model.compile(loss='binary_crossentropy',
optimizer=opt,
metrics=['accuracy',tensorflow.keras.metrics.Precision(),tensorflow.keras.metrics.Recall()])
history = model.fit(X_train, y_train,
batch_size=256,
epochs=10,
validation_split=0.1,
verbose=1)
model.save("sentiment_classifier_CNN.h5")
score = model.evaluate(X_test, y_test, batch_size=256)
print()
print("ACCURACY:",score[1])
print("LOSS:",score[0])
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'b', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Model Performance on test data* Accuracy* Precision: TruePositive / (TruePositive + FalsePositive)* Recall: TruePositive / (TruePositive + FalseNegative)* f1-score = 2(Precision x Recall)/(Precision + Recall)
###Code
model = load_model("sentiment_classifier_CNN.h5")
score = model.evaluate(X_test, y_test, batch_size=256)
f1_score = 2*((score[2]*score[3]) / (score[2]+score[3]))
print("ACCURACY:",score[1])
print("LOSS:",score[0])
print("PRECISION:",score[2])
print("RECALL:",score[3])
print("f1-score:",f1_score)
tokenizer = pickle.load(open("tokens.pkl", "rb"))
import time
def decode_sentiment(score):
return "NEGATIVE" if score < 0.5 else "POSITIVE"
def predict(text):
start_at = time.time()
# Tokenize text
x_test = pad_sequences(tokenizer.texts_to_sequences([text]), maxlen=300)
# Predict
out_put = model.predict([x_test])[0]
# Decode sentiment
label = decode_sentiment(out_put)
return {"label": label, "score": float(out_put)}
predict("I love the music")
predict("this is so disgusting")
predict("i don't know what i'm doing")
###Output
_____no_output_____ |
notebooks/Process for tableau.ipynb | ###Markdown
----
###Code
fuel_cons_df['trun'].unique()
tips = sns.load_dataset("tips")
tips.info()
with sns.axes_style("white"):
g = sns.FacetGrid(tips, row="sex", col="smoker", margin_titles=True, height=2.5)
g.map(plt.scatter, "total_bill", "tip", color="#334488", edgecolor="white", lw=.5);
g.set_axis_labels("Total bill (US Dollars)", "Tip");
g.set(xticks=[1, 8, 13], yticks=[2, 6, 10]);
g.fig.subplots_adjust(wspace=.02, hspace=.02);
g.savefig("../results/smokers.png")
###Output
_____no_output_____
###Markdown
Fuel consumption
###Code
_x.loc[('H','ksa')]
fuel_cons_df.head()
_a = fuel_cons_df.pivot_table(index=['scenario','c','trun'],columns='f',values='value')
_a = _a.reset_index()
_a
plt.style.use('tableau-colorblind10')
fig, axes = plt.subplots(nrows=8, ncols=6, sharex=False, sharey=False, figsize=(12,8))
for ax,num in zip(axes.flatten(),range(1,49)):
#print(ax)
for scenario in scenarios:
#print('Creating plot for %s...' %scenario)
for country in countries:
#print('Creating plot for %s...' %country)
_b = _a[(_a['scenario']==scenario) & (_a['c']==country)]
            _b.plot(x='trun', ax=ax, legend=False)  # draw on the current subplot rather than opening a new figure
#print(_b)
#for column in _b.drop(['scenario','c','trun'],axis=1):
# _b.plot(x=_b['trun'],y=_b[column],ax=ax)
#ax.plot(_b['trun'],_b[column])
#ax.set_title(num)
#ax.set_ylabel()
###Output
_____no_output_____ |
LPwithTwoVariables.ipynb | ###Markdown
Solving LP with Two Variables with Pyomo by Stefano Gualandi is licensed under a Creative Commons Attribution 4.0 International License.Based on a work at https://github.com/mathcoding/opt4ds. 2. Solving LP model with Two Variables with PYOMOIn this notebook, we explain how to solve **Linear Programming** problems:\begin{align}\max \quad& c^T x \\\mbox{s.t. } \quad & Ax \geq b \\& x \geq 0\end{align}We begin with a very simple problem with only two positive variables. 2.1 Software InstallationIf you are running this notebook in a Colab, you don't need to install anything else on your computer.Otherwise, if you have installed the recommended Anaconda Python distribution, you have to run the following two commands:1. To install the [Pyomo](http://www.pyomo.org/) optimization modeling language:```conda install -c conda-forge pyomo```2. To install the open source [GLPK](https://www.gnu.org/software/glpk/) solver:```conda install -c conda-forge glpk```3. (Optional) You can install some extra packages of Pyomo using the following command:```conda install -c conda-forge pyomo.extras```For details about the Pyomo installation, we refer to the official [Pyomo Documentation](https://pyomo.readthedocs.io/en/stable/). The following lines are for running this notebook in a Google Colab:
###Code
import shutil
import sys
import os.path
if not shutil.which("pyomo"):
!pip install -q pyomo
assert(shutil.which("pyomo"))
if not (shutil.which("glpk") or os.path.isfile("glpk")):
if "google.colab" in sys.modules:
!apt-get install -y -qq glpk-utils
else:
try:
!conda install -c conda-forge glpk
except:
pass
###Output
_____no_output_____
###Markdown
2.2 Example From the SlidesConsider the following example, which is solved graphically on the slide *Introduction to Linear Programming*.We want to solve the following Linear Programming problem:\begin{align}\min \;\; & -x_1 - x_2 \\\mbox{s.t. } \;\; & x_1 + 2x_2 \leq 3 & \\& 2x_1 + x_2 \leq 3 & \\& x_1 \geq 0 & \\& x_2 \geq 0 & \\\end{align}Note that we have $c = \left[\begin{array}{c} -1 \\ -1 \end{array}\right]$, $A = \left[ \begin{array}{cc}1 & 2 \\ 2 & 1 \end{array} \right]$, and$b = \left[\begin{array}{c} 3 \\ 3 \end{array}\right]$.
###Code
from pyomo.environ import *
model = ConcreteModel()
# declare decision variables
model.x1 = Var(domain=NonNegativeReals)
model.x2 = Var(domain=NonNegativeReals)
# declare objective
model.cost = Objective(
    expr = - model.x1 - model.x2,
    sense = minimize)  # the problem statement above minimizes -x1 - x2
# declare constraints
model.cnstr1 = Constraint(expr = model.x1 + 2*model.x2 <= 3)
model.cnstr2 = Constraint(expr = 2*model.x1 + model.x2 <= 3)
# solve
sol = SolverFactory('glpk').solve(model)
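# (Added sketch) Inspect the optimal solution; value() is Pyomo's standard accessor,
# and with the constraints above GLPK should report x1 = x2 = 1 (objective -2).
print('x1 =', value(model.x1))
print('x2 =', value(model.x2))
print('objective =', value(model.cost))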
from pylab import *
figure(figsize=(6, 6))
subplot(111, aspect='equal')
axis([0, 3.5, 0, 3.5])
xlabel('$x_1$')
ylabel('$x_2$')
# First constraint
x = array([0, 3])
y = 3/2 - 1/2*x
plot(x, y, 'g', lw=2)
fill_between([0, 3, 2], [1.5, 0, 0], [0, 0, 0], color='g', alpha=0.15)
t1 = annotate('$x_1 + 2x_2 \leq 3$', xy=(2,0.5), xytext=(2.5, 0.7),
arrowprops=dict(shrink=0.1, width=2, headwidth=10, color='g'))
savefig("example16a.pdf", bbox_inches='tight')
figure(figsize=(6, 6))
subplot(111, aspect='equal')
axis([0, 3.5, 0, 3.5])
xlabel('$x_1$')
ylabel('$x_2$')
# Second constraint
x = array([0, 1.5])
y = 3 - 2*x
plot(x, y, 'b', lw=2)
fill_between([0, 1.5, 100], [3, 0, 0], [0, 0, 0], color='b', alpha=0.15)
t2 = annotate('$2x_1 + x_2 \leq 3$', xy=(0.25,2.5), xytext=(0.7, 2.5),
arrowprops=dict(shrink=0.1, width=2, headwidth=10, color='b'))
savefig("example16b.pdf", bbox_inches='tight')
from pylab import *
figure(figsize=(6, 6))
subplot(111, aspect='equal')
axis([0, 3.5, 0, 3.5])
xlabel('$x_1$')
ylabel('$x_2$')
# First constraint
x = array([0, 3])
y = 3/2 - 1/2*x
plot(x, y, 'g', lw=2)
fill_between([0, 3, 2], [1.5, 0, 0], [0, 0, 0], color='g', alpha=0.15)
# Second constraint
x = array([0, 1.5])
y = 3 - 2*x
plot(x, y, 'b', lw=2)
fill_between([0, 1.5, 100], [3, 0, 0], [0, 0, 0], color='b', alpha=0.15)
annotate('$x_1 + 2x_2 \leq 3$', xy=(2,0.5), xytext=(2.5, 0.7),
arrowprops=dict(shrink=0.1, width=2, headwidth=10, color='g'))
annotate('$2x_1 + x_2 \leq 3$', xy=(0.25,2.5), xytext=(0.7, 2.5),
arrowprops=dict(shrink=0.1, width=2, headwidth=10, color='b'))
legend(['Constraint 1','Constraint 2'])
savefig("example16c.pdf", bbox_inches='tight')
from pylab import *
figure(figsize=(6, 6))
subplot(111, aspect='equal')
axis([0, 3.5, 0, 3.5])
xlabel('$x_1$')
ylabel('$x_2$')
# First constraint
x = array([0, 3])
y = 3/2 - 1/2*x
plot(x, y, 'g', lw=2)
fill_between([0, 3, 2], [1.5, 0, 0], [0, 0, 0], color='g', alpha=0.15)
# Second constraint
x = array([0, 1.5])
y = 3 - 2*x
plot(x, y, 'b', lw=2)
fill_between([0, 1.5, 100], [3, 0, 0], [0, 0, 0], color='b', alpha=0.15)
t1 = annotate('$x_1 + 2x_2 \leq 3$', xy=(2,0.5), xytext=(2.5, 0.7),
arrowprops=dict(shrink=0.1, width=2, headwidth=10, color='g'))
t2 = annotate('$2x_1 + x_2 \leq 3$', xy=(0.25,2.5), xytext=(0.7, 2.5),
arrowprops=dict(shrink=0.1, width=2, headwidth=10, color='b'))
# Objective function
x = array([0, 4])
for p in linspace(0, 10, 21):
y = p - x
plot(x,y,'y--', color='red', alpha=0.3)
text(0.1,0.6,'Decreasing Cost')
annotate('', xy=(0.5, 0.5), xytext=(0, 0),
arrowprops=dict(width=0.5, headwidth=5, color='r'))
savefig("example16d.pdf", bbox_inches='tight')
from pylab import *
figure(figsize=(6, 6))
subplot(111, aspect='equal')
axis([0, 3.5, 0, 3.5])
xlabel('$x_1$')
ylabel('$x_2$')
# First constraint
x = array([0, 3])
y = 3/2 - 1/2*x
plot(x, y, 'g', lw=2)
fill_between([0, 3, 2], [1.5, 0, 0], [0, 0, 0], color='g', alpha=0.15)
# Second constraint
x = array([0, 1.5])
y = 3 - 2*x
plot(x, y, 'b', lw=2)
fill_between([0, 1.5, 100], [3, 0, 0], [0, 0, 0], color='b', alpha=0.15)
legend(['Constraint 1','Constraint 2'])
# Put some arrows
plot(1.5,0,'b.',ms=20)
annotate('$x_1$ only', xy=(1.5,0), xytext=(2, 0.1),
arrowprops=dict(shrink=0.1, width=2, headwidth=10, color='r'))
annotate('$x_1 + 2x_2 \leq 3$', xy=(2,0.5), xytext=(2.5, 0.7),
arrowprops=dict(shrink=0.1, width=2, headwidth=10, color='g'))
plot(0,1.5,'g.',ms=20)
annotate('$x_2$ only', xy=(0,1.5), xytext=(1.0, 2.0),
arrowprops=dict(shrink=0.1, width=2, headwidth=10, color='r'))
annotate('$2x_1 + x_2 \leq 3$', xy=(0.25,2.5), xytext=(0.7, 2.5),
arrowprops=dict(shrink=0.1, width=2, headwidth=10, color='b'))
# Objective function
x = array([0, 4])
for p in linspace(0, 10, 21):
y = p - x
plot(x,y,'y--', color='red', alpha=0.3)
# Optimum
plot(1,1,'r.',ms=20)
annotate('Optimal Solution', xy=(1,1), xytext=(2,1.5),
arrowprops=dict(shrink=.1, width=2, headwidth=10, color='r'))
text(0.1,0.6,'Decreasing Cost')
t3 = annotate('', xy=(0.5, 0.5), xytext=(0, 0),
arrowprops=dict(width=0.5, headwidth=5, color='r'))
savefig("example16d.pdf", bbox_inches='tight')
###Output
_____no_output_____ |
GraphMatching.ipynb | ###Markdown
Graph Matching: In this notebook we show how to use qFGW to perform graph matching on TOSCA meshes
###Code
%load_ext autoreload
%autoreload 2
import pickle
import numpy as np
import networkx as nx
import ot
import time
import os
from quantizedGW import *
###Output
_____no_output_____
###Markdown
Centaur meshes
###Code
with open('data/centaurs.pkl','rb') as handle:
centaurs = pickle.load(handle)
# Parameters
k = 100 #number of partition representatives
dims = 64 #number of bins to use for Weisfeiler-Lehman (WL) histogram
wl_steps = 1 #number of WL steps
distribution_exponent = 1 # probability vector based on degree
len(centaurs[0])
dataset = partition_featurize_graphlist_fpdwl(centaurs,k=k,dims=dims,wl_steps=wl_steps,
distribution_offset=0,distribution_exponent=distribution_exponent,verbose=True)
%%time
coup = compress_fgw_from_dicts(dataset[0],dataset[1],alpha=0.5,beta=0.5,verbose = False, return_dense = False)
plt.spy(coup)
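# (Hedged sketch, not part of the quantizedGW API) A hard node correspondence can be
# read off the soft coupling by taking the row-wise argmax; we only assume `coup`
# can be turned into a dense 2D array.
coup_dense = np.asarray(coup.todense()) if hasattr(coup, "todense") else np.asarray(coup)
matching = coup_dense.argmax(axis=1)  # node i of the first centaur -> node matching[i] of the second
print(matching[:10])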
###Output
_____no_output_____ |
sewer-overflows/notebooks/River Reversals vs. Rainfall.ipynb | ###Markdown
This notebook will investigate instances where the river is reversed, and sewage is dumped into the lake. We will take a look at rainfall before these events, to see if there is a correlation
###Code
# Get River reversals
reversals = pd.read_csv('data/lake_michigan_reversals.csv')
reversals['start_date'] = pd.to_datetime(reversals['start_date'])
reversals.head()
# Create rainfall dataframe. Create a series that has hourly precipitation
rain_df = pd.read_csv('data/ohare_hourly_20160929.csv')
rain_df['datetime'] = pd.to_datetime(rain_df['datetime'])
rain_df = rain_df.set_index(pd.DatetimeIndex(rain_df['datetime']))
rain_df = rain_df['19700101':]
chi_rain_series = rain_df['HOURLYPrecip'].resample('1H', label='right').max()
chi_rain_series.head()
# Find the rainfall 'hours' hours before the timestamp
def cum_rain(timestamp, hours):
end_of_day = (timestamp + timedelta(days=1)).replace(hour=0, minute=0)
start_time = end_of_day - timedelta(hours=(hours-1))
return chi_rain_series[start_time:end_of_day].sum()
t = pd.to_datetime('2015-06-15')
cum_rain(t, 240)
# Set the ten_day_rain field in reversals to the amount of rain that fell the previous 10 days (including the day that
# the lock was opened)
# TODO: Is there a more Pandaic way to do this?
for index, reversal in reversals.iterrows():
reversals.loc[index,'ten_day_rain'] = cum_rain(reversal['start_date'], 240)
reversals
# Information about the 10 days that preceed these overflows
reversals['ten_day_rain'].describe(percentiles=[.25, .5, .75])
###Output
_____no_output_____
###Markdown
Now we will look at any n-year storms that occurred during the 10 days prior to the reversal
###Code
# N-Year Storm stuff
n_year_threshes = pd.read_csv('../../n-year/notebooks/data/n_year_definitions.csv')
n_year_threshes = n_year_threshes.set_index('Duration')
dur_str_to_hours = {
'5-min':5/60.0,
'10-min':10/60.0,
'15-min':15/60.0,
'30-min':0.5,
'1-hr':1.0,
'2-hr':2.0,
'3-hr':3.0,
'6-hr':6.0,
'12-hr':12.0,
'18-hr':18.0,
'24-hr':24.0,
'48-hr':48.0,
'72-hr':72.0,
'5-day':5*24.0,
'10-day':10*24.0
}
n_s = [int(x.replace('-year','')) for x in reversed(list(n_year_threshes.columns.values))]
duration_strs = sorted(dur_str_to_hours.items(), key=operator.itemgetter(1), reverse=False)
n_year_threshes
# This method returns the first n-year storm found in a given interval.  It starts at the 100-year storm and decrements, so
# it will return the highest n-year storm found
def find_n_year_storm(start_time, end_time):
for n in n_s:
n_index = n_s.index(n)
next_n = n_s[n_index-1] if n_index != 0 else None
for duration_tuple in reversed(duration_strs):
duration_str = duration_tuple[0]
low_thresh = n_year_threshes.loc[duration_str, str(n) + '-year']
high_thresh = n_year_threshes.loc[duration_str, str(next_n) + '-year'] if next_n is not None else None
duration = int(dur_str_to_hours[duration_str])
sub_series = chi_rain_series[start_time: end_time]
rolling = sub_series.rolling(window=int(duration), min_periods=0).sum()
if high_thresh is not None:
event_endtimes = rolling[(rolling >= low_thresh) & (rolling < high_thresh)].sort_values(ascending=False)
else:
event_endtimes = rolling[(rolling >= low_thresh)].sort_values(ascending=False)
if len(event_endtimes) > 0:
return {'inches': event_endtimes[0], 'n': n, 'end_time': event_endtimes.index[0], 'hours': duration}
return None
start_time = pd.to_datetime('2008-09-04 01:00:00')
end_time = pd.to_datetime('2008-09-14 20:00:00')
find_n_year_storm(start_time, end_time)
# Add a column to the reversals data frame to show n-year storms that occurred before the reversal
# TODO: Is there a more Pandaic way to do this?
for index, reversal in reversals.iterrows():
end_of_day = (reversal['start_date'] + timedelta(days=1)).replace(hour=0, minute=0)
start_time = end_of_day - timedelta(days=10)
reversals.loc[index,'find_n_year_storm'] = str(find_n_year_storm(start_time, end_of_day))
reversals
no_n_year = reversals.loc[reversals['find_n_year_storm'] == 'None']
print("There are %s reversals without an n-year event" % len(no_n_year))
no_n_year
reversals.loc[reversals['year'] == 1997]
reversals.sort_values('crcw', ascending=False)
###Output
_____no_output_____ |
Analyzed-Notebooks/HandWashing/Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb | ###Markdown
1. Meet Dr. Ignaz Semmelweis This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women who have just given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands! In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('./datasets/yearly_deaths_by_clinic.csv')
print(yearly)
# Print out yearly
# ... YOUR CODE FOR TASK 1 ...
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 2 ...
yearly["proportion_deaths"] = yearly['deaths'] / yearly['births']
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[ yearly['clinic'] == 'clinic 1']
yearly2 = yearly[ yearly['clinic'] == 'clinic 2']
print(yearly1)
# Print out yearly1
# ... YOUR CODE FOR TASK 2 ...
###Output
year births deaths clinic proportion_deaths
0 1841 3036 237 clinic 1 0.078063
1 1842 3287 518 clinic 1 0.157591
2 1843 3060 274 clinic 1 0.089542
3 1844 3157 260 clinic 1 0.082357
4 1845 3492 241 clinic 1 0.069015
5 1846 4010 459 clinic 1 0.114464
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
%matplotlib inline
ax = yearly1.plot('year' , 'proportion_deaths' , label = 'Clinic1')
ax = yearly2.plot('year' , 'proportion_deaths' ,ax = ax , label = 'Clinic2')
# Plot yearly proportion of deaths at the two clinics
# ... YOUR CODE FOR TASK 3 ...
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv("./datasets/monthly_deaths.csv" ,parse_dates = ['date'] )
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 4 ...
monthly["proportion_deaths"] = monthly["deaths"] / monthly["births"]
print(monthly.head())
# Print out the first rows in monthly
# ... YOUR CODE FOR TASK 4 ...
###Output
date births deaths proportion_deaths
0 1841-01-01 254 37 0.145669
1 1841-02-01 239 18 0.075314
2 1841-03-01 277 12 0.043321
3 1841-04-01 255 4 0.015686
4 1841-05-01 255 2 0.007843
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
# ... YOUR CODE FOR TASK 5 ...
ax = monthly.plot('date' , 'proportion_deaths' ,label = 'Proportion deaths')
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly['date'] < handwashing_start]
after_washing = monthly[ monthly['date'] >= handwashing_start]
ax = before_washing.plot('date' , 'proportion_deaths' , label = 'before')
ax = after_washing.plot('date', 'proportion_deaths' , label = 'after' , ax = ax)
ax.set_ylabel('Proportion deaths')
print(before_washing.columns)
# Plot monthly proportion of deaths before and after handwashing
# ... YOUR CODE FOR TASK 6 ...
###Output
Index(['date', 'births', 'deaths', 'proportion_deaths'], dtype='object')
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
'''# Difference in mean monthly proportion of deaths due to handwashing
print(before_washing.columns)
before_washing['date'] = pd.to_datetime(before_washing['date'])
after_washing['date'] = pd.to_datetime(after_washing['date'])
before_washing.set_index('date' , inplace = True)
after_washing.set_index('date' , inplace = True)
before_proportion = before_washing['proportion_deaths'].resample('M' , how = 'mean')
after_proportion = after_washing['proportion_deaths'].resample('M' , how = 'mean')
#before_washing['proportion_deaths'].groupby(pd.Grouper(freq = 'M')).mean()
#grouped_monthly_a = after_washing['proportion_deaths'].groupby(pd.Grouper(freq = 'M')).mean()
#before_proportion = before_washing['proportion_deaths'].mean()
#after_proportion = after_washing['proportion_deaths'].mean()
print(after_proportion.head())
mean_diff = after_proportion - before_proportion
mean_diff'''
before_proportion = before_washing['proportion_deaths']
after_proportion = after_washing['proportion_deaths']
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac = 1 , replace = True)
boot_after = after_proportion.sample(frac = 1 , replace = True)
boot_mean_diff.append( boot_after.mean() - boot_before.mean() )
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025 , 0.975])
print(confidence_interval)
###Output
0.025 -0.101084
0.975 -0.067542
dtype: float64
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____ |
Machine Learning Approach/Predicting Price Signals - Machine Learning Approach.ipynb | ###Markdown
In this notebook, we explore the usage of the triple-barrier method / meta-labelling to identify price signals in the market using an ML approach. The triple-barrier is a simple labelling technique consisting of three barriers: an upper barrier, a lower barrier and a vertical barrier. 1. Upper Barrier: Threshold an observed return needs to reach to consider a BUY (1) 2. Lower Barrier: Threshold an observed return needs to reach to consider a SELL (-1) 3. Vertical Barrier: Amount of time the observation has to reach without hitting the upper or lower barrier (0). Meta Learning is where we make use of a secondary ML model to learn how to use the result of the first ML model. In this instance, it is termed meta-labelling as we train the model to decide if we should trade or pass on the initial prediction of [-1, 0, 1]. We do this by labelling the predicted outputs with the actual labels of the primary model and use the new label as the prediction goal for the second model.
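As a rough illustration only (the notebook itself relies on the imported `TripleBarrier` class, whose implementation may differ), the labelling rule for a single observation can be sketched as:
```python
import numpy as np

def label_one_path(path_returns, upper, lower):
    """Toy triple-barrier rule: walk forward until a barrier is hit."""
    cum = np.cumsum(path_returns)   # cumulative return over the allowed horizon
    for r in cum:
        if r >= upper:
            return 1                # upper barrier hit first -> BUY
        if r <= lower:
            return -1               # lower barrier hit first -> SELL
    return 0                        # time (vertical barrier) ran out
```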
###Code
data = hist.copy()
hist.dropna(axis=0, how='any', inplace=True)
factor = [2, 2] # Scale of barrier height
labels = TripleBarrier(hist['Close'], vol_span=50, barrier_horizon=5, factors=factor, label=0).labels
# Load additional features
data['log_ret'] = np.log(data['Close']).diff()
data['mom1'] = data['Close'].pct_change(periods=1)
data['mom2'] = data['Close'].pct_change(periods=2)
data['mom3'] = data['Close'].pct_change(periods=3)
data['mom4'] = data['Close'].pct_change(periods=4)
data['mom5'] = data['Close'].pct_change(periods=5)
data['volatility'] = data['Close'].rolling(window=7, min_periods=7, center=False).std()
data['autocorr_1'] = data['Close'].rolling(window=7, min_periods=7, center=False).apply(lambda x: x.autocorr(lag=1), raw=False)
data['autocorr_2'] = data['Close'].rolling(window=7, min_periods=7, center=False).apply(lambda x: x.autocorr(lag=2), raw=False)
data['autocorr_3'] = data['Close'].rolling(window=7, min_periods=7, center=False).apply(lambda x: x.autocorr(lag=3), raw=False)
data['autocorr_4'] = data['Close'].rolling(window=7, min_periods=7, center=False).apply(lambda x: x.autocorr(lag=4), raw=False)
data['autocorr_5'] = data['Close'].rolling(window=7, min_periods=7, center=False).apply(lambda x: x.autocorr(lag=5), raw=False)
data['log_t1'] = data['log_ret'].shift(1)
data['log_t1'] = data['log_ret'].shift(2)
data['log_t1'] = data['log_ret'].shift(3)
data['log_t1'] = data['log_ret'].shift(4)
data['log_t1'] = data['log_ret'].shift(5)
data['fast_mavg'] = data['Close'].rolling(window=7, min_periods=7, center=False).mean()
data['slow_mavg'] = data['Close'].rolling(window=15, min_periods=15, center=False).mean()
# Split into X and Y
X = data.loc[labels.index, :]
X = X.dropna()
y_labels = labels.loc[X.index, :]
y = y_labels['bin']
# Use VIF to identify and drop highly correlated features
k = X.copy()
k = k.replace([np.inf, -np.inf], np.nan).dropna(axis=1)
def calc_vif(k, threshold=5):
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(k.values, i) for i in range(k.shape[1])]
vif["features"] = k.columns
vif = vif.replace([np.inf, -np.inf], np.nan)
vif = vif.dropna()
vif["VIF Factor"] = vif[vif["VIF Factor"] < threshold]
vif = vif.dropna()
print(vif)
return vif.features.values.tolist()
X = k[calc_vif(k)]
# Split into Train Test and leave the last year for OOS
X_training_val = X['2001-01-01' : '2019-01-01']
Y_training_val = y['2001-01-01' : '2019-01-01']
X_train, X_validate, y_train, y_validate = train_test_split(X_training_val, Y_training_val, test_size=0.2, shuffle=False)
train_df = pd.concat([y_train, X_train], axis=1, join='inner')
train_df['bin'].value_counts()
# Lets Resample them
majority = train_df[train_df['bin']==0]
mid = train_df[train_df['bin']==-1]
minority = train_df[train_df['bin']==1]
sell = resample(minority,
replace=True,
n_samples=majority.shape[0],
random_state=144)
buy = resample(mid,
replace=True,
n_samples=majority.shape[0],
random_state=144)
train_df = pd.concat([majority, buy, sell])
train_df = shuffle(train_df, random_state=144)
train_df['bin'].value_counts()
# Create training data
y_train = train_df['bin']
X_train = train_df.loc[:, train_df.columns != 'bin']
parameters = {'max_depth':[2, 3, 4, 5, 6, 7],
'n_estimators':[1, 10, 25, 50, 100, 256, 512],
'random_state':[144]}
def grid_search(X, y):
rf = RandomForestClassifier(criterion='entropy')
clf = GridSearchCV(rf, parameters, cv=4, scoring='f1_macro', n_jobs=-1)
clf.fit(X, y)
return clf.best_params_['n_estimators'], clf.best_params_['max_depth']
n_estimators, depth = grid_search(X_train, y_train)
###Output
_____no_output_____
###Markdown
Primary Model Get the predictions of the first model [-1, 0, 1]
###Code
# Random Forest Model
rf = RandomForestClassifier(max_depth=depth, n_estimators=n_estimators, criterion='entropy', random_state=144)
rf.fit(X_train, y_train.values.ravel())
# Metrics
y_pred = rf.predict(X_validate)
print(classification_report(y_validate, y_pred))
print("Confusion Matrix")
print(confusion_matrix(y_validate, y_pred))
print("Accuracy")
print(accuracy_score(y_validate, y_pred))
def binary_label(y_pred, y_test):
bin_label = np.zeros_like(y_pred)
for i in range(y_pred.shape[0]):
if y_pred[i] != 0 and y_pred[i]*y_test[i] > 0:
bin_label[i] = 1
return bin_label
print(classification_report(binary_label(y_pred, y_validate), y_pred != 0))
print("Confusion Matrix")
print(confusion_matrix(binary_label(y_pred, y_validate), y_pred != 0))
print("Accuracy")
print(accuracy_score(binary_label(y_pred, y_validate), y_pred != 0))
###Output
precision recall f1-score support
0 1.00 0.30 0.47 713
1 0.10 1.00 0.18 55
accuracy 0.35 768
macro avg 0.55 0.65 0.32 768
weighted avg 0.94 0.35 0.45 768
Confusion Matrix
[[217 496]
[ 0 55]]
Accuracy
0.3541666666666667
###Markdown
Secondary Model
###Code
y_train_pred = rf.predict(X_train)
X_train_meta = np.hstack([y_train_pred[:, None], X_train])
X_test_meta = np.hstack([y_pred[:, None], X_validate])
y_train_meta = binary_label(y_train_pred, y_train)
sm = SMOTE()
X_train_meta_res, y_train_meta_res = sm.fit_sample(X_train_meta, y_train_meta)
n_estimators, depth = grid_search(X_train_meta_res, y_train_meta_res)
rf2 = RandomForestClassifier(max_depth=depth, n_estimators=n_estimators, criterion='entropy', random_state=144)
model_s = rf2.fit(X_train_meta_res, y_train_meta_res)
y_pred_meta = model_s.predict(X_test_meta)
print("Confusion Matrix")
print(confusion_matrix(binary_label(y_pred, y_validate), (y_pred * y_pred_meta) != 0))
print("Accuracy")
print(accuracy_score(binary_label(y_pred, y_validate), (y_pred * y_pred_meta) != 0))
###Output
Confusion Matrix
[[284 429]
[ 10 45]]
Accuracy
0.4283854166666667
###Markdown
OOS Sample Results
###Code
X_oos = X['2019-01-01':'2020-01-01']
y_oos = y['2019-01-01':'2020-01-01']
y_pred = rf.predict(X_oos)
X_test_meta = np.hstack([y_pred[:,None], X_oos])
y_pred_meta = model_s.predict(X_test_meta)
print(classification_report(binary_label(y_pred, y_oos), (y_pred*y_pred_meta) != 0))
print("Confusion Matrix")
print(confusion_matrix(binary_label(y_pred, y_oos), (y_pred*y_pred_meta) != 0))
print("Accuracy")
print(accuracy_score(binary_label(y_pred, y_oos), (y_pred*y_pred_meta) != 0))
predictions = y_pred * y_pred_meta
# Compare against market returns
price = hist['Open']
ret = price[X_oos.index]
d = pd.DataFrame()
d['stock_returns'] = np.log(ret) - np.log(ret.shift(1))
d['strategy_returns'] = d['stock_returns'] * predictions
d[['stock_returns', 'strategy_returns']].cumsum().plot(figsize=(15,8))
###Output
_____no_output_____ |
notebooks/homework aifi.ipynb | ###Markdown
Homework: The purpose of this homework is to go through all steps in the machine learning pipeline with scikit-learn* Make the necessary imports* Divide in train and test sets* Preprocess the input data* Test different types of cross-validation* Train and predict. Useful resources: Train-test split* https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html Hyper-parameters search* https://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection
###Code
import pandas as pd
import numpy as np
import os, sys
scr_dir = os.path.join(os.getcwd(), os.pardir, 'src')
sys.path.append(scr_dir)
from load import TimeSeriesLoader
import warnings
warnings.filterwarnings('ignore')
# import preprocessing
from preprocess import (
create_rolling_ts,
split_data,
flatten
)
from error_metrics import regression_metrics
ROOT_PATH = 'C://Users/gilbe/Documents/aifi-bootcamp'
df = pd.read_csv(f'{ROOT_PATH}/data/aapl.csv')
df['Unnamed: 0'] = pd.to_datetime(df['Unnamed: 0'])
df.set_index('Unnamed: 0', inplace=True)
df.index.rename('Date', inplace=True)
# df.rename(columns={'Unnamed: 0', 'Date'}, inplace=True)
def split_sequence(sequence, n_steps):
"""
This function produces input and output for a univariate time series
--Args:
sequence: sequence to split
n_steps: number of steps to use for predicting the next time step
--Return:
sequence with n_steps as input and the next time step to predict
"""
X, y = [], []
for i in range(len(sequence)):
end_ix = i + n_steps
if end_ix > len(sequence) - 1:
break
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
X.append(seq_x)
y.append(seq_y)
return pd.DataFrame(np.array(X)), pd.DataFrame(np.array(y))
price = pd.read_csv(f'{ROOT_PATH}/data/closing_prices.csv')
price.set_index('date', inplace=True)
price.head()
###Output
_____no_output_____
###Markdown
Here is an example of how to use the split_sequence function; it works only for 1-dim data
###Code
ex1, ex2 = split_sequence(price['AAPL'], n_steps=3)
ex1.head()
ex2.head()
###Output
_____no_output_____
###Markdown
Compute returns and plot some of them. You can use the pct_change() method in pandas. Divide data in features and targets* Here you need to use the split_sequence function to split in features and targets. Preprocess features* You might need to do imputation* You might need some kind of normalization of the inputs. You can either do single regression or multiple regression* if you choose single regression then you have to select a single stock from the dataframe price* if you do ***multiple regression*** then you have to wrap your algorithm with the MultiOutputRegressor class. Multiple regression requires modifying the split_sequence function above to get the result for multiple outputs. Multiple regression in scikit-learn: https://scikit-learn.org/stable/modules/multiclass.html#multioutput-regression Divide in train and test. Do hyper-parameter search, you are free to choose between GridSearchCV or RandomizedSearchCV* use cross-validation for i.i.d* use time series cross-validation* which method gives better results on the test set? https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html Do time series split without hyperparameter search (advanced). For this you might need to do a for-loop and iterate through the cv-folds in the dataset. Then train the model in each round of the for-loop. The following function called split can be used. Using for-loops together with the zip() function might be useful (a hedged sketch of these steps for a single stock is included at the end of the code cell below). https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.TimeSeriesSplit.html
###Code
def split(X, y, n_splits=5):
"""
Function for time series splitting. It returns
a list of training and test sets.
--Args:
X: dataframe with Xtrain
y: dataframe with ytrain
"""
tscv = TimeSeriesSplit(n_splits=n_splits)
test_y_list = []
test_x_list = []
train_y_list = []
train_x_list = []
if isinstance(X, (pd.DataFrame, pd.Series)):
X.reset_index(inplace=True, drop=True)
if isinstance(y, (pd.DataFrame, pd.Series)):
y.reset_index(inplace=True, drop=True)
for train_index, test_index in tscv.split(X):
if isinstance(X, (pd.DataFrame, pd.Series)):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
else:
X_train, X_test = X[train_index], X[test_index]
if isinstance(y, (pd.DataFrame, pd.Series)):
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
else:
y_train, y_test = y[train_index], y[test_index]
train_x_list.append(X_train)
train_y_list.append(y_train)
test_x_list.append(X_test)
test_y_list.append(y_test)
return train_x_list, test_x_list, train_y_list, test_y_list
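# ------------------------------------------------------------------
# A hedged sketch of the remaining steps for a single stock (referenced in the
# task description above). The column name 'AAPL', the number of lags, the model
# choice and the parameter grid are illustrative assumptions, not part of the assignment.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit, train_test_split

returns = price['AAPL'].pct_change().dropna()               # returns of one stock
X_feat, y_tgt = split_sequence(returns.values, n_steps=5)   # lagged features / next-step target

# keep the time ordering: no shuffling for time series data
X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y_tgt, test_size=0.2, shuffle=False)

param_grid = {'n_estimators': [50, 100], 'max_depth': [3, 5]}
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid,
                      cv=TimeSeriesSplit(n_splits=5),       # swap for cv=5 to get the i.i.d. variant
                      scoring='neg_mean_squared_error')
search.fit(X_tr, y_tr.values.ravel())
print(search.best_params_, search.score(X_te, y_te.values.ravel()))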
###Output
_____no_output_____ |
examples/subclasses/AggregatedGradientIMNN.ipynb | ###Markdown
Using the AggregatedGradientIMNNWith a generative model built in ``jax`` for example we can calculate the derivatives of some simulations with which we can use the chain rule to calculate the derivatives of the network outputs with respect to the physical model parameters necessary to fit an IMNN. Furthermore, if the simulations are too numerous or too large to fit into memory or could be accelerated over several devices, we can aggregate the gradients too. This is an expensive operation and should only be used if memory over a single device is really an issue. Note that with a fixed data set for training an IMNN it is important to have a validation set for early stopping. This is because, with a limited training set there *will* be accidental correlations which look like they are due to parameters and the IMNN *will* extract these features. Using a validation set for early stopping makes sure that once all the features in the validation set have been extracted then no extra information can be incorrectly processed.For this example we are going to summaries the unknown mean, $\mu$, and variance, $\Sigma$, of $n_{\bf d}=10$ data points of two 1D random Gaussian field, ${\bf d}=\{d_i\sim\mathcal{N}(\mu,\Sigma)|i\in[1, n_{\bf d}]\}$. This is an interesting problem since we know the likelihood analytically, but it is non-Gaussian$$\mathcal{L}({\bf d}|\mu,\Sigma) = \prod_i^{n_{\bf d}}\frac{1}{\sqrt{2\pi|\Sigma|}}\exp\left[-\frac{1}{2}\frac{(d_i-\mu)^2}{\Sigma}\right]$$As well as knowing the likelihood for this problem, we also know what sufficient statistics describe the mean and variance of the data - they are the mean and the variance$$\frac{1}{n_{\bf d}}\sum_i^{n_{\bf d}}d_i = \mu\textrm{ and }\frac{1}{n_{\bf d}-1}\sum_i^{n_{\bf d}}(d_i-\mu)^2=\Sigma$$What makes this an interesting problem for the IMNN is the fact that the sufficient statistic for the variance is non-linear, i.e. it is a sum of the square of the data, and so linear methods like MOPED would be lossy in terms of information.We can calculate the Fisher information by taking the negative second derivative of the likelihood taking the expectation by inserting the relations for the sufficient statistics, i.e. and examining at the fiducial parameter values$${\bf F}_{\alpha\beta} = -\left.\left(\begin{array}{cc}\displaystyle-\frac{n_{\bf d}}{\Sigma}&0\\0&\displaystyle-\frac{n_{\bf d}}{2\Sigma^2}\end{array}\right)\right|_{\Sigma=\Sigma^{\textrm{fid}}}.$$Choosing fiducial parameter values of $\mu^\textrm{fid}=0$ and $\Sigma^\textrm{fid}=1$ we find that the determinant of the Fisher information matrix is $|{\bf F}_{\alpha\beta}|=50$.
###Code
from imnn import AggregatedGradientIMNN
from imnn.utils import value_and_jacfwd
import jax
import jax.numpy as np
from jax.experimental import stax, optimizers
###Output
_____no_output_____
###Markdown
We're going to use 1000 summary vectors, with a length of two, at a time to make an estimate of the covariance of network outputs and the derivative of the mean of the network outputs with respect to the two model parameters.
###Code
n_s = 1000
n_d = n_s
n_params = 2
n_summaries = n_params
input_shape = (10,)
###Output
_____no_output_____
###Markdown
The simulator is simply
###Code
def simulator(key, θ):
return θ[0] + jax.random.normal(key, shape=input_shape) * np.sqrt(θ[1])
###Output
_____no_output_____
###Markdown
Our fiducial parameter values are $\mu^\textrm{fid}=0$ and $\Sigma^\textrm{fid}=1$.
###Code
θ_fid = np.array([0., 1.])
###Output
_____no_output_____
###Markdown
We can generate the simulations using:
###Code
get_sims_and_ders = value_and_jacfwd(simulator, argnums=1)
###Output
_____no_output_____
###Markdown
For initialising the neural network a random number generator and we'll grab another for generating the data:
###Code
rng = jax.random.PRNGKey(1)
rng, model_key, data_key = jax.random.split(rng, num=3)
###Output
_____no_output_____
###Markdown
We'll make the keys for each of the simulations for fitting and validation
###Code
data_keys = np.array(jax.random.split(rng, num=2 * n_s))
fiducial, derivative = jax.vmap(get_sims_and_ders)(
data_keys[:n_s], np.repeat(np.expand_dims(θ_fid, 0), n_s, axis=0))
validation_fiducial, validation_derivative = jax.vmap(get_sims_and_ders)(
data_keys[n_s:], np.repeat(np.expand_dims(θ_fid, 0), n_s, axis=0))
###Output
_____no_output_____
###Markdown
We're going to use ``jax``'s stax module to build a simple network with three hidden layers each with 128 neurons and which are activated by leaky relu before outputting the two summaries. The optimiser will be a ``jax`` Adam optimiser with a step size of 0.001.
###Code
model = stax.serial(
stax.Dense(128),
stax.LeakyRelu,
stax.Dense(128),
stax.LeakyRelu,
stax.Dense(128),
stax.LeakyRelu,
stax.Dense(n_summaries))
optimiser = optimizers.adam(step_size=1e-3)
###Output
_____no_output_____
###Markdown
We will use the CPU as the host memory and use the GPUs for calculating the summaries.
###Code
host = jax.devices("cpu")[0]
devices = jax.devices("gpu")
###Output
_____no_output_____
###Markdown
Now lets say that we know that we can process 100 simulations at a time per device before running out of memory, we therefore can set
###Code
n_per_device = 100
imnn = AggregatedGradientIMNN(
n_s=n_s, n_d=n_d, n_params=n_params, n_summaries=n_summaries,
input_shape=input_shape, θ_fid=θ_fid, model=model,
optimiser=optimiser, key_or_state=model_key, host=host,
devices=devices, n_per_device=n_per_device,
fiducial=fiducial, derivative=derivative,
validation_fiducial=validation_fiducial,
validation_derivative=validation_derivative,
prefetch=None, cache=True)
###Output
_____no_output_____
###Markdown
To set the scale of the regularisation we use a coupling strength $\lambda$ whose value should mean that the determinant of the difference between the covariance of network outputs and the identity matrix is larger than the expected initial value of the determinant of the Fisher information matrix from the network. How close to the identity matrix the covariance should be is set by $\epsilon$. These parameters should not be very important, but they will help with convergence time.
###Code
λ = 10.
ϵ = 0.1
###Output
_____no_output_____
###Markdown
Fitting can then be done simply by calling:
###Code
imnn.fit(λ, ϵ, patience=10, max_iterations=1000, print_rate=1)
###Output
_____no_output_____
###Markdown
Here we have included a ``print_rate`` for a progress bar, but leaving this out will massively reduce fitting time (at the expense of not knowing how many iterations have been run). The IMNN will be fit for a maximum of ``max_iterations = 1000`` iterations, but with early stopping which can turn on after ``min_iterations = 100`` iterations and after ``patience = 10`` iterations where the maximum determinant of the Fisher information matrix has not increased. ``imnn.w`` is set to the values of the network parameters which obtained the highest value of the determinant of the Fisher information matrix, but the values at the final iteration can be set using ``best = False``.To continue training one can simply rerun fit```pythonimnn.fit(λ, ϵ, patience=10, max_iterations=1000, print_rate=1)```although we will not run it in this example.To visualise the fitting history we can plot the results:
###Code
imnn.plot(expected_detF=50);
###Output
_____no_output_____ |
1.DeepLearning/03.Optimizers/optimizers.ipynb | ###Markdown
Optimziers
###Code
import numpy as np
import matplotlib.pyplot as plt
from collections import OrderedDict
%matplotlib inline
###Output
_____no_output_____
###Markdown
Optimizers 1. SGD (Stochastic Gradient Descent)
###Code
class SGD:
def __init__(self, learning_rate=0.01):
self.learning_rate = learning_rate
def update(self, params, grads):
for key in params.keys():
params[key] -= self.learning_rate * grads[key]
###Output
_____no_output_____
###Markdown
2. Momentum
###Code
class Momentum:
def __init__(self, learning_rate=0.01, momentum=0.9):
self.learning_rate = learning_rate
self.momentum = momentum
self.v = None
def update(self, params, grads):
if self.v is None:
self.v = {}
for key, val in params.items():
self.v[key] = np.zeros_like(val)
for key in params.keys():
self.v[key] = self.momentum * self.v[key] - self.learning_rate * grads[key]
params[key] += self.v[key]
###Output
_____no_output_____
###Markdown
3. Nesterov- Nesterov's Accelerated Gradient (http://arxiv.org/abs/1212.0901)
###Code
class Nesterov:
def __init__(self, learning_rate=0.01, momentum=0.9):
self.learning_rate = learning_rate
self.momentum = momentum
self.v = None
def update(self, params, grads):
if self.v is None:
self.v = {}
for key, val in params.items():
self.v[key] = np.zeros_like(val)
for key in params.keys():
self.v[key] = self.momentum * self.v[key] - self.learning_rate * grads[key]
params[key] += self.momentum * self.momentum * self.v[key]
params[key] -= (1 + self.momentum) * self.learning_rate * grads[key]
###Output
_____no_output_____
###Markdown
4. AdaGrad- John Duchi, Elad Hazan, Yoram Singer, "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization," Journal of Machine Learning Research 12 (2011) 2121-2159.- http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf
###Code
class AdaGrad:
def __init__(self, learning_rate=0.01):
self.learning_rate = learning_rate
self.h = None
def update(self, params, grads):
if self.h is None:
self.h = {}
for key, val in params.items():
self.h[key] = np.zeros_like(val)
for key in params.keys():
self.h[key] += grads[key] * grads[key]
params[key] -= self.learning_rate * grads[key] / np.sqrt(self.h[key] + 1e-7)
###Output
_____no_output_____
###Markdown
5. RMSprop- Tieleman, T. and Hinton, G., Divide the gradient by a running average of its recent magnitude, COUSERA: Neural Networks for Machine LearningRMSProp: Lecture 6.5, 2012- https://www.coursera.org/learn/neural-networks/lecture/YQHki/rmsprop-divide-the-gradient-by-a-running-average-of-its-recent-magnitude
###Code
class RMSprop:
def __init__(self, learning_rate=0.01, decay_rate = 0.99):
self.learning_rate = learning_rate
self.decay_rate = decay_rate
self.h = None
def update(self, params, grads):
if self.h is None:
self.h = {}
for key, val in params.items():
self.h[key] = np.zeros_like(val)
for key in params.keys():
self.h[key] *= self.decay_rate
self.h[key] += (1 - self.decay_rate) * grads[key] * grads[key]
params[key] -= self.learning_rate * grads[key] / np.sqrt(self.h[key] + 1e-7)
###Output
_____no_output_____
###Markdown
6. Adam- Diederik P. Kingma, Jimmy Ba, "Adam: A Method for Stochastic Optimization," arXiv:1412.6980, Dec., 2014.- https://arxiv.org/pdf/1412.6980.pdf
###Code
class Adam:
def __init__(self, learning_rate=0.01, beta1=0.9, beta2=0.999):
self.learning_rate = learning_rate
self.beta1 = beta1
self.beta2 = beta2
self.iter = 0
self.m = None
self.v = None
def update(self, params, grads):
if self.m is None:
self.m, self.v = {}, {}
for key, val in params.items():
self.m[key] = np.zeros_like(val)
self.v[key] = np.zeros_like(val)
self.iter += 1
learning_rate_t = self.learning_rate * np.sqrt(1.0 - self.beta2**self.iter) / (1.0 - self.beta1**self.iter)
for key in params.keys():
self.m[key] = self.beta1 * self.m[key] + (1 - self.beta1) * grads[key]
self.v[key] = self.beta2 * self.v[key] + (1 - self.beta2) * grads[key] ** 2
params[key] -= learning_rate_t * self.m[key] / np.sqrt(self.v[key] + 1e-7)
optimizers = OrderedDict()
optimizers["SGD"] = SGD(learning_rate=0.95)
optimizers["Momentum"] = Momentum(learning_rate=0.1)
optimizers["Nesterov"] = Nesterov(learning_rate=0.08)
optimizers["AdaGrad"] = AdaGrad(learning_rate=1.5)
optimizers["RMSprop"] = RMSprop(learning_rate=0.2)
optimizers["Adam"] = Adam(learning_rate=0.3)
###Output
_____no_output_____
###Markdown
Functions- $f(x, y) = \frac{1}{20}x^2 + y^2$- $\nabla f(x, y) = \left(\frac{x}{10}, 2y\right)$
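As a quick numerical sanity check of this gradient, a small sketch using central differences at the starting point used below:
```python
def f(x, y):
    return x**2 / 20.0 + y**2

h = 1e-5
x0, y0 = -7.0, 2.0
dfdx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)  # ~ -0.7, i.e. x0 / 10
dfdy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)  # ~  4.0, i.e. 2 * y0
```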
###Code
def f(x, y):
return x**2 / 20.0 + y**2
def df(x, y):
return x / 10.0, 2.0*y
init_pos = (-7.0, 2.0)
params = {}
params['x'], params['y'] = init_pos[0], init_pos[1]
grads = {}
grads['x'], grads['y'] = 0, 0
idx = 1
fig = plt.figure(figsize=(15, 10))
for key in optimizers:
optimizer = optimizers[key]
x_history = []
y_history = []
params['x'], params['y'] = init_pos[0], init_pos[1]
for i in range(30):
x_history.append(params['x'])
y_history.append(params['y'])
grads['x'], grads['y'] = df(params['x'], params['y'])
optimizer.update(params, grads)
x = np.arange(-15, 15, 0.01)
y = np.arange(-5, 5, 0.01)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
# for simple contour line
mask = Z > 7
Z[mask] = 0
# plot
sub_figure = fig.add_subplot(230 + idx)
idx += 1
sub_figure.plot(x_history, y_history, 'o-', color="red")
sub_figure.contour(X, Y, Z)
sub_figure.set_xlim(-15, 15)
sub_figure.set_ylim(-5, 5)
sub_figure.plot(0, 0, '+')
sub_figure.set_title(key)
sub_figure.set_xlabel("x")
sub_figure.set_ylabel("y")
plt.show()
###Output
_____no_output_____ |
session1/Python101_Session1.ipynb | ###Markdown
Python Programming LanguageThis notebook gives an overview of python syntax, different data types and an introduction to functions and object-oriented programming. A note on Jupyter NotebooksJupyter notebooks are organized in cells; each cell contains a portion of code that will be run together, producing an output. Aside from code cells, notebooks also have Markdown (i.e. text) cells such as this one. To change the cell type, simply choose from the dropdown list in the upper bar. To add text inside a code cell, you must comment it using a hash (#) for a single line or three quotes ("""...""") for docstrings. Hashed comments will not be run as code, and they are really helpful for following the code.
###Code
# This is a comment -> comments do not run or affect the script
"""
For multiline comments, you can use the three double quotes
Documenting your code is very important for your future self and other collaborators
Always try to describe in one line what the function is meant to do
"""
###Output
_____no_output_____
###Markdown
Data types StringText is passed as a string (str) in python, always inside single ('...') or double quotes ("...") Numbers- Integers (int): whole numbers (positive or negative)- Floating Point Numbers (float): real numbers with decimal positions BooleansA boolean (bool) value is either True or False ListsA list is an ordered collection of items which can include different data types inside. Each item in a list is recognized by its position. Keep in mind that lists start at position 0 in python, not 1. A list is defined by square brackets [1,2,2,3] SetsA set is an unordered collection of unique items. It is defined by curly brackets {1,2,3} DictionariesA dictionary is an unordered container of Key:Value pairs. Keys must be unique, and can map to one or more values. A dictionary is defined by curly brackets {Key:Value}, for example {"Name":"Gemma", "Age":30}*lists, dictionaries and sets are mutable, i.e. their contents can change, but the items stored inside a set must themselves be immutable (e.g. numbers or strings).
###Code
#string
"Hello world"
#Single vs Double Quote
"Hello world double quote"
'Hello world, single quote'
"Hello world, that's nested quotes"
#Numbers
2.5 #float
#We can check our data type using type(...)
type(2.5)
2+2 #addition
2*2 #multiplication
6/3 #division
3**3 #exponentiation
2+3*5 #multiple operations
(2+3)*5
# importantly, the equality operator is == (a single = is used for assignment)
2 == 2 #returns a boolean
# and the inequality (not equal) operator is !=
2 != 2 #returns a boolean
#we can also combine conditions with and / or to obtain a boolean
#and
(5>2) and (4>6)
#or
(5>2) or (4>6)
#let's create a list
mylist = [1, "Ersilia", "third", 10]
mylist[0] #we use the position in the list to retrieve an element
mylist[2] #this calls the list element [2], remember counting starts at 0
mydict = {"key":"value", "book":"lord of the rings", "author": "Tolkien"}
mydict["key"] #we use the key to retrieve data from the dictionary, that is why keys must be unique
###Output
_____no_output_____
###Markdown
VariablesBefore continuing, we need to understand the "variable" concept. A variable stores data values, and its name must be UNIQUE.Variables are simply assigned by the = operator, and can be reassigned at any moment.
###Code
#actually, in the previous examples we have declared the variables "mylist" and "mydict"
hello = "Hello"
name = "Gemma"
hello + name
#I can reassign the variable at any point
name = "Miquel"
hello + " " + name
# we can do operations on variables
a = 3
b = 5
a*b
###Output
_____no_output_____
###Markdown
Working further with stringsSome built-in functions that might be useful Print and format print
###Code
#printing allows to get several outputs printed from the same cell. Print is a predefined function in Python
print(a+b)
print(hello+name)
#print format: if you want to print a changing statement depending on the variable:
print("My name is {}".format(name))
###Output
My name is Miquel
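###Markdown
Since Python 3.6 we can also use f-strings, a more concise alternative to .format(); a quick sketch:
```python
name = "Gemma"
print(f"My name is {name}")
# any expression can be evaluated inside the braces
print(f"My name has {len(name)} letters")
```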
###Markdown
Upper and lower case
###Code
#make a sentence all upper or lower case
sentence = "This is a Test Example"
print(sentence.upper())
print(sentence.lower())
###Output
THIS IS A TEST EXAMPLE
this is a test example
###Markdown
Split and join
###Code
#split a sentence on blank spaces
print(sentence.split())
#we can then select the item of the list we want
sentence.split()[3]
#split using different characters
date = "20-10-2021"
print(date.split("-"))
date.split("-")[2]
day, month, year = date.split("-")
print(month)
#we can also join strings
date_joined = "/".join([day, month, year])
print(date_joined)
###Output
10
20/10/2021
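###Markdown
Two other handy string methods are .replace() and .strip(); for example:
```python
date = "20-10-2021"
print(date.replace("-", "/"))  # 20/10/2021
print("   hello   ".strip())   # removes leading/trailing whitespace: 'hello'
```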
###Markdown
Dive deeper into lists and dictionaries
###Code
# call last element of a list
list1 = ["a", "b", "c", "d", "e", "f", "g"]
list1[-1]
# call several elements of a list using slicing
print(list1[2:5]) #position 5 is not selected
print(list1[:3]) #up to position 3 (not included)
print(list1[2:]) #from position 2 (included) to the end
# a list could actually be a list of lists
nested_list = [1, 2, [3,4,5], 6, 7]
nested_list[2][2]
#dictionaries can also be nested
dict_list = {"letters":["a", "b", "c", "d"], "nums": ["one","two","three"]}
dict_list["nums"][1] #call the key and then the position of the value you are interested in
dict_of_dicts = {"key":{"letter":["a", "b", "c"]}}
dict_of_dicts["key"]["letter"][2]
#adding elements to a list
#option one: .append
list1.append("h")
list1
#option 2: +=
list1 += ["i"]
list1
#adding keys to a dictionary
dict_list["Names"]=["Gemma", "Miquel", "Edo"]
dict_list
#adding values to a key in a dictionary
dict_list["Names"].append("Ersilia")
dict_list
#get keys
dict_list.keys()
#get values
dict_list.values()
# sets
myset = {1,2,2,3,4,5,5,5,6}
myset
# we can convert a list into a set to eliminate duplicates:
duplicates = [1,2,4,5,5,6,7,8,8,10]
nodups = set(duplicates)
###Output
_____no_output_____
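###Markdown
To loop over a dictionary's keys and values at the same time we can use .items(); a short sketch reusing the example dictionary from above:
```python
person = {"Name": "Gemma", "Age": 30}
for key, value in person.items():
    print(key, "->", value)
```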
###Markdown
Some built-in functions for listsRange, max, min, abs, len, sorted
###Code
#len() gives the length of the list or the length of a dictionary (keys)
print(len(list1))
print(len(dict_list))
#sorted orders lists in alphabetical order
abc = ["d", "g", "t", "a", "x"]
sorted(abc)
list2 = [3,5,1,7,8,2]
sorted(list2)
#max and min return the respective values of a list
print(max(list2))
print(min(abc))
#convertion to absolute values
abs(-6.25)
###Output
_____no_output_____
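###Markdown
range was listed above but not shown; it generates a sequence of numbers that can be turned into a list:
```python
print(list(range(5)))         # [0, 1, 2, 3, 4]
print(list(range(2, 10, 2)))  # start, stop (excluded), step -> [2, 4, 6, 8]
```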
###Markdown
ConditionalsIf, elif
###Code
if 2+2==4:
print("correct!") #indented 4 spaces
list1 = ["a","b", "c", "d"]
list2 = ["x", "y", "z"]
if "h" in list1:
print("h is in list1")
elif "h" in list2:
print("h is in list2")
else:
print("h is not in lists")
if "h" in list1:
print("h is in list1")
elif "h" in list2:
print("h is in list2")
else:
list1.append("h")
###Output
_____no_output_____
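###Markdown
Conditionals can also be written in a single line (a conditional expression); for example:
```python
x = 5
parity = "even" if x % 2 == 0 else "odd"
print(parity)  # odd
```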
###Markdown
LoopsFor and while loops
###Code
#for allows to iterate over a list
for x in list1: #x is a temporal variable, it could be letter, number, a, anything you want
print(x)
"""
while repeats a block of code as long as a condition is met:
while (condition is True):
do something
end
Watch out for INFINITE LOOPS, if the condition never turns false the loop won't stop
"""
x = 1
while (x<10):
x = x + 1
print("run number" + str(x-1))
###Output
run number1
run number2
run number3
run number4
run number5
run number6
run number7
run number8
run number9
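###Markdown
When we need the position of each item while looping, enumerate returns both the index and the value:
```python
for index, letter in enumerate(["a", "b", "c"]):
    print(index, letter)
```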
###Markdown
List ComprehensionHow to use one-line code to create lists using for loops
###Code
nums = [1,2,3,4,5]
nums_sum = []
for item in nums:
nums_sum += [(item+2)]
nums_sum
#the for loop can be simplified by:
nums_sum = [item+2 for item in nums]
#list comprehension can start from a string
chars = [char for char in "comprehension"]
chars
#it can include conditionals inside
num_list = [num for num in range(20) if num<10]
num_list
###Output
_____no_output_____
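###Markdown
The same one-line idea works for dictionaries (a dict comprehension):
```python
squares = {num: num**2 for num in range(5)}
print(squares)  # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
```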
###Markdown
FunctionsFunctions allow you to reuse a block of code without having to rewrite it each time. Use the def keyword to define a new function
###Code
def funct():
print("hi")
funct()
#let's specify some parameters for our function
def name_funct(name):
print("My name is "+name)
name_funct("Gemma")
#use return to store a new variable
def difference(x1,x2):
return x1-x2
diff = difference(6,4)
###Output
_____no_output_____ |
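###Markdown
Function parameters can also have default values, which are used when the caller does not provide them; a small sketch:
```python
def greet(name, greeting="Hello"):
    return greeting + " " + name

print(greet("Gemma"))             # Hello Gemma
print(greet("Gemma", "Welcome"))  # Welcome Gemma
```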
notebooks/Christensenellales/01_genomes.ipynb | ###Markdown
Goal* Generate a collection of MAGs and isolate genomes for the Christensenellales order
###Code
work_dir = '/ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellales/genomes/'
clade = 'Christensenellales'
taxid = 990719 # Christensenellaceae
threads = 8
###Output
_____no_output_____
###Markdown
Init
###Code
library(dplyr)
library(tidyr)
library(data.table)
library(tidytable)
library(ggplot2)
library(LeyLabRMisc)
library(curl)
df.dims()
setDTthreads(threads)
make_dir(work_dir)
###Output
Created directory: /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellales/genomes/
###Markdown
Genomes From Genbank * Used the genomes from the Christensenellaceae primer design From Colombian metagenome assembly
###Code
list_files(file.path(work_dir, 'Colombian_MG'), '*.fna') %>% length
###Output
_____no_output_____
###Markdown
From UHGG
###Code
F = file.path('/ebio/abt3_projects/databases_no-backup/UHGG/2019_09', 'genomes-nr_metadata.tsv')
genomes = Fread(F) %>%
filter.(grepl('o__Christensenellales', Lineage)) %>%
separate.(Lineage, taxonomy_levels(), sep=';')
genomes
genomes_f = genomes %>%
filter.(Completeness >= 0.9,
Contamination < 0.05,
N_contigs <= 500) %>%
filter.(Family != 'f__')
genomes_f
# summarizing taxonomy
p = genomes_f %>%
summarize.(n = n.(), .by=c(Family, Genus)) %>%
mutate.(Taxonomy = paste(Family, Genus, sep=';')) %>%
ggplot(aes(Taxonomy, n, fill=Family)) +
geom_bar(stat='identity') +
labs(y='No. of genomes', x='Family;Genus') +
theme_bw() +
coord_flip()
p.dims(5,3.2)
plot(p)
# downloading
get_file = function(url, base_dir){
outfile = file.path(base_dir, 'UHGG', gsub('.+/', '', url))
message('Downloading: ', url)
curl_download(url, outfile, mode = "wb")
}
ret = genomes_f$FTP_download %>%
lapply(get_file, work_dir)
ret %>% length
###Output
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME001999.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME004317.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME005637.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME006149.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-009/MGYG-HGUT-00939/genomes1/GUT_GENOME006702.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-040/MGYG-HGUT-04070/genomes1/GUT_GENOME006711.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME007481.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME009152.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME009895.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03972/genomes1/GUT_GENOME011026.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME011034.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME012505.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME014750.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME015036.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME018032.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME018884.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME018997.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME019075.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME020800.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME021800.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME021845.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME021934.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME022422.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-008/MGYG-HGUT-00875/genomes1/GUT_GENOME022791.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME023175.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME023308.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME023691.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME023750.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-018/MGYG-HGUT-01889/genomes1/GUT_GENOME023931.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME024612.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME024865.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME025342.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-040/MGYG-HGUT-04052/genomes1/GUT_GENOME025383.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME025390.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME025396.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00718/genomes1/GUT_GENOME025577.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME025672.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME025895.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME025905.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00727/genomes1/GUT_GENOME025984.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME026012.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME026147.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME026613.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME026753.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME026776.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME026838.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME031108.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes2/GUT_GENOME033234.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03972/genomes1/GUT_GENOME035464.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME038027.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-032/MGYG-HGUT-03224/genomes1/GUT_GENOME043661.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME044199.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME044239.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME047625.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME048317.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-009/MGYG-HGUT-00939/genomes1/GUT_GENOME048628.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME049881.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME055430.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME055959.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME057004.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME057303.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME057861.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME067108.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-020/MGYG-HGUT-02072/genomes1/GUT_GENOME067315.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME067942.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME070737.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME071621.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME072725.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME074075.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME081213.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01103/genomes1/GUT_GENOME081297.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME081465.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME081799.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME083231.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME083365.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME083987.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03967/genomes1/GUT_GENOME084056.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME084566.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME084610.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes1/GUT_GENOME086293.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03961/genomes1/GUT_GENOME087392.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME089788.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME091022.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME091174.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME091251.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00718/genomes1/GUT_GENOME091444.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME092910.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME093203.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME093433.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME093687.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME094110.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME095220.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME095708.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME095807.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME098981.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME102313.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-016/MGYG-HGUT-01675/genomes1/GUT_GENOME102728.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME102818.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME103985.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00718/genomes1/GUT_GENOME104483.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-019/MGYG-HGUT-01932/genomes1/GUT_GENOME104663.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-045/MGYG-HGUT-04551/genomes1/GUT_GENOME104784.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-008/MGYG-HGUT-00875/genomes1/GUT_GENOME105230.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-009/MGYG-HGUT-00939/genomes1/GUT_GENOME105699.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME105752.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME107619.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME107801.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME107816.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME108244.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME108859.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME109505.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME110114.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-008/MGYG-HGUT-00875/genomes1/GUT_GENOME110789.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME110934.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME110954.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME111365.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME111416.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME111491.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME111501.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME112062.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME112317.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME112403.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME112898.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME112935.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME113027.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME113041.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME113243.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes3/GUT_GENOME113391.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME113455.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME113652.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME113942.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME114047.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME114314.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME114383.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME114384.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME114438.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME114538.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME114925.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME115405.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME115664.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME115734.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME116268.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME116339.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME116765.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME117240.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03961/genomes1/GUT_GENOME117847.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME117964.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME118007.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME118193.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME118498.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME119315.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME120873.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME122088.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-020/MGYG-HGUT-02072/genomes1/GUT_GENOME123861.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME123945.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME124482.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME124704.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME124708.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes2/GUT_GENOME125485.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-020/MGYG-HGUT-02072/genomes1/GUT_GENOME125851.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME126128.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-009/MGYG-HGUT-00939/genomes1/GUT_GENOME126626.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME127157.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME127754.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME129204.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME129433.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-020/MGYG-HGUT-02072/genomes1/GUT_GENOME129467.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME129678.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME132602.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME133038.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME134096.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME134119.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME134648.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME135093.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME135179.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME135922.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME136346.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME136676.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME138001.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME138477.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-008/MGYG-HGUT-00875/genomes1/GUT_GENOME139071.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-045/MGYG-HGUT-04551/genomes1/GUT_GENOME139869.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME148347.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes4/GUT_GENOME148387.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME149021.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME150000.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME151676.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02629/genomes1/GUT_GENOME155741.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02679/genomes1/GUT_GENOME158031.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME158142.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME158277.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME158309.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME158323.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-009/MGYG-HGUT-00939/genomes1/GUT_GENOME158370.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-009/MGYG-HGUT-00939/genomes1/GUT_GENOME158436.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-009/MGYG-HGUT-00939/genomes1/GUT_GENOME158456.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME158543.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME158610.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes1/GUT_GENOME158947.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes1/GUT_GENOME159103.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME159777.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME160252.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes3/GUT_GENOME162740.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00718/genomes1/GUT_GENOME170651.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-020/MGYG-HGUT-02072/genomes1/GUT_GENOME173659.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02679/genomes1/GUT_GENOME175025.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME175257.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME176010.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-008/MGYG-HGUT-00875/genomes1/GUT_GENOME179638.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-016/MGYG-HGUT-01658/genomes1/GUT_GENOME180128.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-016/MGYG-HGUT-01658/genomes1/GUT_GENOME180773.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes2/GUT_GENOME186446.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes2/GUT_GENOME188950.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME191092.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME191624.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME193107.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME193901.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes4/GUT_GENOME196897.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME197544.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes1/GUT_GENOME198224.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME198593.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME206070.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes1/GUT_GENOME207134.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes4/GUT_GENOME207194.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes1/GUT_GENOME208933.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME209311.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes4/GUT_GENOME209384.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes1/GUT_GENOME211577.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME211643.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME211803.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03972/genomes1/GUT_GENOME213158.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03972/genomes1/GUT_GENOME214515.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME214773.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME217538.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03972/genomes1/GUT_GENOME217857.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME217871.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03972/genomes1/GUT_GENOME217887.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME218690.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME219111.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-032/MGYG-HGUT-03224/genomes1/GUT_GENOME219688.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-020/MGYG-HGUT-02072/genomes1/GUT_GENOME220052.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-020/MGYG-HGUT-02072/genomes1/GUT_GENOME220179.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME221658.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME226773.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME229160.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME229523.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME231159.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-035/MGYG-HGUT-03558/genomes1/GUT_GENOME237426.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME237619.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-032/MGYG-HGUT-03224/genomes1/GUT_GENOME243616.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03875/genomes1/GUT_GENOME243997.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME245548.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03961/genomes1/GUT_GENOME245676.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME246707.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-019/MGYG-HGUT-01932/genomes1/GUT_GENOME246796.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME247127.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME247253.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME247368.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME249008.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-040/MGYG-HGUT-04034/genomes1/GUT_GENOME251006.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01156/genomes1/GUT_GENOME251370.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME251395.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03945/genomes1/GUT_GENOME251449.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME251481.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME251502.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME251715.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME251769.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME251866.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME251979.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME251995.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes4/GUT_GENOME252229.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME252265.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME252323.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME252447.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME252532.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes3/GUT_GENOME252572.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME252763.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-008/MGYG-HGUT-00875/genomes1/GUT_GENOME253102.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME253389.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME253599.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes3/GUT_GENOME253664.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME254184.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-012/MGYG-HGUT-01237/genomes1/GUT_GENOME254327.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-040/MGYG-HGUT-04034/genomes1/GUT_GENOME254353.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME254480.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME254551.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-032/MGYG-HGUT-03224/genomes1/GUT_GENOME254685.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME255044.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME255058.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME255127.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME255169.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME255357.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-009/MGYG-HGUT-00939/genomes1/GUT_GENOME255516.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME255520.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME255625.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME255693.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME255920.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME256026.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME256080.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-020/MGYG-HGUT-02072/genomes1/GUT_GENOME256472.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME256501.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-040/MGYG-HGUT-04052/genomes1/GUT_GENOME256891.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03972/genomes1/GUT_GENOME256928.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-012/MGYG-HGUT-01237/genomes1/GUT_GENOME256935.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME257064.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME257240.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME257346.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME257380.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME257491.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME257687.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME257761.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME258061.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes2/GUT_GENOME258296.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-009/MGYG-HGUT-00939/genomes1/GUT_GENOME258401.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME258517.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-009/MGYG-HGUT-00939/genomes1/GUT_GENOME258570.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME258620.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-041/MGYG-HGUT-04179/genomes1/GUT_GENOME258665.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME258671.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME258678.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04245/genomes1/GUT_GENOME259078.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME259116.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-020/MGYG-HGUT-02072/genomes1/GUT_GENOME259347.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME259475.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03967/genomes1/GUT_GENOME259485.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME259575.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME259624.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00718/genomes1/GUT_GENOME259815.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME259848.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME260244.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes1/GUT_GENOME260260.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME260362.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03972/genomes1/GUT_GENOME260711.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME260852.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME260912.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-032/MGYG-HGUT-03224/genomes1/GUT_GENOME260971.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes2/GUT_GENOME260994.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME261098.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME261099.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-040/MGYG-HGUT-04052/genomes1/GUT_GENOME261125.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME261147.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME261177.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME262360.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME263396.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME265850.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-039/MGYG-HGUT-03961/genomes1/GUT_GENOME265851.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME266061.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME268393.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME268512.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-041/MGYG-HGUT-04179/genomes1/GUT_GENOME269902.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME270724.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME274044.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME274160.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME274720.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes2/GUT_GENOME275434.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME275671.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME276507.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME277111.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME277241.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME277680.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME278880.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME279410.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME279435.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes1/GUT_GENOME279772.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME280640.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00718/genomes1/GUT_GENOME281443.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME281451.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME281578.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME282452.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME282719.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-045/MGYG-HGUT-04551/genomes1/GUT_GENOME282790.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-026/MGYG-HGUT-02681/genomes1/GUT_GENOME282987.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME283318.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME283538.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME283719.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME284294.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-011/MGYG-HGUT-01132/genomes1/GUT_GENOME285124.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME285532.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME285613.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME285799.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes1/GUT_GENOME286150.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-038/MGYG-HGUT-03891/genomes4/GUT_GENOME286223.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-007/MGYG-HGUT-00730/genomes2/GUT_GENOME286615.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-005/MGYG-HGUT-00530/genomes2/GUT_GENOME286691.gff.gz
###Markdown
Parsing gff files```(genome) @ rick:/ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellales/genomes/UHGG$ find . -name "*.gff.gz" | xargs -I % /ebio/abt3_projects/databases_no-backup/UHGG/2019_09/prokka_gff2fasta.py %``` TUK MAGs
###Code
F = '/ebio/abt3_projects/Anxiety_Twins_Metagenomes/data/metagenome/TUK-5projects/LLMGA/v0.12/LLG//rnd1/final_MAGs.tsv'
TUK = Fread(F)
TUK
TUK = TUK %>%
filter.(Order == 'Christensenellales')
TUK
TUK_f = TUK %>%
filter.(Completeness >= 0.9,
Contamination < 0.05,
X..contigs <= 500) %>%
filter.(Family != 'f__')
TUK_f
copy_file = function(F, base_dir){
outfile = file.path(base_dir, basename(F))
stopifnot(F != outfile)
file.copy(F, outfile)
}
res = TUK_f$Fasta %>%
lapply(copy_file, base_dir=file.path(work_dir, 'TUK'))
res %>% length
###Output
_____no_output_____
###Markdown
T1k MAGs
###Code
F = '/ebio/abt3_projects/Anxiety_Twins_Metagenomes/data/metagenome/Tulsa1000/LLMGA_v0.12/HiSeq_R160-163-169/LLG//rnd1/final_MAGs.tsv'
T1k = Fread(F)
T1k
T1k = T1k %>%
filter.(Order == 'Christensenellales')
T1k
T1k_f = T1k %>%
filter.(Completeness >= 0.9,
Contamination < 0.05,
X..contigs <= 500) %>%
filter.(Family != 'f__')
T1k_f
# copying files
res = T1k_f$Fasta %>%
lapply(copy_file, base_dir=file.path(work_dir, 'T1k'))
res %>% length
###Output
_____no_output_____
###Markdown
List of all genomes
###Code
files = list_files(work_dir, '.fna')
samps = data.frame(Name = files %>% as.character %>% basename,
Fasta = files,
Domain = 'Bacteria',
Taxid = taxid) %>%
mutate(Fasta = gsub('/+', '/', Fasta))
samps
# writing file
outfile = file.path(work_dir, 'genomes_raw.txt')
write_table(samps, outfile)
###Output
File written: /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellales/genomes//genomes_raw.txt
###Markdown
WARNING! > The taxid will need to be changed for the primer design. The proper taxids can be determined by: `GTDB-Tk => ncbi-lineage => names.dmp` LLG Config
###Code
cat_file(file.path(work_dir, 'config_llg.yaml'))
###Output
# table with genome --> fasta_file information
samples_file: /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellales/genomes/genomes_raw.txt
# output location
output_dir: /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellales/genomes/
# temporary file directory (your username will be added automatically)
tmp_dir: /ebio/abt3_scratch/
# batch processing of genomes for certain steps
## increase to better parallelize
batches: 5
# Domain of genomes ('Archaea' or 'Bacteria)
## Use "Skip" if provided as a "Domain" column in the genome table
Domain: Skip
# software parameters
# Use "Skip" to skip any of these steps. If no params for rule, use ""
# dRep MAGs are not further analyzed, but you can de-rep & then use the de-rep genome table as input.
params:
ionice: -c 3
# assembly assessment
seqkit: ""
quast: Skip #""
multiqc_on_quast: ""
checkm: ""
# de-replication (CheckM recommended)
drep:
algorithm: auto # will select fastANI if >1000 genomes, else accurate mode
params: -comp 50 -con 5 -sa 0.999
# taxonomy
sourmash:
compute: Skip #--scaled 10000 -k 31
gather: -k 31
gtdbtk:
classify_wf: --min_perc_aa 10
# genome pairwise ANI
fastani: Skip #--fragLen 3000 --minFraction 0.2 -k 16
dashing: Skip # -k 31 --full-tsv
comparem_aai: Skip # --evalue 0.001
# gene annotation
gene_call:
prokka: Skip #""
multiqc_on_prokka: ""
prodigal: Skip #""
eggnog_mapper: Skip #""
eggnog_mapper_annot: ""
# rRNA (16S alignment & phylogeny)
barrnap: Skip #--lencutoff 0.8
vsearch_per_genome_drep: --id 0.95 # Skip to prevent drep of 16S copies within each genome
qiime2_fasttree: ""
qiime2_iqtree: --p-alrt 1000 --p-abayes --p-lbp 1000 --p-substitution-model 'GTR+I+G'
# genome phylogeny
phylophlan_config: Skip #--map_dna diamond --db_aa diamond --map_aa diamond --msa mafft --trim trimal --tree1 fasttree --tree2 raxml
phylophlan:
accuracy: --auto # --auto will select --fast if >2000 genomes, otherwise --accurate
other_params: --diversity high --min_num_markers 50
# phenotype
traitar: Skip #""
# biosynthetic gene clusters (BGCs)
antismash: Skip #--cb-knownclusters --cb-subclusters --asf
DeepBGC: Skip #--score 0.5 --classifier-score 0.5 --prodigal-meta-mode
# antimicrobial resistance (AMR)
abricate: Skip #--minid 75 --mincov 80
# CRISPRs
cctyper: Skip #--prodigal meta
# databases
databases:
checkM_data: /ebio/abt3_projects/databases_no-backup/checkM/
sourmash: /ebio/abt3_projects/databases_no-backup/sourmash/genbank-k31.sbt.json
sourmash_lca: /ebio/abt3_projects/databases_no-backup/sourmash/genbank-k31.lca.json.gz
gtdbtk: /ebio/abt3_projects/databases_no-backup/GTDB/release95/gtdbtk/db_info.md
phylophlan: /ebio/abt3_projects/databases_no-backup/phylophlan/PhyloPhlan/phylophlan.faa.bz2
eggnog: /ebio/abt3_projects/databases_no-backup/Eggnog/v2/eggnog.db
eggnog_diamond: /ebio/abt3_projects/databases_no-backup/Eggnog/v2/eggnog_proteins.dmnd
antismash: /ebio/abt3_projects/databases_no-backup/antismash/v5/
deepbgc: /ebio/abt3_projects/databases_no-backup/DeepBGC/
traitar: /ebio/abt3_projects/databases_no-backup/pfam/traitar/
taxdump: # used for adding taxids to GTDB-Tk classifications
names: /ebio/abt3_projects/databases_no-backup/GTDB/release95/taxdump/names.dmp
nodes: /ebio/abt3_projects/databases_no-backup/GTDB/release95/taxdump/nodes.dmp
abricate:
ncbi: /ebio/abt3_projects/databases_no-backup/abricate/ncbi/sequences
card: /ebio/abt3_projects/databases_no-backup/abricate/card/sequences
resfinder: /ebio/abt3_projects/databases_no-backup/abricate/resfinder/sequences
argannot: /ebio/abt3_projects/databases_no-backup/abricate/argannot/sequences
bacmet2: /ebio/abt3_projects/databases_no-backup/abricate/bacmet2/sequences
vfdb: /ebio/abt3_projects/databases_no-backup/abricate/vfdb/sequences
megares: /ebio/abt3_projects/databases_no-backup/abricate/megares/sequences
plasmidfinder: /ebio/abt3_projects/databases_no-backup/abricate/plasmidfinder/sequences
# snakemake pipeline
pipeline:
snakemake_folder: ./
script_folder: ./bin/scripts/
use_shared_mem: True
name: LLG
###Markdown
Run ```(snakemake) @ rick:/ebio/abt3_projects/software/dev/ll_pipelines/llg$ screen -L -s llg-christ ./snakemake_sge.sh /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellales/genomes/config_llg.yaml 30 -F``` Samples table of high quality genomes
###Code
# checkM summary
checkm = file.path(work_dir, 'LLG_output', 'checkM', 'checkm_qa_summary.tsv') %>%
read.delim(sep='\t')
checkm
# dRep summary
drep = file.path(work_dir, 'LLG_output', 'drep', 'checkm_markers_qa_summary.tsv') %>%
read.delim(sep='\t') %>%
mutate(Bin.Id = gsub('.+/', '', genome),
Bin.Id = gsub('\\.fna$', '', Bin.Id))
drep
# de-replicated genomes
drep_gen = file.path(work_dir, 'LLG_output', 'drep', 'dereplicated_genomes.tsv') %>%
read.delim(sep='\t')
drep_gen
# GTDBTk summary
tax = file.path(work_dir, 'LLG_output', 'gtdbtk', 'gtdbtk_summary_wTaxid.tsv') %>%
    read.delim(sep='\t') %>%
separate(classification,
c('Domain', 'Phylum', 'Class', 'Order', 'Family', 'Genus', 'Species'),
sep=';') %>%
select(-note, -classification_method, -pplacer_taxonomy,
-other_related_references.genome_id.species_name.radius.ANI.AF.)
tax
# getting NCBI taxid from GTDB metadata
D = '/ebio/abt3_projects/databases_no-backup/GTDB/release95/metadata/'
F = file.path(D, 'bac120_metadata_r95.tsv')
tax = tax %>%
inner_join(read.delim(F, sep='\t') %>% select(gtdb_taxonomy, ncbi_taxonomy, ncbi_taxid), c('closest_placement_taxonomy'='gtdb_taxonomy')) %>%
group_by(user_genome, Domain, Phylum, Class, Order, Family, Genus, Species) %>%
summarize(ncbi_taxid = first(ncbi_taxid), .groups='drop')
tax$ncbi_taxid %>% unique_n('taxids')
tax
tax$ncbi_taxid %>% table %>% sort
# checking overlap
cat('-- drep --\n')
overlap(basename(as.character(drep_gen$Fasta)),
basename(as.character(drep$genome)))
cat('-- checkm --\n')
overlap(drep$Bin.Id, checkm$Bin.Id)
cat('-- gtdbtk --\n')
overlap(drep$Bin.Id, tax$user_genome)
# joining based on Bin.Id
drep = drep %>%
inner_join(checkm, c('Bin.Id')) %>%
mutate(GEN = genome %>% as.character %>% basename) %>%
inner_join(drep_gen %>% mutate(GEN = Fasta %>% as.character %>% basename),
by=c('GEN')) %>%
inner_join(tax, c('Bin.Id'='user_genome')) #%>%
drep
# summarizing the taxonomy
df.dims(20)
drep %>%
group_by(Order, Family) %>%
summarize(n_genomes = n(), .groups='drop')
df.dims()
# filtering by quality
hq_genomes = drep %>%
filter(completeness >= 90,
contamination < 5,
Strain.heterogeneity < 50,
N50..contigs. >= 20000)
hq_genomes
# summarizing the taxonomy
df.dims(30)
hq_genomes %>%
group_by(Order, Family) %>%
summarize(n_genomes = n(), .groups='drop')
df.dims()
# summarizing
hq_genomes$Completeness %>% summary_x('Completeness')
hq_genomes$X..contigs %>% summary_x('No. of contigs')
hq_genomes$Mean.contig.length..bp. %>% summary_x('Mean contig length')
hq_genomes$X..predicted.genes %>% summary_x('No. of genes')
hq_genomes$N50..contigs. %>% summary_x('N50')
# writing samples table for LLPRIMER
outfile = file.path(work_dir, 'LLG_output', 'samples_genomes_hq.txt')
hq_genomes %>%
select(Bin.Id, Fasta, Domain, ncbi_taxid) %>%
rename('Taxon' = Bin.Id,
'Taxid' = ncbi_taxid) %>%
mutate(Taxon = gsub('_chromosome.+', '', Taxon),
Taxon = gsub('_bin_.+', '', Taxon),
Taxon = gsub('_genomic', '', Taxon),
Taxon = gsub('_annotated_assembly', '', Taxon)) %>%
write_table(outfile)
###Output
File written: /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellales/genomes//LLG_output/samples_genomes_hq.txt
###Markdown
sessionInfo
###Code
sessionInfo()
###Output
_____no_output_____ |
深度学习/d2l-zh-1.1/chapter_computational-performance/multiple-gpus.ipynb | ###Markdown
Multi-GPU Computation. In this section, we will show how to use multiple GPUs for computation, for example, how to train a single model with multiple GPUs. As you would expect, running the programs in this section requires at least two GPUs. In fact, installing multiple GPUs in a single machine is common, because motherboards usually have several PCIe slots. If the NVIDIA drivers are installed properly, we can use the `nvidia-smi` command to view all the GPUs on the current machine.
###Code
!nvidia-smi
###Output
Sat Mar 7 04:19:26 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
###Markdown
As introduced in the ["Automatic Parallel Computation"](auto-parallelism.ipynb) section, most operations can use all the computational resources of all CPUs, or all the computational resources of a single GPU. However, if we want to train a model with multiple GPUs, we still need to implement the corresponding algorithms; the most common of these is called data parallelism. Data Parallelism. Data parallelism is currently the most widely used method in deep learning for dividing the work of model training across multiple GPUs. Recall the process of training a model with an optimization algorithm described in the ["Mini-batch Stochastic Gradient Descent"](../chapter_optimization/minibatch-sgd.ipynb) section. Below, we use mini-batch stochastic gradient descent as an example to describe how data parallelism works. Suppose there are $k$ GPUs on a machine. Given the model to be trained, each GPU and its corresponding memory independently maintain a complete copy of the model parameters. In any iteration of training, given a random mini-batch, we divide the examples in the batch into $k$ parts and distribute one part to each GPU's memory. Then, each GPU computes the local gradients of the model parameters based on the mini-batch subset it was assigned and the parameters it maintains. Next, we add up the local gradients from the $k$ GPUs to obtain the current mini-batch stochastic gradient. After that, each GPU uses this mini-batch stochastic gradient to update the complete copy of the model parameters that it maintains. Figure 8.1 depicts the computation of the mini-batch stochastic gradient using data parallelism with 2 GPUs. To implement data parallelism in multi-GPU training from scratch, let's first import the required packages and modules.
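In symbols (notation introduced here for clarity, consistent with the implementation below): if the mini-batch $\mathcal{B}$ is split into subsets $\mathcal{B}_1, \ldots, \mathcal{B}_k$ and GPU $i$ computes the local gradient $\mathbf{g}_i = \sum_{(\boldsymbol{x}, y) \in \mathcal{B}_i} \nabla_{\mathbf{w}} \ell(\boldsymbol{x}, y, \mathbf{w})$ on its own copy of the parameters $\mathbf{w}$, then after the local gradients are summed every GPU applies the same update

$$\mathbf{w} \leftarrow \mathbf{w} - \frac{\eta}{|\mathcal{B}|} \sum_{i=1}^{k} \mathbf{g}_i,$$

where $\eta$ is the learning rate, so all $k$ parameter copies remain identical.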
###Code
import d2lzh as d2l
import mxnet as mx
from mxnet import autograd, nd
from mxnet.gluon import loss as gloss
import time
###Output
_____no_output_____
###Markdown
Define the Model. We use LeNet, introduced in the ["Convolutional Neural Networks (LeNet)"](../chapter_convolutional-neural-networks/lenet.ipynb) section, as the example model for this section. The model implementation here only uses `NDArray`.
###Code
# Initialize model parameters
scale = 0.01
W1 = nd.random.normal(scale=scale, shape=(20, 1, 3, 3))
b1 = nd.zeros(shape=20)
W2 = nd.random.normal(scale=scale, shape=(50, 20, 5, 5))
b2 = nd.zeros(shape=50)
W3 = nd.random.normal(scale=scale, shape=(800, 128))
b3 = nd.zeros(shape=128)
W4 = nd.random.normal(scale=scale, shape=(128, 10))
b4 = nd.zeros(shape=10)
params = [W1, b1, W2, b2, W3, b3, W4, b4]
# Define the model
def lenet(X, params):
h1_conv = nd.Convolution(data=X, weight=params[0], bias=params[1],
kernel=(3, 3), num_filter=20)
h1_activation = nd.relu(h1_conv)
h1 = nd.Pooling(data=h1_activation, pool_type='avg', kernel=(2, 2),
stride=(2, 2))
h2_conv = nd.Convolution(data=h1, weight=params[2], bias=params[3],
kernel=(5, 5), num_filter=50)
h2_activation = nd.relu(h2_conv)
h2 = nd.Pooling(data=h2_activation, pool_type='avg', kernel=(2, 2),
stride=(2, 2))
h2 = nd.flatten(h2)
h3_linear = nd.dot(h2, params[4]) + params[5]
h3 = nd.relu(h3_linear)
y_hat = nd.dot(h3, params[6]) + params[7]
return y_hat
# Cross-entropy loss function
loss = gloss.SoftmaxCrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Synchronizing Data Across Multiple GPUs. We need to implement some helper functions to synchronize data across multiple GPUs. The following `get_params` function copies the model parameters to a particular GPU's memory and initializes the gradients.
###Code
def get_params(params, ctx):
new_params = [p.copyto(ctx) for p in params]
for p in new_params:
p.attach_grad()
return new_params
###Output
_____no_output_____
###Markdown
Let's try copying the model parameters `params` to `gpu(0)`.
###Code
new_params = get_params(params, mx.gpu(0))
print('b1 weight:', new_params[1])
print('b1 grad:', new_params[1].grad)
###Output
b1 weight:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
<NDArray 20 @gpu(0)>
b1 grad:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
<NDArray 20 @gpu(0)>
###Markdown
Given data distributed across the memory of multiple GPUs, the following `allreduce` function adds up the data from all the GPUs and then broadcasts the result back to all of them.
###Code
def allreduce(data):
for i in range(1, len(data)):
data[0][:] += data[i].copyto(data[0].context)
for i in range(1, len(data)):
data[0].copyto(data[i])
###Output
_____no_output_____
###Markdown
Let's run a simple test of the `allreduce` function.
###Code
data = [nd.ones((1, 2), ctx=mx.gpu(i)) * (i + 1) for i in range(2)]
print('before allreduce:', data)
allreduce(data)
print('after allreduce:', data)
###Output
before allreduce: [
[[1. 1.]]
<NDArray 1x2 @gpu(0)>,
[[2. 2.]]
<NDArray 1x2 @gpu(1)>]
after allreduce: [
[[3. 3.]]
<NDArray 1x2 @gpu(0)>,
[[3. 3.]]
<NDArray 1x2 @gpu(1)>]
###Markdown
Given a batch of data examples, the following `split_and_load` function splits the batch and copies each part to the memory of a different GPU.
###Code
def split_and_load(data, ctx):
n, k = data.shape[0], len(ctx)
    m = n // k  # For simplicity, assume the batch size is divisible by the number of devices
assert m * k == n, '# examples is not divided by # devices.'
return [data[i * m: (i + 1) * m].as_in_context(ctx[i]) for i in range(k)]
###Output
_____no_output_____
###Markdown
Let's try using the `split_and_load` function to divide 6 data examples evenly between the memory of 2 GPUs.
###Code
batch = nd.arange(24).reshape((6, 4))
ctx = [mx.gpu(0), mx.gpu(1)]
splitted = split_and_load(batch, ctx)
print('input: ', batch)
print('load into', ctx)
print('output:', splitted)
###Output
input:
[[ 0. 1. 2. 3.]
[ 4. 5. 6. 7.]
[ 8. 9. 10. 11.]
[12. 13. 14. 15.]
[16. 17. 18. 19.]
[20. 21. 22. 23.]]
<NDArray 6x4 @cpu(0)>
load into [gpu(0), gpu(1)]
output: [
[[ 0. 1. 2. 3.]
[ 4. 5. 6. 7.]
[ 8. 9. 10. 11.]]
<NDArray 3x4 @gpu(0)>,
[[12. 13. 14. 15.]
[16. 17. 18. 19.]
[20. 21. 22. 23.]]
<NDArray 3x4 @gpu(1)>]
###Markdown
Multi-GPU Training on a Single Mini-batch. Now we can implement multi-GPU training on a single mini-batch. The implementation is mainly based on the data parallelism approach described in this section. We will use the helper functions we just defined for synchronizing data across multiple GPUs, namely `allreduce` and `split_and_load`.
###Code
def train_batch(X, y, gpu_params, ctx, lr):
    # When ctx contains multiple GPUs and their memory, split the mini-batch examples and copy one part to each GPU's memory
gpu_Xs, gpu_ys = split_and_load(X, ctx), split_and_load(y, ctx)
    with autograd.record():  # Compute the loss separately on each GPU
ls = [loss(lenet(gpu_X, gpu_W), gpu_y)
for gpu_X, gpu_y, gpu_W in zip(gpu_Xs, gpu_ys, gpu_params)]
    for l in ls:  # Backpropagate separately on each GPU
l.backward()
    # Add up the gradients from all the GPUs' memories, then broadcast the result to all of them
for i in range(len(gpu_params[0])):
allreduce([gpu_params[c][i].grad for c in range(len(ctx))])
    for param in gpu_params:  # Update the model parameters on each GPU separately
        d2l.sgd(param, lr, X.shape[0])  # The full batch size is used here
###Output
_____no_output_____
###Markdown
Define the Training Function. Now we can define the training function. The training function here differs from `train_ch3`, which was defined in the ["Implementation of Softmax Regression from Scratch"](../chapter_deep-learning-basics/softmax-regression-scratch.ipynb) section. It is worth emphasizing that here, following data parallelism, we need to copy the complete model parameters to the memory of multiple GPUs, and perform multi-GPU training on a single mini-batch in each iteration.
###Code
def train(num_gpus, batch_size, lr):
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
ctx = [mx.gpu(i) for i in range(num_gpus)]
print('running on:', ctx)
    # Copy the model parameters to the memory of num_gpus GPUs
gpu_params = [get_params(params, c) for c in ctx]
for epoch in range(4):
start = time.time()
for X, y in train_iter:
            # Perform multi-GPU training on a single mini-batch
train_batch(X, y, gpu_params, ctx, lr)
nd.waitall()
train_time = time.time() - start
        def net(x):  # Evaluate the model on gpu(0)
return lenet(x, gpu_params[0])
test_acc = d2l.evaluate_accuracy(test_iter, net, ctx[0])
print('epoch %d, time %.1f sec, test acc %.2f'
% (epoch + 1, train_time, test_acc))
###Output
_____no_output_____
###Markdown
Multi-GPU Training Experiments. Let's start with single-GPU training. We set the batch size to 256 and the learning rate to 0.2.
###Code
train(num_gpus=1, batch_size=256, lr=0.2)
###Output
running on: [gpu(0)]
###Markdown
Keeping the batch size and learning rate unchanged, we change the number of GPUs to 2. We can see that the improvement in test accuracy is roughly on par with the result of the previous experiment. Because of the extra communication overhead, we do not see a significant reduction in training time. Therefore, we will experiment with a more computationally complex model in the next section.
###Code
train(num_gpus=2, batch_size=256, lr=0.2)
###Output
running on: [gpu(0), gpu(1)]
|
lab_exercise-checkpoint.ipynb | ###Markdown
Name: Emmanuel Isebor Mat No: 2012012034 Email: [email protected] Exercise Description: Programming Exercise I***A Python programme to get the difference between a given number and 17; if the number is greater than 17, return double the absolute difference***
###Code
j = int(input("Input given number"))
k = 17
l = j - k
if j > k:
m = 2 * l
print(m)
else:
y = l * (-1)
print(y)
###Output
Input given number11
6
###Markdown
**Exercise II*****A Python programme to calculate the sum of three given numbers; if the values are all equal, then return thrice their sum***
###Code
A = int(input("Enter first number"))
B = int(input("Enter second number"))
C = int(input("Enter third number"))
if A == B == C:
v = A + B + C
q= v*3
print(q)
else:
v = A + B + C
print(v)
###Output
Enter first number3
Enter second number4
Enter third number5
12
###Markdown
**Exercise III*****A Python programme to return true if the values of two given integers are equal, or if their sum or difference is 5***
###Code
V = float(input("Enter given integer"))
K = float(input("Enter given integer"))
w = V + K
if V == K:
print ("True")
if w == 5:
print ("True")
g = V - K
if g == 5:
print ("True")
###Output
Enter given integer5
Enter given integer5
True
###Markdown
EXERCISE IV
###Code
X = int(input("Input an integer: "))
Y = int(input("Input an integer: "))
Z = int(input("Input an integer: "))
Mx = max(X,Y,Z)
Mn = min(X,Y,Z)
total = X+Y+Z
Mid = total - Mx - Mn
print(f"Maximum: {Mx}, middle: {Mid}, Minimum: {Mn}")
###Output
Input an integer: 3
Input an integer: 4
Input an integer: 5
Maximum: 5, middle: 4, Minimum: 3
###Markdown
Exercise V***Write a Python function that takes a positive integer and returns the sum of the cubes of all the positive integers smaller than the specified number***
###Code
def sum_of_cubes(x):
x-= 1
total = 0
while x > 0:
total += x * x * x
x-= 1
return total
print("sum of cubes: ", sum_of_cubes(5))
###Output
sum of cubes: 100
|
basico/variaveis1-LUCAS.ipynb | ###Markdown
Calculations done with plain Python
###Code
x = 35
y = x + 35
print(y)
###Output
70
###Markdown
Getting started with TensorFlow
###Code
import tensorflow as tf
tf.__version__
# We also pass the value of the constant as a parameter
# Constants can be named using the "name" parameter
valor1 = tf.constant(15, name = 'valor1')
print(valor1)
###Output
Tensor("valor1:0", shape=(), dtype=int32)
###Markdown
As strange as it may seem, we define 'valor1' and then pass 'valor1' as the constant's name. This happens because the assignment to valor1 is just the definition of a constant.
###Code
# Note that we are assigning to the variable 'soma' a Variable that
# receives the sum of a constant plus the value 5, and we gave it the name "valor1"
# 'soma' is actually the definition of the graph, where we add a constant to a variable value
soma = tf.Variable(valor1 + 5, name = 'valor1')
# Note that the data type of soma is a Variable
print(soma)
type(soma)
# When we use variables in TensorFlow, we need to initialize them
# This only builds the graph with the variable dependencies; it is not executed yet
init = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
Note: the reason for using 'init' (initializing the variables) is that a graph with the dependencies between the variables is created. In other words, the variable 'soma' that we created earlier depends on the constant 'valor1', and the sum + 5 is computed ONLY when we call the run() method inside a session.
###Code
with tf.Session() as sess:
    sess.run(init)  # running the variable initialization inside the session
    print(sess.run(soma))  # running the computation of the graphs/formulas defined earlier
###Output
20
|
notebooks/2_Basic_Keras_Regression.ipynb | ###Markdown
Keras for basic regression. Revisiting the task of regression, let's look at trying to model a more complicated function: a sine wave with some Gaussian noise. We'll also look at an alternative way of building models in Keras.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model, Sequential
from keras.layers import Dense, Activation, Input
from keras.optimizers import Adam
from keras.utils import plot_model
n = 200
x = 2*np.pi*np.random.random(n)
y = np.sin(x)+np.random.normal(scale=0.2, size=n)
d = np.array(list(zip(x,y)))
np.random.shuffle(d)
x,y = d[:,0],d[:,1]
plt.scatter(x, y)
###Output
_____no_output_____
###Markdown
Previously we used a class-based approach to building our Keras model, where we added layers one after another. Here we'll use an alternative, *functional*, approach. This involves taking a dummy tensor and applying transformations to it. The `Model` is then created by passing the original tensor and the final tensor as arguments. The history of the tensor transformations is automatically saved, and the model will reapply them to any tensors which it is passed in the future. This method can either be more intuitive (if you view NNs as a mathematical map from input to output), or more complicated (if you instead see NNs as statistical models). In either case, it offers greater flexibility than the pure class-based approach, as we'll see later. Applying transformations to the dummy tensor again takes the form of instantiating Dense layers and activation functions; however, rather than appending these to a model, they instead act on the dummy tensor.
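For comparison, here is a minimal sketch of the same stack built with the `Sequential` class imported above, using the same layer sizes as the functional version below:
```
seq_model = Sequential()
seq_model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(1,)))
for _ in range(5):
    seq_model.add(Dense(10, activation='relu', kernel_initializer='he_normal'))
seq_model.add(Dense(1, activation='linear'))  # linear output for regression
seq_model.compile(optimizer=Adam(lr=0.01), loss='mse')
```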
###Code
inputs = Input(shape=(1,)) # This is our dummy tensor
h = Dense(10, activation='relu', kernel_initializer='he_normal')(inputs) # The Dense layer is applied to the dummy tensor
for _ in range(5): h = Dense(10, activation='relu', kernel_initializer='he_normal')(h) # This can also be done in loops
outputs = Dense(1, activation='linear')(h) # Final transformation with linear output
model = Model(inputs=inputs, outputs=outputs) # Model class then instantiated based on initial and final tensor
model.compile(optimizer=Adam(lr=0.01), loss='mse') # Compile as before (MSE=Mean Squared Error)
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 1) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 20
_________________________________________________________________
dense_2 (Dense) (None, 10) 110
_________________________________________________________________
dense_3 (Dense) (None, 10) 110
_________________________________________________________________
dense_4 (Dense) (None, 10) 110
_________________________________________________________________
dense_5 (Dense) (None, 10) 110
_________________________________________________________________
dense_6 (Dense) (None, 10) 110
_________________________________________________________________
dense_7 (Dense) (None, 1) 11
=================================================================
Total params: 581
Trainable params: 581
Non-trainable params: 0
_________________________________________________________________
###Markdown
As before, the model summary shows the layers in the NN, despite our never explicitly appending them to the model.
###Code
history = model.fit(x=x, y=y, batch_size=64, epochs=100)
plt.plot(range(len(history.history['loss'])), history.history['loss'])
preds = model.predict(x)
_ = plt.scatter(x, y, label='True')
_ = plt.scatter(x, preds.squeeze(), label='Pred.')
plt.legend();
###Output
_____no_output_____ |
NER_BERT/BERT_ner_pytorch_lightning_TPU.ipynb | ###Markdown
Installing/ Importing required libraries
###Code
# !pip install torch==1.4
!pip install transformers
!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl
!pip install pytorch_lightning
# !pip install --quiet "pytorch-lightning>=1.3"
import pandas as pd
import numpy as np
import seaborn as sns
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader, random_split
from sklearn.preprocessing import LabelEncoder
import torch.optim as optim
import matplotlib.pyplot as plt
from transformers import AutoTokenizer
from tqdm import tqdm_notebook as tqdm
import pytorch_lightning as pl
import transformers
from transformers import BertForTokenClassification, AdamW
%matplotlib inline
device = "cuda" if torch.cuda.is_available() else "cpu"
device
# !pip install kaggle
# !mkdir ~/.kaggle
# !cp kaggle.json ~/.kaggle/
# !chmod 600 ~/.kaggle/kaggle.json
# !kaggle datasets download -d abhinavwalia95/entity-annotated-corpus
# import zipfile
# with zipfile.ZipFile('/content/drive/MyDrive/entity-annotated-corpus.zip', 'r') as zip_ref:
# zip_ref.extractall("/content/drive/MyDrive")
# !pip install transformers
df = pd.read_csv('/content/drive/MyDrive/ner.csv', encoding = "ISO-8859-1", error_bad_lines=False)
df1 = pd.read_csv('/content/drive/MyDrive/ner_dataset.csv', encoding = "ISO-8859-1", error_bad_lines=False)
df1['Sentence #'] = df1['Sentence #'].fillna(method = 'ffill')
Label_encoder = LabelEncoder()
df1["Tag"] = Label_encoder.fit_transform(df1["Tag"])
sns.countplot(df1['Tag'])
plt.xticks(rotation=90)
plt.show()
sentences = list(df1.groupby("Sentence #")["Word"].apply(list).reset_index()['Word'].values)
vals = list(df1.groupby("Sentence #")["Tag"].apply(list).reset_index()['Tag'].values)
sentences = [" ".join(s) for s in sentences]
# from torch.nn.utils.rnn import pad_sequence,pack_padded_sequence
tokenizer = transformers.BertTokenizer.from_pretrained(
'bert-base-cased',
do_lower_case=True
)
class ner_dataset(Dataset):
def __init__(self,sentences,vals,tokenizer,max_len):
self.sentences = sentences
self.vals = vals
self.tokenizer = tokenizer
self.max_len = max_len
def __getitem__(self,idx):
s = self.sentences[idx].split(" ")
v = self.vals[idx]
d = {'input_ids':[],'attention_mask':[],'labels':[]}
text = []
labels = []
mask = []
for w in range(len(s)) :
i, l = self.align_labels(self.tokenizer,s[w],v[w])
text.extend(i['input_ids'])
labels.extend(l)
mask.extend(i['attention_mask'])
        # 101 and 102 are the ids of BERT's [CLS] and [SEP] special tokens in this vocabulary
        d['input_ids'] = [101] + self.pad(text+ [102],self.max_len)
d['labels'] = [0] + self.pad(labels+ [0],self.max_len)
d['attention_mask'] = [1] + self.pad(mask+ [1],self.max_len)
d['input_ids'] = torch.tensor(d['input_ids'])
d['labels'] = torch.tensor(d['labels'])
d['attention_mask'] = torch.tensor(d['attention_mask'])
return d
def __len__(self):
return len(self.sentences)
def align_labels(self,tokenizer,word,label):
word = tokenizer(word,add_special_tokens=False)
labels = []
for i in range(len(word['input_ids'])):
labels.append(label)
return word,labels
def pad(self,s,max_len):
pad_len = max_len - len(s)
if pad_len>0:
for i in range(pad_len):
s.append(0)
return s[:max_len-1]
dataset = ner_dataset(sentences,vals,tokenizer,100)
train_dataset, test_dataset = random_split(dataset,[int(len(dataset)*0.8),len(dataset)-int(len(dataset)*0.8)])
train_dataloader = DataLoader(train_dataset,batch_size=64,shuffle=False)
test_dataloader = DataLoader(train_dataset,batch_size=64,shuffle=False)
class ner_model(nn.Module):
def __init__(self,num_class):
super(ner_model,self).__init__()
self.num_class = num_class
self.bert = transformers.BertModel.from_pretrained(
"bert-base-uncased"
)
self.logit = nn.Linear(768,self.num_class)
def forward(self,ids,mask):
x = self.bert(ids,
attention_mask=mask)
x = self.logit(x['last_hidden_state'])
return x
model = ner_model(df1['Tag'].nunique())
num_class = df1['Tag'].nunique()
param_optimizer = list(model.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_parameters = [
{
"params": [
p for n, p in param_optimizer if not any(
nd in n for nd in no_decay
)
],
"weight_decay": 0.001,
},
{
"params": [
p for n, p in param_optimizer if any(
nd in n for nd in no_decay
)
],
"weight_decay": 0.0,
},
]
optimizer = optim.AdamW(optimizer_parameters,lr=1e-5)
loss_fn = nn.CrossEntropyLoss()
def loss_fn1(output, target, mask, num_labels):
lfn = nn.CrossEntropyLoss()
active_loss = mask.view(-1) == 1
active_logits = output.view(-1, num_labels)
active_labels = torch.where(
active_loss,
target.view(-1),
torch.tensor(lfn.ignore_index).type_as(target)
)
loss = lfn(active_logits, active_labels)
return loss
class NERmodel(pl.LightningModule):
def __init__(self,model):
super().__init__()
self.model = model
def forward(self, x, msk):
return self.model(x,msk)
def configure_optimizers(self):
param_optimizer = list(self.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_parameters = [
{
"params": [
p for n, p in param_optimizer if not any(
nd in n for nd in no_decay
)
],
"weight_decay": 0.001,
},
{
"params": [
p for n, p in param_optimizer if any(
nd in n for nd in no_decay
)
],
"weight_decay": 0.0,
},
]
optimizer = optim.AdamW(optimizer_parameters,lr=1e-5)
# optimizer = optim.AdamW(self.para,lr=1e-5)
return optimizer
def loss_fn1(self,output, target, mask, num_labels):
lfn = nn.CrossEntropyLoss()
active_loss = mask.view(-1) == 1
active_logits = output.view(-1, num_labels)
active_labels = torch.where(
active_loss,
target.view(-1),
torch.tensor(lfn.ignore_index).type_as(target)
)
loss = lfn(active_logits, active_labels)
return loss
def training_step(self, train_batch, batch_idx):
d = train_batch
x, msk = d['input_ids'],d['attention_mask']
out = self.model(x,msk)
loss = self.loss_fn1(out, d['labels'], msk, num_class)
self.log('train_loss', loss)
# print(loss)
return loss
def validation_step(self, val_batch, batch_idx):
d = val_batch
x, msk = d['input_ids'],d['attention_mask']
out = self.model(x,msk)
loss = self.loss_fn1(out, d['labels'], msk, num_class)
self.log('val_loss', loss)
return loss
md =NERmodel(model)
# training
trainer = pl.Trainer(tpu_cores=1,max_epochs=1,precision=16,progress_bar_refresh_rate=1)
trainer.fit(md,train_dataloader, test_dataloader)
model = trainer.model
###Output
_____no_output_____
###Markdown
Inference
###Code
model = model.to('cpu')
model.eval()
text = 'Mr. Steve Jobs founded Apple Corp. at Palo Alto in 1976'
lent = len(text.split(" "))
infer_dataset = ner_dataset([text],[[1]*lent],tokenizer,100)
ids, msk, ys= infer_dataset[0]['input_ids'], infer_dataset[0]['attention_mask'], infer_dataset[0]['labels']
out = model(ids.unsqueeze(0).to('cpu'),msk.unsqueeze(0).to('cpu'))[0]
out_idx = [torch.argmax(o,dim=-1).item() for o in out]
[Label_encoder.classes_[i] for i in out_idx]
###Output
_____no_output_____ |
docs/source/examples/criteo.ipynb | ###Markdown
Criteo Example Here we'll show how to use NVTabular first as a preprocessing library to prepare the [Criteo Display Advertising Challenge](https://www.kaggle.com/c/criteo-display-ad-challenge) dataset, and then as a dataloader to train a FastAI model on the prepared data. The large memory footprint of the Criteo dataset presents a great opportunity to highlight the advantages of the online fashion in which NVTabular loads and transforms data. Data Prep. Before we get started, make sure you've run the [`optimize_criteo` notebook](./optimize_criteo.ipynb), which will convert the tsv data published by Criteo into the parquet format that our accelerated readers prefer. It's fair to mention at this point that that notebook will take ~4 hours to run. While we're hoping to release accelerated csv readers in the near future, we also believe that inefficiencies in existing data representations like csv are in no small part a consequence of inefficiencies in the existing hardware/software stack. Accelerating these pipelines on new hardware like GPUs may require us to make new choices about the representations we use to store that data, and parquet represents a strong alternative. Quick Aside: Clearing Cache. The following line is not strictly necessary, but is included for those who want to validate NVIDIA's benchmarks. We start by clearing the existing cache to start as "fresh" as possible. If you're having trouble running it, try executing the container with the `--privileged` flag.
###Code
!sync; echo 3 > /proc/sys/vm/drop_caches
import os
from time import time
import re
import glob
import warnings
# tools for data preproc/loading
import torch
import rmm
import nvtabular as nvt
from nvtabular.ops import Normalize, Categorify, LogOp, ZeroFill, get_embedding_sizes
from nvtabular.torch_dataloader import AsyncTensorBatchDatasetItr, DLDataLoader
# tools for training
from fastai.basic_train import Learner
from fastai.basic_data import DataBunch
from fastai.tabular import TabularModel
from fastai.metrics import accuracy
###Output
_____no_output_____
###Markdown
Initializing the Memory Pool. For applications like the one that follows, where RAPIDS will be the only workhorse user of GPU memory and resources, a good best practice is to use the RAPIDS Memory Manager library `rmm` to allocate a dedicated pool of GPU memory that allows for fast, asynchronous memory management. Here, we'll dedicate 80% of free GPU memory to this pool to make sure we get the most utilization possible.
###Code
rmm.reinitialize(pool_allocator=True, initial_pool_size=0.8 * nvt.io.device_mem_size(kind='free'))
###Output
_____no_output_____
###Markdown
Dataset and Dataset Schema. Once our data is ready, we'll define some high level parameters to describe where our data is and what it "looks like" at a high level.
###Code
# define some information about where to get our data
INPUT_DATA_DIR = os.environ.get('INPUT_DATA_DIR', '/raid/criteo/tests/crit_int_pq')
OUTPUT_DATA_DIR = os.environ.get('OUTPUT_DATA_DIR', '/raid/criteo/tests/test_dask') # where we'll save our procesed data to
BATCH_SIZE = int(os.environ.get('BATCH_SIZE', 800000))
NUM_TRAIN_DAYS = 23 # number of days worth of data to use for training, the rest will be used for validation
# define our dataset schema
CONTINUOUS_COLUMNS = ['I' + str(x) for x in range(1,14)]
CATEGORICAL_COLUMNS = ['C' + str(x) for x in range(1,27)]
LABEL_COLUMNS = ['label']
COLUMNS = CONTINUOUS_COLUMNS + CATEGORICAL_COLUMNS + LABEL_COLUMNS
# ! ls $INPUT_DATA_DIR
fname = 'day_{}.parquet'
num_days = len([i for i in os.listdir(INPUT_DATA_DIR) if re.match(fname.format('[0-9]{1,2}'), i) is not None])
train_paths = [os.path.join(INPUT_DATA_DIR, fname.format(day)) for day in range(NUM_TRAIN_DAYS)]
valid_paths = [os.path.join(INPUT_DATA_DIR, fname.format(day)) for day in range(NUM_TRAIN_DAYS, num_days)]
###Output
_____no_output_____
###Markdown
Preprocessing. At this point, our data still isn't in a form that's ideal for consumption by neural networks. The most pressing issues are missing values and the fact that our categorical variables are still represented by random, discrete identifiers, and need to be transformed into contiguous indices that can be leveraged by a learned embedding. Less pressing, but still important for learning dynamics, are the distributions of our continuous variables, which are distributed across multiple orders of magnitude and are uncentered (i.e. E[x] != 0). We can fix these issues in a concise and GPU-accelerated manner with an NVTabular `Workflow`. We'll instantiate one with our current dataset schema, then symbolically add operations _on_ that schema. By setting all these `Ops` to use `replace=True`, the schema itself will remain unmodified, while the variables represented by each field in the schema will be transformed. Frequency Thresholding. One interesting thing worth pointing out is that we're using _frequency thresholding_ in our `Categorify` op. This handy functionality will map all categories which occur in the dataset with some threshold level of infrequency (which we've set here to be 15 occurrences throughout the dataset) to the _same_ index, keeping the model from overfitting to sparse signals.
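As a toy sketch of the idea (illustrative only, not NVTabular's implementation), categories below the frequency threshold all collapse to one shared index:
```
import pandas as pd

s = pd.Series(['a', 'a', 'a', 'b', 'c', 'c'])  # 'b' occurs only once
counts = s.value_counts()
keep = counts[counts >= 2].index               # frequency threshold of 2
# frequent categories get contiguous ids starting at 1; everything else maps to 0
mapping = {cat: i + 1 for i, cat in enumerate(sorted(keep))}
encoded = s.map(lambda cat: mapping.get(cat, 0))
print(encoded.tolist())  # [1, 1, 1, 0, 2, 2]
```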
###Code
proc = nvt.Workflow(
cat_names=CATEGORICAL_COLUMNS,
cont_names=CONTINUOUS_COLUMNS,
label_name=LABEL_COLUMNS)
# log -> normalize continuous features. Note that doing this in the opposite
# order wouldn't make sense! Note also that we're zero filling continuous
# values before the log: this is a good time to remember that LogOp
# performs log(1+x), not log(x)
proc.add_cont_feature([ZeroFill(), LogOp()])
proc.add_cont_preprocess(Normalize())
# categorification with frequency thresholding
proc.add_cat_preprocess(Categorify(freq_threshold=15, out_path=OUTPUT_DATA_DIR))
###Output
_____no_output_____
###Markdown
Now instantiate dataset iterators to loop through our dataset (which we couldn't fit into GPU memory)
###Code
train_dataset = nvt.Dataset(train_paths, engine='parquet', part_mem_fraction=0.15)
valid_dataset = nvt.Dataset(valid_paths, engine='parquet', part_mem_fraction=0.15)
###Output
_____no_output_____
###Markdown
Now run them through our workflows to collect statistics on the train set, then transform and save to parquet files.
###Code
output_train_dir = os.path.join(OUTPUT_DATA_DIR, 'train/')
output_valid_dir = os.path.join(OUTPUT_DATA_DIR, 'valid/')
! mkdir -p $output_train_dir
! mkdir -p $output_valid_dir
###Output
_____no_output_____
###Markdown
For reference, let's time it to see how long it takes...
###Code
%%time
proc.apply(train_dataset, shuffle=nvt.io.Shuffle.PER_PARTITION, output_path=output_train_dir, out_files_per_proc=5)
%%time
proc.apply(valid_dataset, record_stats=False, shuffle=nvt.io.Shuffle.PER_PARTITION, output_path=output_valid_dir, out_files_per_proc=5)
###Output
_____no_output_____
###Markdown
And just like that, we have training and validation sets ready to feed to a model! Deep Learning Data Loading. We'll start by using the parquet files we just created to feed an NVTabular `AsyncTensorBatchDatasetItr`, which will loop through the files in chunks. First, we'll reinitialize our memory pool from earlier to free up some memory so that we can share it with PyTorch.
###Code
rmm.reinitialize(pool_allocator=False)
train_paths = glob.glob(os.path.join(output_train_dir, "*.parquet"))
valid_paths = glob.glob(os.path.join(output_valid_dir, "*.parquet"))
train_data = nvt.Dataset(train_paths, engine="parquet", part_mem_fraction=0.02)
valid_data = nvt.Dataset(valid_paths, engine="parquet", part_mem_fraction=0.02)
train_data_itrs = AsyncTensorBatchDatasetItr(train_data, batch_size=BATCH_SIZE, cats=CATEGORICAL_COLUMNS, conts=CONTINUOUS_COLUMNS, labels=LABEL_COLUMNS)
valid_data_itrs = AsyncTensorBatchDatasetItr(valid_data, batch_size=BATCH_SIZE, cats=CATEGORICAL_COLUMNS, conts=CONTINUOUS_COLUMNS, labels=LABEL_COLUMNS)
def gen_col(batch):
batch = batch[0]
return (batch[0], batch[1]), batch[2].long()
train_dataloader = DLDataLoader(train_data_itrs, collate_fn=gen_col, pin_memory=False, num_workers=0)
valid_dataloader = DLDataLoader(valid_data_itrs, collate_fn=gen_col, pin_memory=False, num_workers=0)
databunch = DataBunch(train_dataloader, valid_dataloader, collate_fn=gen_col, device="cuda")
###Output
_____no_output_____
###Markdown
Now we have data ready to be fed to our model online! TrainingOne extra handy functionality of NVTabular is the ability to use the stats collected by the `Categorify` op to define embedding dictionary sizes (i.e. the number of rows of your embedding table). It even includes a heuristic for computing a good embedding size (i.e. the number of columns of your embedding table) based off of the number of categories.
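For intuition, a common cardinality-to-width rule of thumb looks like the sketch below (this is the fastai-style heuristic, shown purely for illustration; NVTabular's exact formula may differ, so treat the numbers as an assumption):
###Code
# Hypothetical sketch of an embedding-width heuristic (fastai-style rule of thumb, not NVTabular's exact formula)
def embedding_width(cardinality, max_width=600):
    return min(max_width, round(1.6 * cardinality ** 0.56))
[embedding_width(c) for c in (100, 10000, 10000000)]
###Output
_____no_output_____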
###Code
embeddings = list(get_embedding_sizes(proc).values())
model = TabularModel(emb_szs=embeddings, n_cont=len(CONTINUOUS_COLUMNS), out_sz=2, layers=[512, 256])
learn = Learner(databunch, model, metrics=[accuracy])
learn.loss_func = torch.nn.CrossEntropyLoss()
learning_rate = 1.32e-2
epochs = 1
start = time()
learn.fit_one_cycle(epochs, learning_rate)
t_final = time() - start
print(t_final)
###Output
_____no_output_____ |
src/pose_estimation/1_make_folders_and_data_downloads.ipynb | ###Markdown
Prepare the data
###Code
import os
import urllib.request
import zipfile
import tarfile
data_dir = "./data/"
if not os.path.exists(data_dir):
os.mkdir(data_dir)
weights_dir = "./weights/"
if not os.path.exists(weights_dir):
os.mkdir(weights_dir)
# Download MSCOCO 2014 Val images [41K/6GB]
url = "http://images.cocodataset.org/zips/val2014.zip"
target_path = os.path.join(data_dir, "val2014.zip")
if not os.path.exists(target_path):
urllib.request.urlretrieve(url, target_path)
zip = zipfile.ZipFile(target_path)
zip.extractall(data_dir)
zip.close()
# Download COCO.json and place it in the data folder
# https://www.dropbox.com/s/0sj2q24hipiiq5t/COCO.json?dl=0
# Download the mask data and place it in the data folder
# https://www.dropbox.com/s/bd9ty7b4fqd5ebf/mask.tar.gz?dl=0
# extract the tar.gz archive
save_path = os.path.join(data_dir, "mask.tar.gz")
with tarfile.open(save_path, 'r:*') as tar:
tar.extractall(data_dir)
# download the model and place it in the weights folder
# https://www.dropbox.com/s/5v654d2u65fuvyr/pose_model_scratch.pth?dl=0
###Output
_____no_output_____ |
applications/classification/natural_language_inference/NLI with Transformer.ipynb | ###Markdown
Natural Language InferenceThe goal of natural language inference (NLI), a widely-studied natural language processing task, is to determine if one given statement (a premise) semantically entails another given statement (a hypothesis). Imports
###Code
import time
import math
import random
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.nn import TransformerEncoder, TransformerEncoderLayer
from torchtext import data, datasets, vocab
SEED = 42
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Fields
###Code
TEXT = data.Field(tokenize = 'spacy', lower = True)
LABEL = data.LabelField()
###Output
_____no_output_____
###Markdown
SNLI (Stanford Natural Language Inference) Dataset
###Code
train_data, valid_data, test_data = datasets.SNLI.splits(TEXT, LABEL)
print(f"Number of training examples: {len(train_data)}")
print(f"Number of validation examples: {len(valid_data)}")
print(f"Number of testing examples: {len(test_data)}")
print(vars(train_data.examples[0]))
###Output
{'premise': ['a', 'person', 'on', 'a', 'horse', 'jumps', 'over', 'a', 'broken', 'down', 'airplane', '.'], 'hypothesis': ['a', 'person', 'is', 'training', 'his', 'horse', 'for', 'a', 'competition', '.'], 'label': 'neutral'}
###Markdown
Building Vocabulary
###Code
MIN_FREQ = 10
TEXT.build_vocab(train_data, min_freq = MIN_FREQ)
LABEL.build_vocab(train_data)
print(f"Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}")
print(LABEL.vocab.itos)
print(LABEL.vocab.freqs.most_common())
###Output
[('entailment', 183416), ('contradiction', 183187), ('neutral', 182764)]
###Markdown
Data Iterators
###Code
BATCH_SIZE = 128
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
# sample check
sample = next(iter(valid_iterator))
sample.premise.shape, sample.hypothesis.shape
###Output
_____no_output_____
###Markdown
Model
###Code
class PositionalEncoding(nn.Module):
def __init__(self, d_model, dropout=0.1, max_len=512):
super().__init__()
self.dropout = nn.Dropout(p=dropout)
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0).transpose(0, 1)
self.register_buffer('pe', pe)
def forward(self, x):
x = x + self.pe[:x.size(0), :]
return self.dropout(x)
class TransformerModel(nn.Module):
def __init__(self, input_dim, d_model, n_head, hid_dim, n_layers, n_linear_layers, output_dim, dropout, pad_idx):
super().__init__()
self.pad_idx = pad_idx
self.d_model = d_model
self.pos_encoder = PositionalEncoding(d_model, dropout)
self.embedding = nn.Embedding(input_dim, d_model, padding_idx=pad_idx)
encoder_layers = TransformerEncoderLayer(d_model, n_head, hid_dim, dropout)
self.transformer_encoder = TransformerEncoder(encoder_layers, n_layers)
self.fcs = nn.ModuleList([nn.Linear(d_model * 2, d_model * 2) for _ in range(n_linear_layers)])
self.layer_norms = nn.ModuleList([nn.LayerNorm(d_model * 2) for _ in range(n_linear_layers)])
self.out = nn.Linear(d_model * 2, output_dim)
self.dropout = nn.Dropout(dropout)
def create_mask(self, seq):
# seq => [seq_len, batch_size]
mask = (seq == self.pad_idx)
mask = mask.permute(1, 0)
        # mask => [batch_size, seq_len]
        return mask
def forward(self, premise, hypothesis):
# premise => [prem_seq_len, batch_size]
# hypothesis => [hypo_seq_len, batch_size]
# create input masks
prem_mask = self.create_mask(premise)
# prem_mask => [batch_size, prem_seq_len]
hypo_mask = self.create_mask(hypothesis)
# hypo_mask => [batch_size, hypo_seq_len]
embedded_prem = self.dropout(self.embedding(premise)) * math.sqrt(self.d_model)
# embedded_prem => [prem_seq_len, batch_size, emb_dim]
embedded_hypo = self.dropout(self.embedding(hypothesis)) * math.sqrt(self.d_model)
# embedded_hypo => [hypo_seq_len, batch_size, emb_dim]
embedded_prem = self.pos_encoder(embedded_prem)
embedded_hypo = self.pos_encoder(embedded_hypo)
outputs_prem = self.transformer_encoder(embedded_prem, src_key_padding_mask=prem_mask)
# outputs_prem => [prem_seq_len, batch_size, d_model]
outputs_hypo = self.transformer_encoder(embedded_hypo, src_key_padding_mask=hypo_mask)
# outputs_hypo => [hypo_seq_len, batch_size, d_model]
        # pool each sequence into a single fixed-size representation by summing over the sequence dimension
prem_representation = self.dropout(torch.sum(outputs_prem, dim=0))
hypo_representation = self.dropout(torch.sum(outputs_hypo, dim=0))
# representation => [batch_size, d_model]
hidden = torch.cat((prem_representation, hypo_representation), dim=-1)
# hidden => [batch_size, d_model * 2]
for fc, norm in zip(self.fcs, self.layer_norms):
hidden_ = fc(hidden)
hidden_ = self.dropout(hidden_)
# residual connection
hidden = hidden + F.relu(hidden_)
# layer normalization
hidden = norm(hidden)
logits = self.out(hidden)
# logits => [batch_size, output_dim]
return logits
INPUT_DIM = len(TEXT.vocab)
D_MODEL = 128
N_HEAD = 8
HIDDEN_DIM = 200
N_LAYERS = 3
N_FC_LAYERS = 3
OUTPUT_DIM = len(LABEL.vocab)
DROPOUT = 0.3
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = TransformerModel(
INPUT_DIM,
D_MODEL,
N_HEAD,
HIDDEN_DIM,
N_LAYERS,
N_FC_LAYERS,
OUTPUT_DIM,
DROPOUT,
PAD_IDX).to(device)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
def init_weights(model):
for name, param in model.named_parameters():
nn.init.uniform_(param.data, -0.08, 0.08)
model.apply(init_weights)
# def init_weights(m):
# for name, param in m.named_parameters():
# nn.init.normal_(param.data, mean = 0, std = 0.1)
# model.apply(init_weights)
###Output
_____no_output_____
###Markdown
Optimizer & Loss Criterion
###Code
optimizer = optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Accuracy
###Code
def categorical_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
max_preds = preds.argmax(dim = 1, keepdim = True) # get the index of the max probability
correct = max_preds.squeeze(1).eq(y)
return correct.sum() / torch.FloatTensor([y.shape[0]])
###Output
_____no_output_____
###Markdown
Train Loop
###Code
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
prem = batch.premise
hypo = batch.hypothesis
labels = batch.label
optimizer.zero_grad()
predictions = model(prem, hypo)
# predictions => [batch size, output dim]
# labels => [batch size]
loss = criterion(predictions, labels)
acc = categorical_accuracy(predictions, labels)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
###Output
_____no_output_____
###Markdown
Validation Loop
###Code
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
prem = batch.premise
hypo = batch.hypothesis
labels = batch.label
predictions = model(prem, hypo)
loss = criterion(predictions, labels)
acc = categorical_accuracy(predictions, labels)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
###Output
_____no_output_____
###Markdown
Training
###Code
N_EPOCHS = 20
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
###Output
Epoch: 01 | Epoch Time: 4m 32s
Train Loss: 0.905 | Train Acc: 57.45%
Val. Loss: 0.802 | Val. Acc: 64.25%
Epoch: 02 | Epoch Time: 4m 33s
Train Loss: 0.797 | Train Acc: 64.58%
Val. Loss: 0.771 | Val. Acc: 66.86%
Epoch: 03 | Epoch Time: 4m 33s
Train Loss: 0.772 | Train Acc: 66.05%
Val. Loss: 0.776 | Val. Acc: 66.96%
Epoch: 04 | Epoch Time: 4m 32s
Train Loss: 0.759 | Train Acc: 66.82%
Val. Loss: 0.771 | Val. Acc: 68.07%
Epoch: 05 | Epoch Time: 4m 32s
Train Loss: 0.749 | Train Acc: 67.39%
Val. Loss: 0.766 | Val. Acc: 68.45%
Epoch: 06 | Epoch Time: 4m 33s
Train Loss: 0.741 | Train Acc: 67.84%
Val. Loss: 0.766 | Val. Acc: 68.75%
Epoch: 07 | Epoch Time: 4m 33s
Train Loss: 0.734 | Train Acc: 68.15%
Val. Loss: 0.775 | Val. Acc: 69.00%
Epoch: 08 | Epoch Time: 4m 33s
Train Loss: 0.729 | Train Acc: 68.49%
Val. Loss: 0.780 | Val. Acc: 68.98%
Epoch: 09 | Epoch Time: 4m 33s
Train Loss: 0.724 | Train Acc: 68.74%
Val. Loss: 0.765 | Val. Acc: 69.01%
Epoch: 10 | Epoch Time: 4m 34s
Train Loss: 0.721 | Train Acc: 68.89%
Val. Loss: 0.763 | Val. Acc: 68.84%
Epoch: 11 | Epoch Time: 4m 33s
Train Loss: 0.716 | Train Acc: 69.14%
Val. Loss: 0.752 | Val. Acc: 69.63%
Epoch: 12 | Epoch Time: 4m 33s
Train Loss: 0.714 | Train Acc: 69.24%
Val. Loss: 0.774 | Val. Acc: 69.46%
Epoch: 13 | Epoch Time: 4m 33s
Train Loss: 0.711 | Train Acc: 69.44%
Val. Loss: 0.789 | Val. Acc: 69.52%
Epoch: 14 | Epoch Time: 4m 33s
Train Loss: 0.708 | Train Acc: 69.64%
Val. Loss: 0.775 | Val. Acc: 69.90%
Epoch: 15 | Epoch Time: 4m 33s
Train Loss: 0.707 | Train Acc: 69.64%
Val. Loss: 0.790 | Val. Acc: 70.01%
Epoch: 16 | Epoch Time: 4m 33s
Train Loss: 0.705 | Train Acc: 69.74%
Val. Loss: 0.778 | Val. Acc: 69.54%
Epoch: 17 | Epoch Time: 4m 33s
Train Loss: 0.703 | Train Acc: 69.81%
Val. Loss: 0.784 | Val. Acc: 70.16%
Epoch: 18 | Epoch Time: 4m 33s
Train Loss: 0.701 | Train Acc: 69.99%
Val. Loss: 0.771 | Val. Acc: 70.36%
Epoch: 19 | Epoch Time: 4m 33s
Train Loss: 0.699 | Train Acc: 70.07%
Val. Loss: 0.753 | Val. Acc: 70.16%
Epoch: 20 | Epoch Time: 4m 33s
Train Loss: 0.698 | Train Acc: 70.13%
Val. Loss: 0.766 | Val. Acc: 69.71%
###Markdown
Testing
###Code
model.load_state_dict(torch.load('model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
###Output
Test Loss: 0.746 | Test Acc: 69.61%
###Markdown
Inference
###Code
def inference(premise, hypothesis, text_field, label_field, model, device):
model.eval()
if isinstance(premise, str):
premise = text_field.tokenize(premise)
if isinstance(hypothesis, str):
hypothesis = text_field.tokenize(hypothesis)
if text_field.lower:
premise = [t.lower() for t in premise]
hypothesis = [t.lower() for t in hypothesis]
# numericalize
premise = [text_field.vocab.stoi[t] for t in premise]
hypothesis = [text_field.vocab.stoi[t] for t in hypothesis]
# convert into tensors
premise = torch.LongTensor(premise).unsqueeze(1).to(device)
# premise => [prem_len, 1]
hypothesis = torch.LongTensor(hypothesis).unsqueeze(1).to(device)
# hypothesis => [hypo_len, 1]
prediction = model(premise, hypothesis)
prediction = prediction.argmax(dim=-1).item()
return label_field.vocab.itos[prediction]
premise = 'A woman selling bamboo sticks talking to two men on a loading dock.'
hypothesis = 'There are at least three people on a loading dock.'
inference(premise, hypothesis, TEXT, LABEL, model, device)
premise = 'A woman selling bamboo sticks talking to two men on a loading dock.'
hypothesis = 'A woman is selling bamboo sticks to help provide for her family.'
inference(premise, hypothesis, TEXT, LABEL, model, device)
premise = 'A woman selling bamboo sticks talking to two men on a loading dock.'
hypothesis = ' A woman is not taking money for any of her sticks.'
inference(premise, hypothesis, TEXT, LABEL, model, device)
###Output
_____no_output_____ |
Program's_Contributed_By_Contributors/AI-Summer-Course/py-master/DeepLearningML/8_sgd_vs_gd/mini_batch_gd.ipynb | ###Markdown
Implementation of mini batch gradient descent in python We will use a very simple home prices data set to implement mini batch gradient descent in python. 1. Batch gradient descent uses *all* training samples in the forward pass to calculate the cumulative error, then we adjust weights using derivatives2. Stochastic GD: we randomly pick *one* training sample, perform a forward pass, compute the error and immediately adjust weights3. Mini batch GD: we use a batch of m samples where 0 < m < n (where n is the total number of training samples)
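As a reminder of the update rule used below (MSE loss, so the gradients are w_grad = -(2/m) * X_batch.T.dot(y - y_pred) and b_grad = -(2/m) * sum(y - y_pred)), here is a single mini batch step on toy numbers (a minimal sketch, independent of the housing data):
###Code
# Minimal sketch of one mini-batch gradient step for MSE loss (toy numbers, not the housing data)
import numpy as np
Xj = np.array([[0.2, 0.5], [0.8, 0.1]])  # one mini batch: 2 samples, 2 features
yj = np.array([0.3, 0.7])
w, b, lr = np.ones(2), 0.0, 0.01
y_pred = np.dot(w, Xj.T) + b
w_grad = -(2 / len(Xj)) * Xj.T.dot(yj - y_pred)
b_grad = -(2 / len(Xj)) * np.sum(yj - y_pred)
w, b = w - lr * w_grad, b - lr * b_grad
###Output
_____no_output_____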
###Code
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the dataset in pandas dataframe
###Code
df = pd.read_csv("homeprices_banglore.csv")
df.sample(5)
###Output
_____no_output_____
###Markdown
Preprocessing/Scaling: Since our columns are on different scales it is important to perform scaling on them
###Code
from sklearn import preprocessing
sx = preprocessing.MinMaxScaler()
sy = preprocessing.MinMaxScaler()
scaled_X = sx.fit_transform(df.drop('price',axis='columns'))
scaled_y = sy.fit_transform(df['price'].values.reshape(df.shape[0],1))
scaled_X
scaled_y
###Output
_____no_output_____
###Markdown
We should convert the target column (i.e. price) into a one dimensional array. It became 2D due to the scaling we did above, so now we change it back to 1D
###Code
scaled_y.reshape(20,)
###Output
_____no_output_____
###Markdown
Gradient descent allows you to find the weights (w1, w2) and bias in the following linear equation for housing price prediction: price = w1 * area + w2 * bedrooms + bias Now is the time to implement mini batch gradient descent. (1) Mini Batch Gradient Descent Implementation
###Code
np.random.permutation(20)
def mini_batch_gradient_descent(X, y_true, epochs = 100, batch_size = 5, learning_rate = 0.01):
number_of_features = X.shape[1]
# numpy array with 1 row and columns equal to number of features. In
# our case number_of_features = 3 (area, bedroom and age)
w = np.ones(shape=(number_of_features))
b = 0
total_samples = X.shape[0] # number of rows in X
if batch_size > total_samples: # In this case mini batch becomes same as batch gradient descent
batch_size = total_samples
cost_list = []
epoch_list = []
num_batches = int(total_samples/batch_size)
for i in range(epochs):
random_indices = np.random.permutation(total_samples)
X_tmp = X[random_indices]
y_tmp = y_true[random_indices]
for j in range(0,total_samples,batch_size):
Xj = X_tmp[j:j+batch_size]
yj = y_tmp[j:j+batch_size]
y_predicted = np.dot(w, Xj.T) + b
w_grad = -(2/len(Xj))*(Xj.T.dot(yj-y_predicted))
b_grad = -(2/len(Xj))*np.sum(yj-y_predicted)
w = w - learning_rate * w_grad
b = b - learning_rate * b_grad
cost = np.mean(np.square(yj-y_predicted)) # MSE (Mean Squared Error)
if i%10==0:
cost_list.append(cost)
epoch_list.append(i)
return w, b, cost, cost_list, epoch_list
w, b, cost, cost_list, epoch_list = mini_batch_gradient_descent(
scaled_X,
scaled_y.reshape(scaled_y.shape[0],),
epochs = 120,
batch_size = 5
)
w, b, cost
###Output
(array([0.70712464, 0.67456527]), -0.23034857438407427, 0.0068641890429808105)
###Markdown
Check the price equation above. In that equation we were trying to find the values of w1, w2 and the bias. Here we got these values: w1 = 0.7071, w2 = 0.6746, bias = -0.2303 Now plot the epoch vs cost graph to see how cost reduces as the number of epochs increases
###Code
plt.xlabel("epoch")
plt.ylabel("cost")
plt.plot(epoch_list,cost_list)
###Output
_____no_output_____
###Markdown
Lets do some predictions now.
###Code
def predict(area,bedrooms,w,b):
scaled_X = sx.transform([[area, bedrooms]])[0]
    # here w1 = w[0], w2 = w[1] and bias is b
    # equation for price is w1*area + w2*bedrooms + bias
    # scaled_X[0] is area
    # scaled_X[1] is bedrooms
scaled_price = w[0] * scaled_X[0] + w[1] * scaled_X[1] + b
    # once we get the price prediction we need to rescale it back to the original value
# also since it returns 2D array, to get single value we need to do value[0][0]
return sy.inverse_transform([[scaled_price]])[0][0]
predict(2600,4,w,b)
predict(1000,2,w,b)
predict(1500,3,w,b)
###Output
_____no_output_____ |
templates/spec-api-r/README.ipynb | ###Markdown
To run this example locally, [install Ploomber](https://ploomber.readthedocs.io/en/latest/get-started/install.html) and execute: `ploomber examples -n templates/spec-api-r`To start a free, hosted JupyterLab: [](https://mybinder.org/v2/gh/ploomber/binder-env/main?urlpath=git-pull%3Frepo%3Dhttps%253A%252F%252Fgithub.com%252Fploomber%252Fprojects%26urlpath%3Dlab%252Ftree%252Fprojects%252Ftemplates/spec-api-r%252FREADME.ipynb%26branch%3Dmaster)Found an issue? [Let us know.](https://github.com/ploomber/projects/issues/new?title=templates/spec-api-r%20issue)Have questions? [Ask us anything on Slack.](https://ploomber.io/community/) R pipelineLoad, clean and plot data with R.**Note:** If using conda (`environment.yml`), R will be installed and configured. If using pip (`requirements.txt`), you must install R and [configure it yourself]( https://github.com/IRkernel/IRkernel). Pipeline descriptionThis pipeline contains three tasks. The last task generates a plot. To get thepipeline description:
###Code
%%bash
ploomber status
###Output
name Last run Outdated? Product Doc (short) Location
------ ------------ ----------- ------------ ------------- ------------
raw Has not been Source code MetaProduct( /Users/Edu/d
run {'data': Fil ev/projects-
e('output/ra ploomber/tem
w.csv'), plates/spec-
'nb': File(' api-r/raw.R
output/raw.h
tml')})
clean Has not been Source code MetaProduct( /Users/Edu/d
run & Upstream {'data': Fil ev/projects-
e('output/cl ploomber/tem
ean.csv'), plates/spec-
'nb': File(' api-r/clean.
output/clean R
.html')})
plot Has not been Source code File('output /Users/Edu/d
run & Upstream /plot.html') ev/projects-
ploomber/tem
plates/spec-
api-r/plot.R
md
###Markdown
Build the pipeline from the command line
###Code
%%bash
mkdir output
ploomber build
###Output
name Ran? Elapsed (s) Percentage
------ ------ ------------- ------------
raw True 1.64641 27.9727
clean True 1.87843 31.9147
plot True 2.36093 40.1125
|
Jupyter/.ipynb_checkpoints/Slaktdata-checkpoint.ipynb | ###Markdown
Test of fetching data from [släktdata](https://www.slaktdata.org)* [This notebook](https://github.com/salgo60/Slaktdata/blob/master/Slaktdata.ipynb)
###Code
from datetime import datetime
start_time = datetime.now()
print("Last run: ", start_time)
import urllib3, json
import pandas as pd
from pandas.io.json import json_normalize
http = urllib3.PoolManager()
urlbase= "https://www.slaktdata.org"
url = urlbase + "/?p=getregbyid&sldid=156206_F7_710"
print(url)
r = http.request('GET', urlbase + "?p=getregbyid&sldid=156206_F7_710",
headers={'Content-Type': 'application/json'})
df=pd.read_json(r.data)
df.columns
#df["res"].to_frame().T
from tqdm.notebook import trange
dfList = []
for slakdatanr in trange(100000,200000): #test
url = urlbase + "/?p=getregbyid&sldid=" + str(slakdatanr) + "_F7_710"
r = http.request('GET', url,
headers={'Content-Type': 'application/json'})
if len(r.data) > 400:
print (url,len(r.data))
df=pd.read_json(r.data)
dfList.append(df["res"].to_frame().T)
dfTot = pd.concat(dfList, ignore_index=True)
dfTot.info()
dfTot
dfTot["adexkluderat"].value_counts()
dfTot["adid"].value_counts()
dfTot["adress"].value_counts()
dfTot["enamn"].value_counts()
dfTot["fnamn"].value_counts()
dfTot["ovr1"].value_counts()
dfTot["scbkod"].value_counts()
dfTot["sdsuffix"].value_counts()
dfTot["sidnr"].value_counts()
dfTot["uppdaterat"].value_counts()
dfTot["web"].value_counts()
###Output
_____no_output_____ |
examples/qng_with_noise_demo.ipynb | ###Markdown
Experiment with adding noise originating from an IBMQ machine to PennyLane's builtin QNG optimizer and to a simple manual method of computing the Fisher information metric tensor.More info on the algorithm:https://pennylane.ai/qml/demos/tutorial_quantum_natural_gradient.htmlMore info on the noise model:https://qiskit.org/documentation/stubs/qiskit.providers.aer.noise.NoiseModel.htmlqiskit.providers.aer.noise.NoiseModelhttps://qiskit.org/documentation/stubs/qiskit.providers.aer.QasmSimulator.htmlqiskit.providers.aer.QasmSimulator
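For reference, the manual g0/g1 blocks further down implement the block-diagonal (layer-wise) approximation of this metric: for the Pauli generators $K_i$ of a layer's rotation gates, the entries are $g_{ij} = (\langle K_i K_j \rangle - \langle K_i \rangle \langle K_j \rangle)/4$, so the diagonal terms reduce to $\mathrm{Var}(K_i)/4$, which is exactly what the variance and expectation-value measurements in those cells compute.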
###Code
import numpy as np
import qiskit
import pennylane as qml
from pennylane import expval, var
qiskit.IBMQ.load_account()
# Build noise model from backend properties
provider = qiskit.IBMQ.get_provider(group='open')
ibmq_backend = provider.get_backend('ibmq_burlington')
device_properties = ibmq_backend.properties()
noise_model = qiskit.providers.aer.noise.NoiseModel.from_backend(device_properties)
# Get coupling map from backend
coupling_map = ibmq_backend.configuration().coupling_map
# Get basis gates from noise model
basis_gates = noise_model.basis_gates
# Provision the the default device with noise
dev = qml.device('qiskit.aer', wires=3, noise_model=noise_model,
basis_gates=basis_gates, coupling_map=coupling_map, backend='qasm_simulator')
@qml.qnode(dev)
def circuit(params):
# |psi_0>: state preparation
qml.RY(np.pi / 4, wires=0)
qml.RY(np.pi / 3, wires=1)
qml.RY(np.pi / 7, wires=2)
# V0(theta0, theta1): Parametrized layer 0
qml.RZ(params[0], wires=0)
qml.RZ(params[1], wires=1)
# W1: non-parametrized gates
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[1, 2])
# V_1(theta2, theta3): Parametrized layer 1
qml.RY(params[2], wires=1)
qml.RX(params[3], wires=2)
# W2: non-parametrized gates
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[1, 2])
return qml.expval(qml.PauliY(0))
params = np.array([0.432, -0.123, 0.543, 0.233])
g0 = np.zeros([2, 2])
def layer0_subcircuit(params):
"""This function contains all gates that
precede parametrized layer 0"""
qml.RY(np.pi / 4, wires=0)
qml.RY(np.pi / 3, wires=1)
qml.RY(np.pi / 7, wires=2)
@qml.qnode(dev)
def layer0_diag(params):
layer0_subcircuit(params)
return var(qml.PauliZ(0)), var(qml.PauliZ(1))
# calculate the diagonal terms
varK0, varK1 = layer0_diag(params)
g0[0, 0] = varK0 / 4
g0[1, 1] = varK1 / 4
@qml.qnode(dev)
def layer0_off_diag_single(params):
layer0_subcircuit(params)
return expval(qml.PauliZ(0)), expval(qml.PauliZ(1))
@qml.qnode(dev)
def layer0_off_diag_double(params):
layer0_subcircuit(params)
ZZ = np.kron(np.diag([1, -1]), np.diag([1, -1]))
return expval(qml.Hermitian(ZZ, wires=[0, 1]))
# calculate the off-diagonal terms
exK0, exK1 = layer0_off_diag_single(params)
exK0K1 = layer0_off_diag_double(params)
g0[0, 1] = (exK0K1 - exK0 * exK1) / 4
g0[1, 0] = (exK0K1 - exK0 * exK1) / 4
#########################################
g1 = np.zeros([2, 2])
def layer1_subcircuit(params):
"""This function contains all gates that
precede parametrized layer 1"""
# |psi_0>: state preparation
qml.RY(np.pi / 4, wires=0)
qml.RY(np.pi / 3, wires=1)
qml.RY(np.pi / 7, wires=2)
# V0(theta0, theta1): Parametrized layer 0
qml.RZ(params[0], wires=0)
qml.RZ(params[1], wires=1)
# W1: non-parametrized gates
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[1, 2])
@qml.qnode(dev)
def layer1_diag(params):
layer1_subcircuit(params)
return var(qml.PauliY(1)), var(qml.PauliX(2))
varK0, varK1 = layer1_diag(params)
g1[0, 0] = varK0 / 4
g1[1, 1] = varK1 / 4
@qml.qnode(dev)
def layer1_off_diag_single(params):
layer1_subcircuit(params)
return expval(qml.PauliY(1)), expval(qml.PauliX(2))
@qml.qnode(dev)
def layer1_off_diag_double(params):
layer1_subcircuit(params)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
YX = np.kron(Y, X)
return expval(qml.Hermitian(YX, wires=[1, 2]))
# calculate the off-diagonal terms
exK0, exK1 = layer1_off_diag_single(params)
exK0K1 = layer1_off_diag_double(params)
g1[0, 1] = (exK0K1 - exK0 * exK1) / 4
g1[1, 0] = g1[0, 1]
from scipy.linalg import block_diag
g = block_diag(g0, g1)
print(np.round(g, 8))
#print(np.round(circuit.metric_tensor([params]), 8))
print(circuit.metric_tensor([params], diag_approx=True))
steps = 200
init_params = np.array([0.432, -0.123, 0.543, 0.233])
gd_cost = []
opt = qml.GradientDescentOptimizer(0.01)
print("Starting GD Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
gd_cost.append(circuit(theta))
print("Done.")
qng_cost = []
opt = qml.QNGOptimizer(0.01, diag_approx=True)
print("Starting builtin QNG Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
qng_cost.append(circuit(theta))
print("Done.")
from matplotlib import pyplot as plt
plt.style.use("seaborn")
plt.plot(gd_cost, "b", label="Vanilla gradient descent")
plt.plot(qng_cost, "g", label="Quantum natural gradient descent")
plt.ylabel("Cost function value")
plt.xlabel("Optimization steps")
plt.legend()
plt.show()
###Output
Starting GD Optimizer run...
Done.
Starting builtin QNG Optimizer run...
Done.
###Markdown
Now let's use measurement error mitigation.
###Code
import pennylane_extra
with pennylane_extra.qiskit_measurement_error_mitigation(shots=1024):
gd_cost = []
opt = qml.GradientDescentOptimizer(0.01)
print("Starting GD Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
gd_cost.append(circuit(theta))
print("Done.")
qng_cost_mitigated = []
opt = qml.QNGOptimizer(0.01, diag_approx=True)
print("Starting builtin QNG Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
qng_cost_mitigated.append(circuit(theta))
print("Done.")
plt.style.use("seaborn")
plt.plot(gd_cost, "b", label="Vanilla gradient descent")
plt.plot(qng_cost_mitigated, "g", label="Quantum natural gradient descent (with error mitigation)")
plt.ylabel("Cost function value")
plt.xlabel("Optimization steps")
plt.legend()
plt.show()
plt.plot(qng_cost, "b", label="QNGD")
plt.plot(qng_cost_mitigated, "g", label="QNGD with error mitigation")
plt.ylabel("Cost function value")
plt.xlabel("Optimization steps")
plt.legend()
plt.show()
cost_difference = [(a - b) for a, b in zip(qng_cost, qng_cost_mitigated)]
print('Average cost difference:', sum(cost_difference)/len(cost_difference))
plt.plot(cost_difference, "b", label="Cost difference (the more the better)")
plt.ylabel("Cost function value")
plt.xlabel("Optimization steps")
plt.ylim(-0.25, 0.25)
plt.legend()
plt.show()
###Output
Average error difference: 0.059208984375
###Markdown
Now let's compare these results with a new one using larger number of shots made during mitigation process
###Code
with pennylane_extra.qiskit_measurement_error_mitigation(shots=4096):
gd_cost = []
opt = qml.GradientDescentOptimizer(0.01)
print("Starting GD Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
gd_cost.append(circuit(theta))
print("Done.")
qng_cost_mitigated_4096 = []
opt = qml.QNGOptimizer(0.01, diag_approx=True)
print("Starting builtin QNG Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
qng_cost_mitigated_4096.append(circuit(theta))
print("Done.")
plt.style.use("seaborn")
plt.plot(gd_cost, "b", label="Vanilla gradient descent")
plt.plot(qng_cost_mitigated_4096, "g", label="Quantum natural gradient descent (4096 shots)")
plt.ylabel("Cost function value")
plt.xlabel("Optimization steps")
plt.legend()
plt.show()
plt.figure(figsize=(16,12))
plt.plot(qng_cost, "r", label="QNGD")
plt.plot(qng_cost_mitigated, "b", label="QNGD with error mitigation (1024 mitigation shots)")
plt.plot(qng_cost_mitigated_4096, "g", label="QNGD with error mitigation (4096 mitigation shots)")
plt.ylabel("Cost function value")
plt.xlabel("Optimization steps")
plt.legend()
plt.show()
cost_difference = [(a - b) for a, b in zip(qng_cost_mitigated, qng_cost_mitigated_4096)]
print('Average cost difference:', sum(cost_difference)/len(cost_difference))
plt.plot(cost_difference, "b", label="Cost difference (the more the better)")
plt.ylabel("Cost function value")
plt.xlabel("Optimization steps")
plt.ylim(-0.25, 0.25)
plt.legend()
plt.show()
###Output
Average error difference: -0.001484375
###Markdown
Those were the sample configurations. Now we can measure the mean values for those experiments performed multiple times and compare them instead
###Code
import pennylane_extra
n = 10
gd_cost_1024 = []
qng_cost_mitigated_1024 = []
with pennylane_extra.qiskit_measurement_error_mitigation(shots=1024):
for i in range(n):
opt = qml.GradientDescentOptimizer(0.01)
print("Starting GD Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
gd_cost_1024.append(circuit(theta))
print("Done.")
opt = qml.QNGOptimizer(0.01, diag_approx=True)
print("Starting builtin QNG Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
qng_cost_mitigated_1024.append(circuit(theta))
print("Done.")
qng_cost_mitigated_4096 = []
with pennylane_extra.qiskit_measurement_error_mitigation(shots=4096):
for i in range(n):
opt = qml.QNGOptimizer(0.01, diag_approx=True)
print("Starting builtin QNG Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
qng_cost_mitigated_4096.append(circuit(theta))
print("Done.")
qng_cost_no_mitigation = []
gd_cost_no_mitigation = []
for i in range(n):
opt = qml.QNGOptimizer(0.01, diag_approx=True)
print("Starting builtin QNG Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
qng_cost_no_mitigation.append(circuit(theta))
print("Done.")
opt = qml.GradientDescentOptimizer(0.01)
print("Starting GD Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit, theta)
gd_cost_no_mitigation.append(circuit(theta))
print("Done.")
all_gd_cost_1024 = np.array(gd_cost_1024).reshape(n,-1)
mean_gd_cost_1024 = np.mean(all_gd_cost_1024, axis=0)
all_qng_cost_no_mitigation = np.array(qng_cost_no_mitigation).reshape(n,-1)
mean_qng_cost_no_mitigation = np.mean(all_qng_cost_no_mitigation, axis=0)
all_gd_cost_no_mitigation = np.array(gd_cost_no_mitigation).reshape(n,-1)
mean_gd_cost_no_mitigation = np.mean(all_gd_cost_no_mitigation, axis=0)
all_qng_cost_mitigated_1024 = np.array(qng_cost_mitigated_1024).reshape(n,-1)
mean_qng_cost_mitigated_1024 = np.mean(all_qng_cost_mitigated_1024, axis=0)
all_qng_cost_mitigated_4096 = np.array(qng_cost_mitigated_4096).reshape(n,-1)
mean_qng_cost_mitigated_4096 = np.mean(all_qng_cost_mitigated_4096, axis=0)
from matplotlib import pyplot as plt
def plot_opt_results(title, filename, *results):
f = plt.figure(figsize=(16,12))
plt.style.use("seaborn")
for result in results:
plt.plot(result[0], result[1], label=result[2])
plt.title(title, fontsize=22)
plt.ylabel("Cost function value", fontsize=22)
plt.xlabel("Optimization steps", fontsize=22)
plt.legend(prop={'size': 18})
f.savefig(filename, bbox_inches='tight')
plt.show()
plot_opt_results("QNG vs GD with noise", "qng_vs_gd_noise_mitigation.pdf",
(mean_qng_cost_mitigated_1024, "g", "Quantum natural gradient descent with error mitigation (1024 shots)"),
(mean_qng_cost_no_mitigation, "b", "Quantum natural gradient descent (no error mitigation)"),
(mean_gd_cost_1024, "black", "Vanilla gradient descent with error mitigation (1024 shots)"),
(mean_gd_cost_no_mitigation, "r", "Vanilla gradient descent (no error mitigation)"))
plot_opt_results("QNG (shots impact comparison)", "qng_vs_gd_noise_mitigation_shots.pdf",
(mean_qng_cost_mitigated_1024, "b", "Quantum natural gradient descent with error mitigation (1024 shots)"),
(mean_qng_cost_mitigated_4096, "r", "Quantum natural gradient descent with error mitigation (4096 shots)"))
###Output
_____no_output_____
###Markdown
Below we also present optimization results without any noise (on an ideal simulator).
###Code
# Provision the the default device with no noise
dev_no_noise = qml.device("default.qubit", wires=3)
@qml.qnode(dev_no_noise)
def circuit_no_noise(params):
# |psi_0>: state preparation
qml.RY(np.pi / 4, wires=0)
qml.RY(np.pi / 3, wires=1)
qml.RY(np.pi / 7, wires=2)
# V0(theta0, theta1): Parametrized layer 0
qml.RZ(params[0], wires=0)
qml.RZ(params[1], wires=1)
# W1: non-parametrized gates
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[1, 2])
# V_1(theta2, theta3): Parametrized layer 1
qml.RY(params[2], wires=1)
qml.RX(params[3], wires=2)
# W2: non-parametrized gates
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[1, 2])
return qml.expval(qml.PauliY(0))
# same as above but as a separate cell
n = 10
steps = 200
init_params = np.array([0.432, -0.123, 0.543, 0.233])
qng_cost_no_noise = []
gd_cost_no_noise = []
for i in range(n):
opt = qml.QNGOptimizer(0.01, diag_approx=True)
print("Starting builtin QNG Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit_no_noise, theta)
qng_cost_no_noise.append(circuit_no_noise(theta))
print("Done.")
opt = qml.GradientDescentOptimizer(0.01)
print("Starting GD Optimizer run...")
theta = init_params
for _ in range(steps):
theta = opt.step(circuit_no_noise, theta)
gd_cost_no_noise.append(circuit_no_noise(theta))
print("Done.")
all_qng_cost_no_noise = np.array(qng_cost_no_noise).reshape(n,-1)
mean_qng_cost_no_noise = np.mean(all_qng_cost_no_noise, axis=0)
all_gd_cost_no_noise = np.array(gd_cost_no_noise).reshape(n,-1)
mean_gd_cost_no_noise = np.mean(all_gd_cost_no_noise, axis=0)
plot_opt_results("QNG vs GD averaged without noise", "qng_vs_gd_no_noise_avg.pdf",
(mean_qng_cost_no_noise, "b", "Quantum natural gradient descent (no error mitigation)"),
(mean_gd_cost_no_noise, "r", "Vanilla gradient descent (no error mitigation)"))
###Output
_____no_output_____ |
notebooks/data_scrubbing.ipynb | ###Markdown
Movie Analysis: Data Scrubbing About:In the data scrubbing phase I will focus on cleaning up the columns I plan on using, and building up the data frame I will use for the EDA phase:1. US Gross Revenue2. Genre3. Actors4. Time of Year (date)5. Keywords (content)6. Combined Project imports:
###Code
# imports for entire data gathering phase
import pandas as pd
import os
###Output
_____no_output_____
###Markdown
1. US Gross RevenueThis column will be how we measure the other columns, so we will start here and drop any rows that don't have this information.
###Code
revenue_path = os.path.join(os.pardir, 'data', 'interim', 'money.csv')
revenue_df = pd.read_csv(revenue_path)
revenue_df.head()
revenue_df.loc[revenue_df['imdb_id'] == 'tt0091605']
revenue_df = revenue_df[:-1]
###Output
_____no_output_____
###Markdown
Changes:1. Convert 'us_gross', and 'budget_usd' values into floats. That means stripping the non-number characters out as well as changing 'MM' to ',000,000'.2. Convert year column to int, so the years don't have the trailing .0.3. region_code does not need the brackets around the abbreviations.
###Code
# Created 3/22/2020 with current exchange values. Values not adjusted for the date the movie was created.
def get_conversion_rate(value):
"""Get exchange rate for given currency code
Arguments:
value (string): String with currency code or symbol in it
Returns:
rate (float): Conversion rate to usd
"""
if '£' in value:
return 0.854
elif '€' in value:
return 0.9334
elif 'AUD' in value:
return 1.7229
elif 'CAD' in value:
return 1.435
elif 'FRF' in value:
return 6.55957 * 0.9334
elif 'INR' in value:
return 75.394
elif 'THB' in value:
return 32.68
elif 'EM' in value:
return 1 # cant find info on EM
elif 'JPY' in value:
return 110.75
elif 'SKW' in value:
return 1254.45
elif 'HUF' in value:
return 327.94
elif 'NGN' in value:
return 364
elif 'CNY' in value:
return 7.0950
elif 'ESP' in value:
return 155.42826
elif 'RUR' in value:
return 79.87
elif 'HKD' in value:
return 7.7570
elif 'ISK' in value:
return 140.490
elif 'PHP' in value:
return 51.19
elif 'DKK' in value:
return 6.9716
elif 'CZK' in value:
return 25.5620
elif 'SKK' in value:
return 10.3753
elif 'NOK' in value:
return 11.7890
elif 'MXN' in value:
return 24.4215
elif 'JMD' in value:
return 135.07
elif 'PLN' in value:
return 4.23
elif 'KRW' in value:
return 1228.97
elif 'ITL' in value:
return 1804.64
else:
return 1
def strip_currency_code(value):
"""Strips currency code from front of currency string
Arguments:
value (string): currency amount prefaced with currency code
Returns:
value (string): value without the currency code
"""
if value[:1] in '$£€':
return value[1:]
else:
return value[3:]
def convert_money(value):
"""Takes currency string and parses it into correct amount in USD
Arguments:
value (string): currency in form: CAD 345.3B
Returns:
value (int): currency converted to USD and in standard numeric form
"""
# type check:
if type(value) != str:
return
# check currency sign and get coefficient
coef = get_conversion_rate(value)
value = strip_currency_code(value)
if 'K' in value:
value = (float(value.strip('K')) * 1000) / coef
elif 'MM' in value:
value = (float(value.strip('MM')) * 1000000) / coef
elif 'B' in value:
value = (float(value.strip('B')) * 1000000000) / coef
else:
value = float(value.strip()) / coef
return value
revenue_df['us_gross'] = revenue_df['us_gross'].apply(convert_money)
revenue_df['us_gross']
revenue_df['budget_usd'] = revenue_df['budget_usd'].apply(convert_money)
revenue_df['budget_usd'].isna().sum()
revenue_df.sample(5)
###Output
_____no_output_____
###Markdown
Now for region code. We actually don't need this column so we will drop it.
###Code
revenue_df.drop(columns='region_code', inplace=True)
###Output
_____no_output_____
###Markdown
For the 'year' column we went ahead and dropped the missing rows, because there were only 6 of them.
###Code
revenue_df.isna().sum()
###Output
_____no_output_____
###Markdown
Cleaning up Nan values:
###Code
# first, change the missing values from budget to -1, so we don't drop 1910 rows.
revenue_df['budget_usd'] = revenue_df['budget_usd'].fillna(-1)
# also, fill in the production_co missing values with an 'Unknown'
revenue_df['production_co'] = revenue_df['production_co'].fillna('Unknown')
revenue_df.info()
revenue_df = revenue_df.dropna()
revenue_df.sample(5)
###Output
_____no_output_____
###Markdown
Now for dropping duplicates:
###Code
revenue_df = revenue_df.drop_duplicates()
revenue_df.info()
revenue_df.sample(3)
def calc_revenue(df):
return df['us_gross'] - df['budget_usd']
revenue_df['revenue'] = revenue_df[revenue_df['budget_usd'] > 0].apply(calc_revenue, axis=1)
###Output
_____no_output_____
###Markdown
Added in a revenue column after being advised about how much better it would be for a metric.
###Code
revenue_df[revenue_df['budget_usd'] > 0].sort_values('revenue').head(2)
###Output
_____no_output_____
###Markdown
Change rank type from string to int
###Code
def fix_rank(value):
value = value.replace(',', '')
return int(value)
revenue_df['rank'] = revenue_df['rank'].apply(fix_rank)
revenue_df['popular'] = revenue_df['rank'].apply(lambda x: x < revenue_df['rank'].quantile(.1))
revenue_df.sample(3)
###Output
_____no_output_____
###Markdown
Save as CSV
###Code
revenue_save_path = os.path.join(os.pardir, 'data', 'processed', 'revenue.csv')
revenue_df.to_csv(revenue_save_path, index=False)
test_revenue_save = pd.read_csv(revenue_save_path)
test_revenue_save.sample(3)
###Output
_____no_output_____
###Markdown
2. Genre:For genre we will need a dataset that lists each movie and it's genre. To analyze the success of the genre, we will need to examine the relationship of genre to the revenue earned. Bringing in the list of movie titles:
###Code
titles_path = os.path.join(os.pardir, 'data', 'raw', 'movies.csv')
genres_df = pd.read_csv(titles_path)
genres_df.head()
genres_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 545821 entries, 0 to 545820
Data columns (total 4 columns):
tconst 545821 non-null object
primaryTitle 545821 non-null object
startYear 545821 non-null object
genres 545821 non-null object
dtypes: object(4)
memory usage: 16.7+ MB
###Markdown
Changes:Looking at the initial dataframe, the main thing I would like to change is:1. Rename the columns (tconst, primaryTitle, startYear) to more convenient snake_case names
###Code
genres_df = genres_df.rename(columns={'tconst': 'imdb_id', 'primaryTitle': 'title', 'startYear': 'year'})
genres_df.sample(3)
###Output
_____no_output_____
###Markdown
That looks good. Let me deal with Nan's:
###Code
genres_df.isna().sum()
###Output
_____no_output_____
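###Markdown
Before saving, a quick sketch of the genre-versus-revenue comparison planned for the EDA phase (hypothetical; it assumes the genres field is a comma-separated string, as in the raw IMDb data, and reuses the revenue_df built above):
###Code
# Hypothetical EDA sketch: one row per (title, genre), joined to revenue, averaged per genre
# (pandas >= 0.25 needed for explode)
genre_revenue = (
    genres_df.assign(genre=genres_df['genres'].str.split(','))
             .explode('genre')
             .merge(revenue_df[['imdb_id', 'revenue']], on='imdb_id')
             .groupby('genre')['revenue'].mean()
             .sort_values(ascending=False)
)
genre_revenue.head()
###Output
_____no_output_____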
###Markdown
Save as CSV
###Code
genres_save_path = os.path.join(os.pardir, 'data', 'processed', 'genres.csv')
genres_df.to_csv(genres_save_path, index=False)
test_genres_save = pd.read_csv(genres_save_path)
test_genres_save.sample(3)
###Output
_____no_output_____
###Markdown
3. ActorsThese columns will be key in identifying the people who have the ability to produce high quality work on a consistent basis.
###Code
people_path = os.path.join(os.pardir, 'data', 'raw', 'imdb.name.basics.csv')
people_df = pd.read_csv(people_path)
people_df.sample(3)
###Output
_____no_output_____
###Markdown
Changes:Some cleanup tasks:1. Change the name of the primary_name column to 'name'2. Select all the actors and actresses3. Drop birth_year, death_year, known_for_titles
###Code
people_df.sample(3)
people_df = people_df.rename(columns={'primary_name': 'name'})
def can_act(professions):
if type(professions) != str:
return False
if 'actor' in professions or 'actress' in professions:
return True
else:
return False
people_df['can_act'] = people_df['primary_profession'].apply(can_act)
people_df.sample(3)
###Output
_____no_output_____
###Markdown
Okay, we will grab all the actors and actresses and make a dataframe for them:
###Code
actors_df = people_df[people_df['can_act'] == True]
###Output
_____no_output_____
###Markdown
And now we can drop the unwanted columns:
###Code
drop_columns = ['primary_profession', 'can_act', 'birth_year', 'death_year', 'known_for_titles']
actors_df = actors_df.drop(columns=drop_columns)
actors_df.sample(3)
###Output
_____no_output_____
###Markdown
Let's check for missing values:
###Code
actors_df.isna().sum()
###Output
_____no_output_____
###Markdown
There we go. A very large list of actors and actresses. We can join them to the titles and see if there are any patterns amongst the top performing titles. Save as CSV
###Code
actors_save_path = os.path.join(os.pardir, 'data', 'processed', 'actors.csv')
actors_df.to_csv(actors_save_path, index=False)
test_actors_save = pd.read_csv(actors_save_path)
test_actors_save.sample(3)
###Output
_____no_output_____
###Markdown
4. Time of Year (date)Time of year will be an important metric to discover the most opportune time to release a film.
###Code
date_path = os.path.join(os.pardir, 'data', 'raw', 'tmdb_movies.csv')
date_df = pd.read_csv(date_path)
date_df.sample(3)
###Output
_____no_output_____
###Markdown
Changes:We only need a couple of columns from this set:1. imdb_id2. release_date3. add a month columnThe column names just need to be renamed to snake_case, so this will be very simple.
###Code
date_df = date_df.drop_duplicates()
date_df = date_df.rename(columns={'imdbId': 'imdb_id', 'originalTitle': 'title', 'releaseDate': 'date'})
date_df = date_df[['imdb_id', 'date']]
date_df = date_df.dropna()
date_df.sample(3)
date_df['date'] = pd.to_datetime(date_df['date'], infer_datetime_format=True)
date_df['month'] = date_df['date'].apply(lambda x: x.month_name())
date_df.info()
date_df.isna().sum()
###Output
_____no_output_____
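###Markdown
With the month column in place, here is a quick sketch of the seasonal comparison planned for the EDA phase (hypothetical; the real analysis happens in the EDA notebook and reuses the revenue_df built earlier):
###Code
# Hypothetical EDA sketch: average revenue by release month
monthly_revenue = (
    date_df.merge(revenue_df[['imdb_id', 'revenue']], on='imdb_id')
           .groupby('month')['revenue'].mean()
           .sort_values(ascending=False)
)
monthly_revenue
###Output
_____no_output_____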
###Markdown
Save to CSV
###Code
date_save_path = os.path.join(os.pardir, 'data', 'processed', 'date.csv')
date_df.to_csv(date_save_path, index=False)
test_date_save = pd.read_csv(date_save_path)
test_date_save.sample(3)
###Output
_____no_output_____
###Markdown
5. Keywords (content)
###Code
keywords_path = os.path.join(os.pardir, 'data', 'raw', 'tmdb_keywords.csv')
keywords_df = pd.read_csv(keywords_path)
keywords_df.sample(3)
keywords_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 257984 entries, 0 to 257983
Data columns (total 3 columns):
imdbId 257984 non-null object
keywordId 257984 non-null int64
keyword 257984 non-null object
dtypes: int64(1), object(2)
memory usage: 5.9+ MB
###Markdown
This is a simple dataframe, when I created it I knew exactly the columns I would use. I do need to change the column names from camelCase to snake_case (node.js uses camelCase):
###Code
keywords_df = keywords_df.rename(columns={'imdbId': 'imdb_id', 'keywordId': 'keyword_id'})
keywords_df.sample(3)
keywords_df.duplicated()
keywords_df.isna().sum()
###Output
_____no_output_____
###Markdown
Save to CSV
###Code
keywords_save_path = os.path.join(os.pardir, 'data', 'processed', 'keywords.csv')
keywords_df.to_csv(keywords_save_path, index=False)
test_keywords_save = pd.read_csv(keywords_save_path)
test_keywords_save.sample(3)
###Output
_____no_output_____
###Markdown
Building DatasetIn this section I will combine all the individual datasets into one large dataframe that I can explore in the EDA phase. I will keep the actors and keywords separate for now so they don't explode the dataframe.
###Code
# joining revenue with genres:
combined_df = revenue_df.set_index('imdb_id').join(genres_df.set_index('imdb_id'), rsuffix='_rev')
combined_df.head(3)
combined_df = combined_df.drop(columns=['title_rev', 'year_rev'])
combined_df = combined_df.reset_index()
# adding in time of year next:
combined_df = combined_df.set_index('imdb_id').join(date_df.set_index('imdb_id')).reset_index()
combined_df.sample(5)
combined_df.info()
combined_df = combined_df.dropna()
combined_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 7865 entries, 0 to 14430
Data columns (total 13 columns):
imdb_id 7865 non-null object
title 7865 non-null object
year 7865 non-null object
director 7865 non-null object
production_co 7865 non-null object
rank 7865 non-null int64
budget_usd 7865 non-null float64
us_gross 7865 non-null float64
revenue 7865 non-null float64
popular 7865 non-null bool
genres 7865 non-null object
date 7865 non-null datetime64[ns]
month 7865 non-null object
dtypes: bool(1), datetime64[ns](1), float64(3), int64(1), object(7)
memory usage: 806.5+ KB
###Markdown
Save to CSV
###Code
combined_save_path = os.path.join(os.pardir, 'data', 'processed', 'combined.csv')
combined_df.to_csv(combined_save_path, index=False)
test_combined_save = pd.read_csv(combined_save_path)
test_combined_save.sample(3)
combined_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 7865 entries, 0 to 14430
Data columns (total 13 columns):
imdb_id 7865 non-null object
title 7865 non-null object
year 7865 non-null object
director 7865 non-null object
production_co 7865 non-null object
rank 7865 non-null int64
budget_usd 7865 non-null float64
us_gross 7865 non-null float64
revenue 7865 non-null float64
popular 7865 non-null bool
genres 7865 non-null object
date 7865 non-null datetime64[ns]
month 7865 non-null object
dtypes: bool(1), datetime64[ns](1), float64(3), int64(1), object(7)
memory usage: 806.5+ KB
|
Pipeline progression/14_Green lane.ipynb | ###Markdown
The code so far:
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(frame):
def cal_undistort(img):
# Reads mtx and dist matrices, peforms image distortion correction and returns the undistorted image
import pickle
# Read in the saved matrices
my_dist_pickle = pickle.load( open( "output_files/calib_pickle_files/dist_pickle.p", "rb" ) )
mtx = my_dist_pickle["mtx"]
dist = my_dist_pickle["dist"]
img_size = (img.shape[1], img.shape[0])
undistorted_img = cv2.undistort(img, mtx, dist, None, mtx)
#undistorted_img = cv2.cvtColor(undistorted_img, cv2.COLOR_BGR2RGB) #Use if you use cv2 to import image. ax.imshow() needs RGB image
return undistorted_img
def yellow_threshold(img, sxbinary):
# Convert to HLS color space and separate the S channel
# Note: img is the undistorted image
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
s_channel = hls[:,:,2]
h_channel = hls[:,:,0]
# Threshold color channel
s_thresh_min = 100
s_thresh_max = 255
        # on the 360 degree hue scale my yellow values ranged between 35 and 50, so they are halved here (OpenCV hue runs 0-179)
h_thresh_min = 10
h_thresh_max = 25
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1
h_binary = np.zeros_like(h_channel)
h_binary[(h_channel >= h_thresh_min) & (h_channel <= h_thresh_max)] = 1
# Combine the two binary thresholds
yellow_binary = np.zeros_like(s_binary)
yellow_binary[(((s_binary == 1) | (sxbinary == 1) ) & (h_binary ==1))] = 1
return yellow_binary
def xgrad_binary(img, thresh_min=30, thresh_max=100):
# Grayscale image
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Sobel x
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
#thresh_min = 30 #Already given above
#thresh_max = 100
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
return sxbinary
def white_threshold(img, sxbinary, lower_white_thresh = 170):
r_channel = img[:,:,0]
g_channel = img[:,:,1]
b_channel = img[:,:,2]
# Threshold color channel
r_thresh_min = lower_white_thresh
r_thresh_max = 255
r_binary = np.zeros_like(r_channel)
r_binary[(r_channel >= r_thresh_min) & (r_channel <= r_thresh_max)] = 1
g_thresh_min = lower_white_thresh
g_thresh_max = 255
g_binary = np.zeros_like(g_channel)
g_binary[(g_channel >= g_thresh_min) & (g_channel <= g_thresh_max)] = 1
b_thresh_min = lower_white_thresh
b_thresh_max = 255
b_binary = np.zeros_like(b_channel)
b_binary[(b_channel >= b_thresh_min) & (b_channel <= b_thresh_max)] = 1
white_binary = np.zeros_like(r_channel)
white_binary[((r_binary ==1) & (g_binary ==1) & (b_binary ==1) & (sxbinary==1))] = 1
return white_binary
def thresh_img(img):
#sxbinary = xgrad_binary(img, thresh_min=30, thresh_max=100)
sxbinary = xgrad_binary(img, thresh_min=25, thresh_max=130)
yellow_binary = yellow_threshold(img, sxbinary) #(((s) | (sx)) & (h))
white_binary = white_threshold(img, sxbinary, lower_white_thresh = 150)
# Combine the two binary thresholds
combined_binary = np.zeros_like(sxbinary)
combined_binary[((yellow_binary == 1) | (white_binary == 1))] = 1
out_img = np.dstack((combined_binary, combined_binary, combined_binary))*255
return out_img
def perspective_transform(img):
# Define calibration box in source (original) and destination (desired or warped) coordinates
img_size = (img.shape[1], img.shape[0])
"""Notice the format used for img_size. Yaha bhi ulta hai. x axis aur fir y axis chahiye.
Apne format mein rows(y axis) and columns (x axis) hain"""
# Four source coordinates
# Order of points: top left, top right, bottom right, bottom left
src = np.array(
[[435*img.shape[1]/960, 350*img.shape[0]/540],
[530*img.shape[1]/960, 350*img.shape[0]/540],
[885*img.shape[1]/960, img.shape[0]],
[220*img.shape[1]/960, img.shape[0]]], dtype='f')
# Next, we'll define a desired rectangle plane for the warped image.
# We'll choose 4 points where we want source points to end up
# This time we'll choose our points by eyeballing a rectangle
dst = np.array(
[[290*img.shape[1]/960, 0],
[740*img.shape[1]/960, 0],
[740*img.shape[1]/960, img.shape[0]],
[290*img.shape[1]/960, img.shape[0]]], dtype='f')
#Compute the perspective transform, M, given source and destination points:
M = cv2.getPerspectiveTransform(src, dst)
#Warp an image using the perspective transform, M; using linear interpolation
#Interpolating points is just filling in missing points as it warps an image
# The input image for this function can be a colored image too
warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
return warped, src, dst
def rev_perspective_transform(img, src, dst):
img_size = (img.shape[1], img.shape[0])
#Compute the perspective transform, M, given source and destination points:
Minv = cv2.getPerspectiveTransform(dst, src)
#Warp an image using the perspective transform, M; using linear interpolation
#Interpolating points is just filling in missing points as it warps an image
# The input image for this function can be a colored image too
un_warped = cv2.warpPerspective(img, Minv, img_size, flags=cv2.INTER_LINEAR)
return un_warped, Minv
def draw_polygon(img1, img2, src, dst):
src = src.astype(int) #Very important step (Pixels cannot be in decimals)
dst = dst.astype(int)
cv2.polylines(img1, [src], True, (255,0,0), 3)
cv2.polylines(img2, [dst], True, (255,0,0), 3)
def histogram_bottom_peaks (warped_img):
# This will detect the bottom point of our lane lines
# Take a histogram of the bottom half of the image
bottom_half = warped_img[((2*warped_img.shape[0])//5):,:,0] # Collecting all pixels in the bottom half
histogram = np.sum(bottom_half, axis=0) # Summing them along y axis (or along columns)
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
        midpoint = np.int(histogram.shape[0]//2) # histogram is a 1D array, so only index 0 of its shape is filled
#print(np.shape(histogram)) #OUTPUT:(1280,)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
return leftx_base, rightx_base
def find_lane_pixels(warped_img):
leftx_base, rightx_base = histogram_bottom_peaks(warped_img)
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin. So width = 2*margin
margin = 90
# Set minimum number of pixels found to recenter window
minpix = 1000 #I've changed this from 50 as given in lectures
# Set height of windows - based on nwindows above and image shape
window_height = np.int(warped_img.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = warped_img.nonzero() # returns the pixel coordinates in 2 separate arrays
nonzeroy = np.array(nonzero[0]) # Y coordinates come back in a 1D array, arranged in the order of the pixels
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base #set initially; updated at the end of each window iteration of the for loop
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = [] # We will collect the indices of the lane pixels here.
# Indexing the 'nonzerox' array with these indices gives the pixel coordinates
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = warped_img.shape[0] - (window+1)*window_height
win_y_high = warped_img.shape[0] - window*window_height
"""### TO-DO: Find the four below boundaries of the window ###"""
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
"""
# Create an output image to draw on and visualize the result
out_img = np.copy(warped_img)
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
"""
### TO-DO: Identify the nonzero pixels in x and y within the window ###
#The full explanation of this step is written on a separate page
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on the mean position of the pixels in your current window (re-centre)
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
"""return leftx, lefty, rightx, righty, out_img""" #agar rectangles bana rahe ho toh out_image rakhna
return leftx, lefty, rightx, righty
def fit_polynomial(warped_img, leftx, lefty, rightx, righty, right_fit_history, right_variance_history):
#Fit a second order polynomial to each using `np.polyfit` ###
left_fit = np.polyfit(lefty,leftx,2)
right_fit = np.polyfit(righty,rightx,2)
# Generate x and y values for plotting.
#NOTE: y is the independent variable. Refer "fit polynomial" notes for explanation
# We'll plot x as a function of y
ploty = np.linspace(0, warped_img.shape[0]-1, warped_img.shape[0])
# Eqn of parabola: a(x**2) + bx + c, where a and b denote the shape of the parabola. The shape of the parabola will be almost constant in our case
variance_new=0 #initializing the variable
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
if(right_fit_history == None):
a2 = (0.6*left_fit[0] + 0.4*right_fit[0])
b2 = (0.6*left_fit[1] + 0.4*right_fit[1])
c2 = (warped_img.shape[1] - (left_fit[0]*(warped_img.shape[0]-1)**2 + left_fit[1]*(warped_img.shape[0]-1) + left_fit[2]))*0.1 + 0.9*right_fit[2]
for index in range(len(rightx)):
variance_new+= abs(rightx[index]-(a2*righty[index]**2 + b2*righty[index] + c2))
variance_new=variance_new/len(rightx)
print("variance_new",variance_new)
else:
a2_new = (0.6*left_fit[0] + 0.4*right_fit[0])
b2_new = (0.6*left_fit[1] + 0.4*right_fit[1])
c2_new = (warped_img.shape[1] - (left_fit[0]*(warped_img.shape[0]-1)**2 + left_fit[1]*(warped_img.shape[0]-1) + left_fit[2]))*0.1 + 0.9*right_fit[2]
# Finding weighted average for the previous elements data within right_fit_history
a2_old= sum([(0.2*(index+1)*element[0]) for index,element in enumerate(right_fit_history)])/sum([0.2*(index+1) for index in range(0,5)])
b2_old= sum([(0.2*(index+1)*element[1]) for index,element in enumerate(right_fit_history)])/sum([0.2*(index+1) for index in range(0,5)])
c2_old= sum([(0.2*(index+1)*element[2]) for index,element in enumerate(right_fit_history)])/sum([0.2*(index+1) for index in range(0,5)])
"""Trying to find variance"""
for index in range(len(rightx)):
variance_new+= abs(rightx[index]-(a2_new*righty[index]**2 + b2_new*righty[index] + c2_new))
variance_new=variance_new/len(rightx)
print("variance_new",variance_new)
#variance_old = sum([(0.2*(index+1)*element) for index,element in enumerate(right_variance_history)])/sum([0.2*(index+1) for index in range(0,5)])
variance_old = sum([(0.2*((5-index)**3)*element) for index,element in enumerate(right_variance_history)])/sum([0.2*((5-index)**3) for index in range(0,5)])
#variance_old = right_variance_history[4]
#variance_old = sum([element for element in right_variance_history])/5
"""yaha ke coefficients variance se aa sakte hain"""
coeff_new=variance_old/(variance_new+variance_old)
coeff_old=variance_new/(variance_new+variance_old)
a2= a2_new*coeff_new + a2_old*coeff_old
b2= b2_new*coeff_new + b2_old*coeff_old
c2= c2_new*coeff_new + c2_old*coeff_old
right_fitx = a2*ploty**2 + b2*ploty + c2
status = True
#try:
# left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
# right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
# Avoids an error if `left_fit` and `right_fit` are still None or incorrect
print('The function failed to fit a line!')
left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty
status = False
return left_fit, [a2,b2,c2], left_fitx, right_fitx, status, variance_new, ploty
# out_img here has boxes drawn and the pixels are colored
def color_pixels_and_curve(out_img, leftx, lefty, rightx, righty, left_fitx, right_fitx):
ploty = np.linspace(0, warped_img.shape[0]-1, warped_img.shape[0])
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
# Converting the coordinates of our line into integer values as index of the image can't take decimals
left_fitx_int = left_fitx.astype(np.int32)
right_fitx_int = right_fitx.astype(np.int32)
ploty_int = ploty.astype(np.int32)
# Coloring the curve as yellow
out_img[ploty_int,left_fitx_int] = [255,255,0]
out_img[ploty_int,right_fitx_int] = [255,255,0]
# To thicken the curve
out_img[ploty_int,left_fitx_int+1] = [255,255,0]
out_img[ploty_int,right_fitx_int+1] = [255,255,0]
out_img[ploty_int,left_fitx_int-1] = [255,255,0]
out_img[ploty_int,right_fitx_int-1] = [255,255,0]
out_img[ploty_int,left_fitx_int+2] = [255,255,0]
out_img[ploty_int,right_fitx_int+2] = [255,255,0]
out_img[ploty_int,left_fitx_int-2] = [255,255,0]
out_img[ploty_int,right_fitx_int-2] = [255,255,0]
def search_around_poly(warped_img, left_fit, right_fit):
# HYPERPARAMETER
# Choose the width of the margin around the previous polynomial to search
# The quiz grader expects 100 here, but feel free to tune on your own!
margin = 100
# Grab activated pixels
nonzero = warped_img.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
### TO-DO: Set the area of search based on activated x-values ###
### within the +/- margin of our polynomial function ###
### Hint: consider the window areas for the similarly named variables ###
### in the previous quiz, but change the windows to our new search area ###
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty
def modify_array(array, new_value):
if len(array)!=5:
for i in range(0,5):
array.append(new_value)
else:
dump_var=array[0]
array[0]=array[1]
array[1]=array[2]
array[2]=array[3]
array[3]=array[4]
array[4]=new_value
return array
undist_img = cal_undistort(frame)
thresholded_img = thresh_img(undist_img) # Note: This is not a binary image. It has already been stacked within the function
warped_img, src, dst = perspective_transform(thresholded_img)
#draw_polygon(frame, warped_img, src, dst) #the first image is the original image that you import into the system
print("starting count",lane.count)
if (lane.count == 0):
leftx, lefty, rightx, righty = find_lane_pixels(warped_img) # Find our lane pixels first
left_fit, right_fit, left_fitx, right_fitx, status, variance_new, ploty = fit_polynomial(warped_img, leftx, lefty, rightx, righty, right_fit_history=None, right_variance_history=None)
print("First case mein variance ye hai", variance_new)
elif (lane.count > 0):
left_fit_previous = [i[0] for i in lane.curve_fit]
right_fit_previous = [i[1] for i in lane.curve_fit]
#print(left_fit_previous)
#print(right_fit_previous)
leftx, lefty, rightx, righty = search_around_poly(warped_img, left_fit_previous[4], right_fit_previous[4])
left_fit, right_fit, left_fitx, right_fitx, status, variance_new, ploty = fit_polynomial(warped_img, leftx, lefty, rightx, righty, right_fit_history=right_fit_previous, right_variance_history=lane.right_variance)
color_pixels_and_curve(warped_img, leftx, lefty, rightx, righty, left_fitx, right_fitx)
lane.detected = status
lane.curve_fit = modify_array(lane.curve_fit,[left_fit, right_fit])
lane.right_variance = modify_array(lane.right_variance, variance_new)
print(lane.right_variance)
unwarped_img, Minv = rev_perspective_transform(warped_img, src, dst)
lane.count = lane.count+1
"""green lane"""
# Create an image to draw the lines on
color_warp = np.zeros_like(warped_img).astype(np.uint8)
print(np.shape(color_warp))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv, (frame.shape[1], frame.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(undist_img, 1, newwarp, 0.3, 0)
return result
#return unwarped_img
###Output
_____no_output_____
###Markdown
Let's try classes
###Code
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
#Let's count the number of consecutive frames
self.count = 0
# was the line detected in the last iteration?
self.detected = False
#polynomial coefficients for the most recent fit
self.curve_fit = []
# Tracking variance for the right lane
self.right_variance = []
# x values of the curve that we fit initially
#self.current_xfitted = []
# x values for detected line pixels
#self.allx = []
# y values for detected line pixels
#self.ally = []
#store your image in this
#self.image_output = []
# x values of the last n fits of the line
self.recent_xfitted = []
#average x values of the fitted line over the last n iterations
self.bestx = None
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
#difference in fit coefficients between last and new fits
self.diffs = np.array([0,0,0], dtype='float')
lane=Line()
frame1= mpimg.imread("my_test_images/Highway_snaps/image (1).jpg")
frame2= mpimg.imread("my_test_images/Highway_snaps/image (2).jpg")
frame3= mpimg.imread("my_test_images/Highway_snaps/image (3).jpg")
print("starting count value",lane.count)
(process_image(frame1))
(process_image(frame2))
plt.imshow(process_image(frame3))
###Output
starting count value 0
starting count 0
variance_new 16.411728943874007
Variance in the first case: 16.411728943874007
[16.411728943874007, 16.411728943874007, 16.411728943874007, 16.411728943874007, 16.411728943874007]
(1080, 1920, 3)
starting count 1
variance_new 20.454135208135213
[16.411728943874007, 16.411728943874007, 16.411728943874007, 16.411728943874007, 20.454135208135213]
(1080, 1920, 3)
starting count 2
variance_new 14.975480798140142
[16.411728943874007, 16.411728943874007, 16.411728943874007, 20.454135208135213, 14.975480798140142]
(1080, 1920, 3)
###Markdown
Video test
###Code
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
#Let's count the number of consecutive frames
self.count = 0
# was the line detected in the last iteration?
self.detected = False
#polynomial coefficients for the most recent fit
self.curve_fit = []
# Tracking variance for the right lane
self.right_variance = []
# x values of the curve that we fit initially
#self.current_xfitted = []
# x values for detected line pixels
#self.allx = []
# y values for detected line pixels
#self.ally = []
#store your image in this
#self.image_output = []
# x values of the last n fits of the line
self.recent_xfitted = []
#average x values of the fitted line over the last n iterations
self.bestx = None
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
#difference in fit coefficients between last and new fits
self.diffs = np.array([0,0,0], dtype='float')
lane=Line()
project_output = 'output_files/video_clips/project_video_with_history.mp4'
clip1 = VideoFileClip("project_video.mp4")
#clip1 = VideoFileClip("project_video.mp4").subclip(20,23)
project_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!
%time project_clip.write_videofile(project_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(project_output))
###Output
_____no_output_____
###Markdown
.
###Code
import numpy as np
def modify_array(array, new_value):
if len(array)!=5:
for i in range(0,5):
array.append(new_value)
else:
dump_var=array[0]
array[0]=array[1]
array[1]=array[2]
array[2]=array[3]
array[3]=array[4]
array[4]=new_value
return array
a=[]
modify_array(a,[4,2])
modify_array(a,[7,3])
modify_array(a,[2,1])
modify_array(a,[9,6])
print(a)
Ans = [i[0] for i in a]
print(Ans)
"""a[:,0] """ # This wont work. TypeError: list indices must be integers or slices, not tuple
a = np.array(a)
modify_array(a,[1,4])
print(a)
a[:,0]
a=[[10,20,30],[30,60,80],[60,10,20], [100,20,10], [90,70,10]]
ans = sum([(0.2*(index+1)*element[0]) for index,element in enumerate(a)])/sum([0.2*(index+1) for index in range(0,5)])
print(ans)
[(0.25*(index+1)*element[0]) for index,element in enumerate(a)]
###Output
_____no_output_____ |
engr1330jb/_build/jupyter_execute/lessons/lesson02/lesson02.ipynb | ###Markdown
ENGR 1330 Computational Thinking with Data Science Copyright © 2021 Theodore G. Cleveland and Farhang ForghanparastLast GitHub Commit Date: 12 August 2021 2: Expressions- fundamental operators- arithmetic expressions- simple output: print() Programming FundamentalsRecall the 5 fundamental CT concepts are:1. **Decomposition**: the process of taking a complex problem and breaking it into more manageable sub-problems. 2. **Pattern Recognition**: finding similarities, or shared characteristics of problems to reuse of solution methods ( **automation** ) for each occurrence of the pattern. 3. **Abstraction** : Determine important characteristics of the problem and use these characteristics to create a representation of the problem. 4. **Algorithms** : Step-by-step instructions of how to solve a problem.5. **System Integration**: the assembly of the parts above into the complete (integrated) solution. Integration combines parts into a **program** which is the realization of an algorithm using a syntax that the computer can understand. **Programming** is (generally) writing code in a specific programming language to address a certain problem. In the above list it is largely addressed by the algorithms and system integration concepts. iPythonThe programming language we will use is Python (actually iPython). Python is an example of a high-level language; there are also low-level languages, sometimes referred to as machine languages or assembly languages. Machine language is the encoding of instructions in binary so that they can be directly executed by the computer. Assembly language uses a slightly easier format to refer to the low level instructions. Loosely speaking, computers can only execute programs written in low-level languages. To be exact, computers can actually only execute programs written in machine language. Thus, programs written in a high-level language (and even those in assembly language) have to be processed before they can run. This extra processing takes some time, which is a small disadvantage of high-level languages. However, the advantages to high-level languages are enormous:- First, it is much easier to program in a high-level language. Programs written in a high-level language take less time to write, they are shorter and easier to read, and they are more likely to be correct. - Second, high-level languages are portable, meaning that they can run on different kinds of computers with just a few modifications. - Low-level programs can run on only one kind of computer (chipset-specific for sure, in some cases hardware specific) and have to be rewritten to run on other processors. (e.g. x86-64 vs. arm7 vs. aarch64 vs. PowerPC ...)Due to these advantages, almost all programs are written in high-level languages. Low-level languages are used only for a few specialized applications.Two kinds of programs process high-level languages into low-level languages: interpreters and compilers. An interpreter reads a high-level program and executes it, meaning that it does what the program says. It processes the program a little at a time, alternately reading lines and performing computations. Recall how an Excel spreadsheet computes from top to bottom, left to right - an interpreted program is much the same, each line is like a cell in a spreadsheet.As a language, python is a formal language that has certain requirements and structure called "syntax." Syntax rules come in two flavors, pertaining to **tokens** and **structure**. 
**Tokens** are the basic elements of the language, such as words, numbers, and chemical elements. The second type of syntax rule pertains to the **structure of a statement** specifically in the way the tokens are arranged. Tokens and StructureConsider the relativistic equation relating energy, mass, and the speed of light $ e = m \cdot c^2 $In this equation the tokens are $e$,$m$,$c$,$=$,$\cdot$,$~^2$ and the structure is parsed from left to right as into the token named $e$ place the result of the product of the contents of the tokens $m$ and $c \times c$ (this operation on $c$ is what the $~^2$ token means. Given that the speed of light is some universal constant, the only things that can change are the contents of $m$ and the resulting change in $e$. In the above discourse, the tokens $e$,$m$,$c$ are names for things that can have values -- we will call these variables (or constants as appropriate). The tokens $=$,$\cdot$, and $~^2$ are symbols for various arithmetic operations -- we will call these operators. The structure of the equation is specific -- we will call it a statement.:::{note}When we attempt to write and execute python scripts - we will make various mistakes; these will generate warnings and errors, which we will repair to make a working program.The next two code blocks do not compile into a JupyterBook, we will see in class why. The cells can be cut-and-pasted into your Jupyter Notebook:::Consider our equation: ```clear all variables Example Energy = Mass * SpeedOfLight**2 ``` Notice how the interpreter tells us that Mass is undefined - so a simple fix is to define it and try again ``` Example Mass = 1000000Energy = Mass * SpeedOfLight**2 ``` Notice how the interpreter now tells us that SpeedOfLight is undefined - so a simple fix is to define it and try again
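For reference, the two failing cells described above look like this when pasted into a notebook; a small sketch, with the resulting errors noted as comments:
```python
# First attempt: neither name has been defined yet
Energy = Mass * SpeedOfLight**2   # NameError: name 'Mass' is not defined

# Second attempt: Mass is defined, but SpeedOfLight is still missing
Mass = 1000000
Energy = Mass * SpeedOfLight**2   # NameError: name 'SpeedOfLight' is not defined
```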
###Code
# Example
Mass = 1000000 #kilograms
SpeedOfLight = 299792458 #meters per second
Energy = Mass * SpeedOfLight**2
###Output
_____no_output_____
###Markdown
Now the script ran without any reported errors, but we have not instructed the program on how to produce output. To keep the example simple we will just add a generic print statement.
###Code
# Example
Mass = 1000000 #kilograms
SpeedOfLight = 299792458 #meters per second
Energy = Mass * SpeedOfLight**2
print("Energy is:", Energy, "Newton meters")
###Output
Energy is: 89875517873681764000000 Newton meters
###Markdown
Now lets examine our program. Identify the tokens that have values, Identify the tokens that are symbols of operations, identify the structure. VariablesVariables are names given to data that we want to store, manipulate, **and change** in programs. A variable has a name and a value. The value representation depends on what type of object the variable represents. The utility of variables comes in when we have a structure that is universal, but values of variables within the structure will change - otherwise it would be simple enough to just hardwire the arithmetic.Suppose we want to store the time of concentration for some hydrologic calculation. To do so, we can name a variable `TimeOfConcentration`, and then `assign` a value to the variable,for instance: TimeOfConcentration = 0.0 After this assignment statement the variable is created in the program and has a value of 0.0. The use of a decimal point in the initial assignment establishes the variable as a float (a real variable is called a floating point representation -- or just a float). Naming RulesVariable names in Python can only contain letters (a - z, A - Z), numerals (0 - 9), or underscores. The first character cannot be a number, otherwise there is considerable freedom in naming. The names can be reasonably long. `runTime`, `run_Time`, `_run_Time2`, `_2runTime` are all valid names, but `2runTime` is not valid, and will create an error when you try to use it.
###Code
# Script to illustrate variable names
runTime = 1.
_2runTime = 2 # change to 2runTime = 2 and rerun script
runTime2 = 2
print(type(runTime),type(_2runTime),type(runTime2))
###Output
<class 'float'> <class 'int'> <class 'int'>
###Markdown
There are some reserved words that cannot be used as variable names because they have preassigned meaning in Python. These keywords include `if`, `while`, and `for`, and there are several more; the interpreter will issue an error message when you attempt to run a program that uses them as variable names. Built-in function names such as `print` and `input` are not strictly reserved, but reassigning them shadows the built-in behavior and should be avoided. OperatorsThe `=` sign used in the variable definition is called an assignment operator (or assignment sign). The symbol means that the expression to the right of the symbol is to be evaluated and the result placed into the variable on the left side of the symbol. The "operation" is assignment, the "=" symbol is the operator name. Consider the script below
###Code
# Assignment Operator
x = 5
y = 10
print (x,y)
y=x # reverse order y=x and re-run, what happens?
print (x,y)
###Output
5 10
5 5
###Markdown
So look at what happened. When we assigned values to the variables named `x` and `y`, they started life as 5 and 10. We then wrote those values to the console, and the program returned 5 and 10. Then we executed `y = x`, which took the value in `x` and replaced the value that was in `y` with it. We then wrote the contents again, and both variables have the value 5. Arithmetic OperatorsIn addition to assignment we can also perform arithmetic operations on variables. The fundamental arithmetic operators are:

| Symbol | Meaning | Example |
|:---|:---|:---|
| = | Assignment | x=3 Assigns value of 3 to x. |
| + | Addition | x+y Adds values in x and y. |
| - | Subtraction | x-y Subtracts value in y from x. |
| * | Multiplication | x*y Multiplies values in x and y. |
| / | Division | x/y Divides value in x by value in y. |
| // | Floor division | x//y Divides x by y, truncates result to whole number. |
| % | Modulus | x%y Returns remainder when x is divided by y. |
| ** | Exponentiation | x ** y Raises value in x to the power of the value in y. |
| += | Additive assignment | x+=2 Equivalent to x = x+2. |
| -= | Subtractive assignment | x-=2 Equivalent to x = x-2. |
| *= | Multiplicative assignment | x\*=3 Equivalent to x = x\*3. |
| /= | Divide assignment | x/=3 Equivalent to x = x/3. |

Run the script in the next cell for some illustrative results
###Code
# Uniary Arithmetic Operators
x = 10
y = 5
print(x, y)
print(x+y)
print(x-y)
print(x*y)
print(x/y)
print((x+1)//y)
print((x+1)%y)
print(x**y)
# Arithmetic assignment operators
x = 1
x += 2
print(type(x),x)
x = 1
x -= 2
print(type(x),x)
x = 1
x *=3
print(type(x),x)
x = 10
x /= 2
print(type(x),x) # Interesting what division does to variable type
###Output
<class 'int'> 3
<class 'int'> -1
<class 'int'> 3
<class 'float'> 5.0
###Markdown
ExpressionsExpressions are the "algebraic" constructions that are evaluated and then placed into a variable. Consider x1 = 7 + 3 * 6 / 2 - 1 The expression is evaluated using the usual operator precedence (multiplication and division before addition and subtraction, working left to right), and in words it reads: into the object named x1 place the result of: integer 3 times integer 6 gives integer 18; 18 divided by integer 2 gives float 9.0; integer 7 plus float 9.0 gives float 16.0; float 16.0 minus integer 1 gives float 15.0. The division operation by default produces a float result unless forced otherwise. The result is that the variable `x1` is a float with a value of `15.0`
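A small side-by-side sketch of the two division operators used in this expression:
```python
x_true  = 7 + 3 * 6 / 2 - 1    # true division  -> 15.0, a float
x_floor = 7 + 3 * 6 // 2 - 1   # floor division -> 15, an int
print(type(x_true), x_true)
print(type(x_floor), x_floor)
```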
###Code
# Expressions Example
x1 = 7 + 3 * 6 // 2 - 1 # Change / into // and see what happens!
print(type(x1),x1)
## Simple I/O (Input/Output)
###Output
<class 'int'> 15
|
Updated pyEPR Files/pyEPR/_tutorial_notebooks/Tutorial 2. Field calculations - dielectric energy participation ratios (EPRs).ipynb | ###Markdown
pyEPR Calculating Dissipative Participation Ratios Zlatko Minev**Summary:** Following Appendix E of the energy-participation-ratio (EPR) paper, here we demonstrate how to calculate the dielectric EPR of a chip substrate in a qubit eigenmode. We use the following definitions for the RMS energy stored in a volume $V$,\begin{align}\mathcal{E}_{\mathrm{elec}}&=&\frac{1}{4}\mathrm{Re}\int_{V}\mathrm{d}v\vec{E}_{\text{max}}^{*}\overleftrightarrow{\epsilon}\vec{E}_{\text{max}}\;,\\\mathcal{E}_{\mathrm{mag}}&=&\frac{1}{4}\mathrm{Re}\int_{V}\mathrm{d}v\vec{H}_{\text{max}}^{*}\overleftrightarrow{\mu}\vec{H}_{\text{max}}\;,\end{align} The simple way Following the first tutorial, let's load the `pyEPR` package under the shorthand name `epr`.
###Code
import pyEPR as epr
###Output
_____no_output_____
###Markdown
Load Ansys HFSS tutorial file As we did in the previous tutorial, let us first determine where the example file is stored. For this tutorial, we get the path to the tutorial folder.
###Code
# Load Path temporarily just to find where the tutorial folder is
# return path_to_project
from pathlib import Path
path_to_project = Path(epr.__file__).parent.parent / '_example_files'
print(f'We will use the example project located in\n {path_to_project}')
###Output
We will use the example project located in
C:\zkm-code\pyEPR\_example_files
###Markdown
Now we will open Ansys Desktop, connect to a specific project, and create the distributed-analysis object `eprh`.
###Code
pinfo = epr.ProjectInfo(project_path = path_to_project,
project_name = 'pyEPR_tutorial1',
design_name = '1. single_transmon')
eprh = epr.DistributedAnalysis(pinfo)
###Output
INFO 02:35AM [connect]: Connecting to Ansys Desktop API...
INFO 02:35AM [load_ansys_project]: File path to HFSS project found.
INFO 02:35AM [load_ansys_project]: Opened Ansys App
INFO 02:35AM [load_ansys_project]: Opened Ansys Desktop v2016.0.0
INFO 02:35AM [load_ansys_project]: Opened Ansys Project
Folder: C:/zkm-code/pyEPR/_example_files/
Project: pyEPR_tutorial1
INFO 02:35AM [connect]: Opened active design
Design: 1. single_transmon [Solution type: Eigenmode]
INFO 02:35AM [get_setup]: Opened setup `Setup1` (<class 'pyEPR.ansys.HfssEMSetup'>)
INFO 02:35AM [connect]: Connection to Ansys established successfully. 😀
###Markdown
Calculate participation of the substrate for the qubit mode First, select which eigenmode to work on. Here the fundamental mode, mode 0, is the qubit.```python eprh.set_mode(0)```Let us now calculate the dielectric energy-participation ratio of the substrate relative to the dielectric energy of all objects, using the function```python eprh.calc_p_electric_volume```Note that when all objects are specified, this does not include any energy that might be stored in any lumped elements or lumped capacitors.Returns:--------- ℰ_object/ℰ_total, (ℰ_object, ℰ_total)
###Code
eprh.set_mode(0)
# Calculate the EPR p_dielectic
p_dielectic, (ℰ_substr, ℰ_total) = eprh.calc_p_electric_volume('substrate', 'AllObjects')
print(f'Energy in silicon substrate = {100*p_dielectic:.1f}%')
###Output
Energy in silicon substrate = 87.7%
###Markdown
Now, compute the electric energy stored in the vacuum. Use the total energy in all objects calculated above so that we don't have to recompute it.
###Code
# Here we will pass in the precomputed E_total=ℰ_total
p_vac, (ℰ_vac, ℰ_total) = eprh.calc_p_electric_volume('cavity_enclosure', E_total=ℰ_total)
print(f'''Energy in vacuum = {100*p_vac:.1f}%
Since there are no other volumes,
the two energies should sum to one: {p_dielectic + p_vac}''')
###Output
Energy in vacuum = 12.3%
Since there are no other volumes,
the two energies should sum to one: 0.9999999999999989
###Markdown
Let's find outmore about the functuion signature
###Code
? eprh.calc_p_electric_volume
###Output
_____no_output_____
###Markdown
Calculating the energies directlyUsing lower level functions
###Code
ℰ_total = eprh.calc_energy_electric(volume='AllObjects')
ℰ_substr = eprh.calc_energy_electric(volume='substrate')
print(f'Energy in substrate = {100*ℰ_substr/ℰ_total:.1f}%')
?eprh.calc_energy_electric
###Output
_____no_output_____
###Markdown
Using the Fields calculator in HFSS directly We will do the same calculation again, but now using the internals of `eprh.calc_energy_electric` to demonstrate how the fields calculator object can be used for custom integrals and how the internals work. Using the HFSS Fields CalculatorThe Fields calculator enables you to perform computations using basic field quantities. The calculator will compute derived quantities from the general electric field solution; write field quantities to files, locate maximum and minimum field values, and perform other operations on the field solution. The calculator does not perform the computations until a value is needed or is forced for a result. This makes it more efficient, saving computing resources and time; you can do all the calculations without regard to data storage of all the calculated points of the field. It is generally easier to do all the calculations first, then plot the results. Direct calculation (up to the constant 1/4 prefactor, which cancels in the participation ratio) of \begin{align}\mathcal{E}_{\mathrm{elec}}&=&\mathrm{Re}\int_{V}\mathrm{d}v\vec{E}_{\text{max}}^{*}\overleftrightarrow{\epsilon}\vec{E}_{\text{max}}\;.\end{align}
###Code
from pyEPR.core import *
from pyEPR.core import CalcObject
self, volume = eprh, 'AllObjects'
calcobject = CalcObject([], self.setup)
vecE = calcobject.getQty("E").smooth()
A = vecE.times_eps()
B = vecE.conj()
A = A.dot(B)
A = A.real()
A = A.integrate_vol(name=volume)
E_total = A.evaluate(lv=self._get_lv())
# This command numerically evaluates and displays the
# results of calculator operations
E_total
from pyEPR.core import *
self, volume = eprh, 'substrate'
calcobject = CalcObject([], self.setup)
vecE = calcobject.getQty("E").smooth()
A = vecE.times_eps()
B = vecE.conj()
A = A.dot(B)
A = A.real()
A = A.integrate_vol(name=volume)
E_subs = A.evaluate(lv=self._get_lv())
# This command numerically evaluates and displays the
# results of calculator operations
E_subs
print(f'Energy in substrate: {100*E_subs/E_total:.1f}%')
###Output
Energy in substrate: 87.7%
|
23_CNN/23_CNN.ipynb | ###Markdown
Convolutional Neural Networks What is a convolution?- Let's consider a 2-D convolution (used, for example, in image processing): \begin{equation}g(x,y)*f(x,y) = \sum_{s=-a}^a \sum_{t=-b}^b g(s,t)f(x-s,y-t)\end{equation}where $g$ is the filter and $f$ is the image to be convolved. Essentially, we flip f both horizontally and vertically and, then, slide $g$ across $f$ where at each location we perform a pointwise multiplication and then a sum. - Why the flip? Without flipping, the operation is a correlation (or also called cross-correlation). The equation for cross-correlation is: \begin{equation}g(x,y)*f(x,y) = \sum_{s=-a}^a \sum_{t=-b}^b g(s,t)f(x+s,y+t)\end{equation}- A nice video describing the difference between convolution and cross-correlation can be found here: https://www.youtube.com/watch?v=C3EEy8adxvc- A nice property of convolution is that it corresponds to the product in the frequency domain. (Cross-correlation in the frequency domain is the product of $g$ and the complex conjugate of $f$. - Also, convolution is associative whereas correlation is not. See the following example:
###Code
import numpy as np
conv1 = np.convolve(np.convolve([-1,0,1], [-1, 0, 1], 'full'), [1, 2, 3, 4, 5], 'full')
print(conv1)
conv2 = np.convolve([-1, 0, 1], np.convolve([-1,0,1], [1, 2, 3, 4, 5], 'full'), 'full')
print(conv2)
corr1 = np.correlate(np.correlate([-1,0,1], [-1, 0, 1], 'full'), [1, 2, 3, 4, 5], 'full')
print(corr1)
corr2 = np.correlate([-1, 0, 1], np.correlate([-1,0,1], [1, 2, 3, 4, 5], 'full'), 'full')
print(corr2)
###Output
[ 1 2 1 0 0 -6 -7 4 5]
[ 1 2 1 0 0 -6 -7 4 5]
[-5 -4 7 6 0 0 -1 -2 -1]
[-1 -2 -1 0 0 6 7 -4 -5]
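The same flip, multiply, and sum recipe extends directly to 2-D; a minimal sketch of the double-sum definition above, assuming a small grayscale array and computing only the 'valid' region:
```python
def conv2d_direct(f, g):
    """Direct 2-D convolution of image f with kernel g (valid region only)."""
    g = np.flipud(np.fliplr(g))                 # flip the kernel -> convolution rather than correlation
    H, W = f.shape
    h, w = g.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(g * f[y:y + h, x:x + w])   # pointwise multiply, then sum
    return out

f_small = np.arange(25, dtype=float).reshape(5, 5)
g_small = np.array([[-1.0, 0.0, 1.0]])
print(conv2d_direct(f_small, g_small))
# should match scipy.signal.convolve2d(f_small, g_small, mode='valid')
```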
###Markdown
Why would you use a convolution? - Convolutions are very common operations. Here are some image processing examples: - Edge Detection: Can detect edges by convolving with edge masks (e.g., the Sobel edge detectors): \begin{equation} \left[\begin{array}{ccc} -1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1 \end{array}\right] \end{equation}\begin{equation} \left[\begin{array}{ccc} -1 & -2 & -1\\ 0 & 0 & 0\\ 1 & 2 & 1 \end{array}\right] \end{equation}The vertical and horizontal Sobel edge masks are shown above. - Image Smoothing: Can smooth/blur images using a mean filter - Unsharp Masking: Can sharpen imagery by subtracting a mean filtered image from the original - and more...
###Code
#Import Necessary Libraries
import scipy.signal as signal
import matplotlib.pyplot as plt
from scipy import misc
ascent = misc.ascent()  # standard 512x512 grayscale test image used below (scipy.datasets.ascent() in newer SciPy)
%matplotlib inline
# Create Sobel Edge Masks
vMask = np.array([[-1, 0, 1],[-2, 0, 2],[-1, 0, 1]])
hMask = np.array([[-1, -2, -1],[0, 0, 0],[1, 2, 1]])
# Convolve Edge Masks with Image
gradV = signal.convolve2d(ascent, vMask, boundary='symm', mode='same')
gradH = signal.convolve2d(ascent, hMask, boundary='symm', mode='same')
# Visualize Results
fig, (ax_orig, ax_mag, ax_mag2) = plt.subplots(3, 1, figsize=(6, 15))
ax_orig.imshow(ascent, cmap='gray')
ax_orig.set_title('Original')
ax_orig.set_axis_off()
ax_mag.imshow(abs(gradV), cmap='gray')
ax_mag.set_title('Vertical Gradient magnitude')
ax_mag.set_axis_off()
ax_mag2.imshow(abs(gradH), cmap='gray')
ax_mag2.set_title('Horizontal Gradient magnitude')
ax_mag2.set_axis_off()
plt.show()
# Create Mean Filter
mMask = (1/100)*np.ones((10,10))
# Convolve Mean Masks with Image
blurI = signal.convolve2d(ascent, mMask, boundary='symm', mode='same')
# Perform unsharp masking
sharpI = ascent + (ascent - blurI)
# Visualize Results
fig, (ax_orig, ax_mag, ax_mag2) = plt.subplots(3, 1, figsize=(6, 15))
ax_orig.imshow(ascent, cmap='gray')
ax_orig.set_title('Original')
ax_orig.set_axis_off()
ax_mag.imshow(abs(blurI), cmap='gray')
ax_mag.set_title('Blurred Image')
ax_mag.set_axis_off()
ax_mag2.imshow(abs(sharpI), cmap='gray')
ax_mag2.set_title('Sharpened Image')
ax_mag2.set_axis_off()
plt.show()
###Output
_____no_output_____ |
20 - Core Data Visualisation - Matplotlib subplot2grid.ipynb | ###Markdown
Importing Libraries and Data
###Code
import pandas as pd
import matplotlib.pyplot as plt
import lasio
df = pd.read_csv('Data/15_9-19A-CORE.csv')
df
###Output
_____no_output_____
###Markdown
Creating the Figure With Subplots
###Code
#Create the figure
fig = plt.figure(figsize=(10,10))  # bare figure; the axes are added below with subplot2grid
#Add the axes / subplots using subplot2grid
ax1 = plt.subplot2grid(shape=(3,3), loc=(0,0), rowspan=3)
ax2 = plt.subplot2grid(shape=(3,3), loc=(0,1), rowspan=3)
ax3 = plt.subplot2grid(shape=(3,3), loc=(0,2))
ax4 = plt.subplot2grid(shape=(3,3), loc=(1,2))
ax5 = plt.subplot2grid(shape=(3,3), loc=(2,2))
#Add ax1 to show CPOR (Core Porosity) vs DEPTH
ax1.scatter(df['CPOR'], df['DEPTH'], marker='.', c='red')
ax1.set_xlim(0, 50)
ax1.set_ylim(4010, 3825)
ax1.set_title('Core Porosity')
ax1.grid()
#Add ax2 to show CKHG (Core Permeability) vs DEPTH
ax2.scatter(df['CKHG'], df['DEPTH'], marker='.', c='blue')
ax2.set_xlim(0.01, 10000)
ax2.semilogx()
ax2.set_ylim(4010, 3825)
ax2.set_title('Core Permeability')
ax2.grid()
#Add ax3 to show CPOR (Core Porosity) vs CKHG (Core Permeability)
ax3.scatter(df['CPOR'], df['CKHG'], marker='.', alpha=0.5)
ax3.semilogy()
ax3.set_ylim(0.01, 10000)
ax3.set_xlim(0,50)
ax3.set_title('Poro-Perm Scatter Plot')
ax3.set_xlabel('Core Porosity (%)')
ax3.set_ylabel('Core Permeability (mD)')
ax3.grid()
#Add ax4 to show a histogram of CPOR - Core Porosity
ax4.hist(df['CPOR'], bins=30, edgecolor='black', color='red', alpha=0.6)
ax4.set_xlabel('Core Porosity')
#Add ax5 to show a histogram of CGD - Core Grain Density
ax5.hist(df['CGD'], bins=30, edgecolor='black', color='blue', alpha=0.6)
ax5.set_xlabel('Core Grain Density')
plt.tight_layout()
plt.savefig('CoreDataDashBoard.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Adding Interpreted Data as Line Plots
###Code
cpi = lasio.read('Data/15_9-19_A_CPI.las').df()
cpi['PHIF']=cpi['PHIF']*100
cpi.columns
#Add lines from the CPI dataframe to the plot.
#Create the figure
fig = plt.figure(figsize=(10,10))  # bare figure; the axes are added below with subplot2grid
#Add the axes / subplots using subplot2grid
ax1 = plt.subplot2grid(shape=(3,3), loc=(0,0), rowspan=3)
ax2 = plt.subplot2grid(shape=(3,3), loc=(0,1), rowspan=3)
ax3 = plt.subplot2grid(shape=(3,3), loc=(0,2), rowspan=1)
ax4 = plt.subplot2grid(shape=(3,3), loc=(1,2), rowspan=1)
ax5 = plt.subplot2grid(shape=(3,3), loc=(2,2), rowspan=1)
#Add ax1 to show CPOR (Core Porosity) vs DEPTH
ax1.scatter(df['CPOR'], df['DEPTH'], marker='.', c='red')
ax1.plot(cpi['PHIF'], cpi.index, c='black', lw=0.5)
ax1.set_xlim(0, 50)
ax1.set_ylim(4010, 3825)
ax1.set_title('Core Porosity')
ax1.grid()
#Add ax2 to show CKHG (Core Permeability) vs DEPTH
ax2.scatter(df['CKHG'], df['DEPTH'], marker='.')
ax2.plot(cpi['KLOGH'], cpi.index, c='black', lw=0.5)
ax2.set_xlim(0.01, 100000)
ax2.set_ylim(4010, 3825)
ax2.semilogx()
ax2.set_title('Core Permeability')
ax2.grid()
#Add ax3 to show CPOR (Core Porosity) vs CKHG (Core Permeability)
ax3.scatter(df['CPOR'], df['CKHG'], marker='.', alpha=0.5)
ax3.semilogy()
ax3.set_ylim(0.01, 100000)
ax3.set_xlim(0, 50)
ax3.set_title('Poro-Perm Crossplot')
ax3.set_xlabel('Core Porosity (%)')
ax3.set_ylabel('Core Permeability (mD)')
ax3.grid()
#Add ax4 to show a histogram of CPOR - Core Porosity
ax4.hist(df['CPOR'], bins=30, edgecolor='black', color='red')
ax4.set_xlabel('Core Porosity')
#Add ax5 to show a histogram of CGD - Core Grain Density
ax5.hist(df['CGD'], bins=30, edgecolor='black', color='green')
ax5.set_xlabel('Core Grain Density')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
spravanie_zamestnancov_v_zavislosti_od_casu_IV/ZAM_accessing_the_web_parts.ipynb | ###Markdown
0. Imports
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
1. Load csv
###Code
# change to your file location
df = pd.read_csv('/content/drive/MyDrive/Škola/DM/spravanie_zamestnancov_v_zavislosti_od_casu_IV/MLM_ZAM_stats.csv', ';', usecols=range(0,10))
df.head(10)
###Output
_____no_output_____
###Markdown
2. Create collection of weekdays
###Code
days = ['PO', 'UT', 'STR', 'STVR', 'PIA']
###Output
_____no_output_____
###Markdown
3. Create estimates for web parts
###Code
df1 = pd.DataFrame()
df2 = pd.DataFrame()
df3 = pd.DataFrame()
index = 0
# Cycle through hours from 7 to 23
for x in range (7,23):
new_row_uvod = {}
new_row_studium = {}
new_row_oznamy = {}
i = 1
# Cycle through weekdays
for day in days:
# Create logits estimates
logit_uvod = df.at[index, 'Intercept'] + df.at[index, 'HODINA']*x+df.at[index, 'HODINA_STV']*(x*x)+df.at[index, day]
logit_studium = df.at[index+1, 'Intercept'] + df.at[index+1, 'HODINA']*x+df.at[index+1, 'HODINA_STV']*(x*x)+df.at[index+1, day]
logit_oznamy = df.at[index+2, 'Intercept'] + df.at[index+2, 'HODINA']*x+df.at[index+2, 'HODINA_STV']*(x*x)+df.at[index+2, day]
reference_web = 1 / (1 + np.exp(logit_uvod) + np.exp(logit_studium) + np.exp(logit_oznamy))
# Create estimates for web parts
estimate_uvod = np.exp(logit_uvod) * reference_web
estimate_studium = np.exp(logit_studium) * reference_web
estimate_oznamy = np.exp(logit_oznamy) * reference_web
den = str(i) + '_' + day
# Create new rows and append it to dataframe
new_row_uvod.update({den: estimate_uvod})
new_row_studium.update({den: estimate_studium})
new_row_oznamy.update({den: estimate_oznamy})
i = i + 1
# Append time to rows
new_row_uvod.update({'0_hod': x})
new_row_studium.update({'0_hod': x})
new_row_oznamy.update({'0_hod': x})
# Update dataframes
df1 = df1.append(new_row_uvod, sort=False, ignore_index=True)
df2 = df2.append(new_row_studium, sort=False, ignore_index=True)
df3 = df3.append(new_row_oznamy, sort=False, ignore_index=True)
df1.head()
###Output
_____no_output_____
###Markdown
4. Export to excel
###Code
# Creating Excel Writer Object from Pandas
writer = pd.ExcelWriter('ZAM_accessing_the_web_parts.xlsx',engine='xlsxwriter')
workbook=writer.book
worksheet=workbook.add_worksheet('ZAM')
writer.sheets['ZAM'] = worksheet
# Úvod
worksheet.write(0, 0, "Úvod")
df1.to_excel(writer, sheet_name='ZAM',startrow=1 , startcol=0, index=False)
# Śtúdium
worksheet.write(0, 7, "Štúdium")
df2.to_excel(writer, sheet_name='ZAM',startrow=1 , startcol=7, index=False)
# Oznamy
worksheet.write(0, 14, "Oznamy")
df3.to_excel(writer, sheet_name='ZAM',startrow=1 , startcol=14, index=False)
writer.save()
###Output
_____no_output_____ |
ImageDB/Test_imageDB.ipynb | ###Markdown
MetadataWe can download metadata for an entire multidimensional acquisition. These come directly from Micro-Manager and the fields are explained [here](https://micro-manager.org/wiki/Files_and_Metadata) and [here](https://micro-manager.org/wiki/Micro-Manager_File_Formats). We could extract physical coordinates from these metadata.
###Code
print(db.getAcqMeta(dataset_identifier))
###Output
/opt/local/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Downloading imagesWe can use the getStack method to download images as numpy arrays. They are returned as (rnd, channel, z, x, y) NDarrays. In multi FOV, multi-round experiments, the pos_idx corresponds to the field of view index and time_idx is equivalent to round. Since our current experiments are not barcoded, we currently just download one channel at a time and process the channels independently.
###Code
im = db.getStack(dataset_identifier, channel='FITC', pos_idx=1, time_idx=0, verbose=True)
print(im.shape)
print(im.dtype)
###Output
(1, 1, 13, 2048, 2048)
uint16
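Since the axes are ordered (round, channel, z, x, y), per-plane work downstream is plain numpy slicing; a small usage sketch (hypothetical, not part of the original notebook):
```python
single_plane = im[0, 0, 6]     # one z-plane from round 0, channel 0
mip = im[0, 0].max(axis=0)     # maximum-intensity projection over z -> shape (2048, 2048)
print(single_plane.shape, mip.shape)
```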
###Markdown
ImageStackWe then instantiate an ImageStack with the downloaded numpy array.
###Code
from starfish.imagestack.imagestack import ImageStack
from starfish.types import Features, Indices
from skimage import img_as_float32
IS = ImageStack.from_numpy_array(im)
IS.show_stack({Indices.CH: 0})
###Output
_____no_output_____ |
samples/notebooks/fsharp/Docs/Importing packages.ipynb | ###Markdown
[this doc on github](https://github.com/dotnet/interactive/tree/master/samples/notebooks/fsharp/Docs) Importing packages, libraries, and scripts You can load packages into a .NET notebook from NuGet using the following syntax:```fsharp #r "nuget:<package name>[,<package version>]"```If you don't provide an explicit package version, the latest available non-preview version will be loaded. Here's an example:
###Code
#r "nuget:FSharp.Data"
###Output
_____no_output_____
###Markdown
Now that the package is loaded, we can add some `open` statements and write some code.
###Code
open FSharp.Data
[<Literal>]
let url = "https://en.wikipedia.org/wiki/2017_Formula_One_World_Championship"
type F1_2017 = HtmlProvider<url>
let f1Calendar = F1_2017.Load(url).Tables.``Season calendar``
f1Calendar.Rows
|> Seq.map (fun x -> x.Circuit, x.Date)
###Output
_____no_output_____
###Markdown
If you want to load an assembly that's already on disk, you can do so using this syntax:```fsharp #r "<path to assembly>"``` You can load an F# script (typically a `.fsx` file) into the notebook using this syntax:```fsharp #load "<path to script>"```
###Code
// Example:
#load "some-fsharp-script-file.fsx"
###Output
_____no_output_____ |
Articles/Getting_Started_with_BigMart_Sales(AV_Datahacks)/exploration_and_feature_engineering.ipynb | ###Markdown
Data Exploration & Feature Engineering 1. Data Exploration
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Loading data:The files can be downloaded from: http://datahack.analyticsvidhya.com/contest/practice-problem-bigmart-sales-prediction
###Code
#Read files:
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")
#Combine test and train into one file
train['source']='train'
test['source']='test'
data = pd.concat([train, test],ignore_index=True)
print train.shape, test.shape, data.shape
#Check missing values:
data.apply(lambda x: sum(x.isnull()))
#Numerical data summary:
data.describe()
#Number of unique values in each:
data.apply(lambda x: len(x.unique()))
#Filter categorical variables
categorical_columns = [x for x in data.dtypes.index if data.dtypes[x]=='object']
#Exclude ID cols and source:
categorical_columns = [x for x in categorical_columns if x not in ['Item_Identifier','Outlet_Identifier','source']]
#Print frequency of categories
for col in categorical_columns:
print '\nFrequency of Categories for varible %s'%col
print data[col].value_counts()
###Output
Frequency of Categories for varible Item_Fat_Content
Low Fat 8485
Regular 4824
LF 522
reg 195
low fat 178
Name: Item_Fat_Content, dtype: int64
Frequency of Categories for varible Item_Type
Fruits and Vegetables 2013
Snack Foods 1989
Household 1548
Frozen Foods 1426
Dairy 1136
Baking Goods 1086
Canned 1084
Health and Hygiene 858
Meat 736
Soft Drinks 726
Breads 416
Hard Drinks 362
Others 280
Starchy Foods 269
Breakfast 186
Seafood 89
Name: Item_Type, dtype: int64
Frequency of Categories for varible Outlet_Location_Type
Tier 3 5583
Tier 2 4641
Tier 1 3980
Name: Outlet_Location_Type, dtype: int64
Frequency of Categories for varible Outlet_Size
Medium 4655
Small 3980
High 1553
Name: Outlet_Size, dtype: int64
Frequency of Categories for varible Outlet_Type
Supermarket Type1 9294
Grocery Store 1805
Supermarket Type3 1559
Supermarket Type2 1546
Name: Outlet_Type, dtype: int64
###Markdown
2. Data Cleaning Imputation
###Code
#Determine the average weight per item:
item_avg_weight = data.pivot_table(values='Item_Weight', index='Item_Identifier')
#Get a boolean variable specifying missing Item_Weight values
miss_bool = data['Item_Weight'].isnull()
#Impute data and check #missing values before and after imputation to confirm
print 'Orignal #missing: %d'% sum(miss_bool)
data.loc[miss_bool,'Item_Weight'] = data.loc[miss_bool,'Item_Identifier'].apply(lambda x: item_avg_weight[x])
print 'Final #missing: %d'% sum(data['Item_Weight'].isnull())
#Import mode function:
from scipy.stats import mode
#Determing the mode for each
outlet_size_mode = data.pivot_table(values='Outlet_Size', columns='Outlet_Type',aggfunc=(lambda x:mode(x).mode[0]) )
print 'Mode for each Outlet_Type:'
print outlet_size_mode
#Get a boolean variable specifying missing Item_Weight values
miss_bool = data['Outlet_Size'].isnull()
#Impute data and check #missing values before and after imputation to confirm
print '\nOrignal #missing: %d'% sum(miss_bool)
data.loc[miss_bool,'Outlet_Size'] = data.loc[miss_bool,'Outlet_Type'].apply(lambda x: outlet_size_mode[x])
print sum(data['Outlet_Size'].isnull())
###Output
Mode for each Outlet_Type:
Outlet_Type
Grocery Store Small
Supermarket Type1 Small
Supermarket Type2 Medium
Supermarket Type3 Medium
Name: Outlet_Size, dtype: object
Orignal #missing: 4016
0
###Markdown
2. Feature Engineering: Step1: Consider combining categories in Outlet_Type
###Code
#Check the mean sales by type:
data.pivot_table(values='Item_Outlet_Sales',index='Outlet_Type')
###Output
_____no_output_____
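The pivot above shows the mean sales per outlet type, which is how one would judge whether any two types are similar enough to merge. A hypothetical sketch of such a merge, done on a copy so the working data is untouched (only worth applying if the means had turned out to be close):
```python
# Hypothetical: merge two outlet types into a single label on a throwaway copy
merged = data.copy()
merged['Outlet_Type'] = merged['Outlet_Type'].replace({'Supermarket Type2': 'Supermarket Type2/3',
                                                       'Supermarket Type3': 'Supermarket Type2/3'})
merged.pivot_table(values='Item_Outlet_Sales', index='Outlet_Type')
```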
###Markdown
Step2: Modify Item_Visibility
###Code
#Determine average visibility of a product
visibility_avg = data.pivot_table(values='Item_Visibility', index='Item_Identifier')
#Impute 0 values with mean visibility of that product:
miss_bool = (data['Item_Visibility'] == 0)
print 'Number of 0 values initially: %d'%sum(miss_bool)
data.loc[miss_bool,'Item_Visibility'] = data.loc[miss_bool,'Item_Identifier'].apply(lambda x: visibility_avg[x])
print 'Number of 0 values after modification: %d'%sum(data['Item_Visibility'] == 0)
#Determine another variable with means ratio
data['Item_Visibility_MeanRatio'] = data.apply(lambda x: x['Item_Visibility']/visibility_avg[x['Item_Identifier']], axis=1)
print data['Item_Visibility_MeanRatio'].describe()
###Output
count 14204.000000
mean 1.061884
std 0.235907
min 0.844563
25% 0.925131
50% 0.999070
75% 1.042007
max 3.010094
Name: Item_Visibility_MeanRatio, dtype: float64
###Markdown
Step 3: Create a broad category of Type of Item
###Code
#Item type combine:
data['Item_Identifier'].value_counts()
data['Item_Type_Combined'] = data['Item_Identifier'].apply(lambda x: x[0:2])
data['Item_Type_Combined'] = data['Item_Type_Combined'].map({'FD':'Food',
'NC':'Non-Consumable',
'DR':'Drinks'})
data['Item_Type_Combined'].value_counts()
###Output
_____no_output_____
###Markdown
Step 4: Determine the years of operation of a store
###Code
#Years:
data['Outlet_Years'] = 2013 - data['Outlet_Establishment_Year']
data['Outlet_Years'].describe()
###Output
_____no_output_____
###Markdown
Step 5: Modify categories of Item_Fat_Content
###Code
#Change categories of low fat:
print 'Original Categories:'
print data['Item_Fat_Content'].value_counts()
print '\nModified Categories:'
data['Item_Fat_Content'] = data['Item_Fat_Content'].replace({'LF':'Low Fat',
'reg':'Regular',
'low fat':'Low Fat'})
print data['Item_Fat_Content'].value_counts()
#Mark non-consumables as separate category in low_fat:
data.loc[data['Item_Type_Combined']=="Non-Consumable",'Item_Fat_Content'] = "Non-Edible"
data['Item_Fat_Content'].value_counts()
###Output
_____no_output_____
###Markdown
Step 6: Numerical and One-Hot Coding of Categorical variables
###Code
#Import library:
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
#New variable for outlet
data['Outlet'] = le.fit_transform(data['Outlet_Identifier'])
var_mod = ['Item_Fat_Content','Outlet_Location_Type','Outlet_Size','Item_Type_Combined','Outlet_Type','Outlet']
le = LabelEncoder()
for i in var_mod:
data[i] = le.fit_transform(data[i])
#One Hot Coding:
data = pd.get_dummies(data, columns=['Item_Fat_Content','Outlet_Location_Type','Outlet_Size','Outlet_Type',
'Item_Type_Combined','Outlet'])
data.dtypes
data[['Item_Fat_Content_0','Item_Fat_Content_1','Item_Fat_Content_2']].head(10)
###Output
_____no_output_____
###Markdown
Step7: Exporting Data
###Code
#Drop the columns which have been converted to different types:
data.drop(['Item_Type','Outlet_Establishment_Year'],axis=1,inplace=True)
#Divide into test and train:
train = data.loc[data['source']=="train"]
test = data.loc[data['source']=="test"]
#Drop unnecessary columns:
test.drop(['Item_Outlet_Sales','source'],axis=1,inplace=True)
train.drop(['source'],axis=1,inplace=True)
#Export files as modified versions:
train.to_csv("train_modified.csv",index=False)
test.to_csv("test_modified.csv",index=False)
###Output
/Users/aarshay/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:9: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
/Users/aarshay/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:10: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
|
preliminary/hyperparam_experiment.ipynb | ###Markdown
Hyperparameters experimentWe train and eval the model varying the following:* cohorts: unbalanced vs balanced* optimizers: Adam vs SGD* learning rates* epochs* number of samples
###Code
cohorts = ['unbalanced', 'balanced']
optimizers = [torch.optim.Adam, torch.optim.SGD]
learning_rates = [0.0001, 0.001, 0.01]
epochs = [5, 10, 15]
samples = [1000, 5000, 0]
results = np.empty(shape=(len(cohorts), len(optimizers), len(learning_rates), len(epochs), len(samples)), dtype='object')
criterion = nn.BCELoss()
for c, cohort in enumerate(cohorts):
for o, optim in enumerate(optimizers):
for l, learning_rate in enumerate(learning_rates):
for e, n_epochs in enumerate(epochs):
for s, n_samples in enumerate(samples):
model, optimizer = create_model_and_optimizer()
optimizer = optim(model.parameters(), lr=learning_rate)
print ("Training for:\n")
print ("Cohort \t= ", cohort)
print ("Optimizer \t= ", optimizer)
print ("Learning rate \t= ", learning_rate)
print ("No. of epochs \t= ", n_epochs)
print ("No. of samples\t= ", n_samples)
print ('---------------')
model_filename = cohort + '-' + str(o) + '-' + str(learning_rate) + '-' + str(n_epochs) + '-' + str(n_samples) + '.pt'
train_loader, val_loader = get_unbalanced_dataloaders(n_samples) if cohort=='unbalanced' else get_balanced_dataloaders(n_samples)
p, r, f, roc_auc = train_and_eval(model, train_loader, val_loader, n_epochs, model_filename)
results[c,o,l,e,s] = [p, r, f, roc_auc]
print ('---------------\n\n')
###Output
_____no_output_____
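Each entry of `results` holds `[precision, recall, f1, roc_auc]` for one hyperparameter combination; a small usage sketch for pulling out a single cell of the grid:
```python
# Metrics for: balanced cohort, SGD (index 1 in `optimizers`), lr=0.001, 10 epochs, 5000 samples
p, r, f, roc_auc = results[cohorts.index('balanced'),
                           1,
                           learning_rates.index(0.001),
                           epochs.index(10),
                           samples.index(5000)]
print(p, r, f, roc_auc)
```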
###Markdown
Saving of the results
###Code
now = datetime.datetime.now()
r = {}
r['cohorts'] = cohorts
r['optimizers'] = optimizers
r['learning_rates'] = learning_rates
r['epochs'] = epochs
r['samples'] = samples
r['results'] = results
pickle.dump( r, open("hyperparam-exp-" + now.strftime("%Y%m%d-%H%M") +".p", "wb" ) )
r
###Output
_____no_output_____ |
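To reuse the saved metrics later, the pickle file can be loaded back in; a small sketch, assuming the same filename pattern used above:
```python
import glob
import pickle

latest = sorted(glob.glob("hyperparam-exp-*.p"))[-1]   # most recent results file
with open(latest, "rb") as fh:
    saved = pickle.load(fh)
print(saved['results'].shape, saved['learning_rates'])
```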
Semana 3 - Random Forest Regressor/.ipynb_checkpoints/Untitled-checkpoint.ipynb | ###Markdown
Case 1Predict the price of a bulldozer using Random Forest
###Code
# Import libraries
# first the most basic and widely used ones: pandas and numpy!
import pandas as pd
import numpy as np
# Matplotlib will help us draw plots.
# Seaborn helps make the visualizations look nicer.
import matplotlib.pyplot as plt
import seaborn as sns
# import warnings so we can ignore warning messages.
# there are often warnings that some library will be deprecated or renamed.
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Load and explore our datasetThis dataset was obtained from Kaggle.
###Code
!wget https://raw.githubusercontent.com/Giffy/Personal_dataset_repository/master/train.tar.gz
!tar xvf train.tar.gz
# we previously downloaded the dataset from https://raw.githubusercontent.com/Giffy/Personal_dataset_repository/master/train.tar.gz
# and extracted it into the "data" folder
# we will use pandas' read_csv function.
dataset = pd.read_csv('data/bulldozers/Train.csv')
# let's take a look at the data inside the dataset
dataset.head()
# check the overall information of the dataset
dataset.info()
# Let's check the size of the dataset
dataset.shape
###Output
_____no_output_____
###Markdown
The dataset has 401,125 rows and 53 columns
###Code
# revisar si hay duplicados
dataset.duplicated().sum()
# revisar si hay features faltantes
dataset.isnull().sum()
# Análisis de correlación
corrm = dataset.corr()
corrm['SalePrice'].sort_values(ascending = False)
# Explorar los resultados objetivo, es decir el precio de un bulldozer.
dataset['SalePrice'].value_counts()
sns.countplot(x = 'SalePrice', data = dataset)
# Informacion estadistica de los features númericos
dataset.describe()
# vamos a ver cuales de las columnas son numericas
num_cols = dataset._get_numeric_data().columns
num_cols
###Output
_____no_output_____
###Markdown
Only 8 of the 53 columns are numeric. We have to convert the remaining categorical columns to numeric values.
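A minimal sketch of the idea (the notebook later relies on fastai-style helpers for the real conversion; the column name here is purely illustrative): pandas can turn a string column into integer codes through the `category` dtype.

```python
# Illustrative only: convert a string column to integer codes. 0 is kept for missing values,
# matching the `codes + 1` convention used by the helper functions further down.
example = pd.DataFrame({'UsageBand': ['Low', 'High', None, 'Medium']})
example['UsageBand'] = example['UsageBand'].astype('category')
example['UsageBand_code'] = example['UsageBand'].cat.codes + 1   # NaN -> code -1 -> 0
example
```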
###Code
# vamos a visualizar el año del bulldozer
year_made = dataset['YearMade'].value_counts().plot.bar(title="YearMade", figsize=(14,8))
_ = year_made.set_xlabel('Año del Modelo')
_ = year_made.set_ylabel('Total')
###Output
_____no_output_____
###Markdown
Note: as we can see, a large number of tractors have a model year equal to 1000, meaning the year was never captured. This can skew our prediction if we give significant weight to the year of manufacture. It would be interesting to see how the model performs after dropping every row with YearMade = 1000, as sketched below.
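A sketch of that suggested experiment (not executed in this notebook): drop the rows whose `YearMade` was never captured and check how the row count changes before retraining.

```python
# Keep only rows with a plausible manufacturing year (1000 means "not captured").
dataset_known_year = dataset[dataset['YearMade'] != 1000]
print(dataset.shape, '->', dataset_known_year.shape)
```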
###Code
# vamos a visualizar de donde sacaron la informacion
data_source = dataset['datasource'].value_counts().plot.bar(title="Fuente", figsize=(14,8))
_ = data_source.set_xlabel('Fuente')
_ = data_source.set_ylabel('Total')
###Output
_____no_output_____
###Markdown
We can see that the data came from 5 sources in total. Now let's split the date into columns carrying independent pieces of information.
###Code
# vamos a utilizar directamente pandas con su funcion to_datetime para obtener la información de la columna.
dataset['saledate'] = pd.to_datetime(dataset['saledate'])
dataset['saledate']
dataset['Year'] = dataset['saledate'].dt.year
dataset['Month'] = dataset['saledate'].dt.month
dataset['Day'] = dataset['saledate'].dt.day
dataset['Hour'] = dataset['saledate'].dt.hour
dataset['Weekday'] = dataset['saledate'].dt.dayofweek
dataset = dataset.drop(['saledate'], axis=1)
dataset.head(10)
###Output
_____no_output_____
###Markdown
As shown above, we have dropped the "saledate" column from the dataset and added the columns: Year, Month, Day, Hour, Weekday
###Code
num_cols
# Mapa de correlacion con las nuevas variables
fig = plt.figure(figsize = (12,10))
sns.heatmap(dataset.corr(), cmap='Greens', annot = True)
# vamos a usar la función de fastai para convertir las variables categóricas en numéricas
from pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype
def train_cats(df):
"""Change any columns of strings in a panda's dataframe to a column of
categorical values. This applies the changes inplace.
Parameters:
-----------
df: A pandas dataframe. Any columns of strings will be changed to
categorical values.
Examples:
---------
>>> df = pd.DataFrame({'col1' : [1, 2, 3], 'col2' : ['a', 'b', 'a']})
>>> df
col1 col2
0 1 a
1 2 b
2 3 a
note the type of col2 is string
>>> train_cats(df)
>>> df
col1 col2
0 1 a
1 2 b
2 3 a
now the type of col2 is category
"""
for n,c in df.items():
if is_string_dtype(c): df[n] = c.astype('category').cat.as_ordered()
def proc_df(df, y_fld=None, skip_flds=None, ignore_flds=None, do_scale=False, na_dict=None,
preproc_fn=None, max_n_cat=None, subset=None, mapper=None):
""" proc_df takes a data frame df and splits off the response variable, and
changes the df into an entirely numeric dataframe. For each column of df
which is not in skip_flds nor in ignore_flds, na values are replaced by the
median value of the column.
Parameters:
-----------
df: The data frame you wish to process.
y_fld: The name of the response variable
skip_flds: A list of fields that dropped from df.
ignore_flds: A list of fields that are ignored during processing.
do_scale: Standardizes each column in df. Takes Boolean Values(True,False)
na_dict: a dictionary of na columns to add. Na columns are also added if there
are any missing values.
preproc_fn: A function that gets applied to df.
max_n_cat: The maximum number of categories to break into dummy values, instead
of integer codes.
subset: Takes a random subset of size subset from df.
mapper: If do_scale is set as True, the mapper variable
calculates the values used for scaling of variables during training time (mean and standard deviation).
Returns:
--------
[x, y, nas, mapper(optional)]:
x: x is the transformed version of df. x will not have the response variable
and is entirely numeric.
y: y is the response variable
nas: returns a dictionary of which nas it created, and the associated median.
mapper: A DataFrameMapper which stores the mean and standard deviation of the corresponding continuous
variables which is then used for scaling of during test-time.
Examples:
---------
>>> df = pd.DataFrame({'col1' : [1, 2, 3], 'col2' : ['a', 'b', 'a']})
>>> df
col1 col2
0 1 a
1 2 b
2 3 a
note the type of col2 is string
>>> train_cats(df)
>>> df
col1 col2
0 1 a
1 2 b
2 3 a
now the type of col2 is category { a : 1, b : 2}
>>> x, y, nas = proc_df(df, 'col1')
>>> x
col2
0 1
1 2
2 1
>>> data = DataFrame(pet=["cat", "dog", "dog", "fish", "cat", "dog", "cat", "fish"],
children=[4., 6, 3, 3, 2, 3, 5, 4],
salary=[90, 24, 44, 27, 32, 59, 36, 27])
>>> mapper = DataFrameMapper([(:pet, LabelBinarizer()),
([:children], StandardScaler())])
>>>round(fit_transform!(mapper, copy(data)), 2)
8x4 Array{Float64,2}:
1.0 0.0 0.0 0.21
0.0 1.0 0.0 1.88
0.0 1.0 0.0 -0.63
0.0 0.0 1.0 -0.63
1.0 0.0 0.0 -1.46
0.0 1.0 0.0 -0.63
1.0 0.0 0.0 1.04
0.0 0.0 1.0 0.21
"""
if not ignore_flds: ignore_flds=[]
if not skip_flds: skip_flds=[]
if subset: df = get_sample(df,subset)
else: df = df.copy()
ignored_flds = df.loc[:, ignore_flds]
df.drop(ignore_flds, axis=1, inplace=True)
if preproc_fn: preproc_fn(df)
if y_fld is None: y = None
else:
if not is_numeric_dtype(df[y_fld]): df[y_fld] = pd.Categorical(df[y_fld]).codes
y = df[y_fld].values
skip_flds += [y_fld]
df.drop(skip_flds, axis=1, inplace=True)
if na_dict is None: na_dict = {}
else: na_dict = na_dict.copy()
na_dict_initial = na_dict.copy()
for n,c in df.items(): na_dict = fix_missing(df, c, n, na_dict)
if len(na_dict_initial.keys()) > 0:
df.drop([a + '_na' for a in list(set(na_dict.keys()) - set(na_dict_initial.keys()))], axis=1, inplace=True)
if do_scale: mapper = scale_vars(df, mapper)
for n,c in df.items(): numericalize(df, c, n, max_n_cat)
df = pd.get_dummies(df, dummy_na=True)
df = pd.concat([ignored_flds, df], axis=1)
res = [df, y, na_dict]
if do_scale: res = res + [mapper]
return res
def fix_missing(df, col, name, na_dict):
""" Fill missing data in a column of df with the median, and add a {name}_na column
which specifies if the data was missing.
Parameters:
-----------
df: The data frame that will be changed.
col: The column of data to fix by filling in missing data.
name: The name of the new filled column in df.
na_dict: A dictionary of values to create na's of and the value to insert. If
name is not a key of na_dict the median will fill any missing data. Also
if name is not a key of na_dict and there is no missing data in col, then
no {name}_na column is not created.
Examples:
---------
>>> df = pd.DataFrame({'col1' : [1, np.NaN, 3], 'col2' : [5, 2, 2]})
>>> df
col1 col2
0 1 5
1 nan 2
2 3 2
>>> fix_missing(df, df['col1'], 'col1', {})
>>> df
col1 col2 col1_na
0 1 5 False
1 2 2 True
2 3 2 False
>>> df = pd.DataFrame({'col1' : [1, np.NaN, 3], 'col2' : [5, 2, 2]})
>>> df
col1 col2
0 1 5
1 nan 2
2 3 2
>>> fix_missing(df, df['col2'], 'col2', {})
>>> df
col1 col2
0 1 5
1 nan 2
2 3 2
>>> df = pd.DataFrame({'col1' : [1, np.NaN, 3], 'col2' : [5, 2, 2]})
>>> df
col1 col2
0 1 5
1 nan 2
2 3 2
>>> fix_missing(df, df['col1'], 'col1', {'col1' : 500})
>>> df
col1 col2 col1_na
0 1 5 False
1 500 2 True
2 3 2 False
"""
if is_numeric_dtype(col):
if pd.isnull(col).sum() or (name in na_dict):
df[name+'_na'] = pd.isnull(col)
filler = na_dict[name] if name in na_dict else col.median()
df[name] = col.fillna(filler)
na_dict[name] = filler
return na_dict
def numericalize(df, col, name, max_n_cat):
""" Changes the column col from a categorical type to it's integer codes.
Parameters:
-----------
df: A pandas dataframe. df[name] will be filled with the integer codes from
col.
col: The column you wish to change into the categories.
name: The column name you wish to insert into df. This column will hold the
integer codes.
max_n_cat: If col has more categories than max_n_cat it will not change the
it to its integer codes. If max_n_cat is None, then col will always be
converted.
Examples:
---------
>>> df = pd.DataFrame({'col1' : [1, 2, 3], 'col2' : ['a', 'b', 'a']})
>>> df
col1 col2
0 1 a
1 2 b
2 3 a
note the type of col2 is string
>>> train_cats(df)
>>> df
col1 col2
0 1 a
1 2 b
2 3 a
now the type of col2 is category { a : 1, b : 2}
>>> numericalize(df, df['col2'], 'col3', None)
col1 col2 col3
0 1 a 1
1 2 b 2
2 3 a 1
"""
if not is_numeric_dtype(col) and ( max_n_cat is None or len(col.cat.categories)>max_n_cat):
df[name] = pd.Categorical(col).codes+1
# primero vamos a copiar el dataset
ds = dataset.copy()
# sanity check
ds.shape
###Output
_____no_output_____
###Markdown
As we can see, the dataset now has 57 columns; this is because we transformed the date.
###Code
# aplicamos la funcion anterior al nuevo dataset
train_cats(ds)
# ahora veamos el dataset!
ds.head(5)
###Output
_____no_output_____
###Markdown
Now we build the Random Forest model. First we will replace the categories with numeric codes, handle the NaNs, and split off the target variable.
###Code
X, y, nas = proc_df(ds, 'SalePrice')
# sanity check
X.shape
X.head(5)
y.shape
y
# particionamos el dataset en dos, uno para entrenamiento y el otro para comprobar nuestro modelo.
# tenemos que importar una nueva libreria para hacer el split.
# dejaremos 80% para el training y 20% para el test.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Escalar los datos
from sklearn.preprocessing import StandardScaler
standardScaler = StandardScaler()
X_train = standardScaler.fit_transform(X_train)
X_test = standardScaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Now for the Random Forest model itself
###Code
# Random Forest en dataset de entrenamiento
from sklearn.ensemble import RandomForestRegressor
randm_frst = RandomForestRegressor()
randm_frst.fit(X_train, y_train)
#Predicción
y_frst = randm_frst.predict(X_test)
randm_frst.score(X_train, y_train)
# vamos a partir el dataset para tener una dimension como la que pide kaggle
def split_vals(a,n): return a[:n].copy(), a[n:].copy()
n_valid = 12000 # same as Kaggle's test set size
n_trn = len(dataset)-n_valid
raw_train, raw_valid = split_vals(ds, n_trn)
X_train, X_valid = split_vals(X, n_trn)
y_train, y_valid = split_vals(y, n_trn)
X_train.shape, y_train.shape, X_valid.shape
# vamos a intentar nuestro modelo nuevamente
# esta es la funcion que pide kaggle como medida de evaluacion
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
# esta funcion nos permite imprimir el score en cada dataset
def print_score(m):
res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
m.score(X_train, y_train), m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
print(res)
# importamos math para la raíz cuadrada que usa rmse (el tiempo se mide con la magia %time)
import math
# de nuevo declaramos que random forest usara 10 estimators.
rf = RandomForestRegressor(n_estimators=10, n_jobs=-1)
# computaremos el tiempo que lleva al modelo entrenarse
%time rf.fit(X_train, y_train)
# y finalmente imprimimos el score
print_score(rf)
rf = RandomForestRegressor(random_state = 42)
from pprint import pprint
# Que parametros estamos usando?
print('Parametros en uso:\n')
pprint(rf.get_params())
from sklearn.model_selection import RandomizedSearchCV
# Numero de arbols random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Numero de features a considerar en cada split
max_features = ['auto', 'sqrt']
# Numero maximo de niveles del arbol
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Numero minimo de muestras requeridas para hacer un split
min_samples_split = [2, 5, 10]
# Numero minimo de muestras en cada nodo
min_samples_leaf = [1, 2, 4]
# Metodo de seleccion de muestras
bootstrap = [True, False]
# Crear grid random
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
pprint(random_grid)
# creamos un nuevo modelo donde usaremos nuestros hiperparametros
rf = RandomForestRegressor()
# Vamos a utilizar 3 folds de cross validation,
# buscaremos 100 posibles combinaciones
rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 100,
cv = 3, verbose=2, random_state=42, n_jobs = -1)
# Hacer fit al modelo
rf_random.fit(X_train, y_train)
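# (Added note, not in the original notebook) once the search has finished,
# RandomizedSearchCV exposes the best hyperparameter combination and its
# cross-validated score directly:
print(rf_random.best_params_)
print(rf_random.best_score_)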
# Para determinar si el random search fue mejor, tenemos que compararlo contra el modelo inicial
def evaluate(model, test_features, test_labels):
predictions = model.predict(test_features)
errors = abs(predictions - test_labels)
mape = 100 * np.mean(errors / test_labels)
accuracy = 100 - mape
print('Perfomance del Modelo')
print('Error promedio: {:0.4f} '.format(np.mean(errors)))
print('Accuracy = {:0.2f}%.'.format(accuracy))
return accuracy
base_model = RandomForestRegressor(n_estimators = 10, random_state = 42)
base_model.fit(X_train, y_train)
base_accuracy = evaluate(base_model, X_test, y_test)
best_random = rf_random.best_estimator_
random_accuracy = evaluate(best_random, X_test, y_test)
print('Improvement of {:0.2f}%.'.format( 100 * (random_accuracy - base_accuracy) / base_accuracy))
###Output
_____no_output_____ |
FaceRecognitionForTheHappyHouseDoorSystem.ipynb | ###Markdown
Face Recognition for the Happy House Door SystemWelcome to the Happy House! Here you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf). Face recognition problems commonly fall into two categories: - **Face Verification** - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem. - **Face Recognition** - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem. FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person. **In this, you will:**- Implement the triplet loss function- Use a pretrained model to map face images into 128-dimensional encodings- Use these encodings to perform face verification and face recognitionIn this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. Let's load the required packages.
###Code
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
###Output
Using TensorFlow backend.
###Markdown
0 - Naive Face VerificationIn Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is less than a chosen threshold, it may be the same person! **Figure 1** Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on. You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding give more accurate judgements as to whether two pictures are of the same person.
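A rough sketch of that naive pixel-by-pixel baseline (illustrative only; `img1` and `img2` are assumed to be same-shaped arrays of raw pixel values, and the threshold is arbitrary):

```python
# Naive verification: L2 distance between raw pixel arrays against a hand-picked threshold.
def naive_verify(img1, img2, threshold=50.0):
    dist = np.linalg.norm(img1.astype(float) - img2.astype(float))
    return dist, dist < threshold
```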
###Code
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
###Output
Total Params: 3743280
###Markdown
** Expected Output **Total Params: 3743280 By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings the compare two face images as follows: **Figure 2**: By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same personSo, an encoding is a good one if: - The encodings of two images of the same person are quite similar to each other - The encodings of two images of different persons are very differentThe triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart. **Figure 3**: In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) 1.2 - The Triplet LossFor an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.<!--We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).!-->Training will use triplets of images $(A, P, N)$: - A is an "Anchor" image--a picture of a person. - P is a "Positive" image--a picture of the same person as the Anchor image.- N is a "Negative" image--a picture of a different person than the Anchor image.These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$) by at least a margin $\alpha$:$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$You would thus like to minimize the following "triplet cost":$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$Here, we are using the notation "$[z]_+$" to denote $max(z,0)$. Notes:- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small. - The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet, you want this to be relatively large, so it thus makes sense to have a minus sign preceding it. - $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$. Most implementations also normalize the encoding vectors to have norm equal one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that here.**Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$3. 
Compute the full formula by taking the max with zero and summing over the training examples:$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.For steps 1 and 2, you will need to sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ while for step 4 you will need to sum over the training examples.
###Code
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor,positive)))
# Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor,negative)))
# Step 3: subtract the two previous distances and add alpha.
basic_loss = pos_dist - neg_dist + alpha
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.maximum(basic_loss, 0)
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
###Output
loss = 350.026
###Markdown
**Expected Output**: **loss** 528.143 2 - Loading the trained modelFaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
###Code
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
###Output
_____no_output_____
###Markdown
Here're some examples of distances between the encodings between three individuals: **Figure 4**: Example of distance outputs between three individuals' encodingsLet's now use this model to perform face verification and face recognition! 3 - Applying the model Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment. However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food. So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a **Face verification** system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be. 3.1 - Face VerificationLet's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use `img_to_encoding(image_path, model)` which basically runs the forward propagation of the model on the specified image. Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
###Code
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
###Output
_____no_output_____
###Markdown
Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.**Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called "identity". You will have to go through the following steps:1. Compute the encoding of the image from image_path2. Compute the distance between this encoding and the encoding of the identity image stored in the database3. Open the door if the distance is less than 0.7, else do not open.As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
###Code
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
"""
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
"""
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path, FRmodel)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(database[identity]-encoding)
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist < 0.7:
print("It's " + str(identity) + ", welcome home!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE ###
return dist, door_open
###Output
_____no_output_____
###Markdown
Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
###Code
verify("images/camera_0.jpg", "younes", database, FRmodel)
###Output
It's younes, welcome home!
###Markdown
**Expected Output**: **It's younes, welcome home!** (0.65939283, True) Benoit, who broke the aquarium last weekend, has been banned from the house and removed from the database. He stole Kian's ID card and came back to the house to try to present himself as Kian. The front-door camera took a picture of Benoit ("images/vinayak.jpg"). Let's run the verification algorithm to check if Benoit can enter.
###Code
verify("images/vinayak.jpg", "vinayak", database, FRmodel)
###Output
It's not kian, please go away
###Markdown
**Expected Output**: **It's not kian, please go away** (0.86224014, False) 3.2 - Face RecognitionYour face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in! To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them! You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input. **Exercise**: Implement `who_is_it()`. You will have to go through the following steps:1. Compute the target encoding of the image from image_path2. Find the encoding from the database that has smallest distance with the target encoding. - Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding. - Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`. - Compute L2 distance between the target "encoding" and the current "encoding" from the database. - If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
###Code
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""
Implements face recognition for the happy house by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path, FRmodel)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database. (≈ 1 line)
dist = np.linalg.norm(db_enc-encoding)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
###Output
_____no_output_____
###Markdown
Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
###Code
who_is_it("images/camera_0.jpg", database, FRmodel)
###Output
it's younes, the distance is 0.659393
###Markdown
**Expected Output**: **it's younes, the distance is 0.659393** (0.65939283, 'younes') You can change "`camera_0.jpg`" (picture of younes) to "`camera_1.jpg`" (picture of bertrand) and see the result. Your Happy House is running well. It only lets in authorized persons, and people don't need to carry an ID card around anymore! You've now seen how a state-of-the-art face recognition system works.Although we won't implement it here, here are some ways to further improve the algorithm:- Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then given a new image, compare the new face to multiple pictures of the person. This would increase accuracy (a sketch of this idea follows below).- Crop the images to just contain the face, and less of the "border" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust. **What you should remember**:- Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem. - The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.- The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person.
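A hedged sketch of the first suggestion above (not part of the graded assignment): store several encodings per person and compare a new image against the closest one. `multi_database` is an assumed structure mapping each name to a list of encodings, and `img_to_encoding` is the helper already used in this notebook.

```python
def verify_multi(image_path, identity, multi_database, model, threshold=0.7):
    """Verify against the closest of several stored encodings for `identity`."""
    encoding = img_to_encoding(image_path, model)
    dists = [np.linalg.norm(db_enc - encoding) for db_enc in multi_database[identity]]
    min_dist = min(dists)
    return min_dist, min_dist < threshold
```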
###Code
###Output
_____no_output_____ |
_src/Section 5/5.4 Operator Module.ipynb | ###Markdown
Official Documentation : https://docs.python.org/2/library/operator.html
###Code
import operator
print operator.mul(5,6)
print operator.add(5,6)
print operator.sub(5,6)
print operator.ge(5,6)
print operator.lt(5,6)
print operator.le(5,5)
print operator.div(5.0,6)
print operator.floordiv(5.0,6)
print operator.countOf([1, 2, 1, 2, 3, 1, 1], 1)
print operator.contains([1, 2, 1, 2, 3, 1, 1], 1)
print operator.indexOf([1, 2, 1, 2, 3, 1, 1], 3)
###Output
4
###Markdown
Passing to Higher Order Functions
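A closely related helper (added example, not from the original notebook) is `operator.attrgetter`, which plays the same role as `itemgetter` but pulls attributes instead of items, e.g. when sorting simple objects:

```python
from collections import namedtuple
from operator import attrgetter

Point = namedtuple('Point', ['x', 'y'])
pts = [Point(3, 1), Point(1, 5), Point(2, 2)]
sorted(pts, key=attrgetter('y'))   # [Point(x=3, y=1), Point(x=2, y=2), Point(x=1, y=5)]
```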
###Code
my_list = [(1, "Hello"), (200, "World"), (50, "Yolo"), (170, "XOXO")]
sorted(my_list, key=operator.itemgetter(1), reverse=True)
###Output
_____no_output_____
###Markdown
Performance speedups
###Code
import timeit
timeit.timeit('reduce(lambda x,y : x*y, range(1,100))')
timeit.timeit('reduce(mul, range(1,100))',setup='from operator import mul')
###Output
_____no_output_____ |
Data Science With Python/02 - Project - pandas DataFrame tricks.ipynb | ###Markdown
Project - Play with DataFrames Goal of Project- Master pandas DataFrame Step 1: Import pandas- Execute the cell below (SHIFT + ENTER)
###Code
import pandas as pd
###Output
_____no_output_____ |
CellFormation.ipynb | ###Markdown
**Task***Your fourth lab assignment consists of implementing one of two algorithms: simulated annealing, or a genetic algorithm with a built-in local-search heuristic. The algorithm is implemented for the Cell Formation Problem.* You may implement it in any language. You will also need to compare the running time of the algorithm and show the best solution you managed to find. **Data:** The data for the assignment is attached to the letter as an archive. **The structure is:** m p (number of machines and parts) Next m rows: m(row number) list of parts processed by machine m separated by space e.g: 1 9 17 19 31 33 means machine 1 processes parts 9 17 19 31 33 **Output format for answers:** Output file: instancename.sol (e.g. 20x20.sol) Output file format: m1_clusterId m2_clusterId ... - machines to clusters mapping p1_clusterId p2_clusterId ... - parts to clusters mapping
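A minimal sketch of reading an instance in the format described above into a machine-by-part incidence matrix (the function name and the assumption that `path` points to one of the benchmark files are mine, not part of the assignment):

```python
import numpy as np

def read_instance(path):
    """Parse an 'm p' header plus m rows of '<machine> <part> <part> ...' into a 0/1 matrix."""
    with open(path) as f:
        m, p = map(int, f.readline().split())
        matrix = np.zeros((m, p), dtype=int)
        for line in f:
            if not line.strip():
                continue
            numbers = list(map(int, line.split()))
            machine, parts = numbers[0], numbers[1:]
            for part in parts:
                matrix[machine - 1, part - 1] = 1
    return matrix
```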
###Code
txtfiles = [f for f in os.listdir(PATH_TO_BENCHMARKS) if os.path.isfile(os.path.join(PATH_TO_BENCHMARKS, f))]
txtfiles
from algorithm import SimulatedAnnealing
from datetime import datetime
tools = imp.reload(tools)
SimulatedAnnealing = imp.reload(SimulatedAnnealing)
params = {
'initial_temperature' : 10,
'final_temperature' : 0.002,
'chain_len' : 4,
'len_of_period' : 6,
'numb_of_cells' : 2,
'check' : 5,
'cooling_rate' : 0.7,
}
solutions = {}
for txt in txtfiles:
print(txt)
machine_part_matrix = tools.get_data(PATH_TO_BENCHMARKS + txt)
cells_p, cells_m = tools.get_solution(machine_part_matrix, 2)
SA = SimulatedAnnealing.SimulatedAnnealing(machine_part_matrix)
SA.set_params(params)
start = datetime.now()
SA.solve()
solutions[txt] = SA.S['best']
print('time:', datetime.now() - start)
import collections
for sol in solutions:
ret_p, ret_m = {}, {}
for ch in ['p', 'm']:
for ind in solutions[sol][ch]:
list_of_cluster = [ind] * len(solutions[sol][ch][ind])
list_of_val = solutions[sol][ch][ind]
if ch == 'p':
ret_p.update(dict(zip(list_of_val, list_of_cluster)))
else:
ret_m.update(dict(zip(list_of_val, list_of_cluster)))
ret_p = collections.OrderedDict(sorted(ret_p.items()))
ret_m = collections.OrderedDict(sorted(ret_m.items()))
with open('solutions/' + sol, 'w') as the_file:
line_m, line_p = "", ""
for m in ret_m:
line_m += str(m) + "_" + str(ret_m[m]) + " "
for p in ret_p:
line_p += str(p) + "_" + str(ret_p[p]) + " "
the_file.write(line_m + '\n')
the_file.write(line_p + '\n')
###Output
_____no_output_____ |
nb_dev_python/python_astropy_en.ipynb | ###Markdown
Astropy Official documentation: http://docs.astropy.org/en/stable/ Units and Quantities **Quantities** (``astropy.units.quantity``) are the combination of a **value** (integer, float, ...) and a **unit** (``astropy.units``). Documentation:- http://docs.astropy.org/en/stable/units/index.html- http://docs.astropy.org/en/stable/units/quantity.htmlList of available units: http://docs.astropy.org/en/stable/units/index.htmlmodule-astropy.units.si
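A quick illustration of the basic operations on a quantity (only documented `astropy.units` behaviour is used here):

```python
from astropy import units as u

q = 5.2 * u.kilometer
q.to(u.meter)        # <Quantity 5200. m>
q.value, q.unit      # (5.2, Unit("km"))
```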
###Code
from astropy import units as u
###Output
_____no_output_____
###Markdown
Attach units to scalar values
###Code
1.85 * u.meter
500. * u.gram
400. * u.hertz
q = 2000. * u.meter
type(q)
###Output
_____no_output_____
###Markdown
Attach units to lists
###Code
[1., 2., 3.] * u.meter
###Output
_____no_output_____
###Markdown
Attach units to Numpy array
###Code
import numpy as np
np.array([1., 2., 3.]) * u.meter
###Output
_____no_output_____ |
qc/cirq/mnist.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
MNIST classification View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial builds a quantum neural network (QNN) to classify a simplified version of MNIST, similar to the approach used in Farhi et al. The performance of the quantum neural network on this classical data problem is compared with a classical neural network. Setup
###Code
!pip install tensorflow==2.4.1
###Output
_____no_output_____
###Markdown
Install TensorFlow Quantum:
###Code
!pip install tensorflow-quantum
###Output
_____no_output_____
###Markdown
Now import TensorFlow and the module dependencies:
###Code
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
import seaborn as sns
import collections
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
###Markdown
1. Load the dataIn this tutorial you will build a binary classifier to distinguish between the digits 3 and 6, following Farhi et al. This section covers the data handling that:- Loads the raw data from Keras.- Filters the dataset to only 3s and 6s.- Downscales the images so they can fit in a quantum computer.- Removes any contradictory examples.- Converts the binary images to Cirq circuits.- Converts the Cirq circuits to TensorFlow Quantum circuits. 1.1 Load the raw data Load the MNIST dataset distributed with Keras.
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Rescale the images from [0,255] to the [0.0,1.0] range.
x_train, x_test = x_train[..., np.newaxis]/255.0, x_test[..., np.newaxis]/255.0
print("Number of original training examples:", len(x_train))
print("Number of original test examples:", len(x_test))
###Output
_____no_output_____
###Markdown
Filter the dataset to keep just the 3s and 6s, remove the other classes. At the same time convert the label, `y`, to boolean: `True` for `3` and `False` for 6.
###Code
def filter_36(x, y):
keep = (y == 3) | (y == 6)
x, y = x[keep], y[keep]
y = y == 3
return x,y
x_train, y_train = filter_36(x_train, y_train)
x_test, y_test = filter_36(x_test, y_test)
print("Number of filtered training examples:", len(x_train))
print("Number of filtered test examples:", len(x_test))
###Output
_____no_output_____
###Markdown
Show the first example:
###Code
print(type(y_train), y_train.shape)
print(y_train[12000])
plt.imshow(x_train[0, :, :, 0])
plt.colorbar()
###Output
_____no_output_____
###Markdown
1.2 Downscale the images An image size of 28x28 is much too large for current quantum computers. Resize the image down to 4x4:
###Code
x_train_small = tf.image.resize(x_train, (4,4)).numpy()
x_test_small = tf.image.resize(x_test, (4,4)).numpy()
print(type(x_train_small), y_train.shape, x_train_small.shape)
###Output
_____no_output_____
###Markdown
Again, display the first training example—after resize:
###Code
print(y_train[12000])
plt.imshow(x_train_small[0,:,:,0], vmin=0, vmax=1)
plt.colorbar()
###Output
_____no_output_____
###Markdown
1.3 Remove contradictory examples From section *3.3 Learning to Distinguish Digits* of Farhi et al., filter the dataset to remove images that are labeled as belonging to both classes.This is not a standard machine-learning procedure, but is included in the interest of following the paper.
###Code
def remove_contradicting(xs, ys):
mapping = collections.defaultdict(set)
orig_x = {}
# Determine the set of labels for each unique image:
for x,y in zip(xs,ys):
orig_x[tuple(x.flatten())] = x
mapping[tuple(x.flatten())].add(y)
new_x = []
new_y = []
for flatten_x in mapping:
x = orig_x[flatten_x]
labels = mapping[flatten_x]
if len(labels) == 1:
new_x.append(x)
new_y.append(next(iter(labels)))
else:
# Throw out images that match more than one label.
pass
num_uniq_3 = sum(1 for value in mapping.values() if len(value) == 1 and True in value)
num_uniq_6 = sum(1 for value in mapping.values() if len(value) == 1 and False in value)
num_uniq_both = sum(1 for value in mapping.values() if len(value) == 2)
print("Number of unique images:", len(mapping.values()))
print("Number of unique 3s: ", num_uniq_3)
print("Number of unique 6s: ", num_uniq_6)
print("Number of unique contradicting labels (both 3 and 6): ", num_uniq_both)
print()
print("Initial number of images: ", len(xs))
print("Remaining non-contradicting unique images: ", len(new_x))
return np.array(new_x), np.array(new_y)
###Output
_____no_output_____
###Markdown
The resulting counts do not closely match the reported values, but the exact procedure is not specified.It is also worth noting here that filtering contradictory examples at this point does not totally prevent the model from receiving contradictory training examples: the next step binarizes the data which will cause more collisions.
###Code
x_train_nocon, y_train_nocon = remove_contradicting(x_train_small, y_train)
x_train_nocon.shape
x_train_nocon[0]
x_train_nocon[0][0]
###Output
_____no_output_____
###Markdown
1.4 Encode the data as quantum circuitsTo process images using a quantum computer, Farhi et al. proposed representing each pixel with a qubit, with the state depending on the value of the pixel. The first step is to convert to a binary encoding.
###Code
THRESHOLD = 0.5
x_train_bin = np.array(x_train_nocon > THRESHOLD, dtype=np.float32)
x_test_bin = np.array(x_test_small > THRESHOLD, dtype=np.float32)
x_train_bin[0]
x_train_bin.shape
###Output
_____no_output_____
###Markdown
If you were to remove contradictory images at this point you would be left with only 193, likely not enough for effective training.
###Code
cons = remove_contradicting(x_train_bin, y_train_nocon)
len(cons)
###Output
_____no_output_____
###Markdown
The qubits at pixel indices with values that exceed a threshold, are rotated through an $X$ gate.
###Code
x_train_bin.shape
np.ndarray.flatten(x_train_bin[0])
np.ndarray.flatten(x_train_bin[0]).shape
def convert_to_circuit(image):
"""Encode truncated classical image into quantum datapoint."""
values = np.ndarray.flatten(image)
qubits = cirq.GridQubit.rect(4, 4)
circuit = cirq.Circuit()
for i, value in enumerate(values):
if value:
circuit.append(cirq.X(qubits[i]))
return circuit
x_train_circ = [convert_to_circuit(x) for x in x_train_bin]
x_test_circ = [convert_to_circuit(x) for x in x_test_bin]
x_train_bin[:5], x_train_circ[:5]
###Output
_____no_output_____
###Markdown
Here is the circuit created for the first example (circuit diagrams do not show qubits with zero gates):
###Code
SVGCircuit(x_train_circ[0])
###Output
_____no_output_____
###Markdown
Compare this circuit to the indices where the image value exceeds the threshold:
###Code
bin_img = x_train_bin[0,:,:,0]
indices = np.array(np.where(bin_img)).T
indices
###Output
_____no_output_____
###Markdown
Convert these `Cirq` circuits to tensors for `tfq`:
###Code
x_train_tfcirc = tfq.convert_to_tensor(x_train_circ)
x_test_tfcirc = tfq.convert_to_tensor(x_test_circ)
type(x_train_tfcirc), x_train_tfcirc.shape
type(x_train_tfcirc[0]), x_train_tfcirc[0].shape
###Output
_____no_output_____
###Markdown
2. Quantum neural networkThere is little guidance for a quantum circuit structure that classifies images. Since the classification is based on the expectation of the readout qubit, Farhi et al. propose using two qubit gates, with the readout qubit always acted upon. This is similar in some ways to running a small Unitary RNN across the pixels. 2.1 Build the model circuitThe following example shows this layered approach. Each layer uses *n* instances of the same gate, with each of the data qubits acting on the readout qubit.Start with a simple class that will add a layer of these gates to a circuit:
###Code
class CircuitLayerBuilder():
def __init__(self, data_qubits, readout):
self.data_qubits = data_qubits
self.readout = readout
def add_layer(self, circuit, gate, prefix):
for i, qubit in enumerate(self.data_qubits):
symbol = sympy.Symbol(prefix + '-' + str(i))
circuit.append(gate(qubit, self.readout)**symbol)
###Output
_____no_output_____
###Markdown
Build an example circuit layer to see how it looks:
###Code
demo_builder = CircuitLayerBuilder(data_qubits = cirq.GridQubit.rect(4,1),
readout=cirq.GridQubit(-1,-1))
circuit = cirq.Circuit()
demo_builder.add_layer(circuit, gate = cirq.XX, prefix='xx')
SVGCircuit(circuit)
demo_builder.data_qubits
demo_builder.readout
###Output
_____no_output_____
###Markdown
Now build a two-layered model, matching the data-circuit size, and include the preparation and readout operations.
###Code
def create_quantum_model():
"""Create a QNN model circuit and readout operation to go along with it."""
data_qubits = cirq.GridQubit.rect(4, 4) # a 4x4 grid.
readout = cirq.GridQubit(-1, -1) # a single qubit at [-1,-1]
circuit = cirq.Circuit()
# Prepare the readout qubit.
circuit.append(cirq.X(readout))
circuit.append(cirq.H(readout))
builder = CircuitLayerBuilder(
data_qubits = data_qubits,
readout=readout)
# Then add layers (experiment by adding more).
builder.add_layer(circuit, cirq.XX, "xx1")
builder.add_layer(circuit, cirq.ZZ, "zz1")
# Finally, prepare the readout qubit.
circuit.append(cirq.H(readout))
return circuit, cirq.Z(readout)
model_circuit, model_readout = create_quantum_model()
###Output
_____no_output_____
###Markdown
2.2 Wrap the model-circuit in a tfq-keras modelBuild the Keras model with the quantum components. This model is fed the "quantum data", from `x_train_circ`, that encodes the classical data. It uses a *Parametrized Quantum Circuit* layer, `tfq.layers.PQC`, to train the model circuit, on the quantum data.To classify these images, Farhi et al. proposed taking the expectation of a readout qubit in a parameterized circuit. The expectation returns a value between 1 and -1.
###Code
# Build the Keras model.
model = tf.keras.Sequential([
# The input is the data-circuit, encoded as a tf.string
tf.keras.layers.Input(shape=(), dtype=tf.string),
# The PQC layer returns the expected value of the readout gate, range [-1,1].
tfq.layers.PQC(model_circuit, model_readout),
])
###Output
_____no_output_____
###Markdown
Next, describe the training procedure to the model, using the `compile` method.Since the expected readout is in the range `[-1,1]`, optimizing the hinge loss is a somewhat natural fit. Note: Another valid approach would be to shift the output range to `[0,1]`, and treat it as the probability the model assigns to class `3`. This could be used with a standard `tf.losses.BinaryCrossentropy` loss.To use the hinge loss here you need to make two small adjustments. First convert the labels, `y_train_nocon`, from boolean to `[-1,1]`, as expected by the hinge loss.
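Before making those adjustments, here is a hedged sketch of the alternative mentioned in the note above (not used in this tutorial): rescale the `[-1, 1]` readout to `[0, 1]` with a `Lambda` layer and train against the boolean labels with binary cross-entropy.

```python
alt_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    tfq.layers.PQC(model_circuit, model_readout),
    tf.keras.layers.Lambda(lambda x: (x + 1.0) / 2.0),   # map [-1, 1] -> [0, 1]
])
alt_model.compile(
    loss=tf.keras.losses.BinaryCrossentropy(),
    optimizer=tf.keras.optimizers.Adam(),
    metrics=[tf.keras.metrics.BinaryAccuracy()])
# alt_model.fit(x_train_tfcirc, y_train_nocon.astype(np.float32), ...)
```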
###Code
y_train_hinge = 2.0*y_train_nocon-1.0
y_test_hinge = 2.0*y_test-1.0
###Output
_____no_output_____
###Markdown
Second, use a custom `hinge_accuracy` metric that correctly handles `[-1, 1]` as the `y_true` labels argument (`tf.losses.BinaryAccuracy(threshold=0.0)` expects `y_true` to be a boolean, and so can't be used with hinge loss).
###Code
def hinge_accuracy(y_true, y_pred):
y_true = tf.squeeze(y_true) > 0.0
y_pred = tf.squeeze(y_pred) > 0.0
result = tf.cast(y_true == y_pred, tf.float32)
return tf.reduce_mean(result)
model.compile(
loss=tf.keras.losses.Hinge(),
optimizer=tf.keras.optimizers.Adam(),
metrics=[hinge_accuracy])
print(model.summary())
###Output
_____no_output_____
###Markdown
Train the quantum modelNow train the model—this takes about 45 min. If you don't want to wait that long, use a small subset of the data (set `NUM_EXAMPLES=500`, below). This doesn't really affect the model's progress during training (it only has 32 parameters, and doesn't need much data to constrain these). Using fewer examples just ends training earlier (5min), but runs long enough to show that it is making progress in the validation logs.
###Code
EPOCHS = 3
BATCH_SIZE = 32
NUM_EXAMPLES = len(x_train_tfcirc)
x_train_tfcirc_sub = x_train_tfcirc[:NUM_EXAMPLES]
y_train_hinge_sub = y_train_hinge[:NUM_EXAMPLES]
###Output
_____no_output_____
###Markdown
Training this model to convergence should achieve >85% accuracy on the test set.
###Code
qnn_history = model.fit(
x_train_tfcirc_sub, y_train_hinge_sub,
batch_size=32,
epochs=EPOCHS,
verbose=1,
validation_data=(x_test_tfcirc, y_test_hinge))
qnn_results = model.evaluate(x_test_tfcirc, y_test)
###Output
_____no_output_____
###Markdown
Note: The training accuracy reports the average over the epoch. The validation accuracy is evaluated at the end of each epoch. 3. Classical neural networkWhile the quantum neural network works for this simplified MNIST problem, a basic classical neural network can easily outperform a QNN on this task. After a single epoch, a classical neural network can achieve >98% accuracy on the holdout set.In the following example, a classical neural network is used for the 3-6 classification problem using the entire 28x28 image instead of subsampling the image. This easily converges to nearly 100% accuracy on the test set.
###Code
def create_classical_model():
# A simple model based off LeNet from https://keras.io/examples/mnist_cnn/
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(32, [3, 3], activation='relu', input_shape=(28,28,1)))
model.add(tf.keras.layers.Conv2D(64, [3, 3], activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(1))
return model
model = create_classical_model()
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
model.summary()
model.fit(x_train,
y_train,
batch_size=128,
epochs=1,
verbose=1,
validation_data=(x_test, y_test))
cnn_results = model.evaluate(x_test, y_test)
###Output
_____no_output_____
###Markdown
The above model has nearly 1.2M parameters. For a more fair comparison, try a 37-parameter model, on the subsampled images:
###Code
def create_fair_classical_model():
# A simple model based off LeNet from https://keras.io/examples/mnist_cnn/
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(4,4,1)))
model.add(tf.keras.layers.Dense(2, activation='relu'))
model.add(tf.keras.layers.Dense(1))
return model
model = create_fair_classical_model()
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
model.summary()
model.fit(x_train_bin,
y_train_nocon,
batch_size=128,
epochs=20,
verbose=2,
validation_data=(x_test_bin, y_test))
fair_nn_results = model.evaluate(x_test_bin, y_test)
###Output
_____no_output_____
###Markdown
4. ComparisonHigher resolution input and a more powerful model make this problem easy for the CNN, while a classical model of similar power (~32 parameters) trains to a similar accuracy in a fraction of the time. One way or the other, the classical neural network easily outperforms the quantum neural network. For classical data, it is difficult to beat a classical neural network.
###Code
qnn_accuracy = qnn_results[1]
cnn_accuracy = cnn_results[1]
fair_nn_accuracy = fair_nn_results[1]
sns.barplot(["Quantum", "Classical, full", "Classical, fair"],
[qnn_accuracy, cnn_accuracy, fair_nn_accuracy])
###Output
_____no_output_____ |
examples/ACO_Graph.ipynb | ###Markdown
Get data from website
###Code
data_site = "https://people.sc.fsu.edu/~jburkardt/datasets/tsp/att48_xy.txt"
raw_data = urlopen(data_site).read()
soup = BeautifulSoup(raw_data, features="html.parser")
text_points = soup.get_text()
list_points = list(map(lambda x: list(map(lambda y: int(y), x.split())), text_points.split('\n')))[:-1]
###Output
_____no_output_____
###Markdown
Constants
###Code
qt_formigas = 30
rho = 0.25
alpha = 0.5
beta = 0.7
iteracoes = 500 # quantidade de iterações que irão acontecer até a parada da otimização
repeticoes = 10
###Output
_____no_output_____
###Markdown
Ant Colony
###Code
myACO = ACO_Graph(list_points, alpha, beta, rho)
myACO.search(qt_formigas, iteracoes, plot_at_every = 10, method='aco')
###Output
2%|▏ | 10/500 [00:04<03:44, 2.18it/s]######## iteracao 10 ##########
###Markdown
10 executions
###Code
results_aco = []
for _ in range(repeticoes):
myACO = ACO_Graph(list_points, alpha, beta, rho)
_, distancia = myACO.search(qt_formigas, iteracoes, plot_at_every = None, method='aco')
results_aco.append(distancia)
###Output
100%|██████████| 500/500 [05:59<00:00, 1.39it/s]
100%|██████████| 500/500 [05:51<00:00, 1.42it/s]
100%|██████████| 500/500 [05:55<00:00, 1.41it/s]
100%|██████████| 500/500 [05:55<00:00, 1.40it/s]
100%|██████████| 500/500 [05:50<00:00, 1.43it/s]
100%|██████████| 500/500 [05:51<00:00, 1.42it/s]
100%|██████████| 500/500 [05:50<00:00, 1.43it/s]
100%|██████████| 500/500 [05:51<00:00, 1.42it/s]
100%|██████████| 500/500 [05:55<00:00, 1.41it/s]
100%|██████████| 500/500 [05:53<00:00, 1.41it/s]
###Markdown
MAX-MIN Ant System
###Code
# myACO_maxmin = ACO_Graph(list_points, alpha, beta, rho)
# myACO_maxmin.search(qt_formigas, iteracoes, plot_at_every = 10, method='max_min', tal_saturation=[0.2, 0.8])
###Output
_____no_output_____
###Markdown
10 executions
###Code
results_mm = []
for _ in range(repeticoes):
myACO_maxmin = ACO_Graph(list_points, alpha, beta, rho)
_, distancia = myACO_maxmin.search(qt_formigas, iteracoes, plot_at_every = None, method='max_min', tal_saturation=[0.2, 0.8])
results_mm.append(distancia)
###Output
100%|██████████| 500/500 [06:15<00:00, 1.33it/s]
100%|██████████| 500/500 [06:14<00:00, 1.34it/s]
100%|██████████| 500/500 [06:15<00:00, 1.33it/s]
100%|██████████| 500/500 [06:16<00:00, 1.33it/s]
100%|██████████| 500/500 [06:18<00:00, 1.32it/s]
100%|██████████| 500/500 [06:14<00:00, 1.33it/s]
100%|██████████| 500/500 [06:14<00:00, 1.34it/s]
100%|██████████| 500/500 [06:14<00:00, 1.34it/s]
100%|██████████| 500/500 [06:13<00:00, 1.34it/s]
100%|██████████| 500/500 [06:16<00:00, 1.33it/s]
###Markdown
Box Plot
###Code
fig, ax = plt.subplots(figsize=(15,8), dpi=300)
bp1 = ax.boxplot(results_aco, positions=[1], widths=0.7,
patch_artist=True, boxprops=dict(facecolor="C0"))
bp2 = ax.boxplot(results_mm, positions=[3], widths=0.7,
patch_artist=True, boxprops=dict(facecolor="C2"))
ax.legend([bp1["boxes"][0], bp2["boxes"][0]], ['ACO', 'MIN MAX - ANT SYSTEM'], loc='upper right', fontsize='large')
ax.set_xlim(0,4)
ax.tick_params(axis='both', which='major', labelsize=12)
ax.set_title('Comparison between ACO and MIN MAX', fontdict={'size':16})
ax.set_xlabel('Box Plots', fontdict={'size':16})
ax.set_ylabel('Distance', fontdict={'size':16})
plt.show()
fig.savefig('resultado.png',dpi=300)
###Output
_____no_output_____ |
code/16S Mantel tests.ipynb | ###Markdown
Set up notebook environment NOTE: Use a QIIME2 kernel
###Code
import os
import biom
import warnings
import pickle
import numpy as np
import pandas as pd
import qiime2 as q2
from biom import Table
from skbio import OrdinationResults
from skbio.stats import subsample_counts
from skbio.stats.distance import permanova, anosim, mantel
from skbio.stats.distance import DistanceMatrix
from qiime2.plugins.deicode.actions import rpca
from qiime2.plugins.feature_table.actions import rarefy
from qiime2.plugins.diversity.actions import beta_group_significance
from qiime2.plugins.emperor.actions import biplot, plot
from qiime2.plugins.diversity.actions import (beta,
beta_phylogenetic,
pcoa)
from qiime2.plugins import demux, deblur, quality_filter, \
metadata, feature_table, alignment, \
phylogeny, diversity, emperor, feature_classifier, \
taxa, composition
from assets.step_wise_anova import run_stepwise_anova
from qiime2.plugins.fragment_insertion.actions import filter_features
warnings.filterwarnings("ignore", category=DeprecationWarning)
# helper functions
from assets.util_updated import (mantel_matched, simulate_depth,
all_dists, all_dists_no_tree, nested_permanova)
# plotting
import matplotlib
import seaborn as sns
import matplotlib.pyplot as plt
warnings.filterwarnings('ignore')
plt.style.use('ggplot')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Subset metadata to make paired files between extraction kits
###Code
# Read in sample metadata
md = pd.read_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/12201_metadata.txt',
sep = '\t')
# Subset sample metadata to make files for round 1 and round 2
md_round1and2 = md[md['round'] != 3]
md_round1 = md_round1and2[md_round1and2['round'] == 1]
md_round2 = md_round1and2[md_round1and2['round'] == 2]
# Subset round-specific metadata files to make files for each kit
md_round1_powersoil = md_round1[md_round1['extraction_kit'] == 'PowerSoil']
md_round1_powersoil_pro = md_round1[md_round1['extraction_kit'] == 'PowerSoil Pro']
md_round1_norgen = md_round1[md_round1['extraction_kit'] == 'Norgen']
md_round2_powersoil = md_round2[md_round2['extraction_kit'] == 'PowerSoil']
md_round2_magmax = md_round2[md_round2['extraction_kit'] == 'MagMAX Microbiome']
md_round2_nucleomag = md_round2[md_round2['extraction_kit'] == 'NucleoMag Food']
md_round2_zymo = md_round2[md_round2['extraction_kit'] == 'Zymo MagBead']
# Merge kit-specific files to make paired files for comparison
md_round1_ps_vs_pro = pd.concat([md_round1_powersoil, md_round1_powersoil_pro])
md_round1_ps_vs_norgen = pd.concat([md_round1_powersoil, md_round1_norgen])
md_round2_ps_vs_magmax = pd.concat([md_round2_powersoil, md_round2_magmax])
md_round2_ps_vs_nucleomag = pd.concat([md_round2_powersoil, md_round2_nucleomag])
md_round2_ps_vs_zymo = pd.concat([md_round2_powersoil, md_round2_zymo])
# Export paired files
md_round1_ps_vs_pro.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/01_paired_files/12201_metadata_round1_ps_vs_pro.txt',
sep = '\t',
index = False)
md_round1_ps_vs_norgen.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/01_paired_files/12201_metadata_round1_ps_vs_norgen.txt',
sep = '\t',
index = False)
md_round2_ps_vs_magmax.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/01_paired_files/12201_metadata_round2_ps_vs_magmax.txt',
sep = '\t',
index = False)
md_round2_ps_vs_nucleomag.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/01_paired_files/12201_metadata_round2_ps_vs_nucleomag.txt',
sep = '\t',
index = False)
md_round2_ps_vs_zymo.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/01_paired_files/12201_metadata_round2_ps_vs_zymo.txt',
sep = '\t',
index = False)
###Output
_____no_output_____
###Markdown
Mantel tests between pairs of kits 16S data
###Code
# Import data
md_round1_ps_vs_pro_q2 = q2.Metadata.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/01_paired_files/12201_metadata_round1_ps_vs_pro.txt')
md_round1_ps_vs_norgen_q2 = q2.Metadata.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/01_paired_files/12201_metadata_round1_ps_vs_norgen.txt')
md_round2_ps_vs_magmax_q2 = q2.Metadata.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/01_paired_files/12201_metadata_round2_ps_vs_magmax.txt')
md_round2_ps_vs_nucleomag_q2 = q2.Metadata.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/01_paired_files/12201_metadata_round2_ps_vs_nucleomag.txt')
md_round2_ps_vs_zymo_q2 = q2.Metadata.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/01_paired_files/12201_metadata_round2_ps_vs_zymo.txt')
table_16S_hbm = q2.Artifact.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock_hbm.qza')
table_16S_lbm = q2.Artifact.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock_lbm.qza')
tree_16S = q2.Artifact.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/09_fragment_insertion/dna_all_16S_deblur_seqs_noChl_noMit_tree_gg.qza')
# PowerSoil vs. PowerSoil Pro - High biomass samples
## Filter table
table_16S_hbm_biom = table_16S_hbm.view(Table)
md_round1_ps_vs_pro_df_hbm = md_round1_ps_vs_pro_q2.to_dataframe()
shared_ = list(set(table_16S_hbm_biom.ids()) & set(md_round1_ps_vs_pro_df_hbm.index))
md_round1_ps_vs_pro_df_hbm = md_round1_ps_vs_pro_df_hbm.reindex(shared_)
table_16S_hbm_biom_ps_vs_pro = table_16S_hbm_biom.filter(shared_)
keep_ = table_16S_hbm_biom_ps_vs_pro.ids('observation')[table_16S_hbm_biom_ps_vs_pro.sum('observation') > 0]
table_16S_hbm_biom_ps_vs_pro.filter(keep_, axis='observation')
## Import filtered table and re-indexed metadata file
table_16S_hbm_ps_vs_pro = q2.Artifact.import_data('FeatureTable[Frequency]', table_16S_hbm_biom_ps_vs_pro)
md_round1_ps_vs_pro_q2_hbm = q2.Metadata(md_round1_ps_vs_pro_df_hbm)
## Generate distance matrices using 'all_dists' utils
rare_depth_16S_hbm = 12690
dists_res_16S_hbm = all_dists(table_16S_hbm_ps_vs_pro,
rare_depth_16S_hbm, tree_16S)
## Make a unique ID
md_round1_ps_vs_pro_q2_dist = md_round1_ps_vs_pro_q2_hbm.to_dataframe().copy()
md_round1_ps_vs_pro_q2_dist['unique_sample_id'] = ['.'.join(rn_.split('.')[:-2])
for rn_ in md_round1_ps_vs_pro_q2_dist.index]
grouping = 'extraction_kit'
ids = 'unique_sample_id'
## Run Mantel test for each distance matrix
mantel_res_16S_hbm = {}
for metric_, dist_mantel in dists_res_16S_hbm.items():
# subset mf for dist (rare)
dist_mantel = dist_mantel.distance_matrix.view(DistanceMatrix)
md_round1_ps_vs_pro_q2_dist_sub = md_round1_ps_vs_pro_q2_dist.reindex(dist_mantel.ids)
# corr, p, n
mantel_res_16S_hbm[metric_] = mantel_matched(dist_mantel,
md_round1_ps_vs_pro_q2_dist_sub,
grouping,
ids)
## Compile
mantel_res_16S_hbm = pd.DataFrame(mantel_res_16S_hbm,
['corr', 'p', 'n'])
mantel_res_16S_hbm.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/mantel_correlations/table_mantel_16S_hbm_ps_vs_pro.txt', sep='\t')
mantel_res_16S_hbm
# PowerSoil vs. PowerSoil Pro - Low biomass samples
## Filter table
table_16S_lbm_biom = table_16S_lbm.view(Table)
md_round1_ps_vs_pro_df_lbm = md_round1_ps_vs_pro_q2.to_dataframe()
shared_ = list(set(table_16S_lbm_biom.ids()) & set(md_round1_ps_vs_pro_df_lbm.index))
md_round1_ps_vs_pro_df_lbm = md_round1_ps_vs_pro_df_lbm.reindex(shared_)
table_16S_lbm_biom_ps_vs_pro = table_16S_lbm_biom.filter(shared_)
keep_ = table_16S_lbm_biom_ps_vs_pro.ids('observation')[table_16S_lbm_biom_ps_vs_pro.sum('observation') > 0]
table_16S_lbm_biom_ps_vs_pro.filter(keep_, axis='observation')
## Import filtered table and re-indexed metadata file
table_16S_lbm_ps_vs_pro = q2.Artifact.import_data('FeatureTable[Frequency]', table_16S_lbm_biom_ps_vs_pro)
md_round1_ps_vs_pro_q2_lbm = q2.Metadata(md_round1_ps_vs_pro_df_lbm)
## Generate distance matrices using 'all_dists' utils
rare_depth_16S_lbm = 3295
dists_res_16S_lbm = all_dists(table_16S_lbm_ps_vs_pro,
rare_depth_16S_lbm, tree_16S)
## Make a unique ID
md_round1_ps_vs_pro_q2_dist = md_round1_ps_vs_pro_q2_lbm.to_dataframe().copy()
md_round1_ps_vs_pro_q2_dist['unique_sample_id'] = ['.'.join(rn_.split('.')[:-2])
for rn_ in md_round1_ps_vs_pro_q2_dist.index]
grouping = 'extraction_kit'
ids = 'unique_sample_id'
## Run Mantel test for each distance matrix
mantel_res_16S_lbm = {}
for metric_, dist_mantel in dists_res_16S_lbm.items():
# subset mf for dist (rare)
dist_mantel = dist_mantel.distance_matrix.view(DistanceMatrix)
md_round1_ps_vs_pro_q2_dist_sub = md_round1_ps_vs_pro_q2_dist.reindex(dist_mantel.ids)
# corr, p, n
mantel_res_16S_lbm[metric_] = mantel_matched(dist_mantel,
md_round1_ps_vs_pro_q2_dist_sub,
grouping,
ids)
## Compile
mantel_res_16S_lbm = pd.DataFrame(mantel_res_16S_lbm,
['corr', 'p', 'n'])
mantel_res_16S_lbm.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/mantel_correlations/table_mantel_16S_lbm_ps_vs_pro.txt', sep='\t')
mantel_res_16S_lbm
# PowerSoil vs. Norgen - High biomass samples
## Filter table
table_16S_hbm_biom = table_16S_hbm.view(Table)
md_round1_ps_vs_norgen_df_hbm = md_round1_ps_vs_norgen_q2.to_dataframe()
shared_ = list(set(table_16S_hbm_biom.ids()) & set(md_round1_ps_vs_norgen_df_hbm.index))
md_round1_ps_vs_norgen_df_hbm = md_round1_ps_vs_norgen_df_hbm.reindex(shared_)
table_16S_hbm_biom_ps_vs_norgen = table_16S_hbm_biom.filter(shared_)
keep_ = table_16S_hbm_biom_ps_vs_norgen.ids('observation')[table_16S_hbm_biom_ps_vs_norgen.sum('observation') > 0]
table_16S_hbm_biom_ps_vs_norgen.filter(keep_, axis='observation')
## Import filtered table and re-indexed metadata file
table_16S_hbm_ps_vs_norgen = q2.Artifact.import_data('FeatureTable[Frequency]', table_16S_hbm_biom_ps_vs_norgen)
md_round1_ps_vs_norgen_q2_hbm = q2.Metadata(md_round1_ps_vs_norgen_df_hbm)
## Generate distance matrices using 'all_dists' utils
rare_depth_16S_hbm = 12690
dists_res_16S_hbm = all_dists(table_16S_hbm_ps_vs_norgen,
rare_depth_16S_hbm, tree_16S)
## Make a unique ID
md_round1_ps_vs_norgen_q2_dist = md_round1_ps_vs_norgen_q2_hbm.to_dataframe().copy()
md_round1_ps_vs_norgen_q2_dist['unique_sample_id'] = ['.'.join(rn_.split('.')[:-2])
for rn_ in md_round1_ps_vs_norgen_q2_dist.index]
grouping = 'extraction_kit'
ids = 'unique_sample_id'
## Run Mantel test for each distance matrix
mantel_res_16S_hbm = {}
for metric_, dist_mantel in dists_res_16S_hbm.items():
# subset mf for dist (rare)
dist_mantel = dist_mantel.distance_matrix.view(DistanceMatrix)
md_round1_ps_vs_norgen_q2_dist_sub = md_round1_ps_vs_norgen_q2_dist.reindex(dist_mantel.ids)
# corr, p, n
mantel_res_16S_hbm[metric_] = mantel_matched(dist_mantel,
md_round1_ps_vs_norgen_q2_dist_sub,
grouping,
ids)
## Compile
mantel_res_16S_hbm = pd.DataFrame(mantel_res_16S_hbm,
['corr', 'p', 'n'])
mantel_res_16S_hbm.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/mantel_correlations/table_mantel_16S_hbm_ps_vs_norgen.txt', sep='\t')
mantel_res_16S_hbm
# PowerSoil vs. Norgen - Low biomass samples
## Filter table
table_16S_lbm_biom = table_16S_lbm.view(Table)
md_round1_ps_vs_norgen_df_lbm = md_round1_ps_vs_norgen_q2.to_dataframe()
shared_ = list(set(table_16S_lbm_biom.ids()) & set(md_round1_ps_vs_norgen_df_lbm.index))
md_round1_ps_vs_norgen_df_lbm = md_round1_ps_vs_norgen_df_lbm.reindex(shared_)
table_16S_lbm_biom_ps_vs_norgen = table_16S_lbm_biom.filter(shared_)
keep_ = table_16S_lbm_biom_ps_vs_norgen.ids('observation')[table_16S_lbm_biom_ps_vs_norgen.sum('observation') > 0]
table_16S_lbm_biom_ps_vs_norgen.filter(keep_, axis='observation')
## Import filtered table and re-indexed metadata file
table_16S_lbm_ps_vs_norgen = q2.Artifact.import_data('FeatureTable[Frequency]', table_16S_lbm_biom_ps_vs_norgen)
md_round1_ps_vs_norgen_q2_lbm = q2.Metadata(md_round1_ps_vs_norgen_df_lbm)
## Generate distance matrices using 'all_dists' utils
rare_depth_16S_lbm = 3295
dists_res_16S_lbm = all_dists(table_16S_lbm_ps_vs_norgen,
rare_depth_16S_lbm, tree_16S)
## Make a unique ID
md_round1_ps_vs_norgen_q2_dist = md_round1_ps_vs_norgen_q2_lbm.to_dataframe().copy()
md_round1_ps_vs_norgen_q2_dist['unique_sample_id'] = ['.'.join(rn_.split('.')[:-2])
for rn_ in md_round1_ps_vs_norgen_q2_dist.index]
grouping = 'extraction_kit'
ids = 'unique_sample_id'
## Run Mantel test for each distance matrix
mantel_res_16S_lbm = {}
for metric_, dist_mantel in dists_res_16S_lbm.items():
# subset mf for dist (rare)
dist_mantel = dist_mantel.distance_matrix.view(DistanceMatrix)
md_round1_ps_vs_norgen_q2_dist_sub = md_round1_ps_vs_norgen_q2_dist.reindex(dist_mantel.ids)
# corr, p, n
mantel_res_16S_lbm[metric_] = mantel_matched(dist_mantel,
md_round1_ps_vs_norgen_q2_dist_sub,
grouping,
ids)
## Compile
mantel_res_16S_lbm = pd.DataFrame(mantel_res_16S_lbm,
['corr', 'p', 'n'])
mantel_res_16S_lbm.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/mantel_correlations/table_mantel_16S_lbm_ps_vs_norgen.txt', sep='\t')
mantel_res_16S_lbm
# PowerSoil vs. MagMAX Microbiome - High biomass samples
## Filter table
table_16S_hbm_biom = table_16S_hbm.view(Table)
md_round2_ps_vs_magmax_df_hbm = md_round2_ps_vs_magmax_q2.to_dataframe()
shared_ = list(set(table_16S_hbm_biom.ids()) & set(md_round2_ps_vs_magmax_df_hbm.index))
md_round2_ps_vs_magmax_df_hbm = md_round2_ps_vs_magmax_df_hbm.reindex(shared_)
table_16S_hbm_biom_ps_vs_magmax = table_16S_hbm_biom.filter(shared_)
keep_ = table_16S_hbm_biom_ps_vs_magmax.ids('observation')[table_16S_hbm_biom_ps_vs_magmax.sum('observation') > 0]
table_16S_hbm_biom_ps_vs_magmax.filter(keep_, axis='observation')
## Import filtered table and re-indexed metadata file
table_16S_hbm_ps_vs_magmax = q2.Artifact.import_data('FeatureTable[Frequency]', table_16S_hbm_biom_ps_vs_magmax)
md_round2_ps_vs_magmax_q2_hbm = q2.Metadata(md_round2_ps_vs_magmax_df_hbm)
## Generate distance matrices using 'all_dists' utils
rare_depth_16S_hbm = 12690
dists_res_16S_hbm = all_dists(table_16S_hbm_ps_vs_magmax,
rare_depth_16S_hbm, tree_16S)
## Make a unique ID
md_round2_ps_vs_magmax_q2_dist = md_round2_ps_vs_magmax_q2_hbm.to_dataframe().copy()
md_round2_ps_vs_magmax_q2_dist['unique_sample_id'] = ['.'.join(rn_.split('.')[:-2])
for rn_ in md_round2_ps_vs_magmax_q2_dist.index]
grouping = 'extraction_kit'
ids = 'unique_sample_id'
## Run Mantel test for each distance matrix
mantel_res_16S_hbm = {}
for metric_, dist_mantel in dists_res_16S_hbm.items():
# subset mf for dist (rare)
dist_mantel = dist_mantel.distance_matrix.view(DistanceMatrix)
md_round2_ps_vs_magmax_q2_dist_sub = md_round2_ps_vs_magmax_q2_dist.reindex(dist_mantel.ids)
# corr, p, n
mantel_res_16S_hbm[metric_] = mantel_matched(dist_mantel,
md_round2_ps_vs_magmax_q2_dist_sub,
grouping,
ids)
## Compile
mantel_res_16S_hbm = pd.DataFrame(mantel_res_16S_hbm,
['corr', 'p', 'n'])
mantel_res_16S_hbm.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/mantel_correlations/table_mantel_16S_hbm_ps_vs_magmax.txt', sep='\t')
mantel_res_16S_hbm
# PowerSoil vs. MagMAX Microbiome - Low biomass samples
## Filter table
table_16S_lbm_biom = table_16S_lbm.view(Table)
md_round2_ps_vs_magmax_df_lbm = md_round2_ps_vs_magmax_q2.to_dataframe()
shared_ = list(set(table_16S_lbm_biom.ids()) & set(md_round2_ps_vs_magmax_df_lbm.index))
md_round2_ps_vs_magmax_df_lbm = md_round2_ps_vs_magmax_df_lbm.reindex(shared_)
table_16S_lbm_biom_ps_vs_magmax = table_16S_lbm_biom.filter(shared_)
keep_ = table_16S_lbm_biom_ps_vs_magmax.ids('observation')[table_16S_lbm_biom_ps_vs_magmax.sum('observation') > 0]
table_16S_lbm_biom_ps_vs_magmax.filter(keep_, axis='observation')
## Import filtered table and re-indexed metadata file
table_16S_lbm_ps_vs_magmax = q2.Artifact.import_data('FeatureTable[Frequency]', table_16S_lbm_biom_ps_vs_magmax)
md_round2_ps_vs_magmax_q2_lbm = q2.Metadata(md_round2_ps_vs_magmax_df_lbm)
## Generate distance matrices using 'all_dists' utils
rare_depth_16S_lbm = 3295
dists_res_16S_lbm = all_dists(table_16S_lbm_ps_vs_magmax,
rare_depth_16S_lbm, tree_16S)
## Make a unique ID
md_round2_ps_vs_magmax_q2_dist = md_round2_ps_vs_magmax_q2_lbm.to_dataframe().copy()
md_round2_ps_vs_magmax_q2_dist['unique_sample_id'] = ['.'.join(rn_.split('.')[:-2])
for rn_ in md_round2_ps_vs_magmax_q2_dist.index]
grouping = 'extraction_kit'
ids = 'unique_sample_id'
## Run Mantel test for each distance matrix
mantel_res_16S_lbm = {}
for metric_, dist_mantel in dists_res_16S_lbm.items():
# subset mf for dist (rare)
dist_mantel = dist_mantel.distance_matrix.view(DistanceMatrix)
md_round2_ps_vs_magmax_q2_dist_sub = md_round2_ps_vs_magmax_q2_dist.reindex(dist_mantel.ids)
# corr, p, n
mantel_res_16S_lbm[metric_] = mantel_matched(dist_mantel,
md_round2_ps_vs_magmax_q2_dist_sub,
grouping,
ids)
## Compile
mantel_res_16S_lbm = pd.DataFrame(mantel_res_16S_lbm,
['corr', 'p', 'n'])
mantel_res_16S_lbm.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/mantel_correlations/table_mantel_16S_lbm_ps_vs_magmax.txt', sep='\t')
mantel_res_16S_lbm
# PowerSoil vs. NucleoMag Food - High biomass samples
## Filter table
table_16S_hbm_biom = table_16S_hbm.view(Table)
md_round2_ps_vs_nucleomag_df_hbm = md_round2_ps_vs_nucleomag_q2.to_dataframe()
shared_ = list(set(table_16S_hbm_biom.ids()) & set(md_round2_ps_vs_nucleomag_df_hbm.index))
md_round2_ps_vs_nucleomag_df_hbm = md_round2_ps_vs_nucleomag_df_hbm.reindex(shared_)
table_16S_hbm_biom_ps_vs_nucleomag = table_16S_hbm_biom.filter(shared_)
keep_ = table_16S_hbm_biom_ps_vs_nucleomag.ids('observation')[table_16S_hbm_biom_ps_vs_nucleomag.sum('observation') > 0]
table_16S_hbm_biom_ps_vs_nucleomag.filter(keep_, axis='observation')
## Import filtered table and re-indexed metadata file
table_16S_hbm_ps_vs_nucleomag = q2.Artifact.import_data('FeatureTable[Frequency]', table_16S_hbm_biom_ps_vs_nucleomag)
md_round2_ps_vs_nucleomag_q2_hbm = q2.Metadata(md_round2_ps_vs_nucleomag_df_hbm)
## Generate distance matrices using 'all_dists' utils
rare_depth_16S_hbm = 12690
dists_res_16S_hbm = all_dists(table_16S_hbm_ps_vs_nucleomag,
rare_depth_16S_hbm, tree_16S)
## Make a unique ID
md_round2_ps_vs_nucleomag_q2_dist = md_round2_ps_vs_nucleomag_q2_hbm.to_dataframe().copy()
md_round2_ps_vs_nucleomag_q2_dist['unique_sample_id'] = ['.'.join(rn_.split('.')[:-2])
for rn_ in md_round2_ps_vs_nucleomag_q2_dist.index]
grouping = 'extraction_kit'
ids = 'unique_sample_id'
## Run Mantel test for each distance matrix
mantel_res_16S_hbm = {}
for metric_, dist_mantel in dists_res_16S_hbm.items():
# subset mf for dist (rare)
dist_mantel = dist_mantel.distance_matrix.view(DistanceMatrix)
md_round2_ps_vs_nucleomag_q2_dist_sub = md_round2_ps_vs_nucleomag_q2_dist.reindex(dist_mantel.ids)
# corr, p, n
mantel_res_16S_hbm[metric_] = mantel_matched(dist_mantel,
md_round2_ps_vs_nucleomag_q2_dist_sub,
grouping,
ids)
## Compile
mantel_res_16S_hbm = pd.DataFrame(mantel_res_16S_hbm,
['corr', 'p', 'n'])
mantel_res_16S_hbm.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/mantel_correlations/table_mantel_16S_hbm_ps_vs_nucleomag.txt', sep='\t')
mantel_res_16S_hbm
# PowerSoil vs. NucleoMag Food - Low biomass samples
## Filter table
table_16S_lbm_biom = table_16S_lbm.view(Table)
md_round2_ps_vs_nucleomag_df_lbm = md_round2_ps_vs_nucleomag_q2.to_dataframe()
shared_ = list(set(table_16S_lbm_biom.ids()) & set(md_round2_ps_vs_nucleomag_df_lbm.index))
md_round2_ps_vs_nucleomag_df_lbm = md_round2_ps_vs_nucleomag_df_lbm.reindex(shared_)
table_16S_lbm_biom_ps_vs_nucleomag = table_16S_lbm_biom.filter(shared_)
keep_ = table_16S_lbm_biom_ps_vs_nucleomag.ids('observation')[table_16S_lbm_biom_ps_vs_nucleomag.sum('observation') > 0]
table_16S_lbm_biom_ps_vs_nucleomag.filter(keep_, axis='observation')
## Import filtered table and re-indexed metadata file
table_16S_lbm_ps_vs_nucleomag = q2.Artifact.import_data('FeatureTable[Frequency]', table_16S_lbm_biom_ps_vs_nucleomag)
md_round2_ps_vs_nucleomag_q2_lbm = q2.Metadata(md_round2_ps_vs_nucleomag_df_lbm)
## Generate distance matrices using 'all_dists' utils
rare_depth_16S_lbm = 3295
dists_res_16S_lbm = all_dists(table_16S_lbm_ps_vs_nucleomag,
rare_depth_16S_lbm, tree_16S)
## Make a unique ID
md_round2_ps_vs_nucleomag_q2_dist = md_round2_ps_vs_nucleomag_q2_lbm.to_dataframe().copy()
md_round2_ps_vs_nucleomag_q2_dist['unique_sample_id'] = ['.'.join(rn_.split('.')[:-2])
for rn_ in md_round2_ps_vs_nucleomag_q2_dist.index]
grouping = 'extraction_kit'
ids = 'unique_sample_id'
## Run Mantel test for each distance matrix
mantel_res_16S_lbm = {}
for metric_, dist_mantel in dists_res_16S_lbm.items():
# subset mf for dist (rare)
dist_mantel = dist_mantel.distance_matrix.view(DistanceMatrix)
md_round2_ps_vs_nucleomag_q2_dist_sub = md_round2_ps_vs_nucleomag_q2_dist.reindex(dist_mantel.ids)
# corr, p, n
mantel_res_16S_lbm[metric_] = mantel_matched(dist_mantel,
md_round2_ps_vs_nucleomag_q2_dist_sub,
grouping,
ids)
## Compile
mantel_res_16S_lbm = pd.DataFrame(mantel_res_16S_lbm,
['corr', 'p', 'n'])
mantel_res_16S_lbm.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/mantel_correlations/table_mantel_16S_lbm_ps_vs_nucleomag.txt', sep='\t')
mantel_res_16S_lbm
# PowerSoil vs. Zymo MagBead - High biomass samples
## Filter table
table_16S_hbm_biom = table_16S_hbm.view(Table)
md_round2_ps_vs_zymo_df_hbm = md_round2_ps_vs_zymo_q2.to_dataframe()
shared_ = list(set(table_16S_hbm_biom.ids()) & set(md_round2_ps_vs_zymo_df_hbm.index))
md_round2_ps_vs_zymo_df_hbm = md_round2_ps_vs_zymo_df_hbm.reindex(shared_)
table_16S_hbm_biom_ps_vs_zymo = table_16S_hbm_biom.filter(shared_)
keep_ = table_16S_hbm_biom_ps_vs_zymo.ids('observation')[table_16S_hbm_biom_ps_vs_zymo.sum('observation') > 0]
table_16S_hbm_biom_ps_vs_zymo.filter(keep_, axis='observation')
## Import filtered table and re-indexed metadata file
table_16S_hbm_ps_vs_zymo = q2.Artifact.import_data('FeatureTable[Frequency]', table_16S_hbm_biom_ps_vs_zymo)
md_round2_ps_vs_zymo_q2_hbm = q2.Metadata(md_round2_ps_vs_zymo_df_hbm)
## Generate distance matrices using 'all_dists' utils
rare_depth_16S_hbm = 12690
dists_res_16S_hbm = all_dists(table_16S_hbm_ps_vs_zymo,
rare_depth_16S_hbm, tree_16S)
## Make a unique ID
md_round2_ps_vs_zymo_q2_dist = md_round2_ps_vs_zymo_q2_hbm.to_dataframe().copy()
md_round2_ps_vs_zymo_q2_dist['unique_sample_id'] = ['.'.join(rn_.split('.')[:-2])
for rn_ in md_round2_ps_vs_zymo_q2_dist.index]
grouping = 'extraction_kit'
ids = 'unique_sample_id'
## Run Mantel test for each distance matrix
mantel_res_16S_hbm = {}
for metric_, dist_mantel in dists_res_16S_hbm.items():
# subset mf for dist (rare)
dist_mantel = dist_mantel.distance_matrix.view(DistanceMatrix)
md_round2_ps_vs_zymo_q2_dist_sub = md_round2_ps_vs_zymo_q2_dist.reindex(dist_mantel.ids)
# corr, p, n
mantel_res_16S_hbm[metric_] = mantel_matched(dist_mantel,
md_round2_ps_vs_zymo_q2_dist_sub,
grouping,
ids)
## Compile
mantel_res_16S_hbm = pd.DataFrame(mantel_res_16S_hbm,
['corr', 'p', 'n'])
mantel_res_16S_hbm.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/mantel_correlations/table_mantel_16S_hbm_ps_vs_zymo.txt', sep='\t')
mantel_res_16S_hbm
# PowerSoil vs. Zymo MagBead - Low biomass samples
## Filter table
table_16S_lbm_biom = table_16S_lbm.view(Table)
md_round2_ps_vs_zymo_df_lbm = md_round2_ps_vs_zymo_q2.to_dataframe()
shared_ = list(set(table_16S_lbm_biom.ids()) & set(md_round2_ps_vs_zymo_df_lbm.index))
md_round2_ps_vs_zymo_df_lbm = md_round2_ps_vs_zymo_df_lbm.reindex(shared_)
table_16S_lbm_biom_ps_vs_zymo = table_16S_lbm_biom.filter(shared_)
keep_ = table_16S_lbm_biom_ps_vs_zymo.ids('observation')[table_16S_lbm_biom_ps_vs_zymo.sum('observation') > 0]
table_16S_lbm_biom_ps_vs_zymo.filter(keep_, axis='observation')
## Import filtered table and re-indexed metadata file
table_16S_lbm_ps_vs_zymo = q2.Artifact.import_data('FeatureTable[Frequency]', table_16S_lbm_biom_ps_vs_zymo)
md_round2_ps_vs_zymo_q2_lbm = q2.Metadata(md_round2_ps_vs_zymo_df_lbm)
## Generate distance matrices using 'all_dists' utils
rare_depth_16S_lbm = 3295
dists_res_16S_lbm = all_dists(table_16S_lbm_ps_vs_zymo,
rare_depth_16S_lbm, tree_16S)
## Make a unique ID
md_round2_ps_vs_zymo_q2_dist = md_round2_ps_vs_zymo_q2_lbm.to_dataframe().copy()
md_round2_ps_vs_zymo_q2_dist['unique_sample_id'] = ['.'.join(rn_.split('.')[:-2])
for rn_ in md_round2_ps_vs_zymo_q2_dist.index]
grouping = 'extraction_kit'
ids = 'unique_sample_id'
## Run Mantel test for each distance matrix
mantel_res_16S_lbm = {}
for metric_, dist_mantel in dists_res_16S_lbm.items():
# subset mf for dist (rare)
dist_mantel = dist_mantel.distance_matrix.view(DistanceMatrix)
md_round2_ps_vs_zymo_q2_dist_sub = md_round2_ps_vs_zymo_q2_dist.reindex(dist_mantel.ids)
# corr, p, n
mantel_res_16S_lbm[metric_] = mantel_matched(dist_mantel,
md_round2_ps_vs_zymo_q2_dist_sub,
grouping,
ids)
## Compile
mantel_res_16S_lbm = pd.DataFrame(mantel_res_16S_lbm,
['corr', 'p', 'n'])
mantel_res_16S_lbm.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/mantel_correlations/table_mantel_16S_lbm_ps_vs_zymo.txt', sep='\t')
mantel_res_16S_lbm
###Output
_____no_output_____
###Markdown
Stepwise ANOVA
###Code
md_round2_ps_vs_zymo_q2_dist[ids]
# Generate ordinations (row=samples, cols=axes)
pcoa_res = {}
pcoa_res['Jaccard'] = pcoa(dists_res_16S_lbm['Jaccard'].distance_matrix).pcoa.view(OrdinationResults).samples
pcoa_res['Unweighted UniFrac'] = pcoa(dists_res_16S_lbm['Unweighted UniFrac'].distance_matrix).pcoa.view(OrdinationResults).samples
pcoa_res['Weighted UniFrac'] = pcoa(dists_res_16S_lbm['Weighted UniFrac'].distance_matrix).pcoa.view(OrdinationResults).samples
pcoa_res['RPCA'] = dists_res_16S_lbm['RPCA'].biplot.view(OrdinationResults).samples
es_all = {}
use_ = ['sample_type', 'sample_type_2','sample_type_3','biomass_sample','sample_technical_replicate', 'bead_beating']
# clean up meta (only stuff to run)
mf_ord = mf.to_dataframe().copy()
# shit filter but works for now
keep_ = [v_ for v_ in mf_ord.columns
if len(set(mf_ord[v_])) > 1 and
len(set(mf_ord[v_])) < mf_ord.shape[0]//2]
mf_ord = mf_ord[keep_]
# run stp-wise ANOVA for all ords
for metric_, ord_ in pcoa_res.items():
# get first three axes
ord_ = ord_[[0,1,2]]
ord_.columns = ['PC1','PC2','PC3']
# subset/match
mf_ord_ = mf_ord.copy()
shared_ids = list(set(ord_.index)\
& set(mf_ord_.index))
mf_ord_ = mf_ord_.loc[shared_ids,:]
ord_ = ord_.loc[shared_ids,:]
es_all[metric_] = run_stepwise_anova(ord_, mf_ord_, use_) #mf_ord_.columns)
# concat all runs
es_alldf = pd.concat(es_all).rename({'+ sample_type_2':'Sample Type'}, axis=0)
es_alldf.to_csv('results/tables/effect-size_2min_20min.tsv', sep='\t')
es_alldf
###Output
_____no_output_____ |
YoloX_Custom_Training_Emotion_Detection.ipynb | ###Markdown
**YoloX Custom Training - Emotion Detection**YoloX just got released, and it is better and faster than YoloR, YoloV4, Scaled YoloV4, YoloV5 and PP-YOLOv2.In this tutorial let us learn how to do Emotion Detection by doing custom training on YoloX. All right here and right now. No time to waste guys, let's get started!! **Want to Become a YOLOX Expert?**💻Get Started with YOLOX [Get Started](https://www.augmentedstartups.com/yolox-registration). ⭐ Download the Code at the [AI Vision Store](https://augmentedstartups.info/VisionStore)☕ Buy me [Chai/Coffee](https://bit.ly/BuymeaCoffeeAS) **About US**[Augmented Startups](https://www.augmentedstartups.com) provides tutorials in AI Computer Vision and Augmented Reality. With over **95K subscribers** on our channel, we teach state-of-the-art models and build apps and projects that solve real-world problems. From the Author: hey there, **I am Rohit Kukreja - A Computer Vision & Deep Learning Practitioner** Setup
###Code
from google.colab import drive
drive.mount('/content/gdrive')
%%capture
!git clone https://github.com/dmuinoo/gestual/
###Output
_____no_output_____
###Markdown
Install YOLOX Dependencies
###Code
%%capture
%cd /content/gestual
!pip3 install -U pip && pip3 install -r requirements.txt
!pip3 install -v -e .
!pip uninstall -y torch torchvision torchaudio
# May need to change in the future if Colab no longer uses CUDA 11.0
!pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
!pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
###Output
_____no_output_____
###Markdown
Install Nvidia Apex
###Code
%%capture
!git clone https://github.com/NVIDIA/apex
%cd apex
!pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
###Output
_____no_output_____
###Markdown
Import the Dataset
###Code
%cd /content/gestual/
!curl -L "https://public.roboflow.com/ds/RM02OBPfKD?key=6IpgAWtQ0U" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
root = "/content/gestual/train/"
%mkdir /content/gestual/datasets/VOCdevkit/
!cp -a "/content/gestual/train/." "/content/gestual/datasets/VOCdevkit"
###Output
_____no_output_____
###Markdown
Prepare dataset as per VOC
###Code
%cd /content/gestual/
%mkdir "/content/gestual/datasets/VOCdevkit/VOC2007"
!python3 voc_txt.py "/content/gestual/datasets/VOCdevkit/"
%mkdir "/content/gestual/datasets/VOCdevkit/VOC2012"
!cp -r "/content/gestual/datasets/VOCdevkit/VOC2007/." "/content/gestual/datasets/VOCdevkit/VOC2012"
###Output
_____no_output_____
###Markdown
Update the default classes
###Code
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def writetemplate(line, cell):
with open(line, 'w') as f:
f.write(cell.format(**globals()))
##REPLACE this cell with your classnames stripped of whitespace and lowercase
%%writetemplate /content/gestual/yolox/data/datasets/voc_classes.py
VOC_CLASSES = (
'pistol',
)
##REPLACE this cell with your classnames stripped of whitespace and lowercase
%%writetemplate /content/gestual/yolox/data/datasets/coco_classes.py
COCO_CLASSES = (
'pistol',
)
NUM_CLASSES = 1
!sed -i -e 's/self.num_classes = 20/self.num_classes = {NUM_CLASSES}/g' "/content/gestual/exps/example/yolox_voc/yolox_voc_s.py"
###Output
_____no_output_____
###Markdown
Training
###Code
#Download the weights
%cd /content/gestual
!wget https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s.pth
%env OUT_DIR= /content/gdrive/MyDrive/gestual
!mkdir -p /content/gdrive/MyDrive/gestual
!python tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 1 -b 16 --fp16 -o -c /content/gestual/yolox_s.pth
###Output
_____no_output_____
###Markdown
Inference
###Code
from google.colab import files
files.upload()
%env OUT_DIR= /content/gdrive/MyDrive/gestual
VIDEO_PATH = "/content/gdrive/MyDrive/yolo/prueba2.mp4"
MODEL_PATH = "/content/gdrive/MyDrive/gestual/YOLOX_outputs/yolox_voc_s/last_epoch_ckpt.pth.tar"
!python tools/demo.py video -f /content/gestual/exps/example/yolox_voc/yolox_voc_s.py -c {MODEL_PATH} --path {VIDEO_PATH} --conf 0.25 --nms 0.45 --save_result --device gpu
###Output
_____no_output_____ |
Deep Learning-SEMICOLON/2. Data Analytics/Sklearn Tutorial - Housing example.ipynb | ###Markdown
Source : Sklearn datasets
###Code
import pandas as pd
import numpy as np
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_boston
boston = load_boston()
boston
df_x=pd.DataFrame(boston.data,columns=boston.feature_names)
df_y=pd.DataFrame(boston.target)
df_x.describe()
reg = linear_model.LinearRegression()
x_train, x_test, y_train, y_test = train_test_split(df_x, df_y, test_size=0.2, random_state=4)
x_train.head()
reg.fit(x_train,y_train)
reg.coef_
# predict the house prices.
reg.predict(x_test)
#mean square error
np.mean((reg.predict(x_test) - y_test)**2)
###Output
_____no_output_____ |
question2.ipynb | ###Markdown
You need to implement Logistic Regression from scratch in this question 1. You are provided with the dataset of sign language digits. Implement logistic regression from scratch to classify the images provided in the dataset. Load the dataset and split it into training and test sets with a 70:30 ratio randomly using train_test_split.2. Plot a diagram of the sigmoid function. This is used for binary classification. How do you modify it for multilabel dataset classification problems? State and explain the methods used.3. Use both the one-vs-all and one-vs-one methods for the above problem statement.4. Also get results using Logistic Regression from scikit-learn.5. Report the accuracy score, confusion matrix and any other metrics you feel are useful, and compare the results from all three.[BONUS]6. Display a few pictures with their predicted and original labels 7. Do the results differ? State the reasons why. dataset link : https://iiitaphyd-my.sharepoint.com/:f:/g/personal/apurva_jadhav_students_iiit_ac_in/Eictt5_qmoxNqezgQQiMWeIBph4sxlfA6jWAJNPnV2SF9Q?e=mQmYN0
###Code
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, f1_score,confusion_matrix
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier
x_l = np.load("X.npy") # image
y_l = np.load("Y.npy") # label
x_l = x_l.reshape(2062, -1)
x = np.linspace(-5, 5, 100)
z = 1/(1 + np.exp(-x))
plt.plot(x, z)
plt.xlabel('x')
plt.ylabel('Sigmoid(x)')
plt.title('Graph of sigmoid')
plt.show()
###Output
_____no_output_____
###Markdown
Multiclass classification**Logistic regression** can only be used for two-class problems where the output label is binary, like a **yes** or **no**, **true** or **false** case. To use Logistic Regression for multiple classes there are two methods that we can use:1. **One Vs One**: in this method we need to consider $K \choose 2$ classifiers *(K = number of classes)*, one for each pair of classes. To build each one we need to extract only those data points *(samples)* belonging to its pair of classes. For testing we need to use all the $K \choose 2$ classifiers and count the votes (frequencies) for each class. Finally we predict the label *(class)* with the maximum frequency.2. **One Vs All**: in this method we need to consider $K$ classifiers, one for each class. For each classifier the data belonging to that class is labeled as 1 and everything else as 0, which converts it into a two-class *(binary)* problem.
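For intuition, here is a minimal prediction-time sketch of how the two schemes combine binary classifiers. The classifier objects, the `pairs` bookkeeping and the 0.5 threshold are illustrative assumptions rather than part of the required implementation; each classifier is assumed to expose a `predict(X)` method returning probabilities, as the class below does.
```python
import numpy as np

def predict_one_vs_all(classifiers, X):
    # classifiers[k] scores "class k vs. the rest"; the highest score wins
    scores = np.column_stack([clf.predict(X).ravel() for clf in classifiers])
    return np.argmax(scores, axis=1)

def predict_one_vs_one(classifiers, pairs, X, n_classes):
    # classifiers[m] separates the class pair pairs[m] = (i, j);
    # every sample casts one vote per pair and the majority vote wins
    votes = np.zeros((X.shape[0], n_classes), dtype=int)
    for clf, (i, j) in zip(classifiers, pairs):
        chose_i = clf.predict(X).ravel() >= 0.5
        votes[chose_i, i] += 1
        votes[~chose_i, j] += 1
    return np.argmax(votes, axis=1)
```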
###Code
# Logistic Regression
class MyLogisticRegression:
def __init__(self, train_data, Y):
self.data = train_data # It is assumed that data is normalized and shuffled (rows, cols)
self.Y = Y[:, np.newaxis]
self.b = np.random.randn()
self.cols = self.data.shape[1]
self.rows = self.data.shape[0]
self.weights = np.random.randn(self.cols, 1) # Initialising weights to 1, shape (cols, 1)
self.num_iterations = 500
self.learning_rate = 0.0001
self.batch_size = 30
@staticmethod
def sigmoid(x):
return 1/(1 + np.exp(-x))
def calc_mini_batches(self):
new_data = np.hstack((self.data, self.Y))
np.random.shuffle(new_data)
rem = self.rows % self.batch_size
num = self.rows // self.batch_size
till = self.batch_size * num
if num > 0:
dd = np.array(np.vsplit(new_data[ :till, :], num))
X_batch = dd[:, :, :-1]
Y_batch = dd[:, :, -1]
return X_batch, Y_batch
def update_weights(self, X, Y):
Y_predicted = self.predict(X) # Remember that X has data stored along the row for one sample
gradient = np.dot(np.transpose(X), Y_predicted - Y)
self.b = self.b - np.sum(Y_predicted - Y)
self.weights = self.weights - (self.learning_rate * gradient) # vector subtraction
def print_error(self):
Y_Predicted = self.predict(self.data)
class_one = self.Y == 1
class_two = np.invert(class_one)
val = np.sum(np.log(Y_Predicted[class_one]))
val += np.sum(np.log(1 - Y_Predicted[class_two]))
print(-val)
def gradient_descent(self):
for j in range(self.num_iterations):
X, Y = self.calc_mini_batches()
num_batches = X.shape[0]
for i in range(num_batches):
self.update_weights(X[i, :, :], Y[i, :][:, np.newaxis]) # update the weights
if (j)%500 == 0:
self.print_error()
def predict(self, X):
# X is 2 dimensional array, samples along the rows
return self.sigmoid(np.dot(X, self.weights) + self.b)
###Output
_____no_output_____
###Markdown
One VS Rest Classifier
###Code
log_regressors_ova = []
for i in range(10):
mask = y_l[:,i] >= 1.0 - 1e-6
others = np.invert(mask)
x_pos = x_l[mask]
x_neg = x_l[others]
y_pos = [1]*len(x_pos)
y_neg = [0]*len(x_neg)
y_new = y_pos + y_neg
y_new = np.array(y_new)
x_new = np.vstack((x_pos, x_neg))
x_train, x_test, y_train, y_test = train_test_split(x_new, y_new, test_size=0.30, shuffle=True)
reg = MyLogisticRegression(x_train, y_train)
reg.gradient_descent()
y_pred = reg.predict(x_test)
pred = y_pred >= 0.5
pred = pred.astype(int)
log_regressors_ova.append(reg)
###Output
_____no_output_____
###Markdown
My Model
###Code
x_train, x_test, y_train, y_test = train_test_split(x_l, y_l, test_size=0.30, shuffle=True)
y_test = np.argmax(y_test, axis=1)
preds = np.zeros((x_test.shape[0], 10))
for i in range(10):
reg = log_regressors_ova[i]
y_pred = reg.predict(x_test)
preds[:, i] = y_pred.flatten()
final_pred = np.argmax(preds, axis=1)
print('accuracy : {a}'.format(a=accuracy_score(y_test, final_pred)))
print('f1 score : {a}'.format(a = f1_score(y_test, final_pred, average='weighted')))
sns.heatmap(confusion_matrix(y_test, final_pred))
###Output
accuracy : 0.6252019386106623
f1 score : 0.6135143865654283
###Markdown
Scikit Learn Model
###Code
x_train, x_test, y_train, y_test = train_test_split(x_l, y_l, test_size=0.30, shuffle=True)
clf = LogisticRegression(random_state=42, max_iter=1000, multi_class='ovr')
clf.fit(x_train, np.argmax(y_train, axis=1))
pred = clf.predict(x_test)
y_test = np.argmax(y_test, axis=1)
print('accuracy : {a}'.format(a=accuracy_score(y_test, pred)))
print('f1 score : {a}'.format(a = f1_score(y_test, pred, average='weighted')))
sns.heatmap(confusion_matrix(y_test, pred))
###Output
accuracy : 0.7609046849757674
f1 score : 0.7615100447146925
###Markdown
One Vs One classifier
###Code
log_regressors_ovo = []
for i in range(10):
for j in range(i+1,10):
mask1 = (y_l[:, i] >= 1.0 - 1e-6)
mask0 = (y_l[:, j] >= 1.0 - 1e-6)
x_pos = x_l[mask1]
x_neg = x_l[mask0]
y_pos = [1]*(x_pos.shape[0])
y_neg = [0]*(x_neg.shape[0])
y_new = y_pos + y_neg
y_new = np.array(y_new)
x_new = np.vstack((x_pos, x_neg))
x_train, x_test, y_train, y_test = train_test_split(x_new, y_new, test_size=0.20, shuffle=True)
reg = MyLogisticRegression(x_train, y_train)
reg.gradient_descent()
y_pred = reg.predict(x_test)
pred = y_pred >= 0.5
pred = pred.astype(int)
log_regressors_ovo.append(reg)
x_train, x_test, y_train, y_test = train_test_split(x_l, y_l, test_size=0.30, shuffle=True)
predictions = np.zeros((x_test.shape[0], len(log_regressors_ovo)))
ind = 0
for i in range(10):
for j in range(i+1, 10):
reg = log_regressors_ovo[ind]
y_pred = reg.predict(x_test)
predictions[(y_pred >= 0.5).flatten(), ind] = i
predictions[(y_pred < 0.5).flatten(), ind] = j
ind += 1
###Output
_____no_output_____
###Markdown
My Model
###Code
def give_max(x):
return np.argmax(np.bincount(x))
final_pred = np.vectorize(give_max,signature='(n)->()')(predictions.astype(int))
y_test = np.argmax(y_test, axis=1)
print('accuracy : {a}'.format(a=accuracy_score(y_test, final_pred)))
print('f1 score : {a}'.format(a = f1_score(y_test, final_pred, average='weighted')))
sns.heatmap(confusion_matrix(y_test, final_pred))
###Output
accuracy : 0.6607431340872375
f1 score : 0.6573875286236148
###Markdown
Scikit Learn Model
###Code
x_train, x_test, y_train, y_test = train_test_split(x_l, y_l, test_size=0.30, shuffle=True)
clf = OneVsOneClassifier(LogisticRegression(random_state=42, max_iter=10000))
clf.fit(x_train, np.argmax(y_train, axis=1))
pred = clf.predict(x_test)
y_test = np.argmax(y_test, axis=1)
print('accuracy : {a}'.format(a=accuracy_score(y_test, pred)))
print('f1 score : {a}'.format(a = f1_score(y_test, pred, average='weighted')))
sns.heatmap(confusion_matrix(y_test, pred))
###Output
accuracy : 0.8109854604200323
f1 score : 0.8118301573795765
###Markdown
Bonus The Images are labelled according to 'True label - Predicted Label' format as you can see below
###Code
x_l = x_l.reshape(2062, 64, 64)
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
x_train, x_test, y_train, y_test = train_test_split(x_l, y_l, test_size=0.10, shuffle=True)
xx = x_test.reshape(x_test.shape[0], -1)
predictions = np.zeros((xx.shape[0], len(log_regressors_ovo)))
ind = 0
for i in range(10):
for j in range(i+1, 10):
reg = log_regressors_ovo[ind]
y_pred = reg.predict(xx)
predictions[(y_pred >= 0.5).flatten(), ind] = i
predictions[(y_pred < 0.5).flatten(), ind] = j
ind += 1
def give_max(x):
return np.argmax(np.bincount(x))
final_pred = np.vectorize(give_max,signature='(n)->()')(predictions.astype(int))
plt.figure(figsize=(10,10))
for i in range(16):
plt.subplot(4,4,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(x_test[i], cmap=plt.cm.binary)
plt.xlabel(str(class_names[np.argmax(y_test[i])]) + '-' + str(class_names[final_pred[i]])) # this is done to get the index where one is present
plt.show()
###Output
_____no_output_____ |
3-Readings/s22-ex1-deploy.ipynb | ###Markdown
ENGR 1330-2022-1 Exam1-Laboratory Portion **McIntyre, Emma****RR11758171**ENGR 1330 Exam 1 - Laboratory/Programming Skills---**Download** (right-click, save target as ...) this page as a jupyterlab notebook from: [s22-ex1-deploy.ipynb](http://54.243.252.9/engr-1330-webroot/5-ExamProblems/Exam1/Exam1/spring2022/s22-ex1-deploy.ipynb)**If you are unable to download the file, create an empty notebook and copy paste the problems into Markdown cells and Code cells (problem-by-problem)** --- Problem 1 (10 pts) : *Profile your computer*Execute the code cell below exactly as written. If you get an error just continue to the remaining problems.
###Code
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
###Output
DESKTOP-4UUCIGD
desktop-4uucigd\emmal
C:\Users\emmal\anaconda3\python.exe
3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)]
sys.version_info(major=3, minor=9, micro=7, releaselevel='final', serial=0)
###Markdown
--- Problem 2 (10 pts): *input(),typecast, string reversal, comparison based selection, print()*Build a script where the user will supply a number, then determine if it is a palindrome number. A palindrome number is a number that is the same after reversal. For example, 545 is a palindrome number.- Case 1: 545- Case 2: 123- Case 3: 666
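As a quick illustration of the idea (a sketch only, not the required exam solution), the check can be expressed as comparing the number's string form with its reverse:
```python
# minimal palindrome-number check via string reversal (illustrative sketch)
def is_palindrome(n: int) -> bool:
    s = str(n)
    return s == s[::-1]

print(is_palindrome(545), is_palindrome(123), is_palindrome(666))  # True False True
```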
###Code
numberString = input("give a number for n")
#check if the number of digits is even or add
#get number of digits
numDigits = len(numberString)
#check if number is even
if (numDigits % 2 == 0):
#cut the string in half
middlePoint = numDigits // 2
firstHalf = numberString[0:middlePoint]
    secondHalf = numberString[middlePoint:]
#reverse first half
reverseFirstHalf = firstHalf[::-1]
#compare to second half
if(reverseFirstHalf == secondHalf):
print(True)
else:
print(False)
else:
#cut the string in half
middlePoint = numDigits // 2
    firstHalf = numberString[0:middlePoint]
secondHalf = numberString[middlePoint+1:]
#reverse first half
reverseFirstHalf = firstHalf[::-1]
#compare to second half
if(reverseFirstHalf == secondHalf):
print(True)
else:
print(False)
# Case 1
numberString = input("give a number for n")
#check if the number of digits is even or add
#get number of digits
numDigits = len(numberString)
#check if number is even
if (numDigits % 2 == 0):
#cut the string in half
middlePoint = numDigits // 2
firstHalf = numberString[0:middlePoint]
    secondHalf = numberString[middlePoint:]
#reverse first half
reverseFirstHalf = firstHalf[::-1]
#compare to second half
if(reverseFirstHalf == secondHalf):
print(True)
else:
print(False)
else:
#cut the string in half
middlePoint = numDigits // 2
    firstHalf = numberString[0:middlePoint]
secondHalf = numberString[middlePoint+1:]
#reverse first half
reverseFirstHalf = firstHalf[::-1]
#compare to second half
if(reverseFirstHalf == secondHalf):
print(True)
else:
print(False)
# Case 2
numberString = input("give a number for n")
#check if the number of digits is even or add
#get number of digits
numDigits = len(numberString)
#check if number is even
if (numDigits % 2 == 0):
#cut the string in half
middlePoint = numDigits // 2
firstHalf = numberString[0:middlePoint]
    secondHalf = numberString[middlePoint:]
#reverse first half
reverseFirstHalf = firstHalf[::-1]
#compare to second half
if(reverseFirstHalf == secondHalf):
print(True)
else:
print(False)
else:
#cut the string in half
middlePoint = numDigits // 2
    firstHalf = numberString[0:middlePoint]
secondHalf = numberString[middlePoint+1:]
#reverse first half
reverseFirstHalf = firstHalf[::-1]
#compare to second half
if(reverseFirstHalf == secondHalf):
print(True)
else:
print(False)
###Output
give a number for n666
False
###Markdown
--- Problem 3 (15 pts): *len(),compare,accumulator, populate an empty list,for loop, print()*Two lists are defined as```x= [1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]f_of_x = [1.543,1.668,1.811,1.971,2.151,2.352,2.577,2.828,3.107]```Create a script that determines the length of each list and if they are the same length then print the contents of each list row-wise, and the running sum of `f_of_x` so the output looks like```--x-- --f_of_x-- --sum--1.0 1.543 1.5431.1 1.668 3.211... ... ...... ... ...1.7 2.828 16.9011.8 3.107 20.008```Test your script using the two lists above, then with the two lists below:```x= [1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]f_of_x =[1.543, 3.211, 5.022, 6.993, 9.144, 11.496, 14.073, 16.901, 20.008]```
###Code
# define variables
# Case 1
x = [1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]
f_of_x = [1.543,1.668,1.811,1.971,2.151,2.352,2.577,2.828,3.107]
# validate lengths
# initialize accumulator and empty list to store a running sum
Sum = []
# print header line
print("--x-- --f_of_x-- --sum--")
# repetition (for loop) structure
if (len(x)) == (len(f_of_x)):
for i in range(0,(len(x)),1):
Sum = Sum + [f_of_x[i]]
results = sum(Sum)
print( x[i]," ", f_of_x[i]," ", results)
# define variables
# Case 2
x= [1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]
f_of_x =[1.543, 3.211, 5.022, 6.993, 9.144, 11.496, 14.073, 16.901, 20.008]
Sum = []
print("--x-- --f_of_x-- --sum--")
if (len(x)) == (len(f_of_x)):
for i in range(0,(len(x)),1):
Sum = Sum + [f_of_x[i]]
results = sum(Sum)
print( x[i]," ", f_of_x[i]," ", results)
###Output
--x-- --f_of_x-- --sum--
1.8 20.008 88.39099999999999
###Markdown
--- Problem 4 Function (15 points) : *def ..., input(),typecast,arithmetic based selection, print()* Build a function that takes as input two integer numbers. The function should return their product if the product is greater than 666, otherwise the function should return their sum. Employ the function in an interactive script and test the following cases:- Case 1: 65 and 10- Case 2: 66 and 11- Case 3: 25 and 5
###Code
n1 = int(input("give a integer for n1"))
#take n2
n2 = int(input("give a integer for n2"))
#return the product of n1 and n2 if the product is > 666
#check product > 666
product1 = n1 * n2
sum1 = n1 + n2
print("product1 (n1 * n2) is:", product1)
print("sum1 (n1 + n2) is:", sum1)
if (product1 > 666):
print(product1)
#return the sum of n1 and n2 if the product is <= 666
else:
print(sum1)
# Case 1
n1 = int(input("give a integer for n1"))
#take n2
n2 = int(input("give a integer for n2"))
#return the product of n1 and n2 if the product is > 666
#check product > 666
product1 = n1 * n2
sum1 = n1 + n2
print("product1 (n1 * n2) is:", product1)
print("sum1 (n1 + n2) is:", sum1)
if (product1 > 666):
print(product1)
#return the sum of n1 and n2 if the product is <= 666
else:
print(sum1)
# Case 2
n1 = int(input("give a integer for n1"))
#take n2
n2 = int(input("give a integer for n2"))
#return the product of n1 and n2 if the product is > 666
#check product > 666
product1 = n1 * n2
sum1 = n1 + n2
print("product1 (n1 * n2) is:", product1)
print("sum1 (n1 + n2) is:", sum1)
if (product1 > 666):
print(product1)
#return the sum of n1 and n2 if the product is <= 666
else:
print(sum1)
# Case 3
n1 = int(input("give a integer for n1"))
#take n2
n2 = int(input("give a integer for n2"))
#return the product of n1 and n2 if the product is > 666
#check product > 666
product1 = n1 * n2
sum1 = n1 + n2
print("product1 (n1 * n2) is:", product1)
print("sum1 (n1 + n2) is:", sum1)
if (product1 > 666):
print(product1)
#return the sum of n1 and n2 if the product is <= 666
else:
print(sum1)
###Output
give a integer for n125
give a integer for n25
product1 (n1 * n2) is: 125
sum1 (n1 + n2) is: 30
30
|
hw2/DL_Evonne.ipynb | ###Markdown
EMNIST baseline accuracy = 81% or higher. Model Improvement must beat the baseline.
###Code
import torch
import torchvision
import torchvision.transforms as transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # use the GPU when available

# Transform
# Incoming images have values in [0, 255] (grayscale); after ToTensor (each pixel divided by 255)
# the values lie in [0, 1], and after Normalize they lie in [-1, 1]
# rotate turns the original image by -90 degrees, and hflip mirrors the image horizontally
transform = transforms.Compose(
[lambda img : transforms.functional.rotate(img,-90),
lambda img : transforms.functional.hflip(img),
transforms.ToTensor(),
transforms.Normalize((0.5), (0.5))])
# Data
trainSet = torchvision.datasets.EMNIST(root="data/", split="byclass", download=True, train=True, transform=transform)
testSet = torchvision.datasets.EMNIST(root="data/", split="byclass", download=True, train=False, transform=transform)
trainLoader = torch.utils.data.DataLoader(trainSet, batch_size=64, shuffle=True)
testLoader = torch.utils.data.DataLoader(testSet, batch_size=64, shuffle=False)
trainSet
testSet
print("Total No of Images in EMNIST dataset:", len(trainSet) + len(testSet))
print("No of images in Training dataset: ",len(trainSet))
print("No of images in Testing dataset: ",len(testSet))
l = trainSet.classes
l.sort()
print("No of classes: ",len(l))
print("List of all classes")
print(l)
classes=l
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
for i, data in enumerate(trainLoader, 1):
images, labels = data
break
# show images
imshow(torchvision.utils.make_grid(images))
print(images.shape)
import torch.nn as nn
import torch.nn.functional as F
# Model
# Input batches have shape (batch_size, channels, height, width); here EMNIST gives (64, 1, 28, 28),
# flattened to 784 features in the training loop before being passed to fc1
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(784,500)
# self.fc2 = nn.Linear(500,350)
self.fc2 = nn.Linear(500,200)
self.dropout1 = nn.Dropout(p=0.2)
self.fc3 = nn.Linear(200,100)
self.dropout2 = nn.Dropout(p=0.2)
self.fc4 = nn.Linear(100,62)
def forward(self, x):
x = F.relu(self.fc1(x))
# x = F.relu(self.fc2(x))
x = F.relu(self.fc2(x))
x = self.dropout1(x)
x = F.relu(self.fc3(x))
x = self.dropout2(x)
x = F.leaky_relu(self.fc4(x))
return x
net = Net().to(device)
print(net)
tensor_dict = net.state_dict()
tensor_list = list(tensor_dict.items())
print('\nModel Parameters:')
for layer_tensor_name, tensor in tensor_list:
print('Layer {}: {} elements'.format(layer_tensor_name, torch.numel(tensor)))
# Parameters
import torch.optim as optim
x = []
y = []
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001) # learning_rate = 0.001
for epoch in range(5): # loop over the dataset multiple times
x.append(epoch)
running_loss = 0.0
for i, data in enumerate(trainLoader, 1):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data[0].to(device), data[1].to(device)
inputs = inputs.view(inputs.shape[0], -1)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 5000 == 4999: # print every 5000 batch
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 5000))
y.append(running_loss / 5000)
running_loss = 0.0
print('Finished Training')
PATH = './cifar_net_baseline.pth'
torch.save(net.state_dict(), PATH)
import matplotlib.pyplot as plt
x=np.array([1,2,3,4,5,6,7,8,9,10])
plt.xlabel('Epoches')
plt.ylabel('Training Loss')
plt.title('Training Procedure')
plt.xticks(x, ['1', '1(10000)','2', '2(10000)','3', '3(10000)','4', '4(10000)','5', '5(10000)'])
plt.plot(x,y)
plt.show()
# x = np.array([0,1,2,3])
# y = np.array([20,21,22,23])
# my_xticks = ['John','Arnold','Mavis','Matt']
# plt.xticks(x, my_xticks)
# plt.plot(x, y)
# plt.show()
# Test
correct = 0
total = 0
class_correct = [0 for i in range(62)]
class_total = [0 for i in range(62)]
for i, data in enumerate(testLoader, 1):
with torch.no_grad():
inputs, labels = data[0].to(device), data[1].to(device)
inputs = inputs.view(inputs.shape[0], -1)
outputs = net(inputs)
_, predicted = torch.max(outputs, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
c = (predicted == labels)
for i in range(len(labels)):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
# print(class_correct)
# print(class_total)
print('Accuracy of the network on the 10000 test images: %d %%' % (100*correct / total))
for i in range(62):
print('Accuracy of %d: %3f' % (i, (class_correct[i]/class_total[i])))
for i, data in enumerate(testLoader, 1):
with torch.no_grad():
images, labels = data
inputs, labels = data[0].to(device), data[1].to(device)
inputs = inputs.view(inputs.shape[0], -1)
outputs = net(inputs)
_, predicted = torch.max(outputs, 1)
break
# show images
imshow(torchvision.utils.make_grid(images))
print(','.join('%5s' % classes[j] for j in predicted.cpu().numpy()))
from sklearn.metrics import confusion_matrix
confusion_matrix = torch.zeros(len(classes), len(classes))
for i, (datas, labels) in enumerate(testLoader, 1):
inputs = datas.to(device)
labels = labels.to(device)
inputs = inputs.view(inputs.shape[0], -1)
outputs = net(inputs)
_, preds = torch.max(outputs, 1)
for t, p in zip(labels.view(-1), preds.view(-1)):
# print(t, p)
confusion_matrix[t.long(), p.long()] += 1
print(confusion_matrix)
print(confusion_matrix.diag()/confusion_matrix.sum(1))
import pandas as pd
import seaborn as sns
plt.figure(figsize = (35,20))
ax = sns.heatmap(confusion_matrix, annot=True, cmap='Blues', fmt='g')
ax.set_title('Seaborn Confusion Matrix with labels\n');
ax.set_xlabel('\nPredicted Values')
ax.set_ylabel('Actual Values ');
## Ticket labels - List must be in alphabetical order
ax.xaxis.set_ticklabels(classes)
ax.yaxis.set_ticklabels(classes)
## Display the visualization of the Confusion Matrix.
plt.show()
# plt.savefig("Evoone_DL2")
# Notes to go with slide P11 of the lecturer's HW2 deck -- points to watch out for:
# net.train(True)
# for i, data in enumerate(trainLoader, 1):
# net.train(False)
# for i, data in enumerate(testLoader, 1):
# with torch.no_grad():
# net.train(?) ? = True or False
# net.train() mainly tells the dropout and batchnorm layers to switch between train and val (eval) modes.
# In train mode, the dropout layer randomly zeroes activation units according to its configured probability p,
# and the batchnorm layer keeps computing and updating the running mean and var of the data.
# In val mode, the dropout layer lets all activation units pass through, while the batchnorm layer stops
# computing and updating mean and var and directly uses the values learned during training.
# Switching modes does not change how gradients are computed: gradient computation and storage behave exactly
# as in training mode; you simply do not run backpropagation.
# with torch.no_grad(), on the other hand, stops the autograd module altogether, which speeds things up and
# saves GPU memory: gradient computation is disabled entirely, but it does not affect the behaviour of the
# dropout and batchnorm layers.
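#
# A minimal runnable sketch of the pattern described above (an illustration only; it reuses
# `net`, `testLoader` and `device` that were defined earlier in this notebook):
net.train(False)                # eval mode: dropout passes everything, batchnorm uses its learned statistics
with torch.no_grad():           # disable autograd bookkeeping to save GPU memory and compute
    for data in testLoader:
        inputs, labels = data[0].to(device), data[1].to(device)
        inputs = inputs.view(inputs.shape[0], -1)
        _ = net(inputs)         # forward pass only; no gradients are tracked
        break                   # one batch is enough for the illustration
net.train(True)                 # switch back to training mode before any further training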
###Output
_____no_output_____ |
Pandas DataFrames.ipynb | ###Markdown
Pandas Crash Course DataFrames A DataFrame contains multiple Series that share the same index. It is a tabular data structure.
###Code
import numpy as np
import pandas as pd
rand_mat = np.random.randn(5,4)
rand_mat
pd.DataFrame(data =rand_mat) # create dataframe from numpy array
df = pd.DataFrame(data =rand_mat, index='A B C D E'.split(), columns='W X Y Z'.split())
df
###Output
_____no_output_____
###Markdown
Accessing columns
###Code
df['W'] # access column
type(df['W'])
df[['W','Y']] # access multiple columns
type(df[['W','Y']])
df['NEW'] = df['W']+df['Y'] # add a column
df
df.drop('NEW',axis=1) # axis=0 row, axis=1 col
df # does not drop the column
df.drop('NEW',axis=1,inplace=True) # or df = df.drop('NEW',axis=1)
df
###Output
_____no_output_____
###Markdown
Accessing rows
###Code
df.drop('A') # default axis=0 and inplace=False
df
df.loc['A'] # access row location
type(df.loc['A'])
df.iloc[0] # access row with index location
type(df.iloc[0])
df.loc[['A','C']] # access multiple rows
type(df.loc[['A','C']])
df.loc[['A','C']][['Y','Z']] # select subset of the dataframe
df.loc[['A','C'],['Y','Z']] # select subset of the dataframe
###Output
_____no_output_____
###Markdown
Conditional Filtering The `and`/`or` operators do not work with pd.Series; use `&` and `|` instead. When combining more than one condition, each condition should be wrapped in parentheses ().
###Code
df>0 # conditional operations
df[df>0] # conditional acess
df['W']>0
df[df['W']>0] # conditional acess on columns
df[df['W']>0][['X','Y']]
df[(df['W']>0) & (df['Y']<0.5)]
###Output
_____no_output_____
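###Markdown
The cells above only combine conditions with `&`; as a quick sketch, `|` (or) and `~` (not) follow the same parenthesis rule:
###Code
df[(df['W']>0) | (df['Y']<0)]   # rows where either condition holds
df[~(df['W']>0)]                # rows where the condition does not hold
###Output
_____no_output_____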
###Markdown
Dataframe Index
###Code
df.reset_index() # not inplace
new_ind = 'CA NY WY OR CO'.split()
df['States'] = new_ind
df
df.set_index('States') # not inplace
df.set_index('States',inplace=True)
df
###Output
_____no_output_____
###Markdown
Dataframe Summary
###Code
df.info()
df.dtypes
df.describe()
df['W']>0
series = df['W']>0
series.value_counts()
###Output
_____no_output_____ |
Lesson07/Activity09/Activity09_Extracting_top_100_ebooks_Solution.ipynb | ###Markdown
Lesson 7 Activity 1: Top 100 ebooks' name extraction from Gutenberg.org What is Project Gutenberg? - Project Gutenberg is a volunteer effort to digitize and archive cultural works, to "encourage the creation and distribution of eBooks". It was founded in 1971 by American writer Michael S. Hart and is the **oldest digital library.** This longest-established ebook project releases books that entered the public domain, and can be freely read or downloaded in various electronic formats. What is this activity all about?* **This activity aims to scrape the url of the Project Gutenberg's Top 100 ebooks (yesterday's ranking) for identifying the ebook links. *** **It uses BeautifulSoup4 for parsing the HTML and regular expression code for identifying the Top 100 ebook file numbers.*** **You can use those book ID numbers to download the book into your local drive if you want** Import necessary libraries including regex, and beautifulsoup
###Code
import urllib.request, urllib.parse, urllib.error
import requests
from bs4 import BeautifulSoup
import ssl
import re
###Output
_____no_output_____
###Markdown
Ignore SSL errors (this code will be given)
###Code
# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
###Output
_____no_output_____
###Markdown
Read the HTML from the URL
###Code
# Read the HTML from the URL and pass on to BeautifulSoup
top100url = 'https://www.gutenberg.org/browse/scores/top'
response = requests.get(top100url)
###Output
_____no_output_____
###Markdown
Write a small function to check the status of the web request
###Code
def status_check(r):
if r.status_code==200:
print("Success!")
return 1
else:
print("Failed!")
return -1
status_check(response)
###Output
Success!
###Markdown
Decode the response and pass on to `BeautifulSoup` for HTML parsing
###Code
contents = response.content.decode(response.encoding)
soup = BeautifulSoup(contents, 'html.parser')
###Output
_____no_output_____
###Markdown
Find all the _href_ tags and store them in the list of links. Check what the list looks like - print the first 30 elements
###Code
# Empty list to hold all the http links in the HTML page
lst_links=[]
# Find all the href tags and store them in the list of links
for link in soup.find_all('a'):
#print(link.get('href'))
lst_links.append(link.get('href'))
lst_links[:30]
###Output
_____no_output_____
###Markdown
Use a regular expression to find the numeric digits in these links. These are the file numbers for the Top 100 books. Initialize an empty list to hold the file numbers
###Code
booknum=[]
###Output
_____no_output_____
###Markdown
* Numbers 19 to 118 in the original list of links hold the Top 100 ebooks' file numbers. * Loop over the appropriate range and use regex to find the numeric digits in the link (href) string. * Hint: Use the `findall()` method
###Code
for i in range(19,119):
link=lst_links[i]
link=link.strip()
# Regular expression to find the numeric digits in the link (href) string
n=re.findall('[0-9]+',link)
if len(n)==1:
# Append the filenumber casted as integer
booknum.append(int(n[0]))
###Output
_____no_output_____
###Markdown
Print the file numbers
###Code
print ("\nThe file numbers for the top 100 ebooks on Gutenberg are shown below\n"+"-"*70)
print(booknum)
###Output
The file numbers for the top 100 ebooks on Gutenberg are shown below
----------------------------------------------------------------------
[1342, 84, 1080, 46, 219, 2542, 98, 345, 2701, 844, 11, 5200, 43, 16328, 76, 74, 1952, 6130, 2591, 1661, 41, 174, 23, 1260, 1497, 408, 3207, 1400, 30254, 58271, 1232, 25344, 58269, 158, 44881, 1322, 205, 2554, 1184, 2600, 120, 16, 58276, 5740, 34901, 28054, 829, 33, 2814, 4300, 100, 55, 160, 1404, 786, 58267, 3600, 19942, 8800, 514, 244, 2500, 2852, 135, 768, 58263, 1251, 3825, 779, 58262, 203, 730, 20203, 35, 1250, 45, 161, 30360, 7370, 58274, 209, 27827, 58256, 33283, 4363, 375, 996, 58270, 521, 58268, 36, 815, 1934, 3296, 58279, 105, 2148, 932, 1064, 13415]
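###Markdown
As mentioned in the introduction, these file numbers can be used to fetch the books themselves. A small sketch (the exact download-file URL layout can vary from book to book, so treat this as illustrative only):
###Code
# Build the landing-page URL of the top-ranked book from its file number
book_id = booknum[0]
book_url = 'https://www.gutenberg.org/ebooks/' + str(book_id)
print(book_url)
###Output
_____no_output_____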
###Markdown
What does the `soup` object's text look like? Use the `.text` attribute and print only the first 2000 characters (i.e. do not print the whole thing, it is long). You will notice a lot of empty spaces/blanks here and there. Ignore them. They are part of the HTML page markup and its whimsical nature!
###Code
print(soup.text[:2000])
###Output
if (top != self) {
top.location.replace ('http://www.gutenberg.org');
alert ('Project Gutenberg is a FREE service with NO membership required. If you paid somebody else to get here, make them give you your money back!');
}
Top 100 - Project Gutenberg
Online Book Catalog
=>
Book Search
-- Recent Books
-- Top 100
-- Offline Catalogs
-- My Bookmarks
Main Page
Project Gutenberg needs your donation!
More Info
Did you know that you can help us produce ebooks
by proof-reading just one page a day?
Go to: Distributed Proofreaders
Top 100
To determine the ranking we count the times each file gets downloaded.
Both HTTP and FTP transfers are counted.
Only transfers from ibiblio.org are counted as we have no access to our mirrors log files.
Multiple downloads from the same IP address on the same day count as one download.
IP addresses that download more than 100 files a day are considered
robots and are not considered.
Books made out of multiple files like most audio books are counted
if any file is downloaded.
Downloaded Books
2018-11-13127018
last 7 days809583
last 30 days3292793
Pretty Pictures
Top 100 EBooks yesterday —
Top 100 Authors yesterday —
Top 100 EBooks last 7 days —
Top 100 Authors last 7 days —
Top 100 EBooks last 30 days —
Top 100 Authors last 30 days
Top 100 EBooks yesterday
Pride and Prejudice by Jane Austen (1826)
Frankenstein; Or, The Modern Prometheus by Mary Wollstonecraft Shelley (1367)
A Modest Proposal by Jonathan Swift (1020)
A Christmas Carol in Prose; Being a Ghost Story of Christmas by Charles Dickens (953)
Heart of Darkness by Joseph Conrad (887)
Et dukkehjem. English by Henrik Ibsen (761)
A Tale of Two Cities by Charles Dickens (741)
Dracula by Bram Stoker (732)
Moby Dick; Or, The Whale by Herman Melville (651)
The Importance of Being Earnest: A Trivial Comedy for Serious People by Oscar Wilde (646)
Alice's Adventures in Wonderland by Lewis Carrol
###Markdown
Search the text extracted from the `soup` object (using regular expressions) to find the names of the top 100 ebooks (yesterday's rank)
###Code
# Temp empty list of Ebook names
lst_titles_temp=[]
###Output
_____no_output_____
###Markdown
Create a starting index. It should point at the text _"Top 100 Ebooks yesterday"_. Hint: Use `splitlines()` method of the `soup.text`. It splits the lines of the text of the `soup` object.
###Code
start_idx=soup.text.splitlines().index('Top 100 EBooks yesterday')
###Output
_____no_output_____
###Markdown
Loop 1-100 to add the strings of next 100 lines to this temporary list. Hint: `splitlines()` method
###Code
for i in range(100):
lst_titles_temp.append(soup.text.splitlines()[start_idx+2+i])
###Output
_____no_output_____
###Markdown
Use a regular expression to extract only the text from the name strings and append it to an empty list. * Hint: Use `match` and `span` to find the indices and use them
###Code
lst_titles=[]
for i in range(100):
id1,id2=re.match('^[a-zA-Z ]*',lst_titles_temp[i]).span()
lst_titles.append(lst_titles_temp[i][id1:id2])
###Output
_____no_output_____
###Markdown
Print the list of titles
###Code
for l in lst_titles:
print(l)
###Output
Pride and Prejudice by Jane Austen
Frankenstein
A Modest Proposal by Jonathan Swift
A Christmas Carol in Prose
Heart of Darkness by Joseph Conrad
Et dukkehjem
A Tale of Two Cities by Charles Dickens
Dracula by Bram Stoker
Moby Dick
The Importance of Being Earnest
Alice
Metamorphosis by Franz Kafka
The Strange Case of Dr
Beowulf
Adventures of Huckleberry Finn by Mark Twain
The Adventures of Tom Sawyer by Mark Twain
The Yellow Wallpaper by Charlotte Perkins Gilman
The Iliad by Homer
Grimms
The Adventures of Sherlock Holmes by Arthur Conan Doyle
The Legend of Sleepy Hollow by Washington Irving
The Picture of Dorian Gray by Oscar Wilde
Narrative of the Life of Frederick Douglass
Jane Eyre
The Republic by Plato
The Souls of Black Folk by W
Leviathan by Thomas Hobbes
Great Expectations by Charles Dickens
The Romance of Lust
The Tower of London by William Benham
Il Principe
The Scarlet Letter by Nathaniel Hawthorne
Emma by Jane Austen
Confessions of a Thug by Meadows Taylor
Leaves of Grass by Walt Whitman
Walden
Prestuplenie i nakazanie
The Count of Monte Cristo
War and Peace by graf Leo Tolstoy
Treasure Island by Robert Louis Stevenson
Peter Pan by J
The Florist and Horticultural Journal
Tractatus Logico
On Liberty by John Stuart Mill
The Brothers Karamazov by Fyodor Dostoyevsky
Gulliver
The Scarlet Letter by Nathaniel Hawthorne
Dubliners by James Joyce
Ulysses by James Joyce
The Complete Works of William Shakespeare by William Shakespeare
The Wonderful Wizard of Oz by L
The Awakening
The Federalist Papers by Alexander Hamilton and John Jay and James Madison
Hard Times by Charles Dickens
The Delinquent
Essays of Michel de Montaigne
Candide by Voltaire
The Divine Comedy by Dante
Little Women by Louisa May Alcott
A Study in Scarlet by Arthur Conan Doyle
Siddhartha by Hermann Hesse
The Hound of the Baskervilles by Arthur Conan Doyle
Les Mis
Wuthering Heights by Emily Bront
The Candle and the Cat by Mary Finley Leonard
Le Morte d
Pygmalion by Bernard Shaw
The Tragical History of Doctor Faustus by Christopher Marlowe
Captain John
Uncle Tom
Oliver Twist by Charles Dickens
Autobiography of Benjamin Franklin by Benjamin Franklin
The Time Machine by H
Anthem by Ayn Rand
Anne of Green Gables by L
Sense and Sensibility by Jane Austen
My Secret Life
Second Treatise of Government by John Locke
The Tragic Story of the Empress of Ireland by Logan Marshall
The Turn of the Screw by Henry James
The Kama Sutra of Vatsyayana by Vatsyayana
The Russian Army and the Japanese War
Calculus Made Easy by Silvanus P
Beyond Good and Evil by Friedrich Wilhelm Nietzsche
An Occurrence at Owl Creek Bridge by Ambrose Bierce
Don Quixote by Miguel de Cervantes Saavedra
Blue Jackets by Edward Greey
The Life and Adventures of Robinson Crusoe by Daniel Defoe
The Waterloo Campaign
The War of the Worlds by H
Democracy in America
Songs of Innocence
The Confessions of St
Modern French Masters by Marie Van Vorst
Persuasion by Jane Austen
The Works of Edgar Allan Poe
The Fall of the House of Usher by Edgar Allan Poe
The Masque of the Red Death by Edgar Allan Poe
The Lady with the Dog and Other Stories by Anton Pavlovich Chekhov
|
nbs/95_nc_gru2gru.ipynb | ###Markdown
Try news commentary data with gru2gru model> Datasets
###Code
small_dss = get_nc_dss(tok_data_loc, enc_tokenizer, dec_tokenizer, enc_seq_len, dec_seq_len, pct=0.2)
dss = get_nc_dss(tok_data_loc, enc_tokenizer, dec_tokenizer, enc_seq_len, dec_seq_len)
###Output
_____no_output_____
###Markdown
Model
###Code
enc_vocab_size = len(enc_tokenizer)
enc_pad_id = enc_tokenizer.pad_token_id
dec_vocab_size = len(dec_tokenizer)
dec_pad_id = dec_tokenizer.pad_token_id
embeded_size = 768
num_encoder_layers = 2
num_decoder_layers = 2
drop_p = 0.1
%xdel gru2gru
%xdel decoder
%xdel encoder
encoder = GRUEncoder(enc_vocab_size, embeded_size, enc_pad_id, num_encoder_layers, drop_p)
decoder = GRUDecoder(dec_vocab_size, embeded_size, dec_pad_id, num_decoder_layers, drop_p)
gru2gru = GRU2GRU(encoder, decoder, num_encoder_layers, num_decoder_layers)
###Output
NameError: name 'gru2gru' is not defined
NameError: name 'decoder' is not defined
NameError: name 'encoder' is not defined
###Markdown
Learner and Train
###Code
%xdel dls
%xdel learn
# dls = small_dss.dataloaders(bs=64)
dls = dss.dataloaders(bs=64)
learn = Learner(dls,
gru2gru,
loss_func=CrossEntropyLossFlat(ignore_index=dec_pad_id),
opt_func=Adam,
metrics=[accuracy, Perplexity()],
).to_fp16()
learn.lr_find()
learn.fit_one_cycle(3, 2e-3)
torch.save(gru2gru.state_dict(), './models/1-nc_gru2gru.pt')
gru2gru.load_state_dict(torch.load('./models/1-nc_gru2gru.pt', map_location='cuda'))
###Output
_____no_output_____
###Markdown
Bleu
###Code
generated_gru2gru = GeneratedGRU2GRU(gru2gru, enc_tokenizer, dec_tokenizer)
generate_args = GenerateArgs(
max_length=30,
# do_sample=True,
num_beams=1,
# temperature=1.0,
# repetition_penalty=1,
# length_penalty=1.0,
)
compute_bleu(generated_gru2gru, generate_args, dec_tokenizer, dls.valid)
###Output
_____no_output_____
###Markdown
Generate
###Code
generate_args = GenerateArgs(
max_length=65,
# do_sample=True,
num_beams=3,
temperature=1.0,
# top_k=3,
# top_p=0.9,
# repetition_penalty=5,
# length_penalty=6,
)
src_strs = [
'They have their own vision; their own planners, architects, and engineers; and their own manpower.',
'As demand rises, more and better jobs will be created not only in Asia, but also globally, along supply chains and across production networks.',
'If the EU is to progress beyond the limits of a common economic and monetary policy and develop a defense and security policy along with a common foreign policy, the UK must be on board.',
'Today, when a new strain of influenza appears in Asia, scientists collect a throat swab, isolate the virus, and run the strain’s genetic sequence.',
'Other elements of Lee’s plan include construction of eco-friendly transportation networks, such as high-speed railways and hundreds of kilometers of bicycle tracks, and generating energy using waste methane from landfills.',
]
tgt_strs = [
'他们有自己的愿景,自己的规划师、建筑师和工程师,自己的劳动力。',
'随着需求不断攀升,不仅亚洲会创造出更多更好的就业机会,全球范围内的供应链及整个生产网络也将会从中受益。',
'如果欧盟想要突破共同的经济和货币政策的界限,在发展安全防卫政策的同时发展共同的外交政策,英国必须参与。',
'如今,当新的流感菌株在亚洲出现时,科学家收集咽喉棉签,分离病毒,测定毒株的基因序列。',
'李总统计划的其他要素还包括建设生态友好的运输网络,例如高速铁路以及几百公里长的自行车车道,并且从垃圾堆中利用甲烷来制造能源。',
]
result = generated_gru2gru.generate_from_strs(src_strs, generate_args, device='cuda:0')
result
###Output
_____no_output_____ |
Rafay notes/Samsung Course/Chapter 3/Quiz/Mohammad Abdul Rafay BSE183009-problem_0204.ipynb | ###Markdown
Quiz 0204
###Code
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Read in the data.
###Code
# Go to the directory where the data file is located.
# os.chdir(r'~~') # Please, replace the path with your own.
df = pd.read_csv('data_coffeeshop.csv', header='infer',na_values=[' '])
df.shape
df.head(5)
###Output
_____no_output_____
###Markdown
Answer the following questions. 1). Make a frequency table of 'yearOfStart' and visualize by year. - Sort by the year.- Draw a line plot from 1997 to 2014. <= Hint: plt.xlim()
###Code
freq = df.yearOfStart.value_counts()
my_counts= list(freq.values)
my_labels= list(freq.index)
df2 = pd.DataFrame({'counts': my_counts}, index=my_labels )
df2.plot.bar(color='orange')
plt.show()
###Output
_____no_output_____
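###Markdown
The bar chart above shows the raw counts; a sketch that also follows the sort / line-plot / 1997-2014 hints from the prompt could look like this:
###Code
freq_by_year = df.yearOfStart.value_counts().sort_index()   # frequency table sorted by year
plt.plot(freq_by_year.index, freq_by_year.values, color='orange')
plt.xlim(1997, 2014)
plt.xlabel('yearOfStart')
plt.ylabel('count')
plt.show()
###Output
_____no_output_____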
###Markdown
2). Now, split the data by the current state of business ('In' or 'Out' of business). Then, visualize the yearly trend of the 'yearOfStart' frequencies. - Sort by the year.- Draw two overlapping line plots from 1997 to 2014.- Use the 'figure' object.
###Code
In = df[df.CurrentState == 'In']
print(In)
Out = df[df.CurrentState == 'Out']
print(Out)
# yearly frequencies of 'yearOfStart' for each group, sorted by year
in_freq = In.yearOfStart.value_counts().sort_index()
out_freq = Out.yearOfStart.value_counts().sort_index()
fig = plt.figure(figsize=(8,5), dpi=100)
axes = fig.add_axes([0,0,1,1])
axes.plot(in_freq.index, in_freq.values, color='red', label='In')
axes.plot(out_freq.index, out_freq.values, color='blue', alpha=0.8, label='Out')
axes.set_xlim(1997, 2014)
axes.legend()
plt.show()
###Output
yearOfStart CurrentState sizeOfsite
0 2008.0 In 20.80
1 2010.0 In 212.72
2 2013.0 In 20.04
3 2012.0 In 64.17
5 2013.0 In 10.99
... ... ... ...
43168 2012.0 In 64.14
43172 2014.0 In 29.06
43178 2011.0 In 44.21
43179 2013.0 In 35.70
43181 2014.0 In 176.49
[30004 rows x 3 columns]
yearOfStart CurrentState sizeOfsite
4 2002.0 Out 11.40
10 2008.0 Out 23.33
15 2006.0 Out 43.00
16 2009.0 Out 15.46
28 2014.0 Out 40.00
... ... ... ...
43174 2003.0 Out 30.67
43175 2012.0 Out 199.76
43176 2008.0 Out 93.84
43177 2014.0 Out 30.61
43180 2011.0 Out 46.20
[13159 rows x 3 columns]
|
docs/start/03_transforms.ipynb | ###Markdown
Transforms Data does not always come in the final processed form required for training machine learning algorithms. We use `transforms` to perform some manipulation of the data and make it suitable for training. All TorchVision datasets have two parameters: `transform` to modify the features and `target_transform` to modify the labels. Both accept callables containing the transformation logic. The [torchvision.transforms](https://pytorch.org/vision/stable/transforms.html) module offers several commonly used transforms out of the box. The FashionMNIST features are in PIL Image format and the labels are integers. For training, we need the features as normalized tensors and the labels as one-hot encoded tensors. To make these transformations, we use `ToTensor` and `Lambda`.
###Code
import torch
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda
f = lambda y: torch.zeros(10,
dtype=torch.float).scatter_(0,
torch.tensor(y),
value=1)
ds = datasets.FashionMNIST(
root="../../datasets",
train=True,
download=True,
transform=ToTensor(),
target_transform=Lambda(f)
)
###Output
C:\Users\xinet\.conda\envs\torch\lib\site-packages\torchvision\datasets\mnist.py:498: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:180.)
return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)
###Markdown
`ToTensor()` `ToTensor` converts a `PIL` image or a NumPy `ndarray` into a `FloatTensor` and scales the image's pixel intensity values into the range $[0, 1]$. The `Lambda` transform `Lambda` applies any user-defined `lambda` function. Here we define a function to turn the integer label into a one-hot encoded tensor. It first creates a zero tensor of size 10 (the number of labels in the dataset) and then calls `scatter_`, which assigns `value=1` at the index given by the label `y`.
###Code
target_transform = Lambda(lambda y: torch.zeros(
10, dtype=torch.float).scatter_(dim=0, index=torch.tensor(y), value=1))
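# For illustration: with this transform an integer label such as 3 becomes the one-hot vector
# tensor([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])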
###Output
_____no_output_____ |
Applied_Machine_Learning_Datacuration.ipynb | ###Markdown
Authors: Zachary Strasser and William Funkbusch. Date: 11-2-2020 Import necessary modules
###Code
import pandas as pd
import numpy as np
import itertools
import math as m
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Read tsv file from BindingDB website
###Code
tsv_file= '/content/sample_data/BindingDB_PDSPKi3.tsv'
tsv_file = pd.read_table(tsv_file, sep='\t', error_bad_lines=False)
###Output
b'Skipping line 1717: expected 49 fields, saw 85\nSkipping line 1718: expected 49 fields, saw 85\nSkipping line 1719: expected 49 fields, saw 85\nSkipping line 1720: expected 49 fields, saw 85\nSkipping line 1721: expected 49 fields, saw 85\nSkipping line 3561: expected 49 fields, saw 85\nSkipping line 3562: expected 49 fields, saw 85\nSkipping line 3563: expected 49 fields, saw 85\nSkipping line 3564: expected 49 fields, saw 85\nSkipping line 3565: expected 49 fields, saw 85\nSkipping line 3566: expected 49 fields, saw 85\nSkipping line 3567: expected 49 fields, saw 85\nSkipping line 3568: expected 49 fields, saw 85\nSkipping line 3569: expected 49 fields, saw 85\nSkipping line 3570: expected 49 fields, saw 85\nSkipping line 3571: expected 49 fields, saw 85\nSkipping line 3572: expected 49 fields, saw 85\nSkipping line 3573: expected 49 fields, saw 85\nSkipping line 3574: expected 49 fields, saw 85\nSkipping line 4976: expected 49 fields, saw 85\nSkipping line 11152: expected 49 fields, saw 85\n'
###Markdown
Check columns
###Code
tsv_file.columns
###Output
_____no_output_____
###Markdown
Filter the necessary columns - SMILEs, AA chain, and Ki
###Code
tsv_file_short = tsv_file[['Ligand SMILES', 'BindingDB Target Chain Sequence', 'Ki (nM)']]
###Output
_____no_output_____
###Markdown
There are 27,712 SMILE and protein sequence pairs with associated Ki values.
###Code
tsv_file_short.head()
###Output
_____no_output_____
###Markdown
Check to see if any rows within the SMILES column have NaN
###Code
tsv_file_short[['Ligand SMILES']].isnull().values.any()
###Output
_____no_output_____
###Markdown
No rows have NaN in SMILES; now check the AA column
###Code
tsv_file_short[['BindingDB Target Chain Sequence']].isnull().values.any()
###Output
_____no_output_____
###Markdown
Check final column for null values. None found
###Code
tsv_file_short[['Ki (nM)']].isnull().values.any()
###Output
_____no_output_____
###Markdown
Convert PANDA into np.array
###Code
DBBind = tsv_file_short.to_numpy()
###Output
_____no_output_____
###Markdown
Remove all numbers from SMILES
###Code
value = len(DBBind[:,0])
for x in range((value)):
DBBind[x,0] = ''.join([i for i in DBBind[x,0] if not i.isdigit()])
###Output
_____no_output_____
###Markdown
First we want to cycle through the SMILES and convert two-character tokens that are really a single entity into one symbol: Br -> B, Cl -> K, @@ -> X. Substitute B for Br
###Code
for x in range(len(DBBind[:,0])):
s = DBBind[x,0]
for i in range(0, len(s)-1):
if s[i:i+2]=="Br":
s = s[:i]+'B' + s[i+2:]
DBBind[x,0] = s
###Output
_____no_output_____
###Markdown
Substitute K for Cl
###Code
for x in range(len(DBBind[:,0])):
s = DBBind[x,0]
for i in range(0, len(s)-1):
if s[i:i+2]=="Cl":
s = s[:i]+'K' + s[i+2:]
DBBind[x,0] = s
###Output
_____no_output_____
###Markdown
Substitute X for @@
###Code
for x in range(len(DBBind[:,0])):
s = DBBind[x,0]
for i in range(0, len(s)-1):
if s[i:i+2]=="@@":
s = s[:i]+'X' + s[i+2:]
DBBind[x,0] = s
###Output
_____no_output_____
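###Markdown
The three loops above can also be written more concisely; a sketch of an equivalent form using `str.replace` (same three substitutions):
###Code
for x in range(len(DBBind[:, 0])):
    DBBind[x, 0] = DBBind[x, 0].replace("Br", "B").replace("Cl", "K").replace("@@", "X")
###Output
_____no_output_____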
###Markdown
Check the length of each of the SMILES. Starting with the minimum and maximum
###Code
min([len(x) for x in DBBind[:,0].tolist()])
###Output
_____no_output_____
###Markdown
Minimum SMILE is a length of 5
###Code
max([len(x) for x in DBBind[:,0].tolist()])
###Output
_____no_output_____
###Markdown
The maximum SMILES length is 1132. Now check the minimum and maximum lengths of the protein sequences
###Code
min([len(x) for x in DBBind[:,1].tolist()])
###Output
_____no_output_____
###Markdown
Minimum protein AA is 11
###Code
max([len(x) for x in DBBind[:,1].tolist()])
###Output
_____no_output_____
###Markdown
Maximum protein AA is 4303 The vast majority of the ligands fall between 20 and 75 length. Therefore we removed any combinations with a SMILE length greater than 90.
###Code
value = len(DBBind[:,0])
place_holder = []
for x in range((value)):
if len(DBBind[x,0]) > 90:
place_holder.append(x)
DBBind = np.delete(DBBind, place_holder, axis=0)
###Output
_____no_output_____
###Markdown
Now we remove all proteins greater than 990 AA, which is about 100 pairs
###Code
value = len(DBBind[:,0])
place_holder = []
for x in range((value)):
if len(DBBind[x,1]) > 990:
place_holder.append(x)
DBBind = np.delete(DBBind, place_holder, axis=0)
###Output
_____no_output_____
###Markdown
Our new shape is (23,109 by 3) representing 23,109 pairs
###Code
DBBind.shape
###Output
_____no_output_____
###Markdown
For now we pad the SMILES strings with 0s so that the ligand lengths all equal 100, and we pad every protein AA sequence with 0s to bring it to 1000 AAs (the values used by the zfill calls below). We also remove the > sign and convert Ki to float.
###Code
for x in range(len(DBBind[:,0])):
DBBind[x,0] = DBBind[x,0][::-1]
DBBind[x,0] = DBBind[x,0].zfill(100) #fill ligand to 100
DBBind[x,0] = DBBind[x,0][::-1]
DBBind[x,1] = DBBind[x,1][::-1]
    DBBind[x,1] = DBBind[x,1].zfill(1000) #fill protein to 1000
DBBind[x,1] = DBBind[x,1][::-1]
DBBind[x,2] = (DBBind[x,2]).strip() #strip sides
    if '>' == DBBind[x,2][0] : # if Ki is reported as '>X' (e.g. '>10000'), strip the '>' and treat it as X
DBBind[x,2] = DBBind[x,2][1:]
    DBBind[x,2] = float(DBBind[x,2]) #convert Ki to float
###Output
_____no_output_____
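###Markdown
A quick optional check that the padding produced fixed-length strings:
###Code
print(len(DBBind[0, 0]), len(DBBind[0, 1]))   # expect 100 and 1000
###Output
_____no_output_____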
###Markdown
Check the head
###Code
DBBind[0:3]
###Output
_____no_output_____
###Markdown
Check the tail
###Code
DBBind[-3:]
###Output
_____no_output_____
###Markdown
Switch back from numpy to PANDAS
###Code
curated_dataframe = pd.DataFrame(data=DBBind)
###Output
_____no_output_____
###Markdown
Rename the column titles
###Code
curated_dataframe.columns = ['SMILES', "Protein", "Ki"]
###Output
_____no_output_____
###Markdown
Print to an excel file
###Code
curated_dataframe.to_excel("curated_df.xlsx")
###Output
_____no_output_____ |
docs/FlavorR/dplyr_pandas.ipynb | ###Markdown
Winpython with R : comparing DPLYR and PandasIt is based on the Thomas Augspurger comparison [Notebook](http://nbviewer.ipython.org/urls/gist.githubusercontent.com/TomAugspurger/6e052140eaa5fdb6e8c0/raw/811585624e843f3f80b9b6fe89e18119d7d2d73c/dplyr_pandas.ipynb) (refreshed for Pandas 0.16.0)http://nbviewer.ipython.org/urls/gist.githubusercontent.com/TomAugspurger/6e052140eaa5fdb6e8c0/raw/811585624e843f3f80b9b6fe89e18119d7d2d73c/dplyr_pandas.ipynbWe just play the "R" code at the same time, instead of keeping it in comments
###Code
!echo %R_HOME%
# Some prep work to get the data from R and into pandas
%matplotlib inline
# bad test: move magic before module import
#%load_ext rpy2.ipython
import rpy2
%load_ext rpy2.ipython
from rpy2.robjects.conversion import ri2py
from rpy2.ipython.rmagic import ri2ipython
ri2ipython.register(rpy2.robjects.Sexp, ri2py)
import numpy as np
import pandas as pd
import seaborn as sns
pd.set_option("display.max_rows", 5)
###Output
_____no_output_____
###Markdown
Transform this Markdown cell to a Code cell if you ever need to re-populate a basic R environment: %R install.packages("tidyr") %R install.packages("dplyr") %R install.packages("ggplot2") %R install.packages("rvest") %R install.packages('RSQLite') %R install.packages("zoo") %R install.packages("forecast") %R install.packages('R.utils') %R install.packages("nycflights13") %R install.packages('hflights') Thomas Augspurger part (with comments replaced by true %R code) This notebook compares [pandas](http://pandas.pydata.org) and [dplyr](http://cran.r-project.org/web/packages/dplyr/index.html). The comparison is just on syntax (verbiage), not performance. Whether you're an R user looking to switch to pandas (or the other way around), I hope this guide will help ease the transition. We'll work through the [introductory dplyr vignette](http://cran.r-project.org/web/packages/dplyr/vignettes/introduction.html) to analyze some flight data. I'm working on a better layout to show the two packages side by side. But for now I'm just putting the ``dplyr`` code in a comment above each python call.
###Code
%%R
library("dplyr") # for functions
library("nycflights13")
write.csv(flights, "flights.csv")
###Output
_____no_output_____
###Markdown
Data: nycflights13
###Code
flights = pd.read_csv("flights.csv", index_col=0)
%R dim(flights)
# dim(flights) <--- The R code
flights.shape # <--- The python code
%R head(flights)
# head(flights)
flights.head()
###Output
_____no_output_____
###Markdown
Single table verbs ``dplyr`` has a small set of nicely defined verbs. I've listed their closest pandas verbs.

| dplyr | pandas |
| --- | --- |
| filter() (and slice()) | query() (and loc[], iloc[]) |
| arrange() | sort() |
| select() (and rename()) | \_\_getitem\_\_ (and rename()) |
| distinct() | drop_duplicates() |
| mutate() (and transmute()) | None |
| summarise() | None |
| sample_n() and sample_frac() | None |

Some of the "missing" verbs in pandas are because there are other, different ways of achieving the same goal. For example `summarise` is spread across `mean`, `std`, etc. Others, like `sample_n`, just haven't been implemented yet. Filter rows with filter(), query()
###Code
%R filter(flights, month == 1, day == 1)
# filter(flights, month == 1, day == 1)
flights.query("month == 1 & day == 1")
###Output
_____no_output_____
###Markdown
The more verbose version:
###Code
%R flights[flights$month == 1 & flights$day == 1, ]
# flights[flights$month == 1 & flights$day == 1, ]
flights[(flights.month == 1) & (flights.day == 1)]
%R slice(flights, 1:10)
# slice(flights, 1:10)
flights.iloc[:9]
###Output
_____no_output_____
###Markdown
Arrange rows with arrange(), sort()
###Code
%R arrange(flights, year, month, day)
# arrange(flights, year, month, day)
flights.sort(['year', 'month', 'day'])
%R arrange(flights, desc(arr_delay))
# arrange(flights, desc(arr_delay))
flights.sort('arr_delay', ascending=False)
###Output
_____no_output_____
###Markdown
Select columns with select(), []
###Code
%R select(flights, year, month, day)
# select(flights, year, month, day)
flights[['year', 'month', 'day']]
%R select(flights, year:day)
# select(flights, year:day)
# No real equivalent here. Although I think this is OK.
# Typically I'll have the columns I want stored in a list
# somewhere, which can be passed right into __getitem__ ([]).
%%R
select(flights, -(year:day))
# select(flights, -(year:day))
# Again, simliar story. I would just use
# flights.drop(cols_to_drop, axis=1)
# or fligths[flights.columns.difference(pd.Index(cols_to_drop))]
# point to dplyr!
%R select(flights, tail_num = tailnum)
# select(flights, tail_num = tailnum)
flights.rename(columns={'tailnum': 'tail_num'})['tail_num']
###Output
_____no_output_____
###Markdown
But like Hadley mentions, not that useful since it only returns the one column. ``dplyr`` and ``pandas`` compare well here.
###Code
%R rename(flights, tail_num = tailnum)
# rename(flights, tail_num = tailnum)
flights.rename(columns={'tailnum': 'tail_num'})
###Output
_____no_output_____
###Markdown
Pandas is more verbose, but the argument to `columns` can be any mapping. So it's often used with a function to perform a common task, say `df.rename(columns=lambda x: x.replace('-', '_'))` to replace any dashes with underscores. Also, ``rename`` (the pandas version) can be applied to the Index. Extract distinct (unique) rows
###Code
%R distinct(select(flights, tailnum))
# distinct(select(flights, tailnum))
flights.tailnum.unique()
###Output
_____no_output_____
###Markdown
FYI this returns a numpy array instead of a Series.
###Code
%R distinct(select(flights, origin, dest))
# distinct(select(flights, origin, dest))
flights[['origin', 'dest']].drop_duplicates()
###Output
_____no_output_____
###Markdown
OK, so ``dplyr`` wins there from a consistency point of view. ``unique`` is only defined on Series, not DataFrames. The original intention for `drop_duplicates` is to check for records that were accidentally included twice. This feels a bit hacky using it to select the distinct combinations, but it works! Add new columns with mutate()
###Code
%R mutate(flights, gain = arr_delay - dep_delay, speed = distance / air_time * 60)
# mutate(flights,
# gain = arr_delay - dep_delay,
# speed = distance / air_time * 60)
#before pandas 0.16.0
# flights['gain'] = flights.arr_delay - flights.dep_delay
# flights['speed'] = flights.distance / flights.air_time * 60
# flights
flights.assign(gain=flights.arr_delay - flights.dep_delay,
speed=flights.distance / flights.air_time * 60)
%R mutate(flights, gain = arr_delay - dep_delay, gain_per_hour = gain / (air_time / 60) )
# mutate(flights,
# gain = arr_delay - dep_delay,
# gain_per_hour = gain / (air_time / 60)
# )
#before pandas 0.16.0
# flights['gain'] = flights.arr_delay - flights.dep_delay
# flights['gain_per_hour'] = flights.gain / (flights.air_time / 60)
# flights
(flights.assign(gain=flights.arr_delay - flights.dep_delay)
.assign(gain_per_hour = lambda df: df.gain / (df.air_time / 60)))
###Output
_____no_output_____
###Markdown
The first example is pretty much identical (aside from the names, mutate vs. assign). The second example just comes down to language differences. In R, it's possible to implement a function like mutate where you can refer to gain in the line calculating gain_per_hour, even though gain hasn't actually been calculated yet. In Python, you can have arbitrary keyword arguments to functions (which we needed for .assign), but the order of the arguments is arbitrary. So you can't have something like df.assign(x=df.a / df.b, y=x **2), because you don't know whether x or y will come first (you'd also get an error saying x is undefined). To work around that with pandas, you'll need to split up the assigns and pass a callable to the second assign. The callable receives the intermediate DataFrame and looks up the gain column on it. Since the line above returns a DataFrame with the gain column added, the pipeline goes through just fine.
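A tiny sketch of that point (wrapped in try/except just so the cell runs):
###Code
# `x` is not defined at the time the second keyword argument is evaluated, so this fails:
try:
    flights.assign(x=flights.distance / flights.air_time, y=x ** 2)
except NameError as e:
    print(e)
###Output
_____no_output_____
###Markdown
The `transmute` version below uses the same split-assign workaround: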
###Code
%R transmute(flights, gain = arr_delay - dep_delay, gain_per_hour = gain / (air_time / 60) )
# transmute(flights,
# gain = arr_delay - dep_delay,
# gain_per_hour = gain / (air_time / 60)
# )
#before pandas 0.16.0
# flights['gain'] = flights.arr_delay - flights.dep_delay
# flights['gain_per_hour'] = flights.gain / (flights.air_time / 60)
# flights[['gain', 'gain_per_hour']]
(flights.assign(gain=flights.arr_delay - flights.dep_delay)
.assign(gain_per_hour = lambda df: df.gain / (df.air_time / 60))
[['gain', 'gain_per_hour']])
###Output
_____no_output_____
###Markdown
Summarise values with summarise()
###Code
flights.dep_delay.mean()
###Output
_____no_output_____
###Markdown
Randomly sample rows with sample_n() and sample_frac() There's an open PR on [Github](https://github.com/pydata/pandas/pull/7274) to make this nicer (closer to ``dplyr``). For now you can drop down to numpy.
###Code
%R sample_n(flights, 10)
# sample_n(flights, 10)
flights.loc[np.random.choice(flights.index, 10)]
%R sample_frac(flights, 0.01)
# sample_frac(flights, 0.01)
flights.iloc[np.random.randint(0, len(flights),
                               int(.01 * len(flights)))]
###Output
_____no_output_____
###Markdown
Grouped operations
###Code
%R planes <- group_by(flights, tailnum)
%R delay <- summarise(planes, count = n(),dist = mean(distance, na.rm = TRUE), delay = mean(arr_delay, na.rm = TRUE))
%R delay <- filter(delay, count > 20, dist < 2000)
# planes <- group_by(flights, tailnum)
# delay <- summarise(planes,
# count = n(),
# dist = mean(distance, na.rm = TRUE),
# delay = mean(arr_delay, na.rm = TRUE))
# delay <- filter(delay, count > 20, dist < 2000)
planes = flights.groupby("tailnum")
delay = (planes.agg({"year": "count",
"distance": "mean",
"arr_delay": "mean"})
.rename(columns={"distance": "dist",
"arr_delay": "delay",
"year": "count"})
.query("count > 20 & dist < 2000"))
delay
###Output
_____no_output_____
###Markdown
For me, dplyr's ``n()`` looked a bit strange at first, but it's already growing on me. I think pandas is more difficult for this particular example. There isn't as natural a way to mix column-agnostic aggregations (like ``count``) with column-specific aggregations like the other two. You end up writing code like `.agg({'year': 'count'})` which reads, "I want the count of `year`", even though you don't care about `year` specifically. Additionally, assigning names can't be done as cleanly in pandas; you have to just follow it up with a ``rename`` like before. We may as well reproduce the graph. It looks like ggplot's geom_smooth is some kind of lowess smoother. We can either use seaborn:
###Code
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(12, 6))
sns.regplot("dist", "delay", data=delay, lowess=True, ax=ax,
scatter_kws={'color': 'k', 'alpha': .5, 's': delay['count'] / 10}, ci=90,
line_kws={'linewidth': 3});
###Output
_____no_output_____
###Markdown
Or using statsmodels directly for more control over the lowess, with an extremely lazy "confidence interval".
###Code
import statsmodels.api as sm
smooth = sm.nonparametric.lowess(delay.delay, delay.dist, frac=1/8)
ax = delay.plot(kind='scatter', x='dist', y = 'delay', figsize=(12, 6),
color='k', alpha=.5, s=delay['count'] / 10)
ax.plot(smooth[:, 0], smooth[:, 1], linewidth=3);
std = smooth[:, 1].std()
ax.fill_between(smooth[:, 0], smooth[:, 1] - std, smooth[:, 1] + std, alpha=.25);
%R destinations <- group_by(flights, dest)
%R summarise(destinations, planes = n_distinct(tailnum), flights = n())
# destinations <- group_by(flights, dest)
# summarise(destinations,
# planes = n_distinct(tailnum),
# flights = n()
# )
destinations = flights.groupby('dest')
destinations.agg({
'tailnum': lambda x: len(x.unique()),
'year': 'count'
}).rename(columns={'tailnum': 'planes',
'year': 'flights'})
###Output
_____no_output_____
###Markdown
Similar to how ``dplyr`` provides optimized C++ versions of most of the `summarise` functions, pandas uses [cython](http://cython.org) optimized versions for most of the `agg` methods.
###Code
%R daily <- group_by(flights, year, month, day)
%R (per_day <- summarise(daily, flights = n()))
# daily <- group_by(flights, year, month, day)
# (per_day <- summarise(daily, flights = n()))
daily = flights.groupby(['year', 'month', 'day'])
per_day = daily['distance'].count()
per_day
%R (per_month <- summarise(per_day, flights = sum(flights)))
# (per_month <- summarise(per_day, flights = sum(flights)))
per_month = per_day.groupby(level=['year', 'month']).sum()
per_month
%R (per_year <- summarise(per_month, flights = sum(flights)))
# (per_year <- summarise(per_month, flights = sum(flights)))
per_year = per_month.sum()
per_year
###Output
_____no_output_____
###Markdown
I'm not sure how ``dplyr`` is handling the other columns, like `year`, in the last example. With pandas, it's clear that we're grouping by them since they're included in the groupby. For the last example, we didn't group by anything, so they aren't included in the result. Chaining Any follower of Hadley's [twitter account](https://twitter.com/hadleywickham/) will know how much R users *love* the ``%>%`` (pipe) operator. And for good reason!
###Code
%R flights %>% group_by(year, month, day) %>% select(arr_delay, dep_delay) %>% summarise( arr = mean(arr_delay, na.rm = TRUE), dep = mean(dep_delay, na.rm = TRUE)) %>% filter(arr > 30 | dep > 30)
# flights %>%
# group_by(year, month, day) %>%
# select(arr_delay, dep_delay) %>%
# summarise(
# arr = mean(arr_delay, na.rm = TRUE),
# dep = mean(dep_delay, na.rm = TRUE)
# ) %>%
# filter(arr > 30 | dep > 30)
(
flights.groupby(['year', 'month', 'day'])
[['arr_delay', 'dep_delay']]
.mean()
.query('arr_delay > 30 | dep_delay > 30')
)
###Output
_____no_output_____ |
class/03-Object Oriented Programming Homework - Solution.ipynb | ###Markdown
Object Oriented Programming Homework Assignment Problem 1 Fill in the Line class methods to accept coordinates as a pair of tuples and return the slope and distance of the line.
###Code
class Line(object):
def __init__(self,coor1,coor2):
self.coor1 = coor1
self.coor2 = coor2
def distance(self):
x1,y1 = self.coor1
x2,y2 = self.coor2
return ((x2-x1)**2 + (y2-y1)**2)**0.5
def slope(self):
x1,y1 = self.coor1
x2,y2 = self.coor2
return (y2-y1)/(x2-x1)
coordinate1 = (3,2)
coordinate2 = (8,10)
li = Line(coordinate1,coordinate2)
li.distance()
li.slope()
###Output
_____no_output_____
###Markdown
________ Problem 2 Fill in the class
###Code
class Cylinder:
def __init__(self,height=1,radius=1):
self.height = height
self.radius = radius
def volume(self):
return self.height*3.14*(self.radius)**2
def surface_area(self):
top = 3.14 * (self.radius)**2
return (2*top) + (2*3.14*self.radius*self.height)
c = Cylinder(2,3)
c.volume()
c.surface_area()
###Output
_____no_output_____ |
3_Regression/Decision_Tree_Regression/Python/decision _tree_writeup.ipynb | ###Markdown
**DECISION TREE REGRESSION** **Importing the libraries**
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
**Import the dataset**
###Code
dataset = pd.read_csv('Position_Salaries.csv')
X = dataset.iloc[:, 1:2].values
y = dataset.iloc[:, 2].values
###Output
_____no_output_____
###Markdown
**Splitting the dataset into the Training set and Test set**
###Code
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn releases
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
**Feature Scaling**
###Code
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
sc_y = StandardScaler()
y_train = sc_y.fit_transform(y_train)
###Output
_____no_output_____
###Markdown
**Fitting Decision Tree Regression to the dataset**
###Code
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor(random_state = 0)
regressor.fit(X, y)
###Output
_____no_output_____
###Markdown
**Predicting a new result**
###Code
y_pred = regressor.predict([[6.5]])  # predict expects a 2-D array: one sample with one feature
###Output
_____no_output_____
###Markdown
**Visualising the Decision Tree Regression results (higher resolution)**
###Code
X_grid = np.arange(min(X), max(X), 0.01)
X_grid = X_grid.reshape((len(X_grid), 1))
plt.scatter(X, y, color = 'red')
plt.plot(X_grid, regressor.predict(X_grid), color = 'blue')
plt.title('Truth or Bluff (Decision Tree Regression)')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
###Output
_____no_output_____ |
doc/methods.ipynb | ###Markdown
Note: you can try this tutorial in [](https://mybinder.org/v2/gh/zh217/aiochan/master?filepath=doc%2Fmethods.ipynb). Methods and functions Now we know the basics of channels and operations on them, we will learn about additional methods and functions that can be convenient in various situations. Putting and getting As we have already seen, we can `add` into a channel. Immediately closing the channel afterwards ensures that no further items can be put into the channel:
###Code
import aiochan as ac
import asyncio
async def main():
c = ac.Chan().add(1, 2, 3).close()
async for v in c:
print(v)
await c.put(4)
r = await c.get()
print('put/get after closing:', r)
ac.run(main())
###Output
1
2
3
put/get after closing: None
###Markdown
This method is mainly provided for convenience. You should NOT add too much stuff into a channel in this way: it is non-blocking, the puts are accumulated, and if too many pending puts accumulate in this way an overflow will occur. Adding fewer than 10 items during the initialization phase of a channel is considered ok though. In the last example we consumed values using the `async for` syntax. In cases where we *must* deal with many values of the channel at once instead of one by one, we can use `collect`:
###Code
async def main():
c = ac.Chan().add(1, 2, 3).close()
r = await c.collect()
print(r)
ac.run(main())
###Output
[1, 2, 3]
###Markdown
In this case, closing the channel first before calling `collect` is essential: otherwise the `await` would block forever (and overflow would probably occur if values continuously come in). `collect` also accepts an argument `n` which specifies the maximum number of elements that will be collected. Using it, we can `collect` on channels that are not yet closed (but we still need to think about how many items we can deal with):
###Code
async def main():
c = ac.Chan().add(1, 2, 3) # no closing
r = await c.collect(2)
print(r)
ac.run(main())
###Output
[1, 2]
###Markdown
Above we have said that using `add` to add too many items is dangerous. If you have an existing sequence which you want to turn into a channel, it is much better to use `from_iter`:
###Code
async def main():
c = ac.from_iter([1, 2, 3, 4, 5, 6])
r = await c.collect()
print(r)
print(c.closed)
ac.run(main())
###Output
[1, 2, 3, 4, 5, 6]
True
###Markdown
Note that the channel is closed on construction (we can check whether a channel is closed by using the `.closed` property on a channel). Infinite collections are ok:
###Code
def natural_numbers():
i = 0
while True:
yield i
i += 1
async def main():
c = ac.from_iter(natural_numbers())
r = await c.collect(10)
print(r)
print(c.closed)
r = await c.collect(10)
print(r)
print(c.closed)
ac.run(main())
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
True
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
True
###Markdown
Even when the channel is closed, values can still be obtained from it (and in this case the values cannot be exhausted). Closing only stops putting operations immediately. Making channels that produce numbers is so common that we have a function for it:
###Code
async def main():
c1 = ac.from_range()
r = await c1.collect(10)
print(r) # natural numbers
c2 = ac.from_range(5) # same as ac.from_iter(range(5))
r = await c2.collect()
print(r)
c3 = ac.from_range(0, 10, 3) # same as ac.from_iter(range(0, 10, 3))
r = await c3.collect()
print(r)
ac.run(main())
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[0, 1, 2, 3, 4]
[0, 3, 6, 9]
###Markdown
To recap:* `.add` can be used to add a few items into a channel on initialization (any other use is dangerous)* `.collect` can be used to bulk get items from channel* `.closed` tests if a channel is already closed* `from_iter` creates channels containing all elements from an iterable (even infinite iterable is ok)* `from_range` is tailored for making channels generating number series Time-based operations So far we have always used `asyncio.sleep` to make execution stop for a little while, pretending to do work. We also have `timeout` function that does almost the same thing by producing a channel that automatically closes after an interval:
###Code
async def main():
start = asyncio.get_event_loop().time()
c = ac.timeout(1.0)
await c.get()
end = asyncio.get_event_loop().time()
print(end - start)
ac.run(main())
###Output
1.001395168947056
###Markdown
This is useful even when we are not pretending to do work, for example, for timeout control:
###Code
async def main():
tout = ac.timeout(1.0)
while (await ac.select(tout, default=True))[0]:
print('do work')
await asyncio.sleep(0.2)
print('done')
ac.run(main())
###Output
do work
do work
do work
do work
do work
done
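###Markdown
The same loop, written less tersely (a sketch; it behaves just like the cell above):
###Code
async def main():
    tout = ac.timeout(1.0)
    while True:
        v = await ac.select(tout, default=True)
        if not v[0]:              # `tout` has closed: selecting from it yields a falsy None, so stop
            break
        print('do work')          # otherwise the default value True came back: keep working
        await asyncio.sleep(0.2)
    print('done')
ac.run(main())
###Output
_____no_output_____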
###Markdown
The above example is written in a somewhat terse style. You should try to understand why it achieves the close-on-time behaviour. As `timeout` produces a channel, which can be passed around and `select`ed, it offers great flexibility for controlling time-based behaviours. However, using it for the ticks of a clock is harmful, as exemplified below:
###Code
async def main():
start = asyncio.get_event_loop().time()
for i in range(20):
await ac.timeout(0.1).get()
print(i, asyncio.get_event_loop().time() - start)
ac.run(main())
###Output
0 0.10043401701841503
1 0.20142484991811216
2 0.30242938199080527
3 0.4030482260277495
4 0.5035843959776685
5 0.6041081629227847
6 0.7046528200153261
7 0.8056348919635639
8 0.9063465989893302
9 1.0068686519516632
10 1.1073921599891037
11 1.2079381300136447
12 1.3089604979613796
13 1.4095268349628896
14 1.5100650689564645
15 1.6105891889892519
16 1.7114433919778094
17 1.81249319401104
18 1.9130375039530918
19 2.0135989299742505
###Markdown
The problem is that `timeout` guarantees that it will close *after* the specified time has elapsed, and will make an attempt to close as soon as possible, but it can never close at the precise instant. Over time, errors will accumulate. In the above example, we have already accumulated 0.01 seconds of error in a mere 2 seconds. If you want something that ticks, use the `tick_tock` function:
###Code
async def main():
start = asyncio.get_event_loop().time()
ticker = ac.tick_tock(0.1)
for i in range(20):
await ticker.get()
print(i, asyncio.get_event_loop().time() - start)
ac.run(main())
###Output
0 0.1004815329797566
1 0.2012625669594854
2 0.3008053069934249
3 0.4013087539933622
4 0.5008452819893137
5 0.6013440380338579
6 0.7008649010676891
7 0.8013983579585329
8 0.900891529978253
9 1.001404833048582
10 1.100898704957217
11 1.2013944609789178
12 1.3008839710382745
13 1.4013996929861605
14 1.501174372038804
15 1.6006878040498123
16 1.701174663961865
17 1.8006792459636927
18 1.9011599159566686
19 2.000674612005241
###Markdown
Errors are still unavoidable, but they do not accumulate. To recap:* Use `timeout` to control the timing of operations (maybe together with `select`)* If the timing control is recurrent, consider using `tick_tock` Functional methods If you have done any functional programming, you are certainly familiar with things like `map`, `reduce` (or `foldl`, `foldr`), `filter` and friends. Channels are armed with these so-called functional chainable methods which, when called, return new channels containing the expected elements. Examples:
###Code
async def main():
print('map', await ac.from_range(10).map(lambda x: x*2).collect())
print('filter', await ac.from_range(10).filter(lambda x: x % 2 == 0).collect())
print('take', await ac.from_range(10).take(5).collect())
print('drop', await ac.from_range(10).drop(5).collect())
print('take_while', await ac.from_range(10).take_while(lambda x: x < 5).collect())
print('drop_while', await ac.from_range(10).drop_while(lambda x: x < 5).collect())
ac.run(main())
###Output
map [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
filter [0, 2, 4, 6, 8]
take [0, 1, 2, 3, 4]
drop [5, 6, 7, 8, 9]
take_while [0, 1, 2, 3, 4]
drop_while [5, 6, 7, 8, 9]
###Markdown
There is also `distinct`:
###Code
async def main():
c = ac.from_iter([0,0,0,1,1,2,2,2,2,3,3,4,4,4,5,4,4,3,3,2,1,1,1,0])
print(await c.distinct().collect())
ac.run(main())
###Output
[0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
###Markdown
Note that only *consecutive* values are tested for distinctness. You probably know `reduce`, the so-called universal reducing function:
###Code
async def main():
print(await ac.from_range(10).reduce(lambda a, b: a+b).collect())
print(await ac.from_range(10).reduce(lambda acc, nxt: acc + [nxt], init=[]).collect())
ac.run(main())
###Output
[45]
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]
###Markdown
As we can see, you can optionally pass an initial value for `reduce`. Notice that `reduce` only returns a value when the channel is closed: it turns a whole channel of values into a channel containing only a single value. Most of the time you may want intermediate results as well, so you probably want to use `scan` instead:
###Code
async def main():
print(await ac.from_range(10).scan(lambda a, b: a+b).collect())
print(await ac.from_range(10).scan(lambda acc, nxt: acc + [nxt], init=[]).collect())
ac.run(main())
###Output
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
[[], [0], [0, 1], [0, 1, 2], [0, 1, 2, 3], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5, 6], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7, 8], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]
###Markdown
All of these "functional" methods accept two optional values: `out` and `close`. As we have said previously, these functions operate by returning a new channel containing the processed values. If another channel is given as the `out` argument, then that channel will receive the processed values instead. Also, when the source channel is closed, by default the out channel will be as well. You can prevent this by setting `close` to `False`. This is illustrated below:
###Code
async def main():
out = ac.Chan(5) # we can use buffers as we please
ac.from_range(10).map(lambda x: x*2, out=out, close=False)
print(out.closed)
print(await out.collect(10))
ac.run(main())
###Output
False
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
###Markdown
To recap:* `map`, `reduce`, `filter`, `distinct`, `take`, `drop`, `take_while`, `drop_while`, `scan` do what you expect them to do.* You can control the construction of the output channel and whether to close it when the input is exhausted by specifying the `out` and `close` argument. Pipeline methods There are times that your processing is rather complicated to express with the above functional methods. For example, given the sequence `[1,2,1,3,1]`, you want to produce the sequence `[1,2,2,1,3,3,3,1]`. In this case you can use the `async_apply` method:
###Code
async def duplicate_face_value(inp, out):
async for v in inp:
for _ in range(v):
await out.put(v)
out.close()
async def main():
vals = [1,2,3,2,1]
print(await ac.from_iter(vals).async_apply(duplicate_face_value).collect())
ac.run(main())
###Output
[1, 2, 2, 3, 3, 3, 2, 2, 1]
###Markdown
You may think that this is not too different from connecting the channels yourself and spawn a processing coroutine with `go`. But writing it using `async_apply` makes your intention clearer. Processing values in a channel and putting the result onto another channel is a very common theme. With `async_apply`, only a single coroutine is working on the values. With `async_pipe`, you can use multiple coroutine instances, getting closer to parallelism:
###Code
async def worker(n):
await asyncio.sleep(0.1)
return n*2
async def main():
start = asyncio.get_event_loop().time()
print(await ac.from_range(20).async_pipe(10, worker).collect())
print(asyncio.get_event_loop().time() - start)
ac.run(main())
###Output
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38]
0.20754481800395297
###Markdown
We see that processing 20 values only takes about 0.2 seconds even though processing a single value with a single coroutine takes 0.1 seconds: parallelism. Notice that the output values are in the correct order. This is the case even if later work items complete earlier: `async_pipe` ensures the order while doing its best to keep waiting time minimal. However, in some cases the order is not important, in which case we can use `async_pipe_unordered`:
###Code
import random
async def worker(n):
await asyncio.sleep(random.uniform(0, 0.2))
return n*2
async def main():
start = asyncio.get_event_loop().time()
print(await ac.from_range(20).async_pipe(10, worker).collect())
print('ordered time:', asyncio.get_event_loop().time() - start)
start = asyncio.get_event_loop().time()
print(await ac.from_range(20).async_pipe_unordered(10, worker).collect())
print('unordered time:', asyncio.get_event_loop().time() - start)
ac.run(main())
###Output
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38]
ordered time: 0.33254893589764833
[14, 8, 6, 0, 20, 12, 24, 10, 22, 16, 36, 18, 2, 4, 26, 28, 38, 30, 32, 34]
unordered time: 0.2875210080528632
|
pathway networks pca.ipynb | ###Markdown
Loading gene expression data
###Code
import pandas as pd
import numpy as np
import h5py

f = pd.read_csv('GSE156063_swab_gene_counts.csv.gz')
f.index = f.iloc[:, 0] # Make the ENSG genes the row index
f = f.iloc[:, 1:] # Remove first index column
f
# Normalize data (preprocess() and get_symbol() are project-specific helpers assumed to be defined or imported elsewhere)
norm = preprocess(f)
# Convert Ensembl number index to gene symbol
norm = get_symbol(norm)
norm
###Output
_____no_output_____
###Markdown
Principal component analysis for pathway representations Convert the gene x sample matrix into a pathway x sample matrix by taking a one-component PCA representation of the gene x sample sub-matrix restricted to the genes of each pathway.
###Code
import h5py
from sklearn.decomposition import PCA
from tqdm import trange

# gene_set_dictionaries is a project-specific helper (defined elsewhere) that
# returns a pathway-to-genes mapping and the full gene set for a library
libraries = ['GO_Biological_Process_2018', 'GO_Molecular_Function_2018', 'GO_Cellular_Component_2018']
pathway_by_sample = h5py.File("new_path_by_sample.h5", "w")
list_df = []
list_pathways = []
norm_genes = set(norm.index)
for lib in libraries:
pathway_to_genes, gene_set = gene_set_dictionaries(lib)
for i in trange(len(pathway_to_genes)):
pathway = list(pathway_to_genes.keys())[i]
genes = set(pathway_to_genes[pathway])
if genes.issubset(norm_genes):
pca = PCA(n_components=1)
values = pca.fit_transform(norm.loc[genes].T)
list_df.append( pd.DataFrame(values) )
list_pathways.append(pathway)
path_by_sample = pd.concat(list_df, axis=1).T
path_by_sample.index = list_pathways
path_by_sample.columns = norm.columns
path_by_sample
pathway_by_sample.create_dataset("path_by_sample", data=path_by_sample)
pathway_list = (pd.DataFrame(path_by_sample.index)).values.astype("S").tolist()
sample_list = (pd.DataFrame(path_by_sample.columns)).values.astype("S").tolist()
pathway_by_sample.create_dataset("pathways", data=pathway_list)
pathway_by_sample.create_dataset("samples", data=sample_list)
pathway_by_sample.close()
###Output
_____no_output_____
###Markdown
Ground truth data
###Code
gr_truth = pd.read_csv("GSE156063_series_matrix.txt", sep='\t')
gr_truth
gr_truth = gr_truth.iloc[10, 1:]
idx = [ "_".join(i.split("_")[:2]) for i in gr_truth.index ]
gr_truth.index = idx
gr_truth
gr_truth = pd.DataFrame(gr_truth).loc[norm.columns]
test = [ 1 if res[-3:] == 'POS' else 0 for res in gr_truth.iloc[:, 0] ]
gr_truth.iloc[:, 0] = test
gr_truth.columns = ["Truth"]
gr_truth
###Output
_____no_output_____
###Markdown
Lasso model
###Code
pathway_by_sample = h5py.File("new_path_by_sample.h5", "r+")
def h5py_to_list(lst):
return [ str(i)[3:-2] for i in lst ]
key_matrix = pathway_by_sample['path_by_sample']
key_pathways = h5py_to_list(pathway_by_sample['pathways'])
key_samples = h5py_to_list(pathway_by_sample['samples'])
key_df = pd.DataFrame(key_matrix, index=key_pathways, columns=key_samples)
display(key_df)
###Output
_____no_output_____
###Markdown
Lasso model Nested cross-validation to optimize lasso model
###Code
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Lasso, LassoCV
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn import metrics
from matplotlib import pyplot as plt
import matplotlib.cm as cm
X, y = key_df.T, gr_truth.values.ravel()
# generate no skill prediction
ns_probs = [0 for n in y]
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', color='black')
# configure the cross-validation procedure
cv_outer = KFold(n_splits=10, shuffle=True, random_state=1)
# enumerate splits
outer_results = []
summary = []
color=iter(cm.rainbow(np.linspace(0,1,10)))
for train_idx, test_idx in cv_outer.split(X):
# split data
X_train, X_test = X.iloc[train_idx, :], X.iloc[test_idx, :]
y_train, y_test = y[train_idx], y[test_idx]
# configure the cross-validation procedure
cv_inner = KFold(n_splits=3, shuffle=True, random_state=1)
# define the model
lassocv = LassoCV(alphas=None, cv=cv_inner, max_iter=100000)
result = lassocv.fit(X_train, y_train)
# evaluate model on the holdout dataset
yhat = result.predict(X_test)
# evaluate model
fpr, tpr, _ = roc_curve(y_test, yhat)
auc_ = metrics.auc(fpr, tpr)
outer_results.append(auc_)
# plot the roc curve for the model
plt.plot(fpr, tpr, marker='.', color=next(color))
# report progress
print(auc_)
# add parameters to summary
summary.append(result.coef_)
# summarize estimated model performance
print('AUC: %.3f (%.3f)' % (mean(outer_results), std(outer_results)))
# axis labels
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# show the plot
plt.show()
# get summary table
sum_df = pd.DataFrame(summary, columns=X.columns)
summary_cols = (pd.DataFrame(X.columns)).values.astype("S").tolist()
pathway_by_sample.create_dataset("summaries", data=sum_df)
pathway_by_sample.create_dataset("summaries_columns", data=summary_cols)
pathway_by_sample.close()
sum_df
new_sum_df = sum_df.loc[:, (sum_df != 0).any(axis=0)]
new_sum_df
###Output
_____no_output_____
###Markdown
Get pathway lambda data
###Code
list(pathway_by_sample.keys())
summaries_columns = [str(pathway)[3:-2] for pathway in pathway_by_sample['summaries_columns']]
summaries = pd.DataFrame(pathway_by_sample['summaries'], columns=summaries_columns)
summaries
new_df = summaries.loc[:, (summaries != 0).any(axis=0)]
new_df
lambda_counts = pd.DataFrame(new_df.astype(bool).sum(axis=0), columns=["Count"]).sort_values(by='Count', ascending=True)
lambda_counts
###Output
_____no_output_____
###Markdown
Bar Plot
###Code
lambdas = lambda_counts.iloc[-30:]
lambdas.loc[:, 'Count'] /= 10
lambdas['labels'] = lambdas.index
ax = lambdas.plot.barh(x='labels', y='Count', figsize=(10,10))
usage_dict = {"100% Usage": [0], "50% Usage": [0], "25% Usage": [0], "Total": [len(lambda_counts)]}
for val in lambda_counts.values:
if val[0] == 10: usage_dict["100% Usage"][0] += 1
elif val[0] > 5: usage_dict["50% Usage"][0] += 1
elif val[0] > 2.5: usage_dict["25% Usage"][0] += 1
usage_df = pd.DataFrame(usage_dict, index=[''])
usage_df
###Output
_____no_output_____
###Markdown
Box Plot
###Code
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(9, 25))
meds = summaries.median()
meds.sort_values(ascending=False, inplace=True)
meds = meds[meds != 0]
df2 = summaries[meds.index]
df2.boxplot(vert=False, rot=0, figsize=(10,10))
###Output
_____no_output_____ |
nb/sv3/sv3_provabgs_comparison.ipynb | ###Markdown
compare the forward modeled `provabgs` photometry to SV3
###Code
import os
import numpy as np
# --- astropy ---
from astropy.table import Table
# --- plotting ---
import corner as DFM
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['text.usetex'] = True
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
from provabgs import infer as Infer
from provabgs import models as Models
sv3_bgs = Table.read('/Users/chahah/data/provabgs/sv3.20210420.bgs_clean.fits')
sv3_gflux = sv3_bgs['FLUX_G']
sv3_rflux = sv3_bgs['FLUX_R']
sv3_zflux = sv3_bgs['FLUX_Z']
sv3_gflux_ivar = sv3_bgs['FLUX_IVAR_G']
sv3_rflux_ivar = sv3_bgs['FLUX_IVAR_R']
sv3_zflux_ivar = sv3_bgs['FLUX_IVAR_Z']
sv3_gmag = 22.5 - 2.5*np.log10(sv3_gflux.clip(1e-16))
sv3_rmag = 22.5 - 2.5*np.log10(sv3_rflux.clip(1e-16))
sv3_zmag = 22.5 - 2.5*np.log10(sv3_zflux.clip(1e-16))
sv3_z = sv3_bgs['Z']
fig = DFM.corner(np.vstack([sv3_gmag, sv3_rmag, sv3_zmag, sv3_z]).T,
labels=['$g$ magnitude', '$r$ magnitude', '$z$ magnitude', '$z$ (redshift)'],
label_kwargs={'fontsize': 25},
range=[(14., 22.), (14., 22.), (14., 22.), (0., 0.6)])
###Output
_____no_output_____
###Markdown
read forward modeled photometry
###Code
fm_theta = np.load('/Users/chahah/data/arcoiris/provabgs_cnf/train_decam.v0.thetas_sps.npy')
fm_theta_unt = np.load('/Users/chahah/data/arcoiris/provabgs_cnf/train_decam.v0.thetas_unt_sps.npy')
fm_photo = np.load('/Users/chahah/data/arcoiris/provabgs_cnf/train_decam.v0.xphoto_nonoise.npy')
fm_gflux = fm_photo[:,0]
fm_rflux = fm_photo[:,1]
fm_zflux = fm_photo[:,2]
fm_gmag = 22.5 - 2.5*np.log10(fm_gflux.clip(1e-16))
fm_rmag = 22.5 - 2.5*np.log10(fm_rflux.clip(1e-16))
fm_zmag = 22.5 - 2.5*np.log10(fm_zflux.clip(1e-16))
fm_z = fm_theta[:,-1]
(fm_gmag - fm_rmag).max()
fig = DFM.corner(np.vstack([fm_gmag, fm_rmag, fm_zmag, fm_z]).T,
labels=['$g$ magnitude', '$r$ magnitude', '$z$ magnitude', '$z$ (redshift)'],
label_kwargs={'fontsize': 25},
range=[(14., 22.), (14., 22.), (14., 22.), (0., 0.6)])
fig = plt.figure(figsize=(6,6))
sub = fig.add_subplot(111)
_ = DFM.hist2d(sv3_gmag - sv3_rmag, sv3_rmag - sv3_zmag,
range=[(-1., 2.), (-1., 2.)], levels=[0.68, 0.95, 0.997],
plot_density=False, plot_datapoints=False, color='C0', ax=sub)
sub.scatter(fm_gmag - fm_rmag, fm_rmag - fm_zmag, c='k', s=1)
sub.set_xlabel('$g-r$ color', fontsize=25)
sub.set_xlim(-1, 4)
sub.set_ylabel('$r-z$ color', fontsize=25)
sub.set_ylim(-1, 4)
fig = plt.figure(figsize=(6,6))
sub = fig.add_subplot(111)
sub.scatter(fm_gmag - fm_rmag, fm_rmag - fm_zmag, c='k', s=1)
sub.scatter((sv3_gmag - sv3_rmag)[:10], (sv3_rmag - sv3_zmag)[:10], c='C1', s=10)
sub.set_xlabel('$g-r$ color', fontsize=25)
sub.set_xlim(-1, 3)
sub.set_ylabel('$r-z$ color', fontsize=25)
sub.set_ylim(-1, 2)
fig = plt.figure(figsize=(6,6))
sub = fig.add_subplot(111)
_ = DFM.hist2d(sv3_rmag, sv3_gmag - sv3_rmag, range=[(16., 21.), (-1., 2.)],
plot_density=False, plot_datapoints=False, color='C0', ax=sub)
sub.scatter(fm_rmag, fm_gmag - fm_rmag, c='k', s=1)
sub.set_xlabel('$r$ magnitude', fontsize=25)
sub.set_ylabel('$g-r$ color', fontsize=25)
sub.set_ylim(-1, 3)
###Output
_____no_output_____ |
_ipynb/Chapter3.ipynb | ###Markdown
3.1 MNIST
###Code
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version = 1)
mnist.keys()
X, y = mnist['data'], mnist['target']
X.shape
y.shape
import matplotlib.pyplot as plt
some_digit = X[0]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap = 'binary')
plt.axis('off')
plt.show
y[0]
import numpy as np
y = y.astype(np.uint8)
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
###Output
_____no_output_____
###Markdown
3.2 Training a Binary Classifier
###Code
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(random_state = 42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digit])
###Output
_____no_output_____
###Markdown
Performance Measures: Measuring Accuracy Using Cross-Validation
###Code
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits = 3, random_state = 42, shuffle = True)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = y_train_5[train_index]
X_test_fold = X_train[test_index]
y_test_fold = y_train_5[test_index]
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv = 3, scoring = 'accuracy')
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
return self
def predict(self, X):
return np.zeros((len(X), 1), dtype = bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv = 3, scoring = 'accuracy')
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv = 3)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
y_train_perfect_predictions = y_train_5
confusion_matrix(y_train_5, y_train_perfect_predictions)
###Output
_____no_output_____
###Markdown
Precision and Recall
###Code
from sklearn.metrics import precision_score, recall_score
precision_score(y_train_5, y_train_pred)
recall_score(y_train_5, y_train_pred)
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
y_scores = sgd_clf.decision_function([some_digit])
y_scores
threshold = 0
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
threshold = 8000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv = 3, method = 'decision_function')
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
def plot_precision_recall_vs_threshold(precision, recalls, thresholds):
plt.plot(thresholds, precision[:-1], "b--", label = '정밀도')
plt.plot(thresholds, recalls[:-1],'g--', label = "재현율")
plt.xlabel('value')
plt.ylabel('score')
plt.grid()
plt.legend()
plt.show()
plot_precision_recall_vs_threshold(precisions, recalls, thresholds,)
threshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]
y_train_pred_90 = (y_scores >= threshold_90_precision)
precision_score(y_train_5, y_train_pred_90)
recall_score(y_train_5, y_train_pred_90)
###Output
_____no_output_____
###Markdown
3.3.5 The ROC Curve
###Code
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
def plot_roc_curve(fpr, tpr, label = None):
plt.plot(fpr, tpr, linewidth = 2, label = label)
plt.plot([0,1],[0,1], 'k--')
plt.grid()
plot_roc_curve(fpr, tpr)
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state = 42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv = 3,
method = 'predict_proba')
y_scores_forest = y_probas_forest[:,1]
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5, y_scores_forest)
plt.plot(fpr, tpr, "b:", label = "SGD")
plot_roc_curve(fpr_forest, tpr_forest, "랜덤 포레스트")
plt.legend(loc = 'lower right')
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores_forest)
def plot_precision_recall_vs_threshold(precision, recalls, thresholds):
plt.plot(thresholds, precision[:-1], "b--", label = '정밀도')
plt.plot(thresholds, recalls[:-1],'g--', label = "재현율")
plt.xlabel('value')
plt.ylabel('score')
plt.grid()
plt.legend()
plt.show()
plot_precision_recall_vs_threshold(precisions, recalls, thresholds,)
# PR 곡선
plt.plot(precisions[:-1], recalls[:-1], "b--", label = 'PR곡선')
plt.xlabel('정밀도')
plt.ylabel('재현율')
plt.grid()
plt.legend()
plt.show()
###Output
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning: Glyph 51221 missing from current font.
font.set_text(s, 0.0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning: Glyph 48128 missing from current font.
font.set_text(s, 0.0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning: Glyph 46020 missing from current font.
font.set_text(s, 0.0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning: Glyph 51116 missing from current font.
font.set_text(s, 0.0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning: Glyph 54788 missing from current font.
font.set_text(s, 0.0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning: Glyph 50984 missing from current font.
font.set_text(s, 0.0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning: Glyph 44257 missing from current font.
font.set_text(s, 0.0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning: Glyph 49440 missing from current font.
font.set_text(s, 0.0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning: Glyph 51221 missing from current font.
font.set_text(s, 0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning: Glyph 48128 missing from current font.
font.set_text(s, 0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning: Glyph 46020 missing from current font.
font.set_text(s, 0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning: Glyph 51116 missing from current font.
font.set_text(s, 0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning: Glyph 54788 missing from current font.
font.set_text(s, 0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning: Glyph 50984 missing from current font.
font.set_text(s, 0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning: Glyph 44257 missing from current font.
font.set_text(s, 0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning: Glyph 49440 missing from current font.
font.set_text(s, 0, flags=flags)
###Markdown
3.4 Multiclass Classification
###Code
# from sklearn.svm import SVC
# svm_clf = SVC()
# svm_clf.fit(X_train, y_train)
# svm_clf.predict([some_digit])
# some_digit_scores = svm_clf.decision_function([some_digit])
# some_digit_scores
# np.argmax(some_digit_scores)
# svm_clf.classes_
# svm_clf.classes_[5]
# from sklearn.multiclass import OneVsRestClassifier
# ovr_clf = OneVsRestClassifier(SVC())
# ovr_clf.fit(X_train, y_train)
# ovr_clf.predict([some_digit])
# len(ovr_clf.estimators_)
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(random_state = 42)
sgd_clf.fit(X_train, y_train)
sgd_clf.predict([some_digit])
sgd_clf.decision_function([some_digit])
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
cross_val_score(sgd_clf, X_train, y_train, scoring = 'accuracy')
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv = 3, scoring = 'accuracy')
###Output
_____no_output_____
###Markdown
Error Analysis
###Code
from sklearn.metrics import confusion_matrix
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv = 3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
plt.matshow(conf_mx, cmap = plt.cm.gray)
plt.show()
row_sums = conf_mx.sum(axis = 1, keepdims = True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap = plt.cm.gray)
plt.show()
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = matplotlib.cm.binary, **options)
plt.axis("off")
import matplotlib
import matplotlib.pyplot as plt
cl_a, cl_b = 3,5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize = (8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row = 5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row = 5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row = 5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row = 5)
plt.show()
###Output
_____no_output_____
###Markdown
Multilabel Classification
###Code
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
from sklearn.metrics import f1_score
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv = 3)
f1_score(y_multilabel, y_train_knn_pred, average = 'macro')
###Output
_____no_output_____
###Markdown
3.7 Multioutput Classification
###Code
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
# knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[10]])
plot_digits(clean_digit)
###Output
_____no_output_____ |
Data Science Course/1. Programming/3. Python (with solutions)/Module 7 - Data Visualization/Practice Solution/01-Data_Visualization_Exercise - Solution.ipynb | ###Markdown
____ 01-Data_Visualization_Exercise - Solution____ KeytoDataScience.com Use the `practice_exercise_comp_sales_data` CSV file for the exercise. Problem 1:__Read Total Gain of all months and plot it using a line plot.__1. Label the x axis as "Month" and Y axis as "Total Gain"2. Title the plot as "Company gain per month"
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv("comp_sales_data.csv")
fig,ax =plt.subplots()
ax.plot(df['Month'], df['total_gain'])
plt.xlabel('Month')
plt.ylabel('Total Gain')
plt.xticks(df['Month'])
plt.title('Company gain per month')
plt.yticks([20000, 30000, 40000, 50000])
plt.show()
###Output
_____no_output_____
###Markdown
Problem 2: __Get Total gain of all months and plot line plot. Plot must include following "Style properties":__1. Line Style dotted and Line color should be grey2. Show legend at the lower right location with label "Profit data for last year"3. X label name = Month 4. Y label name = Sold units 5. Line width should be 26. Add a circle marker.7. Line marker color as red
###Code
fig,ax =plt.subplots()
ax.plot(df['Month'], df['total_gain'], color='grey', marker='o', markerfacecolor='r',
linestyle='--', linewidth=3,label = 'Profit data for last year')
plt.xlabel('Month')
plt.ylabel('Total Gain')
plt.legend(loc='lower right')
plt.xticks(df['Month'])
plt.title('Company profit/gain per month')
plt.yticks([20000, 30000, 40000, 50000])
plt.show()
###Output
_____no_output_____
###Markdown
Problem 3: __Read total units sold per month data for all items and show it on the same plot with multiple lines.__1. Line Style solid and Line colors can be anything2. Show legend at the upper left location with corr labels "item x sales data", x being the item in question3. X label name = Month 4. Y label name = Sold units 5. No marker6. Ticks on y axis should be from 200 to 3800 in the increments of 400
###Code
import numpy as np
monthList = df['Month'].tolist()
WhiteningCreamSalesData = df ['whitening_cream'].tolist()
MouthWashSalesData = df ['mouth_wash'].tolist()
ShavingFoamSalesData = df ['shaving_foam'].tolist()
BathingGelSalesData = df ['bathing_gel'].tolist()
HairConditionerSalesData = df ['hair_conditioner'].tolist()
SkinMoisturizerSalesData = df ['skin_moisturizer'].tolist()
fig,ax =plt.subplots()
ax.plot(monthList, WhiteningCreamSalesData, label = 'WhiteningCream Sales Data', marker=None)
ax.plot(monthList, MouthWashSalesData, label = 'MouthWash Sales Data', marker=None)
ax.plot(monthList, ShavingFoamSalesData, label = 'ShavingFoam Sales Data', marker=None)
ax.plot(monthList, BathingGelSalesData, label = 'BathingGel Sales Data', marker=None)
ax.plot(monthList, HairConditionerSalesData, label = 'HairConditioner Sales Data', marker=None)
ax.plot(monthList, SkinMoisturizerSalesData, label = 'SkinMoisturizer Sales Data', marker=None)
plt.xlabel('Month')
plt.ylabel('Sold units')
plt.legend(loc='upper left',fontsize=8)
plt.xticks(monthList)
plt.yticks(np.arange(200, 3800, step=400))
plt.show()
###Output
_____no_output_____
###Markdown
Problem 4: __Compare Hair Conditioner and Whitening Cream sales using bar plots in the same chart__1. Add a grid in the plot with gridstyle as “–.“2. Pick appropriate width for the barplot so they dont touch across months3. Upper right legend with labels for Hair conditioner and whitening cream. Make sure bars dont get covered by legends4. Title the plot as "Whitening Cream sales VS Hair conditioner sales"5. Save this plot as CreamvsConditioner.png and save it with a dpi of 200
###Code
WhiteningCreamSalesData = df['whitening_cream'].tolist()
HairConditionerSalesData = df['hair_conditioner'].tolist()
width = 0.3
fig,ax =plt.subplots()
ax.bar([a-width for a in monthList], HairConditionerSalesData, width= width, label = 'Hair Conditioner', align='edge')
ax.bar([a+width for a in monthList], WhiteningCreamSalesData, width= -width, label = 'Whitening Cream', align='edge')
plt.xlabel('Month')
plt.ylabel('Sales units')
plt.legend(loc='upper right')
plt.title('Sales data')
plt.xticks(monthList)
plt.yticks(np.arange(0, 1100, step=100))
plt.grid(True,linestyle="-.")
plt.title("Whitening Cream sales VS Hair conditioner sales\n")
plt.savefig('CreamvsConditioner.png', dpi=200)
plt.show()
###Output
_____no_output_____
###Markdown
Problem 5: __Two time series plots on same axes, one for product sales in units and one for total sales in units__1. Color for Total units should be orange in all plots2. Multiple subplots with common x axis (month) for different product units comparison with total units3. Color for products should be from colors 'bgrmcy' 4. Different y-axes for Product units (same colors respectively and Total units color label as orange. 5. Legends for both product and total units in visible area6. Single plot title as ""Monthly Product units compared with Total units"
###Code
products_cols=df.columns[1:-2]
print(products_cols)
fig, ax = plt.subplots(6,1,figsize=(10, 20))
# Loop over the different columns for diff products
for i,product in enumerate(products_cols):
product_df = df[product]
colors = "bgrcmy" # blue,green,red,cyan,magenta,yellow
ax[i].plot(df["Month"],df[product],color=colors[i],label=product)
ax[i].set_ylabel("Sales Units")
ax[i].legend(loc='upper left')
ax2=ax[i].twinx()
ax2.plot(df["Month"],df["total_units"],color='grey',label="Total")
plt.legend(loc='upper center')
ax[0].set_title("Monthly Product units compared with Total units\n")
ax[i].set_xlabel("Month")
plt.show()
###Output
_____no_output_____
###Markdown
Problem 6: __Plot monthly mean sales as bar charts for all products in same plot. (Optional: Use Seaborn)__1. Add error bars using standard dev for each bar2. Plot legend in the best possible position3. Choose the plot style as seaborn4. Annotate for the product having largest variation (least consistent mothly sales) in 12 months as :Largest variation in units"5. Keep x-label, y-label, Annotation text and Bar labels font size as 12
###Code
products_cols=df.columns[1:-2]
print(products_cols)
plt.style.use("seaborn")
fig,ax = plt.subplots(figsize=(15,5))
# Loop over the different columns for diff products
for i,product in enumerate(products_cols):
product_df = df[product]
ax.bar(product,product_df.mean(), yerr=product_df.std(),label=product)
ax.set_xticklabels(products_cols,fontsize=12)
ax.set_xlabel("Products",fontsize=12)
ax.set_ylabel("Mean Units and Std Error in Units",fontsize=12)
ax.annotate("Largest variation in units",
xy=("bathing_gel",1900),
xytext=("mouth_wash",2000),
arrowprops={"color":"black"},fontsize=12)
plt.legend(loc="best")
plt.show()
###Output
C:\Users\Prateek\AppData\Local\Temp/ipykernel_1120/2649602212.py:7: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_xticklabels(products_cols,fontsize=12)
###Markdown
Problem 7: Out of Syllabus question__Read all product sales data and show it using a stack plot (Optional: Use Seaborn)__ __Output:__ The plot should look like below. 
###Code
# write code here
monthList = df['Month'].tolist()
WhiteningCreamSalesData = df['whitening_cream'].tolist()
MouthWashSalesData = df['mouth_wash'].tolist()
ShavingFoamSalesData = df['shaving_foam'].tolist()
BathingGelSalesData = df['bathing_gel'].tolist()
HairConditionerSalesData = df['hair_conditioner'].tolist()
SkinMoisturizerSalesData = df['skin_moisturizer'].tolist()
fig,ax =plt.subplots(figsize=(10,8))
ax.plot([],[],color='m', label = 'WhiteningCream Sales Data', linewidth=5)
ax.plot([],[],color='c', label = 'MouthWash Sales Data', linewidth=5)
ax.plot([],[],color='r', label = 'ShavingFoam Sales Data', linewidth=5)
ax.plot([],[],color='k', label = 'BathingGel Sales Data', linewidth=5)
ax.plot([],[],color='g', label = 'HairConditioner Sales Data', linewidth=5)
ax.plot([],[],color='y', label = 'SkinMoisturizer Sales Data', linewidth=5)
ax.stackplot(monthList, WhiteningCreamSalesData, MouthWashSalesData, ShavingFoamSalesData,
BathingGelSalesData, HairConditionerSalesData, SkinMoisturizerSalesData,
colors=['m','c','r','k','g','y'])
plt.xlabel('Month')
plt.ylabel('Sales unints')
plt.title('All product units data using stack plot')
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Problem 8 (Optional)__Using Seaborn create the following visualizations for Titanic Dataset. (Use Seaborn)__You can load the dataset using the below command. There could be more than one correct ways of visualizing data. import seaborn as sns titanic = sns.load_dataset('titanic')
###Code
import seaborn as sns
titanic = sns.load_dataset('titanic')
titanic.head(20)
###Output
_____no_output_____
###Markdown
__Q8.1 Create a visualization to visualize age of all male and female passengers.__- __Visualization should show the outliers.__- __Annotate (call out in the plot) the oldest male.__
###Code
ax = sns.stripplot(data=titanic,x=titanic["age"],y=titanic["sex"])
max_age_male = titanic[titanic["sex"]=='male'].age.max()
ax.annotate("Oldest Male",
xy=(max_age_male, 0), xycoords='data',
xytext=(0.5,0.5), textcoords='axes fraction',
arrowprops={"color":"black"},fontsize=12
)
###Output
_____no_output_____
###Markdown
__Q8.2 Create a seaborn plot that shows central tendency of "age" (mean and Standard dev) for different "class" of boarding and comparing them for "embark_town".__- __Find out where the age demography is most varied__
###Code
sns.barplot(data=titanic,x=titanic["age"],y=titanic["embark_town"],hue="class")
#sns.pointplot(data=titanic,x=titanic["age"],y=titanic["embark_town"],hue="class")
###Output
_____no_output_____
###Markdown
Queenstown Second class passengers are most varied in their age from 30 to 80 __8.3 Explore if there is a linear relationship between:the class and the fare paid__- __Remove outliers on age and fare__ 1. Plot strip plot of age
###Code
# plot strip plot of age
ax = sns.stripplot(data=titanic, x=titanic["age"])  # avoid shadowing the plt module
###Output
_____no_output_____
###Markdown
2. Plot strip plot of fare
###Code
# plot strip plot of fare
sns.stripplot(data=titanic,x=titanic["fare"])
###Output
_____no_output_____
###Markdown
3. __Remove outlier data__ and __plot linear relationship plot between fare and age__
###Code
# plot linear relationship plot between fare and age
titanic_new = titanic[(titanic["fare"]<300) & (titanic["age"]<70)]
sns.regplot(data=titanic_new,x=titanic_new["fare"],y=titanic_new["age"])
###Output
_____no_output_____
###Markdown
4. Plot final graph of residual plot between fare and age
###Code
sns.residplot(data=titanic_new,x=titanic_new["fare"],y=titanic_new["age"])
###Output
_____no_output_____
###Markdown
Since the residual plot is not randomly distributed, there is no clear linear relationship between fare and age. __Q8.4 Using Seaborn, for the titanic dataset, depict whether there is a correlation between survival, age, and whether the passengers were alone or with somebody ("alone").__
###Code
sns.heatmap(titanic.filter(["survived","alone","age"]).corr(),cmap="coolwarm")
###Output
_____no_output_____ |
code/.ipynb_checkpoints/12.topic_models-checkpoint.ipynb | ###Markdown
****** Topic Models ****** Wang Chengjun ([email protected]), Computational Communication, http://computational-communication.com On the eve of the 2014 gaokao (China's college entrance examination), Baidu "drew on a massive collection of model essays and on search data, using a probabilistic topic model to predict the likely directions of the 2014 gaokao essay prompts." As shown in the figure above, six topics were identified: time, life, nation, education, mind, and development. Each topic in turn includes a number of concrete keywords; for example, the topic "life" corresponds to: ordinary, freedom, beauty, dream, striving, youth, happiness, loneliness. [Read more](https://site.douban.com/146782/widget/notes/15462869/note/356806087/)  latent Dirichlet allocation (LDA)The simplest topic model (on which all others are based) is latent Dirichlet allocation (LDA). - LDA is a generative model that infers unobserved meanings from a large set of observations. Reference- Blei DM, Ng J, Jordan MI. Latent dirichlet allocation. J Mach Learn Res. 2003; 3: 993–1022.- Blei DM, Lafferty JD. Correction: a correlated topic model of science. Ann Appl Stat. 2007; 1: 634. - Blei DM. Probabilistic topic models. Commun ACM. 2012; 55: 55–65.- Chandra Y, Jiang LC, Wang C-J (2016) Mining Social Entrepreneurship Strategies Using Topic Modeling. PLoS ONE 11(3): e0151342. doi:10.1371/journal.pone.0151342 - Topic models assume that each document contains a mixture of topics - Topics are considered latent/unobserved variables that stand between the documents and terms. It is impossible to directly assess the relationships between topics and documents and between topics and terms. - What can be directly observed is the distribution of terms over documents, which is known as the document term matrix (DTM). Topic models algorithmically identify the best set of latent variables (topics) that can best explain the observed distribution of terms in the documents. The DTM is further decomposed into two matrices:- a term-topic matrix (TTM) - a topic-document matrix (TDM). Each document can be assigned to a primary topic that demonstrates the highest topic-document probability and can then be linked to other topics with declining probabilities. Assume K topics are in D documents, and each topic is denoted with $\phi_{1:K}$. Each topic $\phi_k$ is a distribution over the fixed vocabulary of the given documents. The topic proportions in document d are denoted as $\theta_d$; e.g., the kth topic's proportion in document d is $\theta_{d, k}$. Let $w_{d,n}$ denote the nth term in document d. Further, topic models assign topics to a document and its terms: the topic assigned to document d is denoted as $z_d$, and the topic assigned to the nth term in document d is denoted as $z_{d,n}$. According to Blei et al., the joint distribution of $\phi_{1:K}$, $\theta_{1:D}$, $z_{1:D}$ and $w_{d,n}$ under the generative process for LDA can be expressed as: $p(\phi_{1:K}, \theta_{1:D}, z_{1:D}, w_{d,n}) = \prod_{i=1}^{K} p(\phi_i) \prod_{d=1}^{D} p(\theta_d) \left( \prod_{n=1}^{N} p(z_{d,n} \mid \theta_d) \, p(w_{d,n} \mid \phi_{1:K}, z_{d,n}) \right)$ Note that $\phi_{1:K}$, $\theta_{1:D}$, and $z_{1:D}$ are latent, unobservable variables. Thus, the computational challenge of LDA is to compute their conditional distribution given the observed words in the documents $w_{d,n}$. Accordingly, the posterior distribution of LDA can be expressed as: $p(\phi_{1:K}, \theta_{1:D}, z_{1:D} \mid w_{d,n}) = \frac{p(\phi_{1:K}, \theta_{1:D}, z_{1:D}, w_{d,n})}{p(w_{1:D})}$ Because the number of possible topic structures is exponentially large, it is impossible to compute this posterior exactly. Topic models therefore aim to develop efficient algorithms to approximate the posterior of LDA. - There are two categories of algorithms: - sampling-based algorithms - variational algorithms Using the Gibbs sampling method, we can build a Markov chain for the sequence of random variables (see Eq 1).
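Before turning to how the chain is sampled, the generative story above can be made concrete with a tiny simulation; the vocabulary size, topic count, and symmetric Dirichlet parameters below are arbitrary illustration values, not settings used for the AP corpus later in this notebook.
```python
import numpy as np

V, K, N_d = 6, 2, 10                              # vocabulary size, topics, words in one document
phi = np.random.dirichlet(np.ones(V), size=K)     # each topic: a distribution over the vocabulary
theta_d = np.random.dirichlet(np.ones(K))         # topic proportions for one document

words = []
for n in range(N_d):
    z_dn = np.random.choice(K, p=theta_d)         # topic assignment for the n-th word
    w_dn = np.random.choice(V, p=phi[z_dn])       # word drawn from that topic's distribution
    words.append(w_dn)
print(theta_d, words)
```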
The sampling algorithm is applied to the chain to sample from its limiting distribution, and this approximates the posterior. Gensim: Unfortunately, scikit-learn does not support latent Dirichlet allocation. Therefore, we are going to use the gensim package in Python. Gensim is developed by Radim Řehůřek, who is a machine learning researcher and consultant in the Czech Republic. We must start by installing it. We can achieve this by running one of the following commands:> pip install gensim
###Code
%matplotlib inline
from __future__ import print_function
from wordcloud import WordCloud
from gensim import corpora, models, similarities, matutils
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Download data: http://www.cs.princeton.edu/~blei/lda-c/ap.tgz Unzip the data and put it into /Users/chengjun/bigdata/ap/
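If you prefer to fetch the archive from Python rather than by hand, a sketch along these lines should work; it only uses the standard library, the URL and directory are the ones mentioned above, and it assumes the archive unpacks into an `ap/` subfolder, as the paths used below suggest.
```python
import os
import tarfile

try:                                    # Python 3
    from urllib.request import urlretrieve
except ImportError:                     # Python 2
    from urllib import urlretrieve

url = 'http://www.cs.princeton.edu/~blei/lda-c/ap.tgz'
target_dir = '/Users/chengjun/bigdata/'
archive_path = os.path.join(target_dir, 'ap.tgz')

if not os.path.isdir(target_dir):
    os.makedirs(target_dir)
urlretrieve(url, archive_path)
with tarfile.open(archive_path) as tar:
    tar.extractall(target_dir)          # should produce /Users/chengjun/bigdata/ap/
```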
###Code
# Load the data
corpus = corpora.BleiCorpus('/Users/chengjun/bigdata/ap/ap.dat', '/Users/chengjun/bigdata/ap/vocab.txt')
' '.join(dir(corpus))
corpus.id2word.items()[:3]
###Output
_____no_output_____
###Markdown
Build the topic model
###Code
NUM_TOPICS = 100
model = models.ldamodel.LdaModel(
corpus, num_topics=NUM_TOPICS, id2word=corpus.id2word, alpha=None)
' '.join(dir(model))
###Output
_____no_output_____
###Markdown
We can see the list of topics a document refers to by using the model[doc] syntax:
###Code
document_topics = [model[c] for c in corpus]
# how many topics does one document cover?
document_topics[2]
# The first topic
# format: weight, term
model.show_topic(0, 10)
# The 100 topic
# format: weight, term
model.show_topic(99, 10)
words = model.show_topic(0, 5)
words
model.show_topics(4)
for f, w in words[:10]:
print(f, w)
# write out topcis with 10 terms with weights
for ti in range(model.num_topics):
words = model.show_topic(ti, 10)
tf = sum(f for f, w in words)
with open('/Users/chengjun/github/cjc2016/data/topics_term_weight.txt', 'a') as output:
for f, w in words:
line = str(ti) + '\t' + w + '\t' + str(f/tf)
output.write(line + '\n')
# We first identify the most discussed topic, i.e., the one with the
# highest total weight
topics = matutils.corpus2dense(model[corpus], num_terms=model.num_topics)
weight = topics.sum(1)
max_topic = weight.argmax()
# Get the top 64 words for this topic
# Without the argument, show_topic would return only 10 words
words = model.show_topic(max_topic, 64)
words = np.array(words).T
words_freq=[float(i)*10000000 for i in words[0]]
words = zip(words[1], words_freq)
wordcloud = WordCloud().generate_from_frequencies(words)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
num_topics_used = [len(model[doc]) for doc in corpus]
fig,ax = plt.subplots()
ax.hist(num_topics_used, np.arange(42))
ax.set_ylabel('Nr of documents')
ax.set_xlabel('Nr of topics')
fig.tight_layout()
#fig.savefig('Figure_04_01.png')
###Output
_____no_output_____
###Markdown
We can see that about 150 documents have 5 topics, - while the majority deal with around 10 to 12 of them. - No document talks about more than 20 topics.
###Code
# Now, repeat the same exercise using alpha=1.0
# You can edit the constant below to play around with this parameter
ALPHA = 1.0
model1 = models.ldamodel.LdaModel(
corpus, num_topics=NUM_TOPICS, id2word=corpus.id2word, alpha=ALPHA)
num_topics_used1 = [len(model1[doc]) for doc in corpus]
fig,ax = plt.subplots()
ax.hist([num_topics_used, num_topics_used1], np.arange(42))
ax.set_ylabel('Nr of documents')
ax.set_xlabel('Nr of topics')
# The coordinates below were fit by trial and error to look good
plt.text(9, 223, r'default alpha')
plt.text(26, 156, 'alpha=1.0')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Visualizing the topic model with pyLDAvis: http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/pyLDAvis_overview.ipynb Loading and cleaning the data
###Code
with open('/Users/chengjun/bigdata/ap/ap.txt', 'r') as f:
dat = f.readlines()
dat[:6]
dat[4].strip()[0]
docs = []
for i in dat[:100]:
if i.strip()[0] != '<':
docs.append(i)
def clean_doc(doc):
doc = doc.replace('.', '').replace(',', '')
doc = doc.replace('``', '').replace('"', '')
doc = doc.replace('_', '').replace("'", '')
doc = doc.replace('!', '')
return doc
docs = [clean_doc(doc) for doc in docs]
texts = [[i for i in doc.lower().split()] for doc in docs]
from nltk.corpus import stopwords
stop = stopwords.words('english')
' '.join(stop)
stop.append('said')
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1 and token not in stop]
for text in texts]
docs[8]
' '.join(texts[9])
dictionary = corpora.Dictionary(texts)
lda_corpus = [dictionary.doc2bow(text) for text in texts]
#The function doc2bow() simply counts the number of occurences of each distinct word,
# converts the word to its integer word id and returns the result as a sparse vector.
lda_model = models.ldamodel.LdaModel(
lda_corpus, num_topics=NUM_TOPICS, id2word=dictionary, alpha=None)
import pyLDAvis.gensim
ap_data = pyLDAvis.gensim.prepare(lda_model, lda_corpus, dictionary)
pyLDAvis.enable_notebook()
pyLDAvis.display(ap_data)
pyLDAvis.save_html(ap_data, '/Users/chengjun/github/cjc2016/vis/ap_ldavis.html')
###Output
_____no_output_____ |
Introduction to Pytorch/.ipynb_checkpoints/Part 2 - Neural Networks in PyTorch (Exercises)-checkpoint.ipynb | ###Markdown
Neural networks with PyTorchDeep learning networks tend to be massive with dozens or hundreds of layers, that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a nice module `nn` that provides a nice way to efficiently build large neural networks.
###Code
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels, you can see a sample belowOur goal is to build a neural network that can take one of these images and predict the digit in the image.First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
###Code
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like```pythonfor image, label in trainloader: do things with images and labels```You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
###Code
dataiter = iter(trainloader)
images, labels = dataiter.next()
print(type(images))
print(images.shape)
print(labels.shape)
###Output
<class 'torch.Tensor'>
torch.Size([64, 1, 28, 28])
torch.Size([64])
###Markdown
This is what one of the images looks like.
###Code
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
###Output
_____no_output_____
###Markdown
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)` (784 is 28 times 28). This is typically called *flattening*: we flatten the 2D images into 1D vectors.Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
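One small hint before the solution below: `view` can infer a dimension for you, so you can flatten without hard-coding 784 (a quick sketch using the `images` batch loaded above).
```python
flattened = images.view(images.shape[0], -1)  # -1 lets PyTorch infer 784 from the remaining dims
print(flattened.shape)                        # torch.Size([64, 784])
```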
###Code
def activation(x):
return 1 / (1 + torch.exp(-x))
## Your solution
input = images.view(64, 784)
#print('input size: ', input.shape)
n_input = input.shape[1]
n_hidden = 256
n_output = 10
W1 = torch.randn(n_input, n_hidden)
b1 = torch.randn((1, n_hidden))
W2 = torch.randn(n_hidden, n_output)
b2 = torch.randn((1, n_output))
hidden_out = activation(torch.mm(input, W1) + b1)
#print('last layer inp is : ', hidden_out.shape)
out = activation(torch.mm(hidden_out, W2) + b2) # output of your network, should have shape (64,10)
# dont need activation here
#print('last layer output shape is: ', out.shape)
###Output
_____no_output_____
###Markdown
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:Here we see that the probability for each class is roughly the same. This represents an untrained network: it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class.To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like$$\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_{k=1}^{K}{e^{x_k}}}$$What this does is squish each input $x_i$ between 0 and 1 and normalizes the values to give you a proper probability distribution where the probabilities sum up to one.> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
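To make the shape warning above concrete, here is a small illustration; the tensor is random and only serves to show the shapes involved.
```python
import torch

a = torch.exp(torch.randn(64, 10))
b = torch.sum(a, dim=1)            # shape (64,): dividing a by this directly raises a size mismatch
b = b.view(-1, 1)                  # reshape to (64, 1) so each row of a is divided by its own sum
probs = a / b
print(probs.shape, probs.sum(dim=1)[:3])   # (64, 10), and each row now sums to 1
```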
###Code
def softmax(x):
## TODO: Implement the softmax function here
#print(x.shape)
sum_values = torch.sum(torch.exp(x), dim = 1)
sum_values = sum_values.view(64,1)
#print(sum_values.shape)
return (torch.exp(x) / sum_values)
# Here, out should be the output of the network in the previous excercise with shape (64,10)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
###Output
torch.Size([64, 10])
tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000])
###Markdown
Building networks with PyTorchPyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
###Code
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
###Output
_____no_output_____
###Markdown
Let's go through this bit by bit.```pythonclass Network(nn.Module):```Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.```pythonself.hidden = nn.Linear(784, 256)```This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.```pythonself.output = nn.Linear(256, 10)```Similarly, this creates another linear transformation with 256 inputs and 10 outputs.```pythonself.sigmoid = nn.Sigmoid()self.softmax = nn.Softmax(dim=1)```Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.```pythondef forward(self, x):```PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.```pythonx = self.hidden(x)x = self.sigmoid(x)x = self.output(x)x = self.softmax(x)```Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.Now we can create a `Network` object.
###Code
# Create the network and look at it's text representation
model = Network()
model
###Output
_____no_output_____
###Markdown
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
###Code
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
###Output
_____no_output_____
###Markdown
Activation functionsSo far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).In practice, the ReLU function is used almost exclusively as the activation function for hidden layers. Your Turn to Build a Network> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.
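Before the solution below, here is a quick, purely illustrative look at the activations just mentioned, applied element-wise to a small tensor of arbitrary values.
```python
import torch

x = torch.linspace(-2, 2, steps=5)
print(torch.sigmoid(x))   # squashed into (0, 1)
print(torch.tanh(x))      # squashed into (-1, 1)
print(torch.relu(x))      # negative values clipped to 0
```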
###Code
## Your solution here
import torch.nn.functional as F
class myNetwork(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 128)
self.fc2 = nn.Linear(128, 64)
self.output = nn.Linear(64, 10)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.softmax(self.output(x), dim = 1)
return x
model = myNetwork()
model
###Output
_____no_output_____
###Markdown
Initializing weights and biasesThe weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance.
###Code
print(model.fc1.weight)
print(model.fc1.bias)
###Output
Parameter containing:
tensor([[-0.0078, 0.0033, 0.0271, ..., 0.0098, -0.0090, -0.0190],
[ 0.0228, -0.0037, 0.0244, ..., 0.0137, 0.0168, -0.0309],
[-0.0293, -0.0279, 0.0221, ..., -0.0113, 0.0185, 0.0356],
...,
[ 0.0040, -0.0171, -0.0245, ..., 0.0093, -0.0037, -0.0081],
[ 0.0017, -0.0182, -0.0152, ..., -0.0313, -0.0183, -0.0096],
[ 0.0209, 0.0137, -0.0270, ..., 0.0022, 0.0221, -0.0035]],
requires_grad=True)
Parameter containing:
tensor([ 0.0067, 0.0028, -0.0134, -0.0026, 0.0326, 0.0158, -0.0071, 0.0134,
-0.0069, -0.0102, 0.0166, 0.0132, 0.0318, 0.0339, 0.0131, -0.0107,
-0.0100, 0.0061, 0.0173, 0.0055, 0.0126, 0.0196, 0.0141, -0.0075,
-0.0317, 0.0194, -0.0078, 0.0225, 0.0082, 0.0145, 0.0020, -0.0045,
0.0175, 0.0087, 0.0044, -0.0040, 0.0076, 0.0049, 0.0152, -0.0113,
-0.0323, -0.0002, -0.0128, 0.0268, 0.0016, -0.0017, -0.0215, -0.0027,
0.0102, -0.0187, 0.0047, 0.0341, 0.0031, 0.0007, 0.0331, -0.0134,
-0.0273, 0.0023, 0.0240, 0.0101, 0.0098, 0.0154, -0.0119, 0.0023,
0.0180, -0.0267, -0.0162, 0.0157, -0.0299, -0.0145, -0.0080, -0.0085,
-0.0313, -0.0342, 0.0172, 0.0016, 0.0353, -0.0147, -0.0197, -0.0350,
0.0029, -0.0123, -0.0027, -0.0051, 0.0338, -0.0211, 0.0164, 0.0067,
-0.0344, 0.0289, -0.0327, 0.0276, 0.0158, -0.0302, -0.0270, 0.0234,
0.0203, -0.0018, -0.0002, 0.0239, -0.0202, 0.0138, 0.0342, 0.0328,
-0.0175, 0.0224, 0.0167, -0.0005, 0.0224, 0.0104, 0.0155, 0.0187,
0.0016, -0.0103, 0.0030, 0.0102, 0.0082, 0.0347, 0.0353, -0.0023,
-0.0322, -0.0006, 0.0206, 0.0091, 0.0194, -0.0017, -0.0149, 0.0095],
requires_grad=True)
###Markdown
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
###Code
# Set biases to all zeros
model.fc1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
###Output
_____no_output_____
###Markdown
Forward passNow that we have a network, let's see what happens when we pass in an image.
###Code
# Grab some data
dataiter = iter(trainloader)
images, labels = dataiter.next()
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
###Output
_____no_output_____
###Markdown
As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random! Using `nn.Sequential`PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.htmltorch.nn.Sequential)). Using this to build the equivalent network:
###Code
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
###Output
Sequential(
(0): Linear(in_features=784, out_features=128, bias=True)
(1): ReLU()
(2): Linear(in_features=128, out_features=64, bias=True)
(3): ReLU()
(4): Linear(in_features=64, out_features=10, bias=True)
(5): Softmax()
)
###Markdown
Here our model is the same as before: 784 input units, a hidden layer with 128 units, ReLU activation, a 64-unit hidden layer, another ReLU, then the output layer with 10 units, and the softmax output.The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use `model[0]`.
###Code
print(model[0])
model[0].weight
###Output
Linear(in_features=784, out_features=128, bias=True)
###Markdown
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
###Code
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
###Output
_____no_output_____
###Markdown
Now you can access layers either by integer or the name
###Code
print(model[0])
print(model.fc1)
###Output
Linear(in_features=784, out_features=128, bias=True)
Linear(in_features=784, out_features=128, bias=True)
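###Markdown
One more way to look inside the model (an added aside): `named_parameters()` yields every weight and bias tensor together with the name it was registered under, and it works the same for the integer-indexed and the `OrderedDict`-named versions.
###Code
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
###Output
_____no_output_____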
|
TrumpTweetsAnalysis-2.ipynb | ###Markdown
Trump Twitter Sentiment Analysis In this notebook, I will walk through how I went about doing sentiment analysis on Trump's Tweets, using Twitter API's to extract his tweets. I also use various tools such as Pandas to make data analysis easier.
###Code
# Run this cell to set up your notebook
import csv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
import seaborn as sns
sns.set()
sns.set_context("talk")
import re
from IPython.display import display, Latex, Markdown
from client.api.notebook import Notebook
###Output
_____no_output_____
###Markdown
Getting the Dataset---Since we'll be looking at Twitter data, we need to download the data from Twitter!Twitter provides an API for downloading tweet data in large batches. The `tweepy` package makes it fairly easy to use. SetupInstall `tweepy`, if you don't already have it. (Be sure to activate your Conda environment for the class first. Then run `pip install tweepy`.)
###Code
## Make sure you are in your ds100 conda environment.
## The following line installs tweepy; skip it if tweepy is already installed
!pip install tweepy
# The following should run
import tweepy
###Output
_____no_output_____
###Markdown
There are instructions on using `tweepy` [here](http://tweepy.readthedocs.io/en/v3.5.0/getting_started.html), but we will give you example code.Twitter requires you to have authentication keys to access their API. To get your keys, you'll have to sign up as a Twitter developer. The next question will walk you through this process. --- Getting StartedFollow the instructions below to get your Twitter API keys. Read the instructions completely before starting.1. [Create a Twitter account](https://twitter.com). You can use an existing account if you have one.2. Under account settings, add your phone number to the account.3. [Create a Twitter developer account](https://dev.twitter.com/resources/signup). Attach it to your Twitter account.4. Once you're logged into your developer account, [create an application for this assignment](https://apps.twitter.com/app/new). You can call it whatever you want, and you can write any URL when it asks for a web site.5. On the page for that application, find your Consumer Key and Consumer Secret.6. On the same page, create an Access Token. Record the resulting Access Token and Access Token Secret.7. Edit the file [keys.json](keys.json) and replace the placeholders with your keys. Don't turn in that file. WARNING (Please Read) !!!! Protect your Twitter KeysIf someone has your authentication keys, they can access your Twitter account and post as you! So don't give them to anyone, and **don't write them down in this notebook**. The usual way to store sensitive information like this is to put it in a separate file and read it programmatically. That way, you can share the rest of your code without sharing your keys. That's why we're asking you to put your keys in `keys.json` for this assignment. Avoid making too many API calls.Twitter limits developers to a certain rate of requests for data. If you make too many requests in a short period of time, you'll have to wait awhile (around 15 minutes) before you can make more. So carefully follow the code examples you see and don't rerun cells without thinking. Instead, always save the data you've collected to a file. We've provided templates to help you do that. Be careful about which functions you call!This API can retweet tweets, follow and unfollow people, and modify your twitter settings. Be careful which functions you invoke! One of your instructors accidentally re-tweeted some tweets because that instructor typed `retweet` instead of `retweet_count`.
###Code
import json
key_file = 'keys.json'
# Loading your keys from keys.json (which you should have filled
# in in question 1):
with open(key_file) as f:
keys = json.load(f)
###Output
_____no_output_____
###Markdown
This cell tests the Twitter authentication. It should run without errors or warnings and display your Twitter username.
###Code
import tweepy
from tweepy import TweepError
import logging
try:
auth = tweepy.OAuthHandler(keys["consumer_key"], keys["consumer_secret"])
auth.set_access_token(keys["access_token"], keys["access_token_secret"])
api = tweepy.API(auth)
print("Your username is:", api.auth.get_username())
except TweepError as e:
logging.warning("There was a Tweepy error. Double check your API keys and try again.")
logging.warning(e)
###Output
Your username is: saranshg23
###Markdown
--- Extracting and Saving the Tweet DataThe code below will load in the data, and save it to a local file cache, which will make it faster for us to access later on and prevent us from having to call the data using Twitter API's everytime, which can be slow.
###Code
def load_keys(path):
"""Loads your Twitter authentication keys from a file on disk.
Args:
path (str): The path to your key file. The file should
be in JSON format and look like this (but filled in):
{
"consumer_key": "<your Consumer Key here>",
"consumer_secret": "<your Consumer Secret here>",
"access_token": "<your Access Token here>",
"access_token_secret": "<your Access Token Secret here>"
}
Returns:
dict: A dictionary mapping key names (like "consumer_key") to
key values."""
with open (path) as f:
keys = json.load(f)
return keys
def download_recent_tweets_by_user(user_account_name, keys):
"""Downloads tweets by one Twitter user.
Args:
user_account_name (str): The name of the Twitter account
whose tweets will be downloaded.
keys (dict): A Python dictionary with Twitter authentication
keys (strings), like this (but filled in):
{
"consumer_key": "<your Consumer Key here>",
"consumer_secret": "<your Consumer Secret here>",
"access_token": "<your Access Token here>",
"access_token_secret": "<your Access Token Secret here>"
}
Returns:
list: A list of Status objects, each representing one tweet."""
auth = tweepy.OAuthHandler(keys["consumer_key"], keys["consumer_secret"])
auth.set_access_token(keys["access_token"], keys["access_token_secret"])
api = tweepy.API(auth)
# Getting as many recent tweets by @user_account_name as Twitter will let us have:
tweets = list(tweepy.Cursor(api.user_timeline, id=user_account_name).items())
return tweets
def save_tweets(tweets, path):
"""Saves a list of tweets to a file in the local filesystem.
This function makes no guarantee about the format of the saved
tweets, **except** that calling load_tweets(path) after
save_tweets(tweets, path) will produce the same list of tweets
and that only the file at the given path is used to store the
tweets. (That means you can implement this function however
you want, as long as saving and loading works!)
Args:
tweets (list): A list of tweet objects (of type Status) to
be saved.
path (str): The place where the tweets will be saved.
Returns:
None"""
with open(path, "wb") as f:
import pickle
pickle.dump(tweets, f)
def load_tweets(path):
"""Loads tweets that have previously been saved.
Calling load_tweets(path) after save_tweets(tweets, path)
will produce the same list of tweets.
Args:
path (str): The place where the tweets were be saved.
Returns:
list: A list of Status objects, each representing one tweet."""
with open(path, "rb") as f:
import pickle
tweets = pickle.load(f)
return tweets
def get_tweets_with_cache(user_account_name, keys_path):
"""Get recent tweets from one user, loading from a disk cache if available.
The first time you call this function, it will download tweets by
a user. Subsequent calls will not re-download the tweets; instead
they'll load the tweets from a save file in your local filesystem.
All this is done using the functions you defined in the previous cell.
This has benefits and drawbacks that often appear when you cache data:
+: Using this function will prevent extraneous usage of the Twitter API.
+: You will get your data much faster after the first time it's called.
-: If you really want to re-download the tweets (say, to get newer ones,
or because you screwed up something in the previous cell and your
tweets aren't what you wanted), you'll have to find the save file
(which will look like <something>_recent_tweets.pkl) and delete it.
Args:
user_account_name (str): The Twitter handle of a user, without the @.
keys_path (str): The path to a JSON keys file in your filesystem.
"""
from pathlib import Path  # Path is used just below but was never imported earlier in this notebook
keys = load_keys(keys_path)
if Path(user_account_name + "_recent_tweets.pkl").exists():
return load_tweets(user_account_name + "_recent_tweets.pkl")
else:
tweets = download_recent_tweets_by_user(user_account_name, keys)
save_tweets(tweets, user_account_name + "_recent_tweets.pkl")
return tweets
###Output
_____no_output_____
###Markdown
If everything was implemented correctly, you should be able to obtain roughly the last 3000 tweets from the `realdonaldtrump` account.
###Code
# When you are done, run this cell to load @realdonaldtrump's tweets.
# Note the function get_tweets_with_cache. You may find it useful
# later.
trump_tweets = get_tweets_with_cache("realdonaldtrump", key_file)
print("Number of tweets downloaded:", len(trump_tweets))
###Output
Number of tweets downloaded: 3242
###Markdown
--- Exploring the Data with Pandas DataFramesI will now extract important fields from the tweet objects and convert them into a Pandas dataframe for further analysis. Each trump tweet is stored in a `tweepy.models.Status` object:
###Code
type(trump_tweets[0])
###Output
_____no_output_____
###Markdown
We can list all the members of this object by looking at the private `__dict__` variable:
###Code
list(trump_tweets[0].__dict__.keys())
###Output
_____no_output_____
###Markdown
Therefore we can extract a field simply by reading its value:
###Code
trump_tweets[0].text
###Output
_____no_output_____
###Markdown
Constructing the DataFrameI will now construct a DataFrame called `trump`. The index of the dataframe should be the ID of each tweet (looks something like `907698529606541312`). It should have these columns:- `time`: The time the tweet was created.- `source`: The source device of the tweet.- `text`: The text of the tweet.- `retweet_count`: The retweet count of the tweet.
###Code
trumps=trump_tweets
trump = pd.DataFrame({'retweet_count': [t.retweet_count for t in trumps], 'text': [t.text for t in trumps],
'source': [t.source for t in trumps], 'time': [t.created_at for t in trumps]})
trump.index = [t.id_str for t in trumps]
trump.head(20)
###Output
_____no_output_____
###Markdown
Here are two important dates that we'll use in our analysis. `ELEC_DATE` is the date when Trump won the 2016 Presidential election, and `INAUG_DATE` is the date that Trump was sworn into office.
###Code
from datetime import datetime
ELEC_DATE = datetime(2016, 11, 8)
INAUG_DATE = datetime(2017, 1, 20)
###Output
_____no_output_____
###Markdown
Here are the first and last rows of your tweet data.You'll notice that the data contains tweets from before the election.
###Code
trump.iloc[[0, -1], :]
###Output
_____no_output_____
###Markdown
Tweet Source AnalysisI am now going to find out the characteristics of Trump tweets and the devices used for the tweets.--- Unique Sources This code will find out the number of unique sources of the Trump tweets and save the result in `num_sources`. Then, I'll make a bar plot of the counts of different sources to visualize the data.
###Code
source = trump['source']
num_sources = source.nunique()
num_each = pd.value_counts(source)
# make a bar plot here
num_each.plot.bar()
###Output
_____no_output_____
###Markdown
As we can see from the plot above, Trump tweets are mostly from iPhone or Android. Is there a difference in his tweet behavior between the two devices?We will attempt to answer this question in our subsequent analysis.First, we'll take a look at whether Trump's tweets from an Android come at different times than his tweets from an iPhone. Note that Twitter gives us his tweets in the [UTC timezone](https://www.wikiwand.com/en/List_of_UTC_time_offsets):
###Code
print(trump_tweets[0]._json['created_at'])
###Output
Tue Sep 26 20:11:48 +0000 2017
###Markdown
We'll convert the tweet times to US Eastern Time, the timezone of New York and Washington D.C., since those are the places we would expect the most tweet activity from Trump.
###Code
trump['est_time'] = pd.Index(trump['time']).tz_localize("UTC").tz_convert("US/Eastern")
trump.head()
###Output
_____no_output_____
###Markdown
--- Plotting Tweet Timings By DeviceI will use this data to make a line plot with two curves:1. The number of iPhone tweets vs. hour of the day, normalized over the hours of the day. For example, if there were 10 tweets at 1pm and 20 tweets at 2pm, the line plot should be 0, then 0.33 at 1pm, 0.66 at 2pm, then back to 0.2. The same curve for Android tweets.
###Code
a = trump[trump['source'] == 'Twitter for iPhone']
b = trump[trump['source'] == 'Twitter for Android']
iPhone = a.groupby(a['est_time'].dt.hour).size() / len(a)   # normalize by the total number of iPhone tweets
Android = b.groupby(b['est_time'].dt.hour).size() / len(b)  # normalize by the total number of Android tweets
iPhone.plot(label = "iPhone")
Android.plot(label = "Android")
plt.legend()
plt.xlabel("hour")
plt.ylabel("fraction")
###Output
_____no_output_____
###Markdown
--- Analyzing the ResultsTrump seems to tweet the most between hours 5 and 10 in the morning. Trump seems to tweet through the iPhone at around the same general proportion during normal hours throughout the day.However, a large proportion of his Android tweets happen in the early hours, which means maybe he uses his Android more when he is travelling to certain places. --- Let's now look at his tweet device usage over the entire time period we have in the dataset.Take a look at the code below and the plot it generates.You should be able to answer the following questions about this code. You don't have to write the answers down anywhere, but you'll need to make variations of this plot in later questions so understanding this code will help you greatly.1. What does `set_index` do here?1. What does `resample` do? What does the `'D'` argument do in `resample`?1. What does `unstack` do? What does the `level=0` argument do in unstack?1. Why does one call to `plot()` generate 7 lines?Feel free to copy this cell, play around with the code to see the intermediate result, then delete your cell after you're done.
###Code
#1. set_index makes est_time the index, so resample can bin the rows by time (and it becomes the x-axis when plotting).
#2. resample means that we resample into bins, and the 'D' means the bins are one day wide.
#3. unstack pivots an index level into columns; level=0 pivots the outermost level (source), giving one column per device.
#4. plot draws 7 lines because the data was grouped by source and there are 7 different sources, so 1 line per source.
(trump.loc[:, ['est_time', 'source']]
.set_index('est_time')
.groupby('source')
.resample('D')
.size()
.unstack(level=0)
.plot()
)
plt.xlabel('time')
plt.ylabel('count')
###Output
_____no_output_____
###Markdown
Cleaning the PlotOne problem with the plot above is that it plots too many points to see overall trends in the device usage.Thus, I will recreate the plot above, grouping by each month instead of each day.
###Code
(trump.loc[:, ['est_time', 'source']]
.set_index('est_time')
.groupby('source')
.resample('M')
.size()
.unstack(level=0)
.plot()
)
plt.xlabel('time')
plt.ylabel('count')
plt.figure(figsize=(50,30))
###Output
_____no_output_____
###Markdown
According to the plot, Trump's tweets come from many different sources. It turns out that many of his tweets were not from Trump himself but from his staff. [Take a look at this Verge article.](https://www.theverge.com/2017/3/29/15103504/donald-trump-iphone-using-switched-android)Does the data support the information in the article? What else do you find out about changes in Trump's tweets sources from the plot?Yes, Trump tweets from many different sources, but it seems to be concentrated in a few main ones. The info in the article is also supported by this data because Trump has indeed essentially stopped posting from his Android since March according to the plot. However, this might mean that many of his tweets from iPhone before March were not actually from him but from his staff members, as Trump mainly used his Android to post his own tweets before. Likewise, his usage of the iPhone has gone up significantly after stopping use of the Android, which makes sense because he has one less medium since he stopped using the Android. It also seems every medium other than the iPhone and Media Studio has decreased, while those two have increased according to the plot. What are some ways we can distinguish between tweets that came from Trump and tweets from his staff? Before, it was as easy as checking which device the tweet came from. Now, we have to rely on more sophisticated methods. --- Sentiment AnalysisIt turns out that we can use the words in Trump's tweets to calculate a measure of the sentiment of the tweet. For example, the sentence "I love America!" has positive sentiment, whereas the sentence "I hate taxes!" has a negative sentiment. In addition, some words have stronger positive / negative sentiment than others: "I love America." is more positive than "I like America."We will use the [VADER (Valence Aware Dictionary and sEntiment Reasoner)](https://github.com/cjhutto/vaderSentiment) lexicon to analyze the sentiment of Trump's tweets. VADER is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media, which is great for our usage.The VADER lexicon gives the sentiment of individual words. Run the following cell to show the first few rows of the lexicon:
###Code
!head vader_lexicon.txt
###Output
$: -1.5 0.80623 [-1, -1, -1, -1, -3, -1, -3, -1, -2, -1]
%) -0.4 1.0198 [-1, 0, -1, 0, 0, -2, -1, 2, -1, 0]
%-) -1.5 1.43178 [-2, 0, -2, -2, -1, 2, -2, -3, -2, -3]
&-: -0.4 1.42829 [-3, -1, 0, 0, -1, -1, -1, 2, -1, 2]
&: -0.7 0.64031 [0, -1, -1, -1, 1, -1, -1, -1, -1, -1]
( '}{' ) 1.6 0.66332 [1, 2, 2, 1, 1, 2, 2, 1, 3, 1]
(% -0.9 0.9434 [0, 0, 1, -1, -1, -1, -2, -2, -1, -2]
('-: 2.2 1.16619 [4, 1, 4, 3, 1, 2, 3, 1, 2, 1]
(': 2.3 0.9 [1, 3, 3, 2, 2, 4, 2, 3, 1, 2]
((-: 2.1 0.53852 [2, 2, 2, 1, 2, 3, 2, 2, 3, 2]
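###Markdown
As an aside (an addition to the original walkthrough): the same lexicon also ships inside the `vaderSentiment` package, whose `SentimentIntensityAnalyzer` applies VADER's full rule-based scoring. A rough sketch, assuming the package has been installed with `pip install vaderSentiment`:
###Code
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
# polarity_scores returns negative/neutral/positive fractions plus an overall compound score
print(analyzer.polarity_scores("I love America!"))
print(analyzer.polarity_scores("I hate taxes!"))
###Output
_____no_output_____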
###Markdown
--- Lexicon DataFrameAs you can see, the lexicon contains emojis too! The first column of the lexicon is the *token*, or the word itself. The second column is the *polarity* of the word, or how positive / negative it is.(How did they decide the polarities of these words? What are the other two columns in the lexicon? See the link above.) Read in the lexicon into a DataFrame called `sent`. The index of the DF should be the tokens in the lexicon. `sent` should have one column: `polarity`: The polarity of each token.
###Code
# the lexicon file is tab-separated: token, mean polarity, standard deviation, raw ratings
sent = pd.read_csv('vader_lexicon.txt', sep='\t', header=None)
sent.columns = ["word", "polarity", "c", "d"]
sent = sent[["word", "polarity"]]
sent.index = sent["word"]
sent = sent[["polarity"]]
sent
###Output
_____no_output_____
###Markdown
--- Calculating Sentiment by TweetNow, let's use this lexicon to calculate the overall sentiment for each of Trump's tweets. Here's the basic idea:1. For each tweet, find the sentiment of each word.2. Calculate the sentiment of each tweet by taking the sum of the sentiments of its words.First, let's lowercase the text in the tweets since the lexicon is also lowercase. Set the `text` column of the `trump` DF to be the lowercased text of each tweet.
###Code
trump['text'] = trump['text'].str.lower()
###Output
_____no_output_____
###Markdown
---Now, let's get rid of punctuation since it'll cause us to fail to match words. Create a new column called `no_punc` in the `trump` DF to be the lowercased text of each tweet with all punctuation replaced by a single space. We consider punctuation characters to be any character that isn't a Unicode word character or a whitespace character. You may want to consult the Python documentation on regexes for this problem.(Why don't we simply remove punctuation instead of replacing with a space? See if you can figure this out by looking at the tweet data.)
###Code
# Save your regex in punct_re
punct_re = r'[^\w\s]'
trump['no_punc'] = trump['text'].str.replace(punct_re, ' ')
###Output
_____no_output_____
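###Markdown
A quick illustration of why we replace punctuation with a space instead of deleting it (the string below is made up, not an actual tweet): deleting the punctuation glues neighboring words together, creating tokens that would never match the lexicon.
###Code
import re

example = "so-called fake news!sad"
print(re.sub(punct_re, ' ', example))  # words stay separated
print(re.sub(punct_re, '', example))   # 'socalled' and 'newssad' get glued together
###Output
_____no_output_____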
###Markdown
--- Breaking the Tweets into Individual WordsNow, let's convert the tweets into what's called a *tidy format* to make the sentiments easier to calculate. Use the `no_punc` column of `trump` to create a table called `tidy_format`. The index of the table should be the IDs of the tweets, repeated once for every word in the tweet. It has two columns:1. `num`: The location of the word in the tweet. For example, if the tweet was "i love america", then the location of the word "i" is 0, "love" is 1, and "america" is 2.2. `word`: The individual words of each tweet.
###Code
strings_split = trump.loc[:, ["no_punc"]]
strings_split = trump['no_punc'].str.split(expand = True).stack()
strings_split = strings_split.to_frame()
strings_split = strings_split.reset_index()
strings_split.columns = ['index', 'num', 'word']
strings_split.index = strings_split['index']
strings_split = strings_split[['num', 'word']]
tidy_format = strings_split
tidy_format.head()
###Output
_____no_output_____
###Markdown
--- Calculating Total Sentiment on each TweetNow that we have this table in the tidy format, it becomes much easier to find the sentiment of each tweet: we can join the table with the lexicon table. Calculate a table called `polarities`. Its index should be the IDs of the tweets (one row per ID). It should have one column called `polarity` containing the summed sentiment polarity of each tweet.
###Code
joined = tidy_format.join(sent, on='word').fillna(0)  # look up each word's polarity; words missing from the lexicon count as 0
joined = joined.groupby(joined.index).sum()
polarities = joined[['polarity']]
polarities.head()
###Output
_____no_output_____
###Markdown
--- Adding a Sentiment Score to Each Tweet in the DataFrameFinally, I will use the `polarities` and `trump` tables to create a new table called `senti` that is the `trump` table with an extra column called `polarity` containing the sentiment polarity of each tweet.
###Code
trump.head()
len(trump)
len(polarities)
senti = trump.join(polarities)
senti
###Output
_____no_output_____
###Markdown
Now we have a measure of the sentiment of each of his tweets! Note that this calculation is rather basic; you can read over the VADER readme to understand a more robust sentiment analysis.Now, run the cells below to see the most positive and most negative tweets from Trump in your dataset:
###Code
print('Most negative tweets:')
for t in senti.sort_values('polarity').head()['text']:
print(' ', t)
print('Most positive tweets:')
for t in senti.sort_values('polarity', ascending=False).head()['text']:
print(' ', t)
###Output
Most positive tweets:
thank you to linda bean of l.l.bean for your great support and courage. people will support you even more now. buy l.l.bean. @lbperfectmaine
rt @ivankatrump: 2016 has been one of the most eventful and exciting years of my life. i wish you peace, joy, love and laughter. happy new…
"@pauladuvall2: we're all enjoying you, as well, mr. t.! you've inspired hope and a positive spirit throughout america! god bless you!" nice
great honor to be endorsed by popular & successful @gov_gilmore of va. a state that i very much want to win-thx jim! https://t.co/x4y1tafhvn
hope you like my nomination of judge neil gorsuch for the united states supreme court. he is a good and brilliant man, respected by all.
###Markdown
--- Polarity Visualization of Trump's TweetsI used seaborn to create a `distplot` of the sentiments.
###Code
### make your plot here
sns.distplot(senti['polarity'])
###Output
_____no_output_____
###Markdown
--- Polarity Over TimeNow I created a line plot of the sentiment of Trump's tweets over time, plotting the mean sentiment for each month. Then, I added vertical lines corresponding to his election and inauguration dates.
###Code
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
(senti.loc[:, ['est_time', 'polarity']].set_index('est_time').resample('M').mean()).plot()
plt.axvline(ELEC_DATE, color = 'r', linestyle = 'dashed')
plt.axvline(INAUG_DATE, color = 'r', linestyle = 'dashed')
plt.xlabel('time')
###Output
_____no_output_____
###Markdown
In general, his positive sentiment seems to have increased after being elected as president. It's interesting to note the sharp dip in sentiment right after being elected, as he was probably negatively responding to all the criticism he was receiving after being voted in. It's also interesting to note how his sentiment rose before the election, as he went from being labelled as having no chance of winning the election to actually winning. HashtagsLet's return now to the problem of determining which tweets were from Trump and which were from his aides. As a reminder, most analysts agree that tweets that came from an Android device were likely from Trump himself, not one of his aides ([example](http://didtrumptweetit.com/machine-learning-tweet-prediction/)).In addition, browsing his Twitter shows that some tweets that sound more "official" often have a hashtag, link, or a picture:Whereas tweets that sound like Trump himself usually don't have a hashtag, link, or picture:So, we can hypothesize that if a tweet has a hashtag, link, or picture it came from one of Trump's aides, not Trump himself. Let's see if this idea is backed up by the data. --- Checking for Retweets, Hashtags, and LinksI created a DF called `hash_or_link` that contains only the rows from the `senti` table where the tweet isn't a retweet and contains a hashtag, link, or picture. We say that:- A tweet is a retweet if it contains the string 'rt' preceded and followed by a non-word character (the start and end of the string count as non-word characters).- A tweet has a hashtag if it has the character '#' anywhere in the tweet followed by a letter.- A tweet contains a link or a picture if it has the word `http` anywhere in the tweet. (You can check out Trump's Twitter for why these criteria are true).
###Code
# You must save your regex for retweets in this variable
rt_re = r'\brt\b'  # 'rt' surrounded by non-word characters, or the start/end of the string
# You must save your regex for hashtags, links, or pictures in this variable
hash_re = r'#\w|http'
hash_or_link = senti[senti['text'].str.contains(hash_re)]
hash_or_link = hash_or_link[~hash_or_link['text'].str.contains(rt_re)]
hash_or_link.head()
len(hash_or_link)
trump['text'].head()
###Output
_____no_output_____
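###Markdown
A small sanity check of the two regexes on made-up strings (purely illustrative, not actual tweets): the retweet flag should fire only for the first string, and the hashtag/link flag only for the second and third.
###Code
samples = ["rt @someone: so true", "make america great again #maga",
           "interview tonight https://t.co/abc123", "heading to north carolina now"]
for s in samples:
    print(bool(re.search(rt_re, s)), bool(re.search(hash_re, s)), "->", s)
###Output
_____no_output_____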
###Markdown
--- Tweets By DeviceCreate a line plot of the number of tweets containing a hashtag, link, or picture from each tweet device.If a device doesn't have at least 20 tweets in a particular year, I won't include the tweets from that device for that year. (Eg. if there are 100 tweets from Twitter Ads in 2016 but only 10 in 2017, plot the counts for Twitter Ads in 2016 but not 2017.)What conclusions can you draw from this plot? Does this plot allow us to say whether a tweet containing a hashtag/link/picture likely came from Trump himself? Write your takeaways in `hashtag_answer` variable.
###Code
# Create your plot here...
hash_or_link['year'] = hash_or_link['est_time'].dt.year  # year of each tweet, used for the per-year filter below
L_filter = hash_or_link.groupby(['source', 'year']).filter(lambda x: len(x) > 20)
(L_filter.loc[:,['est_time', 'source']].set_index('est_time').groupby('source')
.resample('W').size().unstack(level=0)).plot()
plt.xlabel('Time')
# ...then write your takeaways here.
hashtag_answer = '''
From the graph, we can see that while Trump used to post numerous tweets from a variety of different sources, recently
he has mainly been using Media Studio and Twitter for iPhone to post tweets. Also, after becoming president, it seems
that Trump has been tweeting less often, though he still tweets consistently.
'''
display(Markdown(hashtag_answer))
###Output
_____no_output_____
###Markdown
--- Do Hashtagged Tweets Have a Different Sentiment than Non-Hashtagged Tweets? Now, let's see whether there's a difference in sentiment for tweets with hashtags and those without.I will create a line plot of the sentiment of Trump's non-retweet tweets over time, taking the mean sentiment for every month. I plotted one line for tweets with hashtags and one for tweets without. Then, I drew two vertical lines for the election date and inauguration date. I also draw a horizontal line for y=0 as the baseline.
###Code
# Create your plot here...
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
senti['bool'] = senti['text'].str.contains(hash_re)
senti['bool2'] = senti['text'].str.contains(rt_re)
senti1 = senti[senti['bool2'] == False]
senti1_true = senti1[senti1['bool'] == True]
senti1_false = senti1[senti1['bool'] == False]
a = senti1_true.loc[:, ['est_time', 'polarity']].set_index('est_time').resample('M').mean()
b = senti1_false.loc[:, ['est_time', 'polarity']].set_index('est_time').resample('M').mean()
ax = a.plot()
b.plot(ax = ax)
plt.axvline(ELEC_DATE, color = 'r', linestyle = 'dashed')
plt.axvline(INAUG_DATE, color = 'r', linestyle = 'dashed')
plt.axhline(y = 0, color = 'black')
plt.xlabel('time')
ax.legend(['True', 'False'])
# ...then write your takeaways here.
hash_senti_answer = '''
From this graph, Trump's tweets seem to be more positive when they contain hashtags or links than when they contain neither.
It is also interesting to note how significantly the overall polarity of his tweets has increased since
he took office as president.
'''
display(Markdown(hash_senti_answer))
###Output
_____no_output_____
###Markdown
--- Engagement--- Retweet CountsWhich of Trump's tweets had the most retweets? Were there certain words that often led to more retweets?We can find this out by using our `tidy_format` DataFrame. For each word in the `tidy_format` DF, find out the number of retweets that its tweet got. Filter out words that didn't appear in at least 25 tweets, find out the median number of retweets each word got, and save the top 20 most retweeted words into a DataFrame called `top_20`.
###Code
tidy_format.head()
top_20 = tidy_format.copy()
top_20["retweet_count"] = [trump.loc[i, "retweet_count"] for i in tidy_format.index.values]
top_20 = top_20.groupby("word").filter(lambda x: len(x) >= 25)
top_20 = top_20.loc[:,["word", "retweet_count"]]
top_20 = top_20.groupby("word").median()
top_20 = top_20.sort_values("retweet_count", ascending = False)
top_20 = top_20.head(20)
top_20.head()
###Output
_____no_output_____
###Markdown
Here's a bar chart of the results:
###Code
top_20['retweet_count'].sort_values().plot.barh(figsize=(10, 8))
###Output
_____no_output_____
###Markdown
--- Fake News!The phrase "fake news" is apparently really popular! We can conclude that Trump's tweets containing "fake" and/or "news" result in the most retweets relative to words in his other tweets. Or can we?Let's consider each of the statements about possible confounding factors in our retweet analysis below.1. We didn't restrict our word list to nouns, so we have unhelpful words like "let" and "any" in our result.True. However, this is not a confounding factor because taking a subset of nouns won't affect the ranking. 2. We didn't remove hashtags in our text, so we have duplicate words (e.g. #great and great).False, all special characters were replaced with whitespace.3. We didn't account for the fact that Trump's follower count has increased over time.True, since he probably gained many followers after being elected president. And since fake news more recently became his tagline, this could definitely be inflating the numbers and making the words appear more popular than they actually are. --- More Fake News!--- How Much Fake News?Let's investigate the term "fake news" a bit more. I will create a table called `fake_counts` that has two columns:1. `fake_news`: The number of tweets containing the term "fake news".1. `total`: The total number of tweets for the time period.The index of the table will be datetimes for each two-week period in the data.
###Code
temp = senti.copy()
temp['fake_counts_bool'] = temp['text'].str.contains('fake news')
temp_true = temp[temp['fake_counts_bool'] == True]
a = (temp_true.loc[:, ['est_time', 'fake_counts_bool']].set_index('est_time')
.resample('2W-SUN', closed = 'left')).count()
b = (temp.loc[:, ['est_time', 'fake_counts_bool']].set_index('est_time').resample('2W-SUN')).count()
fake_counts = pd.concat([a, b], axis=1).fillna(0)
fake_counts.columns = (['fake_news', 'total'])
fake_counts.index = pd.DatetimeIndex(fake_counts.index).date
fake_counts
###Output
_____no_output_____
###Markdown
--- Proportion of Fake NewsNow, I will create a line plot showing the proportion of tweets containing the term "fake news" over time. Then, I will draw two vertical lines corresponding to the election and inauguration dates.
###Code
# Create your plot here...
c = (temp.loc[:, ['est_time', 'fake_counts_bool']].set_index('est_time').resample('2W-SAT').mean()).plot()
c.legend(['Proportion of tweets mentioning "fake news"'])
plt.axvline(ELEC_DATE, color = 'r', linestyle = 'dashed')
plt.axvline(INAUG_DATE, color = 'r', linestyle = 'dashed')
plt.xlabel('Time')
# ...then write your takeaways here.
fake_news_answer = '''
The now-famous term 'fake news' only shows up in his tweets after the election; there is essentially zero usage before it. It seems
to have stuck thanks to Trump's constant criticism of news outlets as 'fake news', and their criticism of him
in return. However, its usage has decreased in recent months compared to earlier ones.
'''
display(Markdown(fake_news_answer))
###Output
_____no_output_____ |
examples/fundamentals/math_review_numpy.ipynb | ###Markdown
Review of `numpy` and basic mathematics_written by [Gene Kogan](https://www.genekogan.com)_-----Before learning about what regression and classification are, we will do a review of key mathematical concepts from linear algebra and calculus, as well as an introduction to the `numpy` package. These fundamentals will be helpful to understand some of the theoretical materials of the next few guides.We will be working a lot with `numpy`, a Python library for large-scale vector and matrix operations, as well as fast and efficient computation of various mathematical functions. Additionally, most of the deep learning frameworks, including [PyTorch](https://www.pytorch.org/) and [Tensorflow](https://www.tensorflow.org/), largely follow the conventions laid out by `numpy` and are often called "numpy-like".`numpy` is a very large library with many convenient functions. A review of them is beyond the scope of this notebook. We will introduce relevant functions in future sessions as we go, depending on when we need them. A good non-comprehensive review can also be found in [Stanford CS231n](https://cs231n.github.io/python-numpy-tutorial/). If you are unfamiliar with `numpy`, it will be very helpful to go through the exercises in that tutorial, which also contain a short Python review. In the following section, we will just introduce some of the most common operations briefly, along with their corresponding mathematical concepts, focusing on the ones that will help us in the next section.To start, we import `numpy` (often imported under the alias `np` to make calls shorter).
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
VectorsThe most basic data structure in `numpy` is a vector, or array. So for example, to represent the vector $v = \begin{bmatrix} 1 \\ -5 \\ 3 \end{bmatrix}$, we would write:
###Code
v = np.array([1, -5, 3])
print(v)
###Output
[ 1 -5 3]
###Markdown
`numpy` has many convenience functions for generating vectors. For example, to create a list of all integers between 0 and 10:
###Code
v = np.arange(0, 10)
print(v)
###Output
[0 1 2 3 4 5 6 7 8 9]
###Markdown
Or the `numpy.linspace` function, which gives you a linear interpolation of `n` numbers between two endpoints.
###Code
v = np.linspace(0, 10, 8) # give me 8 numbers linearly interpolated between 0 and 10
print(v)
###Output
[ 0. 1.42857143 2.85714286 4.28571429 5.71428571 7.14285714
8.57142857 10. ]
###Markdown
AdditionWhen two vectors of equal length are added, the elements are added point-wise.$$\begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix} + \begin{bmatrix} 0 \\ 2 \\ -2 \end{bmatrix} = \begin{bmatrix} 2 \\ 5 \\ -1 \end{bmatrix}$$
###Code
a = np.array([2, 3, 1])
b = np.array([0, 2, -2])
c = a + b
print(c)
###Output
[ 2 5 -1]
###Markdown
MultiplicationA vector can be multiplied element-wise by a number (called a "scalar"). For example: $$3 \begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 6 \\ 9 \\ 3 \end{bmatrix}$$
###Code
v = 3 * np.array([2,3,1])
print(v)
###Output
[6 9 3]
###Markdown
Dot productA dot product is defined as the sum of the element-wise products of two equal-sized vectors. For two vectors $a$ and $b$, it is denoted as $a \cdot b$ or as $a b^T$ (where T refers to the transpose operation, introduced further down this notebook.$$\begin{bmatrix} 1 & -2 & 2 \end{bmatrix} \begin{bmatrix} 0 \\ 2 \\ 3 \end{bmatrix} = 2$$In other words, it's:$$(1 \cdot 0) + (-2 \cdot 2) + (2 \cdot 3) = 2$$This can be calculated with the `numpy.dot` function:
###Code
a = np.array([1,-2,2])
b = np.array([0,2,3])
c = np.dot(a, b)
print(c)
###Output
2
###Markdown
Or the shorter way:
###Code
c = a.dot(b)
print(c)
###Output
2
###Markdown
MatricesA matrix is a rectangular array of numbers. For example, consider the following 2x3 matrix:$$\begin{bmatrix} 2 & 3 & 1 \\ 0 & 4 & -2 \end{bmatrix}$$Note that we always denote the size of the matix as rows x columns. So a 2x3 matrix has two rows and 3 columns.`numpy` can create matrices from normal Python lists using `numpy.matrix`. For example:
###Code
np.matrix([[2,3,1],[0, 4,-2]])
###Output
_____no_output_____
###Markdown
To instantiate a matrix of all zeros:
###Code
np.zeros((3, 3))
###Output
_____no_output_____
###Markdown
To instantiate a matrix of all ones:
###Code
np.ones((2, 2))
###Output
_____no_output_____
###Markdown
Identity matrixIn linear algebra, a square matrix whose elements are all zeros, except the diagonals, which are ones, is called an "identity matrix." For example:$$\mathbf I =\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}$$is a 3x3 identity matrix. The reason why it is called an identity matrix is that it is analogous to multiplying a scalar by 1. A matrix multiplied by an identity matrix is unchanged. $$\mathbf I v = v$$To instantiate an identity matrix, use `numpy.eye`. For example:
###Code
np.eye(3)
###Output
_____no_output_____
###Markdown
Notice that when you multiply an identity matrix by another matrix, the result is the same as the original matrix. This goes in either order. Basically, the identity matrix is like $\times 1$.
###Code
M = np.matrix([[9,5,6],[-1,0,5],[-2,4,2]])
I = np.eye(3)
print("original matrix = \n", M)
M2 = I * M
print("I * M = \n", M2)
M3 = M * I
print("M * I = \n", M3)
###Output
original matrix =
[[ 9 5 6]
[-1 0 5]
[-2 4 2]]
I * M =
[[ 9. 5. 6.]
[-1. 0. 5.]
[-2. 4. 2.]]
M * I =
[[ 9. 5. 6.]
[-1. 0. 5.]
[-2. 4. 2.]]
###Markdown
Random matricesTo instantiate a matrix of random elements (between 0 and 1), you can use `numpy.random`:
###Code
A = np.random.random((2, 3))
print(A)
###Output
[[0.05486456 0.68847484 0.44925268]
[0.63482393 0.26960727 0.47707929]]
###Markdown
TranspositionTo transpose a matrix is to reverse the axes of the matrix. So the element at `i,j` in the transposed matrix is equal to the element at `j,i` in the original. The matrix $A$ transposed is denoted as $A^T$.
###Code
A_transpose = np.transpose(A)
print(A_transpose)
###Output
[[0.05486456 0.63482393]
[0.68847484 0.26960727]
[0.44925268 0.47707929]]
###Markdown
It can also be done with the shorthand `.T` operation, as in:
###Code
A_transpose = A.T
print(A_transpose)
###Output
[[0.05486456 0.63482393]
[0.68847484 0.26960727]
[0.44925268 0.47707929]]
###Markdown
Matrix additionLike regular vectors, matrices are added point-wise (or element-wise) and must be of the same size. So for example:$$\begin{bmatrix} 4 & 3 \\ 3 & -1 \\ -2 & 1 \end{bmatrix} + \begin{bmatrix} -2 & 1 \\ 5 & 3 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 4 \\ 8 & 2 \\ -1 & 1 \end{bmatrix} $$
###Code
a = np.matrix([[4, 3],[3,-1],[-2,1]])
b = np.matrix([[-2, 1],[5,3],[1,0]])
c = a + b
print(c)
###Output
[[ 2 4]
[ 8 2]
[-1 1]]
###Markdown
Matrix multiplicationAlso like vectors, matrices can be multiplied element-wise by a scalar.$$-2 \begin{bmatrix} 1 & -2 & 0 \\ 6 & 4 & -2 \end{bmatrix} = \begin{bmatrix} -2 & 4 & 0 \\ -12 & -8 & 4 \end{bmatrix} $$
###Code
a = np.matrix([[1,-2,0],[6,4,-2]])
-2 * a
###Output
_____no_output_____
###Markdown
To multiply two matrices together, you take the dot product of each row of the first matrix with each column of the second matrix. So in order to multiply matrices $A$ and $B$ together, as in $C = A B$, $A$ must have the same number of columns as $B$ has rows. For example:$$\begin{bmatrix} 1 & -2 & 0 \\ 6 & 4 & -2 \end{bmatrix} * \begin{bmatrix} 4 & -1 \\ 0 & -2 \\ 1 & 3 \end{bmatrix} = \begin{bmatrix} 4 & 3 \\ 22 & -20 \end{bmatrix} $$
###Code
a = np.matrix([[1,-2,0],[6,4,-2]])
b = np.matrix([[4,-1],[0,-2],[1,3]])
c = a * b
print(c)
###Output
[[ 4 3]
[ 22 -20]]
###Markdown
Hadamard productThe Hadamard product of two matrices differs from normal multiplication in that it is the element-wise multiplication of two matrices. $$\mathbf A \odot B =\begin{bmatrix}A_{1,1} B_{1,1} & \dots & A_{1,n} B_{1,n} \\\vdots & \dots & \vdots \\A_{m,1} B_{m,1} & \dots & A_{m,n} B_{m,n}\end{bmatrix}$$So for example:$$\begin{bmatrix} 3 & 1 \\ 0 & 5 \end{bmatrix} \odot \begin{bmatrix} -2 & 4 \\ 1 & -2 \end{bmatrix} = \begin{bmatrix} -6 & 4 \\ 0 & -10 \end{bmatrix} $$To calculate this with numpy, simply instantiate the matrices with `numpy.array` instead of `numpy.matrix` and it will use element-wise multiplication by default.
###Code
a = np.array([[3,1],[0,5]])
b = np.array([[-2,4],[1,-2]])
np.multiply(a,b)
###Output
_____no_output_____
###Markdown
FunctionsA function is an equation which shows the value of some expression which depends on one or more variables. For example:$$f(x) = 3x^2 - 5x + 9$$So for example, at $x=2$, $f(2)=11$. We will be encountering functions constantly. A neural network is one very big function.With functions, in machine learning, we often make a distinction between "variables" and "parameters". The variable is that part of the equation which can vary, and the output depends on it. So the above function depends on $x$. The coefficients in the above function (3, -5, 9) are sometimes called parameters because they characterize the shape of the function, but are held fixed.
###Code
def f(x):
return 3*(x**2)-5*x+9
f(2)
###Output
_____no_output_____
###Markdown
DerivativesThe derivative of a function $f(x)$ is the instantaneous slope of the function at a given point, and is denoted as $f^\prime(x)$.$$f^\prime(x) = \lim_{\Delta x\to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} $$The derivative of $f$ with respect to $x$ can also be denoted as $\frac{df}{dx}$.The derivative can be interpreted as the slope of a function at any point, as in the following [video clip](https://en.wikipedia.org/wiki/Derivative), which shows that the limit converges upon the true slope as $\Delta x$ approaches 0.The derivative of a polynomial function is given below: $$f(x) = a x ^ b$$$$\frac{df}{dx} = b a x^{b-1}$$For example, let:$$f(x) = -2 x^3$$then:$$\frac{df}{dx} = -6 x^2$$
###Code
def f(x):
return -2*(x**3)
def f_deriv(x):
return -6*(x**2)
print(f(2))
print(f_deriv(2))
###Output
-16
-24
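###Markdown
To connect the limit definition above to code (an added check): for a small $\Delta x$, the difference quotient should come out very close to the analytic derivative, here $f^\prime(2) = -24$.
###Code
def numerical_derivative(f, x, dx=1e-6):
    # difference quotient from the limit definition of the derivative
    return (f(x + dx) - f(x)) / dx

print(numerical_derivative(f, 2))  # approximately -24
###Output
_____no_output_____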
###Markdown
The derivative of any constant is 0. To see why, let:$$f(x) = C$$Then:$$f^\prime(x) = \lim_{\Delta x\to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} \\f^\prime(x) = \lim_{\Delta x\to 0} \frac{C - C}{\Delta x} \\f^\prime(x) = \lim_{\Delta x\to 0} \frac{0}{\Delta x} \\f^\prime(x) = 0$$ Properties of derivativesDerivatives are additive. That is, the derivative of a sum is the sum of the derivatives. In other words:Let $g$ and $h$ be functions. Then:$$\frac{d}{dx}(g + h) = \frac{dg}{dx} +\frac{dh}{dx}$$Similarly, constants can be factored out of derivatives, using the following property:$$\frac{d}{dx}(C f(x)) = C \frac{df}{dx}$$ Chain ruleFunctions can be composites of multiple functions. For example, consider the function:$$f(x) = (4x-5)^3$$This function can be broken down by letting:$$h(x) = 4x-5 \\g(x) = x^3 \\f(x) = g(h(x)) $$The chain rule states that the derivative of a composite function $g(h(x))$ is:$$f^\prime(x) = g^\prime(h(x)) h^\prime(x)$$Another way of expressing this is:$$\frac{df}{dx} = \frac{dg}{dh} \frac{dh}{dx}$$Since $g$ and $h$ are both polynomials, we can easily calculate that:$$g^\prime(x) = 3x^2 \\h^\prime(x) = 4$$and therefore:$$f^\prime(x) = g^\prime(h(x)) h^\prime(x) \\f^\prime(x) = g^\prime(4x-5) \cdot 4 \\f^\prime(x) = 3 \cdot (4x-5)^2 \cdot 4 \\f^\prime(x) = 12 \cdot (4x-5)^2$$The chain rule is fundamental to [how neural networks are trained](https://ml4a.github.io/ml4a/how_neural_networks_are_trained/), because it is what allows us to compute the derivative (gradient) of the network's cost function efficiently. We will see more about this in the next notebook.
###Code
def h(x):
return 4*x-5
def g(x):
return x**3
def f(x):
return g(h(x))
def h_deriv(x):
return 4
def g_deriv(x):
return 3*(x**2)
def f_deriv(x):
return g_deriv(h(x)) * h_deriv(x)
f(4)
f_deriv(2)
###Output
_____no_output_____
###Markdown
Multivariate functionsA function may depend on more than one variable. For example:$$f(X) = w_1 x_1 + w_2 x_2 + w_3 x_3 + ... + w_n x_n + b $$or using sum notation:$$f(X) = b + \sum_i w_i x_i$$One useful trick to simplify this formula is to append a $1$ to the input vector $X$, so that:$$X = \begin{bmatrix} x_1 & x_2 & ... & x_n & 1 \end{bmatrix}$$and let $b$ just be an element in the weights vector, so:$$W = \begin{bmatrix} w_1 & w_2 & ... & w_n & b \end{bmatrix}$$So then we can rewrite the function as:$$f(X) = W X^T$$ Partial derivativesA partial derivative of a multivariable function is the derivative of the function with respect to just one of the variables, holding all the others constant.The partial derivative of $f$ with respect to $x_i$ is denoted as $\frac{\partial f}{\partial x_i}$. GradientThe [gradient](https://en.wikipedia.org/wiki/Gradient) of a function is the vector containing each of its partial derivatives at point $x$.$$\nabla f(X) = \left[ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, ..., \frac{\partial f}{\partial x_n} \right]$$We will look more closely at the gradient later when we get into how neural networks are trained. Plotting with numpyWe introduced plotting in the previous guide. The example below recreates that plot, except using numpy.
###Code
import matplotlib.pyplot as plt
X = np.arange(-5, 5, 0.1)
Y = np.sin(X)
# make the figure
plt.figure(figsize=(6,6))
plt.plot(X, Y)
plt.xlabel('x')
plt.ylabel('y = sin(x)')
plt.title('My plot title')
###Output
_____no_output_____ |
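###Markdown
Coming back to the section on multivariate functions and gradients above, here is a small added sketch (with made-up weights) that approximates the gradient numerically. For the linear function $f(X) = W X^T$, the partial derivative with respect to $x_i$ is simply $w_i$, so the numerical gradient should come out close to $W$ itself.
###Code
W = np.array([2.0, -1.0, 0.5, 3.0])   # made-up weights; the last entry plays the role of b
X = np.array([1.0, 2.0, -1.0, 1.0])   # input with a 1 appended, as described above

def f(X):
    return np.dot(W, X)

def numerical_gradient(f, X, dx=1e-6):
    grad = np.zeros_like(X)
    for i in range(len(X)):
        X_step = X.copy()
        X_step[i] += dx               # nudge one coordinate at a time
        grad[i] = (f(X_step) - f(X)) / dx
    return grad

print(numerical_gradient(f, X))       # approximately equal to W
###Output
_____no_output_____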
tutorial/306_cliques_en.ipynb | ###Markdown
CliquesCliques is a problem in graph theory: find a set of K nodes that are completely connected to each other (a clique).The first term of the cost function is a constraint that exactly K nodes be selected.The 2nd term checks that the selected nodes form a complete subgraph. If a clique of size K exists, the total cost function reaches its minimum E=0. ExampleLet's find a clique of size 3 in a 6-node network graph.
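As a reference, the cost function described above can be written (reconstructed here from the description, in the usual QUBO form of the clique problem; the code below builds the same two terms, up to constant offsets) as $$E = A\left(K - \sum_i x_i\right)^2 + B\left(\frac{K(K-1)}{2} - \sum_{(u,v) \in \mathrm{edges}} x_u x_v\right),$$ where $x_i \in \{0,1\}$ marks whether node $i$ is selected: the first term forces exactly $K$ selected nodes, and the second term vanishes only when all $K(K-1)/2$ edges among the selected nodes are present in the graph.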
###Code
import networkx as nx
import matplotlib.pyplot as plt
options = {'node_color': '#efefef','node_size': 1200,'with_labels':'True'}
G = nx.Graph()
G.add_nodes_from(nx.path_graph(6))
G.add_edges_from([(0,1),(0,4),(1,2),(1,4),(2,3),(3,4),(3,5)])
nx.draw(G, **options)
###Output
_____no_output_____
###Markdown
This time the graph has 6 nodes, so we prepare 6 qubits. We want to find a clique of size 3, so we put K = 3 into the equation. We will build the 1st and 2nd terms as separate matrices and then add them together.
###Code
!pip install -U blueqat
import blueqat.opt as wq
import numpy as np
a = wq.opt()
###Output
_____no_output_____
###Markdown
Let's start from x0+x1+x2+x3+x4+x5. All of these coefficients are 1, and we prepare them as a diagonal matrix.
###Code
A = [1,1,1,1,1,1]
print(np.diag(A))
###Output
[[1 0 0 0 0 0]
[0 1 0 0 0 0]
[0 0 1 0 0 0]
[0 0 0 1 0 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1]]
###Markdown
Using this A, we calculate -6A+A^2, which is the expansion of (x0+x1+x2+x3+x4+x5-3)^2 with the constant 9 dropped. blueqat's `wq.sqr()` function gives the QUBO matrix of the squared sum A^2:
###Code
print(wq.sqr(A))
###Output
[[1 2 2 2 2 2]
[0 1 2 2 2 2]
[0 0 1 2 2 2]
[0 0 0 1 2 2]
[0 0 0 0 1 2]
[0 0 0 0 0 1]]
###Markdown
By adding these together, we get the QUBO matrix for the 1st term of the equation:
###Code
matrix1 = -6*np.diag(A)+wq.sqr(A)
print(matrix1)
###Output
[[-5 2 2 2 2 2]
[ 0 -5 2 2 2 2]
[ 0 0 -5 2 2 2]
[ 0 0 0 -5 2 2]
[ 0 0 0 0 -5 2]
[ 0 0 0 0 0 -5]]
###Markdown
The 2nd term can simply be written down by hand as a QUBO over the edges of the graph. This time we use B = 0.8:
###Code
B = 0.8
matrix2 = B*np.asarray([[0,-1,0,0,-1,0],[0,0,-1,0,-1,0],[0,0,0,-1,0,0],[0,0,0,0,-1,-1],[0,0,0,0,0,0],[0,0,0,0,0,0]])
print(matrix2)
###Output
[[ 0. -0.8 0. 0. -0.8 0. ]
[ 0. 0. -0.8 0. -0.8 0. ]
[ 0. 0. 0. -0.8 0. 0. ]
[ 0. 0. 0. 0. -0.8 -0.8]
[ 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. ]]
###Markdown
Finally we get the full QUBO by adding these two matrices, and then run simulated annealing with `a.sa()`:
###Code
a.qubo = matrix1 + matrix2
print(a.qubo)
a.sa()
###Output
[[-5. 1.2 2. 2. 1.2 2. ]
[ 0. -5. 1.2 2. 1.2 2. ]
[ 0. 0. -5. 1.2 2. 2. ]
[ 0. 0. 0. -5. 1.2 1.2]
[ 0. 0. 0. 0. -5. 2. ]
[ 0. 0. 0. 0. 0. -5. ]]
1.4822120666503906
###Markdown
The output shows the QUBO that was actually created and the solving time. The lowest-energy solution sets the qubits for nodes 0, 1 and 4 to 1: {0, 1, 4} is the only triangle in this graph, so that is the clique of size 3 we were looking for. If you want to check the Jij matrix that was finally created:
###Code
print(a.J)
###Output
[[-0.4 0.3 0.5 0.5 0.3 0.5]
[ 0. -0.6 0.3 0.5 0.3 0.5]
[ 0. 0. -0.4 0.3 0.5 0.5]
[ 0. 0. 0. -0.6 0.3 0.3]
[ 0. 0. 0. 0. -0.6 0.5]
[ 0. 0. 0. 0. 0. -0.2]]
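###Markdown
As an extra check (an addition using plain numpy): we can evaluate the QUBO energy $x^T Q x$ by hand for a few candidate selections of three nodes and confirm that the triangle {0, 1, 4} has the lowest energy, since it is the only set of three mutually connected nodes in this graph.
###Code
Q = matrix1 + matrix2  # the QUBO assembled above

def qubo_energy(x):
    x = np.array(x)
    return float(x @ Q @ x)

print("clique {0,1,4}    :", qubo_energy([1,1,0,0,1,0]))  # lowest energy of the three
print("non-clique {0,2,4}:", qubo_energy([1,0,1,0,1,0]))
print("non-clique {0,2,5}:", qubo_energy([1,0,1,0,0,1]))
###Output
_____no_output_____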
###Markdown
Let's check the time evolution of the cost function:
###Code
a.plot()
###Output
_____no_output_____ |
GradProject_NB3.ipynb | ###Markdown
DS200A Computer Vision Assignment Part Three: Classifier training and performance assessment.
###Code
import numpy as np

def train_test_split(df, test_frac=0.2, seed=42):
    # Split the data into a training set and a test set (default 80/20, shuffled)
    shuffled = df.sample(frac=1, random_state=seed)
    n_test = int(len(shuffled) * test_frac)
    return shuffled.iloc[n_test:], shuffled.iloc[:n_test]

def accuracy(pred, actual):
    # Calculate the accuracy percentage of the predicted values
    return 100 * np.mean(np.asarray(pred) == np.asarray(actual))
###Output
_____no_output_____ |
book/LaneDetection/CameraBasics.ipynb | ###Markdown
Basics of Image Formation Digital images A raster image consists of a grid of pixels as shown in {numref}`Pixel-example`. ```{figure} images/Pixel-example.png---name: Pixel-examplealign: center---An image is made from pixels. Image source [wikipedia](https://commons.wikimedia.org/w/index.php?curid=807503)``` To represent an image, we use a three dimensional array with shape (H,W,3). We say that the array has H rows, W columns and 3 color channels (red, green, and blue). Let's load an image with python and have a look!
###Code
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
import cv2
from IPython import display
display.set_matplotlib_formats('svg')
img_fn = str(Path("images/carla_scene.png"))
img = cv2.imread(img_fn)
# opencv (cv2) stores colors in the order blue, green, red, but we want red, green, blue
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.xlabel("$u$") # horizontal pixel coordinate
plt.ylabel("$v$") # vertical pixel coordinate
print("(H,W,3)=",img.shape)
###Output
_____no_output_____
###Markdown
Let's inspect the pixel in the 100th row and the 750th column:
###Code
u,v = 750, 100
img[v,u]
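# (Added illustration) The three values above are the pixel's red, green, and blue
# intensities, stored as 8-bit integers. We can unpack the channels explicitly:
r, g, b = img[v, u]
print("red:", r, "green:", g, "blue:", b)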
###Output
_____no_output_____ |
Data_Visualization&Analysis.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
***1) Import dataset******2) Data Understanding***a) identify dependent and independent variablesb) Split features based on datatypes - Numeric, discrete, categorical, time variablesc) Understand cardinality of data***3)Data correctness***a) Are there any outliers?b) Do you see missing values?***4)Ask Questions***1) What is the general trend of salary? Is it increasing, decreasing or remaining constant? ***1) Import Dataset ***
###Code
#### salary data is available in the 2020 folder
### Importing data
data_file="/content/salary_data.csv" ### modify here
data=pd.read_csv(data_file)
print(data.shape)
data.head()
###Output
(167274, 27)
###Markdown
*** 2) Data Understanding *** *** 2 a) Set Independent and Dependent variables ***
###Code
### Set independent variable and dependent variable
dependent_var="PAID_WAGE_PER_YEAR" ### modify here
independent_var=data.drop(dependent_var,axis=1).columns.to_list()
print(f" The dependent feature is {dependent_var}")
print(f" Number of Independent features are {len(independent_var)} and {independent_var}")
###Output
The dependent feature is PAID_WAGE_PER_YEAR
Number of Independent features are 26 and ['Index', 'CASE_NUMBER', 'CASE_STATUS', 'CASE_RECEIVED_DATE', 'DECISION_DATE', 'EMPLOYER_NAME', 'PREVAILING_WAGE_SUBMITTED', 'PREVAILING_WAGE_SUBMITTED_UNIT', 'PAID_WAGE_SUBMITTED', 'PAID_WAGE_SUBMITTED_UNIT', 'JOB_TITLE', 'WORK_CITY', 'EDUCATION_LEVEL_REQUIRED', 'COLLEGE_MAJOR_REQUIRED', 'EXPERIENCE_REQUIRED_Y_N', 'EXPERIENCE_REQUIRED_NUM_MONTHS', 'COUNTRY_OF_CITIZENSHIP', 'PREVAILING_WAGE_SOC_CODE', 'PREVAILING_WAGE_SOC_TITLE', 'WORK_STATE', 'WORK_POSTAL_CODE', 'FULL_TIME_POSITION_Y_N', 'VISA_CLASS', 'PREVAILING_WAGE_PER_YEAR', 'JOB_TITLE_SUBGROUP', 'order']
###Markdown
*** 2 b) Split numerical, discrete, categorical and time features ***
###Code
## Use this code to extract variables ######## modify here
##[("feature name is -",var,data[var].nunique(), data[var].unique(),print()) for var in independent_var ] #### use this code to identify categorical,discrete,numer quickly
## splitting categorical and numerical variables automatically by dtype
## (these automatic guesses are immediately overridden by the hand-curated lists below)
categorical_var=[var for var in data.columns if data[var].dtypes=='object']
numerical_var=[var for var in data.columns if data[var].dtypes in ('float64','int64')]
time_var=[var for var in data.columns if data[var].dtypes=='datetime64[ns]']
time_var=["CASE_RECEIVED_DATE","DECISION_DATE"]
numerical_var=["CASE_NUMBER","CASE_STATUS","EMPLOYER_NAME","EXPERIENCE_REQUIRED_NUM_MONTHS","Index",
"PREVAILING_WAGE_SUBMITTED","PAID_WAGE_SUBMITTED","PAID_WAGE_PER_YEAR","order"]
discrete_var=[]
categorical_var=["PREVAILING_WAGE_SUBMITTED_UNIT","PAID_WAGE_SUBMITTED_UNIT","JOB_TITLE","WORK_CITY","EDUCATION_LEVEL_REQUIRED","COLLEGE_MAJOR_REQUIRED",
"EXPERIENCE_REQUIRED_Y_N","COUNTRY_OF_CITIZENSHIP","PREVAILING_WAGE_SOC_CODE","PREVAILING_WAGE_SOC_TITLE","WORK_STATE","WORK_POSTAL_CODE",
"FULL_TIME_POSITION_Y_N","VISA_CLASS","PREVAILING_WAGE_PER_YEAR","JOB_TITLE_SUBGROUP",]
print("Number of Categorical variables :",len(categorical_var))
print("Categorical variables :",categorical_var)
print()
print("Number of Numerical variables :",len(numerical_var))
print("Numerical variables :",numerical_var)
print()
print("Time variables : ",len(time_var))
print("Time variables :",time_var)
print()
print("Number of Discrete variables :",len(discrete_var))
print("Discrete variables :",discrete_var)
### verifying if any column appears in more than one split
print("Checking if there are duplicate columns or not")
print([var for var in numerical_var if var in (categorical_var + discrete_var + time_var)])
print([var for var in discrete_var if var in (categorical_var + numerical_var + time_var)])
print([var for var in categorical_var if var in (discrete_var + numerical_var + time_var)])
print([var for var in time_var if var in (discrete_var + numerical_var + categorical_var)])
###Output
_____no_output_____
###Markdown
*** 2 c) Data cardinality - understanding how many subcategories each categorical feature takes ***
###Code
[(var, data[var].nunique()) for var in (categorical_var + discrete_var)]
###Output
_____no_output_____
###Markdown
***3) Data Correctness*** ***3 a) Identify Missing Values ***
###Code
### Finding if the data set contains missing elements or not. True indicates presence of missing elements
print("Missing values :",data.isnull().values.any())
print()
### Finding the total number of missing cells
print("Total number of missing cells ",data.isnull().sum().sum())
print(f"Average number of missing cells per row is {round(data.isnull().sum().sum()/len(data),2)}")
print()
# make a list of the variables that contain missing values
missing_val_columns=[(col,data[col].isnull().sum()) for col in data.columns if data[col].isnull().sum()>0]
print("The missing value columns are ")
print(missing_val_columns)
### Calculating Percentage of missing observation with respect to full dataset
missing_val_columns=pd.DataFrame(missing_val_columns)
missing_val_columns[1]=round(100*missing_val_columns[1]/len(data),2)
missing_val_columns=missing_val_columns.rename(columns={0:"Feature",1:"% of missing observations"}).reset_index()
missing_val_columns=missing_val_columns[missing_val_columns["% of missing observations"]>10]
missing_val_columns=missing_val_columns.sort_values(by='% of missing observations', ascending=False)
### Visualizing the distribution of missing features
ax = sns.barplot(x="% of missing observations",y="Feature", data=missing_val_columns,orient = 'h')
###Output
Missing values : True
Number of rows that contain missing element 911840
Percentage of Dataset that contains missing element is 5.45 %
The missing value columns are
[('WORK_CITY', 3), ('EDUCATION_LEVEL_REQUIRED', 156181), ('COLLEGE_MAJOR_REQUIRED', 156223), ('EXPERIENCE_REQUIRED_Y_N', 156181), ('EXPERIENCE_REQUIRED_NUM_MONTHS', 162309), ('COUNTRY_OF_CITIZENSHIP', 156181), ('WORK_POSTAL_CODE', 113601), ('FULL_TIME_POSITION_Y_N', 11093), ('PREVAILING_WAGE_PER_YEAR', 68)]
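###Markdown
A minimal, hypothetical sketch (an addition, not part of the original analysis) of how these gaps could be imputed later on, assuming median imputation for the numeric column and a "Missing" label for the categorical ones is acceptable:
###Code
### Hypothetical imputation sketch; column names are taken from the missing-value list above
data_imputed = data.copy()
num_cols_with_na = ["EXPERIENCE_REQUIRED_NUM_MONTHS"]
cat_cols_with_na = ["WORK_CITY", "EDUCATION_LEVEL_REQUIRED", "COLLEGE_MAJOR_REQUIRED",
                    "EXPERIENCE_REQUIRED_Y_N", "COUNTRY_OF_CITIZENSHIP", "WORK_POSTAL_CODE",
                    "FULL_TIME_POSITION_Y_N", "PREVAILING_WAGE_PER_YEAR"]
for col in num_cols_with_na:
    data_imputed[col] = data_imputed[col].fillna(data_imputed[col].median())
for col in cat_cols_with_na:
    data_imputed[col] = data_imputed[col].fillna("Missing")
print("Missing cells after imputation:", data_imputed.isnull().sum().sum())
###Output
_____no_output_____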
###Markdown
Insight for Missing Values: our dataset contains a few variables with missing values. We need to account for this in our feature engineering / preprocessing step.

*** 3 b) Outliers - extreme values may affect the performance of a linear model. Let's find out if we have any in our variables ***
###Code
# let's make boxplots to visualise outliers in the continuous variables
def find_outliers(df, var):
df.boxplot(column=var)
print("Median value :",df[var].median())
plt.title(var)
plt.ylabel(var)
plt.show()
## function to treat outliers: values outside the IQR fences are replaced
## by the 10th / 90th percentile (a simple winsorization)
def outlier_treatment(df,col):
Q1,Q3 = np.percentile(df[col] , [25,75])
IQR = Q3 - Q1
lower_range = Q1 - (1.5 * IQR)
upper_range = Q3 + (1.5 * IQR)
quantile_10_value=df[col].quantile(0.10)
quantile_90_value=df[col].quantile(0.90)
df[col] = np.where(df[col] < lower_range, quantile_10_value,df[col])
df[col] = np.where(df[col] >upper_range,quantile_90_value,df[col])
find_outliers(data,"PAID_WAGE_PER_YEAR")
outlier_treatment(data,"PAID_WAGE_PER_YEAR")
find_outliers(data,"PAID_WAGE_PER_YEAR")
###Output
Median value : 78600.0
###Markdown
**Function to extract Year,Month,Date from Independent Features**
###Code
### function extracts year and month
import datetime
def get_year_month_date(data,col):
data[col] = pd.to_datetime(data[col] )
data[col+"_Yr"]=data[col].dt.year
data[col+"_Month"]=data[col].dt.month
data[col+"_Day"]=data[col].dt.day
data[col+"_Quarter"]=data["CASE_RECEIVED_DATE"].dt.quarter
get_year_month_date(data,"CASE_RECEIVED_DATE")
get_year_month_date(data,"DECISION_DATE")
###Output
_____no_output_____
###Markdown
***4 1) General Trend of salary ***
###Code
### distribution of salary
fig = plt.figure(figsize=(10,4),)
ax=sns.kdeplot(data[dependent_var] , color='r',shade=True,label='Salary')
#ax=sns.kdeplot(df.loc[(df['turnover'] == 1),'evaluation'] , color='r',shade=True, label='turnover')
plt.title('Distribution of Income')
###Output
_____no_output_____
###Markdown
***Insight : Most of the professionals are making between 60K-100K***
###Code
def trend_chart(df,output_var,date_var,kind="bar", figsize=(8,10)):
df2=pd.DataFrame()
df2["median"]=df.groupby(date_var)[output_var].median()
df2["min"]=df.groupby(date_var)[output_var].min()
df2["max"]=df.groupby(date_var)[output_var].max()
df2.plot(kind=kind, figsize=figsize)
plt.xlabel(date_var)
plt.ylabel(output_var)
print("Distribution of ",output_var)
trend_chart(data,"PAID_WAGE_PER_YEAR","CASE_RECEIVED_DATE_Yr",kind="line", figsize=(10,8))
###Output
Distribution of PAID_WAGE_PER_YEAR
###Markdown
Insights: pending.

Testing the strength of the trend using regression
###Code
### testing the strength of the trend using significance tests
import statsmodels.api as sm
def trend_signifcance_test(df,output_var,date_var,kind="bar", figsize=(8,10)):
df2=pd.DataFrame()
df2["Income"]=df.groupby(date_var)[output_var].median()
df2["Application_Count"]=df.groupby(date_var)[output_var].count()
df2["Income_Type"]="Median"
df2=df2.reset_index()
mod = sm.OLS(df2["Application_Count"], df2["Income"])
res = mod.fit()
print("Summary statistics for Median Income")
print(res.summary())
df3=pd.DataFrame()
df3["Income"]=df.groupby(date_var)[output_var].max()
df3["Application_Count"]=df.groupby(date_var)[output_var].count()
df3["Income_Type"]="Max"
df3=df3.reset_index()
mod = sm.OLS(df3["Application_Count"], df3["Income"])
res = mod.fit()
print("Summary statistics for Maximum Income")
print(res.summary())
print()
print()
df4=pd.DataFrame()
df4["Income"]=df.groupby(date_var)[output_var].min()
df4["Application_Count"]=df.groupby(date_var)[output_var].count()
df4["Income_Type"]="Min"
df4=df4.reset_index()
mod = sm.OLS(df4["Application_Count"], df4["Income"])
res = mod.fit()
print("Summary statistics for Minimum Income")
print(res.summary())
print()
print()
df2=pd.concat([df2,df3,df4],axis=0)
sns.lmplot(x="Application_Count", y="Income", hue="Income_Type", data=df2)
plt.xlabel("")
plt.ylabel(output_var)
print("Distribution of ",output_var)
trend_signifcance_test(data,"PAID_WAGE_PER_YEAR","CASE_RECEIVED_DATE_Yr",kind="lmplot", figsize=(10,8))
###Output
Summary statistics for Median Income
OLS Regression Results
=======================================================================================
Dep. Variable: Application_Count R-squared (uncentered): 0.439
Model: OLS Adj. R-squared (uncentered): 0.359
Method: Least Squares F-statistic: 5.479
Date: Mon, 01 Jun 2020 Prob (F-statistic): 0.0518
Time: 02:14:43 Log-Likelihood: -91.976
No. Observations: 8 AIC: 186.0
Df Residuals: 7 BIC: 186.0
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Income 0.2711 0.116 2.341 0.052 -0.003 0.545
==============================================================================
Omnibus: 1.812 Durbin-Watson: 0.698
Prob(Omnibus): 0.404 Jarque-Bera (JB): 1.107
Skew: 0.802 Prob(JB): 0.575
Kurtosis: 2.136 Cond. No. 1.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Summary statistics for Maximum Income
OLS Regression Results
=======================================================================================
Dep. Variable: Application_Count R-squared (uncentered): 0.488
Model: OLS Adj. R-squared (uncentered): 0.415
Method: Least Squares F-statistic: 6.674
Date: Mon, 01 Jun 2020 Prob (F-statistic): 0.0363
Time: 02:14:43 Log-Likelihood: -91.610
No. Observations: 8 AIC: 185.2
Df Residuals: 7 BIC: 185.3
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Income 0.1520 0.059 2.583 0.036 0.013 0.291
==============================================================================
Omnibus: 2.390 Durbin-Watson: 0.785
Prob(Omnibus): 0.303 Jarque-Bera (JB): 1.290
Skew: 0.924 Prob(JB): 0.525
Kurtosis: 2.326 Cond. No. 1.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Summary statistics for Minimum Income
OLS Regression Results
=======================================================================================
Dep. Variable: Application_Count R-squared (uncentered): 0.150
Model: OLS Adj. R-squared (uncentered): 0.028
Method: Least Squares F-statistic: 1.231
Date: Mon, 01 Jun 2020 Prob (F-statistic): 0.304
Time: 02:14:43 Log-Likelihood: -93.640
No. Observations: 8 AIC: 189.3
Df Residuals: 7 BIC: 189.4
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Income 0.5903 0.532 1.110 0.304 -0.667 1.848
==============================================================================
Omnibus: 0.910 Durbin-Watson: 0.516
Prob(Omnibus): 0.634 Jarque-Bera (JB): 0.677
Skew: 0.561 Prob(JB): 0.713
Kurtosis: 2.121 Cond. No. 1.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Finding: the median salary is trending up, but the trend is not statistically significant; the minimum salary is decreasing; the maximum salary is increasing only marginally.

Insight: maximum salaries are remaining roughly constant and median salaries are slightly going up, while minimum salaries are decreasing over time. There is increased opportunity to make more in the future as salaries go up. And for every category other than data scientists, as time goes on, there might actually be increased risk that you will make less than you expected, since minimum salaries are going down.

***Distribution of Salaries for different Jobs - How much money can I make in every profession?***
###Code
var="JOB_TITLE_SUBGROUP"
features=data[var].unique()
features
fig = plt.figure(figsize=(20,10),)
color=['r','b','g','y','black','c','crimson','m','lightpink']
col=0
for feature in features:
#fig = plt.figure(figsize=(20,10),)
ax=sns.kdeplot(data[dependent_var][data[var]==feature] , color=color[col],shade=False,label=feature)
col=col+1
###Output
_____no_output_____
###Markdown
Insights:

1) Teachers are paid less compared to other professionals. Most of them make between 25K-50K.

2) While Business Analysts and Data Analysts make more money than Teachers, they are paid relatively less compared to other professionals. They make between 55K-75K.

3) Most Software Engineers make between 65K-100K.

4) Assistant Professor - a bimodal distribution, indicating that some people are paid less and some are paid more. There is a chance that you might make less than expected, or more, in the future.

5) Attorneys are the highest paid profession, with most of them making over 100K.

*** Are the salaries going up or down in every profession? ***
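The cells below repeat this analysis one profession at a time; a compact alternative (an added sketch, not part of the original notebook) is to loop the same call over every job subgroup:
###Code
### Sketch: loop the trend chart over every job subgroup instead of repeating cells
for feature in data["JOB_TITLE_SUBGROUP"].unique():
    print("=" * 31, feature, "=" * 46)
    subset = data[data["JOB_TITLE_SUBGROUP"] == feature]
    trend_chart(subset, "PAID_WAGE_PER_YEAR", "CASE_RECEIVED_DATE_Yr", kind="line", figsize=(5, 5))
###Output
_____no_output_____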
###Code
### plot a trend line to see the trend
features
### dataframe subsetted to select only
var="JOB_TITLE_SUBGROUP"
features=data[var].unique()
features
feature="teacher"
data2=data[data[var]==feature]
trend_chart(data2,"PAID_WAGE_PER_YEAR","CASE_RECEIVED_DATE_Yr",kind="line", figsize=(5,5))
trend_signifcance_test(data2,"PAID_WAGE_PER_YEAR","CASE_RECEIVED_DATE_Yr",kind="lmplot", figsize=(10,8))
###Output
===============================teacher==============================================
Distribution of PAID_WAGE_PER_YEAR
Summary statistics for Median Income
OLS Regression Results
=======================================================================================
Dep. Variable: Application_Count R-squared (uncentered): 0.439
Model: OLS Adj. R-squared (uncentered): 0.359
Method: Least Squares F-statistic: 5.479
Date: Mon, 01 Jun 2020 Prob (F-statistic): 0.0518
Time: 02:14:50 Log-Likelihood: -91.976
No. Observations: 8 AIC: 186.0
Df Residuals: 7 BIC: 186.0
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Income 0.2711 0.116 2.341 0.052 -0.003 0.545
==============================================================================
Omnibus: 1.812 Durbin-Watson: 0.698
Prob(Omnibus): 0.404 Jarque-Bera (JB): 1.107
Skew: 0.802 Prob(JB): 0.575
Kurtosis: 2.136 Cond. No. 1.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Summary statistics for Maximum Income
OLS Regression Results
=======================================================================================
Dep. Variable: Application_Count R-squared (uncentered): 0.488
Model: OLS Adj. R-squared (uncentered): 0.415
Method: Least Squares F-statistic: 6.674
Date: Mon, 01 Jun 2020 Prob (F-statistic): 0.0363
Time: 02:14:50 Log-Likelihood: -91.610
No. Observations: 8 AIC: 185.2
Df Residuals: 7 BIC: 185.3
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Income 0.1520 0.059 2.583 0.036 0.013 0.291
==============================================================================
Omnibus: 2.390 Durbin-Watson: 0.785
Prob(Omnibus): 0.303 Jarque-Bera (JB): 1.290
Skew: 0.924 Prob(JB): 0.525
Kurtosis: 2.326 Cond. No. 1.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Summary statistics for Minimum Income
OLS Regression Results
=======================================================================================
Dep. Variable: Application_Count R-squared (uncentered): 0.150
Model: OLS Adj. R-squared (uncentered): 0.028
Method: Least Squares F-statistic: 1.231
Date: Mon, 01 Jun 2020 Prob (F-statistic): 0.304
Time: 02:14:50 Log-Likelihood: -93.640
No. Observations: 8 AIC: 189.3
Df Residuals: 7 BIC: 189.4
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Income 0.5903 0.532 1.110 0.304 -0.667 1.848
==============================================================================
Omnibus: 0.910 Durbin-Watson: 0.516
Prob(Omnibus): 0.634 Jarque-Bera (JB): 0.677
Skew: 0.561 Prob(JB): 0.713
Kurtosis: 2.121 Cond. No. 1.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Insights for Teacher - Increased opportunity for Teachers to make more in the future as their salaries are rising.
###Code
### This is for business anlayst
### dataframe subsetted to select only
var="JOB_TITLE_SUBGROUP"
features=data[var].unique()
features
feature="business analyst"
data2=data[data[var]==feature]
trend_chart(data2,"PAID_WAGE_PER_YEAR","CASE_RECEIVED_DATE_Yr",kind="line", figsize=(5,5))
trend_signifcance_test(data2,"PAID_WAGE_PER_YEAR","CASE_RECEIVED_DATE_Yr",kind="lmplot", figsize=(10,8))
###Output
===============================business analyst==============================================
Distribution of PAID_WAGE_PER_YEAR
Summary statistics for Median Income
OLS Regression Results
=======================================================================================
Dep. Variable: Application_Count R-squared (uncentered): 0.439
Model: OLS Adj. R-squared (uncentered): 0.359
Method: Least Squares F-statistic: 5.479
Date: Mon, 01 Jun 2020 Prob (F-statistic): 0.0518
Time: 02:16:38 Log-Likelihood: -91.976
No. Observations: 8 AIC: 186.0
Df Residuals: 7 BIC: 186.0
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Income 0.2711 0.116 2.341 0.052 -0.003 0.545
==============================================================================
Omnibus: 1.812 Durbin-Watson: 0.698
Prob(Omnibus): 0.404 Jarque-Bera (JB): 1.107
Skew: 0.802 Prob(JB): 0.575
Kurtosis: 2.136 Cond. No. 1.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Summary statistics for Maximum Income
OLS Regression Results
=======================================================================================
Dep. Variable: Application_Count R-squared (uncentered): 0.488
Model: OLS Adj. R-squared (uncentered): 0.415
Method: Least Squares F-statistic: 6.674
Date: Mon, 01 Jun 2020 Prob (F-statistic): 0.0363
Time: 02:16:38 Log-Likelihood: -91.610
No. Observations: 8 AIC: 185.2
Df Residuals: 7 BIC: 185.3
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Income 0.1520 0.059 2.583 0.036 0.013 0.291
==============================================================================
Omnibus: 2.390 Durbin-Watson: 0.785
Prob(Omnibus): 0.303 Jarque-Bera (JB): 1.290
Skew: 0.924 Prob(JB): 0.525
Kurtosis: 2.326 Cond. No. 1.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Summary statistics for Minimum Income
OLS Regression Results
=======================================================================================
Dep. Variable: Application_Count R-squared (uncentered): 0.150
Model: OLS Adj. R-squared (uncentered): 0.028
Method: Least Squares F-statistic: 1.231
Date: Mon, 01 Jun 2020 Prob (F-statistic): 0.304
Time: 02:16:38 Log-Likelihood: -93.640
No. Observations: 8 AIC: 189.3
Df Residuals: 7 BIC: 189.4
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Income 0.5903 0.532 1.110 0.304 -0.667 1.848
==============================================================================
Omnibus: 0.910 Durbin-Watson: 0.516
Prob(Omnibus): 0.634 Jarque-Bera (JB): 0.677
Skew: 0.561 Prob(JB): 0.713
Kurtosis: 2.121 Cond. No. 1.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Insights for Business Analysts - there is still some opportunity to make more in the future, but an increased risk of making less money, as their salaries are trending down.

***Does the state of employment have an impact on salary?***
###Code
x="WORK_STATE"
feature="business analyst"
data2=data[data[var]==feature]
sns.boxplot(x=x, y=dependent_var, data=data2)
###Output
_____no_output_____
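###Markdown
A small complementary sketch (an addition, using the same `data2` subset selected above) to put numbers behind the boxplot:
###Code
### Hypothetical follow-up: rank work states by median salary for the selected job subgroup
state_medians = data2.groupby("WORK_STATE")[dependent_var].median().sort_values(ascending=False)
print(state_medians.head(10))
###Output
_____no_output_____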
###Markdown
***Reference articles***

https://www.kaggle.com/randylaosat/hr-analytics-simple-visualizations

https://www.kaggle.com/gsdeepakkumar/visualisation-with-python-hr-data

https://www.kaggle.com/kukreti12/hr-analytics-using-python
###Code
###Output
_____no_output_____
###Markdown
**Functions**
###Code
sns.set_style("darkgrid")
### line plot to get trend - trend analysis
### 1) Dates are individual days, so they are aggregated by year before plotting
### pass the date column separately from the list of continuous features
def get_line_plot(dataframe,feature_names,date_col):
### Year on X-axis / median of the continuous vars on Y-axis
yearly=dataframe.groupby(dataframe[date_col].dt.year)[feature_names].median()
yearly.plot(kind="line",figsize=(10,6))
plt.show()
get_line_plot(data,["PREVAILING_WAGE_SUBMITTED","PAID_WAGE_SUBMITTED"],"CASE_RECEIVED_DATE")
data.columns
###Output
_____no_output_____
###Markdown
**end of functions**
###Code
### Functions for plotting and visualization
### This plots bar chart with proportions
def get_bar_plot(dataframe,feature_name):
df=pd.DataFrame(dataframe[feature_name])
df["Proportion (%)"]=1
df=df.groupby([feature_name]).agg({"Proportion (%)":"count"}).reset_index()
df["Proportion (%)"]=df["Proportion (%)"]/df["Proportion (%)"].sum()*100
sns.barplot(x="Proportion (%)", y=feature_name,data=df)
title_name="Distribution of " + feature_name
plt.title(title_name)
plt.show()
###############################################################################################################
#### Draw bar and box plot - with Quartiles
from numpy import median
def get_bar_facet_plot(data,var,output_var,chart_type=plt.hist,hue=None,col=None):
df=pd.DataFrame(data[var])
df["Proportion (%)"]=1
df=df.groupby([var]).agg({"Proportion (%)":"count"}).reset_index()
df["Proportion (%)"]=df["Proportion (%)"]/df["Proportion (%)"].sum()*100
sns.barplot(x="Proportion (%)", y=var,data=df)
title_name="Proportion of " + var
plt.title(title_name)
plt.show()
### code to calculate median value
print("Distribution of",output_var ,"based on ",var)
plt.figure(figsize=(16, 16))
#g = sns.catplot(x=output_var, y=var,
# hue=hue, col=col,
# data=data, kind="bar",
# height=4, aspect=.7);
sns.barplot(x=output_var, y=var, data=data, estimator=median,hue=hue)
plt.show()
### code to plot based on output value
plt.figure(figsize=(16, 16))
g = sns.FacetGrid(data, col=var,palette='Set1')
g.map(chart_type, output_var,color="r");
#title_name1="Distribution of Median" +output_variable+ var
#plt.title(title_name1)
print("Distribution of",output_var ,"based on ",var)
plt.show()
#### salary data is avaialable in 2020 folder
### Importing data
data=pd.read_csv("/content/salary_data.csv")
data.head()
data.shape
### setting independent variable and dependent variable
dependent_var="PAID_WAGE_PER_YEAR"
independent_var=data.drop(dependent_var,axis=1).columns.to_list()
print((dependent_var))
print(independent_var)
### Splitting numerical, discrete and categorical variables
## splitting categorical and numerical variables
categorical_var=[var for var in data.columns if data[var].dtypes=='object']
numerical_var=[var for var in data.columns if data[var].dtypes in ('float64','int64')]
discrete_var=[]
print("Categorical variables are",categorical_var)
print("numerical variables are",numerical_var)
print("discrete variables are",discrete_var)
### verifying if any column appears in more than one split
print("Checking if there are duplicate columns or not")
print([var for var in numerical_var if var in (categorical_var + discrete_var)])
print([var for var in discrete_var if var in (categorical_var + numerical_var)])
print([var for var in categorical_var if var in (discrete_var + numerical_var)])
###Output
Checking if there are duplicate columns or not
[]
[]
[]
###Markdown
***1) Plan Analysis***

***2) Ask Questions***

***1) Do specific sub-types of data-related jobs have higher or lower salaries than others?***

***2) Do salaries change based on visa type (very important to know if you are not a US citizen)?***
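As a quick numeric complement to the plots below (an added sketch, not part of the original notebook), both questions can also be answered with group medians:
###Code
### Hypothetical numeric summary: median salary per job subgroup, and per subgroup/visa class
print(data.groupby("JOB_TITLE_SUBGROUP")[dependent_var].median().sort_values(ascending=False))
print(data.groupby(["JOB_TITLE_SUBGROUP", "VISA_CLASS"])[dependent_var].median().unstack())
###Output
_____no_output_____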
###Code
###Output
_____no_output_____
###Markdown
***1) Do specific sub-types of data-related jobs have higher or lower salaries than others?***
###Code
var="JOB_TITLE_SUBGROUP"
output_variable=dependent_var
get_bar_facet_plot(data,var,output_variable)
###Output
_____no_output_____
###Markdown
***2)Do salaries change based on visa type (very important to know if you are not a US citizen)?***
###Code
var="JOB_TITLE_SUBGROUP"
output_variable=dependent_var
get_bar_facet_plot(data,var,output_variable,hue="VISA_CLASS")
plt.figure(figsize=(16, 16))
sns.set(style="whitegrid")
g = sns.catplot(x=dependent_var, y="JOB_TITLE_SUBGROUP",
hue="VISA_CLASS",
data=data, kind="bar")
###Output
_____no_output_____ |
Lab_Artificial_Intelligence_Intro.ipynb | ###Markdown
Lab AI

Obiettivo del laboratorio: realizzazione di un classificatore!

1. Descriptive analysis: what are the most important features? (8 punti)
2. Precision Weighted >= 98.5% sul dataset di test (10 punti)
3. Individuare gli errori e descriverne il motivo (6 punti)
4. Proporre ed implementare degli improvements per aumentare la precision (6 punti)

1. Environment preparation
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Occs3 dataset

3 occupations: 'Mechanical Engineer', 'Marketing Manager', 'Construction Manager'
###Code
df = pd.read_csv('/content/dataset.csv')
df.head(1)
df["OCCUPATION_NAME"].unique()
df
col = ['TITLE_RAW', 'OCCUPATION_NAME']
df = df[col]
df = df[pd.notnull(df['OCCUPATION_NAME'])]
fig = plt.figure(figsize=(8,6))
df.groupby('OCCUPATION_NAME').count().plot.bar(ylim=0)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
1. Descriptive analysis: what are the most important features? (8 punti)
2. Precision Weighted >= 98.5% sul dataset di test (10 punti)
3. Individuare gli errori e descriverne il motivo (6 punti)
4. Proporre ed implementare degli improvements per aumentare la precision (6 punti)
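As a starting point for these tasks, a minimal baseline sketch (an addition, not a reference solution; it assumes scikit-learn is available and uses only the raw job titles):
###Code
### Minimal baseline sketch: TF-IDF on TITLE_RAW + logistic regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X_tr, X_te, y_tr, y_te = train_test_split(df["TITLE_RAW"], df["OCCUPATION_NAME"],
                                           test_size=0.2, random_state=0,
                                           stratify=df["OCCUPATION_NAME"])
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
baseline.fit(X_tr, y_tr)
print(classification_report(y_te, baseline.predict(X_te)))
###Output
_____no_output_____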
###Code
###Output
_____no_output_____ |
05_lab_clasificacion/05_lab_clasificacion.ipynb | ###Markdown
MAT281 - Laboratorio 8

Aplicaciones de la Matemática en la Ingeniería

Puedes ejecutar este jupyter notebook de manera interactiva:

[](https://mybinder.org/v2/gh/sebastiandres/mat281_m04_data_science/master?filepath=/05_lab_clasificacion//05_lab_clasificacion.ipynb)
[](https://colab.research.google.com/github/sebastiandres/mat281_m04_data_science/blob/master///05_lab_clasificacion//05_lab_clasificacion.ipynb)

__Instrucciones__

* Completa tus datos personales (nombre y rol USM).
* Debes enviar este .ipynb con el siguiente formato de nombre: 08_lab_clasificacion_NOMBRE_APELLIDO.ipynb con tus respuestas a [email protected] y [email protected] .
* Se evaluará:
    - Soluciones
    - Código
    - Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error.
    - La escala es de 0 a 4 considerando solo valores enteros.
* __La entrega es al final de esta clase.__

__Nombre__:

__Rol__:

Observación

Este laboratorio utiliza la librería sklearn (oficialmente llamada [scikit learn](http://scikit-learn.org/stable/)), de la cual utilizaremos el método de clasificación **k Nearest Neighbors**.

Problema: clasificación de dígitos

En este laboratorio realizaremos el trabajo de reconocer un dígito a partir de una imagen.

El repositorio con los datos se encuentra en el siguiente [link](https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits), pero los datos ya han sido incluidos en el directorio `data/`.

Contenido

El laboratorio consiste de 4 secciones:

0. Explicación de k Nearest Neighbours
1. Exploración de los datos.
2. Entrenando el modelo kNN.
3. Estimación del error de predicción de dígitos utilizando kNN.

¿Qué es k Nearest Neighbours?

El algoritmo **k Nearest Neighbors** es un método no paramétrico: una vez que el parámetro $k$ se ha fijado, no se busca obtener ningún parámetro adicional.

Sean los puntos $x^{(i)} = (x^{(i)}_1, ..., x^{(i)}_n)$ de etiqueta $y^{(i)}$ conocida, para $i=1, ..., m$. El problema de clasificación consiste en encontrar la etiqueta de un nuevo punto $x=(x_1, ..., x_n)$ para el cual no conocemos la etiqueta. La etiqueta de un punto se obtiene de la siguiente forma:

* Para $k=1$, **1NN** asigna a $x$ la etiqueta de su vecino más cercano.
* Para $k$ genérico, **kNN** asigna a $x$ la etiqueta más popular de los k vecinos más cercanos.

El modelo subyacente a kNN es el conjunto de entrenamiento completo. A diferencia de otros métodos que efectivamente generalizan y resumen la información (como regresión logística, por ejemplo), cuando se necesita realizar una predicción, el algoritmo kNN mira **todos** los datos y selecciona los k datos más cercanos, para regresar la etiqueta más popular/más común. Los datos no se resumen en parámetros, sino que siempre deben mantenerse en memoria. Es un método, por tanto, que no escala bien con un gran número de datos.

En caso de empate, existen diversas maneras de desempatar:

* Elegir la etiqueta del vecino más cercano (problema: no garantiza solución).
* Elegir la etiqueta de menor valor (problema: arbitrario).
* Elegir la etiqueta que se obtendría con $k+1$ o $k-1$ (problema: no garantiza solución, aumenta tiempo de cálculo).

La cercanía o similaridad entre los datos se mide de diversas maneras, pero en general depende del tipo de datos y del contexto.

* Para datos reales, puede utilizarse cualquier distancia, siendo la **distancia euclidiana** la más utilizada. También es posible ponderar unas componentes más que otras.
Resulta conveniente normalizar para poder utilizar la noción de distancia más naturalmente.

* Para **datos categóricos o binarios**, suele utilizarse la distancia de Hamming.

A continuación, una implementación de "bare bones" en numpy:
###Code
from matplotlib import pyplot as plt
import numpy as np
# Parámetros
k = 3
def knn_search(X, k, x):
""" find K nearest neighbours of data among D """
# Distancia euclidiana
d = np.sqrt(((X - x[:,:k])**2).sum(axis=0))
# Ordenar por cercania
idx = np.argsort(d)
# Regresar los k mas cercanos
id_closest = idx[:k]
return id_closest, d[id_closest].max()
def knn(X,Y,k,x):
# Obtener los k mas cercanos
k_closest, dmax = knn_search(X, k, x)
# Obtener las etiquetas
Y_closest = Y[k_closest]
# Obtener la mas popular
counts = np.bincount(Y_closest.flatten())
#print(counts)
# Regresar la mas popular (cualquiera, si hay empate)
return np.argmax(counts)
N = 100
X = np.random.rand(2,N) # random dataset
Y = np.array(np.random.rand(N)<0.4, dtype=int).reshape((N,1)) # random dataset
x = np.random.rand(2,1) # query point
# performing the search
neig_idx, dmax = knn_search(X, k, x)
y = knn(X, Y, k, x)
# plotting the data and the input point
fig = plt.figure(figsize=(8,8))
plt.plot(x[0,0],x[1,0],'ok', ms=16)
m_ob = Y[:,0]==0
plt.plot(X[0,m_ob], X[1,m_ob], 'ob', ms=8)
m_sr = Y[:,0]==1
plt.plot(X[0,m_sr],X[1,m_sr],'sr', ms=8)
# highlighting the neighbours
plt.plot(X[0,neig_idx], X[1,neig_idx], 'o', markerfacecolor='None', markersize=24, markeredgewidth=1)
# Plot a circle
x_circle = dmax*np.cos(np.linspace(0,2*np.pi,360)) + x[0,0]
y_circle = dmax*np.sin(np.linspace(0,2*np.pi,360)) + x[1,0]
plt.plot(x_circle, y_circle, 'k', alpha=0.25)
# Show all
plt.xlim([0,1])
plt.ylim([0,1])
plt.show()
# Print result
if y==0:
print("Prediccion realizada para etiqueta del punto = {} (circulo azul)".format(y))
else:
print("Prediccion realizada para etiqueta del punto = {} (cuadrado rojo)".format(y))
###Output
_____no_output_____
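###Markdown
Como complemento (esbozo agregado, no forma parte del laboratorio original): la distancia de Hamming mencionada arriba, para datos binarios o categóricos, se reduce a contar las componentes que difieren.
###Code
# Esbozo: distancia de Hamming entre dos vectores binarios
a = np.array([1, 0, 1, 1, 0])
b = np.array([1, 1, 1, 0, 0])
print("Distancia de Hamming:", np.sum(a != b))  # imprime 2
###Output
_____no_output_____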
###Markdown
Puedes ejecutar varias veces el código anterior, variando el número de vecinos `k`, para ver cómo afecta al algoritmo.

1. Exploración de los datos

Los datos se encuentran en 2 archivos, `data/optdigits.train` y `data/optdigits.test`. Como su nombre lo indica, el set `data/optdigits.train` contiene los ejemplos que deben ser usados para entrenar el modelo, mientras que el set `data/optdigits.test` se utilizará para obtener una estimación del error de predicción.

Ambos archivos comparten el mismo formato: cada línea contiene 65 valores. Los 64 primeros corresponden a la representación de la imagen en escala de grises (valores enteros entre 0 y 16, según `optdigits.names.txt`), y el valor 65 corresponde al dígito de la imagen (0-9).

Código para mostrar los archivos en un directorio
###Code
%%bash
ls data
###Output
optdigits.names.txt
optdigits.test
optdigits.train
###Markdown
Código para revisar un archivo (reemplazar por cualquiera de los archivos de interés).
###Code
%%bash
cat data/optdigits.names.txt
###Output
1. Title of Database: Optical Recognition of Handwritten Digits
2. Source:
E. Alpaydin, C. Kaynak
Department of Computer Engineering
Bogazici University, 80815 Istanbul Turkey
[email protected]
July 1998
3. Past Usage:
C. Kaynak (1995) Methods of Combining Multiple Classifiers and Their
Applications to Handwritten Digit Recognition,
MSc Thesis, Institute of Graduate Studies in Science and
Engineering, Bogazici University.
E. Alpaydin, C. Kaynak (1998) Cascading Classifiers, Kybernetika,
to appear. ftp://ftp.icsi.berkeley.edu/pub/ai/ethem/kyb.ps.Z
4. Relevant Information:
We used preprocessing programs made available by NIST to extract
normalized bitmaps of handwritten digits from a preprinted form. From
a total of 43 people, 30 contributed to the training set and different
13 to the test set. 32x32 bitmaps are divided into nonoverlapping
blocks of 4x4 and the number of on pixels are counted in each block.
This generates an input matrix of 8x8 where each element is an
integer in the range 0..16. This reduces dimensionality and gives
invariance to small distortions.
For info on NIST preprocessing routines, see
M. D. Garris, J. L. Blue, G. T. Candela, D. L. Dimmick, J. Geist,
P. J. Grother, S. A. Janet, and C. L. Wilson, NIST Form-Based
Handprint Recognition System, NISTIR 5469, 1994.
5. Number of Instances
optdigits.tra Training 3823
optdigits.tes Testing 1797
The way we used the dataset was to use half of training for
actual training, one-fourth for validation and one-fourth
for writer-dependent testing. The test set was used for
writer-independent testing and is the actual quality measure.
6. Number of Attributes
64 input+1 class attribute
7. For Each Attribute:
All input attributes are integers in the range 0..16.
The last attribute is the class code 0..9
8. Missing Attribute Values
None
9. Class Distribution
Class: No of examples in training set
0: 376
1: 389
2: 380
3: 389
4: 387
5: 376
6: 377
7: 387
8: 380
9: 382
Class: No of examples in testing set
0: 178
1: 182
2: 177
3: 183
4: 181
5: 182
6: 181
7: 179
8: 174
9: 180
Accuracy on the testing set with k-nn
using Euclidean distance as the metric
k = 1 : 98.00
k = 2 : 97.38
k = 3 : 97.83
k = 4 : 97.61
k = 5 : 97.89
k = 6 : 97.77
k = 7 : 97.66
k = 8 : 97.66
k = 9 : 97.72
k = 10 : 97.55
k = 11 : 97.89
###Markdown
Código para mostrar las primeras líneas del archivo
###Code
%%bash
head data/optdigits.train
###Output
0,1,6,15,12,1,0,0,0,7,16,6,6,10,0,0,0,8,16,2,0,11,2,0,0,5,16,3,0,5,7,0,0,7,13,3,0,8,7,0,0,4,12,0,1,13,5,0,0,0,14,9,15,9,0,0,0,0,6,14,7,1,0,0,0
0,0,10,16,6,0,0,0,0,7,16,8,16,5,0,0,0,11,16,0,6,14,3,0,0,12,12,0,0,11,11,0,0,12,12,0,0,8,12,0,0,7,15,1,0,13,11,0,0,0,16,8,10,15,3,0,0,0,10,16,15,3,0,0,0
0,0,8,15,16,13,0,0,0,1,11,9,11,16,1,0,0,0,0,0,7,14,0,0,0,0,3,4,14,12,2,0,0,1,16,16,16,16,10,0,0,2,12,16,10,0,0,0,0,0,2,16,4,0,0,0,0,0,9,14,0,0,0,0,7
0,0,0,3,11,16,0,0,0,0,5,16,11,13,7,0,0,3,15,8,1,15,6,0,0,11,16,16,16,16,10,0,0,1,4,4,13,10,2,0,0,0,0,0,15,4,0,0,0,0,0,3,16,0,0,0,0,0,0,1,15,2,0,0,4
0,0,5,14,4,0,0,0,0,0,13,8,0,0,0,0,0,3,14,4,0,0,0,0,0,6,16,14,9,2,0,0,0,4,16,3,4,11,2,0,0,0,14,3,0,4,11,0,0,0,10,8,4,11,12,0,0,0,4,12,14,7,0,0,6
0,0,11,16,10,1,0,0,0,4,16,10,15,8,0,0,0,4,16,3,11,13,0,0,0,1,14,6,9,14,0,0,0,0,0,0,12,10,0,0,0,0,0,6,16,6,0,0,0,0,5,15,15,8,8,3,0,0,10,16,16,16,16,6,2
0,0,1,11,13,11,7,0,0,0,9,14,6,4,3,0,0,0,16,12,16,15,2,0,0,5,16,10,4,12,6,0,0,1,1,0,0,10,4,0,0,0,0,0,5,10,0,0,0,0,0,8,15,3,0,0,0,0,1,13,5,0,0,0,5
0,0,8,10,8,7,2,0,0,1,15,14,12,12,4,0,0,7,15,12,5,0,0,0,0,5,14,12,15,7,0,0,0,0,0,0,2,13,0,0,0,0,0,0,4,12,0,0,0,0,6,7,14,5,0,0,0,0,4,13,8,0,0,0,5
0,0,15,2,14,13,2,0,0,0,16,15,12,13,8,0,0,2,16,12,1,6,10,0,0,7,15,3,0,5,8,0,0,5,12,0,0,8,8,0,0,5,12,0,7,15,5,0,0,5,16,13,16,6,0,0,0,0,10,12,5,0,0,0,0
0,0,3,13,13,2,0,0,0,6,16,12,10,8,0,0,0,9,15,12,16,6,0,0,0,10,16,16,13,0,0,0,0,1,12,16,12,14,4,0,0,0,11,8,0,3,12,0,0,0,13,11,8,13,12,0,0,0,3,15,11,6,0,0,8
###Markdown
1.1 Cargando los datos en numpy

Para cargar los datos, utilizamos np.loadtxt con los parámetros extra delimiter (para indicar que el separador será en esta ocasión una coma) y dtype np.int8 (para que su representación en memoria sea la mínima posible, 8 bits en vez de 32/64 bits para un float).
###Code
import numpy as np
import os
filepath = os.path.join("data", "optdigits.train")
XY_tv = np.loadtxt(filepath, delimiter=",", dtype=np.int8)
print(XY_tv)
# Split into X (values) and Y (labels)
X_tv = XY_tv[:,:64]
Y_tv = XY_tv[:, 64]
# Some printings
print(X_tv.shape)
print(Y_tv.shape)
print(X_tv[0,:])
print(X_tv[0,:].reshape(8,8))
print(Y_tv[0])
###Output
(3823, 64)
(3823,)
[ 0 1 6 15 12 1 0 0 0 7 16 6 6 10 0 0 0 8 16 2 0 11 2 0
0 5 16 3 0 5 7 0 0 7 13 3 0 8 7 0 0 4 12 0 1 13 5 0
0 0 14 9 15 9 0 0 0 0 6 14 7 1 0 0]
[[ 0 1 6 15 12 1 0 0]
[ 0 7 16 6 6 10 0 0]
[ 0 8 16 2 0 11 2 0]
[ 0 5 16 3 0 5 7 0]
[ 0 7 13 3 0 8 7 0]
[ 0 4 12 0 1 13 5 0]
[ 0 0 14 9 15 9 0 0]
[ 0 0 6 14 7 1 0 0]]
0
###Markdown
1.1 (bis) Cargando los datos en pandas

Para cargar los datos, también sería posible utilizar pandas. Sin embargo, como el archivo no tiene encabezado, necesitaremos crear nombres para las columnas, además de indicar el separador y tipo de datos.
###Code
import pandas as pd
import os
filepath = os.path.join("data", "optdigits.train")
col_names = ["c{:02d}".format(i) for i in range(65)]
df_XY_tv = pd.read_csv(filepath, names=col_names, sep=",", dtype=np.int8)
print(df_XY_tv)
# Split into X (values) and Y (labels)
XY_tv = df_XY_tv.values
X_tv = XY_tv[:,:64]
Y_tv = XY_tv[:, 64]
# Some printings
print(X_tv.shape)
print(Y_tv.shape)
print(X_tv[0,:])
print(X_tv[0,:].reshape(8,8))
print(Y_tv[0])
###Output
(3823, 64)
(3823,)
[ 0 1 6 15 12 1 0 0 0 7 16 6 6 10 0 0 0 8 16 2 0 11 2 0
0 5 16 3 0 5 7 0 0 7 13 3 0 8 7 0 0 4 12 0 1 13 5 0
0 0 14 9 15 9 0 0 0 0 6 14 7 1 0 0]
[[ 0 1 6 15 12 1 0 0]
[ 0 7 16 6 6 10 0 0]
[ 0 8 16 2 0 11 2 0]
[ 0 5 16 3 0 5 7 0]
[ 0 7 13 3 0 8 7 0]
[ 0 4 12 0 1 13 5 0]
[ 0 0 14 9 15 9 0 0]
[ 0 0 6 14 7 1 0 0]]
0
###Markdown
Como ya hemos mencionado anteriormente, pandas tiene la ventaja de poseer el método `describe` para obtener un rápido resumen de los datos.
###Code
df_XY_tv.describe(include="all").T
###Output
_____no_output_____
###Markdown
1.2 Visualizando los datos

Para visualizar los datos utilizaremos el método `imshow` de `matplotlib`. Resulta necesario convertir el arreglo desde las dimensiones (1,64) a (8,8) para que la imagen sea cuadrada y pueda distinguirse el dígito. Superpondremos además el label correspondiente al dígito, mediante el método `text`. Esto nos permitirá comparar la imagen generada con la etiqueta asociada a los valores. Realizaremos lo anterior para los primeros 25 datos del archivo.
###Code
from matplotlib import pyplot as plt
# Well plot the first nx*ny examples
nx, ny = 5, 5
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j+ny*i
data = X_tv[index,:].reshape(8,8)
label = Y_tv[index]
ax[i][j].imshow(data, interpolation='nearest', cmap=plt.get_cmap('gray_r'))
ax[i][j].text(7, 0, str(int(label)), horizontalalignment='center',
verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
2. Entrenando el modelo

2.1 Entrenamiento trivial

Entrenaremos el modelo con 1 vecino y verificaremos el error de predicción en el set de entrenamiento.
###Code
from sklearn.neighbors import KNeighborsClassifier
k = 1
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_tv, Y_tv)
Y_pred = kNN.predict(X_tv)
n_errors = sum(Y_pred!=Y_tv)
print("Hay %d errores de un total de %d ejemplos de entrenamiento" %(n_errors, len(Y_tv)))
###Output
Hay 0 errores de un total de 3823 ejemplos de entrenamiento
###Markdown
Desafío 1

¿Por qué el error de entrenamiento es 0 en el modelo?

***RESPONDA AQUI***

2.2 Buscando el valor de k más apropiado

A partir del análisis del punto anterior, nos damos cuenta de la necesidad de:

1. Calcular el error en un set distinto al utilizado para entrenar.
2. Calcular el mejor valor de vecinos para el algoritmo.

Desafío 2

Complete el código entregado a continuación, de modo que se calcule el error de predicción de kNN (porcentaje de ejemplos mal clasificados), para k entre 1 y 10 (ambos incluidos). Realice una división en set de entrenamiento (75%) y de validación (25%), y calcule el valor promedio y la desviación estándar del error de predicción (en porcentaje), tomando al menos 20 repeticiones para cada valor de k.

OBS: La ejecución de la celda debería tomar alrededor de 5 minutos.
###Code
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split # Spoiler alert #
print("Paciencia. Debería tomar algunos minutos.")
template = "k={0:,d}: {1:.2f} +- {2:.2f} errores de clasificación de un total de {3:,d} puntos"
# Fitting the model
mean_error_for_k = []
std_error_for_k = []
k_range = # FIX ME #
for k in k_range:
errors_k = []
for i in # FIX ME #
# Splitting the data
X_train, X_valid, Y_train, Y_valid = # FIX ME #
# Training the model
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_train, Y_train)
# Predicting values
Y_valid_pred = kNN.predict(X_valid)
# Count the errors
n_errors = sum(Y_valid_pred!=Y_valid)
# Add them to vector
errors_k.append(100.*n_errors/len(Y_valid))
errors = np.array(errors_k)
print(template.format(k, errors.mean(), errors.std(), len(Y_valid)))
mean_error_for_k.append(errors.mean())
std_error_for_k.append(errors.std())
###Output
Paciencia. Debería tomar algunos minutos.
k=1: 1.62 +- 0.34 errores de clasificación de un total de 956 puntos
k=2: 2.12 +- 0.38 errores de clasificación de un total de 956 puntos
k=3: 1.48 +- 0.33 errores de clasificación de un total de 956 puntos
k=4: 1.75 +- 0.38 errores de clasificación de un total de 956 puntos
k=5: 1.59 +- 0.33 errores de clasificación de un total de 956 puntos
k=6: 1.82 +- 0.34 errores de clasificación de un total de 956 puntos
k=7: 1.72 +- 0.35 errores de clasificación de un total de 956 puntos
k=8: 1.88 +- 0.35 errores de clasificación de un total de 956 puntos
k=9: 1.88 +- 0.39 errores de clasificación de un total de 956 puntos
k=10: 2.00 +- 0.37 errores de clasificación de un total de 956 puntos
###Markdown
Observación: El código anterior debería dar un resultado similar al siguiente:

```k=1: 1.62 +- 0.34 errores de clasificación de un total de 956 puntos```

2.3 Visualizando el error de predicción

Podemos visualizar los datos anteriores utilizando el siguiente código, que requiere que `std_error_for_k` y `mean_error_for_k` hayan sido apropiadamente definidos.
###Code
mean = np.array(mean_error_for_k)
std = np.array(std_error_for_k)
plt.figure(figsize=(12,8))
plt.plot(k_range, mean - std, "k:")
plt.plot(k_range, mean , "r.-")
plt.plot(k_range, mean + std, "k:")
plt.xlabel("Numero de vecinos k")
plt.ylabel("Error de clasificacion")
plt.show()
###Output
_____no_output_____
###Markdown
Desafío 3

¿Qué patrón se observa en los datos, en relación a los números pares e impares? ¿Por qué sucede esto? ¿Qué valor de $k$ elegirá para el algoritmo?

***RESPONDA AQUI***

2.4 Entrenando con todos los datos

A partir de lo anterior, se fija el número de vecinos $k$ y se procede a entrenar el modelo con todos los datos.
###Code
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
import numpy as np
k = 3 # Fix here, maybe
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_tv, Y_tv)
###Output
_____no_output_____
###Markdown
2.5 Predicción en testing dataset

Ahora que el modelo kNN ha sido completamente entrenado, calcularemos el error de predicción en un set de datos completamente nuevo: el set de testing.

Desafío 4

Complete el código a continuación, para cargar los datos del set de testing y realizar una predicción de los dígitos de cada imagen. ***No cambie los nombres de las variables***.
###Code
# Cargando el archivo data/optdigits.test
filepath = os.path.join("data", "optdigits.test")
XY_test = ##FIX ME##
X_test = ##FIX ME##
Y_test = ##FIX ME##
# Predicción de etiquetas
Y_pred = ##FIX ME##
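### Una posible completación (esbozo agregado, no la única solución):
### XY_test = np.loadtxt(filepath, delimiter=",", dtype=np.int8)
### X_test, Y_test = XY_test[:, :64], XY_test[:, 64]
### Y_pred = kNN.predict(X_test)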
###Output
_____no_output_____
###Markdown
2.6 Visualización de etiquetas correctas

Puesto que tenemos las etiquetas verdaderas en el set de test, podemos visualizar únicamente los números que han sido correctamente etiquetados. Ejecute el código a continuación.
###Code
from matplotlib import pyplot as plt
# Mostrar los datos correctos
mask = (Y_pred==Y_test)
X_aux = X_test[mask]
Y_aux_true = Y_test[mask]
Y_aux_pred = Y_pred[mask]
# We'll plot the first 100 examples, randomly choosen
nx, ny = 5, 5
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j+ny*i
data = X_aux[index,:].reshape(8,8)
label_pred = str(int(Y_aux_pred[index]))
label_true = str(int(Y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap=plt.get_cmap('gray_r'))
ax[i][j].text(0, 0, label_pred, horizontalalignment='center',
verticalalignment='center', fontsize=10, color='green')
ax[i][j].text(7, 0, label_true, horizontalalignment='center',
verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
2.7 Visualización de etiquetas incorrectas

Más interesante que el gráfico anterior resulta considerar los casos donde los dígitos han sido incorrectamente etiquetados.

Desafío 5

Modifique el código anteriormente provisto para que muestre los dígitos incorrectamente etiquetados, cambiando apropiadamente la máscara. Cambie también el color de la etiqueta desde verde a rojo, para indicar una mala etiquetación.
###Code
"""
##FIX ME##
"""
###Output
_____no_output_____
###Markdown
2.8 Análisis del error

Después de la exploración visual de los resultados, estamos ansiosos de obtener el error de predicción real del modelo.

Desafío 6

Complete el código, obteniendo el error de clasificación para cada dígito. Indique el error obtenido en su respuesta. ¿Existen dígitos más fáciles o difíciles de clasificar?

***RESPONDER AQUI***
###Code
# Error global
mask = (Y_pred!=Y_test)
error_prediccion = # FIX ME #
print("Error de predicción total de {:.1f}%".format(error_prediccion))
for digito in range(0,10):
mask_digito = # FIX ME #
Y_test_digito = Y_test[mask_digito]
Y_pred_digito = Y_pred[mask_digito]
error_prediccion = 100.*sum((Y_pred_digito!=Y_test_digito)) / len(Y_pred_digito)
print("Error de predicción para digito {} de {:.1f}%".format(digito, error_prediccion))
###Output
Error de predicción total de 0.7%
Error de predicción para digito 0 de 0.5%
Error de predicción para digito 1 de 0.5%
Error de predicción para digito 2 de 0.0%
Error de predicción para digito 3 de 0.8%
Error de predicción para digito 4 de 0.5%
Error de predicción para digito 5 de 0.8%
Error de predicción para digito 6 de 0.5%
Error de predicción para digito 7 de 0.5%
Error de predicción para digito 8 de 1.8%
Error de predicción para digito 9 de 1.3%
###Markdown
2.9 Análisis del error (cont.)

La matriz de confusión (confusion matrix) indica la relación entre las etiquetas clasificadas correcta e incorrectamente:

*Compute confusion matrix to evaluate the accuracy of a classification*

*By definition a confusion matrix $C$ is such that $C_{i,j}$ is equal to the number of observations known to be in group $i$ but predicted to be in group $j$.*

Es decir, el elemento $C_{3,3}$ cuenta la cantidad de veces que el dígito 3 fue clasificado correctamente, mientras que $C_{9,7}$ indica la cantidad de veces que el dígito 9 fue incorrectamente clasificado como el dígito 7.

Observación: Lo anterior corresponde a la convención utilizada por sklearn. La convención puede variar según la referencia.

El siguiente código muestra cómo calcular y visualizar el error de clasificación con la matriz de confusión:
###Code
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Y_test, Y_pred)
print(cm)
# As in http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, title='Confusion matrix (without diagonal)', cmap=plt.cm.gray_r):
cm_aux = cm - np.diag(np.diag(cm))
plt.figure(figsize=(10,10))
plt.imshow(cm_aux, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(10)
plt.xticks(tick_marks, tick_marks)
plt.yticks(tick_marks, tick_marks)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
return None
# Compute confusion matrix
plt.figure()
plot_confusion_matrix(cm)
###Output
_____no_output_____ |
Jesse_Ghansah_assignment_regression_classification_4.ipynb | ###Markdown
Lambda School Data Science

*Unit 2, Sprint 1, Module 4*

---

Logistic Regression Assignment 🌯

You'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?

> We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.

- [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.
- [ ] Begin with baselines for classification.
- [ ] Use scikit-learn for logistic regression.
- [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.)
- [ ] Get your model's test accuracy. (One time, at the end.)
- [ ] Commit your notebook to your fork of the GitHub repo.
- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.

Stretch Goals

- [ ] Add your own stretch goal(s)!
- [ ] Make exploratory visualizations.
- [ ] Do one-hot encoding.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Get and plot your coefficients.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Load data downloaded from https://srcole.github.io/100burritos/
import pandas as pd
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
# Derive binary classification target:
# We define a 'Great' burrito as having an
# overall rating of 4 or higher, on a 5 point scale.
# Drop unrated burritos.
df = df.dropna(subset=['overall'])
df['Great'] = df['overall'] >= 4
# Clean/combine the Burrito categories
df['Burrito'] = df['Burrito'].str.lower()
california = df['Burrito'].str.contains('california')
asada = df['Burrito'].str.contains('asada')
surf = df['Burrito'].str.contains('surf')
carnitas = df['Burrito'].str.contains('carnitas')
df.loc[california, 'Burrito'] = 'California'
df.loc[asada, 'Burrito'] = 'Asada'
df.loc[surf, 'Burrito'] = 'Surf & Turf'
df.loc[carnitas, 'Burrito'] = 'Carnitas'
df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'
# Drop some high cardinality categoricals
df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])
# Drop some columns to prevent "leakage"
df = df.drop(columns=['Rec', 'overall'])
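###Output
_____no_output_____
###Markdown
A hedged sketch of the assignment workflow (an addition, not a reference solution). It assumes the dataframe has a parseable `Date` column, and the numeric rating columns used as features are an illustrative assumption only:
###Code
# Hedged sketch: time-based split, majority-class baseline, and logistic regression
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df['Date'] = pd.to_datetime(df['Date'], errors='coerce')   # assumption: a 'Date' column exists
train = df[df['Date'].dt.year <= 2016]
val   = df[df['Date'].dt.year == 2017]
test  = df[df['Date'].dt.year >= 2018]

target   = 'Great'
features = ['Cost', 'Hunger', 'Tortilla', 'Temp', 'Meat', 'Fillings']   # illustrative choice only
means    = train[features].mean()

X_train, y_train = train[features].fillna(means), train[target]
X_val,   y_val   = val[features].fillna(means),   val[target]

print('Majority-class baseline accuracy:', y_val.value_counts(normalize=True).max())

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print('Validation accuracy:', accuracy_score(y_val, model.predict(X_val)))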
###Output
_____no_output_____ |
fall_2017/hw6_release/hw6.ipynb | ###Markdown
Homework 6

*This notebook includes both coding and written questions. Please hand in this notebook file with all the outputs as a pdf on gradescope for "HW6 pdf". Upload the three files of code (`compression.py`, `k_nearest_neighbor.py` and `features.py`) on gradescope for "HW6 code".*

This assignment covers:
- image compression using SVD
- kNN methods for image recognition
- PCA and LDA to improve kNN
###Code
# Setup
from time import time
from collections import defaultdict
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rc
from skimage import io
%matplotlib inline
plt.rcParams['figure.figsize'] = (15.0, 12.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Part 1 - Image Compression (15 points)

Image compression is used to reduce the cost of storage and transmission of images (or videos). One lossy compression method is to apply Singular Value Decomposition (SVD) to an image, and only keep the top n singular values.
###Code
image = io.imread('pitbull.jpg', as_grey=True)
plt.imshow(image)
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Let's implement image compression using SVD. We first compute the SVD of the image and, as seen in class, keep the `n` largest singular values and singular vectors to reconstruct the image.

Implement the function `compress_image` in `compression.py`.
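For reference, a minimal sketch of what `compress_image` could look like (with the assumption that the compressed size is counted as the number of stored entries of the truncated U, the singular values, and V^T — consistent with the assert below):
###Code
# Sketch only; the graded implementation belongs in compression.py
import numpy as np

def compress_image_sketch(image, num_values):
    """Reconstruct the image from the top `num_values` singular values."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    compressed_image = U[:, :num_values] @ np.diag(s[:num_values]) @ Vt[:num_values, :]
    # stored numbers: num_values columns of U, num_values singular values, num_values rows of Vt
    compressed_size = num_values * (U.shape[0] + 1 + Vt.shape[1])
    return compressed_image, compressed_size
###Output
_____no_output_____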
###Code
from compression import compress_image
compressed_image, compressed_size = compress_image(image, 100)
compression_ratio = compressed_size / image.size
print('Original image shape:', image.shape)
print('Compressed size: %d' % compressed_size)
print('Compression ratio: %.3f' % compression_ratio)
assert compressed_size == 298500
# Number of singular values to keep
n_values = [10, 50, 100]
for n in n_values:
# Compress the image using `n` singular values
compressed_image, compressed_size = compress_image(image, n)
compression_ratio = compressed_size / image.size
print("Data size (original): %d" % (image.size))
print("Data size (compressed): %d" % compressed_size)
print("Compression ratio: %f" % (compression_ratio))
plt.imshow(compressed_image, cmap='gray')
title = "n = %s" % n
plt.title(title)
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Face Dataset

We will use a dataset of faces of celebrities. Download the dataset using the following command:

    sh get_dataset.sh

The face dataset for the CS131 assignment.

The directory containing the dataset has the following structure:

    faces/
        train/
            angelina jolie/
            anne hathaway/
            ...
        test/
            angelina jolie/
            anne hathaway/
            ...

Each class has 50 training images and 10 testing images.
###Code
from utils import load_dataset
X_train, y_train, classes_train = load_dataset('faces', train=True, as_grey=True)
X_test, y_test, classes_test = load_dataset('faces', train=False, as_grey=True)
assert classes_train == classes_test
classes = classes_train
print('Class names:', classes)
print('Training data shape:', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape:', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
num_classes = len(classes)
samples_per_class = 10
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx])
plt.axis('off')
if i == 0:
plt.title(y)
plt.show()
# Flatten the image data into rows
# we now have one 4096 dimensional feature vector for each example
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print("Training data shape:", X_train.shape)
print("Test data shape:", X_test.shape)
###Output
_____no_output_____
###Markdown
---

Part 2 - k-Nearest Neighbor (30 points)

We're now going to try to classify the test images using the k-nearest neighbors algorithm on the **raw features of the images** (i.e. the pixel values themselves). We will see later how we can use kNN on better features.

Here are the steps that we will follow:

1. We compute the L2 distances between every element of X_test and every element of X_train in `compute_distances`.
2. We split the dataset into 5 folds for cross-validation in `split_folds`.
3. For each fold, and for different values of `k`, we predict the labels and measure accuracy.
4. Using the best `k` found through cross-validation, we measure accuracy on the test set.
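As an illustration of step 1, a vectorized sketch of the pairwise L2 distance computation (an addition; the graded version lives in `k_nearest_neighbor.py`):
###Code
# Sketch: Euclidean distances between every row of X1 (M, D) and every row of X2 (N, D) -> (M, N)
import numpy as np

def compute_distances_sketch(X1, X2):
    X1 = X1.astype(np.float64)                     # cast to avoid int8 overflow
    X2 = X2.astype(np.float64)
    sq1 = np.sum(X1 ** 2, axis=1, keepdims=True)   # (M, 1)
    sq2 = np.sum(X2 ** 2, axis=1)                  # (N,)
    cross = X1 @ X2.T                              # (M, N)
    return np.sqrt(np.maximum(sq1 + sq2 - 2 * cross, 0.0))
###Output
_____no_output_____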
###Code
from k_nearest_neighbor import compute_distances
# Step 1: compute the distances between all features from X_train and from X_test
dists = compute_distances(X_test, X_train)
assert dists.shape == (160, 800)
print("dists shape:", dists.shape)
from k_nearest_neighbor import predict_labels
# We use k = 1 (which corresponds to only taking the nearest neighbor to decide)
y_test_pred = predict_labels(dists, y_train, k=1)
# Compute and print the fraction of correctly predicted examples
num_test = y_test.shape[0]
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
_____no_output_____
###Markdown
Cross-ValidationWe don't know the best value for our parameter `k`. There is no theory on how to choose an optimal `k`, so we choose it through cross-validation.We **cannot** compute any metric on the test set to choose the best `k`, because we want our final test accuracy to reflect a real use case. This real use case would be a setting where new examples arrive and we classify them on the go. There is no way to check the accuracy beforehand on that set of test examples to determine `k`.Cross-validation has us split the data into different folds (5 here). With a total of 5 folds, each fold uses:- 80% of the data as training data- 20% of the data as validation dataWe compute the accuracy on the validation set of each fold, and use the mean of these 5 accuracies to determine the best parameter `k`. A possible `split_folds` is sketched below.
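This sketch assumes the number of training examples divides evenly by `num_folds` (800 / 5 = 160 here); the real implementation in `k_nearest_neighbor.py` may differ:

```python
import numpy as np

def split_folds_sketch(X_train, y_train, num_folds):
    """Return stacked (train, val) splits whose shapes match the asserts below."""
    N = X_train.shape[0]
    fold_size = N // num_folds
    idx = np.arange(N)
    X_trains, y_trains, X_vals, y_vals = [], [], [], []
    for i in range(num_folds):
        val_idx = idx[i * fold_size:(i + 1) * fold_size]   # 20% validation
        trn_idx = np.setdiff1d(idx, val_idx)               # remaining 80% training
        X_vals.append(X_train[val_idx]);  y_vals.append(y_train[val_idx])
        X_trains.append(X_train[trn_idx]); y_trains.append(y_train[trn_idx])
    return np.array(X_trains), np.array(y_trains), np.array(X_vals), np.array(y_vals)
```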
###Code
from k_nearest_neighbor import split_folds
# Step 2: split the data into 5 folds to perform cross-validation.
num_folds = 5
X_trains, y_trains, X_vals, y_vals = split_folds(X_train, y_train, num_folds)
assert X_trains.shape == (5, 640, 4096)
assert y_trains.shape == (5, 640)
assert X_vals.shape == (5, 160, 4096)
assert y_vals.shape == (5, 160)
# Step 3: Measure the mean accuracy for each value of `k`
# List of k to choose from
k_choices = list(range(5, 101, 5))
# Dictionnary mapping k values to accuracies
# For each k value, we will have `num_folds` accuracies to compute
# k_to_accuracies[1] will be for instance [0.22, 0.23, 0.19, 0.25, 0.20] for 5 folds
k_to_accuracies = {}
for k in k_choices:
print("Running for k=%d" % k)
accuracies = []
for i in range(num_folds):
# Make predictions
fold_dists = compute_distances(X_vals[i], X_trains[i])
y_pred = predict_labels(fold_dists, y_trains[i], k)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_pred == y_vals[i])
accuracy = float(num_correct) / len(y_vals[i])
accuracies.append(accuracy)
k_to_accuracies[k] = accuracies
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 26% accuracy on the test data.
best_k = None
# YOUR CODE HERE
# Choose the best k based on the cross validation above
pass
# END YOUR CODE
y_test_pred = predict_labels(dists, y_train, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('For k = %d, got %d / %d correct => accuracy: %f' % (best_k, num_correct, num_test, accuracy))
###Output
_____no_output_____
###Markdown
--- Part 3: PCA (30 points)Principal Component Analysis (PCA) is a simple yet popular and useful linear transformation technique that is used in numerous applications, such as stock market predictions, the analysis of gene expression data, and many more. In this tutorial, we will see that PCA is not just a "black box", and we are going to unravel its internals in 3 basic steps. IntroductionThe sheer size of data in the modern age is not only a challenge for computer hardware but also a main bottleneck for the performance of many machine learning algorithms. The main goal of a PCA analysis is to identify patterns in data; PCA aims to detect the correlation between variables. Attempting to reduce the dimensionality only makes sense if a strong correlation between variables exists. In a nutshell, this is what PCA is all about: finding the directions of maximum variance in high-dimensional data and projecting it onto a lower-dimensional subspace while retaining most of the information. A Summary of the PCA Approach- Standardize the data.- Obtain the eigenvectors and eigenvalues from the covariance matrix or correlation matrix, or perform Singular Value Decomposition.- Sort eigenvalues in descending order and choose the $k$ eigenvectors that correspond to the $k$ largest eigenvalues, where $k$ is the number of dimensions of the new feature subspace ($k \leq d$).- Construct the projection matrix $\mathbf{W}$ from the selected $k$ eigenvectors.- Transform the original dataset $\mathbf{X}$ via $\mathbf{W}$ to obtain a $k$-dimensional feature subspace $\mathbf{Y}$. These steps are sketched on a toy example below.
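A quick illustration of the five steps on a toy matrix (purely a sketch; the assignment's `PCA` class in `features.py` wraps the same logic):

```python
import numpy as np

X = np.random.randn(200, 10)                        # toy data: 200 samples, 10 features
X_centered = X - X.mean(axis=0)                     # 1. standardize (here: just center)
cov = X_centered.T @ X_centered / (X.shape[0] - 1)  # covariance matrix
e_vals, e_vecs = np.linalg.eigh(cov)                # 2. eigendecomposition (symmetric matrix)
order = np.argsort(e_vals)[::-1]                    # 3. sort eigenvalues in descending order
e_vals, e_vecs = e_vals[order], e_vecs[:, order]
k = 3
W = e_vecs[:, :k]                                   # 4. projection matrix from top-k eigenvectors
Y = X_centered @ W                                  # 5. k-dimensional feature subspace
print(Y.shape)                                      # (200, 3)
```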
###Code
from features import PCA
pca = PCA()
###Output
_____no_output_____
###Markdown
3.1 - EigendecompositionThe eigenvectors and eigenvalues of a covariance (or correlation) matrix represent the "core" of a PCA: The eigenvectors (principal components) determine the directions of the new feature space, and the eigenvalues determine their magnitude. In other words, the eigenvalues explain the variance of the data along the new feature axes.Implement **`_eigen_decomp`** in `pca.py`.
###Code
# Perform eigenvalue decomposition on the covariance matrix of training data.
e_vecs, e_vals = pca._eigen_decomp(X_train - X_train.mean(axis=0))
print(e_vals.shape)
print(e_vecs.shape)
###Output
_____no_output_____
###Markdown
3.2 - Singular Value DecompositionDoing an eigendecomposition of the covariance matrix is very expensive, especially when the number of features (`D = 4096` here) gets very high.To obtain the same eigenvalues and eigenvectors in a more efficient way, we can use Singular Value Decomposition (SVD). If we perform SVD on matrix $X$, we obtain $U$, $S$ and $V$ such that:$$X = U S V^T$$- the columns of $U$ are the eigenvectors of $X X^T$- the columns of $V$ (i.e. the rows of $V^T$) are the eigenvectors of $X^T X$- the values of $S$ are the square roots of the eigenvalues of $X^T X$ (or $X X^T$)Therefore, we can find the top `k` eigenvectors of the covariance matrix $X^T X$ using SVD.Implement **`_svd`** in `pca.py`.
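The asserts in the next cell compare the output of `_svd` with the covariance eigenvectors, so the vectors it returns must be eigenvectors of $X^T X$, i.e. columns of $V$. A sketch of that behaviour (illustrative only; the graded version lives in `pca.py`):

```python
import numpy as np

def svd_sketch(X_centered):
    """Return the eigenvectors of X^T X (columns of V) and the singular values."""
    _, s, vt = np.linalg.svd(X_centered, full_matrices=False)
    return vt.T, s   # eigenvalues of the covariance matrix are s**2 / (N - 1)
```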
###Code
# Perform SVD on directly on the training data.
u, s = pca._svd(X_train - X_train.mean(axis=0))
print(s.shape)
print(u.shape)
# Test whether the square of singular values and eigenvalues are the same.
# We also observe that `e_vecs` and `u` are the same (only the sign of each column can differ).
N = X_train.shape[0]
assert np.allclose((s ** 2) / (N - 1), e_vals[:len(s)])
for i in range(len(s) - 1):
assert np.allclose(e_vecs[:, i], u[:, i]) or np.allclose(e_vecs[:, i], -u[:, i])
# (the last eigenvector for i = len(s) - 1 is very noisy because the eigenvalue is almost 0,
# so imprecisions in the computation build up)
###Output
_____no_output_____
###Markdown
3.3 - Dimensionality ReductionThe top $k$ principal components explain most of the variance of the underlying data.By projecting our initial data (the images) onto the subspace spanned by the top $k$ principal components, we can reduce the dimension of our inputs while keeping most of the information.In the example below, we can see that **using the first two components in PCA is not enough** to allow us to see patterns in the data. All the classes seem placed at random in the 2D plane.
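Once `fit` has stored the sorted eigenvectors in a matrix `W_pca` of shape `(D, D)`, `transform` reduces to a matrix product. This is only a sketch; how centering is handled depends on the actual `PCA` class in `features.py`:

```python
import numpy as np

def transform_sketch(X, W_pca, n_components):
    """Project X of shape (N, D) onto the top `n_components` principal components."""
    return X @ W_pca[:, :n_components]   # result has shape (N, n_components)
```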
###Code
# Dimensionality reduction by projecting the data onto
# lower dimensional subspace spanned by k principal components
# To visualize, we will project in 2 dimensions
n_components = 2
pca.fit(X_train)
X_proj = pca.transform(X_train, n_components)
# Plot the top two principal components
for y in np.unique(y_train):
plt.scatter(X_proj[y_train==y,0], X_proj[y_train==y,1], label=classes[y])
plt.xlabel('1st component')
plt.ylabel('2nd component')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
3.4 - Visualizing EigenfacesThe columns of the PCA projection matrix `pca.W_pca` represent the eigenvectors of $X^T X$.We can visualize the biggest singular values as well as the corresponding vectors to get a sense of what the PCA algorithm is extracting.If we visualize the top 10 eigenfaces, we can see that the algorithm focuses on the different shades of the faces. For instance, in face n°2 the light seems to come from the left.
###Code
for i in range(10):
plt.subplot(1, 10, i+1)
plt.imshow(pca.W_pca[:, i].reshape(64, 64))
plt.title("%.2f" % s[i])
plt.show()
# Reconstruct data with principal components
n_components = 100 # Experiment with different number of components.
X_proj = pca.transform(X_train, n_components)
X_rec = pca.reconstruct(X_proj)
print(X_rec.shape)
print(classes)
# Visualize reconstructed faces
samples_per_class = 10
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow((X_rec[idx]).reshape((64, 64)))
plt.axis('off')
if i == 0:
plt.title(y)
plt.show()
###Output
_____no_output_____
###Markdown
Written Question 1 (5 points) *Question*: Consider a dataset of $N$ face images, each with shape $(h, w)$. Then, we need $O(Nhw)$ of memory to store the data. Suppose we perform dimensionality reduction on the dataset with $p$ principal components, and use the components as bases to represent images. Calculate how much memory we need to store the images and the matrix used to get back to the original space.Said another way, how much memory does storing the compressed images **and** the uncompressor cost?*Answer:* TODO 3.5 - Reconstruction error and captured varianceWe can plot the reconstruction error with respect to the dimension of the projected space.The reconstruction gets better with more components.We can see in the plot that the inflexion point is around dimension 200 or 300. This means that using this number of components is a good compromise between good reconstruction and low dimension.
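For reference, the `reconstruct` used in the plots below can be sketched as the inverse of the projection (again illustrative; whether the mean has to be added back depends on the `PCA` implementation in `features.py`):

```python
import numpy as np

def reconstruct_sketch(X_proj, W_pca):
    """Map projected data of shape (N, k) back to the original (N, D) space."""
    k = X_proj.shape[1]
    return X_proj @ W_pca[:, :k].T
```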
###Code
# Plot reconstruction errors for different k
N = X_train.shape[0]
d = X_train.shape[1]
ns = range(1, d, 100)
errors = []
for n in ns:
X_proj = pca.transform(X_train, n)
X_rec = pca.reconstruct(X_proj)
# Compute reconstruction error
error = np.mean((X_rec - X_train) ** 2)
errors.append(error)
plt.plot(ns, errors)
plt.xlabel('Number of Components')
plt.ylabel('Reconstruction Error')
plt.show()
###Output
_____no_output_____
###Markdown
We can do the same process to see how much variance is captured by the projection.Again, we see that the inflexion point is around 200 or 300 dimensions.
###Code
# Plot captured variance
ns = range(1, d, 100)
var_cap = []
for n in ns:
var_cap.append(np.sum(s[:n] ** 2)/np.sum(s ** 2))
plt.plot(ns, var_cap)
plt.xlabel('Number of Components')
plt.ylabel('Variance Captured')
plt.show()
###Output
_____no_output_____
###Markdown
3.6 - kNN with PCAPerforming kNN on raw features (the pixels of the image) does not yield very good results. Computing the distance between images in the image space is not a very good metric for actual proximity of images. For instance, an image of person A with a dark background will be close to an image of B with a dark background, although these people are not the same.Using a technique like PCA can help discover the really interesting features, and performing kNN on them could give better accuracy.However, we observe here that PCA doesn't really help to disentangle the features and obtain useful distance metrics between the different classes. We basically obtain the same performance as with raw features.
###Code
num_test = X_test.shape[0]
# We computed the best k and n for you
best_k = 20
best_n = 500
# PCA
pca = PCA()
pca.fit(X_train)
X_proj = pca.transform(X_train, best_n)
X_test_proj = pca.transform(X_test, best_n)
# kNN
dists = compute_distances(X_test_proj, X_proj)
y_test_pred = predict_labels(dists, y_train, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
_____no_output_____
###Markdown
--- Part 4 - Fisherface: Linear Discriminant Analysis (25 points)LDA is a linear transformation method like PCA, but with a different goal. The main difference is that LDA takes information from the labels of the examples to maximize the separation of the different classes in the transformed space.Therefore, LDA is not totally **unsupervised** since it requires labels. PCA is **fully unsupervised**.In summary:- PCA preserves maximum variance in the projected space.- LDA preserves discrimination between classes in the projected space. We want to maximize scatter between classes and minimize intra-class scatter.
###Code
from features import LDA
lda = LDA()
###Output
_____no_output_____
###Markdown
4.1 - Dimensionality Reduction via PCATo apply LDA, we need $D < N$. Since in our dataset, $N = 800$ and $D = 4096$, we first need to reduce the number of dimensions of the images using PCA. More information at: http://www.scholarpedia.org/article/Fisherfaces
###Code
N = X_train.shape[0]
c = num_classes
pca = PCA()
pca.fit(X_train)
X_train_pca = pca.transform(X_train, N-c)
X_test_pca = pca.transform(X_test, N-c)
###Output
_____no_output_____
###Markdown
4.2 - Scatter matricesWe first need to compute the within-class scatter matrix:$$S_W = \sum_{i=1}^c S_i$$where $S_i = \sum_{x_k \in Y_i} (x_k - \mu_i)(x_k - \mu_i)^T$ is the scatter of class $i$.We then need to compute the between-class scatter matrix:$$S_B = \sum_{i=1}^c N_i (\mu_i - \mu)(\mu_i - \mu)^T$$where $N_i$ is the number of examples in class $i$.
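The two formulas translate almost directly into NumPy; this is only a sketch of what `_within_class_scatter` and `_between_class_scatter` could look like:

```python
import numpy as np

def scatter_matrices_sketch(X, y):
    """Return (S_W, S_B) for data X of shape (N, D) and integer labels y."""
    N, D = X.shape
    mu = X.mean(axis=0)
    S_W = np.zeros((D, D))
    S_B = np.zeros((D, D))
    for c in np.unique(y):
        X_c = X[y == c]
        mu_c = X_c.mean(axis=0)
        diff = X_c - mu_c
        S_W += diff.T @ diff                 # S_i, the scatter of class c
        d = (mu_c - mu).reshape(-1, 1)
        S_B += len(X_c) * (d @ d.T)          # N_i (mu_i - mu)(mu_i - mu)^T
    return S_W, S_B
```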
###Code
# Compute within-class scatter matrix
S_W = lda._within_class_scatter(X_train_pca, y_train)
print(S_W.shape)
# Compute between-class scatter matrix
S_B = lda._between_class_scatter(X_train_pca, y_train)
print(S_B.shape)
###Output
_____no_output_____
###Markdown
4.3 - Solving generalized Eigenvalue problemImplement methods `fit` and `transform` of the `LDA` class.
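One standard way to get the discriminant directions is to solve $S_W^{-1} S_B \, w = \lambda w$ and keep the eigenvectors with the largest eigenvalues. The sketch below uses a pseudo-inverse for numerical safety and is only one possible way to write `fit`:

```python
import numpy as np

def lda_directions_sketch(S_B, S_W):
    """Eigenvectors of pinv(S_W) @ S_B, sorted by decreasing eigenvalue (illustrative)."""
    e_vals, e_vecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(e_vals.real)[::-1]
    return e_vecs[:, order].real   # columns are the LDA projection directions
```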
###Code
lda.fit(X_train_pca, y_train)
# Dimensionality reduction by projecting the data onto
# lower dimensional subspace spanned by k principal components
n_components = 2
X_proj = lda.transform(X_train_pca, n_components)
X_test_proj = lda.transform(X_test_pca, n_components)
# Plot the top two principal components on the training set
for y in np.unique(y_train):
plt.scatter(X_proj[y_train==y, 0], X_proj[y_train==y, 1], label=classes[y])
plt.xlabel('1st component')
plt.ylabel('2nd component')
plt.legend()
plt.title("Training set")
plt.show()
# Plot the top two principal components on the test set
for y in np.unique(y_test):
plt.scatter(X_test_proj[y_test==y, 0], X_test_proj[y_test==y,1], label=classes[y])
plt.xlabel('1st component')
plt.ylabel('2nd component')
plt.legend()
plt.title("Test set")
plt.show()
###Output
_____no_output_____
###Markdown
4.4 - kNN with LDAThanks to having the information from the labels, LDA gives a discriminant space where the classes are far apart from each other. This should help kNN a lot, as the job should just be to find the obvious 10 clusters.However, as we've seen in the previous plot (section 4.3), the training data gets clustered pretty well, but the test data isn't as nicely clustered as the training data (overfitting?).Perform cross validation following the code below (you can change the values of `k_choices` and `n_choices` to search). Using the best result from cross validation, obtain the test accuracy.
###Code
num_folds = 5
X_trains, y_trains, X_vals, y_vals = split_folds(X_train, y_train, num_folds)
k_choices = [1, 5, 10, 20]
n_choices = [5, 10, 20, 50, 100, 200, 500]
pass
# n_k_to_accuracies[(n, k)] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of n and k.
from collections import defaultdict  # needed for the accuracy dictionary below

n_k_to_accuracies = defaultdict(list)
for i in range(num_folds):
# Fit PCA
pca = PCA()
pca.fit(X_trains[i])
N = len(X_trains[i])
X_train_pca = pca.transform(X_trains[i], N-c)
X_val_pca = pca.transform(X_vals[i], N-c)
# Fit LDA
lda = LDA()
lda.fit(X_train_pca, y_trains[i])
for n in n_choices:
X_train_proj = lda.transform(X_train_pca, n)
X_val_proj = lda.transform(X_val_pca, n)
dists = compute_distances(X_val_proj, X_train_proj)
for k in k_choices:
y_pred = predict_labels(dists, y_trains[i], k=k)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_pred == y_vals[i])
accuracy = float(num_correct) / len(y_vals[i])
n_k_to_accuracies[(n, k)].append(accuracy)
for n in n_choices:
print()
for k in k_choices:
accuracies = n_k_to_accuracies[(n, k)]
print("For n=%d, k=%d: average accuracy is %f" % (n, k, np.mean(accuracies)))
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 40% accuracy on the test data.
best_k = None
best_n = None
# YOUR CODE HERE
# Choose the best k based on the cross validation above
pass
# END YOUR CODE
N = len(X_train)
# Fit PCA
pca = PCA()
pca.fit(X_train)
X_train_pca = pca.transform(X_train, N-c)
X_test_pca = pca.transform(X_test, N-c)
# Fit LDA
lda = LDA()
lda.fit(X_train_pca, y_train)
# Project using LDA
X_train_proj = lda.transform(X_train_pca, best_n)
X_test_proj = lda.transform(X_test_pca, best_n)
dists = compute_distances(X_test_proj, X_train_proj)
y_test_pred = predict_labels(dists, y_train, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print("For k=%d and n=%d" % (best_k, best_n))
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
_____no_output_____ |
IEEE-CIS Fraud Detection/code/Model/Improving BaseLine IEEE Model 9551 - ka.ipynb | ###Markdown
BaseLine IEEE Model 9549
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import scipy as sp
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
# Standard plotly imports
#import plotly.plotly as py
import plotly.graph_objs as go
import plotly.tools as tls
from plotly.offline import iplot, init_notebook_mode
#import cufflinks
#import cufflinks as cf
import plotly.figure_factory as ff
import gc
from tqdm import tqdm_notebook
from tqdm import tqdm
from sklearn.preprocessing import LabelEncoder
tqdm.pandas()
## Function to reduce the DF size
def reduce_mem_usage(df, verbose=True):
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
return df
trans_cat_cols = ["ProductCD","card1","card2","card3","card4","card5","card6","addr1","addr2","P_emaildomain_bin","P_emaildomain_suffix",
"R_emaildomain_bin","R_emaildomain_suffix","M1","M2","M3","M4","M5","M6","M7","M8","M9"]
iden_cat_cols = ["DeviceType","DeviceInfo","id_12",
"id_13","id_14","id_15","id_16","id_17","id_18","id_19","id_20","id_21","id_22","id_23","id_24",
"id_25","id_26","id_27","id_28","id_29","id_30","id_31","id_32","id_33","id_34","id_35","id_36",
"id_37","id_38"]
def transform_transaction_catcols(df1, df2):
for cat_col in tqdm_notebook( trans_cat_cols):
# Get the indices for NaN values
trn_null_ind = [ind for ind, val in enumerate(df1[cat_col].isnull().values) if val == True]
ts_null_ind = [ind for ind, val in enumerate(df2[cat_col].isnull().values) if val == True]
uniq_train_cat_val , uniq_test_cat_val = set(df1[cat_col].dropna() ), set(df2[cat_col].dropna() )
common_cat_val = uniq_train_cat_val.intersection(uniq_test_cat_val)
df1.loc[ ~df1[cat_col].isin( common_cat_val), cat_col ] = -99999
df2.loc[ ~df2[cat_col].isin( common_cat_val), cat_col ] = -99999
# Replace the value for orignal NaN values
df1.loc[df1.index.isin(trn_null_ind), cat_col] = np.NaN
df2.loc[df2.index.isin(ts_null_ind), cat_col] = np.NaN
del uniq_train_cat_val, uniq_test_cat_val, common_cat_val; gc.collect()
# Reduce the usage of memory
df1 = reduce_mem_usage(df1)
df2 = reduce_mem_usage(df2)
return df1, df2
def transform_identity_catcols(df1,df2):
for cat_col in tqdm_notebook( iden_cat_cols ):
# Get the indices for NaN values
trn_null_ind = [ind for ind, val in enumerate(df1[cat_col].isnull().values) if val == True]
ts_null_ind = [ind for ind, val in enumerate(df2[cat_col].isnull().values) if val == True]
uniq_train_cat_val , uniq_test_cat_val = set(df1[cat_col].dropna() ), set(df2[cat_col].dropna() )
common_cat_val = uniq_train_cat_val.intersection(uniq_test_cat_val)
df1.loc[ ~df1[cat_col].isin( common_cat_val), cat_col ] = -99999
df2.loc[ ~df2[cat_col].isin( common_cat_val), cat_col ] = -99999
# Replace the value for orignal NaN values
df1.loc[df1.index.isin(trn_null_ind), cat_col] = np.NaN
df2.loc[df2.index.isin(ts_null_ind), cat_col] = np.NaN
del uniq_train_cat_val, uniq_test_cat_val, common_cat_val; gc.collect();
# Reduce the usage of memory
df1 = reduce_mem_usage(df1)
df2 = reduce_mem_usage(df2)
return df1, df2
###Output
_____no_output_____
###Markdown
1. Preprocessing
###Code
train_idf = pd.read_csv('./input/train_identity.csv')
train_trans = pd.read_csv('./input/train_transaction.csv')
test_idf =pd.read_csv('input/test_identity.csv')
test_trans = pd.read_csv('input/test_transaction.csv')
# Email
def email_categorical_expression(emails):
"""
Get the type of email
(1) Both "P_emaildomain" & "R_emaildomain" are None
(2) "P_emaildomain" is None, but "R_emaildomain" isn't None
(3) "P_emaildomain" isn't None, but "R_emaildomain" is None
(4) Both "P_emaildomain" & "R_emaildomain" aren't None
"""
P_emaildomain, R_emaildomain = emails
if type(P_emaildomain) == float:
if type(R_emaildomain) == float:
email_type = 1
else:
email_type = 2
else:
if type(R_emaildomain) == float:
email_type = 3
else:
email_type = 4
return email_type
def email_null_concat(emails):
"""
Get the row-wise concat of email_address
"""
temp = emails.isnull().astype(np.int8)
label= ''
for col in ['P_emaildomain','R_emaildomain']:
label += str(temp[col] ) +'_'
return label
# Implement
train_trans['email_type'] = train_trans[['P_emaildomain', 'R_emaildomain']].progress_apply(lambda x : email_categorical_expression(x) , axis=1)
train_trans['email_null_concat'] = train_trans[['P_emaildomain', 'R_emaildomain']].progress_apply(lambda x : email_null_concat(x) , axis=1)
test_trans['email_type'] = test_trans[['P_emaildomain', 'R_emaildomain']].progress_apply(lambda x : email_categorical_expression(x) , axis=1)
test_trans['email_null_concat'] = test_trans[['P_emaildomain', 'R_emaildomain']].progress_apply(lambda x : email_null_concat(x) , axis=1)
train_trans.head()
###Output
_____no_output_____
###Markdown
Email Preprocessing
###Code
# email preprocessing
emails = {'gmail': 'google', 'att.net': 'att', 'twc.com': 'spectrum', 'scranton.edu': 'other', 'optonline.net': 'other', 'hotmail.co.uk': 'microsoft', 'comcast.net': 'other', 'yahoo.com.mx': 'yahoo', 'yahoo.fr': 'yahoo', 'yahoo.es': 'yahoo', 'charter.net': 'spectrum', 'live.com': 'microsoft', 'aim.com': 'aol', 'hotmail.de': 'microsoft', 'centurylink.net': 'centurylink', 'gmail.com': 'google', 'me.com': 'apple', 'earthlink.net': 'other', 'gmx.de': 'other', 'web.de': 'other', 'cfl.rr.com': 'other', 'hotmail.com': 'microsoft', 'protonmail.com': 'other', 'hotmail.fr': 'microsoft', 'windstream.net': 'other', 'outlook.es': 'microsoft', 'yahoo.co.jp': 'yahoo', 'yahoo.de': 'yahoo', 'servicios-ta.com': 'other', 'netzero.net': 'other', 'suddenlink.net': 'other', 'roadrunner.com': 'other', 'sc.rr.com': 'other', 'live.fr': 'microsoft', 'verizon.net': 'yahoo', 'msn.com': 'microsoft', 'q.com': 'centurylink', 'prodigy.net.mx': 'att', 'frontier.com': 'yahoo', 'anonymous.com': 'other', 'rocketmail.com': 'yahoo', 'sbcglobal.net': 'att', 'frontiernet.net': 'yahoo', 'ymail.com': 'yahoo', 'outlook.com': 'microsoft', 'mail.com': 'other', 'bellsouth.net': 'other', 'embarqmail.com': 'centurylink', 'cableone.net': 'other', 'hotmail.es': 'microsoft', 'mac.com': 'apple', 'yahoo.co.uk': 'yahoo', 'netzero.com': 'other', 'yahoo.com': 'yahoo', 'live.com.mx': 'microsoft', 'ptd.net': 'other', 'cox.net': 'other', 'aol.com': 'aol', 'juno.com': 'other', 'icloud.com': 'apple'}
us_emails = ['gmail', 'net', 'edu']
emaildomain = ['P_emaildomain', 'R_emaildomain']
for c in emaildomain:
train_trans[c + '_bin'] = train_trans[c].map(emails)
test_trans[c + '_bin'] = test_trans[c].map(emails)
train_trans[c + '_suffix'] = train_trans[c].map(lambda x: str(x).split('.')[-1])
test_trans[c + '_suffix'] = test_trans[c].map(lambda x: str(x).split('.')[-1])
train_trans[c + '_suffix'] = train_trans[c + '_suffix'].map(lambda x: x if str(x) not in us_emails else 'us')
test_trans[c + '_suffix'] = test_trans[c + '_suffix'].map(lambda x: x if str(x) not in us_emails else 'us')
###Output
_____no_output_____
###Markdown
m_cols
###Code
m_cols = [c for c in list(train_trans) if 'M' == c[0]]
# Use "M_cols" information
train_m = train_trans[['TransactionID'] + m_cols]
test_m = test_trans[['TransactionID'] + m_cols]
# Combination of all "M" columns
train_m['m_comb'] = ''
test_m['m_comb'] = ''
for col in m_cols:
train_m['m_comb'] += train_m[col].astype(np.str) +'_'
test_m['m_comb'] += test_m[col].astype(np.str) +'_'
# If the combination is not in the common value, replace those into "Unknown"
unique_trn_m_comb = np.unique( train_m['m_comb'] )
unique_ts_m_comb = np.unique( test_m['m_comb'] )
common_m_comb = np.intersect1d( unique_trn_m_comb , unique_ts_m_comb )
train_m.loc[~train_m['m_comb'].isin(common_m_comb), 'm_comb'] = 'Unknown'
test_m.loc[~test_m['m_comb'].isin(common_m_comb), 'm_comb'] = 'Unknown'
# Sum of the null value for all "M" columns & "# of True value"
train_m['m_null_sum'] = train_m[m_cols].isnull().sum(axis=1)
train_m['m_T_sum'] = (train_m[m_cols]=='T').sum(axis=1)
test_m['m_null_sum'] = test_m[m_cols].isnull().sum(axis=1)
test_m['m_T_sum'] = (test_m[m_cols]=='T').sum(axis=1)
# Label Encoding columns related with 'M':
# 'm_comb' + m_cols
lbl = LabelEncoder()
for col in tqdm_notebook( m_cols + ['m_comb'] ):
lbl.fit( train_m[col].fillna('Unknown') )
train_m[col] = lbl.transform( train_m[col].fillna('Unknown') ).astype(np.int8)
test_m[col] = lbl.transform( test_m[col].fillna('Unknown') ).astype(np.int8)
train_m = train_m[['TransactionID', 'm_comb','m_null_sum','m_T_sum']]
test_m = test_m[['TransactionID', 'm_comb','m_null_sum','m_T_sum']]
train_trans = train_trans.merge(train_m, on ='TransactionID', how='left')
test_trans = test_trans.merge(test_m, on ='TransactionID', how='left')
###Output
_____no_output_____
###Markdown
2. Feature Engineering Date
###Code
# Build a datetime column (timeblock) from TransactionDT
import datetime
start_date = datetime.datetime.strptime('2017.11.30', '%Y.%m.%d')
train_trans['timeblock'] = train_trans['TransactionDT'].apply(lambda x: datetime.timedelta(seconds = x) + start_date )
test_trans['timeblock'] = test_trans['TransactionDT'].apply(lambda x: datetime.timedelta(seconds = x) + start_date )
tb = train_trans['timeblock']
train_trans.drop('timeblock', 1, inplace=True)
train_trans.insert(0, 'timeblock', tb)
tb = test_trans['timeblock']
test_trans.drop('timeblock', 1, inplace=True)
test_trans.insert(0, 'timeblock', tb)
# "가입일로부터의 시간"(D8)을 통해 "가입일"을 만드는 코드.
def account_start_date(val):
if np.isnan(val) :
return np.NaN
else:
days= int( str(val).split('.')[0])
return pd.Timedelta( str(days) +' days')
for i in ['D1', 'D2', 'D4', 'D8','D10', 'D15']:
train_trans['account_start_day'] = train_trans[i].apply(account_start_date)
test_trans['account_start_day'] = test_trans[i].apply(account_start_date)
    # Convert account_make_date to a numeric YYYYMMDD value so the model can use it
train_trans['account_make_date'] = (train_trans['timeblock'] - train_trans['account_start_day']).dt.date
test_trans['account_make_date'] = (test_trans['timeblock'] - test_trans['account_start_day']).dt.date
train_trans['account_make_date_{}'.format(i)] = (10000 * pd.to_datetime(train_trans['account_make_date']).dt.year) + (100 * pd.to_datetime(train_trans['account_make_date']).dt.month) + (1 * pd.to_datetime(train_trans['account_make_date']).dt.day)
test_trans['account_make_date_{}'.format(i)] = (10000 * pd.to_datetime(test_trans['account_make_date']).dt.year) + (100 * pd.to_datetime(test_trans['account_make_date']).dt.month) + (1 * pd.to_datetime(test_trans['account_make_date']).dt.day)
del train_trans['account_make_date']; del test_trans['account_make_date']
del train_trans['account_start_day']; del test_trans['account_start_day']
train_trans['date'] = pd.to_datetime(train_trans['timeblock']).dt.date
test_trans['date'] = pd.to_datetime(test_trans['timeblock']).dt.date
train_trans['year'] = train_trans['timeblock'].dt.year
train_trans['month'] = train_trans['timeblock'].dt.month
train_trans['day'] = train_trans['timeblock'].dt.day
train_trans['dayofweek'] = train_trans['timeblock'].dt.dayofweek
train_trans['hour'] = train_trans['timeblock'].dt.hour
# train_trans['minute'] = train_trans['timeblock'].dt.minute
# train_trans['second'] = train_trans['timeblock'].dt.second
test_trans['year'] = test_trans['timeblock'].dt.year
test_trans['month'] = test_trans['timeblock'].dt.month
test_trans['day'] = test_trans['timeblock'].dt.day
test_trans['dayofweek'] = test_trans['timeblock'].dt.dayofweek
test_trans['hour'] = test_trans['timeblock'].dt.hour
# test_trans['minute'] = test_trans['timeblock'].dt.minute
# test_trans['second'] = test_trans['timeblock'].dt.second
###Output
_____no_output_____
###Markdown
Decimal part of TransactionAmt
###Code
train_trans['TransactionAmt_decimal_count'] = ((train_trans['TransactionAmt'] - train_trans['TransactionAmt'].astype(int))).astype(str).apply(lambda x: len(x.split('.')[1]))
test_trans['TransactionAmt_decimal_count'] = ((test_trans['TransactionAmt'] - test_trans['TransactionAmt'].astype(int))).astype(str).apply(lambda x: len(x.split('.')[1]))
train_trans['TransactionAmt_decimal'] = ((train_trans['TransactionAmt'] - train_trans['TransactionAmt'].astype(int)) * 1000).astype(int)
test_trans['TransactionAmt_decimal'] = ((test_trans['TransactionAmt'] - test_trans['TransactionAmt'].astype(int)) * 1000).astype(int)
###Output
_____no_output_____
###Markdown
Count Encoding
###Code
categorical_variables_trans = ["ProductCD","card1","card2","card3","card4","card5","card6","addr1","addr2","P_emaildomain","R_emaildomain","P_emaildomain_bin",
"R_emaildomain_bin","M1","M2","M3","M4","M5","M6","M7","M8","M9",'email_null_concat']
categorical_variables_idf = ["DeviceType","DeviceInfo","id_12",
"id_13","id_14","id_15","id_16","id_17","id_18","id_19","id_20","id_21","id_22","id_23","id_24",
"id_25","id_26","id_27","id_28","id_29","id_30","id_31","id_32","id_33","id_34","id_35","id_36",
"id_37","id_38"]
for i in tqdm_notebook(categorical_variables_trans):
train_trans['{}_count_full'.format(i)] = train_trans[i].map(pd.concat([train_trans[i], test_trans[i]], ignore_index=True).value_counts(dropna=False))
test_trans['{}_count_full'.format(i)] = test_trans[i].map(pd.concat([train_trans[i], test_trans[i]], ignore_index=True).value_counts(dropna=False))
for i in tqdm_notebook(categorical_variables_idf):
train_idf['{}_count_full'.format(i)] = train_idf[i].map(pd.concat([train_idf[i], test_idf[i]], ignore_index=True).value_counts(dropna=False))
test_idf['{}_count_full'.format(i)] = test_idf[i].map(pd.concat([train_idf[i], test_idf[i]], ignore_index=True).value_counts(dropna=False))
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
train_trans, test_trans = transform_transaction_catcols(train_trans, test_trans)
train_idf, test_idf = transform_identity_catcols(train_idf, test_idf)
total_trans = pd.concat([train_trans,test_trans],axis=0,sort=False)
D_columns = [c for c in train_trans.columns if (c[0] == 'D')]
D_columns.remove('D1'); D_columns.remove('D2'); D_columns.remove('D9')
for i in tqdm_notebook(D_columns):
total_trans_size = total_trans.groupby(['year','month'])[i].agg({'mean','std'}).reset_index()
train_trans = train_trans.merge(total_trans_size,how='left',on=['year','month'])
test_trans = test_trans.merge(total_trans_size,how='left',on=['year','month'])
train_trans[i] = (train_trans[i] - train_trans['mean'])/ train_trans['std']
test_trans[i] = (test_trans[i] - test_trans['mean'])/ test_trans['std']
del train_trans['mean']; del test_trans['mean']; del train_trans['std']; del test_trans['std']
###Output
_____no_output_____
###Markdown
Combining categorical features
###Code
train_trans['card1_addr1'] = train_trans['card1'].astype(str) + '_' + train_trans['addr1'].astype(str)
test_trans['card1_addr1'] = test_trans['card1'].astype(str) + '_' + test_trans['addr1'].astype(str)
train_trans['card1_addr2'] = train_trans['card1'].astype(str) + '_' + train_trans['addr2'].astype(str)
test_trans['card1_addr2'] = test_trans['card1'].astype(str) + '_' + test_trans['addr2'].astype(str)
train_trans['card1_ProductCD'] = train_trans['card1'].astype(str) + '_' + train_trans['ProductCD'].astype(str)
test_trans['card1_ProductCD'] = test_trans['card1'].astype(str) + '_' + test_trans['ProductCD'].astype(str)
train_trans['TransactionAmt_ProductCD'] = train_trans['TransactionAmt'].astype(str) + '_' + train_trans['ProductCD'].astype(str)
test_trans['TransactionAmt_ProductCD'] = test_trans['TransactionAmt'].astype(str) + '_' + test_trans['ProductCD'].astype(str)
train_trans['addr1_addr2'] = train_trans['addr1'].astype(str) + '_' + train_trans['addr2'].astype(str)
test_trans['addr1_addr2'] = test_trans['addr1'].astype(str) + '_' + test_trans['addr2'].astype(str)
###Output
_____no_output_____
###Markdown
Count Encoding of the combined categories
###Code
categorical_variables_trans = ["card1_addr1", "card1_addr2", "card1_ProductCD",'TransactionAmt_ProductCD','addr1_addr2']
categorical_variables_idf = []
for i in tqdm_notebook(categorical_variables_trans):
train_trans['{}_count_full'.format(i)] = train_trans[i].map(pd.concat([train_trans[i], test_trans[i]], ignore_index=True).value_counts(dropna=False))
test_trans['{}_count_full'.format(i)] = test_trans[i].map(pd.concat([train_trans[i], test_trans[i]], ignore_index=True).value_counts(dropna=False))
###Output
_____no_output_____
###Markdown
Identical TransactionAmt purchased with the same card1 on the same day
###Code
train_trans_Amt = pd.DataFrame(train_trans.groupby(['date','card1','TransactionAmt'])['TransactionAmt'].agg({'count'})).reset_index()
test_trans_Amt = pd.DataFrame(test_trans.groupby(['date','card1','TransactionAmt'])['TransactionAmt'].agg({'count'})).reset_index()
train_trans_Amt1 = pd.DataFrame(train_trans.groupby(['date','card3','addr1','TransactionAmt'])['TransactionAmt'].agg({'count'})).reset_index()
test_trans_Amt1 = pd.DataFrame(test_trans.groupby(['date','card3','addr1','TransactionAmt'])['TransactionAmt'].agg({'count'})).reset_index()
###Output
_____no_output_____
###Markdown
Merge
###Code
# Data Merge
train_df = pd.merge(train_trans,train_idf,how='left',on='TransactionID')
test_df = pd.merge(test_trans,test_idf,how='left',on='TransactionID')
###Output
_____no_output_____
###Markdown
Prev_click , Next_click , Prev_Amt , Next_Amt with id info.
###Code
# ['id_30','id_31','id_33','DeviceType','DeviceInfo']
train_df['id_30_31_33_Type_Info_prev_click'] = train_df['TransactionDT'] - train_df.groupby(['id_30','id_31','id_33','DeviceType','DeviceInfo'])['TransactionDT'].shift(1)
test_df['id_30_31_33_Type_Info_prev_click'] = test_df['TransactionDT'] - test_df.groupby(['id_30','id_31','id_33','DeviceType','DeviceInfo'])['TransactionDT'].shift(1)
train_df['id_30_31_33_Type_Info_next_click'] = train_df['TransactionDT'] - train_df.groupby(['id_30','id_31','id_33','DeviceType','DeviceInfo'])['TransactionDT'].shift(-1)
test_df['id_30_31_33_Type_Info_next_click'] = test_df['TransactionDT'] - test_df.groupby(['id_30','id_31','id_33','DeviceType','DeviceInfo'])['TransactionDT'].shift(-1)
###Output
_____no_output_____
###Markdown
Merge
###Code
# Data Merge
train_df = pd.merge(train_df,train_trans_Amt,how='left',on=['date','card1','TransactionAmt'])
test_df = pd.merge(test_df,test_trans_Amt,how='left',on=['date','card1','TransactionAmt'])
# Data Merge
train_df = pd.merge(train_df,train_trans_Amt1,how='left',on=['date','card3','addr1','TransactionAmt'])
test_df = pd.merge(test_df,test_trans_Amt1,how='left',on=['date','card3','addr1','TransactionAmt'])
###Output
_____no_output_____
###Markdown
Features from kyakovlev's kernels
###Code
# Score comparison
# https://www.kaggle.com/kyakovlev/ieee-gb-2-make-amount-useful-again
train_df['uid'] = train_df['card1'].astype(str)+'_'+train_df['card2'].astype(str)+'_'+train_df['card3'].astype(str)+'_'+train_df['card4'].astype(str)
test_df['uid'] = test_df['card1'].astype(str)+'_'+test_df['card2'].astype(str)+'_'+test_df['card3'].astype(str)+'_'+test_df['card4'].astype(str)
train_df['uid2'] = train_df['uid'].astype(str)+'_'+train_df['addr1'].astype(str)+'_'+train_df['addr2'].astype(str)
test_df['uid2'] = test_df['uid'].astype(str)+'_'+test_df['addr1'].astype(str)+'_'+test_df['addr2'].astype(str)
i_cols = ['card1','card2','card3','card5','uid','uid2']
for col in i_cols:
for agg_type in ['mean', 'std', 'nunique']:
new_col_name = col+'_TransactionAmt_'+agg_type
temp_df = pd.concat([train_df[[col, 'TransactionAmt']], test_df[[col,'TransactionAmt']]])
temp_df = temp_df.groupby([col])['TransactionAmt'].agg([agg_type]).reset_index().rename(
columns={agg_type: new_col_name})
temp_df.index = list(temp_df[col])
temp_df = temp_df[new_col_name].to_dict()
train_df[new_col_name] = train_df[col].map(temp_df)
test_df[new_col_name] = test_df[col].map(temp_df)
########################### Anomaly Search in geo information
# Let's look on bank addres and client addres matching
# card3/card5 bank country and name?
# Addr2 -> Clients geo position (country)
# Most common entries -> normal transactions
# Less common etries -> some anonaly
train_df['bank_type'] = train_df['card3'].astype(str)+'_'+train_df['card5'].astype(str)
test_df['bank_type'] = test_df['card3'].astype(str)+'_'+test_df['card5'].astype(str)
train_df['address_match'] = train_df['bank_type'].astype(str)+'_'+train_df['addr2'].astype(str)
test_df['address_match'] = test_df['bank_type'].astype(str)+'_'+test_df['addr2'].astype(str)
for col in ['address_match','bank_type']:
temp_df = pd.concat([train_df[[col]], test_df[[col]]])
temp_df[col] = np.where(temp_df[col].str.contains('nan'), np.nan, temp_df[col])
temp_df = temp_df.dropna()
fq_encode = temp_df[col].value_counts().to_dict()
train_df[col] = train_df[col].map(fq_encode)
test_df[col] = test_df[col].map(fq_encode)
train_df['address_match'] = train_df['address_match']/train_df['bank_type']
test_df['address_match'] = test_df['address_match']/test_df['bank_type']
###Output
_____no_output_____
###Markdown
Aggregate
###Code
i_cols = ['uid','uid2', "card1_addr2", "card1_ProductCD"]
for col in i_cols:
for agg_type in ['median']:
new_col_name = col+'_hour_'+agg_type
temp_df = pd.concat([train_df[[col, 'hour']], test_df[[col,'hour']]])
temp_df = temp_df.groupby([col])['hour'].agg([agg_type]).reset_index().rename(
columns={agg_type: new_col_name})
temp_df.index = list(temp_df[col])
temp_df = temp_df[new_col_name].to_dict()
train_df[new_col_name] = train_df[col].map(temp_df)
test_df[new_col_name] = test_df[col].map(temp_df)
###Output
_____no_output_____
###Markdown
prev / next click
###Code
# train_df['uid2_prev_click'] = train_df['TransactionDT'] - train_df.groupby(['uid2'])['TransactionDT'].shift(1)
# test_df['uid2_prev_click'] = test_df['TransactionDT'] - test_df.groupby(['uid2'])['TransactionDT'].shift(1)
total_df = pd.concat([train_df,test_df],axis=0,sort=False)
train_df['uid2_next_click'] = train_df['TransactionDT'] - train_df.groupby(['uid2'])['TransactionDT'].shift(-1)
test_df['uid2_next_click'] = test_df['TransactionDT'] - test_df.groupby(['uid2'])['TransactionDT'].shift(-1)
del train_df['uid']; del train_df['uid2']; del train_df['bank_type']
del test_df['uid']; del test_df['uid2']; del test_df['bank_type']
train_df = train_df.merge(total_df.groupby(['card1','account_make_date_D1','ProductCD'])['TransactionAmt'].agg({'mean','std'}).reset_index().rename(columns={'mean':'card1_D1_productCD_Amt_mean','std':'card1_D1_productCD_Amt_std'}), how='left', on = ['card1','account_make_date_D1','ProductCD'])
test_df = test_df.merge(total_df.groupby(['card1','account_make_date_D1','ProductCD'])['TransactionAmt'].agg({'mean','std'}).reset_index().rename(columns={'mean':'card1_D1_productCD_Amt_mean','std':'card1_D1_productCD_Amt_std'}), how='left', on = ['card1','account_make_date_D1','ProductCD'])
train_df = train_df.merge(total_df.groupby(['card1','card2','card3','card4','addr1','addr2','ProductCD'])['dayofweek'].agg({'mean','std'}).reset_index().rename(columns={'mean':'uid2_dayofweek_mean','std':'uid2_dayofweek_std'}), how='left', on = ['card1','card2','card3','card4','addr1','addr2','ProductCD'])
test_df = test_df.merge(total_df.groupby(['card1','card2','card3','card4','addr1','addr2','ProductCD'])['dayofweek'].agg({'mean','std'}).reset_index().rename(columns={'mean':'uid2_dayofweek_mean','std':'uid2_dayofweek_std'}), how='left', on = ['card1','card2','card3','card4','addr1','addr2','ProductCD'])
###Output
_____no_output_____
###Markdown
D1, ProductCD add featuresExcluding card1, create features from the D1-derived account_make_date and ProductCD only.
###Code
train_df_D1_ProductCD_Amt = pd.DataFrame(train_df.groupby(['date','account_make_date_D1','ProductCD'])['TransactionAmt'].agg({'count'})).reset_index()
test_df_D1_ProductCD_Amt = pd.DataFrame(test_df.groupby(['date','account_make_date_D1','ProductCD'])['TransactionAmt'].agg({'count'})).reset_index()
train_df_D1_ProductCD_Amt.columns = ['date','account_make_date_D1','ProductCD', 'ProductCD_D1_Amt_byDate']
test_df_D1_ProductCD_Amt.columns = ['date','account_make_date_D1','ProductCD','ProductCD_D1_Amt_byDate']
# Data Merge
train_df = pd.merge(train_df,train_df_D1_ProductCD_Amt,how='left',on=['date','account_make_date_D1','ProductCD'])
test_df = pd.merge(test_df,test_df_D1_ProductCD_Amt,how='left',on=['date','account_make_date_D1','ProductCD'])
train_df = train_df.merge(total_df.groupby(['account_make_date_D1','hour','ProductCD'])['TransactionAmt'].agg({'mean','std'}).reset_index().rename(columns={'mean':'D1_productCD_hour_Amt_mean','std':'D1_productCD_hour_Amt_std'}), how='left', on = ['account_make_date_D1','hour','ProductCD'])
test_df = test_df.merge(total_df.groupby(['account_make_date_D1','hour','ProductCD'])['TransactionAmt'].agg({'mean','std'}).reset_index().rename(columns={'mean':'D1_productCD_hour_Amt_mean','std':'D1_productCD_hour_Amt_std'}), how='left', on = ['account_make_date_D1','hour','ProductCD'])
###Output
_____no_output_____
###Markdown
D add featuresD6, D7, D8, D13 and D14 have no ProductCD 'W' transactions among their non-null values, and fraud tends to increase when they are null, so we combine these columns into aggregate features.
###Code
train_df['D_sum'] = train_df[['D6', 'D7', 'D8', 'D13', 'D14']].sum(axis = 1)
train_df['D_mean'] = train_df[['D6', 'D7', 'D8', 'D13', 'D14']].mean(axis = 1)
train_df['D_std'] = train_df[['D6', 'D7', 'D8', 'D13', 'D14']].std(axis = 1)
train_df['D_min'] = train_df[['D6', 'D7', 'D8', 'D13', 'D14']].min(axis = 1)
train_df['D_max'] = train_df[['D6', 'D7', 'D8', 'D13', 'D14']].max(axis = 1)
train_df['D_na_counts'] = train_df[['D6', 'D7', 'D8', 'D13', 'D14']].isna().sum(axis = 1)
test_df['D_sum'] = test_df[['D6', 'D7', 'D8', 'D13', 'D14']].sum(axis = 1)
test_df['D_mean'] = test_df[['D6', 'D7', 'D8', 'D13', 'D14']].mean(axis = 1)
test_df['D_std'] = test_df[['D6', 'D7', 'D8', 'D13', 'D14']].std(axis = 1)
test_df['D_min'] = test_df[['D6', 'D7', 'D8', 'D13', 'D14']].min(axis = 1)
test_df['D_max'] = test_df[['D6', 'D7', 'D8', 'D13', 'D14']].max(axis = 1)
test_df['D_na_counts'] = test_df[['D6', 'D7', 'D8', 'D13', 'D14']].isna().sum(axis = 1)
###Output
C:\Users\User\Anaconda3\lib\site-packages\pandas\core\nanops.py:121: RuntimeWarning:
Mean of empty slice
C:\Users\User\Anaconda3\lib\site-packages\pandas\core\nanops.py:121: RuntimeWarning:
Mean of empty slice
###Markdown
TransactionAmt-D1make-ProductCD counts
###Code
train_df['same_Product_po'] = train_df['account_make_date_D1'].astype('str') + train_df['ProductCD'] + train_df['TransactionAmt'].astype('str')
test_df['same_Product_po'] = test_df['account_make_date_D1'].astype('str') + test_df['ProductCD'] + test_df['TransactionAmt'].astype('str')
df = train_df['same_Product_po'].append(test_df['same_Product_po'])
df = df.value_counts().reset_index()
df.columns = ['same_Product_po', "same_Product_po_cnt"]
df.head()
train_df = train_df.merge(df, on = 'same_Product_po', how = 'left')
test_df = test_df.merge(df, on = 'same_Product_po', how = 'left')
df = train_df[['same_Product_po','date']].append(test_df[['same_Product_po','date']])
df = df.groupby(['same_Product_po','date']).size().reset_index()
df.columns = ['same_Product_po','date', "same_Product_po_cnt_bydate"]
df.head()
train_df = train_df.merge(df, on = ['same_Product_po','date'], how = 'left')
test_df = test_df.merge(df, on = ['same_Product_po','date'], how = 'left')
###Output
_____no_output_____
###Markdown
Count Encoding
###Code
# LB 9543
train_df['card1_account_make_date_D15'] = train_df['card1'].astype(str) + '_' + train_df['account_make_date_D15'].astype(str)
test_df['card1_account_make_date_D15'] = test_df['card1'].astype(str) + '_' + test_df['account_make_date_D15'].astype(str)
train_df['card1_account_make_date_D2'] = train_df['card1'].astype(str) + '_' + train_df['account_make_date_D2'].astype(str)
test_df['card1_account_make_date_D2'] = test_df['card1'].astype(str) + '_' + test_df['account_make_date_D2'].astype(str)
train_df['card1_account_make_date_D10'] = train_df['card1'].astype(str) + '_' + train_df['account_make_date_D10'].astype(str)
test_df['card1_account_make_date_D10'] = test_df['card1'].astype(str) + '_' + test_df['account_make_date_D10'].astype(str)
for i in ['card1_account_make_date_D15', 'card1_account_make_date_D2', 'card1_account_make_date_D10','account_make_date_D1']:
train_df['{}_count_full'.format(i)] = train_df[i].map(pd.concat([train_df[i], test_df[i]], ignore_index=True).value_counts(dropna=False))
test_df['{}_count_full'.format(i)] = test_df[i].map(pd.concat([train_df[i], test_df[i]], ignore_index=True).value_counts(dropna=False))
del train_df['card1_account_make_date_D15']; del test_df['card1_account_make_date_D15']
del train_df['card1_account_make_date_D2']; del test_df['card1_account_make_date_D2']
del train_df['card1_account_make_date_D10']; del test_df['card1_account_make_date_D10']
###Output
_____no_output_____
###Markdown
for i in ['account_make_date_D1']: train_df['{}_count_full'.format(i)] = train_df[i].map(pd.concat([train_df[i], test_df[i]], ignore_index=True).value_counts(dropna=False)) test_df['{}_count_full'.format(i)] = test_df[i].map(pd.concat([train_df[i], test_df[i]], ignore_index=True).value_counts(dropna=False)) train_df = train_df.merge(train_df.groupby(['date','hour'])['TransactionID'].agg({'count'}).reset_index().rename(columns = {'count':'TransactionPerHour'}),how='left',on=['date','hour'])test_df = test_df.merge(test_df.groupby(['date','hour'])['TransactionID'].agg({'count'}).reset_index().rename(columns = {'count':'TransactionPerHour'}),how='left',on=['date','hour'])train_df = train_df.merge(train_df.groupby(['hour'])['TransactionID'].agg({'count'}).reset_index().rename(columns = {'count':'Transactionhourcount'}),how='left',on=['hour'])test_df = test_df.merge(test_df.groupby(['hour'])['TransactionID'].agg({'count'}).reset_index().rename(columns = {'count':'Transactionhourcount'}),how='left',on=['hour'])train_df['TransactionPerHour'] = train_df['TransactionPerHour'] / train_df['Transactionhourcount']test_df['TransactionPerHour'] = test_df['TransactionPerHour'] / test_df['Transactionhourcount']
###Code
### Label Encoding
###Output
_____no_output_____
###Markdown
from sklearn.preprocessing import LabelEncoderfor col in tqdm_notebook(train_df.columns): if train_df[col].dtype == 'object': le = LabelEncoder() le.fit(list(train_df[col].astype(str).values) + list(test_df[col].astype(str).values)) train_df[col] = le.transform(list(train_df[col].astype(str).values)) test_df[col] = le.transform(list(test_df[col].astype(str).values))
###Code
## Feature selection : LightGBM - Adversarial Validation
###Output
_____no_output_____
###Markdown
from sklearn.model_selection import KFold, StratifiedKFold, TimeSeriesSplitfrom sklearn.metrics import roc_auc_scoreimport lightgbm as lgb features = [c for c in train_df.columns if c not in ['TransactionID', 'isFraud','TransactionDT','timeblock','account_start_day', 'date' , 'year', 'month', 'target', 'day','account_make_date_D11', 'account_make_date_D3', 'account_make_date_D5', 'account_make_date_D4' , 'account_make_date_D8', 'account_make_date_D14', 'account_make_date_D6', 'account_make_date_D12', 'account_make_date_D7' , 'card_1_2_3_5_nunique', 'card_1_2_3_5_prev_click', 'card_1_2_3_5_next_click', 'card_1_3_TransactionAmt_prev_click', 'card_1_3_TransactionAmt_next_click', 'account_make_date' , 'poten_card1_nunique_D5', 'poten_card1_nunique_D11','poten_card1_nunique_D6', 'poten_card1_nunique_D3','poten_card1_nunique_D7','poten_card1_nunique_D12','poten_card1_nunique_D8','poten_card1_nunique_D4','poten_card1_nunique_D14' , 'id_13', 'id_31', 'id_13_count_full', 'id_31_count_full','ProductCD', 'card3', 'card4', 'card5', 'card6', 'M1', 'M2', 'M3', 'M4', 'M5', 'M7', 'M8', 'M9', 'P_emaildomain_bin', 'P_emaildomain_suffix', 'R_emaildomain_bin', 'R_emaildomain_suffix', 'account_make_date', 'account_make_date_D3', 'account_make_date_D4', 'account_make_date_D7', 'account_make_date_D8', 'account_make_date_D11', 'account_make_date_D12', 'account_make_date_D14', 'dayofweek', 'hour', 'card1_addr1', 'card1_ProductCD', 'count_x', 'count_y', 'D15', "card1_TransactionAmt_mean", 'card1_addr1hourstd','card1_addr1hourmedian','uid_hour_std','uid2_hour_std','card1_ProductCD_hour_std','card1_addr2_hour_std', 'card1_TransactionAmt_nunique','card2_TransactionAmt_nunique','card3_TransactionAmt_nunique','card5_TransactionAmt_nunique','uid_TransactionAmt_nunique', 'uid_hour_nunique','uid2_hour_nunique','card1_addr2_hour_nunique','card1_ProductCD_hour_nunique','account_make_date_D1','card1_year_month_mean','uid2_D4_mean','uid2_dayofweek_std','DT_M','Transactionhourcount']] train = train_df.copy()test = test_df.copy() from sklearn import model_selection, preprocessing, metricstrain['target'] = 0test['target'] = 1train_test = pd.concat([train, test], axis =0)target = train_test['target'].valuestrain, test = model_selection.train_test_split(train_test, test_size=0.33, random_state=42, shuffle=True)train_y = train['target'].valuestest_y = test['target'].valuesdel train['target'], test['target']gc.collect()train = lgb.Dataset(train[features], label=train_y)test = lgb.Dataset(test[features], label=test_y) 문제점 파라미터에 따라서 아래와 결과가 달라짐. 
params = {'num_leaves': 491, 'min_child_weight': 0.03454472573214212, 'feature_fraction': 0.3797454081646243, 'bagging_fraction': 0.4181193142567742, 'min_data_in_leaf': 106, 'objective': 'binary', 'max_depth': -1, 'learning_rate': 0.006883242363721497, "boosting_type": "gbdt", "bagging_seed": 11, "metric": 'auc', "verbosity": -1, 'reg_alpha': 0.3899927210061127, 'reg_lambda': 0.6485237330340494, 'random_state': 47 } num_round = 50clf = lgb.train(params, train, num_round, valid_sets = [train, test], verbose_eval=50, early_stopping_rounds = 50) feature_imp = pd.DataFrame(sorted(zip(clf.feature_importance(),features)), columns=['Value','Feature'])plt.figure(figsize=(20, 10))sns.barplot(x="Value", y="Feature", data=feature_imp.sort_values(by="Value", ascending=False).head(20))plt.title('LightGBM Features')plt.tight_layout()plt.show()plt.savefig('lgbm_importances-01.png') feature_imp.sort_values(by='Value',ascending=False) feature_imp.sort_values(by='Value',ascending=False).to_csv("importance.csv",index=False)
###Code
# [41]  training's auc: 0.901351    valid_1's auc: 0.897481
###Output
_____no_output_____
###Markdown
RandomForestClassifier - Covariate Shift LightGBM - 8 : 2 Split
###Code
from sklearn.model_selection import KFold, StratifiedKFold, TimeSeriesSplit
from sklearn.metrics import roc_auc_score
import lightgbm as lgb
params = {'num_leaves': 491,
'min_child_weight': 0.03454472573214212,
'feature_fraction': 0.3797454081646243,
'bagging_fraction': 0.4181193142567742,
'min_data_in_leaf': 106,
'objective': 'binary',
'max_depth': -1,
'learning_rate': 0.006883242363721497,
"boosting_type": "gbdt",
"bagging_seed": 11,
"metric": 'auc',
"verbosity": -1,
'reg_alpha': 0.3899927210061127,
'reg_lambda': 0.6485237330340494,
'random_state': 47
}
train_df['DT_M'] = (train_df['year']-2017)*12 + train_df['month']
test_df['DT_M'] = (test_df['year']-2017)*12 + test_df['month']
y = train_df['isFraud']
X = train_df[features].reset_index(drop=True)
test = test_df[features].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Group K fold
###Code
from time import time
from sklearn.model_selection import KFold, StratifiedKFold, TimeSeriesSplit, GroupKFold
NFOLD = 5
folds = GroupKFold(n_splits=NFOLD)
split_groups = train_df['DT_M']
aucs = list()
feature_importances = pd.DataFrame()
feature_importances['feature'] = X.columns
oofs = np.zeros(len(train_df))
preds = np.zeros(len(test_df))
training_start_time = time()
for fold, (trn_idx, test_idx) in enumerate(folds.split(X, y, groups = split_groups)):
start_time = time()
print('Training on fold {}'.format(fold + 1))
trn_data = lgb.Dataset(X.iloc[trn_idx], label=y.iloc[trn_idx])
val_data = lgb.Dataset(X.iloc[test_idx], label=y.iloc[test_idx])
clf = lgb.train(params, trn_data, 10000, valid_sets = [trn_data, val_data], verbose_eval=500, early_stopping_rounds=100)
oofs[test_idx] = clf.predict(X.iloc[test_idx])
preds += clf.predict(test)/NFOLD
feature_importances['fold_{}'.format(fold + 1)] = clf.feature_importance()
aucs.append(clf.best_score['valid_1']['auc'])
print('Fold {} finished in {}'.format(fold + 1, str(datetime.timedelta(seconds=time() - start_time))))
print('-' * 30)
print('Training has finished.')
print('Total training time is {}'.format(str(datetime.timedelta(seconds=time() - training_start_time))))
print('Mean AUC:', np.mean(aucs))
print("Total Validation: ", roc_auc_score(y, oofs))
print('-' * 30)
###Output
Training on fold 1
Training until validation scores don't improve for 100 rounds.
[100] training's auc: 0.956405 valid_1's auc: 0.879919
[200] training's auc: 0.968641 valid_1's auc: 0.887431
[300] training's auc: 0.978915 valid_1's auc: 0.895578
[400] training's auc: 0.986536 valid_1's auc: 0.900859
[500] training's auc: 0.991249 valid_1's auc: 0.905234
[600] training's auc: 0.994498 valid_1's auc: 0.908132
[700] training's auc: 0.996714 valid_1's auc: 0.910416
[800] training's auc: 0.998114 valid_1's auc: 0.912357
[900] training's auc: 0.998929 valid_1's auc: 0.91402
[1000] training's auc: 0.999412 valid_1's auc: 0.915384
[1100] training's auc: 0.999681 valid_1's auc: 0.916268
[1200] training's auc: 0.999831 valid_1's auc: 0.917063
[1300] training's auc: 0.999913 valid_1's auc: 0.917482
[1400] training's auc: 0.999956 valid_1's auc: 0.917966
[1500] training's auc: 0.999979 valid_1's auc: 0.918373
[1600] training's auc: 0.999991 valid_1's auc: 0.918973
[1700] training's auc: 0.999996 valid_1's auc: 0.919327
[1800] training's auc: 0.999999 valid_1's auc: 0.91952
[1900] training's auc: 0.999999 valid_1's auc: 0.919697
[2000] training's auc: 1 valid_1's auc: 0.919996
[2100] training's auc: 1 valid_1's auc: 0.920212
[2200] training's auc: 1 valid_1's auc: 0.920524
[2300] training's auc: 1 valid_1's auc: 0.920573
[2400] training's auc: 1 valid_1's auc: 0.920629
[2500] training's auc: 1 valid_1's auc: 0.920578
Early stopping, best iteration is:
[2428] training's auc: 1 valid_1's auc: 0.920681
Fold 1 finished in 0:24:14.074270
Training on fold 2
Training until validation scores don't improve for 100 rounds.
[100] training's auc: 0.955105 valid_1's auc: 0.916595
[200] training's auc: 0.967888 valid_1's auc: 0.925044
[300] training's auc: 0.978111 valid_1's auc: 0.931921
[400] training's auc: 0.985909 valid_1's auc: 0.93758
[500] training's auc: 0.991082 valid_1's auc: 0.941581
[600] training's auc: 0.99445 valid_1's auc: 0.944688
[700] training's auc: 0.996653 valid_1's auc: 0.94668
[800] training's auc: 0.998089 valid_1's auc: 0.947999
[900] training's auc: 0.998954 valid_1's auc: 0.948887
[1000] training's auc: 0.999435 valid_1's auc: 0.949671
[1100] training's auc: 0.999703 valid_1's auc: 0.950312
[1200] training's auc: 0.999847 valid_1's auc: 0.950714
[1300] training's auc: 0.999923 valid_1's auc: 0.951222
[1400] training's auc: 0.999962 valid_1's auc: 0.951603
[1500] training's auc: 0.999982 valid_1's auc: 0.951884
[1600] training's auc: 0.999992 valid_1's auc: 0.952037
[1700] training's auc: 0.999997 valid_1's auc: 0.952303
[1800] training's auc: 0.999999 valid_1's auc: 0.952461
[1900] training's auc: 1 valid_1's auc: 0.95252
[2000] training's auc: 1 valid_1's auc: 0.952512
[2100] training's auc: 1 valid_1's auc: 0.952605
[2200] training's auc: 1 valid_1's auc: 0.952562
Early stopping, best iteration is:
[2118] training's auc: 1 valid_1's auc: 0.952639
Fold 2 finished in 0:21:21.997041
Training on fold 3
Training until validation scores don't improve for 100 rounds.
[100] training's auc: 0.952284 valid_1's auc: 0.915285
[200] training's auc: 0.965556 valid_1's auc: 0.92468
[300] training's auc: 0.976206 valid_1's auc: 0.93164
[400] training's auc: 0.984347 valid_1's auc: 0.93781
[500] training's auc: 0.989924 valid_1's auc: 0.942416
[600] training's auc: 0.993633 valid_1's auc: 0.945965
[700] training's auc: 0.996113 valid_1's auc: 0.948517
[800] training's auc: 0.997746 valid_1's auc: 0.949925
[900] training's auc: 0.998726 valid_1's auc: 0.950882
[1000] training's auc: 0.999287 valid_1's auc: 0.951825
[1100] training's auc: 0.999617 valid_1's auc: 0.952548
[1200] training's auc: 0.999797 valid_1's auc: 0.953094
[1300] training's auc: 0.999894 valid_1's auc: 0.95353
[1400] training's auc: 0.999947 valid_1's auc: 0.95383
[1500] training's auc: 0.999974 valid_1's auc: 0.954072
[1600] training's auc: 0.999988 valid_1's auc: 0.954373
[1700] training's auc: 0.999995 valid_1's auc: 0.954449
[1800] training's auc: 0.999998 valid_1's auc: 0.954713
[1900] training's auc: 0.999999 valid_1's auc: 0.954774
[2000] training's auc: 1 valid_1's auc: 0.954905
[2100] training's auc: 1 valid_1's auc: 0.955052
[2200] training's auc: 1 valid_1's auc: 0.955091
[2300] training's auc: 1 valid_1's auc: 0.955188
[2400] training's auc: 1 valid_1's auc: 0.955322
[2500] training's auc: 1 valid_1's auc: 0.955364
Early stopping, best iteration is:
[2497] training's auc: 1 valid_1's auc: 0.955365
Fold 3 finished in 0:18:20.990757
Training on fold 4
Training until validation scores don't improve for 100 rounds.
[100] training's auc: 0.954228 valid_1's auc: 0.907918
[200] training's auc: 0.966471 valid_1's auc: 0.917757
[300] training's auc: 0.976924 valid_1's auc: 0.926428
[400] training's auc: 0.985191 valid_1's auc: 0.933417
[500] training's auc: 0.990307 valid_1's auc: 0.93788
[600] training's auc: 0.993779 valid_1's auc: 0.940614
[700] training's auc: 0.996203 valid_1's auc: 0.942525
[800] training's auc: 0.997772 valid_1's auc: 0.943647
[900] training's auc: 0.998735 valid_1's auc: 0.944501
[1000] training's auc: 0.99929 valid_1's auc: 0.945139
[1100] training's auc: 0.999611 valid_1's auc: 0.945589
[1200] training's auc: 0.999793 valid_1's auc: 0.945899
[1300] training's auc: 0.999893 valid_1's auc: 0.946081
[1400] training's auc: 0.999945 valid_1's auc: 0.946286
[1500] training's auc: 0.999973 valid_1's auc: 0.946544
[1600] training's auc: 0.999988 valid_1's auc: 0.946604
[1700] training's auc: 0.999995 valid_1's auc: 0.946653
[1800] training's auc: 0.999998 valid_1's auc: 0.946724
[1900] training's auc: 0.999999 valid_1's auc: 0.94674
Early stopping, best iteration is:
[1830] training's auc: 0.999998 valid_1's auc: 0.946846
Fold 4 finished in 0:10:34.614958
Training on fold 5
Training until validation scores don't improve for 100 rounds.
[100] training's auc: 0.956479 valid_1's auc: 0.923223
[200] training's auc: 0.968872 valid_1's auc: 0.931341
[300] training's auc: 0.978828 valid_1's auc: 0.938321
[400] training's auc: 0.986746 valid_1's auc: 0.944949
[500] training's auc: 0.991906 valid_1's auc: 0.949556
[600] training's auc: 0.995159 valid_1's auc: 0.952583
[700] training's auc: 0.997246 valid_1's auc: 0.954809
[800] training's auc: 0.998509 valid_1's auc: 0.95608
[900] training's auc: 0.999229 valid_1's auc: 0.95689
[1000] training's auc: 0.999618 valid_1's auc: 0.957561
[1100] training's auc: 0.999813 valid_1's auc: 0.958041
[1200] training's auc: 0.999912 valid_1's auc: 0.95845
[1300] training's auc: 0.99996 valid_1's auc: 0.958769
[1400] training's auc: 0.999983 valid_1's auc: 0.959084
[1500] training's auc: 0.999993 valid_1's auc: 0.959232
[1600] training's auc: 0.999997 valid_1's auc: 0.959409
[1700] training's auc: 0.999999 valid_1's auc: 0.959536
[1800] training's auc: 1 valid_1's auc: 0.959663
[1900] training's auc: 1 valid_1's auc: 0.959774
[2000] training's auc: 1 valid_1's auc: 0.959818
[2100] training's auc: 1 valid_1's auc: 0.959921
[2200] training's auc: 1 valid_1's auc: 0.960006
[2300] training's auc: 1 valid_1's auc: 0.960027
[2400] training's auc: 1 valid_1's auc: 0.960021
Early stopping, best iteration is:
[2328] training's auc: 1 valid_1's auc: 0.960027
Fold 5 finished in 0:11:40.608725
------------------------------
Training has finished.
Total training time is 1:26:12.591944
Mean AUC: 0.9471116046487978
Total Validation: 0.9485097310098599
------------------------------
###Markdown
------------------------------
LB 9551
Training has finished.
Total training time is 1:15:08.666788
Mean AUC: 0.9469605523659353
Total Validation: 0.9482106596886817
------------------------------

------------------------------
LB 9554
Training has finished.
Total training time is 1:26:12.591944
Mean AUC: 0.9471116046487978
Total Validation: 0.9485097310098599
------------------------------
###Code
feature_importances['average'] = feature_importances[['fold_{}'.format(fold + 1) for fold in range(folds.n_splits)]].mean(axis=1)
feature_importances.to_csv('feature_importances.csv')
plt.figure(figsize=(16, 16))
sns.barplot(data=feature_importances.sort_values(by='average', ascending=False).head(50), x='average', y='feature');
plt.title('50 TOP feature importance over {} folds average'.format(folds.n_splits));
feature_importances.sort_values(by='average',ascending=False).head()
sub1 = pd.read_csv("input/sample_submission.csv")
sub1['isFraud'] = preds
sub1.to_csv('BaseLine_IEEE_Model_9551_ka.csv', index=False)
###Output
_____no_output_____ |
notebooks/test_active_learning_deepweeds_margin.ipynb | ###Markdown
###Code
!git clone --single-branch --branch cassava-deepweeds https://github.com/ravindrabharathi/fsdl-active-learning2.git
%cd fsdl-active-learning2
from google.colab import drive
drive.mount('/gdrive')
!mkdir './data/deepweeds/'
!cp '/gdrive/MyDrive/LiveAI/AgriAI/images.zip' './data/deepweeds/'
!unzip -q './data/deepweeds/images.zip' -d './data/deepweeds/images'
!cp '/gdrive/MyDrive/LiveAI/AgriAI/labels_deep_weeds.csv' './data/deepweeds/'
# alternative way: if you cloned the repository to your GDrive account, you can mount it here
#from google.colab import drive
#drive.mount('/content/drive', force_remount=True)
#%cd /content/drive/MyDrive/fsdl-active-learning
!pip3 install PyYAML==5.3.1
!pip3 install boltons wandb pytorch_lightning==1.2.8
!pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 torchtext==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html # general lab / pytorch installs
!pip3 install modAL tensorflow # active learning project
!pip install hdbscan
%env PYTHONPATH=.:$PYTHONPATH
#!python training/run_experiment.py --wandb --gpus=1 --max_epochs=1 --num_workers=4 --data_class=DroughtWatch --model_class=ResnetClassifier --batch_size=32 --sampling_method="random"
!python training/run_experiment.py --gpus=1 --max_epochs=10 --num_workers=4 --data_class=DeepweedsDataModule --model_class=ResnetClassifier3 --sampling_method="margin" --batch_size=128
###Output
2021-05-14 08:37:47.942730: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
INIT SETUP CALLED!!
___________________
[34m[1mwandb[0m: (1) Create a W&B account
[34m[1mwandb[0m: (2) Use an existing W&B account
[34m[1mwandb[0m: (3) Don't visualize my results
[34m[1mwandb[0m: Enter your choice: 2
[34m[1mwandb[0m: You chose 'Use an existing W&B account'
[34m[1mwandb[0m: You can find your API key in your browser here: https://wandb.ai/authorize
[34m[1mwandb[0m: Paste an API key from your profile and hit enter:
[34m[1mwandb[0m: Appending key for api.wandb.ai to your netrc file: /root/.netrc
2021-05-14 08:38:11.291591: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[34m[1mwandb[0m: Tracking run with wandb version 0.10.30
[34m[1mwandb[0m: Syncing run [33mfsdl-active-learning_DeepweedsDataModule_margin_multi-class_all-channels[0m
[34m[1mwandb[0m: ⭐️ View project at [34m[4mhttps://wandb.ai/ravindra/fsdl-active-learning2-training[0m
[34m[1mwandb[0m: 🚀 View run at [34m[4mhttps://wandb.ai/ravindra/fsdl-active-learning2-training/runs/2t35m1rj[0m
[34m[1mwandb[0m: Run data is saved locally in /content/fsdl-active-learning2/wandb/run-20210514_083809-2t35m1rj
[34m[1mwandb[0m: Run `wandb offline` to turn off syncing.
Initializing model for active learning iteration 0
setting n_channels to 3
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.cache/torch/hub/checkpoints/resnet50-19c8e357.pth
100%|███████████████████████████████████████| 97.8M/97.8M [00:00<00:00, 116MB/s]
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 0%| | 0/39 [00:00<?, ?it/s]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 51%|█████████████████▍ | 20/39 [00:08<00:07, 2.41it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.14it/s][A
Epoch 0: 100%|███████████| 39/39 [00:16<00:00, 2.32it/s, loss=1.45, v_num=m1rj]
Epoch 1: 51%|█████▋ | 20/39 [00:08<00:07, 2.48it/s, loss=1.45, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.30it/s][A
Epoch 1: 100%|███████████| 39/39 [00:15<00:00, 2.47it/s, loss=1.08, v_num=m1rj]
Epoch 2: 51%|█████▋ | 20/39 [00:08<00:07, 2.48it/s, loss=1.08, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.35it/s][A
Epoch 2: 100%|██████████| 39/39 [00:15<00:00, 2.48it/s, loss=0.739, v_num=m1rj]
Epoch 3: 51%|█████▏ | 20/39 [00:08<00:07, 2.49it/s, loss=0.739, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.30it/s][A
Epoch 3: 100%|██████████| 39/39 [00:15<00:00, 2.47it/s, loss=0.541, v_num=m1rj]
Epoch 4: 51%|█████▏ | 20/39 [00:08<00:07, 2.48it/s, loss=0.541, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.36it/s][A
Epoch 4: 100%|██████████| 39/39 [00:15<00:00, 2.48it/s, loss=0.407, v_num=m1rj]
Epoch 5: 51%|█████▏ | 20/39 [00:08<00:07, 2.47it/s, loss=0.407, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.34it/s][A
Epoch 5: 100%|██████████| 39/39 [00:15<00:00, 2.46it/s, loss=0.351, v_num=m1rj]
Epoch 6: 51%|█████▏ | 20/39 [00:08<00:07, 2.50it/s, loss=0.351, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.29it/s][A
Epoch 6: 100%|██████████| 39/39 [00:15<00:00, 2.47it/s, loss=0.303, v_num=m1rj]
Epoch 7: 51%|█████▏ | 20/39 [00:08<00:07, 2.49it/s, loss=0.303, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.30it/s][A
Epoch 7: 100%|██████████| 39/39 [00:15<00:00, 2.47it/s, loss=0.249, v_num=m1rj]
Epoch 8: 51%|█████▏ | 20/39 [00:08<00:07, 2.48it/s, loss=0.249, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.35it/s][A
Epoch 8: 100%|██████████| 39/39 [00:15<00:00, 2.47it/s, loss=0.207, v_num=m1rj]
Epoch 9: 51%|█████▏ | 20/39 [00:08<00:08, 2.37it/s, loss=0.207, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.20it/s][A
Epoch 9: 100%|██████████| 39/39 [00:16<00:00, 2.38it/s, loss=0.158, v_num=m1rj]
Epoch 9: 100%|██████████| 39/39 [00:17<00:00, 2.17it/s, loss=0.158, v_num=m1rj]
Total Unlabelled Pool Size 12607
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 99/99 [00:27<00:00, 3.63it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.7548980712890625,
'test_f1': 0.6795843243598938,
'train_size': 1400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "margin":
-----------------
[ 9228 10473 7804 ... 4511 6369 3444]
-----------------
New train set size 3400
New unlabelled pool size 10607
Initializing model for active learning iteration 1
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 36%|████ | 20/55 [00:13<00:23, 1.46it/s, loss=1.48, v_num=m1rj]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 73%|████████ | 40/55 [00:17<00:06, 2.24it/s, loss=1.48, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.30it/s][A
Epoch 0: 100%|███████████| 55/55 [00:25<00:00, 2.14it/s, loss=1.23, v_num=m1rj]
Epoch 1: 73%|███████▎ | 40/55 [00:17<00:06, 2.24it/s, loss=0.901, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.27it/s][A
Epoch 1: 100%|██████████| 55/55 [00:25<00:00, 2.13it/s, loss=0.857, v_num=m1rj]
Epoch 2: 73%|███████▎ | 40/55 [00:17<00:06, 2.24it/s, loss=0.662, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.26it/s][A
Epoch 2: 100%|██████████| 55/55 [00:25<00:00, 2.13it/s, loss=0.612, v_num=m1rj]
Epoch 3: 73%|███████▎ | 40/55 [00:18<00:06, 2.20it/s, loss=0.533, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.21it/s][A
Epoch 3: 100%|███████████| 55/55 [00:26<00:00, 2.11it/s, loss=0.54, v_num=m1rj]
Epoch 4: 73%|███████▎ | 40/55 [00:17<00:06, 2.25it/s, loss=0.326, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.33it/s][A
Epoch 4: 100%|██████████| 55/55 [00:25<00:00, 2.15it/s, loss=0.351, v_num=m1rj]
Epoch 5: 73%|████████ | 40/55 [00:17<00:06, 2.24it/s, loss=0.27, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.27it/s][A
Epoch 5: 100%|██████████| 55/55 [00:25<00:00, 2.14it/s, loss=0.273, v_num=m1rj]
Epoch 6: 73%|███████▎ | 40/55 [00:17<00:06, 2.24it/s, loss=0.206, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.36it/s][A
Epoch 6: 100%|██████████| 55/55 [00:25<00:00, 2.15it/s, loss=0.224, v_num=m1rj]
Epoch 7: 73%|███████▎ | 40/55 [00:18<00:06, 2.22it/s, loss=0.198, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.14it/s][A
Epoch 7: 100%|██████████| 55/55 [00:26<00:00, 2.10it/s, loss=0.228, v_num=m1rj]
Epoch 8: 73%|███████▎ | 40/55 [00:18<00:06, 2.20it/s, loss=0.184, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.23it/s][A
Epoch 8: 100%|███████████| 55/55 [00:26<00:00, 2.11it/s, loss=0.21, v_num=m1rj]
Epoch 9: 73%|███████▎ | 40/55 [00:17<00:06, 2.24it/s, loss=0.208, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.35it/s][A
Epoch 9: 100%|██████████| 55/55 [00:25<00:00, 2.15it/s, loss=0.242, v_num=m1rj]
Epoch 9: 100%|██████████| 55/55 [00:25<00:00, 2.15it/s, loss=0.242, v_num=m1rj]
Total Unlabelled Pool Size 10607
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 83/83 [00:21<00:00, 3.85it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.6862449049949646,
'test_f1': 0.5954818725585938,
'train_size': 3400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "margin":
-----------------
[9339 6471 7707 ... 8235 7635 4606]
-----------------
New train set size 5400
New unlabelled pool size 8607
Initializing model for active learning iteration 2
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 56%|██████▏ | 40/71 [00:26<00:20, 1.54it/s, loss=1.13, v_num=m1rj]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 85%|█████████▎ | 60/71 [00:27<00:05, 2.18it/s, loss=1.13, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.39it/s][A
Epoch 0: 100%|███████████| 71/71 [00:35<00:00, 2.01it/s, loss=1.07, v_num=m1rj]
Epoch 1: 85%|████████▍ | 60/71 [00:27<00:05, 2.17it/s, loss=0.746, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.41it/s][A
Epoch 1: 100%|██████████| 71/71 [00:35<00:00, 2.01it/s, loss=0.726, v_num=m1rj]
Epoch 2: 85%|████████▍ | 60/71 [00:27<00:05, 2.17it/s, loss=0.571, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.14it/s][A
Epoch 2: 100%|██████████| 71/71 [00:35<00:00, 1.98it/s, loss=0.589, v_num=m1rj]
Epoch 3: 85%|████████▍ | 60/71 [00:27<00:05, 2.17it/s, loss=0.482, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.28it/s][A
Epoch 3: 100%|██████████| 71/71 [00:35<00:00, 2.00it/s, loss=0.503, v_num=m1rj]
Epoch 4: 85%|████████▍ | 60/71 [00:27<00:05, 2.16it/s, loss=0.365, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.29it/s][A
Epoch 4: 100%|███████████| 71/71 [00:35<00:00, 2.00it/s, loss=0.38, v_num=m1rj]
Epoch 5: 85%|████████▍ | 60/71 [00:27<00:05, 2.17it/s, loss=0.292, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.24it/s][A
Epoch 5: 100%|██████████| 71/71 [00:35<00:00, 2.00it/s, loss=0.286, v_num=m1rj]
Epoch 6: 85%|████████▍ | 60/71 [00:27<00:05, 2.16it/s, loss=0.235, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.27it/s][A
Epoch 6: 100%|██████████| 71/71 [00:35<00:00, 2.00it/s, loss=0.245, v_num=m1rj]
Epoch 7: 85%|████████▍ | 60/71 [00:27<00:05, 2.18it/s, loss=0.228, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.32it/s][A
Epoch 7: 100%|██████████| 71/71 [00:35<00:00, 2.01it/s, loss=0.226, v_num=m1rj]
Epoch 8: 85%|████████▍ | 60/71 [00:27<00:05, 2.18it/s, loss=0.156, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.31it/s][A
Epoch 8: 100%|██████████| 71/71 [00:35<00:00, 2.01it/s, loss=0.161, v_num=m1rj]
Epoch 9: 85%|████████▍ | 60/71 [00:27<00:05, 2.16it/s, loss=0.181, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.16it/s][A
Epoch 9: 100%|██████████| 71/71 [00:35<00:00, 1.99it/s, loss=0.178, v_num=m1rj]
Epoch 9: 100%|██████████| 71/71 [00:35<00:00, 1.99it/s, loss=0.178, v_num=m1rj]
Total Unlabelled Pool Size 8607
Query Sample size 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Resetting Predictions
Testing: 100%|██████████████████████████████████| 68/68 [00:17<00:00, 3.80it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.783315896987915,
'test_f1': 0.6576308012008667,
'train_size': 5400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "margin":
-----------------
[2210 1674 725 ... 3949 4673 1022]
-----------------
New train set size 7400
New unlabelled pool size 6607
Initializing model for active learning iteration 3
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 47%|████▋ | 40/86 [00:26<00:29, 1.53it/s, loss=0.979, v_num=m1rj]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 70%|██████▉ | 60/86 [00:37<00:16, 1.61it/s, loss=0.979, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 0: 93%|█████████▎| 80/86 [00:43<00:03, 1.85it/s, loss=0.979, v_num=m1rj]
Epoch 0: 100%|██████████| 86/86 [00:44<00:00, 1.91it/s, loss=0.851, v_num=m1rj]
Epoch 1: 70%|██████▉ | 60/86 [00:37<00:16, 1.61it/s, loss=0.622, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 1: 93%|█████████▎| 80/86 [00:43<00:03, 1.83it/s, loss=0.622, v_num=m1rj]
Epoch 1: 100%|██████████| 86/86 [00:45<00:00, 1.89it/s, loss=0.552, v_num=m1rj]
Epoch 2: 70%|██████▉ | 60/86 [00:37<00:16, 1.61it/s, loss=0.475, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 2: 93%|█████████▎| 80/86 [00:43<00:03, 1.84it/s, loss=0.475, v_num=m1rj]
Epoch 2: 100%|██████████| 86/86 [00:45<00:00, 1.90it/s, loss=0.494, v_num=m1rj]
Epoch 3: 70%|██████▉ | 60/86 [00:37<00:16, 1.60it/s, loss=0.371, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 3: 93%|█████████▎| 80/86 [00:43<00:03, 1.84it/s, loss=0.371, v_num=m1rj]
Epoch 3: 100%|██████████| 86/86 [00:45<00:00, 1.89it/s, loss=0.366, v_num=m1rj]
Epoch 4: 70%|██████▉ | 60/86 [00:37<00:16, 1.61it/s, loss=0.319, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 4: 93%|█████████▎| 80/86 [00:43<00:03, 1.85it/s, loss=0.319, v_num=m1rj]
Epoch 4: 100%|██████████| 86/86 [00:45<00:00, 1.91it/s, loss=0.346, v_num=m1rj]
Epoch 5: 70%|██████▉ | 60/86 [00:37<00:16, 1.61it/s, loss=0.225, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 5: 93%|█████████▎| 80/86 [00:43<00:03, 1.84it/s, loss=0.225, v_num=m1rj]
Epoch 5: 100%|██████████| 86/86 [00:45<00:00, 1.91it/s, loss=0.241, v_num=m1rj]
Epoch 6: 70%|██████▉ | 60/86 [00:37<00:16, 1.60it/s, loss=0.212, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 6: 93%|█████████▎| 80/86 [00:43<00:03, 1.84it/s, loss=0.212, v_num=m1rj]
Epoch 6: 100%|██████████| 86/86 [00:45<00:00, 1.90it/s, loss=0.202, v_num=m1rj]
Epoch 7: 70%|███████▋ | 60/86 [00:37<00:16, 1.61it/s, loss=0.15, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 7: 93%|██████████▏| 80/86 [00:43<00:03, 1.84it/s, loss=0.15, v_num=m1rj]
Epoch 7: 100%|██████████| 86/86 [00:45<00:00, 1.91it/s, loss=0.172, v_num=m1rj]
Epoch 8: 70%|██████▉ | 60/86 [00:37<00:16, 1.61it/s, loss=0.148, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 8: 93%|█████████▎| 80/86 [00:43<00:03, 1.85it/s, loss=0.148, v_num=m1rj]
Epoch 8: 100%|███████████| 86/86 [00:44<00:00, 1.91it/s, loss=0.19, v_num=m1rj]
Epoch 9: 70%|██████▉ | 60/86 [00:37<00:16, 1.61it/s, loss=0.107, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 9: 93%|█████████▎| 80/86 [00:43<00:03, 1.85it/s, loss=0.107, v_num=m1rj]
Epoch 9: 100%|██████████| 86/86 [00:45<00:00, 1.91it/s, loss=0.102, v_num=m1rj]
Epoch 9: 100%|██████████| 86/86 [00:45<00:00, 1.91it/s, loss=0.102, v_num=m1rj]
Total Unlabelled Pool Size 6607
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 52/52 [00:13<00:00, 3.73it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.8521265387535095,
'test_f1': 0.7208153605461121,
'train_size': 7400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "margin":
-----------------
[2392 1346 1192 ... 2190 700 3413]
-----------------
New train set size 9400
New unlabelled pool size 4607
Initializing model for active learning iteration 4
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 59%|█████▎ | 60/102 [00:38<00:26, 1.56it/s, loss=0.799, v_num=m1rj]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 78%|███████ | 80/102 [00:47<00:12, 1.70it/s, loss=0.799, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 0: 98%|███████▊| 100/102 [00:53<00:01, 1.88it/s, loss=0.799, v_num=m1rj]
Epoch 0: 100%|████████| 102/102 [00:54<00:00, 1.86it/s, loss=0.745, v_num=m1rj]
Epoch 1: 78%|███████ | 80/102 [00:47<00:12, 1.70it/s, loss=0.573, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 1: 98%|███████▊| 100/102 [00:53<00:01, 1.87it/s, loss=0.573, v_num=m1rj]
Epoch 1: 100%|████████| 102/102 [00:55<00:00, 1.85it/s, loss=0.573, v_num=m1rj]
Epoch 2: 78%|███████ | 80/102 [00:47<00:12, 1.70it/s, loss=0.374, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 2: 98%|███████▊| 100/102 [00:53<00:01, 1.88it/s, loss=0.374, v_num=m1rj]
Epoch 2: 100%|████████| 102/102 [00:54<00:00, 1.86it/s, loss=0.416, v_num=m1rj]
Epoch 3: 78%|███████ | 80/102 [00:47<00:12, 1.70it/s, loss=0.385, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 3: 98%|███████▊| 100/102 [00:53<00:01, 1.88it/s, loss=0.385, v_num=m1rj]
Epoch 3: 100%|████████| 102/102 [00:54<00:00, 1.86it/s, loss=0.375, v_num=m1rj]
Epoch 4: 78%|███████ | 80/102 [00:47<00:12, 1.70it/s, loss=0.315, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 4: 98%|███████▊| 100/102 [00:53<00:01, 1.88it/s, loss=0.315, v_num=m1rj]
Epoch 4: 100%|████████| 102/102 [00:54<00:00, 1.86it/s, loss=0.315, v_num=m1rj]
Epoch 5: 78%|███████ | 80/102 [00:47<00:12, 1.70it/s, loss=0.264, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 5: 98%|███████▊| 100/102 [00:53<00:01, 1.88it/s, loss=0.264, v_num=m1rj]
Epoch 5: 100%|████████| 102/102 [00:54<00:00, 1.85it/s, loss=0.246, v_num=m1rj]
Epoch 6: 78%|███████ | 80/102 [00:47<00:12, 1.69it/s, loss=0.196, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 6: 98%|███████▊| 100/102 [00:53<00:01, 1.88it/s, loss=0.196, v_num=m1rj]
Epoch 6: 100%|████████| 102/102 [00:55<00:00, 1.85it/s, loss=0.191, v_num=m1rj]
Epoch 7: 78%|███████ | 80/102 [00:46<00:12, 1.71it/s, loss=0.179, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 7: 98%|███████▊| 100/102 [00:52<00:01, 1.90it/s, loss=0.179, v_num=m1rj]
Epoch 7: 100%|████████| 102/102 [00:54<00:00, 1.87it/s, loss=0.189, v_num=m1rj]
Epoch 8: 78%|███████ | 80/102 [00:47<00:12, 1.70it/s, loss=0.164, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 8: 98%|███████▊| 100/102 [00:53<00:01, 1.88it/s, loss=0.164, v_num=m1rj]
Epoch 8: 100%|████████| 102/102 [00:54<00:00, 1.86it/s, loss=0.131, v_num=m1rj]
Epoch 9: 78%|███████ | 80/102 [00:47<00:12, 1.70it/s, loss=0.125, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 9: 98%|███████▊| 100/102 [00:53<00:01, 1.88it/s, loss=0.125, v_num=m1rj]
Epoch 9: 100%|████████| 102/102 [00:54<00:00, 1.86it/s, loss=0.139, v_num=m1rj]
Epoch 9: 100%|████████| 102/102 [00:54<00:00, 1.86it/s, loss=0.139, v_num=m1rj]
Total Unlabelled Pool Size 4607
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 36/36 [00:09<00:00, 3.62it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.8626003861427307,
'test_f1': 0.7415996789932251,
'train_size': 9400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "margin":
-----------------
[3121 4398 1540 ... 2798 2432 4027]
-----------------
New train set size 11400
New unlabelled pool size 2607
Initializing model for active learning iteration 5
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 68%|██████▊ | 80/118 [00:50<00:24, 1.57it/s, loss=0.73, v_num=m1rj]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 85%|███████▋ | 100/118 [00:56<00:10, 1.77it/s, loss=0.73, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.29it/s][A
Epoch 0: 100%|████████| 118/118 [01:04<00:00, 1.83it/s, loss=0.702, v_num=m1rj]
Epoch 1: 85%|██████▊ | 100/118 [00:56<00:10, 1.76it/s, loss=0.537, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.06it/s][A
Epoch 1: 100%|████████| 118/118 [01:05<00:00, 1.81it/s, loss=0.553, v_num=m1rj]
Epoch 2: 85%|██████▊ | 100/118 [00:56<00:10, 1.76it/s, loss=0.422, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.46it/s][A
Epoch 2: 100%|████████| 118/118 [01:04<00:00, 1.83it/s, loss=0.399, v_num=m1rj]
Epoch 3: 85%|██████▊ | 100/118 [00:56<00:10, 1.76it/s, loss=0.328, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.37it/s][A
Epoch 3: 100%|████████| 118/118 [01:04<00:00, 1.83it/s, loss=0.365, v_num=m1rj]
Epoch 4: 85%|████████▍ | 100/118 [00:56<00:10, 1.76it/s, loss=0.3, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.24it/s][A
Epoch 4: 100%|████████| 118/118 [01:04<00:00, 1.82it/s, loss=0.275, v_num=m1rj]
Epoch 5: 85%|██████▊ | 100/118 [00:56<00:10, 1.76it/s, loss=0.224, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.29it/s][A
Epoch 5: 100%|████████| 118/118 [01:04<00:00, 1.82it/s, loss=0.237, v_num=m1rj]
Epoch 6: 85%|██████▊ | 100/118 [00:56<00:10, 1.76it/s, loss=0.227, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.35it/s][A
Epoch 6: 100%|█████████| 118/118 [01:04<00:00, 1.83it/s, loss=0.28, v_num=m1rj]
Epoch 7: 85%|██████▊ | 100/118 [00:56<00:10, 1.76it/s, loss=0.203, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.22it/s][A
Epoch 7: 100%|████████| 118/118 [01:04<00:00, 1.82it/s, loss=0.328, v_num=m1rj]
Epoch 8: 85%|██████▊ | 100/118 [00:56<00:10, 1.76it/s, loss=0.266, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.31it/s][A
Epoch 8: 100%|█████████| 118/118 [01:04<00:00, 1.83it/s, loss=0.27, v_num=m1rj]
Epoch 9: 85%|██████▊ | 100/118 [00:56<00:10, 1.76it/s, loss=0.147, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.30it/s][A
Epoch 9: 100%|████████| 118/118 [01:04<00:00, 1.82it/s, loss=0.165, v_num=m1rj]
Epoch 9: 100%|████████| 118/118 [01:04<00:00, 1.82it/s, loss=0.165, v_num=m1rj]
Total Unlabelled Pool Size 2607
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 21/21 [00:06<00:00, 3.25it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.9075565934181213,
'test_f1': 0.692285418510437,
'train_size': 11400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "margin":
-----------------
[1993 383 1512 ... 1038 628 333]
-----------------
New train set size 13400
New unlabelled pool size 607
Initializing model for active learning iteration 6
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 75%|██████ | 100/133 [01:03<00:21, 1.57it/s, loss=0.605, v_num=m1rj]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 90%|███████▏| 120/133 [01:06<00:07, 1.80it/s, loss=0.605, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.02it/s][A
Epoch 0: 100%|████████| 133/133 [01:15<00:00, 1.77it/s, loss=0.569, v_num=m1rj]
Epoch 1: 90%|███████▏| 120/133 [01:06<00:07, 1.80it/s, loss=0.398, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.27it/s][A
Epoch 1: 100%|████████| 133/133 [01:14<00:00, 1.79it/s, loss=0.393, v_num=m1rj]
Epoch 2: 90%|███████▏| 120/133 [01:06<00:07, 1.80it/s, loss=0.365, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.32it/s][A
Epoch 2: 100%|████████| 133/133 [01:14<00:00, 1.79it/s, loss=0.361, v_num=m1rj]
Epoch 3: 90%|████████ | 120/133 [01:06<00:07, 1.80it/s, loss=0.28, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.28it/s][A
Epoch 3: 100%|████████| 133/133 [01:14<00:00, 1.79it/s, loss=0.298, v_num=m1rj]
Epoch 4: 90%|███████▏| 120/133 [01:06<00:07, 1.80it/s, loss=0.243, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.23it/s][A
Epoch 4: 100%|████████| 133/133 [01:14<00:00, 1.78it/s, loss=0.241, v_num=m1rj]
Epoch 5: 90%|███████▏| 120/133 [01:06<00:07, 1.80it/s, loss=0.196, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.27it/s][A
Epoch 5: 100%|████████| 133/133 [01:14<00:00, 1.78it/s, loss=0.217, v_num=m1rj]
Epoch 6: 90%|███████▏| 120/133 [01:06<00:07, 1.80it/s, loss=0.181, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.33it/s][A
Epoch 6: 100%|████████| 133/133 [01:14<00:00, 1.79it/s, loss=0.204, v_num=m1rj]
Epoch 7: 90%|███████▏| 120/133 [01:06<00:07, 1.80it/s, loss=0.123, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.22it/s][A
Epoch 7: 100%|████████| 133/133 [01:14<00:00, 1.78it/s, loss=0.126, v_num=m1rj]
Epoch 8: 90%|███████▏| 120/133 [01:06<00:07, 1.80it/s, loss=0.123, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.16it/s][A
Epoch 8: 100%|████████| 133/133 [01:14<00:00, 1.78it/s, loss=0.134, v_num=m1rj]
Epoch 9: 90%|███████▏| 120/133 [01:06<00:07, 1.80it/s, loss=0.119, v_num=m1rj]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.35it/s][A
Epoch 9: 100%|████████| 133/133 [01:14<00:00, 1.79it/s, loss=0.108, v_num=m1rj]
Epoch 9: 100%|████████| 133/133 [01:14<00:00, 1.79it/s, loss=0.108, v_num=m1rj]
Total Unlabelled Pool Size 607
Query Sample size 607
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████████| 5/5 [00:02<00:00, 2.27it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.9868203997612,
'test_f1': 0.9668959379196167,
'train_size': 13400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "margin":
-----------------
[160 334 288 152 237 181 284 150 64 461 433 350 125 79 155 492 85 529
96 553 324 60 78 219 544 596 316 392 404 507 271 355 497 245 174 275
523 154 402 454 88 225 295 459 299 524 347 73 127 395 527 403 182 304
448 16 183 490 538 278 41 419 450 235 427 317 72 517 203 383 349 501
589 37 25 314 376 592 583 74 345 411 564 348 129 208 556 519 111 205
588 508 44 487 371 23 59 297 307 172 562 532 522 506 66 164 509 234
212 442 280 52 273 435 327 600 180 292 430 460 429 384 565 243 77 503
114 283 478 406 146 465 110 156 120 536 3 311 270 139 572 83 561 193
306 289 189 337 94 185 91 244 238 206 169 148 28 440 33 422 444 431
236 14 486 417 400 42 472 363 252 393 213 251 230 547 387 470 95 426
233 89 65 75 166 456 263 386 210 137 47 173 62 445 587 103 557 531
321 482 211 452 385 551 336 364 20 153 217 375 104 157 302 586 67 325
113 272 45 449 310 274 86 259 341 463 598 194 19 474 56 414 537 570
291 353 494 606 447 224 312 580 202 342 246 266 131 93 560 601 356 475
242 223 50 128 32 305 585 30 10 545 158 535 418 578 485 199 593 17
214 256 249 484 453 130 344 126 405 138 413 279 188 391 141 287 412 257
123 398 365 330 591 358 577 389 290 416 134 368 437 458 294 70 303 451
57 415 604 549 84 397 119 315 144 605 540 6 301 563 176 339 260 399
207 68 265 599 322 8 248 107 479 262 186 145 268 495 542 331 457 328
296 143 5 247 116 320 171 374 204 594 502 466 201 92 602 480 380 239
505 493 423 525 200 121 282 4 584 340 520 432 526 13 63 590 48 221
2 319 136 516 438 468 573 539 373 255 240 436 105 232 488 99 379 132
175 108 515 184 102 163 100 264 135 408 582 140 231 521 285 390 49 394
382 528 147 603 473 513 46 510 518 552 476 40 51 351 335 467 343 286
360 97 167 26 361 227 165 115 428 54 298 253 388 226 498 512 162 109
367 496 323 354 541 559 35 370 309 36 276 197 357 179 425 333 142 101
87 267 215 409 0 250 192 372 261 569 228 477 581 216 31 410 118 15
550 407 332 554 159 229 12 378 567 254 499 21 421 29 43 269 471 530
434 338 359 195 71 53 90 82 597 168 464 326 420 329 27 313 555 574
58 369 196 24 446 1 190 38 187 191 124 548 489 81 39 491 106 469
277 500 293 396 170 571 117 11 533 220 575 362 455 441 424 558 481 566
98 543 579 443 18 483 218 7 511 595 149 198 222 178 151 504 209 576
76 9 80 258 346 568 177 55 281 112 546 514 300 22 61 462 381 133
352 161 439 534 34 122 401 318 241 69 377 308 366]
-----------------
New train set size 14007
New unlabelled pool size 0
Initializing model for active learning iteration 7
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
[34m[1mwandb[0m: Waiting for W&B process to finish, PID 481
[34m[1mwandb[0m: Program ended successfully.
[34m[1mwandb[0m:
[34m[1mwandb[0m: Find user logs for this run at: /content/fsdl-active-learning2/wandb/run-20210514_083809-2t35m1rj/logs/debug.log
[34m[1mwandb[0m: Find internal logs for this run at: /content/fsdl-active-learning2/wandb/run-20210514_083809-2t35m1rj/logs/debug-internal.log
[34m[1mwandb[0m: Run summary:
[34m[1mwandb[0m: train_loss 0.12032
[34m[1mwandb[0m: train_acc 0.96142
[34m[1mwandb[0m: train_f1 0.95084
[34m[1mwandb[0m: train_size 13400.0
[34m[1mwandb[0m: epoch 9
[34m[1mwandb[0m: trainer/global_step 1050
[34m[1mwandb[0m: _runtime 3355
[34m[1mwandb[0m: _timestamp 1620984844
[34m[1mwandb[0m: _step 153
[34m[1mwandb[0m: val_loss 0.50587
[34m[1mwandb[0m: val_acc 0.87322
[34m[1mwandb[0m: val_f1 0.82097
[34m[1mwandb[0m: train_acc_max 0.96754
[34m[1mwandb[0m: val_acc_max 0.89035
[34m[1mwandb[0m: train_f1_max 0.95792
[34m[1mwandb[0m: val_f1_max 0.85489
[34m[1mwandb[0m: train_acc_best 0.96754
[34m[1mwandb[0m: val_acc_best 0.89035
[34m[1mwandb[0m: train_f1_best 0.95792
[34m[1mwandb[0m: val_f1_best 0.85489
[34m[1mwandb[0m: test_acc 0.98682
[34m[1mwandb[0m: test_f1 0.9669
[34m[1mwandb[0m: Run history:
[34m[1mwandb[0m: train_loss █▅▃▂▂▁█▄▂▂▂▂▅▃▂▂▁▆▄▂▂▁▁▆▃▂▁▁▁▄▂▂▂▃▅▃▂▁▁▁
[34m[1mwandb[0m: train_acc ▁▄▆▇██▁▅▇▇█▇▄▆▆▇█▃▅▇▇██▃▇▇███▅▇▇▇▇▄▆▇███
[34m[1mwandb[0m: train_f1 ▁▃▆▇▇█▁▅▇▇▇▇▄▆▆▇█▃▆▇███▄▇▇███▆▇▇▇▇▄▆▇███
[34m[1mwandb[0m: train_size ▁▁▁▁▁▁▂▂▂▂▂▂▃▃▃▃▃▅▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇██████
[34m[1mwandb[0m: epoch ▁▂▃▅▆█▂▃▄▆▇█▂▃▄▆▇▁▃▄▅▆█▁▃▄▆▆█▂▃▅▆▇█▂▃▅▆█
[34m[1mwandb[0m: trainer/global_step ▁▁▁▁▂▂▁▁▂▂▃▃▂▂▃▃▄▁▂▃▃▄▅▁▂▃▄▅▆▂▃▅▆▇▂▃▄▆▇█
[34m[1mwandb[0m: _runtime ▁▁▁▁▁▁▁▂▂▂▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▆▆▆▆▇▇▇███
[34m[1mwandb[0m: _timestamp ▁▁▁▁▁▁▁▂▂▂▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▆▆▆▆▇▇▇███
[34m[1mwandb[0m: _step ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
[34m[1mwandb[0m: val_loss █▄▂▃▂▂▄▂▂▂▂▃▃▂▃▂▁▄▃▂▂▂▂▃▂▂▂▁▂▁▃▁▁▁▂▂▁▂▁▁
[34m[1mwandb[0m: val_acc ▁▄▆▆▆▆▄▆▇▆▇▆▅▆▅▆█▄▅▇▇▇▇▅▇▇▇▇▇▇▆██▇▆▇▇▇██
[34m[1mwandb[0m: val_f1 ▁▃▅▄▅▆▂▆▆▆▅▅▄▅▆▅▇▄▄▆▇▆▇▅▆▆▆▇▇▇▅▇▇▇▆▆▇▇██
[34m[1mwandb[0m: train_acc_max ▁▄▆▇▇█▁▅▇▇▇█▄▆▆▇█▃▅▇▇██▃▆▇███▅▇▇▇▇▄▆▇███
[34m[1mwandb[0m: val_acc_max ▁▄▆▆▆▆▄▆▇▇▇▇▆▆▆▇█▄▅▇▇▇▇▅▇████▇████▆▇▇███
[34m[1mwandb[0m: train_f1_max ▁▃▆▇▇▇▁▅▇▇██▄▆▆▇█▃▆▇▇██▄▇▇███▆▇▇██▄▆▇███
[34m[1mwandb[0m: val_f1_max ▁▃▅▅▅▆▂▆▆▆▆▇▄▆▆▆▇▄▄▆▇▇▇▅▆▇▇▇█▇▇▇▇▇▆▆▇███
[34m[1mwandb[0m: train_acc_best █▃▄▆▆▁▇
[34m[1mwandb[0m: val_acc_best ▁▃▆▄▇▆█
[34m[1mwandb[0m: train_f1_best █▂▃█▇▁█
[34m[1mwandb[0m: val_f1_best ▁▃▄▃▇▅█
[34m[1mwandb[0m: test_acc ▃▁▃▅▅▆█
[34m[1mwandb[0m: test_f1 ▃▁▂▃▄▃█
[34m[1mwandb[0m:
[34m[1mwandb[0m: Synced 5 W&B file(s), 7 media file(s), 0 artifact file(s) and 1 other file(s)
[34m[1mwandb[0m:
[34m[1mwandb[0m: Synced [33mfsdl-active-learning_DeepweedsDataModule_margin_multi-class_all-channels[0m: [34mhttps://wandb.ai/ravindra/fsdl-active-learning2-training/runs/2t35m1rj[0m
|
notebooks/Gait Problem Classification.ipynb | ###Markdown
Data Paths **ALWAYS RUN THE FOLLOWING CELL** Here we specify the paths from which the algorithm loads the measurement data and the trained models.
###Code
import os
import math

import numpy as np
import pandas as pd

experimentDir = 'model-nl-10-normalized'
modelDir = 'model'
trainingDataFile = 'training-data.txt'
testingDataFile = 'testing-data.txt'
runLoadData = True
runTrainModel = True
runTestModel = True
modelsPath = os.path.join('..', 'data', 'models')
dataPath = os.path.join('..', 'data', '08-07-19')
experimentPath = os.path.join(modelsPath, experimentDir)
modelPath = os.path.join(experimentPath, modelDir)
trainingDataPath = os.path.join(experimentPath, trainingDataFile)
testingDataPath = os.path.join(experimentPath, testingDataFile)
if not os.path.isdir(experimentPath):
os.mkdir(experimentPath)
###Output
_____no_output_____
###Markdown
Load measurement data
###Code
def readBonsai(path):
bonsai = pd.read_csv(path)
bonsai = bonsai[['accX', 'accY', 'accZ', 'gyrX', 'gyrY', 'gyrZ']]
return bonsai
def readEXLS3(path):
exl = pd.read_fwf(path)
exl.columns = exl.iloc[2]
exl = exl[['a_x [g]:', 'a_y [g]:', 'a_z [g]:', 'ar_x [rad/s]:', 'ar_y [rad/s]:', 'ar_z [rad/s]:']]
exl.rename(index=int, columns={
'a_x [g]:': 'accX', 'a_y [g]:': 'accY', 'a_z [g]:': 'accZ',
'ar_x [rad/s]:': 'gyrX', 'ar_y [rad/s]:': 'gyrY', 'ar_z [rad/s]:': 'gyrZ'
}, inplace=True)
exl = exl.iloc[3:]
exl.reset_index(drop=True, inplace=True)
exl = exl.apply(pd.to_numeric)
exl = exl.multiply(9.80665)
return exl
def tagColumnNames(df, tag):
newColumnNames = {columnName: columnName + tag for columnName in df.columns}
return df.rename(index=int, columns=newColumnNames)
fileNameLocationMap = {
'I-L9H': 'hip-r',
'I-74V': 'hip-l',
'I-WXB': 'knee-l',
'I-0GN': 'knee-r',
'I-2VZ': 'knee-r',
'Gait - R': 'foot-r',
'Gait - L': 'foot-l'
}
def mapFileNameToLocation(fileName):
for name, location in fileNameLocationMap.items():
if (name in fileName):
return location
return 'unknown'
def loadMeasurements(path):
measurements = {}
for fileOrDir in os.listdir(path):
if (fileOrDir.endswith('.txt')):
measurement = readEXLS3(os.path.join(path, fileOrDir))
elif (fileOrDir.endswith('.csv')):
measurement = readBonsai(os.path.join(path, fileOrDir))
if (measurement is not None):
measurementLocation = mapFileNameToLocation(fileOrDir)
measurement = tagColumnNames(measurement, '_' + measurementLocation)
measurements[measurementLocation] = measurement
return measurements
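
# Quick sanity check of the helpers above (the file names are hypothetical, for illustration only):
assert mapFileNameToLocation('I-L9H_2019-07-08.txt') == 'hip-r'
assert mapFileNameToLocation('Gait - L 2019-07-08.csv') == 'foot-l'
assert mapFileNameToLocation('unrelated.txt') == 'unknown'
assert list(tagColumnNames(pd.DataFrame({'accX': [0.0]}), '_hip-r').columns) == ['accX_hip-r']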
###Output
_____no_output_____
###Markdown
Calibration
###Code
zeroMovementWindowSize = 200 # 10 ms per sample * 200 samples = 2 s window of near-zero movement
def calibrate(series):
zeroWindowIndex = series.abs().rolling(zeroMovementWindowSize).median().sort_values().index[0]
zero = series.rolling(zeroMovementWindowSize).median().iloc[zeroWindowIndex]
series -= zero
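
# Illustrative only (synthetic signal, not a recorded measurement): a constant offset of 0.3
# during the initial resting phase should be removed in place by calibrate().
syntheticSignal = pd.Series([0.3] * 400 + [5.3, -3.7, 3.3] * 100)
calibrate(syntheticSignal)
print(syntheticSignal.head())  # resting-phase values should now be (close to) 0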
###Output
_____no_output_____
###Markdown
Synchronize the sensor data
###Code
numberOfJumps = 3
jumpBinSize = 50 # 10ms * jumpBinSize = time per bin; bundles neighbor values to avoid multiple amplitudes during same jump
jumpSequenceLength = 800 # 10 ms * jumpSequenceLength
relativeMaxThreshold = 7 / 12
def binMeasurement(measurement, binSize):
absMeasurement = measurement
return absMeasurement.groupby(pd.cut(absMeasurement.index, np.arange(absMeasurement.index[0], absMeasurement.index[len(absMeasurement) - 1], binSize))).max()
def findJumpingWindow(measurement):
measurement = measurement.head(int(len(measurement) / 2)) # jumping should be in first half
absMeasurement = measurement.abs()
threshold = absMeasurement.max() * relativeMaxThreshold
absMeasurement = absMeasurement.apply(lambda value: value if value >= threshold else 0)
bins = binMeasurement(absMeasurement, jumpBinSize).reset_index().drop('index', axis='columns')
upperBound = bins.rolling(int(jumpSequenceLength / jumpBinSize)).sum().iloc[:,0].sort_values(ascending=False).index[0]
lowerBound = upperBound - int(jumpSequenceLength / jumpBinSize)
upperBound *= jumpBinSize
lowerBound *= jumpBinSize
return max(lowerBound - 100, 0), min(upperBound + 100, len(measurement) - 1)
def getFirstJumpIndex(measurement):
windowIndicies = findJumpingWindow(measurement)
window = measurement[windowIndicies[0]: windowIndicies[1]]
threshold = window.max() * relativeMaxThreshold
window = window.apply(lambda value: 1 if value >= threshold else 0)
return window.loc[window == 1].index[0]
def alignSignals(dfX, dfY):
return getFirstJumpIndex(dfX) - getFirstJumpIndex(dfY)
def alignAccelerationYWithRightFoot(measurements, location, axis):
offset = alignSignals(
measurements['foot-r']['accY_foot-r'],
measurements[location]['acc' + axis.upper() + '_' + location])
measurements[location] = measurements[location].shift(offset, axis='index')
###Output
_____no_output_____
###Markdown
Exercise detection
###Code
zeroMovementThreshold = 1.5 # acceleration threshold, given in m/s^2
def getNextBinaryBlock(series, startPosition, minSubsequentMovements, zeroMode=True):
start = series[startPosition:]
start = start.loc[lambda value: value == 0] if zeroMode else start.loc[lambda value: value == 1]
if (len(start) == 0):
raise ValueError
start = start.index[0]
iValue = start
zeroCounter = 0
while (iValue < len(series)):
if (not series[iValue]):
zeroCounter += 1
iValue += 1
elif (zeroCounter < minSubsequentMovements):
return getNextBinaryBlock(series, iValue + 1, minSubsequentMovements)
else:
break
return start, iValue - 1
def findAllNonZeroBlocks(series, startPosition, minSubsequentZeroMovements=200, minSubsequentNonZeroMovements=29, ignoreMinSubsequentNonZeroMovements=True):
'''
Finds all blocks of movement (expects a filtered list with 1s and 0s, gives back indices of 1-blocks).
Thresholds:
- minSubsequentZeroMovements: minimal length of zero blocks to interrupt movement blocks
- minSubsequentNonZeroMovements: minimal length of movement blocks
- ignoreMinSubsequentNonZeroMovements: if minSubsequentNonZeroMovements should be ignored
'''
blocks = []
start = series[startPosition:][series == 1].index[0]
while (start < len(series)):
try:
zeroStart, zeroEnd = getNextBinaryBlock(series, start, minSubsequentZeroMovements)
if ((((zeroStart - 1) - start) > minSubsequentNonZeroMovements) or ignoreMinSubsequentNonZeroMovements):
blocks.append((start, zeroStart - 1))
start = zeroEnd + 1
except ValueError:
if ((((len(series) - 1) - start) > minSubsequentNonZeroMovements) or ignoreMinSubsequentNonZeroMovements):
blocks.append((start, len(series) - 1))
start = len(series)
return blocks
def splitDataFrameIntoExercises(df, columnName):
measurement = df[columnName]
windowIndicies = findJumpingWindow(measurement)
filteredByTH = measurement.abs().apply(lambda value: 1 if value > zeroMovementThreshold else 0)
exerciseIntervals = findAllNonZeroBlocks(filteredByTH, windowIndicies[1])
return list(map(lambda interval: df[interval[0] : interval[1]], exerciseIntervals))
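
# Toy illustration of the block detection above (synthetic 0/1 series with small thresholds,
# chosen only for readability; the real pipeline uses the defaults defined above):
toyFiltered = pd.Series([0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0])
print(findAllNonZeroBlocks(toyFiltered, 0, minSubsequentZeroMovements=3))  # expected: [(2, 4), (9, 10)]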
###Output
_____no_output_____
###Markdown
Stride Detection
###Code
restingThreshold = 2.5 # acceleration threshold, given in m/s^2
minRestingInterval = 25 # we are taking the resting intervals of the right foot to detect ends of strides
minMovementInterval = 5 # movementIntervals seperate the resting intervals, we are not looking for them
def findRestingBlocks(series):
filteredByTH = series.abs().apply(lambda value: 1 if value < restingThreshold else 0).reset_index(drop=True)
return findAllNonZeroBlocks(filteredByTH, 0, minSubsequentZeroMovements=minMovementInterval, minSubsequentNonZeroMovements=minRestingInterval, ignoreMinSubsequentNonZeroMovements=False)
def findFirstStride(series, nextStrides):
firstRestingInterval = findRestingBlocks(series)[0]
if ((nextStrides[0][0] - minRestingInterval) > firstRestingInterval[0]):
return (firstRestingInterval[0], nextStrides[0][0])
def findStrideIntervals(series):
restingIntervals = findRestingBlocks(series)
strideIntervals = []
for i in range(len(restingIntervals) - 1):
if (restingIntervals[i][1] < restingIntervals[i+1][1]):
strideIntervals.append((restingIntervals[i][1], restingIntervals[i+1][1]))
return strideIntervals
def splitExerciseIntoStrides(df):
measurement = df['accY_foot-r']
otherFoot = df['accY_foot-l']
strideIntervals = findStrideIntervals(measurement)
# in case of complete first stride being present but starting with left foot,
# take its start until first already measured stride
firstStride = findFirstStride(otherFoot, strideIntervals)
if (firstStride):
strideIntervals = [firstStride] + strideIntervals
splittedExercise = [df]
splittedExercise += list(map(lambda interval: df[interval[0] : interval[1]], strideIntervals))
return splittedExercise
###Output
_____no_output_____
###Markdown
Normalize Strides
###Code
normalizedStrideLength = 10
def interpolateStride(stride):
difference = normalizedStrideLength - len(stride)
return stride.append(pd.DataFrame([[0 for column in stride.columns]] * difference, columns=stride.columns), ignore_index=True)
def resampleStride(stride):
absStride = stride
return absStride.groupby(pd.cut(absStride.index, np.linspace(absStride.index[0], absStride.index[len(absStride) - 1], normalizedStrideLength + 1))).median()
def normalizeStride(stride):
for column in stride:
        stride[column] -= stride[column].min()  # shift the column minimum to 0 before scaling by the max
if (stride[column].max() > 0):
stride[column] /= stride[column].max()
def normalizeStrides(strides):
'''
bring strides to same length by interpolating strides that are too short and resampling strides that are too long
expects a list of stride dataframes
'''
for i, stride in enumerate(strides):
if (len(stride) > normalizedStrideLength):
strides[i] = resampleStride(stride)
elif (len(stride) < normalizedStrideLength):
strides[i] = interpolateStride(stride)
normalizeStride(strides[i])
return strides
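# Small synthetic check of the per-column scaling in normalizeStride: after the
# call, each column is shifted and divided so that it spans the [0, 1] range.
_demo_stride = pd.DataFrame({'a': [-2.0, 0.0, 2.0], 'b': [1.0, 2.0, 3.0]})
normalizeStride(_demo_stride)
print(_demo_stride)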
###Output
_____no_output_____
###Markdown
Combine date with calibration and sync
###Code
minExerciseLength = 300 # 10ms * minExerciseLength
expectedExerciseCount = 6
def alignAll(measurements):
alignAccelerationYWithRightFoot(measurements, 'hip-r', 'y')
alignAccelerationYWithRightFoot(measurements, 'hip-l', 'y')
alignAccelerationYWithRightFoot(measurements, 'foot-l', 'y')
alignAccelerationYWithRightFoot(measurements, 'knee-l', 'y')
alignAccelerationYWithRightFoot(measurements, 'knee-r', 'Y')
def calibrateAll(measurements):
for location in measurements.values():
for column in location.columns:
calibrate(location[column])
def resetTimePointZero(mergedDf):
firstIndex = max([mergedDf[column].first_valid_index() for column in mergedDf])
lastIndex = min([mergedDf[column].last_valid_index() for column in mergedDf])
return mergedDf[firstIndex:lastIndex]
def loadSyncedMeasurements(path):
measurements = loadMeasurements(path)
calibrateAll(measurements)
alignAll(measurements)
mergedDf = pd.DataFrame()
for measurement in measurements.values():
mergedDf = mergedDf.join(measurement, how='outer')
mergedDf = resetTimePointZero(mergedDf).reset_index().drop('index', axis='columns')
exercisesAndTurns = splitDataFrameIntoExercises(mergedDf, 'accY_foot-r')
exercises = list(filter(lambda exerciseOrTurn: len(exerciseOrTurn) > minExerciseLength, exercisesAndTurns))
    if (len(exercises) != expectedExerciseCount):
print("Unexpected exercise count: ", len(exercises))
data = [mergedDf] + exercises
return data
###Output
_____no_output_____
###Markdown
Classification Similarity of Time Series Dynamic Time Warping
###Code
def DTWDistance(series1, series2):
windowSize = 4
DTW = {}
windowSize = max(windowSize, abs(len(series1) - len(series2)))
for i in range(-1, len(series1)):
for j in range(-1, len(series2)):
DTW[(i, j)] = float('inf')
DTW[(-1, -1)] = 0
for i in range(len(series1)):
for j in range(max(0, i - windowSize), min(len(series2), i + windowSize)):
dist = (series1[i] - series2[j])**2
DTW[(i, j)] = dist + min(DTW[(i - 1, j)], DTW[(i, j - 1)], DTW[(i - 1, j - 1)])
return math.sqrt(DTW[len(series1) - 1, len(series2) - 1])
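# Minimal usage example on two short synthetic series: identical series have a
# DTW distance of 0, while a shifted copy gives a small positive distance.
import math  # math is already required by DTWDistance; imported here defensively
print(DTWDistance([0, 1, 2, 3], [0, 1, 2, 3]))  # 0.0
print(DTWDistance([0, 1, 2, 3], [1, 2, 3, 4]))  # > 0.0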
###Output
_____no_output_____
###Markdown
Classification algorithm K-NN
###Code
def collectVotesForStride(knnResults, strideIndex):
strideVotingResults = []
for key in knnResults.keys():
cluster = knnResults[key]['results'][strideIndex]
votingIndex = next((index for (index, vote) in enumerate(strideVotingResults) if vote['cluster'] == cluster), None)
if (type(votingIndex) is int):
strideVotingResults[votingIndex]['count'] += knnResults[key]['precision'][str(cluster)]
else:
strideVotingResults.append({'cluster': cluster, 'count': knnResults[key]['precision'][str(cluster)]})
return strideVotingResults
def voteOnResults(knnResults):
votingResults = []
for strideIndex in range(len(knnResults[list(knnResults.keys())[0]]['results'])):
strideVotes = collectVotesForStride(knnResults, strideIndex)
mostFrequentVote = max(strideVotes, key=lambda x:x['count'])
votingResults.append(mostFrequentVote['cluster'])
return votingResults
def trainKnnForSensor(trainingData, sensorKey):
targetClasses = [example[-1] for example in trainingData]
    cleanData = np.array([example[:-1] for example in trainingData])  # strip the trailing class label from the features
    model = KNeighborsClassifier(n_neighbors=5, weights='distance', metric=DTWDistance)
    model.fit(cleanData, targetClasses)
with open(os.path.join(modelPath, sensorKey + '.txt'), 'wb') as file:
pickle.dump(model, file)
def trainKnn(trainStrides):
try:
shutil.rmtree(modelPath)
except OSError:
pass
os.mkdir(modelPath)
for i, key in enumerate(trainStrides):
trainKnnForSensor(trainStrides[key], key)
print('trained knn for sensor ', i+1 ,' of ', len(trainStrides.keys()), ' : ', key)
print()
print('finished training')
def testKnnForSensor(testData, sensorKey):
with open(os.path.join(modelPath, sensorKey + '.txt'), 'rb') as file:
model = pickle.load(file)
testClasses = [example[-1] for example in testData]
    cleanData = np.array([example[:-1] for example in testData])  # strip the trailing class label, matching the training features
    results = model.predict(cleanData)
report = classification_report(testClasses, results, output_dict=True)
print('accuracy of', sensorKey, ': ')
print(report)
if 'accuracy' in report: del report['accuracy']
return {
'results': results,
'precision': {classLabel: report[classLabel]['precision'] for classLabel in report}
}
def testKnn(testStrides):
if not os.path.isdir(modelPath):
        raise FileNotFoundError('There is no trained model.')
knnResults = {key:[] for key in testStrides.keys()}
for i, key in enumerate(testStrides):
knnResults[key] = testKnnForSensor(testStrides[key], key)
print('tested knn for sensor ', i+1 ,' of ', len(testStrides.keys()), ' : ', key)
print()
print('finished testing')
print()
votingResults = voteOnResults(knnResults)
print('calculated votingResults')
return classification_report(testStrides[list(testStrides.keys())[0]][:,-1],votingResults)
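# Tiny synthetic check of the precision-weighted voting (cluster labels are
# floats, matching createSensorNumpyArray further below): sensorA votes for
# cluster 1.0 with weight 0.9, sensorB for 2.0 with weight 0.4, so the single
# stride should be assigned to cluster 1.0.
_demo_knn_results = {
    'sensorA': {'results': [1.0], 'precision': {'1.0': 0.9, '2.0': 0.5}},
    'sensorB': {'results': [2.0], 'precision': {'1.0': 0.6, '2.0': 0.4}},
}
print(voteOnResults(_demo_knn_results))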
###Output
_____no_output_____
###Markdown
Prepare Data for ClassificationExpected classification data input: arrays of time series for train and test.Since we have multiple time series per stride (multiple sensors), we will put the time series into a dict with sensor keys and one array with all series each. The order is thus important and has to stay consistent to identify complete strides again.A majority vote will be performed on the sensors in the end to classify the strides. Format Data and Split Training and Testing
###Code
trainRatio = 0.7 # share of data that should go into training. e.g. 0.7: 70% training, 30% testing
def initializeSensorDict(strides):
return {column:[] for column in strides['normal'][0]}
def listDictToNumpyArrayDict(dictionary):
for key in dictionary:
dictionary[key] = np.array(dictionary[key])
return dictionary
def createSensorNumpyArray(stride, sensor, exerciseNumber):
clusterLabel = float(exerciseNumber + 1)
strideSensorWithLabel = stride[sensor].append(pd.Series([clusterLabel]), ignore_index=True)
return np.array(strideSensorWithLabel)
def shuffleStrides(stridesDict):
for exercise in stridesDict:
random.shuffle(stridesDict[exercise])
return stridesDict
def getTrainAndTestStrides(labelledStrides):
train = initializeSensorDict(labelledStrides)
test = initializeSensorDict(labelledStrides)
labelledStrides = shuffleStrides(labelledStrides)
for exerciseNumber, exercise in enumerate(labelledStrides):
trainEndIndex = math.floor(len(labelledStrides[exercise]) * trainRatio)
for stride in labelledStrides[exercise][:trainEndIndex]:
for sensor in stride.columns:
train[sensor].append(createSensorNumpyArray(stride, sensor, exerciseNumber))
for stride in labelledStrides[exercise][trainEndIndex:]:
for sensor in stride.columns:
test[sensor].append(createSensorNumpyArray(stride, sensor, exerciseNumber))
train = listDictToNumpyArrayDict(train)
test = listDictToNumpyArrayDict(test)
return train, test
def saveTrainTestSplit(labelledStrides):
trainStrides, testStrides = getTrainAndTestStrides(labelledStrides)
# most secure way to ensure that files are deleted if extant
try:
os.remove(trainingDataPath)
except OSError:
pass
try:
os.remove(testingDataPath)
except OSError:
pass
with open(trainingDataPath, 'wb') as file:
pickle.dump(trainStrides, file)
with open(testingDataPath, 'wb') as file:
pickle.dump(testStrides, file)
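# Tiny illustration of the per-sensor sample layout built above (synthetic
# stride, using the same right-foot column name as the rest of the notebook):
# the normalized time series comes first and the cluster label is appended as
# the last element. This reuses the pandas Series.append API from
# createSensorNumpyArray, i.e. it assumes the same pandas < 2.0 as the code above.
_demo_stride = pd.DataFrame({'accY_foot-r': np.linspace(0.0, 1.0, normalizedStrideLength)})
print(createSensorNumpyArray(_demo_stride, 'accY_foot-r', 0))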
###Output
_____no_output_____
###Markdown
Load Data
###Code
def loadData():
labelledStrides = {
'normal': [],
'pelvic displacement': [],
'limping': [],
'shuffling': [],
'small steps': [],
'insecure walking': []
}
subjectPaths = [folderTuple[0] for folderTuple in os.walk(dataPath)][1:]
for subjectPath in subjectPaths:
print(subjectPath)
loadedMeasurement = loadSyncedMeasurements(subjectPath)
if (len(loadedMeasurement) == expectedExerciseCount + 1):
for i, exercise in enumerate(loadedMeasurement[1:]):
strides = splitExerciseIntoStrides(exercise)[1:]
print(list(labelledStrides.keys())[i], len(strides))
labelledStrides[list(labelledStrides.keys())[i]] += normalizeStrides(strides)
return labelledStrides
###Output
_____no_output_____
###Markdown
Execution We are persisting the training and testing data so that the data loading process does not have to be performed multiple times when testing different clustering parameters.**PERFORMING THE FOLLOWING CELL IS ONLY NECESSARY IF THE TESTING AND TRAINING DATA SHOULD BE UPDATED / NEWLY SAVED**
###Code
if runLoadData:
saveTrainTestSplit(loadData())
###Output
_____no_output_____
###Markdown
Cluster Data We are persisting the trained model so that the training process does not have to be performed multiple times when testing different clustering parameters.**PERFORMING THE FOLLOWING CELL IS ONLY NECESSARY IF THE TRAINED MODEL SHOULD BE UPDATED / NEWLY SAVED**
###Code
if runTrainModel:
with open(trainingDataPath, 'rb') as file:
trainStrides = pickle.load(file)
trainKnn(trainStrides)
if runTestModel:
with open(testingDataPath, 'rb') as file:
testStrides = pickle.load(file)
print(testKnn(testStrides))
###Output
_____no_output_____ |
search/Binary search practice.ipynb | ###Markdown
Binary search practiceLet's get some practice doing binary search on an array of integers. We'll solve the problem two different ways—both iteratively and recursively.Here is a reminder of how the algorithm works:1. Find the center of the list (try setting an upper and lower bound to find the center)2. Check to see if the element at the center is your target.3. If it is, return the index.4. If not, is the target greater or less than that element?5. If greater, move the lower bound to just above the current center6. If less, move the upper bound to just below the current center7. Repeat steps 1-6 until you find the target or until the bounds are the same or cross (the upper bound is less than the lower bound). Problem statement:Given a sorted array of integers, and a target value, find the index of the target value in the array. If the target value is not present in the array, return -1. Iterative solutionFirst, see if you can code an iterative solution (i.e., one that uses loops). If you get stuck, the solution is below.
###Code
def binary_search(array, target):
'''Write a function that implements the binary search algorithm using iteration
args:
array: a sorted array of items of the same type
target: the element you're searching for
returns:
int: the index of the target, if found, in the source
-1: if the target is not found
'''
start_index = 0
end_index = len(array) - 1
while start_index <= end_index:
mid_index = (start_index + end_index)//2 # integer division in Python 3
mid_element = array[mid_index]
if target == mid_element: # we have found the element
return mid_index
elif target < mid_element: # the target is less than mid element
end_index = mid_index - 1 # we will only search in the left half
else: # the target is greater than mid element
            start_index = mid_index + 1 # we will search only in the right half
return -1
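# Quick inline check with one value that is present and one that is missing
# (the mid_index + 1 update above is what guarantees the loop terminates).
print(binary_search([1, 3, 5, 7], 7))  # expected: 3
print(binary_search([1, 3, 5, 7], 4))  # expected: -1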
###Output
_____no_output_____
###Markdown
Here's some code you can use to test the function:
###Code
def test_function(test_case):
answer = binary_search(test_case[0], test_case[1])
if answer == test_case[2]:
print("Pass!")
else:
print("Fail!")
array = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 6
index = 6
test_case = [array, target, index]
test_function(test_case)
###Output
Pass!
###Markdown
Recursive solutionNow, see if you can write a function that gives the same results, but that uses recursion to do so.
###Code
def binary_search_recursive(array, target, start_index, end_index):
'''Write a function that implements the binary search algorithm using recursion
args:
array: a sorted array of items of the same type
target: the element you're searching for
returns:
int: the index of the target, if found, in the source
-1: if the target is not found
'''
if start_index > end_index:
return -1
mid_index = (start_index + end_index)//2
mid_element = array[mid_index]
if mid_element == target:
return mid_index
elif target < mid_element:
return binary_search_recursive(array, target, start_index, mid_index - 1)
else:
return binary_search_recursive(array, target, mid_index + 1, end_index)
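# Convenience wrapper (an addition, not part of the original exercise) so the
# recursive version can be called with just the array and target, like the
# iterative version above.
def binary_search_recursive_simple(array, target):
    return binary_search_recursive(array, target, 0, len(array) - 1)
print(binary_search_recursive_simple([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 4))  # expected: 4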
###Output
_____no_output_____
###Markdown
Here's some code you can use to test the function:
###Code
def test_function(test_case):
    answer = binary_search_recursive(test_case[0], test_case[1], 0, len(test_case[0]) - 1)
if answer == test_case[2]:
print("Pass!")
else:
print("Fail!")
array = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
target = 4
index = 4
test_case = [array, target, index]
test_function(test_case)
###Output
_____no_output_____ |
Apartado 2. Extracciones.ipynb | ###Markdown
ExtractionsThis notebook covers the data extractions that complement the original datasets. A total of two extractions are performed:1. CIIU codes.2. Rurality of the municipalities of Colombia.The CIIU codes store, in a hierarchical way, information about the activities a company carries out. This information can be very useful for grouping companies according to several criteria.On the other hand, rurality is a category of municipalities that describes their degree of urbanization. We decided to extract the rurality of every Colombian municipality because earlier studies, such as Blažková and Dvouletý (2020), used rurality as a descriptor of zombie firms. Extraction of CIIU codesThe following section deals with the extraction of the CIIU activity codes in Colombia from DANE data. The extraction is based on the publication on the following [institute page](https://www.dane.gov.co/index.php/sistema-estadistico-nacional-sen/normas-y-estandares/nomenclaturas-y-clasificaciones/clasificaciones/clasificacion-industrial-internacional-uniforme-de-todas-las-actividades-economicas-ciiu). The copy used as the data source is available in the file: **data/EstructuraDetalladaCIIU.xls.**The CIIU code is made up of a set of digits that identify, hierarchically, the company's operating activity.The resulting dataset has the following columns:* Original CIIU code* Seccion (section)* Division* Grupo (group)* Clase (class)For example:The CIIU code **A0111** denotes:* Section A: AGRICULTURA, GANADERÍA, CAZA, SILVICULTURA Y PESCA* Division 01: Agricultura, ganadería, caza y actividades de servicios conexas* Group 011: Cultivos agrícolas transitorios* Class 0111: Cultivo de cereales (excepto arroz), legumbres y semillas oleaginosasThe following algorithm builds the resulting dataset from the data source published by DANE.
###Code
# Importes necesarios para el cuaderno
import xlrd
import re
import pandas as pd
import unidecode
import matplotlib as mpl
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import Image
# Especificar el máximo de filas del excel
MAX_ROWS = 713
# Leer los datos
book = xlrd.open_workbook("data/EstructuraDetalladaCIIU.xls")
sh = book.sheet_by_index(0)
# Función para extraer la información jerárquica del excel de códigos CIIU
def extract_level(parent, column, regex):
items = []
for row in range(parent['start'], parent['end']+1):
cell = sh.cell(row, column)
matches = re.match(regex, str(cell.value))
if (matches):
item = matches.group(1)
if (len(items) > 0):
items[-1]['end'] = row-1
items = items + [{'code':item,'desc':sh.cell(row,3).value,'start':row,'end':-1}]
items[-1]['end'] = parent['end']
return items
def build_ciiu(seccion, seccionDesc, division, divisionDesc, grupo, grupoDesc, clase, claseDesc):
return {'Ciiu':seccion+clase, 'Seccion': seccion, 'SeccionDesc':seccionDesc, 'Division': division, 'DivisionDesc': divisionDesc, \
'Grupo': grupo, 'GrupoDesc': grupoDesc, 'Clase': clase, 'ClaseDesc': claseDesc}
# Proceso de extracción jerárquica de los códigos ciiu
parent = {'start':3,'end':MAX_ROWS}
result = []
secciones = extract_level(parent, 0, r'^SECCIÓN\s*(\w+)\s*$')
for seccion in secciones:
divisiones = extract_level(seccion, 0, r'^(\d+)(?:\.0)?$')
seccion['divisiones'] = divisiones
for division in divisiones:
groups = extract_level(division, 1, r'^(\d+)(?:\.0)?$')
division['grupos'] = groups
for group in groups:
clases = extract_level(group, 2, r'^(\d+)(?:\.0)?$')
group['clases'] = clases
for clase in clases:
result = result + [build_ciiu(seccion['code'], seccion['desc'], division['code'], division['desc'], group['code'], group['desc'], clase['code'], clase['desc'])]
df = pd.DataFrame(result)
df.to_csv('data/ExtraccionCIIU.csv', sep=';', encoding='utf-8', index=False)
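# Quick look at the extracted hierarchy: the first classes of section A, the
# agriculture/livestock section used as the example above. The exact code
# strings depend on how the source Excel stores them, so no values are asserted.
print(df[df['Seccion'] == 'A'][['Ciiu', 'Division', 'Grupo', 'Clase']].head())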
df.head()
###Output
_____no_output_____
###Markdown
Extraction of the rural categorySecondly, we can extract the rural category of each Colombian municipality. There are several definitions of how to separate territories into urban and rural; in most cases the total population of the municipality is used together with a cut-off point.However, in Colombia there is [a study](https://colaboracion.dnp.gov.co/CDT/Estudios%20Econmicos/2015ago6%20Documento%20de%20Ruralidad%20-%20DDRS-MTC.pdf) carried out by the Dirección de Desarrollo Rural Sostenible (DDRS) whose purpose was to create rural categories specific to the country.The study uses the population of the municipal seat (the cabecera, or urban perimeter), the population density, and the share of the population living in the "resto" zone (the population outside the cabecera) to define a total of 4 categories:* Urbana.* Intermedia.* Rural.* Rural dispersa.The classification logic is summarized in the following table:
###Code
# Mostrar una imagen
Image(filename='assets/CategoriasRurales.PNG')
###Output
_____no_output_____
###Markdown
Although the study includes a list of categorized municipalities, the data are from 2014 and, in addition, extracting them from a PDF file is not trivial. Therefore, the rurality is computed again here using more up-to-date population data.* The population of each municipality (as well as of the cabecera and the resto zone) is extracted from DANE at the following [link](https://www.dane.gov.co/index.php/estadisticas-por-tema/demografia-y-poblacion/censo-nacional-de-poblacion-y-vivenda-2018)* The population density of each municipality, extracted from the Centro de Recursos para el Análisis de Conflictos (CERAC) [link](https://www.cerac.org.co/es/publicaciones/libros/viejasguerras/anexoestadistico.html)As a final note, keep in mind that some municipalities have repeated names, so the department name and the municipality name must be used together as the unique identifier of a municipality.There is a considerable amount of erroneous data in the sources found (wrong municipality and department names, some municipality areas not reported, different names depending on the file...). All of these errors are also handled by the following code:
###Code
# Abrir la fuente de datos con la población total y de cobertura de los municipios colombianos
book = xlrd.open_workbook("data/CNPV-2018-Poblacion-Ajustada-Por-Cobertura.xls")
sh = book.sheet_by_index(2)
poblaciones = []
# Leer el municipio, el total de habitantes, los habitantes de la cobertura y el departamento de cada fila de datos
for row in range(9, 1131):
# Se guardan los nombres en minúsculas, sin carácteres especiales (como acentos) y eliminando espacios redundantes
departamento = unidecode.unidecode(str(sh.cell(row,1).value).lower()).strip()
municipio = unidecode.unidecode(str(sh.cell(row, 2).value).lower()).strip()
poblacionTotal = sh.cell(row, 3).value
poblacionCabecera = sh.cell(row, 4).value
poblaciones = poblaciones + \
[{'Municipio':municipio, 'Departamento':departamento, 'poblacionTotal':poblacionTotal, 'poblacionCabecera': poblacionCabecera}]
# Convertir la lista a dataset de pandas
poblaciones = pd.DataFrame(poblaciones)
# Arregla algunos municipios con el sufijo (ANM), que significa áreas no metropolitanas, cuya población en la cabecera es cero.
def arreglar_sufijo_anm(df):
def fix_anm(val):
if (val.endswith('(anm)')):
return val[0:-5].strip()
else:
return val
df['Municipio'] = df['Municipio'].map(fix_anm)
# Arreglar el sufijo ANM
arreglar_sufijo_anm(poblaciones)
# Arreglar las diferencias entre la nomenclatura de Bogotá
poblaciones.at[poblaciones['Municipio'] == 'bogota, d.c.','Departamento'] ="bogota"
poblaciones.at[poblaciones['Municipio'] == 'bogota, d.c.','Municipio'] ="bogota"
# Arreglar el municipio de Tumaco
poblaciones.at[poblaciones['Municipio'] == 'san andres de tumaco','Municipio'] = 'tumaco'
# Lectura de los datos de la densidad poblacional por cada municipio
book = xlrd.open_workbook("data/DensidadPoblacionalv2.xls")
sh = book.sheet_by_index(0)
superficies = []
for row in range(1, 1120):
municipio = unidecode.unidecode(str(sh.cell(row,5).value).lower()).strip()
departamento = unidecode.unidecode(str(sh.cell(row,4).value).lower()).strip()
superficie = sh.cell(row,10).value
superficies = superficies + [{'Municipio':municipio,'Departamento':departamento,'Superficie':superficie}]
# Crear el dataframe a partir de la lista
superficies = pd.DataFrame(superficies)
# Cambiar el departamento del archipielago por su nomenclatura corta
superficies.at[superficies['Departamento'] == 'archipielago de san andres, providencia y santa catalina','Departamento'] = 'archipielago de san andres'
# Municipios con el departamento erróneo que debería ser Antioquía
municipios_departamento_antioquia = ['medellin', 'abejorral', 'abriaqui', 'alejandria', 'amaga',\
'amalfi', 'andes', 'angelopolis', 'angostura', 'anori',\
'santafe de antioquia', 'anza', 'apartado', 'arboletes',\
'belmira', 'bello', 'betania',\
'copacabana', 'dabeiba', 'don matias',\
'ebejico', 'el bagre', 'entrerrios', 'envigado', 'fredonia',\
'frontino', 'liborina', 'maceo', 'marinilla',\
'montebello', 'murindo', 'mutata', 'necocli', 'nechi', 'bogota']
# Municipios con el departamento erróneo que debería ser Atlántico
municipios_departamento_atlantico = ['campo de la cruz', 'palmar de varela', 'piojo',\
'polonuevo', 'ponedera', 'repelon',\
'sabanagrande', 'santa lucia', 'santo tomas',\
'tubara', 'usiacuri']
# Arreglar municipios con departamento erróneo (detectados manualmente)
superficies.at[superficies['Municipio'].isin(municipios_departamento_antioquia), 'Departamento'] = 'antioquia'
superficies.at[superficies['Municipio'].isin(municipios_departamento_atlantico), 'Departamento'] = 'atlantico'
# Arreglar el resto de errores detectados
superficies.at[superficies['Municipio'] == 'bogota', 'Departamento'] = 'bogota'
superficies.at[(superficies['Municipio'] == 'barbosa') & (superficies['Departamento'] == 'meta'),'Departamento'] = 'antioquia'
superficies.at[(superficies['Municipio'] == 'armenia') & (superficies['Departamento'] == 'meta'),'Departamento'] = 'antioquia'
superficies.at[(superficies['Municipio'] == 'argelia') & (superficies['Departamento'] == 'meta'),'Departamento'] = 'antioquia'
superficies.at[(superficies['Municipio'] == 'barbosa') & (superficies['Departamento'] == 'meta'),'Departamento'] = 'antioquia'
superficies.at[(superficies['Municipio'] == 'betulia') & (superficies['Departamento'] == 'meta'),'Departamento'] = 'antioquia'
superficies.at[(superficies['Municipio'] == 'concepcion') & (superficies['Departamento'] == 'narino'),'Departamento'] = 'antioquia'
superficies.at[(superficies['Municipio'] == 'concordia') & (superficies['Departamento'] == 'narino'),'Departamento'] = 'antioquia'
superficies.at[(superficies['Municipio'] == 'la union') & (superficies['Departamento'] == 'norte de santander'),'Departamento'] = 'antioquia'
superficies.at[(superficies['Municipio'] == 'narino') & (superficies['Departamento'] == 'norte de santander'),'Departamento'] = 'antioquia'
superficies.at[(superficies['Municipio'] == 'sabanalarga') & (superficies['Departamento'] == 'putumayo'),'Departamento'] = 'atlantico'
superficies.at[(superficies['Municipio'] == 'candelaria') & (superficies['Departamento'] == 'arauca'),'Departamento'] = 'atlantico'
superficies.at[(superficies['Municipio'] == 'puerto colombia') & (superficies['Departamento'] == 'casanare'),'Departamento'] = 'atlantico'
superficies.at[(superficies['Municipio'] == 'chibolo'),'Municipio'] = 'chivolo'
# Operación merge entre poblaciones y superficies
result = poblaciones.merge(superficies, on=['Municipio','Departamento'],how='left')
# Llenar las superficies faltantes de forma manual (fuente de las superficies: Wikipedia)
result.at[result['Municipio'] == 'guachene','Superficie'] = 392.21
result.at[result['Municipio'] == 'norosi','Superficie'] = 407.2
result.at[result['Municipio'] == 'san jose de ure(1)','Superficie'] = 516.19
result.at[result['Municipio'] == 'tuchin','Superficie'] = 32
# Comprobar que el número de resultados sin superfície sea 0
print("Resultados sin la variable de superficie informada: {0}".format(result[result['Superficie'].isna()]['Superficie'].count()))
# Mostrar la estructura del dataset de resultado
result.head()
###Output
Resultados sin la variable de superficie informada: 0
###Markdown
Once the populations and the surface areas have been joined, it is possible to compute the category of each municipality from its total population, its cabecera population and its area in square kilometres.
###Code
# Función del cálculo de categoria rural
def calcular_categoria_municipio(habitantesTotales, habitantesCabecera, superficie):
densidad = habitantesTotales / superficie
zonaResto = (habitantesTotales - habitantesCabecera) / habitantesTotales
if (habitantesCabecera > 100000):
return 'Urbana'
elif (habitantesCabecera >= 25000 and habitantesCabecera <= 100000):
if (densidad > 10):
return 'Intermedia'
else:
return 'Rural'
elif (habitantesCabecera < 25000):
if (densidad > 100):
return 'Intermedia'
elif (densidad > 50 and densidad <= 100):
if (zonaResto < 0.7):
return 'Intermedia'
else:
return 'Rural'
elif (densidad > 10 and densidad <= 50):
if (zonaResto < 0.7):
return 'Rural'
else:
return 'Rural disperso'
else:
return 'Rural disperso'
def calcular_categoria_fila(fila):
return calcular_categoria_municipio(fila['poblacionTotal'], fila['poblacionCabecera'], fila['Superficie'])
# Calcular la nueva columna y ver la cabecera de los datos
result['CategoriaRural'] = result.apply(calcular_categoria_fila, axis=1)
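# Sanity check of the thresholds from the table above using synthetic inputs:
# a large cabecera is always 'Urbana', and a small, sparsely populated
# municipality falls into 'Rural disperso'.
print(calcular_categoria_municipio(500000, 450000, 300))  # expected: Urbana
print(calcular_categoria_municipio(5000, 1000, 2000))     # expected: Rural disperso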
result.head()
###Output
_____no_output_____
###Markdown
Finally, we look at the distribution of the new variable and save the data to an output file: **data/ExtraccionRuralidad.csv**.
###Code
sns.countplot(data=result,x="CategoriaRural")
plt.title("Conteo de las categorias rurales de los municipios de Colombia")
plt.show()
# Guardar en csv
result.to_csv('data/ExtraccionRuralidad.csv', sep=';', encoding='utf-8', index=False)
###Output
_____no_output_____ |
examples/dask/notebooks/era5_zarr.ipynb | ###Markdown
Processing ERA5 data in Zarr FormatThis notebook demonstrates how to work with the ECMWF ERA5 reanalysis available as part of the AWS Public Dataset Program (https://registry.opendata.aws/ecmwf-era5/). This notebook utilizes Amazon SageMaker & AWS Fargate for providing an environment with a Jupyter notebook and Dask cluster. There is an example AWS CloudFormation template available at https://github.com/awslabs/amazon-asdi/tree/main/examples/dask for quickly creating this environment in your own AWS account to run this notebook. Python Imports
###Code
%matplotlib inline
import boto3
import botocore
import datetime
import matplotlib.pyplot as plt
import matplotlib
import xarray as xr
import numpy as np
import s3fs
import fsspec
import dask
from dask.distributed import performance_report, Client, progress
font = {'family' : 'sans-serif',
'weight' : 'normal',
'size' : 18}
matplotlib.rc('font', **font)
###Output
_____no_output_____
###Markdown
Scale out Dask Workers
###Code
ecs = boto3.client('ecs')
resp = ecs.list_clusters()
clusters = resp['clusterArns']
if len(clusters) > 1:
print("Please manually select your cluster")
cluster = clusters[0]
cluster
# Scale up the Fargate cluster
numWorkers=70
ecs.update_service(cluster=cluster, service='Dask-Worker', desiredCount=numWorkers)
ecs.get_waiter('services_stable').wait(cluster=cluster, services=['Dask-Worker'])
###Output
_____no_output_____
###Markdown
Set up the Dask Client to talk to our Fargate Dask Distributed Cluster
###Code
client = Client('Dask-Scheduler.local-dask:8786')
client
###Output
_____no_output_____
###Markdown
Open 2-m air temperature as a single dataset
###Code
def fix_accum_var_dims(ds, var):
    # Some variables like precip have extra time bounds variables, we drop them here to allow merging with other variables
# Select variable of interest (drops dims that are not linked to current variable)
ds = ds[[var]]
if var in ['air_temperature_at_2_metres',
'dew_point_temperature_at_2_metres',
'air_pressure_at_mean_sea_level',
'northward_wind_at_10_metres',
'eastward_wind_at_10_metres']:
ds = ds.rename({'time0':'valid_time_end_utc'})
elif var in ['precipitation_amount_1hour_Accumulation',
'integral_wrt_time_of_surface_direct_downwelling_shortwave_flux_in_air_1hour_Accumulation']:
ds = ds.rename({'time1':'valid_time_end_utc'})
else:
print("Warning, Haven't seen {var} varible yet! Time renaming might not work.".format(var=var))
return ds
@dask.delayed
def s3open(path):
fs = s3fs.S3FileSystem(anon=True, default_fill_cache=False,
config_kwargs = {'max_pool_connections': 20})
return s3fs.S3Map(path, s3=fs)
def open_era5_range(start_year, end_year, variables):
''' Opens ERA5 monthly Zarr files in S3, given a start and end year (all months loaded) and a list of variables'''
file_pattern = 'era5-pds/zarr/{year}/{month}/data/{var}.zarr/'
years = list(np.arange(start_year, end_year+1, 1))
months = ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12"]
l = []
for var in variables:
print(var)
# Get files
files_mapper = [s3open(file_pattern.format(year=year, month=month, var=var)) for year in years for month in months]
# Look up correct time dimension by variable name
if var in ['precipitation_amount_1hour_Accumulation']:
concat_dim='time1'
else:
concat_dim='time0'
# Lazy load
ds = xr.open_mfdataset(files_mapper, engine='zarr',
concat_dim=concat_dim, combine='nested',
coords='minimal', compat='override', parallel=True)
# Fix dimension names
ds = fix_accum_var_dims(ds, var)
l.append(ds)
ds_out = xr.merge(l)
return ds_out
%%time
ds = open_era5_range(1979, 2020, ["air_temperature_at_2_metres"])
print('ds size in GB {:0.2f}\n'.format(ds.nbytes / 1e9))
ds.info
###Output
ds size in GB 1529.06
###Markdown
The `ds.info` output above shows us that there are three dimensions to the data: lat, lon, and valid_time_end_utc; and one data variable: air_temperature_at_2_metres. Convert units to F from K
###Code
ds['air_temperature_at_2_metres'] = (ds.air_temperature_at_2_metres - 273.15) * 9.0 / 5.0 + 32.0
ds.air_temperature_at_2_metres.attrs['units'] = 'F'
###Output
_____no_output_____
###Markdown
Calculate the mean 2-m air temperature for all times
###Code
# calculates the mean along the time dimension
temp_mean = ds['air_temperature_at_2_metres'].mean(dim='valid_time_end_utc')
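# Nothing has been computed yet: temp_mean is backed by a lazy dask array (a
# task graph describing the work), not an in-memory numpy array.
print(type(temp_mean.data))  # expected: <class 'dask.array.core.Array'>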
###Output
_____no_output_____
###Markdown
The expressions above didn’t actually compute anything. They just build the dask task graph. To do the computations, we call the `persist` method:
###Code
temp_mean = temp_mean.persist()
progress(temp_mean)
###Output
_____no_output_____
###Markdown
Plot Average Surface Temperature
###Code
temp_mean.compute()
temp_mean.plot(figsize=(20, 10))
plt.title('1979-2020 Mean 2-m Air Temperature')
###Output
_____no_output_____
###Markdown
Repeat for standard deviation
###Code
temp_std = ds['air_temperature_at_2_metres'].std(dim='valid_time_end_utc')
temp_std = temp_std.persist()
progress(temp_std)
temp_std.compute()
temp_std.plot(figsize=(20, 10))
plt.title('1979-2020 Standard Deviation 2-m Air Temperature')
###Output
_____no_output_____
###Markdown
Plot temperature time series for points
###Code
# location coordinates
locs = [
{'name': 'Santa Barbara', 'lon': -119.70, 'lat': 34.42},
{'name': 'Colorado Springs', 'lon': -104.82, 'lat': 38.83},
{'name': 'Honolulu', 'lon': -157.84, 'lat': 21.29},
{'name': 'Seattle', 'lon': -122.33, 'lat': 47.61},
]
# convert westward longitudes to degrees east
for l in locs:
if l['lon'] < 0:
l['lon'] = 360 + l['lon']
locs
ds_locs = xr.Dataset()
air_temp_ds = ds
# interate through the locations and create a dataset
# containing the temperature values for each location
for l in locs:
name = l['name']
lon = l['lon']
lat = l['lat']
var_name = name
ds2 = air_temp_ds.sel(lon=lon, lat=lat, method='nearest')
lon_attr = '%s_lon' % name
lat_attr = '%s_lat' % name
ds2.attrs[lon_attr] = ds2.lon.values.tolist()
ds2.attrs[lat_attr] = ds2.lat.values.tolist()
ds2 = ds2.rename({'air_temperature_at_2_metres' : var_name}).drop(('lat', 'lon'))
ds_locs = xr.merge([ds_locs, ds2])
ds_locs.data_vars
###Output
_____no_output_____
###Markdown
Convert to dataframe
###Code
df_f = ds_locs.to_dataframe()
df_f.describe()
###Output
_____no_output_____
###Markdown
Plot temperature timeseries
###Code
ax = df_f.plot(figsize=(20, 10), title="ERA5", grid=1)
ax.set(xlabel='Date', ylabel='2-m Air Temperature (deg F)')
plt.show()
###Output
_____no_output_____
###Markdown
Cluster scale downWhen we are temporarily done with the cluster we can scale it down to save on costs
###Code
numWorkers=0
ecs.update_service(cluster=cluster, service='Dask-Worker', desiredCount=numWorkers)
ecs.get_waiter('services_stable').wait(cluster=cluster, services=['Dask-Worker'])
###Output
_____no_output_____ |
code/CoordleExample.ipynb | ###Markdown
How to setup Coordle index and use it
###Code
import pandas as pd
from coordle.backend import (CordDoc, Index, RecursiveDescentParser,
QueryAppenderIndex)
from gensim.models import Word2Vec
from os.path import join as join_path
from gensim.models.callbacks import CallbackAny2Vec
###Output
_____no_output_____
###Markdown
Gotta have this shit
###Code
class EpochSaver(CallbackAny2Vec):
'''Callback to save model after each epoch.'''
class DocEpochSaver(CallbackAny2Vec):
'''Callback to save model after each epoch.'''
# Load the last trained model
model = Word2Vec.load(join_path('data', 'cord-19-w2v.model'))
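# Optional sanity check of the loaded embeddings (the probe word is an
# assumption; any token present in the CORD-19 vocabulary would do).
if 'virus' in model.wv:
    print(model.wv.most_similar('virus', topn=3))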
###Output
_____no_output_____
###Markdown
Load the pd.DataFrame; note that we only load the first 8196 rows here to save time
###Code
df = pd.read_csv('data/cord-19-data.csv', nrows=8196)
###Output
_____no_output_____
###Markdown
Instantiate QueryAppenderIndex and index df
###Code
ai_index = QueryAppenderIndex(model.wv.most_similar, n_similars=1)
ai_index.build_from_df(
df=df,
uid='cord_uid',
title='title',
text='body_text',
verbose=True,
use_multiprocessing=True,
workers=-1
)
###Output
Text cleaning initilized on 16 workers
###Markdown
Searching with a proper query
###Code
docs, scores, errmsgs = ai_index.search('white AND retarded woman')
if errmsgs:
print(errmsgs)
else:
for doc, score in zip(docs, scores):
print(f'{doc.uid} {str(doc.title)[:70]:<70} {score:.4f}')
###Output
g42p98ku Gliopathy of Demyelinating and Non-Demyelinating Strains of Mouse Hepa 4.4402
c33lqwua Risky Bodies in the Plasma Bioeconomy: A Feminist Analysis 3.8496
sbixelq5 Proteomic Profiling of the Amniotic Fluid to Detect Inflammation, Infe 2.0619
g8m6n1u3 A DESCRIPTIVE STUDY OF PANDEMIC INFLUENZA A(H1N1)PDM09 IN BRAZIL, 2009 1.8174
i3ydg93k Biomarkers for Chronic Kidney Disease Associated with High Salt Intake 1.4376
thkzoirr A Flood of Health Functional Foods: What Is to Be Recommended? 1.4162
i5p3vbfr Worry experienced during the 2015 Middle East Respiratory Syndrome (ME 1.3704
y65lguub Migration and health in Southern Africa: 100 years and still circulati 1.3571
d28li5ao Congenital Malaria in China 1.2672
tuyac4oq Kikuchi-Fujimoto disease (histiocytic necrotizing lymphadenitis) with 1.1020
hwjkbpqp Abstracts from the 11th International Congress of Behavioral Medicine 1.0833
cd8u42xo A Case of Acute Disseminated Encephalomyelitis Associated with Hepatit 1.0751
zgz4snfu MicroRNA-155 enhances T cell trafficking and antiviral effector functi 1.0345
8mvsqh6t Severe community-acquired adenovirus pneumonia in an immunocompetent 4 1.0095
6l4s2gpb Frequency of tuberculosis among diabetic patients in the People’s Repu 0.9909
7uukeqea Secondary Syphilis in Cali, Colombia: New Concepts in Disease Pathogen 0.9699
19jo817j High Pulmonary Levels of IL-6 and IL-1β in Children with Chronic Suppu 0.9664
mg2zikyp Public views of the uk media and government reaction to the 2009 swine 0.9419
no58ucyq Hepatitis E Virus–Associated Meningoencephalitis in a Lung Transplant 0.9044
0knc49f6 Oxidative tissue injury in multiple sclerosis is only partly reflected 0.8758
h6dzaqul Twenty Years of Active Bacterial Core Surveillance 0.8452
gcjgfasj Factors influencing psychological distress during a disease epidemic: 0.8125
shnejtjt Candidiasis and other oral mucosal lesions during and after interferon 0.8079
zwfxnd7r Imaging Findings in Patients With H1N1 Influenza A Infection 0.8074
xn6n63fw Changes in Cytokine Levels and NK Cell Activation Associated with Infl 0.7870
a9wmor1i The crazy-paving pattern: a radiological-pathological correlation 0.7863
6eer4qgr Persistent Human Cosavirus Infection in Lung Transplant Recipient, Ita 0.7489
vocsxblm Towards Equity in Health: Researchers Take Stock 0.7289
bnqmwst8 Rotavirus Disrupts Calcium Homeostasis by NSP4 Viroporin Activity 0.7281
d0ybqh8n Crossed fused renal ectopia in a Persian cat 0.7182
sqe7h05t Human Challenge Pilot Study with Cyclospora cayetanensis 0.7179
dp6vzgef ELR(+) chemokine signaling in host defense and disease in a viral mode 0.7060
rpnuauwz FTY720 (fingolimod) modulates the severity of viral-induced encephalom 0.6829
j33faib3 De novo Fatty Acid Biosynthesis Contributes Significantly to Establish 0.6676
10xqdm7m Pertussis in infants: an underestimated disease 0.6527
tplf7fcj Habitat disturbance results in chronic stress and impaired health stat 0.6442
xcd3795e Incidence of acute disseminated encephalomyelitis in the Jiangsu provi 0.6434
iodatzo3 Critically ill patients with Middle East respiratory syndrome coronavi 0.6319
qatiqnac Life threatening pneumonia in a lupus patient: a case report 0.6262
eb3asgvi Lobar flexible fiberoptic lung lavage: therapeutic benefit in severe r 0.6195
p5wmqmvj Rhinovirus Associated Severe Respiratory Failure in Immunocompetent Ad 0.6184
l7hscca3 Anti-tumor effects of Abnormal Savda Munziq on the transplanted cervic 0.6061
cvwf63zc Zika Virus: What Have We Learnt Since the Start of the Recent Epidemic 0.6052
d5d8amk7 New Approaches to Preventing, Diagnosing, and Treating Neonatal Sepsis 0.6024
texk4z76 Emergent severe acute respiratory distress syndrome caused by adenovir 0.5965
h1hriihu Interleukin‐10 is a critical regulator of white matter lesion containm 0.5941
l7rn00vq Abstracts from the 36th Annual Meeting of the Society of General Inter 0.5892
gfn7d8wm Does the human immune system ever really become “senescent”? 0.5880
4al57prg Improving Biosurveillance Systems to Enable Situational Awareness Duri 0.5667
28uqefrg Agent based modeling of Treg-Teff cross regulation in relapsing-remitt 0.5519
52sbckm3 Establishment of minimal positive-control conditions to ensure brain s 0.5421
3meiks6o Unrealistic Optimism, Sex, and Risk Perception of Type 2 Diabetes Onse 0.5104
6anybxnk RNase L Mediated Protection from Virus Induced Demyelination 0.5093
v08d3oag CXCR7 antagonism prevents axonal injury during experimental autoimmune 0.5063
n6qipu17 Glial response during cuprizone-induced de- and remyelination in the C 0.5057
eva0soja Monkeypox Virus Infections in Small Animal Models for Evaluation of An 0.4996
cbelhu32 Intrathecal Humoral Immunity to Encephalitic RNA Viruses 0.4935
yxne28f0 Using HIV Networks to Inform Real Time Prevention Interventions 0.4860
qww6pe61 Pandemic Influenza Planning in the United States from a Health Dispari 0.4773
57ghjur1 Abstracts from the 37th Annual Meeting of the Society of General Inter 0.4722
ow2xqhmp A Correlation between the Severity of Lung Lesions on Radiographs and 0.4647
gmznmdgh Sex- and age-dependent association of SLC11A1 polymorphisms with tuber 0.4574
gvq8uyfk Detecting the emergence of novel, zoonotic viruses pathogenic to human 0.4564
mvpz1nv9 From fish to man: understanding endogenous remyelination in CNS demyel 0.4504
lzquegwc Respiratory failure presenting in H1N1 influenza with Legionnaires dis 0.4478
91bldl8x The Relationships between Respiratory Virus Infection and Aminotransfe 0.4445
8iysset3 Leukemia/lymphoma‐related factor (LRF) exhibits stage‐ and context‐dep 0.4416
6nrfipz8 TREM2 in Neurodegenerative Diseases 0.4306
ap0zwzzl Transcriptomic Profiling in Childhood H1N1/09 Influenza Reveals Reduce 0.4301
tg0j6huk Impact of MBL and MASP-2 gene polymorphism and its interaction on susc 0.4277
p1ajw8mq Hospitalized adult patients with 2009 influenza A(H1N1) in Beijing, Ch 0.4259
5rmhjbri Clinical and Epidemiologic Characteristics of Hospitalized Patients wi 0.4239
gthanxy9 Multiple Sclerosis: The Role of Cytokines in Pathogenesis and in Thera 0.4066
b7km2oy1 Education to Action: Improving Public Perception of Bats 0.4026
na8xecga Choindroitinase ABC I-Mediated Enhancement of Oncolytic Virus Spread a 0.3960
fhm8abxp Analysis on the Pathogenesis of Symptomatic Pulmonary Embolism with Hu 0.3901
at4j87fn Molecular Imaging of Influenza and Other Emerging Respiratory Viral In 0.3849
qzm9wgde Macrophages and cytokines in the early defence against herpes simplex 0.3844
sdumq61z The great opportunity: Evolutionary applications to medicine and publi 0.3838
nvwfigyr Respiratory viral infections are underdiagnosed in patients with suspe 0.3805
104sqoxz Composition and Function of Haemolymphatic Tissues in the European Com 0.3797
8ckb20hi Detection of Plant DNA in the Bronchoalveolar Lavage of Patients with 0.3786
5mtmvl7r Potential Triggers of MS 0.3767
z8cbbit3 Pandemic Influenza Virus 2009 H1N1 and Adenovirus in a High Risk Popul 0.3685
5omt75qr Parasites or Cohabitants: Cruel Omnipresent Usurpers or Creative “Émin 0.3675
pjbr6yl2 Abstracts from the 12th International Symposium on NeuroVirology: Octo 0.3667
sr53cekh Distinguishing Characteristics between Pandemic 2009–2010 Influenza A 0.3665
kb91eagd Clinical Features and Courses of Adenovirus Pneumonia in Healthy Young 0.3625
h5o2ksfk In memory of Patrick Manson, founding father of tropical medicine and 0.3568
oz7hras7 Clomiphene and Its Isomers Block Ebola Virus Particle Entry and Infect 0.3534
nj1p4ehx T-cell-mediated immune response to respiratory coronaviruses 0.3525
jrmfl3te PTH[1-34] improves the effects of core decompression in early-stage st 0.3520
njzuu4xl Balkan endemic nephropathy—current status and future perspectives 0.3443
xbfguoj4 Multiple sclerosis: experimental models and reality 0.3437
gbf3ivzc Introduction of a point mutation into an HLA class I single-chain trim 0.3384
o6rwqnu2 Sensitization with vaccinia virus encoding H5N1 hemagglutinin restores 0.3340
hm9wd817 A Comprehensive Review of Common Bacterial, Parasitic and Viral Zoonos 0.3334
rnkawxrq Healthcare workers' attitudes to working during pandemic influenza: a 0.3301
es3hubn4 Lectins: production and practical applications 0.3273
7uxbkru9 Porcine Reproductive and Respiratory Syndrome Virus Infection Induces 0.3262
sasmwpg6 Detection of respiratory syncytial virus and rhinovirus in healthy inf 0.3191
acei0pn8 Inflammatory monocytes damage the hippocampus during acute picornaviru 0.3190
6kkqoh7f Apes, lice and prehistory 0.3184
cuaiyw56 Probiotic Lactobacillus rhamnosus GG mono-association suppresses human 0.3181
vrzp0516 Imaging Axonal Degeneration and Repair in Preclinical Animal Models of 0.3165
1hcp36cw Etanercept for steroid-refractory acute graft-versus-host disease: A s 0.3149
kt9ja8nc Virus-mediated autoimmunity in Multiple Sclerosis 0.3098
bha8vsce Bronchiectasis in Children: Current Concepts in Immunology and Microbi 0.2987
6vpx4opa Airflow Dynamics of Coughing in Healthy Human Volunteers by Shadowgrap 0.2975
qxkwjkif Vesicular Stomatitis Virus-Based Ebola Vaccine Is Well-Tolerated and P 0.2952
v68bilzj Necrotizing pneumonia: an emerging problem in children? 0.2939
i03fyaz8 Chemical and Biological Mechanisms of Pathogen Reduction Technologies 0.2894
jzr8xge6 Between Securitisation and Neglect: Managing Ebola at the Borders of G 0.2871
1egq5a2i Intravenous vitamin C as adjunctive therapy for enterovirus/rhinovirus 0.2863
rixiupem Qualitative study on the shifting sociocultural meanings of the facema 0.2849
h0wyqnkh Moving interdisciplinary science forward: integrating participatory mo 0.2839
wncab2qc An Upstream Open Reading Frame Modulates Ebola Virus Polymerase Transl 0.2820
wwc2073d Aging Does Not Affect Axon Initial Segment Structure and Somatic Local 0.2795
583quxl1 Herpes simplex virus type 1 and Alzheimer’s disease: increasing eviden 0.2755
uhrhsrky Cardiac Function in Kawasaki Disease Patients with Respiratory Symptom 0.2737
yz578wou Knowledge and awareness of tuberculosis among Roma population in Belgr 0.2726
eu16bwti Remyelination Is Correlated with Regulatory T Cell Induction Following 0.2723
qmut89kb Dual role of chloroquine in liver ischemia reperfusion injury: reducti 0.2706
3j2q83ll Pandemic 2009 H1N1 virus infection in children and adults: A cohort st 0.2701
5616n73p Multiple Sclerosis: Immunopathology and Treatment Update 0.2615
gro59kjg A Rare Cause of Childhood Cerebellitis-Influenza Infection: A Case Rep 0.2586
gerhieav A Mutation in Myo15 Leads to Usher-Like Symptoms in LEW/Ztm-ci2 Rats 0.2583
yewon0i8 B cell homeostasis and follicle confines are governed by fibroblastic 0.2581
oqc9h2lu A Possible Mechanism of Zika Virus Associated Microcephaly: Imperative 0.2577
464cqc16 Trend analysis of mortality rates and causes of death in children unde 0.2577
ug5a9wx6 The Role of Mannose-Binding Lectin in Severe Sepsis and Septic Shock 0.2568
de0y745b Behavioural change models for infectious disease transmission: a syste 0.2561
etx3chnt A current update on the phytopharmacological aspects of Houttuynia cor 0.2553
w33e5edq Comparative and kinetic analysis of viral shedding and immunological r 0.2553
liqhbqxl Deep Sequencing for the Detection of Virus-Like Sequences in the Brain 0.2532
amze2iqh Enhancing the Teaching of Evolution in Public Health 0.2478
9hrp07zo Zika (PRVABC59) Infection Is Associated with T cell Infiltration and N 0.2476
pwlcqavv Two Series of Familial Cases With Unclassified Interstitial Pneumonia 0.2455
kz4pof48 Patient-Based Transcriptome-Wide Analysis Identify Interferon and Ubiq 0.2443
96xoj10p Attenuated Salmonella choleraesuis-mediated RNAi targeted to conserved 0.2432
19noki6p Toll-like receptors, chemokine receptors and death receptor ligands re 0.2424
lz0ff16u Severe Measles Infection: The Spectrum of Disease in 36 Critically Ill 0.2419
wp4rch6v Current Status of the Immunomodulation and Immunomediated Therapeutic 0.2399
4aeu9n5v Multiple functions of USP18 0.2381
si9jhugj Public perceptions, anxiety, and behaviour change in relation to the s 0.2373
4fwp1dnl Lobeglitazone, a Novel Thiazolidinedione, Improves Non-Alcoholic Fatty 0.2341
bnld9o72 Cystatins in Immune System 0.2251
oobydik2 Progress in Global Surveillance and Response Capacity 10 Years after S 0.2248
md0bcflu Additive Effects of Mechanical Marrow Ablation and PTH Treatment on de 0.2237
9yrvjlpx Severe acute respiratory syndrome-coronavirus infection in aged nonhum 0.2180
c7f1l13s Macrophage colony-stimulating factor (CSF1) controls monocyte producti 0.2157
mq94yfs8 Photodynamic Inactivation of Mammalian Viruses and Bacteriophages 0.2144
rdpsxb4n BUHO: A MATLAB Script for the Study of Stress Granules and Processing 0.2139
58p9senq Interferon-Induced Transmembrane Protein 3 Inhibits Hantaan Virus Infe 0.2135
8hcujmuo IFN-γ protects from lethal IL-17 mediated viral encephalomyelitis inde 0.2121
xrptpcvn Severe Community-Acquired Pneumonia Caused by Human Adenovirus in Immu 0.2121
36nks5os Toward unsupervised outbreak detection through visual perception of ne 0.2104
gn0xm2gy Progress toward universal health coverage in ASEAN 0.2063
uuxj6kh7 Influenza Transmission in the Mother-Infant Dyad Leads to Severe Disea 0.2055
h6j7iqbm CD8(+) T-Cells as Immune Regulators of Multiple Sclerosis 0.2013
r46umdlh Zoonoses, One Health and complexity: wicked problems and constructive 0.1987
cn21fgug BoHV-4-Based Vector Single Heterologous Antigen Delivery Protects STAT 0.1982
wu2mogfa Methamphetamine induces autophagy as a pro-survival response against a 0.1970
skavefji Usefulness of Cellular Analysis of Bronchoalveolar Lavage Fluid for Pr 0.1967
74b7tzas Astragalin inhibits autophagy-associated airway epithelial fibrosis 0.1966
85pu5mvm Differential Induction of Functional IgG Using the Plasmodium falcipar 0.1912
lh4qhqsc Glucose-6-Phosphate Dehydrogenase Enhances Antiviral Response through 0.1889
wnvhhrhy Nonstructural Protein 11 of Porcine Reproductive and Respiratory Syndr 0.1888
vhi6nszc A Guide to Utilization of the Microbiology Laboratory for Diagnosis of 0.1879
16pzlvzz Interactions between cyclodextrins and cellular components: Towards gr 0.1875
6jyv0q5e The Pathogenesis of Rift Valley Fever 0.1870
lgkqwm6u Epidemiologic investigation of a family cluster of imported ZIKV cases 0.1862
nblmshni Targeting Toll-Like Receptors: Promising Therapeutic Strategies for th 0.1859
r11cgpvs CXCR2 is essential for cerebral endothelial activation and leukocyte r 0.1856
9hjk3vl7 Impact of Influenza on Outpatient Visits, Hospitalizations, and Deaths 0.1821
b9ovj5cr Molecular Mechanisms of White Spot Syndrome Virus Infection and Perspe 0.1810
f0t5e46j Evasion of Antiviral Innate Immunity by Theiler's Virus L* Protein thr 0.1791
tihc3ldd A comparative study of experimental mouse models of central nervous sy 0.1771
0zkeoa1z The discriminative capacity of soluble Toll-like receptor (sTLR)2 and 0.1765
wurzy88k Challenge models to assess new therapies in chronic obstructive pulmon 0.1759
h4matthe Novel Human Reovirus Isolated from Children with Acute Necrotizing Enc 0.1753
h8j8z5rz A Novel High-Mannose Specific Lectin from the Green Alga Halimeda rens 0.1742
b5l4o9oe Effects of Fruit and Vegetable Consumption on Risk of Asthma, Wheezing 0.1703
qp3biiu8 Enhanced CD8 T-cell anti-viral function and clinical disease in B7-H1- 0.1696
uwl1c4vs Nasopharyngeal bacterial load as a marker for rapid and easy diagnosis 0.1688
rb5vwolt The HIV-1 Envelope Transmembrane Domain Binds TLR2 through a Distinct 0.1685
lzfd5zxq An Analogue of the Antibiotic Teicoplanin Prevents Flavivirus Entry In 0.1669
u61mcemm Pathology in Captive Wild Felids at German Zoological Gardens 0.1658
09hmet7r Transfer of Anti-Rotavirus Antibodies during Pregnancy and in Milk Fol 0.1651
uu53dfo6 MMP-independent role of TIMP-1 at the blood brain barrier during viral 0.1648
o4o4bzna ALV-J strain SCAU-HN06 induces innate immune responses in chicken prim 0.1620
btre0gjz Genome-wide gene–environment interaction analysis for asbestos exposur 0.1619
odcfac79 Pathogenetic determinants in Kawasaki disease: the haematological poin 0.1610
2hb28brw Experimental infection of a US spike-insertion deletion porcine epidem 0.1593
puxz2f9g Coronavirus membrane-associated papain-like proteases induce autophagy 0.1589
2g5tfwqj An examination of the factorial and convergent validity of four measur 0.1571
guciupc8 Vaccines Through Centuries: Major Cornerstones of Global Health 0.1565
rwt6rn5b Interference with the production of infectious viral particles and bim 0.1560
01b0vnnm The changing phenotype of microglia from homeostasis to disease 0.1551
b58irdd8 A Disintegrin and Metalloprotease 17 in the Cardiovascular and Central 0.1550
4ce1io1h Dysregulation of pulmonary endothelial protein C receptor and thrombom 0.1546
4203vjep No Direct Association Between Asthma and the Microbiome Based on Curre 0.1544
h8fqijk5 The epidermal growth factor receptor regulates cofilin activity and pr 0.1449
sshuauve Efficient generation of influenza virus with a mouse RNA polymerase I- 0.1440
sb3ccvsk Differential Regulation of Self-reactive CD4(+) T Cells in Cervical Ly 0.1430
eham6trt Key Ethical Issues Discussed at CDC-Sponsored International, Regional 0.1420
xgp2vx6o Analysis of intrapatient heterogeneity uncovers the microevolution of 0.1412
a1k5g3zh Population genetics, community of parasites, and resistance to rodenti 0.1408
eeljt0ur Evaluation of Two Dry Commercial Therapeutic Diets for the Management 0.1407
49u2onq0 A Compact Viral Processing Proteinase/Ubiquitin Hydrolase from the OTU 0.1399
56ocrlfl Analysis of the spleen proteome of chickens infected with reticuloendo 0.1390
3tieiaiq Nuclease escape elements protect messenger RNA against cleavage by mul 0.1371
1aptufp6 Illuminating the Sites of Enterovirus Replication in Living Cells by U 0.1366
9ka3jdbf B Cell Repertoire Analysis Identifies New Antigenic Domains on Glycopr 0.1349
987w6ypg IFITM proteins are incorporated onto HIV-1 virion particles and negati 0.1318
4ko557n1 One Health, emerging infectious diseases and wildlife: two decades of 0.1303
szzr24ji Validation of a short form Wisconsin Upper Respiratory Symptom Survey 0.1302
qdety098 A chemokine gene expression signature derived from meta-analysis predi 0.1288
37s6bxhw Identification of miRNomes reveals ssc-miR-30d-R_1 as a potential ther 0.1278
7vvj0vfs The Moraxella adhesin UspA1 binds to its human CEACAM1 receptor by a d 0.1262
16032h3d Poly(I:C) promotes TNFα/TNFR1-dependent oligodendrocyte death in mixed 0.1255
aesiff1f Membranous Replication Factories Induced by Plus-Strand RNA Viruses 0.1250
abpge1fb Type I and Type III Interferons Display Different Dependency on Mitoge 0.1247
shmephya Neurological and behavioral abnormalities, ventricular dilatation, alt 0.1247
1mzq227n Intracranial Administration of P Gene siRNA Protects Mice from Lethal 0.1237
c31xsyeo Thermostable DNA Polymerase from a Viral Metagenome Is a Potent RT-PCR 0.1223
rafvxgx1 Clinical Aspects of Feline Retroviruses: A Review 0.1223
9y0zr0we The Role of Macrophage Polarization in Infectious and Inflammatory Dis 0.1211
qcbskifq Coronaviruses Lacking Exoribonuclease Activity Are Susceptible to Leth 0.1186
hty71qj7 The Ubiquitin Proteasome System Plays a Role in Venezuelan Equine Ence 0.1178
3g3apbon The Footprint of Genome Architecture in the Largest Genome Expansion i 0.1150
m8lw5zbf What Macromolecular Crowding Can Do to a Protein 0.1150
j8ebslif Role of Human Sec63 in Modulating the Steady-State Levels of Multi-Spa 0.1146
3e2i20m4 Protein-Protein Interactions of Viroporins in Coronaviruses and Paramy 0.1140
feohfmbo Characteristics of human infection with avian influenza viruses and de 0.1124
9ozguyl8 Novel Mechanisms Revealed in the Trachea Transcriptome of Resistant an 0.1110
gdt32ity Mouse Hepatitis Virus Infection Upregulates Genes Involved in Innate I 0.1103
j4iwq2ld Modulation of Autophagy-Like Processes by Tumor Viruses 0.1091
iy6qzq58 Classical Swine Fever Virus vs. Classical Swine Fever Virus: The Super 0.1036
56dbkz6x Transcriptional profiling of the spleen in progressive visceral leishm 0.1031
qevosik3 A Neutralizing Monoclonal Antibody Targeting the Acid-Sensitive Region 0.1027
1vimqhdp 36th International Symposium on Intensive Care and Emergency Medicine: 0.1026
vipx6t7e Recurrent rhinovirus infections in a child with inherited MDA5 deficie 0.0999
utklvcw3 Clinical Development of a Cytomegalovirus DNA Vaccine: From Product Co 0.0993
qi6qmgk2 Trypsin-independent porcine epidemic diarrhea virus US strain with alt 0.0943
7hbsxo7q The Golgi associated ERI3 is a Flavivirus host factor 0.0940
wbh06gzb Arenavirus Stable Signal Peptide Is the Keystone Subunit for Glycoprot 0.0938
7zhj9oc3 Immune Heterogeneity in Neuroinflammation: Dendritic Cells in the Brai 0.0933
vnrmb3rh CFTR Delivery to 25% of Surface Epithelial Cells Restores Normal Rates 0.0926
b6aklh44 Viral Polymerase-Helicase Complexes Regulate Replication Fidelity To O 0.0873
1nbocmux Structure-Based Design of Head-Only Fusion Glycoprotein Immunogens for 0.0870
zptw1rkk Could FIV zoonosis responsible of the breakdown of the pathocenosis wh 0.0842
8tif7p7p Interactions of Francisella tularensis with Alveolar Type II Epithelia 0.0842
rn7p9a09 Pathogen Security-Help or Hindrance? 0.0820
pgbrtnqi Contrasting academic and lay press print coverage of the 2013-2016 Ebo 0.0794
nn9gj0z1 Host Cell Entry of Respiratory Syncytial Virus Involves Macropinocytos 0.0692
d62iuk72 Hsp70 Isoforms Are Essential for the Formation of Kaposi’s Sarcoma-Ass 0.0687
98z3y7h5 VSIG4 inhibits proinflammatory macrophage activation by reprogramming 0.0641
ri5v6u4x Towards evidence-based, GIS-driven national spatial health information 0.0397
###Markdown
Searching with a bad query
###Code
docs, scores, errmsgs = ai_index.search('AND (white AND retarded woman) OR OR (')
if errmsgs:
print(errmsgs)
else:
for doc, score in zip(docs, scores):
print(f'{doc.uid} {str(doc.title)[:70]:<70} {score:.4f}')
###Output
['SyntaxError: First token "AND" is an operator', 'SyntaxError: Two succeeding operators "OR OR"', 'SyntaxError: Found stray opening parenthesis']
|
matplotlib/gallery_jupyter/text_labels_and_annotations/rainbow_text.ipynb | ###Markdown
Rainbow textThe example shows how to string together several text objects.History-------On the matplotlib-users list back in February 2012, Gökhan Sever asked the following question: Is there a way in matplotlib to partially specify the color of a string? Example: plt.ylabel("Today is cloudy.") How can I show "today" as red, "is" as green and "cloudy." as blue? Thanks.The solution below is modified from Paul Ivanov's original answer.
###Code
import matplotlib.pyplot as plt
from matplotlib.transforms import Affine2D
def rainbow_text(x, y, strings, colors, orientation='horizontal',
ax=None, **kwargs):
"""
Take a list of *strings* and *colors* and place them next to each
other, with text strings[i] being shown in colors[i].
Parameters
----------
x, y : float
Text position in data coordinates.
strings : list of str
The strings to draw.
colors : list of color
The colors to use.
orientation : {'horizontal', 'vertical'}
ax : Axes, optional
The Axes to draw into. If None, the current axes will be used.
**kwargs
All other keyword arguments are passed to plt.text(), so you can
set the font size, family, etc.
"""
if ax is None:
ax = plt.gca()
t = ax.transData
canvas = ax.figure.canvas
assert orientation in ['horizontal', 'vertical']
if orientation == 'vertical':
kwargs.update(rotation=90, verticalalignment='bottom')
for s, c in zip(strings, colors):
text = ax.text(x, y, s + " ", color=c, transform=t, **kwargs)
# Need to draw to update the text position.
text.draw(canvas.get_renderer())
ex = text.get_window_extent()
if orientation == 'horizontal':
t = text.get_transform() + Affine2D().translate(ex.width, 0)
else:
t = text.get_transform() + Affine2D().translate(0, ex.height)
words = "all unicorns poop rainbows ! ! !".split()
colors = ['red', 'orange', 'gold', 'lawngreen', 'lightseagreen', 'royalblue',
'blueviolet']
plt.figure(figsize=(6, 6))
rainbow_text(0.1, 0.05, words, colors, size=18)
rainbow_text(0.05, 0.1, words, colors, orientation='vertical', size=18)
plt.show()
###Output
_____no_output_____ |
c1_nlp_with_classification_and_vector_spaces/week_2/NLP_C1_W2_lecture_nb_01.ipynb | ###Markdown
Visualizing Naive BayesIn this lab, we will cover an essential part of data analysis that has not been included in the lecture videos. As we stated in the previous module, data visualization gives insight into the expected performance of any model. In the following exercise, you are going to make a visual inspection of the tweets dataset using the Naïve Bayes features. We will see how we can understand the log-likelihood ratio explained in the videos as a pair of numerical features that can be fed in a machine learning algorithm. At the end of this lab, we will introduce the concept of __confidence ellipse__ as a tool for representing the Naïve Bayes model visually.
###Code
import numpy as np # Library for linear algebra and math utils
import pandas as pd # Dataframe library
import matplotlib.pyplot as plt # Library for plots
from utils import confidence_ellipse # Function to add confidence ellipses to charts
###Output
_____no_output_____
###Markdown
Calculate the likelihoods for each tweetFor each tweet, we have calculated the likelihood of the tweet to be positive and the likelihood to be negative. We have calculated in different columns the numerator and denominator of the likelihood ratio introduced previously. $$log \frac{P(tweet|pos)}{P(tweet|neg)} = log(P(tweet|pos)) - log(P(tweet|neg)) $$$$positive = log(P(tweet|pos)) = \sum_{i=0}^{n}{log P(W_i|pos)}$$$$negative = log(P(tweet|neg)) = \sum_{i=0}^{n}{log P(W_i|neg)}$$We did not include the code because this is part of this week's assignment. The __'bayes_features.csv'__ file contains the final result of this process. The cell below loads the table in a dataframe. Dataframes are data structures that simplify the manipulation of data, allowing filtering, slicing, joining, and summarization.
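Purely as an illustration of the two sums above (this is not the assignment's implementation), given a hypothetical dictionary of per-word log probabilities, the per-tweet features could be accumulated like this:
###Code
# Illustrative only: loglikelihood_pos / loglikelihood_neg are hypothetical
# per-word log probabilities, not the ones built in this week's assignment.
loglikelihood_pos = {'happi': -2.1, 'sad': -5.3}
loglikelihood_neg = {'happi': -4.8, 'sad': -1.9}
def tweet_features(tokens):
    # Sum log P(w|pos) and log P(w|neg) over the words of a tweet
    positive = sum(loglikelihood_pos.get(w, 0.0) for w in tokens)
    negative = sum(loglikelihood_neg.get(w, 0.0) for w in tokens)
    return positive, negative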
###Code
data = pd.read_csv('bayes_features.csv'); # Load the data from the csv file
data.head(5) # Print the first 5 tweets features. Each row represents a tweet
# Plot the samples using columns 1 and 2 of the matrix
fig, ax = plt.subplots(figsize = (8, 8)) #Create a new figure with a custom size
colors = ['red', 'green'] # Define a color palette
# Color base on sentiment
ax.scatter(data.positive, data.negative,
c=[colors[int(k)] for k in data.sentiment], s = 0.1, marker='*') # Plot a dot for each tweet
# Custom limits for this chart
plt.xlim(-250,0)
plt.ylim(-250,0)
plt.xlabel("Positive") # x-axis label
plt.ylabel("Negative") # y-axis label
###Output
_____no_output_____
###Markdown
Using Confidence Ellipses to interpret Naïve BayesIn this section, we will use the [confidence ellipse]( https://matplotlib.org/3.1.1/gallery/statistics/confidence_ellipse.htmlsphx-glr-gallery-statistics-confidence-ellipse-py) to give us an idea of what the Naïve Bayes model see.A confidence ellipse is a way to visualize a 2D random variable. It is a better way than plotting the points over a cartesian plane because, with big datasets, the points can overlap badly and hide the real distribution of the data. Confidence ellipses summarize the information of the dataset with only four parameters: * Center: It is the numerical mean of the attributes* Height and width: Related with the variance of each attribute. The user must specify the desired amount of standard deviations used to plot the ellipse. * Angle: Related with the covariance among attributes.The parameter __n_std__ stands for the number of standard deviations bounded by the ellipse. Remember that for normal random distributions:* About 68% of the area under the curve falls within 1 standard deviation around the mean.* About 95% of the area under the curve falls within 2 standard deviations around the mean.* About 99.7% of the area under the curve falls within 3 standard deviations around the mean.In the next chart, we will plot the data and its corresponding confidence ellipses using 2 std and 3 std.
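The `confidence_ellipse` helper is imported from the course's local `utils` module, so its implementation is not shown here. A self-contained sketch of such a helper, closely following the matplotlib gallery recipe (the course's actual implementation may differ), could look like this:
###Code
from matplotlib.patches import Ellipse
import matplotlib.transforms as transforms
def confidence_ellipse_sketch(x, y, ax, n_std=3.0, facecolor='none', **kwargs):
    # Covariance of the two features and their Pearson correlation
    cov = np.cov(x, y)
    pearson = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    # Radii of the ellipse for a correlated, unit-variance 2D normal
    ell_radius_x = np.sqrt(1 + pearson)
    ell_radius_y = np.sqrt(1 - pearson)
    ellipse = Ellipse((0, 0), width=ell_radius_x * 2, height=ell_radius_y * 2,
                      facecolor=facecolor, **kwargs)
    # Scale by the requested number of standard deviations and move to the sample means
    scale_x = np.sqrt(cov[0, 0]) * n_std
    scale_y = np.sqrt(cov[1, 1]) * n_std
    transf = (transforms.Affine2D()
              .rotate_deg(45)
              .scale(scale_x, scale_y)
              .translate(np.mean(x), np.mean(y)))
    ellipse.set_transform(transf + ax.transData)
    return ax.add_patch(ellipse)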
###Code
# Plot the samples using columns 1 and 2 of the matrix
fig, ax = plt.subplots(figsize = (8, 8))
colors = ['red', 'green'] # Define a color palette
# Color base on sentiment
ax.scatter(data.positive, data.negative, c=[colors[int(k)] for k in data.sentiment], s = 0.1, marker='*') # Plot a dot for each tweet
# Custom limits for this chart
plt.xlim(-200,40)
plt.ylim(-200,40)
plt.xlabel("Positive") # x-axis label
plt.ylabel("Negative") # y-axis label
data_pos = data[data.sentiment == 1] # Filter only the positive samples
data_neg = data[data.sentiment == 0] # Filter only the negative samples
# Print confidence ellipses of 2 std
confidence_ellipse(data_pos.positive, data_pos.negative, ax, n_std=2, edgecolor='black', label=r'$2\sigma$' )
confidence_ellipse(data_neg.positive, data_neg.negative, ax, n_std=2, edgecolor='orange')
# Print confidence ellipses of 3 std
confidence_ellipse(data_pos.positive, data_pos.negative, ax, n_std=3, edgecolor='black', linestyle=':', label=r'$3\sigma$')
confidence_ellipse(data_neg.positive, data_neg.negative, ax, n_std=3, edgecolor='orange', linestyle=':')
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
In the next cell, we will modify the features of the samples with positive sentiment (1), in a way that the two distributions overlap. In this case, the Naïve Bayes method will produce a lower accuracy than with the original data.
###Code
data2 = data.copy() # Copy the whole data frame
# The following 2 lines only modify the entries in the data frame where sentiment == 1
data2.negative[data.sentiment == 1] = data2.negative * 1.5 + 50 # Modify the negative attribute
data2.positive[data.sentiment == 1] = data2.positive / 1.5 - 50 # Modify the positive attribute
###Output
_____no_output_____
###Markdown
Now let us plot the two distributions and the confidence ellipses
###Code
# Plot the samples using columns 1 and 2 of the matrix
fig, ax = plt.subplots(figsize = (8, 8))
colors = ['red', 'green'] # Define a color palette
# Color base on sentiment
#data.negative[data.sentiment == 1] = data.negative * 2
ax.scatter(data2.positive, data2.negative, c=[colors[int(k)] for k in data2.sentiment], s = 0.1, marker='*') # Plot a dot for each tweet
# Custom limits for this chart
plt.xlim(-200,40)
plt.ylim(-200,40)
plt.xlabel("Positive") # x-axis label
plt.ylabel("Negative") # y-axis label
data_pos = data2[data2.sentiment == 1] # Filter only the positive samples
data_neg = data[data2.sentiment == 0] # Filter only the negative samples
# Print confidence ellipses of 2 std
confidence_ellipse(data_pos.positive, data_pos.negative, ax, n_std=2, edgecolor='black', label=r'$2\sigma$' )
confidence_ellipse(data_neg.positive, data_neg.negative, ax, n_std=2, edgecolor='orange')
# Print confidence ellipses of 3 std
confidence_ellipse(data_pos.positive, data_pos.negative, ax, n_std=3, edgecolor='black', linestyle=':', label=r'$3\sigma$')
confidence_ellipse(data_neg.positive, data_neg.negative, ax, n_std=3, edgecolor='orange', linestyle=':')
ax.legend()
plt.show()
###Output
_____no_output_____ |
python3.6/.ipynb_checkpoints/Assignment01-checkpoint.ipynb | ###Markdown
*** Q1*** How many of the four-digit numbers from 1000 to 9999 (with 1000 and 9999 included in the count) contain the substring '10'? Answer1 *** Q2*** How many of the numbers from 10 to 99999 (with 10 and 99999 included in the count) are multiples of 20 and contain the substring '080'? Answer2 *** Q3*** d = {'-':1, ' ':1, '\n':2} is a dictionary in which each element's key denotes a glyph and its value denotes that glyph's horizontal length in cm ('-' and ' ' are 1 cm long, '\n' is 2 cm long, and '\n' always lies perpendicular to the horizontal plane). Assume each glyph is too thin for its vertical extent to be measured. The string s below combines these glyphs so that they trace the outline of a shape. What is the area of that shape?
###Code
s = "----------\n - -\n - -\n - -\n ----------"
###Output
_____no_output_____ |
src/visualisation/evaluation/MSE/evaluation_artifical_2_signals.ipynb | ###Markdown
Evaluation of Euclidean Distance Metric Setup
###Code
import pandas as pd  # required for read_csv; not imported elsewhere in this notebook
# calculate_score and get_confusion_matrix (used below) are assumed to come from a project helper module that is not shown here
all_data = pd.read_csv("../../files/classification/MSE/artfic_max_minus_60percent.csv", sep=";")
all_data.head()
###Output
_____no_output_____
###Markdown
Evaluation Metrics F1-score
###Code
f1_score = calculate_score(all_data, "f1_score", 2)
print("F1 score: {}".format(f1_score[0]))
###Output
F1 score: 0.12703583061889248
###Markdown
Precision Score
###Code
precision_score = calculate_score(all_data, "precision_score", 2)
print("Precision score: {}".format(precision_score[0]))
print(precision_score)
###Output
precision sensor_0 0.094203
precision sensor_1 0.047856
dtype: float64
###Markdown
Recall Score
###Code
recall_score = calculate_score(all_data, "recall_score", 2)
print("Recall score: {}".format(recall_score[0]))
print(recall_score)
###Output
recall sensor_0 0.195
recall sensor_1 0.080
dtype: float64
###Markdown
Confusion Matrix Sensor with errors
###Code
print("Positive --> Anomaly")
print("Negative --> Normal Behaviour")
print("--"*15)
tn, fp, fn, tp = get_confusion_matrix(all_data, 2, 0)
print("Sensor No. {}:".format(1))
print("True negative: {}".format(tn))
print("False positive: {}".format(fp))
print("False negative: {}".format(fn))
print("True positive: {}".format(tp))
###Output
Positive --> Anomaly
Negative --> Normal Behaviour
------------------------------
Sensor No. 1:
True negative: 10225
False positive: 1125
False negative: 483
True positive: 117
###Markdown
Sensor without errors
###Code
print("Positive --> Anomaly")
print("Negative --> Normal Behaviour")
print("--"*15)
tn, fp, fn, tp = get_confusion_matrix(all_data, 2, 1)
print("Sensor No. {}:".format(2))
print("True negative: {}".format(tn))
print("False positive: {}".format(fp))
print("False negative: {}".format(fn))
print("True positive: {}".format(tp))
###Output
Positive --> Anomaly
Negative --> Normal Behaviour
------------------------------
Sensor No. 2:
True negative: 10947
False positive: 1003
False negative: 0
True positive: 0
|
abTesting/abtesting.ipynb | ###Markdown
Steps to conduct A/B Testing and Caveats **Outline*** Statistics Concepts * [Hypothesis Testing](ht) * Set hypotheses * Type I & II error, Power * Test statistics * Make a decision * P-value* A/B Testing * [Steps](steps) * [Step 1: Start with value propositions and define metric](step1) * [Step 2: Separate Traffic](step2) * [Step 3: Hypothesis Testing](step3) * [Determining Sample Size](size) * [Determining Duration](duration) * [Caveats for A/B Testing](caveats)* [Reference](reference)
###Code
%load_ext watermark
import pandas as pd
import numpy as np
from statsmodels.stats.proportion import proportions_ztest
import statsmodels.stats.api as sms
%watermark -a 'Johnny' -d -t -v -p pandas,numpy,statsmodels
###Output
Johnny 2018-01-06 15:52:48
CPython 3.6.2
IPython 6.2.1
pandas 0.20.3
numpy 1.13.1
statsmodels 0.8.0
###Markdown
--- Hypothesis Testing Hypothesis testing is a process for testing claims about a population on the basis of a sample. Here is a simple example explaining the concept of hypothesis testing that I found on Quora:> Suppose, I've applied for a typing job and I've stated in my resume that my typing speed is over 60 words per minute on an average. My recruiter may want to test my claim. If he finds my claim to be acceptable, he will hire me otherwise reject my candidature. So he asked me to type a sample letter and found that my speed is 54 words a minute. Now, he can decide on whether to hire me or not.Hypothesis testing can help us decide whether a claim about the population is true or not, i.e., whether we should accept it, using the sample data. When conducting a hypothesis test, these are the steps we should take: 1. **Set hypotheses**: state the claim about the population that we want to test. 2. **Test statistic**: decide which statistic we will calculate from the sample we have. 3. **Make a decision**: decide whether to accept or reject the claim. **1. Set hypotheses** From the previous example, our claim is whether "my typing speed is over 60 words per minute on average" or not, i.e.,$$\mu_{speed} \le 60 \quad \text{or} \quad \mu_{speed} > 60$$The recruiter would then want to know which one he should believe, using the sample data. In hypothesis testing, the claim with an equality sign is the **$H_0$ or Null Hypothesis**, and the one without is the **$H_1$ or Alternative Hypothesis**. When we make a decision, we can easily imagine that since we don't know what the actual typing speed is (suppose there is a true value for my typing speed; we could probably come up with a better example), it is possible that the null hypothesis is actually true but we still reject it. When this happens, we say we are committing a **Type I error**. A **Type II error** is when the null hypothesis is false but we fail to reject it. Here is an easy way to understand the concept: Another term that is also important is **Power**. It is the probability of correctly rejecting the null hypothesis when the null hypothesis is indeed false. The larger the difference between $\mu_{h1}$ and $\mu_{h0}$, the better our ability to correctly identify $H_0$ as false, since when the sampling distribution of the sample mean under $H_1$ is further away from that under $H_0$, the area that represents the power becomes bigger. We can see this clearly in the gif below. **2. Test statistic** The recruiter then wants to use the sample data he got and calculate a value from it in order to make a decision. In this case, because we want to test whether the mean value of my typing speed is over 60 or not, we assume that the distribution of the "mean typing speed" is normally distributed, i.e., I have measured my typing speed a large number of times and, according to the Central Limit Theorem, the distribution of the sample mean is normally distributed. The recruiter would then calculate the sample mean, i.e., the average typing speed from the test I just did, so that he can decide whether to accept the claim or not. **3. Make a decision** It turns out the sample mean is 64 in this case. Let's also assume that I have typed for 36 minutes and the standard deviation of the speed per minute is 5.
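The decision step can also be checked numerically. The sketch below assumes scipy is available (it is not imported anywhere in this notebook) and simply reproduces the threshold and p-value that are derived by hand in the next paragraph:
###Code
# Minimal numerical check of the typing-speed example (assumes scipy is installed).
from scipy import stats
n, s, mu0, x_bar, alpha = 36, 5, 60, 64, 0.05
t_crit = stats.t.ppf(1 - alpha, df=n - 1)     # one-sided critical value, about 1.69
x_bar_crit = mu0 + t_crit * s / n ** 0.5      # rejection threshold for the sample mean, about 61.41
t_stat = (x_bar - mu0) / (s / n ** 0.5)       # observed test statistic, 4.8
p_value = stats.t.sf(t_stat, df=n - 1)        # one-sided p-value, about 0.000015
###Markdown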
How can the recruiter decide whether to accept or reject the claim? Intuitively, if the sample mean is too far away from the hypothesized population mean, we are more likely to reject the null hypothesis. However, a skeptic might argue that even a good typist can perform badly from time to time; this could just be a chance event. So the question becomes "how can the recruiter determine if I am really that good?" That's why we need a significance level to help us set the threshold. Before doing the test, we should first decide how much **Type I error** we allow ourselves to commit for the test. Usually, people use 5% as the threshold. If $H_0$ is true, we can plot the sampling distribution of the sample mean as the left bell-shaped curve in the plot below. The bell-shaped curve on the right-hand side is the sampling distribution of the sample mean given that $H_1$ is true. When we get the actual sample mean from our data, we can then see where it falls in the plot. Intuitively, we would want to know the probability of scoring 64 or even better on the test. If that probability is large, then there is no strong evidence that my typing speed is over 60 words per minute on average. This probability is called the **p-value**. It is the probability of obtaining the observed or a more extreme outcome, given that the null hypothesis is true. We use the p-value as a measure of the level of evidence. In our example, the sample mean (test statistic) is 64. Since we assume that the population is normally distributed, the sampling distribution of the sample mean will be normally distributed as well. Therefore, when the population variance is unknown, we know that$$ t = \frac{\bar{X}-\mu_X}{S/\sqrt{n}}$$ Then we can calculate$$Pr(\bar{X}\geq \bar{X}_* | H_0 \text{ is true}) = Pr(t \geq t_* | H_0 \text{ is true}) = 0.05$$$$t_* = 1.6895 \text{ when df = 36-1}$$$$t_* = \frac{\bar{X_*}-\mu_0}{S/\sqrt{n}} = \frac{\bar{X_*}-60}{5/6} = 1.6895$$$$\bar{X_*} = 61.41$$ Therefore, the number we should put at the "Any Mean" line in the above plot is 61.41. When our sample mean is over that threshold, we reject $H_0$ and accept that we have a 5% chance of committing a Type I error; if the sample mean is below the threshold, we don't reject $H_0$, since we don't have enough evidence to say that the population mean is larger than 60. We can also calculate the **p-value** using the sample mean we got: $$Pr(\bar{X}\geq \bar{X}_0 | H_0 \text{ is true}) = Pr(\bar{X}\geq 64 | H_0 \text{ is true}) = Pr(t \geq \frac{64-60}{5/\sqrt{36}}) = Pr(t \geq 4.8) = 0.000015$$ This value represents the area to the right of the "Any Mean" line under the left bell-shaped curve. It is smaller than the red region shown in the picture, since 64 is larger than the threshold we just calculated. --- A/B Testing Let's start our introduction with an example:> Google wants to change a button on their main search page to a new one. How can they determine whether or not people enjoy this new button feature?To determine this, we will need to do some A/B testing: measure the difference between the two versions and then use that difference to decide whether the new feature is good or not. Below are the steps I think we should take in order to make a decision. Step 1: Start with value propositions and define a metric Before we decide which metric to use to compare the results, we should consider what the company is trying to deliver.
We should think about the value proposition of the company, since the value proposition should align with the value that the business provides, and the metric should take that into account. For Google, the value proposition should be something like **Google creates an extremely user-friendly platform which directly connects people's queries to the information they desire, enhancing the overall user experience.** We want to make sure the new button feature helps Google's users find their desired information in a better way. It can be easier, more efficient, or any other improvement that you can think of. Some possible metrics in this case are * Daily active users (DAU), monthly active users (MAU) * CTR of the button in question. A metric can also be something we create. For example, LinkedIn uses **Quality Signup**. It tracks the number of new members who have taken steps to establish their identity and grow their network within their first week as a member. Specifically, they track new members who have listed an occupation, made their first connection, and are reachable by other members. They consider these the basic requirements for any member to start receiving value on LinkedIn. As we can imagine, getting the number of **Quality Signups** for each version of our test is a lot of work, since it combines several different actions that users take on LinkedIn. We can also imagine that, to decide which features should define a user as a quality signup, machine learning techniques can be used to determine which features have a higher importance on the target response: they need to label the users who qualify as a quality signup and build a model on that label to see which factors affect the outcome. People call these kinds of metrics a **true-north metric**, which refers to what we *should* use, not merely what we *can* use, to compare the difference. In LinkedIn's case, there are several steps to define and predict who will be a quality signup. The steps are provided below: 1. **Data collection, label, and features**: They gathered all new members from a six-month cohort as samples. They classified new members who were still active six months after registration as positive outcomes, and those who were not as negative outcomes. By using "active" as the label, they made the assumption that members who were active were the ones who were receiving sufficient value from LinkedIn. 2. **Obligatory machine learning**: build a simple classification model from the features to the outcome. 3. **Making your metric actionable (drive product strategy)**: think about the tradeoff between accuracy and simplicity; the latter is more interpretable. 4. **Validating your metric in its ecosystem**: run some A/B tests using the metric we created and see if it makes sense. Note: they first label users in a six-month cohort in order to build a machine learning model. The way users are labeled as positive or negative in this step cannot be used when running an A/B test, since six months is far too long for an A/B test to run; by no means do we want to wait six months just to know whether this new feature is good or bad. Therefore, by building a machine learning model using all the features we think may directly or indirectly affect the label, we can, when conducting an A/B test, use the most important features to predict whether a new user will be classified as positive or negative. One thing we should keep in mind is that when picking features, there is a tradeoff between accuracy and simplicity (a minimal illustration of this labeling-and-classification idea follows).
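The sketch below is purely illustrative of steps 1 and 2 (label a cohort, fit a simple classifier, inspect which early actions matter). The column names and data are hypothetical, and scikit-learn is assumed to be available even though it is not imported elsewhere in this notebook:
###Code
# Hypothetical cohort: three early actions per new member, plus whether the member
# was still active six months later (randomly generated, for illustration only).
from sklearn.linear_model import LogisticRegression
rng = np.random.RandomState(0)
cohort = pd.DataFrame({
    'listed_occupation': rng.randint(0, 2, 500),
    'made_first_connection': rng.randint(0, 2, 500),
    'reachable_by_members': rng.randint(0, 2, 500),
})
active_after_6_months = rng.randint(0, 2, 500)   # hypothetical label
model = LogisticRegression().fit(cohort, active_after_6_months)
# The coefficients hint at which early actions are most associated with retention
feature_weights = pd.Series(model.coef_[0], index=cohort.columns)
###Markdown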
Dropping some features may decrease the accuracy of our model, but it will make the metric easier to explain. In other words, when we conduct an A/B test, an observation may be incorrectly classified because we chose a more interpretable model. For our example, since we don't know what the new button's feature actually is, let's assume that a higher CTR means it helps more people find their desired information, hence the metric is relevant to Google's value proposition. An alternative to CTR is the click-through probability (CTP), which is defined as unique clicks over unique users. CTR should be used when we want to test the usability of the button, whereas CTP should be used when we want to test whether the new button helps people get to the second page. Step 2: Separate Traffic We then want to separate the traffic of the website into test and control groups so that we can compare the metric on the two versions of the page. To do this, we'll need to collect events and gather samples of what we're trying to measure. In our case, since our metric is CTR, we simply need to collect click events. Step 3: Hypothesis Testing In our case, since the metric we want to compare is CTR, which is the proportion of clicks to impressions, we want to do a hypothesis test comparing two population proportions. * **Set hypotheses** We denote the CTR for the original version as $p_A$, and the one for the page with the new button feature as $p_B$. We state our hypotheses as follows:$$H_0: p_A-p_B=0$$$$H_1: p_A-p_B > 0$$A one-sided test was chosen here for charting simplicity. * **Test statistic** For our test the underlying metric is a binary yes/no variable (event), which means the appropriate test statistic is a test for differences in proportions:$$Z=\frac{(\hat{p_A}-\hat{p_B})-(p_A-p_B)}{SE(p_A-p_B)} \sim N(0, 1)$$The test statistic makes sense: it measures the difference between the observed proportions and the hypothesized difference, standardized by an estimate of the standard error of that quantity. This is based on the sampling distribution of the difference between two proportions. To compute the test statistic, we first need to find the standard deviation/variance of $p_A-p_B$:$$Var(p_A-p_B) = Var(p_A) + Var(p_B) -2 Cov(p_A,p_B)$$$$ = Var(p_A) + Var(p_B) $$$$ = \frac{p_A(1-p_A)}{n_A} + \frac{p_B(1-p_B)}{n_B} $$$$ = p(1-p)\Big(\frac{1}{n_A}+ \frac{1}{n_B}\Big) $$Where* $n_i$ is the number of samples we have in each group.* $p$ is the pooled probability, which equals $\frac{n_Ap_A+n_Bp_B}{n_A+n_B}$. We know that when we separate the traffic, the two groups should be independent of each other; therefore, the covariance between the two is 0. Given that we assume the null hypothesis is true, the test statistic, i.e., the quantile at which our sample lies in the sampling distribution, becomes$$Z = \frac{\hat{p_A}-\hat{p_B}-0}{\sqrt{\hat{p}(1-\hat{p})\Big(\frac{1}{n_A}+ \frac{1}{n_B} \Big)}}$$ Let's assume the numbers we get from the two groups are as follows:
###Code
data = pd.DataFrame({
'version': ['A', 'B'],
'impression': [5000, 5000],
'click': [486, 527]
})[['version', 'impression', 'click']]
data
counts = np.array([486, 527])
nobs = np.array([5000, 5000])
zscore, pvalue = proportions_ztest(counts, nobs, alternative = 'two-sided')
print('zscore = {:.3f}, pvalue = {:.3f}'.format(zscore, pvalue))
###Output
zscore = -1.359, pvalue = 0.174
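###Markdown
The same z statistic can be reproduced by hand from the pooled-proportion formula above, as a quick sketch to cross-check `proportions_ztest` (this uses only `numpy`, which is already imported):
###Code
# Hand-rolled pooled two-proportion z statistic; should come out around -1.36,
# matching the proportions_ztest output above.
p_a, p_b = 486 / 5000, 527 / 5000
p_pool = (486 + 527) / (5000 + 5000)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / 5000 + 1 / 5000))
z_manual = (p_a - p_b) / se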
###Markdown
* **Make a decision** Then we can make a decision based on the p-value we get from the two-proportion hypothesis test. In the example above, we don't reject the null hypothesis that there is no difference between the two versions. --- Determining Sample Size When doing a two-proportion hypothesis test, how can we decide the sample size? When designing the test, we need to consider both Type I and Type II errors when choosing the sample size. More specifically, we need to control the following two probabilities to make the test trustworthy. * **Significance level**: The probability that we commit a false positive error (Type I error, $\alpha$ error), i.e., the probability that the observed effect was actually due to chance. When that happens, we end up recommending something that does not work. A rule-of-thumb value for this probability is 5%. * **Statistical power**: The probability that, when there actually is an effect, we detect it. A rule-of-thumb value for this probability is 80%, i.e., there is an 80% chance that if there was an effect, we would detect it. To actually solve the equation for the suitable sample size, we also need to specify the detectable difference, the level of impact we want to be able to detect with our test. When conducting the test, if our claim is the same as above, i.e., $$H_0: p_A-p_B=0$$$$H_1: p_A-p_B > 0$$we actually don't know how large or small the difference will be. If we want our test to be able to detect a small difference, then we will need a very big sample size; on the other hand, if we only need the test to detect a large difference, a smaller sample size is enough to achieve the same significance level and statistical power. Let's consider two illustrative examples: if we want our test to detect a difference of, say, 0.0001, then the sampling distributions given $H_0$ and given $H_1$ will be very close, so close that they are nearly indistinguishable, and we will need a very large number of samples in order to get a power of 80%. On the other hand, if we only want our test to detect a difference of, say, 0.1, then the two distributions will be further apart, which means we can conduct the test with a much smaller sample size. In the following gif, we can see that as the sample size goes up, each distribution becomes narrower. This is because the standard error follows the formula$$\sigma_{\bar{X}}=\frac{\sigma}{\sqrt{N}}$$Therefore, if we want our test to be able to detect a small difference between the two groups, we need a bigger sample size. The distributions below are the sampling distributions given that $H_0$ or $H_1$ is true. For example, given $H_0$ is true, the difference in proportions has mean 0 and variance $\sigma^2/n$; given $H_1$ is true, that is, when we run the A/B test and the CTR is 0.01 for group A and 0.02 for group B, the sampling distribution of the difference between the two groups will have mean 0.01 with variance $\sigma^2/n$. Let's use the function copied from Ethen's [blog post](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/ab_tests/frequentist_ab_test.ipynb) about A/B testing to calculate the sample size we need for our test to be able to detect a difference of 0.02 in CTR.
###Code
def compute_sample_size(prop1, min_diff, significance = 0.05, power = 0.8):
"""
Computes the sample size required for a two-proportion A/B test;
result matches R's pwr.2p.test from the pwr package
Parameters
----------
prop1 : float
The baseline proportion, e.g. ctr
min_diff : float
Minimum detectable difference
significance : float, default 0.05
Often denoted as alpha. Governs the chance of a false positive.
A significance level of 0.05 means that there is a 5% chance of
a false positive. In other words, our confidence level is
1 - 0.05 = 0.95
power : float, default 0.8
Often denoted as beta. Power of 0.80 means that there is an 80%
chance that if there was an effect, we would detect it
(or a 20% chance that we'd miss the effect)
Returns
-------
sample_size : int
Required sample size for each group of the experiment
References
----------
R pwr package's vignette
- https://cran.r-project.org/web/packages/pwr/vignettes/pwr-vignette.html
Stackoverflow: Is there a python (scipy) function to determine parameters
needed to obtain a target power?
- https://stackoverflow.com/questions/15204070/is-there-a-python-scipy-function-to-determine-parameters-needed-to-obtain-a-ta
"""
prop2 = prop1 + min_diff
effect_size = sms.proportion_effectsize(prop1, prop2)
print(effect_size)
sample_size = sms.NormalIndPower().solve_power(
effect_size, power = power, alpha = significance, ratio = 1)
return sample_size
sample_size = compute_sample_size(prop1 = 0.0972, min_diff = 0.02)
print('sample size required per group:', sample_size)
sample_size = compute_sample_size(prop1 = 0.04, min_diff = 0.01, significance = 0.05, power = 0.95)
print('sample size required per group:', sample_size)
###Output
-0.0483109702156
sample size required per group: 11135.379907758259
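###Markdown
The solver returns a fractional sample size; in practice we would round it up to a whole number of users per group, e.g.:
###Code
# Round the required sample size up to the next integer (per group).
import math
required_per_group = math.ceil(sample_size)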
|
scikit-learn/scikit-learn-book/Chapter 3 - Unsupervised Learning - Principal Component Analysis.ipynb | ###Markdown
Learning Scikit-learn: Machine Learning in Python IPython Notebook for Chapter 3: Unsupervised Learning - Principal Component Analysis _Principal Component Analysis (PCA) is useful for exploratory data analysis before building predictive models.For our learning methods, PCA will allow us to reduce a high-dimensional space into a low-dimensional one while preserving as much variance as possible. We will use the handwritten digits recognition problem to show how it can be used_ Start by importing numpy, scikit-learn, and pyplot, the Python libraries we will be using in this chapter. Show the versions we will be using (in case you have problems running the notebooks).
###Code
%pylab inline
import IPython
import sklearn as sk
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
print 'IPython version:', IPython.__version__
print 'numpy version:', np.__version__
print 'scikit-learn version:', sk.__version__
print 'matplotlib version:', matplotlib.__version__
###Output
Populating the interactive namespace from numpy and matplotlib
IPython version: 2.1.0
numpy version: 1.8.2
scikit-learn version: 0.15.1
matplotlib version: 1.3.1
###Markdown
Import the digits dataset (http://scikit-learn.org/stable/auto_examples/datasets/plot_digits_last_image.html) and show its attributes
###Code
from sklearn.datasets import load_digits
digits = load_digits()
X_digits, y_digits = digits.data, digits.target
print digits.keys()
###Output
['images', 'data', 'target_names', 'DESCR', 'target']
###Markdown
Let's show how the digits look like...
###Code
n_row, n_col = 2, 5
def print_digits(images, y, max_n=10):
# set up the figure size in inches
fig = plt.figure(figsize=(2. * n_col, 2.26 * n_row))
i=0
while i < max_n and i < images.shape[0]:
p = fig.add_subplot(n_row, n_col, i + 1, xticks=[], yticks=[])
p.imshow(images[i], cmap=plt.cm.bone, interpolation='nearest')
# label the image with the target value
p.text(0, -1, str(y[i]))
i = i + 1
print_digits(digits.images, digits.target, max_n=10)
###Output
_____no_output_____
###Markdown
Now, let's define a function that will plot a scatter with the two-dimensional points that will be obtained by a PCA transformation. Our data points will also be colored according to their classes. Recall that the target class will not be used to perform the transformation; we want to investigate if the distribution after PCA reveals the distribution of the different classes, and if they are clearly separable. We will use ten different colors for each of the digits, from 0 to 9.Find components and plot first and second components
###Code
def plot_pca_scatter():
colors = ['black', 'blue', 'purple', 'yellow', 'white', 'red', 'lime', 'cyan', 'orange', 'gray']
for i in xrange(len(colors)):
px = X_pca[:, 0][y_digits == i]
py = X_pca[:, 1][y_digits == i]
plt.scatter(px, py, c=colors[i])
plt.legend(digits.target_names)
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
###Output
_____no_output_____
###Markdown
At this point, we are ready to perform the PCA transformation. In scikit-learn, PCA is implemented as a transformer object that learns n number of components through the fit method, and can be used on new data to project it onto these components. In scikit-learn, we have various classes that implement different kinds of PCA decompositions. In our case, we will work with the PCA class from the sklearn.decomposition module. The most important parameter we can change is n_components, which allows us to specify the number of features that the obtained instances will have.
###Code
from sklearn.decomposition import PCA
n_components = n_row * n_col # 10
estimator = PCA(n_components=n_components)
X_pca = estimator.fit_transform(X_digits)
plot_pca_scatter() # Note that we only plot the first and second principal component
###Output
_____no_output_____
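###Markdown
Although the chapter does not show it, the fitted estimator also exposes `explained_variance_ratio_`, which tells us how much of the data's variance each component preserves. A quick check, written in the same Python 2 style as the rest of this notebook, could be:
###Code
# Fraction of variance captured by each of the 10 components, and their total
print estimator.explained_variance_ratio_
print estimator.explained_variance_ratio_.sum()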
###Markdown
To finish, let us look at principal component transformations. We will take the principal components from the estimator by accessing the components attribute. Each of its components is a matrix that is used to transform a vector from the original space to the transformed space. In the scatter we previously plotted, we only took into account the first two components.
###Code
def print_pca_components(images, n_col, n_row):
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(comp.reshape((8, 8)), interpolation='nearest')
plt.text(0, -1, str(i + 1) + '-component')
plt.xticks(())
plt.yticks(())
print_pca_components(estimator.components_[:n_components], n_col, n_row)
###Output
_____no_output_____ |