Natural Language Processing Tensorflow/C3_W3_Lab_3_Conv1D.ipynb
###Markdown Ungraded Lab: Using Convolutional Neural NetworksIn this lab, you will look at another way of building your text classification model and this will be with a convolution layer. As you learned in Course 2 of this specialization, convolutions extract features by applying filters to the input. Let's see how you can use that for text data in the next sections. Download and prepare the dataset ###Code import tensorflow_datasets as tfds # Download the subword encoded pretokenized dataset dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True) # Get the tokenizer tokenizer = info.features['text'].encoder BUFFER_SIZE = 10000 BATCH_SIZE = 256 # Get the train and test splits train_data, test_data = dataset['train'], dataset['test'], # Shuffle the training data train_dataset = train_data.shuffle(BUFFER_SIZE) # Batch and pad the datasets to the maximum length of the sequences train_dataset = train_dataset.padded_batch(BATCH_SIZE) test_dataset = test_data.padded_batch(BATCH_SIZE) ###Output _____no_output_____ ###Markdown Build the ModelIn Course 2, you were using 2D convolution layers because you were applying it on images. For temporal data such as text sequences, you will use [Conv1D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D) instead so the convolution will happen over a single dimension. You will also append a pooling layer to reduce the output of the convolution layer. For this lab, you will use [GlobalMaxPooling1D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalMaxPool1D) to get the max value across the time dimension. You can also use average pooling and you will do that in the next labs. See how these layers behave as standalone layers in the cell below. ###Code import tensorflow as tf import numpy as np # Hyperparameters batch_size = 1 timesteps = 20 features = 20 filters = 128 kernel_size = 5 print(f'batch_size: {batch_size}') print(f'timesteps (sequence length): {timesteps}') print(f'features (embedding size): {features}') print(f'filters: {filters}') print(f'kernel_size: {kernel_size}') # Define array input with random values random_input = np.random.rand(batch_size,timesteps,features) print(f'shape of input array: {random_input.shape}') # Pass array to convolution layer and inspect output shape conv1d = tf.keras.layers.Conv1D(filters=filters, kernel_size=kernel_size, activation='relu') result = conv1d(random_input) print(f'shape of conv1d output: {result.shape}') # Pass array to max pooling layer and inspect output shape gmp = tf.keras.layers.GlobalMaxPooling1D() result = gmp(result) print(f'shape of global max pooling output: {result.shape}') ###Output batch_size: 1 timesteps (sequence length): 20 features (embedding size): 20 filters: 128 kernel_size: 5 shape of input array: (1, 20, 20) shape of conv1d output: (1, 16, 128) shape of global max pooling output: (1, 128) ###Markdown You can build the model by simply appending the convolution and pooling layer after the embedding layer as shown below. 
###Code import tensorflow as tf # Hyperparameters embedding_dim = 64 filters = 128 kernel_size = 5 dense_dim = 64 # Build the model model = tf.keras.Sequential([ tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim), tf.keras.layers.Conv1D(filters=filters, kernel_size=kernel_size, activation='relu'), tf.keras.layers.GlobalMaxPooling1D(), tf.keras.layers.Dense(dense_dim, activation='relu'), tf.keras.layers.Dense(1, activation='sigmoid') ]) # Print the model summary model.summary() # Set the training parameters model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Train the modelTraining will take around 30 seconds per epoch and you will notice that it reaches higher accuracies than the previous models you've built. ###Code NUM_EPOCHS = 10 # Train the model history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset) import matplotlib.pyplot as plt # Plot utility def plot_graphs(history, string): plt.plot(history.history[string]) plt.plot(history.history['val_'+string]) plt.xlabel("Epochs") plt.ylabel(string) plt.legend([string, 'val_'+string]) plt.show() # Plot the accuracy and results plot_graphs(history, "accuracy") plot_graphs(history, "loss") ###Output _____no_output_____
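###Markdown As a quick check of the trained classifier, here is a minimal sketch of how the `model` and the subword `tokenizer` defined above might be used to score a new review; the sample text is only an illustrative assumption.
###Code
# Sketch: score a single review with the trained model and the subword tokenizer.
# `model` and `tokenizer` come from the cells above; the review text is illustrative.
import tensorflow as tf

sample_review = "The movie was slow at first but the ending was fantastic."

# Encode to subword ids and wrap in a batch of one
encoded = tokenizer.encode(sample_review)
padded = tf.keras.preprocessing.sequence.pad_sequences([encoded], padding='post')

# Sigmoid output: values close to 1 suggest a positive review
score = model.predict(padded)[0][0]
print(f'positive sentiment probability: {score:.3f}')
###Output _____no_output_____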
Flower_Classifier_TransferLearning.ipynb
###Markdown Classify Images of Flowers using Transfer Learning **Purpose**: Classify images of flowers with transfer learning using TensorFlow Hub, Google's Flowers Dataset, and MobileNet v2. Dataset Used: [Flower dataset from Google](https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz) Project based on [TensorFlow's classification example](https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c02_exercise_flowers_with_transfer_learning.ipynb) ###Code # import tf and dependencies import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import tensorflow_hub as hub import tensorflow_datasets as tfds from tensorflow.keras import layers import logging logger = tf.get_logger() logger.setLevel(logging.ERROR) ###Output _____no_output_____ ###Markdown **Set up** 1. Download the flowers dataset 2. Print info and reformat images 3. Create batches ###Code # download the flowers dataset (training_set, validation_set), dataset_info = tfds.load( 'tf_flowers', with_info=True, as_supervised=True, split = ['train[:70%]', 'train[70%:]'] ) # print info about the flowers dataset num_classes = dataset_info.features['label'].num_classes num_training_examples = 0 num_validation_examples = 0 for example in training_set: num_training_examples += 1 for example in validation_set: num_validation_examples += 1 print('Total Number of Classes: {}'.format(num_classes)) print('Total Number of Training Images: {}'.format(num_training_examples)) print('Total Number of Validation Images: {} \n'.format(num_validation_examples)) # print size of images for i, example in enumerate(training_set.take(5)): print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1])) # reformat images IMAGE_RES = 224 def format_image(image, label): image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0 return image, label # create batches BATCH_SIZE = 32 train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1) validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1) ###Output _____no_output_____ ###Markdown **Do Transfer Learning with TensorFlow Hub** 1. Create a Feature Extractor using MobileNet v2 2. Freeze the pretrained model 3. Attach a classification head 4. Train the model 5. 
Plot training and validation graphs ###Code # create a feature extractor URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" feature_extractor = hub.KerasLayer(URL, input_shape=(IMAGE_RES, IMAGE_RES, 3)) # freeze the variables in the feature extractor layer feature_extractor.trainable = False # attach a classification head model = tf.keras.Sequential([ feature_extractor, layers.Dense(num_classes) ]) # train the model model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'] ) EPOCHS = 6 history = model.fit(train_batches, epochs=EPOCHS, validation_data=validation_batches) # plot training and validation graph acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs_range = range(EPOCHS) plt.figure(figsize=(8,8)) plt.subplot(1,2,1) plt.plot(epochs_range, acc, label='Training Accuracy') plt.plot(epochs_range, val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() # check predictions against actual and convert class names into numpy array class_names = np.array(dataset_info.features['label'].names) class_names # create an image batch image_batch, label_batch = next(iter(train_batches)) image_batch = image_batch.numpy() label_batch = label_batch.numpy() predicted_batch = model.predict(image_batch) predicted_batch = tf.squeeze(predicted_batch).numpy() predicted_ids = np.argmax(predicted_batch, axis=-1) predicted_class_names = class_names[predicted_ids] print(predicted_class_names) print("Labels: ", label_batch) print("Predicted labels: ", predicted_ids) # plot plt.figure(figsize=(10,9)) for n in range(30): plt.subplot(6,5,n+1) plt.subplots_adjust(hspace = 0.3) plt.imshow(image_batch[n]) color = "blue" if predicted_ids[n] == label_batch[n] else "red" plt.title(predicted_class_names[n].title(), color=color) plt.axis('off') _ = plt.suptitle("Model predictions (blue: correct, red: incorrect)") ###Output _____no_output_____ ###Markdown **Extension: Perform Transfer Learning with the Inception Model** Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) and click on `tf2-preview/inception_v3/feature_vector`. This feature vector corresponds to the Inception v3 model. In the cells below, use transfer learning to create a CNN that uses Inception v3 as the pretrained model to classify the images from the Flowers dataset. Note that Inception, takes as input, images that are 299 x 299 pixels. Compare the accuracy you get with Inception v3 to the accuracy you got with MobileNet v2. 
###Code IMAGE_RES = 299 (training_set, validation_set), dataset_info = tfds.load( 'tf_flowers', split=['train[:70%]', 'train[70%:]'], with_info=True, as_supervised=True, ) train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1) validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1) URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4" feature_extractor = hub.KerasLayer(URL, input_shape=(IMAGE_RES, IMAGE_RES, 3)) model_inception = tf.keras.Sequential([ feature_extractor, layers.Dense(num_classes) ]) model_inception.summary() model_inception.compile( optimizer='adam', loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'] ) EPOCHS = 6 history = model_inception.fit(train_batches, epochs=EPOCHS, validation_data=validation_batches) # plot training and validation graph acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs_range = range(EPOCHS) plt.figure(figsize=(8,8)) plt.subplot(1,2,1) plt.plot(epochs_range, acc, label='Training Accuracy') plt.plot(epochs_range, val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() # check predictions against actual and convert class names into numpy array class_names = np.array(dataset_info.features['label'].names) class_names # create an image batch image_batch, label_batch = next(iter(train_batches)) image_batch = image_batch.numpy() label_batch = label_batch.numpy() predicted_batch = model_inception.predict(image_batch) predicted_batch = tf.squeeze(predicted_batch).numpy() predicted_ids = np.argmax(predicted_batch, axis=-1) predicted_class_names = class_names[predicted_ids] print(predicted_class_names) print("Labels: ", label_batch) print("Predicted labels: ", predicted_ids) # plot plt.figure(figsize=(10,9)) for n in range(30): plt.subplot(6,5,n+1) plt.subplots_adjust(hspace = 0.3) plt.imshow(image_batch[n]) color = "blue" if predicted_ids[n] == label_batch[n] else "red" plt.title(predicted_class_names[n].title(), color=color) plt.axis('off') _ = plt.suptitle("Model predictions (blue: correct, red: incorrect)") ###Output _____no_output_____
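###Markdown The extension above asks you to compare the two feature extractors. A minimal sketch of that comparison, assuming the two `model.fit(...)` results were kept under separate (hypothetical) names instead of both overwriting `history`:
###Code
# Hypothetical names: the notebook reuses `history` for both fits, so keep the two
# History objects separately (e.g. history_mobilenet and history_inception) to compare them.
best_val_acc_mobilenet = max(history_mobilenet.history['val_accuracy'])
best_val_acc_inception = max(history_inception.history['val_accuracy'])
print('MobileNet v2 best validation accuracy: {:.3f}'.format(best_val_acc_mobilenet))
print('Inception v3 best validation accuracy: {:.3f}'.format(best_val_acc_inception))
###Output _____no_output_____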
talleres_inov_docente/2-02-aprendizaje_no_supervisado_transformaciones.ipynb
###Markdown Unsupervised learning part 1 - transformations Many forms of unsupervised learning, such as dimensionality reduction, manifold learning and feature extraction, find a new representation of the input data without any additional variables (unlike supervised learning, unsupervised algorithms do not require or use target variables, as they were used in the earlier classification and regression cases). A very basic example is rescaling the data, which is a requirement for many machine learning algorithms because they are not scale-invariant (although rescaling is really more of a preprocessing step, since there is not much *learning* involved). There are many rescaling techniques and, in the following example, we will look at one particular method called "standardization". With this method, we rescale the data so that each feature is centered at zero (mean = 0) with unit variance (standard deviation = 1). For example, if we have a one-dimensional dataset with the values $[1, 2, 3, 4, 5]$, the standardized values would be:
- 1 -> -1.41
- 2 -> -0.71
- 3 -> 0.0
- 4 -> 0.71
- 5 -> 1.41

which can be obtained with the equation: $$x_{standardized} = \frac{x - \mu_x}{\sigma_x}$$ where $\mu$ is the sample mean and $\sigma$ the standard deviation.
###Code ary = np.array([1, 2, 3, 4, 5]) ary_standardized = (ary - ary.mean()) / ary.std() ary_standardized ###Output _____no_output_____
###Markdown Although standardization is a very basic method (and its code is simple, as we have just seen), scikit-learn implements a `StandardScaler` class to do the computation. In later sections we will see why it is better to use the scikit-learn interface than the code above. Applying a preprocessing algorithm has an interface very similar to the one used for the supervised algorithms we have seen so far. To get more practice with scikit-learn's ``Transformer`` interface, let's start by loading the iris dataset and rescaling it:
###Code from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split iris = load_iris() X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0) print(X_train.shape) ###Output _____no_output_____
###Markdown The iris dataset is not "centered", that is, it has a non-zero mean and a different standard deviation for each component:
###Code print("mean : %s " % X_train.mean(axis=0)) print("standard deviation : %s " % X_train.std(axis=0)) ###Output _____no_output_____
###Markdown To use a preprocessing method, we first import the estimator, in this case ``StandardScaler``, and then instantiate it:
###Code from sklearn.preprocessing import StandardScaler scaler = StandardScaler() ###Output _____no_output_____
###Markdown As with the regression and classification algorithms, we call ``fit`` to learn the model from the data. Since this is an unsupervised model, we only pass it ``X``, not ``y``. This simply computes the mean and the standard deviation.
###Code scaler.fit(X_train) print(scaler.mean_) print(scaler.scale_) ###Output _____no_output_____
###Markdown Now we can rescale the data by applying the ``transform`` method (not ``predict``):
###Code X_train_scaled = scaler.transform(X_train) ###Output _____no_output_____
###Markdown ``X_train_scaled`` has the same number of examples and features, but the mean has been subtracted and all features now have unit standard deviation:
###Code print(X_train_scaled.shape) print("mean : %s " % X_train_scaled.mean(axis=0)) print("standard deviation : %s " % X_train_scaled.std(axis=0)) ###Output _____no_output_____
###Markdown In summary, the `fit` method fits the estimator to the data we provide. In this step, the estimator estimates the parameters of the data (e.g. mean and standard deviation). Then, if we apply `transform`, those parameters are used to transform a dataset (**the `transform` method does not modify the parameters**). It is important to note that the same transformation is applied to the training and the test data. As a consequence, the test mean and standard deviation do not have to be 0 and 1:
###Code X_test_scaled = scaler.transform(X_test) print("test data means: %s" % X_test_scaled.mean(axis=0)) ###Output _____no_output_____
###Markdown The transformation applied to training and test data must always be the same for what we are doing to make sense. In the following example we apply `MinMaxScaler` to a dataset in two ways: the first one fits ("trains") the transformation on the blue points and then applies that same transformation to the red points (correct), while the second one fits the transformation on the blue and the red points independently. Note what happens to the red point (0.6, 0.4). There are many ways to scale the data. The most common is the ``StandardScaler`` we have already mentioned, but there are other useful classes such as:
- ``MinMaxScaler``: rescale the data so it fits between a minimum and a maximum (usually between 0 and 1)
- ``RobustScaler``: use more robust statistics such as the median and the quartiles, instead of the mean and the standard deviation.
- ``Normalizer``: normalize each example individually so that it has unit norm (L1 or L2). By default, L2 is used.

See what happens with the `MinMaxScaler`:
###Code from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() scaler.fit(X_train) X_train_minmax = scaler.transform(X_train) print("minimums: %s " % X_train_minmax.min(axis=0)) print("maximums: %s " % X_train_minmax.max(axis=0)) ###Output _____no_output_____
###Markdown This plot compares the different scaling methods:
###Code from figures import plot_scaling plot_scaling() from figures import plot_relative_scaling plot_relative_scaling() ###Output _____no_output_____
###Markdown Principal Component Analysis ============================ A somewhat more interesting unsupervised transformation is Principal Component Analysis (PCA). It is a technique for reducing the dimensionality of the data by creating a linear projection. That is, we find new features to represent the data that are a linear combination of the original data (which is equivalent to rotating the data). In this way, we can think of PCA as a projection of our data onto a *new* feature space. PCA finds these new directions by looking for the directions of maximum variance. Usually only a few principal components are able to explain most of the variance, and the rest can be discarded. The premise is to reduce the size (dimensionality) of the dataset while capturing most of the information. There are many reasons why reducing the dimensionality of a dataset is useful: we reduce the computational cost of the learning algorithms, we reduce disk space, and we help combat the so-called *curse of dimensionality*, which we will discuss in more depth later. To illustrate how a rotation can work, we will first show it on two-dimensional data and keep both principal components:
###Code from figures import plot_pca_illustration plot_pca_illustration() ###Output _____no_output_____
###Markdown Let's now look at all the steps in more detail. We create a Gaussian cloud of points, which is then rotated:
###Code rnd = np.random.RandomState(5) X_ = rnd.normal(size=(300, 2)) X_blob = np.dot(X_, rnd.normal(size=(2, 2)))+rnd.normal(size=2) y = X_[:, 0] > 0 plt.scatter(X_blob[:, 0], X_blob[:, 1], c=y, linewidths=0, s=30) plt.xlabel(u"feature 1") plt.ylabel(u"feature 2"); ###Output _____no_output_____
###Markdown As always, we instantiate our PCA model. By default, all components are kept:
###Code from sklearn.decomposition import PCA pca = PCA() ###Output _____no_output_____
###Markdown Then we fit the PCA to the data. Since PCA is an unsupervised algorithm, there is no ``y`` to supply.
###Code pca.fit(X_blob) ###Output _____no_output_____
###Markdown We can then transform the data, projecting onto the principal components:
###Code X_pca = pca.transform(X_blob) plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, linewidths=0, s=30) plt.xlabel("first principal component") plt.ylabel("second principal component"); ###Output _____no_output_____
###Markdown Now let's use a single principal component:
###Code pca = PCA(n_components=1).fit(X_blob) X_blob.shape X_pca = pca.transform(X_blob) print(X_pca.shape) plt.scatter(X_pca[:, 0], np.zeros(X_pca.shape[0]), c=y, linewidths=0, s=30) plt.xlabel("first principal component"); ###Output _____no_output_____
###Markdown PCA places the first component along the diagonal of the data (the direction of maximum variability) and the second perpendicular to the first. The components are always orthogonal to each other. Dimensionality reduction for visualization with PCA ------------------------------------------------------------- Consider the digits dataset. It cannot be visualized in a single 2D plot, because it has 64 features. We are going to extract 2 dimensions to visualize it, following this [example](http://scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html) from scikit-learn.
###Code from figures import digits_plot digits_plot() ###Output _____no_output_____
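###Markdown To quantify how much of the variance each principal component captures, here is a short sketch using scikit-learn's `explained_variance_ratio_` attribute on the same `X_blob` data:
###Code
from sklearn.decomposition import PCA
import numpy as np

pca = PCA().fit(X_blob)
# Fraction of total variance explained by each component, and the running total
print(pca.explained_variance_ratio_)
print(np.cumsum(pca.explained_variance_ratio_))
###Output _____no_output_____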
exploratory analysis/business anaysis/Data exploratory analysis.ipynb
###Markdown Exploratory Analysis of Yelp Datasets ###Code %pylab inline import warnings warnings.filterwarnings('ignore') import pandas as pd ###Output Populating the interactive namespace from numpy and matplotlib ###Markdown Business Categories with maximum Reviews ###Code categories_vs_reviews = pd.read_csv('business-categories-reviews.csv') top_50_categories_vs_reviews = categories_vs_reviews.pivot_table('Review Count', index='Category').sort_values(by='Review Count', ascending=False)[:30] print("Business Categories wit maximum Reviews") display(top_50_categories_vs_reviews.head(10)) print (len(categories_vs_reviews)) graph = top_50_categories_vs_reviews.plot(kind='bar',width = 0.35,figsize=(16,8)) graph.set_ylabel('Review Count',fontsize=18) graph.set_xlabel('Category',fontsize=18) graph.set_title('Title',fontsize=18) for tick in graph.get_xticklabels(): tick.set_fontsize("20") for tick in graph.get_yticklabels(): tick.set_fontsize("20") plt.show() # libraries and data import matplotlib.pyplot as plt import matplotlib.lines as mlines import matplotlib.patches as mpatches import numpy as np import pandas as pd import pprint from tabulate import tabulate def plot_line(df,header): # style plt.style.use('seaborn-darkgrid') # line plot # first is x axis, 2nd is y axis plt.plot(df[str(header[0])], df[str(header[1])], marker='', color='red', linewidth=1, alpha=1) # Add legend red_line = mlines.Line2D([], [], color='red', alpha=1, linewidth=2, label=str(header[1])) plt.legend(loc=1, ncol=2, handles=[red_line]) #red_patch = mpatches.Patch(color='red', label=header[1]) #plt.legend(loc=1, ncol=2, handles=[red_patch]) # Add titles # plt.title("Frequency", loc='left', fontsize=14, fontweight=0, color='orange') plt.xlabel(str(header[0])) plt.ylabel(str(header[1])) #plt.xticks(df[str(header[0])] , rotation=45 ) plt.show(block=True) df = pd.DataFrame( {'Business Categories': list(range(1,len(categories_vs_reviews)+1)), 'No. of Reviews': categories_vs_reviews['Review Count'] }) display(df.head()) plot_line(df,list(df)) ###Output _____no_output_____ ###Markdown States with maximum Reviews ###Code state_vs_reviews = pd.read_csv('states-reviews.csv') top_50_state_vs_reviews = (state_vs_reviews.pivot_table('Review Count', index='State') .sort_values(by='Review Count', ascending=False)[:30]) print(" Top 10 states with max reviews") display(top_50_state_vs_reviews.head(10)) print (len(state_vs_reviews)) graph = top_50_state_vs_reviews.plot(kind='bar',width = 0.35,figsize=(16,8)) graph.set_ylabel('Review Count',fontsize=18) graph.set_xlabel('State',fontsize=18) # graph.set_title('Title',fontsize=18) for tick in graph.get_xticklabels(): tick.set_fontsize("18") for tick in graph.get_yticklabels(): tick.set_fontsize("18") plt.show() def plot_line2(X,Y,x_label,y_label,title,legend): # style plt.style.use('seaborn-darkgrid') # line plot # first is x axis, 2nd is y axis plt.plot(X, Y, marker='', color='red', linewidth=1, alpha=1) # Add legend red_line = mlines.Line2D([], [], color='red', alpha=1, linewidth=2, label=legend) plt.legend(loc=1, ncol=2, handles=[red_line]) #red_patch = mpatches.Patch(color='red', label=header[1]) #plt.legend(loc=1, ncol=2, handles=[red_patch]) # Add titles plt.title(title, loc='left', fontsize=12, fontweight=0, color='orange') plt.xlabel(x_label) plt.ylabel(y_label) #plt.xticks(df[str(header[0])] , rotation=45 ) plt.show(block=True) df2 = pd.DataFrame( { 'No. of Reviews': state_vs_reviews['Review Count'], 'State': list(range(1,len(state_vs_reviews)+1)) }) title = "Frequency of no. 
of reviews according to states" header = list(df2) plot_line2(df2[str(header[1])],df2[str(header[0])],str(header[1]),str(header[0]),title,str(header[0])) ###Output _____no_output_____ ###Markdown Cities with maximum Reviews ###Code cities_vs_reviews = pd.read_csv('cities-reviews.csv') top_50_cities_vs_reviews = cities_vs_reviews.pivot_table('Review Count', index='City').sort_values(by='Review Count', ascending=False)[:30] display(top_50_cities_vs_reviews.head(10)) graph = top_50_cities_vs_reviews.plot(kind='bar',width = 0.35,figsize=(16,8)) graph.set_ylabel('Review Count',fontsize=18) graph.set_xlabel('City',fontsize=18) # graph.set_title('Title',fontsize=18) for tick in graph.get_xticklabels(): tick.set_fontsize("18") for tick in graph.get_yticklabels(): tick.set_fontsize("18") plt.show() df3 = pd.DataFrame( {'City': list(range(1,len(cities_vs_reviews)+1)), 'No. of Reviews': cities_vs_reviews['Review Count'] }) plot_line(df3,list(df3)) ###Output _____no_output_____
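###Markdown The same bar-chart code is repeated above for categories, states and cities; a small helper along these lines (a sketch built from the same pandas/matplotlib calls used in this notebook) would avoid the duplication:
###Code
def plot_top_reviews(data, index_col, top_n=30):
    # Bar chart of the top-N groups by review count, mirroring the plots above.
    top = (data.pivot_table('Review Count', index=index_col)
               .sort_values(by='Review Count', ascending=False)[:top_n])
    graph = top.plot(kind='bar', width=0.35, figsize=(16, 8))
    graph.set_xlabel(index_col, fontsize=18)
    graph.set_ylabel('Review Count', fontsize=18)
    for tick in graph.get_xticklabels() + graph.get_yticklabels():
        tick.set_fontsize(18)
    plt.show()

# e.g. plot_top_reviews(cities_vs_reviews, 'City')
###Output _____no_output_____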
docs/examples/QDevil/QDAC2/Scan2DDiode.ipynb
###Markdown QDAC-II 2D diode scan ###Code from time import sleep import numpy as np import matplotlib.pyplot as plt from IPython.display import Image, display from qcodes_contrib_drivers.drivers.QDevil import QDAC2 qdac_addr = '192.168.8.17' qdac = QDAC2.QDac2('QDAC2', visalib='@py', address=f'TCPIP::{qdac_addr}::5025::SOCKET') qdac.reset() sleep(3) arrangement = qdac.arrange( # QDAC channels 2 & 3 connected to the ends of two back-to-back Ge diodes gates={'diodes_left': 2, 'diodes_right': 3}, # Internal trigger for measuring current internal_triggers={'inner'}) inner_steps = 21 inner_V = np.linspace(-0.3, 0.4, inner_steps) outer_steps = 21 outer_V = np.linspace(-0.2, 0.5, outer_steps) sweep = arrangement.virtual_sweep2d( inner_gate='diodes_left', inner_voltages=inner_V, outer_gate='diodes_right', outer_voltages=outer_V, inner_step_time_s=20e-3, inner_step_trigger='inner') qdac.errors() # Hook up current measurement to the internal trigger produced by the sweep diodes = qdac.channel(2) diodes.clear_measurements() measurement = diodes.measurement() measurement.start_on(arrangement.get_trigger_by_name('inner')) qdac.errors() # Start sweep sweep.start() sleep(10) # Stop current flow qdac.channel(2).dc_constant_V(0) qdac.channel(3).dc_constant_V(0) sleep(3) raw = measurement.available_A() # Circumvent flaw in 0.12.0 driver print(len(raw)) available = list(map(lambda x: float(x), raw[-(outer_steps * inner_steps):])) currents = np.reshape(available, (-1, inner_steps)) * 1000 fig, ax = plt.subplots() plt.title('diodes (Ge) back-to-back') extent = [inner_V[0],inner_V[-1],outer_V[0],outer_V[-1]] img = ax.imshow(currents, cmap='plasma', interpolation='nearest', extent=extent) ax.set_xlabel('Volt') ax.set_ylabel('Volt') colorbar = fig.colorbar(img) colorbar.set_label('mA') ###Output _____no_output_____
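###Markdown A small sketch of how the measured current map might be stored alongside the sweep axes, so the plot can be reproduced without re-running the sweep (plain NumPy; the file name is an arbitrary assumption):
###Code
import numpy as np

# `currents` (mA), `inner_V` and `outer_V` are the arrays built above.
np.savez('diode_scan_2d.npz', currents_mA=currents, inner_V=inner_V, outer_V=outer_V)

# Later: data = np.load('diode_scan_2d.npz'); data['currents_mA']
###Output _____no_output_____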
experiments/tl_3v2/jitter10/cores-oracle.run1.framed/trials/8/trial.ipynb
###Markdown Transfer Learning Template ###Code %load_ext autoreload %autoreload 2 %matplotlib inline import os, json, sys, time, random import numpy as np import torch from torch.optim import Adam from easydict import EasyDict import matplotlib.pyplot as plt from steves_models.steves_ptn import Steves_Prototypical_Network from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper from steves_utils.iterable_aggregator import Iterable_Aggregator from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig from steves_utils.torch_sequential_builder import build_sequential from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path) from steves_utils.PTN.utils import independent_accuracy_assesment from torch.utils.data import DataLoader from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory from steves_utils.ptn_do_report import ( get_loss_curve, get_results_table, get_parameters_table, get_domain_accuracies, ) from steves_utils.transforms import get_chained_transform ###Output _____no_output_____ ###Markdown Allowed ParametersThese are allowed parameters, not defaultsEach of these values need to be present in the injected parameters (the notebook will raise an exception if they are not present)Papermill uses the cell tag "parameters" to inject the real parameters below this cell.Enable tags to see what I mean ###Code required_parameters = { "experiment_name", "lr", "device", "seed", "dataset_seed", "n_shot", "n_query", "n_way", "train_k_factor", "val_k_factor", "test_k_factor", "n_epoch", "patience", "criteria_for_best", "x_net", "datasets", "torch_default_dtype", "NUM_LOGS_PER_EPOCH", "BEST_MODEL_PATH", "x_shape", } from steves_utils.CORES.utils import ( ALL_NODES, ALL_NODES_MINIMUM_1000_EXAMPLES, ALL_DAYS ) from steves_utils.ORACLE.utils_v2 import ( ALL_DISTANCES_FEET_NARROWED, ALL_RUNS, ALL_SERIAL_NUMBERS, ) standalone_parameters = {} standalone_parameters["experiment_name"] = "STANDALONE PTN" standalone_parameters["lr"] = 0.001 standalone_parameters["device"] = "cuda" standalone_parameters["seed"] = 1337 standalone_parameters["dataset_seed"] = 1337 standalone_parameters["n_way"] = 8 standalone_parameters["n_shot"] = 3 standalone_parameters["n_query"] = 2 standalone_parameters["train_k_factor"] = 1 standalone_parameters["val_k_factor"] = 2 standalone_parameters["test_k_factor"] = 2 standalone_parameters["n_epoch"] = 50 standalone_parameters["patience"] = 10 standalone_parameters["criteria_for_best"] = "source_loss" standalone_parameters["datasets"] = [ { "labels": ALL_SERIAL_NUMBERS, "domains": ALL_DISTANCES_FEET_NARROWED, "num_examples_per_domain_per_label": 100, "pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"), "source_or_target_dataset": "source", "x_transforms": ["unit_mag", "minus_two"], "episode_transforms": [], "domain_prefix": "ORACLE_" }, { "labels": ALL_NODES, "domains": ALL_DAYS, "num_examples_per_domain_per_label": 100, "pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"), "source_or_target_dataset": "target", "x_transforms": ["unit_power", "times_zero"], "episode_transforms": [], "domain_prefix": "CORES_" } ] standalone_parameters["torch_default_dtype"] = "torch.float32" standalone_parameters["x_net"] = [ {"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}}, {"class": "Conv2d", "kargs": { 
"in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":256}}, {"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features":256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ] # Parameters relevant to results # These parameters will basically never need to change standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10 standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth" # Parameters parameters = { "experiment_name": "tl_3-jitter10v2:cores -> oracle.run1.framed", "device": "cuda", "lr": 0.0001, "x_shape": [2, 256], "n_shot": 3, "n_query": 2, "train_k_factor": 3, "val_k_factor": 2, "test_k_factor": 2, "torch_default_dtype": "torch.float32", "n_epoch": 50, "patience": 3, "criteria_for_best": "target_accuracy", "x_net": [ {"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}}, { "class": "Conv2d", "kargs": { "in_channels": 1, "out_channels": 256, "kernel_size": [1, 7], "bias": False, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 256}}, { "class": "Conv2d", "kargs": { "in_channels": 256, "out_channels": 80, "kernel_size": [2, 7], "bias": True, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features": 256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ], "NUM_LOGS_PER_EPOCH": 10, "BEST_MODEL_PATH": "./best_model.pth", "n_way": 16, "datasets": [ { "labels": [ "1-10.", "1-11.", "1-15.", "1-16.", "1-17.", "1-18.", "1-19.", "10-4.", "10-7.", "11-1.", "11-14.", "11-17.", "11-20.", "11-7.", "13-20.", "13-8.", "14-10.", "14-11.", "14-14.", "14-7.", "15-1.", "15-20.", "16-1.", "16-16.", "17-10.", "17-11.", "17-2.", "19-1.", "19-16.", "19-19.", "19-20.", "19-3.", "2-10.", "2-11.", "2-17.", "2-18.", "2-20.", "2-3.", "2-4.", "2-5.", "2-6.", "2-7.", "2-8.", "3-13.", "3-18.", "3-3.", "4-1.", "4-10.", "4-11.", "4-19.", "5-5.", "6-15.", "7-10.", "7-14.", "8-18.", "8-20.", "8-3.", "8-8.", ], "domains": [1, 2, 3, 4, 5], "num_examples_per_domain_per_label": -1, "pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl", "source_or_target_dataset": "source", "x_transforms": ["jitter_256_10", "lowpass_+/-10MHz", "take_200"], "episode_transforms": [], "domain_prefix": "C_", }, { "labels": [ "3123D52", "3123D65", "3123D79", "3123D80", "3123D54", "3123D70", "3123D7B", "3123D89", "3123D58", "3123D76", "3123D7D", "3123EFE", "3123D64", "3123D78", "3123D7E", "3124E4A", ], "domains": [32, 38, 8, 44, 14, 50, 20, 26], "num_examples_per_domain_per_label": 2000, "pickle_path": "/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl", "source_or_target_dataset": "target", "x_transforms": ["jitter_256_10", "take_200", "resample_20Msps_to_25Msps"], 
"episode_transforms": [], "domain_prefix": "O_", }, ], "seed": 154325, "dataset_seed": 154325, } # Set this to True if you want to run this template directly STANDALONE = False if STANDALONE: print("parameters not injected, running with standalone_parameters") parameters = standalone_parameters if not 'parameters' in locals() and not 'parameters' in globals(): raise Exception("Parameter injection failed") #Use an easy dict for all the parameters p = EasyDict(parameters) if "x_shape" not in p: p.x_shape = [2,256] # Default to this if we dont supply x_shape supplied_keys = set(p.keys()) if supplied_keys != required_parameters: print("Parameters are incorrect") if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters)) if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys)) raise RuntimeError("Parameters are incorrect") ################################### # Set the RNGs and make it all deterministic ################################### np.random.seed(p.seed) random.seed(p.seed) torch.manual_seed(p.seed) torch.use_deterministic_algorithms(True) ########################################### # The stratified datasets honor this ########################################### torch.set_default_dtype(eval(p.torch_default_dtype)) ################################### # Build the network(s) # Note: It's critical to do this AFTER setting the RNG ################################### x_net = build_sequential(p.x_net) start_time_secs = time.time() p.domains_source = [] p.domains_target = [] train_original_source = [] val_original_source = [] test_original_source = [] train_original_target = [] val_original_target = [] test_original_target = [] # global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag # global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag def add_dataset( labels, domains, pickle_path, x_transforms, episode_transforms, domain_prefix, num_examples_per_domain_per_label, source_or_target_dataset:str, iterator_seed=p.seed, dataset_seed=p.dataset_seed, n_shot=p.n_shot, n_way=p.n_way, n_query=p.n_query, train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor), ): if x_transforms == []: x_transform = None else: x_transform = get_chained_transform(x_transforms) if episode_transforms == []: episode_transform = None else: raise Exception("episode_transforms not implemented") episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1]) eaf = Episodic_Accessor_Factory( labels=labels, domains=domains, num_examples_per_domain_per_label=num_examples_per_domain_per_label, iterator_seed=iterator_seed, dataset_seed=dataset_seed, n_shot=n_shot, n_way=n_way, n_query=n_query, train_val_test_k_factors=train_val_test_k_factors, pickle_path=pickle_path, x_transform_func=x_transform, ) train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test() train = Lazy_Iterable_Wrapper(train, episode_transform) val = Lazy_Iterable_Wrapper(val, episode_transform) test = Lazy_Iterable_Wrapper(test, episode_transform) if source_or_target_dataset=="source": train_original_source.append(train) val_original_source.append(val) test_original_source.append(test) p.domains_source.extend( [domain_prefix + str(u) for u in domains] ) elif source_or_target_dataset=="target": train_original_target.append(train) val_original_target.append(val) test_original_target.append(test) p.domains_target.extend( 
[domain_prefix + str(u) for u in domains] ) else: raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}") for ds in p.datasets: add_dataset(**ds) # from steves_utils.CORES.utils import ( # ALL_NODES, # ALL_NODES_MINIMUM_1000_EXAMPLES, # ALL_DAYS # ) # add_dataset( # labels=ALL_NODES, # domains = ALL_DAYS, # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"cores_{u}" # ) # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # add_dataset( # labels=ALL_SERIAL_NUMBERS, # domains = list(set(ALL_DISTANCES_FEET) - {2,62}), # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"), # source_or_target_dataset="source", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"oracle1_{u}" # ) # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # add_dataset( # labels=ALL_SERIAL_NUMBERS, # domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}), # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"), # source_or_target_dataset="source", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"oracle2_{u}" # ) # add_dataset( # labels=list(range(19)), # domains = [0,1,2], # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"met_{u}" # ) # # from steves_utils.wisig.utils import ( # # ALL_NODES_MINIMUM_100_EXAMPLES, # # ALL_NODES_MINIMUM_500_EXAMPLES, # # ALL_NODES_MINIMUM_1000_EXAMPLES, # # ALL_DAYS # # ) # import steves_utils.wisig.utils as wisig # add_dataset( # labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES, # domains = wisig.ALL_DAYS, # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"wisig_{u}" # ) ################################### # Build the dataset ################################### train_original_source = Iterable_Aggregator(train_original_source, p.seed) val_original_source = Iterable_Aggregator(val_original_source, p.seed) test_original_source = Iterable_Aggregator(test_original_source, p.seed) train_original_target = Iterable_Aggregator(train_original_target, p.seed) val_original_target = Iterable_Aggregator(val_original_target, p.seed) test_original_target = Iterable_Aggregator(test_original_target, p.seed) # For CNN We only use X and Y. And we only train on the source. # Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. 
Finally wrap them in a dataloader transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda) val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda) test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda) train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda) val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda) test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda) datasets = EasyDict({ "source": { "original": {"train":train_original_source, "val":val_original_source, "test":test_original_source}, "processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source} }, "target": { "original": {"train":train_original_target, "val":val_original_target, "test":test_original_target}, "processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target} }, }) from steves_utils.transforms import get_average_magnitude, get_average_power print(set([u for u,_ in val_original_source])) print(set([u for u,_ in val_original_target])) s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source)) print(s_x) # for ds in [ # train_processed_source, # val_processed_source, # test_processed_source, # train_processed_target, # val_processed_target, # test_processed_target # ]: # for s_x, s_y, q_x, q_y, _ in ds: # for X in (s_x, q_x): # for x in X: # assert np.isclose(get_average_magnitude(x.numpy()), 1.0) # assert np.isclose(get_average_power(x.numpy()), 1.0) ################################### # Build the model ################################### # easfsl only wants a tuple for the shape model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape)) optimizer = Adam(params=model.parameters(), lr=p.lr) ################################### # train ################################### jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device) jig.train( train_iterable=datasets.source.processed.train, source_val_iterable=datasets.source.processed.val, target_val_iterable=datasets.target.processed.val, num_epochs=p.n_epoch, num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH, patience=p.patience, optimizer=optimizer, criteria_for_best=p.criteria_for_best, ) total_experiment_time_secs = time.time() - start_time_secs ################################### # Evaluate the model ################################### source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test) target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test) source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val) target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val) history = jig.get_history() total_epochs_trained = len(history["epoch_indices"]) val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val)) confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl) per_domain_accuracy = per_domain_accuracy_from_confusion(confusion) # Add a key to per_domain_accuracy for if it was a source domain for domain, accuracy in per_domain_accuracy.items(): per_domain_accuracy[domain] = { "accuracy": accuracy, "source?": domain in p.domains_source } # Do an independent accuracy assesment JUST TO BE SURE! 
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device) # _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device) # _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device) # _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device) # assert(_source_test_label_accuracy == source_test_label_accuracy) # assert(_target_test_label_accuracy == target_test_label_accuracy) # assert(_source_val_label_accuracy == source_val_label_accuracy) # assert(_target_val_label_accuracy == target_val_label_accuracy) experiment = { "experiment_name": p.experiment_name, "parameters": dict(p), "results": { "source_test_label_accuracy": source_test_label_accuracy, "source_test_label_loss": source_test_label_loss, "target_test_label_accuracy": target_test_label_accuracy, "target_test_label_loss": target_test_label_loss, "source_val_label_accuracy": source_val_label_accuracy, "source_val_label_loss": source_val_label_loss, "target_val_label_accuracy": target_val_label_accuracy, "target_val_label_loss": target_val_label_loss, "total_epochs_trained": total_epochs_trained, "total_experiment_time_secs": total_experiment_time_secs, "confusion": confusion, "per_domain_accuracy": per_domain_accuracy, }, "history": history, "dataset_metrics": get_dataset_metrics(datasets, "ptn"), } ax = get_loss_curve(experiment) plt.show() get_results_table(experiment) get_domain_accuracies(experiment) print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"]) print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"]) json.dumps(experiment) ###Output _____no_output_____
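###Markdown The final cell above serializes the `experiment` dictionary; a minimal sketch of persisting it to disk for comparison across trials (the file name is an arbitrary assumption):
###Code
import json

# `experiment` holds the parameters, results and training history assembled above.
# default=str guards against values (e.g. numpy scalars, datetimes) that json cannot encode.
with open('experiment_results.json', 'w') as f:
    json.dump(experiment, f, indent=2, default=str)
###Output _____no_output_____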
plots/measures_CSET_histograms.ipynb
###Markdown CSET profiles with MERRA, composited by cloud scene type The main data structure for this project is going to have to be a data frame, with each entry representing an aircraft profile. There may be an additional data frame for e.g. level legs.For that, we'll need to parse in every profile from the CSET data. Let's call this build_profile_table. This populates the table from the raw data and saves it to disk, or reloads it from disk. rows of the table: profile_ID | flight | start_time | end_time | z_i | lwp | dec | ###Code import os os.environ['PROJ_LIB'] = '/home/disk/p/jkcm/anaconda3/envs/measures/share/proj' import sys sys.path.insert(0, '/home/disk/p/jkcm/Code') import pandas as pd pd.options.mode.chained_assignment = None import datetime as dt import numpy as np np.warnings.filterwarnings('ignore') from Lagrangian_CSET import met_utils as mu from Lagrangian_CSET import utils as CSET_utils from classified_cset import utils from Lagrangian_CSET.CSET_data_classes import CSET_Flight from tools.decorators import timed from tools.LoopTimer import LoopTimer from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt import matplotlib as mpl import xarray as xr import glob import seaborn as sns import pickle %load_ext autoreload %autoreload 2 print('boogers') ###Output The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload boogers ###Markdown PROFILE-BASED ANALYSIS ###Code class_df = utils.load_class_data('cset') soundings_file = r'/home/disk/eos4/jkcm/Data/CSET/saved_all_soundings.pickle' with open(soundings_file, 'rb') as f: sounding_data = pickle.load(f) profiles_file = r'/home/disk/eos4/jkcm/Data/CSET/saved_all_profiles.pickle' with open(profiles_file, 'rb') as f: profile_data = pickle.load(f) def get_EIS_from_profile(profile): if 'PSXC' not in profile.keys(): profile['PSXC'] = profile['PSX'] i_1000 = np.nonzero(abs(profile['PSXC']-1000)<10)[0] i_850 = np.nonzero(abs(profile['PSXC']-850)<10)[0] i_700 = np.nonzero(abs(profile['PSXC']-700)<10)[0] t_1000 = np.nanmean(profile['ATX'][i_1000]) t_850 = np.nanmean(profile['ATX'][i_850]) t_700 = np.nanmean(profile['ATX'][i_700]) z_1000 = np.nanmean(profile['GGALT'][i_1000]) z_700 = np.nanmean(profile['GGALT'][i_700]) r_1000 = np.nanmean(profile['RHUM'][i_1000]) return mu.calculate_EIS(t_1000, t_850, t_700, z_1000, z_700, r_1000) def get_dz(mdata, lat, lon, time, lev=700): x = mdata['OMEGA'].sel(lon=lon, lat=lat, method='nearest') y = x.sel(method='nearest', time=time, tolerance=np.timedelta64(2, 'h')) pres = CSET_utils.pres_map[lev] omega_700 = y.sel(lev=pres).values.item() x = mdata['AIRDENS'].sel(lon=lon, lat=lat, method='nearest') y = x.sel(method='nearest', time=time, tolerance=np.timedelta64(2, 'h')) rho_700 = y.sel(lev=pres).values.item() dz_700 = omega_700/(9.81*rho_700) return dz_700 # def get_divergence(mdata, lat, lon, time): def build_profile_table(soundings, profiles, classifications, reload=False): if reload: #unpickle the table from disk and return it return def get_best_row(lat, lon, date, classifications): right_day = classifications[(classifications['datetime'] - date)<np.timedelta64(12,'h')] # should ensure only the right days... 
right_day['dist'] = np.sqrt((right_day['lat']-lat)**2 + (right_day['lon']-lon)**2) return right_day.loc[right_day['dist'].idxmin()] df = pd.DataFrame(columns=list(classifications.columns) + ['type', 'dec', 'zi', 'EIS', 'w_700', 'w_850', 'w_910', 'div_sfc', 'prof_date', 'lcl', 'd_q_inv', 'd_t_inv']) scat_data = xr.open_mfdataset(sorted(glob.glob(r'/home/disk/eos9/jkcm/Data/ascat/rss/2015/all/*.nc')), combine='by_coords') with xr.open_mfdataset(sorted(glob.glob(r'/home/disk/eos4/jkcm/Data/CSET/MERRA/unified_2/*.unified*.nc4')), combine='by_coords') as mdata: for key,snd in soundings.items(): lat = np.nanmedian(snd['GGLAT']) lon = np.nanmedian(snd['GGLON'])%360 date = snd['TIME'][int(len(snd['TIME'])/2)] best_class = get_best_row(lat, lon, date.replace(tzinfo=None), classifications).copy() best_class['type'] = 'sonde' best_class['dec'] = snd['dec']['alpha_qt'] best_class['zi'] = snd['dec']['z_bot'] best_class['EIS'] = get_EIS_from_profile(snd) best_class['w_700'] = get_dz(mdata, lat, lon, date, lev=700) best_class['w_850'] = get_dz(mdata, lat, lon, date, lev=850) best_class['w_910'] = get_dz(mdata, lat, lon, date, lev=910) best_class['prof_date'] = date best_class['lcl'] = get_lcl(snd) best_class['d_q_inv'] = snd['dec']['d_q_inv'] best_class['d_t_inv'] = snd['dec']['d_t_inv'] df = df.append(best_class, ignore_index=True) for key,prof in profiles.items(): lat = prof['dec']['lat'] lon = prof['dec']['lon']%360 date = utils.as_datetime(prof['dec']['time']) best_class = get_best_row(lat, lon, date.replace(tzinfo=None), classifications).copy() best_class['type'] = 'prof' best_class['dec'] = prof['dec']['alpha_qt'] best_class['zi'] = prof['dec']['z_bot'] best_class['EIS'] = get_EIS_from_profile(prof['sounding']) best_class['w_700'] = get_dz(mdata, lat, lon, date, lev=700) best_class['w_850'] = get_dz(mdata, lat, lon, date, lev=850) best_class['w_910'] = get_dz(mdata, lat, lon, date, lev=910) best_class['prof_date'] = date best_class['lcl'] = get_lcl(prof['sounding']) best_class['d_q_inv'] = prof['dec']['d_q_inv'] best_class['d_q_inv'] = prof['dec']['d_q_inv'] df = df.append(best_class, ignore_index=True) return df #read in all flight dates # return soundings = sounding_data profiles = profile_data classifications = class_df df = build_profile_table(soundings, profiles, classifications, reload=False) print('bananas') def get_lcl(prof): alt = prof['GGALT'] low_idx = np.where(alt<np.min(alt)+50) pres = np.nanmean(prof['PSXC'][low_idx])*100 temp = np.nanmean(prof['ATX'][low_idx]) rh = np.nanmean(prof['RHUM'][low_idx])/100 lcl = mu.lcl(pres,temp,rhl=rh) return lcl profiles[list(profiles.keys())[0]]['dec'] alt = profiles[list(profiles.keys())[0]]['GGALT'] low_idx = np.where(alt<np.min(alt)+50) pres = np.nanmean(soundings[list(soundings.keys())[0]]['PSXC'][low_idx])*100 temp = np.nanmean(soundings[list(soundings.keys())[0]]['ATX'][low_idx]) rh = np.nanmean(soundings[list(soundings.keys())[0]]['RHUM'][low_idx])/100 lcl = mu.lcl(pres,temp,rhl=rh) grouped = df.groupby('cat') df['class'] = df.apply(lambda x: utils.short_labels[x['cat']], axis=1) df['w_700'] = df['w_700'] * 1000 df['w_850'] = df['w_850'] * 1000 df['w_910'] = df['w_910'] * 1000 grouped = df.groupby('cat') print('bananas') types = df.type.unique() fig, ax = plt.subplots() ax.hist([df.loc[df.type == x, 'dist'] for x in types], label=types) ax.legend() ax.set_xlabel('profile-scene distance ($^{\circ}$)') ax.set_title('min distance from valid classification'); types = df.type.unique() fig, ax = plt.subplots() ax.hist([df.loc[df.type == x, 'cat'] for 
x in types], bins=np.arange(7)-0.5, label=types) ax.legend() ax.set_title('classifications by profile type') ax.set_xticks(np.arange(6)) ax.set_xticklabels([utils.short_labels[i] for i in ax.get_xticks()], rotation=45); def plot_grouped_var_dists(grouped, varname, xlims=None, xlabel=None, ax=None, savename=None, verbose=False, scale=None): if not ax: fig, ax = plt.subplots(figsize=(10,6)) else: fig = ax.figure colors = [mpl.cm.get_cmap('viridis')(i) for i in np.linspace(0,1,6)] lss = ['-',(0,(5,1,5,1)),(0,(4,1,1,1)),(0,(1,3,1,3)),(0,(2,1,1,1,1,1)),(0,(1,2,1,2))] ordering = [4, 0, 2, 3, 1, 5] for i, name in enumerate(ordering): group = grouped.get_group(name) if verbose: print(utils.short_labels[name]+':', len(group), sum(~np.isnan(group[varname].values))) sns.distplot(group[varname].values, hist = False, kde = True, kde_kws = {'shade': True, 'linewidth': 3, 'linestyle': lss[i]},#, 'bw': scale}, # hist_kws = {'histtype': 'step', 'linewidth': 2, 'alpha': 1}, label = utils.short_labels[name], color=colors[i], ax=ax) if xlims: ax.set_xlim(xlims) if xlabel: ax.set_xlabel(xlabel) ax.set_ylabel("normed density") ax.set_yticklabels if savename: fig.savefig(savename, bbox_inches='tight') return fig, ax fig, [ax1, ax2] = plt.subplots(figsize=(8, 3), nrows=1, ncols=2) plot_grouped_var_dists(grouped=grouped, varname='zi', xlabel='PBL depth (m)', ax=ax1, savename=None, verbose=False, scale=0.4) plot_grouped_var_dists(grouped=grouped, varname='dec', xlims=(0, 1), xlabel='decoupling (0-1)', ax=ax2, savename=None, verbose=False, scale=0.4); # plot_grouped_var_dists(grouped=grouped, varname='EIS', xlims=None, xlabel='EIS (K)', # ax=ax3, savename=None, verbose=False); # plot_grouped_var_dists(grouped=grouped, varname='low_cf', xlims=(0, 1), xlabel="Cloud Fraction", # ax=ax4, savename=None, verbose=True); for axi in [ax1, ax2]:#, ax3, ax4]: axi.get_legend().remove() axi.get_yaxis().set_ticks([]) axi.set_ylabel('') ax2.legend(loc='center', bbox_to_anchor=(0., -0.5), ncol=3) plt.subplots_adjust(wspace=0.15) for i, axi in enumerate([ax1, ax2]): letter = chr(ord('a') + i) axi.text(0.015, 0.985, f'({letter})', fontsize=12, horizontalalignment='left', verticalalignment='top', transform=axi.transAxes, backgroundcolor='w') # fig.savefig('/home/disk/p/jkcm/plots/measures/final/cset_hists.png', bbox_inches='tight') # depth_savename = '/home/disk/p/jkcm/plots/measures/cset_depth_by_category.png' # plot_grouped_var_dists(grouped_df=grouped, varname='zi', xlabel='PBL depth (m)', # ax=None, savename=depth_Savename, verbose=True); # dec_savename = '/home/disk/p/jkcm/plots/measures/cset_decoupling_by_category.png' # plot_grouped_var_dists(grouped_df = grouped, varname='dec', xlims=(0, 1), xlabel='decoupling (0-1)', # ax=None, savename=dec_savename, verbose=False); # EIS_savename = '/home/disk/p/jkcm/plots/measures/cset_EIS_by_category.png' # plot_grouped_var_dists(grouped_df = grouped, varname='EIS', xlims=None, xlabel='EIS (K)', # ax=None, savename=EIS_savename, verbose=False); # subs_savename = '/home/disk/p/jkcm/plots/measures/cset_subsidence_by_category.png' # plot_grouped_var_dists(grouped_df = grouped, varname='w_700', xlims=(-10, 20), xlabel="w$_{700}$ (mm/s)", # ax=None, savename=subs_savename, verbose=False); decs, zis, lcls = df.dec.values, df.zi.values, df.lcl.values idx = np.all([~np.isnan(decs), ~np.isnan(zis), ~np.isnan(lcls)], axis=0) decs, zis, lcls = decs[idx], zis[idx], lcls[idx] cld_layer = zis-lcls plt.hist(cld_layer) fig, ax = plt.subplots() ax.scatter(cld_layer, decs) ax.set_ylim((0,1)) 
counts,ybins,xbins,image = plt.hist2d(cld_layer,decs,bins = (np.arange(0,2500,400), np.arange(0, 1.01, 0.2))); fig, ax = plt.subplots() ax.hist2d(cld_layer, decs, bins = (np.arange(0,2500,200), np.arange(0, 1.01, 0.15)), density=True, cmap='Greys') # ax.contour(counts,extent=[xbins.min(),xbins.max(),ybins.min(),ybins.max()],linewidths=3) lcls = np.arange(0,2500,200) alphs = (lcls/2750)**1.1 ax.plot(lcls, alphs, label=r'Park et al. (2004), $\gamma$=1.1') ax.set_ylabel(r'$\alpha_{qT}$') ax.set_xlabel('$z_i-z_{lcl}$ (m)') ax.set_title('JHist of decoupling and cloud layer depth, \n CSET flight profiles') ax.legend() dqs, dts, low_cf = df.d_q_inv.values, df.d_t_inv.values, df.low_cf.values idx = np.all([~np.isnan(dqs), ~np.isnan(dts), ~np.isnan(low_cf), dqs<15], axis=0) dqs, dts, low_cf = dqs[idx], dts[idx], low_cf[idx] kappa = 1 + (dts)/((2260/1006)*dqs) plt.hist(kappa) fig, ax = plt.subplots() ax.plot(kappa, low_cf, '.') ax.set_xlim((-1,1)) # fig, [[ax1, ax2], [ax3, ax4]] = plt.subplots(figsize=(10, 6), nrows=2, ncols=2) fig, [ax1, ax2, ax3] = plt.subplots(figsize=(14, 3), nrows=1, ncols=3) plot_grouped_var_dists(grouped=grouped, varname='w_700', xlims = (-10, 20), xlabel="w$_{700}$ (mm/s)", ax=ax1, savename=None, verbose=False) plot_grouped_var_dists(grouped=grouped, varname='w_850', xlims=(-10, 20), xlabel='"w$_{850}$ (mm/s)"', ax=ax2, savename=None, verbose=False); plot_grouped_var_dists(grouped=grouped, varname='w_910', xlims=(-10, 20), xlabel='"w$_{910}$ (mm/s)"', ax=ax3, savename=None, verbose=False); # plot_grouped_var_dists(grouped_df = grouped, varname='w_700', xlims=(-10, 20), xlabel="w$_{700}$ (mm/s)", # ax=ax4, savename=None, verbose=False); fig.subplots_adjust(hspace=0.4) for axi in [ax1, ax2, ax3]:#, ax4]: axi.get_legend().remove() axi.get_yaxis().set_ticks([]) axi.set_ylabel('') ax2.legend(loc='center', bbox_to_anchor=(0.5, -0.4), ncol=6) ###Output _____no_output_____ ###Markdown ABOVE-CLOUD LEG-BASED ANALYSIS ###Code # Get all level legs from all flights # MFdataset the entire MODIS dataset # do the same as above lt = LoopTimer(len(CSET_utils.datemap.values())) all_level_legs = [] for i in range(15): # for i in [6]: lt.update() f = CSET_Flight('RF{:02d}'.format(i+1)) f_legs = f.split_into_legs(legs = ['p', 'c']) for leg_type, legdict in f_legs.items(): for k, v in legdict.items(): cloud_masked = CSET_utils.get_cloud_only_vals(v) if len(cloud_masked['time'])<30: # print(f'skipping flight {i+1}, seq {k}, leg {leg_type}') continue nd = np.nanmedian(cloud_masked['CONCD_LWOI']) lat = np.nanmedian(cloud_masked['GGLAT']) lon = np.nanmedian(cloud_masked['GGLON']) time = cloud_masked['time'][int(len(cloud_masked['time'])/2)].values # print(f'appending flight {i+1}, seq {k}, leg {leg_type}') all_level_legs.append(dict(nd=nd, lat=lat, lon=lon, time=time, ltype=leg_type, leg=k, flight=i)) def build_cloud_leg_table(all_level_legs, classifications, reload=False): if reload: #unpickle the table from disk and return it return def get_best_row(lat, lon, date, classifications): right_day = classifications[(classifications['datetime'] - date.replace(tzinfo=None))<np.timedelta64(12,'h')] # should ensure only the right days... 
right_day['dist'] = np.sqrt((right_day['lat']-lat)**2 + (right_day['lon']-lon)**2) return right_day.loc[right_day['dist'].idxmin()] df = pd.DataFrame(columns=list(classifications.columns) + ['type', 'ac_nd']) # with xr.open_mfdataset(sorted(glob.glob(r'/home/disk/eos4/jkcm/Data/CSET/MERRA/unified_2/*.unified*.nc4'))) as mdata: if True: for pt in all_level_legs: best_class = get_best_row(pt['lat'], pt['lon'], utils.as_datetime(pt['time']), classifications) # best_class['type'] = pt['ltype'] df = df.append(best_class, ignore_index=True) return df classifications = class_df df = build_cloud_leg_table(all_level_legs, classifications, reload=False) grouped = df.groupby('cat') # fig, [[ax1, ax2], [ax3, ax4]] = plt.subplots(figsize=(10, 6), nrows=2, ncols=2) fig, ax1 = plt.subplots(figsize=(5, 3)) plot_grouped_var_dists(grouped_df=grouped, varname='ac_nd', xlabel="ac nd", ax=ax1, savename=None, verbose=False) # plot_grouped_var_dists(grouped_df = grouped, varname='w_700', xlims=(-10, 20), xlabel="w$_{700}$ (mm/s)", # ax=ax4, savename=None, verbose=False); # fig.subplots_adjust(hspace=0.4) # for axi in [ax1, ax2, ax3]:#, ax4]: # axi.get_legend().remove() # axi.get_yaxis().set_ticks([]) # axi.set_ylabel('') # ax2.legend(loc='center', bbox_to_anchor=(0.5, -0.4), ncol=6) # @timed def build_all_profiles(): all_profiles = {} all_soundings = {} lt = LoopTimer(len(CSET_utils.datemap.values())) for i in range(15): lt.update() f = CSET_Flight('RF{:02d}'.format(i+1)) # f.add_ERA_data() f.add_AVAPS_data() #profiles # profs = f.get_profiles() # for k,v in profs.items(): # all_profiles['RF{:02d}{}'.format(i+1, k)] = v for (d,sounding) in f.AVAPS_profiles.items(): sounding['PSXC'] = sounding['PSX'] decoupling_dict = mu.calc_decoupling_and_inversion_from_sounding(sounding, usetheta=False, smooth_t=False) sounding['dec'] = decoupling_dict all_soundings['RF{:02d}{}'.format(i+1, d)] = sounding del f return all_profiles, all_soundings _, all_soundings = build_all_profiles() # savefile = r'/home/disk/eos4/jkcm/Data/CSET/saved_all_profiles.pickle' # with open(savefile, 'wb') as f: # pickle.dump(all_profiles,f) savefile = r'/home/disk/eos4/jkcm/Data/CSET/saved_all_soundings.pickle' with open(savefile, 'wb') as f: pickle.dump(all_soundings,f) ###Output 93.75% ETA 02:01:07 time left: 41 seconds
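###Markdown The `reload` branches in `build_profile_table` and `build_cloud_leg_table` above are left as stubs ("unpickle the table from disk and return it"). A minimal sketch of the pickle-based caching those comments describe might look like the following; the cache path and exact behaviour are my assumptions, not part of the original analysis. ###Code
import os
import pickle

# Hypothetical cache location -- adjust to the real data directory.
CACHE_FILE = r'/home/disk/eos4/jkcm/Data/CSET/saved_profile_table.pickle'

def load_or_build_table(build_fn, reload=False, cache_file=CACHE_FILE, **kwargs):
    """Return the cached table if reload=True and a cache exists; otherwise build it and cache it."""
    if reload and os.path.exists(cache_file):
        with open(cache_file, 'rb') as f:
            return pickle.load(f)
    table = build_fn(**kwargs)
    with open(cache_file, 'wb') as f:
        pickle.dump(table, f)
    return table
###Output
_____no_output_____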
python-data-science-machine-learning/4-visualizacao-dados/matplotlib/matplotlib-basic.ipynb
###Markdown Using matplotlib -- the simple (state-based) interface ###Code
plt.plot(x, y, color = 'r')
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.title('Title')

plt.subplot(1, 2, 1)
plt.plot(x, y, 'r--')
plt.subplot(1, 2, 2)
plt.plot(y, x, 'g*-')
###Output _____no_output_____ ###Markdown Using matplotlib -- the full (object-oriented) interface Configuring the placement ###Code
fig = plt.figure()

axes = fig.add_axes([.1, .1, .8, .8]) # 10% offset from the left and bottom edges, 80% width and height

axes.set_xlabel('X axis')
axes.set_ylabel('Y axis')
axes.set_title('Title')

axes.plot(x, y)

fig = plt.figure()

axes1 = fig.add_axes([.1, .1, .8, .8]) # 10% offset from the left and bottom edges, 80% width and height

axes1.set_xlabel('X axis')
axes1.set_ylabel('Y axis')
axes1.set_title('Title')

axes1.plot(x, y)

axes2 = fig.add_axes([.2, .4, .3, .3])
axes2.plot(y, x)
###Output _____no_output_____ ###Markdown Subplots ###Code
fig, ax = plt.subplots()

ax.plot(x, x**3, 'b--')
ax.plot(x, x**4, 'r.-')

ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_title('Title')

fig, ax = plt.subplots(nrows = 1, ncols = 2)

ax[0].plot(x, x**2, 'b--')
ax[1].plot(x, x**4, 'r')

fig, ax = plt.subplots(nrows = 5, ncols = 5)
plt.tight_layout()
###Output _____no_output_____ ###Markdown Figure settings ###Code
fig, ax = plt.subplots( figsize = ( 12, 3 ), dpi = 100)

ax.plot(x, y, 'r', label = 'y = x')
ax.plot(x, y*4, 'g', label = 'y = x * 4')
ax.set_title('Title')
ax.legend(loc = 2) # loc = position // by default the best position is chosen automatically

fig.savefig('imagem.png')
###Output _____no_output_____ ###Markdown Changing the layout ###Code
fig , ax = plt.subplots(figsize = (10, 10), dpi = 100)

ax.plot(x, x**2, color = 'red' , label = 'y = x²')
ax.plot(x, x**3, color = 'blue', label = 'y = x³', linewidth = 10, alpha = 0.2, linestyle = ':')

ax.legend(fontsize = 'large')

ax.set_xlim([0, 6])
ax.set_ylim([0, 150])
###Output _____no_output_____ ###Markdown Other plot types (BETTER DONE IN SEABORN) Scatter ###Code
fig = plt.scatter(x, y)
###Output _____no_output_____ ###Markdown Histogram ###Code
from random import sample
data = sample( range( 1, 1000 ), 100 )
plt.hist(data)
###Output _____no_output_____ ###Markdown Boxplot ###Code
data = [np.random.normal( 0, std, 100 ) for std in range( 1, 4 )]

plt.boxplot( data, vert = True, patch_artist = True)
###Output _____no_output_____
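###Markdown The section above notes that these plot types are often nicer in seaborn. As a small illustrative sketch (my addition, assuming a reasonably recent seaborn version is available in the environment), the histogram and boxplot could be drawn as: ###Code
# Minimal seaborn versions of the histogram and boxplot above.
import seaborn as sns
from random import sample

hist_data = sample(range(1, 1000), 100)
sns.histplot(hist_data)      # histogram with sensible default bins
plt.show()

box_data = [np.random.normal(0, std, 100) for std in range(1, 4)]
sns.boxplot(data=box_data)   # one box per list entry
plt.show()
###Output
_____no_output_____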
Clustering Exploration.ipynb
###Markdown Exploring representative tuples by clustering the embedding space ###Code import warnings warnings.filterwarnings('ignore') import time from sklearn.cluster import KMeans, Birch from gensim.models.wrappers import FastText from gensim.models import Word2Vec from gensim.models import KeyedVectors import pandas as pd import numpy as np import h5py ###Output _____no_output_____ ###Markdown Perform clustering and return the cluster centers ###Code word2VecModelPath = 'amazonModelWord2Vec.w2v' fastTextModelPath = 'amazonModelFastText.w2v' ###Output _____no_output_____ ###Markdown Clustering with KMeans ###Code def getClusterCentersWithKMeans(model, numberOfClusters): # Get the word vectors of the model word_vectors = model.wv.syn0 n_words = word_vectors.shape[0] vec_size = word_vectors.shape[1] print("Number of words = {0}, vector size = {1}".format(n_words, vec_size)) # Cluster using KMeans start = time.time() print("Clustering ... ", end="", flush=True) kmeans = KMeans(n_clusters=numberOfClusters, n_jobs=-1, random_state=0) idx = kmeans.fit_predict(word_vectors) print("Finished clustering in {:.2f} sec.".format(time.time() - start), flush=True) # Return cluster centers return kmeans.cluster_centers_ ###Output _____no_output_____ ###Markdown Get the closest vector to each of the cluster centersWe'll pass the number of cluster centers as an argument. This can be thought of as a drill down equivalent. Greater the number of cluster centers, more detailed will be the resulting results returned. Number of clusters chosen is 3 by default. This can be overriden, if needed. ###Code def getClosestWordEmbedding(modelPath, numberOfClusters = 3): # Load the model start = time.time() model = KeyedVectors.load(modelPath) print("Finished loading model in {:.2f} sec.".format(time.time() - start), flush=True) clusterCenters = getClusterCentersWithKMeans(model, numberOfClusters) # Create an empty numpy array of size equal to cluster centers to store the closest words closestWords = [] # Get the closest word for each of the cluster centers for clusterCenter in clusterCenters: closestWords.append(model.similar_by_vector(clusterCenter)) return closestWords getClosestWordEmbedding(word2VecModelPath) getClosestWordEmbedding(fastTextModelPath, 100) ###Output Finished loading model in 0.95 sec. Number of words = 18405, vector size = 100 Clustering ... Finished clustering in 9.23 sec.
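###Markdown Since the number of clusters acts as the drill-down knob here, it can help to sanity-check a few values of `numberOfClusters` before drilling in. A rough sketch using the silhouette score on a subsample of the word vectors is shown below; the candidate values and subsample size are arbitrary assumptions, and it reuses the same (older gensim) `model.wv.syn0` accessor as the functions above. ###Code
from sklearn.metrics import silhouette_score

def score_cluster_counts(modelPath, candidate_ks=(3, 10, 25), sample_size=5000):
    """Return {k: silhouette score} for a few candidate cluster counts (higher is better)."""
    model = KeyedVectors.load(modelPath)
    word_vectors = model.wv.syn0  # same accessor used above
    if len(word_vectors) > sample_size:
        idx = np.random.choice(len(word_vectors), sample_size, replace=False)
        word_vectors = word_vectors[idx]
    scores = {}
    for k in candidate_ks:
        labels = KMeans(n_clusters=k, random_state=0).fit_predict(word_vectors)
        scores[k] = silhouette_score(word_vectors, labels)
    return scores

score_cluster_counts(word2VecModelPath)
###Output
_____no_output_____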
202001111_Group_theory.ipynb
###Markdown ###Code ###Output _____no_output_____ ###Markdown This article is by no means a replacement for any rigorous textbook or course on abstract algebra or group theory. It just represents some of my attempts to understand the concept of a group in an intuitive (and hopefully joyful) way. _Ingredients:_ - Some rigorous math notation (for definitions and proofs): ~30% - Intuitive explanation in layman's language: ~70% 1. A "Closed" Playground: Definitions of GroupThe definition of a group can be expressed simply as "group = set + operation". A set is a very general concept and can be a set of anything (really anything). But just as we want any game (soccer or basketball) to be played within a boundary, the operation on this set has to be "closed". **Definition 1.1.** A group is a set G together with a binary operation $G \times G \rightarrow G$ that associates an element $a \cdot b \in G$ to every pair of elements $a$ and $b$. The operation must satisfy the following conditions: G1: Associativity: $a\cdot (b \cdot c) = (a\cdot b) \cdot c$G2: Identity: $a \cdot e = e \cdot a = a$G3: Inverse: $\forall a \in G$, there is an $a^{-1} \in G$, s.t. $a \cdot a^{-1}=a^{-1} \cdot a = e$ 2. Subgroup: What Is "Inside" a Group?Now we have a "boundary" by requiring the group to have a closed operation, and the next question to ask is, "what can we do within a group?"The most interesting idea (to me) in group theory is that you can have subgroups within a group. A subgroup is essentially a subset of the group that can be called a group as well. An analog in real life could be small families (with your parents and you) inside a large family (with your grandparents, your uncle and his family, your aunt and her family, etc.). To check whether a subset is a subgroup or not, in principle we should check the three conditions (G1 to G3) listed in section 1. However, we can show that a single closure-type condition suffices (and for a finite subset it is even enough that the subset is closed under the operation), so the subset qualifies as a "subgroup" with much less work. **Theorem 2.1.** A nonempty subset $H$ of a group $G$ is a subgroup iff for all $a, b \in H$ we have $a \cdot b^{-1} \in H$.*Proof (sketch):* Taking $b = a$ gives $e \in H$; taking $a = e$ then gives $b^{-1} \in H$ for every $b \in H$; finally $a \cdot b = a \cdot (b^{-1})^{-1} \in H$ shows closure under the operation, and associativity is inherited from $G$. 3. Cosets: The "shifted images" of a group/subgroup. **Definition 3.1.** If $H$ is a subgroup of a group $G$ and $g \in G$, the set $gH:=\{g\cdot h: h \in H\}$ is a left $H$-coset in $G$. NOTE: An $H$-coset in $G$ does not have to be a subgroup of $G$.From the definition, we can tell that a coset is nothing but the subgroup "shifted" by the translation (defined by $g$). In fact, any translation of the form $L_g(h)=g\cdot h$ is a bijection from $H$ onto $gH$. So we can think of the translation as a "magical mirror" that projects a subgroup onto a different set (a coset). Interesting properties of cosets: $gH = H$ iff $g \in H$; two cosets $g_1H$ and $g_2H$ are equal if they have one element in common, and otherwise they are disjoint; any $gH$ (or $Hg$) is in bijection with $H$, in other words every coset has the same order as $H$. Lagrange's theorem: if $G$ is a finite group and $H$ is a subgroup, then the order of $H$ divides the order of $G$ (the left cosets partition $G$ into pieces of equal size $|H|$). 4. Extended ReadingBackground: how group theory came about historically and the role of abstract algebra in the development of modern mathematics.Group theoretical methods for machine learning: ###Code ###Output _____no_output_____
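###Markdown To make the definitions above concrete, here is a small self-contained Python sketch (my own addition, not part of the notes) that brute-force checks the axioms G1-G3 for the integers modulo n under addition, then lists the left cosets of a subgroup to illustrate Definition 3.1 and Lagrange's theorem. ###Code
from itertools import product

def is_group(elements, op):
    """Check closure and G1-G3 for a finite set `elements` with binary operation `op`."""
    elements = list(elements)
    # closure
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # G1: associativity
    if any(op(op(a, b), c) != op(a, op(b, c)) for a, b, c in product(elements, repeat=3)):
        return False
    # G2: identity
    identities = [e for e in elements if all(op(e, a) == a == op(a, e) for a in elements)]
    if not identities:
        return False
    e = identities[0]
    # G3: inverses
    return all(any(op(a, b) == e == op(b, a) for b in elements) for a in elements)

n = 6
Zn = range(n)
add_mod_n = lambda a, b: (a + b) % n
print(is_group(Zn, add_mod_n))       # True: (Z_6, +) is a group

H = [0, 2, 4]                        # a subgroup of Z_6
cosets = {tuple(sorted((g + h) % n for h in H)) for g in Zn}
print(cosets)                        # two disjoint cosets, each of size |H| = 3
print(len(list(Zn)) % len(H) == 0)   # Lagrange: |H| divides |G|
###Output
_____no_output_____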
notebook-templates/standalone/python/design-of-experiments_tutorial/notebook.ipynb
###Markdown © Copyright 2013–2014, Abraham Lee© Copyright 2019, Dataiku Design of Experiments tutorial based on pyDOECopied from the pyDOE web: site https://pythonhosted.org/pyDOE/index.html ###Code %pylab inline import dataiku from pyDOE2 import * ###Output _____no_output_____ ###Markdown Factorial design General Full-Factorial (fullfact)This kind of design offers full flexibility as to the number of discrete levels for each factor in the design. Its usage is simple: ###Code levels = np.array([2, 3]) levels.astype(int) fullfact( levels ) ###Output _____no_output_____ ###Markdown where levels is an array of integers.As can be seen in the output, the design matrix has as many columns as items in the input array. 2-Level Full-Factorial (ff2n)This function is a convenience wrapper to fullfact that forces all the factors to have two levels each, you simply tell it how many factors to create a design for: ###Code ff2n(3) ###Output _____no_output_____ ###Markdown 2-Level Fractional-Factorial (fracfact)This function requires a little more knowledge of how the confounding will be allowed (this means that some factor effects get muddled with other interaction effects, so it’s harder to distinguish between them).Let’s assume that we just can’t afford (for whatever reason) the number of runs in a full-factorial design. We can systematically decide on a fraction of the full-factorial by allowing some of the factor main effects to be confounded with other factor interaction effects. This is done by defining an alias structure that defines, symbolically, these interactions. These alias structures are written like “C = AB” or “I = ABC”, or “AB = CD”, etc. These define how one column is related to the others.For example, the alias “C = AB” or “I = ABC” indicate that there are three factors (A, B, and C) and that the main effect of factor C is confounded with the interaction effect of the product AB, and by extension, A is confounded with BC and B is confounded with AC. A full- factorial design with these three factors results in a design matrix with 8 runs, but we will assume that we can only afford 4 of those runs. To create this fractional design, we need a matrix with three columns, one for A, B, and C, only now where the levels in the C column is created by the product of the A and B columns.The input to fracfact is a generator string of symbolic characters (lowercase or uppercase, but not both) separated by spaces, like: ###Code gen = 'a b ab' ###Output _____no_output_____ ###Markdown This design would result in a 3-column matrix, where the third column is implicitly defined as "c = ab". This means that the factor in the third column is confounded with the interaction of the factors in the first two columns. The design ends up looking like this: ###Code fracfact(gen) ###Output _____no_output_____ ###Markdown Fractional factorial designs are usually specified using the notation $2^{k-p}$, where $k$ is the number of columns and $p$ is the number of effects that are confounded. In terms of resolution level, higher is “better”. The above design would be considered a $2^{3-1}$ fractional factorial design, a 1/2-fraction design, or a Resolution III design (since the smallest alias “I=ABC” has three terms on the right-hand side). 
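###Markdown As a quick numerical check of the confounding described above, we can verify that the generated third column of the `'a b ab'` design really is the elementwise product of the first two columns (a small sketch, not part of the original tutorial): ###Code
design = fracfact('a b ab')
# The generated column equals the product of the columns it aliases.
np.allclose(design[:, 2], design[:, 0] * design[:, 1])
###Output
_____no_output_____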
Another common design is a Resolution III, $2^{7-4}$ fractional factorial and would be created using the following string generator: ###Code fracfact('a b ab c ac bc abc') ###Output _____no_output_____ ###Markdown More sophisticated generator strings can be created using the “+” and “-” operators. The “-” operator swaps the levels of that column like this: ###Code fracfact('a b -ab') ###Output _____no_output_____ ###Markdown In order to reduce confounding, we can utilize the fold function: ###Code m = fracfact('a b ab') fold(m) ###Output _____no_output_____ ###Markdown Applying the fold to all columns in the design breaks the alias chains between every main factor and two-factor interactions. This means that we can then estimate all the main effects clear of any two-factor interactions. Typically, when all columns are folded, this “upgrades” the resolution of the design.By default, fold applies the level swapping to all columns, but we can fold specific columns (first column = 0), if desired, by supplying an array to the keyword columns: ###Code fold(m, columns=[2]) ###Output _____no_output_____ ###Markdown NoteCare should be taken to decide the appropriate alias structure for your design and the effects that folding has on it. Plackett-Burman (pbdesign)Another way to generate fractional-factorial designs is through the use of Plackett-Burman designs. These designs are unique in that the number of trial conditions (rows) expands by multiples of four (e.g. 4, 8, 12, etc.). The max number of columns allowed before a design increases the number of rows is always one less than the next higher multiple of four.For example, I can use up to 3 factors in a design with 4 rows: ###Code pbdesign(3) ###Output _____no_output_____ ###Markdown But if I want to do 4 factors, the design needs to increase the number of rows up to the next multiple of four (8 in this case): ###Code pbdesign(4) ###Output _____no_output_____ ###Markdown Thus, an 8-run Plackett-Burman design can handle up to (8 - 1) = 7 factors.As a side note, It just so happens that the Plackett-Burman and 2^(7-4) fractional factorial design are identical: ###Code np.all(pbdesign(7)==fracfact('a b ab c ac bc abc')) ###Output _____no_output_____ ###Markdown More InformationIf the user needs more information about appropriate designs, please consult the following articles on Wikipedia:- [Factorial designs](http://en.wikipedia.org/wiki/Factorial_experiment)- [Plackett-Burman designs](http://en.wikipedia.org/wiki/Plackett-Burman_design)There is also a wealth of information on the [NIST](http://www.itl.nist.gov/div898/handbook/pri/pri.htm) website about the various design matrices that can be created as well as detailed information about designing/setting-up/running experiments in general.Any questions, comments, bug-fixes, etc. can be forwarded to the author or the pyDOE package. Response Surface Designs Box-Behnken (bbdesign)![Box-Behnken image](http://www.itl.nist.gov/div898/handbook/pri/section3/gifs/bb.gif)Box-Behnken designs can be created using the following simple syntax: ###Code n=3 bbdesign(n, center=1) ###Output _____no_output_____ ###Markdown where n is the number of factors (at least 3 required) and center is the number of center points to include. If no inputs given to center, then a pre-determined number of points are automatically included. 
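###Markdown The Box-Behnken matrix above is in coded units (-1, 0, +1). A small sketch of mapping it onto real factor ranges is shown below; the ranges themselves are made-up example values, not something from the original tutorial. ###Code
coded = bbdesign(3, center=1)

# Hypothetical real-world (low, high) ranges for the three factors.
lows = np.array([100.0, 0.5, 20.0])
highs = np.array([200.0, 2.0, 80.0])

# Linear map: -1 -> low, 0 -> midpoint, +1 -> high.
real_design = lows + (coded + 1) / 2 * (highs - lows)
real_design
###Output
_____no_output_____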
Central Composite (ccdesign)![Central Composite image](http://www.itl.nist.gov/div898/handbook/pri/section3/gifs/fig5.gif)Central composite designs can be created and customized using the syntax: ###Code n=3 ccdesign(3, center=(0, 1), alpha='r', face='cci') ###Output _____no_output_____ ###Markdown where- n is the number of factors,- center is a 2-tuple of center points (one for the factorial block, one for the star block, default (4, 4)),- alpha is either “orthogonal” (or “o”, default) or “rotatable” (or “r”)- face is either “circumscribed” (or “ccc”, default), “inscribed” (or “cci”), or “faced” (or “ccf”).![cc2 image](http://www.itl.nist.gov/div898/handbook/pri/section3/gifs/ccd2.gif)The two optional keyword arguments alpha and face help describe how the variance in the quadratic approximation is distributed. Please see the NIST web pages if you are uncertain which options are suitable for your situation. Note‘ccc’ and ‘cci’ can be rotatable designs, but ‘ccf’ cannot.If face is specified, while alpha is not, then the default value of alpha is ‘orthogonal’. More InformationIf the user needs more information about appropriate designs, please consult the following articles on Wikipedia:- [Box-Behnken designs](http://en.wikipedia.org/wiki/Box-Behnken_design)- [Central composite designs](http://en.wikipedia.org/wiki/Central_composite_design)There is also a wealth of information on the [NIST](http://www.itl.nist.gov/div898/handbook/pri/pri.htm) website about the various design matrices that can be created as well as detailed information about designing/setting-up/running experiments in general.Any questions, comments, bug-fixes, etc. can be forwarded to the author of the package. Randomized Designs Latin-Hypercube (lhs)![Latin-Hypercube image](https://pythonhosted.org/pyDOE/_images/lhs.png)Latin-hypercube designs can be created using the following simple syntax: ###Code n = 4 lhs(n, samples=10, criterion='center') ###Output _____no_output_____ ###Markdown where- `n`: an integer that designates the number of factors (required)- `samples`: an integer that designates the number of sample points to generate for each factor (default: n)criterion: a string that tells lhs how to sample the points (default: None, which simply randomizes the points within the intervals):- `"center"` or `"c"`: center the points within the sampling intervals- `“maximin”` or `“m”`: maximize the minimum distance between points, but place the point in a randomized location within its interval- `“centermaximin”` or `“cm”`: same as `“maximin”`, but centered within the intervals- `“correlation”` or `“corr”`: minimize the maximum correlation coefficientThe output design scales all the variable ranges from zero to one which can then be transformed as the user wishes (like to a specific statistical distribution using the `scipy.stats.distributions` `ppf` (inverse cumulative distribution) function. 
An example of this is shown below.For example, if I wanted to transform the uniform distribution of 8 samples to a normal distribution (mean=0, standard deviation=1), I would do something like: ###Code from scipy.stats.distributions import norm lhd = lhs(2, samples=5) lhd = norm(loc=0, scale=1).ppf(lhd) # this applies to both factors here ###Output _____no_output_____ ###Markdown Graphically, each transformation would look like the following, going from the blue sampled points (from using lhs) to the green sampled points that are normally distributed:![LHS custom distribution](https://pythonhosted.org/pyDOE/_images/lhs_custom_distribution.png) Customizing with Statistical DistributionsNow, let’s say we want to transform these designs to be normally distributed with means = [1, 2, 3, 4] and standard deviations = [0.1, 0.5, 1, 0.25]: ###Code design = lhs(4, samples=10) from scipy.stats.distributions import norm means = [1, 2, 3, 4] stdvs = [0.1, 0.5, 1, 0.25] for i in range(4): design[:, i] = norm(loc=means[i], scale=stdvs[i]).ppf(design[:, i]) design ###Output _____no_output_____
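###Markdown For completeness, if the factors should simply be uniform over fixed engineering ranges rather than follow a statistical distribution, the unit-interval design can be rescaled linearly (the ranges below are arbitrary example values): ###Code
design = lhs(4, samples=10)

lower = np.array([0.0, 10.0, -1.0, 100.0])
upper = np.array([1.0, 50.0, 1.0, 400.0])

# Map each column from [0, 1] onto [lower, upper].
scaled = lower + design * (upper - lower)
scaled
###Output
_____no_output_____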
lijin-THU:notes-python/05-advanced-python/05.10-generators.ipynb
###Markdown Generators A `while` loop usually has this form:
```python
result = []
while True:
    result.append(value)
    if <condition>:
        break
```
Implementing such a loop with an iterator:
```python
class GenericIterator(object):
    def __init__(self, ...):
        # the current state has to be stored explicitly
        ...
    def next(self):
        if <condition>:
            raise StopIteration()
        return value
```
More simply, we can use a generator:
```python
def generator(...):
    while True:
        # yield means this function can return more than one value!
        yield value
        if <condition>:
            break
```
A generator emits values with the `yield` keyword, while an iterator returns values through the `return` of its `next` method. Unlike an iterator, a generator automatically keeps track of its current state, whereas an iterator needs extra bookkeeping to record that state. For the earlier `collatz` conjecture, the plain-loop implementation looks like this: ###Code
def collatz(n):
    sequence = []
    while n != 1:
        if n % 2 == 0:
            n /= 2
        else:
            n = 3*n + 1
        sequence.append(n)
    return sequence

for x in collatz(7):
    print x,
###Output
22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1
###Markdown The iterator version looks like this: ###Code
class Collatz(object):
    def __init__(self, start):
        self.value = start
    def __iter__(self):
        return self
    def next(self):
        if self.value == 1:
            raise StopIteration()
        elif self.value % 2 == 0:
            self.value = self.value/2
        else:
            self.value = 3*self.value + 1
        return self.value

for x in Collatz(7):
    print x,
###Output
22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1
###Markdown The generator version looks like this: ###Code
def collatz(n):
    while n != 1:
        if n % 2 == 0:
            n /= 2
        else:
            n = 3*n + 1
        yield n

for x in collatz(7):
    print x,
###Output
22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1
###Markdown In fact, a generator is itself a kind of iterator: ###Code
x = collatz(7)
print x
###Output
<generator object collatz at 0x0000000003B63750>
###Markdown It supports the `next` method, which returns the next `yield`ed value: ###Code
print x.next()
print x.next()
###Output
22
11
###Markdown Its `__iter__` method returns the generator itself: ###Code
print x.__iter__()
###Output
<generator object collatz at 0x0000000003B63750>
###Markdown The earlier binary-tree iterator can be rewritten in the simpler generator style to perform an in-order traversal: ###Code
class BinaryTree(object):
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

    def __iter__(self):
        # use the generator method as the iterator
        return self.inorder()

    def inorder(self):
        # traverse the left branch
        if self.left is not None:
            for value in self.left:
                yield value
        # yield node's value
        yield self.value
        # traverse the right branch
        if self.right is not None:
            for value in self.right:
                yield value
###Output
_____no_output_____
###Markdown A non-recursive implementation: ###Code
def inorder(self):
    node = self
    stack = []
    while len(stack) > 0 or node is not None:
        while node is not None:
            stack.append(node)
            node = node.left
        node = stack.pop()
        yield node.value
        node = node.right

tree = BinaryTree(
    left=BinaryTree(
        left=BinaryTree(1),
        value=2,
        right=BinaryTree(
            left=BinaryTree(3),
            value=4,
            right=BinaryTree(5)
        ),
    ),
    value=6,
    right=BinaryTree(
        value=7,
        right=BinaryTree(8)
    )
)

for value in tree:
    print value,
###Output
1 2 3 4 5 6 7 8
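###Markdown One practical benefit of the generator version is that it is lazy: for example, we can count how many Collatz steps a number takes without ever building the list in memory. This cell is my own addition and follows the Python 2 style of the rest of this notebook. ###Code
# Number of steps for n = 27, consuming the generator lazily.
steps = sum(1 for _ in collatz(27))
print steps
###Output
_____no_output_____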
nbs/course2020/vision/04_DBlock_Summary.ipynb
###Markdown Lesson 4 - DataBlock Summary Lesson Video: ###Code #hide_input from IPython.lib.display import YouTubeVideo from datetime import timedelta start = int(timedelta(minutes=47, seconds=53).total_seconds()) YouTubeVideo('X4Bp7gMPx_E', start=start) #hide #Run once per session !pip install fastai wwf -q --upgrade #hide_input from wwf.utils import state_versions state_versions(['fastai', 'fastcore', 'wwf']) ###Output _____no_output_____ ###Markdown In this notebook we'll be looking at `dblock.summary` and how to interpret it Libraries for today: ###Code from fastai.vision.all import * ###Output _____no_output_____ ###Markdown Below you will find the exact imports for everything we use today ###Code from fastcore.transform import Pipeline from fastai.data.block import CategoryBlock, DataBlock from fastai.data.core import Datasets from fastai.data.external import untar_data, URLs from fastai.data.transforms import Categorize, GrandparentSplitter, IntToFloatTensor, Normalize, RandomSplitter, ToTensor, parent_label from fastai.torch_core import to_device from fastai.vision.augment import aug_transforms, Resize, RandomResizedCrop, FlipItem from fastai.vision.data import ImageBlock, PILImage, get_image_files, imagenet_stats ###Output _____no_output_____ ###Markdown Getting the Dataset We'll use `ImageWoof` like we did in previous notebooks ###Code path = untar_data(URLs.IMAGEWOOF) ###Output _____no_output_____ ###Markdown And create our label dictionary similarly ###Code lbl_dict = dict( n02086240= 'Shih-Tzu', n02087394= 'Rhodesian ridgeback', n02088364= 'Beagle', n02089973= 'English foxhound', n02093754= 'Australian terrier', n02096294= 'Border terrier', n02099601= 'Golden retriever', n02105641= 'Old English sheepdog', n02111889= 'Samoyed', n02115641= 'Dingo' ) ###Output _____no_output_____ ###Markdown Some minimal transforms to get us by ###Code item_tfms = Resize(128) batch_tfms = [*aug_transforms(size=224, max_warp=0), Normalize.from_stats(*imagenet_stats)] bs=64 ###Output _____no_output_____ ###Markdown And our `DataBlock` ###Code pets = DataBlock(blocks=(ImageBlock, CategoryBlock), get_items=get_image_files, splitter=RandomSplitter(), get_y=Pipeline([parent_label, lbl_dict.__getitem__]), item_tfms=item_tfms, batch_tfms=batch_tfms) ###Output _____no_output_____ ###Markdown Using `Summary` Now to run `.summary`, we need to send in what our `DataBlock` expects. 
In this case it's a path (think how we make our `DataLoaders` from the `DataBlock`) ###Code pets.summary(path) ###Output Setting-up type transforms pipelines Collecting items from /root/.fastai/data/imagewoof2 Found 12954 items 2 datasets of sizes 10364,2590 Setting up Pipeline: PILBase.create Setting up Pipeline: parent_label -> dict.__getitem__ -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False} Building one sample Pipeline: PILBase.create starting from /root/.fastai/data/imagewoof2/train/n02086240/n02086240_6168.JPEG applying PILBase.create gives PILImage mode=RGB size=500x375 Pipeline: parent_label -> dict.__getitem__ -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False} starting from /root/.fastai/data/imagewoof2/train/n02086240/n02086240_6168.JPEG applying parent_label gives n02086240 applying dict.__getitem__ gives Shih-Tzu applying Categorize -- {'vocab': None, 'sort': True, 'add_na': False} gives TensorCategory(9) Final sample: (PILImage mode=RGB size=500x375, TensorCategory(9)) Collecting items from /root/.fastai/data/imagewoof2 Found 12954 items 2 datasets of sizes 10364,2590 Setting up Pipeline: PILBase.create Setting up Pipeline: parent_label -> dict.__getitem__ -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False} Setting up after_item: Pipeline: Resize -- {'size': (128, 128), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} -> ToTensor Setting up before_batch: Pipeline: Setting up after_batch: Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} -> Flip -- {'size': 224, 'mode': 'bilinear', 'pad_mode': 'reflection', 'mode_mask': 'nearest', 'align_corners': True, 'p': 0.5} -> Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False} -> Normalize -- {'mean': tensor([[[[0.4850]], [[0.4560]], [[0.4060]]]], device='cuda:0'), 'std': tensor([[[[0.2290]], [[0.2240]], [[0.2250]]]], device='cuda:0'), 'axes': (0, 2, 3)} Building one batch Applying item_tfms to the first sample: Pipeline: Resize -- {'size': (128, 128), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} -> ToTensor starting from (PILImage mode=RGB size=500x375, TensorCategory(9)) applying Resize -- {'size': (128, 128), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} gives (PILImage mode=RGB size=128x128, TensorCategory(9)) applying ToTensor gives (TensorImage of size 3x128x128, TensorCategory(9)) Adding the next 3 samples No before_batch transform to apply Collating items in a batch Applying batch_tfms to the batch built Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} -> Flip -- {'size': 224, 'mode': 'bilinear', 'pad_mode': 'reflection', 'mode_mask': 'nearest', 'align_corners': True, 'p': 0.5} -> Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False} -> Normalize -- {'mean': tensor([[[[0.4850]], [[0.4560]], [[0.4060]]]], device='cuda:0'), 'std': tensor([[[[0.2290]], [[0.2240]], [[0.2250]]]], device='cuda:0'), 'axes': (0, 2, 3)} starting from (TensorImage of size 4x3x128x128, TensorCategory([9, 9, 5, 0], device='cuda:0')) applying IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} gives (TensorImage of size 4x3x128x128, TensorCategory([9, 9, 5, 0], device='cuda:0')) applying Flip -- {'size': 224, 'mode': 'bilinear', 'pad_mode': 'reflection', 'mode_mask': 'nearest', 'align_corners': True, 'p': 0.5} gives (TensorImage of size 4x3x224x224, TensorCategory([9, 9, 5, 0], device='cuda:0')) applying Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': 
False} gives (TensorImage of size 4x3x224x224, TensorCategory([9, 9, 5, 0], device='cuda:0')) applying Normalize -- {'mean': tensor([[[[0.4850]], [[0.4560]], [[0.4060]]]], device='cuda:0'), 'std': tensor([[[[0.2290]], [[0.2240]], [[0.2250]]]], device='cuda:0'), 'axes': (0, 2, 3)} gives (TensorImage of size 4x3x224x224, TensorCategory([9, 9, 5, 0], device='cuda:0')) ###Markdown Debugging without the DataBlock What we find is it will go through **each** and every single part of our `DataBlock`, test it on an item, and we can see what popped out! **But!** What if we are using the `Datasets` instead? Let's go through how to utilize it ###Code tfms = [[PILImage.create], [parent_label, Categorize()]] item_tfms = [ToTensor(), Resize(128)] batch_tfms = [FlipItem(), RandomResizedCrop(128, min_scale=0.35), IntToFloatTensor(), Normalize.from_stats(*imagenet_stats)] items = get_image_files(path) split_idx = GrandparentSplitter(valid_name='val')(items) dsets = Datasets(items, tfms, splits=split_idx) dls = dsets.dataloaders(after_item=item_tfms, after_batch=batch_tfms, bs=64) ###Output _____no_output_____ ###Markdown We'll want to grab the first item from our set ###Code x = dsets.train[0] x ###Output _____no_output_____ ###Markdown And pass it into any `after_item` or `after_batch` transform `Pipeline`. We can list them by calling them ###Code dls.train.after_item dls.train.after_batch ###Output _____no_output_____ ###Markdown And now we can pass in our item through the `Pipeline` like so:(`x[0]` has our input and `x[1]` has our `y`) ###Code for f in dls.train.after_item: name = f.name x = f(x) print(name, x[0]) for f in dls.train.after_batch: name = f.name x = f(to_device(x, 'cuda')) # we need to move our data to the GPU print(name, x[0]) ###Output FlipItem -- {'p': 0.5} TensorImage([[[0.0627, 0.0784, 0.0941, ..., 0.5922, 0.5255, 0.5373], [0.1059, 0.1020, 0.1255, ..., 0.5725, 0.5922, 0.5333], [0.1686, 0.1804, 0.1843, ..., 0.4824, 0.5569, 0.5608], ..., [0.5686, 0.5255, 0.4157, ..., 0.2000, 0.1451, 0.1059], [0.1490, 0.2157, 0.1804, ..., 0.1294, 0.1373, 0.0902], [0.2510, 0.3333, 0.2588, ..., 0.1294, 0.2078, 0.2196]], [[0.0706, 0.0706, 0.0863, ..., 0.5882, 0.5176, 0.5294], [0.1137, 0.0980, 0.1176, ..., 0.5647, 0.5843, 0.5255], [0.1804, 0.1725, 0.1765, ..., 0.4667, 0.5490, 0.5529], ..., [0.5608, 0.5647, 0.4588, ..., 0.2784, 0.3020, 0.2627], [0.2000, 0.3059, 0.2941, ..., 0.1922, 0.1961, 0.1451], [0.2980, 0.4118, 0.3490, ..., 0.1961, 0.2627, 0.2706]], [[0.1137, 0.1255, 0.1412, ..., 0.4157, 0.3529, 0.3686], [0.1569, 0.1490, 0.1725, ..., 0.4196, 0.4275, 0.3647], [0.2196, 0.2275, 0.2314, ..., 0.3529, 0.3922, 0.3922], ..., [0.5569, 0.4745, 0.3608, ..., 0.2392, 0.2118, 0.1765], [0.1647, 0.2627, 0.1765, ..., 0.1490, 0.1529, 0.1020], [0.2392, 0.3490, 0.2392, ..., 0.1294, 0.2157, 0.2314]]], device='cuda:0') RandomResizedCrop -- {'size': (128, 128), 'min_scale': 0.35, 'ratio': (0.75, 1.3333333333333333), 'resamples': (2, 0), 'val_xtra': 0.14, 'p': 1.0} TensorImage([[[0.0627, 0.0784, 0.0941, ..., 0.5922, 0.5255, 0.5373], [0.1059, 0.1020, 0.1255, ..., 0.5725, 0.5922, 0.5333], [0.1686, 0.1804, 0.1843, ..., 0.4824, 0.5569, 0.5608], ..., [0.5686, 0.5255, 0.4157, ..., 0.2000, 0.1451, 0.1059], [0.1490, 0.2157, 0.1804, ..., 0.1294, 0.1373, 0.0902], [0.2510, 0.3333, 0.2588, ..., 0.1294, 0.2078, 0.2196]], [[0.0706, 0.0706, 0.0863, ..., 0.5882, 0.5176, 0.5294], [0.1137, 0.0980, 0.1176, ..., 0.5647, 0.5843, 0.5255], [0.1804, 0.1725, 0.1765, ..., 0.4667, 0.5490, 0.5529], ..., [0.5608, 0.5647, 0.4588, ..., 0.2784, 0.3020, 0.2627], 
[0.2000, 0.3059, 0.2941, ..., 0.1922, 0.1961, 0.1451], [0.2980, 0.4118, 0.3490, ..., 0.1961, 0.2627, 0.2706]], [[0.1137, 0.1255, 0.1412, ..., 0.4157, 0.3529, 0.3686], [0.1569, 0.1490, 0.1725, ..., 0.4196, 0.4275, 0.3647], [0.2196, 0.2275, 0.2314, ..., 0.3529, 0.3922, 0.3922], ..., [0.5569, 0.4745, 0.3608, ..., 0.2392, 0.2118, 0.1765], [0.1647, 0.2627, 0.1765, ..., 0.1490, 0.1529, 0.1020], [0.2392, 0.3490, 0.2392, ..., 0.1294, 0.2157, 0.2314]]], device='cuda:0') IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} TensorImage([[[0.0002, 0.0003, 0.0004, ..., 0.0023, 0.0021, 0.0021], [0.0004, 0.0004, 0.0005, ..., 0.0022, 0.0023, 0.0021], [0.0007, 0.0007, 0.0007, ..., 0.0019, 0.0022, 0.0022], ..., [0.0022, 0.0021, 0.0016, ..., 0.0008, 0.0006, 0.0004], [0.0006, 0.0008, 0.0007, ..., 0.0005, 0.0005, 0.0004], [0.0010, 0.0013, 0.0010, ..., 0.0005, 0.0008, 0.0009]], [[0.0003, 0.0003, 0.0003, ..., 0.0023, 0.0020, 0.0021], [0.0004, 0.0004, 0.0005, ..., 0.0022, 0.0023, 0.0021], [0.0007, 0.0007, 0.0007, ..., 0.0018, 0.0022, 0.0022], ..., [0.0022, 0.0022, 0.0018, ..., 0.0011, 0.0012, 0.0010], [0.0008, 0.0012, 0.0012, ..., 0.0008, 0.0008, 0.0006], [0.0012, 0.0016, 0.0014, ..., 0.0008, 0.0010, 0.0011]], [[0.0004, 0.0005, 0.0006, ..., 0.0016, 0.0014, 0.0014], [0.0006, 0.0006, 0.0007, ..., 0.0016, 0.0017, 0.0014], [0.0009, 0.0009, 0.0009, ..., 0.0014, 0.0015, 0.0015], ..., [0.0022, 0.0019, 0.0014, ..., 0.0009, 0.0008, 0.0007], [0.0006, 0.0010, 0.0007, ..., 0.0006, 0.0006, 0.0004], [0.0009, 0.0014, 0.0009, ..., 0.0005, 0.0008, 0.0009]]], device='cuda:0') Normalize -- {'mean': tensor([[[[0.4850]], [[0.4560]], [[0.4060]]]], device='cuda:0'), 'std': tensor([[[[0.2290]], [[0.2240]], [[0.2250]]]], device='cuda:0'), 'axes': (0, 2, 3)} TensorImage([[[[-2.1168, -2.1166, -2.1163, ..., -2.1078, -2.1089, -2.1087], [-2.1161, -2.1162, -2.1158, ..., -2.1081, -2.1078, -2.1088], [-2.1150, -2.1148, -2.1147, ..., -2.1096, -2.1084, -2.1083], ..., [-2.1082, -2.1089, -2.1108, ..., -2.1145, -2.1154, -2.1161], [-2.1154, -2.1142, -2.1148, ..., -2.1157, -2.1156, -2.1164], [-2.1136, -2.1122, -2.1135, ..., -2.1157, -2.1143, -2.1141]], [[-2.0345, -2.0345, -2.0342, ..., -2.0254, -2.0267, -2.0264], [-2.0337, -2.0340, -2.0337, ..., -2.0258, -2.0255, -2.0265], [-2.0326, -2.0327, -2.0326, ..., -2.0275, -2.0261, -2.0260], ..., [-2.0259, -2.0258, -2.0277, ..., -2.0308, -2.0304, -2.0311], [-2.0322, -2.0304, -2.0306, ..., -2.0324, -2.0323, -2.0332], [-2.0305, -2.0285, -2.0296, ..., -2.0323, -2.0311, -2.0310]], [[-1.8025, -1.8023, -1.8020, ..., -1.7972, -1.7983, -1.7980], [-1.8017, -1.8018, -1.8014, ..., -1.7971, -1.7970, -1.7981], [-1.8006, -1.8005, -1.8004, ..., -1.7983, -1.7976, -1.7976], ..., [-1.7947, -1.7962, -1.7982, ..., -1.8003, -1.8008, -1.8014], [-1.8016, -1.7999, -1.8014, ..., -1.8018, -1.8018, -1.8027], [-1.8003, -1.7984, -1.8003, ..., -1.8022, -1.8007, -1.8004]]]], device='cuda:0')
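###Markdown To avoid printing the full tensors, the manual loop above can be wrapped into a small helper that only reports each transform's name and the resulting shape. This is just a convenience sketch built from the exact same calls used above; it assumes a CUDA device, like the rest of this notebook. ###Code
def show_pipeline_steps(dls, item, device='cuda'):
    "Run `item` through the `after_item` and `after_batch` Pipelines, printing each step's output shape"
    x = item
    for f in dls.train.after_item:
        x = f(x)
        print(f.name, '->', tuple(x[0].shape) if hasattr(x[0], 'shape') else type(x[0]))
    for f in dls.train.after_batch:
        x = f(to_device(x, device))
        print(f.name, '->', tuple(x[0].shape) if hasattr(x[0], 'shape') else type(x[0]))
    return x

_ = show_pipeline_steps(dls, dsets.train[0])
###Output
_____no_output_____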
youtube/error_metrics.ipynb
###Markdown Popular Data Science Error Metrics in PythonThis notebook shows how to calculate several popular error metrics in Python. ###Code %matplotlib inline import numpy as np import pandas as pd from matplotlib.pyplot import figure, show from numpy import arange from sklearn import metrics from sklearn.metrics import confusion_matrix import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Sample Data SetsThis section defines three data sets that will be used. Only the expected outcomes ($y$) and the actual outputs ($\hat{y}$) are given. No $x$ values are given for simplicity. This code could be used to evaluate the output and expected values for most model types. Binary ClassificationBinary classification is a type of classification where there are only two possible outcomes, typically true and false or positive and negative. The data below shows the labels for $y$ as being either 0 or 1. The predicted $\hat{y}$ values are probabilities predicted for how likely the output value should be true or 1. Higher values mean the model predicts a greater likelihood of the value being 1 or True. ###Code # Binary Classification binary_classification_y = [ 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0] binary_classification_yhat = [ 0.95, 0.1, 0.1, .99, .98, 0.01, 0.02, 0.01, 0.01, .97, 0.05] binary_classification = pd.DataFrame({'y': binary_classification_y, 'yhat': binary_classification_yhat}) binary_classification ###Output _____no_output_____ ###Markdown Multi-Class ClassificationMulti-class classification is a type of classification where there are two or more possible outcomes. The data below shows the labels for $y$ as having a numeric class ID. Though data might have class labels as strings it will be converted to this numeric ID. The predicted $\hat{y}$ values are probabilities predicted for how likely the output value should for each of the classes. All class probabilities should sum to 1.0. ###Code # Multi-class Classification (in this case, 4-class) multiclass_classification_y = [ 1, 2, 3, 1, 1, 2, 2, 3, 0, 0, 0] multiclass_classification_yhat = [ [0.05, 0.00, 0.90, 0.05], [0.00, 0.01, 0.99, 0.00], [0.02, 0.00, 0.03, 0.95], [0.95, 0.03, 0.02, 0.00], [0.02, 0.94, 0.01, 0.03], [0.00, 0.00, 0.93, 0.07], [0.00, 0.20, 0.80, 0.00], [0.02, 0.02, 0.01, 0.95], [0.96, 0.04, 0.00, 0.00], [0.97, 0.01, 0.01, 0.01], [0.98, 0.00, 0.00, 0.02]] multiclass_classification = pd.DataFrame(multiclass_classification_yhat) multiclass_classification.insert(loc=0,column='yhat',value=multiclass_classification_y) multiclass_classification.columns = ['yhat', 'yhat_0', 'yhat_1', 'yhat_2', 'yhat_3'] multiclass_classification # Should all sum to 1.0 (multiclass_classification['yhat_0'] + multiclass_classification['yhat_1'] + multiclass_classification['yhat_2'] + multiclass_classification['yhat_3']).tolist() ###Output _____no_output_____ ###Markdown Regression DataRegression data is used when you wish to predict a number. The data below shows the expected values for $y$ as a floating point number. The predicted $\hat{y}$ should match the $y$ values as close as possible. ###Code # Regression regression_y = [ 1.2, 2.2, 3.0, 2.1, 3.5, 3.3, 1.2, 1.2, 0.1, 1, 0.05] regression_yhat = [ 1.3, 2.9, 3.1, 2.2, 3.5, 3.3, 1.1, 1.3, 7.2, 1, 0.07] regression = pd.DataFrame({'y': regression_y, 'yhat': regression_yhat}) regression ###Output _____no_output_____ ###Markdown Accuracy, Precision, Recall, & F1These three metrics are closely related will make use of binary and multi-class data. Binary is covered first. 
When binary data are used, a threshold value must be chosen. Any score from the model above the threshold will be treated as positive/true and any score below it as negative/false. The resulting metrics are affected greatly by this choice of threshold. Other metrics, such as AUC/ROC, evaluate the model independently of the threshold. ###Code
THRESHOLD = 0.5

y = np.array(binary_classification_y)
y_hat = np.array([(1 if t > THRESHOLD else 0) for t in binary_classification_yhat])

print(f'y: {y}')
print(f'yhat: {y_hat}')
###Output
y: [1 0 1 1 0 0 1 0 0 1 0]
yhat: [1 0 0 1 1 0 0 0 0 1 0]
###Markdown False Positives and NegativesTrue positives, false positives, true negatives and false negatives must all be calculated to get their associated rates. These counts and rates are used to calculate accuracy and the F1 score. A false positive is a negative example (y=0) predicted as positive, and a false negative is a positive example (y=1) predicted as negative. ###Code
count_pos = sum(y==1)
count_neg = sum(y==0)
count = len(y)

print(f'Positive count: {count_pos}')
print(f'Negative count: {count_neg}')

tp = sum(np.logical_and(y==1, y_hat==1))
tp_rate = float(tp)/count_pos

tn = sum(np.logical_and(y==0, y_hat==0))
tn_rate = float(tn)/count_neg

fp = sum(np.logical_and(y==0, y_hat==1))
fp_rate = float(fp)/count_neg

fn = sum(np.logical_and(y==1, y_hat==0))
fn_rate = float(fn)/count_pos

print(f'Count: {count}')
print(f'True Positive (TP, sensitivity): {tp} ({int(tp_rate*100)}%)')
print(f'True Negative (TN, specificity): {tn} ({int(tn_rate*100)}%)')
print(f'False Positive (FP): {fp} ({int(fp_rate*100)}%)')
print(f'False Negative (FN): {fn} ({int(fn_rate*100)}%)')
###Output
Positive count: 5
Negative count: 6
Count: 11
True Positive (TP, sensitivity): 3 (60%)
True Negative (TN, specificity): 5 (83%)
False Positive (FP): 1 (16%)
False Negative (FN): 2 (40%)
###Markdown AccuracyAccuracy is much like a test score: what percent were correct. Accuracy does not penalize a model for overconfidence in wrong answers or under-confidence in correct ones. Accuracy ranges from 0% to 100%. Higher is better. ###Code
ac1 = (tp+tn)/count
ac2 = metrics.accuracy_score(y, y_hat)
print(f"Accuracy (manual): {ac1}")
print(f"Accuracy (from sklearn import metrics): {ac2}")
###Output
Accuracy (manual): 0.7272727272727273
Accuracy (from sklearn import metrics): 0.7272727272727273
###Markdown Precision, Recall, & F1Recall (aka sensitivity) and precision (aka positive predictive value) are measurement values used both for analysis and for calculation of the more holistic F1 score. Precision is "how useful the results are", and recall is "how complete the results are". Higher values are better for precision, recall and F1, and they range from 0 to 1. F1 is a measure of accuracy that takes into account both false positives and false negatives. $\mathrm{recall}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}$$\mathrm{precision}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}$$F_1 = \left(\frac{\mathrm{recall}^{-1} + \mathrm{precision}^{-1}}{2}\right)^{-1} = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$ ###Code
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * (precision*recall)/(precision+recall)
f1b = metrics.f1_score(y, y_hat)
print(f"recall: {recall}")
print(f"precision: {precision}")
print(f"f1 (manual): {f1}")
print(f"f1 (sklearn): {f1b}")
###Output
recall: 0.6
precision: 0.75
f1 (manual): 0.6666666666666665
f1 (sklearn): 0.6666666666666665
###Markdown Confusion MatrixA confusion matrix tracks which classes are often misclassified as other classes. A strong model will have a dark diagonal from NW to SE. ###Code
# Plot a confusion matrix.
# cm is the confusion matrix, names are the names of the classes. def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues): plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(names)) plt.xticks(tick_marks, names, rotation=45) plt.yticks(tick_marks, names) plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') labels = ['T', 'F'] # Compute confusion matrix cm = confusion_matrix(y, y_hat) np.set_printoptions(precision=2) print('Confusion matrix, without normalization') print(cm) plt.figure() plot_confusion_matrix(cm, labels) # Normalize the confusion matrix by row (i.e by the number of samples # in each class) cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print('Normalized confusion matrix') print(cm_normalized) plt.figure() plot_confusion_matrix(cm_normalized, labels, title='Normalized confusion matrix') plt.show() y = multiclass_classification_y y_hat = np.argmax(multiclass_classification_yhat,axis=1) print(f'y: {y}') print(f'yhat: {y_hat}') ac = metrics.accuracy_score(y, y_hat) f1 = metrics.f1_score(y, y_hat,average = None) print(f"Accuarcy: {ac}") print(f"F1: {f1}") labels = ['0', '1', '2', '3'] # Compute confusion matrix cm = confusion_matrix(y, y_hat) np.set_printoptions(precision=2) print('Confusion matrix, without normalization') print(cm) plt.figure() plot_confusion_matrix(cm, labels) # Normalize the confusion matrix by row (i.e by the number of samples # in each class) cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print('Normalized confusion matrix') print(cm_normalized) plt.figure() plot_confusion_matrix(cm_normalized, labels, title='Normalized confusion matrix') plt.show() ###Output Confusion matrix, without normalization [[3 0 0 0] [1 1 1 0] [0 0 3 0] [0 0 0 2]] Normalized confusion matrix [[1. 0. 0. 0. ] [0.33 0.33 0.33 0. ] [0. 0. 1. 0. ] [0. 0. 0. 1. ]] ###Markdown Area Under the Curve (AUC)Area Under the Curve (AUC) is closely related to the ROC chart. Both will make use of the binary classification data. ###Code y = np.array(binary_classification_y) y_hat = np.array(binary_classification_yhat) import numpy as np import matplotlib.pyplot as plt from itertools import cycle from sklearn import svm, datasets from sklearn.metrics import roc_curve, auc from sklearn.model_selection import train_test_split from sklearn.preprocessing import label_binarize from sklearn.multiclass import OneVsRestClassifier from scipy import interp # Compute ROC curve and ROC area for each class fpr = dict() tpr = dict() roc_auc = dict() fpr, tpr, _ = roc_curve(y, y_hat) roc_auc = auc(fpr, tpr) # Compute micro-average ROC curve and ROC area fpr, tpr, thresholds = metrics.roc_curve(y, y_hat, pos_label=1) roc_auc = auc(fpr,tpr) print(f"FPR: {fpr}") print(f"TPR: {tpr}") print(f"AUC: {roc_auc}") print(f"thresholds: {thresholds}") plt.figure() lw = 2 plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic') plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown In general, a ROC chart with a large amount of space below the curve is desirable. Below 0.5 is a very bad model and the closer to 1.0 you can go (without overfitting) the better. 
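###Markdown As a quick cross-check, sklearn can also compute the AUC in a single call directly from the predicted scores; this should agree with the value obtained from `roc_curve`/`auc` above. ###Code
# One-line AUC directly from the predicted probabilities.
metrics.roc_auc_score(y, y_hat)
###Output
_____no_output_____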
Decile AnalysisDecile analysis or top 10% scoring looks at the distribution of positive values among 10 bins. Sort the predicted probabilities from highest to lowest and determine the percent of each bin that are positively scored outcomes. The higher bins should have higher percents and the lower scores should have lower. The percent for the top bin can be used as a score. ###Code y = np.array(binary_classification_y) y_hat = np.array(binary_classification_yhat) # Increase size and add a little noise np.random.seed(42) y = np.concatenate([y,y,y,y]) y_hat = np.concatenate([y_hat,y_hat,y_hat,y_hat]) y_hat = y_hat + np.random.normal(size=len(y_hat))/10 y_hat = np.clip(y_hat,0.01,0.99) print(y_hat) df = pd.DataFrame({'y':y,'y_hat':y_hat}) df.sort_values(by='y_hat',ascending=False,inplace=True) df['bucket'] = pd.qcut(range(len(df)), 10, labels=False)+1 df.drop('y_hat', 1, inplace=True) df['count'] = np.ones(len(df)) df = df.groupby(by='bucket').sum() df['score'] = df['y'].values / df['count'].values df.columns = ['tp','count','score'] df df.drop('count', 1, inplace=True) df.drop('tp', 1, inplace=True) df.plot(kind="bar") ###Output _____no_output_____ ###Markdown Regression ChartThe regression or lift chart can be drawn in various ways. Presented here is a monotonically increasing plot of the expected values with the predicted values. Ideally, the two lines would overlap. However, they can show a model's tendency to over or under predict. ###Code y = np.array(regression_y) y_hat = np.array(regression_yhat) # Regression chart. def chart_regression(pred, y, sort=True): t = pd.DataFrame({'pred': pred, 'y': y.flatten()}) if sort: t.sort_values(by=['y'], inplace=True) plt.plot(t['y'].tolist(), label='expected') plt.plot(t['pred'].tolist(), label='prediction') plt.ylabel('output') plt.legend() plt.show() chart_regression(y_hat,y) ###Output _____no_output_____ ###Markdown Log LossLog loss can measure single or multi-classification. Lower values are good, higher values are bad. The best possible score is 0 and scores above 2-3 are generally bad. Log-loss penalizes over confidence. Single:$ \text{log loss} = -{(y\log(\hat{y}) + (1 - y)\log(1 - \hat{y}))} $Multi-class:$ \text{log loss} = -\frac{1}{N}\sum_{i=1}^N {( {y}_i\log(\hat{y}_i) + (1 - {y}_i)\log(1 - \hat{y}_i))} $ ###Code t = arange(1e-6, 1.0, 0.00001) # data scientists fig = figure(1,figsize=(12, 10)) ax1 = fig.add_subplot(211) ax1.plot(t, np.log(t)) ax1.grid(True) ax1.set_ylim((-8, 1.5)) ax1.set_xlim((-0.1, 2)) ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.set_title('log(x)') show() y = np.array(binary_classification_y) y_hat = np.array(binary_classification_yhat) llos = metrics.log_loss(y,y_hat) print(f"Log loss: {llos}") # What is a good log loss? 
print(f"Perfect score (1.0): {np.log(1.0)}") print(f"95% Prediction on True : {np.log(0.95)}") print(f"90% Prediction on True : {np.log(0.90)}") print(f"85% Prediction on True : {np.log(0.85)}") print(f"80% Prediction on True : {np.log(0.80)}") print(f"75% Prediction on True : {np.log(0.75)}") print(f"70% Prediction on True : {np.log(0.70)}") print(f"65% Prediction on True : {np.log(0.65)}") print(f"60% Prediction on True : {np.log(0.60)}") print(f"55% Prediction on True : {np.log(0.55)}") print(f"40% Prediction on True : {np.log(0.4)}") print(f"30% Prediction on True : {np.log(0.3)}") print(f"10% Prediction on True : {np.log(0.1)}") print(f"1% Prediction on True : {np.log(0.01)}") print(f"0.001% Prediction on True : {np.log(0.001)}") list(zip(y,y_hat)) ###Output _____no_output_____ ###Markdown R2 R2 is a value that measures the goodness of fit. They typically range between 0 and 1 and are often written as percentages. Particularly bad R2 values can be negative. ###Code y = np.array(regression_y) y_hat = np.array(regression_yhat) ###Output _____no_output_____ ###Markdown (from Wikipedia)If $\bar{y}$ is the mean of the observed data:$\bar{y}=\frac{1}{n}\sum_{i=1}^n y_i$then the variability of the data set can be measured using three Mean squared error formulas:* The total sum of squares (proportional to the variance of the data $SS_\text{tot}=\sum_i (y_i-\bar{y})^2,$* The regression sum of squares, also called the explained sum of squares:* $SS_\text{reg}=\sum_i (f_i -\bar{y})^2,$* The sum of squares of residuals, also called the residual sum of squares:* $SS_\text{res}=\sum_i (y_i - f_i)^2=\sum_i e_i^2\,$The most general definition of the coefficient of determination is:$R^2 \equiv 1 - {SS_{\rm res}\over SS_{\rm tot}} \,$ ###Code r2 = metrics.r2_score(y,y_hat) print(f"R2 Score: {r2}") print(list(zip(y,y_hat))) ###Output R2 Score: -2.5332034672970853 [(1.2, 1.3), (2.2, 2.9), (3.0, 3.1), (2.1, 2.2), (3.5, 3.5), (3.3, 3.3), (1.2, 1.1), (1.2, 1.3), (0.1, 7.2), (1.0, 1.0), (0.05, 0.07)] ###Markdown RMSE and MSEMean square error and root mean square error are used to measure regression. MSE is simply a value where higher values indicate a worse model. MSE is not in the same units as $y$, whereas RMSE is in the same units. RMSE will not be negative and a value of 10 would mean that the errors are generally +/- 10 units. ###Code y = np.array(regression_y) y_hat = np.array(regression_yhat) ###Output _____no_output_____ ###Markdown $ \text{MSE} = \frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2 $ ###Code mse = metrics.mean_squared_error(y_hat,y) print("Score (MSE): {}".format(mse)) ###Output Score (MSE): 4.631854545454546 ###Markdown $ \text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2} $ ###Code rmse = np.sqrt(mse) print("Score (RMSE): {}".format(np.sqrt(mse))) ###Output Score (RMSE): 2.1521743761727454
notebooks/1d. Analytical solution: Normal-Normal model with known precision.ipynb
###Markdown Analytical solution: Normal-Normal model with known precision ###Code from scipy.stats import norm import numpy as np import seaborn as sns; sns.set_context('notebook'); import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown In the last notebook we have solved the Beta-Binomial model analytically and found that a Binomial likelihood is conjugate with a Beta prior. This means that the posterior distribution is also a Beta distribution but with updated parameters.In this case, we assume that we are interested in estimating some parameter $\mu$ that we cannot observe directly but we can measure some estimate $X$ using a measurement device with a known and fixed variance (or precision, defined below) and that the measurement error is normally distributed. We further assume that our prior on the value of the parameter is also normal with an assumed mean and precision.Using our assumptions and this one measurement, we want to obtain an estimate of $\mu$. Parametrizing a Normal distributionA Normal distribution can be parametrized by a mean $\mu$ and variance $\sigma^2$ but alternatively (and less frequently) it can be parametrized by a mean $\mu$ and precision $\tau = \sigma^{-2}$.Thus, instead of denoting $X \sim {\cal N}(\mu,\sigma^2)$, we write $X \sim {\cal N}(\mu,\tau)$. Formalizing the problemWe are interested in estimating $\mu$ given our prior assumptions and given the one observation $X$. Note: we use the function $f$ to denote all densities which are distinguished according to their parameters.By assumption, we have$$f(\mu) \sim {\cal N}(\mu_0, \tau_0),$$for some apriori chosen (assumed) $\mu, \tau_0$. We also assume that the observation $X$ is distributed as$$f(X|\mu) \sim {\cal N}(\mu, \tau),$$where again we assume that we know the precision $\tau$ because the manufacturer states it (or the standard deviation) on the device.The question now is, what is the distribution of $\mu|x$ given a measurement $x$. By the Bayes theorem, we know that$$f(\mu|X) \propto f(X|\mu)f(\mu).$$ The only parameter we are interested in is $\mu$, so any multiplicative terms that don't contain $\mu$ can be neglected in the following calculations, since the equation above requires that we preserve proportionality, not equality. This is very important to remember as otherwise the proceedings can become very difficult in more complicated models.For example, although the full density of $f(\mu)$ is$$f(\mu) = \sqrt{\frac{\tau_0}{2\pi}} \exp \left ( -\frac{\tau_0}{2}(\mu-\mu_0)^2 \right ),$$we can simplify this form by removing all apparent constant factors and writing$$f(\mu) \propto \exp \left ( -\frac{\tau_0}{2}(\mu-\mu_0)^2 \right ),$$since even $\tau_0$ is assumed constant. In a similar vein, we can write$$f(X|\mu) \propto \exp \left ( -\frac{\tau}{2}(X-\mu)^2 \right ),$$which is also considerably simpler. Formal solution We can then write that$$f(\mu|X) \propto f(X|\mu)f(\mu) \propto \exp \left ( -\frac{\tau}{2}(X-\mu)^2 \right ) \exp \left ( -\frac{\tau_0}{2}(\mu-\mu_0)^2 \right ).$$The only thing left to do is to determine what form is this distribution. This is again a normal distribution but we must determine its mean value and precision given assumptions and measurements. 
After merging both exponential terms, we only have one expression containing terms with $\mu^2$,$\mu$ and terms without $\mu$:$$\exp \left ( -\frac{\tau}{2}(X-\mu)^2 \right ) \exp \left ( -\frac{\tau_0}{2}(\mu-\mu_0)^2 \right ) = \exp \left ( -\frac{\tau}{2}(X-\mu)^2 -\frac{\tau_0}{2}(\mu-\mu_0)^2 \right ).$$This looks like the result could again be a normal distribution. This leads us to consider completing the square and converting the entire expression into the form$$\exp \left ( -\frac{\tau_1}{2}(\mu - \mu_1)^2 \right ).$$We could then conclude that $\mu \sim {\cal N}(\mu_1, \tau_1)$. The remaining question is how to identify $\mu_1$ and $\tau_1$. Let us expand the expression inside the parentheses and collect terms that contain $\mu^2$, $\mu$ and neglect other terms. Any terms that don't contain $\mu$ can be considered constants, since they are summands inside an exponential function and thus equivalent to multiplication by a constant, which can be neglected as we noted above.$$-\frac{\tau}{2}(X-\mu)^2 -\frac{\tau_0}{2}(\mu-\mu_0)^2$$We will only show one intermediate step but in fact the algebra is somewhat tedious. Note: we use lower case $x$ for the actual measured value in the following formulas.$$-\frac{\tau_0+\tau}{2} \left [ \mu^2 - 2\mu\frac{\tau_0\mu_0+\tau x}{\tau_0+\tau} + \frac{\tau_0\mu_0^2 + \tau x^2}{\tau_0+\tau} \right ]$$And this leads to:$$-\frac{\tau_0+\tau}{2} \left ( \mu - \frac{\tau_0\mu_0+\tau x}{\tau_0+\tau} \right )^2,$$where we neglect any terms that don't contain $\mu$.From the form of the expression, we can conclude that:$$\begin{array}{rcl} \tau_1 &=& \tau_0 + \tau \\ \mu_1 &=& \frac{\tau_0\mu_0+\tau x}{\tau_0+\tau} \\ \end{array}$$ How do we interpret the result for $\mu_1$? We can look at the final $\mu$ as a precision-weighted sum of it's prior value $\mu_0$ and the single obtained measurement $x$. Numerical testLet's look at a simple numerical model in PyMC3 that captures the same situation. ###Code import pymc3 as pm mu0, tau0 = 1, 1 # prior parameters tau = 10 # let's assume that the measurement device is accurate (precision 10x prior) observed = [1.6] # the single measurement we have obtained with pm.Model() as norm2_model: # NOTE: PyMC3 normally distributed variables will accept precision directly using keyword argument tau mu = pm.Normal('mu', mu=mu0, tau=tau0) x = pm.Normal('x', mu=mu, tau=tau, observed=observed) ###Output _____no_output_____ ###Markdown Compute the MAPThis will give us an estimate of the mode. ###Code with norm2_model: model_map = pm.find_MAP() model_map ###Output _____no_output_____ ###Markdown MCMC samplingLet us examine the entire posterior of $\mu$ as estimated by PyMC3 using MCMC. ###Code with norm2_model: trace = pm.sample(draws=4000, tune=1000, chains=1) _ = pm.traceplot(trace) ###Output _____no_output_____ ###Markdown Comparison to analytical solutionWe compute the parameters of the analytical solution below and compare the posterior to that estimated numerically. 
###Code # Analytical solution is a normal distribution with the following parameters: mu_1 = (mu0*tau0 + observed[0]*tau)/(tau0 + tau) tau_1 = tau0 + tau mu_1, tau_1 import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(12,6)) sns.distplot(trace['mu']) z = np.linspace(0, 3, 100) plt.plot(z, norm.pdf(z, loc=mu_1, scale=tau_1**-0.5)) plt.axvline(model_map['mu'], color='red') plt.legend(['true posterior', 'samples', 'MAP']) plt.title('Comparison of samples, kernel estimate and true posterior') plt.show() # Let us look at some summary statistics of the posterior distribution pm.summary(trace) ###Output _____no_output_____
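###Markdown Comparing posterior moments As a final numerical check (added here; not part of the original notebook), the analytical posterior mean and standard deviation can be compared directly with the values estimated from the MCMC samples; they should agree closely.
###Code
print(f"analytical posterior: mean = {mu_1:.4f}, sd = {tau_1 ** -0.5:.4f}")
print(f"MCMC estimate       : mean = {trace['mu'].mean():.4f}, sd = {trace['mu'].std():.4f}")
###Output
_____no_output_____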
Daily/Partition Sums.ipynb
###Markdown Partition Sums Question: Given a multiset of integers, return whether it can be partitioned into two subsets whose sums are the same.For example, given the multiset {15, 5, 20, 10, 35, 15, 10}, it would return true, since we can split it up into {15, 5, 10, 15, 10} and {20, 35}, which both add up to 55. Given the multiset {15, 5, 20, 10, 35}, it would return false, since we can't split it up into two subsets that add up to the same sum. ###Code
def power_set(s):
    # Recursively build every subset of s: the power set of s[1:],
    # plus each of those subsets with s[0] added (2^n subsets in total).
    if not s:
        return [[]]
    result = power_set(s[1:])
    return result + [subset + [s[0]] for subset in result]

def partition(s):
    # An equal-sum split exists only if the total is even and some subset
    # sums to exactly half of it.
    total = sum(s)
    if total % 2 != 0:
        return False
    target = total // 2
    for subset in power_set(s):
        if sum(subset) == target:
            return True
    return False
###Output
_____no_output_____
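###Markdown The two example multisets from the problem statement can be used to exercise the function (a small usage check added here; it is not part of the original notebook).
###Code
print(partition([15, 5, 20, 10, 35, 15, 10]))  # expected: True  ({15, 5, 10, 15, 10} vs {20, 35})
print(partition([15, 5, 20, 10, 35]))          # expected: False (total 85 is odd)
###Output
_____no_output_____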
Machine_Learning/06-Ensemble_methods-03-Bootstrap-Estimates-and-Bagging.ipynb
###Markdown 3. Bootstrap EstimationWe previously looked at the bias-variance tradeoff and if you were thinking critically you may have wondered: "Could it be possible in some way to lower bias and variance simultaneously?"In this section, we are going to take our first look into **model averaging**. The key tool that we need to do this is called **bootstrapping**, aka **resampling**. The fascinating result of this is that even though we are using the same data, we can get a better result. This should seem odd at first, since if we create a model from a set of samples, how can that be any different than taking the averages of different models trained on different subsets of those same samples again and again-it is the same set of samples after all. However, model averaging does work, even if it is true that they work on the same data that you would have if you only have 1 model. Before we talk about bootstraping for models, we are going to look at bootstrapping for simple parameter estimates like the mean. 1.1 Bootstrap Estimation - MeanSo, how does bootstrap estimation work? We are given a set of data points from $1...N$$$X = x_1,x_2,...,x_N$$We then draw a sample, with replacement, from this data set, $B$ times. For each of the $b$ subsample datasets, we calculate the parameter of interest-aka the mean, variance, or any other statistic. Once the loop is done we will have $B$ different estimates of the parameter. We can use this to find the mean of the parameter, and the variance of the parameter. Why do we care about the mean and variance? First, the mean tells us the most likely value of the parameter, in other words the expected value of the parameter. The variance then can tell us how accurate that estimate is! A **large variance** means not that accurate, and a **small variance** means more accurate. So, in pseudo code the algorithm could look like this:```X = x1, x2,...xNfor b = 1..B: Xb = sample_with_replacement(X) size of Xb is N sample_mean[b] = sum(Xb)/NCalculate mean and variance of {sample_mean[1],...,sample_mean[B]}```As an example, lets just say that $X$ has a size N = 5 with the following values: $$X = 1,2,3,4,5$$Let's say that we decide to make $B = 4$, in order words we are going to have 4 iterations of sampling. The first time we sample (with replacement) we may end up with:$$X_{b1} = 1,2,5,5,2$$We have sampled five values from our original $X$. If we perform that 3 more times, we may end up with:$$X_{b2} = 4,3,3,1,5$$$$X_{b3} = 2,4,1,5,4$$$$X_{b4} = 3,1,3,2,4$$Now, let's say that our original goal was to be finding a certain parameter of $X$, in this case the mean. We can then calculate the mean of each of above samples:$$\mu_{B1} = \frac{15}{5} = 3$$$$\mu_{B2} = \frac{16}{5} = 3.2$$$$\mu_{B3} = \frac{16}{5} = 3.2$$$$\mu_{B4} = \frac{13}{5} = 2.6$$We now have $B$ different estimates of the mean of our samples, which were taken from $X$. What we may want to do is now try and find the **mean** of these means. In other words, we can try and find the parameters that describe this set of sampled data:$$mean(\mu_{B1},\mu_{B2},\mu_{B3},\mu_{B4}) = \frac{\mu_{B1} + \mu_{B2}+\mu_{B3}+\mu_{B4}}{B}$$And the variance which can tell us how accurate our estimate is:$$var(\mu_{B1},\mu_{B2},\mu_{B3},\mu_{B4})$$ --- 1.2 Sampling with ReplacementIn case you have not come across sampling with replacement, let's quickly touch on it now. Suppose we have a dataset with the points 1,2,3,4,5.$$X = 1,2,3,4,5$$Suppose we then draw a sample and get 5. 
Sampling with replacements means that if we draw another sample, we can get 5 again. In fact, we could draw a sample with all 5s! $$sample = 5,5,5,5,5$$This is because we replace the sample after we take it from the dataset. This is the opposite of sampling without replacement. If we were to sample without replacement and we drew a number of samples equal to the dataset size, we would just draw the dataset itself. Hence, sampling with replacement is important to this process. --- 1.3 Why does bootstrapping work?As you can see, bootstrapping is a very simple algorithm- you are just computing the parameter estimate multiple times from the same dataset. So, why does it work? Lets look at the results first and then we can derive them. Remember, we are interested in the mean and variance. **Mean**The mean of the bootstrap estimate is equal to the parameter itself:$$E(\bar{\theta_B}) = mean(\bar{\theta_B}) = \theta$$And, as an example, if our parameter had been the mean, then what we find is that mean of our bootstrap estimate of $\mu$, is equal to the actual value of $\mu$. In other words, the mean of our bootstrap sampled means, is the actual mean of the original data.$$E(\bar{\mu_B}) = \mu$$Or in the case of our example earlier:$$E(\frac{\mu_{B1}+\mu_{B2}+\mu_{B3}+\mu_{B4}}{B}) = \mu_X$$ **Variance**The variance is a bit more complicated. Let's suppose the correlation coefficient between two different estimates of the parameter, $\hat{\theta}_i, \hat{\theta}_j$ is $\rho$, and the variance of each $\hat{\theta}$ is $\sigma^2$:$$\rho = corr(\hat{\theta}_i, \hat{\theta}_j), var(\hat{\theta}) = \sigma^2$$Then, it can be derived that the variance of the bootstrap estimate is:$$var(\bar{\theta}_B) = \frac{1 - \rho}{B}\sigma^2 + \rho \sigma^2$$Notice that if each bootstrap estimate is completely uncorrelated from the others, the variance would be the original variance divided by $B$. This means that for every bootstrap sample we take, we reduce the variance of our estimate. That is remarkable! Unfortunately, there will probably be correlation. --- 1.4 Confidence IntervalOne application of bootstrap estimation, is that we can also estimate the confidence interval of our estimate. We assume a gaussian approximation, so let's say we want a 95% confidence interval. That means that we want the lower and upper bound of $\theta$ that covers 95% of the area under the probability distribution. This is approximately equal to the sample mean of the bootstrap $\theta$, plus or minus 1.96 times the standard deviation of the bootstrap $\theta$:$$95\% CI \approx \bar{\theta}_B \;\pm\; 1.96 std(\hat{\theta}_B)$$ --- 1.5 Derivation of Mean and Variance Now that we know the main results of bootstrap estimation, how do we show that they are true? 1.5.1 Mean DerivationLet's start with the mean. We want to be able to show that the mean value of our bootstrap estimated parameter is the value of the parameter itself. In other words, think back to our simple example at the start of lecture. Our data set was $X$, and the parameter we were looking at was $\mu$, the mean of $X$. We want to be able to prove that after performing our bootstrap sampling, and that expected value of the mean of our samples (think $\mu_{B1}$, $\mu_{b2}$, and so on) are equal to actual mean of $X$, since this was the parameter we were originally trying to estimate! 
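Before working through the algebra, here is a quick empirical illustration of this claim (added here as a sketch; it is not part of the derivation): repeatedly bootstrap a small dataset, average the resampled means, and compare with the plain sample mean.
###Code
# Empirical sketch: the average of many bootstrap sample means is close to the
# sample mean of the original data (values are illustrative).
import numpy as np

X_demo = np.random.randn(50)   # some original data
boot_means = [np.random.choice(X_demo, size=len(X_demo), replace=True).mean()
              for _ in range(5000)]
print("sample mean of X        :", X_demo.mean())
print("mean of bootstrap means :", np.mean(boot_means))
###Output
_____no_output_____
###Markdown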
So, we know that expected value (based on its definiton: *expected value of a random variable, intuitively, is the long-run average value of repetitions of the experiment it represents*), is equivalent to the mean. Let's define the following: > * $\bar{\theta}_B$ = sample mean of resampled sample means* $\hat{\theta}_i$ = sample mean of bootstrap sample $i$* $\theta$ = original parameter we're trying to estimateOkay, now we can start with looking at the expected value of our resampled sample means:$$E(\bar{\theta}_B)$$We can expand $\bar{\theta}_B$ based on its definition:$$E(\bar{\theta}_B) = E \Big[ \frac{1}{B} \sum_{i=1}^B \hat{\theta}_i \Big] = E\Big[\frac{1}{B}(\hat{\theta}_1 + ...+\hat{\theta}_B)\Big]$$Because $\frac{1}{B}$ is a constant, and the expected value of constant is just itself, we can pull it out:$$E(\bar{\theta}_B) = \frac{1}{B}*E\Big[(\hat{\theta}_1 + ...+\hat{\theta}_B)\Big]$$And then we know the expected value of any $\hat{\theta}_i$ is going to be $\theta$, the actual parameter. We also know that there are $B$ total $\hat{\theta}$s, so we can pull that out and end up with the final equation:$$E(\bar{\theta}_B) = \frac{1}{B}BE(\hat{\theta}) = \theta$$We can see that the expected value of the bootstrap estimate of the parameter, is equal to the parameter, which is exactly what we were looking for. 1.5.2 Variance Derivation 1.5.2.1 Variance Derivation - DefinitionsNext, let's look at the variance. We can start with some definitions. Let's suppose that the expected value of $\hat{\theta}$ (aka the expected value of the parameter that we calculate after resampling) is equal to $\mu$. $$E(\hat{\theta}) = \mu$$This is not necessarily equal to the original mean of data $X$. It is the mean of whatever parameter we are trying to estimate. For instance, say that the parameter we are trying to estimate is the mean (as in our simple example from earlier). We are stating that the expected value of any of the means we have sampled ($\mu_{B1},\mu_{B2}$, etc) is just equal to $\mu$. So in this case $\mu_{B1}$ and so on would be represented as $\hat{\theta}$ (the parameter we are trying to find, and $\mu$ is the mean/expected value of that parameter. Let's also define the variance of $\hat{\theta}$ to be $\sigma^2$:$$var(\hat{\theta}) = E \Big[(\hat{\theta} - \mu)^2\Big] = \sigma^2$$We can next define the correlation between two different $\hat{\theta}$s to be $\rho$:$$\rho = \frac{E \Big[(\hat{\theta}_i - \mu)(\hat{\theta}_j - \mu) \Big]}{\sigma^2}$$Note that correlation is scaled by standard deviation, so it always $[-1, 1]$. 
We then define the sum of all $\hat{\theta}$ to be $S_B$:$$S_B = \sum_{i=1}^B \hat{\theta}_i$$And therefore the sample mean of the bootstrap estimates is:$$\bar{\theta}_B = \frac{1}{B}S_B$$ 1.5.2.2 Variance Derivation - Write out definitionLet's start by writing out the definiton for variance of $\bar{\theta}_B$:$$var(\bar{\theta}_B) = E \Big[(\bar{\theta}_B - \mu)^2\Big]= E \Big[(\frac{1}{B}S_B - \mu)^2\Big]$$Then we can perform several algebraic operations to the right side of the equation:$$E \Big[(\frac{1}{B}S_B - \frac{1}{B} B\mu)^2\Big]$$$$E \Big[\Big((\frac{1}{B})(S_B - B\mu)\Big)^2\Big]$$$$E \Big[\frac{1}{B^2}(S_B - \mu)^2\Big]$$$$\frac{1}{B^2}E \Big[(S_B - \mu)^2\Big]$$$$\frac{1}{B^2}E \Big[S_B^2 - 2\mu BS_B + \mu^2 B^2\Big]$$Now, if we look specifically at the term $- 2\mu BS_B$, we can see that 2, $\mu$, and $B$ are constant, so we can pull them outside of the expected value:$$E\Big[-2 \mu B S_B\Big] = -2 \mu B *E\Big[S_B\Big] $$We know that the expected value of $S_B$, based on its definition, can be rewritten as:$$-2 \mu B *E\Big[S_B\Big] = -2 \mu B *E\Big[B\hat{\theta}\Big] $$And since $B$ is constant, it can be pulled outside of the expected value, and we have defined the expected value of $\hat{\theta}$ to be $\mu$:$$-2 \mu B^2 *E\Big[\hat{\theta}\Big] = -2 \mu^2 B^2 $$Now, back to the equation we branched off from: $$\frac{1}{B^2}E \Big[S_B^2 - 2\mu BS_B + \mu^2 B^2\Big]$$We can replace the middle term with that which we just found above, and end up with: $$var(\bar{\theta}_B) = \frac{1}{B^2}E \Big[S_B^2 - \mu^2 B^2\Big]$$At this point, we can note that $\mu$ and $B$ are **both constant**, so they can be pulled out of the expected value: $$var(\bar{\theta}_B) = \frac{1}{B^2}\Big(E \Big[S_B^2\Big] - \mu^2 B^2\Big)$$Which if we then multiply the fraction through, we end up with: $$var(\bar{\theta}_B) = \frac{1}{B^2}E \Big[S_B^2\Big] - \mu^2$$Our main focus now is to find $E\Big[S_B^2\Big]$. 1.5.2.3 Variance Derivation - Find $E\Big[S_B^2\Big]$We can start by using the definition of $S_B$, which is just the sum of the individual sample $\hat{\theta}$s:$$E\Big[S_B^2\Big] = E\Big[(\hat{\theta}_1+\hat{\theta}_2+...+\hat{\theta}_B)(\hat{\theta}_1+\hat{\theta}_2+...+\hat{\theta}_B)\Big]$$The important point to notice here is that we will end up with two types of terms here when we multiply this out. There will be the type where it is $\hat{\theta}_i*\hat{\theta}_i$ (aka the subscript is the same for both $\hat{\theta}$s in the expected value, it will look like $(\hat{\theta}_1*\hat{\theta}_1 +\hat{\theta}_2*\hat{\theta}_2)$ and so on). There will be $B$ of these terms. The other type of term that we will get is $\hat{\theta}_i*\hat{\theta}_j$, where $i \neq j$. Since there will be $B^2$ terms in total, and there will be $B$ where $i = j$, then there will be $B(B-1)$ terms where $i \neq j$. Both of these terms will be non zero.$$E\Big[S_B^2\Big] = BE\Big[\hat{\theta}_i^2\Big] + B(B-1)E_{i \neq j}\Big[\hat{\theta}_i\hat{\theta}_j\Big]$$We can find these two expected values using our previous definitions! 1.5.2.3 Variance Derivation - Rearange equation for Variance and CorrelationIf we rearange our equation for the variance and correlation of $\hat{\theta}$, we can find these two expected values. 
We can start with the variance and begin looking for $E\Big[\hat{\theta}_i^2\Big]$:$$var(\hat{\theta}_i) = \sigma^2 = E\Big[ (\hat{\theta}_i - \mu)^2 \Big]$$ And we can expand out the inside: $$E\Big[ (\hat{\theta}_i - \mu)^2 \Big] = E\Big[ (\hat{\theta}_i^2 - 2\mu \hat{\theta}_i+ \mu^2) \Big]$$ Take the expected value of each term:$$E\Big[\hat{\theta}_i^2\Big] - E\Big[ 2\mu \hat{\theta}_i \Big] + \mu^2 $$ Focusing on the middle term (as we did earlier), we can see that the 2 and $\mu$ just have an expected value of themselves, so they can be pulled out:$$E\Big[\hat{\theta}_i^2\Big] - 2\mu E\Big[\hat{\theta}_i \Big] + \mu^2 $$ And the expected value of $\hat{\theta}_i$ is just $\mu$ (by definition!). So we end up with: $$\sigma^2 = E\Big[\hat{\theta}_i^2\Big] - \mu^2 $$ $$E\Big[\hat{\theta}_i^2\Big] = \sigma^2 + \mu^2 $$ Great! Now we can look at $\rho$ and begin solving for $E_{i \neq j}\Big[\hat{\theta}_i\hat{\theta}_j\Big]$. Let's start with the definition of $\rho$:$$\rho = \frac{E \Big[(\hat{\theta}_i - \mu)(\hat{\theta}_j - \mu) \Big]}{\sigma^2}$$We can expand the top:$$\rho = \frac{E \Big[(\hat{\theta}_i\hat{\theta}_j - \mu \hat{\theta}_i - \mu \hat{\theta}_j +\mu^2) \Big]}{\sigma^2}$$And again if we look at the middle terms, we can simplify them by taking out the $\mu$, and knowing the the expected value of $\hat{\theta}$ is just $\mu$. This allows us to simplify our equation to:$$\rho = \frac{E \Big[\hat{\theta}_i\hat{\theta}_j\Big] - \mu^2}{\sigma^2}$$And we can rearange that to find: $$E \Big[\hat{\theta}_i\hat{\theta}_j\Big] = \rho\sigma^2 + \mu^2$$ 1.5.2.4 Variance Derivation - Plug in values to find $E\Big[S_B^2\Big]$Now if we go back to the equation we branched off from: $$E\Big[S_B^2\Big] = BE\Big[\hat{\theta}_i^2\Big] + B(B-1)E_{i \neq j}\Big[\hat{\theta}_i\hat{\theta}_j\Big]$$We can being solving for $E\Big[S_B^2\Big]$. So let's plug in our recently determined expected values:$$ E\Big[S_B^2\Big] = \sigma^2 + \mu^2 + B(B-1)(\rho\sigma^2 + \mu^2)$$And then we can simplify to:$$ E\Big[S_B^2\Big] = B \sigma^2 + B(B-1)\rho\sigma^2 + \mu^2B^2$$Great, now we have our value for $E\Big[S_B^2\Big]$! Let's now return to the equation for variance that we had branched off from:$$var(\bar{\theta}_B) = \frac{1}{B^2}E \Big[S_B^2\Big] - \mu^2$$And plug in the expected value of $S_B$ we just solved for:$$var(\bar{\theta}_B) = \frac{1}{B^2} \Big[B \sigma^2 + B(B-1)\rho\sigma^2 + \mu^2B^2 \Big] - \mu^2$$And finally let's simplify the above equation:$$ \Big[\frac{1}{B} \sigma^2 + \frac{(B-1)}{B}\rho\sigma^2 + \mu^2\Big] - \mu^2$$$$ \frac{\sigma^2}{B} + \frac{(B-1)}{B}\rho\sigma^2 $$$$ \frac{\sigma^2}{B} + \frac{B\rho\sigma^2}{B} - \frac{\rho\sigma^2}{B} $$$$ \frac{\sigma^2- \rho \sigma^2}{B} + \rho \sigma^2$$$$var(\bar{\theta}_B) = \frac{1- \rho}{B}\sigma^2 + \rho \sigma^2$$We have arrived at the final solution for variance of $\bar{\theta}_B$! 1.5.3 Variance AnalysisOne interesting question to ask now that we have arrived at the solution for variance, is what will the variance of the bootstrap estimate be, if the correlation is maximum (recalling that the maximum correlation coefficient, $\rho$, is 1, meaning that the 2 variables are perfectly correlated with eachother)? Well, if $\rho$ is 1 then we see that the $\frac{1}{B}$ term goes away and we are just left with $\sigma^2$. This makes sense, because if each individual estimate is correlated with eachother, then the variance of the bootstrap estimate will not go down at all. 
However, if $\rho$ is 0, then we can get the best possible decrease in variance, which is $\frac{1}{B}$.It is import to note that using the bootstrap estimate of the mean, or using bootstrap for any linear model, does not greatly improve the variance of the model. For linear statistics, for which the sample mean is an example:$$\rho = \frac{N}{2N - 1} \approx 0.5$$ The biggest advantage of bootstrapping occurs when you use highly nonlinear models like **decision trees**. On different data sets they will produce highly irregular decision boundaries that will **not correlate** with each other that much, so $\rho$ will be small. In other words, we will reduce the variance more by combing non linear models that are less likely to be correlated! --- 2. Bootstrap Demo in CodeWe are now going to demonstrate bootstrapping in order to estimate the confidence interval of the sample mean, in order to show that it is approximately equal to the traditional method of estimating the confidence interval. ###Code import numpy as np import matplotlib.pyplot as plt import seaborn as sns from scipy.stats import norm, t # Seaborn Plot Styling sns.set(style="white", palette="husl") sns.set_context("poster") sns.set_style("ticks") B = 200 # B is the number of times we are going to sample N = 20 # N = number of data points X = np.random.randn(N) # X is a standard normal distribution with N points ###Output _____no_output_____ ###Markdown Note that the `numpy.random.randn` function creates a **univariate/normal distribution**, hence we are looking for a $\mu$ of 0, and a variance, $\sigma^2$ of 1. This can be seen here in the docs: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randn.html ###Code print("Sample mean of X: ", X.mean()) # This is the regular sample mean individual_estimates = np.empty(B) # Create array to hold estimates # Sample from X (with replacement) B times for b in range(B): sample = np.random.choice(X, size=N) # draw a random sample from X of size N = 20 individual_estimates[b] = sample.mean() # save the sample mean # Calculate bootstrap estimate of the mean & STD bmean = individual_estimates.mean() # Mean of all individual estimates bstd = individual_estimates.std() # Standard Deviation of all individual est ###Output Sample mean of X: 0.1915196036658688 ###Markdown Now we can calculate the confidence interval! Remember, the confidence interval is used in finding the range of values that are most likely to contain the true mean, $\mu$. It can visualized as: ###Code """Find lower and upper bound of 95% confidence interval for estimate of mean of X""" lower = bmean + norm.ppf(0.025) * bstd # norm.ppf(0.025) == -1.96 upper = bmean + norm.ppf(0.975) * bstd # norm.ppf(0.975) == +1.96 print("Bootstrap mean of X: ", bmean) ###Output Bootstrap mean of X: 0.21382653327368076 ###Markdown And let's find the confidence interval the traditional way: ###Code # Traditional way of calculating CI lower2 = X.mean() + norm.ppf(0.025) * X.std() / np.sqrt(N) upper2 = X.mean() + norm.ppf(0.975) * X.std() / np.sqrt(N) ###Output _____no_output_____ ###Markdown Now we are going to want to plot this. 
###Code fig, ax = plt.subplots(figsize=(12,8)) plt.hist(individual_estimates, bins=20, ec="black") plt.axvline(x=lower, linestyle='--', color='g', label="lower bound for 95% CI (bootstrap)") plt.axvline(x=upper, linestyle='--', color='g', label="upper bound for 95% CI (bootstrap)") plt.axvline(x=lower2, linestyle='--', color='b', label="lower bound for 95% CI") plt.axvline(x=upper2, linestyle='--', color='b', label="upper bound for 95% CI") plt.legend(fontsize=20, bbox_to_anchor=(1, 1), loc=2) plt.show() ###Output _____no_output_____ ###Markdown --- 3. BaggingWe are now going to look at the application of **bootstrapping** to **model averaging**. This is called **bagging**, which is short for **bootstrap aggregating**. In particular, we will use the bootstrap resampling method to create to create multiple different models, and then average them to create a final model, which may or may not have reduced variance depending on which model you use. The algorithm looks pretty much the same as bootstrapping, except that now instead of calculating the sample mean or some other statistic ($\hat{\theta}$), we are training the model instead. Here is how the algorithm works in pseudocode: 3.1 Training``` Trainingmodels = []for b=1..B: We loop up to B model = Model() Create a model each time Xb, Yb = resample(X) Get the bootstrap sample model.fit(Xb, Yb) Train the model models.append(model) Append model to list of models``` 3.2 PredictionPrediction is then done by **averaging each model** in the case of **regression**, or **voting** the case of **classification**. So for regression the algorithm would look like:``` Regressiondef predict(X): return np.mean([model.predict(X) for model in models], axis=1)```Now classification is a bit more complicated because we need to collect the votes. In the case where each model returns a class probability this is not needed since we can just take the average like we did for regression. Here is an example of how to do prediction for one sample at a time. This a pretty naive implementation that can probably be improved. Note that we don't want to both sorting the votes dictionary, since sorting is **O(nlogn)** and finding the max is just **O(n)**:``` Classificationdef predict_one(X): votes = {} for model in models: Loop through all models k = model.predict(X) Find prediction class for each votes[k]++ Increment that class value in the votes dict argmax = 0, max = -inf We don't sort that hash, since that is O(nlogn) for k, v in votes.iteritems(): Iterate through votes dictionary if v > max: If value is greater than current max, set as max argmax = k; max = v return k Return class k, the class with the most votes```Another option is to create an **(N x K)** matrix and accumulate the predictions. Since numpy allows us to index an array by providing arrays for each index, we can use those to accumulate the votes for each corresponding sample and class pair. Since everything is inside a 2d array by the end, we can just take the argmax over one axis. ``` Classification def predict(X): output = np.zeros((N, K)) Create N x K matrix to hold predictions for model in models: Loop through all models Here we use np.arange(N) as our row index (size N), and we then use model.predict(X) As our column index (size N). 
For each example we are predicting 1..N, we increment the value of the class that model.predict(X) returned as an output output[np.arange(N), model.predict(X)] += 1 return output.argmax(axis=1) Return the argmax over 1 axis```We can do a simpler version of that if we are doing binary classification. Suppose we have B models in total- the number of votes will always be between 0 and B, so if we just add up all of the votes and then divide by B, we will get a number between 0 and 1 which we can just round. ``` Classificationdef predict(X): output = np.zeros(N) for model in models: output += models.predict(X) return np.round(output / B)``` --- 4. Bagging Regression Trees In this section we are going to implement bagging for regression. We can start with our imports. ###Code import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.tree import DecisionTreeRegressor from sklearn.utils import shuffle # Seaborn Plot Styling sns.set(style="white", palette="husl") sns.set_context("poster") sns.set_style("ticks") ###Output _____no_output_____ ###Markdown Our first step is to create the data. ###Code """Create the data""" T = 100 # 100 points to represent the x axis x_axis = np.linspace(0, 2*np.pi, T) # [0, 2pi] with T points y_axis = np.sin(x_axis) # sin(x) is ground truth function ###Output _____no_output_____ ###Markdown Now we need to actually get the training data. ###Code """Get the training data""" # Training data will be 30 points N = 30 # Selecting T random points from 1..T without replacement idx = np.random.choice(T, size=N, replace=False) # Selecting training data from x_axis, must then reshape to N x D (D is 1 in this example) Xtrain = x_axis[idx].reshape(N, 1) Ytrain = y_axis[idx] ###Output _____no_output_____ ###Markdown Now what we want to do is try a **lone decision tree**, so that we can compare the ensemble to the first result. ###Code """Create a lone decision tree""" model = DecisionTreeRegressor() model.fit(Xtrain, Ytrain) prediction = model.predict(x_axis.reshape(T, 1)) # need to reshape for sklearn api print("Score for 1 tree:", model.score(x_axis.reshape(T, 1), y_axis)) # Print R^2 score ###Output Score for 1 tree: 0.9913636587270763 ###Markdown And let's plot the predictions our decision tree made vs. the actual curve. ###Code """Plot the lone decision tree's predictions""" fig, ax = plt.subplots(figsize=(12,8)) plt.plot(x_axis, prediction) plt.plot(x_axis, y_axis) plt.show() ###Output _____no_output_____ ###Markdown Awesome, now that we have created everything for our lone decision tree, we can move on to our bagged regressor! 
###Code class BaggedTreeRegressor: def __init__(self, B): # Init function self.B = B """Fit function""" def fit(self, X, Y): N = len(X) # Set N to len(X), number training examples self.models = [] # Initialize models to empty array for b in range(self.B): # Loop through all B models idx = np.random.choice(N, size=N, replace=True) # Generate bootstrap sample Xb = X[idx] # X bootstrap sample Yb = Y[idx] # Y bootstrap sample model = DecisionTreeRegressor() # Create decision tree model.fit(Xb, Yb) # Fit to bootstrap sample self.models.append(model) # Append to models array """Predict function, mean of all predictions when performing regression""" def predict(self, X): predictions = np.zeros(len(X)) for model in self.models: # Loop through all models predictions += model.predict(X) # Accumulate predictions return predictions / self.B # Return mean of predictions """Score function, used to calculate R^2, the amount of variance explained""" def score(self, X, Y): # R^2 = 1 - (unexplained variation / total vari d1 = Y - self.predict(X) # Observed values minus predictions d2 = Y - Y.mean() # Observed values minus mean unexplained_var = d1.dot(d1) # Square and sum diff between observed and prediction total_var = d2.dot(d2) # Square and sum diff between observed and mean return 1 - (unexplained_var / total_var) ###Output _____no_output_____ ###Markdown And now we can put our class into use. ###Code model = BaggedTreeRegressor(200) # Setting B to 200 (200-500 are common B values) model.fit(Xtrain, Ytrain) print("Score for bagged tree:", model.score(x_axis.reshape(T, 1), y_axis)) prediction = model.predict(x_axis.reshape(T, 1)) # Plot the bagged regressor's predictions fig, ax = plt.subplots(figsize=(12,8)) plt.plot(x_axis, prediction) plt.plot(x_axis, y_axis) plt.show() ###Output Score for bagged tree: 0.9950689398322677 ###Markdown --- 4. Bagging ClassificationIn this section we are going to implement bagging for classification. We can start with our imports. ###Code import numpy as np import matplotlib.pyplot as plt from sklearn.tree import DecisionTreeClassifier from sklearn.utils import shuffle """Function to plot decision boundary""" def plot_decision_boundary(X, model): h = .02 # step size in the mesh # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. Z = model.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contour(xx, yy, Z, cmap=plt.cm.Paired) ###Output _____no_output_____ ###Markdown And let's set our seed. ###Code np.random.seed(10) ###Output _____no_output_____ ###Markdown Now we can create our data. 
###Code N = 500 # 500 points D = 2 # Dimensionality is 2 X = np.random.randn(N, D) # Generate some Gaussian random numbers """Dataset is going to be a noisy XOR""" sep = 2 # Separation parameter is equal to 2 X[:125] += np.array([sep, sep]) # 1st 125 pts centered at [+2, +2] X[125:250] += np.array([sep, -sep]) # centered at [+2, -2] X[250:375] += np.array([-sep, -sep]) # centered at [-2, -2] X[375:] += np.array([-sep, sep]) # centered at [-2, +2] Y = np.array([0]*125 + [1]*125 + [0]*125 + [1]*125) ###Output _____no_output_____ ###Markdown And let's plot it to see what it looks like: ###Code # Plot the data fig, ax = plt.subplots(figsize=(12,8)) plt.scatter(X[:,0], X[:,1], s=100, c=Y, alpha=0.5, cmap="viridis") plt.show() ###Output _____no_output_____ ###Markdown Now we can see how a lone decision tree performs. ###Code # Lone decision tree model = DecisionTreeClassifier() model.fit(X, Y) print("Score for 1 tree:", model.score(X, Y)) ###Output Score for 1 tree: 1.0 ###Markdown And plot the lone decision tree decision boundary: ###Code # Plot data with lone decision tree boundary fig, ax = plt.subplots(figsize=(12,8)) plt.scatter(X[:,0], X[:,1], s=100, c=Y, alpha=0.5, cmap="viridis") plot_decision_boundary(X, model) plt.show() ###Output _____no_output_____ ###Markdown We can clearly see that the decision boundary that our lone decision tree came up with is **very** noisy. At the same time the score is 1 because we are overfitting. Let's continue and create our bagged model. ###Code # Create the bagged model class BaggedTreeClassifier: def __init__(self, B): self.B = B """Create B models, sample with replacement, get bootstrap sample, create decision tree""" def fit(self, X, Y): N = len(X) self.models = [] for b in range(self.B): idx = np.random.choice(N, size=N, replace=True) Xb = X[idx] Yb = Y[idx] model = DecisionTreeClassifier(max_depth=2) # limit max depth so boundary smooth model.fit(Xb, Yb) self.models.append(model) def predict(self, X): # No need to keep a dictionary since we are doing binary classification predictions = np.zeros(len(X)) for model in self.models: predictions += model.predict(X) # Sum predictions return np.round(predictions / self.B) # Divide by total num predictions, round """Score function - the accuracy of the model""" def score(self, X, Y): P = self.predict(X) return np.mean(Y == P) ###Output _____no_output_____ ###Markdown Now we can test our model out! ###Code model = BaggedTreeClassifier(200) model.fit(X, Y) print("Score for bagged model:", model.score(X, Y)) # Plot data with boundary fig, ax = plt.subplots(figsize=(12,8)) plt.scatter(X[:,0], X[:,1], s=100, c=Y, alpha=0.5, cmap="viridis") plot_decision_boundary(X, model) plt.show() ###Output Score for bagged model: 0.968
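###Markdown As a cross-check (added here; not part of the original lecture code), scikit-learn ships a ready-made `BaggingClassifier` that implements the same idea; with comparable settings it should reach a similar training score on this noisy XOR data.
###Code
from sklearn.ensemble import BaggingClassifier

# Same idea as our BaggedTreeClassifier: 200 depth-2 trees fit on bootstrap samples
sk_bag = BaggingClassifier(DecisionTreeClassifier(max_depth=2), n_estimators=200)
sk_bag.fit(X, Y)
print("Score for sklearn BaggingClassifier:", sk_bag.score(X, Y))
###Output
_____no_output_____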
practices/Day-002/day_002_example.ipynb
###Markdown String Operation --- String: checking whether a string consists of particular kinds of characters isnumeric(), isdigit(), isdecimal() ###Code
## The difference lies in which Unicode ranges each method accepts: isdecimal() ⊆ isdigit() ⊆ isnumeric()
def spam(s):
    for attr in ['isnumeric', 'isdecimal', 'isdigit']:
        print(attr, getattr(s, attr)())

spam('3')
spam('½')
spam("⑩⑬㊿")
spam("🄀⒊⒏")
spam('³')
## Because '.' is not a numeric character, all three methods return False
spam('2.345')
###Output
isnumeric False
isdecimal False
isdigit False
###Markdown isalnum() ###Code
## Returns True if the string has at least one character and all characters are letters or digits; otherwise returns False
'23'.isalnum()
'我要學python'.isalnum()
## '.' does not count as a letter or digit
'我要學python.'.isalnum()
## a space does not count as a letter or digit
'我要學python '.isalnum()
###Output
_____no_output_____
###Markdown isupper() / islower() ###Code
'ABC'.isupper()
'ABC'.islower()
'ABc'.islower()
###Output
_____no_output_____
###Markdown Common formatting symbols !['escape'](escape.png) %s formats as a string ###Code
'I will like to be an AI %s' % ('engineer')
'I will like to be an %s %s' % ('AI','engineer')
###Output
_____no_output_____
###Markdown %i , %d format as integers ###Code
'%i' % (4.356)
'%d' % (4.356)
###Output
_____no_output_____
###Markdown %e formats in scientific notation ###Code
'%e' % (4.356)
###Output
_____no_output_____
###Markdown %f formats as a float ###Code
'%f' % (4.356)
## show only two digits after the decimal point
'%.2f' % (4.356)
###Output
_____no_output_____
###Markdown --- string.format() achieves string formatting with a more readable syntax. Without specifying positions, arguments are filled in order; the basic syntax uses {} and : in place of the older % ###Code
'{} {} {}'.format('I','Love','Python')
###Output
_____no_output_____
###Markdown Specifying the order ###Code
'{1} {0} {2}'.format('Love','I','Python')
###Output
_____no_output_____
###Markdown Specifying variable names ###Code
'{name} {verb} {language}'.format(verb = 'Love', name = 'I', language = 'Python')
###Output
_____no_output_____
###Markdown Passing in a dictionary ###Code
dic_ = {'verb' : 'Love', 'name' : 'I', 'language' : 'Python'}
'{name} {verb} {language}'.format(**dic_)
###Output
_____no_output_____
###Markdown Passing in a list ###Code
list_ = ['Love', 'I', 'Python']
## 0 refers to the list passed in
'{0[1]} {0[0]} {0[2]}'.format(list_)
###Output
_____no_output_____
###Markdown Using format instead of % ###Code
## works even without any format specifier
'{}'.format(4.356)
## same as '%.2f' % (4.356) above
'{:.2f}'.format(4.356)
'{:.2%}'.format(4.356)
###Output
_____no_output_____
###Markdown Padding numeric strings to a fixed width ###Code
## < pads on the right, here with 0 up to 10 characters
'{:0<10d}'.format(5)
## > pads on the left, here with 1 up to 10 characters
'{:1>10d}'.format(5)
## > pads on the left with spaces, useful for alignment
'{:>10d}'.format(5)
###Output
_____no_output_____
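###Markdown f-strings Since Python 3.6, formatted string literals (f-strings) offer an even more readable alternative to % and str.format(); they use the same format-spec mini-language. (This cell is an addition to the notes above.)
###Code
name, verb, language = 'I', 'Love', 'Python'
print(f'{name} {verb} {language}')
print(f'{4.356:.2f}')   # same as '{:.2f}'.format(4.356)
print(f'{5:0<10d}')     # same padding spec as '{:0<10d}'.format(5)
###Output
_____no_output_____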
ST449.ipynb
###Markdown Data ###Code movies = pd.read_csv("movies.csv") ratings = pd.read_csv("ratings.csv") display(movies.head(5)) display(ratings.head(5)) display(movies.info()) display(ratings.info()) ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 9742 entries, 0 to 9741 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 movieId 9742 non-null int64 1 title 9742 non-null object 2 genres 9742 non-null object dtypes: int64(1), object(2) memory usage: 228.5+ KB ###Markdown No null values in any of the columns in both datasets. ###Code display(ratings['rating'].describe()) ###Output _____no_output_____ ###Markdown On average, most users have rated 3.5 to movies. 4.1 Content Based Filtering ###Code tf = TfidfVectorizer(analyzer=lambda s: (c for i in range(1,4) for c in combinations(s.split('|'), r=i))) tfidf_matrix = tf.fit_transform(movies['genres']) pd.DataFrame(tfidf_matrix.todense(), columns=tf.get_feature_names(), index=movies.title).head() ###Output _____no_output_____ ###Markdown What we have done is -- taken combinations of genres upto 4, i.e if we have "Adventure|Comedy|Action", we are taking combinations like "Adventure", "Comedy", "Action", "Adventure, Comedy", "Comedy, Action" and so on, but in such a way that "Comedy, Action" and "Action, Comedy" are treated the same, since order doesn't matter. Then we have caculated the TF-IDF weights for each movies using these combinations. We calculate the similarity between the movies by using the Cosine Similarity ###Code cos_sim = cosine_similarity(tfidf_matrix) cos_sim_df = pd.DataFrame(cos_sim, index=movies['title'], columns=movies['title']) display(cos_sim_df.head()) ###Output _____no_output_____ ###Markdown We observe that of the genres are a perfect match -- for example, the genres for Toy Story (1995) obviously perfectly match with itself, the similarity score is 1, and if the genres don't match at all, like genres of Toy Story (1995) dont match with Sudden Death (1995), then the similarity score is 0 ###Code def genre_recommend(movie,n): score = pd.DataFrame(cos_sim_df[movie]) score = score.sort_values(by=movie, ascending = False).head(n+1) values = list(score.index.values) values.remove(movie) return values movies[movies.title.eq('Aladdin (1992)')] rec = genre_recommend('Aladdin (1992)',10) mov = [] gen = [] for r in rec: mov.append(movies[movies['title']==r]['title'].values[0]) gen.append(movies[movies['title']==r]['genres'].values[0]) df = pd.DataFrame() df['Movies'] = mov df["Genres"] = gen display(df) ###Output _____no_output_____ ###Markdown The recommendation system works well ###Code genre_recommend('Stalker (1979)',10) ###Output _____no_output_____ ###Markdown Again, the recommendations make sense. 4.2 Item Based Collaborative Filtering *References "Prototyping a Recommender System Step by Step Part 1: KNN Item-Based Collaborative Filtering" by Kevin Liao, URL: https://towardsdatascience.com/prototyping-a-recommender-system-step-by-step-part-1-knn-item-based-collaborative-filtering-637969614ea:~:text=When%20KNN%20makes%20inference%20about,the%20most%20similar%20movie%20recommendations.* ###Code # We merge the movies and ratings dataframes df_ratings = pd.merge(ratings,movies,on="movieId") df_ratings.head() ###Output _____no_output_____ ###Markdown To fit the K-Nearest Neighbor algorithm, we need a m x n matrix where m is the number of movies and n is the number of users. We can use the pivot_table() command to achieve this. 
We fill any missing values with 0.The matrix thus formed will be a very sparse matrix. We don't want to fit the KNN model on a matrix with mostly just zero values. So, for more efficient calculation and less memory footprint, we need to transform the values of the dataframe into a scipy sparse matrix. ###Code from scipy.sparse import csr_matrix # pivot ratings into movie features movie_features_df = df_ratings.pivot_table(index='title',columns='userId',values='rating').fillna(0) movie_features_df # convert dataframe of movie features to scipy sparse matrix mat_features = csr_matrix(movie_features_df.values) movie_features_df.head() # storing the movie_titles from the pivot table in a variable test which can help us getting the movie index for getting # the recommendations. test = movie_features_df.index test ###Output _____no_output_____ ###Markdown The matrix has too many features, and if this is fit directly to the KNN Algorithm, then it will suffer from Curse of Dimensionality. This is because, by default, KNN uses **Euclidean Distance** to measure the distance between points. With so many features, the resulting vactors corresponding to movies would almost be equidistant to the target movie's vector, which is unhelpful for us. So instead of using Euclidean Distance, we use **Cosine Similarity** for the search of the nearest neighbors. ###Code from sklearn.neighbors import NearestNeighbors model_knn = NearestNeighbors(metric = 'cosine', algorithm = 'brute', n_jobs=-1) model_knn.fit(mat_features) unique_index = pd.Index(test) j = unique_index.get_loc('Aladdin (1992)') print(j) # We then use the nearest neighbours model to find the 10 neighbors for the movie title. # These 10 neighbors are the recommendations. distances, indices = model_knn.kneighbors(movie_features_df.iloc[j,:].values.reshape(1, -1), n_neighbors = 11) for i in range(0, len(distances.flatten())): if i == 0: print('Recommendations for {0}:\n'.format(movie_features_df.index[j])) else: print('{0}: {1}, with distance of {2}:'.format(i, movie_features_df.index[indices.flatten()[i]], distances.flatten()[i])) ###Output Recommendations for Aladdin (1992): 1: Beauty and the Beast (1991), with distance of 0.2529439728150389: 2: Lion King, The (1994), with distance of 0.28209064327932476: 3: Jurassic Park (1993), with distance of 0.3865152329768784: 4: True Lies (1994), with distance of 0.4000935259988143: 5: Batman (1989), with distance of 0.4032788709453009: 6: Ace Ventura: Pet Detective (1994), with distance of 0.4161857691893087: 7: Mrs. Doubtfire (1993), with distance of 0.42457691053382474: 8: Die Hard: With a Vengeance (1995), with distance of 0.4315038141425057: 9: Batman Forever (1995), with distance of 0.4336164363530862: 10: Apollo 13 (1995), with distance of 0.4338500822834891: ###Markdown We can see that the recomendations are very different to what we got when we used Content Based Filtering. However, intuitively, the recommendations are still relevant and good. 
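Before querying another title, the lookup-and-print steps above can be wrapped into a small helper (a sketch added here for convenience; `knn_recommend` is a name introduced in this cell and is not part of the original analysis).
###Code
def knn_recommend(title, n_neighbors=10):
    # Look up the row index of the title in the pivot table and query the fitted KNN model
    idx = unique_index.get_loc(title)
    distances, indices = model_knn.kneighbors(
        movie_features_df.iloc[idx, :].values.reshape(1, -1), n_neighbors=n_neighbors + 1)
    print(f'Recommendations for {title}:')
    for rank in range(1, len(distances.flatten())):
        print(f'{rank}: {movie_features_df.index[indices.flatten()[rank]]} '
              f'(distance {distances.flatten()[rank]:.3f})')
###Output
_____no_output_____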
###Code j = unique_index.get_loc('Stalker (1979)') print(j) distances, indices = model_knn.kneighbors(movie_features_df.iloc[j,:].values.reshape(1, -1), n_neighbors = 11) for i in range(0, len(distances.flatten())): if i == 0: print('Recommendations for {0}:\n'.format(movie_features_df.index[j])) else: print('{0}: {1}, with distance of {2}:'.format(i, movie_features_df.index[indices.flatten()[i]], distances.flatten()[i])) ###Output Recommendations for Stalker (1979): 1: Bob le Flambeur (1955), with distance of 0.3399961522566557: 2: Cercle Rouge, Le (Red Circle, The) (1970), with distance of 0.3853309121263231: 3: Samouraï, Le (Godson, The) (1967), with distance of 0.4338569483962659: 4: That Obscure Object of Desire (Cet obscur objet du désir) (1977), with distance of 0.45125281016524865: 5: Ghost in the Shell: Stand Alone Complex - The Laughing Man (2005), with distance of 0.4705853560298272: 6: Pierrot le fou (1965), with distance of 0.48427583865920343: 7: Serbian Film, A (Srpski film) (2010), with distance of 0.48773100932499924: 8: Leaves of Grass (2009), with distance of 0.4965441452114183: 9: Ghost in the Shell 2.0 (2008), with distance of 0.5011025371741235: 10: Outlander (2008), with distance of 0.5090215278982438: ###Markdown Again, the recommendations are different to what we got in Content Based Filtering. But the recommendations make sense. 4.2 User Based Collaborative Filtering*References - https://medium.com/mlearning-ai/building-movie-recommendation-system-with-surprise-and-python-e905de755c61* ###Code # Creating the data in the form required for the Surprise library. np.random.seed(100) reader = Reader(rating_scale=(0, 5)) data = Dataset.load_from_df(ratings[['userId', 'movieId', 'rating']], reader) np.random.seed(100) # User-User collaborative Filtering benchmark = [] sim_options = { "name": "cosine", "user_based": True, # Compute similarities between users } # Iterate over all algorithms for algorithm in [KNNBasic(sim_options=sim_options,verbose = False), KNNWithMeans(sim_options=sim_options,verbose = False), KNNWithZScore(sim_options=sim_options,verbose = False)]: # Perform cross validation results = cross_validate(algorithm, data, measures=['RMSE'], cv=10, verbose=False) # Get results & append algorithm name tmp = pd.DataFrame.from_dict(results).mean(axis=0) tmp = tmp.append(pd.Series([str(algorithm).split(' ')[0].split('.')[-1]], index=['Algorithm'])) benchmark.append(tmp) pd.DataFrame(benchmark).set_index('Algorithm').sort_values('test_rmse') ###Output _____no_output_____ ###Markdown KNNZScore algorithm gives us the least rMSE value. ###Code # Performing Grid Search CV with KNNZScore method to get values of the parameters. np.random.seed(100) sim_options_grid = { "name": ["msd", "cosine", "pearson"], "min_support": [3, 4, 5,6,7], "user_based": [True], } param_grid = {"k" : range(20,100,10),"sim_options": sim_options_grid} gs = GridSearchCV(KNNWithZScore, param_grid, measures=["rmse", "mae"], cv=5,joblib_verbose = 0,n_jobs = -1) gs.fit(data) print(gs.best_score["rmse"]) print(gs.best_params["rmse"]) # Performing train-test split and fitting on train and testing on test data to get rMSE score. 
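# (Note added for clarity) The hyperparameters hard-coded below -- k=20, Pearson
# similarity, min_support=7 -- are taken to be the best RMSE parameters reported by
# gs.best_params["rmse"] above; re-run the grid search to confirm them on new data.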
np.random.seed(100) sim_options = { "name": "pearson", "user_based": True, "min_support" : 7 } trainset, testset = train_test_split(data, test_size=0.25) algo = KNNWithZScore(k = 20, sim_options=sim_options,verbose = False) predictions = algo.fit(trainset).test(testset) accuracy.rmse(predictions) ###Output RMSE: 0.9004 ###Markdown Getting Recommendations ###Code # User ID 10 unique_ids = movies['movieId'].unique() # List of all movies in the dataset iids10 = ratings.loc[ratings['userId']==10, 'movieId'] # Getting ratings of user id 10 movies_to_predict = np.setdiff1d(unique_ids,iids10) # Considering the movies not rated by user id 10 titles = [] for i in range(len(movies_to_predict)): titles.append(movies['title'][movies['movieId'] == movies_to_predict[i]]) # Storing the movie titles of movie ids corresponding to those not rated by the user algo = KNNWithZScore(k = 20, sim_options=sim_options,verbose = False) algo.fit(data.build_full_trainset()) my_recs = [] for iid in movies_to_predict: my_recs.append((iid, algo.predict(uid=10,iid=iid).est)) # Storing the predicted ratings of user id 10 on those movies not rated before. df = pd.DataFrame(my_recs, columns=['iid', 'predictions']) df['Title'] = titles df.sort_values('predictions', ascending=False).head(10) # User ID 250 unique_ids = movies['movieId'].unique() iids10 = ratings.loc[ratings['userId']==250, 'movieId'] movies_to_predict = np.setdiff1d(unique_ids,iids10) titles = [] for i in range(len(movies_to_predict)): titles.append(movies['title'][movies['movieId'] == movies_to_predict[i]]) algo = KNNWithZScore(k = 20, sim_options=sim_options,verbose = False) algo.fit(data.build_full_trainset()) my_recs = [] for iid in movies_to_predict: my_recs.append((iid, algo.predict(uid=250,iid=iid).est)) df = pd.DataFrame(my_recs, columns=['iid', 'predictions']) df['Title'] = titles df.sort_values('predictions', ascending=False).head(10) ###Output _____no_output_____ ###Markdown 5.1 Matrix Factorization via Singular Value Decomposition*References "Matrix Factorization for Movie Recommendations in Python" by Nick Becker, URL : https://beckernick.github.io/matrix-factorization-recommender/* ###Code R_df = ratings.pivot(index = 'userId', columns ='movieId', values = 'rating').fillna(0) display(R_df.head()) # Converting R to a matrix R = R_df.values # Normalize the data user_ratings_mean = np.mean(R, axis = 1) R_demeaned = R - user_ratings_mean.reshape(-1, 1) ###Output _____no_output_____ ###Markdown Performing Singular Value Decomposition. We choose the value of k as 50, however, we could make our model better by optimizing this value further by training - testing - validation techniques. ###Code from scipy.sparse.linalg import svds U, sigma, Vt = svds(R_demeaned, k = 50) # Converting Sigma to diagonal Matrix sigma = np.diag(sigma) ###Output _____no_output_____ ###Markdown Making predictions ###Code # Taking the product of U, Sigma and transpose(V) all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1) # Converting to DataFrame preds_df = pd.DataFrame(all_user_predicted_ratings, columns = R_df.columns) # Function to make recommendations def recommend_movies(predictions_df, userID, movies_df, original_ratings_df, num_recommendations=5): # Get and sort the user's predictions user_row_number = userID - 1 # UserID starts at 1, not 0 sorted_user_predictions = predictions_df.iloc[user_row_number].sort_values(ascending=False) # Get the user's data and merge in the movie information. 
user_data = original_ratings_df[original_ratings_df.userId == (userID)] user_full = (user_data.merge(movies_df, how = 'left', left_on = 'movieId', right_on = 'movieId'). sort_values(['rating'], ascending=False) ) print('User {0} has already rated {1} movies.'.format(userID, user_full.shape[0])) print( 'Recommending the highest {0} predicted ratings movies not already rated.'.format(num_recommendations)) # Recommend the highest predicted rating movies that the user hasn't seen yet. recommendations = (movies_df[~movies_df['movieId'].isin(user_full['movieId'])]. merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left', left_on = 'movieId', right_on = 'movieId'). rename(columns = {user_row_number: 'Predictions'}). sort_values('Predictions', ascending = False). iloc[:num_recommendations, :-1] ) return user_full, recommendations already_rated, predictions = recommend_movies(preds_df, 10, movies, ratings, 10) already_rated.head(10) predictions ###Output _____no_output_____ ###Markdown The recommendations make sense intuitively 5.3 Matrix Factorization Using Bayesian Personalized Ranking*References - https://medium.com/@rohansharma4050_32736/the-unique-movie-recommendation-system-using-lightfm-library-52f31506cac5* ###Code # Using the movies dataset stored in LightFM library. movielens = fetch_movielens() train = movielens['train'] test = movielens['test'] # Fitting LightFM model. model = LightFM(no_components = 15,learning_rate=0.05, loss='bpr') model.fit(train, epochs=10) # Calculating the auc score and precision at K score for both training and testing data. train_precision = precision_at_k(model, train, k=10).mean() test_precision = precision_at_k(model, test, k=10).mean() train_auc = auc_score(model, train).mean() test_auc = auc_score(model, test).mean() print('Precision: train %.2f, test %.2f.' % (train_precision, test_precision)) print('AUC: train %.2f, test %.2f.' % (train_auc, test_auc)) ###Output Precision: train 0.61, test 0.09. AUC: train 0.90, test 0.86. ###Markdown Getting Recommendations ###Code # Getting movie recommendations given the user id. def sample_recommendation(model, data, user_ids): n_users, n_items = train.shape for user_id in user_ids: known_positives = movielens['item_labels'][movielens['train'].tocsr()[user_id].indices] # Set of all movies rated by the user scores = model.predict(user_id, np.arange(n_items)) top_items = movielens['item_labels'][np.argsort(-scores)] # Top predicted ratings for the user recommendations = np.setdiff1d(top_items,known_positives,assume_unique=True) # Storing all movies not rated by user print("User %s" % user_id) print(" Known positives:") for x in known_positives[:3]: print(" %s" % x) print(" Recommended:") for x in recommendations[:10]: print(" %s" % x) sample_recommendation(model, movielens, [10,250]) ###Output _____no_output_____
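###Markdown As an optional follow-up (added here; not part of the original analysis), LightFM also provides the WARP loss, which often improves precision@k over BPR on implicit-feedback data; the same metrics can be used to compare the two.
###Code
# Fit the same model with WARP loss and compare test metrics with the BPR run above
model_warp = LightFM(no_components=15, learning_rate=0.05, loss='warp')
model_warp.fit(train, epochs=10)

print('WARP Precision@10: test %.2f.' % precision_at_k(model_warp, test, k=10).mean())
print('WARP AUC: test %.2f.' % auc_score(model_warp, test).mean())
###Output
_____no_output_____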
code/notebooks/dysk_preprocessing.ipynb
###Markdown Preprocessing Neurophysiology Data [ReTune Dyskinesia Project]This notebook contains a step-by-step overview of the preprocessing workflow for ECoG- and LFP-data within the ReTune-Project work package B04. This step-wise structure is provided to understand, visualize, and adjust the single steps. Besides this notebook, another script provides execution of the preprocessing steps at once. Data is required to converted into the BIDS-standard. 0. Loading packages and functions, defining paths ###Code # Importing Python and external packages import os import sys import importlib import json from abc import ABCMeta, abstractmethod from dataclasses import dataclass, field, fields from collections import namedtuple from typing import Any from itertools import compress from pathlib import Path import pandas as pd import numpy as np import sklearn as sk import scipy import matplotlib.pyplot as plt from scipy import signal import csv #mne import mne_bids import mne # check some package versions for documentation and reproducability print('Python sys', sys.version) print('pandas', pd.__version__) print('numpy', np.__version__) print('mne_bids', mne_bids.__version__) print('mne', mne.__version__) print('sci-py', scipy.__version__) print('sci-kit learn', sk.__version__) # define local storage directories projectpath = '/Users/jeroenhabets/Research/CHARITE/projects/dyskinesia_neurophys' codepath = os.path.join(projectpath, 'code') pynmd_path = os.path.join(codepath, 'py_neuromodulation') rawdatapath = '/Users/jeroenhabets/OneDrive - Charité - Universitätsmedizin Berlin/BIDS_Berlin_ECOG_LFP/rawdata_old' # change working directory to project-code folder os.chdir(codepath) os.getcwd() import lfpecog_preproc.preproc_data_management as dataMng import lfpecog_preproc.preproc_reref as reref import lfpecog_preproc.preproc_artefacts as artefacts import lfpecog_preproc.preproc_filters as fltrs import lfpecog_preproc.preproc_resample as resample # # import from py_neuromodulation after setting directory # # PM the directory of py_neuromodulation has to be added to sys.PATHS # os.chdir(pynmd_path) # print(os.getcwd()) # # run from dyskinesia branch-folder in py_nmd # import dyskinesia.preprocessing as preproc # import dyskinesia.preproc_reref as reref # import dyskinesia.preproc_artefacts as artefacts # import dyskinesia.preproc_filters as fltrs ###Output /Users/jeroenhabets/Research/CHARITE/projects/dyskinesia_neurophys/code/py_neuromodulation ###Markdown 1. Data selection, defining SettingsRelevant info on BIDS-structure and the handling data-classes- Note that the resulting Data-Class Objects below do not contain actual data yet (!)- Create RawBrainVision data-objects: load data with rawRun1.ecog.load_data() (incl. internal mne-functionality)- Create np.array's: load data with rawRun1.ecog.get_data(), use return_times=True to return two tuples (data, times); (used in preprocessing.py functions)BIDS-RAW Data Structure Info:- Grouped MNE BIDS Raw Object consists all channels within the group,e.g. lfp_left, lfp_left, ecog, acc. Each channel (rawRun1.ecog[0])is a tuple with the first object a ndarray of shape 1, N_samples.- Calling rawRun1.ecog[0][0] gives the ndarray containing only data-points.- Calling rawRun1.ecog[1] gives the ndarray containing the time stamps. 1A. Define Preprocess SettingsCreate data-structures (named-tuples) which contain the defined settings for the preprocessing. 
These settings contain the parameters of the preprocessing analyses:- win_len (float): Length of single windows in which the data is binned (Default: 1 sec)- artfct_sd_tresh (float): how many std-dev's are used as artefact removal threshold- bandpass_f (int, int): lower and higher borders of freq bandpass filter- transBW (int): transition bandwidth for notch-filter (is full width, 50% above and 50% below the chosen frequencies to filter)- notchW (int): Notch width of notch filter- Fs_orig (int): original sampling frequency (Hz)- Fs_resample (int): sampling frequency (Hz) to which data is resampled- settings_version (str): Abbreviation/codename for this specific version of settings (do not use spaces but rather underscores), e.g. 'v0.0_Jan22' ###Code ### Create Settings via JSON-files importlib.reload(dataMng) # Load JSON-files with settings and runinfo json_path = os.path.join(projectpath, 'data/preprocess/preprocess_jsons') runsfile = os.path.join(json_path, 'runinfos_11FEB22a.json') # runinfos_008_medOn2_all settfile = os.path.join(json_path, f'settings_v2.1_Feb22.json') with open(os.path.join(json_path, settfile)) as f: json_settings = json.load(f, ) # dict of group-settings with open(os.path.join(json_path, runsfile)) as f: runs = json.load(f, ) # list of runinfo-dicts settings, groups = dataMng.create_settings_list(json_settings) ###Output _____no_output_____ ###Markdown 1B. Define Patient and Recording Settings- First DataClass (RunInfo) gets Patient-Run specific input variables to define which run/data-file should be used - sub (str): patient number - ses (str): session code (new version e.g. 'LfpEcogMedOn01', old version e.g. 'EphysMedOn01') - task (str): performed task, e.g. 'Rest' - acq (str): acquisition, aka state of recording, usually indicates Stimulation status, but also contains time after Dopamine-intake in case of Dyskinesia-Protocol, e.g. 'StimOn01', or 'StimOn02Dopa30' - run (str): run number, e.g. '01' - raw_path (str): directory where the raw-BIDS-data is stored (Poly5-files etc), needs to direct to '/.../BIDS_Berlin_ECOG_LFP/rawdata' - project_path (str): directory where created files and figures are saved; should be main-project-directory, containing sub-folders 'data', 'code', 'figures' - preproc_sett (str): code of preprocessing settings, is extracted from PreprocSettings DataClass- Second DataClass (RunRawData) creates the MNE-objects which are used in the following function to load the data ###Code # DEFINE PTATIENT-RUN SETTINGS sub = '008' ses = 'EphysMedOn02' # 'EphysMedOn02' task = 'Rest' acq = 'StimOffDopa10' # 'StimOffLD00' run = '1' rawpath = rawdatapath # ext_datapath # create specific patient-run BIDS-Object for further pre-processing importlib.reload(dataMng) runInfo0 = dataMng.RunInfo( sub=sub, ses=ses, task=task, acq=acq, run=run, raw_path=rawpath, # used to import the source-bids-data preproc_sett=getattr(settings, groups[0]).settings_version, project_path=projectpath, # used to write the created figures and processed data ) rawRun = dataMng.RunRawData(bidspath=runInfo0.bidspath) ###Output ------------ BIDS DATA INFO ------------ The raw-bids-object contains 49 channels with 1207293 datapoints and sample freq 4000.0 Hz Bad channels are: ['LFP_R_2_STN_BS', 'LFP_R_3_STN_BS', 'LFP_R_5_STN_BS', 'LFP_R_6_STN_BS', 'LFP_R_7_STN_BS', 'LFP_L_16_STN_BS', 'EEG_Cz_TM', 'EEG_Fz_TM'] BIDS contains: 6 ECOG channels, 26 DBS channels: (15 left, 11 right), 2 EMG channels, 1 ECG channel(s), 6 Accelerometry (misc) channels. ###Markdown 2. 
Automated Artefact Removal (incl. Visualization)!!!! To adjust to full recording (2d + 3d optinoality) ###Code # Actual Loading of the Data from BIDS-files # data_raw is filled with loaded mne-bids data per group data_raw = {} for field in rawRun.__dataclass_fields__: print(field) # loops over variables within the data class if str(field)[:4] == 'lfp_': data_raw[str(field)] = getattr(rawRun, field).load_data() elif str(field)[:4] == 'ecog': data_raw[str(field)] = getattr(rawRun, field).load_data() ch_names = {} for group in groups: ch_names[group] = data_raw[group].info['ch_names'] # Artefact Removal importlib.reload(artefacts) data_clean = {} ch_nms_clean = {} save_dir = runInfo0.fig_path saveNot = None for group in groups: data_clean[group], ch_nms_clean[group] = artefacts.artefact_selection( data_bids=data_raw[group], # raw BIDS group to process group=group, win_len=getattr(settings, group).win_len, n_stds_cut=getattr(settings, group).artfct_sd_tresh, # number of std-dev from mean that is used as cut-off # to save: give directory, to show inline: give 'show', w/o fig: None save=saveNot, # if None: no figure saved RunInfo=runInfo0, ) # Quality check: delete groups without valid channels to_del = [] for group in data_clean.keys(): if data_clean[group].shape[1] <= 1: to_del.append(group) for group in to_del: del(data_clean[group]) del(ch_nms_clean[group]) groups.remove(group) print(f'Group(s) removed: {to_del}') ###Output Group(s) removed: [] ###Markdown 3. Bandpass Filtering ###Code importlib.reload(fltrs) data_bp = {} for group in groups: data_bp[group] = fltrs.bp_filter( data=data_clean[group], sfreq=getattr(settings, group).Fs_orig, l_freq=getattr(settings, group).bandpass_f[0], h_freq=getattr(settings, group).bandpass_f[1], method='iir', # faster than fir ) ###Output _____no_output_____ ###Markdown 4. Notch-filtering for Powerline Noise ###Code # notch filtering in BLOCKS importlib.reload(fltrs) save_dir = runInfo0.fig_path saveNOT = None data_nf = {} for group in data_bp.keys(): print(f'Start Notch-Filter GROUP: {group}') data_nf[group] = fltrs.notch_filter( data=data_bp[group], ch_names=ch_nms_clean[group], group=group, transBW=getattr(settings, group).transBW, notchW=getattr(settings, group).notchW, method='fir', #iir (8th or. Butterwidth) takes too long save=saveNOT, # if None: no figures made and saved verbose=False, RunInfo=runInfo0, ) ###Output Start Notch-Filter GROUP: lfp_left Start Notch-Filter GROUP: lfp_right Start Notch-Filter GROUP: ecog ###Markdown 5. ResamplingSince freq's of interest are up to +/- 100 - 120 Hz, according to the Nyquist-theorem the max sample freq does not need to be more than double (~ 250 Hz).Check differences with resampling to 400 or 800 Hz later. Or working with wider windows.- Swann '16: 800 Hz- Heger/ Herff: 600 Hz (https://www.csl.uni-bremen.de/cms/images/documents/publications/IS2015_brain2text.pdf) ###Code importlib.reload(resample) # resampling one run at a time data_rs = {} # dict to store resampled data for group in groups: data_rs[group] = resample.resample( data=data_nf[group], Fs_orig=getattr(settings, 'ecog').Fs_orig, Fs_new = getattr(settings, 'ecog').Fs_resample, ) ###Output _____no_output_____ ###Markdown 6. 
RereferencingCommon Practice LFP Re-referencing: difference between two nieghbouring contacts- For segmented Leads: average every levelRelevant ECOG-rereferencing literature used: - Common Average Rereferencing (Liu ea, J Neural Eng 2015 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5485665/)- ECOG is local sign with spread +/- 3mm (Dubey, J Neurosc 2019): https://www.jneurosci.org/content/39/22/4299 - READ ON - DATA ANALYSIS: Relevance of data-driven spatial filtering for invasive EEG. For gamma: CAR is probably sufficient. For alpha-beta: ... Hihg inter-subject variability in ECOG. (Shaworonko & Voytek, PLOS Comp Biol 2021: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009298)- Submilimeter (micro)ECOG: http://iebl.ucsd.edu/sites/iebl.ucsd.edu/files/2018-06/Sub-millimeter%20ECoG%20pitch%20in%20human%20enables%20higher%20%EF%AC%81delity%20cognitiveneural%20state%20estimation.pdfCheck rereferencing methods:- de Cheveigne/Arzounian NeuroImage 2018- pre-prints Merk 2021 and Petersen 2021 (AG Kühn / AG Neumann)- pre-print epilepsy ecog movement (MUMC)P.M. Check further in to Spatial Filtering:- Spatial filter estimation via spatio-spectral decomposition: ............ TO READ (Nikulin & Curio, NeuroImage 2011, https://www.sciencedirect.com/science/article/pii/S1053811911000930?via%3Dihub)- Spatio-Spectral Decomposition: proposed dimensionality-reduction instead of PCA (Haufe, ..., Nikulin, https://www.sciencedirect.com/science/article/pii/S1053811914005503?via%3Dihub)- Also check: SPoC (Castano et al NeuroImage Clin 2020) ###Code importlib.reload(reref) lfp_reref='segments' data_rrf = {} names = {} # deleting possible existing report-file if 'reref_report.txt' in os.listdir( runInfo0.data_path): with open(os.path.join(runInfo0.data_path, 'reref_report.txt'), 'w'): pass for group in groups: data_rrf[group], names[group] = reref.rereferencing( data=data_rs[group], group=group, runInfo=runInfo0, lfp_reref=lfp_reref, chs_clean=ch_nms_clean[group], ) ###Output LFP_L_1_STN_BS LFP_L_2_STN_BS LFP_L_3_STN_BS LFP_L_4_STN_BS LFP_L_5_STN_BS LFP_L_6_STN_BS LFP_L_7_STN_BS LFP_L_8_STN_BS LFP_L_9_STN_BS LFP_L_11_STN_BS LFP_L_12_STN_BS LFP_L_15_STN_BS LFP_L_16_STN_BS {0: ['LFP_L_1_', 'LFP_L_2_', 'LFP_L_3_'], 1: ['LFP_L_4_', 'LFP_L_5_', 'LFP_L_6_'], 2: ['LFP_L_7_', 'LFP_L_8_', 'LFP_L_9_'], 3: ['LFP_L_11_', 'LFP_L_12_'], 4: ['LFP_L_15_'], 5: ['LFP_L_16_']} {0: [1, 2, 3], 1: [4, 5, 6], 2: [7, 8, 9], 3: [10, 11], 4: [12], 5: [13]} Rereferencing BS Vercise Cartesia X (L) against other contacts of same level Row REFS [2, 3], SHAPE (302, 14, 800) Row REFS [1, 3], SHAPE (302, 14, 800) Row REFS [1, 2], SHAPE (302, 14, 800) Row REFS [5, 6], SHAPE (302, 14, 800) Row REFS [4, 6], SHAPE (302, 14, 800) Row REFS [4, 5], SHAPE (302, 14, 800) Row REFS [8, 9], SHAPE (302, 14, 800) Row REFS [7, 9], SHAPE (302, 14, 800) Row REFS [7, 8], SHAPE (302, 14, 800) Row REFS [11], SHAPE (302, 14, 800) Row REFS [10], SHAPE (302, 14, 800) TAKE LEVEL HIGHER ref rows [13] (302, 14, 800) TAKE LEVEL LOWER ref rows [12] (302, 14, 800) LFP_R_1_STN_BS LFP_R_2_STN_BS LFP_R_3_STN_BS LFP_R_4_STN_BS LFP_R_5_STN_BS LFP_R_6_STN_BS LFP_R_7_STN_BS LFP_R_8_STN_BS LFP_R_9_STN_BS LFP_R_10_STN_BS LFP_R_11_STN_BS LFP_R_12_STN_BS LFP_R_13_STN_BS LFP_R_14_STN_BS LFP_R_15_STN_BS {0: ['LFP_R_1_', 'LFP_R_2_', 'LFP_R_3_'], 1: ['LFP_R_4_', 'LFP_R_5_', 'LFP_R_6_'], 2: ['LFP_R_7_', 'LFP_R_8_', 'LFP_R_9_'], 3: ['LFP_R_10_', 'LFP_R_11_', 'LFP_R_12_'], 4: ['LFP_R_13_', 'LFP_R_14_', 'LFP_R_15_'], 5: []} {0: [1, 2, 3], 1: [4, 5, 6], 2: [7, 8, 9], 
3: [10, 11, 12], 4: [13, 14, 15], 5: []} Rereferencing BS Vercise Cartesia X (R) against other contacts of same level Row REFS [2, 3], SHAPE (302, 16, 800) Row REFS [1, 3], SHAPE (302, 16, 800) Row REFS [1, 2], SHAPE (302, 16, 800) Row REFS [5, 6], SHAPE (302, 16, 800) Row REFS [4, 6], SHAPE (302, 16, 800) Row REFS [4, 5], SHAPE (302, 16, 800) Row REFS [8, 9], SHAPE (302, 16, 800) Row REFS [7, 9], SHAPE (302, 16, 800) Row REFS [7, 8], SHAPE (302, 16, 800) Row REFS [11, 12], SHAPE (302, 16, 800) Row REFS [10, 12], SHAPE (302, 16, 800) Row REFS [10, 11], SHAPE (302, 16, 800) Row REFS [14, 15], SHAPE (302, 16, 800) Row REFS [13, 15], SHAPE (302, 16, 800) Row REFS [13, 14], SHAPE (302, 16, 800) ECoG Rereferncing: Common Average ###Markdown 7. Saving Preprocessed Signals ###Code importlib.reload(dataMng) for group in groups: dataMng.save_arrays( data=data_rrf[group], names=names[group], group=group, runInfo=runInfo0, lfp_reref=lfp_reref, ) ###Output _____no_output_____
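###Markdown
 For readers unfamiliar with the two re-referencing schemes described in section 6, the cell below is a small standalone illustration on a toy array shaped like the windowed data used in this notebook (windows x channels x samples). It is only a sketch of the textbook versions of common-average and neighbouring-contact re-referencing, not the project's `reref.rereferencing` implementation, which additionally handles segmented leads and channel bookkeeping.
###Code
# Illustrative sketch only (not preproc_reref.rereferencing)
import numpy as np

rng = np.random.default_rng(42)
toy = rng.normal(size=(4, 6, 800))  # 4 windows, 6 channels, 800 samples

# ECoG: common average reference -> subtract the across-channel mean per sample
car = toy - toy.mean(axis=1, keepdims=True)

# LFP (common practice): difference between two neighbouring contacts
bipolar = toy[:, 1:, :] - toy[:, :-1, :]

print('CAR shape:', car.shape)          # (4, 6, 800)
print('bipolar shape:', bipolar.shape)  # (4, 5, 800)
###Output
_____no_output_____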
roc_curves.ipynb
###Markdown Plot ROC curves Maintain by Xiaotong.Chen ###Code import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torch.optim.lr_scheduler as lr_scheduler from torch.optim.lr_scheduler import _LRScheduler import torch.utils.data as data import torchvision.transforms as transforms import torchvision.datasets as datasets import torchvision.models as models from torch.utils.data import Dataset, DataLoader from torchvision import utils from sklearn import decomposition from sklearn import manifold from sklearn import metrics from sklearn import model_selection from sklearn.metrics import confusion_matrix import matplotlib.pyplot as plt %matplotlib inline import copy from collections import namedtuple import os, re, time import random import shutil import math import pandas as pd import numpy as np from sklearn import svm # set seed to make sure the results are reproducible SEED = 99 random.seed(SEED) np.random.seed(SEED) torch.manual_seed(SEED) torch.cuda.manual_seed(SEED) torch.backends.cudnn.deterministic = True # change directory of data CV_ID = 2 CV_ID = str(CV_ID) datapath = os.path.join("D:\\0jiazhuangxian2020\\jiazhuangxian",".data","hcb","cv" + CV_ID) load_name = 'basic-model-cv'+CV_ID+'.pt' train_dir = os.path.join(datapath, 'train') val_dir = os.path.join(datapath, 'val') test_dir = os.path.join(datapath, 'test') pretrained_size = (224, 224) pretrained_means = [0.485, 0.456, 0.406] pretrained_stds= [0.229, 0.224, 0.225] test_transforms = transforms.Compose([ transforms.Resize(pretrained_size), transforms.ToTensor(), transforms.Normalize(mean = pretrained_means, std = pretrained_stds) ]) # inorder to load img with it's label class MyImageFolder(datasets.ImageFolder): def __getitem__(self, index): path, _ = self.imgs[index] #img path, label return super(MyImageFolder, self).__getitem__(index), path # return image path BATCH_SIZE = 1000 #4 train_data = MyImageFolder(root = train_dir, transform = test_transforms) test_data = MyImageFolder(root = test_dir, transform = test_transforms) valid_data = MyImageFolder(root = val_dir, transform = test_transforms) train_iterator = data.DataLoader(train_data, shuffle = True, drop_last = False, batch_size = BATCH_SIZE) valid_iterator = data.DataLoader(valid_data, drop_last = False, batch_size = BATCH_SIZE) test_iterator = data.DataLoader(test_data, drop_last = False, batch_size = BATCH_SIZE) # read csv data path_to_text = '.data\\text.csv' text_table = pd.read_csv(path_to_text, header=0, index_col=0) Dimension_Text = len(text_table.columns) def imgid2index(id): return 1000*int(id[0])+int(id[2:]) def imgid2textinfo(imgid): # convert img path into text info in .csv # input: ['.data\\train\\zy\\1_152.bmp','2_8.bmp'] # output: tensor([[ 1, 4, 7, 10], # [ 3, 6, 9, 12]]) return torch.tensor(text_table.loc[[imgid2index(re.search('\d_\d+',i.split('\\')[-1]).group()) for i in imgid],:].values, dtype=torch.float32, device=device) basic_model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet50') # change output dimension to what we need IN_FEATURES = basic_model.fc.in_features OUTPUT_DIM = 2 basic_model.fc = nn.Linear(IN_FEATURES, OUTPUT_DIM) fc1=nn.Linear(IN_FEATURES, 32) fc2=nn.Linear(32,OUTPUT_DIM) basic_model.fc = nn.Sequential(fc1, fc2) basic_model.load_state_dict(torch.load(load_name)) basic_model.fc[1] = nn.Identity() device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # text_table.head() # load train data num_train = len(train_data) train_data_combine_model = 
data.DataLoader(train_data, shuffle = True, batch_size = num_train) xy, train_id = next(iter(train_data_combine_model)) x, y = xy img_output = basic_model(x) train_x = torch.cat((img_output, imgid2textinfo(train_id).cpu()), dim=-1).detach().numpy() train_y = y # load val data and combine with train num_val = len(valid_data) val_data_combine_model = data.DataLoader(valid_data, shuffle = True, batch_size = num_val) xy, val_id = next(iter(val_data_combine_model)) x, y = xy img_output = basic_model(x) val_x = torch.cat((img_output, imgid2textinfo(val_id).cpu()), dim=-1).detach().numpy() val_y = y train_x = np.concatenate((train_x, val_x), axis=0) train_y = np.concatenate((train_y, val_y), axis=0) # load test data num_test = len(test_data) test_data_combine_model = data.DataLoader(test_data, batch_size = num_test) xy, test_id = next(iter(test_data_combine_model)) x, y = xy img_output = basic_model(x) test_x = torch.cat((img_output, imgid2textinfo(test_id).cpu()), dim=-1).detach().numpy() test_y = y.numpy() tmpt=[imgid2index(re.search('\d_\d+',i.split('\\')[-1]).group()) for i in train_id] tmpv=[imgid2index(re.search('\d_\d+',i.split('\\')[-1]).group()) for i in val_id] train_ID=tmpt+tmpv test_ID=[imgid2index(re.search('\d_\d+',i.split('\\')[-1]).group()) for i in test_id] x_train=text_table.loc[train_ID,:].values y_train=[2-i//1000 for i in train_ID] # y_train=[i//1000-1 for i in train_ID] # print(x_train,y_train) x_test=text_table.loc[test_ID,:].values y_test=[2-i//1000 for i in test_ID] # y_test=[i//1000-1 for i in test_ID] # print(x_test,y_test) all_sen={} all_spe={} ###Output _____no_output_____ ###Markdown SVM TEXT+IMG ###Code #search best params for SVM svm_para = svm.SVC() param_grid = {'C': range(5,30), 'gamma': [1e-2, 7e-3, 5e-3, 3e-3, 1e-3, 7e-4, 5e-4, 3e-4, 1e-4, 7e-5], 'kernel': ['rbf', 'linear', 'poly', 'sigmoid']} grid_search = model_selection.GridSearchCV(svm_para, param_grid) grid_search.fit(train_x, train_y) best_parameters = grid_search.best_estimator_.get_params() for para, val in list(best_parameters.items()): print(para, val) def roc_results(y_true, y_pred, path): r""" Return: sensitivity, specificity, ppv, npv, accuracy, AUC, threshold this function is used to calculate AUC,sensitivity,specificity,ppv,npv,accuracy baesed on each threshold and plot the ROC curve. It will also choose the best threshold, which means that this threshold lead to the highest accuracy. 
input: true, pred, path """ # calculate the target vector from roc fpr, sen, threshold = metrics.roc_curve(y_true, y_pred) spe = 1-fpr pos_num = np.sum(y_true == 1) neg_num = len(y_true)-pos_num tp = sen*pos_num tn = spe*neg_num fn = pos_num-tp fp = neg_num-tn ppv = tp/(tp+fp+1e-16) npv = tn/(tn+fn+1e-16) acc = (tp+tn)/len(y_true) # the best point data j_statistic=sen+spe-1#(based on J_statistic) ind=np.argmax(j_statistic) # ind = np.max(np.where(acc == np.max(acc)))#(based on accuracy) pred_p = tp[ind]+fp[ind] pred_n = tn[ind]+fn[ind] best_acc = acc[ind] best_sen = sen[ind] best_spe = spe[ind] best_ppv = tp[ind]/pred_p best_npv = tn[ind]/pred_n AUC = metrics.auc(fpr, sen) # plot plt.plot(fpr, sen) plt.plot(np.linspace(0, 1, 10), np.linspace(0, 1, 10)) plt.plot(fpr[ind], sen[ind], 'r.', markersize=9) text = "AUC={:.2f}\nSEN={:.2f}\nSPE={:.2f}\nPPV={:.2f}\nNPV={:.2f}\nACC={:.2f}".format( AUC, sen[ind], spe[ind], best_ppv, best_npv, best_acc) plt.text(0.82, 0.1, text, fontsize=10, style="italic", horizontalalignment="center") plt.xlabel('1-Specificity') plt.ylabel('Sensitivity') plt.title('ROC Curve') path = path+"/"+"cv"+str(CV_ID)+".jpg" plt.savefig(path) plt.show() return sen, spe, ppv, npv, acc, AUC, threshold # combine text info and output of img model with Support Vector Machine rbf_svc = svm.SVC(C=best_parameters['C'], gamma=best_parameters['gamma'], kernel=best_parameters['kernel']) # rbf_svc = svm.SVC(kernel='rbf',C = 18, gamma = 0.001) rbf_svc.fit(train_x, train_y) path='D:/0jiazhuangxian2020/jiazhuangxian/fig/svmcombine/train' svm_pred = rbf_svc.predict(train_x) svm_pred_prob=rbf_svc.decision_function(train_x) sen, spe, ppv, npv, acc, auc, _=roc_results(1-train_y,1-svm_pred_prob,path) svm_pred = rbf_svc.predict(test_x) path='D:/0jiazhuangxian2020/jiazhuangxian/fig/svmcombine/test' sen, spe, ppv, npv, acc, auc, _=roc_results(1-test_y,1-rbf_svc.decision_function(test_x),path) all_sen['svm_c']=sen all_spe['svm_c']=spe ###Output _____no_output_____ ###Markdown SVM TEXT ###Code #search best params for SVM svm_para = svm.SVC() param_grid = {'C': range(5,8), 'gamma': [1e-2, 1e-1,1e-3], 'kernel': ['rbf', 'sigmoid']} grid_search = model_selection.GridSearchCV(svm_para, param_grid) grid_search.fit(x_train, y_train) best_parameters = grid_search.best_estimator_.get_params() for para, val in list(best_parameters.items()): print(para, val) # combine text info and output of img model with Support Vector Machine rbf_svc = svm.SVC(C=best_parameters['C'], gamma=best_parameters['gamma'], kernel=best_parameters['kernel']) # rbf_svc = svm.SVC(kernel='sigmoid',C = 5, gamma = 0.1) rbf_svc.fit(x_train, y_train) path='D:/0jiazhuangxian2020/jiazhuangxian/fig/svmtext/train' svm_pred = rbf_svc.predict(x_train) y_train=np.array(y_train) sen, spe, ppv, npv, acc, auc, _=roc_results(1-np.array(y_train),1-np.array(rbf_svc.decision_function(x_train)),path) svm_pred = rbf_svc.predict(x_test) path='D:/0jiazhuangxian2020/jiazhuangxian/fig/svmtext/test' sen, spe, ppv, npv, acc, auc, _=roc_results(1-np.array(y_test),1-np.array(rbf_svc.decision_function(x_test)),path) all_sen['svm_t']=sen all_spe['svm_t']=spe ###Output _____no_output_____ ###Markdown basic model(only image) ###Code def get_predictions(model, iterator): model.eval() images = [] labels = [] probs = [] top_pred = [] with torch.no_grad(): for (xy, _) in iterator: x, y = xy y_pred = model(x) y_prob = F.softmax(y_pred, dim=-1) _, _top_pred = torch.max(y_pred, 1) images.append(x.cpu()) labels.append(y.cpu()) probs.append(y_prob.cpu()) 
top_pred.append(_top_pred.cpu()) images = torch.cat(images, dim=0) labels = torch.cat(labels, dim=0) probs = torch.cat(probs, dim=0) top_pred = torch.cat(top_pred, dim=0) return images, labels, probs, top_pred basic_model.to("cpu") _, labels, probs, _ =get_predictions(basic_model, test_iterator) pred=probs[:,1] # sen, spe, ppv, npv, acc, auc, _ = roc_results(1-labels.numpy(),pred.numpy(),path) sen, spe, ppv, npv, acc, auc, _ = roc_results(labels.numpy(),pred.numpy(),path) all_sen['img']=sen all_spe['img']=spe ###Output _____no_output_____ ###Markdown all roc ###Code plt.style.use('ggplot') plt.plot(1-all_spe['svm_c'],all_sen['svm_c'],linewidth=1.5) plt.plot(1-all_spe['svm_t'],all_sen['svm_t'],linewidth=1.5) plt.plot(1-all_spe['img'],all_sen['img'],color='goldenrod',linewidth=1.5) plt.plot(np.linspace(0, 1, 10), np.linspace(0, 1, 10),'--',label=None,color='forestgreen',linewidth=1.5) plt.xlim([-0.02, 1.0]) plt.ylim([0.0, 1.02]) plt.xlabel('1-Specificity') plt.ylabel('Sensitivity') plt.title('ROC Curves') plt.legend((r"$Ours$",r"$SVM_C$",r"$DNN_I$"),loc='lower right') # plt.axis('equal') plt.gca().set_aspect('equal', adjustable='box') plt.savefig("three_curve_roc11.svg") plt.show() ###Output _____no_output_____
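###Markdown
 Since the combined figure only shows the curves, it can be handy to also report the area under each stored curve. The sketch below assumes the `all_sen` / `all_spe` dictionaries filled in the sections above, where each entry comes straight from `metrics.roc_curve`.
###Code
# Sketch: AUC per stored curve (all_sen / all_spe from the sections above)
from sklearn import metrics

for key in all_sen:
    # integrate sensitivity over the false positive rate (1 - specificity)
    auc_val = metrics.auc(1 - all_spe[key], all_sen[key])
    print("{}: AUC = {:.3f}".format(key, auc_val))
###Output
_____no_output_____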
calaccess-exploration/campaign_finance_filings.ipynb
###Markdown
 Campaign Finance Disclosure Filings Setup
###Code
%load_ext sql
from django.conf import settings
connection_string = 'postgresql+psycopg2://{USER}:{PASSWORD}@{HOST}:{PORT}/{NAME}'.format(
    **settings.DATABASES['default']
)
%sql $connection_string
###Output
_____no_output_____
###Markdown
 Cover Sheets Every campaign finance disclosure filing has a cover sheet, and the top-level information from these cover sheets ends up in the `CVR_CAMPAIGN_DISCLOSURE_CD` table. This is an important table because it links the dollar amount totals from the `SMRY_CD` table to name fields and a unique identifier of the filer. The [forms](http://calaccess.californiacivicdata.org/documentation/calaccess-files/cvr-campaign-disclosure-cd/forms) of the filings that end up in `CVR_CAMPAIGN_DISCLOSURE_CD` are all the ones that include financial disclosures of campaigns, as opposed to statements of intention and organization. Do the `FORM_TYPE` values vary between amendments to any campaign filing? No, which is good because we can easily sort records linked to the `FILING_ID` and `AMEND_ID` combinations.
###Code
%%sql
SELECT cvr."FILING_ID", COUNT(DISTINCT cvr."FORM_TYPE")
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
GROUP BY 1
HAVING COUNT(DISTINCT cvr."FORM_TYPE") > 1;
###Output
0 rows affected.
###Markdown
 Do the `FILER_ID` values vary between amendments to any campaign filing? No, which is good because we should then be able to sort out who filed which filings.
###Code
%%sql
SELECT "FILING_ID", COUNT(DISTINCT "FILER_ID")
FROM "CVR_CAMPAIGN_DISCLOSURE_CD"
GROUP BY 1
HAVING COUNT(DISTINCT "FILER_ID") > 1
ORDER BY COUNT(DISTINCT "FILER_ID") DESC;
###Output
0 rows affected.
###Markdown
 Joining to `FILER_FILINGS_CD` The `FILER_FILINGS_CD` table has additional information about each filing, including the filing period in which the filing falls. There are also a lot of fields that seem to be redundant with fields on `CVR_CAMPAIGN_DISCLOSURE_CD`, specifically:
* `CVR_CAMPAIGN_DISCLOSURE_CD.FORM_TYPE` and `FILER_FILINGS_CD.FORM_ID`
* `CVR_CAMPAIGN_DISCLOSURE_CD.FILER_ID` and `FILER_FILINGS_CD.FILER_ID`
* `CVR_CAMPAIGN_DISCLOSURE_CD.STMT_TYPE` and `FILER_FILINGS_CD.STMNT_TYPE`
* `CVR_CAMPAIGN_DISCLOSURE_CD.RPT_DATE` and `FILER_FILINGS_CD.FILING_DATE`
* `CVR_CAMPAIGN_DISCLOSURE_CD.FROM_START` and `FILER_FILINGS_CD.RPT_START`
* `CVR_CAMPAIGN_DISCLOSURE_CD.THRU_DATE` and `FILER_FILINGS_CD.RPT_END`
* `CVR_CAMPAIGN_DISCLOSURE_CD.RPT_DATE` and `FILER_FILINGS_CD.RPT_DATE`

Might be worth checking if values in each pair of fields ever conflict. Does every `CVR_CAMPAIGN_DISCLOSURE_CD` record have a `FILER_FILINGS_CD` record? Almost. And among the records left on the `FILER_FILINGS_CD` table, many of the fields are blank or have values that suggest they are only for testing purposes.
###Code
%%sql
SELECT
    cvr."FORM_TYPE",
    cvr."FILING_ID",
    cvr."AMEND_ID",
    cvr."FILER_ID",
    cvr."FILER_NAML",
    cvr."RPT_DATE",
    *
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
LEFT JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
WHERE ff."FILING_ID" IS NULL or ff."FILING_SEQUENCE" IS NULL;
###Output
37 rows affected.
###Markdown
 Do any records have conflicting `CVR_CAMPAIGN_DISCLOSURE_CD.FORM_TYPE` and `FILER_FILINGS_CD.FORM_ID` values? They do, but not a lot. And there is a pretty clear pattern, with `CVR_CAMPAIGN_DISCLOSURE_CD.FORM_TYPE` saying `F497` and `FILER_FILINGS_CD.FORM_ID` saying `F496`. 
###Code %%sql SELECT cvr."FORM_TYPE", ff."FORM_ID", cvr."FILING_ID", cvr."AMEND_ID", cvr."RPT_DATE", * FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr JOIN "FILER_FILINGS_CD" ff ON cvr."FILING_ID" = ff."FILING_ID" AND cvr."AMEND_ID" = ff."FILING_SEQUENCE" WHERE UPPER(cvr."FORM_TYPE") <> UPPER(ff."FORM_ID") ORDER BY cvr."RPT_DATE" DESC, cvr."FILING_ID" DESC, cvr."AMEND_ID" DESC; ###Output 113 rows affected. ###Markdown Do any records have conflicting `CVR_CAMPAIGN_DISCLOSURE_CD.FILER_ID` and `FILER_FILINGS_CD.FILER_ID` values? Yes, this happens about a third of the time. ###Code %%sql SELECT COUNT(*)::float / ( SELECT COUNT(*) FROM "CVR_CAMPAIGN_DISCLOSURE_CD" CVR JOIN "FILER_FILINGS_CD" FF ON CVR."FILING_ID" = FF."FILING_ID" AND CVR."AMEND_ID" = FF."FILING_SEQUENCE" ) as pct_conflict FROM "CVR_CAMPAIGN_DISCLOSURE_CD" CVR JOIN "FILER_FILINGS_CD" FF ON CVR."FILING_ID" = FF."FILING_ID" AND CVR."AMEND_ID" = FF."FILING_SEQUENCE" WHERE CVR."FILER_ID" <> FF."FILER_ID"::VARCHAR; ###Output 1 rows affected. ###Markdown But one thing to note is that these `FILER_ID` fields are too different data types: char on `CVR_CAMPAIGN_DISCLOSURE_CD` and int on `FILER_FILINGS_CD`. We previously discovered that the `FILER_XREF_CD` table is a translator from seemingly old string filer_ids to numeric filer_ids. Does every `CVR_CAMPAIGN_DISCLOSURE_CD` record have a `FILER_XREF_ID` record? No. Here are the missing filer_ids, which are also not found in either the `FILERNAME_CD` or `FILERS_CD` tables. ###Code %%sql SELECT cvr."FILER_ID" as cvr_filer_id, fn."FILER_ID" as filername_filer_id, f."FILER_ID" as filer_filer_id FROM ( SELECT DISTINCT cvr."FILER_ID" FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr LEFT JOIN "FILER_XREF_CD" x ON cvr."FILER_ID" = x."XREF_ID" WHERE x."XREF_ID" IS NULL ) cvr LEFT JOIN "FILERNAME_CD" fn ON cvr."FILER_ID" = fn."FILER_ID"::varchar LEFT JOIN "FILERS_CD" f ON cvr."FILER_ID" = f."FILER_ID"::varchar ORDER BY cvr."FILER_ID"::VARCHAR DESC; ###Output 45 rows affected. ###Markdown Should probably look more into these later, but this might have something to do with conflicting filer_ids on `CVR_CAMPAIGN_DISCLOSURE_CD` and `FILER_FILINGS_CD`. Does the filer_id from `FILER_XREF_CD` and `FILER_FILINGS_CD` ever conflict for the same filing? Yes, but less than 1 percent of the time. ###Code %%sql SELECT COUNT(*)::float / ( SELECT COUNT(*) FROM "CVR_CAMPAIGN_DISCLOSURE_CD" CVR JOIN "FILER_FILINGS_CD" FF ON CVR."FILING_ID" = FF."FILING_ID" AND CVR."AMEND_ID" = FF."FILING_SEQUENCE" ) as pct_conflict FROM "CVR_CAMPAIGN_DISCLOSURE_CD" CVR JOIN "FILER_FILINGS_CD" FF ON CVR."FILING_ID" = FF."FILING_ID" AND CVR."AMEND_ID" = FF."FILING_SEQUENCE" JOIN "FILER_XREF_CD" X ON CVR."FILER_ID" = X."XREF_ID" WHERE X."FILER_ID" <> FF."FILER_ID"; ###Output 1 rows affected. ###Markdown Mostly this seems to be a problem for Form 497 filings. ###Code %%sql SELECT cvr."FORM_TYPE", COUNT(*) FROM "CVR_CAMPAIGN_DISCLOSURE_CD" CVR JOIN "FILER_FILINGS_CD" FF ON CVR."FILING_ID" = FF."FILING_ID" AND CVR."AMEND_ID" = FF."FILING_SEQUENCE" JOIN "FILER_XREF_CD" X ON CVR."FILER_ID" = X."XREF_ID" WHERE X."FILER_ID" <> FF."FILER_ID" GROUP BY 1; ###Output 2 rows affected. ###Markdown Does the `FILER_FILINGS_CD.FORM_ID` value ever vary between amendments to the same filing? I guess there always has to be at least one. 
###Code %%sql SELECT "FILING_ID", COUNT(DISTINCT "FORM_ID") FROM "FILER_FILINGS_CD" WHERE "FORM_ID" IN ( SELECT DISTINCT "FORM_TYPE" FROM "CVR_CAMPAIGN_DISCLOSURE_CD" ) GROUP BY 1 HAVING COUNT(DISTINCT "FORM_ID") > 1 ORDER BY 1 DESC; %%sql SELECT * FROM "FILER_FILINGS_CD" WHERE "FILING_ID" = 826532; ###Output 2 rows affected. ###Markdown But there aren't any `CVR_CAMPAIGN_DISCLOSURE_CD` or `SMRY_CD` records for this filing_id, so maybe it isn't real. ###Code %%sql SELECT * FROM "CVR_CAMPAIGN_DISCLOSURE_CD" WHERE "FILING_ID" = 826532; SELECT * FROM "SMRY_CD" WHERE "FILING_ID" = 826532; ###Output 0 rows affected. 0 rows affected. ###Markdown Does the `FILER_FILINGS_CD.FILER_ID` value ever vary between amendments to the same filing? Even when we narrow to only the campaign finance-related forms, the answer is "yes". ###Code %%sql SELECT "FILING_ID", COUNT(DISTINCT "FILING_SEQUENCE"), COUNT(DISTINCT "FILER_ID") FROM "FILER_FILINGS_CD" WHERE "FORM_ID" IN ( SELECT DISTINCT "FORM_TYPE" FROM "CVR_CAMPAIGN_DISCLOSURE_CD" ) GROUP BY 1 HAVING COUNT(DISTINCT "FILER_ID") > 1 ORDER BY 1 DESC; ###Output 1274 rows affected.
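###Markdown
 For follow-up analysis it is sometimes easier to pull a result set into pandas than to read it in a `%%sql` cell. Below is a sketch, assuming SQLAlchemy is available alongside the `%sql` magic and reusing the `connection_string` defined at the top of this notebook.
###Code
# Sketch: load one of the filings flagged above into a DataFrame for inspection
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(connection_string)
df = pd.read_sql(
    'SELECT * FROM "FILER_FILINGS_CD" WHERE "FILING_ID" = 826532',
    engine,
)
df.head()
###Output
_____no_output_____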
ResNet30_actual.ipynb
###Markdown ###Code import torch import torchvision # torch package for vision related things import torch.nn.functional as F # Parameterless functions, like (some) activation functions import torchvision.datasets as datasets # Standard datasets import torchvision.transforms as transforms # Transformations we can perform on our dataset for augmentation from torch import optim # For optimizers like SGD, Adam, etc. from torch import nn # All neural network modules from torch.utils.data import DataLoader # Gives easier dataset managment by creating mini batches etc. from tqdm import tqdm # For nice progress bar! device = (torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')) print(f"Training on device {device}.") from matplotlib import pyplot as plt import numpy as np import collections import torch.nn as nn import torch.optim as optim from torchvision import datasets, transforms data_path = '/data-unversioned/p1ch6/' cifar10 = datasets.CIFAR10( data_path, train=True, download=True, transform=transforms.Compose([ transforms.RandomCrop(size=[32,32], padding=4), transforms.ToTensor(), transforms.Normalize((0.4915, 0.4823, 0.4468), (0.2470, 0.2435, 0.2616)) ])) cifar10_val = datasets.CIFAR10( data_path, train=False, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.4915, 0.4823, 0.4468), (0.2470, 0.2435, 0.2616)) ])) class resblock(nn.Module): def __init__(self, in_channels, mid_channels,identity_downsample=None, stride=1): super(resblock, self).__init__() self.expansion = 4 self.conv1 = nn.Conv2d(in_channels, mid_channels, kernel_size=1, stride=1, padding=0, bias=False) self.bn1 = nn.BatchNorm2d(mid_channels) self.conv2 = nn.Conv2d(mid_channels,mid_channels,kernel_size=3,stride=stride,padding=1,bias=False) self.bn2 = nn.BatchNorm2d(mid_channels) self.conv3 = nn.Conv2d(mid_channels,mid_channels * self.expansion,kernel_size=1,stride=1,padding=0,bias=False) self.bn3 = nn.BatchNorm2d(mid_channels * self.expansion) self.relu = nn.ReLU() self.identity_downsample = identity_downsample self.stride = stride def forward(self, x): identity = x.clone() x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.conv2(x) x = self.bn2(x) x = self.relu(x) x = self.conv3(x) x = self.bn3(x) if self.identity_downsample is not None: identity = self.identity_downsample(identity) x += identity x = self.relu(x) return x class ResNet(nn.Module): def __init__(self, resblock, layers, image_channels = 3, num_classes = 10): super(ResNet, self).__init__() self.in_channels = 64 self.conv1 = nn.Conv2d(image_channels, 64, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = nn.BatchNorm2d(64) self.relu = nn.ReLU() self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = self._create_layer(resblock, layers[0], mid_channels=64, stride=1) self.layer2 = self._create_layer(resblock, layers[1], mid_channels=128, stride=2) self.layer3 = self._create_layer(resblock, layers[2], mid_channels=256, stride=2) self.layer4 = self._create_layer(resblock, layers[3], mid_channels=512, stride=2) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) self.fc1 = nn.Linear(512 * 4, 32) self.fc2 = nn.Linear(32, 16) self.fc3 = nn.Linear(16, 10) def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) x = self.avgpool(x) x = x.reshape(x.shape[0], -1) x = self.fc1(x) x = self.fc2(x) x = self.fc3(x) return x def _create_layer(self, resblock, num_residual_blocks, 
mid_channels, stride): identity_downsample = None new_layers = [] # Either if we half the input space for ex, 56x56 -> 28x28 (stride=2), or channels changes # we need to adapt the Identity (skip connection) so it will be able to be added # to the layer that's ahead if stride != 1 or self.in_channels != mid_channels * 4: identity_downsample = nn.Sequential( nn.Conv2d( self.in_channels, mid_channels * 4, kernel_size=1, stride=stride, bias=False ), nn.BatchNorm2d(mid_channels * 4), ) new_layers.append( resblock(self.in_channels, mid_channels, identity_downsample, stride) ) # The expansion size is always 4 for ResNet 50,101,152 self.in_channels = mid_channels * 4 # For example for first resnet layer: 256 will be mapped to 64 as intermediate layer, # then finally back to 256. Hence no identity downsample is needed, since stride = 1, # and also same amount of channels. for i in range(num_residual_blocks - 1): new_layers.append(resblock(self.in_channels, mid_channels)) return nn.Sequential(*new_layers) import datetime def training_loop(n_epochs, optimizer, model, loss_fn, train_loader): for epoch in range(1, n_epochs + 1): loss_train = 0.0 for imgs, labels in train_loader: imgs = imgs.to(device=device) # <1> labels = labels.to(device=device) outputs = model(imgs) loss = loss_fn(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() loss_train += loss.item() print('{} Epoch {}, Training loss {}'.format( datetime.datetime.now(), epoch, loss_train / len(train_loader))) train_loader = torch.utils.data.DataLoader(cifar10, batch_size=64, shuffle=False) model = ResNet(resblock,[2, 3, 7, 2]).to(device=device)# <1> optimizer = optim.SGD(model.parameters(), lr=1e-2) loss_fn = nn.CrossEntropyLoss() numel_list = [p.numel() for p in model.parameters()] sum(numel_list), numel_list training_loop( n_epochs = 60, optimizer = optimizer, model = model, loss_fn = loss_fn, train_loader = train_loader, ) train_loader = torch.utils.data.DataLoader(cifar10, batch_size=64, shuffle=False) val_loader = torch.utils.data.DataLoader(cifar10_val, batch_size=64, shuffle=False) def validate(model, train_loader, val_loader): for name, loader in [("train", train_loader), ("val", val_loader)]: correct = 0 total = 0 with torch.no_grad(): # <1> for imgs, labels in loader: imgs = imgs.to(device=device) labels = labels.to(device=device) outputs = model(imgs) _, predicted = torch.max(outputs, dim=1) # <2> total += labels.shape[0] # <3> correct += int((predicted == labels).sum()) # <4> print("Accuracy {}: {:.2f}".format(name , correct / total)) validate(model, train_loader, val_loader) ###Output Accuracy train: 0.96 Accuracy val: 0.76
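###Markdown
 The `confusion_matrix` imported at the top of this notebook is not used so far; the sketch below shows one way it could summarise which CIFAR-10 classes the model confuses on the validation set. It reuses `model`, `val_loader` and `device` from above and is illustrative only.
###Code
# Sketch: per-class confusion matrix on the validation set
import torch
from sklearn.metrics import confusion_matrix

all_preds, all_labels = [], []
model.eval()
with torch.no_grad():
    for imgs, labels in val_loader:
        outputs = model(imgs.to(device=device))
        _, predicted = torch.max(outputs, dim=1)
        all_preds.append(predicted.cpu())
        all_labels.append(labels)

cm = confusion_matrix(torch.cat(all_labels).numpy(),
                      torch.cat(all_preds).numpy())
print(cm)
###Output
_____no_output_____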
PULP/tutorial/2.7 Logical constraint exercise.ipynb
###Markdown Logical constraint exerciseYour customer has ordered six products to be delivered over the next month. You will need to ship multiple truck loads to deliver all of the products. There is a weight limit on your trucks of 25,000 lbs. For cash flow reasons you desire to ship the most profitable combination of products that can fit on your truck. Product Weight (lbs) Profitability ($US) A 12,583 102,564 B 9,204 130,043 C 12,611 127,648 D 12,131 155,058 E 12,889 238,846 F 11,529 197,030Two Python dictionaries weight, and prof, and a list prod have been created for you containing the weight, profitability, and name of each product. ###Code from pulp import * prod = ['A','B','C','D','E','F'] weight = {'A':12800, 'B':10900, 'C':11400, 'D':2100, 'E':11300, 'F':2300} prof = {'A':77878, 'B':82713, 'C':82728, 'D':68423, 'E':84119, 'F':77765} # Initialized model, defined decision variables and objective model = LpProblem("Loading Truck Problem", LpMaximize) x = LpVariable.dicts('ship_', prod, cat='Binary') model += lpSum([prof[i] * x[i] for i in prod]) # Define Constraint model += lpSum([weight[i] * x[i] for i in prod]) <=25000 model += x['E'] + x['D'] + x['F']<= 1 model.solve() for i in prod: print("{} status {}".format(i, x[i].varValue)) ###Output A status 0.0 B status 1.0 C status 1.0 D status 0.0 E status 0.0 F status 1.0
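###Markdown
 Besides the per-product shipping status printed above, it is worth confirming the solver status and the totals implied by the chosen load. The cell below is a small sketch reusing the `model`, `x`, `weight` and `prod` objects defined above.
###Code
# Sketch: summarise the solution found above
from pulp import LpStatus, value

print("Status:", LpStatus[model.status])
print("Total profit: ${:,.0f}".format(value(model.objective)))
print("Total weight: {:,.0f} lbs".format(
    sum(weight[i] * x[i].varValue for i in prod)))
###Output
_____no_output_____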
2017-09-13_diode-I-V.ipynb
###Markdown 2017-09-13 diode I/V curves ###Code import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np #plt.style.use('seaborn-bright') mpl.rcParams['figure.figsize'] = (15.0, 8.0) mpl.rcParams['font.size'] = 24 mpl.rcParams['lines.linewidth'] = 5 mpl.rcParams['lines.markersize'] = 5 ###Output _____no_output_____ ###Markdown \begin{equation}i_D = I_S \left[ \exp \left( \dfrac{v_D}{V_T} \right) - 1 \right]\end{equation}\begin{equation}\approx I_S \exp \left( \dfrac{v_D}{V_T} \right)\end{equation} Plot both the full equation and the approximation. ###Code kB = 1.381e-23 # J/K q = 1.602e-19 # C def id(vd, Is=1e-15, T=300): VT = kB * T / q return Is * (np.exp(vd / VT) - 1) def id_approx(vd, Is=1e-15, T=300): VT = kB * T / q return Is * np.exp(vd / VT) def plotlin_idvd(v, nvt=5): plt.close('all') ax1 = plt.subplot() ax1.plot(v, 1000*id_approx(v), '-', label='approx') ax1.plot(v, 1000*id(v), 'o', label='exact') r = plt.xlim([0, v.max()]) plt.ylabel('$I_D$ (mA)') plt.xlabel('$V_D$ (V)') #plt.suptitle('$i_D$ vs. $v_D$', y=1.05, size='x-large') plt.legend(loc='upper left') plt.grid(axis='y') VT = kB * 300 / q ax2 = ax1.twiny() vts = np.arange(0, int(v.max() / VT) + 1, nvt) ax2.set_xticks(VT * vts) ax2.set_xticklabels(vts) ax2.set_xlabel(r'$n\times V_T$') ax2.grid(True) ax2.set_xlim(r) plotlin_idvd(np.linspace(1e-3, 0.8, 100), nvt=10) plt.show() ###Output _____no_output_____ ###Markdown A linear-*y* plot isn't so helpful.Plot with a logarithmic scale to see the vertical scale better: ###Code def plot_idvd(v, nvt=5): plt.close('all') ax1 = plt.subplot() plt.semilogy(v, id_approx(v), '-', label='approx') plt.semilogy(v, id(v), 'o', label='exact') plt.ylabel('$I_D$ (A)') plt.xlabel('$V_D$ (V)') #plt.suptitle('$I_D$ vs. $V_D$', y=1.05, size='x-large') plt.legend(loc='best') r = plt.xlim([0, v.max()]) plt.grid(axis='y') VT = kB*300/q ax2 = ax1.twiny() vts = np.arange(0, int(v.max()/VT)+1, nvt) ax2.set_xticks(VT*vts) ax2.set_xticklabels(vts) ax2.set_xlabel(r'$n\times V_T$') ax2.grid(True) ax2.set_xlim(r) plot_idvd(np.linspace(1e-3, 0.8, 100), nvt=10) plt.show() ###Output _____no_output_____ ###Markdown Zoom in to near $V_D = 0$ ###Code v = np.linspace(0, 0.1, 100) plot_idvd(v, nvt=1) plt.show() ###Output _____no_output_____
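###Markdown
 Going the other way is often useful: given a target current, the approximate equation can be inverted to find the required forward voltage, $v_D \approx V_T \ln(i_D / I_S)$. The cell below is a quick sketch using the same $I_S$ and temperature as above.
###Code
# Sketch: invert the approximate diode equation for a few target currents
import numpy as np

kB = 1.381e-23  # J/K
q = 1.602e-19   # C
VT = kB * 300 / q
Is = 1e-15      # A

for i_target in [1e-6, 1e-3, 10e-3]:
    vd = VT * np.log(i_target / Is)  # from i_D ~ Is * exp(vd / VT)
    print('I_D = {:7.1e} A  ->  V_D = {:.3f} V'.format(i_target, vd))
###Output
_____no_output_____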
notebooks/sym_sizing.ipynb
###Markdown Symmetrical OPIn this notebook the circuit shown in the following schematic will be sized to acheive a certain gain.The resulting procedure takes a target gain as input and can be run for any technology, for which trained models are available.![Symmetrical Operatrional Amplifier](./fig/sym.png) ###Code %matplotlib inline import os import numpy as np import torch as pt import pandas as pd import joblib as jl from functools import partial from scipy.interpolate import pchip_interpolate, interp1d from scipy.optimize import minimize from matplotlib import pyplot as plt from sklearn.preprocessing import MinMaxScaler, minmax_scale ###Output _____no_output_____ ###Markdown SpecificationThe overall performance of the circuit is approximated during sizing by$$A_{0} \approx M \cdot \frac{g_{\mathrm{m}, \mathtt{d}}}{g_{\mathrm{ds},\mathtt{cm22}} + g_{\mathrm{ds},\mathtt{cm32}}}$$and $$ f_{0} \approx \frac{g_{\mathrm{ds},\mathtt{cm22}} + g_{\mathrm{ds},\mathtt{cm32}}}{2 \cdot \pi \cdot C_{L}} $$At the end these performances are verified by simulation.| Parameter | Specification ||-----------------------|--------------:|| $V_{\mathrm{DD}}$ | $1.2\,V$ || $V_{\mathrm{in,cm}}$ | $0.6\,V$ || $V_{\mathrm{out,cm}}$ | $0.6\,V$ || $I_{\mathtt{B0}}$ | $10\,\mu A$ || $C_{\mathrm{L}}$ | $10\,p F$ | ###Code V_DD = 1.2 V_SS = 0.0 V_ICM = 0.6 V_OCM = 0.6 I_B0 = 10e-6 C_L = 10e-12 ###Output _____no_output_____ ###Markdown Simulator Setup[PySpice](https://pyspice.fabrice-salvaire.fr/) is used for verifying the design by simulation within this notebook. ###Code import logging from PySpice.Spice.Netlist import Circuit, SubCircuitFactory from PySpice.Spice.Library import SpiceLibrary from PySpice.Unit import * ###Output _____no_output_____ ###Markdown The Symmetrical amplifier is setup as a subcircuit to be included into a testbench. ###Code class SymAmp(SubCircuitFactory): NAME = "symamp" NODES = (10, 11, 12, 13, 14, 15) # B, INP, INN, OUT, GND, VDD def __init__(self): super().__init__() # Biasing Current Mirror self.MOSFET("NCM11" , 10, 10, 14, 14, model = "nmos") self.MOSFET("NCM12" , 16, 10, 14, 14, model = "nmos") # Differential Pair self.MOSFET("ND11" , 17, 12, 16, 14, model = "nmos") self.MOSFET("ND12" , 18, 11, 16, 14, model = "nmos") # PMOS Current Mirrors self.MOSFET("PCM221", 17, 17, 15, 15, model = "pmos") self.MOSFET("PCM222", 19, 17, 15, 15, model = "pmos") self.MOSFET("PCM211", 18, 18, 15, 15, model = "pmos") self.MOSFET("PCM212", 13, 18, 15, 15, model = "pmos") # NMOS Current Mirror self.MOSFET("NCM31" , 19, 19, 14, 14, model = "nmos") self.MOSFET("NCM32" , 13, 19, 14, 14, model = "nmos") ###Output _____no_output_____ ###Markdown The subckt as well as the MOS library have to be specified in the netlist. This is where changes need to be made if another technology is to be sized. ###Code spice_lib_90 = SpiceLibrary("../lib/90nm_bulk.lib") netlist_90 = Circuit("symamp_tb") netlist_90.include(spice_lib_90["nmos"]) netlist_90.subcircuit(SymAmp()) ###Output _____no_output_____ ###Markdown The Open Loop Gain is obtained through the following Testbench. 
###Code netlist_90.X("sym", "symamp", "B", "P", "N", "O", 0, "D") symamp = list(netlist_90.subcircuits)[0] i_ref = netlist_90.CurrentSource("ref", 0 , "B", I_B0@u_A) v_dd = netlist_90.VoltageSource("dd" , "D", 0 , V_DD@u_V) v_ip = netlist_90.VoltageSource("ip" , "P", 0 , V_ICM@u_V) v_in = netlist_90.SinusoidalVoltageSource( "in", "N", "E" , dc_offset=0.0@u_V , ac_magnitude=-1.0@u_V , ) e_buf = netlist_90.VoltageControlledVoltageSource("in", "E", 0, "O", 0, 1.0@u_V) c_l = netlist_90.C("L", "O", 0, C_L@u_F) ###Output _____no_output_____ ###Markdown The simulation function takes a `dict` with a key for each device in the circuit and at least columns `W` and `L` for sizing. It returns the performances obtained through an AC Analysis. ###Code def simulate(sizing_data, netlist): for device in sizing_data.index: symamp.element(device).width = sizing_data.loc[device].W symamp.element(device).length = sizing_data.loc[device].L simulator = netlist.simulator( simulator="ngspice-subprocess" , temperature=27 , nominal_temperature=27 , ) logging.disable(logging.FATAL) analysis = simulator.ac( start_frequency = 1.0@u_Hz , stop_frequency = 1e11@u_Hz , number_of_points = 10 , variation = "dec" , ) freq = np.array(analysis.frequency) gain = ((20 * np.log10(np.absolute(analysis["O"]))) - (20 * np.log10(np.absolute(analysis["N"])))) phase = np.angle(analysis["O"], deg=True) - np.angle(analysis["N"], deg=True) logging.disable(logging.NOTSET) gf = [gain[np.argsort(gain)], freq[np.argsort(gain)]] pf = [phase[np.argsort(phase)], freq[np.argsort(phase)]] A0dB = gain[0] A3dB = A0dB - 3.0 f3dB = pchip_interpolate(*gf, [A3dB]) return (simulator, freq, gain, phase) ###Output _____no_output_____ ###Markdown Device Model SetupThe `PrimitiveDevice` class acts as interface to the machine learning models. With this, multiple models of different types and technologies can be instantiatede and compared. ###Code class PrimitiveDevice(): def __init__(self, prefix, params_x, params_y): self.prefix = prefix self.params_x = params_x self.params_y = params_y self.model = pt.jit.load(f"{self.prefix}/model.pt") self.model.cpu() self.model.eval() self.scale_x = jl.load(f"{self.prefix}/scale.X") self.scale_y = jl.load(f"{self.prefix}/scale.Y") def predict(self, X): with pt.no_grad(): X.fug = np.log10(X.fug.values) X_ = self.scale_x.transform(X[params_x].values) Y_ = self.model(pt.from_numpy(np.float32(X_))).numpy() Y = pd.DataFrame( self.scale_y.inverse_transform(Y_) , columns=params_y ) Y.jd = np.power(10, Y.jd.values) Y.gdsw = np.power(10, Y.gdsw.values) return pd.DataFrame(Y, columns=self.params_y) ###Output _____no_output_____ ###Markdown The circuit is not divided into single devices, but rather in _building blocks_ as indicated by the dashed boxes in the schematic. ###Code devices = [ "MNCM11", "MNCM12", "MND11", "MND12", "MNCM31", "MNCM32" , "MPCM221" , "MPCM222", "MPCM211", "MPCM212" ] reference_devices = [ "MNCM12", "MND12", "MPCM212", "MNCM32" ] ###Output _____no_output_____ ###Markdown The inputs and outputs of the model, trained in `model_training.ipynb` have to specified again. ###Code params_x = ["gmid", "fug", "Vds", "Vbs"] params_y = ["jd", "L", "gdsw", "Vgs"] ###Output _____no_output_____ ###Markdown Initially the symmetrical amplifier is sized with the models for the $90\,\mathrm{nm}$ technology. Later this can be changed to any other technology model, yielding similar results. 
###Code
nmos = PrimitiveDevice("../models/example/90nm-nmos", params_x, params_y)
pmos = PrimitiveDevice("../models/example/90nm-pmos", params_x, params_y)
###Output
_____no_output_____
###Markdown
 Sizing Procedure ![Symmetrical Amplifier](./fig/sym.png) For simplicity in this example, only the $\frac{g_{\mathrm{m}}}{I_{\mathrm{d}}}$ dependent models are considered. Therefore, sizing for all devices is expressed in terms of $\frac{g_{\mathrm{m}}}{I_{\mathrm{d}}}$ and $f_{\mathrm{ug}}$.$$\gamma_{\mathrm{n,p}} \left ( \left [ \frac{g_{\mathrm{m}}}{I_{\mathrm{d}}}, f_{\mathrm{ug}}, V_{\mathrm{ds}}, V_{\mathrm{bs}} \right ]^{\dagger} \right ) \Rightarrow \left [ L, \frac{I_{\mathrm{d}}}{W}, \frac{g_{\mathrm{ds}}}{W}, V_{\mathrm{gs}} \right ]^{\dagger}$$First, the specification given in the table above is considered, from which a biasing current ${I_{\mathtt{B1}} = \frac{I_{\mathrm{B0}}}{2}}$ is defined. This, in turn, results in a mirror ratio $M_{\mathrm{n}} = 2 : 1$ of the NMOS current mirror `NCM1`. Additionally, the ratio $M_{\mathrm{p}} = 1 : 4$ of the PMOS current mirrors `MPCM2` is specified. Usually, this is chosen to balance power consumption and phase margin. Since this has to be analyzed separately by simulation, starting values ${M_{\mathtt{cm21}} = 1}$ and ${M_{\mathtt{cm22}} = 4}$ are selected. Furthermore, the remaining branch current $I_{\mathtt{B2}} = M_{\mathrm{p}} \cdot \frac{I_{\mathtt{B1}}}{2}$ is determined.
###Code
M_P = 4
M_N = 2

I_B1 = I_B0 / M_N
I_B2 = (I_B1 / 2) * M_P
###Output
_____no_output_____
###Markdown
 Since the common mode output voltage $V_{\mathrm{out,cm}} = 0.6$ is known, the sizing procedure starts with `MPCM212`:$$ \gamma_{\mathrm{p}, \mathtt{MPCM2}} \left ( \left [ \left ( \frac{g_{\mathrm{m}}}{I_{\mathrm{d}}} \right )_{\mathtt{MPCM2}} , f_{\mathrm{ug}, \mathtt{MPCM2}}, (V_{\mathrm{DD}} - V_{\mathrm{out,cm}}), 0.0 \right ]^{\dagger} \right ) $$The gate voltage $V_{\mathrm{gs}, \mathtt{MPCM}}$ helps guide the sizing of the differential pair later. Next, `MNCM32` is considered with:$$ \gamma_{\mathrm{n}, \mathtt{MNCM3}} \left ( \left [ \left ( \frac{g_{\mathrm{m}}}{I_{\mathrm{d}}} \right )_{\mathtt{MNCM3}} , f_{\mathrm{ug}, \mathtt{MNCM3}}, V_{\mathrm{out,cm}}, 0.0 \right ]^{\dagger} \right ) $$Sizing the differential pair requires _guessing_ $V_{\mathrm{x}} = 0.23\,\mathrm{V}$, which is done by considering the fact that 3 devices are stacked and `MNCM1` merely serves as biasing. Therefore:$$ \gamma_{\mathrm{n}, \mathtt{MND1}} \left ( \left [ \left ( \frac{g_{\mathrm{m}}}{I_{\mathrm{d}}} \right )_{\mathtt{MND1}} , f_{\mathrm{ug}, \mathtt{MND1}} , (V_{\mathrm{DD}} - V_{\mathrm{gs}, \mathtt{MPCM}} - V_{\mathrm{x}}) , - V_{\mathrm{x}} \right ]^{\dagger} \right ) $$Subsequently, the biasing current mirror `MNCM1` is sized:$$ \gamma_{\mathrm{n}, \mathtt{MNCM1}} \left ( \left [ \left ( \frac{g_{\mathrm{m}}}{I_{\mathrm{d}}} \right )_{\mathtt{MNCM1}} , f_{\mathrm{ug}, \mathtt{MNCM1}} , V_{\mathrm{x}} , 0.0 \right ]^{\dagger} \right ) $$ With these four function calls, the sizing of the entire circuit is expressed in terms of eight electrical characteristics. The following function `symamp_sizing` takes a `dict` with keys for each _reference device_ $\in$ `reference_devices = [ "MNCM12", "MND12", "MPCM212", "MNCM32" ]` and the corresponding desired characteristics. The obtained sizing for each device is propagated to related devices in the same building block and a new `dict` with sizing information is returned. 
###Code def symamp_sizing( gmid_ncm12, gmid_ndp12, gmid_pcm212, gmid_ncm32 , fug_ncm12, fug_ndp12, fug_pcm212, fug_ncm32 ): ec = {} ## PMOS Current Mirror MPCM2: input_pcm212 = pd.DataFrame( np.array([[gmid_pcm212, fug_pcm212, (V_DD - V_OCM), 0.0]]) , columns=params_x ) ec["MPCM212"] = pmos.predict(input_pcm212).join(input_pcm212) # Determine width based on known branch current ec["MPCM212"]["W"] = I_B2 / ec["MPCM212"].jd # Copy to related device, don't overwrite ec["MPCM211"] = ec["MPCM212"].copy() # MPCM211's width has to be reduced by M_P ec["MPCM211"].W = ec["MPCM212"].W / M_P # Size MPCM222 and MPCM211 accordingly ec["MPCM222"] = ec["MPCM212"] ec["MPCM221"] = ec["MPCM211"] ## NMOS Current Mirror NCM3: input_ncm32 = pd.DataFrame( np.array([[gmid_ncm32, fug_ncm32, V_OCM, 0.0]]) , columns=params_x ) ec["MNCM32"] = nmos.predict(input_ncm32).join(input_ncm32) ec["MNCM32"]["W"] = I_B2 / ec["MNCM32"].jd ec["MNCM31"] = ec["MNCM32"].copy() ## NMOS Differential Pair NDP1: V_X = 0.23 V_GS = ec["MNCM32"].Vgs.values[0] input_nd12 = pd.DataFrame( np.array([[gmid_ndp12, fug_ndp12, (V_DD - V_GS - V_X), -V_X]]) , columns=params_x ) ec["MND12"] = nmos.predict(input_nd12).join(input_nd12) ec["MND12"]["W"] = (I_B1 / 2) / ec["MND12"].jd ec["MND11"] = ec["MND12"] ## NMOS Current Mirror NCM1 input_ncm12 = pd.DataFrame( np.array([[gmid_ncm12, fug_ncm12, V_X, 0.0]]) , columns=params_x ) ec["MNCM12"] = nmos.predict(input_ncm12).join(input_ncm12) ec["MNCM12"]["W"] = I_B1 / ec["MNCM12"].jd ec["MNCM11"] = ec["MNCM12"].copy() # MNCM11's width has to be adjusted by M_N ec["MNCM11"].W = ec["MNCM11"].W * M_N ## Calculate/Approximate Operating point Parameters for dev,val in ec.items(): val["gds"] = val.gdsw * val.W val["id"] = val.jd * val.W val["gm"] = val.gmid * val.id return ec ###Output _____no_output_____ ###Markdown To make it more useable in an optimization context, a wrapper function that takes the 8 individual characteristics for the 4 _reference devices_ and returns a performance estimate is set up here with `symamp_performance`. ###Code def approximate( gmid_ncm12, gmid_ndp12, gmid_pcm212, gmid_ncm32 , fug_ncm12, fug_ndp12, fug_pcm212, fug_ncm32 ): dc = symamp_sizing( gmid_ncm12, gmid_ndp12, gmid_pcm212, gmid_ncm32 , fug_ncm12, fug_ndp12, fug_pcm212, fug_ncm32 ) A0dB = 20 * np.log10(M_P * (dc["MND12"].gm / (dc["MNCM32"].gds + dc["MPCM212"].gds)).values[0]) f3dB = ((dc["MNCM32"].gds + dc["MPCM212"].gds).values[0] / (2 * np.pi * C_L)) return [A0dB, f3dB] ###Output _____no_output_____ ###Markdown As described in the Paper, after wrapping the performnace estimate into a function of electrical characteristics, the $\frac{g_{\mathrm{m}}}{I_{\mathrm{d}}} = 10.0\,\mathrm{V}^{-1}$ for all devices is **fixed**, which is expressed by $p_{\mathrm{sym}}$ (`p_sym`) here. Furthermore, $f_{\mathrm{ug}, \mathtt{NCM12}}$ and $f_{\mathrm{ug}, \mathtt{NDP12}}$ can be fixed, since they don't affect thecircuits gain approximation. ###Code gmid_ncm12 = 10.0 gmid_ndp12 = 10.0 gmid_pcm212 = 10.0 gmid_ncm32 = 10.0 fug_ncm12 = 1e8 fug_ndp12 = 1e8 fug_pcm212 = np.nan # optimization variable fug_ncm32 = np.nan # optimization variable # Partially apply fixed values to Performance Estimation function p_sym = partial( approximate , gmid_ncm12, gmid_ndp12 , gmid_pcm212, gmid_ncm32 , fug_ncm12, fug_ndp12 ) ###Output _____no_output_____ ###Markdown The optimization target / cost function is defined in terms of the desired target gain. For better convergence $\log (f_{\mathrm{ug}})$ is optimized. 
###Code A0dB_target = 50 res = minimize( lambda f: np.abs(p_sym(*np.power(10, f).tolist())[0] - A0dB_target) , [7, 7] , method = "Nelder-Mead" ) fug_pcm212, fug_ncm32 = np.power(10, res.x) ###Output _____no_output_____ ###Markdown The sizing is obtained by calling the `symamp_sizing` function one more time with the previously determiend $f_{\mathrm{ug}}$s. ###Code sizing_90 = symamp_sizing( gmid_ncm12, gmid_ndp12, gmid_pcm212, gmid_ncm32 , fug_ncm12, fug_ndp12, fug_pcm212, fug_ncm32 ) sizing_data_90 = pd.concat(sizing_90.values(), names=sizing_90.keys()) sizing_data_90.index = sizing_90.keys() sizing_data_90 A0dB_apx_90, f3dB_apx_90 = approximate( gmid_ncm12, gmid_ndp12, gmid_pcm212, gmid_ncm32 , fug_ncm12, fug_ndp12, fug_pcm212, fug_ncm32 ) _, freq_90, gain_90, phase_90 = simulate(sizing_data_90, netlist_90) gf_90 = [gain_90[np.argsort(gain_90)], freq_90[np.argsort(gain_90)]] A0dB_tru_90 = gain_90[0] A3dB_tru_90 = A0dB_tru_90 - 3.0 f3dB_tru_90 = pchip_interpolate(*gf_90, [A3dB_tru_90])[0] fig = plt.figure(figsize=(12,6)) plt.plot(freq_90, gain_90) plt.axvline( x=f3dB_tru_90 , color="tab:blue" , ls="--" , label=f"Simulated $f_{{-3dB}} = {f3dB_tru_90:.3e}$ Hz") plt.axvline( x=f3dB_apx_90 , color="tab:orange" , ls="--" , label=f"Predicted $f_{{-3dB}} = {f3dB_apx_90:.3e}$ Hz") plt.axhline( y=A0dB_tru_90 , color="tab:blue" , ls="--" , label=f"Simulated $A_{{0}} = {A0dB_tru_90:.2f}$ dB") plt.axhline( y=A0dB_apx_90 , color="tab:orange" , ls="--" , label=f"Predicted $A_{{0}} = {A0dB_apx_90:.2f}$ dB") plt.xscale("log") plt.legend() plt.title("Simulated Gain") plt.xlabel("Frequency [Hz]") plt.ylabel("Gain [dB]") plt.grid("on") ###Output _____no_output_____ ###Markdown The error in performance approximation is due to the inaccuarcy of the simplified equations for $A_{0}$ and $f_{0}$.The `moa_sizing.ipynb` notebook addresses this issue by instead using the simulator in the optimization loop. Technology MigrationRunning the same procedure with the previously obtained electrical characteristics will size the circuit for other technologies as well, **no** additional optimization required! 
$45\,\mathrm{nm}$ Netlist: ###Code spice_lib_45 = SpiceLibrary("../lib/45nm_bulk.lib") netlist_45 = Circuit("symamp_tb") netlist_45.include(spice_lib_45["nmos"]) netlist_45.subcircuit(SymAmp()) netlist_45.X("sym", "symamp", "B", "P", "N", "O", 0, "D") symamp = list(netlist_45.subcircuits)[0] i_ref = netlist_45.CurrentSource("ref", 0 , "B", I_B0@u_A) v_dd = netlist_45.VoltageSource("dd" , "D", 0 , V_DD@u_V) v_ip = netlist_45.VoltageSource("ip" , "P", 0 , V_ICM@u_V) v_in = netlist_45.SinusoidalVoltageSource( "in", "N", "E" , dc_offset=0.0@u_V , ac_magnitude=-1.0@u_V , ) e_buf = netlist_45.VoltageControlledVoltageSource("in", "E", 0, "O", 0, 1.0@u_V) c_l = netlist_45.C("L", "O", 0, C_L@u_F) ###Output _____no_output_____ ###Markdown Simply load the corresponding device model and evaluate the procedure with the previously obtained parameters again: ###Code nmos = PrimitiveDevice("../models/example/45nm-nmos", params_x, params_y) pmos = PrimitiveDevice("../models/example/45nm-pmos", params_x, params_y) sizing_45 = symamp_sizing( gmid_ncm12, gmid_ndp12, gmid_pcm212, gmid_ncm32 , fug_ncm12, fug_ndp12, fug_pcm212, fug_ncm32 ) sizing_data_45 = pd.concat(sizing_45.values(), names=sizing_45.keys()) sizing_data_45.index = sizing_45.keys() sizing_data_45 A0dB_apx_45, f3dB_apx_45 = approximate( gmid_ncm12, gmid_ndp12, gmid_pcm212, gmid_ncm32 , fug_ncm12, fug_ndp12, fug_pcm212, fug_ncm32 ) _, freq_45, gain_45, phase_45 = simulate(sizing_data_45, netlist_45) gf_45 = [gain_45[np.argsort(gain_45)], freq_45[np.argsort(gain_45)]] A0dB_tru_45 = gain_45[0] A3dB_tru_45 = A0dB_tru_45 - 3.0 f3dB_tru_45 = pchip_interpolate(*gf_45, [A3dB_tru_45])[0] ###Output _____no_output_____ ###Markdown $130\,\mathrm{nm}$ Netlist: ###Code spice_lib_130 = SpiceLibrary("../lib/130nm_bulk.lib") netlist_130 = Circuit("symamp_tb") netlist_130.include(spice_lib_130["nmos"]) netlist_130.subcircuit(SymAmp()) netlist_130.X("sym", "symamp", "B", "P", "N", "O", 0, "D") symamp = list(netlist_130.subcircuits)[0] i_ref = netlist_130.CurrentSource("ref", 0 , "B", I_B0@u_A) v_dd = netlist_130.VoltageSource("dd" , "D", 0 , V_DD@u_V) v_ip = netlist_130.VoltageSource("ip" , "P", 0 , V_ICM@u_V) v_in = netlist_130.SinusoidalVoltageSource( "in", "N", "E" , dc_offset=0.0@u_V , ac_magnitude=-1.0@u_V , ) e_buf = netlist_130.VoltageControlledVoltageSource("in", "E", 0, "O", 0, 1.0@u_V) c_l = netlist_130.C("L", "O", 0, C_L@u_F) ###Output _____no_output_____ ###Markdown Simply load the corresponding device model and evaluate the procedure with the previously obtained parameters again: ###Code nmos = PrimitiveDevice("../models/example/130nm-nmos", params_x, params_y) pmos = PrimitiveDevice("../models/example/130nm-pmos", params_x, params_y) sizing_130 = symamp_sizing( gmid_ncm12, gmid_ndp12, gmid_pcm212, gmid_ncm32 , fug_ncm12, fug_ndp12, fug_pcm212, fug_ncm32 ) sizing_data_130 = pd.concat(sizing_130.values(), names=sizing_130.keys()) sizing_data_130.index = sizing_130.keys() sizing_data_130 A0dB_apx_130, f3dB_apx_130 = approximate( gmid_ncm12, gmid_ndp12, gmid_pcm212, gmid_ncm32 , fug_ncm12, fug_ndp12, fug_pcm212, fug_ncm32 ) _, freq_130, gain_130, phase_130 = simulate(sizing_data_130, netlist_130) gf_130 = [gain_130[np.argsort(gain_130)], freq_130[np.argsort(gain_130)]] A0dB_tru_130 = gain_130[0] A3dB_tru_130 = A0dB_tru_130 - 3.0 f3dB_tru_130 = pchip_interpolate(*gf_130, [A3dB_tru_130])[0] ###Output _____no_output_____ ###Markdown The plot below shows how well the predictions agree with the simulation and how the same electrical characteristics result in 
very similar behaviour of the entire circuit regardless of the technology node. ###Code fig = plt.figure(figsize=(12,6)) plt.plot(freq_90, gain_90) plt.axvline( x=f3dB_tru_90 , color="tab:blue" , ls="--" , label=f"Simulated $f_{{-3dB}} = {f3dB_tru_90:.3e}$ Hz") plt.axvline( x=f3dB_apx_90 , color="tab:blue" , ls=":" , label=f"Predicted $f_{{-3dB}} = {f3dB_apx_90:.3e}$ Hz") plt.axhline( y=A0dB_tru_90 , color="tab:blue" , ls="--" , label=f"Simulated $A_{{0}} = {A0dB_tru_90:.2f}$ dB") plt.axhline( y=A0dB_apx_90 , color="tab:blue" , ls=":" , label=f"Predicted $A_{{0}} = {A0dB_apx_90:.2f}$ dB") plt.plot(freq_45, gain_45) plt.axvline( x=f3dB_tru_45 , color="tab:orange" , ls="--" , label=f"Simulated $f_{{-3dB}} = {f3dB_tru_45:.3e}$ Hz") plt.axvline( x=f3dB_apx_45 , color="tab:orange" , ls=":" , label=f"Predicted $f_{{-3dB}} = {f3dB_apx_45:.3e}$ Hz") plt.axhline( y=A0dB_tru_45 , color="tab:orange" , ls="--" , label=f"Simulated $A_{{0}} = {A0dB_tru_45:.2f}$ dB") plt.axhline( y=A0dB_apx_45 , color="tab:orange" , ls=":" , label=f"Predicted $A_{{0}} = {A0dB_apx_45:.2f}$ dB") plt.plot(freq_130, gain_130) plt.axvline( x=f3dB_tru_130 , color="tab:green" , ls="--" , label=f"Simulated $f_{{-3dB}} = {f3dB_tru_130:.3e}$ Hz") plt.axvline( x=f3dB_apx_130 , color="tab:green" , ls=":" , label=f"Predicted $f_{{-3dB}} = {f3dB_apx_130:.3e}$ Hz") plt.axhline( y=A0dB_tru_130 , color="tab:green" , ls="--" , label=f"Simulated $A_{{0}} = {A0dB_tru_130:.2f}$ dB") plt.axhline( y=A0dB_apx_130 , color="tab:green" , ls=":" , label=f"Predicted $A_{{0}} = {A0dB_apx_130:.2f}$ dB") plt.xscale("log") plt.legend() plt.title("Simulated Gain") plt.xlabel("Frequency [Hz]") plt.ylabel("Gain [dB]") plt.grid("on") ###Output _____no_output_____
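###Markdown
As a compact summary of the comparison above (a sketch that reuses only the quantities already computed in this notebook), the predicted and simulated figures can be collected into one small table:

###Code
# Summarize prediction vs. simulation per technology node.
summary = pd.DataFrame(
    { "A0 predicted [dB]"   : [A0dB_apx_90,  A0dB_apx_45,  A0dB_apx_130]
    , "A0 simulated [dB]"   : [A0dB_tru_90,  A0dB_tru_45,  A0dB_tru_130]
    , "f3dB predicted [Hz]" : [f3dB_apx_90,  f3dB_apx_45,  f3dB_apx_130]
    , "f3dB simulated [Hz]" : [f3dB_tru_90,  f3dB_tru_45,  f3dB_tru_130]
    }
    , index = ["90nm", "45nm", "130nm"] )
summary

###Output
_____no_output_____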
hmwk2/HW-2.ipynb
###Markdown Homework 2In this homework, we are going to play with Twitter data.The data is represented as rows of of [JSON](https://en.wikipedia.org/wiki/JSONExample) strings.It consists of [tweets](https://dev.twitter.com/overview/api/tweets), [messages](https://dev.twitter.com/streaming/overview/messages-types), and a small amount of broken data (cannot be parsed as JSON).For this homework, we will only focus on tweets and ignore all other messages. UPDATES Announcement**We changed the test files size and the corresponding file paths.**In order to avoid long waiting queue, we decided to limit the input files size for the Playground submissions. Please read the following files to get the input file paths: * 1GB test: `../Data/hw2-files-1gb.txt` * 5GB test: `../Data/hw2-files-5gb.txt` * 20GB test: `../Data/hw2-files-20gb.txt`**We updated the json parsing section of this notebook.**Python built-in json library is too slow. In our experiment, 70% of the total running time is spent on parsing tweets. Therefore we recommend using [ujson](https://pypi.python.org/pypi/ujson) instead of json. It is at least 15x faster than the built-in json library according to our tests. Important Reminders1. The tokenizer in this notebook contains UTF-8 characters. So the first line of your `.py` source code must be ` -*- coding: utf-8 -*-` to define its encoding. Learn more about this topic [here](https://www.python.org/dev/peps/pep-0263/).2. The input files (the tweets) contain UTF-8 characters. So you have to correctly encode your input with some function like `lambda text: text.encode('utf-8')`.3. `../Data/hw2-files-` may contain multiple lines, one line for one input file. You can use a single textFile call to read multiple files: `sc.textFile(','.join(files))`.4. The input file paths in `../Data/hw2-files-` contains trailing spaces (newline etc.), which may confuse HDFS if not removed. 5. Your program will be killed if it cannot finish in 5 minutes. The running time of last 100 submissions (yours and others) can be checked at the "View last 100 jobs" tab. For your information, here is the running time of our solution: * 1GB test: 53 seconds, * 5GB test: 60 seconds, * 20GB test: 114 seconds. TweetsA tweet consists of many data fields. [Here is an example](https://gist.github.com/arapat/03d02c9b327e6ff3f6c3c5c602eeaf8b). You can learn all about them in the Twitter API doc. We are going to briefly introduce only the data fields that will be used in this homework.* `created_at`: Posted time of this tweet (time zone is included)* `id_str`: Tweet ID - we recommend using `id_str` over using `id` as Tweet IDs, becauase `id` is an integer and may bring some overflow problems.* `text`: Tweet content* `user`: A JSON object for information about the author of the tweet * `id_str`: User ID * `name`: User name (may contain spaces) * `screen_name`: User screen name (no spaces)* `retweeted_status`: A JSON object for information about the retweeted tweet (i.e. this tweet is not original but retweeteed some other tweet) * All data fields of a tweet except `retweeted_status`* `entities`: A JSON object for all entities in this tweet * `hashtags`: An array for all the hashtags that are mentioned in this tweet * `urls`: An array for all the URLs that are mentioned in this tweet Data sourceAll tweets are collected using the [Twitter Streaming API](https://dev.twitter.com/streaming/overview). Users partitionBesides the original tweets, we will provide you with a Pickle file, which contains a partition over 452,743 Twitter users. 
It contains a Python dictionary `{user_id: partition_id}`. The users are partitioned into 7 groups. Part 0: Load data to a RDD The tweets data is stored on AWS S3. We have in total a little over 1 TB of tweets. We provide 10 MB of tweets for your local development. For the testing and grading on the homework server, we will use different data. Testing on the homework serverIn the Playground, we provide three different input sizes to test your program: 1 GB, 10 GB, and 100 GB. To test them, read files list from `../Data/hw2-files-1gb.txt`, `../Data/hw2-files-5gb.txt`, `../Data/hw2-files-20gb.txt`, respectively.For final submission, make sure to read files list from `../Data/hw2-files-final.txt`. Otherwise your program will receive no points. Local testFor local testing, read files list from `../Data/hw2-files.txt`.Now let's see how many lines there are in the input files.1. Make RDD from the list of files in `hw2-files.txt`.2. Mark the RDD to be cached (so in next operation data will be loaded in memory) 3. call the `print_count` method to print number of lines in all these filesIt should print```Number of elements: 2193``` ###Code import findspark findspark.init() import pyspark sc = pyspark.SparkContext() # %install_ext https://raw.github.com/cpcloud/ipython-autotime/master/autotime.py %load_ext autotime def print_count(rdd): print 'Number of elements:', rdd.count() env="local" files='' path = "Data/hw2-files.txt" if env=="prod": path = '../Data/hw2-files-1gb.txt' with open(path) as f: files=','.join(f.readlines()).replace('\n','') rdd = sc.textFile(files).cache() print_count(rdd) ###Output Number of elements: 2193 time: 31.6 s ###Markdown Part 1: Parse JSON strings to JSON objects Python has built-in support for JSON.**UPDATE:** Python built-in json library is too slow. In our experiment, 70% of the total running time is spent on parsing tweets. Therefore we recommend using [ujson](https://pypi.python.org/pypi/ujson) instead of json. It is at least 15x faster than the built-in json library according to our tests. ###Code import ujson json_example = ''' { "id": 1, "name": "A green door", "price": 12.50, "tags": ["home", "green"] } ''' json_obj = ujson.loads(json_example) json_obj ###Output _____no_output_____ ###Markdown Broken tweets and irrelevant messagesThe data of this assignment may contain broken tweets (invalid JSON strings). So make sure that your code is robust for such cases.In addition, some lines in the input file might not be tweets, but messages that the Twitter server sent to the developer (such as [limit notices](https://dev.twitter.com/streaming/overview/messages-typeslimit_notices)). Your program should also ignore these messages.*Hint:* [Catch the ValueError](http://stackoverflow.com/questions/11294535/verify-if-a-string-is-json-in-python)(1) Parse raw JSON tweets to obtain valid JSON objects. From all valid tweets, construct a pair RDD of `(user_id, text)`, where `user_id` is the `id_str` data field of the `user` dictionary (read [Tweets](Tweets) section above), `text` is the `text` data field. 
###Code import ujson def safe_parse(raw_json): tweet={} try: tweet = ujson.loads(raw_json) except ValueError: pass return tweet #filter out rate limites {"limit":{"track":77,"timestamp_ms":"1457610531879"}} tweets = rdd.map(lambda json_str: safe_parse(json_str))\ .filter(lambda h: "text" in h)\ .map(lambda tweet: (tweet["user"]["id_str"], tweet["text"]))\ .map(lambda (x,y): (x, y.encode("utf-8"))).cache() ###Output time: 10.3 ms ###Markdown (2) Count the number of different users in all valid tweets (hint: [the `distinct()` method](https://spark.apache.org/docs/latest/programming-guide.htmltransformations)).It should print```The number of unique users is: 2083``` ###Code def print_users_count(count): print 'The number of unique users is:', count print_users_count(tweets.map(lambda x:x[0]).distinct().count()) ###Output The number of unique users is: 2083 time: 73.8 ms ###Markdown Part 2: Number of posts from each user partition Load the Pickle file `../Data/users-partition.pickle`, you will get a dictionary which represents a partition over 452,743 Twitter users, `{user_id: partition_id}`. The users are partitioned into 7 groups. For example, if the dictionary is loaded into a variable named `partition`, the partition ID of the user `59458445` is `partition["59458445"]`. These users are partitioned into 7 groups. The partition ID is an integer between 0-6.Note that the user partition we provide doesn't cover all users appear in the input data. (1) Load the pickle file. ###Code import cPickle as pickle path = 'Data/users-partition.pickle' if env=="prod": path = '../Data/users-partition.pickle' partitions = pickle.load(open(path, 'rb')) #{user_Id, partition_id} - {'583105596': 6} partition_bc = sc.broadcast(partitions) ###Output time: 1.24 s ###Markdown (2) Count the number of posts from each user partitionCount the number of posts from group 0, 1, ..., 6, plus the number of posts from users who are not in any partition. Assign users who are not in any partition to the group 7.Put the results of this step into a pair RDD `(group_id, count)` that is sorted by key. ###Code count = tweets.map(lambda x:partition_bc.value.get(x[0], 7)).countByValue().items() ###Output time: 53 ms ###Markdown (3) Print the post count using the `print_post_count` function we provided.It should print```Group 0 posted 81 tweetsGroup 1 posted 199 tweetsGroup 2 posted 45 tweetsGroup 3 posted 313 tweetsGroup 4 posted 86 tweetsGroup 5 posted 221 tweetsGroup 6 posted 400 tweetsGroup 7 posted 798 tweets``` ###Code def print_post_count(counts): for group_id, count in counts: print 'Group %d posted %d tweets' % (group_id, count) print print_post_count(count) ###Output Group 0 posted 81 tweets Group 1 posted 199 tweets Group 2 posted 45 tweets Group 3 posted 313 tweets Group 4 posted 86 tweets Group 5 posted 221 tweets Group 6 posted 400 tweets Group 7 posted 798 tweets None time: 930 µs ###Markdown Part 3: Tokens that are relatively popular in each user partition In this step, we are going to find tokens that are relatively popular in each user partition.We define the number of mentions of a token $t$ in a specific user partition $k$ as the number of users from the user partition $k$ that ever mentioned the token $t$ in their tweets. 
Note that even if some users might mention a token $t$ multiple times or in multiple tweets, a user will contribute at most 1 to the counter of the token $t$.Please make sure that the number of mentions of a token is equal to the number of users who mentioned this token but NOT the number of tweets that mentioned this token.Let $N_t^k$ be the number of mentions of the token $t$ in the user partition $k$. Let $N_t^{all} = \sum_{i=0}^7 N_t^{i}$ be the number of total mentions of the token $t$.We define the relative popularity of a token $t$ in a user partition $k$ as the log ratio between $N_t^k$ and $N_t^{all}$, i.e. \begin{equation}p_t^k = \log \frac{N_t^k}{N_t^{all}}.\end{equation}You can compute the relative popularity by calling the function `get_rel_popularity`. (0) Load the tweet tokenizer. ###Code # %load happyfuntokenizing.py #!/usr/bin/env python """ This code implements a basic, Twitter-aware tokenizer. A tokenizer is a function that splits a string of text into words. In Python terms, we map string and unicode objects into lists of unicode objects. There is not a single right way to do tokenizing. The best method depends on the application. This tokenizer is designed to be flexible and this easy to adapt to new domains and tasks. The basic logic is this: 1. The tuple regex_strings defines a list of regular expression strings. 2. The regex_strings strings are put, in order, into a compiled regular expression object called word_re. 3. The tokenization is done by word_re.findall(s), where s is the user-supplied string, inside the tokenize() method of the class Tokenizer. 4. When instantiating Tokenizer objects, there is a single option: preserve_case. By default, it is set to True. If it is set to False, then the tokenizer will downcase everything except for emoticons. The __main__ method illustrates by tokenizing a few examples. I've also included a Tokenizer method tokenize_random_tweet(). If the twitter library is installed (http://code.google.com/p/python-twitter/) and Twitter is cooperating, then it should tokenize a random English-language tweet. Julaiti Alafate: I modified the regex strings to extract URLs in tweets. """ __author__ = "Christopher Potts" __copyright__ = "Copyright 2011, Christopher Potts" __credits__ = [] __license__ = "Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License: http://creativecommons.org/licenses/by-nc-sa/3.0/" __version__ = "1.0" __maintainer__ = "Christopher Potts" __email__ = "See the author's website" ###################################################################### import re import htmlentitydefs ###################################################################### # The following strings are components in the regular expression # that is used for tokenizing. It's important that phone_number # appears first in the final regex (since it can contain whitespace). # It also could matter that tags comes after emoticons, due to the # possibility of having text like # # <:| and some text >:) # # Most imporatantly, the final element should always be last, since it # does a last ditch whitespace-based tokenization of whatever is left. # This particular element is used in a couple ways, so we define it # with a name: emoticon_string = r""" (?: [<>]? [:;=8] # eyes [\-o\*\']? # optional nose [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth | [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth [\-o\*\']? # optional nose [:;=8] # eyes [<>]? 
)""" # The components of the tokenizer: regex_strings = ( # Phone numbers: r""" (?: (?: # (international) \+?[01] [\-\s.]* )? (?: # (area code) [\(]? \d{3} [\-\s.\)]* )? \d{3} # exchange [\-\s.]* \d{4} # base )""" , # URLs: r"""http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+""" , # Emoticons: emoticon_string , # HTML tags: r"""<[^>]+>""" , # Twitter username: r"""(?:@[\w_]+)""" , # Twitter hashtags: r"""(?:\#+[\w_]+[\w\'_\-]*[\w_]+)""" , # Remaining word types: r""" (?:[a-z][a-z'\-_]+[a-z]) # Words with apostrophes or dashes. | (?:[+\-]?\d+[,/.:-]\d+[+\-]?) # Numbers, including fractions, decimals. | (?:[\w_]+) # Words without apostrophes or dashes. | (?:\.(?:\s*\.){1,}) # Ellipsis dots. | (?:\S) # Everything else that isn't whitespace. """ ) ###################################################################### # This is the core tokenizing regex: word_re = re.compile(r"""(%s)""" % "|".join(regex_strings), re.VERBOSE | re.I | re.UNICODE) # The emoticon string gets its own regex so that we can preserve case for them as needed: emoticon_re = re.compile(regex_strings[1], re.VERBOSE | re.I | re.UNICODE) # These are for regularizing HTML entities to Unicode: html_entity_digit_re = re.compile(r"&#\d+;") html_entity_alpha_re = re.compile(r"&\w+;") amp = "&amp;" ###################################################################### class Tokenizer: def __init__(self, preserve_case=False): self.preserve_case = preserve_case def tokenize(self, s): """ Argument: s -- any string or unicode object Value: a tokenize list of strings; conatenating this list returns the original string if preserve_case=False """ # Try to ensure unicode: try: s = unicode(s) except UnicodeDecodeError: s = str(s).encode('string_escape') s = unicode(s) # Fix HTML character entitites: s = self.__html2unicode(s) # Tokenize: words = word_re.findall(s) # Possible alter the case, but avoid changing emoticons like :D into :d: if not self.preserve_case: words = map((lambda x : x if emoticon_re.search(x) else x.lower()), words) return words def tokenize_random_tweet(self): """ If the twitter library is installed and a twitter connection can be established, then tokenize a random tweet. """ try: import twitter except ImportError: print "Apologies. The random tweet functionality requires the Python twitter library: http://code.google.com/p/python-twitter/" from random import shuffle api = twitter.Api() tweets = api.GetPublicTimeline() if tweets: for tweet in tweets: if tweet.user.lang == 'en': return self.tokenize(tweet.text) else: raise Exception("Apologies. I couldn't get Twitter to give me a public English-language tweet. Perhaps try again") def __html2unicode(self, s): """ Internal metod that seeks to replace all the HTML entities in s with their corresponding unicode characters. 
""" # First the digits: ents = set(html_entity_digit_re.findall(s)) if len(ents) > 0: for ent in ents: entnum = ent[2:-1] try: entnum = int(entnum) s = s.replace(ent, unichr(entnum)) except: pass # Now the alpha versions: ents = set(html_entity_alpha_re.findall(s)) ents = filter((lambda x : x != amp), ents) for ent in ents: entname = ent[1:-1] try: s = s.replace(ent, unichr(htmlentitydefs.name2codepoint[entname])) except: pass s = s.replace(amp, " and ") return s from math import log tok = Tokenizer(preserve_case=False) def get_rel_popularity(c_k, c_all): return log(1.0 * c_k / c_all) / log(2) def print_tokens(tokens, gid = None): group_name = "overall" if gid is not None: group_name = "group %d" % gid print '=' * 5 + ' ' + group_name + ' ' + '=' * 5 for t, n in tokens: print "%s\t%.4f" % (t, n) print ###Output time: 4.32 ms ###Markdown (1) Tokenize the tweets using the tokenizer we provided above named `tok`. Count the number of mentions for each tokens regardless of specific user group.Call `print_count` function to show how many different tokens we have.It should print```Number of elements: 8979``` ###Code # unique_tokens = tweets.flatMap(lambda tweet: tok.tokenize(tweet[1])).distinct() splitter = lambda x: [(x[0],t) for t in x[1]] unique_tokens = tweets.map(lambda tweet: (tweet[0], tok.tokenize(tweet[1])))\ .flatMap(lambda t: splitter(t))\ .distinct() ut1 = unique_tokens.map(lambda x: ((partition_bc.value.get(x[0],7), x[1]), 1)).cache() utr = ut1.reduceByKey(lambda x,y: x+y).cache() group_tokens = utr.map(lambda (x,y):(x[1],y)).reduceByKey(lambda x,y:x+y) ##format: (token, k_all) print_count(group_tokens) ###Output Number of elements: 8979 time: 1.36 s ###Markdown (2) Tokens that are mentioned by too few users are usually not very interesting. So we want to only keep tokens that are mentioned by at least 100 users. Please filter out tokens that don't meet this requirement.Call `print_count` function to show how many different tokens we have after the filtering.Call `print_tokens` function to show top 20 most frequent tokens.It should print```Number of elements: 52===== overall =====: 1386.0000rt 1237.0000. 865.0000\ 745.0000the 621.0000trump 595.0000x80 545.0000xe2 543.0000to 499.0000, 489.0000xa6 457.0000a 403.0000is 376.0000in 296.0000' 294.0000of 292.0000and 287.0000for 280.0000! 269.0000? 210.0000``` ###Code # splitter = lambda x: [(x[0],t) for t in x[1]] # tokens = tweets.map(lambda tweet: (tweet[0], tok.tokenize(tweet[1])))\ # .flatMap(lambda t: splitter(t))\ # .distinct() popular_tokens = group_tokens.filter(lambda x: x[1]>100).cache() # .sortBy(lambda x: x[1], ascending=False).cache() print_count(popular_tokens) print_tokens(popular_tokens.top(20, lambda x:x[1])) ###Output ===== overall ===== : 1386.0000 rt 1237.0000 . 865.0000 \ 745.0000 the 621.0000 trump 595.0000 x80 545.0000 xe2 543.0000 to 499.0000 , 489.0000 xa6 457.0000 a 403.0000 is 376.0000 in 296.0000 ' 294.0000 of 292.0000 and 287.0000 for 280.0000 ! 269.0000 ? 210.0000 time: 40.3 ms ###Markdown (3) For all tokens that are mentioned by at least 100 users, compute their relative popularity in each user group. Then print the top 10 tokens with highest relative popularity in each user group. In case two tokens have same relative popularity, break the tie by printing the alphabetically smaller one.**Hint:** Let the relative popularity of a token $t$ be $p$. The order of the items will be satisfied by sorting them using (-p, t) as the key. 
###Code # i want to join the partion on the top100 tweets!, so ineed to get it in the form (uid, tweet) twg = sc.parallelize(partitions.items()).rightOuterJoin(tweets)\ .map(lambda (uid,(gid,tweet)): (uid,(7,tweet)) if gid<0 or gid>6 else (uid,(gid,tweet))).cache() def group_score(gid): group_counts = utr.filter(lambda (x,y): x[0]==gid).map(lambda (x,y): (x[1], y)) merged = group_counts.join(popular_tokens) group_scores = merged.map(lambda (token,(V,W)): (token, get_rel_popularity(V,W))) return group_scores for _gid in range(0,8): _rdd = group_score(_gid) print_tokens(_rdd.top(10, lambda a:a[1]), gid=_gid) ###Output ===== group 0 ===== ... -3.5648 at -3.5983 hillary -4.0484 bernie -4.1430 not -4.2479 he -4.2574 i -4.2854 s -4.3309 are -4.3646 in -4.4021 ===== group 1 ===== #demdebate -2.4391 - -2.6202 clinton -2.7174 amp -2.7472 & -2.7472 ; -2.7980 sanders -2.8745 ? -2.9069 in -2.9615 if -2.9861 ===== group 2 ===== are -4.6865 and -4.7055 bernie -4.7279 at -4.7682 sanders -4.9449 in -5.0395 donald -5.0531 a -5.0697 #demdebate -5.1396 that -5.1599 ===== group 3 ===== #demdebate -1.3847 bernie -1.8535 sanders -2.1793 of -2.2356 t -2.2675 clinton -2.4179 hillary -2.4203 the -2.4330 xa6 -2.4962 that -2.5160 ===== group 4 ===== hillary -3.8074 sanders -3.9449 of -4.0199 what -4.0875 clinton -4.0959 at -4.1832 in -4.2095 a -4.2623 on -4.2854 ' -4.2928 ===== group 5 ===== cruz -2.3344 he -2.6724 will -2.7705 are -2.7796 the -2.8522 is -2.8822 that -2.9119 this -2.9542 for -2.9594 of -2.9804 ===== group 6 ===== @realdonaldtrump -1.1520 cruz -1.4657 n -1.4877 ! -1.5479 not -1.8904 xa6 -1.9172 xe2 -1.9973 / -2.0238 x80 -2.0240 will -2.0506 ===== group 7 ===== donald -0.6471 ... -0.7922 sanders -1.0380 what -1.1178 trump -1.1293 bernie -1.2044 you -1.2099 - -1.2253 if -1.2602 clinton -1.2681 time: 1.43 s
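###Markdown
As a quick sanity check of the relative-popularity metric (a sketch only, reusing the `get_rel_popularity` helper defined above): if 10 of the 40 users who ever mention a token belong to group $k$, then $p_t^k = \log_2(10/40) = -2$.

###Code
example_p = get_rel_popularity(10, 40)  # expected: -2.0

###Output
_____no_output_____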
2018/05/solution.ipynb
###Markdown
Advent of Code 2018 - Day 5 Input

###Code
# input_file = 'input-sample.txt'
input_file = 'input-full.txt'

# data example: dabAcCaCBAcCcaDA
input = ''
with open(input_file, 'r') as f:
    input = f.read().rstrip()

# Python limits recursion to 1000 recursive calls by default
import sys
if len(input) > 1000:
    sys.setrecursionlimit(len(input))

###Output
_____no_output_____
###Markdown
Part 1

###Code
def reduce_polymer(prefix, polymer):
    new_polymer = prefix
    prev_char = ''
    for i, curr_char in enumerate(polymer):
        if prev_char != curr_char.swapcase():
            new_polymer += prev_char
            prev_char = curr_char
        else:
            # merge polymer chunks without prev_char + curr_char
            new_polymer += polymer[i+1:]

            # call function recursively
            # split two chars back to account for new "reactions"
            split = len(prefix) + i - 2
            if split < 0:
                split = 0
            return reduce_polymer(new_polymer[:split], new_polymer[split:])

    new_polymer += prev_char
    return new_polymer

reduced_polymer = reduce_polymer('', input)
print 'polymer length: {}'.format(len(reduced_polymer))

###Output
polymer length: 9202
###Markdown
Part 2

###Code
min_polymer_len = len(input)

# get all unique chars in input
uniq_chars = ''.join(set(input.lower()))

for char in uniq_chars:
    # remove uppercase/lowercase char from polymer and reduce
    stripped_polymer = input.translate(None, '{}{}'.format(char, char.upper()))
    reduced_polymer = reduce_polymer('', stripped_polymer)
    if len(reduced_polymer) < min_polymer_len:
        min_polymer_len = len(reduced_polymer)

print 'min polymer len: {}'.format(min_polymer_len)

###Output
min polymer len: 6394
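###Markdown
Note (a sketch only, not part of the original solution): the same reduction can also be done iteratively with a stack, which sidesteps the recursion-limit handling above and visits each unit once.

###Code
def reduce_polymer_iterative(polymer):
    stack = []
    for unit in polymer:
        if stack and stack[-1] == unit.swapcase():
            stack.pop()          # adjacent opposite-case pair reacts and is removed
        else:
            stack.append(unit)
    return ''.join(stack)

# reduce_polymer_iterative(input) should give the same length as reduce_polymer('', input)

###Output
_____no_output_____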
nb/devices.ipynb
###Markdown Devices ###Code with open('startup.py') as f: exec(f.read()) ! ls -l /dev ! ls /sys ! ls /sys/devices ! ls /sys/devices/cpu ! ls /sys/devices/cpu/type ! file /sys/devices/cpu/type ! cat /sys/devices/cpu/type ! ls -l /sys/block ! udevadm info --query=all --name=/dev/sda ! dd if=/dev/zero bs=1k count=1 ! dmesg ! mount ! cat /proc/devices ###Output Character devices: 1 mem 4 /dev/vc/0 4 tty 4 ttyS 5 /dev/tty 5 /dev/console 5 /dev/ptmx 5 ttyprintk 6 lp 7 vcs 10 misc 13 input 14 mixer 14 dsp 14 adsp 21 sg 29 fb 81 video4linux 89 i2c 99 ppdev 108 ppp 116 alsa 128 ptm 136 pts 180 usb 189 usb_device 202 cpu/msr 204 ttyMAX 216 rfcomm 226 drm 235 media 236 aux 237 cec 238 lirc 239 mei 240 ttyDBC 241 hidraw 242 vfio 243 bsg 244 watchdog 245 remoteproc 246 ptp 247 pps 248 rtc 249 dma_heap 250 dax 251 dimmctl 252 ndctl 253 tpm 254 gpiochip Block devices: 7 loop 8 sd 9 md 11 sr 65 sd 66 sd 67 sd 68 sd 69 sd 70 sd 71 sd 128 sd 129 sd 130 sd 131 sd 132 sd 133 sd 134 sd 135 sd 253 device-mapper 254 mdp 259 blkext ###Markdown Hard Disks ###Code ! ls /dev | grep sd ! lsscsi ###Output /bin/bash: lsscsi: command not found ###Markdown CDs and DVDs ###Code ! ls /dev | grep ^sg ###Output sg0 ###Markdown Terminals ###Code columnize([s for s in run('ls /dev'.split()).split('\n') if s.startswith('tty')]) ###Output tty tty17 tty26 tty35 tty44 tty53 tty62 ttyS12 ttyS21 ttyS30 tty0 tty18 tty27 tty36 tty45 tty54 tty63 ttyS13 ttyS22 ttyS31 tty1 tty19 tty28 tty37 tty46 tty55 tty7 ttyS14 ttyS23 ttyS4 tty10 tty2 tty29 tty38 tty47 tty56 tty8 ttyS15 ttyS24 ttyS5 tty11 tty20 tty3 tty39 tty48 tty57 tty9 ttyS16 ttyS25 ttyS6 tty12 tty21 tty30 tty4 tty49 tty58 ttyprintk ttyS17 ttyS26 ttyS7 tty13 tty22 tty31 tty40 tty5 tty59 ttyS0 ttyS18 ttyS27 ttyS8 tty14 tty23 tty32 tty41 tty50 tty6 ttyS1 ttyS19 ttyS28 ttyS9 tty15 tty24 tty33 tty42 tty51 tty60 ttyS10 ttyS2 ttyS29 tty16 tty25 tty34 tty43 tty52 tty61 ttyS11 ttyS20 ttyS3 ###Markdown Audio Devices ###Code ! ls /dev/snd ###Output by-path hwC0D0 pcmC0D0c pcmC0D10p pcmC0D7p pcmC0D9p timer controlC0 hwC0D2 pcmC0D0p pcmC0D3p pcmC0D8p seq ###Markdown `udev` ###Code ! ls /dev/disk/by-id ! man 7 udev ! man 8 udevadm ###Output UDEVADM(8) udevadm UDEVADM(8) NAME udevadm - udev management tool SYNOPSIS udevadm [--debug] [--version] [--help] udevadm info [options] [devpath] udevadm trigger [options] [devpath] udevadm settle [options] udevadm control option udevadm monitor [options] udevadm test [options] devpath udevadm test-builtin [options] command devpath DESCRIPTION udevadm expects a command and command specific options. It controls the runtime behavior of systemd-udevd, requests kernel events, manages the event queue, and provides simple debugging mechanisms. OPTIONS -d, --debug Print debug messages to standard error. This option is implied in udevadm test and udevadm test-builtin commands. -h, --help Print a short help text and exit. udevadm info [options] [devpath|file|unit...] Query the udev database for device information. Positional arguments should be used to specify one or more devices. Each one may be a device name (in which case it must start with /dev/), a sys path (in which case it must start with /sys/), or a systemd device unit name (in which case it must end with ".device", see systemd.device(5)). -q, --query=TYPE Query the database for the specified type of device data. Valid TYPEs are: name, symlink, path, property, all. -p, --path=DEVPATH The /sys path of the device to query, e.g. [/sys]/class/block/sda. 
This option is an alternative to the positional argument with a /sys/ prefix. udevadm info --path=/class/block/sda is equivalent to udevadm info /sys/class/block/sda. -n, --name=FILE The name of the device node or a symlink to query, e.g. [/dev]/sda. This option is an alternative to the positional argument with a /dev/ prefix. udevadm info --name=sda is equivalent to udevadm info /dev/sda. -r, --root Print absolute paths in name or symlink query. -a, --attribute-walk Print all sysfs properties of the specified device that can be used in udev rules to match the specified device. It prints all devices along the chain, up to the root of sysfs that can be used in udev rules. -x, --export Print output as key/value pairs. Values are enclosed in single quotes. This takes effects only when --query=property or --device-id-of-file=FILE is specified. -P, --export-prefix=NAME Add a prefix to the key name of exported values. This implies --export. -d, --device-id-of-file=FILE Print major/minor numbers of the underlying device, where the file lives on. If this is specified, all positional arguments are ignored. -e, --export-db Export the content of the udev database. -c, --cleanup-db Cleanup the udev database. -w[SECONDS], --wait-for-initialization[=SECONDS] Wait for device to be initialized. If argument SECONDS is not specified, the default is to wait forever. -h, --help Print a short help text and exit. udevadm trigger [options] [devpath|file|unit] Request device events from the kernel. Primarily used to replay events at system coldplug time. Takes device specifications as positional arguments. See the description of info above. -v, --verbose Print the list of devices which will be triggered. -n, --dry-run Do not actually trigger the event. -t, --type=TYPE Trigger a specific type of devices. Valid types are: devices, subsystems. The default value is devices. -c, --action=ACTION Type of event to be triggered. Possible actions are "add", "remove", "change", "move", "online", "offline", "bind", and "unbind". Also, the special value "help" can be used to list the possible actions. The default value is "change". -s, --subsystem-match=SUBSYSTEM Trigger events for devices which belong to a matching subsystem. This option supports shell style pattern matching. When this option is specified more than once, then each matching result is ORed, that is, all the devices in each subsystem are triggered. -S, --subsystem-nomatch=SUBSYSTEM Do not trigger events for devices which belong to a matching subsystem. This option supports shell style pattern matching. When this option is specified more than once, then each matching result is ANDed, that is, devices which do not match all specified subsystems are triggered. -a, --attr-match=ATTRIBUTE=VALUE Trigger events for devices with a matching sysfs attribute. If a value is specified along with the attribute name, the content of the attribute is matched against the given value using shell style pattern matching. If no value is specified, the existence of the sysfs attribute is checked. When this option is specified multiple times, then each matching result is ANDed, that is, only devices which have all specified attributes are triggered. -A, --attr-nomatch=ATTRIBUTE=VALUE Do not trigger events for devices with a matching sysfs attribute. If a value is specified along with the attribute name, the content of the attribute is matched against the given value using shell style pattern matching. If no value is specified, the existence of the sysfs attribute is checked. 
When this option is specified multiple times, then each matching result is ANDed, that is, only devices which have none of the specified attributes are triggered. -p, --property-match=PROPERTY=VALUE Trigger events for devices with a matching property value. This option supports shell style pattern matching. When this option is specified more than once, then each matching result is ORed, that is, devices which have one of the specified properties are triggered. -g, --tag-match=PROPERTY Trigger events for devices with a matching tag. When this option is specified multiple times, then each matching result is ANDed, that is, devices which have all specified tags are triggered. -y, --sysname-match=NAME Trigger events for devices for which the last component (i.e. the filename) of the /sys path matches the specified PATH. This option supports shell style pattern matching. When this option is specified more than once, then each matching result is ORed, that is, all devices which have any of the specified NAME are triggered. --name-match=NAME Trigger events for devices with a matching device path. When this option is specified more than once, then each matching result is ORed, that is, all specified devices are triggered. -b, --parent-match=SYSPATH Trigger events for all children of a given device. When this option is specified more than once, then each matching result is ORed, that is, all children of each specified device are triggered. -w, --settle Apart from triggering events, also waits for those events to finish. Note that this is different from calling udevadm settle. udevadm settle waits for all events to finish. This option only waits for events triggered by the same command to finish. --wait-daemon[=SECONDS] Before triggering uevents, wait for systemd-udevd daemon to be initialized. Optionally takes timeout value. Default timeout is 5 seconds. This is equivalent to invoke invoking udevadm control --ping before udevadm trigger. -h, --help Print a short help text and exit. In addition, optional positional arguments can be used to specify device names or sys paths. They must start with /dev or /sys respectively. udevadm settle [options] Watches the udev event queue, and exits if all current events are handled. -t, --timeout=SECONDS Maximum number of seconds to wait for the event queue to become empty. The default value is 120 seconds. A value of 0 will check if the queue is empty and always return immediately. -E, --exit-if-exists=FILE Stop waiting if file exists. -h, --help Print a short help text and exit. See systemd-udev-settle.service(8) for more information. udevadm control option Modify the internal state of the running udev daemon. -e, --exit Signal and wait for systemd-udevd to exit. No option except for --timeout can be specified after this option. Note that systemd-udevd.service contains Restart=always and so as a result, this option restarts systemd-udevd. If you want to stop systemd-udevd.service, please use the following: systemctl stop systemd-udevd-control.socket systemd-udevd-kernel.socket systemd-udevd.service -l, --log-priority=value Set the internal log level of systemd-udevd. Valid values are the numerical syslog priorities or their textual representations: emerg, alert, crit, err, warning, notice, info, and debug. -s, --stop-exec-queue Signal systemd-udevd to stop executing new events. Incoming events will be queued. -S, --start-exec-queue Signal systemd-udevd to enable the execution of events. 
-R, --reload Signal systemd-udevd to reload the rules files and other databases like the kernel module index. Reloading rules and databases does not apply any changes to already existing devices; the new configuration will only be applied to new events. -p, --property=KEY=value Set a global property for all events. -m, --children-max=value Set the maximum number of events, systemd-udevd will handle at the same time. --ping Send a ping message to systemd-udevd and wait for the reply. This may be useful to check that systemd-udevd daemon is running. -t, --timeout=seconds The maximum number of seconds to wait for a reply from systemd-udevd. -h, --help Print a short help text and exit. udevadm monitor [options] Listens to the kernel uevents and events sent out by a udev rule and prints the devpath of the event to the console. It can be used to analyze the event timing, by comparing the timestamps of the kernel uevent and the udev event. -k, --kernel Print the kernel uevents. -u, --udev Print the udev event after the rule processing. -p, --property Also print the properties of the event. -s, --subsystem-match=string[/string] Filter kernel uevents and udev events by subsystem[/devtype]. Only events with a matching subsystem value will pass. When this option is specified more than once, then each matching result is ORed, that is, all devices in the specified subsystems are monitored. -t, --tag-match=string Filter udev events by tag. Only udev events with a given tag attached will pass. When this option is specified more than once, then each matching result is ORed, that is, devices which have one of the specified tags are monitored. -h, --help Print a short help text and exit. udevadm test [options] [devpath] Simulate a udev event run for the given device, and print debug output. -a, --action=ACTION Type of event to be simulated. Possible actions are "add", "remove", "change", "move", "online", "offline", "bind", and "unbind". Also, the special value "help" can be used to list the possible actions. The default value is "add". -N, --resolve-names=early|late|never Specify when udevadm should resolve names of users and groups. When set to early (the default), names will be resolved when the rules are parsed. When set to late, names will be resolved for every event. When set to never, names will never be resolved and all devices will be owned by root. -h, --help Print a short help text and exit. udevadm test-builtin [options] [command] [devpath] Run a built-in command COMMAND for device DEVPATH, and print debug output. -h, --help Print a short help text and exit. SEE ALSO udev(7), systemd-udevd.service(8) systemd 245 UDEVADM(8)
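###Markdown
As a small follow-up to the man page above (a sketch, reusing the `run` helper from `startup.py` that was used in the Terminals section), the `--query=property` mode returns udev's key/value database entries for a device:

###Code
# Query udev's property database for the disk inspected earlier with --query=all.
sda_props = run('udevadm info --query=property --name=/dev/sda'.split())
# columnize(sda_props.split('\n'))   # uncomment to pretty-print the KEY=value pairs

###Output
_____no_output_____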
Copy_of_Python_Conditional_(If)_Statements2LATEST.ipynb
###Markdown *To start working on this notebook, or any other notebook that we will use in this course, we will need to save our own copy of it. We can do this by clicking File > Save a Copy in Drive. We will then be able to make edits to our own copy of this notebook.* Python Programming: Conditional (If) Statements 1.0 Overview So as to write useful python programs, we almost always need the ability to check conditions and apply a certain operation accordingly. Conditional statements like the if statement provide us with that ability. 1.1 If ###Code # Example 1 # We can write an if statement by using the if keyword as shown: # x = 200 y = 100 if y < x: print("y is less than x") # Example 2 # Below is another example of an if statement # x = 1 y = 7 # let's find out if x is greater than 7 if x > y: print('yes') print('x is not greater than y') ###Output x is not greater than y ###Markdown 1.1 Challenges ###Code # Challenge 1 # In our first challenge, we will find out if y is less than 7 # If so, we will print out yes. # # We first declare and assign 1 to variable x and 7 to variable y x = 1 y = 7 if y<7: print('yes') print('no,y is not less than 7') # Challenge 2 # Let's find out if x == true. If so, we will print out yes. # We won't need to declare any variable in this challenge. # if x==True: print('Yes') # Challenge 3 # Let's find out if y is true. If so, we will print out yes. # We won't need to declare any variable in this challenge as well # if y==True: print('yes') print('no') ###Output no ###Markdown 1.2 Elif ###Code # Example 1 # The elif keyword is a keyword that will try another condition # if the previous condition was not true. # The example below shows how the elif keyword can be used. # x = 33 y = 33 if y < x: print("y is less than x") elif x == y: print("x and y are equal") # Example 2 # This is another example of how the elif keyword can be used. # choice = 'a' if choice == 'a': print("You chose 'a'.") elif choice == 'b': print("You chose 'b'.") elif choice == 'c': print("You chose 'c'.") ###Output You chose 'a'. ###Markdown 1.2 Challenges ###Code # Challenge 1 # Let's now write a program that reads an integer # from a user then displays a message indicating # whether the integer is even or odd. # number=int(input('enter a number: ')) if number%2==0: print('even number') elif number%2 !=0: print('odd number') # Challenge 2: # We now write a program that reads a input from a user. # If the user enters a, e, i, o or u then our program should # display a message indicating that the entered letter is a vowel. # If our user enters y then our program should display a message # indicating that sometimes y is a vowel, and sometimes y is a consonant. # Otherwise our program should display a message indicating that the # letter is a consonant. # #vowels=['a','e','i','o','u'] letter=input('enter an alphabet: ') if letter in ('a','A','b','B','c','C','d','D','e','E'): print('%s is a vowel' %letter) elif letter: print('somestimes %s is a vowel' %letter) print('alphabet %s is a consonant' %letter) # Challenge 3 # Let's now also write another program that asks for a number and computes the # square of that number. If the square is 100 or greater, print the squared value # and the word 'big'. Otherwise if the square is 50 or greater, print the # squared value and the word 'medium'. Otherwise just print 'too small to bother with'. 
# Hint: To compute squared, use x*x or x**2 # namba=int(input('enter a number: ')) sq_namba=namba**2 if sq_namba>=100: print("The squre of %d is big" %sq_namba) elif sq_namba>=50: print(sq_namba,'medium') elif sq_namba<=49: print('too small to bother with') ###Output enter a number: 10 The squre of 100 is big ###Markdown 1.3 Else ###Code # Example 1 # The else keyword will catch anything else which isn't caught by the preceding conditions. # A use case of else keyword is in the following example. # # Declaring our variables x = 134 y = 33 if y > x: print("y is greater than x") elif x == y: print("x and y are equal") else: print("x is greater than y") # Example 2 # The else keyword can also be used without elif as shown # # Prompting the user for an input temp_outside = float(input('What is the temperature outside? ')) if temp_outside > 12: print('No need for a sweater') else: print('You need an sweater') # Example 3 # We should also note that indentation is significantly important as shown # # Declaring our varioable x = 7 if x > 12: print ('x is greater than 12') if x >= 17: print ('x is also at least 17') elif x > 7: print ('x is greater than 7 but not greater than 12') else: print ('no condition matched') print ('so x is 7 or less') ###Output no condition matched so x is 7 or less ###Markdown 1.3 Challenges ###Code # Challenge 1 # Let's write a program that reads a month and day from the user. # If the month and day matches one of the holidays listed previously # then the program should display the Kenyan holiday’s name. # Otherwise our program should indicate that the entered month and day # do not correspond to a holiday. (You should use the else keyword) # holiday_dates=['Jan 01','Apr 10','April 13','May 01','June 01','July 31','Oct 10','Oct 20','Dec 12','Dec 25','Dec 26'] tarehe=input('Enter month and day: ') if tarehe in 'Jan 01': print('New Year') elif 'April 10': print('Good Friday') elif 'April 13': print('Easter Monday') elif 'May 01': print('Labour Day') elif 'June 01': print('Madaraka day') elif 'July 31': print('Eid al-Adha') elif 'Oct 10': print('Huduma Day') elif 'Oct 20': print('Mashujaa Day') elif 'Dec 12': print('Jamhuri Day') elif 'Dec 25': print('Christmas Day') elif 'Dec 26': print('Utamaduni Day') else: print('it is a normal working day') # Challenge 2 # Write a program that reads a wavelength from the user and reports its color. # Display an appropriate error message if the wavelength entered by the user # is outside of the visible spectrum. # (You can do external research on this challenge to determine the wavelength). # You should use the else keyword. # wave_length=float(input('enter the wavelength: ')) if 380<wave_length<700: print('Visible light wavelenght') elif wave_length<380: print('light below wave length requirement for visbility') else: print('The wavelenght does not meet threshold for visibility') # Challenge 3 # Let's create a program that reads the name of a month from the user as a string, # then displays the number of days in that month. # The length of a month varies from 28 to 31 days. # The program needs to also display “28 or 29 days” for February # so that leap years are taken into account. # The program should use the else keyword. 
# months=['January','February','March','April','May','June','July','August','September','October','November','December'] print(months) month=input('Enter the Month') if month=='February': print('has 28 or 29 days') elif month in ('April','June','September','November'): print('It is 30 days') else: print('it is 31 days') ###Output ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'] Enter the MonthJanuary it is 31 days ###Markdown 1.4 And ###Code # Example 1 # We can also use the and keyword to combine conditional statements. # # Declaring our variables x = 11 y = 10 z = 12 if x > y and z > x: print("Both conditions are True") # Example 2 # # Declaring our variables name = "James" age = 24 if name == "James" and age == 24: print("Your name is James, and you are also 24 years old.") # Example 3 # # Declaring our variables name = "George Maina" # Let's store our name in the name variable age = 12 # Let's store our age in the age variable if name == "James" and age == 24: print("Your name is James, and you are also 24 years old.") else: print("You are not James, who is 24 years old.") ###Output You are not James, who is 24 years old. ###Markdown 1.4 Challenges ###Code # Challenge 1: # Given the 3 sides of a triangle x, y and z, let's find out # whether the triangle is equilateral, isosceles or obtuse. # Note: An equilateral triagnle has all sides are equal, # isosceles means that two of the sides are equal but not the third one, # obtuse means all 3 are different. The user will be prompted # to provide the values of x, y and z # x=int(input('Enter measurement of x: ')) y=int(input('Enter the measurement of y: ')) z=int(input('Enter the measurement of z: ')) if x==y==z: print('This is an equilateral triangle') elif x==y and x!=z and y!=z: print('This is an isocles triangle') elif x!=y and y!=z and x!=z: print('This is an obtuse triangle') ###Output _____no_output_____ ###Markdown 1.5 Or ###Code # Example 1 # We can also use the if keyword to combine conditional statements. # # Declaring our variables x = 10 y = 11 z = 9 if x > y or x > z: print("At least one of the conditions is True") # Example 2 # # Declaring our variables x = int(input("Enter value of x = ")) y = int(input("Enter value of y = ")) if (x >= 15) or (y <= 25): print("x >= 15 or y <= 25 so if statement is True") else: print("Value of x < 15 and y > 25 so if statement is False!") # Example 3 # # Declaring our variables w = 10 x = 15 y = 20 z = 25 if (w == 10 or x == 15) and (y == 16 or z == 25): print("If statement is True") else: print("False") ###Output If statement is True ###Markdown 1.5 Challenges ###Code # Challenge 1 # Using the Or operator, let's create a function # that determines the type of a triangle # based on the lengths of its sides. A user will be inputting # the sides of the triange # x=int(input('Enter measurement of x: ')) y=int(input('Enter the measurement of y: ')) z=int(input('Enter the measurement of z: ')) if x==y==z: print('This is an equilateral triangle') elif x==y or x!=z and y!=z: print('This is an isocles triangle') elif (x!=y or y!=z) and x!=z: print('This is an obtuse triangle') # Challenge 2 # Using the Or operator, let's create a function # that checks whether a number given # by a user is greater than 30 or greater than 100. # x=int(input('Enteer a number: ')) if x>=30 or x>=100: print('number greater than limit') else: print('number within threshold') # Challenge 3 # Let's write a program that prompts a letter from a user. 
# When any of the letters a, e, i, o or u are entered, # then your program should display a message indicating that the entered letter is a vowel. # If the user enters y then your program should display a message # indicating that sometimes y is a vowel, and sometimes y is a consonant. # Else the program should display a message indicating that the letter is a consonant. # letter=input('enter an alphabet: ') if letter in ('a','A','b','B','c','C','d','D','e','E'): print('%s is a vowel' %letter) elif letter: print('some times %s is a vowel' %letter) print('alphabet %s is a consonant' %letter) ###Output enter an alphabet: y some times y is a vowel alphabet y is a consonant
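###Markdown
A tightened sketch of the vowel check from the last challenge (an illustration only, not the graded answer): test the actual vowels, handle `y` separately, and only then fall through to the consonant case, so that a single message is produced per letter.

###Code
def classify_letter(letter):
    letter = letter.lower()
    if letter in ('a', 'e', 'i', 'o', 'u'):
        return '%s is a vowel' % letter
    elif letter == 'y':
        return 'sometimes y is a vowel, and sometimes y is a consonant'
    else:
        return '%s is a consonant' % letter

# classify_letter('y') -> 'sometimes y is a vowel, and sometimes y is a consonant'

###Output
_____no_output_____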
Week 12 _ Regression and PCA/more_lectures/2.Matrix_notation_and_operations.ipynb
###Markdown
Matrices
-----
Notation and Operations Matrix notationMatrix Notation is a notation system that allows succinct representation of complex operations, such as a change of basis.

* **Matlab** is based on Matrix Notation.
* **Python**: similar functionality by using **numpy**

Recall that a **vector** can be represented as a one dimensional array of numbers. A **matrix** is a two dimensional rectangle of numbers. A matrix consists of rows, indexed from the top to the bottom, and of columns, indexed from the left to the right, as is described in the figure. A matrix with $m$ rows and $n$ columns is said to be an "$m$ by $n$" matrix. In numpy we will say that the **shape** of the matrix is $(m,n)$. We will also use the LaTeX notation $M_{m \times n}$ to indicate that $M$ is an $m \times n$ matrix. Transposing a MatrixAt times it is useful to switch the rows and column dimensions of matrices. Consider the matrix$$\begin{equation} A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}\end{equation}$$The transpose of A is$$\begin{equation} A^{\mathsf{T}}=\begin{bmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \\ \end{bmatrix}\end{equation}$$

###Code
import numpy as np

# The .reshape command reorganizes the elements of a matrix into a new shape
A = np.array(range(6))
print('A=',A)
B=A.reshape(2,3)

print("B is a 2X3 matrix:\n",B)
print("the shape of B is:",B.shape)

print("The transpose of B is\n",B.T)
print("the shape of B.T is:",B.T.shape)

###Output
_____no_output_____
###Markdown
Vectors as matrices.When using matrix notation, vectors can be represented as either [row or column vectors](https://en.wikipedia.org/wiki/Row_and_column_vectors). In a matrix context, a vector $\vec{v}$ is denoted by a bold-face letter: ${\bf v}$ for a column vector and ${\bf v}^\top$ for a row vector. * By default a vector is represented as a **column vector**, which is a matrix consisting of a single column:$$\begin{equation}{\bf v}= \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_d \end{bmatrix}\end{equation}$$ * If $\vec{v}$ is a column vector then the **transpose** of $\vec{v}$, denoted by $\vec{v}^\top$, is a **row vector**, which is a matrix consisting of a single row:$$\begin{equation}{\bf v}^{\top}= \begin{bmatrix} v_1 & v_2 & \cdots & v_d \end{bmatrix}\end{equation}$$ A vector as a matrixRow and Column vectors can be thought of as matrices.* The column vector ${\bf v}$ is a $d \times 1$ matrix.* The row vector ${\bf v}^{\top}$ is a $1 \times d$ matrix. A matrix as a collection of vectorsMatrices can be represented as a collection of vectors.
For example, consider the $2\times 3$ matrix ${\bf A}=\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23} \end{bmatrix}$ We can represent ${\bf A}=\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23} \end{bmatrix}$ as vectors in one of two ways:* As a row of column vectors:$$ {\bf A} = \begin{bmatrix} {\bf c}_1 , {\bf c}_2 , {\bf c}_3 \end{bmatrix}$$where$$ {\bf c}_1=\begin{bmatrix} a_{11}\\ a_{21} \end{bmatrix}, {\bf c}_2=\begin{bmatrix} a_{12}\\ a_{22} \end{bmatrix}, {\bf c}_3=\begin{bmatrix} a_{13}\\ a_{23} \end{bmatrix}$$ * Or as a column of row vectors: ${\bf A} = \begin{bmatrix} {\bf r}_1 \\ {\bf r}_2 \end{bmatrix}$ where $ {\bf r}_1=\begin{bmatrix} a_{11}, a_{12}, a_{13} \end{bmatrix}, {\bf r}_2=\begin{bmatrix} a_{21}, a_{22}, a_{23} \end{bmatrix}, $ ###Code A=np.array(range(6)).reshape(2,3) print('A=\n',A) print("Splitting A into columns:") Columns=np.split(A,3,axis=1) for i in range(len(Columns)): print('column %d'%i) print(Columns[i]) A_recon=np.concatenate(Columns,axis=1) print('reconstructing the matrix from the columns:') print(A_recon) print('Checking that the reconstruction is equal to the original') print(A_recon==A) print("Splitting A into rows:") Rows=np.split(A,2,axis=0) for i in range(len(Rows)): print('row %d'%i) print(Rows[i]) A_recon=np.concatenate(Rows,axis=0) print('reconstructing the matrix from the rows:') print(A_recon) print('Checking that the reconstruction is equal to the original') print(A_recon==A) ###Output _____no_output_____ ###Markdown Numpy functionsBeyond the commands `reshape`, `split` and `concatanate` numpy has a rich set of functions to manipulate arrays, for a complete list see [Numpy Array Manipulation routines](https://docs.scipy.org/doc/numpy/reference/routines.array-manipulation.html) Matrix - scalar operationsYou can add/subtract multiply/divide a scalar from a matrix Adding a scalar value to a matrixLet $A$=$\bigl[ \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr]$. Here is how we would add the scalar $3$ to $A$:$$\begin{equation} A+3=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}+3 =\begin{bmatrix} a_{11}+3 & a_{12}+3 \\ a_{21}+3 & a_{22}+3 \end{bmatrix}\end{equation}$$ Subtracting a scalar value to a matrixSubstraction is similar$$\begin{equation} A-3=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}-3 =\begin{bmatrix} a_{11}-3 & a_{12}-3 \\ a_{21}-3 & a_{22}-3 \end{bmatrix}\end{equation}$$ Product of a scalar and a matrixMultiplication is also similar$$\begin{equation} 3 \times A = 3 \times \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} 3a_{11} & 3a_{12} \\ 3a_{21} & 3a_{22} \end{bmatrix}\end{equation}$$ Dividing a matrix by a scalarDivision by $a$ is the same as multiplying by $1/a$. Note that you cn divide a matrix by a scalar, but dividing a scalar by a matrix is not defined.$$\begin{equation} A/5= A \times \frac{1}{5}= \begin{bmatrix} a_{11}/5 & a_{12}/5 \\ a_{21}/5 & a_{22}/5 \end{bmatrix}\end{equation}$$ ###Code # Some examples of matrix-scalar operations using numpy print('A=\n',A) print('A+3=3+A=\n',A+3) # addition print('A*3=\n',A*3) # product print('A/2=\n',A/2) # integer division print('A/2.=\n',A/2.) # floating point division ###Output _____no_output_____ ###Markdown Adding and subtracting two matricesLet $A$=$\bigl[ \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr]$ and $B$=$\bigl[ \begin{smallmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{smallmatrix} \bigr]$. 
To compute $A-B$, subtract each element of B from the corresponding element of A:$ A -B = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} - \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} $$ = \begin{bmatrix} a_{11}-b_{11} & a_{12}-b_{12} \\ a_{21}-b_{21} & a_{22}-b_{22} \end{bmatrix}$ Addition works exactly the same way:$ A + B = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} $ $ = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}$ An important point to know about matrix addition and subtraction is that it is only defined when $A$ and $B$ are of the same size. Here, both are $2 \times 2$. Since operations are performed element by element, these two matrices must be conformable- and for addition and subtraction that means they must have the same numbers of rows and columns. I like to be explicit about the dimensions of matrices for checking conformability as I write the equations, so write$$A_{2 \times 2} + B_{2 \times 2}= \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}_{2 \times 2}$$Notice that the result of a matrix addition or subtraction operation is always of the same dimension as the two operands.Let's define another matrix, B, that is also $2 \times 2$ and add it to A: ###Code B = np.random.randn(2,2) print(B) try: result = A + B except Exception as e: print(e) ###Output _____no_output_____ ###Markdown Matrix-Matrix products The dot product of two vectors* Recall that a vector is just a skinny matrix.* Consider the dot product $(1,2,3) \cdot (1,1,0) = 1 \times 1 + 2 \times 1 +3 \times 0= 3$. Conventions of dot product in matrix notation: * The first vector is a row vector and the second vector is a column vector. 
* There is no operator ($\cdot$) between the two vectors $$ \begin{bmatrix} 1,2,3 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = 1 \times 1 + 2 \times 1 +3 \times 0= 3$$
The dot product of a matrix and a vector
To multiply the matrix ${\bf A}=\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23} \end{bmatrix}$ by the column vector ${\bf c}=\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}$, we think of ${\bf A}$ as consisting of two row vectors:${\bf A} = \begin{bmatrix} {\bf r}_1 \\ {\bf r}_2 \end{bmatrix}$ where $ {\bf r}_1=\begin{bmatrix} a_{11}, a_{12}, a_{13} \end{bmatrix}, {\bf r}_2=\begin{bmatrix} a_{21}, a_{22}, a_{23} \end{bmatrix} $ and take the dot products of ${\bf r}_1,{\bf r}_2$ with ${\bf c}$ to create a column vector of dimension 2:${\bf A} {\bf c} = \begin{bmatrix} {\bf r}_1 {\bf c} \\ {\bf r}_2 {\bf c} \end{bmatrix} = \begin{bmatrix} a_{11}c_1 + a_{12}c_2 + a_{13} c_3 \\ a_{21}c_1 + a_{22}c_2 + a_{23} c_3 \end{bmatrix}$
Dot product of two matrices
Multiplying a matrix and a column vector can be generalized to multiplying two matrices. To see how, consider a matrix ${\bf C}$ of size $2 \times 3$ and a matrix ${\bf A}$ of size $3 \times 2$:$$\begin{equation} {\bf A}=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} , {\bf C} = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \end{bmatrix} \end{equation}$$ To compute ${\bf AC}$ we think of ${\bf A}$ as a column of row vectors:${\bf A} =\begin{bmatrix} {\bf a}_1 \\ {\bf a}_2 \\ {\bf a}_3 \end{bmatrix} $ and of ${\bf C}$ as a row of column vectors: ${\bf C} =\begin{bmatrix} {\bf c}_1, {\bf c}_2, {\bf c}_3 \end{bmatrix} $ ${\bf AC}$ is the matrix generated from taking the dot product of each row vector in ${\bf A}$ with each column vector in ${\bf C}$:${\bf AC}= \begin{bmatrix} {\bf a}_1 \\ {\bf a}_2 \\ {\bf a}_3 \end{bmatrix} \begin{bmatrix} {\bf c}_1, {\bf c}_2, {\bf c}_3 \end{bmatrix}= \begin{bmatrix} {\bf a}_1 \cdot {\bf c}_1 & {\bf a}_1 \cdot {\bf c}_2 & {\bf a}_1 \cdot {\bf c}_3 \\ {\bf a}_2 \cdot {\bf c}_1 & {\bf a}_2 \cdot {\bf c}_2 & {\bf a}_2 \cdot {\bf c}_3 \\ {\bf a}_3 \cdot {\bf c}_1 & {\bf a}_3 \cdot {\bf c}_2 & {\bf a}_3 \cdot {\bf c}_3 \end{bmatrix} = \begin{bmatrix} a_{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \\ a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \\ a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23} \end{bmatrix}$ For more information on the topic of matrix multiplication, see http://en.wikipedia.org/wiki/Matrix_multiplication.
###Code
# Matrix - Vector product
A = np.arange(6).reshape((3,2))
C = np.array([-1,1])
print(A.shape)
print(C.shape)
print(np.dot(A,C.T))

# Matrix - Matrix product
# Define the matrices A and C
A = np.arange(6).reshape((3,2))
C = np.random.randn(2,2)
print('A=\n',A)
print('C=\n',C)
###Output
_____no_output_____
###Markdown
We will use the numpy dot operator to perform these multiplications. You can use it in two ways to yield the same result:
###Code
print('A.dot(C)=\n',A.dot(C))
print('np.dot(A,C)=\n',np.dot(A,C))
###Output
_____no_output_____
###Markdown
Conformity
Note that the number of columns in the first matrix has to be equal to the number of rows in the second matrix. Otherwise, the matrix product is not defined.
When this condition holds we say that the two matrices **conform**.Taking the product of two matrices that don't conform results in an exception: ###Code np.dot(C,A) ###Output _____no_output_____ ###Markdown Orthonormal matrices and change of Basis**As was explained in the notebook: "Linear Algebra Review"** We say that the vectors $\vec{u}_1,\vec{u}_2,\ldots,\vec{u}_d \in R^d$ form an **orthonormal basis** of $R^d$. If:* **Normality:** $\vec{u}_1,\vec{u}_2,\ldots,\vec{u}_d$ are unit vectors: $\forall 1 \leq i \leq d: \vec{u}_i \cdot \vec{u}_i =1 $* **Orthogonality:** Every pair of vectors are orthogonal: $\forall 1 \leq i\neq j \leq d: \vec{u}_i \cdot \vec{u}_j =0 $** Orthonormal basis can be used to rotate the vector space:*** $\vec{v}$ is **represented** as a list of $d$ dot products: $$[\vec{v}\cdot\vec{u_1},\vec{v}\cdot\vec{u_2},\ldots,\vec{v}\cdot\vec{u_d}]$$* $\vec{v}$ is **reconstructed** by summing its projections on the basis vectors:$$\vec{v} = (\vec{v}\cdot\vec{u_1})\vec{u_1} + (\vec{v}\cdot\vec{u_2})\vec{u_2} + \cdots + (\vec{v}\cdot\vec{u_d})\vec{u_d}$$ Change of Basis using matrix notationTo use matrix notation, we think of $\vec{u}_i$ as a row vector:$$ {\bf u}_i=\begin{bmatrix} u_{i1}, u_{i2},\ldots, u_{id} \end{bmatrix},$$ We can combine the orthonormal vectors to create an *orthonormal matrix*$$ {\bf U} = \begin{bmatrix} {\bf u}_1 \\ {\bf u}_2 \\ \vdots \\ {\bf u}_d \end{bmatrix}= \begin{bmatrix} u_{11}, u_{12},\ldots, u_{1d} \\ u_{21}, u_{22},\ldots, u_{2d} \\ \vdots\\u_{d1}, u_{d2},\ldots, u_{dd} \end{bmatrix}$$Orthonormality: ${\bf UU^{\top} = I}$ Using this notation, the representation of a column vector $\bf v$ in the orthonormal basis corresponsing to the rows of ${\bf U}$ is equal to $${\bf Uv} = \begin{bmatrix} {\bf u}_1 {\bf v} \\ {\bf u}_2 {\bf v} \\ \vdots \\ {\bf u}_d {\bf v} \end{bmatrix}$$ And the reconstruction of $\bf v$ is equal to ${\bf U U^{\mathsf{T}} v}$ The Identity MatrixThe identity matrix behaves like the number $1$: The dot product of any matrix ${\bf A}$ by the identity matrix ${\bf I}$ yields ${\bf A}$.$$ {\bf A I = I A = A} $$ The identity matrix is zero everywhere other than the diagonal, where it is $1$.$${\bf I} = \begin{bmatrix} 1, 0,\ldots, 0 \\ 0, 1,\ldots, 0 \\ \ddots \\0,0,\ldots, 1 \end{bmatrix}$$**Excercise:** Check that ${\bf A I = I A = A}$. Inverting a Matrix Recall that the multiplicative inverse of the number $a$ is $a^{-1}=1/a$The property of $a^{-1}$ is that $a a^{-1}=1$.Recall also that $0$ does not have a multiplicative inverse. **Some** square matrices ${\bf A}$ have a multiplicative inverse ${\bf A^{-1}}$ such that ${\bf A A^{-1} = A^{-1} A =I}$ Finding the inverse of a matrix is called *inverting* the matrix. An $n\times n$ matrix $\bf A$ represents a linear transformation from $R^n$ to $R^n$. If the matrix is [**invertible**](https://en.wikipedia.org/wiki/Invertible_matrix) then there is another transformation ${\bf A}^{-1}$ that represents the inverse transformation, such that for any column vctor ${\bf v} \in R^n$:$${\bf A}^{-1}{\bf A}{\bf v} = {\bf A}{\bf A}^{-1}{\bf v} = {\bf v} $$ Inverting a 2X2 matrixConsider the square $2 \times 2$ matrix ${\bf A} = \bigl( \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{smallmatrix} \bigr)$. 
The inverse of matrix ${\bf A}$ is$$\begin{equation} {\bf A}^{-1}=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}^{-1}=\frac{1}{a_{11}a_{22}-a_{12}a_{21}} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}\end{equation}$$ provided that $a_{11}a_{22}-a_{12}a_{21} \neq 0$. **Exercise:** Check that $ {\bf A A^{-1}=A^{-1} A=I }$ For more information on inverting matrices, see this page: http://en.wikipedia.org/wiki/Matrix_inversion.
###Code
# An example of computing the inverse using numpy.linalg.inv
# note, we need a square matrix (# rows = # cols), use C:
C = np.random.randn(2,2)
print("C=\n",C)
C_inverse = np.linalg.inv(C)
print("C_inverse=\n",C_inverse)
###Output
_____no_output_____
###Markdown
Checking that $C\times C^{-1} = I$:
###Code
I = np.eye(2)
print("identity matrix=\n",I)
print("C.dot(C_inverse)-I=\n",C.dot(C_inverse)-I)
print("C_inverse.dot(C)-I=\n",C_inverse.dot(C)-I)
###Output
_____no_output_____
###Markdown
Singular matrices
Not all matrices have an inverse. Those that do not are called **singular**.
###Code
C=np.array([[1,0],[1,0]])
print("C=\n",C)
try:
    C_inverse = np.linalg.inv(C)
except np.linalg.LinAlgError:
    print('C cannot be inverted: it is a singular matrix')
###Output
_____no_output_____
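###Markdown
As a small illustrative addition (not part of the original notebook, and assuming numpy is already imported as `np` as in the cells above), the sketch below ties the $2\times 2$ formula to the code: it computes the determinant with `np.linalg.det` and only inverts the matrix when the determinant is non-zero, which is exactly the condition that makes the factor $\frac{1}{a_{11}a_{22}-a_{12}a_{21}}$ well defined.
###Code
# Illustrative sketch: test invertibility via the determinant before calling inv
for M in [np.random.randn(2,2), np.array([[1,0],[1,0]])]:
    print("M=\n", M)
    det = np.linalg.det(M)
    if np.isclose(det, 0):
        print("determinant is (numerically) zero -> M is singular, skipping inversion")
    else:
        M_inv = np.linalg.inv(M)
        print("M.dot(M_inv) (should be close to I):\n", M.dot(M_inv))
###Output
_____no_output_____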
Class_0/Introduction to Jupyter Notebook.ipynb
###Markdown Introduction to Jupyter Notebook Introduction to Jupyter Notebook + Plus for Bullet- Minus for Bullet Welcome to `Anthonio's Notebook` **`Bold`** ###Code print ('Hello world') k = ('A','B','C','D','E') k[4]= 'F' k z = [1,2,3,4,5] star = {'z': z, 'k': k} star star.keys() star.values() def perimeter(R): return 2*3.142*R perimeter(50) def p(S): return 5*((3.142)**2)*S p(5) ###Output _____no_output_____
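###Markdown
One small note on the cell above that runs `k[4] = 'F'`: tuples are immutable, so that assignment raises a `TypeError`. The cell below is an illustrative sketch (not part of the original class notes) of the usual workaround, converting the tuple to a list, changing it, and converting back.
###Code
# Tuples cannot be modified in place; convert to a list, change it, then back to a tuple
k = ('A','B','C','D','E')
k_list = list(k)      # mutable copy
k_list[4] = 'F'       # item assignment works on a list
k = tuple(k_list)     # convert back if a tuple is still wanted
k
###Output
_____no_output_____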
tamrin/ZTM/pandas-exercises.ipynb
###Markdown
Pandas Practice
This notebook is dedicated to practicing different tasks with pandas. The solutions are available in a solutions notebook, however, you should always try to figure them out yourself first. It should be noted there may be more than one different way to answer a question or complete an exercise. Exercises are based off (and directly taken from) the quick introduction to pandas notebook. Different tasks will be detailed by comments or text. For further reference and resources, it's advised to check out the [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/).
###Code
# Import pandas
import pandas as pd

# Create a series of three different colours
Colour = pd.Series(["red", "blue", "black"])
Colour

# View the series of different colours
Colour

# Create a series of three different car types and view it
Cars = pd.Series(["Toyota", "BMW", "Tesla"])
Cars

# Combine the Series of cars and colours into a DataFrame
df1 = pd.DataFrame({"Cars":Cars, "Colour":Colour})
df1

# Import "../data/car-sales.csv" and turn it into a DataFrame
df = pd.read_csv("../data/car-sales.csv")
df
###Output
_____no_output_____
###Markdown
**Note:** Since you've imported `../data/car-sales.csv` as a DataFrame, we'll now refer to this DataFrame as 'the car sales DataFrame'.
###Code
# Export the DataFrame you created to a .csv file
df.to_csv("df-exercise.csv")

# Find the different datatypes of the car data DataFrame
df.dtypes

# Describe your current car sales DataFrame using describe()
df.describe()

# Get information about your DataFrame using info()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 5 columns):
 #   Column         Non-Null Count  Dtype
---  ------         --------------  -----
 0   Make           10 non-null     object
 1   Colour         10 non-null     object
 2   Odometer (KM)  10 non-null     int64
 3   Doors          10 non-null     int64
 4   Price          10 non-null     object
dtypes: int64(2), object(3)
memory usage: 528.0+ bytes
###Markdown
What does it show you?
###Code
# Create a Series of different numbers and find the mean of them
pd.Series([2, 4, 6, 8, 10]).mean()

# Create a Series of different numbers and find the sum of them
pd.Series([2, 4, 6, 8, 10]).sum()

# List out all the column names of the car sales DataFrame
df.columns

# Find the length of the car sales DataFrame
len(df)

# Show the first 5 rows of the car sales DataFrame
df.head(5)

# Show the first 7 rows of the car sales DataFrame
df.head(7)

# Show the bottom 5 rows of the car sales DataFrame
df.tail(5)

# Use .loc to select the row at index 3 of the car sales DataFrame
df.loc[3]

# Use .iloc to select the row at position 3 of the car sales DataFrame
df.iloc[3]
###Output
_____no_output_____
###Markdown
Notice how they're the same? Why do you think this is? Check the pandas documentation for [.loc](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html) and [.iloc](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html). Think about a different situation each could be used for and try them out; one such situation is sketched in the next cell.
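###Markdown
The following cell is an illustrative sketch (not part of the original exercise set) of a situation where `.loc` and `.iloc` stop agreeing: once the rows are shuffled, the index labels no longer match the row positions, so `.loc[3]` looks up the *label* 3 while `.iloc[3]` takes whatever row now sits at *position* 3.
###Code
# Shuffle the rows so labels and positions no longer line up
shuffled = df.sample(frac=1, random_state=42)
print(shuffled.loc[3])   # row whose index label is 3 (same row as before shuffling)
print(shuffled.iloc[3])  # row currently in position 3
###Output
_____no_output_____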
###Code # Select the "Odometer (KM)" column from the car sales DataFrame df ["Odometer (KM)"] # Find the mean of the "Odometer (KM)" column in the car sales DataFrame df["Odometer (KM)"].mean() # Select the rows with over 100,000 kilometers on the Odometer df[df["Odometer (KM)"] > 100000] # Create a crosstab of the Make and Doors columns pd.crosstab(df["Make"], df["Doors"]) # Group columns of the car sales DataFrame by the Make column and find the average df.groupby("Make").mean() # Import Matplotlib and create a plot of the Odometer column # Don't forget to use %matplotlib inline %matplotlib inline import matplotlib.pyplot as plt # Create a histogram of the Odometer column using hist() df["Odometer (KM)"].hist() # Try to plot the Price column using plot() df["Price"].plot() ###Output _____no_output_____ ###Markdown Why didn't it work? Can you think of a solution?You might want to search for "how to convert a pandas string columb to numbers".And if you're still stuck, check out this [Stack Overflow question and answer on turning a price column into integers](https://stackoverflow.com/questions/44469313/price-column-object-to-int-in-pandas).See how you can provide the example code there to the problem here. ###Code # Remove the punctuation from price column df["Price"] = df["Price"].str.replace('[\$\,\.]', '').astype(int) # Check the changes to the price column df["Price"].plot() # Remove the two extra zeros at the end of the price column df["Price"] = df["Price"] / 100 df["Price"] # Check the changes to the Price column df["Price"] df["Price"].plot() # Change the datatype of the Price column to integers df["Price"].dtype # Lower the strings of the Make column df["Make"].str.lower() ###Output _____no_output_____ ###Markdown If you check the car sales DataFrame, you'll notice the Make column hasn't been lowered.How could you make these changes permanent?Try it out. ###Code # Make lowering the case of the Make column permanent df ["Make"] = df["Make"].str.lower() # Check the car sales DataFrame df ###Output _____no_output_____ ###Markdown Notice how the Make column stays lowered after reassigning.Now let's deal with missing data. ###Code # Import the car sales DataFrame with missing data ("../data/car-sales-missing-data.csv") dfm = pd.read_csv("../data/car-sales-missing-data.csv") # Check out the new DataFrame dfm ###Output _____no_output_____ ###Markdown Notice the missing values are represented as `NaN` in pandas DataFrames.Let's try fill them. ###Code # Fill the Odometer column missing values with the mean of the column inplace dfm["Odometer"] = dfm["Odometer"].fillna(dfm ["Odometer"].mean()) # View the car sales missing DataFrame and verify the changes dfm # Remove the rest of the missing data inplace dfm.dropna(inplace = True) # Verify the missing values are removed by viewing the DataFrame dfm ###Output _____no_output_____ ###Markdown We'll now start to add columns to our DataFrame. 
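###Markdown
Before working through the cells below, here is a short illustrative sketch (not part of the original exercises) of two other ways to add columns besides plain `frame["col"] = ...` assignment: `insert()` places a column at a chosen position, and `assign()` returns a new DataFrame with extra columns. The column names used here are made up for the example.
###Code
# Two alternatives to plain column assignment, shown on a copy of the car sales DataFrame
demo = df.copy()
demo.insert(0, "Sale ID", range(len(demo)))  # insert a column at position 0
demo = demo.assign(Wheels=4)                 # assign() returns a new DataFrame with the extra column
demo.head()
###Output
_____no_output_____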
###Code # Create a "Seats" column where every row has a value of 5 dfm["Seats"] = 5 dfm # Create a column called "Engine Size" with random values between 1.3 and 4.5 # Remember: If you're doing it from a Python list, the list has to be the same length # as the DataFrame Engine_Size = [1.3, 1.5, 2, 2.2, 4, 4.5] dfm["Engine Size"] = Engine_Size dfm # Create a column which represents the price of a car per kilometer # Then view the DataFrame dfm ["Price per km"] = dfm["Engine Size"] * 1000 dfm # Remove the last column you added using .drop() dfm = dfm.drop("Price per km", axis=1) dfm # Shuffle the DataFrame using sample() with the frac parameter set to 1 # Save the the shuffled DataFrame to a new variable dfs = df.sample(frac=1) dfs ###Output _____no_output_____ ###Markdown Notice how the index numbers get moved around. The [`sample()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html) function is a great way to get random samples from your DataFrame. It's also another great way to shuffle the rows by setting `frac=1`. ###Code # Reset the indexes of the shuffled DataFrame dfs.reset_index(drop= True) ###Output _____no_output_____ ###Markdown Notice the index numbers have been changed to have order (start from 0). ###Code # Change the Odometer values from kilometers to miles using a Lambda function # Then view the DataFrame df ["Odometer (KM)"] = df ["Odometer (KM)"].apply(lambda x: x / 1.6) df # Change the title of the Odometer (KM) to represent miles instead of kilometers df = df.rename(columns = {"Odometer (KM)": "Odometer (MI)"}) df ###Output _____no_output_____
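###Markdown
A closing illustrative note (not part of the original exercises): the kilometres-to-miles conversion above uses `apply` with a lambda, which runs Python code row by row. Column arithmetic in pandas is vectorised, so dividing the whole column at once is usually shorter and faster. A hedged sketch, starting from a fresh copy of the CSV so the already-converted `df` is left untouched:
###Code
# Vectorised version of the km -> miles conversion and the column rename
df_fresh = pd.read_csv("../data/car-sales.csv")
df_fresh["Odometer (KM)"] = df_fresh["Odometer (KM)"] / 1.6   # whole-column arithmetic, no lambda needed
df_fresh = df_fresh.rename(columns={"Odometer (KM)": "Odometer (MI)"})
df_fresh.head()
###Output
_____no_output_____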
9.30.20-ClassSession.ipynb
###Markdown Working with Lists - Part 3Sample lists: - 'lastName','firstName', 'timesAtBat', 'hits', 'homeRuns', 'runs', 'rbi', 'walks', 'years', 'careerTimesAtBat', 'careerHits', 'careerHomeRuns', 'careerRuns', 'careerRBI', 'careerWalks'- 'Balboni','Steve',512,117,29,54,88,43,6,1750,412,100,204,276,155- 'Bochte','Bruce',407,104,6,57,43,65,12,5233,1478,100,643,658,653- 'Bream','Sid',522,140,16,73,77,60,4,730,185,22,93,106,86 ###Code header = ['lastName','firstName', 'timesAtBat', 'hits', 'homeRuns', 'runs', 'rbi', 'walks', 'years', 'careerTimesAtBat', 'careerHits', 'careerHomeRuns', 'careerRuns', 'careerRBI', 'careerWalks'] steve = ['Balboni','Steve',512,117,29,54,88,43,6,1750,412,100,204,276,155] bruce = ['Bochte','Bruce',407,104,6,57,43,65,12,5233,1478,100,643,658,653] sid = ['Bream','Sid',522,140,16,73,77,60,4,730,185,22,93,106,86] players =[ ('Balboni','Steve',[512,117,29,54,88,43,6,1750,412,100,204,276,155]), ('Bochte','Bruce',[407,104,6,57,43,65,12,5233,1478,100,643,658,653]), ('Bream','Sid',[522,140,16,73,77,60,4,730,185,22,93,106,86]) ] data = players[0][2] #sum(data) sum(players[0][2]) ###Output _____no_output_____ ###Markdown Passing Lists to FunctionsThis is a **very important** discussion - you have to be careful how you manipulate lists that you pass into/return from functions. Python passes functions as a pointer to memory, not the actual value (this is called pass by reference). If you change a list that you passed into a function, it changes the list in memory. This example below does this. ###Code data def print_list(d): d[0]=1001 for i in range(len(d)): print(d[i], end=' ') def print_list(d): local_data = d.copy() local_data[0]=1001 for i in range(len(local_data)): print(local_data[i], end=' ') print_list(data) #local_data <-- local_data is out of scope, this will not work. data ###Output _____no_output_____ ###Markdown List FunctionsThe list object contains numerous functions we can use that operate on lists.- **sort()** I typically use sorted() - sort() permanently sorts.- **index()** - returns the index of item if it exists in the list- **insert()** -allows us to insert into the list at any location- **append()** -allows us to append a single element to the end of the list.- **extend()** -allows us to append several elements to the end of a list (in the form of a list)- **remove()** - removes an element from the list if it exists (first occurrence)- **clear()** - removes all elements from the list- **count()** - returns how many times a particular element appears in the list- **reverse()** - reverses a list (this is permanent like sort)- **copy()** - returns a copy of a list that is independent of the original list (it can be changed without changing the original list.These are not really functions, but logically belong with the others - **in / not in** - returns True/False as to whether an element is in the list.- **any / all** - any returns true if any element in the list is true; all returns true if all elements are true ###Code numbers = [36,76,89,76,34,56,23] #numbers.sort() #<-- this will permanently sort your list - be careful using it. 
###Output _____no_output_____ ###Markdown **index()** - returns the index of item if it exists in the list - otherwise returns error message ###Code numbers.index(89) ###Output _____no_output_____ ###Markdown **insert()** -allows us to insert into the list at any location (only 1 element) ###Code numbers.insert(6,4096) ###Output _____no_output_____ ###Markdown **append()** -allows us to append a single element to the end of the list. ###Code numbers.append(1000) ###Output _____no_output_____ ###Markdown **extend()** -allows us to append several elements to the end of a list (in the form of a list) ###Code numbers.extend([1000,10000,100000,1000000]) ###Output _____no_output_____ ###Markdown **remove()** - removes an element from the list if it exists (first occurrence) ###Code numbers.remove(1000) ###Output _____no_output_____ ###Markdown **clear()** - removes all elements from the list ###Code #numbers.clear() <--the list will still be in memory, it will just be empty. ###Output _____no_output_____ ###Markdown **count()** - returns how many times a particular element appears in the list ###Code numbers.extend([1000,10000,100000,1000000]) numbers.count(1000) ###Output _____no_output_____ ###Markdown **reverse()** - reverses a list (this is permanent like sort) ###Code numbers.reverse() print(numbers) ###Output _____no_output_____ ###Markdown **copy()** - returns a copy of a list that is independent of the original list (it can be changed without changing the original list. ###Code n2 = numbers.copy() print(n2) ###Output _____no_output_____ ###Markdown **in / not in** - returns True/False as to whether an element is in the list. ###Code if 1000 in numbers: print(numbers) if 1000 not in numbers: print(numbers) ###Output _____no_output_____ ###Markdown **any / all** - any returns true if any element in the list is true; all returns true if all elements are true ###Code booleans = [True,True,False] any(booleans) all(booleans) ###Output _____no_output_____
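###Markdown
To round off the list discussion above, here is a short illustrative cell (not from the original class session) contrasting `sorted()` with `list.sort()`: `sorted()` returns a new list and leaves the original untouched, while `sort()` reorders the list in place and returns `None`.
###Code
numbers = [36,76,89,76,34,56,23]
print(sorted(numbers))   # new, sorted list
print(numbers)           # original order unchanged
print(numbers.sort())    # sorts in place and returns None
print(numbers)           # now permanently sorted
###Output
_____no_output_____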
docs/processing.ipynb
###Markdown
Processing
GateNLP does not limit how documents are stored in collections, iterated over, or how functions are applied to them to modify them. However, GateNLP provides a number of abstractions to help with this in an organized fashion:
* Corpus, DocumentSource, DocumentDestination: this is how collections of documents which can be read/written, read only, or written only are represented in GateNLP. See [Corpora](corpora).
* Annotator: an annotator is something that processes a document and returns the processed document. Most annotators simply return the modified document, but the GateNLP abstractions also allow returning None to indicate that a document is filtered out, or a list of documents (e.g. for splitting up documents). Any Python callable can be used as an Annotator, but GateNLP annotators in addition may implement the methods `start` (to perform some start-of-corpus processing), `finish` (to perform some end-of-corpus processing and return some over-the-corpus result) and `reduce` (to merge several partial over-the-corpus results from parallel processing into a single result).
* Pipeline: a special annotator that encapsulates several annotators. When the pipeline is run on a document, all the contained annotators are run in sequence.
* Executor: an object that runs some Annotator on a corpus or document source and optionally stores the results back into the corpus or into a document destination.

Annotators
Any callable that takes a document and returns that document can act as an annotator. Note that an annotator usually modifies the annotations or features of the document it receives. This happens in place, so the annotator would not have to return the document. However, it is a convention that annotators always return the document that got modified to indicate this to downstream annotators or document destinations. If an annotator returns a list, the result of processing is instead the documents in that list, which could be none, one, or several. This convention allows a processing pipeline to filter documents or generate several documents from a single one. Let's create a simple annotator as a function and apply it to a corpus of documents, which in the simplest form is just a list of documents:
###Code
import os
from gatenlp import Document
from gatenlp.processing.executor import SerialCorpusExecutor

def annotator1(doc):
    doc.annset().add(2,3,"Type1")
    return doc

texts = [
    "Text for the first document.",
    "Text for the second document. This one has two sentences.",
    "And another one.",
]

corpus = [Document(txt) for txt in texts]

# everything happens in memory here, so we can ignore the returned document
for doc in corpus:
    annotator1(doc)

for doc in corpus:
    print(doc)
###Output
Document(Text for the first document.,features=Features({}),anns=['':1])
Document(Text for the second document. This one has two sentences.,features=Features({}),anns=['':1])
Document(And another one.,features=Features({}),anns=['':1])
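###Markdown
The text above mentions that, beyond plain callables, GateNLP annotators may also implement `start`, `finish` and `reduce`. The cell below is only a rough sketch of what such an annotator could look like; the class name, the counting logic and the returned dictionary are made up for illustration, and the only GateNLP call used is the `doc.annset().add(...)` call already shown above.
###Code
class CountingAnnotator:
    """Hypothetical annotator: adds one annotation per document and counts the documents it has seen."""
    def start(self):
        # called once before a corpus is processed
        self.n_docs = 0
    def __call__(self, doc):
        doc.annset().add(2,3,"Type1")
        self.n_docs += 1
        return doc
    def finish(self):
        # called once after the corpus; returns an over-the-corpus result
        return {"documents_processed": self.n_docs}

ann = CountingAnnotator()
ann.start()
for doc in corpus:
    ann(doc)
print(ann.finish())
###Output
_____no_output_____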
ARIMA_Approach_to_Index_2k18_Stocks.ipynb
###Markdown 1. Importing the necessary packages ###Code ## Base packages import pandas as pd import numpy as np import matplotlib.pyplot as plt from math import sqrt import seaborn as sns sns.set() import warnings warnings.filterwarnings("ignore") ## For statistical modelling and ARIMA import statsmodels.graphics.tsaplots as sgt import statsmodels.tsa.stattools as sts from statsmodels.tsa.arima_model import ARIMA from scipy.stats.distributions import chi2 print("All the necessary packages have imported successfully!") ###Output All the necessary packages have imported successfully! ###Markdown 2. Importing the Dataset ###Code raw_csv_data = pd.read_csv("https://raw.githubusercontent.com/MainakRepositor/Datasets-/master/Index2018.csv") df_comp=raw_csv_data.copy() df_comp.head(10) ###Output _____no_output_____ ###Markdown 3. Preprocessing the data ###Code df_comp.date = pd.to_datetime(df_comp.date, dayfirst = True) df_comp.set_index("date", inplace=True) df_comp=df_comp.asfreq('b') df_comp=df_comp.fillna(method='ffill') df_comp['market_value']=df_comp.ftse size = int(len(df_comp)*0.8) df, df_test = df_comp.iloc[:size], df_comp.iloc[size:] ###Output _____no_output_____ ###Markdown 4. The LLR Test ###Code def LLR_test(mod_1, mod_2, DF = 1): L1 = mod_1.fit().llf L2 = mod_2.fit().llf LR = (2*(L2-L1)) p = chi2.sf(LR, DF).round(3) return p ###Output _____no_output_____ ###Markdown 5. Creating Returns ###Code df['returns'] = df.market_value.pct_change(1)*100 ###Output _____no_output_____ ###Markdown 6. ARIMA(1,1,1) ###Code model_ar_1_i_1_ma_1 = ARIMA(df.market_value, order=(1,1,1)) results_ar_1_i_1_ma_1 = model_ar_1_i_1_ma_1.fit() results_ar_1_i_1_ma_1.summary() ###Output _____no_output_____ ###Markdown 7. Residuals of the ARIMA(1,1,1) ###Code df['res_ar_1_i_1_ma_1'] = results_ar_1_i_1_ma_1.resid sgt.plot_acf(df.res_ar_1_i_1_ma_1, zero = False, lags = 40) plt.title("ACF Of Residuals for ARIMA(1,1,1)",size=20) plt.show() df['res_ar_1_i_1_ma_1'] = results_ar_1_i_1_ma_1.resid.iloc[:] sgt.plot_acf(df.res_ar_1_i_1_ma_1[1:], zero = False, lags = 40) plt.title("ACF Of Residuals for ARIMA(1,1,1)",size=20) plt.show() ###Output _____no_output_____ ###Markdown 8. 
Higher-Lag ARIMA Models ###Code model_ar_1_i_1_ma_2 = ARIMA(df.market_value, order=(1,1,2)) results_ar_1_i_1_ma_2 = model_ar_1_i_1_ma_2.fit() model_ar_1_i_1_ma_3 = ARIMA(df.market_value, order=(1,1,3)) results_ar_1_i_1_ma_3 = model_ar_1_i_1_ma_3.fit() model_ar_2_i_1_ma_1 = ARIMA(df.market_value, order=(2,1,1)) results_ar_2_i_1_ma_1 = model_ar_2_i_1_ma_1.fit() model_ar_3_i_1_ma_1 = ARIMA(df.market_value, order=(3,1,1)) results_ar_3_i_1_ma_1 = model_ar_3_i_1_ma_1.fit() model_ar_3_i_1_ma_2 = ARIMA(df.market_value, order=(3,1,2)) results_ar_3_i_1_ma_2 = model_ar_3_i_1_ma_2.fit(start_ar_lags=5) print("ARIMA(1,1,1): \t LL = ", results_ar_1_i_1_ma_1.llf, "\t AIC = ", results_ar_1_i_1_ma_1.aic) print("ARIMA(1,1,2): \t LL = ", results_ar_1_i_1_ma_2.llf, "\t AIC = ", results_ar_1_i_1_ma_2.aic) print("ARIMA(1,1,3): \t LL = ", results_ar_1_i_1_ma_3.llf, "\t AIC = ", results_ar_1_i_1_ma_3.aic) print("ARIMA(2,1,1): \t LL = ", results_ar_2_i_1_ma_1.llf, "\t AIC = ", results_ar_2_i_1_ma_1.aic) print("ARIMA(3,1,1): \t LL = ", results_ar_3_i_1_ma_1.llf, "\t AIC = ", results_ar_3_i_1_ma_1.aic) print("ARIMA(3,1,2): \t LL = ", results_ar_3_i_1_ma_2.llf, "\t AIC = ", results_ar_3_i_1_ma_2.aic) df['res_ar_1_i_1_ma_3'] = results_ar_1_i_1_ma_3.resid sgt.plot_acf(df.res_ar_1_i_1_ma_3[1:], zero = False, lags = 40) plt.title("ACF Of Residuals for ARIMA(1,1,3)", size=20) plt.show() model_ar_5_i_1_ma_1 = ARIMA(df.market_value, order=(5,1,1)) results_ar_5_i_1_ma_1 = model_ar_5_i_1_ma_1.fit(start_ar_lags=11) model_ar_6_i_1_ma_3 = ARIMA(df.market_value, order=(6,1,3)) results_ar_6_i_1_ma_3 = model_ar_6_i_1_ma_3.fit(start_ar_lags=11) results_ar_5_i_1_ma_1.summary() print("ARIMA(1,1,3): \t LL = ", results_ar_1_i_1_ma_3.llf, "\t AIC = ", results_ar_1_i_1_ma_3.aic) print("ARIMA(5,1,1): \t LL = ", results_ar_5_i_1_ma_1.llf, "\t AIC = ", results_ar_5_i_1_ma_1.aic) print("ARIMA(6,1,3): \t LL = ", results_ar_6_i_1_ma_3.llf, "\t AIC = ", results_ar_6_i_1_ma_3.aic) df['res_ar_5_i_1_ma_1'] = results_ar_5_i_1_ma_1.resid sgt.plot_acf(df.res_ar_5_i_1_ma_1[1:], zero = False, lags = 40) plt.title("ACF Of Residuals for ARIMA(5,1,1)", size=20) plt.show() print("\n") plt.plot(df.res_ar_5_i_1_ma_1) plt.title("Plots Residuals for ARIMA(5,1,1)", size=20) plt.show() ###Output _____no_output_____ ###Markdown 9. Models with Higher Levels of Integration ###Code df['delta_prices']=df.market_value.diff(1) model_delta_ar_1_i_1_ma_1 = ARIMA(df.delta_prices[1:], order=(1,0,1)) results_delta_ar_1_i_1_ma_1 = model_delta_ar_1_i_1_ma_1.fit() results_delta_ar_1_i_1_ma_1.summary() ###Output _____no_output_____ ###Markdown 10. ADF Results ###Code sts.adfuller(df.delta_prices[1:]) model_ar_1_i_2_ma_1 = ARIMA(df.market_value, order=(1,2,1)) results_ar_1_i_2_ma_1 = model_ar_1_i_2_ma_1.fit(start_ar_lags=10) results_ar_1_i_2_ma_1.summary() df['res_ar_1_i_2_ma_1'] = results_ar_1_i_2_ma_1.resid.iloc[:] sgt.plot_acf(df.res_ar_1_i_2_ma_1[2:], zero = False, lags = 40) plt.title("ACF Of Residuals for ARIMA(1,2,1)",size=20) plt.show() ###Output _____no_output_____ ###Markdown 11. ARIMAX Approach ###Code model_ar_1_i_1_ma_1_Xspx = ARIMA(df.market_value, exog = df.spx, order=(1,1,1)) results_ar_1_i_1_ma_1_Xspx = model_ar_1_i_1_ma_1_Xspx.fit() results_ar_1_i_1_ma_1_Xspx.summary() ###Output _____no_output_____
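###Markdown
Since the notebook defines an `LLR_test` helper near the top, the cell below is an illustrative sketch (not part of the original analysis) of using it to compare nested models fitted above; the degrees of freedom equal the number of extra AR/MA terms in the larger model. Note that the helper refits both models internally.
###Code
# Hedged sketch: likelihood-ratio comparison of nested ARIMA models using the helper defined above
print("ARIMA(1,1,1) vs ARIMA(1,1,3), p-value:", LLR_test(model_ar_1_i_1_ma_1, model_ar_1_i_1_ma_3, DF=2))
print("ARIMA(1,1,1) vs ARIMA(2,1,1), p-value:", LLR_test(model_ar_1_i_1_ma_1, model_ar_2_i_1_ma_1, DF=1))
###Output
_____no_output_____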
analysis/DES/galaxy_galaxy_lensing.ipynb
###Markdown GALAXY-GALAXY LENSING ANGULAR POWER SPECTRA ###Code %matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy.interpolate import interp1d from astropy.cosmology import FlatLambdaCDM plt.rcParams.update({ 'text.usetex': False, 'font.family': 'serif', 'legend.frameon': False, 'legend.handlelength': 1.5, }) ###Output _____no_output_____ ###Markdown 1. Load DES-Y1 quantities * Metadata ###Code nbin = 4 nbinl = 5 bin_a, bin_b = np.tril_indices(nbin) bin_a += 1 bin_b += 1 binl_a, binl_b = np.tril_indices(nbinl) binl_a += 1 binl_b += 1 bin_A = np.array([1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,4,4,4,4,4]) binl_B = np.array([1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1,2,3,4,5]) ###Output _____no_output_____ ###Markdown * Cosmology ###Code cosmo = {} with open('../../data/des-y1-test/cosmological_parameters/values.txt') as cosmo_values: for line in cosmo_values: if line: key, val = line.partition('=')[::2] cosmo[key.strip()] = float(val) cosmo_astropy = FlatLambdaCDM(H0=cosmo['hubble'], Ob0=cosmo['omega_b'], Om0= cosmo['omega_m'], Tcmb0=2.7) ###Output _____no_output_____ ###Markdown * Distance functions ###Code zdM = np.loadtxt('../../data/des-y1-test/distances/z.txt') dM = np.loadtxt('../../data/des-y1-test/distances/d_m.txt') ###Output _____no_output_____ ###Markdown * Matter power spectrum ###Code zp = np.loadtxt('../../data/des-y1-test/matter_power_nl/z.txt') k_h = np.loadtxt('../../data/des-y1-test/matter_power_nl/k_h.txt') p_h = np.loadtxt('../../data/des-y1-test/matter_power_nl/p_k.txt') xp = np.interp(zp, zdM, dM) k0, kf = k_h[15]*(cosmo['hubble']/100), k_h[-1]*(cosmo['hubble']/100) k_h2 = np.logspace(np.log10(k0), np.log10(kf), 1024) p_h2 = np.exp([np.interp(np.log(k_h2), np.log(k_h), np.log(p)) for p in p_h]) k = k_h2*cosmo['h0'] p = p_h2*cosmo['h0']**(-3) ###Output _____no_output_____ ###Markdown * Unequal-time power spectra ###Code import sys sys.path.append("../../unequalpy") from skypy.power_spectrum import growth_function from approximation import growth_midpoint from matter import matter_power_spectrum_1loop as P1loop from matter import matter_unequal_time_power_spectrum as Puetc from approximation import geometric_approx as Pgeom from approximation import midpoint_approx as Pmid d = np.loadtxt('../../data/Pfastpt.txt',unpack=True) ks, pk, p22, p13 = d[:, 0], d[:, 1], d[:, 2], d[:, 3] p11_int = interp1d( ks, pk, fill_value="extrapolate") p22_int = interp1d( ks, p22, fill_value="extrapolate") p13_int = interp1d( ks, p13, fill_value="extrapolate") powerk = (p11_int, p22_int, p13_int) g = growth_function(np.asarray(zp), cosmo_astropy)/growth_function(0, cosmo_astropy) gm = growth_midpoint(np.asarray(zp), np.asarray(zp), growth_function, cosmo_astropy) pet = P1loop(k, g, powerk) puet = Puetc(k, g, g, powerk) pgeom = Pgeom(pet) pmid = Pmid(k, gm, powerk) ###Output _____no_output_____ ###Markdown 2. 
The correlation function ###Code import corfu r_uet, xi_uet = corfu.ptoxi(k, puet, q=0.2) r_limb, xi_limb = corfu.ptoxi(k, pet, q=0, limber=True) r_geom, xi_geom = corfu.ptoxi(k, pgeom, q=0) r_mid, xi_mid = corfu.ptoxi(k, pmid, q=0) plt.figure(figsize=(6,4)) plt.loglog(r_uet, +xi_uet[0,0], 'k', label='Unequal-time', lw=1) plt.loglog(r_uet, -xi_uet[0,0], '--k', lw=1) plt.loglog(r_limb, +xi_limb[0], '--b', label='Limber', lw=1) plt.loglog(r_limb, -xi_limb[0], ':b', lw=1) plt.loglog(r_geom, +xi_geom[0,0], '--r', label='Geometric', lw=1) plt.loglog(r_geom, -xi_geom[0,0], ':r', lw=1) plt.loglog(r_mid, +xi_mid[0,0], '--g', label='Midpoint', lw=2) plt.loglog(r_mid, -xi_mid[0,0], ':g', lw=2) plt.legend() plt.xlabel('r') plt.ylabel(r'$\xi(r)$') plt.show() ###Output _____no_output_____ ###Markdown 3. Lensing filters ###Code from lens_filter import filter_galaxy_clustering, lensing_efficiency, filter_convergence ###Output _____no_output_____ ###Markdown * Redshift distribution of galaxiesSource: ###Code zn = np.loadtxt('../../data/des-y1-test/nz_source/z.txt') nz = [np.loadtxt('../../data/des-y1-test/nz_source/bin_%d.txt' % i) for i in range(1, nbin+1)] xf = np.interp(zn, zdM, dM) ###Output _____no_output_____ ###Markdown Lensed: ###Code nlz = [np.loadtxt('../../data/des-y1-test/nz_lens/bin_%d.txt' % i) for i in range(1, nbinl+1)] ###Output _____no_output_____ ###Markdown * Lensing efficiency ###Code q = [lensing_efficiency(xf, zn, n) for n in nz] ###Output _____no_output_____ ###Markdown * Convergence ###Code fc = [filter_convergence(xf, zn, qq, cosmo_astropy) for qq in q] ###Output _____no_output_____ ###Markdown * Galaxy clustering ###Code bias_DESY1 = [1.45, 1.55, 1.65, 1.8, 2.0] fg = [filter_galaxy_clustering(xf, zn, n, bias, cosmo_astropy) for n,bias in zip(nlz, bias_DESY1)] ###Output _____no_output_____ ###Markdown 4. 
Angular correlation function ###Code theta = np.logspace(-3, np.log10(np.pi), 2048) theta_arcmin = np.degrees(theta)*60 w_limb = [corfu.eqt(theta, (xf, fc[a-1]*fg[b-1]), (xp, r_limb, xi_limb)) for a, b in zip(bin_A, binl_B)] w_geom = [corfu.uneqt(theta, (xf, fc[a-1]), (xf, fg[b-1]), (xp, xp, r_geom, xi_geom), True) for a, b in zip(bin_A, binl_B)] w_uet = [corfu.uneqt(theta, (xf, fc[a-1]), (xf, fg[b-1]), (xp, xp, r_uet, xi_uet), True) for a, b in zip(bin_A, binl_B)] w_mid = [corfu.uneqt(theta, (xf, fc[a-1]), (xf, fg[b-1]), (xp, xp, r_mid, xi_mid), True) for a, b in zip(bin_A, binl_B)] fig, axes = plt.subplots(4, 4, figsize=(14, 10), sharex=True, sharey=True) for ax in axes.ravel(): ax.axis('off') for i, (a, b) in enumerate(zip(bin_a, bin_b)): ax = axes[a-1, b-1] ax.axis('on') ax.loglog(theta_arcmin, +w_limb[i], '--b', label='Limber', lw=1) ax.loglog(theta_arcmin, -w_limb[i], ':b', lw=1) ax.loglog(theta_arcmin, +w_geom[i], '--r', label='Geometric', lw=1) ax.loglog(theta_arcmin, -w_geom[i], ':r', lw=1) ax.loglog(theta_arcmin, +w_mid[i], '--g', label='Midpoint', lw=2) ax.loglog(theta_arcmin, -w_mid[i], ':g', lw=2) ax.loglog(theta_arcmin, +w_uet[i], 'k', label='Unequal-time', lw=1) ax.loglog(theta_arcmin, -w_uet[i], '--k', lw=1) ax.set_xlim(5e0, 1e4) # ax.set_ylim(5e-11, 2e-2) ax.set_xticks([1e1, 1e2, 1e3, 1e4]) ax.tick_params(axis='y', which='minor', labelcolor='none') string = '({0},{1})'.format(a,b) ax.text(2e3,1e-4,string) axes[0, 0].legend(markerfirst=False, loc='lower left') ax = fig.add_subplot(111, frameon=False) ax.set_xlabel(r'Angular separation, $\theta$ [arcmin]', size=12) ax.set_ylabel(r'Angular correlation, $w(\theta)$', size=12) ax.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False) ax.tick_params(axis='y', pad=15) fig.tight_layout(pad=0.5) # fig.savefig('plots/w_galaxy_lensing.pdf', bbox_inches='tight') plt.show() ###Output _____no_output_____ ###Markdown 5. Angular power spectrum analysis 5.1. Angular power spectra ###Code l_limb, cl_limb = np.transpose([corfu.wtocl(theta, w, lmax=2000) for w in w_limb], (1, 0, 2)) l_geom, cl_geom = np.transpose([corfu.wtocl(theta, w, lmax=2000) for w in w_geom], (1, 0, 2)) l_uet, cl_uet = np.transpose([corfu.wtocl(theta, w, lmax=2000) for w in w_uet], (1, 0, 2)) l_mid, cl_mid = np.transpose([corfu.wtocl(theta, w, lmax=2000)for w in w_mid], (1, 0, 2)) fig, axes = plt.subplots(4, 4, figsize=(14, 12), sharex=True, sharey=True) for ax in axes.ravel(): ax.axis('off') for i, (a, b) in enumerate(zip(bin_a, bin_b)): ax = axes[a-1, b-1] ax.axis('on') ax.loglog(l_limb[i], cl_limb[i], ':b', label='Limber', lw=1) ax.loglog(l_geom[i], cl_geom[i], ':r', label='Geometric', lw=1) ax.loglog(l_mid[i], cl_mid[i], '--g', label='Midpoint', lw=2) ax.loglog(l_uet[i], cl_uet[i], 'k', label='Unequal-time', lw=1) ax.set_xlim(5, 2e3) # ax.set_ylim(2e-10, 5e-7) ax.set_xticks([1e1, 1e2, 1e3]) string = '({0},{1})'.format(a,b) ax.text(6e2,1e-7,string) axes[0, 0].legend(markerfirst=False, loc='lower left') ax = fig.add_subplot(111, frameon=False) ax.set_xlabel(r'Angular mode, $\ell$', size=12) ax.set_ylabel(r'Angular power, $C_{\ell}$', size=12) ax.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False) ax.tick_params(axis='y', pad=12) fig.tight_layout(pad=0.5) # fig.savefig('plots/cl_galaxy_lensing.pdf', bbox_inches='tight') plt.show() ###Output _____no_output_____ ###Markdown 5.2. 
Relative error ###Code frac_limb = cl_limb/cl_uet frac_geom = cl_geom/cl_uet frac_mid = cl_mid/cl_uet frac_uet = cl_uet/cl_uet fig, axes = plt.subplots(4, 4, figsize=(14,12), sharex=True, sharey=True) for ax in axes.ravel(): ax.axis('off') for i, (a, b) in enumerate(zip(bin_a, bin_b)): ax = axes[a-1, b-1] ax.axis('on') ax.semilogx(l_limb[i], frac_limb[i], 'b', label='Limber', lw=1) ax.semilogx(l_geom[i], frac_geom[i], ':r', label='Geometric', lw=1) ax.semilogx(l_mid[i], frac_mid[i], '--g', label='Midpoint', lw=2) ax.semilogx(l_uet[i], frac_uet[i], ':k', label='Unequal-time', lw=0.5) ax.set_xlim(5, 2e3) ax.set_ylim(0.95, 1.06) ax.set_yticks([0.96, 0.98, 1, 1.02, 1.04]) string = '({0},{1})'.format(a,b) ax.text(5e2,1.05,string) ax.fill_between(l_limb[i], 0.99, 1.01, alpha=0.05) ax.fill_between(l_limb[i], 0.98, 1.02, alpha=0.05) axes[0, 0].legend(markerfirst=False, loc='upper left') ax = fig.add_subplot(111, frameon=False) ax.set_xlabel(r'Angular mode, $\ell$', size=12) ax.set_ylabel(r'$C_{\ell}^{approx} / C_{\ell}^{uetc}$', size=12) ax.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False) ax.tick_params(axis='y', pad=12) fig.tight_layout(pad=0.5) # fig.savefig('plots/fraction_cl_galaxy_lensing.pdf', bbox_inches='tight') plt.show() ###Output _____no_output_____
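###Markdown
As a small numerical complement to the figure above (an illustrative addition, not part of the original analysis), the cell below prints the largest fractional deviation of each approximation from the unequal-time result over all angular modes and bin pairs.
###Code
# Maximum |C_ell^approx / C_ell^uetc - 1| over all ell and bin pairs
for name, frac in [("Limber", frac_limb), ("Geometric", frac_geom), ("Midpoint", frac_mid)]:
    print(f"{name:10s} max fractional deviation: {np.max(np.abs(np.asarray(frac) - 1)):.4f}")
###Output
_____no_output_____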
Books_Price_Predictions/Price_Books_Notebook.ipynb
###Markdown Data Analysis and Processing **Look if there are some missing values** ###Code data.info() data_submit.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1560 entries, 0 to 1559 Data columns (total 8 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Title 1560 non-null object 1 Author 1560 non-null object 2 Edition 1560 non-null object 3 Reviews 1560 non-null object 4 Ratings 1560 non-null object 5 Synopsis 1560 non-null object 6 Genre 1560 non-null object 7 BookCategory 1560 non-null object dtypes: object(8) memory usage: 97.6+ KB ###Markdown Well, fortunately, there are no missing values (both datatrain & datatest) Some minor DataTransformation Edition I want to explain why I created a extract_edition_sub : Sometimes as edition we have either "NAME_EDITION,– XX MONTH YEAR" or "NAME_EDITION,– SUB_EDITION, XX MONTH YEAR" or "NAME_EDITION,– SUB_EDITION"SUB_EDITION Refers sometimes to Box Set which means that this is a collection of books which is good to know. ###Code def hasNumbers(inputString): return any(char.isdigit() for char in inputString) def extract_date(edition): """Function made to extract the date from the Edition Column""" split_of_interest = edition.split(',– ')[1] split = split_of_interest.split(', ') if len(split) < 2: string = split[0] if hasNumbers(string): return string if len(split) >= 2: string = split[1] if hasNumbers(string): return string return float('nan') def extract_edition_sub(edition): """Function made to extract the sub edition from the Edition Column""" split_of_interest = edition.split(',– ')[1] split = split_of_interest.split(', ') if len(split) < 2: string = split[0] if not hasNumbers(string): return string if len(split) >= 2: string = split[0] if not hasNumbers(string): return string return float('nan') data['EditionName'] = data['Edition'].apply(lambda x : x.split(',')[0]) data['EditionDate'] = data['Edition'].apply(extract_date) data['EditionSub'] = data['Edition'].apply(extract_edition_sub) data_submit['EditionName'] = data_submit['Edition'].apply(lambda x : x.split(',')[0]) data_submit['EditionDate'] = data_submit['Edition'].apply(extract_date) data_submit['EditionSub'] = data_submit['Edition'].apply(extract_edition_sub) data = data.drop('Edition', axis=1) data_submit = data_submit.drop('Edition',axis=1) data.head() ###Output _____no_output_____ ###Markdown Reviews ###Code data['Reviews'] = data['Reviews'].apply(lambda x: float(x.split()[0])) data_submit['Reviews'] = data_submit['Reviews'].apply(lambda x: float(x.split()[0])) ###Output _____no_output_____ ###Markdown Ratings ###Code data['Ratings'] = data['Ratings'].apply(lambda x: float(x.split()[0].replace(',','.'))) data_submit['Ratings'] = data_submit['Ratings'].apply(lambda x: float(x.split()[0].replace(',','.'))) ###Output _____no_output_____ ###Markdown Date ###Code month_num = {'Jan': '01', 'Feb': '02', 'Mar': '03', 'Apr': '04', 'May': '05', 'Jun': '06', 'Jul': '07', 'Aug': '08', 'Sep': '09', 'Oct': '10', 'Nov': '11', 'Dec': '12'} def convert_to_date(x): if x != x: return x component = x.split() if len(component) > 2: return component[2] + '-' + month_num[component[1]] + '-' + component[0] elif len(component) > 1: return component[1] + '-' + month_num[component[0]] + '-01' else: return component[0] + '-01-01' data['EditionDate'] = data['EditionDate'].apply(convert_to_date) data_submit['EditionDate'] = data_submit['EditionDate'].apply(convert_to_date) ###Output _____no_output_____ ###Markdown New info on data ###Code data.info() ###Output 
<class 'pandas.core.frame.DataFrame'> RangeIndex: 6237 entries, 0 to 6236 Data columns (total 11 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Title 6237 non-null object 1 Author 6237 non-null object 2 Reviews 6237 non-null float64 3 Ratings 6237 non-null float64 4 Synopsis 6237 non-null object 5 Genre 6237 non-null object 6 BookCategory 6237 non-null object 7 Price 6237 non-null float64 8 EditionName 6237 non-null object 9 EditionDate 6216 non-null object 10 EditionSub 786 non-null object dtypes: float64(3), object(8) memory usage: 536.1+ KB ###Markdown **Well there are some missing data such as the edition's date. Not Really a problem since I will use some ml algorithm than handles missing data** Comparaison between the train & the test set Author ###Code all_author_train = data.Author.unique().tolist() all_author_test = data_submit.Author.unique().tolist() print(f'There are {len(all_author_train)} differents authors in the train set') print() print(f'There are {len(all_author_test)} differents authors in the test set') author_only_in_test = [x for x in all_author_test if x not in all_author_train] print(f'There are {len(author_only_in_test)} authors that in the test set but not in the train set') ###Output There are 693 authors that in the test set but not in the train set ###Markdown Edition ###Code all_edition_train = data.EditionName.unique().tolist() all_edition_test = data_submit.EditionName.unique().tolist() print(f'There are {len(all_edition_train)} differents editions in the train set') print() print(f'There are {len(all_edition_test)} differents editions in the test set') edition_only_in_test = [x for x in all_edition_test if x not in all_edition_train] print(f'There are {len(edition_only_in_test)} edition that in the test set but not in the train set') ###Output There are 1 edition that in the test set but not in the train set ###Markdown Genre ###Code all_genre_train = data.Genre.unique().tolist() all_genre_test = data_submit.Genre.unique().tolist() print(f'There are {len(all_genre_train)} differents genre in the train set') print() print(f'There are {len(all_genre_test)} differents genre in the test set') genre_only_in_test = [x for x in all_genre_test if x not in all_genre_train] print(f'There are {len(genre_only_in_test)} genre that in the test set but not in the train set') ###Output There are 18 genre that in the test set but not in the train set ###Markdown BookCategory ###Code all_BookCategory_train = data.BookCategory.unique().tolist() all_BookCategory_test = data_submit.BookCategory.unique().tolist() print(f'There are {len(all_BookCategory_train)} differents BookCategory in the train set') print() print(f'There are {len(all_BookCategory_test)} differents BookCategory in the test set') BookCategory_only_in_test = [x for x in all_BookCategory_test if x not in all_BookCategory_train] print(f'There are {len(BookCategory_only_in_test)} genre that in the test set but not in the train set') ###Output There are 0 genre that in the test set but not in the train set ###Markdown Price : y ###Code # seaborn histogram plt.figure(figsize=(20,20)) sns.distplot(data['Price'], kde=False) plt.show() ###Output /Users/omarsouaidi/opt/miniconda3/lib/python3.8/site-packages/seaborn/distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms). 
warnings.warn(msg, FutureWarning) ###Markdown We can notice here that more than 70% of the dataset's books price belongs to range[0,500] Price and Ratings ###Code plt.figure(figsize=(15,15)) sns.scatterplot(x="Price", y="Ratings", data=data) plt.show() ###Output _____no_output_____ ###Markdown **As you can see, a large number of ratings means that the book is cheaper, which is normal since that the less it costs, the more people will buy it.**Now Im going to look into some of the "outliers" ###Code data[(data['Price'] >= 2000) & (data['Ratings'] >= 20)] ###Output _____no_output_____ ###Markdown **Well I think I understand why we have such outliers : it's because most of them are a collection of books, so we don't buy only one book but many. To conclude : We have to add one feature which represent the number of books that we are going to buy. The number is going to be extracted via the title or via the Synopsis** Price and Reviews ###Code plt.figure(figsize=(15,15)) sns.scatterplot(x="Price", y="Reviews", data=data) plt.show() ###Output _____no_output_____ ###Markdown Price and EditionSub ###Code # plot sns.set_style('ticks') fig, ax = plt.subplots() # the size of A4 paper fig.set_size_inches(12, 12) sns.stripplot(x="EditionSub", y="Price", data=data, ax=ax) plt.xticks(rotation=90) sns.despine() ###Output _____no_output_____ ###Markdown Well, I think it will be a good idea to use the edition sub as a feature since it gives a range of the price for some books (for example : For Internation Edition, the price is from 0 to 300 .. BookCategory ###Code # plot sns.set_style('ticks') fig, ax = plt.subplots() # the size of A4 paper fig.set_size_inches(12, 12) sns.stripplot(x="BookCategory", y="Price", data=data, ax=ax) plt.xticks(rotation=90) sns.despine() ###Output _____no_output_____ ###Markdown Price and Date ###Code df = data[['EditionDate', 'Price', 'EditionName']] df.loc[:,'EditionDate'] = pd.to_datetime(df.EditionDate) df = df.sort_values('EditionDate', ascending=True).dropna() df['month'] = df['EditionDate'].dt.to_period('M') df['year'] = df['EditionDate'].dt.to_period('Y') ###Output /Users/omarsouaidi/opt/miniconda3/lib/python3.8/site-packages/pandas/core/indexing.py:1745: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. 
Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy isetter(ilocs[0], value) ###Markdown Yearly and By Edition ###Code df_year = df[['year', 'Price', 'EditionName']] df_year = df_year.groupby(['year','EditionName']).mean().reset_index() df_year.EditionName.value_counts() # plot sns.set_style('ticks') fig, ax = plt.subplots() # the size of A4 paper fig.set_size_inches(12, 12) sns.stripplot(x='year', y="Price", data=df_year[df_year.EditionName=='Paperback'][5:], ax=ax) plt.xticks(rotation=90) sns.despine() # plot sns.set_style('ticks') fig, ax = plt.subplots() # the size of A4 paper fig.set_size_inches(12, 12) sns.stripplot(x='year', y="Price", data=df_year[df_year.EditionName=='Mass Market Paperback'], ax=ax) plt.xticks(rotation=90) sns.despine() # plot sns.set_style('ticks') fig, ax = plt.subplots() # the size of A4 paper fig.set_size_inches(12, 12) sns.stripplot(x='year', y="Price", data=df_year[df_year.EditionName=='Hardcover'], ax=ax) plt.xticks(rotation=90) sns.despine() ###Output _____no_output_____ ###Markdown **I can't see any trend/seasonality when I look at the evolution of the price by year and by edition, it's not really relevant as feature** Monthly and By Edition ###Code df_month = df[['year', 'month', 'Price', 'EditionName']] df_month = df_month.groupby(['year', 'month', 'EditionName']).mean().reset_index() # plot sns.set_style('ticks') fig, ax = plt.subplots() # the size of A4 paper fig.set_size_inches(12, 12) sns.stripplot(x='month', y="Price", data=df_month[(df_month.EditionName == 'Paperback') & (df_month.year.dt.year == 2019)], ax=ax) plt.xticks(rotation=90) sns.despine() ###Output _____no_output_____ ###Markdown **This is a dead end, I tried to do more visualisation, but I can't see any monthly trend, (nor seasonality)Unfortunately, I wanted to encore three or two time variables (month and the year) but it is useless .. 
Well sometimes you think you'll find something but at the end there is nothing haha** ###Code data.loc[:,'EditionDate'] = pd.to_datetime(data.EditionDate) data_submit.loc[:,'EditionDate'] = pd.to_datetime(data_submit.EditionDate) ###Output _____no_output_____ ###Markdown Price and Author ###Code data.head() author_to_consider = [x for x in data.Author.unique() if x in data_submit.Author.unique()] df = data[data['Author'].isin(author_to_consider)] df.Author.value_counts() ###Output _____no_output_____ ###Markdown Title ###Code for index, row in data[data.Price < 100].sample(n=10).iterrows(): print(row['Title']) print(row['Price']) print('-'*50) for index, row in data[(data.Price >= 100) & (data.Price < 200)].sample(n=10).iterrows(): print(row['Title']) print(row['Price']) print('-'*50) for index, row in data[(data.Price >= 200) & (data.Price < 400)].sample(n=10).iterrows(): print(row['Title']) print(row['Price']) print('-'*50) for index, row in data[(data.Price >= 400) & (data.Price < 800)].sample(n=10).iterrows(): print(row['Title']) print(row['Price']) print('-'*50) for index, row in data[(data.Price >= 800) & (data.Price < 1500)].sample(n=10).iterrows(): print(row['Title']) print(row['Price']) print('-'*50) for index, row in data[(data.Price >= 1500) & (data.Price < 4000)].sample(n=10).iterrows(): print(row['Title']) print(row['Price']) print('-'*50) for index, row in data[data.Price >= 4000].sample(n=10).iterrows(): print(row['Title']) print(row['Price']) print('-'*50) ###Output The Complete Calvin and Hobbes (Set of 4 Books) 4175.0 -------------------------------------------------- The Tintin Collection: The Adventure of Tintin (The Adventures of Tintin - Compact Editions) 5968.0 -------------------------------------------------- ABAP Development for SAP HANA 4292.0 -------------------------------------------------- Fifty Cars that Changed the World: Design Museum Fifty 11715.12 -------------------------------------------------- Modern Labor Economics: Theory and Public Policy (The Addison-Wesley Series in Economics) 13244.67 -------------------------------------------------- Born to Ice 5530.0 -------------------------------------------------- Discovering Statistics Using R 5253.0 -------------------------------------------------- Webley Air Rifles 1925-2005 4936.0 -------------------------------------------------- Threat Modeling: Designing for Security 4013.0 -------------------------------------------------- Ranga Roopa: Gods. Words. 
Images 9096.0 -------------------------------------------------- ###Markdown I can't see anything relevant BaseLine With PYCARET Firt of all, I will use PYCARET as a baseline, Im gonna drop Title and Synopsis (because I think it's so much to handle (big unstructured data) ###Code from pycaret.regression import * reg1 = setup(data.drop(['Title',"Synopsis"],axis=1), target = 'Price', session_id = 123, normalize = True, normalize_method = 'zscore', transformation = True, transformation_method = 'yeo-johnson', transform_target = True, ignore_low_variance = True, combine_rare_levels = True, silent=True) compare_models(include = ['catboost', 'lightgbm', 'xgboost', 'rf']) # Take the 3 bests models catb = create_model('catboost', verbose=False) xgb = create_model('xgboost', verbose=False) rf = create_model('rf', verbose=False) # Blend all the 3 bests models blend_all = blend_models(estimator_list = [catb, xgb, rf]) # Finalise models and make predictions final_blender = finalize_model(blend_all) predictions = predict_model(final_blender, data = data_submit.drop(['Title',"Synopsis"],axis=1)) predictions.head() predictions['Price'] = predictions['Label'] predictions[['Price']].to_excel('submissions.xlsx', index=False) ###Output _____no_output_____
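###Markdown
As an optional final step (an illustrative sketch rather than part of the original workflow, with an arbitrary file name), the finalized blender can be persisted with PyCaret's `save_model` so the submission pipeline can be reloaded later without refitting.
###Code
# Persist and reload the finalized blended pipeline (file name is arbitrary)
save_model(final_blender, 'final_blender_pipeline')
reloaded_blender = load_model('final_blender_pipeline')
###Output
_____no_output_____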
1.0_Amazon_corpus_to_pandas.ipynb
###Markdown Amazon review corpusThe purpose of this notebook is to transform the raw corpus into a [Pandas](http://pandas.pydata.org/) `DataFrame` with a standardized format. The format is fairly simple:* Each row contains a review* There are two columns named `text` and `category` containing the respective informationEvery data set obeying these simple rules can be plugged into the forthcoming pipeline.The data set was downloaded from [this](http://jmcauley.ucsd.edu/data/amazon/) location. We chose the following files:* `reviews_Books_5.json`* `reviews_Electronics_5.json`* `reviews_Home_and_Kitchen_5.json`* `reviews_Movies_and_TV_5.json`As the `5` at the end of the file name indicates, we selected the 5-core versions. This guarantees that each item has at least 5 reviews.Each line of each file contains a JSON object. For example the first line of `reviews_Home_and_Kitchen_5.json` looks like this:`{"reviewerID": "APYOBQE6M18AA", "asin": "0615391206", "reviewerName": "Martin Schwartz", "helpful": [0, 0], "reviewText": "My daughter wanted this book and the price on Amazon was the best. She has already tried one recipe a day after receiving the book. She seems happy with it.", "overall": 5.0, "summary": "Best Price", "unixReviewTime": 1382140800, "reviewTime": "10 19, 2013"}`The data set contains several useful information but in this work we are only interested in the `reviewText` field as well as the assigned class which is determined by the filename (e.b. `reviews_Movies_and_TV_5.json` -> `reviews_Movies_and_TV`) The variables `raw_corpus_path` and `pd_corpus_path` may need some adaption.* `raw_corpus_path` is expected to contain a path to a folder that contains the raw data set files (like `reviews_Books_5.json`) and nothing else.* The resulting Pandas DataFrame will be stored into the directory referred to by `raw_corpus_path`. ###Code raw_corpus_path = 'data/AMAZON/raw' pd_corpus_path = 'data/AMAZON/dataframes' from os import walk, sep import pandas as pd from tqdm import tqdm import json reviews = [] for root, dirs, files in walk(raw_corpus_path): for file in files: with open(root + '/' + file) as fh: for line in tqdm(fh): datapoint = json.loads(line) category = file.replace('reviews_', '') category = category.replace('_5.json', '') datapoint['category'] = category datapoint['text'] = datapoint.pop('reviewText') reviews.append(pd.Series(datapoint)) print('Processing finished. Start compiling DataFrame.') df = pd.DataFrame(reviews) df.to_pickle(pd_corpus_path + '/amazon.pkl') print('finished') ###Output _____no_output_____
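###Markdown
A small illustrative footnote (not part of the original pipeline): since the standardized format only needs the `text` and `category` columns, a leaner variant is to collect plain dictionaries with just those two keys and build the DataFrame in one call, instead of creating one `pd.Series` per review. A hedged sketch, assuming the same `raw_corpus_path` layout as above:
###Code
rows = []
for root, dirs, files in walk(raw_corpus_path):
    for file in files:
        category = file.replace('reviews_', '').replace('_5.json', '')
        with open(root + '/' + file) as fh:
            for line in fh:
                datapoint = json.loads(line)
                rows.append({'text': datapoint['reviewText'], 'category': category})
df_minimal = pd.DataFrame(rows)
df_minimal.head()
###Output
_____no_output_____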
Phase 1/Numpy_Pandas/Introduction to Pandas.ipynb
###Markdown Pandas*pandas* is a Python library for data analysis. It offers a number of data exploration, cleaning and transformation operations that are critical in working with data in Python. *pandas* build upon *numpy* and *scipy* providing easy-to-use data structures and data manipulation functions with integrated indexing.The main data structures *pandas* provides are *Series* and *DataFrames*. After a brief introduction to these two data structures and data ingestion, the key features of *pandas* this notebook covers are:* Generating descriptive statistics on data* Data cleaning using built in pandas functions* Frequent data operations for subsetting, filtering, insertion, deletion and aggregation of data* Merging multiple datasets using dataframes* Working with timestamps and time-series dataLet's get started with our first *pandas* notebook! Import Libraries ###Code import pandas as pd ###Output _____no_output_____ ###Markdown Introduction to pandas Data Structures*pandas* has two main data structures it uses, namely, *Series* and *DataFrames*. pandas Series*pandas Series* one-dimensional labeled array. ###Code ser = pd.Series([100, 'foo', 300, 'bar', 500], ['tom', 'bob', 'nancy', 'dan', 'eric']) ser ser.index ser.loc[['nancy','bob']] ser[[4, 3, 1]] ser.iloc[2] 'bob' in ser ser ser * 2 ser[['nancy', 'eric']] ** 2 ###Output _____no_output_____ ###Markdown pandas DataFrame*pandas DataFrame* is a 2-dimensional labeled data structure. Create DataFrame from dictionary of Python Series ###Code d = {'one' : pd.Series([100., 200., 300.], index=['apple', 'ball', 'clock']), 'two' : pd.Series([111., 222., 333., 4444.], index=['apple', 'ball', 'cerill', 'dancy'])} df = pd.DataFrame(d) print(df) df.index df.columns pd.DataFrame(d, index=['dancy', 'ball', 'apple']) pd.DataFrame(d, index=['dancy', 'ball', 'apple'], columns=['two', 'five']) ###Output _____no_output_____ ###Markdown Create DataFrame from list of Python dictionaries ###Code data = [{'alex': 1, 'joe': 2}, {'ema': 5, 'dora': 10, 'alice': 20}] pd.DataFrame(data) pd.DataFrame(data, index=['orange', 'red']) pd.DataFrame(data, columns=['joe', 'dora','alice']) ###Output _____no_output_____ ###Markdown Basic DataFrame operations ###Code df df['one'] df['three'] = df['one'] * df['two'] df df['flag'] = df['one'] > 250 df three = df.pop('three') three df del df['two'] df df.insert(2, 'copy_of_one', df['one']) df df['one_upper_half'] = df['one'][:2] df ###Output _____no_output_____ ###Markdown Case Study: Movie Data AnalysisThis notebook uses a dataset from the MovieLens website. We will describe the dataset further as we explore with it using *pandas*. Download the DatasetPlease note that **you will need to download the dataset**. Although the video for this notebook says that the data is in your folder, the folder turned out to be too large to fit on the edX platform due to size constraints.Here are the links to the data source and location:* **Data Source:** MovieLens web site (filename: ml-20m.zip)* **Location:** https://grouplens.org/datasets/movielens/Once the download completes, please make sure the data files are in a directory called *movielens*. Use Pandas to Read the DatasetIn this notebook, we will be using three CSV files:* **ratings.csv :** *userId*,*movieId*,*rating*, *timestamp** **tags.csv :** *userId*,*movieId*, *tag*, *timestamp** **movies.csv :** *movieId*, *title*, *genres* Using the *read_csv* function in pandas, we will ingest these three files. 
###Code movies = pd.read_csv('./movielens/movies.csv', sep=',') print(type(movies)) movies.head(15) # Timestamps represent seconds since midnight Coordinated Universal Time (UTC) of January 1, 1970 tags = pd.read_csv('./movielens/tags.csv', sep=',') tags.head() ratings = pd.read_csv('./movielens/ratings.csv', sep=',', parse_dates=['timestamp']) ratings.head() # For current analysis, we will remove timestamp (we will come back to it!) del ratings['timestamp'] del tags['timestamp'] ###Output _____no_output_____ ###Markdown Data Structures Series ###Code #Extract 0th row: notice that it is infact a Series row_0 = tags.iloc[0] type(row_0) print(row_0) row_0.index row_0['userId'] 'rating' in row_0 row_0.name row_0 = row_0.rename('first_row') row_0.name ###Output _____no_output_____ ###Markdown DataFrames ###Code tags.head() tags.index tags.columns # Extract row 0, 11, 2000 from DataFrame tags.iloc[ [0,11,2000] ] ###Output _____no_output_____ ###Markdown Descriptive StatisticsLet's look how the ratings are distributed! ###Code ratings['rating'].describe() ratings.describe() ratings['rating'].mean() ratings.mean() ratings['rating'].min() ratings['rating'].max() ratings['rating'].std() ratings['rating'].mode() ratings.corr() filter_1 = ratings['rating'] > 5 print(filter_1) filter_1.any() filter_2 = ratings['rating'] > 0 filter_2.all() ###Output _____no_output_____ ###Markdown Data Cleaning: Handling Missing Data ###Code movies.shape #is any row NULL ? movies.isnull().any() ###Output _____no_output_____ ###Markdown Thats nice ! No NULL values ! ###Code ratings.shape #is any row NULL ? ratings.isnull().any() ###Output _____no_output_____ ###Markdown Thats nice ! No NULL values ! ###Code tags.shape #is any row NULL ? tags.isnull().any() ###Output _____no_output_____ ###Markdown We have some tags which are NULL. ###Code tags = tags.dropna() #Check again: is any row NULL ? tags.isnull().any() tags.shape ###Output _____no_output_____ ###Markdown Thats nice ! No NULL values ! Notice the number of lines have reduced. 
Data Visualization ###Code %matplotlib inline ratings.hist(column='rating', figsize=(15,10)) ratings.boxplot(column='rating', figsize=(15,20)) ###Output _____no_output_____ ###Markdown Slicing Out Columns ###Code tags['tag'].head() movies[['title','genres']].head() ratings[-10:] tag_counts = tags['tag'].value_counts() tag_counts[-10:] tag_counts[:10].plot(kind='bar', figsize=(15,10)) ###Output _____no_output_____ ###Markdown Filters for Selecting Rows ###Code is_highly_rated = ratings['rating'] >= 4.0 ratings[is_highly_rated][30:50] is_animation = movies['genres'].str.contains('Animation') movies[is_animation][5:15] movies[is_animation].head(15) ###Output _____no_output_____ ###Markdown Group By and Aggregate ###Code ratings_count = ratings[['movieId','rating']].groupby('rating').count() ratings_count average_rating = ratings[['movieId','rating']].groupby('movieId').mean() average_rating.head() movie_count = ratings[['movieId','rating']].groupby('movieId').count() movie_count.head() movie_count = ratings[['movieId','rating']].groupby('movieId').count() movie_count.tail() ###Output _____no_output_____ ###Markdown Merge Dataframes ###Code tags.head() movies.head() t = movies.merge(tags, on='movieId', how='inner') t.head() ###Output _____no_output_____ ###Markdown More examples: http://pandas.pydata.org/pandas-docs/stable/merging.html Combine aggreagation, merging, and filters to get useful analytics ###Code avg_ratings = ratings.groupby('movieId', as_index=False).mean() del avg_ratings['userId'] avg_ratings.head() box_office = movies.merge(avg_ratings, on='movieId', how='inner') box_office.tail() is_highly_rated = box_office['rating'] >= 4.0 box_office[is_highly_rated][-5:] is_comedy = box_office['genres'].str.contains('Comedy') box_office[is_comedy][:5] box_office[is_comedy & is_highly_rated][-5:] ###Output _____no_output_____ ###Markdown Vectorized String Operations ###Code movies.head() ###Output _____no_output_____ ###Markdown Split 'genres' into multiple columns ###Code movie_genres = movies['genres'].str.split('|', expand=True) movie_genres[:10] ###Output _____no_output_____ ###Markdown Add a new column for comedy genre flag ###Code movie_genres['isComedy'] = movies['genres'].str.contains('Comedy') movie_genres[:10] ###Output _____no_output_____ ###Markdown Extract year from title e.g. (1995) ###Code movies['year'] = movies['title'].str.extract('.*\((.*)\).*', expand=True) movies.tail() ###Output _____no_output_____ ###Markdown More here: http://pandas.pydata.org/pandas-docs/stable/text.htmltext-string-methods Parsing Timestamps Timestamps are common in sensor data or other time series datasets.Let us revisit the *tags.csv* dataset and read the timestamps! 
###Code tags = pd.read_csv('./movielens/tags.csv', sep=',') tags.dtypes ###Output _____no_output_____ ###Markdown Unix time / POSIX time / epoch time records time in seconds since midnight Coordinated Universal Time (UTC) of January 1, 1970 ###Code tags.head(5) tags['parsed_time'] = pd.to_datetime(tags['timestamp'], unit='s') ###Output _____no_output_____ ###Markdown Data Type datetime64[ns] maps to either M8[ns] depending on the hardware ###Code tags['parsed_time'].dtype tags.head(2) ###Output _____no_output_____ ###Markdown Selecting rows based on timestamps ###Code greater_than_t = tags['parsed_time'] > '2015-02-01' selected_rows = tags[greater_than_t] tags.shape, selected_rows.shape ###Output _____no_output_____ ###Markdown Sorting the table using the timestamps ###Code tags.sort_values(by='parsed_time', ascending=True)[:10] ###Output _____no_output_____ ###Markdown Average Movie Ratings over Time Are Movie ratings related to the year of launch? ###Code average_rating = ratings[['movieId','rating']].groupby('movieId', as_index=False).mean() average_rating.tail() joined = movies.merge(average_rating, on='movieId', how='inner') joined.head() joined.corr() yearly_average = joined[['year','rating']].groupby('year', as_index=False).mean() yearly_average[:10] yearly_average[-20:].plot(x='year', y='rating', figsize=(15,10), grid=True) ###Output _____no_output_____
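###Markdown To put a number on the relationship plotted above, the extracted `year` column (currently a string) can be converted to a numeric type and correlated with the yearly average rating; a small sketch:
###Code
# 'year' was extracted with a regex, so it is stored as a string; coerce it to numeric
yearly_average['year'] = pd.to_numeric(yearly_average['year'], errors='coerce')

# Pearson correlation between launch year and average rating
yearly_average[['year', 'rating']].corr()
###Output
_____no_output_____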
code/macroeconomic-analysis/economic-indicator-analysis.ipynb
###Markdown Economic indicator analysis **The objective of economic data analysis is to explore how the economic indicators affect the monthly average house price per saleable area in Hong Kong in 2016 to 2019.** Data Source:1. Transaction records - Centaline Property3. Macroeconomic indicators - Census and statistics department **Import libraries** ###Code import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np from scipy.stats import norm from scipy import stats import warnings warnings.filterwarnings('ignore') %matplotlib inline ###Output _____no_output_____ ###Markdown **Import data** ###Code # For google drive #from google.colab import drive #from google.colab import files #drive.mount('/content/drive') #data_dir = "/content/drive/My Drive/FYP/centaline/" # For local directory data_dir = "../../database_real/macroeconomic_data/centaline/" hk = ["Kennedy_town_sai_ying_pun", "South_horizon", "Bel_air_sasson", "Aberdeen_ap_lei_chau", "Mid_level_west", "Peak_south", "Wanchai_causeway_bay", "Mid_level_central", "Happy_valley_mid_level_east", "Mid_level_north_point", "North_point_fortress_hill", "Quarry_bay_kornhill", "Taikoo_shing", "Shau_kei_wan_chai_wan", "Heng_fa_chuen", "Sheung_wan_central_admiralty"] kowloon = ["Olympic_station", "Kowloon_station", "Mongkok_yaumatei", "Tsimshatsui_jordan", "Ho_man_tin_kings_park", "To_kwa_wan", "Whampoa_laguna_verde", "Tseung_kwan_o", "Meifoo_wonderland", "Cheung_sha_wan_west", "Cheung_sha_wan_sham_shui_po", "Yau_yat_tsuen_shek_kip_mei", "Kowloon_tong_beacon_hill", "Lam_tin_yau_tong", "Kowloon_bay_ngau_chi_wan", "Kwun_tong", "Diamond_hill_wong_tai_sin", "To_kwa_wan_east", "Hung_hum", "Kai_tak"] new_east = ["Sai_kung", "Tai_wai", "Shatin", "Fotan_shatin_mid_level_kau_to_shan", "Ma_on_shan", "Tai_po_mid_level_hong_lok_yuen", "Tai_po_market_tai_wo", "Sheung_shui_fanling_kwu_tung"] new_west = ["Discovery_bay", "Fairview_park_palm_spring_the_vineyard", "Yuen_long", "Tuen_mun", "Tin_shui_wai", "Tsuen_wan", "Kwai_chung", "Tsing_yi", "Ma_wan_park_island","Tung_chung_islands", "Sham_tseng_castle_peak_road", "Belvedere_garden_castle_peak_road"] # Data directory dir_hk = "./hk_island/" dir_kowloon = "./kowloon/" dir_new_east = "./new_east/" dir_new_west = "./new_west/" def get_data_by_district(district_name, disctrict_dir): district_df = pd.DataFrame() for region in district_name: new_df = pd.read_csv(data_dir+disctrict_dir+region+".csv") new_df["District"] = region district_df = pd.concat([district_df, new_df], axis=0) # Data cleaning district_df['SaleableArea'] = district_df['SaleableArea'].replace("-", np.nan) district_df['SaleableArea'] = pd.to_numeric(district_df['SaleableArea']) district_df['UnitPricePerSaleableArea'] = district_df['UnitPricePerSaleableArea'].replace("-", np.nan) district_df['UnitPricePerSaleableArea'] = pd.to_numeric(district_df['UnitPricePerSaleableArea']) district_df['UnitPricePerGrossArea'] = district_df['UnitPricePerGrossArea'].replace("-", np.nan) district_df['UnitPricePerGrossArea'] = pd.to_numeric(district_df['UnitPricePerGrossArea']) district_df['GrossArea'] = district_df['GrossArea'].replace("-", np.nan) district_df['GrossArea'] = pd.to_numeric(district_df['GrossArea']) district_df['LastHold'] = district_df['LastHold'].replace("-", np.nan) district_df['LastHold'] = pd.to_numeric(district_df['LastHold']) district_df['GainLoss'] = district_df['GainLoss'].replace("-", np.nan) district_df['GainLoss'] = pd.to_numeric(district_df['GainLoss']) district_df = 
district_df.drop(district_df.columns[0], axis=1) district_df['RegDate'] = pd.to_datetime(district_df['RegDate']) district_df.sort_values(by=['RegDate'], inplace=True, ascending=False) district_df = district_df.reset_index() district_df = district_df.drop(['index'], axis=1) return district_df def download_data(filename, download_data): dataFrame = pd.DataFrame(data=download_data) dataFrame.to_csv(filename) files.download(filename) # Get data by distirct data_df_hk = get_data_by_district(hk, dir_hk) data_df_kowloon = get_data_by_district(kowloon, dir_kowloon) data_df_new_east = get_data_by_district(new_east, dir_new_east) data_df_new_west = get_data_by_district(new_west, dir_new_west) # Get all district data data_df_all = pd.concat([data_df_hk, data_df_kowloon, data_df_new_east, data_df_new_west], axis=0) data_df_all.sort_values(by=['RegDate'], inplace=True, ascending=False) data_df_all = data_df_all.reset_index() data_df_all = data_df_all.drop(['index'], axis=1) data_df_all.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 251660 entries, 0 to 251659 Data columns (total 11 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Address 251660 non-null object 1 BuildingAge 251660 non-null int64 2 RegDate 251660 non-null datetime64[ns] 3 Price 251660 non-null float64 4 SaleableArea 227968 non-null float64 5 GrossArea 132535 non-null float64 6 UnitPricePerSaleableArea 227967 non-null float64 7 UnitPricePerGrossArea 132535 non-null float64 8 LastHold 120840 non-null float64 9 GainLoss 120840 non-null float64 10 District 251660 non-null object dtypes: datetime64[ns](1), float64(7), int64(1), object(2) memory usage: 21.1+ MB ###Markdown **Price distribution** ###Code # Distribution print(data_df_all['Price'].describe()) # Histogram plt.figure(figsize=(10,5)) sns.distplot(data_df_all['Price'].dropna()); ###Output count 251660.000000 mean 8.798477 std 14.874840 min 0.400000 25% 4.650000 50% 6.400000 75% 9.180000 max 2566.000000 Name: Price, dtype: float64 ###Markdown **Unit price for saleable area distribution** ###Code # Distribution print(data_df_all['UnitPricePerSaleableArea'].describe()) # Histogram plt.figure(figsize=(10,5)) sns.distplot(data_df_all['UnitPricePerSaleableArea'].dropna()); ###Output count 227967.000000 mean 15324.452842 std 6105.796666 min 1476.000000 25% 11594.000000 50% 14618.000000 75% 17802.000000 max 484585.000000 Name: UnitPricePerSaleableArea, dtype: float64 ###Markdown **Data Preprocessing** ###Code # Data preprocessing processed_df = data_df_all.copy() # Add new features processed_df['month'] = pd.to_datetime(processed_df['RegDate']).dt.month processed_df['year'] = pd.to_datetime(processed_df['RegDate']).dt.year # Drop unnecessary columns processed_df = processed_df.drop(['Address', 'BuildingAge', 'LastHold', 'GainLoss', 'District', 'RegDate', 'Price', 'SaleableArea', 'GrossArea', 'UnitPricePerGrossArea'], axis=1) # Handling missinig values # Fill with mean unitSaleableArea_mean = processed_df['UnitPricePerSaleableArea'].mean() processed_df['UnitPricePerSaleableArea'] = processed_df['UnitPricePerSaleableArea'].fillna(unitSaleableArea_mean) ###Output _____no_output_____ ###Markdown **Calculate monthly average unit price for saleable area** ###Code monthly_df = processed_df.copy() monthly_df = monthly_df.groupby(['year','month'],as_index=False).mean() monthly_df = monthly_df.rename(columns={'UnitPricePerSaleableArea': 'AveragePricePerSaleableArea'}) monthly_df.head(50) ###Output _____no_output_____ ###Markdown **Monthly average price for 
saleable area distribution** ###Code # Distribution print(monthly_df['AveragePricePerSaleableArea'].describe()) # Histogram plt.figure(figsize=(10,5)) sns.distplot(monthly_df['AveragePricePerSaleableArea']); ###Output count 48.000000 mean 15299.247059 std 790.124519 min 13429.026810 25% 14901.058708 50% 15462.212889 75% 15693.370515 max 17135.004160 Name: AveragePricePerSaleableArea, dtype: float64 ###Markdown **Import economic indicators and join with original dataframe** ###Code new_monthly_df = monthly_df.copy() conditions_year = [ (new_monthly_df['year'] == 2017), (new_monthly_df['year'] == 2018), (new_monthly_df['year'] == 2019) ] conditions_half = [ (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] < 7), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] < 13), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] < 7), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] < 13), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] < 7), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] < 13), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] < 7) ] conditions_quarter = [ (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] < 4), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] < 7), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] < 10), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] < 13), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] < 4), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] < 7), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] < 10), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] < 13), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] < 4), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] < 7), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] < 10), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] < 13), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] < 4), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] < 7), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] < 10) ] conditions_month = [ (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 1), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 2), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 3), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 4), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 5), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 6), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 7), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 8), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 9), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 10), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 11), (new_monthly_df['year'] == 2017) & (new_monthly_df['month'] == 12), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 1), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 2), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 3), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 4), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 5), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 6), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 7), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 8), (new_monthly_df['year'] == 2018) & 
(new_monthly_df['month'] == 9), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 10), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 11), (new_monthly_df['year'] == 2018) & (new_monthly_df['month'] == 12), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 1), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 2), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 3), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 4), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 5), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 6), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 7), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 8), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 9), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 10), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 11), (new_monthly_df['year'] == 2019) & (new_monthly_df['month'] == 12), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 1), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 2), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 3), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 4), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 5), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 6), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 7), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 8), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 9), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 10), (new_monthly_df['year'] == 2020) & (new_monthly_df['month'] == 11) ] # Data directory file_dir = "../determinants/" district_df = pd.DataFrame() population_df = pd.read_csv(data_dir+file_dir+"TABLE001.csv") unemployment_rate_df = pd.read_csv(data_dir+file_dir+"TABLE006.csv") export_import_df = pd.read_csv(data_dir+file_dir+"TABLE055.csv") gdp_df = pd.read_csv(data_dir+file_dir+"TABLE030.csv") consumer_price_indices_df = pd.read_csv(data_dir+file_dir+"TABLE052.csv") value_population = population_df['Number_000'].tolist() value_unemployment_adjusted = unemployment_rate_df['Unemployment_rate_seasonally_adjusted'].tolist() value_unemployment_not_adjusted = unemployment_rate_df['Unemployment_rate_not_adjusted'].tolist() value_import = export_import_df['Imports'].tolist() value_export = export_import_df['Total_exports'].tolist() value_gdp = gdp_df['GDP_current_market_prices'].tolist() value_gdp_per_capita = gdp_df['Per_capita_GDP_current_market_prices'].tolist() value_gdp = gdp_df['GDP_current_market_prices'].tolist() value_gdp_per_capita = gdp_df['Per_capita_GDP_current_market_prices'].tolist() value_consumer_price_indices = consumer_price_indices_df['Composite_Consumer_Price_Index'].tolist() new_monthly_df['Population'] = np.select(conditions_half, value_population) new_monthly_df['Unemployment_adjusted'] = np.select(conditions_month[:46], value_unemployment_adjusted) new_monthly_df['Unemployment_not_adjusted'] = np.select(conditions_month[:46], value_unemployment_not_adjusted) new_monthly_df['Imports'] = np.select(conditions_month, value_import) new_monthly_df['Total_exports'] = np.select(conditions_month, value_export) new_monthly_df['GDP'] = np.select(conditions_quarter, value_gdp[3:]) #new_monthly_df['GDP_per_capita'] = np.select(conditions_year, value_gdp_per_capita[:3]) new_monthly_df['CCP_index'] = 
np.select(conditions_month, value_consumer_price_indices) new_monthly_df['GDP'] = new_monthly_df['GDP'].str.replace(',', '').astype(float) #new_monthly_df['GDP_per_capita'] = new_monthly_df['GDP_per_capita'].str.replace(',', '').astype(float) new_monthly_df.head() df = new_monthly_df.copy() df = df[(df['year'] <= 2019)] df.head(50) ###Output _____no_output_____ ###Markdown **Univariate analysis** ###Code def univariate_analysis(feature_name): # Statistical summary print(df[feature_name].describe()) # Histogram plt.figure(figsize=(8,4)) sns.distplot(df[feature_name], axlabel=feature_name); univariate_analysis('Imports') ###Output count 36.000000 mean 375129.222222 std 37068.626934 min 277508.000000 25% 362872.250000 50% 379221.500000 75% 402971.250000 max 428452.000000 Name: Imports, dtype: float64 ###Markdown **Bivariate analysis** ###Code for i in range(3, len(df.columns), 4): sns.pairplot(data=df, x_vars=df.columns[i:i+4], y_vars=['AveragePricePerSaleableArea'], size=4) def graphWithTrendLine(var): x = df[var] y = df['AveragePricePerSaleableArea'] plt.scatter(x, y) plt.xticks(rotation=45) fig = sns.regplot(x=var, y="AveragePricePerSaleableArea", data=df) graphWithTrendLine("Population") graphWithTrendLine("Unemployment_adjusted") graphWithTrendLine("Unemployment_not_adjusted") graphWithTrendLine("Imports") graphWithTrendLine("Total_exports") graphWithTrendLine("GDP") #graphWithTrendLine("GDP_per_capita") graphWithTrendLine("CCP_index") ###Output _____no_output_____ ###Markdown **Correlation Matrix and Heatmap** ###Code # Heatmap fig, ax = plt.subplots(figsize=(10,10)) cols = df.corr().sort_values('AveragePricePerSaleableArea', ascending=False).index cm = np.corrcoef(df[cols].values.T) hm = sns.heatmap(cm, annot=True, square=True, annot_kws={'size':11}, yticklabels=cols.values, xticklabels=cols.values) plt.show() ###Output _____no_output_____
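###Markdown The heatmap can also be read as a ranked list: sorting each column's correlation with the monthly average price makes the strongest positive and negative relationships explicit. A minimal sketch:
###Code
# Rank every numeric column by its correlation with the target
corr_with_price = df.corr()['AveragePricePerSaleableArea'].drop('AveragePricePerSaleableArea')
corr_with_price.sort_values(ascending=False)
###Output
_____no_output_____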
report-annexes/pa-w2v-mono-training.ipynb
###Markdown Gensim Training Experiments- **Machines:** - HEIA-FR GPU-2 (32 cpu dual threaded) - CPU Monster at HEIA-FR (48 cpu single threaded)- **Dataset:** - wikipedia english dump from 2019-03-19 (16GB) - wikipedia english dump from 2019-04-09 (16GB)- **Dictionary:** - lemmatized dictionary(16MB) - unlemmatized dictionary(16MB) What's going on- Training a Word2Vec on the full wikipedia english dataset using its pre-extracted lemmatized and unlemmatized dictionary. ###Code # Word2Vec settings import multiprocessing #w2v_w2v_sentences=None #w2v_corpus_file=None w2v_size=300 # (default: 100) #w2v_alpha=0.025 w2v_window=10 # (default: 5) w2v_min_count=1 # (default: 5) #w2v_max_vocab_size=None #w2v_sample=0.001 #w2v_seed=1 w2v_workers=4 # (default: 3) # multiprocessing.cpu_count() #w2v_min_alpha=0.0001 w2v_sg=0 # if sg=0 CBOW is used (default); if sg=1 skip-gram is used #w2v_hs=0 #w2v_negative=5 #w2v_ns_exponent=0.75 #w2v_cbow_mean=1 #w2v_hashfxn=<built-in function hash> w2v_iter=5 # (default: 5) #w2v_null_word=0 #w2v_trim_rule=None #w2v_sorted_vocab=1 w2v_batch_words=10000 # (default: 10000) #w2v_compute_loss=False #w2v_callbacks=() #w2v_max_final_vocab=None # General settings lemmatization = False run_corpus = "wiki" run_lang = "en" run_date = "190409" run_log_prefix = "train" run_model_dir = "models/" run_dict_dir = "dictionaries/" run_datasets_dir = "datasets/" run_log_dir = "logs/" run_w2v_algo = "cbow" if w2v_sg==0 else "sg" run_options = "s"+str(w2v_size)+"-w"+str(w2v_window)+"-mc"+str(w2v_min_count)+"-bw"+str(w2v_batch_words)+"-"+run_w2v_algo+"-i"+str(w2v_iter)+"-c"+str(w2v_workers) print(run_options) run_base_name = run_corpus+"-"+run_lang+"-"+run_date # wiki-en-190409 run_model_name = run_model_dir+run_base_name+"-"+run_options run_dict_name = run_dict_dir+run_base_name+"-dict" run_dataset_name = run_datasets_dir+run_base_name+"-latest-pages-articles.xml.bz2" run_log_name = run_log_dir+run_log_prefix+"-"+run_base_name+"-"+run_options run_lem = "-lem" if lemmatization else "-unlem" run_model_name += run_lem+".model" run_dict_name += run_lem+".txt.bz2" run_log_name += run_lem+".log" print(run_model_name) print(run_dict_name) print(run_dataset_name) print(run_log_name) # Start logging process at root level import logging logging.basicConfig(filename=run_log_name, format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) #logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) logging.root.setLevel(level=logging.INFO) # Load dictionary from file from gensim.corpora import Dictionary dictionary = Dictionary.load_from_text(run_dict_name) # Build WikiCorpus based on the dictionary from gensim.corpora import WikiCorpus wc_fname=run_dataset_name #wc_processes=None wc_lemmatize=lemmatization wc_dictionary=dictionary #wc_filter_namespaces=('0', ) #wc_tokenizer_func=<function tokenize> #wc_article_min_tokens=50 #wc_token_min_len=2 #wc_token_max_len=15 #wc_lower=True #wc_filter_articles=None wiki = WikiCorpus(fname=wc_fname, dictionary=wc_dictionary,lemmatize=wc_lemmatize) # Initialize simple sentence iterator required for the Word2Vec model # Trying to bypass memory errors if lemmatization: class SentencesIterator: def __init__(self, wiki): self.wiki = wiki def __iter__(self): for sentence in self.wiki.get_texts(): yield list(map(lambda x: x.decode('utf-8'), sentence)) #yield gensim.utils.simple_preprocess(line) else: class SentencesIterator: def __init__(self, wiki): self.wiki = wiki def __iter__(self): for sentence in self.wiki.get_texts(): 
yield list(map(lambda x: x.encode('utf-8').decode('utf-8'), sentence)) sentences = SentencesIterator(wiki) # Train model from gensim.models import Word2Vec print("Running with: " + str(w2v_workers) + " cores") model = Word2Vec(sentences=sentences, size=w2v_size, window=w2v_window, min_count=w2v_min_count, workers=w2v_workers, sg=w2v_sg, iter=w2v_iter ) model.save(run_model_name) del model del wiki del sentences del dictionary ###Output Running with: 4 cores
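###Markdown Once training has finished, the saved model can be reloaded for a quick qualitative sanity check. The query word below is arbitrary, and `wv.vocab` assumes the gensim 3.x API used above (where `size=` and `iter=` are valid arguments).
###Code
from gensim.models import Word2Vec

# Reload the model saved above and inspect nearest neighbours for an example word
trained_model = Word2Vec.load(run_model_name)
print(trained_model.wv.most_similar('computer', topn=10))

# Vocabulary size as a rough sanity check against the dictionary
print(len(trained_model.wv.vocab))
###Output
_____no_output_____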
multivariate_test_prediction_grading.ipynb
###Markdown Multivariate Test dataset ###Code path = '/gdrive/My Drive/ML:Pilot/Assignments/Data/Chennai_house_price_multivariate_test.csv' raw_data = pd.read_csv(path) raw_data.head() raw_data.shape raw_data.isnull().sum(axis=0) raw_data = raw_data.fillna({ "QS_OVERALL": raw_data["QS_OVERALL"].mean()}) raw_data raw_data.isnull().sum(axis = 0) x_raw_data = raw_data.drop(columns=['PRT_ID','DATE_SALE','DATE_BUILD']) x_raw_data.head() from sklearn.preprocessing import LabelEncoder labelencoder = LabelEncoder() x_raw_data["AREA"] = labelencoder.fit_transform(x_raw_data["AREA"]) x_raw_data["SALE_COND"] = labelencoder.fit_transform(x_raw_data["SALE_COND"]) x_raw_data["PARK_FACIL"] = labelencoder.fit_transform(x_raw_data["PARK_FACIL"]) x_raw_data["BUILDTYPE"] = labelencoder.fit_transform(x_raw_data["BUILDTYPE"]) x_raw_data["UTILITY_AVAIL"] = labelencoder.fit_transform(x_raw_data["UTILITY_AVAIL"]) x_raw_data["STREET"] = labelencoder.fit_transform(x_raw_data["STREET"]) x_raw_data["AREA"] = labelencoder.fit_transform(x_raw_data["AREA"]) x_raw_data["MZZONE"] = labelencoder.fit_transform(x_raw_data["MZZONE"]) x_raw_data.head() t = scaler.fit_transform(x_raw_data) t.shape pred = lreg.predict(t) pred pred2 = scalery.inverse_transform(pred) pred2 pred3 = pd.DataFrame(pred2) pred3 pred3.to_csv('/gdrive/My Drive/multivariate_test_prediction_grading.csv') ###Output _____no_output_____
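###Markdown Note that `scaler`, `lreg` and `scalery` are not defined in this notebook; they are assumed to be the objects fitted on the training data in the corresponding training notebook and still present in the session. One caveat with the cell above: `scaler.fit_transform(x_raw_data)` re-fits the scaler on the test features. If `scaler` was already fitted on the training set, the test set should only be transformed, for example:
###Code
# Reuse the scaler fitted on the training data; do not re-fit it on the test set
t = scaler.transform(x_raw_data)

# Predict in the scaled target space, then map back to the original price scale
pred = lreg.predict(t)
pred2 = scalery.inverse_transform(pred)

pred3 = pd.DataFrame(pred2, columns=['Price'])
pred3.head()
###Output
_____no_output_____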
02 - Supervised Learning - Regression/02e_LAB_Regularization.ipynb
###Markdown Machine Learning Foundation Section 2, Part e: Regularization LAB Learning objectivesBy the end of this lesson, you will be able to:* Implement data standardization* Implement variants of regularized regression* Combine data standardization with the train-test split procedure* Implement regularization to prevent overfitting in regression problems ###Code import numpy as np import pandas as pd from helper import boston_dataframe np.set_printoptions(precision=3, suppress=True) ###Output _____no_output_____ ###Markdown Loading in Boston Data **Note:** See `helper.py` file to see how boston data is read in from SciKit Learn. ###Code boston = boston_dataframe(description=True) boston_data = boston[0] boston_description = boston[1] ###Output _____no_output_____ ###Markdown Data standardization **Standardizing** data refers to transforming each variable so that it more closely follows a **standard** normal distribution, with mean 0 and standard deviation 1.The [`StandardScaler`](http://scikit-learn.org/dev/modules/generated/sklearn.preprocessing.StandardScaler.htmlsklearn.preprocessing.StandardScaler) object in SciKit Learn can do this. **Generate X and y**: ###Code y_col = "MEDV" X = boston_data.drop(y_col, axis=1) y = boston_data[y_col] ###Output _____no_output_____ ###Markdown **Import, fit, and transform using `StandardScaler`** ###Code from sklearn.preprocessing import StandardScaler s = StandardScaler() X_ss = s.fit_transform(X) ###Output _____no_output_____ ###Markdown Exercise: Confirm standard scaling Hint: ###Code a = np.array([[1, 2, 3], [4, 5, 6]]) print(a) # 2 rows, 3 columns a.mean(axis=0) # mean along the *columns* a.mean(axis=1) # mean along the *rows* ### BEGIN SOLUTION X2 = np.array(X) man_transform = (X2-X2.mean(axis=0))/X2.std(axis=0) np.allclose(man_transform, X_ss) ### END SOLUTION ###Output _____no_output_____ ###Markdown Coefficients with and without scaling ###Code from sklearn.linear_model import LinearRegression lr = LinearRegression() y_col = "MEDV" X = boston_data.drop(y_col, axis=1) y = boston_data[y_col] lr.fit(X, y) print(lr.coef_) # min = -18 ###Output [ -0.108 0.046 0.021 2.687 -17.767 3.81 0.001 -1.476 0.306 -0.012 -0.953 0.009 -0.525] ###Markdown Discussion (together): The coefficients are on widely different scales. Is this "bad"? ###Code from sklearn.preprocessing import StandardScaler s = StandardScaler() X_ss = s.fit_transform(X) lr2 = LinearRegression() lr2.fit(X_ss, y) print(lr2.coef_) # coefficients now "on the same scale" ###Output [-0.928 1.082 0.141 0.682 -2.057 2.674 0.019 -3.104 2.662 -2.077 -2.061 0.849 -3.744] ###Markdown Exercise: Based on these results, what is the most "impactful" feature (this is intended to be slightly ambiguous)? "In what direction" does it affect "y"?**Hint:** Recall from last week that we can "zip up" the names of the features of a DataFrame `df` with a model `model` fitted on that DataFrame using:```pythondict(zip(df.columns.values, model.coef_))``` ###Code ### BEGIN SOLUTION pd.DataFrame(zip(X.columns, lr2.coef_)).sort_values(by=1) ### END SOLUTION ###Output _____no_output_____ ###Markdown Looking just at the strength of the standardized coefficients LSTAT, DIS, RM and RAD are all the 'most impactful'. Sklearn does not have built in statistical signifigance of each of these variables which would aid in making this claim stronger/weaker Lasso with and without scaling We discussed Lasso in lecture. Let's review together:1. What is different about Lasso vs. regular Linear Regression?1. 
Is standardization more or less important with Lasso vs. Linear Regression? Why? ###Code from sklearn.linear_model import Lasso from sklearn.preprocessing import PolynomialFeatures ###Output _____no_output_____ ###Markdown Create polynomial features [`PolynomialFeatures`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html) ###Code pf = PolynomialFeatures(degree=2, include_bias=False,) X_pf = pf.fit_transform(X) ###Output _____no_output_____ ###Markdown **Note:** We use `include_bias=False` since `Lasso` includes a bias by default. ###Code X_pf_ss = s.fit_transform(X_pf) ###Output _____no_output_____ ###Markdown Lasso [`Lasso` documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) ###Code las = Lasso() las.fit(X_pf_ss, y) las.coef_ ###Output _____no_output_____ ###Markdown ExerciseCompare * Sum of magnitudes of the coefficients* Number of coefficients that are zerofor Lasso with alpha 0.1 vs. 1.Before doing the exercise, answer the following questions in one sentence each:* Which do you expect to have greater magnitude?* Which do you expect to have more zeros? ###Code ### BEGIN SOLUTION las01 = Lasso(alpha = 0.1) las01.fit(X_pf_ss, y) print('sum of coefficients:', abs(las01.coef_).sum() ) print('number of coefficients not equal to 0:', (las01.coef_!=0).sum()) las1 = Lasso(alpha = 1) las1.fit(X_pf_ss, y) print('sum of coefficients:',abs(las1.coef_).sum() ) print('number of coefficients not equal to 0:',(las1.coef_!=0).sum()) ### END SOLUTION ###Output sum of coefficients: 8.47240504455307 number of coefficients not equal to 0: 7 ###Markdown With more regularization (higher alpha) we will expect the penalty for higher weights to be greater and thus the coefficients to be pushed down. Thus a higher alpha means lower magnitude with more coefficients pushed down to 0. Exercise: $R^2$ Calculate the $R^2$ of each model without train/test split.Recall that we import $R^2$ using:```pythonfrom sklearn.metrics import r2_score``` ###Code ### BEGIN SOLUTION from sklearn.metrics import r2_score r2_score(y,las.predict(X_pf_ss)) ### END SOLUTION ###Output _____no_output_____ ###Markdown Discuss:Will regularization ever increase model performance if we evaluate on the same dataset that we trained on? With train/test split DiscussAre there any issues with what we've done so far?**Hint:** Think about the way we have done feature scaling.Discuss in groups of two or three. ###Code from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X_pf, y, test_size=0.3, random_state=72018) X_train_s = s.fit_transform(X_train) las.fit(X_train_s, y_train) X_test_s = s.transform(X_test) y_pred = las.predict(X_test_s) r2_score(y_pred, y_test) X_train_s = s.fit_transform(X_train) las01.fit(X_train_s, y_train) X_test_s = s.transform(X_test) y_pred = las01.predict(X_test_s) r2_score(y_pred, y_test) ###Output _____no_output_____ ###Markdown Exercise Part 1:Do the same thing with Lasso of:* `alpha` of 0.001* Increase `max_iter` to 100000 to ensure convergence. Calculate the $R^2$ of the model.Feel free to copy-paste code from above, but write a one sentence comment above each line of code explaining why you're doing what you're doing. Part 2:Do the same procedure as before, but with Linear Regression.Calculate the $R^2$ of this model. Part 3: Compare the sums of the absolute values of the coefficients for both models, as well as the number of coefficients that are zero. 
Based on these measures, which model is a "simpler" description of the relationship between the features and the target? ###Code ### BEGIN SOLUTION # Part 1 # Decreasing regularization and ensuring convergence las001 = Lasso(alpha = 0.001, max_iter=100000) # Transforming training set to get standardized units X_train_s = s.fit_transform(X_train) # Fitting model to training set las001.fit(X_train_s, y_train) # Transforming test set using the parameters defined from training set X_test_s = s.transform(X_test) # Finding prediction on test set y_pred = las001.predict(X_test_s) # Calculating r2 score print("r2 score for alpha = 0.001:", r2_score(y_pred, y_test)) # Part 2 # Using vanilla Linear Regression lr = LinearRegression() # Fitting model to training set lr.fit(X_train_s, y_train) # predicting on test set y_pred_lr = lr.predict(X_test_s) # Calculating r2 score print("r2 score for Linear Regression:", r2_score(y_pred_lr, y_test)) # Part 3 print('Magnitude of Lasso coefficients:', abs(las001.coef_).sum()) print('Number of coeffients not equal to 0 for Lasso:', (las001.coef_!=0).sum()) print('Magnitude of Linear Regression coefficients:', abs(lr.coef_).sum()) print('Number of coeffients not equal to 0 for Linear Regression:', (lr.coef_!=0).sum()) ### END SOLUTION ###Output r2 score for alpha = 0.001: 0.8686454101886476 r2 score for Linear Regression: 0.8555202098064165 Magnitude of Lasso coefficients: 436.2616426306515 Number of coeffients not equal to 0 for Lasso: 89 Magnitude of Linear Regression coefficients: 1185.285825446944 Number of coeffients not equal to 0 for Linear Regression: 104 ###Markdown L1 vs. L2 Regularization As mentioned in the deck: `Lasso` and `Ridge` regression have the same syntax in SciKit Learn.Now we're going to compare the results from Ridge vs. Lasso regression: [`Ridge`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) ###Code from sklearn.linear_model import Ridge ###Output _____no_output_____ ###Markdown ExerciseFollowing the Ridge documentation from above:1. Define a Ridge object `r` with the same `alpha` as `las001`.2. Fit that object on `X` and `y` and print out the resulting coefficients. ###Code ### BEGIN SOLUTION # Decreasing regularization and ensuring convergence r = Ridge(alpha = 0.001) X_train_s = s.fit_transform(X_train) r.fit(X_train_s, y_train) X_test_s = s.transform(X_test) y_pred_r = r.predict(X_test_s) # Calculating r2 score r.coef_ ### END SOLUTION las001 # same alpha as Ridge above las001.coef_ print(np.sum(np.abs(r.coef_))) print(np.sum(np.abs(las001.coef_))) print(np.sum(r.coef_ != 0)) print(np.sum(las001.coef_ != 0)) ###Output 795.6521694351709 436.2616426306515 104 89 ###Markdown **Conclusion:** Ridge does not make any coefficients 0. In addition, on this particular dataset, Lasso provides stronger overall regularization than Ridge for this value of `alpha` (not necessarily true in general). ###Code y_pred = r.predict(X_pf_ss) print(r2_score(y, y_pred)) y_pred = las001.predict(X_pf_ss) print(r2_score(y, y_pred)) ###Output 0.9075278340576804 0.9102933722688202 ###Markdown **Conclusion**: Ignoring issues of overfitting, Ridge does slightly better than Lasso when `alpha` is set to 0.001 for each (not necessarily true in general). Example: Does it matter when you scale? 
###Code X_train, X_test, y_train, y_test = train_test_split(X_ss, y, test_size=0.3, random_state=72018) lr = LinearRegression() lr.fit(X_train, y_train) y_pred = lr.predict(X_test) r2_score(y_pred, y_test) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=72018) s = StandardScaler() lr_s = LinearRegression() X_train_s = s.fit_transform(X_train) lr_s.fit(X_train_s, y_train) X_test_s = s.transform(X_test) y_pred_s = lr_s.predict(X_test_s) r2_score(y_pred_s, y_test) ###Output _____no_output_____
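###Markdown A common way to make the "scale after splitting" pattern automatic is to wrap the scaler and the estimator in a single pipeline, so the scaler is fitted only on whatever data the pipeline itself is fitted on. A small sketch follows; note also that `r2_score` is defined as `r2_score(y_true, y_pred)`, so passing the prediction first, as in some cells above, computes a slightly different number.
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score

# The scaler inside the pipeline is fitted on the training split only
pipe = make_pipeline(StandardScaler(), Lasso(alpha=0.001, max_iter=100000))
pipe.fit(X_train, y_train)

y_pred = pipe.predict(X_test)
print(r2_score(y_test, y_pred))  # y_true first, then y_pred
###Output
_____no_output_____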
notebook/00-test/AssertionError.ipynb
###Markdown AssertionError ###Code print("AssertionError") assert False ###Output _____no_output_____
Course/1.0/Example/10-Deep-Dream.ipynb
###Markdown DeepDream - 深度理解神经网路结构及应用 ###Code from __future__ import print_function import numpy as np import scipy.misc import tensorflow as tf from PIL import Image ###Output _____no_output_____ ###Markdown 创建图和会话 ###Code graph = tf.Graph() session = tf.InteractiveSession(graph=graph) ###Output D:\Anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py:1645: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s). warnings.warn('An interactive session is already active. This can ' ###Markdown Inception模型(开源已训练完成模型)的加载 ###Code model_fn = "Data/Deep-Dream/tensorflow_inception_graph.pb" with tf.gfile.FastGFile(model_fn, "rb") as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) # 定义输入图像的占位符 t_input = tf.placeholder(np.float32, name="input") # 图像预处理 - 减均值 # 开源Inception模型在训练时做了减均值预处理,此处也需坚强同样的均值以保持一致 imagenet_mean = 117.0 # 图像预处理 - 增加维度 # 图像数据格式一般是(height 高度,width 宽度,channels 通道),为同时将多张图片输入网络而在前面增加一维 # 变为(batch,height,width,channels) # batch代表将多张图像送入网络 # 0代表在Tensor下标为0位置进行插入,即头插入,-1则代表从后往前插入,即尾插 # 若为1则为在下标0与2中间进行插入,往后同理 t_preprocessed = tf.expand_dims(t_input - imagenet_mean, 0) # 导入模型并将预处理的图像送入网络中 tf.import_graph_def(graph_def, {"input": t_preprocessed}) ###Output _____no_output_____ ###Markdown 找到卷积层 ###Code layers = [op.name for op in graph.get_operations() if op.type == "Conv2D" and "import/" in op.name] # 卷积层层数 print("Number of layers: ", len(layers)) # 输出所有卷积层名称 print(layers) # 输出指定卷积层的参数 name = "mixed4d_3x3_bottleneck_pre_relu" print("shape of %s:%s" % (name, str(graph.get_tensor_by_name("import/" + name + ":0").get_shape()))) name2 = "mixed4e_5x5_bottleneck_pre_relu" print("shape of %s:%s" % (name2, str(graph.get_tensor_by_name("import/" + name2 + ":0").get_shape()))) ###Output shape of mixed4d_3x3_bottleneck_pre_relu:(?, ?, ?, 144) shape of mixed4e_5x5_bottleneck_pre_relu:(?, ?, ?, 32) ###Markdown 生成原始Deep Dream图像(单通道) ###Code # 把一个numpy.ndarray保存成图像文件 def savearray(img_array, img_name): # 如果想保存成RGB图片,则将上面的 'L' 改为 'RGB' 即可 # Image.fromarray(img_array).convert("L").save(img_name) # Image.fromarray(img_array).save(img_name) scipy.misc.toimage(img_array).save(img_name) print("img saved: %s" % img_name) # 渲染函数 # t_obj:是layer_output[:, :, :, channel],即卷积层某个通道的值 # img0:初始图像(噪声图像) # iter_n:迭代次数 # step:用于控制每次迭代步长,可以看作学习率 def render_naive_function(t_obj, img0, iter_n=20, step=1.0): # t_score是t_obj的平均值 # 由于我们的目标是调整输出图像使卷积层激活值尽可能大 # 即最大化t_score # 为达到此目标,可使用梯度下降 # 计算t_score对t_input的梯度 t_score = tf.reduce_mean(t_obj) t_grad = tf.gradients(t_score, t_input)[0] # 复制新图像可避免影响原图像的值 img = img0.copy() for i in range(iter_n): # 在session中计算梯度,以及当前的t_score g, score = session.run([t_grad, t_score], feed_dict={t_input: img}) # 对img应用梯度 # 首先对梯度进行归一化处理 # 1e-8:0.00000001,即1 * 10^(-8) g /= g.std() + 1e-8 # 将正规化处理后的梯度应用在图像上,step用于控制每次迭代步长,此处为1.0 img += g * step print("iter: %d" % (i + 1), "score(mean) = %f" % score) # 保存图片 savearray(img, "Image/naive_deep_dream.jpg") channel = 139 # "mixed4a_3x3_bottleneck_pre_relu"共144个通道 # 选取任意通道(0 ~ 143之间任意整数)进行最大化 layer_output = graph.get_tensor_by_name("import/%s: 0" % name) # 定义噪声图像 image_noise = np.random.uniform(size=(224, 224, 3)) + 100.0 # 调用render_naive_function函数渲染 render_naive_function(layer_output[:, :, :, channel], image_noise, iter_n=20) # 保存并显示图片 im = Image.open("Image/naive_deep_dream.jpg") im.show() im.save("Image/naive_single_chn.jpg") ###Output iter: 1 score(mean) = 
-20.084309 iter: 2 score(mean) = -32.286839 iter: 3 score(mean) = 25.363852 iter: 4 score(mean) = 91.472725 iter: 5 score(mean) = 150.023575 iter: 6 score(mean) = 201.139053 iter: 7 score(mean) = 262.625397 iter: 8 score(mean) = 329.964844 iter: 9 score(mean) = 368.014954 iter: 10 score(mean) = 416.939453 iter: 11 score(mean) = 456.069672 iter: 12 score(mean) = 496.902832 iter: 13 score(mean) = 537.332153 iter: 14 score(mean) = 562.979187 iter: 15 score(mean) = 598.710266 iter: 16 score(mean) = 630.103699 iter: 17 score(mean) = 656.299927 iter: 18 score(mean) = 688.252625 iter: 19 score(mean) = 711.564026 iter: 20 score(mean) = 733.312195 img saved: Image/naive_deep_dream.jpg ###Markdown 较低层单通道卷积特征生成Deep Dream图像 ###Code # 定义卷积层、通道数,并取出对应Tensor name3 = "mixed3a_3x3_bottleneck_pre_relu" layer_output = graph.get_tensor_by_name("import/%s:0" % name3) print("shape of %s: %s" % (name3, str(graph.get_tensor_by_name("import/" + name3 + ":0").get_shape()),)) ###Output shape of mixed3a_3x3_bottleneck_pre_relu: (?, ?, ?, 96) ###Markdown 高层单通道卷积特征生成Deep Dream图像 ###Code # 定义卷积层、通道数,并取出对应Tensor name4 = "mixed5b_5x5_pre_relu" layer_output = graph.get_tensor_by_name("import/%s:0" % name4) print("shape of %s: %s" % (name4, str(graph.get_tensor_by_name("import/" + name4 + ":0").get_shape()),)) # 定义噪声图像 img_noise = np.random.uniform(size=(224, 224, 3)) + 100.0 # 调用render_naive_function函数渲染 channel = 118 render_naive_function(layer_output[:, :, :, channel], img_noise, iter_n=20) # 保存并显示照片 im = Image.open("Image/naive_deep_dream.jpg") im.show() im.save("Image/deep_single_chn.jpg") ###Output iter: 1 score(mean) = -7.564476 iter: 2 score(mean) = -9.213534 iter: 3 score(mean) = -3.566386 iter: 4 score(mean) = 6.309409 iter: 5 score(mean) = 14.429076 iter: 6 score(mean) = 24.695652 iter: 7 score(mean) = 35.990284 iter: 8 score(mean) = 39.517696 iter: 9 score(mean) = 51.566605 iter: 10 score(mean) = 57.235104 iter: 11 score(mean) = 68.215401 iter: 12 score(mean) = 69.156860 iter: 13 score(mean) = 76.962822 iter: 14 score(mean) = 90.145866 iter: 15 score(mean) = 90.103256 iter: 16 score(mean) = 100.993958 iter: 17 score(mean) = 104.738289 iter: 18 score(mean) = 115.121605 iter: 19 score(mean) = 123.789383 iter: 20 score(mean) = 135.100174 ###Markdown 生成原始Deep Dream图像(所有通道) ###Code # 定义卷积层、通道数,并取出对应Tensor name = "mixed4d_3x3_bottleneck_pre_relu" layer_output = graph.get_tensor_by_name("import/%s:0" % name) print("shape of %s: %s" % (name, str(graph.get_tensor_by_name("import/" + name + ":0").get_shape()),)) # 定义噪声图像 img_noise = np.random.uniform(size=(224, 224, 3)) + 100.0 # 调用render_naive_function函数渲染 # 不指定特定通道,即表示利用所有通道特征 render_naive_function(layer_output, img_noise, iter_n=20) # 单通道时:layer_output[:, :, :, channel] # 保存并显示照片 im = Image.open("Image/naive_deep_dream.jpg") im.show() im.save("Image/all_chn.jpg") ###Output shape of mixed4d_3x3_bottleneck_pre_relu: (?, ?, ?, 144) iter: 1 score(mean) = -6.793727 iter: 2 score(mean) = -8.700486 iter: 3 score(mean) = -6.525362 iter: 4 score(mean) = 0.586559 iter: 5 score(mean) = 7.096675 iter: 6 score(mean) = 11.247742 iter: 7 score(mean) = 15.640568 iter: 8 score(mean) = 19.768738 iter: 9 score(mean) = 22.326097 iter: 10 score(mean) = 25.582514 iter: 11 score(mean) = 29.860027 iter: 12 score(mean) = 32.495106 iter: 13 score(mean) = 35.702557 iter: 14 score(mean) = 39.227608 iter: 15 score(mean) = 40.770065 iter: 16 score(mean) = 45.148392 iter: 17 score(mean) = 46.574661 iter: 18 score(mean) = 49.879055 iter: 19 score(mean) = 50.959644 iter: 20 score(mean) = 54.952278 img 
saved: Image/naive_deep_dream.jpg ###Markdown 以背景图像为起点生成Deep Dream图像 ###Code # 定义卷积层、通道数,并取出对应Tensor name = "mixed4c" layer_output = graph.get_tensor_by_name("import/%s:0" % name) print(layer_output) # 用一张背景图像(而不是随机噪音图像)作为起点对图像进化优化 img_test = Image.open("Image/IMG_0722.jpeg") render_naive_function(layer_output, img_noise, iter_n=100) # 保存并显示照片 im = Image.open("Image/naive_deep_dream.jpg") im.show() im.save("Image/new.jpg") ###Output iter: 1 score(mean) = 5.419296 iter: 2 score(mean) = 10.414704 iter: 3 score(mean) = 19.698999 iter: 4 score(mean) = 29.727409 iter: 5 score(mean) = 38.942368 iter: 6 score(mean) = 46.909119 iter: 7 score(mean) = 54.018745 iter: 8 score(mean) = 60.285233 iter: 9 score(mean) = 65.990456 iter: 10 score(mean) = 71.035934 iter: 11 score(mean) = 75.265663 iter: 12 score(mean) = 79.099968 iter: 13 score(mean) = 82.899994 iter: 14 score(mean) = 85.147491 iter: 15 score(mean) = 88.846619 iter: 16 score(mean) = 90.967453 iter: 17 score(mean) = 93.590637 iter: 18 score(mean) = 95.463150 iter: 19 score(mean) = 97.497620 iter: 20 score(mean) = 99.496071 iter: 21 score(mean) = 101.037498 iter: 22 score(mean) = 102.938408 iter: 23 score(mean) = 104.367973 iter: 24 score(mean) = 105.550201 iter: 25 score(mean) = 107.192665 iter: 26 score(mean) = 108.457817 iter: 27 score(mean) = 109.990860 iter: 28 score(mean) = 111.066895 iter: 29 score(mean) = 111.910309 iter: 30 score(mean) = 113.125336 iter: 31 score(mean) = 114.284042 iter: 32 score(mean) = 115.008102 iter: 33 score(mean) = 116.192841 iter: 34 score(mean) = 116.870758 iter: 35 score(mean) = 117.825951 iter: 36 score(mean) = 118.952049 iter: 37 score(mean) = 119.507835 iter: 38 score(mean) = 120.124084 iter: 39 score(mean) = 121.236565 iter: 40 score(mean) = 121.761330 iter: 41 score(mean) = 122.336555 iter: 42 score(mean) = 123.018837 iter: 43 score(mean) = 123.749718 iter: 44 score(mean) = 124.073250 iter: 45 score(mean) = 125.345367 iter: 46 score(mean) = 125.034760 iter: 47 score(mean) = 126.288246 iter: 48 score(mean) = 126.357590 iter: 49 score(mean) = 127.219292 iter: 50 score(mean) = 127.563759 iter: 51 score(mean) = 128.152725 iter: 52 score(mean) = 128.707733 iter: 53 score(mean) = 129.114182 iter: 54 score(mean) = 129.697525 iter: 55 score(mean) = 130.096634 iter: 56 score(mean) = 130.604034 iter: 57 score(mean) = 131.015671 iter: 58 score(mean) = 131.272705 iter: 59 score(mean) = 131.770615 iter: 60 score(mean) = 132.011246 iter: 61 score(mean) = 132.630905 iter: 62 score(mean) = 132.896683 iter: 63 score(mean) = 133.221817 iter: 64 score(mean) = 133.393448 iter: 65 score(mean) = 134.129196 iter: 66 score(mean) = 134.275955 iter: 67 score(mean) = 134.721024 iter: 68 score(mean) = 135.193359 iter: 69 score(mean) = 135.310608 iter: 70 score(mean) = 135.800461 iter: 71 score(mean) = 135.802017 iter: 72 score(mean) = 136.457703 iter: 73 score(mean) = 136.562241 iter: 74 score(mean) = 136.926437 iter: 75 score(mean) = 137.044815 iter: 76 score(mean) = 137.548615 iter: 77 score(mean) = 137.745026 iter: 78 score(mean) = 137.912216 iter: 79 score(mean) = 137.996445 iter: 80 score(mean) = 138.380188 iter: 81 score(mean) = 138.589996 iter: 82 score(mean) = 138.596817 iter: 83 score(mean) = 139.431854 iter: 84 score(mean) = 139.214706 iter: 85 score(mean) = 139.745728 iter: 86 score(mean) = 139.814240 iter: 87 score(mean) = 140.128250 iter: 88 score(mean) = 140.297089 iter: 89 score(mean) = 140.654160 iter: 90 score(mean) = 140.781525 iter: 91 score(mean) = 141.010925 iter: 92 score(mean) = 141.137131 iter: 93 
score(mean) = 141.385300 iter: 94 score(mean) = 141.633179 iter: 95 score(mean) = 141.939407 iter: 96 score(mean) = 141.952667 iter: 97 score(mean) = 142.306427 iter: 98 score(mean) = 142.345184 iter: 99 score(mean) = 142.590240 iter: 100 score(mean) = 142.702057 img saved: Image/naive_deep_dream.jpg ###Markdown 定义相关函数 ###Code # 调整图像尺寸 def resize(img, hw): min = img.min() max = img.max() img = (img - min) / (max - min) * 255 img = np.float32(scipy.misc.imresize(img, hw)) img = img / 255 * (max - min) + min return img # 将图像放大ratio倍 def resize_ratio(img, ratio): min = img.min() max = img.max() img = (img - min) / (max - min) * 255 img = np.float32(scipy.misc.imresize(img, ratio)) img = img / 255 * (max - min) + min return img # 原始图像尺寸可能很大,从而导致内存耗尽问题 # 每次只对tile_size * tile_size 大小的图像计算梯度,避免内存问题 def calc_grad_tiled(img, t_grad, tile_size=512): sz = tile_size h, w = img.shape[:2] sx, sy = np.random.randint(sz, size=2) # 先在行上做整体移动,再在列上做整体移动 img_shift = np.roll(np.roll(img, sx, 1), sy, 0) grad = np.zeros_like(img) for y in range(0, max(h - sz // 2, sz), sz): for x in range(0, max(w - sz // 2, sz), sz): sub = img_shift[y:y + sz, x:x + sz] g = session.run(t_grad, {t_input: sub}) grad[y:y + sz, x: x + sz] = g return np.roll(np.roll(grad, -sx, 1), -sy, 0) # 优化图像后的渲染函数 def render_deep_dream_pro(t_obj, img0, iter_n=10, step=1.5, octave_n=4, octave_scale=1.4): t_score = tf.reduce_mean(t_obj) t_grad = tf.gradients(t_score, t_input)[0] img = img0.copy() # 将图像进行金字塔分解 # 从而分为高频,低频部分 octaves = [] for i in range(octave_n - 1): hw = img.shape[:2] lo = resize(img, np.int32(np.float32(hw) / octave_scale)) hi = img - resize(lo, hw) img = lo octaves.append(hi) # 首先生成低频的图像,再依次放大并加上高频 for octave in range(octave_n): if octave > 0: hi = octaves[-octave] img = resize(img, hi.shape[:2]) + hi for i in range(iter_n): g = calc_grad_tiled(img, t_grad) img += g * (step / (np.abs(g).mean() + 1e-7)) img = img.clip(0, 255) savearray(img, "Image/new_pro.jpg") im = Image.open("Image/new_pro.jpg").show() # 定义卷积层、通道数,并取出对应Tensor name = "mixed4c" layer_output = graph.get_tensor_by_name("import/%s:0" % name) print("shape of %s: %s" % (name, str(graph.get_tensor_by_name("import/" + name + ":0").get_shape()),)) # 定义噪声图像 img0 = Image.open("Image/IMG_0722.jpeg") img0 = np.float32(img0) render_deep_dream_pro(tf.square(layer_output), img0) ###Output shape of mixed4c: (?, ?, ?, 512)
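###Markdown The multiscale renderer can also be pointed at a single channel of `mixed4c` instead of the whole layer, in the same way as the naive renderer earlier in the notebook; the channel index below is arbitrary.
###Code
# Render a single (arbitrarily chosen) channel of mixed4c with the multiscale renderer
channel = 139
render_deep_dream_pro(layer_output[:, :, :, channel], img0, iter_n=10)
###Output
_____no_output_____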
visualize_output.ipynb
###Markdown A notebook to query the trained embedding with the test set to qualitatively evaluate the text-video retrieval ###Code import torch as th from torch.utils.data import DataLoader import numpy as np import torch.optim as optim from args import get_args import random import os from model import Net from metrics import compute_metrics, print_computed_metrics from loss import MaxMarginRankingLoss from gensim.models.keyedvectors import KeyedVectors import pickle from m2e2_dataloader import M2E2DataLoader import pandas as pd import json word2vec_path='data/GoogleNews-vectors-negative300.bin' print('Loading word vectors: {}'.format(word2vec_path)) we = KeyedVectors.load_word2vec_format(word2vec_path, binary=True) print('done') sentences_path = "/kiwi-data/users/shoya/AIDA/event_occurences_video_and_text_pairs.json" max_words = 20 we_dim = 300 batch_size=256 batch_size_val=3500 num_workers = 4 dataset_val_m2e2 = M2E2DataLoader( csv="/home/shoya/howto100m/data_paths_test.csv", sentences=sentences_path, we=we, max_words=max_words, we_dim=we_dim, ) dataloader_val_m2e2 = DataLoader( dataset_val_m2e2, batch_size=batch_size_val, num_workers=num_workers, shuffle=False, ) net = Net( video_dim=4096, embd_dim=6144, we_dim=300, n_pair=1, max_words=max_words, sentence_dim=-1, ) net.load_checkpoint('e31.pth') net.eval() net.cuda() batch = next(iter(dataloader_val_m2e2)) text = batch['text'].cuda() video = batch['video'].cuda() output = net(video, text) output = output.cpu().detach().numpy() eval_paths = pd.read_csv("/home/shoya/howto100m/data_paths_test.csv") eval_file_names = eval_paths['video_id'].values video2sentence = json.load(open(sentences_path)) for correct_idx, prob in enumerate(output[:30]): corresponding_video = eval_file_names[correct_idx] corresponding_sentences = video2sentence[corresponding_video] correct_idx_prob = prob[correct_idx] sorted_x = np.sort(-prob) correct_guessed_at = np.where(sorted_x+correct_idx_prob == 0)[0][0] print('============= Test Sample {} ============='.format(correct_idx+1)) print("Query Sentence: {}".format(corresponding_sentences)) print("Corresponding Video: {}".format(corresponding_video)) print("Correctly Guessed on Index {}".format(correct_guessed_at)) print() top_5_index = prob.argsort()[-5:][::-1] for i,pred_idx in enumerate(top_5_index): print("Guess {}: {}".format(i+1,eval_file_names[pred_idx])) print() ###Output ============= Test Sample 1 ============= Query Sentence: ['US President Donald Trump has told his citizens they should brace for " painful " weeks ahead .'] Corresponding Video: Coronavirus Trump warns of very painful weeks ahead - BBC News clipped_23.557_86.748.mp4 Correctly Guessed on Index 1 Guess 1: Coronavirus President Trump Cuts US Funding to WHO clipped_0_38.157.mp4 Guess 2: Coronavirus Trump warns of very painful weeks ahead - BBC News clipped_23.557_86.748.mp4 Guess 3: Blasts Heard at Ashraf Ghani Inauguration as Afghan President clipped_38.802_46.198.mp4 Guess 4: Senate Committee Holds Hearing into Fort Hood Attack clipped_116.027_155.277.mp4 Guess 5: Prince William attends Manchester attack remembrance service in UK clipped_19.004_30.04.mp4 ============= Test Sample 2 ============= Query Sentence: ["Police officers and army personnel in Mosul , Kirkuk and Ramadi queued to cast their votes two days before the rest of the nation ' s voters can go to the polls to elect a new parliament ."] Corresponding Video: Early voting for Iraq military ahead of 12 May parliamentary elections Voice of America clipped_4.627_30.002.mp4 Correctly 
Guessed on Index 27 Guess 1: Mass protests and arrests across US over George Floyd death - BBC News clipped_150.527_165.01.mp4 Guess 2: Dramatic Eyewitness Footage Captures New Orleans Hard Rock Hotel Collapse Ensuing Chaos clipped_94.264_103.592.mp4 Guess 3: Indian Army Troops Seen Moving Towards Border After Clash With China clipped_0.434_32.44.mp4 Guess 4: Line of Trucks Stranded as Iran Closes Border Amid Coronavirus Outbreak clipped_0.503_35.42.mp4 Guess 5: Palestinian Protesters Clash With Israeli Soldiers clipped_0.053_26.453.mp4 ============= Test Sample 3 ============= Query Sentence: ['Medical tents were set up outside Citizens Bank Park in Philadelphia as possible future testing stations for the new coronavirus .'] Corresponding Video: Medical Tents Set up Outside Philadelphia as Possible Coronavirus Screening Facilities clipped_0.315_35.544.mp4 Correctly Guessed on Index 4 Guess 1: South Korean Military Boosts Mask Manufacturing for Coronavirus clipped_0.265_26.444.mp4 Guess 2: Hong Kong Protesters Clash with Police at Subway Station clipped_37.027_90.389.mp4 Guess 3: Coronavirus General Motors Workers in Michigan Make Masks clipped_9.315_13.838.mp4 Guess 4: Italy Uses Snow Cannons to Disinfect Villages Amid Coronavirus Lockdown clipped_4.002_22.231.mp4 Guess 5: Medical Tents Set up Outside Philadelphia as Possible Coronavirus Screening Facilities clipped_0.315_35.544.mp4 ============= Test Sample 4 ============= Query Sentence: ['Tens of thousands of people have been protesting across Australia in support of the Black Lives Matter movement , following the death of George Floyd .'] Corresponding Video: Australia protests highlight indigenous deaths - BBC News clipped_0.54_70.005.mp4 Correctly Guessed on Index 0 Guess 1: Australia protests highlight indigenous deaths - BBC News clipped_0.54_70.005.mp4 Guess 2: Sudan protest Demonstrators continue sit-in despite crackdown - BBC News clipped_0.053_37.889.mp4 Guess 3: Coronavirus US faced with protests amid pressure to reopen - BBC News clipped_102.335_171.335.mp4 Guess 4: Police Fire Tear Gas Block Exit at Anti-Racism Demonstration in France clipped_0_58.759.mp4 Guess 5: More US police officers charged over George Floyd death as protests continue - BBC News clipped_535.963_642.648.mp4 ============= Test Sample 5 ============= Query Sentence: ["CCTV footage broadcast by news channel Al Arabiya shows the moment a projectile struck an airport in Saudi Arabia ' s Abha International Airport , located 200 kilometers from the border with Yemen ."] Corresponding Video: Houthis Strike Saudi Arabia Airport Close to Yemen Border clipped_0.002_8.961.mp4 Correctly Guessed on Index 42 Guess 1: Coronavirus Deserted Roads in Saudi Arabias Holy City Mecca clipped_41.474_55.91.mp4 Guess 2: Coronavirus Deserted Roads in Saudi Arabias Holy City Mecca clipped_18.691_22.231.mp4 Guess 3: Hiroshima atomic bomb Survivor recalls horrors - BBC News clipped_0.011_17.147.mp4 Guess 4: Coronavirus How to fly during a pandemic - BBC News clipped_75.106_77.711.mp4 Guess 5: Chinas new island in the South China Sea - BBC News clipped_11.765_20.252.mp4 ============= Test Sample 6 ============= Query Sentence: ["Here BBC Health and Science Correspondent , Laura Foster , has been to Southend Airport to show you what you need to do if you ' re thinking of catching a flight ."] Corresponding Video: Coronavirus How to fly during a pandemic - BBC News clipped_22.631_25.523.mp4 Correctly Guessed on Index 68 Guess 1: Coronavirus President Trump Cuts US Funding to WHO 
clipped_0_38.157.mp4 Guess 2: New Zealand Mosque Attacks Send Shock Waves Throughout Muslim World clipped_63.389_86.389.mp4 Guess 3: Senate Committee Holds Hearing into Fort Hood Attack clipped_116.027_155.277.mp4 Guess 4: Hong Kong protests China condemns horrendous incidents - BBC News clipped_23.426_51.592.mp4 Guess 5: Clashes erupted at the Chinese University of Hong Kong - BBC News clipped_285.077_500.027.mp4 ============= Test Sample 7 ============= Query Sentence: ['👉 Riot police threw tear gas and protesters responded by throwing Molotov cocktails and setting up barricades made of bricks .'] Corresponding Video: Police and Protesters Clash at Hong Kong Polytechnic University clipped_31.001_38.468.mp4 Correctly Guessed on Index 0 Guess 1: Police and Protesters Clash at Hong Kong Polytechnic University clipped_31.001_38.468.mp4 Guess 2: Israeli Security Forces Clash with Palestinian Protesters in West Bank clipped_3.002_5.815.mp4 Guess 3: Malawi Opposition Supporters Clash with Police as Election Results Challenged clipped_11.38_23.505.mp4 Guess 4: China moves to impose controversial Hong Kong security law - BBC News clipped_99.426_119.176.mp4 Guess 5: Police and Protesters Clash at Hong Kong Polytechnic University clipped_0_13.825.mp4 ============= Test Sample 8 ============= Query Sentence: ["Kilauea ' s 19 - day eruption showed no sign of easing , with repeated explosions at its summit and fountains of lava up to 160 feet ( 50 m ) from giant cracks or fissures on its flank ."] Corresponding Video: Lava from Hawaiis erupting Kilauea volcano lit up the night on Tuesday (May 22) clipped_0_32.002.mp4 Correctly Guessed on Index 0 Guess 1: Lava from Hawaiis erupting Kilauea volcano lit up the night on Tuesday (May 22) clipped_0_32.002.mp4 Guess 2: Missile Strikes Military Parade in Yemens Aden clipped_17.63_37.838.mp4 Guess 3: Coronavirus Deserted Roads in Saudi Arabias Holy City Mecca clipped_41.474_55.91.mp4 Guess 4: Putin Forever - BBC News clipped_248.014_275.639.mp4 Guess 5: Hiroshima atomic bomb Survivor recalls horrors - BBC News clipped_0.011_17.147.mp4 ============= Test Sample 9 ============= Query Sentence: ['👉 Riot police threw tear gas and protesters responded by throwing Molotov cocktails and setting up barricades made of bricks .'] Corresponding Video: Police and Protesters Clash at Hong Kong Polytechnic University clipped_0_13.825.mp4 Correctly Guessed on Index 4 Guess 1: Police and Protesters Clash at Hong Kong Polytechnic University clipped_31.001_38.468.mp4 Guess 2: Israeli Security Forces Clash with Palestinian Protesters in West Bank clipped_3.002_5.815.mp4 Guess 3: Malawi Opposition Supporters Clash with Police as Election Results Challenged clipped_11.38_23.505.mp4 Guess 4: China moves to impose controversial Hong Kong security law - BBC News clipped_99.426_119.176.mp4 Guess 5: Police and Protesters Clash at Hong Kong Polytechnic University clipped_0_13.825.mp4 ============= Test Sample 10 ============= Query Sentence: ["The national security adviser to U . S . President Donald Trump denies any U . S . 
government involvement in Saturday ' s drone explosions in Venezuela ' s capital during President Nicolas Maduro ' s speech ."] Corresponding Video: Bolton Denies Any US Involvement in Drone Attacks in Venezuela clipped_0.003_8.774.mp4 Correctly Guessed on Index 46 Guess 1: Coronavirus President Trump Cuts US Funding to WHO clipped_0_38.157.mp4 Guess 2: Coronavirus Trump warns of very painful weeks ahead - BBC News clipped_23.557_86.748.mp4 Guess 3: Blasts Heard at Ashraf Ghani Inauguration as Afghan President clipped_38.802_46.198.mp4 Guess 4: New Zealand Mosque Attacks Send Shock Waves Throughout Muslim World clipped_63.389_86.389.mp4 Guess 5: Senate Committee Holds Hearing into Fort Hood Attack clipped_116.027_155.277.mp4 ============= Test Sample 11 ============= Query Sentence: ['General Motors workers in Warren , Michigan , make masks Thursday , April 23 , to stem shortages of protective gear and equipment amid the coronavirus pandemic .'] Corresponding Video: Coronavirus General Motors Workers in Michigan Make Masks clipped_9.315_13.838.mp4 Correctly Guessed on Index 0 Guess 1: Coronavirus General Motors Workers in Michigan Make Masks clipped_9.315_13.838.mp4 Guess 2: Hong Kong protest Tensions on the front line - BBC News clipped_0.002_16.656.mp4 Guess 3: South Korean Military Boosts Mask Manufacturing for Coronavirus clipped_0.265_26.444.mp4 Guess 4: Russian Toilet Paper Factory Increases Production clipped_0.759_4.259.mp4 Guess 5: Double-Arm Transplant Gives Marine Corps Veteran a Shot at New Life clipped_0_7.092.mp4 ============= Test Sample 12 ============= Query Sentence: ['Hundreds of protesters held a rally near the Indonesian presidential palace in Jakarta after numerous riots and demonstrations brought several Papuan cities to a standstill over recent days .'] Corresponding Video: Indonesia cuts off internet to Papua following protests - BBC News clipped_0.513_11.69.mp4 Correctly Guessed on Index 25 Guess 1: Australia protests highlight indigenous deaths - BBC News clipped_0.54_70.005.mp4 Guess 2: More US police officers charged over George Floyd death as protests continue - BBC News clipped_535.963_642.648.mp4 Guess 3: Beirut Why has there been crisis after crisis in Lebanon - BBC News clipped_83.03_92.294.mp4 Guess 4: Greece election Alexis Tsipras hails victory of the people - BBC News clipped_12.336_39.921.mp4 Guess 5: Hong Kong protests On the frontline - BBC News clipped_0.002_154.264.mp4 ============= Test Sample 13 ============= Query Sentence: ['An Israeli air strike destroyed a house in the neighborhood of Rafah in southern Gaza , Wednesday , November 1 , as seen in footage shared on social media .'] Corresponding Video: Israeli Air Strike Destroys House in Gaza clipped_10.315_24.648.mp4 Correctly Guessed on Index 0 Guess 1: Israeli Air Strike Destroys House in Gaza clipped_10.315_24.648.mp4 Guess 2: The trauma of Hong Kongs teenage protesters - BBC News clipped_17.044_25.127.mp4 Guess 3: Cranes Demolished Over Hard Rock Hotel Ruins in New Orleans clipped_20.37_32.606.mp4 Guess 4: Berlin attack Police uncertain detained suspect drove lorry - BBC News clipped_0.102_13.107.mp4 Guess 5: Afghanistan Blast clipped_515.592_528.426.mp4 ============= Test Sample 14 ============= Query Sentence: ['In an exclusive video interview with AFP news agency , he said " there was no order to make any attack ".'] Corresponding Video: Syria chemical attack fabricated - Assad - BBC News clipped_126.319_131.632.mp4 Correctly Guessed on Index 9 Guess 1: Coronavirus President Trump Cuts 
US Funding to WHO clipped_0_38.157.mp4 Guess 2: Senate Committee Holds Hearing into Fort Hood Attack clipped_116.027_155.277.mp4 Guess 3: New Zealand Mosque Attacks Send Shock Waves Throughout Muslim World clipped_63.389_86.389.mp4 Guess 4: Coronavirus US faced with protests amid pressure to reopen - BBC News clipped_102.335_171.335.mp4 Guess 5: Coronavirus Trump warns of very painful weeks ahead - BBC News clipped_23.557_86.748.mp4 ============= Test Sample 15 ============= Query Sentence: ['A Russian toilet paper factory increases production to avoid a toilet paper shortage amid the coronavirus pandemic , St . Petersburg , Russia , Wednesday , March 25 .'] Corresponding Video: Russian Toilet Paper Factory Increases Production clipped_0.759_4.259.mp4 Correctly Guessed on Index 1 Guess 1: South Korean Military Boosts Mask Manufacturing for Coronavirus clipped_0.265_26.444.mp4 Guess 2: Russian Toilet Paper Factory Increases Production clipped_0.759_4.259.mp4 Guess 3: Coronavirus General Motors Workers in Michigan Make Masks clipped_9.315_13.838.mp4 Guess 4: Italy Uses Snow Cannons to Disinfect Villages Amid Coronavirus Lockdown clipped_4.002_22.231.mp4 Guess 5: Hong Kong protest Tensions on the front line - BBC News clipped_0.002_16.656.mp4 ============= Test Sample 16 ============= Query Sentence: ['👉 A 70 - year - old female patient died during the disturbance , authorities said , with hospital staff claiming the rioters " broke everything " when they stormed the cardiology hospital \' s Intensive Care Unit .'] Corresponding Video: Patient Dies After Mob of Lawyers Ransack Hospital in Pakistan clipped_0_11.132.mp4 Correctly Guessed on Index 38 Guess 1: More US police officers charged over George Floyd death as protests continue - BBC News clipped_161.689_189.544.mp4 Guess 2: China moves to impose controversial Hong Kong security law - BBC News clipped_99.426_119.176.mp4 Guess 3: Protecting Hong Kongs young protesters - BBC News clipped_205.737_220.264.mp4 Guess 4: Early voting for Iraq military ahead of 12 May parliamentary elections Voice of America clipped_4.627_30.002.mp4 Guess 5: Nine Hurt in Violent Demonstration at Turkish Ambassadors Residence clipped_0.007_46.371.mp4 ============= Test Sample 17 ============= Query Sentence: ["Within the past hour , there ' s been been an explosion on board a bus in Jerusalem ."] Corresponding Video: Jerusalem bus explosion injures several people - BBC News clipped_1.211_37.065.mp4 Correctly Guessed on Index 8 Guess 1: Afghanistan Blast clipped_515.592_528.426.mp4 Guess 2: Lava from Hawaiis erupting Kilauea volcano lit up the night on Tuesday (May 22) clipped_0_32.002.mp4 Guess 3: Cranes Demolished Over Hard Rock Hotel Ruins in New Orleans clipped_20.37_32.606.mp4 Guess 4: The trauma of Hong Kongs teenage protesters - BBC News clipped_17.044_25.127.mp4 Guess 5: Lebanon Video Footage Shows Moment of Beirut Explosion clipped_0_32.648.mp4 ============= Test Sample 18 ============= Query Sentence: ["Thursday marks 70 years to the day since the United States dropped the world ' s first atomic bomb on the Japanese city of Hiroshima ."] Corresponding Video: Hiroshima atomic bomb Survivor recalls horrors - BBC News clipped_0.011_17.147.mp4 Correctly Guessed on Index 39 Guess 1: Coronavirus Deserted Roads in Saudi Arabias Holy City Mecca clipped_18.691_22.231.mp4 Guess 2: Berlin attack Police uncertain detained suspect drove lorry - BBC News clipped_0.102_13.107.mp4 Guess 3: Coronavirus Deserted Roads in Saudi Arabias Holy City Mecca 
clipped_41.474_55.91.mp4 Guess 4: Millions of Americans Barred From Voting This Election clipped_0.726_164.765.mp4 Guess 5: Cranes Demolished Over Hard Rock Hotel Ruins in New Orleans clipped_20.37_32.606.mp4 ============= Test Sample 19 ============= Query Sentence: ['China says it will finish building some new islands in the South China Sea " within days ".'] Corresponding Video: Chinas new island in the South China Sea - BBC News clipped_11.765_20.252.mp4 Correctly Guessed on Index 0 Guess 1: Chinas new island in the South China Sea - BBC News clipped_11.765_20.252.mp4 Guess 2: Line of Trucks Stranded as Iran Closes Border Amid Coronavirus Outbreak clipped_0.503_35.42.mp4 Guess 3: Cranes Demolished Over Hard Rock Hotel Ruins in New Orleans clipped_20.37_32.606.mp4 Guess 4: Israeli Air Strike Destroys House in Gaza clipped_10.315_24.648.mp4 Guess 5: Romania Importing More Supplies for Coronavirus Cases clipped_12.252_17.523.mp4 ============= Test Sample 20 ============= Query Sentence: ['Drivers in Pakistan are demanding the government open the border as they are carrying perishable products such as fruits and vegetables .'] Corresponding Video: Line of Trucks Stranded as Iran Closes Border Amid Coronavirus Outbreak clipped_0.503_35.42.mp4 Correctly Guessed on Index 3 Guess 1: Indian Army Troops Seen Moving Towards Border After Clash With China clipped_0.434_32.44.mp4 Guess 2: Mass protests and arrests across US over George Floyd death - BBC News clipped_150.527_165.01.mp4 Guess 3: London 77 attacks How the day unfolded (montage) - BBC News clipped_125.507_176.897.mp4 Guess 4: Line of Trucks Stranded as Iran Closes Border Amid Coronavirus Outbreak clipped_0.503_35.42.mp4 Guess 5: Israeli Security Forces Clash with Palestinian Protesters in West Bank clipped_3.002_5.815.mp4 ============= Test Sample 21 ============= Query Sentence: ["A brawl broke out between supporters and opponents of Turkey ' s President Recep Tayyip Erdogan in Washington DC , injuring nine ."] Corresponding Video: Protesters injured outside Turkish embassy in Washington - BBC News clipped_7.132_14.842.mp4 Correctly Guessed on Index 0 Guess 1: Protesters injured outside Turkish embassy in Washington - BBC News clipped_7.132_14.842.mp4 Guess 2: Nine Hurt in Violent Demonstration at Turkish Ambassadors Residence clipped_0.007_46.371.mp4 Guess 3: Police officer draws gun at Paris protest - BBC News clipped_12.19_37.847.mp4 Guess 4: Indonesia cuts off internet to Papua following protests - BBC News clipped_0.513_11.69.mp4 Guess 5: China moves to impose controversial Hong Kong security law - BBC News clipped_99.426_119.176.mp4 ============= Test Sample 22 ============= Query Sentence: ["Dignitaries , survivors , first responders and the people of Manchester gathered Tuesday to mark the anniversary of last year ' s concert bombing that killed 22 people in the city ."] Corresponding Video: Prince William attends Manchester attack remembrance service in UK clipped_19.004_30.04.mp4 Correctly Guessed on Index 9 Guess 1: Hong Kong Protesters Clash with Police at Subway Station clipped_37.027_90.389.mp4 Guess 2: Polands conservative President Duda re-elected - BBC News clipped_23.654_54.842.mp4 Guess 3: Russia protests Crowds take to streets over corruption - BBC News clipped_0.866_22.481.mp4 Guess 4: Prince William attends Manchester attack remembrance service in UK clipped_0.194_30.507.mp4 Guess 5: Hong Kong protests On the frontline - BBC News clipped_0.002_154.264.mp4 ============= Test Sample 23 ============= Query Sentence: 
['An Indian army convoy moved towards the border region of Ladakh , Wednesday , June 17 , after a clash between Indian and Chinese troops left 20 Indian soldiers dead .'] Corresponding Video: Indian Army Troops Seen Moving Towards Border After Clash With China clipped_0.434_32.44.mp4 Correctly Guessed on Index 1 Guess 1: Indian injured by US police speaks out - BBC News clipped_121.961_130.356.mp4 Guess 2: Indian Army Troops Seen Moving Towards Border After Clash With China clipped_0.434_32.44.mp4 Guess 3: Coronavirus Deserted Roads in Saudi Arabias Holy City Mecca clipped_41.474_55.91.mp4 Guess 4: Putin Forever - BBC News clipped_248.014_275.639.mp4 Guess 5: Coronavirus Deserted Roads in Saudi Arabias Holy City Mecca clipped_18.691_22.231.mp4 ============= Test Sample 24 ============= Query Sentence: ['Tens of thousands of people have joined anti - racism demonstrations across the UK , despite government appeals for protestors to stay at home , because of fears of spreading the coronavirus .'] Corresponding Video: Thousands join anti-racism demonstrations across the UK - BBC News clipped_111.336_119.169.mp4 Correctly Guessed on Index 56 Guess 1: Australia protests highlight indigenous deaths - BBC News clipped_0.54_70.005.mp4 Guess 2: Sudan protest Demonstrators continue sit-in despite crackdown - BBC News clipped_0.053_37.889.mp4 Guess 3: Hong Kong protests On the frontline - BBC News clipped_0.002_154.264.mp4 Guess 4: Thousands attend Tommy Robinson BBC demo - BBC News clipped_0_29.606.mp4 Guess 5: Millions turn out in Iran for General Soleimanis funeral - BBC News clipped_86.029_312.529.mp4 ============= Test Sample 25 ============= Query Sentence: ['In Northern Ireland , you can meet up to 30 people outdoors , or up to six people indoors - while maintaining social distancing .'] Corresponding Video: Lockdown rules How to keep your guests safe from Covid-19 - BBC News clipped_0.955_113.764.mp4 Correctly Guessed on Index 37 Guess 1: Locked down India struggles as workers flee cities - BBC News clipped_18.027_176.777.mp4 Guess 2: Anti-Erdogan Protesters Say They Were Attacked by Presidents Bodyguards clipped_0.194_18.382.mp4 Guess 3: Nine Hurt in Violent Demonstration at Turkish Ambassadors Residence clipped_0.007_46.371.mp4 Guess 4: More US police officers charged over George Floyd death as protests continue - BBC News clipped_535.963_642.648.mp4 Guess 5: Pink Dot Singapores gay rights rally- BBC News clipped_14.139_24.764.mp4 ============= Test Sample 26 ============= Query Sentence: ['US prosecutors say they plan to try a police officer for a third time for allegedly using excessive force against an Indian grandfather who was pushed to the ground while he went for a morning walk .'] Corresponding Video: Indian injured by US police speaks out - BBC News clipped_121.961_130.356.mp4 Correctly Guessed on Index 10 Guess 1: Meet Pakistans 10m wanted man Hafiz Saeed - BBC News clipped_1.611_149.64.mp4 Guess 2: Palestinian Protesters Clash With Israeli Soldiers clipped_0.053_26.453.mp4 Guess 3: Militant Attacks Kill 12 in Kabul Jalalabad clipped_42.843_61.51.mp4 Guess 4: Milan Prisoners Riot Over Coronavirus Restrictions clipped_0.054_54.005.mp4 Guess 5: New Zealand Mosque Attacks Send Shock Waves Throughout Muslim World clipped_63.389_86.389.mp4 ============= Test Sample 27 ============= Query Sentence: ['Soldiers tried to chase away pick - up trucks firing tear gas , on the second night of a sit - in protest calling for President Omar al - Bashir to resign .'] Corresponding Video: Sudan 
protest Demonstrators continue sit-in despite crackdown - BBC News clipped_0.053_37.889.mp4 Correctly Guessed on Index 88 Guess 1: Nine Hurt in Violent Demonstration at Turkish Ambassadors Residence clipped_0.007_46.371.mp4 Guess 2: Palestinian Protesters Clash With Israeli Soldiers clipped_0.053_26.453.mp4 Guess 3: Protesters injured outside Turkish embassy in Washington - BBC News clipped_7.132_14.842.mp4 Guess 4: Early voting for Iraq military ahead of 12 May parliamentary elections Voice of America clipped_4.627_30.002.mp4 Guess 5: More US police officers charged over George Floyd death as protests continue - BBC News clipped_161.689_189.544.mp4 ============= Test Sample 28 ============= Query Sentence: ['The peaceful protests soon lead to a brutal civil war causing the death of over 250 , 000 Syrian people .'] Corresponding Video: Syria 5 year milestone since protests lead to civil war - BBC News clipped_29.183_38.836.mp4 Correctly Guessed on Index 0 Guess 1: Syria 5 year milestone since protests lead to civil war - BBC News clipped_29.183_38.836.mp4 Guess 2: Patient Dies After Mob of Lawyers Ransack Hospital in Pakistan clipped_0_11.132.mp4 Guess 3: Indonesia cuts off internet to Papua following protests - BBC News clipped_0.513_11.69.mp4 Guess 4: Police Fire Tear Gas Block Exit at Anti-Racism Demonstration in France clipped_0_58.759.mp4 Guess 5: Police and Protesters Clash at Hong Kong Polytechnic University clipped_31.001_38.468.mp4 ============= Test Sample 29 ============= Query Sentence: ['Israeli security forces and Palestinian protesters clashed in the West Bank city of Hebron , Thursday , February 8 .'] Corresponding Video: Israeli Security Forces Clash with Palestinian Protesters in West Bank clipped_3.002_5.815.mp4 Correctly Guessed on Index 0 Guess 1: Israeli Security Forces Clash with Palestinian Protesters in West Bank clipped_3.002_5.815.mp4 Guess 2: Palestinian Protesters Clash With Israeli Soldiers clipped_0.053_26.453.mp4 Guess 3: Dramatic Eyewitness Footage Captures New Orleans Hard Rock Hotel Collapse Ensuing Chaos clipped_94.264_103.592.mp4 Guess 4: Palestinian Stabs Israelis Shot Dead by Police clipped_1.869_10.586.mp4 Guess 5: Malawi Opposition Supporters Clash with Police as Election Results Challenged clipped_11.38_23.505.mp4 ============= Test Sample 30 ============= Query Sentence: ['A massive truck bombing in Kabul kills at least 90 people , mostly civilians , and wounds hundreds more .'] Corresponding Video: Afghanistan Blast clipped_515.592_528.426.mp4 Correctly Guessed on Index 24 Guess 1: Double-Arm Transplant Gives Marine Corps Veteran a Shot at New Life clipped_0_7.092.mp4 Guess 2: Taliban Launches Attack in Central Afghanistan Amid Intra-Afghan Summit in Qatar clipped_0.086_15.836.mp4 Guess 3: Syrian Troops Take Control of Idlib Town clipped_0.002_6.752.mp4 Guess 4: South Korean Military Boosts Mask Manufacturing for Coronavirus clipped_0.265_26.444.mp4 Guess 5: Hong Kong protest Tensions on the front line - BBC News clipped_0.002_16.656.mp4
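###Markdown The samples above rank one query at a time against every candidate clip. The same similarity matrix can also be summarised quantitatively — a sketch below, assuming `output` is the query-by-video similarity matrix computed above with the correct clip for query i in column i (this is a standalone helper, not necessarily identical to the imported `compute_metrics`): ###Code
import numpy as np

def retrieval_summary(sim):
    # For each query, rank of the correct (diagonal) clip; 0 means ranked first
    order = np.argsort(-sim, axis=1)
    ranks = np.array([np.where(order[i] == i)[0][0] for i in range(sim.shape[0])])
    return {
        'R@1':  float(np.mean(ranks < 1)),
        'R@5':  float(np.mean(ranks < 5)),
        'R@10': float(np.mean(ranks < 10)),
        'median_rank': float(np.median(ranks) + 1),  # 1-based, as usually reported
    }

print(retrieval_summary(output))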
01-MTA_Turnstile_EDA/Peter's Scrap Code/challenges.ipynb
###Markdown Looking at the data, it seems obvious that some of these entries have been reset (the ones where the min is very low compared to the max). We can now get a better sense of a cutoff that removes resets without dropping the entries that did not reset, i.e. 20,000. ###Code
# gb_r is gb with the obvious resets removed
gb_r = gb[gb.dif < 20000]

# Show boxplot
sns.boxplot(gb_r.dif);

plt.barh(gb_r[:7].index.levels[1], gb_r[:7]["dif"])
#gb_r.index.levels[1]

gb_r.reset_index()

len(gb[(gb["min"] == 0) & (gb["max"] != 0)])

df_r.groupby(["STATION"]).ENTRIES.describe()
###Output
_____no_output_____
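###Markdown The cutoff above treats any per-group difference of 20,000 or more as a counter reset. To make that logic explicit, here is a sketch of reset detection on consecutive differences — the identifier and time columns follow the standard MTA turnstile schema and are assumptions here, since `gb` and `df_r` are built elsewhere in this scrap notebook: ###Code
# Consecutive ENTRIES differences per physical turnstile (assumed key/time columns)
keys = ["C/A", "UNIT", "SCP", "STATION"]
df_r = df_r.sort_values(keys + ["DATE", "TIME"])
df_r["dif"] = df_r.groupby(keys)["ENTRIES"].diff()

# Negative jumps or implausibly large jumps are treated as counter resets
resets = (df_r["dif"] < 0) | (df_r["dif"] > 20000)
print("{:.2%} of intervals look like counter resets".format(resets.mean()))

clean = df_r[~resets]
clean.groupby("STATION")["dif"].sum().sort_values(ascending=False).head()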
demos/LLVM-Cauldron.ipynb
###Markdown LLVM Cauldron - Wuthering Bytes 2016-09-08 Generating Python & Ruby bindings from C++ Jonathan B Coe [email protected] https://github.com/ffig/ffig[Updated Links and API use on 2018-01-25] Write a C++ class out to a file in the current working directory ###Code outputfile = "Shape.h" %%file $outputfile #include <stdexcept> #include <string> #ifdef __clang__ #define C_API __attribute__((annotate("GENERATE_C_API"))) #else #define C_API #endif #include <ffig/attributes.h> struct FFIG_EXPORT Shape { virtual ~Shape() = default; virtual double area() const = 0; virtual double perimeter() const = 0; virtual const char* name() const = 0; } __attribute__((annotate("GENERATE_C_API"))); static const double pi = 4.0; class Circle : public Shape { const double radius_; public: double area() const override { return pi * radius_ * radius_; } double perimeter() const override { return 2 * pi * radius_; } const char* name() const override { return "Circle"; } Circle(double radius) : radius_(radius) { if ( radius < 0 ) { std::string s = "Circle radius \"" + std::to_string(radius_) + "\" must be non-negative."; throw std::runtime_error(s); } } }; ###Output Overwriting Shape.h ###Markdown Compile our header to check it's valid C++ ###Code %%sh clang++-3.8 -x c++ -fsyntax-only -std=c++14 -I../ffig/include Shape.h ###Output _____no_output_____ ###Markdown Read the code using libclang ###Code import sys sys.path.insert(0,'..') import ffig.clang.cindex index = ffig.clang.cindex.Index.create() translation_unit = index.parse(outputfile, ['-x', 'c++', '-std=c++14', '-I../ffig/include']) import asciitree def node_children(node): return (c for c in node.get_children() if c.location.file.name == outputfile) print asciitree.draw_tree(translation_unit.cursor, lambda n: [c for c in node_children(n)], lambda n: "%s (%s)" % (n.spelling or n.displayname, str(n.kind).split(".")[1])) ###Output Shape.h (TRANSLATION_UNIT) +--Shape (STRUCT_DECL) | +--FFIG:EXPORT (ANNOTATE_ATTR) | +--GENERATE_C_API (ANNOTATE_ATTR) | +--~Shape (DESTRUCTOR) | | +-- (COMPOUND_STMT) | +--area (CXX_METHOD) | +--perimeter (CXX_METHOD) | +--name (CXX_METHOD) +--pi (VAR_DECL) | +-- (FLOATING_LITERAL) +--Circle (CLASS_DECL) +--struct Shape (CXX_BASE_SPECIFIER) | +--struct Shape (TYPE_REF) +--radius_ (FIELD_DECL) +-- (CXX_ACCESS_SPEC_DECL) +--area (CXX_METHOD) | +-- (CXX_OVERRIDE_ATTR) | +-- (COMPOUND_STMT) | +-- (RETURN_STMT) | +-- (BINARY_OPERATOR) | +-- (BINARY_OPERATOR) | | +--pi (UNEXPOSED_EXPR) | | | +--pi (DECL_REF_EXPR) | | +--radius_ (UNEXPOSED_EXPR) | | +--radius_ (MEMBER_REF_EXPR) | +--radius_ (UNEXPOSED_EXPR) | +--radius_ (MEMBER_REF_EXPR) +--perimeter (CXX_METHOD) | +-- (CXX_OVERRIDE_ATTR) | +-- (COMPOUND_STMT) | +-- (RETURN_STMT) | +-- (BINARY_OPERATOR) | +-- (BINARY_OPERATOR) | | +-- (UNEXPOSED_EXPR) | | | +-- (INTEGER_LITERAL) | | +--pi (UNEXPOSED_EXPR) | | +--pi (DECL_REF_EXPR) | +--radius_ (UNEXPOSED_EXPR) | +--radius_ (MEMBER_REF_EXPR) +--name (CXX_METHOD) | +-- (CXX_OVERRIDE_ATTR) | +-- (COMPOUND_STMT) | +-- (RETURN_STMT) | +-- (UNEXPOSED_EXPR) | +--"Circle" (STRING_LITERAL) +--Circle (CONSTRUCTOR) +--radius (PARM_DECL) +--radius_ (MEMBER_REF) +--radius (UNEXPOSED_EXPR) | +--radius (DECL_REF_EXPR) +-- (COMPOUND_STMT) +-- (IF_STMT) +-- (BINARY_OPERATOR) | +--radius (UNEXPOSED_EXPR) | | +--radius (DECL_REF_EXPR) | +-- (UNEXPOSED_EXPR) | +-- (INTEGER_LITERAL) +-- (COMPOUND_STMT) +-- (DECL_STMT) | +--s (VAR_DECL) | +--std (NAMESPACE_REF) | +--string (TYPE_REF) | +-- (UNEXPOSED_EXPR) | +-- (CALL_EXPR) | +-- (UNEXPOSED_EXPR) | +-- 
(UNEXPOSED_EXPR) | +--operator+ (CALL_EXPR) | +-- (UNEXPOSED_EXPR) | | +-- (UNEXPOSED_EXPR) | | +--operator+ (CALL_EXPR) | | +-- (UNEXPOSED_EXPR) | | | +--"Circle radius \"" (STRING_LITERAL) | | +--operator+ (UNEXPOSED_EXPR) | | | +--operator+ (DECL_REF_EXPR) | | +-- (UNEXPOSED_EXPR) | | +-- (UNEXPOSED_EXPR) | | +--to_string (CALL_EXPR) | | +--to_string (UNEXPOSED_EXPR) | | | +--to_string (DECL_REF_EXPR) | | | +--std (NAMESPACE_REF) | | +--radius_ (UNEXPOSED_EXPR) | | +--radius_ (MEMBER_REF_EXPR) | +--operator+ (UNEXPOSED_EXPR) | | +--operator+ (DECL_REF_EXPR) | +-- (UNEXPOSED_EXPR) | +--"\" must be non-negative." (STRING_LITERAL) +-- (UNEXPOSED_EXPR) +-- (CXX_THROW_EXPR) +-- (CALL_EXPR) +-- (UNEXPOSED_EXPR) +-- (UNEXPOSED_EXPR) +-- (CXX_FUNCTIONAL_CAST_EXPR) +--std (NAMESPACE_REF) +--class std::runtime_error (TYPE_REF) +-- (UNEXPOSED_EXPR) +--runtime_error (CALL_EXPR) +--s (UNEXPOSED_EXPR) +--s (DECL_REF_EXPR) ###Markdown Turn the AST into some easy to manipulate Python classes ###Code from ffig import cppmodel model = cppmodel.Model(translation_unit) model [f.name for f in model.functions][-5:] [c.name for c in model.classes][-5:] shape_class = [c for c in model.classes if c.name=='Shape'][0] ["{}::{}".format(shape_class.name,m.name) for m in shape_class.methods] ###Output _____no_output_____ ###Markdown Look at the templates the generator uses ###Code %cat ../ffig/templates/json.tmpl ###Output [{% for class in classes %} { "name" : "{{class.name}}"{% if class.methods %}, "methods" : [{% for method in class.methods %} { "name" : "{{method.name}}", "return_type" : "{{method.return_type}}" }{% if not loop.last %},{% endif %}{% endfor %} ]{% endif %} }{% if not loop.last %},{% endif %}{% endfor %} ] ###Markdown Run the code generator ###Code %%sh cd .. python -m ffig -b json.tmpl rb.tmpl python -m Shape -i demos/Shape.h -o demos/ ###Output _____no_output_____ ###Markdown See what it created ###Code %ls %cat Shape.json ###Output [{ "name" : "Shape", "methods" : [ { "name" : "area", "return_type" : "double" }, { "name" : "perimeter", "return_type" : "double" }, { "name" : "name", "return_type" : "const char *" } ]}] ###Markdown Build some bindings with the generated code. ###Code %%file CMakeLists.txt cmake_minimum_required(VERSION 3.0) set(CMAKE_CXX_STANDARD 14) add_library(Shape_c SHARED Shape_c.cpp) target_include_directories(Shape_c PRIVATE ../ffig/include) %%sh cmake . cmake --build . %%python2 import shape c = shape.Circle(8) print "A {} with radius {} has area {}".format(c.name(), 8, c.area()) %%script pypy import shape c = shape.Circle(8) print "A {} with radius {} has area {}".format(c.name(), 8, c.area()) %%ruby load "Shape.rb" c = Circle.new(8) puts("A #{c.name()} with radius #{8} has area #{c.area()}") ###Output A Circle with radius 8 has area 256.0
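###Markdown For illustration, the JSON above can also be produced by rendering the template directly against the `model` built earlier (a sketch, assuming Jinja2 is installed — this is not necessarily how FFIG drives its own generator, but the template only relies on the `name`, `methods` and `return_type` attributes it already references): ###Code
import jinja2

with open('../ffig/templates/json.tmpl') as f:
    template = jinja2.Template(f.read())

# Render only the annotated Shape class, mirroring Shape.json above
print(template.render(classes=[c for c in model.classes if c.name == 'Shape']))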
pygslib/Ipython_templates/.ipynb_checkpoints/rotscale_raw-checkpoint.ipynb
###Markdown Rotate coordinatesThis is a non standard GSLIB function developed by Adrian Martinez (Opengeostat) using as reference the rotation matrices defined in http://www.ccgalberta.com/ccgresources/report06/2004-403-angle_rotations.pdfThe resulting rotation is [-ZX-Y], that is: Counter Clockwise along axis Z; Clockwise along axis X; Counter Clockwise along axis Y; Importing python modules ###Code %matplotlib inline import pygslib #to plot in 2D and 3D from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import matplotlib.pyplot as plt import pandas as pd #numpy for matrix import numpy as np # to see user help help (pygslib.gslib.rotscale) ###Output Help on function rotscale in module pygslib.gslib: rotscale(parameters) Rotates and rescales a set of 3D coordinates The new rotated and rescaled system of coordinates will have origin at [x = 0, y = 0, z = 0]. This point corresponds to [x0,y0,z0] (the pivot point) in the original system of coordinates. Parameters ---------- parameters : dict dictionary with calculation parameters Parameters are pased in a dictionary as follows:: parameters = { 'x' : mydata.x,# data x coordinates, array('f') with bounds (na), na is number of data points 'y' : mydata.y,# data y coordinates, array('f') with bounds (na) 'z' : mydata.z,# data z coordinates, array('f') with bounds (na) 'x0' : 0, # pivot point coordinate X, 'f' 'y0' : 0, # pivot point coordinate Y, 'f' 'z0' : 0, # pivot point coordinate Z, 'f' 'ang1' : 45., # Z Rotation angle, 'f' 'ang2' : 0., # X Rotation angle, 'f' 'ang3' : 0., # Y Rotation angle, 'f' 'anis1' : 1., # Y cell anisotropy, 'f' 'anis2' : 1., # Z cell anisotropy, 'f' 'invert' : 0} # 0 do rotation, <> 0 invert rotation, 'i' Returns ------- xr : rank-1 array('d') with bounds (nd), new X coordinate yr : rank-1 array('d') with bounds (nd), new X coordinate zr : rank-1 array('d') with bounds (nd), new X coordinate Note ------- This is nonstandard gslib function and is based on the paper: http://www.ccgalberta.com/ccgresources/report06/2004-403-angle_rotations.pdf The rotation is {Z counter clockwise ; X clockwise; Y counter clockwise} [-ZX-Y] ###Markdown Creating some dummy data ###Code #we have a vector pointing noth mydata = pd.DataFrame ({'x': [0,20], 'y': [0,20], 'z': [0,20]}) print (mydata) # No rotation at all parameters = { 'x' : mydata.x, # data x coordinates, array('f') with bounds (na), na is number of data points 'y' : mydata.y, # data y coordinates, array('f') with bounds (na) 'z' : mydata.z, # data z coordinates, array('f') with bounds (na) 'x0' : 0., # new X origin of coordinate , 'f' 'y0' : 0., # new Y origin of coordinate , 'f' 'z0' : 0., # new Z origin of coordinate , 'f' 'ang1' : 0., # Z Rotation angle, 'f' 'ang2' : 0., # X Rotation angle, 'f' 'ang3' : 0., # Y Rotation angle, 'f' 'anis1' : 1., # Y cell anisotropy, 'f' 'anis2' : 1., # Z cell anisotropy, 'f' 'invert' : 0} # 0 do rotation, <> 0 invert rotation, 'i' mydata['xr'],mydata['yr'],mydata['zr'] = pygslib.gslib.rotscale(parameters) plt.subplot(2, 2, 1) plt.plot(mydata.x, mydata.y, 'o-') plt.plot(mydata.xr, mydata.yr, 'o-') plt.title('to view (XY)') plt.subplot(2, 2, 2) plt.plot(mydata.x, mydata.z, 'o-') plt.plot(mydata.xr, mydata.zr, 'o-') plt.title('front (XZ)') plt.subplot(2, 2, 3) plt.plot(mydata.y, mydata.z, 'o-') plt.plot(mydata.yr, mydata.zr, 'o-') plt.title('side view (YZ)') print (mydata) fig = plt.figure() ax = fig.gca(projection='3d') ax.plot (mydata.x, mydata.y, mydata.z, linewidth=2, color= 'b') ax.plot (mydata.xr, mydata.yr, mydata.zr, 
linewidth=2, color= 'g') # Rotating azimuth 45 counter clockwise (Z unchanged) parameters = { 'x' : mydata.x, # data x coordinates, array('f') with bounds (na), na is number of data points 'y' : mydata.y, # data y coordinates, array('f') with bounds (na) 'z' : mydata.z, # data z coordinates, array('f') with bounds (na) 'x0' : 0., # new X origin of coordinate , 'f' 'y0' : 0., # new Y origin of coordinate , 'f' 'z0' : 0., # new Z origin of coordinate , 'f' 'ang1' : 45., # Z Rotation angle, 'f' 'ang2' : 0., # X Rotation angle, 'f' 'ang3' : 0., # Y Rotation angle, 'f' 'anis1' : 1., # Y cell anisotropy, 'f' 'anis2' : 1., # Z cell anisotropy, 'f' 'invert' : 0} # 0 do rotation, <> 0 invert rotation, 'i' mydata['xr'],mydata['yr'],mydata['zr'] = pygslib.gslib.rotscale(parameters) plt.subplot(2, 2, 1) plt.plot(mydata.x, mydata.y, 'o-') plt.plot(mydata.xr, mydata.yr, 'o-') plt.title('to view (XY)') plt.subplot(2, 2, 2) plt.plot(mydata.x, mydata.z, 'o-') plt.plot(mydata.xr, mydata.zr, 'o-') plt.title('front (XZ)') plt.subplot(2, 2, 3) plt.plot(mydata.y, mydata.z, 'o-') plt.plot(mydata.yr, mydata.zr, 'o-') plt.title('side view (YZ)') print (np.round(mydata,2)) fig = plt.figure() ax = fig.gca(projection='3d') ax.plot (mydata.x, mydata.y, mydata.z, linewidth=2, color= 'b') ax.plot (mydata.xr, mydata.yr, mydata.zr, linewidth=2, color= 'g') # Rotating clockwise 45 (X unchanged, this is a dip correction) parameters = { 'x' : mydata.x, # data x coordinates, array('f') with bounds (na), na is number of data points 'y' : mydata.y, # data y coordinates, array('f') with bounds (na) 'z' : mydata.z, # data z coordinates, array('f') with bounds (na) 'x0' : 0., # new X origin of coordinate , 'f' 'y0' : 0., # new Y origin of coordinate , 'f' 'z0' : 0., # new Z origin of coordinate , 'f' 'ang1' : 0., # Z Rotation angle, 'f' 'ang2' : 45., # X Rotation angle, 'f' 'ang3' : 0., # Y Rotation angle, 'f' 'anis1' : 1., # Y cell anisotropy, 'f' 'anis2' : 1., # Z cell anisotropy, 'f' 'invert' : 0} # 0 do rotation, <> 0 invert rotation, 'i' mydata['xr'],mydata['yr'],mydata['zr'] = pygslib.gslib.rotscale(parameters) plt.subplot(2, 2, 1) plt.plot(mydata.x, mydata.y, 'o-') plt.plot(mydata.xr, mydata.yr, 'o-') plt.title('to view (XY)') plt.subplot(2, 2, 2) plt.plot(mydata.x, mydata.z, 'o-') plt.plot(mydata.xr, mydata.zr, 'o-') plt.title('front (XZ)') plt.subplot(2, 2, 3) plt.plot(mydata.y, mydata.z, 'o-') plt.plot(mydata.yr, mydata.zr, 'o-') plt.title('side view (YZ)') print (np.round(mydata,2)) fig = plt.figure() ax = fig.gca(projection='3d') ax.plot (mydata.x, mydata.y, mydata.z, linewidth=2, color= 'b') ax.plot (mydata.xr, mydata.yr, mydata.zr, linewidth=2, color= 'g') # Rotating counter clockwise 45 (Y unchanged, this is a plunge correction) parameters = { 'x' : mydata.x, # data x coordinates, array('f') with bounds (na), na is number of data points 'y' : mydata.y, # data y coordinates, array('f') with bounds (na) 'z' : mydata.z, # data z coordinates, array('f') with bounds (na) 'x0' : 0., # new X origin of coordinate , 'f' 'y0' : 0., # new Y origin of coordinate , 'f' 'z0' : 0., # new Z origin of coordinate , 'f' 'ang1' : 0., # Z Rotation angle, 'f' 'ang2' : 0., # X Rotation angle, 'f' 'ang3' : 45., # Y Rotation angle, 'f' 'anis1' : 1., # Y cell anisotropy, 'f' 'anis2' : 1., # Z cell anisotropy, 'f' 'invert' : 0} # 0 do rotation, <> 0 invert rotation, 'i' mydata['xr'],mydata['yr'],mydata['zr'] = pygslib.gslib.rotscale(parameters) plt.subplot(2, 2, 1) plt.plot(mydata.x, mydata.y, 'o-') plt.plot(mydata.xr, mydata.yr, 'o-') 
plt.title('to view (XY)') plt.subplot(2, 2, 2) plt.plot(mydata.x, mydata.z, 'o-') plt.plot(mydata.xr, mydata.zr, 'o-') plt.title('front (XZ)') plt.subplot(2, 2, 3) plt.plot(mydata.y, mydata.z, 'o-') plt.plot(mydata.yr, mydata.zr, 'o-') plt.title('side view (YZ)') print (np.round(mydata,2)) fig = plt.figure() ax = fig.gca(projection='3d') ax.plot (mydata.x, mydata.y, mydata.z, linewidth=2, color= 'b') ax.plot (mydata.xr, mydata.yr, mydata.zr, linewidth=2, color= 'g') # invert rotation parameters = { 'x' : mydata.x, # data x coordinates, array('f') with bounds (na), na is number of data points 'y' : mydata.y, # data y coordinates, array('f') with bounds (na) 'z' : mydata.z, # data z coordinates, array('f') with bounds (na) 'x0' : 0, # new X origin of coordinate , 'f' 'y0' : 0, # new Y origin of coordinate , 'f' 'z0' : 0, # new Z origin of coordinate , 'f' 'ang1' : 0., # Y cell anisotropy, 'f' 'ang2' : 0., # Y cell anisotropy, 'f' 'ang3' : 45., # Y cell anisotropy, 'f' 'anis1' : 1., # Y cell anisotropy, 'f' 'anis2' : 1., # Z cell anisotropy, 'f' 'invert' : 0} # 0 do rotation, <> 0 invert rotation, 'i' mydata['xr'],mydata['yr'],mydata['zr'] = pygslib.gslib.rotscale(parameters) parameters = { 'x' : mydata.xr, # data x coordinates, array('f') with bounds (na), na is number of data points 'y' : mydata.yr, # data y coordinates, array('f') with bounds (na) 'z' : mydata.zr, # data z coordinates, array('f') with bounds (na) 'x0' : 0, # new X origin of coordinate , 'f' 'y0' : 0, # new Y origin of coordinate , 'f' 'z0' : 0, # new Z origin of coordinate , 'f' 'ang1' : 0., # Y cell anisotropy, 'f' 'ang2' : 0., # Y cell anisotropy, 'f' 'ang3' : 45., # Y cell anisotropy, 'f' 'anis1' : 1., # Y cell anisotropy, 'f' 'anis2' : 1., # Z cell anisotropy, 'f' 'invert' : 1} # 0 do rotation, <> 0 invert rotation, 'i' mydata['xi'],mydata['yi'],mydata['zi'] = pygslib.gslib.rotscale(parameters) print (np.round(mydata,2)) ###Output x y z xr yr zr xi yi zi 0 0 0 0 0.0 0.0 0.00 0.0 0.0 0.0 1 20 20 20 0.0 20.0 28.28 20.0 20.0 20.0
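###Markdown The [-ZX-Y] rotation described at the top of this notebook is the composition of three elementary rotation matrices (with the pivot shift and anisotropy scaling handled separately by `rotscale`). As a minimal sketch, the Y-axis step of the last example can be reproduced by hand — the sign choices below are inferred from the printed output (20, 20, 20) -> (0.0, 20.0, 28.28), not taken from pygslib's source: ###Code
import numpy as np

ang3 = np.radians(45.)
c, s = np.cos(ang3), np.sin(ang3)

# Elementary rotation about the Y axis, signs chosen to match the output above
Ry = np.array([[c, 0, -s],
               [0, 1,  0],
               [s, 0,  c]])

p = np.array([20., 20., 20.])
print(np.round(Ry @ p, 2))   # -> [ 0.  20.  28.28]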
simulation_test.ipynb
###Markdown Done by [Sasa Buklijas](http://buklijas.info/blog/) ###Code
import ipywidgets as widgets
import time

# method
def start_game(event):
    #button.disabled=True
    button.description = 'STOP Simulation'
    button.button_style='danger'
    slider.disabled=True
    progress.max = slider.value
    for i in range(slider.value+1):
        label2.value = '%s / %s' % (i, slider.value)
        progress.value += 1
        time.sleep(0.1)
    button.description = 'Start Simulation'
    button.button_style='info'
    slider.disabled=False
    button.disabled=False
    progress.value = 0
    label2.value = ''

# UI
slider = widgets.IntSlider(min=10, max=100, value=15)
label = widgets.Label(value='Select number of games')
button = widgets.Button(description='Start Simulation', button_style='info', tooltip='Start Game')
progress = widgets.IntProgress(description='Progress:')
label2 = widgets.Label()

# Interactions
button.on_click(start_game)

# UI Layout
top_box = widgets.HBox([label, slider])
down_box = widgets.HBox([button, progress, label2])
widgets.VBox([top_box, down_box])
###Output
_____no_output_____
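###Markdown One thing to note about the widget above: `start_game` runs on the kernel thread, so while the loop sleeps the kernel cannot process anything else — in particular, clicking the button again does not actually stop the simulation even though its label says 'STOP Simulation'. A common variation (an assumption about the intended behaviour, not part of the original notebook) moves the loop into a background thread and uses an Event as a stop flag: ###Code
import threading
import time

stop_event = threading.Event()

def run_simulation():
    progress.max = slider.value
    for i in range(slider.value + 1):
        if stop_event.is_set():
            break
        label2.value = '%s / %s' % (i, slider.value)
        progress.value = i
        time.sleep(0.1)
    button.description = 'Start Simulation'
    button.button_style = 'info'
    slider.disabled = False
    progress.value = 0
    label2.value = ''

def toggle_simulation(event):
    if button.description == 'Start Simulation':
        stop_event.clear()
        button.description = 'STOP Simulation'
        button.button_style = 'danger'
        slider.disabled = True
        threading.Thread(target=run_simulation, daemon=True).start()
    else:
        stop_event.set()

# Replace the blocking handler registered above with the threaded one
button.on_click(start_game, remove=True)
button.on_click(toggle_simulation)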
Teste Hipotese/Conf_hip(gabarito).ipynb
###Markdown Hypothesis testing ###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import statsmodels.api as sm
###Output
_____no_output_____
###Markdown Dataset https://www.kaggle.com/skverma875/bank-marketing-dataset ###Code
banco = pd.read_csv('bank-full.csv')
###Output
_____no_output_____
###Markdown Initial analysis ###Code
banco.shape
banco.head()
banco.isnull().sum()
banco.info()
banco.describe()
###Output
_____no_output_____
###Markdown EDA review One variable Categorical ###Code
sns.countplot(data=banco, x='job', order=banco['job'].value_counts().index, palette='pastel')
plt.xticks(rotation = 90);
###Output
_____no_output_____
###Markdown For more color palettes: https://seaborn.pydata.org/tutorial/color_palettes.html Numerical ###Code
#sns.distplot(banco['balance'], kde=False, hist_kws={"range": [0,10000]})
sns.displot(banco['age'], kde=False)
###Output
_____no_output_____
###Markdown Two variables Numerical x Numerical ###Code
sns.scatterplot(data=banco, x='age', y='balance', hue='housing')
###Output
_____no_output_____
###Markdown Numerical x Categorical ###Code
plt.subplots(figsize=(7,5))
sns.stripplot(data=banco, x='loan', y='balance')

sns.boxplot(data=banco, x='loan', y='age')

sns.boxplot(data=banco, x='loan', y='balance')
plt.ylim(-4000, 4000)
###Output
_____no_output_____
###Markdown Categorical x Categorical ###Code
pd.crosstab(banco['education'], banco['loan'])
pd.crosstab(banco['education'], banco['loan']).plot(kind='bar', stacked=True)
###Output
_____no_output_____
###Markdown Hypothesis test **The market average age you know is 42. Does Bank T have the same average?** T test: https://pt.wikipedia.org/wiki/Teste_t_de_Student ###Code
media_mercado = 42
banco.age.mean()
amostra = banco.sample(500, random_state=101)
sns.displot(amostra.age)
sns.displot(banco.age)
stats.ttest_1samp(amostra.age, media_mercado)
###Output
_____no_output_____
###Markdown One Sample: t-test https://www.statisticshowto.com/one-sample-t-test/ **Are the mean ages of people with and without a loan the same?** ###Code
loans=amostra[amostra.loan=="yes"].age
no_loans=amostra[amostra.loan=="no"].age
stats.ttest_ind(loans,no_loans)
###Output
_____no_output_____
###Markdown Independent-samples t-test: https://www.statisticshowto.com/independent-samples-t-test/ **Prerequisites of the t-test** ###Code
# homogeneity of variances
stats.levene(loans, no_loans)
###Output
_____no_output_____
###Markdown Levene's test: https://en.wikipedia.org/wiki/Levene%27s_test If it comes out significant, pass `equal_var=False` as an argument to ttest_ind ###Code
stats.kstest(np.log(amostra.age),'norm')
###Output
_____no_output_____
###Markdown Kolmogorov–Smirnov test https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test ###Code
sm.qqplot(amostra.age, stats.norm, fit=True, line='45')
###Output
_____no_output_____
###Markdown Q-Q plot | QQ-Plot https://pt.wikipedia.org/wiki/Gr%C3%A1fico_Q-Q **Non-parametric test** ###Code
stats.mannwhitneyu(loans, no_loans)
###Output
_____no_output_____
###Markdown Mann-Whitney U test: https://pt.wikipedia.org/wiki/Teste_U_de_Mann-Whitney **Does age change significantly with education?** ###Code
# ANOVA - Analysis of Variance
stats.f_oneway(amostra.loc[amostra['education']=='tertiary',['age']],
               amostra.loc[amostra['education']=='secondary',['age']],
               amostra.loc[amostra['education']=='primary',['age']],
               amostra.loc[amostra['education']=='unknown',['age']])
###Output
_____no_output_____
###Markdown ANOVA: https://blog.minitab.com/pt/entendendo-analise-de-variancia-anova-e-o-teste-f ###Code
from statsmodels.stats.multicomp import MultiComparison

anova = MultiComparison(amostra.age, amostra.education)
results = anova.tukeyhsd()
results.plot_simultaneous();
###Output
/Users/eduhideki/opt/anaconda3/lib/python3.8/site-packages/statsmodels/sandbox/stats/multicomp.py:775: UserWarning: FixedFormatter should only be used together with FixedLocator
  ax1.set_yticklabels(np.insert(self.groupsunique.astype(str), 0, ''))
###Markdown Post-hoc test for ANOVA: http://www.portalaction.com.br/anova/31-teste-de-tukey **Is there an association between education level and having a loan?** ###Code
education_loan = pd.crosstab(amostra['education'], amostra['loan'])
education_loan
stats.chi2_contingency(education_loan)
###Output
_____no_output_____
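###Markdown `chi2_contingency` returns the test statistic, the p-value, the degrees of freedom and the expected counts under independence. Here is a short sketch unpacking them for the `education_loan` table above (the 5% threshold is just the conventional choice): ###Code
chi2, p, dof, expected = stats.chi2_contingency(education_loan)

print("chi2 = {:.2f}, dof = {}, p-value = {:.4f}".format(chi2, dof, p))
if p < 0.05:
    print("Reject H0: education level and having a loan appear to be associated.")
else:
    print("Fail to reject H0: no evidence of an association.")

# Expected counts under independence, to compare against the observed table
pd.DataFrame(expected, index=education_loan.index, columns=education_loan.columns)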
notebooks/DataLoaders.ipynb
###Markdown https://pytorch.org/tutorials/beginner/nn_tutorial.htmlhttps://github.com/spro/practical-pytorch/tree/master/char-rnn-classificationhttps://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.htmlBatch RNNhttps://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html ![alt text](https://docs.gdc.cancer.gov/Encyclopedia/pages/images/barcode.png "TCGA Barcode")To identify a sample, which is used for both methylation and expression measurements, we need only first 4 "-" separated identifiers, i.e. up to sample:vial level;It is assumed that each patient is measured once. ###Code import re import json import warnings from pybiomart import Dataset as BiomartDataset from pyfaidx import Fasta # random access to fasta import numpy as np from pathlib import Path import pandas as pd import torch from torch.utils.data import Dataset, DataLoader DIR = Path("/data/eugen/meth2expr/") TCGA_DIR = Path("/data/eugen/tcga/projects") GENOMICS_DIR = DIR / 'data/genomics/' PROMS_PATH = DIR / "data/genomics/proms.fna" METH_CHIP_PATH = DIR / "data/genomics/GPL13534-11288.txt" BROAD_CPG_CORR_PATH = DIR / "data/broad_tcga/analysis/gdac.broadinstitute.org_STAD-TP.Correlate_Methylation_vs_mRNA.Level_4.2016012800.0.0/Correlate_Methylation_vs_mRNA_STAD-TP_matrix.txt" projects = [str(x).split("/")[-1] for x in TCGA_DIR.iterdir() if x.is_dir()] projects_paths = [x for x in TCGA_DIR.iterdir() if x.is_dir()] cpgs_chip = pd.read_csv(METH_CHIP_PATH, sep='\t', header=37) cols = ['ID', 'RANGE_GB', 'MAPINFO', 'Strand'] cpgs_chip = cpgs_chip[cols] cpgs_chip.head() PROJECTS_DIR = "/data/eugen/tcga/projects/" class Project(object): """ Project data loader and descriptor. :attributes: :public: get_case_data() :private: _collect_samples() _collect_metadata() _get_case_datapaths() """ def __init__(self, name, projects_dir=PROJECTS_DIR): ''' :params name -- project name path -- project dir ''' self.name = name self.dir = projects_dir + name self.meth_path = None self.meth_fpath = None self.samples = {} self.sample_ids = [] self.sample_paths = {} self.cases = {} self.case_ids = [] self.meta = {} # Get samples ids and paths self.collect_samples_() self.collect_metadata_() def collect_samples_(self): """ Extracts samples' ids and paths from project directory. """ self.sample_paths = {"methylation": {}, "expression": {}} # Extract methylation self.meth_path = Path(self.dir) / 'data/methylation' / self.name / 'harmonized/DNA_Methylation/Methylation_Beta_Value' self.expr_path = Path(self.dir) / 'data/expression' / self.name / 'harmonized/Transcriptome_Profiling/Gene_Expression_Quantification' self.sample_paths = {datatype: {str(f).split("/")[-1]: str(f) for f in path.iterdir()} for datatype, path in [('methylation', self.meth_path), ('expression', self.expr_path)]} # In path there is an experiment ID, not sample id self.samples = {f: list(self.sample_paths[f].keys()) for f in ['methylation', 'expression']} self.sample_ids = list(self.samples.keys()) def collect_metadata_(self): """ Gets all necessary metadata of project. Includes ids and case ids: necessary to match expression and methylation measurements. 
""" metadata_paths = [(Path(self.dir) / (self.name + "_" + f + ".csv")) for f in ['methylation', 'expression'] ] ### Access metadata files cols = ['file_id', 'file_name', 'cases'] meth, expr = list(map(lambda p: pd.read_csv(p, sep='\t', usecols=cols), metadata_paths)) ### Prune the TCGA barcode of case ids up to sample level, resulting in Project-TSS-ParticipantID-<SampleID><vial> barcode_splitter = lambda x: "-".join(x.split('-')[:4]) meth['cases'] = meth['cases'].apply(barcode_splitter) expr['cases'] = expr['cases'].apply(barcode_splitter) meth = meth.set_index("file_id", drop=False) expr = expr.set_index("file_id", drop=False) meta = {'methylation': meth.to_dict(orient='index'), 'expression': expr.to_dict(orient='index') } # Assert equivalency of ids in metadata files and available files assert set(meta['methylation'].keys()) == set(self.samples['methylation']) assert set(meta['expression'].keys()) == set(self.samples['expression']) self.meta = meta ### Collect metadata for cases # There may be case duplicates, will drop meth_dup = meth['cases'][meth.duplicated(['cases'], keep=False)] expr_dup = expr['cases'][expr.duplicated(['cases'], keep=False)] dup = set(meth_dup).union(set(expr_dup)) # set of duplicated cases in meth or expr if (len(dup) > 0): warnings.warn("Droping duplicated cases entries.", UserWarning) meth = meth.loc[~(meth['cases'].isin(dup))] expr = expr.loc[~(expr['cases'].isin(dup))] meth = meth.set_index("cases") expr = expr.set_index("cases") # recheck equivalency of cases for methylation and expression (should be true by constraint of download script) if (set(meth.index) != set(expr.index)): warnings.warn("There is non-equivalence of cases in methylation and expression.") common_cases = set(meth.index).intersection(set(expr.index)) meth = meth.loc[common_cases] expr = expr.loc[common_cases] cases = {'methylation': meth.to_dict(orient='index'), 'expression': expr.to_dict(orient='index') } self.old_cases = cases # make cases as keys { case -> {methylation -> {}; expression -> {}} } new_dict = {case: {'methylation': cases['methylation'][case], 'expression': cases['expression'][case]} for case in cases['methylation'].keys() } self.cases = new_dict self.case_ids = list(self.cases.keys()) def get_case_datapaths_(self, case: str): '''Getter method for meth, expr or both types data.''' assert case in self.cases.keys() self.meth_fpath = Path(self.meth_path) / self.cases[case]['methylation']['file_id'] / self.cases[case]['methylation']['file_name'] self.expr_fpath = Path(self.expr_path) / self.cases[case]['expression']['file_id'] / self.cases[case]['expression']['file_name'] return {'methylation': self.meth_fpath, 'expression': self.expr_fpath} def get_case_data(self, case: str, genes = None, cpgs = None, get_expr=True, get_meth=True) -> dict: """ Get case methylation and expression dataframe. :params case -- case id; genes -- list of genes to get data for. if None: gets data for all genes; cpgs -- list of cpgs to get data for. if None: gets data for all cpgs; """ # get_expr, get_meth = True, True paths = self.get_case_datapaths_(case) # We ignore rest columns, since terribly redundant. They describe CpG features, which are assumed to be constant for every # experiment (to be tested). That's the reason for the inefficient data storage. After testing invariance, # methylation data files have to cleared (i.e. remove redundant columns). 
meth_cols = ['Beta_value'] # 'Composite Element REF' becomes index # load both meth and expr for now # if dtype == 'methylation': # get_expr = False # elif dtype == 'expression': # get_meth = False if get_expr: if genes is not None: expr = pd.read_csv(paths['expression'], sep='\t', index_col=0, header=None) expr.index = list(map(lambda x: x.split(".")[0], expr.index)) # cut off version of ids e.g. ENS0000.1 -> ENS0000 expr = expr.loc[genes] else: expr = pd.read_csv(paths['expression'], sep='\t', index_col=0, header=None) else: expr = None if get_meth: if cpgs is not None: meth = pd.read_csv(paths['methylation'], sep='\t', index_col=0, header=0)[meth_cols].loc[cpgs] else: meth = pd.read_csv(paths['methylation'], sep='\t', index_col=0, header=0)[meth_cols] else: meth = None return {'methylation': meth, 'expression': expr} VOCAB = ['A', 'C', 'T', 'G', 'N'] BASE2IDX = {"A": 0, "C": 1, "T": 2, "G": 3, "N": 4} def base2tensor(base: str, vocab: list = VOCAB) -> torch.tensor: tensor = torch.zeros(1, len(vocab)) tensor[BASE2IDX[base]][0] = 1 return tensor def seq2tensor(seq: str, vocab: list = VOCAB) -> torch.tensor: tensor = torch.zeros(len(seq), 1, len(vocab)) # important choice for preserving shape compatibility with hidden layer. for idx, base in enumerate(seq): tensor[idx][0][BASE2IDX[base]] = 1 # that extra 1 dimension is because PyTorch assumes everything is in batches - we’re just using a batch size of 1 here. return tensor def methylate_seq(seq_tensor: torch.tensor, loc: int, value: float, meth_idx: int, mask_idx: int) -> torch.tensor: # only C or N can be methylated assert (seq_tensor[loc][0][meth_idx] > 0) or (seq_tensor[loc][0][mask_idx] > 0) seq_tensor[loc][0][meth_idx] = value return seq_tensor with open(GENOMICS_DIR / 'genes.json', 'r') as f: genes_dict = json.load(f) cols = ['ID', 'RANGE_GB', 'MAPINFO', 'Strand'] # essential info for finding location cpgs_chip = pd.read_csv(METH_CHIP_PATH, sep='\t', header=37, usecols=cols) class Gene(object): """ . """ def __init__(self, name, genomics_dir=GENOMICS_DIR, get_seq=False, get_cpgs=False): """ TODO: think about how to compute for each gene local coordinates for CpGs within promoter. """ self.dir = genomics_dir self.name = name # Entrez id self.tensor = None # to refactor # global var, to refactor self.mask_idx = BASE2IDX["N"] self.meth_idx = BASE2IDX["C"] # Get pre-extracted gene features with open(GENOMICS_DIR / 'genes.json', 'r') as f: self.features = json.load(f)[self.name] if get_seq: self.promoter_seq = self.get_promoter_seq_() self.seq = seq2tensor(self.promoter_seq) if get_cpgs: self.gene_cpgs = self.get_cpgs() def __str__(self): return self.name def __repr__(self): return self.tensor def get_promoter_seq_(self, fpath = PROMS_PATH): """ Load the promoter sequence. TODO: remove duplicates from fasta. Just rerun 01_, modified code. """ prom = Fasta(str(PROMS_PATH))[self.name + "(" + self.features['strand'] + ")"] seq = str(prom).upper() return seq def get_cpgs(self, cpg_df = cpgs_chip): """ Load the gene's cpgs. 
""" if self.features['strand'] == '+': strand = 'F' elif self.features['strand'] == '-': strand = 'R' else: raise ValueError df = cpg_df[(cpg_df['RANGE_GB'] == self.features['chr']) & (cpg_df['Strand'] == strand) & (cpg_df['MAPINFO'] >= int(self.features['start'])) & (cpg_df['MAPINFO'] <= int(self.features['end']))] df = df.set_index('ID', drop=False) cpgs = df.to_dict(orient='index') return cpgs def get_corr_cpgs(self, path=BROAD_CPG_CORR_PATH): """Load correlated cpgs ids from the preprocessed dict.""" raise NotImplementedError p = Path(path) with open(p, 'r') as f: cpg_ids = json.load(f)[self.name] return cpg_ids def tensorize_(self): """ Represent sequence as tensor. """ self.tensor = seq2tensor(self.promoter_seq) def methylate_tensor(self, cpgs=None): """ Methylate the tensor with values. """ assert self.tensor is not None if cpgs is None: if self.gene_cpgs is None: cpgs = self._get_cpgs() else: cpgs = self.gene_cpgs for cpg in cpgs: # insert cpg value in row for meth_base (cytosine) cpg_loc = cpg[0] cpg_value = cpg[1] seq = methylate_seq(self.tensor, cpg_loc, cpg_value, self.meth_idx, self.mask_idx) self.tensor = seq return seq g = Gene("7133") print(g.promoter_seq, g.features) cpgs = g._get_cpgs() cpgs idx = int(cpgs['cg09818691']['MAPINFO']) - int(g.features['start']) g.promoter_seq[idx-5:idx+3] def download_lookup(path='../data/genomics/gene_id_lookup.csv'): dataset = BiomartDataset(name='hsapiens_gene_ensembl', host='http://www.ensembl.org') # attrs = dataset.list_attributes() # attrs[list(map(lambda x: bool(re.search(r'entrez', x)), attrs['name']))] lk = dataset.query(attributes=['ensembl_gene_id', 'entrezgene_id'], filters=None).dropna() lk['NCBI gene ID'] = lk['NCBI gene ID'].astype('int32') lk.to_csv(path, index=False, header=True) return lk def converter(genes, to: str, lk_path='../data/genomics/gene_id_lookup.csv'): '''Convert Ensembl to NCBI ids.''' assert to in ['ENS', 'ENTREZ'] path = Path(lk_path) if not path.is_file(): lk = download_lookup() else: lk = pd.read_csv(path) # lookup table lk["NCBI gene ID"] = lk["NCBI gene ID"].astype("int32") # lk['Gene stable ID'] = list(map(lambda x: x.split(".")[0], 'Gene stable ID')) if to == 'ENTREZ': id_col = "Gene stable ID" conv_col = "NCBI gene ID" else: id_col = "NCBI gene ID" conv_col = "Gene stable ID" genes = list(map(int, genes)) assert set(genes).issubset(set(lk[id_col])) ids = lk.set_index(id_col).loc[genes][conv_col].to_list() return ids class MethExprSequenceDataset(Dataset): ''' . 
''' def __init__(self, genes: list, projects: list, cases = list): '''.''' self.data = {} # Lookup table, genes in fasta with open("../data/genomics/genes.json", 'r') as f: available_genes = json.load(f) # Entrez lk = download_lookup() lk = lk.loc[lk['NCBI gene ID'].isin(available_genes.keys())] if genes[0][:3] == 'ENS': gene_entrez_ids = converter(genes, to='ENTREZ') gene_ensembl_ids = genes else: gene_entrez_ids = genes gene_ensembl_ids = converter(genes, to='ENS') record_id = 0 for proj in projects: p = Project(proj) for case in cases: if case not in p.cases.keys(): continue expr = p.get_case_data(case, gene_ensembl_ids, get_meth=None)['expression'] expr = pd.merge(expr, lk, left_index=True, right_on='Gene stable ID', how='inner') expr.set_index("NCBI gene ID", drop=False, inplace=True) case_cpgs = p.get_case_data(case, cpgs=None, get_expr=False)['methylation'] # idx = int(cpgs['cg09818691']['MAPINFO']) - int(g.features['start']) assert len(set(expr.index)) == len(set(genes)) for gene in expr.index: # Get all gene data g = Gene(str(gene)) # add window parameters gene_cpgs = g._get_cpgs() cpgs = [] for cpg, d in gene_cpgs.items(): loc = int(d['MAPINFO'] - int(g.features['start'])) value = case_cpgs.loc[cpg].values[0] print(loc, value) cpgs.append((loc, value)) g.methylate_tensor(cpgs=cpgs) self.data[record_id] = (g.tensor, expr.loc[[gene]]) record_id += 1 pass def __len__(self): return len(self.data) def __getitem__(self, idx): return self.data[idx] def save(self): raise NotImplementedError def load(self, fp): raise NotImplementedError class MethExprCpgDataset(Dataset): '''Dataset for gene - correlated cpgs.''' def __init__(self, gene: str, projects: list): self.entrez_id = gene self.projects = projects self.df = None def load(self): data = {} for project in self.projects: p = Project(project) g = Gene(self.name) # extract ids of cpg highly correlated with gene cpg_ids = g.get_corr_cpgs() for case in p.case_ids: # extract expr and methylation values rec = p.get_case_data(case, str(g), cpg_ids) break break return rec # process rec into 1D array # rec_array = np.concat(rec['expression'], rec['methylation']) # data[(project, case)] = rec_array # Aggregate all the arrays into dataframe # df = # self.df = df # return df def __len__(self): assert self.df is not None return self.df.shape[0] # def methylate_seq(seq_tensor: torch.tensor, loc: int, value: float, meth_idx: int, mask_idx: int) -> torch.tensor: # # only C or N can be methylated # assert (seq_tensor[loc][0][meth_idx] > 0) or (seq_tensor[loc][0][mask_idx] > 0) # seq_tensor[loc][0][meth_idx] = value # return seq_tensor ###Output _____no_output_____ ###Markdown Main ###Code # Interface # MethExprDataset(gene='11215', projects=[], get_seq=False, get_corrcpgs=True) genes = ["11215", "91582", "285335"] cases = ['TCGA-2G-AAHC-01A', 'TCGA-4K-AA1G-01A', 'TCGA-GM-A2DA-01A', 'TCGA-DU-6400-01A', 'TCGA-TQ-A7RI-01A'] projects = ['TCGA-TGCT', 'TCGA-BRCA', 'TCGA-LGG'] ds = MethExprDataset(genes, projects, cases) import matplotlib.pyplot as plt from scipy.signal.windows import gaussian def build_gaussian_filter(N=N, sd=SD): gaussian_window = gaussian(N, SD) plt.stem(gaussian_window, use_line_collection=True) return gaussian_window def cytosine_signals(ds) -> list: sigs = [] for idx, d in ds.data.items(): Cs = d[0][:, 0, 1].numpy().copy() Cs[Cs == 1] = 0 sigs.append(Cs) plt.stem(sigs[0], use_line_collection=True) plt.show() return sigs def stemplots(sigs, gaussian_window): assert len(sigs) <= 10 fig, axs = plt.subplots(nrows=len(sigs), ncols=2, 
figsize=(15,20)) for i, ax in enumerate(axs): ax[0].stem(sigs[i], use_line_collection=True) conv = np.convolve(sigs[i], gaussian_window) warnings.warn("Make the filter symmetric.", UserWarning) ax[1].stem(conv, use_line_collection=True) sigs = cytosine_signals(ds) gauss = build_gaussian_filter(N=100, sd=20) stemplots(sigs[:10], gauss) def process_broad_cpg_gene_corr(path = BROAD_CPG_CORR_PATH): ''' Processes the df downloaded from broad on gene expression - cpg sites correlations. Data was downloaded from: http://firebrowse.org/?cohort=STAD&download_dialog=true# Preprocessing includes: ''' df = pd.read_csv(path, sep='\t') return df df = process_broad_cpg_gene_corr() df.head() np.mean(abs(df['Corr_Coeff'])) np.std(abs(df['Corr_Coeff'])) import seaborn as sns sns.distplot(abs(df['Corr_Coeff'])) ###Output _____no_output_____
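###Markdown The convolution in the cell above uses `np.convolve` with its default 'full' mode and an unnormalised window, which is what the warning about making the filter symmetric refers to. Below is a minimal sketch of a centred, area-normalised smoothing of the cytosine signals; the window length and standard deviation are illustrative assumptions, not values taken from the analysis. ###Code
import numpy as np
from scipy.signal.windows import gaussian

def smooth_symmetric(signal, n=100, sd=20):
    # Normalise the window so the smoothed trace keeps the original scale,
    # and use mode='same' so the output is centred and has the same length as the input.
    window = gaussian(n, sd)
    window = window / window.sum()
    return np.convolve(signal, window, mode='same')

# e.g. smoothed = [smooth_symmetric(s) for s in sigs]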
personal-learning/spooky-author/Spooky Author Identification.ipynb
###Markdown Spooky Author IdentificationKaggle competition info found [here](https://www.kaggle.com/c/spooky-author-identification) Getting the data Setting up the kaggle CLI ###Code ! pip install kaggle --upgrade ###Output Collecting kaggle [?25l Downloading https://files.pythonhosted.org/packages/78/3a/64a6447e5faa313b70cb555e21b5a30718c95bcc4902d91784b57fbab737/kaggle-1.5.3.tar.gz (54kB)  100% |████████████████████████████████| 61kB 37.6MB/s ta 0:00:01 [?25hRequirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /opt/anaconda3/lib/python3.7/site-packages (from kaggle) (1.24.1) Requirement already satisfied, skipping upgrade: six>=1.10 in /opt/anaconda3/lib/python3.7/site-packages (from kaggle) (1.12.0) Requirement already satisfied, skipping upgrade: certifi in /opt/anaconda3/lib/python3.7/site-packages (from kaggle) (2018.11.29) Requirement already satisfied, skipping upgrade: python-dateutil in /opt/anaconda3/lib/python3.7/site-packages (from kaggle) (2.8.0) Requirement already satisfied, skipping upgrade: requests in /opt/anaconda3/lib/python3.7/site-packages (from kaggle) (2.21.0) Requirement already satisfied, skipping upgrade: tqdm in /opt/anaconda3/lib/python3.7/site-packages (from kaggle) (4.28.1) Collecting python-slugify (from kaggle) Downloading https://files.pythonhosted.org/packages/1f/9c/8b07d625e9c9df567986d887f0375075abb1923e49d074a7803cd1527dae/python-slugify-2.0.1.tar.gz Requirement already satisfied, skipping upgrade: idna<2.9,>=2.5 in /opt/anaconda3/lib/python3.7/site-packages (from requests->kaggle) (2.8) Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /opt/anaconda3/lib/python3.7/site-packages (from requests->kaggle) (3.0.4) Collecting Unidecode>=0.04.16 (from python-slugify->kaggle) [?25l Downloading https://files.pythonhosted.org/packages/31/39/53096f9217b057cb049fe872b7fc7ce799a1a89b76cf917d9639e7a558b5/Unidecode-1.0.23-py2.py3-none-any.whl (237kB)  100% |████████████████████████████████| 245kB 37.6MB/s ta 0:00:01 [?25hBuilding wheels for collected packages: kaggle, python-slugify Running setup.py bdist_wheel for kaggle ... [?25ldone [?25h Stored in directory: /home/jupyter/.cache/pip/wheels/ee/97/c5/87dcdc9434fe4e632ed5945e31a03703af229db178ef6a00e8 Running setup.py bdist_wheel for python-slugify ... [?25ldone [?25h Stored in directory: /home/jupyter/.cache/pip/wheels/2b/9e/c8/14a18ab55d8f144384de8186a3df8401dcc9264936f71d470f Successfully built kaggle python-slugify Installing collected packages: Unidecode, python-slugify, kaggle Successfully installed Unidecode-1.0.23 kaggle-1.5.3 python-slugify-2.0.1 ###Markdown Then you need to upload your credentials from Kaggle on your instance. Login to kaggle and click on your profile picture on the top left corner, then 'My account'. Scroll down until you find a button named 'Create New API Token' and click on it. This will trigger the download of a file named 'kaggle.json'.Upload this file to the directory this notebook is running in, by clicking "Upload" on your main Jupyter page, then uncomment and execute the next two commands (or run them in a terminal). For Windows, uncomment the last two commands. ###Code # ! mkdir -p ~/.kaggle/ # ! mv kaggle.json ~/.kaggle/ from fastai.text import * Config.data_path() path = Config.data_path()/'spooky-author' path.mkdir(parents=True, exist_ok=True) path # ! kaggle competitions download -c spooky-author-identification -p {path} path.ls() # ! unzip /home/jupyter/.fastai/data/spooky-author/test.zip -d {path} # ! 
unzip /home/jupyter/.fastai/data/spooky-author/train.zip -d {path} # ! unzip /home/jupyter/.fastai/data/spooky-author/sample_submission.zip -d {path} path.ls() df_train = pd.read_csv(path/'train.csv') df_train.head() df_test = pd.read_csv(path/'test.csv') df_test.head() df_train['text'][0] data_lm = TextDataBunch.from_csv(path, 'train.csv') data_lm.save('lm-train') ###Output _____no_output_____ ###Markdown Next time we launch this notebook, we can skip the cell above that took a bit of time (and that will take a lot more when you get to the full dataset) and load those results like this: ###Code data = load_data(path, 'lm-train') data = TextClasDataBunch.from_csv(path, 'train.csv') data.show_batch() df_train.columns data = (TextList.from_csv(path, 'train.csv', cols='text') .random_split_by_pct(seed=42) .label_from_df(cols=2)) data # Language model data data_lm = TextLMDataBunch.from_csv(path, 'train.csv') data_lm.save('data_lm_1.pkl') # Classifier model data data_clas = TextClasDataBunch.from_csv(path, 'train.csv', vocab=data_lm.train_ds.vocab, bs=32, label_cols='author') data_clas.save('data_clas_1.pkl') data_lm = load_data(path, fname='data_lm_1.pkl') data_clas = load_data(path, fname='data_clas_1.pkl', bs=16) learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5) learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(1, 5e-2) learn.recorder.plot_losses() learn.predict('this is a test', n_words=10) learn.unfreeze() learn.fit_one_cycle(1, 1e-3) learn.recorder.plot_losses() learn.predict('this is a test', n_words=20) learn.fit_one_cycle(1, 1e-4) learn.recorder.plot_losses() learn.predict('this is a test', n_words=20) learn.save_encoder('ft_enc') learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5) learn.load_encoder('ft_enc') data_clas.show_batch() learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(1, 1e-2) learn.freeze_to(-2) learn.fit_one_cycle(1, slice(5e-3/2., 5e-3)) learn.predict('this is a test of ascent on the vice continent, and a somewhat distinct insurance of ground with Oxford.') learn.metrics learn.unfreeze() learn.fit_one_cycle(1, slice(2e-3/100, 2e-3)) learn.recorder.plot_losses() log(0.61) sample_sub = pd.read_csv(path/'sample_submission.csv') sample_sub.head() len(df_test) df_test.head(1) df_test['text'][0] learn.predict(df_test['text'][0]) learn.predict(df_test['text'][0])[0] learn.save('class-predictor-1') df_train.head() learn.get_preds('test') learn.predict(df_train['text'][0]) learn.predict(df_train['text'][1]) learn.predict(df_train['text'][3]) learn.predict(df_test['text'][1]) float(learn.predict(df_test['text'][1])[2][0]) preds = [] for i,row in df_test.iterrows(): pred = learn.predict(row['text'])[2] [EAP, HPL, MWS] = [float(i) for i in pred] preds.append({ 'id': row['id'], 'EAP': EAP, 'HPL': HPL, 'MWS': MWS }) submission = pd.DataFrame(preds, columns=['id', 'EAP', 'HPL', 'MWS']) submission.head() len(submission) submission.to_csv('/home/jupyter/tutorials/fastai/personal-learning/submission-1.csv', index=False) ###Output _____no_output_____ ###Markdown Try 2 Datablock API ###Code path.ls() df_train.head() df_test.head() TextList.from_csv?? TextList.from_df?? 
bs = 64 data_lm = (TextList.from_csv(path, 'train.csv', cols='text') .random_split_by_pct(0.1, seed=42) .label_for_lm() .add_test(TextList.from_csv(path, 'test.csv', cols='text')) .databunch()) data_lm.save('data_lm.pkl') data_lm = load_data(path, 'data_lm.pkl', bs=bs) data_lm.show_batch() learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3) learn.lr_find() learn.recorder.plot(skip_end=10) learn.fit_one_cycle(1, 3e-2, moms=(0.8,0.7)) learn.recorder.plot_losses() learn.save('fit_head.pkl') learn.fit_one_cycle(1, 3e-3, moms=(0.8,0.7)) learn.recorder.plot_losses() learn.save('fit_head.pkl') learn.freeze_to(-2) learn.fit_one_cycle(1, slice(5e-3/2., 5e-3)) learn.predict('this is a test', n_words=10) learn.predict('As I turned the', n_words=10) learn.save('lm_fit_2.pkl') learn.fit_one_cycle(10, 1e-3, moms=(0.8,0.7)) learn.recorder.plot_losses() TEXT = "He began the process of" N_WORDS = 40 N_SENTENCES = 2 print("\n".join(learn.predict(TEXT, N_WORDS, temperature=0.75) for _ in range(N_SENTENCES))) learn.save('lm_fine_tuned_1.pkl') learn.save_encoder('fine_tuned_enc.pkl') ###Output _____no_output_____ ###Markdown Classifier ###Code bs = 48 data_lm.train_ds.vocab data_clas = (TextList.from_csv(path, 'train.csv', cols='text', vocab=data_lm.train_ds.vocab) .random_split_by_pct(0.1, seed=42) .label_from_df(cols='author') .add_test(TextList.from_csv(path, 'test.csv', cols='text')) .databunch()) data_clas.save('data_clas_2.pkl') data_clas = load_data(path, 'data_clas_2.pkl', bs=bs) data_clas.show_batch() learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5) learn.load_encoder('fine_tuned_enc.pkl') learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(1, 5e-3, moms=(0.8,0.7)) learn.recorder.plot_losses() learn.save('clas_1.pkl') learn.fit_one_cycle(1, 1e-3, moms=(0.8,0.7)) learn.fit_one_cycle(5, 1e-3, moms=(0.8,0.7)) learn.recorder.plot_losses() learn.freeze_to(-2) learn.fit_one_cycle(1, slice(1e-3/(2.6**4),1e-2), moms=(0.8,0.7)) learn.fit_one_cycle(1, slice(1e-3/(2.6**4),1e-2), moms=(0.8,0.7)) learn.recorder.plot_losses() learn.save('clas_2.pkl') learn.fit_one_cycle(3, slice(1e-3/(2.6**4),1e-2), moms=(0.8,0.7)) learn.recorder.plot_losses() learn.freeze_to(-3) learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3), moms=(0.8,0.7)) learn.fit_one_cycle(2, slice(2e-3/(2.6**4),5e-3), moms=(0.8,0.7)) learn.recorder.plot_losses() learn.fit_one_cycle(3, slice(2e-3/(2.6**4),5e-3), moms=(0.8,0.7)) learn.save('clas_3.pkl') learn.fit_one_cycle(3, slice(2e-3/(2.6**4),5e-3), moms=(0.8,0.7)) learn.save('clas_4.pkl') learn.load('clas_4.pkl') learn.data.batch_size learn.unfreeze() learn.fit_one_cycle(2, slice(1e-3/(2.6**4),1e-3), moms=(0.8,0.7)) learn.fit_one_cycle(5, slice(1e-3/(2.6**4),1e-3), moms=(0.8,0.7)) learn.save('clas_5.pkl') learn.load('clas_5.pkl') learn.recorder.plot_losses() learn.fit_one_cycle(5, slice(1e-4/(2.6**4),1e-4), moms=(0.8,0.7)) learn.recorder.plot_losses() learn.fit_one_cycle(5, slice(1e-4/(2.6**4),1e-4), moms=(0.8,0.7)) learn.recorder.plot_losses() learn.save('clas_6.pkl') learn.export() preds, targets = learn.get_preds(ds_type=DatasetType.Test) preds[:10] len(targets) len(df_test) df_test.head() learn.data.classes df_test.iloc[0]['id'] ###Output _____no_output_____ ###Markdown Assuming that the ordering is the same in the preds and the original test csv ###Code formatted = [] for i,pred in enumerate(preds): [EAP, HPL, MWS] = [float(p) for p in pred] formatted.append({ 'id': df_test.iloc[i]['id'], 'EAP': EAP, 'HPL': HPL, 'MWS': MWS }) formatted[0] submission = 
pd.DataFrame(formatted, columns=['id', 'EAP', 'HPL', 'MWS']) submission.head() submission.to_csv('submission-2.csv', index=False) ! kaggle competitions submit spooky-author-identification -f ./submission-2.csv -m "Attempt 2" ###Output 100%|█████████████████████████████████████████| 562k/562k [00:04<00:00, 138kB/s] Successfully submitted to Spooky Author Identification ###Markdown Something is messed up with my submission the rating is 3.04 ###Code test = TextList.from_csv(path, 'test.csv', cols='text', vocab=data_clas.train_ds.vocab) len(test) learn = load_learner(path, test=test) preds, targets = learn.get_preds(ds_type=DatasetType.Test) preds[:10] learn.data.classes preds2 = [] for i,row in df_test.iterrows(): pred = learn.predict(row['text'])[2] [EAP, HPL, MWS] = [float(i) for i in pred] preds2.append({ 'id': row['id'], 'EAP': EAP, 'HPL': HPL, 'MWS': MWS }) preds2[:5] data_clas.train_ds.vocab submission = pd.DataFrame(preds2, columns=['id', 'EAP', 'HPL', 'MWS']) submission.head() submission.to_csv('submission-2.csv', index=False) ! kaggle competitions submit spooky-author-identification -f ./submission-2.csv -m "Attempt 2 - formatting take 2" ###Output 100%|█████████████████████████████████████████| 562k/562k [00:04<00:00, 127kB/s] Successfully submitted to Spooky Author Identification
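###Markdown The row-by-row `learn.predict` loop above is slow; the submission frame can also be assembled directly from the `preds` matrix returned by `get_preds`, provided its row order matches `df_test`. This is a sketch under that assumption (fastai v1 text loaders can reorder the test set by sequence length, which is why the spot-check is kept); the output file name is hypothetical. ###Code
import pandas as pd

# preds: (n_test, 3) tensor from learn.get_preds; columns follow learn.data.classes (EAP, HPL, MWS)
probs = pd.DataFrame(preds.numpy(), columns=learn.data.classes)
probs.insert(0, 'id', df_test['id'].values)

# Spot-check the ordering assumption against a few direct predictions
for i in range(3):
    print(probs.iloc[i, 1:].values, learn.predict(df_test['text'][i])[2].numpy())

probs.to_csv('submission-check.csv', index=False)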
src/TA2 - difference in result.ipynb
###Markdown Visualization ###Code def generate_graph(nodes: List) -> nx.Graph: R = nx.Graph() R.add_nodes_from(nodes) R.add_edges_from(nx.utils.pairwise(nodes)) return R def get_node_color_result(kinputs, koutputs, union_result): color_map = [] for n in union_result: if n in kinputs: color_map.append("green") elif n in koutputs: color_map.append("blue") else: color_map.append("yellow") return color_map def get_edge_weight(G, result): r = result.edges() ans = [] for g in G.edges(): if g in r: ans.append(5) else: ans.append(1) return ans def visualize_graph(Graph: nx.Graph, figsize: tuple=(15,15), color_map: List[str]=None, node_size: int=3000, with_labels: bool=True) -> None: if color_map == None: color_map = "yellow" else: color_map = color_map plt.figure(1,figsize=figsize) nx.draw_kamada_kawai(Graph, node_color=color_map, with_labels=with_labels, node_size=node_size, font_size=20,font_family="Noto Serif CJK JP") plt.show() def visualize_graph_in_all(Graph: nx.Graph, figsize: tuple=(15,15), color_map: List[str]=None, node_size: int=3000, weights: List[int] = None, with_labels: bool=True) -> None: if color_map == None: color_map = "yellow" else: color_map = color_map plt.figure(1,figsize=figsize) nx.draw_kamada_kawai(Graph, node_color=color_map, with_labels=with_labels, node_size=node_size, font_size=20,width=weights,font_family="Noto Serif CJK JP") plt.show() ###Output _____no_output_____ ###Markdown Algorithm ###Code inputs = [5,8] # outputs = [27,2] outputs = [23,16,12] ###Output _____no_output_____ ###Markdown Brute Force ###Code def find_path_bf(MOrig: List, MDest: List) -> nx.Graph: result = [] for kin in MOrig: for kout in MDest: sp_raw = nx.dijkstra_path(G, source=kin, target=kout) sp_graph = generate_graph(sp_raw) result.append(sp_graph) return nx.compose_all(result) result = find_path_bf(inputs, outputs) visualize_graph_in_all(Graph=G, figsize=(7,7), node_size=500, weights=get_edge_weight(G, result),color_map=get_node_color_result(inputs, outputs, G)) ###Output _____no_output_____ ###Markdown A* ###Code def common_neighbor(u, v): return len(list(nx.common_neighbors(G, u, v))) def jaccard_function(u, v): union_size = len(set(G[u]) | set(G[v])) # union neighbor if union_size == 0: return 0 return len(list(nx.common_neighbors(G, u, v))) / union_size def find_path_astar(MOrig: List, MDest: List, heuristic_func) -> nx.Graph: result = [] for kin in MOrig: for kout in MDest: sp_raw = nx.astar_path(G, source=kin, target=kout, heuristic=heuristic_func) sp_graph = generate_graph(sp_raw) result.append(sp_graph) return nx.compose_all(result) result = find_path_astar(inputs, outputs, common_neighbor) visualize_graph_in_all(Graph=G, figsize=(7,7), node_size=500, weights=get_edge_weight(G, result),color_map=get_node_color_result(inputs, outputs, G)) ###Output _____no_output_____ ###Markdown Steiner Tree ###Code from itertools import count from heapq import heappush, heappop def _dijkstra_multisource( G, sources, weight, pred=None, paths=None, cutoff=None, target=None ): G_succ = G._succ if G.is_directed() else G._adj push = heappush pop = heappop dist = {} # dictionary of final distances seen = {} # fringe is heapq with 3-tuples (distance,c,node) # use the count c to avoid comparing nodes (may not be able to) c = count() fringe = [] for source in sources: if source not in G: raise nx.NodeNotFound(f"Source {source} not in G") seen[source] = 0 push(fringe, (0, next(c), source)) while fringe: (d, _, v) = pop(fringe) if v in dist: continue # already searched this node. 
dist[v] = d if v == target: break for u, e in G_succ[v].items(): cost = weight(v, u, e) if cost is None: continue vu_dist = dist[v] + cost if cutoff is not None: if vu_dist > cutoff: continue if u in dist: u_dist = dist[u] if vu_dist < u_dist: raise ValueError("Contradictory paths found:", "negative weights?") elif pred is not None and vu_dist == u_dist: pred[u].append(v) elif u not in seen or vu_dist < seen[u]: seen[u] = vu_dist push(fringe, (vu_dist, next(c), u)) if paths is not None: paths[u] = paths[v] + [u] if pred is not None: pred[u] = [v] elif vu_dist == seen[u]: if pred is not None: pred[u].append(v) return dist def multi_source_dijkstra(G, sources, target=None, cutoff=None, weight="weight"): if target in sources: return (0, [target]) weight = lambda u, v, data: data.get(weight, 1) paths = {source: [source] for source in sources} # dictionary of paths dist = _dijkstra_multisource(G, sources, weight, paths=paths) if target is None: return (dist, paths) try: return (dist[target], paths[target]) except KeyError as e: raise nx.NetworkXNoPath(f"No path to {target}.") from e def my_all_pairs_dijkstra(G): i = 0 for n in G: i += 1 print('\r%s' % i, end = '\r') dist, path = multi_source_dijkstra(G, {n}) yield (n, (dist, path)) def metric_closure(G, weight="weight"): M = nx.Graph() Gnodes = set(G) all_paths_iter = my_all_pairs_dijkstra(G) for u, (distance, path) in all_paths_iter: Gnodes.remove(u) for v in Gnodes: M.add_edge(u, v, distance=distance[v], path=path[v]) return M from itertools import chain from networkx.utils import pairwise def my_steiner_tree(G, terminal_nodes, weight="weight"): global mcg # H is the subgraph induced by terminal_nodes in the metric closure M of G. M = metric_closure(G, weight=weight) # O(|GV|^2) H = M.subgraph(terminal_nodes) # O(|GV|^2) * O(|MOrig| + |MDest|) # Use the 'distance' attribute of each edge provided by M. mst_edges = nx.minimum_spanning_edges(H, weight="distance", data=True) # O (|GE| log GV) # Create an iterator over each edge in each shortest path; repeats are okay edges = chain.from_iterable(pairwise(d["path"]) for u, v, d in mst_edges) T = G.edge_subgraph(edges) return T def find_path_steiner(G: nx.Graph, MOrig: List, MDest: List) -> nx.Graph: return my_steiner_tree(G, MOrig + MDest) result = find_path_steiner(G, inputs, outputs) visualize_graph_in_all(Graph=G, figsize=(7,7), node_size=500, weights=get_edge_weight(G, result),color_map=get_node_color_result(inputs, outputs, G)) result = find_path_bf(inputs, outputs) visualize_graph(Graph=result, figsize=(5,5), color_map=get_node_color_result(inputs, outputs, result)) ###Output _____no_output_____
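###Markdown `my_steiner_tree` mirrors the metric-closure approximation that ships with NetworkX, so a quick cross-check against the library version is a cheap sanity test. Note that `multi_source_dijkstra` above rebinds `weight` to a lambda that refers to itself, so `data.get(...)` always falls back to 1; that only matters if `G` carries real edge weights. A sketch, assuming `G`, `inputs` and `outputs` as defined earlier: ###Code
from networkx.algorithms.approximation import steiner_tree

reference = steiner_tree(G, inputs + outputs, weight='weight')
custom = find_path_steiner(G, inputs, outputs)

# Both trees must contain every terminal; edge sets can differ only when the
# metric closure has several minimum spanning trees of equal weight.
terminals = set(inputs + outputs)
print(terminals <= set(reference.nodes()), terminals <= set(custom.nodes()))
print({frozenset(e) for e in reference.edges()} == {frozenset(e) for e in custom.edges()})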
Budget Tracker V4.ipynb
###Markdown How it works1. User places CSV file from the Bank in the directory2. Program imports it into a pandas dataframe, cleans it up, fills in NaNs, extracts and adds the year and month The program works with 2 dictionaries - the Categoriser and the Category Mapper * The Categoriser is a list of key words that assign a subcategory to each entry * The Category Mapper assigns categories and classes to subcategories3. The program goes through the CSV line by line, scanning for subcategory key words. Three cases follow:* Case 1: No results found --> the user is prompted to enter a new keyword/subcategory pair into the dictionary OR make it a once-off classification* Case 2: One result found --> the program assigns the subcategory to the keyword* Case 3: More than one result found --> the program is in danger of making a mistaken classification because the user hasn't chosen their keyword closely enough. The user is prompted to choose the correct match from a shortlist. Functions ###Code import json import os import pandas as pd %cd "F:/Google Drive/JupyterNotebooks/Useful Projects/Budget Tracker/" # I operate across a couple of different computers. Sadly, they don't have the same path. path = "F:/Google Drive/JupyterNotebooks/Useful Projects/Budget Tracker/" ###Output F:\Google Drive\JupyterNotebooks\Useful Projects\Budget Tracker ###Markdown Display, Save and Load ###Code def print_dict(my_dict): for key,value in sorted(my_dict.items()): print(key,":",value) def load_dict_from_json(target_dict): with open("{}{}.json".format(path,target_dict),"r") as fh: return(json.load(fh)) def save_dict_to_json(filename,dict_to_save): with open("{}{}.json".format(path,filename),"w") as savefile: json.dump(dict_to_save,savefile,indent=2, sort_keys = True) my_dict = {1:"one",2:"two",3:"three"} save_dict_to_json("Test_Dict",my_dict) ###Output _____no_output_____ ###Markdown Dynamically adding things - helpful functions ###Code def list_subcategory_options(): category_map_dict = load_dict_from_json("Category_Map") subcats_list = sorted(category_map_dict.keys()) subcat_chooser = dict(zip(range(len(subcats_list)),subcats_list)) return(subcat_chooser) list_subcategory_options() def list_category_options(): category_map_dict = load_dict_from_json("Category_Map") subcats_list = category_map_dict.keys() category_list = [] for x in subcats_list: category_list.append(category_map_dict[x]["Category"]) category_list = sorted(set(category_list)) cat_chooser = dict(zip(range(len(category_list)),category_list)) return(cat_chooser) list_category_options() def new_subcategory(): category_map_dict = load_dict_from_json("Category_Map") #print out the pre-existing options print_dict(category_map_dict) new_subcat = input("What is your sub-category?") # testing to see if it already exists while new_subcat in category_map_dict: # room for checking if this is in or NEARLY in the thing already new_subcat = input("Oops. That category already exists. Choose another. 
") # once we have a unique value, enter the category and class new_cat = input("What is your category?") new_class = input("What is your class?") #now actually adding the new category to the repository category_map_dict[new_subcat] = {"Category":new_cat,"Class":new_class} #save the file print("Adding new subcategory --> {}:{{{}:{}}}".format(new_subcat,new_cat,new_class)) save_dict_to_json("Category_Map",category_map_dict) new_subcategory() new_subcategory() # passed test def kw_match_tracker(eachDescription): with open("{}AutoCatV2.json".format(path),"r") as keyword_map: keyword_map = json.load(keyword_map) keyword_list = list(keyword_map.keys()) hit_count = 0 match_dict = {} for eachKey in keyword_list: #print("searching for {} in {}".format(eachKey,eachDescription)) if eachKey in eachDescription: hit_count += 1 #print("Hit count: ",str(hit_count)) match_dict[hit_count] = eachKey output = {"match_dict":match_dict,"hit_count":hit_count} return(output) desc = "TFR to 224238S6 ONLINE To-P W NEWMAN Ref-Savings Savings" kw_match_tracker(desc) def new_keyword(): keyword_map = load_dict_from_json("AutoCatV2") new_kw = input("Please choose a keyword: ") print("New keyword: {} ".format(new_kw)) options = list_subcategory_options() print_dict(options) new_choice = str.upper(input("To enter a new subcategory, type N. Type anything else to use the current options")) if new_choice == "N": new_subcategory() # refresh list options = list_subcategory_options() print_dict(options) subcat_choice = int(input("Enter the number of the subcategory mapping you want to use: ")) subcat = options[subcat_choice] keyword_map[new_kw] = subcat save_option = str.upper(input("Save new keyword mapping {} : {}? Press Y ".format(new_kw,subcat))) if save_option == "Y": save_dict_to_json("AutoCatV2",keyword_map) return(subcat) #new_keyword() #passed test ###Output _____no_output_____ ###Markdown Pandas Functions ###Code # to be called in an apply in pandas # goes through the df reading dates and adds the month # dependency: dates column def pd_add_month(eachDate): return(eachDate.month) # to be called in an apply in pandas # goes through the df reading dates and adds the month # dependency: dates column def pd_add_year(eachDate): return(eachDate.year) # to be called in an apply in pandas # goes through the df reading descriptions and adds the subcategory using a keyword search def pd_add_subcategories(eachDescription): match_dict = kw_match_tracker(eachDescription)["match_dict"] hit_count = kw_match_tracker(eachDescription)["hit_count"] #"{} --> {}".format(eachKey,keyword_map[eachKey]) keyword_map = load_dict_from_json("AutoCatV2") if hit_count == 0: print("No Matches found for '",eachDescription,"'") user_choice = str.upper(input("Type N for new keyword, C to categorise as a once-off or S for skip")) if user_choice == "N": return(new_keyword()) elif user_choice == "C": print_dict(list_subcategory_options()) subcat = int(input("Make your choice from the list above by entering the number")) return(list_subcategory_options()[subcat]) elif user_choice == "S": return("Uncategorised") elif hit_count == 1: return(keyword_map[match_dict[hit_count]]) elif hit_count > 1: print("More than one hit found for '{}'".format(eachDescription)) print_dict(match_dict) shortlist_choice = int(input("Please choose a number from the options above or anything else to skip.")) if shortlist_choice in range(hit_count): return(keyword_map[match_dict[shortlist_choice]]) else: return("Uncategorised") # to be called in an apply in pandas # goes through 
the df reading subcategories and adds the category # dependency: subcategory column def pd_add_categories(eachSubcategory): category_map_dict = load_dict_from_json("Category_Map") if eachSubcategory in category_map_dict: #this line is probably not necessary as you have to subcategorise first... return(category_map_dict[eachSubcategory]["Category"]) else: return("Uncategorised") test = "Supplies" pd_add_categories(test) # to be called in an apply in pandas # goes through the df reading subcategories and adds the class # dependency: subcategory column def pd_add_classes(eachSubcategory): category_map_dict = load_dict_from_json("Category_Map") if eachSubcategory in category_map_dict: return(category_map_dict[eachSubcategory]["Class"]) else: return("Uncategorised") test = "Supplies" pd_add_classes(test) ###Output _____no_output_____ ###Markdown Actions for the main loop ###Code def pd_import_data(filename): return(pd.read_excel("{}{}.xlsx".format(path,filename))) df = pd_import_data("March April 2015") df.head(5) #cleans data def clean_data(df): df.fillna("NA",inplace = True) def add_year(df): df["Year"] = df.apply(lambda x: pd_add_year(x["Date"]),axis= 1) df = pd.read_excel("March April 2015.xlsx") clean_data(df) add_year(df) df.head() def add_month(df): df["Month"] = df.apply(lambda x: pd_add_month(x["Date"]),axis= 1) df = pd.read_excel("March April 2015.xlsx") clean_data(df) add_month(df) df.head() def add_subcategories(df): df["Subcategory"] = df.apply(lambda x: pd_add_subcategories(x["Description"]),axis=1) df = pd.read_excel("March April 2015.xlsx") clean_data(df) add_subcategories(df) df # dependency: add_subcategories def add_categories(df): df["Category"] = df.apply(lambda x: pd_add_categories(x["Subcategory"]),axis=1) df = pd.read_excel("March April 2015.xlsx") clean_data(df) add_subcategories(df) add_categories(df) df # dependency: add_subcategories def add_classes(df): df["Class"] = df.apply(lambda x: pd_add_classes(x["Subcategory"]),axis=1) df = pd.read_excel("March April 2015.xlsx") clean_data(df) add_subcategories(df) add_classes(df) df def save_to_excel(df,filename,sheetname): path_name = "{}{}.xlsx".format(path,filename) print("Saving to",path_name+"?") if str.upper(input("Continue? Y or N")) == "Y": # insert check for overwrite of file here df.to_excel(path_name,sheet_name=sheetname) elif "N": print("Save cancelled") df = pd.read_excel("March April 2015.xlsx") clean_data(df) add_subcategories(df) df save_to_excel(df,"Export 1","March-April 2015") ###Output Saving to F:/Google Drive/JupyterNotebooks/Useful Projects/Budget Tracker/Export 1.xlsx? Continue? Y or NY ###Markdown Main ###Code def main(filename): df = pd_import_data(filename) clean_data(df) add_year(df) add_month(df) add_subcategories(df) add_categories(df) add_classes(df) save_filename = input("Exporting to Excel. Filename: ") save_excel(df,"{}{}".format(filename,) ###Output _____no_output_____ ###Markdown Testing Area ###Code os.listdir() df= pd.read_excel("March April 2015.xlsx") clean_data(df) #df.fillna("NA",inplace=True) df["Subcategory"] = df.apply(lambda x: assign_subcategory(df["Description"]),axis= 1) ###Output No Matches found for ' 0 TFR TO 484799 502175908 ONLINE To-MISS MELISSA... 1 TFR TO 032001 143827 ONLINE To-MUNGINDI CENTRA... 2 TELSTRAKENAN20EASYPAYA From: Telstra DDebit Re... 3 DIRECT CREDIT From: DEPT OF SCHOOL E Ref: DOSE... 4 TF From: TF Ref: NSW Teachers Feder 5 TFR TO 932000 652932 ONLINE To-MR PETER NEWMAN... 6 TFR to 224238S6 ONLINE To-P W NEWMAN Ref-Savin... 
7 TFR TO 932000 652932 MOB To-MR PETER NEWMAN Re... 8 DIRECT CREDIT From: DEPT OF SCHOOL E Ref: DOSE... 9 TFR TO 484799 502175908 ONLINE To-MISS MELISSA... 10 TFR to 224238S6 ONLINE To-P W NEWMAN Ref-Savin... 11 TF From: TF Ref: NSW Teachers Feder 12 DIRECT DEBIT From: TPG Internet Ref: DF3Q0PJTS... 13 BPAY BOSTES ONLINE Ref-2763787 #134329061 14 TELSTRAKENAN20EASYPAYA From: Telstra DDebit Re... 15 DIRECT CREDIT From: DEPT OF SCHOOL E Ref: DOSE... 16 TF From: TF Ref: NSW Teachers Feder 17 TFR TO 932000 652932 MOB To-MR PETER NEWMAN Re... 18 BPAY 2511-023903-RTA NSW MOB Ref-100244329406 ... 19 DIRECT CREDIT From: DEPT OF SCHOOL E Ref: DOSE... 20 TF From: TF Ref: NSW Teachers Feder 21 PAYPAL AUSTRALIA From: PAYPAL AUSTRALIA Ref: 5... 22 PAYPAL AUSTRALIA From: PAYPAL AUSTRALIA Ref: 5... 23 DIRECT DEBIT From: TPG Internet Ref: DF4Q1J4VV... 24 TFR TO 032001 143827 MOB To-MUNGINDI CENTRAL S... 25 TELSTRAKENAN20EASYPAYA From: Telstra DDebit Re... 26 POS W/D SPAR MUNGINDI-16:53 $30.50 CASH 27 PAYPAL AUSTRALIA From: PAYPAL AUSTRALIA Ref: J... 28 PAYPAL AUSTRALIA From: PAYPAL AUSTRALIA Ref: J... 29 INTEREST CREDIT Name: Description, dtype: object ' Type N for new keyword, C to categorise as a once-off or S for skipS No Matches found for ' 0 TFR TO 484799 502175908 ONLINE To-MISS MELISSA... 1 TFR TO 032001 143827 ONLINE To-MUNGINDI CENTRA... 2 TELSTRAKENAN20EASYPAYA From: Telstra DDebit Re... 3 DIRECT CREDIT From: DEPT OF SCHOOL E Ref: DOSE... 4 TF From: TF Ref: NSW Teachers Feder 5 TFR TO 932000 652932 ONLINE To-MR PETER NEWMAN... 6 TFR to 224238S6 ONLINE To-P W NEWMAN Ref-Savin... 7 TFR TO 932000 652932 MOB To-MR PETER NEWMAN Re... 8 DIRECT CREDIT From: DEPT OF SCHOOL E Ref: DOSE... 9 TFR TO 484799 502175908 ONLINE To-MISS MELISSA... 10 TFR to 224238S6 ONLINE To-P W NEWMAN Ref-Savin... 11 TF From: TF Ref: NSW Teachers Feder 12 DIRECT DEBIT From: TPG Internet Ref: DF3Q0PJTS... 13 BPAY BOSTES ONLINE Ref-2763787 #134329061 14 TELSTRAKENAN20EASYPAYA From: Telstra DDebit Re... 15 DIRECT CREDIT From: DEPT OF SCHOOL E Ref: DOSE... 16 TF From: TF Ref: NSW Teachers Feder 17 TFR TO 932000 652932 MOB To-MR PETER NEWMAN Re... 18 BPAY 2511-023903-RTA NSW MOB Ref-100244329406 ... 19 DIRECT CREDIT From: DEPT OF SCHOOL E Ref: DOSE... 20 TF From: TF Ref: NSW Teachers Feder 21 PAYPAL AUSTRALIA From: PAYPAL AUSTRALIA Ref: 5... 22 PAYPAL AUSTRALIA From: PAYPAL AUSTRALIA Ref: 5... 23 DIRECT DEBIT From: TPG Internet Ref: DF4Q1J4VV... 24 TFR TO 032001 143827 MOB To-MUNGINDI CENTRAL S... 25 TELSTRAKENAN20EASYPAYA From: Telstra DDebit Re... 26 POS W/D SPAR MUNGINDI-16:53 $30.50 CASH 27 PAYPAL AUSTRALIA From: PAYPAL AUSTRALIA Ref: J... 28 PAYPAL AUSTRALIA From: PAYPAL AUSTRALIA Ref: J... 29 INTEREST CREDIT Name: Description, dtype: object ' Type N for new keyword, C to categorise as a once-off or S for skipN
Uniform_Sampling/02_Time-Series_Uniform-Sampling.ipynb
###Markdown Step 2 - Uniform Sampling for Current, Voltage, TemperatureFor the times series data, convert the rows D, E, K to uniform time step (Uniform sampling). These values should be easily changed, but we can start with 10 seconds.We will be doing this for the Current, Voltage, and Temperature time series that are in rows D, E and K of the times series files.Lastly, from the uniform sampled data, create an additional time series for Power which equals Current x Voltage. ###Code df.head(10) CURRENT = 'Current (A)' VOLTAGE = 'Voltage (V)' TEMP = 'Cell_Temperature (C)' DATE = 'Date_Time' df = df[[DATE, TEMP, CURRENT, VOLTAGE]].copy() df['Date_Time'] = pd.to_datetime(df['Date_Time']) df['Date_Time'] = df['Date_Time'].dt.round(freq='T') # minutely frequency df.head(10) print(df.shape) df.drop_duplicates(subset=[DATE], keep='last', inplace=True) print(df.shape) df.head(10) timestamp_df = df.copy().reset_index(drop=True) timestamp_df['Date_Time'] = timestamp_df['Date_Time'].apply(lambda x: x.timestamp()) timestamp_df.head() timestamp_df.tail() interval = 10 first_date_time = timestamp_df['Date_Time'].iloc[0] last_date_time = timestamp_df['Date_Time'].iloc[-1] + interval first_date_time, last_date_time devided_values = np.arange(first_date_time ,last_date_time, interval) len(devided_values) a_dataframe = pd.DataFrame(devided_values, columns= ['Time']) len(a_dataframe) final_df = pd.merge(a_dataframe, timestamp_df, how='left', left_on='Time',right_on='Date_Time') len(final_df) final_df.head(10) x = (timestamp_df['Date_Time'].to_numpy()).astype(float) y_curreny = (timestamp_df[CURRENT].to_numpy()).astype(float) y_voltage = (timestamp_df[VOLTAGE].to_numpy()).astype(float) y_temp = (timestamp_df[TEMP].to_numpy()).astype(float) result_current = np.interp(devided_values, x, y_curreny) result_voltage = np.interp(devided_values, x, y_voltage) result_temp = np.interp(devided_values, x, y_temp) len(result_current) final_df[CURRENT] = result_current final_df[VOLTAGE] = result_voltage final_df[TEMP] = result_temp final_df['Power'] = final_df[CURRENT] * final_df[VOLTAGE] final_df final_df.drop(columns=['Date_Time'], inplace=True) final_df['Time'] = pd.to_datetime(final_df['Time'], unit='s') final_df.sample(5) final_df.to_csv('SNL_18650_LFP_15C_0-100_0.5-1C_a_timeseries.csv', index=False) ! zip -9 SNL_18650_LFP_15C_0-100_0.5-1C_a_timeseries.zip /content/SNL_18650_LFP_15C_0-100_0.5-1C_a_timeseries.csv ! cp SNL_18650_LFP_15C_0-100_0.5-1C_a_timeseries.zip /content/drive/MyDrive/Projects/Ian_SensAI/data/ ###Output _____no_output_____
plaque_prediction/plaque_pred.ipynb
###Markdown Predict plaque ###Code %matplotlib inline import matplotlib import matplotlib.pyplot as plt # math/matrix/algebra packages import numpy as np import math import os # image analysis packages from PIL import Image # keras import keras from keras.models import load_model from keras.datasets import cifar10 from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D, Conv2DTranspose, concatenate from keras.models import Model from keras.callbacks import EarlyStopping, ModelCheckpoint from keras.optimizers import Adam from keras import backend as K # Ignore warnings import warnings warnings.filterwarnings("ignore") ###Output Using TensorFlow backend. ###Markdown Define the model: ###Code # define dice coefficient:------------------------------------------------------ def dice_coef(y_true, y_pred): y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum(y_true_f * y_pred_f) return (2. * intersection + 1) / (K.sum(y_true_f) + K.sum(y_pred_f) + 1) def dice_coef_loss(y_true, y_pred): return 1-dice_coef(y_true, y_pred) # redefine network with image size: # define network:--------------------------------------------------------------- def set_up_model(modelpath): inputs = Input((None, None, 1)) conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs) conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1) pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1) conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2) pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2) conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3) pool3 = MaxPooling2D(pool_size=(2, 2))(conv3) conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool3) conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4) pool4 = MaxPooling2D(pool_size=(2, 2))(conv4) conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4) conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5) up6 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv5), conv4], axis=3) conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(up6) conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv6) up7 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3) conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(up7) conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv7) up8 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3) conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(up8) conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv8) up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3) conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9) conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9) conv10 = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(conv9) model = Model(inputs=[inputs], outputs=[conv10]) model.compile(optimizer=Adam(lr = 0.0001), loss=dice_coef_loss, metrics=[dice_coef]) # load weights: model.load_weights(modelpath) return model ###Output _____no_output_____ ###Markdown Load data: ###Code test_im=np.load("../data/plaque_data_small.npz")['arr_0'] test_labels=np.load("../data/plaque_data_small.npz")['arr_1'] 
test_im=test_im.reshape((test_im.shape[0],test_im.shape[1],test_im.shape[2],1)) test_labels=test_labels.reshape((test_labels.shape[0],test_labels.shape[1],test_labels.shape[2],1)) ###Output _____no_output_____ ###Markdown Load model weights: ###Code p_model=set_up_model('../models/plaque_model_weights.hdf5') l,d=p_model.evaluate(test_im,test_labels, batch_size=32) print("Plaque model dice coefficient: " + str(d)) ###Output 20/20 [==============================] - 5s 237ms/step Plaque model dice coefficient: 0.8358448147773743 ###Markdown Visualize predictions: ###Code # predict images: pred_p=p_model.predict(test_im, batch_size=32) plt.rcParams['figure.figsize'] = [20, 10] plt.subplot(1, 3, 1) tmp=np.zeros((test_im.shape[1],test_im.shape[2],3)) tmp[:,:,0] = tmp[:,:,0] + np.reshape(test_im[19],(test_im.shape[1],test_im.shape[2])) + 0.5*pred_p[19,:,:,0] tmp[:,:,1] = tmp[:,:,1] + np.reshape(test_im[19],(test_im.shape[1],test_im.shape[2])) + 0.5*np.reshape(test_labels[19],(test_labels.shape[1],test_labels.shape[2])) tmp[:,:,2] = tmp[:,:,2] + np.reshape(test_im[19],(test_im.shape[1],test_im.shape[2])) tmp[tmp>1]=1 plt.imshow(tmp) plt.axis('off') plt.subplot(1, 3, 2) tmp=np.zeros((test_im.shape[1],test_im.shape[2],3)) tmp[:,:,0] = tmp[:,:,0] + np.reshape(test_im[1],(test_im.shape[1],test_im.shape[2])) + 0.5*pred_p[1,:,:,0] tmp[:,:,1] = tmp[:,:,1] + np.reshape(test_im[1],(test_im.shape[1],test_im.shape[2])) + 0.5*np.reshape(test_labels[1],(test_labels.shape[1],test_labels.shape[2])) tmp[:,:,2] = tmp[:,:,2] + np.reshape(test_im[1],(test_im.shape[1],test_im.shape[2])) tmp[tmp>1]=1 plt.imshow(tmp) plt.axis('off') plt.subplot(1, 3, 3) tmp=np.zeros((test_im.shape[1],test_im.shape[2],3)) tmp[:,:,0] = tmp[:,:,0] + np.reshape(test_im[2],(test_im.shape[1],test_im.shape[2])) + 0.5*pred_p[2,:,:,0] tmp[:,:,1] = tmp[:,:,1] + np.reshape(test_im[2],(test_im.shape[1],test_im.shape[2])) + 0.5*np.reshape(test_labels[2],(test_labels.shape[1],test_labels.shape[2])) tmp[:,:,2] = tmp[:,:,2] + np.reshape(test_im[2],(test_im.shape[1],test_im.shape[2])) tmp[tmp>1]=1 plt.imshow(tmp) plt.axis('off') ###Output _____no_output_____
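###Markdown `model.evaluate` reports a single Dice value averaged over the test batch; a per-image breakdown can be more informative when a few slices fail badly. A minimal numpy sketch, assuming `pred_p` and `test_labels` as computed above; binarising at 0.5 is an illustrative choice, whereas the Keras metric above works on the soft predictions. ###Code
import numpy as np

def dice_per_image(y_true, y_pred, threshold=0.5, smooth=1.0):
    scores = []
    for t, p in zip(y_true, y_pred):
        # Binarise both masks, flatten, and compute the smoothed Dice overlap.
        t = (t.ravel() > threshold).astype(np.float32)
        p = (p.ravel() > threshold).astype(np.float32)
        scores.append((2.0 * (t * p).sum() + smooth) / (t.sum() + p.sum() + smooth))
    return np.array(scores)

scores = dice_per_image(test_labels, pred_p)
print(scores.mean(), scores.min(), scores.max())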
notebooks/Training_example.ipynb
###Markdown Example showing how to train the CNN. Training without a GPU takes a very long time ###Code %matplotlib inline import numpy as np import time import os import sys import random import gc import matplotlib.pyplot as plt from deepmass import map_functions as mf from deepmass import lens_data as ld from deepmass import wiener from deepmass import cnn_keras as cnn ###Output _____no_output_____ ###Markdown This demonstration uses the validation data as training data (the separate full training data cannot fit on the git repository) ###Code map_size = 256 n_test = int(1000) n_epoch = 20 batch_size = 32 learning_rate = 1e-5 # make SV mask mask = np.float32(np.real(np.where(np.load('../picola_training/Ncov.npy') > 1.0, 0.0, 1.0))) _ = plt.imshow(mask, origin='lower', clim=(0,1)), plt.colorbar() wiener_array = np.load('../picola_training/validation_data/test_array_wiener.npy') gc.collect() clean_array = np.load('../picola_training/validation_data/test_array_clean.npy') gc.collect() train_array_noisy = wiener_array[n_test:] train_array_clean = clean_array[n_test:] test_array_noisy = wiener_array[:n_test] test_array_clean = clean_array[:n_test] gc.collect() train_gen = cnn.BatchGenerator(train_array_noisy, train_array_clean, gen_batch_size=batch_size) test_gen = cnn.BatchGenerator(test_array_noisy, test_array_clean, gen_batch_size=batch_size) ###Output _____no_output_____ ###Markdown Load and train model ###Code cnn_instance = cnn.UnetlikeBaseline(map_size=map_size, learning_rate=learning_rate) cnn_model = cnn_instance.model() history = cnn_model.fit_generator(generator=train_gen, epochs=n_epoch, steps_per_epoch=np.ceil(train_array_noisy.shape[0] / int(batch_size)), validation_data=test_gen, validation_steps=np.ceil(test_array_noisy.shape[0] / int(batch_size))) gc.collect() _ = plt.plot(np.arange(n_epoch)+1., history.history['loss'], label = 'loss', marker = 'o') _ = plt.plot(np.arange(n_epoch)+1., history.history['val_loss'], label = 'val loss', marker = 'x') _ = plt.legend() ###Output _____no_output_____ ###Markdown Apply model ###Code test_output = cnn_model.predict(test_array_noisy) print('Result MSE = ' + str(mf.mean_square_error(test_array_clean.flatten(), test_output.flatten()))) xticks=[None,'65°','75°','85°'] yticks=[] _ = plt.figure(figsize =(15,4.5)) _ = plt.subplot(1,3,1), plt.title(r'${\rm Truth\ (Target)}$', fontsize=16) _ = plt.imshow(np.where(mask!=0., (test_array_clean[0,:,:,0] -0.5)/3, np.nan), origin='lower', cmap='inferno', clim = (-0.025,0.025)) plt.xlabel(r'${\rm RA}$') plt.ylabel(r'${\rm DEC}$', labelpad = 20.) _ = plt.subplot(1,3,2), plt.title(r'${\rm Wiener\ filter}$', fontsize=16) _ = plt.imshow(np.where(mask!=0., (test_array_noisy[0,:,:,0] -0.5)/3, np.nan), origin='lower', cmap='inferno', clim = (-0.025,0.025)) plt.xlabel(r'${\rm RA}$') _ = plt.subplot(1,3,3), plt.title(r'${\rm DeepMass}$', fontsize=16) _ = plt.imshow(np.where(mask!=0., (test_output[0,:,:,0] -0.5)/3, np.nan), origin='lower', cmap='inferno', clim = (-0.025,0.025)) plt.xlabel(r'${\rm RA}$') plt.subplots_adjust(wspace=-0.3) ###Output _____no_output_____
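###Markdown `EarlyStopping` and `ModelCheckpoint` are imported at the top of this notebook but never attached to the training run; the sketch below shows one way they could be wired in. The monitored quantity, patience and checkpoint file name are illustrative assumptions. ###Code
callbacks = [
    EarlyStopping(monitor='val_loss', patience=5, verbose=1),
    ModelCheckpoint('cnn_best.h5', monitor='val_loss', save_best_only=True, verbose=1),
]

history = cnn_model.fit_generator(generator=train_gen,
                                  epochs=n_epoch,
                                  steps_per_epoch=np.ceil(train_array_noisy.shape[0] / int(batch_size)),
                                  validation_data=test_gen,
                                  validation_steps=np.ceil(test_array_noisy.shape[0] / int(batch_size)),
                                  callbacks=callbacks)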
Lesson_One/Python_Lesson_One.ipynb
###Markdown 第一部分 数据类型 1. 基本类型:数字、字符串、布尔 1.1 数字类型· int 整型 整数 integer ###Code 1 2 3 ###Output _____no_output_____ ###Markdown · float 浮点型 带小数的数字 ###Code 1.30058795316 + 1 ###Output _____no_output_____ ###Markdown · complex 复数 a+bj ###Code 1+3.5j ###Output _____no_output_____ ###Markdown 1.2 字符串类型· str 字符串 视作文本· 组成:由数字、字幕、空格,其他字符等组合而成· 表达:用""或'' 1.3 布尔类型· bool 布尔类型 主要用于逻辑运算 ###Code True False ###Output _____no_output_____ ###Markdown 上述类型均可定义单个数据,但如果我们有一组数据,该如何表示? 2. 组合类型:列表,元组,字典,集合2.1 列表· list 列表 序列类型:数据有位置顺序· 表示方式:[data1, data2, ...] ###Code a = [["Tom", 21, "Male", "New York", False], ["Kevin", 21, "Male", "New York", False]] print(a) a[0] = "Cat" print(a) ###Output [['Tom', 21, 'Male', 'New York', False], ['Kevin', 21, 'Male', 'New York', False]] ['Cat', ['Kevin', 21, 'Male', 'New York', False]] ###Markdown 2.2 元组· tuple 元组 序列类型· 表示方式:(data1, data2, ...)· 元素不支持修改——不可变的列表 ###Code a = ("Tom", 21, "Male", "New York") print(a) a[0] = "Cat" print(a) ###Output ('Tom', 21, 'Male', 'New York') ###Markdown 2.3 字典· dict 字典 映射类型:通过“键”-“值”的映射实现数据存储和查找· 表示方式:{key1:value1, key2:value2, ...} ###Code {"Tom":15, "Cathey":17} ###Output _____no_output_____ ###Markdown 2.4 集合· set 集合 一系列互不相等元素的集合,无序的· 表示方式: {data1, data2, ...} ###Code a = {1, 5, 7, 8, 8, 7, 10} print(a) a[1] ###Output {1, 5, 7, 8, 10} ###Markdown 在程序中,我们如何引用这些数据呢?· 非常通俗的处理办法:赋值给一个变量 第二部分 变量 1. 变量的概念· 量 实实在在的对象:如数据、文件· 变 可变性:增、删、查、改· 变量定义二要素 变量名、赋值 ###Code soloman = 1 soloman = 2 ###Output _____no_output_____ ###Markdown 2. 变量的命名2.1 哪些可以用来做变量名?· 大写字母、小写字母、数字、下划线、汉字及其组合· 严格区分大小写 ###Code eyes age student171_001 student171_002 ###Output _____no_output_____ ###Markdown 2.2 哪些情况不被允许?· 首字符不允许为数字· 变量名中间不能有空格· 不能与33个Python保留字相同import keywordkeyword.kwlist ###Code import keyword keyword.kwlist 13abc _abc_ fds@sdf ###Output _____no_output_____ ###Markdown 2.3 变量名定义技巧· 变量名尽可能有实际意义,表征数据的某种特性· 下划线(推荐:变量和函数名),当变量名由多个单词组成,用_连接· 驼峰体(推荐:类名),当变量名由多个单词组成,单词首字母大写· 尽量避免用中文和拼音做变量名· 特殊的变量:常量(如π,e),变量名所有字母均为大写 ###Code a = "Tom" student_name = "Tom" student_gender = 1 PI = 3.141592653 E = ###Output _____no_output_____ ###Markdown 3. 变量的赋值3.1 一般赋值 通过等号自右向左进行赋值 ###Code a = 1 ###Output _____no_output_____ ###Markdown 3.2 增量赋值 ###Code a = a + 1 print(a) a += 1 ###Output 2 ###Markdown 3.3 打包赋值 ###Code x, y = 1, 2 print(x) print(y) ###Output 1 2 ###Markdown 第三部分 控制流程 1. 顺序流程 自上向下依次执行例:实现1到5的整数求和 ###Code a = 1 a = a + 2 a = a + 3 a = a + 4 a = a + 5 print(a) ###Output 15 ###Markdown 2. 循环流程——遍历循环(for)主要形式:· for 元素 in 可迭代对象: 执行语句 执行过程:· 从可迭代对象中,依次取出每一个元素,并进行相应的操作例:实现1到5的整数求和 ###Code a = [1, 2, 3, 4, 5] range(5) ###Output _____no_output_____ ###Markdown 3. 循环流程——无限循环(while)主要形式:· while 判断条件: 条件为真,执行语句 条件为假,while循环结束 例:实现1到5的整数求和 ###Code i = 1 sum = 0 while i <= 5: sum += i i += 1 print(sum) while True: print('a') ###Output 15 ###Markdown 4. 分支流程(if)最简单的形式:· if 判断条件:· 条件为真,执行语句· else:· 条件为假,执行语句 ###Code if i <= 5: i += 1 else: return asdf ###Output _____no_output_____ ###Markdown 有了数据和变量,以及控制流程的中间过程后,我们回过头来考虑程序的输入和输出 第四部分 输入输出 4.1. 数据从哪里来?外部文件导入· 从本地硬盘、网络端读入(该部分在下节课讲解)程序中定义动态交互输入 input· 在程序运行的过程中进行输入 ###Code name = input() print(name) ###Output Adam ###Markdown ·eval() 去掉引号 4.2 数据到哪里去?存储到本地硬盘或网络端(该部分在下节课讲解)打印输出 print· 直接打印数据 ###Code print(123) ###Output 123 ###Markdown · 打印变量 ###Code print(name) ###Output Adam ###Markdown · print 默认换行 ###Code print(123) print(name) ###Output 123 Adam ###Markdown · 如果不想换行怎么办? 
换行控制 end= ###Code print(123, end='') print(name) ###Output 123Adam ###Markdown · 有时候,我们需要一些复杂的输出,比如几个变量一起组合输出 ###Code PI = 3.1415926 E = 2.71828 print(PI, E) ###Output 3.1415926 2.71828 ###Markdown 3. 格式化输出方法 format· 基本格式: "字符{0} 字符{1}字符".format(v0, v1) ###Code 'PI = {0}, E = {1}'.format(PI, E) ###Output _____no_output_____ ###Markdown · 再进一步 修饰性输出填充输出 ###Code # ____3.1415926____ 进行填充 print("{0:_^20}".format(PI)) print("{0:*<30}".format(PI)) ###Output 3.1415926********************* ###Markdown 数字千分位分隔符, 如显示1,000,000 ###Code print("{0:,}".format(1000000)) print("{0:&>20,}".format(1000000)) ###Output &&&&&&&&&&&1,000,000 ###Markdown 浮点数简化输出· 保留2位小数 ###Code print("{0:.2f}".format(PI)) ###Output 3.14 ###Markdown · 按百分数输出 ###Code print("{0:.1%}".format(PI)) ###Output 314.2% ###Markdown · 科学记数法输出 ###Code print("{0:.1e}".format(PI)) ###Output 3.1e+00 ###Markdown 整数的进制转换输出· 十进制转二进制、unicode、八进制、十六进制 ###Code "二进制{0:b}, Unicode码{0:c}, 八进制{0:o}, 十六进制{0:x}".format(10) ###Output _____no_output_____ ###Markdown 第五部分 程序格式 1. 行最大长度所有行限制的最大字符数为792. 缩进· 用缩进来表示语句间的逻辑· 在for while if def class等:之后下一行开始缩进,表示后续代码与前句之间的从属关系· 缩进量:4字符 3. 使用空格· 二元运算符两边加一个空格 ###Code a = 1 + 2 a=1+2 ###Output _____no_output_____ ###Markdown · 使用不同优先级的运算符,考虑在最低优先级的运算符周围添加空格 · 在逗号后使用空格 · 不要使用一个以上的空格 4. 避免使用空格· 在制定关键字参数或者默认参数值的时候,不要在附近使用空格 ###Code def fun(n=1, m=2): print(n, m) ###Output _____no_output_____ ###Markdown *小结*a. 以上属于PEP8格式指南的部分内容,养成良好的编码习惯利人利己b. 格式约定的目的:· 使大量Python代码风格一致· 提升代码可读性c. 尽信书不如无书,不应死板教条地执行格式规范· 项目规范优先 5. 注释· 单行注释 使用 注释内容 ###Code def calculate(a,b):# 这是一个计算年龄的函数。 None ###Output _____no_output_____ ###Markdown · 多行注释 使用"""注释内容,可分行""" ###Code ''' 版权声明 作者 最后修改日期 …… ''' ###Output _____no_output_____
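###Markdown A short example consolidating the `format()` options introduced in this lesson (fill and alignment, thousands separators, precision, percentages and scientific notation); the values are illustrative. ###Code
price = 1234567.891
ratio = 0.07543

print("{0:*>15,.2f}".format(price))   # fill, width, thousands separator and precision combined
print("{0:.1%}".format(ratio))        # percentage with one decimal place: 7.5%
print("{0:^12.3e}".format(price))     # scientific notation, centred in a 12-character field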
form-compact-ratings-and-popular-movies.ipynb
###Markdown Load data from previous processing ###Code data_directory = "../../Downloads/ml-latest/" movieDatasetFile_filtered = 'movie_full_dataset_filtered.csv' ratingDatasetFile_filtered = 'rating_full_dataset_normalized_filtered.csv' df_ratingDataset_filtered = pd.read_csv(data_directory + ratingDatasetFile_filtered) df_movieDataset_filtered = pd.read_csv(data_directory + movieDatasetFile_filtered) print(len(df_ratingDataset_filtered)) print(len(df_movieDataset_filtered)) moviesWithRatings = list(np.unique(df_ratingDataset_filtered['movie_id'])) raterIds = np.unique(df_ratingDataset_filtered['rater_id']) print(len(raterIds)) ###Output 282695 ###Markdown Distributions of number of ratings for each movie/rater ###Code numRatingsPerRater = {rId : 0 for rId in raterIds} numRatingsPerMovie = {mId : 0 for mId in moviesWithRatings} for i in range(len(df_ratingDataset_filtered)): row = df_ratingDataset_filtered.iloc[i] numRatingsPerRater[row['rater_id']] += 1 numRatingsPerMovie[row['movie_id']] += 1 bins = list(np.linspace(0, 10000, 20)) plt.hist(numRatingsPerMovie.values(), bins = bins) bins = list(np.linspace(0, 250, 26)) + list(np.linspace(250, 1000, 10)) + list(np.linspace(1000, 1800, 4)) plt.hist(numRatingsPerRater.values(), bins = bins) print(np.mean(list(numRatingsPerMovie.values())), np.median(list(numRatingsPerMovie.values()))) print(np.max(list(numRatingsPerMovie.values()))) print(np.mean(list(numRatingsPerRater.values()))) print(np.median(list(numRatingsPerRater.values()))) ###Output 90.03191071649658 28.0 ###Markdown Sort out raters who have fewer ratings and mainly rate over-represented movies (with > 5000 ratings) ###Code moviesOver5000 = [i for i in numRatingsPerMovie.keys() if numRatingsPerMovie[i] > 5000] ###Output _____no_output_____ ###Markdown filter raters with <= 10 ratings ###Code ratersTen = [i for i in numRatingsPerRater.keys() if numRatingsPerRater[i] < 10] print(len(ratersTen)) threshold = 0.5 count = 0 total = len(ratersTen) ratersRemove = [] for r in ratersTen: df1 = df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'] == r] df2 = df1[~df1['movie_id'].isin(moviesOver5000)] if len(df2) / len(df1) < threshold: ratersRemove.append(r) count += 1 if (count % 1000 == 0): print(f'{count} processed {total - count} left') print(len(ratersRemove)) len(df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'].isin(ratersTen)]) len(df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'].isin(ratersRemove)]) ratersExactTen = [i for i in numRatingsPerRater.keys() if numRatingsPerRater[i] == 10] print(len(ratersExactTen)) threshold = 0.5 count = 0 total = len(ratersExactTen) ratersRemove0 = [] for r in ratersExactTen: df1 = df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'] == r] df2 = df1[~df1['movie_id'].isin(moviesOver5000)] if len(df2) / len(df1) < threshold: ratersRemove0.append(r) count += 1 if (count % 1000 == 0): print(f'{count} processed {total - count} left') print(len(ratersRemove0)) ###Output 1000 processed 4736 left 2000 processed 3736 left 3000 processed 2736 left 4000 processed 1736 left 5000 processed 736 left 4893 ###Markdown filter raters with > 10 and < 20 ratings ###Code ratersTenTwenty = [i for i in numRatingsPerRater.keys() if numRatingsPerRater[i] > 10 and numRatingsPerRater[i] < 20] print(len(ratersTenTwenty)) threshold = 0.5 count = 0 total = len(ratersTenTwenty) ratersRemove2 = [] for r in ratersTenTwenty: df1 = df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'] == r] df2 = 
df1[~df1['movie_id'].isin(moviesOver5000)] if len(df2) / len(df1) < threshold: ratersRemove2.append(r) count += 1 if (count % 5000 == 0): print(f'{count} processed {total - count} left') print(len(ratersRemove2)) len(df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'].isin(ratersTenTwenty)]) len(df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'].isin(ratersRemove2)]) ###Output _____no_output_____ ###Markdown filter raters with >= 20 and < 40 ratings ###Code ratersTwentyForty = [i for i in numRatingsPerRater.keys() if numRatingsPerRater[i] >= 20 and numRatingsPerRater[i] < 40] print(len(ratersTwentyForty)) threshold = 0.5 count = 0 total = len(ratersTwentyForty) ratersRemove3 = [] for r in ratersTwentyForty: df1 = df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'] == r] df2 = df1[~df1['movie_id'].isin(moviesOver5000)] if len(df2) / len(df1) < threshold: ratersRemove3.append(r) count += 1 if (count % 5000 == 0): print(f'{count} processed {total - count} left') print(len(ratersRemove3)) len(df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'].isin(ratersTwentyForty)]) len(df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'].isin(ratersRemove3)]) ###Output _____no_output_____ ###Markdown So far, 2 million ratings will be removed from 25 million ratings ###Code ratersRemoveAll = ratersRemove0 + ratersRemove + ratersRemove2 + ratersRemove3 print(len(ratersRemoveAll), len(df_ratingDataset_filtered[df_ratingDataset_filtered['rater_id'].isin(ratersRemoveAll)])) df_ratingDataset_filtered_further = df_ratingDataset_filtered[~df_ratingDataset_filtered['rater_id'].isin(ratersRemoveAll)] print(len(df_ratingDataset_filtered_further)) ###Output 23061535 ###Markdown Down-sample movies with more ratings to formulate a compact dataset At this point, we have about 23 million ratings left. 
Map rating counts in the 1000 ~ 51000 range to the 1000 ~ 2000 range (capped at 2000): f(x) = 1000 + (x - 1000) // 50. If we down-sample moviesOver1000 using this map, we end up with roughly 3 to 6 million of the approximately 23 million remaining ratings.
###Code
def getSampleNumber(num):
    return 1000 + min((num - 1000) // 50, 1000)

moviesOver1000 = [i for i in numRatingsPerMovie.keys() if numRatingsPerMovie[i] > 1000]
print(len(moviesOver1000))
ratingDatasetFile_compact = "rating_compact_dataset.csv"
m = moviesOver1000[0]
n = getSampleNumber(numRatingsPerMovie[m])
df1 = df_ratingDataset_filtered_further[df_ratingDataset_filtered_further['movie_id'] == m].sample(n)
df1.to_csv(data_directory + ratingDatasetFile_compact, index=False)
count = len(df1)
c = 1
total = len(moviesOver1000)
for m in moviesOver1000[1:]:
    df0 = df_ratingDataset_filtered_further[df_ratingDataset_filtered_further['movie_id'] == m]
    if len(df0) > 1000:
        n = getSampleNumber(numRatingsPerMovie[m])
        df1 = df0.sample(n)
        df1.to_csv(data_directory + ratingDatasetFile_compact, mode='a', header=False, index=False)
        count += len(df1)
    else:
        df0.to_csv(data_directory + ratingDatasetFile_compact, mode='a', header=False, index=False)
        count += len(df0)
    c += 1
    if c % 100 == 0:
        print(f"{c} processed {total - c} left")
print(count)
df = pd.read_csv(data_directory + ratingDatasetFile_compact)
print(len(df))
df.head()
###Output
4039311
###Markdown
Create a compact movie dataset, restricted to the popular movies (those with more than 5000 ratings), to draw movies from
###Code
print(len(df_movieDataset_filtered))
df_movieDataset_popular = df_movieDataset_filtered[df_movieDataset_filtered['movie_id'].isin(moviesOver5000)]
print(len(df_movieDataset_popular))
movieDatasetFile_popular = "movie_popular_dataset.csv"
df_movieDataset_popular.to_csv(data_directory + movieDatasetFile_popular, index=False)
###Output
_____no_output_____
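###Markdown
The loop above appends the compact dataset to a CSV one movie at a time, which is easy to follow but slow for tens of thousands of movies. Below is an optional, hedged sketch of the same down-sampling done with a single groupby. It assumes the filtered ratings fit in memory (they do here) and adds a small guard in case the earlier rater filtering pushed a movie's rating count below its sampling target; the result should match rating_compact_dataset.csv up to sampling randomness. The function and variable names introduced here (downsample_movie, df_compact_alt) are illustrative, not part of the original pipeline.
###Code
# Sketch: vectorised down-sampling with groupby.
# getSampleNumber, numRatingsPerMovie, moviesOver1000 and
# df_ratingDataset_filtered_further are the objects defined above.
def downsample_movie(group):
    if len(group) > 1000:
        # guard: never request more rows than the group actually has
        n = min(getSampleNumber(numRatingsPerMovie[group.name]), len(group))
        return group.sample(n)
    return group

df_compact_alt = (
    df_ratingDataset_filtered_further[
        df_ratingDataset_filtered_further['movie_id'].isin(moviesOver1000)
    ]
    .groupby('movie_id', group_keys=False)
    .apply(downsample_movie)
)
print(len(df_compact_alt))
###Output
_____no_output_____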
AIOpSchool/KIKS/MachineLearningClassificatie/0100_IrisClassificatie.ipynb
###Markdown
CLASSIFICATION OF THE IRIS DATASET
In this notebook you will see how a machine learning system manages to separate two classes of points linearly. The Perceptron algorithm starts from a randomly chosen straight line. The algorithm adjusts the coefficients in the equation of that line step by step, based on labelled data, until a line is obtained that separates the two classes.
The Iris dataset was published in 1936 by the British statistician Ronald Fisher in 'The use of multiple measurements in taxonomic problems' [1][2]. The dataset concerns three species of irises (*Iris setosa*, *Iris virginica* and *Iris versicolor*). Fisher was able to distinguish the species from one another based on four features: the length and the width of the sepals and the petals.
Iris setosa [3], Iris versicolor [4], Iris virginica [5].
Figure 1: Iris setosa by Radomil Binek [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons; Iris versicolor, no machine-readable author provided, Dlanglois assumed (based on copyright claims), CC BY-SA 3.0, via Wikimedia Commons; Iris virginica by Frank Mayfield [CC BY-SA 2.0 (https://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons.
The Iris dataset is a *multivariate dataset*, i.e. a dataset with several variables, which contains 50 samples of each species. For each sample the length and the width of a petal and a sepal were measured in centimetres.
Figure 2: Petal and sepal.
Importing the required modules
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib import animation     # for the animation
from IPython.display import HTML     # for the animation
###Output
_____no_output_____
###Markdown
1. Reading in the data
Read in the Iris dataset with the `pandas` module.
###Code
# read in the dataset
# the table to be read has a header row
iris = pd.read_csv("data/iris.dat", header="infer")
###Output
_____no_output_____
###Markdown
2. Displaying the loaded data
Inspect the data. Both the four features and the name of the species are shown. The number of samples is easy to read off.
How many **variables** does this *multivariate dataset* have?
Answer: the dataset has ... variables.
###Code
# display the dataset as a table
iris
###Output
_____no_output_____
###Markdown
This table corresponds to a matrix with 150 rows and 5 columns: 150 samples, 4 features (x1, x2, x3, x4) and 1 label (y).
The features:
- first column: sepal length
- second column: sepal width
- third column: petal length
- fourth column: petal width
The label:
- last column: the name of the species
For the machine learning system the features will serve as input and the labels as output. It is possible to show only the beginning or only the last part of the table.
###Code
# first part of the table
iris.head()
# last part of the table
iris.tail()
###Output
_____no_output_____
###Markdown
It is also possible to show a specific part of the table.
###Code
# show the table from row 46 up to and including row 53
iris[46:54]
###Output
_____no_output_____
###Markdown
Note that [46:54] stands for the *half-open interval* [46:54[.
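A quick check of this half-open behaviour (this small cell is an illustration added here, not part of the original exercise):
###Code
# rows 46 up to but not including 54 -> 8 rows
print(len(iris[46:54]))        # 8
print(list(range(46, 54)))     # [46, 47, ..., 53]
###Output
_____no_output_____
###Markdown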
In this notebook you will work with this last subtable.
3. Investigation: can two species of irises be distinguished based on two features?
3.1 Consider four samples of each of two iris species, Iris setosa and Iris versicolor
Figure 3: Iris setosa and Iris versicolor.
The subtable contains four samples of each. The first four columns of the table each contain a feature, the last column contains the label. For the machine learning system these features are called $x_{i}$ and the label $y$.
###Code
x1 = iris["lengte kelkblad"]      # feature: sepal length
x2 = iris["breedte kelkblad"]     # feature: sepal width
x3 = iris["lengte kroonblad"]     # feature: petal length
x4 = iris["breedte kroonblad"]    # feature: petal width
y = iris["soort Iris"]            # label: species
print(x1)
print(y)
###Output
_____no_output_____
###Markdown
3.2 Preparing the data
###Code
# convert to NumPy arrays
x1 = np.array(x1)
x2 = np.array(x2)
x3 = np.array(x3)
x4 = np.array(x4)
###Output
_____no_output_____
###Markdown
You only need to work with two features: the length of the petal and of the sepal. And you only need the 8 samples of the subtable.
###Code
# select sepal length and petal length, these are in the first and third column
# select four samples of setosa and four samples of versicolor
x1 = x1[46:54]
x3 = x3[46:54]
y = y[46:54]
###Output
_____no_output_____
###Markdown
3.3 Standardising the data
To standardise, the features are converted to their Z-scores. For more explanation of why standardising matters, we refer to the notebook 'Standaardiseren'.
###Code
x1 = (x1-np.mean(x1))/np.std(x1)
x3 = (x3-np.mean(x3))/np.std(x3)
print(x1)
print(x3)
# put the standardised features back into a matrix
# this matrix X contains the features that the machine learning system will use
X = np.stack((x1, x3), axis=1)    # axis 1 means that x1 and x3 are treated as columns (with axis 0 they would be rows)
print(X)
print(X.shape)
print(X.shape[1])
###Output
_____no_output_____
###Markdown
3.4 Displaying the data in a scatter plot
###Code
# petal length versus sepal length
# sepal length on the x-axis, petal length on the y-axis
plt.scatter(x1, x3, color="black", marker="o")
plt.title("Iris")
plt.xlabel("lengte kelkblad (cm)")      # xlabel labels the x1-axis
plt.ylabel("lengte bloemblad (cm)")     # ylabel labels the x3-axis
plt.show()
###Output
_____no_output_____
###Markdown
Two groups can be distinguished. Moreover, these groups are **linearly separable**: they can be separated by a straight line. In the plot it is not clear which data point belongs to which iris species, since all points are displayed in the same way.
3.5 Displaying the data in the scatter plot as two classes
The scatter plot is adapted so that the two iris species are each shown with a different symbol.
###Code
# petal length versus sepal length
plt.scatter(x1[:4], x3[:4], color="green", marker="o", label="setosa")      # setosa are the first 4
plt.scatter(x1[4:], x3[4:], color="blue", marker="x", label="versicolor")   # versicolor are the next 4
plt.title("Iris")
plt.xlabel("lengte kelkblad (cm)")
plt.ylabel("lengte bloemblad (cm)")
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
4. Classification with the Perceptron
4.1 Annotated data
The AI system will learn from the 8 labelled examples. You already called the column with the labels $y$. The label, however, is not a quantitative (numerical) variable. There are two species of irises. If you let the species *setosa* correspond to class $0$ and the species *versicolor* to class $1$, then you have made the **label** $y$ **numerical**.
###Code
# make the labels numerical, setosa: 0, versicolor: 1
y = np.where(y == "Iris-setosa", 0, 1)     # if setosa, then 0, otherwise 1
print(y)
###Output
_____no_output_____
###Markdown
The features are stored in a matrix X and the labels in a vector y. The i-th row of X corresponds to the two features of a particular sample, and the label of that sample is at the i-th position in y.
4.2 The Perceptron
The Perceptron is a neural network with two layers: an input layer and an output layer. The neurons of the input layer are connected to the neuron of the output layer. The Perceptron has an algorithm that allows it to learn. It is trained with labelled examples: a number of input points $X_{i}$, each with a corresponding label $y_{i}$. Between the neurons of the input and output layer there are connections with a certain weight. The Perceptron learns: based on the labelled examples the weights are gradually adjusted; the adjustment is done according to the Perceptron algorithm.
Figure 4: The Perceptron algorithm.
Figure 5: Schematic representation of the Perceptron.
To find a line that separates the two iris species, the algorithm starts from a **randomly chosen line**. This is done by choosing the coefficients in the equation of this line at random. The two sides of this *decision boundary* determine two different *classes*. The system is *trained* with the training set, including the corresponding labels: **for each point of the training set it is checked whether the point lies on the correct side of the decision boundary.** For a point that does not lie on the correct side of the decision boundary, the coefficients in the equation of the line are adjusted. The full training set is run through a number of times. Each such pass is called an *epoch*. The system *learns* during these *passes ('epochs')*.
If two classes are linearly separable, a line can be found that separates them. The equation of the decision boundary can be written (in the form $ax+by+c=0$) in such a way that $ax_{1}+by_{1}+c \geq 0$ for every point $(x_{1}, y_{1})$ in one class and $ax_{1}+by_{1}+c < 0$ for every point $(x_{1}, y_{1})$ in the other class. As long as this is not satisfied, the coefficients have to be adjusted. The training set with its labels is run through a few times; for each point the coefficients are adjusted if necessary.
**The weights of the Perceptron are the coefficients in the equation of the separating line.** So here: the equation of the decision boundary is $ax_{1}+bx_{3}+c=0$, chosen such that $ax_{1}+bx_{3}+c \geq 0$ for every point $(x_{1}, x_{3})$ in one class and $ax_{1}+bx_{3}+c < 0$ for every point $(x_{1}, x_{3})$ in the other class. So $a$ is the coefficient of the variable $x_{1}$, $b$ the coefficient of $x_{3}$, and $c$ is a constant. In the code cell that follows, $a$ is represented by coeff_x1, $b$ by coeff_x3 and $c$ by cte. For a line $ax+by+c=0$ that is not vertical (so $b \neq 0$), $y = -\frac{a}{b} x - \frac{c}{b}$.
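Concretely, this is the update rule that the Perceptron class in the next cell applies to every training point $(x_{1}, x_{3})$ with label $y$, predicted class $\hat{y}$ and learning rate $\eta$:
$$\hat{y} = \begin{cases} 1 & \text{if } a x_{1} + b x_{3} + c \geq 0\\ 0 & \text{otherwise,}\end{cases} \qquad \begin{aligned} a &\leftarrow a + \eta\,(y-\hat{y})\,x_{1}\\ b &\leftarrow b + \eta\,(y-\hat{y})\,x_{3}\\ c &\leftarrow c + \eta\,(y-\hat{y}) \end{aligned}$$
If the predicted class is correct, $y-\hat{y}=0$ and nothing changes; otherwise the line is shifted towards classifying the misclassified point correctly.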
###Code font = {"family": "serif", "color": "black", "weight": "normal", "size": 16, } def grafiek(coeff_x1, coeff_x3, cte): """Plot scheidingsrechte ('decision boundary') en geeft vergelijking ervan.""" # lengte kroonlad t.o.v. lengte kelkblad plt.scatter(x1[:4], x3[:4], color="green", marker="o", label="setosa") # setosa zijn eerste 4 (label 0) plt.scatter(x1[4:], x3[4:], color="blue", marker="x", label="versicolor") # versicolor zijn de volgende 4 (label 1) x = np.linspace(-1.5, 1.5, 10) y_r = -coeff_x1/coeff_x3 * x - cte/coeff_x3 print("De grens is een rechte met vgl.", coeff_x1, "* x1 +", coeff_x3, "* x3 +", cte, "= 0") plt.plot(x, y_r, color="black") plt.title("Scheiden vqn twee soorten irissen", fontdict=font) plt.xlabel("lengte kelkblad (cm)", fontdict=font) plt.ylabel("lengte bloemblad (cm)", fontdict=font) plt.legend(loc="lower right") plt.show() class Perceptron(object): """Perceptron classifier.""" def __init__(self, eta=0.01, n_iter=50, random_state=1): """self heeft drie parameters: leersnelheid, aantal pogingen, willekeurigheid.""" self.eta = eta self.n_iter = n_iter self.random_state = random_state def fit(self, X, y): """Fit training data.""" rgen = np.random.RandomState(self.random_state) # kolommatrix van de gewichten ('weights') # willekeurig gegenereerd uit normale verdeling met gemiddelde 0 en standaardafwijking 0.01 # aantal gewichten is aantal kenmerken in X plus 1 (+1 voor de bias) self.w_ = rgen.normal(loc=0.0, scale=0.01, size=X.shape[1]+1) # gewichtenmatrix die 3 gewichten bevat print("Initiële willekeurige gewichten:", self.w_) self.errors_ = [] # foutenlijst # plot grafiek met scheidingsrechte # grafiek(self.w_[0], self.w_[1], self.w_[2]) rechten = np.array([self.w_]) print(rechten) # gewichten punt per punt aanpassen, gebaseerd op feedback van de verschillende pogingen for _ in range(self.n_iter): print("epoch =", _) errors = 0 teller = 0 for x, label in zip(X, y): # x is datapunt (monster) uit matrix X, y overeenkomstig label print("teller =", teller) # tel punten, het zijn er acht print("punt:", x, "\tlabel:", label) gegiste_klasse = self.predict(x) print("gegiste klasse =", gegiste_klasse) # aanpassing nagaan voor dit punt update = self.eta * (label - gegiste_klasse) # als update = 0, juiste klasse, geen aanpassing nodig print("update=", update) # grafiek en gewichten eventueel aanpassen na dit punt if update !=0: self.w_[0:2] += update *x self.w_[2] += update errors += update print("gewichten =", self.w_) # grafiek(self.w_[0], self.w_[1], self.w_[2]) # voorlopige 'decision boundary' rechten = np.append(rechten, [self.w_], axis =0) print(rechten) teller += 1 self.errors_.append(errors) # na alle punten, totale fout toevoegen aan foutenlijst print("foutenlijst =", self.errors_) return self, rechten # geeft gewichtenmatrix en errorlijst terug def net_input(self, x): # punt invullen in de voorlopige scheidingsrechte """Berekenen van z = lineaire combinatie van de inputs inclusief bias en de weights voor elke gegeven punt.""" return np.dot(x, self.w_[0:2]) + self.w_[2] def predict(self, x): """Gist klasse.""" print("punt ingevuld in vergelijking rechte:", self.net_input(x)) klasse = np.where(self.net_input(x) >=0, 1, 0) return klasse ###Output _____no_output_____ ###Markdown Opdracht 4.2.1Ga op zoek naar het Perceptron-algoritme in de code-cel hierboven. Gevonden? 
###Code
# Perceptron, learning rate 0.001 and 12 epochs
ppn = Perceptron(eta=0.001, n_iter=12)
gewichtenlijst = ppn.fit(X,y)[1]
print("Gewichtenlijst =", gewichtenlijst)
###Output
_____no_output_____
###Markdown
4.3 Animation
Now follows an **animation** in which you can see how the Perceptron learns. First you see a randomly chosen line. After that, this line is adjusted step by step until the two classes are separated from each other.
###Code
# animation
xcoord = np.linspace(-1.5, 1.5, 10)
ycoord = []
for w in gewichtenlijst:
    y_r = -w[0]/w[1] * xcoord - w[2]/w[1]
    ycoord.append(y_r)
ycoord = np.array(ycoord)

fig, ax = plt.subplots()
line, = ax.plot(xcoord, ycoord[0], color="black")

plt.scatter(x1[:4], x3[:4], color="green", marker="o", label="setosa")      # setosa are the first 4 (label 0)
plt.scatter(x1[4:], x3[4:], color="blue", marker="x", label="versicolor")   # versicolor are the next 4 (label 1)
plt.title("Scheiden van twee soorten irissen", fontdict=font)
plt.xlabel("lengte kelkblad (cm)", fontdict=font)
plt.ylabel("lengte kroonblad (cm)", fontdict=font)
plt.legend(loc="lower right")
plt.savefig("eerstelijn.png", dpi=300)

def animate(i):
    line.set_ydata(ycoord[i])  # update the data
    return line,

ax.axis([-2,2,-5, 5])
plt.close()

ani = animation.FuncAnimation(
    fig, animate, interval=1000, blit=True, save_count=10, frames=len(ycoord))

HTML(ani.to_jshtml())
###Output
_____no_output_____
###Markdown
4.4 Experiment
Task 4.4.1
The learning rate or the number of epochs can be adjusted.
- Does it go faster with a smaller or a larger learning rate?
- Does it also work with fewer epochs?
The code has already been copied below. Adjust it as you like!
###Code
# Perceptron, learning rate 0.001 and 12 epochs
ppn = Perceptron(eta=0.001, n_iter=12)
gewichtenlijst = ppn.fit(X,y)[1]
print("Gewichtenlijst =", gewichtenlijst)

# animation
xcoord = np.linspace(-1.5, 1.5, 10)
ycoord = []
for w in gewichtenlijst:
    y_r = -w[0]/w[1] * xcoord - w[2]/w[1]
    ycoord.append(y_r)
ycoord = np.array(ycoord)

fig, ax = plt.subplots()
line, = ax.plot(xcoord, ycoord[0], color="black")

plt.scatter(x1[:4], x3[:4], color="green", marker="o", label="setosa")      # setosa are the first 4 (label 0)
plt.scatter(x1[4:], x3[4:], color="blue", marker="x", label="versicolor")   # versicolor are the next 4 (label 1)
plt.title("Scheiden van twee soorten irissen", fontdict=font)
plt.xlabel("lengte kelkblad (cm)", fontdict=font)
plt.ylabel("lengte kroonblad (cm)", fontdict=font)
plt.legend(loc="lower right")
plt.savefig("eerstelijn.png", dpi=300)

def animate(i):
    line.set_ydata(ycoord[i])  # update the data
    return line,

ax.axis([-2,2,-5, 5])
plt.close()

ani = animation.FuncAnimation(
    fig, animate, interval=1000, blit=True, save_count=10, frames=len(ycoord))

HTML(ani.to_jshtml())
###Output
_____no_output_____
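###Markdown
A small optional check (added here for illustration): you can inspect the result without the animation by drawing the final decision boundary with the grafiek function defined earlier, and the length of gewichtenlijst tells you how many corrections were needed, since the first row is the initial random line and each further row corresponds to one weight update.
###Code
# Plot the final decision boundary using the last learned weights;
# w[0] and w[1] are the coefficients of x1 and x3, w[2] is the constant term.
w = gewichtenlijst[-1]
grafiek(w[0], w[1], w[2])

# Number of weight updates: one row per correction, plus the initial random line.
print("Number of updates:", len(gewichtenlijst) - 1)
###Output
_____no_output_____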
nbs/P245068_OLX_Job_Recommendations_using_LightFM_SLIM_ALS_and_baseline_models.ipynb
###Markdown OLX Job Recommendations using LightFM, SLIM, ALS and baseline models Process flow Setup ###Code !pip install -q implicit !pip install -q lightfm !pip install -q -U kaggle !pip install --upgrade --force-reinstall --no-deps kaggle !mkdir ~/.kaggle !cp /content/drive/MyDrive/kaggle.json ~/.kaggle/ !chmod 600 ~/.kaggle/kaggle.json !kaggle datasets download -d olxdatascience/olx-jobs-interactions !unzip /content/olx-jobs-interactions.zip import sys import pandas as pd import matplotlib import matplotlib.pyplot as plt from datetime import datetime import random import os from pathlib import Path from collections import defaultdict import numpy as np from tqdm import tqdm from scipy import sparse import scipy.sparse as sparse from sklearn.preprocessing import normalize from sklearn.exceptions import ConvergenceWarning from sklearn.linear_model import ElasticNet from sklearn.utils._testing import ignore_warnings import implicit from lightfm import LightFM import tracemalloc from datetime import datetime from time import time from functools import partial import multiprocessing from multiprocessing.pool import ThreadPool ###Output _____no_output_____ ###Markdown Data Loading and Sampling ###Code df = pd.read_csv('interactions.csv') df.head() df.info() df.user.astype('str').nunique() def get_interactions_subset( interactions, fraction_users, fraction_items, random_seed=10 ): """ Select subset from interactions based on fraction of users and items :param interactions: Original interactions :param fraction_users: Fraction of users :param fraction_items: Fraction of items :param random_seed: Random seed :return: Dataframe with subset of interactions """ def _get_subset_by_column(column, fraction): column_df = interactions[column].unique() subset = set(np.random.choice(column_df, int(len(column_df) * fraction))) return interactions[interactions[column].isin(subset)] np.random.seed(random_seed) if fraction_users < 1: interactions = _get_subset_by_column("user", fraction_users) if fraction_items < 1: interactions = _get_subset_by_column("item", fraction_items) return interactions df_subset_users = get_interactions_subset( df, fraction_users=0.1, fraction_items=1, random_seed=10 ) df_subset_users.info() df_subset_users.to_parquet('df_subset_users.parquet.snappy', compression='snappy') df_subset_items = get_interactions_subset( df, fraction_users=1, fraction_items=0.1, random_seed=10 ) df_subset_items.info() df_subset_items.to_parquet('df_subset_items.parquet.snappy', compression='snappy') ###Output _____no_output_____ ###Markdown Utils Data split ###Code def splitting_functions_factory(function_name): """Returns splitting function based on name""" if function_name == "by_time": return split_by_time def split_by_time(interactions, fraction_test, random_state=30): """ Splits interactions by time. Returns tuple of dataframes: train and test. """ np.random.seed(random_state) test_min_timestamp = np.percentile( interactions["timestamp"], 100 * (1 - fraction_test) ) train = interactions[interactions["timestamp"] < test_min_timestamp] test = interactions[interactions["timestamp"] >= test_min_timestamp] return train, test def filtering_restrict_to_train_users(train, test): """ Returns test DataFrame restricted to users from train set. """ train_users = set(train["user"]) return test[test["user"].isin(train_users)] def filtering_already_interacted_items(train, test): """ Filters out (user, item) pairs from the test set if the given user interacted with a given item in train set. 
""" columns = test.columns already_interacted_items = train[["user", "item"]].drop_duplicates() merged = pd.merge( test, already_interacted_items, on=["user", "item"], how="left", indicator=True ) test = merged[merged["_merge"] == "left_only"] return test[columns] def filtering_restrict_to_unique_user_item_pair(dataframe): """ Returns pd.DataFrame where each (user, item) pair appears only once. A list of corresponding events is stores instead of a single event. Returned timestamp is the timestamp of the first (user, item) interaction. """ return ( dataframe.groupby(["user", "item"]) .agg({"event": list, "timestamp": "min"}) .reset_index() ) def split( interactions, splitting_config=None, restrict_to_train_users=True, filter_out_already_interacted_items=True, restrict_train_to_unique_user_item_pairs=True, restrict_test_to_unique_user_item_pairs=True, replace_events_by_ones=True, ): """ Main function used for splitting the dataset into the train and test sets. Parameters ---------- interactions: pd.DataFrame Interactions dataframe splitting_config : dict, optional Dict with name and parameters passed to splitting function. Currently only name="by_time" supported. restrict_to_train_users : boolean, optional Whether to restrict users in the test set only to users from the train set. filter_out_already_interacted_items : boolean, optional Whether to filter out (user, item) pairs from the test set if the given user interacted with a given item in the train set. restrict_test_to_unique_user_item_pairs Whether to return only one row per (user, item) pair in test set. """ if splitting_config is None: splitting_config = { "name": "by_time", "fraction_test": 0.2, } splitting_name = splitting_config["name"] splitting_config = {k: v for k, v in splitting_config.items() if k != "name"} train, test = splitting_functions_factory(splitting_name)( interactions=interactions, **splitting_config ) if restrict_to_train_users: test = filtering_restrict_to_train_users(train, test) if filter_out_already_interacted_items: test = filtering_already_interacted_items(train, test) if restrict_train_to_unique_user_item_pairs: train = filtering_restrict_to_unique_user_item_pair(train) if restrict_test_to_unique_user_item_pairs: test = filtering_restrict_to_unique_user_item_pair(test) if replace_events_by_ones: train["event"] = 1 test["event"] = 1 return train, test ###Output _____no_output_____ ###Markdown Metrics ###Code def ranking_metrics(test_matrix, recommendations, k=10): """ Calculates ranking metrics (precision, recall, F1, F0.5, NDCG, mAP, MRR, LAUC, HR) based on test interactions matrix and recommendations :param test_matrix: Test interactions matrix :param recommendations: Recommendations :param k: Number of top recommendations to calculate metrics on :return: Dataframe with metrics """ items_number = test_matrix.shape[1] metrics = { "precision": 0, "recall": 0, "F_1": 0, "F_05": 0, "ndcg": 0, "mAP": 0, "MRR": 0, "LAUC": 0, "HR": 0, } denominators = { "relevant_users": 0, } for (user_count, user) in tqdm(enumerate(recommendations[:, 0])): u_interacted_items = get_interacted_items(test_matrix, user) interacted_items_amount = len(u_interacted_items) if interacted_items_amount > 0: # skip users with no items in test set denominators["relevant_users"] += 1 # evaluation success_statistics = calculate_successes( k, recommendations, u_interacted_items, user_count ) user_metrics = calculate_ranking_metrics( success_statistics, interacted_items_amount, items_number, k, ) for metric_name in metrics: metrics[metric_name] 
+= user_metrics[metric_name] metrics = { name: metric / denominators["relevant_users"] for name, metric in metrics.items() } return pd.DataFrame.from_dict(metrics, orient="index").T def calculate_ranking_metrics( success_statistics, interacted_items_amount, items_number, k, ): """ Calculates ranking metrics based on success statistics :param success_statistics: Success statistics dictionary :param interacted_items_amount: :param items_number: :param k: Number of top recommendations to calculate metrics on :return: Dictionary with metrics """ precision = success_statistics["total_amount"] / k recall = success_statistics["total_amount"] / interacted_items_amount user_metrics = dict( precision=precision, recall=recall, F_1=calculate_f(precision, recall, 1), F_05=calculate_f(precision, recall, 0.5), ndcg=calculate_ndcg(interacted_items_amount, k, success_statistics["total"]), mAP=calculate_map(success_statistics, interacted_items_amount, k), MRR=calculate_mrr(success_statistics["total"]), LAUC=calculate_lauc( success_statistics, interacted_items_amount, items_number, k ), HR=success_statistics["total_amount"] > 0, ) return user_metrics def calculate_mrr(user_successes): return ( 1 / (user_successes.nonzero()[0][0] + 1) if user_successes.nonzero()[0].size > 0 else 0 ) def calculate_f(precision, recall, f): return ( (f ** 2 + 1) * (precision * recall) / (f ** 2 * precision + recall) if precision + recall > 0 else 0 ) def calculate_lauc(successes, interacted_items_amount, items_number, k): return ( np.dot(successes["cumsum"], 1 - successes["total"]) + (successes["total_amount"] + interacted_items_amount) / 2 * ((items_number - interacted_items_amount) - (k - successes["total_amount"])) ) / ((items_number - interacted_items_amount) * interacted_items_amount) def calculate_map(successes, interacted_items_amount, k): return np.dot(successes["cumsum"] / np.arange(1, k + 1), successes["total"]) / min( k, interacted_items_amount ) def calculate_ndcg(interacted_items_amount, k, user_successes): cumulative_gain = 1.0 / np.log2(np.arange(2, k + 2)) cg_sum = np.cumsum(cumulative_gain) return ( np.dot(user_successes, cumulative_gain) / cg_sum[min(k, interacted_items_amount) - 1] ) def calculate_successes(k, recommendations, u_interacted_items, user_count): items = recommendations[user_count, 1 : k + 1] user_successes = np.isin(items, u_interacted_items) return dict( total=user_successes.astype(int), total_amount=user_successes.sum(), cumsum=np.cumsum(user_successes), ) def get_reactions(test_matrix, user): return test_matrix.data[test_matrix.indptr[user] : test_matrix.indptr[user + 1]] def get_interacted_items(test_matrix, user): return test_matrix.indices[test_matrix.indptr[user] : test_matrix.indptr[user + 1]] def diversity_metrics( test_matrix, formatted_recommendations, original_recommendations, k=10 ): """ Calculates diversity metrics (% if recommendations in test, test coverage, Shannon, Gini, users without recommendations) based on test interactions matrix and recommendations :param test_matrix: user/item interactions' matrix :param formatted_recommendations: recommendations where user and item ids were replaced by respective codes based on test_matrix :param original_recommendations: original format recommendations :param k: Number of top recommendations to calculate metrics on :return: Dataframe with metrics """ formatted_recommendations = formatted_recommendations[:, : k + 1] frequency_statistics = calculate_frequencies(formatted_recommendations, test_matrix) with np.errstate( divide="ignore" ): # 
let's put zeros we items with 0 frequency and ignore division warning log_frequencies = np.nan_to_num( np.log(frequency_statistics["frequencies"]), posinf=0, neginf=0 ) metrics = dict( reco_in_test=frequency_statistics["recommendations_in_test_n"] / frequency_statistics["total_recommendations_n"], test_coverage=frequency_statistics["recommended_items_n"] / test_matrix.shape[1], Shannon=-np.dot(frequency_statistics["frequencies"], log_frequencies), Gini=calculate_gini( frequency_statistics["frequencies"], frequency_statistics["items_in_test_n"] ), users_without_reco=original_recommendations.iloc[:, 1].isna().sum() / len(original_recommendations), users_without_k_reco=original_recommendations.iloc[:, k - 1].isna().sum() / len(original_recommendations), ) return pd.DataFrame.from_dict(metrics, orient="index").T def calculate_gini(frequencies, items_in_test_n): return ( np.dot( frequencies, np.arange( 1 - items_in_test_n, items_in_test_n, 2, ), ) / (items_in_test_n - 1) ) def calculate_frequencies(formatted_recommendations, test_matrix): frequencies = defaultdict( int, [(item, 0) for item in list(set(test_matrix.indices))] ) for item in formatted_recommendations[:, 1:].flat: frequencies[item] += 1 recommendations_out_test_n = frequencies[-1] del frequencies[-1] frequencies = np.array(list(frequencies.values())) items_in_test_n = len(frequencies) recommended_items_n = len(frequencies[frequencies > 0]) recommendations_in_test_n = np.sum(frequencies) frequencies = frequencies / np.sum(frequencies) frequencies = np.sort(frequencies) return dict( frequencies=frequencies, items_in_test_n=items_in_test_n, recommended_items_n=recommended_items_n, recommendations_in_test_n=recommendations_in_test_n, total_recommendations_n=recommendations_out_test_n + recommendations_in_test_n, ) ###Output _____no_output_____ ###Markdown Evaluator ###Code def preprocess_test(test: pd.DataFrame): """ Preprocesses test set to speed up evaluation """ def _map_column(test, column): test[f"{column}_code"] = test[column].astype("category").cat.codes return dict(zip(test[column], test[f"{column}_code"])) test = test.copy() test.columns = ["user", "item", "event", "timestamp"] user_map = _map_column(test, "user") item_map = _map_column(test, "item") test_matrix = sparse.csr_matrix( (np.ones(len(test)), (test["user_code"], test["item_code"])) ) return user_map, item_map, test_matrix class Evaluator: """ Class used for models evaluation """ # pylint: disable=too-many-instance-attributes # pylint: disable=too-many-arguments def __init__( self, recommendations_path: Path, test_path: Path, k, models_to_evaluate, ): self.recommendations_path = recommendations_path self.test_path = test_path self.k = k self.models_to_evaluate = models_to_evaluate self.located_models = None self.test = None self.user_map = None self.item_map = None self.test_matrix = None self.evaluation_results = [] def prepare(self): """ Prepares test set and models to evaluate """ def _get_models(models_to_evaluate, recommendations_path): models = [ (file_name.split(".")[0], file_name) for file_name in os.listdir(recommendations_path) ] if models_to_evaluate: return [model for model in models if model[0] in models_to_evaluate] return models self.test = pd.read_csv(self.test_path, compression="gzip").astype( {"user": str, "item": str} ) self.user_map, self.item_map, self.test_matrix = preprocess_test(self.test) self.located_models = _get_models( self.models_to_evaluate, self.recommendations_path ) def evaluate_models(self): """ Evaluating multiple models """ def 
_read_recommendations(file_name): return pd.read_csv( os.path.join(self.recommendations_path, file_name), header=None, compression="gzip", dtype=str, ) for model, file_name in self.located_models: recommendations = _read_recommendations(file_name) evaluation_result = self.evaluate( original_recommendations=recommendations, ) evaluation_result.insert(0, "model_name", model) self.evaluation_results.append(evaluation_result) self.evaluation_results = pd.concat(self.evaluation_results).set_index( "model_name" ) if "precision" in self.evaluation_results.columns: self.evaluation_results = self.evaluation_results.sort_values( by="precision", ascending=False ) def evaluate( self, original_recommendations: pd.DataFrame, ): """ Evaluate single model """ def _format_recommendations(recommendations, user_id_code, item_id_code): users = recommendations.iloc[:, :1].applymap( lambda x: user_id_code.setdefault(str(x), -1) ) items = recommendations.iloc[:, 1:].applymap( lambda x: -1 if pd.isna(x) else item_id_code.setdefault(x, -1) ) return np.array(pd.concat([users, items], axis=1)) original_recommendations = original_recommendations.iloc[:, : self.k + 1].copy() formatted_recommendations = _format_recommendations( original_recommendations, self.user_map, self.item_map ) evaluation_results = pd.concat( [ ranking_metrics( self.test_matrix, formatted_recommendations, k=self.k, ), diversity_metrics( self.test_matrix, formatted_recommendations, original_recommendations, self.k, ), ], axis=1, ) return evaluation_results ###Output _____no_output_____ ###Markdown Helpers ###Code def overlap(df1, df2): """ Returns the Overlap Coefficient with respect to (user, item) pairs. We assume uniqueness of (user, item) pairs in DataFrames (not recommending the same item to the same users multiple times)). :param df1: DataFrame which index is user_id and column ["items"] is a list of recommended items :param df2: DataFrame which index is user_id and column ["items"] is a list of recommended items """ nb_items = min(df1["items"].apply(len).sum(), df2["items"].apply(len).sum()) merged_df = pd.merge(df1, df2, left_index=True, right_index=True) nb_common_items = merged_df.apply( lambda x: len(set(x["items_x"]) & set(x["items_y"])), axis=1 ).sum() return 1.00 * nb_common_items / nb_items def get_recommendations(models_to_evaluate, recommendations_path): """ Returns dictionary with model_names as keys and recommendations as values. 
:param models_to_evaluate: List of model names :param recommendations_path: Stored recommendations directory """ models = [ (file_name.split(".")[0], file_name) for file_name in os.listdir(recommendations_path) ] return { model[0]: pd.read_csv( os.path.join(recommendations_path, model[1]), header=None, compression="gzip", dtype=str, ) for model in models if model[0] in models_to_evaluate } def dict_to_df(dictionary): """ Creates pandas dataframe from dictionary :param dictionary: Original dictionary :return: Dataframe from original dictionary """ return pd.DataFrame({k: [v] for k, v in dictionary.items()}) def efficiency(path, base_params=None): """ Parametrized decorator for executing function with efficiency logging and storing the results under the given path """ base_params = base_params or {} def efficiency_decorator(func): def wrapper(*args, **kwargs): tracemalloc.start() start_time = time() result = func(*args, **kwargs) execution_time = time() - start_time _, peak = tracemalloc.get_traced_memory() dict_to_df( { **base_params, **{ "function_name": func.__name__, "execution_time": execution_time, "memory_peak": peak, }, } ).to_csv(path, index=False) tracemalloc.stop() return result return wrapper return efficiency_decorator def get_unix_path(path): """ Returns the input path with unique csv filename """ return path / f"{datetime.utcnow().strftime('%Y_%m_%d_%H_%M_%S_%f')}.csv" def df_from_dir(dir_path): """ Returns pd.DataFrame with concatenated files from the given path """ files_read = [ pd.read_csv(dir_path / filename) for filename in os.listdir(dir_path) if filename.endswith(".csv") ] return pd.concat(files_read, axis=0, ignore_index=True) def get_interactions_subset( interactions, fraction_users, fraction_items, random_seed=10 ): """ Select subset from interactions based on fraction of users and items :param interactions: Original interactions :param fraction_users: Fraction of users :param fraction_items: Fraction of items :param random_seed: Random seed :return: Dataframe with subset of interactions """ def _get_subset_by_column(column, fraction): column_df = interactions[column].unique() subset = set(np.random.choice(column_df, int(len(column_df) * fraction))) return interactions[interactions[column].isin(subset)] np.random.seed(random_seed) if fraction_users < 1: interactions = _get_subset_by_column("user", fraction_users) if fraction_items < 1: interactions = _get_subset_by_column("item", fraction_items) return interactions ###Output _____no_output_____ ###Markdown Data loading ###Code def load_interactions(data_path): return pd.read_csv(data_path, compression='gzip', header=0,names=["user", "item", "event", "timestamp"], ).astype({"user": str, "item": str, "event": str, "timestamp": int}) def load_target_users(path): return list(pd.read_csv(path, compression="gzip", header=None).astype(str).iloc[:, 0]) def save_recommendations(recommendations, path): recommendations.to_csv(path, index=False, header=False, compression="gzip") data_path = 'df_subset_items.parquet.snappy' interactions = pd.read_parquet(data_path) interactions.columns = ["user", "item", "event", "timestamp"] interactions = interactions.astype({"user": str, "item": str, "event": str, "timestamp": int}) interactions.head() interactions.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 6159193 entries, 10 to 65502199 Data columns (total 4 columns): # Column Dtype --- ------ ----- 0 user object 1 item object 2 event object 3 timestamp int64 dtypes: int64(1), object(3) memory usage: 235.0+ MB 
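###Markdown
Before moving on, here is a small, hedged usage example of the efficiency decorator defined above: each decorated call writes one CSV with its runtime and peak memory, and df_from_dir collects all such logs into one DataFrame. The "efficiency_logs" folder and the demo function are illustrative additions, not part of the original pipeline.
###Code
# Illustrative usage sketch of the efficiency decorator (assumption: a local
# folder named "efficiency_logs" is acceptable as the log destination).
efficiency_dir = Path("efficiency_logs")
efficiency_dir.mkdir(exist_ok=True)

@efficiency(get_unix_path(efficiency_dir), base_params={"model": "demo"})
def count_interactions(df):
    # trivial workload, just to show the logging side effect
    return len(df)

count_interactions(interactions)
df_from_dir(efficiency_dir)
###Output
_____no_output_____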
###Markdown EDA ###Code n_users = interactions["user"].nunique() n_items = interactions["item"].nunique() n_interactions = len(interactions) interactions_per_user = interactions.groupby("user").size() interactions_per_item = interactions.groupby("item").size() print(f"We have {n_users} users, {n_items} items and {n_interactions} interactions.\n") print( f"Data sparsity (% of missing entries) is {round(100 * (1- n_interactions / (n_users * n_items)), 4)}%.\n" ) print( f"Average number of interactions per user is {round(interactions_per_user.mean(), 3)}\ (standard deviation {round(interactions_per_user.std(ddof=0),3)}).\n" ) print( f"Average number of interactions per item is {round(interactions_per_item.mean(), 3)}\ (standard deviation {round(interactions_per_item.std(ddof=0),3)}).\n" ) def compute_quantiles(series, quantiles=[0.01, 0.1, 0.25, 0.5, 0.75, 0.9, 0.99]): return pd.DataFrame( [[quantile, series.quantile(quantile)] for quantile in quantiles], columns=["quantile", "value"], ) print("Interactions distribution per user:") compute_quantiles(interactions_per_user) def plot_interactions_distribution(series, aggregation="user", ylabel="Users", bins=30): matplotlib.rcParams.update({"font.size": 22}) series.plot.hist(bins=bins, rwidth=0.9, logy=True, figsize=(16, 9)) plt.title(f"Number of interactions per {aggregation}") plt.xlabel("Interactions") plt.ylabel(ylabel) plt.grid(axis="y", alpha=0.5) plot_interactions_distribution(interactions_per_user, "user", "Users") print("Interactions distribution per item:") compute_quantiles(interactions_per_item) plot_interactions_distribution(interactions_per_item, "item", "Items") event_frequency = pd.DataFrame( interactions["event"].value_counts() / len(interactions) ).rename(columns={"event": "frequency"}) event_frequency["frequency"] = event_frequency["frequency"].apply( lambda x: f"{round(100*x,3)}%" ) event_frequency def unix_to_day(timestamps): min_timestamp = timestamps.min() seconds_in_day = 60*60*24 return (timestamps - min_timestamp)//seconds_in_day + 1 interactions["day"] = unix_to_day(interactions["timestamp"]) def plot_interactions_over_time(series): freq = series.value_counts() labels, counts = freq.index, freq.values/10**6 matplotlib.rcParams.update({"font.size": 22}) plt.figure(figsize=(16,5)) plt.bar(labels, counts, align='center') plt.gca().set_xticks(labels) plt.title(f"Data split") plt.xlabel("Day") plt.ylabel("Interactions [mln]") plt.grid(axis="y") plot_interactions_over_time(interactions["day"]) ###Output _____no_output_____ ###Markdown Preprocessing Splitting ###Code random_seed=10 validation_target_users_size = 10000 validation_fraction_users = 0.2 validation_fraction_items = 0.2 # split into train_and_validation and test train_and_validation, test = split(interactions) train_and_validation.to_csv('train_valid.gzip', compression="gzip", index=None) test.to_csv('test.gzip', compression="gzip", index=None) # split into train and validation interactions_subset = get_interactions_subset( interactions=train_and_validation, fraction_users=validation_fraction_users, fraction_items=validation_fraction_items, ) train, validation = split(interactions_subset) train.to_csv('train.gzip', compression="gzip", index=None) validation.to_csv('validation.gzip', compression="gzip", index=None) # prepare target_users test["user"].drop_duplicates().to_csv('target_users_all.gzip', header=None, index=None, compression="gzip" ) # prepare target_users for validation np.random.seed(random_seed) validation_users = validation["user"].drop_duplicates() 
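# Note: only a random subset of validation users (at most
# validation_target_users_size of them) is written out as target users,
# so the per-user recommendation step during validation stays cheap.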
validation_users.sample( n=min(validation_target_users_size, len(validation_users)) ).to_csv('target_users_subset_validation.gzip', header=None, index=None, compression="gzip", ) ###Output _____no_output_____ ###Markdown Encoding ###Code def dataprep(interactions): """ Prepare interactions dataset for training model """ data = interactions.copy() user_code_id = dict(enumerate(data["user"].unique())) user_id_code = {v: k for k, v in user_code_id.items()} data["user_code"] = data["user"].apply(user_id_code.get) item_code_id = dict(enumerate(data["item"].unique())) item_id_code = {v: k for k, v in item_code_id.items()} data["item_code"] = data["item"].apply(item_id_code.get) train_ui = sparse.csr_matrix( (np.ones(len(data)), (data["user_code"], data["item_code"])) ) return train_ui, {'user_code_id':user_code_id, 'user_id_code':user_id_code, 'item_code_id':item_code_id, 'item_id_code':item_id_code} ###Output _____no_output_____ ###Markdown Models Base Class ###Code class BaseRecommender: """Base recommender interface""" def preprocess(self): """Implement any needed input data preprocessing""" raise NotImplementedError def fit(self): """Implement model fitter""" raise NotImplementedError def recommend(self, *args, **kwargs): """Implement recommend method Should return a DataFrame containing * user_id: id of the user for whom we provide recommendations * n columns containing item recommendations (or None if missing) """ raise NotImplementedError ###Output _____no_output_____ ###Markdown TopPop ###Code class TopPop(BaseRecommender): """ TopPop recommender, which recommends the most popular items """ def __init__(self, train_ui, encode_maps, show_progress=True): super().__init__() self.popular_items = None self.train_ui = train_ui self.user_id_code = encode_maps['user_id_code'] self.user_code_id = encode_maps['user_code_id'] self.item_code_id = encode_maps['item_code_id'] self.show_progress = show_progress def fit(self): """ Fit the model """ self.popular_items = (-self.train_ui.sum(axis=0).A.ravel()).argsort() def recommend( self, target_users, n_recommendations, filter_out_interacted_items=True, ) -> pd.DataFrame: """ Recommends n_recommendations items for target_users :return: pd.DataFrame (user, item_1, item_2, ..., item_n) """ with ThreadPool() as thread_pool: recommendations = list( tqdm( thread_pool.imap( partial( self.recommend_per_user, n_recommendations=n_recommendations, filter_out_interacted_items=filter_out_interacted_items, ), target_users, ), disable=not self.show_progress, ) ) return pd.DataFrame(recommendations) def recommend_per_user( self, user, n_recommendations, filter_out_interacted_items=True ): """ Recommends n items per user :param user: User id :param n_recommendations: Number of recommendations :param filter_out_interacted_items: boolean value to filter interacted items :return: list of format [user_id, item1, item2 ...] 
""" u_code = self.user_id_code.get(user) u_recommended_items = [] if u_code is not None: exclude_items = [] if filter_out_interacted_items: exclude_items = self.train_ui.indices[ self.train_ui.indptr[u_code] : self.train_ui.indptr[u_code + 1] ] u_recommended_items = self.popular_items[ : n_recommendations + len(exclude_items) ] u_recommended_items = [ self.item_code_id[i] for i in u_recommended_items if i not in exclude_items ] u_recommended_items = u_recommended_items[:n_recommendations] return ( [user] + u_recommended_items + [None] * (n_recommendations - len(u_recommended_items)) ) ###Output _____no_output_____ ###Markdown Random ###Code class Random(BaseRecommender): """ TopPop recommender, which recommends the most popular items """ def __init__(self, train_ui, encode_maps, show_progress=True): super().__init__() self.train_ui = train_ui self.user_id_code = encode_maps['user_id_code'] self.user_code_id = encode_maps['user_code_id'] self.item_code_id = encode_maps['item_code_id'] self.show_progress = show_progress def fit(self): """ Fit the model """ pass def recommend( self, target_users, n_recommendations, filter_out_interacted_items=True, ) -> pd.DataFrame: """ Recommends n_recommendations items for target_users :return: pd.DataFrame (user, item_1, item_2, ..., item_n) """ with ThreadPool() as thread_pool: recommendations = list( tqdm( thread_pool.imap( partial( self.recommend_per_user, n_recommendations=n_recommendations, filter_out_interacted_items=filter_out_interacted_items, ), target_users, ), disable=not self.show_progress, ) ) return pd.DataFrame(recommendations) def recommend_per_user( self, user, n_recommendations, filter_out_interacted_items=True ): """ Recommends n items per user :param user: User id :param n_recommendations: Number of recommendations :param filter_out_interacted_items: boolean value to filter interacted items :return: list of format [user_id, item1, item2 ...] """ u_code = self.user_id_code.get(user) u_recommended_items = [] if u_code is not None: exclude_items = [] if filter_out_interacted_items: exclude_items = self.train_ui.indices[ self.train_ui.indptr[u_code] : self.train_ui.indptr[u_code + 1] ] u_recommended_items = random.sample( range(self.train_ui.shape[1]), n_recommendations + len(exclude_items) ) u_recommended_items = [ self.item_code_id[i] for i in u_recommended_items if i not in exclude_items ] u_recommended_items = u_recommended_items[:n_recommendations] return ( [user] + u_recommended_items + [None] * (n_recommendations - len(u_recommended_items)) ) ###Output _____no_output_____ ###Markdown ALS ###Code class ALS(BaseRecommender): """ Module implementing a wrapper for the ALS model Wrapper over ALS model """ def __init__(self, train_ui, encode_maps, factors=100, regularization=0.01, use_gpu=False, iterations=15, event_weights_multiplier=100, show_progress=True, ): """ Source of descriptions: https://github.com/benfred/implicit/blob/master/implicit/als.py Alternating Least Squares A Recommendation Model based on the algorithms described in the paper 'Collaborative Filtering for Implicit Feedback Datasets' with performance optimizations described in 'Applications of the Conjugate Gradient Method for Implicit Feedback Collaborative Filtering.' 
Parameters ---------- factors : int, optional The number of latent factors to compute regularization : float, optional The regularization factor to use use_gpu : bool, optional Fit on the GPU if available, default is to run on CPU iterations : int, optional The number of ALS iterations to use when fitting data event_weights_multiplier: int, optional The multiplier of weights. Used to find a tradeoff between the importance of interacted and not interacted items. """ super().__init__() self.train_ui = train_ui self.user_id_code = encode_maps['user_id_code'] self.user_code_id = encode_maps['user_code_id'] self.item_code_id = encode_maps['item_code_id'] self.mapping_user_test_items = None self.similarity_matrix = None self.show_progress = show_progress self.model = implicit.als.AlternatingLeastSquares( factors=factors, regularization=regularization, use_gpu=use_gpu, iterations=iterations, ) self.event_weights_multiplier = event_weights_multiplier def fit(self): """ Fit the model """ self.model.fit(self.train_ui.T, show_progress=self.show_progress) def recommend( self, target_users, n_recommendations, filter_out_interacted_items=True, ) -> pd.DataFrame: """ Recommends n_recommendations items for target_users :return: pd.DataFrame (user, item_1, item_2, ..., item_n) """ with ThreadPool() as thread_pool: recommendations = list( tqdm( thread_pool.imap( partial( self.recommend_per_user, n_recommendations=n_recommendations, filter_out_interacted_items=filter_out_interacted_items, ), target_users, ), disable=not self.show_progress, ) ) return pd.DataFrame(recommendations) def recommend_per_user( self, user, n_recommendations, filter_out_interacted_items=True ): """ Recommends n items per user :param user: User id :param n_recommendations: Number of recommendations :param filter_out_interacted_items: boolean value to filter interacted items :return: list of format [user_id, item1, item2 ...] """ u_code = self.user_id_code.get(user) u_recommended_items = [] if u_code is not None: u_recommended_items = list( zip( *self.model.recommend( u_code, self.train_ui, N=n_recommendations, filter_already_liked_items=filter_out_interacted_items, ) ) )[0] u_recommended_items = [self.item_code_id[i] for i in u_recommended_items] return ( [user] + u_recommended_items + [None] * (n_recommendations - len(u_recommended_items)) ) ###Output _____no_output_____ ###Markdown LightFM ###Code class LFM(BaseRecommender): """ Module implementing a wrapper for the ALS model Wrapper over LightFM model """ def __init__(self, train_ui, encode_maps, no_components=30, k=5, n=10, learning_schedule="adagrad", loss="logistic", learning_rate=0.05, rho=0.95, epsilon=1e-06, item_alpha=0.0, user_alpha=0.0, max_sampled=10, random_state=42, epochs=20, show_progress=True, ): """ Source of descriptions: https://making.lyst.com/lightfm/docs/_modules/lightfm/lightfm.html#LightFM A hybrid latent representation recommender model. The model learns embeddings (latent representations in a high-dimensional space) for users and items in a way that encodes user preferences over items. When multiplied together, these representations produce scores for every item for a given user; items scored highly are more likely to be interesting to the user. The user and item representations are expressed in terms of representations of their features: an embedding is estimated for every feature, and these features are then summed together to arrive at representations for users and items. 
For example, if the movie 'Wizard of Oz' is described by the following features: 'musical fantasy', 'Judy Garland', and 'Wizard of Oz', then its embedding will be given by taking the features' embeddings and adding them together. The same applies to user features. The embeddings are learned through `stochastic gradient descent <http://cs231n.github.io/optimization-1/>`_ methods. Four loss functions are available: - logistic: useful when both positive (1) and negative (-1) interactions are present. - BPR: Bayesian Personalised Ranking [1]_ pairwise loss. Maximises the prediction difference between a positive example and a randomly chosen negative example. Useful when only positive interactions are present and optimising ROC AUC is desired. - WARP: Weighted Approximate-Rank Pairwise [2]_ loss. Maximises the rank of positive examples by repeatedly sampling negative examples until rank violating one is found. Useful when only positive interactions are present and optimising the top of the recommendation list (precision@k) is desired. - k-OS WARP: k-th order statistic loss [3]_. A modification of WARP that uses the k-th positive example for any given user as a basis for pairwise updates. Two learning rate schedules are available: - adagrad: [4]_ - adadelta: [5]_ Parameters ---------- no_components: int, optional the dimensionality of the feature latent embeddings. k: int, optional for k-OS training, the k-th positive example will be selected from the n positive examples sampled for every user. n: int, optional for k-OS training, maximum number of positives sampled for each update. learning_schedule: string, optional one of ('adagrad', 'adadelta'). loss: string, optional one of ('logistic', 'bpr', 'warp', 'warp-kos'): the loss function. learning_rate: float, optional initial learning rate for the adagrad learning schedule. rho: float, optional moving average coefficient for the adadelta learning schedule. epsilon: float, optional conditioning parameter for the adadelta learning schedule. item_alpha: float, optional L2 penalty on item features. Tip: setting this number too high can slow down training. One good way to check is if the final weights in the embeddings turned out to be mostly zero. The same idea applies to the user_alpha parameter. user_alpha: float, optional L2 penalty on user features. max_sampled: int, optional maximum number of negative samples used during WARP fitting. It requires a lot of sampling to find negative triplets for users that are already well represented by the model; this can lead to very long training times and overfitting. Setting this to a higher number will generally lead to longer training times, but may in some cases improve accuracy. random_state: int seed, RandomState instance, or None The seed of the pseudo random number generator to use when shuffling the data and initializing the parameters. 
epochs: (int, optional) number of epochs to run """ super().__init__() self.model = LightFM( no_components=no_components, k=k, n=n, learning_schedule=learning_schedule, loss=loss, learning_rate=learning_rate, rho=rho, epsilon=epsilon, item_alpha=item_alpha, user_alpha=user_alpha, max_sampled=max_sampled, random_state=random_state, ) self.epochs = epochs self.train_ui = train_ui self.user_id_code = encode_maps['user_id_code'] self.user_code_id = encode_maps['user_code_id'] self.item_code_id = encode_maps['item_code_id'] self.mapping_user_test_items = None self.similarity_matrix = None self.show_progress = show_progress def fit(self): """ Fit the model """ self.model.fit( self.train_ui, epochs=self.epochs, num_threads=multiprocessing.cpu_count(), verbose=self.show_progress, ) def recommend( self, target_users, n_recommendations, filter_out_interacted_items=True, ) -> pd.DataFrame: """ Recommends n_recommendations items for target_users :return: pd.DataFrame (user, item_1, item_2, ..., item_n) """ self.items_to_recommend = np.arange(len(self.item_code_id)) with ThreadPool() as thread_pool: recommendations = list( tqdm( thread_pool.imap( partial( self.recommend_per_user, n_recommendations=n_recommendations, filter_out_interacted_items=filter_out_interacted_items, ), target_users, ), disable=not self.show_progress, ) ) return pd.DataFrame(recommendations) def recommend_per_user( self, user, n_recommendations, filter_out_interacted_items=True ): """ Recommends n items per user :param user: User id :param n_recommendations: Number of recommendations :param filter_out_interacted_items: boolean value to filter interacted items :return: list of format [user_id, item1, item2 ...] """ u_code = self.user_id_code.get(user) if u_code is not None: interacted_items = self.train_ui.indices[ self.train_ui.indptr[u_code] : self.train_ui.indptr[u_code + 1] ] scores = self.model.predict(int(u_code), self.items_to_recommend) item_recommendations = self.items_to_recommend[np.argsort(-scores)][ : n_recommendations + len(interacted_items) ] item_recommendations = [ self.item_code_id[item] for item in item_recommendations if item not in interacted_items ] return ( [user] + item_recommendations + [None] * (n_recommendations - len(item_recommendations)) ) ###Output _____no_output_____ ###Markdown RP3Beta ###Code class RP3Beta(BaseRecommender): """ Module implementing a RP3Beta model RP3Beta model proposed in the paper "Updatable, Accurate, Diverse, and Scalable Recommendations for Interactive Applications". In our implementation we perform direct computations on sparse matrices instead of random walks approximation. 
""" def __init__(self, train_ui, encode_maps, alpha=1, beta=0, show_progress=True): super().__init__() self.train_ui = train_ui self.user_id_code = encode_maps['user_id_code'] self.user_code_id = encode_maps['user_code_id'] self.item_code_id = encode_maps['item_code_id'] self.alpha = alpha self.beta = beta self.p_ui = None self.similarity_matrix = None self.show_progress = show_progress def fit(self): """ Fit the model """ # Define Pui self.p_ui = normalize(self.train_ui, norm="l1", axis=1).power(self.alpha) # Define Piu p_iu = normalize( self.train_ui.transpose(copy=True).tocsr(), norm="l1", axis=1 ).power(self.alpha) self.similarity_matrix = p_iu * self.p_ui item_orders = (self.train_ui > 0).sum(axis=0).A.ravel() self.similarity_matrix *= sparse.diags(1 / item_orders.clip(min=1) ** self.beta) def recommend( self, target_users, n_recommendations, filter_out_interacted_items=True, ) -> pd.DataFrame: """ Recommends n_recommendations items for target_users :return: pd.DataFrame (user, item_1, item_2, ..., item_n) """ with ThreadPool() as thread_pool: recommendations = list( tqdm( thread_pool.imap( partial( self.recommend_per_user, n_recommendations=n_recommendations, filter_out_interacted_items=filter_out_interacted_items, ), target_users, ), disable=not self.show_progress, ) ) return pd.DataFrame(recommendations) def recommend_per_user( self, user, n_recommendations, filter_out_interacted_items=True ): """ Recommends n items per user :param user: User id :param n_recommendations: Number of recommendations :param filter_out_interacted_items: boolean value to filter interacted items :return: list of format [user_id, item1, item2 ...] """ u_code = self.user_id_code.get(user) u_recommended_items = [] if u_code is not None: exclude_items = [] if filter_out_interacted_items: exclude_items = self.train_ui.indices[ self.train_ui.indptr[u_code] : self.train_ui.indptr[u_code + 1] ] scores = self.p_ui[u_code] * self.similarity_matrix u_recommended_items = scores.indices[ (-scores.data).argsort()[: n_recommendations + len(exclude_items)] ] u_recommended_items = [ self.item_code_id[i] for i in u_recommended_items if i not in exclude_items ] u_recommended_items = u_recommended_items[:n_recommendations] return ( [user] + u_recommended_items + [None] * (n_recommendations - len(u_recommended_items)) ) ###Output _____no_output_____ ###Markdown SLIM ###Code class SLIM(BaseRecommender): """ Module implementing SLIM model SLIM model proposed in "SLIM: Sparse Linear Methods for Top-N Recommender Systems """ def __init__(self, train_ui, encode_maps, alpha=0.0001, l1_ratio=0.5, iterations=3, show_progress=True): super().__init__() self.train_ui = train_ui self.user_id_code = encode_maps['user_id_code'] self.user_code_id = encode_maps['user_code_id'] self.item_code_id = encode_maps['item_code_id'] self.alpha = alpha self.l1_ratio = l1_ratio self.iterations = iterations self.similarity_matrix = None self.show_progress = show_progress def fit_per_item(self, column_id): """ Fits ElasticNet per item :param column_id: Id of column to setup as predicted value :return: coefficients of the ElasticNet model """ model = ElasticNet( alpha=self.alpha, l1_ratio=self.l1_ratio, positive=True, fit_intercept=False, copy_X=False, precompute=True, selection="random", max_iter=self.iterations, ) # set to zeros all entries in the given column of train_ui y = self.train_ui[:, column_id].A start_indptr = self.train_ui.indptr[column_id] end_indptr = self.train_ui.indptr[column_id + 1] column_ratings = 
self.train_ui.data[start_indptr:end_indptr].copy() self.train_ui.data[start_indptr:end_indptr] = 0 # learn item-item similarities model.fit(self.train_ui, y) # return original ratings to train_ui self.train_ui.data[start_indptr:end_indptr] = column_ratings return model.sparse_coef_.T @ignore_warnings(category=ConvergenceWarning) def fit(self): """ Fit the model """ self.train_ui = self.train_ui.tocsc() with ThreadPool() as thread_pool: coefs = list( tqdm( thread_pool.imap(self.fit_per_item, range(self.train_ui.shape[1])), disable=not self.show_progress, ) ) self.similarity_matrix = sparse.hstack(coefs).tocsr() self.train_ui = self.train_ui.tocsr() def recommend( self, target_users, n_recommendations, filter_out_interacted_items=True, ): """ Recommends n_recommendations items for target_users :return: pd.DataFrame (user, item_1, item_2, ..., item_n) """ with ThreadPool() as thread_pool: recommendations = list( tqdm( thread_pool.imap( partial( self.recommend_per_user, n_recommendations=n_recommendations, filter_out_interacted_items=filter_out_interacted_items, ), target_users, ), disable=not self.show_progress, ) ) return pd.DataFrame(recommendations) def recommend_per_user( self, user, n_recommendations, filter_out_interacted_items=True ): """ Recommends n items per user :param user: User id :param n_recommendations: Number of recommendations :param filter_out_interacted_items: boolean value to filter interacted items :return: list of format [user_id, item1, item2 ...] """ u_code = self.user_id_code.get(user) if u_code is not None: exclude_items = [] if filter_out_interacted_items: exclude_items = self.train_ui.indices[ self.train_ui.indptr[u_code] : self.train_ui.indptr[u_code + 1] ] scores = self.train_ui[u_code] * self.similarity_matrix u_recommended_items = scores.indices[ (-scores.data).argsort()[: n_recommendations + len(exclude_items)] ] u_recommended_items = [ self.item_code_id[i] for i in u_recommended_items if i not in exclude_items ][:n_recommendations] return ( [user] + u_recommended_items + [None] * (n_recommendations - len(u_recommended_items)) ) ###Output _____no_output_____ ###Markdown Runs ###Code # load the training data interactions_train = load_interactions('train.gzip') # encode user ids and convert interactions into sparse interaction matrix train_ui, encode_maps = dataprep(interactions_train) # # load target users target_users = load_target_users('target_users_subset_validation.gzip') # models list models = {'itempop': TopPop, 'random': Random, 'als': ALS, 'lightfm': LFM, 'rp3': RP3Beta, 'slim': SLIM, } ###Output _____no_output_____ ###Markdown Training Random Model ###Code model_name = 'random' Model = models[model_name] # number of recommendations N_RECOMMENDATIONS = 10 # initiate the model model = Model(train_ui, encode_maps) # train the model model.fit() # # recommend recommendations = model.recommend(target_users=target_users, n_recommendations=N_RECOMMENDATIONS) # # save the recommendations save_recommendations(recommendations, '{}.gzip'.format(model_name)) ###Output 5373it [00:00, 17610.42it/s] ###Markdown Training Item Pop Model ###Code model_name = 'itempop' Model = models[model_name] # number of recommendations N_RECOMMENDATIONS = 10 # initiate the model model = Model(train_ui, encode_maps) # train the model model.fit() # # recommend recommendations = model.recommend(target_users=target_users, n_recommendations=N_RECOMMENDATIONS) # # save the recommendations save_recommendations(recommendations, '{}.gzip'.format(model_name)) ###Output 5373it [00:00, 
15331.51it/s] ###Markdown Training ALS Model ###Code model_name = 'als' Model = models[model_name] FACTORS = 400 REGULARIZATION = 0.1 ITERATIONS = 6 EVENT_WEIGHTS_MULTIPLIER = 100 N_RECOMMENDATIONS = 10 # initiate the model model = Model(train_ui, encode_maps, factors=FACTORS, regularization=REGULARIZATION, iterations=ITERATIONS, event_weights_multiplier=EVENT_WEIGHTS_MULTIPLIER, ) # train the model model.fit() # # recommend recommendations = model.recommend(target_users=target_users, n_recommendations=N_RECOMMENDATIONS) # # save the recommendations save_recommendations(recommendations, '{}.gzip'.format(model_name)) ###Output WARNING:root:OpenBLAS detected. Its highly recommend to set the environment variable 'export OPENBLAS_NUM_THREADS=1' to disable its internal multithreading ###Markdown Training LightFM Model ###Code model_name = 'lightfm' Model = models[model_name] no_components=200 learning_schedule="adadelta" loss="warp" max_sampled=61 epochs=11 N_RECOMMENDATIONS = 10 # initiate the model model = Model(train_ui, encode_maps, no_components=no_components, learning_schedule=learning_schedule, loss=loss, max_sampled=max_sampled, epochs=epochs, ) # train the model model.fit() # # recommend recommendations = model.recommend(target_users=target_users, n_recommendations=N_RECOMMENDATIONS) # # save the recommendations save_recommendations(recommendations, '{}.gzip'.format(model_name)) ###Output Epoch: 100%|██████████| 11/11 [00:24<00:00, 2.22s/it] 5373it [00:11, 486.61it/s] ###Markdown Training RP3Beta Model ###Code model_name = 'rp3' Model = models[model_name] ALPHA = 1 BETA = 0 N_RECOMMENDATIONS = 10 # initiate the model model = Model(train_ui, encode_maps, alpha=ALPHA, beta=BETA, ) # train the model model.fit() # # recommend recommendations = model.recommend(target_users=target_users, n_recommendations=N_RECOMMENDATIONS) # # save the recommendations save_recommendations(recommendations, '{}.gzip'.format(model_name)) ###Output 5373it [00:03, 1637.26it/s] ###Markdown Training SLIM Model ###Code model_name = 'slim' Model = models[model_name] N_RECOMMENDATIONS = 10 # initiate the model model = Model(train_ui, encode_maps, ) # train the model model.fit() # # recommend recommendations = model.recommend(target_users=target_users, n_recommendations=N_RECOMMENDATIONS) # # save the recommendations save_recommendations(recommendations, '{}.gzip'.format(model_name)) ###Output 2376it [00:14, 163.75it/s] 5373it [00:02, 2016.80it/s] ###Markdown Evaluation ###Code evaluator = Evaluator( recommendations_path='/content', test_path='validation.gzip', k=10, models_to_evaluate=list(models.keys()), ) evaluator.prepare() evaluator.evaluate_models() evaluator.evaluation_results evaluator = Evaluator( recommendations_path='/content', test_path='validation.gzip', k=10, models_to_evaluate=list(models.keys()), ) evaluator.prepare() evaluator.evaluate_models() evaluator.evaluation_results ###Output 5373it [00:00, 12328.94it/s] 5373it [00:00, 11966.13it/s] 5373it [00:00, 11717.03it/s] 5373it [00:00, 11618.24it/s] 5373it [00:00, 11584.71it/s] 5373it [00:00, 11754.46it/s] ###Markdown --- ###Code !apt-get -qq install tree !rm -r sample_data !tree --du -h -C . !pip install -q watermark %reload_ext watermark %watermark -a "Sparsh A." -m -iv -u -t -d -p lightfm import sklearn sklearn.__version__ ###Output _____no_output_____
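###Markdown
To make the RP3Beta model defined earlier in this notebook easier to follow, here is a minimal, self-contained sketch of the same similarity construction on a tiny toy interaction matrix. The matrix values, alpha and beta below are made up for illustration only; the RP3Beta class applies these exact steps to the real `train_ui`.
###Code
import numpy as np
from scipy import sparse
from sklearn.preprocessing import normalize

# toy user-item interaction matrix: 3 users x 4 items (made-up values)
toy_ui = sparse.csr_matrix(np.array([
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
]))

alpha, beta = 1, 0.5

# P_ui: row-normalised user->item transition probabilities, raised to alpha
p_ui = normalize(toy_ui, norm="l1", axis=1).power(alpha)

# P_iu: row-normalised item->user transition probabilities, raised to alpha
p_iu = normalize(toy_ui.transpose(copy=True).tocsr(), norm="l1", axis=1).power(alpha)

# item-item similarity = P_iu * P_ui (a two-step walk item -> user -> item)
similarity = p_iu * p_ui

# popularity discount: scale each column by 1 / popularity**beta
item_popularity = (toy_ui > 0).sum(axis=0).A.ravel()
similarity *= sparse.diags(1 / item_popularity.clip(min=1) ** beta)

# recommendation scores for user 0 are then just P_ui[0] * similarity
print(similarity.toarray())
print((p_ui[0] * similarity).toarray())
###Output
_____no_output_____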
DeepRL-PPO-solution/pong-PPO.ipynb
###Markdown Welcome!Below, we will learn to implement and train a policy to play atari-pong, using only the pixels as input. We will use convolutional neural nets, multiprocessing, and pytorch to implement and train our policy. Let's get started! ###Code # custom utilies for displaying animation, collecting rollouts and more import pong_utils %matplotlib inline # check which device is being used. # I recommend disabling gpu until you've made sure that the code runs device = pong_utils.device print("using device: ",device) # render ai gym environment import gym import time # PongDeterministic does not contain random frameskip # so is faster to train than the vanilla Pong-v4 environment env = gym.make('PongDeterministic-v4') print("List of available actions: ", env.unwrapped.get_action_meanings()) # we will only use the actions 'RIGHTFIRE' = 4 and 'LEFTFIRE" = 5 # the 'FIRE' part ensures that the game starts again after losing a life # the actions are hard-coded in pong_utils.py ###Output List of available actions: ['NOOP', 'FIRE', 'RIGHT', 'LEFT', 'RIGHTFIRE', 'LEFTFIRE'] ###Markdown PreprocessingTo speed up training, we can simplify the input by cropping the images and use every other pixel ###Code import matplotlib import matplotlib.pyplot as plt # show what a preprocessed image looks like env.reset() _, _, _, _ = env.step(0) # get a frame after 20 steps for _ in range(20): frame, _, _, _ = env.step(1) plt.subplot(1,2,1) plt.imshow(frame) plt.title('original image') plt.subplot(1,2,2) plt.title('preprocessed image') # 80 x 80 black and white image plt.imshow(pong_utils.preprocess_single(frame), cmap='Greys') plt.show() ###Output _____no_output_____ ###Markdown Policy Exercise 1: Implement your policy Here, we define our policy. The input is the stack of two different frames (which captures the movement), and the output is a number $P_{\rm right}$, the probability of moving left. Note that $P_{\rm left}= 1-P_{\rm right}$ ###Code import torch import torch.nn as nn import torch.nn.functional as F # set up a convolutional neural net # the output is the probability of moving right # P(left) = 1-P(right) class Policy(nn.Module): def __init__(self): super(Policy, self).__init__() # 80x80 to outputsize x outputsize # outputsize = (inputsize - kernel_size + stride)/stride # (round up if not an integer) # conv1 : 80 x 80 -> 40 x 40 self.conv1 = nn.Conv2d(2, 4, kernel_size=2, stride=2) # conv2 : 40 x 40 -> 20 x 20 self.conv2 = nn.Conv2d(4, 8, kernel_size=2, stride=2) # conv3 : 20 x 20 -> 10 x 10 self.conv3 = nn.Conv2d(8, 16, kernel_size=2, stride=2) # conv4 : 10 x 10 -> 5 x 5 self.conv4 = nn.Conv2d(16, 32, kernel_size=2, stride=2) self.size = 32 * 5 * 5 # 1 fully connected layer self.fc1 = nn.Linear(self.size, 64) self.fc2 = nn.Linear(64, 8) self.fc3 = nn.Linear(8, 1) self.sig = nn.Sigmoid() def forward(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = F.relu(self.conv3(x)) x = F.relu(self.conv4(x)) x = x.view(-1, self.size) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.sig(self.fc3(x)) return x # use your own policy! # policy=Policy().to(device) policy=pong_utils.Policy().to(device) # we use the adam optimizer with learning rate 2e-4 # optim.SGD is also possible import torch.optim as optim optimizer = optim.Adam(policy.parameters(), lr=1e-4) ###Output _____no_output_____ ###Markdown Game visualizationpong_utils contain a play function given the environment and a policy. An optional preprocess function can be supplied. 
Here we define a function that plays a game and shows learning progress ###Code pong_utils.play(env, policy, time=200) # try to add the option "preprocess=pong_utils.preprocess_single" # to see what the agent sees ###Output _____no_output_____ ###Markdown Function DefinitionsHere you will define key functions for training. Exercise 2: write your own function for training(what I call scalar function is the same as policy_loss up to a negative sign) PPOLater on, you'll implement the PPO algorithm as well, and the scalar function is given by$\frac{1}{T}\sum^T_t \min\left\{R_{t}^{\rm future}\frac{\pi_{\theta'}(a_t|s_t)}{\pi_{\theta}(a_t|s_t)},R_{t}^{\rm future}{\rm clip}_{\epsilon}\!\left(\frac{\pi_{\theta'}(a_t|s_t)}{\pi_{\theta}(a_t|s_t)}\right)\right\}$the ${\rm clip}_\epsilon$ function is implemented in pytorch as ```torch.clamp(ratio, 1-epsilon, 1+epsilon)``` ###Code def discounted_future_rewards(rewards, ratio=0.999): n = rewards.shape[1] step = torch.arange(n)[:,None] - torch.arange(n)[None,:] ones = torch.ones_like(step) zeros = torch.zeros_like(step) target = torch.where(step >= 0, ones, zeros) step = torch.where(step >= 0, step, zeros) discount = target * (ratio ** step) discount = discount.to(device) rewards_discounted = torch.mm(rewards, discount) return rewards_discounted def clipped_surrogate(policy, old_probs, states, actions, rewards, discount = 0.995, epsilon=0.1, beta=0.01): actions = torch.tensor(actions, dtype=torch.int8, device=device) rewards = torch.tensor(rewards, dtype=torch.float, device=device) old_probs = torch.tensor(old_probs, dtype=torch.float, device=device) # convert states to policy (or probability) new_probs = pong_utils.states_to_prob(policy, states) new_probs = torch.where(actions == pong_utils.RIGHT, new_probs, 1.0-new_probs) # discounted cumulative reward R_future = discounted_future_rewards(rewards, discount) # subtract baseline (= mean of reward) R_mean = torch.mean(R_future) R_future -= R_mean ratio = new_probs / (old_probs + 1e-6) ratio_clamped = torch.clamp(ratio, 1-epsilon, 1+epsilon) ratio_PPO = torch.where(ratio < ratio_clamped, ratio, ratio_clamped) # policy gradient maxmize target surrogates = (R_future * ratio_PPO).mean() # include a regularization term # this steers new_policy towards 0.5 # which prevents policy to become exactly 0 or 1 # this helps with exploration # add in 1.e-10 to avoid log(0) which gives nan # entropy = -(new_probs*torch.log(old_probs+1.e-10) + (1.0-new_probs)*torch.log(1.0-old_probs+1.e-10)) # surrogates += torch.mean(beta*entropy) return surrogates envs = pong_utils.parallelEnv('PongDeterministic-v4', n=4, seed=12345) prob, state, action, reward = pong_utils.collect_trajectories(envs, policy, tmax=100) Lsur= clipped_surrogate(policy, prob, state, action, reward) print(Lsur) ###Output tensor(1.00000e-09 * 8.9407, device='cuda:0') ###Markdown TrainingWe are now ready to train our policy!WARNING: make sure to turn on GPU, which also enables multicore processing. It may take up to 45 minutes even with GPU enabled, otherwise it will take much longer! 
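###Markdown
Before kicking off the long training run, here is an optional quick sanity check that the matrix form of `discounted_future_rewards` defined above matches a plain backward recursion G_t = r_t + gamma * G_{t+1}. The reward values below are made up.
###Code
import torch

gamma = 0.995
r = torch.tensor([[1., 0., 0., 2., 1.]], device=device)

matrix_version = discounted_future_rewards(r, ratio=gamma)

# straightforward backward recursion over the time dimension
loop_version = torch.zeros_like(r)
G = torch.zeros(r.shape[0], device=device)
for t in reversed(range(r.shape[1])):
    G = r[:, t] + gamma * G
    loop_version[:, t] = G

print(matrix_version)
print(loop_version)
print(torch.allclose(matrix_version, loop_version))
###Output
_____no_output_____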
###Code from parallelEnv import parallelEnv import numpy as np # keep track of how long training takes # WARNING: running through all 800 episodes will take 30-45 minutes # training loop max iterations episode = 500 # widget bar to display progress import progressbar as pb widget = ['training loop: ', pb.Percentage(), ' ', pb.Bar(), ' ', pb.ETA() ] timer = pb.ProgressBar(widgets=widget, maxval=episode).start() envs = parallelEnv('PongDeterministic-v4', n=8, seed=1234) discount_rate = .99 epsilon = 0.1 beta = .01 tmax = 200 SGD_epoch = 4 # keep track of progress mean_rewards = [] for e in range(episode): # collect trajectories old_probs, states, actions, rewards = \ pong_utils.collect_trajectories(envs, policy, tmax=tmax) total_rewards = np.sum(rewards, axis=0) # gradient ascent step for _ in range(SGD_epoch): # uncomment to utilize your own clipped function! # L = -clipped_surrogate(policy, old_probs, states, actions, rewards, epsilon=epsilon, beta=beta) L = -pong_utils.clipped_surrogate(policy, old_probs, states, actions, rewards, epsilon=epsilon, beta=beta) optimizer.zero_grad() L.backward() optimizer.step() del L # the clipping parameter reduces as time goes on epsilon*=.999 # the regulation term also reduces # this reduces exploration in later runs beta*=.995 # get the average reward of the parallel environments mean_rewards.append(np.mean(total_rewards)) # display some progress every 20 iterations if (e+1)%20 ==0 : print("Episode: {0:d}, score: {1:f}".format(e+1,np.mean(total_rewards))) print(total_rewards) # update progress widget bar timer.update(e+1) timer.finish() pong_utils.play(env, policy, time=200) # save your policy! torch.save(policy, 'PPO.policy') # load policy if needed # policy = torch.load('PPO.policy') # try and test out the solution # make sure GPU is enabled, otherwise loading will fail # (the PPO verion can win more often than not)! # # policy_solution = torch.load('PPO_solution.policy') # pong_utils.play(env, policy_solution, time=2000) ###Output _____no_output_____
3.1_Animal-crackers_function.ipynb
###Markdown
ANIMAL CRACKERS: Write a function that takes a two-word string and returns True if both words begin with the same letter
animal_crackers('Levelheaded Llama') --> True
animal_crackers('Crazy Kangaroo') --> False
###Code
def animal_crackers(text):
    # split the two-word string and compare the first letters, ignoring case
    first, second = text.lower().split()
    return first[0] == second[0]

animal_crackers('Rachel Etter-Phoya')
###Output
False
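###Markdown
A couple of extra calls against the examples given in the prompt itself (expected results are noted as comments rather than captured output):
###Code
animal_crackers('Levelheaded Llama')  # expected: True
animal_crackers('Crazy Kangaroo')     # expected: False
###Output
_____no_output_____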
Class 01/Tutorial Numpy.ipynb
###Markdown
__NUMERICAL METHODS__ __Basic Numpy Tutorial__ At this point you are expected to have installed Anaconda 3 on your machine and gone through the Jupyter Notebook tutorial. If you have not done so, go back and do it, or you will not succeed in this tutorial. The idea here is to serve as self-study material. You are therefore expected to create a new notebook and run all the commands below, trying to understand each output, and to answer all the exercises scattered through the notebook you have just created, noting down any questions you have (after consulting the library's documentation, of course) so that they can be discussed in class. Otherwise there will be no learning. We start, as discussed in class, by importing the module:
###Code
import numpy as np

lista = [-2, 4., '7', 9]
lista
lista*2
b = []
for i in lista:
    b.append(i*2)
b
###Output
_____no_output_____
###Markdown
1. CREATING ARRAYS There are several ways to create arrays; one of them is the np.array() method, using a Python list or tuple:
###Code
# np.array()   # calling np.array() with no arguments raises a TypeError - pass a list or tuple, as in the cells below
###Output
_____no_output_____
###Markdown
Type promotion order: string > complex > float > int
###Code
np.array((1,2))
np.array([-2, 4., 7, 9j])
0.1 + 1j
my_tuple = (8, -5, 0, 3.2)
arr = np.array(my_tuple)
print('arr = ', arr)
###Output
arr = [ 8. -5. 0. 3.2]
###Markdown
Using the arange method (similar to Python's range):
###Code
# signature: np.arange(start, stop, step)
###Output
_____no_output_____
###Markdown
np.arange(12) vector from 0 up to 12 (exclusive), with step 1; np.arange(2,12) vector from 2 up to 12, with unit step; np.arange(2,12,2) vector from 2 up to 12, with step 2
###Code
# We also have another method: linspace
###Output
_____no_output_____
###Markdown
np.linspace(start, stop, num)
###Code
# np.linspace(10)    # does not run with a single value - start and stop are required
np.linspace(1,10)      # vector between 1 and 10 with 50 values
np.linspace(1,10,10)   # vector between 1 and 10 with 10 values
###Output
_____no_output_____
###Markdown
Other methods
###Code
np.empty(5)     # the output is usually memory garbage or zeros
np.empty((2,3))
# np.eye
np.eye(4)       # eye creates a two-dimensional array with 1s on the diagonal and 0s elsewhere
10*np.eye(4)
###Output
_____no_output_____
###Markdown
Using the 'zeros()' and 'ones()' methods to create arrays of 0s and 1s, respectively:
###Code
arr = np.zeros(8)       # array containing 8 'zeros'
print (arr)
arr2 = np.ones((4,3))   # array containing 12 'ones', in a 4 x 3 shape
print(arr2)
###Output
[0. 0. 0. 0. 0. 0. 0. 0.]
[[1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]
###Markdown
2. ARRAY ATTRIBUTES There are several attributes, but the three main ones are .shape, .size and .dtype
###Code
arr = np.linspace(.5, 50, 12)
arr
m = arr.reshape(3,4)
m
print('.shape: ', arr.shape)  # prints the shape of the array
print('.size: ', arr.size)    # prints the size of the array
print('.dtype: ', arr.dtype)  # prints the data type of the elements
print('.shape: ', m.shape)    # prints the shape of the array
print('.size: ', m.size)      # prints the size of the array
print('.dtype: ', m.dtype)    # prints the data type of the elements
###Output
.shape: (3, 4) .size: 12 .dtype: float64
###Markdown
Other attributes:
###Code
print('.ndim: ', m.ndim)          # number of dimensions
print('.itemsize: ', m.itemsize)  # size of each element in bytes
###Output
.ndim: 2 .itemsize: 8
###Markdown
3. METHODS There are countless methods that help us work with numpy arrays; here are the most used ones:
###Code
arr = np.arange(1,10)
print('original array: ', arr, '\n')
###Output
original array: [1 2 3 4 5 6 7 8 9]
###Markdown
As we have already seen, 'reshape()' changes the shape of the array, BUT the number of elements of the two shapes must match, for example:
###Code
arr2 = arr.reshape((3,3))
print ('3x3 shape: \n' , arr2, '\n')
arr2.reshape(9)
arr.max()            # returns the maximum value
print(arr.argmax())  # returns the index holding the maximum value
arr[8] == arr[arr.argmax()]
arr.min()            # returns the minimum
print(arr.argmin())  # returns the index holding the minimum value
arr[0] == arr[arr.argmin()]
arr.mean()           # returns the mean
arr.std()            # returns the standard deviation
arr.sum()            # returns the sum of all elements of the array
###Output
_____no_output_____
###Markdown
4. BASIC OPERATIONS It is quite easy to perform mathematical operations with arrays
###Code
# let's create arrays A, B and C to operate on
A = np.arange(10)
B = np.arange(10,38,3)
C = np.array([0., 1., 2., 3., 4., 5., 6., 7.,np.nan, np.nan])
print (' A: ', A, '\n', 'B: ', B, '\n', 'C: ', C)
A+B
A**2
###Output
_____no_output_____
###Markdown
Notice that if you divide by zero, numpy will not raise an error (the code runs), but it will issue a warning!
###Code
B/A
10/0   # plain Python division by zero, by contrast, raises ZeroDivisionError
###Output
_____no_output_____
###Markdown
BROADCASTING
###Code
'''
You can perform operations involving an array and a single element; numpy
repeats the operation of the single element with every component of the
array, for example:
'''
A+C
###Output
_____no_output_____
###Markdown
5. UNIVERSAL FUNCTIONS
###Code
arr
np.sqrt(arr)  # returns an array with the square root of each element
np.sin(arr)   # returns an array with the sine of each element
np.cos(arr)   # returns an array with the cosine of each element
np.exp(arr)   # returns an array with the exponential of each element (e^x)
###Output
_____no_output_____
###Markdown
6. INDEXING AND SLICING Accessing the elements of an array works similarly to Python lists:
###Code
# create an array with the squares of the numbers from 1 to 10
arr = np.arange(1, 10) **2
arr
# accessing the first element
print('1st element: ', arr[0])   # remember that indexing starts at 0
# accessing the fifth element
print ('5th element: ', arr[4])  # the fifth element is accessed with index 4
# in a two-dimensional (or higher) array there are two ways to index:
arr = arr.reshape((3,3))
print (arr, '\n')
# selecting all rows, and the columns from index 1 onward
arr[:,1:]
# to access slices of elements we use slicing [i:j], for example
# rows: from row index 0 up to row index 2 (exclusive) = [0:2]
# columns: from column index 0 up to column 3 (exclusive) = [0:3] - in our case, all the columns
print(arr[0:2 , 0:3],'\n')
# another way to get the same result is using [:], which takes all the elements
print(arr[0:2 , :])
# we can create a new array from a slice of another
arr2 = arr[ : , 1:3]
arr2
# we can assign new values to our array using slicing
arr[1:2, :] = -99
arr
###Output
_____no_output_____
###Markdown
A picture is worth a thousand words: ![Image of Yaktocat](http://www.scipy-lectures.org/_images/numpy_indexing.png) CONDITIONAL SELECTION An interesting way of selecting elements based on some condition
###Code
A = np.arange(10)
print ('A: ', A)
A > 5   # returns an array of booleans according to the condition
# we can also create an array from this condition
B = A > 5
# this gives us 2 options for conditional selection: passing the condition or passing the boolean array as an index
print (A [A > 5])
print (A [B])
# in both cases only the values that satisfy the condition are returned
###Output
_____no_output_____
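###Markdown
To make the broadcasting idea from section 4 a bit more concrete, here is a small additional example (the arrays below are made up for illustration): a scalar is stretched across the whole array, and a column combined with a row builds a full table.
###Code
# scalar broadcasting: the single value 10 is applied to every element
A = np.arange(5)
print(A + 10)

# shape broadcasting: a (3,1) column combined with a (1,4) row gives a (3,4) table
col = np.arange(3).reshape(3, 1)
row = np.arange(4).reshape(1, 4)
print(col + row)
###Output
_____no_output_____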
QLearning/dqn_asteroids_gym.ipynb
###Markdown ATARI Asteroids DQN_gym with keras-rl ###Code import numpy as no import gym from keras.models import Sequential from keras.layers import Dense, Activation, Flatten from keras.optimizers import Adam from rl.agents.dqn import DQNAgent from rl.agents.ddpg import DDPGAgent from rl.policy import BoltzmannQPolicy , LinearAnnealedPolicy , EpsGreedyQPolicy from rl.memory import SequentialMemory ENV_NAME_2 = 'Asteroids-v0' # Get the environment and extract the number of actions env = gym.make(ENV_NAME_2) nb_actions = env.action_space.n nb_actions # Next, we build a neural network model model = Sequential() model.add(Flatten(input_shape=(1,) + env.observation_space.shape)) model.add(Dense(3, activation= 'tanh')) # One layer of 3 units with tanh activation function model.add(Dense(nb_actions)) model.add(Activation('sigmoid')) # one layer of 1 unit with sigmoid activation function print(model.summary()) #DQN -- Deep Reinforcement Learning #Configure and compile the agent. #Use every built-in Keras optimizer and metrics! memory = SequentialMemory(limit=20000, window_length=1) policy = BoltzmannQPolicy() dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=10, target_model_update=1e-2, policy=policy) dqn.compile(Adam(lr=1e-3), metrics=['mae', 'acc']) ## Visualize the training during 500000 steps dqn.fit(env, nb_steps=500000, visualize=True, verbose=2) #Plot loss variations import matplotlib.pyplot as plt episodes = [1455,2423,3192,4037,4472,6292,7098,8897, 10784,11988,13541,14309,14855,15676,17303,18249, 21461,22917,23238,23586,26189,27369,28548,28919, 30695,32084,33770,35502,36281,37151,38717,39922, 41911,42709,43787,46754,48099,48561] loss = [21.74,36.16,32.86,33.93,31.62,31.17,28.76,27.21,31.20, 30.66,28.269,28.651,25.91,29.79,31.83,33.02,30.15,28.89, 26.92,25.30,27.16,27.08,24.59,30.02,26.48,28.96,30.13, 29.65,27.24,28.61,27.87,27.72,26.7,27.76,27.96,30.41,32.34,25.04] plt.plot(episodes, loss, 'r--') plt.axis([0, 50000, 0, 40]) plt.show() ## Save the model dqn.save_weights('dqn_{}_weights.h5f'.format(ENV_NAME_2), overwrite=True) # Evaluate the algorithm for 10 episodes dqn.test(env, nb_episodes=10, visualize=True) ### Another Policy with dqn policy = LinearAnnealedPolicy(EpsGreedyQPolicy(), attr="eps", value_max=.8, value_min=.01, value_test=.0, nb_steps=100000) dqn = DQNAgent(model=model, nb_actions=nb_actions, nb_steps_warmup=10, policy=policy, test_policy=policy, memory = memory, target_model_update=1e-2) dqn.compile(Adam(lr=1e-3), metrics=['mae', 'acc']) dqn.fit(env, nb_steps=50000, visualize=True, verbose=2) episodes_p2 = [2647,5062,6988,7721,9006,9489, 10482,11303,12967,14767,17088,17370,17887,18599,19641, 20361,20921,22419,23198,24514,26366,27983,29873, 30851,31931,32370,34069,35248,36460,38501,39551, 40200,42374,43610] loss_p2 = [29.99,29.21,30.04,26.03,26.04,26.78,27.92,32.33,25.37,26.68,26.14,28.39, 33.06,24.59,26.5,30.07,30.02,32.05,25.4,29.14,28.68,30.82, 30.10,31.20, 33.85,30.20,35.34,31.25,32.28,30.29,32.97,29.07,31.01,28.14] plt.plot(episodes_p2, loss_p2, 'r--') plt.axis([0, 50000, 0, 40]) plt.show() dqn.test(env, nb_episodes=10, visualize=True) #SARSA Agent -- Reinforcement Learning from rl.agents.sarsa import SARSAAgent sarsa = SARSAAgent(model, nb_actions, policy=None, test_policy=None, gamma=0.99, nb_steps_warmup=10, train_interval=1) sarsa.compile(Adam(lr=1e-3), metrics=['mae', 'acc']) sarsa.fit(env, nb_steps=50000, visualize=True, verbose=2) sarsa.test(env, nb_episodes=10, visualize=True) ###Output _____no_output_____
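###Markdown
The loss curves above are plotted from values that were re-typed by hand from the console log. A less error-prone option - a sketch, assuming keras-rl's `fit()` returns a Keras `History` object whose `history` dict carries per-episode series such as `episode_reward` (that is my recollection of its API, so treat the key name as an assumption) - is to keep the return value of `fit()` and plot it directly:
###Code
# hypothetical usage: capture the History object returned by fit() and plot it,
# instead of copying numbers from the console output by hand
history = dqn.fit(env, nb_steps=50000, visualize=False, verbose=2)

episode_rewards = history.history.get('episode_reward', [])
plt.plot(episode_rewards)
plt.xlabel('episode')
plt.ylabel('episode reward')
plt.show()
###Output
_____no_output_____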
patch_detection.ipynb
###Markdown ML Playground (CNN, mostly) Creating DatasetBased on above information, we can create custom dataset. 사실 Dataset만 상속받고 나머지는 알아서 잘 해도 된다 하더라.- https://pytorch.org/tutorials/beginner/basics/data_tutorial.html- https://blog.paperspace.com/dataloaders-abstractions-pytorch/- https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel- https://pytorch.org/tutorials/beginner/data_loading_tutorial.html ###Code class PatchLandmarkDataSet(Dataset): def __init__(self, data_dir: str, image_postfix:str, tsv_postfix:str, landmark_name:str): self.photo_img_string = image_postfix self.photo_tsv_string = tsv_postfix self.data_dir = data_dir self.landmark_name = landmark_name files = os.listdir(self.data_dir) self.photo_images = [x for x in files if self.photo_img_string in x] self.photo_tsvs = [x for x in files if self.photo_tsv_string in x] assert(len(self.photo_images) == len(self.photo_tsvs)) for i in range(len(self.photo_images)): x, y = self.photo_images[i], self.photo_tsvs[i] assert(os.path.splitext(x)[0] == os.path.splitext(y)[0]) def __len__(self): return len(self.photo_tsvs) # load_tsv: load tsv --> return dataframe with name, x, y column. def load_tsv(self, name): # Loading dataframe df = pd.read_csv(os.path.join(self.data_dir, name), sep='\t') df = df.iloc[:99, 0:3] df.columns = ['name', 'X', 'Y'] return df # load_image: load image --> return plt.Image grayscale. def load_image(self, name): image = cv2.imread(os.path.join(self.data_dir, name), flags=cv2.IMREAD_GRAYSCALE) img = Image.fromarray(image) return img def extract_landmark(self, df): df = df.loc[df['name'] == self.landmark_name] df = df.loc[:, ['X', 'Y']] df = df.reset_index(drop=True) landmark = df.to_numpy(dtype=np.float32) return landmark # bounding_box: landmark --> return top, left, height, width def bounding_box(self, landmark): cx, cy = landmark[0] width, height = random.randint(80, 160), random.randint(80, 160) top, left = cy - random.randint(0, height), cx - random.randint(0, width) return int(top), int(left), int(height), int(width) def rotate(self, img, landmark, angle): angle = random.uniform(-angle, +angle) transformation_matrix = torch.tensor([ [+math.cos(math.radians(angle)), -math.sin(math.radians(angle))], [+math.sin(math.radians(angle)), +math.cos(math.radians(angle))] ]) image = imutils.rotate(np.array(img), angle) landmark = landmark - 0.5 new_landmarks = np.matmul(landmark, transformation_matrix) new_landmarks = new_landmarks + 0.5 return Image.fromarray(image), new_landmarks def crop(self, img, landmark, top, left, height, width): # Cropping image... img = TF.crop(img, top, left, height, width) #oh, ow = np.array(img).shape[0], np.array(img).shape[1] landmark = torch.tensor(landmark) - torch.tensor([[left, top]]) landmark = landmark / torch.tensor([width, height]) return img, landmark def normalize(self, img, landmark): # normalizing the pixel values img = TF.to_tensor(img) img = TF.normalize(img, [0.6945], [0.33497]) landmark -= 0.5 return img, landmark def __getitem__(self, index): img_name = self.photo_images[index] tsv_name = self.photo_tsvs[index] img = self.load_image(img_name) df = self.load_tsv(tsv_name) landmark = self.extract_landmark(df) top, left, height, width = self.bounding_box(landmark) img, landmark = self.crop(img, landmark, top, left, height, width) # resizing image.. 
img = TF.resize(img, (224, 224)) # packing image # use dsplit when RGB to make 224x224x3 --> 3x224x224 #img = np.dsplit(img, img.shape[-1]) img, landmark = self.rotate(img, landmark, 10) img, landmark = self.normalize(img, landmark) #arr = arr.flatten('F') return img, landmark data_dir = "./AutoAlign" test_dir = "./AutoAlign_test" weights_path = 'face_landmarks_patch_' landmark_name = '29@2' # '29@[2479]|30@[34]' # for 18: '29@[1-9]\d?|30@[1-7]' photo_postfix = "lat_photo.jpg" tsv_postfix = "lat_photo.txt" dataset = PatchLandmarkDataSet(data_dir, photo_postfix, tsv_postfix, landmark_name) test_dataset = PatchLandmarkDataSet(test_dir, photo_postfix, tsv_postfix, landmark_name) # split the dataset into validation and test sets len_valid_set = len(test_dataset) len_train_set = len(dataset) print("The length of Train set is {}".format(len_train_set)) print("The length of Valid set is {}".format(len_valid_set)) data_train_loader = DataLoader(dataset, batch_size=64, shuffle=True) print(len(data_train_loader)) valid_loader = DataLoader(test_dataset, batch_size=8, shuffle=True) # Display image and label. train_features, train_labels = next(iter(data_train_loader)) print(f"Feature batch shape: {train_features.size()}") print(f"Labels batch shape: {train_labels.size()}") img = train_features[0] label = train_labels[0] print(f"Label: {label}") landmarks = (label + 0.5) * 224 print(f"landmarks: {landmarks}") plt.figure(figsize=(10, 10)) plt.imshow(img.squeeze(), cmap='gray') plt.scatter(landmarks[:,0], landmarks[:,1], s=8) device = 'cuda' if torch.cuda.is_available() else 'cpu' print('Using {} device'.format(device)) ###Output Using cuda device ###Markdown NetworkBased on https://colab.research.google.com/drive/1-28T5nIAevrDo6MwN0Qi_Cgdy9TEiSP_?usp=sharingscrollTo=XH_bqPXo6YG8Resnext50을 이용한다. 일단은 Greyscale(컬러로 확장도 가능하나 실익이 크지 않다.)https://towardsdatascience.com/face-landmarks-detection-with-pytorch-4b4852f5e9c4 ###Code class Network(nn.Module): def __init__(self,num_classes=2): super().__init__() self.model_name='resnet50' #self.model=models.resnet18(pretrained=True) self.model=models.resnet50(pretrained=True) self.model.conv1=nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False) # for param in self.parameters(): # param.requires_grad = False # RGB: self.model.conv1=nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False) #self.model.conv1=nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1, bias=False) #self.model.conv1=nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False) self.model.fc=nn.Linear(self.model.fc.in_features, num_classes) def forward(self, x): x=self.model(x) return x ###Output _____no_output_____ ###Markdown Training그림이 커서 그런지 초반에는 구데기로 나오고, 최소 100 epoch 이상은 해 줘야 할 것 같다.- https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html- http://incredible.ai/artificial-intelligence/2017/05/13/Transfer-Learning/%EC%83%88%EB%A1%9C-%ED%9B%88%EB%A0%A8%ED%95%A0-%EB%8D%B0%EC%9D%B4%ED%84%B0%EA%B0%80-%EC%A0%81%EC%9C%BC%EB%A9%B0-original-%EB%8D%B0%EC%9D%B4%ED%84%B0%EC%99%80-%EB%8B%A4%EB%A5%B8-%EA%B2%BD%EC%9A%B0- https://hanqingguo.github.io/Face_detection ###Code # %%capture cap_out --no-stderr torch.autograd.set_detect_anomaly(True) network = Network() # use load_state_dict to load previously trained model. 
# checkpoint = torch.load(f"model/0518_1226_6_100_face_landmarks_transfer_resnext50.tar")) # use pth only # network.load_state_dict(torch.load(f"model/0525_1926_6_100_face_landmarks_transfer__resnext50.pth")) # use checkpoint # network.load_state_dict(checkpoint['network_state_dict']) network.cuda() print(network) criterion = nn.MSELoss() optimizer = optim.Adam(filter(lambda p: p.requires_grad, network.parameters()), lr=0.001) # Load optimizer too # optimizer.load_state_dict(checkpoint['optimizer_state_dict']) loss_min = np.inf num_epochs = 100 logger = pd.DataFrame(columns=['train loss', 'valid loss']) start_time = time.time() time_str = time.strftime(f"%m%d_%H%M") for epoch in range(1,num_epochs+1): loss_train = 0 loss_valid = 0 running_loss = 0 network.train() for step in range(1,len(data_train_loader)+1): images, landmarks = next(iter(data_train_loader)) images = images.cuda() landmarks = landmarks.view(landmarks.size(0),-1).cuda() predictions = network(images) optimizer.zero_grad() loss_train_step = criterion(predictions, landmarks) loss_train_step.backward() optimizer.step() loss_train += loss_train_step.item() running_loss = loss_train/step print_overwrite(step, len(data_train_loader), running_loss, 'train') network.eval() with torch.no_grad(): for step in range(1,len(valid_loader)+1): images, landmarks = next(iter(valid_loader)) images = images.cuda() landmarks = landmarks.view(landmarks.size(0),-1).cuda() predictions = network(images) loss_valid_step = criterion(predictions, landmarks) loss_valid += loss_valid_step.item() running_loss = loss_valid/step print_overwrite(step, len(valid_loader), running_loss, 'valid') loss_train /= len(data_train_loader) loss_valid /= len(valid_loader) print('\n--------------------------------------------------') print('Epoch: {} Train Loss: {:.4f} Valid Loss: {:.4f}'.format(epoch, loss_train, loss_valid)) print('--------------------------------------------------') logger.loc[epoch - 1] = [loss_train, loss_valid] if loss_valid < loss_min: loss_min = loss_valid # torch.save(network.state_dict(), f"model/{time_str}_{landmark_number}_{num_epochs}_{weights_path}_{network.model_name}.pth") torch.save({ 'network_state_dict': network.state_dict(), 'optimizer_state_dict': optimizer.state_dict() }, f"model/{time_str}_{landmark_name}_{num_epochs}_{weights_path}_{network.model_name}.tar") print("\nMinimum Validation Loss of {:.4f} at epoch {}/{}".format(loss_min, epoch, num_epochs)) print('Model Saved\n') # cap_out.show() print('Training Complete') print("Total Elapsed Time : {} s".format(time.time()-start_time)) logger.to_csv(f'csv/{time_str}_{landmark_name}_{num_epochs}_{weights_path}_train_data.csv') # cap_out.show() # with open(f'csv/{time_str}_{landmark_number}_{num_epochs}_train_log.txt') as capture_file: # capture_file.write(cap_out) def pixel_distance(landmark, reference): ''' pixel_distance(landmark: np.array[[x, y], ..] reference: np.array[[x, y]] with true landmark value return: average: float average distance, each: np.array[distance, ..] 
with distance of each landmark ''' each = [] for i in range(len(landmark)): each.append(np.linalg.norm(landmark[i] - reference[i])) each = np.array(each) average = np.average(each) return average, each start_time = time.time() with torch.no_grad(): best_network = Network() best_network.cuda() best_network.load_state_dict(torch.load(f"model/{time_str}_{landmark_name}_{num_epochs}_{weights_path}_{network.model_name}.tar")['network_state_dict']) # best_network.load_state_dict(torch.load(f"model/0528_1615_29@7_100_face_landmarks_patch__resnext50.tar")['network_state_dict']) best_network.eval() images, landmarks = next(iter(valid_loader)) landmarks = (landmarks+0.5) * 224 images = images.cuda() predictions = (best_network(images).cpu() + 0.5) * 224 predictions = predictions.view(-1,1,2) plt.figure(figsize=(10,40)) for img_num in range(8): plt.subplot(8,1,img_num+1) plt.imshow(images[img_num].cpu().numpy().transpose(1,2,0).squeeze(), cmap='gray') plt.scatter(predictions[img_num,:,0], predictions[img_num,:,1], c = 'r', s = 5) plt.scatter(landmarks[img_num,:,0], landmarks[img_num,:,1], c = 'g', s = 5) average, each = pixel_distance(predictions[img_num], landmarks[img_num]) print(average) print(each) print('Total number of test images: {}'.format(len(valid_dataset))) end_time = time.time() print("Elapsed Time : {}".format(end_time - start_time)) ###Output 77.12213 [77.12213] 76.465126 [76.465126] 61.048904 [61.048904] 38.681107 [38.681107] 141.01897 [141.01897] 26.293432 [26.293432] 37.25666 [37.25666] 23.708183 [23.708183] Total number of test images: 50 Elapsed Time : 1.7429306507110596 ###Markdown From Face detection to landmark detection, IRLhttps://github.com/timesler/facenet-pytorch- With pip:pip install facenet-pytorch- or clone this repo, removing the '-' to allow python imports:git clone https://github.com/timesler/facenet-pytorch.git facenet_pytorch- or use a docker container (see https://github.com/timesler/docker-jupyter-dl-gpu):docker run -it --rm timesler/jupyter-dl-gpu pip install facenet-pytorch && ipython ###Code def reject_outliers(data, m = 2.): d = np.abs(data - np.median(data, 0)) mdev = np.median(d, 0) s = d/mdev if mdev.all() else np.array([0, 0]) sqsum = (s[:,0] * s[:, 0] + s[:, 1] * s[:, 1]) ** 0.5 return data[sqsum<m] from facenet_pytorch import MTCNN, InceptionResnetV1 start = time.time() # If required, create a face detection pipeline using MTCNN: mtcnn = MTCNN(image_size=224, device=device) ####################################################################### image_path = 'AutoAlign/0001___________000_lat_photo.jpg' patch_size = 80 patch_size_upper = 160 ####################################################################### base_network = Network(num_classes=12) base_network.load_state_dict(torch.load(f"model/0602_1303_6_100_face_landmarks_transfer__resnet50.tar")['network_state_dict']) # base_network.load_state_dict(torch.load(f"model/0518_1226_6_100_face_landmarks_transfer_resnext50.pth")) base_network.eval() image_open = time.time() input_image = Image.open(image_path) grayscale_image = input_image.convert('L') height, width = input_image.size[0], input_image.size[1] print(height, width) # Get cropped and prewhitened image tensor boxes, probs = mtcnn.detect(input_image) face = boxes[0] x0, y0, x1, y1 = face x0, y0, x1, y1 = int(x0), int(y0), int(x1), int(y1) #face = (faces + 1) * 255 # image = np.array(grayscale_image) # image = image[y0:y1, x0:x1] # image = TF.crop(grayscale_image, y0, x0, y1-y0, x1-x0) image = TF.resized_crop(grayscale_image, y0, x0, y1-y0, 
x1-x0, size=(224, 224)) # plt.imshow(image, cmap='gray') # plt.imsave("profile_cut.png", image, cmap='gray') # image = TF.resize(Image.fromarray(image), size=(224, 224)) image = TF.to_tensor(image) image = TF.normalize(image, [0.6945], [0.33497]) from_image_inference = time.time() with torch.no_grad(): landmarks = base_network(image.unsqueeze(0)) # second inference using landmark and bounding box # use for i in range(6) for real case. # for testing, testing using landmarks[0,0:2] patch_nets = [] patch_net0 = Network(num_classes=2) patch_net0.load_state_dict(torch.load(f"model/0602_1419_29@2_100_face_landmarks_patch__resnet50.tar")['network_state_dict']) patch_net0.eval() patch_nets.append(patch_net0) # patch_net1 = Network(num_classes=2) # patch_net1.load_state_dict(torch.load(f"model/0528_1418_29@4_100_face_landmarks_patch__resnext50.tar")['network_state_dict']) # patch_net1.eval() # patch_nets.append(patch_net1) # patch_net2 = Network(num_classes=2) # patch_net2.load_state_dict(torch.load(f"model/0528_1615_29@7_100_face_landmarks_patch__resnext50.tar")['network_state_dict']) # patch_net2.eval() # patch_nets.append(patch_net2) # patch_net3 = Network(num_classes=2) # patch_net3.load_state_dict(torch.load(f"model/0529_1908_29@9_100_face_landmarks_patch__resnext50.tar")['network_state_dict']) # patch_net3.eval() # patch_nets.append(patch_net3) # patch_net4 = Network(num_classes=2) # patch_net4.load_state_dict(torch.load(f"model/0529_2015_30@3_100_face_landmarks_patch__resnext50.tar")['network_state_dict']) # patch_net4.eval() # patch_nets.append(patch_net4) # patch_net5 = Network(num_classes=2) # patch_net5.load_state_dict(torch.load(f"model/0529_2119_30@4_100_face_landmarks_patch__resnext50.tar")['network_state_dict']) # patch_net5.eval() # patch_nets.append(patch_net5) print(len(patch_nets)) landmark_means = [] print(landmarks) landmarks = (landmarks.view(6, 2).detach().numpy() + 0.5) * np.array([[x1-x0, y1-y0]]) + np.array([[x0, y0]]) for i in range(0, len(patch_nets), 1): x, y = landmarks[i, 0:2] patches = torch.tensor(()) patches = patches.new_empty([10, 1, 224, 224]) print(patches.shape) single_landmarks = [] width_height = np.empty([10, 2]) top_left = np.empty([10, 2]) for j in range(10): h, w = random.randint(patch_size, patch_size_upper), random.randint(patch_size, patch_size_upper) bias_y, bias_x = random.randint(h//4, h), random.randint(w//4, w) t, l = y - bias_y , x - bias_x patch = TF.resized_crop(grayscale_image, t,l, h, w, size=(224, 224)) # plt.imshow(patch, cmap='gray') patch = TF.to_tensor(patch) patch = TF.normalize(patch, [0.6945], [0.33497]) patches[j] = patch width_height[j] = np.array([w, h]) top_left[j] = np.array([l, t]) print(patches.shape) patch_net = patch_nets[i] with torch.no_grad(): single_landmark = patch_net(patches) print(single_landmark) single_landmark = (single_landmark.view(10,2).detach().numpy() + 0.5) * width_height + top_left print(single_landmark) std, mean = np.std(single_landmark, 0), np.mean(single_landmark, 0) print(std, mean) trimmed = reject_outliers(single_landmark, 2.) 
trimmed_mean = np.mean(trimmed, 0) print(f"trimmed_mean: {trimmed_mean}") landmark_means.append(trimmed_mean) # landmark_means.append(mean) landmark_means = np.array(landmark_means) print(landmark_means) # plt.figure() # plt.imshow(input_image) # plt.scatter(landmark_means[:,0], landmark_means[:,1], c = 'c', s = 5) # plt.savefig('result.png', dpi=300) # plt.show() end = time.time() print(f"took about {end - start}s") print(f"from image open, took {end - image_open}s") print(f"from inference, took {end - from_image_inference}s") ###Output 1488 2240 1 tensor([[ 0.3147, -0.1025, 0.4909, 0.1018, 0.3669, 0.1645, 0.4302, 0.2312, 0.4175, 0.3056, 0.3538, 0.3632]]) torch.Size([10, 1, 224, 224]) torch.Size([10, 1, 224, 224]) tensor([[ 0.2475, -0.2307], [ 0.2646, -0.2865], [ 0.2358, 0.0239], [-0.1364, 0.3452], [ 0.0705, -0.1153], [ 0.1976, -0.0858], [ 0.0407, 0.0711], [-0.0019, 0.0084], [ 0.3098, -0.2357], [-0.0951, 0.0366]]) [[1160.13297987 791.63704854] [1147.69425291 795.80870664] [1160.69068837 784.20037657] [1164.5067445 782.09372288] [1179.04774117 785.36970523] [1141.52858847 788.69395405] [1183.52525675 772.91980082] [1171.91022444 774.73556024] [1152.26205873 784.67078525] [1181.54648089 774.45529085]] [13.82785075 7.22892254] [1164.28450161 783.45849511] trimmed_mean: [1161.4398608 784.48587897] [[1161.4398608 784.48587897]] took about 9.43578839302063s from image open, took 6.245835781097412s from inference, took 4.445919990539551s ###Markdown Checking against answer. ###Code tsv_path = "AutoAlign/0001___________000_lat_photo.txt" def load_tsv(path): # Loading dataframe df = pd.read_csv(path, sep='\t') df = df.iloc[:99, 0:3] df.columns = ['name', 'X', 'Y'] return df def extract_landmarks(df, landmark_regex, landmark_length): # (gathering only needed landmarks) df = df.loc[df['name'].str.contains(landmark_regex, regex=True), :] # there are **18** landmarks that is unique and valid among all files # should we sort df? df = df.sort_values(by=['name']) df = df.loc[:, ['X', 'Y']] df = df.reset_index(drop=True) # ... 
and landmark landmark = df.to_numpy(dtype=np.float32) return landmark df = load_tsv(tsv_path) correct_landmarks = extract_landmarks(df, '29@[2479]|30@[34]', 6) print(landmarks) print(correct_landmarks) print(landmark_means) print(pixel_distance(landmarks, correct_landmarks)) print(pixel_distance(landmark_means, correct_landmarks)) ###Output [[1167.69924212 800.67183262] [1296.38644266 1021.55046564] [1205.8604722 1089.3155418 ] [1252.01578903 1161.47189438] [1242.76109028 1241.8140893 ] [1196.30767691 1304.09781796]] [[1176.356 773.9456] [1307.143 1001.12 ] [1210.724 1076.661 ] [1247.465 1171.943 ] [1245.158 1244.38 ] [1196.29 1316.294 ]] [[1161.4398608 784.48587897]] (15.310655967836823, array([28.09322478, 23.08909817, 13.55695576, 11.41726331, 3.51125368, 12.19614011])) (18.2643651677734, array([18.26436517])) ###Markdown Showing warped overlay image against film image ###Code film_path = "AutoAlign/0001___________000_lat_film.txt" film_landmarks = extract_landmarks(load_tsv(film_path), '29@[2479]|30@[34]', 6) # matrix, inliers = cv2.estimateAffinePartial2D(landmark_means, film_landmarks, method=cv2.LMEDS) matrix, inliers = cv2.estimateAffinePartial2D(landmarks[:], film_landmarks[:], method=cv2.LMEDS) # matrix, inliers = cv2.estimateAffinePartial2D(correct_landmarks, film_landmarks, method=cv2.LMEDS) print(matrix) print(inliers) translation_x, translation_y = matrix[0, 2], matrix[1, 2] scale = (matrix[0, 0] * matrix[0, 0] + matrix[1, 0] * matrix[1, 0]) ** 0.5 rotation = np.arctan2(matrix[0, 1], matrix[1, 1]) degree = np.rad2deg(rotation) print([translation_x, translation_y, scale, degree]) # solution: [-764.8932178875267, -713.3610459581183, 1.9335031260399236, -0.6305061269798841] output_string = f"translation_x={translation_x}\ntranslation_y={translation_y}\nscale={scale}\ndegree={degree}\n" with open("output.txt", "w") as output: output.write(output_string) film_img_path = "AutoAlign/0001___________000_lat_film.jpg" # load_image: load image --> return opencv. def load_image(path): img = Image.open(path) return img film_image = cv2.imread(film_img_path) input_image_cv2 = np.array(input_image) warped_image = cv2.warpAffine(input_image_cv2, matrix, (film_image.shape[1], film_image.shape[0])) # plt.figure() # plt.imshow(film_image) # plt.imshow(warped_image, alpha=0.5) # plt.show() ###Output _____no_output_____
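###Markdown
The trimmed-mean aggregation above depends on `reject_outliers`, whose median-absolute-deviation logic is easy to misread. As a quick standalone check, here it is applied to made-up toy points with one obvious outlier (expected results are noted as comments):
###Code
# toy data: four points clustered around (10, 10) plus one far-away outlier
toy = np.array([[10., 10.],
                [11.,  9.],
                [ 9., 11.],
                [10.,  9.],
                [50., 60.]])

kept = reject_outliers(toy, 2.)
print(kept)              # expected: only the four clustered points remain
print(np.mean(kept, 0))  # trimmed mean, expected [10. , 9.75]
###Output
_____no_output_____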
helpers_imam.ipynb
###Markdown Python Helper Methods ###Code import sys import networkx ###Output _____no_output_____ ###Markdown --------- Read input file ---------- ###Code #TODO: make f for every input f = "inputs/100_50.in" #f = "inputs/342_50.in" #f = "inputs/151_50.in" #Takes in file location as string #Outputs tuple of ({adjacency dictionary}, [names of homes], [indices of homes], start_loc) def take_input(f): file = open(f, "r") file_lines = file.readlines() n = int(file_lines[0]) #num locs h = int(file_lines[1]) #num homes locs = file_lines[2].split() homes = file_lines[3].split() start = file_lines[4] home_indices = [locs.index(i) for i in homes] #list of home location indices #Create dictionary for every location adjacencies = {} #Create dictionary of adjacencent locations for every location # adjacencies = {"loc" : [distance to every other loc], ...} # If distance == "x" -> None for i in range(0,n): adj = [None if j == "x" else int(j) for j in file_lines[5+i].split()] adjacencies[locs[i]] = adj return adjacencies, homes, home_indices, start output = take_input(f) print("LOCATION KEYS: \n", output[0].keys()) print("HOMES: \n", output[1]) print("HOME INDICES: \n", output[2]) ###Output LOCATION KEYS: dict_keys(['Marbleiron', 'Courtcastle', 'Ironriver', 'Courtwall', 'Morwald', 'Clearnesse', 'Valhaven', 'Buttermead', 'Violetley', 'Westvale', 'Woodice', 'Esterfall', 'Eriwall', 'Goldwyvern', 'Newby', 'Foxvale', 'Morwall', 'Oakmere', 'Foxhaven', 'Freyview', 'Ironshore', 'Oldmoor', 'Dellwyn', 'Butterbush', 'Corbank', 'Esterash', 'Stonefair', 'Westerlight', 'Clearland', 'Bushgate', 'Brightmere', 'Stoneglass', 'Wildemarsh', 'Wellmage', 'Snowshore', 'Irondeer', 'Lightmere', 'Southwheat', 'Westmill', 'Lochmage', 'Shoreburn', 'Erilyn', 'Eastcastle', 'Rosemill', 'Wolflea', 'Greenshore', 'Bluedell', 'Newley', 'Fairsage', 'Greyice']) HOMES: ['Oldmoor', 'Woodice', 'Dellwyn', 'Freyview', 'Oakmere', 'Eriwall', 'Valhaven', 'Newby', 'Foxhaven', 'Morwall', 'Goldwyvern', 'Corbank', 'Westvale', 'Morwald', 'Ironriver', 'Violetley', 'Courtwall', 'Ironshore', 'Marbleiron', 'Buttermead', 'Courtcastle', 'Butterbush', 'Foxvale', 'Esterfall', 'Clearnesse'] HOME INDICES: [21, 10, 22, 19, 17, 12, 6, 14, 18, 16, 13, 24, 9, 4, 2, 8, 3, 20, 0, 7, 1, 23, 15, 11, 5] ###Markdown ------------ Create nodes ------------- ###Code # TODO: set dynamic limit #Take in dictionary of adjacent locations and list of homes #Return dictionary representing nodes whose base #Does return repetitions of houses between nodes #Ignore? 
will get discounted on dropoff anyway def make_nodes(adjacencies, homes, home_indices): limit = 12000 #maximum distance away from node's base nodes = {} #create a node around every location #Create nodes for every home for home in homes: nodes[home] = [home] #Start every home's node with itself for index in home_indices: distance = adjacencies[home][index] if (distance != None) and (distance < limit): #append other home that is within limit to the node starting at that home current = nodes[home] nodes[home].append(homes[index]) #returns None #print(nodes) #Clean up node dictionary to only contain largest nodes ------------ deleted_nodes = nodes.copy() homes_represented = homes nodes_to_keep = list() #for node in node.keys(): while homes_represented: v = list(deleted_nodes.values()) k = list(deleted_nodes.keys()) biggest_node = k[v.index(max(v, key=len))] #remove homes that are already included in list homes_represented = [x for x in homes_represented if x not in nodes[biggest_node]] for home in nodes[biggest_node]: deleted_nodes.pop(home, None) nodes_to_keep.append(biggest_node) #print(nodes_to_keep) return {key: nodes[key] for key in nodes_to_keep} nodes = make_nodes(output[0], output[1], output[2]) nodes locs = list(nodes.values()) locs = set(sum(locs, [])) len(locs) ###Output _____no_output_____ ###Markdown -------------- Shortest route between nodes -------------Dijkstra's ###Code nodes_travel = list(nodes.keys()) nodes_travel def minDistance(self, dist, sptSet): # Initilaize minimum distance for next node min = sys.maxint # Search not nearest vertex not in the # shortest path tree for v in range(self.V): if dist[v] < min and sptSet[v] == False: min = dist[v] min_index = v return min_index # Funtion that implements Dijkstra's single source # shortest path algorithm for a graph represented # using adjacency matrix representation def dijkstra(self, src): dist = [sys.maxint] * self.V dist[src] = 0 sptSet = [False] * self.V for cout in range(self.V): # Pick the minimum distance vertex from # the set of vertices not yet processed. # u is always equal to src in first iteration u = self.minDistance(dist, sptSet) # Put the minimum distance vertex in the # shotest path tree sptSet[u] = True # Update dist value of the adjacent vertices # of the picked vertex only if the current # distance is greater than new distance and # the vertex in not in the shotest path tree for v in range(self.V): if self.graph[u][v] > 0 and \ sptSet[v] == False and \ dist[v] > dist[u] + self.graph[u][v]: dist[v] = dist[u] + self.graph[u][v] self.printSolution(dist) ###Output _____no_output_____
notebooks/07-import-hostgroups.ipynb
###Markdown SteelScript NetProfiler HostGroup Importing Imports and Setup ###Code import csv from collections import defaultdict import steelscript from steelscript.netprofiler.core.netprofiler import NetProfiler from steelscript.common.service import UserAuth from steelscript.netprofiler.core.hostgroup import HostGroupType, HostGroup from steelscript.commands.steel import prompt_yn from steelscript.common.exceptions import RvbdException hostname = "NETPROFILER.HOSTNAME.COM" username = "USERNAME" password = "PASSWORD" host = hostname auth = UserAuth(username, password) ###Output _____no_output_____ ###Markdown Initialize NetProfiler Object ###Code p = NetProfiler(host, auth=auth) ###Output _____no_output_____ ###Markdown Import Hostgroup File Create an example file using the required format and save it to temp directory. ###Code EXAMPLE = """\ "subnet","SiteName" "10.143.58.64/26","CZ-Prague-HG" "10.194.32.0/23","MX-SantaFe-HG" "10.170.55.0/24","KR-Seoul-HG" "10.234.9.0/24","ID-Surabaya-HG" "10.143.58.63/23","CZ-Prague-HG" """ example_file = '/tmp/example_groups.txt' with open(example_file, 'w') as f: f.writelines(EXAMPLE) ###Output _____no_output_____ ###Markdown Define an import function to read and process the input file. ###Code def import_file(input_file): """Process the input file and load into dict.""" groups = defaultdict(list) with open(input_file, 'rb') as f: reader = csv.reader(f) header = reader.next() if header != ['subnet', 'SiteName']: print 'Invalid file format' print 'Ensure file has correct header.' print 'example file:' print EXAMPLE for row in reader: cidr, group = row groups[group].append(cidr) return groups groups = import_file(example_file) groups ###Output _____no_output_____ ###Markdown Post Hostgroup Updates to NetProfiler ###Code def update_hostgroups(netprofiler, hostgroup, groups): """Replace existing ``hostgroup`` with contents of ``groups`` dict.""" # First find any existing HostGroupType and delete it. try: hgtype = HostGroupType.find_by_name(netprofiler, hostgroup) print ('Deleting existing HostGroupType "%s".' % hostgroup) hgtype.delete() except RvbdException: print 'No existing HostGroupType found, will create a new one.' pass # Create a new one hgtype = HostGroupType.create(netprofiler, hostgroup) # Add new values for group, cidrs in groups.iteritems(): hg = HostGroup(hgtype, group) hg.add(cidrs) # Save to NetProfiler hgtype.save() print 'Saved HostGroupType "%s".' % hostgroup update_hostgroups(p, 'TestHostGroup', groups) ###Output _____no_output_____ ###Markdown Complete Script Save to file 'import_hostgroups.py' and execute as follows: python import_hostgroups.py -u -p --hostgroup -i ###Code #!/usr/bin/env python # Copyright (c) 2019 Riverbed Technology, Inc. # # This software is licensed under the terms and conditions of the MIT License # accompanying the software ("License"). This software is distributed "AS IS" # as set forth in the License. import csv import sys import optparse from collections import defaultdict from steelscript.netprofiler.core.app import NetProfilerApp from steelscript.netprofiler.core.hostgroup import HostGroupType, HostGroup from steelscript.commands.steel import prompt_yn from steelscript.common.exceptions import RvbdException # This script will take a file with subnets and SiteNames # and create a HostGroupType on the target NetProfiler. # If the HostGroupType already exists, it will be deleted, # before creating a new one with the same name. # # See the EXAMPLE text below for the format of the input # file. 
Note that multiple SiteNames with different # IP address spaces can be included. EXAMPLE = """ "subnet","SiteName" "10.143.58.64/26","CZ-Prague-HG" "10.194.32.0/23","MX-SantaFe-HG" "10.170.55.0/24","KR-Seoul-HG" "10.234.9.0/24","ID-Surabaya-HG" "10.143.58.63/23","CZ-Prague-HG" """ class HostGroupImport(NetProfilerApp): def add_options(self, parser): super(HostGroupImport, self).add_options(parser) group = optparse.OptionGroup(parser, "HostGroup Options") group.add_option('--hostgroup', action='store', help='Name of hostgroup to overwrite') group.add_option('-i', '--input-file', action='store', help='File path to hostgroup file') parser.add_option_group(group) def validate_args(self): """Ensure all arguments are present.""" super(HostGroupImport, self).validate_args() if not self.options.input_file: self.parser.error('Host group file is required, specify with ' '"-i" or "--input-file"') if not self.options.hostgroup: self.parser.error('Hostgroup name is required, specify with ' '"--hostgroup"') def import_file(self): """Process the input file and load into dict.""" groups = defaultdict(list) with open(self.options.input_file, 'rb') as f: reader = csv.reader(f) header = reader.next() if header != ['subnet', 'SiteName']: print 'Invalid file format' print 'Ensure file has correct header.' print 'example file:' print EXAMPLE for row in reader: cidr, group = row groups[group].append(cidr) return groups def update_hostgroups(self, groups): """Replace existing HostGroupType with contents of groups dict.""" # First find any existing HostGroupType and delete it. try: hgtype = HostGroupType.find_by_name(self.netprofiler, self.options.hostgroup) print ('Deleting existing HostGroupType "%s".' % self.options.hostgroup) hgtype.delete() except RvbdException: print 'No existing HostGroupType found, will create a new one.' pass # Create a new one hgtype = HostGroupType.create(self.netprofiler, self.options.hostgroup) # Add new values for group, cidrs in groups.iteritems(): hg = HostGroup(hgtype, group) hg.add(cidrs) # Save to NetProfiler hgtype.save() def main(self): """Confirm overwrite then update hostgroups.""" confirm = ('The contents of hostgroup %s will be overwritten' 'by the file %s, are you sure?' % (self.options.hostgroup, self.options.input_file)) if not prompt_yn(confirm): print 'Okay, aborting.' sys.exit() groups = self.import_file() self.update_hostgroups(groups) print 'Successfully updated %s on %s' % (self.options.hostgroup, self.netprofiler.host) if __name__ == '__main__': HostGroupImport().run() ###Output _____no_output_____
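###Markdown The script above targets Python 2 (`print` statements, `reader.next()`, `groups.iteritems()`, and opening the CSV in `'rb'` mode). For reference, here is a minimal Python 3 sketch of just the `import_file` helper, assuming the same two-column `"subnet","SiteName"` CSV format; the name `import_file_py3` is illustrative only and is not part of steelscript. The same substitutions (`print()` calls, `next(reader)`, `.items()` in `update_hostgroups`) would apply to the class-based script as well. ###Code
# A minimal Python 3 sketch of the import_file helper, assuming the same
# two-column "subnet","SiteName" CSV format used above.
import csv
from collections import defaultdict


def import_file_py3(input_file):
    """Load subnets from the CSV into a dict keyed by SiteName."""
    groups = defaultdict(list)
    # csv files are opened in text mode (with newline='') in Python 3
    with open(input_file, 'r', newline='') as f:
        reader = csv.reader(f)
        header = next(reader)  # replaces reader.next()
        if header != ['subnet', 'SiteName']:
            print('Invalid file format')  # print is a function in Python 3
            print('Ensure file has correct header.')
        for row in reader:
            cidr, group = row
            groups[group].append(cidr)
    return groups
###Output _____no_output_____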
jupyter/jupyter-javascript_calculated_data_to_python_for_plotting_with_matplotlib.ipynb
###Markdown Basic method to transfer data from javascript to python adapted from [A little exchange - python variable -> javascript variable -> python variable](https://michhar.github.io/javascript-and-python-have-a-party/).Here is a jupyter notebook that implements the method in this blog post: [Jupyter_and_JavaScript.ipynb](https://github.com/michhar/python-jupyter-notebooks/blob/master/primers/Jupyter_and_JavaScript.ipynb). ###Code import numpy as np import matplotlib.pyplot as plt %matplotlib inline %%javascript // Helper functions const range = (start, stop, step) => Array.from({ length: (stop - start) / step + 1}, (_, i) => start + (i * step)); const fill_zeros = (start, stop, step) => Array.from({ length: (stop - start) / step + 1}, (_, i) => 0); var layer_info = [ [10, 300], [10, 300], [10, 300], [10, 300], [10, 300], [10, 300], [10, 300], [10, 300], [10, 300], [10, 300], ] var z_min = 0; var z_max = 100; var z_step = 0.25; var h_a = 10; var z_values = range(z_min, z_max, z_step); var dose = fill_zeros(z_min, z_max, z_step); var z_current = 0; for (var i=0; i<layer_info.length; i++) { z_current += layer_info[i][0] // console.log(layer_info[i]); var j = 0; while (z_current >= z_values[j]){ var delta_z = z_current - z_values[j]; dose[j] += layer_info[i][1] * Math.exp(-delta_z/h_a) // console.log(i, j, z_current, z_values[j], dose[j]); j++; } } var data = { 'z': z_values, 'dose': dose } var data_json = JSON.stringify(data); IPython.notebook.kernel.execute('data_json=' + data_json); # data_json # type(data_json) fix, ax = plt.subplots() ax.plot(data_json['z'], data_json['dose']) ax.set_ylim(0,) ax.set_xlabel("z ($\mu$m)") ax.set_ylabel("Dose") ###Output _____no_output_____
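###Markdown The JavaScript cell above runs the dose calculation in the browser and hands the result back to the kernel. As a cross-check, the same calculation can be done directly in NumPy; the sketch below reuses the `layer_info`, step, and attenuation length `h_a` values from the JavaScript cell. It is a re-implementation for comparison, not part of the original notebook. ###Code
# NumPy re-implementation of the JavaScript dose loop above, for comparison.
import numpy as np
import matplotlib.pyplot as plt

layer_info = [(10, 300)] * 10          # (thickness, amplitude) per layer, as in the JS cell
z_min, z_max, z_step, h_a = 0, 100, 0.25, 10

z_values = np.arange(z_min, z_max + z_step, z_step)
dose = np.zeros_like(z_values, dtype=float)

z_current = 0
for thickness, amplitude in layer_info:
    z_current += thickness
    # same condition as the JS while-loop: every z at or below the current layer top
    mask = z_values <= z_current
    dose[mask] += amplitude * np.exp(-(z_current - z_values[mask]) / h_a)

fig, ax = plt.subplots()
ax.plot(z_values, dose)
ax.set_ylim(bottom=0)
ax.set_xlabel(r"z ($\mu$m)")
ax.set_ylabel("Dose")
###Output _____no_output_____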
content/labs/lab05/notebook/cs109b_lab5_solutions.ipynb
###Markdown CS109A Introduction to Data Science Lab 5: Feed Forward Neural Networks 2 (Training, Evaluation, & Interogation)**Harvard University****Spring 2022****Instructors**: Mark Glickman & Pavlos Protopapas**Lab Team**: Eleni Kaxiras, Marios Mattheakis, Chris Gumb, Shivas Jayaram ###Code #RUN THIS CELL import requests from IPython.core.display import HTML styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text HTML(styles) ###Output _____no_output_____ ###Markdown Table of Contents- Building a NN /w Keras Quick Review- Learning weights from Data- Evaluating a Keras Model- Inspecting Training History- Multi-class Classification Example (Tabular Data)- Interpreting Our Black Box NN- Bagging Review- Image Classification Example ###Code import pandas as pd import matplotlib.pyplot as plt import numpy as np ###Output _____no_output_____ ###Markdown Let's revisit the toy dataset from the first NN lab. ###Code # Load toy data toydata = pd.read_csv('data/toyDataSet_1.csv') x_toy = toydata['x'].values.reshape(-1,1) y_toy = toydata['y'].values.reshape(-1,1) # Plot toy data ax = plt.gca() ax.scatter(x_toy, y_toy) ax.set_xlabel('x') ax.set_ylabel('y') ax.set_title('Toy Dataset'); from tensorflow.keras import models, layers, activations from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Input # Input "layer"?!? ###Output _____no_output_____ ###Markdown Here we construct a sequential Keras model with one Dense hidden layer containing only a single neuron with a relu activation.\The output layer will be of size 1 and have no activation (e.g., 'linear'). ###Code # Instantiate sequential Keras model and give it a name toy_model = Sequential(name='toy_model') # Despite designation in Keras, Input is not a true layer # It only specifies the shape of the input toy_model.add(Input(shape=(1,))) # hidden layer with 1 neurons (or nodes) toy_model.add(Dense(1, activation='relu')) # output layer, one neuron toy_model.add(Dense(1, activation='linear')) toy_model.summary() ###Output WARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model. Model: "toy_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 1) 2 _________________________________________________________________ dense_1 (Dense) (None, 1) 2 ================================================================= Total params: 4 Trainable params: 4 Non-trainable params: 0 _________________________________________________________________ ###Markdown Compiling the NN `model.compile(optimizer, loss, metrics, **kwargs)``optimizer` - defines how the weights are updated (we'll use SGD)\`loss` - what the model is trying to minimize\`metric` - list of metrics to report during training process`compile` is used to configure a NN model be for it can be fit. We aren't ready to fit *just* yet, but we are compiling here because doing so reinitilizes the model weights. We are going to manually set our weights before training so we need to to the compilation first. **Q:** Why do I want metrics if I already have a loss? 
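###Markdown One way to see the difference: the optimizer only ever minimizes the loss, while anything passed to `metrics` is simply computed and reported each epoch so you can monitor training in units you care about. Below is a small sketch of that distinction (not part of the lab; the single-neuron model is made up for illustration). ###Code
# Sketch: the loss (MSE) is what gradient descent minimizes; the metric (MAE)
# is only computed and logged each epoch for monitoring.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

demo_model = Sequential([Dense(1, input_shape=(1,))])
demo_model.compile(optimizer=SGD(learning_rate=1e-2), loss='mse', metrics=['mae'])
###Output _____no_output_____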
###Code import tensorflow as tf from tensorflow.keras import optimizers, losses, metrics from tensorflow.keras.optimizers import SGD from tensorflow.keras.losses import mse toy_model.compile(optimizer=SGD(learning_rate=1e-1), loss='mse', metrics=[]) ###Output _____no_output_____ ###Markdown A little nudge... Our toy model is very simply. It only has 4 weights. But the problem there are only 4 possible weight values that would make this a good fit. That is like finding a needle in a haystack. So we will cheat a bit and initialize our weights in the 'neighborhood' of the true weights which generated the data. Our future models will be complex enough that they won't need to worry about finding a specific combination of weights: some local minima (but not all) will do the job just fine. ###Code # A FUNCTION THAT READS AND PRINTS OUT THE MODEL WEIGHTS/BIASES def print_weights(model): weights = model.get_weights() print(dict(zip(["w1", "b1", "w2", "b2"], [weight.flatten()[0] for weight in weights]))) # MANUALLY SETTING THE WEIGHTS/BIASES ## True weights from data generating function # w1 = 2 # b1 = 0.0 # w2 = 1 # b2 = 0.5 # Initialize weights to that 'neighborhood' w1 = 1.85 b1 = -0.5 w2 = 0.9 b2 = 0.4 # Store current weight data structure weights = toy_model.get_weights() # hidden layer weights[0][0] = np.array([w1]) #weights weights[1] = np.array([b1]) # biases # output layer weights[2] = np.array([[w2]]) # weights weights[3] = np.array([b2]) # bias # hidden layer # Set the weights toy_model.set_weights(weights) print('Manually Initialized Weights:') print_weights(toy_model) ###Output Manually Initialized Weights: {'w1': 1.85, 'b1': -0.5, 'w2': 0.9, 'b2': 0.4} ###Markdown Forward Pass Review **Input, Hidden Layers, and Output Layers**The **forward** pass through an FFNN is a sequence of linear (affine) and nonlinear operations (activation). We use the model's `predict` method to execut the forward pass with a linspace spanning the range of the `x` data as input. ###Code # Predict x_lin = np.linspace(x_toy.min(), x_toy.max(), 500) y_hat = toy_model.predict(x_lin) # Plot plt.scatter(x_toy, y_toy, alpha=0.4, lw=4, label='data') plt.plot(x_lin, y_hat, label='NN', ls='--', c='r') plt.xlabel('x') plt.ylabel('y') plt.title('Predictions with Manually Set Weights') plt.legend(); ###Output _____no_output_____ ###Markdown We'll let back propogation and stochastic gradient descent take it from here. Back Propogation & SGD Review The **backward** pass is the training. It is based on the chain rule of calculus, and it calculates the gradient of the loss w.r.t. the weights. This gradient is used by the optimizer to update the weights to minimize the loss function. **Batching, stochastic gradient descent, and epochs**Shuffle and partition the dataset in mini-batches to help escape from local minima. Each batch is seen once per epoch. And thus each observation is also seen once per epoch. We can train the network for as many epochs as we like. Reproducibility There is a lot of stochasticity in the training of neural networks, from weight initilizations, to shuffling of data between epochs.\Below is some code that appears to be working for me to get reproducible results. Though I think some of the steps taken may be purely superstitious. 
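###Markdown Recent TensorFlow releases (roughly 2.7 and later) also bundle the Python, NumPy, and TensorFlow seeding done in the next cell into a single helper. Availability depends on your installed version, so treat the call below as an optional alternative rather than part of the lab. ###Code
# Optional, version-dependent alternative to the manual seeding in the next cell
# (TensorFlow ~2.7+): one call seeds Python's random, NumPy, and TensorFlow.
import tensorflow as tf

if hasattr(tf.keras.utils, 'set_random_seed'):
    tf.keras.utils.set_random_seed(109)
###Output _____no_output_____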
###Code # Advice gleaned from: https://deeplizard.com/learn/video/HcW0DeWRggs import os import random as rn os.environ['PYTHONHASHSEED'] = '0' os.environ['CUDA_VISIBLE_DEVICES'] = '' tf.random.set_seed(109) np.random.seed(109) rn.seed(109) ###Output _____no_output_____ ###Markdown Fitting the NN `Model.fit(x=None, y=None, batch_size=None, epochs=1, verbose="auto", validation_split=0.0, validation_data=None, shuffle=True, **kwargs)``batch_size` - number of observations overwhich the loss is calculated before each weight update\`epochs` - number of times the complete dataset is seen in the fitting process\`verbose` - you can silence the training output by setting this to `0`\`validation_split` - splits off a portion of the `x` and `y` training data to be used as validation (see warning below)\`validation_data` - tuple designating a seperate `x_val` and `y_val` dataset\`shuffle` - whether to shuffle the training data before each epoch We fit the model for 100 `epochs` and set `batch_size` to 64. The results of `fit()` are then stored in a variable called `history`. ###Code # Fit model and store training histry history = toy_model.fit(x_toy, y_toy, epochs=100, batch_size=64, verbose=1) ###Output Epoch 1/100 1/1 [==============================] - 0s 273ms/step - loss: 0.2432 Epoch 2/100 1/1 [==============================] - 0s 6ms/step - loss: 0.1465 Epoch 3/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0889 Epoch 4/100 1/1 [==============================] - 0s 73ms/step - loss: 0.0558 Epoch 5/100 1/1 [==============================] - 0s 5ms/step - loss: 0.0375 Epoch 6/100 1/1 [==============================] - 0s 70ms/step - loss: 0.0271 Epoch 7/100 1/1 [==============================] - 0s 61ms/step - loss: 0.0213 Epoch 8/100 1/1 [==============================] - 0s 13ms/step - loss: 0.0179 Epoch 9/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0158 Epoch 10/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0141 Epoch 11/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0129 Epoch 12/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0120 Epoch 13/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0112 Epoch 14/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0105 Epoch 15/100 1/1 [==============================] - 0s 5ms/step - loss: 0.0099 Epoch 16/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0094 Epoch 17/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0090 Epoch 18/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0086 Epoch 19/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0083 Epoch 20/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0080 Epoch 21/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0077 Epoch 22/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0074 Epoch 23/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0071 Epoch 24/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0069 Epoch 25/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0067 Epoch 26/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0065 Epoch 27/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0063 Epoch 28/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0062 Epoch 29/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0060 Epoch 30/100 1/1 [==============================] - 0s 
3ms/step - loss: 0.0059 Epoch 31/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0057 Epoch 32/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0056 Epoch 33/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0055 Epoch 34/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0054 Epoch 35/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0053 Epoch 36/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0052 Epoch 37/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0051 Epoch 38/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0050 Epoch 39/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0049 Epoch 40/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0048 Epoch 41/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0048 Epoch 42/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0047 Epoch 43/100 1/1 [==============================] - 0s 5ms/step - loss: 0.0046 Epoch 44/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0046 Epoch 45/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0045 Epoch 46/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0044 Epoch 47/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0044 Epoch 48/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0043 Epoch 49/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0043 Epoch 50/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0042 Epoch 51/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0041 Epoch 52/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0041 Epoch 53/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0040 Epoch 54/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0040 Epoch 55/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0039 Epoch 56/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0039 Epoch 57/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0038 Epoch 58/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0038 Epoch 59/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0037 Epoch 60/100 1/1 [==============================] - 0s 5ms/step - loss: 0.0037 Epoch 61/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0036 Epoch 62/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0036 Epoch 63/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0036 Epoch 64/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0035 Epoch 65/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0035 Epoch 66/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0035 Epoch 67/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0034 Epoch 68/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0034 Epoch 69/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0034 Epoch 70/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0033 Epoch 71/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0033 Epoch 72/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0033 Epoch 73/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0032 Epoch 74/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0032 Epoch 75/100 1/1 [==============================] - 0s 
2ms/step - loss: 0.0032 Epoch 76/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0032 Epoch 77/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0031 Epoch 78/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0031 Epoch 79/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0031 Epoch 80/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0031 Epoch 81/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0030 Epoch 82/100 1/1 [==============================] - 0s 4ms/step - loss: 0.0030 Epoch 83/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0030 Epoch 84/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0030 Epoch 85/100 1/1 [==============================] - 0s 5ms/step - loss: 0.0030 Epoch 86/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0029 Epoch 87/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0029 Epoch 88/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0029 Epoch 89/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0029 Epoch 90/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0029 Epoch 91/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0029 Epoch 92/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0028 Epoch 93/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0028 Epoch 94/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0028 Epoch 95/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0028 Epoch 96/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0028 Epoch 97/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0028 Epoch 98/100 1/1 [==============================] - 0s 3ms/step - loss: 0.0028 Epoch 99/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0028 Epoch 100/100 1/1 [==============================] - 0s 2ms/step - loss: 0.0027 ###Markdown Plot Training History`history.history` is a dictionary which contains information from each training epoch (no, I don't know the rationale behind the double name). Use it to plot the loss across epochs. Don't for get those labels! ###Code # Plot training history plt.plot(history.history['loss'], c='r') plt.ylabel('MSE loss') plt.xlabel('epoch') plt.title('NN Training History'); # Weights learned for the data print_weights(toy_model) ###Output {'w1': 1.9872459, 'b1': -0.051335897, 'w2': 1.0406225, 'b2': 0.5044429} ###Markdown We can see we've moved much closer to the original weights after fitting. But visualizing our model's predictions will make this more clear. Predict & PlotWe use the model's `predict` method on a linspace, `x_lin`, which we construct to span the range of the dataset's $x$ values. We save the resulting predictions in `y_hat` ###Code # Predict x_lin = np.linspace(x_toy.min(), x_toy.max(), 500) y_hat = toy_model.predict(x_lin) # Plot plt.scatter(x_toy, y_toy, alpha=0.4, lw=4, label='data') plt.plot(x_lin, y_hat, label='NN', ls='--', c='r') plt.xlabel('x') plt.ylabel('y') plt.title('Predictions After Training') plt.legend(); ###Output _____no_output_____ ###Markdown Much better! But perhaps you are not impressed yet? 
An Ugly Function ###Code def ugly_function(x): if x < 0: return np.exp(-(x**2))/2 + 1 + np.exp(-((10*x)**2)) else: return np.exp(-(x**2)) + np.exp(-((10*x)**2)) ###Output _____no_output_____ ###Markdown How do you feel about the prospect of manually setting the weights to approximate this beauty? ###Code # Generate data x_ugly = np.linspace(-3,3,1500) # create x-values for input y_ugly = np.array(list(map(ugly_function, x_ugly))) # Plot data plt.plot(x_ugly, y_ugly); plt.title('An Ugly Function') plt.xlabel('X') plt.ylabel('Y'); ###Output _____no_output_____ ###Markdown And here we don't even have the option of cheating by initializing our weights strategically! 🏋🏻‍♂️ TEAM ACTIVITY: We're Gonna Need a Bigger Model... 1. Complete the `build_nn` function for quickly constructing different NN architectures. 2. Use `build_nn` to construct an NN to approximate the ugly function 3. Compile the model & print its summary - _Tip: Remember, if it is the last line of the cell, Jupyter will display the return value without an explicit call to `print()` required. In fact, Jupyter uses its own `display()` function which often results in prettier output for tables_4. Fit the model5. Plot the training history Hyperparameters to play with:- Architecture - Number of hidden layers - Number of neurons in each hidden layer - Hidden layers' `activation` function- Training - `SGD`'s `learning_rate` - `batch_size` - `epochs` **NN Build Function**\**Arguments:**- `name`: str - A name for your NN.- `input_shape`: tuple - number of predictors in input (remember the trailing ','!)- `hidden_dims`: list of int - specifies the number of neurons in each hidden layer - Ex: [2,4,8] would mean 3 hidden layers with 2, 4, and 8 neurons respectively- `hidden_act`: str (or Keras activation object) - activation function used by all hidden layers- `out_dim`: int - number of output neurons a.k.a 'ouput units'- `out_act`: str (or Keras activation object) - activation function used by output layer**Hint:** We will reuse this function throughout the notebook in different settings, but you should go ahead and set some sensible defaults for *all* of the arguments. ###Code # your code here def build_NN(name='NN', input_shape=(1,), hidden_dims=[2], hidden_act='relu', out_dim=1, out_act='linear'): model = Sequential(name=name) model.add(Input(shape=input_shape)) for hidden_dim in hidden_dims: model.add(Dense(hidden_dim, activation=hidden_act)) model.add(Dense(out_dim, activation=out_act)) return model # %load ../solutions/sol1_1.py def build_NN(name='NN', input_shape=(1,), hidden_dims=[2], hidden_act='relu', out_dim=1, out_act='linear'): model = Sequential(name=name) model.add(Input(shape=input_shape)) for hidden_dim in hidden_dims: model.add(Dense(hidden_dim, activation=hidden_act)) model.add(Dense(out_dim, activation=out_act)) return model ###Output _____no_output_____ ###Markdown **Build & Print Model Summary**You can play with `hidden_dims` and `hidden_act`. ###Code # your code here ugly_model = build_NN(name='ugly', hidden_dims=[64, 32, 16, 8]) ugly_model.summary() ###Output WARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model. 
Model: "ugly" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_2 (Dense) (None, 64) 128 _________________________________________________________________ dense_3 (Dense) (None, 32) 2080 _________________________________________________________________ dense_4 (Dense) (None, 16) 528 _________________________________________________________________ dense_5 (Dense) (None, 8) 136 _________________________________________________________________ dense_6 (Dense) (None, 1) 9 ================================================================= Total params: 2,881 Trainable params: 2,881 Non-trainable params: 0 _________________________________________________________________ ###Markdown **Compile**\Use the `SGD` optimizer and `'mse'` as your loss.\You can expermiment with `SGD`'s `learning_rate`. ###Code # Compile # your code here ugly_model.compile(optimizer=SGD(learning_rate=1e-1), loss='mse') ###Output _____no_output_____ ###Markdown **Fit**\Fit `ugly_model` on `x_ugly` and `y_ugly` and story the results in a variable called `history`.\You can experiment with `epochs` and `batch_size`. ###Code # Fit # your code here history = ugly_model.fit(x_ugly, y_ugly, epochs=100, batch_size=32) ###Output Epoch 1/100 47/47 [==============================] - 1s 6ms/step - loss: 0.0668 Epoch 2/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0135 Epoch 3/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0113 Epoch 4/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0109 Epoch 5/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0074 Epoch 6/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0077 Epoch 7/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0075 Epoch 8/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0076 Epoch 9/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0070 Epoch 10/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0060 Epoch 11/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0053 Epoch 12/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0055 Epoch 13/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0047 Epoch 14/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0068 Epoch 15/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0044 Epoch 16/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0041 Epoch 17/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0038 Epoch 18/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0043 Epoch 19/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0037 Epoch 20/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0034 Epoch 21/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0035 Epoch 22/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0028 Epoch 23/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0045 Epoch 24/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0032 Epoch 25/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0031 Epoch 26/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0027 Epoch 27/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0028 Epoch 28/100 47/47 
[==============================] - 0s 3ms/step - loss: 0.0029 Epoch 29/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0026 Epoch 30/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0029 Epoch 31/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0024 Epoch 32/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0022 Epoch 33/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0034 Epoch 34/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0025 Epoch 35/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0017 Epoch 36/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0019 Epoch 37/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0021 Epoch 38/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0024 Epoch 39/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0026 Epoch 40/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0018 Epoch 41/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0021 Epoch 42/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0019 Epoch 43/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0020 Epoch 44/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0014 Epoch 45/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0016 Epoch 46/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0014 Epoch 47/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0018 Epoch 48/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0044 Epoch 49/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0016 Epoch 50/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0011 Epoch 51/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0012 Epoch 52/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0022 Epoch 53/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0024 Epoch 54/100 47/47 [==============================] - 0s 5ms/step - loss: 0.0011 Epoch 55/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0011 Epoch 56/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0019 Epoch 57/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0014 Epoch 58/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0013 Epoch 59/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0010 Epoch 60/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0018 Epoch 61/100 47/47 [==============================] - 0s 3ms/step - loss: 7.7473e-04 Epoch 62/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0018 Epoch 63/100 47/47 [==============================] - 0s 3ms/step - loss: 7.9571e-04 Epoch 64/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0012 Epoch 65/100 47/47 [==============================] - 0s 3ms/step - loss: 8.7445e-04 Epoch 66/100 47/47 [==============================] - 0s 3ms/step - loss: 8.6565e-04 Epoch 67/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0011 Epoch 68/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0020 Epoch 69/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0025 Epoch 70/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0012 Epoch 71/100 47/47 [==============================] - 0s 3ms/step - loss: 
0.0012 Epoch 72/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0011 Epoch 73/100 47/47 [==============================] - 0s 3ms/step - loss: 9.4999e-04 Epoch 74/100 47/47 [==============================] - 0s 3ms/step - loss: 9.9993e-04 Epoch 75/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0010 Epoch 76/100 47/47 [==============================] - 0s 3ms/step - loss: 8.3702e-04 Epoch 77/100 47/47 [==============================] - 0s 3ms/step - loss: 8.0729e-04 Epoch 78/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0012 Epoch 79/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0018 Epoch 80/100 47/47 [==============================] - 0s 3ms/step - loss: 6.6710e-04 Epoch 81/100 47/47 [==============================] - 0s 3ms/step - loss: 5.1621e-04 Epoch 82/100 47/47 [==============================] - 0s 3ms/step - loss: 6.7908e-04 Epoch 83/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0012 Epoch 84/100 47/47 [==============================] - 0s 3ms/step - loss: 7.7497e-04 Epoch 85/100 47/47 [==============================] - 0s 4ms/step - loss: 7.5196e-04 Epoch 86/100 47/47 [==============================] - 0s 3ms/step - loss: 8.7653e-04 Epoch 87/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0010 Epoch 88/100 47/47 [==============================] - 0s 3ms/step - loss: 7.0451e-04 Epoch 89/100 47/47 [==============================] - 0s 3ms/step - loss: 0.0013 Epoch 90/100 47/47 [==============================] - 0s 3ms/step - loss: 8.9418e-04 Epoch 91/100 47/47 [==============================] - 0s 3ms/step - loss: 5.4738e-04 Epoch 92/100 47/47 [==============================] - 0s 3ms/step - loss: 5.1742e-04 Epoch 93/100 47/47 [==============================] - 0s 3ms/step - loss: 6.5018e-04 Epoch 94/100 47/47 [==============================] - 0s 4ms/step - loss: 0.0010 Epoch 95/100 47/47 [==============================] - 0s 3ms/step - loss: 9.0246e-04 Epoch 96/100 47/47 [==============================] - 0s 4ms/step - loss: 5.2294e-04 Epoch 97/100 47/47 [==============================] - 0s 3ms/step - loss: 7.6487e-04 Epoch 98/100 47/47 [==============================] - 0s 3ms/step - loss: 5.7439e-04 Epoch 99/100 47/47 [==============================] - 0s 4ms/step - loss: 8.7740e-04 Epoch 100/100 47/47 [==============================] - 0s 3ms/step - loss: 9.1594e-04 ###Markdown **Plot Training History** Plot the model's training history. Don't forget your axis labels!\**Hint:** Remember, `fit` returns a `history` object which itself has a `history` dictionary attribute. Because this (2nd object) is a dictionary, you can always use its `keys`method if you don't know what's in it. You can also access the history from the model itself. Ex: `ugly_model.history.history` ###Code # Plot History # your code here plt.plot(history.history['loss']) plt.xlabel('epoch') plt.ylabel('Train MSE'); ###Output _____no_output_____ ###Markdown **Get Predictions**\Similar to `sklearn` models, `keras` models have a `predict` method. Use your model's `predict` method to predict on `x_ugly` and store the results in a variable called `y_hat_ugly`. ###Code # Predict y_hat_ugly = ugly_model.predict(x_ugly) ###Output _____no_output_____ ###Markdown **Plot Predictions**\Run the cell below to compare your model's predictions to the true (ugly) function. Still not quite right? Try tweaking some of the hyperparameters above and re-run the cells in this section to see if you can improve. 
###Code # Plot predictions plt.plot(x_ugly, y_ugly, alpha=0.4, lw=4, label='true function') plt.plot(x_ugly, y_hat_ugly, label='NN', ls='--', c='r') plt.xlabel('x') plt.ylabel('y') plt.legend(); ###Output _____no_output_____ ###Markdown **End of Team Activity** Multi-class Classification with Keras ###Code import seaborn as sns ###Output _____no_output_____ ###Markdown So far we've only used our new Keras powers for toy regression problems. We'll move to classification next... but with 3 classes!This example will use `seaborn`'s penguins dataset (a worthy successor to the connonical iris dataset.)We'll build a model to identify a penguin's species from its other features. In the process we'll dust off our Python skills with a quick run through a basic model building workflow. ###Code # Bring on the penguins! penguins = sns.load_dataset('penguins') penguins.head() ###Output _____no_output_____ ###Markdown We have 3 species of penguins living across 3 different islands. There are measurements of bill length, bill depth, flipper length, and body mass. We also have categorcial variable for each penguin's sex giving us a total of 7 features.Here's a plot that tries to show too much at once. But you can ignore the marker shapes and sizes. The bill and flipper length alone ($x$ and $y$ axes) seem too already provide a fair amount of information about the species (color). ###Code # Plot penguins with too much info sns.relplot(data=penguins, x='flipper_length_mm', y='bill_length_mm', hue='species', style='sex', size='body_mass_g', height=6); plt.title('Penguins!', fontdict={'color': 'teal', 'size': 20, 'weight': 'bold', 'family': 'serif'}); ###Output _____no_output_____ ###Markdown You may have noticed some pesky `NaN`s when we displayed the beginning of the DataFrame.\We should investigate further. Missingness ###Code # How many missing values in each column? penguins.isna().sum() ###Output _____no_output_____ ###Markdown Let's take a look at them first all the rows with missing data. ###Code # Rows with missingness penguins[penguins.isna().any(axis=1)] ###Output _____no_output_____ ###Markdown Yikes! There are two observations where all predictors except `species` and `island` are missing.\These rows won't be of any use to us. We see that dropping rows missing `body_mass_g` will take care of most our missingness. ###Code # Drop the bad rows identified above penguins = penguins.dropna(subset=['body_mass_g']) # Check state of missingness after dropping penguins.isna().sum() ###Output _____no_output_____ ###Markdown It looks like there are 9 rows where `sex` is missing. We can try to **impute** these values.\But first, take a look at our DataFrame again. ###Code penguins.head() ###Output _____no_output_____ ###Markdown Notice how the indices go from `2` to `4`. What happened to `3`?\It was one of the rows we dropped! This issue with the indices can cause headaches later on (think `loc`/`iloc` distinction).But we can make things good as new using the `reset_index` method. Just be sure to set `drop=True`, otherwise the old indices will be added to the DataFrame as a new column. ###Code # Reset index penguins = penguins.reset_index(drop=True) penguins.head() ###Output _____no_output_____ ###Markdown Much better!\Now, on to imputing the missing `sex` values. Let's take a look at the `value_counts`. ###Code # Counts of each unique value in the dataset penguins.sex.value_counts() ###Output _____no_output_____ ###Markdown It's almost an even split. We'll impute the **mode** because it's a quick fix. 
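###Markdown A quick aside: a slightly less blunt option (not used in this lab) is a group-wise mode, e.g. filling each missing `sex` with the most common value within that penguin's species. A sketch is below; the lab proceeds with the global mode in the next cell. ###Code
# Sketch (not used below): impute each missing `sex` with the mode of that
# penguin's own species rather than the global mode.
sex_by_species = penguins.groupby('species')['sex'].transform(
    lambda s: s.fillna(s.mode()[0]))
# penguins['sex'] = sex_by_species  # uncomment to use this instead
###Output _____no_output_____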
###Code # The mode here should match the value with the most counts above sex_mode = penguins.sex.mode()[0] sex_mode ###Output _____no_output_____ ###Markdown Finally, we use `fillna` to replace the remaining `NaN`s with the `sex_mode` and confirm that there are no more missing values in the DataFrame. ###Code # Replace missing values with most common value (i.e, mode) penguins = penguins.fillna(sex_mode) penguins.isna().sum() ###Output _____no_output_____ ###Markdown **Q:** Imputing the mode here was very easy, but does this approach make you a bit nervous? Why? Is there some other way we could have imputed this values? PreprocessingWe can't just throw this DataFrame at a neural network as it is. There's some work we need to do first. **Separate predictors from response variable** ###Code # Isolate response from predictors response = 'species' X = penguins.drop(response, axis=1) y = penguins[response] ###Output _____no_output_____ ###Markdown **Encode Categorical Predictor Variables** ###Code # Check the predictor data types X.dtypes ###Output _____no_output_____ ###Markdown Both `island` and `sex` are categotical. We can use `pd.get_dummies` to one-hot-encode them (don't forget to `drop_first`!). ###Code # Identify the categorical columns cat_cols = ['island', 'sex'] # one-hot encode the categorical columns X_design = pd.get_dummies(X, columns=cat_cols, drop_first=True) X_design.head(1) ###Output _____no_output_____ ###Markdown From the remaining columns we can infer that the 'reference' values for our categorical variables are `island = 'Biscoe'`, and `sex = 'Female'`. **Feature Scaling**We should take a closer look at the range of values our predictors take on. ###Code # Summary stats of predictors X_design.describe() ###Output _____no_output_____ ###Markdown Our features are not on the same scale. Just compare the min/max of `bill_depth_mm` and `body_mass_g` for example.\This can slow down neural network training for reasons we'll see in an upcoming lecture.Let's make use of `sklearn`'s `StandardScaler` to standardize the data, centering each predictor at 0 and setting their standard deviations to 1. ###Code from sklearn.preprocessing import StandardScaler # Remember the column names for later; we'll lose them when we scale X_cols = X_design.columns # Saving the scaler object in a variable allows us to reverse the transformation later scaler = StandardScaler() X_scaled = scaler.fit_transform(X_design) # The scaler was passed a pandas DataFrame but returns a numpy array type(X_scaled), X_scaled.shape # We can always add the column names back later if we need to pd.DataFrame(X_scaled, columns=X_cols).head(3) ###Output _____no_output_____ ###Markdown **Encoding the Response Variable** ###Code # Take a look at our response y ###Output _____no_output_____ ###Markdown Our response variable is still a `string`. We need to turn it into some numerical representation for our neural network.\We could to this ourselves with a few list comprehensions, but `sklearn`'s `LabelEncoder` makes this very easy. ###Code from sklearn.preprocessing import LabelEncoder # Encode string labels as integers # LabelEncoder uses the familar fit/transform methods we saw with StandardScaler labenc = LabelEncoder().fit(y) y_enc = labenc.transform(y) y_enc # We can recover the class labels from the encoder object later labenc.classes_ ###Output _____no_output_____ ###Markdown This gets us part of the way there. But the penguin species are **categorical** not **ordinal**. 
Keeping the labels as integers implies that species `2` is twice as "different" from species `1` as it is from species `0`. We want to perform a conversion here similar to the one-hot encoding above, except will will not 'drop' one of the values. This is where Keras's `to_categorical` utility function comes in. ###Code from tensorflow.keras.utils import to_categorical y_cat = to_categorical(y_enc) y_cat ###Output _____no_output_____ ###Markdown Perfect! **Q:** If this is what our array of response variables looks like, what will this mean for the output layer of our neural network? **Train-test Split** ###Code from sklearn.model_selection import train_test_split ###Output _____no_output_____ ###Markdown You may be familiar with using `train_test_split` to split the `X` and `y` arrays themselves. But here we will using it to create a set of train and test *indices*.We'll see later that being able to determine which rows in the original `X` and `y` ended up in train or test will be helpful.**Q:** But couldn't we just sample integers to get random indices? Why use `train_test_split`?**A:** Because `train_test_split` allows for **stratified** splitting!Here we use a trick to stratify on both the `sex` and `island` variables by concatinating their values together. This gives us a total of 6 possible values (2 sexs x 3 islands). By stratifying on this column we help ensure that each of the 6 possible sex/island combinations is equally represented in both train and test. ###Code # Concatenate categorical columns; use this for stratified splitting strat_col = penguins['sex'].astype('str') + penguins['island'].astype('str') strat_col # Create train/test indices train_idx, test_idx = train_test_split(np.arange(X_scaled.shape[0]), test_size=0.5, random_state=109, stratify=strat_col) # Index into X_scaled and y_cat to create the train and test sets X_train = X_scaled[train_idx] y_train = y_cat[train_idx] X_test = X_scaled[test_idx] y_test = y_cat[test_idx] # Sanity check on the resulting shapes X_train.shape, y_train.shape, X_test.shape, y_test.shape ###Output _____no_output_____ ###Markdown **Validation Split** Here is where those indices we saved come in handy.\We also want to also ensure equal representation across train and validation. ###Code # Subset original stratify column using saved train split indices strat_col2 = strat_col.iloc[train_idx] strat_col2.shape # Create train and validation splits from original train split X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.5, random_state=109, stratify=strat_col2) ###Output _____no_output_____ ###Markdown 🏋🏻‍♂️ TEAM ACTIVITY: Classify Those Penguins! ###Code from tensorflow.keras.losses import categorical_crossentropy from tensorflow.keras.metrics import Accuracy, AUC from tensorflow.keras.activations import softmax ###Output _____no_output_____ ###Markdown **Build**Construct your NN penguin classifier. You can make use of your `build_NN` function from earlier. What output activation should you use?**Hint:** try to programaticlaly determin the input and output shape from your data rather than hard coding those values. ###Code # Construct your NN and print the model summary # your code here model = build_NN(name='penguins', input_shape=(X_train.shape[1],), hidden_dims=[8,16,3], hidden_act='relu', out_dim=y_train.shape[1], out_act='softmax') model.summary() # %load solutions/sol2_1.py ###Output _____no_output_____ ###Markdown **Compile**Again, let's use `SGD` as our optimizer. 
You can fiddle with the `learning_rate`.\What loss and metric(s) do you think are appropriate? ###Code # Compile # youre code here model.compile(optimizer=SGD(learning_rate=1e-1), loss='categorical_crossentropy', metrics=['acc', 'AUC']) ###Output _____no_output_____ ###Markdown **Fit**Fit your model and store the results in a variable called `history`.\Feel free to play with `batch_size` and `epochs`.Don't forget to include the `validation_data`! ###Code # Fit # your code here history = model.fit(X_train, y_train, validation_data=(X_val, y_val), batch_size=64, epochs=50) ###Output Epoch 1/50 2/2 [==============================] - 2s 865ms/step - loss: 0.9754 - acc: 0.5718 - auc: 0.7584 - val_loss: 0.8847 - val_acc: 0.7674 - val_auc: 0.8688 Epoch 2/50 2/2 [==============================] - 0s 205ms/step - loss: 0.8840 - acc: 0.6737 - auc: 0.8183 - val_loss: 0.7838 - val_acc: 0.7907 - val_auc: 0.9042 Epoch 3/50 2/2 [==============================] - 0s 270ms/step - loss: 0.7871 - acc: 0.7023 - auc: 0.8596 - val_loss: 0.7291 - val_acc: 0.8256 - val_auc: 0.9338 Epoch 4/50 2/2 [==============================] - 0s 245ms/step - loss: 0.7558 - acc: 0.7520 - auc: 0.8943 - val_loss: 0.6760 - val_acc: 0.8372 - val_auc: 0.9456 Epoch 5/50 2/2 [==============================] - 0s 262ms/step - loss: 0.7123 - acc: 0.7468 - auc: 0.9075 - val_loss: 0.6318 - val_acc: 0.8372 - val_auc: 0.9528 Epoch 6/50 2/2 [==============================] - 0s 232ms/step - loss: 0.6749 - acc: 0.7624 - auc: 0.9190 - val_loss: 0.5994 - val_acc: 0.8372 - val_auc: 0.9634 Epoch 7/50 2/2 [==============================] - 0s 248ms/step - loss: 0.6420 - acc: 0.7520 - auc: 0.9370 - val_loss: 0.5717 - val_acc: 0.8372 - val_auc: 0.9696 Epoch 8/50 2/2 [==============================] - 0s 238ms/step - loss: 0.6048 - acc: 0.7624 - auc: 0.9466 - val_loss: 0.5456 - val_acc: 0.8372 - val_auc: 0.9737 Epoch 9/50 2/2 [==============================] - 0s 228ms/step - loss: 0.6017 - acc: 0.7546 - auc: 0.9455 - val_loss: 0.5160 - val_acc: 0.8372 - val_auc: 0.9757 Epoch 10/50 2/2 [==============================] - 0s 243ms/step - loss: 0.5585 - acc: 0.7858 - auc: 0.9554 - val_loss: 0.4985 - val_acc: 0.8372 - val_auc: 0.9783 Epoch 11/50 2/2 [==============================] - 0s 214ms/step - loss: 0.5421 - acc: 0.7858 - auc: 0.9623 - val_loss: 0.4837 - val_acc: 0.8372 - val_auc: 0.9804 Epoch 12/50 2/2 [==============================] - 0s 274ms/step - loss: 0.5393 - acc: 0.7702 - auc: 0.9609 - val_loss: 0.4614 - val_acc: 0.8372 - val_auc: 0.9816 Epoch 13/50 2/2 [==============================] - 0s 226ms/step - loss: 0.5135 - acc: 0.7754 - auc: 0.9664 - val_loss: 0.4427 - val_acc: 0.8372 - val_auc: 0.9827 Epoch 14/50 2/2 [==============================] - 0s 261ms/step - loss: 0.5025 - acc: 0.7754 - auc: 0.9681 - val_loss: 0.4285 - val_acc: 0.8372 - val_auc: 0.9835 Epoch 15/50 2/2 [==============================] - 0s 208ms/step - loss: 0.4784 - acc: 0.7754 - auc: 0.9704 - val_loss: 0.4116 - val_acc: 0.8372 - val_auc: 0.9841 Epoch 16/50 2/2 [==============================] - 0s 236ms/step - loss: 0.4766 - acc: 0.7546 - auc: 0.9667 - val_loss: 0.3894 - val_acc: 0.8372 - val_auc: 0.9847 Epoch 17/50 2/2 [==============================] - 0s 260ms/step - loss: 0.4635 - acc: 0.7598 - auc: 0.9688 - val_loss: 0.3737 - val_acc: 0.8372 - val_auc: 0.9854 Epoch 18/50 2/2 [==============================] - 0s 233ms/step - loss: 0.4464 - acc: 0.7650 - auc: 0.9709 - val_loss: 0.3609 - val_acc: 0.8372 - val_auc: 0.9861 Epoch 19/50 2/2 
[==============================] - 0s 239ms/step - loss: 0.4409 - acc: 0.7494 - auc: 0.9679 - val_loss: 0.3445 - val_acc: 0.8372 - val_auc: 0.9863 Epoch 20/50 2/2 [==============================] - 0s 233ms/step - loss: 0.3985 - acc: 0.7754 - auc: 0.9744 - val_loss: 0.3323 - val_acc: 0.8372 - val_auc: 0.9866 Epoch 21/50 2/2 [==============================] - 0s 251ms/step - loss: 0.4041 - acc: 0.7546 - auc: 0.9698 - val_loss: 0.3167 - val_acc: 0.8372 - val_auc: 0.9867 Epoch 22/50 2/2 [==============================] - 0s 263ms/step - loss: 0.3770 - acc: 0.7754 - auc: 0.9747 - val_loss: 0.3064 - val_acc: 0.8372 - val_auc: 0.9867 Epoch 23/50 2/2 [==============================] - 0s 239ms/step - loss: 0.3609 - acc: 0.7702 - auc: 0.9736 - val_loss: 0.2927 - val_acc: 0.8372 - val_auc: 0.9867 Epoch 24/50 2/2 [==============================] - 0s 158ms/step - loss: 0.3288 - acc: 0.7858 - auc: 0.9766 - val_loss: 0.2813 - val_acc: 0.8372 - val_auc: 0.9867 Epoch 25/50 2/2 [==============================] - 0s 232ms/step - loss: 0.3434 - acc: 0.7546 - auc: 0.9698 - val_loss: 0.2677 - val_acc: 0.8372 - val_auc: 0.9867 Epoch 26/50 2/2 [==============================] - 0s 153ms/step - loss: 0.3204 - acc: 0.7807 - auc: 0.9715 - val_loss: 0.2544 - val_acc: 0.8488 - val_auc: 0.9869 Epoch 27/50 2/2 [==============================] - 0s 228ms/step - loss: 0.3049 - acc: 0.7859 - auc: 0.9723 - val_loss: 0.2422 - val_acc: 0.8605 - val_auc: 0.9870 Epoch 28/50 2/2 [==============================] - 0s 233ms/step - loss: 0.3019 - acc: 0.8146 - auc: 0.9732 - val_loss: 0.2320 - val_acc: 0.8605 - val_auc: 0.9874 Epoch 29/50 2/2 [==============================] - 0s 242ms/step - loss: 0.2783 - acc: 0.8538 - auc: 0.9788 - val_loss: 0.2223 - val_acc: 0.8605 - val_auc: 0.9884 Epoch 30/50 2/2 [==============================] - 0s 239ms/step - loss: 0.2476 - acc: 0.8773 - auc: 0.9865 - val_loss: 0.2128 - val_acc: 0.8837 - val_auc: 0.9900 Epoch 31/50 2/2 [==============================] - 0s 266ms/step - loss: 0.2446 - acc: 0.9191 - auc: 0.9933 - val_loss: 0.2036 - val_acc: 0.9070 - val_auc: 0.9915 Epoch 32/50 2/2 [==============================] - 0s 262ms/step - loss: 0.2487 - acc: 0.9269 - auc: 0.9956 - val_loss: 0.1950 - val_acc: 0.9070 - val_auc: 0.9918 Epoch 33/50 2/2 [==============================] - 0s 166ms/step - loss: 0.2231 - acc: 0.9452 - auc: 0.9972 - val_loss: 0.1867 - val_acc: 0.9186 - val_auc: 0.9936 Epoch 34/50 2/2 [==============================] - 0s 297ms/step - loss: 0.2189 - acc: 0.9530 - auc: 0.9986 - val_loss: 0.1795 - val_acc: 0.9535 - val_auc: 0.9948 Epoch 35/50 2/2 [==============================] - 0s 236ms/step - loss: 0.2142 - acc: 0.9869 - auc: 0.9996 - val_loss: 0.1733 - val_acc: 0.9535 - val_auc: 0.9953 Epoch 36/50 2/2 [==============================] - 0s 210ms/step - loss: 0.2041 - acc: 0.9869 - auc: 0.9999 - val_loss: 0.1673 - val_acc: 0.9651 - val_auc: 0.9956 Epoch 37/50 2/2 [==============================] - 0s 215ms/step - loss: 0.1952 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1612 - val_acc: 0.9651 - val_auc: 0.9961 Epoch 38/50 2/2 [==============================] - 0s 189ms/step - loss: 0.1880 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1555 - val_acc: 0.9651 - val_auc: 0.9964 Epoch 39/50 2/2 [==============================] - 0s 227ms/step - loss: 0.1951 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1513 - val_acc: 0.9651 - val_auc: 0.9965 Epoch 40/50 2/2 [==============================] - 0s 191ms/step - loss: 0.1756 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1463 - val_acc: 0.9651 - 
val_auc: 0.9968 Epoch 41/50 2/2 [==============================] - 0s 230ms/step - loss: 0.1727 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1413 - val_acc: 0.9651 - val_auc: 0.9976 Epoch 42/50 2/2 [==============================] - 0s 231ms/step - loss: 0.1688 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1375 - val_acc: 0.9651 - val_auc: 0.9980 Epoch 43/50 2/2 [==============================] - 0s 249ms/step - loss: 0.1662 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1335 - val_acc: 0.9651 - val_auc: 0.9987 Epoch 44/50 2/2 [==============================] - 0s 238ms/step - loss: 0.1587 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1300 - val_acc: 0.9651 - val_auc: 0.9990 Epoch 45/50 2/2 [==============================] - 0s 242ms/step - loss: 0.1535 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1267 - val_acc: 0.9651 - val_auc: 0.9991 Epoch 46/50 2/2 [==============================] - 0s 261ms/step - loss: 0.1434 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1226 - val_acc: 0.9651 - val_auc: 0.9994 Epoch 47/50 2/2 [==============================] - 0s 238ms/step - loss: 0.1406 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1196 - val_acc: 0.9651 - val_auc: 0.9994 Epoch 48/50 2/2 [==============================] - 0s 260ms/step - loss: 0.1364 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1163 - val_acc: 0.9651 - val_auc: 0.9994 Epoch 49/50 2/2 [==============================] - 0s 202ms/step - loss: 0.1354 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1134 - val_acc: 0.9651 - val_auc: 0.9994 Epoch 50/50 2/2 [==============================] - 0s 243ms/step - loss: 0.1289 - acc: 1.0000 - auc: 1.0000 - val_loss: 0.1104 - val_acc: 0.9651 - val_auc: 0.9994 ###Markdown **Plot**Finally, write some code to visualize your loss and metric(s) across the training epochs. You should include both **train** and **validation** scores. This is where a **legend** is very important!**Note:** If you load the solutions they may not run for you unless you have selected the same metric(s) ###Code # Plot training history # your code here fig, axs = plt.subplots(1,3, figsize=(18,5)) axs[0].loglog(history.history['loss'],linewidth=4, label = 'Training') axs[0].loglog(history.history['val_loss'],linewidth=4, label = 'Validation', alpha=0.7) axs[0].set_ylabel('Loss') axs[1].plot(history.history['acc'], label='Training') axs[1].plot(history.history['val_acc'], label='Validation') axs[2].plot(history.history['auc'], label='Training') axs[2].plot(history.history['val_auc'], label='Validation') titles = ['Categorical Crossentropy Loss', 'Accuracy', 'AUC'] for ax, title in zip(axs, titles): ax.set_xlabel('Epoch') ax.set_title(title) ax.legend() #%load solutions/sol2_2.py ###Output _____no_output_____ ###Markdown **End of Team Activity** Evaluating the Model First, let's see how well we could to by simply predicting the majority class in the training data for all observations. 
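###Markdown The next cell computes this baseline directly from the one-hot training labels; for reference, the same majority-class baseline can also be obtained with scikit-learn's `DummyClassifier`, as sketched below (not part of the lab). ###Code
# Sketch: majority-class baseline via scikit-learn, for comparison with the
# naive_acc computed in the next cell.
from sklearn.dummy import DummyClassifier

dummy = DummyClassifier(strategy='most_frequent')
dummy.fit(X_train, y_train.argmax(axis=1))
dummy.score(X_train, y_train.argmax(axis=1))
###Output _____no_output_____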
###Code naive_acc = y_train.mean(axis=0).max() print('Naive Accuracy:', naive_acc) # Train model.evaluate(X_train, y_train) # Validation model.evaluate(X_val, y_val) # Test model.evaluate(X_test, y_test) ###Output 6/6 [==============================] - 0s 5ms/step - loss: 0.1212 - acc: 0.9942 - auc: 0.9998 ###Markdown Black Box Interpretation **Proxy Model** ###Code from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import cross_validate, GridSearchCV # Create train & test response variables for proxy model y_train_bb = model.predict(X_train).argmax(-1) y_test_bb = model.predict(X_test).argmax(-1) # Use cross-validation to tune proxy's hyperparameters parameters = {'max_depth':range(1,10), 'criterion': ['gini', 'entropy']} clf = DecisionTreeClassifier(random_state=42) grid = GridSearchCV(clf, parameters, cv=5) # fit using same train but NN's predictions as response grid.fit(X_train, y_train_bb) print('Best Score:', grid.best_score_) print('Best Params:', grid.best_params_) # Retrieve best estimator from the grid object proxy = grid.best_estimator_ bb_test_score = sum(y_test_bb == y_test.argmax(-1))/len(y_test_bb) proxy_test_score = proxy.score(X_test, y_test_bb) print('Black Box Test Score:', bb_test_score) print('Proxy Model Test Score:', proxy_test_score) ###Output Black Box Test Score: 0.9941520467836257 Proxy Model Test Score: 0.9707602339181286 ###Markdown **Feature Importance** ###Code feature_importances = proxy.feature_importances_ feature_importances sort_idx = np.argsort(feature_importances)[::-1] ax = sns.barplot(x=feature_importances[sort_idx], y=X_cols[sort_idx], color='purple', orient='h') for index, val in enumerate(feature_importances[sort_idx]): ax.text(val/3, index, round(val, 2),color='white', weight='bold', va='center') ax.set_title('NN Feature Importance According to DTree Proxy') sns.despine(right=True) ###Output _____no_output_____ ###Markdown **Fixing All But One Predictor**We can alo try to see how a predictor affects the NN's output by fixing all the other to some "reasonable" values (e.g., mean, mode) and then only varying the predictor of interest.Based on the results above, let's explore how `bill_length_mm` effects the NN's output. **Construct 'Average' Observation** ###Code # Review data types X_design.dtypes # Take means for continous means = X_scaled[:,:4].mean(axis=0) # And modes for catgoricals modes = pd.DataFrame(X_scaled[:,4:]).mode().values.reshape(-1) # Shape Sanity Check means.shape, modes.shape # Concatenate these two back together avg_obs = np.concatenate([means, modes]) # And stick it back in a DataFrame avg_obs = pd.DataFrame(avg_obs).transpose() avg_obs.columns = X_design.columns avg_obs # Identify column in our array that corresponds to bill length bill_col = np.argmax(X_design.columns == 'bill_length_mm') # Find the min and max bill length stdevs in the data set bill_min_std = np.min(X_scaled[:,bill_col]) bill_max_std = np.max(X_scaled[:, bill_col]) # Create 100 evenly spaced values within that range bill_lengths = np.linspace(bill_min_std, bill_max_std, 100) # Create 100 duplicates of the average observation avg_df = pd.concat([avg_obs]*bill_lengths.size,ignore_index=True) # Set the bill length column to then linspace we just created avg_df['bill_length_mm'] = bill_lengths ###Output _____no_output_____ ###Markdown Notice now that all rows are identical except for `bill_length_mm` which slowly covers the entire range of values observed in the dataset. 
###Code
avg_df.head()
###Output
_____no_output_____
###Markdown
**Return Predictor to Original Scale**

When we visualize our results we'll want to do so back on the original scale for better interpretability.

Here we make use of our scaler object from way back when, as it stores the means and standard deviations of the original, unscaled predictors.
###Code
# Recover the feature of interest on the original scale
bill_std = np.sqrt(scaler.var_[bill_col])
bill_mean = scaler.mean_[bill_col]
bill_lengths_original = (bill_std*bill_lengths)+bill_mean
###Output
_____no_output_____
###Markdown
We can sanity check our inverse transformation by confirming we recovered the same min and max bill length as in our very first DataFrame!
###Code
# Min sanity check
bill_lengths_original.min(), penguins.bill_length_mm.min()

# Max sanity check
bill_lengths_original.max(), penguins.bill_length_mm.max()
###Output
_____no_output_____
###Markdown
Now we are ready to plot an approximation of how `bill_length_mm` affects the NN's predictions.
###Code
# Plot predicted class probabilities as a function of bill length (approx)
avg_pred = model.predict(avg_df)

fig, ax = plt.subplots()
for idx, species in enumerate(labenc.classes_):
    plt.plot(bill_lengths_original, avg_pred[:,idx], label=species)
ax.set_ylabel('predicted probability')
ax.set_xlabel('bill length in mm')
ax.set_title('NN Predictions varying only bill length, holding all other predictors at mean/mode')
ax.legend();
###Output
_____no_output_____
###Markdown
If you know your penguins, this shouldn't be too surprising. Gentoo penguins are the 3rd largest species after the emperor and king penguins (not represented in our dataset).

**Q:** Why is this only an *approximation* of how `bill_length_mm` affects the NN's predictions?

# Bagging

You'll be using bagging ("bootstrap aggregating") in your HW, so let's take a minute to review the idea and see how it would work with a Keras model.

The idea is to simulate multiple datasets by sampling our current one with replacement and fitting a model on each sample. The process is repeated multiple times until we have an *ensemble* of fitted models, all trained on slightly different datasets. We can then treat the ensemble as a single 'bagged' model.

When it is time to predict, each model in the ensemble makes its own predictions. These predictions can then be *aggregated* across models, for example, by taking the average or through majority voting. We may also be interested in looking at the distribution of the predictions for a given observation, as this may help us quantify our uncertainty in a way we could not with a single model's predictions (even if that model outputs a probability!)
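###Markdown
Before the full Keras loop in the next cell, here is a tiny illustration of the two aggregation strategies just mentioned, using made-up class probabilities from three "models" (the `toy_preds` name and numbers are illustrative only). Note that the two rules can even disagree:
###Code
# Made-up predicted probabilities for one observation from 3 models
toy_preds = np.array([[0.6, 0.3, 0.1],
                      [0.5, 0.4, 0.1],
                      [0.2, 0.7, 0.1]])

# 1) Average the probabilities, then take the argmax -> class 1 here
print(toy_preds.mean(axis=0).argmax())

# 2) Majority vote over each model's own argmax -> class 0 here
votes = toy_preds.argmax(axis=1)
print(np.bincount(votes).argmax())
###Output
_____no_output_____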
###Code
# Set up parameters for the bagging process
learning_rate=1e-1
epochs = 50
batch_size = 64
n_boot = 30

bagged_model = []

np.random.seed(109)
for n in range(n_boot):
    # Bootstrap
    boot_idx = np.random.choice(X_train.shape[0], size=X_train.shape[0], replace=True)
    X_train_boot = X_train[boot_idx]
    y_train_boot = y_train[boot_idx]

    # Build
    boot_model = build_NN(name=f'penguins_{n}',
                          input_shape=(X_train_boot.shape[1],),
                          hidden_dims=[8,16,32],
                          hidden_act='relu',
                          out_dim=3,
                          out_act='softmax')

    # Compile
    boot_model.compile(optimizer=SGD(learning_rate=learning_rate),
                       loss='categorical_crossentropy',
                       metrics=['acc', 'AUC'])

    # Fit
    boot_model.fit(X_train_boot, y_train_boot,
                   batch_size=batch_size,
                   epochs=epochs,
                   verbose=0)

    # Store the fitted bootstrapped model
    bagged_model.append(boot_model)

# Notice we can programmatically recover the shape of a model's output layer
m = bagged_model[0]
out_dim = m.layers[-1].output_shape[-1]
print(out_dim)

def get_bagged_pred(bagged_model, X):

    # Number of observations
    n_obs = X.shape[0]
    # Prediction dimensions (here, number of classes)
    pred_dim = bagged_model[0].layers[-1].output_shape[-1]
    # Number of models in the bagged ensemble
    n_models = len(bagged_model)

    # 3D tensor to store predictions from each bootstrapped model
    # n_observations x n_classes x n_models
    boot_preds = np.zeros((n_obs, pred_dim, n_models))

    # Store all predictions in the tensor
    for i, model in enumerate(bagged_model):
        boot_preds[:,:,i] = model.predict(X)

    # Average the predictions across models
    bag_pred = boot_preds.mean(axis=-1)

    return bag_pred, boot_preds

# Get aggregated and unaggregated ensemble predictions
bag_pred, boot_preds = get_bagged_pred(bagged_model, X_test)

# Example of aggregated predictions
bag_pred[:3]

# Shape of unaggregated ensemble predictions tensor
boot_preds.shape

# Calculate bagged accuracy
bag_acc = sum(bag_pred.argmax(axis=-1) == y_test.argmax(axis=-1))/bag_pred.shape[0]
print('Bagged Acc:', bag_acc)
###Output
Bagged Acc: 0.9941520467836257
###Markdown
🏋🏻‍♂️ Optional Take-home Challenges

**Bagged NN Custom Python Class**

It would be nice if we could interact with our bagged model like any other Keras model. Create a custom `Bagged_NN` class with its own `build`, `compile`, `fit`, `eval`, and `predict` methods!

**Use Bootstrapped Predictions To Quantify Uncertainty**

In your HW you'll use bootstrapping to quantify uncertainty on predictions of a *binary* variable using the Posterior Predictive Ratio (PPR). How might you do something similar with *categorical* bootstrapped predictions like we have here?
###Code
# Might something like entropy be useful?
from scipy.stats import entropy

entropy([0.25,0.25], base=2), entropy([0.8,0.2], base=2), entropy([1,0,0,0], base=2)
###Output
_____no_output_____
###Markdown
An Image Classification Example

The 2nd half of your HW asks you to classify images. Let's try something similar now using the famous MNIST dataset of handwritten digits.\
We can load the dataset directly from TensorFlow/Keras! You can read more about TensorFlow's datasets [here](https://www.tensorflow.org/api_docs/python/tf/keras/datasets).
###Code
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)

# Unique response variable values
set(y_test)
###Output
_____no_output_____
###Markdown
Each observation is a 28x28 pixel image.\
There are 60,000 training examples and 10,000 test images.\
The $y$ values correspond to which digit the image represents, 0-9.

This is how each image is represented numerically.
###Code
np.set_printoptions(edgeitems=30, linewidth=100000, formatter=dict(float=lambda x: "%.3g" % x))

x_train[10]
###Output
_____no_output_____
###Markdown
The values represent pixel intensity and range from 0-255.\
We can use `plt.imshow` or `ax.imshow` to display it as an image.
###Code
# Display an example observation as an image
print('This picture belongs to the class for number', y_train[10])
ax = plt.gca()
ax.grid(False)
ax.imshow(x_train[10], cmap='gray');
###Output
This picture belongs to the class for number 3
###Markdown
(Just a Little) Preprocessing

**Flattening**

We don't know how to feed a 2D input into our neural networks (yet!). So we will simply flatten each image to a length 28x28 = 784 array.
###Code
# Flatten image data
x_train = x_train.reshape(x_train.shape[0], 784)
x_test = x_test.reshape(x_test.shape[0], 784)

# Check that the shapes are OK
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
###Output
(60000, 784) (60000,) (10000, 784) (10000,)
###Markdown
**Normalizing**

Let's confirm what we said about pixel values ranging from 0-255 and then normalize them to the range [0,1].
###Code
# Check the min and max of x_train and x_test
print(x_train.min(), x_train.max(), x_test.min(), x_test.max())

# Normalize
x_train = x_train/255
x_test = x_test/255

print(x_train.min(), x_train.max(), x_test.min(), x_test.max())
###Output
0.0 1.0 0.0 1.0
###Markdown
Build & Compile

Here we use a little trick with the `'sparse_categorical_crossentropy'` loss. Basically, this saves us from having to turn our response variable into a categorical one! We can just leave the labels as integers.

We'll also cheat a bit here and use the `Adam` optimizer. We'll learn more about this and other optimizers in the coming lectures and advanced section.

Notice too how a sequential Keras model can also be defined as a list passed to the `Sequential` constructor rather than by repeatedly using the `add` method. In future labs, we'll look at the *functional* Keras API, which is an alternative to the sequential approach that is more flexible, allowing for more complex architectures.
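###Markdown
To make the point about the two `Sequential` styles concrete, here is a sketch of the same architecture as the next cell, written with repeated `add` calls instead of a list; the two are interchangeable (the `model_alt` name is just for the sketch):
###Code
import tensorflow as tf

# Same 784 -> 128 (relu) -> 10 (softmax) architecture, built incrementally
model_alt = tf.keras.models.Sequential()
model_alt.add(tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)))
model_alt.add(tf.keras.layers.Dense(10, activation='softmax'))
model_alt.summary()
###Output
_____no_output_____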
###Code from tensorflow.keras.losses import sparse_categorical_crossentropy # Build MNIST model model_mnist = tf.keras.models.Sequential([ tf.keras.layers.Input(shape = (784,)), tf.keras.layers.Dense(128,activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model_mnist.compile( loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(0.001), metrics=['accuracy'] ) ###Output _____no_output_____ ###Markdown Fit ###Code # Fit the MNIST model trained_mnist = model_mnist.fit(x_train, y_train, epochs=6, batch_size=128, validation_data=(x_test, y_test)) # Helper function for plotting training history def plot_accuracy_loss(model_history): plt.figure(figsize=[12,4]) plt.subplot(1,2,1) plt.semilogx(model_history.history['accuracy'], label = 'train_acc', linewidth=4) plt.semilogx(model_history.history['val_accuracy'], label = 'val_acc', linewidth=4, alpha=.7) plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.legend() plt.subplot(1,2,2) plt.loglog(model_history.history['loss'], label = 'train_loss', linewidth=4) plt.loglog(model_history.history['val_loss'], label = 'val_loss', linewidth=4, alpha=.7) plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() plt.tight_layout() # Plot MNIST training history plot_accuracy_loss(trained_mnist) ###Output _____no_output_____ ###Markdown Not bad! But do see some overfitting as the validation accuracy starts to diverge from the training accuracy in later epochs. The same general trend can also be seen in the plot of the losses. In the next lecture we'll look at methods for dealing with overfitting in neural networks. Visually Inspecting Model Performance A great benefit of working with image date is that you can often (but not always) simply look at an observation to see if your model's prediction make sense or not. Let's try that now! ###Code # Make a single prediction and validate it def example_NN_prediction(dataset = x_test, model_ = model_mnist): """ This tests our MNist FFNN by examining a single prediction on the test set and checking if it matches the real label. Arguments: n: if you select n then you will choose the nth test set """ mnist_preds = model_mnist.predict(x_test) all_predictions = np.argmax(mnist_preds, axis = 1) n = np.random.choice(784) digit = x_test[n,:] actual_label = y_test[n] plt.imshow(digit.reshape(-1, 28)) prediction_array = model_.predict(digit.reshape(1,-1)) prediction = np.argmax(prediction_array) if prediction == y_test[n]: print("The Mnist model correctly predicted:", prediction) else: print("The true label was", actual_label) print("The Mnist model incorrectly predicted:", prediction) #################################################### # Make a many predictions and validate them ################################################### def example_NN_predictions(model_, dataset_ = x_test, response_ = y_test, get_incorrect = False): """ This tests our MNist FFNN by examining 3 images and checking if our nueral network can correctly classify them. Arguments: model_ : the mnist model you want to check predictions for. get_incorrect (boolean): if True, the model will find 3 examples where the model made a mistake. Otherwise it just select randomly. """ dataset = dataset_.copy() response = response_.copy() # If get_incorrect is True, then get an example of incorrect predictions. # Otherwise get random predictions. 
if not get_incorrect: n = np.random.choice(dataset.shape[0], size = 3) digits = dataset[n,:] actual_label = response[n] else: # Determine where the model is making mistakes: mnist_preds = model_mnist.predict(dataset) all_predictions = np.argmax(mnist_preds, axis = 1) incorrect_index = all_predictions != response incorrect = x_test[incorrect_index, :] # Randomly select a mistake to show: n = np.random.choice(incorrect.shape[0], size = 3) digits = incorrect[n,:] # determine the correct label labels = response[incorrect_index] actual_label = labels[n] #get the predictions and make the plot: fig, ax = plt.subplots(1,3, figsize = (12, 4)) ax = ax.flatten() for i in range(3): #show the digit: digit = digits[i,:] ax[i].imshow(digit.reshape(28,-1)) #reshape the image to 28 by 28 for viewing # reshape the input correctly and get the prediction: prediction_array = model_.predict(digit.reshape(1,-1)) prediction = np.argmax(prediction_array) #Properly label the prediction (correct vs incorrect): if prediction == actual_label[i]: ax[i].set_title("Correct Prediction: " + str(prediction)) else: ax[i].set_title('Incorrect Prediction: {} (True label: {})'.format( prediction, actual_label[i])) plt.tight_layout() # Here's a random prediction example example_NN_prediction() # Correct predictions example_NN_predictions(model_ = model_mnist, get_incorrect = False) ###Output _____no_output_____ ###Markdown Let's see some examples where the network makes the wrong prediction. ###Code # Incorrect Predictions example_NN_predictions(model_ = model_mnist, get_incorrect = True) ###Output _____no_output_____
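###Markdown
If you'd rather have an overall count than individual examples, a small sketch like this tallies the misclassified test digits (it reuses `model_mnist`, `x_test`, and `y_test` from the cells above):
###Code
# Count how many test digits the network gets wrong
mnist_preds = model_mnist.predict(x_test)
wrong = np.argmax(mnist_preds, axis=1) != y_test
print(wrong.sum(), 'misclassified out of', len(y_test))
###Output
_____no_output_____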
Homework notebooks/(HW notebooks) netology Big Data and Python/7. Py_Spark_dep/dep_bd_2_spark_v2.1.ipynb
###Markdown 7. Загрузить данные в Spark ###Code df_data = spark.read.csv("u.data", sep='\t', header=None, inferSchema=True) df_genre = spark.read.csv("u.genre", sep='|', header=None, inferSchema=True) df_info = spark.read.csv("u.info", sep=' ', header=None, inferSchema=True) df_occupation = spark.read.csv("u.occupation", sep=' ', header=None, inferSchema=True) df_user = spark.read.csv("u.user", sep='|', header=None, inferSchema=True) df_item = spark.read.csv("u.item", sep='|', header=None, inferSchema=True) # encoding='latin_1' # new_names_df_data = ['user_id', 'movie_id', 'rating', 'timestamp'] # df_data = df_data.toDF(*new_names_df_data) # new_names_df_genre = ['genres', 'genres_id'] # df_genre = df_genre.toDF(*new_names_df_genre) # new_names_df_user = ['user_id', 'age', 'gender', 'occupation', 'zip_code'] # df_user = df_user.toDF(*new_names_df_user) # new_names_df_item = ['movie_id', 'movie_title', 'release_date', # 'video_release_date', 'IMDb_URL', 'unknown', # 'Action', 'Adventure', 'Animation', "Children's", # 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy', # 'Film-Noir', 'Horror', 'Musical', 'Mystery', 'Romance', # 'Sci-Fi', 'Thriller', 'War', 'Western'] # df_item = df_item.toDF(*new_names_df_item) df_data.show() df_data.dtypes df_data.describe().show() ###Output +-------+------------------+------------------+------------------+-----------------+ |summary| _c0| _c1| _c2| _c3| +-------+------------------+------------------+------------------+-----------------+ | count| 100000| 100000| 100000| 100000| | mean| 462.48475| 425.53013| 3.52986|8.8352885148862E8| | stddev|266.61442012750905|330.79835632558473|1.1256735991443214|5343856.189502848| | min| 1| 1| 1| 874724710| | max| 943| 1682| 5| 893286638| +-------+------------------+------------------+------------------+-----------------+ ###Markdown 8. Средствами спарка вывести среднюю оценку для каждого фильма ###Code df_data.columns df_data = df_data.withColumnRenamed('_c0', 'user id')\ .withColumnRenamed('_c1', 'item id')\ .withColumnRenamed('_c2', 'rating')\ .withColumnRenamed('_c3', 'timestamp'); df_data_grp = df_data.groupby('item id') df_data_grp_mean = df_data_grp.mean('rating') df_data_grp_mean.show() ###Output +-------+------------------+ |item id| avg(rating)| +-------+------------------+ | 496| 4.121212121212121| | 471|3.6108597285067874| | 463| 3.859154929577465| | 148| 3.203125| | 1342| 2.5| | 833| 3.204081632653061| | 1088| 2.230769230769231| | 1591|3.1666666666666665| | 1238| 3.125| | 1580| 1.0| | 1645| 4.0| | 392|3.5441176470588234| | 623| 2.923076923076923| | 540| 2.511627906976744| | 858| 1.0| | 737| 2.983050847457627| | 243|2.4393939393939394| | 1025|2.9318181818181817| | 1084| 3.857142857142857| | 1127| 2.909090909090909| +-------+------------------+ only showing top 20 rows ###Markdown 9. 
В Spark получить 2 df с 5-ю самыми популярными и самыми непопулярными фильмами (по количеству оценок, либо по самой оценке) ###Code df_items = df_item['_c0', '_c1'] df_items = df_items.withColumnRenamed('_c0','item id')\ .withColumnRenamed('_c1','movie title') df_data_grp_mp = spark.createDataFrame(df_data_grp.count().orderBy('count', ascending=False).take(5)) df_data_grp_lp = spark.createDataFrame(df_data_grp.count().orderBy('count', ascending=True).take(5)) df_data_grp_mp.join(df_items, 'item id', how='inner').show() df_data_grp_lp.join(df_items, 'item id', how='inner').show() ###Output +-------+-----+--------------------+ |item id|count| movie title| +-------+-----+--------------------+ | 50| 583| Star Wars (1977)| | 258| 509| Contact (1997)| | 100| 508| Fargo (1996)| | 181| 507|Return of the Jed...| | 294| 485| Liar Liar (1997)| +-------+-----+--------------------+ +-------+-----+--------------------+ |item id|count| movie title| +-------+-----+--------------------+ | 1460| 1| Sleepover (1995)| | 1507| 1|Three Lives and O...| | 1580| 1| Liebelei (1933)| | 1645| 1|Butcher Boy, The ...| | 1618| 1|King of New York ...| +-------+-----+--------------------+ ###Markdown 10. Средствами Spark соедините информацию по фильмам и жанрам (u.genre) ###Code df_item.show() df_genre.show() df_item = df_item.withColumnRenamed('_c0','item id')\ .withColumnRenamed('_c1','movie title')\ .withColumnRenamed('_c5','unknown')\ .withColumnRenamed('_c6','Action')\ .withColumnRenamed('_c7','Adventure')\ .withColumnRenamed('_c8','Animation')\ .withColumnRenamed('_c9','Children\'s')\ .withColumnRenamed('_c10','Comedy')\ .withColumnRenamed('_c11','Crime')\ .withColumnRenamed('_c12','Documentary')\ .withColumnRenamed('_c13','Drama')\ .withColumnRenamed('_c14','Fantasy')\ .withColumnRenamed('_c15','Film-Noir')\ .withColumnRenamed('_c16','Horror')\ .withColumnRenamed('_c17','Musical')\ .withColumnRenamed('_c18','Mystery')\ .withColumnRenamed('_c19','Romance')\ .withColumnRenamed('_c20','Sci-Fi')\ .withColumnRenamed('_c21','Thriller')\ .withColumnRenamed('_c22','War')\ .withColumnRenamed('_c23','Western')\ df_item = df_item['item id', 'unknown', 'Action', 'Adventure', 'Animation', 'Children\'s', 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical','Mystery','Romance', 'Sci-Fi', 'Thriller', 'War','Western'] def to_long(df, by): cols, dtypes = zip(*((c, t) for (c, t) in df.dtypes if c not in by)) kvs = explode(array([ struct(lit(c).alias("key"), col(c).alias("val")) for c in cols ])).alias("kvs") return df.select(by + [kvs]).select(by + ["kvs.key", "kvs.val"]) df_g_trans = to_long(df_item, ["item id"]) df_g_trans = df_g_trans.where(df_g_trans['val'] > 0) df_res = df_g_trans.join(df_items, 'item id', how='inner')['item id', 'movie title', 'key']\ .withColumnRenamed('key','genre') df_res = df_res.join(df_data_grp_mean, 'item id', how='inner') df_res.show() ###Output +-------+--------------------+----------+------------------+ |item id| movie title| genre| avg(rating)| +-------+--------------------+----------+------------------+ | 1| Toy Story (1995)| Animation|3.8783185840707963| | 1| Toy Story (1995)|Children's|3.8783185840707963| | 1| Toy Story (1995)| Comedy|3.8783185840707963| | 2| GoldenEye (1995)| Action|3.2061068702290076| | 2| GoldenEye (1995)| Adventure|3.2061068702290076| | 2| GoldenEye (1995)| Thriller|3.2061068702290076| | 3| Four Rooms (1995)| Thriller| 3.033333333333333| | 4| Get Shorty (1995)| Action| 3.550239234449761| | 4| Get Shorty (1995)| Comedy| 3.550239234449761| | 4| 
Get Shorty (1995)| Drama| 3.550239234449761| | 5| Copycat (1995)| Crime| 3.302325581395349| | 5| Copycat (1995)| Drama| 3.302325581395349| | 5| Copycat (1995)| Thriller| 3.302325581395349| | 6|Shanghai Triad (Y...| Drama| 3.576923076923077| | 7|Twelve Monkeys (1...| Drama| 3.798469387755102| | 7|Twelve Monkeys (1...| Sci-Fi| 3.798469387755102| | 8| Babe (1995)|Children's|3.9954337899543377| | 8| Babe (1995)| Comedy|3.9954337899543377| | 8| Babe (1995)| Drama|3.9954337899543377| | 9|Dead Man Walking ...| Drama|3.8963210702341136| +-------+--------------------+----------+------------------+ only showing top 20 rows ###Markdown 11. Посчитайте средствами Spark среднюю оценку для каждого жанра ###Code df_data_grp_g = df_res.groupby('genre') df_data_grp_mean_g = df_data_grp_g.mean('avg(rating)') df_data_grp_mean_g.show() pass ###Output _____no_output_____
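###Markdown
One caveat worth noting (this sketch is an addition, not part of the original task): averaging the per-movie means above weights every movie equally, no matter how many ratings it received. If a rating-weighted average per genre is wanted instead, the exploded genre table can be joined back to the raw ratings:
###Code
from pyspark.sql import functions as F

# Join each (movie, genre) pair to the individual ratings and average at the rating level
df_genre_ratings = df_g_trans.withColumnRenamed('key', 'genre') \
                             .join(df_data, 'item id', how='inner')
df_genre_ratings.groupBy('genre').agg(F.avg('rating').alias('avg_rating')).show()
###Output
_____no_output_____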
Learn/Week 3 Visualization/Week_3_Day_5.ipynb
###Markdown Downoad file vgsales.csv di sini ###Code import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('vgsales.csv') df.head() ###Output _____no_output_____ ###Markdown Quiz 1 Preparing DataManipulasi data tersebut dengan menggroupkan berdasarkan Genre, kemudian ambil rata2 penjualan untuk setiap Region Kecuali Global_Sales berdasarkan kategori Genre. ###Code mean_genre = df.groupby('Genre')['NA_Sales', 'EU_Sales', 'JP_Sales', 'Other_Sales'].mean() mean_genre ###Output /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead. """Entry point for launching an IPython kernel. ###Markdown Expected Output :![image.png](attachment:image.png) Perbandingan Kuantitatif Barplot : Grouping Visualisasi Dengan Barplot1. Visualisasikan gambar yang tadi kita manipulasi, untuk menhasilkan visualisasi seperti di bawah ini2. Tuliskan apa insight yang bisa kamu dapat dari visualisasi tersebut ###Code import numpy as np genre = mean_genre.T x = np.arange(len(genre.index)) y1 = genre['Action'] y2 = genre['Adventure'] y3 = genre['Fighting'] y4 = genre['Misc'] y5 = genre['Platform'] y6 = genre['Puzzle'] y7 = genre['Racing'] y8 = genre['Role-Playing'] y9 = genre['Shooter'] y10 = genre['Simulation'] y11 = genre['Sports'] y12 = genre['Strategy'] width = 0.05 fig, ax = plt.subplots(figsize=(12, 6)) ax.bar(x, y1, width, label='Action') ax.bar([i + width for i in x], y2, width, label='Adventure') ax.bar([i + width*2 for i in x], y3, width, label='Fighting') ax.bar([i + width*3 for i in x], y4, width, label='Misc') ax.bar([i + width*4 for i in x], y5, width, label='Platform') ax.bar([i + width*5 for i in x], y6, width, label='Puzzle') ax.bar([i + width*6 for i in x], y7, width, label='Racing') ax.bar([i + width*7 for i in x], y8, width, label='Role-Playing') ax.bar([i + width*8 for i in x], y9, width, label='Shooter') ax.bar([i + width*9 for i in x], y10, width, label='Simulation') ax.bar([i + width*10 for i in x], y11, width, label='Sports') ax.bar([i + width*11 for i in x], y12, width, label='Strategy') ax.set_xticks([i + 5 * width for i in x]) ax.set_xticklabels(genre.index) ax.set_ylabel('Mean Sales') ax.set_xlabel('Region Sales') ax.set_title('Mean Sales Video Games By Genre') ax.set_facecolor('#DCDCDC') plt.legend(loc='upper right') plt.grid() plt.show() ###Output _____no_output_____ ###Markdown Insight yang saya dapat yaitu kita dapat mengetahui rata-rata penjualan video games disuatu negara berdasarkan genrenya dengan dilihat dari tinggi atau rendahnya bar tersebut antara bar yang lainnya disebelahnya ![image.png](attachment:image.png) Perbandingan Kuantitatif Barplot : Stack Barplot1. Visualisasikan gambar yang tadi kita manipulasi, untuk menhasilkan visualisasi seperti di bawah ini2. 
Tuliskan apa insight yang bisa kamu dapat dari visualisasi tersebut ###Code fig, ax = plt.subplots(figsize=(12, 6)) ax.bar(x, y1, label='Action') ax.bar(x, y2, bottom=y1, label='Adventure') ax.bar(x, y3, bottom=y1+y2, label='Fighting') ax.bar(x, y4, bottom=y1+y2+y3, label='Misc') ax.bar(x, y5, bottom=y1+y2+y3+y4, label='Platform') ax.bar(x, y6, bottom=y1+y2+y3+y4+y5, label='Puzzle') ax.bar(x, y7, bottom=y1+y2+y3+y4+y5+y6, label='Racing') ax.bar(x, y8, bottom=y1+y2+y3+y4+y5+y6+y7, label='Role-Playing') ax.bar(x, y9, bottom=y1+y2+y3+y4+y5+y6+y7+y8, label='Shooter') ax.bar(x, y10, bottom=y1+y2+y3+y4+y5+y6+y7+y8+y9, label='Simulation') ax.bar(x, y11, bottom=y1+y2+y3+y4+y5+y6+y7+y8+y9+y10, label='Sports') ax.bar(x, y12, bottom=y1+y2+y3+y4+y5+y6+y7+y8+y9+y10+y11, label='Strategy') ax.set_xticks([i + 0.5 * width for i in x]) ax.set_xticklabels(genre.index) ax.set_ylabel('Mean Sales') ax.set_xlabel('Region Sales') ax.set_title('Mean Sales Video Games By Genre') ax.set_facecolor('#DCDCDC') plt.legend(loc='upper right') plt.grid() plt.show() ###Output _____no_output_____ ###Markdown Insight yang saya dapat yaitu kita dapat mengetahui rata-rata penjualan video games disuatu negara berdasarkan genrenya dengan dilihat dari tinggi atau rendahnya bar tersebut antara bar yang lainnya diatas atau dibawahnya ![image.png](attachment:image.png) ###Code ###Output _____no_output_____
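###Markdown
As a side note (a sketch, reusing the transposed `genre` frame defined earlier): pandas can draw the same stacked chart in a few lines via `DataFrame.plot`, which may be easier to maintain than stacking the twelve `ax.bar` calls by hand.
###Code
# Stacked bars: one bar per region, one segment per genre
ax = genre.plot(kind='bar', stacked=True, figsize=(12, 6))
ax.set_xlabel('Region Sales')
ax.set_ylabel('Mean Sales')
ax.set_title('Mean Sales Video Games By Genre')
plt.show()
###Output
_____no_output_____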
PID_auto-tunning.ipynb
###Markdown Kinematic Bicycle ModelPID parameter tunning depends on the characteristics of system. And it is known that there's no 'one-size-fit-all' tunning method. For the project, i decided to go for a model-based auto-tunning using our python script of kinematic bicycle model, and modified it to write the auto-tunning script. ###Code import random import numpy as np import matplotlib.pyplot as plt # ------------------------------------------------ # # this is the Robot class # class Robot(object): def __init__(self, length=20.0): """ Creates robot and initializes location/orientation to 0, 0, 0. """ self.x = 0.0 self.y = 0.0 self.orientation = 0.0 self.length = length self.steering_noise = 0.0 self.distance_noise = 0.0 self.steering_drift = 0.0 def set(self, x, y, orientation): """ Sets a robot coordinate. """ self.x = x self.y = y self.orientation = orientation % (2.0 * np.pi) def set_noise(self, steering_noise, distance_noise): """ Sets the noise parameters. """ # makes it possible to change the noise parameters # this is often useful in particle filters self.steering_noise = steering_noise self.distance_noise = distance_noise def set_steering_drift(self, drift): """ Sets the systematical steering drift parameter """ self.steering_drift = drift def move(self, steering, distance, tolerance=0.001, max_steering_angle=np.pi / 4.0): """ steering = front wheel steering angle, limited by max_steering_angle distance = total distance driven, most be non-negative """ if steering > max_steering_angle: steering = max_steering_angle if steering < -max_steering_angle: steering = -max_steering_angle if distance < 0.0: distance = 0.0 # apply noise steering2 = random.gauss(steering, self.steering_noise) distance2 = random.gauss(distance, self.distance_noise) # apply steering drift steering2 += self.steering_drift # Execute motion # theta = w = tan(delta) * speed / L turn = np.tan(steering2) * distance2 / self.length if abs(turn) < tolerance: # approximate by straight line motion self.x += distance2 * np.cos(self.orientation) self.y += distance2 * np.sin(self.orientation) self.orientation = (self.orientation + turn) % (2.0 * np.pi) else: # approximate bicycle model for motion radius = distance2 / turn cx = self.x - (np.sin(self.orientation) * radius) cy = self.y + (np.cos(self.orientation) * radius) self.orientation = (self.orientation + turn) % (2.0 * np.pi) self.x = cx + (np.sin(self.orientation) * radius) self.y = cy - (np.cos(self.orientation) * radius) def __repr__(self): return '[x=%.5f y=%.5f orient=%.5f]' % (self.x, self.y, self.orientation) def make_robot(): """ Resets the robot back to the initial position and drift. You'll want to call this after you call `run`. 
""" robot = Robot() robot.set(0, 1, 0) robot.set_steering_drift(10 / 180 * np.pi) return robot # run - does a single control run # NOTE: We use params instead of tau_p, tau_d, tau_i def run(robot, params, n=100, speed=1): x_trajectory = [] y_trajectory = [] err = 0 prev_cte = robot.y int_cte = 0 for i in range(2 * n): cte = robot.y diff_cte = cte - prev_cte int_cte += cte prev_cte = cte steer = -params[0] * cte - params[1] * diff_cte - params[2] * int_cte robot.move(steer, speed) x_trajectory.append(robot.x) y_trajectory.append(robot.y) if i >= n: err += cte ** 2 return x_trajectory, y_trajectory, err / n ###Output _____no_output_____ ###Markdown Goordinate Ascent auto-tunning ###Code def twiddle(tol, p, dp, s): robot = make_robot() x_trajectory, y_trajectory, best_err = run(robot, p,speed=s) it = 0 while sum(dp) > tol: #print("Iter {}, berror = {}, params={}".format(it, best_err, p)) for i in range(len(p)): p[i] += dp[i] robot = make_robot() x_trajectory, y_trajectory, err = run(robot, p,speed=s) if err < best_err: best_err = err dp[i] *= 1.1 else: p[i] -= 2 * dp[i] robot = make_robot() x_trajectory, y_trajectory, err = run(robot, p,speed=s) if err < best_err: best_err = err dp[i] *= 1.1 else: p[i] += dp[i] dp[i] *= 0.9 it += 1 return it,p,best_err # Auto-tunning: # Decide initial p value, the script spit out optimized p,i,d gain. # Note the 2nd argument of tupple is speed for our kinematic model. p, s = [0, 0, 0], 1 #p, s = [0, 0, 0], 2 tol = 0.00002 # optimize p with others fixed. dp = [1, 0, 0] it,params, err = twiddle(tol, p, dp, s) print("iter{} Final error = {}, params={}".format(it,err, params)) robot = make_robot() x_traj1, y_traj1, err = run(robot, params, speed=s) init_p = params[0] # optimize d with others fixed. tol = 0.00002 p = [init_p, 0, 0] dp = [0, 1, 0] it,params, err = twiddle(tol, p, dp, s) print("iter{} Final error = {}, params={}".format(it,err, params)) robot = make_robot() x_traj2, y_traj2, err = run(robot, params, speed=s) init_d = params[1] # optimize i with others fixed. 
tol = 0.00002 p = [init_p, init_d, 0] dp = [0, 0, 1] it,params, err = twiddle(tol, p, dp, s) print("iter{} Final error = {}, params={}".format(it,err, params)) robot = make_robot() x_traj3, y_traj3, err = run(robot, params, speed=s) init_i = params[2] n = len(x_traj1) fig, (ax1,ax2,ax3) = plt.subplots(3, 1, figsize=(8, 8)) ax1.plot(x_traj1, y_traj1, 'g', label='Twiddle PID controller') ax1.plot(x_traj1, np.zeros(n), 'r', label='reference') ax2.plot(x_traj2, y_traj2, 'g', label='Twiddle PID controller') ax2.plot(x_traj2, np.zeros(n), 'r', label='reference') ax3.plot(x_traj3, y_traj3, 'g', label='Twiddle PID controller') ax3.plot(x_traj3, np.zeros(n), 'r', label='reference') #params=[0.2864386636430725, 3.0843418153144158, 0.01033423736942282] ###Output iter191 Final error = 0.5528806292269706, params=[0.2864386636430725, 0.0, 0.0] iter1151 Final error = 0.37126759751677324, params=[0.2864386636430725, 3.0843418153144158, 0.0] iter358 Final error = 1.1696428867847007e-07, params=[0.2864386636430725, 3.0843418153144158, 0.01033423736942282] ###Markdown **Auto-tunning for curv at speed 1** ###Code def run_curv(robot, params, y, n=100, speed=1.0): x_trajectory = [] y_trajectory = [] err = 0 prev_cte = robot.y int_cte = 0 for i in range(2 * n): #cte = robot.y #cte = y[i]-robot.y cte = robot.y-y[i] diff_cte = cte - prev_cte int_cte += cte prev_cte = cte steer = -params[0] * cte - params[1] * diff_cte - params[2] * int_cte robot.move(steer, speed) x_trajectory.append(robot.x) y_trajectory.append(robot.y) if i >= n: err += cte ** 2 return x_trajectory, y_trajectory, err / n def twiddle_curv(tol, y, p, dp, s): robot = make_robot() x_trajectory, y_trajectory, best_err = run_curv(robot, p,y, speed=s) # TODO: twiddle loop here it = 0 while sum(dp) > tol: print("Iter {}, berror = {}, params={}".format(it, best_err, p)) for i in range(3): p[i]+=dp[i] #try going uphill robot = make_robot() _,_,err = run_curv(robot, p,y, speed=s) if err < best_err: #error reduced? best_err = err #if succeed, keep it dp[i] *= 1.1 #inc d else: p[i] -= 2*dp[i] #if no, go opp. way robot = make_robot() _,_,err = run_curv(robot, p,y, speed=s) if err < best_err:#error reduced? best_err = err dp[i] *=1.1 else: p[i]+= dp[i] #if either way is not good, reduce scale. 
dp[i] *= 0.9 it+=1 return it,p, best_err #for speed =1 p, s = [0.28535166420074565, 3.072299626296505, 0.009462923496873782],1 #p, s =[2.058339659228449, 4.119454875695468, 0.14754626999666645], 2 dp = [1, 1, 1] tol = 0.00002 it,params, err = twiddle(tol, p, dp, s) print("iter{} Final error = {}, params={}".format(it,err, params)) print("let's run PID with learned parameter") robot = make_robot() x_trajectory, y_trajectory, err = run(robot, params, speed=s) init_p = params[0] y_curv = 1/5*np.sin(1/10*np.arange(200)) it,params_curv, err = twiddle_curv(tol,y_curv, p, dp, s) print("iter{} Final error = {}, params={}".format(it,err, params_curv)) print("let's run PID with learned parameter") robot = make_robot() x_trajectory2, y_trajectory2, err = run_curv(robot, params_curv, y_curv, speed=s) y_comb = np.concatenate((np.zeros(100),1/2*np.sin(1/10*np.arange(100))), axis=0) it,params_comb, err = twiddle_curv(tol,y_comb, p, dp, s) print("iter{} Final error = {}, params={}".format(it,err, params_comb)) print("let's run PID with learned parameter") robot = make_robot() x_trajectory3, y_trajectory3, err = run_curv(robot, params_comb, y=y_comb, speed=s) fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(8, 8)) ax1.plot(x_trajectory, y_trajectory, 'g', label='Twiddle PID controller') ax1.plot(x_trajectory, np.zeros(n), 'r', label='reference') ax2.plot(x_trajectory2, y_trajectory2, 'g', label='Twiddle PID controller') ax2.plot(x_trajectory2, y_curv, 'r', label='reference') ax3.plot(x_trajectory3, y_trajectory3, 'g', label='Twiddle PID controller') ax3.plot(x_trajectory3, y_comb, 'r', label='reference') ###Output iter495 Final error = 3.0042262317874055e-10, params=[0.3063393801031459, 3.757411395133915, 0.00895626708915423] let's run PID with learned parameter iter0 Final error = 0.008213734142174724, params=[0.3063393801031459, 3.757411395133915, 0.00895626708915423] let's run PID with learned parameter iter0 Final error = 0.04391380163435384, params=[0.3063393801031459, 3.757411395133915, 0.00895626708915423] let's run PID with learned parameter
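###Markdown
As a closing sketch (not part of the original tuning code), the learned gains can be wrapped in a small reusable controller. It simply restates the steering law already used inside `run`, so it should behave the same way apart from the initial value of the previous cross-track error.
###Code
class PID:
    """Minimal PID steering controller using tuned [tau_p, tau_d, tau_i] gains."""
    def __init__(self, kp, kd, ki):
        self.kp, self.kd, self.ki = kp, kd, ki
        self.prev_cte = 0.0
        self.int_cte = 0.0

    def step(self, cte):
        diff_cte = cte - self.prev_cte
        self.int_cte += cte
        self.prev_cte = cte
        return -self.kp * cte - self.kd * diff_cte - self.ki * self.int_cte

# Example: drive a fresh robot with the gains found above
controller = PID(*params)
robot = make_robot()
for _ in range(200):
    steer = controller.step(robot.y)
    robot.move(steer, 1.0)
print(robot)
###Output
_____no_output_____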
predicting-energy-consumption-of-turkey-lightgbm.ipynb
###Markdown Prediction of Hourly Energy Consumption of Turkey Predicting the power demand with high accuracy might introduce a great set of values for a country, for a city or even for households. Stakeholders might adjust their power production accordingly to reduce cost; or they can buy sufficient amounts of energy if they meet their power needs from external sources. In some certain cases, such as in tendering processes in a daily energy exchange, the stakeholders may generate addtional profit, too. In this notebook I will introduce basics of training a Machine Learning model predicting Power Consumption of Turkey for the next 24 hours, using Ensemble Methods. Importing and Processing the Data ###Code import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt df = pd.read_csv('../input/hourly-power-consumption-of-turkey-20162020/RealTimeConsumption-01012016-04082020.csv', encoding='cp1254') df.head() df['Date'] =pd.to_datetime(df['Date'] +' '+ df['Hour'], format='%d.%m.%Y %H:%M') ###Output _____no_output_____ ###Markdown Let's check whether we miss any entry in the time series "Data" feature: ###Code pd.date_range(start = '2016-01-01 00:00:00', end = '2020-03-24 00:00:00', freq = 'D').difference(df.Date) df = df.drop('Hour', axis = 1) df.head() df['Consumption (MWh)'] = df['Consumption (MWh)'].str.replace(',','') df['Consumption (MWh)'] = pd.to_numeric(df['Consumption (MWh)']) df = df.sort_values('Date') df.head() print(df['Date'].min(), df['Date'].max()) ###Output 2016-01-01 00:00:00 2020-08-04 23:00:00 ###Markdown For the purposes of this notebook, I will not be including the Covid period as approximately started in Turkey: ###Code df = df.set_index('Date').loc[:'2020-03-24 23:00:00', :].reset_index() df.tail() df.info() df.set_index('Date').plot(style='.', figsize=(15,5), title='Consumption vs. 
Date') plt.show() import matplotlib.pyplot as plt from statsmodels.graphics.tsaplots import plot_acf, plot_pacf fig, ax = plt.subplots(figsize=(30, 5)) plot_acf(df.set_index('Date'),lags = 720, ax=ax) plt.show() sns.set(style='whitegrid') fig, ax = plt.subplots(figsize=(35, 5)) plot_pacf(df.set_index('Date'),lags = 205, ax=ax) plt.xticks(np.arange(0, 210, step=5)) plt.show() plt.figure(figsize = (15, 7)) ax = sns.boxplot(x=df['Date'].dt.hour, y="Consumption (MWh)", data=df) plt.title('Hourly Consumption', fontsize=11) df['Consumption (MWh)'] = np.log1p(df['Consumption (MWh)']) ###Output _____no_output_____ ###Markdown Basic Feature Engineering Lag Features ###Code df['rolling_mean_t41'] = df['Consumption (MWh)'].shift(38) df['rolling_mean_t41'] = df['Consumption (MWh)'].shift(41) df['rolling_mean_t48'] = df['Consumption (MWh)'].shift(48) df['rolling_mean_t72'] = df['Consumption (MWh)'].shift(72) df['rolling_mean_t168'] = df['Consumption (MWh)'].shift(168) df ###Output _____no_output_____ ###Markdown Rolling Features ###Code df['rolling_mean_t38'] = df['Consumption (MWh)'].transform(lambda x: x.shift(38).rolling(12).mean()) df['rolling_mean_t50'] = df['Consumption (MWh)'].transform(lambda x: x.shift(38).rolling(24).mean()) df['rolling_mean_t62'] = df['Consumption (MWh)'].transform(lambda x: x.shift(38).rolling(48).mean()) df['rolling_median_t38'] = df['Consumption (MWh)'].transform(lambda x: x.shift(38).rolling(12).median()) df['rolling_median_t50'] = df['Consumption (MWh)'].transform(lambda x: x.shift(38).rolling(24).median()) df['rolling_median_t62'] = df['Consumption (MWh)'].transform(lambda x: x.shift(38).rolling(48).median()) df['rolling_std_t38'] = df['Consumption (MWh)'].transform(lambda x: x.shift(38).rolling(12).std()) df['rolling_std_t50'] = df['Consumption (MWh)'].transform(lambda x: x.shift(38).rolling(24).std()) df['rolling_std_t62'] = df['Consumption (MWh)'].transform(lambda x: x.shift(38).rolling(48).std()) df df = df.dropna(axis=0, how='any').reset_index(drop=True) ###Output _____no_output_____ ###Markdown Time Features ###Code df['hourofday'] = df['Date'].dt.hour df['quarter'] = df['Date'].dt.quarter df['month'] = df['Date'].dt.month df['year'] = df['Date'].dt.year df['dayofyear'] = df['Date'].dt.dayofyear df['dayofmonth'] = df['Date'].dt.day df['weekofyear'] = df['Date'].dt.weekofyear df['days_in_month'] = df['Date'].dt.days_in_month df.head() df.tail() ###Output _____no_output_____ ###Markdown Train-Test Split ###Code split_date = '01-Jan-2016' split_date1 = '01-Jan-2020' split_date2 = '14-Mar-2020' split_date3 = '15-Mar-2020' df_train = df.set_index('Date').loc[split_date:'31-Dec-2019', :].reset_index() df_test = df.set_index('Date').loc[split_date1:split_date2, :].reset_index() df_test[['Date','Consumption (MWh)']].set_index('Date').rename(columns={'Consumption (MWh)': 'TEST SET'})\ .join(df_train[['Date','Consumption (MWh)']].set_index('Date')\ .rename(columns={'Consumption (MWh)': 'TRAINING SET'}),how='outer').plot(figsize=(25,5), title='Tüketim Miktarı (MWh)', style='.') plt.ylim(9.8, 10.8) plt.show() df_train.to_csv('energy_cons_train.csv', index = None) #Keeping the train and test data for another notebook :) df_test.to_csv('energy_cons_test.csv', index = None) df_train = df_train.drop(['Date'], axis=1) df_test = df_test.drop(['Date'], axis=1) ###Output _____no_output_____ ###Markdown The Model ###Code def percentage_error(actual, predicted): res = np.empty(actual.shape) for j in range(actual.shape[0]): if actual[j] != 0: res[j] = (actual[j] - predicted[j]) / 
actual[j] else: res[j] = predicted[j] / np.mean(actual) return res def mean_absolute_percentage_error(y_true, y_pred): return np.mean(np.abs(percentage_error(np.asarray(y_true), np.asarray(y_pred)))) * 100 print(df_train.shape, df_test.shape) y_train = df_train['Consumption (MWh)'].values X_train = df_train.drop('Consumption (MWh)', axis=1).values y_test = df_test['Consumption (MWh)'].values X_test = df_test.drop('Consumption (MWh)', axis=1).values from sklearn.metrics import mean_absolute_error from sklearn.metrics import mean_squared_error #!pip install lightgbm from lightgbm import LGBMRegressor model_lgbm = LGBMRegressor(objective='rmse', n_estimators=3000, learning_rate=0.01, num_leaves=36, min_child_samples = 15, n_jobs=-1, random_state = None, max_depth = 3, reg_lambda = 0.0, reg_alpha = 0.0, min_split_gain=0.0) eval_set_ALLRESTS = [(X_train, y_train), (X_test, y_test)] model_lgbm.fit(X_train, y_train, eval_set = eval_set_ALLRESTS ,eval_metric='rmse', early_stopping_rounds=15, verbose=20) y_train_lgbm = model_lgbm.predict(X_train) print("Train set RMSE (Log): " + str(np.sqrt(mean_squared_error(y_train_lgbm, y_train)))) print("Train set MAPE (Log): " + str(mean_absolute_percentage_error(y_train, y_train_lgbm))) print("Train set RMSE (Non-Log): " + str(np.sqrt(mean_squared_error(np.expm1(y_train_lgbm), np.expm1(y_train))))) print("Train set MAPE (Non-Log): " + str(mean_absolute_percentage_error(np.expm1(y_train), np.expm1(y_train_lgbm)))) print("% Success (Non-Log): " + str(100 - mean_absolute_percentage_error(np.expm1(y_train), np.expm1(y_train_lgbm)))) y_test_lgbm = model_lgbm.predict(X_test) print("Validation set RMSE (Log): " + str(np.sqrt(mean_squared_error(y_test_lgbm, y_test)))) print("Validation set MAPE (Log): " + str(mean_absolute_percentage_error(y_test, y_test_lgbm))) print("Validation set RMSE (Non-Log): " + str(np.sqrt(mean_squared_error(np.expm1(y_test_lgbm), np.expm1(y_test))))) print("Validation set MAPE (Non-Log): " + str(mean_absolute_percentage_error(np.expm1(y_test), np.expm1(y_test_lgbm)))) print("% Success (Non-Log): " + str(100 - mean_absolute_percentage_error(np.expm1(y_test), np.expm1(y_test_lgbm)))) ###Output Validation set RMSE (Log): 0.048082906764366815 Validation set MAPE (Log): 0.31060464728053844 Validation set RMSE (Non-Log): 1672.158226302014 Validation set MAPE (Non-Log): 3.259334799927164 % Success (Non-Log): 96.74066520007284 ###Markdown We may say that our model performed ~96.2% on train set and ~96.7% on the test set, not bad isn't it! ###Code from matplotlib import pyplot # retrieve performance metrics results = model_lgbm.evals_result_ epochs = len(results['training']['rmse']) x_axis = range(0, epochs) # plot MAE plt.figure(figsize=(17,8)) fig, ax = pyplot.subplots() ax.plot(x_axis, results['training']['rmse'], label='Train') ax.plot(x_axis, results['valid_1']['rmse'], label='Validation') ax.legend(); pyplot.ylabel('RMSE') pyplot.xlabel('# of iterations (or # of estimators)') pyplot.title('LGBM RMSE') pyplot.show() # Create a pd.Series of features importances importances = pd.Series(data=model_lgbm.feature_importances_, index= df_train.drop('Consumption (MWh)', axis=1).columns) # Sort importances importances_sorted = importances.sort_values() plt.figure(figsize=(12,20)) # Draw a horizontal barplot of importances_sorted importances_sorted.plot(kind='barh', color='lightblue') plt.title('Features Importances') plt.show() ###Output _____no_output_____
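###Markdown
A detail worth spelling out: the target was transformed with `np.log1p`, so predictions are mapped back with `np.expm1` before computing the non-log metrics; the two functions are exact inverses. A tiny check with made-up demand values:
###Code
# expm1 undoes log1p (up to floating-point precision)
demand = np.array([25000.0, 32000.0, 41000.0])
print(np.expm1(np.log1p(demand)))
###Output
_____no_output_____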
Python-Data-Science-and-Machine-Learning-Bootcamp/Python-for-Data-Visualization/Seaborn/Distribution Plots.ipynb
###Markdown ___ ___ Distribution PlotsLet's discuss some plots that allow us to visualize the distribution of a data set. These plots are:* distplot* jointplot* pairplot* rugplot* kdeplot ___ Imports ###Code import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown DataSeaborn comes with built-in data sets! ###Code tips = sns.load_dataset('tips') tips.head() ###Output _____no_output_____ ###Markdown distplotThe distplot shows the distribution of a univariate set of observations. ###Code sns.distplot(tips['total_bill']) # Safe to ignore warnings ###Output /Users/marci/anaconda/lib/python3.5/site-packages/statsmodels/nonparametric/kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j ###Markdown To remove the kde layer and just have the histogram use: ###Code sns.distplot(tips['total_bill'],kde=False,bins=30) ###Output _____no_output_____ ###Markdown jointplotjointplot() allows you to basically match up two distplots for bivariate data. With your choice of what **kind** parameter to compare with: * “scatter” * “reg” * “resid” * “kde” * “hex” ###Code sns.jointplot(x='total_bill',y='tip',data=tips,kind='scatter') sns.jointplot(x='total_bill',y='tip',data=tips,kind='hex') sns.jointplot(x='total_bill',y='tip',data=tips,kind='reg') ###Output /Users/marci/anaconda/lib/python3.5/site-packages/statsmodels/nonparametric/kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j ###Markdown pairplotpairplot will plot pairwise relationships across an entire dataframe (for the numerical columns) and supports a color hue argument (for categorical columns). ###Code sns.pairplot(tips) sns.pairplot(tips,hue='sex',palette='coolwarm') ###Output _____no_output_____ ###Markdown rugplotrugplots are actually a very simple concept, they just draw a dash mark for every point on a univariate distribution. They are the building block of a KDE plot: ###Code sns.rugplot(tips['total_bill']) ###Output _____no_output_____ ###Markdown kdeplotkdeplots are [Kernel Density Estimation plots](http://en.wikipedia.org/wiki/Kernel_density_estimationPractical_estimation_of_the_bandwidth). These KDE plots replace every single observation with a Gaussian (Normal) distribution centered around that value. For example: ###Code # Don't worry about understanding this code! # It's just for the diagram below import numpy as np import matplotlib.pyplot as plt from scipy import stats #Create dataset dataset = np.random.randn(25) # Create another rugplot sns.rugplot(dataset); # Set up the x-axis for the plot x_min = dataset.min() - 2 x_max = dataset.max() + 2 # 100 equally spaced points from x_min to x_max x_axis = np.linspace(x_min,x_max,100) # Set up the bandwidth, for info on this: url = 'http://en.wikipedia.org/wiki/Kernel_density_estimation#Practical_estimation_of_the_bandwidth' bandwidth = ((4*dataset.std()**5)/(3*len(dataset)))**.2 # Create an empty kernel list kernel_list = [] # Plot each basis function for data_point in dataset: # Create a kernel for each point and append to list kernel = stats.norm(data_point,bandwidth).pdf(x_axis) kernel_list.append(kernel) #Scale for plotting kernel = kernel / kernel.max() kernel = kernel * .4 plt.plot(x_axis,kernel,color = 'grey',alpha=0.5) plt.ylim(0,1) # To get the kde plot we can sum these basis functions. 
# Plot the sum of the basis function sum_of_kde = np.sum(kernel_list,axis=0) # Plot figure fig = plt.plot(x_axis,sum_of_kde,color='indianred') # Add the initial rugplot sns.rugplot(dataset,c = 'indianred') # Get rid of y-tick marks plt.yticks([]) # Set title plt.suptitle("Sum of the Basis Functions") ###Output _____no_output_____ ###Markdown So with our tips dataset: ###Code sns.kdeplot(tips['total_bill']) sns.rugplot(tips['total_bill']) sns.kdeplot(tips['tip']) sns.rugplot(tips['tip']) ###Output /Users/marci/anaconda/lib/python3.5/site-packages/statsmodels/nonparametric/kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
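###Markdown
If you prefer a filled density curve, `kdeplot` in this (older) seaborn release also accepts `shade=True`; a quick sketch:
###Code
sns.kdeplot(tips['total_bill'], shade=True)
sns.rugplot(tips['total_bill'])
###Output
_____no_output_____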
learning/meanshift-clustering.ipynb
###Markdown Mean shift clustering Look for groupings of Titanic passengers with similar characteristics ###Code import math import pandas as pd import numpy as np import sklearn from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error %pylab inline pylab.rcParams['figure.figsize'] = (15, 6) # Do not use normal form (scietific notation) when printing numbers, exponents can make it harder to compare values pd.set_option('float_format', '{:f}'.format) titanic_data = pd.read_csv("../datasets/kaggle/titanic/train.csv", quotechar='"') ###Output _____no_output_____ ###Markdown Explore ###Code titanic_data.head() titanic_data.info() titanic_data.describe() ###Output _____no_output_____ ###Markdown Prepare Remove features that are too specific to individual passengers to be useful when looking for patterns ###Code titanic_data.drop(["PassengerId", "Name", "Ticket", "Cabin"], "columns", inplace=True) titanic_data.head() ###Output _____no_output_____ ###Markdown Convert Sex to numeric ###Code le = preprocessing.LabelEncoder() titanic_data["Sex"] = le.fit_transform(titanic_data["Sex"].astype(str)) titanic_data.head() ###Output _____no_output_____ ###Markdown One hot encode the _Embarked_ feature ###Code titanic_data = pd.get_dummies(titanic_data, columns=["Embarked"]) titanic_data.head() ###Output _____no_output_____ ###Markdown Look for any null values ###Code titanic_data[titanic_data.isnull().any(axis=1)] ###Output _____no_output_____ ###Markdown Drop any rows with null values ###Code titanic_data = titanic_data.dropna() ###Output _____no_output_____ ###Markdown Train ###Code from sklearn.cluster import MeanShift analyser = MeanShift(bandwidth=30) analyser.fit(titanic_data) ###Output _____no_output_____ ###Markdown Estimate a good value for the bandwidth based on the data, this is called under the hood if no bandwidth is specified ###Code from sklearn.cluster import estimate_bandwidth estimate_bandwidth(titanic_data) labels = analyser.labels_ ###Output _____no_output_____ ###Markdown See how many clusters the data was distributed into. A bandwidth of **50** produces **3** clusters - every point is assigned to one of these clusters. A bandwidth of **30** produces **5** clusters - every point is assigned to one of these clusters. Each of these groups will contain passengers with similar characteristics. ###Code np.unique(labels) ###Output _____no_output_____ ###Markdown Add a cluster group column ###Code titanic_data["cluster_group"] = np.nan data_length = len(titanic_data) for i in range(data_length): titanic_data.iloc[i, titanic_data.columns.get_loc("cluster_group")] = labels[i] len(titanic_data) titanic_data.head() ###Output _____no_output_____ ###Markdown Evaluate Group passengers by cluster and see how similar the clusters are ###Code titanic_cluster_data = titanic_data.groupby(["cluster_group"]).mean() titanic_cluster_data ###Output _____no_output_____ ###Markdown View the number of samples in each cluster ###Code titanic_cluster_data["Counts"] = titanic_data.groupby(["cluster_group"]).size() titanic_cluster_data ###Output _____no_output_____ ###Markdown Look at more detailed information on a single cluster ###Code titanic_data[titanic_data["cluster_group"] == 1].describe() ###Output _____no_output_____ ###Markdown View all the passengers in this cluster ###Code titanic_data[titanic_data["cluster_group"] == 1] ###Output _____no_output_____
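###Markdown
As a follow-up sketch (the `quantile` value here is just an assumption to experiment with), the estimated bandwidth can drive the clustering directly instead of the hand-picked value of 30. The `cluster_group` column added above is dropped first so it doesn't influence the fit.
###Code
features = titanic_data.drop("cluster_group", "columns")
bw = estimate_bandwidth(features, quantile=0.2)
analyser_auto = MeanShift(bandwidth=bw)
analyser_auto.fit(features)
print(len(np.unique(analyser_auto.labels_)), "clusters at estimated bandwidth", bw)
###Output
_____no_output_____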
.ipynb_checkpoints/appmode-checkpoint.ipynb
###Markdown
Testing appmode
###Code
from ipywidgets import interact

@interact(x=5)
def add3(x):
    return x + 3

interact(add3, x=5)
###Output
_____no_output_____
solutions/ch4/solution_1.ipynb
###Markdown Q1.MNIST 데이터셋은 복잡하지 않은 데이터셋이기 때문에 가벼운 환경에서도 다양한 실험을 해보기에 적합합니다. 필자는 계속해서 신경망이 스케일에 매우 민감하다고 언급해왔습니다. MNIST 데이터셋에서의 스케일에 대한 전처리로 데이터를 255로 나누는 과정을 기억하나요? 이 과정을 거치지 않은 데이터셋에서의 결과와 비교해보기 바랍니다. 또한, 보스턴 주택 가격 예측 문제에서도 스케일 문제를 해결하기 위해 표준화를 진행해주었습니다. 표준화를 적용하지 않은 상태에서 신경망을 학습시켜보고 결과를 비교해보길 바랍니다. MNIST Dataset ###Code from tensorflow.keras.datasets.mnist import load_data # 텐서플로우 저장소에서 데이터를 다운받습니다. (x_train, y_train), (x_test, y_test) = load_data(path='mnist.npz') from sklearn.model_selection import train_test_split # 훈련/테스트 데이터를 0.7/0.3의 비율로 분리합니다. x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size = 0.3, random_state = 777) num_x_train = x_train.shape[0] num_x_val = x_val.shape[0] num_x_test = x_test.shape[0] # 모델의 입력으로 사용하기 위한 전처리 과정입니다. # 전처리를 진행하지 않습니다. x_train = (x_train.reshape((num_x_train, 28 * 28))) x_val = (x_val.reshape((num_x_val, 28 * 28))) x_test = (x_test.reshape((num_x_test, 28 * 28))) from tensorflow.keras.utils import to_categorical # 각 데이터의 레이블을 범주형 형태로 변경합니다. y_train = to_categorical(y_train) y_val = to_categorical(y_val) y_test = to_categorical(y_test) from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense model = Sequential() # 입력 데이터의 형태를 꼭 명시해야 합니다. # 784차원의 데이터를 입력으로 받고, 64개의 출력을 가지는 첫 번째 Dense 층 model.add(Dense(64, activation = 'relu', input_shape = (784, ))) model.add(Dense(32, activation = 'relu')) # 32개의 출력을 가지는 Dense 층 model.add(Dense(10, activation = 'softmax')) # 10개의 출력을 가지는 신경망 model.compile(optimizer='adam', # 옵티마이저 : Adam loss = 'categorical_crossentropy', # 손실 함수 : categorical_crossentropy metrics=['acc']) # 모니터링 할 평가지표 : acc # 작은 차이라고 느껴질 수 있지만, 분명히 전처리를 수행한 # 데이터셋을 학습하는 것이 성능이 더 좋습니다. history = model.fit(x_train, y_train, epochs = 30, batch_size = 128, validation_data = (x_val, y_val)) ###Output _____no_output_____ ###Markdown Boston Dataset ###Code from tensorflow.keras.datasets.boston_housing import load_data # 데이터를 다운받습니다. (x_train, y_train), (x_test, y_test) = load_data(path='boston_housing.npz', test_split=0.2, seed=777) import numpy as np # 데이터 표준화 # 전처리를 진행하지 않습니다. # mean = np.mean(x_train, axis = 0) # std = np.std(x_train, axis = 0) # x_train = (x_train - mean) / std # x_test = (x_test - mean) / std # 검증 데이터셋을 만듭니다. from sklearn.model_selection import train_test_split x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size = 0.33, random_state = 777) from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense model = Sequential() # 입력 데이터의 형태를 꼭 명시해야 합니다. # 13차원의 데이터를 입력으로 받고, 64개의 출력을 가지는 첫 번째 Dense 층 model.add(Dense(64, activation = 'relu', input_shape = (13, ))) model.add(Dense(32, activation = 'relu')) # 32개의 출력을 가지는 Dense 층 model.add(Dense(1)) # 하나의 값을 출력합니다. model.compile(optimizer = 'adam', loss = 'mse', metrics = ['mae']) # 전처리한 코드의 결과와 비교했을때, 매우 큰 차이가 남을 볼 수 있습니다. # 또한, 학습이 진행되고 있는지 loss 값과 metrics를 확인하면서 점검해보세요. # 이번 문제를 통해 전처리 작업의 중요성(여기서는 스케일링)을 알 수 있습니다. history = model.fit(x_train, y_train, epochs = 300, validation_data = (x_val, y_val)) ###Output _____no_output_____
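###Markdown
For comparison, a sketch that re-enables the standardization step this exercise deliberately skipped; it simply un-comments the lines above and re-splits the data. Refitting the same model on these scaled inputs should bring the loss down far more reliably.
###Code
# Reload the data, standardize it, and rebuild the train/validation split
(x_train, y_train), (x_test, y_test) = load_data(path='boston_housing.npz', test_split=0.2, seed=777)

mean = np.mean(x_train, axis=0)
std = np.std(x_train, axis=0)
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std

x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.33, random_state=777)
###Output
_____no_output_____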
python3/notebooks/files-directories-2021/Untitled.ipynb
###Markdown
Create directory if not exists
###Code
import os

new_directory_path = "/path/to/new/directory"

if not os.path.exists(new_directory_path):
    os.mkdir(new_directory_path)
###Output
_____no_output_____
###Markdown
Mkdir -p behaviour
###Code
new_directory_path = "/path/to/new/directory"

if not os.path.exists(new_directory_path):
    os.makedirs(new_directory_path)
###Output
_____no_output_____
###Markdown
Delete directory
> Including subdirectories, if they exist!

```
foo
└── bar
    ├── baz
    │   └── some-other-file.txt
    └── some-file.txt
```
###Code
import shutil

directory_path_to_remove = "/path/to/foo/"

if os.path.exists(directory_path_to_remove):
    shutil.rmtree(directory_path_to_remove)
###Output
_____no_output_____
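###Markdown
As a side note, the existence check can be folded into the call itself with `exist_ok` (available since Python 3.2):
###Code
import os

# Equivalent to the mkdir -p pattern above, without an explicit existence check
os.makedirs("/path/to/new/directory", exist_ok=True)
###Output
_____no_output_____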
colabCraft_2_0.ipynb
###Markdown **Colabcraft** : Stream Minecraft Java (TLauncher) to Chromebook using Chrome RDP> **Warning : Not for Cryptocurrency Mining** As this runs TLauncher, be warned that you won't be able to join non-cracked servers. This includes Hypixel.Technically, the code could be changed to use the official Minecraft client, but in this day and age it's unsafe to type in your MC account details on anywhere you don't trust.**[Colab Hacks](https://github.com/PradyumnaKrishna/Colab-Hacks)** ###Code #@title **Create User** #@markdown Just press the play button, no need to change the default variables. username = "user" #@param {type:"string"} password = "root" #@param {type:"string"} print("Creating User and Setting it up") # Creation of user ! sudo useradd -m $username &> /dev/null # Add user to sudo group ! sudo adduser $username sudo &> /dev/null # Set password of user to 'root' ! echo '$username:$password' | sudo chpasswd # Change default shell from sh to bash ! sed -i 's/\/bin\/sh/\/bin\/bash/g' /etc/passwd print("User Created and Configured") #@title **RDP** #@markdown It takes 4-5 minutes for installation import os import subprocess #@markdown Visit http://remotedesktop.google.com/headless and Copy the command after authentication CRP = "" #@param {type:"string"} #@markdown Enter a pin more or equal to 6 digits Pin = 123456 #@param {type: "integer"} class CRD: def __init__(self): os.system("apt update") self.installCRD() self.installDesktopEnvironment() self.installGoogleChorme() self.installTLauncher() self.finish() @staticmethod def installCRD(): print("Installing Chrome Remote Desktop") subprocess.run(['wget', 'https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE) subprocess.run(['dpkg', '--install', 'chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE) subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE) @staticmethod def installDesktopEnvironment(): print("Installing Desktop Environment") os.system("export DEBIAN_FRONTEND=noninteractive") os.system("apt install --assume-yes xfce4 desktop-base xfce4-terminal") os.system("bash -c 'echo \"exec /etc/X11/Xsession /usr/bin/xfce4-session\" > /etc/chrome-remote-desktop-session'") os.system("apt remove --assume-yes gnome-terminal") os.system("apt install --assume-yes xscreensaver") os.system("systemctl disable lightdm.service") @staticmethod def installGoogleChorme(): print("Installing Google Chrome") subprocess.run(["wget", "https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb"], stdout=subprocess.PIPE) subprocess.run(["dpkg", "--install", "google-chrome-stable_current_amd64.deb"], stdout=subprocess.PIPE) subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE) @staticmethod def installTLauncher(): print("Installing TLauncher") subprocess.run(["wget", "https://github.com/RikyIsola/tlauncher-linux/releases/download/v1.0/tlauncher_1.0_amd64.deb"], stdout=subprocess.PIPE) subprocess.run(["dpkg", "--install", "tlauncher_1.0_amd64.deb"], stdout=subprocess.PIPE) subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE) @staticmethod def finish(): print("Finalizing") os.system(f"adduser {username} chrome-remote-desktop") command = f"{CRP} --pin={Pin}" os.system(f"su - {username} -c '{command}'") os.system("service chrome-remote-desktop start") print("Finished Succesfully") try: if username: if CRP == "": print("Please enter authcode from the given link") elif 
len(str(Pin)) < 6: print("Enter a pin more or equal to 6 digits") else: CRD() except NameError as e: print("username variable not found") print("Create a User First")
#@title **Automatically apply performance settings** #@markdown Run to apply the premade performance settings for Optifine instead of changing them manually.<br> #@markdown These premade settings may make the graphics look pretty terrible, but overall make the game more playable and less laggy.<br> #@markdown In most cases you probably could just use the default settings, but if the lag becomes unbearable then you can use these instead. !pip install gdown -q !mkdir /home/user/.minecraft !cd /home/user/.minecraft !gdown https://drive.google.com/uc?id=1jWEH71TjAY-F3D-5q2atSl54fLhSJGix -O /home/user/.minecraft/options.txt -q !gdown https://drive.google.com/uc?id=1GSEYPFqukw6MisUUiLyzjOL5JS1LXIRc -O /home/user/.minecraft/optionsof.txt -q ! runuser -l $user -c "sudo chmod u+rwx,go+rwx /home/user/.minecraft" > /dev/null 2>&1
#@title **Google Drive Mount** #@markdown Google Drive is used for file storage.<br> #@markdown In this case, it is used for storing MC world saves.<br> #@markdown Mounted at the `user` home directory inside the drive folder<br> #@markdown Run <b>before</b> using the options below! def MountGDrive(): from google.colab import drive ! runuser -l $user -c "yes | python3 -m pip install --user google-colab" > /dev/null 2>&1 mount = """from os import environ as env from google.colab import drive env['CLOUDSDK_CONFIG'] = '/content/.config' drive.mount('{}')""".format(mountpoint) with open('/content/mount.py', 'w') as script: script.write(mount) ! runuser -l $user -c "python3 /content/mount.py" try: if username: mountpoint = "/home/"+username+"/drive" user = username except NameError: print("username variable not found, mounting at `/content/drive' using `root'") mountpoint = '/content/drive' user = 'root' MountGDrive()
#@title **RESTORE MC Saves from Drive to RDP** #@markdown Use this to copy your Minecraft worlds from your Google Drive to your RDP server.<br> #@markdown <b>EXTRA TIP!</b> You can add custom worlds by uploading them to the "0_MC_Saves" folder in your Google Drive! ! runuser -l $user -c "mkdir -p /home/user/.minecraft/saves" > /dev/null 2>&1 ! runuser -l $user -c "mkdir -p /home/user/drive/MyDrive/0_MC_Saves" > /dev/null 2>&1 ! runuser -l $user -c "cp -r /home/user/drive/MyDrive/0_MC_Saves/saves /home/user/.minecraft/" > /dev/null 2>&1 print("Done! If it didn't work, it's likely you don't have any worlds backed up.")
#@title **BACKUP MC Saves from RDP to Drive** #@markdown Use this to copy your Minecraft worlds from your RDP server to your Google Drive.<br> ! runuser -l $user -c "mkdir -p /home/user/.minecraft/saves" > /dev/null 2>&1 ! runuser -l $user -c "mkdir -p /home/user/drive/MyDrive/0_MC_Saves" > /dev/null 2>&1 ! runuser -l $user -c "cp -r /home/user/.minecraft/saves /home/user/drive/MyDrive/0_MC_Saves/" > /dev/null 2>&1 print("Done! If it didn't work, it's likely you don't have any worlds.")
#@title **Anti-Shutdown** #@markdown Run this when you're playing Minecraft to prevent the RDP from shutting down. When you need to restore or back up MC saves, stop this block, restore/backup, and start this block again. Note that the RDP will always shut down after 12 hours, even if it is still being used.<br> while True:pass ###Output _____no_output_____
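###Markdown The Anti-Shutdown cell above busy-waits with `while True:pass`, which pins one CPU core at 100% for the whole session. As an alternative sketch (not part of the original notebook), a keep-alive loop can sleep between iterations and print an occasional heartbeat instead: ###Code
import time

# Hypothetical gentler keep-alive: wake up once a minute instead of busy-waiting.
start = time.time()
while True:
    time.sleep(60)
    hours = (time.time() - start) / 3600
    print(f"Session alive for {hours:.1f} h (Colab still enforces its ~12 h limit)")
###Output _____no_output_____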
pytlesson_notebooks/AppB_Python_Basics.ipynb
###Markdown 1. Variables and Operations ###Code # Define a variable and assign a value a = 10 # Print the value of variable a a # Print the value of variable x #x # Define variables and assign values b = 1.5 c = -2.0 # Print the values of variables b and c print(b) print(c) # Check the data types of the variables print(type(a)) print(type(b)) print(type(c)) # Arithmetic examples e = a + b # sum f = b * c # product # Print the results print(e) print(f) # Define variables and assign values o = 'Welcome to PyTorch' p = '파이토치' # Print the values of variables o and p print(o) print(p) # Concatenate the strings and print q = o + '\t' + p print(q) # Replace characters in the string r = q.replace('o', '*') print(r) # Data types of the variables print(type(o)) print(type(p)) print(type(q)) print(type(r)) ###Output _____no_output_____
###Markdown 2. Working with Data Structures ###Code # Create a list lst = [1, 7, 5, 3, 2] # Check the list's data type print(type(lst)) # Access the first element print(lst[0]) # Access the last element print(lst[4]) print(lst[-1]) # Sublist holding the first through third elements print(lst[0:3]) # Sort the elements in ascending order print(sorted(lst)) # Create a list lst = [1, 7, 5, 3, 2] # Add elements lst.append(4) print(lst) lst.append([5, 8]) print(lst) lst.extend([5, 8]) print(lst) # Remove an element lst.remove(4) print(lst) # Insert an element lst.insert(3, 5) print(lst) # Count occurrences of a specific element print(lst.count(5)) # Create a tuple tpl = (1, 7, 5, 3, 2) # Check the tuple's data type print(type(tpl)) # Access the first element print(tpl[0]) # Access the last element print(tpl[4]) print(tpl[-1]) # Subtuple holding the first through third elements print(tpl[0:3]) # List of the elements sorted in ascending order print(sorted(tpl)) # Create a dictionary dic = {0:'Welcome to PyTorch', 1:'파이토치', 2:'PyTorch'} # Check the dictionary's data type print(type(dic)) # Access the first element print(dic[0]) # Access the last element print(dic[2]) # Add an element dic[4] = 'Python' print(dic) # View the key-value pairs print(dic.items()) # View the keys print(dic.keys()) # View the values print(dic.values()) ###Output _____no_output_____
###Markdown 3. Using Control Statements ###Code # Create a list lst = [1, 7, 5, 3, 2] # Iterate over the list with a for loop i = 0 for i in lst: print(i) # Create a list lst = [1, 7, 5, 3, 2] # Iterate over the list with a while loop i = 0 while i < len(lst): print(lst[i]) i = i + 1 # Create a list lst = [1, 7, 5, 3, 2] # Iterate over the list with a while loop i = 0 while i < len(lst): # Print elements divisible by 2 if not lst[i]%2: print(lst[i]) i = i + 1 # Define a variable s = 'PyTorch' # Iterate over the string one character at a time c = 0 for c in s: print(c) ###Output _____no_output_____
###Markdown 4. Using Comprehension Syntax ###Code # Create an empty list lst1 = [] # Add elements to the list for i in range(10): lst1.append(i) print(lst1) # Create the list and add elements at the same time lst2 = [i for i in range(10)] print(lst2) # Create an empty dictionary dic1 = {} # Add elements to the dictionary for i in range(5): dic1[i] = chr(i + 65) print(dic1) # Create the dictionary and add elements at the same time dic2 = {i:chr(i + 65) for i in range(5)} print(dic2) ###Output _____no_output_____
###Markdown 5. Using Functions ###Code # Define a function def calc(x, y, op='+'): if op == '+': z = x + y elif op == '-': z = x - y elif op == '*': z = x * y elif op == '/': z = x / y return z # Call the function add = calc(1, 2) print(add) mul = calc(3, 4, '*') print(mul) div = calc(op='/', x=10, y=5) print(div) # Define a function def add_tpl(x, y, *args): for arg in args: print(arg) # Call the function add_tpl(1, 2, 3, [4, 5], 'PyTorch', ['PyTorch', '파이토치']) # Define a function def add_dic(id1='PyTorch', id2='파이토치', **kwargs): for k,v in kwargs.items(): print(k + ':' + v) # Call the function add_dic(id3='Python') ###Output _____no_output_____
###Markdown 6. Using Classes ###Code # Define a class class cls: pass # Create an instance f = cls() print(f) print(type(f)) # Define a class class cls: # Initialize the instance def __init__(self, val): # Initialize the variable self.val = val # Instance method def view(self): print(self.val) # Create an instance and call its method f = cls(3) f.view() f = cls('PyTorch') f.view() # Define a class class cls: # Initialize the instance def __init__(self, val): # Initialize the variable self.val = val # Instance methods def view1(self): print(self.val) def view2(self): print('파이토치') self.view1() # Create an instance and call its methods f = cls(3) f.view1() f.view2() # Define a class class calc: # Initialize the instance def __init__(self, x, y): self.x = x self.y = y def add(self): z = self.x + self.y print(z) def mul(self): z = self.x * self.y print(z) def apnd(self): z = [] z.append(self.x) z.append(self.y) print(z) f = calc(3, 5) f.add() f.mul() f.apnd() # Define a class class cls1: # Initialize the instance def __init__(self, cls1_val): # Initialize the variable self.cls1_val = cls1_val # Define a class class cls2(cls1): # Initialize the instance def __init__(self, cls1_val, cls2_val): # Call cls1's constructor super().__init__(cls1_val) # Initialize the variable self.cls2_val = cls2_val f = cls2(3, 5) print(f.cls1_val) print(f.cls2_val) # Define a class class cls1: # Initialize the instance def __init__(self, cls1_val): # Initialize the variable self.cls1_val = cls1_val # Instance method def view(self): print('클래스1') # Define a class class cls2(cls1): # Initialize the instance def __init__(self, cls1_val, cls2_val): # Call cls1's constructor super().__init__(cls1_val) # Initialize the variable self.cls2_val = cls2_val # Instance method def view(self): #super().view() print('클래스2') f = cls2(3, 5) f.view() ###Output _____no_output_____
###Markdown 7. Working with Files ###Code # Open the file fp = open('test.txt', 'r') # Read the entire file at once txt = fp.read() print(txt) # Close the file fp.close() # Open the file fp = open('test.txt', 'r') # Read the file one line at a time txt = fp.readline() print(txt) txt = fp.readline() print(txt) # Close the file fp.close() # Open the file fp = open('test.txt', 'r') # Read the file one line at a time for line in fp: print(line) # Close the file fp.close() # Open the file and close it (implicitly) with open('test.txt', 'r') as fp: # Read the file one line at a time for line in fp: print(line) ###Output _____no_output_____
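###Markdown The file-handling cells above assume that a `test.txt` already exists. As a supplementary sketch (not part of the original lesson), the file can be created first and then read back with a list comprehension, tying sections 4 and 7 together: ###Code
# Create the sample file so the cells above have something to read
with open('test.txt', 'w') as fp:
    fp.write('Welcome to PyTorch\nPyTorch\n')

# Read it back and strip the trailing newlines with a list comprehension
with open('test.txt', 'r') as fp:
    lines = [line.rstrip('\n') for line in fp]

print(lines)
###Output _____no_output_____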
Applications/Actions/Server.ipynb
###Markdown Writing a Simple Action Server using the Execute CallbackThis tutorial covers using the `simple_action_server` library to create a Fibonacci action server in Python. This example action server generates a Fibonacci sequence, the goal is the order of the sequence, the feedback is the sequence as it is computed, and the result is the final sequence.If the order of the sequence is greater than 100, the action is aborted by the server.In addition, the action can be cancelled by the client at any time during the execution. ###Code import rospy from actionlib import SimpleActionServer from actionlib_tutorials.msg import FibonacciAction, \ FibonacciFeedback, FibonacciResult ###Output _____no_output_____ ###Markdown Here we import the `SimpleActionServer` class from the `actionlib` library, and the classes for the messages. The action specification generates such messages for sending goals, receiving feedbacks, etc... ###Code feedback = FibonacciFeedback() result = FibonacciResult() ###Output _____no_output_____ ###Markdown These are the objects for storing the feedback and result data. ###Code def execute_cb(goal): r = rospy.Rate(1) success = True feedback.sequence = [0, 1] if goal.order > 100: result.sequence = feedback.sequence print('Aborted') action_server.set_aborted(result, "Sequence aborted due to excessive order") return print('Executing, creating fibonacci sequence of order %i with seeds %i, %i' % (goal.order, feedback.sequence[0], feedback.sequence[1])) for i in range(1, goal.order): if not action_server.is_preempt_requested(): feedback.sequence.append(feedback.sequence[i] + feedback.sequence[i-1]) action_server.publish_feedback(feedback) r.sleep() if not action_server.is_preempt_requested(): result.sequence = feedback.sequence print('Succeeded') action_server.set_succeeded(result, "Sequence completed successfully") ###Output _____no_output_____ ###Markdown This is the execute callback function that we'll run everytime a new goal is received.If the action is not preempted, the Fibonacci sequence is put into the feedback variable and then published on the feedback channel provided by the action server. Then, the action continues looping and publishing feedback.Once the action has finished computing the Fibonacci sequence, the action server notifies the action client that the goal is complete by calling `set_succeeded`. ###Code def preempt_cb(): result.sequence = feedback.sequence print('Preempted') action_server.set_preempted(result, "Sequence preempted") return ###Output _____no_output_____ ###Markdown An important component of an action server is the ability to allow an action client to request that the goal under execution be canceled. When a client requests that the current goal be preempted, the action server should cancel the goal, perform any necessary cleanup, and call the `set_preempted` function, which signals that the action has been preempted by user request. Here, we execute this callback when a preempt request is received. ###Code rospy.init_node('fibonacci_server') ###Output _____no_output_____ ###Markdown The server node is initialized. ###Code action_server = SimpleActionServer('fibonacci', FibonacciAction, execute_cb = execute_cb, auto_start = False) ###Output _____no_output_____ ###Markdown Here, the `SimpleActionServer` is created, we pass it a name, the action type, and the execute callback. 
Since we've specified an execute callback in this example, a thread is spun up for us, which lets the server handle long-running work whenever a new goal comes in. Note that you should always set `auto_start` to `False` explicitly. ###Code action_server.register_preempt_callback(preempt_cb) ###Output _____no_output_____ ###Markdown The preempt callback function is registered. ###Code action_server.start() rospy.spin() ###Output _____no_output_____
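###Markdown To exercise this server, a minimal client sketch (assuming the standard `actionlib_tutorials` messages are installed, and run as a separate node) can send a goal and wait for the final sequence: ###Code
import rospy
from actionlib import SimpleActionClient
from actionlib_tutorials.msg import FibonacciAction, FibonacciGoal

rospy.init_node('fibonacci_client')

client = SimpleActionClient('fibonacci', FibonacciAction)
client.wait_for_server()                   # block until the server above is available

client.send_goal(FibonacciGoal(order=10))  # goals with order > 100 would be aborted
client.wait_for_result()
print(client.get_result().sequence)
###Output _____no_output_____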
the_archive/archived_rapids_event_notebooks/KDD_2020/notebooks/parking/codes/1_rapids_seattleParking.ipynb
###Markdown KDD 2020 Where should I park?Using RAPIDS to find parking spots in Seattle. Load the modules ###Code !nvidia-smi ###Output Sat Aug 22 02:15:52 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 TITAN RTX Off | 00000000:01:00.0 Off | N/A | | 41% 33C P8 24W / 280W | 0MiB / 24217MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ ###Markdown Import modules ###Code import cudf from collections import OrderedDict import numpy as np import datetime as dt import matplotlib.pyplot as plt %load_ext autotime print(cudf.__version__) ###Output 0.15.0a+4954.ga5dda7faf time: 443 µs ###Markdown Download the dataIf necessary, download the data from my website and unpack. Note -- this may take around 10 minutes depending on the speed of your Internet connection. ###Code import os directory = os.path.exists('../data') archive = os.path.exists('../data/parking_MayJun2019.tar.gz') file = os.path.exists('../data/parking_MayJun2019.csv') if not directory: os.mkdir('../data') if not archive and not file: import wget, shutil def bar_custom(current, total, width=80): print('Downloading: %d%% [%d / %d] bytes' % (current / total * 100.0, current, total)) wget.download('http://tomdrabas.com/data/seattle_parking/parking_MayJun2019.tar.gz') shutil.move('parking_MayJun2019.tar.gz', '../data/parking_MayJun2019.tar.gz') if not file: import tarfile tf = tarfile.open('../data/parking_MayJun2019.tar.gz') tf.extractall(path='../data/') ###Output time: 3.63 ms ###Markdown Read the data ###Code !head -n 10 ../data/parking_MayJun2019.csv dtypes = OrderedDict([ ('OccupancyDateTime', 'date'), ('PaidOccupancy', 'int64'), ('BlockfaceName', 'str'), ('SideOfStreet', 'str'), ('SourceElementKey', 'int64'), ('ParkingTimeLimitCategory', 'int64'), ('ParkingSpaceCount', 'int64'), ('PaidParkingArea', 'str'), ('PaidParkingSubArea', 'str'), ('PaidParkingRate', 'int8'), ('ParkingCategory', 'str'), ('Location', 'str'), ('dow', 'int8') ]) df = cudf.read_csv( '../data/parking_MayJun2019.csv' , skiprows=1 , dtype=list(dtypes.values()) , names=list(dtypes.keys()) ) df = df.fillna({'PaidOccupancy': 0, 'ParkingSpaceCount': 999, 'PaidParkingSubArea': 'UKN'}) # size of the file import os print('Filesize: {0:.2f}GB'.format(os.path.getsize('../data/parking_MayJun2019.csv') / (1024 ** 3))) df['PaidOccupancy'] = df['PaidOccupancy'].astype('float64') df['ParkingSpaceCount'] = df['ParkingSpaceCount'].astype('float64') df.dtypes print('The dataset has {0:,} records and {1} columns.'.format(*df.shape)) df.head().to_pandas() ###Output _____no_output_____ ###Markdown Extract date information ###Code df['year'] = df['OccupancyDateTime']._column.year df['month'] = df['OccupancyDateTime']._column.month df['day'] = 
df['OccupancyDateTime']._column.day df['hour'] = df['OccupancyDateTime']._column.hour df['minute'] = df['OccupancyDateTime']._column.minute df[['OccupancyDateTime','year','month','day','hour', 'minute']].head().to_pandas() counts = df.groupby(['year', 'month', 'day']).agg({'OccupancyDateTime': 'count'}) counts print('Average number of transactions per day: {0:,.0f}'.format(counts['OccupancyDateTime'].mean())) ###Output Average number of transactions per day: 954,413 time: 814 µs ###Markdown All parking locations ###Code locations = df[['SourceElementKey', 'BlockfaceName', 'SideOfStreet', 'ParkingTimeLimitCategory', 'ParkingSpaceCount', 'PaidParkingArea', 'PaidParkingSubArea', 'ParkingCategory', 'Location']].drop_duplicates() locations.head().to_pandas() print('Number of parking locations in Seattle: {0:,}'.format(locations.shape[0])) def extractLon(location): lon = location.str.extract('([0-9\.\-]+) ([0-9\.]+)')[0] return lon#.stod() def extractLat(location): lon = location.str.extract('([0-9\.\-]+) ([0-9\.]+)')[1] return lon#.str.stod() locations['longitude'] = extractLon(locations['Location']).astype('float') locations['latitude'] = extractLat(locations['Location']).astype('float') locations[['Location', 'longitude', 'latitude']].head().to_pandas() ###Output _____no_output_____ ###Markdown Average occupancy ###Code def avgOccupancy(PaidOccupancy, ParkingSpaceCount, AvgOccupancy): for i, (paid, available) in enumerate(zip(PaidOccupancy, ParkingSpaceCount)): AvgOccupancy[i] = min(1.0, paid / available) # cap it at 100%, sometimes we see more paid occupancy than spaces available df = ( df[['OccupancyDateTime', 'PaidOccupancy', 'ParkingSpaceCount' , 'SourceElementKey', 'BlockfaceName', 'SideOfStreet' , 'ParkingTimeLimitCategory', 'ParkingSpaceCount' , 'PaidParkingArea', 'PaidParkingSubArea', 'ParkingCategory', 'dow', 'year', 'month' , 'day', 'hour', 'minute']] .apply_rows( avgOccupancy , incols=['PaidOccupancy', 'ParkingSpaceCount'] , outcols={'AvgOccupancy': np.float64} , kwargs={} ) ) df.head() def calcMean(AvgOccupancy, ParkingSpaceCount, MeanOccupancy): ''' Calculate mean ''' for i, (avgOccSum, avgCnt) in enumerate(zip(AvgOccupancy, ParkingSpaceCount)): MeanOccupancy[i] = float(avgOccSum) / avgCnt df_agg_dt = ( df .groupby(['SourceElementKey', 'dow','hour']) .agg({ 'ParkingSpaceCount': 'count' , 'AvgOccupancy': 'sum' }) .reset_index() ) df_agg_dt = df_agg_dt.apply_rows( calcMean , incols=['AvgOccupancy', 'ParkingSpaceCount'] , outcols={'MeanOccupancy':np.float64} , kwargs={} ) df_agg_dt.drop_column('AvgOccupancy') df_agg_dt.drop_column('ParkingSpaceCount') df_agg_dt.head().to_pandas() ###Output _____no_output_____ ###Markdown Find the best parking ###Code from geopy.geocoders import Nominatim geolocator = Nominatim(user_agent="todrabas_test") location = geolocator.geocode("400 Broad St, Seattle, WA 98109") # SPACE NEEDLE locations['LON_Ref'] = location.longitude locations['LAT_Ref'] = location.latitude from math import sin, cos, sqrt, atan2, pi def calculateDistance(latitude, longitude, LAT_Ref, LON_Ref, Distance): R = 3958.8 # Earth's radius in miles for i, (lt, ln, lt_r, ln_r) in enumerate(zip(latitude, longitude, LAT_Ref, LON_Ref)): lt_rad = lt / 180.0 * pi ln_rad = ln / 180.0 * pi dlon = (ln_r - ln) / 180.0 * pi dlat = (lt_r - lt) / 180.0 * pi a = (sin(dlat/2.0))**2 + cos(lt_rad) * cos(lt_rad) * (sin(dlon/2.0))**2 c = 2 * atan2(sqrt(a), sqrt(1-a)) distance = R * c Distance[i] = distance * 5280 # in feet locations = locations.apply_rows( calculateDistance , incols=['latitude', 
'longitude', 'LAT_Ref', 'LON_Ref'] , outcols={'Distance':np.float64} , kwargs={} ) # get only meters within 1000 ft closest = locations.query('Distance < 1000') closest = ( closest .merge(df_agg_dt, how='inner', on=['SourceElementKey']) .query('dow == 3 and hour == 13') .sort_values(by='MeanOccupancy') ) closest_host = closest[['BlockfaceName', 'SideOfStreet' , 'ParkingTimeLimitCategory', 'ParkingSpaceCount', 'PaidParkingArea' , 'PaidParkingSubArea', 'ParkingCategory', 'Location', 'Distance' , 'dow', 'hour', 'MeanOccupancy', 'longitude', 'latitude'] ].head().to_pandas() closest_host ###Output _____no_output_____ ###Markdown Plot the parking spots on the mapWe're using gmaps python package that can be found here: https://github.com/pbugnion/gmaps. Follow the instructions contained within the README.md about how to install the package so the map shows properly in jupyter lab. ###Code closest_host[['BlockfaceName', 'Distance', 'MeanOccupancy']].to_dict('records') info_box_template = """ <dl> <dt>Name</dt><dd>{BlockfaceName}</dd> <dt>Distance</dt><dd>{Distance:.0f}</dd> <dt>Occupancy (AVG)</dt><dd>{MeanOccupancy:.3f}</dd> </dl> """ parking_info = [info_box_template.format(**parking) for parking in closest_host[['BlockfaceName', 'Distance', 'MeanOccupancy']].to_dict('records')] import gmaps from ipywidgets.embed import embed_minimal_html #################################################### ## ## ## CHANGE THE API CREDS IN THE GoogleMapsAPI.cred ## ## ## #################################################### with open('config/GoogleMapsAPI.cred', 'r') as f: gmaps_creds = f.read() gmaps.configure(api_key=gmaps_creds) # Your Google API key, go to https://console.developers.google.com parking_layer = gmaps.symbol_layer( closest_host[['latitude', 'longitude']], fill_color="green", stroke_color="green", scale=3, info_box_content=parking_info ) destinations_layer = gmaps.symbol_layer( [[location.latitude, location.longitude]] , info_box_content=['DESTINATION'] , scale=5 , fill_color="red" , stroke_color="red" ) parkings = closest_host.to_dict('records') lines_layer = gmaps.drawing_layer(features=[ gmaps.Line( start= (parking['latitude'], parking['longitude']) , end = (location.latitude, location.longitude) , stroke_weight=2 , stroke_color="red" ) for parking in parkings] ) fig = gmaps.figure(layout={'height': '500px'}) fig.add_layer(parking_layer) fig.add_layer(destinations_layer) fig.add_layer(lines_layer) embed_minimal_html('maps_rendered/map_as_crow_flies.html', views=[fig]) ###Output time: 67.2 ms
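###Markdown As a quick sanity check on the `apply_rows` kernel above, the same haversine formula can be evaluated for a single point in plain Python (a sketch; the sample coordinates below are illustrative and not taken from the dataset): ###Code
from math import sin, cos, sqrt, atan2, pi

def haversine_feet(lat, lon, lat_ref, lon_ref):
    # Mirrors the calculateDistance kernel above, for one point
    R = 3958.8                                  # Earth's radius in miles
    lat_rad = lat / 180.0 * pi
    dlon = (lon_ref - lon) / 180.0 * pi
    dlat = (lat_ref - lat) / 180.0 * pi
    a = sin(dlat / 2.0) ** 2 + cos(lat_rad) * cos(lat_rad) * sin(dlon / 2.0) ** 2
    c = 2 * atan2(sqrt(a), sqrt(1 - a))
    return R * c * 5280                         # miles to feet

# Illustrative point a few blocks from the Space Needle reference location
print(round(haversine_feet(47.6170, -122.3540, 47.6205, -122.3493)), 'ft')
###Output _____no_output_____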
Data Collect and Prep/data_dump_MongoDB.ipynb
###Markdown Dumping NYT's articles with main context and their metadata ###Code def removekey(article_metadata): keys_to_remove = ["multimedia", "lead_paragraph","_id"] for key in keys_to_remove: del article_metadata[key] # get id and text for filtered articles with open('feb_dict.json','r') as json_file: feb_text_filtered = json.load(json_file) # get ids and make it into int for indexing feb_id_keys = list(feb_text_filtered.keys()) feb_article_ids = list(map(int, feb_id_keys)) # get metadata for articles with open('NYT_feb_data.txt','r') as json_file: feb_medadata = json.load(json_file) # filter metadata using filtered text's keys feb_metadata_filtered = [feb_medadata[i] for i in feb_article_ids] # adding article_indexing to each metadata dict for article_meta in list(enumerate(feb_metadata_filtered)): removekey(article_meta[1]) pos = article_meta[0] article_meta[1].update({"article_index": feb_article_ids[pos]}) # fixing article's text dict, which will have article_index and main_content as keys feb_text_list = [] for key, values in feb_text_filtered.items(): raw_keys = ['article_index', 'main_content'] raw_vals = [key, values] item_dict = {raw_keys[i]: raw_vals[i] for i in range(len(raw_keys))} feb_text_list.append(item_dict) # get id and text for filtered articles with open('march_dict.json','r') as json_file: march_text_filtered = json.load(json_file) # get ids and make it into int for indexing march_id_keys = list(march_text_filtered.keys()) march_article_ids = list(map(int, march_id_keys)) # get metadata for articles with open('NYT_march_data_2.txt','r') as json_file: march_medadata = json.load(json_file) # filter metadata using filtered text's keys march_metadata_filtered = [march_medadata[i] for i in march_article_ids] # adding article_indexing to each metadata dict for article_meta in list(enumerate(march_metadata_filtered)): removekey(article_meta[1]) pos = article_meta[0] article_meta[1].update({"article_index": march_article_ids[pos]}) # fixing article's text dict, which will have article_index and main_content as keys march_text_list = [] for key, values in march_text_filtered.items(): raw_keys = ['article_index', 'main_content'] raw_vals = [key, values] item_dict = {raw_keys[i]: raw_vals[i] for i in range(len(raw_keys))} march_text_list.append(item_dict) # combine lists of metadata and raw text for articles metadata_all = feb_metadata_filtered+march_metadata_filtered text_all = feb_text_list+march_text_list # Connect to the MongoDB and create database "DDR-ML-Final" # each article's main content goes to collection "article_main_content" client = MongoClient('localhost', 27017) db = client['DDR-ML-Final'] col_main = db['article_main_content'] col_main.insert_many(text_all) # each article's metadata goes to collection "article_metadata" col_meta = db['article_metadata'] col_meta.insert_many(metadata_all) ###Output _____no_output_____ ###Markdown WebScraping Delegates data and Dump into DB ###Code ## Navigate to each url, & save article information headers = {'user-agent':'Mozilla/5.0'} url = 'https://www.nytimes.com/interactive/2020/us/elections/delegate-count-primary-results.html' response = requests.get(url, headers) # save the html file with open('delegate_counts.htm', 'w') as file: file.write(response.text) file.close #open the file and parse to soup object with open('delegate_counts.htm','r') as file: soup = BeautifulSoup(file) #get only the rows of the table, so you can extract data table_rows = soup.find_all("tr", class_ ="g-event") ### this code block will result in an 
error, but it captures all the data we need so that's fine states_list = [] biden_delegates_list = [] sanders_delegates_list = [] biden_wins_list = [] sanders_wins_list = [] for row in table_rows: soup = BeautifulSoup(str(row)) #get the state and add to list state = soup.find("span", class_="g-full-name").string states_list.append(state) #set each value to null at the beginning of each run of the lopp biden_delegates_nonwinner = np.nan biden_delegates_winner = np.nan sanders_delegates_nonwinner = np.nan sanders_delegates_winner = np.nan #if Biden did not win, get delegate count from table row try: biden_delegates_nonwinner = int(soup.find("td", class_="g-cand-wide g-cand g-biden in").string) #if that table row is not present, it means Biden won and need to get value as sibling of checkmark image except AttributeError: biden_delegates_winner = int(soup.find("img", class_="g-checkmark").next_sibling) #if biden_delegates_winner is > 0 (aka not null), it means Biden won the state so use that delegate value if biden_delegates_winner > 0: biden_delegates = biden_delegates_winner biden_winner = 1 #if Biden lost the state, use biden_delegates_nonwinner value else: biden_delegates = biden_delegates_nonwinner biden_winner = 0 #add delegate count & whether Biden won to list biden_delegates_list.append(biden_delegates) biden_wins_list.append(biden_winner) #if Sanders did not win, get delegate count from table row try: sanders_delegates_nonwinner = int(soup.find("td", class_="g-cand-wide g-cand g-sanders in").string) #if that table row is not present, it means Sanders won and need to get value as sibling of checkmark image except AttributeError: sanders_delegates_winner = int(soup.find("img", class_="g-checkmark").next_sibling) #if sanders_delegates_winner is > 0 (aka not null), it means Sanders won the state so use that delegate value if sanders_delegates_winner > 0: sanders_delegates = sanders_delegates_winner sanders_winner = 1 #if Sanders lost the state, use sanders_delegates_nonwinner value else: sanders_delegates = sanders_delegates_nonwinner sanders_winner = 0 #add delegate count & whether Sanders won to list sanders_delegates_list.append(sanders_delegates) sanders_wins_list.append(sanders_winner) dictionary = {'state':states_list, \ 'biden_delegates':biden_delegates_list, \ 'sanders_delegates':sanders_delegates_list, \ 'biden_win':biden_wins_list, \ 'sanders_win':sanders_wins_list} df = pd.DataFrame(dictionary) delegates_records = df.to_dict('records') col_delegates = db['delegates'] col_delegates.insert_many(delegates_records) ###Output _____no_output_____ ###Markdown Funding info for Bernie and Joe ###Code # get funding info with open('finances.json','r') as json_funding: funding = json.load(json_funding) col_funding = db['fec Filings'] col_funding.insert_many(funding) ###Output _____no_output_____
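###Markdown A short verification sketch (assuming the same local MongoDB instance and pymongo 3.7+ for `count_documents`) can confirm what the dumps above inserted; note that `article_index` is stored as a string in the text collection but as an integer in the metadata collection: ###Code
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
db = client['DDR-ML-Final']

# Count the documents inserted into each collection
for name in ['article_main_content', 'article_metadata', 'delegates', 'fec Filings']:
    print(name, db[name].count_documents({}))

# Peek at one article's metadata joined with its text via article_index
meta = db['article_metadata'].find_one()
text = db['article_main_content'].find_one({'article_index': str(meta['article_index'])})
print(meta['article_index'], (text or {}).get('main_content', '')[:80])
###Output _____no_output_____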
Session-2/sklearn.ipynb
###Markdown Scikit-learn Documentation : http://scikit-learn.org/stable/documentation.html Load iris dataset ###Code from sklearn import datasets iris = datasets.load_iris() ###Output _____no_output_____ ###Markdown Separate feature values (X) and target values (y) ###Code X = iris.data y = iris.target print(X[:5]) print(y[:5]) print(y[80:85]) print(y[130:135]) ###Output [0 0 0 0 0] [1 1 1 1 1] [2 2 2 2 2] ###Markdown Split the dataset randomly into training (67%) and test (33%) sets ###Code from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) print(len(X_train)) print(len(X_test)) print(X_train[:5]) print(y_train[:5]) ###Output 100 50 [[5.7 2.9 4.2 1.3] [7.6 3. 6.6 2.1] [5.6 3. 4.5 1.5] [5.1 3.5 1.4 0.2] [7.7 2.8 6.7 2. ]] [1 2 1 0 2] ###Markdown Import and initialise the KNeighborsClassifier algorithm Doc : http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html ###Code from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors=5) ###Output _____no_output_____ ###Markdown Train the model ###Code knn.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown Make predictions ###Code pred = knn.predict(X_test) print(pred) ###Output [1 0 2 1 1 0 1 2 1 1 2 0 0 0 0 1 2 1 1 2 0 2 0 2 2 2 2 2 0 0 0 0 1 0 0 2 1 0 0 0 2 1 1 0 0 1 1 2 1 2] ###Markdown Check accuracy on test data ###Code score = knn.score(X_test, y_test) print("Accuracy:", score*100, "%") ###Output Accuracy: 98.0 %
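###Markdown The choice of `n_neighbors=5` above is arbitrary; a small follow-up sketch (an addition to the session, reusing `X_train`/`y_train` from the cells above) can use cross-validation to see how accuracy varies with k: ###Code
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

for k in [1, 3, 5, 7, 9, 11]:
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X_train, y_train, cv=5)
    print("k =", k, " mean CV accuracy:", round(scores.mean(), 3))
###Output _____no_output_____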
LeetCode-76-100.ipynb
###Markdown 76. Minimum Window Substringmark。 ###Code (defun min-window (s tt) (let ((table (make-hash-table)) (start 0) (end 0) (missing (length tt))) (loop for c across tt do (incf (gethash c table 0))) (loop for c across s for j from 1 to (1+ (length s)) with i = 0 do (if (plusp (gethash c table 0)) (decf missing)) (decf (gethash c table 0)) (loop while (zerop missing) do (incf (gethash (elt s i) table)) (when (plusp (gethash (elt s i) table)) (incf missing) (if (or (zerop end) (> (- end start) (- j i))) (setf start i end j))) (incf i)) finally (return (list start end))))) ###Output _____no_output_____ ###Markdown 77. Combinations ###Code (defun combine (n k) (loop repeat k with res = '(nil) do (setf res (loop for set in res append (loop for i from (if (last set) (1+ (car (last set))) 1) to n collect (append set `(,i))))) finally (return res))) ###Output _____no_output_____ ###Markdown 78. Subsets ###Code (defun subsets (nums) (loop for num in nums with res = '(nil) do (nconc res (loop for set in res collect (append set `(,num)))) finally (return res))) ###Output _____no_output_____ ###Markdown 79. Word Search ###Code (defun exist (board word) (loop for i below (length board) do (loop for j below (length (first board)) do (if (dfs i j board word) (return-from exist t))) finally (return nil))) (defun dfs (i j board word) (cond ((string= word "") t) ((or (minusp i) (minusp j) (= i (length board)) (= j (length (first board))) (not (string= (nth j (nth i board)) (elt word 0)))) nil) (t (let ((tmp (nth j (nth i board))) (word (subseq word 1)) (exist)) (setf (nth j (nth i board)) "" exist (or (dfs (1+ i) j board word) (dfs (1- i) j board word) (dfs i (1+ j) board word) (dfs i (1- j) board word)) (nth j (nth i board)) tmp) exist)))) ###Output _____no_output_____ ###Markdown 80. Remove Duplicates from Sorted Array II ###Code (defun my-remove-duplicates (nums) (loop for n in nums with i = 0 do (if (or (< i 2) (not (= n (nth (- i 2) nums)))) (setf (nth i nums) n i (1+ i))) finally (return i))) ###Output _____no_output_____ ###Markdown 81. Search in Rotated Sorted Array II ###Code (defun my-search (nums target) (let* ((lo 0) (hi (1- (length nums))) (mid)) (loop while (<= lo hi) do (setf mid (floor (+ lo (/ (- hi lo) 2)))) (if (= target (nth mid nums)) (return t)) (loop while (and (< lo mid) (= (nth lo nums) (nth mid nums))) do (incf lo)) (cond ((<= (nth lo nums) (nth mid nums)) (if (and (<= (nth lo nums) target) (< target (nth mid nums))) (setf hi (1- mid)) (setf lo (1+ mid)))) ((and (< (nth lo nums) target) (<= target (nth hi nums))) (setf lo (1+ mid))) (t (setf hi (1- mid))))))) ###Output _____no_output_____ ###Markdown 82. Remove Duplicates from Sorted List II ###Code (defun my-delete-duplicates (head) (loop for i below (length head) for val in head with duplicate with res do (if (or (equal val duplicate) (and (< i (length head)) (= val (nth (1+ i) head)))) (setf duplicate val) (push val res)) finally (return (reverse res)))) ###Output _____no_output_____ ###Markdown 83. Remove Duplicates from Sorted List ###Code (defun my-delete-duplicates (head) (loop for e in head with prev with res do (unless (equal prev e) (push e res) (setf prev e)) finally (return (reverse res)))) ###Output _____no_output_____ ###Markdown 84. 
Largest Rectangle in Histogram ###Code (defun largest-rectangle-area (heights) (let* ((heights (append heights '(0))) (length (length heights)) (stack `(,(1- length))) (largest 0)) (loop for i below length with h with start do (loop while (< (nth i heights) (nth (first stack) heights)) do (setf h (nth (pop stack) heights)) (if (= (first stack) (1- length)) (setf start -1) (setf start (first stack))) (setf largest (max largest (* h (- i start 1))))) (push i stack) finally (return largest)))) ###Output _____no_output_____ ###Markdown 85. Maximal Rectangle ###Code (defun maximal-rectangle (matrix) (let* ((size (length (first matrix))) (heights (loop repeat (1+ size) collect 0))) (loop for row in matrix with res = 0 do (loop for i to size do (if (string= "1" (nth i row)) (incf (nth i heights)) (setf (nth i heights) 0))) (loop for i to size with stack = `(,size) with h with w do (loop while (< (nth i heights) (nth (first stack) heights)) do (setf h (nth (pop stack) heights)) (setf w (- i 1 (if (= (first stack) size) -1 (first stack)))) (setf res (max res (* h w)))) (push i stack)) finally (return res)))) ###Output _____no_output_____ ###Markdown 86. Partition List ###Code (defun partition (head x) (loop for n in head with small with large do (if (< n x) (push n small) (push n large)) finally (return (append (reverse small) (reverse large))))) ###Output _____no_output_____ ###Markdown 87. Scramble String ###Code (defun is-scramble (s1 s2) (let ((n (length s1))) (cond ((not (string= (sort (copy-seq s1) #'string<) (sort (copy-seq s2) #'string<))) nil) ((or (< n 4) (string= s1 s2)) t) (t (loop for i from 1 below n do (if (or (and (is-scramble (subseq s1 0 i) (subseq s2 0 i)) (is-scramble (subseq s1 i) (subseq s2 i))) (and (is-scramble (subseq s1 0 i) (subseq s2 (- n i))) (is-scramble (subseq s1 i) (subseq s2 0 (- n i))))) (return t))))))) ###Output _____no_output_____ ###Markdown 88. Merge Sorted Array ###Code (defun merge-88 (nums1 m nums2 n) (loop while (and (plusp m) (plusp n)) do (if (>= (nth (1- m) nums1) (nth (1- n) nums2)) ;; setf 从左到右运算,先执行 decf 操作 (setf (nth (+ (decf m) n) nums1) (nth m nums1)) (setf (nth (+ m (decf n)) nums1) (nth n nums2))) finally (when (plusp n) ;; 自动忽略 nums2 中多余的部分 (setf (subseq nums1 0 n) nums2)))) ###Output _____no_output_____ ###Markdown 89. Gray Code ###Code (defun gray-code (n) (loop for i below n with res = '(0) do (loop for element in res with base = (ash 1 i) ; (expt 2 i) do (push (logior element base) res)) finally (return (reverse res)))) ###Output _____no_output_____ ###Markdown 90. Subsets II用了额外的空间,还有不需要额外空间的解法。 ###Code (defun subsets-with-dup (nums) (let ((nums (sort (copy-seq nums) #'<)) (res '(()))) (loop for n in nums with prev = '(nil) with l do (setf l (length prev)) (loop for r in res do (if (or (not (equal n (first prev))) (and r (>= (length r) l) (equal (subseq r 0 l) prev))) (push (append r `(,n)) res))) (if (equal n (first prev)) (push n prev) (setf prev `(,n))) finally (return res)))) ###Output _____no_output_____
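###Markdown The stack-based solution to problem 84 above is the trickiest of this batch; a Python rendering of the same idea (indices kept on a monotonic stack, with a sentinel bar of height 0 to flush it) may make the width bookkeeping easier to follow — this is an illustrative sketch, not part of the original Lisp set: ###Code
def largest_rectangle_area(heights):
    stack = []                              # indices of bars with increasing heights
    best = 0
    for i, h in enumerate(heights + [0]):   # sentinel 0 flushes the stack at the end
        while stack and heights[stack[-1]] > h:
            top = stack.pop()
            width = i if not stack else i - stack[-1] - 1
            best = max(best, heights[top] * width)
        stack.append(i)
    return best

assert largest_rectangle_area([2, 1, 5, 6, 2, 3]) == 10
###Output _____no_output_____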
notebook/2018-11-01_testing_scrublet.ipynb
###Markdown Scrublet Testing out the Python package [scrublet](https://github.com/AllonKleinLab/scrublet) for identifying doublets. ###Code import numpy as np import pandas as pd import scrublet as scr from larval_gonad.io import cellranger_counts multi_rate = { 'rep1': { 'n_cells': 6_000, 'pct_multi': 0.023, }, 'rep2': { 'n_cells': 6_000, 'pct_multi': 0.023, }, 'rep3': { 'n_cells': 16_000, 'pct_multi': 0.061, }, } def run_scrublet(fname, rep, threshold=None): # Import data cnts = cellranger_counts(fname) counts_matrix = cnts.matrix.T.tocsc() # run scrublet scrub = scr.Scrublet(counts_matrix, expected_doublet_rate=multi_rate[rep]['pct_multi']) doublet_scores, predicted_doublets = scrub.scrub_doublets(min_counts=2, min_cells=3, min_gene_variability_pctl=85, n_prin_comps=10) if threshold: predicted_doublets = scrub.call_doublets(threshold=threshold) # Plot histogram scrub.plot_histogram() # Plot UMAP scrub.set_embedding('UMAP', scr.get_umap(scrub.manifold_obs_, 10, min_dist=0.3)) fig, axes = scrub.plot_embedding('UMAP', order_points=True) print(f'Found {predicted_doublets.sum():,} doublets.') # Write out predicted doublets dups = [f'{rep}_' + x for x in cnts.barcodes[predicted_doublets]] with open(f'../output/notebook/2018-11-01_testing_scrublet_{rep}.txt', 'w') as fh: fh.write('\n'.join(dups)) return dups rep1 = run_scrublet('../output/scrnaseq-wf/scrnaseq_samples/testis1_force/outs/filtered_gene_bc_matrices_h5.h5', 'rep1', threshold=.15) rep2 = run_scrublet('../output/scrnaseq-wf/scrnaseq_samples/testis2_force/outs/filtered_gene_bc_matrices_h5.h5', 'rep2', threshold=.12) rep3 = run_scrublet('../output/scrnaseq-wf/scrnaseq_samples/testis3_force/outs/filtered_gene_bc_matrices_h5.h5', 'rep3', threshold=.23) def read_list(fname): with open(fname) as fh: return fh.read().strip().split('\n') def putative_dups(fname, rep): # Get all cells background = [f'{rep}_' + x.split('-')[0] for x in read_list(fname)] df = pd.DataFrame(index=background, columns=['scrublet', 'doubletDetector', 'doubletFinder']).fillna(False) # dup calls from different tools scrublet = read_list(f'../output/notebook/2018-11-01_testing_scrublet_{rep}.txt') df.loc[scrublet, 'scrublet'] = True dupFinder = read_list(f'../output/notebook/2018-10-30_testing_doubletFinder_{rep}.txt') df.loc[dupFinder, 'doubletFinder'] = True dupDetector = read_list(f'../output/notebook/2018-10-29_testing_doubletdetection_{rep}.txt') df.loc[dupDetector, 'doubletDetector'] = True _all = df[df.sum(axis=1) == 3].index.tolist() _most = df[df.sum(axis=1) == 2].index.tolist() _single = df[df.sum(axis=1) == 1].index.tolist() print(f'{len(_all)} cells were duplicates with all three methods.') print(f'{len(_most)} cells were duplicates in two of three methods.') print(f'{len(_single)} cells were duplicates in one of three methods.') with open(f'../output/notebook/2018-11-01_probable_doublet_{rep}.txt', 'w') as fh: _dat = [*_all, *_most] fh.write('\n'.join(_dat)) with open(f'../output/notebook/2018-11-01_putative_doublet_{rep}.txt', 'w') as fh: _dat = [*_all, *_most, *_single] fh.write('\n'.join(_dat)) putative_dups('../output/scrnaseq-wf/scrnaseq_samples/testis1_force/outs/filtered_gene_bc_matrices/dm6.16/barcodes.tsv', 'rep1') putative_dups('../output/scrnaseq-wf/scrnaseq_samples/testis2_force/outs/filtered_gene_bc_matrices/dm6.16/barcodes.tsv', 'rep2') putative_dups('../output/scrnaseq-wf/scrnaseq_samples/testis3_force/outs/filtered_gene_bc_matrices/dm6.16/barcodes.tsv', 'rep3') ###Output 50 cells were duplicates with all three methods. 
148 cells were duplicates in two of three methods. 942 cells were duplicates in one of three methods.
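###Markdown To see how much the three callers actually agree, a follow-up sketch (reusing `read_list` from above, and assuming the per-replicate lists written by the three notebooks are still on disk) can compute pairwise Jaccard overlap for one replicate: ###Code
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / max(1, len(a | b))

rep = 'rep1'
calls = {
    'scrublet': read_list(f'../output/notebook/2018-11-01_testing_scrublet_{rep}.txt'),
    'doubletFinder': read_list(f'../output/notebook/2018-10-30_testing_doubletFinder_{rep}.txt'),
    'doubletDetector': read_list(f'../output/notebook/2018-10-29_testing_doubletdetection_{rep}.txt'),
}

names = list(calls)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        print(f'{n1} vs {n2}: Jaccard = {jaccard(calls[n1], calls[n2]):.2f}')
###Output _____no_output_____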
Transoceanic/.ipynb_checkpoints/SignalTransmission-checkpoint.ipynb
###Markdown Analog vs Digital Transmission In this notebook we will explore the potential advantages of digital transmission over analog transmission. We will consider the case of transmission over a long (e.g. transoceanic) cable in which several repeaters are used to compensate for the attenuation introduced by the transmission. Remember that if each cable segment introduces an attenuation of $1/G$, we can recover the original amplitude by boosting the signal with a repeater with gain $G$. However, if the signal has accumulated additive noise, the noise will be amplified as well so that, after $N$ repeaters, the noise will have been amplified $N$ times: $$ \hat{x}_N(t) = x(t) + NG\sigma(t)$$ If we use a digital signal, on the other hand, we can threshold the signal after each repeater and virtually eliminate the noise at each stage, so that even after several repeaters the transmission is still noise-free. Let's start with the standard initial bookkeeping... ###Code %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import IPython from scipy.io import wavfile plt.rcParams["figure.figsize"] = (14,4) ###Output _____no_output_____ ###Markdown Now we can read in an audio file from disk; we can plot it and play it back. The `wavfile.read()` function returns the audio data and the playback rate, which we will need to pass to the playback functions. ###Code rate, s = wavfile.read('speech.wav') plt.plot(s); IPython.display.Audio(s, rate=rate) rate w = (200.0 / rate) * 2 * np.pi c = np.cos(w * np.arange(0,len(s))) IPython.display.Audio(np.multiply(s, c) , rate=rate) ###Output _____no_output_____ ###Markdown The "Analog" and "Digital" Signals We will now create two versions of the audio signal, an "analog" version and a "digital" version. Obviously the analog version is just a simulation, since we're using a digital computer; we will assume that, by using floating point values, we're in fact close enough to infinite precision. In the digital version of the signal, on the other hand, the audio samples will only take integer values between -100 and +100 (i.e. we will use approximately 8 bits per audio sample). ###Code # the analog signal is simply rescaled between -100 and +100 # largest element in magnitude: norm = 1.0 / max(np.absolute([min(s), max(s)])) sA = 100.0 * s * norm # the digital version is clamped to the integers sD = np.round(sA) ###Output _____no_output_____ ###Markdown Remember that there is no free lunch and quantization implies a loss of quality; this initial loss (which we can minimize by using more bits per sample) is the price to pay for digital transmission. We can plot the error and compute the Signal to Noise Ratio (SNR) of the quantized signal ###Code plt.plot(sA-sD); ###Output _____no_output_____ ###Markdown As expected, the error is between -0.5 and +0.5, since in the "analog" signal the values are real-valued, whereas in the "digital" version they can only take integer values. As for the SNR, ###Code # we will be computing SNRs later as well, so let's define a function def SNR(noisy, original): # power of the error err = np.linalg.norm(original-noisy) # power of the signal sig = np.linalg.norm(original) # SNR in dBs return 10 * np.log10(sig/err) print ('SNR = %f dB' % SNR(sD, sA)) ###Output _____no_output_____ ###Markdown Can we hear the 17dB difference? A bit... 
###Code IPython.display.Audio(sA, rate=rate) IPython.display.Audio(sD, rate=rate) ###Output _____no_output_____ ###Markdown Transmission Let's now define a function that represents the net effect of transmitting audio over a cable segment terminated by a repeater: * the signal is attenuated * the signal accumulates additive noise as it propagates through the cable * the signal is amplified to the original amplitude by the repeater ###Code def repeater(x, noise_amplitude, attenuation): # first, create the noise noise = np.random.uniform(-noise_amplitude, noise_amplitude, len(x)) # attenuation x = x * attenuation # noise x = x + noise # gain compensation return x / attenuation ###Output _____no_output_____ ###Markdown We can use the repeater for both analog and digital signals. Transmission of the analog signal is simply a sequence of repeaters: ###Code def analog_tx(x, num_repeaters, noise_amplitude, attenuation): for n in range(0, num_repeaters): x = repeater(x, noise_amplitude, attenuation) return x ###Output _____no_output_____ ###Markdown For digital signals, however, we can rectify the signal after each repeater, because we know that values should only be integer-valued: ###Code def digital_tx(x, num_repeaters, noise_amplitude, attenuation): for n in range(0, num_repeaters): x = np.round(repeater(x, noise_amplitude, attenuation)) return x ###Output _____no_output_____ ###Markdown Let's compare transmission schemes ###Code NUM_REPEATERS = 70 NOISE_AMPLITUDE = 0.2 ATTENUATION = 0.5 yA = analog_tx(sA, NUM_REPEATERS, NOISE_AMPLITUDE, ATTENUATION) print ('Analog transmission: SNR = %f dB' % SNR(yA, sA)) yD = digital_tx(sD, NUM_REPEATERS, NOISE_AMPLITUDE, ATTENUATION) print ('Digital transmission: SNR = %f dB' % SNR(yD, sA)) ###Output _____no_output_____ ###Markdown As you can see, the SNR after digital transmission has not changed! Now the difference between audio clips should be easy to hear: ###Code IPython.display.Audio(yA, rate=rate) IPython.display.Audio(yD, rate=rate) ###Output _____no_output_____ ###Markdown Note however that, if the noise amplitude exceeds a certain value, digital transmission degrades even less gracefully than analog transmission: ###Code NOISE_AMPLITUDE = 0.3 yA = analog_tx(sA, NUM_REPEATERS, NOISE_AMPLITUDE, ATTENUATION) print ('Analog transmission: SNR = %f dB' % SNR(yA, sA)) yD = digital_tx(sD, NUM_REPEATERS, NOISE_AMPLITUDE, ATTENUATION) print ('Digital transmission: SNR = %f dB' % SNR(yD, sA)) ###Output _____no_output_____
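###Markdown To locate the point where digital transmission stops winning, a small added experiment (reusing the functions and signals defined above) can sweep the noise amplitude and tabulate both SNRs: ###Code
for na in [0.1, 0.2, 0.25, 0.3, 0.4]:
    snr_a = SNR(analog_tx(sA, NUM_REPEATERS, na, ATTENUATION), sA)
    snr_d = SNR(digital_tx(sD, NUM_REPEATERS, na, ATTENUATION), sA)
    print('noise amplitude %.2f: analog SNR = %5.1f dB, digital SNR = %5.1f dB' % (na, snr_a, snr_d))
###Output _____no_output_____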
notebooks/short_atmodat_checker_demonstration.ipynb
###Markdown *** Short demonstration of the ATMODAT Standard Compliance Checker ![AtMoDat Image](https://www.dkrz.de/en/projects-and-partners/projects-1/atmodat-1/@@images/logo/preview) Angelika Heil, DKRZ TGIF Oct 1, 2021 *** README – For demonstration purposes, we prepared netCDF files created from CMIP6 model output, see Appendix. – In addition, we included a HD(CP)2-file. – Test netCDF files are stored in the directory demo_data. – To to execute BASH commands in a cell, put %%bash in the first line of that cell. Step 1: Check available netCDF files in demo_data directory using BASH listing command `ls` (-lh option to print out details on files inlcuding human-readible file sizes) Test files created from CMIP6 https://cera-www.dkrz.de/WDCC/ui/cerasearch/cmip6?input=CMIP6.CMIP.MPI-M.MPI-ESM1-2-HR.historical ###Code %%bash ls -lh demo_data/*.nc ###Output _____no_output_____ ###Markdown Test files created from https://cera-www.dkrz.de/WDCC/ui/cerasearch/entry?acronym=hope_trop_pyrnet01_l1_ta ###Code %%bash ls -lh demo_data/hope/*.nc ###Output _____no_output_____ ###Markdown Step 2: Exploring netCDF file content with NCOs using the BASH terminal command `ncdump -h` (-h options means that only the netCDF header is shown) Step 2.1: This is how the original metadata of a CMIP6 netCDF file look like ###Code %%bash ncdump -h demo_data/CMIP6_ATTRIBUTES.nc ###Output _____no_output_____ ###Markdown Step 2.2: This is how a CMIP6 netCDF header looks when all metadata were removed ###Code %%bash ncdump -h demo_data/NO_ATTRIBUTES.nc ###Output _____no_output_____ ###Markdown Step 2.3: This is how a CMIP6 netCDF header looks like that only contains all ATMODAT Standard attributes (mandatory, recommended, optional) ###Code %%bash ncdump -h demo_data/ALL_ATMODAT_ATTRIBUTES.nc ###Output _____no_output_____ ###Markdown Step 3: Exploring netCDF file content with xarray. Step 3.1: Import required Python module ###Code import xarray as xr ###Output _____no_output_____ ###Markdown Step 3.2: Read a CMIP6 file that is standardised according to the ATMODAT Standard ###Code ifile = 'demo_data/ALL_ATMODAT_ATTRIBUTES.nc' ds = xr.open_dataset(ifile) ###Output _____no_output_____ ###Markdown Step 3.3: Have a look at the file content* Click on the file icon next to the database icon to view the attributes of the individual coordinate and data variables. * click on the $\nabla$ Attributes to look at the 31 global attributes. ###Code ds ###Output _____no_output_____ ###Markdown Step 3.5: Plot first time step of variable tas ###Code ds.tas.isel(time=1).plot() ###Output _____no_output_____ ###Markdown **--> With metadata, the plotting routine automatically labels the plot with units .** Step 4: Evaluate the netCDF files with the atmodat checker*Notes* * Run the atmodat checker using the command `run_checks.py` from BASH terminal.* Please note that the atmodat checker contains two modules: * one that checks the global attributes for compliance with the ATMODAT standard, * and another that performs a standard CF check (building upon the cfchecks library). 
Step 4.1: Show usage instructions of the `run_checks.py` ###Code %%bash run_checks.py --help ###Output _____no_output_____ ###Markdown Step 4.2: Check the file ALL_ATMODAT_ATTRIBUTES.nc and write checker output to output directory *myoutputdir**Notes* * Without specifying a user-defined output directory (-op flag), the atmodat checker would write the checker output into ../checker_ouput/YYYYMMDD_HHMM* We use the -s option to create summary checker output (files: short_summary.txt and long_summary_*.csv) ###Code %%bash run_checks.py -s -f demo_data/ALL_ATMODAT_ATTRIBUTES.nc -op myoutputdir ###Output _____no_output_____ ###Markdown Step 4.3: Check content of the checker output directory myoutputdir Step 4.3.1: List folders and subfolders ###Code %%bash echo $'\n===== content of myoutputdir ====' ls -g myoutputdir echo $'\n===== atmodat subdirectory with detailed checker output ====' #echo '\n ===== atmodat subdirectory with detailed checker output ====' ls -g myoutputdir/atmodat echo $'\n====== CF subdirectory with detailed checker output ====' ls -g myoutputdir/CF ###Output _____no_output_____ ###Markdown Step 4.3.2: Show content of short_summary.txt that provides summary statistics on the atmodat checker and CF checker results ###Code %%bash cat myoutputdir/short_summary.txt ###Output _____no_output_____ ###Markdown Step 4.3.3: Show content of long_summary_recommended.csv ###Code %%bash cat myoutputdir/long_summary_recommended.csv ###Output _____no_output_____ ###Markdown **--> File contains all recommended metadata, so the long_summary has no entry except of the header .** Step 4.3: Check all files contained in directory demo_data and write checker output to output directory *myoutputdir2**Notes* * Let the checker run over all files contained in the entire directory demo_data (-p flag).* Write checker output to output directory *myoutputdir2* (-op flag).* We use the -s option to create summary checker output (files: short_summary.txt and long_summary_*.csv) ###Code %%bash run_checks.py -s -p demo_data/ -op myoutputdir2 ###Output _____no_output_____ ###Markdown Step 4.4: Check content of the checker output directory myoutputdir2 Step 4.4.1: List folders and subfolders ###Code %%bash ls -g myoutputdir2 echo $'\n===== atmodat subdirectory with detailed checker output ====' #echo '\n ===== atmodat subdirectory with detailed checker output ====' ls -g myoutputdir2/atmodat echo $'\n====== CF subdirectory with detailed checker output ====' ls -g myoutputdir2/CF ###Output _____no_output_____ ###Markdown Step 4.4.2: Show content of short_summary.txt that provides summary statistics on the atmodat checker and CF checker results ###Code %%bash cat myoutputdir2/short_summary.txt ###Output _____no_output_____ ###Markdown Step 4.4.3: Show content of long_summary_mandatory.csv ###Code %%bash cat myoutputdir2/long_summary_mandatory.csv ###Output _____no_output_____ ###Markdown **--> lon_summary_mandatory.txt lists all mandatory errors detected in any file contained in the directory demo_data.** – CMIP6_ATTRIBUTES.nc: The content of the global attribute *Conventions* is 'CF-1.7 CMIP-6.2' , but for meeting the ATMODAT Standard, this attribute has to be 'CF-1.8 ATMODAT-3.0' – hope_trop_pyrnet01_l1_ta_1.nc: Spelling error; Institution should be insitution Step 4.4.4: Show content of long_summary_recommended.csv ###Code %%bash head -20 myoutputdir2/long_summary_recommended.csv ###Output _____no_output_____ ###Markdown Step 4.4.5: Check which file provoked an error message in the CF checker ###Code %%bash 
CFinvalid=`grep -l nvalid myoutputdir2/CF/*.txt` cat ${CFinvalid} ###Output _____no_output_____ ###Markdown Step 4.4.6: Check how the CF checker output looks like for NO_ATTRIBUTES.nc ###Code %%bash cat myoutputdir2/CF/NO_ATTRIBUTES_cfchecks_result.txt ###Output _____no_output_____ ###Markdown **--> cfchecks routine only issues a warning/information message if variable metadata are completely missing.** – Zero errors in the cfchecks routine does not necessarily mean that a data file is CF compliant!' – We have to enhance the atmodat checker output to capture insufficient variable metadata. APPENDIX How the CMIP6 sample files were prepared See more details on the CMIP6 experiment: https://cera-www.dkrz.de/WDCC/ui/cerasearch/cmip6?input=CMIP6.CMIP.MPI-M.MPI-ESM1-2-HR.historical Merge the variables ps and tas into a single netCDF file, subset region of Germany and select the day 2010-07-12 ###Code %%bash #-- make output directory for test files odir='demo_data' mkdir -p ${odir} cd ${odir} fileroot='3hr_MPI-ESM1-2-HR_historical_r6i1p1f1_gn_201001010300-201501010000' #-- get surface pressure data wget http://esgf3.dkrz.de/thredds/fileServer/cmip6/CMIP/MPI-M/MPI-ESM1-2-HR/historical/r6i1p1f1/3hr/ps/gn/v20190710/ps_${fileroot}.nc #-- get (surface) air temperature data wget http://esgf3.dkrz.de/thredds/fileServer/cmip6/CMIP/MPI-M/MPI-ESM1-2-HR/historical/r6i1p1f1/3hr/tas/gn/v20190710/tas_${fileroot}.nc #-- subset data: extract 2010-07-12 and region of Germany ofileroot='CMIP6_MPI-ESM1-2-HR_hist_r6i1p1f1' region='Germany' lonlatbox='5,15,47,55' for var in ps tas;do cdo seldate,2010-07-12 -sellonlatbox,${lonlatbox} ${var}_${fileroot}.nc ${var}_${ofileroot}_${region}_2010-07-12.nc;done rm -f *_${fileroot}.nc cdo -s merge {??,???}_${ofileroot}_${region}_2010-07-12.nc CMIP6_ATTRIBUTES.nc rm -f {??,???}_${ofileroot}_${region}_2010-07-12.nc ###Output _____no_output_____ ###Markdown Modify the CMIP6 metadata to create the files: NOATTRIBUTES.nc, MINUM_ATMODAT_ATTRIBUTES.nc, WRONG_STANDARD_NAME.nc ###Code %%bash cp -p CMIP6_ATTRIBUTES.nc NOATTRIBUTES.nc ncatted -h -a ,global,d,, -a ,,d,, NOATTRIBUTES.nc ncks -h -O -C -x -v lon_bnds,lat_bnds,time_bnds,height NOATTRIBUTES.nc tmpfile; mv tmpfile NOATTRIBUTES.nc cp -p CMIP6_ATTRIBUTES.nc MINUM_ATMODAT_ATTRIBUTES.nc ncatted -O -h -a ,global,d,, -a Conventions,global,c,c,'CF-1.8 ATMODAT-3.0' -a institution,global,c,c,'Max Planck Institute for Meteorology' \ -a source,global,c,c,'MPI-ESM1.2-HR (2017)' MINUM_ATMODAT_ATTRIBUTES.nc for varatt in history comment standard_name axis bounds coordinates CDI_grid_type CDI_grid_num_LPE _FillValue missing_value \ cell_methods cell_measures; do ncatted -h -O -a ${varatt},,d,, MINUM_ATMODAT_ATTRIBUTES.nc;done ncks -h -O -C -x -v lon_bnds,lat_bnds,time_bnds,height MINUM_ATMODAT_ATTRIBUTES.nc tmpfile;mv tmpfile MINUM_ATMODAT_ATTRIBUTES.nc cp -p MINUM_ATMODAT_ATTRIBUTES.nc WRONG_STANDARD_NAME.nc ncatted -O -h -a standard_name,ps,c,c,"surface_air_pressure" -a units,ps,d,, -a standard_name,time,c,c,"times" \ -a standard_name,tas,c,c,"air_temperature" -a units,tas,o,c,"°C" WRONG_STANDARD_NAME.nc ###Output _____no_output_____ ###Markdown Modify the CMIP6 metadata to create the file: ALL_ATMODAT_ATTRIBUTES.nc ###Code %%bash creator='Jungclaus, Johann, https://orcid.org/0000-0002-3849-4339 et al.' 
#-- optimally add full list of creators; # for CMIP6, the list would be very long, https://cera-www.dkrz.de/WDCC/ui/cerasearch/cmip6?input=CMIP6.CMIP.MPI-M.MPI-ESM1-2-HR.historical.r6i1p1f1.6hrPlev.tas.gn.v20190815 which is why we truncate it in this example file crs='WGS84 (T127 gaussian grid)' #-- gridtype is Gaussian, globally 384 x 192 longitude/latitude geospatial_lat_resolution='0.94 degree' #-- approximate value geospatial_lon_resolution='0.9375 degree' geospatial_vertical_resolution='point' #-- only surface layer keywords='CMIP, COUPLED CLIMATE MODELS' standard_name_vocabulary='CF Standard Name Table v77' summary='These data have been generated as part of the internationally-coordinated Coupled Model Intercomparison Project Phase 6 (CMIP6; see also GMD Special Issue: http://www.geosci-model-dev.net/special_issue590.html). The data contained in this file represents MPI-M MPI-ESM1.2-HR model output prepared for CMIP6 CMIP historical.' comment='This file has been prepared for the EMS21 user workshop https://meetingorganizer.copernicus.org/EMS2021/session/41771. The user workshop focusses on netCDF metadata. ' featureType='point' keywords_vocabulary='GCMD' metadata_link='https://cera-www.dkrz.de/WDCC/ui/cerasearch/cmip6?input=CMIP6.CMIP.MPI-M.MPI-ESM1-2-HR' processing_level='not applicable; data are model data' program='CMIP6' project='CMIP6' cp -p demo_data/CMIP6_ATTRIBUTES.nc demo_data/ALL_ATMODAT_ATTRIBUTES.nc #-- Recommended ATMODAT global attributes NOT contained in CMIP6 attributes: rec_att_AS_miss="'creator' 'crs' 'geospatial_lat_resolution' 'geospatial_lon_resolution' 'geospatial_vertical_resolution' 'keywords' 'standard_name_vocabulary' 'summary'" rec_att_AS_mand="'Conventions' 'institution' 'source'" rec_att_AS_rec="''contact' 'creation_date' 'creator' 'crs' 'frequency' 'geospatial_lat_resolution' 'geospatial_lon_resolution' 'geospatial_vertical_resolution' 'history' 'institution_id' 'keywords' 'license' 'nominal_resolution' 'realm' 'source_type' 'standard_name_vocabulary' 'summary' 'title'" rec_att_AS_opt="'comment' 'featureType' 'further_info_url' 'keywords_vocabulary' 'metadata_link' 'processing_level' 'program' 'project' 'references'" conv_att_AS='CF-1.8 ATMODAT-3.0' #-- required Conventions entry in ATMODAT Standard conv_att_C6='CF-1.7 CMIP-6.2' #-- required Conventions entry in CMIP6 Standard #-- Get all CMIP6 global attributes rec_att_C6_all=`ncdump -h demo_data/ALL_ATMODAT_ATTRIBUTES.nc|grep '\s:'|cut -d: -f2|cut -d= -f1|tr -d ' '|sed -e "s/^/'/" -e "s/$/' /"|tr -d '\n'` #-- Remove all CMIP6 global attributes that are not requiered for the ATMODAT standard for rec in $rec_att_C6_all;do rec2=`echo $rec|tr -d \'\" ` grep_res=`echo $rec_att_AS_mand $rec_att_AS_rec $rec_att_AS_opt |grep $rec2` if [ -z "${grep_res}" ]; then ncatted -O -h -a $rec2,global,d,, demo_data/ALL_ATMODAT_ATTRIBUTES.nc fi done #-- Get all CMIP6 global attributes that are also ATMODAT global attributes rec_att_C6_min=`ncdump -h demo_data/ALL_ATMODAT_ATTRIBUTES.nc|grep '\s:'|cut -d: -f2|cut -d= -f1|tr -d ' '|sed -e "s/^/'/" -e "s/$/' /"|tr -d '\n'` #-- Get all ATMODAT global attributes that are not provided by the CMIP6 global attributes for rec in $rec_att_AS_mand $rec_att_AS_rec $rec_att_AS_opt;do rec2=`echo $rec|tr -d \'\" ` grep_res=`echo $rec_att_C6_min |grep $rec2` if [ -z "${grep_res}" ]; then eval attributetext=\$$rec2 ncatted -O -h -a $rec2,global,c,c,"`echo $attributetext`" demo_data/ALL_ATMODAT_ATTRIBUTES.nc fi done #-- Replace CMIP Conventions attribute with ATMODAT Conventions attribute 
ncatted -O -h -a Conventions,global,o,c,"$conv_att_AS" demo_data/ALL_ATMODAT_ATTRIBUTES.nc #-- Remove variable attributes which are not mandatory to the ATMODAT standard ncks -h -O -C -x -v lon_bnds,lat_bnds,time_bnds,height demo_data/ALL_ATMODAT_ATTRIBUTES.nc tmpfile; mv tmpfile demo_data/ALL_ATMODAT_ATTRIBUTES.nc for varatt in history comment axis bounds coordinates CDI_grid_type CDI_grid_num_LPE _FillValue missing_value cell_methods cell_measures; do ncatted -h -O -a ${varatt},,d,, demo_data/ALL_ATMODAT_ATTRIBUTES.nc done ###Output _____no_output_____
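###Markdown As an optional sanity check (not part of the original recipe), the global attributes of the file produced above can be listed from Python. This sketch assumes the `netCDF4` package is installed and that `ALL_ATMODAT_ATTRIBUTES.nc` was written to `demo_data/` as in the cell above. ###Code
from netCDF4 import Dataset

# Open the modified file (assumed location) and print its global attributes
with Dataset("demo_data/ALL_ATMODAT_ATTRIBUTES.nc") as nc:
    for att in nc.ncattrs():
        print(f"{att}: {getattr(nc, att)}")
    # Variable-level attributes can be inspected the same way, e.g. for 'tas'
    print("tas attributes:", nc.variables["tas"].ncattrs())
###Output _____no_output_____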
aml-pipelines-use-databricks-as-compute-target.ipynb
###Markdown Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Using Databricks as a Compute Target from Azure Machine Learning PipelineTo use Databricks as a compute target from [Azure Machine Learning Pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines), a [DatabricksStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep?view=azure-ml-py) is used. This notebook demonstrates the use of DatabricksStep in Azure Machine Learning Pipeline.The notebook will show:1. Running an arbitrary Databricks notebook that the customer has in Databricks workspace2. Running an arbitrary Python script that the customer has in DBFS3. Running an arbitrary Python script that is available on local computer (will upload to DBFS, and then run in Databricks) 4. Running a JAR job that the customer has in DBFS. Before you begin:1. **Create an Azure Databricks workspace** in the same subscription where you have your Azure Machine Learning workspace. You will need details of this workspace later on to define DatabricksStep. [Click here](https://ms.portal.azure.com/blade/HubsExtension/Resources/resourceType/Microsoft.Databricks%2Fworkspaces) for more information.2. **Create PAT (access token)**: Manually create a Databricks access token at the Azure Databricks portal. See [this](https://docs.databricks.com/api/latest/authentication.htmlgenerate-a-token) for more information.3. **Add demo notebook to ADB**: This notebook has a sample you can use as is. Launch Azure Databricks attached to your Azure Machine Learning workspace and add a new notebook. 4. **Create/attach a Blob storage** for use from ADB Add demo notebook to ADB WorkspaceCopy and paste the below code to create a new notebook in your ADB workspace. ```python direct accessdbutils.widgets.get("myparam")p = getArgument("myparam")print ("Param -\'myparam':")print (p)dbutils.widgets.get("input")i = getArgument("input")print ("Param -\'input':")print (i)dbutils.widgets.get("output")o = getArgument("output")print ("Param -\'output':")print (o)n = i + "/testdata.txt"df = spark.read.csv(n)display (df)data = [('value1', 'value2')]df2 = spark.createDataFrame(data)z = o + "/output.txt"df2.write.csv(z)``` Azure Machine Learning and Pipeline SDK-specific imports ###Code import os import azureml.core from azureml.core.runconfig import MavenLibrary, PyPiLibrary, RCranLibrary, JarLibrary, EggLibrary from azureml.core.compute import ComputeTarget, DatabricksCompute from azureml.exceptions import ComputeTargetException from azureml.core import Workspace, Experiment from azureml.pipeline.core import Pipeline, PipelineData from azureml.pipeline.steps import DatabricksStep from azureml.core.datastore import Datastore from azureml.data.data_reference import DataReference # Check core SDK version number print("SDK version:", azureml.core.VERSION) ###Output _____no_output_____ ###Markdown Initialize WorkspaceInitialize a workspace object from persisted configuration. Make sure the config file is present at .\config.json ###Code ws = Workspace.from_config() print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n') ###Output _____no_output_____ ###Markdown Attach Databricks compute targetNext, you need to add your Databricks workspace to Azure Machine Learning as a compute target and give it a name. 
You will use this name to refer to your Databricks workspace compute target inside Azure Machine Learning.- **Resource Group** - The resource group name of your Azure Machine Learning workspace- **Databricks Workspace Name** - The workspace name of your Azure Databricks workspace- **Databricks Access Token** - The access token you created in ADB**The Databricks workspace need to be present in the same subscription as your AML workspace** ###Code # Replace with your account info before running. db_compute_name=os.getenv("DATABRICKS_COMPUTE_NAME", "<my-databricks-compute-name>") # Databricks compute name db_resource_group=os.getenv("DATABRICKS_RESOURCE_GROUP", "<my-db-resource-group>") # Databricks resource group db_workspace_name=os.getenv("DATABRICKS_WORKSPACE_NAME", "<my-db-workspace-name>") # Databricks workspace name db_access_token=os.getenv("DATABRICKS_ACCESS_TOKEN", "<my-access-token>") # Databricks access token try: databricks_compute = DatabricksCompute(workspace=ws, name=db_compute_name) print('Compute target {} already exists'.format(db_compute_name)) except ComputeTargetException: print('Compute not found, will use below parameters to attach new one') print('db_compute_name {}'.format(db_compute_name)) print('db_resource_group {}'.format(db_resource_group)) print('db_workspace_name {}'.format(db_workspace_name)) print('db_access_token {}'.format(db_access_token)) config = DatabricksCompute.attach_configuration( resource_group = db_resource_group, workspace_name = db_workspace_name, access_token= db_access_token) databricks_compute=ComputeTarget.attach(ws, db_compute_name, config) databricks_compute.wait_for_completion(True) ###Output _____no_output_____ ###Markdown Data Connections with Inputs and OutputsThe DatabricksStep supports Azure Bloband ADLS for inputs and outputs. You also will need to define a [Secrets](https://docs.azuredatabricks.net/user-guide/secrets/index.html) scope to enable authentication to external data sources such as Blob and ADLS from Databricks.- Databricks documentation on [Azure Blob](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html)- Databricks documentation on [ADLS](https://docs.databricks.com/spark/latest/data-sources/azure/azure-datalake.html) Type of Data AccessDatabricks allows to interact with Azure Blob and ADLS in two ways.- **Direct Access**: Databricks allows you to interact with Azure Blob or ADLS URIs directly. The input or output URIs will be mapped to a Databricks widget param in the Databricks notebook.- **Mouting**: You will be supplied with additional parameters and secrets that will enable you to mount your ADLS or Azure Blob input or output location in your Databricks notebook. Direct Access: Python sample codeIf you have a data reference named "input" it will represent the URI of the input and you can access it directly in the Databricks python notebook like so: ```pythondbutils.widgets.get("input")y = getArgument("input")df = spark.read.csv(y)``` Mounting: Python sample code for Azure BlobGiven an Azure Blob data reference named "input" the following widget params will be made available in the Databricks notebook: ```python This contains the input URIdbutils.widgets.get("input")myinput_uri = getArgument("input") How to get the input datastore name inside ADB notebook This contains the name of a Databricks secret (in the predefined "amlscope" secret scope) that contians an access key or sas for the Azure Blob input (this name is obtained by appending the name of the input with "_blob_secretname". 
dbutils.widgets.get("input_blob_secretname") myinput_blob_secretname = getArgument("input_blob_secretname") This contains the required configuration for mountingdbutils.widgets.get("input_blob_config")myinput_blob_config = getArgument("input_blob_config") Usagedbutils.fs.mount( source = myinput_uri, mount_point = "/mnt/input", extra_configs = {myinput_blob_config:dbutils.secrets.get(scope = "amlscope", key = myinput_blob_secretname)})``` Mounting: Python sample code for ADLSGiven an ADLS data reference named "input" the following widget params will be made available in the Databricks notebook: ```python This contains the input URIdbutils.widgets.get("input") myinput_uri = getArgument("input") This contains the client id for the service principal that has access to the adls inputdbutils.widgets.get("input_adls_clientid") myinput_adls_clientid = getArgument("input_adls_clientid") This contains the name of a Databricks secret (in the predefined "amlscope" secret scope) that contains the secret for the above mentioned service principaldbutils.widgets.get("input_adls_secretname") myinput_adls_secretname = getArgument("input_adls_secretname") This contains the refresh url for the mounting configsdbutils.widgets.get("input_adls_refresh_url") myinput_adls_refresh_url = getArgument("input_adls_refresh_url") Usage configs = {"dfs.adls.oauth2.access.token.provider.type": "ClientCredential", "dfs.adls.oauth2.client.id": myinput_adls_clientid, "dfs.adls.oauth2.credential": dbutils.secrets.get(scope = "amlscope", key =myinput_adls_secretname), "dfs.adls.oauth2.refresh.url": myinput_adls_refresh_url}dbutils.fs.mount( source = myinput_uri, mount_point = "/mnt/output", extra_configs = configs)``` Use Databricks from Azure Machine Learning PipelineTo use Databricks as a compute target from Azure Machine Learning Pipeline, a DatabricksStep is used. Let's define a datasource (via DataReference) and intermediate data (via PipelineData) to be used in DatabricksStep. ###Code # Use the default blob storage def_blob_store = Datastore(ws, "workspaceblobstore") print('Datastore {} will be used'.format(def_blob_store.name)) # We are uploading a sample file in the local directory to be used as a datasource def_blob_store.upload_files(files=["./testdata.txt"], target_path="dbtest", overwrite=False) step_1_input = DataReference(datastore=def_blob_store, path_on_datastore="dbtest", data_reference_name="input") step_1_output = PipelineData("output", datastore=def_blob_store) ###Output _____no_output_____ ###Markdown Add a DatabricksStepAdds a Databricks notebook as a step in a Pipeline.- ***name:** Name of the Module- **inputs:** List of input connections for data consumed by this step. Fetch this inside the notebook using dbutils.widgets.get("input")- **outputs:** List of output port definitions for outputs produced by this step. Fetch this inside the notebook using dbutils.widgets.get("output")- **existing_cluster_id:** Cluster ID of an existing Interactive cluster on the Databricks workspace. If you are providing this, do not provide any of the parameters below that are used to create a new cluster such as spark_version, node_type, etc.- **spark_version:** Version of spark for the databricks run cluster. default value: 4.0.x-scala2.11- **node_type:** Azure vm node types for the databricks run cluster. 
default value: Standard_D3_v2- **num_workers:** Specifies a static number of workers for the databricks run cluster- **min_workers:** Specifies a min number of workers to use for auto-scaling the databricks run cluster- **max_workers:** Specifies a max number of workers to use for auto-scaling the databricks run cluster- **spark_env_variables:** Spark environment variables for the databricks run cluster (dictionary of {str:str}). default value: {'PYSPARK_PYTHON': '/databricks/python3/bin/python3'}- **notebook_path:** Path to the notebook in the databricks instance. If you are providing this, do not provide python script related paramaters or JAR related parameters.- **notebook_params:** Parameters for the databricks notebook (dictionary of {str:str}). Fetch this inside the notebook using dbutils.widgets.get("myparam")- **python_script_path:** The path to the python script in the DBFS or S3. If you are providing this, do not provide python_script_name which is used for uploading script from local machine.- **python_script_params:** Parameters for the python script (list of str)- **main_class_name:** The name of the entry point in a JAR module. If you are providing this, do not provide any python script or notebook related parameters.- **jar_params:** Parameters for the JAR module (list of str)- **python_script_name:** name of a python script on your local machine (relative to source_directory). If you are providing this do not provide python_script_path which is used to execute a remote python script; or any of the JAR or notebook related parameters.- **source_directory:** folder that contains the script and other files- **hash_paths:** list of paths to hash to detect a change in source_directory (script file is always hashed)- **run_name:** Name in databricks for this run- **timeout_seconds:** Timeout for the databricks run- **runconfig:** Runconfig to use. Either pass runconfig or each library type as a separate parameter but do not mix the two- **maven_libraries:** maven libraries for the databricks run- **pypi_libraries:** pypi libraries for the databricks run- **egg_libraries:** egg libraries for the databricks run- **jar_libraries:** jar libraries for the databricks run- **rcran_libraries:** rcran libraries for the databricks run- **compute_target:** Azure Databricks compute- **allow_reuse:** Whether the step should reuse previous results when run with the same settings/inputs- **version:** Optional version tag to denote a change in functionality for the step\* *denotes required fields* *You must provide exactly one of num_workers or min_workers and max_workers paramaters* *You must provide exactly one of databricks_compute or databricks_compute_name parameters* Use runconfig to specify library dependenciesYou can use a runconfig to specify the library dependencies for your cluster in Databricks. The runconfig will contain a databricks section as follows:```yamlenvironment: Databricks details databricks: List of maven libraries. mavenLibraries: - coordinates: org.jsoup:jsoup:1.7.1 repo: '' exclusions: - slf4j:slf4j - '*:hadoop-client' List of PyPi libraries pypiLibraries: - package: beautifulsoup4 repo: '' List of RCran libraries rcranLibraries: - Coordinates. package: ada Repo repo: http://cran.us.r-project.org List of JAR libraries jarLibraries: - Coordinates. library: dbfs:/mnt/libraries/library.jar List of Egg libraries eggLibraries: - Coordinates. 
library: dbfs:/mnt/libraries/library.egg```You can then create a RunConfiguration object using this file and pass it as the runconfig parameter to DatabricksStep.```pythonfrom azureml.core.runconfig import RunConfigurationrunconfig = RunConfiguration()runconfig.load(path='', name='')``` 1. Running the demo notebook already added to the Databricks workspaceCreate a notebook in the Azure Databricks workspace, and provide the path to that notebook as the value associated with the environment variable "DATABRICKS_NOTEBOOK_PATH". This will then set the variable notebook_path when you run the code cell below: ###Code notebook_path=os.getenv("DATABRICKS_NOTEBOOK_PATH", "<my-databricks-notebook-path>") # Databricks notebook path dbNbStep = DatabricksStep( name="DBNotebookInWS", inputs=[step_1_input], outputs=[step_1_output], num_workers=1, notebook_path=notebook_path, notebook_params={'myparam': 'testparam'}, run_name='DB_Notebook_demo', compute_target=databricks_compute, allow_reuse=False ) ###Output _____no_output_____ ###Markdown Build and submit the Experiment ###Code #PUBLISHONLY #steps = [dbNbStep] #pipeline = Pipeline(workspace=ws, steps=steps) #pipeline_run = Experiment(ws, 'DB_Notebook_demo').submit(pipeline) #pipeline_run.wait_for_completion() ###Output _____no_output_____ ###Markdown View Run Details ###Code #PUBLISHONLY #from azureml.widgets import RunDetails #RunDetails(pipeline_run).show() ###Output _____no_output_____ ###Markdown 2. Running a Python script from DBFSThis shows how to run a Python script in DBFS. To complete this, you will need to first upload the Python script in your local machine to DBFS using the [CLI](https://docs.azuredatabricks.net/user-guide/dbfs-databricks-file-system.html). The CLI command is given below:```dbfs cp ./train-db-dbfs.py dbfs:/train-db-dbfs.py```The code in the below cell assumes that you have completed the previous step of uploading the script `train-db-dbfs.py` to the root folder in DBFS. ###Code python_script_path = "dbfs:/train-db-dbfs.py" dbPythonInDbfsStep = DatabricksStep( name="DBPythonInDBFS", inputs=[step_1_input], num_workers=1, python_script_path=python_script_path, python_script_params={'--input_data'}, run_name='DB_Python_demo', compute_target=databricks_compute, allow_reuse=False ) ###Output _____no_output_____ ###Markdown Build and submit the Experiment ###Code #PUBLISHONLY #steps = [dbPythonInDbfsStep] #pipeline = Pipeline(workspace=ws, steps=steps) #pipeline_run = Experiment(ws, 'DB_Python_demo').submit(pipeline) #pipeline_run.wait_for_completion() ###Output _____no_output_____ ###Markdown View Run Details ###Code #PUBLISHONLY #from azureml.widgets import RunDetails #RunDetails(pipeline_run).show() ###Output _____no_output_____ ###Markdown 3. Running a Python script in Databricks that currenlty is in local computerTo run a Python script that is currently in your local computer, follow the instructions below. The commented out code below code assumes that you have `train-db-local.py` in the `scripts` subdirectory under the current working directory.In this case, the Python script will be uploaded first to DBFS, and then the script will be run in Databricks. ###Code python_script_name = "train-db-local.py" source_directory = "." 
dbPythonInLocalMachineStep = DatabricksStep( name="DBPythonInLocalMachine", inputs=[step_1_input], num_workers=1, python_script_name=python_script_name, source_directory=source_directory, run_name='DB_Python_Local_demo', compute_target=databricks_compute, allow_reuse=False ) ###Output _____no_output_____ ###Markdown Build and submit the Experiment ###Code steps = [dbPythonInLocalMachineStep] pipeline = Pipeline(workspace=ws, steps=steps) pipeline_run = Experiment(ws, 'DB_Python_Local_demo').submit(pipeline) pipeline_run.wait_for_completion() ###Output _____no_output_____ ###Markdown View Run Details ###Code from azureml.widgets import RunDetails RunDetails(pipeline_run).show() ###Output _____no_output_____ ###Markdown 4. Running a JAR job that is alreay added in DBFSTo run a JAR job that is already uploaded to DBFS, follow the instructions below. You will first upload the JAR file to DBFS using the [CLI](https://docs.azuredatabricks.net/user-guide/dbfs-databricks-file-system.html).The commented out code in the below cell assumes that you have uploaded `train-db-dbfs.jar` to the root folder in DBFS. You can upload `train-db-dbfs.jar` to the root folder in DBFS using this commandline so you can use `jar_library_dbfs_path = "dbfs:/train-db-dbfs.jar"`:```dbfs cp ./train-db-dbfs.jar dbfs:/train-db-dbfs.jar``` ###Code main_jar_class_name = "com.microsoft.aeva.Main" jar_library_dbfs_path = "dbfs:/train-db-dbfs.jar" dbJarInDbfsStep = DatabricksStep( name="DBJarInDBFS", inputs=[step_1_input], num_workers=1, main_class_name=main_jar_class_name, jar_params={'arg1', 'arg2'}, run_name='DB_JAR_demo', jar_libraries=[JarLibrary(jar_library_dbfs_path)], compute_target=databricks_compute, allow_reuse=False ) ###Output _____no_output_____ ###Markdown Build and submit the Experiment ###Code #PUBLISHONLY #steps = [dbJarInDbfsStep] #pipeline = Pipeline(workspace=ws, steps=steps) #pipeline_run = Experiment(ws, 'DB_JAR_demo').submit(pipeline) #pipeline_run.wait_for_completion() ###Output _____no_output_____ ###Markdown View Run Details ###Code #PUBLISHONLY #from azureml.widgets import RunDetails #RunDetails(pipeline_run).show() ###Output _____no_output_____
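###Markdown The examples above submit each DatabricksStep in its own experiment, but several steps can also be combined into one pipeline run. The cell below is only a sketch: it assumes the step objects defined earlier (`dbNbStep`, `dbPythonInDbfsStep`, `dbPythonInLocalMachineStep`, `dbJarInDbfsStep`) were created against a valid workspace and compute target, and it uses a hypothetical experiment name. Steps with no data dependencies between them can run independently. ###Code
from azureml.core import Experiment
from azureml.pipeline.core import Pipeline

# Reuse the step objects built in the cells above (sketch only)
steps = [dbNbStep, dbPythonInDbfsStep, dbPythonInLocalMachineStep, dbJarInDbfsStep]

pipeline = Pipeline(workspace=ws, steps=steps)
pipeline_run = Experiment(ws, 'DB_Combined_demo').submit(pipeline)  # hypothetical experiment name
pipeline_run.wait_for_completion()
###Output _____no_output_____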
Notebooks/Intro/One Dimensional Data Worksheet-Python.ipynb
###Markdown One Dimensional Data WorksheetThis worksheet reviews the concepts discussed about 1 dimensional data. The goal for these exercises is getting you to think in terms of vectorized computing. This worksheet should take 20-30 minutes to complete. ###Code import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Exercise 1Create a Series object with 100 random integers, then filter out odd integers and reindex the Series. Hint: you can use ```python np.random.random_integers(1, 100, 100) ``` to create the random numbers. Print out the first 20 numbers. Exercise 2You will be given a list containing 10 strings. Create a new Series called validPhoneNumbers that only contains data in the format (XXX)XXX-XXXX. Don't forget to reindex the series after you've filtered it. ###Code numbers = ['(342)123-2345', '410-342-3421', '(234 434-2121', '(301)822-3423', '123-234-3423', '(410)555-4443', 'AAAAHHH', '(XXX)XXX-XXXX', '(602)123-4535', '(234)127-4534'] #Your code here... ###Output _____no_output_____ ###Markdown Exercise 3The code below contains a lambda function which converts a temperature from Farenheit to Celsius. You are given a Series called temperatures in Farhenheit. Using the ```.apply()``` function, convert the data into degrees Celsius. ###Code #This function converts a number from Farenheit to Celsius toCelsius = lambda x: (float(5)/9)*(x-32) #Creates a series with numbers that represent temperatures in Farenheit tempsInFarenheit = pd.Series( [92,33,-5,17,122,87 ]) #Your code here... ###Output _____no_output_____ ###Markdown Exercise 4You are given a list of numbers called `numList`. Without using a loop, write a script to count occurances of each value in the list. ###Code numList = [1,1,1,1,1,2,4,5,7,5,4,5,6,4,3,5,5,5,6,9,0,7,6,7,5,4,4,7] #Your code here... ###Output _____no_output_____ ###Markdown Exercise 5You are given a Series of IP Addresses and the goal is to limit this data to private IP addresses. Python has an `ipaddress` module which provides the capability to create, manipulate and operate on IPv4 and IPv6 addresses and networks. Complete documentation is available here: https://docs.python.org/3/library/ipaddress.html. Here are some examples of how you might use this module:```pythonimport ipaddressmyIP = ipaddress.ip_address( '192.168.0.1' )myNetwork = ipaddress.ip_network( '192.168.0.0/28' )Check membership in networkif myIP in myNetwork: This works print "Yay!"Loop through CIDR blocksfor ip in myNetwork: print( ip )192.168.0.0192.168.0.1……192.168.0.13192.168.0.14192.168.0.15Testing to see if an IP is privateif myIP.is_private: print( "This IP is private" )else: print( "Routable IP" )```1. First, write a function which takes an IP address and returns true if the IP is private, false if it is public. HINT: use the ```ipaddress``` module. 2. Next, use this to create a Series of true/false values in the same sequence as your original Series.3. Finally, use this to filter out the original Series so that it contains only private IP addresses. ###Code import ipaddress hosts = [ '192.168.1.2', '10.10.10.2', '172.143.23.34', '34.34.35.34', '172.15.0.1', '172.17.0.1'] #Your code here... ###Output _____no_output_____
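###Markdown The cell below is not part of the original worksheet; it sketches one possible approach to Exercises 3, 4 and 5, reusing the variables defined in the exercise cells above (`toCelsius`, `tempsInFarenheit`, `numList`, `hosts`). These are reference answers only; other vectorized solutions are equally valid. ###Code
import pandas as pd
import ipaddress

# Exercise 3: convert the Fahrenheit series to Celsius with .apply()
tempsInCelsius = tempsInFarenheit.apply(toCelsius)
print(tempsInCelsius)

# Exercise 4: count occurrences of each value without an explicit loop
counts = pd.Series(numList).value_counts()
print(counts)

# Exercise 5: keep only private IP addresses, then reindex
hostSeries = pd.Series(hosts)
isPrivate = hostSeries.apply(lambda ip: ipaddress.ip_address(ip).is_private)
privateHosts = hostSeries[isPrivate].reset_index(drop=True)
print(privateHosts)
###Output _____no_output_____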
tutorial/tutorial_2.ipynb
###Markdown Create a Search RoutineNow that we have turbo_seti installed and the data downloaded, we can start a Doppler drift search.First we want to create a search object using turbo_seti.find_doppler.find_doppler.FindDoppler()Note that this tutorial exposes a lot of internal details about `find_event_pipeline` and `plot_event_pipeline` that the first tutorial did not. ###Code import os import time from pathlib import Path from turbo_seti.find_doppler.find_doppler import FindDoppler DATADIR = str(Path.home()) + "/turbo_seti_data/" # Get rid of any pre-existing output files from a prior run. for x_file in sorted(os.listdir(DATADIR)): x_type = x_file.split('.')[-1] if x_type != 'h5': os.remove(DATADIR + x_file) # Get ready for search by instantiating the doppler object. doppler = FindDoppler(DATADIR + 'single_coarse_guppi_59046_80036_DIAG_VOYAGER-1_0011.rawspec.0000.h5', max_drift = 4, snr = 10, out_dir = DATADIR # This is where the turboSETI output files will be stored. ) print("\ntutorial_2: FindDoppler object was instantiated.") ###Output turbo_seti version 2.1.16 blimpy version 2.0.31 h5py version 3.5.0 tutorial_2: FindDoppler object was instantiated. ###Markdown Now we run the search routine on the spectra contained in this single HDF5 file: ###Code t1 = time.time() doppler.search() print("\ntutorial_2: Search complete, et = {:.1f} seconds.".format(time.time() - t1)) ###Output HDF5 header info: {'DIMENSION_LABELS': array(['time', 'feed_id', 'frequency'], dtype=object), 'az_start': 0.0, 'data_type': 1, 'fch1': 8421.38671875, 'foff': -2.7939677238464355e-06, 'machine_id': 20, 'nbits': 32, 'nchans': 1048576, 'nifs': 1, 'source_name': 'VOYAGER-1', 'src_dej': <Angle 12.40378167 deg>, 'src_raj': <Angle 17.21124472 hourangle>, 'telescope_id': 6, 'tsamp': 18.253611007999982, 'tstart': 59046.92634259259, 'za_start': 0.0} Starting ET search with parameters: datafile=/home/elkins/turbo_seti_data/single_coarse_guppi_59046_80036_DIAG_VOYAGER-1_0011.rawspec.0000.h5, max_drift=4, min_drift=1e-05, snr=10, out_dir=/home/elkins/turbo_seti_data/, coarse_chans=None, flagging=False, n_coarse_chan=1, kernels=None, gpu_id=0, gpu_backend=False, blank_dc=True, precision=1, append_output=False, log_level_int=20, obs_info={'pulsar': 0, 'pulsar_found': 0, 'pulsar_dm': 0.0, 'pulsar_snr': 0.0, 'pulsar_stats': array([0., 0., 0., 0., 0., 0.]), 'RFI_level': 0.0, 'Mean_SEFD': 0.0, 'psrflux_Sens': 0.0, 'SEFDs_val': [0.0], 'SEFDs_freq': [0.0], 'SEFDs_freq_up': [0.0]} Computed drift rate resolution: 0.00956648975722505 find_doppler.0 INFO Spectra 2x3 postage stamp (0, 0, 0:3): [2995706. 2707467.5 2586438.8] find_doppler.0 INFO ::::::::::::::::::::::::: (1, 0, 0:3): [2392047.5 2505239. 2590622.5] find_doppler.0 INFO Top hit found! SNR 22.329107, Drift Rate -0.363527, index 651879 find_doppler.0 INFO Top hit found! SNR 192.895111, Drift Rate -0.353960, index 659989 find_doppler.0 INFO Top hit found! SNR 22.572433, Drift Rate -0.363527, index 667983 tutorial_2: Search complete, et = 6.8 seconds. ###Markdown Pease wait for the "Search complete" message.You will find that the search process can take several minutes, potentially more than 1 hour. This is especially true when looking at multiple files that are Gigabytes in size. If you are doing your work on the BL servers or any other server system where you will be kicked out after an amount of time, I reccomend using either [tmux](https://github.com/tmux/tmux/wiki) or [screen](https://linuxize.com/post/how-to-use-linux-screen/). 
Making a Pandas DataframeWe can convert the `.dat` file produced in the previous search into a [Pandas](https://pandas.pydata.org/) dataframe so it is easier to read: ###Code from turbo_seti.find_event.find_event import read_dat df = read_dat(DATADIR + 'single_coarse_guppi_59046_80036_DIAG_VOYAGER-1_0011.rawspec.0000.dat') df ###Output _____no_output_____ ###Markdown Finding EventsNow let's run the executable `turboSETI` on all of the HDF5 files from the same observation so we can find interesting events (a hit that occurs across multiple files) via the [ON/OFF method](https://github.com/UCBerkeleySETI/breakthrough/blob/master/GBT/README.md).Normally it takes a chunk of time to run the algorithm on all of the files, so here is a little script that keeps `turboSETI` running in the background if executed in a tmux session: ###Code # %load example_script.py import glob # glob will create a list of specific files in a directory. In this case, any file ending in .h5. h5list = sorted(glob.glob(DATADIR + '*.h5')) # Get rid of any pre-existing output files from a prior run. for x_file in sorted(os.listdir(DATADIR)): x_type = x_file.split('.')[-1] if x_type != 'h5': os.remove(DATADIR + x_file) # Iterate over the 6 HDF5 files print("tutorial_2: Please wait for the \"End\" message,\n") for file in h5list: # Execute turboSETI in the terminal console = 'turboSETI ' + file + ' -M 4 -s 10 -o ' + DATADIR os.system(console) print("\ntutorial_2: All HDF5 files have been successfully processed.") print("tutorial_2: End.") ###Output tutorial_2: Please wait for the "End" message, turbo_seti version 2.1.16 blimpy version 2.0.31 h5py version 3.5.0 HDF5 header info: {'DIMENSION_LABELS': array(['time', 'feed_id', 'frequency'], dtype=object), 'az_start': 0.0, 'data_type': 1, 'fch1': 8421.38671875, 'foff': -2.7939677238464355e-06, 'machine_id': 20, 'nbits': 32, 'nchans': 1048576, 'nifs': 1, 'source_name': 'VOYAGER-1', 'src_dej': <Angle 12.40378167 deg>, 'src_raj': <Angle 17.21124472 hourangle>, 'telescope_id': 6, 'tsamp': 18.253611007999982, 'tstart': 59046.92634259259, 'za_start': 0.0} Starting ET search with parameters: datafile=/home/elkins/turbo_seti_data/single_coarse_guppi_59046_80036_DIAG_VOYAGER-1_0011.rawspec.0000.h5, max_drift=4.0, min_drift=1e-05, snr=10.0, out_dir=/home/elkins/turbo_seti_data/, coarse_chans=, flagging=False, n_coarse_chan=1, kernels=None, gpu_id=0, gpu_backend=False, blank_dc=True, precision=1, append_output=False, log_level_int=20, obs_info={'pulsar': 0, 'pulsar_found': 0, 'pulsar_dm': 0.0, 'pulsar_snr': 0.0, 'pulsar_stats': array([0., 0., 0., 0., 0., 0.]), 'RFI_level': 0.0, 'Mean_SEFD': 0.0, 'psrflux_Sens': 0.0, 'SEFDs_val': [0.0], 'SEFDs_freq': [0.0], 'SEFDs_freq_up': [0.0]} Computed drift rate resolution: 0.00956648975722505 find_doppler.0 INFO Spectra 2x3 postage stamp (0, 0, 0:3): [2995706. 2707467.5 2586438.8] find_doppler.0 INFO ::::::::::::::::::::::::: (1, 0, 0:3): [2392047.5 2505239. 2590622.5] find_doppler.0 INFO Top hit found! SNR 22.329107, Drift Rate -0.363527, index 651879 find_doppler.0 INFO Top hit found! SNR 192.895111, Drift Rate -0.353960, index 659989 find_doppler.0 INFO Top hit found! 
SNR 22.572433, Drift Rate -0.363527, index 667983 Search time: 0.12 min turbo_seti version 2.1.16 blimpy version 2.0.31 h5py version 3.5.0 HDF5 header info: {'DIMENSION_LABELS': array(['time', 'feed_id', 'frequency'], dtype=object), 'az_start': 0.0, 'data_type': 1, 'fch1': 8421.38671875, 'foff': -2.7939677238464355e-06, 'machine_id': 20, 'nbits': 32, 'nchans': 1048576, 'nifs': 1, 'source_name': 'VOYAGER-1', 'src_dej': <Angle 13.40378444 deg>, 'src_raj': <Angle 17.211245 hourangle>, 'telescope_id': 6, 'tsamp': 18.253611007999982, 'tstart': 59046.93002314815, 'za_start': 0.0} Starting ET search with parameters: datafile=/home/elkins/turbo_seti_data/single_coarse_guppi_59046_80354_DIAG_VOYAGER-1_0012.rawspec.0000.h5, max_drift=4.0, min_drift=1e-05, snr=10.0, out_dir=/home/elkins/turbo_seti_data/, coarse_chans=, flagging=False, n_coarse_chan=1, kernels=None, gpu_id=0, gpu_backend=False, blank_dc=True, precision=1, append_output=False, log_level_int=20, obs_info={'pulsar': 0, 'pulsar_found': 0, 'pulsar_dm': 0.0, 'pulsar_snr': 0.0, 'pulsar_stats': array([0., 0., 0., 0., 0., 0.]), 'RFI_level': 0.0, 'Mean_SEFD': 0.0, 'psrflux_Sens': 0.0, 'SEFDs_val': [0.0], 'SEFDs_freq': [0.0], 'SEFDs_freq_up': [0.0]} Computed drift rate resolution: 0.00956648975722505 find_doppler.0 INFO Spectra 2x3 postage stamp (0, 0, 0:3): [3152164.8 2917681.8 2709741. ] find_doppler.0 INFO ::::::::::::::::::::::::: (1, 0, 0:3): [3064531.5 3476725.8 2935090.2] Search time: 0.11 min turbo_seti version 2.1.16 blimpy version 2.0.31 h5py version 3.5.0 HDF5 header info: {'DIMENSION_LABELS': array(['time', 'feed_id', 'frequency'], dtype=object), 'az_start': 0.0, 'data_type': 1, 'fch1': 8421.38671875, 'foff': -2.7939677238464355e-06, 'machine_id': 20, 'nbits': 32, 'nchans': 1048576, 'nifs': 1, 'source_name': 'VOYAGER-1', 'src_dej': <Angle 12.40379333 deg>, 'src_raj': <Angle 17.211245 hourangle>, 'telescope_id': 6, 'tsamp': 18.253611007999982, 'tstart': 59046.933703703704, 'za_start': 0.0} Starting ET search with parameters: datafile=/home/elkins/turbo_seti_data/single_coarse_guppi_59046_80672_DIAG_VOYAGER-1_0013.rawspec.0000.h5, max_drift=4.0, min_drift=1e-05, snr=10.0, out_dir=/home/elkins/turbo_seti_data/, coarse_chans=, flagging=False, n_coarse_chan=1, kernels=None, gpu_id=0, gpu_backend=False, blank_dc=True, precision=1, append_output=False, log_level_int=20, obs_info={'pulsar': 0, 'pulsar_found': 0, 'pulsar_dm': 0.0, 'pulsar_snr': 0.0, 'pulsar_stats': array([0., 0., 0., 0., 0., 0.]), 'RFI_level': 0.0, 'Mean_SEFD': 0.0, 'psrflux_Sens': 0.0, 'SEFDs_val': [0.0], 'SEFDs_freq': [0.0], 'SEFDs_freq_up': [0.0]} Computed drift rate resolution: 0.00956648975722505 find_doppler.0 INFO Spectra 2x3 postage stamp (0, 0, 0:3): [5880825. 5606774.5 7201990. ] find_doppler.0 INFO ::::::::::::::::::::::::: (1, 0, 0:3): [5208161. 5433646.5 5460422.5] find_doppler.0 INFO Top hit found! SNR 10.996549, Drift Rate -0.401793, index 651964 find_doppler.0 INFO Top hit found! SNR 82.802612, Drift Rate -0.382660, index 660074 find_doppler.0 INFO Top hit found! 
SNR 10.476917, Drift Rate -0.373093, index 668184 Search time: 0.11 min turbo_seti version 2.1.16 blimpy version 2.0.31 h5py version 3.5.0 HDF5 header info: {'DIMENSION_LABELS': array(['time', 'feed_id', 'frequency'], dtype=object), 'az_start': 0.0, 'data_type': 1, 'fch1': 8421.38671875, 'foff': -2.7939677238464355e-06, 'machine_id': 20, 'nbits': 32, 'nchans': 1048576, 'nifs': 1, 'source_name': 'VOYAGER-1', 'src_dej': <Angle 12.40376944 deg>, 'src_raj': <Angle 17.27950361 hourangle>, 'telescope_id': 6, 'tsamp': 18.253611007999982, 'tstart': 59046.937372685185, 'za_start': 0.0} Starting ET search with parameters: datafile=/home/elkins/turbo_seti_data/single_coarse_guppi_59046_80989_DIAG_VOYAGER-1_0014.rawspec.0000.h5, max_drift=4.0, min_drift=1e-05, snr=10.0, out_dir=/home/elkins/turbo_seti_data/, coarse_chans=, flagging=False, n_coarse_chan=1, kernels=None, gpu_id=0, gpu_backend=False, blank_dc=True, precision=1, append_output=False, log_level_int=20, obs_info={'pulsar': 0, 'pulsar_found': 0, 'pulsar_dm': 0.0, 'pulsar_snr': 0.0, 'pulsar_stats': array([0., 0., 0., 0., 0., 0.]), 'RFI_level': 0.0, 'Mean_SEFD': 0.0, 'psrflux_Sens': 0.0, 'SEFDs_val': [0.0], 'SEFDs_freq': [0.0], 'SEFDs_freq_up': [0.0]} Computed drift rate resolution: 0.00956648975722505 find_doppler.0 INFO Spectra 2x3 postage stamp (0, 0, 0:3): [4233161.5 4866011.5 4252712.5] find_doppler.0 INFO ::::::::::::::::::::::::: (1, 0, 0:3): [4741422.5 4620766. 4691866.5] Search time: 0.01 min turbo_seti version 2.1.16 blimpy version 2.0.31 h5py version 3.5.0 HDF5 header info: {'DIMENSION_LABELS': array(['time', 'feed_id', 'frequency'], dtype=object), 'az_start': 0.0, 'data_type': 1, 'fch1': 8421.38671875, 'foff': -2.7939677238464355e-06, 'machine_id': 20, 'nbits': 32, 'nchans': 1048576, 'nifs': 1, 'source_name': 'VOYAGER-1', 'src_dej': <Angle 12.40378 deg>, 'src_raj': <Angle 17.21124361 hourangle>, 'telescope_id': 6, 'tsamp': 18.253611007999982, 'tstart': 59046.941087962965, 'za_start': 0.0} Starting ET search with parameters: datafile=/home/elkins/turbo_seti_data/single_coarse_guppi_59046_81310_DIAG_VOYAGER-1_0015.rawspec.0000.h5, max_drift=4.0, min_drift=1e-05, snr=10.0, out_dir=/home/elkins/turbo_seti_data/, coarse_chans=, flagging=False, n_coarse_chan=1, kernels=None, gpu_id=0, gpu_backend=False, blank_dc=True, precision=1, append_output=False, log_level_int=20, obs_info={'pulsar': 0, 'pulsar_found': 0, 'pulsar_dm': 0.0, 'pulsar_snr': 0.0, 'pulsar_stats': array([0., 0., 0., 0., 0., 0.]), 'RFI_level': 0.0, 'Mean_SEFD': 0.0, 'psrflux_Sens': 0.0, 'SEFDs_val': [0.0], 'SEFDs_freq': [0.0], 'SEFDs_freq_up': [0.0]} Computed drift rate resolution: 0.00956648975722505 find_doppler.0 INFO Spectra 2x3 postage stamp (0, 0, 0:3): [2835156. 2706873.2 3367498.5] find_doppler.0 INFO ::::::::::::::::::::::::: (1, 0, 0:3): [3646120.8 3037369. 3306062.8] find_doppler.0 INFO Top hit found! SNR 18.529829, Drift Rate -0.411359, index 652058 find_doppler.0 INFO Top hit found! SNR 145.126450, Drift Rate -0.430492, index 660166 find_doppler.0 INFO Top hit found! 
SNR 19.028582, Drift Rate -0.411359, index 668162 Search time: 0.11 min turbo_seti version 2.1.16 blimpy version 2.0.31 h5py version 3.5.0 HDF5 header info: {'DIMENSION_LABELS': array(['time', 'feed_id', 'frequency'], dtype=object), 'az_start': 0.0, 'data_type': 1, 'fch1': 8421.38671875, 'foff': -2.7939677238464355e-06, 'machine_id': 20, 'nbits': 32, 'nchans': 1048576, 'nifs': 1, 'source_name': 'VOYAGER-1', 'src_dej': <Angle 11.40377306 deg>, 'src_raj': <Angle 17.21124389 hourangle>, 'telescope_id': 6, 'tsamp': 18.253611007999982, 'tstart': 59046.944768518515, 'za_start': 0.0} Starting ET search with parameters: datafile=/home/elkins/turbo_seti_data/single_coarse_guppi_59046_81628_DIAG_VOYAGER-1_0016.rawspec.0000.h5, max_drift=4.0, min_drift=1e-05, snr=10.0, out_dir=/home/elkins/turbo_seti_data/, coarse_chans=, flagging=False, n_coarse_chan=1, kernels=None, gpu_id=0, gpu_backend=False, blank_dc=True, precision=1, append_output=False, log_level_int=20, obs_info={'pulsar': 0, 'pulsar_found': 0, 'pulsar_dm': 0.0, 'pulsar_snr': 0.0, 'pulsar_stats': array([0., 0., 0., 0., 0., 0.]), 'RFI_level': 0.0, 'Mean_SEFD': 0.0, 'psrflux_Sens': 0.0, 'SEFDs_val': [0.0], 'SEFDs_freq': [0.0], 'SEFDs_freq_up': [0.0]} Computed drift rate resolution: 0.00956648975722505 find_doppler.0 INFO Spectra 2x3 postage stamp (0, 0, 0:3): [3193343.8 2054488.8 2877254.2] find_doppler.0 INFO ::::::::::::::::::::::::: (1, 0, 0:3): [3471901.8 2847157.5 2962188. ] ###Markdown Now that we have ran `turboSETI` on six observations, let's use the `find_event_pipeline` to find events that only exist in the ON observations (on-target). It is important to consider the value to use for the filter_threshold. Here is a docstring explaining what filter threshold is:```filter_threshold Specification for how strict the hit filtering will be. There are 3 different levels of filtering, specified by the integers 1, 2, and 3. Filter_threshold = 1 returns hits above an SNR cut, taking into account the check_zero_drift parameter, but without an ON-OFF check. Filter_threshold = 2 returns hits that passed level 1 AND that are in at least one ON but no OFFs. Filter_threshold = 3 returns events that passed level 2 AND that are present in *ALL* ONs.``` ###Code from turbo_seti.find_event.find_event_pipeline import find_event_pipeline # A list of all the .dat files that turboSETI produced dat_list = sorted(glob.glob(DATADIR + '*.dat')) # This writes the dat files into a .lst, as required by the find_event_pipeline with open(DATADIR + 'dat_files.lst', 'w') as f: for item in dat_list: f.write("%s\n" % item) PATH_CSVF = DATADIR + 'maxdrift4_snr10_f3.csv' find_event_pipeline(DATADIR + 'dat_files.lst', filter_threshold = 3, # Using the strictest filter threshold value. number_in_cadence = len(dat_list), user_validation=False, saving=True, # Save the CSV file of events. 
csv_name=PATH_CSVF) ###Output ************ BEGINNING FIND_EVENT PIPELINE ************** Assuming the first observation is an ON find_event_pipeline INFO find_event_pipeline: file=single_coarse_guppi_59046_80036_DIAG_VOYAGER-1_0011.rawspec.0000.dat, tstart=59046.92634259259, source_name=VOYAGER-1, fch1=8421.38671875, foff=-2.7939677238464355e-06, nchans=1048576 find_event_pipeline INFO find_event_pipeline: file=single_coarse_guppi_59046_80354_DIAG_VOYAGER-1_0012.rawspec.0000.dat, tstart=59046.93002314815, source_name=VOYAGER-1, fch1=8421.38671875, foff=-2.7939677238464355e-06, nchans=1048576 find_event_pipeline INFO find_event_pipeline: file=single_coarse_guppi_59046_80672_DIAG_VOYAGER-1_0013.rawspec.0000.dat, tstart=59046.933703703704, source_name=VOYAGER-1, fch1=8421.38671875, foff=-2.7939677238464355e-06, nchans=1048576 find_event_pipeline INFO find_event_pipeline: file=single_coarse_guppi_59046_80989_DIAG_VOYAGER-1_0014.rawspec.0000.dat, tstart=59046.937372685185, source_name=VOYAGER-1, fch1=8421.38671875, foff=-2.7939677238464355e-06, nchans=1048576 find_event_pipeline INFO find_event_pipeline: file=single_coarse_guppi_59046_81310_DIAG_VOYAGER-1_0015.rawspec.0000.dat, tstart=59046.941087962965, source_name=VOYAGER-1, fch1=8421.38671875, foff=-2.7939677238464355e-06, nchans=1048576 find_event_pipeline INFO find_event_pipeline: file=single_coarse_guppi_59046_81628_DIAG_VOYAGER-1_0016.rawspec.0000.dat, tstart=59046.944768518515, source_name=VOYAGER-1, fch1=8421.38671875, foff=-2.7939677238464355e-06, nchans=1048576 There are 6 total files in the filelist /home/elkins/turbo_seti_data/dat_files.lst therefore, looking for events in 1 on-off set(s) with a minimum SNR of 10 Present in all ON sources with RFI rejection from the OFF sources not including signals with zero drift saving the output files *** First DAT file in set: single_coarse_guppi_59046_80036_DIAG_VOYAGER-1_0011.rawspec.0000.dat *** ------ o ------- Loading data... Loaded 3 hits from /home/elkins/turbo_seti_data/single_coarse_guppi_59046_80036_DIAG_VOYAGER-1_0011.rawspec.0000.dat (ON) Loaded 0 hits from /home/elkins/turbo_seti_data/single_coarse_guppi_59046_80354_DIAG_VOYAGER-1_0012.rawspec.0000.dat (OFF) Loaded 3 hits from /home/elkins/turbo_seti_data/single_coarse_guppi_59046_80672_DIAG_VOYAGER-1_0013.rawspec.0000.dat (ON) Loaded 0 hits from /home/elkins/turbo_seti_data/single_coarse_guppi_59046_80989_DIAG_VOYAGER-1_0014.rawspec.0000.dat (OFF) Loaded 3 hits from /home/elkins/turbo_seti_data/single_coarse_guppi_59046_81310_DIAG_VOYAGER-1_0015.rawspec.0000.dat (ON) Loaded 0 hits from /home/elkins/turbo_seti_data/single_coarse_guppi_59046_81628_DIAG_VOYAGER-1_0016.rawspec.0000.dat (OFF) All data loaded! Finding events in this cadence... Found a total of 9 hits above the SNR cut in this cadence! Length of off_table = 0 Found a total of 9 hits in only the on observations in this cadence! Found a total of 2 events across this cadence! Search time: 0.04 sec ------ o ------- *** find_event_output_dataframe is complete *** find_event_pipeline: Saved CSV file to /home/elkins/turbo_seti_data/maxdrift4_snr10_f3.csv ###Markdown Plotting eventsPlease wait for the "Saved CSV file" message,Now that we have some events, let's generate plots for them. The `plot_event_pipeline` will output the plots as PNG files. 
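###Markdown Before running the plotter, the events CSV written above can optionally be loaded into pandas for a quick look. This check is an addition to the tutorial and simply reuses `PATH_CSVF` from the previous cell. ###Code
import pandas as pd

# Optional: inspect the events that find_event_pipeline saved to CSV
events_df = pd.read_csv(PATH_CSVF)
print("columns:", list(events_df.columns))
print("number of events:", len(events_df))
events_df.head()
###Output _____no_output_____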
###Code from turbo_seti.find_event.plot_event_pipeline import plot_event_pipeline # We need to create a .lst of the .h5 files, so lets go ahead and do that: filelist = sorted(glob.glob(DATADIR + '*.h5')) # Write file locations to a .lst with open(DATADIR + 'h5list.lst', 'w') as f: for item in filelist: f.write("{}\n".format(item)) # and finally we plot print("tutorial_2: Plots will be stored here: {}".format(DATADIR)) print("tutorial_2: Please wait for the \"End\" message,\n") plot_event_pipeline(PATH_CSVF, DATADIR + 'h5list.lst', filter_spec=3, plot_dir=DATADIR, user_validation=False) print("\ntutorial_2: End, all plots are complete.") ###Output tutorial_2: Plots will be stored here: /home/elkins/turbo_seti_data/ tutorial_2: Please wait for the "End" message, plot_event_pipeline: Opened file /home/elkins/turbo_seti_data/maxdrift4_snr10_f3.csv plot_event_pipeline: file = single_coarse_guppi_59046_80036_DIAG_VOYAGER-1_0011.rawspec.0000.h5, tstart = 59046.92634259259, source_name = VOYAGER-1 plot_event_pipeline: file = single_coarse_guppi_59046_80354_DIAG_VOYAGER-1_0012.rawspec.0000.h5, tstart = 59046.93002314815, source_name = VOYAGER-1 plot_event_pipeline: file = single_coarse_guppi_59046_80672_DIAG_VOYAGER-1_0013.rawspec.0000.h5, tstart = 59046.933703703704, source_name = VOYAGER-1 plot_event_pipeline: file = single_coarse_guppi_59046_80989_DIAG_VOYAGER-1_0014.rawspec.0000.h5, tstart = 59046.937372685185, source_name = VOYAGER-1 plot_event_pipeline: file = single_coarse_guppi_59046_81310_DIAG_VOYAGER-1_0015.rawspec.0000.h5, tstart = 59046.941087962965, source_name = VOYAGER-1 plot_event_pipeline: file = single_coarse_guppi_59046_81628_DIAG_VOYAGER-1_0016.rawspec.0000.h5, tstart = 59046.944768518515, source_name = VOYAGER-1 Plotting some events for: VOYAGER-1 There are 2 total events in the csv file /home/elkins/turbo_seti_data/maxdrift4_snr10_f3.csv therefore, you are about to make 2 .png files. tutorial_2: End, all plots are complete. ###Markdown Please wait for the "End, all plots are complete" message.Great! Now we can have a look at the waterfall plot PNG files. ###Code from IPython.display import Image, display pnglist = sorted(glob.glob(DATADIR + '*.png')) for pngfile in pnglist: display(Image(filename=pngfile)) ###Output _____no_output_____
pd_slicing.ipynb
###Markdown Pandas Dataframe slicing methodsstyling reference: - https://stackoverflow.com/questions/41654949/pandas-style-function-to-highlight-specific-columns- https://kanoki.org/2019/01/02/pandas-trick-for-the-day-color-code-columns-rows-cells-of-dataframe/ ###Code import numpy as np import pandas as pd # create a dummy pd DF df = pd.DataFrame( { 'a': [25, 20, 15, 10, 5], 'b': [24, 19, 14, 9, 4], 'c': [23, 18, 13, 8, 3], 'd': [22, 17, 12, 7, 2], 'e': [21, 16, 11, 6, 1] }, index = ['u', 'v', 'x', 'y', 'z'] ) df ###Output _____no_output_____ ###Markdown select column(s) by name or index select one column ###Code def highlight_cols(s): color = 'green' return 'background-color: %s' % color df.style.applymap(highlight_cols, subset=pd.IndexSlice[:, ['b']]) # return a pd Series # df['b'] # via column name # df.b # via the dot notation, i.e. treat column name as attribute # df.loc[:, 'b'] # loc column name df.iloc[:, 1] # iloc column index num # return a pd DataFrame # df.loc[: , df.columns.isin(['b'])] # via boolean value # df.loc[: , np.logical_not(df.columns.isin(['a', 'c', 'd', 'e']))] df.loc[: , ~df.columns.isin(['a', 'c', 'd', 'e'])] # via the negative of boolean value # select a range of continuous columns def highlight_cols(s): color = 'green' return 'background-color: %s' % color df.style.applymap(highlight_cols, subset=pd.IndexSlice[:, 'b':'d']) # df[['b', 'c', 'd']] # df.loc[:, 'b':'d'] # note that the last column name is inclusive # df.loc[:, df.columns.isin(['b', 'c', 'd'])] # df.loc[:, ~df.columns.isin(['a', 'e'])] # df.loc[:, np.logical_not(df.columns.isin(['a', 'e']))] df.iloc[:, 1:4] # note that the last column index is exclusive # select several incontinuous columns def highlight_cols(s): color = 'green' return 'background-color: %s' % color df.style.applymap(highlight_cols, subset=pd.IndexSlice[:, ['b', 'd', 'e']]) # df[['b', 'd', 'e']] # df.loc[:, ['b', 'd', 'e']] # df.loc[:, df.columns.isin(['b', 'd', 'e'])] # df.loc[:, np.logical_not(df.columns.isin(['a', 'e']))] # df.loc[:, ~df.columns.isin(['a', 'e'])] df.iloc[:, [1, 3, 4]] # select every nth columns def highlight_cols(s): color = 'green' return 'background-color: %s' % color df.style.applymap(highlight_cols, subset=pd.IndexSlice[:, ['b', 'd']]) # df[df.columns[1::2]] # df.loc[: , df.columns[1::2]] df.iloc[: , 1::2] ###Output _____no_output_____ ###Markdown select row(s) by name or index ###Code # select one row def highlight_cols(s): color = 'green' return 'background-color: %s' % color df.style.applymap(highlight_cols, subset=pd.IndexSlice['v', :]) # return a Series # df.loc['v'] # via row index name # df.loc['v' , :] # via rown index name and column range # df.iloc[1] # via row index num df.iloc[1 , :] # via row index num and columne index num # return a DataFrame # df.loc[df.index.isin(['v']) , :] # via boolean values df.loc[~df.index.isin(['u', 'x', 'y', 'z']) , :] # exclude row via boolean values # select a range of continuous rows def highlight_cols(s): color = 'green' return 'background-color: %s' % color df.style.applymap(highlight_cols, subset=pd.IndexSlice['v':'y', :]) # df.loc['v':'y' , :] df.iloc[1:4 , :] # note that the end of the index range is exclusive # select incontinuous rows def highlight_cols(s): color = 'green' return 'background-color: %s' % color df.style.applymap(highlight_cols, subset=pd.IndexSlice[['v', 'y', 'z'], :]) # df.loc[['v', 'y', 'z'] , :] df.iloc[[1, 3, 4] , :] # select every nth rows def highlight_cols(s): color = 'green' return 'background-color: %s' % color 
df.style.applymap(highlight_cols, subset=pd.IndexSlice[['v', 'y'], :]) # df.loc[df.index[1::2] , :] df.iloc[1::2 , :] ###Output _____no_output_____ ###Markdown select column(s) and row(s) by name or index ###Code # select specific rows and columns by name or index def highlight_cols(s): color = 'green' return 'background-color: %s' % color df.style.applymap(highlight_cols, subset=pd.IndexSlice[['u', 'x', 'z'], ['a', 'c', 'e']]) # df.loc[['u', 'x', 'z'] , ['a', 'c', 'e']] df.iloc[[0, 2, 4] , [0, 2, 4]] ###Output _____no_output_____ ###Markdown select a row by condition for values in one column ###Code def highlight_cols(s): color = 'green' return 'background-color: %s' % color df.style.applymap(highlight_cols, subset=pd.IndexSlice['u':'x' , :]) df[ df['d'] >= 12 ] ###Output _____no_output_____
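###Markdown A few additional condition-based selections on the same dummy `df`, added here as an optional extension of the section above: combining conditions with `&`, the equivalent `.query()` call, and restricting the returned columns at the same time. ###Code
# combine two conditions (each condition needs its own parentheses)
print(df[(df['d'] >= 12) & (df['b'] < 25)])

# the same selection expressed with .query()
print(df.query("d >= 12 and b < 25"))

# condition on the rows plus an explicit column subset
print(df.loc[df['d'] >= 12, ['a', 'c']])
###Output _____no_output_____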
section_6/01_preprocessing.ipynb
###Markdown データの前処理対話文のデータセットに前処理を行い、保存します。 ライブラリのインストール分かち書きのためにjanomeを、テキストデータの前処理のためにtorchtextをインストールします。 ###Code !pip install janome==0.4.1 !pip install torchvision==0.7.0 !pip install torchtext==0.7.0 !pip install torch==1.6.0 ###Output _____no_output_____ ###Markdown Google ドライブとの連携 以下のコードを実行し、認証コードを使用してGoogle ドライブをマウントします。 ###Code from google.colab import drive drive.mount('/content/drive/') ###Output _____no_output_____ ###Markdown 対話文の取得雑談対話コーパス「projectnextnlp-chat-dialogue-corpus.zip」をダウンロードします。 > Copyright (c) 2015 Project Next NLP 対話タスク 参加者一同 > https://sites.google.com/site/dialoguebreakdowndetection/chat-dialogue-corpus/LICENSE.txt > Released under the MIT license解凍したフォルダをGoogle ドライブにアップします。 フォルダからjsonファイルを読み込み、対話文として成り立っている文章を取り出してリストに格納します。 ###Code import glob # ファイルの取得に使用 import json # jsonファイルの読み込みに使用 import re path = "/content/drive/My Drive/live_ai_data/projectnextnlp-chat-dialogue-corpus/json" # フォルダの場所を指定 files = glob.glob(path + "/*/*.json") # ファイルの一覧 dialogues = [] # 複数の対話文を格納するリスト file_count= 0 # ファイル数のカウント for file in files: with open(file, "r") as f: json_dic = json.load(f) dialogue = [] # 単一の対話 for turn in json_dic["turns"]: annotations = turn["annotations"] # 注釈 speaker = turn["speaker"] # 発言者 utterance = turn["utterance"] # 発言 # 空の文章や、特殊文字や数字が含まれる文章は除く if (utterance=="") or ("\\u" in utterance) or (re.search("\d", utterance)!=None): dialogue.clear() # 対話をリセット continue utterance = utterance.replace(".", "。").replace(",", "、") # 全角 utterance = utterance.replace(".", "。").replace(",", "、") # 半角 utterance = utterance.split("。")[0] if speaker=="U": # 発言者が人間であれば dialogue.append(utterance) else: # 発言者がシステムであれば is_wrong = False for annotation in annotations: breakdown = annotation["breakdown"] # 分類 if breakdown=="X": # 1つでも不適切評価があれば is_wrong = True break if is_wrong: dialogue.clear() # 対話をリセット else: dialogue.append(utterance) # 不適切評価が無ければ対話に追加 if len(dialogue) >= 2: # 単一の会話が成立すれば dialogues.append(dialogue.copy()) dialogue.pop(0) # 最初の要素を削除 file_count += 1 if file_count%100 == 0: print("files:", file_count, "dialogues", len(dialogues)) print("files:", file_count, "dialogues", len(dialogues)) ###Output _____no_output_____ ###Markdown データ拡張の準備データ拡張の準備として、正規表現の設定および分かち書きを行います。 ###Code import re from janome.tokenizer import Tokenizer re_kanji = re.compile(r"^[\u4E00-\u9FD0]+$") # 漢字の検出用 re_katakana = re.compile(r"[\u30A1-\u30F4]+") # カタカナの検出用 j_tk = Tokenizer() def wakati(text): return [tok for tok in j_tk.tokenize(text, wakati=True)] wakati_inp = [] # 単語に分割された入力文 wakati_rep = [] # 単語に分割された応答文 for dialogue in dialogues: wakati_inp.append(wakati(dialogue[0])[:10]) wakati_rep.append(wakati(dialogue[1])[:10]) ###Output _____no_output_____ ###Markdown データ拡張対話データの数を水増しします。 ある入力文を、それに対応する応答文以外の複数の応答文と組み合わせます。 組み合わせる応答文は、入力文に含まれる漢字やカタカナの単語を含むものを選択します。 ###Code dialogues_plus = [] for i, w_inp in enumerate(wakati_inp): # 全ての入力文でループ inp_count = 0 # ある入力から生成された対話文をカウント for j, w_rep in enumerate(wakati_rep): # 全ての応答文でループ if i==j: dialogues_plus.append(["".join(w_inp), "".join(w_rep)]) continue similarity = 0 # 類似度 for w in w_inp: # 入力文と同じ単語があり、それが漢字かカタカナであれば類似度を上げる if (w in w_rep) and (re_kanji.fullmatch(w) or re_katakana.fullmatch(w)): similarity += 1 if similarity >= 1: dialogue_plus = ["".join(w_inp), "".join(w_rep)] if dialogue_plus not in dialogues_plus: dialogues_plus.append(dialogue_plus) inp_count += 1 if inp_count >= 12: # ある入力から生成する対話文の上限 break if i%1000 == 0: print("i:", i, "dialogues_pus:", len(dialogues_plus)) print("i:", i, "dialogues_pus:", len(dialogues_plus)) ###Output 
_____no_output_____ ###Markdown 拡張された対話データを、新たな対話データとします。 ###Code dialogues = dialogues_plus ###Output _____no_output_____ ###Markdown 対話データの保存対話データをcsvファイルとしてGoogle Driveに保存します。 ###Code import csv from sklearn.model_selection import train_test_split dialogues_train, dialogues_test = train_test_split(dialogues, shuffle=True, test_size=0.05) # 5%がテストデータ path = "/content/drive/My Drive/live_ai_data/" # 保存場所 with open(path+"dialogues_train.csv", "w") as f: writer = csv.writer(f) writer.writerows(dialogues_train) with open(path+"dialogues_test.csv", "w") as f: writer = csv.writer(f) writer.writerows(dialogues_test) ###Output _____no_output_____ ###Markdown 対話文の取得Googleドライブから、対話文のデータを取り出してデータセットに格納します。 ###Code import torch import torchtext from janome.tokenizer import Tokenizer path = "/content/drive/My Drive/live_ai_data/" # 保存場所を指定 j_tk = Tokenizer() def tokenizer(text): return [tok for tok in j_tk.tokenize(text, wakati=True)] # 内包表記 # データセットの列を定義 input_field = torchtext.data.Field( # 入力文 sequential=True, # データ長さが可変かどうか tokenize=tokenizer, # 前処理や単語分割などのための関数 batch_first=True, # バッチの次元を先頭に lower=True # アルファベットを小文字に変換 ) reply_field = torchtext.data.Field( # 応答文 sequential=True, # データ長さが可変かどうか tokenize=tokenizer, # 前処理や単語分割などのための関数 init_token = "<sos>", # 文章開始のトークン eos_token = "<eos>", # 文章終了のトークン batch_first=True, # バッチの次元を先頭に lower=True # アルファベットを小文字に変換 ) # csvファイルからデータセットを作成 train_data, test_data = torchtext.data.TabularDataset.splits( path=path, train="dialogues_train.csv", validation="dialogues_test.csv", format="csv", fields=[("inp_text", input_field), ("rep_text", reply_field)] # 列の設定 ) ###Output _____no_output_____ ###Markdown 単語とインデックスの対応単語にインデックスを割り振り、辞書として格納します。 ###Code input_field.build_vocab( train_data, min_freq=3, ) reply_field.build_vocab( train_data, min_freq=3, ) print(input_field.vocab.freqs) # 各単語の出現頻度 print(len(input_field.vocab.stoi)) print(len(input_field.vocab.itos)) print(len(reply_field.vocab.stoi)) print(len(reply_field.vocab.itos)) ###Output _____no_output_____ ###Markdown データセットの保存データセットの`examples`とFieldをそれぞれ保存します。 ###Code import dill torch.save(train_data.examples, path+"train_examples.pkl", pickle_module=dill) torch.save(test_data.examples, path+"test_examples.pkl", pickle_module=dill) torch.save(input_field, path+"input_field.pkl", pickle_module=dill) torch.save(reply_field, path+"reply_field.pkl", pickle_module=dill) ###Output _____no_output_____
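###Markdown As an optional sketch (not in the original notebook): the saved `examples` and `Field` objects can be loaded back in a later session and wrapped into `Dataset` objects again, assuming the same torchtext 0.7 legacy API used above. ###Code
import dill
import torch
import torchtext

path = "/content/drive/My Drive/live_ai_data/"  # same folder used when saving

# reload the pickled examples and Field objects
train_examples = torch.load(path + "train_examples.pkl", pickle_module=dill)
test_examples = torch.load(path + "test_examples.pkl", pickle_module=dill)
input_field = torch.load(path + "input_field.pkl", pickle_module=dill)
reply_field = torch.load(path + "reply_field.pkl", pickle_module=dill)

# rebuild the datasets with the same column layout as before
fields = [("inp_text", input_field), ("rep_text", reply_field)]
train_data = torchtext.data.Dataset(train_examples, fields)
test_data = torchtext.data.Dataset(test_examples, fields)
print(len(train_data), len(test_data))
###Output _____no_output_____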
samples/notebooks/week05-04-advanced-usage-of-recurrent-neural-networks.ipynb
###Markdown ReferenceThis example is taken from the book [DL with Python](https://www.manning.com/books/deep-learning-with-python) by F. Chollet. All the notebooks from the book are available for free on [Github](https://github.com/fchollet/deep-learning-with-python-notebooks)If you like to run the example locally follow the instructions provided on [Keras website](https://keras.io/installation)--- ###Code import keras keras.__version__ ###Output Using TensorFlow backend. ###Markdown Advanced usage of recurrent neural networksThis notebook contains the code samples found in Chapter 6, Section 3 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.---In this section, we will review three advanced techniques for improving the performance and generalization power of recurrent neural networks. By the end of the section, you will know most of what there is to know about using recurrent networks with Keras. We will demonstrate all three concepts on a weather forecasting problem, where we have access to a timeseries of data points coming from sensors installed on the roof of a building, such as temperature, air pressure, and humidity, which we use to predict what the temperature will be 24 hours after the last data point collected. This is a fairly challenging problem that exemplifies many common difficulties encountered when working with timeseries.We will cover the following techniques:* *Recurrent dropout*, a specific, built-in way to use dropout to fight overfitting in recurrent layers.* *Stacking recurrent layers*, to increase the representational power of the network (at the cost of higher computational loads).* *Bidirectional recurrent layers*, which presents the same information to a recurrent network in different ways, increasing accuracy and mitigating forgetting issues. A temperature forecasting problemUntil now, the only sequence data we have covered has been text data, for instance the IMDB dataset and the Reuters dataset. But sequence data is found in many more problems than just language processing. In all of our examples in this section, we will be playing with a weather timeseries dataset recorded at the Weather Station at the Max-Planck-Institute for Biogeochemistry in Jena, Germany: http://www.bgc-jena.mpg.de/wetter/.In this dataset, fourteen different quantities (such air temperature, atmospheric pressure, humidity, wind direction, etc.) are recorded every ten minutes, over several years. The original data goes back to 2003, but we limit ourselves to data from 2009-2016. This dataset is perfect for learning to work with numerical timeseries. We will use it to build a model that takes as input some data from the recent past (a few days worth of data points) and predicts the air temperature 24 hours in the future. 
Let's take a look at the data: ###Code import os data_dir = '/Users/guillaume/Documents/Dropbox/Exchange_fhnw/UBT/deep-learning-with-python-notebooks/datasets/' fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv') f = open(fname) data = f.read() f.close() lines = data.split('\n') header = lines[0].split(',') lines = lines[1:] print(header) print(len(lines)) ###Output ['"Date Time"', '"p (mbar)"', '"T (degC)"', '"Tpot (K)"', '"Tdew (degC)"', '"rh (%)"', '"VPmax (mbar)"', '"VPact (mbar)"', '"VPdef (mbar)"', '"sh (g/kg)"', '"H2OC (mmol/mol)"', '"rho (g/m**3)"', '"wv (m/s)"', '"max. wv (m/s)"', '"wd (deg)"'] 420551 ###Markdown Let's convert all of these 420,551 lines of data into a Numpy array: ###Code import numpy as np float_data = np.zeros((len(lines), len(header) - 1)) for i, line in enumerate(lines): values = [float(x) for x in line.split(',')[1:]] float_data[i, :] = values ###Output _____no_output_____ ###Markdown For instance, here is the plot of temperature (in degrees Celsius) over time: ###Code from matplotlib import pyplot as plt temp = float_data[:, 1] # temperature (in degrees Celsius) plt.plot(range(len(temp)), temp) plt.show() ###Output _____no_output_____ ###Markdown On this plot, you can clearly see the yearly periodicity of temperature.Here is a more narrow plot of the first ten days of temperature data (since the data is recorded every ten minutes, we get 144 data points per day): ###Code plt.plot(range(1440), temp[:1440]) plt.show() ###Output _____no_output_____ ###Markdown On this plot, you can see daily periodicity, especially evident for the last 4 days. We can also note that this ten-days period must be coming from a fairly cold winter month.If we were trying to predict average temperature for the next month given a few month of past data, the problem would be easy, due to the reliable year-scale periodicity of the data. But looking at the data over a scale of days, the temperature looks a lot more chaotic. So is this timeseries predictable at a daily scale? Let's find out. Preparing the dataThe exact formulation of our problem will be the following: given data going as far back as `lookback` timesteps (a timestep is 10 minutes) and sampled every `steps` timesteps, can we predict the temperature in `delay` timesteps?We will use the following parameter values:* `lookback = 720`, i.e. our observations will go back 5 days.* `steps = 6`, i.e. our observations will be sampled at one data point per hour.* `delay = 144`, i.e. our targets will be 24 hours in the future.To get started, we need to do two things:* Preprocess the data to a format a neural network can ingest. This is easy: the data is already numerical, so we don't need to do any vectorization. However each timeseries in the data is on a different scale (e.g. temperature is typically between -20 and +30, but pressure, measured in mbar, is around 1000). So we will normalize each timeseries independently so that they all take small values on a similar scale.* Write a Python generator that takes our current array of float data and yields batches of data from the recent past, alongside with a target temperature in the future. Since the samples in our dataset are highly redundant (e.g. sample `N` and sample `N + 1` will have most of their timesteps in common), it would be very wasteful to explicitly allocate every sample. Instead, we will generate the samples on the fly using the original data.We preprocess the data by subtracting the mean of each timeseries and dividing by the standard deviation. 
We plan on using the first 200,000 timesteps as training data, so we compute the mean and standard deviation only on this fraction of the data: ###Code mean = float_data[:200000].mean(axis=0) float_data -= mean std = float_data[:200000].std(axis=0) float_data /= std ###Output _____no_output_____ ###Markdown Now here is the data generator that we will use. It yields a tuple `(samples, targets)` where `samples` is one batch of input data and `targets` is the corresponding array of target temperatures. It takes the following arguments:* `data`: The original array of floating point data, which we just normalized in the code snippet above.* `lookback`: How many timesteps back should our input data go.* `delay`: How many timesteps in the future should our target be.* `min_index` and `max_index`: Indices in the `data` array that delimit which timesteps to draw from. This is useful for keeping a segment of the data for validation and another one for testing.* `shuffle`: Whether to shuffle our samples or draw them in chronological order.* `batch_size`: The number of samples per batch.* `step`: The period, in timesteps, at which we sample data. We will set it 6 in order to draw one data point every hour. ###Code def generator(data, lookback, delay, min_index, max_index, shuffle=False, batch_size=128, step=6): if max_index is None: max_index = len(data) - delay - 1 i = min_index + lookback while 1: if shuffle: rows = np.random.randint( min_index + lookback, max_index, size=batch_size) else: if i + batch_size >= max_index: i = min_index + lookback rows = np.arange(i, min(i + batch_size, max_index)) i += len(rows) samples = np.zeros((len(rows), lookback // step, data.shape[-1])) targets = np.zeros((len(rows),)) for j, row in enumerate(rows): indices = range(rows[j] - lookback, rows[j], step) samples[j] = data[indices] targets[j] = data[rows[j] + delay][1] yield samples, targets ###Output _____no_output_____ ###Markdown Now let's use our abstract generator function to instantiate three generators, one for training, one for validation and one for testing. Each will look at different temporal segments of the original data: the training generator looks at the first 200,000 timesteps, the validation generator looks at the following 100,000, and the test generator looks at the remainder. ###Code lookback = 1440 step = 6 delay = 144 batch_size = 128 train_gen = generator(float_data, lookback=lookback, delay=delay, min_index=0, max_index=200000, shuffle=True, step=step, batch_size=batch_size) val_gen = generator(float_data, lookback=lookback, delay=delay, min_index=200001, max_index=300000, step=step, batch_size=batch_size) test_gen = generator(float_data, lookback=lookback, delay=delay, min_index=300001, max_index=None, step=step, batch_size=batch_size) # This is how many steps to draw from `val_gen` # in order to see the whole validation set: val_steps = (300000 - 200001 - lookback) // batch_size # This is how many steps to draw from `test_gen` # in order to see the whole test set: test_steps = (len(float_data) - 300001 - lookback) // batch_size ###Output _____no_output_____ ###Markdown A common sense, non-machine learning baselineBefore we start leveraging black-box deep learning models to solve our temperature prediction problem, let's try out a simple common-sense approach. It will serve as a sanity check, and it will establish a baseline that we will have to beat in order to demonstrate the usefulness of more advanced machine learning models. 
Such common-sense baselines can be very useful when approaching a new problem for which there is no known solution (yet). A classic example is that of unbalanced classification tasks, where some classes can be much more common than others. If your dataset contains 90% of instances of class A and 10% of instances of class B, then a common sense approach to the classification task would be to always predict "A" when presented with a new sample. Such a classifier would be 90% accurate overall, and any learning-based approach should therefore beat this 90% score in order to demonstrate usefulness. Sometimes such an elementary baseline can prove surprisingly hard to beat.In our case, the temperature timeseries can safely be assumed to be continuous (the temperatures tomorrow are likely to be close to the temperatures today) as well as periodic with a daily period. Thus a common sense approach would be to always predict that the temperature 24 hours from now will be equal to the temperature right now. Let's evaluate this approach, using the Mean Absolute Error metric (MAE). Mean Absolute Error is simply equal to: ###Code np.mean(np.abs(preds - targets)) ###Output _____no_output_____ ###Markdown Here's our evaluation loop: ###Code def evaluate_naive_method(): batch_maes = [] for step in range(val_steps): samples, targets = next(val_gen) preds = samples[:, -1, 1] mae = np.mean(np.abs(preds - targets)) batch_maes.append(mae) print(np.mean(batch_maes)) evaluate_naive_method() ###Output 0.2897359729905486 ###Markdown It yields an MAE of 0.29. Since our temperature data has been normalized to be centered on 0 and have a standard deviation of one, this number is not immediately interpretable. It translates to an average absolute error of `0.29 * temperature_std` degrees Celsius, i.e. 2.57˚C. That's a fairly large average absolute error -- now the game is to leverage our knowledge of deep learning to do better. A basic machine learning approachIn the same way that it is useful to establish a common sense baseline before trying machine learning approaches, it is useful to try simple and cheap machine learning models (such as small densely-connected networks) before looking into complicated and computationally expensive models such as RNNs. This is the best way to make sure that any further complexity we throw at the problem later on is legitimate and delivers real benefits.Here is a simple fully-connected model in which we start by flattening the data, then run it through two `Dense` layers. Note the lack of an activation function on the last `Dense` layer, which is typical for a regression problem. We use MAE as the loss. Since we are evaluating on the exact same data and with the exact same metric as with our common sense approach, the results will be directly comparable. 
###Code from keras.models import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.Flatten(input_shape=(lookback // step, float_data.shape[-1]))) model.add(layers.Dense(32, activation='relu')) model.add(layers.Dense(1)) model.compile(optimizer=RMSprop(), loss='mae') history = model.fit_generator(train_gen, steps_per_epoch=500, epochs=20, validation_data=val_gen, validation_steps=val_steps) ###Output Epoch 1/20 500/500 [==============================] - 9s 18ms/step - loss: 1.2750 - val_loss: 0.6896 Epoch 2/20 500/500 [==============================] - 9s 18ms/step - loss: 0.4567 - val_loss: 0.3376 Epoch 3/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2940 - val_loss: 0.3040 Epoch 4/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2692 - val_loss: 0.3033 Epoch 5/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2559 - val_loss: 0.3223 Epoch 6/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2443 - val_loss: 0.3079 Epoch 7/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2367 - val_loss: 0.3383 Epoch 8/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2323 - val_loss: 0.3113 Epoch 9/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2283 - val_loss: 0.3395 Epoch 10/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2217 - val_loss: 0.3133 Epoch 11/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2190 - val_loss: 0.3654 Epoch 12/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2161 - val_loss: 0.3193 Epoch 13/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2127 - val_loss: 0.3384 Epoch 14/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2103 - val_loss: 0.3205 Epoch 15/20 500/500 [==============================] - 9s 19ms/step - loss: 0.2088 - val_loss: 0.3530 Epoch 16/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2063 - val_loss: 0.3244 Epoch 17/20 500/500 [==============================] - 9s 19ms/step - loss: 0.2019 - val_loss: 0.3302 Epoch 18/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2015 - val_loss: 0.3384 Epoch 19/20 500/500 [==============================] - 9s 18ms/step - loss: 0.2012 - val_loss: 0.3614 Epoch 20/20 500/500 [==============================] - 9s 19ms/step - loss: 0.1986 - val_loss: 0.3446 ###Markdown Let's display the loss curves for validation and training: ###Code import matplotlib.pyplot as plt loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Some of our validation losses get close to the no-learning baseline, but not very reliably. This goes to show the merit of having had this baseline in the first place: it turns out not to be so easy to outperform. Our common sense contains already a lot of valuable information that a machine learning model does not have access to.You may ask, if there exists a simple, well-performing model to go from the data to the targets (our common sense baseline), why doesn't the model we are training find it and improve on it? Simply put: because this simple solution is not what our training setup is looking for. 
The space of models in which we are searching for a solution, i.e. our hypothesis space, is the space of all possible 2-layer networks with the configuration that we defined. These networks are already fairly complicated. When looking for a solution with a space of complicated models, the simple well-performing baseline might be unlearnable, even if it's technically part of the hypothesis space. That is a pretty significant limitation of machine learning in general: unless the learning algorithm is hard-coded to look for a specific kind of simple model, parameter learning can sometimes fail to find a simple solution to a simple problem. A first recurrent baselineOur first fully-connected approach didn't do so well, but that doesn't mean machine learning is not applicable to our problem. The approach above consisted of first flattening the timeseries, which removed the notion of time from the input data. Let us instead look at our data as what it is: a sequence, where causality and order matter. We will try a recurrent sequence processing model -- it should be the perfect fit for such sequence data, precisely because it does exploit the temporal ordering of data points, unlike our first approach.Instead of the `LSTM` layer introduced in the previous section, we will use the `GRU` layer, developed by Cho et al. in 2014. `GRU` layers (GRU stands for "gated recurrent unit") work by leveraging the same principle as LSTM, but they are somewhat streamlined and thus cheaper to run, although they may not have quite as much representational power as LSTM. This trade-off between computational expensiveness and representational power is seen everywhere in machine learning. ###Code from keras.models import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.GRU(32, input_shape=(None, float_data.shape[-1]))) model.add(layers.Dense(1)) model.compile(optimizer=RMSprop(), loss='mae') history = model.fit_generator(train_gen, steps_per_epoch=500, epochs=20, validation_data=val_gen, validation_steps=val_steps) ###Output Epoch 1/20 500/500 [==============================] - 115s 230ms/step - loss: 0.2949 - val_loss: 0.2707 Epoch 2/20 500/500 [==============================] - 118s 236ms/step - loss: 0.2833 - val_loss: 0.2681 Epoch 3/20 500/500 [==============================] - 115s 230ms/step - loss: 0.2758 - val_loss: 0.2712 Epoch 4/20 499/500 [============================>.] - ETA: 0s - loss: 0.2717 ###Markdown Let's look at our results: ###Code loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Much better! We are able to significantly beat the common sense baseline, thus demonstrating the value of machine learning here, as well as the superiority of recurrent networks compared to sequence-flattening dense networks on this type of task.Our new validation MAE of ~0.265 (before we start significantly overfitting) translates to a mean absolute error of 2.35˚C after de-normalization. That's a solid gain on our initial error of 2.57˚C, but we probably still have a bit of margin for improvement. 
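###Markdown To make the de-normalization step explicit, the normalized MAE values quoted above can be converted back to degrees Celsius using the `std` array computed earlier (temperature is column index 1 of `float_data`). The two MAE values below are simply read off the runs above, so this cell is only a small arithmetic check. ###Code # Temperature is column index 1, so std[1] is the temperature standard deviation
# in degrees Celsius (about 8.9 for this training fraction, given the figures above).
temperature_std = std[1]

naive_mae = 0.29   # common-sense baseline MAE, from the evaluation loop above
gru_mae = 0.265    # GRU validation MAE before overfitting, from the run above

print('Naive baseline MAE: {:.2f} degrees Celsius'.format(naive_mae * temperature_std))
print('GRU MAE:            {:.2f} degrees Celsius'.format(gru_mae * temperature_std))
 ###Output _____no_output_____ ###Markdown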
Using recurrent dropout to fight overfittingIt is evident from our training and validation curves that our model is overfitting: the training and validation losses start diverging considerably after a few epochs. You are already familiar with a classic technique for fighting this phenomenon: dropout, consisting in randomly zeroing-out input units of a layer in order to break happenstance correlations in the training data that the layer is exposed to. How to correctly apply dropout in recurrent networks, however, is not a trivial question. It has long been known that applying dropout before a recurrent layer hinders learning rather than helping with regularization. In 2015, Yarin Gal, as part of his Ph.D. thesis on Bayesian deep learning, determined the proper way to use dropout with a recurrent network: the same dropout mask (the same pattern of dropped units) should be applied at every timestep, instead of a dropout mask that would vary randomly from timestep to timestep. What's more: in order to regularize the representations formed by the recurrent gates of layers such as GRU and LSTM, a temporally constant dropout mask should be applied to the inner recurrent activations of the layer (a "recurrent" dropout mask). Using the same dropout mask at every timestep allows the network to properly propagate its learning error through time; a temporally random dropout mask would instead disrupt this error signal and be harmful to the learning process.Yarin Gal did his research using Keras and helped build this mechanism directly into Keras recurrent layers. Every recurrent layer in Keras has two dropout-related arguments: `dropout`, a float specifying the dropout rate for input units of the layer, and `recurrent_dropout`, specifying the dropout rate of the recurrent units. Let's add dropout and recurrent dropout to our GRU layer and see how it impacts overfitting. Because networks being regularized with dropout always take longer to fully converge, we train our network for twice as many epochs. ###Code from keras.models import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.GRU(32, dropout=0.2, recurrent_dropout=0.2, input_shape=(None, float_data.shape[-1]))) model.add(layers.Dense(1)) model.compile(optimizer=RMSprop(), loss='mae') history = model.fit_generator(train_gen, steps_per_epoch=500, epochs=40, validation_data=val_gen, validation_steps=val_steps) loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Great success; we are no longer overfitting during the first 30 epochs. However, while we have more stable evaluation scores, our best scores are not much lower than they were previously. Stacking recurrent layersSince we are no longer overfitting yet we seem to have hit a performance bottleneck, we should start considering increasing the capacity of our network. If you remember our description of the "universal machine learning workflow": it is a generally a good idea to increase the capacity of your network until overfitting becomes your primary obstacle (assuming that you are already taking basic steps to mitigate overfitting, such as using dropout). 
As long as you are not overfitting too badly, then you are likely under-capacity.Increasing network capacity is typically done by increasing the number of units in the layers, or adding more layers. Recurrent layer stacking is a classic way to build more powerful recurrent networks: for instance, what currently powers the Google translate algorithm is a stack of seven large LSTM layers -- that's huge.To stack recurrent layers on top of each other in Keras, all intermediate layers should return their full sequence of outputs (a 3D tensor) rather than their output at the last timestep. This is done by specifying `return_sequences=True`: ###Code from keras.models import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5, return_sequences=True, input_shape=(None, float_data.shape[-1]))) model.add(layers.GRU(64, activation='relu', dropout=0.1, recurrent_dropout=0.5)) model.add(layers.Dense(1)) model.compile(optimizer=RMSprop(), loss='mae') history = model.fit_generator(train_gen, steps_per_epoch=500, epochs=40, validation_data=val_gen, validation_steps=val_steps) ###Output Epoch 1/40 500/500 [==============================] - 346s - loss: 0.3341 - val_loss: 0.2780 Epoch 2/40 500/500 [==============================] - 344s - loss: 0.3125 - val_loss: 0.2754 Epoch 3/40 500/500 [==============================] - 344s - loss: 0.3045 - val_loss: 0.2696 Epoch 4/40 500/500 [==============================] - 344s - loss: 0.3018 - val_loss: 0.2747 Epoch 5/40 500/500 [==============================] - 344s - loss: 0.2957 - val_loss: 0.2690 Epoch 6/40 500/500 [==============================] - 344s - loss: 0.2923 - val_loss: 0.2692 Epoch 7/40 500/500 [==============================] - 344s - loss: 0.2907 - val_loss: 0.2673 Epoch 8/40 500/500 [==============================] - 343s - loss: 0.2879 - val_loss: 0.2690 Epoch 9/40 500/500 [==============================] - 343s - loss: 0.2866 - val_loss: 0.2743 Epoch 10/40 500/500 [==============================] - 344s - loss: 0.2833 - val_loss: 0.2669 Epoch 11/40 500/500 [==============================] - 344s - loss: 0.2825 - val_loss: 0.2669 Epoch 12/40 500/500 [==============================] - 344s - loss: 0.2822 - val_loss: 0.2700 Epoch 13/40 500/500 [==============================] - 345s - loss: 0.2785 - val_loss: 0.2698 Epoch 14/40 500/500 [==============================] - 345s - loss: 0.2775 - val_loss: 0.2634 Epoch 15/40 500/500 [==============================] - 344s - loss: 0.2778 - val_loss: 0.2653 Epoch 16/40 500/500 [==============================] - 344s - loss: 0.2740 - val_loss: 0.2633 Epoch 17/40 500/500 [==============================] - 344s - loss: 0.2746 - val_loss: 0.2680 Epoch 18/40 500/500 [==============================] - 344s - loss: 0.2731 - val_loss: 0.2649 Epoch 19/40 500/500 [==============================] - 345s - loss: 0.2709 - val_loss: 0.2699 Epoch 20/40 500/500 [==============================] - 344s - loss: 0.2693 - val_loss: 0.2655 Epoch 21/40 500/500 [==============================] - 345s - loss: 0.2679 - val_loss: 0.2654 Epoch 22/40 500/500 [==============================] - 344s - loss: 0.2677 - val_loss: 0.2731 Epoch 23/40 500/500 [==============================] - 345s - loss: 0.2672 - val_loss: 0.2680 Epoch 24/40 500/500 [==============================] - 345s - loss: 0.2648 - val_loss: 0.2669 Epoch 25/40 500/500 [==============================] - 345s - loss: 0.2645 - val_loss: 0.2655 Epoch 26/40 500/500 
[==============================] - 344s - loss: 0.2648 - val_loss: 0.2673 Epoch 27/40 500/500 [==============================] - 344s - loss: 0.2624 - val_loss: 0.2694 Epoch 28/40 500/500 [==============================] - 344s - loss: 0.2624 - val_loss: 0.2698 Epoch 29/40 500/500 [==============================] - 344s - loss: 0.2602 - val_loss: 0.2765 Epoch 30/40 500/500 [==============================] - 344s - loss: 0.2596 - val_loss: 0.2795 Epoch 31/40 500/500 [==============================] - 344s - loss: 0.2598 - val_loss: 0.2688 Epoch 32/40 500/500 [==============================] - 344s - loss: 0.2590 - val_loss: 0.2724 Epoch 33/40 500/500 [==============================] - 344s - loss: 0.2581 - val_loss: 0.2754 Epoch 34/40 500/500 [==============================] - 344s - loss: 0.2570 - val_loss: 0.2688 Epoch 35/40 500/500 [==============================] - 344s - loss: 0.2559 - val_loss: 0.2753 Epoch 36/40 500/500 [==============================] - 345s - loss: 0.2552 - val_loss: 0.2719 Epoch 37/40 500/500 [==============================] - 344s - loss: 0.2552 - val_loss: 0.2745 Epoch 38/40 500/500 [==============================] - 344s - loss: 0.2537 - val_loss: 0.2761 Epoch 39/40 500/500 [==============================] - 344s - loss: 0.2546 - val_loss: 0.2793 Epoch 40/40 500/500 [==============================] - 345s - loss: 0.2532 - val_loss: 0.2782 ###Markdown Let's take a look at our results: ###Code loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown We can see that the added layers does improve ours results by a bit, albeit not very significantly. We can draw two conclusions:* Since we are still not overfitting too badly, we could safely increase the size of our layers, in quest for a bit of validation loss improvement. This does have a non-negligible computational cost, though. * Since adding a layer did not help us by a significant factor, we may be seeing diminishing returns to increasing network capacity at this point. Using bidirectional RNNsThe last technique that we will introduce in this section is called "bidirectional RNNs". A bidirectional RNN is common RNN variant which can offer higher performance than a regular RNN on certain tasks. It is frequently used in natural language processing -- you could call it the Swiss army knife of deep learning for NLP.RNNs are notably order-dependent, or time-dependent: they process the timesteps of their input sequences in order, and shuffling or reversing the timesteps can completely change the representations that the RNN will extract from the sequence. This is precisely the reason why they perform well on problems where order is meaningful, such as our temperature forecasting problem. A bidirectional RNN exploits the order-sensitivity of RNNs: it simply consists of two regular RNNs, such as the GRU or LSTM layers that you are already familiar with, each processing input sequence in one direction (chronologically and antichronologically), then merging their representations. 
By processing a sequence both way, a bidirectional RNN is able to catch patterns that may have been overlooked by a one-direction RNN.Remarkably, the fact that the RNN layers in this section have so far processed sequences in chronological order (older timesteps first) may have been an arbitrary decision. At least, it's a decision we made no attempt at questioning so far. Could it be that our RNNs could have performed well enough if it were processing input sequences in antichronological order, for instance (newer timesteps first)? Let's try this in practice and see what we get. All we need to do is write a variant of our data generator, where the input sequences get reverted along the time dimension (replace the last line with `yield samples[:, ::-1, :], targets`). Training the same one-GRU-layer network as we used in the first experiment in this section, we get the following results: ###Code def reverse_order_generator(data, lookback, delay, min_index, max_index, shuffle=False, batch_size=128, step=6): if max_index is None: max_index = len(data) - delay - 1 i = min_index + lookback while 1: if shuffle: rows = np.random.randint( min_index + lookback, max_index, size=batch_size) else: if i + batch_size >= max_index: i = min_index + lookback rows = np.arange(i, min(i + batch_size, max_index)) i += len(rows) samples = np.zeros((len(rows), lookback // step, data.shape[-1])) targets = np.zeros((len(rows),)) for j, row in enumerate(rows): indices = range(rows[j] - lookback, rows[j], step) samples[j] = data[indices] targets[j] = data[rows[j] + delay][1] yield samples[:, ::-1, :], targets train_gen_reverse = reverse_order_generator( float_data, lookback=lookback, delay=delay, min_index=0, max_index=200000, shuffle=True, step=step, batch_size=batch_size) val_gen_reverse = reverse_order_generator( float_data, lookback=lookback, delay=delay, min_index=200001, max_index=300000, step=step, batch_size=batch_size) model = Sequential() model.add(layers.GRU(32, input_shape=(None, float_data.shape[-1]))) model.add(layers.Dense(1)) model.compile(optimizer=RMSprop(), loss='mae') history = model.fit_generator(train_gen_reverse, steps_per_epoch=500, epochs=20, validation_data=val_gen_reverse, validation_steps=val_steps) loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown So the reversed-order GRU strongly underperforms even the common-sense baseline, indicating that the in our case chronological processing is very important to the success of our approach. This makes perfect sense: the underlying GRU layer will typically be better at remembering the recent past than the distant past, and naturally the more recent weather data points are more predictive than older data points in our problem (that's precisely what makes the common-sense baseline a fairly strong baseline). Thus the chronological version of the layer is bound to outperform the reversed-order version. Importantly, this is generally not true for many other problems, including natural language: intuitively, the importance of a word in understanding a sentence is not usually dependent on its position in the sentence. 
Let's try the same trick on the LSTM IMDB example from the previous section: ###Code from keras.datasets import imdb from keras.preprocessing import sequence from keras import layers from keras.models import Sequential # Number of words to consider as features max_features = 10000 # Cut texts after this number of words (among top max_features most common words) maxlen = 500 # Load data (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features) # Reverse sequences x_train = [x[::-1] for x in x_train] x_test = [x[::-1] for x in x_test] # Pad sequences x_train = sequence.pad_sequences(x_train, maxlen=maxlen) x_test = sequence.pad_sequences(x_test, maxlen=maxlen) model = Sequential() model.add(layers.Embedding(max_features, 128)) model.add(layers.LSTM(32)) model.add(layers.Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc']) history = model.fit(x_train, y_train, epochs=10, batch_size=128, validation_split=0.2) ###Output Train on 20000 samples, validate on 5000 samples Epoch 1/10 20000/20000 [==============================] - 111s - loss: 0.4965 - acc: 0.7648 - val_loss: 0.3593 - val_acc: 0.8570 Epoch 2/10 20000/20000 [==============================] - 107s - loss: 0.3105 - acc: 0.8810 - val_loss: 0.3329 - val_acc: 0.8648 Epoch 3/10 20000/20000 [==============================] - 105s - loss: 0.2566 - acc: 0.9057 - val_loss: 0.3863 - val_acc: 0.8770 Epoch 4/10 20000/20000 [==============================] - 106s - loss: 0.2231 - acc: 0.9195 - val_loss: 0.3471 - val_acc: 0.8556 Epoch 5/10 20000/20000 [==============================] - 105s - loss: 0.1912 - acc: 0.9314 - val_loss: 0.3346 - val_acc: 0.8694 Epoch 6/10 20000/20000 [==============================] - 105s - loss: 0.1721 - acc: 0.9379 - val_loss: 0.3621 - val_acc: 0.8520 Epoch 7/10 20000/20000 [==============================] - 105s - loss: 0.1613 - acc: 0.9427 - val_loss: 0.3438 - val_acc: 0.8694 Epoch 8/10 20000/20000 [==============================] - 105s - loss: 0.1502 - acc: 0.9503 - val_loss: 0.3890 - val_acc: 0.8588 Epoch 9/10 20000/20000 [==============================] - 105s - loss: 0.1369 - acc: 0.9520 - val_loss: 0.3626 - val_acc: 0.8768 Epoch 10/10 20000/20000 [==============================] - 105s - loss: 0.1249 - acc: 0.9579 - val_loss: 0.4639 - val_acc: 0.8566 ###Markdown We get near-identical performance as the chronological-order LSTM we tried in the previous section.Thus, remarkably, on such a text dataset, reversed-order processing works just as well as chronological processing, confirming our hypothesis that, albeit word order *does* matter in understanding language, *which* order you use isn't crucial. Importantly, a RNN trained on reversed sequences will learn different representations than one trained on the original sequences, in much the same way that you would have quite different mental models if time flowed backwards in the real world -- if you lived a life where you died on your first day and you were born on your last day. In machine learning, representations that are *different* yet *useful* are always worth exploiting, and the more they differ the better: they offer a new angle from which to look at your data, capturing aspects of the data that were missed by other approaches, and thus they can allow to boost performance on a task. 
This is the intuition behind "ensembling", a concept that we will introduce in the next chapter.A bidirectional RNN exploits this idea to improve upon the performance of chronological-order RNNs: it looks at its inputs sequence both ways, obtaining potentially richer representations and capturing patterns that may have been missed by the chronological-order version alone. ![bidirectional rnn](https://s3.amazonaws.com/book.keras.io/img/ch6/bidirectional_rnn.png) To instantiate a bidirectional RNN in Keras, one would use the `Bidirectional` layer, which takes as first argument a recurrent layer instance. `Bidirectional` will create a second, separate instance of this recurrent layer, and will use one instance for processing the input sequences in chronological order and the other instance for processing the input sequences in reversed order. Let's try it on the IMDB sentiment analysis task: ###Code from keras import backend as K K.clear_session() model = Sequential() model.add(layers.Embedding(max_features, 32)) model.add(layers.Bidirectional(layers.LSTM(32))) model.add(layers.Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc']) history = model.fit(x_train, y_train, epochs=10, batch_size=128, validation_split=0.2) ###Output Train on 20000 samples, validate on 5000 samples Epoch 1/10 20000/20000 [==============================] - 214s - loss: 0.5994 - acc: 0.6865 - val_loss: 0.4722 - val_acc: 0.8090 Epoch 2/10 20000/20000 [==============================] - 213s - loss: 0.3673 - acc: 0.8543 - val_loss: 0.3769 - val_acc: 0.8448 Epoch 3/10 20000/20000 [==============================] - 213s - loss: 0.2743 - acc: 0.8972 - val_loss: 0.3196 - val_acc: 0.8688 Epoch 4/10 20000/20000 [==============================] - 211s - loss: 0.2310 - acc: 0.9150 - val_loss: 0.2972 - val_acc: 0.8856 Epoch 5/10 20000/20000 [==============================] - 211s - loss: 0.2009 - acc: 0.9261 - val_loss: 0.4461 - val_acc: 0.8514 Epoch 6/10 20000/20000 [==============================] - 210s - loss: 0.1912 - acc: 0.9339 - val_loss: 0.3636 - val_acc: 0.8640 Epoch 7/10 20000/20000 [==============================] - 209s - loss: 0.1670 - acc: 0.9423 - val_loss: 0.3476 - val_acc: 0.8580 Epoch 8/10 20000/20000 [==============================] - 210s - loss: 0.1523 - acc: 0.9469 - val_loss: 0.3887 - val_acc: 0.8830 Epoch 9/10 20000/20000 [==============================] - 209s - loss: 0.1431 - acc: 0.9506 - val_loss: 0.3781 - val_acc: 0.8810 Epoch 10/10 20000/20000 [==============================] - 209s - loss: 0.1366 - acc: 0.9521 - val_loss: 0.3713 - val_acc: 0.8792 ###Markdown It performs slightly better than the regular LSTM we tried in the previous section, going above 88% validation accuracy. It also seems to overfit faster, which is unsurprising since a bidirectional layer has twice more parameters than a chronological LSTM. 
With some regularization, the bidirectional approach would likely be a strong performer on this task.Now let's try the same approach on the weather prediction task: ###Code from keras.models import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.Bidirectional( layers.GRU(32), input_shape=(None, float_data.shape[-1]))) model.add(layers.Dense(1)) model.compile(optimizer=RMSprop(), loss='mae') history = model.fit_generator(train_gen, steps_per_epoch=500, epochs=40, validation_data=val_gen, validation_steps=val_steps) ###Output Epoch 1/40 500/500 [==============================] - 325s - loss: 0.3029 - val_loss: 0.2660 Epoch 2/40 500/500 [==============================] - 324s - loss: 0.2751 - val_loss: 0.2660 Epoch 3/40 500/500 [==============================] - 326s - loss: 0.2668 - val_loss: 0.2628 Epoch 4/40 500/500 [==============================] - 326s - loss: 0.2594 - val_loss: 0.2615 Epoch 5/40 500/500 [==============================] - 324s - loss: 0.2532 - val_loss: 0.2684 Epoch 6/40 500/500 [==============================] - 324s - loss: 0.2442 - val_loss: 0.2674 Epoch 7/40 500/500 [==============================] - 324s - loss: 0.2405 - val_loss: 0.2700 Epoch 8/40 500/500 [==============================] - 324s - loss: 0.2343 - val_loss: 0.2782 Epoch 9/40 500/500 [==============================] - 324s - loss: 0.2293 - val_loss: 0.2778 Epoch 10/40 500/500 [==============================] - 324s - loss: 0.2233 - val_loss: 0.2813 Epoch 11/40 500/500 [==============================] - 324s - loss: 0.2167 - val_loss: 0.2978 Epoch 12/40 500/500 [==============================] - 324s - loss: 0.2116 - val_loss: 0.2984 Epoch 13/40 500/500 [==============================] - 324s - loss: 0.2061 - val_loss: 0.2920 Epoch 14/40 500/500 [==============================] - 323s - loss: 0.2008 - val_loss: 0.3016 Epoch 15/40 500/500 [==============================] - 324s - loss: 0.1952 - val_loss: 0.2985 Epoch 16/40 500/500 [==============================] - 324s - loss: 0.1915 - val_loss: 0.3029 Epoch 17/40 500/500 [==============================] - 323s - loss: 0.1862 - val_loss: 0.3127 Epoch 18/40 500/500 [==============================] - 324s - loss: 0.1821 - val_loss: 0.3079 Epoch 19/40 500/500 [==============================] - 324s - loss: 0.1772 - val_loss: 0.3116 Epoch 20/40 500/500 [==============================] - 323s - loss: 0.1735 - val_loss: 0.3151 Epoch 21/40 500/500 [==============================] - 323s - loss: 0.1705 - val_loss: 0.3208 Epoch 22/40 500/500 [==============================] - 324s - loss: 0.1664 - val_loss: 0.3345 Epoch 23/40 500/500 [==============================] - 323s - loss: 0.1631 - val_loss: 0.3162 Epoch 24/40 500/500 [==============================] - 324s - loss: 0.1604 - val_loss: 0.3141 Epoch 25/40 500/500 [==============================] - 324s - loss: 0.1572 - val_loss: 0.3173 Epoch 26/40 500/500 [==============================] - 325s - loss: 0.1559 - val_loss: 0.3156 Epoch 27/40 500/500 [==============================] - 324s - loss: 0.1530 - val_loss: 0.3227 Epoch 28/40 500/500 [==============================] - 324s - loss: 0.1521 - val_loss: 0.3288 Epoch 29/40 500/500 [==============================] - 325s - loss: 0.1496 - val_loss: 0.3264 Epoch 30/40 500/500 [==============================] - 324s - loss: 0.1481 - val_loss: 0.3266 Epoch 31/40 500/500 [==============================] - 323s - loss: 0.1456 - val_loss: 0.3241 Epoch 32/40 500/500 [==============================] - 
323s - loss: 0.1436 - val_loss: 0.3293 Epoch 33/40 500/500 [==============================] - 324s - loss: 0.1426 - val_loss: 0.3301 Epoch 34/40 500/500 [==============================] - 324s - loss: 0.1409 - val_loss: 0.3298 Epoch 35/40 500/500 [==============================] - 324s - loss: 0.1399 - val_loss: 0.3372 Epoch 36/40 500/500 [==============================] - 323s - loss: 0.1387 - val_loss: 0.3304 Epoch 37/40 500/500 [==============================] - 324s - loss: 0.1388 - val_loss: 0.3324 Epoch 38/40 500/500 [==============================] - 324s - loss: 0.1362 - val_loss: 0.3317 Epoch 39/40 500/500 [==============================] - 323s - loss: 0.1342 - val_loss: 0.3319 Epoch 40/40 500/500 [==============================] - 324s - loss: 0.1350 - val_loss: 0.3289
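###Markdown The earlier experiments were each followed by a plot of the training and validation losses, while the bidirectional-GRU run above was not. A plotting cell in the same style as the previous ones would look like the sketch below; it only reuses the `history` object returned by `fit_generator` and the plotting pattern already used throughout this notebook. ###Code loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(loss))

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
 ###Output _____no_output_____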
docs_source/auto_examples/execute_recognize.ipynb
###Markdown Transcribing a single audio file================================In this example script, DanSpeech is used to transcribe the same audio file with three different outputs:- **Greedy decoding**: using no external language model.- **Beam search decoding 1**: Decoding with a language model (:meth:`language_models.DSL3gram`).- **Beam search decoding 2**: Decoding with a language model (:meth:`language_models.DSL3gram`) and returning all the beam_width most probable beams. ###Code from danspeech import Recognizer from danspeech.pretrained_models import TestModel from danspeech.language_models import DSL3gram from danspeech.audio import load_audio # Load a DanSpeech model. If the model does not exists, it will be downloaded. model = TestModel() recognizer = Recognizer(model=model) # Load the audio file. audio = load_audio(path="../example_files/u0013002.wav") print() print("No language model:") print(recognizer.recognize(audio)) # DanSpeech with a language model. # Note: Requires ctcdecode to work! try: lm = DSL3gram() recognizer.update_decoder(lm=lm, alpha=1.2, beta=0.15, beam_width=10) except ImportError: print("ctcdecode not installed. Using greedy decoding.") print() print("Single transcription:") print(recognizer.recognize(audio, show_all=False)) print() beams = recognizer.recognize(audio, show_all=True) print("Most likely beams:") for beam in beams: print(beam) ###Output _____no_output_____
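###Markdown If several recordings need to be transcribed with the same decoder settings, the two calls above (`load_audio` and `recognizer.recognize`) can simply be wrapped in a loop. This is only a sketch: the file list below is a placeholder assumption, and only the first entry corresponds to a file actually used in this example. ###Code # Hypothetical list of recordings; replace with whatever wav files are available.
audio_paths = [
    "../example_files/u0013002.wav",
]

for wav_path in audio_paths:
    audio = load_audio(path=wav_path)
    transcription = recognizer.recognize(audio, show_all=False)
    print(wav_path)
    print(transcription)
    print()
 ###Output _____no_output_____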
scripts/new-dvmdostem-outputs.ipynb
###Markdown Setup File Structure ###Code import netCDF4 import numpy as np try: ncfile.close() except: pass ncfile = netCDF4.Dataset("new-dvmdostem-output.nc", mode="w", format='NETCDF4') # Dimensions for the file. time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to). community_type = ncfile.createDimension('community_type', 10) pft = ncfile.createDimension('pft', 10) y = ncfile.createDimension('x', 10) x = ncfile.createDimension('y', 10) # Coordinate Variables x = ncfile.createVariable('x', np.int, ('x',)) # x,y are pixel coords in 2D (spatial?) image y = ncfile.createVariable('y', np.int, ('y',)) community_type = ncfile.createVariable('community_type', np.int, ('y','x')) # Spatial Reference Variables? lat = ncfile.createVariable('lat', np.float32, ('y', 'x',)) lon = ncfile.createVariable('lon', np.float32, ('y', 'x',)) # Add space/time variables... grow_start = ncfile.createVariable('grow_start', np.int, ('time', 'y', 'x',)) # day of year grow_end = ncfile.createVariable('grow_end', np.int, ('time', 'y', 'x')) org_shlw_thickness = ncfile.createVariable('org_shlw_thickness', np.float32, ('time', 'y', 'x')) # Need to add all these: # 1 //OSHLWDZ - (23) shallow fibrous organic soil horizon thickness (m) # 1 //ODEEPDZ - (24) deep amorphous organic soil horizon thickness (m) # 1 //MINEADZ - (25) upper minearal soil horizon thickness (m) # 1 //MINEBDZ - (26) middel mineral soil horizon thickness (m) # 1 //MINECDZ - (27) lower mineral soil horizon thickness (m) # 1 //OSHLWC - (28) SOM C in firbrous soil horizon (gC/m2) # 1 //ODEEPC - (29) SOM C in amorphous soil horizon (gC/m2) # 1 //MINEAC - (30) SOM C in upper mineral soil horizon (gC/m2) # 1 //MINEBC - (31) SOM C in middle mineral soil horizon (gC/m2) # 1 //MINECC - (32) SOM C in lower mineral soil horizon (gC/m2) # 1 //ORGN - (33) total soil organic N (gN/m2) # 2 //AVLN - (35) total soil mineral N (gN/m2) # Add more complicated time/cmt type/PFT/Y/X variables... veg_fraction = ncfile.createVariable('veg_fraction', np.float32, ('time','community_type','pft','y','x')) vegc = ncfile.createVariable('vegc', np.float64, ('time','community_type','pft','y','x')) # Need to add all these: # 1 //VEGFRAC - (3) each pft's land coverage fraction (m2/m2) # 1 //VEGAGE - (4) each pft's age (years) # 2 //LAI - (5) each pft's LAI (m2/m2) # 2 //VEGC - (6) each pft's total veg. biomass C (gC/m2) # 2 //LEAFC - (7) each pft's leaf biomass C (gC/m2) # 2 //STEMC - (8) each pft's stem biomass C (gC/m2) # 2 //ROOTC - (9) each pft's root biomass C (gC/m2) # 2 //VEGN - (10) each pft's total veg. biomass N (gC/m2) # 2 //LABN - (11) each pft's labile N (gN/m2) # 2 //LEAFN - (12) each pft's leaf structural N (gN/m2) # 2 //STEMN - (13) each pft's stem structural N (gN/m2) # Add some random data to the vegC variable so we can check it # out with ncview and see if the dimensions "make sense" vegc[:,:,:,:,:] = np.reshape(np.random.uniform(0, 1, 40000), (4,10,10,10,10)) # vegc[time, cmt, pft, y, x] print "NetCDF File Dimensions:" for dim in ncfile.dimensions.items(): print " -->", dim ncfile.close() print("") !ncdump -h new-dvmdostem-output.nc ###Output _____no_output_____
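###Markdown To verify the structure that was just written, the file can be reopened read-only with the same netCDF4 library and inspected from Python, complementing the `ncdump` call above. This is only a quick sanity-check sketch. ###Code import netCDF4

# Reopen the file read-only and inspect what was written
ds = netCDF4.Dataset("new-dvmdostem-output.nc", mode='r')

print("Dimensions: {}".format([(name, len(dim)) for name, dim in ds.dimensions.items()]))
print("Variables: {}".format(list(ds.variables.keys())))

# vegc was filled with random data above; confirm its dimension ordering and shape
vegc = ds.variables['vegc']
print("vegc dimensions: {}".format(vegc.dimensions))
print("vegc shape: {}".format(vegc.shape))

ds.close()
 ###Output _____no_output_____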
practice/courses/Sequences, Time Series and Predicion/week1/Week_1_Exercise_Question.ipynb
###Markdown Now that we have the time series, let's split it so we can start forecasting ###Code split_time = 1100 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] plt.figure(figsize=(10, 6)) plot_series(time_train, x_train) plt.show() plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plt.show() # EXPECTED OUTPUT # Chart WITH 4 PEAKS between 50 and 65 and 3 troughs between -12 and 0 # Chart with 2 Peaks, first at slightly above 60, last at a little more than that, should also have a single trough at about 0 ###Output _____no_output_____ ###Markdown Naive Forecast ###Code naive_forecast = series[split_time - 1:-1] naive_forecast.shape time_valid.shape plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, naive_forecast) # Expected output: Chart similar to above, but with forecast overlay ###Output _____no_output_____ ###Markdown Let's zoom in on the start of the validation period: ###Code plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid, start = 0, end = 150) plot_series(time_valid, naive_forecast, start = 0, end = 150) # EXPECTED - Chart with X-Axis from 1100-1250 and Y Axes with series value and projections. Projections should be time stepped 1 unit 'after' series ###Output _____no_output_____ ###Markdown Now let's compute the mean squared error and the mean absolute error between the forecasts and the predictions in the validation period: ###Code print(keras.metrics.mean_squared_error(x_valid, naive_forecast).numpy()) print(keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy()) # Expected Output # 19.578304 # 2.6011968 ###Output 19.578304 2.6011972 ###Markdown That's our baseline, now let's try a moving average: ###Code def moving_average_forecast(series, window_size): """Forecasts the mean of the last few values. If window_size=1, then this is equivalent to naive forecast""" forecast = [] for time in range(len(series) - window_size): forecast.append(series[time:time + window_size].mean()) return np.array(forecast) window = 30 moving_avg = moving_average_forecast(series, window)[split_time - window:] moving_avg.shape window = 30 moving_avg = moving_average_forecast(series, window)[split_time - window:] plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, moving_avg) # EXPECTED OUTPUT # CHart with time series from 1100->1450+ on X # Time series plotted # Moving average plotted over it print(keras.metrics.mean_squared_error(x_valid, moving_avg).numpy()) print(keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy()) # EXPECTED OUTPUT # 65.786224 # 4.3040023 diff_series = (series[365:] - series[:-365]) # basically do a shift and subtract one year from the other diff_time = time[365:] plt.figure(figsize=(10, 6)) plot_series(diff_time, diff_series) plt.show() # EXPECETED OUTPUT: CHart with diffs ###Output _____no_output_____ ###Markdown Great, the trend and seasonality seem to be gone, so now we can use the moving average: ###Code diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:] plt.figure(figsize=(10, 6)) plot_series(time_valid, diff_series[split_time - 365:]) plot_series(time_valid, diff_moving_avg) plt.show() # Expected output. 
Diff chart from 1100->1450 + # Overlaid with moving average ###Output _____no_output_____ ###Markdown Now let's bring back the trend and seasonality by adding the past values from t – 365: ###Code diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, diff_moving_avg_plus_past) plt.show() # Expected output: Chart from 1100->1450+ on X. Same chart as earlier for time series, but projection overlaid looks close in value to it print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_past).numpy()) print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy()) # EXPECTED OUTPUT # 8.498155 # 2.327179 ###Output 8.498155 2.327179 ###Markdown Better than naive forecast, good. However the forecasts look a bit too random, because we're just adding past values, which were noisy. Let's use a moving averaging on past values to remove some of the noise: ###Code diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-360], 10) + diff_moving_avg plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, diff_moving_avg_plus_smooth_past) plt.show() # EXPECTED OUTPUT: # Similar chart to above, but the overlaid projections are much smoother print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_smooth_past).numpy()) print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy()) # EXPECTED OUTPUT # 12.527958 # 2.2034433 ###Output 12.527956 2.2034435
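###Markdown Since every candidate forecast in this exercise is scored with the same two Keras metrics, a small helper keeps the two calls in one place and makes the approaches easy to compare side by side. The helper name below is chosen here for illustration; it only reuses arrays already defined in this notebook. ###Code def evaluate_forecast(name, actual, forecast):
    # Same two metrics used throughout this exercise
    mse = keras.metrics.mean_squared_error(actual, forecast).numpy()
    mae = keras.metrics.mean_absolute_error(actual, forecast).numpy()
    print("{}: MSE = {:.3f}, MAE = {:.3f}".format(name, mse, mae))

evaluate_forecast("naive", x_valid, naive_forecast)
evaluate_forecast("moving average", x_valid, moving_avg)
evaluate_forecast("diff + past values", x_valid, diff_moving_avg_plus_past)
evaluate_forecast("diff + smoothed past", x_valid, diff_moving_avg_plus_smooth_past)
 ###Output _____no_output_____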